Moral Complexities of Student Question-Asking in Classroom Practice
Prior research on student question-asking has primarily been conducted from a cognitive, epistemological standpoint. In contrast, we present a hermeneutic-phenomenological investigation that emphasizes the moral-practical context in which question-asking functions as a situated way of being in the midst of practice. More particularly, we present a hermeneutic study of student question-asking in a graduate seminar on design theory (i.e., a seminar focused on theory and philosophy of design, emphasizing the work of design scholars such as Simon, Cross, Krippendorff, and Lawson). The study offers a unique moral-practical perspective on this commonly studied phenomenon. Our analysis yielded four themes regarding the moral-practical intricacies of question-asking in this setting, with a particular focus on time-related constraints on participation, various types of background understanding, and value-laden expectations that participants encountered in this complex ecology of practice.
Introduction
Student question-asking has traditionally been studied in contexts of learning, cognition, and various domains of education. By and large, scholarship in these areas has been informed by enlightenment assumptions regarding knowing, mind, and world; these assumptions frame question-asking almost exclusively as a process of knowledge gathering, one of filling mental space with needed content (information, sensory experience, representations, etc.). Within this tradition, questioning is treated primarily as an epistemological, rather than ontological or axiological, phenomenon. In contrast, the hermeneutic study we present here situates question-asking in an intrinsically moral-practical space, allowing it to be revealed in a unique and, we suggest, insightful way. To set the stage for this study we offer a brief review of literature on question-asking, followed by a description of the hermeneutic moral-realist interpretive frame upon which this investigation was based.
Prior Research on Question-Asking
Many studies of student question-asking focus on cognitive mechanisms such as representational structures theorized to exist in the mind, along with the computational procedures that operate on those structures (Chin & Osborne, 2008; Otero & Graesser, 2001; Tsui, 1992). Related research has focused on information search strategies (Mosher & Hornsby, 1968), sequencing of information (Robertson & Swartz, 1988), use of schemata and scripts (Flammer et al., 1981), and domain knowledge and heuristics (Schraagen, 1993; Siegler, 1977). Research on responses to questions has studied questions as the cause of some effect, usually involving tasks such as gathering information (Dreher & Brown, 1993), constructing answers (Nelson-LeGall & Glor-Scheib, 1985; Newman, 1991), and processing answers (Van der Meij, 1990). Research in this area has also investigated the relationship between questions, understanding, and learning styles (Pedrosa de Jesus et al., 2004).
Though still focused on knowledge-gathering per se, other forms of inquiry have examined the grammatical and semantic structures of questions. This general approach has focused on the syntax and semantics of questions in order to create a metalanguage or system of symbols that allow questions to be abstracted, categorized, evaluated, and connected to answers (Belnap & Steel, 1977). Such research has, in this way, often focused on the underlying logical structure of questions (Prior & Prior, 1955). Based on such inquiry, researchers have contended that the question-answer relation fundamentally underlies all reasoning, both valid and invalid (Koralus & Mascarenhas, 2013).
Other investigations have studied questions as an impetus to learning, largely focusing on strategies to elicit questions that allow students to acquire content (Chin & Osborne, 2008; Harper, Etkina, & Lin, 2003; King, 1994; Wong, 1985). These studies are based on the assumption that students can be trained to ask questions that lead to increased learning and literacy (Davey & McBride, 1986; King, 1989; Singer & Donlan, 1982). Strategies for improving the effectiveness of questions include practical advice in addition to more theoretically derived alternatives focused on text-based comprehension, science-based strategies, problem-based learning, and socially situated initiatives (see Gong, 2018 for a review). As these trends in the literature suggest, student questions are typically studied in cognitive and logic-centered terms; they are commonly conceptualized as part of a dualist, representationalist epistemological apparatus of some form. Moreover, the studies that make up these trends are quantitative in nature, typically emphasizing statistical relations among variables.
By contrast, a few studies have investigated student question-asking by way of qualitative research approaches. Some (Van Zee, 2000; Van Zee et al., 2001) have utilized ethnography of communication to explore the experiences of teachers as they strove to elicit student questioning and to document the types of questions that students generated based on these elicitations, while another (Volkman, 2004) used van Manen's (1990) hermeneutic phenomenology to explore a graduate student's experience as she learned how to respond to undergraduate student questions in a physics course. Whereas these qualitative studies have primarily emphasized the instructor's perspective and experience, another (Rop, 2003) used ethnography to explore the experiences of several high school students who commonly asked questions in their classes. In this latter study, the author offered insight into the challenges that sometimes accompany student question-asking in the classroom, for example, social pressure from teachers and other students to ask fewer questions.
Finally, in a previously published qualitative report, we explored student question-asking using the same hermeneutic moral realist interpretive frame that we describe below. Our findings in the previous report, based on this interpretive frame, pointed to the issue of how student question-asking functioned as a way of contributing, or not, to the common good of the class we investigated. Based on our analysis, student question-asking sometimes facilitated learning, sometimes occluded it, and, at times, created challenging classroom dynamics, including tensions among class members.
While these qualitative reports were revealing in some ways, they constitute a small proportion of the student question-asking literature in education and are significantly outstripped in frequency by quantitative investigations of cognitive and logic-centric phenomena. It should come as little surprise that this would be the case, given the widespread acceptance of various representationalist-cognitive models over the past sixty years. However, in light of the many critiques of multiple generations of cognitive science (e.g., Dreyfus, 1992; Spackman & Yanchar, 2014; Suchman, 1987; Wheeler, 2005) and concomitant experimental research paradigms (Yuille, 1986), efforts to provide more experiential, qualitative studies conducted from a primarily non-cognitive, non-epistemological standpoint seem warranted. What aspects of student question-asking in the classroom, it might be asked, can be revealed by inquiries that continue to explore this phenomenon from novel, experiential perspectives?
A Hermeneutic Moral Realist Perspective on Question-Asking
As a further exploration of this possibility, we report here an investigation of question-asking as a lived, in-the-world practice, rooted in the hermeneutic-phenomenological writings of Heidegger (1962, 1971), Gadamer (1989), Taylor (1985, 1989), and Dreyfus (1992, 2014). Whereas traditional cognitive accounts are informed by assumptions such as (but not limited to) dualism (i.e., a mind-body split), representationalism (e.g., mental schemas, scripts, networks), and mechanism (i.e., determinant structures and operations that govern information processing), the hermeneutic basis of our inquiry was informed by assumptions such as participation-in-the-world as a unitary phenomenon and understanding and interpretation as projecting and pressing into possibilities. These assumptions provide the basis for a non-dualist, non-representationalist account of human agency (for more on hermeneutic accounts of agency, see Guignon, 2002; Martin et al., 2003; Yanchar, 2011).
From this hermeneutic perspective, agents-in-the-world learn through a dynamic and continuous interplay of background understanding, concernful participation, interruption, exploration, and tacitization that leads to a fully-embodied and situated (but unfinalizable) kind of familiarity within a particular practice (Dreyfus, 2014; Yanchar et al., 2013). Moreover, from this perspective, questions are viewed ontologically; they flow out of one's background familiarity, concernful participation, ways of learning, and stance-taking on a given subject matter or phenomenon. As Taylor (1989) stated:

We take as basic that the human agent exists in a space of questions. And these are the questions to which our framework-definitions are answers, providing the horizon within which we know where we stand, and what meanings things have for us. (p. 29)

Questioning in this hermeneutic sense is more than merely academic. It lies at the core of what it means to be open to experience (Bingham, 2005; Gadamer, 1989; Taylor, 1989). One's question-asking is, in this sense, a way of being in the world: an expression of one's agency and a way in which the world is disclosed. In this regard, hermeneutics moves theorizing about question-asking away from the detached Cartesian model of scientific objectivity toward immersion in a world of "everyday concerns, practical involvements…inherited customs and traditions, social relations and language uses" (Hatab, 2000, p. 11). Exploring question-asking from this perspective is not a search for objectively specified generalizations that demonstrate initial states and outcomes; rather, it suggests situated, fully embodied integration and reintegration in a lived world of practical significance. Thus, from this perspective, a crucial methodological commitment involves the study of question-asking as a form of social practice in the lived spaces of everyday comportment rather than as an isolated phenomenon in abstract mental or experimental space.
With regard to student question-asking in formal educational settings, hermeneutics implies a methodological commitment to studying learners' lived experiences as they navigate the relational complexities of actual practice in a variety of academic contexts. As we will clarify below, however, a study of question-asking in lived spaces of practical involvement also implies exploration of the value-laden, morally-constituted nature of academic practice itself (Brinkmann, 2004, 2011; Stigliano, 1990; Taylor, 1989; Yanchar & Slife, 2017). As Brinkmann (2004) argued, "We cannot describe the human world adequately without at the same time describing values, goods, and reasons for action" (p. 57). Thus, studying phenomena in light of real, in-the-world moral goods and values allows researchers to take fuller account of the many-tiered, complex, and potentially conflicting layers of meaningful participation in practice. Bringing tensions among various (often tacit) moral demands to the foreground, for instance, can make visible the often-tacit moral ecology of a given practice and offer an opportunity to explore the goods and values endemic to it (Yanchar & Slife, 2017). In this sense, real-world banalities as well as challenges, complexities, accidents, excitements, and surprises create important threads for inquiry into the moral space of practice. Bringing these threads together is a central task of interpretive researchers who seek to describe and understand phenomena within what Brinkmann (2004, p. 59) called a "moral topography": a view of how values and goods are organized and embedded in social practices.
Having assumed this hermeneutic moral realist perspective, we conducted an investigation of student question-asking in a formal setting, namely, a graduate seminar on design theory in education. Our intent was to provide unique insight into this ordinary aspect of academic practice, insight that may inform not only understandings of question-asking in this kind of highly interactive educational context but also the broader meaning of question-asking as a ubiquitous human phenomenon. Given this background framing, which has not been employed in other studies of this topic (including qualitative studies), our general research questions were as follows: How does student question-asking fit into the moral configuration of goods and values in this graduate class? And what is revealed about student question-asking when studied from this perspective?
Study Overview
We used the interpretive frame described below to provide an analysis of student question-asking in a graduate class on design theory. Methodologically speaking, our investigative strategy was similar to other hermeneutic-phenomenological approaches that emphasize the explication of everyday activity and its meaning (e.g., Addison, 1992; Packer, 1985; van Manen, 1990), although our approach focused on the moral ecology surrounding student question-asking and how question-asking showed up from this perspective. Like the findings of other qualitative approaches, this study's findings can be transferred to other contexts (Lincoln & Guba, 1985) and, in that sense, offer applicable insight into other situations and help inform theorizing about this topic.
A Hermeneutic Moral Realist Interpretive Frame
As we indicated above, the conceptual framework of our investigation was based on hermeneutic moral realism, particularly as offered by Taylor (1985, 1989; see also MacIntyre, 1984), but with deeper roots in the hermeneutic-phenomenological work of Heidegger (1962, 1971) and those following in his wake (Dreyfus, 1991, 2014; Guignon, 1983; Hatab, 2000). While this type of investigation has significant overlap with other hermeneutic-phenomenological approaches, it is unique in that it specifically seeks to foreground the morally-configured, value-laden aspects of phenomena in the contexts of everyday practices. As others have contended, the formulation of novel investigative approaches is often required to produce unique insight into one's subject matter (e.g., Elsbach & Kramer, 2017; van Manen, 1990). More detailed articulations of what inquiry based on this perspective looks like can be found in the writings of some psychologists (Brinkmann, 2011; Stigliano, 1990), including the interpretive frame presented by Yanchar and Slife (2017), which we employed in the design and conduct of this study.
As a framework for inquiry, this hermeneutic-phenomenological perspective begins with practices, conceptualized as more or less cohesive ways of participating in the world, with others, with equipment, and so on, that entail some form of intrinsic good (for more on this concept of practices, see Brinkmann, 2011; MacIntyre, 1984; Stigliano, 1990; Yanchar & Slife, 2017). From this viewpoint, playing a sport in a given cultural context provides an example of a practice; it involves certain forms of participation and equipment that fit into an overall pattern, performed for certain ends that constitute its intrinsic good (e.g., the enjoyment of playing, excitement of watching, thrill of victory, and so forth). Participating in a practice may also involve extrinsic goods (e.g., monetary payment), but an extrinsic good is not definitive of the practice per se; it is, in this sense, incidental. For example, people can play a sport without any material remuneration, and often do.
Intrinsic goods are integral to practices because they give practices shape and purpose; they provide a kind of immanent rationale for a practice's existence and activities. But a practice entails more than a cohesive set of activities guided by an intrinsic good. It also entails values or expectations regarding how one ought to go about participating, or what is expected of the good participant, that is, one who pursues the intrinsic good with competence or excellence. For example, the good baseball player is knowledgeable about the game, plays skillfully, follows the rules, is a "good sport," provides a good example to younger players, and so on. In general, from this perspective, it can be said that no engagement in a practice would be possible without these reference points and goods; there would be nothing to orient participants to the practice in correct ways, no sense of what one hopes to accomplish, and no sense of what one should do.
In actual practice, participants will surely act in better or worse ways with regard to these expectations, and how one acts will be evaluated according to how well the participant lives up to these goods and reference points in particular situations. Indeed, for this reason these expectations are sometimes referred to as moral "reference points" (Smith, 2002, p. 97; see also Brinkmann, 2011; Yanchar & Slife, 2017): they offer guidance regarding how one ought to participate in a practice and pursue its intrinsic goods. This is true for any cohesive form of social interaction that entails these goods (e.g., playing games like baseball or checkers, being a teacher, physician, parent, etc.). Thus, the practical and the moral are, from this perspective, inseparable.
From this hermeneutic standpoint, these moral-practical goods and reference points are taken to be as ontologically real as practices themselves. Just as baseball and particular baseball games actually exist in the world, so too do the goods and reference points that enable baseball playing to exist as a practice. In this respect, goods and reference points are not conceptualized as subjective content stored within a putatively private, internal realm of mental life; rather, they exist in the midst of practice, in the real world of engagement and participation, and thus have a kind of moral-practical reality. What we describe here is sometimes referred to as a form of moral realism for this reason (Brinkmann, 2011); the goods and reference points that define practices are ontologically real in this sense and actually exist "out there" in the lived world of participation. For baseball players to excel, for example, they must effectively negotiate the moral configuration of goods and reference points that make participating in any particular game an instance of the practice of baseball.
This understanding of practices extends over vast ranges of human participation. In one way or another, it would seem to apply to much of what people do in the midst of everyday practical involvement. Thus, to understand people in action is to understand them in the midst of practices. And to the extent that this is the case, a satisfactory understanding of vast ranges of human participation will require an account of participation in practices which, in turn, implies understanding cast in terms of moral goods, reference points, and, as we will suggest, the challenges and complexities produced by these moral-practical demands. Inquiry into a given human phenomenon, then, would benefit from consideration of its significance and dynamics within a moral space of practice.
Class Selection and Participants
This study included eight participants involved in a graduate class on design theory and research in a school of education. This class took a philosophical approach to questions of how human-made objects and systems are designed, what approaches designers take to their tasks, how design research is conceived, and what kind of meaning is embodied in design activities and goals. It focused on the philosophical background and various design issues in fields such as communications, engineering, and computer technology, as well as education. The main course reading was Nelson and Stolterman's (2003) The Design Way. The seminar also included sources such as Simon's (1996) The Sciences of the Artificial, Krippendorff's (2006) The Semantic Turn: A New Foundation for Design, Cross's (2001) Designerly Ways of Knowing, and Lawson and Dorst's (2009) Design Expertise.
Much of the class discussion centered on proposed and evolving definitions of design in general and on problems with different design approaches, particularly tensions between theoretical concepts and practical applications. The class was assigned readings for each class period and open (unstructured) reflection papers once a week. The goal of the course was to help students survey various approaches to design and synthesize them. The final project asked each student to develop and defend their own formulation of design theory and practice.
This class was chosen for several reasons. First, it was designed by the instructor to be discussion-oriented, with an expectation that students would contribute to class dialogue by asking questions; thus, it was likely to provide rich opportunities for studying student question-asking in a formal academic setting. The instructor (pseudonym: Dr. Smith) was an effective discussion leader in the class. He came to class prepared with specific questions to focus and initiate student discussion. He typically directed questions back to group discussion and encouraged student questions and interaction, often starting class with a request for questions that students had regarding the reading for that day. At times he let the discussion range freely but would also bring the class back to the text as a basis for exploration. Second, we chose this class because students varied with respect to their familiarity with the subject matter, presumably creating an opportunity to explore the dynamics of question-asking among students with different degrees of knowledge, experience, and confidence.
After we received IRB approval, the instructor and students all agreed to participate and were included in all phases of data collection. Participants, identified by pseudonyms, are described in Table 1. Professor Smith was a fairly new faculty member in the department, with substantial professional experience in design outside of the university setting.
Harry had already graduated from the department and was taking this class as professional enrichment. He had significant experience with educational theory, but little with design theory. He was primarily interested in developing his own personal learning and teaching model.
Jacky was a doctoral student who had professional experience in curriculum design and concurrently worked part-time at the university as a curriculum design team leader. She had background in instructional design but not underlying theory, and thus took the class out of curiosity about theories of design.
Charles was a master's student, coming into the graduate program after years as a public school teacher and online course designer. He took the design theory course to fulfill part of his master's course requirements at the recommendation of his academic advisor. He came in with experience in curriculum design development but had had little acquaintance with design theory.
David was a master's degree candidate with extensive background in graphic design. He entered the classroom with a question about how his background in graphic design could be integrated and expanded by learning about design theory. He had professional graphic design experience but had done little with either design theory or instructional design.
Peter, a postgraduate with a PhD in the field, was working as a professional curriculum designer at a different university and sat in on the class electronically. He had conducted research in instructional design but had not explored the more general design theories and practices emphasized in the seminar.
Jim was a doctoral candidate with a background focus on language acquisition and extensive teaching experience in foreign languages. His hope was to teach a foreign language at the university level.
Anne was a master's degree candidate who had just received her undergraduate degree the preceding spring. Her long-range goal was teaching supply economics at a business school. She took the seminar because its description sounded interesting to her.
Thus, of the three participants who had already completed PhDs, one was "sitting in" on the course and not seeking a degree (Peter), another was taking it for credit as a work requirement (Harry), while the third was the course instructor. One participant not taking the class for credit attended at a distance electronically (Peter). Regarding participants taking the class for credit, three were master's degree candidates (Anne, Charles, and David) while two were doctoral candidates (Jacky and Jim). Of these participants taking the course for credit, one (Charles) already held a master's degree in another subfield of education.
Data Collection
Our data sources included class observations, class artifacts, and in-depth, semi-structured interviews. Class observations were conducted by one of us (SPG), who attended eight class sessions over a four-week period. Each class session was recorded, transcribed, and analyzed in order to understand ordinary question-asking interactions in class. Question-asking interactions as recorded in class transcripts were primarily used to help guide the semi-structured interviews that we conducted after observations were complete (more on this below). We interviewed each participant three times, with the exception of the course instructor, whom we interviewed twice. Each interview lasted approximately an hour.
Interview 1 (conducted by SPG) was used to gain general familiarity with our participants and their typical ways of engaging in class discussions. Interview 2 (also conducted by SPG) was used to explore specific instances of question-asking and ensuing class discussions. These interviews were highly tailored to each participant in light of their involvement in class. Interview 3 (conducted by both authors) was used to more deeply explore issues raised in the first two interviews and to obtain participants' impressions of the themes that we had developed at that point. With regard to the course instructor, we used a similar approach, though condensed into two interviews. In interview 1, one of us (SPG) explored his views on student learning and question-asking in general; in interview 2, we together queried into question-asking interchanges from this class, his impressions of our emerging themes, and a few related issues. (For examples of interview questions, see Table 2.)

Table 2
Example Interview Questions

Interview 1
• "Why are you taking this class?"
• "How does the class fit into the bigger picture of your studies or purposes?"
• "How often do you ask questions?"
• "What kind of questions did you ask in general?"
• "Are your questions like your classmates' or different? How so?"
• "Do the questions you're exploring in class fit with the questions that you need to have answered professionally? Please clarify."

Interview 2
• "Let's look at the video (or listen to the audio) to look at your questions. What were you trying to find out here?"
• "What did you mean when you said . . . ?"
• "This answer and that answer seem contradictory. Are they?"
• "What did you think of this questioning interchange?"
Data Analysis
We analyzed data by way of a hermeneutic moral realist-informed thematic analysis. We prepared for each participant's first interview by becoming sufficiently familiar with artifacts, field notes, class recordings, and class transcripts. While this preliminary review of class context provided an important basis for all interviews, it was especially relevant to interview 2, which focused on question-asking interchanges that had occurred in class. We analyzed interviews independently of one another and later combined our emerging analyses into one overall set of themes (as described below). We followed the same general data analysis pattern: analyze interview 1 before conducting interview 2, and analyze interview 2 before conducting interview 3 (to the point of "initial thematizing"; more on this below). Thus, our interviews 2 and 3 each probed more specifically and deeply into issues raised in earlier interviews. Our analyses of all interviews included the following steps. (What we describe below with respect to analysis is very similar to what has been enumerated in other qualitative reports of this type, e.g., McDonald & Michela, 2019.)

Initial coding. This step involved each of us (separately) reading each interview as it was transcribed, in order to gain a general sense of the whole and begin identifying parts of interviews that were particularly relevant to our research questions. To facilitate the process of identifying phenomena from a hermeneutic moral realist perspective (e.g., goods, reference points, tensions, related forms of participation) we developed a set of a priori codes. We often assigned more than one code to a passage or interchange. These initial, generic codes had to do with issues such as descriptions of practical involvement in class (coded as P), explicit value judgments made by students (coded as V), student self-evaluations (coded as S), instances in which questioning enabled (coded as E) or hindered (coded as H) a student's ability to do something, and noteworthy observations that did not fit into these codes (coded as O).
Expanded coding. Once we each completed initial coding of a transcript, we (separately) revisited the initial codes (usually after an interim period of one or more days) to check on their appropriateness and made changes as needed. Also during this step, we augmented each initial code with a more detailed "expanded" code. For example, in one transcript one of us supplemented the initial codes "P, V" with the expanded code: "Didn't want to hijack class; it's ok if questions/discussion go in other directions."

Initial thematizing. In this step, we each (separately) began formulating themes by grouping together expanded codes that had similar or related meanings. Through this process we independently created a number of initial themes, each of which was later deleted, revised, or integrated into other themes.
Initial inferring. In this step, we (separately) made initial inferences regarding goods, reference points, and tensions of practice apparent in the data. Some of our initial inferences concerned possible reference points associated with "time usage," "sincerity," and being an "engaged contributor." We made approximately the same inference about the primary good of student participation in this class, which was becoming a better designer through expanded theoretical understanding. Finally, we inferred several tensions and balances, the most substantive and significant of which we include below in our findings.
Refined thematizing and inferring. In this step, we (separately) refined our themes and inferences by merging, splitting, adding, deleting, editing, and so on. While engaged in this refining we also sought to identify interrelations among themes, goods, reference points, and tensions in the data by looking for part-part and part-whole connections (e.g., asking "Is this tension related to more than one reference point?", "How are these two reference points related?", or "How does this reference point guide toward the broader good of practice?").
Structuring. In this step, we (together) combined our independently developed themes and inferences into a single collection, which we collaboratively refined into a single thematic structure regarding the phenomenon of interest.
Trustworthiness
To create a trustworthy set of findings (in the sense described by Lincoln & Guba, 1985) we utilized the following well-known and widely-used credibility standards: reflexive journaling, peer debriefing, persistent observation, data triangulation, and negative case analysis. By following these standards, in conjunction with our hermeneutic moral realist interpretive frame, we sought to provide an illuminating, fair, and defensible account of several interrelated aspects of student question-asking as a moral-practical phenomenon in this graduate school context.
Findings
In what follows, we present four themes (see Table 3 below) regarding student question-asking as a kind of value-laden participation within a classroom moral ecology. These themes are based on our analysis of two forms of participant lived experience: (a) their experience as students in the classroom, asking questions and engaging in classroom discussion, and (b) their experience engaging with us as researchers as we explored classroom question-answer interactions through interviews, often focusing directly on classroom conversations as they were captured in seminar transcripts. The themes we present below became clarified, first, as we (researchers and students together) reflected on participants' classroom experience and discussed the meaning of specific question-asking interchanges, and second, as we (researchers alone) analyzed classroom interchanges and interview data.
As one might expect, while in class students were absorbed in seminar activities: listening, talking, questioning, answering, debating, and so on, all with a kind of smooth, tacit familiarity that one would expect in this kind of academic setting. During interviews, on the other hand, students were confronted by statements and portions of dialogue from class that invited more conscious reflection on themselves, others, and the seminar experience overall, prompting them to articulate themselves in a more thoughtful way, sometimes analytical, sometimes emotional. In this respect, interviews allowed participants to explicate, to some extent, the meanings and understandings associated with their tacit classroom involvement, bringing issues out more thematically, and thus often making them seem more explicit than they actually were in the seminar experience itself. This is especially true of the moral reference points that we discuss in our findings (see below). While students didn't speak of moral goods and reference points directly in interviews, we as researchers, guided by a hermeneutic moral realist interpretive frame, took account of their classroom lived experience in this light.
The themes we present are all related to complications or disruptions that somehow occurred in ordinary seminar sessions and revealed something about the practice of being a student qua question-asker in this context: for example, issues pertaining to uses of class time, class preparation (in more ways than one), ways of challenging others, or being an engaged learner. These themes collectively provide a glimpse into the moral demands encountered by students in this seminar and, in so doing, offer a perspective on student question-asking not typically seen in this literature.
Theme 1: Contextual Constraints of Class Time in Questioning
Intertwined with student question-asking was a host of practical and moral complexities about how time constraints could be navigated with regard to questioning in class. Implicitly or explicitly, constraints regarding the use of time, and the ways students dealt with those constraints in classroom questioning exchanges, formed a morally-charged context of practice. More specifically, it became clear that time was treated as a kind of scarce resource that needed to be managed carefully. As David said when asked why he spoke up at certain times but not others: "I tried to be aware of how long I'm talking" and "I don't want to be the know-it-all or the Hermione Granger sort of person who takes everyone's time." Like David, other class members seemed well attuned to how uses of class time would matter to other members of the class. Sometimes this was expressed in terms of avoiding waste, as when Anne concisely stated: "I'm afraid of wasting people's time." This concern was also expressed by some participants in terms of appropriate self-awareness. For example, Harry offered the following self-reflection:

I can be a very dominant personality when I want to. And sometimes even when I don't intend to. And part of the idea is that I recognize that there are some people who are just sitting in the class and listening, and I hadn't heard perspectives from them on things, and by me talking, it didn't create the option for them to answer questions or ask questions. And by me shutting up, it created that awkward silence, or it created the opportunity for them to speak and to be able to share what they felt.

Jacky, as she explained her participation in questioning exchanges during the interviews, said that she worried because "I tend to talk too much" and "I try to not dominate the conversation. But sometimes I'm not very good at that." Peter felt similarly: "I don't overshare, I hope." Charles talked about this issue in terms of something like appropriate balance: "there's times where I feel like I have some value, I can bring to the conversation, and there's other times I think it's good just to listen and hear new perspectives." Ultimately, all of the students, in one way or another, were concerned with what might be thought of as an interrogative opportunity cost: only one question (or set of questions) can be asked at a time, and time in class is limited; thus, upon some reflection, students and instructors may always wonder whether a given question was really worth the time it took to answer and whether other questions might have been more worthwhile for some, if not many, of the students involved.
Time constraints were also connected to how students perceived their place in the seminar. Anne, for example, was the youngest member of the class and newly admitted to the program, so jumping into an advanced class on design theory, sometimes backgrounded by sophisticated discussions of philosophical issues, often left her feeling that she was in over her head. Not only did she refrain from speaking up or answering questions, she deflected questions that were asked of her, often making a joke or deferring to another classmate when the teacher or other students tried to engage her. In her eyes, she was so far behind the others that her questions would be "too basic" for the other students, and it would take too much time to bring her up to speed right there in class. Peter, on the other hand, had already graduated with a PhD from the program and was not taking the class for credit. He would typically decide which of his questions were appropriate to ask in class based on the notion that his non-credit status gave him less legitimate access to class time, and so he only asked questions that he felt added to the discussion at hand. According to Peter, his questions would be problematic if they appropriated or "hijacked" class time for his personal professional goals.
Thus, exploring contextual constraints on time revealed how students viewed what might or might not constitute a worthy question. Some questions were deemed too simple, especially for graduate study, while others seemed idiosyncratic and not relevant to some or most other students. Some questions might be insensitive to others (in substance and not just form), some incomprehensible, some distracting or diversionary from the topic at hand, and some asked at inopportune moments. Across these variations was a kind of concern for everyone's edification, which a given student's question may or may not help achieve. This was a recurring theme among participants, even those who asked questions most frequently and seemed least concerned with allocations of time. From this hermeneutic moral realist perspective, then, the good of education, conceptualized as something like the intellectual growth and development of learners in a particular domain, is obstructed or made less attainable when time is not wisely utilized. To the extent that this is the case, responsible use of time functioned as a kind of moral reference point; it provided implicit moral-practical guidance regarding how students ought to pursue the classroom good when asking questions.
Theme 2: Obligation and Risk in Preparedness and Background
Participants in this study connected responsible, and thus morally-practically correct, classroom questioning with adequate background and preparation in several interrelated ways. For one, student questions should be informed by a reasonable, if only basic, understanding of the course content, aided by course readings and other assignments. On this issue, students were fairly firm. As Jim stated with regard to any student who would ask questions without being sufficiently prepared: "you should have just done the reading and then you would know the answer to your question, we don't need to answer this question in class, just go back and do the reading."

Through interviews it became clear that students had varying styles of class preparation and varying levels of background with regard to the subject matter. A number of the students read the assignments and wrote their reflection papers by the deadline (prior to class) but did not formulate questions for class in advance. If they asked questions during a given class period, it was because something confused or intrigued them in the midst of discussion. Jacky, however, was more deliberate in how she would formulate questions for class. She would start preparing for an upcoming class session a week ahead of time. She would read the course reading assignments first, explore alternative, tangential, or related readings, and then reread the assignment again right before class. She took notes from the beginning and began work on writing her reflection paper after her first review of the material. In the time just before class started, she would tighten her reflection paper and note questions she was interested in exploring with the professor and other students. Anne, on the other hand, was more tentative in her approach. Often confused by the readings, she would typically wait for the classroom discussions to aid her interpretations of them, often writing her reflections on topics the class had already covered.

When asked about their class preparation, in conjunction with question-asking, participants were quite comfortable using a language of obligation: should or ought. They saw these expectations as an ordinary and inevitable part of being a good student, simply the way things are in an enriching class environment. In this regard, class preparation functioned as another moral reference point: a kind of guiding value in the midst of practice regarding how a good student participates on a regular basis. As reported by participants, however, adequate preparation was important not only for one's own academic success; it was necessary for one to be a meaningful contributor in class and to facilitate a quality educational experience for all. Questions that could be answered by way of adequate preparation, and that prepared students need not ask, were seen as unnecessary and, in the hermeneutic sense we have described here, morally-practically problematic.
The course instructor offered insight into this issue by suggesting that he observed a decrease in the quality of student questions and class discussion toward the end of the term. He described the situation as follows:

I felt that the quality of reading that went into the class preparation went down. And students even admitted that, they didn't read as much. And so, I think the quality of discussion degraded a little bit maybe the last week of the term, just because the students were less prepared to have good conversations, so they were coming with less questions and they were less able to answer my questions because they weren't as prepared.

Consistent with this moral-practical reference point, and conscious of their standing before the other students and instructor on this matter, participants related that they themselves felt a sense of disquiet when not properly prepared for a class. It was common in interviews for participants to say things like: "If I didn't really put forth an effort, then I probably would sit back and hide and not try and draw attention to myself" [Jim]; "If I haven't engaged with the text very much, then I don't want to take up a lot of time or take time away from people that have done the reading and do know what they're talking about" [Peter]; and "I think there's always a hesitancy when you're not sure you're asking a good question, you didn't want to seem ignorant or that you don't know what's happening" [Charles].
A second sense in which participants expressed concern about adequate preparation for the seminar had to do with their general background in the subject matter and how their in-class questions, as a reflection of that background, would be perceived by other students and the instructor. Participants felt that they needed to be already aware of the basic nature of the field, the key thinkers and concepts, and typical ways of thinking, knowing, and doing, even if in a relatively fledgling or intermediate way. Without some background of this sort, student questions would be too basic, perhaps embarrassingly so, and not able to contribute significantly to a graduate-level discussion of ideas and practices. Some participants wondered about their ability to function at this level, as they came from a variety of undergraduate majors or professional fields. David, for instance, had a practical background in graphic design, but not in design theory or education, and thus was skeptical of the adequacy of his background for the seminar, especially during the first weeks of the semester. As he stated:

they always say that there's no stupid questions, but there are questions that make you feel like an idiot. And so, asking that kind of question in a class where people are obviously PhD students, master's students coming into this, and I'm thinking, I'm going to ask a really basic question here that's almost like, you know, how do you use a pencil? And people are just going to say, why is this guy in this class? He's really out of his league here.

In later interviews, David remarked: "That's why I didn't want to ask too much to reveal my ignorance at the beginning." And as a coping strategy, he admitted that he often refrained from asking questions at this early point in the term, relying instead on questions asked by others to clarify course material. David was not the only student concerned with being unprepared, uninformed, and unable to function at the same level as others in the class. Anne, when asked in interviews to explain her reticence in class discussions, shared that she often felt like a "second-class citizen"; or in her words:

I feel like everyone else has all of this experience with design and with interacting and with being in the practice. And I frankly don't want to be an idiot, and I don't want to hold the class back from ideas, so I feel like really intimidated a lot of times with opening up.
In rendering these self-assessments, participants seemed to be evaluating their questions in light of another moral-practical reference point: they must be academically prepared for a class like this, with adequate background; and if they are not, their basic questions should not be asked, as such questions are not appropriate, do not lead to enriching discussions, and would likely end in a sort of academic embarrassment. In short, these kinds of questions would not be worth whatever benefit they might yield. The risk was too high.
Participants talked about two general ways of handling this issue. One way, according to Harry, was not to be overly self-conscious or "care what other people think." This statement appears to be connected with a primary good of the class itself: edification of learners. If a student lacks knowledge or clarity, she or he should just ask. But this sentiment does not square fully with the lived reality of our participants, who often withheld their questions in the face of conflicting demands, for example, a distinct impression, in the presence of others, that a question was too simplistic and seemed to reveal ignorance.
A second way of dealing with the issue of inadequate background, as described by our participants, was to become familiar with course subject matter through one's own personal study rather than asking questions in class. Perhaps not surprisingly, the obvious resource to use in one's personal study was, for our participants, the internet. The good student should be able to catch up with the rest of the class by making use of various websites and databases. Indeed, according to several of our participants, questions that could be answered via an internet search were not, generally speaking, appropriate. When David was asked how he evaluated some of the questions in class, he suggested: "if this was a basic question that I could answer with a Google search, there's no need to put a pin on it for class time. I could go solve that myself." With an adequate background, however, participants would be able to engage more fully in question-asking interchanges and make helpful contributions. And as David's experience suggests (described above), some students were counting on others to make those contributions. Moreover, as suggested by participants, what one finds on the internet may itself be worth asking about in class if it enriches the discussion, broadens the scope of interest, creates connections, and facilitates learning.
But engaging in personal study of course subject matter (especially its rudimentary concepts and underpinnings), instead of asking basic questions in front of other students, was not always a straightforward process of obtaining useful information. As Anne reported, her internet searches sometimes increased rather than reduced her confusion, leading to a sort of diminishing academic return. Charles noted challenges associated with the sheer volume of information on the internet: "you can kind of do overload. You find yourself looking and looking, and there's so much stuff out there." Without the class community as a resource for clarifying course content and related internet information, students could neither assess what they were encountering nor receive assistance, given their unwillingness to seek more help during class discussions. In this regard, there seemed, at least at times, to be a tension between one's own efforts to master course material and an edifying class experience for all.
It is perhaps for this reason that some participants did not affirm the practice of personal internet study in moments of basic uncertainty; or they agreed that personal internet study might be useful at times (for simple, relatively minor questions), but not as a primary learning resource. For example, some participants suggested that asking basic questions could allow other students to see various perspectives on, or interpretations of, the subject matter; it could clarify confusion that many students might have, but not be willing to ask about due to the anxiety that we have already described. Thus, it was not always clear to participants how to cope with reference points pertaining to background and preparedness. It appears that there were times to ask basic questions and times to refrain. There was no clear, unambiguous rule to follow, but rather a need for judgment, sensitivity, balance, and practical wisdom in context.
Interestingly, it was Dr. Smith who most clearly wondered about students' ability to locate the best online sources and critically assess their content. While he recognized that students could, at times, find correct answers to their questions on the internet, a lack of careful analysis of online information could lead to erroneous understandings. From his perspective, class time itself offered a very appropriate platform for such analysis. Moreover, the instructor was not convinced that basic questions about course content were inappropriate in the first place. He expressed the view that it was often basic questions that would have helped students the most. This remark suggests that, from the instructor's perspective, students may not have been as prepared as they took themselves to be, at least at some points in the course. As he stated: "There were some of those readings that I wish the students would have asked more substantive questions about what was the author trying to say." And a little later in the same interview:

Because I want to make sure that when we're asking those other questions about "what does this mean?", "what is the value of it that we're talking about?" we're asking those questions from an informed place. And based on the way some of those discussions went, I'm not always sure that they were asking those questions from an informed place.
Having said that, however, the instructor also suggested that some basic questions are more helpful than others. Thus, while this moral reference point about question-asking and background may seem simple in one sense (good questions will lead to illuminating conversations for most students and will not be overly or problematically basic), that simplicity seemed to obscure a deeper complexity directly related to student involvement: it is not always clear what a good question looks like in a given situation and who might benefit from its being answered. This complexity likely follows from a set of reference points that do not fit into a simple, unproblematic picture of good class involvement by student and instructor. The more one's participation seemed to honor one reference point in some way, the more likely it was to de-emphasize another.
Theme 3: The Complexities of Challenging Questions
We have suggested that class preparation and general background were an enabling condition for, though sometimes a hindrance to, question-asking for participants in our study. From a hermeneutic standpoint, background is largely tacit and taken for granted in some way. But a student's background and understanding of course readings may be subject to clarification and challenge, which is part of education according to hermeneutic thinkers (e.g., Gadamer, 1989; Slife & Williams, 1995). Our interviews and observations suggested that questions qua challenges played an important role for students and that being willing to ask these kinds of questions functioned as a moral reference point.
As classroom interactions evolved throughout the term, however, it also became clear that students had different styles of questioning and different approaches to the challenges inherent in their questions. Harry, for example, was comfortable questioning the basic assumptions of the authors whose readings formed the basis of discussion. He had his own strong opinions and would often strike at the core of assertions made by experts in the field. He openly challenged (i.e., questioned) the professor at times, and in his personal stories during the interviews he revealed a remarkable sense of ease about what might appear contentious to others. The following excerpt from class, in which Harry questioned how theoretical precepts advanced in the reading for that day were being discussed, was typical for him:

So, he's got this model that's here, and we can see where it fits and it works here, and we never take a step out of that model to see where it doesn't work, and where it breaks down, and the second issue I have kind of relates to education, is he looks to design for the algorithm that accounts for everyone, but it never accounts for everyone. And that's the struggle that it has.

In an interview, Harry told a story of this kind of interaction from his time as a graduate student:

When I worked on my doctoral program, I worked with a friend…and he and I on some points would adopt a very adversarial position. We would really go at each other, back and forth, arguing our points, like deep down dirty dog arguing. But neither of us took it personally...So, I have had that style, it's a very abrasive and confrontational one…I don't see it as a negative thing…because it makes me stronger, you know. It's kind of like that phrase "you don't always make my life easier, but you do make it better."

Jacky, too, enjoyed vigorous back-and-forth dialogue about core issues. She relished the task of identifying assumptions and examining them closely. She felt that challenging authority was an inherent part of growing and learning. Over time, however, her style had become even more other-centered than Harry's. In interviews, she confessed that in the past she had inadvertently offended professional and academic peers by not appreciating how threatened they might feel when challenged. She had become much more cautious in group settings, she said, and had developed a pattern of approaching fellow students after class in private to check in on their emotional state. Her style had a lot to do with her rationale for limiting her challenging comments and questions. She said that when "challenging became personal" it would be wrong. She seemed to be pointing to a related reference point when she noted that "Honoring and respecting that creating a threatening environment inhibits other people's learning matters to me… . For sure I don't want to hurt people or offend people with the questions I ask." For example, in one class session Jacky questioned the idea, presented in that day's reading, of design being facilitated through "breathing together," a concept of unified action through a deeply shared vision. This notion was "so Utopian" to her that she felt it could not be taken seriously, which prompted her to ask the question, "Who's ever been in a design relationship like that?" While the class laughed at her cynical question, some found the idea of working in that unified way appealing. One member of the class claimed to have actually experienced it and said so during the discussion.
Although Jacky's experience had mostly involved team members with differing perspectives and agendas, she picked up on the class sentiment and offered a softened view, emphasizing points of agreement. She admitted that this kind of unity could be achieved, "if the members of this design partnership, the service partnership, truly share [a certain] outcome as their goal instead of their personal motive or their personal intention." In one interview Harry made a similar comment about this kind of sensitivity, suggesting that how he treated other students would make a difference in the appropriateness of the challenge. As he worded it, being "abrasive" and "confrontive" was not in itself wrong, but it became problematic if people took his forceful questioning in the wrong way. He suggested that it was acceptable when the parties involved found it edifying.
With an even greater sensitivity to others, Charles emphasized the importance of kindness when questioning and challenging, which seems to be another reference point to consider. When asked about his pattern of holding back in challenging discussions, he summarized his stance as follows: "If it isn't kind, don't say it." Others in class seemed to share this view, to some degree at least, and at times would rather withdraw from a conversation than create distracting interpersonal tension. For example, during an interview Anne stated that, on one occasion, she refrained from challenging another student on the grounds that it could be taken the wrong way and possibly strain relations in class, a result she saw as unacceptable. As she put it: "rather than cause that uncomfortable confrontation or make him defensive or make him think that I was challenging him to be confrontational, I just didn't ask questions or didn't say anything to him." Her response was informed by earlier classroom interactions: she had already seen others push back against this student several times (which is another option available to students) and felt that she might initiate a negative pattern of interaction with him if she pushed back as well.
Despite differences in how students participated in challenging situations, there was general agreement about the value of challenging others through questioning and confronting. Dr. Smith and most of the students agreed that questions which challenge others are helpful to learning: they open up new perspectives; they explicate what is tacitly assumed; they expand ways of seeing the world; they show limitations in one's thinking; they help learners grow. As we have suggested, for Harry this kind of challenging and being challenged made for better learning. In a similar vein, Jim suggested that helpful questions "challenge ideas and the assumptions behind them," and that if people aren't willing to challenge and be challenged, "they will ask superficial questions." In this sense, Jim suggested that real learning and real depth of thought flow from this kind of questioning. For Jacky also, challenges led to good questioning. While discussing (in an interview) how she felt about several of the challenging interactions in the classroom, she said, "I love being challenged because it makes me question my assumptions." For Jacky, such challenges and questions were a part of learning itself: "My personal learning is enriched through…challenging of ideas and thinking in new ways…I don't want a learning experience that just confirms my preconceived notions. If I'm not changed, if my eyes aren't opened to a different thought or different viewpoints, then I'm not sure how rich my learning is."
Theme 4: The Way Personal Questions Mattered
An important aspect of questioning that emerged in the seminar we refer to here as the personal question. In using this phrase, we mean a kind of centrally important question for individual students that informed, to a significant extent, how course material would show up for them: as relevant or irrelevant, interesting or uninteresting, worth asking about or not, and so on. One's personal question was based on what mattered to her or him in a broad sense, offering a kind of reason (either tacit or explicit) for engaging in class discussions in certain ways and informing what she or he saw as a more or less meaningful question-asking interchange. In this sense, it might be said that a student's personal question offered them a way of being in this class.
For example, the theorizing in Krippendorff's approach to design was deeply engaging to Peter and Jim, but less so to David and Charles. On occasion, a question or discussion was intensely interesting to everyone; the moral ambiguities inherent in certain designs, such as that of the atom bomb, were engaging on some level to all members of the class. To be sure, students often asked questions not directly related to their personal question, and a few students seemed to have multiple questions of this sort or, on the other hand, none at all. Nonetheless, our exploration of participants' personal questions revealed layers of meaningful engagement by students in classroom questioning exchanges: a sense of how the classroom showed up as mattering to them in various ways.
Peter, for example, had been interested in experience-based learning since taking a class on experience design in another department. That previous class had served as a practical counterpart to his theoretical interest in learner agency. While he wanted to be true to these major concerns and interests, however, he also wanted to be professionally credible and accepted by others in the field. Thus, in the current class he wanted to know whether his commitment to experience-based design and learner agency would facilitate his career aspirations. As he noted, "Always as a kind of a background concern of mine was 'what's the viability of this kind of talk [about student experience and agency] in our field?'" Although Peter explicitly articulated his question concerning professional possibilities only once in class during the term, he said that it was a "pretty important question for me to figure out," adding, "And so, I had that big question about prospects for scholarship all along, but I waited until it kind of emerged from the discussion in the usual material of the course to ask it and to really voice that." Similarly, through interviews Anne revealed two personal questions that guided her interests in the course, questions she brought with her into nearly every situation: "why do people do what they do?" and "what is happiness?" These two grand questions had prompted her interest in becoming a teacher. Anne anticipated a future of teaching business classes at a university, and the extent to which these two questions were addressed in some way provided the basic standard by which she could measure the success of her teaching. But this led to a problem: for her, teaching had little to do with principles of design and much to do with people's motivations and what makes them happy. She was interested in how teachers touch and are touched by the lives of their individual students. Given this frame of reference, it is easy to see why she often felt bewildered by class discussions. They did not address the questions that mattered most to her; or their nature and genuine relevance to education were, for her, obscured by her focus on her two personal questions.
Jim, on the other hand, saw the seminar content and goals as well tailored to his personal ambitions. He wanted to teach second language learning at a university. When asked in the interviews about his purposes and the course purposes, he said, "I felt like they were well aligned." Thus, when conversations and questions seemed to take classroom discussions off into what he thought might be personal tangents, posturing, or aimless opining, Jim became frustrated and withdrew from the discussion, keeping his questions to himself. Jim judged that classroom questions ought to enhance the professional possibilities of all the students and that questions which diverted appropriate discussions were an infringement on others' opportunities to learn.
However, personal questions also seemed to serve important moral-practical ends for participants in our study. We observed, through class sessions and interviews, that a personal question could help students stay engaged, lead to useful discussions, bring a sense of connection and cohesiveness to the subject matter, and help provide a meaningful experience. A student who brought a personal question to the course as part of her background was typically an engaged, motivated learner, one who would contribute to the class good, namely, being edified in this area of study. Thus, in our study, having a personal question was educationally valuable and functioned as a kind of moral reference point leading to the class good.
As an example, consider a class conversation started by Charles, guided by his personal question regarding how to achieve excellence in design while treating kindness to others as a principal virtue. He made reference to a recent work experience in which he, as project manager, found himself caught between two teammates who had conflicting views on how to proceed on the project at hand. His desire to act in kindness made it difficult for him to side with one of the co-workers and possibly frustrate or alienate the other. As he recounted: "I've gotten halfway through and realized that one person's voice is going to be more important than the other one…and I had to change the entire project to meet that other person." This issue for class consideration, created by Charles' personal question, sparked comments from almost every student about inclusive processes in design work and ways of handling professional relationships. It inspired the students and kept them engaged in the conversation. In this regard, Charles' personal question performed a positive, enabling pedagogical function in the seminar. It pointed out an important issue in design work that, in all likelihood, would not otherwise have come up in class discussions.
However, personal questions could also disrupt the flow of a class session. For example, Anne's lack of background knowledge in the field and preoccupation with her personal questions would, at times, lead her to ask questions that did not mesh with what the class was talking about, though they obviously mattered deeply to her. In one case, when the class was covering the work of Herbert Simon, Anne asked about his perspective as a psychologist in a very broad and basic way: "how does he [Simon] explain psychology?" Her query set into motion a disjointed exchange among students. One quipped "in ten words or less?", which evoked laughter from others and pointed to the question's lack of fit with the discussion taking place. Finally, Dr. Smith intervened, offering a brief, general answer to Anne's question and directing attention back to Simon's view of design. It appeared that Anne's personal questions sometimes worked to conceal from her awareness aspects of course content, because she was, at least at times, more concerned with those questions than with topics of concern to others.
Moreover, several participants suggested that students could, on occasion, engage too assertively in class discussions, pressing their agendas and personal queries beyond the point of being instructive. When discussing some question-asking exchanges that seemed to flow out of another student's personal question, David offered the following observation: "it would tend to be drawn out when it seemed to be like a fairly insignificant topic to take so long in discussing it, and so that would just kind of get old, and I felt like it did waste a little bit of class time." From Jim's perspective, time-consuming queries driven by one student's personal question were sometimes frustrating: "there's these ideas that I want to hear from the person that I respect...I want to hear that, but then if something gets in the way of that, I get frustrated by the other person, for instance." Based on our analysis, it appears that personal questions did, at times, motivate this assertiveness among some students, though clearly such assertiveness could be motivated by other reasons.
How might students respond to a classmate who, driven by her or his personal question (and its intrinsic meaningfulness), failed to honor a tacit reference point regarding appropriate use of class time? One option, it appears, was to do nothing, and there were moral-practical reasons for doing so. As we described above (in Theme 3), at times Anne would withdraw from discussions rather than push back and possibly strain class relationships. In contrast, Jim described a strategy that might be thought of as an open, yet passive, resistance. As he described it: "I just pull up my phone and I've just looked at Facebook during these comments, and because I just want to give no bodily messages that I am interested in this question. Because I don't think the question is where we should be." While Jim's strategy may have been effective in some measure, the more typical approach among students was to avoid taking action and, one might surmise, wait for the instructor to handle class management issues. But clearly this is not a failsafe approach, as instructors operate within the same configuration of moral goods and reference points as students (though faced with different expectations), and are thus also caught in complex tensions between individual student needs and the overall good, for example, helping a student feel heard while not allowing him or her to dominate the conversation. Nonetheless, the instructor did report taking some action. As he recounted: "I thought that everyone's contributions in the second half of the term in that regard were probably deeper and I tried to be a little more sensitive to maybe shutting down some things before they would be terribly unproductive." Whatever might be said about how personal questions mattered and fit within the classroom moral ecology, it was clear that participants faced a constant need to navigate these reference points in dealing with them and, again, that there were no obvious rules to follow in doing so. As stated above, a practical sensitivity to others and the class good in context was required, though not always achieved.
Discussion
From the standpoint of hermeneutic moral realism, student questioning as a part of graduate study can be seen as a kind of practice within a practice; as such, it is given shape and meaning by moral goods, reference points, and the tensions they produce. It might be said, in this sense, that question-asking exists within, and is made possible by, a network of moral expectations rather than a network of determinate laws and mechanisms (Brinkmann, 2011), as seen in behaviorist and cognitive accounts (Slife & Williams, 1995). In our study, student question-asking was closely connected with interpersonal and relational concerns as much as knowledge-gathering ones; indeed, both showed up as aspects of a larger moral enterprise, that of being a good graduate student in this setting. A summary of the primary moral reference points encountered by our participants, as seen in Table 4, provides one broad view of the moral configuration in which participants found themselves. As our findings suggest, one's way of dealing with any particular reference point could go too far or not far enough, and many reference points needed to be balanced against others in context. Tensions created by a less-than-optimal balance among reference points seemed omnipresent, often proving as influential as, or more influential than, instructor expectations or one's desire to learn. And a lack of proper balance sometimes led students to demur. One example of tensions that blocked, or potentially blocked, student engagement concerned the complex interrelation between questions and knowledge, namely, that question-asking presupposes knowledge (qua background) as much as it produces it. In this sense, a student needs to know enough about a topic to ask an appropriate or even intelligible question; and as our participants suggested, not knowing enough, or perceiving oneself as insufficiently informed, often led to silence. This obviously created an impediment for students who would not ask questions due to a perceived sense of their own deficiencies, but potentially (and actually) for others as well, such as Anne and David, who, due to their own sense of inadequacy, often relied on the questions of others for clarification and understanding. Thus, a tension between needing to know (seeking understanding) and having adequate background and preparation for the course was, at least for some, resolved by refraining from queries that might have led to informative discussions for others.
This tension between needing to know and background, along with others in the data, reveals a larger, more encompassing tension: that of positioning oneself in terms of the course versus positioning the course in terms of oneself, both by way of question-asking. Clearly participants saw the class as a way to grow professionally, and their questions typically reflected their interests. For students with a fairly well-developed personal question, class was an instrument by which they could gain important insight and pursue professional development. Student engagement of this sort was clearly apparent in class. In a hermeneutic sense, it might be said that class content and discussions tended to show up in terms of students' personal questions and strivings: as boring or interesting, relevant or irrelevant, and so on. This engagement, treating the course as an instrument in one's own development, was sometimes beneficial to oneself and others, and sometimes not (as suggested above).
However, comments from our participants also suggest that they positioned themselves in terms of the moral-practical configurations of the class, as in the obvious case of refraining from asking questions in light of the moral-practical expectation to use time responsibly, or perhaps due to fear of embarrassment. One could not participate in this class, it seems, without encountering and somehow coping with the goods and reference points of graduate student practice in this setting. Indeed, as we suggested above, it is this moral configuration that rendered student question-asking a meaningful practice in the first place. In our study, it is what gave question-asking purpose and form (i.e., with regard to what good students do, how they do it, and so on). For example, good students pursued edification, but in ways that did not obstruct the learning of others. If one's questions seemed not to fit the context in some way, then those questions were asked at the risk of disrupting class, creating tensions among students, and possibly impeding edification. Overall, our analysis suggests that there was no escape from the need for balance in context, perhaps some kind of student-oriented phronesis, if one sought to pursue the class good.
Importantly, we acknowledge that the setting of our study was somewhat unique within the world of education. Graduate students are typically motivated learners, and graduate classes are, it may be assumed, more discussion-centered than classes at other levels. The degree to which student questions play a role in class proceedings surely varies across contexts, but the depth and complexity of student questions are likely to be greater in graduate study than at most other levels. Thus, the findings of this study may transfer fairly well to some settings (e.g., other graduate courses, some undergraduate courses) and less well to others (e.g., elementary or middle school classes). However, as is often the case in studies of this type, the extent to which these findings may be insightful or applicable remains to be determined by the consumer of the research (Lincoln & Guba, 1985). It follows from these methodological observations that further inquiry into student question-asking at other academic levels, such as high school and college, or in other class subjects, such as science and humanities, would add insight into the nature of student question-asking as a central aspect of formal learning experiences.
Moreover, future investigations could take the motives and intentions of students into account more directly, with specific attention paid to issues of intrinsic versus extrinsic motivation and how those motivations may lead to subtle, or not so subtle, differences in how students ask questions or how they perceive the questions of others. In the present study, students were often motivated by an interest in instructional design per se and seemed not to be overly concerned with extrinsic matters such as course grades. But for many, their interest was also professional, in the sense of learning in order to be credentialed so they could attain the professional position or status they desired. It is not always easy to discern where intrinsic motivations end and extrinsic motivations begin when the subject matter concerns people's professional aspirations, as interests or commitments regarding the work per se often merge, or possibly conflict, with practical matters such as professional survival or promotion (e.g., one's desire to design excellent educational experiences can be synergistic with, or opposed to, doing what is required for professional advancement). Continued research could explore student question-asking from this perspective.
Another limitation of this study concerns our focus on student question-asking in the classroom. Of course, student questions were situated within a broader educational context that transcended and situated any particular moment in a given class session. For example, a formal itinerary of participation in which student questions could be asked, including course readings scheduled in a certain order, reflection paper assignments, a final project, guided conversations, and so forth, was arranged by the course instructor. And the course itself was situated within a specific graduate program that offered a disciplinary context in which students were able to study and learn about educational practices in certain ways. This is all to say that how students asked questions was, in all likelihood, a function of more than specific discussion topics and curiosities at specific moments in the seminar. A larger educational structure made certain kinds of questions possible, in a sense, and that larger structure could be explored with regard to the overall setting it created for student practices, including the moral goods and reference points that students navigated (either tacitly or explicitly). Thus, it must be acknowledged, when interpreting these findings, that this broader educational context was not explicitly addressed in interviews and data analysis. One obvious way forward, in this regard, would be to explore how instructors fit into the moral contexts of classes, especially with regard to the kinds of questions they ask and how they may invite students to contribute in certain ways. Another step would be to explore more carefully how student experiences in other classes, in the past, with other students and instructors, may provide a context for how students ask questions in a current class.
Conclusion
A hermeneutic moral realist interpretive frame casts question-asking in a unique light and, in our study, enabled this educational activity to be seen as more nuanced and complex than what might be thought of as a purely cognitive, knowledge-acquiring exercise. From this perspective, how students asked questions in this graduate seminar constituted not only a way of being in the classroom, but a moral way of being in the hermeneutic sense that we have described. More specifically, how students asked questions functioned as a kind of commentary on course subject matter and on the goods and reference points that they encountered in the midst of everyday class involvement. Moreover, moral tensions in class required deft balancing, in implicit or explicit ways, as participants sought to make the most of class experiences. How participants balanced those tensions was, again, a kind of commentary on the course and the moral configurations that situated their learning activities. In sum, this study, in conjunction with some others (e.g., McDonald & Michela, 2019), suggests that inquiry of this type can expand the range of methodological resources available to researchers and offer novel (i.e., intrinsically moral-practical) ways of interpreting human action in context.
"Philosophy"
] |
Reciprocal Trust as an Ethical Response to the COVID-19 Pandemic
The COVID-19 pandemic has generated a range of responses from countries across the globe in managing and containing infections. Considerable research has highlighted the importance of trust in ethically and effectively managing infectious diseases in the population; however, considerations of reciprocal trust remain limited in debates on pandemic response. This paper aims to broaden the perspective of good ethical practices in managing an infectious disease outbreak by including the role of reciprocal trust. A synthesis of the approaches drawn from South Korea and Taiwan reveals reciprocal trust as an important ethical response to the COVID-19 pandemic. Reciprocal trust offers the opportunity to reconcile the difficulties arising from restrictive measures for protecting population health and individual rights.
Introduction
COVID-19 continues to be a part of our daily lives. As countries around the world continue to stem the tide of emerging COVID-19 strains, many, including Sweden and France, have turned their gaze to East Asian countries as exemplars of pandemic management (Fisher and Choe 2020). Measures undertaken to trace and contain transmissions include fast and vast testing regimes; clear, consistent and streamlined communication; and public education and publicity on hygienic practices such as regular hand washing and mask wearing (Partridge-Hicks 2020; Sridhar 2020). These approaches typify some important shared elements of ethical practices fostering trust in the authorities. Authorities, in this context, refer to government officials or those entrusted with responsibility to discharge duties to the public. Trust is often associated with doctor-patient healthcare encounters; however, the pandemic has refocused the role of trust in broader social contexts. Indeed, current research has emphasised the utility and importance of trust in authorities in managing the pandemic (Wong and Jensen 2020; Paek et al. 2008). The rise in campaigns against public health actions such as mask wearing, vaccinations, test and trace programmes, mandatory quarantine and lockdown gestures towards distrust in authorities (Haddad 2021; Safi 2021; Stewart 2020; Picheta 2020; Read 2020). These responses, however, reveal a deeper concern for trust within the pandemic ecosystem: that reciprocal trust is absent from a diverse range of ethical frameworks. I will explore the relationship between trust and reciprocal trust in the relevant section below. The pandemic ecosystem comprises a complex environment, with various interconnected social, economic and political factors and networks of populations and interactions, all of which present a challenging environment for pandemic relief efforts. Reciprocal trust is an ethically important response to COVID-19 due to the sustained urgency posed by a highly transmissible virus requiring collective effort from the authorities and the population. These collective efforts necessitate trade-offs from the population (such as movement restrictions), which could be greater than in ordinary times, and complex decision-making by the authorities in balancing the different priorities and interests that operate in a pandemic ecosystem. Reciprocal trust thus cushions the harshness of restrictive measures and the inconvenience experienced by the population. Inattention to this element consequently widens the gulf of trust to the detriment of population health.

This paper aims to broaden the perspective of good ethical practices in managing the COVID-19 pandemic by including the role of reciprocal trust. It will illustrate, with examples that engender reciprocal trust drawn primarily from the practices adopted by South Korea and Taiwan, how such reciprocity is deployed and negotiated in the COVID-19 pandemic management context. These countries are chosen as exemplars for their successful cultivation of reciprocal trust between the authorities and the population, and within the population, in curbing the spread of the pandemic. I will demonstrate how reciprocal trust is promoted in the pandemic context, supported by appropriate examples drawn from these two jurisdictions, in the relevant sections below.
It will become clear that prior trustworthy experience is highly likely to support reciprocal trust, which explains the general willingness of South Koreans and Taiwanese to engage in voluntary exchanges in view of the restrictions, based on previous successes in managing outbreaks such as MERS and SARS.
Conceptions and Characterisations of Reciprocal Trust
Research on reciprocal trust features notably in management and organisation studies (Serva et al. 2005; Korsgaard et al. 2015). Reciprocal trust, which is considered to have significant organisational and interpersonal implications, is defined in a project management team setting as "the trust that results when a party observes the actions of another and reconsiders one's trust-related attitudes and subsequent behaviours based on those observations" (Serva et al. 2005, 625). This characterisation indicates that the trustworthiness of one party is likely to shape the other party's perception of that trustworthiness, which then leads to ensuing trust and actions that convey that trust. Although the exploratory research is focused on project management settings, the study sheds light on the existence of reciprocal trust in considerably large groups and complex social settings, the dynamic interactions of different factors affecting trust over a course of time, and the psychological aspects of trust in relationships, which are demonstrated by behaviours and responses that manifest trust or otherwise (Serva et al. 2005, 626, 627). These findings are pertinent to the pandemic context, as the pandemic ecosystem is varied, highly complex and liable to many permutations. The authors correctly observed that reciprocal trust requires an appreciation for the role of trust in a relationship, particularly the active process of understanding how trust is reciprocated, gained or lost (Serva et al. 2005, 627, 628; Korsgaard 2018, 14). This observation suggests that the presence of trust or trustworthiness is crucial in forming reciprocal trust and subsequent demonstrations of trust. Reciprocal trust is thus understood as one party's trust affecting the other party's trust through actions and behaviours that demonstrate the attribute of trustworthiness. This understanding brings to light the self-reinforcing nature of reciprocal trust (Korsgaard et al. 2015, 53), amplifying the "trust-begets-trust" paradigm. An example of such self-reinforcing behaviour is the positive association between leaders' trust in followers in organisations and vice versa (Korsgaard et al. 2015, 54).
Another important characterisation of reciprocal trust is that it is a process rather than a "construct" (Korsgaard et al. 2015), similar to Serva and colleagues' (2005) reference to its dynamic nature occurring over a period of time. The process continues as long as mutually beneficial outcomes exist and ceases where there is no trust (Korsgaard et al. 2015). The processual nature of reciprocal trust indicates the variability of trust levels throughout the interactions and relationships formed between the parties. Consequently, where trust is felt to be violated, voluntary exchanges between the parties cease, as evidenced through protests and remonstrations against restrictions in a pandemic context. Arising from this appreciation of its processual character, reciprocal trust can be characterised as a "bidirectional" occurrence (Korsgaard 2018, 14). The bidirectional aspect of reciprocal trust encompasses continuous "cycle[s] of relationships between trust and cooperation represented by paths from trust to cooperation within persons and from cooperation to trust between persons …with both parties giving and receiving benefits and thus is both a trustor and a trustee" (Korsgaard 2018, 16). The bidirectional nature of reciprocal trust lends weight to the notion that it has a circular effect. The strength of the relationship changes throughout these interactions, as the parties familiarise themselves with each other's motives, values and interests, leading to reviews of the aims of the reciprocal relationship (Korsgaard 2018, 17, 23).
Reciprocal trust has recently been represented in the general healthcare literature from the perspective of reciprocal relationships. A functional healthcare relationship possesses attributes that reflect reciprocity, for example trust in healthcare professionals and reciprocal trust in following their recommendations towards health recovery. A trustor's perceptions of a trustee's ability, benevolence or integrity, reflected in clinicians' technical ability, skill or competence, and interpersonal skills, thus contribute to trust in healthcare (Peters and Bilton 2018, 333). Consequently, a reciprocal relationship is relevant to the trust framework, while trust as an element in the healthcare relationship can engender reciprocity, thereby strengthening the reciprocal nature of the relationship. Reciprocal relationships in healthcare often embody common aspirations and shared values of respect and trust that support such relationships (Tumosa 2017). Common aspirations or shared goals enable people to change or cultivate new behaviours to achieve the goal of wellness in their healthcare experience. Reciprocal trust is premised on the acceptance of inherent vulnerabilities within a mutually dependent setting, requiring mutual respect and trust to achieve mutually beneficial outcomes. Reciprocal trust can be characterised by various attributes, skills and behaviours, ranging from respectful listening and averting presumptions to canvassing patient views and conversations towards improving awareness of what is wrong (Tumosa 2017, 58). These skills, attributes and behaviours are likely to affect the quality of the therapeutic relationship between doctors and patients.
Reciprocal trust is conceivably attained through genuine concern for people, encouraging open discussion, leadership in times of uncertainty or changing circumstances, providing expert advice and responding with empathy to intense situations (Robinson 2016, 10). Reciprocal trust is relational, operating between people, whether the authorities or the population in general. The "circular" nature of reciprocal behaviours "serves to grow and sustain the patterns" (Robinson 2016, 3). Thorne and Robinson (1988) advocate for reciprocal trust as necessary in maintaining a functional healthcare relationship, for both the carer and the cared for. This perspective is more nuanced than trust alone because reciprocal trust signals that "trust from health care professionals fosters trust in health care professionals" (Thorne and Robinson 1988, 786), indicating continuous, conscious actions and behaviours that influence the other, rather than a one-directional feature of trust. Reciprocal trust enables us to reconsider the relationship between trust and reciprocal trust in pandemic management, in protecting the public from harm and in trustworthy communication. It is through reciprocal trust that trust can flourish, where trust instils confidence in professional capability, consequently influencing the cared for in handling their illness towards achieving wellness (Thorne and Robinson 1988, 787). In a healthcare relationship, reciprocal trust facilitates a constructive exchange of information to reach decisions appropriate to the circumstances and which are more likely to be accepted by family members (Robinson 2016, 9).
A Reciprocal Trust Conception in a Pandemic Context
The key conceptions of reciprocal trust drawn from the preceding section are instrumental in presenting a working definition of reciprocal trust for the present paper. Trust is context specific, and so is reciprocal trust. Reciprocal trust, in the context of the pandemic, refers to "a cyclical, reciprocal relationship based on trustworthy actions towards achieving the shared aspiration of population well-being." This definition contains three important attributes. First, I characterise the relationship between the parties, whether between the authorities and the population or within the population, as relational and circular in nature. This approach is consistent with the highly complex and interconnected factors and actors existing in the pandemic ecosystem, where the actions of one influence the other. This understanding brings to light the dynamic nature of reciprocity, which is understood as an ongoing process requiring constant negotiation and renegotiation of powers and actions. Consequently, reciprocal trust is facilitative in building and sustaining the relationships between the parties based on a mutual understanding of, amongst other things, shared vulnerabilities, while being mindful of the necessary trade-offs in a pandemic. This shared understanding is helpful in facilitating trust or strengthening the bonds between the parties, demonstrated through cooperative or collaborative actions. Reciprocal trust is more likely to flourish if trust exists.
The second attribute, trustworthy actions, points to the essential feature of demonstrating evidence of trust in order to promote reciprocal trust. This means that the authorities have to evidence the extent to which they are competent, reliable and honest in fulfilling their distinct role in society, in this case, towards the goal of breaking the chain of transmissions initially, and achieving population well-being in the long term as countries slowly recover from the ramifications of the pandemic. Trustworthy actions suggest how well the authorities resolve the difficulties faced by the population, which will then lead the population to reciprocate that trustworthiness by displaying cooperative responses to the proposed measures. This may require empirical substantiation, but the examples in the sections below offer a persuasive inference that trust is reciprocated where evidence of trust exists in the first place. We will see examples of how feedback (often demonstrated by actions) from the population on various health interventions during the pandemic leads to improvements in the action plans that are implemented, supported by effective, accessible communication between the parties. Consequently, actions embodying trustworthiness promote reciprocal trust. Actions that cultivate reciprocity in relationships demonstrate that the population is treated as collaborators, rather than as passive recipients of advice in a top-down approach demanding compliance.
Third, I have adopted the term population well-being to reflect not only the physical and mental health and well-being of the population, but also their social and economic well-being in the broadest sense. The pandemic affects the population's lived experience in an incredibly challenging way. We are all vulnerable in different ways; however, the pandemic has amplified these vulnerabilities, affecting the most disadvantaged in particular through, for example, additional burdens on working households, gendered responsibilities and precarious working conditions. The actions taken by the authorities in managing the pandemic must be directed to address these challenges. These actions are premised on an appreciation of the relational and circular effects of the pandemic, where, for instance, a lack of actual relief support may lead the authorities to be perceived as untrustworthy, resulting in an unwillingness to reciprocate their trust by complying with pandemic restrictions, leading in turn to a deterioration in the health and well-being of the population.
Reciprocal trust, as understood above, is important in appreciating the dynamics of reciprocity and trust in a pandemic setting, helping us identify actions that are likely to promote or shatter trust. When there is a loss of, or depletion in, trust, the relationship is no longer perceived as mutual or reciprocal, resulting in revolt or disengagement from the shared interests and aspirations that are essential in a pandemic. Discontent with the authorities, evidenced by remonstrations against restrictions, suggests that trust in the authorities has been withdrawn. Maintaining a relationship of reciprocal trust is necessary for health governance in pandemic management. The characterisation of reciprocal trust reveals that trust is an essential element in promoting reciprocal trust. It therefore becomes necessary to consider the relationship between trust and reciprocal trust in order to understand how each influences the other and, consequently, their application in the pandemic.
The Relationship Between Trust and Reciprocal Trust
Trust is "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party" (Serva et al. 2005;Korsgaard 2018). Trust is thus characterised as unidirectional, with the truster (the party that trusts the other) trusting the trustee (the party who is being trusted or entrusted with something) and acting in a manner consistent with an attitude of trust towards the other. Let us consider the following example of trust: If I disclose a secret to you, I trust that you will not repeat the secret to another and to safeguard that information imparted to you, regardless of my ability to ensure that you do not disclose said information or if you would keep the promise. This means that by disclosing the secret to you, I am being vulnerable and have taken the risk to do so. Let us assume that the interactions continued, and you have displayed reliability and honesty in safeguarding the information, thus heightening my perceptions of your trustworthiness, upon which I continued to engage with you in sustaining our interactions. This example demonstrates reciprocal trust in the relationship. Let us consider the example above from the point of lack of reciprocal trust: I discovered not too long after that you have disclosed the information to the others, leading me to perceive you as being untrustworthy. The interaction between us has consequently changed arising from this change in perception, supported by evidence of such untrustworthiness. Flowing from this realisation, I then consciously disengage from further interactions with you, and would be unlikely to reciprocate your request for trust in future matters involving keeping promises. This shows that there is no reciprocal trust because the initial trust is breached, which then influences my attitude and behaviour towards you and the interaction.
Trust has been characterised as "an ongoing process of building on reasons, routine and reflexivity, suspending irreducible social vulnerability and uncertainty as if they were favourably resolved and maintaining thereby a state of favourable expectation toward the actions and intentions of more or less specific others" (Raaphorst and Van de Walle 2018, 469). This signifies the leap of faith involved in trust; in reciprocal trust, by contrast, there exists an assessment of the trustworthiness of the other based on prior experiences, behaviours and actions. Trust is not visible in a tangible form. However, trust can be evidenced by trustworthiness arising from another's ability, benevolence and integrity (Serva et al. 2005, 626; Korsgaard et al. 2015, 53). Korsgaard and colleagues (2015, 49) have described trust as: an attitude held by one party the trustor toward the other party, the trustee… trust has cognitive, affective and intentional components…the cognitive component reflects the trustor's beliefs about the character and intentions of the trustee which is based on the trustor's pre-existing expectations as well as assessments of the characteristics of the other party, the quality of the relationship itself and other situational variables that are likely to influence the relationship.
Trust is thus built on trustworthiness, as vulnerabilities lie in the knowledge imbalance between the truster and the trustee (O'Neill 2018). Consequently, trustworthiness affects reciprocal trust. Transposing this understanding of trust to a reciprocal trust relationship in a pandemic, the authorities have to display the attributes of trustworthiness (competence, honesty and reliability) to earn trust from the population, which then enables reciprocal trust to occur. The population will look at evidence of trustworthiness in the actions and conduct of the authorities to decide whether they would like to reciprocate that trust and sustain that relationship.
Trust is therefore distinguished from reciprocal trust as unidirectional, while reciprocal trust is bidirectional (moving in both directions), involving a "mutual influence process whereby the trust one party has in the other through its effects on trusting or cooperative behaviour influences the other party's trust" (Korsgaard et al. 2015, 50). In "simple" trust, only one party is vulnerable and is made more vulnerable by reposing trust in another, while the other party may have nothing to lose. In contrast, reciprocal trust means that both parties are vulnerable, as continued engagement in the interaction carries the potential loss of anticipated outcomes if their goals and motivations are no longer aligned. It is similarly clear from the illustration above that trust promotes reciprocal trust. Trust is often portrayed as static, while reciprocal trust is circular and variable, in the sense that it is subject to fluctuations depending on the actions of the parties in the interactions and the evidence of trustworthiness as a reliable and beneficent party (Korsgaard 2018, 25; Korsgaard et al. 2015). It can reasonably be inferred that trust and reciprocal trust carry practical and significant differences, as they depict important factors affecting the nature, quality and level of trust in a reciprocal relationship.
Another practical and significant difference between trust and reciprocal trust is the presence of gratitude. There is no element of gratitude in trust, while gratitude is present in a reciprocal trust relationship. Gratitude motivates changes in people's behaviour towards one another, and its display can be overt or covert. Examples of overt gestures of gratitude include express displays of appreciation for carers (e.g., participating in nationwide "clap for carers" activities or thanking these workers) or offering discounts and free food to essential workers. A covert indication of gratitude may range from behind-the-scenes efforts to ensure an ample supply of personal protective equipment for essential workers to establishing functional testing centres, which would be reciprocated with higher compliance with these efforts or with conduct that demonstrates a willingness to participate rather than remonstrate. In turn, the authorities display gratitude for the population's participation, rather than taking things for granted or expecting continued patience and sacrifices, by ensuring that restrictions last no longer than necessary. There is thus a bilateral gesture of gratitude emanating from both parties in pandemic management interventions.
Trust is, despite its differences from reciprocal trust, a prerequisite to reciprocal trust, consistent with the notion that trust begets trust. Without trust, reciprocal trust is unlikely to occur, nor can it continue; reciprocity is the ingredient that sustains the trusting cycle in the relationship. This suggests a co-dependency between trust and reciprocal trust, one with an important role and practical significance in sustaining reciprocal trust. Reciprocal trust crumbles when there is a perceived violation of trust. It is through trust that reciprocal trust can flourish. Consequently, in a pandemic situation, states have to persevere in establishing trust in order to obtain reciprocal trust from the population. For example, reciprocal trust is expressed by collective adherence to travel restrictions or face mask wearing. I will next consider more closely the application of reciprocal trust in the pandemic.
Reciprocal Trust in the Pandemic
Reciprocal trust is important in maintaining and sustaining an ongoing, trusting relationship between the authorities and the population in a pandemic, especially in transforming governmental action plans into collective actions at the population level. Recent research has gestured towards reciprocal trust as a key factor that binds the population and the authorities, and the population amongst themselves, both horizontally and vertically, exemplified by the population trusting the information and recommendations offered by the authorities as accurate and the authorities trusting the population to act on those recommendations (Harring et al. 2021). The breakdown of reciprocal trust is evidenced by increasingly heavy reliance on surveillance measures and enforcement of compliance with pandemic restrictions, and vice versa (Harring et al. 2021). The pandemic illustrates shared vulnerabilities, which necessitate competent actions to overcome the difficulties of coping with the pandemic (illness). A pandemic can be likened to the disease or illness experienced by an individual in a doctor-patient relationship, but it is more widespread, affecting many individuals at once. The vulnerabilities experienced by people are now increased and not confined to personal experiences of coping with illness. Reciprocal trust thus has particular significance in a pandemic setting because there is mutual dependence: on the one part, compliance with the measures imposed by the authorities, and on the other, the openness, reasoning and accountability of the government in introducing restrictive measures. To function effectively, the relationship between the authorities and the population requires reciprocal trust for successfully managing the pandemic. In the pandemic context, reciprocal trust refers to actions that sustain the relationship between the authorities and the population, and amongst the population, in breaking the chain of transmissions. Trust is mutual and not taken for granted or demanded; rather, the authorities assume responsibility for achieving reciprocal trust from the population in managing the pandemic. The paper will now consider manifestations of reciprocal trust in the pandemic.
Examples of Reciprocal Trust in the Pandemic
Let us consider the example of the supply and rationing of face masks during the initial stage of the pandemic, when a shortage in South Korea led to public panic. The authorities stepped in swiftly, implementing rationing and addressing supply issues, which demonstrated their competence in handling the shortage (Moon 2020). The authorities, through their competent and reliable actions, provided evidence of their trustworthiness in managing the shortage and alleviating public panic, resulting in compliance from the public in accessing face masks and cooperation from suppliers in providing them. One of the major successes demonstrated by Taiwan and South Korea is a record of reliability in pandemic management strategies based on previous successful experience in managing infectious diseases such as MERS and SARS. This evidence offers the population a reason to reciprocate the trust. Where the population does not feel assuaged, stockpiling is more likely to continue, leading to rejection of further compliance with proposed measures. Competence, honesty and reliability are attributes of trustworthiness (O'Neill 2017, 2018). These attributes speak to the core of supporting trust and subsequently reciprocal trust. The authorities in this example, by being open about the shortage, acknowledging the difficulty in supply and demand, and taking steps to remedy the shortfall immediately through restrictions, allowed the population to appreciate the real situation, resulting in a higher inclination to reciprocate that trust and to acquiesce to the rationing measure until supply returned to normal. Transparency has often been associated with trust; however, it does not guarantee accessibility (O'Neill 2018). Consequently, transparency alone is unlikely to play a role in promoting reciprocal trust. Additionally, transparency in communicating information does not always correlate with voluntary compliance with policies, nor across all domains (Porumbesco et al. 2017). Openness, on the other hand, means accessibility; and being accessible is "evidence" by which the population can judge the extent of the authorities' trustworthiness and decide whether they would like to reciprocate that trust. As O'Neill (2009) rightly observed, "without accessibility, communicative acts fail because they cannot communicate with intended audiences. Some may be unintelligible because intended audiences cannot follow what is communicated: Even if satisfactory as acts of self-expression, they inevitably fail as communication." As an illustration, the Taiwanese authorities used easy-to-follow, interactive visual communication in conveying the meaning, significance and gravity of the pandemic and imparting important messages about daily preventative measures to stop the spread of infection, such as wearing face masks, hand washing and social distancing, in ways accessible and intelligible to the population (Hsieh and Child 2020; Lee et al. 2020). The onus is on the authorities to make important information accessible so that the population feel included in the effort to counter the spread of infections, which subsequently promotes the reciprocal trust of the people. Once the population truly understand their important role in pandemic management strategies, they can then decide whether they would like to participate in the collective effort and to reciprocate that trust.
Similarly, if we apply the approach of accessibility to communications regarding preferences for one course of action over another in managing the pandemic, such communications must be accessible to the people, so that they can understand what the adopted strategy is, why it was selected and what its implications are. Authorities therefore must not be economical with the truth. Being accessible in this sense promotes reciprocal trust.
How do authorities reciprocate the (presumptive) trust from people in handling personal information arising from test and trace systems? The ability to collect information rightly raises privacy concerns, placing authorities in positions of power with identifiable and potentially sensitive information about the public (Zastrow 2020). An ethical response is an accountable assurance of privacy, where, despite the potential for identifiable information arising from test and trace applications, the authorities offer explanations of how the collected information is used and of the safeguards installed to protect such information in the public domain. This approach not only reflects openness but also treats the population as collaborators with respect, illustrating the continuous actions needed to sustain reciprocal trust. The willingness to trade off privacy for public health is evident in the South Korean and Taiwanese approaches (Thompson 2020; Lee et al. 2020; Marszalek 2020). These approaches include efficient, centralised communication channels, effective leadership, cohesive collaboration across all levels of government, well-prepared and adaptive infectious disease plans and stringent test and trace systems. These trust-generating actions lead to a higher level of population participation in proposed restrictive measures such as quarantine, travel restrictions or stay-at-home instructions, physical distancing and face mask wearing. Where there is a perceived discrepancy between competing interests, or an assumption that people are not prioritised, as in the example of prioritising economic safety over population health (Mainous 2020), reciprocal trust cannot prevail. It cannot prevail in such circumstances because the authorities' lack of trustworthiness in taking actions to prioritise public health sends a message to the population that engagement with any proposed measures to contain the spread of infections is not crucial, consequently leading to behavioural changes and a business-as-usual mindset. It is reasonable to postulate further that the population might form the perception that they have to take matters into their own hands to protect themselves, because they can no longer trust that the authorities have their best interests at heart in recommended policy actions. Where there is a lack or absence of reciprocal trust, the population's participation in pandemic management measures will plummet or the population will disengage, leading to social and healthcare costs such as increased hospitalisations, deaths and long-term mental health consequences. The socially disadvantaged may be more likely to bear the brunt of the social and economic consequences, which will then spiral further, affecting their longer-term recovery.
Reciprocal trust has a circular effect, with the potential to shape the lived experience of people under pandemic conditions. People are already experiencing transformations in how they socialise, work, communicate, shop, travel, sleep, live and make decisions, big or small. Consequently, reciprocal trust between the authorities and the population, and amongst the population, is indispensable in pandemic management, not least because the pandemic entails social, personal and economic consequences, but also because of the further unknowns that are likely to develop along the continuum of the pandemic. Authorities have to confront both cognitive and psychosocial factors affecting the population's inclination to comply with restrictions (Prati et al. 2011; Hendy 2020). For example, South Korea's success in curbing the COVID-19 pandemic was far from an overnight effort but a sustained endeavour built on previous outbreaks such as MERS and SARS, where shortcomings foregrounded by public criticism of the mishandling of those outbreaks led the authorities to revamp their approaches to managing infectious diseases (Ragavan 2020; Thompson 2020). Pandemic response measures demonstrate that, to promote reciprocal trust, the authorities with clear responsibility for making decisions in times of public health crisis must convey a clear message by taking actions that protect public health while accommodating population needs, thus gaining the trust of the people, who are then more likely to collaborate and reciprocate through participation and behavioural changes despite temporary inconvenience (Kim 2020; Choi 2020; Park 2020; Fouser 2020). Reciprocal trust thus generates greater willingness for united action (Siegrist and Zingg 2014; Roy 2020) and galvanises the population to act in solidarity against the common COVID-19 threat (Libal and Kashwan 2020).
The process of achieving reciprocal trust is dynamic, not free from values, and liable to constant renegotiation between the stakeholders as the trajectory of the pandemic develops. Values relevant to reciprocal trust in a pandemic context include honesty, reliability, competence and citizenship, which highlight the potential conflict of competing interests between the authorities and the population vis-à-vis implementing and complying with temporary restrictive measures. An example illustrating the need for constant renegotiation in a relationship that embodies reciprocal trust is the implementing and lifting of restrictions on what people can and cannot do. The public want to know the reasons for the restrictions and the length of time for which they are imposed, and the authorities, having the power to impose them, must execute these measures in a reliable and competent manner. Two key approaches, exemplified by countries that have successfully managed the pandemic, contribute to promoting reciprocal trust: psychological safety in the population generated by effective leadership; and clear, consistent public health communication resulting in reciprocity, cooperation and collaboration for the mutual benefit of population and authorities.
Reciprocal Trust Between the Authorities and Population
Reciprocal trust underpins the success of key strategies such as rigorous mass testing, quarantine, face mask wearing and other measures, owing to the trusting relationship that exists in collaborative pandemic management. Psychological safety is an influential factor in achieving reciprocal trust when managing population behaviour during pandemics. The pandemic generates various health and psychological anxieties, ranging from fear of being infected by the virus to worries about economic security and restricted social movement, consequently creating various emotional and behavioural responses (Sauer et al. 2020). Creating psychological safety in the population during the pandemic is akin to helping people regulate their emotional experiences as they navigate daily lives that are drastically transformed by a pandemic. There are psychological and behavioural consequences, such as depression, during pandemics, where the population face limited opportunities to restore emotional and psychological resources due to competing demands (Restubog et al. 2020). Consequently, it becomes important to attend to these emotional experiences in order to minimise the adverse emotions that give rise to depression or other negative psychological responses. Psychological safety is most established in the fields of organisational culture, behaviour and management (Schein and Bennis 1965; Frazier et al. 2017). It is characterised as a means of minimising anxiety, or managing the cognitive state of workers, within organisational cultures so as to create an environment that encourages positive changes in engagement with work (Schein and Bennis 1965; Frazier et al. 2017). Anxiety, which can arise from vulnerability and lack of trust, creates low psychological safety (Frazier et al. 2017). Other factors that affect a worker's level of psychological safety include leadership, interpersonal relationships, group dynamics and norms, with positive correlations between psychological safety and good relationships with leaders (Frazier et al. 2017, 117, 140). Recently, the concept of psychological safety has been applied in the context of promoting continuity of learning at schools during the pandemic, where school leaders are encouraged to harness positive psychological safety amongst workers so that they may respond effectively to changing modes of learning and teaching in times of crisis (Weiner et al. 2020).
Although the concept of psychological safety is primarily contextualised in the workplace environment, it has relevance to pandemic management, particularly in its leadership aspect, which illuminates the relationship between the management (e.g. authorities and governments, broadly construed) and those who are being managed (the population). Considering the factors that promote workers' engagement with their roles in workplaces is similar to identifying the important traits that support population engagement with the collective efforts advocated by the authorities. The authorities, as the object of and foundation for the population's trust, should take care to support positive psychological safety in the population through trustworthy actions, with the aim of promoting reciprocal trust. For example, in a healthcare setting, care providers "can create trust by providing situational normality or structural assurances" within complex organisational structures and systems (Peters and Bilton 2018, 334). Healthcare providers often embody norms and values that represent trustworthiness, amongst others, in realising their responsibilities of care to patients. Similarly, the authorities manifest certain values and interests in exercising, in a competent manner and in the public interest, the responsibility that the population have come to expect of them. This aspect is relevant in generating positive or negative psychological responses, such as confidence or fear, and in helping the population to minimise negative emotional responses arising from uncertainties and to accept those uncertainties despite the lack of control or knowledge (337).
It can reasonably be postulated that the authorities initially hold a "reservoir" of trust that is implicitly reposed in them by the population as they fulfil their functions and discharge their responsibilities in ordinary times as well as in times of crisis. Generally, even if the population do not completely trust the authorities (as is normal in most democratic societies), there is a minimum level of trust that the authorities will do the right thing during a health crisis. Therefore, even people who hold some distrust towards the authorities may be willing to give them the benefit of the doubt and to trust them, adjusting their ensuing behaviour, including whether to reciprocate that trust, as interactions continue. This phenomenon is consistent with the notion that trust is not a given but must be earned and sustained in order to stimulate reciprocal trust. The authorities usually draw from these implicit reserves of trust until the reserves are depleted, at which point the population may no longer reciprocate their trust. The authorities would be wise to keep adding to the reservoir of trust, before it is depleted, through trustworthy actions. Where the population perceive trustworthy actions, they will reciprocate their trust, especially when positive outcomes arise from effective pandemic management. A deficit in trust is unlikely to promote reciprocal trust.
The authorities ought to be mindful of evidence of change in their relationship with the population in the process of promoting reciprocal trust. While most public services are perceived as trustworthy, citizens have lower trust in public administrative bodies or bureaucrats, possibly influenced by their experience of the delivery of public services, their engagement in the process, and their own attitudes towards authorities (Raaphorst and Van de Walle 2018, 470). Trust or distrust in public authorities is manifested in various ways, from voicing frustrations online or in physical protests to withdrawing from participation in governmental arrangements (such as refusing to send children to school or refusing vaccinations) (Raaphorst and Van de Walle 2018, 471, 472). Likewise, the authorities' trust in citizens is demonstrated through the level of public monitoring and enforcement of actions. Translated to the pandemic setting, this would apply in the context of compliance with essential-travel-only rules or face mask wearing.
The pandemic tests the boundaries of vulnerability and mutuality of interests. It exposes the weaknesses in existing pandemic management infrastructure and the tensions that arise from competing priorities, while casting light on shared interests and purpose. The evolving scientific evidence surrounding COVID-19 calls for a certain leap of faith between the authorities and the population, given the many uncertainties in managing the pandemic. These uncertainties deplete resilience and are the antithesis of safety and assurance (Killgore et al. 2020). There is no reciprocity when people are uninformed, where information is suppressed, or where people's questions remain unanswered. In this climate, psychological safety is breached as fear is not assuaged; uncertainties grow, culminating in refusal to comply with measures directed at curbing the transmission of infections (Melimopoulos 2021).
Replacing fear with genuine reassurance is therefore critical at key stages of pandemic management. Such assurance can be derived from proactive, concrete pandemic preparedness actions that are accessible to the population, so that reciprocal trust can be generated and behavioural changes promoted. These actions range from having functional testing centres in place and sufficient supplies of personal protective equipment to economic packages that support those facing temporary unemployment or the costs of quarantine (Lee et al. 2020). Such measures demonstrate that the authorities prioritise the people in coping with the disruptions the pandemic has brought to their lives (Marszalek 2020). An example is the infrastructure readiness in South Korea and Taiwan, where pandemic management resources were in place in advance of public announcements or the dissemination of key information (Choi 2020; Ahn 2020). This infrastructure readiness is especially vital in assuring the public about the state of transmission unfolding before their eyes, what is being done and what is required from them, leading to reciprocity in trust and actions. When the public perceive that their expectations of pandemic relief resources are largely met, such as the readiness of quarantine support and information, they are more likely to reciprocate their trust through compliant actions. Public anxiety is likely to be quelled or minimised by the authorities' openness, which in turn promotes public confidence in the adopted approach. South Korea implemented strict, mandatory quarantine with fines or deportation for violations (Thompson 2020). When the public perceived that breaches of restrictions were not tolerated, for example when health officials enforced quarantine monitoring, people's psychological and behavioural responses shifted (Yeh and Cheng 2020). These constructive measures instilled reciprocal trust in the governance relationship, unlike in some countries where confidence in the authorities is low compared with South Korea and Taiwan (McPhillips 2020; Fancourt et al. 2020; Helm 2020; Hsieh and Child 2020). It is clear that where the population have confidence in the infrastructure prepared for managing the pandemic, participation and collaboration in the collective effort to contain it are more likely.
Clear, consistent, timely, accurate and open communication contributes to population assurance and psychological safety, which translates into reciprocal trust. Clear communication is related to competency, which encompasses effective messaging and listening skills in responding to an ever-changing situation and judging the best course of action. These may seem elementary, but they are especially crucial in supporting reciprocal trust, leading to a sense of solidarity in managing the pandemic. Truth-telling engenders reciprocal trust, and its importance cannot be overstated. The proliferation of information across multiple social media platforms enables comparison of infection rates and deaths with other countries, leading to confirmation of, or doubts about, the veracity of official advice and the appropriateness of measures taken, and subsequently promoting or hindering compliance with safety measures (Wong and Jensen 2020). It is hence essential to avoid sudden, unexplained reversals of approach or inexplicable reasoning for adopting particular strategies. Timely communication requires competency in conveying information in a way that manages public "panic and fear", and a preparedness to ask the public for help in reciprocating the actions taken to confront the virus (Ragavan 2020; McPhillips 2020). Increased voluntary cooperation from the population in turn reinforces the authorities' efforts.
Recent research similarly supports the approach of providing timely, accurate and contextualised information to the population to allay the anxiety arising from the uncertainties of future dangers, including information about "the expected outcome of different approaches, vaccine development and prevalence of infections", followed by clear justifications for adopting specific approaches (Blanco et al. 2020, 2758). Comparisons tend to be made where approaches vary, and the chosen course of action should be clearly explicated to enable the population to understand the basis of these choices between competing socio-economic interests. An understanding of these choices is likely to cultivate social support and solidarity in enduring individual sacrifices during the pandemic (Blanco et al. 2020). Actions that promote reciprocal trust are also likely to enable health authorities to gain cooperation from marginalised communities in identifying potentially vulnerable groups and their specific needs through tailored communication of risks and public health strategies (Henderson et al. 2020). This includes paying attention to marginalised societal groups, including single parents and those in lower socio-economic and migrant groups, in managing the pandemic.
Effective, competent leadership promotes psychological safety in the population and engenders reciprocal trust in the relationship. The state shoulders the main responsibility for organising and coordinating containment efforts, owing to its resources and access to information. Consequently, states have to persevere in establishing trust in order to obtain reciprocal trust from the population. An example of effective leadership translating into reciprocal trust is South Korea, where lockdown and its attendant socio-economic consequences were averted. Taiwan similarly demonstrated centralised leadership, voluntary participation and an effective network of monitoring and coordination (Yeh and Cheng 2020; Lee et al. 2020; Hsieh and Child 2020). Positive, tangible outcomes arising from proactive strategies lead to increased collaboration that contains the tide of infection. An example is the effective intervention to address the face mask shortage, which calmed population anxiety (Moon 2020; Ahn 2020). This may seem an oversimplification, given other important contributions that shaped compliance with restrictions (such as financial, social and psychological incentives) (Choi 2020; Yeh and Cheng 2020; Prati et al. 2011); however, perceptions of competent, coordinated leadership are likely to be clear indicators of strengthening reciprocal trust between the authorities and the population. The circular effect between effective pandemic management and reciprocal trust is significant in gauging and shifting public behaviour. Agile, adaptive leadership, in contrast to lagging, ill-prepared management, creates the desired collaborative effects (Moon 2020). Reciprocal trust is severely lacking where behavioural responses manifest resistance towards measures such as mask wearing, lockdown and quarantine, and where there are objections to vague and inconsistent implementation of rules. An open, interactive approach between the authorities and the population towards building consensus on solutions (Hsieh and Child 2020) fosters public reciprocity, demonstrating a shared purpose and collective endeavour in breaking the chain of transmission (Wilson 2020; Cairney and Wellstead 2020). These attributes help maintain the reliability of the authorities (Henderson et al. 2020), strengthening reciprocal trust.
Comparison is never far from the gaze of the population as the world becomes more connected than ever before. The World Health Organization is perceived as the guiding beacon in responding to public health threats through its role in preparing for, preventing, protecting against and detecting risks of outbreak during health emergencies. The availability of international standards in managing infectious diseases, such as supporting technical guidance (WHO 2020) and the International Health Regulations, provides a sense of assurance for the population when the authorities are regarded as following established practices, thus influencing population perceptions of leadership and making compliance more likely. Where actions produce desirable outcomes, they increase reciprocity because trust levels remain high. The nature of the pandemic lends itself to heightened vulnerability arising from various risks to life and curtailments of rights. Such reciprocity may be tempered by time, culture and specific contexts; however, reciprocal trust is underpinned by an acknowledged, if potential, vulnerability and by faith in the authorities taking appropriate actions. Such actions include effective coordination between local authorities and other stakeholders, exemplified by hospitals arranging referrals "even in the absence of legal framework" (Choi 2020). This demonstrates increased reciprocity, beyond the existing level in the relationship, towards achieving a zero-outbreak target.
Reciprocal Trust Amongst the Population
Reciprocal trust between the authorities and the population can translate into reciprocal trust amongst the population, from individuals to communities. The reciprocal trust that exists between the authorities and the population is dynamic and has a galvanising effect, continuing to influence the attitudes and behaviours of the population. While reciprocal trust at the authorities-population level is not necessarily a prerequisite for reciprocity at the population level, pandemic management strategies are more likely to be carried out collaboratively, with resourcing needs identified at each level of the population, consequently enabling a virtuous cycle of exchanges. Reciprocal trust that initially moves vertically (between authorities and population) then extends horizontally amongst the population, as the population becomes bound by shared goals, common interests and supportive strategies for managing the pandemic together. Taiwan is a case that demonstrates collaborative efforts amongst the population (Schwartz and Yen 2017). There is less of an "I" and more of a "we" and "us" in complying with temporary restrictive measures. Reciprocal trust within the population is important to galvanise the different segments of society with varying needs and vulnerabilities. Collaborative participation enables a "better understanding of local conditions, vulnerabilities and capacities and better allocation of resources" (Schwartz and Yen 2017, 127), resulting in heightened reciprocity amongst the population. Reciprocity amongst the population consists in trusting that people will comply with restrictions, such as isolating upon arrival from specified countries supported by monitoring and enforcement (New Zealand), and wearing masks and physical distancing (South Korea, Taiwan, Singapore). Where such trust exists, stockpiling of essential items and the need for rationing measures are less likely to occur (Moon 2020). The presence of reciprocal trust can be seen in people taking increased responsibility for reporting and updating information about their health conditions through digital reporting of COVID-19 symptoms, and in efficient contact tracing supported by robust technological infrastructure (Choi 2020). This will be effective where the broader layer of reciprocal trust between the authorities and the population is in place.
Public support is central to ensuring compliance, as it is impracticable to expect authorities to have the capacity to manage quarantined populations alone (Choi 2020). Reciprocal trust potentially creates a more engaged population acting consistently with restrictive measures. It can elevate common interests over individual rights. For example, the active deployment of test, track and trace can only work with reciprocal trust and actions from the population. Reciprocity is accordingly closely tied to the success of public health measures. The strategies of "be right, be first, build trust, express empathy and promote action" are steps that instil reciprocity (You 2020).
These steps demonstrate the trustworthiness of the authorities, which in turn generates reciprocal changes in the population's attitudes to, and interactions with, the authorities over the course of the pandemic. As COVID-19 mutations continue to occur, combined with the imminence of successive waves of infection, it is essential and timely to support reciprocal trust by implementing effective approaches and to inform policy decision-making for current and future pandemic preparations.
Conclusion
COVID-19 has increased the significance of reciprocal trust in pandemic management. It brings to light vital ethical practices that challenge current ways of thinking about the relationship between the authorities and the population, and amongst the people. It prompts us to reconsider the role of reciprocal trust in our ethical outlook through the experience of dealing with COVID-19. Psychological safety encourages reciprocal trust amongst people and vice versa, with the population and the authorities playing their distinct roles in the reciprocal relationship while being offered the opportunity to reconcile competing interests. The authorities need to appreciate the dynamic nature of the social agreement between them and the population, underscoring the significance of reciprocal trust as a continuous process requiring constant negotiation and readjustment of actions. While not all strategies can be replicated owing to differences in socio-legal structures, active steps that support reciprocal trust are necessary in whatever measures the authorities take. These are illustrated by steps that represent coordinated, collaborative action, and by approaches that enable prompt feedback and flexible, open responses. Reciprocal trust can help gauge public views of the actions taken and how to improve them in containing COVID-19 transmission. Clarity in pandemic management strategies, and consistent, streamlined actions, strengthen reciprocal trust. Early, proactive measures underpinned by reciprocal trust can pre-empt harsh, successive lockdowns, which, over time, create weariness and distrust, detrimentally affecting compliance. Consequently, strengthening reciprocal trust is one of the lessons learned from this pandemic.
"Medicine",
"Philosophy"
] |
A Comprehensive Review on Image Dehazing
Haze is a challenging problem that degrades the quality of digital images. It adversely affects military and civil systems, surveillance, and other fields. Image dehazing can provide the best solution to enhance these images. Nowadays, deep learning methods have progressed in the field of image dehazing, and various studies have been introduced for the removal of haze. This paper reviews several works that deal with image dehazing of daytime and nighttime images.
INTRODUCTION
Haze is one of the most critical problems in the areas of image processing and computer vision. Under hazy conditions, the quality of digital images worsens: haze changes colors, reduces contrast, and diminishes the visibility of scenes. It threatens the reliability of many applications, such as outdoor surveillance, object detection, outdoor photography, automatic image cropping and thumbnailing, content-aware resizing, video compression, graphics rendering, and art. Haze also decreases the clarity of satellite images and underwater images.
Haze often occurs when dust and smoke particles absorb and scatter light in relatively dry air. When atmospheric conditions block the dispersal of smoke and other pollutants, they concentrate and form a low-hanging shroud that damages visibility. Subduing haze is a very challenging task in image processing. The removal of haze from an image is known as image dehazing. There are two different types of dehazing: daytime and nighttime. Many methods exist for daytime dehazing. The daytime haze model is a linear equation involving the transmission map and the atmospheric light; to produce a good daytime dehazed image, the corresponding atmospheric light and transmission map must be estimated. Apart from daytime dehazing, nighttime dehazing is also a relevant topic.
In the case of nighttime dehazing, the atmospheric light is not globally uniform. Unlike the daytime case, illumination comes from multiple sources such as car lights, street lights, and neon lights. The figure gives a detailed illustration of the standard daytime and nighttime models. The daytime model assumes a uniform global light source, usually the sun: the camera captures the atmospheric light as well as the light reflected from the object, and since there are no other light sources, no extra glow term is added to the camera image.
Apart from the direct transmission, the nighttime haze model includes additional light sources, so a glow term appears in the camera's image. The glow results from the multiple scattering of light from these sources in irregular directions.
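Stated concretely, the two models just described can be sketched as follows. This is a minimal illustration of the standard formulations rather than any single paper's code; the array shapes are our assumptions.

    import numpy as np

    def daytime_hazy(J, t, A):
        # Standard daytime model: I(x) = J(x) * t(x) + A * (1 - t(x)),
        # with scene radiance J (H, W, 3), transmission map t (H, W),
        # and a single global atmospheric light A (3,).
        t = t[..., None]
        return J * t + A * (1.0 - t)

    def nighttime_hazy(J, t, A_local, glow):
        # Nighttime variant: the atmospheric light A_local (H, W, 3)
        # varies per pixel, and an additive glow term models multiple
        # scattering around artificial light sources.
        t = t[..., None]
        return J * t + A_local * (1.0 - t) + glow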
II. RELATED WORKS
Yu Li [1] introduced a haze model suitable for varying light sources and their glow. As mentioned, this model consists of the atmospheric light and the transmission map, and also includes a glow term. The input glow image is separated into glow and glow-free layers by solving a quadratic optimization problem, and further processing is done on the glow-free image. Estimating the atmospheric light and the transmission map is the main procedure in this method. The method is simple and cost-effective, but it does not produce notably better dehazing results.
Cosmin Ancuti [2] contributed a fusion process, a single-image approach used to enhance nighttime hazy images. It operates on patches of the image rather than the entire image, and uses several inputs derived from the original hazy image. The first input is computed using a small patch size, thereby preventing the airlight estimate from being corrupted by multiple light sources. The second input is computed using larger patches; it improves the global contrast since it removes a significant fraction of the airlight. The third input is the discrete Laplacian of the original image, used to reduce glow effects. These derived inputs ensure that the finest details are transferred to the fused output.
Three weight maps give greater emphasis in the fusion process to regions of high contrast or high saliency. The local contrast weight identifies the amount of local variation in each input and is computed by applying a Laplacian filter to the luminance of each processed image. This measure has been used in applications such as tone mapping and assigns high values to edge and texture variations. The saturation weight map controls the saturation gain in the output image.
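To make the first two weight maps concrete, a minimal sketch (our own illustration, not the authors' code) could compute them as follows:

    import numpy as np
    from scipy.ndimage import laplace

    def local_contrast_weight(img):
        # Absolute Laplacian of the luminance channel: assigns high
        # values to edge and texture variations, as described above.
        lum = img @ np.array([0.299, 0.587, 0.114])
        return np.abs(laplace(lum))

    def saturation_weight(img):
        # Per-pixel standard deviation across RGB channels: a common
        # proxy for saturation used to control the saturation gain.
        return img.std(axis=2)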
The main goal of the fusion process is to produce a better output image by blending the derived inputs with the specified weight maps, which are designed to preserve the most significant features of the image. The advantages of this method are simplicity and computational efficiency. However, the fusion process can lead to annoying halo artifacts, mostly at locations with strong transitions in the weight maps; such unpleasing artifacts can be overcome by using a multiscale Laplacian decomposition.
Jing Zhang [3] proposed a new imaging model for reducing nighttime haze, together with a novel, efficient dehazing method with illumination estimation for nighttime haze conditions.
Based on this imaging model, the dehazing method contains three steps: light compensation, color correction, and dehazing. The first step (light compensation) estimates the light intensity; for further enhancement, a gamma correction is applied to the light intensity to balance the overall illumination of the image. The next step (color correction) estimates the color characteristics of the incident light. Finally, haze is removed in the dehazing step by using the dark channel prior to estimate the pointwise environmental light. Compared with other methods, this approach achieves illumination-balanced, haze-free, and low-noise results. Moreover, it renders object colors well under the estimated light.
Jing Zhang [4] introduced the maximum reflectance prior, a core idea for addressing haze removal from a single nighttime image, even in the presence of multicolored and non-uniform illumination. The standard model is appropriate for daytime dehazing: the global atmospheric light is assumed to be the only light source in the daytime haze environment, and the attenuation and scattering characteristics are identical for each channel. However, nighttime scenes usually have multiple colored artificial light sources, resulting in strongly non-uniform and varicolored ambient illumination.
Therefore, the local ambient illumination is added into both the attenuation term and the scattering term of the standard hazy imaging model to obtain the nighttime hazy imaging model. This model is entirely different from the model of Li et al. [1].
The aim is to estimate the ambient illumination and the transmission for each pixel in order to recover the haze-free image at nighttime. The maximum reflectance prior estimates the color map of the ambient illumination and removes its effect from the image being processed. The intensity of the varying illumination and the transmission are then estimated, the haze effect is removed, and finally a color-balanced, haze-free image is obtained. There are, however, some failure cases: color distortions appear in regions of grass and leaves, mainly because the maximum reflectance prior does not hold in these regions.
Minmin Yang [5] proposed a superpixel-based method to remove haze from a single nighttime hazy image. This method is a revamped version of [1]. The input night image contains a glow, which is decomposed into glow and glow-free images by solving a quadratic optimization problem. The superpixel-based algorithm is then applied to the nighttime glow-free image. Two components must be identified in the proposed algorithm: the atmospheric light and the transmission map. The nighttime glow-free hazy image is divided into superpixels using the SLIC algorithm, and the brightest pixel intensity in each superpixel is regarded as that superpixel's atmospheric light. The transmission map is estimated through the dark channel prior: the dark channel of the hazy image is decomposed into two layers (base and detail), and the transmission map is computed from the base layer. The weighted guided image filter (WGIF) is used for this decomposition and for reducing morphological artifacts in the image. To avoid noise in the sky region, a threshold is applied to the resultant transmission map; the haze is then removed. However, segmenting glow-free nighttime hazy images into superpixels increases the algorithm's complexity.
Pei and Lee [6] introduced a method based on color transfer processing. The color transfer is applied to the nighttime hazy input image, uniformly shifting the airlight color towards gray; in effect, the colors of a nighttime hazy image are mapped to those of a daytime hazy image. A refined dark channel prior method is then applied to the resulting images to eliminate the haze, and the Bilateral Filter in Local Contrast Correction (BFLCC) is used to enhance contrast. After these procedures, the results are appealing haze-free images. A dark channel prior based algorithm is adopted to estimate the transmission map, and a post-processing step improves the inadequate brightness and low overall contrast of the haze-free image. Although the method has reliable dehazing quality, the color of the whole haze-free image looks unnatural due to the color transfer.
Wencheng Wang [7] introduced a fast algorithm for single image dehazing based on a linear transformation, involving only linear operations. It is assumed that a linear relationship exists between the hazy image and the haze-free image. The method is divided into three steps. First, the atmospheric light is estimated through a grayscale transformation (0-255) and a channel method based on quad-tree subdivision, guided by the ratio of grays and gradients in each region. Second, the transmission map is estimated using a linear transformation model with low computational complexity: a rough transmission map is calculated from the minimum color channel and identified by the linear transformation algorithm. Third, Gaussian blurring is used to refine the rough transmission function. Once the atmospheric light and the transmission map are estimated, the scene radiance can be recovered through the atmospheric scattering model. Experimental results show that this method avoids saturation and halo effects.
Zheng Guo Li [8] contributed a new globally guided image filter (G-GIF) to overcome the problems of the guided image filter (GIF) and the weighted guided image filter (WGIF). The G-GIF consists mainly of a global structure transfer filter and a global edge-preserving smoothing filter. In this paper, G-GIF is used to study nighttime haze removal, based on the minimal color channel and the dark channel prior. The dark channel is decomposed into a base layer and a detail layer via G-GIF, and the structure of the base layer is compared with the structure of the hazy image to avoid morphological artifacts. The atmospheric light is obtained using a hierarchical searching method based on quad-tree subdivision. Once the transmission map is estimated via G-GIF, the image can be restored. The inputs of the G-GIF are an image to be filtered and a guidance vector field, which defines the structure; the minimal channel preserves the structure of the hazy image better, so it is selected to generate the guidance vector field. Experimental results show that images dehazed by the G-GIF are sharper than those produced by the GIF and WGIF. Zhengguo Li [9] proposed a similar method to [8]: an edge-preserving algorithm estimates the transmission map based on the concepts of the minimal color channel and a simplified dark channel [8]. The main difference is that the simplified dark channel is used to reduce the variation of the direct attenuation. The simplified dark channel of the hazy image is decomposed into a base layer and a detail layer via an existing edge-preserving smoothing technique. This method applies to hazy images, underwater images, and normal images without haze.
Yi-Hsuan Lai [10] introduced theoretical and heuristic bounds on scene transmission to guide the well-known dark channel prior (DCP) of haze-free images, justified by the theoretical bound. Two scene priors, covering scene radiance and scene transmission, constrain the solution space; these are used to formulate a constrained minimization problem solved by quadratic programming.
Kaiming He [11] proposed the novel dark channel prior for image haze removal. The dark channel prior observes that, in most local regions that do not cover the sky, some pixels (dark pixels) frequently have very low intensity in at least one of the color (RGB) channels.
The intensity of these dark pixels in that channel is contributed mainly by the airlight, so the dark pixels can directly provide an accurate estimate of the haze transmission. The dark channel prior is not a good option for sky regions; fortunately, the color of the sky is usually similar to the atmospheric light in a hazy image, and since the sky is infinitely distant it tends to have zero transmission. A soft matting algorithm is used to refine the transmission. The next step is to estimate the transmission map and the atmospheric light in order to recover the scene radiance.
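The estimation pipeline described above can be summarized in a short sketch (our illustration of the standard dark channel prior recipe, with typical parameter values; the soft matting refinement is omitted):

    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, patch=15):
        # Per-pixel minimum over RGB, then a local minimum over a patch.
        return minimum_filter(img.min(axis=2), size=patch)

    def atmospheric_light(img, dark, top=0.001):
        # Average the hazy image over the brightest 0.1% of dark-channel pixels.
        n = max(1, int(dark.size * top))
        idx = np.argsort(dark.ravel())[-n:]
        return img.reshape(-1, 3)[idx].mean(axis=0)

    def transmission(img, A, omega=0.95, patch=15):
        # t(x) = 1 - omega * dark(I(x) / A); omega < 1 keeps a trace of
        # haze so distant scenes still look natural.
        return 1.0 - omega * dark_channel(img / A, patch)

    def recover(img, t, A, t0=0.1):
        # J(x) = (I(x) - A) / max(t(x), t0) + A, with a floor t0 to avoid
        # amplifying noise where the transmission is near zero.
        t = np.clip(t, t0, 1.0)[..., None]
        return (img - A) / t + A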
Boyi Li et al. [12] proposed AOD-Net, an end-to-end framework for dehazing: the input is a hazy image and the output is a dehazed image. Compared with other deep learning methods, AOD-Net is very simple. Its main contribution is that it estimates the atmospheric light and the transmission map in a simplified, unified manner; it is not at all complex. There are two modules: a K-estimation module and a clean image generation module. Training minimizes the MSE between the restored image and the corresponding clean image. A noted drawback concerns antihalation: light passing through the emulsion on a film or plate is not reflected back into it but is absorbed by a layer of dye or pigment, usually on the back of the film.
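For reference, the reformulation at the heart of the K-estimation module can be stated compactly. The snippet below restates the equation from the AOD-Net paper as we understand it, not the network code itself:

    def aod_net_restore(I, K, b=1.0):
        # AOD-Net folds the transmission map t(x) and the atmospheric
        # light A into one learned variable K(x), so the clean image is
        #   J(x) = K(x) * I(x) - K(x) + b,
        # where b is a constant bias (b = 1 in the paper). The K-estimation
        # module is the network that predicts K(x) from the hazy input.
        return K * I - K + b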
Bolun Cai [13] proposed DehazeNet, a trainable end-to-end CNN system. The input is a hazy image and the output is a medium transmission map. Feature extraction, multi-scale mapping, local extremum, and nonlinear regression are used to estimate the medium transmission map. The Maxout unit in the feature extraction layer maximizes over feature maps to extract haze-relevant features from hazy images. BReLU, an activation function used instead of ReLU or Sigmoid, provides bilateral restraint and local linearity, which suits image restoration and reconstruction. DehazeNet is a lightweight architecture that increases efficiency and restores haze-free images. Sky regions in hazy images are a difficult case because sky and haze show similar appearances according to the atmospheric scattering model; DehazeNet tries to reduce antihalation, which is an appreciable task.
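As a small illustration of the activation just described, a BReLU can be written in a few lines (our sketch; DehazeNet's actual implementation may differ in details):

    import numpy as np

    def brelu(x, t_min=0.0, t_max=1.0):
        # Bilateral ReLU: linear inside [t_min, t_max], clipped outside.
        # The two-sided restraint matches the bounded range of a
        # transmission map, which plain ReLU or Sigmoid enforce less well.
        return np.clip(x, t_min, t_max)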
Jinjiang Li [14] proposed a residual deep CNN for dehazing that is more efficient and less error-prone. CNNs have proven very effective for image dehazing, and architectures in this field have continually evolved; the residual deep CNN continues this line of development. The network is divided into two phases. In the first phase, the transmission map is estimated by minimizing the loss function between the reconstructed transmission map and the corresponding ground-truth map. The transmission-estimation subnetwork comprises six layers: a convolution layer, a slice layer, an element-wise operation layer, a multi-scale convolution layer, a max-pooling layer, and a final convolution layer. In the second phase, a clear image is obtained using the residual network.
Batch normalization is used to increase learning speed, and the design combines convolution with the ReLU activation function, batch normalization, and residual network theory. It is a fast, efficient dehazing CNN model based on residual error, though more layers mean higher cost.
Wenqi Ren [15] proposed a multi-scale CNN to learn effective multi-scale features. There are two networks: a coarse-scale network and a fine-scale network. The coarse-scale network predicts a holistic transmission map from the entire image; the fine-scale network then refines the dehazed results locally.
III. CONCLUSIONS
Haze removal methods have become increasingly useful for many image processing and computer vision applications, including surveillance, remote sensing, underwater imaging, and photography. Most methods are based on estimating the atmospheric light and the transmission map. This paper presented a review of several papers on image dehazing and the haze removal techniques they address. Table 1 shows the reference number of each paper and its accuracy.
"Engineering",
"Computer Science",
"Environmental Science"
] |
Gym-ANM: Open-source software to leverage reinforcement learning for power system management in research and education
Gym-ANM is a Python package that facilitates the design of reinforcement learning (RL) environments that model active network management (ANM) tasks in electricity networks. Here, we describe how to implement new environments and how to write code to interact with pre-existing ones. We also provide an overview of ANM6-Easy, an environment designed to highlight common ANM challenges. Finally, we discuss the potential impact of Gym-ANM on the scientific community, both in terms of research and education. We hope this package will facilitate collaboration between the power system and RL communities in the search for algorithms to control future energy systems.
Introduction
Active network management (ANM) of electricity distribution networks is the process of controlling generators, loads, and storage devices for specific purposes (e.g., minimizing operating costs, keeping voltages and currents within operating limits) [1]. The modernization of distribution networks is taking place with the addition of distributed renewable energy resources and storage devices. This attempt to transition towards sustainable energy systems leaves distribution network operators (DNO) facing many new complex ANM problems (overvoltages, transmission line congestion, voltage coordination, investment issues, etc.) [2].
There is a growing belief that reinforcement learning (RL) algorithms have the potential to tackle these complex ANM challenges more efficiently than traditional optimization methods. This optimism results from the fact that RL approaches have been successfully and extensively applied to a wide range of fields with similarly difficult decision-making problems, including games [3,4,5,6], robotics [7,8,9,10], and autonomous driving [11,12,13].
What games, robotics, and autonomous driving all have in common is that the environment in which the decisions have to be taken can be efficiently replicated using open-source software simulators. In addition, these software libraries usually provide interfaces tailored for writing code for RL research. Hence, the availability of such packages makes it easier for RL researchers to apply their algorithms to decision-making problems in these fields, without needing to first develop a deep understanding of the underlying dynamics of the environments with which their agents interact.
Put simply, we believe that ANM-related problems would benefit from a similar amount of attention from the RL community if open-source software simulators were available to model them and provide a simple interface for writing RL research code. With that in mind, we designed Gym-ANM, an open-source Python package that facilitates the design and the implementation of RL environments that model ANM tasks [14]. Its key features, which differentiate it from traditional power system modeling software (e.g., MATPOWER [15], pandapower [16]), are:
• Very little background in power system modeling is required, since most of the complex dynamics are abstracted away from the user.
• The environments (tasks) built using Gym-ANM follow the OpenAI Gym interface [17], with which a large part of the RL community is already familiar.
• The flexibility of Gym-ANM, with its different customizable components, makes it a suitable framework to model a wide range of ANM tasks, from simple ones that can be used for educational purposes, to complex ones designed to conduct advanced research.
Finally, as an example of the type of environment that can be built using Gym-ANM, we also released ANM6-Easy, an environment that highlights common ANM challenges in a 6-bus distribution network.
Both the Gym-ANM framework and the ANM6-Easy environment, including detailed mathematical formulations, were previously introduced in [14]. Here, our goal is to provide a short practical guide to the use of the package and discuss the impact that it may have on the research community.
The Gym-ANM package
The Gym-ANM package was designed to be used for two particular use cases. The first is the design of novel environments (ANM tasks), which requires writing code that simulates generation and demand curves for each device connected to the power grid (Section 2.1). The second use case is the training of RL algorithms on an existing environment (Section 2.2).
Design a Gym-ANM environment
The internal structure of a Gym-ANM environment is shown in Figure 1. At each timestep, the agent passes an action a_t to the environment. The latter generates a set of stochastic variables by calling the next_vars() function, which are then used along with a_t to simulate the distribution network and transition to a new state s_t+1. Finally, the environment outputs an observation vector o_t+1 and a reward r_t through the observation() and reward() functions.
The core of the power system modeling is abstracted from the user in the next_state() call. The grey blocks, next_vars() and observation(), are the only components that are fully customizable when designing new Gym-ANM environments.
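As a concrete illustration of the two customizable blocks, a new task might be defined by overriding just next_vars() and observation(). The sketch below is hypothetical: the base class name ANMEnv, the import path, and the exact method signatures are assumptions inferred from the description above rather than confirmed API.

    import numpy as np
    from gym_anm import ANMEnv  # assumed base class and import path

    class MyANMEnv(ANMEnv):
        """Toy task: one load and one solar generator with fixed daily curves."""

        def next_vars(self, s_t):
            # Stochastic variables for the next transition: here, a
            # deterministic 24-step daily profile plus small noise.
            # We assume (hypothetically) that the time index is the last
            # entry of the state vector s_t.
            hour = int(s_t[-1]) % 24
            demand = -10.0 - 2.0 * np.sin(2 * np.pi * hour / 24)
            max_gen = max(0.0, 5.0 * np.sin(np.pi * (hour - 6) / 12))
            return np.array([demand, max_gen + 0.1 * np.random.randn()])

        def observation(self, s_t):
            # Fully observable variant: return the whole state vector.
            return s_t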
Use a Gym-ANM environment
A code snippet illustrating how a custom Gym-ANM environment can be used alongside an RL agent implementation is shown in Listing 2. Note that for clarity, this example omits the agent-learning procedure. Because Gym-ANM is built on top of the Gym toolkit [17], all Gym-ANM environments provide the same interface as traditional Gym environments, as described in their online documentation.

    env = gym.make('MyANMEnv')  # Initialize the environment.
    obs = env.reset()           # Reset the env. and collect o0.

Listing 2: A Python code snippet illustrating environment-agent interactions [14].
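Filling in the elided interaction, a minimal rollout loop might look as follows. This is our sketch, not the paper's Listing 2: the agent object and its act() method are hypothetical stand-ins for any RL implementation, the gym_anm import is assumed to register the environments, and env.step() follows the standard Gym interface that, as noted above, all Gym-ANM environments implement.

    import gym
    import gym_anm  # assumed import that registers Gym-ANM environments

    env = gym.make('MyANMEnv')
    obs = env.reset()
    done = False
    while not done:
        a = agent.act(obs)                      # hypothetical agent policy
        obs, reward, done, info = env.step(a)   # standard Gym transition
    env.close()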
Example: the ANM6-Easy environment
ANM6-Easy is the first Gym-ANM environment that we have released [14]. It models a 6-bus network and was engineered so as to highlight some of the most common ANM challenges faced by network operators. A screenshot of the rendering of the environment is shown in Figure 2.
Figure 2: The ANM6-Easy Gym-ANM environment, taken from [14].
In order to limit the complexity of the task, the environment was designed to be fully deterministic: both the demand from loads (1: residential area, 3: industrial complex, 5: EV charging garage) and the maximum generation (before curtailment) profiles from the renewable energies (2: solar farm, 4: wind farm) are modelled as fixed 24-hour time series that repeat every day, indefinitely.
More information about the ANM6-Easy environment can be found in the online documentation.
Research and educational impact
Many software applications exist for modeling steady-state power systems in industrial settings, such as PowerFactory [18], ERACS [19], ETAP [20], IPSA [21], and PowerWorld [22], all of which require a paid license. In addition, these programs are not well suited to conducting RL research since they do not integrate well with the two programming languages most used by the RL community: MATLAB and Python. Among the power system software packages that do not require an additional license and are compatible with these languages, those commonly used in power system management research are MATPOWER (MATLAB) [15], PSAT (MATLAB) [23], PYPOWER (a Python interface for MATPOWER) [24], and pandapower (Python) [16].
Nevertheless, using the aforementioned software libraries to design RL environments that model ANM tasks is not ideal. First, the user needs to become familiar with the modeling language of the library, which already requires a good understanding of the inner workings of the various components making up power systems and of their interactions. Second, these packages often include a large number of advanced features, which is likely to overwhelm the inexperienced user and get in the way of designing even simple ANM scenarios. Third, because these libraries were designed to facilitate a wide range of simulations and analyses, they often do so at the cost of solving simpler problems more slowly (e.g., simple AC load flows). Fourth, in the absence of a programming framework agreed upon by the RL research community interested in tackling energy system management problems, various research teams are likely to spend time and resources implementing the same underlying dynamics common to all such problems.
By releasing Gym-ANM, we hope to address all the shortcomings of traditional modeling packages described in the previous paragraph. Specifically:
• The dissociation between the design of the environment (Section 2.1) and the training of RL agents on it (Section 2.2) encourages collaboration between researchers experienced in power system modeling and in RL algorithms. Thanks to the general framework provided by Gym-ANM, each researcher may focus on their particular area of expertise (designing or solving the environment), without having to worry about coordinating their implementations.
• This dissociation also means that RL researchers are able to tackle the ANM tasks modelled by Gym-ANM environments without having to first understand the complex dynamics of the system. As a result, existing Gym-ANM environments can be explored by many in the RL community, from novices to experienced researchers. This is further facilitated by the fact that all Gym-ANM environments implement the Gym interface, which allows RL users to apply their own algorithms to any Gym-ANM task with little code modification (assuming they have used Gym in the past).
• Gym-ANM focuses on a particular subset of ANM problems. This specificity has two advantages. The first is that it simplifies the process of designing new environments, since only a few components need to be implemented by the user. The second is that, during the implementation of the package, it allowed us to focus on simplicity and speed. That is, rather than providing a large range of modeling features like most of the other packages, we focused on optimizing the computational steps behind the next_state() block of Figure 1 (i.e., solving AC load flows). This effectively reduces the computational time required to train RL agents on environments built with Gym-ANM.
The simplicity with which Gym-ANM can be used by both the power system modeling and the RL communities has an additional advantage: it makes it a great teaching tool. This is particularly true for individuals interested in working at the intersection of power system management and RL research. One of the authors, Damien Ernst, has recently started incorporating the ANM6-Easy task in his RL course, Optimal decision making for complex systems, at the University of Liège [25].
Finally, we also compared the performance of the soft actor-critic (SAC) and proximal policy optimization (PPO) RL algorithms against that of an optimal model predictive control (MPC) policy on the ANM6-Easy task in [14]. We showed that, with almost no hyperparameter tuning, the RL policies were already able to reach near-optimal performance. These results suggest that state-of-the-art RL methods have the potential to compete with, or even outperform, traditional optimization approaches in the management of electricity distribution networks. Of course, ANM6-Easy is only a toy example, and confirming this hypothesis will require the design of more complex and advanced Gym-ANM environments.
Conclusions and future works
In this paper, we discussed the usage of the Gym-ANM software package first introduced in [14], as well as its potential impact on the research community. We created Gym-ANM as a framework for the RL and energy system management communities to collaborate on tackling ANM problems in electricity distribution networks. As such, we hope to contribute to the gathering of momentum around the applications of RL techniques to challenges slowing down the transition towards more sustainable energy systems.
In the future, we plan to design and release Gym-ANM environments that more accurately model real-world distribution networks as opposed to that modeled by ANM6-Easy. However, we also highly encourage other teams to design and release their own Gym-ANM tasks and/or to attempt to solve existing ones.
Declaration of competing interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
"Computer Science",
"Engineering",
"Education"
] |
Multilevel Evolutionary Algorithm that Optimizes the Structure of Scale-Free Networks for the Promotion of Cooperation in the Prisoner’s Dilemma game
Understanding the emergence of cooperation has long been a challenge across disciplines. Even though network reciprocity has reflected the importance of population structure in promoting cooperation, it remains an open question how population structures can be optimized so as to enhance cooperation. In this paper, we attempt to apply the evolutionary algorithm (EA) to solve this highly complex problem. However, as it is hard to evaluate the fitness (cooperation level) of population structures, simply employing the canonical EA may fail in optimization. Thus, we propose a new EA variant named mlEA-CPD-SFN to promote the cooperation level of scale-free networks (SFNs) in the Prisoner's Dilemma Game (PDG). Meanwhile, to verify that preceding conclusions may not apply to this problem, we also provide the optimization results of a comparative experiment (EAcluster), which optimizes the clustering coefficient of structures. Even though preceding research concluded that highly clustered scale-free networks enhance cooperation, we find that EAcluster does not perform desirably, while mlEA-CPD-SFN performs efficiently in different optimization environments. We hope that mlEA-CPD-SFN may help improve the structures of species in nature and that more general properties that enhance cooperation can be learned from the output structures.
The Prisoner's Dilemma Game (PDG) is a popular abstract mathematical model that has been employed in biology to explain the emergence and persistence of cooperative behavior among selfish individuals [1][2][3][4][5][6][7][8] . After all, survival of the fittest is a widely accepted rule of natural selection, and individuals employing a selfish strategy might be expected to be more likely to persist. After carefully studying the PDG, researchers found that organisms may still form a cooperative community even if they all act entirely in their own interest. Even so, researchers found it hard to explain large-scale cooperation in reality, as defection usually dominates in their simulations. To resolve this puzzle, researchers have long been exploring the deeper mechanisms.
In the past decades, network reciprocity, proposed by Nowak et al., has had a wide influence on this avenue of research: individuals are constrained by spatial structure to play only with their immediate neighbors 8 . Nowak et al. concluded that topological constraints influence the evolution of cooperation (a conclusion confirmed years later). Since then, many extended studies have contributed to network reciprocity. In the early stages, researchers focused on single-layer networks: they found that population structure plays a decisive role in the evolution of cooperation 9 and that cooperators in the PDG are likely to form clusters to defend against defectors 10 . They revealed a potential positive relationship between cooperation and certain network properties, such as heterogeneity 7,11 and clustering coefficients 12 , and also studied how errors in and attacks on population structures may influence the evolution of cooperation 13 . More recently, researchers have analyzed evolutionary games on interdependent networks, as populations in reality are not isolated and interactions exist between different layers [14][15][16][17] . These studies suggest that interdependence may induce new mechanisms that enhance cooperation and stabilize cooperative behavior in the system. György et al. in ref. 18 reviewed how population structure can modify long-term behavioral patterns in evolutionary games.
In addition to studying how network reciprocity may influence the evolution of cooperation, some researchers have investigated the potential behavior whereby players adjust their interactions with others based on gaming results. This is a natural phenomenon, since population structures in reality may change dynamically during the game process. A representative method in this area is the coevolutionary rule, designed and proposed by Zimmermann et al. in ref. 19. Although a large number of studies have contributed to this subject, most works in this area can be divided into those that employ strategy-independent rules for connection adaptation [20][21][22] and those that take strategies or their performance as factors influencing population reorganization 23,24 . Perc et al. have also provided a review of this research in ref. 25.
While network reciprocity seems to offer a preliminary explanation of large-scale cooperation in reality, some researchers have empirically analyzed real human games. Their experimental results revealed that humans do not base their strategy decisions on others' payoffs while playing the PDG. In addition, Gracia-Lázaro et al. found in their experiments that the existence of a population structure does not seem to influence the global outcome of cooperation 26 . Following these experiments, researchers also found that cooperation clearly depends on the strategy updating rule. Cimini et al. in ref. 27 analyzed cooperation frequency in simulations where different strategy updating rules were introduced. They found that the cooperation frequency assessed under an imitation-based strategy updating rule depends heavily on the population structure, but that network reciprocity seems to have little effect on the game dynamics when individuals do not take neighbors' payoffs into consideration (non-imitative rules). These experiments and extended works seem to have put an end to network reciprocity. However, it remains difficult to conclude that population structure has little effect on promoting cooperation, as different strategy updating rules place different levels of emphasis on different game processes in nature. Evolutionary dynamics based on payoff comparisons are appropriate for modeling biological evolution, while they may not apply to social or economic contexts 26 . Moreover, Gracia-Lázaro et al. have emphasized in their research that their conclusion applies only to human cooperation, and that network reciprocity may still be relevant to cooperation in other contexts.
Therefore, even if the relevance of network reciprocity to cooperation in social or economic settings remains controversial, population structure is still essential to cooperation in biological evolution. Moreover, just as group selection indicates that cooperative groups may be more likely to survive in nature than uncooperative ones, cooperation is essential to the survival and evolution of species. A high cooperation level can help a species maintain high competitiveness in nature, which may partially explain why helping family members ultimately helps the individual itself (kin selection). Therefore, methods that help optimize population structure should be important. Although quite a lot of studies have contributed to the promotion of cooperation in populations, it remains a difficult problem 28 . One representative breakthrough on this subject is the introduction of coevolutionary rules, which has provided a way to understand the self-reorganizing ability of a population. Even so, little has been done to investigate how a population structure can be constructed or adjusted through human intervention, and cooperation promotion methods that do not rely on the self-adaptive ability of a population may shed light on this problem.
In this paper, we design a variant of the evolutionary algorithm to optimize population structure and thereby enhance cooperation. Although much of the preceding research has found correlations between cooperation and certain network properties, naively applying these conclusions to our problem may be quite problematic, since they were mostly obtained from specific network models and lack generality. Therefore, we employ the cooperation level of population structures as the objective value of our algorithm.
To our knowledge, no simple approach has been proposed to exactly determine the cooperation level of different population structures. Moreover, the evaluated cooperation level of a structure actually fluctuates within a range. Therefore, the evaluation of structures within the EA is noisy and may interfere with the EA's selection of elite solutions. In the field of evolutionary computation, researchers refer to this interference, which can lead to failure in optimization, as the "EA cheat". Apparently, simply employing the canonical evolutionary algorithm (EA) may fail in optimization, even though EAs have been applied to many engineering problems [29][30][31][32][33][34][35][36] . There are, however, two widely accepted methods to reduce the evaluation error of different structures' cooperation levels: (1) prolong the simulation time of the game evolution, and (2) average over independent evaluations. However, the corresponding computational cost cannot be ignored.
To successfully apply the EA to the optimization of population structure, we propose a new EA variant named mlEA-CPD-SFN to promote cooperation in the Prisoner's Dilemma Game (PDG). Within mlEA-CPD-SFN, a modified local search operator, named the multilevel evolutionary operator, is designed to revise wrongly filtered solutions in the EA population and to exploit solutions with potentially higher cooperation levels. To this end, we designed a memory structure (the restoration list) within the operator to record reliable solutions for revision, together with rules for the operator to control the search bias.
To test the performance of mlEA-CPD-SFN, different types of scale-free structures have been employed, and the optimization is constrained not to change the initial degree distribution. Meanwhile, to verify that conclusions from previous research may not apply to this problem, we also provide the optimization results of a comparative experiment (EAcluster), which optimizes the clustering coefficient of structures. Although previous research concluded that highly clustered scale-free networks enhance cooperation, we find that EAcluster cannot perform as satisfactorily as mlEA-CPD-SFN does. Moreover, we find that mlEA-CPD-SFN performs well while simultaneously maintaining a low computational cost (details in III). Finally, to verify the adaptability of mlEA-CPD-SFN to different optimization environments, different strategy update rules are employed in our experiments. The simulation results verify this adaptability.
Results
Prisoner's Dilemma Game and Evaluation of Population Structure. Understanding the emergence of cooperation in the context of Darwinian evolution has been a challenge for decades. Although relevant works in this field are almost entirely theoretical, they are quite likely to have far-reaching implications for the future.
The PDG is one of the most commonly used tools to help explain how cooperation endures in nature. In the PDG, a defector receives the highest reward T (temptation to defect) when defecting against a cooperator, who receives the lowest payoff S (sucker's payoff). If both players choose the same strategy, they receive a payoff of R as a reward for mutual cooperation or P as a punishment for mutual defection. The payoffs follow the ordering T > R > P > S. As a result, in a single round of the PDG, defection is the best strategy no matter what the opponent's strategy is, even though both players would be better off if they both cooperated.
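A small worked example may make the payoff structure concrete. The numeric values below (T = 5, R = 3, P = 1, S = 0) are illustrative only; they satisfy T > R > P > S but are not taken from this paper.

```python
# Illustrative PDG payoff table; the numeric values are assumptions.
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def pdg_payoffs(sx: str, sy: str):
    """Return (payoff of x, payoff of y) for one round ('C' or 'D')."""
    table = {('C', 'C'): (R, R),   # mutual cooperation
             ('D', 'D'): (P, P),   # mutual defection
             ('D', 'C'): (T, S),   # defector exploits a cooperator
             ('C', 'D'): (S, T)}
    return table[(sx, sy)]

# Defection dominates a single round: T > R and P > S.
assert pdg_payoffs('D', 'C')[0] > pdg_payoffs('C', 'C')[0]
assert pdg_payoffs('D', 'D')[0] > pdg_payoffs('C', 'D')[0]
```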
Population structures provide the basic organization of the game. In ref. 10, players interact only within a limited local neighborhood. When a site x is updated, the current occupant and all of its neighbors compete to recolonize this site with their offspring, and offspring keep the same strategy as their parents. The probability weight with which neighbor y succeeds in reproduction is

W_{s_x ← s_y} = max{0, P_y − P_x} / (D · d_>),

where d_> = max{d_x, d_y}, D = T − S, d_i denotes the degree of node i, and P_i the payoff of i. If an offspring of one neighbor takes over site x, the relative probability of success of neighbor y is W_{s_x ← s_y} / Σ_l W_{s_x ← s_l}, where l runs over the neighbors of x. In this paper, the synchronous update method is employed.
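A sketch of one site-update decision is given below, assuming the pairwise-comparison weights stated above (the paper's exact formula was garbled in extraction, so this should be read as one consistent reading of it rather than the definitive rule).

```python
import random

def update_site(x, neighbors, payoff, degree, strategy, D):
    """One update decision for site x: each neighbor y competes with
    weight W = max(0, P_y - P_x) / (D * max(d_x, d_y))."""
    W = {y: max(0.0, payoff[y] - payoff[x]) / (D * max(degree[x], degree[y]))
         for y in neighbors[x]}
    total = sum(W.values())
    if total == 0.0:            # no neighbor outperforms x
        return strategy[x]      # x keeps its own strategy
    # Relative success probability of neighbor y is W_y / sum_l W_l.
    r, acc = random.random() * total, 0.0
    for y, w in W.items():
        acc += w
        if r <= acc:
            return strategy[y]
    return strategy[x]
```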
Generally, individuals in a population are initially designated as cooperators or defectors with equal probability, and the corresponding cooperation level is obtained by averaging over generations after the population reaches equilibrium. Thus, the evaluated cooperation level of a structure naturally fluctuates between independent evaluations.
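In code, the within-run averaging step might look as follows; the window length is a free parameter (the experiments reported later average over the last 2,000 of 22,000 generations).

```python
def equilibrium_cooperation(coop_freq, tail=2000):
    """Mean cooperation frequency over the last `tail` generations of a run."""
    if len(coop_freq) < tail:
        raise ValueError("simulation shorter than the averaging window")
    return sum(coop_freq[-tail:]) / tail
```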
To illustrate this phenomenon, the distribution of evaluated cooperation levels is given in Fig. 1. Each sub-graph in Fig. 1 contains 5,000 independent evaluation results for a BA network, and two different evaluation modes were designed to obtain these simulation results on the same group of population structures (Mode-A uses a shorter simulation time to save computational cost; see also the discussion of Fig. 5 below). This fluctuation of the evaluation may interfere with the EA's selection of elite solutions: as the evaluation distributions of different structures may overlap, there is a certain probability that a worse structure is mistaken by the EA for the superior one. This phenomenon is termed the "EA cheat" and can be fatal to the algorithm.
To avoid unnecessary computational cost, we employ Mode-A to evaluate the cooperation level of structures in this paper. Moreover, we introduce the multi-sampling method, which averages over independent evaluations and thereby approaches the ideal mean value of the evaluation distribution. To analyze the effect of this approach, the evaluation distributions obtained under the multi-sampling method are provided in Fig. 2. On the whole, multi-sampling helps reduce the evaluation error of structures. However, different sampling numbers may be necessary for different evaluation distributions, which explains why the fluctuation range in Fig. 2(c) is wider than that in Fig. 2(b). It is therefore difficult to determine an appropriate sampling number, and this method only decreases the fluctuation range without effectively resolving the EA cheat (e.g., when the ideal mean values of two evaluation distributions are close enough).
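The multi-sampling method itself is simple to state in code: run several independent Mode-A evaluations of the same structure and average them. The function `evaluate_once` below is a hypothetical stand-in for one full game simulation.

```python
import statistics

def multi_sample(structure, evaluate_once, n_samples):
    """Average n independent cooperation-level evaluations of one structure.
    `evaluate_once` (hypothetical) runs one Mode-A simulation and returns
    the equilibrium cooperation level of `structure`."""
    return statistics.mean(evaluate_once(structure) for _ in range(n_samples))
```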
Multilevel Evolutionary Operator and mlEA-CPD-SFN. The previous simulation results have revealed that the evaluation of structures fluctuates and may interfere with the EA's selection of elite solutions. Multi-sampling can help reduce the evaluation error, but determining an appropriate sampling number is difficult, and the corresponding increase in computational cost is unbearable.
Therefore, the reserved (promising) structures should be resampled more often to ensure the reliability of their evaluation results, while the abandoned (mediocre) structures should be resampled less often to decrease the computational cost. With this in mind, we propose a local search operator variant named the multilevel evolutionary operator (see Methods) to achieve this. For timely revision of wrongly filtered solutions caused by the EA cheat, reliable structures are saved as substitutes in a dedicated memory structure (see Methods). Given these components, we further propose a new EA variant named mlEA-CPD-SFN (see Methods) to optimize the structure of scale-free networks for the promotion of cooperation in the Prisoner's Dilemma game.
Optimizing the Clustering Coefficient through the EA. Assenza et al. have revealed the enhancement of cooperation in highly clustered scale-free networks 12 . Therefore, optimizing the clustering coefficient may help promote the cooperation level of scale-free structures. The objective value in EAcluster is the clustering coefficient of a structure, obtained as follows: suppose the neighbors of node i form a subgraph containing E_i edges; then the local clustering coefficient is C_i = 2E_i / (d_i(d_i − 1)), and the clustering coefficient of the network is the average of C_i over all nodes. Two types of scale-free networks are employed to test the performance of EAcluster: Barabási-Albert networks (BANs) 37 and Holme-Kim networks (HKNs; p = 1). BANs are a commonly used type of scale-free network, while HKNs are a variant that tends to have higher clustering coefficients for higher values of the construction parameter p (p ∈ [0,1]) 12 . For a fair comparison with mlEA-CPD-SFN, a canonical local search operator is designed within EAcluster, and the maximum generation number of EAcluster is set to 120, by which point its evolution has almost converged. The simulation results are given in Fig. 3.
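Using networkx (an assumed tooling choice; the paper does not name its implementation), the EAcluster objective and the two test network families can be written as follows. The node counts and attachment parameter m are illustrative assumptions.

```python
import networkx as nx

def clustering_objective(G: nx.Graph) -> float:
    """EAcluster objective: average clustering coefficient
    C = (1/N) * sum_i 2*E_i / (d_i * (d_i - 1))."""
    return nx.average_clustering(G)

# Illustrative constructions of the two test network types:
ban = nx.barabasi_albert_graph(n=500, m=2)           # BA network
hkn = nx.powerlaw_cluster_graph(n=500, m=2, p=1.0)   # Holme-Kim network, p = 1
print(clustering_objective(ban), clustering_objective(hkn))
```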
Apparently, optimizing the clustering coefficient of scale-free structures can help promote cooperation in the PDG. However, the efficiency of EAcluster is quite limited. Moreover, as EAcluster fails in the optimization of 500-node BANs, we can conclude that naively applying conclusions from previous research to the practical optimization of structures may not perform desirably and can sometimes be problematic.
(Fig. 3 caption: Simulation results of EAcluster in optimizing population structure for the promotion of cooperation in the PDG. Each group contains 10 independent structures whose evaluated cooperation levels lie within a gray bar; black points mark the mean value. On the whole, naively optimizing the clustering coefficient of a population structure may promote cooperation in the PDG, but not efficiently; sometimes this method may even fail and produce a worse structure, as in the optimization of 500-node BANs, where the mean value decreases.)
Efficiency of mlEA-CPD-SFN in Optimizing Population Structure. In this part, the same structures are employed to test the performance of mlEA-CPD-SFN (see Methods). Two plain types of mlEA-CPD-SFN and a hybrid type are considered:
• Only one level in each pyramid: mlEA-CPD-SFN1 (shortened to lv1).
• Five levels in each pyramid: mlEA-CPD-SFN5 (shortened to lv5).
• Only one level in each pyramid, but with multi-sampling employed for the initial evaluation: mlEA-CPD-SFN1-M (shortened to lv1-M).
To protect the best record in the restoration list, α is set to 0 in mlEA-CPD-SFN1, while α = 0.5 in mlEA-CPD-SFN5. Within mlEA-CPD-SFN1-M, the initial evaluation of each structure is obtained by averaging over 5 independent evaluations. The optimization results of these algorithms are given in Fig. 4.
The initial evaluation of structures in mlEA-CPD-SFN1 and mlEA-CPD-SFN5 involves only a single sample. Therefore, compared with mlEA-CPD-SFN1-M, these two variants are in theory more likely to overestimate or underestimate the cooperation level of structures. As the performance of mlEA-CPD-SFN1-M surpasses that of mlEA-CPD-SFN1, we can conclude that accurate evaluation of structures clearly influences the optimization results. This also explains why the performance of mlEA-CPD-SFN1 worsens when optimizing smaller structures: although the complexity of the problem decreases with the structure scale, the corresponding evaluation error becomes more pronounced (refer to Fig. 1). Even so, in our experiments, mlEA-CPD-SFN5 generally performs better than mlEA-CPD-SFN1-M. This reveals that the number of levels in mlEA-CPD-SFN is positively related to its performance: the restoration list provides records that keep the EA population evolving despite the "EA cheat", and with more levels, more history records are available and can be retained longer.
The evaluated cooperation level of the structures reported in this paper is obtained by averaging over 5,000 independent evaluations, so these results should be reliable. However, since they are obtained under Mode-A to save computational cost, it should still be investigated whether prolonging the simulation time would influence them. Thus, we prolong the simulation time and track the trend in cooperation during the game process (Fig. 5). Each data point in these results is obtained by averaging over 500 independent runs (50 runs per structure). It is apparent that prolonging the simulation time has little influence on the evaluation results, as the cooperation frequency has almost converged by around 1.1N generations (Mode-A); therefore, the simulation results we obtain should be credible. In addition, we can conclude that mlEA-CPD-SFN performs effectively in optimizing population structure and promoting cooperation in the PDG (mlEA-CPD-SFN5 performs best, followed by mlEA-CPD-SFN1-M).
We further analyze the optimization characteristics of mlEA-CPD-SFN and explore its advantages. To this end, we provide the running times and evaluation counts of the above three algorithms in Table 1.
Apparently, the computational cost of mlEA-CPD-SFN1-M surpasses those of the others (by almost a factor of three); the multi-sampling method thus comes with a dramatic increase in computational cost. In contrast, mlEA-CPD-SFNn performs well while maintaining a low computational cost. This is due to the basic principle of mlEA-CPD-SFN: reserved (promising) structures are resampled more often to ensure the reliability of their evaluation results, while abandoned (mediocre) structures are resampled less often to decrease the computational cost. In addition, the structure of the restoration list is independent of the evaluation of structures.
Adaptability of mlEA-CPD-SFN to Different Update Rules. Previous research has concluded that update rules influence the cooperation level of structures 27 . Therefore, it is necessary to investigate the adaptability of mlEA-CPD-SFN to different strategy update rules. Two common strategy update rules are additionally employed in our experiments (see the sketch after this list):
• Fermi rule: a neighbor y of x is chosen randomly, and x imitates y's strategy with probability W = 1/(1 + exp[(P_x − P_y)/k]), where P_i is the payoff of individual i and k denotes the amplitude of noise, set here to 0.1.
• Unconditional imitation rule: each individual x imitates the neighbor y with the largest payoff, provided P_y > P_x.
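For concreteness, both update rules can be sketched as follows, with k = 0.1 as in the text; the neighbor-selection and tie-breaking details are assumptions where the text is silent.

```python
import math
import random

def fermi_update(x, neighbors, payoff, strategy, k=0.1):
    """Fermi rule: x imitates a random neighbor y with probability
    1 / (1 + exp((P_x - P_y) / k))."""
    y = random.choice(neighbors[x])
    p = 1.0 / (1.0 + math.exp((payoff[x] - payoff[y]) / k))
    return strategy[y] if random.random() < p else strategy[x]

def unconditional_imitation(x, neighbors, payoff, strategy):
    """x copies its highest-payoff neighbor y, but only if P_y > P_x."""
    y = max(neighbors[x], key=lambda n: payoff[n])
    return strategy[y] if payoff[y] > payoff[x] else strategy[x]
```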
The configuration of the algorithms remains unchanged. As cooperation almost dominates on HKNs (p = 1) under the Fermi rule when the cost-to-benefit ratio is r = 0.95, we partially employ HKNs (p = 0.5) to test the performance of the algorithms. Moreover, the performance of mlEA-CPD-SFN10 (10 levels in each pyramid; lv10) is also provided in Fig. 6. Notably, the simulation time used to evaluate the initial and optimized structures in this part is set to 22,000 generations, and the equilibrium cooperation level of a structure is obtained by averaging over the last 2,000 generations (as for the results in Fig. 5). We therefore do not additionally provide the corresponding mean cooperation frequency of structures during the game process.
Overall, all of these algorithms successfully promote cooperation in the PDG (Fig. 6). Therefore, we can conclude that mlEA-CPD-SFN is adaptable to different strategy updating rules. Moreover, the number of levels in mlEA-CPD-SFN clearly influences its efficiency. Finally, we can see that mlEA-CPD-SFN1 clearly fails in some optimizations, while mlEA-CPD-SFN1-M performs well. This verifies that evaluation error may cheat the EA and thereby cause optimization to fail.
Note that, unlike the other strategy updating rules, the unconditional imitation rule leads to deterministic dynamics. Therefore, the initial distribution of strategies plays an important role in the final cooperation frequency, and the evaluation of structures is accordingly more unstable. This may explain why mlEA-CPD-SFN1-M performs best under the unconditional imitation rule, even though all of these algorithms perform effectively.
Discussion
Cooperation is essential in many aspects of life. In biology, the Prisoner's Dilemma Game (PDG) has long been used to help explain how cooperation endures in nature. As cooperation is highly relevant to the competitiveness of groups in nature, understanding cooperation in the PDG and proposing methods to promote it should have significant implications. To our knowledge, although many well-known mechanisms have provided ways to understand the self-reorganizing ability of a population toward a situation optimal for cooperation, little has been done to investigate how to construct or adjust population structure through human intervention, even though previous research recognizes its importance. Therefore, cooperation optimization methods that do not rely on the self-regulation mechanisms of a population may shed light on this problem.
The contributions of this paper are summarized as follows: (1) We propose a new EA variant named mlEA-CPD-SFN that optimizes the structure of scale-free networks for the promotion of cooperation in the Prisoner's Dilemma game without changing the initial structures' degree distribution. (2) We reveal that the evaluation error of population structures may cause the "EA cheat", so that the canonical evolutionary algorithm (EA) may fail in optimization. (3) Different types of scale-free structures and updating rules have been applied to verify the performance of mlEA-CPD-SFN. (4) We provide the optimization results of a comparative experiment (EAcluster) and reveal that naively applying conclusions from previous research to the practical optimization of structures may not perform desirably and can sometimes be problematic. (5) The experimental results show that mlEA-CPD-SFN performs well in various situations while maintaining a low computational cost.
We hope that mlEA-CPD-SFN may help improve the structure of populations in nature and that more general cooperation-enhancing properties can be learned from the output structures.
Methods
Restoration List. Although the "EA cheat" may leave some wrongly filtered solutions in the EA population, timely revision can still save the EA from failure in optimization. Therefore, a hierarchical memory structure (shown in Fig. 7) is designed to back up reliable solutions in case of need. There are 2Ω pyramid-like sub-lists in the restoration list, each containing vacant spaces for records. Notably, records in the upper levels of a pyramid take priority over those at the bottom. L_{n,m} marks the mth record of the nth pyramid, and L_n marks the nth pyramid; L_n is responsible for the nth solution in the EA population. The memory structure is separated into multiple parts in order to maintain the diversity of the EA population and avoid premature convergence of the algorithm. In addition: (1) four rules are designed for the restoration list: an insertion rule, a mutation rule, a sorting rule, and an information update rule; (2) two comparison strategies are designed to compare structures with different sampling numbers. Strategy 1 is used to compare solutions within the EA population, and Strategy 2 is used to compare solutions in the EA population with those in the restoration list.
Strategy 1 (comparing I_i and I_j): if avg_{G_i} > avg_{G_j}, then I_i is better than I_j. Strategy 2 (comparing I_i with a record I_j): suppose G_i' is a variant of G_i. (i) Only when avg_{G_i} < avg_{G_i'} < avg_{G_j} is I_j considered more optimal than I_i. (ii) Only when avg_{G_i'} > avg_{G_i} > avg_{G_j} is I_i considered more optimal than I_j. Details of how G_i' is produced are given below.
Multilevel Evolutionary Operator: A Variant of the Local Search Operator. The local search operator is a widely employed method to improve the exploitation ability of an EA through continuous fine-tuning of solutions. In this paper, we employ an edge-switching process (Fig. 8) to fine-tune the current solution G_i and obtain G_i': (a) a node u (d_u ≥ 2) and two of its neighbors i and j (d_i, d_j ≥ 2) are selected; (b) edges e_jk and e_im (u ≠ i ≠ j ≠ k ≠ m) are selected; (c) e_jk and e_im are removed, and e_ji and e_km are added (e_ji, e_km ∉ G_i). edge_switch(G_i, v_u, v_j, v_i, v_k, v_m) denotes this edge-switching operation (step c), and node_select(v_u, v_j) denotes the node selection process (steps a-b).
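A sketch of one such edge-switching step on an undirected networkx graph is given below; the retry policy when no valid pair (k, m) exists is an assumption, since the text does not specify it.

```python
import random
import networkx as nx

def edge_switch(G: nx.Graph) -> bool:
    """One degree-preserving fine-tuning step: remove e_jk and e_im,
    add e_ji and e_km, with u, i, j, k, m all distinct."""
    u = random.choice([n for n in G if G.degree(n) >= 2])
    nbrs = [v for v in G.neighbors(u) if G.degree(v) >= 2]
    if len(nbrs) < 2:
        return False
    i, j = random.sample(nbrs, 2)
    if G.has_edge(j, i):                 # new edge e_ji must be absent
        return False
    ks = [k for k in G.neighbors(j) if k not in (u, i, j)]
    ms = [m for m in G.neighbors(i) if m not in (u, i, j)]
    random.shuffle(ks)
    random.shuffle(ms)
    for k in ks:
        for m in ms:
            if k != m and not G.has_edge(k, m):
                G.remove_edge(j, k)      # degrees of j, k, i, m are preserved
                G.remove_edge(i, m)
                G.add_edge(j, i)
                G.add_edge(k, m)
                return True
    return False                         # caller may retry with new nodes
```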
The canonical local search operator compares and selects only between the initial and adjusted solutions on the basis of their evaluation results. The multilevel evolutionary operator, in contrast, also considers the records in the restoration list (here I_i and I_i' mark the initial and adjusted solutions, and L_{i,0} is the top-level record in the ith pyramid). Its outline is as follows:
Output: Ĩ_k', the modified solution derived from I_k, and avg_{Ĩ_k'}, its evaluation.
Step 1: Initialization (see Algorithm 2 for details).
Step 2: Local search based on hill climbing, iterating over every edge of the nodes: (a) edge switching (see Algorithm 3); (b) if the adjusted solution is unreasonable, apply Algorithm 4; (c) if the adjusted solution is accepted, proceed according to the specific situation (see Algorithm 5); (d) if the loop is over, go to Step 3, otherwise start the next round.
Step 3: Output the current I_k and avg_{G_k} as Ĩ_k' and avg_{Ĩ_k'}. Here avg_{G_k'} ← C(G_k'), where C(G_k') evaluates the cooperation level of G_k' once.
"Computer Science"
] |
A Systemic Functional Analysis of Two Multimodal Covers
Our society is influenced by new texts, which are clearly characterised by the increasing dominance of the visual mode; this implies that new literacies need to be developed as a way of enabling readers to question the texts they are exposed to.
Introduction
It is evident that we need to be active participants in today's society, which is why we need to develop a critical perspective for reading texts that employ a variety of modes to convey meaning. Our society is influenced by the presence of new texts, which are clearly characterised by the increasing dominance of the visual mode. This implies that new literacies, such as critical media literacy or critical literacy, need to be developed as a way of enabling readers to develop a critical attitude to the texts they are exposed to. In Kress's words (2003: 61), we need to analyse "how the modes of image and writing appear together, how they are designed to appear together and how they are to be read together".
The discourse of covers is multimodal in nature and presents an increased emphasis on modes of representation other than written text, especially an increased dominance of the visual mode to catch people's attention. This leads us to consider how the visual elements and contexts of a text contribute to our overall experience of it, because there is a clear combination of verbal and visual meanings. The context in which these texts are created, and the one in which they are going to be used, is crucial, as the following quotation corroborates: "Contexts are not simply containers within which actions, practices, and activities occur. Instead, they are dynamic streams of overlapping and integrated discourses, spaces, sociocultural practices, and power relations" (Kamberelis and de la Luna, 2004: 243). When we read any page we have certain expectations, and the page should be visually designed to meet them. Since the texts under analysis are the covers of different magazines and need to attract the reader's attention, it is to be expected that they connote their visual elements by using many colours and images, as we will see in the analysis in section three of this article. As Francés (2004: 124) states: "That we associate particular visual arrangements with different genres of writing means that the visual arrangements do some of the work of the genre. This means, then, that the visual arrangements can be analysed in terms of the genre work they do".
We argue for a multimodal and situated approach to understanding and interpreting the writing on covers. This is why we are going to analyse two covers of free British magazines (published in London on July 14, 2003; see Appendix) to see the different resources they use to attract people's attention and to encourage readership, particularly because they were delivered at the exits of underground stations. We will analyse the two covers as multimodal texts, taking into consideration that a text can be a complex phenomenon (Halliday and Matthiessen, 2004: 3); we will also consider that "[...] the interpretation of texts is structured not only by 'what the text says', but also by contextually specific rules of interpretation, [...]" (van Leeuwen, 2005: 83). Since the text will be the basic unit of our analysis, Systemic Functional Linguistics (hereafter SFL) will be used as an analytical framework, because this linguistic school studies language in relation to society and analyses the main reasons for choosing between some linguistic forms and others, a choice always determined by the function that those linguistic forms have in society. We are interested in a systemic functional approach to language because of the interrelationship between language, text and the contexts in which texts occur, and because it includes a social perspective in the study of language.
SFL concentrates on the analysis of authentic products of social interaction (texts), considered in the social and cultural context in which they take place. The most generalized application of systemic linguistics is "to understand the quality of texts: why a text means what it does, and why it is valued as it is" (Halliday, 1994: xxix).
Systemics describes a text in terms of the different choices of language we find in it: how a text realises what is happening (ideational metafunction), how it interacts with the reader or hearer (interpersonal metafunction), and how it coheres (textual metafunction). As Martin (1992: 493) proposes: "texts are social processes and need to be analysed as manifestations of the culture they in large measure construct". Systemic linguists place considerable emphasis on the idea of choice, i.e., we view language as a network of interrelated options from which speakers and writers can select according to their communicative needs.
Having SFL as a framework helps us understand why a written text is used in the way it is. It does so by paying attention to its context and textual organization, because this enables an in-depth study of the construction of meaning in the text. The analysis will reveal that the visual elements and the context of a text influence our overall experience of it, because texts have to be understood in their context, since they represent the reality that surrounds them, as Kamberelis and de la Luna (2004: 241) discuss: Texts are not simply denotative devices that stand for and correspond to "real-world" referents that lend them meaning. Instead, they give shape to the reality they implicate as much as they present or represent it. Because texts are indexical-pointing to the contexts in which they have concrete meanings and functions-paying careful attention to the formal (semiotic) properties of texts can tell us a lot not only about the internal organization of the texts themselves but also about their authors, contexts of use, audiences, and so on.
As readers, we have to be able to critically interpret the texts we interact with on a daily basis and become more analytical in our literacy practices around texts. In this way we will develop a critical literacy. That is why we need different tools to analyse in detail, and to talk about, a text in any mode - to develop skills about texts and to improve our understanding of multimodal texts. In this sense, Kress and van Leeuwen's (1996) Reading Images: A Grammar of Visual Design can be very useful, since it is a grammar for the visual mode based on the principles of SFL.
The logical starting point for the analysis of written texts is to consider the meaning of the text, since all texts are about something, as Bazerman and Prior (2004: 2) declare: "To understand writing, we need to explore the practices that people engage in to produce texts as well as the ways that writing practices gain their meanings and functions as dynamic elements of specific cultural settings".
We are going to analyse the different resources used in the covers to convey messages and catch readers' attention. With the analysis, we intend to answer the following questions: "Who and what are the kinds of people, places and things depicted in this image, and how do we recognize them as such?" and "What ideas and values do we associate with these depicted people, places and things, and what is it that allows them to do so?" (van Leeuwen, 2001: 92)
Analysis of visual grammar concepts and linguistic features of the written text in two multimodal covers
The texts on the two magazine covers are multimodal because they include two different modes: visual and written. In this section we intend to highlight the different values of these texts. We do not understand the multimodal text as one that can be divided into different semiotic channels; on the contrary, the multimodal text is a unity in which we can observe different resources (Thibault, 2000: 321).
Following Kress and van Leeuwen (1996), we can point out that the main visual features of magazine texts are colour, layout, salience, framing and photographs. It is evident that colour is of great significance, since it is used to attract the reader's attention. As Kress and van Leeuwen (2002: 347) declare, the colours of the text, including clothing, are used to denote specific aspects of the person or character. In the backgrounds, we normally find bright colours, which establish a contrast with the colours used for the writing; the background colour normally makes it easier to see and read the written message. Colour always suggests something: some colours make the reader comfortable and others can make him/her feel uncomfortable. The selection of colours normally has an impact on the feelings of the reader, drawing an emotional response to the text (Kress and van Leeuwen, 2002: 348).
The colour of the background creates a specific context, so we can speak of cohesion in the use of colours. In this kind of text, the written text becomes part of the visual and usually contrasts with the background; it is normally black or white, or links with another element on the page, for example the colour of the heading. The dominance of the background colour or image is designed to attract the reader's attention, which normally implies the reader's desire to engage with the text; i.e., people are normally attracted by the colours, which is why we can state that colours encourage interaction.
Colour is a very important aspect of this kind of magazine because people received these two different magazines on the same day. Therefore, whether they decide to look at one of them or read some of the articles inside is favoured by the colours the magazines use and by the personality of the reader, who has to ask himself/herself: 'Since I do not have much time, which magazine shall I read until I get to the underground station where I get off?'. Colours and photographs have an impact on readers and have a lot to do with the decision they make.
In the two covers we find a clear differentiation of colours: brightness is important in these texts, the written text is easy to read, and there is a clear division of the information according to its value: the left side of the page is associated with the Given and the right side with the New, i.e., with the most important part of the information. In visual discourse, right pages are normally dominated by large and salient photographs; left pages, on the other hand, contain mostly verbal text. As Kress and van Leeuwen declare (1996: 187), the information values we find on the left and on the right are Given and New respectively: [the Given is defined as] something the viewer already knows, as a familiar and agreed-upon point of departure for the message. For something to be New means that it is presented as something which is not yet known, or perhaps not yet agreed upon by the viewer, hence as something to which the viewer must pay special attention.
The way the page is arranged is intended to guide the reader's attention to certain parts of the text: the image(s), the written text, the heading, etc. What is found at the top of the page is given a prominent position, especially if it is in the right-hand corner. Images normally stand out, since they are what we see first. Normally, half the page is taken up by the image(s) and the other half by the written text, as we can clearly see in the covers of Gat and Ms London (figures 1 and 2; see Appendix).
Headings are usually large and bold, and placed at the top of the page. Images and written text blocks may be placed on the right-hand or left-hand side of the page, or at the top or bottom. The left-hand side, the space for Given information, holds elements of less importance than those placed on the right side, because they are assumed to be known by the reader; the right-hand side, the space for New information, is where the most important elements are located and where the reader should concentrate his/her attention (Kress and van Leeuwen, 1996: 186).
Salience can create a "hierarchy of importance among the elements, selecting some as more important, more worthy of attention than others" (Kress and van Leeuwen, 1996: 212). Salience is realised through size, colour, colour contrasts, tonal contrast and placement on the page, thus drawing the reader's gaze to an element of the text. Size is normally a major factor in salience. In combination with colour, the heading is usually large and stands out. In magazines, the image is normally the most salient element on the page, because it takes up a large part of the page and seems to dominate the written text. This contrasts with the written text blocks, which are normally in a small font and are rarely the most salient feature.
Frames are another important visual feature. They are normally used to highlight the written text or an image; the frame has an effect on our perception of elements as separate units of information. Kress and van Leeuwen (1996: 183) point to "the presence or absence of framing devices (dividing or framing lines) which connect or disconnect elements, signifying that they belong or do not belong together".
Photographs are often used to bring a sense of immediacy and reality to the text in a way that promotes interaction with the reader, as we will see in the following paragraphs. Covers, like any text, are semantic units and consist of the ideational, interpersonal and textual meanings which form a coherent whole. In the following section we will comment on different aspects related to the three functions of language 1: aspects related to cohesion have to do with the textual function; we concentrate on the ideational function when we see the covers as a whole and then observe the different parts into which they can be divided; and the analysis of colours and of appraisal vocabulary used to evaluate the world is related to the interpersonal function.
The visual elements of covers (colours, shapes, written texts and images) are carefully chosen because they perform a persuasive task.For example, when choosing colours, the designer normally chooses a combination of bright and dark since this mixture attracts the reader's attention.
The visual aspects of texts are to be understood as embedded in the social context where they are used, because the designer is influenced by the social circumstances in which the text is composed, and because he/she also considers the circumstances in which these texts will appear and do their job. The designer creates the page according to what he/she wants the reader to see first and the mood the text should create. He/she creates a relationship between the different parts of the text that contributes to its internal coherence and cohesion. This internal coherence is related to the logic of the text. In Halliday's words (1994: 339): For a text to be coherent, it must be cohesive; but it must be more besides. It must deploy the resources of cohesion in ways that are motivated by the register of which it is an instance; it must be semantically appropriate, with lexicogrammatical realizations to match (i.e. it must make sense); and it must have structure.
The two covers try to persuade readers that any information they take from these pages is as close to the present moment as possible. Readers first see what is at the top of the page (because we have been taught to read starting at the top) and then move down. The different ways in which the elements of a multimodal text are placed affect how our attention moves over the page. The size and colour of an element and its placement at the top or bottom, left or right, influence the way we perceive the page, since there is normally a hierarchical relationship between elements.
Since these magazines are delivered to readers who are taking the underground in the early morning, most of them in a hurry to get to work or somewhere else, the persuasive effect of the cover's colour is crucial: it must invite a busy reader to open the magazine and read. Since he/she receives different magazines at the same time, the cover plays an important role.
It is evident that, in the two covers, the elements given the most visual attention are the headings, which stand out because of their placement at the top, and the images, because of their size. In the two covers analysed, we can observe that there is one significant image which takes up the largest part of the page. Apart from the image, we find written information on the left side of the page; the written text can be considered a clarification of the image. Roland Barthes (1977: 40) coined the term 'anchorage' for cases like these, in which the text helps us understand the image and vice versa.
The letters chosen to present the written texts on the covers are big enough to be read easily, and there is enough space between the words and the different written blocks. As expected, the heading with the name of the magazine is placed at the top of the page in capital letters: white in Gat, contrasting with the black top the woman is wearing, and red in Ms London, because this is the colour of the woman's dress. Although this is not the norm, the headings of the two covers under analysis are short so that they are easy to remember: Gat and Ms London. Gat is surrounded by a blue square, the same colour as some of the written blocks, giving cohesion to the cover.
In Ms London, the heading is in red, which matches the colour of the woman's dress and some of the written blocks; the red writing stands out against the white background, which makes the red of the dress, the title and the written blocks stand out even more. The letter blocks are placed on the left, in the Given part. Some of them are red and others black, two colours that match well and establish a clear contrast. The red matches the woman's dress and the title of the magazine, and at the same time contrasts with the black of her shoes, handbag and bracelet.
Since red is a bright colour and a symbol of passion in our culture, two lines bigger than the rest are written in pink. This clearly draws the reader's attention to them; they refer to the most important article in the magazine, 'Gwyneth Paltrow's BIG MAKE OR BREAK', where we can perceive an antithesis. It is a fact that pink is associated with girls: it is soft, pretty, a stereotypically feminine colour which links to the sweet and sensual image of the woman presented in the picture. On this cover we also find a written block on the right, in the New part, as a way of showing that this is an important part of the information. The right side of this written text block aligns with the shape of the woman's body.
As regards the written text, it is important to highlight that on the cover we find several questions that invite the reader to read the magazine and find the answers: 'How Hollywood divas handle the put down', 'Got the ring, got the bloke but will she and Chris Martin falter at the altar?'. Also interesting is the use of appraisal at the beginning of the cover: 'dawn', 'bitches' and 'the put down'. In this way the reader's attention is attracted, because those words have negative connotations that cause shock, since they are not what we would expect on a cover.
The woman in Ms London is moving, which suits the intended audience: people starting their day by going somewhere on the underground. Her mouth is open as if she were about to say something. The background between the written blocks and the picture frames the sections, making them distinct from each other while at the same time associating them through the alignments chosen.
In the magazine Gat, we find the written text on the left, the space of the Given, while the photograph of a blonde, good-looking woman is placed on the right, the place of the New. The heading is on a blue square and there is a sub-heading, 'Girl about town', in small black capitals. The background of this page is quite dark (dark blue, almost black), but since the letters are white and bright blue, the written text blocks can be read easily.
The exclamation in one of the cover lines and the imperative 'bring back the mingers' are used for emphasis and to create interaction with the reader. It is also important to note the use of contractions and short forms (fests, we're, telly), which give the cover an informal quality that connects with the reader. In the same way, the use of appraisal ('Nasty niffs', bitch, evil, sex) catches the reader's attention.
As we have already noted for Ms London, the background between the written blocks and the picture frames the sections, distinguishing them from each other. On the other hand, the picture is different from the one in the previous magazine, because the woman in this picture seems static, observing, with her mouth closed. Her hairstyle covers part of her eyes, which makes her enigmatic. Her dark-blue necklace and the small visible part of the top of her black dress clearly match the dark blue of the page background. The makeup on her lips and around her eyes is the same colour, i.e., colour creates cohesion.
Next to the heading, on the right side at the very top of the page, we find two lines of written text in white and blue, the two colours combined in the writing of the entire cover: 'summer fests post-glastonbury - best of the rest'. Since they are placed at the top of the page, on the side where new information is placed, they are given prominence.
Comparison of both covers
In the following paragraphs, we are going to establish a parallelism between the covers of Gat and Ms London in several respects. Firstly, there is an overlap between both images and part of the written block on the left, at the bottom of the page, which creates a cohesive link with the image through association by placement. In both covers, the written text block and the images are placed on the same background, creating an associative link between the two elements, i.e. the writing and the image go together. The background contrasts strongly with the two photographs: with the red dress in Ms London and with the white skin and blonde hair in Gat.
Another interesting parallelism is that both women are blonde, although one of them has short hair and the other long hair. The woman in Ms London is dressed in red and the woman in Gat is dressed in black, two colours that clearly represent the trend in women's fashion and that try to elicit an emotional response from the reader. Both women are serious and do not look very natural; i.e., they can be considered ideal women, because they would need hair stylists, make-up artists, etc. to achieve that look.
It is evident that, in both covers, the image of the woman and the heading are the most salient elements on the page. In both cases, the image is on the right side, the prominent place for information, and the written text is on the left, because it is secondary to the image. It can thus be deduced that if the image does not catch the reader's attention, it is very likely that he/she will not read anything written on the cover or inside the magazine. In this way the image takes up most of the right side of the cover and dominates the written text. The woman's face appears on the upper part of the page and, in Ms London, it covers a great part of the 'ND' of the word London, the main word in the heading.
In Ms London, the placement of the writing on a pale background (white and blue) facilitates reading, not just the viewing of the page or the image alone. Written text in different sizes is more likely to be read than the rest of the written text. It is interesting to note that the date of the magazines (14 July 2003) is placed in two different areas of the covers: Ms London places it at the very top of the page, on the right side, on the heading, in black; Gat uses white and places it on the left, after the heading and the sub-heading. The different choices (we cannot forget that SFL proposes a grammar of language as choice and observes how the choices at all strata build the context), i.e. size, colour, brightness versus images, have an effect on the meanings made.
Multimodal resources can construct stereotypes, as happens in Ms London and Gat, where we are presented with two pictures of young, beautiful women representing the type of woman most people want to become in European countries. As Cameron (1997: 49) proposes: "Gender has constantly to be reaffirmed and publicly displayed by repeatedly performing particular acts in accordance with the cultural norms (themselves historically and socially constructed, and consequently variable) which define 'masculinity' and 'femininity'".
The colour red, for example, is a symbol of passion in our culture and, since it is a bright colour, it is also associated with abundance; i.e., there are cultural values associated with colours and images. From what we have said in the previous lines, it is evident that there is a link between the purpose of the text, its structure, and its major linguistic and visual features, because it is aimed at a very specific audience: people taking the underground in the early morning - professionals, semi-professionals, commuters, etc. As Hyland (2000: 10) acknowledges: "Readers must be drawn in, influenced and persuaded by a text that sees the world in similar ways to them, employing accepted resources for the purpose of sharing meanings in that context".
We can see how these covers contain cultural attitudes and ideologies: in both we find images of women in the New part, i.e., they are the most important part of the text, and they are very attractive and good-looking. We can see an overall balance and harmony in these covers. At the same time, how these covers are read depends on the reader's ideology, although in our society images are very important, and being good-looking and beautiful seems to be essential. That is why we agree with Beard (2001: 4) when he considers that: "A text cannot have an existence independent of its readers, who recreate the text through bringing their own culturally-conditioned views and attitudes to bear on it".
Conclusions
In the two texts under analysis, the designer has chosen a wide range of language features and visual devices to set up the structure of the text while avoiding repetitiveness. This analysis has made us reflect on the purpose, the appropriate structure and the common features of this type of text. In this article we have analysed aspects of the style, structure, and visual and linguistic features of the multimodal texts found in two free English magazines.
Texts are always produced in the socio-cultural context of their time, and we gain a better understanding of them by taking context into account. Taking into consideration the contextual issues surrounding texts involves a kind of social relationship between writer and reader. Readers need to see the social interaction between writer and reader through the process of reading multimodal texts, keeping in mind that texts are always interpreted according to the cultural frame of the individual.
The covers analysed are effective texts that contain ideas consistent with the cultural conventions in which they are written; i.e., there is a connection between form and meaning. The two covers are designed to be read as well as seen. It is evident that a text cannot exist independently of its readers: they always bring their culture and ideology to the text, recreating and enriching it. In this way, readers become active rather than just passive recipients: they add meaning to the text. As critical readers, we have to be aware of the social purpose of these multimodal texts. Texts are always produced inside their context, and it is important to keep in mind that we as readers can understand a culture by paying attention to all the texts produced in that culture.
With SFL as a framework, we have concentrated on the visual mode, paying attention to aspects such as the use of colour, the photographs or pictures, the writing and the background. It is evident that in each of the two covers there is an image of significance, and that the written text can itself be considered a visual element, with different sizes, colours and the use of bold for emphasis.
In this analysis we have asked the following questions in order to concentrate on the main aspects of the multimodal analysis we have followed: 'What is the first thing you see when you look at the page?', 'Where is the image placed on the page?', 'What colours are used and how do they match?', 'Where is the written text placed and how does it compare to the image?' and, above all, 'How does a multimodal text communicate, and how does it establish relationships between the represented world and the reader?'. We have discussed the placement of image and writing in relation to the reasons a designer has to place an image on the right or left, considering that the right side is the space for the New, i.e., the area where the designer wants the reader to look or read first. We have also given attention to the types of images used, which is really significant in Ms London and Gat, where we find the typical fashionable, good-looking, slim woman. This analysis is not an end in itself, because our purpose is to make people reflect on the persuasive effect that reading a multimodal text has on the reader. Our intention is to highlight that the construction of multimodal texts is not random: designers choose images carefully and place them in a very precise way. The colours are also chosen carefully, normally bright and dark together, because colour is very important and must suit the image and the size and type of writing.
The learning of visual grammar has a definite impact on readers' understanding and knowledge of multimodal texts in magazines; in other words, visual grammar makes readers more critical and able to decode the hidden messages in multimodal texts. Multimodal awareness provides multiple perspectives on how the relations between discourses and society work and facilitates a better understanding of the way discourses work in different societies.
Multimodal texts conceal interactive strategies from their readers. The designer carefully chooses every element that takes part in this kind of text. Choices of how to organise texts are influenced by context and the imagined audience, i.e., the text acquires meaning in the context in which it is produced, distributed and consumed.
"Linguistics"
] |
The Implementation of a Gesture Recognition System with a Millimeter Wave and Thermal Imager
During the COVID-19 pandemic, the number of cases continued to rise, creating a growing demand for alternatives to traditional buttons and touch screens. Most current gesture recognition technologies, however, rely on machine vision, which can yield suboptimal recognition results when the camera operates in low-light conditions or encounters complex backgrounds. This study introduces an innovative gesture recognition system for large movements that combines millimeter wave radar with a thermal imager; a multi-color conversion algorithm improves palm recognition on the thermal imager, together with deep learning approaches that improve its accuracy. While the user performs gestures, the mmWave radar captures point cloud information, which is then analyzed through neural network model inference. The system also integrates thermal imaging and palm recognition to effectively track and monitor hand movements on the screen. The results suggest that this combined method significantly improves accuracy, reaching a rate of over 80%.
Introduction
As the COVID-19 pandemic continued to spread, the virus posed a risk of transmission through various routes, including droplets and direct contact [1]. While the use of alcohol-based disinfectants and hand washing with soap can help reduce the risk of exposure, these methods may not provide complete protection. With the rapid growth of the Internet of Things (IoT), 5G communication networks, and self-driving cars, car manufacturers and governments are focusing on creating advanced smart vehicle systems that use remote communication and information technologies [2]. Ref. [3] introduced a technology that used millimeter wave radar sensors for gesture recognition to control in-vehicle infotainment systems. This technology provides a safe and intuitive control interface that reduces the possibility of driver distraction, unlike traditional car control panel buttons that require drivers to look away from the road for short periods. Furthermore, due to global technological advancements and medical breakthroughs, people are living longer than before. As a result, elderly care and assistance have become more important. Gesture recognition applications for elderly care are also introduced in the literature [4]. Different gestures are analyzed by Kinect sensors; each gesture is associated with a specific request, such as water, food, toilets, help, or medicine, and is sent as a text message to the caregiver's smartphone to reduce the caregiver's workload. As a result, there is a growing demand for non-contact control measures. Simultaneously, the rapid advancement of artificial intelligence (AI) is transforming various aspects of our daily lives. Many applications now integrate AI to improve convenience; one example is the computer vision-based intelligent elevator information system proposed in [5]. This technology uses face recognition to predict passenger characteristics and traffic flow for energy-efficient elevator management. The system uses object recognition technology, such as the YOLO (You Only Look Once) algorithm [6], to identify the user's face. YOLOv7 is an object detection model launched in July 2022. It offers a 50% reduction in computation and a 40% reduction in parameters, which has effectively improved its accuracy and speed. YOLOv7 outperforms most object detectors in terms of accuracy and speed in the range of 5 FPS to 160 FPS [7]. In recent years, gesture recognition has gained prominence in various applications, including virtual reality (VR), human-computer interaction, and sports medicine [8][9][10][11][12]. Traditional gesture recognition is typically based on two primary methods. The first method involves the use of data gloves fitted with sensors capable of detecting finger movements, which then transmit electronic signals to a computer for gesture recognition [13]. However, this approach requires specific hardware, and the sharing of gloves can introduce the risk of virus transmission.
The second method is vision sensing, which has become the prevailing approach to machine vision in recent years, utilizing captured images for analysis. This method transcends the constraints of two-dimensional image capture with the growing adoption of three-dimensional imaging systems using dual-lens or depth cameras in diverse applications [14]. Additionally, 2D images can suffer from background complexity, occlusion, illumination changes, and fast motion, which can be tackled by 3D cameras with depth information [15]. Contemporary gesture recognition technology mainly depends on a lens combined with deep learning. One method, proposed in [16], uses optical cameras to capture and analyze both dynamic and static gestures. However, this technique requires precise lighting conditions and fails to provide depth information. An alternative strategy utilizes RGB-D depth cameras for gesture classification [17]. Despite this, such devices are unsuited for use in direct sunlight. Privacy concerns can also arise with optical camera-based gesture recognition systems, since users may worry about unauthorized image capture or malicious use without their consent.
By contrast, miniature radar sensors offer a potential solution to overcome the limitations associated with cameras. Ref. [18] highlighted the importance of dynamic gesture recognition, which is an important part of human-computer interaction. The practical application of a gesture recognition system involves recognizing various dynamic and continuous gestures [19]. However, the extraction of gesture features may be affected by changes in ambient light and background. Ref. [20] proposed using millimeter wave (mmWave) radar for recognizing large-motion gestures. Millimeter wave radar is a radar system that operates in the millimeter wave frequency band, mainly using short-wavelength electromagnetic waves [21]. A linear FM signal, also known as Frequency Modulated Continuous Wave (FMCW), is a sine wave signal whose frequency increases linearly over time. FMCW millimeter wave radar technology is known for its ability to provide accurate depth information and is less susceptible to temperature changes. It is particularly useful for measuring challenging environments such as occluded areas, foggy conditions, and both indoor and outdoor scenarios [22]. Ref. [23] also demonstrated the interference immunity of millimeter wave radar by performing imaging tests in a low-visibility, smoky environment. The use of millimeter wave radar also allays privacy concerns. The research in [24,25] laid the foundation for using millimeter waves to recognize gestures. To classify different gestures, deep neural networks have been widely used for multi-class classification tasks [26][27][28]. Google Soli has utilized the range-Doppler (RD) spectrum for gesture recognition using 60 GHz frequency-modulated continuous wave (FMCW) radar sensors. Soli has brought this technology into the context of micro-gesture sensing, wearables, and smart devices, proposing a 60 GHz millimeter wave FMCW radar for detecting fine-grained gestures, capable of detecting four gestures from a single user. However, the proposed algorithm requires substantial computing power and is mainly used on PCs. In 2022, we released a millimeter wave-based gesture-controlled smart speaker to address the privacy problem in smart homes. The smart speaker uses millimeter wave radar and can be redirected to any nearby location with a clapping sound. We can instantly analyze and classify five dynamic arm gestures by running a deep neural network (DNN) on a small but powerful computer, the NVIDIA Jetson Nano development kit. The result of the gesture recognition triggers the corresponding music control [29]. As a result, more research is being conducted on hand gesture recognition using cost-effective and miniature radar sensors. With the advancement of smart camera technology, cameras have found a wide range of applications, such as surveillance systems, facial recognition, and more [30]. However, there is increasing concern that cameras may violate people's privacy by capturing unwanted images. In this regard, Kim proposed a method to protect privacy by blurring unwanted areas of the image, such as faces [31]. The use of cameras also suffers from degraded recognition rates in situations where lighting is too bright or insufficient. Amidst the COVID-19 pandemic, numerous organizations opted to incorporate infrared thermal imaging cameras equipped with advanced AI face detection software to gauge body temperature. These cameras are capable of detecting and measuring the infrared radiation energy released from an object's surface and converting the infrared radiation energy distribution into a visual image. Correspondingly, the device facilitates temperature measurements in dimly lit settings without being affected by light-related inconsistencies. Moreover, the data are processed through image processing to produce images exhibiting specific color distributions. This procedure mitigates the risk of data leakage and resolves issues related to optical cameras. This study introduces a gesture recognition system for large motions that fuses millimeter wave radar and a thermal imager, along with the integration of deep learning. A millimeter wave radar sensor is typically good at detecting radial motion, while a thermal imager can detect lateral motion; this complementarity motivates the fusion of measurements from both sensors to improve the accuracy of gesture recognition. This approach not only eliminates the need for physical contact with devices but also alleviates privacy anxieties linked to facial recognition cameras.
The rest of the paper is organized as follows: In Section 2, we introduce hand image recognition using a thermal imager and point cloud data collection by millimeter wave radar, and discuss the processing flow and neural network training for hand gesture recognition. In Section 3, we present our experimental results and compare the effectiveness of two methods: millimeter wave alone and millimeter wave combined with a thermal imager. Section 4 discusses the limitations of this study. Finally, Section 5 provides a summary of the paper.
Materials and Methods
In our study, we used a Lepton 3.5 thermal imager with a resolution of 160 × 120 pixels to capture hand images [32]. The module is capable of Long-Wave Infrared (LWIR) detection, which senses the infrared radiation energy emitted by an object, converts the energy into temperature, and then creates and displays a color image. Joybien Batman's BM201-PC3 mmWave radar module [33] is used to collect hand point cloud information. The millimeter wave module uses a Texas Instruments (TI) IWR6843 mmWave sensor as its core and FMCW (Frequency Modulated Continuous Wave) radar technology. It operates primarily in the 60 GHz to 64 GHz band and has a continuous FM bandwidth of 4 GHz. The module uses four receive antennas and three transmit antennas, which can be used for range and speed measurement. After collecting the hand data, we use the Jetson Xavier NX [34] for data processing and gesture recognition. The Jetson Xavier NX is a System on Module (SOM) that is only 70 mm × 45 mm in size. It delivers accelerated computing of up to 21 TOPS while consuming only 10-20 watts, allowing the user to simultaneously run multiple advanced neural networks and process data from multiple high-resolution sensors, which helps us perform gesture recognition faster.
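For context, the range resolution an FMCW radar can achieve follows directly from its sweep bandwidth; a minimal Python sketch is shown below (the formula is standard FMCW theory rather than a figure quoted from the paper):

```python
# Range resolution of an FMCW radar: delta_R = c / (2 * B).
# With the module's 4 GHz continuous FM bandwidth this gives about 3.75 cm.
C = 3.0e8                    # speed of light, m/s
B = 4.0e9                    # FMCW sweep bandwidth, Hz
delta_r = C / (2.0 * B)      # range resolution, m
print(f"range resolution: {delta_r * 100:.2f} cm")  # -> 3.75 cm
```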
The overall system architecture of the study is shown in Figure 1. The BM201-PC3 mmWave radar collects point cloud data immediately when a user makes a gesture. These data are processed on the Jetson Xavier NX embedded evaluation board, which performs a real-time analysis of the time-series results and recognizes the five periodic dynamic gestures we designed, as shown in Figure 2. To begin with, we trained the hand image recognition model using YOLOv7. Afterward, the model is integrated into the Jetson Xavier NX, and the Lepton 3.5 thermal imager is utilized to capture real-time hand image data. The Jetson Xavier NX then conducts a real-time analysis of hand image movements, which produces time-series data. After analyzing the data obtained from both the millimeter wave radar and the thermal imager, the resulting gesture is communicated to the user through audio feedback.
Image Processing of Hand Infrared Image
The temperature range for human palms is typically between 30 and 35 °C. In this study, two color conversion approaches are proposed: single-color and multi-color. In the single-color approach, the thermal imager detects the infrared radiation energy emitted by the object and converts it into temperature data. Pixels below 30 °C are filtered out and appear colorless, while pixels above 30 °C are converted to red and displayed. However, this approach has some limitations. High body or room temperatures can cause multiple regions in the image to appear red, leading to unclear or non-existent hand image features, as shown in Figure 3. To address this issue, a multi-color conversion method is proposed. The test was conducted in an indoor winter environment with a room temperature of approximately 22-26 °C. Observations showed that the measured hand temperature typically ranged between 30 and 36 °C during periods of minimal hand movement. However, when there was direct sunlight or a computer in the room, the thermal imager recorded temperatures above 36 °C. The multi-color conversion method is designed to reduce the effect of environmental factors that could obscure hand features, thereby improving the clarity of the hand image. When displaying the palm temperature, pixels below 30 °C are filtered out and remain black. Pixels with temperatures between 30 and 32 °C are displayed in red, while those between 32 and 34 °C are displayed in orange. Yellow pixels represent temperatures between 34 and 36 °C, and purple pixels represent temperatures above 36 °C. Details of the multi-color conversion are shown in Table 1.
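A minimal NumPy sketch of the multi-color conversion is given below; the temperature bands follow the text, while the exact RGB values for red, orange, yellow and purple are illustrative assumptions rather than the paper's palette:

```python
import numpy as np

# Temperature bands from the text (Table 1); the RGB triples are
# illustrative assumptions, not the paper's exact colors.
BANDS = [
    (30.0, 32.0, (255, 0, 0)),      # red
    (32.0, 34.0, (255, 165, 0)),    # orange
    (34.0, 36.0, (255, 255, 0)),    # yellow
    (36.0, np.inf, (128, 0, 128)),  # purple
]

def multi_color_convert(temp_map: np.ndarray) -> np.ndarray:
    """Map a 160x120 temperature array (deg C) to an RGB image.

    Pixels below 30 deg C are filtered out and remain black.
    """
    h, w = temp_map.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)  # below 30 deg C stays black
    for lo, hi, color in BANDS:
        mask = (temp_map >= lo) & (temp_map < hi)
        rgb[mask] = color
    return rgb
```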
Thermal Palm Image Detection Model
In this research, the YOLOv7 model is used to train the palm detection model. Two different datasets were compiled: one consisting of single-color converted images derived from the thermal imager, the other of multi-color converted images. The original image captured by the thermal imager was a jpg file with a resolution of 160 × 120 pixels, but for easier viewing and labeling, we scaled the image to 400 × 300 pixels. Due to the similarity in palm images captured by the thermal imager, both datasets contain a total of 437 photographs taken by the same person. Before training, the objects were labeled using an image labeling tool, with the designated classification categories including palm image and person, as shown in Figure 4. Data enhancement and dataset segmentation were performed using the Roboflow website. The dataset is split with 95% allocated to training and 3% and 2% allocated to testing and validation, respectively. The palm model is then trained. Training of the YOLOv7 hand image recognition model was performed using Google Colab. Since the image training set is not a huge dataset, 80 iterations are sufficient for the loss function to converge. We set a batch size of 8 for 80 iterations. The resulting palm image recognition model was then saved.
Gesture Point Cloud Data of mmWave
When millimeter wave radar detects a moving object, it generates point cloud information. This section describes how to use point cloud data from mmWave radar for neural network training, as illustrated in Figure 5. First, we collect the point cloud information for each gesture, followed by pre-processing to generate time-series feature data. Finally, these data are imported into the neural network for model training, resulting in the generation of the model file. Figure 6 shows the visual point cloud image of the mmWave radar detecting movement within the current range in the mmWave point cloud measurement screen. Specifically, Figure 6b shows the point cloud information resulting from rapid forward and backward hand movements. The box indicates the measurement range of the mmWave radar.
Point Cloud Data Pre-Processing
In this section, we discuss the pre-processing methods used for the point cloud data obtained from the mmWave radar. The objective is to filter out environmental noise, retain the hand point cloud data, and extract its time-series signature information. The point cloud data go through a series of processing steps, including superposition, a maximum speed limit, an initial density-based spatial clustering of applications with noise (DBSCAN), registration, K-means, a secondary DBSCAN, and the extraction of time-series feature data.
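A hedged scikit-learn sketch of the per-frame filtering stages follows; the superposition and registration steps are omitted, and all parameters (eps, min_samples, the speed threshold, and the heuristic that the faster-moving cluster is the hand) are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

def preprocess_frame(points: np.ndarray, max_speed: float = 2.0) -> np.ndarray:
    """Filter one frame of (x, y, z, v) mmWave points down to the hand cluster."""
    # Maximum speed limit: discard points with implausibly high radial speed.
    points = points[np.abs(points[:, 3]) <= max_speed]
    if len(points) < 5:
        return points

    # First DBSCAN: drop sparse environmental noise (label -1).
    labels = DBSCAN(eps=0.15, min_samples=5).fit_predict(points[:, :3])
    points = points[labels != -1]
    if len(points) < 10:
        return points

    # K-means with k=2: separate the hand cluster from the body cluster;
    # assume the faster-moving cluster is the hand.
    km = KMeans(n_clusters=2, n_init=10).fit(points[:, :3])
    speeds = [np.abs(points[km.labels_ == k, 3]).mean() for k in range(2)]
    hand = points[km.labels_ == int(np.argmax(speeds))]

    # Second DBSCAN: tighten the hand cluster by removing residual outliers.
    labels = DBSCAN(eps=0.1, min_samples=3).fit_predict(hand[:, :3])
    return hand[labels != -1]
```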
mmWave Gesture Detection Model
During this research, three different types of neural networks were built using PyTorch: recurrent neural networks (RNNs), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRUs). The recurrent neural network [36] differs from the general feedforward neural network in that message transfer in a feedforward network flows in only one direction, whereas in a recurrent neural network the output value of a neuron at the current stage is fed back into the network and used as input for the next stage or for other neurons. In other words, the results of the current stage of message processing are retained in the network as a reference for the next stage. Because of this feature, the recurrent neural network has short-term memory and can find temporal relationships in the data, and it is widely used in natural language processing, handwriting recognition, time-series prediction, etc. Although the RNN has short-term memory, its disadvantage is that when the input sequence is too long, it is prone to vanishing and exploding gradients. Compared with the RNN, LSTM can handle long time-series data and mitigates the vanishing gradient problem; however, due to the larger size of its neural network model, computation takes longer and data processing is more time-consuming. GRU is also a variant of the RNN, but its structure is simpler than LSTM, which makes GRU faster in execution and computation. Therefore, we selected these three neural networks for our study. One person recorded 200 frames of point cloud data for each gesture, resulting in a total of 16 samples; the total number of point cloud frames for the five gestures is 16,000. After pre-processing, each sample generated 181 time-series feature data. Consequently, each gesture produced 2896 time-series feature data after pre-processing, resulting in a total of 14,480 time-series feature data for the five gestures. After the model was trained, it was imported into the Jetson Xavier NX for gesture recognition. When the program is turned on, the mmWave radar is in standby mode. If an object is detected within one meter, the program pauses. After collecting 20 frames of point cloud data, the speaker emits a "stop" voice signal, and the collected point cloud data are saved. The Jetson Xavier NX then processes the data and extracts the time-series signature information, which is fed into the neural network model. Gesture predictions are made, and the predicted gesture speech is played back.
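As a rough illustration of the classifier side, the following PyTorch sketch builds a GRU over per-frame feature vectors; the hidden size and feature dimension are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class GestureGRU(nn.Module):
    """Minimal GRU classifier sketch for time-series gesture features.

    n_features and hidden are illustrative assumptions.
    """
    def __init__(self, n_features: int = 4, hidden: int = 64, n_gestures: int = 5):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_gestures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features); classify from the final hidden state.
        _, h_n = self.gru(x)
        return self.fc(h_n[-1])

model = GestureGRU()
logits = model(torch.randn(8, 20, 4))  # e.g., 20 frames of (x, y, z, mean speed)
pred = logits.argmax(dim=1)            # predicted gesture index per sample
```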
Results
In this section, we explain the results of the research, which are divided into three parts. These cover the training results for hand image recognition using the thermal imager and the training results for the gesture recognition model. Within gesture recognition, the discussion is further divided into two facets: the results obtained using mmWave radar alone and the results obtained using a hybrid approach combining mmWave radar and the thermal imager.
Thermal Imager Hand Image Recognition
Unlike an RGB camera, the thermal imager uses a special process to capture images. It filters out areas where the temperature falls below a certain threshold and performs color conversion for areas above this threshold, as shown in Figure 10. We made several noteworthy observations during the experiment on hand-image recognition. It was discovered that the performance of the single-color conversion process was affected by both ambient and body temperatures. Specifically, when the ambient temperature increases slightly or the body temperature rises, the colored area in the image merges with the background, as illustrated in Figure 12a. This blending of colors led to decreased recognition capability, since the features of the hand image could not be accurately distinguished. The multi-color conversion was also tested, and the result is shown in Figure 12b. During the test, ambient and body temperatures were elevated, causing the colored area in the image to dominate most of the screen. However, the multi-color conversion method prevents the blending of hand image features with the background. This resulted in a significant improvement in recognition rate and accuracy, even when the background is intricate. Table 2 provides a comparative analysis between the single-color and multi-color conversions, demonstrating that multi-color conversion achieves superior accuracy compared to its single-color counterpart.
Gesture Recognition Using mmWave Radar
The point cloud data for the hand gestures undergo pre-processing to isolate the hand's point cloud data. Figure 13 shows the sequential pre-processing results for the counterclockwise gesture, including superposition, the maximum speed limit, the first DBSCAN, alignment of hand and body, K-means separation, and the second DBSCAN. Subsequently, the time-series data of the point cloud are extracted for training purposes. To enhance the efficiency of training and data processing, we make use of normalization techniques. This standardizes the data and makes them more consistent, which in turn leads to better accuracy and faster processing times. The MinMaxScaler is applied to the (x, y, z) coordinates of the time signature data, scaling them to a range of 0 to 1, while the MaxAbsScaler is applied to the average speed of the time signature data, scaling it to the range of -1 to 1. For clarity, we extract six frames from the output to observe the changes in the center of mass of the point cloud, as demonstrated in Figure 14. The red dot represents the current center of mass, while the blue dot represents the previous one. It is apparent that the center of mass for the clockwise gesture shifts in a clockwise circle, while that of the counterclockwise gesture shifts in a counterclockwise circle. In the same way, the right gesture shifts horizontally to the right, and the left gesture shifts horizontally to the left. Lastly, the punch gesture shifts vertically up and down. This study analyzed the results of the mmWave radar's gesture recognition using 14,480 samples from five different gestures. The data were separated randomly into three sets, with 60% for training and 20% each for validation and testing. We trained the GRU, LSTM, and RNN models on these sets over forty iterations. Figure 15 shows the confusion matrix for all models, indicating prediction accuracies of 99.51%, 99.37%, and 81.11% for GRU, LSTM, and RNN, respectively. GRU performed better than the other models. In terms of prediction time, GRU took 0.462 ms, LSTM took 0.483 ms, and RNN took 0.461 ms, with RNN being the fastest; however, its accuracy did not match that of GRU and LSTM. Table 3 illustrates the accuracy rates of the three models during the gesture recognition test. In our testing, we performed each gesture 10 times to determine correct identification. This highlighted the superiority of the GRU model over LSTM and RNN in terms of accuracy and efficiency.
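The two scaling steps can be expressed in a few lines of scikit-learn; the arrays below are placeholders standing in for the extracted time-signature data:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, MaxAbsScaler

# coords: (frames, 3) center-of-mass (x, y, z); speed: (frames, 1) average speed.
coords = np.random.rand(20, 3) * 2 - 1   # placeholder data
speed = np.random.randn(20, 1)           # placeholder data

coords_scaled = MinMaxScaler().fit_transform(coords)  # -> range [0, 1]
speed_scaled = MaxAbsScaler().fit_transform(speed)    # -> range [-1, 1]
```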
Gesture Recognition Using mmWave Radar with a Thermal Imager
Continuing from the previous section, this section provides further details on the gesture recognition technique that combines the mmWave radar and the thermal imager. In addition to the millimeter wave point cloud data, we also extract the normalized time signature data and the coordinate timing changes of the thermal imager for gesture recognition. During the gesture test, the thermal imager uses YOLOv7 to recognize the hand image and record the timing changes in its coordinates. Figure 16 highlights the resulting coordinate changes for the five gestures. The mmWave radar captured 20 frames of point cloud data during gesture recognition. The thermal imager employed for YOLOv7 hand recognition, however, operates at a slower execution speed. This study revealed that the duration necessary for the mmWave radar to acquire 20 frames was equivalent to the time required for the thermal imager to process 12 frames of hand image recognition. In some cases, the thermal imager may not identify the hand image in a given frame, resulting in fewer than 12 frames of recorded time-series data. To incorporate the thermal imager's coordinate timing changes into the mmWave time signature data for gesture model training, interpolation is carried out on the thermal imager's coordinate timing change curve. This expands the data to 20 frames without modifying the waveform. Subsequently, the data are scaled using MinMaxScaler nine times, and the outcomes are fused into 200 frames of time-series data. The model for gesture recognition using both a thermal imager and mmWave is an improvement over the one that relies on mmWave alone. During training, the model utilizes the average mmWave velocity, the time-series variation of the thermal imager coordinates, and the mmWave time-series signature data as inputs. We trained the model using GRU, LSTM, and RNN for 40 iterations. A total of 14,480 data samples were randomly divided into three sets: 60% for training, 20% for validation, and 20% for testing for the five gestures. The model's performance was evaluated using the confusion matrix in Figure 17, which shows that the GRU and LSTM models achieved prediction accuracies of 100%, while the RNN model achieved an accuracy of 98.14%. Table 4 outlines the results of real gesture recognition tests, comparing the accuracy of mmWave technology alone with the combination of mmWave radar and a thermal imager. The analysis reveals a significant improvement in precision when the two technologies are combined. Among the gestures, the clockwise and counterclockwise gestures have a higher accuracy rate than the other three. During the analysis of gesture recognition, we measured the time required to recognize a gesture in the model training test and the time taken after importing the model into the Jetson Xavier NX for actual recognition, as shown in Figure 18. This figure shows that the time needed for real-time recognition on the Jetson Xavier NX is longer than that observed during the model validation test in Google Colab. This is because the embedded system has to process multiple programs at the same time, resulting in increased memory usage and reduced performance.
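The interpolation step that stretches the (up to) 12 thermal-imager frames to match the radar's 20 frames can be sketched with NumPy's linear interpolation; this is a plausible reading of the procedure, not the paper's exact code:

```python
import numpy as np

def expand_to_20(thermal_xy: np.ndarray) -> np.ndarray:
    """Resample an (n_frames, 2) palm-coordinate track to 20 frames.

    Linear interpolation preserves the waveform shape; n_frames may be
    less than 12 when YOLOv7 misses the palm in some frames.
    """
    n = len(thermal_xy)
    src = np.linspace(0.0, 1.0, n)    # original frame positions
    dst = np.linspace(0.0, 1.0, 20)   # target frame positions
    return np.column_stack(
        [np.interp(dst, src, thermal_xy[:, k]) for k in range(2)]
    )
```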
Discussion
Our research indicates that combining millimeter waves and thermal imaging enhances the precision of hand gesture recognition. These results support our hypothesis that the thermal imager can be used to detect hand images while maintaining privacy. However, it is important to acknowledge that the findings of this study are subject to some limitations. For instance, the thermal imager may not be able to capture hand images accurately in situations where the ambient temperature is too high or when the body temperature is close to the ambient temperature, which may limit the universality of our results. Nevertheless, this study has several advantages, including the incorporation of the thermal imager's coordinate timing variations into the millimeter wave time signature data. Millimeter waves can detect objects with relative velocity variations and produce point cloud data, while the thermal imager is particularly good at detecting lateral motion. This helped us identify gesture locations and variations, which improved the accuracy of our results. In terms of gesture design, we found that the clockwise and left gestures, and the counterclockwise and right gestures, have similar swing trajectories, which makes it easy to misjudge gestures. Future research could further analyze gesture design to clarify these issues.
Conclusions
In this study, a large-motion gesture recognition system is developed using deep learning techniques that integrate millimeter wave radar with a thermal imager. The palm image information is captured using an infrared thermal imager, and the coordinate movement changes of the palm on the screen are recorded at the same time. The point cloud data from the millimeter wave radar, including three-axis coordinates and velocity, are integrated and pre-processed to produce time-series data. These data are processed by a neural network for recognizing gestures, and real-time recognition is achieved through the use of the Jetson Xavier NX embedded evaluation board. The results demonstrate that the accuracy of gesture recognition with the combination of millimeter wave radar and the thermal imager is significantly better than with millimeter wave radar alone. As before, the model trained with a Gated Recurrent Unit outperforms the Long Short-Term Memory and recurrent neural network models in gesture recognition tasks. This study advances the development of multimodal gesture recognition systems for applications such as gaming, somatosensory interaction, and virtual reality, highlighting the potential for higher accuracy and performance through the integration of various sensing technologies and deep learning approaches.
Figure 1. The system architecture.
Figure 3. The thermal image at a higher ambient temperature.
Figure 4. The object labeling of (a) palm and (b) person.
Figure 5. The procedure for training a gesture recognition model.
Figure 8. The flow of superimposed point cloud data on the same array.
Figure 11. The loss function of (a) single-color and (b) multi-color.
Figure 12. The hand-image recognition of (a) single-color and (b) multi-color conversion.
Figure 14. The mass of the point cloud for the gesture: (a) clockwise; (b) counterclockwise; (c) left; (d) right; and (e) punch, where the red star represents the current center of mass.
Figure 18. The gesture recognition time of mmWave combined with a thermal imager.
Table 1. Thermal imager temperature versus multi-color conversion.
Table 2. The accuracy of single-color and multi-color conversion.
Table 3. The accuracy of the three models using mmWave radar.
Table 4. The accuracy of the three models using mmWave radar with thermal imager.
"Engineering",
"Computer Science"
] |
MRT discrete Boltzmann method for compressible exothermic reactive flows
An efficient, accurate and robust multiple-relaxation-time (MRT) discrete Boltzmann method (DBM) is proposed for compressible exothermic reactive flows, with both the specific heat ratio and the Prandtl number being flexible. The chemical reaction is coupled naturally with the flow field, and external force is also incorporated. An efficient discrete velocity model with sixteen discrete velocities (and kinetic moments) is introduced into the DBM. With both hydrodynamic and thermodynamic nonequilibrium effects under consideration, the DBM provides more detailed and accurate information than the traditional Navier-Stokes equations. The method is suitable for fluid flows ranging from subsonic to supersonic and hypersonic regimes. It is validated by various benchmarks.
Introduction
Exothermic reactive flows are commonplace in nature and industry and play significant roles in economic and social development all over the world. In fact, more than 80% of utilizable energy in the world is transformed through exothermic reactive phenomena [1]. On the other hand, such flows are associated with environmental problems, accidents or even disasters. For example, atmospheric pollution, global warming and climate change are closely linked to harmful emissions from reactive flows. In particular, fire hazards, which often induce explosion and shock, may cause great danger and damage to human life, property and the environment. Although considerable research has been devoted to these fields, many open issues remain due to their complexity. To be specific, they span a wide range of physicochemical phenomena, interact over various spatio-temporal scales, and involve various hydrodynamic and thermodynamic nonequilibrium behaviours [2][3][4]. Especially for a spacecraft flying from the earth's surface to outer space, where chemical reaction and gravity coexist, the flow covers a wide range of Knudsen numbers and various essential nonequilibrium phenomena. To describe such complex systems, traditional macroscopic models have the benefit of high computing efficiency but cannot capture detailed information accurately, while microscopic models have the merit of an accurate and full description but encounter spatio-temporal constraints because of their high computing costs.
At the mesoscopic level, the lattice Boltzmann method (LBM) may overcome the aforementioned problems [5][6][7][8][9][10][11][12][13][14][15][16]. In the past three decades, the LBM has achieved significant success in the simulation of complex systems, including reactive flows [17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35]. The traditional LBM usually works as an alternative tool to solve macroscopic equations, such as the incompressible Navier-Stokes (NS) equations. Various physical quantities, such as flow velocity and temperature, may be described by different sets of discrete distribution functions. Recently, a novel variant of the LBM, the discrete Boltzmann method (DBM), has emerged as an efficient kinetic model to capture both hydrodynamic and thermodynamic nonequilibrium effects in fluid flows [36,37]. Different from traditional LBMs, the DBM employs only one set of discrete distribution functions to describe various physical quantities, including the density, temperature, velocity, and other high-order kinetic moments, which is in line with the Boltzmann equation. Since 2013, several single-relaxation-time DBMs have been formulated for exothermic reactive flows [38][39][40]. Yet the Prandtl number in those models is fixed at Pr = 1. To overcome this, a multiple-relaxation-time (MRT) DBM was presented [41], in which 24 independent kinetic moments are satisfied by 24 discrete equilibrium distribution functions. These kinetic moments are necessary for the DBM to recover the reactive NS equations in the hydrodynamic limit [41]. Besides, the effects of external force are neglected in that model [41]; however, external forces (such as gravity) often have essential influences upon reactive flows. In the present work, we introduce a new form of the reaction and force terms, and reduce the 24 kinetic moments (and discrete equilibrium distribution functions) to only 16 while still recovering the NS equations. Besides its practical value as an efficient computational tool for the traditional dynamics of complex systems, this model also provides details of nonequilibrium behaviours dynamically and conveniently. We describe the DBM in Section 2, validate it in Section 3, and summarize this work in Section 4.
Discrete Boltzmann method
The discrete Boltzmann equation (DBE) takes the form
\[
\frac{\partial f_i}{\partial t} + \mathbf{v}_i \cdot \nabla f_i = -\sum_j \left(\mathbf{M}^{-1}\right)_{ij}\left[\hat{S}_{j}\left(\hat{f}_j - \hat{f}_j^{eq}\right) + \hat{A}_j\right] + F_i + R_i .
\]
Here \(\mathbf{f} = (f_1, f_2, \cdots, f_N)^{T}\) and \(\mathbf{f}^{eq} = (f_1^{eq}, f_2^{eq}, \cdots, f_N^{eq})^{T}\) denote the discrete distribution functions and their equilibrium counterparts, respectively; \(\hat{\mathbf{f}} = \mathbf{M}\mathbf{f}\) and \(\hat{\mathbf{f}}^{eq} = \mathbf{M}\mathbf{f}^{eq}\) represent the kinetic moments of the discrete distribution functions and their equilibrium counterparts, respectively. \(\mathbf{M}^{-1}\) is the inverse of the square matrix \(\mathbf{M}\); see Appendix A.
The index runs over \(i = 1, 2, \cdots, N\) with \(N = 16\). As shown in Fig. 1, the discrete velocity model is constructed with tunable parameters \(v_a\) and \(v_b\) controlling the magnitudes of the discrete velocities \(\mathbf{v}_i\).
The artificial term \(\hat{\mathbf{A}} = (0, \cdots, 0, \hat{A}_8, \hat{A}_9, 0, \cdots, 0)^{T}\) is used to modify the collision operator. The reason for this modification is as follows. Although the tunable relaxation coefficients \(\hat{S}_i\) seem mathematically independent of each other, coupling may exist among the relaxation processes of the various kinetic modes (\(\hat{f}_i^{ne} = \hat{f}_i - \hat{f}_i^{eq}\)) from the physical point of view. For the sake of a correct description of macroscopic behaviours, we perform the Chapman-Enskog expansion, analyze the consistency of the nonequilibrium transport terms in the recovered hydrodynamic equations, and find a solution for the modification of the collision term. In short, this modification is incorporated in the DBM to recover the consistent NS equations in the hydrodynamic limit; see Appendix A. The artificial term is a function of the velocities \((u_x, u_y)\) and of their first-order partial derivatives with respect to \(x\) or \(y\). These derivatives can be computed by various finite difference schemes; in this work, the central difference scheme is adopted. For example, at the node \((i_x, i_y)\),
\[
\left.\frac{\partial u_x}{\partial x}\right|_{(i_x, i_y)} = \frac{u_x(i_x + 1, i_y) - u_x(i_x - 1, i_y)}{2\Delta x} .
\]
Numerical tests demonstrate that the artificial term does not induce significant numerical problems. Furthermore, the artificial term can be removed for the case \(\hat{S}_5 = \hat{S}_8\) and \(\hat{S}_7 = \hat{S}_9\).
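The central difference evaluation of these velocity derivatives is straightforward; a small NumPy sketch follows (the periodic wrap at the grid edges is an illustrative assumption, since the boundary treatment is not specified here):

```python
import numpy as np

def central_diff_x(u: np.ndarray, dx: float) -> np.ndarray:
    """Second-order central difference of u with respect to x on a 2D grid.

    u is indexed as u[i_x, i_y]; np.roll wraps periodically at the edges,
    which is an assumption for this sketch.
    """
    return (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2.0 * dx)
```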
The force and reaction terms take the form
\[
F_i + R_i = \frac{1}{\tau}\left[f_i^{eq}\!\left(\rho, \mathbf{u} + \mathbf{a}\tau, T + \tau T'\right) - f_i^{eq}\!\left(\rho, \mathbf{u}, T\right)\right] .
\]
Mathematically, the difference of the equilibrium distribution functions over a small time interval is an approximation to the change rate of the distribution functions, based on the assumption \(f_i \approx f_i^{eq}\). The physical reason for Eq. (6) is as follows: it is regarded that neither external force nor chemical reaction changes the density \(\rho\).
The external force affects the hydrodynamic velocity \(\mathbf{u}\) with acceleration \(\mathbf{a}\). Consequently, the velocity changes from \(\mathbf{u}\) into \(\mathbf{u} + \mathbf{a}\tau\) within a small time interval \(\tau\) due to the external force. Meanwhile, the temperature changes into \(T + \tau T'\) on account of the chemical reaction. Specifically, the change rate of energy due to the external force and chemical reaction is
\[
E' = \rho\, \mathbf{u} \cdot \mathbf{a} + \rho Q \lambda' ,
\]
where \(Q\) denotes the chemical heat release per unit mass and \(\lambda'\) the change rate of the reaction process. From Eq. (7) and the definition \(E = \frac{D+I}{2}\rho T + \frac{1}{2}\rho\, \mathbf{u} \cdot \mathbf{u}\), we obtain the change rate of temperature
\[
T' = \frac{2 Q \lambda'}{D + I} ,
\]
where \(D = 2\) stands for the number of dimensions and \(I\) for the number of extra degrees of freedom corresponding to molecular rotation and/or internal vibration. The reaction process \(\lambda\) is defined as the local mass fraction of the chemical product in the mixture. The chemical reaction is controlled by Cochran's rate function, which depends upon the pressure, \(p = \rho T\), through the adjustable parameters \(\omega_1\), \(\omega_2\), \(m\) and \(n\) [42]. Without loss of generality, we choose \((\omega_1, \omega_2, m, n) = (2, 100, 2, 2.5)\) and employ the ignition temperature \(T_{ig} = 1.1\) in this work. The chemical reaction takes place only when \(T > T_{ig}\).
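A minimal sketch of this force/reaction coupling, assuming a simple Euler step and leaving the reaction rate λ′ to be supplied externally by Cochran's rate function, is:

```python
import numpy as np

def force_reaction_update(u, T, a, lam_rate, Q=1.0, tau=1e-5, D=2, I=3):
    """One Euler step of the force/reaction coupling described in the text.

    u: (2,) velocity; T: temperature; a: (2,) acceleration;
    lam_rate: d(lambda)/dt from Cochran's rate function (supplied by the caller).
    The density is unchanged; the force accelerates the flow, and the reaction
    heat Q * lam_rate raises the temperature via T' = 2*Q*lam_rate / (D + I).
    """
    u_new = np.asarray(u) + np.asarray(a) * tau       # velocity change from the force
    T_new = T + tau * 2.0 * Q * lam_rate / (D + I)    # temperature change from reaction
    return u_new, T_new
```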
For the sake of recovering the NS equations, the discrete equilibrium distribution function should satisfy the kinetic moment relations, beginning with the conserved moments
\[
\sum_i f_i^{eq} = \rho , \quad (10)
\]
\[
\sum_i f_i^{eq}\, \mathbf{v}_i = \rho \mathbf{u} ,
\]
together with the energy moment and the remaining higher-order moment relations given in Appendix A. In contrast, all the amplification factors are identical in the SRT model, i.e., Pr = 1, which is only a special case of the MRT model.
It can be found that the discrete Boltzmann equation has a simple form and its algorithm is easy to code. In contrast, the NS equations depend upon both the first-order and second-order partial derivatives of the velocities ($u_x$, $u_y$) with respect to $x$ or $y$, which are nonlinear terms that are relatively difficult to treat [40]. Moreover, NS solvers often need to solve a Poisson equation involving global data transfer, while all spatio-temporal information exchange is local in the DBM, which makes it suitable for massively parallel computing. In addition, the DBM provides an efficient tool to study detailed nonequilibrium effects and/or rarefied effects of gas flows beyond the NS equations by capturing the departures of kinetic moments from their equilibrium counterparts [40,43]. Finally, it is easy to equip the DBM with a proper kinetic boundary condition to describe the velocity slip and the flow characteristics in the Knudsen layer, which cannot be well described by traditional hydrodynamic models [43].
Validation and verification
For validation and verification purposes, four benchmark tests are performed. (i) The chemical reaction in a free-falling box is simulated to verify the effects of the external force and chemical reaction. (ii) The simulation of a detonation wave is carried out to demonstrate the DBM in a case with violent chemical heat release. Additionally, we assess the spatial and temporal convergence of the numerical results. (iii) To verify the DBM for adjustable specific heat ratios and Prandtl numbers, we simulate the Couette flow; moreover, it is demonstrated that the nonequilibrium information provided by the DBM coincides with its analytical solution. (iv) Finally, a typical two-dimensional benchmark, shock reflection, is simulated successfully. Besides, it is demonstrated in the first two tests that the discrete velocity model D2V16 has higher efficiency and better robustness than D2V24 [41]. Note that the second-order Runge-Kutta scheme is adopted for the time derivative, while the second-order nonoscillatory and nonfree-parameter dissipation difference scheme [44] is employed for the space derivative in Eq. (1). It is preferable to set $\Delta t \le 1/\max(\hat{S}_i)$ due to the explicit scheme for the time derivative, where $\max(\hat{S}_i)$ denotes the maximum among the $\hat{S}_i$. The relation between the time step $\Delta t$ and the space step $\Delta x = \Delta y$ should satisfy the convergence conditions. Additionally, variables and parameters used in this paper are expressed in nondimensional form, i.e., the widely accepted LB units [45,46].
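For illustration, a hedged sketch of the time integration; the assembly of the full right-hand side of Eq. (1) is abstracted into a callable `rhs`, which is our own simplification:

```python
def rk2_step(f, rhs, dt):
    """Second-order Runge-Kutta (midpoint) step for df/dt = rhs(f):
    f_mid = f + dt/2 * rhs(f);  f_new = f + dt * rhs(f_mid)."""
    f_mid = f + 0.5 * dt * rhs(f)
    return f + dt * rhs(f_mid)

def max_stable_dt(S_hat):
    """Explicit-scheme guideline from the text: dt <= 1 / max(S_hat_i)."""
    return 1.0 / max(S_hat)
```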
Detonation wave
In order to test the present DBM under conditions with violent chemical heat release, we target the detonation wave. The initial configuration is
$$(\rho, T, u_x, u_y, \lambda)_L = (1.38837,\ 1.57856,\ 0.57735,\ 0,\ 1),$$
$$(\rho, T, u_x, u_y, \lambda)_R = (1,\ 1,\ 0,\ 0,\ 0), \quad (17)$$
where the suffix L indexes the left part, $0 \le x \le 0.05$, and R the right part, $0.05 < x \le 1$, see Fig. 3. The inflow or outflow condition is adopted in the $x$ direction, and the periodic condition is employed in the $y$ direction. The parameters are $I = 3$, $Q = 1$, $(v_a, v_b, \eta_a) = (1.7, 3.7, 3.3)$, $\Delta t = 10^{-5}$, $\Delta x = \Delta y = 10^{-4}$, and $N_x \times N_y = 10000 \times 1$. The collision parameters are $\hat{S}_i = 10^5$, except $\hat{S}_\mu$ (i.e., $\hat{S}_5$, $\hat{S}_6$, $\hat{S}_7$) $= 2 \times 10^4$. The detonation wave travels from left to right with speed $v_s$. The chemical reactant is in front of the detonation wave with $\lambda = 0$, and it changes into the product behind the wave with $\lambda = 1$. Fig. 4 illustrates the propagation of the pressure at successive time instants. The relative differences between the numerical and analytical values of the hydrodynamic quantities behind the wave are (0.05%, 0.04%, 0.08%, 0.02%), respectively. Obviously, the numerical and analytical results coincide well in Fig. 5. The tiny differences between them are due to the fact that the ZND theory ignores viscosity and heat conduction and assumes the von Neumann peak to be a strong discontinuity, which is not the case in reality. The DBM considers viscosity and heat conduction as well as other nonequilibrium effects. Note that, with decreasing collision parameters, the nonequilibrium effects are enhanced, and the differences between the DBM and the analytical solutions become larger [41].
To compare the numerical robustness of D2V16 and D2V24 [41], the aforementioned detonation wave is simulated using both discrete velocity models. Obviously, D2V16 gives a smooth profile around the detonation front, while D2V24 gives an oscillating profile. This nonphysical oscillation is soon amplified and causes the simulation to halt. Moreover, further tests demonstrate that D2V16 is capable of simulating the detonation wave for Mach numbers Ma > 100, whereas it is difficult and even impossible to use D2V24 to simulate such high-Mach systems.
Next, let us assess the spatial and temporal convergence of the DBM results. The spatial convergence is verified considering several values of the space step, $\Delta x = \Delta y = 5 \times 10^{-6}$, $1 \times 10^{-5}$, $2 \times 10^{-5}$, $4 \times 10^{-5}$, $8 \times 10^{-5}$, $1.6 \times 10^{-4}$, with fixed time step $\Delta t = 1 \times 10^{-6}$. The relative difference of the minimum value of $\hat{f}_5^{ne}$ around the detonation wave is chosen as the numerical error; Fig. 7(a) shows that this error decreases as the space step is refined, and the temporal convergence is assessed analogously.

Couette flow

To verify the DBM for adjustable specific heat ratio $\gamma$ and Prandtl number Pr, we simulate the Couette flow; the sketch of its initial configuration is shown in Fig. 8. The space step is $\Delta x = \Delta y = 10^{-3}$, the time step $\Delta t = 5 \times 10^{-5}$, and the parameters $(v_a, v_b, \eta_a) = (1.1, 1.7, 2.3)$. Periodic boundary conditions are employed on the left and right boundaries, and the nonequilibrium extrapolation method is applied to the top and bottom boundaries. When the field reaches the steady state, the temperature differs for various $\gamma$ or Pr, see Fig. 9, which illustrates the temperature $T$ versus $y$ when the Couette flow reaches equilibrium. Fig. 9(a) shows the cases with $\gamma = 1.3$, 1.5, 1.8 and fixed Pr = 1.0; Fig. 9(b) shows the cases with Pr = 0.5, 1.0, 2.0 and fixed $\gamma = 1.5$. The collision parameter $\hat{S}_\mu$ is $2 \times 10^3$ for Pr = 0.5, $1 \times 10^3$ for Pr = 1.0, and $5 \times 10^2$ for Pr = 2.0; the other collision parameters $\hat{S}_i$ are $1 \times 10^3$. The symbols represent the DBM results and the lines denote the corresponding analytical solutions. Clearly, the numerical and analytical results coincide well with each other; hence, the DBM is capable of capturing the flow field in the dynamic process of the Couette flow. In panel (b), the lines stand for the analytical solutions, and the DBM results are in good agreement with the analytical values. That is to say, the DBM describes the nonequilibrium behaviours accurately.
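The convergence check can be summarized by the observed order of accuracy, i.e., the slope of log(error) against log(Δx); in the sketch below the error values are synthetic placeholders for illustration only, not data from the paper:

```python
import numpy as np

dx = np.array([5e-6, 1e-5, 2e-5, 4e-5, 8e-5, 1.6e-4])
err = 2.0e7 * dx**2          # synthetic, exactly second-order errors
order, _ = np.polyfit(np.log(dx), np.log(err), 1)
print(f"observed order of accuracy: {order:.2f}")   # ~2.00 for this data
```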
Shock reflection
For the purpose of verifying the model for two-dimensional systems, we use a typical benchmark: regular shock reflection. The computational domain is a rectangle. The reflecting surface is imposed on the bottom, the supersonic outflow condition is adopted for the right boundary, and Dirichlet conditions are utilized on the top and left boundaries; interested readers may refer to Ref. [41] for more details of the initial configuration. The parameters are $N_x \times N_y = 300 \times 100$, $\Delta x = \Delta y = 10^{-3}$, $\Delta t = 5 \times 10^{-6}$, $I = 2$, $(v_a, v_b, \eta_a) = (1.7, 2.9, 3.0)$. The collision parameters are $\hat{S}_\mu = 1.8 \times 10^5$, and $2 \times 10^5$ for the others. Fig. 11 exhibits the density contour of the steady regular shock reflection. Theoretically, the angle between the incident shock wave and the wall is $\varphi = \pi / 6$, while the DBM gives the angle $\varphi = \arctan(0.1/0.173)$. The relative difference between them is only 0.1%, which is satisfactory.
Conclusions
We present an MRT DBM for compressible flows, taking both chemical reaction and external force into account. Both the specific heat ratio and the Prandtl number are adjustable. This model recovers the reactive NS equations in the hydrodynamic limit.
Meanwhile, thermodynamic nonequilibrium effects are dynamically taken into account through considering the departures of kinetic moments from their equilibrium counterparts. In fact, the nonequilibrium effects together with their relaxation parameters play a crucial role in fluid systems.
Compared with a previous MRT DBM where 24 discrete velocities (and kinetic moments) are employed to couple the chemical reaction with fluid flows [41], our model requires only 16 discrete velocities (and kinetic moments) and thus less computing effort. Compared to another MRT DBM incorporating only a conventional force term [37], our model introduces a new form for both the force and reaction terms, which is physically more general. In this paper, we also demonstrate that the present model provides high computational efficiency, physical fidelity, and numerical robustness.

Appendix A

The first nine elements of $\hat{F}$ and $\hat{R}$ are obtained, i.e.,
$$\hat{F}_6 = \rho u_x a_y + \rho u_y a_x, \qquad \hat{F}_7 = 2 \rho u_y a_y,$$
$$\hat{F}_8 = 2 \rho u_x \left( u_x a_x + u_y a_y \right) + \rho a_x u^2 + \rho a_x (D + I + 2) T,$$
$$\hat{F}_9 = 2 \rho u_y \left( u_x a_x + u_y a_y \right) + \rho a_y u^2 + \rho a_y (D + I + 2) T.$$
Substituting the variables' expansion into the kinetic moment relations yields Eq. (A.14). From Eqs. (A.3) to (A.5), we get the evolution equation for $\partial \xi / \partial t$ (Eq. (A.25)), where $j_\alpha = \rho u_\alpha$ is the momentum in the $\alpha$ direction, and $\xi = (D + I) \rho T + \rho u^2$ is twice the total energy.
"Engineering",
"Physics"
] |
LAGRANGE MULTIPLIERS IN THE PROBABILITY DISTRIBUTIONS ELICITATION PROBLEM: AN APPLICATION TO THE 2013 FIFA CONFEDERATIONS CUP
Contributions from the sensitivity analysis of the parameters of the linear programming model for the elicitation of experts' beliefs are presented. The process allows for the calibration of the family of probability distributions obtained in the elicitation process. An experiment to obtain the probability distribution of a future event (Brazil vs. Spain soccer game in the 2013 FIFA Confederations Cup final game) was conducted. The proposed sensitivity analysis step may help to reduce the vagueness of the information given by the expert.
INTRODUCTION
"It is notable that the probability that emerged so suddenly is Janus-faced. On the one side it is statistical, concerning itself with stochastic laws of chance processes. On the other side it is epistemological, dedicated to assessing reasonable degrees of belief in propositions quite devoid of statistical background." Ian Hacking (1975) The purpose of a knowledge (or belief) elicitation process is to obtain one or more probability distributions that represent the experts' beliefs in a random event, π(θ). A parameterized distribution is usually assumed to facilitate the elicitation process. A method of elicitation of the experts' knowledge based on linear programming is proposed in Nadler Lins & Campello de Souza (2001) and Campello de Souza (2002). In Nadler Lins & Campello de Souza (2001), emphasis is given to the dificulty indicators of the elicitation process for the random variable case. Campello de Souza (2002) generalizes the model for the case without a random variable.
The contribution of Lagrange multiplier analysis to the model proposed by Campello de Souza (2002) is an unresolved problem and the main objective of this paper. A set of elicitations predicting the Brazil vs Spain soccer match in the 2013 Confederations Cup is conducted. In terms of probabilistic prediction, the method proposed here should be seen as one more option: if the objective were to predict that particular game, the method should be used along with other methods, in the same way that other variables affecting the outcome of soccer matches should be taken into account. In any case, a presentation of the references on Campello de Souza's (2002) method and of the existing methods of soccer prediction, as well as of the differences and similarities involved, is necessary. Among the advantages of the method proposed by Campello de Souza (2002), one can highlight the fact that it is compatible with other views of probabilistic representation, as well as the possibility of answering other soccer questions, such as the question of alternative rankings; it is evident that changes in the elicitation questionnaire would be necessary in that case. Studies by Campello de Souza (1983) and Campello de Souza (1986) are classical references regarding probabilistic preferences with triangle inequalities for obtaining inaccurate and imprecise preferences, which have recently been proposed in the analysis of conflict stability (Santos & Rêgo, 2014; Rêgo & Santos, 2015).
For this purpose, this paper is organized as follows: in Section 2, the elicitation process proposed in Campello de Souza (2002) is reviewed; in Section 3, the sensitivity analysis and its contributions to the elicitation process and model calibration are presented; in Section 4, an application with 23 students of Economics to elicit the probability of a future event (the Brazil vs Spain soccer game in the final of the 2013 FIFA Confederations Cup) is presented; finally, the article ends with conclusions and a view into future work in Section 5. In (Keynes, 1979), Keynes supports the hypothesis that, in the long run, we will all be dead and that historical data which would allow making predictions about our future will never exist. Thus, when there is little or no data, the expert's a priori knowledge should be used.
A general model that can represent all aspects of random phenomena is still far from being achieved. The method for obtaining probability distribution families proposed by Campello de Souza (2002) can be placed among the imprecise probability models (Walley, 1991).
The elicitation method of the expert's a priori distribution has as its basic assumption the fact that the expert has vague knowledge about the probability distribution of the random event of interest, π(θ). It is also assumed that the expert can make "only a finite number of comparative probabilistic assertions" when answering questions about the likelihood of a random variable taking a value in one of two ranges. The method leads to expressing the expert's knowledge as families of probability distributions, reflecting the human being's natural limitations. Thus, the expert's knowledge can be represented by a set of probability distributions bounded by "a stochastically greater distribution than all other distributions compatible with the answers that have been given", as well as by a distribution stochastically lower than all others.
Initially, an elicitation questionnaire is proposed. Considering the case where the state of nature, θ, is a real and continuous parameter, the plausible range for θ should be established, in other words, [θ_min, θ_max), where the probability that the value of θ is out of this range is zero. The range is partitioned into 2n subintervals. Then, π_j = Pr(θ ∈ [θ_{j−1}, θ_j)) is defined, where j = 1, . . . , 2n − 1.
The model consists of solving two linear programming problems: first a maximization problem and then a minimization one, subject to the same set of constraints obtained from the expert's survey responses. Mathematically, they can be expressed in the generic form
$$\max\ (\min)\ \sum_{j} c_j \pi_j$$
subject to:
$$(-1)^{f(r)} \left( \pi_{k(r)} - a_r\, \pi_{l(r)} \right) \le b_r, \quad r = 1, \ldots, q, \qquad \pi_j \ge 0, \qquad \sum_j \pi_j = 1,$$
where k(r) < l(r), a_r > 0, r = 1, . . . , q, q is the number of questions answered by the expert, and f(r) ∈ {0, 1} depends on the r-th response to the expert's questionnaire.
Depending on the combination of the parameters a_r and b_r, the expert's opinion can be captured in many ways. The last two restrictions ensure that one obtains a probability distribution.
The coefficients c_j may be chosen as sums of areas under the cumulative probability distribution. In this case, the goal would be to minimize the expected value when solving the maximization problem, and to maximize the expected value when solving the minimization problem. This is made possible by considering c_j = 2n − j + 1. Using the fact that θ_j = θ_0 + j·a, where a = θ_j − θ_{j−1}, it can be shown (Campello de Souza, 2002) that maximizing Σ_j c_j π_j corresponds to extremizing the expected value of θ. Obviously, the choice of other values for c_j will produce different results. The family of probability distributions defined by solving the optimization problem is, in principle, smaller than the set of all possible distributions compatible with the expert's responses. If the feasible set of the optimization problem is empty, it means that the expert was not consistent in his responses. The questions not answered by the expert do not enter the constraints of the linear programming problem.
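A sketch of the two linear programs using SciPy; this is our encoding, with each answered question contributing one row of `A_ub` in the ≤ orientation, and the function name is ours:

```python
import numpy as np
from scipy.optimize import linprog

def elicit_bounds(c, A_ub, b_ub):
    """Solve max and min of sum(c_j * pi_j) under the expert's constraints.
    Non-negativity is linprog's default bound; sum(pi) = 1 is added here."""
    n = len(c)
    A_eq, b_eq = np.ones((1, n)), [1.0]
    hi = linprog(-np.asarray(c), A_ub=A_ub, b_ub=b_ub,
                 A_eq=A_eq, b_eq=b_eq, method="highs")
    lo = linprog(np.asarray(c), A_ub=A_ub, b_ub=b_ub,
                 A_eq=A_eq, b_eq=b_eq, method="highs")
    if not (hi.success and lo.success):
        raise ValueError("empty feasible set: the answers are inconsistent")
    return hi.x, lo.x   # the distributions attaining the max and the min
```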
It is now possible to present the dual model and perform a sensitivity analysis of the elicitation model.
THE DUAL MODEL: LAGRANGE MULTIPLIER AND SENSITIVITY ANALYSIS
Another mathematical programming problem, known as the dual problem, can be associated with any constrained mathematical programming problem (known as the primal problem). The optimal values of both problems must be the same. The dual problem arises from the use of auxiliary variables, known as Lagrange multipliers, which are used to incorporate the restrictions into the objective function of the problem.
Considering the model presented in the previous section, the aim is to obtain the distributions which provide the maximum and the minimum expected value by making c j = 2n − j + 1.
The choice of the objective function of the problem could be another one, but in this case, the problem can be considered a first order estimation; in other words, there is an attempt to estimate the mean of the distribution. However, other objective functions can be used, such as the variance or the entropy of the distribution. Any of these other functions would make the optimization problem nonlinear. Another way to incorporate the other quantities into the problem is for the researcher to use them in desired restrictions. Again, the problem would become nonlinear. The interpretation of the Lagrange multiplier in the dual problem depends on the choice of the objective function, the linear case being the one analyzed here.
The Simple Case
The behavior of the model is observed in a case with only two outcomes in which a single question is answered by the expert. The model can then be written as follows:
$$\max\ (\min)\ c_1 \pi_1 + c_2 \pi_2$$
subject to:
$$\pi_1 - a\, \pi_2 \le b, \qquad \pi_1 + \pi_2 = 1, \qquad \pi_1, \pi_2 \ge 0.$$
For the case of maximizing and minimizing the expected value, the coefficients are c_1 = 2 and c_2 = 1. The dual problem can be written as
$$\min\ b \lambda_1 + \lambda_2$$
subject to:
$$\lambda_1 + \lambda_2 \ge 2, \qquad -a \lambda_1 + \lambda_2 \ge 1, \qquad \lambda_1 \ge 0.$$
Studying the primal model for the case where b = 0: as there are only two constraints, this problem can be represented as in Figure 1. Considering the case where constraint (6) is of type ≤, since the slope of the objective function is equal to −2, the maximum is obtained at the point where constraint (6) is satisfied with equality, the values that maximize the objective function being π_1 = a/(a+1) and π_2 = 1/(a+1). Thus, in this case, the value of the Lagrange multiplier λ_1 is positive. In the second case, where the problem is to minimize the expected value, the optimum is attained at π_1 = 0 and π_2 = 1; consequently, λ_1 = 0. The dashed line between the points (π_1, π_2) = (a/(a+1), 1/(a+1)) and (π_1, π_2) = (0, 1) represents the feasible set. When a Lagrange multiplier is zero for some question, the interpretation is that this question does not contribute at all to obtaining the optimal distribution, the answer to the corresponding question being deducible from the other answers.
An Interpretation for the Lagrange Multiplier
In Economics, linear programming models are typically used as tools for interpreting economic phenomena. This is a typical case where economic thinking produces scientific knowledge. In most cases, the analogy is made with physics or biology, but it can also be made with economic thinking. The objective function is what is desired. In the case of the firm, the objective function can be understood as the profit of a company, coefficients a r representing the coefficients of a technological matrix, the value of b r being the availability of an input, and the interpretation made of the Lagrange multiplier is that it reflects the opportunity cost, in terms of profit, of not having more of one of the inputs.
In the linear programming problems within this article, the Lagrange multipliers measure how much the expected value could increase or decrease if it were possible to increase the difference between the probabilistic masses of the intervals presented in a question put to the expert. In these problems, the objective is the expected value, and the value of b_r reflects the maximum value stated by the expert for the difference between the odds ratios of the probabilistic masses of the two given intervals.
If b = 0, then the coefficient a represents exactly the odds ratio between the two probabilistic masses, and in this case the expert states no positive value for the difference. However, this does not mean that there is no opportunity cost related to this question. If the inequality constraint becomes an equality constraint, in other words, in the example above π_2 = (1/a) π_1, there will be an associated cost, since the expected value will not be higher in the case of minimization, nor lower in the case of maximization, for lack of a positive stated value for the difference between the odds ratios of the probabilistic masses of the two given intervals. If the expert could refine this information by introducing a positive value for that difference, the model would provide a smaller or equal gap between the maximum and minimum expected values, increasing the accuracy of his elicited beliefs.
When λ ≠ 0, the corresponding answer to the question is informative about the phenomenon, and the question is considered active. If the number of active questions is large, it means that a large number of questions in the questionnaire contribute to obtaining the distributions.
The General Case
To facilitate the presentation of the general case of the dual problem, the inequalities will always be considered as ≤; it is easy to see that, regardless of the answer given by the expert, it is possible to transform each constraint into an inequality of this type. The linear programming problem described in (1)-(4) can be put in matrix form to make its presentation easier:
$$\max\ (\min)\ c\,\pi \quad \text{subject to} \quad A\pi \le b, \quad \mathbf{1}\pi = 1, \quad \pi \ge 0,$$
where c = (c_1, . . . , c_{2n}), π^T = (π_1, . . . , π_{2n}), b^T = (b_1, . . . , b_q), 1 = (1, . . . , 1), and A is the q × 2n matrix whose r-th row encodes the r-th answer. Therefore, the problem proposed in (Campello de Souza, 2002) can be rewritten with its respective dual as follows:
$$\min\ \lambda_r b + \lambda_s \quad \text{subject to} \quad \lambda_r A + \lambda_s \mathbf{1} \ge c, \quad \lambda_r \ge 0,$$
where λ_r = (λ_1, . . . , λ_q) and λ_s refer to the inequality and equality constraints, respectively.
Sensitivity Analysis
The sensitivity analysis of the parameters consists of assessing the changes in the optimal value of the objective function of the problem, given an infinitesimal change in the values of a_r or b_r. This paper follows the presentation in (Intriligator, 1971) in the use of the Lagrangean function for sensitivity analysis. The Lagrangean function at the optimum point is
$$L(\pi^*, \lambda_r^*, \lambda_s^*) = c\,\pi^* + \lambda_r^* (b - A\pi^*) + \lambda_s^* (1 - \mathbf{1}\pi^*).$$
Then the following proposition can be obtained: given an optimal solution (π*, λ_r*, λ_s*) to the problem, if λ_r* ≠ 0 for some restriction r, then
$$\frac{\partial F^*}{\partial b_r} = \lambda_r^* \qquad \text{and} \qquad \frac{\partial F^*}{\partial a_r} = \lambda_r^*\, \pi_{l(r)}^*,$$
so that a marginal change in a_r can never outweigh the same marginal change in b_r, since 0 ≤ π*_{l(r)} ≤ 1. The demonstration for the case of minimization is similar. The immediate consequence of the above proposition is that if the goal is to decrease the length of the interval between the minimum and maximum expected values of the variable θ, where max F = E̅(θ) and min F = E̲(θ), then a marginal change in b_r, i.e., in the group of the third statements, is more relevant than the same marginal change in a_r, i.e., in the group of the second statements.
The Example of a Soccer Game
In Walley (2000), the author presents an example of statements about the possible outcomes of a soccer game to compare models of imprecise probability. In a soccer game, there are three possible outcomes: victory, W, draw, D, and defeat, L. Consider an expert making three qualitative judgments about the possible outcomes of the game: 'not winning' is more likely than winning; winning is more likely than losing; and a draw is more likely than losing. To implement the elicitation method proposed by Campello de Souza (2002), it is necessary to define the objective function. Since a random variable which allows for the calculation of the expected value is not defined, some possibilities may be suggested. There are two types of soccer competitions: consecutive-points championships, where the winning team is granted three points, a draw gives one point to each team, and a defeat gives zero to the beaten team; and the second type, known as a cup, with knockout confrontation in at least one of its phases, in which only one team remains in the competition, making it a zero-sum game. In the former situation, the objective function can be written as X = 3I_W + I_D, which gives the number of points scored in a match, whereas in the latter, it can be written as X = 3I_W − 3I_L. The latter function was chosen for the calculation of the distributions in the experiment. Thus, the linear programming model, according to the method proposed by Campello de Souza (2002), for the example by Walley (2000), can be written as follows:
$$\max\ (\min)\ E(X) = 3\pi(W) - 3\pi(L)$$
subject to:
$$\pi(W) \le \pi(D) + \pi(L), \qquad \pi(L) \le \pi(W), \qquad \pi(L) \le \pi(D),$$
$$\pi(W), \pi(D), \pi(L) \ge 0, \qquad \pi(W) + \pi(D) + \pi(L) = 1.$$
As seen in Section 2, E(X) should be maximized as well as minimized in order to obtain the set of probability distributions. First, the expert's statements must be checked to yield a nonempty feasible set, proving them consistent. The distributions that maximize (minimize) the expected value are shown in Table 1. Aspects of the Lagrange multipliers' behavior with respect to the model constraints had not been examined before. There are four restrictions to be observed: the first three are related to the expert's qualitative judgments, and the last is the equality constraint for obtaining a probability distribution, whose Lagrange multiplier is always active since it is an equality constraint. For the first three statements, the Lagrange multipliers of the maximization problem are (1.5; 0; 0). With this result, only the first judgment is being used to determine the maximum E(X). In the minimization problem, the Lagrange multipliers are (0; −3; 0); in this case, only the second judgment informs the probability distribution that minimizes E(X).
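The Walley example can be checked numerically. The sketch below (ours, not from the paper) solves both LPs with SciPy; recent SciPy versions with the HiGHS backend expose the dual values as `res.ineqlin.marginals`:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 0.0, -3.0])          # E(X) = 3*pi_W - 3*pi_L
A = np.array([[ 1.0, -1.0, -1.0],       # pi_W <= pi_D + pi_L
              [-1.0,  0.0,  1.0],       # pi_L <= pi_W
              [ 0.0, -1.0,  1.0]])      # pi_L <= pi_D
b = np.zeros(3)
ones, one = np.ones((1, 3)), [1.0]

mx = linprog(-c, A_ub=A, b_ub=b, A_eq=ones, b_eq=one, method="highs")
mn = linprog( c, A_ub=A, b_ub=b, A_eq=ones, b_eq=one, method="highs")
print(-mx.fun, mx.x)                    # 1.5 with pi = (0.5, 0.5, 0.0)
print( mn.fun, mn.x)                    # 0.0, e.g. pi = (1/3, 1/3, 1/3)

# Multipliers: for the max problem linprog minimizes -E, so negate its
# marginals to get dE*/db_r = (1.5, 0, 0); for the min problem the
# marginals are dE*/db_r = (0, -3, 0) directly.
print(-mx.ineqlin.marginals, mn.ineqlin.marginals)
```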
The Choice of the Objective Function
Given the possibilities regarding the kind of competition, an interesting problem arises concerning the choice of the objective function. There are two alternative linear programming problems with respect to the representation of points in consecutive-points games: the first problem has already been described, considering the points earned by the home team, X = 3I_W + I_D; the alternative is to consider the earnings of the visiting team, Y = 3I_L + I_D. The events W, D and L are always considered from the point of view of the home team; hence, if the home team loses (L), the visitor gets 3 points. As the method prescribes minimizing and maximizing the objective function, the same set of probability distributions could be expected to be found, but this is not what happens. Despite the fact that both models represent the same situation, the results are different, and this problem of the choice of the objective function can be considered an example of the framing effect (Tversky & Kahneman, 1981). The results for both problems are presented in Table 2.
Study Design
Regarding international competitions between national soccer teams, the greatest interest is in the FIFA World Cup, but a test competition held before the World Cup is the FIFA Confederations Cup. This study refers to the 2013 FIFA Confederations Cup, which, being a test competition, had a small number of participating teams, eight in total: the six confederation champions, the World Cup champion and the host country, namely Mexico, Italy, Tahiti, Japan, Uruguay, Nigeria, Spain and Brazil. Given the competition's format, divided into two phases, a confrontation between Brazil and Spain (the World Champion) could be anticipated to occur only in the final.
Before the beginning of the 2013 FIFA Confederations Cup, an experiment with undergraduate students of Economics was held, with two groups selected from a total of 23 students. The first group consisted of students experienced in the course and, therefore, in probability and game theory, as they were undergraduate seniors. The second group consisted of beginner students in the Economics course who, despite their initial training in probability, did not have complete training in Economics, as they were undergraduate sophomores. There were nine students in the first group, six men and three women. The second group had eight men and six women.
The experiment was divided into three parts, the questions being presented to the students following the types of statements presented in Section 2. Firstly, students responded to probability statements of the first kind; later, of the second kind; and, finally, of the third kind. It was optional for the students not to answer any or all questions of a particular group. All questions referred to the possible match between Brazil and Spain. The first set of questions was about the events W, D and L, representing Brazil's victory, a draw and Brazil's defeat, respectively. Which one of the following events is more likely to occur:
4. W or L?
5. D or L?
The statements about the odds ratios and the differences between probabilistic masses are related to the first five questions and formed the next two parts of the experiment. X = 3I_W − 3I_L was used as the objective function in the linear programming problems. Three linear programming problems were analyzed: the first considered only the first group of statements; the second also considered the second set of statements; finally, the third consisted of all groups of statements. Five restrictions regarding the statements were presented in each problem, as well as the necessary condition for the probabilities to add up to one. The responses of the 23 experts to each set of statements on the five questions above are shown in Table 4 in the Appendix. For instance, Expert 1's answers to the first question in each group of statements are as follows: in the first group, he considered Brazil's defeat more likely than a draw or a victory; the value of the odds ratio of the probability masses, a_1, was equal to 3 in the second group for the same question; and, finally, he considered the upper bound for the difference between the odds ratios of the probability masses, b_1, equal to 0.1. The interpretation of the answers to the other questions and by the other experts, given in Table 4 in the Appendix, is similar.
The results of the three linear programming problems are shown in Tables 5, 6 and 7 in the Appendix. As expected, as information is added to the linear programming problem, the length of the interval between the minimum and maximum expected values of points decreases. Taking the first expert as an example (one of the few who answered the three types of questions without generating any inconsistency): for the first set of information, the expected score lies in [−3.00; −0.75]; considering the odds ratios, the interval shrinks to [−3.00; −2.00]; finally, considering all the questions, the interval gets even smaller, [−2.40; −2.30]. Figures 2 and 3 show the distributions that maximize and minimize the expected points in a match for experts 1 and 14, respectively.
Another point to highlight is that the elicitation method exposes experts' inconsistencies regarding their probabilistic knowledge. Knowledge about football is inherent to most Brazilians, but this does not necessarily imply a bias in favor of Brazil's victory when eliciting Brazilian experts, since at least 9 out of the 23 experts decided for Brazil's defeat in the fourth question.
Finally, a question information index is proposed. When the Lagrange multiplier of a question is active, there is an indication that this question is limiting the growth of the expected values and, consequently, the question tells something about the expert's opinion. Given the experiment with the 23 experts, the proposed index reflects which of the five questions were relevant to the experts' elicited families of probability distributions. The information index is defined as
$$I_x = \frac{L}{2N},$$
where L is the number of times the multiplier is nonzero, considering both the maximization and the minimization problems, and N is the number of experts who answered question x. The information index for each question per group of statements is shown in Table 3.
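A minimal sketch of the computation, assuming the reading I_x = L/(2N) given above (the helper name is ours):

```python
def information_index(multipliers, n_answered):
    """I_x = L / (2N): L counts the nonzero multipliers of question x over
    both the maximization and minimization problems of every expert who
    answered it; N is the number of such experts."""
    L = sum(1 for lam in multipliers if lam != 0)
    return L / (2 * n_answered)
```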
From observing the information index values for the first and second groups of statements, the fourth question was found to be the most informative one, with an information index of 0.3095. However, the information index of this question is much reduced in the second group of statements. On the other hand, the information index of the second question was more stable across the first and second groups, being above 0.3 in both. Thus, if one had to choose a single question for the first two groups of statements, question 2 should be chosen.
CONCLUSION
The main contribution of this paper is to propose a sensitivity analysis evaluation of the linear programming problem used to obtain families of probability distributions arising from experts' answers to a questionnaire, enabling a refinement of their knowledge, making it a calibration step. In this calibration step, it was possible to demonstrate that the evaluations of the difference between probabilistic masses are more informative than changes in the odds ratios between two events.
The example of soccer games, besides being recurrent in the literature, allows for an elicitation process that is easy to apply to different groups of people, given the easy exposition of concepts such as odds ratios and differences between probabilistic masses. However, one must take certain care with the statements about differences between probabilistic masses, since only two experts were able to respond to such statements and still generate a nonempty feasible set.
Finally, some alternative applications of the proposed sensitivity analysis in the elicitation process of Campello de Souza (2002) are possible through the elicitation questionnaire developed in Nadler Lins & Campello de Souza (2001): the questionnaire can be dynamic and pose only questions which may alter the set of distributions established so far. This is possible by evaluating whether the Lagrange multiplier would become nonzero in a follow-up question. In this case, the information index for each question would be higher when compared to the corresponding information index of a pre-established questionnaire. Table 4 - Responses of Experts.
"Mathematics"
] |
On Uncertainty Measures of the Interval-Valued Hesitant Fuzzy Set
Interval-valued hesitant fuzzy sets (IVHFS), as a kind of decision-information presenting tool which is more complicated, more scientific and more elastic, have an important practical value in multiattribute decision-making. There is little research on the uncertainty of IVHFS, and the existing uncertainty measures cannot distinguish different IVHFSs in some contexts. In my opinion, for an IVHFS there should exist two types of uncertainty: one is the fuzziness of the IVHFS and the other is its nonspecificity. To the best of our knowledge, the existing indexes to measure the uncertainty of IVHFS are all single indexes, which cannot capture both facets of an IVHFS. First, a review is given of the entropy of the interval-valued hesitant fuzzy set, and it is pointed out that existing research cannot distinguish different interval-valued hesitant fuzzy sets in some circumstances. With regard to the uncertainty measures of the interval-valued hesitant fuzzy set, we propose a two-tuple index to measure it: one index is used to measure the fuzziness of the interval-valued hesitant fuzzy set, and the other is used to measure its nonspecificity. The method to construct the indexes is also given. The proposed two-tuple index can make up for the faults of the existing entropy measures of the interval-valued hesitant fuzzy set.
Introduction
In some real-life scenarios, we often need to perform multicriteria decision-making, which is to rank some plans against several criteria and select the best one. One important stage in multicriteria decision-making is determining the membership degree of an alternative with regard to a certain evaluation term. The traditional method is a black-and-white problem: if the alternative meets the requirement of the evaluation term, then the membership degree is one; otherwise, the membership degree is zero. This kind of rule is simple to operate but is too absolute and loses a lot of information. In fact, the membership degree in many circumstances is not a clear distinction between black and white, but rather has a certain degree of grey. In order to describe membership degrees more faithfully, Zadeh creatively proposed the fuzzy sets (FS) theory based on set theory [1]. In fuzzy sets, the information carries a kind of uncertainty which has two dimensions. The first dimension is fuzziness, which states that we cannot clearly define the degrees to which an element belongs and does not belong to a certain fuzzy set. De Luca and Termini proposed an entropy measure for FS which is not based on probability theory [2], and Liu developed the axiomatic definition of entropy for FS [3], both of which are important research studies on the fuzziness of FS. Fan and Ma gave some general results on the fuzzy entropy of FS based on the axiomatic definition of the fuzzy entropy of FS and the distance measure of FS, and they generalized the fuzzy entropy formulation of FS proposed by De Luca and Termini [2]. The other aspect of the uncertainty of FS is nonspecificity, which measures the amount of information contained in the FS. Yager proposed several nonspecificity indexes to measure the degree to which the FS contains only one element [5]. Garmendia et al. gave a general formulation for the nonspecificity measure of FS based on T-norms and the negation operator [6].
There is one membership degree and one nonmembership degree for each element in an FS. However, in some circumstances, it is more suitable to also consider a hesitation degree. Assume that a committee is composed of ten experts: the attitude of five of them is positive, that of three of them is negative, and two abstain from voting. Then, the membership degree of the alternative to the feasible alternative set may be defined as 0.5, the nonmembership degree as 0.3, and the hesitation degree as 0.2. FS is not suited for this kind of case. Because of the universality of such cases, Atanassov generalized FS to the intuitionistic fuzzy set (IFS) [7]. Each element in an IFS has a membership degree, a nonmembership degree, and a hesitation degree, thus making IFS more suitable for dealing with problems of fuzziness and uncertainty. Some research has been conducted on the quantification of the uncertainty of IFS. Xia and Xu proposed a new entropy and a new cross-entropy of IFS, and they discussed the relation between them [8]. Huang developed two entropy measures for IFS based on the distance between two IFSs, which are simple to calculate and can give reliable results [9]. Huang and Yang gave the definition of fuzzy entropy based on probability theory [10]. Pal et al. pointed out that there are two aspects associated with the uncertainty of IFS, namely fuzziness and nonspecificity, and that existing studies cannot distinguish them [11].
Sometimes, in real decision-making, there is hesitation among several membership degree values. Assume that several experts evaluate a plan on one attribute: expert A thinks the membership degree of the plan with respect to the attribute is 0.4, expert B thinks it is 0.6, expert C thinks it is 0.8, and they cannot reach an agreement; how do we describe the evaluation result? FS and IFS both cannot be used in this circumstance. Hesitant fuzzy sets (HFS), proposed by Torra and Narukawa [12] and Torra [13], are more suitable for this kind of circumstance. The membership degree of every element in an HFS is a set, called a hesitant fuzzy element (HFE). HFS is an effective tool for describing the hesitance degree of the decision maker, and it is widely used in practical decision-making problems [14], so it is important to study the uncertainty problems associated with HFS. HFS is a new kind of information presentation tool, and there is little research on its uncertainty. Xu and Xia gave the axiomatic definition of entropy for HFEs, and they proposed several entropy formulations to measure the fuzziness degree of an HFE [15]. Farhadinia [16] pointed out that the entropy formulation of HFEs proposed by Xu and Xia [15] gives the same value to several HFEs with intuitively different uncertainties. Singh and Ganie observed that the entropy formulation developed by Xu and Xia [15] cannot distinguish different HFEs in some circumstances and gives the same weights to attributes of obviously different importance, and they creatively constructed a generalized hesitant fuzzy knowledge measure formulation which can handle these two problems [17]. Zhao et al. [18] argued that the entropy formula for HFEs introduced by Farhadinia [16] cannot differentiate different HFEs in some circumstances, such as when two HFEs have the same distance to the HFE {0.5}, and they gave a definition of binary entropy for HFS, with one entropy measuring the fuzziness of the HFE and the other measuring its nonspecificity. Wei et al. investigated the problem of how to apply different uncertainty facets of hesitant fuzzy linguistic term sets in different decision-making settings [19]. Xu et al. established the axiomatic definitions of fuzzy entropy and hesitancy entropy of weak probabilistic hesitant fuzzy elements [20]. Fang revisited the concept of uncertainty measures for probabilistic hesitant fuzzy information by comprehensively considering their fuzziness and hesitancy and proposed some novel entropy and cross-entropy measures for them [21]. Wei et al. focused on studying how to measure the uncertainty presented by the information of an extended hesitant fuzzy linguistic term set [22]. Fang developed some hybrid entropy and cross-entropy measures of probabilistic linguistic term sets [23]. Wang et al. proposed an entropy measure of the Pythagorean fuzzy set by taking into account both Pythagorean fuzziness entropy, in terms of membership and nonmembership degrees, and Pythagorean hesitation entropy, in terms of the hesitant degree [24]. Xu et al. modified the axiomatic definition of fuzzy entropy of fuzzy sets (FSs), and the axiomatic definitions of fuzzy entropy and hesitancy entropy of intuitionistic fuzzy sets (IFSs) and Pythagorean fuzzy sets (PFSs) were also revised [25]. In order to measure the uncertainty of type-2 fuzzy sets (T2FSs), the axiomatic framework of fuzzy entropy of T2FSs has been established [26].
Chen et al. introduced the idea of interval numbers into HFS and proposed the definition of interval-valued hesitant fuzzy sets (IVHFS), a generalization of HFS [27]. IVHFSs, as a kind of decision-information presenting tool which is more complicated, more scientific and more elastic, have an important practical value in multiattribute decision-making [28,29]. There is little research on the uncertainty of IVHFS. Farhadinia proposed a definition of entropy for IVHFS based on the distance between two IVHFSs, but this entropy formula cannot distinguish different IVHFSs in some contexts [16]. Pal et al. pointed out that there exist two types of uncertainty for an IFS, fuzziness-type uncertainty and nonspecificity-type uncertainty [11]. Zhao et al. held that for an HFS, besides fuzziness, there exists another kind of uncertainty, namely nonspecificity [18]. In my opinion, for an IVHFS there exist two types of uncertainty: one is the fuzziness of the IVHFS, which is related to the departure of the IVHFS from its nearest crisp set, and the other is the nonspecificity of the IVHFS, which is related to the imprecise knowledge contained in it. To the best of our knowledge, the existing indexes to measure the uncertainty of IVHFS are all single indexes, which cannot consider both facets of an IVHFS. Pal et al. stated that we cannot put forward any total measure of uncertainty for an HFE as we do not know how exactly these two types of uncertainty interact; likewise, we cannot put forward any total measure of uncertainty for an IVHFS, as we do not know how exactly its two types of uncertainty interact [13]. In view of that, this paper proposes an axiomatic frame which uses two-tuple entropy indexes to measure the uncertainty of the IVHFS: one entropy index is used to measure the IVHFS' fuzziness degree and the other its nonspecificity. The approaches to construct the two kinds of uncertainty measure are also given, and the two-tuple indexes can make up for the shortcomings of the existing entropy measures. The novelty of the paper lies in the fact that, to my knowledge, this is the first paper studying the uncertainty measure of the interval-valued hesitant fuzzy set. Based on the two-tuple index, we can define distance measures and similarity measures and design clustering algorithms to classify a set of interval-valued hesitant fuzzy sets. Distance measures have wide applications in decision-making, such as developing methods to reach consensus in a group, pattern recognition, and image processing [16]. So, this paper lays the foundation for developing distance and similarity measures between interval-valued hesitant fuzzy sets.
The paper is organized as follows: Section 2 introduces the concepts of HFS and IVHFS and the existing uncertainty measure of IVHFS. Section 3 proposes a two-tuple index, approaches to construct the two kinds of uncertainty measure, and some theorems. The paper is concluded in Section 4, including trends and directions for IVHFS.
Preliminaries
Definition 1 (see [13]). An HFS M on the reference set X is defined in terms of a function h_M(x) as follows:
$$M = \{\langle x, h_M(x)\rangle \mid x \in X\},$$
where h_M(x) is a set of several values in [0, 1], representing the possible membership degrees of the element x of X to the set M. Based on practical needs, Chen et al. integrated the idea of interval numbers into HFS and proposed interval-valued hesitant fuzzy sets [27].
Definition 2. An interval-valued hesitant fuzzy set M on the reference set X is defined as follows:
$$M = \{\langle x, h_M(x)\rangle \mid x \in X\},$$
where h_M(x) is a set of several subintervals of [0, 1], representing the possible membership degrees of the element x of the reference set X to the IVHFS M. Chen et al. called h_M(x) an interval-valued hesitant fuzzy element (IVHFE) [27].
Let H be the set of all IVHFEs. Based on the definition of the complement of an HFE α proposed by Torra and Narukawa [12], this paper defines the complement of an IVHFE α as
$$\alpha^c = \bigcup_{[\gamma^L,\, \gamma^U] \in \alpha} \left\{ \left[ 1 - \gamma^U,\ 1 - \gamma^L \right] \right\}.$$
Definition 3 (see [15]). Given two IVHFSs M and N on X = {x_1, ..., x_n}, with IVHFEs h_M(x_i) and h_N(x_i), respectively, the interval-valued hesitant normalized Hamming distance is defined as
$$d(M, N) = \frac{1}{n}\sum_{i=1}^{n}\left[\frac{1}{2 l_i}\sum_{j=1}^{l_i}\left(\left|\gamma^L_{\sigma(j)}(x_i) - \delta^L_{\sigma(j)}(x_i)\right| + \left|\gamma^U_{\sigma(j)}(x_i) - \delta^U_{\sigma(j)}(x_i)\right|\right)\right],$$
where $[\gamma^L_{\sigma(j)}(x_i), \gamma^U_{\sigma(j)}(x_i)]$ and $[\delta^L_{\sigma(j)}(x_i), \delta^U_{\sigma(j)}(x_i)]$ denote the $j$-th largest intervals in $h_M(x_i)$ and $h_N(x_i)$, and $l_i$ is their (common) number of intervals. The generalized hybrid interval-valued hesitant weighted distance is defined analogously, with weights attached to the elements,
where $w_i$ ($i = 1, 2, \ldots, n$) is the weight of the element $x_i$, with $w_i \in [0, 1]$ and $\sum_{i=1}^{n} w_i = 1$.
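A sketch of the normalized Hamming distance under the standard conventions of the IVHFS literature (intervals pre-sorted and lists pre-extended to equal length; these conventions are assumptions, as the source formula is garbled):

```python
def ivhfe_hamming(h_m, h_n):
    """Distance between two IVHFEs given as equally long, sorted lists of
    (lower, upper) interval bounds."""
    l = len(h_m)
    return sum(abs(g[0] - d[0]) + abs(g[1] - d[1])
               for g, d in zip(h_m, h_n)) / (2 * l)

def ivhfs_hamming(M, N):
    """Normalized Hamming distance: average the element-wise IVHFE
    distances over the reference set X."""
    return sum(ivhfe_hamming(hm, hn) for hm, hn in zip(M, N)) / len(M)

# Example: two IVHFSs on a one-element reference set.
M = [[(0.2, 0.4), (0.5, 0.7)]]
N = [[(0.3, 0.4), (0.5, 0.8)]]
print(ivhfs_hamming(M, N))   # (0.1 + 0.0 + 0.0 + 0.1) / 4 = 0.05
```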
Theorem 4 (see [15]). Let Z: [0, 1] → [0, 1] be a strictly monotonically decreasing real function and let d be a distance between IVHFSs. Then, for any IVHFSs M and N, suitable compositions of Z with d (given in [15]) yield, respectively, a similarity measure and two entropies for IVHFSs based on the corresponding distance d.
It is obvious that, for certain pairs of distinct IVHFSs M and N, these measures take the same value, so we cannot differentiate M and N in such cases. What is more, uncertainty can be of different types, such as fuzziness and nonspecificity [30], and the index proposed by Farhadinia to measure the uncertainty of the interval-valued hesitant fuzzy set is a single index, which cannot consider all facets of an interval-valued hesitant fuzzy set.
In view of that, this paper proposes an axiomatic frame in Section 3 which uses two-tuple entropy indexes to measure the uncertainty of the IVHFS: one entropy index is used to measure the IVHFS' fuzziness degree, and the other is used to measure its nonspecificity.
Two-Tuple Entropy Measures for IVHFE
Note. The relationship of "≺" and "≻" in (E_F 4) is interpreted according to the calculation rules for interval numbers [28]. For example, assume that a and b are two interval numbers: if the possibility degree of a being bigger than b is larger than or equal to 0.5, we say a ≻ b holds; otherwise, if the possibility degree of b being bigger than a is larger than or equal to 0.5, we say b ≻ a holds. (2) In (E_NS 4), α_σ(i) − α_σ(j) and β_σ(i) − β_σ(j) are two interval numbers. In Definition 3, a two-tuple (E_F, E_NS) is utilized to measure the uncertainty of an IVHFE α. The fuzzy entropy E_F is used to measure the fuzziness degree of α, that is, the distance between α and the crisp value which is closest to α. The nonspecificity entropy E_NS is used to measure the nonspecificity degree of α, that is, the degree to which α contains only one interval. Therefore, the two-tuple (E_F, E_NS) not only considers the fuzziness of a set, which traditional entropy can measure, but also quantifies the nonspecificity of a set, which is more reasonable [11].
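A sketch of the interval comparison described in the note, implementing the possibility-degree formula of Xu and Da [30] (the function names are ours):

```python
def possibility_degree(a, b):
    """p(a >= b) for intervals a = [a1, a2], b = [b1, b2] (Xu & Da [30])."""
    (a1, a2), (b1, b2) = a, b
    la, lb = a2 - a1, b2 - b1
    if la + lb == 0:                       # both intervals are degenerate
        return 0.5 if a1 == b1 else float(a1 > b1)
    return min(max((a2 - b1) / (la + lb), 0.0), 1.0)

def succeeds(a, b):
    """a is 'bigger' than b iff the possibility degree is at least 0.5."""
    return possibility_degree(a, b) >= 0.5
```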
The Fuzzy Entropy E_F of IVHFE.
The uncertainty of an IVHFE α comprises fuzziness and nonspecificity. First, we study how to measure the fuzziness degree of an IVHFE. We will give some methods to construct measures that can be used to quantify the fuzziness degree of an IVHFE. A general result (Theorem 6) is given first. Proof. The proof of Theorem 6 is provided in Appendix A. Note. From the proof of Theorem 6, we have that E_F(α) defined in (7) satisfies axioms (E_F 1)-(E_F 4). Then, R possesses the following properties (Theorem 7). Proof. The proof of Theorem 7 is provided in Appendix B.
The Nonspecificity Entropy E_NS of IVHFE.
In this section, we investigate the other aspect of the uncertainty of the IVHFE, that is, nonspecificity. We first propose a new measure, called nonspecificity, used to quantify this aspect of the uncertainty of the IVHFE. If l_α is one, let ⟨l_α⟩ take the value two; otherwise, if l_α is equal to or larger than two, let ⟨l_α⟩ take the value l_α(l_α − 1). We give a general result as follows (Theorem 8), which meets axioms (E_NS 1)-(E_NS 4). Then, F has the following properties (Theorem 9). Proof. The proof of Theorem 9 is provided in Appendix D.
Note. We assume that h: [−1, 1] → I is a function which generates a new interval by taking the absolute values of the two endpoints and sorting the two numbers. Proof. The proof of Theorem 10 is provided in Appendix E.
We assume that x = [a_1, a_2] ≺ y = [b_1, b_2], i.e., the possibility of x being smaller than y is larger than 0.5; from Xu and Da [30], we obtain that b_1 + b_2 > a_1 + a_2. Then, we have the desired result.
Conclusions
To the best of our knowledge, there are few research studies on the uncertainty of IVHFE, and most of them cannot differentiate different IVHFEs in some situations. This paper proposed a two-tuple entropy model to quantify the uncertainty of IVHFE: we use one index to measure its fuzziness degree and the other to measure its nonspecificity. For the nonspecificity entropy, we gave some methods to construct the index and presented some examples to illustrate its effectiveness. With regard to the fuzzy entropy, due to the difficulty in the comparison of interval numbers, we failed to give construction approaches, which is an important problem that we are going to focus on in the near future. Furthermore, the theoretical frame of this paper can be used to quantify the uncertainty of more generalized fuzzy sets. For example, Fu and Zhao proposed the concept of the hesitant intuitionistic fuzzy set, integrating the advantages of both the hesitant fuzzy set and the intuitionistic fuzzy set [31]; Zhu et al. proposed the concept of the dual hesitant fuzzy set (DHFS) [32] as an extension of HFS dealing with hesitancy in both the membership degree and the nonmembership degree; and Ren et al. introduced the normal wiggly hesitant fuzzy sets (NWHFS) as an extension of the hesitant fuzzy set [33]. So, how to apply the theoretical frame of this paper in the hesitant intuitionistic fuzzy, dual hesitant fuzzy, and normal wiggly hesitant fuzzy information environments is an important topic. Based on the proposed two-tuple index, fuzzy knowledge measures and accuracy measures can be developed further, which can be used in pattern analysis and multiple attribute decision-making [34].
Based on the two-tuple entropy measure, experts can construct interval-valued hesitant fuzzy preference relations in group decision-making problems. In order to guarantee that decision makers are nonrandom and logical and to obtain reasonable decision results that are accepted by most decision makers, we can consider individual consistency control in consensus-reaching processes for group decision-making problems [35]. Due to increasingly complicated decision conditions and the relatively limited knowledge of decision makers, decision makers may provide incomplete interval-valued hesitant fuzzy preference relations, so how to apply the new two-tuple measure in an incomplete environment is also an important research topic. To fully consider the properties of social network evolution and improve the efficiency of the consensus-reaching process in group decision-making, Dong et al. introduced the concept of the local world opinion, derived from individuals' common friends, and then proposed an individual and local-world-opinion-based opinion dynamics (OD) model [36]. As future work, the study of the OD model based on social networks could be extended to interval-valued hesitant fuzzy preference relations in group decision-making problems [37]. Besides, how to apply the two-tuple index proposed in this paper to the OD model is an interesting research topic. Appendix A (fragment): let l_α be the number of intervals in α; then the mapping E_F: H → [0, 1] defined as follows meets the axioms (E_F 1)-(E_F 4). Proof. We assume that E_F(α) is defined as in equation (A.1).
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The author declares that there are no conflicts of interest.
"Computer Science",
"Economics"
] |
Comparative Performance of Pseudo-Median Procedure, Welch's Test and Mann-Whitney-Wilcoxon at Specific Pairing
The objective of this study is to investigate the performance of a two-sample pseudo-median based procedure in testing differences between groups. The procedure is a modification of the one-sample Wilcoxon procedure using the pseudo-median of the differences between group values as the central measure of location. The test was conducted in a two-group setting with moderate sample sizes from symmetric and asymmetric distributions. The performance of the procedure was measured and evaluated in terms of Type I error and power rates obtained via Monte Carlo methods. Type I error and power rates of the procedure were then compared with those of alternative parametric and nonparametric procedures, namely Welch's test and the Mann-Whitney-Wilcoxon test. The findings revealed that the pseudo-median procedure is capable of controlling its Type I error close to the nominal level when heterogeneity of variances exists. In terms of robustness, the pseudo-median procedure outperforms Welch's and the Mann-Whitney-Wilcoxon tests when distributions are skewed. The pseudo-median procedure is also capable of maintaining high power rates, especially for negative pairing.
Introduction
Testing the equality of central tendency (location) parameters, or differences between two groups, is a common statistical problem. Among traditional parametric test statistics, it is well known that Student's two-independent-sample t-test (Student, 1908) can be highly unsatisfactory when the distribution of the data is non-normal and variances are unequal (Teh & Othman, 2009; Zimmerman, 2004; Zimmerman & Zumbo, 1993). This test also produces low power under arbitrarily small departures from normality (Keselman, Othman, Wilcox & Fradette, 2004). For cases where distributions are normal but population variances are unequal, Welch (1938) gave a solution: an approximate degrees of freedom t-test. However, Welch's test still has problems controlling Type I error under non-normal distributions (Algina, Oshima & Lin, 1994; Zimmerman & Zumbo, 1993).
A popular alternative for analyzing data from non-normal populations is to use nonparametric test statistics such as the Mann-Whitney-Wilcoxon test. Nonparametric statistics are insensitive to deviations from normality. Even though nonparametric methods are distribution-free, they are not assumption-free: usually the underlying distribution has to be symmetric (Gibbons & Chakraborti, 2003). Nonparametric procedures are more appropriate for data based on weak measurement scales and for symmetric shapes (Syed Yahaya, Othman & Keselman, 2004). In addition, nonparametric procedures are less powerful than parametric ones and therefore require larger sample sizes to reject false hypotheses. Thus, choosing nonparametric tests as alternatives to the classical tests does not guarantee a reliable method, due to these weaknesses.
To circumvent the effects of assumption violations on the classical procedures, researchers have been advised to adopt heteroscedastic test statistics, replace the conventional methods with permutation, or transform their data to achieve normality and/or homogeneity. Some studies suggested substituting robust estimators (e.g., trimmed means and Winsorized variances) for the least squares estimators (i.e., the usual mean and variance). Robustness to non-normality and variance heterogeneity in unbalanced independent group designs can be achieved by using robust estimators with heteroscedastic test statistics, as demonstrated by a number of papers (Keselman, Algina, Wilcox & Kowalchuk, 2000; Keselman, Kowalchuk & Lix, 1998; Wilcox, Keselman, Muska & Cribbie, 2000). This literature also indicates that by applying robust estimators with heteroscedastic test statistics, distortion in rates of Type I error can generally be eliminated. However, the use of trimmed means, for example, requires a percentage of observations to be discarded from the data, which may cause some useful information to be lost.
Over the years, many procedures have been developed to handle violations of the assumptions. However, each of the aforementioned procedures can handle only certain violations, and so far no single statistical method can be considered ideal. In this study, we propose a statistical procedure based on the pseudo-median to deal with multiple violations (non-normality, variance heterogeneity and unbalanced group sizes) occurring simultaneously. This study also investigates the performance of the pseudo-median procedure in terms of controlling Type I error and maintaining high power rates under these multiple violations. The performance of this procedure was then compared with a parametric and a nonparametric test, namely Welch's test and the Mann-Whitney-Wilcoxon test, respectively. Optimistically, this method will help researchers conduct their research in more flexible situations without having to worry about rigid assumptions.
The rest of the paper is organized as follows. The second section briefly explains the criteria for evaluating the performance of a statistical test. The third section elaborates on the methods used in this study. The design specifications of the data are described in the fourth section, while the fifth section discusses the results. The final section concludes our study.
Performance Evaluation of the Statistical Test
The evaluation of any statistical test involves two attributes, namely the Type I error rate and the power rate, both estimated from the proportion of simulated data sets whose p-values fall below the nominal significance level. A Type I error occurs when a true null hypothesis $H_0$ is incorrectly rejected. The power of a statistical test of $H_0$ is the probability that $H_0$ will be rejected when it is false, that is, the probability of obtaining a statistically significant result, or of the test concluding that the phenomenon exists (Cohen, 1988; 1992). A procedure whose Type I error rate is close to the nominal value is considered robust. If a procedure is able to control its Type I error rate close to the nominal value and simultaneously generate good statistical power, then it is deemed the procedure of choice. These properties are usually used as the criteria for evaluating the performance of a statistical test.
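As a concrete illustration (our sketch, not the paper's SAS/IML program), the following Python function estimates either attribute by Monte Carlo: it simulates data sets under a location shift and returns the rejection proportion of any two-sample test that yields a p-value. With shift = 0 the result estimates the Type I error rate; with shift > 0 it estimates power. The Welch test shown uses SciPy's `ttest_ind` with `equal_var=False`; the sample sizes and standard deviations mirror the design described later and are otherwise assumptions.

```python
import numpy as np
from scipy import stats

def rejection_rate(test, shift=0.0, n1=15, n2=25, sd1=1.0, sd2=6.0,
                   reps=5000, alpha=0.05, seed=1):
    """Monte Carlo rejection proportion of a two-sample test.

    shift = 0 estimates the empirical Type I error rate;
    shift > 0 estimates power. sd1:sd2 = 1:6 gives a 1:36
    variance ratio; n1 < n2 with sd1 < sd2 is a positive pairing.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = rng.normal(0.0, sd1, n1)
        y = rng.normal(shift, sd2, n2)
        rejections += test(x, y) < alpha
    return rejections / reps

def welch_p(x, y):
    return stats.ttest_ind(x, y, equal_var=False).pvalue

print(rejection_rate(welch_p))           # empirical Type I error
print(rejection_rate(welch_p, shift=2))  # empirical power at that shift
```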
Methods
The pseudo-median procedure is generated from a modification of the one-sample nonparametric Wilcoxon procedure, incorporating the pseudo-median of the differences between group values as the statistic of interest in a two-group setting. As stated in Hoyland (1965), the pseudo-median of a distribution $F$ is defined as the median of the distribution of $\tfrac{1}{2}(X_1 + X_2)$, where $X_1$ and $X_2$ are independently and identically distributed according to $F$. Equivalently, Hollander and Wolfe (1999) noted that the pseudo-median of a distribution $F$ is the median of $\tfrac{1}{2}(Z_1 + Z_2)$, where $Z_1$ and $Z_2$ are independent, each with the same distribution $F$.

In this procedure, suppose $X_1, \ldots, X_{n_1}$ and $Y_1, \ldots, Y_{n_2}$ are samples from distributions $F_1$ and $F_2$, respectively. Let the differences between the observations from both samples be $D_{ij} = X_i - Y_j$ for $i = 1, 2, \ldots, n_1$ and $j = 1, 2, \ldots, n_2$, and let $R_{ij}$ denote the rank of the absolute difference $|D_{ij}|$. An indicator function $e_{ij}$ is defined as in Equation 1.1:

$$e_{ij} = \begin{cases} 1, & D_{ij} > 0 \\ 0, & D_{ij} \le 0 \end{cases} \qquad (1.1)$$

The Wilcoxon statistic is then defined as in Equation 1.2:

$$W = \sum_{i=1}^{n_1} \sum_{j=1}^{n_2} R_{ij} e_{ij} \qquad (1.2)$$

The pseudo-median is a location parameter and its value has to be estimated. The estimation is done using the Hodges-Lehmann estimator (Hollander & Wolfe, 1999), given in Equation 1.3, where the $Z_i$ are the differences between the observations from both samples:

$$\hat{\theta} = \operatorname{median}\left\{ \frac{Z_i + Z_j}{2} : 1 \le i \le j \le n \right\} \qquad (1.3)$$

The modification of the Wilcoxon procedure is performed by adding the pseudo-median value $\hat{\theta}$ to all observations in the second sample. A bootstrap procedure was employed to test the hypothesis given in Equation 1.4, where $\theta$ is the pseudo-median of the differences:

$$H_0\colon \theta = 0 \qquad \text{versus} \qquad H_1\colon \theta \neq 0 \qquad (1.4)$$
The algorithm of the bootstrap procedure is enumerated below.
1. Based on the two samples, find $W$ and estimate the pseudo-median, $\hat{\theta}$.
2. Shift the second sample by adding $\hat{\theta}$ to all its members.
where U = 1 or 0 and L = 1 or 0. 9. Calculate the p-value as $2\min(\#L, \#U)/B$ (an illustrative implementation sketch is given after this list).
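Steps 3 through 8 of the enumeration are not legible in the source, so the following Python sketch fills them with a standard resampling scheme under stated assumptions: each of the B iterations resamples both samples (the second already shifted by the estimated pseudo-median) with replacement, recomputes the Wilcoxon statistic, and increments U or L according to whether the bootstrap statistic lies above or below the observed W. The function names and the exact resampling scheme are our assumptions, not the authors' SAS/IML code; x and y are NumPy arrays.

```python
import numpy as np
from scipy.stats import rankdata

def pseudomedian(z):
    # Hodges-Lehmann estimate: median of the Walsh averages
    # (Z_i + Z_j)/2 over 1 <= i <= j <= n (Equation 1.3).
    n = len(z)
    walsh = [(z[i] + z[j]) / 2.0 for i in range(n) for j in range(i, n)]
    return float(np.median(walsh))

def wilcoxon_W(x, y):
    # W = sum of the ranks of |D_ij| over pairs with D_ij > 0
    # (Equations 1.1 and 1.2), with D_ij = X_i - Y_j.
    d = (x[:, None] - y[None, :]).ravel()
    r = rankdata(np.abs(d))            # average ranks in case of ties
    return float(np.sum(r[d > 0]))

def pm_bootstrap_pvalue(x, y, B=599, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: observed statistic and pseudo-median of the differences.
    d_hat = pseudomedian((x[:, None] - y[None, :]).ravel())
    y_shift = y + d_hat                # step 2: align the second sample
    w_obs = wilcoxon_W(x, y_shift)
    U = L = 0
    for _ in range(B):                 # assumed content of steps 3-8
        xb = rng.choice(x, size=x.size, replace=True)
        yb = rng.choice(y_shift, size=y_shift.size, replace=True)
        wb = wilcoxon_W(xb, yb)
        U += wb > w_obs
        L += wb < w_obs
    return 2.0 * min(U, L) / B         # step 9
```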
Design Specifications
This study focused on a completely randomized design containing two groups with moderate sample sizes. The total sample size was set to 40 and split to form an unbalanced design with group sizes of 15 and 25. The test was conducted under heterogeneous group variances, since variance heterogeneity can affect both the Type I error and the power of the analysis (Wilcox, Charlin & Thompson, 1986). Luh and Olejnik (1990) stated that when the population variances differ, the actual statistical power can be less than desired. To examine the effect of variance heterogeneity on the procedure, the group variances were set to a ratio of 1:36. This ratio was chosen as it reflects extreme variance heterogeneity and has been used by a number of researchers for the two-group case (Keselman, Wilcox, Lix, Algina & Fradette, 2007; Othman, Keselman, Padmanabhan, Wilcox & Fradette, 2004; Luh & Guo, 1999).
Unequal group sizes, when paired with unequal group variances, will produce either positive or negative pairings.
A positive pairing occurs when the largest group size is associated with the largest group variance and the smallest group size with the smallest group variance. Conversely, a negative pairing refers to the case in which the largest group size is paired with the smallest group variance and the smallest group size with the largest group variance. These conditions were chosen because tests for equality of central tendency parameters typically produce conservative results under positive pairing and liberal results under negative pairing (Syed Yahaya et al., 2004; Othman et al., 2004; Keselman et al., 2004). According to Cribbie and Keselman (2003), when variance and sample size are directly paired, Type I error estimates can be conservative and power correspondingly deflated; when they are inversely paired, Type I error estimates can be liberal and power correspondingly inflated. Therefore, all the tests were examined under both types of pairing to appraise their ability to control Type I error and maintain good power.
In terms of distributions, we chose the g = 0, h = 0.225 distribution (Hoaglin, 1985) to represent a symmetric leptokurtic shape, and the chi-square distribution with three degrees of freedom ($\chi^2_3$) to represent a skewed leptokurtic shape.
The former distribution has zero skewness and kurtosis equal to 154.84, while the latter has skewness and kurtosis equal to 1.63 and 4.0, respectively. Both distributions have positive kurtosis, indicating a peaked distribution with heavy tails. The normal distribution was used as a basis for comparison.
This study was based on simulated data. The simulation was carried out using the random-number-generating functions in SAS, and the simulation program was written in SAS/IML (SAS, 2006). Pseudo-random standard normal variates were generated with the SAS generator RANDGEN via the straightforward call RANDGEN(Y, 'NORMAL'). To generate chi-square variates with three degrees of freedom, we used RANDGEN(Y, 'CHISQUARE', 3). To generate data from a g-and-h distribution, standard normal variates $Z$ were converted to g-and-h variates; for the g = 0 case used here, this transformation reduces to $Y = Z \exp(hZ^2/2)$, where the $Z$ values were generated using RANDGEN with the normal distribution option.
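For readers without SAS, the same generation step can be sketched in Python (our illustration, not the authors' code); g_and_h implements the standard Hoaglin transformation, whose g = 0 branch matches the formula above.

```python
import numpy as np

def g_and_h(z, g=0.0, h=0.225):
    """Convert standard normal variates z into g-and-h variates.

    g controls skewness and h controls tail weight (Hoaglin, 1985);
    g = 0 reduces to the symmetric case Y = Z * exp(h * Z**2 / 2).
    """
    tail = np.exp(h * z ** 2 / 2.0)
    if g == 0.0:
        return z * tail
    return np.expm1(g * z) / g * tail

rng = np.random.default_rng(2006)
z = rng.standard_normal(10_000)
y_gh = g_and_h(z)                         # symmetric leptokurtic, g=0, h=0.225
y_chi = rng.chisquare(df=3, size=10_000)  # skewed leptokurtic
```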
The effect size, or shift parameter, used in this study is not a single point; its values range from 0.2 to 2.0 in increments of 0.2 units, so for each condition ten power values were obtained. This effect size is computed based on the common language (CL) statistic proposed by McGraw and Wong (1992) and Vargha and Delaney (2000). In this study, 0.80 was used as the standard for adequacy in power analysis. There are no hard and fast rules about how much power is enough, but according to Murphy and Myors (2004), power of 0.80 or above is usually judged to be adequate; most power analyses specify 0.80 as the desired level, and this convention seems to be widely accepted. For each condition examined, 599 bootstrap samples were generated and 5000 data sets were simulated. The nominal level of significance was set at $\alpha = 0.05$.
Results and Discussion
The simulation results of Type I error for the pseudo-median (PM), Welch's test (W) and Mann-Whitney-Wilcoxon (MWW) procedures are presented in Table 1. This study uses Bradley's (1978) liberal criterion of robustness to quantify the ability of a statistical test to control its probability of Type I error. According to this criterion, a test can be considered robust if its empirical rate of Type I error lies within the interval $[0.5\alpha, 1.5\alpha]$. Thus, when the nominal level is set at $\alpha = 0.05$, the procedure or test is considered robust if its Type I error rate lies between 0.025 and 0.075. Type I error rates greater than 0.075 are considered liberal and those less than 0.025 are considered conservative.
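The criterion is easy to state programmatically; the following helper (our illustration) classifies an empirical rate against Bradley's liberal bounds.

```python
def bradley_class(rate, alpha=0.05):
    # Bradley's (1978) liberal criterion: robust if the rate lies in
    # [0.5 * alpha, 1.5 * alpha], i.e. [0.025, 0.075] at alpha = 0.05.
    if rate < 0.5 * alpha:
        return "conservative"
    if rate > 1.5 * alpha:
        return "liberal"
    return "robust"

# bradley_class(0.0492) -> 'robust'; bradley_class(0.1142) -> 'liberal'
```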
Under the normal distribution, all procedures are able to control their Type I error rates close to the nominal level of 0.05 for positive pairing; the error rates are 0.0486, 0.0492 and 0.0458 for the pseudo-median procedure, Welch's test and the Mann-Whitney-Wilcoxon test, respectively. Under negative pairing, the pseudo-median procedure shows outstanding performance in controlling the Type I error rate: the recorded rate is 0.0492, very close to the nominal value. Regardless of pairing, the pseudo-median procedure produces consistent Type I error rates under the normal distribution. Welch's test produced a Type I error rate of 0.0514, slightly greater than 0.05 but still very close to the nominal level. Under the same condition, the Mann-Whitney-Wilcoxon test has a Type I error rate beyond Bradley's liberal criterion, at 0.1142. Under the g-and-h distribution, both Welch's test and the pseudo-median procedure produced Type I error rates within Bradley's liberal criterion, and for both pairings the two procedures produced good and consistent rates. For positive pairing, the values for the pseudo-median procedure and Welch's test are 0.0518 and 0.0448, respectively. For negative pairing, the value for the pseudo-median procedure is slightly inflated, at 0.0532, while the value for Welch's test remains consistently around 0.044. Meanwhile, the Mann-Whitney-Wilcoxon test has a good Type I error rate (0.0436) for positive pairing but a very liberal one (0.108) for negative pairing.
Under the skewed distribution, the pseudo-median procedure produced Type I error rates within Bradley's liberal criterion. The result follows the usual pattern, with positive and negative pairings producing smaller and larger rates, respectively: 0.0476 and 0.055. Welch's test produced Type I error rates considerably greater than 0.05 for both pairings (0.0654 and 0.0736), although these are still within the robustness criterion. Unfortunately, the Mann-Whitney-Wilcoxon test produced very liberal Type I error rates for both pairings, with values of 0.1812 (positive pairing) and 0.2398 (negative pairing).
The last row of Table 1 displays the "Average" values obtained by averaging, for each procedure and distribution, the Type I error rates across the two pairings. Underlined average values denote that the "Average" is within Bradley's liberal criterion. As we can observe, regardless of distribution, the "Average" values for the pseudo-median procedure and Welch's test are within the robustness criterion, whereas the Mann-Whitney-Wilcoxon test shows liberal "Average" values for all distributions.
In the statistical power analysis, we considered only procedures identified as being in control of their Type I error rates; comparisons of statistical power are only meaningful if the procedures being compared are capable of controlling their rates of Type I error. The results of the power analysis are tabulated in Table 2 and illustrated in Figure 1. Table 2 is divided into two parts (upper and lower) according to pairing. The first column of Table 2 gives the shift parameter used in the study; the remaining columns record the power rates for each of the procedures tested under each type of distribution.
As we can observe from Table 2, the power rates for all tests fail to achieve the desired level for both pairings. Between the pairings (upper and lower parts of the table), the comparison shows that all procedures produce greater power rates under positive pairing than under negative pairing. Scrutinizing the results under positive pairing reveals that, under the normal distribution, the power of the pseudo-median procedure is just slightly below that of Welch's procedure but much better than that of the Mann-Whitney-Wilcoxon procedure. The power of the pseudo-median procedure improves under the g-and-h distribution but declines again as the skewness of the distribution increases, as shown in the second-to-last column. Under negative pairing, even though the power values of the pseudo-median procedure drop slightly relative to positive pairing, the procedure performs better than Welch's test under the g-and-h and chi-square distributions. Under this pairing we did not include the Mann-Whitney-Wilcoxon test because of its inability to control Type I error.
Conclusion
The objective of this study was to investigate the performance of the pseudo-median procedure in terms of controlling its Type I error rate and maintaining a high power value. With respect to robustness, the pseudo-median procedure is capable of controlling its Type I error close to the nominal level when heterogeneity of variances exists, and it outperforms Welch's test and the Mann-Whitney-Wilcoxon test under skewed distributions. The popular Mann-Whitney-Wilcoxon test is capable of controlling its Type I error only for positive pairing under a symmetric distribution and fails under asymmetric distributions. The study also reveals that the pseudo-median procedure performs better than the other procedures especially under the influence of negative pairing.
Figure 1. Power curves for all distributions under specific pairings
Table 1. Type I error rates for all procedures under specific pairings
Table 2. Power rates for all procedures under specific pairings
"Mathematics"
] |
Capturing the cloud of diversity reveals complexity and heterogeneity of MRSA carriage, infection and transmission
Genome sequencing is revolutionizing clinical microbiology and our understanding of infectious diseases. Previous studies have largely relied on the sequencing of a single isolate from each individual. However, it is not clear what degree of bacterial diversity exists within, and is transmitted between, individuals. Understanding this 'cloud of diversity' is key to accurate identification of transmission pathways. Here, we report the deep sequencing of methicillin-resistant Staphylococcus aureus among staff and animal patients involved in a transmission network at a veterinary hospital. We demonstrate considerable within-host diversity, which may rise and fall over time. Isolates from invasive disease contained multiple mutations in the same genes, including inactivation of a global regulator of virulence and changes in phage copy number. This study highlights the need to sequence multiple isolates from individuals to gain an accurate picture of transmission networks and to further understand the basis of pathogenesis.
The use of rapid whole-genome sequencing in clinical microbiology is showing considerable potential for the diagnosis, characterization and surveillance of pathogens [1][2][3]. The power of genome sequencing to identify outbreaks and transmission events and to track the source of bacterial pathogens has been exemplified by a number of studies, and it will become an invaluable tool to inform hospital infection control, particularly in view of the increasing problems posed by multidrug-resistant pathogens [4][5][6][7]. However, studies to date have largely relied upon sequencing of single colonies from individual hosts, while recent data indicate that within-host populations of bacterial pathogens may be heterogeneous. Recent studies sequencing multiple individual isolates from Staphylococcus aureus carriers have shown that individual colonies differing by up to 40 single-nucleotide polymorphisms (SNPs) can be isolated from a single colonized individual 5,8,9. Currently, little data are available about how this within-host 'cloud of diversity' fluctuates temporally in newly or long-term colonized or infected individuals. Importantly, no data are available regarding the degree of bacterial diversity that is transferred from one individual to another in a transmission event.
To further understand the cloud of diversity that exists within colonized individuals and during transmission, we undertook deep sequencing of a methicillin-resistant S. aureus (MRSA) 'outbreak' at a veterinary hospital involving both staff and animal patients. We show that there is considerable within-host diversity during carriage and infection, which may rise and fall over time, and can include multiple genotypes. Our data also provide new insights into the degree of diversity transmitted between individuals. Finally, we highlight the need for sequencing of multiple isolates for the accurate determination of transmission networks.
Results
Investigation of MRSA transmission in a veterinary hospital. The index case was a 4-year-old German Shepherd dog admitted to a veterinary hospital with suspected toxic epidermal necrolysis manifesting in an open abdominal wound. A wound swab on admission produced a positive culture for Pasteurella multocida, which was fully susceptible to all antibiotics tested. Five days after admission, a second wound swab grew MRSA. The animal's condition deteriorated and it died 11 days after admission. A post-mortem was performed, confirming the cause of death as S. aureus sepsis. Additional swabs of the index case taken at 8 days and samples taken at post-mortem were all positive for MRSA. To identify the potential source of infection and other transmission events, we screened the hospital staff and animal patients for MRSA. A total of 97 members of staff and 158 animal patients from the veterinary hospital were screened for MRSA carriage. Any MRSA-positive staff members were re-swabbed at intervals to identify persistent carriers (Fig. 1). Seven staff members (Staff A-G) and three more animal patients (Dogs 30, 150 and 158) were positive for MRSA (prevalences of 7.2% and 6.8%, respectively). Six of the seven MRSA-positive staff members (Staff A, B, C, D, F and G) were involved in the direct care of the index case, whereas all three of the MRSA-positive animals first entered the hospital after the death of the index case (Fig. 1). Therefore, the availability of dense sampling of both staff and animals, taken both during the initial case and in the time immediately afterwards, presented the opportunity to investigate the bacterial diversity present within, and potentially transferred between, each colonized host.
Genomic characterization of MRSA populations. Whole-genome sequencing was carried out on 20 separate colonies (or all the colonies grown if the total was less than 20) from each MRSA-positive swab from the staff and animals (Fig. 1). Multilocus sequence types extracted from the genome sequences identified the presence of two different sequence types: ST22 (EMRSA-15) 10 and ST772 (Bengal Bay clone) 11,12. However, ST772 was found only in a single member of staff, whereas ST22 was isolated from the index case, five members of staff (staff A, B, C, D and F) and three animal patients (dogs 30, 150 and 158; Fig. 1). A phylogenetic tree constructed using SNPs present in the core genome identified three distinct clades of ST22 circulating in the veterinary hospital (clades 1, 2 and 3), separated from each other by >180 SNPs (Figs 1 and 2). The majority (141 of 143) of the isolates from the index case were clade 1, suggesting that organisms from this clade were causing the pathology. A swab taken from the axilla of the index case was also positive for two clade 2 isolates (Fig. 1). Co-colonization with isolates from both clades 1 and 2 was also seen in three other staff members (staff A (samples A1 and A3), staff B (B1) and staff F (F1) in Fig. 1). In staff member A, the relative proportions of the two populations fluctuated over the three sample points taken over 57 days (proportions of clade 1:clade 2 of 45:55 at day 12, 100:0 at day 22 and 35:65 at day 69; Fig. 1). Staff member F's first swab also produced colonies from both clades 1 and 2 (clade 1:clade 2 = 15:85), although their second and third swabs produced only isolates from clade 2 (staff F2 and F3 in Fig. 1).
Temporal changes in within-host diversity in persistent MRSA carriers. We analysed the MRSA populations in the three staff members who were persistent carriers to define changes in diversity over time. The 34 clade 1 isolates from staff member A were differentiated by a total of 22 SNPs, 4 deletions (two intergenic) and 1 insertion (Figs 3a and 4). Diversity was observed across the three time points, with both unique isolates and isolates identical to those from the first swab (identical isolates are defined henceforth as the same genome-type) present in the second and third swabs (Fig. 3a). The 24 clade 2 isolates from staff member A were differentiated by 11 SNPs and two intergenic deletions (Fig. 3b). The 11 isolates from swab one were identical except for one SNP variant, but the 13 isolates from swab two split into two distinct sub-clades (Fig. 3b). The second distinct sub-clade of seven isolates shared a common ancestor with the first and was differentiated by five SNPs. The clade 2 isolates from staff member F were differentiated by a total of 19 SNPs and a single intergenic deletion, with changes in diversity observed over time. The 16 isolates from swab one (Staff F1), taken on day 14, varied by 16 SNPs and a single intergenic insertion (Figs 3c and 5a). The second swab (Staff F2), taken on day 19, produced 16 isolates, 14 of which were identical to isolates from the first swab, whereas the isolates from swab three, taken on day 69 (Staff F3), were more homogeneous, with 16 of identical genome-type and three isolates each differing by only a single SNP (Fig. 3c and Supplementary Fig. 2). The predominant genome-type from swab three was detected at lower frequency in the two previous swabs, indicating a shift over time in the predominant colonizing population.
Staff member D was the only carrier of clade 3 isolates, which were more diverse than those of the other two clades combined (84 SNPs, 7 intergenic deletions, 6 deletions, 4 intergenic insertions and 1 insertion; Figs 3d and 5b). The higher diversity was confirmed by tests of pairwise diversity and Watterson's theta (Supplementary Fig. 2). One explanation for this higher diversity is a higher mutation rate in clade 3 compared with clades 1 and 2, but this was not shown experimentally (Supplementary Table 1). Furthermore, maximum a posteriori estimates from the genome sequence data revealed that isolates from clade 3 had a mutation rate of 8.1 × 10⁻⁶ substitutions per nucleotide site per year (95% Bayesian credible intervals: 4.6 × 10⁻⁶ to 1.3 × 10⁻⁵), comparable to a previous measurement of an ST22 population: 1.3 × 10⁻⁶ substitutions per nucleotide site per year (95% Bayesian credible intervals: 1.2 × 10⁻⁶ to 1.4 × 10⁻⁶) 13. The estimate for the time to the most recent common ancestor was 179 days (95% highest posterior density (HPD): 177-288 days). The majority of the isolates (14 of 18) in the third swab were part of a distinct clade not seen in the first two swabs, although directly descended from populations that were (Fig. 3d). All 14 isolates in this clade had an 8.5-kb deletion between sdrC and sdrE (deleting sdrC, sdrD and sdrE) (Fig. 3d). A single isolate (Staff D_3_H) from this clade had also lost the φSa3 phage, which encodes modulators of the human innate immune response 14,15. We also identified that all the isolates from staff member D had a premature stop codon (Trp76STOP) in agrC (AgrC is the autoinducer sensor protein component of the accessory gene regulator (agr), a quorum sensing system and global transcriptional regulator of staphylococcal virulence 16). The loss of agr function was confirmed by a lack of δ-haemolytic activity (a proxy for agr activity 17; Supplementary Table 2 and Supplementary Fig. 1).
Analysis of within-host diversity during infection. We next analysed the isolates from the index case. A total of 141 isolates were differentiated by 24 SNPs. In all, 57 of the 141 (40%) isolates were genetically indistinguishable (the same genome-type) on the basis of core genome SNPs (isolates identical to Dog_Index_1_C in Fig. 3e). Another 53 isolates differed by only a single SNP from this population, meaning that 78% of isolates from the index case varied by a maximum of two core genome SNPs. All the isolates from the first MRSA-positive swab, taken on day 5, were either of the predominant genome-type (7 isolates) or belonged to one of two single-SNP sub-clades (3 and 10 isolates; Fig. 3e). The isolates from the nasal swab taken on day 8 (Dog Index 2) contained the most SNPs (nine SNPs between ten isolates) and formed the most diverse sample from the index case (Supplementary Fig. 2). Clusters of isolates from individual anatomical sites were differentiated into sub-clades that descended from the predominant genome-type (Dog Index 1, 3, 4, 5 and 8 in Fig. 3e). Three isolates (Dog_Index_3B, C and S) from the axilla had also lost the φSa3 phage; interestingly, as in staff member D, two of the three isolates that had lost the φSa3 phage also had increased φSa2 HO 5096 0412 copy number (Fig. 3e and Supplementary Fig. 4). Unlike the broad distribution of isolates with altered φSa2 HO 5096 0412 copy number in staff A, F and D, all but two of the isolates from the index case with an increased φSa2 HO 5096 0412 copy number were phylogenetically distinct isolates from the axilla (Dog Index 3, 8 of 11 isolates) and the prescapular lymph node (Dog Index 8, 14 of 19 isolates; Fig. 3e and Supplementary Fig. 4). Furthermore, all isolates from the only invasive site sampled, the prescapular lymph node (Dog Index 8), were differentiated from the predominant genome-type of the index case by one of two different SNPs present in the same gene: agrA (AgrA is the response regulator component of the accessory gene regulator (agr) 18). The first SNP, present in 13 isolates, was a non-synonymous mutation causing a Ser202Asn substitution (isolates around Dog_Index_8_F in Fig. 3e). The second mutation, a premature stop codon (Lys236STOP) at the C-terminal end of AgrA, was present in six isolates (isolates around Dog_Index_8_B in Fig. 3e). The loss of δ-haemolytic activity was confirmed in multiple isolates carrying each of these agrA mutations, demonstrating that the mutations caused a loss of AgrA function (Supplementary Table 2 and Supplementary Fig. 1). Furthermore, two of the isolates (Dog_Index_8_S and K) with the S202N substitution had two further non-synonymous mutations (Gln1256Glu and Asn1329Asp) in the same gene: fmtB, which encodes a surface-anchored protein associated with methicillin resistance 19.
Interpretation of transmission pathways for clade 1. Next, we analysed the entire clade 1 data set to elucidate possible transmission events. The isolates from staff member A were consistently the most basal, with identical and unique basal isolates present in all three swabs (Figs 3a and 4 and Supplementary Fig. 2). Furthermore, the isolates from staff member A's first swab (Staff A1 in Figs 3a and 4) had the greatest pairwise diversity of all clade 1 samples (Supplementary Fig. 2a).
For the index case, all isolates except two basal isolates (Dog_Index_2_D and Dog_Index_2_A) in the nasal swab taken on day 8 were descended from the basal population present in staff member A (Fig. 4). Isolates directly basal to the sub-clade of isolates from the left antebrachium (forearm; Dog_index_7 in Fig. 4) were present only in staff member A (Staff_A_2_N and C in Fig. 3). At days 12 and 22, staff member A also carried isolates that were either identical (Staff_A_1_A) or that differed by a single SNP (Staff_A_2_L) from the predominant genome-type in the index case (Fig. 4). Dog 158 was colonized by isolates showing limited diversity (few SNPs) that descended from the predominant genome-type in the index case (Fig. 4 and Supplementary Fig. 2). Dog 30 also produced a single isolate identical to the predominant genome-type of the index case (Fig. 4).
Evidence for transmission of clade 2 isolates. As in clade 1, one staff member was persistently colonized with diverse basal isolates of clade 2, namely staff member F (Fig. 5a and Supplementary Fig. 2). All the clade 2 isolates from staff member A (the other persistent carrier, who was co-colonized with clades 1 and 2) from both positive swabs were descended from the basal population present in staff member F (Staff A1 and A3 in Fig. 5a). Furthermore, an isolate from staff member F (Staff_F_1_T) was directly basal to one of the two sub-clades from staff member A's third swab (Fig. 5a). Dog 150, which was both nasally colonized and had a wound infected by clade 2 isolates, first entered the veterinary hospital on day 58, 10 days before the third swab was taken from staff member F (Fig. 1). Dog 150 was populated with isolates that were identical to the predominant genome-type of isolates from staff member F's third swab, or that differed from it by a single SNP (Staff_F_3; Fig. 5a). In contrast to the homogeneity of clade 2 isolates seen in dog 150, the four clade 2 isolates from staff member B (co-colonized with clades 1 and 2) differed by a total of ten SNPs (with two isolates identical to the predominant sub-clade seen in dog 150 and staff F3 in Fig. 5a). Despite the degree of diversity present being equivalent to that of staff member F, staff member B was positive for clade 2 isolates only on day 12 and was negative 2 days later, suggesting they were only a transient carrier (Figs 1 and 5a and Supplementary Fig. 2). Finally, the clade 2 isolates from staff member G (single isolate: Staff_G_1_A) and the index case (two isolates: Dog_Index_3_E and N) both differed by a single SNP from the genome-type of one of the basal clades that made up the majority of staff member F's population at their first two swabs, taken at the same time (Staff_F1 and Staff_F2 in Fig. 5a).
Discussion
We have used whole-genome sequencing of multiple individual colonies to investigate MRSA carriage and transmission among human staff and animal patients at a veterinary hospital. Using a combination of epidemiological and genome sequence data, it is possible to elucidate the probable MRSA transmission events. For the clade 1 isolates (the cause of the index case infection), the most parsimonious explanation for the source was staff member A, because: (i) their isolates were consistently the most basal in the phylogenetic tree; (ii) the isolates in their first swab exhibited the greatest pairwise diversity; (iii) they possessed isolates representing the predominant genome-types in the index case; (iv) they were the only known persistent carrier of clade 1; and (v) they were directly involved in care of the index case (Fig. 6). Similar reasoning suggests that staff members B, C and F most likely acquired their isolates by transmission from the index case (Fig. 6). In the case of dog 30, either staff members A, B and F or the index case (indirectly, through environmental transmission) might have been the source (Figs 4 and 6). Dog 158, although colonized by a population derived from the predominant genome-type of the index case, first entered the hospital 32 days after the death of the index case (Fig. 1). The only known carrier at this time was staff member A, who was still populated with basal isolates; therefore, the chain of transmission is not clear (Fig. 6). We also found that, at the same time, a second distinct clone of ST22 (clade 2) had been transmitted between staff and animal patients. Staff member F was the most likely source of clade 2, as they were a persistent carrier and were populated with the most diverse and basal isolates. Staff member A most likely acquired their clade 2 isolates from staff member F, as staff member F's isolates were basal to the population in staff member A (Figs 5a and 6). For staff member B, the direction of transmission was less clear, as their four isolates were as diverse as, and representative of, the population present in staff member F at approximately the same time (Staff B cf. Staff F1 and F2 in Fig. 5a and Supplementary Fig. 2). However, staff member B was not a persistent carrier and most likely acquired a diverse population from staff member F (Figs 1 and 6). This clade was subsequently acquired by staff member G and the index case, again most likely from staff member F, as their isolates were identical to, or differed by a single SNP from, staff member F's contemporaneous isolates. Clade 2 isolates were also found in dog 150. These isolates were identical to, or varied by a single SNP from, the predominant genome-type in staff member F's third swab (Fig. 5a). Dog 150 first entered the veterinary hospital close to the time of staff member F's third swab (Fig. 1), suggesting that staff member F was a likely source of the clade 2 isolates in dog 150 (Fig. 6). This study yielded some interesting insights concerning within-host bacterial diversity during colonization and following transmission. Co-colonization by distinct spa-types has been described 20, and here we provide evidence of colonization by separate sub-populations of the same multilocus sequence type. The almost equal proportions of clade 1 and clade 2 isolates in staff member A's initial swab indicate that the probability of correctly identifying their involvement in clade 1 transmission using a single isolate chosen at random would have been only 45% (Staff A 1 in Fig. 1).
The implications of this finding are not limited to S. aureus, as co-infection or co-colonization by a number of bacterial and viral pathogens has been reported 10,21-28.
The deep sequencing of temporally spaced samples identified that varying degrees of diversity are present in colonized individuals and that the composition of colonizing populations can shift dramatically, including becoming less diverse over time (as in the case of the third swab from Staff member F; Fig. 5a and Supplementary Fig. 2). This situation may be more complex than reported here using only nasal swabs, as variation in populations from different body sites has been reported for S. aureus [29][30][31] .
The extent of the diversity of S. aureus populations within hosts will be influenced by the amount of diversity transferred during a transmission event and by the duration between the transmission event and sampling. The data presented here suggest that the populations present in post-transmission recipients can be either homogeneous (clade 1: staff B, C, F, dogs 30 and 158; clade 2: staff G and dog 150) or heterogeneous (clade 1: index case; clade 2: staff B) (Figs 4 and 5a). Of course, no data were available about the number of transmission events or their exact timing, or about the nature of population bottlenecks occurring post transmission.
Three staff members (staff A, D and F) were persistent MRSA carriers but had different rates of transmission. Staff member D, although an active member of the clinical team looking after the index case, neither transmitted their clade (clade 3) nor acquired any of the other clades. Staff member A, who was the apparent source of clade 1, also carried clade 2 but did not transmit this clade. Behavioural characteristics, the nature of the contact, and adherence to hygiene and aseptic technique may all be expected to influence the likelihood of transmission and acquisition. Furthermore, biological characteristics of the individual bacterial lineages and hosts also likely influence the likelihood of transmission and colonization.
Studies using theoretical modelling have highlighted caveats in the use of sequencing data from single colonies alone to infer transmission 32,33. Our data provide empirical evidence from a real-world situation that supports the concerns raised by this modelling work. The selection of 20 individual isolates for sequencing was arbitrary; is there an optimal number of isolates? A rarefaction and extrapolation analysis of isolates obtained from the index case (Supplementary Fig. 3) suggests that there would be a near-linear increase in the number of genomic variants obtained with increased sampling. One answer might be the use of shotgun metagenomics [34][35][36][37] to assess the degree of diversity present in clinical samples (with microbial DNA enrichment 38) or cultured isolates, to provide an empirical basis for selecting the number of colonies to sequence.
Previously, we have shown that a shared population of CC22 MRSA circulates in humans, cats and dogs 39, supporting the view that ST22 has a broad host range and behaves as a nosocomial pathogen within a veterinary health-care setting just as it does within human hospitals 39,40.
An investigation of mutations among isolates within the index case sheds light on the evolution of the pathogen over the course of infection. We identified that the isolates present in the lymph node of the index case all had one of two different mutations inactivating the same gene: agrA. No other clade 1 isolates had the same mutations, suggesting they were present at very low frequency or were generated de novo during infection. Selection of agr mutants by the presence of antibiotics (the index case was treated throughout its time in hospital) has been reported, probably due to the fitness cost of RNAIII expression, which provides agr-defective 'cheaters' a distinct fitness advantage [41][42][43][44]. agr dysfunction has also been identified as a risk factor for persistent bacteraemia in human patients and is associated with poor clinical outcomes [45][46][47]. Our findings, extending these observations to a canine host, suggest that inactivation of the agr system might be advantageous for survival irrespective of the host species. The isolates from the lymph node and axilla all had an increased copy number of φSa2 HO 5096 0412. Given the proximity of the axilla to the prescapular lymph node, this might suggest that the increase in phage copy number occurred in the ancestral population in the axilla and that this population then seeded the invasive infection. Further work is required to investigate the role of both φSa2 HO 5096 0412 and phage copy-number variation in S. aureus pathogenesis.
In conclusion, this study confirms the value of whole-genome sequencing in the epidemiological investigation of nosocomial transmission of MRSA while identifying potential pitfalls. It highlights the potential complexity of transmission networks and the limitations of obtaining sequencing data from a single isolate from a host. Although this study was performed in a veterinary context, its results are highly relevant to any health-care setting.
Methods
Ethical review. The proposed study was submitted to the Ethical Review Committee at the Department of Veterinary Medicine at the University of Cambridge. It was approved on 22 April 2013 (CR88).
MRSA screening and bacterial isolation. Sterile Amies media transport swabs (Medical Wire) were used to sample both anterior nares of staff members and animal patients. The personnel who conducted the swabbing and processing of samples were also swabbed and found to be negative for MRSA. Swabs were then inoculated into 4 ml of Mueller-Hinton broth with 6.5% NaCl and grown statically for 24 h at 37°C in air. One hundred microlitres of broth were then plated onto MRSA Brilliance Agar 2 plates (Oxoid) and incubated for 24 h at 37°C in air. Plates with no growth were incubated for a further 24 h to confirm the negative result. Putative MRSA isolates were confirmed by PCR for mecA and femB 48.
Whole-genome sequencing. Genomic DNA was extracted from cultures grown overnight from single colonies in 5 ml of tryptic soy broth at 37°C using the MasterPure Gram Positive DNA Purification Kit. Illumina library preparation was carried out as described by Quail et al. 49, and MiSeq or HiSeq sequencing was carried out following the manufacturer's standard protocols (Illumina, Inc.). Nucleotide sequences have been deposited in the European Nucleotide Archive (Supplementary Data set 1).
Bioinformatic analysis and phylogenetics. Fastq files for the isolates were mapped against the ST22 MRSA reference genome HO 5096 0412 (EMBL accession code HE681097) using SMALT (http://www.sanger.ac.uk/resources/software/smalt/) in order to identify SNPs, as previously described (Supplementary Data sets 2, 3 and 4) 50. SNPs located in mobile genetic elements or low-quality regions (insertions and deletions (indels), low coverage, repeat regions) were identified by manual inspection and removed from the alignments (Supplementary Data sets 5, 6 and 7). High-quality SNPs were then manually inspected in the alignment and in BAM files mapped to the reference genome. Any isolates with large numbers of Ns (uncalled SNPs) in the alignment were removed from the analysis as potentially contaminated. The maximum likelihood tree was generated from the resulting SNPs in the core genome using RAxML 51. Indels were identified as previously described 52 (Supplementary Data sets 2, 3 and 4) and were manually assessed using BAM files mapped to the reference. Comparison of the mobile genetic content of the isolates was assessed by BLAST analysis against Velvet de novo assemblies 53.
Experimental measurement of mutation rate. The mutation rates for representative isolates from each clade were measured using the methods described by O'Neill and Chopra 54. Briefly, three independent colonies were picked for each of three representative isolates distributed throughout the phylogeny of each ST22 clade (clades 1, 2 and 3) and grown in 5 ml Iso-Sensitest broth (Oxoid) at 37°C and 200 r.p.m. until they reached an optical density of ~1 at 595 nm. One hundred microlitres of each culture were then serially diluted and plated either onto Iso-Sensitest agar plates (Oxoid) containing 4× the minimum inhibitory concentration of rifampicin (as determined by agar dilution 55) or onto Iso-Sensitest agar plates. Mutation rates were calculated as the number of resistant colonies recovered on the rifampicin plates as a proportion of the total population as determined on the Iso-Sensitest plates (Supplementary Table 1). A previously described mutS-knockout strain, RN4220mutS, and its isogenic wild type RN4220 (ref. 54) were included as controls, with mutation rates of 3.02 (±0.13) × 10⁻⁷ and 3.19 (±0.13) × 10⁻⁶, respectively, a 9.5-fold difference, similar to that previously reported for these strains 50.
Haemolysis assay. A haemolysis assay was carried out as previously described 17. A single colony of S. aureus RN4220 was streaked down the centre of a Columbia agar with sheep blood plate (Oxoid). Single colonies of representative isolates were then cross-streaked horizontally up to the RN4220 streak. Plates were then incubated for 18 h at 37°C followed by 6 h at 4°C. Isolates that produced enhanced haemolysis in the area close to RN4220, which produces only β-haemolysin, were scored as positive for production of δ-haemolysin (Supplementary Table 2). Representative photographs are shown in Supplementary Fig. 1.
BEAST analysis. The presence of temporal signal was tested separately in the isolates sampled from staff A, D and F and the index dog, by fitting a regression of root-to-tip genetic distance against sampling date using Path-O-Gen v1.4 (http://tree.bio.ed.ac.uk/software/pathogen/). Temporal signal was judged to be present only in the isolates from staff D. For the samples from this individual, the linear regression of root-to-tip distance against sampling time had a large, positive correlation coefficient (r = 0.68); for the other individuals this was not the case (staff A, r = 0.11; staff F, r = −0.26; index dog, r = 0.06). The date of the most recent common ancestor and the evolutionary rate were estimated using BEAST for the isolates from staff D 56. An HKY substitution model with a 4-category gamma distribution of rates and no partitioning of sites was used. Because of low levels of divergence, a strict molecular clock model and a constant population size model were used. Estimates were similar to those obtained from a relaxed clock model.
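The temporal-signal check reduces to a simple regression; the following Python sketch (our illustration; Path-O-Gen itself performs this on a rooted tree) fits root-to-tip genetic distance against sampling date and reports the slope and the Pearson correlation coefficient.

```python
import numpy as np

def root_to_tip_signal(dates, distances):
    """Fit root-to-tip genetic distance against sampling date.

    dates: sampling times (e.g. days); distances: root-to-tip
    genetic distances taken from a rooted tree. Returns the slope
    (a crude evolutionary-rate proxy) and Pearson r; a clearly
    positive r suggests usable temporal signal.
    """
    dates = np.asarray(dates, dtype=float)
    distances = np.asarray(distances, dtype=float)
    slope, intercept = np.polyfit(dates, distances, 1)
    r = np.corrcoef(dates, distances)[0, 1]
    return slope, r

# A positive r (such as the 0.68 reported for staff D) supports
# proceeding to rate and date estimation with BEAST.
```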
Bayesian phylogenetic analysis. SNPs, indels and mobile genetic elements (MGEs) from the index dog and staff members A, D and F were used to estimate the topologies of phylogenies of the strains found within these individuals. Separate analyses were performed for the two clades of MRSA found within staff member A. The data sets for each individual were divided into two partitions: (i) SNPs and (ii) indels and MGEs (Supplementary Data sets 2, 3 and 4). Evolutionary rates were estimated separately for these two partitions. A GTR model with a gamma distribution of rates over sites was used for the SNP partition. A two-state F81 model with no rate variation over sites was used for the indel/MGE partition. MrBayes was used to estimate the phylogenies 57. No information about fixed sites was included in the models. Convergence and burn-in were established using Tracer v1.6 (http://beast.bio.ed.ac.uk/Tracer). All MrBayes runs were well converged, with estimated sample size (ESS) values >200. The topologies that resulted from the analysis of SNPs, indels and MGEs are described in Fig. 4. Trees represent the consensus tree topology, with nodes present in <50% of trees collapsed to polytomies, and rooting according to the maximum likelihood (ML) topology.
Calculation of diversity. The genetic diversity within populations of sequenced isolates was measured using standard population genetic statistics, namely Watterson's theta (a measure based on the number of segregating sites) and the nucleotide diversity, pi (the mean proportion of sites that differ between pairs of sequences 58; Supplementary Fig. 2).
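As a concrete illustration of the two statistics (our sketch, using the standard textbook formulas rather than the authors' code): for n aligned sequences of length L with S segregating sites, Watterson's estimator per site is theta_W = S / (a_n * L), with a_n the sum of 1/i for i = 1 to n-1, and pi is the mean proportion of differing sites over all sequence pairs.

```python
import itertools
import numpy as np

def diversity_stats(alignment):
    """Watterson's theta and nucleotide diversity (pi), per site.

    alignment: list of equal-length sequences (strings).
    """
    n = len(alignment)
    L = len(alignment[0])
    seqs = np.array([list(s) for s in alignment])

    # Segregating sites: alignment columns with more than one allele.
    S = sum(len(set(seqs[:, j])) > 1 for j in range(L))
    a_n = sum(1.0 / i for i in range(1, n))
    theta_w = S / (a_n * L)

    # pi: mean proportion of differing sites over all pairs.
    diffs = [np.mean(seqs[i] != seqs[j])
             for i, j in itertools.combinations(range(n), 2)]
    pi = float(np.mean(diffs))
    return theta_w, pi

# diversity_stats(["ACGT", "ACGA", "ACTA"]) -> (0.333..., 0.333...)
```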
Rarefaction and extrapolation.
To estimate the number of additional variants that would be obtained from further sequencing of isolates from a single host, a rarefaction and extrapolation analysis was performed using data from the nine swabs taken from the index dog 59 (Supplementary Fig. 3). This analysis was performed using EstimateS 9.1.0 (EstimateS: statistical estimation of species richness and shared species from samples, version 9, http://purl.oclc.org/estimates) 60.
Copy number variation. To screen genomes for copy number variation, we used the cn.MOPS R package 61 with the haplocn.mops algorithm adjusted for haploid genomes. All genomes were screened visually for regions with potentially changed copy number. Next, the mean read count per base was generated, and an R script was then used to identify regions with an increase in copy number relative to the mean copy number of the entire genome. Regions identified as having an increased copy number were then manually inspected in the annotated genome sequence and by comparative genomics to identify the boundaries of mobile genetic elements. One region encoding a likely bacteriophage (φSa2 HO 5096 0412, present at coordinates 1520142-1566315 in the reference genome HO 5096 0412 (ref. 13)) was identified as variable in isolates in all three clades. To take into account differences in coverage due to proximity to the origin of replication and terminus, the fold-difference copy number of φSa2 HO 5096 0412 sequences between isolates was calculated by dividing the mean read count of the entire region spanning φSa2 HO 5096 0412 by the mean read count of the 100 kb upstream and 100 kb downstream. Isolates with a φSa2 HO 5096 0412 copy number ≥10% above or below the median for the entire clade (median copy numbers: 1.05, 1.36 and 1.19 for clades 1, 2 and 3, respectively) were deemed to have a changed copy number (Supplementary Figs 4 and 5).
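The normalization step lends itself to a short illustration (our sketch; the coordinates and the 100-kb flank follow the text above, while the array layout and function names are assumptions): the relative phage copy number is the mean per-base read depth across the phage region divided by the mean depth across its flanking windows.

```python
import numpy as np

def phage_copy_number(depth, start, end, flank=100_000):
    """Relative copy number of a phage region from per-base read depth.

    depth: 1-D array of per-base read counts for one genome;
    start, end: 0-based phage coordinates; flank: size of the
    upstream/downstream windows used for normalization.
    """
    phage_mean = depth[start:end].mean()
    flank_depth = np.concatenate([
        depth[max(0, start - flank):start],
        depth[end:end + flank],
    ])
    return phage_mean / flank_depth.mean()

def changed_copy_number(cn, clade_median, tol=0.10):
    # Flag isolates >= 10% above or below the clade median.
    return abs(cn - clade_median) >= tol * clade_median

# e.g. changed_copy_number(phage_copy_number(depth, 1520141, 1566315), 1.05)
```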
"Biology",
"Environmental Science",
"Medicine"
] |
Pillar[n]arene-Mimicking/Assisted/Participated Carbon Nanotube Materials
Recent progress in pillar[n]arene-assisted/participated carbon nanotube hybrid materials is summarized and discussed. The molecular structure of pillar[n]arene can serve different roles in the fabrication of attractive carbon nanotube-based materials. Firstly, pillar[n]arene can provide the structural basis for enlarging the cylindrical pillar-like architecture, either by forming one-dimensional, rigid, tubular, oligomeric/polymeric structures with aromatic moieties as linkers, or by forming spatially "closed", channel-like, flexible structures perfunctionalized with peptides and stabilized by intramolecular hydrogen bonding. Interestingly, such pillar[n]arene-based carbon nanotube-resembling structures have been used as porous materials for the adsorption and separation of gases and toxic pollutants, as well as for artificial water channels and membranes. In addition to the art of organic synthesis, self-assembly based on pillar[n]arene, such as self-assembled amphiphiles, is also used to promote and control the dispersion behavior of carbon nanotubes in solution. Furthermore, functionalized pillar[n]arene derivatives have been integrated with carbon nanotubes through supramolecular interactions to prepare advanced hybrid materials, which can also incorporate other components such as Ag and Au nanoparticles for catalysis and sensing.
Introduction
The investigation of macrocyclic compounds [1,2] had a very "Noble" beginning [3] with the study of crown ethers, known as the "first generation" of macrocyclic compounds [4][5][6], which earned Cram [7], Lehn [8] and Pedersen [9] the Nobel Prize in Chemistry in 1987. Since then, a great deal of remarkable research has been carried out to enrich the synthesis of different macrocyclic structures [10,11], as well as to expand the boundaries of macrocyclic compound-based materials [12]. For example, macrocycle-modified, hybrid, multi-dimensional carbon materials [13,14] have attracted much attention due to their wide applications in biomedicine [15,16], catalysis [17], batteries [18] and sensors [17,19]. The most famous are macrocycle-based graphene [20] and carbon nanotube materials [21]. The physicochemical properties and behaviors of these carbon materials have been improved greatly in the presence of functionalized macrocyclic compounds [22]. However, the very interesting question of whether carbon materials such as carbon nanotubes (Chart 1) can be prepared directly from a macrocyclic skeleton has rarely been addressed [23].
Actually, macrocycles provide a dream structural skeleton, the missing piece for constructing carbon materials such as carbon nanotubes via the art of organic synthesis [24][25][26]. Years ago [27], pillar[n]arene (Chart 1) was reported by Ogoshi et al. as the "fifth generation" of macrocyclic compounds [28][29][30][31][32]; it is composed of hydroquinone units bridged by methylene subunits at their para-positions, giving an electron-rich cavity [33,34], pillar-shaped, cyclic, rigid molecular architectures [35], and unique planar chirality [36][37][38][39]. From the organic synthesis point of view, the first goal in utilizing pillar[n]arene was to construct another famous carbon material: the carbon nanotube [40,41]. Up to now, several classic organic syntheses [26] have been employed to mimic the structure of carbon nanotubes using the skeleton of pillar[n]arene, leading to interesting porous materials [42]. Additionally, proper functionalization of the "short" cylindrical pillar[n]arene can also reproduce characteristics of carbon nanotubes. Owing to the architecture of functionalized pillar[n]arene as well as supramolecular interactions, the physicochemical properties of modified pillar[n]arenes are very similar to those of carbon nanotubes, and much work has been carried out to compare the performance of pillar[n]arenes and carbon nanotubes in several fields, such as artificial channels [43]. Furthermore, functional pillar[n]arene derivatives, such as water-soluble ones, have been used as amphiphilic additives [44] to improve the physicochemical properties of carbon nanotubes, e.g., by greatly improving their dispersion in aqueous solutions. Finally, functional pillar[n]arenes have been directly decorated on the surface of carbon nanotubes to expand current applications, constructing pillar[n]arene-carbon nanotube hybrid materials for sensing [45], catalysis [46] and supramolecular gels [47].
In this review, we summarize pillar[n]arene-assisted carbon nanotube materials. At the very beginning, the structural skeleton of pillar[n]arene is taken as the basis for synthesizing linear, rigid, oligomeric/polymeric architectures and flexible, conformationally adaptive, peptide-modified, one-dimensional, spatially confined architectures that resemble the morphological molecular structure and physicochemical characteristics of carbon nanotubes, leading to various interesting applications such as the adsorption of toxic pollutants, gas adsorption and separation, and artificial water channels. In addition, water-soluble pillar[n]arene can include hydrophobic guest molecules to form self-assembled amphiphiles that assist in dispersing carbon nanotubes in aqueous solutions. Furthermore, both polymeric and water-soluble pillar[n]arenes have the capacity to integrate carbon nanotubes, in the absence and presence of other inorganic components, respectively, for the preparation of carbon nanotube-based hybrid materials, paving the way for supramolecular organogels, catalysis and sensing. Finally, we discuss new challenges in the Overview and Outlook section and provide preliminary suggestions for future work in this field.

Chart 1. Schematic presentation of the structures of pillar[n]arene (left), as well as single-walled (middle) and multi-walled (right) carbon nanotubes.
Preparing Linear Pillar[n]arene-Based Oligomer/Polymer via Rigid Aromatic Bridges
Owing to their rigid pillar-like molecular structures and electron-rich cavities [48], functionalized pillar[n]arene skeletons were employed as building blocks for the construction of linear oligomeric and polymeric architectures, such as P1-P7 (Scheme 1), by introducing rigid aromatic bridging subunits to mimic carbon nanotubes [42,49-52]. Several classic organic syntheses, such as heterocyclization and Pd-catalyzed coupling reactions [53-56], have been widely employed. In particular, these carbon nanotube-resembling linear pillar[n]arene-based porous materials have been used in diverse applications such as gas adsorption [57,58], as well as the adsorption and separation of toxic pollutants in water [59]. For example, P2 exhibits recognition toward solvent molecules such as dichloromethane [50], whereas P3 can capture toxic pollutants such as adiponitrile [51]. In addition, P5 selectively adsorbs CO2 rather than N2 or methane [52], whereas P6 and P7 were used for separating propane from a simulated gas mixture of methane and propane [42].
Scheme 1. Chemical structures of pillar[5]arene-based polymers P1-P7.
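For readers gauging such separation results, the figure of merit usually quoted is the ideal adsorption selectivity; the definition below is the standard textbook form, offered here for orientation rather than taken from the cited works:

\[
S_{\mathrm{A/B}} = \frac{q_{\mathrm{A}}/q_{\mathrm{B}}}{p_{\mathrm{A}}/p_{\mathrm{B}}},
\]

where \(q_{\mathrm{A}}\) and \(q_{\mathrm{B}}\) are the adsorbed amounts of the two gases and \(p_{\mathrm{A}}\), \(p_{\mathrm{B}}\) their partial pressures; for P5, for instance, A = CO2 and B = N2 or CH4.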
Preparing Peptide-Appended Pillar[n]arene Possessing Intramolecular Hydrogen Bonds
Besides introducing rigid aromatic moieties as bridging linkers for the construction of pillar[n]arene-based polymeric architectures, diverse designs and synthetic strategies were pursued for mimicking carbon nanotubes. For example [60], the peptide-appended pillar[n]arene P8 (Scheme 2) was produced to introduce intramolecular interactions such as hydrogen bonding [61] to form "closed", tubular molecular architectures [62], as well as to reproduce the performance of carbon nanotubes as artificial water channels [63,64] and permeable membranes [65-67]. It was revealed that the average single-channel osmotic water permeability [65] and the ion rejection of P8 were closely analogous to those of carbon nanotubes [67]. Furthermore, the pore density [68] of P8-based channel arrays was much higher than that of carbon nanotube-based ones [65]. It was also found that the flexible conformation of the peptide-appended pillar[n]arene facilitated water permeability [66,69,70].
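As context for the permeability comparison in refs. [65,67], the single-channel osmotic water permeability \(p_f\) is conventionally defined through the per-channel water flux under an osmotic gradient (a textbook relation, not a formula reproduced from those papers):

\[
j_w = p_f \, N_A \, \Delta C_{\mathrm{osm}},
\]

where \(j_w\) is the net water flux through one channel (molecules s\(^{-1}\)), \(p_f\) has units of cm\(^3\) s\(^{-1}\), \(N_A\) is Avogadro's number, and \(\Delta C_{\mathrm{osm}}\) is the transmembrane osmolyte concentration difference. Comparable \(p_f\) values are what justify describing the P8 channels as carbon nanotube-like in performance.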
Dispersion of Carbon Nanotubes Using Functionalized Pillar[n]arene
Water-soluble pillar[n]arene could include hydrophobic guest molecules [71,72], producing pillar[n]arene-based self-assembled amphiphiles (PSAs) [73,74] that resemble the performance of general surfactants in dispersing carbon nanotubes [75]. For example [76], the water-soluble carboxylate-perfunctionalized [77] pillar[6]arene P9 (Scheme 3 and Table 1) can recognize the hydrophobic pyrene [78] derivative G1 (Scheme 3) with 1:1 stoichiometry and aggregate into vesicular architectures [79] in aqueous solution, as confirmed by transmission electron microscopy (TEM) [80]. Since the water solubility of P9 changes with pH, the morphology of the P9 ⊃ G1-based self-assemblies could also be controlled. In addition [76], P9 could include another pyrene derivative, G2 (Scheme 3), with an association constant (Ka) of (8.04 ± 0.68) × 10⁴ M⁻¹. The resulting amphiphilic inclusion complex could further disperse multi-walled carbon nanotubes well upon sonication in aqueous solutions (Figure 1 and Table 1), as confirmed by TEM and scanning electron microscopy (SEM). The π-π stacking interactions [81] between pyrene subunits and carbon nanotubes played a significant role in this process, as confirmed by fluorescence spectroscopy [82].
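For orientation, the reported binding constant corresponds to the standard 1:1 inclusion equilibrium (the general treatment, not a derivation specific to ref. [76]):

\[
\mathrm{P9} + \mathrm{G2} \rightleftharpoons \mathrm{P9} \supset \mathrm{G2},
\qquad
K_a = \frac{[\mathrm{P9} \supset \mathrm{G2}]}{[\mathrm{P9}][\mathrm{G2}]} \approx 8.04 \times 10^{4}\ \mathrm{M^{-1}},
\]

so at the sub-millimolar concentrations typical of such dispersion experiments a substantial fraction of G2 is complexed, which is what makes the amphiphilic inclusion available to coat the nanotube surface.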
Similarly [83], another pyrene derivative, G3 (Scheme 3 and Table 1), which is responsive to UV irradiation and degrades into F1 and F2 (Scheme 3), was further employed as the hydrophobic guest included by P9. The resulting amphiphilic inclusion complex P9 ⊃ G3 exhibited a critical aggregation concentration (CAC) [84] of 1.0 × 10⁻⁶ mol L⁻¹ and was capable of dispersing multi-walled carbon nanotubes in aqueous solutions, as confirmed by TEM (Table 1). Interestingly, the dispersion of carbon nanotubes could be switched by UV irradiation according to the formation/degradation of G3.
Scheme 3. Chemical structures of the pillar[6]arene derivative P9, guests such as the pyrene derivatives G1-G3, as well as the functional molecules F1 and F2 [76,83].
Figure 1. Illustration of dispersing multi-walled carbon nanotubes by using the amphiphilic host-guest inclusion P9 ⊃ G2 in aqueous solution [76].
Besides organic and polymeric components, diverse inorganic materials were also employed for fabricating different functional carbon nanotube-based hybrid materials. For example [86], the water-soluble phosphate-perfunctionalized pillar[6]arene P12 (Scheme 5 and Table 1) could decorate the surface of single-walled carbon nanotubes at room temperature via π-π stacking interactions between the carbon nanotube and the benzene moieties of P12 upon sonication in aqueous solutions, as confirmed by zeta potentials and Fourier transform infrared (FTIR) spectroscopy. In particular, Ag nanoparticles could be further well dispersed on the surface of the carbon nanotube owing to the coordination environment provided by the cavity of P12 (Figures 4 and 5). Thus, the obtained hybrid materials containing Ag nanoparticles, carbon nanotubes and P12 (Figures 4 and 5 and Table 1) exhibited strong catalytic activity toward a series of guest molecules such as 4-nitrophenol (G5, Scheme 5), methylene blue (G6, Scheme 5) and paraquat (G7, Scheme 5), paving the way for efficient electrochemical sensing of highly toxic herbicides.
Photographs of pillar[5]arene-based polymer-carbon nanotube-complexed organogels in 1,2-dichlorobenzene formed via noncovalent interactions (right), compared to the corresponding systems in the presence of P11 ⊃ G4 (middle), as well as a physical mixture of P11 and PEG600 [85]. Copyright © 2022 by the American Chemical Society.
Scheme 5. Chemical structures of pillar[6]arene P12 and pillar[5]arene P13, as well as guests G5-G8 [86,87].
Besides loading Ag nanoparticles, Au nanoparticles [91-93] could also be introduced into carbon nanotube-based hybrid materials with the assistance of pillar[n]arene. For example, the water-soluble hydroxyl pillar[5]arene P13 (Scheme 5) was also used for dispersing single-walled carbon nanotubes in aqueous solutions via non-covalent interactions, and further assisted in promoting the formation of Au nanoparticles on the surface of the carbon nanotubes, leading to the hybrid material Au@(P13-carbon nanotube) (Table 1) [87]. It has been revealed that such hybrid materials performed reasonably in catalyzing the ethanol oxidation reaction (EOR), as well as in sensing p-dinitrobenzene (G8, Scheme 5), owing to the pillar[5]arene cavities.
Overview and Outlook
In conclusion, we summarized the recent progress of pillar[n]arene-assisted carbon nanotube hybrid materials. During the preparation of such hybrid materials, pillar[n]arene could play different but significant roles. For example, either aromatic linkers or peptide subunits could be introduced during synthesis to endow pillar[n]arene moieties with carbon nanotube-like molecular structures and physicochemical characteristics, leading to the formation of one-dimensional rigid oligomers and polymers via covalent bonds, as well as spatially limited, "closed" artificial water channels held together by intramolecular interactions. In particular, by taking advantage of the art of organic synthesis, those carbon nanotube-resembling molecular architectures exhibited reasonable performances in gas adsorption, the adsorption and separation of toxic pollutants, as well as artificial channels and membranes with ion rejection. Furthermore, through the formation of self-assembled amphiphiles by including hydrophobic neutral and positively charged guest molecules in aqueous solutions, water-soluble pillar[n]arene could greatly improve and control the dispersion of carbon nanotubes in response to external stimuli such as changes in pH. Finally, functional pillar[n]arene derivatives could non-covalently integrate carbon nanotubes into diverse advanced hybrid materials in the absence/presence of other inorganic components such as Ag and Au nanoparticles, revealing attractive activity in catalysis and sensing.
Much prospective work in this area remains attractive to researchers, for example in exploring new synthetic methods and designing more functional materials.
Firstly, more functional pillar[n]arene derivatives could be introduced to assist and participate in the fabrication of carbon nanotube-based hybrid materials. For example, up to now only pillar[5]arene and pillar[6]arene have been employed in this field, and larger-sized pillar[n]arenes have not been used (Figure 6) [94]. As is known, pillar[n]arenes with bigger cavities not only exhibit a different molecular geometry and shape [95,96], but also show different physicochemical behaviors such as host-guest inclusion [97,98]. Thus, the encapsulation process of pillar[n]arene-based carbon nanotube hybrid materials should be further explored.
Secondly, more effort should be made to investigate new fabrication methods for pillar[n]arene-involved carbon nanotube-based hybrid materials; for example, how can we build hybrid materials by covalently coupling pillar[n]arene moieties and carbon nanotubes together? Covalent bonds may promote the stability of such hybrid materials, but pose great challenges in choosing proper organic synthesis strategies. Additionally, particular carbon materials, such as carbon nanotubes whose ends are saturated with hydrogen atoms, as well as carbon nanotubes of different sizes, should also be employed in future research to explore new construction strategies.
Thirdly, pillar[n]arene-assisted carbon nanotubes have set a very good example for exploring novel pillar[n]arene-assisted carbon materials such as pillar[n]arene-based fullerene [99]/carbon black/C3N4-containing hybrid materials [100]. The expansion of those studies will not only enhance the physicochemical features of carbon nanotube-based hybrid materials [101-103], but also greatly enrich the current family of hybrid carbon materials, paving the way to enlarging the fields of attractive applications [104].
Figure 6. Upper (left) and side views (middle), as well as packing models (right) of the X-ray single-crystal structures of (a) pillar[8]arene, (b) pillar[9]arene and (c) pillar[10]arene. Hydrogen atoms are omitted for clarity [94]. Copyright © 2012 by The Royal Society.
| 4,844 | 2022-09-01T00:00:00.000 | [ "Materials Science", "Engineering" ] |
The potential for artificial intelligence to transform healthcare: perspectives from international health leaders
Artificial intelligence (AI) has the potential to transform care delivery by improving health outcomes, patient safety, and the affordability and accessibility of high-quality care. AI will be critical to building an infrastructure capable of caring for an increasingly aging population, utilizing an ever-increasing knowledge of disease and options for precision treatments, and combatting workforce shortages and burnout of medical professionals. However, we are not currently on track to create this future. This is in part because the health data needed to train, test, use, and surveil these tools are generally neither standardized nor accessible. There is also universal concern about the ability to monitor health AI tools for changes in performance as they are implemented in new places, used with diverse populations, and over time as health data may change. The Future of Health (FOH), an international community of senior health care leaders, collaborated with the Duke-Margolis Institute for Health Policy to conduct a literature review, expert convening, and consensus-building exercise around this topic. This commentary summarizes the four priority action areas and recommendations for health care organizations and policymakers across the globe that FOH members identified as important for fully realizing AI’s potential in health care: improving data quality to power AI, building infrastructure to encourage efficient and trustworthy development and evaluations, sharing data for better AI, and providing incentives to accelerate the progress and impact of AI.
Artificial intelligence (AI), supported by timely and accurate data and evidence, has the potential to transform health care delivery by improving health outcomes, patient safety, and the affordability and accessibility of high-quality care1,2. AI integration is critical to building an infrastructure capable of caring for an increasingly aging population, utilizing an ever-increasing knowledge of disease and options for precision treatments, and combatting workforce shortages and burnout of medical professionals. However, we are not currently on track to create this future. This is in part because the health data needed to train, test, use, and surveil these tools are generally neither standardized nor accessible. This is true across the international community, although there is variable progress within individual countries. There is also universal concern about monitoring health AI tools for changes in performance as they are implemented in new places, used with diverse populations, and over time as health data may change.
The Future of Health (FOH) is an international community of senior health care leaders representing health systems, health policy, health care technology, venture funding, insurance, and risk management. FOH collaborated with the Duke-Margolis Institute for Health Policy to conduct a literature review, expert convening, and consensus-building exercise. In total, 46 senior health care leaders were engaged in this work, from eleven countries in Europe, North America, Africa, Asia, and Australia. This commentary summarizes the four priority action areas and recommendations for health care organizations and policymakers that FOH members identified as important for fully realizing AI's potential in health care: improving data quality to power AI, building infrastructure to encourage efficient and trustworthy development and evaluations, sharing data for better AI, and providing incentives to accelerate the progress and impact of AI.
Powering AI through high-quality data
"Going forward, data are going to be the most valuable commodity in health care. Organizations need robust plans about how to mobilize and use their data."
AI algorithms will only perform as well as the accuracy and completeness of key underlying data, and data quality is dependent on actions and workflows that encourage trust.
To begin to improve data quality, FOH members agreed that an initial priority is identifying and assuring reliable availability of high-priority data elements for promising AI applications: those with the most predictive value, those of the highest value to patients, and those most important for analyses of performance, including subgroup analyses to detect bias.
Leaders should also advocate for aligned policy incentives to improve the availability and reliability of these priority data elements. There are several examples of efforts across the world to identify and standardize high-priority data elements for AI applications and beyond, such as the multinational project STANDING Together, which is developing standards to improve the quality and representativeness of data used to build and test AI tools3.
Policy incentives that would further encourage high-quality data collection include (1) aligned payment incentives for measures of health care quality and safety, and ensuring the reliability of the underlying data, and (2) quality measures and performance standards focused on the reliability, completeness, and timeliness of collection and sharing of high-priority data itself.
Trust and verify
"Your AI algorithms are only going to be as good as the data and the real-world evidence used to validate them, and the data are only going to be as good as the trust and privacy and supporting policies."
FOH members stressed the importance of showing that AI tools are both effective and safe within their specific patient populations. This is a particular challenge with AI tools, whose performance can differ dramatically across sites and over time, as health data patterns and population characteristics vary. For example, several studies of the Epic Sepsis Model found both location-based differences in performance and degradation in performance over time due to data drift4,5. However, real-world evaluations are often much more difficult for algorithms that are used for longer-term predictions, or to avert long-term complications from occurring, particularly in the absence of connected, longitudinal data infrastructure. As such, health systems must prioritize implementing data standards and data infrastructure that can facilitate the retraining or tuning of algorithms, test for local performance and bias, and ensure scalability across the organization and longer-term applications6.
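As a concrete illustration of what such surveillance can look like in practice, the sketch below computes a discrimination metric over successive time windows of logged predictions and flags degradation. It is a minimal example under stated assumptions: the column names, window length, baseline, and tolerance are hypothetical, and a real monitoring program would also track calibration and subgroup performance.

```python
# Minimal sketch: monitor a deployed clinical risk model for performance
# drift by scoring AUROC over successive time windows of logged predictions.
# Column names ("timestamp", "outcome", "score") and thresholds are
# illustrative assumptions, not taken from any specific system.
import pandas as pd
from sklearn.metrics import roc_auc_score

def windowed_auroc(df: pd.DataFrame, freq: str = "30D", min_n: int = 200) -> pd.Series:
    """AUROC of model scores against observed outcomes, per time window."""
    df = df.sort_values("timestamp").set_index("timestamp")
    results = {}
    for window_end, chunk in df.groupby(pd.Grouper(freq=freq)):
        # Skip windows that are too small, or that contain only one outcome
        # class, since AUROC is undefined or unstable there.
        if len(chunk) >= min_n and chunk["outcome"].nunique() == 2:
            results[window_end] = roc_auc_score(chunk["outcome"], chunk["score"])
    return pd.Series(results, name="auroc")

def drift_alerts(auroc: pd.Series, baseline: float = 0.80, tol: float = 0.05) -> pd.Series:
    """Flag windows whose AUROC falls more than `tol` below the validation baseline."""
    return auroc[auroc < baseline - tol]
```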
There are efforts to help leaders and health systems develop consensus-based evaluation techniques and infrastructure for AI tools, including HealthAI: The Global Agency for Responsible AI in Health, which aims to build and certify validation mechanisms for nations and regions to adopt; and the Coalition for Health AI (CHAI), which recently announced plans to build a US-wide health AI assurance labs network7,8. These efforts, if successful, will assist manufacturers and health systems in complying with new laws, rules, and regulations being proposed and released that seek to ensure AI tools are trustworthy, such as the EU AI Act and the 2023 US Executive Order on AI.
Sharing data for better AI
"Underlying these challenges is the investment required to standardize business processes so that you actually get data that's usable between institutions and even within an institution." While high-quality internal data may enable some types of AI-tool development and testing, this is insufficient to power and evaluate all AI applications.To build truly effective AI-enabled predictive software for clinical care and predictive supports, data often need to be interoperable across health systems to build a diverse picture of patients' health across geographies, and reliably shared.
FOH members recommended that health care leaders work with researchers and policymakers to connect detailed encounter data with longitudinal outcomes, and pilot opportunities across diverse populations and systems to help assure valid outcome evaluations as well as address potential confounding and population subgroup differences; the ability to aggregate data is a clear rate-limiting step. The South African National Digital Health Strategy outlined interventions to improve the adoption of digital technologies while complying with the 2013 Protection of Personal Information Act9. Although challenges remain, the country has made progress on multiple fronts, including building out a Health Patient Registration System as a first step towards a portable, longitudinal patient record system and releasing a Health Normative Standards Framework to improve data flow across institutional and geographic boundaries10.
Leaders should adopt policies in their organizations, and encourage adoption in their province and country, that simplify data governance and sharing while providing appropriate privacy protections, including building foundations of trust with patients and the public as previously discussed. Privacy-preserving innovations include ways to "share" data without movement from protected systems using approaches like federated analyses, data sandboxes, or synthetic data. In addition to exploring privacy-preserving approaches to data sharing, countries and health systems may need to consider broad and dynamic approaches to consent11,12. As we look to a future where a patient may have thousands of algorithms churning away at their data, efforts to improve data quality and sharing should include enabling patients' access to and engagement with their own data to encourage them to actively partner in their health and provide transparency on how their data are being used to improve health care. For example, the Understanding Patient Data program in the United Kingdom produces research and resources to explain how the National Health Service uses patients' data13. Community engagement efforts can further assist with these efforts by building trust and expanding understanding.
FOH members also stressed the importance of timely data access. Health systems should work together to establish re-usable governance and privacy frameworks that allow stakeholders to clearly understand what data will be shared and how it will be protected to reduce the time needed for data use agreements. Trusted third-party data coordinating centers could also be used to set up "precertification" systems around data quality, testing, and cybersecurity to support health organizations with appropriate data stewardship to form partnerships and access data rapidly.
Incentivizing progress for AI impact
"Unless it's tied to some kind of compensation to the organization, the drive to help implement those tools and overcome that risk aversion is going to be very high… I do think that business driver needs to be there."
AI tools and data quality initiatives have not moved as quickly in health care due to the lack of direct payment, and often, misalignment of financial incentives and supports for high-quality data collection and predictive analytics. This affects both the ability to purchase and safely implement commercial AI products as well as the development of "homegrown" AI tools.
FOH members recommended that leaders should advocate for paying for value in health: quality, safety, better health, and lower costs for patients. This better aligns the financial incentives for accelerating the development, evaluation, and adoption of AI as well as other tools designed to either keep patients healthy or quickly diagnose and treat them with the most effective therapies when they do become ill. Effective personalized health care requires high-quality, standardized, interoperable datasets from diverse sources14. Within value-based payments themselves, data are critical to measuring quality of care and patient outcomes, adjusted or contextualized for factors outside of clinical control. Value-based payments therefore align incentives for (1) high-quality data collection and trusted use, (2) building effective AI tools, and (3) ensuring that those tools are improving patient outcomes and/or health system operations.
Conclusion
Data have become the most valuable commodity in health care, but questions remain about whether there will be an AI "revolution" or "evolution" in health care delivery. Early AI applications in certain clinical areas have been promising, but more advanced AI tools will require higher quality, real-world data that is interoperable and secure. The steps health care organization leaders and policymakers take in the coming years, starting with short-term opportunities to develop meaningful AI applications that achieve measurable improvements in outcomes and costs, will be critical in enabling this future that can improve health outcomes, safety, affordability, and equity.
| 2,581.6 | 2024-04-09T00:00:00.000 | [ "Medicine", "Computer Science" ] |
Factors Affecting the Younger Generation's Interest in Agriculture (Case Study in Deli Serdang District)
Indonesian agriculture has a serious problem with the declining interest of the younger generation in agricultural businesses, especially food crops. Over a period of 10 years there has been a decline of nearly 15% in farmer households engaged in agriculture (BPS data, 2013), while on the other hand the need for food continues to increase along with the growth of Indonesia's population. Indonesian agriculture also faces declining agroecosystem quality, competition from foreign products, stagnant productivity, and land conversion. Paddy rice cultivation has become increasingly unattractive to the younger generation, particularly in recent years, due to declining income levels. The purpose of this study was to understand the factors that influence the interest of young people in rice farming. This research was conducted in Deli Serdang District; the location was chosen for its potential as a rice-growing area. The research used survey methods and linear regression analysis. The results indicate that internal and external factors (age, gender, education, marital status, expectations, land ownership, socialization and technology) have a significant influence on the interest of the younger generation. Keywords—Younger generation, interest, rice farming, Deli Serdang, Sumatera.
I. INTRODUCTION
Until now the agricultural sector has had a strategic role in national development as the food provider for Indonesia's population of nearly 260 million people (BPS 2016). On the other hand, Indonesian agriculture faces serious challenges: not only the declining quality of agroecosystems, the onslaught of imported products, and the stagnation of production, but also the declining number of farmers. These conditions indicate that the agricultural sector is currently less attractive to the younger generation. Similar conditions also occur in other developing economies, where the number of farmers continues to decrease. There is no regeneration of farmers, because the percentage of young farmers under 35 years of age continues to shrink. From 2003 to 2013 the number of farming families fell: BPS data record that within those 10 years the number of farmer households decreased by 5 million. This figure is quite large and has implications for the sustainability of the agricultural sector, because the Indonesian agricultural model is a family farming model that has been proven able to maintain production and sustain farmers' livelihoods. Besides the reduced number of farmers, other problems relate to the age and productivity of the farmers themselves. The age structure of farmers is old: 60.8% are above 45 years, 73.97% are educated only to the elementary level, and the capacity to apply new technology is low. Agricultural problems concern not only aging farmers but also the human resources supporting agriculture, namely PPL (Field Agricultural Extension workers) and POPT (Plant Pest Organism Observers), most of whom have entered old age (70% are over 50 years) and are approaching retirement. This certainly affects the performance, and even the sustainability, of the national agricultural system. The low share of young age groups in the agricultural sector is not a new phenomenon; we have faced this situation for a long time, and it continues to worsen. There are many reasons why young people can be reluctant to return to agriculture. The main reason is of course economic: farming is still seen as a profession that is not promising, gives no hope, and does not provide large profits, and the business carries high risk due to crop failure caused by pests and diseases, natural disasters, and unclear price fluctuations, so farmers often experience losses and wrestle with poverty. With this stigma, the agricultural sector cannot attract the attention of young people; they would rather work as factory workers or work in the city. The interest and participation of young people in agriculture continue to decline, for a number of reasons: agriculture is considered unable to sustain the future, access to land and capital is limited, and other support for the younger generation is lacking. Based on data from the Agriculture Service Office of Deli Serdang District, workers in the agricultural sector are on average more than 45 years old. The low number of young people in the agricultural sector means there is no regeneration in agriculture. Agriculture as the supplier of food for humans may fail to develop because so few of the younger generation, a generation rich in ideas, enter the world of agriculture. This imbalance in the agricultural sector will contribute to a decline in the amount of food produced.
The interest of the young generation in the Coal Regency in working in the agricultural sector is generally still low at present. This is supported by the opinion of Herlina, in Herawati (2017), who states that many young people currently hold modern cultural value orientations and choose jobs outside the agricultural sector in urban areas in order to gain wealth and status. Facts in Deli Serdang District show that the younger generation is becoming reluctant to venture into agricultural business: young people prefer to work in the industrial sector or in other non-agricultural occupations such as construction workers, porters, online motorcycle-taxi drivers, barbers and so on. Their reasons for not choosing agricultural work include the unstable selling prices of agricultural products, uncertain price fluctuations that often cause losses, the perception that earning money from the sale of agricultural products takes a long time, and the lack of government encouragement or socialization of the importance of agriculture to young people. Based on the description above, it was considered necessary to conduct research entitled "Factors Affecting the Interest of the Younger Generation in Businesses in the Field of Agriculture in Deli Serdang District".
The Aim of Research
From the identification of the problems raised above, the purpose of this study is: 1. To examine the interest of the younger generation in businesses in agriculture in Deli Serdang Regency. Variable measurement in this study uses a Likert scale: each construct is translated into variable indicators, and the indicators are used as a starting point for composing instrument items in the form of statements or questions. The measurement of the variables affecting the effectiveness of farmer group management is shown in Table 1. Sampling was conducted on 73 randomly selected young-generation farmers. To identify the factors that influence the interest of young people in agricultural businesses, multiple linear regression analysis was performed. To determine the suitability of the model, the coefficient of determination ($R^2$) and the F test (overall test) were used. The coefficient of determination shows the ability of the independent variables to explain their effect on the dependent variable, expressed as the percentage of the dependent variable that is explained by the independent variables included in the regression model. $R^2$ ranges from 0 to 1, and the closer the result is to 1, the better the model. The coefficient of determination is formulated as follows:
$R^2 = \dfrac{\sum_i (\hat{Y}_i - \bar{Y})^2}{\sum_i (Y_i - \bar{Y})^2}$
The F test is used to determine whether all independent variables (X) together influence the dependent variable (Y). Based on Table 2 it can be seen that F-count (9.435) > F-table (2.62) with a significance value of 0.000 < 0.05, so H0 is rejected and H1 is accepted. This means that the X variables simultaneously have a significant effect on variable Y. The second hypothesis, which states that education, gender, marital status, age, desires and expectations, needs, socialization, land, technology, and the attractiveness of other occupations have a significant effect on the interest of the younger generation in businesses in agriculture in Deli Serdang Regency, is accepted.
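A minimal sketch of the analysis the paper describes (ordinary least squares with ten predictors, reporting R² and the overall F test) is shown below; the file name and column labels are placeholders standing in for the study's Likert-scored survey data.

```python
# Sketch of the paper's analysis: multiple linear regression of youth interest
# on ten internal/external factors, with R^2 and the overall F test.
# "youth_interest_survey.csv" and the column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("youth_interest_survey.csv")
predictors = ["education", "gender", "marital_status", "age", "hope_desire",
              "needs", "socialization", "land_area", "technology", "other_jobs"]

X = sm.add_constant(df[predictors])      # adds the intercept term (alpha)
model = sm.OLS(df["interest"], X).fit()

print(model.rsquared)                    # coefficient of determination (paper: 0.606)
print(model.fvalue, model.f_pvalue)      # overall F test (paper: F = 9.435, p < 0.001)
print(model.summary())                   # per-variable t tests and coefficients
```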
Effect of the Formal Education Variable (X1.1) on Y
Based on the t-test results, t-count (2.412) > t-table (1.999), which means that the formal education variable (X1.1) has a significant influence on the interest of the younger generation in businesses in agriculture. This is strengthened by the significance value (0.003) < (0.05). The contribution of the formal education variable (X1.1) to the interest of the younger generation (Y) is 20.6%, as evidenced by the Standardized Coefficients Beta value of 0.206 from the multiple linear regression analysis. This is because the agricultural world is no longer only for young people who graduated from elementary school; even high school and university graduates like working in the agricultural sector, and some graduates from majors outside agriculture switch to it. In addition, agricultural faculties have recently been flooded with students who want to study agriculture; the growing number of applicants is one sign of emerging interest in the agricultural sector. This is also reinforced by the opinion of Eryanto (2013) that formal education is an effort that leads to the achievement of development that can stimulate rational, creative and systematic thinking.
Effect of the Gender Variable (X1.2) on Y
The t-test results show that t-count (1.856) < t-table (1.999), which means that the gender variable (X1.2) has no significant effect on the interest of the younger generation in businesses in agriculture. This is strengthened by the significance value (0.102) > (0.05). The contribution of the gender variable (X1.2) to the interest of the younger generation (Y) is 10.1%, as evidenced by the Standardized Coefficients Beta value of 0.101 from the multiple linear regression analysis. Herlina (2002) suggests that youth perception of work is also influenced by gender differences. This is indicated by the perception in the community of work in the agricultural sector as tiring and physically punishing, so that it is considered inappropriate for unmarried girls to work in the agricultural sector.
Effect of Marital Status (X1.3) on Y
The t-test results show that t-count (1.304) < t-table (1.999), which means that the marital status variable (X1.3) has no significant effect on the interest of the younger generation in businesses in agriculture. This is reinforced by the significance value (0.762) > (0.05). The contribution of the marital status variable (X1.3) to the interest of the younger generation (Y) is 2.8%, as evidenced by the Standardized Coefficients Beta value of 0.028 from the multiple linear regression analysis. Herlina (2002) found that married young people have a better perception of work in the agricultural sector than youth who have never married. Unmarried youth regard work in the agricultural sector as heavy and dirty, with low social status in the eyes of society, whereas married young men face demands to provide for their families and must work even when the work is heavy; this is why married youth perceive agricultural work more favorably than unmarried youth.
Effect of Age (X1.4) on Y
The t-test results show that t-count (1.330) < t-table (1.999), which means that the age variable (X1.4) has no significant effect on the interest of the younger generation in businesses in agriculture. The contribution of the age variable (X1.4) to the interest of the younger generation (Y) is 4.2%, as evidenced by the Standardized Coefficients Beta value of 0.042 from the multiple linear regression analysis. The younger generation is also active outside the agricultural sector, in fields such as industry and trade. This is reinforced by the opinion of Lionberger (1960) that younger people usually have the enthusiasm to learn what they do not yet know, so they tend to adopt innovations more quickly even without prior experience of those innovations. According to Tjakrawati, in Amelia (2005), the factor driving the low involvement of young workers in the agricultural sector is the view that at a young age one should look for other jobs outside the agricultural sector that are more challenging and match one's interests; they plan to take up agricultural work in old age, after accumulating money from working outside the sector, and they are also encouraged to work outside the agricultural sector by the positive returns they expect to obtain.
Effect of Desire and Hope (X1.5) on Y
The results of the statistical analysis show that the desire and expectation variable has a significant effect on the interest of the younger generation, shown by the significance value of 0.000 < 0.05. This is evidenced by t-count (2.528) > t-table (1.999) at the 5% error level. The expectation and desire variable affects interest, as seen from the significance value of 0.004 < 0.05 at the 5% error level. Expectations and desires influence interest in farming because of the belief that planting rice will succeed and the hope of earning a profit that can meet the family's needs. Another hope is that the government will help make farming successful.
Effect of Needs (X1.6) on Y
The results of the statistical analysis show that the needs variable has no significant effect on the interest of the younger generation, as indicated by the significance value of 0.268 > 0.05. This is evidenced by t-count (1.114) < t-table (1.999) at the 5% error level. The needs variable thus has no effect on interest, as seen from the significance value of 0.268 > 0.05 at the 5% error level.
Effect of Socialization (X2.1) on Y
The results of the statistical analysis show that the socialization variable has a significant effect on the interest of the younger generation in working in agriculture, as proven by the significance value of 0.001 < 0.05. This is evidenced by t-count (3.063) > t-table (1.999) at the 5% error level. Socialization of farming is generally obtained by youth from families, newspapers, brochures, magazines, television and radio, and sometimes from extension activities organized by extension agents or related institutions. The role of families in socializing farming activities greatly influences the perception of family members, shaping the attitudes and views of the younger generation towards agriculture in general and rice growing in particular. At certain times the younger generation is involved in farming activities because, in general, their parents are farmers. According to Sucipto, in Chandra (2008), the process of socialization is cultural development that takes place through activities involving young people in a series of learning processes and the appreciation of cultural values prevailing in the community, through teaching, guidance and example from the family.
Effect of Land Area (X2.2) on Y
The contribution of the farming land area variable (X2.2) to the interest of the younger generation (Y) is 16.7%, as evidenced by the Standardized Coefficients Beta value of 0.167 from the multiple linear regression analysis. The statistical analysis shows that farming land area has a significant effect on the interest of the younger generation in businesses in the agricultural sector, as evidenced by t-count (2.198) > t-table (1.999) at the 5% error level. These results illustrate that the larger the area of farming land, the greater the interest of the younger generation in businesses in agriculture. In Luntungan's view (2012), farming is usually understood as the study of how to allocate existing resources effectively and efficiently in order to obtain high profits at a certain time: it is effective if farmers or producers allocate the resources they have as well as possible, and efficient if resource utilization produces output that exceeds input.
Influence of Technology (X2.3) on Y
The results of the statistical analysis show that the technology variable has a significant effect on the interest of the younger generation in working in agriculture, as proven by the significance value of 0.002 < 0.05 and by t-count (2.721) > t-table (1.999). The contribution of the technology variable to the interest of the younger generation is 23.2% at the 5% error level. The nature of the technology used in farming affects the interest of young people in agricultural businesses: the easier the technology is to implement, the more readily it can be used throughout the year, and the lower its cost, the more easily it will be accepted. The use and ownership of technology also shape youth perceptions of agriculture. Technology is usually owned only by those who have money, because it is expensive; youth who do not own land end up as farm laborers, and since cultivators prefer to work their own land, farm laborers are left without income.
Effect of the Attractiveness of Other Jobs (X2.4) on Y
The results of the statistical analysis show that the attractiveness-of-other-jobs variable has no significant effect on the interest of the younger generation in working in agriculture, as proven by the significance value of 0.106 > 0.05 and evidenced by t-count (1.613) < t-table (1.999) at the 5% error level. The variable contributes 10.2% to interest, as seen from the Standardized Coefficients Beta value of 0.102 at the 5% error level. Simamora, in Andriani (2017), states that prospects are individuals, groups or organizations considered potential customers who want to be involved in a business exchange; in short, prospects are prospective buyers who have a desire for a particular product or service. Data from the field show that the attractiveness of other jobs does not affect the interest of the younger generation: young people simply choose jobs that are easily obtained, that they understand better, and that yield a profit.
B. Suggestions
The interest of the younger generation in the agricultural sector can be increased by: 1. Socializing rice farming through families, communities, extension workers, the media and agricultural institutions. 2. Using technology that is easy to implement, requires low (efficient) cost, and can be applied throughout the year.
X2.2: Land; X2.3: Technology; X2.4: The attractiveness of another job.
$\hat{Y}_i$ = estimated value of the dependent variable; $\bar{Y}$ = mean of the dependent variable; $Y_i$ = observed value; $R^2$ = coefficient of determination; $F_{table} = F_{(k-1),(n-k);\,\alpha}$, where $k$ = number of regression coefficients, $n$ = number of samples, and $\alpha$ = critical value.
III. RESULTS AND DISCUSSIONS
All variables tested are measured using a Likert scale with 4 levels, and the type of data used is ordinal. Variable Y (interest of the younger generation) is measured using the specified indicators.
Regression models can be explained using the coefficient of determination (KD = R Square × 100%); the greater the value, the better. Based on Table 1, the R Square value is 0.606, so the determination coefficient obtained is 60.6%. This means that variable X contributes a 60.6% effect on variable Y, and the other 39.4% is influenced by factors outside variable X (the predictors). In addition, the R value, which denotes the correlation coefficient, is 0.769.
a. Predictors: (Constant), education, gender, marital status, age, desires and hopes, needs, socialization, land, technology, attractiveness of other occupations.
b. Dependent Variable: interest of the younger generation. Source: Primary Data Analysis (2016).
Table 3. Results of the Multiple Linear Analysis:
$Y = \alpha + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_4 X_4 + \beta_5 X_5 + \beta_6 X_6 + \beta_7 X_7 + \beta_8 X_8 + \beta_9 X_9 + \beta_{10} X_{10}$
A. Conclusion
1. All variables tested, namely internal (X1), with the sub-variables education, gender, marital status, age, hope and desire, and needs, and external (X2), with the sub-variables socialization, land area, technology and the attractiveness of other work, simultaneously have a significant impact on young people's interest in businesses in agriculture (Y).
2. Partially, the factors of formal education, desires and expectations, socialization, land area, and technology have a significant effect on the interest of the younger generation in businesses in agriculture, and the most dominant variable influencing that interest is the socialization sub-variable (X2.1), at 28.9%.
| 4,867.8 | 2018-10-29T00:00:00.000 | [ "Economics" ] |
Novel role of STRAP in progression and metastasis of colorectal cancer through Wnt/β-catenin signaling
Serine-Threonine Kinase Receptor-Associated Protein (STRAP) interacts with a variety of proteins and influences a wide range of cellular processes. Aberrant activation of Wnt/β-catenin signaling has been implicated in the development of colorectal cancer (CRC). Here, we show the molecular mechanism by which STRAP induces CRC metastasis by promoting β-catenin signaling through its stabilization. We have genetically engineered a series of murine and human CRC and lung cancer cell lines to investigate the effects of STRAP on cell migration and invasion in vitro, and on tumorigenicity and metastasis in vivo. Downregulation of STRAP inhibits invasion, tumorigenicity, and metastasis of CRC cells. Mechanistically, STRAP binds with GSK-3β and reduces the phosphorylation, ubiquitylation, and degradation of β-catenin through preventing its binding to the destruction complex. This leads to an inhibition of Wnt/β-catenin signaling and reduction in the expression of downstream targets, such as Cyclin D1, matrix metalloproteinases 2 and 9, and β-TrCP. In human CRC specimens, higher STRAP expression correlates significantly with β-catenin expression with increased nuclear levels (R = 0.696, p < .0001, n = 128). Together, these results suggest that STRAP increases invasion and metastasis of CRC partly through inhibiting ubiquitin-dependent degradation of β-catenin and promoting Wnt/β-catenin signaling.
INTRODUCTION
Wnt/β-catenin signaling plays a pivotal role in many human malignancies, especially in colorectal cancer (CRC) [1]. Even though the general mechanisms of Wnt/β-catenin signaling have been well established, new components of the pathway are still being identified; for example, WTX in Wilms tumor [2] and RACK1 in gastric tumor [3] were found to interact within the destruction complex to modulate this signaling. Furthermore, β-catenin signaling can also be mediated in a Wnt-independent manner, for example through EGFR [4], AKT [5] and JNK [6]. Although it has been reported that more than 80% of CRCs harbor APC truncation/mutation or β-catenin mutation, either of which can activate Wnt/β-catenin signaling during colorectal cancer development [7-9], Wnt/β-catenin signaling is also modulated through various other mechanisms in cancer, including crosstalk with other altered signaling pathways [7]. However, little is known about the role of this crosstalk in contributing to the hyperactivation of Wnt/β-catenin signaling in CRC.
Serine-Threonine Kinase Receptor-Associated Protein (STRAP) is a WD40 domain-containing protein [10] that facilitates specific protein-protein interactions, sometimes leading to multi-protein complexes. We identified STRAP as a negative regulator of TGF-β signaling acting through interaction with the TGF-β receptors and Smad7 [10,11]. Our previous study showed that STRAP is upregulated in both colon and lung carcinomas and promotes tumorigenicity [12]. Subsequently, we reported that STRAP can modulate EWS function in a TGF-β-independent manner [13]. STRAP has also been shown to regulate multiple other signaling pathways through direct physical interaction with proteins such as PDK1 [14], NM23-H1 [15], ASK1 [16] and Sp1 [17]. Together, these findings suggest that STRAP functions as an oncogene through functional interaction with other signaling pathways. However, nothing is known about the role of STRAP in regulating Wnt/β-catenin signaling in colorectal cancer.
Our present study shows that knockdown of STRAP reduces CRC cell invasion and metastasis in vitro and in vivo. In an attempt to understand the mechanism, we observed that STRAP stabilizes β-catenin by inhibiting its ubiquitin-dependent degradation; accordingly, STRAP knockdown inhibits the expression of β-catenin's downstream target genes. Most interestingly, in support of these results, we observed that STRAP and β-catenin are co-upregulated in a high percentage of human CRCs (R = 0.696, p < .0001, n = 128). Thus, our results provide evidence of how STRAP contributes to CRC development and progression through a unique mechanism.
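The reported correlation is a standard Pearson analysis across specimens; a minimal sketch of the computation is below, with simulated placeholder scores in place of the study's immunohistochemistry data.

```python
# Sketch: Pearson correlation between STRAP and beta-catenin staining scores
# across tumor specimens. The scores here are simulated placeholders; the
# study's actual data gave R = 0.696, p < .0001, n = 128.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
strap_score = rng.integers(0, 4, size=128)                          # IHC intensity 0-3
bcat_score = np.clip(strap_score + rng.integers(-1, 2, 128), 0, 3)  # correlated partner

r, p = stats.pearsonr(strap_score, bcat_score)
print(f"R = {r:.3f}, p = {p:.2e}")
```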
Effect of downregulation of STRAP on migration, invasion and tumorigenicity in CRC cell lines
Our previous study has shown that STRAP is upregulated in colon carcinoma and upregulation of STRAP in human cancers may provide growth advantage to tumor cells via TGF-β-dependent and TGF-β-independent mechanisms [12]. To investigate the role of STRAP on invasion and metastasis in CRC, we stably knocked down STRAP in murine colon carcinoma cell lines MC38 and CT26 as determined by western blotting (Figure 1A and Supplementary Figure S1A). To evaluate the effects of STRAP on tumorigenicity of CRC cells in vitro, we performed cell counting and soft agar assays. Downregulation of STRAP significantly inhibited CRC cell growth in liquid culture as well as in soft agar in both MC38 and CT26 cells (Figure 1B and 1C and Supplementary Figure S1B and S1C). To explore the role of STRAP knockdown on cell migration and invasion, we performed trans-well migration and invasion assays (through collagen and matrigel). As shown in Figure 1D and 1E and Supplementary Figure S1D, knockdown of STRAP in these two cell lines reduced cell migration and invasion. Next, we investigated the effects of STRAP on tumorigenicity of CRC cells in vivo using xenograft models. When compared with the vector control cells, downregulation of STRAP remarkably inhibited tumor growth in syngeneic mice (Figure 1F and Supplementary Figure S1E). Lower expression of STRAP in tumors derived from knockdown clones was maintained (Figure 1G and Supplementary Figure S1F). Together, these results suggest that STRAP promotes tumorigenic behavior of CRC cells in vitro and in vivo.
Role of STRAP in regulating β-catenin expression and signaling in CRC cell lines
To determine whether upregulation of STRAP in CRC regulates Wnt/β-catenin signaling, we first examined β-catenin protein expression in STRAP knockdown MC38 and CT26 clones by western blotting. β-catenin was significantly downregulated in knockdown clones compared with control cells (Figure 2A). In contrast, the relative phosphorylation of β-catenin at Ser33/Ser37/Thr41, which initiates ubiquitin-dependent degradation of β-catenin [18], was significantly increased. To determine whether STRAP regulates the subcellular distribution of β-catenin, we examined β-catenin expression in different subcellular fractions. In concert with total β-catenin, both cytoplasmic and nuclear β-catenin were decreased in the stable clones (Supplementary Figure S2A and S2B). These findings prompted us to investigate whether STRAP activates Wnt/β-catenin signaling by increasing β-catenin expression in CRC. We tested Wnt/β-catenin signaling activity using its signaling reporter TOP Flash, which contains three copies of an optimal TCF binding motif (CCTTTGATC), with FOP Flash as a negative control. Downregulation of STRAP significantly inhibited TOP Flash activity in both CRC cell lines when compared with vector controls (Figure 2B). To further validate this hypothesis, we tested the expression of Wnt/β-catenin signaling target genes, including Cyclin D1 [19], c-Myc [20], β-TrCP [21], MMP2, MMP7 and MMP9 [22,23]. Cyclin D1 and β-TrCP levels were reduced in STRAP knockdown clones from both cell lines (Figure 2A and Supplementary Figure S2C and S2D). However, there was not much difference in c-Myc expression, which might be due to cancer type and/or cellular context. In addition, we did not observe any difference in the level of GSK-3β or in its phosphorylation at Ser9 [24,25]. Downregulation of STRAP inhibited the expression of MMP2 and MMP9 at the transcriptional level (Figure 2D) as well as their activity (Figure 2E), but not that of MMP7 (data not shown). To determine whether STRAP regulates β-catenin at the transcriptional level, we measured β-catenin mRNA and found no difference after STRAP knockdown (Figure 2C), suggesting that STRAP promotes Wnt/β-catenin signaling by stabilizing the β-catenin protein.
To determine the specificity of this effect of STRAP, we performed a rescue experiment by infecting MC38 vector and STRAP knockdown clones with STRAP-Flag adenovirus or β-gal adenovirus. As shown in Figure 2F, β-catenin expression was restored when STRAP expression was rescued in the stable knockdown clones. Together, these data indicate that downregulation of STRAP inhibits Wnt/β-catenin signaling by reducing β-catenin expression.
Role of GSK-3β and Wnt3a in STRAP-induced stabilization of β-catenin
We have previously shown that STRAP binds GSK-3β when both proteins are overexpressed in 293T cells and that GSK-3β inhibitors can reduce this binding [26]. To investigate the endogenous interaction in CRC cell lines, we performed immunoprecipitation assays with anti-GSK-3β or anti-STRAP antibody using lysates from MC38 and CT26 cells. STRAP coprecipitated GSK-3β and vice versa, suggesting an endogenous interaction in CRC cell lines (Figure 3A). These findings prompted us to explore whether GSK-3β inhibitors and a proteasomal inhibitor regulate the effect of STRAP on β-catenin expression. We treated MC38 cell clones with the GSK-3β inhibitors LiCl (20 mM) and SB415286 (25 µM) and the proteasomal inhibitor MG132 (25 µM), and then examined β-catenin expression and subcellular localization by western blotting and immunofluorescence, respectively. As shown in Figure 3B and Supplementary Figure 3, β-catenin was completely restored after treatment with GSK-3β inhibitors and MG132 when compared with vector control, although the basal β-catenin levels were lower in STRAP knockdown clones. These results further suggest that STRAP stabilizes β-catenin through interacting with GSK-3β. Next, to test whether Wnt ligand has any effect on STRAP-induced stabilization of β-catenin, we treated MC38 cell clones with increasing doses of Wnt3a and examined β-catenin expression. β-catenin was upregulated similarly in a dose-dependent manner in both vector and STRAP knockdown clones, suggesting that the effect of STRAP was superseded by Wnt3a (Figure 3C). To further evaluate the biological outcome of these effects, we performed cell counting and matrigel invasion assays with MC38 cell clones after treatment with SB415286 or Wnt3a. SB415286 and Wnt3a promoted CRC cell growth (Figure 3D) and invasion (Figure 3E) in both vector control and knockdown clones when compared with the corresponding untreated groups. However, STRAP knockdown stable clones still showed lower cell growth (Figure 3D) and invasion (Figure 3E) when compared with vector control under the same treatments, suggesting that downregulation of STRAP in CRC cell lines inhibits cell growth and invasion partly through inhibiting Wnt/β-catenin signaling. These findings suggest that STRAP promotes cell growth and invasion in CRC through regulating β-catenin expression.
Figure 1: (A) Expression of STRAP in stable clones after transfection of STRAP shRNA was examined by western blotting. β-actin was used as loading control. (B) Cell counting assay. MC38 stable clonal cells with STRAP knockdown and parental and vector control cells were cultured for a total of 6 days. Cells were counted every day for 5 days starting from the third day after seeding, and the cell numbers are plotted. Individual data points are mean ± S.D. of triplicate determinations. ***P < .001. (C) Soft agarose assay. MC38 cells were cultured in 0.4% SeaPlaque agarose for 14 days. The number of colonies was counted and is shown as mean ± S.D. of triplicate wells. ***P < .001. (D) Cell migration assay. MC38 cells were allowed to migrate through collagen-coated transwells for 6 h. The migrated cells were then fixed and stained. Six random high-power fields in each well were counted. Each data point represents mean ± S.D. from three wells. ***P < .001. (E) Cell invasion assay. MC38 cells were allowed to pass through a collagen barrier (top) or a matrigel layer (bottom) in the transwell chambers. The invaded cells were then fixed and stained. Six random high-power fields in each well were counted. Each data point represents mean ± S.D. from three wells. ***P < .001. (F) Replication-deficient adenoviruses (RDA) transiently expressing Flag-tagged STRAP were used to infect MC38 stable clones. After 60 hours of incubation, the cells were harvested and the expression of STRAP and β-catenin was examined by western blotting.
Inhibition of ubiquitin-dependent degradation of β-catenin by STRAP
The above observations prompted us to investigate whether STRAP stabilizes β-catenin by inhibiting its ubiquitin-dependent degradation. To evaluate this hypothesis, we first performed an exogenous β-catenin ubiquitination assay in 293T cells. Overexpression of STRAP reduced ubiquitylated β-catenin in the pulldown/western blot experiment (Figure 4A). For the endogenous ubiquitination assay, we directly immunoprecipitated β-catenin and detected ubiquitylated β-catenin with an anti-ubiquitin antibody. The level of ubiquitylated endogenous β-catenin was much higher in STRAP knockdown stable clones than in control cells (Figure 4B). These observations led us to presume that STRAP, when bound to GSK-3β and Axin, might block β-catenin binding to the destruction complex. To validate this hypothesis, we co-transfected β-catenin and GSK-3β plasmids into 293T cells with increasing doses of STRAP plasmid. GSK-3β was immunoprecipitated from the cell lysates, and the immune complexes were analyzed by western blotting for β-catenin. STRAP inhibited β-catenin binding to its destruction complex in a dose-dependent manner (Figure 4C). To further prove that STRAP can stabilize β-catenin, we treated MC38 cell clones with cycloheximide for the indicated time points. We observed that knockdown of STRAP strongly promoted β-catenin degradation in stable clones when compared with the vector clone and decreased the half-life of β-catenin (Figure 4D and 4E). These results suggest that STRAP stabilizes β-catenin by inhibiting its interaction with the destruction complex.
Figure 3: Role of GSK-3β inhibitors, proteasomal inhibitor and Wnt3a on STRAP-induced stabilization of β-catenin. (A) Endogenous interaction between STRAP and GSK-3β in colon cancer cell lines. Lysates from MC38 and CT26 were subjected to immunoprecipitation using 1 µg of mouse anti-STRAP or rabbit anti-GSK-3β antibodies or the corresponding IgG (negative control). Immunoprecipitated STRAP or GSK-3β was detected by western blotting. (B) MC38 stable clonal cells were treated with the GSK-3β inhibitor SB415286 (25 µM, two time points) and the proteasomal inhibitor MG132 (25 µM, 6 h). Lysates were then subjected to western blotting for β-catenin and STRAP. (C) MC38 stable clonal cells were treated with different doses of Wnt3a (6 h). Treatment with MG132 was used as positive control. The cells were then harvested and subjected to western blotting for β-catenin and STRAP. (D) Growth inhibition in MC38 stable clones was partially abolished by treatment with SB415286 and Wnt3a. MC38 stable clones with STRAP knockdown and parental and control vector cells were treated with 12.5 µM SB415286 or 50 ng/ml Wnt3a and counted every day for 5 days. The cell numbers on day 5 are shown. b & c, P < .01; a, P < .001. (E) The reduction in invasion of MC38 stable clones was partially rescued by treatment with SB415286 and Wnt3a. MC38 cells were seeded on a thin layer of Matrigel. Nine hours later, after the cells had settled, they were treated with 12.5 µM SB415286 or 50 ng/ml Wnt3a for another 12 hours. The invaded cells were counted in six random high-power fields in each well. Each data point represents mean ± S.D. from three wells. e, f, g, P < .001. These experiments were repeated three times.
Figure 4: (A) Exogenous ubiquitination assay. 293T cells were transfected with HA-tagged β-catenin, His6-tagged ubiquitin and Flag-tagged STRAP in the combinations indicated. After treatment with MG132, the cells were lysed in a modified lysis buffer as detailed in the Materials and Methods. Proteins tagged with His6-ubiquitin molecules were pulled down with Ni-NTA agarose beads. Eluted proteins were subjected to western blotting with anti-HA antibody to detect ubiquitinated β-catenin. Expression of the proteins was tested by western blotting. (B) Endogenous ubiquitination assay. Equal amounts of lysates from MC38 stable clones were harvested after treatment with MG132 for immunoprecipitation with anti-β-catenin antibody. Ubiquitinated β-catenin was evaluated with anti-ubiquitin antibody by western blotting. (C) STRAP inhibits β-catenin binding to the destruction complex. 293T cells were transfected with β-catenin-HA, GSK-3β-myc and different doses of STRAP-Flag plasmids in the combinations indicated. The lysates were subjected to immunoprecipitation with c-myc antibody, and bound β-catenin was detected by western blotting with anti-HA antibody. Expression of the proteins was tested by western blotting. (D) and (E) STRAP inhibits β-catenin protein degradation in MC38 stable clones. MC38 stable clones were treated with cycloheximide (CHX, 100 µg/ml) and harvested after the indicated treatment times. β-catenin protein levels were analyzed by western blotting (D). The density of β-catenin was normalized against β-actin and the relative density is presented (E). The half-life of β-catenin in different clones was calculated.
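To make the half-life computation in panels (D) and (E) concrete, the following is a minimal sketch, not taken from the paper, of how a protein half-life can be estimated from cycloheximide-chase densitometry; the time points and densities are illustrative placeholders, not the authors' measurements.

```python
# A minimal sketch (illustrative numbers, not the authors' measurements) of
# half-life estimation from a cycloheximide chase: band densities are
# normalized to the beta-actin loading control and a first-order decay,
# ln(N/N0) = -k * t, is fit by least squares; t1/2 = ln(2) / k.
import numpy as np

def half_life_hours(times_h, catenin_density, actin_density):
    norm = np.asarray(catenin_density, float) / np.asarray(actin_density, float)
    norm /= norm[0]                                   # relative to t = 0
    slope, _ = np.polyfit(times_h, np.log(norm), 1)   # slope = -k
    return np.log(2) / -slope

times = [0, 1, 2, 4, 8]  # hours of CHX treatment
print(half_life_hours(times, [1.00, 0.80, 0.62, 0.40, 0.16], [1.0] * 5))
```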
Role of STRAP in CRC metastasis in an orthotopic model
To evaluate the biological relevance of STRAP-mediated stabilization of β-catenin in CRC, splenic and orthotopic cecum injection models of metastasis were used. For splenic injection, MC38 and CT26 cells were injected into the spleens of syngeneic C57BL/6 and Balb/c mice, respectively, to generate liver metastases. Downregulation of STRAP significantly reduced the rate of metastatic foci formation as well as their growth and spread in the liver for both cell lines (Figure 5A and Supplementary Figure S4A and S4B). Liver metastases derived from knockdown clones maintained lower STRAP expression and showed decreased expression of β-catenin, some of its target genes, and the proliferation marker PCNA when compared with vector control tumors (Figure 5B and Supplementary Figure S4C and S4D). For the orthotopic cecum injection model, we generated a highly aggressive and metastatic mouse cell line, MC38-LM10 (LM10), derived from MC38 cells after 10 stepwise cycles through the splenic injection model of liver metastasis. LM10 cells were injected subserosally into the ceca of C57BL/6 mice to generate primary tumors and metastases. Downregulation of STRAP remarkably reduced primary tumor growth in the ceca (Figure 5C); none of the knockdown-group mice had lymph node metastasis, whereas all vector control mice had lymph node metastases and one had liver metastasis (Figure 5E). Lower expression of STRAP in tumors from knockdown cells resulted in downregulation of β-catenin and its downstream targets Cyclin D1, β-TrCP, MMP2 and MMP9 (Figure 5D and 5F). Together, these data show that STRAP promotes CRC tumorigenicity and metastasis in vivo through regulating β-catenin expression and signaling, consistent with our in vitro observations.
Effect of β-catenin mutation and APC truncation on STRAP-induced stabilization of β-catenin
About 80% of CRCs have APC truncation and about 10% bear β-catenin mutation, both of which can activate Wnt/β-catenin signaling during progression [7][8][9]. To investigate the effect of these mutations on STRAP-induced stabilization of β-catenin, we chose three human colon cancer cell lines, SW480, HCT116 and RKO, with different mutational status of the APC and β-catenin genes (Figure 6A). After knockdown of STRAP, we did not see any effect on β-catenin protein stability in HCT116 cells, which harbor an activating mutation at Ser45 of β-catenin. However, knockdown of STRAP significantly decreased β-catenin protein stability in the isogenic cell lines SW480 (Figure 6B) and SW620 (Supplementary Figure S5A), which carry APC truncated at amino acid 1338. RKO cells, with no mutation in the β-catenin or APC gene, showed a 50% decrease in the level of β-catenin (Figure 6B). These results were further supported by TOP/FOP Flash reporter luciferase assays in these cell lines. Downregulation of STRAP significantly inhibited TOP Flash activity in SW620 (Supplementary Figure S5B), SW480 and RKO cells (Figure 6C), but not in HCT116 cells. Interestingly, we also found that MC38 and CT26 carry wild-type β-catenin (data not shown) and wild-type APC (Figure 6D), which further validates these observations. These results suggest that STRAP has no effect on stabilizing β-catenin in β-catenin-mutated CRC cells, but has a partial effect in APC-truncated cells.
Stabilization of β-catenin by STRAP in lung cancer
Recently, increasing evidence has suggested that Wnt/β-catenin signaling plays an important role in lung carcinogenesis, in which APC and β-catenin mutations are much less frequent than in colon cancers [27]. In addition, our previous studies showed that STRAP is upregulated in 78% of lung carcinomas [12]. These findings prompted us to investigate whether STRAP has any effect on stabilizing β-catenin in lung cancer. To test this, we chose two non-small cell lung cancer (NSCLC) cell lines, H460 and A549, both of which have wild-type APC and β-catenin. Interestingly, we observed results similar to those in CRC cell lines: knockdown of STRAP increased β-catenin phosphorylation and decreased the expression of β-catenin and Cyclin D1 as well as TOP Flash reporter activity (Supplementary Figure S6A-S6C). We did not observe any change in β-catenin mRNA level, whereas MMP2 mRNA was decreased in STRAP knockdown clones (Supplementary Figure S6D). These results further generalize the effects of STRAP on stabilizing β-catenin through inhibiting its ubiquitin-dependent degradation.
Correlation between the expression of STRAP and β-catenin in colorectal cancer
Based on the previous observations that the expression of STRAP is upregulated in human colorectal cancers [12,28] and our findings that STRAP stabilizes β-catenin, we predicted that the expression levels of STRAP and β-catenin would be functionally correlated in colorectal cancers. To test this hypothesis, we immunostained for STRAP and β-catenin in serial sections of colon tissue microarrays (TMA) containing 130 CRC patient specimens. Consistent with previous reports, the expression of STRAP [12,28] and β-catenin [29] was upregulated in 76.9% and 71.2% of cases, respectively (Supplementary Table S2). We also noted that 50.8% of cases had nuclear accumulation of β-catenin, which is known to be associated with CRC prognosis [30]. Interestingly, we also observed higher nuclear accumulation of β-catenin in the STRAP high-expression group than in the low-expression group, whereas STRAP was mostly localized in the cytoplasm (Figure 7A and 7B). A statistically significant positive correlation was observed between the expression of STRAP and β-catenin in these specimens, as shown in Figure 7D (R = 0.696, p < .0001, n = 128). Furthermore, we found that the expression of both STRAP and β-catenin was much higher in AJCC stage I than in other stages (Figure 7C), indicating that both STRAP and β-catenin may function in the early stage of CRC tumorigenesis. Together, these observations indicate a highly significant correlation between the expression of STRAP and the expression and nuclear localization of β-catenin in CRC, further validating our in vitro finding that STRAP stabilizes β-catenin by reducing its ubiquitin-dependent degradation.
DISCUSSION
Our previous studies have shown that STRAP is upregulated in CRC and lung cancer and can provide a growth advantage to tumor cells via TGF-β-dependent and -independent mechanisms [12]. In this study, we have explored its new invasive and metastatic functions in CRC through a novel mechanism: STRAP activates Wnt/β-catenin signaling and regulates downstream target genes by stabilizing the β-catenin protein. The higher expression of these two proteins in the early stage of CRC progression and the increased nuclear accumulation of β-catenin in tumors with higher STRAP expression suggest their cooperative role in the development and progression of CRC.
In this study, we noted that expression of STRAP significantly reduced binding of β-catenin to the destruction complex, probably through steric hindrance (Figure 4C), and inhibited its subsequent ubiquitylation (Figure 4A and 4B). Both Wnt stimulation and inhibition of GSK-3β activated Wnt/β-catenin signaling in STRAP knockdown cells through β-catenin stabilization, suggesting that the effect of STRAP was superseded by these agents. These findings provide further evidence that STRAP stabilizes β-catenin through GSK-3β signaling and that the effect of Wnt signaling in activating β-catenin is stronger than that of STRAP. Interestingly, even when β-catenin expression was restored by Wnt3a or GSK-3β inhibitors, treated cells with STRAP knockdown still showed less tumorigenic properties than control cells (Figure 3D and 3E). This indicates that STRAP can promote tumor cell growth and invasion not only through the Wnt/β-catenin pathway but possibly also through regulating other signaling pathways, such as TGF-β signaling and the MAPK pathway [31,32]. From this study it is difficult to distinguish the effects of STRAP on tumor growth versus metastasis in vivo. However, our in vitro studies indicate that downregulation of STRAP in CRC cells inhibits cell migration and invasion, as shown in Figure 1 and Supplementary Figure S1. STRAP knockdown stable clones showed lower cell invasion (Figure 3E) than vector control and parental cells under the same treatments with GSK-3β inhibitors and Wnt3a, suggesting that downregulation of STRAP in CRC cell lines inhibits cell migration and invasion, at least in part, directly through inhibiting Wnt/β-catenin signaling.
Figure 6: (A) The mutational status of APC and β-catenin in SW480, HCT116 and RKO is presented. (B) Lysates from SW480, HCT116 and RKO polyclones after STRAP knockdown were subjected to western blotting for STRAP and β-catenin. β-actin was used as loading control. Normalized expression of β-catenin from three independent experiments is presented as mean ± S.D. **P < .01, ***P < .001. (C) The transcriptional activity of Wnt/β-catenin signaling was measured in SW480, HCT116 and RKO polyclones after STRAP knockdown using the TOP/FOP Flash reporter as described above. *P < .05, **P < .01. (D) The expression of wild-type (wt) APC in MC38 and CT26 was detected by western blotting. Equal amounts of lysates from MC38, CT26 and HCT116 were subjected to western blotting. HCT116 was used as positive control for the wt size of APC.
In this study, we also found that APC truncation at amino acid 1338, which removes all the Axin-binding domains but retains one 20-amino-acid repeat for β-catenin binding, partially compromised the ability of STRAP to stabilize β-catenin, whereas an activating mutation of a phosphorylation site in the N-terminus of β-catenin, which itself stabilizes the protein, completely supplanted the effect of STRAP (Figure 6B and 6C). In contrast, STRAP stabilized β-catenin in colon cancer cell lines with wild-type APC and β-catenin. It is possible that the effect of the mutations in APC and β-catenin on stabilizing β-catenin is stronger than that of STRAP.
We observed in 130 colorectal cancer specimens that STRAP is upregulated in about 70% of cases (Figure 7C and Supplementary Table S2). Interestingly, we found that the expression of both STRAP and β-catenin was much higher in AJCC stage I than in other stages. This, coupled with the finding that STRAP is also upregulated in 50.8% of colorectal adenomas [28], indicates that both STRAP and β-catenin function in the early stage of CRC. Moreover, there is a highly significant correlation between the expression of STRAP and β-catenin, including its nuclear accumulation, in these tumor samples (Figure 7B, 62.5% vs 41.3%). These observations indicate that STRAP can interact with other proteins and modulate their function to regulate CRC development and progression.
In summary, these studies provide novel mechanistic insights into the functions of STRAP in colorectal cancer invasion and metastasis. STRAP decreases the phosphorylation and increases the stability of β-catenin through its interaction with GSK-3β. Thus, STRAP promotes CRC initiation and progression by activating Wnt/β-catenin signaling, inhibiting the ubiquitin-dependent degradation of β-catenin and promoting its nuclear localization (Figure 7D). This study provides a rationale for targeting STRAP for therapeutic intervention in colorectal cancers.
MATERIALS AND METHODS
Cell culture
Mouse colon adenocarcinoma cell lines MC38 and CT26; human colon adenocarcinoma cell lines SW480, SW620, HCT116 and RKO; non-small cell lung cancer (NSCLC) cell lines A549 and H460; and HEK-293T cells were maintained in 7% serum-containing medium supplemented with penicillin and streptomycin.
Reagents and antibodies
The proteasomal inhibitor MG132 and the GSK-3β inhibitor SB415286 were purchased from Selleckchem and TOCRIS Bioscience (Bristol, UK), respectively. Lithium chloride was obtained from Calbiochem (La Jolla, CA). Cycloheximide was purchased from Sigma.
Stable STRAP knockdown cell lines
The indicated cell lines were infected with STRAP shRNA lentivirus and selected with puromycin. STRAP knockdown polyclonal populations of SW480, HCT116, and RKO were generated using a similar protocol. Knockdown of STRAP was verified by western blotting.
Western blot and immunoprecipitation analyses
Western blot and immunoprecipitation analyses were performed as previously described [11]. For subcellular localization of endogenous β-catenin, nuclear and cytoplasmic protein extracts were prepared as previously described [40]. Lysates were analyzed by western blotting as indicated in the figure legends. To investigate the effect of STRAP on β-catenin binding to the degradation complex, 293T cells were transfected with expression constructs. After 48 h, cells were harvested for immunoprecipitation as described previously [11], and the immunoprecipitates were analyzed by western blotting.
Cell counting assays
MC38 and CT26 cells were plated in 12-well plates. After 48 h, cells were counted every day for 5 days and the average cell numbers from triplicate wells were plotted.
Soft agarose assays and xenograft studies
MC38 and CT26 cells were plated for soft agarose assays as described previously [40]. For xenograft studies, 1 × 10^5 cells from STRAP knockdown stable clones and vector control cells derived from the MC38 and CT26 cell lines were subcutaneously injected into C57BL/6 and Balb/c mice, respectively. All animal experiments were performed in accordance with IACUC and state and federal guidelines for the humane treatment and care of laboratory animals. The animals were monitored for tumor formation every 3 days for a total of 3-5 weeks and the tumors were measured as described previously [41].
Migration and invasion assays
Migration and invasion assays were performed as previously described [41]. MC38 and CT26 cells were allowed to migrate for 6 hours, and to invade for 12 hours through collagen and for 21 hours through matrigel. After migration or invasion, cells were fixed, stained, and counted from 6 random fields and averaged.
Real-time qRT-PCR
Total RNA was extracted from MC38, CT26, H460 and A549 cells using Trizol reagent, and reverse transcription was performed using iScript Reverse Transcription Supermix (Bio-Rad, Hercules, CA). Real-time PCR was carried out using 2.5 µl cDNA with FastStart SYBR Green Master (Roche, Nutley, NJ) following the manufacturer's instructions. The primer sequences for STRAP, β-catenin, β-TrCP, MMP2, MMP7, MMP9 and GAPDH are shown in Supplementary Table S1.
TOP/FOP flash reporter assay
Cells were seeded in 12-well plates. After overnight incubation, TOP/FOP Flash reporter plasmid (0.5 µg/well) was transfected into cells with Lipofectamine 2000 following the manufacturer's protocol. β-galactosidase (25 ng/well) was used as a control for transfection efficiency. After approximately 44 hours, cells were harvested and luciferase assays were performed using a Monolight 3010 (BD Pharmingen, San Diego, CA) according to the manufacturer's protocol. Each transfection was performed in triplicate. Luciferase readings were normalized to β-gal values and triplicates were averaged.
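As an illustration of the normalization just described, the following is a minimal sketch with hypothetical readings, not the authors' data.

```python
# A minimal sketch (hypothetical readings) of the reporter normalization
# described above: each luciferase reading is divided by its beta-gal
# reading to correct for transfection efficiency, triplicates are averaged,
# and Wnt/beta-catenin activity is summarized as the TOP/FOP ratio.
def reporter_activity(luciferase, beta_gal):
    normalized = [l / b for l, b in zip(luciferase, beta_gal)]
    return sum(normalized) / len(normalized)

top = reporter_activity([5200.0, 4900.0, 5400.0], [0.92, 0.88, 0.95])
fop = reporter_activity([310.0, 295.0, 330.0], [0.90, 0.93, 0.89])
print("TOP/FOP ratio:", round(top / fop, 2))
```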
Gelatin zymography
For gelatin zymography, cells were seeded in 6-well plates. After overnight incubation, cells were cultured in 900 µl serum-free medium for another 16 hours and the media were collected. After normalization to the total protein of the cell lysate, the media were subjected to zymography with non-reducing SDS-PAGE using 10% gels containing 0.1% gelatin, according to a protocol described elsewhere [42].
In vivo ubiquitination assay
HEK-293T cells were transiently transfected with combinations of the expression constructs His6-ubiquitin, β-catenin-HA and STRAP-Flag using Lipofectamine 2000 (Invitrogen, Carlsbad, CA). Forty hours after transfection, the cells were treated with 25 µM MG132 for 6 hours. The cells were then lysed under highly denaturing conditions in 6 M guanidine hydrochloride buffer, and ubiquitylated proteins were pulled down from the lysates with Ni-NTA agarose resin following the manufacturer's protocol. Bound proteins were eluted with 2X Laemmli buffer containing 250 mM imidazole and analyzed by western blotting.
Splenic injection and orthotopic cecum injection models
Splenic injection and cecum injection models were described in our previous reports [41,43]. Briefly, for splenic injection, 1 × 10^5 cells in 100 µl PBS were injected into the spleens; 5 minutes after injection, the spleens were removed. C57BL/6 mice injected with MC38 cells were sacrificed after 3 weeks, whereas Balb/c mice injected with CT26 cells were sacrificed after 4 weeks, for analysis of tumor formation in the liver. For cecum injection, 1 × 10^5 LM10 cells, derived from MC38 cells, were suspended in 50 µl PBS and injected subserosally into the ceca using a 30-G needle under a stereomicroscope. Mice were monitored and sacrificed 48 days after injection, when some mice became moribund. The tumors in the ceca, livers, and lymph nodes were examined for primary and metastatic tumor growth.
Immunohistochemical analyses
Paraffin-embedded mouse xenograft tissues and human colon cancer tissue microarray slides were subjected to immunostaining. The slides were stained as we described previously [41]. Anti-STRAP antibody at 1:50 and anti-β-catenin antibody at 1:300 dilution were used for staining. The immunohistochemical evaluation of human colon cancer tissues for the expression of STRAP and β-catenin followed previously described methods, including the determination of staining intensity and the percentage of cells stained [3,44,45]. The proportion score represents the estimated fraction of positive cells (0 = 0%, 1 = 1%-24%, 2 = 25%-49%, 3 = 50%-74%, and 4 = 75%-100%), while the intensity score represents their average staining intensity (0 = no staining, 1 = weak staining, 2 = moderate staining, 3 = strong staining). The final staining score was determined by multiplying the intensity score by the proportion score, yielding scores between 0 and 12. "Up-regulation" means that the score for the cancer tissue is higher than the score for the matched normal tissue; "no change" means the scores are equal; "down-regulation" means the cancer-tissue score is lower. When evaluating the expression of STRAP, we defined a score of 0-4 as low and 6-12 as high. Regarding the localization of β-catenin, we divided the staining pattern into four groups according to the Human Protein Atlas's β-catenin expression in CRC (http://www.proteinatlas.org/ENSG00000168036-CTNNB1): cytoplasmic/membranous/nuclear, cytoplasmic/membranous, nuclear, and none.
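The scoring rubric above is simple enough to express directly; the following is a minimal sketch (function names are illustrative, not from the paper).

```python
# A minimal sketch of the immunohistochemical scoring rubric described above:
# final score = proportion score (0-4) x intensity score (0-3), giving 0-12;
# STRAP scores of 0-4 are called "low" and 6-12 "high" (5 cannot occur as a
# product of the two sub-scores).
def proportion_score(percent_positive):
    if percent_positive == 0:
        return 0
    if percent_positive < 25:
        return 1
    if percent_positive < 50:
        return 2
    if percent_positive < 75:
        return 3
    return 4

def staining_score(percent_positive, intensity):
    """intensity: 0 none, 1 weak, 2 moderate, 3 strong."""
    return proportion_score(percent_positive) * intensity

def strap_level(score):
    return "high" if score >= 6 else "low"

score = staining_score(60, 3)          # 60% positive cells, strong staining
print(score, strap_level(score))       # -> 9 high
```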
Statistical analysis
The data are presented as mean ± S.D. Statistical analyses were performed by Student's t-test or analysis of variance (ANOVA) with Bonferroni post hoc test. Spearman rank correlation analysis was performed to assess the correlation between STRAP and β-catenin expression in human CRC samples. The chi-square test was used to analyze β-catenin nuclear accumulation across STRAP expression groups in human CRC samples. The Kruskal-Wallis test was used to analyze associations between the expression of STRAP or β-catenin and clinical variables, including AJCC stage, pathology grade and age. Statistical analysis was performed with SPSS software for Windows (version 16.0; SPSS, Chicago, IL). A two-sided p value of less than .05 was considered statistically significant.
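For readers without SPSS, the named tests map onto standard SciPy routines; the sketch below uses placeholder arrays, not patient data.

```python
# A minimal sketch (placeholder arrays, not patient data) of the tests named
# above, using SciPy counterparts of the SPSS procedures: Spearman rank
# correlation for STRAP vs. beta-catenin staining scores, chi-square for
# nuclear accumulation across STRAP groups, and Kruskal-Wallis across stages.
from scipy import stats

strap_scores = [9, 6, 2, 12, 4, 8, 3, 6]
catenin_scores = [8, 6, 3, 12, 2, 9, 4, 4]
rho, p_rho = stats.spearmanr(strap_scores, catenin_scores)

# 2x2 table: rows = STRAP high/low, columns = nuclear beta-catenin yes/no
chi2, p_chi, dof, expected = stats.chi2_contingency([[40, 24], [26, 38]])

# staining scores grouped by AJCC stage (hypothetical groups)
h_stat, p_kw = stats.kruskal([9, 8, 12, 10], [6, 4, 6, 5], [3, 2, 4, 3])

print(f"Spearman rho={rho:.3f} (p={p_rho:.3g}); "
      f"chi2 p={p_chi:.3g}; Kruskal-Wallis p={p_kw:.3g}")
```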
GRANT SUPPORT
This study was supported by National Cancer Institute R01 CA95195, Veterans Affairs Merit Review Award, and a Faculty Development Award from UAB Comprehensive Cancer Center, P30 CA013148 (to PK Datta).
"Medicine",
"Biology"
] |
The Derivation of the Electrical Conductance/Temperature Dependency for Tin Dioxide Gas Sensor
The dependency of the electrical conductivity of a tin dioxide layer on its temperature at a constant concentration of the detected gas is derived here. The derived equation is then modified into a function of two variables describing the dependence of the electrical conductivity of the sensor on both the temperature and the concentration. The derived formulas were verified by approximation of the measured data and can be useful in practical applications.
Introduction
Tin dioxide sensors for gas detection in air have been used for years. The detection process is based on changes of the electrical conductivity of a tin dioxide layer heated to a sufficiently high temperature.
The term sensor response is used here to refer to the electrical conductance of the tin dioxide layer. The metal oxide detection layer is a polycrystalline structure. This is why the physical-chemical phenomena appearing on the surface of the layer are complex, and it is very difficult to build a mathematical description of the behaviour of the sensor during the detection of a substance.
There are several methods for describing the sensor behaviour. One group of methods is based on the band theory, from which the equations describing the electrical conductance of the sensor are derived in [1], [2], [3]. The advantage of this description is that it is general and universal; the disadvantage is its low objectivity and weak coherence with the physical-chemical phenomena occurring during detection. The second possibility is a purely mathematical approximation of the measured data, as shown for example in [4], [5], [6]. The advantage of this approach is that it is not necessary to deal with the complex physical-chemical phenomena or the band theory, but the obtained equations have no physical relation to the object being described. The third group deals with the physical-chemical phenomena; examples of such an approach can be found in [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17]. The advantages of this description are its objectivity and its relation to the physical-chemical phenomena, so the sensor behaviour may be better understood from the macroscopic point of view. On the other hand, the complexity of the obtained formulas and their accordance with the experimental values depend on the choice of the phenomena used to describe the behaviour of the sensor at the given conditions. This article describes the sensor response in terms of physical-chemical phenomena under simplified assumptions. The aim of this work is to conceive a simple model of the sensor behaviour that fits the sensor response and that can be useful for sensor applications. This model extends the results described in [18].
Experiments
Commercial tin dioxide sensors of types TGS 813, TGS 822, SP 11 and MQR 1003 were chosen for the experiments. The sensors TGS 813, SP 11 and MQR 1003 are meant for the detection of methane; TGS 822 is meant for the detection of ethanol, acetone and organic solvents. Different technologies were used in the production of the sensors, so they have different levels of selectivity and sensitivity. The vapours of acetone, benzene, ethanol and hexane were used in the experiments, always only one compound in air. Each substance was tested by all sensors at the same time to ensure an equal concentration of the tested vapours and an equal value of the heating voltage for all sensors. The saturated vapour above the liquid was obtained at a constant ambient temperature of 25 °C. The desired concentration was prepared by injecting a certain volume of saturated vapour, calculated according to the Antoine equation (as illustrated in the sketch below), into the test chamber filled with air of a known volume of 2700 ml. The heating voltage was set to U = 5 V and the sensors were kept inside the testing chamber in a flux of clean air before every measurement so as to remove adsorbed undesired substances from the surface of SnO2 [19], [20]. The laboratory air used was purged of dust and its relative humidity was decreased to RH = 25%. The recovery of the sensors was complete when the sensor response became practically independent of time; the related value of the sensor response never exceeded 10 µS. Afterwards, the heating voltage was decreased to U = 2 V by a step change to start the sensing process, while keeping the temperature of the layer high enough to avoid adsorption of water on the surface of SnO2. According to previous experience, the value of U = 2 V corresponds to a surface temperature of approximately 500 K. After that, the tested vapour was injected into the chamber and the measurement was carried out. The heating voltage was increased step by step by 20 mV up to 5 V. The electrical conductance was sampled 10 s after each change of the heating voltage; the time of 10 s was established experimentally as the moment at which the value of the conductivity is practically independent of time. The electrical circuit for sensing the conductivity of the sensor followed the data sheet recommendations of the sensor producers: a resistor of R = 2 kΩ in series with the sensor and a supply voltage of U = 5 V were used. The temperature of the detection layer corresponding to the heating voltage range of 2 V up to 5 V was estimated from the temperature dependency of the electrical resistance of the NiCr alloy [21], which is widely used for the heating system of the given sensors. Furthermore, this estimated temperature was compared with the data from [22], where the temperature range for the detection of ethanol is described. Considering this, the temperature range related to the heating voltage U = 2 V up to 5 V corresponded to T = 500 K up to T = 700 K. The measurements were carried out at different values of concentration of the tested substances in air. The term temperature characteristic of the sensor is used here to refer to the dependency of the electrical conductivity of the sensor on the temperature at constant concentration.
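The following is a minimal sketch of the concentration preparation just described. The Antoine coefficients used are rough, commonly tabulated values for acetone (mmHg, degrees Celsius) and should be checked against a handbook; the function names are illustrative, not from the paper.

```python
# A minimal sketch of preparing a target concentration as described above:
# the saturated vapour pressure at 25 C (Antoine equation) gives the analyte
# mole fraction above the liquid, and the injected vapour volume is scaled
# to reach the desired ppm in the 2700 ml chamber.
def antoine_pressure_mmHg(A, B, C, T_celsius):
    """Antoine equation: log10(p_sat) = A - B / (C + T)."""
    return 10.0 ** (A - B / (C + T_celsius))

def injection_volume_ml(target_ppm, A, B, C, chamber_ml=2700.0,
                        T_celsius=25.0, p_total_mmHg=760.0):
    p_sat = antoine_pressure_mmHg(A, B, C, T_celsius)
    sat_fraction = p_sat / p_total_mmHg   # analyte fraction in saturated vapour
    return target_ppm * 1e-6 * chamber_ml / sat_fraction

# e.g., roughly 5 ml of saturated acetone vapour for x = 600 ppm
# (approximate acetone coefficients; verify against tabulated data):
print(injection_volume_ml(600.0, 7.117, 1210.6, 229.7))
```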
Theory
Chemisorption of gases on the surface of a tin dioxide detection layer can be considered monomolecular adsorption, so the process of chemisorption can be described by the Langmuir isotherm. On this basis, the equation for the electrical conductivity of the sensor, Eq. (1), was derived in [18].
Here x is the relative concentration of the detected gas in ppm, G_0 is the basic electrical conductivity of the sensor in clean air (that is, at x = 0), b is the adsorption coefficient in Pa^-1, p_T is the total pressure of the mixture in Pa, e is the electron electrical charge in C, N_A is the Avogadro constant in mol^-1, k_1 is the reaction rate constant in s^-1, m_1 is the mass of the detection layer SnO2 in kg, a_max is the adsorption capacity of SnO2 in mol·kg^-1, and U is the measuring voltage in V, which is always kept constant during the experiments. It is assumed in Eq. (1) that the temperature of the detection layer is constant while the concentration x is a variable. In this article this equation is referred to as the transfer characteristic of the sensor. Examples of the reaction mechanisms of some substances on the surface during detection are described in [23], [24], [25], [26], [27], [28]. Equation (1) is derived under the assumption that a system of subsequent reactions on the surface can be commonly expressed by one symbolic equation, Eq. (3), where R represents the reducing gas being detected, O the adsorbed oxygen on the layer of SnO2, D the final products formed during the detection, V_0 the oxygen vacancies (the released electrons changing the electrical conductivity of the sensor), and r a rational number.
The derivation described herein is based on the hypothesis that the system of subsequent reactions during detection can be represented by one symbolic reaction and that the reactions during detection in the mentioned range of temperatures release free electrons, which increase the electrical conductivity of the tin dioxide. The next hypothesis is that the adsorption of the detected substance and the desorption of the reaction product shape both the transfer characteristic and the temperature characteristic. All hypotheses and assumptions are supported by the experimental results presented herein.
Since there is a sufficient amount of the detected substance in the air surrounding the sensor, the electrical conductivity of SnO2 reaches the value corresponding to the dynamic equilibrium at the given conditions. This value is practically independent of time.
It is necessary to modify Eq. (1) for the derivation of the temperature characteristic of the sensor: the concentration x will be constant and the temperature T of the SnO2 detection layer will be a variable. This means that the coefficients G_0, Z and K become dependent on the temperature T. In general, the temperature dependency of the conductivity of a detection layer exists in clean air (x = 0) as well [1]. The temperature dependencies of the conductivities of the tested sensors were found experimentally and do not exceed 10 µS at x = 0 in the range of T = 500 K up to 700 K. That value can be neglected compared with the usual values of the sensor response (100 µS up to 1000 µS) for the used ranges of concentration and temperature, so the term G_0 in formula Eq. (1) can be neglected.
The temperature dependency of the coefficients Z and K in Eq. (1) can be derived using the kinetic theory of gases. The relative coverage of a surface is used for gas adsorption on a solid surface and is defined by Eq. (4), where N is the number of species adsorbed on the surface, N_t is the total number of adsorption sites on the surface, a is the adsorption coefficient in mol·kg^-1, and a_max is the adsorption capacity of the surface in mol·kg^-1. The adsorption of the substance on the surface can be described by Eq. (5), where S is the sticking coefficient, F is the number of molecules striking a unit area of the surface per unit time, and σ_0 is the area of one adsorption site. The Hertz-Knudsen equation, Eq. (6), is valid for the ideal gas, where p is the gas pressure, m_1 is the mass of one gas molecule, k is the Boltzmann constant, and T is the thermodynamic temperature. The sticking coefficient represents the probability that a striking molecule will remain adsorbed on the surface and not return to the gas phase; it depends on the temperature and on the coverage of the surface. In the case that one molecule occupies only one adsorption site, the formula is Eq. (7), where S_0 is the probability of occupation of one site on a clean, still uncovered surface, that is, at Θ = 0; S_0 usually equals 1. E_A is the activation energy of adsorption of the gas in J·mol^-1, R is the universal gas constant, and T is the thermodynamic temperature. For desorption of the substance from a surface of coverage Θ, the Wigner-Polanyi equation is used, which for a reaction of the first order has the form of Eq. (8), where E_d is the activation energy of desorption of the adsorbed substance in J·mol^-1, R is the universal gas constant, T is the thermodynamic temperature, k_2 is the rate constant of the reaction, and ν_0 is the frequency factor in the Arrhenius equation. For the frequency factor, the approximative formula according to the Frenkel equation [29], Eq. (9), is valid, where k is the Boltzmann constant, h is the Planck constant, and ν_0 represents the frequency of vibration of one molecule chemisorbed on the surface. If the energy delivered to an amount of substance of 1 mol equals or exceeds E_d, the bond between the surface and the molecule is broken and the molecule desorbs to the gas phase. In [30], τ_0 denotes the time for which the molecule remains on the surface before leaving it.
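The display equations referenced above are not reproduced in this text; as a sketch, the standard textbook forms of the named relations are collected below. The numbering follows the in-text references, and the forms are reconstructions from general kinetic theory, not quotations from the paper.

```latex
% Standard forms of the referenced relations (reconstructed, not quoted):
\begin{align}
\Theta &= \frac{N}{N_t} = \frac{a}{a_{\max}}
  && \text{relative coverage, Eq.~(4)}\\
\left(\frac{d\Theta}{dt}\right)_{\!ads} &= S\,F\,\sigma_0
  && \text{adsorption rate, Eq.~(5)}\\
F &= \frac{p}{\sqrt{2\pi m_1 k T}}
  && \text{Hertz--Knudsen, Eq.~(6)}\\
S &= S_0\,(1-\Theta)\exp\!\left(-\frac{E_A}{RT}\right)
  && \text{sticking coefficient, Eq.~(7)}\\
\left(\frac{d\Theta}{dt}\right)_{\!des} &= -\,\nu_0\,\Theta\,
  \exp\!\left(-\frac{E_d}{RT}\right)
  && \text{Wigner--Polanyi, Eq.~(8)}\\
\nu_0 &\approx \frac{kT}{h}
  && \text{Frenkel, Eq.~(9)}
\end{align}
% Equating the adsorption and desorption rates at equilibrium yields the
% Langmuir isotherm \Theta = b p / (1 + b p), the form referred to as Eq. (14).
```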
For the equilibrium state between adsorption and desorption, according to Eq. (5) and Eq. (8), Eq. (10) holds. By substitution from Eq. (6), Eq. (7) and Eq. (9) into Eq. (10), Eq. (11) is obtained, which can be modified into the form of Eq. (12). From Eq. (12), the coverage of the surface Θ can be calculated, giving Eq. (13) and Eq. (14); formula Eq. (14) is the Langmuir isotherm. The coefficient b can be calculated by substitution of K_A and K_B from Eq. (13), and we get Eq. (15); by using the formulas in Eq. (16), formula Eq. (15) can be modified to the form mentioned in [30] or in [31], that is, to Eq. (17), where ΔE is the heat necessary for desorption of 1 mol of the substance (it practically equals the negative value of the adsorption heat), M is the molar mass of the adsorbed substance, N_A is the Avogadro constant, σ_0 is the area of one adsorption site, τ_0 is the time during which the adsorbed molecule stays on the surface, R is the universal gas constant, and T is the thermodynamic temperature. At this point the temperature dependency of the coefficients Z and K in Eq. (2) can be derived. The Arrhenius equation, Eq. (18), is valid for the temperature dependency of the rate constant, k_1 = A exp(-E_1/RT), where E_1 in J·mol^-1 is the activation energy of the chemical reaction in Eq. (3). Substituting Eq. (18) into formula Eq. (2) yields Eq. (19) for Z. After substituting b from Eq. (17) and τ_0 from Eq. (9) into Eq. (2), we obtain Eq. (20) for K. Finally, after substitution of Eq. (19) and Eq. (20) into Eq. (1), the equation for the temperature dependency of the detection layer takes the form of Eq. (21), where expressions for the coefficients C_0, C_1, C_2 and C_3 follow. Thus, from formulas Eq. (19) and Eq. (22), the expressions relating the coefficients Z and C_0 emerge, and from formulas Eq. (20) and Eq. (23), the expressions relating the coefficients K and C_2 are valid.
Results
Equation (21) was used for approximation of the measured values of the electrical conductance of a tin dioxide layer at the concentration level x = 600 ppm and at temperatures from 500 K up to 700 K. The computer programme XYMATH was used for the approximation by means of the least-squares method, and the total-error option was chosen as well. The correlation coefficient varied around r = 0.98 and the standard deviation was approximately 8 percent, which is acceptable. An example of the approximation of the measured data for hexane and the used sensors is shown in Fig. 1; an example for different substances and the sensor TGS 813 is shown in Fig. 2.
The approximations of the measured data for different concentration values of acetone and the sensor TGS 813 are shown in Fig. 3. These examples demonstrate the good ability of Eq. (21) to approximate the measured data. The numerical values of the approximation coefficients of the temperature characteristics of the tested substances are given in Tab. 1.
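As an illustration of the least-squares approximation described above, the following sketch fits a temperature characteristic with SciPy. Since the exact algebraic form of Eq. (21) is not reproduced in this text, a generic bell-shaped form consistent with the described coefficients is assumed (C1 < 0 for the Arrhenius term, C3 > 0 for exothermic adsorption); the data points are synthetic placeholders, not the measured values.

```python
# A hedged sketch of least-squares fitting of a temperature characteristic.
# The assumed model form is an illustration consistent with the text, not
# the paper's exact Eq. (21); synthetic data replace the measured values.
import numpy as np
from scipy.optimize import curve_fit

x = 600.0  # ppm, fixed concentration

def G_model(T, C0, C1, C2, C3):
    """Arrhenius-type rise multiplied by a Langmuir-type coverage term."""
    K = C2 * x * np.exp(C3 / T)          # adsorption term falls with T (C3 > 0)
    return C0 * np.exp(C1 / T) * K / (1.0 + K)

true = (2.0e5, -3500.0, 1.0e-9, 8000.0)  # placeholder "true" coefficients
T_data = np.linspace(500.0, 700.0, 11)
rng = np.random.default_rng(0)
G_data = G_model(T_data, *true) * (1.0 + 0.05 * rng.standard_normal(T_data.size))

popt, _ = curve_fit(G_model, T_data, G_data,
                    p0=(1.0e5, -3000.0, 2.0e-9, 7500.0), maxfev=20000)
r = np.corrcoef(G_data, G_model(T_data, *popt))[0, 1]
print("fitted C0..C3:", popt, "correlation r =", round(r, 3))
```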
It is apparent from Tab. 1 that the coefficients C_0 up to C_3 differ from each other: C_0 is the highest and C_2 is the smallest. The values of C_1 and C_3 are ordinarily comparable and of opposite sign. C_1 is related to the activation energy of reaction Eq. (3) between the detected gas and oxygen. This value is always negative because, according to the Arrhenius equation, the reaction rate increases as the temperature increases. The value of C_3 is related to the adsorption heat of the substance adsorbed on the surface of SnO2; the value of the adsorption heat is negative because chemisorption is an exothermic process.
Tab. 1: The coefficients of the approximations of the temperature dependency of the electrical conductance of the sensors for x = 600 ppm.
On the basis of the derived formula Eq. (21) and its verification by the experimental results, it is possible to suggest a simplified behaviour of the sensor during detection. If the temperature of the layer increases, the rate constant k_1 of reaction Eq. (3) increases. That is why the number of released electrons increases, which leads to an increase of the conductivity of the SnO2 layer. When the maximum value of conductivity is reached, the sensor response starts to decrease with further rise of the temperature. Beyond the maximum value of the sensor response, the desorption influence begins to prevail, since the rate constant k_2 in Eq. (8) increases with rising temperature. The molecules of the adsorbed substance oscillate more intensively and it is more difficult for them to remain on the surface. Their number decreases with rising temperature, and due to this the amount of substance R in Eq. (3) decreases. That is why the number of electrons released by the chemical reaction Eq. (3) decreases as well, and the conductivity of the layer decreases. These phenomena form the bell shape of the temperature characteristic, which is typical for most of the detected substances. Some substances show only an increasing dependency over the tested temperature range, as apparent for hexane in Fig. 1. The hypothesis explaining this is that the maximum of the detection curve of these substances is shifted towards higher temperatures lying outside the tested range. The maximum value of the response theoretically exists according to Eq. (21), but the observation of this maximum is limited by the maximum permissible value of the heating voltage, so as not to damage the sensor.
Equation (21) manifests the typical features of the sensor. One equation of two variables can then be formed from formulas Eq. (1) and Eq. (21), where x_0 is the reference value of the concentration. The dependency of the electrical conductivity of the sensor TGS 813 for ethanol is shown in Fig. 4. The coefficients from Tab. 1 and the reference value x_0 = 600 ppm were used here.
Conclusion
The equation describing the temperature dependency of the electrical conductance of a tin dioxide layer for the detection of a reducing gas is derived here. The derivation is based on physical-chemical phenomena. Future research can be focused on the theoretical justification of the simplified assumptions used herein. The derived equation was successfully verified by comparison with the experimental data. The results obtained can be useful in practical applications.
Fig. 1: The temperature dependencies of the electrical conductivity G of the tested sensors for the concentration x = 600 ppm of hexane. The curves are approximations; the indicated points are the measured values.
Fig. 2: The temperature dependencies of the electrical conductivity G of the sensor TGS 813 for the concentration x = 600 ppm of different substances. The curves are approximations; the indicated points are the measured values.
Fig. 3: The temperature dependencies of the electrical conductivity G of the sensor TGS 813 for different concentrations of acetone in air. The curves are approximations; the indicated points are the measured values.
Fig. 4: The dependency of the electrical conductance G of the sensor TGS 813 on temperature T and gas concentration x for acetone.
"Engineering",
"Physics"
] |
Are Missing Links Predictable? An Inferential Benchmark for Knowledge Graph Completion
We present InferWiki, a Knowledge Graph Completion (KGC) dataset that improves upon existing benchmarks in inferential ability, assumptions, and patterns. First, each testing sample is predictable with supportive data in the training set. To ensure this, we propose to utilize rule-guided train/test generation instead of the conventional random split. Second, InferWiki initiates evaluation following the open-world assumption and improves the inferential difficulty of the closed-world assumption by providing manually annotated negative and unknown triples. Third, we include various inference patterns (e.g., reasoning path length and types) for comprehensive evaluation. In experiments, we curate two settings of InferWiki varying in size and structure, and apply the construction process to CoDEx as comparative datasets. The results and empirical analyses demonstrate the necessity and high quality of InferWiki. Nevertheless, the performance gap among various inferential assumptions and patterns shows the difficulty of the task and suggests future research directions. Our datasets can be found at https://github.com/TaoMiner/inferwiki.
Introduction
Knowledge Graph Completion (KGC) aims to predict missing links in a KG by inferring new knowledge from existing knowledge. Attributed to this reasoning ability, KGC models are crucial in alleviating the KG incompleteness issue and benefit many downstream applications, such as recommendation (Cao et al., 2019b) and information extraction (Hu et al., 2021; Cao et al., 2020a). However, KGC performance on existing benchmarks is still unsatisfactory: 0.51 Hit Ratio@1 and 187 Mean Rank for the top-ranked model (Wang et al., 2019) on the widely used FB15k-237. Is progress on models slow (Akrami et al., 2020)? Or should we blame the low quality of the benchmarks?
In this paper, we re-think the task of KGC and construct a new benchmark dubbed InferWiki that highlights three fundamental objectives. Test triples should be inferential: this is the essential requirement of KGC. Each test triple should have supportive samples in the train set. However, we observe two major issues in current KGC datasets, unpredictable and meaningless test triples, which may hinder evaluating and advancing the state of the art. As shown in Table 1, the first example of inferring the location for David (i.e., Florida) is impossible even for humans, not to mention machines, merely based on his birthplace and nationality (i.e., Atlanta and USA). In contrast, the second one is predictable but meaningless: finding the missing month from a list of months within a year. The above cases are very common in existing datasets, e.g., YAGO3-10 (Dettmers et al., 2018) and CoDEx (Safavi and Koutra, 2020), mainly due to their construction process: first collecting a high-frequency subset of entities and then randomly splitting their triples into train/test. In this setting, KGC models may be over- or under-estimated, as we are not even sure whether a human could perform better.
Test triples may be inferred to be positive, negative, or unknown. Following the open-world assumption, what is not observed in a KG is not necessarily false, but unknown (Shi and Weninger, 2018). However, existing benchmarks generate unseen triples as negatives (i.e., the closed-world assumption), because KGs contain only positive triples. They usually randomly corrupt the head or tail entity of a triple, sometimes with type constraints, as sketched in the example below. This leads to trivial evaluation (almost 100% accuracy in triple classification (Safavi and Koutra, 2020)). Besides, the lack of unknown test triples ignores a critical inference capacity and may cause false-negative errors in knowledge-driven tasks (Kotnis and Nastase, 2017). Inference has various patterns, and concentrating on limited patterns in evaluation may introduce severe bias. The domain-specific datasets Kinship and Country focus on only a few relations and are nearly solved (Das et al., 2017). The general-domain WN18RR contains prevalent symmetry relation types, which incorrectly boosts the performance of RotatE (Abboud et al., 2020). Clearly, limited patterns lead to unfair comparisons among KGC models.
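The following is a minimal sketch, with toy entities, of the conventional closed-world negative generation the paragraph above criticizes; the helper names are illustrative, not from any KGC library.

```python
# A minimal sketch (toy entities) of conventional closed-world negative
# generation: corrupt the head or tail of a positive triple with a random
# entity, optionally under a type constraint, and treat any unseen triple
# as "negative".
import random

def corrupt(triple, entities, positives, type_of=None, tries=100):
    """Return one pseudo-negative triple, or None if sampling fails."""
    h, r, t = triple
    for _ in range(tries):
        e = random.choice(entities)
        if random.random() < 0.5:          # corrupt the head
            if type_of and type_of.get(e) != type_of.get(h):
                continue                   # optional type constraint
            candidate = (e, r, t)
        else:                              # corrupt the tail
            if type_of and type_of.get(e) != type_of.get(t):
                continue
            candidate = (h, r, e)
        if candidate not in positives:     # closed-world: unseen => negative
            return candidate
    return None

positives = {("LeBron James", "birthplace", "Akron")}
entities = ["Akron", "Cleveland", "Miami", "LeBron James"]
print(corrupt(("LeBron James", "birthplace", "Akron"), entities, positives))
```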
To this end, we curate an inferential KGC dataset extracted from Wikidata and establish the benchmark with two settings varying in size and structure: InferWiki64k and InferWiki16k. Instead of a random split, we mine rules via AnyBURL (Meilicke et al., 2019) to guide train/test generation. All test triples are thus guaranteed to be inferable from the training data. To avoid rule leakage, we utilize two sets of triples: a large set for high-quality rule extraction and a small set for the train/test split. Moreover, we infer unseen triples and manually annotate them with positive, negative and unknown labels to increase the difficulty of evaluation under both the closed-world and open-world assumptions. For inference patterns, we include and balance triples with different reasoning path lengths, relation types and patterns (e.g., symmetry and composition).
Our contributions can be summarized as follows: • We summarize three principles of KGC: inferential ability, assumptions and patterns, and construct a rule-guided dataset.
• We highlight the importance of negatives and unknowns, and initiate open-world evaluation.
• We conduct extensive experiments to establish the benchmark. The results and deep analyses verify the necessity and challenge of InferWiki, providing insights for future research.
Related Work
We can roughly classify current KGC datasets into two groups: inferential and non-inferential datasets. The first group is usually manually curated to ensure that each testing sample can be inferred from training data through reasoning paths, but such datasets only focus on specific relations, such as Families, Kinship (Kemp et al., 2006), and Country. Their limited scale and inference patterns make them not challenging: HolE (Nickel et al., 2016) achieves 99.7% AUC-PR on the Country dataset. The second group of datasets is automatically derived from public KGs with positive triples randomly split into train/test, leading to a risk of testing samples that are non-inferential from the training data. Popular datasets include FB15k-237, WN18RR, and YAGO3-10. CoDEx (Safavi and Koutra, 2020) questions the scope and difficulty of the above datasets, and thus proposes a comprehensive dataset with manually verified hard negatives.
In fact, inference is an important ability for intelligence, and various fields, ranging from logic to cognitive psychology, study how inference is done in practice. Inference helps people make reliable predictions, which is also an expected ability of AI models. Indeed, once deployed, a model may have to make a prediction when there is no evidence in the training set. But, instead of an unreliable guess, we highlight the ability to know the unknown, a.k.a. the open-world assumption. Therefore, we aim to curate a large-scale inferential benchmark, InferWiki, including various inference patterns and test samples (i.e., positive, negative, and unknown), for better evaluation. We list the statistics in Table 2.
Dataset Design
We describe our dataset construction, which comprises four steps: data preprocessing, rule mining, rule-guided train/test generation, and inferred test labeling. We then give a detailed analysis.
Data Preprocessing
More and more studies utilize Wikidata as a knowledge resource due to its high quality and large quantity. We utilize the September 2019 English dump in our experiments. Data preprocessing aims to define the relation vocabulary and extract two sets of triples from Wikidata: a large one for rule mining, T_r, and a relatively small one for dataset generation, T_d. The reason for using two sets is to avoid rule leakage: some rules that are frequent on the large set may be rare on the small set, and the differing distributions prevent rule mining methods from trivially achieving high performance. Besides, more triples improve the quality of the mined rules, while the relatively small set is sufficient for efficient KGC training and evaluation.
Specifically, we first extract all triples that consist of two entity items and one relation with English labels. We then remove duplicate triples and obtain 40,199,175 triples with 7,734,841 entities and 1,170 different relation types. Considering rule mining efficiency, we reduce the relation vocabulary by (1) manually filtering out meaningless relations, such as movie ID or film rating, (2) removing the relations instanceOf and subclassOf, following existing benchmarks, and (3) selecting the 500 most frequent relation types. We focus on the 800,000 most frequent entities, which results in 8,632,777 triples as the large set for rule mining. To obtain the small set for dataset construction, we further select the 120,000 most frequent entities and 300 relations, which results in 1,283,246 triples. Note that we also infer new triples and label them as positive, negative, or unknown later.
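To make the preprocessing concrete, the following is a minimal sketch (in Python, with illustrative names, not the authors' actual code) of the frequency-based filtering that carves the large and small triple sets out of the dump:

from collections import Counter

def top_frequent(triples, num_entities, num_relations):
    # Count entity and relation frequencies over (head, relation, tail) tuples.
    ent_count, rel_count = Counter(), Counter()
    for h, r, t in triples:
        ent_count.update([h, t])
        rel_count[r] += 1
    keep_e = {e for e, _ in ent_count.most_common(num_entities)}
    keep_r = {r for r, _ in rel_count.most_common(num_relations)}
    # Keep a triple only if both endpoints and the relation survive the cut.
    return [(h, r, t) for h, r, t in triples
            if h in keep_e and t in keep_e and r in keep_r]

# Large set for rule mining:     top_frequent(all_triples, 800_000, 500)
# Small set for the dataset split: top_frequent(large_set, 120_000, 300)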
Rule Mining
Developing advanced rule mining models is not the focus of this paper, and several mature tools are available online, such as AMIE+ (Galárraga et al., 2015) and AnyBURL (Meilicke et al., 2019). We utilize AnyBURL in our experiments due to its efficiency and effectiveness.
Given a set of triples (i.e., the large set T_r), this step aims to automatically learn rules F = {(f_p, λ_p)}, p = 1, ..., P, where f_p denotes a Horn rule, e.g., spouse(x, y) ∧ father(x, z) ⇒ mother(y, z), and λ_p ∈ [0, 1] denotes the confidence of f_p. For each rule f_p, the left side of ⇒ is called the premise and the right side the conclusion; in the Horn rule scheme, the conclusion contains a single atom and the premise is a conjunction of several atoms. Grounding specific entities for x, y, z in f_p yields an inferential relationship between premise and conclusion triples. For example, given spouse(LeBron James, Savannah Brinson) and father(LeBron James, Bronny James), we may infer a new triple mother(Savannah Brinson, Bronny James).
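As a hedged illustration of rule grounding (a minimal sketch, not the AnyBURL implementation), the following grounds a two-atom rule of the form r1(x, y) ∧ r2(x, z) ⇒ r3(y, z) over a triple set:

from collections import defaultdict

def ground_rule(triples, premise_relations, conclusion_relation):
    # Index triples by relation so we can join on the shared variable x.
    by_relation = defaultdict(list)
    for h, r, t in triples:
        by_relation[r].append((h, t))
    r1, r2 = premise_relations
    tails_r2 = defaultdict(set)
    for x, z in by_relation[r2]:
        tails_r2[x].add(z)
    inferred = set()
    for x, y in by_relation[r1]:
        for z in tails_r2[x]:
            inferred.add((y, conclusion_relation, z))
    return inferred

triples = [("LeBron James", "spouse", "Savannah Brinson"),
           ("LeBron James", "father", "Bronny James")]
print(ground_rule(triples, ("spouse", "father"), "mother"))
# {('Savannah Brinson', 'mother', 'Bronny James')}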
Of course, not all of the mined rules are reasonable. To alleviate the negative impact of unreasonable rules, we rely on more data (the large set of triples) and keep only high-confidence rules. In particular, we follow the suggested configuration of AnyBURL and run it for 500 seconds to ensure that all triples are traversed at least once, obtaining 251,317 rules, of which the 168,996 rules with confidence λ_p > 0.1 are selected as the rule set to guide dataset construction.
Rule-guided Dataset Construction
Different from existing benchmarks, InferWiki provides inferential test triples with supportive data in the training set. Moreover, it aims to include as many inference patterns as possible, and these patterns should be evenly distributed to avoid biased evaluation. Thus, this step has four objectives: rule-guided split, path extension, negative supplement, and inference pattern balance.
Rule-guided Split grounds the mined rules F on the triples T_d to obtain premise triples and corresponding conclusion triples. All premise triples form the training set, and all conclusion triples form the test set; the test triples are thus inferable by construction. For correctness, all premise triples must exist in the given triple set T_d, while conclusion triples are not necessarily in T_d and may be generated for further annotation (i.e., Section 3.4).
For example, given the rule spouse(x, y) ∧ father(x, z) ⇒ mother(y, z), we traverse all of the given triples and find the entities LeBron James, Savannah Brinson, and Bronny James that meet the premise. We then add the premise triples spouse(LeBron James, Savannah Brinson) and father(LeBron James, Bronny James) to the training set, and generate the conclusion triple mother(Savannah Brinson, Bronny James) for testing, regardless of whether it is given or not.
Path Extension increases the variety of inference path patterns by (1) adding more reasoning paths for the same test triple, and (2) elongating paths by replacing premise triples that themselves have reasoning paths. For example, we replace father(LeBron James, Bronny James) with two triples from which it can be inferred: father(LeBron James, Bryce James) and brother(Bronny James, Bryce James). The original path is then extended by one hop. Correspondingly, we define the confidence of an extended path as the product of the confidences of all involved rules. Longer paths challenge long-distance reasoning ability.
Negative Supplement generates negative triples when we cannot annotate as many negatives as positives; otherwise, we would face an imbalance issue. Following convention, we randomly corrupt the head or tail entity of a positive triple under the following constraints (see the code sketch below): (1) the relation of the positive triple must be exclusive, e.g., placeOfBirth, i.e., the ratio of head to tail entities is smaller than a threshold (we choose 1.2 heuristically in experiments); otherwise, the corrupted negative triple may actually be positive, leading to false negative errors. (2) We choose positive triples from the test set for corruption to increase the difficulty: the model has to correctly infer the corresponding positive triple from the training data, and then classify the corrupted triple as negative through the conflict. Note that for non-exclusive relation types, most corrupted results should be unknown under the open-world assumption; the inferred test set covers such cases, as discussed in Section 3.4.
Inference Pattern Balance balances the various inference patterns, including path lengths, relation types, and relation patterns (i.e., symmetry, inversion, hierarchy, composition, and others), because concentrating on particular patterns may lead to severe bias and unfair comparison between KGC models. We first count the frequencies of test triples according to path length, relation type, and pattern, respectively. For each, we rank the counts and treat the highest-ranked groups of triples as frequent ones, instead of setting a threshold. We then randomly remove some frequent triples until the new distributions reach an accepted range (checked by humans).
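The following sketch illustrates the path-confidence definition and the negative-supplement corruption described above (Python; the exclusivity test is passed in as an assumed predicate implementing the head-to-tail ratio heuristic, not a fixed implementation):

import math
import random

def path_confidence(rule_confidences):
    # Confidence of an extended path = product of all involved rule confidences.
    return math.prod(rule_confidences)

def corrupt_negative(triple, entity_pool, is_exclusive, rng=random):
    # Only corrupt triples of exclusive relations, so the result is (almost)
    # surely false; for non-exclusive relations the result could be unknown.
    h, r, t = triple
    if not is_exclusive(r):
        return None
    e = rng.choice([x for x in entity_pool if x not in (h, t)])
    # Corrupt the head or the tail at random.
    return (e, r, t) if rng.random() < 0.5 else (h, r, e)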
Inferred Test Triple Labeling
Different from existing datasets, InferWiki includes positive, negative, and unknown test triples to evaluate models under two types of assumptions: the open-world assumption and the closed-world assumption. The main difference between them is whether unknown triples are regarded as negatives. That is, open-world evaluation is a three-class classification problem (i.e., positive, negative, and unknown), while closed-world evaluation targets only positive and negative triples; we can simply relabel unknown triples as negatives without changing the test set.
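A minimal sketch of this relabeling, the only change needed to move from open-world to closed-world evaluation:

def to_closed_world(labels):
    # Open-world labels: 1 = positive, -1 = negative, 0 = unknown.
    # Closed-world evaluation folds unknowns into negatives.
    return [-1 if y == 0 else y for y in labels]

assert to_closed_world([1, 0, -1]) == [1, -1, -1]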
So far, we have two test sets: one generated via rule guidance, and the other containing the supplemented negatives. This section aims to label the generated triples. First, we automatically label triples as positive if they exist in Wikidata. Then, we manually annotate the remaining 4,053 triples; the annotation guideline can be found in Appendix B. Note that all of the unknowns are factually incorrect, but their labels are not inferable from the training data. To assess the quality of the annotations, we verify a random selection of 300 test triples (100 for each label). The annotators agree with our labels 84.3% of the time. We further investigate the disagreements by relabeling 100 samples: 85% of the time, humans prefer unknown, while automatic labeling tends to assign positive or negative labels. This suggests an inferential difference between humans and machines, namely the capacity of knowing the unknown.
Finally, we remove entities that are not in any grounded path, together with their triples, and randomly select half of the test set as the validation set. This forms InferWiki64k. We further extract a dense subset, InferWiki16k, by filtering out positive triples whose confidence is smaller than 0.6; negative/unknown triples are reduced correspondingly to keep the balance. The statistics are listed in Table 2. Table 3 shows positive, negative, and unknown examples of InferWiki and their (possible) supportive training data. For positives, the paths seem reasonable and vary in length, relation types, and patterns; the 7-hop path of the sibling example is difficult even for a human. The negatives and unknowns are indeed incorrect and more challenging: there are no directly contradicting triples in the train set, so the model is encouraged to reason over related triples and to judge whether there is a conflict (i.e., negative) or not (i.e., unknown). Nevertheless, there are two minor issues. First, some unreasonable paths may corrupt predictability. We therefore raise the rule confidence threshold to λ > 0.6 for InferWiki16k and manually annotate uncertain test triples to ensure label correctness; more advanced rule mining models could further improve the construction pipeline, which we leave to future work. Second, do unknown triples have a bias towards certain relation types?
Dataset Analysis
The answer is yes, but only partially. As shown in Table 3, the relation connectsWith is involved in both positive and unknown triples; the label is also determined by the paths.
Next, we analyze the relation patterns and path length distributions through comparisons with existing KGC datasets. Due to their different construction pipelines, it is difficult to obtain quantitative statistics for existing datasets directly, so we apply our pipeline to CoDEx (Safavi and Koutra, 2020): only inferential test triples are kept and the training set remains unchanged, yielding CoDEx-m-infer, which reduces the test and validation positives from 20,622 to 7,050. This agrees with the original paper, which reports via AMIE+ analysis that 20.56% of triples are symmetric or compositional. We find more paths because we extract more extensive rules from a large set of triples. This also demonstrates the necessity of rule-guided train/test generation: most test triples are not guaranteed to be inferable under a random split.
Relation Pattern. Following convention, we count reasoning paths for various patterns: symmetry, inversion, hierarchy, composition, and others, whose detailed explanations and examples can be found in Appendix C. If a triple has multiple paths, we count all of them. As Figure 1 shows, (1) there are no inversion and only a few symmetry and hierarchy patterns in CoDEx-m, as most current datasets remove them to avoid train/test leakage. We argue, however, that learning and remembering such patterns is also an essential inference capacity; their numbers merely need to be controlled for a fair comparison. (2) The patterns of InferWiki are more evenly distributed. Note that the symmetry, inversion, and hierarchy patterns refer to 1-hop paths, while composition and others refer to multi-hop paths, so the total number of the former three is almost the same as that of the latter two, balancing paths of varying lengths, as discussed next.
Path Length Distribution. The reasoning paths ensure the predictability of test triples but may not be the shortest ones, as there may be undiscovered paths connecting two entities. Thus, our path length statistics are conservative and give an upper bound. For a test triple with multiple paths, we count the shortest one. As shown in Figure 2, InferWiki has more long-distance paths, while CoDEx-m-infer concentrates on reasoning paths of at most 3 hops. Specifically, the maximum path length of InferWiki is 9 (4 before path extension) and the average length is 2.9 (1.5 before path extension).
Further analysis of relation, entity and neighbor distributions can be found in Appendix D&E.
Limitation
Although we carefully designed the construction of InferWiki, there remain two types of limitations, rule biases and dataset errors, which should be addressed as KG techniques develop. In terms of rule biases, AnyBURL may be over-estimated due to its role in the construction. Although we utilize two triple sets to avoid rule leakage, their overlap may still bring an unfair performance gain to AnyBURL. We will consider synthesizing the results of several rule mining systems to improve InferWiki in the next version. In terms of dataset errors, first, to balance positive and negative triples in the larger InferWiki64k, we follow convention and randomly sample a portion of the negatives. These negatives may be unknown under the open-world assumption: we manually assess the randomly sampled negatives and find a 15.7% error rate. Therefore, we conduct the open-world experiments on the smaller InferWiki16k, all of whose test negatives are verified by humans. The second type of error stems from unreasonable rules used for the dataset split, caused by prediction errors of existing rule mining models. However, there is no suitable evaluation in this field to provide a quantitative analysis. Our ongoing work aims to develop an automatic evaluation of path rationality to improve mining quality and thus strengthen our inferential pipeline.
Tasks
We benchmark performance on InferWiki for two tasks. (1) Link Prediction is the task of predicting the missing head/tail entity for a given query triple (?, r, t) or (h, r, ?). Models are expected to rank correct entities higher than the other entities in the vocabulary. We adopt the filtered setting (Bordes et al., 2013), which excludes entities whose predicted triples have been seen in the train set. Mean reciprocal rank (MRR) and Hits@k are the standard evaluation metrics. (2) Triple Classification aims to predict a label for each given triple (h, r, t). Under the open-world assumption the label is ternary, y ∈ {−1, 0, 1}; under the closed-world assumption it becomes binary, y ∈ {−1, 1}, and all 0-labeled triples are relabeled with −1, since our unknown triples are factually negative yet not inferable from the training data. Since KGC models output real-valued scores for triples, we convert scores into labels by choosing one or two thresholds per relation type on the validation set. Accuracy, precision, recall, and F1 are used as metrics.
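For clarity, a hedged sketch of the filtered ranking metrics (standard definitions, not tied to any particular toolkit): scores maps every candidate entity to its model score for one query (h, r, ?), and known_tails holds entities already seen with (h, r) in the train set.

import numpy as np

def filtered_rank(scores, gold, known_tails):
    # Count candidates scoring strictly higher than the gold entity,
    # ignoring other known-true entities (the filtered setting).
    gold_score = scores[gold]
    better = sum(1 for e, s in scores.items()
                 if s > gold_score and e != gold and e not in known_tails)
    return better + 1

def mrr_and_hits(ranks, k=10):
    ranks = np.asarray(ranks, dtype=float)
    return float((1.0 / ranks).mean()), float((ranks <= k).mean())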
Models
For a comprehensive comparison, we choose three types of representative models as baselines: (1) embedding-based models (TransE, ComplEx, RotatE, ConvE, and TuckER), (2) a multi-hop reasoning model (MultiHop), and (3) a rule-based model (AnyBURL). Note that the latter two are specially designed for link prediction. The detailed implementation, including parameters and thresholds, can be found in Appendix F.
Closed-world Assumption
Overall, models perform well on closed-world triple classification, with around 90% F1 scores. This is consistent with recent findings that triple classification is a nearly solved task (around 98% F1 scores) (Safavi and Koutra, 2020). Nevertheless, the lower performance demonstrates the difficulty of our curated datasets, mainly due to the manually annotated hard negatives of InferWiki (and CoDEx). Figure 3 presents the accuracy on InferWiki16k for various types of triples: positives, randomly supplemented negatives, and annotated negatives (including relabeled unknowns). We can see that (1) random negative triples are indeed trivial for all baseline models, which motivates the need for harder negative triples to push this research direction forward, (2) positive triples are slightly more difficult to judge than random negatives, and (3) accuracy drops significantly on annotated negatives. This is mainly because most annotated triples are actually unknown: they are factually incorrect, but there are no obviously abnormal patterns. Such non-inferable cases may underestimate KGC models.
Open-world Assumption
Since most baselines fail to judge unknowns as negative, we now investigate them under the open-world assumption to assess their ability to recognize unknown triples. Table 5 shows the macro performance on InferWiki16k. All of the baseline models perform worse than under the closed-world assumption. On the one hand, ternary classification is intuitively more difficult than binary classification; on the other hand, searching for two decision thresholds, one between positive and unknown and the other between unknown and negative, is a rather crude method. This motivates future work on advanced models that represent a KG while also detecting its limitations and boundaries; responding "I do not know" is a fundamental inference capacity that avoids false negatives in downstream applications. Figure 4 presents a detailed analysis of each model with respect to the searched thresholds. Although the best performance looks acceptable, the worst scores are only around 10%; that is, the models are very sensitive to the thresholds. Besides, most of the time the average F1 scores of ComplEx, RotatE, and TuckER are around 20%, while TransE achieves higher scores, which is perhaps one reason why it is still the most widely used KGC method. ConvE stably outperforms the other baselines in terms of best, worst, and average performance.
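The two-threshold decision rule and its search can be sketched as follows (a plain grid search over a sorted candidate grid, as an assumed stand-in for the per-relation search on the validation set):

import numpy as np

def classify(scores, t_lo, t_hi):
    scores = np.asarray(scores)
    labels = np.zeros(len(scores), dtype=int)  # 0 = unknown
    labels[scores >= t_hi] = 1                 # positive
    labels[scores <= t_lo] = -1                # negative
    return labels

def macro_f1(pred, gold, classes=(-1, 0, 1)):
    pred, gold = np.asarray(pred), np.asarray(gold)
    f1s = []
    for c in classes:
        tp = np.sum((pred == c) & (gold == c))
        fp = np.sum((pred == c) & (gold != c))
        fn = np.sum((pred != c) & (gold == c))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return float(np.mean(f1s))

def search_thresholds(scores, gold, grid):
    # grid must be sorted ascending so that t_lo < t_hi always holds.
    best, best_f1 = None, -1.0
    for i, t_lo in enumerate(grid):
        for t_hi in grid[i + 1:]:
            f1 = macro_f1(classify(scores, t_lo, t_hi), gold)
            if f1 > best_f1:
                best, best_f1 = (t_lo, t_hi), f1
    return best, best_f1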
Link Prediction Results
Table 6 shows the average scores for head and tail prediction (bold fonts denote the best scores and underlines highlight the second best). We can see that (1) AnyBURL performs best most of the time, but the performance gap is not significant. This is mainly due to its role in dataset construction, although we utilize two sets of triples to minimize rule leakage. Indeed, rule-based inference may be more important than commonly assumed for improving the reliability and interpretability of knowledge-driven models, which also motivates us to incorporate rule knowledge into KGC training for advanced reasoning ability (Li et al., 2019b).
(2) KGC models perform better on InferWiki16k than on InferWiki64k, due to the former's higher structural density and rule confidence.
(3) Models have higher Hits@10 and lower Hits@1 on InferWiki than on other datasets (e.g., CoDEx). This agrees with the intuition that most entities are irrelevant, making it trivial to judge such corrupted triples, as in triple classification; only a small portion of entities is difficult to predict, which requires strong inference ability. Besides, Hits@1 varies considerably, which allows better comparison among models.
Impacts of Inferential Path Length. Figure 5 presents Hits@1 curves for tail prediction with varying path lengths on InferWiki64k. We can see an overall downward trend as the path length increases. The large fluctuations may have two causes: (1) as discussed in Section 3.5, the inferential paths ensure predictability but may not be the shortest ones, so the analysis is conservative and gives an upper bound on the performance for k-hop paths. Our paths are of high coverage and quality compared with existing datasets, which either conduct case studies or post-process datasets via rule mining. (2) Relation types and patterns also have significant impacts: shorter paths contain more long-tail relations, while longer paths tend to cover many common relations, which increases the difficulty of shorter paths and makes longer paths easier.
Impacts of Relation Patterns
We present Hits@1 for tail prediction on InferWiki64k per relation pattern in Table 7. Symmetry and inversion are not well solved and should be included in evaluation, albeit limited in scale. TransE performs worse on symmetric and inverse relations, consistent with the analysis in Abboud et al. (2020). Even though ComplEx and RotatE can capture such patterns, they fail to rank the corresponding entities at the top. Embedding-based models perform well on hierarchy relations, even outperforming AnyBURL. Compositional relations remain quite challenging and are worth further investigation.
Table 7: Hits@1 of tail prediction per relation pattern.
Model     Sym   Inv   Hier  Comp  Others
TransE    .000  .049  .479  .211  .296
ComplEx   .130  .279  .502  .368  .414
RotatE    .191  .246  .694  .477  .610
ConvE     .558  .668  .855  .602  .784
TuckER    .527  .612  .850  .625  .753
MultiHop  .231  .309  .345  .240  .296
AnyBURL   .782  .793  .782  .686  .809
Comparison of CoDEx-infer and CoDEx
We investigate the impact of rule-guided train/test generation by comparing CoDEx-m-infer with CoDEx-m. The two datasets share the same training set; the only difference lies in how the test triples are obtained, either with our proposed pipeline (CoDEx-m-infer) or randomly (CoDEx-m). The results thus reflect the impact of the inferential guarantee in dataset construction and demonstrate the necessity of avoiding over- or under-estimation of the inferential ability of KGC models. We report the performance on CoDEx-m from the original paper (Safavi and Koutra, 2020).
We can see that all models perform better with the inferential path guarantee on CoDEx-m-infer than on CoDEx-m, except ComplEx for link prediction. This is because the rule guidance eliminates non-inferable test triples, making the task easier. Nevertheless, the scores on hard cases actually decrease (as discussed for Figure 3 and Table 7). Models are expected to show stronger reasoning ability among several related entities, instead of trivially filtering out massive numbers of irrelevant entities. This again demonstrates the necessity of InferWiki to avoid over- or under-estimation of the inferential ability of KGC models, i.e., learning new knowledge from existing knowledge.
Case Study of Relation Types
We illustrate the most frequent relation types and their distributions in InferWiki64k and InferWiki16k in Figure 8. We can see that InferWiki has a diverse set of relation types that are not limited to specific domains, and the triples of each relation type are well balanced.
Conclusion
We highlighted three principles for KGC datasets, inferential ability, assumptions, and patterns, and contributed a large-scale dataset, InferWiki. We established a benchmark with seven KGC models of three types on the two tasks of triple classification and link prediction. The results provide a detailed analysis of various inference patterns, demonstrating the necessity of an inferential guarantee for better evaluation and the difficulty of the new open-world triple classification.
In the future, we are interested in cross-KG inference and transfer, and in investigating how to inject knowledge into deep learning architectures, for example for information extraction (Tong et al., 2020) or text generation (Cao et al., 2020b).
A Existing KGC Datasets
Table 8 lists existing KGC datasets. We can roughly classify them into two groups: inferential and non-inferential datasets. The first group is usually manually curated to ensure that each test sample can be inferred from the training data through reasoning paths. Families tests family relationships, including cousin, ancestor, marriage, parent, sibling, and uncle, among the members of 5 families over 6 generations, so that there are obvious compositional relationships such as uncle ≈ sibling + parent or parent ≈ married + parent. Kinship (Kemp et al., 2006) contains kinship relationships among members of the Alyawarra tribe from Central Australia, while Country contains countries, regions, and subregions as entities and is carefully designed to explicitly test the location relations (i.e., locatedIn and neighbor) among them. The above datasets are clearly limited in scale and inference patterns and have thus become unchallenging: HolE (Nickel et al., 2016) achieves 99.7% AUC-PR on the Country dataset.
The second group of datasets is automatically derived from public KGs by randomly splitting positive triples into train/valid/test, with the risk that test samples are not inferable from the training data. FB13 (Socher et al., 2013) and FB15k are commonly used benchmarks from Freebase. FB15k-401 is a subset of FB15k containing only frequent relations (relations with at least 100 training examples). To remove test leakage, FB15k-237 removes all equivalent or inverse relations. Similarly, FB5M removes all entity pairs that appear in the test set. WN18RR is the challenging version of WN18, extracted from WordNet. Textual information is also included for specific tasks, such as FB40K (Lin et al., 2015) targeting the relation extraction dataset New York Times (Riedel et al., 2010). FB24K (Lin et al., 2016) introduces attributes. FB15K+ introduces types and makes FB15k sparser by filtering out relations with a frequency lower than one. Another popular knowledge source is YAGO, with the corresponding datasets YAGO3-10 and YAGO37. Beyond open-domain KGs, NELL concentrates on location and sports, and UMLS targets medical knowledge. CoDEx (Safavi and Koutra, 2020) questions the quality of the above benchmarks, arguing, for instance, that NELL-995 triples are nonsensical or overly generic, and thus proposes a comprehensive dataset consisting of three knowledge graphs varying in size and structure, with entity types, multilingual labels and descriptions, and hard negatives.
B Annotation Guideline
We provide the following annotation guidelines for annotators to label inferred triples in Section 3.4.
Task. This is a two-step annotation. First, you must annotate each triple with a label y ∈ {1, −1}, where 1 denotes that the triple is correct and −1 denotes that it is incorrect. You may find the answer from any source you want, such as commonsense, Wikipedia, or professional websites. If you cannot find any evidence to support the statement, you should choose label −1. Second, you must annotate each incorrect triple with a label ŷ ∈ {0, −1}, where 0 denotes that you do not know the answer. In this step, you may only consult our provided triples; if you cannot find any evidence to support the statement, you should choose label 0.
Examples. Here are some examples judged using three types of knowledge sources. • Wikipedia: Given the triples (Tōkaidō Shinkansen, connectsWith, Osaka Higashi Line) and (Tōkaidō Shinkansen, connectsWith, San'yō Main Line), you can find the related station information on the page for Tōkaidō Shinkansen. Since Osaka Higashi Line shares a transfer station with Tōkaidō Shinkansen, you should label the first triple with 1; since San'yō Main Line does not appear on the page, you may label the second with −1.
C Relation Patterns
InferWiki allows analyzing the relation patterns of each path, including symmetry, inversion, hierarchy, and composition; detailed explanations and examples are listed in Table 9.
D Relation Types
We illustrate the most frequent relation types and their distributions in InferWiki64k and InferWiki16k in Figure 8.
E Entity and Neighbor Distributions
Figure 9 shows the distribution of entities and their neighbors compared with the widely used datasets FB15k-237 and CoDEx-m.
F Experiment Setup
Our experiments are run on a server with the following configuration: Ubuntu 16.04.6 LTS, an Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz, and a GeForce RTX 2080 Ti GPU. We use OpenKE (https://github.com/thunlp/OpenKE) to re-implement TransE, ComplEx, and RotatE. For the remaining models, we use the original code of ConvE (https://github.com/TimDettmers/ConvE), TuckER (https://github.com/ibalazevic/TuckER), MultiHop, and AnyBURL. Because we utilize various types of KGC models, including embedding-based, multi-hop reasoning (reinforcement learning), and rule-based models, these models largely have their own hyperparameters. To avoid an exhaustive parameter search over a large range, we conduct a series of preliminary experiments and find that the suggested parameters work well on Wikidata-based data. We then search the embedding size in {256, 512}, the number of negative samples in {15, 25}, and the margin in {4, 8}. The optimal parameters of each model on all three datasets are listed in Table 10. The thresholds for triple classification are listed in Table 11.
"Computer Science"
] |
Existence of Solution of Space–Time Fractional Diffusion-Wave Equation in Weighted Sobolev Space
In this paper, we consider the Cauchy problem for the space-time fractional diffusion-wave equation. Applying the Laplace transform and the Fourier transform, we establish the existence of a solution in terms of the Mittag-Leffler function and prove its uniqueness in a weighted Sobolev space by use of the Mikhlin multiplier theorem. The estimate of the solution also shows the connection between the loss of regularity and the order of the fractional derivatives in space or in time.
Fractional derivatives describe the memory and hereditary properties of many materials, which is their major advantage over integer-order derivatives. Fractional diffusion-wave equations are obtained from the classical diffusion and wave equations by replacing the integer-order derivative terms with fractional derivatives of order α ∈ (0, 1) ∪ (1, 2). They have attracted considerable attention recently for various reasons, including the modeling of anomalous diffusive and subdiffusive systems, the description of fractional random walks, wave propagation phenomena, multiphase fluid flow problems, and electromagnetic theory. Nigmatullin [1, 2] pointed out that many of the universal electromagnetic, acoustic, and mechanical responses can be modeled accurately using fractional diffusion-wave equations. Schneider and Wyss [3] presented the diffusion and wave equations in terms of integro-differential equations and obtained the associated Green's functions in closed form in terms of Fox functions. Mbodje and Montseny [4] investigated the existence, uniqueness, and asymptotic decay of the wave equation with fractional derivative feedback, and showed that the method developed can easily be adapted to a wide class of problems involving fractional derivative or integral operators in the time variable. Numerical algorithms provide efficient methods for solving related problems [5-8], while the development of analytical methods has lagged behind, since analytic solutions are unavailable in many cases [9-12]. Additional background, surveys, and more applications of this field in science, engineering, and mathematics can be found in [13-20] and the references therein. The fractional wave equation, with the same order in space and in time, was researched probably for the first time in [21], where an explicit formula for the fundamental solution of this equation was established.
This feature was then shown in [22] to be a decisive factor for inheriting some crucial characteristics of the wave equation, such as a constant propagation velocity of the maximum of its fundamental solution and of its gravity and mass centers. Moreover, the first, second, and Smith centrovelocities of the damped waves described by the fractional wave equation are constant and depend only on the equation order.
While the fractional wave equation contains fractional derivatives of the same order in space and in time, we establish the existence of a solution of the Cauchy problem for the fractional wave equation (1) with different orders in space and in time in weighted Sobolev spaces. The powers of the weight show the connection between the loss of regularity and the order of the fractional derivatives in space or in time. This paper is organized as follows: In Section 2, the relevant fractional calculus definitions and the Laplace transform are introduced, and the explicit solution of the fractional differential equation is given by means of Mittag-Leffler functions. In Section 3, based on the main result of Section 2, we show the existence and uniqueness of the solution of the space-time fractional diffusion-wave equation.
Laplace Transform and Fractional Calculus
In this section, we recall some necessary definitions and properties of fractional calculus, and then use the Laplace transform to consider the initial value problem of the related fractional differential equation.
Then we have the following estimate, where C denotes a positive constant.
Theorem 2. Consider problem (13); then there is an explicit solution, which is given in the integral form
Proof. According to Definitions 1-3, taking the Laplace transform with respect to t on both sides of Eq. (13), we obtain the transformed equation; the inverse Laplace transform, by Lemma 3, then yields the solution. Substituting (15)-(18) into (13) yields Theorem 2. ☐
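For reference, the standard two-parameter Mittag-Leffler function underlying the solution representation is (the specific parameters used in (15)-(18) are not recoverable from this excerpt):

E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + \beta)}, \qquad \alpha > 0,\ \beta, z \in \mathbb{C},

with the special cases E_{\alpha,1} = E_{\alpha} and E_{1,1}(z) = e^{z}.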
Fourier Transform and the Main Result
In this section, based on the results of Theorem 2, the Mikhlin multiplier theorem, the Mittag-Leffler function, and the Fourier transform, we establish the existence and uniqueness of the solution of the Cauchy problem for the space-time fractional diffusion-wave equation in a weighted Sobolev space.
"Mathematics"
] |
Climate-effective use of straw in the EU bioeconomy—comparing avoided and delayed emissions in the agricultural, energy and construction sectors
A transformation towards a bioeconomy is needed to reduce the environmental impacts and resource requirements of different industries. However, considering the finiteness of land and biomass, such a transition requires strategizing resource and land allocation towards activities that yield maximum environmental benefit. This paper aims to develop a resource-based comparative indicator between economic sectors to enable optimal use of biobased resources. A new methodology is proposed to analyze the climate effectiveness of using straw in the agricultural, energy and construction sectors. For this purpose, avoided and delayed emissions are analyzed and compared for different use cases of straw. Considering only avoided emissions, the use of straw as a feedstock for bioelectricity has the highest climate effectiveness (930 kg CO2 eq./t straw). Considering only temporal carbon storage, straw-based insulation in buildings has the highest climate effectiveness (881 kg CO2 eq./t straw). Combining avoided and delayed emissions, the use of straw-based insulation has the highest climate effectiveness (1344 kg CO2 eq./t straw). Today, EU policies incentivize the use of straw in the agricultural and energy sectors, neglecting the benefits of its use in the construction sector. The results can support policymakers' trans-sectoral incentives, whereby agricultural by-products are diverted towards the biomass uses that most boost economic activity and trigger maximum environmental benefit, given local circumstances.
Introduction
The vision of a European bioeconomy
To stay within planetary boundaries, industries need to transform: away from materials and processes with high environmental impact, towards a bio-based and circular economy powered by renewable energy, i.e. a transition towards a bioeconomy. In fact, nature-based solutions for carbon uptake are required to reach carbon neutrality. Yet, there are multiple challenges related to the transition towards a bioeconomy. These include the finiteness of the land and resources available (Haberl and Erb 2017), and a lack of coherence between many different policy domains, since the sourcing and use of biomass can influence different economic sectors (Muscat et al 2021). A report on resource efficiency and climate change by the International Resource Panel of the United Nations (IRP 2020) highlights the need for policy instruments that guide the efficient use of resources to minimize the impact of material use on the climate.
The European Union (EU) wants to promote using agricultural residues, such as straw, as a feedstock for the bioeconomy to avoid competition for arable land (DG for Research and Innovation of the EC 2018). Straw is the primary agricultural residue and accounts for ca. 20% of the total biomass produced in the EU (Scarlat et al 2019). With rising demand for biomass from different sectors, however, possible (regional) scarcity could lead to rivalry over the same resource (Daioglou et al 2016). Using biogenic resources not only offers an opportunity to mitigate climate change but also provides additional environmental and social benefits (Babí Almenar et al 2021). Due to the absence of standardized pricing for greenhouse gas (GHG) emissions and ecosystem services, there is a risk that resources will be allocated solely based on economic factors, without considering externalities (Haberl et al 2014).
Current frameworks on resource efficiency focus on minimizing the environmental impact of materials: increasing material efficiency should reduce the adverse environmental impacts caused by the extraction or use of materials (IRP 2020). This definition, however, is more relevant for non-renewable resources such as iron, cement or plastic, whose use is associated with high life cycle emissions. In contrast, biogenic resources and their cultivation, as long as sustainably managed, can positively affect the climate, as shown for increased biomass carbon pools related to afforestation (Chen et al 2023) or increased soil carbon sequestration related to certain cropping systems (Valkama et al 2020). Even when storage is only temporary, these delayed emissions lower the temperature peak in climate scenarios and could thus help avoid some climate tipping points (Matthews et al 2022). Consequently, policymakers should shift the focus towards maximizing benefits instead of minimizing impacts. Another limitation of current frameworks on resource efficiency, if applied to biogenic resources, is that the focus is on the end use rather than the resource itself. This means that resource efficiency minimizes the impact of a particular product or service by using less or different materials. However, when considering the finiteness of available biomass, the more relevant question is: where is the biomass best allocated to maximize its benefits? In other words, there is a need to identify the effectiveness, i.e. the benefits per unit of biomass, of different ways to use the same biomass in order to formulate policies that ensure allocation towards uses that are highly effective in climate-change mitigation. Such an approach aligns more with industrial ecology methods that address waste allocation between industrial sectors, rather than focusing solely on process efficiency, which aims to minimize resource consumption per production unit.
Existing straw uses in different sectors
Figure 1 shows the total availability of straw in the EU. Circa 42-63 Mt/year of straw are available considering technical harvest feasibility, sustainable removal rates and existing uses. More information on the analysis of straw availability in the EU can be found in Göswein et al (2021a).
Agricultural sector
In the agricultural sector, harvested straw is mainly used for animal bedding (Kaltschmitt et al 2016, Einarsson and Persson 2017). The amount of straw used for livestock in the EU is estimated at 17.5 Mt/year (Einarsson and Persson 2017) to 28 Mt/year (Scarlat et al 2010). Unused straw is often burned on the field despite this being illegal (Ortiz et al 2008, Song et al 2016), a practice that leads to GHG emissions (Pereira et al 2019). Yet farmers use it for pest and disease control, stating that it is also the cheapest and fastest practice (Giannoccaro et al 2017). In contrast, straw incorporation in the soil leads to soil organic carbon accumulation and soil fertilization. It is therefore currently promoted by policy: at the European level through the EU's Common Agricultural Policy, and at the national level through agri-environmental schemes that provide monetary incentives for farmers to leave straw on the fields, e.g. in Italy (Giannoccaro et al 2017) and in Ireland (IFA 2021).
Energy sector
The EU's share of renewable energy is increasing (Eurostat 2022). So far, the primary renewable energy source is bioenergy, for which most feedstock is provided by forestry (Scarlat et al 2019). To achieve the EU's ambitious renewable energy target of 40% of the energy mix by 2030, energy from biomass, including agricultural residues, will play a key role (Scarlat et al 2019). It is estimated that energy derived from agricultural residues can cover 2.3% to 4% of the EU's final energy consumption. In 2019, straw accounted for 75% of the agricultural residues (Scarlat et al 2019).
Straw is available for different end uses in the energy sector, depending on how it is processed. It can be used in its raw state or processed into straw pellets; the latter facilitates transportation thanks to a higher energy density. Straw used as a solid fuel or in the form of biogas (biomass fuel) can produce electricity or useful heat, or both if used in combined heat and power plants. Further, straw can be used as a feedstock for transport biofuels such as (straw) ethanol and different synthetic fuels. Today, mainly dedicated energy crops are used as feedstock for transport biofuel production in the EU: maize is the primary feedstock for ethanol, followed by sugar beet, wheat, and other cereals (European Commission 2020). However, as subsidies shift from dedicated energy crops to residues (Einarsson and Persson 2017), straw will become a more important feedstock because its high lignin content makes it 'highly suitable' for bioethanol production (Iqbal et al 2016). According to the EU agricultural outlook (European Commission 2020), waste and residues are the only bioethanol feedstock predicted to grow in this decade (10.8% annual growth between 2020 and 2030).
Construction sector
Straw, a traditional construction material, while historically significant, currently holds modest prominence in the construction sector (FNR 2017). Yet, similar to other rising natural materials (Hoxha et al 2020), straw is garnering increased attention, and recent studies explore its properties: thermal insulation, acoustics, load-bearing capacity, and fire resistance (Beck et al). Straw comes in the form of bales (directly shaped by harvesters) or loose straw (swaths). Bales find application as load-bearing walls, external insulation, or infill for frames or post-beam structures, requiring minimal processing. Most common is using bales as infill, which requires no special permits thanks to standardized support (SRB 2019). Loose straw requires prior processing, such as bundling for thatched roofs, or conversion into cob, straw chips, and light clay straw. Straw chips, suited to insulation and retrofitting, require a frame structure as mounting and also fill gaps (due to wall unevenness) for enhanced thermal insulation.
Objective of the paper
There are two ways to reduce GHG emissions through increased efficiency: one involves replacing materials with a higher environmental impact, thereby avoiding emissions, while the other entails acting as a temporary carbon sink, in natural systems like soils or in artificial systems such as buildings, thereby delaying GHG emissions. Various studies have demonstrated how the use of straw can reduce GHG emissions in a specific sector: through agroecosystems and soil incorporation for carbon sequestration (Powlson et al 2008, Cook et al 2013, Lugato et al 2018), through the production of straw-derived bioenergy and biofuels (Scarlat et al 2010, Sastre et al 2015, Pereira et al 2019), and through straw-derived biochar for carbon sequestration (Wang et al 2020). In buildings, this has been shown through straw bale construction (Chaussinand et al 2015), through retrofit using a timber frame with straw-chips infill (Göswein et al 2021b), and by comparing straw with other fast-growing bio-based insulation materials, highlighting full carbon recapture by crop regrowth one year after construction (Pittau et al 2018). However, to our knowledge, no previous study has standardized the comparison of climate-relevant straw uses across different sectors. Most studies neglect straw's potential as an insulating construction material, limiting its application to agriculture and energy.
This paper develops a new methodology to compare the potential climate benefits of different straw uses. We consider positive effects on the climate through avoided GHG emissions linked with material substitution or delayed GHG emissions related to temporal carbon storage in the agricultural, energy and construction sectors. This paper adds to the existing body of research by providing a method for comparing these benefits, thereby supporting policymakers in reframing existing legislation or creating regulatory frameworks that incentivize straw uses that maximize its climate mitigation effects.
Methods and data
This study develops and applies a resource-based comparative indicator, the climate effectiveness of straw use [kg CO2 eq./t straw]. The conceptual framework for determining the climate effectiveness of different straw uses is shown in figure 2. The climate effectiveness of straw use in three sectors is compared in terms of avoided and delayed emissions for the following four use cases:
• in the agricultural sector: (1) active straw incorporation in the soil;
• in the energy sector: (2a) straw used for electricity generation (biomass fuel), and (2b) straw-based transport fuel (biofuel);
• in the construction sector: (3) a retrofit system with straw-chips insulation for external walls of existing buildings. For more details about this retrofit system, please refer to supplementary information (SI) I.
In the first step, we compared the total life cycle emissions of selected straw uses with non-renewable alternatives using life cycle assessment (LCA), which assesses environmental impact over the full life cycle (Hellweg and Canals 2014). We defined functional units (FUs) for avoided emissions per use case: FU_2a,electricity = 1 GJ from biomass fuels for electricity; FU_2b,transport = 1 GJ from biofuel for transport; FU_3,construction = 1 m2 of retrofitted wall.
Note that for agricultural use case (1), significant avoided emissions were not considered (see SI II).
Avoided emissions calculations adhere to the renewable energy directive (RED) II, which assigns emissions to fossil fuel comparators (FFCs) using the EU fossil energy mix. RED II lacks details on the methodology for calculating the FFCs' life cycle emissions; Kalt et al (2020) stressed that precise fossil fuel modeling is needed for a thorough grasp of the emission reductions attained by shifting to renewables.
In the second step, we determined delayed emissions through temporal carbon storage using yearly averages of carbon accumulation within the system. We simplified carbon accumulation as an approximation of annual storage, considering how much biogenic carbon is retained when adding new straw each year (see figure 3). Fluxes between carbon pools are part of the carbon cycle, whereas accumulation constitutes temporal carbon storage. This storage increases the time lag between atmospheric fluxes, temporarily reducing carbon (CO2 or CH4) in the atmospheric carbon pool. Temporal carbon storage differs from technical carbon capture and storage (CCS), where CO2 is compressed and stored in geological formations for centuries.
In this study, we assumed a 40% carbon content, equivalent to 0.4 kg C per kg of biomass, and evaluated each system's carbon retention capacity. The annual carbon accumulation rate represents the percentage of straw, by weight, retained as carbon when an equal amount is added annually.
As the FU for comparing delayed emissions in the agricultural and construction cases (1) and (3), we used FU_CarbonStorage = 1 metric ton of straw.
Note that no carbon storage is considered for energy use cases (2a and 2b) as we assume immediate utilization.
In the third step, we defined a functional unit for climate effectiveness as [kg CO2 eq./t straw/100 y], enabling comparability across straw uses while considering both avoided and delayed emissions.
Material substitution impacts
We assessed avoided emissions by comparing the full life cycle emissions of specific straw uses to non-biobased, non-renewable alternatives. Figure 4 displays avoided emissions in the energy and construction sectors, including conventional material options. In the agricultural sector, straw incorporation could not replace any material or process, resulting in no avoided emissions.
For use in the energy sector, the data is taken from RED II (EC 2018), specifically for a raw material transport distance (A2) of <500 km (as this distance is assumed economically and environmentally viable) for the biomass fuels, and is used to calculate avoided emissions, i.e. from substituting renewables for their fossil fuel comparators (FFCs). Conventional transport fuel production and use emits 94 kg CO2 eq. per GJ, whereas straw ethanol emits only 14 kg CO2 eq. per GJ, yielding a potential saving of 80 kg CO2 eq. per GJ. When substituting straw for the FFC in electricity generation, the FFC emits 183 kg CO2 eq. per GJ; relative to this comparator, straw yields savings of 179 kg CO2 eq. per GJ and straw pellets of 175 kg CO2 eq. per GJ. Straw pellets have higher life cycle emissions due to pelletization (see SI III for results per LC stage).
Straw-based building insulation emits 12 kg CO2 eq. per m2 during production but stores biogenic carbon equivalent to −5 kg CO2 eq. per m2, totaling 7 kg CO2 eq. per m2. Compared with traditional insulation materials (glass wool, rock wool, and expanded polystyrene), straw offers potential savings of 3, 7, and 35 kg CO2 eq. per m2, respectively.
For additional details regarding the impacts per LC stage of the energy and construction use cases, please refer to SI III.
Results in figure 4 are presented as kg of CO2 eq. per functional unit: kg CO2 eq. per GJ for fuel and electricity, and kg CO2 eq. per m2 for thermal insulation. To compare avoided emissions across sectors, we converted the results: 1 ton of straw equals 4 GJ of biofuel, 5.3 GJ of electricity, or insulation for 28 m2, based on the material assumptions of the selected use cases. Refer to table 1 for the conversion from per functional unit to per ton of straw. The highest emission reduction potential is achieved by using straw as thermal insulation (saving 980 kg CO2 per t of straw), followed by electricity generation (saving 949 kg CO2 per t of straw) and straw pellet production for electricity (saving 928 kg CO2 per t of straw).
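The conversion behind these figures can be sketched as follows (Python, using the table 1 factors; the per-functional-unit savings are taken from figure 4, so small rounding differences remain):

savings_per_fu = {          # kg CO2 eq. avoided per functional unit
    "transport fuel": 80,   # per GJ (94 - 14, straw ethanol vs. fossil)
    "electricity": 179,     # per GJ, straw vs. fossil fuel comparator
    "insulation": 35,       # per m2, straw vs. expanded polystyrene
}
per_ton_of_straw = {        # functional units delivered by 1 t of straw
    "transport fuel": 4.0,  # GJ of biofuel
    "electricity": 5.3,     # GJ of electricity
    "insulation": 28.0,     # m2 of retrofitted wall
}
for use, saving in savings_per_fu.items():
    print(use, round(saving * per_ton_of_straw[use]), "kg CO2 eq./t straw")
# transport fuel: 320 (the paper reports 317 with unrounded factors)
# electricity: 949; insulation: 980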
Temporal carbon storage
The delayed emissions through temporal carbon storage were calculated with the average annual carbon accumulation rate within the respective system, i.e. the soil and the building stock. No carbon storage was considered for the energy use cases. The accumulation rate of carbon in soil was taken from Powlson et al (2008): 11% during the first 20 years, decreasing to 4% (years 20-50) and plateauing at 3% after 50 years. The accumulation rate of carbon in the building stock was defined by an assumed 60-year service life of the straw-based insulation system, based on Pittau et al (2018). The transfer from the biotic to the urban pool only postpones the carbon flux back into the atmosphere by the material's service life. Straw added to the building stock in subsequent years does not further contribute to the carbon accumulated in the urban pool, as it replaces straw that has already been accounted for. Therefore, for the first 60 years, the carbon accumulation rate equals the carbon content of the material (40%, or 0.4 kg of carbon per kg of biomass); after that, the carbon accumulation rate is 0%. The time horizon studied in this paper is 100 years. This results in the following 100-year averages: 5% for soil incorporation and 24% for the building stock. This means that every ton of straw incorporated in the soil leads to 50 kg of carbon retained in the soil, while for every ton of straw used for insulation, 240 kg is retained in the building stock. On average, straw incorporation can delay 184 kg CO2/t straw, while using straw as an insulation material in the construction sector can delay 881 kg CO2/t straw. Note that no delayed emissions were accounted for in the energy sector use cases.
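A worked recomputation of these averages (a sketch; time brackets and rates as given above, with carbon converted to CO2 via the 44/12 molar mass ratio):

CARBON_CONTENT = 0.4        # kg C per kg of straw
C_TO_CO2 = 44.0 / 12.0      # molar mass ratio CO2/C

def avg_rate(brackets):
    # brackets: (years, accumulation rate) pairs spanning the 100 y horizon.
    return sum(y * r for y, r in brackets) / sum(y for y, _ in brackets)

soil = avg_rate([(20, 0.11), (30, 0.04), (50, 0.03)])   # 0.049, i.e. ~5%
building = avg_rate([(60, CARBON_CONTENT), (40, 0.0)])  # 0.24, i.e. 24%

for name, rate in [("soil", soil), ("building stock", building)]:
    kg_c = rate * 1000                                  # kg C per t of straw
    print(name, round(kg_c), "kg C/t =", round(kg_c * C_TO_CO2), "kg CO2/t")
# soil: 49 kg C/t = 180 kg CO2/t; building stock: 240 kg C/t = 880 kg CO2/t
# (the paper's 50/184 and 240/881 follow with its rounding conventions)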
Total climate effectiveness
The climate effectiveness of a particular straw use is the combination of avoided emissions that can be achieved through material substitution and the delayed emissions through temporal carbon storage.
Figure 5 shows the effectiveness of the use cases selected for each sector.
Considering only avoided emissions (dark blue bars in figure 5), the use of straw as a feedstock for bioelectricity has the highest climate effectiveness (930 kg CO2 eq./t straw), followed by the use of straw as an insulation material (463 kg CO2 eq./t straw) and the use of straw as a transport fuel (317 kg CO2 eq./t straw). No avoided emissions are achieved when straw is incorporated into the soil.
Considering only temporal carbon storage (yellow bars in figure 5), the use of straw as an insulation material has the highest climate effectiveness (881 kg CO2 eq./t straw), followed by straw incorporation in soil (184 kg CO2 eq./t straw). The use of straw in the energy sector does not allow for carbon storage.
Combining the climate benefits from material substitution and temporal carbon storage, the use of straw as an insulation material has the highest climate effectiveness with 1344 kg CO2 eq./t straw, followed by using straw as biomass fuel (electricity) and biofuel (transport). Straw incorporation has the lowest climate effectiveness.
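Combining the two components reproduces the totals reported in figure 5 (values in kg CO2 eq. per t of straw; a simple bookkeeping sketch, noting that these avoided-emission values are the figure 5 results rather than the raw table 1 savings):

avoided = {"soil incorporation": 0, "bioelectricity": 930,
           "transport fuel": 317, "insulation": 463}
delayed = {"soil incorporation": 184, "bioelectricity": 0,
           "transport fuel": 0, "insulation": 881}
effectiveness = {use: avoided[use] + delayed[use] for use in avoided}
print(max(effectiveness, key=effectiveness.get), effectiveness["insulation"])
# insulation 1344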
Discussion
This paper analyzed and compared the climate benefits of various straw applications, introducing a new perspective on industrial symbiosis by incorporating the construction sector.
Our findings revealed that using straw as building insulation maximizes the combined benefits of material substitution and temporal carbon storage. However, it is important to note that emission savings vary based on the scenario: the more carbon-intensive the conventional system, the greater the potential to reduce emissions by using straw. For context, Buchspies et al (2020) found a saving of 107-610 kg CO2 eq. per ton of straw when switching from soil incorporation to bioethanol production. In our study, we found 133-746 kg CO2 eq. per ton of straw saved in the energy sector, excluding delayed soil emissions.
Subsequent sections provide broader context for the obtained results.
Enhancing climate effectiveness metrics
The 'climate effectiveness' indicator assesses the use of agricultural biomass residues to replace non-renewable resources and mitigate GHG emissions by quantifying avoided and delayed biogenic carbon emissions. However, the accuracy and significance of the results could be improved by refining the following:
• using national or region-specific data for the life cycle emissions and the temporal carbon storage of a particular use;
• improving the accuracy of the transformation factors.
Considering broader sustainability factors
Widening the analysis to include environmental factors other than GHG emissions, as well as economic and social factors, can give a broader perspective on sustainability. Soil quality dictates its capacity to support diverse ecosystem services, with carbon storage being just one facet. We must also consider the other essential roles it plays, including biomass production, environmental protection, gene preservation, support for human activities, provision of raw materials, and its geogenic and cultural heritage significance (Drobnik et al 2018). Analyzing potential trade-offs is crucial.
Biochar and carbon storage considerations
We did not include biochar as a carbon storage option in our study due to its known slow carbon release in soil.This posed allocation challenges since biochar is typically produced by industry before agricultural use and recent developments involve its incorporation into carbon-neutral concrete.Hence, assessing biochar's efficiency for carbon storage was beyond our study's scope.Biochar originates from the energy sector we examined and can be applied in the other two sectors.
Despite the uncertainty of the results, the magnitude of the difference in climate effectiveness between the agricultural and the construction sector gives a clear indication of the high potential and importance of considering the construction sector in the resource allocation discourse and in resource governance. Policies should consider sectors jointly instead of focusing on one sector in isolation. In order to shift to a circular bioeconomy, material flows across sectors must be analyzed.
Global perspectives
In 2021, the top wheat producers were the EU (138 Mt), China (137 Mt), and India (110 Mt) (FAO 2023). This study analyzed the EU's potential straw use, while the US, another major wheat producer (45 Mt), is also experiencing increased demand for insulation material due to retrofitting of existing buildings. Yet straw availability and (local) demand in the US are distinct from Europe, as cereal is produced in areas with low population density, which often results in large transportation distances from field to building site. Even though the load-bearing straw bale construction technique has its origins in Nebraska, large-scale implementation of straw construction there remains challenging.
China and India, being among the world's top wheat producers, show theoretical potential for scaled-up straw construction. However, in tropical climates construction systems tend to be lightweight and well-ventilated, which differs from massive straw bale construction. In these regions, a more interesting fast-growing bio-based structural building material is bamboo (Zea Escamilla et al 2019). The resource use of woody biomass and residues, including bamboo, should be analyzed in further studies from a forestry point of view.
Conclusions and policy recommendations
The following insights from this study are relevant for formulating policies for using straw as a resource in the bioeconomy to contribute to reaching carbon neutrality:
• More detailed statistics on the local supply of and demand for straw are necessary, since long transport distances for unprocessed straw are not viable from either an economic or an environmental perspective. Reliable data on straw availability is essential for stakeholders in the agricultural, energy and construction sectors to plan long-term transformation processes (within individual companies or whole sectors).
• We know from economic theory that efficient resource allocation can only occur in undistorted markets. This prerequisite is not met today for straw because (i) subsidies (e.g. for straw incorporation) and mandates (e.g. for biofuels) support particular straw uses and (ii) the lack of carbon taxation prevents climate-effective straw uses from being rewarded economically. It is recommended that current policies regarding the use of straw be re-evaluated in light of their climate effectiveness. Furthermore, it is recommended that future frameworks take a resource perspective, shifting the focus from minimizing adverse impacts towards maximizing positive impacts.
• The use cases analyzed in this study are not mutually exclusive but allow a cascading use of the resource: straw begins as insulation, then becomes biofuel or power, with CCS for permanent carbon storage. Biogas can return nutrients to the field, closing the loop, but only through structures that encourage cross-sector collaboration for economic advantage.
• Some of the straw uses investigated in this study are not yet well known and established in the market. In October 2021, the first European full-scale commercial plant to produce straw ethanol opened in Podari, Romania. The first large-scale application of straw-based thermal retrofitting is still to be seen. It is recommended that the EU and national governments promote the further development of straw applications, thereby making biogenic materials competitive with conventional materials in terms of price and ease of use.
Figure 1. Straw availability in the EU based on different studies. Notes: Values are taken from Einarsson and Persson (2017), Iqbal et al (2016), and Scarlat et al (2010). Only straw from wheat, barley, rye and oats is considered, in the EU-27 (2010 composition), to ensure comparability between studies. The underlined straw uses in the right box are analyzed in this study.
Figure 2. Conceptual framework for determining the climate effectiveness of different straw uses.
Figure 4. Avoided emissions in the energy sector and construction sector. Note: 'FFCs' stands for fossil fuel comparators; 'FU' stands for functional unit: in [GJ] for fuel and electricity and in [m²] for thermal insulation.
Figure 5. Climate effectiveness of different straw uses in the agricultural, energy and construction sectors. Note: The error bars represent the uncertainties for material substitution and temporal carbon storage (±30%).
Table 1. Comparison of savings potential for straw uses. 't' refers to metric tons. | 5,849.2 | 2023-10-19T00:00:00.000 | [
"Environmental Science",
"Economics"
] |
The lubricating role of water in the shuttling of rotaxanes
The special properties of water make it an effective lubricant in rotaxanes to enhance their shuttling.
Molecular dynamics simulations.
All the atomistic MD simulations presented herein were performed using the parallel, scalable program NAMD 2.11 [1]. Water was described by the TIP3P model [2], while the other molecules in this study were modeled with the CHARMM General Force Field (CGenFF) [3]. The temperature and the pressure were maintained at 300 K and 1 atm, respectively, employing Langevin dynamics and the Langevin piston method [4]. Chemical bonds involving hydrogen atoms were constrained to their experimental lengths by means of the SHAKE/RATTLE [5,6] and SETTLE [7] algorithms. The r-RESPA multiple-time-stepping algorithm [8] was applied to integrate the equations of motion, with time steps of 2 and 4 fs for short- and long-range interactions, respectively. A smoothed 12 Å spherical cutoff was applied to truncate van der Waals and short-range electrostatic interactions. Periodic boundary conditions (PBCs) were applied in the three directions of Cartesian space. Long-range electrostatic forces were taken into account by the particle-mesh Ewald scheme [9]. Visualization and analysis of the MD trajectories were performed with VMD 1.9.2 [10].

Free-energy calculations. The free-energy calculations reported herein were carried out utilizing the multiple-walker extended adaptive biasing force (MW-eABF) algorithm [11-13]. To increase the efficiency of the calculations, the free-energy surface was broken down into six consecutive, non-overlapping windows. Instantaneous values of the force were accrued in bins 0.1 Å × 2° wide. The sampling time required to determine each PMF was 2.2 μs. The least free-energy pathway connecting the minima of the two-dimensional free-energy landscapes was located using the LFEP algorithm [14]. The concept of committor [15,16] was utilized to verify that the chosen coarse variables capture the transition, using structures drawn from the presumed transition-state region (Figure S3). For each structure, 100 5000-step equilibrium simulations were carried out with different initial velocities. The frequency with which the molecular assembly tended to relax to state B before reaching state A, p_B, was calculated for each structure. The distribution of p_B for the 100 distinct structures is provided in Fig. S7. This distribution is Gaussian-like, with a peak at p_B = 0.5, which suggests that the chosen coarse variables are suitable for studying the movement of the macrocycle in the rotaxane.
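As a rough illustration of the committor test described above, the following self-contained Python sketch reproduces the logic on a toy one-dimensional double-well system rather than the actual rotaxane; the potential, basin definitions and Langevin parameters are our own assumptions, not the NAMD protocol of the paper:

```python
import numpy as np

# Committor test, transplanted onto a toy 1D double-well U(x) = (x^2 - 1)^2
# standing in for the rotaxane's coarse variables: states A and B are the
# wells near x = -1 and x = +1, and candidate "transition-state" structures
# are drawn near the barrier top. All parameters are illustrative.

rng = np.random.default_rng(1)

def force(x):
    return -4.0 * x * (x**2 - 1.0)   # -dU/dx

def relaxes_to_B(x0, dt=1e-3, kT=0.3, max_steps=5000):
    """One overdamped Langevin shot; True if it reaches basin B first."""
    x = x0
    for _ in range(max_steps):
        x += force(x) * dt + np.sqrt(2.0 * kT * dt) * rng.normal()
        if x < -0.9:
            return False             # committed to A
        if x > 0.9:
            return True              # committed to B
    return x > 0.0                   # unresolved shot: assign by sign

def committor_pB(x0, n_shots=100):
    return sum(relaxes_to_B(x0) for _ in range(n_shots)) / n_shots

# 25 candidate structures near the barrier (the text uses 100)
p_values = [committor_pB(rng.normal(0.0, 0.05)) for _ in range(25)]
print("mean p_B:", np.mean(p_values))   # expected near 0.5 at a true TS
```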
Comparison between classical ABF and eABF
In the classical ABF method, the biasing force is added to the groups of atoms at play, whereas in extended ABF, the bias is applied to a fictitious particle coupled to the coarse variable of interest by means of a stiff spring. In most cases, classical ABF is appropriate for multidimensional free-energy calculations in the limit of low-dimensionality problems, typically n ≤ 3. However, extended ABF must be employed in the following cases:
(i) the second derivative of the coarse variable is not available in the free-energy calculation engine;
(ii) the chosen coarse variables are not independent from each other;
(iii) the chosen coarse variables are coupled to geometric restraints or holonomic constraints.
In this study, the variable describing the conformational change of the macrocycle, φ = (φ₁ + φ₂ + φ₃)/3, consists of three coarse variables coupled to each other. Extended ABF must, therefore, be used in the free-energy calculations.
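To make the extended-Lagrangian idea concrete, here is a minimal one-dimensional Python sketch of eABF; the toy potential, spring constant, and overdamped Langevin integrator are illustrative assumptions, not the MW-eABF implementation used in this work:

```python
import numpy as np

# Minimal 1D sketch of extended ABF (eABF): the adaptive bias acts on a
# fictitious particle lam coupled to the coarse variable xi(x) = x through
# a stiff spring, never on the atoms directly.

rng = np.random.default_rng(2)
k_spring, kT, dt, gamma = 100.0, 1.0, 1e-3, 1.0
bins = np.linspace(-2.0, 2.0, 41)
force_sum = np.zeros(len(bins) - 1)  # accumulated spring force per lam-bin
count = np.zeros(len(bins) - 1)

def physical_force(x):               # -dU/dx for U(x) = (x^2 - 1)^2
    return -4.0 * x * (x**2 - 1.0)

x = lam = -1.0
for step in range(200_000):
    spring = k_spring * (x - lam)    # force exerted by the spring on lam
    b = int(np.clip(np.digitize(lam, bins) - 1, 0, len(count) - 1))
    count[b] += 1
    force_sum[b] += spring
    bias = -force_sum[b] / count[b]  # adaptive bias cancels the mean force
    noise = np.sqrt(2.0 * kT * dt / gamma)
    x += (physical_force(x) - spring) * dt / gamma + noise * rng.normal()
    lam += (spring + bias) * dt / gamma + noise * rng.normal()

# The running average of the spring force estimates -dA/dlam; integrating
# it recovers the free-energy profile along the coarse variable.
dA = -force_sum / np.maximum(count, 1.0)
A = np.cumsum(dA) * (bins[1] - bins[0])
print((A - A.min()).round(2))
```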
Moreover, the eABF method also possesses a much higher convergence rate than the original algorithm; see refs. 12 and 17 for more information.
Lubrication effect by water on the motion of abiological and biological molecular machines
Water can greatly weaken hydrogen bonds and stabilize transition states owing to its high polarity, its ability to act as both a hydrogen-bond donor and acceptor, and its very small molecular volume.
Lubrication by water, therefore, is universal for all hydrogen-bonding-driven molecular machines, including the wheel-and-axle machine reported by Panman et al. [18] The ability to change the driving force from hydrogen bonding to hydrophobic interaction is, however, specific to those complexes possessing large hydrophobic groups. The wheel-and-axle machine features only succinamide moieties as stoppers, which are not large enough to help water convert the driving force from hydrogen bonding to hydrophobic interaction, as can be inferred from structures D1 and D2 in Fig. 5.
In addition to its role in abiological, artificially designed molecular machines, water also plays a key role in biological machines. For example, the rotation of the motor protein ATPase is generally believed to be controlled by electrostatic interactions. Water can, however, help hydrophobic residues induce local deformations prior to electrostatically driven rotation, thereby reducing the barriers of side-chain dissociation and association that drive stalk rotation [19].
"Chemistry",
"Materials Science"
] |
The iterated auxiliary particle filter
We present an offline, iterated particle filter to facilitate statistical inference in general state space hidden Markov models. Given a model and a sequence of observations, the associated marginal likelihood L is central to likelihood-based inference for unknown statistical parameters. We define a class of "twisted" models: each member is specified by a sequence of positive functions psi and has an associated psi-auxiliary particle filter that provides unbiased estimates of L. We identify a sequence psi* that is optimal in the sense that the psi*-auxiliary particle filter's estimate of L has zero variance. In practical applications, psi* is unknown so the psi*-auxiliary particle filter cannot straightforwardly be implemented. We use an iterative scheme to approximate psi*, and demonstrate empirically that the resulting iterated auxiliary particle filter significantly outperforms the bootstrap particle filter in challenging settings. Applications include parameter estimation using a particle Markov chain Monte Carlo algorithm.
Introduction
Particle filtering, or sequential Monte Carlo (SMC), methodology involves the simulation over time of an artificial particle system (ξ_t^i; t ∈ {1, ..., T}, i ∈ {1, ..., N}). It is particularly suited to numerical approximation of integrals of the form

Z := ∫_{X^T} μ₁(x₁) g₁(x₁) ∏_{t=2}^{T} f_t(x_{t−1}, x_t) g_t(x_t) dx_{1:T},   (1)

where X = R^d for some d ∈ N, T ∈ N, x_{1:T} := (x₁, ..., x_T), μ₁ is a probability density function on X, each f_t is a transition density on X, and each g_t is a bounded, continuous, and nonnegative function. Algorithm 1 describes a particle filter, using which an estimate of (1) can be computed as

Z^N := ∏_{t=1}^{T} (1/N) ∑_{i=1}^{N} g_t(ξ_t^i).   (2)

Algorithm 1 A Particle Filter
1. Sample ξ₁^i ∼ μ₁ independently for i ∈ {1, ..., N}.
2. For t = 2, ..., T, sample ξ_t^i ∼ ∑_{j=1}^{N} g_{t−1}(ξ_{t−1}^j) f_t(ξ_{t−1}^j, ·) / ∑_{j=1}^{N} g_{t−1}(ξ_{t−1}^j) independently for i ∈ {1, ..., N}.

A hidden Markov model with latent state process X_{1:T} and observation process Y_{1:T} is specified by

P(X_{1:T} ∈ A, Y_{1:T} ∈ B) = ∫_{A×B} μ(x₁) g(x₁, y₁) ∏_{t=2}^{T} f(x_{t−1}, x_t) g(x_t, y_t) dx_{1:T} dy_{1:T},   (3)

where μ : X → R₊ is a probability density function, f : X × X → R₊ a transition density, g : X × Y → R₊ an observation density, and A and B measurable subsets of X^T and Y^T, respectively. Statistical inference is often conducted upon the basis of a realization y_{1:T} of Y_{1:T} for some finite T, which we will consider to be fixed throughout the remainder of the article. Letting E denote expectations w.r.t. P, our main statistical quantity of interest is L := E[∏_{t=1}^{T} g(X_t, y_t)], the marginal likelihood associated with y_{1:T}. In the above, we take R₊ to be the nonnegative real numbers, and assume throughout that L > 0.
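As an illustration of Algorithm 1 under the bootstrap choice (4) below, the following Python sketch runs a BPF on a toy one-dimensional linear Gaussian HMM of our own choosing and returns the estimate Z^N of (2):

```python
import numpy as np

# Bootstrap particle filter (Algorithm 1 under the choice (4)) for a toy
# 1D linear Gaussian HMM of our own choosing:
#   X_1 ~ N(0, 1), X_t | X_{t-1}=x ~ N(0.9 x, 1), Y_t | X_t=x ~ N(x, 1).
# Returns Z^N, the estimate (2) of the marginal likelihood L.

rng = np.random.default_rng(3)

def g(x, y):  # observation density g(x, y)
    return np.exp(-0.5 * (y - x) ** 2) / np.sqrt(2.0 * np.pi)

def bootstrap_pf(y, N=1000):
    xi = rng.normal(size=N)          # step 1: xi_1^i ~ mu
    Z = 1.0
    for t, yt in enumerate(y):
        w = g(xi, yt)                # weights g_t(xi_t^i)
        Z *= w.mean()                # running product in (2)
        if t + 1 < len(y):           # step 2: resample, then propagate
            anc = rng.choice(N, size=N, p=w / w.sum())
            xi = 0.9 * xi[anc] + rng.normal(size=N)
    return Z

# simulate y_{1:50} from the same model, then estimate L
x, ys = rng.normal(), []
for _ in range(50):
    ys.append(x + rng.normal())
    x = 0.9 * x + rng.normal()
print(bootstrap_pf(np.array(ys)))
```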
Running Algorithm 1 with

μ₁ = μ, f_t = f, and g_t(·) = g(·, y_t) for t ∈ {1, ..., T}   (4)

corresponds exactly to running the bootstrap particle filter (BPF) of Gordon, Salmond, and Smith (1993), and we observe that when (4) holds, the quantity Z defined in (1) is identical to L, so that Z^N defined in (2) is an approximation of L. In applications where L is the primary quantity of interest, there is typically an unknown statistical parameter θ ∈ Θ that governs μ, f, and g, and in this setting the map θ → L(θ) is the likelihood function. We continue to suppress the dependence on θ from the notation until Section 5.
The accuracy of the approximation Z N has been studied extensively. For example, the expectation of Z N , under the law of the particle filter, is exactly Z for any N ∈ N, and Z N converges almost surely to Z as N → ∞; these can be seen as consequences of Del Moral (2004, Theorem 7.4.2). For practical values of N, however, the quality of the approximation can vary considerably depending on the model and/or observation sequence. When used to facilitate parameter estimation using, for example, particle Markov chain Monte Carlo (Andrieu, Doucet, and Holenstein 2010), it is desirable that the accuracy of Z N be robust to small changes in the model and this is not typically the case.
In Section 2 we introduce a family of "twisted HMMs," parameterized by a sequence of positive functions ψ := (ψ₁, ..., ψ_T). Running a particle filter associated with any of these twisted HMMs provides unbiased and strongly consistent estimates of L. Some specific definitions of ψ correspond to well-known modifications of the BPF, and the algorithm itself can be viewed as a generalization of the auxiliary particle filter (APF) of Pitt and Shephard (1999). Of particular interest is a sequence ψ* for which Z^N = L with probability 1. In general, ψ* is not known and the corresponding APF cannot be implemented, so our main focus in Section 3 is approximating the sequence ψ* iteratively, and defining final estimates through use of a simple stopping rule. In the applications of Section 5, we find that the resulting estimates significantly outperform the BPF, and exhibit some robustness to both increases in the dimension of the latent state space X and changes in the model parameters. There are some restrictions on the class of transition densities and the functions ψ₁, ..., ψ_T that can be used in practice, which we discuss.
This work builds upon a number of methodological advances, most notably the twisted particle filter (Whiteley and Lee 2014), the APF (Pitt and Shephard 1999), block sampling (Doucet, Briers, and Sénécal 2006), and look-ahead schemes (Lin et al. 2013). In particular, the sequence ψ* is closely related to the generalized eigenfunctions described in Whiteley and Lee (2014), but in that work the particle filter as opposed to the HMM was twisted to define alternative approximations of L. For simplicity, we have presented the BPF in which multinomial resampling occurs at each timestep. Commonly employed modifications of this algorithm include adaptive resampling (Kong, Liu, and Wong 1994; Liu and Chen 1995) and alternative resampling schemes (see, e.g., Douc, Cappé, and Moulines 2005). Generalization to the time-inhomogeneous HMM setting is fairly straightforward, so we restrict ourselves to the time-homogeneous setting for clarity of exposition.
Twisted Models and the ψ-Auxiliary Particle Filter
Given an HMM (μ, f, g) and a sequence of observations y_{1:T}, we introduce a family of alternative twisted models based on a sequence of real-valued, bounded, continuous, and positive functions ψ := (ψ₁, ψ₂, ..., ψ_T). Letting, for an arbitrary transition density f and function ψ, f(x, ψ) := ∫_X f(x, x′)ψ(x′)dx′, we define a sequence of normalizing functions (ψ̃₁, ψ̃₂, ..., ψ̃_T) on X by ψ̃_t(x_t) := f(x_t, ψ_{t+1}) for t ∈ {1, ..., T−1}, ψ̃_T ≡ 1, and a normalizing constant ψ̃₀ := ∫_X μ(x₁)ψ₁(x₁)dx₁. We then define the twisted model via the following sequence of twisted initial and transition densities:

μ^ψ(x₁) := μ(x₁)ψ₁(x₁)/ψ̃₀,   f_t^ψ(x_{t−1}, x_t) := f(x_{t−1}, x_t)ψ_t(x_t)/ψ̃_{t−1}(x_{t−1}), t ∈ {2, ..., T},   (5)

and the sequence of positive functions

g₁^ψ(x₁) := g(x₁, y₁)ψ̃₁(x₁)ψ̃₀/ψ₁(x₁),   g_t^ψ(x_t) := g(x_t, y_t)ψ̃_t(x_t)/ψ_t(x_t), t ∈ {2, ..., T},   (6)

which play the role of observation densities in the twisted model. Our interest in this family is motivated by the following invariance result. We denote by 1 the sequence of constant functions equal to 1 everywhere.
Proposition 1. If ψ is a sequence of bounded, continuous and positive functions, and Z_ψ denotes the quantity (1) defined with μ^ψ, (f_t^ψ) and (g_t^ψ) in place of μ₁, (f_t) and (g_t), then Z_ψ = Z.

Proof. We observe that

μ^ψ(x₁) g₁^ψ(x₁) ∏_{t=2}^{T} f_t^ψ(x_{t−1}, x_t) g_t^ψ(x_t) = μ(x₁) g(x₁, y₁) ∏_{t=2}^{T} f(x_{t−1}, x_t) g(x_t, y_t),

since the ψ_t and ψ̃_t terms telescope, and the result follows.
From a methodological perspective, Proposition 1 makes clear a particular sense in which the L.H.S. of (1) is common to an entire family of μ₁, (f_t)_{t∈{2,...,T}} and (g_t)_{t∈{1,...,T}}. The BPF associated with the twisted model corresponds to choosing

(μ₁, f_t, g_t) = (μ^ψ, f_t^ψ, g_t^ψ), t ∈ {1, ..., T},   (7)

in Algorithm 1; to emphasize the dependence on ψ, we provide in Algorithm 2 the corresponding algorithm and we will denote approximations of L by Z_ψ^N. We demonstrate below that the BPF associated with the twisted model can also be viewed as an APF associated with the sequence ψ, and so refer to this algorithm as the ψ-APF. Since the class of ψ-APFs is very large, it is natural to consider whether there is an optimal choice of ψ, in terms of the accuracy of the approximation Z_ψ^N: the following proposition describes such a sequence.
Algorithm 2 The ψ-APF
1. Sample ξ₁^i ∼ μ^ψ independently for i ∈ {1, ..., N}.
2. For t = 2, ..., T, sample ξ_t^i ∼ ∑_{j=1}^{N} g_{t−1}^ψ(ξ_{t−1}^j) f_t^ψ(ξ_{t−1}^j, ·) / ∑_{j=1}^{N} g_{t−1}^ψ(ξ_{t−1}^j) independently for i ∈ {1, ..., N}.

Proposition 2. Define ψ_T^*(x_T) := g(x_T, y_T) and

ψ_t^*(x_t) := g(x_t, y_t) f(x_t, ψ_{t+1}^*)   (8)

for t ∈ {1, ..., T − 1}. Then, Z_{ψ*}^N = L with probability 1.

Proof. It can be established that ψ̃_t^*(x_t) = f(x_t, ψ_{t+1}^*) for t ∈ {1, ..., T − 1}, and so we obtain from (6) that g₁^{ψ*} ≡ ψ̃₀^* and g_t^{ψ*} ≡ 1 for t ∈ {2, ..., T}. Hence, Z_{ψ*}^N = ψ̃₀^* with probability 1. To conclude, we observe that ψ̃₀^* = ∫_X μ(x₁)ψ₁^*(x₁)dx₁ = E[∏_{t=1}^{T} g(X_t, y_t)] = L.

Implementation of Algorithm 2 requires that one can sample according to μ^ψ and f_t^ψ(x, ·) and compute g_t^ψ pointwise. This imposes restrictions on the choice of ψ in practice, since one must be able to compute both ψ_t and ψ̃_t pointwise. In general models, the sequence ψ* cannot be used for this reason, as (8) cannot be computed explicitly. However, since Algorithm 2 is valid for any sequence of positive functions ψ, we can interpret Proposition 2 as motivating the effective design of a particle filter by solving a sequence of function approximation problems.
Alternatives to the BPF have been considered before (see, e.g., the "locally optimal" proposal in Doucet, Godsill, and Andrieu 2000 and the discussion in Del Moral 2004, Section 2.4.2). The family of particle filters we have defined using ψ are unusual, however, in that g ψ t is a function only of x t rather than (x t−1 , x t ); other approaches in which the particles are sampled according to a transition density that is not f typically require this extension of the domain of these functions. This is again a consequence of the fact that the ψ-APF can be viewed as a BPF for a twisted model. This feature is shared by the fully adapted APF of Pitt and Shephard (1999), when recast as a standard particle filter for an alternative model as in Johansen and Doucet (2008), and which is obtained as a special case of Algorithm 2 when ψ t (·) ≡ g(·, y t ) for each t ∈ {1, . . . , T }. We view the approach here as generalizing that algorithm for this reason.
It is possible to recover other existing methodological approaches as BPFs for twisted models. In particular, when each element of ψ is a constant function, we recover the standard BPF of Gordon, Salmond, and Smith (1993). Setting ψ_t(x_t) = g(x_t, y_t) gives rise to the fully adapted APF. By taking, for some k ∈ N and each t ∈ {1, ..., T},

ψ_t(x_t) = E[∏_{s=t}^{min(t+k, T)} g(X_s, y_s) | X_t = x_t],   (9)

ψ corresponds to a sequence of look-ahead functions (see, e.g., Lin et al. 2013) and one can recover idealized versions of the delayed sample method of Chen, Wang, and Liu (2000) (see also the fixed-lag smoothing approach in Clapp and Godsill 1999), and the block sampling particle filter of Doucet, Briers, and Sénécal (2006). When k ≥ T − 1, we obtain the sequence ψ*. Just as ψ* cannot typically be used in practice, neither can the exact look-ahead strategies obtained by using (9) for some fixed k. In such situations, the proposed look-ahead particle filtering strategies are not ψ-APFs, and their relationship to the ψ*-APF is consequently less clear. We note that the offline setting we consider here affords us the freedom to define twisted models using the entire data record y_{1:T}. The APF was originally introduced to incorporate a single additional observation, and could therefore be implemented in an online setting, that is, the algorithm could run while the data record was being produced.
Asymptotic Variance of the ψ-APF
Since it is not typically possible to use the sequence ψ* in practice, we propose to use an approximation of each member of ψ*. To motivate such an approximation, we provide a central limit theorem, adapted from a general result due to Del Moral (2004, Chap. 9). It is convenient to make use of the fact that the estimate Z_ψ^N is invariant to rescaling of the functions ψ_t by constants, and we adopt now a particular scaling that simplifies the expression of the asymptotic variance.

Proposition 3. Let ψ be a sequence of bounded, continuous, and positive functions. Then N^{1/2}(Z_ψ^N − L)/L converges in distribution to a zero-mean normal random variable with variance σ_ψ².

We emphasize that Proposition 3, whose proof can be found in the Appendix, follows straightforwardly from existing results for Algorithm 1, since the ψ-APF can be viewed as a BPF for the twisted model defined by ψ. For example, in the case where ψ consists only of constant functions, we obtain the standard asymptotic variance expression for the BPF. From Proposition 3, we can deduce that σ_ψ² tends to 0 as ψ approaches ψ* in an appropriate sense. Hence, Propositions 2 and 3 together provide some justification for designing particle filters by approximating the sequence ψ*.
Classes of f and ψ
While the ψ-APF described in Section 2 and the asymptotic results just described are valid very generally, practical implementation of the ψ-APF does impose some restrictions jointly on the transition densities f and the functions in ψ. Here we consider only the case where the HMM's initial distribution is a mixture of Gaussians and f is a member of F, the class of transition densities of the form

f(x, x′) = ∑_{k=1}^{M} c_k(x) N(x′; a_k(x), b_k(x)),

where M ∈ N, (a_k)_{k∈{1,...,M}} and (b_k)_{k∈{1,...,M}} are sequences of mean and covariance functions, respectively, and (c_k)_{k∈{1,...,M}} is a sequence of R₊-valued functions with ∑_{k=1}^{M} c_k(x) = 1 for all x ∈ X. Let Ψ define the class of functions of the form

ψ(x) = C + ∑_{k=1}^{M} c_k N(x; a_k, b_k),

where M ∈ N, C ∈ R₊, and (a_k)_{k∈{1,...,M}}, (b_k)_{k∈{1,...,M}} and (c_k)_{k∈{1,...,M}} are sequences of means, covariances, and positive real numbers, respectively. When f ∈ F and each ψ_t ∈ Ψ, it is straightforward to implement Algorithm 2 since, for each t, f(x, ψ_t) can be computed explicitly and f_t^ψ(x, ·) is a mixture of normal distributions whose component means and covariance matrices can also be computed. Alternatives to this particular setting are discussed in Section 6.
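The tractability of f(x, ψ) in this setting rests on the standard Gaussian convolution identity ∫ N(x′; m, S) N(x′; a, b) dx′ = N(m; a, S + b). A small Python sketch, with M = 1 and illustrative mean and covariance functions of our own choosing:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

# Closed-form f(x, psi) when f(x, .) is a single Gaussian and
# psi(x') = C + N(x'; a, b): by the convolution identity,
#   f(x, psi) = C + N(mean(x); a, cov(x) + b).
# mean(x) and cov(x) below are illustrative choices.

def f_psi(x, C, a, b):
    mean_x = 0.9 * x            # a_1(x): illustrative transition mean
    cov_x = np.eye(len(x))      # b_1(x): illustrative transition covariance
    return C + mvn.pdf(mean_x, mean=a, cov=cov_x + b)

x = np.zeros(2)
print(f_psi(x, C=0.1, a=np.ones(2), b=0.5 * np.eye(2)))
```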
Recursive Approximation of ψ *
The ability to compute f(·, ψ_t) pointwise when f ∈ F and ψ_t ∈ Ψ is also instrumental in the recursive function approximation scheme we now describe. Our approach is based on the following observation.
Proposition 4. The sequence ψ* satisfies ψ_t^*(x) = g(x, y_t) f(x, ψ_{t+1}^*) for t ∈ {1, ..., T}, where ψ_{T+1}^* ≡ 1.

Proof. The definition of ψ* provides that ψ_T^*(x) = g(x, y_T) and ψ_t^*(x) = g(x, y_t) f(x, ψ_{t+1}^*) for t ∈ {1, ..., T − 1}; the case t = T follows since f(x, ψ_{T+1}^*) = f(x, 1) = 1.

Let (ξ₁^{1:N}, ..., ξ_T^{1:N}) be random variables obtained by running a particle filter. We propose to approximate ψ* by Algorithm 3, for which we define ψ_{T+1} ≡ 1. This algorithm mirrors the backward sweep of the forward filtering backward smoothing recursion which, if it could be calculated, would yield exactly ψ*.
Algorithm 3 Recursive function approximations
For t = T, ..., 1:
1. Compute ψ_t^i := g(ξ_t^i, y_t) f(ξ_t^i, ψ_{t+1}) for i ∈ {1, ..., N}.
2. Choose ψ_t as a member of Ψ on the basis of ξ_t^{1:N} and ψ_t^{1:N}.
One choice in Step 2 of Algorithm 3 is to define ψ t using a nonparametric approximation such as a Nadaraya-Watson estimate (Nadaraya 1964;Watson 1964). Alternatively, a parametric approach is to choose ψ t as the minimizer in some subset of of some function of ψ t , ξ 1:N t and ψ 1:N t . Although a number of choices are possible, we focus in Section 5 on a simple parametric approach that is computationally inexpensive.
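A minimal Python sketch of Algorithm 3 in one dimension, using weighted moment matching as a hypothetical choice in step 2 rather than the log-least-squares criterion used later in (15); the function names and the fitting rule are ours:

```python
import numpy as np

# Sketch of Algorithm 3 in 1D: fit psi_t(x) = N(x; m, s2) to the pairs
# (xi_t^i, psi_t^i) by weighted moment matching. Since Z_psi^N is invariant
# to rescaling each psi_t by a constant, the height of the fitted function
# is irrelevant and a normalized Gaussian suffices.

def fit_psi(xi, vals):
    w = vals / vals.sum()
    m = float(np.sum(w * xi))
    s2 = float(np.sum(w * (xi - m) ** 2)) + 1e-8
    return m, s2

def backward_psi(particles, ys, g, f_psi):
    """particles[t] holds xi_t^{1:N}; f_psi(x, (m, s2)) computes f(x, psi)."""
    T = len(ys)
    psi, nxt = [None] * T, None                  # psi_{T+1} := 1
    for t in reversed(range(T)):
        xi = particles[t]
        look = np.ones_like(xi) if nxt is None else f_psi(xi, nxt)
        vals = g(xi, ys[t]) * look               # psi_t^i = g(., y_t) f(., psi_{t+1})
        psi[t] = nxt = fit_psi(xi, vals)
    return psi

# e.g. for f(x, .) = N(0.9 x, 1):
# f_psi = lambda x, p: (np.exp(-0.5 * (0.9 * x - p[0])**2 / (1 + p[1]))
#                       / np.sqrt(2 * np.pi * (1 + p[1])))
```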
The Iterated Auxiliary Particle Filter
The iterated auxiliary particle filter (iAPF), Algorithm 4, is obtained by iteratively running a ψ-APF and estimating ψ * from its output. Specifically, after each ψ-APF is run, ψ * is reapproximated using the particles obtained, and the number of particles is increased according to a well-defined rule. The algorithm terminates when a stopping rule is satisfied.
Algorithm 4 An iterated auxiliary particle filter with parameters (N₀, k, τ)
1. Initialize: set ψ⁰ to be a sequence of constant functions, l ← 0.
2. Repeat:
(a) Run a ψ^l-APF with N_l particles, and set Ẑ_l ← Z_{ψ^l}^{N_l}.
(b) If l > k and sd(Ẑ_{l−k:l})/mean(Ẑ_{l−k:l}) < τ, go to 3.
(c) Compute ψ^{l+1} using a version of Algorithm 3 with the particles produced.
(d) If the sequence Ẑ_{l−k:l} is not monotonically increasing, set N_{l+1} ← 2N_l; otherwise set N_{l+1} ← N_l. Set l ← l + 1 and return to 2(a).
3. Return the final estimate Ẑ_l.

The rationale for Step 2(d) of Algorithm 4 is that if the sequence Ẑ_{l−k:l} is monotonically increasing, there is some evidence that the approximations ψ^{l−k:l} are improving, and so increasing the number of particles may unnecessarily increase computational cost. However, if the approximations Ẑ_{l−k:l} both have high relative standard deviation in comparison to τ and are oscillating, then reducing the variance of the approximation of Z and/or improving the approximation of ψ* may require an increased number of particles. Some support for this procedure can be obtained from the log-normal CLT of Bérard, Del Moral, and Doucet (2014): under regularity assumptions, log Z_ψ^N is approximately a N(−δ_ψ²/2, δ_ψ²) random variable.
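The control flow of Algorithm 4 can be summarized in a few lines. In the Python sketch below, `run_psi_apf` and `approximate_psi_star` are placeholders standing in for Algorithms 2 and 3; only the stopping and particle-doubling logic is spelled out:

```python
import numpy as np

# Sketch of the iAPF control loop (Algorithm 4). run_psi_apf(psi, N) is
# assumed to return (Z, particles); approximate_psi_star(particles) is
# assumed to return a new psi sequence. Both are placeholders here.

def iapf(run_psi_apf, approximate_psi_star, N0=100, k=3, tau=0.5, max_iter=50):
    psi, N, Z_hist = None, N0, []       # psi = None means constant functions
    for l in range(max_iter):
        Z, particles = run_psi_apf(psi, N)                 # step 2(a)
        Z_hist.append(Z)
        recent = np.array(Z_hist[-(k + 1):])
        if l > k and recent.std() / recent.mean() < tau:   # step 2(b)
            return Z, psi
        psi = approximate_psi_star(particles)              # step 2(c)
        if l >= k and not np.all(np.diff(recent) > 0):     # step 2(d)
            N *= 2                      # estimates oscillating: double N
    return Z_hist[-1], psi
```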
Approximations of Smoothing Expectations
Thus far, we have focused on approximations of the marginal likelihood, L, associated with a particular model and data record y_{1:T}. Particle filters are also used to approximate so-called smoothing expectations, that is, π(ϕ) := E[ϕ(X_{1:T}) | {Y_{1:T} = y_{1:T}}] for some ϕ : X^T → R. Such approximations can be motivated by a slight extension of (1),

γ(ϕ) := ∫_{X^T} ϕ(x_{1:T}) μ(x₁) g(x₁, y₁) ∏_{t=2}^{T} f(x_{t−1}, x_t) g(x_t, y_t) dx_{1:T},

where ϕ is a real-valued, bounded, continuous function. We can write π(ϕ) = γ(ϕ)/γ(1), where 1 denotes the constant function x → 1. We define below a well-known, unbiased, and strongly consistent estimate γ^N(ϕ) of γ(ϕ), which can be obtained from Algorithm 1. A strongly consistent approximation of π(ϕ) can then be defined as γ^N(ϕ)/γ^N(1).
The definition of γ^N(ϕ) is facilitated by a specific implementation of step 2 of Algorithm 1, in which one samples an ancestor index A_{t−1}^i with P(A_{t−1}^i = j) ∝ g_{t−1}(ξ_{t−1}^j) and then ξ_t^i ∼ f_t(ξ_{t−1}^{A_{t−1}^i}, ·), for each i ∈ {1, ..., N} independently. Use of, for example, the Alias algorithm (Walker 1974, 1977) gives the algorithm O(N) computational complexity, and the random variables (A_t^i) allow the ancestral line ξ̃_{1:T}^i of each terminal particle to be traced back through time. The estimate

γ^N(ϕ) := Z^N · N^{−1} ∑_{i=1}^{N} ϕ(ξ̃_{1:T}^i)

is unbiased and strongly consistent, and a strongly consistent approximation of π(ϕ) is π_ψ^N(ϕ) := γ^N(ϕ)/γ^N(1). The ψ*-APF is optimal in terms of approximating γ(1) ≡ Z and not π(ϕ) for general ϕ. Asymptotic variance expressions akin to Proposition 3, but for π_ψ^N(ϕ), can be derived using existing results (see, e.g., Del Moral and Guionnet 1999; Chopin 2004; Künsch 2005; Douc and Moulines 2008) in the same manner. These could be used to investigate the influence of ψ on the accuracy of π_ψ^N(ϕ), or the interaction between ϕ and the sequence ψ which minimizes the asymptotic variance of the estimator of its expectation.
Finally, we observe that when the optimal sequence ψ * is used in an APF in conjunction with an adaptive resampling strategy (see Algorithm 5), the weights are all equal, no resampling occurs and the ξ i t are all iid samples from P(X t ∈ · | {Y 1:T = y 1:T }). This at least partially justifies the use of iterated ψ-APFs to approximate ψ * : the asymptotic variance σ 2 ψ in (10) is particularly affected by discrepancies between ψ * and ψ in regions of relatively high conditional probability given the data record y 1:T , which is why we have chosen to use the particles as support points to define approximations of ψ * in Algorithm 3.
Applications and Examples
The purpose of this section is to demonstrate that the iAPF can provide substantially better estimates of the marginal likelihood L than the BPF at the same computational cost. This is exemplified by its performance when d is large, recalling that X = R d . When d is large, the BPF typically requires a large number of particles to approximate L accurately. In contrast, the ψ * -APF computes L exactly, and we investigate below the extent to which the iAPF is able to provide accurate approximations in this setting. Similarly, when there are unknown statistical parameters θ , we show empirically that the accuracy of iAPF approximations of the likelihood L(θ ) are more robust to changes in θ than their BPF counterparts.
Unbiased, nonnegative approximations of likelihoods L(θ ) are central to the particle marginal Metropolis-Hastings algorithm (PMMH) of Andrieu, Doucet, and Holenstein (2010), a prominent parameter estimation algorithm for general state space hidden Markov models. An instance of a pseudo-marginal Markov chain Monte Carlo algorithm (Beaumont 2003;Andrieu and Roberts 2009), the computational efficiency of PMMH depends, sometimes dramatically, on the quality of the unbiased approximations of L(θ ) (Andrieu and Vihola 2015; Lee and Łatuszyński 2014;Sherlock et al. 2015;Doucet et al. 2015) delivered by a particle filter for a range of θ values. The relative robustness of iAPF approximations of L(θ ) to changes in θ , mentioned above, motivates their use over BPF approximations in PMMH.
Implementation Details
In our examples, we use a parametric optimization approach in Algorithm 3. Specifically, for each t ∈ {1, ..., T}, we compute numerically a regularized version of

(m_t, Σ_t) ∈ arg min_{m, Σ} ∑_{i=1}^{N} [log ψ_t^i − log N(ξ_t^i; m, Σ)]²,   (15)

and then set ψ_t(·) := N(·; m_t, Σ_t) + c(m_t, Σ_t), where c is a positive real-valued function, which ensures that f_t^ψ(x, ·) is a mixture of densities with some nonzero weight associated with the mixture component f(x, ·). This is intended to guard against terms in the asymptotic variance σ_ψ² in (10) being very large or unbounded. We chose (15) for its simplicity and low computational cost, and it provided good performance in our simulations. For the stopping rule, we used k = 5 for the application in Section 5.2, and k = 3 for the applications in Sections 5.3 and 5.4. We observed empirically that the relative standard deviation of the likelihood estimate tended to be close to, and often smaller than, the chosen level for τ. A value of τ = 1 should therefore be sufficient to keep the relative standard deviation around 1 as desired (see, e.g., Doucet et al. 2015; Sherlock et al. 2015). We set τ = 0.5 as a conservative choice for all our simulations apart from the multivariate stochastic volatility model of Section 5.4, where we set τ = 1 to improve speed. We performed the minimization in (15) under the restriction that Σ_t was a diagonal matrix, as this was considerably faster, and preliminary simulations suggested that this was adequate for the examples considered.
We used an effective sample size based resampling scheme (Kong, Liu, and Wong 1994; Liu and Chen 1995), described in Algorithm 5, with a user-specified parameter κ ∈ [0, 1]. The effective sample size is defined as

ESS(W¹, ..., W^N) := (∑_{i=1}^{N} W^i)² / ∑_{i=1}^{N} (W^i)²,

and resampling is carried out at time t only when the effective sample size of the current weights falls below κN; the estimate Z^N is then formed from products of average weights between successive elements of R, where R is the set of "resampling times." This reduces to Algorithm 2 when κ = 1 and to a simple importance sampling algorithm when κ = 0; we use κ = 0.5 in our simulations. The use of adaptive resampling is motivated by the fact that when the effective sample size is large, resampling can be detrimental in terms of the quality of the approximation Z^N.
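A minimal Python sketch of the ESS rule, under the convention that weights are reset after each resampling step; the function names are ours:

```python
import numpy as np

# ESS-based adaptive resampling (a sketch of the rule in Algorithm 5):
# resample only when ESS(W) falls below kappa * N; between resampling
# times the weights accumulate multiplicatively.

def ess(W):
    W = W / W.sum()
    return 1.0 / np.sum(W ** 2)

def maybe_resample(xi, W, kappa, rng):
    N = len(xi)
    if ess(W) < kappa * N:
        anc = rng.choice(N, size=N, p=W / W.sum())
        return xi[anc], np.ones(N) / N, True   # reset weights after resampling
    return xi, W, False
```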
Linear Gaussian Model
A linear Gaussian HMM is defined by Gaussian initial, transition, and observation densities. For this model it is possible to implement the fully adapted APF (FA-APF) and to compute explicitly the marginal likelihood, filtering, and smoothing distributions using the Kalman filter, facilitating comparisons. We emphasize that implementation of the FA-APF is possible only for a restricted class of analytically tractable models, while the iAPF methodology is applicable more generally. Nevertheless, the iAPF exhibited better performance than the FA-APF in our examples.
Relative Variance of Approximations of Z When d is Large
We consider a family of linear Gaussian models with μ = N(·; 0, I_d), f(x, ·) = N(·; Ax, I_d) and g(x, ·) = N(·; x, I_d), where A_{ij} = α^{1+|i−j|} for i, j ∈ {1, ..., d} and some α ∈ (0, 1). Our first comparison is between the relative errors of the approximations Ẑ of L = Z using the iAPF, the BPF, and the FA-APF. We consider configurations with d ∈ {5, 10, 20, 40, 80} and α = 0.42, and we simulated a sequence of T = 100 observations y_{1:T} for each configuration. We ran 1000 replicates of the three algorithms for each configuration and report box plots of the ratio Ẑ/Z in Figure 1.
For all the simulations, we ran the iAPF with N₀ = 1000 starting particles, the BPF with N = 10,000 particles and the FA-APF with N = 5000 particles. The BPF and FA-APF both had slightly larger average computational times than the iAPF with these configurations. The average number of particles for the final iteration of the iAPF was greater than N₀ only in dimension d = 40 (1033). Fixing the dimension d = 10 and the simulated sequence of observations y_{1:T} with α = 0.42, we now consider the variability of the relative error of the estimates of the marginal likelihood of the observations using the iAPF and the BPF for different values of the parameter α ∈ {0.3, 0.32, ..., 0.48, 0.5}. In Figure 2, we report box plots of Ẑ/Z over 1000 replications. For the iAPF, the lengths of the boxes are significantly less variable across the range of values of α. In this case, we used N = 50,000 particles for the BPF, giving a computational time at least five times larger than that of the iAPF. This demonstrates that the approximations of the marginal likelihood L(α) provided by the iAPF are relatively insensitive to small changes in α, in contrast to the BPF. Similar simulations, which we do not report, show that the FA-APF for this problem performs slightly worse than the iAPF at double the computational time.
Particle Marginal Metropolis-Hastings.
We simulated a sequence of T = 100 observations from a five-dimensional linear Gaussian model. Assuming only that A is lower triangular, for identifiability, we performed Bayesian inference for the 15 unknown parameters {A_{i,j} : i, j ∈ {1, ..., 5}, j ≤ i}, assigning each parameter an independent uniform prior on [−5, 5]. From the initial point A₁ = I₅, we ran three Markov chains A_{1:L}^{BPF}, A_{1:L}^{iAPF}, and A_{1:L}^{Kalman} of length L = 300,000 to explore the parameter space, updating one of the 15 parameter components at a time with a Gaussian random walk proposal with variance 0.1. The chains differ in how the acceptance probabilities are computed, and correspond to using unbiased estimates of the marginal likelihood obtained from the BPF, the iAPF, or the Kalman filter, respectively. In the latter case, this corresponds to running a Metropolis-Hastings (MH) chain in which the marginal likelihood is computed exactly. We started every run of the iAPF with N₀ = 500 particles. The resulting average number of particles used to compute the final estimate was 500.2. The number of particles N = 20,000 for the BPF was set to give a greater computational time; in this case A_{1:L}^{BPF} took 50% more time than A_{1:L}^{iAPF} to simulate. In Figure 3, we plot posterior density estimates obtained from the three chains for 3 of the 15 entries of the transition matrix A. The posterior means associated with the entries of the matrix A were fairly close to A itself, the largest discrepancy being around 0.2, and the posterior standard deviations were all around 0.1. A comparison of estimated Markov chain autocorrelations for these same parameters is reported in Figure 4, which indicates little difference between the iAPF-PMMH and Kalman-MH Markov chains, and substantially worse performance for the BPF-PMMH Markov chain. The integrated autocorrelation time of the Markov chains provides a measure of the asymptotic variance of the individual chains' ergodic averages, and in this regard the iAPF-PMMH and Kalman-MH Markov chains were practically indistinguishable, while the BPF-PMMH performed between 3 and 4 times worse, depending on the parameter. The relative improvement of the iAPF over the BPF does seem empirically to depend on the value of δ. In experiments with larger δ, the improvement was still present but less pronounced than for δ = 0.25. We note that in this example, ψ* is outside the class of possible ψ sequences that can be obtained using the iAPF: the approximations in Ψ are functions that are constants plus a multivariate normal density with a diagonal covariance matrix, while the functions in ψ* are multivariate normal densities whose covariance matrices have nonzero, off-diagonal entries.
To compare the efficiency of the iAPF and the BPF within a PMMH algorithm, we analyzed a sequence of T = 945 observations y_{1:T}, which are mean-corrected daily returns computed from weekday close exchange rates r_{1:T+1} for the pound/dollar from 1/10/81 to 28/6/85. These data have been previously analyzed using different approaches, for example, in Harvey, Ruiz, and Shephard (1994) and Kim, Shephard, and Chib (1998). We wish to infer the model parameters θ = (α, σ, β) using a PMMH algorithm and compare the two cases where the marginal likelihood estimates are obtained using the iAPF and the BPF. We placed independent inverse Gamma prior distributions IG(2.5, 0.025) and IG(3, 1) on σ² and β², respectively, and an independent Beta(20, 1.5) prior distribution on the transition coefficient α. We used (α₀, σ₀, β₀) = (0.95, √0.02, 0.5) as the starting point of the three chains: X_{1:L}^{iAPF}, X_{1:L}^{BPF} and X̃_{1:L̃}^{BPF}. All the chains updated one component at a time with a Gaussian random walk proposal with variances (0.02, 0.05, 0.1) for the parameters (α, σ, β). X_{1:L}^{iAPF} has a total length of L = 150,000 and for the estimates of the marginal likelihood that appear in the acceptance probability we use the iAPF with N₀ = 100 starting particles. For X_{1:L}^{BPF} and X̃_{1:L̃}^{BPF} we use BPFs: X_{1:L}^{BPF} is a shorter chain with more particles (L = 150,000 and N = 1000), while X̃_{1:L̃}^{BPF} is a longer chain with fewer particles (L̃ = 1,500,000, N = 100). All chains required similar overall running time to simulate. Figure 5 shows estimated marginal posterior densities for the three parameters using the different chains.
In Table 3, we provide the adjusted sample size of the Markov chains associated with each of the parameters, obtained by dividing the length of the chain by the estimated integrated autocorrelation time associated with each parameter. We can see an improvement using the iAPF, although we note that the BPF-PMMH algorithm appears to be fairly robust to the variability of the marginal likelihood estimates in this particular application.
Since particle filters provide approximations of the marginal likelihood in HMMs, the iAPF can also be used in alternative parameter estimation procedures, such as simulated maximum likelihood (Lerman and Manski 1981;Diggle and Gratton 1984). The use of particle filters for approximate maximum likelihood estimation (see, e.g., Kitagawa 1998;Hürzeler and Künsch 2001) has recently been used to fit macroeconomic models (Fernández-Villaverde and Rubio-Ramírez 2007). In Figure 6 we show the variability of the BPF and iAPF estimates of the marginal likelihood at points in a neighborhood of the approximate MLE of (α, σ, β) = (0.984, 0.145, 0.69). The iAPF with N 0 = 100 particles used 100 particles in the final iteration to compute the likelihood in all simulations, and took slightly more time than the BPF with N = 1000 particles, but far less time than the BPF with N = 10,000 particles. The results indicate that the iAPF estimates are significantly less variable than their BPF counterparts and may therefore be more suitable in simulated maximum likelihood approximations.
Multivariate Stochastic Volatility Model
We consider a version of the multivariate stochastic volatility model defined by f(x, ·) = N(·; m + diag(φ)(x − m), U) and g(x, ·) = N(·; 0, exp(diag(x))), where m, φ ∈ R^d and the covariance matrix U ∈ R^{d×d} are statistical parameters, and the initial density is μ = N(·; m, Ū), where the matrix Ū is the stationary covariance matrix associated with (φ, U). This is the basic MSV model in Chib, Omori, and Asai (2009, Sec. 2), with the exception that we consider a nondiagonal transition covariance matrix U and a diagonal observation matrix.
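A simulator for this model, as reconstructed above, can be sketched as follows; the parameter values are illustrative rather than the paper's posterior estimates, and the stationary covariance Ū is obtained by iterating its fixed-point equation, both our own choices:

```python
import numpy as np

# Simulator for the MSV model as reconstructed above:
#   X_t = m + phi * (X_{t-1} - m) + V_t,  V_t ~ N(0, U)  (elementwise phi),
#   Y_t ~ N(0, diag(exp(X_t))),  X_1 ~ N(m, U_bar).

rng = np.random.default_rng(4)
d = 4
m = np.zeros(d)
phi = 0.95 * np.ones(d)
U = 0.05 * (np.eye(d) + 0.25 * (np.eye(d, k=1) + np.eye(d, k=-1)))  # band diagonal

Phi = np.diag(phi)
U_bar = U.copy()
for _ in range(2000):                 # fixed point of U_bar = Phi U_bar Phi + U
    U_bar = Phi @ U_bar @ Phi + U

def simulate(T):
    x = rng.multivariate_normal(m, U_bar)
    ys = np.empty((T, d))
    for t in range(T):
        ys[t] = rng.normal(scale=np.exp(0.5 * x))   # sd = exp(x/2)
        x = m + phi * (x - m) + rng.multivariate_normal(np.zeros(d), U)
    return ys

print(simulate(5).round(3))
```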
We analyzed two 20-dimensional sequences of observations y_{1:T} and y′_{1:T′}, where T = 102 and T′ = 90. The sequences correspond to the monthly returns for the exchange rate with respect to the US dollar of a range of 20 different international currencies, in the periods 3/2000-8/2008 (y_{1:T}, pre-crisis) and 9/2008-2/2016 (y′_{1:T′}, post-crisis), as reported by the Federal Reserve System (available at http://www.federalreserve.gov/releases/h10/hist/). We infer the model parameters θ = (m, φ, U) using the iAPF to obtain marginal likelihood estimates within a PMMH algorithm. A similar study using a different approach and with a set of six currencies can be found in Liu and West (2001).
The aim of this study is to showcase the potential of the iAPF in a scenario where, due to the relatively high dimensionality of the state space, the BPF systematically fails to provide reasonable marginal likelihood estimates in a feasible computational time. To reduce the dimensionality of the parameter space we consider a band diagonal covariance matrix U with nonzero entries on the main, upper, and lower diagonals. We placed independent inverse Gamma prior distributions with mean 0.2 and unit variance on each entry of the diagonal of U , and independent symmetric triangular prior distributions on [−1, 1] on the correlation coefficients ρ ∈ R 19 corresponding to the upper and lower diagonal entries. We place independent Uniform(0, 1) prior distributions on each component of φ and an improper, constant prior density for m. This results in a 79-dimensional parameter space. As the starting point of the chains, we used φ 0 = 0.95 · 1, diag(U 0 ) = 0.2 · 1 and for the 19 correlation coefficients we set ρ 0 = 0.25 · 1, where 1 denotes a vector of 1s whose length can be determined by context. Each entry of m 0 corresponds to the logarithm of the standard deviation of the observation sequence of the relative currency.
We ran two Markov chains X_{1:L} and X′_{1:L}, corresponding to the data sequences y_{1:T} and y′_{1:T′}, both of them updated one component at a time with a Gaussian random walk proposal with standard deviations (0.2·1, 0.005·1, 0.02·1, 0.02·1) for the parameters (m, φ, diag(U), ρ). The total number of updates for each parameter is L = 12,000, and the iAPF with N₀ = 500 starting particles is used to estimate marginal likelihoods within the PMMH algorithm. In Figure 7, we report the estimated smoothed posterior densities corresponding to the parameters for the Pound Sterling/U.S. Dollar exchange rate series. Most of the posterior densities are different from their respective prior densities, and we also observe qualitative differences between the pre- and post-crisis regimes. For the same parameters, sample sizes adjusted for autocorrelation are reported in Table 4. Considering the high-dimensional state and parameter spaces, these are satisfactory. In the later steps of the PMMH chain, we recorded an average number of iterations for the iAPF of around 5 and an average number of particles in the final ψ-APF of around 502. To compare this with the BPF, we found that with N = 10^7 particles, at a cost of roughly 200 times the iAPF-PMMH per step, a BPF-PMMH Markov chain accepted only 4 out of 1000 proposed moves, each of which attempted to update a single parameter.
The aforementioned qualitative change of regime seems to be evident looking at the difference between the posterior expectations of the parameter m for the post-crisis and the pre-crisis chain, reported in Figure 8. The parameter m can be interpreted as the period average of the mean-reverting latent process of the log-volatilities for the exchange rate series. Positive values of the differences for close to all of the currencies suggest a generally higher volatility during the post-crisis period.
Discussion
In this article, we have presented the iAPF, an offline algorithm that approximates an idealized particle filter, whose marginal likelihood estimates have zero variance. The main idea is to iteratively approximate a particular sequence of functions, and an empirical study with an implementation using parametric optimization for models with Gaussian transitions showed reasonable performance in some regimes for which the BPF was not able to provide adequate approximations. We applied the iAPF to Bayesian parameter estimation in general state-space HMMs by using it as an ingredient in a PMMH Markov chain. It could also conceivably be used in similar, but inexact, noisy Markov chains; Medina-Aguayo, Lee, and Roberts (2015) showed that control on the quality of the marginal likelihood estimates can provide theoretical guarantees on the behavior of the noisy Markov chain. The performance of the iAPF marginal likelihood estimates also suggests they may be useful in simulated maximum likelihood procedures. In our empirical studies, the number of particles used by the iAPF was orders of magnitude smaller than would be required by the BPF for similar approximation accuracy, which may be relevant for models in which space complexity is an issue.
In the context of likelihood estimation, the perspective brought by viewing the design of particle filters as essentially a function approximation problem has the potential to significantly improve the performance of such methods in a variety of settings. There are, however, a number of alternatives to the parametric optimization approach described in Section 5.1, and it would be of particular future interest to investigate more sophisticated schemes for estimating ψ*, that is, specific implementations of Algorithm 3. We have used nonparametric estimates of the sequence ψ* with some success, but the computational cost of the approach was much larger than for the parametric approach. Alternatives to the classes F and Ψ described in Section 3.2 could be obtained using other conjugate families (see, e.g., Vidoni 1999). We also note that although we restricted the matrix Σ_t in (15) to be diagonal in our examples, the resulting iAPF marginal likelihood estimators performed fairly well in some situations where the optimal sequence ψ* contained functions that could not be perfectly approximated using any function in the corresponding class. Finally, the stopping rule in the iAPF, described in Algorithm 4 and which requires multiple independent marginal likelihood estimates, could be replaced with a stopping rule based on the variance estimators proposed in Lee and Whiteley (2015). For simplicity, we have discussed particle filters in which multinomial resampling is used; a variety of other resampling strategies (see Douc, Cappé, and Moulines 2005, for a review) can be used instead.
"Mathematics"
] |
Holliday junction resolution is modulated by archaeal chromatin components in vitro.
The Holliday junction-resolving enzyme Hjc is conserved in the archaea and probably plays a role analogous to that of Escherichia coli RuvC in the pathway of homologous recombination. Hjc specifically recognizes four-way DNA junctions, cleaving them without sequence preference to generate recombinant DNA duplex products. Hjc imposes an X-shaped global conformation on the bound DNA junction and distorts base stacking around the point of cleavage, three nucleotides 3' of the junction center. We show that Hjc is autoinhibitory under single turnover assay conditions and that this can be relieved by the addition of either competitor duplex DNA or the architectural double-stranded DNA-binding protein Sso7d (i.e. by approximating in vivo conditions more closely). Using a combination of isothermal titration calorimetry and fluorescent resonance energy transfer, we demonstrate that multiple Hjc dimers can bind to each synthetic four-way junction and provide evidence for significant distortion of the junction structure at high protein:DNA ratios. Analysis of crystal packing interactions in the crystal structure of Hjc suggests a molecular basis for this autoinhibition. The wider implications of these findings for the quantitative study of DNA-protein interactions is discussed.
Holliday junction-resolving enzymes catalyze a key step in the pathway of homologous recombination, cleaving four-way DNA junctions that link recombination intermediates to release heteroduplex DNA products (reviewed in Refs. 1 and 2). The first junction-resolving enzyme was identified in bacteriophage T4, in which the gene 49 product was shown to cut branched DNA species and play a role in both viral DNA recombination and packaging (3,4). Subsequently, resolving enzymes have been identified in all domains of life, including recently the archaea (5,6), archaeal viruses (7), and pox viruses (8). Recent biochemical evidence suggests that the mammalian resolving enzyme exists in a complex with a junction-specific branch migration apparatus (9), which is analogous to the interaction of the Escherichia coli resolving enzyme RuvC with the RuvAB complex (10). The structures of four junction-resolving enzymes, E. coli RuvC, T4 endonuclease VII, T7 endonuclease I, and archaeal Hjc (reviewed in Ref. 2) have been solved. The structural information, together with detailed sequence comparisons of the known genes, has allowed classification of the resolving enzymes, several of which can be assigned to one of two superfamilies. Thus, the eubacterial, mitochondrial, and pox viral resolving enzymes are grouped in the integrase superfamily, whereas the archaeal and bacteriophage T7 enzymes are members of the nuclease superfamily (11,12).
Despite the large volume of structural and biochemical data available for these enzymes, we do not fully understand how they work at a molecular level. In particular, the mechanism of the molecular recognition of the DNA junction structure by the enzymes is completely unknown. Resolving enzymes such as RuvC, Hjc, and T4 endonuclease VII all have extensive, fairly flat, highly basic surfaces that are presumed to interact with the highly acidic branched DNA substrates and clearly defined active sites positioned to allow phosphodiester bond cleavage on each side of the four-way junction. In addition, all of the enzymes studied to date manipulate the global and local structure of the junction on binding (reviewed in Ref. 2). However, in the absence of the crystal structure of a resolving enzyme-junction complex, the molecular details of such interactions and the nature and extent of DNA distortion in the complex remain obscure.
Kinetic analysis of four-way junction cleavage by resolving enzymes has typically taken the form of pseudo-first order experiments in which the enzyme is present in a molar excess with respect to the junction substrate (13,14). Although these experiments have proven useful in determining the sequence specificity of the resolving enzymes, they clearly do not represent conditions in vivo. Resolving enzymes are generally expressed at very low levels in host cells, and they function in a cellular milieu that contains very high concentrations of nucleic acid and architectural DNA-binding proteins.
The Hjc protein is responsible for the major junction-resolving activity found in the archaea (6) and is probably the cellular resolving enzyme analogous to the RuvC endonuclease in E. coli. Hjc is a dimer of 143-amino acid subunits in Sulfolobus solfataricus with an N-terminal catalytic domain that is homologous to a superfamily of nucleases including the type II restriction enzymes typified by EcoRV. Four catalytic residues conserved between Hjc and EcoRV, Glu-12, Asp-42, Asp-55, and Lys-57, have been shown to be essential for catalytic activity (15,16). The three acidic residues act as ligands for the catalytic metal ions (either magnesium or manganese), whereas Lys-57 is thought to help stabilize the transition state during phosphodiester bond cleavage. Hjc cleaves Holliday junctions on opposing strands three nucleotides 3′ of the point of strand exchange and manipulates the global and local structure of the bound junction into an X shape with distortion of base pairing around the site of cleavage (16). The structures of Hjc from Pyrococcus furiosus and S. solfataricus have been solved recently (17,18) and have confirmed the relationship of the enzyme with the nuclease superfamily.
In this study, we investigate the activity of Sulfolobus Hjc under conditions approximating those found in vivo. In the course of this work, we uncovered an unexpected autoinhibitory feature of Hjc, which we have defined at a molecular level using a variety of biophysical techniques. We find that both double-stranded DNA and the archaeal dsDNA-binding protein Sso7d can ameliorate this inhibitory phenomenon. The results demonstrate that caution should be applied when attempting to draw conclusions about the activity of junction-resolving enzymes based on simplified assay systems.
Expression and Purification of Recombinant Hjc
Recombinant Hjc was expressed in E. coli strain BL21 (DE3) Codon Plus RIL (Stratagene), and the protein was purified as described previously (19). In brief, E. coli lysate containing recombinant Hjc was first subjected to SP-Sepharose High Performance 26/10 column chromatography (Amersham Biosciences, Inc.) using Buffer A (50 mM Tris-HCl, pH 7.5, 1 mM EDTA, 1 mM dithiothreitol) and a linear elution gradient of 0-1 M NaCl. Hjc activity peak fractions were concentrated and loaded onto a 26/70 gel filtration column (Superdex 200 Hi-Load, Amersham Biosciences, Inc.) that was developed with Buffer A containing 300 mM NaCl. Major protein absorbance peak fractions were pooled and shown by SDS-PAGE to contain essentially homogeneous Hjc protein. This enzyme was used for all subsequent analyses. The concentration of Hjc was estimated using the extinction coefficient ε280 (1 mg/ml) = 0.16 calculated from the amino acid composition of the protein.
Purification of Native Sso7d from S. solfataricus
The S. solfataricus P2 biomass was supplied by Dr. Neil Raven, Centre for Applied Microbiology and Research, Porton Down, UK. Cell lysis, centrifugation, and chromatography steps were carried out at 4 °C. 50 g of cells was thawed in 150 ml of lysis buffer and immediately sonicated for 5 × 1 min with cooling. The lysate was centrifuged at 40,000 × g for 30 min. The supernatant was diluted 4-fold with Buffer A and applied to an SP-Sepharose High Performance 26/10 column (Hi-Load, Amersham Biosciences, Inc.) equilibrated with Buffer A. A 500-ml linear gradient comprising 0 to 1000 mM NaCl was used to elute cationic proteins. Fractions containing Sso7d were identified by Western blotting using a polyclonal antibody raised against recombinant Sso7d protein. The relevant fractions were pooled, concentrated, and loaded onto a 26/70 gel filtration column (Superdex 200 Hi-Load, Amersham Biosciences, Inc.) and developed with Buffer A containing 300 mM NaCl. Fractions containing Sso7d were pooled and coincided with a single, essentially homogeneous polypeptide observed by SDS-PAGE. The protein identity was confirmed by matrix-assisted laser desorption/ionization time-of-flight mass spectroscopy. The concentration of Sso7d was estimated using the extinction coefficient ε280 (1 mg/ml) = 1.15 calculated from the amino acid composition of the protein.
Preparation of DNA
Oligonucleotide synthesis and annealing of four-way DNA junctions and dsDNA were carried out as described previously (20) using the sequences below (all written 5′ to 3′). Junction Z28, a fixed junction containing 15-bp arms, was prepared with the following oligonucleotides: b strand, TCCGTCCTAGCAAGGAGTCTGCTACCGGAA; h strand, TTCCGGTAGCAGACTAAAAGGTGGTTGAAT; r strand, ATTCAACCACCTTTTTTTTAACTGCAGCAG; x strand, CTGCTGCAGTTAAAACCTTGCTAGGACGGA. For dsDNA, a 39-bp DNA duplex was used for isothermal titration calorimetry studies and was prepared with the following oligonucleotides: top strand, CAGGAAAAGATGCATCTCATATGACAGAGGTGTTTCTCG; complement, CGAGAAACACCTCTGTCATATGAGATGCATCTTTTCCTG.
Isothermal Titration Calorimetry
ITC experiments were carried out using a VP-ITC device (MicroCal, Northampton, MA). All solutions were degassed. Hjc and DNA samples were extensively dialyzed against 20 mM Tris-HCl buffer, pH 8.0, containing 200 mM NaCl and 15 mM MgCl2. The binding experiments were performed at 25 °C, under which no hydrolysis of the junction by thermophilic Hjc was detected. A 370-μl syringe with stirring at 400 rpm was used to titrate Hjc into a cell containing ~1.4 ml of DNA solution. Each titration consisted of a preliminary 1-μl injection followed by up to 30 subsequent 10-μl injections. Calorimetric data were analyzed using MicroCal ORIGIN software. All measurements of binding parameters presented are the means of at least duplicate experiments.
Assay of Hjc Activity-The specific cleavage of 5′-32P-labeled four-way junction (100 nM) by Hjc was assayed in the presence or absence of 120 µg/ml competitor calf thymus DNA (Sigma) by first forming Hjc-junction complexes in the binding buffer (20 mM Tris-HCl, pH 7.5, 50 mM NaCl) and then initiating hydrolysis by the addition of 15 mM MgCl2 and incubating at 55°C for 1 min. The reactions were stopped by adding a formamide/EDTA loading mix and heating to 95°C for 5 min. Products were analyzed by denaturing gel electrophoresis and phosphorimaging as described previously and quantified by calculating the logarithm of the ratio of total:uncut substrate (13).
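For illustration, the quantitation step reduces to the following arithmetic on phosphorimager counts; the counts shown are invented, and the use of the natural logarithm (rather than log10) is an assumption.

```python
# Hedged sketch of the cleavage quantitation described above.
import math

def cleavage_extent(uncut_counts, product_counts):
    """ln(total/uncut): linear in time for a first-order cleavage reaction."""
    total = uncut_counts + product_counts
    return math.log(total / uncut_counts)

print(cleavage_extent(7000, 3000))  # ln(10000/7000) ~ 0.36
```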
Gel Electrophoretic Retardation Analysis of Hjc-DNA Junction Interactions-5′-32P-labeled four-way junction (100 nM) was incubated with increasing concentrations of Hjc in the binding buffer (20 mM Tris-HCl, pH 7.5, 50 mM NaCl) in the presence or absence of 120 µg/ml competitor calf thymus DNA in 10 µl of total volume. After a 5-min incubation at 20°C, 2 µl of the loading buffer (0.25% bromphenol blue, 0.25% xylene cyanol FF, 15% Ficoll type 400) was added to the samples. Hjc-junction complexes were separated with 5% polyacrylamide gels in Tris borate-EDTA buffer. After electrophoresis, gels were dried on Whatman No. 3MM paper and exposed to x-ray film for documentation or to phosphorimaging screens for quantitation.
Fluorescence Resonance Energy Transfer-FRET data were recorded on an SLM-Aminco 8100 fluorometer and processed as described in Ref. 21.
RESULTS

Relationship of Hjc Concentration and Activity-We have shown previously that Hjc can form multiple complexes with a four-way DNA junction, as judged by electrophoretic retardation analysis, with at least two distinct complexes apparent that may correspond to one and two dimers of Hjc, respectively, bound to each junction (19). In this study, we set out to characterize the activity of the enzyme in more detail and to investigate activity under conditions approaching those in vivo. We first examined the effect of increasing concentrations of Hjc on the junction-resolving activity (Fig. 1). Titration of Hjc against a fixed concentration of 100 nM junction resulted in a biphasic rate curve with maximal activity at 400 nM Hjc dimer. Cleavage activity was strongly inhibited above 400 nM Hjc, falling to 1% of the maximal rate at 2 µM Hjc. Parallel analysis of the enzyme-junction complexes formed was carried out by electrophoretic retardation analysis. Titration of Hjc into the junction resulted in formation of a complex (complex I) corresponding to one dimer of Hjc bound per junction, reaching a maximum level at 400 nM Hjc, in good agreement with the conditions under which maximal enzyme activity was observed. Further increases in the enzyme concentration led to the formation of more highly retarded complexes corresponding to larger protein:DNA ratios. Comparison of the gel electrophoretic retardation and activity data (Fig. 1, A and B) revealed that the initial accumulation of complex I was associated with the greatest junction-resolving activity of Hjc, whereas formation of higher complexes had a detrimental effect on enzyme activity.
Effect of Competitor Duplex DNA on Hjc Activity-We next examined the effect of competitor duplex DNA on Hjc-junction complex formation and catalytic activity. The presence of a large excess of calf thymus dsDNA competitor over the radioactively labeled junction substrate resulted in a reduction, but not abolition, of the binding affinity of Hjc for the junction, as has been observed previously for the resolving enzymes (2) (Fig. 1C). In the presence of 120 µg/ml duplex DNA competitor, the peak activity was shifted to relatively higher concentrations of Hjc (1 µM, Fig. 1D). Again, we observed a good correlation between complex I formation and the specific activity. Comparison of the activity data in the absence (Fig. 1B) and presence (Fig. 1D) of dsDNA demonstrated that the highest activity levels detected for Hjc were found in the presence of competitor DNA. In general, higher catalytic activities were observed over a wider range of Hjc concentrations when competitor DNA was present in the reaction mix. Comparable results were obtained using circular or linearized plasmid DNA as a competitor (data not shown).
Effect of dsDNA-binding Protein Sso7d on Hjc Activity-Although the biochemical study of nucleases and other DNA-modifying enzymes is almost always carried out with "naked" DNA, in vivo the DNA is decorated extensively with architectural DNA-binding proteins that compact the genetic material and regulate its accessibility to other proteins. In eukaryotes, this role is undertaken primarily by the nucleosome, but histone proteins are absent from the crenarchaea (22). Instead, Sulfolobus has an abundant dsDNA-binding protein, Sso7d, that is thought to coat the genomic DNA, providing protection against thermal denaturation (23). The addition of Sso7d protein to cleavage assays with an Hjc concentration of 1 µM resulted in a dramatic stimulation of junction cleavage activity (Fig. 2), with an increase in activity of approximately 2 orders of magnitude at 450 µM Sso7d, the highest concentration tested.
Isothermal Titration Calorimetry Analysis of Hjc-DNA Interactions-ITC allows a quantitative analysis of protein-ligand interactions in solution. We have previously utilized ITC to study the interaction of the yeast junction-resolving enzyme Cce1 with a four-way DNA junction (24). In that case, ITC provided complementary and in some respects superior data for the study of protein-DNA interactions as compared with gel electrophoretic retardation analysis, because ITC allowed the quantitative analysis of weaker protein-DNA interactions at the high magnesium ion concentrations required for maximal endonuclease activity. In the present study, we examined the binding of Hjc to a four-way DNA junction in the presence of magnesium ions (Fig. 3). Interestingly, the titration thermograms displayed a complex shape, indicating that multiple Hjc dimers were binding to the junction. The best fit for the integrated heat data was obtained using a three sequential-binding sites model. Two consecutive endothermic binding events were followed by the exothermic binding of a third Hjc dimer. Using this model, the KD for the first Hjc dimer was 115 ± 25 nM, with subsequent dimers exhibiting lower affinity for the junction. This observation complements the gel electrophoretic data (Fig. 1), suggesting that multiple dimers of Hjc can also bind a single DNA junction in the presence of magnesium ions.
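To make the sequential-sites model concrete, the sketch below computes the populations of junctions carrying 0-3 Hjc dimers from stepwise dissociation constants. The data in this study were fitted with MicroCal ORIGIN, so this is only an illustration: KD(1) is taken from the text, while KD(2) and KD(3) are assumed low-micromolar values, and the junction concentration is motivated by the Fig. 3 legend.

```python
# Species populations for a three-site sequential-binding model (illustrative sketch).
import numpy as np
from scipy.optimize import brentq

KDS = np.array([115e-9, 2e-6, 5e-6])   # stepwise K_D (M); K_D(2), K_D(3) are assumptions
BETA = np.cumprod(1.0 / KDS)           # cumulative association constants beta_i

def species_fractions(L_free):
    """Fractions of junction with 0..3 Hjc dimers bound at free-Hjc conc L_free (M)."""
    terms = np.concatenate(([1.0], BETA * L_free ** np.arange(1, 4)))
    return terms / terms.sum()

def free_ligand(L_total, M_total):
    """Solve the mass balance L_total = L_free + M_total * <n>(L_free)."""
    def balance(L_free):
        n_bound = np.dot(np.arange(4), species_fractions(L_free))
        return L_free + M_total * n_bound - L_total
    return brentq(balance, 0.0, L_total)

# Example: 13.3 uM junction (Fig. 3 legend) at several total Hjc concentrations
for L_tot in (5e-6, 15e-6, 30e-6, 60e-6):
    f = species_fractions(free_ligand(L_tot, 13.3e-6))
    print(f"{L_tot*1e6:5.1f} uM Hjc -> fractions with 0..3 dimers:", np.round(f, 3))
```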
We next examined the binding of Hjc to a double-stranded DNA oligonucleotide. Again, we observed that multiple Hjc dimers were bound to this DNA species (Fig. 3).

FIG. 1. Hjc activity and Hjc-junction complexes as a function of Hjc concentration. Radioactively 5′-32P-labeled four-way DNA junction (100 nM) was incubated with increasing concentrations of Hjc. Free junction and Hjc-junction complexes were separated by electrophoresis in 5% polyacrylamide in TBE buffer. The hydrolysis of the corresponding Hjc-junction complexes was assayed by adding 15 mM MgCl2 to the preformed complexes and incubating the reaction mix at 55°C for 1 min. The cleavage products were analyzed by denaturing gel electrophoresis and phosphorimaging. A, titration of Hjc into the junction in the absence of competitor dsDNA, followed by non-denaturing gel electrophoretic retardation analysis. B, quantitative results of Hjc activity at corresponding concentrations of Hjc and junction. C, titration of Hjc into the junction in the presence of 120 µg/ml competitor calf thymus dsDNA. D, quantitative results of Hjc activity at corresponding concentrations of Hjc and junction.
FIG. 2. Effect of Sso7d on Hjc activity. Radioactively 5′-32P-labeled four-way junction (100 nM) was incubated with a fixed concentration of 1 µM Hjc at 55°C for 1 min in the presence of 15 mM MgCl2, and the cleavage products were analyzed by denaturing gel electrophoresis and phosphorimaging. Under these conditions, autoinhibition by Hjc resulted in negligible quantities of cleaved-junction product. Increasing concentrations of the dsDNA-binding protein Sso7d were included in the incubation, resulting in a progressive stimulation of junction cleavage by Hjc.
However, comparison of the two binding thermograms immediately revealed a marked difference. The first, higher affinity endothermic binding event detected with the Hjc-junction interaction was clearly absent from the Hjc-dsDNA interaction. This difference is presumably due to the specific binding of Hjc to the branch point of the junction in complex I. The binding of the higher complexes of Hjc to the junction is reminiscent of the Hjc-dsDNA interactions, suggesting that the higher complexes of Hjc are accommodated on the double-stranded arms of the junction.
FRET Studies Reveal Distinct Junction Conformations for Different Hjc Complexes-We have shown previously, by both gel electrophoretic retardation analysis (16) and FRET (21), that the conformation of four-way DNA junctions is significantly altered upon the binding of one dimer of Hjc. A fixed four-way junction (Junction 3) in complex with Hjc adopted an X-shaped, 2-fold symmetric structure with the B/H and R/X arms subtending acute angles (Fig. 4A). In the present study, we addressed the question of whether higher complexes can further distort the junction conformation. We therefore repeated the FRET experiments in the presence of a large molar excess of Hjc dimer over the DNA junction. The data obtained at Hjc concentrations higher than those of the junction displayed striking differences from those obtained under equimolar conditions (Fig. 4). In particular, the significant increase in FRET for the BH and RX vectors indicated that these arms were drawn closer together in the higher Hjc complexes. The fluorescence anisotropy of fluorescein was unchanged in the presence of a large excess of Hjc, indicating that the mobility of the donor was unaffected by Hjc binding and suggesting that the increase in FRET efficiency was likely due to a shortening of the end-to-end distances.
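The link between a FRET increase and shortened end-to-end distances follows from the Förster relation; a small sketch (the R0 of 56 Å is an assumed value for a fluorescein-based pair, not taken from the paper):

```python
# Illustrative Forster relation (not the authors' processing pipeline): an increase
# in FRET efficiency E maps to a shorter donor-acceptor distance R for a given R0.
def distance_from_fret(E, R0=56.0):
    """R = R0 * ((1 - E) / E)**(1/6), in the same units as R0 (here angstroms)."""
    return R0 * ((1.0 - E) / E) ** (1.0 / 6.0)

for E in (0.2, 0.4, 0.6):
    print(E, round(distance_from_fret(E), 1))  # higher E -> shorter end-to-end distance
```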
A Molecular Explanation for Autoinhibition by Hjc-Similarities in the crystal packing of two crystal forms of S. solfataricus Hjc (17) and comparison with the P. furiosus Hjc structure (18) offer a plausible explanation for the observed concentration-dependent inactivation. Both hexagonal and cubic crystal forms of Hjc contain similar crystal contacts involving the loops between strands D-E and H-I (Fig. 5). These strands form a pair of pincers that interact with the equivalent residues from a partner subunit. This part of the molecule comprises the region of largest deviation between the Sulfolobus and Pyrococcus Hjc structures. Furthermore, the residue at which the deviation begins, at the N-terminal end of strand D, is the catalytically implicated Lys-57. It is conceivable that this interaction is responsible for the autoinactivation of Hjc at elevated concentrations. The observations that the same fold is observed in two unrelated crystal forms, that the relevant loops gain secondary structure, and that one of the major components of the interaction is an otherwise exposed hydrophobic residue, Ile-62, together suggest that this is a conserved rather than a fortuitous interaction.

DISCUSSION

We have shown that several dimers of the Holliday junction-resolving enzyme Hjc can bind to a single synthetic four-way junction in vitro. In addition to targeting the junction specifically to form an enzyme-substrate complex, further molecules of Hjc can associate, probably through a combination of DNA-protein and protein-protein interactions. These higher order complexes are inhibitory to DNA cleavage, possibly because of the extreme distortion of the four-way junction conformation (Fig. 4). Inhibition can be relieved either by adding duplex DNA competitor or by the dsDNA-binding protein Sso7d. At a molecular level, Sso7d probably competes with extra molecules of Hjc for binding sites on the duplex DNA arms, thus preventing the formation of inhibitory complexes, whereas competitor duplex DNA may "soak up" the extra Hjc molecules that would otherwise bind the duplex arms of the junction. We do not see a synergistic effect when both duplex DNA and Sso7d protein are added to Hjc cleavage reactions, which makes sense if both relieve Hjc inhibition through essentially the same mechanism.
The crystal structure of Sulfolobus Hjc gives a strong indication as to how Hjc dimers might interact with one another through a pair of interlocking loops to generate larger complexes. What we do not yet know is whether this secondary interface has a functional role. It may, for example, represent a means by which the nuclease activity of Hjc is repressed in the absence of Holliday junctions to ensure that a nonspecific cleavage of phosphodiester bonds does not occur. If so, this mechanism is presumably not shared by Pyrococcus Hjc because the interaction surface does not exist for that enzyme. Mutagenesis studies are under way to assess the importance of the secondary interface for the structure and function of Sulfolobus Hjc.
In conclusion, the reconstitution of an approximation of archaeal chromatin in vitro has a significant influence on the activity of the Hjc enzyme. These observations highlight the complexity of DNA-protein interactions in vivo, in which DNA is tightly packaged in chromatin and in which structures such as Holliday junctions must constitute a tiny proportion of the total nucleic acid present. Similar situations may be encountered for other enzymes that recognize a particular DNA structure or sequence during DNA replication, transcription, or repair, and there is evidence that chromatin structure can modulate DNA repair pathways both in vitro and in vivo (reviewed in Ref. 25).

FIG. 3. Isothermal titration calorimetry of Hjc-junction and Hjc-dsDNA interactions. A, titration of Hjc into the DNA junction. Upper panel, raw data for sequential 10-µl injections of 266 µM Hjc into a solution containing 13.3 µM four-way DNA junction. Lower panel, integrated heat data with a theoretical fit to a three sequential-binding sites model, yielding an initial high affinity binding event (KD(1) = 115 ± 25 nM) followed by two weaker affinity binding events (KD values in the low micromolar range). B, titration of Hjc into double-stranded DNA. Upper panel, raw data for sequential 10-µl injections of 156 µM Hjc into a solution containing 5.2 µM dsDNA. Lower panel, integrated heat data with a theoretical fit to a three sequential-binding sites model yielding dissociation constants in the low micromolar range. Thus, the ITC data indicate that multiple Hjc dimers can bind to a four-way DNA junction or a dsDNA molecule in the presence of magnesium ions. However, the first, higher affinity endothermic binding event characteristic of the Hjc-junction interaction (A) is absent with Hjc-dsDNA (B).

FIG. 5. Backbone traces for Sulfolobus Hjc (blue) and Pyrococcus Hjc (green) in the region proposed to be involved in autoinactivation. A symmetry-related Sulfolobus Hjc molecule (gray) is shown, and the approximate position of the crystallographic 2-fold axis is marked (ellipse). Ile-62 of both molecules is shown in a stick representation. Single uppercase letters mark the relevant secondary structure elements in Sulfolobus Hjc. Inset, relative location in the global structure. This figure was prepared using Molscript (26) and Raster3d (27). | 5,439 | 2002-01-25T00:00:00.000 | [
"Biology"
] |
Cobalt–Iron–Phosphate Hydrogen Evolution Reaction Electrocatalyst for Solar-Driven Alkaline Seawater Electrolyzer
Seawater splitting represents an inexpensive and attractive route for producing hydrogen, which does not require a desalination process. Highly active and durable electrocatalysts are required to sustain seawater splitting. Herein we report the phosphidation-based synthesis of a cobalt–iron–phosphate ((Co,Fe)PO4) electrocatalyst for the hydrogen evolution reaction (HER) toward alkaline seawater splitting. (Co,Fe)PO4 demonstrates high HER activity and durability in alkaline natural seawater (1 M KOH + seawater), delivering a current density of 10 mA/cm2 at an overpotential of 137 mV. Furthermore, the measured potential of the (Co,Fe)PO4 electrocatalyst at a constant current density of −100 mA/cm2 remains very stable, without noticeable degradation, for 72 h of continuous operation in alkaline natural seawater, demonstrating its suitability for seawater applications. Furthermore, an alkaline seawater electrolyzer employing the non-precious-metal catalysts demonstrates better performance (1.625 V at 10 mA/cm2) than one employing precious-metal catalysts (1.653 V at 10 mA/cm2). The non-precious-metal-based alkaline seawater electrolyzer exhibits a high solar-to-hydrogen (STH) efficiency (12.8%) when driven by a commercial silicon solar cell.
Introduction
Hydrogen is a next-generation energy source that can help solve environmental pollution and the energy-depletion crisis [1,2]. Among the various methods for producing hydrogen, electrochemical water splitting represents an ecofriendly, sustainable, and efficient route. To electrochemically produce hydrogen, enormous efforts have been devoted to the development of highly active electrocatalysts for water splitting in acidic or alkaline electrolytes based on high-purity fresh water. However, with the increasing demand for high-purity fresh water that would accompany large-scale water electrolysis, challenges such as water distribution must be considered [3,4]. Thus, the electrolysis of seawater is a promising alternative for mitigating the challenges accompanying the supply of high-purity fresh water. Seawater is the most abundant water resource on Earth and can be employed as an inexpensive electrolyte for electrochemical water splitting [5]. However, despite these advantages, side reactions caused by the chloride ions (Cl−) in seawater hinder seawater electrolysis [6–9]. Recently, it has been reported that the selectivity of the oxygen evolution reaction (OER) can be improved by changing the thermodynamic potential of the chlorine evolution reaction (ClER) via adjustment of the pH of the seawater; thus, many ongoing studies have focused on developing catalysts for the OER [10–12]. However, since the hydrogen evolution reaction (HER) is the other half-reaction of overall water splitting, highly active and durable HER electrocatalysts are equally essential for seawater electrolysis.

2.2. Synthesis of (Co,Fe)3O4 and (Co,Fe)PO4

The prepared (Co,Fe)OOH was converted into (Co,Fe)3O4 via calcination in air at 500 °C for 2 h at a heating rate of 5 °C/min in a tube furnace. The sample obtained after cooling to room temperature was named (Co,Fe)3O4.
The (Co,Fe)PO4 sample was synthesized via a phosphidation process. Briefly, (Co,Fe)3O4 and 2.0 g of sodium hypophosphite (NaH2PO2, Sigma-Aldrich Inc., St. Louis, MO, USA) were placed in two separate ceramic boats in a tube furnace, with NaH2PO2 on the upstream side and (Co,Fe)3O4 on the downstream side of the Ar gas flow. Subsequently, the tube furnace was heated to 500 °C for 2 h in an Ar atmosphere at a heating rate of 5 °C/min and then cooled to room temperature. The sample obtained via the phosphidation of (Co,Fe)3O4 for 2 h was named (Co,Fe)PO4.
Characterization of Physical Properties
X-ray diffraction (XRD) patterns were recorded on an X-ray diffractometer (Ultima IV, Rigaku, Tokyo, Japan) employing a Cu-Kα radiation source over the 2θ range of 10°–90° at a scan rate of 2°/min. The surface morphologies of the samples were examined by field-emission scanning electron microscopy (FE-SEM; MIRA 3, TESCAN, Brno, Czechia). FE-transmission electron microscopy (FE-TEM), high-resolution TEM (HR-TEM), selected-area electron diffraction (SAED), and energy-dispersive X-ray spectroscopy (EDS) elemental mapping were performed on a TALOS F200X (Thermo Fisher Scientific, Waltham, USA). Further, the chemical states were investigated by X-ray photoelectron spectroscopy (XPS; K-Alpha+ XPS System, Thermo Fisher Scientific, Waltham, USA).
Electrochemical Characterization
The electrochemical properties of the electrocatalysts were investigated using a potentiostat (VersaSTAT 4, AMETEK, Oak Ridge, USA) in a three-electrode cell at room temperature. The synthesized (Co,Fe)OOH, (Co,Fe)3O4, and (Co,Fe)PO4 electrocatalysts were employed as the working electrodes with dimensions of 1 cm × 1 cm. Hg/HgO (1 M KOH) and a graphite rod were employed as the reference and counter electrodes for the HER, respectively. The polarization curves for the HER activity were recorded via linear sweep voltammetry (LSV) at a scan rate of 1 mV/s in N2-purged 1 M KOH, 1 M KOH + 0.5 M NaCl, and 1 M KOH + seawater electrolytes. Real seawater was collected from the sea at Haeundae (Busan, Korea). The recorded potentials were converted to the reversible hydrogen electrode (RHE) scale according to the Nernst equation (ERHE = EHg/HgO + 0.0591 × pH + 0.098). All the electrochemical tests were performed with 90% iR compensation, and the Tafel slopes were derived from the corresponding polarization curves. Electrochemical impedance spectroscopy (EIS) was performed at an overpotential of −0.25 V vs. RHE for the HER in the frequency range from 100 kHz to 0.01 Hz with an amplitude of 10 mV. The double-layer capacitance (Cdl) was estimated in the 1 M KOH solution via cyclic voltammetry (CV) at different scan rates (10, 20, 40, 80, and 160 mV/s) in the non-faradaic region. The durability tests for the HER were performed at a constant current density of −100 mA/cm2 for 72 h. The Faradaic efficiency (FE) was determined via the water displacement method; the volume of H2 generated was measured by collecting the H2 gas at a constant current density of 50 mA/cm2. To prepare the Pt/C noble metal electrocatalyst for comparison, an ink was fabricated by mixing commercial Pt/C powder (20 mg), 5 wt.% Nafion solution (100 µL), and ethanol (900 µL). The ink was coated onto the surface of an iron foam (1 cm × 1 cm) after ultrasonic dispersion for 15 min. The loading mass of Pt/C was ~3 mg/cm2.
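The RHE conversion and iR correction above amount to the following arithmetic; this is a hedged sketch whose function names and example numbers are illustrative, not from the paper.

```python
# Convert Hg/HgO potentials to the RHE scale via the Nernst relation quoted above,
# and apply 90% iR compensation (sign convention: cathodic currents are negative).
def to_rhe(e_hg_hgo_volts, ph, e_ref_offset=0.098):
    """E_RHE = E_Hg/HgO + 0.0591 * pH + 0.098 (volts)."""
    return e_hg_hgo_volts + 0.0591 * ph + e_ref_offset

def ir_corrected(e_volts, current_amps, r_solution_ohms, compensation=0.90):
    """Subtract the compensated ohmic drop from the measured potential."""
    return e_volts - compensation * current_amps * r_solution_ohms

# Example: -1.05 V vs. Hg/HgO at pH 14 -> about -0.125 V vs. RHE
e = to_rhe(-1.05, 14.0)
print(ir_corrected(e, -0.010, 0.6))  # a -10 mA current with Rs = 0.6 ohm shifts it slightly positive
```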
Results and Discussion

Figure 1 shows the schematic for synthesizing the (Co,Fe)PO4 electrocatalysts. Firstly, (Co,Fe)OOH was directly synthesized on the iron foam via surface corrosion in a CoCl2 aqueous solution at room temperature. The prepared (Co,Fe)OOH was converted into (Co,Fe)3O4 through calcination, after which the nanoneedle-shaped (Co,Fe)PO4 was synthesized through phosphidation. To determine the crystalline structures of the synthesized (Co,Fe)OOH, (Co,Fe)3O4, and (Co,Fe)PO4 electrocatalysts, XRD patterns were obtained (Figure S1). The diffraction peaks of the (Co,Fe)OOH sample appeared at 2θ = 27.0°, 36.4°, 46.9°, 60.8°, and 79.6° and were indexed to the (021), (130), (150), (132), and (202) planes, respectively, of iron oxyhydroxide (FeOOH, JCPDS # 01-073-2326). The diffraction peaks of the (Co,Fe)3O4 sample at 2θ = 18.3°, 30.1°, 35.5°, 37.1°, 43.1°, 57.0°, and 62.6° were indexed to the (111), (220), (311), (222), (400), (511), and (440) planes, respectively, of iron oxide (Fe3O4, JCPDS # 01-075-0033). For the (Co,Fe)PO4 sample, however, both iron phosphate and iron oxide phases were discerned, corresponding to FePO4 (JCPDS # 00-029-0715) and Fe3O4 (JCPDS # 01-075-0033). This result indicated a hybrid structure of (Co,Fe)3O4 and (Co,Fe)PO4. The Fe3O4 peaks were almost identical to those of the (Co,Fe)3O4 sample, and the peaks from the (100) and (102) planes of FePO4 were observed at 20.3° and 25.8°, respectively.

Generally, chemical transformation reactions, such as sulfurization, phosphidation, and selenization, occur mainly at the surface [45–47]. Therefore, after phosphidation, the outer region of (Co,Fe)3O4 was converted into (Co,Fe)PO4, while the inner region of (Co,Fe)3O4 did not participate in the chemical transformation reaction [48,49].

The surface morphologies of (Co,Fe)OOH, (Co,Fe)3O4, and (Co,Fe)PO4 were observed via FE-SEM. (Co,Fe)OOH exhibited thin nanosheets (Figure S2), and (Co,Fe)3O4, which was obtained by calcining (Co,Fe)OOH, exhibited a nanoneedle morphology (Figure S3). Interestingly, (Co,Fe)PO4 and (Co,Fe)3O4 exhibited almost the same surface morphologies even after phosphidation (Figure 2a,b). In particular, the nanoneedle shape can increase the concentration of reactants at the active sites and enhance the local electric fields that promote intrinsic catalytic activity [50]. Therefore, the shape of (Co,Fe)PO4 is suitable for electrochemical water splitting.
The TEM images were obtained to confirm the surface morphology and phase information. (Co,Fe)OOH exhibited a nanosheet morphology (Figure S4a). After calcination, (Co,Fe)3O4 exhibiting a nanoneedle shape was obtained (Figure S5a), owing to the escape of water molecules from (Co,Fe)OOH during calcination. Interestingly, the nanoneedle shape was well maintained after phosphidation (Figure 2c). Furthermore, the phase information was obtained from the SAED patterns. The ring patterns of (Co,Fe)OOH were indexed to the (021), (130), (150), and (202) reflections of FeOOH (inset of Figure S4a). Additionally, the ring patterns of (Co,Fe)3O4 were indexed to the (111), (220), (311), and (222) reflections of Fe3O4 (inset of Figure S5a). Further, the elemental distributions of (Co,Fe)OOH and (Co,Fe)3O4 were uniform (Figures S4b and S5b). The lattice fringes and ring pattern of (Co,Fe)PO4 exhibited both Fe3O4 and FePO4 patterns, consistent with the XRD results (Figure 2d–g). The EDS mapping of (Co,Fe)PO4 confirmed that each element was uniformly distributed therein (Figure 2h). The EDX spectrum of the collected area is shown in Figure S7. Interestingly, the high-magnification TEM-EDS mapping images revealed that elemental P was mainly distributed in the outer region, whereas elemental Co and Fe were mainly distributed in the inner region. Additionally, elemental O was uniformly distributed in both regions (Figure S6). These results indicated that the chemical transformation reaction proceeded at the surface. XPS analysis was performed to investigate the surface chemical states of (Co,Fe)3O4 and (Co,Fe)PO4 (Figure 3). Figure 3a shows the full XPS survey spectra of (Co,Fe)3O4 and (Co,Fe)PO4, which clearly confirmed the existence of Co, Fe, P, and O. Figure 3b–e shows the HR-XPS profiles of Co, Fe, P, and O.
Notably, the binding energies of Co 2p and Fe 2p shifted to higher values after phosphidation. Additionally, the binding energy of O 1s shifted to a higher value. These observations indicated that electrons were transferred from (Co,Fe)3O4 to (Co,Fe)PO4 in the hybrid (Co,Fe)3O4/(Co,Fe)PO4 structure [51]. This changed local charge-density distribution was expected to reduce the energy barrier of the HER, thus facilitating the adsorption and desorption processes between the reactant and product molecules [52–55].
LSV was performed to measure the HER activity in a 1 M KOH solution (Figure 4a). For comparison, Pt/C, which is a benchmark precious metal electrocatalyst for the HER, was tested; it exhibited a low overpotential of 48 mV at −10 mA/cm2. (Co,Fe)OOH and (Co,Fe)3O4 exhibited overpotentials of 215 and 191 mV at −10 mA/cm2, respectively. Interestingly, (Co,Fe)PO4, obtained through phosphidation, exhibited a significantly reduced overpotential (122 mV at −10 mA/cm2). Although the overpotential of (Co,Fe)PO4 was higher than that of Pt/C, it outperformed Pt/C at high current densities.
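Reading an overpotential at a fixed current density off an LSV trace is a simple interpolation; the following sketch uses synthetic stand-in data, not the Figure 4a measurements.

```python
# Illustrative only: interpolate the overpotential at |j| = 10 mA/cm^2 from an LSV trace.
import numpy as np

eta_mV = np.array([-80, -100, -120, -140, -160])  # potential vs. RHE (mV), synthetic
j_mA = np.array([2.5, 5.0, 10.0, 20.0, 40.0])     # |current density| (mA/cm^2), synthetic

print(np.interp(10.0, j_mA, eta_mV))              # -> -120 mV, i.e., eta_10 = 120 mV
```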
This result is attributable to the nanoneedle shape, which increased the concentration of the reactant at the active sites and concurrently enhanced the local electric field [50]. Tafel plots were calculated to elucidate the electrocatalytic kinetics. Figure 4b shows the Tafel slopes derived from the HER polarization curves. (Co,Fe)PO4 displayed a lower Tafel slope (−71 mV/dec) than (Co,Fe)3O4 (−77 mV/dec), (Co,Fe)OOH (−85 mV/dec), and the bare iron foam (−111 mV/dec). These results indicate that (Co,Fe)PO4 exhibited faster reaction kinetics for the HER. Generally, the HER proceeds via two different reaction routes: the Volmer-Heyrovsky and Volmer-Tafel mechanisms [56,57]. Considering its Tafel slope, it was inferred that (Co,Fe)PO4 followed the Volmer-Heyrovsky mechanism [58]. The electrochemically active surface area (ECSA) was estimated employing the Cdl derived via CV in the non-Faradaic region (Figures S8 and S9). (Co,Fe)PO4 and (Co,Fe)OOH exhibited the highest and smallest Cdl values, respectively, indicating that (Co,Fe)PO4 exhibited the largest ECSA. Since the ECSA is directly proportional to the number of active sites, as well as to the efficiency of mass and charge transport, the largest ECSA of (Co,Fe)PO4 indicated that it possessed the most active sites and the most effective mass- and charge-transport capabilities, which imparted it with the best HER activity [59]. EIS was performed to confirm the charge-transfer resistances of (Co,Fe)OOH, (Co,Fe)3O4, and (Co,Fe)PO4. Figure 4c shows the Nyquist plots, which were fitted to the inserted equivalent-circuit model, where Rs is the solution resistance and Rct is the charge-transfer resistance [60]. (Co,Fe)PO4 exhibited the smallest semicircular diameter (Rct = 0.60 Ω), indicating the lowest Rct compared with those of (Co,Fe)3O4 (Rct = 1.44 Ω) and (Co,Fe)OOH (Rct = 1.89 Ω). To confirm the catalytic activity for the HER in alkaline seawater, LSV curves were measured in different electrolytes (Figure 4d): alkaline solution (1 M KOH), artificial alkaline seawater (1 M KOH + 0.5 M NaCl), and real alkaline seawater (1 M KOH + seawater). The overpotentials of (Co,Fe)PO4 were 134 and 137 mV at a current density of 10 mA/cm2 in the 1 M KOH + 0.5 M NaCl and 1 M KOH + seawater electrolytes, respectively. The HER activities of (Co,Fe)PO4 in 1 M KOH + 0.5 M NaCl and 1 M KOH + seawater were slightly lower than that in 1 M KOH. In seawater environments, including real alkaline seawater, the HER activity is reduced owing to the blocking of active sites by Mg(OH)2 or Ca(OH)2 precipitates [61]. Furthermore, impurities, such as bacteria, in the seawater interfered with the electrochemical reaction [6]. Compared with Pt/C, (Co,Fe)PO4 exhibited better HER activity at high current densities in real alkaline seawater as well as in the 1 M KOH solution (Figure 4e). The FE was measured by collecting the generated H2 gas via the water displacement method at a constant current density of −50 mA/cm2 (Figure 4f). The FEs of (Co,Fe)PO4 in 1 M KOH, 1 M KOH + 0.5 M NaCl, and 1 M KOH + seawater remained 98.6%, 96.5%, and 96.3%, respectively, after 60 min, indicating that most of the electrons that participated in the reaction were consumed by the HER. In addition to catalytic activity, durability is an essential factor for evaluating the performance of catalysts in practical applications [62,63].
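As a hedged illustration of the Tafel analysis above, the slope is the linear-fit coefficient of potential against log10 of current density in the kinetically controlled region; the data below are synthetic, chosen only to give a slope of the right general magnitude.

```python
# Illustrative only: extract a Tafel slope (mV/dec) from synthetic polarization data.
import numpy as np

eta_mV = np.array([-100, -120, -140, -160, -180])  # overpotential (mV), assumed linear region
j_mA = np.array([2.0, 4.0, 8.0, 16.0, 32.0])       # |cathodic current density| (mA/cm^2)

slope, intercept = np.polyfit(np.log10(j_mA), eta_mV, 1)
print(f"Tafel slope: {slope:.0f} mV/dec")          # -> about -66 mV/dec by construction
```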
The long-term stability of (Co,Fe)PO4 for the HER was tested by measuring the potential in the different electrolytes for over 72 h at a constant current density of −100 mA/cm2 (Figure 4g–i). The measured potential remained highly stable during continuous operation in all electrolytes (no noticeable deterioration was observed), demonstrating excellent HER durability.

Regarding full-cell applications, a two-electrode alkaline water electrolyzer, assembled with (Co,Fe)PO4 and NiFeOOH as the cathode and anode, respectively, was set up for overall seawater splitting employing alkaline natural seawater (1 M KOH + seawater) (Figure 5a). NiFeOOH, which is known as one of the best OER catalysts, was prepared following a reported method [64], and the polarization curve of its OER activity is shown in Figure S10. To avoid interference from the oxidation current, the cell voltage at 10 mA/cm2 was measured via reverse-swept CV [65].
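The Faradaic-efficiency numbers above (and the full-cell FEs reported below) follow from Faraday's law plus the ideal-gas molar volume; a sketch with an invented collected volume and assumed ambient conditions:

```python
# Hedged sketch of the Faradaic-efficiency arithmetic for the water-displacement method.
F = 96485.0   # C/mol
VM = 24.465   # L/mol, molar volume of an ideal gas at 25 C, 1 atm (assumed conditions)

def faradaic_efficiency_h2(current_A, time_s, measured_mL):
    theoretical_mol = current_A * time_s / (2 * F)   # 2 electrons per H2 molecule
    theoretical_mL = theoretical_mol * VM * 1000.0
    return measured_mL / theoretical_mL

print(faradaic_efficiency_h2(0.050, 3600, 22.0))     # ~0.965 for a 22.0 mL collection
```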
Interestingly, Figure 5b shows that the NiFeOOH//(Co,Fe)PO4 electrolyzer exhibited excellent activity in this two-electrode system for overall seawater splitting in 1 M KOH + seawater. The electrolyzer required low voltages of 1.625 V (η = 395 mV) at 10 mA/cm2, 1.749 V (η = 519 mV) at 50 mA/cm2, and 1.801 V (η = 571 mV) at 100 mA/cm2 in 1 M KOH + seawater, demonstrating better overall water-splitting performance than the IrO2//Pt/C precious metal electrolyzer in both 1 M KOH (Figure S11) and 1 M KOH + seawater (Figure 5b). The performance of the NiFeOOH//(Co,Fe)PO4 electrolyzer in 1 M KOH + seawater was comparable with, or outperformed, recently reported transition-metal-based electrolyzers (Figure 5d). The FE of the NiFeOOH//(Co,Fe)PO4 alkaline water electrolyzer was calculated by collecting the O2 and H2 gases generated for 60 min at each electrode at a constant current density of 50 mA/cm2 in 1 M KOH + seawater (Figure 5c). The measured FEs demonstrated high energy conversion rates of 97.4% and 99.0% for the HER and OER, respectively, in 1 M KOH + seawater. Moreover, the NiFeOOH//(Co,Fe)PO4 electrolyzer exhibited excellent durability in 1 M KOH + seawater: the measured voltage at a constant current density of +100 mA/cm2 remained very stable, without any noticeable deterioration, for 50 h (Figure 5e). A durability test conducted in the 1 M KOH electrolyte at the same constant current density (+100 mA/cm2) for 50 h further confirmed the high stability (Figure S12). These results demonstrate that the NiFeOOH//(Co,Fe)PO4 alkaline water electrolyzer has high potential for application as a high-efficiency, durable seawater electrolyzer in natural seawater environments. To confirm whether the morphology and phase changed, an SEM image and XRD patterns acquired after the durability test are presented in Figure S13. The surface morphology was well maintained, and the XRD pattern was almost identical to that of (Co,Fe)PO4 before the durability test, indicating that the morphology and crystal structure of (Co,Fe)PO4 were retained.

Furthermore, driving the electrolysis with natural solar power rather than an artificial current is an ecofriendly and attractive way to reduce the cost of hydrogen production. Thus, the NiFeOOH//(Co,Fe)PO4 seawater electrolyzer was combined with a commercial silicon solar cell to set up a photo-assisted water-splitting system (Figure 5f), after which the overall seawater-splitting performance was evaluated in the 1 M KOH + seawater electrolyte under natural sunlight. Figure 5g shows the J-V curve of the commercial silicon solar cell; the solar-to-hydrogen (STH) efficiency was calculated from the intersection of the power curve of the solar cell and the polarization curve of the electrolyzer [8], yielding an STH of 12.8%. When this photo-assisted seawater-splitting device was driven under natural sunlight, the continuous release of H2 and O2 bubbles from both electrodes was clearly observed, confirming the successful production of H2 (inset of Figure 5e). Therefore, the photo-assisted seawater-splitting system developed in this study could be applied to cost-effective hydrogen production in the seawater-splitting industry.
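The STH calculation reduces to the ratio of the hydrogen chemical power (1.23 V × operating current density) to the incident solar power. In the sketch below, the 100 mW/cm2 illumination and the ~10.4 mA/cm2 operating point are assumed values consistent with the reported 12.8%, not numbers read from Figure 5g.

```python
# Sketch of the STH arithmetic at the solar-cell/electrolyzer operating point.
def sth_efficiency(j_op_mA_cm2, p_in_mW_cm2=100.0):
    """STH = (1.23 V * operating current density) / incident solar power density."""
    return 1.23 * j_op_mA_cm2 / p_in_mW_cm2

print(sth_efficiency(10.4))  # ~0.128, i.e., 12.8% STH at an assumed ~10.4 mA/cm^2
```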
Conclusions
In summary, a non-precious-metal catalyst, (Co,Fe)PO4, was developed as an HER electrocatalyst for alkaline seawater electrolysis. (Co,Fe)PO4 demonstrated impressive HER activity, with a low overpotential of 137 mV at −10 mA/cm2 in 1 M KOH + seawater, as well as excellent durability. The nanoneedle shape of (Co,Fe)PO4 enhanced the local electric field, and its electronic structure, which was modified via phosphidation, enhanced the HER activity. The assembled seawater electrolyzer employing the non-precious-metal catalysts delivered excellent performance (1.625 V in 1 M KOH + seawater), surpassing that of precious-metal-based electrolyzers. Further, the combination of the non-precious-metal-based electrolyzer with a commercial silicon solar cell successfully generated H2 gas under natural sunlight in alkaline natural seawater. This study demonstrates that non-precious-metal-based electrolyzers can outperform precious-metal-based ones, indicating that cost-effective hydrogen production without an artificial current source is feasible with commercial silicon solar cells.
Conflicts of Interest:
The authors declare no conflict of interest. | 6,713 | 2021-11-01T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
The Emerging Neurobiology of Mindfulness and Emotion Processing
Mindfulness is associated with reduced negative affective states, increased positive affective states, and reduced clinical affective symptomatology (e.g., depression, anxiety) in previous studies. This chapter examines an emerging body of fMRI and EEG research exploring how mindfulness alters neurobiological emotion processing systems. We examine how dispositional (trait) mindfulness and how adopting a mindful attentional stance (after varying levels of mindfulness training) relate to changes in neural responses to affective stimuli. Evidence suggests mindfulness-related changes in a ventral affective processing network associated with core affect, a dorsal processing network associated with making attributions and appraisals of one’s affective experience, and regulatory networks involved in modulating affective processes. These neural effects may underlie the previously observed relationships between mindfulness and changes in reported emotion processing and reactivity. Findings are discussed in light of existing neurobiological models of emotion and we describe important questions for the field in the coming years.
Mindfulness is commonly understood as both a dispositional quality that all individuals possess to varying degrees and an attentional state that can be fostered through training (Baer, 2003; Brown & Ryan, 2003). Trait mindfulness has been measured by a variety of validated self-report questionnaires. In contrast to trait mindfulness, mindfulness training research entails training meditation naïve participants to adopt a mindful attentional stance while completing emotion tasks, or examining how brief (4 days to 10 weeks) mindfulness meditation training impacts emotional responding. Finally, mindfulness training effects have been explored in studies that compare advanced meditators (with over 10 years of daily meditation practice, on average) to matched control participants. For a recent review of scientific measures and manipulations of mindfulness, see Lindsay et al. (in press).
For purposes of this chapter, we describe studies that include a measure or manipulation of mindfulness and a measure of brain activity while participants complete affective processing tasks. Accordingly, we first describe research relating dispositional (trait) mindfulness with neural measures of emotion processing. We then describe research exploring how a mindful attentional stance can impact neural markers of emotion processing. In the latter case, we order the sections by the amount of mindfulness training received: adopting a mindful attentional stance in meditation naïve participants, brief mindfulness meditation training, and in mindfulness meditation-trained experts.
Trait Mindfulness and Emotion Processing
Self-report measures of trait mindfulness have provided opportunities for investigators to relate self-reported individual differences in mindfulness to measures of brain activity during affective tasks. One recent study used electroencephalography (EEG) to assess the relationship between the late positive potential (LPP) and trait mindfulness in an undergraduate sample (Brown, Goodman, & Inzlicht, 2012). The LPP is a positive deflection of the event-related potential in the slow-wave latency range (~400–500 ms after stimulus onset), appearing most prominently in the posterior and central midline scalp regions. It is larger in response to more intense stimuli and correlates with subjective reports of arousal (Cuthbert, Schupp, Bradley, Birbaumer, & Lang, 2000). Because of these characteristics, some researchers consider it a sensitive marker of early emotional arousal (Hajcak, MacNamara, & Olvet, 2010). In this study of mindfulness and the LPP response, researchers found that trait mindfulness as assessed by two self-report measures [the Mindful Attention Awareness Scale (Brown & Ryan, 2003) and the Five Facet Mindfulness Questionnaire (Baer, Smith, Hopkins, Krietemeyer, & Toney, 2006)] was associated with reduced LPP in response to high-arousal, unpleasant stimuli (e.g., images of corpses). Trait mindfulness was also associated with reduced LPP in response to motivationally salient pleasant stimuli (e.g., erotica). These findings suggest that trait mindfulness is associated with a tempered early response (~500 ms) to unpleasant and other motivationally salient stimuli that occurs before a subsequent response can arise and may indicate reduced emotional reactivity.
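For readers unfamiliar with how an LPP value is derived, the sketch below shows the generic computation on simulated data: average stimulus-locked epochs into an ERP, then take the mean amplitude in a late window over midline channels. The sampling rate, window, and channel choices are illustrative assumptions, not parameters from the studies reviewed.

```python
# Illustrative numpy sketch (not from the studies reviewed above).
import numpy as np

fs = 500                              # sampling rate (Hz), assumed
epochs = np.random.randn(40, 3, 500)  # 40 trials x 3 midline channels x 1 s, fake data
erp = epochs.mean(axis=0)             # average across trials -> ERP per channel

t0, t1 = int(0.4 * fs), int(0.5 * fs) # ~400-500 ms post-stimulus window
lpp = erp[:, t0:t1].mean()            # mean amplitude across window and channels
print(f"LPP amplitude: {lpp:.3f} (arbitrary units)")
```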
Another study assessed trait mindfulness with the Kentucky Inventory of Mindfulness Skills (KIMS; Baer, Smith, & Allen, 2004) and asked participants to imagine personally experiencing emotional vignettes (Frewen et al., 2010). Using functional magnetic resonance imaging (fMRI), they found that greater self-reported individual differences in observing (on the Mindful Observing subscale) were positively associated with activation of the amygdala and DMPFC while listening to scripts designed to elicit experiences of rejection or social praise. The positive association between observing and amygdala activation is opposite to research showing that dispositional mindfulness is associated with reduced amygdala activation (Creswell, Way, Eisenberger, & Lieberman, 2007; Modinos, Ormel, & Aleman, 2010) (discussed below).
Importantly, these studies that found an association between dispositional mindfulness and down-regulated amygdala activation used regulatory instructions to modify the response to affective stimuli, while the Frewen et al. study did not. Additionally, the observing subscale has been found to operate differently in meditating and non-meditating samples (Baer et al., 2008).
In meditators, the observing subscale correlated with psychological adjustment and well-being.
However, in non-meditators this subscale showed associations in the opposite direction. It may be that acceptance is an important underlying factor for mindfulness effects; specifically, the amygdala response may be higher when one observes internal and external experiences without an accepting or nonjudgmental stance. The DMPFC activation associated with observing in the Frewen et al. study may be important for generating attributions about one's emotional state (Barrett et al., 2007) and may describe a potential neural underpinning of meta-cognitive awareness in mindful individuals (cf. Teasdale et al., 2002).
Two studies of trait mindfulness and emotion processing found mindfulness to be associated with increased PFC activity and reduced amygdala activity in response to affective stimuli (Creswell et al., 2007; Modinos et al., 2010). Both of these studies used regulatory experimental instructions while participants were viewing affective stimuli. Previous studies have shown that linguistically labeling affective images activates the ventrolateral PFC (VLPFC) and deactivates the amygdala. This research suggests that labeling one's feelings may be a basic mechanism for regulating one's emotions; interestingly, labeling and noting are commonly used during mindfulness meditation practices (e.g., noting the experience of anger in the body). Building on this, Creswell and colleagues (2007) showed that dispositional mindfulness [as measured by the Mindful Attention Awareness Scale (MAAS; Brown & Ryan, 2003)] moderated neural responses to an affect labeling task. Specifically, dispositional mindfulness was associated with greater activation of PFC regulatory regions (including bilateral VLPFC) and greater deactivation of the amygdala, suggesting that mindful individuals may be better able to recruit PFC regulatory regions during affect labeling. A similar neural affect regulation effect was observed in mindful individuals when instructed to use a cognitive reappraisal regulatory strategy. Modinos and colleagues (2010) asked participants to view negative images (e.g., burn victims, funeral scenes) and to reappraise, or reinterpret, their meaning so that it was no longer negative. They found that trait mindfulness (as measured by the KIMS) during reappraisal was associated with increases in DMPFC activity, and this activity was negatively correlated with amygdala activity.
Summary: Trait Mindfulness Research
Trait mindfulness research involving neural processing of affective stimuli suggests a relationship between trait mindfulness and reduced emotional reactivity. In studies using regulatory instructions, trait mindfulness is also associated with enhanced recruitment of emotion regulation regions. EEG research has linked dispositional mindfulness to reduced early cortical emotional reactivity (Brown, Goodman, & Inzlicht, 2012). fMRI research using regulatory experimental instructions (e.g., affect labeling, cognitive reappraisal) showed that trait mindfulness is associated with increased PFC activation and reduced amygdala activation to affective images. Mindful traits were positively associated with increased activity in the PFC, and some functional connectivity findings (Creswell et al., 2007; Modinos et al., 2010) suggest that PFC activity is inversely associated with amygdala activity. This work suggests that when participants are given instructions to explicitly regulate affective responses (labeling, cognitive reappraisal), individual differences in mindfulness may activate emotion regulation regions in the PFC, which may in turn inhibit core affective responses in regions such as the amygdala. These findings suggesting a link between trait mindfulness and reduced emotional reactivity are important because emotional reactivity is central to dysfunctional emotion regulation (Linehan, 1993a), and dysfunction in emotion regulation is a core component of disorders of anxiety, mood, substance abuse, and eating (Berking & Wupperman, 2012).
State Mindfulness Research in Meditation Naïve Participants
State mindfulness research can be categorized by participants' level of mindfulness training: meditation naïve, briefly trained, and expert. Studies of state mindfulness using meditation naïve participants instruct participants with no previous mindfulness training to adopt a mindful attentional stance. They use various experimental instructions to ask participants to pay attention to present moment experience. Instructions may ask participants to pay attention to present emotional experience and bodily sensations (Herwig, Kaffenberger, Jäncke, & Brühl, 2010), or to actively monitor their responses to stimuli, including thoughts, feelings, memories, and body sensations, with an accepting attitude (Westbrook et al., 2011). Taylor and colleagues (2011) instructed meditation naïve participants to mindfully attend to images of differing emotional valences. These mindfulness instructions reduced the self-reported emotional intensity experienced in response to the images across all valence categories (i.e., positive, negative, and neutral). In these participants, a mindful attentional stance deactivated the amygdala in response to positive and negative images. Similarly, another study contrasted a narrative, conceptual think condition ("think about yourself, reflect who you are, about your goals") with a mindfulness of present moment emotions and bodily sensations feel condition in meditation naïve participants (Herwig et al., 2010). The think instructions increased activation of the amygdala, anterior cortical midline structures, and posterior cingulate cortex (PCC). The feel instructions deactivated the amygdala and resulted in a shift toward more posterior, bilateral inferior frontal and premotor regions. The feel condition also activated the middle insula, which the authors interpreted as mindful attention increasing interoception (i.e., awareness of bodily states) (Critchley, Wiens, Rotshtein, Öhman, & Dolan, 2004).
One study examining mindfulness and cue-induced cigarette craving in meditation naïve participants also found that a mindful attentional stance reduces neural activity in regions implicated in core affective reactivity (Westbrook et al., 2011). Most craving researchers categorize craving as an affective state that motivates behavior (see Skinner & Aubin, 2010). In this study on mindfulness and craving, mindfulness instructions led to reduced self-reported cigarette craving and reduced neural reactivity to smoking cues in nicotine-deprived smokers.
The ACC, including its subgenual region (sgACC), plays a central role in the craving response of dependent smokers (Kühn & Gallinat, 2011). Mindfully attending to smoking cues not only reduced craving-related sgACC activation but also reduced its functional connectivity to other craving regions (e.g., ventral striatum). The area of deactivation around the sgACC extended to the ventromedial PFC (VMPFC), including Brodmann's area (BA) 10. BA 10 is thought to encode the subjective value of goods such as an appetitive snack (Hare, O'Doherty, Camerer, Schultz, & Rangel, 2008). Mindfulness-related reductions in this area may indicate a shift away from the subjective self-referential value of experience to a more objective, non-evaluative engagement with present-moment experience.
In addition to reduction of emotional arousal, Farb and colleagues (2007) also showed a shift away from midline to lateral regions with engagement of a mindful attentional stance in meditation naïve participants. This study contrasted states of narrative focus and experiential focus while mildly positive (e.g., charming) and negative (e.g., greedy) characteristics were presented to meditation naïve participants undergoing fMRI. (While we discuss results obtained prior to mindfulness training here, we note that this study and others that we review in this chapter included a mindfulness training component. We present neural results obtained after mindfulness training in our section on brief mindfulness training below.) Instructions for narrative focus entailed judging what is occurring, trying to figure out what the trait word means to the participant and whether it describes the participant, and allowing oneself to become caught up in a given train of thought. For experiential (mindfulness) focus, participants were to sense what is presently occurring in their thoughts, feelings, and body state, without a purpose or goal. Relative to narrative focus, experiential focus reduced activity in the midline regions involved in narrative, self-referential processing. The authors interpreted reduced midline PFC activity as moving away from a subjective, self-referential valuation of experience to a more objective and non-evaluative engagement (Farb et al., 2007).
Summary: State Mindfulness Research with Meditation Naïve Participants
State mindfulness research using meditation naïve participants, as well as trait mindfulness findings, indicates a mindfulness-related decrease in core affective neural response and reported emotional reactivity (Farb et al., 2007; Herwig et al., 2010; Taylor et al., 2011; Westbrook et al., 2011). When engaging a mindful attentional stance, participants demonstrated deactivation of core affective, ventral regions and activation of dorsal and regulatory regions. As mentioned above, diminished emotional reactivity may allow for improved emotion regulation and thus discourage development of psychopathology. Instructions to engage a mindful attentional stance reduce reactivity and may therefore be protective. There is also some evidence to support a mindfulness-related shift away from the DMN and medial PFC toward a left-lateralized network when participants engage a mindful (experiential) attentional focus (Farb et al., 2007; Herwig et al., 2010). Although more research is needed to evaluate this claim, these neural findings are consistent with the idea that a mindful attentional stance shifts one from a subjective, self-referential valuation of experience to a more objective and non-evaluative perception of experience. This non-evaluative perception of experience may be a mechanism whereby a mindful attentional stance reduces emotional reactivity. With non-evaluative perception, a stimulus loses the self-referential valence required for strong reactivity.
State Mindfulness Research after Brief Mindfulness Meditation Training
In contrast to instructing meditation naïve participants to adopt a mindful attentional stance, many studies offer participants a brief mindfulness meditation training program. A growing number of published studies randomly assign participants to brief mindfulness meditation training programs (or control programs) and compare changes in neural activation patterns to affective stimuli before and after this training. These training programs vary in length and intensity, ranging from four days of approximately 25 minutes of training per day (Zeidan et al., 2011), to 8 weeks of group and individual daily practice in variants of the Mindfulness-Based Stress Reduction (MBSR) program (Kabat-Zinn, 1994).
One recent study examined changes in neural processing of pain stimuli after a brief four-session, 25-minute-per-session mindfulness training and compared this to an eyes-closed rest condition (Zeidan et al., 2011). Participants were exposed to noxious heat stimuli while instructed to attend to the breath before and after mindfulness training. Attend-to-breath instructions did not reduce self-reported pain ratings before training, but did after mindfulness training. Training also reduced activity in somatosensory cortex corresponding to the applied heat stimuli during the attend-to-breath condition. Participants who reported the greatest meditation-related reduction in pain intensity had the largest meditation-related activation of the anterior insula and ACC. With regard to pain, mindfulness training coupled with meditation instruction during application of pain stimuli results in reduced experienced pain. This reduction may be explained by the reduced activation of primary somatosensory cortex and increased activation of the anterior insula and ACC. The authors suggested that these effects may describe a neural basis for how mindfulness meditation alters appraisals that impart significance to salient sensory (pain) events (Zeidan et al., 2011), which is consistent with early mindfulness research showing that chronic pain patients had decoupled their pain sensations from their cognitive-affective reactions after MBSR training (Kabat-Zinn, 1982).
In contrast to this new research using quite brief 4-session mindfulness training, the MBSR program consists of 8 weekly 2-hour group sessions, a 1-day silent retreat, and daily home meditation practice during the 8-week training program (Kabat-Zinn, 1994). This program is facilitated by an MBSR-trained instructor who maintains a daily mindfulness meditation practice. Mindfulness is taught through a progression of body-based mindfulness exercises (including guided meditations, mindful stretching and yoga, and didactic exercises and group discussions). Two published studies examined differences in neural activation patterns after MBSR training while participants were exposed to emotional stimuli without any specific experimental instruction to modify their affective response. Both studies found reduced activity in the DMN and language areas in response to affective stimuli after mindfulness training. In the first study, participants diagnosed with social anxiety disorder (SAD) were shown positive and negative social trait adjectives before and after MBSR training and asked to consider if these traits described them (Goldin, Ramel, & Gross, 2009). The second study exposed control and MBSR-trained participants to sad film clips (Farb et al., 2010). Mindfulness training reduced neural activity in response to sad film clips compared to controls, particularly in the precuneus, PCC, left posterior superior temporal gyrus (Wernicke's area), and left frontal operculum (Broca's area). The precuneus and PCC are midline cortical structures that have been associated with autobiographical memory retrieval and self-referential processing (Cavanna & Trimble, 2006), and the PCC is a central node in the DMN. In addition to this shift away from DMN and language areas, MBSR participants showed more insula activity during sad clips than controls.
Another study that showed a mindfulness training-related reduction in DMN activity suggests that this reduction occurs with a decoupling of the DMN and insula (Farb et al., 2007).
When experiential and narrative focus were contrasted during presentation of positive and negative traits in participants who had completed MBSR, experiential focus reduced activity along the anterior cortical midline (rostral dorsal and ventral mPFC) and activated right regulatory PFC regions, the insula, and secondary somatosensory cortex. Functional connectivity analyses showed that the insula was strongly correlated with VMPFC in controls but not in MBSR participants. Instead, the insula was coupled to DLPFC activity during experiential focus following meditation training. The authors suggested that interoception may be strongly coupled with narrative focus in controls but not in participants with mindfulness training. The overall pattern suggests that experiential (mindful) focus may reduce ventral core affective activity in the VMPFC and amygdala during the presentation of trait words, an effect that can be enhanced after mindfulness training. Moreover, mindfulness training increases the recruitment of right-lateralized PFC regulatory areas (right ventro- and dorsolateral PFC), providing suggestive evidence for mindfulness training effects on regulatory PFC regions (Cohen, Berkman, & Lieberman, in press).
A recent study explored how MBSR training impacts neural responding in SAD patients.
When these participants were instructed to shift attention to the breath while being exposed to negative self-beliefs, they exhibited a reduction in amygdala activity following MBSR training (Goldin & Gross, 2010). However, when participants were exposed to negative self-beliefs without instructions to direct attention to the breath, there was an initial increase or spike of amygdala activity that quickly dissipated. Because these participants reported reduced experienced negative emotion in response to negative self-beliefs, it may be that this initial spike in amygdala activity indicates that MBSR training increases initial affective orienting or emotional processing (Goldin & Gross, 2010). Compared to baseline reacting to negative self-beliefs, these SAD participants also showed a shift away from anterior midline cortical and other DMN regions with training and breath-focused attention. Similar to previous findings with SAD participants, they demonstrated increased activation of visual attention regions, which may indicate reduced avoidance of negative stimuli.
In addition to the fMRI studies of brief mindfulness training, several studies used EEG to examine the effects of brief mindfulness training on prefrontal α-asymmetry. It is believed that anterior hemispheric asymmetry reflects motivational direction, with dominant left-hemispheric activity reflecting appetitive, approach responses and dominant right-hemispheric activity reflecting aversive, withdrawal responses (Davidson & Irwin, 1999). EEG measurement of asymmetry in the α-band (8-13 Hz) is used in this way to determine engagement of a positive, approach state or a negative, withdrawal state. Studies that used this methodology have yielded mixed results. Two studies (one using MBSR and another using an abbreviated 5-week training) found that mindfulness training shifted α-asymmetry toward the left hemisphere, suggesting that there may be a shift toward more approach-related positive emotionality (Davidson et al., 2003; Moyer et al., 2011); one study using an 8-week Mindfulness-Based Cognitive Therapy (MBCT) program with participants with past suicidality showed no change with meditation training and a shift toward right-dominant asymmetry in controls (Barnhofer et al., 2007); and in one MBCT study the whole sample shifted toward right-dominant α-asymmetry regardless of whether participants were controls or mindfulness trained (Keune, Bostanov, Hautzinger, & Kotchoubey, 2011). One fMRI study not designed to assess α-asymmetry noted dominant left PFC activity during meditation in expert meditators, and the authors interpreted this as indicative of a positive emotional state (Wang et al., 2011).
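To make the α-asymmetry measure concrete, here is a minimal sketch of how a frontal asymmetry index is commonly computed: band power in the α range (8-13 Hz) is estimated at homologous left/right frontal electrodes (classically F3/F4) and the difference of log powers is taken, with higher values conventionally read as relatively greater left-hemispheric activation (α power is inversely related to cortical activity). The electrode choice, sampling rate, and Welch parameters below are illustrative assumptions, not details from the studies reviewed.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band=(8.0, 13.0)):
    """Estimate power in a frequency band via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])  # integrate the PSD over the band

def frontal_alpha_asymmetry(left_eeg, right_eeg, fs):
    """ln(right alpha power) - ln(left alpha power); positive values imply
    relatively greater LEFT cortical activation, given the inverse
    relationship between alpha power and activity."""
    return np.log(band_power(right_eeg, fs)) - np.log(band_power(left_eeg, fs))

# Illustrative use with simulated 60 s recordings at 256 Hz from F3/F4.
fs = 256
rng = np.random.default_rng(0)
f3, f4 = rng.standard_normal(60 * fs), rng.standard_normal(60 * fs)
print(frontal_alpha_asymmetry(f3, f4, fs))
```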
Summary: State Mindfulness Research with Brief Mindfulness Meditation Training
State mindfulness research involving brief mindfulness training and emotion processing indicates that training results in reduced markers of negative affect, such as SAD symptoms, negative emotion, and pain intensity and unpleasantness to an applied thermal pain probe (Goldin et al., 2009;Goldin & Gross, 2010;Zeidan et al., 2011). These effects of brief mindfulness meditation training also co-occur with changes in specific neural activation patterns.
Several studies indicate a mindfulness-related down-regulation of DMN areas (particularly the VMPFC, DMPFC, and PCC) and language areas in response to a broad range of affective stimuli (Farb et al., 2007; Farb et al., 2010; Goldin et al., 2009; Goldin & Gross, 2010). This may indicate that brief mindfulness training shifts participants away from a self-referential, narrative focus and subjective valuation of experience. In response to affective stimuli, two studies showed overall mindfulness-related amygdala deactivation (Farb et al., 2007; Goldin & Gross, 2010), and three studies found mindfulness increased insula activation (Farb et al., 2010; Farb et al., 2007; Zeidan et al., 2011). It may be that this insula activation indicates changes in interoception and the appraisal of salient sensory events. This body of work suggests that reduced negative affect as a result of mindfulness training may be driven by several underlying neural mechanisms: (1) deactivation of self-referential, evaluative, and narrative DMN regions; (2) deactivation of the amygdala, likely indicating reduced reactivity; and (3) increased insula activation, indicative of altered interoception and representation of sensory events. These patterns indicate decreased activation of core affect regions both with and without the recruitment of the affect regulation regions found in subjects high in trait mindfulness performing regulation tasks. It may be that the more objective perspective that accompanies movement away from self-referential DMN processing as a result of mindfulness training diminishes core affect reactivity without engaging regulatory processes.
However, improved functioning of regulatory regions likely also accompanies mindfulness training. There are likely diverse neural pathways whereby mindfulness training can reduce negative affect in response to affective stimuli, and reduction of negative affect is critical to diverse clinical outcomes. EEG evidence, although mixed, suggests that brief mindfulness training may shift anterior hemispheric dominance to the left or prevent increases in right anterior dominance, which has been interpreted as promoting a more positive and approach-oriented mental stance (Barnhofer et al., 2007; Davidson et al., 2003; Moyer et al., 2011).
State Mindfulness Research in Experienced Meditators
Another body of research examined functional neural differences between mindfulness practitioners with significant meditation experience (i.e., several years of daily practice) and meditation naïve controls matched on variables such as age, sex, education, and handedness (Brewer et al., 2011;Hölzel et al., 2007;Ives-Deliperi, Solms, & Meintjes, 2011;Taylor et al., 2011;Wang et al., 2011). One study compared neural processing of emotionally evocative images in meditation naïve participants and meditators with over 1,000 hours of zen meditation experience under mindful viewing instruction and no instruction conditions (Taylor et al., 2011).
When looking at images without viewing instructions, the only difference between beginning and experienced meditators was that experts had decreased activity in the rostro-ventral ACC when viewing positive images. Under mindful viewing instructions, both beginning and experienced meditators reported reduced emotional intensity experienced in response to the images, with differing, group-specific neural correlates. Mindful instructions in beginners were associated with a deactivation of the amygdala during processing of positive and negative images. In experienced meditators, mindful viewing decreased activity in the medial PFC (BA 10) and PCC across all valence categories. In another study comparing experienced meditators to matched naïve controls while practicing different types of meditation, meditation in experienced meditators was associated with deactivation of the medial PFC and PCC (Brewer et al., 2011).
Evidence suggests that mindfulness in experienced meditators entails a shift away from the DMN, including altered activation in the medial PFC (also part of the dorsal affective system).
Our Emerging Understanding of Mindfulness and the Neurobiology of Emotion
The findings to date indicate that mindfulness affects neurobiological networks implicated in emotion, including the ventral core affective network, the dorsal emotion processing network, and PFC regions implicated in the regulation of emotion. In the ventral core affective network, trait mindfulness and mindfulness training alter activation in the amygdala, VMPFC, ACC, and insula in response to a broad range of affective stimuli. Reductions of amygdala activation in response to affective stimuli have been found in trait and state mindfulness studies (Farb et al., 2007; Herwig et al., 2010; Modinos et al., 2010; Taylor et al., 2011). Along with other ventral affect processing regions (i.e., VMPFC and ACC), the amygdala influences the visceromotor responses related to the value-based representations of an object (Barrett et al., 2007). Mindfulness-related alterations in the responses of the VMPFC (Farb et al., 2007; Westbrook et al., 2011) and ACC (Farb et al., 2010; Taylor et al., 2011; Westbrook et al., 2011; Zeidan et al., 2011) have also been reported, supporting the possibility of changes in visceromotor value-based responses associated with mindfulness.
These changes in the core affect response system, coupled with a shift from midline DMN areas associated with self-referential valuation and narrative focus (i.e., VMPFC and PCC) toward more lateral and posterior regions (Farb et al., 2010; Farb et al., 2007; Goldin et al., 2009; Goldin & Gross, 2010), may indicate a shift away from subjective valuation and narrative elaboration toward a more experiential and objective awareness of present experience. Many theorists describe how mindfulness is characterized by a non-judgmental awareness of one's moment-to-moment experience (Kabat-Zinn, 1994), and this may reduce the evaluation of affective stimuli in terms of whether they are good or bad for "me" and reduce the elaboration of thoughts related to that evaluation. The increase in insula activity with mindfulness (Farb et al., 2010; Farb et al., 2007; Herwig et al., 2010; Taylor et al., 2011), and its decoupling from the valuation-related VMPFC (Farb et al., 2007), may also underlie the movement from subjective evaluation to a bare awareness of present experience.
Mindfulness has also been linked with increased recruitment of regulatory PFC regions when participants are instructed to regulate affective responses. When modifying responses to affective stimuli using a regulatory strategy (i.e., reappraisal or labeling), trait mindfulness is associated with increased PFC activation and decreased amygdala response (Modinos et al., 2010). This suggests that individuals high in trait mindfulness might be better able to recruit regulatory networks when using a regulatory strategy. Changes in activation of the DMPFC, a part of the PFC included in the dorsal affect processing system, are also related to mindfulness. This region may support attributions about affective experience. Three studies of trait mindfulness found mindfulness-related increases in DMPFC activity (Frewen et al., 2010; Modinos et al., 2010), and one state mindfulness study in MBSR-trained participants found a related decrease (Goldin et al., 2009). It may be that the DMPFC serves as part of the prefrontal regulatory system associated with regulatory strategies in individuals high in trait mindfulness and is down-regulated during state mindfulness in the shift away from midline DMN areas.
In addition to the DMPFC activation mentioned above, mindfulness has also been found to be associated with altered activation of the MPFC and ACC, which have been described as emotion processing regions in the dorsal affect network (Barrett et al., 2007). In response to affective stimuli, a mindful attentional stance is associated with reduced activation of the MPFC (Farb et al., 2007;Goldin et al., 2009;Taylor et al., 2011). While the functional properties of the MPFC have yet to be precisely defined (Amodio & Frith, 2006), it is thought that this region participates in attributions made about the cause(s) of core affect (Barrett et al., 2007). In contrast to the MPFC, the ACC has shown mindfulness-related activation in response to affective stimuli (Farb et al., 2010;Taylor et al., 2011;Zeidan et al., 2011), although craving related activation of the sgACC in response to smoking cues is reduced with mindfulness (Westbrook et al., 2011). The ACC is thought to signal the need to represent mental contents in consciousness with the aim of reducing conflict, improving understanding, or exerting greater control over them (Barrett et al., 2007). The pattern of increased ACC and decreased MPFC associated with mindfulness may indicate increased understanding and control of mental contents while deemphasizing attributions about the affect itself.
Our review has focused on describing findings from self-report measures of mindfulness and mindfulness training interventions, and future research would benefit from comparing how these different types of measures and manipulations of mindfulness relate to activation patterns in response to the same affective stimuli (e.g., a sad film clip known to elicit robust sadness) (cf., Goldin & Gross, 2010;Taylor et al., 2011;Zeidan et al., 2011). When including state mindfulness, it would also be useful to ask participants how successful they felt they were in adopting a mindful attentional stance, and to include this in analyses. Instructions to engage mindful attention can be difficult to follow, and analyses using only subjects reporting success may further clarify mindful emotion processing patterns.
How do mindfulness-related changes in neural emotion processing measures relate to changes in clinical symptoms?
Mindfulness-based interventions have been shown to reduce clinical symptoms of depression and anxiety (Hofmann et al., 2010;;Roemer et al., 2008;Teasdale et al., 2000) as well as affective disturbances in chronic pain patients (Grossman et al., 2007;Kabat-Zinn, 1982;Kabat-Zinn et al., 1992). It seems likely that these clinical changes may be mediated by basic changes in neurobiological emotion processing systems, although very little work has attempted to explore neural mechanisms of these clinical symptom changes (Goldin et al., 2009;Goldin & Gross, 2010). We are aware of several groups who are exploring these brain-behavior links, so we hope to see advances in this area in the coming years. One challenge in advancing research on this question is the complexity of analytic models for testing these mechanistic models, but advances in neuroimaging toolboxes for mediation analysis are now available (e.g., Wager et al., 2009).
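As a concrete illustration of the kind of mechanistic model at stake, the sketch below runs a single-mediator bootstrap analysis on simulated data: does a training-related neural change (M, e.g., reduced amygdala reactivity) statistically carry the effect of treatment assignment (X) on symptom change (Y)? All variables and effect sizes here are simulated assumptions; real neuroimaging mediation toolboxes (e.g., Wager et al., 2009) add voxelwise search, covariates, and corrected inference.

```python
import numpy as np

def ols_slopes(X, y):
    """Coefficients from least-squares regression of y on X (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

def indirect_effect(x, m, y):
    a = ols_slopes(x, m)[0]                          # path a: treatment -> mediator
    b = ols_slopes(np.column_stack([x, m]), y)[1]    # path b: mediator -> outcome, given treatment
    return a * b

rng = np.random.default_rng(1)
n = 120
x = rng.integers(0, 2, n).astype(float)              # 0 = control, 1 = mindfulness training
m = 0.8 * x + rng.standard_normal(n)                 # simulated neural change
y = 0.5 * m + 0.1 * x + rng.standard_normal(n)       # simulated symptom change

# Percentile bootstrap of the indirect effect a*b.
boot = [indirect_effect(x[i], m[i], y[i])
        for i in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```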
How do mindfulness-related structural brain changes inform our understanding of mindfulness and emotion processing?
Several studies document mindfulness associated changes in gray matter density and volume in neural regions implicated in emotion processing (i.e., amygdala, hippocampus, and OFC). One study reported that MBSR reduced perceived stress, and reductions in perceived stress co-varied with decreases in gray matter density of the right amygdala (Hölzel et al., 2010).
Given that some previous studies indicate that mindfulness alters amygdala response to affective stimuli (Farb et al., 2007; Herwig et al., 2010; Modinos et al., 2010; Taylor et al., 2011), one promising future direction is to examine the relationship between structural and functional changes in amygdalar response (cf. Gianaros et al., 2008). Also, several studies indicate that mindfulness is associated with increases in grey matter density (Hölzel et al., 2011) and grey matter concentration (Hölzel et al., 2008; Luders, Toga, Lepore, & Gaser, 2009) in the hippocampus. The hippocampus sits adjacent to the amygdala and has been implicated as a core affective region. It is thought that the hippocampus facilitates fear extinction, emotion processing, and memory (Corcoran, Desmond, Frey, & Maren, 2005; Milad et al., 2007). Two studies indicate structural changes in the orbitofrontal cortex (OFC) of experienced meditators (Hölzel et al., 2008; Luders et al., 2009). Gray matter density in the medial OFC was positively associated with hours of meditation practice in experienced meditators (Hölzel et al., 2008). Another study found increased gray matter volumes in the OFC of experienced meditators compared to non-meditators (Luders et al., 2009). The OFC, part of the ventral affective processing system, is thought to represent the affective value of an object in a flexible, experience- or context-dependent manner that the VMPFC uses to make choices and judgments based on this initial valuation (Barrett et al., 2007), suggesting that mindfulness training may increase processing capacity for considering contextual factors during emotion valuation.
How does mindful awareness impact neural affective responses over time?
Affective experiences are not monotonic; instead, they are content-rich events that arise and pass away over time. Accordingly, mindfulness has the potential to alter early orienting and attention toward affective stimuli (Jha, Krompinger, & Baime, 2007;Vago & Nakamura, 2011), to modify early emotion processing as affective stimuli are initially perceived (Brown et al., 2012), and to change how these stimuli are processed and regulated over time (Goldin & Gross, 2010). We know very little about how mindful attention can alter this temporal process of emotion processing and its neural sequelae. For example, it is possible that mindful attention increases one's attention to threat related cues early in the emotion generation process, while also promoting a regulatory response during emotion processing. Indeed, an initial study suggests that this may be true (Vago & Nakamura, 2011), although the neurobiological mechanisms are unknown. One limitation with current fMRI approaches is their sluggish temporal resolution (collecting a whole brain volume can take 1-3 seconds), which makes it difficult to evaluate changes in emotion processing to discrete affective events. Currently, EEG (Brown et al., 2012) and MEG (Kerr et al., 2011) approaches offer the best temporal resolution for testing these unexplored areas, and are exciting areas for future research.
Conclusion
An exciting body of research is emerging that identifies how mindfulness changes the way the brain processes affective stimuli. The body of work we describe here represents our first steps in understanding the neurobiology of mindfulness and emotion processing. Collectively, this initial work indicates that trait mindfulness and mindfulness training can alter ventral core affective reactivity, dorsal emotion processing, and PFC regulatory neural affect regions. The coming years will no doubt bring new research that increases the specificity of our knowledge about the neurobiology of mindfulness and emotion processing.
| 7,855.4 | 2015-01-01T00:00:00.000 | ["Psychology", "Biology"] |
Single center experience on the production of fluorine-18 radiopharmaceuticals using a 7.5 MeV cyclotron: capabilities and challenges
A plethora of cyclotron options have been developed to fulfil the nuclear medicine industry's demand for PET and SPECT radioisotopes. At our remote site, the difficulties of transporting fluorine-18 radiopharmaceuticals for PET examinations were overcome by the installation of a 7.5 MeV cyclotron for in-house production. The addition of a third-party synthesis module enabled the synthesis of 7 additional radiotracers according to a "dose on demand" principle. Radiochemical yield is considered the primary factor in producing sufficient activity for a single patient dose, since low-energy cyclotrons can only offer low initial activities. We hereby report the average radiochemical yields, synthesis times and doses per production for [18F]FDG, [18F]PSMA-1007, [18F]DOPA, [18F]FET, [18F]FLT, [18F]FMISO, [18F]Choline and [18F]FES using a BG75 cyclotron and a Neptis Mosaic-RS. Additionally, the presence of radionuclidic impurities in the final product was examined.
Introduction
Positron emission tomography with computed tomography (PET/CT) is gaining acceptance and is becoming the leading modality in diagnosis among the non-invasive diagnostic techniques in nuclear medicine [1]. Positron emitting isotopes have been used to radiolabel several compounds with different biodistributions, leading to an arsenal of radiopharmaceuticals that can be used to image certain reactions, metabolic routes and processes at the molecular level [1]. Due to its properties, fluorine-18 (F-18) is the radioisotope of choice for many radiopharmaceuticals. Among other characteristics, it has a half-life of 110 min, which is enough for production and administration or dispatching. However, its availability is challenging for remote sites where transport logistics are more complicated and time consuming. This has a direct impact on the examination costs and leads to the need for in-house production of F-18 using accelerators [2].
Nowadays there is a variety of available cyclotrons, differing in several properties and characteristics [3][4][5][6]. From the economic point of view, the cyclotron of choice depends primarily on the estimated number of PET examinations per year for the area covered by the institution [7]. Due to the increasing demand for F-18 radiotracers, remote sites can overcome their proximity "disadvantage" using a small "dose on demand" cyclotron. As a result, several companies have manufactured and offered lower-energy cyclotrons in order to achieve multiple runs of bombardment and fluorine-18 activity yields during a single day, according to the needs of the site [4,6].
Such an example is the BG-75 Biomarker Generator (Best ABT Molecular Imaging Inc., Knoxville, TN, USA), which consists of a mini cyclotron and a microchemistry unit. The cyclotron is a 7.5 MeV, self-shielded, proton-beam, positive-ion cyclotron, initially designed for Fluorodeoxyglucose ([18F]FDG) production only, using a "dose on demand" principle.
However, the growing need for further non-FDG PET examinations led us to the introduction of an additional radiosynthesis module which, to the best of our knowledge, has never before been used together with a 7.5 MeV cyclotron.
We herein report the feasibility of producing several other fluorine-18 radiotracers using a BG75 cyclotron and a third-party synthesis module (NEPTIS Mosaic-RS automated system, ORA, Philippeville, Belgium), based on our experience of more than 2 years.
Materials and methods
The synthesis cassettes, equipped with the cartridges required for each radiopharmaceutical, and the reference compounds used for quality control were purchased from ABX GmbH (Radeberg, Germany). No modifications were made to the purchased single-use synthesis cassettes and reagent kits. Methods and standard operating procedures for synthesis were provided by the module manufacturer (ORA, NEPTIS). Minor modifications to the sequences were made to avoid activity losses and ensure successful productions. For better handling of the low initial activity, a 6.2 cm line for introducing the activity to the synthesizer was used. The syntheses were hosted in a MecMurphil PET shielded isolator (MecMurphil S.r.l, Bologna, Italy) equipped with HEPA filters. HPLC solvents (HPLC grade) were purchased from Sigma-Aldrich.

HPLC analysis was performed on a Shimadzu 20A/AT system with a Gabi detector (Elysia-Raytest; Straubenhardt, Germany) for the determination of the radiochemical and chemical purity as well as the chemical identity of each compound. ACE 3 C18 150 × 1.5 mm (ACE), Rezex ROA-Organic Acid H+ (8%) 250 × 4.6 mm and Luna® 5 µm SCX 100 Å 250 × 4.6 mm columns were used. Prior to quality control, a reference standard of each compound was used for the determination of the retention time. GC (Shimadzu 17A, Lab Solution 2.1 software; Kyoto, Japan) was used for the determination of residual solvents using a RESTEK RTX-624 (id 0.32 mm, length 1200 mm; Bellefonte, PA, USA) column. Radionuclidic identity was checked by half-life measurement via a dose calibrator (M.E.D. - Medizintechnik Dresden GmbH, Dresden, Germany). Radionuclidic purity was determined using a multi-channel analyser (Mucha star, Elysia-Raytest, Straubenhardt, Germany) with Gina Star software (version 6.0, Elysia-Raytest, Straubenhardt, Germany). A spot test was used for the determination of both tetrabutylammonium bicarbonate (TBA-HCO3) and Kryptofix levels in the final product. TLC (mini Gina detector and Gina Star software, Elysia-Raytest; Straubenhardt, Germany) was performed using TLC Silica Gel 60 F254 Alu sheets (25 × 70 mm) and a mixture of MeOH:ammonia (9:1 v/v) as stationary and mobile phases, respectively, for the development of the plate. Reference standards of 111 ppm TBA-HCO3 solution and 600 ppm Kryptofix solution were used.
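Radionuclidic identity via half-life measurement reduces to a simple decay calculation from two dose-calibrator readings of the same sample. A minimal sketch is given below; the reading interval, activity values, and the ±5% acceptance window are illustrative assumptions rather than pharmacopoeial limits, which should be taken from the relevant monograph.

```python
import math

F18_HALF_LIFE_MIN = 109.77  # physical half-life of fluorine-18, in minutes

def measured_half_life(activity_1, activity_2, minutes_between):
    """Half-life inferred from two dose-calibrator readings of the same sample."""
    return math.log(2) * minutes_between / math.log(activity_1 / activity_2)

def identity_ok(activity_1, activity_2, minutes_between, tolerance=0.05):
    """Pass if the measured half-life is within +/- tolerance of F-18's value."""
    t_half = measured_half_life(activity_1, activity_2, minutes_between)
    return abs(t_half - F18_HALF_LIFE_MIN) / F18_HALF_LIFE_MIN <= tolerance

# Example: 1000 MBq decaying to 939 MBq over 10 min gives ~110 min, consistent with F-18.
print(measured_half_life(1000.0, 939.0, 10.0), identity_ok(1000.0, 939.0, 10.0))
```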
[18F]Fluoride was produced by irradiation of Oxygen-18 enriched water (97%, ABX GmbH; Radeberg, Germany) with the 7.5 MeV cyclotron according to the 18O(p,n)18F nuclear reaction. A 316 stainless steel target with a maximum volume capacity of 0.28 mL and a window made of Havar alloy were used. The cyclotron's target current was maintained at 5.5 μA. A total of 0.25 mL of Oxygen-18 enriched water was used. Elution and purging of the transfer line were carried out in each production run, using the same volume of water (0.25 mL). The average time of bombardment for all syntheses was 87.3 ± 9.2 min. Irradiations were performed in the facility with the BG75, a 7.5 MeV proton-beam, positive-ion cyclotron. All syntheses were carried out using the Neptis Mosaic-RS.
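Since batch planning hinges on the radiochemical yield, a minimal sketch of the standard yield calculation follows. Whether a yield is decay-corrected to the end of synthesis matters when comparing sites; the starting activity in the example is purely illustrative, not a measured value from our runs.

```python
import math

F18_HALF_LIFE_MIN = 109.77
DECAY_CONST = math.log(2) / F18_HALF_LIFE_MIN  # per minute

def radiochemical_yield(start_activity_mbq, product_activity_mbq,
                        synthesis_minutes, decay_corrected=True):
    """Yield = product activity / starting [18F]fluoride activity.
    With decay correction, the starting activity is decayed to the end of
    synthesis, so only chemistry losses (not physical decay) lower the yield."""
    reference = start_activity_mbq
    if decay_corrected:
        reference *= math.exp(-DECAY_CONST * synthesis_minutes)
    return product_activity_mbq / reference

# Illustrative FDG batch: assume 5.5 GBq of [18F]fluoride at end of bombardment
# and 2.56 GBq of product after a 23 min synthesis.
print(f"{radiochemical_yield(5500, 2560, 23):.1%} (decay-corrected)")
print(f"{radiochemical_yield(5500, 2560, 23, decay_corrected=False):.1%} (non-corrected)")
```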
In order to investigate the existence and the identity of long-lived radioisotopes that might be present in the [O-18] water coming from the target, a further analysis was carried out. To determine those radioisotopes, gamma spectroscopy was performed on the recovered [O-18] water from all productions carried out with the Neptis module since the first day of operation (Fig. 1). Additionally, gamma spectroscopy of the recovered [O-18] water from a single FDG run was performed, in order to determine the consistency of the output of radioisotopes (Fig. 2). The gamma spectroscopy data of a decayed [18F]FDG dose are also shown in Fig. 3.
Results
During the last 2 years of production, 1213 irradiations were carried out using both ABT's microchemistry unit and Neptis Mosaic-RS. The average activity of F-18
2-Deoxy-2-[ 18 F]Fluoro-d-glucose
After 82 productions of FDG using the Neptis Mosaic-RS radiosynthesis module, with 92.4 min of average bombardment time and a synthesis time of 23 min, the average production yield was 53 ± 8%. This equates to an average activity of 2.56 ± 0.84 GBq. The activity was considered enough for four to five patients to be administered, depending on their weight (target activity of 3.5 MBq/kg(BW)) and considering that only one PET/CT scanner is available.
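How many patients such a batch covers follows from weight-based dosing and the physical decay between injections on a single scanner. The sketch below illustrates this bookkeeping; the patient weights and the 30-minute interval between injections are illustrative assumptions.

```python
import math

DECAY_CONST = math.log(2) / 109.77  # F-18 decay constant, per minute

def doses_from_batch(batch_mbq, weights_kg, mbq_per_kg=3.5, gap_min=30.0):
    """Count how many sequential weight-based doses a decaying batch covers,
    assuming one scanner and a fixed interval between injections."""
    remaining, dosed = batch_mbq, 0
    for weight in weights_kg:
        needed = mbq_per_kg * weight
        if remaining < needed:
            break
        # Withdraw the dose, then let the rest decay until the next injection.
        remaining = (remaining - needed) * math.exp(-DECAY_CONST * gap_min)
        dosed += 1
    return dosed

# Illustrative: a 2560 MBq batch and five 75 kg patients at 3.5 MBq/kg -> 5 doses.
print(doses_from_batch(2560, [75] * 5))
```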
[ 18 F]-PSMA 1007
After 93 productions of 18F-PSMA-1007 with a synthesis time of 41 min, the average activity of 18F-PSMA-1007 was 1.97 ± 0.57 GBq, which translates to a radiochemical yield of 57 ± 8.1%. Based on the scan time (3 h post injection; target activity 4 MBq/kg(BW)), the average activity allows the administration of up to three patients from one production.
[ 18 F]Fluoro-l-dihydroxyphenylalanine
After 10 productions of 18 F-L-DOPA with a synthesis time of 86 min, the average activity was 258 ± 78 MBq and the radiochemical yield was 7 ± 1%. This activity corresponds to a single patient (target activity 4 MBq/kg(BW)) for each production.
[ 18 F]Fluoroethyl-l-tyrosine
After 24 productions of 18F-FET with a synthesis time of 49 min, an average activity of 1.31 ± 0.3 GBq was achieved, with a radiochemical yield of 30 ± 6%. This activity was considered to be enough for up to three patients to be administered (target activity of 200 MBq/patient).
[ 18 F]Fluorothymidine
18F-FLT was produced once with an activity of 169 MBq, which corresponds to a radiochemical yield of 5.6%. The synthesis time was 43 min. The low achievable activity of 18F-FLT (and therefore high cost) has discouraged its use and favoured the use of 18F-FET (Sect. 3.1.4) in the daily routine.
[ 18 F]Fluoromisonidazole
From two productions of 18F-FMISO, activities of 629 MBq and 527 MBq were obtained, corresponding to radiochemical yields of 17% and 11%, respectively. The achieved activity was enough for a single patient (target activity 5.5 MBq/kg(BW)). The time needed for synthesis was 55 min.
[ 18 F]Fluorocholine
After two syntheses we report doses of 973 MBq and 847 MBq, which translate to radiochemical yields of 24% and 18%. The synthesis time was 49 min. A single patient was administered in each case (target activity 3 MBq/kg(BW)).
[ 18 F]Fluoroestradiol
Two productions of 18F-FES were carried out. After 71 min of synthesis, the final activities recorded were 800 MBq and 852 MBq, with radiochemical yields of 23% and 28%. These were administered to patients in a radioactive dose of 111-222 MBq as a slow intravenous bolus injection over 1-2 min [8]. Due to the uptake time of 80-100 min, the final activity is considered to be enough for two patients. All radiopharmaceuticals passed the quality control specifications stated by the European Pharmacopoeia.
Radionuclides generated using 7.5 MeV cyclotron
During normal operation, the BG-75 cyclotron activates certain materials, such as the target and its window. Since the target and the target window are made of stainless steel and Havar alloy respectively, the generation of unwanted radionuclides cannot be avoided [9]. However, due to its low energy, the BG75 cyclotron only generates a very small amount of other, long-lived radioisotopes. Although most of the expected radionuclides [4,10] are detectable in both cases shown below (Figs. 1, 2), the recorded activities are very low. The sums of the integrated peak values are 400 cps for all productions and 20 cps for a single day, respectively. An FDG dose was left to decay for five days (Fig. 3). Although the data from Figs. 1 and 2 are comparable, there is no indication that any of these radioisotopes are present in the final product.
The data from Figs. 1 and 2 show that two radioisotopes can be considered activation products of the Havar and stainless steel: Manganese-52 (t1/2 = 5.6 days) and Cobalt-56 (t1/2 = 77 days). One more isotope that is expected to appear could not be identified; only hypotheses can be made, due to the lack of suitable equipment and the low activity. That isotope is Technetium-96 (t1/2 = 4.6 days), whose parent element (molybdenum) is used as an additive in the stainless-steel alloy.
Discussion
The need for specialized examinations led to the installation of the Neptis module, due to the inability of Best ABT's microchemistry unit to produce radiopharmaceuticals other than FDG. The production of the new radiotracers has been shown to be feasible. All of them passed the quality control specifications and can be considered suitable for routine nuclear medicine clinical practice. Eight radiopharmaceuticals were produced with radiochemical yields comparable to those reported for commercial cyclotron-synthesis module combinations [1,11,12]. Although the production of more FDG doses with one batch was not planned initially, the use of a third-party synthesis module enabled that option as well, leading to the administration of up to 5 patients with one hardware and reagent kit of FDG. Therefore, Best ABT's 7.5 MeV cyclotron can be considered efficient enough for a small clinical site of 30-40 cases per week, depending on the availability of synthesis modules.
Furthermore, the low initial activity restricts the number of patients that can be administered. A longer line was added for introducing the activity to the module through a 10 mL reservoir column, in order to avoid spillage of droplets on its sides and therefore minimize losses of activity.
Moreover, long synthesis times combined with low initial activities and patient availability result in single-dose productions. The rest of the radiopharmaceuticals could be administered to multiple patients. Despite similarities in imaging, FET was preferred over FLT because of its better ability to cross the blood-brain barrier [11] as well as the better yields that were obtained.
Additionally, the unwanted production of other radionuclides did not deviate from what has been reported previously [13,14] or suggested by the cyclotron manufacturer. Similar studies with different cyclotrons and target materials have shown comparable spectroscopic results [14,15]. Although other radionuclides are produced, their low activity compared to commercial cyclotrons allows for better waste and equipment management in order to comply with national and local regulations. Additionally, there was no indication that any of those by-products were present in the final product, even after 5 days of decay.
Conclusion
In conclusion, the addition of the Neptis Mosaic-RS to the existing ABT 7.5 MeV cyclotron made the production of additional radiopharmaceuticals feasible. It also allowed the production of batches of radiopharmaceuticals sized for a certain number of patients. Most of the radiotracers have been produced to be administered to a single patient; however, the capability of the system to serve more patients from the same batch depends on the PET/CT scanners' and patients' availability. Due to the limitation of low-energy cyclotrons in providing high initial activities of F-18, the radiochemical yield is considered the parameter that will define the capabilities of each batch.
| 3,255 | 2020-04-13T00:00:00.000 | ["Medicine", "Engineering", "Physics"] |
COVID-19 Surveillance through Twitter using Self-Supervised and Few Shot Learning
Public health surveillance and tracking viruses via social media can be a useful digital tool for contact tracing and preventing the spread of the virus. Nowadays, large volumes of COVID-19 tweets can quickly be processed in real-time to offer information to researchers. Nonetheless, due to the absence of labeled data for COVID-19, preliminary supervised classifiers or semi-supervised self-labeling methods will not handle non-spherical data with adequate accuracy. With seasonal influenza and the novel Coronavirus having many similar symptoms, we propose using few-shot learning to fine-tune a semi-supervised model built on unlabeled COVID-19 data and a previously labeled influenza dataset, which can provide insights into COVID-19 that have not been investigated. The experimental results show the efficacy of the proposed model, with an accuracy of 86% in identifying COVID-19-related discussion in recently collected tweets.
Introduction
The typical seasonal influenza virus and the current development of COVID-19 have multiple similarities, from symptoms to how the virus is spread. Both viruses attack the respiratory system, can be spread through asymptomatic carriers, cause cases ranging from mild to severe, and are transmitted by contact and/or droplets. Influenza and COVID-19 can both impact a community negatively due to the contagious nature of the viruses and the high number of deaths they cause.
Public health surveillance like digital contact tracing (Ferretti et al., 2020; Ekong et al., 2020), epidemiological studies (Salathé et al., 2013), event detection (Lwowski et al., 2018), and monitoring the prevalence of vaccinations (Huang et al., 2017) can be used to help contain the virus and prevent its spread to the masses. These tools and techniques range from cellphone applications installed on personal phones that track the exact spread of a virus (Ekong et al., 2020) to the development of machine learning-based techniques to study the spread of a virus using social media (Lamb et al., 2013; Corley et al., 2009, 2010; Santillana et al., 2015; Broniatowski et al., 2013; Signorini et al., 2011). Similarly, machine learning-based methods have been developed to monitor the public's view on vaccines to combat the anti-vaccine narrative (Huang et al., 2017).
Using large sources of public information from social media to mine influenza and COVID-19 data allows researchers to help gain insight about the viruses. Just using a search word like "flu" or "coronavirus" with the Twitter API will return millions of tweets with information about vaccines, rumors, symptoms, and family/friends who have contracted the virus. Classifying tweets into smaller subsets, including categories like "Self vs Other" and "Awareness vs Infection", provides a deeper understanding of how influenza and COVID-19 are affecting communities. Example tweets of each category can be found in Table 1. While other researchers aim to use unsupervised learning to cluster and perform topic modeling on COVID-19 tweets (Mackey et al., 2020; Medford et al., 2020), we decided to combine self-supervised learning with few-shot learning to produce more accurate predictions for specific categories.
Table 1: Example tweets for each category (Lamsal, 2020). Also provided in the table is a brief explanation of why the specific category is important to understanding multiple aspects of COVID-19. These categories allow researchers to study the virus at a more granular level.

Awareness. Tweets that are classified as Awareness are beneficial to researchers when not wanting to look at information regarding users actually becoming infected with the virus. These topics can include perception of masks, rumors of vaccines, etc.
- "#COVID19 is such a public health threat because the virus can be transmitted by individuals who are infected, but are not showing symptoms."

Infection. When using social media for public health surveillance and contact tracing, having a classifier that can accurately classify tweets as a positive case can be extremely beneficial in discovering hotspots for the virus as well as other people the user may have come in contact with.
- "I'm absolutely broken! This morning I found out my bio mom (who lives in the UK) is infected; also has pneumonia. Her medical team has said to 'prepare for worst case scenario.' Well, here we are! She's going to die alone with her entire family in another country. F YOU #COVID19"
- "So my 80yo old dad tested positive. He is now on a ventilator. I need all your prayers that he pulls through this. #Covid19 #CoronaVirus"

Vaccine. Tweets aimed around the topic of vaccines have present and future applications. In the present they allow researchers to investigate misinformation and/or the public perception of vaccines. In future research we can further classify these tweets into Intent to Receive or Already Received a vaccine.
- "@siggyflicker I understand that, but unless the clinical studies are thoroughly completed (hopefully sped up), we have to continue to be cautious - even then, unfortunately it's still not a vaccine that will prevent the contraction and spread of #COVID19"
- "I'm tired of hearing all the scary stuff about the virus. The news needs to include what strides we are making in treatments, a vaccine, or even a cure. #COVID-19 #coronavirus #CoronavirusUSA"

The major road block for using deep learning models on the COVID-19 tweets is the lack of annotated data. With millions of tweets related to COVID-19 flooding social media, researchers have a difficult time performing supervised learning on the data. We propose a method to attack this problem by transferring knowledge learned from influenza data and integrating it with latent variables obtained from the unlabeled COVID-19 dataset to perform a deeper understanding through self-supervised classification. The main contribution of this paper is three-fold:

• We propose a self-supervised learning algorithm to monitor COVID-19 Twitter using an autoencoder to learn the latent representations and then transfer the knowledge to a COVID-19 Infection classifier by fine-tuning the Multi-Layer Perceptron (MLP) using few-shot learning.
• We evaluate the utility of Twitter data for COVID-19 surveillance by training four binary models, in a computing environment provided by Jetstream (Stewart et al., 2015), to classify tweets into four different categories: COVID-19 Related, COVID-19 Infection, COVID-19 Self/Others Infection, and COVID-19 Vaccine.
• Last but not least, we transfer a pre-trained influenza MLP classifier to fine-tune the accuracy of the self-supervised model.
Methodology
In this section, we explain the set-up of our study and motivate the core components of the proposed COVID-19 self-supervised learning with little labeled COVID-19 data. We begin with the introduction of our three datasets and data annotation strategy in Section 2.1 and follow by describing the core components of our self-supervised learning, as shown in Figure 1. We first describe how to generate the latent representation of unlabeled COVID-19 tweets using a self-supervised Convolutional Autoencoder model in Section 2.2. Subsequently, we design our COVID-19 supervised downstream task with a pre-trained influenza classifier, chosen due to symptom similarities, and fine-tune the model using COVID-19 few-shot learning in Sections 2.3 and 2.4. We finally present our results and evaluation metrics in Section 3, followed by discussion and conclusion in Section 4.
In order to use few-shot learning, an annotated COVID-19 dataset must be used to fine-tune the overall model. The dataset collected by Lamsal (Lamsal, 2020) provides us with a large amount of data for analysis and predictions. A subsample of 500k tweets from this dataset is used to train the autoencoder. To obtain accurate annotations for this dataset, all tweets are shuffled and randomly sampled 25 times with a sample size of 100 tweets. Each sample is then distributed to 3 different annotators. Each annotator is asked to answer 4 questions about the tweet corresponding to the 4 categories we hope to classify. Only tweets where at least 2 annotators agree on the label are used for training and testing. A breakdown of each task and the total number of tweets can be found in Section 3.4.
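The 2-of-3 agreement rule can be stated compactly as a majority vote per question; a minimal sketch follows, in which the tweet identifiers and vote lists are invented for illustration (None standing in for a skipped judgment).

```python
from collections import Counter

def majority_label(annotations, min_agreement=2):
    """Keep a tweet only if at least `min_agreement` annotators agree on a label."""
    label, count = Counter(annotations).most_common(1)[0]
    return label if count >= min_agreement else None

# Three binary judgments per tweet for one question (e.g., "infection?").
raw_votes = {"tweet_1": [1, 1, 0], "tweet_2": [0, 1, 1], "tweet_3": [1, 0, None]}
kept = {tweet_id: label for tweet_id, votes in raw_votes.items()
        if (label := majority_label(votes)) is not None}
print(kept)  # only tweets with a 2-of-3 consensus label survive
```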
The previous work done by Lamb et al. and Huang et al. provided annotated datasets that allow us to implement supervised deep learning models. The FluTrack dataset provides 3 different classification labels: is the tweet related to influenza or not, is the tweet talking about awareness or infection of the flu, and finally is the tweet about the user or about someone else. The FluTrack dataset consisted of 11,990 tweets collected from years 2009 through 2012. The FluVacc dataset also has 3 classification labels: is the tweet about the vaccine or not, does the tweet contain intent to get the vaccine or not, and lastly does the tweet have information saying the user already received the vaccination or not. The FluVacc dataset consisted of 10,000 total annotated tweets. The provided tweet labels are not mutually exclusive, meaning a tweet can belong to multiple categories. For example, "My mom caught the flu, hopefully i dont catch it..." can be classified as both Infection and Other.
Convolutional Autoencoder
Most of the self-supervised techniques for latent text representation rely on Transformer architectures that predict the next token. In this study, we generate latent text representations of the Coronavirus dataset (Lamsal, 2020) by utilizing a Convolutional Autoencoder. Autoencoders are a special category of deep neural networks that are deliberately programmed to make the output as close as possible to the input. Instead of training to predict y given input x, the network is trained in an unsupervised approach to replicate its own input x. The autoencoder, in Figure 1, is composed of 2 parts, the encoder and the decoder. The job of the encoder is to compress the data into the latent space, and the decoder takes the latent space as its input and attempts to reconstruct the original input. To simplify the autoencoder, we define it as a composite function of the encoder E and decoder D (Equation 1),

X̂ = D(E(X)),    (1)

with a loss function defined to minimize the difference between the input X and the output X̂.
Once the autoencoder is trained on the Coronavirus dataset, the latent variable z can be used to extract important features of COVID-19 tweets. As shown in Figure 1, both the encoder and decoder are convolutional neural networks. We pass the vectorized tweet through a word embedding; the word embedding layer converts every word of the tweet into an n-dimensional vector. This converts the input dimensions into 2-D so that it can be fed to the Convolutional Autoencoder for our task.
Hadifar et al. (Hadifar et al., 2019) also use an autoencoder to help classify text. They likewise use the autoencoder to pre-train the encoder, but instead of classifying text, they use KNN for clustering similar text with the learned latent space. Their claims support the use of autoencoders to achieve a deeper understanding of short texts such as tweets.
The word embedding used in our research was GloVe 50d trained on 6 billion words from Wikipedia (Pennington et al., 2014), with a max tweet length of 50 words. The max length of the tweet was decided by calculating 2 standard deviations above the mean, allowing more than 95% of the tweets to remain unaltered in size. Tweets that are shorter than 50 words are post-padded with 0's, representing no word being present. The next step in the encoder consists of three different 1-dimensional convolutions with kernel sizes of 2, 3 and 4. These three operations are grouped together in Table 2. This allows the encoder to learn 2-, 3- and 4-word relationships, which is important in encoding meaning and semantics into the latent space. The outputs from each of the 3 convolutional layers are passed through an activation function (ReLU) and concatenated together. One more convolutional layer is used, as well as a ReLU layer, before flattening the output. The output of the flattened layer is the latent space, which is half the size of the original input. The encoder ends at the Flatten layer in Table 2, where the decoder begins.
The output (latent space z_c) of the encoder is then used as the input to the decoder. Similar operations are performed in order to reverse the encoding and reconstruct the original vector/tweet. If the decoder can accurately reconstruct the original input, then the latent space has learned and encoded the right information in a compressed format. The decoder begins with the latent space being reshaped into a 2-dimensional vector. Convolutions of width 2, 3 and 4 are performed on the latent space. The outputs are passed through the ReLU activation and concatenated in the same manner as in the encoder. Where the decoder differs from the encoder is the final layer: the output of the final convolutional layer is passed through the softmax activation function. By minimizing the difference between the input vector and the reconstructed vector we achieve our goal of generating a tweet latent representation. Once the autoencoder is trained on the Coronavirus data, the decoder can be removed from the network, leaving the encoder and latent space to be used as its own model.
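A minimal Keras sketch of the architecture as described above: a 50-word tweet is embedded with 50-dimensional vectors, passed through parallel convolutions of width 2, 3 and 4, bottlenecked to a flattened latent code half the size of the embedded input, and decoded back to a per-position softmax over the vocabulary. The filter counts, vocabulary size, and the choice of token-level reconstruction targets are our assumptions; the text specifies only the layer types, kernel sizes, and latent size.

```python
from tensorflow.keras import layers, Model

VOCAB, MAX_LEN, EMB = 20000, 50, 50   # vocabulary size assumed; 50-word tweets, GloVe-50d

def conv_branches(x, kernel_sizes, filters):
    """Parallel 1-D convolutions over word positions, ReLU, then concatenation."""
    return layers.Concatenate()(
        [layers.Conv1D(filters, k, padding="same", activation="relu")(x)
         for k in kernel_sizes])

tokens = layers.Input(shape=(MAX_LEN,), dtype="int32")
embedded = layers.Embedding(VOCAB, EMB)(tokens)      # GloVe weights would be loaded here

# Encoder: 2/3/4-word convolutions, a bottleneck convolution, then flatten.
enc = conv_branches(embedded, (2, 3, 4), filters=32)
enc = layers.Conv1D(25, 3, padding="same", activation="relu")(enc)
latent = layers.Flatten(name="latent")(enc)          # 50 * 25 = 1250 = half of 50 * 50

# Decoder: mirror the encoder and emit a per-position softmax over the vocabulary.
dec = layers.Reshape((MAX_LEN, 25))(latent)
dec = conv_branches(dec, (2, 3, 4), filters=32)
recon = layers.Conv1D(VOCAB, 3, padding="same", activation="softmax")(dec)

autoencoder = Model(tokens, recon)
autoencoder.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# autoencoder.fit(token_ids, token_ids, ...)         # reconstruct the input tweets
encoder = Model(tokens, latent)                      # kept for the downstream classifiers
```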
Influenza Classification
A CNN architecture similar to the text classification CNN of Kim (Kim, 2014) is used to train an influenza tweet classifier. The influenza tweets are transformed to vectors using the same word embeddings as in the autoencoder. During training, a latent representation Z_i is learned from the convolutions of size 3, 4 and 5 words. The latent representation is then passed to the Multi-Layer Perceptron (with weights W_i) and trained with supervision to predict the correct label Y. In Figure 1, this can be seen in the lower section of the image. A description of each label is discussed in detail in Section 2.1. The accuracy of the influenza classifier is discussed in Section 3.4 as well as Table 3.
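Under the same caveats, here is a sketch of the influenza classifier: a Kim (2014)-style text CNN with 3/4/5-word convolutions and max-over-time pooling feeding an MLP head. The projection to a 1250-dimensional latent is our own device so that the MLP head's shapes match the autoencoder latent in the transfer step sketched later; filter counts and layer widths are assumptions.

```python
from tensorflow.keras import layers, Model

LATENT_DIM = 1250  # matched to the autoencoder latent so the MLP transfers (assumption)

def build_flu_classifier(vocab=20000, max_len=50, emb=50):
    """Kim (2014)-style text CNN: 3/4/5-word convolutions with max-over-time
    pooling, projected to a latent the size of the autoencoder's, then an
    MLP head (the part later transferred to the COVID-19 classifier)."""
    tokens = layers.Input(shape=(max_len,), dtype="int32")
    x = layers.Embedding(vocab, emb)(tokens)
    pooled = layers.Concatenate()(
        [layers.GlobalMaxPooling1D()(
            layers.Conv1D(64, k, activation="relu")(x)) for k in (3, 4, 5)])
    z_i = layers.Dense(LATENT_DIM, activation="relu")(pooled)
    hidden = layers.Dense(64, activation="relu", name="mlp_hidden")(z_i)
    out = layers.Dense(1, activation="sigmoid", name="mlp_out")(hidden)
    model = Model(tokens, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

flu_model = build_flu_classifier()  # one binary model per labeled influenza task
# flu_model.fit(flu_token_ids, flu_labels, ...)
```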
Few-Shot Classification
A large influence on our research comes from Bowman et al. (Bowman et al., 2015), who take the parameters of a model trained on one dataset and train a new model using a portion of the new dataset. Bowman et al. suggest that introducing a high-quality corpus can be used to transfer knowledge and learn sentence meanings that improve downstream text classification tasks.
At this point we have two trained models: the autoencoder for COVID-19 tweets and the influenza tweet classifier for the six categories. We want to use as much of the knowledge captured in the COVID-19 latent representation and the influenza MLP layer as possible to accurately predict on COVID-19 classification tasks with limited COVID-19 labeled data. We accomplish this with few-shot learning from a warm start. This is similar to the methodology of Dirkson et al. (Dirkson and Verberne, 2019), who use ULMfit to transfer knowledge between health and Twitter data (Howard and Ruder, 2018). We differ from the ULMfit approach in that we completely freeze the encoder's parameters and only allow the MLP to be trained.
The same process used for the influenza classifier is used for the COVID-19 classifier. The encoder W e and latent variable z are frozen and the decoder is removed. MLP i,c is appended to z and used to classify COVID-19 tweets. If this model were trained from a cold start, the weights of MLP i,c would be randomly assigned during the first epoch of training and adjusted from there, which prevents any classification knowledge from being transferred from the influenza to the COVID-19 tweets. Instead, the weights of MLP i are used to initialize the weights of MLP i,c . This warm-start training allows the model to use knowledge from the influenza training. We can then train the model with the few labeled COVID-19 tweets and fine-tune it for the downstream task of classifying all COVID-19 tweets.
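The warm start itself can be sketched as below, continuing the models defined above. The layer-by-layer weight copy with a shape guard is our own device: the paper implies the influenza and COVID-19 MLPs are shape-compatible, which is an assumption not guaranteed by the toy dimensions used here.

```python
NUM_COVID_CLASSES = 4  # Received and Intent are dropped for the COVID-19 tasks

encoder.trainable = False  # freeze W_e and the latent space z; only the MLP learns

inp = layers.Input(shape=(MAX_LEN,), dtype="int32")
z_c = encoder(inp)
h = layers.Dense(128, activation="relu", name="mlp_hidden")(z_c)
out = layers.Dense(NUM_COVID_CLASSES, activation="softmax", name="mlp_out")(h)
covid_clf = tf.keras.Model(inp, out, name="covid_classifier")

# Warm start: initialize the MLP layers from the influenza head instead of randomly.
# Only layers whose weight shapes match can be copied; mismatched layers keep their
# random initialization (the paper presumes compatible shapes).
for name in ("mlp_hidden", "mlp_out"):
    src, dst = flu_clf.get_layer(name), covid_clf.get_layer(name)
    if all(s.shape == d.shape for s, d in zip(src.get_weights(), dst.get_weights())):
        dst.set_weights(src.get_weights())

covid_clf.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# covid_clf.fit(few_covid_tokens, few_covid_labels, ...)  # few-shot fine-tuning
```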
Results
In Table 3, the accuracy of the influenza classifier, the COVID-19 classifier trained from a cold start, and the COVID-19 classifier trained with the weights initialized from the influenza MLP (COVID*) are given for each of the 4 tasks. The Received and Intent tasks are removed from the COVID-19 classifiers since there are no vaccines currently available for the virus.
Related
For classifying tweets into COVID-19 Related and COVID-19 Non-Related, we took a subsample of COVID-19 tweets and a subsample of influenza-related tweets and combined them into one dataset. We trained the classifier on 1000 influenza and 400 COVID-19 tweets but tested on 4000 influenza and 1400 COVID-19 tweets. A training accuracy of 91% and a testing accuracy of 86% were achieved, as well as a precision score of 0.69, a recall score of 0.77, and an F1 score of 0.73. The same training and testing were done on the cold-start model, which achieved a higher training accuracy but a lower testing accuracy, with a precision score of 0.71, a recall score of 0.70, and an F1 score of 0.71. The COVID-19 MLP initialized with the influenza MLP outperforms the cold start on 3 out of the 5 statistics, including a 3% increase in test accuracy.
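As a consistency check on the reported statistics, the standard F1 definition reproduces the reported value from the precision and recall above:

```latex
F_1 \;=\; \frac{2PR}{P+R} \;=\; \frac{2 \times 0.69 \times 0.77}{0.69 + 0.77} \;\approx\; 0.73
```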
Awareness Vs Infection
Awareness vs Infection can be seen as a binary classification problem. We classified each COVID-19 tweet into two further subcategories: whether the Twitter user is aware of the virus, or whether they are talking about themselves or others being infected. We trained the classifier with 250 awareness and 250 infection tweets and tested on 450 tweets in each category. An accuracy of 98% on the train set and 73% on the test set were achieved, as well as a precision score of 0.73, a recall score of 0.72, and an F1 score of 0.73. The same training and testing were carried out on the cold-start model, which achieved a lower training and lower testing accuracy, with a precision score of 0.69, a recall score of 0.68, and an F1 score of 0.68. The COVID-19 MLP initialized with the influenza MLP outperforms the cold start on all accuracy statistics, including a 3% increase in test accuracy.
Self Vs Other
In self vs other, we classify the infection tweets into further categories. We would like to clarify whether the post is about the author, or whether the author is concerned about the COVID-19 infection of others. We trained the classifier on 55 tweets labeled "Other" and 20 tweets labeled "Self". The test set consisted of 45 tweets labeled "Other" and 30 tweets labeled "Self". An accuracy of 98% on the train set and 86% on the test set were achieved, as well as a precision score of 0.86, a recall score of 0.87, and an F1 score of 0.86. The same training and testing were carried out on the cold-start model, which achieved a lower training and lower testing accuracy, with a precision score of 0.81, a recall score of 0.76, and an F1 score of 0.77. The COVID-19 MLP initialized with the influenza MLP outperforms the cold start on 4 out of the 5 statistics, including a 5% increase in test accuracy.
Vaccine
The last category was vaccine-related tweets. We would like to clarify whether the post is about a vaccine or cure, or whether it is about other facets of the virus. The train set had 65 non-vaccine-related tweets and 35 vaccine-related tweets. The test set had 60 non-vaccine and 35 vaccine tweets. An accuracy of 97% on the train set and 72% on the test set were achieved, as well as a precision score of 0.67, a recall score of 0.65, and an F1 score of 0.66. The same training and testing were done on the cold-start model, which achieved a lower training but higher testing accuracy, with a precision score of 0.77, a recall score of 0.73, and an F1 score of 0.74. The COVID-19 MLP initialized with the influenza MLP was outperformed by the cold start on 4 out of the 5 statistics, including an 8% decrease in test accuracy.
Discussion and Conclusion
Diving further into the data to investigate how the classifiers could be improved, the first step would be to improve the quality of the tweet labels. Several tweets in the related category had mislabeled gold labels. For example, "@berniesanders thank you so very much @berniesanders for giving us hope as a nation for and end to this joke of a presidency thank you for all the fund raising and support that you have done for our country in the age of covid19 and good sir" was given the gold label of not related to COVID when the model predicted it was. The overall theme was presidential candidate Bernie Sanders, but it was still related to COVID-19. Another example: "tell congress to put people first, demand paid sick leave for our most vulnerable workers covid-19". This tweet should be labeled as "Awareness" but was given the gold label of "Infected". With more time to weed out inaccurate labels and provide higher-quality data to the classifier, overall accuracy would increase across the board. In future research we aim to use services like Amazon Mechanical Turk to label more data for us. Our self-supervised methodology for classifying COVID-19 tweets with fewer labeled data was developed to overcome the challenges of labeling massive COVID-19 tweet data. At the time of this research, labeled COVID-19 datasets for supervised learning were not readily available. That being said, to achieve maximal results, providing the deep learning models with large amounts of high-quality annotations would be ideal. Nevertheless, with unlabeled data available and a small amount of annotated tweets, our research demonstrates that we can transfer knowledge from an unsupervised latent representation and high-quality datasets to similar-domain classifiers using self-supervision and few-shot learning.
Lastly, our original hypothesis that influenza tweets and COVID-19 tweets are extremely close in context was not entirely true. While COVID-19 and influenza tweets have a lot of similarities, there are also differences in the themes of the tweets. Twitter users tend to flood their timelines with posts and re-posts about the politics, rumors, misinformation, and news related to COVID-19 rather than symptoms and infections.
Looking at the results for each category, leveraging the COVID-19 self-supervised pretext training to produce COVID-19 latent representations from unlabeled data for a supervised downstream COVID-19 classifier shows promising results. Although the test accuracy of the classifier can still be improved, the warm start of COVID-19 outperforms the cold-start model on 3 out of 4 tasks in terms of the expected accuracy, precision, recall, and F1. With this experiment we show that high-quality annotated data in a similar domain can be used with self-supervision and few-shot learning to train classifiers on data where labels are limited. Rather than starting from scratch, we can initialize the weights of the MLP with the knowledge learned during supervised training. We believe that the architecture and methodology provided in this research show that using self-supervision and few-shot learning can overcome some of the challenges of data with limited annotations. Using the proposed model to label tweets can assist future researchers in investigating COVID-19 tweets at a more granular level.
MXene/Graphene Oxide Heterojunction as a Saturable Absorber for Passively Q-Switched Solid-State Pulse Lasers
Owing to their unique characteristics, two-dimensional (2-D) materials and their complexes have become very attractive in photoelectric applications. Two-dimensional heterojunctions, as novel 2-D complex materials, have drawn much attention in recent years. Herein, we propose a 2-D heterojunction composed of MXene (Ti2CTx) materials and graphene oxide (GO), and apply it to an Nd:YAG solid-state laser as a saturable absorber (SA) for passive Q-switching. Our results suggest that a nano-heterojunction between MXene and GO was achieved based on morphological characterization, and the advantages of a broadband response, higher stability in GO, and strong interaction with light waves in MXene could be combined. In the passively Q-switched laser study, the single-pulse energy was measured to be approximately 0.79 µJ when the pump power was 3.72 W, and the corresponding peak power was approximately 7.25 W. In addition, the generation of a stable ultrashort pulse down to 109 ns was demonstrated, which is the narrowest pulse among Q-switched solid-state lasers using a 2-D heterojunction SA. Our work indicates that the MXene–GO nano-heterojunction could operate as a promising SA for ultrafast systems with ultrahigh pulse energy and ultranarrow pulse duration. We believe that this work opens up a new approach to designing 2-D heterojunctions and provides insight into the formation of new 2-D materials with desirable photonic properties.
Introduction
In recent years, two-dimensional (2-D) materials, including semiconductors, transition metal dichalcogenides, and topological insulators, have been widely applied in the photoelectric and biomedical fields due to their unique structures [1][2][3][4][5]. Two-dimensional heterojunctions, as novel 2-D materials constructed by combining different 2-D materials, were demonstrated to possess a broad optical response, a tunable band gap, and strong interaction with photons [6][7][8][9][10][11][12]. Thus, the advantages of the different 2-D materials were expected to be combined in such 2-D heterojunctions when incident light passed through the heterojunction boundary, obtaining an optimum photoelectric performance [13]. Due to these excellent characteristics, the use of 2-D heterojunctions in photoelectric devices has been reported, such as in solar cells, optical communication devices, and photodetectors [14][15][16][17][18][19][20][21]. Owing to these significant studies, the applications of 2-D materials have been rapidly broadened, accompanied by fast development in their photoelectric applications.
For example, solar cells using 2-D heterojunctions have been reported, obtaining better absorption coefficients and excellent stability [22][23][24][25]. In addition, mid-infrared detectors employing 2-D heterojunctions were also reported to achieve excellent performance due to the optimum structure of heterojunctions [26][27][28]. However, the preparation conditions of such 2-D heterojunctions were strict, and their application mechanism has not been clarified. In the past several years, 2-D heterojunctions as saturable absorbers (SAs) in laser technology have also been reported due to their ideal nonlinear optical properties [29][30][31]. Our team has reported the fabrication of 2-D graphene/phosphorene (BP) nano-heterojunction-based optical SAs, which showed excellent performance in an erbium-doped fiber laser, demonstrating the generation of a stable ultrafast pulse down to 148 fs [32]. Notably, nanosized BP was unstable under ambient conditions, so more protection was needed to eliminate oxidation. In addition to the synthesis of graphene/phosphorene (BP) nano-heterojunction SAs, a graphene/Bi2Te3 2-D heterojunction SA was proposed in recent years and achieved excellent performance in mode-locking operation and Q-switched operation [33]. These results indicate that desirable photonic properties could be obtained by employing a 2-D heterojunction; however, few studies on MXene-based 2-D heterojunctions as SAs applied in passively Q-switched solid-state lasers have been reported, and the mechanism in passively Q-switched solid-state lasers is still unknown.
Herein, we propose a facile way to synthesize a MXene-graphene oxide (MXene-GO) 2-D heterojunction in the liquid phase, and a passively Q-switched solid-state laser using the MXene-GO 2-D heterojunction as an SA is investigated. Here, few-layer graphene oxide (GO) was chosen to form 2-D heterojunctions with MXene materials not only for its higher damage threshold, lower saturation intensity, and broad absorption band, but also because it was expected to tune the optical properties of MXene and improve its performance in optoelectronics [34][35][36][37]. Our results suggest that the MXene-GO 2-D heterojunction has better performance than pristine graphene and pure MXene. In passive Q-switching operation, we obtained a stable ultrashort pulse down to 109 ns, which is the narrowest pulse among Q-switched solid-state lasers using a 2-D heterojunction as an SA. We believe that our work paves the way to designing MXene-based materials to obtain passively Q-switched lasers in the solid state, and indicates that the MXene-GO composite exhibits higher performance in nonlinear optics and acts as a potential 2-D material for photoelectric applications.
Fabrication of the MXene-GO SA
GO solution dispersed in ionic water at 0.5 mg/mL was obtained from a commercial supplier (Nanotechnology, Beijing, China), and MXene (Ti2C) (99.9%) powder and isopropanol (IPA) solvent were supplied by Aladdin Co., Ltd. (Shanghai, China). All the other reagents and solvents were supplied from commercial sources and used without further purification unless otherwise noted. The mixture of MXene and GO, which both had a few-layer structure, was reacted in a liquid phase via chemical bonding or the Coulomb force. In brief, 5 mL of GO solution at 0.5 mg/mL was added to 3 mL of IPA solvent to achieve a uniform dispersion liquid. After that, GO with a few-layer structure was obtained via centrifugation at 1000 rpm for 5 min. Next, 8 mg of multilayer MXene powder was added to IPA solvent and sonicated in an ultrasonic bath continuously for 10 min to achieve a dispersion liquid. Next, the dispersion liquid was centrifuged at 8000 rpm for 5 min to obtain few-layer MXene. Finally, a mixture of MXene and GO was prepared by combining the mother liquids in a volume ratio of 3:2, at a pH of 4, and fully magnetically stirring them for 1 h at room temperature.
An illustration of the preparation scheme is shown in Figure 1. As depicted, the blue balls represent the -OH group, the purple balls represent the -COOH group of the GO material, and the yellow balls are the -F group on the surface of the MXene material. Thanks to these hydrophilic groups, the GO and MXene materials can come into contact easily in the IPA solvent, interaction takes place after the combination, and a chemical bond is formed under magnetic stirring for 1 h.
Characterization of the MXene-GO SA
After the reaction, the MXene-GO sample was transferred onto a silica substrate for morphological characterization by scanning electron microscopy (SEM, Hitachi, SU8010, Tokyo, Japan), and elemental mapping was conducted by using energy dispersive spectroscopy (EDS). After centrifugation at 10,000 rpm for 5 min, the sediment containing the MXene-GO mixture was dispersed in absolute ethyl alcohol and then dropped onto a silica substrate for structural characterization and selective area electron diffraction (SAED) by using high-resolution transmission electron microscopy (HR-TEM, FEI, Tecnai G2 F30, Zhenzhou, China). After that, the sample was transferred onto a carbon film copper grid for thickness measurement by using atomic force microscopy (AFM, Bruker Dimension Icon, Beijing, China). Raman spectra were obtained on a Raman spectrometer (Renishaw inVia Reflex, Shanghai, China) at room temperature for further analysis of the sediment structure. Absorption measurements were performed to analyze the optical properties by using a spectrophotometer (Agilent Cary 5000, Jiangxi, China) at room temperature. Figure 2 shows the morphological characterization of the prepared MXene and GO materials via SEM. As depicted in Figure 2a, the MXene materials showed a sheet structure with sub-micron size according to the scale bar in the SEM image, while the GO materials exhibited a cluster-like shape, which is displayed in Figure 2b. Figure 3 shows the morphological and structural characterization of the heterojunction. As depicted in Figure 3a, the heterojunctions showed a sheet structure with sub-micron size, while the TEM result displayed in Figure 3b reveals that the heterojunction consisted of two different structures. A deeper structural analysis of this heterojunction is shown by HR-TEM in Figure 3c. Two different lattice distances were present in the top right corner and bottom right corner of the image, which were measured as 0.23 and 0.36 nm, respectively. The difference between these values was beyond the resolution limit of the equipment we applied. Thus, we inferred that the two different lattice distances originated from two different structures. Figure 3d shows the SAED image obtained from the rectangular region in Figure 3b, which exhibits two different sets of diffraction spots corresponding to the two components of the heterojunction. All the above analyses suggest that the MXene sheets adhered to other sheets, and a phase boundary of the mixture was observed. The bright spots in the AFM image shown in Figure 3e further confirm the phase boundary of the heterojunction, and the height profiles of the heterojunction shown in Figure 3f were obtained from three spots in Figure 3e, which reveal that the thickness of the heterojunction was between 15 and 25 nm. Elemental mappings of the heterojunction obtained by EDS were used to further confirm the composition of the heterojunctions, which are shown in Figure 4a-c. As depicted, titanium and carbon were both found in the heterojunction, while silica was also exhibited in the EDS spectrum because it was used as a substrate. Figure 4d presents the elemental analysis of the MXene-GO material by weight ratio and atomic ratio. As the measurement results show, the weight ratios of carbon, silica, and titanium were 82.56%, 8.74%, and 8.70%, respectively, while the atomic ratios were 93.31%, 4.23%, and 2.46%, respectively.
The much higher weight ratio and atomic ratio of carbon than of titanium suggest that collective -COOH groups were fixed on the surface of GO, which is conducive to the combination of the MXene material and GO in forming the MXene-GO heterojunction. Figure 5a shows the absorption spectrum of the MXene-GO heterojunction, as well as the spectra of the MXene material and GO material for comparison. Here, we kept the same concentration of pure MXene, pure GO, and the MXene-GO heterojunction and measured them in the liquid phase. As depicted, the MXene material and GO material both have a broadband optical response, and the MXene-GO heterojunction takes full advantage of these two materials, resulting in a higher linear absorption intensity. Figure 5b presents the Raman spectrum of the MXene-GO heterojunction at room temperature to further demonstrate the formation of the MXene-GO heterojunction. The spectra of the MXene material and GO material were both measured for comparison. As depicted, the Raman peak located at approximately 1000 cm−1 was ascribed to the Raman mode of the Ti2C material, and the Raman peaks located at approximately 1350 and 1600 cm−1 were ascribed to the Raman modes of the GO material, which coincided with the literature [38][39][40]. However, when the MXene and GO materials were combined, Raman peaks at 1000, 1350, and 1600 cm−1 were all found in the Raman spectra of the mixture, and the intensity of the Raman peaks was modified: for example, the Raman peak located around 1000 cm−1 was lower than that of the pure Ti2C material, while the Raman peaks located around 1350 and 1600 cm−1 were both higher than those of the pure GO material. Hence, we inferred that the MXene materials may be oxidized to TiO2, and thus the combination of MXene and GO materials would result in the variation of Raman peaks via the effect of GO materials and TiO2, as reported in previous work [38,41]. However, more experiments would be needed to further demonstrate this inference. Figure 6 shows the X-ray photoelectron spectroscopy (XPS) characterization of MXene and MXene-GO materials. The peaks of O 1s, C 1s, and Ti 2p are, respectively, exhibited in Figure 6a-c, which can be matched to the results of EDS. As depicted in Figure 6a, the peak of the oxygen element for MXene materials was decreased when combined with GO materials. A similar phenomenon can be found for the carbon element, shown in Figure 6b. However, the peaks around 459 and 465 eV for the titanium element of MXene materials disappeared when the combination of MXene materials and GO materials was achieved. We suggest the reason to be that part of the Ti-C bond was broken, and a new chemical bond was formed, which may have originated from the formation of the MXene-GO heterojunction [42]. To the best of our knowledge, the oxidation of the MXene material in air leads to a decrease in the Raman signal intensity or the modification of some Raman modes [43][44][45]. This oxidation directly affects the long-term stability of the MXene material and restricts its broad applications. To further investigate the stability of the MXene material when combined with GO material, it was exposed to air for one week, and pure MXene material was used as a comparison. In situ measurements were conducted on these two samples. Figure 7a shows the Raman spectra of the pure MXene material, which displayed a remarkable decrease after exposure to air for 7 days.
For convenient comparison, we denote the Raman intensity after exposure to air for 0 days as I1, and after exposure to air for 7 days as I2. The stability factor can then be expressed as A = I2/I1. After software analysis, we calculated that the scaling factor of pure MXene was 0.3, while the difference between the two Raman spectra of the MXene-GO materials is displayed in Figure 7b.
As depicted, the decreasing signal around 1000 cm−1 has a scale factor of 0.78, calculated by the above expression, which is higher than the scale factor of pure MXene in Figure 7a; it should also be noted that the decreasing signals around 1350 and 1600 cm−1 have a scale factor of 0.83, which is higher than that of the peak around 1000 cm−1. To the best of our knowledge, the Raman signal of the same sample may change at different spots due to different local concentrations of materials. To validate the comparison results, we observed the Raman signal of the same MXene material at different spots and found that the variation was much smaller than that of the sample exposed to air for a week. Therefore, we inferred that these modifications did not originate from the different spots of the samples [43][44][45][46]. Thus, we inferred that the MXene materials were endowed with increased stability through combination with the GO materials. We believe that this work provides new insight that will help to improve the stability of MXene materials, to develop desired photoelectric applications.
Figure 7. Stability test of the MXene-GO heterojunction: (a) Raman spectra of pure MXene (Ti2C) exposed to air for 0 days or a week; (b) Raman spectra of the MXene-GO heterojunction exposed to air for 0 days or a week.
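Written out with the reported values, the stability comparison is just an intensity ratio:

```latex
A = \frac{I_2}{I_1}:\qquad
A_{\text{pure MXene}} = 0.3,\qquad
A_{\text{MXene-GO},\,1000\,\mathrm{cm}^{-1}} = 0.78,\qquad
A_{\text{MXene-GO},\,1350/1600\,\mathrm{cm}^{-1}} = 0.83
```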
Q-Switched Laser Setup
To examine the nonlinear optics of the MXene-GO material as an SA, its performance in a Q-switched laser application was examined. The laser resonator is shown in Figure 8. The pump was a semiconductor laser with a maximum output power of 30 W, and the center wavelength of the pump laser was 808 nm at 25 °C. The fiber numerical aperture was 0.22, and the core diameter was 200 microns. To obtain the appropriate size of the pump light on the laser crystal, we used a fiber output focusing lens with a 1:0.8 imaging ratio. The gain medium in the laser was a 3 mm × 3 mm × 4 mm Nd:YAG crystal with a 1.2 wt% doping concentration, which was cut at the (111) crystal face. In the experiment, we employed a short resonant cavity to generate the laser. Therefore, the S1 surface of the laser crystal, coated with an 808 nm antireflection and a 1064 nm high-reflection coating, served as the plane mirror of the resonant cavity. The S2 surface was coated with an 808 and 1064 nm antireflection coating to obtain better pumping efficiency. To ensure good heat dissipation of the laser crystal, we wrapped the Nd:YAG crystal with indium foil, coated this foil with thermal grease, and finally placed the crystal on a copper base with a water cooling function, whose temperature was set at 17 °C. In the experiment, an output mirror with a curvature radius of 50 mm and a transmittance of 15% was used. The output mirror surface was coated with a coating that possessed high reflection at 808 nm and 15% transmittance at 1064 nm. With this designed coating, a relatively short plane-concave resonator of nearly 10 mm was formed. The MXene-GO heterojunction acted as an SA and was inserted into the laser resonator in the transmission path. By using the ABCD matrix calculation, we obtained a laser spot with a width of 88 microns on the SA. We supposed that a passively Q-switched pulsed laser would be achieved via such a small laser spot and that nonlinear effects could be produced by the MXene-GO SA. In the experiment, an optical power meter (30A-P-17, Ophir Optronics Solutions Ltd., Jerusalem, Israel) was used to measure the laser output power, and the laser output spectrum was obtained by applying an Ocean Optics spectrometer (USB4000-VIS-NIR, Ocean Optics Inc., Dunedin, FL, USA). In addition, the generation of a laser pulse was detected by an InAsSb photoelectric probe (DET10A/M, Thorlabs, Inc., Newton, NJ, USA), and the laser pulse generated by passive Q-switching was recorded by a high-speed digital oscilloscope (DPO4104B, Tektronix, Inc., Shawnee Mission, KS, USA) with a bandwidth of 1 GHz and a sampling rate of 5 GHz.
Results and Discussion
To analyze the Q-switched laser, the output power under continuous operation of the Nd:YAG laser was first studied as a comparison. Figure 9a presents the variation in the average output power of the laser in passively Q-switched mode and compares it with that in continuous wave (CW) mode. When the transmission rate of the output coupling (OC) mirror was 15%, the corresponding pumping threshold was 0.55 W. When the pump power was increased to 3.72 W, a continuous laser output of 0.74 W could be obtained, and the corresponding slope efficiency was 24.3%. When the MXene-GO material was inserted into the resonator as an SA, a stable passively Q-switched pulse was obtained at a pump power of 1.61 W. As the pump power was increased to 3.72 W, the corresponding maximum output power of the passively Q-switched laser reached 344.2 mW, the corresponding slope efficiency was 14.1%, and the optical-optical conversion efficiency was 9.3%. Figure 9b shows the corresponding shortest pulse obtained from the oscilloscope when the pump power was 3.72 W; a stable ultrashort pulse down to 109 ns was achieved by the Nd:YAG solid-state laser via the MXene-GO heterojunction SA. Figure 9c depicts the variation in the pulse width and repetition frequency with pump power. With increasing pump power, the pulse width decreased from 320 to 109 ns, whereas the repetition frequency increased from 153.8 to 434.8 kHz. When the pump power exceeded 3.72 W, the passively Q-switched pulse became unstable. As the pump power increases, more heat accumulates in the saturable absorber, which can lead to structural destruction of the MXene-GO heterojunction SA. Therefore, we will consider coating the MXene-GO heterojunction SA with an antireflection coating to reduce the accumulation of heat on the SA, in order to increase its thermal damage threshold and conduct experimental studies on high-power lasers [47,48]. Figure 9d shows the variation in the single-pulse energy and the passively Q-switched pulse peak power with pump power. The single-pulse energy and pulse peak power both increased with increasing pump power. Notably, the single-pulse energy was 0.79 µJ when the pump power was 3.72 W, and the corresponding peak power was 7.25 W. Figure 10a presents the output spectrum of the laser with a central wavelength of 1064.9 nm; the full width at half maximum was measured to be approximately 2 nm by applying an Ocean Optics spectrometer (USB4000-VIS-NIR, Ocean Optics Inc., Dunedin, FL, USA), suggesting good monochromaticity of the Q-switched laser beam. The beam profiles of the CW and Q-switched lasers under a 2.5 W pump power are both shown in Figure 10b, obtained by applying a beam quality analyzer (BeamGate, Ophir-Spiricon, North Logan, UT, USA). As depicted, the beam shape of the Q-switched laser was modified compared to that of the CW laser; in particular, the central area of the laser beam was enlarged in Q-switched mode. As a comparison, passively Q-switched laser results obtained when other 2-D heterojunction materials were used as SAs in solid-state lasers are summarized in Table 1. As presented, the stable ultrashort pulse generated in our work, down to 109 ns, is the narrowest among the 2-D heterojunction materials, and the peak power is also competitive. Compared with the other laser resonators in Table 1, the cavity length of the resonator we designed is only approximately 10 mm [49].
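The headline pulse figures are mutually consistent under the standard Q-switching relations (single-pulse energy as average power over repetition rate, and peak power as pulse energy over pulse width):

```latex
E_{\mathrm{pulse}} = \frac{P_{\mathrm{avg}}}{f_{\mathrm{rep}}}
                   = \frac{344.2\ \mathrm{mW}}{434.8\ \mathrm{kHz}}
                   \approx 0.79\ \mu\mathrm{J},
\qquad
P_{\mathrm{peak}} \approx \frac{E_{\mathrm{pulse}}}{\tau}
                  = \frac{0.79\ \mu\mathrm{J}}{109\ \mathrm{ns}}
                  \approx 7.25\ \mathrm{W}
```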
In addition, we coated a special optical film on the transparent surface of the laser crystal to effectively use the pump light in the laser crystal. Simultaneously, graphene oxide and Ti2CTx have excellent optical properties [36,50]. Therefore, compared to other heterojunction materials in solid-state laser experiments, we obtained the shortest pulse width, while the average power and peak power were also competitive. In terms of pulsed laser applications, a short pulse width and high peak power have always been the parameters pursued by researchers [51]. Compared with other heterojunction materials, the MXene-GO SA obtained better parameters in solid-state laser applications, such as pulse width, peak power, single-pulse energy, and optical-to-optical conversion efficiency. These parameters are very meaningful for laser applications [52,53].
Conclusions
In conclusion, we proposed a facile method to synthesize an MXene-GO heterojunction in the liquid phase via a chemical reaction. The MXene exhibited an increased absorption signal and improved stability after being combined with GO material, compared to either the MXene material or GO material alone, and better photoelectric performance was achieved for the MXene-GO heterojunction. We demonstrated a passively Q-switched laser by using the MXene-GO heterojunction as an SA. A stable passively Q-switched pulse train was obtained. The minimum observed pulse width was 109 ns, which is narrower than that of other Q-switched solid-state lasers using a 2-D heterojunction SA, and the corresponding repetition rate was 434.8 kHz. The maximum average output power was 344.2 mW, the maximum peak power was 7.25 W, and the maximum single-pulse energy was 0.79 µJ. Our experiments demonstrate the excellent performance of the MXene-based material in Q-switched solid-state lasers when combined with GO as an SA. To the best of our knowledge, this is the first report of a passively Q-switched Nd:YAG solid-state laser using MXene-GO as an SA. We believe that this will pave the way for designing 2-D heterojunction materials and further broaden their applications.
Role of Microvessel Density and Vascular Endothelial Growth Factor in Angiogenesis of Hematological Malignancies
Angiogenesis plays an important role in the progression of tumors, with vascular endothelial growth factor (VEGF) being the key proangiogenic factor. We intended to study angiogenesis in different hematological malignancies by quantifying the expression of VEGF and MVD in bone marrow biopsies along with serum VEGF levels, and by observing their change following therapy. The study included 50 cases of hematological malignancies, which were followed for one month after initial therapy, along with 30 controls. All of them were subjected to immunostaining with anti-VEGF and factor VIII antibodies on bone marrow biopsy along with the measurement of serum VEGF levels. Significantly higher pretreatment VEGF scores, serum VEGF levels, and MVD were observed in cases as compared to controls (p < 0.05). The highest VEGF score and serum VEGF were observed in chronic myeloid leukemia and the maximum MVD in Non-Hodgkin's Lymphoma. A significant decrease in serum VEGF levels after treatment was observed in all hematological malignancies except for AML. To conclude, angiogenesis plays an important role in the pathogenesis of all the hematological malignancies, as reflected by increased VEGF expression and MVD in bone marrow biopsies along with increased serum VEGF levels. The decrease in serum VEGF level after therapy further supports this view and also underscores the importance of antiangiogenic therapy.
Introduction
Angiogenesis plays an important role in the progression of tumors along with metastasis and invasion. It is a complex process which is mediated by angiogenic and antiangiogenic factors. Vascular endothelial growth factor (VEGF) and basic fibroblast growth factor (bFGF) are the key proangiogenic factors, which interact with tyrosine kinase receptors and enhance endothelial cell proliferation and increase vascular permeability [1,2]. Although the role of angiogenesis in solid tumors is well established, its importance in hematological malignancies is being studied widely, especially in reference to different types of leukemia, their prognosis, and therapeutic implications [3,4]. The quantification of angiogenesis may involve the measurement of microvessel density (MVD) and serum VEGF levels, supported by the expression of VEGF in bone marrow biopsies. Different studies have given variable results regarding VEGF expression in various hematological malignancies, with some showing increased expression while others conclude that no difference exists in VEGF expression between hematological malignancies and controls [2,5]. The present study was therefore conducted to examine angiogenesis in different types of hematological malignancies by studying the expression of VEGF and MVD in bone marrow biopsies along with the measurement of serum VEGF levels. It was also intended to study the effect of therapy on angiogenesis by observing the change in serum VEGF levels following initial/induction therapy.
Material and Methods
The study included 50 new cases of hematological malignancies which were diagnosed on bone marrow examination and followed up for at least one month after initial therapy. Thirty controls were included in the study, which were newly diagnosed cases of lymphoma or solid tumor malignancies but without evidence of bone marrow infiltration. All the cases and controls were subjected to immunostaining with anti-VEGF and factor VIII antibodies on bone marrow biopsy (Biogenex, California, USA) along with the measurement of serum VEGF level by enzyme-linked immunosorbent assay (ELISA) at the time of diagnosis, before treatment. Serum VEGF level was also quantified in follow-up at least one month after treatment. The immunohistochemical expression of VEGF was scored 0-3 according to the staining intensity of immunopositive cells in at least 10 fields (×40, Olympus), observed independently by two observers (score 0, no staining; score 1, mild staining; score 2, moderate staining; and score 3, severe staining). For quantitation of MVD, the hotspots on bone marrow biopsy (areas containing the highest number of blood vessels) were identified by using the immunohistochemical expression of factor VIII antibodies. This was followed by counting the total number of vessels (×100, Olympus) in at least five fields, with each field representing an area of 0.392 mm². The true vessel number in the biopsy was then expressed as the mean of the five counts. Statistical analysis was done using SPSS software version 17, and Student's t-test and the correlation coefficient test were used to determine the statistical significance of associations. p < 0.05 was considered significant.
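In formula form, the counting procedure above reduces to a simple average; the field area is taken as 0.392 mm², since the unit is garbled in the source and mm² is the plausible reading for a ×100 field:

```latex
\mathrm{MVD} \;=\; \frac{1}{5}\sum_{i=1}^{5} n_i
\qquad \text{vessels per } 0.392\ \mathrm{mm}^{2} \text{ field}
```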
Results
The study included a total of 50 cases with a male:female ratio of 1.5:1 and a median age of 28.5 years with a range of 5-75 years. Table 1 shows the distribution of the different hematological malignancies with mean age and sex ratio. It shows that acute myeloid leukemia (AML) and acute lymphoid leukemia (ALL) were the most common leukemias, each comprising 30% of total cases. All the malignancies showed a preponderance of males except AML (male:female ratio of 0.8:1). Figures 1 and 2 show significantly higher pretreatment VEGF scores and MVD in cases as compared to controls. The pretreatment serum VEGF levels were also significantly raised in comparison to controls (p < 0.05). Table 2 shows the average VEGF score and MVD in the different hematological malignancies. All the hematological malignancies showed raised VEGF score and MVD with a positive correlation coefficient (r = 0.1071). The highest VEGF score and serum VEGF level were observed in chronic myeloid leukemia (CML) and the maximum MVD in Non-Hodgkin's Lymphoma (NHL) (Figure 3). Table 3 shows a significant decrease in serum VEGF levels after treatment in comparison to pretreatment levels in all the hematological malignancies except for AML. A positive correlation between serum VEGF level before treatment and after treatment (r = 0.6616), which was statistically significant (p < 0.0001), was observed.
Discussion
Sustained angiogenesis is considered to play an important role in the progression and metastasis of tumors. Tumor angiogenesis is controlled by the balance between angiogenesis promoters and inhibitors, and VEGF is an important prognostic cytokine [6,7]. VEGF activates "Notch signaling pathways" to coordinate angiogenesis, and the upregulation of VEGF is influenced by mutations of the RAS or MYC gene [7]. This has led to the emergence of anti-VEGF drugs for the treatment of cancers along with strategies that inhibit "Notch Activation" [8]. VEGF along with microvessel density has been studied in different hematological malignancies as a measure of tumor angiogenesis. The present study observed that VEGF expression and MVD in bone marrow were increased in all the hematological malignancies including acute leukemias, chronic leukemias, multiple myeloma (MM), and NHL. It was observed that the highest VEGF expression and serum VEGF levels were seen in CML, indicating the maximum angiogenic potential in it. Another important finding observed in the present study was the significant decrease in serum VEGF level after treatment in follow-up, further highlighting the importance of angiogenesis in the pathogenesis of hematological malignancies. Further, a positive correlation coefficient was observed between VEGF score and MVD in all the hematological malignancies. A similar finding has also been observed by Gianelli et al., who concluded that VEGF expression correlates with MVD in Philadelphia-negative chronic myeloproliferative disorders [2]. The highest MVD was seen in NHL followed by MM in the present study, but El-Sorady et al. have observed that the highest bone marrow microvessel count was present in MM, suggesting higher angiogenic potential in such patients [9]. The observation of increased angiogenesis in all hematological malignancies in the present study, especially CML and NHL, indicates the potential use of anti-VEGF therapies for their treatment. However, studies have concluded that blocking VEGF activity improves the delivery of cytotoxic drugs to tumor and endothelial cells but have also raised the question of the importance of biomarkers to identify patients who may benefit from antiangiogenic treatment, along with the optimal dose and mechanisms of resistance [10].
An important limitation of the present study was that angiogenesis, and its role in disease progression, was not studied in myelodysplastic syndrome (MDS). However, previous studies have suggested that vascularity and angiogenic factors are increased in MDS and thus play an important role in the leukemogenic process [11].
Conclusion
Thus, to conclude, angiogenesis plays an important role in the pathogenesis of all the hematological malignancies, including acute and chronic leukemia, lymphoma, and multiple myeloma, as reflected by increased VEGF expression and MVD in bone marrow biopsies along with increased serum VEGF levels. The decrease in serum VEGF level after therapy further supports this view and also underscores the importance of antiangiogenic therapy in all the hematological malignancies. In addition, further studies are needed to understand the exact mechanism and interaction of VEGF with cellular components and the bone marrow milieu, which may affect the prognosis, progression, and therapeutic outcome of these hematological malignancies.
Early Transcriptomic Response to Phosphate Deprivation in Soybean Leaves as Revealed by RNA-Sequencing
Low phosphate (Pi) availability is an important limiting factor affecting soybean production. However, the underlying molecular mechanisms responsible for low Pi stress response and tolerance remain largely unknown, especially for the early signaling events under low Pi stress. Here, a genome-wide transcriptomic analysis in soybean leaves treated with short-term Pi deprivation (24 h) was performed through high-throughput RNA sequencing (RNA-seq) technology. A total of 533 loci were found to be differentially expressed in response to Pi deprivation, including 36 mis-annotated loci and 32 novel loci. Among the differentially expressed genes (DEGs), 303 were induced and 230 were repressed by Pi deprivation. To validate the reliability of the RNA-seq data, 18 DEGs were randomly selected and analyzed by quantitative RT-PCR (reverse transcription polymerase chain reaction), which exhibited fold changes similar to those from RNA-seq. Enrichment analyses showed that 29 GO (Gene Ontology) terms and 8 KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways were significantly enriched in the up-regulated DEGs, and 25 GO terms and 16 KEGG pathways were significantly enriched in the down-regulated DEGs. Some DEGs potentially involved in Pi sensing and signaling were up-regulated by short-term Pi deprivation, including five SPX-containing genes. Some DEGs possibly associated with water and nutrient uptake, hormonal and calcium signaling, protein phosphorylation and dephosphorylation, and cell wall modification were affected at the early stage of Pi deprivation. The cis-elements PHO (phosphatase) element, PHO-like element, and P responsive element were present more frequently in the promoter regions of up-regulated DEGs compared to those of randomly selected genes in the soybean genome. Our transcriptomic data showed that an intricate network containing transporters, transcription factors, kinases and phosphatases, and hormone and calcium signaling components is involved in plant responses to early Pi deprivation.
Introduction
As a non-substitutable macronutrient, phosphorus (P) is essential for plant growth and development by being part of fundamental bio-molecules and participating in various cellular activities. However, plants frequently suffer from P deficiency, which is a limiting factor for crop productivity worldwide, because of the low availability of the available form of P (phosphate, Pi) in soils, especially in acid soils [1,2]. Substantial use of Pi fertilizers is the main solution for this problem. Under Pi deprivation, the Pi concentration of Pi-deprived tissues decreases [23,36]. After Pi deprivation treatment for 24 h, the Pi concentration in the roots and the first and second trifoliate leaves of soybean significantly decreased, but there was no significant difference for the third trifoliate leaves (Figure 1). Two biological replicates of RNA-seq were included for both Pi-sufficient leaves (PSL) and Pi-deprived leaves (PDL), and, therefore, a total of four libraries were constructed. By Illumina's deep sequencing, a total of 38.8 to 49.6 million reliable clean reads were obtained from each library after excluding the low-quality reads (Table S1). Most of the clean reads (69.5-74.1%) from each library uniquely mapped to the soybean reference genome (v2.0). The correlation of the two biological replicates for PDL and PSL was calculated to determine the variability between the replicates. The Pearson's correlation (R value) was around 85% for both comparisons (Figure S1), indicating a high correlation between the biological replicates. The abundance of transcripts from each gene mapped was measured in terms of the fragments per kilobase of transcript per million mapped reads (FPKM). A total of 39,505 gene loci were detected in PDL and/or PSL (Table S2). Differential expression analysis showed that a total of 533 loci were differentially expressed, including 36 putative mis-annotated loci and 32 putative novel loci (Tables S3-S5). Here, mis-annotated loci were considered as loci assembled by Cufflinks but spanning two or more loci annotated in the soybean genome assembly. This is often due to the incorrect annotation of the initial gene model or annotated genes being too close together to be resolved. Unannotated loci (i.e., the novel loci) are an assembled group of reads not overlapping with any known loci annotated in the soybean genome assembly. Among the DEGs that have been annotated previously, 270 were up-regulated and 195 were down-regulated by short-term Pi deprivation (Figure S2). Interestingly, transcripts from nine DEGs were found only in PDL, while transcripts from 12 DEGs were found only in PSL (Table 1). The exclusive expression patterns suggest that these genes could have a potential role in plant response to short-term Pi deprivation.
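For reference, the FPKM measure mentioned above normalizes the fragment count of a gene by transcript length and sequencing depth; this is the standard definition rather than one restated in the text:

```latex
\mathrm{FPKM}_g \;=\; \frac{n_g \times 10^{9}}{L_g \times N}
```

where n_g is the number of fragments mapped to gene g, L_g the transcript length in base pairs, and N the total number of mapped fragments.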
Table 1. Differentially expressed genes where transcripts were found only in Pi-deprived leaves (PDL) or Pi-sufficient leaves (PSL) after 24 h Pi deprivation. Expression level of each gene was measured in terms of fragments per kilobase of transcript per million mapped reads (FPKM). Gene functional descriptions were shown according to soybean genome annotation (V2.0). Accession number "Glyma." is named as "Gm" for short. −P refers to PDL and +P refers to PSL. NA: no annotation.
Verification of RNA-Seq Data by qRT-PCR
To validate the reliability of the RNA-seq data, 18 DEGs were randomly selected and investigated by qRT-PCR. The fold changes of these genes under short-term Pi deficiency observed by qRT-PCR were similar to those revealed by RNA-seq (Figure 2), indicating that the RNA-seq data obtained in this study are reliable. However, there were differences between the RNA-seq and qRT-PCR results for some genes. For example, the induction of Gm17g100000 transcripts appeared much stronger when detected by RNA-seq than by qRT-PCR, while the repression of Gm03145600 transcripts appeared much stronger when detected by qRT-PCR than by RNA-seq. These discrepancies may be due to the fact that the samples used for RNA-seq and qRT-PCR were not the same, so there would be inevitable differences between different batches of samples, and perhaps some primer pairs used in qRT-PCR were not optimal for detecting the target transcripts.
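The fold changes compared in Figure 2 are log2 ratios; on the RNA-seq side this is the FPKM ratio between treatments, and for qRT-PCR the usual counterpart is the 2^(−ΔΔCt) relative quantification, which we assume here since the protocol details are not given in this excerpt:

```latex
\log_2 \mathrm{FC}_{\text{RNA-seq}} = \log_2\!\frac{\mathrm{FPKM}_{-P}}{\mathrm{FPKM}_{+P}},
\qquad
\mathrm{FC}_{\text{qRT-PCR}} = 2^{-\Delta\Delta C_t}
```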
Enrichment Analysis of Gene Ontology (GO) Functional Annotation and KEGG Pathways of Differentially Expressed Genes (DEGs)
Half of the DEGs (266/533) can be assigned to 183 GO terms (Table S3). GO enrichment analysis revealed that 29 and 25 GO terms were significantly enriched in the up-regulated DEGs and the down-regulated DEGs, respectively (Tables S6 and S7). For the up-regulated DEGs, carbohydrate metabolic process, cellular glucan metabolic process and lipid metabolic process were the three most significantly enriched GO terms within biological processes; each term contains at least three DEGs (Table S8). For the down-regulated DEGs, dTMP (deoxy-thymidine monophosphate) biosynthetic process, glycine biosynthetic process and nucleotide biosynthetic process were the three most significantly enriched GO terms within biological processes; each term contains at least two DEGs (Table S9). In addition, 15.6% of the DEGs (83/533) can be assigned to 152 KEGG pathways (Table S3). Eight and 16 KEGG pathways were significantly enriched in the up-regulated DEGs and the down-regulated DEGs, respectively (Table S10). Ether lipid metabolism, cutin, suberine and wax biosynthesis and pentose and glucuronate interconversions were the three most significantly enriched KEGG pathways in the up-regulated DEGs; each contains two to three DEGs. Drug metabolism-cytochrome P450, chemical carcinogenesis and metabolism of xenobiotics by cytochrome P450 were the three most significantly enriched KEGG pathways in the down-regulated DEGs; each contains three to four DEGs.
Figure 2. Fold changes in log2 values according to RNA-seq data or qRT-PCR results. Red color indicates induction and green color indicates repression. Accession number "Glyma." is named as "Gm" for short. −P refers to PDL and +P refers to PSL.
Genes Potentially Involved in Pi Signaling and Utilization Are Induced by Short-Term Pi Deprivation in Soybean Leaves
In this study, at least 13 classical Pi-responsive genes that are potentially involved in Pi signaling and utilization were found to have changed expression in response to short-term Pi deprivation (Table 2). These genes included five genes encoding SPX domain-containing proteins, two purple acid phosphatase (PAP) genes, three phospholipase genes, one glycerol-3-phosphate permease (G3Pp) gene, one sulfolipid sulfoquinovosyldiacylglycerol (SQD) gene and one gene encoding subfamily IIIB acid phosphatase (Gm16g220900). All of these genes were significantly up-regulated by Pi deprivation, with the exception of Gm16g220900, which potentially encodes a class IIIB acid phosphatase and was repressed.
SPX-containing proteins can be classified into four sub-families based on the presence of additional domains in their structure, namely the SPX, SPX-EXS, SPX-MFS and SPX-RING families. They are conserved in higher plants and are essential for Pi signaling and utilization [4,37,38]. Previously, nine SPX proteins and 14 SPX-EXS proteins have been identified in the soybean genome [39,40]. Here, an additional six SPX-MFS genes and four SPX-RING genes were identified in the soybean genome based on homology searches using Arabidopsis counterparts (AtPHT5;1 and AtNLA1) (Tables S11 and S12). Alignments of the sequences of SPX-MFS proteins and SPX-RING proteins were performed by ClustalW (Figures S3 and S4), suggesting that the SPX, MFS and RING domains are conserved among these proteins. Phylogenetic analysis indicated that soybean SPX-containing proteins are closely related to their homologs in Arabidopsis and rice (Figure 3), suggesting their conserved functions in land plants.
Table 2. DEGs potentially involved in Pi signaling and utilization. Expression level of each gene was measured in terms of FPKM. Gene functional descriptions were shown according to soybean genome annotation (V2.0). Accession number "Glyma." is named as "Gm" for short. −P refers to PDL and +P refers to PSL. SPX: domain found in Syg1, Pho81, XPR1, and related proteins; HAD: haloacid dehydrogenase.
Figure 3. Unrooted phylogenetic tree of the SPX domain-containing proteins in soybean (Glycine max), Arabidopsis (Arabidopsis thaliana) and rice (Oryza sativa). Class III corresponds to the SPX-MFS family and Class IV to the SPX-RING family. The alignment for the phylogenetic tree was performed with ClustalW using full-length amino acid sequences. The phylogenetic tree was created with the MEGA6 software and the neighbor-joining method with 1000 bootstrap replications. The bar indicates the relative divergence of the sequences examined. The red arrow indicates the up-regulation of SPX genes upon short-term Pi deprivation by RNA-seq in the present study. Soybean SPX domain-containing proteins are marked with blue diamonds.
Expression of Genes Potentially Involved in Transportation of Water, Sugars and Mineral Nutrients Is Altered by Short-Term Pi Deprivation
Twenty-seven transporter genes were found to be differentially expressed in Pi-deprived soybean leaves (Table 3). Among them, several genes potentially involved in the transportation of water, sugar, sulfate, and copper were up-regulated, while three genes potentially involved in zinc/iron transport were down-regulated by short-term Pi deprivation. In addition, one nitrate transporter gene (Gm18g127200) and one malate transporter gene (Gm11g179100) were down-regulated by Pi deprivation. The responsiveness of these transporter genes suggested that the transport and allocation of water, sugars and nutrients are quickly altered by Pi deprivation in order to adapt to the stressed condition.
Genes Linked to Ca2+ and Hormonal Signaling Are Regulated by Short-Term Pi Stress
At least 10 genes differentially expressed in Pi-deprived soybean leaves were putatively linked to Ca2+ signaling (Table 4). In addition, at least 15 DEGs are potentially involved in the transport and signaling of diverse hormones, including auxin, cytokinin, gibberellin (GA), brassinosteroids (BRs), jasmonate and ethylene (Table 4). Among these genes, DEGs related to GA, BR and jasmonate signaling were all up-regulated by Pi deficiency. The expressional changes of these hormone-related genes suggested that short-term Pi deficiency could affect hormone synthesis, transport and sensitivity.
Diverse Transcription Factor Family Genes Are Responsive to Short-Term Pi Deprivation in Soybean Leaves
Here, expression of at least 31 transcription factor genes was affected by short-term Pi deprivation in soybean leaves; among them, 14 were up-regulated and 17 were down-regulated (Table 5). Several transcription factors are also potentially involved in hormonal signaling. For example, Gm14g127400 encoding a BES1/BZR1 homolog protein is potentially involved in BR signaling and Gm05g144500 encoding an RR protein is potentially involved in cytokinin signaling (Table 4). These differentially expressed transcription factors belong to diverse families, such as MYB, WRKY, NAC (NAM, ATAF, and CUC), ERF (ethylene response factor), bHLH (basic helix-loop-helix), TCP, bZIP (basic leucine zipper domain), HD-ZIP (homeodomain leucine zipper), YABBY and zinc finger proteins (ZFPs) of various types (C2H2, C3H, B-box, DOF and HD). MYB/MYB-like, bHLH, C2H2 ZFP and YABBY were the most abundant transcription factor families differentially expressed; each of them contained at least three DEGs. Interestingly, four DEGs encoding bHLH transcription factors were all up-regulated, whereas two genes encoding TCP transcription factors were all repressed by Pi deprivation.
Short-Term Pi Deprivation Modifies the Expression of Genes Encoding Diverse Protein Kinases and Phosphatases
In this study, at least 31 protein kinase genes and nine phosphatase genes (including the three acid phosphatase genes listed in Table 2) were found to be responsive to short-term Pi deprivation (Figure 4). These DEGs potentially encode protein kinases of diverse families. In addition, most of the DEGs encoding potential phosphatases were up-regulated by Pi deprivation. However, a gene encoding a phosphoinositide phosphatase (Gm10g060600) and a PP2C gene (Gm11g050900) were repressed by Pi deprivation, suggesting their differential roles in the Pi-deprivation response.
Table 3. DEGs potentially involved in the transportation of water, sugars and mineral nutrients. The expression level of each gene was measured in terms of FPKM. Gene functional descriptions are given according to the soybean genome annotation (V2.0). The accession-number prefix "Glyma." is abbreviated as "Gm". −P refers to PDL and +P refers to PSL. ATOX1: antioxidant protein 1; PIP1: plasma membrane intrinsic protein 1; TIP2: tonoplast intrinsic aquaporin 2; ATP: adenosine triphosphate; ABC: ATP-binding cassette.
Figure 4. Heatmap of DEGs that are putative protein kinases and phosphatases. The color intensities represent fold changes in log2 values according to the RNA-seq data. Red indicates induction and green indicates repression. The accession-number prefix "Glyma." is abbreviated as "Gm". −P refers to PDL and +P refers to PSL.
The Expression of Genes Associated with Metabolism Is Affected by Short-Term Pi Deprivation in Soybean Leaves
At least 77 DEGs were found to be associated with metabolism, and the majority of them (72.7%) were up-regulated by short-term Pi deprivation (Figure 5, Table S13). Of the 15 DEGs potentially involved in lipid metabolism (including glycolipid synthesis, sulfolipid synthesis, fatty acid synthesis and elongation, and lipid degradation), most were up-regulated under Pi deprivation. Ten DEGs potentially involved in cell wall degradation (including 1,4-β-mannan endohydrolases, pectate lyases and polygalacturonases) were all up-regulated by Pi deprivation. Seven of the 10 DEGs potentially involved in the secondary metabolism of metabolites such as carotenoids, terpenoids, isoflavonols and phenylpropanoids were up-regulated by Pi deprivation. Moreover, nine DEGs were found to be potentially involved in the synthesis or degradation of amino acids such as serine, cysteine, arginine, lysine, phenylalanine, tryptophan and proline (Table S13). Four of them were up-regulated and five were down-regulated by Pi deprivation, suggesting a complex regulation of amino acid metabolism under Pi deficiency and different roles for these amino acids in Pi-deficiency acclimation. In addition, nearly all of the 10 DEGs potentially involved in cell wall modification, such as expansins, xyloglucan endotransglycosylases, endoxyloglucan transferases and pectinesterases, were up-regulated by short-term Pi deprivation. However, two DEGs potentially involved in cell wall cellulose synthesis (encoding cellulose synthases) were down-regulated by short-term Pi deprivation. These results suggest that plant metabolic acclimation to Pi deprivation occurs rapidly after the onset of the stress.
Identification of Pi-Responsive Cis-Regulatory Elements in the Promoters of DEGs
We examined the distribution of Pi-responsive cis-regulatory elements in the putative promoter regions up to 1000 bp upstream of the transcription start sites of DEGs. In total, 501 promoter regions were obtained from 288 up-regulated loci and 213 down-regulated loci (Tables S14 and S15). Eight types of previously identified Pi-responsive cis-elements were found in Pi-responsive genes [11,41-43]. The PHO (phosphatase) element, PHO-like element and P-responsive element were present more frequently in the promoter regions of up-regulated DEGs than in those of randomly-selected genes in the soybean genome (Table S15, Figure 6). In addition, the helix-loop-helix element was present less frequently in the promoters of down-regulated DEGs. However, no significant difference was found for the P1BS, TATA-box-like, TC and NIT 2-like elements.
These results suggest that the Pi-responsive cis-elements enriched in the promoters of DEGs may be involved in transcriptional regulation events at the early stage of Pi deprivation stress.
Figure 6. The occurrence of Pi-related cis-elements previously identified as common to Pi-responsive genes in the promoter regions (1000 bp) of up-regulated DEGs, down-regulated DEGs and randomly-selected genes. Promoter regions of 250 genes randomly selected from all the chromosomes of the soybean genome were used as the control; 288 and 213 promoter regions were acquired for up-regulated and down-regulated DEGs, respectively. The hypergeometric p-value was calculated online (http://systems.crump.ucla.edu/hypergeometric/index.php). Asterisks indicate a significant difference from the genome-wide expectation estimated from the randomly-selected genes (* p < 0.05, ** p < 0.01).
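For readers reproducing this test offline, the following sketch computes the same one-sided hypergeometric enrichment p-value with SciPy; all counts shown are illustrative placeholders, not the study's data.

    from scipy.stats import hypergeom

    # Illustrative placeholder counts (not the paper's data):
    N_total = 538          # all promoters tested: 288 up-regulated DEG + 250 random
    K_with_element = 120   # promoters in the whole pool containing the element
    n_deg = 288            # promoters of up-regulated DEGs
    k_observed = 85        # DEG promoters containing the element

    # One-sided enrichment p-value: P(X >= k_observed)
    p_value = hypergeom.sf(k_observed - 1, N_total, K_with_element, n_deg)
    print("hypergeometric enrichment p =", p_value)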
Discussion
Because of the low availability of Pi in soils, understanding the molecular acclimation mechanisms under Pi stress is of critical importance for developing crops with enhanced P-use efficiency in modern agriculture. Several studies using microarrays or deep sequencing have documented the genome-wide transcriptional responses of plants to Pi stress, but most of these studies are based on mid-term or long-term Pi deficiency, and the early responses remain poorly understood [9]. In this study, we measured the transcriptional responses of soybean leaves to short-term Pi deprivation using an RNA-seq approach. A total of 533 loci were found to be responsive to early Pi deprivation. By comparison with recent transcriptomic analyses of two recombinant inbred lines of soybean that differ in Pi stress tolerance [32], 12 genes were found to be commonly responsive to Pi deprivation in leaves; five of them respond similarly to short-term and long-term Pi stress (Figure S5). In addition, a small portion of the 533 loci (28/533) were found to be responsive to long-term Pi deprivation in soybean roots [35] (Figure S6). The majority of these 28 common genes showed a similar transcriptional response under short-term and long-term Pi deprivation in the different organs. There were 21 DEGs expressed exclusively in either PDL or PSL (Table 1), but none of them were found to be responsive to long-term Pi deprivation in soybean roots. This may suggest that they respond differently to short-term and long-term Pi deficiency in leaves and roots.
At least 13 genes that are commonly responsive to Pi deprivation in plants were affected by short-term Pi deprivation, including five SPX domain-containing genes, two PAP genes, three phospholipase genes, one G3Pp gene and one SQD gene (Table 2). Some of them, such as PLDZ2, PAP13, PAP31, SPX3, SPX8 and Gm09g223700, were also found to be responsive to long-term Pi deficiency [35,39,44,45]. Whether the other genes are responsive to long-term Pi deprivation or only to early Pi deprivation needs further investigation. Replacement of phospholipids in internal cellular membranes with galactolipids and sulfolipids is an important adaptive mechanism under Pi deficiency that has been found in many plant species [46,47]. Lipid remodeling is also considered to be related to the mobilization of P from less-essential cellular components for export to growing sinks [8]. PLDZ2 plays a critical role in galactolipid biosynthesis and thus facilitates Pi recycling from phospholipids [48]. SQD1 and SQD2 are Pi-responsive enzymes essential for sulfolipid biosynthesis under Pi-deficient conditions [49,50]; they also facilitate Pi recycling. In addition, PLDζ1 and PLDζ2 are Pi-responsive phospholipases that can hydrolyze phospholipids and thus contribute to the Pi supply in Pi-deficient Arabidopsis [51]. The quick responses of these genes to Pi deprivation in leaves therefore suggest that the replacement of phospholipids can be activated within a short time to adapt to Pi deficiency. G3Pp family genes, which are potentially involved in transporting G3P (the hydrolysis product of diacylglycerol, itself a product of phospholipid breakdown), were found to be induced by Pi starvation in Arabidopsis [52]. Here, the homologs of these genes were also induced by short-term Pi deprivation in soybean leaves, but further research is required to characterize their exact roles in Pi stress response and tolerance.
In this study, a total of 33 genes encoding SPX-containing proteins were found in the soybean genome (Figure 3). This number is much higher than that in Arabidopsis (20) and rice (15) [38], which could be related to the whole-genome duplication events that occurred about 59 and 13 million years ago [53]. The early Pi deprivation-responsive GmSPX3, GmSPX4 and GmSPX8 are closely related to SPX1 and SPX2 in Arabidopsis and rice. In Arabidopsis and rice, SPX1, SPX2, SPX4 and SPX6 have been demonstrated to be involved in Pi sensing and signaling by inhibiting the transcriptional activity of PHR1 [54-57]. It has also been shown that overexpression of soybean SPX1 and SPX3 increases and decreases total P concentration in plant tissues, respectively, suggesting contrasting roles in the regulation of Pi distribution in the plant [39,45]. Whether the short-term Pi deprivation-responsive SPX genes in soybean function in the Pi deficiency response by repressing the activity of the conserved PHR1 and/or by participating in other regulatory pathways deserves further investigation. In addition to the SPX genes, two SPX-EXS genes, GmPHO1.H12 and GmPHO1.H14, were also induced by short-term Pi deprivation. Their closely related homologs in Arabidopsis, such as AtPHO1 and AtPHO1.H1, are also responsive to Pi deficiency and are associated with the Pi economy [37,58]. However, the functions of the Pi-responsive SPX-containing proteins in soybean remain to be investigated.
In addition to the genes that are potentially involved in Pi signaling and utilization, many genes possibly involved in transportation of water, sugars and other mineral nutrients were also altered by short-term Pi deprivation (Table 3). It has been known for a long time that Pi deficiency decreases leaf water potential and transpiration rate [59]. Thus, it would be interesting to determine whether the observed up-regulation of aquaporins could be a compensatory response for the lower hydraulic conductance under Pi deficiency. Enhancement of the uptake and translocation of sugars and sulfate could also promote the synthesis of galactolipids and sulfolipids to substitute for the decline of phospholipids under Pi deficiency [8]. Sugar is an important systemic signal for regulating Pi starvation responses and root system architecture [60]. In addition, the homeostasis of iron, zinc and copper could also be affected by Pi deficiency. The most abundant Cu proteins in green tissues are plastocyanin and Cu/Zn-superoxide dismutase (Cu/ZnSOD), which are associated with electron transfers in photosynthesis and the scavenging of stress-induced reactive oxygen species (ROS), respectively [61]. Therefore, the up-regulation of copper transporters may be a part of a mechanism to boost ROS scavenging and photosynthesis which are impaired by Pi deficiency. Both iron and zinc have been shown to accumulate in Pi-deficient leaves in Arabidopsis [21]. Thus, the down-regulated iron/zinc transporter genes shortly after Pi deficiency could impede the excessive accumulation of these metals in the long run.
In this study, at least 10 DEGs are potentially involved in Ca2+ signaling (Table 4). Ca2+ is a universal signal playing a critical role in plant responses to environmental stresses [62,63]. It is well known that diverse external environmental stimuli can quickly trigger specific spatio-temporal patterns of change in cytosolic Ca2+ concentration, which can be perceived and decoded by a series of Ca2+ sensors containing EF-hand motifs [64,65]. However, little information is available with respect to the effect of Pi deficiency on cytosolic Ca2+ levels. Considering the critical roles of Ca2+ signals in plant responses to other nutrient stresses, such as potassium and nitrate deficiencies [66,67], it is conceivable that Ca2+ signals are an important player in the Pi stress response. In addition, Ca2+ and Pi are incompatible ions, because Ca2+ forms insoluble compounds with phosphate derivatives at high concentrations. Changing the allocation patterns of Pi may therefore also necessitate changes in the allocation of Ca2+. How the Ca2+ signal is linked to the Pi deficiency response thus deserves further investigation.
Many hormones are involved in Pi stress responses, regulating root development and architecture as well as shoot development [4]. Here, 15 genes potentially involved in the signaling of auxin, cytokinin (CK), GA, BRs, jasmonate and ethylene were responsive to short-term Pi deprivation (Table 4). It has been shown that bioactive GA levels are reduced by Pi deficiency, which leads to the accumulation of DELLA proteins and thereby modulates root system architecture and anthocyanin accumulation in leaves [68]. BRs are a class of plant polyhydroxysteroids that play a pivotal role in plant growth and development as well as in a wide variety of environmental stress responses [69]. However, there is little information available on the role of BRs in Pi deficiency, and whether BRs are involved in regulating the Pi economy during leaf development under Pi stress remains to be examined. Initiation and expansion of soybean leaves were shown to decline under Pi deficiency [70], but whether hormonal signals such as auxin, BRs and CK are involved in these physiological processes remains to be answered. In addition, leaf senescence is usually accelerated under nutrient stress in order to enhance the remobilization of nutrients from senescing leaves [8], but the exact roles of hormones such as ethylene, CK, jasmonate and auxin in nutrient stress-induced leaf senescence remain to be investigated.
Transcription factors are critical components mediating gene regulatory networks under Pi stress. In the present study, 31 transcription factor genes belonging to 10 diverse families, including MYB/MYB-like, bHLH, WRKY and ERF, were found to be responsive to Pi deprivation (Table 5). Many of these transcription factor families were previously found to be responsive to Pi starvation and to mediate transcriptional regulation of Pi-responsive genes in plants [21,71-74]. The identification of these short-term Pi stress-responsive transcription factors may provide preliminary evidence for further characterization of their functions in early Pi stress signaling.
Many genes possibly involved in protein phosphorylation and dephosphorylation are transcriptionally affected by short-term Pi stress (Figure 4). This result is consistent with previous reports in Arabidopsis demonstrating that many kinds of kinase and phosphatase genes are differentially expressed upon Pi deficiency [21,22,75]. Protein phosphorylation and dephosphorylation are linked with phosphate transporter trafficking and metabolic acclimation under Pi deficiency stress [76,77]. Recently, the rice kinases CK2 and PSTOL1 were shown to be involved in regulating Pi stress response and tolerance [78,79]. The Arabidopsis plasma membrane-localized receptor-like kinase BIK1 (botrytis-induced kinase 1) and the MKK9-MPK3/MPK6 (mitogen-activated protein kinase kinase 9-mitogen-activated protein kinase 3/6) cascade were also shown to function in Pi signaling [80,81]. Moreover, some kinases, such as CIPKs, can regulate the activities of transporters of nutrients such as nitrate and potassium [82,83]. It would be interesting to examine the roles of Pi-responsive kinases and phosphatases in early Pi stress signaling.
Many of the genes affected by short-term Pi deprivation are associated with metabolism, including lipid metabolism, cell wall degradation and modification, carbohydrate metabolism and amino acid metabolism (Figure 5). Earlier studies also demonstrated such primary and secondary metabolic changes upon medium-term or long-term Pi deficiency in plants [21,22,27]. Thus, changes in the expression of metabolism-related genes are present from the early stage of Pi stress. Cell walls have crucial functions in regulating the rate and direction of growth and in determining the morphology of plant cells and organs, and synthesis and remodeling of the cell wall have been documented to be associated with acclimation to many kinds of environmental stresses [84].
In conclusion, our RNA-seq analysis revealed an early transcriptomic response of soybean leaves to Pi deprivation, suggesting an intricate regulatory network of signaling components upon short-term Pi stress. Quick changes in the transcript levels of various genes allow the plant to properly and accurately acclimate to Pi limitation conditions. Although the exact roles of these early Pi stress-responsive genes remain to be investigated, our data provide a platform for further functional characterizations of these genes in Pi stress sensing, signaling and tolerance.
Plant Material and Growth Conditions
Williams 82 is the soybean cultivar used for the reference soybean genome [53]. Soybean seeds (Glycine max var. Williams 82), kindly provided by Prof. Haijian Zhi from Nanjing Agricultural University, were soaked in sterilized water for 4 h and then incubated at room temperature in the dark between two layers of moistened filter paper. Four days later, seedlings were transferred and grown hydroponically in a 10 L tank filled with a half-strength modified Hoagland nutrient solution containing 2.5 mM Ca(NO3)2 […].
Pi Concentration Determination
Pi concentrations were analyzed as described previously [35]. About 0.2 g of fresh tissue frozen in liquid nitrogen was ground into a fine powder and suspended in extraction buffer (10 mM Tris, 1 mM EDTA, 100 mM NaCl and 1 mM β-mercaptoethanol, pH 8.0) at a ratio of 1 mg fresh weight to 10 µL extraction buffer. A total of 100 µL of sample suspension was mixed with 900 µL of 1% glacial acetic acid and incubated at 42 °C for 30 min. The suspension was then centrifuged at 13,000× g for 10 min, and 500 µL of the supernatant was used for the Pi quantitation assay. The reaction mixture, containing 1000 µL of Pi assay solution (0.34% (NH4)6Mo7O24·4H2O, 0.46 M H2SO4 and 1.4% ascorbic acid) and 500 µL of supernatant, was incubated at 42 °C for 30 min and cooled on ice, and the absorbance at 820 nm was measured using a UV-Vis spectrophotometer (Thermo Scientific BioMate 3S, Chino, CA, USA).
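As a worked illustration of the quantitation step, the sketch below converts A820 readings to Pi content via a linear phosphate standard curve. The calibration values and sample parameters are hypothetical, since the paper does not report its standard curve.

    import numpy as np

    # Hypothetical standard curve: nmol Pi per reaction vs. A820.
    std_nmol = np.array([0.0, 10.0, 25.0, 50.0, 100.0, 200.0])
    std_a820 = np.array([0.01, 0.06, 0.14, 0.27, 0.53, 1.05])
    slope, intercept = np.polyfit(std_nmol, std_a820, 1)

    def pi_per_mg(a820, dilution=10.0, fresh_weight_mg=20.0):
        """nmol Pi per mg fresh weight for one 500-uL supernatant aliquot."""
        nmol = (a820 - intercept) / slope   # back-calculate from the linear fit
        return nmol * dilution / fresh_weight_mg

    print(pi_per_mg(0.31))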
RNA Isolation, Library Construction and RNA Sequencing
Soybean leaves (the first trifoliate true leaves) were collected after Pi-deprivation treatment for 24 h. Four samples (two biological replicates each of Pi-deprived and Pi-sufficient leaves) were used for mRNA library construction and sequencing. Each biological replicate was sampled from three different randomly-selected plants. Total RNA was extracted using Trizol reagent (Invitrogen, Carlsbad, CA, USA) following the manufacturer's procedure. Total RNA quantity and purity were determined with an Agilent Bioanalyzer 2100 and the RNA 6000 Nano LabChip Kit (Agilent, Santa Clara, CA, USA); the RIN (RNA integrity number) of all RNA samples was above 7.0. Approximately 10 µg of total RNA was subjected to poly(A) mRNA isolation using poly-T oligo-attached magnetic beads (Invitrogen). After purification, the mRNA was fragmented into small pieces using divalent cations at elevated temperature. The cleaved RNA fragments were reverse-transcribed to create the final cDNA library in accordance with the protocol of the mRNA-Seq sample preparation kit (Illumina, San Diego, CA, USA). The average insert size of the paired-end libraries was 300 ± 50 bp. Paired-end sequencing was then performed on an Illumina HiSeq 2000 at LC-BIO TECHNOLOGIES (Hangzhou, China) following Illumina's instructions.
RNA-Seq Reads Mapping and Differential Counting
Initial base calling and quality filtering of the reads generated by the Illumina analysis pipeline (Fastq format) were implemented using a custom Perl script and the default parameters of the Illumina pipeline (http://www.illumina.com). Additional filtering of poor-quality bases was carried out using the FASTX-Toolkit, with read quality assessed by FastQC (http://www.bioinformatics.babraham.ac.uk/projects/fastqc/). To facilitate read mapping, the Glycine max reference genome (version Gmax2.0, http://www.phytozome.net) was indexed with Bowtie2 [85]. Read mapping was conducted with the TopHat software package [86], allowing multiple alignments per read (up to 40) and a maximum of two mismatches when mapping the reads to the reference genome. Reads were first mapped directly to the genome using the index, and the unmapped reads were then used to identify novel splicing events. The aligned read files were processed by Cufflinks, which measures the relative abundances of transcripts using normalized RNA-seq fragment counts [87]. The estimated abundance of genes was measured in terms of fragments per kilobase of transcript per million mapped reads (FPKM). Differentially expressed genes (DEGs) between the two sets of samples were identified using Cuffdiff [87]. Only genes with a log2 fold change ≥ 1 or ≤ −1 and a p-value ≤ 0.05 were considered significant DEGs.
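The DEG-calling criterion can be expressed compactly; the following sketch applies the stated thresholds (|log2 fold change| ≥ 1, p ≤ 0.05) to a Cuffdiff gene_exp.diff table, whose column names follow Cuffdiff's standard output format.

    import pandas as pd

    # Cuffdiff's standard gene_exp.diff output (tab-separated).
    diff = pd.read_csv("gene_exp.diff", sep="\t")

    significant = diff[diff["p_value"] <= 0.05]
    up = significant[significant["log2(fold_change)"] >= 1]
    down = significant[significant["log2(fold_change)"] <= -1]

    print(len(up), "up-regulated and", len(down), "down-regulated DEGs")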
The datasets were deposited in NCBI's Gene Expression Omnibus and are accessible through GEO accession number GSE104286 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE104286) (the secure token for review is mbuhcgoylbanjyx).
Quantitative RT-PCR Analysis
Total RNA was extracted from soybean tissues using the RNApure Plant Kit (with DNase I) (CoWin Biotech, Beijing, China) and digested with DNase I to eliminate genomic DNA contamination according to the manufacturer's instructions. cDNA was synthesized from 1.0 µg total RNA in a 20 µL reaction with SuperRT Reverse Transcriptase (CoWin Biotech) using oligo(dT) primers. Quantitative RT-PCR (qRT-PCR) was performed on a MyiQ Single Color Real-Time PCR system (Bio-Rad, Hercules, CA, USA) as described previously [88]. Briefly, 2 µL of a 1/10 dilution of cDNA in water was added to 10 µL of 2× UltraSYBR (with Rox) (CoWin Biotech) together with two gene-specific primers (200 nM each), and the final volume was brought to 20 µL with DNase-free water. The PCR program was as follows: 95 °C for 10 min, then 40 cycles of 95 °C for 15 s and 60 °C for 60 s. Amplifications were run in triplicate together with no-template and no-reverse-transcription controls for each of the examined genes. Relative expression levels were normalized to that of the internal control ACTIN11 (Glyma.18g290800) using the Pfaffl method; primers are listed in Table S16.
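The Pfaffl normalization itself is a one-line formula; a minimal sketch, with illustrative Ct values and primer efficiencies rather than the paper's measurements, is given below.

    def pfaffl_ratio(e_target, dct_target, e_ref, dct_ref):
        """Pfaffl (2001) ratio: E_t**dCt_t / E_r**dCt_r,
        with dCt = Ct(control) - Ct(treated) for each gene."""
        return (e_target ** dct_target) / (e_ref ** dct_ref)

    # Illustrative values: target efficiency 2.0, ACTIN11 efficiency 1.95;
    # dCt_target = 24.1 - 22.3, dCt_ref = 18.0 - 18.1 (made-up Ct values).
    print(pfaffl_ratio(2.0, 24.1 - 22.3, 1.95, 18.0 - 18.1))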
Functional Annotation and Gene Ontology (GO) Enrichment
The DEGs were annotated for gene ontology (GO) terms [90] and categorized into molecular function, cellular component and biological process categories. A gene enrichment test was performed on each of the gene lists to acquire the terms that were significantly enriched among the DEGs. Fisher's exact test, which is based on hyper-geometric distribution, was used to calculate the p-value. A GO category (http://geneontology.org/) or KEGG pathway (http://www.genome.jp/kegg/) with a p-value ≤ 0.05 was regarded as significantly enriched. GO and KEGG enrichment analyses were conducted with the help of LC-BIO company (Hangzhou, China). | 9,301.6 | 2018-07-01T00:00:00.000 | [
"Biology"
] |
Time-frequency analysis associated with the Laguerre wavelet transform
We define the localization operators associated with Laguerre wavelet transforms. Next, we prove the boundedness and compactness of these operators, which depend on a symbol and two admissible wavelets, on $L^p_\alpha(K)$, $1 \le p \le \infty$.
The theory of harmonic analysis on $L^p_{\mathrm{rad}}(H_d)$ was exploited by many authors (see [23,27,32]). When one considers problems of radial functions on the Heisenberg group $H_d$, the underlying manifold can be regarded as the Laguerre hypergroup $K := [0, \infty) \times \mathbb{R}$. Stempak [33] introduced a generalized translation operator on $K$ and established the theory of harmonic analysis on $L^2(K, d\nu_\alpha)$, where the weighted Lebesgue measure $\nu_\alpha$ on $K$ is given by
$$d\nu_\alpha(x,t) := \frac{x^{2\alpha+1}\,dx\,dt}{\pi\,\Gamma(\alpha+1)}, \qquad \alpha \ge 0.$$
In this paper we are interested in the Laguerre hypergroup $K$. We recall that $(K, *_\alpha)$ is a commutative hypergroup [29], on which the involution and the Haar measure are given, respectively, by the homeomorphism $(x,t) \mapsto (x,t)^- = (x,-t)$ and the positive Radon measure $d\nu_\alpha(x,t)$. The unit element of $(K, *_\alpha)$ is $e = (0,0)$.
In the classical setting, the notion of wavelets was first introduced by Morlet, a French petroleum engineer at Elf Aquitaine, in connection with his study of seismic traces. The mathematical foundations were given by Grossmann and Morlet in [18]. The harmonic analyst Meyer and many other mathematicians became aware of this theory and recognized many classical results inside it (see [6,21,26]). Classical wavelets have wide applications, ranging from signal analysis in geophysics and acoustics to quantum theory and pure mathematics (see [8,16] and the references therein).
Subsequently, the theory of wavelets and the continuous wavelet transform was extended to hypergroups, in particular to the Laguerre hypergroup (see [29,34]).
One of the aims of wavelet theory is the study of localization operators for the continuous wavelet transform.
Time-frequency localization operators are a mathematical tool to define a restriction of functions to a region in the time-frequency plane that is compatible with the uncertainty principle and to extract time-frequency features. In this sense, these operators have been introduced and studied by Daubechies [9,10,11] and Ramanathan and Topiwala [30], and they are now extensively investigated as an important mathematical tool in signal analysis and other applications [17,12,13,35,7].
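For orientation, in the classical time-frequency setting of Daubechies, the localization operator attached to a symbol $\sigma$ on phase space and a window $\varphi$ takes the schematic form (generic notation, not this paper's):
$$L_\sigma f = \int_{\mathbb{R}^{2d}} \sigma(x,\omega)\, \langle f, \varphi_{x,\omega}\rangle\, \varphi_{x,\omega}\, dx\, d\omega, \qquad \varphi_{x,\omega}(y) = e^{i\omega \cdot y}\, \varphi(y - x),$$
so that $\sigma$ weights how much of the signal's content near the phase-space point $(x,\omega)$ is retained.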
As harmonic analysis on the Laguerre hypergroup has undergone remarkable development, it is natural to ask whether an analogue of the theory of localization operators for the continuous wavelet transform exists in this setting.
Using the properties of the generalized Fourier transform on the Laguerre hypergroup K, our main aim in this paper is to expose and study the two-wavelet localization operator on the Laguerre hypergroup.
The reason for the extension from one wavelet to two wavelets comes from the extra degree of flexibility in signal analysis and imaging when the localization operators are used as time-varying filters. It turns out that localization operators with two admissible wavelets have a richer mathematical structure than their one-wavelet analogues.
The remainder of this paper is arranged as follows. Section 2 contains some basic facts about the Laguerre hypergroup, its dual, and the Schatten-von Neumann classes. In Section 3 we introduce and study the two-wavelet localization operators in the setting of the Laguerre hypergroup. More precisely, the Schatten-von Neumann properties of these two localization wavelet operators are established, and for trace class Laguerre two-wavelet localization operators, the traces and the trace class norm inequalities are presented. Section 4 is devoted to proving that under suitable conditions on the symbols and two admissible wavelets, the L p boundedness and compactness of these two-wavelet localization operators hold.
Preliminaries
In this section we set some notation and we recall some basic results in harmonic analysis related to Laguerre hypergroups and Schatten-von Neumann classes. The main references are [29,35].
• $C_*(K)$ is the space of continuous functions on $\mathbb{R}^2$, even with respect to the first variable.
• $C_{*,c}(K)$ is the subspace of $C_*(K)$ formed by functions with compact support.
$L_m^{(\alpha)}$ being the Laguerre polynomial of degree $m$ and order $\alpha$.
• $\hat K := \mathbb{R} \times \mathbb{N}$ is equipped with the weighted Lebesgue measure $\gamma_\alpha$ on $\hat K$ given by […]. It is well known (see [29]) that for all $(\lambda, m) \in \hat K$, the system […], where $D_1$ and $D_2$ are singular partial differential operators given by […]. The harmonic analysis on the Laguerre hypergroup $K$ is generated by the singular operator […], $(x,t) \in K$, while its dual $\hat K$ is generated by the differential-difference operator […], where the operators $\Lambda_1$, $\Lambda_2$ are given, for a suitable function $g$ on $\hat K$, by […], and the difference operators $\Delta_+$, $\Delta_-$ are given, for a suitable function $g$ on $\hat K$, by […]. These operators satisfy some basic properties which can be found in [29,2]. Definition 2.1. Let $f \in C_{*,c}(K)$. For all $(x,t)$ and $(y,s)$ in $K$, we put (2.1) […], where $\langle x, y \rangle_{r,\theta} := \sqrt{x^2 + y^2 + 2xyr\cos\theta}$. The operators $\tau_{(x,t)}$ […]
Notation:
• $S_*(K)$ is the space of functions $f : \mathbb{R}^2 \to \mathbb{C}$, even with respect to the first variable, $C^\infty$ on $\mathbb{R}^2$ and rapidly decreasing together with all their derivatives, i.e., for all $k, p, q \in \mathbb{N}$ the semi-norms $N_{k,p,q}(f)$ […] are finite. Equipped with the topology defined by the semi-norms $N_{k,p,q}$, $S_*(K)$ is a Fréchet space.
• $S(\hat K)$ is the space of functions $g : \hat K \to \mathbb{C}$ such that (i) for all $m, p, q, r, s \in \mathbb{N}$, the function […] is bounded and continuous on $\mathbb{R}$, $C^\infty$ on $\mathbb{R}^*$, and such that the left and right derivatives at zero exist.
Equipped with the topology defined by the semi-norms $\nu_{k,p,q}$, $S(\hat K)$ is a Fréchet space.
(ii) The generalized Fourier transform $\mathcal{F}_\alpha$ extends to an isometric isomorphism from $L^2_\alpha(K)$ onto $L^2(\hat K, d\gamma_\alpha)$. Corollary 2.11. For all $f$ and $g$ in $L^2_\alpha(K)$ we have the following Parseval formula for the generalized Fourier transform $\mathcal{F}_\alpha$:
$$\int_K f(x,t)\,\overline{g(x,t)}\,d\nu_\alpha(x,t) = \int_{\hat K} \mathcal{F}_\alpha(f)(\lambda,m)\,\overline{\mathcal{F}_\alpha(g)(\lambda,m)}\,d\gamma_\alpha(\lambda,m).$$
Schatten-von Neumann classes. Notation:
• $\ell^p(\mathbb{N})$, $1 \le p \le \infty$, is the set of all infinite sequences of real (or complex) numbers $u := (u_j)_{j \in \mathbb{N}}$ such that $\|u\|_p := \big(\sum_{j \in \mathbb{N}} |u_j|^p\big)^{1/p} < \infty$. For $p = 2$, we provide the space $\ell^2(\mathbb{N})$ with the scalar product $\langle u, v\rangle := \sum_{j \in \mathbb{N}} u_j \overline{v_j}$. (ii) For $1 \le p < \infty$, the Schatten class $S_p$ is the space of all compact operators whose singular values lie in $\ell^p(\mathbb{N})$. The space $S_p$ is equipped with the norm $\|A\|_{S_p} := \big(\sum_j s_j(A)^p\big)^{1/p}$, where $(s_j(A))_j$ are the singular values of $A$. Remark 2.14. We note that $S_2$ is the space of Hilbert-Schmidt operators, and $S_1$ is the space of trace class operators.
Definition 2.15. The trace of an operator $A$ in $S_1$ is defined by
$$\operatorname{tr}(A) := \sum_{n \in \mathbb{N}} \langle A v_n, v_n\rangle_{L^2_\alpha(K)}$$
for any orthonormal basis $(v_n)_n$ of $L^2_\alpha(K)$. Definition 2.17. We define $S_\infty := B(L^2_\alpha(K))$, equipped with the operator norm.
2.3. Basic Laguerre wavelet theory. In this subsection we recall some results introduced in [29].
Definition 2.18. A Laguerre wavelet on $K$ is a measurable function $h$ on $K$ satisfying, for almost all $(\lambda, m) \in \hat K \setminus \{(0,0)\}$, the admissibility condition […], where the measure $\mu_\alpha$ is defined by […]. Let $a \in \mathbb{R} \setminus \{0\}$ and let $h$ be a measurable function. We consider the function $h_a$ defined by […], where $\tau_{(x,t)}$ are the generalized translation operators given by (2.1).
This transform can also be written in the form […], where $\tilde f(x,t) = f(x,-t)$ and $*_\alpha$ is the generalized convolution product given by (2.2).
Laguerre two-wavelet localization operators
In this section we will derive a host of sufficient conditions for the boundedness and Schatten class of the Laguerre two-wavelet localization operators in terms of properties of the symbol σ and the windows h and k.
Preliminaries.
Definition 3.1. Let $h, k$ be measurable functions on $K$, and let $\sigma$ be a measurable function on $\mathbb{R} \times K$. We define $L_{h,k}(\sigma)$, the Laguerre two-wavelet localization operator on $L^p_\alpha(K)$, $1 \le p \le \infty$, by […]. According to the different choices of the symbol $\sigma$ and the different continuity requirements, we need to impose different conditions on $h$ and $k$; we then obtain an operator on $L^p_\alpha(K)$. It is often more convenient to interpret the definition of $L_{h,k}(\sigma)$ in a weak sense (3.2): for $f$ in $L^p_\alpha(K)$, $p \in [1,\infty]$, and $g$ in $L^{p'}_\alpha(K)$, […]. In what follows, such an operator $L_{h,k}(\sigma)$ will simply be called a localization operator. Formally, for $p \in [1,\infty)$, we assume that we have […]
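The display defining (3.2) was lost in extraction; a plausible reconstruction, patterned on two-wavelet localization operators in related settings and on the wavelet transforms $\Phi^\alpha_h$, $\Phi^\alpha_k$ and the measure $\mu_\alpha$ used throughout this section, reads:
$$\langle L_{h,k}(\sigma) f, g\rangle_{L^2_\alpha(K)} = \int_{\mathbb{R} \times K} \sigma(a,x,t)\, \Phi^\alpha_h f(a,x,t)\, \overline{\Phi^\alpha_k g(a,x,t)}\, d\mu_\alpha(a,x,t),$$
where $\Phi^\alpha_h$ denotes the Laguerre continuous wavelet transform with wavelet $h$.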
Proposition 3.2. Let
Then its adjoint is the linear operator $L_{k,h}(\sigma) : L^{p'}_\alpha(K) \to L^{p'}_\alpha(K)$. Proof. For all $f$ in $L^p_\alpha(K)$ and $g$ in $L^{p'}_\alpha(K)$, this follows immediately from (3.2). In the rest of this section, $h$ and $k$ will be two Laguerre wavelets on $K$ such that […]. The main result of this subsection is the proof that the linear operators are bounded: we first consider this problem for $\sigma$ in $L^1_{\mu_\alpha}(\mathbb{R} \times K)$ and next in $L^\infty_{\mu_\alpha}(\mathbb{R} \times K)$, and we then conclude by using interpolation theory.
Proof. For all functions f and g in L 2 α (K), we have from relations (3.2) and (2.10), For all functions f and g in L 2 α (K), we have from Hölder's inequality . Using Plancherel's formula for Φ α h and Φ α k , given by relation (2.9), we get We can now associate a localization operator L h,k (σ) : The precise result is the following theorem.
. We consider the operator
Then, by Proposition 3.3 and Proposition 3.4,
and Since (3.7) is true for arbitrary functions f in L 2 α (K), we obtain the desired result. α (a, x, t). Proof. Let σ be in L p µα (R × K) and let (σ n ) n∈N be a sequence of functions in
Schatten-von Neumann properties for L h,k (σ). The main result of this subsection is the proof that the localization operator
such that σ n → σ in L p µα (R × K) as n → ∞. Then by Theorem 3.5, On the other hand, as by Proposition 3.6 L h,k (σ n ) is in S 2 and hence compact, it follows that L h,k (σ) is compact.
where σ is given by where s j , j = 1, 2, . . . , are the positive singular values of L h,k (σ) corresponding to φ j . Then, we get
Thus, by Fubini's theorem, Cauchy-Schwarz's inequality, Bessel's inequality, and relations (2.8) and (2.6), we get . We now prove that L h,k (σ) satisfies the first inequality of (3.9). It is easy to see that σ belongs to L 1 α (K), and using formula (3.10) we get Then from Fubini's theorem, we obtain Thus using Plancherel's formula for Φ α h , Φ α k we get The proof is complete. α (a, x, t).
In the following we give the main result of this subsection. α (a, x, t). Now we state a result concerning the trace of products of localization operators. Corollary 3.12. Let σ 1 and σ 2 be any real-valued and non-negative functions in L 1 µα (R × K). We assume that h = k and that h is a function in L 2 α (K) such that h L 2 α (K) = 1. Then, the localization operators L h,k (σ 1 ), L h,k (σ 2 ) are positive trace class operators and, for any natural number n, n S1 .
Proof. By Theorem 1 in Liu's paper [22], we know that if $A$ and $B$ are positive operators in the trace class $S_1$, then for all $n \in \mathbb{N}$, $\operatorname{tr}\big((AB)^n\big) \le \big(\operatorname{tr}(A)\big)^n \big(\operatorname{tr}(B)\big)^n$.
So, if we take A = L h,k (σ 1 ), B = L h,k (σ 2 ), and we invoke the previous remark, the desired result is obtained and the proof is complete.
4. $L^p_\alpha$ boundedness and compactness of $L_{h,k}(\sigma)$
In this section we derive a host of sufficient conditions for the boundedness and compactness of the localization operators $L_{h,k}(\sigma)$ on $L^p_\alpha(K)$, $1 \le p \le \infty$, in terms of properties of the symbol $\sigma$ and the windows $h$ and $k$.
Boundedness of
, and h ∈ L p α (K). We are going to show that L h,k (σ) is a bounded operator on L p α (K). Let us start with the following propositions.
For every function f in L 1 α (K), from Fubini's theorem and the relations (3.1), (2.11), and (2.7), we have Proof. Let f be in L ∞ α (K). As above, from Fubini's theorem and the relations (3.1), (2.11), and (2.7), we have . With a Schur technique, we can obtain an L p α -boundedness result as in the previous theorem, but the estimate for the norm L h,k (σ) B(L p α (K)) is cruder.
Then there exists a unique bounded linear operator (4.1) We have By simple calculations, it is easy to see that Thus by Schur's lemma (see [15]), we can conclude that L h,k (σ) : L p α (K) → L p α (K) is a bounded linear operator for 1 ≤ p ≤ ∞, and we have Proof. For any f ∈ L p α (K), consider the linear functional Using Fubini's theorem and the relation (2.11), we get Thus, the operator I f is a continuous linear functional on L p α (K), and the operator norm satisfies , which establishes the proposition.
Combining Proposition 4.1 and Proposition 4.7, we have the following theorem.
. We can now state and prove the main results in this subsection. Theorem 4.9. Let σ be in L r µα (R × K), r ∈ [1,2], and h, k ∈ L 1 Proof. Consider the linear functional By Proposition 4.1 and Theorem 3.5, we have and Therefore, by (4.2), (4.3), and the multi-linear interpolation theory (see [5,Section 10.1] for reference), we get a unique bounded linear operator By the definition of I, we have As the adjoint of L h,k (σ) is L k,h (σ), L h,k (σ) is a bounded linear map on L r α (K) with its operator norm satisfying where Using an interpolation of (4.4) and (4.5), we have that, for any p ∈ [r, r ], r+1 , 2r r−1 , and we have where In order to prove this theorem we need the following lemmas.
Then there exists a unique bounded linear operator Proof. Consider the linear functional Then by Proposition 4.1 and Theorem 3.5, and where · B(L p µα (R×K),B(L q α (K))) denotes the norm in the Banach space of the bounded linear operators from L p µα (R × K) into B(L q α (K)), 1 ≤ p, q ≤ ∞. Using an interpolation of (4.7) and (4.8) we get the result.
Proof. As the adjoint of is the bounded linear operator the result follows from duality and the previous lemma.
Proof. The proof follows from Theorem 4.8 and Theorem 3.5 with p = 1, q instead of p, and interpolation theory.
In the following we give two results for compactness of localization operators. Proof. The result is an immediate consequence of an interpolation of Corollary 3.10 and Proposition 4.14. See again [4, pp. 202-203] for the interpolation used. | 3,728.2 | 2021-03-01T00:00:00.000 | [
"Mathematics"
] |
First hitting time and place, monopoles and multipoles for pseudo-processes driven by the equation $\partial/\partial t = \pm\partial^N/\partial x^N$
Consider the high-order heat-type equation $\partial u/\partial t=\pm\partial^N u/\partial x^N$ for an integer $N>2$ and introduce the related Markov pseudo-process $(X(t))_{t\ge 0}$. In this paper, we study several functionals related to $(X(t))_{t\ge 0}$: the maximum $M(t)$ and minimum $m(t)$ up to time $t$; the hitting times $\tau_a^+$ and $\tau_a^-$ of the half lines $(a,+\infty)$ and $(-\infty,a)$ respectively. We provide explicit expressions for the distributions of the vectors $(X(t),M(t))$ and $(X(t),m(t))$, as well as those of the vectors $(\tau_a^+,X(\tau_a^+))$ and $(\tau_a^-,X(\tau_a^-))$.
Introduction
Let $N$ be an integer greater than 2 and consider the high-order heat-type equation
$$\frac{\partial u}{\partial t} = \kappa_N \frac{\partial^N u}{\partial x^N}, \qquad (1.1)$$
where $\kappa_N = (-1)^{1+N/2}$ if $N$ is even and $\kappa_N = \pm 1$ if $N$ is odd. Let $p(t; z)$ be the fundamental solution of Eq. (1.1) and put $p(t; x, y) = p(t; x - y)$.
The function $p$ is characterized by its Fourier transform:
$$\int_{-\infty}^{+\infty} e^{iuy}\, p(t; x, y)\, dy = e^{iux + \kappa_N t (iu)^N}. \qquad (1.2)$$
With Eq. (1.1) one associates a Markov pseudo-process $(X(t))_{t \ge 0}$ defined on the real line and governed by a signed measure $\mathbb{P}$, which is not a probability measure, according to the usual rules of ordinary stochastic processes: $\mathbb{P}_x\{X(t) \in dy\} = p(t; x, y)\, dy$ and, for $0 = t_0 < t_1 < \cdots < t_n$ and $x_0 = x$,
$$\mathbb{P}_x\{X(t_1) \in dx_1, \dots, X(t_n) \in dx_n\} = \prod_{j=1}^{n} p(t_j - t_{j-1}; x_{j-1}, x_j)\, dx_j.$$
Relation (1.2) reads, by means of the expectation associated with $\mathbb{P}$, $\mathbb{E}_x\big[e^{iuX(t)}\big] = e^{iux + \kappa_N t (iu)^N}$.
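As a quick numerical illustration of (1.2) (not part of the original paper), one can recover the signed kernel $p(t;\cdot)$ for $N = 4$ by discretizing the inverse Fourier integral; the grid parameters below are ad hoc.

    import numpy as np

    N, kappa, t = 4, -1.0, 1.0                 # kappa_4 = (-1)**(1 + 4/2) = -1
    u = np.linspace(-30.0, 30.0, 8001)         # frequency grid (ad hoc)
    x = np.linspace(-6.0, 6.0, 241)

    phat = np.exp(kappa * t * (1j * u) ** N)   # Fourier transform from (1.2), x = 0
    kernel = np.trapz(np.exp(-1j * np.outer(x, u)) * phat, u, axis=1).real / (2 * np.pi)

    print("total mass ~", np.trapz(kernel, x))  # close to 1
    print("min of p   ~", kernel.min())         # negative lobes: a signed 'density'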
Such pseudo-processes have been considered by several authors, especially in the particular cases $N = 3$ and $N = 4$. The case $N = 4$ is related to the biharmonic operator $\partial^4/\partial x^4$. Few results are known in the case $N > 4$. Let us mention that for $N = 2$ the pseudo-process considered here is a genuine stochastic process (i.e., driven by a genuine probability measure), namely the well-known Brownian motion.
The following problems have been tackled:
• Analytical study of the sample paths of this pseudo-process: Hochberg [8] defined a stochastic integral (see also Motoo [14] in higher dimension) and proposed an Itô formula based on the correspondence $dx^4 = dt$; he obtained a formula for the distribution of the maximum over $[0,t]$ in the case $N = 4$, with an extension to the even-order case. Notably, the sample paths do not seem to be continuous in the case $N = 4$;
• Study of the sojourn time spent on the positive half-line up to time $t$, $T(t) = \operatorname{meas}\{s \in [0,t] : X(s) > 0\} = \int_0^t \mathbf{1}_{\{X(s) > 0\}}\, ds$: Krylov [11], Orsingher [20], Hochberg and Orsingher [9], Nikitin and Orsingher [16] and Lachal [12] explicitly obtained the distribution of $T(t)$ (with possible conditioning on the events $\{X(t) > 0\}$, $\{X(t) = 0\}$ or $\{X(t) < 0\}$). The sojourn time is useful for defining local times related to the pseudo-process $X$, see Beghin and Orsingher [1];
• Study of the maximum and minimum functionals $M(t) = \max_{0 \le s \le t} X(s)$ and $m(t) = \min_{0 \le s \le t} X(s)$: Hochberg [8], Beghin et al. [2,3] and Lachal [12] explicitly derived the distribution of $M(t)$ and that of $m(t)$ (with possible conditioning on some values of $X(t)$);
• Study of the couple $(X(t), M(t))$: Beghin et al. [20] wrote out several formulas for the joint distribution of $X(t)$ and $M(t)$ in the cases $N = 3$ and $N = 4$;
• Study of the first time the pseudo-process $(X(t))_{t \ge 0}$ overshoots a level $a > 0$, $\tau_a^+ = \inf\{t \ge 0 : X(t) > a\}$: Nishioka [17,18] and Nakajima and Sato [15] adopted a distributional approach (in the sense of Schwartz distributions) and explicitly obtained the joint distribution of $\tau_a^+$ and $X(\tau_a^+)$ (with possible drift) in the case $N = 4$. The quantity $X(\tau_a^+)$ is the first hitting place of the half-line $[a, +\infty)$. Nishioka [19] then studied killing, reflecting and absorbing pseudo-processes;
• Study of the last time before becoming definitively negative up to time $t$, $O(t) = \sup\{s \in [0,t] : X(s) > 0\}$: Lachal [12] derived the distribution of $O(t)$;
• Study of Equation (1.1) in the case $N = 4$ from other points of view: Funaki [6], and next Hochberg and Orsingher [10], exhibited relationships with compound processes, namely iterated Brownian motion, while Benachour et al. [4] provided other probabilistic interpretations. See also the references therein.
The aim of this paper is to study the problem of the first times straddling a fixed level $a$ (equivalently, the first hitting times of the half-lines $(a,+\infty)$ and $(-\infty,a)$): $\tau_a^+ = \inf\{t \ge 0 : X(t) > a\}$ and $\tau_a^- = \inf\{t \ge 0 : X(t) < a\}$, with the convention $\inf(\emptyset) = +\infty$. In the spirit of the method developed by Nishioka in the case $N = 4$, we explicitly compute the joint "signed-distributions" (simply called "distributions" throughout the paper for short) of the vectors $(X(t), M(t))$ and $(X(t), m(t))$, from which we deduce those of the vectors $(\tau_a^+, X(\tau_a^+))$ and $(\tau_a^-, X(\tau_a^-))$. The method consists of several steps:
• Defining a step-process by sampling the pseudo-process $(X(t))_{t \ge 0}$ at dyadic times $t_{n,k} = k/2^n$, $k \in \mathbb{N}$;
• Observing that the classical Spitzer identity holds for any signed measure, provided the total mass equals one, and then using this identity for deriving the distribution of $(X(t_{n,k}), \max_{0 \le j \le k} X(t_{n,j}))$ through its Laplace-Fourier transform by means of that of $X(t_{n,k})^+$, where $x^+ = \max(x, 0)$;
• Expressing the time $\tau_a^+$ (for instance) related to the sampled process $(X(t_{n,k}))_{k \in \mathbb{N}}$ by means of $(X(t_{n,k}), \max_{0 \le j \le k} X(t_{n,j}))$;
• Passing to the limit as $n \to +\infty$.
Meaningfully, we have obtained that the distributions of the hitting places $X(\tau_a^+)$ and $X(\tau_a^-)$ are linear combinations of the successive derivatives of the Dirac distribution $\delta_a$. In the case $N = 4$, Nishioka [17] already found a linear combination of $\delta_a$ and $\delta_a'$ and called the corresponding parts "monopole" and "dipole" respectively, considering that an electric dipole carrying two opposite charges $\delta_{a+\varepsilon}$ and $\delta_{a-\varepsilon}$, with the distance between them tending to 0, may be viewed as one monopole with charge $\delta_a'$. In the general case, we shall speak of "multipoles".
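The dipole heuristic can be checked directly in the sense of Schwartz distributions: for a smooth test function $\varphi$,
$$\Big\langle \frac{\delta_{a-\varepsilon} - \delta_{a+\varepsilon}}{2\varepsilon},\, \varphi \Big\rangle = \frac{\varphi(a-\varepsilon) - \varphi(a+\varepsilon)}{2\varepsilon} \longrightarrow -\varphi'(a) = \langle \delta_a', \varphi\rangle \quad (\varepsilon \to 0),$$
so the two opposite charges, rescaled by $1/(2\varepsilon)$, indeed converge to the dipole $\delta_a'$.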
Nishioka [18] used precise estimates to carry out a rigorous analysis of the pseudo-process corresponding to the case $N = 4$. The key fact enabling such estimates is that the integral of the density $p$ is absolutely convergent; actually, this holds for any even integer $N$. When $N$ is an odd integer, the integral of $p$ is not absolutely convergent, so similar estimates cannot be obtained; this makes the study of $X$ much harder in this case. Nevertheless, we have found, formally at least, remarkable formulas which agree with those of Beghin et al. [2,3] in the case $N = 3$, obtained there by a Feynman-Kac approach and by solving differential equations. We also mention some similar differential equations for any $N$. So we conjecture that our formulas hold for any odd integer $N \ge 3$. Perhaps a distributional definition (in the sense of Schwartz distributions, since the heat kernel is locally integrable) of the pseudo-process $X$ might provide a proper justification confirming our results. We shall not tackle this question here.
The paper is organized as follows: in Section 2, we write down general notations and recall some known results. In Section 3, we construct the step-process deduced from (X(t)) t 0 by sampling this latter on dyadic times. Section 4 is devoted to the distributions of the vectors (X(t), M (t)) and (X(t), m(t)) with the aid of Spitzer identity. Section 5 deals with the distributions of the vectors (τ + a , X(τ + a )) and (τ − a , X(τ − a )) which can be expressed by means of those of (X(t), M (t)) and (X(t), m(t)). Each section is completed by an illustration of the displayed results therein to the particular cases N ∈ {2, 3, 4}.
We finally mention that the most important results have been announced, without details, in a short Note [13].
Settings
The relation $\int_{-\infty}^{+\infty} p(t;\xi)\, d\xi = 1$ holds for all integers $N$. Moreover, if $N$ is even, the integral is absolutely convergent (see [12]) and we put $\rho := \int_{-\infty}^{+\infty} |p(t;\xi)|\, d\xi$. Notice that $\rho$ does not depend on $t$, since $p(t;\xi) = t^{-1/N}\, p(1; \xi/t^{1/N})$. For odd $N$, the integral of $p$ is not absolutely convergent; in this case $\rho = +\infty$.
N th roots of κ N
We shall have to consider the $N$th roots of $\kappa_N$ ($\theta_l$ for $0 \le l \le N-1$, say) and distinguish the indices $l$ such that $\Re\theta_l < 0$ and $\Re\theta_l > 0$ (one never has $\Re\theta_l = 0$). So, let us introduce the corresponding sets of indices $J$ and $K$. If $N = 2p$, the numbers of elements of the sets $J$ and $K$ are $\#J = \#K = p$.
If $N = 2p+1$, two cases must be considered: $\#J = p$ and $\#K = p+1$ if $p$ is even, and $\#J = p+1$ and $\#K = p$ if $p$ is odd. Figure 1 illustrates the different cases.
Recalling some known results
We recall from [12] the expressions of the kernel $p(t;\xi)$ together with its Laplace transform (the so-called $\lambda$-potential of the pseudo-process $(X(t))_{t \ge 0}$), for $\lambda > 0$ […]. We also recall (see the proof of Proposition 4 of [12]) […], and we recall the expressions of the distributions of $M(t)$ and $m(t)$ below.
• Concerning the densities: the following result will be used further. Expanding into partial fractions yields, for any polynomial $P$ with $\deg P \le \#J$, formula (2.9).
• Applying (2.9) to $x = 0$ and $P = 1$ gives $\sum_{j \in J} A_j = \sum_{k \in K} B_k = 1$. Actually, the $A_j$'s and $B_k$'s are solutions of a Vandermonde system (see [12]).
• Applying (2.9) to $x = \theta_k$, $k \in K$, and $P = 1$ gives a relation which simplifies, by (2.8), into (2.10) (and similarly for the $B_k$'s).
• Applying (2.9) to x = θ k , k ∈ K, and P = 1 gives which simplifies, by (2.8), into (and also for the B k 's) (2.10) • Applying (2.9) to P = x p , p #J, gives, by observing that 1/θ j =θ j , Step-process In this part, we proceed to sampling the pseudo-process X = (X(t)) t 0 on the dyadic times t n,k = k/2 n , k, n ∈ N and we introduce the corresponding step-process X n = (X n (t)) t 0 defined for any n ∈ N by The quantity X n is a function of discrete observations of X at times t n,k , k ∈ N.
For the convenience of the reader, we recall the definitions of tame functions, functions of discrete observations, and admissible functions introduced by Nishioka [18] in the case $N = 4$. Definition 3.1. Fix $n \in \mathbb{N}$. A tame function is a function of a finite number of observations of the pseudo-process $X$ at times $t_{n,j}$, $1 \le j \le k$, that is, a quantity of the form $F_{n,k} = F(X(t_{n,1}), \dots, X(t_{n,k}))$ for a certain $k$ and a certain bounded Borel function $F : \mathbb{R}^k \to \mathbb{C}$. The "expectation" of $F_{n,k}$ is defined as […]; we plainly have the inequality […]. Definition 3.2. Fix $n \in \mathbb{N}$. A function of the discrete observations of $X$ at times $t_{n,k}$, $k \ge 1$, is a convergent series of tame functions: $F_{X_n} = \sum_{k=1}^{\infty} F_{n,k}$, where $F_{n,k}$ is a tame function for all $k \ge 1$. Assuming the series $\sum_{k=1}^{\infty} |\mathbb{E}_x(F_{n,k})|$ convergent, the "expectation" of $F_{X_n}$ is defined as […]. The definition of the expectation is consistent in the sense that it does not depend on the representation of $F_{X_n}$ as a series (see [18]).
Definition 3.3
An admissible function is a functional $F_X$ of the pseudo-process $X$ which is the limit of a sequence $(F_{X_n})_{n \in \mathbb{N}}$ of functions of discrete observations of $X$ such that the sequence $(\mathbb{E}_x(F_{X_n}))_{n \in \mathbb{N}}$ is convergent. The "expectation" of $F_X$ is defined as $\mathbb{E}_x(F_X) := \lim_{n \to \infty} \mathbb{E}_x(F_{X_n})$. This definition eludes the difficulty due to the lack of $\sigma$-additivity of the signed measure $\mathbb{P}$.
On the other hand, any bounded Borel function of a finite number of observations of $X$ at any times (not necessarily dyadic) $t_1 < \cdots < t_k$ is admissible, and it can be seen that, according to Definitions 3.1, 3.2 and 3.3, its expectation coincides with the usual one.
For any pseudo-process $Z = (Z(t))_{t \ge 0}$, consider the functional defined, for $\lambda \in \mathbb{C}$ such that $\Re(\lambda) > 0$, $\mu \in \mathbb{R}$, $\nu > 0$, by […], where $H_Z$, $K_Z$, $I_Z$ are functionals of $Z$ defined on $[0, +\infty)$, $K_Z$ being non-negative and $I_Z$ bounded; we suppose that, for all $t \ge 0$, $H_Z(t)$, $K_Z(t)$, $I_Z(t)$ are functions of the continuous observations $Z(s)$, $0 \le s \le t$ (that is, the observations of $Z$ up to time $t$). For $Z = X_n$, we have $F_{X_n}(\lambda,\mu,\nu) = \sum_{k=0}^{\infty} e^{-\lambda t_{n,k} + i\mu H_{X_n}(t_{n,k}) - \nu K_{X_n}(t_{n,k})}\, I_{X_n}(t_{n,k})$.
Since $H_{X_n}(t_{n,k})$, $K_{X_n}(t_{n,k})$, $I_{X_n}(t_{n,k})$ are functions of $X_n(t_{n,j}) = X(t_{n,j})$, $0 \le j \le k$, the quantity $e^{i\mu H_{X_n}(t_{n,k}) - \nu K_{X_n}(t_{n,k})}\, I_{X_n}(t_{n,k})$ is a tame function and the series in (3.2) is a function of discrete observations of $X$. If the series $\sum_{k=0}^{\infty} \mathbb{E}_x\big[e^{-\lambda t_{n,k} + i\mu H_{X_n}(t_{n,k}) - \nu K_{X_n}(t_{n,k})}\, I_{X_n}(t_{n,k})\big]$ converges, the expectation of $F_{X_n}(\lambda,\mu,\nu)$ is defined according to Definition 3.2. Finally, if $\lim_{n \to +\infty} F_{X_n}(\lambda,\mu,\nu) = F_X(\lambda,\mu,\nu)$ and if the limit of $\mathbb{E}_x[F_{X_n}(\lambda,\mu,\nu)]$ exists as $n$ goes to $\infty$, then $F_X(\lambda,\mu,\nu)$ is an admissible function and its expectation is defined according to Definition 3.3.
4. Distributions of $(X(t), M(t))$ and $(X(t), m(t))$
We assume that $N$ is even. In this section, we derive the Laplace-Fourier transforms of the vectors $(X(t), M(t))$ and $(X(t), m(t))$ by using the Spitzer identity (Subsection 4.1), from which we deduce the densities of these vectors by successively inverting the Laplace-Fourier transforms three times (Subsection 4.2). Next, we write out the formulas corresponding to the particular cases $N \in \{2,3,4\}$ (Subsection 4.3). Finally, we compute the distribution functions of the vectors $(X(t), m(t))$ and $(X(t), M(t))$ (Subsection 4.4) and write out the formulas associated with $N \in \{2,3,4\}$ (Subsection 4.5). Although $N$ is assumed to be even, all the formulas obtained in this part, when $N$ is replaced by 3, lead to some well-known formulas in the literature.
The functional $F^+_{X_n}(\lambda,\mu,\nu)$ is a function of discrete observations of $X$. Our aim is to compute its expectation, that is, to compute the expectation of the above series and next to take the limit as $n$ goes to infinity. For this, we observe, using the Markov property, that if $\Re(\lambda)>2^n\ln\rho$, the series $\sum_k\mathbb{E}_x\big[e^{-\lambda t_{n,k}+i\mu X_{n,k}-\nu M_{n,k}}\big]$ is absolutely convergent, and then we can write the expectation of $F^+_{X_n}(\lambda,\mu,\nu)$ as $\sum_k e^{-\lambda t_{n,k}}\,\varphi^+_{n,k}(\mu,\nu;x)$ for $\Re(\lambda)>2^n\ln\rho$ (4.2), with $\varphi^+_{n,k}(\mu,\nu;x)=\mathbb{E}_x\big[e^{i\mu X_{n,k}-\nu M_{n,k}}\big]=e^{(i\mu-\nu)x}\,\mathbb{E}_0\big[e^{-(\nu-i\mu)M_{n,k}-i\mu(M_{n,k}-X_{n,k})}\big]$.
However, because of the domain of validity of (4.2), we cannot directly take the limit as $n$ tends to infinity. Actually, we shall see that this difficulty can be circumvented by using sharp results on Dirichlet series.
• Step 2
Putting $z=e^{-\lambda/2^n}$ and noticing that $e^{-\lambda t_{n,k}}=z^k$, (4.2) can be rewritten in terms of a generating function, which can be evaluated thanks to an extension of the Spitzer identity, which we recall below.
Lemma 4.2 Let $(\xi_k)_{k\geqslant 1}$ be a sequence of "i.i.d. random variables" and set $X_0=0$, $X_k=\sum_{j=1}^k\xi_j$ for $k\geqslant 1$, and $M_k=\max_{0\leqslant j\leqslant k}X_j$ for $k\geqslant 0$. The following relationship holds for $|z|<1$. Observing that $1-z=\exp[\log(1-z)]=\exp[-\sum_{k=1}^{\infty}z^k/k]$, Lemma 4.2 yields (4.3) for $\xi_k=X_{n,k}-X_{n,k-1}$. We plainly have $|\psi^+(\mu,\nu;t)|\leqslant 2\rho$, and then the series in (4.3) defines an analytical function on the half-plane $\{\lambda\in\mathbb{C}:\Re(\lambda)>0\}$. It is the analytical continuation of the function which was a priori defined on the half-plane $\{\lambda\in\mathbb{C}:\Re(\lambda)>2^n\ln\rho\}$. Accordingly, we shall use the same notation $\mathbb{E}_x[F^+_{X_n}(\lambda,\mu,\nu)]$ for $\Re(\lambda)>0$. We emphasize that the rhs of (4.3) involves only one observation of the pseudo-process $X$ (while the lhs involves several discrete observations). This important feature of the Spitzer identity entails the convergence of the series lying in (4.2) under a lighter constraint on the domain of validity for $\lambda$.
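For orientation, the classical (univariate) Spitzer identity, of which Lemma 4.2 is the bivariate extension to the pair $(X_k, M_k)$, reads

$$\sum_{k=0}^{\infty} z^k\, E\big(e^{i\theta M_k}\big) = \exp\!\left(\sum_{k=1}^{\infty} \frac{z^k}{k}\, E\big(e^{i\theta X_k^+}\big)\right), \qquad X_k^+ = \max(X_k,0),\quad |z| < 1.$$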
• Step 3
In order to prove that the functional $F^+_X(\lambda,\mu,\nu)$ is admissible, we show that the series $\sum_k\mathbb{E}_x\big[e^{-\lambda t_{n,k}+i\mu X_{n,k}-\nu M_{n,k}}\big]$ is absolutely convergent for $\Re(\lambda)>0$. For this, we invoke a lemma of Bohr concerning Dirichlet series [5]. Let $\sum_k\alpha_k e^{-\beta_k\lambda}$ be a Dirichlet series of the complex variable $\lambda$, where $(\alpha_k)_{k\in\mathbb{N}}$ is a sequence of complex numbers and $(\beta_k)_{k\in\mathbb{N}}$ is an increasing sequence of positive numbers tending to infinity. Let us denote by $\sigma_c$ its abscissa of convergence, by $\sigma_a$ its abscissa of absolute convergence, and by $\sigma_b$ the abscissa of boundedness of the analytical continuation of its sum. If the condition $\limsup_{k\to\infty}\ln(k)/\beta_k=0$ is fulfilled, then $\sigma_c=\sigma_a=\sigma_b$.
In our situation, $\beta_k=t_{n,k}=k/2^n$, so $\ln(k)/\beta_k=2^n\ln(k)/k\to 0$ and Bohr's condition is fulfilled; it thus suffices to show that the function of the variable $\lambda$ lying on the rhs of (4.3) is bounded on each half-plane $\Re(\lambda)\geqslant\varepsilon$ for any $\varepsilon>0$. For any $\alpha\in\mathbb{C}$ such that $\Re(\alpha)\leqslant 0$, we have the required estimate, where we set $\varrho=\int_0^{+\infty}\xi\,|p(1;-\xi)|\,d\xi$ (with $\varrho<+\infty$) and we used the elementary inequality $|e^{\zeta}-1|\leqslant 2|\zeta|$, which holds for any $\zeta\in\mathbb{C}$ such that $\Re(\zeta)\leqslant 0$. A similar estimate holds for the other term. This proves that the rhs of (4.3) is bounded on each half-plane $\Re(\lambda)\geqslant\varepsilon$ for any $\varepsilon>0$. So, the convergence of the series lying in (4.2) holds in the domain $\Re(\lambda)>0$. Now, we can pass to the limit as $n\to+\infty$ in (4.3) and we obtain (4.4). A similar formula holds for $F^-_X$.
From (4.4), we see that we need to evaluate integrals of a specific form. In the last step, we used the fact that the set $\{\theta_j, j\in J\}$ is invariant under conjugation.
In a similar manner, we can obtain that of $(X(t),m(t))$. The proof of Theorem 4.1 is now complete.
Remark 4.3
Either of the two formulas (4.1) can be deduced from the other one by using a symmetry argument.
• For even integers $N$, the obvious symmetry property $X\stackrel{\mathrm{dist}}{=}-X$ holds; observing the identities it entails in this case confirms the simple relationship between both expectations (4.1).
• If $N$ is odd, although this case is not covered by (4.1), it is interesting to note the asymmetry properties $X^+\stackrel{\mathrm{dist}}{=}-X^-$ and $X^-\stackrel{\mathrm{dist}}{=}-X^+$, where $X^+$ and $X^-$ are the pseudo-processes respectively associated with $\kappa_N=+1$ and $\kappa_N=-1$. Hence, with similar notations, $(X^+(t),m^+(t))$ and $(X^-(t),-M^-(t))$ should have identical distributions, which would explain the relationship between both expectations (4.1) in this case.
Remark 4.4 By choosing $\nu=0$ in (4.1), we obtain the Fourier transform of the $\lambda$-potential of the kernel $p$; this can be checked by a direct remark.
Density functions
We are able to invert the Laplace-Fourier transforms (4.1) with respect to µ and ν.
Inverting with respect to ν
For $z\geqslant x$, applying expansion (2.11) to $x=(i\mu-\nu)/\sqrt[N]{\lambda}$ yields a series which can be rearranged. We can therefore invert the foregoing Laplace transform with respect to $\nu$, and we get formula (4.9) corresponding to the case of the maximum functional. The formula corresponding to the case of the minimum functional is obtained in a similar way.
Inverting with respect to µ
Theorem 4.6 The Laplace transforms with respect to the time $t$ of the joint densities of $X(t)$ and, respectively, $M(t)$ and $m(t)$ are given, for $z\geqslant x\vee y$, by the first formula of (4.11), and, for $z\leqslant x\wedge y$, by the second one, where the functions $\varphi_\lambda$ and $\psi_\lambda$ are defined by (2.6).
Proof. Let us write the following equality, as in the previous subsubsection (see (4.10)). We get, by (4.9) and (2.1), for $z\geqslant x$, the stated expression. This proves (4.11) in the case of the maximum functional; the formula corresponding to the minimum functional can be proved in the same manner.
Remark 4.7 Formulas (4.11) contain in particular the Laplace transforms of $X(t)$, $M(t)$ and $m(t)$ separately. As a verification, we integrate (4.11) with respect to $y$ and $z$ separately.
• By integrating with respect to $y$ on $[z,+\infty)$ for $z\leqslant x$, we retrieve the Laplace transform (2.5) of the distribution of $m(t)$; here we used the relation $\sum_{j\in J}A_j=1$ (see Subsection 2.3).
• Suppose for instance that $x\leqslant y$. Let us now integrate (4.11) with respect to $z$ on $(-\infty,x]$. This gives, where we used (2.10) in the last step, the $\lambda$-potential (2.3) of the pseudo-process $(X(t))_{t\geqslant 0}$, which we thus retrieve. Remark 4.8 Consider the process reflected at its maximum, $(M(t)-X(t))_{t\geqslant 0}$. The joint distribution of $(M(t),M(t)-X(t))$ can be written in terms of the joint distribution of $(X(t),M(t))$, for $x=0$ (set $\mathbb{P}=\mathbb{P}_0$ for short) and $z,\zeta\geqslant 0$, as (4.12). By introducing an exponentially distributed time $T_\lambda$ with parameter $\lambda$, independent of $(X(t))_{t\geqslant 0}$, (4.12) reads as stated. This relationship may be interpreted by saying that $-m(T_\lambda)$ and $M(T_\lambda)-X(T_\lambda)$ admit the same distribution, and that $M(T_\lambda)$ and $M(T_\lambda)-X(T_\lambda)$ are independent.
Remark 4.9
The similarity between both formulas (4.11) may be explained by invoking a "duality" argument. In effect, the dual pseudo-process $(X^*(t))_{t\geqslant 0}$ of $(X(t))_{t\geqslant 0}$ is defined by $X^*(t)=-X(t)$ for all $t\geqslant 0$, and we have an equality related to the inversion of the extremities (see [12]). Remark 4.10 Let us expand the function $\varphi_\lambda$ as $\lambda\to 0^+$ (4.13), and similarly for $\psi_\lambda$ (4.14). As a result, putting (4.13) and (4.14) into (4.11) and using (2.1), we obtain an asymptotic expansion; by integrating it with respect to $z$, we derive the value of the so-called 0-potential of the absorbed pseudo-process (see [19] for the definition of several kinds of absorbed or killed pseudo-processes):
Inverting with respect to λ
Formulas (4.11) may be inverted with respect to $\lambda$, and an expression for the densities of $(X(t),M(t))$ and $(X(t),m(t))$ by means of the successive derivatives of the kernel $p$ may be obtained. However, the computations and the results are cumbersome, and we prefer to perform them for the distribution functions; they are exhibited in Subsection 4.4.
Density functions: particular cases
In this subsection, we pay attention to the cases $N\in\{2,3,4\}$. Although our results are not justified when $N$ is odd, we nevertheless retrieve well-known results of the literature related to the case N = 3. In order to lighten the notations, we set, for $\Re(\lambda)>0$, suitable abbreviations. Example 4.12 Case N = 3: referring to Example 2.2, we have the corresponding expressions.
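The roots $\theta_j$ for these particular cases can be produced mechanically; assuming the convention $\theta^N=\kappa_N$, with $J$ and $K$ collecting the roots of positive and negative real part (as the invariance-under-conjugation property used above suggests), a quick Mathematica check for the Nishioka case $N=4$, $\kappa_4=-1$, is:

(* Hypothetical helper: the N-th roots of kappa_N, split by the sign of
   their real part into the sets J and K; shown for N = 4, kappa_4 = -1. *)
theta = x /. Solve[x^4 == -1, x];
jSet = Select[theta, Re[#] > 0 &]  (* e^(I Pi/4), e^(-I Pi/4) *)
kSet = Select[theta, Re[#] < 0 &]  (* e^(3 I Pi/4), e^(-3 I Pi/4) *)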
Distribution functions
In this part, we integrate (4.11) in order to get the distribution function of the vector $(X(t),M(t))$. Obviously, if $x>z$, this quantity vanishes. Suppose now $x\leqslant z$. We must consider the cases $y\leqslant z$ and $z\leqslant y$. In the latter, we have $\mathbb{P}_x\{X(t)\leqslant y, M(t)\leqslant z\}=\mathbb{P}_x\{M(t)\leqslant z\}$, and this quantity is given by (2.7). So, we assume that $z\geqslant x\vee y$. Actually, the quantity $\mathbb{P}_x\{X(t)\leqslant y, M(t)\geqslant z\}$ is easier to derive.
Proposition 4.14 We have, for $z\geqslant x\vee y$ and $\Re(\lambda)>0$, the first formula, and, for $z\leqslant x\wedge y$, the second one. As a result, combining the above formulas with (4.15), the distribution function of the couple $(X(t),M(t))$ emerges, and that of $(X(t),m(t))$ is obtained in a similar way.
Theorem 4.15
The distribution functions of $(X(t),M(t))$ and $(X(t),m(t))$ are respectively determined through their Laplace transforms with respect to $t$: one formula holds if $y\leqslant x\leqslant z$, the other if $z\leqslant x\leqslant y$.
Inverting the Laplace transform
Theorem 4.16 The distribution function of $(X(t),M(t))$ admits the representation (4.17), where $I_{k0}$ is given by (5.14) and the $\alpha_{jm}$'s are coefficients given by (4.18).
Proof. We intend to invert the Laplace transform (4.16). For this, we interpret both exponentials $e^{\theta_j\sqrt[N]{\lambda}(x-z)}$ and $e^{\theta_k\sqrt[N]{\lambda}(z-y)}$ as Laplace transforms in two different manners: one is the Laplace transform of a combination of the successive derivatives of the kernel $p$, the other one is the Laplace transform of a function which is closely related to the density of some stable distribution. More explicitly, we proceed as follows.
• On the one hand, we start from the $\lambda$-potential (2.3), which we shall call $\Phi$. Differentiating this potential $(\#J-1)$ times with respect to $\xi$ leads to a Vandermonde system of $\#J$ equations in which the exponentials $e^{\theta_j\sqrt[N]{\lambda}\,\xi}$ are the unknowns. We introduce the solutions $\alpha_{jm}$ of the $\#J$ elementary Vandermonde systems (indexed by $m$ varying from 0 to $\#J-1$); the explicit expression of $\alpha_{jm}$ involves the coefficients $c_{jq}$, $0\leqslant q\leqslant\#J-1$, which are the elementary symmetric functions of the $\theta_l$'s, $l\in J\setminus\{j\}$, that is, $c_{j0}=1$ and, for $1\leqslant q\leqslant\#J-1$, the usual sums of products. • On the other hand, using the Bromwich formula, the function $\xi\mapsto e^{\theta_k\sqrt[N]{\lambda}\,\xi}$ can be written as a Laplace transform. Indeed, referring to Section 5.2.2, we have, for $k\in K$ and $\xi\geqslant 0$, the representation involving $I_{k0}$, which is given by (5.14).
Consequently, the sum lying in Proposition 4.14 may be written as a Laplace transform, which gives the representation (4.17) for the distribution function of $(X(t),M(t))$.
Remark 4.17
A similar expression may be derived by exchanging the roles of the indices $j$ and $k$ in the above discussion and by slightly changing the coefficients $a_{km}$ into other coefficients $b_{jn}$. However, the resulting formula involves the same number of integrals as that displayed in Theorem 4.16.
Distribution functions: particular cases
Here, we write out (4.16) and (4.17) for $y\leqslant x\leqslant z$. Formula (4.17) takes an explicit form. The reciprocal relations, which are valid for $\xi\geqslant 0$, imply that $\alpha_{10}=1$, and then $a_{00}=2B_0$. On the other hand, we have, for $\xi\geqslant 0$, by (5.14), an integral expression. Consequently, using the substitution $\sigma=s^2/(u+s)$ together with a well-known integral related to the modified Bessel function $K_{1/2}$, we can evaluate it in closed form. Finally, it can easily be checked, by using the Laplace transform, that the result agrees with the classical one; as a result, we retrieve the famous reflection principle for Brownian motion. Example 4.19 Case N = 3: we have two cases to distinguish.
Boundary value problem
In this part, we show that the function $x\mapsto F_\lambda(x,y,z)$ solves a boundary value problem related to the differential operator $\mathcal{D}_x=\kappa_N\,\frac{d^N}{dx^N}$. Fix $y<z$ and set $F(x)=F_\lambda(x,y,z)$ for $x\in(-\infty,z]$.
Proposition 4.21
The function $F$ satisfies the differential equation (4.20) together with the conditions (4.21) and (4.22). Proof. The differential equation (4.20) is readily obtained by differentiating (4.16) with respect to $x$. The boundary condition (4.21) follows from a computation whose last equality comes from (2.11) with $x=\theta_k$. Condition (4.22) is quite easy to check.

5 Distributions of $(\tau^+_a, X(\tau^+_a))$ and $(\tau^-_a, X(\tau^-_a))$

The integer $N$ is again assumed to be even. Recall that we set $\tau^+_a=\inf\{t\geqslant 0: X(t)>a\}$ and $\tau^-_a=\inf\{t\geqslant 0: X(t)<a\}$. The aim of this section is to derive the distributions of the vectors $(\tau^+_a, X(\tau^+_a))$ and $(\tau^-_a, X(\tau^-_a))$. For this, we proceed in three steps: we first compute the Laplace-Fourier transform of, e.g., $(\tau^+_a, X(\tau^+_a))$ (Subsection 5.1); we next invert the Fourier transform (with respect to $\mu$, Subsubsection 5.2.1); and we finally invert the Laplace transform (with respect to $\lambda$, Subsubsection 5.2.2). We have especially obtained a remarkable formula for the densities of $X(\tau^+_a)$ and $X(\tau^-_a)$ by means of multipoles (Subsection 5.4).
• Step 1
For the step-process $(X_n(t))_{t\geqslant 0}$, the corresponding first hitting time $\tau^+_{a,n}$ is the instant $t_{n,k}$ with $k$ such that $X(t_{n,j})\leqslant a$ for all $j\in\{0,\dots,k-1\}$ and $X(t_{n,k})>a$ or, equivalently, such that $M_{n,k-1}\leqslant a$ and $M_{n,k}>a$, where $M_{n,k}=\max_{0\leqslant j\leqslant k}X_{n,j}$ and $X_{n,k}=X(t_{n,k})$ for $k\geqslant 0$, with $M_{n,-1}=-\infty$. We have, for $x\leqslant a$, a series whose generic term is $\big(e^{-\lambda t_{n,k}+i\mu X_{n,k}}-e^{-\lambda t_{n,k+1}+i\mu X_{n,k+1}}\big)\,1\!\mathrm{l}_{\{M_{n,k}>a\}}$.
The functional $e^{-\lambda\tau^+_{a,n}+i\mu X_n(\tau^+_{a,n})}$ is a function of discrete observations of $X$.
Noticing a suitable contour-integral representation (with the factor $\frac{1}{2i\pi}$), we get a series with generic term $e^{-\lambda t_{n,k}}\,t_{n,k}\,\psi_1(i\mu;t_{n,k})$. By imitating the method used by Nishioka (Appendix in [18]) for deriving subtle estimates, it may be seen that this last expression is bounded over the half-plane $\Re(\lambda)\geqslant\varepsilon$ for any $\varepsilon>0$. Hence, as in the proof of the validity of (4.2) for $\Re(\lambda)>0$, we see that (5.3) is also valid for $\Re(\lambda)>0$. It follows that the functional $e^{-\lambda\tau^+_a+i\mu X(\tau^+_a)}$ is admissible.
• Step 5
Now, we can let $n$ tend to $+\infty$ in (5.3). For $\Re(\lambda)>0$, the limit is immediate, and we finally obtain relationship (5.1) corresponding to $\tau^+_a$. The proof of the relationship corresponding to $\tau^-_a$ is quite similar.
Remark 5.3
Choosing $\mu=0$ in (5.4) supplies the Laplace transforms of $\tau^+_a$ and $\tau^-_a$: for $x\leqslant a$, we obtain the displayed formulas. Remark 5.4 An alternative method for deriving the distribution of $(\tau^+_a, X(\tau^+_a))$ consists of computing the joint distribution of $\big(X(t),\,1\!\mathrm{l}_{(-\infty,a)}(M(t))\big)$ instead of that of $(X(t),M(t))$, and next inverting a certain Fourier transform. This way was employed by Nishioka [18] in the case N = 4 and may be applied to the general case mutatis mutandis.
Remark 5.5 The following relationship, issued from fluctuation theory, holds for Lévy processes if $x\leqslant a$ (5.7). Let us check that (5.7) also holds, at least formally, for the pseudo-process $X$. We have, by (2.5), the first ingredient (5.8); for $x=a$, this yields, by (2.11), the second one (5.9). As a result, by plugging (5.8) and (5.9) into (5.7), we retrieve (5.4).
Example 5.6 Case N = 2: we simply have the classical expressions. Example 5.7 Case N = 3: • In the case $\kappa_3=+1$, we have, for $x\leqslant a$, and, for $x\geqslant a$, the displayed expressions; therefore, (5.4) reads accordingly. • In the case $\kappa_3=-1$, we similarly obtain the analogous expressions. Example 5.8 Case N = 4: we have, for $x\leqslant a$, and, for $x\geqslant a$, the displayed expressions; therefore, (5.4) becomes an explicit formula, and we retrieve formula (8.3) of [18].
Density functions
We invert the Laplace-Fourier transform (5.4). For this, we proceed in two stages: we first invert the Fourier transform with respect to µ and next invert the Laplace transform with respect to λ.
Inverting with respect to µ
Let us expand the product $\prod_{l\in J\setminus\{j\}}(1-\bar\theta_l x)$ as (5.10), where the coefficients $c_{jq}$, $0\leqslant q\leqslant\#J-1$, are the elementary symmetric functions of the $\theta_l$'s, $l\in J\setminus\{j\}$; that is, more explicitly, $c_{j0}=1$ and, for $1\leqslant q\leqslant\#J-1$, the usual sums of products. In a similar way, we also introduce $d_{k0}=1$ and, for $1\leqslant q\leqslant\#K-1$, the analogous coefficients. By applying expansion (5.10) to $x=i\mu/\sqrt[N]{\lambda}$, we see that (5.4) can be rewritten in expanded form. Now, observe that $(-i\mu)^q e^{i\mu a}$ is nothing but the Fourier transform of the $q$-th derivative of the Dirac distribution viewed as a tempered Schwartz distribution (5.11). Hence, we have obtained the following intermediate result for the distribution of $(\tau^+_a, X(\tau^+_a))$, and also for that of $(\tau^-_a, X(\tau^-_a))$.
Proposition 5.9 We have, for $\Re(\lambda)>0$, the representation (5.12). The appearance of the successive derivatives of $\delta_a$ suggests viewing the distribution of $(\tau^+_a, X(\tau^+_a))$ as a tempered Schwartz distribution (that is, a Schwartz distribution acting on the space $\mathcal{S}$ of the $C^\infty$-functions exponentially decreasing together with their derivatives).
Inversion with respect to λ
In order to extract the densities of $(\tau^+_a, X(\tau^+_a))$ and $(\tau^-_a, X(\tau^-_a))$ from (5.12), we search for functions $I_{lq}$, $0\leqslant q\leqslant\max(\#J-1,\#K-1)$, such that (5.13) holds for $\Re(\theta_l\xi)<0$. The rhs of (5.13) seems close to the Laplace transform of the probability density function of a completely asymmetric stable random variable, at least for $q=0$. Nevertheless, because of the presence of the complex term $\theta_l$ within the rhs of (5.13), we did not find any precise relationship between the functions $I_{lq}$ and stable processes. So, we derive below an integral representation for $I_{lq}$.
Invoking the Bromwich formula, the function $I_{lq}$ can be written as a contour integral; the substitution $\lambda\mapsto\lambda^N$ then yields an integral along the positive half-line. In particular, for $q=0$ we have, by integration by parts, a simplified expression. Remark 5.10 A relation holds between all the functions $I_{lq}$'s; hence, (5.12) can be rewritten as an explicit Laplace transform with respect to $\lambda$. We are now able to state the main result of this part.
So, the sum lying within the second integral in (5.16) can be evaluated explicitly. As a result, we obtain the stated expression. In particular, $J_q(t;\xi)$ is real, and for $q=0$ we have, since $c_{j0}=1$ and $\sum_{j\in J}A_j=1$, an expression which is nothing but $\mathbb{P}_x\{\tau^+_a\in dt\}/dt$.
Distribution of the hitting places
We now derive the distributions of the hitting places $X(\tau^+_a)$ and $X(\tau^-_a)$. To do this for $X(\tau^+_a)$, for example, we integrate (5.15) with respect to $t$ (5.17). We need two lemmas for carrying out the integral lying in (5.17). Proof. We proceed by induction on $n$. The foregoing integral involves an elementary integral which evaluates to a combination of terms $a_j\log b_j$; this proves Lemma 5.13 in the case n = 1.
Assume now the result of the lemma valid for an integer $n\geqslant 1$. Let $m$ be an integer such that $m\geqslant n+2$, and let $a_1,\dots,a_m$ and $b_1,\dots,b_m$ be complex numbers such that $\Re(b_j)\geqslant 0$ and $\Im\big(\sum_{j=1}^m a_j b_j^l\big)=0$ for $0\leqslant l\leqslant n$. By integration by parts and by applying L'Hôpital's rule $n$ times, we see, using the condition $\Im\big(\sum_{j=1}^m a_j b_j^l\big)=0$ for $0\leqslant l\leqslant n$, that the boundary term vanishes. Lemma 5.14 We have, for $0\leqslant p\leqslant q\leqslant\#J-1$, the stated identity. Proof. Consider a suitable polynomial. We then obtain, due to (2.11), if $p\leqslant\#J-1$, an identity which entails the result by identifying the coefficients of both polynomials above. Now, we state the following remarkable result.
Theorem 5.15
The "distributional densities" of $X(\tau^+_a)$ and $X(\tau^-_a)$ are given by (5.18). It is worth noting that the distributions of $X(\tau^+_a)$ and $X(\tau^-_a)$ are linear combinations of the successive derivatives of the Dirac distribution $\delta_a$. This noteworthy fact has already been observed by Nishioka [17,18] in the case N = 4, where the author spoke of "monopoles" and "dipoles" respectively related to $\delta_a$ and $\delta'_a$ (see also [19] for more details about the relationships between monopoles/dipoles and different kinds of absorbed/killed pseudo-processes). More generally, (5.18) suggests speaking of "multipoles" related to the $\delta^{(q)}_a$'s.
In the case of Brownian motion (N = 2), the trajectories are continuous, so $X(\tau^\pm_a)=a$, and then we classically write $\mathbb{P}_x\{X(\tau^\pm_a)\in dz\}=\delta_a(dz)$, where $\delta_a$ is viewed as the Dirac probability measure. For $N\geqslant 4$, it emerges from (5.18) that the distributional densities of $X(\tau^\pm_a)$ are concentrated at the point $a$ through a sequence of successive derivatives of $\delta_a$, where $\delta_a$ is now viewed as a Schwartz distribution. Hence, we may guess in (5.18) a curious and unclear kind of continuity. In Subsection 5.6, we study the distribution of $X(\tau^\pm_a-)$, which will turn out to coincide with that of $X(\tau^\pm_a)$; this confirms the idea of continuity.
Proof. Let us evaluate the integral lying in (5.17). Thanks to Lemma 5.14, the conditions of Lemma 5.13 are fulfilled, and we get an explicit expression. The second sum lying within the brackets is equal, by Lemma 5.14, to $(-1)^q$. The first one vanishes: indeed, using the symmetry $\sigma: j\in J\mapsto\sigma(j)\in J$ such that $\theta_{\sigma(j)}=\bar\theta_j$, we can write $\Re\big(\sum_{j\in J}c_{jq}A_j\theta_j^q\arg(\theta_j)\big)=\frac{1}{2}\big(\sum_{j\in J}\bar c_{jq}\bar A_j\bar\theta_j^q\arg(\theta_j)+\sum_{j\in J}c_{jq}A_j\theta_j^q\arg(\theta_j)\big)$. The terms of the first of these two sums are the opposites of those of the second one, since $c_{\sigma(j)q}A_{\sigma(j)}\theta^q_{\sigma(j)}=\overline{c_{jq}A_j\theta^q_j}$ and $\arg(\theta_{\sigma(j)})=-\arg(\theta_j)$, which proves the assertion. As a result, we get (5.18).
Fourier transforms of the hitting places
By using (5.18) and (5.11), it is easy to derive the Fourier transforms of the hitting places $X(\tau^+_a)$ and $X(\tau^-_a)$.
Proposition 5.16
The Fourier transforms of $X(\tau^+_a)$ and $X(\tau^-_a)$ are given by (5.19). In this part, we suggest retrieving (5.19) by letting $\lambda$ tend to $0^+$ in (5.4). We rewrite (5.4), for instance for $x\leqslant a$, in a convenient form. Using the elementary expansions as $\lambda\to 0^+$ (5.20) and, on the other hand, applying (2.11) to $x=0$ (5.21), the limit of $\mathbb{E}_x\big[e^{-\lambda\tau^+_a+i\mu X(\tau^+_a)}\big]$ as $\lambda\to 0^+$ ensues. The constant arising when combining (5.20) and (5.21) can be computed explicitly and, in view of (5.19), we have proved the expected equality.
Strong Markov property for $\tau^\pm_a$
We roughly state a strong Markov property related to the hitting times $\tau^\pm_a$.
Taking the expectations, we get, for $x\leqslant a$: $\mathbb{E}_x\big[F\big((X_n(t))_{0\leqslant t<\tau^+_{a,n}}\big)\,G\big((X_n(t+\tau^+_{a,n}))_{t\geqslant 0}\big)\big]=\sum_{k=1}^{\infty}\mathbb{E}_x\big[F_{n,k-1}\,1\!\mathrm{l}_{\{M_{n,k-1}\leqslant a<M_{n,k}\}}\,\mathbb{E}_{X_{n,k}}(G_{n,0})\big]=\mathbb{E}_x\big[F\big((X_n(t))_{0\leqslant t<\tau^+_{a,n}}\big)\,\mathbb{E}_{X(\tau^+_{a,n})}\big[G((X_n(t))_{t\geqslant 0})\big]\big]$ (5.24), and (5.22) ensues by taking the limit of (5.24) as $n$ tends to $+\infty$ in the sense of Definition 3.3.
The argument in favor of discontinuity evoked in [12] should fail since, in view of (5.13), a term is missing when applying the strong Markov property.
Just before the hitting time
In order to lighten the notations, we simply write $\tau_a=\tau^\pm_a$ and we introduce the jump $\Delta_a X=X(\tau_a)-X(\tau_a-)$.
Proposition 5.19
The Laplace-Fourier transform of the vector $(\tau_a, X(\tau_a-), \Delta_a X)$ is related to those of the vectors $(\tau_a, X(\tau_a-))$ and $(\tau_a, X(\tau_a))$ according to, for $\Re(\lambda)>0$ and $\mu,\nu\in\mathbb{R}$, $\mathbb{E}_x\big[e^{-\lambda\tau_a+i\mu X(\tau_a-)+i\nu\Delta_a X}\big]=\mathbb{E}_x\big[e^{-\lambda\tau_a+i\mu X(\tau_a-)}\big]=\mathbb{E}_x\big[e^{-\lambda\tau_a+i\mu X(\tau_a)}\big]$ (5.25). Proof. The proof of Proposition 5.19 is similar to that of Lemma 5.1, so we outline the main steps with fewer details. We consider only the case where $\tau_a=\tau^+_a$ and $x\leqslant a$; the other one is quite similar.
For computing the term within brackets, we need the following quantities. • Step 3 We now take the limit of (5.27) as $n$ tends to infinity: it equals 1 for $p=0$, vanishes for $1\leqslant p\leqslant N-1$, and equals $\kappa_N\,N!\,t$ for $p=N$.
Proof. By differentiating $k$ times the identity $\mathbb{E}_0\big[e^{iuX(t)}\big]=e^{\kappa_N(iu)^N t}$ with respect to $u$ and next substituting $u=0$, we obtain the result.
Fix a complex number $\alpha\neq 0$. It can easily be seen by induction that there exists a family of polynomials $(P_k)_{k\in\mathbb{N}}$ such that, for all $k\in\mathbb{N}$, $\frac{\partial^k}{\partial u^k}e^{\alpha u^N}=P_k(u)\,e^{\alpha u^N}$ (5.29). In particular, we have $P_0(u)=1$ and $P_1(u)=N\alpha u^{N-1}$. Using the Leibniz rule, we obtain a recurrence which ascertains the aforementioned induction and gives the values at $u=0$. Choosing $\alpha=\kappa_N i^N t$ and $u=0$ in (5.29), we immediately complete the proof of Lemma 5.20.
Boundary value problem
We end this work by exhibiting a boundary value problem satisfied by the Laplace-Fourier transform $U(x)=\mathbb{E}_x\big[e^{-\lambda\tau^+_a+i\mu X(\tau^+_a)}\big]$, $x\in(-\infty,a)$.
Proposition 5.25
The function $U$ satisfies the differential equation stated in the proposition. We also refer the reader to [19] for a very detailed account of PDEs with various boundary conditions and their connections with different kinds of absorbed/killed pseudo-processes. | 10,276 | 2007-02-19T00:00:00.000 | [
"Mathematics"
] |
Some Matrix Iterations for Computing Generalized Inverses and Balancing Chemical Equations
: An application of iterative methods for computing the Moore–Penrose inverse to balancing chemical equations is considered. With the aim of illustrating the proposed algorithms, an improved high-order hyper-power matrix iterative method for computing generalized inverses is introduced and applied. The improvements of the hyper-power iterative scheme are based on its proper factorization, as well as on the possibility of accelerating the iterations in the initial phase of the convergence. The effectiveness of our approach is confirmed from the theoretical point of view, and numerical comparisons on balancing chemical equations, as well as on randomly-generated matrices, are furnished.
Introduction
A chemical equation is only a symbolic representation of a chemical reaction and represents an expression of atoms, elements, compounds, or ions. Such expressions are generated based on balancing through reactant or product coefficients, as well as through reactant or product molar masses [1].
In fact, balancing the equations that represent the stoichiometry of a reacting system is a matter of mathematics, since it can be reduced to the problem of solving homogeneous linear systems.
Balancing chemical equations is an important application of generalized inverses. To discuss further, the reflexive g-inverse of a matrix has been successfully used in solving a general problem of balancing chemical equations (see [2,3]). Continuing in the same direction, Krishnamurthy [4] gave a mathematical method for balancing chemical equations founded on a generalized matrix inverse. The method used in [4] is based on the exact computation of reflexive generalized inverses by means of elementary matrix transformations and finite-field residue arithmetic, as described in [5].
It is well known that symbolic data processing, including both rational arithmetic and multiple-modulus residue arithmetic, is time-consuming, both in implementation and in execution. On the other hand, finite-field exact arithmetic is inapplicable to chemical reactions that include atoms with fractional and/or integer oxidation numbers. This hard class of chemical reactions is investigated in [6].
Additionally, the problem of balancing chemical equations can be readily resolved by computer algebra software, as was discussed in [7]. The approach used in [7] is based on Gaussian elimination (also known as the Gauss-Jordan algorithm), while the approach exploited in [2] is based on singular value decomposition (SVD). On the other hand, it is widely known that Gaussian elimination, as well as SVD, requires a large number of numerical operations [8]. Furthermore, small pivots that may appear in Gaussian elimination can lead to large multipliers [9], which can sometimes cause the divergence of numerical algorithms. Two methods of balancing chemical equations, introduced in [10], are based on integer linear programming and integer nonlinear programming models, respectively. Notice that the linear Diophantine matrix method was proposed in [11]. That method is applicable in cases when the reaction matrices lead to infinitely many stoichiometrically-independent solutions.
In the present paper, we consider the possibility of applying a new higher-order iterative method for computing the Moore-Penrose inverse to the problem of balancing chemical equations. In general, our current research represents the first attempt to apply iterative methods in balancing chemical reactions.
A rapid numerical algorithm for computing matrix generalized inverses with a prescribed range and null space is developed in order to implement this global idea. The method is based on an appropriate modification of the hyper-power iterative method. Furthermore, some techniques for the acceleration of the method in the initial phase of its convergence are discussed. We also try to show the applicability of the proposed iterative schemes in balancing chemical equations. Before a more detailed discussion, we briefly review some of the important background incorporated in our work.
The outer inverse with prescribed range $T$ and null space $S$ of a matrix $A\in\mathbb{C}^{m\times n}_r$, denoted by $A^{(2)}_{T,S}$, satisfies the second Penrose matrix equation $XAX=X$ and two additional properties: $\mathcal{R}(X)=T$ and $\mathcal{N}(X)=S$. The significance of these inverses is reflected primarily in the fact that the most important generalized inverses are particular cases of outer inverses with a prescribed range and null space. For example, the Moore-Penrose inverse $A^{\dagger}$, the weighted Moore-Penrose inverse $A^{\dagger}_{M,N}$, the Drazin inverse $A^{D}$ and the group inverse $A^{\#}$ can be derived by means of appropriate choices of the subspaces $T$ and $S$, wherein $\tilde{A}=MAN^{-1}$ and $\mathrm{ind}(A)$ denotes the index of a square matrix $A$ (see, e.g., [12]).
Although there are many approaches to calculating these inverses by means of direct methods, an alternative and very important approach is to use iterative methods. Among many such matrix iterative methods, the hyper-power iterative family has been introduced and investigated (see, for example, [13][14][15]). The hyper-power iteration of order $p$ is defined by the scheme (2) (see, for example, [13]). The iteration Equation (2) requires $p$ matrix-matrix multiplications (from now on denoted by mmm) to achieve the $p$-th order of convergence. The choice $p = 2$ yields the Schulz matrix iteration (3), originated in [16], with the second rate of convergence. Further, the choice $p = 3$ gives the cubically-convergent method of Chebyshev [17], defined by (4). For more details about the background of iterative methods for computing generalized inverses, please refer to [18].
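For concreteness, here is a minimal Mathematica sketch of the Schulz iteration (3), $X_{k+1}=X_k(2I-AX_k)$; it is our illustration rather than the paper's implementation, and the initial scaling $X_0=A^*/\sigma_1(A)^2$ is a standard convergence-guaranteeing choice (the tolerance and iteration cap are arbitrary).

(* A sketch of the Schulz iteration (p = 2) for the Moore-Penrose inverse. *)
SchulzInverse[A_?MatrixQ, tol_ : 10.^-10, maxIt_ : 200] :=
  Module[{X, Xold, m = Length[A]},
    X = ConjugateTranspose[A]/First[SingularValueList[A, 1]]^2;
    Do[
      Xold = X;
      X = X.(2 IdentityMatrix[m] - A.X); (* X_{k+1} = X_k (2I - A X_k) *)
      If[Norm[X - Xold, "Frobenius"] <= tol, Break[]],
      {maxIt}];
    X]

(* Quick sanity check against the built-in pseudoinverse: *)
A = RandomReal[{-1, 1}, {8, 10}];
Norm[SchulzInverse[A] - PseudoInverse[A], "Frobenius"] (* should be tiny *)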
The main motivation of the paper [19] was the observation that the inverse of the reaction matrix cannot always be obtained. For this purpose, the author used an approach based on row-reduced echelon forms of both the reaction matrix and its transpose. Since the Moore-Penrose inverse always exists, replacement of the ordinary inverse by the corresponding pseudoinverse resolves the drawback that the inverse of the reaction matrix does not always exist. Furthermore, two successive transformations into the corresponding row-reduced echelon forms are time-consuming and badly-conditioned numerical processes, again based on Gaussian elimination. Our intention is to avoid the above-mentioned drawbacks that appear in previously-used approaches.
Here, we decide to develop an application of Schulz-type methods. The motivation is based on the following advantages arising from the use of these methods. Firstly, they are fully applicable to sparse matrices possessing sparse inverses. Secondly, Schulz-type methods are useful for providing approximate inverse preconditioners. Thirdly, such schemes are parallelizable, while Gaussian elimination with partial pivoting is not suitable for parallelism.
It is worth mentioning that an application of iterative methods in finding exact solutions that involve integer or rational entries requires an additional transformation of the solution and the utilization of tools for symbolic data processing. To this end, we used the programming package Mathematica.
The rest of this paper is organized as follows. A new formulation of a very high-order method is presented in Section 2. The method is fast and economical at the same time, which is confirmed by the fact that it attains a very high rate of convergence by using a relatively small number of mmm. Acceleration of the convergence via scaling the initial iterates is discussed, and some novel approaches in this direction are given in the same section. An application of iterative methods in balancing chemical equations is considered in Section 3. A comparison of the numerical results obtained by applying the introduced method against the results obtained by several similar methods is given. Some numerical experiments concerning the application of the new iterations in balancing chemical equations are presented in Section 4. Finally, some concluding remarks are drawn in the last section.
An Efficient Method and Its Acceleration
A Schulz-type method of high order p = 31 with two improvements is derived and chosen as one of the options for balancing chemical equations. The first improvement is based on a proper factorization, which reduces the number of mmm required in each cycle. The second improvement is based on a proper acceleration of the initial iterations.
Toward this goal, we consider Equation (2) in the case p = 31, which we denote by (5). In its original form, the hyper-power iteration Equation (5) is of order 31 and requires 31 mmm. It is necessary to remark that the effectiveness of a computational iterative (fixed-point-type) method can be estimated by the real number $EI=p^{1/\theta}$ (called the computational efficiency index), wherein $\theta$ and $p$ stand for the whole computational cost and the rate of convergence per cycle, respectively. Here, the most important burden and cost per cycle is the number of matrix-matrix products.
Clearly, in Equation (5), this proportion between the order of convergence and the needed number of mmm is not suitable, since its efficiency index $EI=31^{1/31}\approx 1.117$ is relatively small. This shows that Equation (5) is not a useful iterative method as it stands. To improve the applicability of Equation (5) and, thus, to derive a fast matrix iteration with a reduced number of mmm, i.e., to obtain an efficient method, we proceed as in the following subsection.
An Efficient Method
We rewrite Equation (5) as (7). Subsequently, Equation (7) results in the formulation (8) of Equation (5). Now, we deduce our final fast matrix iteration by simplifying Equation (8) further into (9), where only nine mmm are required. Therefore, the efficiency index of the proposed fast iterative method becomes $EI=31^{1/9}\approx 1.4645$. This efficiency index is higher than the efficiency index 1.4142 of Equation (3), higher than the efficiency index 1.4422 of Equation (4), and higher than the efficiency index 1.4592 of the 30th-order method proposed recently by Sharifi et al. [20].
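The factorization principle can be illustrated on a lower-order member of the family (this illustration is ours; the paper's own order-31 factorization is the content of (7)-(9)). Writing $R_k=I-AX_k$, the order-9 hyper-power step $X_{k+1}=X_k\sum_{i=0}^{8}R_k^i$ can be evaluated through

$$\sum_{i=0}^{8} R_k^i = (I + R_k + R_k^2)(I + R_k^3 + R_k^6),$$

which costs six mmm ($R_k$, $R_k^2$, $R_k^3=R_kR_k^2$, $R_k^6=R_k^3R_k^3$, the product of the two factors, and the final multiplication by $X_k$) instead of nine, raising the efficiency index from $9^{1/9}\approx 1.277$ to $9^{1/6}\approx 1.442$. The indices quoted above can be checked directly:

(* EI = p^(1/theta): Schulz (p = 2, 2 mmm), Chebyshev (p = 3, 3 mmm),
   Sharifi et al. (p = 30, 9 mmm), proposed method (p = 31, 9 mmm). *)
N[{2^(1/2), 3^(1/3), 30^(1/9), 31^(1/9)}]
(* -> {1.4142, 1.4422, 1.4592, 1.4645} *)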
At this point, it is useful to provide the following theorem regarding the convergence behavior of Equation (9). Theorem 1. Let $A\in\mathbb{C}^{m\times n}_r$ be a given matrix of rank $r$ and let $G\in\mathbb{C}^{n\times m}_s$ be a given matrix of rank $0<s\leqslant r$ satisfying $\mathrm{rank}(GA)=\mathrm{rank}(G)$. Then, the sequence $\{X_k\}_{k=0}^{\infty}$ generated by the iterative method Equation (9) converges to the outer inverse $A^{(2)}_{T,S}$ provided that $\|AA^{(2)}_{T,S}-AX_0\|<1$. Proof. The proof of this theorem is similar to the ones in [21]; hence, we skip it and just include the corresponding error bound. The derived iterative method is very fast and effective in comparison with the existing iterative Schulz-type methods of the same type. However, as was pointed out by Soleimani et al. [22], such iterative methods are slow at the beginning of the iterative process, where the real convergence rate cannot yet be observed. An idea to remedy this disadvantage is to apply a multiple-root-finding algorithm to the matrix equation $F(X)=X^{-1}-A=0$ and so to accelerate the hyper-power method in its initial iterations. Such a discussion of a scaled version of the hyper-power method is the main aim of the next subsection.
Accelerating the Initial Phase
Another useful contribution of the present paper is developed here. The iterative scheme (13) was applied in [22] to reach the convergence phase of the main (modified Householder) method much more rapidly and to accelerate the beginning of the process. In the second iteration phase, it is sufficient to apply the fast and efficient modified Householder method introduced there, which then reaches its full speed of convergence [22].
In the same vein, the iterative expression Equation (13) can be rewritten in an equivalent but more practical form (14). One can now observe that Equation (14) is the particular case (p = 2) of a new scheme (15). Therefore, Equation (15) can be considered as an important acceleration of the Schulz-type method Equation (2) in the initial phase, before the full convergence rate becomes practically observable. We note that such accelerations are useful for large matrices, for which the iterative methods otherwise require too many iterations to converge. In particular, following Equation (15), one immediately obtains the accelerated initial phase of iteration Equation (9) in the form (16). Remark 1. The particular choice β = 1 in Equation (15) reduces these iterations to the usual hyper-power family of iterative methods of order $p\geqslant 2$. The choice p = 2 in Equation (15) leads to the scaled Schulz matrix iteration considered recently in [23], and the choice p = 2, β = 1 produces the original Schulz matrix iteration, originated in [12].
Finally, a hybrid algorithm may now be written by incorporating Equations (9) and (16) as follows.
Algorithm 1 The new hybrid method for computing generalized inverses.
3: set $X_0 = X_l$ 4: for $k = 0, 1, \dots$ until convergence ($\|X_{k+1} - X_k\| < \epsilon$), use Equation (9) to converge with high order. 5: end for. Instead of the hybrid Algorithm 1, based on the usage of Equation (16) in the initial phase and Equation (9) in the final stage, our third result here is to define a single iterative method, which can be derived by applying a variable acceleration parameter $\beta = 1 + \beta_k$, $0 \leqslant \beta_k \leqslant 1$. This approach yields scaled hyper-power iterations of a general form, where the initial approximation $X_0 = \alpha G$ is chosen according to Equation (11).
Furthermore, it is possible to propose various modifications of $\beta_k$ that guarantee $1 + \beta_k \to 1$. For example:
Balancing Chemical Equations Using Iterations
In accordance with the intention motivated in the first section, in this section we investigate the applicability of some iterations from the hyper-power family in balancing chemical equations. It is shown that the iterative methods can be applied successfully without any limitations.
Balancing Chemical Equations Using Iterative Methods
The coefficients $x_i$ are integers, rational numbers, or real numbers, which should be determined on the basis of three basic principles: (1) the law of conservation of mass; (2) the law of conservation of atoms; (3) the time-independence of Equation (20), an assumption usually valid for stable/non-sensitive reactions. Let there be $m$ distinct atoms involved in the chemical reaction Equation (20) and $n = r + s$ distinct reactants and products. It is necessary to form an $m \times n$ matrix $A$, called the reaction matrix, whose columns represent the reactants and products and whose rows represent the distinct atoms in the chemical reaction. More precisely, the $(i,j)$-th element of $A$, denoted by $a_{i,j}$, represents the number of atoms of type $i$ in each compound/element (reactant or product). An arbitrary element $a_{i,j}$ is positive or negative according to whether it corresponds to a reactant or a product. Hence, the problem of balancing a chemical equation can be formulated as the homogeneous matrix equation $Ax = 0$ (21) with respect to the unknown vector $x \in \mathbb{R}^n$, where $A \in \mathbb{R}^{m \times n}$ denotes the reaction matrix and $0$ denotes the null column vector of order $m$. In this way, an arbitrary chemical reaction can be formulated as a matrix equation. We would like to use the symbolic and numerical capabilities of the Mathematica computer algebra system, in conjunction with the above-defined iterative method(s) for computing generalized inverses, to automate the process of balancing chemical reactions.
The general solution of the balancing problem in the matrix form Equation (21) is given by $s = (I - A^{\dagger}A)\,c$ (22), where $c$ is an arbitrarily-selected $n$-dimensional vector. Let us assume that the approximation of $A^{\dagger}$ generated by an arbitrary iterative method is given by $X := X_{k+1}$.
If the iterative method for computing $X$ is performed in floating-point arithmetic, it is necessary to perform a transition from the solution whose coordinates are real numbers to an exact (integer and/or rational) solution. Thus, the iterative approach to balancing chemical equations assumes three general algorithmic steps, as described in Algorithm 2.
Algorithm 2 General algorithm for balancing chemical equations by an iterative solver.
1: Apply (for example) Algorithm 1 and compute the approximation $X := X_{k+1}$ of $A^{\dagger}$. 2: Compute the vector $s$ using Equation (22). 3: Transform the real numbers included in $s$ into an exact solution.
A clear observation about Algorithm 2 is the following: Steps 1 and 2 require the usage of real arithmetic (with very high precision), while Step 3 requires the usage of symbolic processing and exact arithmetic capabilities to deal with rational numbers.
As a result, in order to apply iterative methods to the problem of balancing chemical equations, it is necessary to use software that meets two diametrically-opposite criteria: the ability to carry out numerical calculations (with very high precision) and the ability to apply exact arithmetic and symbolic calculations. The programming language Mathematica possesses both of these properties. More details about this programming language can be found in [24].
The following (sample) Mathematica code can be used to determine the exact solution from the real values contained in the vector $s$ (defined in Equation (22)).
Id = IdentityMatrix[n];
s = (Id - X.A).ConstantArray[1, n];
s = Rationalize[s, 10^(-300)];
c = s*LCM @@ (Denominator /@ s); (* multiply s by the least common multiple of the denominators in s *)
s = c/Min @@ (Numerator /@ c)    (* divide c by the minimum of the numerators in c *)

The standard Mathematica function Rationalize[x, dx] yields the rational number with the smallest denominator lying within the given tolerance dx of x. Sometimes, to avoid the influence of round-off errors and of possible errors caused by the usage of the function Rationalize, it is necessary to perform the iterative steps with very high precision.
An improvement of the vector s can be attained as follows.
It is possible to propose an amplification of the vector $s = (I - A^{\dagger}A)\,c$, where $c$ is an $n$-dimensional column vector. The improvement can be obtained using $s = (I - A^{\dagger}A)\big[(I - A^{\dagger}A)\,c\big]$. In the practical implementation, it is necessary to replace the expression s = (Id - X.A).ConstantArray[1, n] by s = (Id - X.A).((Id - X.A).ConstantArray[1, n]).
This replacement can be explained by the fact that $A(I - A^{\dagger}A)s$ is closer to the zero vector than $As$.
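To make the whole pipeline concrete, here is a self-contained toy run of Algorithm 2 in Mathematica on a hypothetical skeletal reaction, C3H8 + O2 -> CO2 + H2O (propane combustion, not one of the paper's examples); the built-in PseudoInverse stands in for the iteratively computed X, and a GCD normalization replaces the division by the minimal numerator.

(* Rows: C, H, O; reactant columns positive, product columns negative. *)
A = {{3, 0, -1, 0},
     {8, 0, 0, -2},
     {0, 2, -2, -1}};
n = 4;
X = PseudoInverse[N[A, 50]];  (* stand-in for the iterative approximation *)
s = (IdentityMatrix[n] - X.A).ConstantArray[1, n];
s = Rationalize[s, 10^-30];
c = s*LCM @@ (Denominator /@ s);
c/GCD @@ (Numerator /@ c)
(* -> {1, 5, 3, 4}, i.e., C3H8 + 5 O2 -> 3 CO2 + 4 H2O *)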
Balancing Chemical Equations in Symbolic Form
As was explained in [6], balancing chemical reactions that possess atoms with fractional oxidation numbers and non-unique coefficients is an extremely hard problem in chemistry. The case when the system Equation (21) is not uniquely determined can be resolved using the Mathematica function Reduce. If a chemical reaction includes $n$ reaction molecules and $m$ reaction elements, then the reaction matrix $A$ is of order $m \times n$. In the case $n > m$, the reaction has $\binom{n}{m}$ general solutions. All of them can be found by applying the following expression:
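A hypothetical usage in the same spirit (the paper's exact expression is not reproduced here), again on the toy reaction matrix above and normalizing the first coefficient to 1, is:

(* Sketch: solving A.x == 0 symbolically with Reduce. *)
A = {{3, 0, -1, 0}, {8, 0, 0, -2}, {0, 2, -2, -1}};
vars = Array[x, 4];
Reduce[And @@ Append[Thread[A.vars == 0], x[1] == 1], Rest[vars]]
(* -> x[2] == 5 && x[3] == 3 && x[4] == 4 *)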
Experimental Results
Let us denote the iterations Equation (3) by NM (Newton's Method), the iterations Equation (4) by CM (Chebyshev's Method), and Equation (9) by PM (Proposed Method). Here, we apply the different methods in the Mathematica 10 environment to compute some generalized inverses and to show the superiority of our scheme(s). We also denote the hybrid algorithm given in [22] by HAL (Householder Algorithm), and our Algorithm 1 is denoted by APM (Accelerated Proposed Method). Throughout the paper, the computer characteristics are Microsoft Windows XP Intel(R), Pentium(R) 4 CPU, 3.20 GHz with 4 GB of RAM, unless stated otherwise (as at the end of Example 1).
Numerical Experiments on Randomly-Generated Matrices
Example 1. [22] In this numerical experiment, we compute the Moore-Penrose inverse of a dense, randomly-generated $m \times n = 800 \times 810$ matrix, defined as follows. The numerical results corresponding to the number of iterations and the CPU time are illustrated in Table 1, wherein IT denotes the number of iterative steps. The table shows that APM with $m = 2$ and five inner loops, with $p = 2$, is superior to the other existing methods. We employed HAL with $m = 2$ and eight inner loops. Note the choice of the initial matrix. Here, $\|\cdot\|_F$ stands for the Frobenius norm (Hilbert-Schmidt norm), which is defined for an $m \times n$ matrix $A$ as $\|A\|_F = \sqrt{\mathrm{trace}(A^*A)} = \sqrt{\sum_i \sigma_i^2}$, where $A^*$ denotes the conjugate transpose of $A$ and $\sigma_i$ are the singular values of $A$.
The results correspond to the Moore-Penrose inverse of a randomly-generated matrix. IT, number of iterative steps. It is important to emphasize that the computational time is directly influenced by the computer and software specifications. To clarify this, we executed our implemented algorithms/methods from Example 1 on a more recently-featured computer, whose characteristics are Windows 7 Ultimate, Intel(R) Core(TM) i5-4400 CPU 3.10 GHz with 8 GB of RAM and a 64-bit operating system. The results corresponding to this hardware/software configuration are given in Table 2 in terms of the elapsed CPU time. Furthermore, we re-ran Example 1 for an $m \times n = 1010 \times 1000$ matrix, randomly generated by an analogous code, to show that our schemes can also be applied to matrices satisfying $m \geqslant n$. The results generated by these values are arranged in Table 3, where $m = 3$ inner loops are considered for APM.
Table 2. The results corresponding to the Moore-Penrose inverse of randomly-generated matrices on a better-equipped computer. The numerical example illustrates the theoretical results presented in Section 2. It can be observed from the results in Tables 1-3 that, firstly, like the existing methods, the presented method shows stable behavior along with fast convergence. Additionally, according to the results contained in Tables 1-3, it is clear that the number of iterations required by the APM method during numerical approximation of the Moore-Penrose inverse is smaller than the number required by the classical methods. This observation is in accordance with the fact that the efficiency index is clearly the largest in the case of the APM method. In general, APM is superior among all of the existing famous hyper-power iterative schemes. This superiority is in accordance with the theory of efficiency analysis discussed before.
In fact, it can be observed that increasing the efficiency index by a proper factorization of the hyper-power method is a strategy that gives promising results in terms of both the number of iterations and the computational time on different computers.
Here, it is also worth noting that Schulz-type solvers are the best choice for sparse matrices possessing sparse inverses, since in such cases the usual SVD technique in software such as Mathematica or MATLAB ruins the sparsity pattern and requires much more time. Hence, such iterative methods and the SVD-type (direct) schemes are both competitive, but have their own fields of application.
Numerical Experiments in Balancing Chemical Equations
In this subsection, we present some clear examples indicating the applicability of our approach to balancing chemical equations. We also apply the initial matrix $X_0$ specified above. Example 2. Consider a specific skeletal chemical equation from [10] (24), where the left-hand side of the arrow consists of compounds/elements called reactants, while the right-hand side comprises compounds/elements called products. Hence, Equation (24) is formulated as the homogeneous equation $Ax = 0$, wherein $0$ denotes the null column vector. The comparison of the numerical results derived in Example 2 is given in Table 4, using 300 precision digits, which is large enough to minimize round-off errors as well as to clearly observe the computed asymptotic error constants in the convergence phase. Although in all practical problems machine precision (double precision) is enough (just as in Example 1), here our focus is to find very accurate coefficients for the chemical equation, since a very tight tolerance, such as $\|X_k - A^{\dagger}\|_\infty \leqslant 10^{-150}$, must be incorporated.
The final exact coefficients are defined as $(x_1, x_2, x_3, x_4, x_5)^T = (2, 4, 1, 3, 1)^T$. The experimental results clearly show that PM is the most efficient method for this purpose. In addition, we remark that, since we use iterative methods in floating-point arithmetic to obtain the coefficients, we must use the command Round[] in the last lines of our Mathematica code so as to attain the coefficients in exact arithmetic. In order to support the improvement described in Section 3, it is worth mentioning the corresponding comparison (using Mathematica notation). Example 3. Now, we solve the following skeletal chemical equation from [10]. Equation (27) is formulated as a homogeneous system of linear equations with the following coefficient matrix. The results of this experiment, generated using ordinary double-precision arithmetic and the stopping criterion $\|X_k - A^{\dagger}\|_\infty \leqslant 10^{-10}$, are illustrated in Figure 1.
Note that the final coefficients obtained in exact arithmetic are equal to $(x_1, \dots, x_{20})^T = (2, 3, 3, 6, 6, 6, 10, 12, 15, 20, 88, 2, 3, 3, 6, 6, 6, 10, 15, 79)^T$. The results once again show that PM is the best iterative process. Example 4. Consider the following example from [19]. The reaction matrix $A$ can be derived by taking into account both the law of conservation of atoms and the law of electrical neutrality (see [19]). As in the previous examples, let us denote by $X$ the result derived by the iterative method Equation (9); the rational approximation of s = A.(Id - X.A).((Id - X.A).ConstantArray[1, n]) is then computed. Example 5. In this example, it is shown that our iterative method is capable of producing the solution in the case when real coefficients are used and the reaction is not unique within relative proportions.
The iterative method Equation (9) converges quickly, as the list of consecutive errors shows. All possible $\binom{5}{2} = 10$ cases can be solved in the same way.
Example 7. As the last experiment, and to show that the proposed iteration preserves the sparsity pattern of the inverse when the inverse is sparse in nature, the 4000 × 4000 matrix A = ExampleData["Matrix", "Bai/tols4000"] has been taken from the Matrix Market database, with the stopping criterion $\|X_{k+1} - X_k\|_\infty / \|X_{k+1}\|_\infty \leqslant 10^{-10}$. The new scheme Equation (9) converges in twelve iterations. The matrix plots of the approximate inverse for this case are shown in Figure 2.
Conclusions
In this paper, we have developed a matrix iterative method for computing generalized inverses. The derived scheme has been constructed on the basis of the hyper-power iteration. We have shown that this scheme achieves an order of convergence equal to 31 by using only nine mmm, which yields a very high computational efficiency index.
We also provided further schemes by extending some known results so as to accelerate the initial phase of convergence. Furthermore, we applied our iterative schemes to balancing chemical equations as an important application-oriented area. The derived numerical results clearly upheld our theoretical findings to a great extent.
Further discussions and generalizations can be considered in future work to provide much more robust, reliable, and fast hybrid algorithms for computing generalized inverses with potential applications, for example as in [25].
Figure 1. Convergence history for the different methods used in Example 3.
Figure 2. The sparsity pattern of the approximate inverses: $X_1$ (top left); $X_2$ (top right); $X_{11}$ (bottom left); and $X_{12}$ (bottom right).
Table 3. The results corresponding to the Moore-Penrose inverse of the randomly-generated matrix $A_{1010\times 1000}$. | 6,263.6 | 2015-11-03T00:00:00.000 | [
"Chemistry",
"Computer Science",
"Mathematics"
] |
Forecast of AMD Quantity by a Series Tank Model in Three Stages: Case Studies in Two Closed Japanese Mines
: There are about 100 sites of acid mine drainage (AMD) from abandoned/closed mines in Japan. For their sustainable treatment, future prediction of AMD quantity is crucial. In this study, AMD quantity was predicted for two closed mines in Japan based on a series tank model in three stages. The tank model parameters were determined from the relationship between the observed AMD quantity and the inflow of rainfall and snowmelt by using the Kalman filter and particle swarm optimization methods. The Automated Meteorological Data Acquisition System (AMeDAS) data of rainfall were corrected for elevation and by the statistical daily fluctuation model. The snowmelt was estimated from the AMeDAS data of rainfall, temperature, and sunshine duration by using the mass and heat balance of snow. Fitting with one year of daily data was sufficient to obtain the AMD quantity model. Future AMD quantity was predicted by the constructed model using the forecast data of rainfall and temperature proposed by the Max Planck Institute–Earth System Model (MPI–ESM), based on the Intergovernmental Panel on Climate Change (IPCC) representative concentration pathway (RCP) 2.6 and RCP8.5 scenarios. The results showed that global warming causes an increase in the quantity and fluctuation of AMD, especially for large reservoirs and long residence times of AMD. There is a concern that, for mines with large AMD quantities, AMD treatment will become unstable due to future global warming. The fluctuation of AMD quantity is larger in Mine B than in Mine A, and the fluctuation in quality tends to be larger in Mine B as well. This suggests that fluctuations in AMD quantity due to global warming will cause large fluctuations in AMD quality.
Introduction
Japan has more than 5000 abandoned/closed mines, and about 100 of their sites produce acid mine drainage (AMD) due to the presence of sulfide mineralization [1]. The general treatment for AMD is neutralization and sedimentation by the addition of a neutralizer, such as lime, calcium carbonate, or sodium hydroxide [2], followed by solid/liquid separation [3] of the produced sludge from the neutralized effluents. In these treatments, all toxic elements are concentrated into the sludge by precipitation [4][5][6] and adsorption [7][8][9][10][11][12], and the sludge is controlled in a tailing pond at a mine site or final disposal site. Over the last several decades, AMD has been treated properly in Japan and has not caused severe pollution. However, since our statistical calculations (details are shown below) suggested that some mines will require AMD treatment for over 150 years [13,14], and other groups have suggested that more than 1000 years of treatment will be necessary [15] in the current situation, more sustainable treatment to reduce both AMD generation [16,17] and treatment cost [18] is needed. To reduce the treatment cost associated with the addition of chemicals and with sludge generation, a passive treatment that utilizes the natural environment of mines, such as topography, plants, and microorganisms, has attracted attention as a sustainable AMD treatment based on new concepts [19][20][21]. Several researchers are also trying to reuse this sludge successfully as an industrial material [22,23].
Unlike industrial wastewater, AMD quantity and quality differ significantly among mines due to regional, geological, mineralogical, and biological factors. Therefore, it is necessary to customize an appropriate treatment for each mine. To select an optimal treatment method from the various treatment technologies, including those based on both active and passive concepts, an accurate understanding of the current potential for AMD generation [24,25] and a future forecast of AMD quantity and quality are essential.
The objective of this study was to determine a forecast for AMD quantity. To accomplish this, we constructed a model that reproduces the current AMD quantity using previous monitoring data of AMD, and then extrapolated it to the future. There are two ways to reproduce AMD quantity: one is a hydraulic simulation [26][27][28][29][30][31], and the other is a tank model. A hydraulic simulation provides detailed information on the origin and distribution of AMD and can be a powerful tool for discussing countermeasures against AMD generation, but it requires, in addition to meteorological data, detailed geological, mineralogical, and hydraulic data, which are generally difficult to obtain, especially for abandoned and closed mines. On the other hand, a tank model is a black box that determines the relationship between inflow and outflow, for which inflow data of rainfall and snowmelt and outflow data of AMD quantity are sufficient [32,33]. In this study, the tank model was selected for AMD quantity modeling, and rainfall and snowmelt were used as inflow. The rainfall data were corrected for elevation and adjusted using the statistical daily fluctuation model to suit each AMD site. Snowmelt was also estimated from rainfall by considering the mass and heat balance, using temperature and sunshine-duration data. We did not select a hydraulic simulation but chose a statistical model because our target mines are closed and it was difficult to obtain detailed monitoring, geological, and hydraulic data for this study.
For the AMD quality model, we previously reported the geochemical calculation with first-order elution kinetics of sulfide minerals [13,14]. In the model, sulfides that should be the source of AMD were selected from the quality data, and their first-order elution rate and initial AMD generation potential were estimated by fitting to the time change of their elution amount obtained from the AMD quantity and quality data. The AMD quality could be estimated by the coupling of the kinetics for sulfide elution and oxidation, and the geochemical code for the chemical equilibrium calculation of precipitation and adsorption [34][35][36][37]. This means that accurate estimation of AMD quantity is crucial for the AMD quality model.
In this study, the AMD quantity model was constructed using two case studies of underground mines: a closed sulfur mine (Mine A) and a closed black-ore copper, lead, and zinc mine (Mine B). Mine A has a large quantity of AMD, averaging 18 m3 min−1 with small fluctuation, the opposite of Mine B (1.5 m3 min−1 with large fluctuation). From these case studies, the parameters of the model were estimated, and the future AMD quantity for the next few decades was predicted using forecast data for rainfall and temperature based on the MPI-ESM (Max Planck Institute Earth System Model) [37]. For this, we selected two kinds of global warming scenarios proposed by the IPCC (Intergovernmental Panel on Climate Change): the low-stabilization scenario RCP (representative concentration pathway) 2.6 and the high-level greenhouse gas emission scenario RCP8.5 [37]. We further discuss the effects of global warming on the forecast AMD quantity from the closed sulfide mines that were examined.
Tank Model
The AMD quantity model was constructed using a series tank model in three stages, as shown in Figure 1. The first and second stages correspond to the nonpolluted water reservoirs on the surface and inside the mine, respectively. The third stage corresponds to the polluted water reservoir in the ore deposit that causes AMD. Inflow, r (mm), is the summation of rainfall, rw, and snowmelt, rs. In each tank, a part of the inflow is distributed to the outflow (mm h−1), qoi (i = 1, 2, 3), and to the seepage flow to the next tank, qsi (i = 1, 2), according to the water reservoir height (mm), xi (i = 1, 2, 3), and the outflow height (mm), bi (i = 1, 2). The water balance in each tank is

$$ \frac{dx_i}{dt} = q_{s,i-1} - q_{oi} - q_{si}, \qquad i = 1, 2, 3, $$

where t is time, $q_{s0} = r = r_w + r_s$, and $q_{s3} = 0$. The outflow is calculated from

$$ q_{oi} = a_{oi}\,(x_i - b_i), \qquad i = 1, 2, 3, $$

where aoi (i = 1, 2, 3) is the outflow coefficient and b3 = 0. The seepage flow is calculated from

$$ q_{si} = a_{si}\,x_i, \qquad i = 1, 2, $$

where asi (i = 1, 2) is the seepage coefficient.
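To make the water-balance bookkeeping concrete, the following is a minimal sketch (not the authors' code) of an explicit-Euler simulation of the three-stage series tank model; the parameter layout and the non-negativity clamp are our assumptions.

```python
import numpy as np

def simulate_tanks(r, a_o, a_s, b, x0, dt=1.0):
    """Three-stage series tank model (explicit Euler).

    r   : inflow (rainfall + snowmelt) per time step, mm
    a_o : outflow coefficients (a_o1, a_o2, a_o3)
    a_s : seepage coefficients (a_s1, a_s2)
    b   : outflow heights (b1, b2), mm; b3 = 0 by definition
    x0  : initial reservoir heights (x1, x2, x3), mm
    dt  : time step (e.g., 24 for daily data with per-hour coefficients)
    Returns q_o3, the modeled AMD quantity per step.
    """
    x = np.asarray(x0, dtype=float).copy()
    a_o = np.asarray(a_o, dtype=float)
    b3 = np.array([b[0], b[1], 0.0])
    q_o3 = np.empty(len(r))
    for t, inflow in enumerate(r):
        q_o = a_o * np.maximum(x - b3, 0.0)                   # outflow above height b_i
        q_s = np.array([a_s[0] * x[0], a_s[1] * x[1], 0.0])   # q_s3 = 0
        seep_in = np.array([inflow, q_s[0], q_s[1]])          # q_s0 = r
        x += dt * (seep_in - q_o - q_s)                       # water balance dx_i/dt
        x = np.maximum(x, 0.0)                                # heights cannot go negative
        q_o3[t] = q_o[2]                                      # third-stage outflow = AMD
    return q_o3
```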
In this study, qo3 corresponded to the AMD quantity. The inflow, r, was set using the procedure described in the following sections. The Kalman filter and particle swarm optimization methods were used to fit qo3 to the observed AMD quantity data and thereby estimate the xi, bi, asi, and aoi parameters [38].
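As a hedged sketch of this fitting step, the snippet below uses a generic global optimizer (SciPy's differential evolution) as a stand-in for the Kalman-filter/particle-swarm procedure of [38], reusing the simulate_tanks sketch above; the parameter bounds are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import differential_evolution

def fit_tank_model(r_obs, q_obs):
    """Estimate tank parameters by least squares against observed AMD
    quantity; a generic global optimizer stands in for the Kalman filter
    and particle swarm optimization used in the study [38]."""
    def loss(p):
        a_o, a_s, b, x0 = p[0:3], p[3:5], p[5:7], p[7:10]
        q_calc = simulate_tanks(r_obs, a_o, a_s, b, x0)
        return float(np.sum((q_calc - q_obs) ** 2))

    # illustrative bounds: coefficients in (0, 1], heights up to ~1000 mm
    bounds = [(1e-4, 1.0)] * 5 + [(0.0, 1000.0)] * 5
    return differential_evolution(loss, bounds, seed=0).x
```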
Correction of Rainfall Data and Judgment of Snowfall
Rainfall data near each mine were derived from AMeDAS (Automated Meteorological Data Acquisition System) provided by the Japan Meteorological Agency [39]. To suit each mine situation, the daily data of rainfall and temperature obtained from AMeDAS were corrected for elevation and adjusted by using the statistical daily fluctuation model, as shown in Figure 2. Also, snowfall was estimated according to the corrected temperature, and if the temperature was under 2 °C, it was judged to be snowfall and not rainfall.
Since the capture rate of rainfall particles by the rain gauge decreases as the wind speed increases (known as the Jevons effect), Kondo et al. proposed a statistical correction for the daily rainfall data obtained from AMeDAS [40], in which rw0 (mm) is the raw rainfall data, rw1 (mm) is the rainfall corrected by the statistical daily fluctuation model, and d (days) is the number of days from 1 January. Furthermore, the amount of rainfall near mines depends greatly on elevation due to rapid updraft and adiabatic expansion, and was corrected accordingly [40]: rw2 (mm) is the rainfall corrected for elevation, h (km) is the elevation of the mine site, h0 (km) is the elevation of the AMeDAS observation point, and c is a coefficient (0.001 km−1 for 5 °C or less and 0.00064 km−1 for more than 5 °C). The AMeDAS temperature, T, was likewise corrected for elevation to give T1 (°C). As shown in Figure 2, if the temperature at the mine site was above 2 °C, it was assumed that there was no snowfall and the rainfall inflow was set as rw2; if the temperature was below 2 °C, the rainfall rw2 was judged to be snowfall, rf = rw2, and rw2 = 0.
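The exact functional forms of the daily-fluctuation and elevation corrections are given in [40] and could not be reproduced here; the sketch below assumes, purely for illustration, a multiplicative linear elevation correction and a standard lapse-rate temperature correction, while the 5 °C coefficient switch and the 2 °C snowfall judgment follow the text.

```python
LAPSE_RATE = 6.5  # degC per km; a standard assumption, not stated in the text

def correct_to_mine(r_w1, T, h, h0):
    """Transfer AMeDAS observations to the mine site (illustrative forms).

    r_w1 : rainfall already corrected by the daily fluctuation model, mm
    T    : AMeDAS temperature, degC
    h, h0: elevations of the mine and the AMeDAS point, km
    Returns (rainfall r_w2, snowfall r_f, corrected temperature T1).
    """
    c = 0.001 if T <= 5.0 else 0.00064       # coefficient per km (from the text)
    r_w2 = r_w1 * (1.0 + c * (h - h0))       # assumed multiplicative elevation form
    T1 = T - LAPSE_RATE * (h - h0)           # assumed linear lapse-rate correction
    if T1 < 2.0:                             # below 2 degC: precipitation is snow
        return 0.0, r_w2, T1                 # r_f = r_w2 and r_w2 = 0
    return r_w2, 0.0, T1                     # no snowfall above 2 degC
```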
Estimation of Snowmelt and Snow Cover
Daily snowmelt, rs, and snow cover, rc, were also estimated by following the mass and heat balance of snow using the AMeDAS data of rainfall, temperature, and sunshine duration, as shown in Figure 2. If the temperature was below 0 °C, no snowmelt was assumed and rs = 0; otherwise, snowmelt was calculated according to the following procedure.
Snowmelt, rs (mm), was calculated from the fusion heat, Q (J), the latent heat, L (334 J kg−1), the catchment area, S (mm2), and the density of water, ρ (9.97 × 10−7 kg mm−3):

$$ r_s = \frac{Q}{L\,S\,\rho}. $$

The fusion heat was calculated from the heat balance of the exposed snow:

$$ Q = Q_1 + Q_2 + Q_3 + Q_4 + Q_5, $$

where Q1 is the short-wavelength radiation, Q2 is the long-wavelength radiation, Q3 is the sensible heat transfer, Q4 is the latent heat transfer, and Q5 is the heat transferred by rainfall. Heat changes within the snow layer and heat transfer from the ground were assumed to be negligible [41]. The short-wavelength radiation was calculated from the albedo, r, which is the ratio of reflected to incident sunshine radiation at the earth's surface, and the daily average solar irradiance, I. The average solar irradiance is a function of the ratio of the sunshine duration, N, to the astronomical sunshine duration, N0 [42], where I0 is the solar irradiance at the top of the atmosphere. The values of I0, N0, and the albedo, r, are available from references [39,43], and the sunshine duration data, N, are available from AMeDAS. The long-wavelength radiation is the difference between the radiation from the atmosphere, Qa, and the radiation from the snow surface, Qs:

$$ Q_2 = Q_a - Q_s, $$

where σ is the Stefan-Boltzmann constant (5.67 × 10−8 W m−2 K−4) and e is the amount of saturated water vapor [44]. The temperature of the snow surface, Ts, was calculated from the relation in [45] when T1 ≤ 1.47 °C.
The sensible heat was calculated from the relation in [46], where K is the transfer coefficient for sensible and latent heat; a value of 3.5 was proposed for the area near the mines modeled in this case study [46]. The latent heat was calculated from

$$ Q_4 = 53\,K\,(e - 6.11), \qquad T_1 \ge 7, $$

and the heat transferred by rainfall was calculated using the specific heat of water, cw (4.186 J kg−1 K−1). The snowmelt, rs, calculated from the heat balance above should not exceed the snow cover, rc. The snow cover was calculated from the summation of the daily mass balance,

$$ r_c = \sum_{\mathrm{days}} (r_f - r_s), $$

and if rs > rc, the snowmelt was set to rs = rc.
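A minimal sketch of the daily snow bookkeeping follows, treating the heat-balance terms Q1-Q5 as precomputed inputs (their individual formulas follow the cited references); the constants are taken as written in the text.

```python
L_FUSION = 334.0      # latent heat, J kg^-1, as given in the text
RHO_W = 9.97e-7       # density of water, kg mm^-3

def daily_snow_step(T1, r_f, Q_terms, r_c, S):
    """One day of snow mass balance.

    T1      : corrected daily mean temperature, degC
    r_f     : daily snowfall, mm
    Q_terms : (Q1, ..., Q5) heat-balance terms, J, computed per [41-46]
    r_c     : snow cover carried over from the previous day, mm
    S       : catchment area, mm^2
    Returns (snowmelt r_s, updated snow cover r_c).
    """
    r_c += r_f                               # accumulate today's snowfall
    if T1 < 0.0 or r_c <= 0.0:
        return 0.0, r_c                      # no melt below 0 degC or without snow
    Q = sum(Q_terms)                         # Q = Q1 + Q2 + Q3 + Q4 + Q5
    r_s = Q / (L_FUSION * S * RHO_W)         # melt depth from the fusion heat
    r_s = min(max(r_s, 0.0), r_c)            # melt cannot exceed the snow cover
    return r_s, r_c - r_s
```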
Forecast Data of Temperature, Rainfall, and Sunshine Duration
In the above-mentioned AMD quantity and quality models, daily data of rainfall, average temperature, and sunshine duration obtained from AMeDAS were used for model construction. Therefore, future forecasts of these daily data were also necessary to forecast future AMD quantity and quality. The daily output data of MPI-ESM (Max Planck Institute Earth System Model) were used in this study. Two kinds of IPCC RCPs for the greenhouse gas (GHG) concentration scenario were selected: RCP2.6 and RCP8.5. The former is the scenario with the lowest GHG emissions, intended to keep the future temperature rise below 2 °C, and the latter is the scenario with the highest GHG emissions.
From the MPI-ESM output, daily forecast data of rainfall, average temperature, and maximum and minimum temperatures were available. The daily forecast of the average solar irradiance, I, was estimated from the diurnal temperature range [47], where ∆T is the difference between the maximum and minimum temperatures and ∆Tave is the monthly average of ∆T.
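The exact irradiance relation is given in [47]; as an illustration only, the snippet below assumes a Hargreaves-type form scaled by the monthly-normalized diurnal range, with placeholder coefficients.

```python
import math

def irradiance_from_temperature_range(I0, dT, dT_ave, a=0.0, b=0.7):
    """Illustrative daily mean solar irradiance from the diurnal range.

    I0     : solar irradiance at the top of the atmosphere
    dT     : daily max - min temperature; dT_ave: its monthly average
    a, b   : placeholder empirical coefficients (not from the paper)
    """
    return I0 * (a + b * math.sqrt(dT / dT_ave))
```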
Case Studies in Two Closed Mines
In this study, two closed underground mines in the northern part of Japan were selected as case studies: Mine A has a large quantity of AMD with small fluctuations of quantity and quality, and Mine B has a small quantity of AMD with large fluctuations of quantity and quality. Snow is observed in winter at both closed mines. The locations of the mines are shown in Figure 3, and the AMD characteristics are shown in Table 1.
In Mine A, native sulfur and pyrite were mined during its operation. The ore deposit measured about 1500 m east-west and 1500 m north-south, with a thickness of 25-150 m, and ore reserves were about 230 million tons. The mine produced about 1 million tons of ore and supplied one-third of Japan's sulfur demand, but it closed in 1971 due to the market influence of sulfur recovered from oil refining. The quantity of AMD is about 18 m3 min−1 on average annually, which is one of the largest AMD values in Japan [48]. The AMeDAS point is located 11 km east of and 825 m below the mine.
In Mine B, copper, lead, and zinc were mined during its operation. The ore deposit was a black-ore type, zoned from the lower part into yellow ore, black ore, and a quartz band. The mine produced a maximum of about 25,000 tons per year but closed in 1985 due to ore depletion. The annual average AMD quantity was 1.72 m3 min−1 in 2017, increasing to 5-7 m3 min−1 during the snowmelt season [49]. The AMeDAS point is located 18 km northwest of and 465 m below the mine.
AMD Quantity Model Construction
The relation between the input AMeDAS rainfall data (upper panels) and the observed and calculated AMD quantity (lower panels) is shown in Figure 4. In this calculation, the daily observed AMD quantity of the first year was used for fitting, and the following year's data were used for model validation. The fitting period was also varied from half a year to two years, and the correlation coefficients between the observed and calculated values were compared, as shown in Table 2 and Supplementary Figure S1. Naturally, the longer the fitting period, the higher the correlation coefficient in the validation period, but a fitting period of one year appeared to be generally sufficient to reproduce the AMD quantity of the following year. As shown in Supplementary Figure S2, when the elevation correction, the statistical daily fluctuation correction, and the snowmelt estimation were not applied, the reproducibility of the AMD quantity worsened, especially for Mine A. This is because Mine A is located at a higher elevation, so the effects of the elevation correction and snowfall are larger than for Mine B.
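The fit/validate split behind Table 2 can be sketched as follows, reusing the simulate_tanks and fit_tank_model sketches above; the score is the correlation coefficient over the validation period.

```python
import numpy as np

def fit_and_validate(r, q_obs, split):
    """Fit on the first `split` days, validate on the remainder, and
    report the observed-vs-calculated correlation coefficient."""
    p = fit_tank_model(r[:split], q_obs[:split])
    a_o, a_s, b, x0 = p[0:3], p[3:5], p[5:7], p[7:10]
    q_calc = simulate_tanks(r, a_o, a_s, b, x0)
    return np.corrcoef(q_obs[split:], q_calc[split:])[0, 1]
```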
The parameters obtained for the tank model are shown in Table 3. Mine A has a smaller outflow height and a larger AMD reservoir than Mine B. Additionally, Mine A has a smaller outflow coefficient in the third stage, which directly affects AMD generation, than Mine B. This means that Mine A has the bigger reservoir and the longer residence time of AMD, which results in the smaller fluctuation of AMD compared to Mine B. As mentioned in the previous section, the ore reserves and production scale of Mine A were far larger than those of Mine B, and this difference in scale should directly affect the difference in the reservoir size and residence time of AMD.
Forecast of AMD Quantity
The forecast of AMD quantity (lower panels) is shown in Figure 5, together with the forecast of rainfall (upper panels) from MPI-ESM. Under RCP2.6, the forecast temperature rise around the mines is about 2 °C by 2050 and remains at about 1.0-2.5 °C after 2050. Under RCP8.5, on the other hand, the temperature continues to rise and reaches +5.9 °C in 2100.
In Mine A, the MPI-ESM shows that both rainfall and heavy rain frequency, i.e., the number of days per year with more than 50 mm of rainfall, increase due to the temperature rise. In 2100, the rainfall forecast is +21% for both RCP2.6 and RCP8.5, and the heavy rain frequency increases by 4 days/year for RCP2.6 and 8 days/year for RCP8.5, compared to the present. According to these trends, the AMD quantity calculated from the constructed model increases, as shown in Figure 5. The forecast for AMD quantity in 2100 is +27% for RCP2.6 and +31% for RCP8.5.
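For clarity, the heavy rain frequency defined above can be computed from a daily rainfall series as follows (a small helper of our own, not the authors' code):

```python
import numpy as np

def heavy_rain_days_per_year(daily_rain_mm, threshold=50.0):
    """Days per year with rainfall above the threshold (50 mm in the text),
    averaged over the length of the record."""
    rain = np.asarray(daily_rain_mm, dtype=float)
    return np.sum(rain > threshold) / (len(rain) / 365.25)
```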
In Mine B, the MPI-ESM shows that the temperature rise of around 2 °C in the RCP2.6 scenario has little effect on rainfall and heavy rain frequency. In 2100, the rainfall forecast decreases by 1.5% and the heavy rain frequency decreases by 2 days/year, which results in a 0.55% increase in the forecast AMD quantity compared to the present. However, the temperature rise of 5.9 °C in the RCP8.5 scenario affects the rainfall and heavy rain frequency forecasts as much as for Mine A. In 2100, the rainfall forecast increases by 22% and the heavy rain frequency increases by 5.5 days/year, which results in a 25% increase in the forecast AMD quantity compared to the present.
The forecast of the standard deviation of AMD quantity is shown in Figure 6. A comparison of the coefficient of variation of AMD quantity between the present and the future is shown in Table 4. Here, the coefficient of variation was calculated for 10 years from 2010 to 2020 for the present and from 2100 to 2110 for the future. The temperature rise due to global warming caused larger fluctuations in the AMD quantity for Mine A. In the case of Mine B, since the AMD reservoir is small and the AMD residence time is short, even if the rainfall fluctuation increases in the future due to global warming, the AMD fluctuation will remain largely as it is now. On the other hand, in the case of Mine A, since the AMD reservoir is larger and the AMD residence time is longer, AMD fluctuation tends to increase gradually in the future, affected by increases in rainfall fluctuation due to global warming. This trend suggests that AMD treatment might be unstable because of global warming in the future, especially for mines with larger AMD quantities.
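The coefficient of variation used in Table 4 is simply the ratio of the standard deviation to the mean over the 10-year window, e.g.:

```python
import numpy as np

def coefficient_of_variation(q_daily):
    """CV = standard deviation / mean over a window of daily AMD quantity,
    as used for the 10-year windows in Table 4."""
    q = np.asarray(q_daily, dtype=float)
    return np.std(q) / np.mean(q)
```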
In general, AMD quality tends to deteriorate as the AMD quantity increases. This is because when the AMD quantity increases, AMD comes into contact with new pollution sources in the mine. Indeed, at present, the fluctuation of AMD quantity is larger in Mine B than in Mine A, and the fluctuation in AMD quality tends to be larger in Mine B as well. This suggests that fluctuations in AMD quantity due to global warming will cause large fluctuations in AMD quality.
Conclusions
The AMD quantity model was constructed for two closed mines in Japan. The model was constructed with a series tank model, and fitted by using daily data for one year, which were enough to obtain adequate parameters. The results showed that Mine B has a smaller AMD reservoir and a shorter AMD residence time than Mine A, resulting in a large fluctuation of AMD quantity in Mine B. The forecast of AMD quantity was also estimated based on the forecast of rainfall and temperature proposed by the MPI-ESM with IPCC RCP2.6 and RCP8.5 scenarios. The forecast results showed that temperature rise due to global warming will cause an increase in rainfall, resulting in increased AMD quantity. The fluctuation of rainfall will also increase due to global warming, increasing the fluctuation of AMD quantity in Mine A. The effect of global warming in Mine A will be bigger than in Mine B due to its larger reservoir and longer residence time of AMD.
In this study, it is expected that the quantity and fluctuation of AMD might increase due to global warming. This suggests that fluctuations in AMD quality might also increase. Therefore, when selecting future treatment methods, careful consideration should be given to whether the AMD fluctuation can be sufficiently dealt with in the future, especially for passive treatment.

Funding: This research was partially funded by JOGMEC (Japan Oil, Gas and Metals National Corporation) and the Center for Eco-Mining, Japan.
"Engineering"
] |
Modern Radiotherapy Era in Breast Cancer
Radiation therapy (RT) is one of the major treatment modalities used in breast cancer, and depending on the chest-wall anatomy, RT fields have to be customized. Planning techniques have been evolving over the last two decades from two-dimensional (2D) to three-dimensional (3D), while intensity modulated radiotherapy (IMRT), volumetric modulated arc therapy (VMAT) and even proton therapy have become options in the daily approach. In addition, with technological hardware and software advances in delivery and planning systems, the total treatment duration of breast RT has been shortened in recent decades along with hypofractionated radiotherapy schemes and emerging partial-breast irradiation protocols. Another attractive approach, accelerated partial breast irradiation (APBI), could be a reasonable option for a highly selected subpopulation of early-stage breast cancer patients outside of a clinical trial. Long-term follow-up results have demonstrated heart and coronary sparing with maximum safety and efficacy. The most important advance among all these technical improvements may be cardiac sparing through the deep breath-hold approach. Although the most advanced techniques in the management of breast cancer have not been verified to increase survival, we suggest resource-stratified adoption of advanced techniques in order to provide the best technical and clinical care for these long-term survivor candidates.
Introduction
Radiation therapy (RT) has become an essential component of breast cancer treatment, and depending on the anatomic structure of the region to be irradiated (breast, chest wall or regional lymphatics), RT can be technically challenging and varies from one patient to another.
Breast RT has evolved from two-dimensional (2D) to three-dimensional (3D) planning, while intensity modulated radiotherapy (IMRT), volumetric modulated arc therapy (VMAT) and even proton therapy have become options to discuss with our patients in daily practice. Besides technological hardware and software advances in delivery and planning systems, the total treatment duration of breast RT has been changing dramatically in recent decades along with hypofractionated radiotherapy schemes and emerging partial-breast irradiation protocols. As modern RT has allowed a successive reduction in treatment-related complications such as fibrosis and long-term cardiac toxicity, in addition to improving locoregional control rates, the rationale of keeping doses as low as possible makes it appealing to focus more on heart and coronary sparing with four-dimensional (4D) breath-hold techniques. Modern radiotherapy techniques and fundamentals need to be implemented in routine clinical care with maximum safety and efficacy in order to maximize the benefit of locoregional treatment and to minimize the risks of late complications.
We aim to summarize the advances of modern radiotherapy in breast cancer through clinical approaches and routine treatment indications based on present knowledge and evidence-based recommendations.
Supine
Radiotherapy has been widely used as a part of breast cancer treatment after partial or total mastectomy. The radiotherapy technique can be difficult and variable depending on the anatomy of the patient, such as chest-wall concavity and depth of the axilla. The first step of radiation treatment is CT simulation to obtain a reproducible, detailed anatomy for planning conformal or intensity-modulated radiotherapy, using heart-sparing techniques such as breath hold or heart blocking, especially for left-sided breast cancer patients. Adjuvant therapy for breast cancer starts 4-6 weeks after surgery or after chemotherapy and is delivered with 6 or 18 MV photons, usually using wedged tangential fields, field-in-field, 3DCRT or IMRT, at 1.8-2.67 Gy per fraction to total doses ranging from 40 to 60 Gy.
Treatment fields are a composite of adjacent whole breast or chest wall, internal mammary, supraclavicular and axillary fields. The main purpose of breast radiotherapy field design is to avoid hot and cold dose regions between contiguous fields while minimizing the dose to organs at risk such as the lung and heart. RT fields have to be modified according to the patient's chest-wall and breast anatomy, whose irregular surface can cause dose inhomogeneity. At the same time, the setup has to be easily applicable and reproducible. Immobilization devices specially designed for breast cancer treatment are commercially widely available and frequently used in daily practice; the best-known devices include the inclined plane, breast boards, wing/butterfly boards, vacuum cradle bags and the alpha cradle. The most common, preferred and basic setup is performed on a breast board with an inclined plane and an arm support, in the supine position. The head of the patient is turned to the opposite side, and the arm is abducted (90°-120°) and externally rotated. Skin folds in the supraclavicular field and the soft tissue of the arm have to be adjusted if required. The patient is positioned on her back on a stable breast board, and the board is angled to keep the sternum parallel to the table. This angle can be adjusted according to clinical needs, but larger angles can increase the lung dose in patients requiring a supraclavicular field. The border between the chest-wall and supraclavicular fields is usually placed at the bottom of the clavicular head. Radiopaque wires must be used to define incisions and breast borders [1]. Supine positioning has been used for breast cancer patient alignment for several decades all over the world. It provides patient comfort and position reproducibility for the whole treatment period, while ensuring better axillary coverage in comparison to prone positioning. When setup errors in the supine position were studied with three-dimensional cone-beam computed tomography (CBCT), the average magnitude of error was generally less than 5 mm across a number of studies [2]. Sethi et al. compared prone and supine positioning for 3DCRT and IMRT plans; traditional three- or four-field planning had inadequate nodal coverage, especially in the prone setup compared to supine (29 and 42% vs. 50 and 59%), and this disadvantage was overcome by CT-based planning, with coverage varying from 92 to 97% depending on IMRT or 3DCRT, independent of positioning [3].
Prone
Rarely, in the case of a very large pendulous breast, the lateral decubitus or prone position can help. The prone position has been proposed especially for large-breasted patients, as this volume can cause dose inhomogeneity due to hot spots; moreover, overlapping breast tissue can create an auto-bolus effect, which can mitigate skin toxicity [4,5]. While the prone setup has also been proposed to increase lung and heart tissue sparing, the literature has conflicting results in terms of normal tissue dose reduction [6,7]. Wurschmidt et al. reported that the prone position increased the incidental dose to the LAD coronary artery to a mean of 33.5 Gy, compared with a mean of 25.6 Gy in the supine setup, in left whole breast irradiation, without any significant difference in the average mean heart dose between the two setups [6]. In contrast, Kirby et al. documented that prone positioning reduced cardiac doses in almost 64% of 30 patients treated with whole breast irradiation (median reduction in LAD mean dose = 6.2 Gy) and in 24% of 30 cases treated with partial breast irradiation (median reduction in LAD max dose = 29.3 Gy), in addition to reducing the ipsilateral-lung mean dose in all whole breast and 61 of 65 partial irradiation cases, and the chest-wall V50Gy in all whole breast irradiation cases. They concluded that prone positioning is likely to benefit left-breast-affected women of larger breast volume for both whole and partial breast irradiation, and right-breast-affected women regardless of breast volume [7]. Despite the improvement of dose homogeneity, the prevention of hot-spot regions and the lower lung and heart doses, the prone position for whole breast irradiation has not been adopted in routine clinical practice. The prone setup has been considered more problematic to reproduce than the supine position and less precise. In Varga et al.'s randomized study, the range of displacement was greater in the prone position, and prone relocation precision worsened over time without any correlation to patient-related parameters [8]. Patient treatment-related comfort and inadequate target coverage of tumors, especially those extending down to the chest wall, were also mentioned as main concerns [9,10].
The main concern about the prone position, namely setup errors and reproducibility in comparison with the internationally standard supine position in women undergoing whole-breast radiotherapy, was examined by Kirby et al. by matching the chest wall and clips on cone-beam CT (CBCT) images acquired prior to fractions 1, 4, 7, 8, 11 and 14. Setup errors were greater with the prone technique than with the supine technique: systematic errors were 1.3-1.9 mm (supine) versus 3.1-4.3 mm (prone) (p = 0.02), and random errors were 2.6-3.2 mm (supine) versus 3.8-5.4 mm (prone) (p = 0.02). Although patient comfort scores and treatment times were similar, the calculated CTV-PTV margins were larger for prone (12-16 mm) than for supine treatment (10 mm) [11].
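Reference [11] does not spell out the margin recipe in the excerpt above; as a hedged illustration, the widely used van Herk formula combines systematic errors (Σ) and random errors (σ) as

$$ M_{\mathrm{CTV\text{-}PTV}} = 2.5\,\Sigma + 0.7\,\sigma, $$

so that, taking the upper prone values (Σ = 4.3 mm, σ = 5.4 mm), one obtains 2.5 × 4.3 + 0.7 × 5.4 ≈ 14.5 mm, of the same order as the 12-16 mm prone margins quoted above. This is a worked example under the van Herk recipe; the published margins may rest on a different formula.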
Lateral decubitus position
The lateral decubitus position is a side-lying setup developed especially for large-sized and pendulous-breasted patients. In experienced clinics, this setup has been used mainly for breast-only irradiation, as lymphatic coverage can be problematic in this position. Campana et al. presented their isocentric lateral decubitus technique at the Institut Curie, where almost 500 patients were treated with 50 Gy whole breast radiotherapy [12]. Thin carbon-fiber supports and special patient positioning devices have been developed especially for this technique. Their technique has been shown to achieve good dose homogeneity in the breast treatment volume, with extremely low doses to the underlying lung and heart [12]. Despite applicable single-center results, this technique has not spread or been accepted into the routine clinical workflow.
Thermoplastic bra
The use of a thermoplastic bra has been investigated with the objective of minimizing organ-at-risk doses, as it moves the breast widely laterally. It has been found to provide a shallower beam arrangement for the left breast (medial: 288°-315° with bra vs. 302°-325° without bra) and to decrease lung doses by 30.6%, without any dedicated selection criteria for daily clinical use [13]. The main concern with thermoplastic immobilization, namely increased skin dose and a possible associated clinical exacerbation of side effects, turned out not to be significant.
3DCRT
Conventional two-dimensional wedge compensators were used to shape treatment fields for many decades. After the integration of CT and more sophisticated planning programs into the radiotherapy clinical routine, the target location can be defined precisely and the dose distribution can be made more homogeneous. The target and critical structure volumes for three-dimensional conformal radiotherapy (3DCRT) have been defined according to ICRU reports 50 and 62 [14]. A major challenge in improving dose uniformity is the irregular shape and size of the breast, while also minimizing the risk of treatment-related complications. In recent years, conformal RT, particularly forward or inverse intensity modulated RT (IMRT), which is a more advanced and sophisticated form of 3DCRT, has become popular for breast irradiation, as it provides reduced inhomogeneity and/or better normal tissue sparing [15]. Additionally, the lately accessible image-guided RT (IGRT) can significantly increase the precision of conformal treatment delivery.
3DCRT is based on the patient's simulation CT with pertinent anatomical data for target definition as the first and most important step of this advanced planning system. Target delineation and consistency of target volumes are a priority, and the RTOG and EORTC have published breast cancer-specific atlases, easily reachable on their websites, to promote uniformity among observers [16,17] (http://www.rtog.org/CoreLab/ContouringAtlases/BreastCancerAtlas.aspx). In addition to atlas-based contouring publications, quantification of the multi-institutional, multi-observer variability of target and organ-at-risk (OAR) delineation for breast cancer radiotherapy and its dosimetric impact has been an attractive topic. Li et al. assembled nine radiation oncologists specializing in breast RT from eight institutions to individually delineate the lumpectomy cavity, boost planning target volume, breast, supraclavicular, axillary and internal mammary nodes, chest wall and OARs (e.g., heart, lung) on the same CT images of three representative patients with breast cancer [18]. The variability in contouring the targets and OARs was as low as 10%, while the volume variations had standard deviations of up to 60%. These inter-observer differences can easily result in significant dosimetric changes in breast radiotherapy planning. Further work is warranted to obtain a systematic consensus, especially in the era of IMRT/IGRT, that could be adopted easily by institutions. In a similar standardization attempt to minimize the variation in substructure delineation for organs at risk, a detailed cardiac CT atlas has been developed by the University of Michigan [19]. If a patient has positive supraclavicular lymph nodes, the additional dose to the supraclavicular region raises the question of brachial plexus dose. Brachial plexus contouring is mostly thought of as a part of head and neck or lung IMRT, so breast radiation oncologists are encouraged to follow contouring guidelines for the brachial plexus (BP) based on an anatomically validated cadaver data set and head and neck case series [20,21]. An average margin of 4.7 mm around the anatomically validated brachial plexus contour is advised to cover and compensate for all anatomic variations of the brachial plexus [20].
Many irradiation techniques, such as single-isocentric 3D conformal whole breast irradiation, the prone position technique, and four- or five-field irradiation techniques for the peripheral lymphatics, have been described and are widely used all over the world; details are not given here, as they are beyond the scope of this chapter. For each CT data set (2-5-mm slices), dosimetric plans are created by appropriately adjusting the beam apertures, such as beam angle, collimator angle, couch angle, wedges, energies, weights and multi-leaf collimators, by virtual simulation through digitally reconstructed radiographs (DRR), so that the planning goals on coverage and OAR sparing can be achieved. Beam apertures are selected to fully cover the targets for each set of contours. Photon beams of 6 MV and/or high-energy 15-18 MV are used to irradiate the breast, chest wall and boost PTVs tangentially, as well as the supraclavicular and axillary nodes. Electron beams, with or without a combination of 6 MV photons, are used for the internal mammary nodes.
Treatment plan evaluation starts with checking all axial slices for hot or cold dose regions. The next step is the evaluation of dose-volume histograms (DVH), which are graphic expressions of the dose distribution over the volume of a target or OAR. The planning goals are recommended as follows: cover ≥95% of the breast or chest wall with the prescription dose, with a maximal point dose ≤110%, while OAR doses are limited to a contralateral breast dose ≤3.3 Gy, ≤20% of the ipsilateral lung receiving ≥20 Gy, ≤5% of the heart receiving ≥20 Gy for left-sided breast cancer and 0% of the heart receiving ≥25 Gy for right-sided patients, and a mean heart dose ≤5 Gy [22].
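As an illustration of how such DVH goals can be checked programmatically (a sketch of our own; the function and metric names are not from [22]):

```python
import numpy as np

def dvh_metrics(dose_gy, prescription_gy):
    """Check common DVH planning goals on a structure's voxel doses."""
    d = np.asarray(dose_gy, dtype=float)
    return {
        "V95% [% vol]": 100.0 * np.mean(d >= 0.95 * prescription_gy),
        "Dmax [% Rx]":  100.0 * d.max() / prescription_gy,
        "V20Gy [% vol]": 100.0 * np.mean(d >= 20.0),
        "mean dose [Gy]": float(d.mean()),
    }
```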
The transition from 2D to 3D has been promising, with dosimetric studies revealing an improvement. When conventional 2D and mono-isocentric 3D techniques were dosimetrically compared in terms of coverage and normal tissue doses, Guillert et al. found that homogeneity, regional lymphatic irradiation and heart and spinal cord protection were better with the mono-isocentric 3D technique [23]. Leite et al. dosimetrically assessed incidental irradiation of the internal mammary lymph nodes (IMLNs) using opposed tangential fields with 45-50.4 Gy conventional two-dimensional (2D) or 3DCRT techniques in their cohort of 80 breast cancer patients and documented a mean dose to the IMLNs of 7.93 Gy in the 2D cohort in comparison with 20.64 Gy in the 3D cohort [24]. Even though all dosimetric parameters were higher in the 3DCRT plans, coverage still needs to be improved. The results of the studies analyzed above demonstrate that further efforts have to be made to cover the target volume without increasing the dose to normal organs.
IMRT
The breast has been one of the most complex radiation delivery areas due to its complex anatomical geometry and the differing depths of the regional nodal areas. Two-dimensional RT and 3DCRT have been used safely and with high local control rates, but homogeneity and normal tissue doses were the two problematic topics until advanced, image-guided radiation delivery techniques were established. IMRT can be designed as a forward or inverse planning technique [25]. Forward planning is more common in clinical practice; it uses similar beam angles but replaces old-school wedges with manually created field-in-field segments that reduce the hot high-dose regions to optimize the dose distribution [26,27]. Inverse planning, by contrast, uses optimization algorithms to provide dose homogeneity and coverage [27].
The use of IMRT in breast cancer radiotherapy has been investigated in a few fundamental prospective clinical studies [28,29]. The first was the Royal Marsden study comparing 2D wedge-based, 3D and IMRT techniques in terms of acute and long-term side effects. The primary end point was objective change in breast appearance based on serial photographs of 306 patients obtained before treatment and at 1-, 2- and 5-year follow-up. Patients in the conventional treatment arm were 1.7 times more likely to have a change in breast appearance than patients in the IMRT arm, suggesting that minimizing dose inhomogeneity in the breast reduces late adverse effects, whereas there were no significant differences in patient-reported breast discomfort or quality of life between the 2D and IMRT arms [28]. A second randomized trial, by the Canadian group, supported these findings and concluded that 4-7-segment IMRT decreased moist desquamation rates, which were also related to breast cup size [30]. A third prospective trial, from Cambridge, applied selective forward-IMRT planning to patients whose inhomogeneity exceeded 107% with standard planning and concluded that improved plan parameters were obtained with forward IMRT [29]. Dosimetrically, the reduction in surface doses using the IMRT technique has been shown to be almost 20%, and this translated into a reduction in acute skin side effects from 52 to 39% in clinical experience, without compromising locoregional control [26]. All pertinent studies have supported the value of treating early breast cancer with IMRT, which provides fewer acute skin toxicities and thereby better long-term cosmesis [31,32].
The next question was whether a more homogeneous dose distribution would translate into a survival advantage compared to conventional 2D or 3DCRT. In the study by Yang et al. [33], the less frequent acute skin toxicity achieved by IMRT did not translate into a significant decrease in late toxicity rates at follow-up.
IMRT can add benefit when hypofractionation is prescribed. Hardee et al. compared the toxicity of patients treated according to the Canadian hypofractionation regimen (40 patients with 3DCRT and 57 with IMRT) [34] and demonstrated that IMRT reduced the maximum dose (median Dmax, 109.96% for 3DCRT vs. 107.28% for IMRT; p < 0.0001) and improved median dose homogeneity in comparison with 3DCRT. In addition, grade 2 dermatitis decreased from 13% in the 3DCRT group to 2% in the IMRT group, and decreased rates of acute pruritus and grade 2-3 subacute hyperpigmentation were noted in the IMRT group [34].
The use of more sophisticated treatment techniques is even more critical for organ-at-risk (lung and heart) doses in the more complex treatment fields of locally advanced breast cancer patients. A dosimetric study by Lohr et al. evaluated the effect of IMRT on cardiac doses compared to 3DCRT in a CT data set of 14 patients [35]. Plans were generated with two conformal beam angles chosen to minimize heart and lung doses for 3DCRT, and with nine coplanar beams (0°-335°, 25° apart) over the left hemithorax for IMRT [35]. IMRT provided superior dosimetric parameters for the maximal heart dose and the V30 and V40 of the heart and left ventricle, except for the mean and median heart doses, which increased from 6.8 to 8.5 Gy and from 1.02 to 2.77 Gy, respectively. In light of these results, Lohr et al. stated that the mean risk of excess cardiac mortality significantly decreased from 6.03 to 0.25% according to their relative seriality model [35].
Conventional regional nodal irradiation was known to deliver inadequate homogeneity and is usually a challenge depending on the patient's geometry, the location close to normal organs and the patient-dependent variation in depth [3,36]. In a dosimetric study, three-field, four-field, CT-based 3D and forward IMRT treatment options were compared, and superior nodal coverage was achieved by both the CT-based 3D and IMRT techniques, despite the fact that the contralateral breast and ipsilateral lung V5 and V20 doses increased with 3-4-field IMRT [3]. The recent rotational form of IMRT, volumetric arc therapy, has also been studied dosimetrically for locally advanced breast cancer patients requiring regional lymph node irradiation, with conflicting results [37,38]. Ma et al. replanned left-sided, locally advanced patients with 3DCRT field-in-field, five-field IMRT (two tangents, two anterior and one supraclavicular field) and two coplanar partial-arc VMAT to a prescription dose of 50 Gy [37]; the planning goals for the PTV were D95 (95% of the PTV receiving the prescription dose or higher) = 50 Gy and V47.5Gy ≥ 95%, with the V53.5Gy hot-spot volume minimized. Both the 5F-IMRT and 2P-VMAT plans demonstrated comparable PTV coverage (V95%), hot-spot areas (V110%) and conformity (all p > 0.05), which were significantly superior to 3DCRT-FinF, and the 5F-IMRT plans delivered significantly less heart and left lung dose than 2P-VMAT (all p < 0.05); therefore, Ma et al. concluded that 5F-IMRT has dosimetric advantages over the other two techniques in comprehensive breast irradiation for left-sided breast cancer, based on the balance between PTV coverage and normal organ sparing [37]. Tyran et al. evaluated arc therapy against a forward-planned multi-segment technique with a mono-isocenter setup for left-sided breast treatment involving lymph node irradiation, including the internal mammary chain [38]. VMAT improved PTV coverage and dose homogeneity but distributed low doses to a larger volume, which blurred the clinical benefits. Another preclinical study revealed that VMAT achieved similar PTV coverage and sparing of organs at risk, with fewer monitor units and a shorter delivery time than cIMRT with conventional modified wide-tangent (MWT) techniques for locoregional radiotherapy of breast cancer [39]. Based on these conflicting dosimetric studies, and without any published clinical study, no general recommendation for VMAT can be drawn for daily clinical practice, leaving the decision to the institution, based on the planner's experience, expectations and the required quality assurance.
Forward IMRT in particular, using tangential beam angles and creating multiple segments, can be accepted as a standard approach in clinical practice in view of acute toxicity [40,41]. The published literature on forward or inverse IMRT use in clinical breast cancer practice has mainly focused on toxicities and has short follow-up times. In the Canadian guidelines, based on similar local control and overall survival results, IMRT has not been recommended over tangential radiotherapy field design [42]. Of course, the cost of using new technologies needs to be considered if they only reduce the toxicity profile of treatment. In the USA, a systematic analysis of Medicare reimbursement data during 2012-2015 for prostate, anal, gynecological and head and neck cancers reported that IMRT was more costly than 3DCRT by approximately $12,834 per patient, a difference that can reach $19,113, and breast IMRT was named the least expensive IMRT owing to its less complex structure compared with a head and neck workload [43].
Tomotherapy
Lately, an innovative form of IMRT has been developed as a combination of helical IMRT with CT image guidance at the University of Wisconsin-Madison, named TomoTherapy® Hi•Art® [44]. A small megavoltage X-ray source was mounted in a geometry analogous to that of a CT X-ray source; this geometry makes it possible to deliver treatment using a 360° rotation of the CT gantry while the couch moves the patient slowly through the center of the ring, with the megavoltage linear accelerator mounted on the gantry ring directing the beam in a spiral fashion at a slightly different plane with each gantry rotation. TomoTherapy Hi•Art can also perform a quick CT scan before each treatment for image guidance, as in modern linear accelerators [45].
TomoTherapy has been used to treat sites other than the breast, such as the prostate, brain, head and neck and lung [44]. When considered for breast cancer treatment, the helical tomotherapy format sounds unsuitable, as the use of all gantry angles delivers low doses to areas such as the contralateral lung and breast, in comparison with a conventional standard tangential field design, which would deliver only a scatter dose to these organs. The starting point of the clinical experience with helical tomotherapy for breast cancer was the treatment of complicated case scenarios, such as bilateral breast cancer requiring irradiation of both breasts/chest walls and the regional nodes. Kaidar-Person et al. reviewed nine cases treated with helical tomotherapy for breast and regional nodal irradiation at their institute over a 5-year period [46]. The average lung V20, lung V5 and mean heart dose were 29%, 66% and 20 Gy, respectively. Clinically significant acute toxicity was observed, including dysphagia (5/9), fatigue (4/9), nausea and weight loss (1/9) and skin desquamation (9/9) [46]. Goddu et al. also evaluated the practicability of helical tomotherapy for locally advanced left-sided breast cancer in a dosimetric planning study on 10 CT data sets, comparing a multifield three-dimensional technique with tomotherapy treatment planning for a 50.4 Gy dose [47], and found that tomotherapy increased the minimal dose to the planning target volume and improved dose homogeneity. The mean percentage of the left lung volume receiving 20 Gy decreased from 32.6% to 17.6 ± 3.5% in the tomotherapy plans, while lower dose levels increased, with V5 rising from 25 to 46%. The same observation held for the heart: tomotherapy decreased V35Gy from 5.6 to 2.2%, with an increase in the mean heart dose from 7.5 to 12.2 Gy [47]. These dosimetric studies confirmed that tomotherapy plans provide better dose conformity and homogeneity than three-dimensional radiotherapy, while the disadvantage of tomotherapy appears to be the low-dose bath and the higher low-dose parameters for normal tissue, with an unpredictable long-term effect. In a case presentation from the Institut Curie, a comparison of 3DCRT in dorsal decubitus and 3DCRT in lateral isocentric decubitus with a tomotherapy plan for a T2N0M0 breast cancer patient revealed that the tomotherapy plan was preferred, as it could deliver optimal coverage to the planning target volumes while also keeping the doses to the patient's heart and lungs tolerable [48].
Using the tomotherapy unit in fixed gantry positions, with the beam intensity modulated by the micro-collimators as the patient is moved through a stationary gantry, could be the best approach in breast cancer treatment. This design can limit the low-dose bath effect and creates an almost tangential approach. This form of tomotherapy has been used by O'Donnell et al., who presented their case solutions for bilateral disease, left breast irradiation, pectus excavatum, a prominent contralateral prosthesis and internal mammary chain disease [49]. Their planning results, with a more limited number and range of beam angles than the standard helical tomotherapy technique, showed better conformity of treatment with improved coverage of the planning target volume, including the regional nodes, without field-junction problems [49].
The two major concerns with tomotherapy, as with IMRT and VMAT, are the more time-consuming planning and quality assurance compared to standard breast irradiation, and the increasing low-dose 'bath' as a major concern for late oncogenesis. Published comparative studies of conformal radiotherapy and IMRT have generally revealed better target volume coverage and organ-at-risk dose reductions but a worse risk of secondary cancer induction, based on the increased out-of-field leakage radiation associated with the higher number of fields and monitor units in IMRT plans; the overall estimated lifetime attributable risk of radiation-induced cancer was lower with 3DCRT than with IMRT or VMAT [50,51]. In a comparison of five treatment modalities in breast cancer patients, including tomotherapy, 3D conformal radiotherapy, field-in-field, IMRT and VMAT, the tomotherapy plans provided better dose homogeneity in the target volume, while the IMRT and VMAT plans achieved better dose coverage and conformity; the V20Gy of the ipsilateral lung was lowest in the single-isocentric IMRT plan, followed by the 3-4-arc VMAT, 3DCRT, tomotherapy and field-in-field plans, and the V10Gy was highest for the VMAT plan among the five modalities [52]. Keeping in mind that the lifetime attributable risk of secondary cancers depends on the organ's distance from the primary beam and the modality used, the risks of secondary malignancies expected in the ipsilateral lung, thyroid, contralateral lung and contralateral breast were found to be highest for the VMAT plans, followed by the IMRT plans [34]; remarkably, the risk with tomotherapy was comparable to or lower than that of the 3DCRT and field-in-field plans [52]. This study clarified one of the major concerns about tomotherapy and may encourage its more common use in breast cancer treatment.
Proton therapy
Proton radiation is a form of particle radiation capable of depositing therapeutic dose at a fixed depth while sparing the tissues beyond the target. Although proton therapy is prescribed in fractions similar to photons, its relative biological effectiveness is higher than that of photons (1.1) [53]. The use of protons has been evaluated primarily for tumors requiring high doses or located in close proximity to critical structures, such as prostate cancer, brain tumors and childhood cancers. Despite the dosimetric advantages, the extensive cost of equipment and maintenance has been an important barrier to protons becoming widespread in clinical use. Nowadays, 61 centers are operating worldwide, and by 2020 the estimated number of operational proton radiotherapy centers will be 91 [54]. Clinically, protons have limited use in breast cancer, although they have an exclusive capability to achieve full coverage of the breast or chest wall with a rapid fall-off of dose beyond the target, which would be a great contribution against acute and late cardiopulmonary toxicities. Hence, the largest body of data with longer follow-up concerns accelerated partial breast irradiation (APBI).
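As a simple worked illustration of the RBE weighting mentioned above (using the generic proton RBE of 1.1):

$$ D_{\mathrm{RBE}} = 1.1 \times D_{\mathrm{physical}}, \qquad \text{so a prescription of } 50.4\ \mathrm{Gy(RBE)} \text{ corresponds to} \approx 45.8\ \mathrm{Gy} \text{ physical proton dose}. $$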
Galland-Girodet et al. compared photon-based and proton-based APBI in a phase 1 study; the 7-year ipsilateral breast recurrence rates were 11 and 4%, respectively. Physician-assessed overall cosmesis was good or excellent for 62% of proton patients, compared with 94% of photon patients, owing to more skin toxicities such as telangiectasia, pigmentation changes, fibrosis and patchy atrophy [55]. Loma Linda Medical Center has the largest proton-based APBI experience, including 100 patients treated with 40 Gy (RBE) in 10 daily fractions, with patient- and physician-reported cosmesis, tumor recurrence and dermatitis rates of 90, 3 and 62% at 5 years, respectively [56]. Proton-based APBI is therefore accepted as a noninferior treatment option for early-stage breast cancer patients.
A few single-center case series, with short follow-up periods, have presented the use of protons for treating the peripheral lymphatics, especially in locally advanced breast cancer. In a dosimetric comparison of protons combined with 3DCRT versus 3DCRT (photon + electron) and IMRT, protons improved coverage and decreased dose exposure to normal tissue adjacent to the target [57]. The first clinical report, from Massachusetts General Hospital, comprised 12 locally advanced breast cancer patients; the prospective trial was based on a dosimetric comparison of 11 patients' plans using protons, partially wide tangent photon fields (PWTF) and photon/electron (P/E) fields. Proton therapy achieved superior coverage with a more homogeneous plan compared with PWTF and P/E fields, and considerable cardiac and pulmonary sparing was achieved with protons compared with PWTF and P/E [58]. They subsequently reported the feasibility of post-mastectomy proton radiation to a dose of 50.4 Gy [relative biological effectiveness (RBE)] to the chest wall and 45-50.4 Gy (RBE) to the regional lymphatics, with or without reconstruction. With at most grade 2 skin toxicity (75%) and no radiation pneumonitis reported, proton RT for post-mastectomy treatment was found to be feasible and well tolerated. They noted that the mean heart dose was as low as 0.44 Gy, the strongest argument for using protons for extensive chest-wall irradiation.
The second report, from Memorial Sloan Kettering and including 30 patients, supported the positive early-toxicity and normal-tissue-sparing results of the previous literature [59]. They used uniform scanning beams with anterior orientation for delivery. The supraclavicular and chest-wall fields were matched anteriorly, and a set of beams with the same orientation was shifted 1 cm superiorly/inferiorly for feathering to minimize hot spots. Similar to the previous report, the mean heart dose was 1 Gy (RBE) and the grade 2 skin toxicity rate was 71.4%; 29% of patients experienced moist desquamation [59]. Uniform scanning proton therapy provides 100% dose at the skin without using a bolus in post-mastectomy patients. This effect depends on the technique: selective skin sparing can be obtained with pencil beam scanning, which has the advantage of proximal range modulation.
The University of Florida recently published a prospective pilot study including 18 women (stage IIA-IIIB; 10 patients treated with proton therapy, 8 with a proton-photon combination) requiring comprehensive breast irradiation [60]. Proton therapy was shown to improve target coverage of the internal mammary nodes and level 2 axilla, while the median cardiac V5 was 0.6% with proton therapy versus 16.3% with conventional radiation (p < 0.0001). At a median follow-up of 20 months, the only grade 3 toxicity was dermatitis, which developed in four patients (22%) [60].
The most important advantage of proton treatment is the near-absence of the 'low-dose bath' seen with IMRT techniques, since high integral doses to the heart, lungs and coronary arteries may be associated with increased long-term complications and secondary cancers, especially in young patients. This rationale for using proton therapy in breast cancer treatment has made it an attractive research area.
Another repeatedly cited concern about the use of proton radiation is cost. Although the dosimetry offers advantageous dose distribution and superior normal organ sparing compared with standard RT, clearly better long-term clinical results are also needed to justify the higher cost of proton therapy. Lundkvist et al. performed a cost analysis demonstrating that proton therapy could be cost-effective if the main aim is heart sparing [61]. In conclusion, proton radiotherapy to the chest wall/breast and regional lymphatics has been shown to provide excellent coverage with improved sparing of adjacent normal structures, but until the cost of proton therapy decreases, eligible patients must be selected carefully.
Hypofractionation
Conventionally, radiation treatment after breast surgery has been prescribed to the whole breast in total doses of 45-50 Gy delivered in 1.8- to 2-Gy daily fractions, in many cases followed by an additional 10- to 15-Gy boost to the tumor bed, for a total of 5-6 weeks of daily treatment. Cost and travel distance to radiotherapy centers over multiple weeks are the best-known barriers to the administration of radiotherapy. One solution is to use larger daily fractions to shorten the total treatment time. Radiobiologic studies have proposed that breast cancer cells have an alpha-beta ratio similar to that of late-reacting irradiated normal tissues [62]. The Royal Marsden Hospital/Gloucestershire Oncology Centre trial, based on an alpha-beta ratio of almost 4 Gy and aiming for equivalent tumor control with shorter hypofractionated schedules at a lower total dose, randomized 1410 women with invasive breast cancer to receive 50 Gy in 25 fractions, 39 Gy in 13 fractions, or 42.9 Gy in 13 fractions, all given over 5 weeks [63,64]. After a median follow-up of 9.7 years, the 10-year risk of ipsilateral tumor relapse was 12.1% in the 50 Gy group, 14.8% in the 39 Gy group, and 9.6% in the 42.9 Gy group [64]. Hypofractionation schemes were confirmed to be safe and effective, encouraging shorter courses for early-stage breast cancer patients without compromising local recurrence or survival end points.
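The equivalence these schedules aim for can be checked with the linear-quadratic biologically effective dose (BED); the arithmetic below is an illustrative worked example under the trial's assumed alpha-beta ratio of about 4 Gy, not trial-reported values:

```latex
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right):\qquad
\begin{aligned}
50\ \mathrm{Gy}/25\ \mathrm{fx}\ (d = 2\ \mathrm{Gy}):&\quad 50\left(1+\tfrac{2}{4}\right) = 75\ \mathrm{Gy}_4\\
42.9\ \mathrm{Gy}/13\ \mathrm{fx}\ (d = 3.3\ \mathrm{Gy}):&\quad 42.9\left(1+\tfrac{3.3}{4}\right) \approx 78.3\ \mathrm{Gy}_4\\
39\ \mathrm{Gy}/13\ \mathrm{fx}\ (d = 3\ \mathrm{Gy}):&\quad 39\left(1+\tfrac{3}{4}\right) \approx 68.3\ \mathrm{Gy}_4
\end{aligned}
```

The two hypofractionated arms bracket the conventional arm, consistent with the reported finding that 42.9 Gy gave slightly better, and 39 Gy slightly worse, tumor control than 50 Gy.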
Hypofractionated regimens of whole breast irradiation have been studied by Canadian and British radiation oncology groups. The Canadian trial enrolled 1234 women with invasive, lymph node-negative breast cancer treated by lumpectomy with negative pathologic margins and small to moderate breast size (breast separation ≤25 cm), randomizing them to hypofractionated whole breast irradiation of 42.5 Gy in 16 fractions over 22 days versus standard whole breast irradiation of 50 Gy in 25 fractions over 35 days [65]. Acute toxicity was similar between the arms, with grade 2 or 3 radiation skin toxicity observed in only 3% of patients in each arm. Long-term outcomes were also comparable between treatment schemas: the 10-year risk of local recurrence was 6.2% in the hypofractionated arm and 6.7% in the standard arm, and the rate of good or excellent cosmesis was 69.8% in the hypofractionated arm and 71.3% in the standard arm [65]. The subsequent supporting randomized trial by the START Trialists' Group, START-A, enrolled 2236 patients to either standard fractionated whole breast irradiation or hypofractionated schedules of 41.6 or 39 Gy in 13 fractions over 5 weeks [66,67]. Disease-free survival and overall survival were similar in all arms, except that more moderate or marked skin toxicities, such as breast induration, telangiectasia and breast edema, were recorded at 39 Gy [66,67]. The START-B trial randomized 2215 women (pT1-3a pN0-1 M0) to 50 Gy in 25 fractions over 5 weeks versus 40 Gy in 15 fractions over 3 weeks; after a median follow-up of 6.0 years, it reported lower locoregional tumor relapse (2.2 vs. 3.3%) and lower rates of late adverse effects, by photographic and patient assessments at 5 years, in the accelerated hypofractionated arm [68]. Together, the START trials suggest that the 40 Gy in 15 fractions schema, with fewer fractions of larger dose per fraction, is at least as safe and effective as the historical standard regimen (50 Gy in 25 fractions) for women after primary surgery for early breast cancer [68].
An unplanned subgroup analysis of the Ontario study suggested that the hypofractionated regimen was less effective in patients with high-grade tumors, with a 10-year cumulative recurrence incidence of 4.7% for standard RT versus 15.6% for hypofractionated RT in high-grade tumors [65]. In contrast, the START-A and B studies did not demonstrate a significant interaction between tumor grade and outcome [67]. The proportions of patients with high-grade tumors were 19, 28 and 23% in the Canadian, START-A and START-B trials, respectively, implying numbers insufficient for firm conclusions, and these analyses were not powered for such a hypothesis. Therefore, the American Society for Radiation Oncology (ASTRO) task force could not comfortably advise the use of HF-WBI for women with high-grade tumors until further studies clarified the outcome [69]. Bane et al. re-examined the molecular and pathological features of 989 patients whose tumor blocks were available and thoroughly checked the association between tumor classification and local recurrence rates [70]. The 10-year cumulative incidence was 4.5% for luminal A and basal-like, 7.9% for luminal B and 16.9% for HER-2-enriched tumors (p < 0.01); however, neither tumor grade, molecular subtype nor hypoxia predicted any correlation between local recurrence and hypofractionation. Accordingly, hypofractionated radiotherapy is now considered an appropriate first treatment option for all grades and molecular subtypes of breast cancer; ASTRO published an evidence-based guideline for the use of hypofractionation and for whom to prescribe it in clinical practice [69]. The routinely suitable group for hypofractionation was defined as follows: age older than 50 years, stage T1-T2, no use of chemotherapy and central axis dose of 93-107%. The recommended schedules were 42.5 Gy in 16 fractions (Canadian trial), 41.6 Gy in 13 fractions over 5 weeks (START-A) and 40 Gy in 15 fractions over 3 weeks (START-B). As this clinical approach spread throughout the radiation oncology world, the suitability criteria expanded, and nowadays the scheme is considered suitable for any ductal carcinoma in situ or T1-T2 N0 invasive ductal carcinoma in patients older than 40 years, without further restriction. For regional lymph node irradiation, retrospective analyses report low toxicity rates with hypofractionation, including brachial plexopathy.
There is increasing attention to more intensified hypofractionation in the treatment of breast cancer, grounded in the randomized UK FAST Trial, whose first results were published in 2011 [71]. The trial compared 50 Gy in 25 fractions against 30 Gy in 5 fractions or 28.5 Gy in 5 fractions, all delivered over 5 weeks. Based on adverse effects in the breast at a 3-year median follow-up, 28.5 Gy in 5 fractions was comparable to 50 Gy in 25 fractions and significantly better than the 30 Gy in 5 fractions schema [71]. Further studies are ongoing to build on these findings, including questions assessing the value of a concomitant boost with IMRT.
Accelerated partial breast radiotherapy
The rationale for partial breast irradiation (PBI) is that true recurrences occur in the quadrant of the primary tumor, and whole breast radiotherapy does not appear to prevent the development of new primary cancers elsewhere in the breast. Pathological studies of surgical specimens have revealed that residual tumor is confined to within 15 mm of the index lesion in more than 90% of cases [72]. PBI is limited-volume irradiation of the breast tissue covering just the tumor bed with a margin. It delivers a larger fraction dose in a shorter total treatment time, reducing the overall radiotherapy course. Today, this technique can be applied by intracavitary brachytherapy (e.g., MammoSite), interstitial brachytherapy, intraoperative techniques using electrons or 50-kVp X-rays, or external beam radiotherapy.
To select proper patients for these modalities, three consensus groups have been described, with only minor differences among their criteria. The American Society for Radiation Oncology (ASTRO) recommendations are divided into three categories: 'suitable' [≥60 years, tumor size ≤2 cm, pN0(i+/i−), no LVSI, invasive ductal carcinoma (IDC), negative margins, unifocal], 'cautionary' [50-59 years old, tumor size 2.1-3.0 cm, limited/focal LVSI, invasive lobular carcinoma (ILC), close margin (<2 mm), unifocal, DCIS ≤3 cm] and 'unsuitable' [≤50 years, tumor size ≥3 cm, DCIS ≥3 cm, positive margin, multifocal, LVSI (+), ≥pN1]. The American Society of Breast Surgeons (ASBS) criteria are: age 45 years or older for invasive cancer and 50 years or older for DCIS; invasive carcinoma or ductal carcinoma in situ; total tumor size ≤3 cm; negative microscopic surgical margins; and pN0 [73]. The American Brachytherapy Society (ABS) APBI criteria are based on the clinician's review of clinical and pathologic factors [age ≥50 years, tumor size ≤3 cm, all invasive subtypes and ductal carcinoma in situ, negative surgical margins, no LVSI and negative nodal status] [74]. To clarify patient selection for APBI based on clinicopathological features, a nomogram predicting locoregional recurrence in patients treated with accelerated partial breast irradiation has been developed. The nomogram was established on the results of a total of 2000 breasts (1990 women) treated with APBI at William Beaumont Hospital (n = 551) and in the American Society of Breast Surgeons MammoSite Registry Trial (n = 1449). Almost all APBI types were represented (multiplanar interstitial catheters, 98; balloon-based brachytherapy, 1689; three-dimensional conformal radiation therapy, 213). Univariate analysis found that age <50 years, pre-/perimenopausal status, close/positive margins, estrogen receptor negativity and high grade were associated with a higher frequency of locoregional recurrence [75].
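To make the logic of these layered categories concrete, the sketch below encodes a heavily simplified version of the ASTRO groups quoted above. It is illustrative only, not a clinical decision tool: the function name, arguments and collapsed thresholds are our own simplifications, and several ASTRO factors (receptor status, DCIS size, focality) are omitted.

```python
def astro_apbi_group(age, tumor_cm, margins_negative, lvsi, n_positive, histology):
    """Very simplified sketch of the ASTRO APBI suitability categories;
    illustrative only, not a clinical decision tool."""
    # 'Unsuitable' red flags take precedence over everything else
    if (age <= 50 or tumor_cm >= 3 or not margins_negative
            or lvsi == "extensive" or n_positive >= 1):
        return "unsuitable"
    # 'Suitable' requires every favorable feature simultaneously
    if age >= 60 and tumor_cm <= 2 and histology == "IDC" and lvsi == "none":
        return "suitable"
    # Everything in between falls into 'cautionary'
    return "cautionary"

print(astro_apbi_group(age=65, tumor_cm=1.5, margins_negative=True,
                       lvsi="none", n_positive=0, histology="IDC"))  # suitable
```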
Interstitial brachytherapy was the first technique used to treat only part of the breast. At that time, electron beam therapy was not available, so boosts were delivered to the tumor bed using low dose rate (LDR) interstitial brachytherapy. With the advent of high-energy linear accelerators, electron beam boosts largely replaced interstitial brachytherapy, offering better dose homogeneity and improved overall cosmesis as experience accumulated [76]. To date, numerous single-arm and some randomized studies of multi-catheter interstitial brachytherapy have been published [77][78][79][80]. Commonly, these studies registered patients with early-stage, low-risk invasive or in situ carcinoma, T1 or T2, with negative surgical margins, some allowing up to three positive axillary lymph nodes (N1). Interstitial catheters were placed with a free-hand technique or a breast template, guided by surgical clips, between 4 and 8 weeks after surgery. Earlier studies tended to use LDR or pulsed dose rate (PDR) sources, but the majority of more recent series have used iridium-192 (192Ir) high dose rate (HDR) brachytherapy. Generally, the target volume has been defined as the tumor bed plus 1-2 cm, treated to 45-50 Gy with LDR or 30-36 Gy (in twice-daily fractions) with HDR. Local recurrence rates ranged from 0 to 8.9% [77,[79][80][81]. Recurrence rates were generally low, except in the Guy's Hospital experience, which reported an ipsilateral breast tumor recurrence rate of 18% [82]. GEC-ESTRO published the 5-year results of a randomized trial comparing interstitial brachytherapy with whole breast radiotherapy for patients aged 40 years or older with small T1-2 N0-N1mi M0 tumors (less than 3 cm), negative margins and no lymphovascular invasion (LVI), excluding women with multifocal tumors. The trial was conducted in 16 centers across Europe. Planning and dose limits were as follows: maximum skin dose less than 70% of the prescribed dose; dose nonuniformity ratio (V150/V100) below 0.35; and 100% of the prescribed dose covering at least 90% of the target volume (coverage index ≥0.9). APBI was delivered to a total dose of 32.0 Gy in eight fractions (8 × 4.0 Gy) or 30.3 Gy in seven fractions (7 × 4.3 Gy), given twice a day, for HDR brachytherapy; a total dose of 50 Gy with pulses of 0.60-0.80 Gy/h (one pulse per hour, 24 h/day) was given with PDR brachytherapy. Analysis of 1184 patients with low-risk invasive or ductal carcinoma in situ treated with breast-conserving surgery demonstrated a cumulative incidence of local recurrence of 1.44% with APBI and 0.92% with whole breast irradiation. The 5-year risk of grade 2-3 late skin side effects was 3.2% with APBI versus 5.7% with whole breast irradiation, and grade 3 fibrosis at 5 years was 0.2% with whole breast irradiation and 0% with APBI. Polgar et al. randomized 258 patients with pT1 pN0-1mi M0, grade 1 or 2 invasive breast cancer (unifocal tumors, tumor size less than 20 mm, clinically or pathologically N0 or with a single microscopic nodal metastasis) after wide local excision with negative pathological margins (≥2 mm) to receive either 50 Gy whole breast irradiation (n = 130), APBI with multi-catheter HDR brachytherapy (n = 88), or APBI with electron beam irradiation (n = 40).
Local recurrence at 10 years was 5.9% after APBI and 5.1% after whole breast irradiation (p = 0.767) at a median follow-up of 10.2 years. Excellent-to-good cosmetic results were seen in 81% with APBI and 63% with whole breast irradiation (p < 0.01) [77]. The literature confirms that overall cosmesis scores are good to excellent for the majority of patients, with low rates of late complications [77,80,83]. Recently, the phase 2 NRG Oncology/Radiation Therapy Oncology Group 9517 study published 10-year oncological outcomes of accelerated partial breast irradiation using multi-catheter brachytherapy in 98 patients with stage I/II unifocal breast cancer (tumor size <3 cm, negative surgical margins and 0-3 positive axillary nodes without extracapsular extension). The high dose rate group received 34 Gy in 10 twice-daily fractions over 5 days, and the low dose rate (LDR) group received 45 Gy over 3.5-5 days. Only five regional recurrences were observed. The 10-year disease-free survival, overall survival and contralateral breast event rates were 69.8, 78.0 and 4.2%, respectively [84]. Despite the encouraging results and many years of experience, interstitial brachytherapy has remained limited to selected institutions owing to its requirements for a dedicated team, experience, skill and specific equipment.
External beam radiotherapy is the other option for APBI administration, with the advantages of a noninvasive nature, widespread availability of the required resources, and knowledge of the final pathology before treatment planning. External APBI is most frequently administered as a 38.5-Gy regimen divided into 10 fractions given twice per day for 5 days. Rodríguez et al. reported 5-year outcomes of 102 patients with pT1-2 pN0 M0 invasive ductal carcinoma, tumor size 3 cm or less, negative margins and grade 1 or 2 histology, randomized to whole breast irradiation (48 Gy, with or without boost) or APBI using three-dimensional conformal external beam radiotherapy (37.5 Gy in 3.75-Gy fractions) [85]. Beam weights were manually optimized to cover the PTV with the 95% isodose line while maintaining a hot spot of <105%. For imaging, portal images of each beam and an orthogonal (anteroposterior) image were obtained for the first and second fractions. At a median follow-up of 5 years, besides there being no local recurrences, APBI also reduced acute side effects and radiation doses to healthy tissues compared with WBI. Physician assessment showed that >75% of patients in the APBI arm had excellent or good cosmesis, similar to the whole breast group, and these outcomes had not changed at follow-up [85].
An interim analysis of the RAPID (randomized trial of APBI) trial was important in terms of cosmetic results; 1108 patients (invasive ductal carcinoma or ductal carcinoma in situ with tumors <3 cm, negative margins and no involved axillary nodes) were randomized to either 3D external beam APBI or WBRT. The RAPID trial used 3DCRT delivering 38.5 Gy in 10 fractions over 5-8 days (with a minimum 6-h gap between fractions given on the same day) and two fractionation schemas for WBRT: 50 Gy in 25 fractions or 42.5 Gy in 16 fractions. Baseline post-treatment nurse-assessed adverse cosmesis was 19% in the APBI arm and 17% in the WBRT arm; at the 3-year evaluation, this rate increased to 29% in the APBI arm and remained stable at 17% for WBRT [86]. Worsening cosmetic results had previously been reported in single-institution series from the University of Michigan and Tufts University. Despite the good cosmetic outcomes in non-randomized multicenter studies, external beam-based APBI has been used with caution in practice [87][88][89][90]. The National Surgical Adjuvant Breast and Bowel Project (NSABP) B-39/Radiation Therapy Oncology Group (RTOG) 0413 trial, which randomized 3000 patients to whole breast irradiation or partial breast irradiation (PBI), has finished recruiting, with completion expected in April 2020. As most patients in the PBI arm received 3DCRT, the results should clarify the cosmetic outcomes and the routine use of external beam APBI [91].
Catheter-based radiation therapy (brachytherapy) has been performed with MammoSite™ (Hologic, Marlborough, MA, USA) as the first balloon-based catheter, followed by the single- and multi-lumen catheters Contura® and SAVI™, in historical order. These catheters come in different sizes and shapes. All share the same placement protocols: insertion can be performed either at the time of lumpectomy or as a postponed procedure up to 2-6 weeks after the operation. Ultrasound guidance is key to detecting the seroma and guiding catheter insertion along the longest axis of the cavity. The device can be inserted through the surgical scar, or a separate incision pathway can be chosen depending on ultrasound guidance or the patient's cavity evaluation CT obtained at the radiation oncology clinic before placement. This cavity evaluation CT also serves to determine the proper catheter size. If the APBI decision has already been made before surgery, a 'placer' can be put in the cavity and inflated with sterile saline to a diameter of 4.0-5.0 cm as described above; after the final pathology is evaluated, it can be replaced by a catheter of the selected size. After insertion, a new CT scan is obtained to assess the conformance of the balloon to the cavity and the presence of air or fluid gaps. A ratio of air or fluid to balloon surface of less than 10% is usually acceptable, and for single-lumen catheters a balloon-skin distance of 5 mm or greater is also required. The lumpectomy cavity is then delineated and expanded by 1 cm to define the PTV. The most commonly prescribed dose is 3.4 Gy twice daily to a total of 34 Gy. Recommended dose constraints and contouring recommendations are given in Table 1. The placement and position of the catheter should be checked before each treatment.
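The geometric acceptability checks just described reduce to two simple tests; the sketch below encodes them with the thresholds quoted in the text (the function and variable names are hypothetical, and real QA procedures involve more than this):

```python
def balloon_placement_acceptable(gap_fraction, balloon_skin_mm, single_lumen):
    """Sketch of the post-insertion CT checks described above;
    thresholds are those quoted in the text, not a vendor QA spec."""
    # Air/fluid gap should be <10% of the balloon surface
    if gap_fraction >= 0.10:
        return False
    # Single-lumen catheters additionally need >=5 mm balloon-skin distance
    if single_lumen and balloon_skin_mm < 5:
        return False
    return True

# Prescription bookkeeping: 3.4 Gy twice daily, 10 fractions, 34 Gy total
fractions, dose_per_fraction = 10, 3.4
assert fractions * dose_per_fraction == 34.0
print(balloon_placement_acceptable(gap_fraction=0.05,
                                   balloon_skin_mm=6, single_lumen=True))  # True
```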
MammoSite was the first balloon-based single-lumen device developed; its major disadvantage is the minimum required skin-to-cavity distance of about 7 mm. Following further development, MammoSite was also released in a multi-lumen form similar to Contura and SAVI.
Contura™ (SenoRx, Inc., Aliso Viejo, CA, USA) is a similar balloon catheter that has multiple lumens within the balloon and also comes in different sizes to fit the cavity. The multiple lumens allow plan optimization for better normal tissue and skin sparing, meaning that skin-cavity distance is no longer decisive for patient selection, and permit more precise treatment planning. The other advantage of this catheter is its vacuum ports, which help remove fluid and air if needed.

The SAVI™ (Cianna Medical, Inc., Aliso Viejo, CA, USA) device has a multi-catheter body (6, 8 or 10 struts) in an elliptic shape. It has no balloon around the catheters and can be opened and closed like an umbrella, which helps it fit the tissue of the cavity; as it locks into the lumpectomy cavity, rotation and delivery problems are ruled out. ClearPath™ (Renata Medical, Irvine, CA, USA) is a single-entry multi-catheter device that allows both HDR- and LDR-based APBI treatment; if the patient carries LDR seeds placed in the ClearPath device, a fully shielded bra must be worn during the low dose rate APBI treatment.

Table 1. Recommendations for APBI contouring and DVH evaluation based on RTOG NSABP Protocol B-39 [91]:
- Dose-volume histogram analysis of target coverage should confirm that ≥90% of the prescribed dose covers ≥90% of the PTV_EVAL.
- The actual volume of tissue receiving 150% (V150) and 200% (V200) of the prescribed dose should be limited to ≤70 cc and ≤20 cc, respectively.
- Critical normal tissue DVHs should be within 5% of the specified values (uninvolved normal breast: ideally, <60% of the whole breast reference volume should receive ≥50% of the prescribed dose).
- The maximum skin dose at any point should be ≤145% of the prescription dose; ideally, the maximum skin dose is kept below 100% of the prescription, but if the balloon-skin distance is 5-7 mm, up to 145% of the prescribed dose is acceptable.
- Dose is delivered twice a day for a total of 10 treatments over a period of 5-10 days.

Axxent® (Xoft, Sunnyvale, CA, USA) is a novel electronic brachytherapy system developed to simplify the brachytherapy technique. It consists of a single-lumen balloon catheter housing a miniature X-ray source rather than an iridium seed; it does not require a high dose rate afterloader unit or a shielded vault, and it can be turned on and off such that it can be used in the office setting [92]. The balloon is radiolucent to improve visibility on breast radiographs and CT images. In a dosimetric evaluation, electronic brachytherapy plans were reported to provide comparable target coverage, larger high-dose regions, and significantly reduced dose to the ipsilateral breast and lungs as well as the heart compared with iridium-192 treatment plans [93]. The intersocietal Electronic Xoft Intersocietal Brachytherapy Trial (EXIBT) registry recruited 400 patients; at 1-year follow-up, breast infection had occurred in two (2.9%) patients and no tumor recurrences were reported. Cosmetic outcomes were excellent or good in 83.9-100% of evaluable patients at 1 month, 6 months and 1 year [94].
The MammoSite Registry of the American Society of Breast Surgeons has the largest number of patients treated with this device, with a median follow-up of 63.1 months. The registry included 1449 patients, with a 5-year actuarial IBTR rate of 3.8% and an axillary recurrence rate of 0.6%. Excellent/good cosmetic results at 60, 72 and 84 months were 91.3, 90.5 and 90.6%, respectively. The overall rates of fat necrosis, symptomatic seroma and infection remained low at 2.5, 13.4 and 9.6%, respectively, with few late toxicity events beyond 2 years. These results are comparable to the rates for whole breast irradiation and other forms of APBI. Mann et al. retrospectively examined the long-term results of 111 patients treated with MammoSite APBI and found an incidence of ipsilateral breast tumor recurrence of 2.7%. The incidence of ipsilateral axillary nodal recurrence was low as well (1.8%). An excellent-to-good cosmesis rate was achieved in 98.1% of patients. Cosmetic results paralleled the mean maximum skin dose: excellent, good and fair cosmesis corresponded to 88.9, 92.7 and 109.5% of the prescription dose, respectively [95]. These results were also supported by the Northwestern University prospective MammoSite study (n = 33), which noted 100% local control and good-to-excellent cosmetic results in 94% of patients [96]. Gitt et al. used MammoSite brachytherapy as a boost (15 Gy in 2.5-Gy fractions) after whole breast radiotherapy for carefully selected early-stage pT1-2, pN0-1, M0 disease; 107 patients were treated with breast-conserving therapy and adjuvant MammoSite brachytherapy followed by WBI (median 50.4 Gy).
In a short follow-up period of 21 months, no ipsilateral breast tumor recurrences were observed, with an acceptable toxicity profile of 28% asymptomatic and 10% symptomatic seroma within 90 days of treatment [97]. Another retrospective long-term single-institution series (n = 157) confirmed that the rate of ipsilateral breast recurrence was as low as 2.5% at a median follow-up of 5.5 years (range 0-10.0 years). Good-to-excellent cosmetic outcomes were achieved in 93.4% of patients, and a skin dose >100% significantly predicted the development of telangiectasia (50 vs. 14%, p < 0.0001) [98].
At the Mayo Clinic, a prospective protocol evaluated completing all locoregional treatment (surgery and APBI) within 10 days with acceptable complication rates and cosmesis. An intraoperative multi-lumen strut-based device was placed in 123 women [age 50 years or older with clinical T1, estrogen receptor-positive (ER+), sentinel lymph node (SLN)-negative invasive ductal cancer or pure ductal carcinoma in situ]. Of these patients, 110 (90%) underwent intraoperative catheter placement, whereas 13 did not owing to intraoperative pathology findings. The prescribed radiotherapy was completed within 5 days in 109 APBI patients (99%); for all patients, the overall duration was 9 days, with a 6% 30-day complication rate. The local recurrence rate was 1.8% (two patients), and excellent or good cosmesis was achieved in 88% of patients [99]. In a prospective evaluation of early toxicity in 132 patients treated with the strut-adjusted volume implant (SAVI) for early-stage breast cancer, SAVI was observed to be a safe treatment option, with one acute and three late skin infections (two grade 3), alongside grade 1 or 2 late toxicities of hyperpigmentation (44%), telangiectasia (0.8%), seroma (9%), fat necrosis (5%) and fibrosis (12%). The crude local recurrence rate was 4% at a median follow-up of 20 months [100]. It should be noted that the literature on catheters other than MammoSite mostly presents early results on feasibility and toxicity. Wobb et al. recently documented late side effects in 1034 patients treated with brachytherapy-based APBI (interstitial 40%, applicator-based 60%) or whole breast irradiation using intensity-modulated radiotherapy [101]: though brachytherapy-based APBI was associated with higher rates of ≥grade 2 seroma formation (14.4 vs. 2.9%, p < 0.001), telangiectasia (12.3 vs. 2.1%, p = 0.002) and symptomatic fat necrosis (10.2 vs. 3.6%, p < 0.001), there was no difference in rates of fair or poor cosmesis [101].
The use of partial breast irradiation for ductal carcinoma in situ was tested in a prospective multicenter trial of 41 patients (42 breasts) with eligibility criteria of DCIS confirmed by core needle biopsy, unicentric disease ≤3 cm on mammogram, and an estimated life expectancy of >5 years [102]; the mean tumor size was 0.82 cm, with comedo necrosis in 21.4% and estrogen receptor positivity in 52.4%. Abbott et al. documented four patients (9.8%) who developed an IBTR (all DCIS) outside the treatment field, with a mean time to recurrence of 3.2 years and an actuarial 5-year recurrence rate of 11.3%. Notably, all patients with recurrence had at least one normal mammogram after treatment and before recurrence. Even though all recurrences were DCIS and occurred outside the treatment field, prospective randomized trials must be awaited before routine use of APBI for DCIS can be recommended [102].
In a meta-analysis of nine randomized trials comparing APBI with whole breast radiotherapy, overall mortality was 4.9%; while no difference was observed in the proportion of breast cancer-related deaths, both non-breast-cancer mortality (difference of 1.1%, p = 0.023) and total mortality (difference of 1.3%, p = 0.05) were significantly lower in the PBI than the WBI cohorts, encouraging PBI in selected patients, with a 25% reduction in 5-year non-breast-cancer and overall mortality compared with WBI [103]. The most criticized study in APBI practice was the population-based retrospective analysis by Smith et al., based on Medicare billing codes rather than actual clinical outcomes, which defined the rate of mastectomy after APBI or whole breast radiotherapy [104]; it analyzed 6952 breast cancer patients treated with brachytherapy and 85,783 treated with whole breast radiotherapy, all older than 67 years. Subsequent mastectomy was required in more women treated with brachytherapy than with whole breast irradiation. Although single-institution results favored APBI, these Medicare-based data slowed the uptake of APBI, which nowadays is recommended mainly within prospective protocols.
Intraoperative radiotherapy (IORT) is the delivery of a single fraction of radiotherapy at the time of surgery, directed only at the tumor cavity. This can help reduce the long treatment duration for the patient, but in today's practice it remains expensive owing to additional staffing, workload and specific equipment requirements. The available methods of delivering IORT are low-energy X-ray systems, electron beam radiation therapy and high dose rate afterloaders.
The Intrabeam® device (Carl Zeiss, Oberkochen, Germany) is a low-energy X-ray IORT device with solid, rounded applicators in different sizes. After the lumpectomy is performed, tungsten-impregnated sheets are used to shield the wound, and the applicator is then fixed in the tumor cavity. A single 20-Gy dose is delivered at the surface of the applicator, decreasing to 5 Gy at a depth of 1 cm from the cavity. Treatment time varies from 20 to 40 min. Shielding is essential to reduce radiation scatter; operating room walls will often provide sufficient shielding for the low-energy X-rays, but measuring environmental radiation dose rates around the theatre is essential.
There are three commercially available mobile linear accelerators that can deliver electron beam radiation therapy: the Novac7® (Hitesys S.p.A., Aprilia, Latina, Italy), the Liac® (Sordina, Padova, Italy) and the Mobetron® (IntraOp Medical Inc., Sunnyvale, CA, USA). Both Novac7® and Liac® have been used in a phase III trial, the ELIOT trial. The irradiation procedure is completed in about 2 min, and the delivered dose is 21 Gy, with the depth of the 90% isodose ranging from 13 mm (3 MeV) to 24 mm (9 MeV). The breast tissue is mobilized over a lead/aluminum shield placed posteriorly to protect the chest wall and viscera. Because these systems deliver electrons, non-shielded operating rooms can be used, but the team has to leave the room while the radiation is delivered. Several single-institution studies of the feasibility and effectiveness of IORT are present in the literature, but only two phase III trials have been published: the targeted intraoperative radiotherapy-alone (TARGIT-A) trial and the electron intraoperative treatment (ELIOT) trial, with results at median follow-ups of 2.4 and 5.8 years, respectively. TARGIT-A is an international cohort of 3451 patients randomized to either whole breast radiotherapy (40-56 Gy ± 10-16 Gy boost) or Intrabeam®, with a single 20-Gy fraction prescribed to the surface of the applicator. Patients with clinical T1-T2 (≤3.5 cm), N0-1 invasive breast cancer were eligible if they were aged 45 years or older and suitable for wide local excision of invasive ductal carcinoma that was unifocal on conventional examination and imaging [105]. After pathological evaluation, if patients had adverse pathologic features, including LCIS, lymphovascular space invasion, positive nodal status or other parameters defined at each center, postoperative WBI was added and the APBI was counted as the boost. At a median follow-up of 2 years and 5 months, the local recurrence rate was 3.3% in the APBI group and 1.3% in the WBI group (p = 0.04). Interestingly, even though cases were selected carefully, local recurrence in patients treated with TARGIT as a second procedure by reopening the wound (n = 1143) was 5.4%, higher than with EBRT (1.7%). The authors explained the difference as possibly reflecting a delay in wound-fluid suppression of tumor cells, a delay of radiation, or a geometric miss when inserting the applicator post-surgery [106], and 'post-pathology' TARGIT by reopening the wound was not recommended. Furthermore, overall survival and distant metastasis rates were similar, with a low skin toxicity profile. There was no difference between APBI and WBRT in hematomas needing surgical aspiration, seromas needing more than three aspirations, infections requiring intravenous antibiotics or surgical intervention, or skin breakdown or delayed healing [105].
The ELIOT trial used intraoperative electrons as a single dose of 21 Gy prescribed to the 90% isodose depth, compared with 50 Gy of external beam radiation therapy, in 1305 patients aged 48 years or older presenting with tumors 2.5 cm or smaller. After tumor excision, the breast tissue was mobilized and a lead/aluminum shield was placed to protect the chest wall and underlying structures. The target breast tissue was rearranged over the shield, and an appropriately sized collimator (4-8 cm) was inserted. At a median follow-up of 5.8 years, the 5-year recurrence rate was 4.4% for ELIOT versus 0.4% for EBRT. For low-risk women, the 5-year IBTR was 1.7%. For patients with one or more high-risk features (tumor size, receptor status, nodal positivity and grade), the 5-year IBTR was 11.8% for the 178 women (30.4%) with one or more risk factors versus 1.7% for the 407 low-risk ELIOT women (69.6%) [107]. The proportion of ELIOT patients who could be classified in the ASTRO 'suitable' subgroup was 23%, and their ipsilateral breast recurrence rate was 1.5% at 5 years, similar to the whole breast group. The ELIOT results revealed low rates of skin and pulmonary damage [108]. There was no difference in pain, retraction or fibrosis, and overall survival was the same in the two arms. The applicator sizes used in the ELIOT trial are not specified, but it has been advised that, to guarantee uniform coverage of microscopic residual disease, the IOERT applicator diameter should be chosen at least 1.5-2 cm larger than the maximum tumor dimension [109]. Although these IORT-based APBI trials show some promising early results, the follow-up for ELIOT is short, especially given that breast cancer can recur many years later.
A Cochrane meta-analysis covering all types of APBI was published in 2016, comprising seven randomized trials that studied 7586 of the 8955 women enrolled [110]. Local recurrence-free survival was poorer for women receiving PBI/APBI than for those receiving WBRT (reported hazard ratios of 1.62 and 1.11 across analyses), in addition to poorer physician-reported cosmesis with PBI/APBI. Oncological outcomes such as cause-specific, distant metastasis-free and relapse-free survival, and mastectomy rates, were not affected by this small local recurrence difference, and there was no difference in overall survival with PBI/APBI. Although acute toxicities appear to be reduced by partial irradiation, this did not translate into an advantage for late subcutaneous fibrosis. 'Elsewhere primaries' (new primaries in the ipsilateral breast) were found to be more frequent with PBI/APBI. The meta-analysis could not determine which techniques increased local recurrence or elsewhere-primary detection; ongoing trials will address these questions [110]. Despite small differences in local control, the advantages of APBI for patients, such as short treatment duration and easy application during surgery, can increase treatment compliance. Intraoperative APBI could be a reasonable option for a highly selected subpopulation of early-stage breast cancer patients outside a clinical trial.
Breath-hold and cardiac-sparing methods
Breast cancer radiotherapy reduces the risk of cancer recurrence and death, as demonstrated by randomized trials, but because radiation delivery requires tangential and, selectively, internal mammary fields, meta-analyses have also found an increase in cardiac deaths following breast cancer radiotherapy, associated with the volume of the heart receiving 5 Gy or more [111]. Decreased myocardial function and coronary artery disease are the most common cardiotoxicities, besides the less common myocardial infarction, congestive heart failure, pericarditis, arrhythmias, angina and valve dysfunction [112]. Darby et al. conducted a population-based case-control study of major coronary events in 2168 women who underwent radiotherapy for breast cancer between 1958 and 2001 in Sweden and Denmark. The overall average of the mean doses to the whole heart was 4.9 Gy (range 0.03-27.72 Gy), and the rate of major coronary events increased by 7.4% per gray of mean heart dose, with no apparent threshold. This effect of radiation on the heart began within the first 5 years after radiotherapy and was found to be unrelated to the presence of cardiac risk factors at the time of radiotherapy.
Owing to the interplay between respiratory motion and MLC motion during IMRT delivery, the planned and delivered doses can differ. Respiratory motion is a well-known factor in treatment planning for breast IMRT: dosimetric studies have shown that PTV dose heterogeneity increases as respiratory motion grows, and lung and heart doses also change with respiratory motion. As a result, a larger CTV-to-PTV margin has been proposed [113]. The breath-hold technique can help minimize the potential negative dosimetric impact arising from the interplay of multileaf collimator motion and breathing motion during IMRT delivery [114,115].
In clinical practice, there are two commercially available devices, the Active Breathing Coordinator™ (ABC) (Elekta, Crawley, UK) and the Varian RPM system, which guide patients to hold their breath while radiotherapy is delivered, pushing the heart down and away from the radiotherapy field. Although the benefits of these systems have been proven in dosimetric studies, they are not in widespread use: only 19% of EORTC centers in 2010 and just 4% of UK centers used them [116,117]. This may be due to additional cost, staff training, and the time-consuming nature of the procedure, which depends on the patient's capacity and the therapist's experience.
In the early 2000s, the Real-time Position Management (RPM) system from Varian Medical Systems (Palo Alto, USA), consisting of two reflectors attached to an external marker cube placed on the patient's abdomen, was released. The motion of the marker cube, reflecting the patient's breathing pattern, is evaluated by software that controls the scanner based on predefined criteria [118]. The advantage of the RPM system is continuous monitoring of patient respiration, with an automatic beam hold if the breath-hold level departs from the planned one [119]. The patient can easily track their performance on screen, and reproducibility is the other important advantage of this system.
The ABC method was established at William Beaumont Hospital and is currently commercialized by Elekta, Inc. as the Active Breathing Coordinator; the VMAX Spectra 20C (VIASYS Healthcare Inc., Yorba Linda, CA, USA) and the SpiroDyn'RX (Dyn'R, Muret, France) work on similar principles [120]. The ABC apparatus can be used to suspend breathing at any predetermined position along the normal breathing cycle, or at active inspiration. A digital spirometer connected to a balloon valve is used to measure the respiratory cycle. In an ABC procedure, the patient breathes normally through the apparatus. When the operator activates the system, the lung volume and the phase (i.e., inhalation or exhalation) at which the balloon valve will close are specified. The patient is then instructed to proceed to the specified lung volume, typically after taking two preparatory breaths. At this point, the valve is inflated with an air compressor for a predefined duration, thereby 'holding' the patient's breath [120].
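The control logic just described can be sketched as a simple threshold-and-hold loop; the function and variable names below are hypothetical, and real systems add safety interlocks, operator override and patient feedback that are omitted here:

```python
# Minimal sketch of the ABC breath-hold logic described above.
def abc_breath_hold(volumes_L, threshold_L, hold_seconds, sample_hz=25):
    """Scan a stream of spirometer readings (liters); once the predefined
    lung volume is reached, 'close the valve' for a fixed duration."""
    hold_samples = int(hold_seconds * sample_hz)
    for i, v in enumerate(volumes_L):        # patient breathes through device
        if v >= threshold_L:                 # specified lung volume reached
            held = volumes_L[i:i + hold_samples]   # valve closed on this window
            return "beam may be enabled", len(held) / sample_hz
    return "threshold never reached", 0.0

# Example: a ramping inhale that crosses a 1.2 L deep-inspiration threshold
readings = [0.1 * k for k in range(40)]
print(abc_breath_hold(readings, threshold_L=1.2, hold_seconds=1.0))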
There is solid evidence from retrospective and dosimetric planning studies demonstrating reduced dose to the heart and coronary arteries with deep inspiration breath-hold treatment of left-sided breast cancer, for both early and locally advanced disease with regional irradiation. In one dosimetric analysis, free-breathing and breath-hold techniques were planned with both forward and inverse IMRT, showing a significant reduction in radiation exposure to the contralateral breast, the left and right ventricles, and the proximal and especially the distal LAD with breath hold and forward IMRT, while inverse IMRT provided no additional advantage [121]. For whole breast radiotherapy, Wang et al. reported a reduction in mean heart dose from 3.2 Gy with forward-planned IMRT in free breathing to 1.3 Gy with forward-planned IMRT in breath hold. Another confirmatory study of 319 breast cancer patients showed large reductions in heart dose with deep inspiration breath-hold plans compared with left-sided free-breathing plans: heart V20Gy was reduced from 7.8 to 2.3%, V40Gy from 3.4 to 0.3% and mean dose from 5.2 to 2.7 Gy (−48%, p < 0.0001), while median target coverage was slightly improved [122].
The William Beaumont Hospital experience showed that moderate deep inspiration breath hold, achieved using an active breathing control (ABC) device and compared with free breathing (FB) during treatment with deep tangent fields for locoregional irradiation of 15 breast cancer patients, reduced the heart V30 in 6 of the 9 left-sided patients, entirely avoided heart irradiation in 2 of these 6 patients, and reduced the mean percentage of both lungs receiving more than 20 Gy from 20.4 to 15.2% [123]. A study across twenty centers compared clinical aspects of respiratory-gated conformal radiotherapy during breast cancer irradiation versus conventional conformal radiotherapy and confirmed the feasibility and good reproducibility of respiratory gating systems, with a reduction in the dose delivered to the heart during irradiation of the left breast [119]. Even when locoregional irradiation is considered, the breath-hold technique still adds benefit by significantly reducing mean heart dose and mean LAD dose compared with free breathing, for both the whole breast/chest wall and regional irradiation groups. When a mean heart dose of <4 Gy was set as a planning criterion, all whole breast radiotherapy plans met it regardless of breathing pattern, but only five of nine patients (56%) in the comprehensive breast irradiation group met this constraint with free breathing, compared with all patients in deep breath hold [124]. In addition to routine use of deep breath-hold techniques for left-sided breast cancer, Essen et al. recommend their use for right-sided breast cancer as well: for locoregional breast treatment without IMN irradiation, the average mean lung dose was reduced from 6.5 to 5.4 Gy for the total lung and from 11.2 to 9.7 Gy for the ipsilateral lung, and if internal mammary lymph node irradiation is added, significant gains in lung dose persist, which may translate into lower risks of pneumonitis and secondary lung cancer in the future [125]. In summary, the published literature shows that deep breath hold reduces the mean heart dose by up to 3.4 Gy compared with a free-breathing approach, and the technique has been shown to be stable and reproducible on a daily basis [126].
The dosimetric benefits of breath-hold techniques are clearly documented in the literature, but these techniques are not yet in widespread use, partly because commercially available solutions necessitate specialist equipment. Another approach, the 'voluntary breath-hold' (VBH) technique, has been described. It monitors breath-hold consistency using the distance moved by anterior and lateral reference marks away from the treatment room lasers during breath hold, at CT planning and at treatment setup; light fields are then used to check breath-hold consistency visually before and during treatment. This technique is simple and inexpensive, but there have been concerns about its reproducibility and consistency [127]. A randomized study conducted at the Royal Marsden Hospital (Sutton, UK), the UK HeartSpare Study, confirmed that interfraction reproducibility with the voluntary breath-hold technique is comparable to that achieved with the spirometry-based device. In addition, the voluntary technique offers a time advantage at planning CT and treatment setup and is preferred by patients and radiographers alike over the spirometry-based device [128]. In the HeartSpare II study, the VBH technique is being evaluated at 10 UK radiotherapy centers to confirm its applicability in a multicenter setting; preliminary data suggest that multicenter application of VBH is both feasible and effective for heart sparing [129].
According to the Royal Marsden Hospital protocol, patients are first asked to practice holding their breath at home while lying down, initially for 5 s and building up in 5-s intervals to 20 s. During the standard CT simulation procedure, the positions of reference crosses are marked on the patient in free breathing and while taking a deep breath in, and the achievable duration of breath hold is noted. All the details, and a video of this technique, have been published by Barnett et al. [127]. Systematic and random error ranges for each beam and in each plane were reported as 1.5-1.8 mm and 1.7-2.5 mm, respectively [127].
In conclusion, to date only retrospective and dosimetric studies have been presented, and there are no data on the clinical benefits and oncological outcomes of patients treated with this technique; in particular, cardiac outcome data will only mature in 15-20 years. Under these circumstances, clinical application of the deep breath-hold technique is important and advisable. In our clinic, we routinely train all left-sided breast cancer patients and use the RPM system during simulation and treatment to ensure the consistency and reproducibility of the breath-hold period. After forward IMRT planning, DVHs are evaluated according to the following criteria: spinal cord maximum <45 Gy, or <36 Gy if >2.5 Gy per fraction; heart V20 <4% and V10 <15%; total lung V20 <35%. Our aim is to reduce the mean heart dose as low as possible; average mean heart doses are usually under 4-5 Gy for left-sided RT and under 2.5 Gy for right-sided RT, including internal mammary nodal irradiation. After adding segments, the 105% isodose cloud should not be seen except in the corners, due to lung transmission.
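For illustration, the DVH acceptance test just listed can be expressed as a simple lookup; the dictionary keys and values below are hypothetical names for metrics read from a planning system, and the conditional spinal cord limit is simplified into the stricter form:

```python
# Sketch of the in-house DVH criteria quoted above (illustrative only).
CRITERIA = {
    "spinal_cord_max_Gy": lambda x: x < 36,   # stricter of the two cord limits
    "heart_V20_pct":      lambda x: x < 4,
    "heart_V10_pct":      lambda x: x < 15,
    "lung_V20_pct":       lambda x: x < 35,
}

def plan_meets_criteria(dvh):
    """Return a pass/fail flag for each constraint in the plan's DVH."""
    return {name: test(dvh[name]) for name, test in CRITERIA.items()}

print(plan_meets_criteria({"spinal_cord_max_Gy": 30.2, "heart_V20_pct": 2.5,
                           "heart_V10_pct": 9.8, "lung_V20_pct": 22.0}))
```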
Conclusion
Modern radiotherapy techniques have evolved greatly over the last two decades. Supine positioning will continue to be used for breast cancer simulation worldwide for decades to come, as it provides patient comfort and position reproducibility for the whole treatment period, while in rare indications, such as very large pendulous breasts, or depending on institutional choice, the lateral decubitus or prone position can help. Modern techniques such as three-dimensional conformal radiotherapy (3D), intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) are increasingly reflected in breast therapy. Although dosimetric studies have demonstrated more homogeneous dose distribution and normal organ sparing, survival data and the long-term effects of normal tissue sparing on survival remain to be answered. In particular, forward IMRT, using tangential beam angles and creating multiple segments, can be used in clinical practice with consideration of acute toxicity, while the tangential radiotherapy field design itself remains acceptable. There is increasing attention to hypofractionation in the treatment of breast cancer, while unanswered questions remain regarding regional lymph node and expander irradiation. Another attractive approach, APBI, could be a reasonable option for a highly selected subpopulation of early-stage breast cancer patients outside a clinical trial; results of ongoing trials comparing APBI techniques with external radiotherapy will determine the future of APBI as a routine clinical approach. Among all these technical improvements, the most important advance may be the cardiac-sparing deep breath-hold approach. Retrospective and dosimetric studies have presented the benefit of commercially available systems or voluntary performance, while clinical outcomes will emerge in 15-20 years. Under these circumstances, clinical application of the deep breath-hold technique is important and advisable.
Although most advanced techniques in the management of breast cancer have not been proven to increase survival, we suggest that resource-stratified advanced techniques be adopted institutionally in order to provide the best technical and clinical care for these long-term survivor candidates.
"Medicine",
"Physics"
] |
Experimental and Analytical Studies on Improved Feedforward ML Estimation Based on LS-SVR
The maximum likelihood (ML) algorithm is the most common and effective parameter estimation method. However, when dealing with small samples and low signal-to-noise ratio (SNR), threshold effects arise and estimation performance degrades greatly. The support vector machine (SVM) has been shown to be suitable for small samples. Consequently, we exploit the linear relationship between the inputs and outputs of least squares support vector regression (LS-SVR) and regard the LS-SVR process as a time-varying linear filter that increases the input SNR of received signals and decreases the threshold value of the mean square error (MSE) curve. Furthermore, taking single-tone sinusoidal frequency estimation as an example, and integrating data analysis with experimental validation, we verify that if the LS-SVR parameters are set appropriately, the LS-SVR process not only preserves the single-tone sinusoid and additive white Gaussian noise (AWGN) channel characteristics of the original signals, but also improves frequency estimation performance. In the experimental simulations, the LS-SVR process is applied to two common and representative single-tone sinusoidal ML frequency estimation algorithms, the DFT-based frequency-domain periodogram (FDP) and the phase-based Kay algorithm, and the threshold values of their MSE curves are decreased by 0.3 dB and 1.2 dB, respectively, which clearly exhibits the advantage of the proposed algorithm.
Introduction
Maximum likelihood (ML) estimation depends on asymptotic theory, which means its statistical characteristics hold accurately only as the sample size tends to infinity. However, burst-mode transmissions always bring problems of short data records and severe channel conditions. Therefore, a threshold effect exists: the mean square error (MSE) of ML estimation can reach the Cramer-Rao lower bound (CRLB) only when the SNR is above a threshold value; below it, performance deteriorates rapidly.
Statistical learning theory (SLT), and the structural risk minimization (SRM) principle within it, specialize in small-sample learning [1]. As their concrete implementation, the support vector machine (SVM) overcomes the over-fitting and local-minimum problems of artificial neural networks (ANN). Least squares support vector regression (LS-SVR) makes the following modifications: the inequality constraints are replaced by equality constraints, and a squared loss function is used for the error variable. Hence, in this study we introduce LS-SVR to improve the ML estimator, taking feedforward single-tone sinusoidal frequency estimation as an example.
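For reference, the standard LS-SVR formulation in Suykens' form (the notation here is ours, anticipating the paper's Section 2) is

```latex
\min_{w,b,e}\ \frac{1}{2}\|w\|^2 + \frac{\gamma}{2}\sum_{i=1}^{N} e_i^2
\quad \text{s.t.}\quad y_i = w^{\top}\varphi(x_i) + b + e_i,\ \ i = 1,\dots,N,
```

whose dual reduces to a single linear system:

```latex
\begin{bmatrix} 0 & \mathbf{1}^{\top} \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{bmatrix}
\begin{bmatrix} b \\ \alpha \end{bmatrix}
=
\begin{bmatrix} 0 \\ y \end{bmatrix},
\qquad \Omega_{ij} = K(x_i, x_j).
```

The regression output \(\hat{y}(x) = \sum_i \alpha_i K(x, x_i) + b\) is therefore linear in the training targets \(y\), which is precisely what allows the LS-SVR process to be viewed as a (time-varying) linear filter.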
Estimating the frequency of a single-tone sinusoid has attracted considerable attention for many decades. Rife and Boorstyn exploited the relationship of the maximum likelihood estimator (MLE) to the discrete Fourier transform (DFT) and proposed a frequency-domain periodogram (FDP) algorithm with two stages: a coarse search and a fine search [2]. In order to reduce the computational cost, a great number of improved algorithms have emerged, mainly of two kinds: interpolation-based and phase-based.
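To make the two-stage idea concrete, the following is a minimal numpy sketch of an FDP-style estimator: a coarse search over a zero-padded periodogram followed by a local fine search around the peak bin. The function name, FFT size, and the ternary-search refinement are our own illustrative choices, not Rife and Boorstyn's exact procedure.

```python
import numpy as np

def fdp_estimate(y, n_fft=4096):
    """Two-stage periodogram frequency estimate (illustrative sketch).

    Coarse stage: locate the peak bin of a zero-padded periodogram.
    Fine stage: refine around the peak by a ternary search, standing in
    for the fine search of the FDP algorithm.
    """
    n = len(y)
    spec = np.abs(np.fft.fft(y, n_fft)) ** 2
    k = int(np.argmax(spec))
    f_coarse = k / n_fft  # cycles/sample, in [0, 1)

    # Fine search: maximize |sum y(t) e^{-j 2 pi f t}|^2 near the coarse bin.
    def periodogram(f):
        t = np.arange(n)
        return np.abs(np.sum(y * np.exp(-2j * np.pi * f * t))) ** 2

    lo, hi = f_coarse - 1.0 / n_fft, f_coarse + 1.0 / n_fft
    for _ in range(40):  # shrink the bracket around the maximum
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if periodogram(m1) > periodogram(m2):
            hi = m2
        else:
            lo = m1
    f_hat = (lo + hi) / 2
    return f_hat if f_hat < 0.5 else f_hat - 1.0  # map to [-0.5, 0.5)

# Example: N = 32 samples of a unit-amplitude complex tone at f = 0.15, 0 dB SNR.
rng = np.random.default_rng(0)
t = np.arange(32)
y = np.exp(2j * np.pi * 0.15 * t) \
    + (rng.normal(size=32) + 1j * rng.normal(size=32)) / np.sqrt(2)
print(fdp_estimate(y))
```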
Among the former, an iterative binary search for the true signal frequency has been presented, which is particularly suited for digital signal processing (DSP) implementation [3]. In [4], the same authors proposed a number of hybrid estimators that combine the dichotomous search with various interpolation techniques in order to reduce the computational complexity, at the expense of acquisition range. Other modified dichotomous-search frequency estimators have been addressed in [5][6][7]. Besides, complex Fourier coefficients have been utilized to interpolate the true signal frequency between the maximum and the second-highest bin [8]; however, this approach has been shown to have a frequency-dependent performance [9]. Two improved estimators that are implemented iteratively have also been proposed [10,11]. Rational combination of three spectrum lines (RCTSL) has been employed for the fine estimation because of its constant combinational weights in least squares approximation [12]. Other interpolation methods include the Lagrange interpolator [13], L-filter DFT [14], nonlinear filter [15], Kaiser window [16], trigonometric polynomial interpolator [17], narrowband approximation interpolator [18], and so on. Among the latter, Tretter [19] was the first to propose a phase-based approach by introducing an approximate linear model for the instantaneous signal phase. Subsequently, a great number of improvements have emerged in the following three directions: taking phase differences over one or more delays, well known as the Kay and generalized Kay estimators [20][21][22][23][24][25]; introducing autocorrelations and different functions of them, such as the Fitz, L&R, and M&M estimators [26][27][28][29][30]; and preprocessing by means of lowpass filters, block averaging, and filter banks to increase the signal-to-noise ratio (SNR) [31][32][33][34].
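As a companion sketch, the weighted one-lag phase-difference form commonly attributed to Kay can be written in a few lines; the parabolic weights below are the standard ones that sum to one, though the paper's exact variant may differ.

```python
import numpy as np

def kay_estimate(y):
    """Kay's weighted phase-difference frequency estimator (sketch).

    f_hat = (1 / 2 pi) * sum_t w_t * arg(y[t+1] * conj(y[t])), with the
    parabolic window weights w_t = 6 (t+1)(N-1-t) / (N (N^2 - 1)),
    which sum to one over t = 0, ..., N-2.
    """
    n = len(y)
    t = np.arange(n - 1)
    w = 6.0 * (t + 1) * (n - 1 - t) / (n * (n**2 - 1))
    dphi = np.angle(y[1:] * np.conj(y[:-1]))  # one-lag phase differences
    return float(np.sum(w * dphi)) / (2 * np.pi)
```

Above threshold this estimator approaches the CRLB, but the wrapping of the phase differences at low SNR is precisely what produces the threshold effect that the LS-SVR prefilter studied here targets.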
In this paper, we present an improved feedforward ML estimation based on LS-SVR, taking single-tone sinusoidal frequency estimation as an example. The LS-SVR process is regarded as a time-varying linear filter that increases the input SNR of the received signals, and accordingly the threshold value of the MSE curve is decreased. The reliability and validity of the LS-SVR process are verified by integrating data analysis with experimental simulation. It is verified that the LS-SVR process not only preserves the single-tone sinusoid and AWGN channel characteristics of the original signals well, but also increases the input SNR of the received signals efficiently and improves the frequency estimation performance. In the experimental simulations, the LS-SVR process is applied to two common and representative single-tone sinusoidal frequency estimation algorithms, the DFT-based FDP and the phase-based Kay algorithm. The estimation performance with and without the LS-SVR process is compared to exhibit the advantage of the proposed algorithm when its parameters are set appropriately.
The remainder of this paper is organized as follows. Section 2 briefly introduces the basic theory of LS-SVR. Section 3 describes the model of single-tone sinusoidal frequency estimation and the classical algorithms, including FDP and Kay. In Section 4, the LS-SVR process is concretely explained and analyzed. Section 5 shows the results of simulations and experiments. Finally, the paper is concluded in Section 6.
Theory of LS-SVR
At first, a linear hyperplane $f(\mathbf{x}) = (\mathbf{w} \cdot \varphi(\mathbf{x})) + b$ is assumed to fit all elements of the training set $S = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_N, y_N)\} \subset \mathcal{X} \times \mathbb{R}$, where $\mathbf{w}$ is the high-dimensional coefficient of $f(\mathbf{x})$, $(\cdot)$ is an inner product operator, and $\varphi(\cdot)$ is a nonlinear mapping from the low- to the high-dimensional feature space. Also, the $\varepsilon$-insensitive loss function, which denotes the distance from a point $(\mathbf{x}_i, y_i) \in S$ to $f(\mathbf{x})$, is defined as $L_\varepsilon(\mathbf{x}_i, y_i) = \max\{0, |y_i - f(\mathbf{x}_i)| - \varepsilon\}$ (2). According to (2), we optimize $f(\mathbf{x})$ by maximizing the margin (3). Then, we proceed to handle the inseparable condition by introducing error variables $e_i$ and the least squares (LS) method, and convert (3) into $\min_{\mathbf{w}, b, \mathbf{e}} J(\mathbf{w}, \mathbf{e}) = \frac{1}{2}\|\mathbf{w}\|^2 + \frac{\gamma}{2}\sum_{i=1}^{N} e_i^2$ subject to $y_i = (\mathbf{w} \cdot \varphi(\mathbf{x}_i)) + b + e_i$, $i = 1, \ldots, N$, where the penalty factor $\gamma$ is a positive constant that trades off LS-SVR's generalization capability against its fitting errors, represented by the first and second terms of $J(\mathbf{w}, \mathbf{e})$, respectively.
Next, we use the Lagrange multiplier method, eliminate $\mathbf{w}$ and $\mathbf{e}$, and arrive at the linear system $\begin{bmatrix} 0 & \mathbf{1}^{T} \\ \mathbf{1} & \mathbf{Q} + \gamma^{-1}\mathbf{I} \end{bmatrix} \begin{bmatrix} b \\ \boldsymbol{\alpha} \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{y} \end{bmatrix}$, where $\alpha_1, \ldots, \alpha_N$ are Lagrange multipliers and $\mathbf{Q}$ is a kernel function matrix; the radial basis function (RBF) is adopted in this study, so $Q_{ij} = \exp(-\|\mathbf{x}_i - \mathbf{x}_j\|^2 / h^2)$, where $Q_{ij}$ is the $(i, j)$th element of $\mathbf{Q}$ and the RBF width $h$ is a positive constant. Ultimately, the discriminant function is described as $f(\mathbf{x}) = \sum_{i=1}^{N} \alpha_i Q(\mathbf{x}, \mathbf{x}_i) + b$.
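For concreteness, the dual system above can be solved directly with a dense linear solve. The sketch below is a minimal LS-SVR fit in Python for one-dimensional inputs, using the paper's γ (penalty) and h (RBF width) notation; the function name and the example data are illustrative assumptions.

```python
import numpy as np

def lssvr_fit(x, y, gamma=5.0, h=1.0):
    """Fit LS-SVR by solving its KKT linear system (minimal sketch).

    Standard LS-SVR dual system:
        [ 0   1^T          ] [ b     ]   [ 0 ]
        [ 1   Q + I/gamma  ] [ alpha ] = [ y ]
    with RBF kernel Q_ij = exp(-(x_i - x_j)^2 / h^2).
    Returns a predictor f(x_new) = sum_i alpha_i K(x_new, x_i) + b.
    """
    n = len(x)
    d2 = (x[:, None] - x[None, :]) ** 2
    q = np.exp(-d2 / h**2)
    a = np.zeros((n + 1, n + 1))
    a[0, 1:] = 1.0
    a[1:, 0] = 1.0
    a[1:, 1:] = q + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(a, rhs)
    b, alpha = sol[0], sol[1:]

    def predict(x_new):
        k = np.exp(-(x_new[:, None] - x[None, :]) ** 2 / h**2)
        return k @ alpha + b

    return predict

# Smooth one noisy sinusoid, with the sample indices as inputs.
t = np.arange(32, dtype=float)
y = np.cos(2 * np.pi * 0.15 * t) + 0.5 * np.random.default_rng(1).normal(size=32)
y_hat = lssvr_fit(t, y, gamma=5.0, h=1.0)(t)
```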
Signal Model and Classical Algorithms
3.1. Signal Model. The sinusoid signal polluted by noise is modeled as $y(n) = A e^{j(2\pi f n + \theta)} + w(n)$, $n = 0, 1, \ldots, N-1$. Here, $A > 0$, $f \in [-0.5, 0.5)$, and $\theta \in [-\pi, \pi)$ are the amplitude, the deterministic but unknown frequency, and the initial phase, respectively; $w(n)$ is an independent complex additive white Gaussian noise (AWGN) with zero mean and variance $\sigma^2$; and $N$ is the sample size.
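The model is easy to reproduce numerically. The helper below generates y(n) at a prescribed SNR, assuming the usual definition SNR = A²/σ² for complex AWGN; the function name and defaults are ours.

```python
import numpy as np

def make_signal(a=1.0, f=0.15, theta=0.0, n=32, snr_db=0.0, seed=None):
    """Generate y(n) = A exp(j(2 pi f n + theta)) + w(n) at a given SNR.

    Assumes SNR = A^2 / sigma^2 with complex AWGN of variance sigma^2
    (sigma^2 / 2 per real and imaginary part).
    """
    rng = np.random.default_rng(seed)
    sigma2 = a**2 / 10 ** (snr_db / 10)  # noise variance implied by the SNR
    t = np.arange(n)
    w = np.sqrt(sigma2 / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
    return a * np.exp(1j * (2 * np.pi * f * t + theta)) + w
```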
LS-SVR Process and Its Analysis
The real part of $w(n)$ is an independent real AWGN with zero mean and variance $\sigma^2/2$, from which the real received signal $y(n)$ is derived. Then, we substitute the LS-SVR output $\hat{y}(n)$ for $y(n)$ and get a new series of received signals.
At last, we utilize the classical algorithm to estimate frequency accurately.
Firstly, setting $f = 0.15$, $\theta = 0$, $N = 32$, SNR = 0 dB, and the LS-SVR parameter $\gamma = 5$, arbitrary amplitude spectra of $\hat{G}$ for $h = 1$ and $h = 5$ are illustrated in Figure 1, respectively. It is shown that when $h = 1$, the spectrum component of $\hat{G}$ at $f$ is much more powerful than at other frequencies. This means that the output of the LS-SVR process still keeps the spectrum characteristics of $\cos(2\pi f n + \theta)$ and can be used to estimate its frequency. However, as $h$ increases, the spectrum component of $\hat{G}$ at $f$ decreases while the others gradually increase; hence, the output of the LS-SVR process no longer keeps the spectrum characteristics of $\cos(2\pi f n + \theta)$.
Furthermore, with everything as in Figure 1, the time-domain waveforms of $\hat{G}$ for $h = 1$, $h = 4$, and $h = 5$ are plotted in Figure 2, respectively. The conclusion from Figure 2 is consistent with Figure 1, except that when $h = 1$ the amplitude of the time-domain waveform of $\hat{G}$ is less than that of $\cos(2\pi f n + \theta)$.
The Euclidean distance $d$ between $\hat{G}$ and $G$ is defined as follows, where $\max(\cdot)$ is the operation of taking the maximum value. With everything as in Figure 1 and 10000 Monte Carlo experiments, the values of $d$ for different $h$ are listed in Table 1. Obviously, $\hat{G}$ can be made very close to $G$ through a proper choice of $h$; nevertheless, $\hat{G}$ gradually deviates from $G$ as $h$ increases.
Consequently, a proper choice of $h$ ensures that the LS-SVR process can be used for frequency estimation of single-tone sinusoidal signals. Integrating the above analyses, the value of $h$ must be less than 3.
(2) By analyzing the covariance function of $\hat{Y}$, it is shown that the LS-SVR process is feasible and valid with a proper choice of $\gamma$ and $h$.
From (16), it is obvious that the covariance function is related to $\sigma^2$ and $\gamma$. Taking $N = 4$ as an example, with $\gamma = 5$ and $h = 1$, (16) is evaluated as (18). By analyzing (18) we can deduce the following. (A) The elements other than the main diagonal ones denote the correlations between $\hat{w}(n)$ at different moments. (B) The main diagonal elements denote the powers of $\hat{w}(n)$; since their values are nearly equal, $\hat{w}(n)$ at different moments can be regarded as independent and identically distributed (i.i.d.), which is the premise that the classical feedforward ML frequency estimation algorithms can still be employed after the LS-SVR process. With everything as in Figure 1 and 10000 Monte Carlo experiments, Figure 3 illustrates the impact of $\gamma$ on MSE performance, which is consistent with all the analyses above; accordingly, $\gamma = 5$ is set in this study.
At the same time, with everything as in Figure 3 except that $\gamma = 5$, the impact of $h$ on MSE performance is shown in Figure 4, which is consistent with all the analyses above; accordingly, $h = 1$ is set in this study.
(3) With $\gamma$ and $h$ set appropriately, the LS-SVR process can increase the SNR of $\mathbf{Y}$ and improve the performance of feedforward ML frequency estimation under the condition of small sample size and low SNR.
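Putting the pieces together, one plausible reading of the proposed pipeline is to smooth the received samples with LS-SVR and then hand the result to a classical estimator. The sketch below assumes the helpers from the earlier snippets (make_signal, lssvr_fit, kay_estimate) are in scope and applies the filter separately to the real and imaginary parts; this wiring is our assumption, not the paper's verbatim procedure.

```python
import numpy as np

# Smooth the real and imaginary parts with LS-SVR, then estimate frequency
# with Kay's algorithm on both the raw and the filtered samples.
y = make_signal(f=0.15, n=32, snr_db=0.0, seed=2)
t = np.arange(len(y), dtype=float)
y_smooth = lssvr_fit(t, y.real, gamma=5.0, h=1.0)(t) \
    + 1j * lssvr_fit(t, y.imag, gamma=5.0, h=1.0)(t)
print("raw Kay:     ", kay_estimate(y))
print("LS-SVR + Kay:", kay_estimate(y_smooth))
```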
Simulations and Experiments
We apply the LS-SVR process to two common and representative single-tone sinusoidal ML frequency estimation algorithms, the DFT-based FDP and the phase-based Kay algorithm, and derive the proposed algorithm, called LS-SVR for short, where the number of DFT points of the FDP algorithm is 32.
Mean Performance.
With everything as in Figure 3 except that $\gamma = 5$, Figures 5 and 6 illustrate the means of these three algorithms at different SNRs. As shown, whether at high or low SNR, the LS-SVR process hardly changes the unbiased ranges of the FDP and Kay algorithms. Also, the unbiased ranges of all three algorithms degrade as the SNR deteriorates.
MSE Performance.
With everything as in Figure 3 except that $\gamma = 5$, the MSE curves of these three algorithms are shown in Figures 7 and 8, where the CRLB is defined as $\mathrm{CRLB} = \frac{1}{\mathrm{SNR}} \cdot \frac{3}{(2\pi)^2 N(N-1)(2N-1)}$ [2]. We can see that the LS-SVR process effectively improves the MSE performance of both the FDP and Kay algorithms, and their threshold values are decreased by 0.3 dB and 1.2 dB, respectively.
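The bound is straightforward to evaluate; the snippet below computes the CRLB expression above over an SNR grid for N = 32. Note that the formula is our reconstruction of a garbled expression, so the constant factors should be checked against [2] before reuse.

```python
import numpy as np

# Evaluate the (reconstructed) CRLB over an SNR grid for N = 32; this is the
# reference floor against which the MSE curves in Figures 7 and 8 are drawn.
n = 32
snr_db = np.arange(-10, 21)
snr = 10 ** (snr_db / 10)
crlb = 3.0 / ((2 * np.pi) ** 2 * snr * n * (n - 1) * (2 * n - 1))
```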
Impact of Sample Size 𝑁.
With everything still as in Figure 3 except that $\gamma = 5$, Figure 9 illustrates the impact of $N$ on MSE performance. We can see that the MSE curve of the LS-SVR algorithm decreases as $N$ increases. However, when the LS-SVR process is applied to the FDP algorithm, its threshold value increases as $N$ increases, whereas when it is applied to the Kay algorithm, its threshold value stays the same.
The reason is related to the concrete frequency estimation algorithm after LS-SVR process.
Figure 3: Impact of γ on MSE performance.
Figure 6: Mean of Kay and LS-SVR algorithms.
Table 1: Values of d with different h.
Table 2: Pluses of LS-SVR process with different γ and h. | 2,791.6 | 2013-11-27T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Developing the Benchmark: Establishing a Gold Standard for the Evaluation of AI Caries Diagnostics
Background/Objectives: The aim of this study was to establish a histology-based gold standard for the evaluation of artificial intelligence (AI)-based caries detection systems on proximal surfaces in bitewing images. Methods: Extracted human teeth were used to simulate intraoral situations, including caries-free teeth, teeth with artificially created defects and teeth with natural proximal caries. All 153 simulations were radiographed from seven angles, resulting in 1071 in vitro bitewing images. Histological examination of the carious lesion depth was performed twice by an expert. A total of thirty examiners analyzed all the radiographs for caries. Results: We generated in vitro bitewing images to evaluate the performance of AI-based carious lesion detection against a histological gold standard. All examiners achieved a sensitivity of 0.565, a Matthews correlation coefficient (MCC) of 0.578 and an area under the curve (AUC) of 76.1. The histology receiver operating characteristic (ROC) curve significantly outperformed the examiners’ ROC curve (p < 0.001). All examiners distinguished induced defects from true caries in 54.6% of cases and correctly classified 99.8% of all teeth. Expert caries classification of the histological images showed a high level of agreement (intraclass correlation coefficient (ICC) = 0.993). Examiner performance varied with caries depth (p ≤ 0.008), except between E2 and E1 lesions (p = 1), while central beam eccentricity, gender, occupation and experience had no significant influence (all p ≥ 0.411). Conclusions: This study successfully established an unbiased dataset to evaluate AI-based caries detection on bitewing surfaces and compare it to human judgement, providing a standardized assessment for fair comparison between AI technologies and helping dental professionals to select reliable diagnostic tools.
Introduction
With the exponential growth in computational power across virtually all semiconductor-based devices, artificial intelligence (AI) is finding its way into the medical sciences, driven by the desire to increase diagnostic accuracy, improve treatment outcomes and optimize workflow efficiency [1][2][3]. The increasing prevalence of articles on this subject in the literature is evidence of this [4]. From identifying anatomical or pathological structures to assisting with logistical challenges, AI promises to save time and reduce costs [5][6][7].
In human medical imaging, AI applications show promising potential in several areas, particularly in oncology [8]. A major advantage of these AI applications is that their training is based on verified histopathological findings, thus relying on a reliable reference.
In dentistry, AI algorithms have already been developed for automated analysis of radiographs for caries diagnosis [9][10][11][12]. Image recognition with regard to caries detection has been approached using a variety of techniques [13]. However, the traditional comprehensive analysis of X-rays by the dentist is time-consuming and limited by the possibility of human error, which AI promises to largely eliminate [14][15][16][17].
In 2022, Mohammad-Rahimi et al. conducted a systematic review to evaluate the accuracy of automated caries detection systems and showed that the majority of the models included were able to deliver results with clinically acceptable performance parameters, although the quality of studies is often currently low [18]. In particular, in a systematic review and meta-analysis, Ammar and Kühnisch reported acceptable diagnostic accuracy of AI models for caries detection and classification on bitewing radiographs [19]. These radiographs are the most reliable and widely used clinical imaging method for caries diagnosis [20,21]. Despite some promising results, it has also been criticized that AI-based caries diagnostic studies often neither include an appropriate definition of caries nor provide information on the type of carious lesion detected and have limitations in regard to size and heterogeneity of the reported datasets [22][23][24].
The advancement of AI applications for caries detection in bitewing images relies primarily on the use of deep learning networks, primarily convolutional neural networks [25]. This iterative process begins with the compilation of large datasets of annotated bitewing radiographs, in which dental professionals delineate regions of interest corresponding to caries, healthy tooth structure and other anatomical structures [4]. These annotated images are then divided into distinct training and test sets. Using machine learning algorithms, AI-driven methods analyze the training dataset, identifying intricate patterns and extrapolating the desired results [4]. The integrity of the trained model is then evaluated against the separate test dataset, assessing its ability to analyze novel, unseen data. The accuracy of the model is quantified by comparing the predictions derived from the test dataset with the actual annotations. This dichotomy between training and test datasets is crucial to ensure that the AI model goes beyond simply memorizing specific instances from the training dataset, and instead acquires a robust understanding of the general patterns and features that are essential for accurate caries detection.
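The train/test discipline described here can be illustrated with a few lines of scikit-learn; a logistic regression on synthetic features stands in for the convolutional network and annotated bitewings, since the point is the held-out evaluation rather than the model itself. All names and data below are illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef

# Stand-in features/labels: in practice these would be image descriptors and
# expert annotations of proximal surfaces (synthetic here for illustration).
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 16))
y = (x[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

# The split mirrors the workflow above: the model never sees the held-out
# images, so the test score reflects generalization, not memorization of
# the training annotations.
x_tr, x_te, y_tr, y_te = train_test_split(
    x, y, test_size=0.2, random_state=0, stratify=y
)
clf = LogisticRegression().fit(x_tr, y_tr)
print("test MCC:", matthews_corrcoef(y_te, clf.predict(x_te)))
```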
However, a fundamental limitation arises in the whole training process, which lies in the annotation of radiographs by dentists, representing the AI training gold standard. According to the Standards for Reporting Diagnostic Accuracy Studies (STARD), a gold standard is defined as an error-free reference standard that represents the best available method for determining the presence or absence of the target condition [26]. Although dentists are trained in clinical diagnosis, their sensitivity and specificity for detecting carious lesions on radiographs are somewhat limited [27][28][29][30], in particular for subtle or early stages of lesions. In addition, various factors, such as experience, knowledge, technical skills and time pressure, may influence diagnostic accuracy [31]. While it is undeniable that deep learning can identify features indicative of caries, the underlying methodology has potentially serious practical implications, as the predictions only reflect sensitivity and specificity within the training and test data. This concern is exacerbated by the existence of commercial automated dental radiograph analysis software solutions, most of which lack transparency regarding the scientific basis of their AI models.
The aim of this study was, therefore, to develop reliable in vitro simulations of bitewing radiographs based on the histological gold standard to provide a basis for evaluating the performance of AI-based software currently offered by commercial vendors for the automated analysis of caries in bitewing radiographs. In addition, a reference dataset of caries diagnoses from in vitro bitewing radiographs by different examiners was created to serve as a benchmark for predicting whether AI applications can provide a diagnostic advantage to dental examiners.
Ethical Aspects
This study was approved by the Ethics Committee of the Medical Faculty of the University of Würzburg (15/15, 9 February 2015) and was carried out in compliance with the Declaration of Helsinki. All teeth used were extracted for existing clinical indications, with ethical approval, voluntarily and without coercion, and were anonymized. Information provided to patients still allowed for patient withdrawal but excluded the possibility of targeted destruction of donated teeth.
Trial Profile
The trial profile is depicted in Figure 1.
Tooth Selection
This study used 179 extracted permanent human teeth that were preserved in a 1% tosylchloramide-sodium solution immediately after extraction. All teeth were obtained from various dental clinics and hospitals, ensuring a diverse representation of carious and caries-free conditions. Inclusion criteria were visually and radiographically normal and properly formed permanent teeth with restorative measures that did not significantly interfere with or prevent radiographic caries diagnosis of proximal surfaces. Exclusion criteria comprised completely decayed teeth or root remains, and teeth whose clinical appearance matched hereditary anomalies. All teeth were examined for possible carious lesions by visual inspection using a 2.5× close-up magnification loupe (GTX 2 telescope loupe system; Carl Zeiss Vision GmbH, Aalen, Germany) and tactile examination using a dental probe (EXS3A; Henry Schein Dental Deutschland GmbH, Langen, Germany). A digital single-lens reflex camera (Olympus E-400; Olympus Europa SE & Co. KG, Hamburg, Germany) with a 50 mm macro lens (four thirds standard) was used to photograph each tooth from five directions (occlusal, vestibular, oral, mesial, distal). In addition, each tooth was radiographed in the vestibulo-oral and mesiodistal directions (Sirona Heliodent DS; Dentsply Sirona Deutschland GmbH, Bensheim, Germany) (Figure 2). Based on the visual, tactile and radiographic findings, two dentists classified all teeth as carious or caries-free.
Preparation of Artificial Defects
A total of 50 caries-free teeth were used to test the ability to discriminate between carious lesions and artificial defects. The artificial defects were created on the proximal surfaces using 1 mm, 2 mm, 3 mm and 4 mm spherical diamond burs (Gebr. Brasseler GmbH & Co. KG, Lemgo, Germany). During the preparation process, the burs were inserted into the teeth, creating artificial defects half the size of the drill's diameter.
The selection of diameters ranging from 1 mm to 4 mm was based on findings of Stroud et al. on the mean enamel thickness of permanent posterior teeth [32].This allowed for clinically accurate lesion simulations.
Bitewing Design
An occlusal holder (Split-Fixator; Scheu-Dental GmbH, Iserlohn, Germany) was fitted with Plexiglas blocks attached at the top and bottom by means of a milled groove. The teeth were embedded in Periphery Wax (Sigma Dental, Handewitt, Germany) and mounted in an anatomically and physiologically accurate configuration to standardize their position for radiographic imaging of the bitewings (Figure 3).
Despite clinical best efforts to use the parallel technique, obtaining superposition-free images of the region of interest in bitewing radiographs remains challenging. Factors such as the relative positioning of the teeth, superimpositions, the curvature of the dental arch, the orientation and spatial distortion of the film during exposure and the alignment of the X-ray tube all contribute to the superposition of dental tissue in the interproximal region [33,34]. To mimic clinically relevant situations and improve data quality, the study included not only orthoradial images, but also mesial and distal eccentric images at varying angles. For this purpose, the model was fixed in a rotating vice with a graduated scale that allowed precise angular adjustments in 2-degree increments.
Preparation of Histological Samples
Each examination series yielded a total of seven radiographs, all taken with the same X-ray unit (Sirona Heliodent DS; Dentsply Sirona Deutschland GmbH, Bensheim, Germany; 60 kV, 7 mA, 0.06 ms). These included a 0° orthogonal image and 4°, 6° and 8° mesial and distal eccentric images (Figure 4). The sample preparation steps are shown in Figure 5. After radiography, all carious teeth were subjected to an adapted standardized histological examination procedure (Figure 5) [35]. This was an elaborate process, beginning with a six-day ascending dehydration series with increasing concentrations of ethanol, followed by a six-day resin infiltration (Technovit 7200 VLC; Kulzer GmbH & Co. KG, Wehrheim, Germany) to effectively preserve carious lesions for subsequent processing (Table 1). The (carious) teeth were sectioned directly in front of the lesion using a saw with a diamond-coated band 100 µm wide (EXAKT Apparatebau GmbH & Co. KG, Norderstedt, Germany) under constant water cooling. Due to inherent vibrations and the cutting width of the saw blade, a loss of tooth substance of approximately 300 µm per cut (slice) was assumed. During the cutting process, the block was fixed to the machine by a vacuum pump at 680 mbar and pulled through the saw blade by a constant force of 400 g (approximately 4 N). The hard-cut method was used to divide the carious teeth before the lesion reached its maximum extent.
This was followed by a meticulous, progressive approach to the carious defect using the wet grinding technique with the EXAKT horizontal microgrinding system and a 400 g press weight (EXAKT Apparatebau GmbH & Co. KG, Norderstedt, Germany) (Figure 6). The microgrinding unit was calibrated by grinding a microscope slide with 1200-grit Al₂O₃ sandpaper for two minutes. A difference in the slide of no more than 5 µm at four different measuring points was considered acceptable. The final step was polishing with the EXAKT horizontal microgrinding system using 2400- and 4000-grit Al₂O₃ sandpaper, with each incremental step documented by digital photographic records with a digital single-lens reflex camera (Canon EOS 6D Mark II; Canon Deutschland GmbH, Krefeld, Germany) and a macro lens (Canon Macro Lens EF 100 mm; Canon Deutschland GmbH, Krefeld, Germany) to illustrate the maximum extent of the lesion in the mesiodistal direction. The removal of tooth material between grinding steps was determined by measuring the thickness with a micrometer screw (EXAKT Apparatebau GmbH & Co. KG, Norderstedt, Germany).
Lesion Classification of the Histological Samples
All histological specimens, with the maximum extent of the carious lesion in the mesiodistal direction, were digitally photographed and displayed on a diagnostic monitor (Nio Color 2 MP LED; Barco, Kortrijk, Belgium) with no time limit (Figure 7). A review was performed twice at three-month intervals by an expert with extensive professional and scientific experience, following the common radiographic classification scheme (Table 2).
Table 2. Radiographic caries classification scheme.
E1: Caries limited to the outer half of the enamel
E2: Caries extending to the inner half of the enamel
D1: Caries in the outer third of dentin
D2: Caries in the middle third of dentin
D3: Caries in the dentinal third close to the pulp or up to the pulp
The characteristics of the histological analysis are summarized in Table 3.
Radiographic Caries Diagnostic by Dental Examiners
To benchmark dental examiners when analyzing in vitro bitewing images, 10 clinicians, 10 private practitioners and 10 students were asked to evaluate these radiographs.
Clinicians were defined as dentists providing care in a hospital setting, whereas private practitioners were defined as dentists working independently outside an institutional setting, usually in their own private practice. As a baseline, all participants were informed that all teeth would be examined for the presence or absence of proximal caries. Each participant evaluated a random selection of 35 to 36 bitewing images on a dental diagnostic monitor (Nio Color 2 MP LED; Barco, Kortrijk, Belgium) without a time limit. All examiners were categorized according to gender, occupation and professional experience to assess the respective influence on the quality of caries findings in bitewing radiographs.
Statistical Analysis and Performance Metrics
Statistical analyses were performed using R (version 4.3.2). The quality of carious lesion classification was determined by assessing intrarater reliability using the intraclass correlation coefficient (ICC). The performance of the combined examiners was assessed using several metrics, including sensitivity, specificity, accuracy, positive and negative predictive values (PPV/NPV), area under the curve (AUC), F1 score and Matthews correlation coefficient (MCC).
The F1 score, a harmonic mean of precision and sensitivity, is a commonly used metric for binary classifier evaluation and ranges from 0 to 1, with higher values indicating superior classifier performance. It is defined as $F_1 = \frac{2 \times (\mathrm{PPV} \times \mathrm{sensitivity})}{\mathrm{PPV} + \mathrm{sensitivity}}$. The Matthews correlation coefficient (MCC) is another key parameter for evaluating predictions against actual values and provides a reliable assessment of performance. The MCC is defined as $\mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$. An MCC value of 1 indicates a perfect prediction, while −1 indicates complete disagreement between prediction and observation, and 0 indicates a random prediction. By including true negatives, false positives, false negatives and true positives, the MCC provides a comprehensive assessment of the predictive accuracy of the system or examiner under investigation. De Long's test was used to compare the receiver operating characteristic (ROC) curves of histology and examiners. In addition, MCC scores were tested for differences in correlation using Bonferroni correction to compare performance across varying eccentricities of the central X-ray beam, the different carious lesion depths, gender, occupation and experience. The ability of the examiners to discriminate between artificially induced defects and true caries was investigated by comparing correct and incorrect predictions.
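For reference, all of the metrics listed above can be computed directly from a 2×2 confusion matrix; the helper below is a plain-numpy sketch with illustrative names.

```python
import numpy as np

def binary_metrics(tp, tn, fp, fn):
    """Compute the study's performance metrics from a 2x2 confusion matrix."""
    sens = tp / (tp + fn)                  # sensitivity (recall)
    spec = tn / (tn + fp)                  # specificity
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy
    f1 = 2 * ppv * sens / (ppv + sens)     # harmonic mean of PPV and sensitivity
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float(tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )                                      # Matthews correlation coefficient
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                accuracy=acc, f1=f1, mcc=mcc)
```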
Sample Size Planning
Our sample size planning was based on the number of bitewing radiographs required for accurate and reliable AI-assisted caries detection. We reviewed recent studies in this area and found that the number of bitewing radiographs used ranged from 45 to 252, with an average of 114 [9,[36][37][38][39][40]. Due to the wide variation in the number of bitewing radiographs used in the literature, we used significantly more radiographs for testing in our study, with a total of 371 bitewing radiographs of 53 carious teeth. It can, therefore, be concluded that our sample size provides a robust dataset for evaluation.
Examiner Characteristics
The metrics for all examiners are shown in Table 4. Private practitioners, clinicians and students were equally represented, with ten examiners each. The private practitioners were almost equally divided between six examiners with less than five years' experience and four examiners with five or more years' experience. However, there was some imbalance between the two groups, with four male and six female private practitioners. The ten clinicians were evenly split between those with less than five years' experience and those with five or more years' experience, as was the gender split, with five males and five females. There were eight female students compared to two males. Eight of the eleven examiners with less than five years' experience were male, followed by three female examiners in this group. In the group of nine examiners with five or more years' experience, there were three male and six female examiners. Of the thirty examiners, thirteen were male and seventeen were female.
Reliability of Histological Lesion Classification
Intrarater reliability was very high throughout both assessment rounds (ICC: 0.993; 95%-CI [0.990; 0.995]). In two cases where the expert's categorization of carious lesions differed between the two rounds of examination, a second expert was consulted to determine the final lesion class.
Examiners Performance Metrics
All examiners reached a combined accuracy of 0.799, a sensitivity of 0.565, a specificity of 0.956, a PPV of 0.896, an NPV of 0.765, an AUC of 76.1, an F1 score of 0.693 and an MCC of 0.578 (Table 5). Note. AUC = area under the curve, MCC = Matthews correlation coefficient, NPV = negative predictive value, PPV = positive predictive value.
AUC
All examiners achieved a combined AUC of 76.1, whereas histology, serving as the gold standard method in caries diagnostic research, was assigned an AUC of 100 (Figure 8). Statistical analysis using De Long's test to compare the two ROC curves revealed a significantly higher performance for histology compared to the examiners' assessments (p < 0.001).
MCC by Lesion Class
The MCC showed variation according to the penetration depth of the carious lesions, with the best performance observed for D3 lesions (0.814), whereas E2 lesions showed the least favorable result (0.236) (Figure 9). The aggregated MCC for all lesion categories was 0.587. Testing for differences in MCC between the caries classifications revealed significant differences between all lesion classes (p < 0.008) except between E1 and E2 lesions (p = 1) (Table 6).
Gender Specific MCC
The MCC of male examiners was higher, at 0.605, compared to the MCC of female examiners at 0.575 (Figure 10). However, testing for differences in MCC showed no significant effect of gender (p = 0.44).
MCC by Occupation
Private practitioners had the highest MCC (0.595), followed by students (0.593) and clinical practitioners (0.571) (Figure 11). Testing for differences in MCC showed no significant differences between the occupations (p ≥ 0.556).
MCC by Experience
Dentists with less than 5 years of experience showed the best MCC (0.611), followed by students (0.593) and dentists with 5 or more years of experience (0.551) (Figure 12). No significant differences were found by testing for differences in MCC according to experience (p = 1).
Influence of Eccentricity on MCC
Different eccentricity angles resulted in different MCC values (Figure 13). No statistically significant difference between the groups could be found (p ≥ 0.411).
Differentiation between Carious Lesions and Artificially Induced Lesions
Out of a total of 350 artificial defects presented, 159 defects (45.4%) were identified as carious lesions by all examiners and 191 defects (54.6%) were identified as atypical for caries (Figure 14).
Tooth Classification
The results indicate that the examiners correctly positioned 99.8% of the teeth depicted in the bitewing simulations according to the World Dental Federation (FDI) tooth numbering system (Figure 15).
Discussion
The European Medical Device Regulation (MDR) classifies medical imaging software as a medical device and, therefore, imposes several requirements on manufacturers to ensure safety and quality. Among other things, manufacturers are required to conduct a comprehensive clinical evaluation of their medical devices. As AI-based imaging software for caries diagnosis has been approved as a medical device, its underpinnings deserve scrutiny. The aim of this study was, therefore, to create a pool of histology-based radiographs to provide a scientifically sound testbed for such software. We are currently unaware of the existence of such a dataset.
In the context of fuzzy gold standards, several mitigation strategies have been proposed. One approach aims to supplement existing datasets with additional data from external sources [41]. By incorporating different perspectives, especially in cases where the gold standard may be imperfect, this strategy aims to improve the robustness of AI models and mitigate bias. The use of multiple diagnostic tests is also encouraged, as this can increase the transparency and reliability of diagnostic results [41]. Despite these efforts, the almost complete elimination of bias in AI-based dental caries diagnostics will, at least for an extended period, remain an elusive goal.
In general, in vitro studies provide a robust method for validating new caries diagnostic methods because they can refer to a reliable gold standard by means of histological analysis. The literature also states that histological examination shall serve as the basis for a gold standard for the evaluation of new caries diagnostic methods [42]. Therefore, the ideal, albeit theoretical, method for evaluating diagnostic accuracy would be to first assess the diagnoses in vivo and then re-examine the same surfaces in vitro after tooth extraction using the histological gold standard [43]. However, logistical constraints and ethical considerations associated with invasive procedures, particularly the need for extraction, make this approach infeasible. Furthermore, it has been argued that differences between in vivo and in vitro results may cast doubt on the generalizability of in vitro data [43]. Nevertheless, previous studies have confirmed that no significant difference in the diagnostic accuracy of proximal carious lesions on digital radiographs can be demonstrated between in vivo and in vitro settings [44,45].
To further ensure the applicability of our results to the clinical situation, we attempted to create clinical simulations of the orofacial region on bitewing radiographs that are as realistic as possible. Nevertheless, given the complexity of the human body, accurate reproduction of anatomical structures remains difficult. To account for potential uncertainties, only findings within the coronal region were considered. This approach was intended to reduce possible distortions caused by the setup, particularly the fixation material. A limitation concerns the in vitro radiographs, which did not consider external factors that could have influenced the accuracy of the radiographic diagnosis, such as the influence of metal artefacts, patient movement or incorrect positioning of the film holder. For reasons of standardization, all bitewing radiographs were taken on a single X-ray unit to account for unintended variations.
For the purpose of disinfection and protection against dehydration, all extracted teeth were immersed in 1% tosylchloramide. Previous studies have shown that tosylchloramide has no discernible effect on tooth hard tissue [46][47][48][49]. A possible influence of tosylchloramide storage on the infiltration behavior of Technovit cannot be completely excluded; however, it seems unlikely in view of the high success rate of the histological preparations. All teeth were obtained from a variety of sources, including dental, oral and maxillofacial surgery practices and clinics. This diverse selection supports the assumption of a representative assortment of teeth across different population groups.
As already mentioned, histological examination serves as the most widely used gold standard for the validation of new caries diagnostic methods [42]. Its substantial diagnostic quality and value have been highlighted in many publications [50,51]. A major criticism of histological examinations is the frequent bisection of teeth through an arbitrary centerline [52]. This carries the risk of irreversibly destroying the presumed maximum extent of the carious lesion, thereby obscuring the true maximum depth. To overcome this, the incision was positioned anterior to the carious lesion, and the wet grinding technique was used to approach the maximum extent of the lesion. This approach ensured that the deepest carious extension was accurately identified with a high degree of confidence. The use of final multi-stage polishing ensured a consistent surface quality for subsequent expert analysis.
In our study, all 30 examiners showed a combined accuracy, sensitivity, specificity and AUC of 0.799, 0.565, 0.956 and 76.1, respectively, for the detection of carious lesions on bitewing radiographs. The literature shows a wide range of results. Kay and Knill-Jones observed a dentist sensitivity of 0.26 for the detection of dentin caries on in vitro bitewing radiographs [53]. Devlin et al. showed a sensitivity of 44% for enamel-limited lesions on bitewing radiographs among 23 examiners [54]. Mileman and van der Welle reported an AUC of 0.88 with a sensitivity of 0.54 and a specificity of 0.97 for dentin caries on bitewing radiographs. Similarly, Peers et al. demonstrated a comparable sensitivity of 0.59 for the detection of dentin caries on bitewing radiographs [55]. It can, therefore, be assumed that the results of our study are consistent with the literature, as we could also demonstrate that carious lesion depth had a significant effect on the MCC of all examiners between all lesion classes, except between the enamel-limited E1 and E2 lesions. We support the assumption that in vitro radiographs provide diagnostic quality parameters similar to studies using in vivo radiographs.
Our results also showed that, contrary to expectations, the eccentricity of the central X-ray beam up to 8°, whether mesial or distal, did not appear to have a significant effect on the examiners' judgement of the presence or absence of caries. The lack of a significant impact from minor eccentricities suggests that clinicians may not need to be overly concerned about small variations in radiographic positioning when assessing for caries. Similar to our results, the study by Deprá et al. investigated the influence of the central beam angle on the diagnosis of secondary caries and also concluded that it had no influence [56]. On the other hand, Chadwick et al. investigated the influence of different central irradiation angles on the visualization of proximal cavities in bitewing radiographs and found that lesions are typically diagnosed, often resulting in overtreatment [57]. However, as both comparative studies do not provide information on the size of the eccentricity examined, we are, to the best of our knowledge, the first study to provide results with tangible values.
In the present study, no significant effect of examiner experience could be demonstrated. The results thus contradict the findings of Geibel et al., which showed that experienced examiners detect proximal lesions up to four times more frequently than less experienced examiners [58]. A plausible explanation for this difference could be that dental students and practicing dentists with less than five years of professional experience in our study took more time to analyze the in vitro bitewing images than their colleagues with five or more years of clinical experience, as the time factor has been demonstrated to significantly influence diagnostic accuracy [31].
It was found that just over half (54.6%) of the artificial lesions were judged by the examiners to be atypical for caries, effectively distinguishing them from true carious lesions. This observation highlights the ability of human examiners to differentiate iatrogenic defects, such as those resulting from invasive treatment of adjacent teeth, from true caries, primarily through assessment of lesion morphology. To the best of our knowledge, this study represents the first attempt to establish a framework for evaluating AI algorithms in this regard and to compare their performance with human judgement.
The empirical evaluation of binary classification tasks, such as the distinction between caries and healthy tooth structure, is a subject of discussion. It must be noted that accuracy, as a metric, comes with the significant limitation of sensitivity to unbalanced datasets, potentially limiting the validity of the results. As the Fifth German Oral Health Study has confirmed, caries prevalence is decreasing in all age groups, increasing the imbalance between carious and non-carious teeth on radiographs. Therefore, the suitability of accuracy for determining diagnostic quality must be strongly questioned [59]. Furthermore, Dinga et al. recommend completely omitting accuracy as the sole criterion for evaluating clinical models, as it fails to take into account clinically relevant information [60]. Nevertheless, accuracy is still somewhat stubbornly used as the main parameter for performance evaluation in the literature. For the sake of comparability, we have included this metric, but explicitly point out its shortcomings. Positive predictive value (PPV), sensitivity, specificity and the F1 score, which is the harmonic mean of precision and recall, are commonly used parameters to evaluate binary classifiers [61]. However, these metrics assume that the "positive" class (in this case a detection of caries) is of primary interest, while true negatives are omitted from their calculation. Consequently, PPV, sensitivity and F1 scores are unaffected by variations in the number of true negatives, whether their value is extremely high or low. To overcome this limitation, we made use of the Matthews correlation coefficient (MCC). The MCC gives high values only when the predictions of all categories (true positives, true negatives, false positives and false negatives) show good performance, also taking into account the proportions of the positive and negative classes. As a result, the MCC is a statistically robust measure, even in the presence of unbalanced datasets.
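A small worked example makes the point: with hypothetical counts at 5% caries prevalence, a classifier that never calls caries scores high accuracy yet an MCC of zero (the MCC is conventionally set to 0 when its denominator vanishes).

```python
# Why accuracy misleads on unbalanced data: a classifier that calls every
# surface "sound" on a test set with 5% caries prevalence looks excellent
# by accuracy yet carries no information.
tp, fn = 0, 50    # all 50 carious surfaces missed
tn, fp = 950, 0   # all 950 sound surfaces trivially correct
accuracy = (tp + tn) / (tp + tn + fp + fn)  # = 0.95
# MCC numerator tp*tn - fp*fn = 0; with a vanishing denominator the MCC is
# conventionally defined as 0, i.e., a random-level prediction.
mcc = 0.0
print(accuracy, mcc)
```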
Conclusions
The aim of this study was to establish a histology-based gold standard for the unbiased evaluation of AI-based caries detection systems on proximal surfaces in bitewing radiographs. Through meticulous in vitro simulations and histological analyses, we created a robust dataset to evaluate the performance of AI algorithms in caries detection and compare it to human judgement. Although AI promises to improve diagnostic accuracy and workflow efficiency, its effectiveness depends primarily on the quality of the training data and validation processes. Future research should be designed to accurately reflect the true performance of AI models using histological analysis as a benchmark. In doing so, we have laid the foundation for evaluating the real-world performance of AI systems, thereby advancing evidence-based dentistry. Ongoing advances in AI technology and regulatory frameworks require continuous refinement and validation of diagnostic tools to ensure patient safety and clinical effectiveness. The creation of a standardized database of reference histological specimens and associated radiographs could serve as a benchmark for the development and validation of new AI-based caries detection systems. Such a database would allow different AI systems to be compared and their performance tested against an established gold standard, helping to identify and develop the most accurate models. However, generating a histology-based dataset is time-consuming and requires resources and equipment. Therefore, a simple histology-based implementation will not be readily available in the near future. In addition, it remains to be seen whether newer intraoral caries detection techniques will provide higher sensitivity, which could serve as a solid basis for training dental AI systems. In conclusion, our study is an important step towards the creation of standardized evaluation protocols for AI-based caries detection, thereby promoting transparency, reliability and confidence in dental diagnostics.
Figure 2. Photographic and radiological documentation of all teeth.
Figure 4. Digital in vitro bitewing images. Top: color-coded setup (yellow: examination tooth; red: carious lesion; blue: adjacent tooth; green: antagonistic tooth). Below: The mesial-eccentric series shows increased superimposition as the ray path becomes increasingly eccentric in the proximal region of teeth 46 and 47. Conversely, the distal-eccentric series shows increased superimposition as the ray path becomes increasingly eccentric in the interproximal region of teeth 15 and 16.
The (carious) teeth were then bonded (Technovit 7230 VLC; Kulzer GmbH & Co. KG, Wehrheim, Germany), vestibular side down, to an embedding form (Kulzer GmbH & Co. KG, Wehrheim, Germany) using a disposable spatula and cured with UV light for 10 min in a precision vacuum bonding press (EXAKT Apparatebau GmbH & Co. KG, Norderstedt, Germany). The forms were filled with embedding resin (Technovit 7200 VLC; Kulzer GmbH & Co. KG, Wehrheim, Germany) using a disposable pipette. Pre-polymerization was performed in an EXAKT-HISTOLUX light polymerization unit (EXAKT Apparatebau GmbH & Co. KG, Norderstedt, Germany) with two UV lamps for two hours, followed by the actual polymerization with eight UV lamps for a further eight hours. The polymerized blocks were fixed to Plexiglas slides (Walter-Messner GmbH, Oststeinbek, Germany) using mixed Technovit 4000 (Kulzer GmbH & Co. KG, Wehrheim, Germany) and cured with UV light for 10 min in the precision vacuum bonding press (EXAKT Apparatebau GmbH & Co. KG, Norderstedt, Germany). Before further processing, the samples were dried in an incubator (Thermo Heraeus B6060; Heraeus Holding GmbH, Hanau, Germany) for 24 h at 37 °C.
Figure 7. Histological specimen with different proximal carious lesion depths. E0 = caries-free, E1 = caries limited to the outer half of the enamel, E2 = caries extending to the inner half of the enamel, D1 = caries in the outer third of dentin, D2 = caries in the middle third of dentin, D3 = caries in the dentinal third close to the pulp or up to the pulp.
Figure 15. Tooth classification according to the FDI scheme.
Table 1. Schematic overview of tooth dehydration and resin infiltration.
Table 3. Number of histologically confirmed carious lesions and their categorization according to the caries classification scheme.
Table 5. Combined examiners' performance metrics for caries detection.
Table 6. Adjusted p-values for MCC comparison between lesion classes. | 9,752.4 | 2024-06-29T00:00:00.000 | ["Medicine", "Computer Science"] |
Tunable multiwavelength SOA fiber laser with ultra-narrow wavelength spacing based on nonlinear polarization rotation
A tunable multiwavelength fiber laser with ultra-narrow wavelength spacing and a large wavelength number using a semiconductor optical amplifier (SOA) has been demonstrated. Intensity-dependent transmission induced by nonlinear polarization rotation in the SOA accounts for stable multiwavelength operation with wavelength spacing less than the homogeneous broadening linewidth of the SOA. Stable multiwavelength lasing with wavelength spacing as small as 0.08 nm and a wavelength number up to 126 is achieved at room temperature. Moreover, wavelength tuning of 20.2 nm is implemented via polarization tuning. ©2009 Optical Society of America. OCIS codes: (250.5980) Semiconductor optical amplifiers; (140.3510) Lasers, fiber; (060.2320) Fiber optics amplifiers and oscillators.
References and links
1. A. Bellemare, M. Karasek, M. Rochette, S. LaRochelle, and M. Tetu, "Room temperature multifrequency erbium-doped fiber lasers anchored on the ITU frequency grid," J. Lightwave Technol. 18(6), 825-831 (2000).
2. L. Talaverano, S. Abad, S. Jarabo, and M. López-Amo, "Multiwavelength fiber laser sources with Bragg-grating sensor multiplexing capability," J. Lightwave Technol. 19(4), 553-558 (2001).
3. Z. G. Lu, F. G. Sun, G. Z. Xiao, and C. P. Grover, "A tunable multiwavelength fiber ring laser for measuring polarization-mode dispersion in optical fibers," IEEE Photon. Technol. Lett. 16(5), 1280-1282 (2004).
4. L. R. Chen and V. Page, "Tunable photonic microwave filter using semiconductor fibre laser," Electron. Lett. 41(21), 1183-1184 (2005).
5. N. Pleros, C. Bintjas, M. Kalyvas, G. Theophilopoulos, K. Yiannopoulos, S. Sygletos, and H. Avramopoulos, "Multiwavelength and power equalized SOA laser sources," IEEE Photon. Technol. Lett. 14(5), 693-695 (2002).
6. V. Baby, L. R. Chen, S. Doucet, and S. LaRochelle, "Continuous-wave operation of semiconductor optical amplifier-based multiwavelength tunable fiber lasers with 25-GHz spacing," IEEE J. Sel. Top. Quantum Electron. 13(3), 764-769 (2007).
7. L. Xia, P. Shum, Y. X. Wang, and T. H. Cheng, "Stable triple-wavelength fiber ring laser with ultranarrow wavelength spacing using a triple-transmission-band fiber Bragg grating filter," IEEE Photon. Technol. Lett. 18(20), 2162-2164 (2006).
8. D. S. Moon, B. H. Kim, A. Lin, G. Sun, W. T. Han, Y. G. Han, and Y. Chung, "Tunable multi-wavelength SOA fiber laser based on a Sagnac loop mirror using an elliptical core side-hole fiber," Opt. Express 15(13), 8371-8376 (2007).
9. Y. W. Lee, J. Jung, and B. Lee, "Multiwavelength-switchable SOA-fiber ring laser based on polarization-maintaining fiber loop mirror and polarization beam splitter," IEEE Photon. Technol. Lett. 16(1), 54-56 (2004).
10. B. A. Yu, J. Kwon, S. Chung, S. W. Seo, and B. Lee, "Multiwavelength-switchable SOA-fibre ring laser using sampled Hi-Bi fibre grating," Electron. Lett. 39(8), 649-650 (2003).
11. F. W. Tong, W. Jin, D. N. Wang, and P. K. A. Wai, "Multiwavelength fibre laser with wavelength selectable from 1590 to 1645 nm," Electron. Lett. 40(10), 594-595 (2004).
12. N. Calabretta, Y. Liu, F. M. Huijskens, M. T. Hill, H. de Waardt, G. D. Khoe, and H. J. S. Dorren, "Optical signal processing based on self-induced polarization rotation in a semiconductor optical amplifier," J. Lightwave Technol. 22(2), 372-381 (2004).
13. X. Yang, Z. Li, E. Tangdiongga, D. Lenstra, G. D. Khoe, and H. J. S. Dorren, "Sub-picosecond pulse generation employing an SOA-based nonlinear polarization switch in a ring cavity," Opt. Express 12(11), 2448-2453 (2004).
14. Z. X. Zhang, L. Zhan, K. Xu, J. Wu, Y. X. Xia, and J. T. Lin, "Multiwavelength fiber laser with fine adjustment, based on nonlinear polarization rotation and birefringence fiber filter," Opt. Lett. 33(4), 324-326 (2008).
15. B. A. Yu, D. H. Kim, and B. Lee, "Multiwavelength pulse generation in semiconductor-fiber ring laser using a sampled fiber grating," Opt. Commun. 200(1-6), 343-347 (2001).
Introduction
Multiwavelength fiber lasers have been extensively investigated for their potential applications in wavelength-division-multiplexing (WDM) communication systems [1], optical fiber sensors [2], optical instrument testing [3], and microwave photonics [4]. The erbium-doped fiber (EDF) is an excellent candidate for the gain medium of such lasers, as the EDF can provide large gain, high saturation power and a low polarization-dependent gain spectrum. However, due to the homogeneous line broadening of the EDF at room temperature, fiber lasers based on EDF often suffer from strong mode competition and unstable multiwavelength lasing at room temperature. A range of approaches have been put forward to solve this problem, such as cooling the EDF to liquid-nitrogen temperature, utilizing the four-wave mixing effect or an inhomogeneous loss mechanism in highly-nonlinear fiber, adding a frequency shifter or phase modulator, and using the nonlinear gain of cascaded stimulated Brillouin scattering or stimulated Raman scattering. But all of these inevitably add excess complexity and cost to the lasers. In contrast, the semiconductor optical amplifier (SOA) is predominantly inhomogeneously broadened and can support simultaneous oscillation of many lasing wavelengths. Semiconductor multiwavelength fiber lasers with different wavelength numbers and wavelength spacings have been previously reported. Simultaneous oscillation of 52 lines spaced at 50 GHz was achieved by Pleros et al. from a ring cavity including two SOAs and single-pass feedback [5]. Baby et al. presented wavelength-tunable lasing of 41 wavelengths with 25 GHz (200 pm) spacing [6]. In these two cases, the wavelength spacing is of the same order of magnitude as the SOA homogeneous broadening linewidth. Xia et al. showed lasing with an ultra-narrow wavelength spacing of 50 pm [7], but the operation was limited to simultaneous lasing of only three fixed wavelengths. On the other hand, to obtain tunability, various experiments have incorporated a Sagnac loop filter, a fiber Lyot filter, or a sampled Hi-Bi fiber grating as a wavelength-selective comb filter [8][9][10]. However, in all these experiments the tunable range is relatively small. The distinctive characteristics of the SOA, like nonlinear gain compression, can also be used to induce tunability of the lasing wavelength: the lasing wavelength can be controlled by adjusting the feedback optical power into the SOA with a variable optical attenuator (VOA). But this method has a serious impact on the stability of the output power [11].
In this letter, a tunable multiwavelength SOA fiber laser with ultra-narrow wavelength spacing is proposed and demonstrated. Multiwavelength selection is performed using a Sagnac loop mirror filter consisting of one section of polarization-maintaining fiber. Mode competition within the homogeneous broadening linewidth is suppressed by the intensity-dependent transmission induced by nonlinear polarization rotation in the SOA. Stable multiwavelength lasing with up to 126 wavelengths and a wavelength spacing as small as 0.08 nm is obtained at room temperature. The effect of the SOA driving current on the performance of the multiwavelength laser is also investigated experimentally.
Experiment setup and operation principle
The schematic of the experimental setup is shown in Fig. 1. The gain of the fiber laser is provided by a semiconductor optical amplifier (SOA), model SOA-NL-OEC-1550 produced by CIP. The SOA has a 31.4 dB small-signal gain at a wavelength of 1550 nm, and the polarization-dependent saturated gain (PDG) between the transverse electric (TE) and transverse magnetic (TM) components of the SOA is around 0.5 dB when the SOA is biased at 200 mA and thermally stabilized at 20 °C. The fiber Sagnac loop filter is formed by a 3 dB fiber coupler, a segment of polarization-maintaining fiber (PMF) with a birefringence of 3.8 × 10⁻⁴, and a polarization controller (PC1) within the fiber loop. The birefringence of the PMF generates a wavelength-dependent phase difference between the fast and slow components of the light propagating in the fiber loop. By adjusting PC1 in the loop to generate a 90° rotation between the polarization states of the two counter-propagating lights in the cavity, the two counter-propagating lights travel along different axes of the PMF and accumulate a phase difference. They interfere at the 3 dB coupler and generate a periodic comb-like spectrum. The filter peak spacing is given as Δλ = λ²/(Δn·L), where Δn and L are respectively the birefringence and length of the PMF. A polarization-dependent isolator (PDI) is used both to ensure the unidirectional cavity and to act as a polarizer. Polarization controllers (PC2 and PC3) are located before the SOA and the PDI, respectively. The laser output is extracted from the cavity by a 10/90 fiber coupler, with which 90% of the power is fed back into the laser. The part surrounded by dashed lines is a general configuration of nonlinear polarization rotation in an SOA, which has been used for optical signal processing [12] and passive mode-locking [13]. The output is measured by an optical spectrum analyzer (ANDO AQ6317, resolution 0.01 nm).

The mechanism of multiwavelength generation based on nonlinear polarization rotation in an SOA is described as follows. PC2 is used to adjust the polarization of the input signal with respect to the SOA layers. The arbitrarily polarized electric field entering the SOA can be decomposed into the transverse electric (TE) and transverse magnetic (TM) modes. The modes propagate independently through the SOA, but they interact indirectly via the carriers. Suppose the optical intensity is high enough to saturate the amplifier. The gain saturation of the TE mode then differs from that of the TM mode, and hence the refractive index change of the TE mode also differs from that of the TM mode. A phase difference between the two modes builds up as the light propagates through the SOA. At the PDI, both modes recombine. PC3 is used to adjust the polarization of the SOA output with respect to the orientation of the PDI. The phase difference and the orientation of the two PCs determine the intensity-dependent switching of the combination (PC2 + SOA + PC3 + PDI). If the polarizations of the two PCs are set appropriately, the transmission of the combination will decrease with increasing light intensity, which can be utilized to suppress mode competition for multiwavelength generation [14]. In the experiment, the driving current of the SOA is first biased at 350 mA, and the length of the PMF used is 79 m. The multiwavelength output is readily obtained just by adjusting the PCs.
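As a quick cross-check of the quoted numbers, the comb-spacing formula can be evaluated directly; the short Python sketch below (not part of the original analysis) uses the PMF values quoted in the text, with the center wavelength of the lasing band read off the reported range:

```python
# Cross-check of the Sagnac comb spacing Delta_lambda = lambda^2 / (Delta_n * L),
# using values quoted in the text; the center wavelength (~1610 nm) is taken
# from the reported lasing range 1604.67-1614.75 nm.
dn = 3.8e-4        # PMF birefringence
L = 79.0           # PMF length (m)
lam = 1610e-9      # approximate lasing wavelength (m)

spacing = lam**2 / (dn * L)                       # filter peak spacing (m)
print(f"comb spacing = {spacing * 1e9:.3f} nm")   # ~0.086 nm, consistent with 0.08 nm

# expected number of lines in the reported 10.08 nm (5 dB) bandwidth
print(f"lines in band ~ {10.08e-9 / spacing:.0f}")  # ~117, same order as the observed 126
```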
Figure 3(a) is the typical multiwavelength output spectrum. The wavelength spacing is 0.08 nm, which is in agreement with the calculated value. The number of lasing wavelengths is 126 with a 5 dB bandwidth of 10.08 nm, ranging from 1604.67 nm to 1614.75 nm. The zoom-in view of the part surrounded by dashed lines, from 1605 nm to 1608 nm, is shown in Fig. 3(b). In order to validate the mechanism of the multiwavelength generation, we replaced the PDI with a polarization-insensitive isolator. We found that stable multiwavelength operation cannot be obtained however the polarization is adjusted. The output spectrum of the unstable multiwavelength operation is shown in Fig. 4. This is reasonable because the wavelength spacing of the comb filter is much narrower than the homogeneous broadening linewidth of the SOA, which is deduced to be about 0.6 nm using the technique reported in Refs. [10,15]. So the SOA alone cannot support multiwavelength generation with a wavelength spacing as small as 0.08 nm. Note that the self-induced polarization rotation in the SOA has been utilized for mode-locking fiber lasers and for optical signal processing, with a nonlinear phase shift created in the SOA and the polarization discriminated by the polarizer [12,13]. In those cases, the output from the combination of an SOA and a polarizer increases with the incoming light intensity. Conversely, if the polarizations of the two PCs are set properly, the transmission of the SOA-based nonlinear polarization switch decreases with increasing light intensity, which can be employed to suppress mode competition for multiwavelength generation.

Then, to demonstrate the tunability of the multiwavelength comb, we adjust the polarization controllers in the main laser cavity to modify the polarization-dependent cavity characteristic. Figure 5 shows the multiwavelength spectra with a wavelength spacing of 0.08 nm under four different polarization states. In Fig. 5(a), the multiwavelength spectrum ranges from 1614.25 nm to 1624.01 nm with a 5 dB bandwidth of 9.76 nm and a wavelength number of 124. In Fig. 5(d), the multiwavelength spectrum ranges from 1594.02 nm to 1603.46 nm with a bandwidth of 9.44 nm and a wavelength number of 118. Therefore, 20.2 nm of tuning has been implemented without distinct variation of the wavelength number. The tunability can be attributed to the polarization dependence of the cavity characteristics.

Additionally, we have checked the effect of the SOA driving current on the multiwavelength generation. After the multiwavelength comb with a wavelength spacing of 0.08 nm is generated at a current of 350 mA (see Fig. 3(a)), the current is decreased step by step with the polarization kept fixed. Figure 6 presents the results under currents of 300, 250, 200, and 150 mA. For clarity, the spectra under 300, 250, and 200 mA are respectively offset upward by 8, 5, and 2 dB, while their horizontal coordinates are kept unchanged. With decreasing current, the multiwavelength combs become narrower and narrower. The bandwidth of the output spectrum under a 150 mA driving current is only 2.3 nm.
Conclusion
In conclusion, we have demonstrated a tunable multiwavelength SOA fiber laser with ultra-narrow wavelength spacing and a large wavelength number. Multiwavelength generation is the result of the intensity-dependent transmission induced by nonlinear polarization rotation in the SOA. Wavelength tuning is realized by polarization-tuning the cavity characteristic. Stable multiwavelength lasing with up to 126 wavelengths and a wavelength spacing as small as 0.08 nm is achieved at room temperature. The effect of the driving current on the performance of the multiwavelength laser has also been investigated experimentally.
Fig. 1. Experimental setup of our proposed multiwavelength SOA fiber laser. The part surrounded by dashed lines is a general configuration of nonlinear polarization rotation based on an SOA.
Figure 2. Amplified spontaneous emission (ASE) spectrum of the SOA at a 200 mA driving current; the peak wavelength is 1563.6 nm.
Fig. 3. (a) Multiwavelength output spectrum with a wavelength spacing of 0.08 nm. (b) Zoom-in of the part surrounded by dashed lines in (a).
Fig. 4. Output spectrum from the SOA fiber laser with the 0.08 nm Sagnac loop filter when the PDI is replaced by a polarization-insensitive isolator.
Fig. 5. Multiwavelength spectra with a wavelength spacing of 0.08 nm under four different polarization states. | 3,322.2 | 2009-09-14T00:00:00.000 | ["Engineering", "Physics"] |
New Solutions of Tolman-Oppenheimer-Volkov-Equation and of Kerr Spacetime with Matter and the Corresponding Star Models
The Tolman-Oppenheimer-Volkov (TOV) equation is solved with a new ansatz: the external boundary condition with mass M_0 and radius R_1 is dual to the internal boundary condition with density ρ_bc and inner radius r_i, and the two boundary conditions yield the same result. The inner boundary condition is imposed with a density ρ_bc and an inner radius r_i, which is zero for compact neutron stars but non-zero for the shell-stars: the stellar shell-star and the galactic (supermassive) shell-star. Parametric solutions are calculated for neutron stars, stellar shell-stars, and galactic shell-stars. From the results, an M-R-relation and mass limits for these star models can be extracted. A new method is found for solving the Einstein equations for Kerr space-time with matter (extended Kerr space-time), i.e. a rotating matter distribution in its own gravitational field. Numerical solutions are then calculated for several astrophysical models: white dwarf, neutron star, stellar shell-star, and galactic shell-star. The results are that shell-star models closely resemble the behaviour of abstract black holes, including the Bekenstein-Hawking entropy, but have finite redshifts, escape velocity v < c, and no singularity.
Introduction
In General Relativity, one of the most important applications is to calculate the mass distribution and the space-time metric for a given equation-of-state of a stellar model. The governing relation is the Tolman-Oppenheimer-Volkov (TOV) equation, a system of coupled differential equations for M(r) and ρ(r), which can be transformed into one ordinary differential equation of degree 2 for M(r) by eliminating ρ(r). The boundary condition is normally imposed at r = 0 with M(0) = 0 and ρ(0) = ρ_0, where ρ_0 is the maximal density. The TOV-equation for M(r) is then solved with this boundary condition at r = 0 for M(r) and M'(r), which gives the total mass M_0(ρ_0) and the total radius R(ρ_0), and hence a mass-radius relation M_0(R).
The predominant view of neutron stars and stellar black holes is that neutron stars obey an equation-of-state (eos) of an interacting-fluid model [2], which yields solutions of the TOV equation up to about M = 3 M_sun. For larger masses, it is assumed that only a black-hole solution remains. This is based on the so-called Oppenheimer limit for the radius of a compact mass. The 2 parameters R and M_0 in the dual outer boundary condition correspond uniquely to the 2 parameters r_i and ρ_0 in the inner boundary condition.
With rotation, one has an axisymmetric model in the variables r and θ (azimuthal angle), and has to solve the Einstein equations in these 2 coordinates. Here R_1x and R_1y are the equatorial and the polar radius. As in the TOV-case, the 3 parameters R_1y, M_0 and ΔR_1 here correspond to the 3 inner parameters r_iy, ρ_0 and Δr_i.
So here we get a 3-parametric solution manifold and, as in the spherical case, for a given total mass M_0 we have to find the stable physical solution. As before, these will be the ones with minimal r_iy and, among them, the one with minimal mean energy density: this defines the inner ellipticity Δr_i. In all considered cases, it can be shown numerically that such a (non-trivial) minimum exists.
The paper is organized as follows.
The Kerr Space-Time, Schwarzschild Space-Time, Einstein Equations
Using the Minkowski metric η_μν in the ansatz for the metric, one finds that the (apparent) singularity at r = r_s is absent.

The same is valid for the original Kerr space-time: the denominator ρ_1² has no zeros, so there is no singularity in g_ab, which makes it more well-behaved numerically.

Alternatively, one can use Boyer-Lindquist coordinates [3]. In the limit a → 0, the Schwarzschild space-time in the standard form (4) emerges.
The Einstein field equations with the above Minkowski metric are

R_μν − (1/2) R_0 g_μν + Λ g_μν = κ T_μν, with κ = 8πG/c⁴,

where R_μν is the Ricci tensor, R_0 the Ricci curvature, T_μν the energy-momentum tensor, and Λ the cosmological constant (in the following neglected, i.e. set to 0); the Christoffel symbols (second kind) and the Ricci tensor are defined in the usual way. The crucial part of the extended Kerr solution is the expression for the energy-momentum tensor T_μν. As usual, one uses the perfect-fluid form

T_μν = (ρ + P/c²) u_μ u_ν + P g_μν,

where P and ρ are the pressure and density and u_μ is the covariant velocity 4-vector.
In the Schwarzschild case, when deriving the TOV-equation, one sets the spatial contravariant velocity components to zero, u^i = 0; in the Kerr case the tangential velocity u^3 = u^φ ≠ 0.

For the velocity one has the normalization condition g_μν u^μ u^ν = −c².

If we make the obvious assumption that the star rotates as a whole, i.e. with constant angular velocity, then the moment of inertia I becomes r-dependent, like the mass M. The factor 3 in the integral, instead of the usual 4π, comes from the dimensionless calculation in "sun units" (see below).
The angular-momentum parameter a also becomes r-dependent. In the relativistic axisymmetric case with rotation at angular velocity ω, u_μ has the form given in [4]. The equation of state for the pressure P of the nucleon gas has the polytropic form P = K ρ^γ or, in dimensionless form with a critical density ρ_c, P_1 = ρ_1^γ with dimensionless pressure P_1 and density ρ_1. For the horizon, with rotation there are the inner and the outer horizon (M = M_0): r_± = M ± √(M² − a²).
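For reference, the horizon radii follow from the zeros of Δ(r) = r² − 2Mr + a² in geometric units; a minimal Python sketch (the spin values are illustrative, not taken from the paper):

```python
import math

def kerr_horizons(M, a):
    """Inner and outer Kerr horizons r_-, r_+ in geometric units (G = c = 1)."""
    if abs(a) > M:
        raise ValueError("no horizon: |a| > M (over-extremal)")
    root = math.sqrt(M**2 - a**2)
    return M - root, M + root

for a in (0.0, 0.5, 0.9, 1.0):     # illustrative spin values
    r_minus, r_plus = kerr_horizons(1.0, a)
    print(f"a = {a}: r- = {r_minus:.4f}, r+ = {r_plus:.4f}")
```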
From now on we skip the index of the dimensionless variables and use the original notation, e.g. r instead of r_1.
Furthermore, we adopt the Boyer-Lindquist coordinates and the metric tensor (12).
We impose an r-θ-analytic boundary condition for the A_i and ∂_r A_i at r = R_1 (R_1 is the star radius): A_i = 1, ∂_r A0 = 0, ∂_r A2 = 0, ∂_r A3 = 0, ∂_r A4 = 0. For A1 there is no differential boundary condition, as ∂_r A1 is the highest r-derivative; for ρ there is no boundary condition at all, because ρ enters the equations algebraically, but there is an integral condition:
The Solving Process for the Extended Kerr Space-Time
In addition to the fundamental dual parameters {r_i, ρ_i} corresponding to {R_1, M_0} in the rotation-free TOV-case, in the Kerr-case there are the new fundamental parameter Δr_i (inner ellipticity for the inner boundary condition), resp. ΔR_1 (outer ellipticity for the outer boundary condition), and the angular velocity ω. The outer radii are R_x1 = R_1 − ΔR_1 and R_y1 = R_1, the latter equality arising from the fact that centrifugal distortion acts only in the x-direction (the y-axis being the rotation axis). The inner radii are correspondingly r_xi = r_i − Δr_i and r_yi = r_i. The r-θ-slicing algorithm with an Euler step obeys the iterative procedure with slice step size h_1 in r and step size h_2 in θ, starting with the r-boundary at r = R_1 (slice n = 0).
The transition from slice n to n + 1 proceeds as follows. At slice n, all variables and first derivatives are known from the previous step; the second derivatives ∂_rr A_i and ρ are calculated from the 6 equations.
At slice n + 1, the variables and first derivatives are calculated by the Euler formula (or Runge-Kutta). The second derivatives ∂_rr A_i, ∂_rr B_i and ρ are again calculated from the 6 significant equations, with the variables and first derivatives inserted from above.
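Schematically, this slice-to-slice propagation is a plain explicit Euler update; in the Python sketch below, second_derivs stands for the per-slice algebraic solve of the discretised field equations and is a placeholder, not the paper's Mathematica code:

```python
def euler_slice_step(A, dA, h1, second_derivs):
    """Propagate variables A and first derivatives dA from slice n to n+1:
    A_{n+1} = A_n + h1*dA_n and dA_{n+1} = dA_n + h1*ddA_n, where the second
    derivatives ddA_n are solved from the discretised field equations
    (second_derivs is a placeholder for that algebraic solve)."""
    ddA = second_derivs(A, dA)
    A_next = [Ai + h1 * dAi for Ai, dAi in zip(A, dA)]
    dA_next = [dAi + h1 * ddAi for dAi, ddAi in zip(dA, ddA)]
    return A_next, dA_next
```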
The θ-slicing r-backward algorithm with an Euler step obeys the iterative procedure with slice step size h_1 in θ, as above for r, starting with θ = 0, and solves an ordinary differential equation in r in each θ-step. The boundary condition for the r-odeq is set at r = R_1(θ) (the outer ellipse radius) with A_i = 1 and M = M_0 My0(θ), where ρ_bc is the outer boundary value for the density: ρ_bc = 0 for the (non-interacting) neutron gas in a shell-star, and ρ_bc > 0, ρ_bc = ρ_equilibrium, for the (interacting) neutron fluid in a neutron star. My0(θ) is the mass-form-factor. Alternatively, ρ_bc = ρ_i is the inner boundary value for the density, where ρ_i is approximately the inner (maximum) density ρ(r_i) from the corresponding TOV-equation; the value must be adapted so that the resulting total mass is M_0. For the compact neutron star the inner radius r_i(θ) is zero.
In the θ-slicing r-backward algorithm one starts with the outer boundary being the ellipsoid r = R(θ, ΔR_1), where ΔR_1 is the outer ellipticity of the star. In the θ-slicing r-forward algorithm one starts with the inner boundary being the ellipsoid r = r_i(θ, Δr_i), where Δr_i is the inner ellipticity of the star.
At the inner boundary the tangential pressure is uniform, so the density is also uniform and equal to the maximum density, ρ(θ) = ρ_i.
In the actual calculation we used the θ-slicing r-backward algorithm, because here the boundary condition M = M_0 is achieved automatically when one starts with My0(θ) = M_0.
The odeqs in r consist of the 6 significant Einstein equations eqR00, eqR11, eqR22, eqR33, eqR03, eqR41 for the six variables A0(r, θ), A1(r, θ), A2(r, θ), A3(r, θ), A4(r, θ), M(r, θ), with θ = θ_i and θ-derivatives calculated by an Euler step from the preceding θ-slice. For i = 0, i.e. θ = 0, the θ-derivatives are taken from start values for all variables, which normally represent the corresponding TOV-solution (here only A0(r), A1(r), M(r) are non-trivial and do not depend on θ). The odeqs are highly non-linear algebraic differential equations and hard to solve numerically with classical methods for linear odeqs extended by an algebraic equation solver. In the case of a nonlinear odeq-system one uses an Euler or Runge-Kutta method and calculates in each step the highest derivatives with a numerical algebraic equation solver. As an alternative, one can use minimization of the least-squares error in the highest derivatives instead of a numerical algebraic equation solver. Minimization also has the advantage that one can minimize the complete set of Einstein equations plus the 2 additional continuity equations eqR41, eqR42 in the error goal function instead of the 6 significant equations, which improves the stability of the solution (e.g. in case of degeneracy).
The numerical error of the algorithm is calculated as the Euclidean norm of the equation values (the right side of the Einstein equations being 0). The error is calculated over the lattice {r_i, θ_j} as median, mean or maximum. In the internal loop of the algorithm over r_i at fixed θ_j, the solution of the algebraic discretised Einstein equation is achieved by square-root error minimization, so it is essential to avoid singularities, e.g. at the horizon and the pseudo-singularity at θ = 0. This is achieved by selecting appropriate analytic convergence factors for (the left side of) the Einstein equations. As the equations are to be zeroed for the solution, the convergence factors do not change the solution, of course, but they cancel the numerical singularities, which could otherwise jeopardize the numerical convergence of the algorithm.
The actual calculation was carried out in Mathematica using its symbolic and numerical procedures. In the first stage, the Einstein equations were derived from the ansatz for g_μν from section 2 and simplified automatically. The arising complexity of the equations is such that it is practically impossible to handle them manually. For the second, numerical stage we tried several slicing algorithms, and the best alternative proved to be the θ-slicing r-forward algorithm implemented by hand in Mathematica. The solution of the resulting odeq in each r-step was calculated using NDSolve. Also, for every star model and parameter set, the TOV solution with ω = 0, a = 0 was calculated first with the algorithm and compared with the exact TOV solution.
The TOV Equation as the Limit ω → 0 for the Extended Kerr Space-Time
In the Schwarzschild spacetime, ω = 0 and a = 0, we have spherical symmetry and no dependence on θ; the TOV-equation can then be derived from the remaining non-trivial Einstein equations eqR00, eqR11, eqR22, eqR41.
The TOV-equation in the standard form reads

dP/dr = −G (ρ + P/c²) (M(r) + 4πr³P/c²) / (r (r − 2GM(r)/c²)), with dM/dr = 4πr²ρ,

using the Schwarzschild radius r_s = 2GM_t/c², where M_t is the total mass. In order to make the variables dimensionless, one introduces "sun units": r_ss is the Schwarzschild radius of the sun, ρ_s the corresponding Schwarzschild density, and P_s the corresponding Schwarzschild pressure.
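A minimal Python sketch of such an integration (outward from the center, stopping at P = 0) is given below; the polytropic constants and the central density are illustrative placeholder values, not the paper's fitted equation-of-state, and the rest-mass density is used as the energy density for simplicity:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal TOV integration in geometric units (G = c = 1) with a polytropic
# equation of state P = K * rho**gamma. K, gamma, rho_c are illustrative only.
K, gamma = 100.0, 2.0
rho_c = 1.28e-3                          # central density (geometric units)

def rho_of_P(P):
    return (np.maximum(P, 0.0) / K) ** (1.0 / gamma)

def tov(r, y):
    P, M = y
    rho = rho_of_P(P)
    dPdr = -(rho + P) * (M + 4 * np.pi * r**3 * P) / (r * (r - 2 * M))
    dMdr = 4 * np.pi * r**2 * rho
    return [dPdr, dMdr]

def surface(r, y):                       # stop when the pressure vanishes
    return y[0] - 1e-12
surface.terminal = True

P_c = K * rho_c**gamma
sol = solve_ivp(tov, [1e-6, 100.0], [P_c, 0.0], events=surface, max_step=0.01)
R, M0 = sol.t[-1], sol.y[1, -1]
print(f"R = {R:.3f}, M0 = {M0:.4f} (geometric units)")
```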
In "sun units" TOV-equation transforms into with the normalized mass M 1 (r 1 ), and 0 M R = for non-interacting Fermi-gas and for an interacting Fermi-gas:
The Equation of State for a (Non-Interacting) Nucleon Gas
Here x_F is the Fermi momentum and n the particle density. The resulting approximate equations of state for P are valid for the density ρ relative to the critical density ρ_c. The full expression for P, including the temperature T, is given in [5], chap. 15. We use dimensionless variables (distance unit r_1 in de-Broglie wavelengths λ_c, volume unit V_1 in λ_c³, particle density unit n_1); from the resulting particle density the chemical potential μ_1 can be calculated with an approximation formula, and the resulting pressure (= energy density) then follows. A 3D-diagram of P(ρ, kT), with kT in E_0 units, shows the dependence P_1 = k ρ_1^γ, except on the left side, when kT reaches the magnitude of 1 GeV (T = 10¹⁰ K).
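The two polytropic limits of the T = 0 Fermi gas can be checked numerically from the standard dimensionless pressure function; the Python sketch below (prefactors dropped, not the paper's code) recovers γ → 5/3 in the non-relativistic limit and γ → 4/3 in the ultra-relativistic limit:

```python
import numpy as np

def f(x):
    # Dimensionless T = 0 Fermi-gas pressure, with x the Fermi momentum
    # in units of m*c (Chandrasekhar form, constant prefactor dropped).
    return x * (2 * x**2 - 3) * np.sqrt(1 + x**2) + 3 * np.arcsinh(x)

# Local polytropic exponent gamma = d log P / d log n, with n ~ x**3.
h = 1e-4
for x in (0.05, 0.5, 1.0, 10.0, 100.0):
    slope = (np.log(f(x * (1 + h))) - np.log(f(x))) / np.log(1 + h)
    print(f"x = {x:g}: gamma = {slope / 3:.3f}")
# gamma -> 5/3 for x << 1 (non-relativistic)
# gamma -> 4/3 for x >> 1 (ultra-relativistic)
```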
The Equation of State for an (Interacting) Nucleon Fluid
For the interacting nucleon gas we take into account the nucleon-nucleon potential in the form of a Saxon-Woods potential modeled on the experimental data [7]-[13].
The Saxon-Woods potential is shown in Figure 2 below.
The pressure of the interacting nucleon fluid then follows from this potential. The experimental data used here are those from [7], and are shown in Figure 3.
The hard-core potential from the lattice calculation Reid93 [10] is shown in Figure 4.
Both potentials are fitted with a double Saxon-Woods potential V_nn in Figure 5.
Maximum Omega-Values in Kerr-Space-Time
We consider here a rotation model with constant angular velocity ω. With this model the resulting 4-velocity u_μ has the form given in [4] [14] [15]. The maximum values for ω are calculated from the minimal zeros in ω of the denominator in u_0 from (9a), minimized over r_1 and θ in their respective ranges. The resulting value depends on the form-factor α_f in the moment of inertia I_1. The star parameters mass M_0 and radius R_1, which enter the outer boundary condition, determine the solution completely. In general, there will be an inner radius r_i > 0 with the maximum density ρ(r_i). As we will see, this outer boundary condition, together with allowing r_i > 0, changes dramatically the resulting manifold of physical solutions.
The TOV-Equation: The Parametric Solution and Resulting Star Types
By setting up a parametric solution of the TOV-equation one gets a map of possible physical solutions, i.e. possible star structures. As parameters one can use either (M_0, R_1) in the outer boundary condition at r_1 = R_1 or the dual parameter pair (r_i, ρ_bc) in the inner boundary condition at r_1 = r_i.
The pure neutron Fermi-gas model yields for compact neutron stars a maximum mass of M_maxc = 0.93 M_sun, which is in disagreement with observations. Therefore, at least for compact neutron stars, a model of interacting neutron fluid must be used. In 6.2 above we have described a Saxon-Woods-potential model for the nucleon-nucleon interaction, which seems to fit the experiment and the theory best. There will be a critical density (dependent on temperature, of course) where a transition from interacting fluid to Fermi-gas takes place; it is plausible to set this density equal to the Saxon-Woods critical density ρ_oc = 0.0417 (in sun units). We made calculations with the TOV-equation using these two models for neutron-based stars and came to the conclusion that compact neutron stars with mass M_0 ≤ 3.04 M_sun consist of interacting neutron fluid, while neutron shell-stars with M_0 ≥ 5 M_sun obey the Fermi-gas model. The underlying calculation is the Mathematica notebook [6].
This approach yields the results described below.
The admissible mass range ends where the thickness of the shell above the Schwarzschild radius becomes very small (minimum 0.01).
So in total the R-M-relation for neutron stars becomes
The maximum mass for a repulsive-hardcore model for the equation-of-state DD2 [16] is 2.42 M_sun; from our mapping we obtain the maximum compact neutron star mass of M_maxc = 3.04 M_sun. The actual theoretical limit for the neutron star core density is ρ_max = 3.5 × 10¹⁵ g/cm³ = 0.199 in sun units [8] [9].
The limit for ρ_bc reached in our mapping is only ¼ of this, ρ_bc = ρ_bcmax = 0.0544, due to the subluminal-sound condition and the use of an (attractive) nucleon-nucleon potential for the nucleon fluid instead of a pure repulsive-hardcore model.
The classical argument for the collapse of a neutron star to a black hole for ρ_bc > ρ_max, dating back to Oppenheimer [1], is invalidated here by the simple introduction of shell-star models, where r_i > 0 and therefore there is no mass at the center; physically, there is only a very diluted nucleon gas there.

Stellar shell-stars (stellar black-holes)

We assume that the underlying equation-of-state for stellar shell-stars is the Fermi-gas of nucleons in the low-density limit. We make the further plausible assumption that the "edge" of the solution mapping gives the physically stable solutions, i.e. the R-M-relation for stellar shell-stars. The edge consists, for fixed ρ_bc < 0.0417 = ρ_oc, of the solutions with maximum r_i (because then the average density in the shell is lowest), and for ρ_bc = 0.0417 = ρ_oc it consists of the solutions (M_0, r_i, R_1) at the right boundary (this remains open, but the "thinning-out" of the solutions for small ρ_bc and large r_i makes it physically plausible; see Figure 11 [6] below).
The resulting R-M-relation is practically linear and has a maximum mass value of M_max = 81.3 M_sun (Figure 12(a)).
The corresponding relative shell thickness is dR_rel = dR/M (Figure 12(b)), and the relative Schwarzschild-distance is dR_srel = (R − M)/M (Figure 13). The inverse of dR_srel gives roughly the light attenuation factor of {1.7, …, 20}.
Taking the attenuation factor and the small relative shell thickness of around 0.02, these stellar shell-stars have approximately the properties expected of a genuine black hole when measured from a distance r ≫ R_1. Their entropy is identical to the Bekenstein-Hawking entropy up to the factor (ln 2)·4/π = 0.882.
Galactic (supermassive) shell-stars
The mean density of a black hole scales with its radius R like ρ̄ ∝ 1/R², i.e. for a supermassive black hole with M = 10⁶ M_sun the mean density is 10⁻¹² times that of a solar-mass black hole. In the following we use the abbreviation MM_sun = 10⁶ M_sun.
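A short numerical check of this scaling (standard constants; the mean density is taken inside the Schwarzschild radius), as a Python sketch:

```python
import math

G, c = 6.674e-11, 2.998e8        # SI constants
M_sun = 1.989e30                 # kg

def mean_density(M):
    """Mean density inside the Schwarzschild radius: rho = M / (4/3 * pi * R_s^3)."""
    R_s = 2 * G * M / c**2
    return M / (4.0 / 3.0 * math.pi * R_s**3)

for M in (M_sun, 1e6 * M_sun):
    print(f"M = {M / M_sun:.0e} M_sun: rho = {mean_density(M):.3e} kg/m^3")
# The density scales as 1/M^2 (equivalently 1/R^2): a 10^6 M_sun object
# has a mean density 10^-12 times that of a solar-mass object.
```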
Therefore it is plausible to try a parametric mapping with the white-dwarf equation-of-state in the MM_sun mass range. Third, a stable solution for a fixed mass will have the highest possible maximum density ρ_bc, and that will lie on the "ridge". So one can calculate the R-M-relation following the "ridge". The resulting R-M-relation is as follows (Figure 15).
The inner radius is shown in Figure 16.
The R-M-relation is almost linear, as expected, and extends up to 50 MM_sun.
The relative thickness (Figure 17) shows that the shells are very thin indeed, with a minimum of 0.001. The fourth diagram, showing the relative Schwarzschild-distance (Figure 18), has a minimum at {M_0, dR_srel} = {7, 0.00142857}, so that its reciprocal value (the approximate light attenuation factor) is around 700. So the overall result is that the supermassive shell-stars become ever thinner shells, while the distance from the Schwarzschild horizon is increasing.
The TOV-Equation: A Case Study for Typical Star Types
In the nearly rotation-free case the solution of the TOV-equation was calculated for 4 models in sun units, with r_s = Schwarzschild radius. Here the radius R_1 is reached when M'(r_1 = R_1) = 0, i.e. ρ(R_1) = 0.
The "naive" mean density is here
The TOV-solution for ρ (in 10⁻¹² units, Figure 27) and M (in 10⁶ units, Figure 28) versus r (in 10⁶ units) shows an internal "hole" with a radius r_i = 4.356 × 10⁶ and maximum ρ = 4.934 × 10⁻¹² at r_i. The inner radius r_i lies a little below the Schwarzschild radius r_s = M_0, and the relative shell thickness is correspondingly small. Furthermore, r_i is little sensitive to the temperature up to T = 10⁷ K. As for a stellar black hole, when R converges to r_s = M_0, so does the inner radius r_i, and there is no physical solution (with positive ρ and M) for a boundary within the horizon.
The Three Star Models for Kerr-Space-Time with Mass and Rotation
The calculation of Kerr space-time with mass and rotation was carried out for 3 star models. The parameters are:
γ = gam, gam1, gam2: the equation-of-state exponent;
infac: the moment-of-inertia factor α_f;
epsi: the singularity-cancellation parameter, with limit(epsi) = 0, introduced to improve the numerical stability at singularities;
r_i = riact: the polar inner radius r_iy;
Δr_i: the inner ellipticity, the difference between the polar inner radius r_iy and the equatorial inner radius r_ix, Δr_i = r_iy − r_ix;
ΔR_1: the outer ellipticity, with outer radii R_x1 = R_1 − ΔR_1 and R_y1 = R_1;
rilow: the minimal radius r_1 reached in the solution;
ρ_bc = rhobcx: the boundary-condition density;
dthrel: the maximum relative difference of a value dependent on θ.
The density distribution is similar to the TOV-case but with a decrease in the θ-direction. The rotation results show a very small flattening in the polar direction of dthrel = 0.00118. The neutron star behaves like a fluid because of its "viscosity", that is, its nuclear interaction, and becomes "pumpkin-like".
The outer Kerr horizon is r_+ = 15.21. The underlying calculation is the Mathematica notebook [20]; the results are in [19].
The outer ellipticity ΔR_1 is at first a free parameter and is calculated from a case study of minimal mean energy density. The two significant non-spherical features are the relative shell-thickness variation dthrel(dR_1) and the relative inner ellipticity dthrel(r_i). The first depends roughly linearly on the outer ellipticity ΔR_1, plus the value at ΔR_1 = 0 (dthrel(dR_1) = 0.0241), which results from rotation. The second, dthrel(r_i), is almost equal to the relative outer ellipticity dthrel(R_1), plus the small amount at ΔR_1 = 0 (dthrel(R_1) = 0.00123).
The density distribution is shown in Figures 32-34.
The mass distribution is shown in Figures 35 and 36.
The physical mass distribution ends at the inner boundary at r_i = 16.7, where the density jumps to ρ = 0. A remarkable result, distinct from the case of the neutron star, is the shape with rotation. The energy-minimal stellar shell-star behaves like a ball of neutron gas (negligible interaction) and slightly decreases its equatorial radius, so that, speaking naively, the increased gravitation counteracts the centrifugal force; the shell-star becomes "cigar-like", with the shell thickness approximately constant.

Typical rotating galactic shell-star

This is modelled (approximately) on the central black hole in the Milky Way with mass M_0 = 4.36 mega-sun-masses (MM_s) and radius R_1 = 4.38 mega-sun-Schwarzschild-radii (13.14 × 10⁶ km, Mr_ss) [21].
The underlying calculation is the Mathematica notebook [22]; the results are in [23].
The outer Kerr horizon is r_+ = 4.26 Mr_ss.
In order to maintain numerical performance, we use for mass and distance the 10⁶ (mega) units 10⁶ M_s and 10⁶ r_ss, and for density the 10⁻¹² (mega⁻²) unit 10⁻¹² ρ_s.
Like in the case of the stellar shell-star, the outer ellipticity ΔR_1 is at first a free parameter and is calculated from a case study of minimal mean energy density to ΔR_1 = −2 dTOV, where dTOV is the shell thickness of the spherical shell-star, dTOV = 0.057.
The full parameters are:
In contrast to the stellar shell-star, here the relative variation of the shell thickness for the spherical-outer-boundary solution is smaller by a factor of 20 compared to the minimal solution with a high outer ellipticity, so here there is a dependence of the shell thickness on the ellipticity.
The density distribution is shown in Figures 37-39; it increases in the θ-direction. The mass distribution is shown in Figures 40 and 41.
The physical mass distribution ends at the inner boundary at r_i = 4.46456, where the density jumps to ρ = 0. The fit extrapolates it to lower r-values.
The maximum distance from the horizon is max(r_02e) − r_+ = 0.125; therefore the minimal light-energy attenuation is roughly 4.262/0.125 = 34, meaning that visible green light of 0.514 μm is shifted to 17 μm in the far infrared. The galactic shell-star has all its mass concentrated within a thin shell (dR_1 = 0.0362), which has its inner radius inside and its outer radius outside the horizon. M_0/min(R_1(θ)) = 0.9971, and the attenuation factor is 1/(1 − M_0/min(R_1(θ))) = 345, meaning that x-ray radiation from in-falling matter from the accretion disc with an energy of 5 keV and λ = 0.2 nm is shifted to λ = 69 nm, i.e. into hard UV radiation.
Experimental Evidence with Recent LIGO and X-Ray Measurements
In November 2018, the LIGO collaboration published the latest statistics of neutron stars and black holes, based on gravitational wave and x-ray measurements [24].
The resulting mass distribution for black-holes and neutron stars is shown in Figure 42 [24].
From these results, we can deduce a confirmed mass range for neutron stars. The compact neutron star with M_0 = 0.932 M_sun, R_1y = 2.8372 r_ss = 8.51 km, R_1x = 2.8391 r_ss, ω = 0.1087, has a relative ellipticity of dthrel = 0.00118. The neutron star behaves like a fluid because of its "viscosity", that is, its nuclear interaction, and becomes slightly "pumpkin-like".
The stellar shell-star behaves like a ball of neutron gas (negligible interaction) and decreases slightly its equatorial radius, so that, speaking naively, the increased gravitation counteracts the centrifugal force, the shell-star becomes "cigar-like", with the shell thickness approximately constant.
The redshift is roughly 345. The galactic shell-star is a shell object with a thin mass shell (ΔR = 0.0352 Mr_ss) situated closely above its outer Kerr horizon r_+ = 4.26 Mr_ss. The polar radius is smaller than the equatorial radius, so the outer shape and the inner shape are both pancake-like.
The overall result is that the introduction of numerical shell-star solutions of the TOV- and Kerr-Einstein equations creates shell-star models, which mimic closely the behaviour of abstract black holes and satisfy the Bekenstein-Hawking entropy formula, but have finite redshifts, escape velocity v < c, no singularity, and no information-loss paradox, and are classical objects which need no recourse to quantum gravity to explain their behaviour.
Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper. | 6,658.4 | 2014-04-01T00:00:00.000 | ["Physics"] |
Sequential Bayesian Analysis of Multivariate Count Data
We develop a new class of dynamic multivariate Poisson count models that allow for fast online updating, and we refer to these models as multivariate Poisson-scaled beta (MPSB). The MPSB model allows for serial dependence in the counts as well as dependence across multiple series with a random common environment. Other notable features include analytic forms for state propagation and predictive likelihood densities. Sequential updating occurs through the updating of the sufficient statistics for static model parameters, leading to a fully adapted particle learning algorithm and a new class of predictive likelihoods and marginal distributions, which we refer to as the (dynamic) multivariate confluent hyper-geometric negative binomial distribution (MCHG-NB) and the dynamic multivariate negative binomial (DMNB) distribution. To illustrate our methodology, we use various simulation studies and count data on weekly non-durable goods consumer demand.
Introduction
Data on discrete valued counts pose a number of statistical modeling challenges despite their widespread applications in web analytics, epidemiology, economics, finance, operations, and other fields. For instance, Amazon, Facebook and Google often are interested in modeling and predicting the number of (virtual) customer arrivals during a specific time period, or policy makers need to predict the number of individuals who possess a common trait for resource deployment and allocation purposes. In online settings, the challenge then is fast and efficient prediction of web traffic counts from multiple websites and pages over time. The total number of clicks over time may be positively dependent on the counts the main site receives, and there is a need for dynamic multivariate count models. Thus, we develop a dynamic (state-space) multivariate Poisson model together with particle filtering and learning methods for sequential online updating (Gordon et al., 1993; Carvalho et al., 2010a). We account for dependence over time and across series via a scaled beta state evolution and a random common environment. Our model is termed the multivariate Poisson-scaled beta (MPSB) model. As a by-product, we introduce two new multivariate distributions, the dynamic multivariate negative binomial (DMNB) and the multivariate confluent hyper-geometric negative binomial (MCHG-NB) distributions, which correspond to marginal and predictive distributions.
Recent advances in discrete valued time series can be found in Davis et al. (2015). However, there is little work on count data models which account for serial dependence. Typically, the dependence between time series of counts can be modeled either using traditional stationary time series models (Al-Osh and Alzaid, 1987; Zeger, 1988; Freeland and McCabe, 2004), which are known as observation driven models (Cox, 1981), or via state space models (Harvey and Fernandes, 1989; Durbin and Koopman, 2000; Fruhwirth-Schnatter and Wagner, 2006; Aktekin and Soyer, 2011; Aktekin et al., 2013; Gamerman et al., 2013), which are known as parameter driven models. In a state space model, the dependence between the counts is captured via latent factors that follow some form of a stochastic process. These types of models generally assume conditional independence of the counts given the latent factors, as opposed to stationary models where counts are always unconditionally dependent.
Analysis of discrete valued multivariate time series has so far been limited due to computational challenges. In particular, little attention has been given to multivariate models, and our approach is an attempt to fill this gap. For example, Karlis (2011, 2012) uses observation driven, more specifically multivariate INAR(1), models. Ravishanker et al. (2014) uses Bayesian observation driven models and introduces a hierarchical multivariate Poisson time series model. Markov chain Monte Carlo (MCMC) methods are used for computation, where the evaluation of the multivariate Poisson likelihood requires a significant computational effort. Serhiyenko et al. (2015) develops zero-inflated Poisson models for multivariate time series of counts, and Ravishanker et al. (2015) study finite mixtures of multivariate Poisson time series. State-space models of multivariate count data were presented in Ord et al. (1993) and in Jorgensen et al. (1999) using the EM algorithm. Closely related models of correlated Poisson counts in a temporal setting include research on marked Poisson processes as in Taddy (2010); Taddy and Kottas (2012); Ding et al. (2012).
One advantage of parameter driven models is that the aforementioned correlations are captured by the time evolution of the state parameter, which we refer to as the random common environment. The correlations among the multiple series are induced by this random common environment, which follows a Markovian evolution, as in Smith and Miller (1986); Aktekin et al. (2013); Gamerman et al. (2013), and modulates the behavior of the individual series. The idea of the random common environment is widely used in the risk analysis (Arbous and Kerrich, 1951) and reliability (Lindley and Singpurwalla, 1986) literatures to model dependence. Our strategy of using the random common environment provides a new class of models for multivariate counts that can be considered dynamic extensions of the models considered in Arbous and Kerrich (1951).
Sequential Bayesian analysis (Polson et al. (2008); Carvalho et al. (2010a)) and forecasting requires the use of sequential Monte Carlo techniques. MCMC methods via the forward filtering backward sampling (FFBS) of Carter and Kohn (1994) and Fruhwirth-Schnatter (1994) are not computationally efficient, since they require rerunning of chains to obtain filtering distributions with each additional observation. Particle filtering (PF) and particle learning (PL) methods avoid this computational burden to estimate the dynamic state as well as the static parameters in an efficient manner. As pointed out by Carvalho et al. (2010a), estimating static parameters within the PF framework is notoriously difficult, especially in higher dimensions. However, given the specific structure of the proposed state space model (the conditional filtering densities of all the static parameters can be obtained in closed form with conditional sufficient statistics), it is possible to develop such a filtering scheme that can be used for both on-line updating and forecasting.
The rest of the paper is organized as follows. Section 2 introduces our multivariate time series model for counts and develops its properties. Section 3 briefly reviews some of the PF and PL methods with a focus on Poisson count data. The proposed model and estimation algorithms are illustrated in Section 4 using calibration studies and an actual data set on weekly time series of consumer demand for non-durable goods. Section 5 provides concluding remarks, discussion of limitations and future work.
Multivariate Poisson-Scaled Beta (MPSB) Model
Suppose that we observe {(Y_11, …, Y_1T), …, (Y_J1, …, Y_JT)}, a sequence of evenly spaced counts observed up until time T for J series. We assume that these J series are exposed to the same external environment, similar to the common operational conditions for the components of a system as considered by Lindley and Singpurwalla (1986) in reliability analysis. The analysis of financial and economic time series also includes several series that are affected by the same economic swings in the market. To account for such dependence, we assume a Bayesian hierarchical model of the form

(Y_jt | λ_j, θ_t) ~ Pois(λ_j θ_t), for j = 1, …, J and t = 1, …, T,    (1)

where λ_j is the rate specific to the jth series and θ_t represents the effects of the random common environment modulating λ_j. Following Smith and Miller (1986), a Markovian evolution is assumed for θ_t as

θ_t = (θ_{t−1}/γ) ε_t,    (2)

where the error terms follow a Beta distribution,

(ε_t | D_{t−1}) ~ Beta(γα_{t−1}, (1 − γ)α_{t−1}),    (3)

where α_{t−1} > 0, 0 < γ < 1 and D_{t−1} = {D_{t−2}, Y_{1,t−1}, …, Y_{J,t−1}} represents the sequential arrival of data. We refer to this class of models as multivariate Poisson-scaled beta (MPSB) models due to the relationship between the observation and state equations. We also note here that the state equation above (as discussed in Smith and Miller (1986)) is defined conditional on previous counts, unlike the state equations in traditional dynamic linear models.
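A minimal Python simulation sketch of this data-generating process, under the scaled-beta evolution as we read it from (2)-(3) (the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mpsb(lam, T, gamma, alpha0, beta0):
    """Simulate the MPSB model: Y_jt | lam_j, theta_t ~ Pois(lam_j * theta_t),
    theta_t = (theta_{t-1} / gamma) * eps_t with
    eps_t ~ Beta(gamma * alpha_{t-1}, (1 - gamma) * alpha_{t-1});
    alpha is updated with the incoming counts, matching the filter recursion."""
    lam = np.asarray(lam, dtype=float)
    theta = rng.gamma(alpha0, 1.0 / beta0)   # (theta_0 | D_0) ~ Gamma(alpha0, beta0)
    alpha = alpha0
    Y = np.zeros((T, len(lam)), dtype=int)
    thetas = np.zeros(T)
    for t in range(T):
        eps = rng.beta(gamma * alpha, (1.0 - gamma) * alpha)
        theta = theta * eps / gamma          # scaled-beta state evolution
        Y[t] = rng.poisson(lam * theta)
        alpha = gamma * alpha + Y[t].sum()   # alpha_t = gamma*alpha_{t-1} + sum_j Y_jt
        thetas[t] = theta
    return Y, thetas

Y, thetas = simulate_mpsb([1.0, 2.0, 0.5], T=200, gamma=0.9, alpha0=2.0, beta0=2.0)
print(Y[:3], thetas[:3])
```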
Dynamic Online Bayesian Updating
The observation model (1) is a function of both the dynamic environment θ_t and the static parameters λ_j. For example, in the case where Y_jt represents the weekly consumer demand of household j at time t, λ_j accounts for the effects of the household-specific rate and θ_t for the effects of the random common economic environment that both households are exposed to at time t. When θ_t > 1, the environment is said to be more favorable than usual, which leads to a higher overall Poisson rate, and vice versa. In the evolution equation (2), the term γ acts like a discount factor common to all J series. For notational convenience, we suppress the dependence of all conditional distributions on γ in our discussion below. Having the state evolution as (2) also implies the following scaled beta density for (θ_t | θ_{t−1}):

p(θ_t | θ_{t−1}, D_{t−1}, λ) = [Γ(α_{t−1}) / (Γ(γα_{t−1}) Γ((1 − γ)α_{t−1}))] (γ/θ_{t−1}) (γθ_t/θ_{t−1})^{γα_{t−1} − 1} (1 − γθ_t/θ_{t−1})^{(1 − γ)α_{t−1} − 1},    (4)

where (θ_t | θ_{t−1}, D_{t−1}, λ) is defined over (0, θ_{t−1}/γ) and the vector of static parameters is defined as λ = {λ_1, …, λ_J}.
Here, we assume that for component j, given θ t 's and λ j , Y jt 's are conditionally independent over time. Furthermore, we assume that at time t, given θ t and λ j 's, Y jt 's are conditionally independent of each other.
Conditional on the static parameters, it is possible to obtain an analytically tractable filtering of the states. At time 0, prior to observing any count data, we assume that (θ_0 | D_0) ~ Gamma(α_0, β_0); then by induction we can show that

(θ_{t−1} | D_{t−1}, λ) ~ Gamma(α_{t−1}, β_{t−1}),    (5)

and using (3) and (4) that the prior for θ_t is

(θ_t | D_{t−1}, λ) ~ Gamma(γα_{t−1}, γβ_{t−1}).    (6)

Therefore, the filtering density at time t can be obtained using (1) and (6) as

p(θ_t | D_t, λ) ∝ p(Y_1t, …, Y_Jt | θ_t, λ) p(θ_t | D_{t−1}, λ),

which is

(θ_t | D_t, λ) ~ Gamma(α_t, β_t),

where α_t = γα_{t−1} + (Y_1t + … + Y_Jt) and β_t = γβ_{t−1} + (λ_1 + … + λ_J). As a consequence, both the effects of all counts as well as the individual effects of each series are used in updating the random common environment.
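Conditional on λ, the filter is therefore a two-line recursion over the sufficient statistics; a minimal Python sketch (the placeholder counts Y stand in for any T × J array, e.g. from the simulation sketch above):

```python
import numpy as np

def forward_filter(Y, lam, gamma, alpha0, beta0):
    """Exact filter conditional on lam: (theta_t | D_t, lam) ~ Gamma(alpha_t, beta_t),
    with alpha_t = gamma*alpha_{t-1} + sum_j Y_jt and
         beta_t  = gamma*beta_{t-1}  + sum_j lam_j."""
    a, b = alpha0, beta0
    alphas = np.zeros(len(Y))
    betas = np.zeros(len(Y))
    lam_sum = float(np.sum(lam))
    for t, y in enumerate(Y):
        a = gamma * a + y.sum()
        b = gamma * b + lam_sum
        alphas[t], betas[t] = a, b
    return alphas, betas

Y = np.random.default_rng(1).poisson(2.0, size=(200, 3))   # placeholder counts
alphas, betas = forward_filter(Y, [1.0, 2.0, 0.5], 0.9, 2.0, 2.0)
print("E(theta_T | D_T) =", alphas[-1] / betas[-1])
```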
Dynamic Multivariate Negative Binomial (DMNB) Distribution
An important feature of the model is the availability of the marginal distribution of Y_jt conditional on λ_j for j = 1, …, J. This is given by

p(Y_jt | λ_j, D_{t−1}) = ∫ p(Y_jt | λ_j, θ_t) p(θ_t | D_{t−1}, λ) dθ_t,

which is a negative binomial model denoted as NB(γα_{t−1}, λ_j/(γβ_{t−1} + λ_j)), where λ_j/(γβ_{t−1} + λ_j) is the probability of success. From the conditional independence assumptions, we can obtain the multivariate distribution of Y_t = {Y_1t, …, Y_Jt} conditional on λ as

p(Y_t | λ, D_{t−1}) = ∫ [∏_{j=1}^{J} p(Y_jt | λ_j, θ_t)] p(θ_t | D_{t−1}, λ) dθ_t.

This is a generalization of the traditional negative binomial distribution. We refer to this distribution as the dynamic multivariate negative binomial (DMNB) distribution, which will play an important role in learning about the discount parameter γ. The bivariate distribution p(Y_it, Y_jt | λ, D_{t−1}) for series i and j is then a bivariate negative binomial distribution with integer values of γα_{t−1}. We note that this bivariate distribution is the dynamic version of the negative binomial distribution from Arbous and Kerrich (1951), who considered it for modeling the number of industrial accidents in a workplace such as a production facility. Furthermore, the conditional distributions of Y_jt are also of negative binomial type. The conditional mean, or the regression of Y_jt given Y_it, is a linear function of Y_it given by

E(Y_jt | Y_it, λ, D_{t−1}) = λ_j (γα_{t−1} + Y_it) / (γβ_{t−1} + λ_i).

The bivariate counts are positively correlated, with correlation given by

Corr(Y_it, Y_jt | λ, D_{t−1}) = [λ_i λ_j / ((γβ_{t−1} + λ_i)(γβ_{t−1} + λ_j))]^{1/2}.

Given this correlation structure, our proposed model is suitable for series that are only positively correlated. One of the examples in our numerical illustration section involves counts of weekly demand for consumer non-durable goods of several households that are positively correlated with each other. Also, the structure of the correlation suggests that as γ approaches zero (or very small values), for the same values of the λ_j, the correlation between two series increases. A similar argument can be made by observing the state equation (2), where γ was introduced as a common discount parameter. In our simulations and analysis of real count data, we only consider series that are positively correlated and discuss the implications. Even though this is a limitation of our model, it is possible to find positively correlated time series of counts in many fields when the series are assumed to be exposed to the same environment.
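A small numerical illustration of the regression and correlation forms above (which we derive from the common-gamma-environment construction; the numeric inputs are illustrative):

```python
import numpy as np

def dmnb_corr(lam_i, lam_j, gamma, beta_prev):
    """Corr(Y_it, Y_jt) under the common Gamma(gamma*alpha_{t-1}, gamma*beta_{t-1})
    environment: sqrt(lam_i*lam_j / ((gb + lam_i)*(gb + lam_j))), gb = gamma*beta_{t-1}."""
    gb = gamma * beta_prev
    return np.sqrt(lam_i * lam_j / ((gb + lam_i) * (gb + lam_j)))

def dmnb_regression(y_i, lam_i, lam_j, gamma, alpha_prev, beta_prev):
    """E(Y_jt | Y_it) = lam_j * (gamma*alpha_{t-1} + y_i) / (gamma*beta_{t-1} + lam_i),
    linear in y_i as stated in the text."""
    return lam_j * (gamma * alpha_prev + y_i) / (gamma * beta_prev + lam_i)

# the correlation increases as gamma -> 0, as noted in the text
for g in (0.9, 0.5, 0.1):
    print(g, round(dmnb_corr(1.0, 2.0, g, beta_prev=2.0), 3))
```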
Forward Filtering and Backward Sampling (FFBS)
In what follows, we introduce and discuss methods for sequentially estimating the dynamic state parameters θ_t, the static parameters λ_j, and the discount factor γ. We first assume that γ is known. We assume that, a priori, the λ_j's are independent of each other and of θ_0, with gamma priors λ_j ~ Gamma(a_j, b_j) for j = 1, . . . , J.
The model can be estimated using either MCMC techniques or particle filtering methods. For MCMC, one needs to generate samples from the joint posterior of all parameters, p(θ^t, λ | D_t), where θ^t = {θ_1, . . . , θ_t}, using a Gibbs sampling scheme with the following steps:
1. Generate the θ_t's via p(θ_1, . . . , θ_t | λ_1, . . . , λ_J, D_t).
2. Generate the λ_j's via p(λ_1, . . . , λ_J | θ_1, . . . , θ_t, D_t).
In step 1, forward filtering and backward sampling (FFBS) can be used to sample from the conditional joint distribution of the state parameters, factoring the joint density as p(θ_1, . . . , θ_t | λ, D_t) = p(θ_t | λ, D_t) Π_{s<t} p(θ_s | θ_{s+1}, λ, D_s). The implementation of FFBS is straightforward in our model because the backward densities are shifted gammas: (θ_s | θ_{s+1}, λ, D_s) is distributed as γθ_{s+1} + Gamma((1 − γ)α_s, β_s), so that γθ_{s+1} < θ_s. In step 2, we can use Poisson-gamma conjugacy, which gives the gamma full conditional in (17), (λ_j | θ_1, . . . , θ_t, D_t) ~ Gamma(a_j + Σ_{s≤t} Y_js, b_j + Σ_{s≤t} θ_s), whose parameters are deterministic recursive updates of sufficient statistics. It is important to observe that, given the state parameters and the data, the λ_j's are conditionally independent; unconditionally, however, they will not necessarily be independent, whose implications are investigated in our numerical example. The availability of (17) and, more importantly, the sequential updating of its parameters via sufficient statistics are important in developing the particle learning methods we discuss in detail in the sequel. As pointed out by Storvik (2002) and Carvalho et al. (2010a), the issue with MCMC methods in state space models is that the chains need to be restarted for every new data point, and the simulation dimension grows as more data are observed over time. Furthermore, MCMC methods require monitoring convergence of the chains, calibrating thinning intervals (to reduce autocorrelation of the samples), and determining the size of the burn-in period, all of which increase the computational burden. Therefore, MCMC methods are not ideal for sequential updating, whose implications we investigate in our numerical examples section. However, the FFBS algorithm can be used to obtain smoothing estimates in a very straightforward manner since, unlike filtering, smoothing does not require sequentially restarting the chains. In a single block run of the above FFBS algorithm, one can obtain estimates of (θ_1, . . . , θ_t | D_t) by collecting the associated samples generated from p(θ_1, . . . , θ_t, λ | D_t). When fast sequential estimation is of interest, an alternative approach is the use of particle filtering (PF) techniques, which are based on re-weighting a finite number of posterior state particles in proportion to the likelihood of the next data point.
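The backward pass is equally compact; the sketch below (ours, assuming the filtered parameters from the forward pass above) draws one joint path of the states using the shifted-gamma backward densities:

    import numpy as np

    rng = np.random.default_rng(0)

    def ffbs_draw(alphas, betas, gamma):
        # One joint draw of (theta_1, ..., theta_T | lambda, D_T) given the
        # filtered Gamma(alpha_t, beta_t) parameters from forward_filter().
        T = len(alphas)
        theta = np.empty(T)
        theta[-1] = rng.gamma(alphas[-1], 1.0 / betas[-1])
        for s in range(T - 2, -1, -1):
            # theta_s = gamma * theta_{s+1} + Gamma((1 - gamma) * alpha_s, beta_s)
            theta[s] = gamma * theta[s + 1] + rng.gamma((1.0 - gamma) * alphas[s], 1.0 / betas[s])
        return theta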
Particle Learning of the MPSB Model
For sequential state filtering and parameter learning, we make use of the particle learning (PL) method of Carvalho et al. (2010a) to update both the dynamic and the static parameters. To summarize, the PL approach starts by resampling the state particles at time t with weights proportional to the predictive likelihood, which ensures that the most likely particles are moved forward. The resampling step is followed by the propagation of the current state (t) to the future state (t + 1). Note that in both the resampling and propagation steps, one-step-ahead observations are used. The last step updates the static parameters by computing their conditional sufficient statistics. Even though there have been several applications of PL methods in the literature, none of them focuses on the analysis of Poisson count data. Among many successful applications, recent uses of the PL algorithm include Carvalho et al. (2010b) for estimating general mixtures, Gramacy and Polson (2011) for estimating Gaussian process models in sequential design and optimization, and Lopes and Polson (2016) for estimating fat-tailed distributions.
Let us first assume that γ is known and define z t as the essential vector of parameters to keep track of at each t. The essential vector will consist of the dynamic state parameter (θ t ), static parameters (λ) and conditional sufficient statistics s t = f (s t−1 , θ t , Y t ) for updating the static parameters. The fully adapted version of PL can be summarized as follows using the traditional notation of PF methods
Step 1: Obtaining the resampling weights
The predictive likelihood, denoted p(Y_{t+1} | z_t) = p(Y_{t+1} | θ_t, λ, D_t), is required to compute the resampling weights in step 1 of the above PL algorithm. Specifically, we need to compute the integral over θ_{t+1} of p(Y_{t+1} | θ_{t+1}, λ) p(θ_{t+1} | θ_t, λ, D_t), where p(Y_{t+1} | θ_{t+1}, λ) is the product of the Poisson likelihoods in (1) and p(θ_{t+1} | θ_t, λ, D_t) is the state density in (3). The resulting weight w_t, given in (18), involves the confluent hypergeometric function (CHF) of Abramowitz and Stegun (1968). For evaluating the CHF, fast computational methods exist; see, for instance, the gsl package in R by Hankin (2006). The resampling weights (18) also represent the predictive likelihood (marginal) for the proposed class of dynamic multivariate Poisson models. To the best of our knowledge, (18) is the form of a new multivariate distribution, which we refer to as the (dynamic) multivariate confluent hypergeometric negative binomial (MCHG-NB) distribution; see Appendix B for details.
Step 2: Obtaining the propagation density
The propagation density in step 2 of the PL algorithm can be shown to be proportional to a scaled hypergeometric beta density (see Gordy (1998a)) defined over the range (0, θ_t/γ), denoted HGB(a, b, c), with parameters a = (Σ_j Y_{j,t+1}) + γα_t, b = (1 − γ)α_t, and c = Σ_j λ_j. To generate samples from the HGB density, a rejection-sampling-based approach can be used: we first numerically evaluate the maximum of the HGB density over its support using a non-linear numerical search technique and use this maximum as the enveloping constant of a rejection sampling algorithm. We comment on the performance of this sampling method in our numerical section and also provide an alternative below. With both the predictive likelihood for computing the resampling weights and the propagation density available, the PL algorithm can be summarized as a resample-propagate-update cycle. The availability of recursive updating for the sufficient statistics of the static parameters makes our model an ideal candidate for the PL method. Note that in step 4, the conditional distributions of the static parameters come from (17). Alternatively, if generating from the HGB distribution in step 2 is not computationally efficient, one can instead add a step in the vein of sequential importance sampling: propagate θ_{t+1} from the state equation and resample the θ_{t+1}'s using weights proportional to the likelihood. We comment on the performance of this approach in our numerical example.
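The importance-sampling variant of the propagation step is straightforward to implement; the following sketch (ours, with hypothetical variable names) propagates each particle through the scaled-beta state equation and resamples with weights proportional to the product-Poisson likelihood:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def pl_step_is(theta, lam, alpha, Y_next, gamma):
        # theta: (N,) state particles; lam: (N, J) per-particle static rates;
        # alpha: current filtering parameter alpha_t; Y_next: (J,) new counts.
        N = theta.shape[0]
        # propagate: theta_{t+1} = (theta_t / gamma) * eps with a scaled-beta shock
        eps = rng.beta(gamma * alpha, (1.0 - gamma) * alpha, size=N)
        theta_new = theta / gamma * eps
        # re-weight by the product of Poisson likelihoods over the J series
        log_w = stats.poisson.logpmf(Y_next[None, :], lam * theta_new[:, None]).sum(axis=1)
        w = np.exp(log_w - log_w.max())
        idx = rng.choice(N, size=N, p=w / w.sum())
        return theta_new[idx], lam[idx]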
Updating the discount factor γ
For the sequential estimation of the posterior of γ at each point in time, we make use of the marginal likelihood conditional on the λ_j's, which is the dynamic multivariate negative binomial density in (11). Estimating a static parameter that does not evolve over time is surprisingly challenging in a PL context: incorporating the estimation of γ into step 5 of the above algorithm via an importance sampling step would lead to the well-known particle degeneracy issue, and, unlike for the λ_j's, the conditional posterior distribution of γ is not a known density with deterministic recursive updating. For models where γ is treated as an unknown quantity, we therefore suggest using the marginal likelihood conditional on the λ_j's from (11), writing the conditional posterior of γ as in (19), where p(γ = k) is a discrete uniform prior over a grid of K values in (0.001, 0.999) (we comment on choosing the dimension K in our simulation studies). To incorporate the learning of (19) at the end of step 4 of our PL algorithm, we first estimate the discrete posterior distribution of γ using the Monte Carlo average over the updated samples of λ_1, . . . , λ_J at time t + 1. We then resample particles from this distribution to update f(.) in step 3 at time t + 2.
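To make the grid update concrete, the sketch below (ours) evaluates the DMNB likelihood in the negative-multinomial form obtained by integrating θ_t out of the product-Poisson likelihood against its Gamma(γα_{t−1}, γβ_{t−1}) prior; this closed form is our own derivation rather than a quoted equation, and for simplicity the sketch treats (α_{t−1}, β_{t−1}) as common across grid values, whereas a fully faithful implementation tracks them per value of γ:

    import numpy as np
    from scipy.special import gammaln, logsumexp

    def dmnb_logpmf(Y_t, lam, gamma, alpha_prev, beta_prev):
        # log p(Y_t | lambda, D_{t-1}) with theta_t integrated out
        a, b = gamma * alpha_prev, gamma * beta_prev
        s = Y_t.sum()
        return (gammaln(a + s) - gammaln(a) - gammaln(Y_t + 1).sum()
                + (Y_t * np.log(lam)).sum()
                + a * np.log(b) - (a + s) * np.log(b + lam.sum()))

    # discrete posterior over a K-point grid, likelihood averaged over lambda draws
    rng = np.random.default_rng(2)
    grid = np.linspace(0.001, 0.999, 30)               # K = 30 categories
    lam_draws = rng.gamma(2.0, 1.0, size=(100, 5))     # stand-in posterior draws
    Y_t = np.array([3, 4, 2, 5, 6])
    log_lik = np.array([logsumexp([dmnb_logpmf(Y_t, lam, g, 10.0, 10.0)
                                   for lam in lam_draws]) - np.log(len(lam_draws))
                        for g in grid])
    post = np.exp(log_lik - logsumexp(log_lik))        # normalized grid posterior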
Numerical Examples
To illustrate our MPSB model and the associated estimation algorithms, we consider several simulation studies and actual data on consumer demand for two households.
The consumer demand data we were given access to is a subset of a larger dataset used in Kim (2013). The data as well as the R code are available upon request by email from the authors.
Example: Calibration study
First, we present the results of several simulation studies. We constructed 10 simulated datasets from the data generating process of the MPSB model given by (1) and (2). Each sequence of counts sampled from the model is a realization of the underlying time series model, with varying pairwise sample correlations among the individual series. The parameter values are unchanged across simulations, but each simulated set behaves differently, as the random common environment differs drastically across simulations even for the same values of the static parameters.
To initialize the simulations, we set θ_0 ~ G(α_0 = 10, β_0 = 10), representing the initial status of the random common environment. We explicitly assume that the random common environment is initialized around the unit scale (with mean α_0/β_0 = 1). In doing so, one obtains a better understanding of the scale of the static parameters, the λ_j's, as a function of the actual count data. This is especially important when dealing with real count data, where the hyper-parameters of the priors for θ_0 and the λ_j's must be specified, which we discuss in the sequel. We assumed that J = 5 and that the static parameters λ_j were 2, 2.5, 3, 3.5, and 4, respectively; the values are close to each other to investigate whether the model can distinguish these static parameters. Finally, the common discount parameter γ was set to 0.30.
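The data generating process used for these studies can be sketched as follows (our illustration; we read the evolution (2) as a scaled-beta shock whose parameters come from the filtering recursion, so the simulation interleaves state draws, count draws, and the (α_t, β_t) update):

    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_mpsb(T=40, lam=(2.0, 2.5, 3.0, 3.5, 4.0), gamma=0.30,
                      alpha0=10.0, beta0=10.0):
        lam = np.asarray(lam)
        Y = np.zeros((T, lam.size), dtype=int)
        theta = rng.gamma(alpha0, 1.0 / beta0)          # theta_0 ~ G(10, 10)
        alpha, beta = alpha0, beta0
        for t in range(T):
            # state evolution (2): discounted scaled-beta shock
            theta = theta / gamma * rng.beta(gamma * alpha, (1.0 - gamma) * alpha)
            # observation model (1): conditionally independent Poisson counts
            Y[t] = rng.poisson(lam * theta)
            alpha = gamma * alpha + Y[t].sum()
            beta = gamma * beta + lam.sum()
        return Y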
Our PL algorithm uses N = 1,000 particles. Since all simulated counts are roughly between 0 and 40, with initial values up to 5-6, we set θ_0 ~ G(10, 10) and λ_j ~ G(2, 1) for all j (reflecting the fact that very high values of the parameter space do not make practical sense). Our numerical experiments revealed that tighter priors, especially on the λ_j's, help in identifying the true values of the parameters. Varying the hyper-parameters of the priors (within reasonable bounds with respect to the scale of the counts) does not have a significant effect on the overall fit of the models. When the priors are vague and uninformative (e.g., G(0.001, 0.001)), our algorithm has difficulty identifying regions close to the true values of the parameters at the outset. However, in such cases the mean filtered estimates, E(θ_t λ_j | D_t), are found to be in the near proximity of the real counts. When dealing with real data, this is not a major drawback as long as the model provides reasonable filtering estimates, since the true values of the static parameters will always be unknown. For practical reasons, we suggest that the initial state prior be set around the unit scale, as in θ_0 ~ G(10, 10). We note that the results were not sensitive to changes in the hyper-parameters of θ_0 as long as its mean stayed around the unit scale, as in G(1, 1), G(10, 10), or G(100, 100).
Table 1 shows the means and 95% credibility intervals (in parentheses) for the estimated static parameters across the 10 simulations. In each case, the PL algorithm identifies posterior distributions that are close to the true values of the parameters (λ_1 = 2, λ_2 = 2.5, λ_3 = 3, λ_4 = 3.5, λ_5 = 4, and γ = 0.3). In addition, we computed posterior coverage probabilities across the 10 simulations by checking whether the true value of each parameter fell within the 95% credibility bounds (i.e., the number of times the true value was within a given credibility interval across the 10 simulations). These coverage probabilities were estimated to be 0.9, 1.0, 0.7, 0.7, and 0.7 for the λ_j's and 1.0 for γ, supporting the algorithm's ability to cover the true values most of the time. Figures 1 and 2 show boxplots of the estimation paths of the static parameters for one of the simulations, where the straight line represents the true value of the parameter. As can be observed from the size of the boxplots, the posterior distributions exhibit more uncertainty for the first few observations. As more data are observed, the uncertainty tapers off and the posterior distributions converge to regions close to the true values of the parameters (similar plots were obtained for all 10 simulations). After observing 9-10 points in time, our algorithm learns the λ_j's very easily; learning γ, however, takes a few more observations. The dip in the value of γ around time period 10 may be attributed to the jump we observe in the simulated counts in 4 out of 5 series, visible in Figure 4 (from time period 9 to 10), since a lower value of γ implies a higher correlation in our model. After a few more observations, the posterior of γ returns to exploring regions around its true value.
The final posterior density plots of λ_1, . . . , λ_5 after observing all the data are shown in the top panel of Figure 3 for one of the simulations. All of the density plots cover the true values of the parameters, indicated by the vertical straight lines. The posterior distribution of γ in Figure 3 also shows that most of its support is close to the region of 0.30, the actual value of γ. The posterior mode was between 0.25 and 0.30, and the mean was estimated to be 0.27 (as there is more support to the left of the true value in the posterior distribution). In our proposed algorithm, the estimation of γ discussed in (19) requires a reasonably large value of K, the number of discrete categories for γ. For a discrete uniform prior defined over (0.001, 0.999), we experimented with K = 5, 10, 30, 50, 100, and 500. For all 10 simulations, the posterior distributions were almost identical once K was 30 or larger. For relatively small values of K, such as 5 and 10, the posterior distribution did not mix well and did not explore regions wide enough to converge to the right distribution. When fast estimation is of interest, we suggest keeping K in the region of 30-40, since increasing its dimension slows estimation: the negative binomial likelihood must be evaluated, at each point in time, "K × number of particles" times.
Another noteworthy question is how well our estimated filters track the actual data across simulations. To assess model fit, we computed the absolute percentage error (APE) for each simulation (a total of 200 observations per simulation) and report the median of these APEs. The results are shown in Table 2, where the estimates range between 14% and 25%. We report the median rather than the mean APE because of a few outliers that skew the results immensely: APE estimates typically range between 0 and 0.30, but some outliers lie in the range of 3-4, which makes averages very misleading. When we plotted histograms of the APEs for each simulation, the median and the mode of the distributions were very close to each other, with the means pulled away from these two measures by one or two very large values in the right tail. We did not report mean squared errors (MSE), as they would not be comparable across simulations: the scale of the counts varies from one simulation to another even for the same values of the static parameters. Figure 4 shows the posterior means of the filtered rates, E(θ_t λ_j | D_t), at each point in time versus the actual counts for a given simulated example. In this example, the series were moderately correlated, with sample pairwise correlations ranging between 0.59 and 0.69. The model captures most swings, except for rare cases in which the five series do not exhibit similar (upward/downward) patterns at a given point in time. For instance, around time period 9, the counts for series 1, 2, 4, and 5 exhibit a drop whereas series 3 shows an increase. Since the dependence across series is based on the random common environment, the filtered states around time period 9 decay for all five series (not only for series 1, 2, 4, and 5). Such disagreements lead to extremely large APE estimates, as discussed before, but usually occur no more than once or twice in a given simulated set.
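For reference, the fit metric used above is simply the following (a sketch of ours, assuming strictly positive counts):

    import numpy as np

    def median_ape(actual, fitted):
        # median absolute percentage error; the median is robust to the few
        # extreme APEs produced when the common environment moves against a series
        return np.median(np.abs(actual - fitted) / actual)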
Figure 5 shows the stochastic evolution over time of the state of the random common environment to which all five series have been exposed (i.e., p(θ_t | D_t), which is free of the static parameters) for a given simulation study. Such a common environment could, for instance, represent the economic environment to which financial and economic series are exposed, with swings representing sudden local changes in the marketplace. In our model, the θ_t's dictate the autocorrelation structure of the underlying state evolution, and they induce the correlations among the five series. The sample partial autocorrelation estimate at lag 1 for the means of these posterior state parameters was between 0.80 and 0.90, indicating strong first-order Markovian behavior in the random common environment.
As a final exercise, we also used the FFBS algorithm introduced in Section 2.3 to generate the full posterior joint distribution of the model parameters for each time period t, i.e., p(θ_1, . . . , θ_t, λ_1, . . . , λ_J | D_t). As pointed out by Storvik (2002), for any MCMC-based sampling method used sequentially, the chains need to be restarted at each point in time, and issues of convergence, thinning, and burn-in size need to be investigated. Therefore, the FFBS algorithm would not be preferred over the PL algorithm when fast sequential estimation is of interest, as in the analysis of streaming data in web applications. To quantify the difference in computing speed, we estimated one of the simulated examples using both algorithms. The models were estimated on a PC running the Windows 7 Professional OS with an Intel Xeon 3.2 GHz CPU and 6 GB of RAM. The PL algorithm takes about 17.25 (or 58.7) seconds with 1,000 (or 5,000) particles, while the FFBS algorithm takes about 270.74 seconds for 5,000 collected samples (with a thinning interval of 4), of which the first 1,000 are treated as burn-in. In both cases, we kept γ fixed at 0.30, even though the computational burden of estimating it with the FFBS algorithm would have been higher, at "K × number of samples = 5,000" versus "K × number of particles = 1,000" likelihood evaluations. We also note that the static parameters estimated with the FFBS algorithm were very close to those estimated with the PL algorithm in Table 1. We view the FFBS algorithm as an alternative when smoothing is of interest, which can be handled in a straightforward manner as discussed in Section 2.3; for sequential filtering and prediction, we prefer the PL algorithm due to its computational efficiency. Finally, we note that the results summarized above are based on the version of our algorithm that uses the sequential importance sampling step for state propagation instead of the rejection sampling method discussed in step 2 of our PL algorithm. Even though the results were identical in both cases, the computational burden of the rejection sampler was very high in some cases: our numerical experiments revealed that its acceptance rate became extremely small for certain values of the HGB density parameters a, b, and c. Therefore, unless a very efficient way of generating samples from the HGB density can be developed, we suggest using the extra importance sampling step when implementing our PL algorithm.
Example: Weekly Consumer Demand Data
To demonstrate the application of our model to actual data, we used the weekly demand for consumer non-durable goods (measured by the total number of trips to the supermarket) of two households in the Chicago region over a period of 104 weeks (an example of a bivariate model). In this illustration, Y_jt for t = 1, . . . , 104 and j = 1, 2 is the demand of household j during time period t, θ_t represents the common economic environment to which the households are exposed at time t, and λ_j represents the individual random effect of household j. The example is suitable for our proposed model: a quick empirical study of the data revealed that the weekly demands of these households exhibit correlated behavior over time (temporal dependence) as well as across households (dependence induced by the random common environment). The sample correlation between the two series was estimated to be 0.41, in line with our model structure, which requires positively correlated counts. In addition, the partial autocorrelation functions of both series show significant correlations at lag 1, justifying our use of a first-order Markovian evolution equation for the states. As before, we estimated the model using 1,000 particles and similar priors; specifically, we assumed θ_0 ~ G(10, 10), so that the initial state distribution is around the unit scale, and λ_j ~ G(2, 1). Figure 6 shows the time series plot of the two series (the straight red line represents household 1 and the dashed black line household 2) for 104 consecutive weeks. Figure 7 shows the mean posterior (filtered) estimates (red circles) and the 95% credibility intervals (straight lines) versus the actual data (black dots). In most cases the counts are within the credibility intervals, except for roughly the first ten time periods. This may be attributed to the fact that the counts for the two households were relatively low and close to each other initially, resulting in less global uncertainty in the counts and tighter intervals. Visually, however, the plots suggest that the model accounts for sudden changes in the environment (for instance, there is a sudden drop around weeks 80-85) while providing an overall reasonable fit for the counts of both households. Since the sample correlation between the two series was a relatively low 0.41, there were certain time periods in which the intervals did not cover the actual data: the first 10 observations, especially for series 2, look problematic, and the model is slow to adapt to the sudden drop between weeks 80-85. Nevertheless, more than 90% of the real counts lie within the credibility interval bounds of the filtered states. Even though, unlike in the simulated examples, we do not know the data generating process, the MAPE obtained for this example was 0.18, which is reasonably low.
The posterior distributions of γ and of λ_1 and λ_2 are given in Figure 8. The higher value of λ_1 indicates a stronger spending habit for household 1 than for household 2, given that both are exposed to the same economic environment; the mean estimates were 3.05 and 2.04, respectively, for the two static parameters. We also note that the posterior correlation between λ_1 and λ_2 was estimated to be 0.21, a positive posterior correlation, as expected. Furthermore, the posterior mean of γ was around 0.29. In our experience with both the simulated and the demand data, the posterior distribution of the static parameter γ did not vary significantly as more data points were observed (say, beyond 20-30 observations, as argued previously based on Figure 2). Therefore, a practical approach for cases where online learning and forecasting are of highest importance would be to treat γ as fixed (either at the posterior mean or the mode), which can significantly reduce the computational burden by making filtering very fast.
Figure 9 shows the boxplot of the posterior state parameters, in other words, how the common environment to which both households are exposed changes over time. The uncertainty about the environment is relatively low at the beginning (in the first 1-5 time periods) with respect to the following time periods. This is the same observation we drew from the credibility intervals and could be due to the small difference between the counts. Also, the environment is seen to be less favorable during roughly weeks 80-85, as there is a steep drop in the state estimates. We believe that the ability to model and predict household demand would be of interest to operations managers for long-term as well as short-term staffing purposes. For instance, related work in queuing systems requires modeling time-varying arrival rates, which are used as inputs to a stochastic optimization formulation that determines optimal staffing levels (see Weinberg et al. (2007) and Aktekin and Soyer (2012) and the references therein for recent work using Bayesian methods for modeling Poisson arrivals in queuing models). In addition, marketers may use these models to optimally time the placement of advertisements and promotions. For instance, a steep drop in the state parameters (as in weeks 80-85 in our illustration) might lead to staffing reductions to cut operational costs (employees may be diverted to other tasks), or the company may decide to launch a more aggressive advertisement/promotion campaign to cope with undesirable market conditions.
Figure 9: Boxplot of the dynamic state parameters θ_t for the customer demand example, representing the random common economic environment to which the two households are exposed.
Conclusion
In summary, we introduced a new class of dynamic multivariate Poisson models (the MPSB model) whose component series are assumed to be exposed to the same random common environment, and we considered their Bayesian sequential inference using particle learning methods for fast online updating. One of the attractive features of the PL approach, as opposed to its MCMC counterparts, is how quickly it generates particles sequentially in the face of new data; with MCMC methods, the whole chain needs to be restarted whenever new data are observed. The model allowed us to obtain analytic forms of both the propagation density and the predictive likelihood, which are essential for the application of PL methods and which few state space models outside the Gaussian family possess. In addition, our model admits sequential updating of sufficient statistics for learning the static parameters, another crucial and desirable feature for the PL method. Further, we showed how the proposed model leads to a new class of predictive likelihoods (marginals) for dynamic multivariate Poisson time series, which we refer to as the (dynamic) multivariate confluent hypergeometric negative binomial (MCHG-NB) distribution, and to a new multivariate distribution, which we call the dynamic multivariate negative binomial (DMNB) distribution. To illustrate the implementation of our model, we considered various simulations and actual data on weekly consumer demand for non-durable goods, and we discussed the implications of learning both the dynamic state and the static parameters.
To conclude, it is worth noting the limitations of our model. The first is the positive correlation requirement among series induced by (15): as the series are assumed to be exposed to the same random common environment, our model requires them to be positively correlated. We investigated the implications of this requirement in the estimation paths of our static parameters in Figures 1 and 2 and in the real count data example in Figure 7. Based on these plots, there may initially be a few observations that do not follow this requirement, where the static parameter estimation paths and the filtered means are not in line with their respective true values. However, if the data are positively correlated overall, our model converges to regions around the true values of the parameters (Figures 1 and 2), and the real counts fall within the 95% credibility intervals of the mean filtered estimates (Figure 7) after 8-10 time periods. Another noteworthy limitation is the identifiability issue that arises when the priors for the static parameters are uninformative. Even though the model keeps the product in the Poisson mean, θ_t × λ_j, close to the observed counts, it takes a very long time for the learning algorithm to explore regions close to the true values of the static parameters. To mitigate this issue, we suggest using a prior centered around unity for θ_0 and slightly tighter priors on the λ_j's, as discussed in our numerical examples. When dealing with real count data, we believe this approach is reasonable as long as the posterior filtered estimates provide coverage of the true counts, since we will never know the true values of the static parameters or the true data generating process.
In addition, we believe that the proposed class of models can be a fertile area for future research, in particular for developing models that account for the sparsity typically observed in multivariate count data. Our current model does not have a suitable mechanism for dealing with sparsity; however, modifying the state equation to a transition equation that accounts for sparsity may be possible and is currently being investigated by the authors. Another possible extension is to introduce the same approach into the general family of exponential state space models to obtain a new class of multivariate models; this is also currently being considered by the authors, with encouraging results.
Appendix A
Obtaining the resampling weights of the PL algorithm in step 1
The resampling weight w_t is the predictive likelihood p(Y_{t+1} | θ_t, λ, D_t), obtained by integrating the product of the conditional likelihood and the conditional prior over θ_{t+1}. The conditional likelihood is the product of Poisson terms, p(Y_{t+1} | θ_{t+1}, λ) = Π_j Pois(Y_{j,t+1}; λ_j θ_{t+1}), and the conditional prior (state evolution) is the scaled beta density of (3), with kernel proportional to θ_{t+1}^{γα_t − 1} (θ_t − γθ_{t+1})^{(1 − γ)α_t − 1} over (0, θ_t/γ). Rearranging the terms, p(Y_{t+1} | θ_t, λ, D_t) is proportional to the integral over θ_{t+1} of θ_{t+1}^{Σ_j Y_{j,t+1} + γα_t − 1} (θ_t − γθ_{t+1})^{(1 − γ)α_t − 1} e^{−(Σ_j λ_j) θ_{t+1}}. Using the transformation θ_{t+1} = (θ_t/γ) u, the term after the integral sign becomes the kernel of a hypergeometric beta density as in Gordy (1998b). Therefore, the integral equals the normalization constant C of that density, which can be expressed through the confluent hypergeometric function, CHF (Abramowitz and Stegun (1968)). The weight can thus be computed from the CHF with arguments a = Σ_j Y_{j,t+1} + γα_t, a + b = Σ_j Y_{j,t+1} + α_t, and c = (Σ_j λ_j) θ_t/γ. The weight w_t also represents the predictive likelihood (marginal) for the proposed class of dynamic multivariate Poisson models.
Obtaining the propagation density of the PL algorithm in step 2
The propagation density of the PL algorithm in step 2 can be computed as proportional to θ_{t+1}^{(Σ_j Y_{j,t+1}) + γα_t − 1} (θ_t − γθ_{t+1})^{(1 − γ)α_t − 1} e^{−(Σ_j λ_j) θ_{t+1}}, which is a scaled hypergeometric beta density defined over the range (0, θ_t/γ), denoted HGB(a, b, c), with parameters a = (Σ_j Y_{j,t+1}) + γα_t, b = (1 − γ)α_t, and c = Σ_j λ_j.
Appendix B
Here, we illustrate the conjugate nature of our model and show how the multivariate dynamic version is obtained, starting from the univariate static case.
Multivariate Case (with conditioning on θ t )
The form presented above is suitable when MCMC methods are used for estimation. In order to obtain the distributions required for the PL algorithm, we need an additional conditioning argument on θ_t (the state parameter from the previous period). We therefore extend Bayes' rule to include θ_t as p(θ_{t+1} | θ_t, D_t, λ) × p(Y_{t+1} | θ_{t+1}, λ) = p(θ_{t+1} | θ_t, D_{t+1}, λ) × p(Y_{t+1} | θ_t, D_t, λ), based on which we can show the following. The conditional prior is (θ_{t+1} | θ_t, D_t, λ) ~ ScaledBeta(γα_t, (1 − γ)α_t), defined over (0, θ_t/γ). The likelihood is (Y_{t+1} | θ_{t+1}, λ) ~ Π_j Pois(λ_j θ_{t+1}). The conditional posterior (propagation density) is a scaled HGB, where HGB stands for the hypergeometric beta distribution, with the parameters given above. The predictive likelihood density, (Y_{t+1} | θ_t, D_t, λ), is a new multivariate density whose normalization constant involves the confluent hypergeometric function (CHF), with arguments a = Σ_j Y_{j,t+1} + γα_t, a + b = Σ_j Y_{j,t+1} + α_t, and c = (Σ_j λ_j) θ_t/γ. We refer to this distribution as the multivariate confluent hypergeometric negative binomial (MCHG-NB) distribution. The MCHG-NB density has the same form as the resampling weight obtained in (18) for our PL algorithm.
| 11,018 | 2016-02-03T00:00:00.000 | [ "Mathematics" ] |
Characterisation of a micrometer-scale active plasmonic element by means of complementary computational and experimental methods
In this article, we investigate an active plasmonic element which will act as the key building block for future photonic devices. This element operates by modulating optical constants in a localised fashion, thereby providing an external control over the strength of the electromagnetic near field above the element as well as its far-field response. A dual experimental approach is employed in tandem with computational methods to characterise the response of this system. First, an enhanced surface plasmon resonance experiment in a classical Kretschmann configuration is used to measure the changes in the reflectivity induced by an alternating electric current. A lock-in amplifier is used to extract the dynamic changes in the far-field reflectivity resulting from Joule heating. A clear modulation of the materials’ optical constants can be inferred from the changed reflectivity, which is highly sensitive and dependent on the input current. The changed electrical permittivity of the active element is due to Joule heating. Second, the resulting expansion of the metallic element is measured using scanning Joule expansion microscopy. The localised temperature distribution, and hence information about the localisation of the modulation of the optical constants of the system, can be extracted using this technique. Both optical and thermal data are used to inform detailed finite element method simulations for verification and to predict system responses allowing for enhanced design choices to maximise modulation depth and localisation.
Introduction
Active plasmonics has been gaining attention from the research community for its role in the development of photonic devices [1,2], low-loss waveguides [3], and imaging systems [4]. It is an emerging subfield of plasmonics, which focuses on controlling electromagnetic fields at the nanoscale through external manipulation of the materials' properties. Here, we present the characterisation of a recently developed active plasmonic element [5] through two complementary experimental methods. Active plasmonic elements have applications in future imaging technologies and as modulators in optoelectronic couplers for photonic circuits. Finite element method (FEM) simulations are used to validate both experimental approaches, allowing for cross-verification of results and giving greater insight into the underlying physical phenomena.
Surface plasmon polaritons (SPPs) are mixed states of photons and electron density waves propagating along the interface between a conductor and a dielectric. As a result of this phenomenon, an electric field strongly confined in the z-direction is produced at the interface. As direct excitation of a smooth metallic surface does not generate SPPs, certain configurations have been developed to provide the conditions allowing for their formation; these were initially proposed by Otto [6] and Kretschmann and Raether [7]. To meet the momentum-matching conditions necessary for the formation of SPPs, such configurations rely on the presence of an optically denser dielectric material with which the light interacts before reaching the metal. Light-matter interactions that give rise to the formation of SPPs fall into a subfield of photonics known as plasmonics [8]. Investigations into SPPs provide vital insights into fundamental physical phenomena at the nano- and mesoscales [9-15], as well as more practical applications in Raman spectroscopy in the form of surface-enhanced Raman spectroscopy (SERS) [16] and other spectroscopic techniques [17,18]. SPPs also find uses in fields such as ultrasensitive detection methods [19,20], as their formation is highly dependent on refractive index changes, and sub-wavelength optics [21]. Our active plasmonic element also provides the potential for an even more sensitive technique. Active plasmonics has further advantages owing to the tunable nature of the underlying physics and its ability to interact with electronic circuits [22].
The performance of the active element can be characterised in terms of modulation localisation and depth. Localisation addresses how confined the active control is at the nanoscale, while modulation depth is an indicator of how well the external manipulation changes the properties of the device. Characterisation of both localised temperature distribution and optical constants plays a key role for further applications and is required to optimise the operation parameters for the active plasmonic element.
Previous studies [23,24] investigated the effects of gap size, using a finely tunable mechanical separation to control the intensity of a travelling SPP on silver. In contrast, in the present work, the modulation of the device's response is obtained through changes in the optical constants induced by electrical signals. It is well understood that heating affects the electrical permittivity of metals [25-28] and dielectrics [29,30]; this, in conjunction with Joule heating, is used to generate the desired effects.
The proposed active plasmonic element (Figure 1) consists of a nano- or mesoscale constriction in a 48 nm thick layer of silver. Applying a current through the silver layer results in increased heating at the constriction due to the reduced cross section. Consequently, given the dependence of the material's electric permittivity on temperature, the optical response changes locally.
Figure 1: An AFM image of a 10 × 10 μm² constriction in a 48 nm thin silver film on a sapphire substrate. The total area of the AFM image is 30 × 30 μm². Two cuts (black channels) split the silver layer (grey) into two parts, connected only by the bridge depicted in the image. Under the application of a current, the metal heats up due to the Joule effect. The presence of a constriction in the metal (the bridge) results in a localised heating effect.
In this work, we have adapted two unconventional correlative methods to investigate the system, exploiting their ability to directly probe the parameters of interest. The first method is based on an attenuated total reflection (ATR) setup whose versatility is enhanced through the addition of a lock-in amplifier (LIA) to investigate the changes in reflectivity induced by a modulated electric current. The acquisition of a surface plasmon resonance (SPR) curve is a common method to characterise a plasmonic far-field response [31], and it is highly sensitive to small changes in the refractive index of the metal and the dielectric. Because of this extreme sensitivity, small changes in the local temperature, and hence in the optical constants, result in subtle but appreciable changes of reflectivity in the SPR curve. Homodyne detection, with the modulated electric current as reference, enables a detailed examination of the microscale active plasmonic element. Temperature changes are induced by the current applied through the micro-optical element. Analysis of both the lock-in signal and the classical SPR curve from angular interrogation enables the deduction of the complex optical constants. Furthermore, the presence of strong peaks indicates the ability to maximise the modulation of the element's optical response. While the classical SPR response integrates over the whole illuminated area, the LIA curve extracts the changes in reflectivity localised to the active element. This leads to a modulated local near field, which has applications in the development of new imaging technologies using this localised field to go beyond the diffraction limit of light. Because the temperature in the constriction is geometry-dependent, it is necessary to map the thermal distribution in the vicinity of the element. The complementary experimental method therefore investigates the thermal response of this active plasmonic element at high spatial resolution. Knowledge of the distribution leads to predictions of how the near field will be locally affected, which is key to understanding the behaviour of the active plasmonic element. The heating distribution is investigated by means of scanning Joule expansion microscopy (SJEM) [32]. The technique provides a method to obtain the relative temperature distribution at the nanoscale from the measurement of the induced thermal expansion, which can be directly mapped in a standard AFM-based image using a LIA. This also provides great insight into nanomechanical thermal interactions.
Knowledge of both physical parameters, the optical response and the heating, allows cross-verification and understanding of the underlying mechanisms causing changes in the active plasmonic element. The FEM simulation software COMSOL Multiphysics has been used to predict and confirm the dynamic temperature distribution and the optical response of the system.
Experimental
The manufacturing of the active plasmonic elements employed in the present work is detailed in [5]. First, a 48 ± 2 nm film of silver is deposited on a sapphire substrate via physical vapour deposition (PVD). Two separate AFMs are then used to machine channels in the silver film to create the desired constriction, which in this case measures 10 μm. The tip of the AFM is held at a set loading force in contact with the thin metal film and moved to remove the silver.
The investigations carried out consist of two correlative experimental methods. The first is the enhanced SPR experiment discussed above, based on an ATR setup whose sensitivity is enhanced through the addition of a lock-in amplifier. The second method probes the temperature distribution surrounding the active element through SJEM, mapping the thermal expansion of the metallic surface using an AFM. Both methods are further reinforced through the use of three-dimensional simulations. A description of the experimental methods of both investigations is given below, as well as the FEM parameters used to simulate the behaviour of an active plasmonic element.
Enhanced SPR experiment
For angular interrogation of the plasmonic response of the active plasmonic element, an enhanced SPR experiment ( Figure 2) was used. The system under investigation consists of a constriction in the silver film such that a current modulated at a particular frequency affects its optical properties via Joule heating. The signal is acquired by a photodiode and further processed by a lock-in amplifier (Ametek 5210), with the driving signal of the modulated voltage acting as reference. The DC component of the light results in a typical SPR curve while the modulated signal from the lock-in amplifier produces a more sensitive signal. This yields the variation in the optical constants as a function of the thermal modulation of the active element.
An Oxxius single-frequency CW laser at 561.4 nm, typically incident at 0.2 mW after filtering, was used as the light source. The collimated light from an optical fibre was spatially filtered through a 300 μm aperture and a polariser aligned to the plane of rotation, ensuring p-polarisation for SPP excitation, as seen in Figure 2. The reference light is recorded after the aperture, reflected from a cube beam splitter, while the signal photodiode is placed on the 2θ arm of a high-accuracy (18 arcsec resolution) Siemens θ-2θ X-ray diffractometer stage with an inbuilt goniometer to collect the light reflected from the interface. The absolute angular position is determined manually by aligning multiple back reflections, with an estimated error of approximately 4.5 arcmin. The stage's gearing is driven by a high-current pure sine signal such that motor steps cannot be missed, eliminating this as a source of error. The Kretschmann configuration was placed horizontally at the centre of the stage rotating at θ. It consists of a fused silica prism optically coupled to a sapphire slide using refractive-index-matching oil (n = 1.516); this configuration was also used in the simulations. The sapphire slide served as the deposition substrate for the 48 nm thin silver film. The incident angles were referenced to the air-prism interface. The sinusoidal current was generated using a function generator with a current buffer to ensure impedance matching to the system under investigation. A transimpedance-amplified photodiode signal with active DC and high-frequency filtering was fed to the lock-in amplifier, while the non-filtered signal was recorded as the SPR reflected signal. The modulated voltage applied was of the same form as in the simulations, with frequency f = 631 Hz and an offset such that the resultant signal was entirely positive.
At each angular position, the signals from the photodiodes were continuously recorded for one time constant following an adequate rest period, after which the in-phase components of the lock-in amplifier (X-components) were recorded using an Arduino-controlled analog-to-digital converter. The reference phase of the LIA was chosen to maximise X. The LIA was set to a time constant of τ = 300 ms, with second-order low-pass filtering and a sensitivity of 30 mV to avoid spurious input overloads. Once this set was recorded, the stage was moved 1 × 10 −2 degrees to the next angular position in the scan. The substrate used to generate the SPR response is sapphire with a refractive index of n = 1.7717 at λ = 561 nm.
SJEM experiment
To further characterise the active plasmonic element, complementing the SPR curve measurements above, the thermal distribution due to Joule heating of the active element was measured using scanning Joule expansion microscopy. The application of a current to the metallic element results in Joule heating. As stated previously, this heating produces an appreciable change in the optical constants of the silver, enabling modulation of its plasmonic response. The heating also causes thermal expansion of the element, and this expansion deflects an AFM cantilever scanning the surface. If a sinusoidal voltage is applied to an electrically conducting sample, such as the active plasmonic element discussed here, the resulting thermal expansion is periodically modulated at the frequency of the applied voltage. When an AFM scan is performed on such a periodically modulated element, the expanded surface is also captured by the AFM. A lock-in amplifier can then be used to extract the periodic expansion of the topography from a surface scan performed while the element is modulated at a known frequency. This is the basis of SJEM measurements.
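To make the lock-in extraction concrete, the following numerical sketch (ours, with entirely made-up amplitudes) shows how homodyne detection recovers a picometre-scale periodic expansion buried in topography drift and noise:

    import numpy as np

    fs, f_drive = 100_000.0, 1227.0                # sample rate and drive frequency (Hz)
    t = np.arange(0.0, 0.05, 1.0 / fs)
    rng = np.random.default_rng(4)

    # synthetic deflection: slow topography drift + 5 pm periodic expansion + noise
    expansion = 5e-12 * np.sin(2.0 * np.pi * f_drive * t)
    signal = 2e-9 * t + expansion + 1e-11 * rng.standard_normal(t.size)

    # homodyne detection: multiply by the reference and low-pass (here, a mean)
    ref = np.sin(2.0 * np.pi * f_drive * t)
    X = 2.0 * np.mean(signal * ref)                # in-phase component, ~5e-12 m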
Here, SJEM measurements have been performed using an Oxford Instruments Cypher-S AFM and a Signal Recovery 7270 DSP lock-in amplifier. Figure 3 illustrates the setup used to perform such measurements. An Adama NM-RC probe (spring constant: 290.3 N/m, nominal resonance frequency: 814 kHz) was used in contact mode to scan the topography of the electrically modulated sample with a loading force of 1.9 μN. This particular probe is intended for nanomechanical operations such as lithography and machining. The high spring constant of this cantilever has the advantage of minimising the unwanted deflection resulting from electrostatic interaction between the potential on the surface and the probe. The tip is constructed from wear-resistant diamond with a tip radius of 10 ± 5 nm. The deflection sensitivity of the probe was measured to be 81.09 nm/V.
To perform an SJEM measurement, a sinusoidal voltage is applied to the metallic element, and the surface of the element is then scanned with an AFM in contact mode. Contact mode was selected to ensure the probe captured the deflection due to thermal expansion while minimising artifacts caused by the periodic potential on the surface. The sample was driven at a frequency of 1227 Hz, well below the 170 kHz limit on the response time of the z-piezo control of the scanner in the Cypher AFM. This frequency is also well below the resonance frequency of the selected cantilever, avoiding unwanted resonant oscillations of the cantilever due to the periodic force applied by the expanding surface. In principle, the drive frequency of the active element could be the same for both experiments; here, the higher frequency provided an increased number of heating cycles over which to integrate, given the chosen scan parameters listed below. Ideally, the active element would be modulated in the same fashion in both experiments; however, the information extracted in both cases does not depend on the drive frequency, since the thermal relaxation time of such a metallic element on a sapphire substrate is of the order of nanoseconds, so there is no need to match drive frequencies.
AFM scans were performed on a 30 × 30 μm² window at 512 points per line, giving a pixel size of 58.7 nm. Scans were performed at a scan rate of 0.1 Hz, so the tip dwells over each pixel of the image for approximately 10 ms. At a drive frequency of 1227 Hz, this corresponds to more than ten cycles of expansion and contraction per point for the lock-in amplifier to integrate over. The time constant of the lock-in process was set to 10 ms to match the dwell time per pixel, and the sensitivity of the LIA was set to 200 μV. The applied voltage was set based on the current density through the sample so as to match the conditions of the SPR measurements above as closely as possible. The Cypher-S AFM is capable of detecting dynamic changes in sample height down to the sub-picometre scale. Scans were performed at multiple current densities to demonstrate the differences in the temperature distribution surrounding the sample as a function of applied current; for higher current densities, a wider temperature distribution is expected. A current buffer was used to ensure the current density through the element remained consistent throughout an AFM scan. The probe was left electrically floating during scanning: while this is counterproductive from the perspective of minimising electrostatic interaction, it eliminates the possibility of current flow through the tip to ground. As with the LIA phase selection in the SPR measurements discussed above, the phase was chosen to maximise the X-component of the LIA signal. SJEM measurements were performed for current densities of 45, 48.2, 51.8, 54, and 58 mA/μm²; the results are presented below.
Simulations
Finite element analysis simulations using COMSOL Multiphysics 6.0 were employed to cross-verify the results obtained from both methods. The schematic in Figure 4 illustrates the steps involved in performing such simulations. Initially, a model of the silver structure described above was built on top of a sapphire substrate; this model is a representative subsection of the substrate and silver film on which the physical active element was fabricated. The thermal behaviour of the element was simulated for an applied alternating electrical potential with a DC offset through the thin silver layer: an alternating voltage V_in(t) = V_0 · (1 + sin(2πft)), with V_0 = 200.0 mV and f = 631 Hz, was applied across the silver layer. The voltage was chosen to guarantee correspondence with the experiments in terms of current density. The application of a voltage with an AC component results in a spatial temperature distribution that changes periodically as a function of time. The temperature distribution shown in the Results section below in Figure 12 refers to the point in time t = 1.68 ms. The resulting temperature distributions were used to cross-verify the experimental findings from the SJEM experiment.
The second part of the computational work focused on the changes induced by temperature on the optical properties of the system under analysis. Such changes lead to differences in the reflectivity curve acquired by means of the enhanced SPR experiment. In order to back up the experimental findings, electromagnetic simulations were performed both in a cold state, using room-temperature refractive indices, and in a hot state. Johnson and Christy [33] electrical permittivity values were used to model the silver layer at room temperature. The electromagnetic simulations of the heated system were performed starting from a temperature distribution extracted from the thermal simulations described previously (t = 1.68 ms), meaning that a temperature value is associated with each spatial point. Starting from two tables, one for sapphire and one for silver, containing the spatial coordinates and corresponding temperature values, the data of Winsemius et al. [27] were used to model the influence of temperature on the silver permittivity, while for sapphire the results of Thomas et al. [34] were employed. The resulting tables, again one per material, containing the spatial points and refractive indices were subsequently imported into COMSOL to define the heated materials. All values were evaluated at the selected operational wavelength λ = 561 nm. For reference, Table 1 reports the refractive index values for both sapphire and silver at room temperature and at the highest temperature reached by the simulated structure. The refractive index of air was taken as n_air = 1.00.
Figure 6: Enhanced SPR curves for a sapphire-silver-air configuration excited at 561 nm, showing the typical shape (red and orange) and the ΔR response, taken from the lock-in amplifier X-component, i.e., the effective difference between the coldest and hottest states (blue and green), for a modulated 20 μm square active element. Orange shows the experimental curve normalised to the simulation maximum for an applied (RMS) current density of 71 mA/μm². The angular position of the TIR response of the experimental SPR curves has been calibrated to the simulation. The solid lines are a centred moving average for each experimental curve; experimental errors are considered below in Figure 8. The results of the COMSOL 2D optical simulations were computed for a system at room temperature and with a temperature distribution extracted from the thermal simulations; the curves presented are the average of those two states (red) and their difference (green).
The optical response of the sapphire-metal-air structure was quantified in terms of reflectivity at different excitation angles, computed by means of COMSOL optical simulations. A two-dimensional simulation space was set up in COMSOL, and the Electromagnetic Waves, Frequency Domain module was used to define the problem (see Figure 5). The input port was set to simulate an incoming TM wave, and Floquet periodic conditions were imposed on the side boundaries. A perfectly matched layer (PML) was added on top of the air layer in order to avoid back reflection. Total reflectivity data were collected at the output port.
Integrating this simulation setup with the temperature distribution output from the previous step allows for an accurate representation of temperature-dependent changes in the reflectivity curve. This yields a representation of the modulation depth that is observed when the difference between hot and cold states is extracted with a lock-in amplifier.
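As a lightweight cross-check on such simulations, the reflectivity of the three-layer sapphire-silver-air stack can also be computed from textbook Fresnel coefficients. The sketch below (ours; the silver index is an illustrative room-temperature value of the order of the Johnson and Christy data, not a fitted constant) reproduces the TIR edge and the plasmonic dip; re-evaluating it with a slightly perturbed silver index and subtracting yields an analogue of the lock-in ΔR curve:

    import numpy as np

    lam, d = 561e-9, 48e-9                  # wavelength and silver thickness (m)
    n1, n3 = 1.7717, 1.00                   # sapphire substrate and air
    n2 = 0.12 + 3.4j                        # illustrative silver index near 561 nm

    theta = np.radians(np.linspace(35.0, 55.0, 2000))   # internal incidence angles
    kx = 2.0 * np.pi / lam * n1 * np.sin(theta)

    def kz(n):
        return np.sqrt((2.0 * np.pi / lam * n) ** 2 - kx ** 2 + 0j)

    def r_p(ni, nj):
        # p-polarised Fresnel coefficient between media i and j
        return (nj**2 * kz(ni) - ni**2 * kz(nj)) / (nj**2 * kz(ni) + ni**2 * kz(nj))

    phase = np.exp(2j * kz(n2) * d)
    r = (r_p(n1, n2) + r_p(n2, n3) * phase) / (1.0 + r_p(n1, n2) * r_p(n2, n3) * phase)
    R = np.abs(r) ** 2                      # SPR curve: dip near the plasmon angle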
Results
First, we present the results of the SPR curve measured with the lock-in amplifier, which captures the difference between a heated and a room-temperature constriction. Subsequently, the results of the SJEM experiment are presented.
Enhanced SPR experiment
Figure 6 shows the experimental data and simulated SPR curves. All expected main features of the curves are present in both approaches. The SPP angle corresponds to a minimum in reflection and, in this case, was 46.07°. The results show a critical total internal reflection (TIR) angle at approximately 42.7° from the surface normal. Both results are in line with theoretical predictions for the employed substrate.
While the experimental measurements exploit a continuous sinusoidal modulation that oscillates between two states (i.e., hot and cold), the simulations compare only these extreme states. The simulation was performed on sapphire and silver at room temperature and in the heated state. In the experiment, the resulting SPR curve, recorded in parallel to the extracted lock-in curve, is effectively a weighted average of all temperature states. Therefore, the simulated SPR curves obtained from the cold and the heated structure were averaged to obtain the red curve in Figure 6, and their difference was taken to generate the green curve.
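A minimal sketch of this post-processing step, reusing the `reflectivity_tm` sketch above; the heated permittivity shift is an illustrative assumption, not a fitted value:

```python
import numpy as np

# Combine the two simulated extreme states the way the experiment does:
# the photodiode sees approximately the mean curve, the lock-in the difference.
angles = np.linspace(40, 50, 500)
n_ag_cold = np.sqrt(-12.9 + 0.43j)   # assumed room-temperature silver
n_ag_hot = np.sqrt(-12.3 + 0.60j)    # hypothetical heated silver
R_cold = np.array([reflectivity_tm(t, n_ag_cold) for t in angles])
R_hot = np.array([reflectivity_tm(t, n_ag_hot) for t in angles])

R_avg = 0.5 * (R_cold + R_hot)   # compare with the averaged SPR curve (red)
dR = R_hot - R_cold              # compare with the lock-in X component (green)
```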
The slight discrepancy between the minimum in the measured SPR curve (orange) and the theoretical version (red) is assumed to be ascribable to the data set used to model the simulated materials. To describe silver's optical constants at room temperature, Johnson and Christy [33] data were employed, but the refractive index of the silver material used in the experiments differs slightly. The measured SPR curve is slightly broader than the simulated one. This is due to experimental broadening (in particular, the divergence of the laser beam), surface roughness, and further temperature-induced degradation effects that are not taken into account in the simplified simulation setup employed.
In order to determine the quality of the agreement between simulations and experimental data, we additionally carried out a comparative study between two electromagnetic simulations. Both simulations were set up at room temperature (293.15 K, cold state) and used the same substrate, but the silver layer was characterised in two different ways, using two of the most cited sources in the field. First, the refractive index was defined using the Johnson and Christy dataset [33]. Subsequently, the same simulation was repeated using data from Wu and co-workers [35]. The resulting SPR curves are plotted in Figure 7.
Directly comparing the simulated and experimental plasmonic resonance positions between the SPR curves in Figure 6, there is a difference of 0.07° in the position of the plasmonic dip. While this is outside the estimated error accumulation of approximately 0.02° on the experimental data, the two simulations show a discrepancy of 0.14°. This clearly shows that the shift measured between the experimental SPR curve and the simulated one can be traced back to the difference between literature refractive index data and the actual optical properties of the film.
On close inspection of Figure 6, in contrast to the simulations, the experimental measurements reveal additional peaks on the leading edge of the main peak at about 45.0°. Detailed analyses reveal that these are due to ageing of the silver and surface modifications. There are others at about 45.5°; altogether, these will be discussed in a subsequent paper. At this stage, experiments with active elements of different sizes reveal that the ratio between the heated and non-heated surface illuminated by the laser beam influences the intensity of this effect. The ability to observe these details demonstrates the high sensitivity of this technique to differences in the optical constants of the small areas under investigation. This effect, while highly interesting, deserves its own dedicated investigation and, hence, falls outside the scope of this paper.
In Figure 8, the response of both the typical SPR curve and the LIA signal at four selected current densities and, hence, varied temperatures, is shown. Current densities of 71-143 ± 1 mA/μm² in steps of ≈24 mA/μm² were used for a 20 μm square element. While not identical to the SJEM experimental current densities, they are of similar magnitude, 45-58 ± 1 mA/μm², hence enabling comparison between the experiments. The increasing temperature is seen through the typical SPR curves as a broadening of the plasmonic dip and a shift of the minimum angle towards higher angles, as expected.
Experimentally, the SPR curve is measured directly by the photodiode. The modulation depth of the reflected light selected here, in this simple geometry, is of the order of 1% with respect to the total light intensity. To improve the signal-to-noise ratio, a lock-in amplifier is used to detect modulated signals referenced at the driving frequency. The aforementioned SPR characteristics manifest in the X-component signal of the lock-in amplifier, which originates from the changes in temperature and is highly sensitive to a modification of the system's morphology with a fixed phase relationship.
As the temperature is modulated, these alterations occur in tandem with the driving frequency. The increased temperature effects a change in the optical constants of the system, thereby altering the SPR curve, including its resonance position. The difference is measured directly with the driving signal as reference using a lock-in amplifier. This in-phase signal of the LIA is a direct measurement of the changes in the SPR curve.
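The essence of this detection scheme can be sketched in a few lines of Python. This is a generic digital lock-in, not the instrument used in the paper, and all values below (sample rate, drive frequency, noise level) are illustrative assumptions:

```python
import numpy as np

fs, f_mod, T = 1.0e6, 1.0e3, 1.0   # sample rate (Hz), drive (Hz), duration (s)
t = np.arange(0.0, T, 1.0 / fs)
dR = 0.01                          # ~1% modulation depth, as quoted in the text
signal = 1.0 + dR * np.cos(2 * np.pi * f_mod * t) \
             + 0.1 * np.random.randn(t.size)      # DC level + modulation + noise

ref = np.cos(2 * np.pi * f_mod * t)   # reference locked to the driving signal
X = 2.0 * np.mean(signal * ref)       # in-phase (X) component after averaging
print(f"recovered modulation depth: {X:.4f}")     # converges to dR
```

Multiplying by the reference and averaging rejects the DC level and broadband noise, which is why the small ΔR signal can be pulled out of a much larger background.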
In conclusion, both the ATR response and its characteristic plasmonic dip in reflectivity are present in the simulated curve, and their angular positions show very good agreement with the experimental results. The relative difference between the two curves behaves in line with the observed lock-in signal, as reported in Figure 6, validating the measurements.

SJEM experiment

Figure 9 shows an AFM image of the silver surface in a 30 × 30 μm² window centred on the active element. On the left is a topographical image of the surface showing the element, which consists of a constriction of approximately 10 × 10 μm² in the silver film. The accompanying colour bar denotes the height of the sample in reference to the sapphire substrate below. The dark regions are areas where the silver has been removed as described above. On the right is an SJEM measurement showing the thermal distribution around the active element, obtained by mapping the thermal expansion of the silver element for each pixel of the image. The SJEM image shows the largest expansion occurring at the centre of the element, decreasing away from the centre approximately following a Voigt function. The associated colour bar shows the highest expansion as lighter coloured regions and areas of lower expansion as darker regions. The expansion around the element is representative of the temperature distribution in the heated state of modulation and appears to follow the expected spread.

Figure 10 shows a Voigt function fit to a profile along the centre of the expansion image seen in Figure 9. The measured data is well represented by this fit function. The data shown in Figure 10 corresponds to the profile shown on the bottom left of Figure 11. The data has been translated along the x-direction so as to centre the peak of the fit at 0 μm for the purposes of visualisation. The area of the element is shown by the shaded region labelled "Constriction area" in Figure 10. From this shaded region we can see that the Voigt fit is centred slightly to the side of the centre of the element. This difference has been attributed to asymmetries in the fabricated structure resulting from the machining process [5].

As with the enhanced SPR measurements above, the effect of varying current has been investigated through repeated SJEM measurements under differing applied V(t). Figure 11 shows the results of these measurements. The images at the top show the spatially resolved thermal expansion across the active plasmonic element. The plots below show line profiles through the centre of the element at each current density. The shared y-axis allows for the comparison of expansion values measured at each current density. As the current density increases, the expansion of the element is seen to increase. This is in line with expectations, as the temperature of the element should increase with increasing current density through the element. It should be noted that the positive x-direction here runs from top to bottom along the profiles shown on the SJEM images in Figure 11. As expected, raising the current causes an expansion of the element in each case. Additionally, the thermal distribution broadens as the temperature of the element increases. The following detailed FEM simulations were used to verify these results and can additionally be utilised to extract absolute temperatures from the dynamic SJEM measurements.
In Figure 12, we see the thermal effects of the modulated current across the structure under analysis, simulated through COMSOL Multiphysics. As shown in the first panel, the highest-temperature region is clearly localised in the neighbourhood of the restriction, and the distribution spreads across the surrounding area following a profile with a FWHM of ≈17.7 μm. This is more clearly depicted in Figure 12d, illustrating the temperature profile as a function of the x-coordinate. For comparison, Figure 10 shows an equivalent profile obtained experimentally for a 10 × 10 μm² structure with a FWHM of ≈19.4 μm. The simulation results for the thermoelectric effects are in line with the experimental results observed through AFM detection of thermal expansion, showing the same distribution, and a similar FWHM, as Figure 10.
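The Voigt analysis used for the SJEM profiles can be reproduced with standard SciPy tools. The profile below is synthetic and its parameter values are placeholders, so this only illustrates the fitting and FWHM-extraction procedure, not the paper's data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def voigt(x, amp, x0, sigma, gamma, offset):
    """Scaled Voigt profile with a constant background."""
    return amp * voigt_profile(x - x0, sigma, gamma) + offset

# Illustrative synthetic expansion profile (x in micrometres)
x = np.linspace(-20.0, 20.0, 401)
y = voigt(x, 50.0, 0.5, 3.0, 4.0, 0.1) + 0.02 * np.random.randn(x.size)

popt, _ = curve_fit(voigt, x, y, p0=[40.0, 0.0, 2.0, 2.0, 0.0])
sigma, gamma = popt[2], popt[3]
fG = 2.0 * sigma * np.sqrt(2.0 * np.log(2.0))   # Gaussian FWHM component
fL = 2.0 * gamma                                # Lorentzian FWHM component
# Olivero-Longbothum approximation for the Voigt FWHM:
fwhm = 0.5346 * fL + np.sqrt(0.2166 * fL**2 + fG**2)
print(f"fitted FWHM ≈ {fwhm:.1f} μm")   # the paper quotes ≈19.4 μm for its element
```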
Discussion
This research characterises an active microscale optical element which can be electrically controlled. The resulting responses of the system were investigated using two experimental approaches. First, a homodyne-detection enhanced SPR setup was used, providing access to the modulation of the electric field induced by the varying Joule heating. Second, the spatially resolved thermal distribution of the active plasmonic element and the surrounding environment was measured through the use of SJEM. This information is required to fully model the spatial distribution of the induced electric field changes.
While this investigation focused on the behaviour of a single active plasmonic element, the combination of high localisation and the ability to modulate individual plasmonic elements at unique frequencies enables the design of arrays of such active elements which can be operated simultaneously. This could be applied to arrays of elements whose size and pitch are below the diffraction limit of light, enabling sub-diffraction-limit applications.
As before, a combination of experimentally verified simulations and direct investigation through SJEM allows various parameters to be analysed, such as the exact geometry and the corresponding localisation. This aids optimisation aimed at pushing the physical constraints of the design further.
Both experimental methods, along with simulations, provide the basis for an optimised design of the discussed active plasmonic element giving access to a variety of possible applications. This plasmonic element was developed to be used as a key feature in a new sub-diffraction-limit imaging technique currently under development.
Conclusion
In summary, using correlative methods to investigate a single device provides complementary information about desirable material properties intrinsic to the active plasmonic element.
Only the combination of both experimental methods discussed here provides the complete set of information required for an optimised design of the element. | 7,751.8 | 2023-01-16T00:00:00.000 | [
"Physics"
] |
Protection of Mild Steel Corrosion in Sulphuric Acid Environment Using Wheat Starch
The corrosion of mild steel in 0.5 M H₂SO₄ solution and its inhibition by wheat starch (WS) were investigated using weight loss and potentiodynamic polarization measurement techniques. Gravimetric results revealed a significant reduction in the corrosion rate of mild steel in the inhibited solution compared to the blank solution, and the inhibition efficiency was found to depend on the concentration of the WS. Potentiodynamic polarization results confirmed that WS exhibited mixed-type inhibition behaviour, though the cathodic effect was more pronounced. The adsorption of WS on the corroding metal surface followed the Langmuir isotherm model. In addition, the trend of inhibition efficiency with temperature, the activation energy, and the heat of adsorption parameters revealed a strong interaction between the WS constituents and the corroding metal surface, indicating that WS lowered the corrosion rate by blanketing the mild steel surface through a chemical adsorption mechanism. The mechanism of inhibition is discussed in the light of the chemical structure of starch.
Introduction
Metals and alloys enjoy wide acceptance in structural and fabrication applications in the industrial sector owing to their excellent mechanical performance. Corrosion is the degradation of a metal or alloy as a result of the environment surrounding it. It cannot be entirely prevented, but it can be controlled through the use of macromolecules [1-10], organic compounds [11-16], or extracts from natural plants [17-23] as corrosion inhibitors.

Polymers are macromolecules formed by the repetition of smaller molecules (monomers) that are covalently bonded together. They are widely used as plastics, textile materials, rubber, adhesives, drilling mud, binders and thickeners in surface coatings, dyes, pigments, etc., because of the ease of processing and of modifying their physical and chemical properties. The utilization of polymeric materials (natural and synthetic) in controlling the corrosion of metals and alloys in various aggressive environments has gained wide acceptance as a result of some inherent properties of polymers, which include biodegradability, non-toxicity, ready availability, low cost, renewability, and water solubility. Polymers and their blends are preferred over simple organic compounds as corrosion inhibitors because they possess multiple functional and substituent groups, either in their backbone or in side chains, which act as sites at which electrons are donated to or accepted from the surface charge on the metal. Some organic compounds are effective corrosion inhibitors due to the presence of hetero-atoms (nitrogen, sulphur, oxygen, etc.) or combinations of these atoms in their molecular structures [24-27]. Polymers function as effective corrosion inhibitors even at low concentrations [28] by forming complexes, through their multiple functional and substituent groups, with metal ions; these complexes are adsorbed on the metal surface and form protective films at the interface between the metal and the aggressive solution [29]. The inhibitive performance of polymers may be related to their molecular structure and to their solubility in the solvents of exposure.

Recently, greater attention has been directed towards the utilization of eco-friendly polymers for controlling metal corrosion in acid-induced environments, in an attempt to protect the environment, safeguard human life, save the economy, and reduce material loss. Among the eco-friendly polymers used in controlling metal corrosion in aggressive media, starch (a biopolymer) has not been utilized much [30-32] in the field of corrosion science, despite the many available sources of starch.
To the best of our knowledge, no work in the scientific literature has reported the use of wheat starch as a corrosion inhibitor for mild steel in sulphuric acid solution. This informed our decision to investigate the effectiveness of wheat starch as a corrosion inhibitor. The purpose of this work is to investigate the inhibiting effect of wheat starch on mild steel corrosion in 0.5 M H₂SO₄ solution using weight loss and potentiodynamic polarization measurement techniques.
Metal Preparation
The mild steel sheet (with percentage composition C = 0.06, Si = 0.03, Mn = 0.04, Cu = 0.06, Cr = 0.06, and the remainder Fe) was mechanically press-cut into coupons of dimensions 3 cm × 4 cm × 0.1 cm. The coupons were degreased in absolute ethanol, dried in acetone and warm air, and subsequently stored in a moisture-free desiccator prior to use.
Test Solutions
The sulphuric acid used was of BDH AR grade. The other reagents (sodium hydroxide, acetone and ethanol) were of AnalaR grade, and double-distilled water was used for preparing the blank and inhibited solutions. The blank corrodent was 0.5 M H₂SO₄ solution. The inhibitor, wheat starch (WS), was processed using a method described elsewhere [33]. Test solutions of the WS were prepared in the concentration range 0.2-0.8 g/L.
Weight Loss Experiment
The cleaned and weighed coupons were suspended using glass hooks and rods in beakers containing 200 ml of test solution. All experiments were performed under total immersion conditions in aerated and unstirred test solutions at room temperature (30 ± 1 °C). Weight loss was determined with respect to time by retrieving the coupons from the test solutions, cleaning, drying, and reweighing them at 24 h intervals progressively for 5 days. The difference between the weight of a coupon at a given time and its initial weight was taken as the weight loss. All tests were run in triplicate to obtain good reproducibility, and the average values for each experiment were used in subsequent calculations. The corrosion rate (CR) was determined using Equation 1:

CR = ΔW / (ρ A t)    (1)

where ΔW is the weight loss in grams (g), ρ is the density of the mild steel coupons (g/cm³), t is the time of exposure (h) and A is the exposed surface area of the coupons (cm²). The percentage inhibition efficiency (I.E. %) was calculated according to Equation 2:

I.E. % = [(CR_blank − CR_inh) / CR_blank] × 100    (2)

where CR_inh is the corrosion rate in the presence of inhibitor and CR_blank is the corrosion rate in the absence of inhibitor.
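The bookkeeping behind Equations 1 and 2 is simple enough to script. The sketch below assumes a mild steel density of 7.86 g/cm³ and uses placeholder weight losses, not the paper's measured values:

```python
RHO = 7.86                               # assumed mild steel density, g/cm^3
AREA = 2 * (3*4 + 3*0.1 + 4*0.1)         # coupon area (cm^2) for 3 x 4 x 0.1 cm

def corrosion_rate(dw_g, hours):
    """Corrosion rate from weight loss dw_g (g) over an exposure time (h); Eq. 1."""
    return dw_g / (RHO * AREA * hours)

def inhibition_efficiency(cr_blank, cr_inh):
    """Percentage inhibition efficiency; Eq. 2."""
    return (cr_blank - cr_inh) / cr_blank * 100.0

cr_blank = corrosion_rate(0.250, 24)     # blank 0.5 M H2SO4 (placeholder value)
cr_inh = corrosion_rate(0.024, 24)       # with 0.8 g/L WS (placeholder value)
print(f"IE = {inhibition_efficiency(cr_blank, cr_inh):.1f} %")
```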
Potentiodynamic Polarization Experiment
The potentiodynamic polarization measurements were performed in a computer-controlled electrochemical workstation (PARC-263 model). The experiments were carried out in a cylindrical glass electrolytic corrosion cell with a graphite rod as counter electrode (CE), a saturated calomel electrode (SCE) as reference electrode (RE), and a metal coupon as the working electrode. The working electrode was immersed in the test solution and allowed to corrode freely for 30 min to attain the open circuit potential (OCP). The potentiodynamic polarization results were obtained in the potential range of ±250 mV versus the corrosion potential using the linear sweep technique at a scan rate of 0.333 mV/s. All measurements were carried out at room temperature (30 ± 1 °C). Corrosion parameters were extrapolated from the polarization data using PowerSuite software. Each test was run in triplicate to verify the reproducibility of the system. The inhibition efficiency was calculated from Equation 3:

I.E. % = [(I²corr − I¹corr) / I²corr] × 100    (3)

where I¹corr is the corrosion current in the presence of inhibitor and I²corr is the corrosion current in the absence of inhibitor.
Weight Loss Measurement Results
Effect of Inhibitor Concentration: The effect of inhibitor concentration on the corrosion rate and inhibition efficiency of mild steel in 0.5 M H₂SO₄ solution was studied using weight loss measurements. The results show that wheat starch decreased the corrosion rate of mild steel in the acidic solution (Table 1) and that the inhibition efficiency increased with rising inhibitor concentration at all concentrations used in the study. This could be attributed to the presence of two glucose polymers (amylopectin and amylose) and of glucose in the starch molecule, since starch is partially converted to glucose units in acid solution [34]. Furthermore, an inhibition efficiency of 90.48% was obtained at the highest inhibitor concentration. The inhibition efficiencies exhibited by wheat starch and by starch from millet, reported elsewhere [22,23], in regulating metal corrosion in aggressive environments vary. The variation could be attributed to the following factors. (a) The amylose and amylopectin percentages of the starch: an amylose molecule contains several thousand glucose units, whereas an amylopectin molecule has up to two million glucose units. Hence, in solution an amylopectin molecule releases more glucose units than an amylose molecule, that is, more hydroxyl (-OH) groups and aromatic groups, which are responsible for the inhibition process. (b) The availability of glucosidic linkages: the amylose molecule has 1,4-glucosidic bonds, which give the molecule high density, make it hydrolyze more slowly, and render it insoluble. The amylopectin molecule has 1,6-glucosidic bonds, which make the molecule soluble and quick to degrade because it has many end points onto which enzymes can attach. (c) The presence of hetero-atoms within the starch structure. (d) The interaction between the metal surface and the constituents of the starch. The values of corrosion rate and inhibition efficiency obtained from weight loss measurements at different concentrations of WS at room temperature (30 ± 1 °C) are summarized in Table 1.

Acid Concentration Effect: Fig. 1 illustrates the effect of increasing the acid concentration from 0.5 M to 3 M on the inhibition efficiency of WS on mild steel corrosion at the highest inhibitor concentration studied (0.8 g/L WS). It is clearly seen from this plot that increasing the H₂SO₄ concentration decreased the inhibitive performance of WS from 87.15% to 60.09%. It has been reported [35] that some organic inhibitors become more protonated at higher acid concentrations, leading to better inhibition efficiency. The decrease in the inhibition efficiency of WS with increasing acid concentration indicates that WS did not undergo such increased protonation. The decrease could also be attributed to the presence of more corrosive agent in the inhibited solution, which increased the aggressiveness of the solution and promoted desorption of WS from the mild steel surface.

Fig. 2 shows the plot of inhibition efficiency against immersion time. It is observed from Fig. 2 that the inhibition efficiency of wheat starch increased with increasing immersion time, reaching a maximum on day 3 and decreasing gradually on subsequent days.
The increase in inhibition efficiency with time may be attributed to the strong stability of the glucose components adsorbed on the mild steel surface, whereas the decrease in inhibition efficiency with time reflects desorption of glucose components from the mild steel surface, resulting in reduced surface coverage [36].
Potentiodynamic Polarization Results
The effect of wheat starch on the anodic and cathodic reaction processes of mild steel corrosion in 0.5 M H₂SO₄ solution was investigated using potentiodynamic polarization measurements. The polarization curves obtained are shown in Fig. 3. The anodic and cathodic reaction processes obeyed Tafel's law, and the corrosion potential (E_corr), corrosion current densities (i_corr), and the cathodic (b_c) and anodic (b_a) Tafel slopes were computed from the polarization curves and presented in Table 2. The values of the cathodic and anodic currents decreased in the presence of WS in comparison with the blank solution, indicating that wheat starch modified the mechanisms of both the anodic dissolution of mild steel and the cathodic hydrogen gas evolution. The observed shift in corrosion potential (E_corr) was towards the negative direction, suggesting that the addition of WS had a more pronounced cathodic effect, in agreement with reports elsewhere [37]; this shift became more pronounced with increasing WS concentration. The displacement of E_corr between the blank and inhibited solutions is less than 85 mV, showing that WS is a mixed-type inhibitor [38].
Temperature Effect
The effect of temperature on the corrosion behavior of mild steel in the absence and presence of WS was investigated by performing gravimetric experiments at 30-60 °C with an immersion time of 5 h. The results, shown in Table 3, demonstrate that both the corrosion rate and the inhibition efficiency increased with rising temperature. This distinctive behaviour exhibited by WS is concentration dependent within the inhibitor concentration range used in the study. The increase in inhibition efficiency with increasing temperature suggests a strong adsorption interaction between the surface charge on the metal and the WS. This behavior corresponds to chemical adsorption, indicating that within the temperature range of the study there is no manifestation of an adsorption-desorption phenomenon for the adsorbed inhibitor [39] on the metal surface. In addition, the stability of the adsorbed inhibitor at higher temperature was not reduced despite the increased agitation resulting from higher rates of hydrogen gas evolution [40,41]. There is only a slight difference between the inhibition efficiency values of WS obtained at the 6 h and 24 h immersion periods, confirming a better molecular effect at elevated temperature due to greater dissolution and diffusion.
Adsorption Isotherm Consideration
To understand the nature of the interaction between the surface charge on the metal and the adsorbed inhibitor during metal corrosion, adsorption isotherms were used to explain the adsorption characteristics of the inhibitor. This is because the protective action of inhibitors depends on the adsorption abilities of their functional groups, molecular structure, etc., which lead to the formation of a protective layer that separates the metal surface from the corrosive medium. Therefore, the relationship between the degree of surface coverage (θ), defined as IE/100, and the inhibitor concentration (C) established by the Langmuir adsorption isotherm, Equation 4, was used to determine the adsorption-desorption equilibrium constant, K_ads, at the different temperatures:

C/θ = 1/K_ads + C    (4)

Plots of C/θ against C for the corrosion of mild steel in 0.5 M H₂SO₄ in the presence of different concentrations of WS are shown in Fig. 4.
Linear plots were obtained with slopes 1.139 (R² = 0.992), 1.117 (R² = 0.996), 1.079 (R² = 0.998) and 1.057 (R² = 0.998), respectively. The corresponding values of K_ads calculated from the intercepts are 16.406, 16.821, 18.002 and 18.904, respectively. The coefficients of linear correlation (R²) obtained were all above 0.991, indicating a good fit of the experimental data and suggesting that the adsorption of WS on the metal surface followed the Langmuir adsorption isotherm.
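A minimal sketch of the regression behind Fig. 4, with placeholder coverage values (the concentrations match the paper's range; the θ values do not come from Table 1):

```python
import numpy as np

C = np.array([0.2, 0.4, 0.6, 0.8])           # g/L WS
theta = np.array([0.62, 0.78, 0.86, 0.90])   # assumed surface coverage (IE/100)

# Langmuir form (Eq. 4): C/theta = 1/K_ads + C, so regress C/theta on C.
slope, intercept = np.polyfit(C, C / theta, 1)
K_ads = 1.0 / intercept
print(f"slope = {slope:.3f}, K_ads = {K_ads:.2f} L/g")
```

A slope close to unity is what justifies the Langmuir description; the reported slopes of 1.06-1.14 are consistent with this.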
The results show that the adsorption-desorption equilibrium constant (K_ads) increased with increasing temperature, indicating better adsorption of WS onto the steel surface at elevated temperatures. In addition, at elevated temperatures the adsorption-desorption equilibrium did not shift towards desorption of the inhibitor from the metal surface, consistent with the enhanced molecular interaction observed.
Apparent Activation Energies
The relationship between the corrosion rate and the apparent activation energy (Ea) for the corrosion of metal in the absence and presence of WS at different temperatures was evaluated using the Arrhenius relation, Equation 5:

log(CR2/CR1) = [Ea / (2.303 R)] × (1/T1 − 1/T2)    (5)

where Ea is the apparent activation energy for the corrosion process, and CR1 and CR2 are the corrosion rates at temperatures T1 and T2, respectively. The calculated activation energies are given in Table 4. It has been reported by Ekanem et al. [42] that when the value of the activation energy (Ea) in the inhibited solution is greater than that of the blank solution, the inhibitor is physically adsorbed on the corroding metal surface, while an unchanged or lower activation energy in the presence of the inhibitor suggests that the inhibitor is chemically adsorbed on the corroding metal surface.
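The two-temperature estimate in Equation 5 is a one-liner; the corrosion rates below are placeholders rather than the entries of Table 4:

```python
import numpy as np

R = 8.314                      # gas constant, J/(mol K)
T1, T2 = 303.15, 333.15        # 30 and 60 degC
cr1, cr2 = 0.8, 2.4            # assumed corrosion rates at T1, T2 (same units)

# Rearranged Eq. 5: Ea = 2.303 R log10(CR2/CR1) * T1*T2 / (T2 - T1)
Ea = 2.303 * R * np.log10(cr2 / cr1) * (T1 * T2) / (T2 - T1)
print(f"Ea ≈ {Ea / 1000:.1f} kJ/mol")
```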
Heat of Adsorption Studies
To gain insight into the mechanism of the corrosion and inhibition process of mild steel in 0.5 M H₂SO₄ in the presence of WS at different temperatures, an estimate of the heat of adsorption (Q_ads) was obtained from the trend of the degree of surface coverage (θ) with temperature according to Equation 6:

Q_ads = 2.303 R [log(θ2/(1 − θ2)) − log(θ1/(1 − θ1))] × [T1T2/(T2 − T1)]    (6)

where θ1 and θ2 are the degrees of surface coverage at temperatures T1 and T2. The calculated values of this parameter are given in Table 4. Examination of Table 4 reveals that the calculated heats of adsorption (Q_ads) for mild steel corrosion in 0.5 M H₂SO₄ in the presence of WS at different temperatures are positive. This indicates that the degree of surface coverage increased with rising temperature [43], supporting the earlier proposed chemisorption mechanism for the adsorption of WS on the mild steel surface. In addition, the fluctuation observed in the values of Q_ads with increasing temperature is a result of the adsorption-desorption behaviour of the protective film.
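The same two-temperature bookkeeping applies here; the coverage values are placeholders, not those of Table 4:

```python
import numpy as np

R = 8.314                    # gas constant, J/(mol K)
T1, T2 = 303.15, 333.15      # 30 and 60 degC
th1, th2 = 0.80, 0.88        # assumed surface coverages at T1, T2

# Equation 6; a positive result means coverage grows with temperature.
Q_ads = 2.303 * R * (np.log10(th2 / (1 - th2)) - np.log10(th1 / (1 - th1))) \
        * (T1 * T2) / (T2 - T1)
print(f"Q_ads ≈ {Q_ads / 1000:+.1f} kJ/mol")
```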
Mechanism of Inhibition
The corrosion inhibition of mild steel in 0.5 M H₂SO₄ in the presence of WS can be explained on the basis of molecular adsorption [44] through the chemical constituents of starch. WS inhibits the corrosion of mild steel by controlling both the anodic and cathodic reactions. From the data presented in Table 2 it is clear that WS inhibits the corrosion of mild steel by blanketing the anodic and cathodic sites. The main constituents of starch are amylose and amylopectin molecules, which are partially hydrolyzed to glucose units in acidic solution; the glucose units possess a number of hydroxyl (-OH) groups and a number of aromatic rings. These organic molecules are adsorbed on the metal surface, forming a protective layer, and hence exhibit anti-corrosive behaviour [45]. The glucose units of starch could be adsorbed at the metal/corrodent interface in one or more of the following ways: (i) donor-acceptor interactions between the π-electrons of the aromatic ring and vacant d-orbitals of surface iron atoms, (ii) interaction between unshared electron pairs of hetero-atoms and vacant d-orbitals of surface iron atoms, (iii) electrostatic interaction of protonated molecules with already adsorbed sulphate ions.
Conclusion
The results from the weight loss measurements confirmed that control of mild steel corrosion in 0.5 M H₂SO₄ using wheat starch (WS) is feasible, indicating that WS is an efficient inhibitor of mild steel corrosion with a maximum inhibition efficiency of 90.48% at 72 h at room temperature. The inhibition efficiency increased with increasing amount of WS, suggesting that the inhibition effectiveness of WS was concentration dependent owing to adsorption on the corroding metal surface. Potentiodynamic polarization results reveal that WS acts as a mixed-type inhibitor, though the cathodic reactions are more predominantly controlled. The adsorption of WS on the corroding metal surface was best modelled by the Langmuir adsorption isotherm. In addition, the trend of inhibition efficiency with temperature and the calculated values of the activation energy and heat of adsorption supported the proposed adsorption mechanism. Finally, the mechanism of corrosion inhibition of mild steel by WS was attributed to the chemical constituents of starch, and the results of the weight loss and potentiodynamic polarization measurements are in reasonably good agreement. | 4,369.4 | 2017-03-01T00:00:00.000 | [
"Materials Science"
] |
Molecular Cloning and Characterization of a Novel Gene, CORS26, Encoding a Putative Secretory Protein and Its Possible Involvement in Skeletal Development*
We cloned a novel mouse cDNA, CORS26 (collagenous repeat-containing sequence of 26-kDa protein), encoding a secretory protein, by suppression subtractive hybridization between transforming growth factor-β1-treated and untreated C3H10T1/2 cells. The deduced amino acid sequence of CORS26 consists of 246 amino acids with a secretory signal peptide and contains a collagenous region (Gly-X-Y repeats) at the NH2 terminus and a complement factor C1q globular domain at the COOH terminus. CORS26 is structurally similar to C1q and to the adipocyte-specific protein Acrp30. Transfection analysis suggested that CORS26 is a secretory protein. Northern blot analysis revealed that CORS26 mRNA was present at high levels in rib growth plate cartilage and at moderate levels in the kidney of adult mice. CORS26 mRNA was not detected in NIH3T3 cells, BALB/3T3 cells, C3H10T1/2 cells, or osteoblastic MC3T3-E1 cells by reverse transcription-polymerase chain reaction analysis. In situ hybridization of mouse embryos between 13 and 15 days postcoitus revealed relatively high levels of CORS26 mRNA in condensed prechondrocytic cells of cartilage primordia and developing cartilages. However, CORS26 mRNA was undetectable in mature chondrocytes.
In vertebrate skeletal development, the formation of chondrocytes from undifferentiated mesenchymal cells is one of the important processes, but the molecular mechanisms are not well understood. Identifying the genes underlying the induction of chondrocyte differentiation will provide powerful tools for understanding skeletal development. The induction of chondrogenesis has been extensively studied in vitro using primary cells and clonal cell lines from a variety of sources (1)(2)(3)(4). The mouse embryonic fibroblast cell line C3H10T1/2 is multipotential and has been induced to differentiate into myocytes, adipocytes, osteoblasts, and chondrocytes under specific culture conditions and treatments (5)(6)(7)(8). The frequency of chondrogenic conversion in C3H10T1/2 cells was much lower and more irregular compared with other types of conversion (5,8), but it was recently reported that the induction of chondrogenesis and the formation of spheroids in C3H10T1/2 cells preferentially occurred upon treatment with transforming growth factor (TGF)-β1 (9), bone morphogenetic protein-2 (10), or a combination of osteoinductive bone proteins (11) in high-density micromass cultures. The formation of the spheroids resembled the condensation of mesenchymal cells seen in precartilage. Thus, C3H10T1/2 cells in high-density micromass cultures are well suited for studying the molecular mechanisms involved in skeletal development.
In the present study, to help clarify the mechanism of skeletal development, mRNAs expressed in TGF-β1-treated C3H10T1/2 cells were subtracted against those in untreated C3H10T1/2 cells using the suppression subtractive hybridization (SSH) technique (12), and we isolated a novel gene, CORS26 (collagenous repeat-containing sequence of 26-kDa protein). Sequence analysis revealed that CORS26 possesses a collagenous structure at the NH2 terminus and a complement factor C1q globular domain at the COOH terminus. Owing to the structural similarity between CORS26 and the subunits of complement factor C1q, this novel protein is thought to be a member of the C1q-related protein family. The presence of the signal peptide, plus the hydrophilic nature of CORS26, suggests that CORS26 is a secretory protein. Indeed, the CORS26 protein was secreted from COS-7 cells in transient transfection analysis. CORS26 mRNA is specifically expressed in cartilage and kidney in the adult mouse. Moreover, the expression of this novel gene was observed in cartilage primordium and developing cartilage in embryonal mouse tissues in vivo. Thus, the possible secretory protein encoded by the CORS26 gene may be one of the important signaling molecules produced by prechondrocytic mesenchymal cells or early chondrocytes during skeletal development.
EXPERIMENTAL PROCEDURES
Cell Lines and Cell Culture-The mouse cell line C3H10T1/2 was obtained from the RIKEN Cell Bank (Tsukuba, Japan), and micromass cultures were performed as described previously (9). In brief, trypsinized cells were suspended in Ham's F-12 medium (Life Technologies, Inc.) supplemented with 10% fetal bovine serum (FBS) (Life Technologies, Inc.), penicillin (50 units/ml), and streptomycin (50 μg/ml) at a concentration of 10⁷ cells/ml, and then a 10-μl drop of this cell suspension was placed in the center of a 24-well dish at 37°C in a humidified atmosphere containing 5% CO2. The cells were allowed to adhere to the dish for 3 h, and the culture was flooded with 1 ml of medium. After 24 h of incubation, 1 ng/ml TGF-β1 was added to the culture medium. Human TGF-β1 was purchased from R & D Systems (Minneapolis, MN). The mouse cell lines NIH3T3 and MC3T3-E1 were also obtained from the RIKEN Cell Bank, and BALB/3T3 cells were from the Cancer Cell Repository of Tohoku University (Sendai, Japan).

* This work was supported by a Grant-in-aid from the Ministry of Education, Culture and Science of Japan (No. 11470396). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

The nucleotide sequence(s) reported in this paper has been submitted to the GenBank™/EBI Data Bank with accession number(s) AF246265.

§ To whom correspondence should be addressed: Dept. of Radiology and Radiation Oncology, Graduate School of Dentistry, Osaka University, 1-8 Yamadaoka, Suita, Osaka 565-0871, Japan. Tel.: 81-6-6879-2967; Fax: 81-6-6879-2970; E-mail<EMAIL_ADDRESS>
Construction of the Subtractive cDNA Library by SSH-Messenger RNAs obtained from C3H10T1/2 cells were isolated using the FastTrack mRNA isolation kit (Invitrogen, San Diego, CA). SSH was performed using the PCR-Select cDNA subtraction kit (CLONTECH, Palo Alto, CA) according to the manufacturer's protocol. Tester cDNA was synthesized from the mRNAs of C3H10T1/2 cells treated with TGF-β1 for 6 h, and driver cDNA from the mRNAs of untreated C3H10T1/2 cells. Products from the secondary PCR were cloned into the pT7Blue T vector (Invitrogen) and transformed into competent Escherichia coli JM109 cells.
Dot-blot Differential Screening-Cloned cDNA inserts were initially evaluated for differential expression by DNA dot-blot analysis. White colonies from the subtractive cDNA library were picked randomly, boiled for 5 min in 20 μl of H2O, and then centrifuged. DNA in the supernatant was amplified by PCR using the universal vector primers M13 and M13R. The PCR products were purified with the QIAquick PCR purification kit (Qiagen GmbH, Hilden, Germany) and dot-blotted onto Hybond-N+ nylon membranes (Amersham Pharmacia Biotech, Buckinghamshire, United Kingdom). The membranes were hybridized with [32P]dCTP-labeled tester cDNA as the positive probe and [32P]dCTP-labeled driver cDNA as the negative probe. The nucleotide sequences of candidate positive clones were determined by sequencing, and the cDNA inserts were used as probes in Northern blot analysis to confirm differential gene expression.
DNA Sequencing and Sequence Analysis-Sequencing of cDNA inserts in the vector was performed with the ABI PRISM Dye Terminator Cycle Sequencing Ready Reaction kit (PerkinElmer Life Sciences) using the M13 vector primers on an ABI 373 automated sequencer. For long cDNA inserts, synthetic oligonucleotides were used as primers. The cDNA sequence was analyzed using the sequence analysis software DNASIS (Hitachi Software Engineering, Yokohama, Japan). Database homology searches of the cDNA sequence and deduced protein sequences were performed with the BLAST programs against the DNA and protein databases at the National Center for Biotechnology Information. Analysis of the primary structure of the predicted protein was performed with the PROSITE motif search program.
Northern Blot Analysis-Total RNA was isolated from C3H10T1/2 cells before and after 6 h of treatment with TGF-β1, or from mouse rib growth plate cartilage, using an RNeasy kit (Qiagen). Rib growth plate cartilages were obtained from 10-day-old mice. Twenty micrograms of each total RNA was denatured using glyoxal, electrophoresed in 1% agarose gel, transferred to Hybond-N+ nylon membranes (Amersham Pharmacia Biotech), and UV-cross-linked onto the membrane. For analysis of CORS26 expression in various normal adult mouse tissues, the Multiple Tissue Northern blot (CLONTECH) was purchased. The blotting membrane was incubated in a prehybridization solution of 50% formamide, 5× standard saline citrate (SSC), 5× Denhardt's solution, 0.1% SDS, and 50 μg/ml salmon sperm DNA at 43°C for 3 h, and then hybridization was performed using a 1.6-kb cDNA fragment of CORS26 labeled with [32P]dCTP by random priming (TaKaRa, Shiga, Japan) at 43°C for 16 h. The membrane was washed in 2× SSC containing 0.1% SDS at room temperature for 15 min, 0.5× SSC containing 0.1% SDS at 55°C for 60 min, and 0.1× SSC at room temperature for 10 min. The membranes were autoradiographed with Hyperfilm (Amersham Pharmacia Biotech) at −80°C. After autoradiography, the membrane was stripped and rehybridized with a mouse glyceraldehyde-3-phosphate dehydrogenase (GAPDH) probe as an internal loading control.
Rapid Amplification of cDNA Ends-Rapid amplification of cDNA ends (RACE) was performed in both directions using the SMART cDNA amplification kit (CLONTECH) from mRNA of C3H10T1/2 cells treated with TGF-β1. For the 3′-RACE method, 5′-GGGCCAAAGGTGAGAAAGGA-3′ and 5′-GCCCCCGTATCAGGTGTGTA-3′ were designed as gene-specific primers, and for the 5′-RACE method, 5′-GGCCCCAAATCTCCCAGTCAT-3′ and 5′-GGTCGCCTTTGTCTCCTTTCT-3′ were designed as gene-specific primers. The nested PCR products were subcloned into the pT7Blue T vector and sequenced.
Amplification of Full-length CORS26-The full-length coding sequence of CORS26 was amplified with LA Taq polymerase (TaKaRa) using primers 5′-CTGTCAAGCTTCCCTGCGAGACTCTT-3′ and 5′-GCAAGCCAGATGGGAGAAAAGTTTAT-3′ from TGF-β1-treated C3H10T1/2 cDNA and cloned directly into pGEM-T Easy (Promega, Madison, WI). The amplification conditions were 80°C for 5 min, followed by 30 cycles of 94°C for 30 s, 65°C for 40 s, and 72°C for 2.5 min, and a final incubation at 72°C for 5 min. This construct was sequenced for accuracy and was used for in vitro transcription and translation.
In Vitro Transcription and Translation-The TNT T7 Quick coupled transcription/translation system (Promega) was used to transcribe and translate the full-length CORS26 cDNA construct in the presence of [35S]methionine (Amersham Pharmacia Biotech) according to the manufacturer's instructions. Five microliters of the products were electrophoresed on a 12.5% SDS-polyacrylamide gel. The gel was treated with Enlightning (PerkinElmer Life Sciences), dried, and autoradiographed.
RT-PCR Analysis-Total RNAs isolated from the various cell lines were digested with RNase-free DNase I (Promega) at 37°C for 15 min. One microgram of the denatured total RNA was then reverse-transcribed in 20 μl of a reaction mixture containing 5× RT buffer, 10 mM dithiothreitol, 0.5 mM of each dNTP, and 500 ng of oligo(dT)15 primer. Aliquots (10 μl) of the PCR products were electrophoresed in 1.5% agarose gel, and the gel was stained with ethidium bromide and photographed under UV light. The amplified 300-bp PCR products were subcloned into the pGEM-T Easy vector.
In Situ Hybridization-One microgram of the recombinant pGEM-T Easy vector containing the 300-bp fragment located in the coding region (nucleotides 305-604) of CORS26 was used as a template. Plasmids were linearized with SpeI to prepare the antisense riboprobe and with NcoI to prepare the sense riboprobe. In vitro transcription was performed with [35S]UTP (Amersham Pharmacia Biotech) using T7 and SP6 RNA polymerases (Promega). Unincorporated label was removed by ethanol precipitation, and the counts/min were determined on a scintillation counter.
Embryos were obtained from ICR mice. In situ hybridizations were carried out as described previously (14) with modifications. In brief, mouse embryos were fixed with 4% paraformaldehyde and embedded in paraffin for sectioning. The 7-μm-thick sections were pretreated with proteinase K and HCl and acetylated. Hybridization was performed with riboprobes (1 × 10⁵ cpm per slide) at 53°C overnight. After hybridization, the sections were washed with 5× SSC containing 10 mM dithiothreitol at 50°C for 30 min, SF solution (2× SSC, 50% formamide, and 20 mM dithiothreitol) at 65°C for 30 min, and NTE buffer (0.5 M NaCl, 10 mM Tris-Cl, and 1 mM EDTA) at 37°C for 30 min, treated with 20 μg/ml RNase A at 37°C for 30 min, and then washed with 2× SSC and 0.1× SSC at room temperature. The sections were dehydrated in a graded series of ethanol, air-dried, coated with NTB-2 emulsion (Eastman Kodak Co.), and exposed for 2-3 weeks. Microphotographs were taken using both light- and dark-field optics.
Transient Transfection with FLAG-tagged CORS26 and Immunoblotting-A CORS26 cDNA was tagged with a FLAG epitope at the COOH terminus by PCR using oligonucleotides 5′-TTTGCCGAGCCATGCTCGGGAGGC-3′ and 5′-TCACTTGTCATCGTCGTCCTTGTAGTCCTTAGTTTCAAAGAGCAGAAA-3′. The sequence for the FLAG tag is underlined. The amplification conditions were 80°C for 5 min, followed by 25 cycles of 94°C for 30 s, 60°C for 30 s, and 72°C for 1 min, and a final incubation at 72°C for 5 min. The PCR product was cloned into the pGEM-T Easy vector, and the insert was excised with EcoRI and recloned into the EcoRI site of the pcDNA3.1(+) vector (Invitrogen). The orientation and nucleotide sequence of pcDNA3.1/CORS26-FLAG were confirmed on an automated ABI 373 sequencer. The pcDNA3.1/CORS26-FLAG construct was transiently transfected into COS-7 cells (RIKEN Cell Bank) using the SuperFect Transfection Reagent (Qiagen) according to the manufacturer's instructions. A pcDNA3.1(+) vector was transfected in parallel as a negative control. Transfections were performed on cells seeded into six-well tissue culture plates in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% FBS 24 h before use at a density of 3 × 10⁵ cells/well. Cells were typically transfected with 2 μg of DNA, incubated with the DNA suspension for 3 h, and replenished with fresh medium. Then, 48 h after transfection, conditioned medium was collected, and cells were lysed with TNE buffer (10 mM Tris-HCl (pH 7.8), 1 mM EDTA, 150 mM NaCl, 1% Nonidet P-40). Protein from the conditioned medium was concentrated for 1 h with a Centricon-10 concentrator (Millipore, Bedford, MA). The proteins from the medium and cell extracts were separated on a 12.5% SDS-polyacrylamide gel, transferred onto a nitrocellulose membrane in a Trans-Blot semidry electrophoretic transfer cell (Bio-Rad), and immunoblotted with anti-FLAG M2 monoclonal antibody (Sigma) diluted 1:500. Immunocomplexes were detected with a secondary antibody conjugated to peroxidase (Dako, Glostrup, Denmark) and visualized with ECL reagent (Amersham Pharmacia Biotech).
Cell Growth Analysis Using CORS26 Stably Transfected Cells-A DNA fragment containing the coding region of CORS26 was inserted into the EcoRI-EcoRV sites of the pcDNA3.1(+) vector. C3H10T1/2 cells were seeded into 35-mm culture dishes in DMEM supplemented with 10% FBS at a density of 3 × 10⁴ cells per dish. Twenty-four hours later, the cells were transfected with 2.5 μg of either the control pcDNA3.1 vector or pcDNA3.1/CORS26 using the SuperFect Transfection Reagent (Qiagen) according to the manufacturer's instructions. The cells were incubated for 48 h, then trypsinized and seeded at a 1:20 ratio in 100-mm culture dishes in DMEM containing 10% FBS. Sixteen hours later, the cells were switched to a selective medium containing 500 μg/ml G418 (Promega). After 4 weeks of culture in the selective medium, clonal isolates were expanded, and expression of CORS26 mRNA was verified by RT-PCR as described above. To analyze the growth of C3H10T1/2 cells transfected with pcDNA3.1/CORS26, the transfectants were seeded in 24-well culture plates at a density of 1 × 10⁴ cells/well and cultured in DMEM supplemented with 10% FBS. The number of cells was counted with a hemocytometer on the indicated days.
Molecular Cloning and Sequence Analysis of the CORS26 cDNA-To identify genes specifically expressed during skeletal development, the SSH technique was applied to mRNA extracted from C3H10T1/2 cells with or without TGF-β1 treatment, and we selected several clones specifically up-regulated by TGF-β1 treatment.
We determined the sequences of the isolated partial cDNA clones and carried out homology searches in GenBank™ using BLAST2. Among the clones that had no significant homology with any known genes in the nucleotide sequence databases, a 380-bp cDNA clone (clone 129) showed marked induction by TGF-β1 treatment (Fig. 1). We therefore focused further analysis on this clone. To obtain a full-length sequence, we carried out 5′- and 3′-RACE using TGF-β1-treated C3H10T1/2-derived cDNA. The full nucleic acid sequence obtained (1,879 bp) and the deduced amino acid sequence are shown in Fig. 2; the open reading frame encodes a predicted protein of 246 amino acids with a calculated molecular mass of ~26 kDa and an isoelectric point of 6.2. Indeed, a protein of ~26 kDa was generated by coupled in vitro transcription and translation of the CORS26 cDNA in rabbit reticulocyte lysates (Fig. 3A), suggesting that the predicted initiation sequence serves as the start site of translation. Hydropathy analysis (16) revealed that the predicted protein is predominantly hydrophilic, and a signal peptide of 22 amino acid residues and its cleavage site were predicted at the amino-terminal end on the basis of the rules of von Heijne (17) (Fig. 3B). No transmembrane-spanning region was predicted. To investigate whether the CORS26 protein is secreted from cells, FLAG-tagged CORS26 was transiently transfected into COS-7 cells and analyzed by Western blotting using an anti-FLAG antibody. These experiments demonstrated that COS-7 cells synthesized the CORS26 protein and that the protein was secreted into the medium (Fig. 4). Amino acid sequence analysis of the predicted CORS26 protein using the BLASTp search program revealed that the COOH-terminal globular region of CORS26 is 34, 32, and 30% identical to precerebellin (18), complement protein C1q-related factor (CRF) (19), and adipocyte complement-related protein of 30 kDa (Acrp30, equivalent to AdipoQ) (20,21), respectively. A unique feature of this protein is that the NH2-terminal part contains uninterrupted collagen-like Gly-X-Y repeats (23 repeats) immediately downstream of a short non-collagenous sequence (at amino acid positions 45-113). NH2-terminal Gly-X-Y repeats are found in the complement protein C1q A, B, and C chains (22)(23)(24), Acrp30, chipmunk hibernation-specific proteins (HP-20, HP-25, and HP-27) (25), and CRF (19). In the COOH-terminal half, CORS26 shares homology with the globular domain of the C1q subunits, precerebellin, Acrp30, CRF, and collagens VIII and X (26) (Fig. 5).
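Motifs like these Gly-X-Y repeats are straightforward to locate computationally. The sketch below uses an invented placeholder sequence rather than the real CORS26 protein, and simply illustrates the kind of scan such sequence analysis performs:

```python
import re

# Placeholder protein: 23 hypothetical Gly-X-Y triplets flanked by arbitrary residues
seq = "MLGRSSA" + "GPP" * 23 + "QKVAFSA"

# Find a run of at least five consecutive G-X-Y triplets (collagen-like region)
match = re.search(r"(?:G[A-Z]{2}){5,}", seq)
if match:
    n_repeats = len(match.group()) // 3
    print(f"{n_repeats} Gly-X-Y repeats at positions "
          f"{match.start() + 1}-{match.end()}")
```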
It is also notable that there is a proline-rich region in the NH2-terminal part (at amino acid positions 55-62). The predicted protein also contains other putative functional sites: phosphorylation sites for protein kinase C (at amino acid positions 152-154) and casein kinase II (at amino acid positions 77-80 and 138-141), and several N-myristoylation sites.
Expression of CORS26 mRNA in Adult Mouse Tissues-We examined the expression of CORS26 mRNA in various adult mouse tissues by Northern blot analysis (Fig. 6). The CORS26 cDNA hybridized to two transcripts of 2.3 and 2.0 kb. CORS26 mRNA was expressed in rib growth plate cartilage and kidney and was not detected in other tissues. In growth plate cartilage, the hybridization signal at 2.0 kb was stronger than the signal at 2.3 kb, while in kidney the signal at 2.3 kb was stronger than the signal at 2.0 kb. The 2.0-kb size of the transcripts agrees approximately with the length of the cloned CORS26 cDNA.
In Situ Hybridization of CORS26 in Embryonal Mouse Tissues-To examine the expression of CORS26 during embryonic development in vivo, we performed in situ hybridization on serial sections of mouse embryos at 13-15 days p.c. CORS26 transcripts were localized in the cartilage primordia of the occipital bone and of the vertebral body of a 13-day-p.c. embryo (Fig. 8, A-C and F-H). In Meckel's cartilage, high levels of CORS26 mRNA were seen in a 15-day-p.c. embryo (Fig. 8, D and I). In the cartilage primordium of digital bone of a 14-day-p.c. embryo, CORS26 was expressed in prechondrocytes but not in mature chondrocytes (Fig. 8, E and J). These experiments showed that CORS26 mRNA was present at relatively high concentrations in precartilaginous primordia and developing cartilages. No specific signal was detected above background levels with the sense riboprobes used as controls.
Growth of CORS26-Transfected Cells in Vitro-We generated C3H10T1/2 cells stably transfected with a mammalian expression vector containing CORS26. The expression of CORS26 mRNA in the transfectants was verified by RT-PCR analysis (data not shown). The growth of the CORS26 transfectants was significantly enhanced compared with that of control cells. Moreover, the saturation densities of the CORS26 transfectants were higher than those of control cells (Fig. 9).
DISCUSSION
During skeletal development, the condensation of multipotential mesenchymal cells that then differentiate toward various cell types is an important process. One such process is the formation of chondrocytes from undifferentiated mesenchymal cells. It was recently demonstrated that the mouse embryonic mesenchymal cell line C3H10T1/2, when cultured at high density, is induced to undergo chondrogenic differentiation by TGF-β1. C3H10T1/2 cells cultured under this condition form a three-dimensional spheroid structure, and the morphology of the cells in the spheroid resembles that of the cells seen in precartilage condensations (9). The formation of the spheroid in vitro mimics the condensation event of chondrogenesis in vivo.

[Figure legend fragment (immunoblot): cells transfected with pcDNA3.1/CORS26-FLAG (lanes 1 and 3) or the pcDNA3.1 vector (lanes 2 and 4); the proteins (20 μg) were separated on a 12.5% SDS-polyacrylamide gel and immunoblotted with an anti-FLAG M2 antibody.]
In the present study, we demonstrated the isolation of a novel gene, CORS26, encoding a secretory protein of 246 amino acids, using the suppression subtractive hybridization technique between TGF-β1-treated and untreated C3H10T1/2 cells cultured at high density. Sequence analysis reveals that the CORS26 protein has a hydrophobic signal peptide at the NH2 terminus and lacks a putative transmembrane domain. The presence of a putative signal peptide suggests that CORS26 enters the secretory pathway. Indeed, CORS26 was secreted from COS-7 cells after transient transfection of a FLAG-tagged CORS26 expression vector. The deduced amino acid sequence of CORS26 displays structural similarity to several C1q-related proteins, such as the C1q A, B, and C chains, Acrp30, and CRF, containing collagenous repeats (Gly-X-Y) at the NH2 terminus and a globular domain at the COOH terminus. This suggests that CORS26 may belong to the C1q family of proteins.
C1q family proteins are known to homo- or hetero-oligomerize via their collagenous structures, suggesting that CORS26 might form oligomers with itself or with other proteins. The COOH-terminal region of the protein contains three potential phosphorylation sites. Consequently, this molecule is a potential target for protein phosphorylation by protein kinase C and casein kinase II. In addition, a cysteine residue in the globular region of the C1q B and C chains plays an important role in the formation of disulfide bonds with IgG (27) and in the stabilization of the triplex strands in the collagenous domain (24). These cysteine residues are replaced by other residues in CORS26, as in Acrp30 and CRF.
Recently, it was reported that the crystal structure of Acrp30, a member of the C1q family of proteins, shows homology with that of the tumor necrosis factor (TNF) family proteins. Moreover, TNFs (28) and C1q proteins have similar gene structures in their globular domains (29). These similarities suggest an evolutionary link between the C1q-like proteins and TNFs and establish a C1q/TNF molecular superfamily. It has been reported that TNFs have monospecific receptors (TNFRs) (30). Since CORS26 and C1q share significant similarities in the structure of the collagenous domain, which has been shown to be important for ligand-receptor interaction (13), it is possible that CORS26 might signal through the TNFRs or TNFR-like receptors.
Both Northern blot analysis and RT-PCR analysis showed a unique pattern of CORS26 gene expression in various tissues and cell lines. CORS26 mRNA expression was found in rib growth plate cartilage and in early stages of chondrogenic differentiation of C3H10T1/2 cells, from which the CORS26 cDNA was isolated. Although the expression level was lower than that in cartilage, CORS26 was also expressed in the kidney of adult mice. Indeed, DNA sequence homology searches of the GenBank™ mouse EST databases using the BLAST search program revealed that several mouse EST sequences isolated from whole-embryo or adult kidney libraries were homologous or identical to the 5′ and 3′ ends of the CORS26 cDNA. These findings from the EST database are consistent with the tissue distribution of the CORS26 gene. However, CORS26 mRNA was not detected in fibroblastic or osteoblastic cell lines. The expression of CORS26 in kidney raises the question of its possible role in this tissue, but at present the details are unclear.

[Figure legend fragment (RT-PCR): aliquots of the PCR products were electrophoresed in 2% agarose gel, and the gel was stained with ethidium bromide. A 300-bp CORS26-specific band was not seen in the mouse cell lines examined. PCR for GAPDH expression was also performed as a quality control. The molecular mass marker is φX174/HaeIII.]
The basic form of the skeleton first becomes recognizable when mesenchymal cells aggregate into regions of high cell density called condensations. These condensations subsequently differentiate into cartilage and bone and continue to grow by cell proliferation, maturation, and matrix deposition. In situ hybridization of mouse embryos at 13-15 days p.c. showed that CORS26 was highly expressed in precartilaginous primordia and Meckel's cartilage. In each case, CORS26 transcripts appeared to be localized in condensed prechondrocytic mesenchymal cells or in less mature chondrocytes, whereas the levels of CORS26 mRNA were lower or undetectable in mature chondrocytes. Furthermore, to test the role of the CORS26 gene in vitro, we generated C3H10T1/2 cells stably transfected with pcDNA3.1/CORS26. Since CORS26 mRNA is expressed in condensed prechondrocytic mesenchymal cells in vivo, we examined the effect of the CORS26 gene on cell proliferation. Overexpression of CORS26 enhanced the growth of C3H10T1/2 cells and increased their saturation density in vitro. This finding suggests that the secreted CORS26 protein acts as a growth factor, based on its mitogenic activity on C3H10T1/2 cells.
Although the function of CORS26 in vivo is not yet clear, one possible function is the local regulation of mesenchymal condensation, acting as a secreted autocrine/paracrine factor during an early stage of skeletal development. Further analysis of CORS26 should lead to a better understanding of the molecular mechanisms of skeletal development. | 5,826.8 | 2001-02-02T00:00:00.000 | [
"Biology"
] |
Gravity in Twistor Space and its Grassmannian Formulation
We prove the formula for the complete tree-level $S$-matrix of $\mathcal{N}=8$ supergravity recently conjectured by two of the authors. The proof proceeds by showing that the new formula satisfies the same BCFW recursion relations that physical amplitudes are known to satisfy, with the same initial conditions. As part of the proof, the behavior of the new formula under large BCFW deformations is studied. An unexpected bonus of the analysis is a very straightforward proof of the enigmatic $1/z^2$ behavior of gravity. In addition, we provide a description of gravity amplitudes as a multidimensional contour integral over a Grassmannian. The Grassmannian formulation has a very simple structure; in the N$^{k-2}$MHV sector the integrand is essentially the product of that of an MHV and an $\overline{{\rm MHV}}$ amplitude, with $k+1$ and $n-k-1$ particles respectively.
Introduction
In a recent paper [15], two of us conjectured that the complete classical S-matrix of maximal supergravity in four dimensions can be described by a certain integral over the space of rational maps to twistor space. The main aim of this paper is to prove that conjecture.
In [5,6,7,8,16,17] it was shown that gravitational tree amplitudes obey the BCFW recursion relations [11,12]. Our method here is to show that the formula presented in [15] obeys these same relations, and produces the correct three-particle MHV and $\overline{\rm MHV}$ amplitudes to start the recursion.
In the analogous formulation of tree amplitudes in N = 4 super Yang-Mills [9,30,36], BCFW decomposition is closely related to performing a contour integral in the moduli space of holomorphic maps so as to localize on the boundary where the worldsheet degenerates to a nodal curve [10,18,20,21,31,32,34]. The various summands on the right hand side of the recursion relation correspond to the various ways the vertex operators and map degree may be distributed among the two curve components.
The relation between factorization channels of amplitudes and shrinking cycles on the worldsheet that separate some vertex operators from others is of course a fairly general property of string theory. In the present case, it is also necessary to prove that the rest of the integrand behaves well under this degeneration. In particular, the formula of [15] involves a product of two determinants that generalize Hodges' construction of gravitational MHV amplitudes [23,24] to arbitrary external helicities. One of the striking properties of these determinants is that they each depend in a simple way on the 'infinity twistor', and thereby the breaking of conformal invariance inherent in gravitational amplitudes becomes completely explicit. More specifically, the determinants are each monomials in the infinity twistor, to a power that depends only on the number of external states and the MHV degree of the scattering process. Furthermore, as explained in [15], the way arbitrary gravitational amplitudes depend on the infinity twistor can easily be traced through BCFW recursion. This strongly suggests that the determinants behave simply under factorization. We shall see that this is indeed the case.
Along the way, we show that the 1/z 2 decay [6,17] of gravitational tree amplitudes at large values of the BCFW shift parameter z is also simple to see from the formula of [15]. This behaviour is responsible for many remarkable properties of these amplitudes (see, e.g., [33] for some applications).
In addition, in the second part of the paper we reformulate the construction of [15] as an integral over the Grassmannian G(k, n), written in terms of the 'link' coordinates of [4]. As preparation, we show how the two determinants, which in twistor space look very different from one another, are naturally conjugate under parity. The formulation of gravitational tree amplitudes as an integral over G(k, n) is strikingly simple: the integrand is the product of that of an MHV and an $\overline{\rm MHV}$ amplitude, with k + 1 and n − k − 1 particles respectively.
Gravity from rational curves
We begin by briefly reviewing the conjecture of [15] (see Appendix C for notation). All n-particle tree amplitudes in N = 8 supergravity are given by the sum M_n = Σ_{d=0}^∞ M_{n,d} over N^{d−1}MHV partial amplitudes. The main claim of [15] is that these N^{d−1}MHV amplitudes may be represented by the integral (2.1). Here, Z is a holomorphic map from a rational curve Σ with homogeneous coordinates σ_α to N = 8 supertwistor space with homogeneous coordinates Z^I = (Z^a, χ^A) = (µ^α̇, λ_α, χ^A). The external states are N = 8 linearized supergravitons, and are described on twistor space by classes h_i ∈ H^{0,1}(PT, O(2)). These wavefunctions are pulled back to points σ_i on the curve via the map Z. We usually take them to be the momentum eigenstates (2.2). (In [15] we used a G(2, n) notation for the worldsheet variables, whereas here we are working projectively.) It is easy to check that such an h_i(Z(σ_i)) is homogeneous of degree −4 in the external data λ_i, as required for a positive-helicity graviton supermultiplet in on-shell momentum space. The main content of (2.1) is the operators |Φ| and |Φ̃|. These arise as a generalization of Hodges' formulation of MHV amplitudes [23,24] and were defined in [15] as follows. We first let Φ̃ be the n × n matrix operator with elements (2.3), where the second equality in the first line follows when Φ̃ acts on the momentum eigenstates (2.2). It was shown in [15] that Φ̃ has rank n − d − 2, with the (d + 2)-dimensional kernel spanned by vectors built from the worldsheet coordinates σ_j; this holds on the support of the δ-functions, by integrating out the map coefficients µ^a from (2.6). Φ is similarly defined as the symmetric n × n matrix with elements (2.4), in which a convenient normalization is introduced, and again the second equality in the first line of (2.4) follows when acting on (2.2). Φ has rank d, with kernel characterized as follows: for any degree d polynomial λ(σ), the residue theorem gives a relation whose sum over all i ∈ {1, . . . , n} vanishes, because the resulting contour is homologically trivial. The operators |Φ| and |Φ̃| are defined as follows. Remove any d + 2 rows and any d + 2 columns from Φ̃ to produce a non-singular matrix Φ̃_red. Then |Φ̃| is the ratio of det(Φ̃_red) to two Vandermonde factors: |r̃_1 . . . r̃_{d+2}| denotes the Vandermonde determinant made from all possible combinations of the worldsheet coordinates corresponding to the deleted rows, and |c̃_1 . . . c̃_{d+2}| is the same Vandermonde determinant, but for the deleted columns.
In [15], |Φ| was also defined in terms of the determinant of a non-singular matrix Φ red obtained by similarly removing rows and columns from Φ, now n − d of each. However, |Φ| itself was constructed as using the Vandermonde determinants |r 1 . . . r d | and |c 1 . . . c d | of the rows and columns that remain in Φ red .
Definition of det′
The above definitions of |Φ̃| and |Φ| are actually quite different. This motivates us to find a more canonical way to define these determinants. It turns out that the most natural definition has to do with the null vectors of each matrix. Once the null space of any symmetric n × n matrix K of rank m is determined, one can compute any two maximal minors of the n × (n − m) matrix having the null vectors of K as its columns. Denote the two maximal minors chosen by |R| and |C|. Then det′(K) = |K_red| / (|R||C|), where K_red is the reduced matrix obtained after removing n − m rows and columns whose row and column labels coincide with the ones removed from the n × (n − m) matrix of null vectors to obtain |R| and |C|. Appendix A contains a formal motivation for this definition and explains how the old and new definitions are related. At this point it suffices to say that the two definitions agree, so that an alternative presentation of the gravity formula (2.1) is (2.6). In the rest of the paper we will use whichever form is more convenient for the argument at hand.
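The null-space definition above is easy to test numerically. The sketch below (a generic linear-algebra illustration, not the specific matrices Φ and Φ̃) builds a random symmetric matrix of known rank and checks that the ratio |K_red|/(|R||C|) is independent of which rows and columns are deleted, up to a sign from index ordering.

```python
import numpy as np
from scipy.linalg import null_space

def reduced_det(K, rows, cols):
    """det'(K) = |K_red| / (|R||C|) for a symmetric rank-m matrix K.

    `rows`/`cols` are the n-m row (resp. column) labels deleted from K;
    |R| and |C| are the maximal minors of the null-space matrix built
    from the same labels.  The result is independent of the choice of
    deleted rows/columns, up to a sign from index-ordering conventions.
    """
    N = null_space(K)                      # n x (n-m), columns span ker K
    keep_r = [i for i in range(K.shape[0]) if i not in rows]
    keep_c = [j for j in range(K.shape[1]) if j not in cols]
    K_red = K[np.ix_(keep_r, keep_c)]
    R = np.linalg.det(N[rows, :])
    C = np.linalg.det(N[cols, :])
    return np.linalg.det(K_red) / (R * C)

rng = np.random.default_rng(0)
n, m = 6, 4                                # rank-4 symmetric 6x6 matrix
V = rng.standard_normal((n, m))
K = V @ V.T                                # kernel has dimension n-m = 2

a = reduced_det(K, rows=[0, 1], cols=[0, 1])
b = reduced_det(K, rows=[2, 5], cols=[1, 4])
print(abs(a), abs(b))                      # equal up to sign
```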
BCFW recursion
In this section, we will prove the conjecture of [15] by showing that the M_{n,d} defined by equation (2.6) correctly obeys BCFW recursion. There are four aspects to the proof. Firstly, we must show that the formula correctly reproduces the 3-particle amplitudes that seed the recursion. This step is straightforward. Next, we must show that under the BCFW shift (3.1), the integral in (2.6) decays at least as fast as 1/z in the limit that z → ∞. Thirdly, we must show that M_{n,d} has a pole whenever the sum of momenta of any two or more particles becomes null, with the residue of this pole being the product of two subamplitudes. Finally, we complete the argument by showing that M_{n,d} has no poles other than the physical ones. This being the case, the usual BCFW contour argument [11] may be applied to construct M_{n,d} recursively from smaller amplitudes. Equation (2.6) will then agree with the tree amplitudes in N = 8 supergravity, since it satisfies the same recursion relation with the same initial conditions [5,6,7,8,16,17]. In fact, it is known that gravitational scattering amplitudes decay as 1/z² under the BCFW shift [6,17]. We will see that M_{n,d} has precisely this behaviour quite transparently. Although this fact can be shown using Lagrangian techniques [6,17], that proof is rather opaque from the viewpoint of the S-matrix.
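For orientation, the standard two-line BCFW shift and the resulting recursion take the following schematic form; the precise conventions of equation (3.1) may differ from the choice shown here.

```latex
\[
\hat\lambda_1(z) \;=\; \lambda_1 + z\,\lambda_n ,
\qquad
\hat{\tilde\lambda}_n(z) \;=\; \tilde\lambda_n - z\,\tilde\lambda_1 ,
\]
\[
M_n \;=\; \sum_{\substack{L,R\\ 1\in L,\ n\in R}}\ \sum_{\text{helicities}}
M_L(\hat z_L)\,\frac{1}{P_L^2}\,M_R(\hat z_L) ,
\qquad
\hat z_L \;=\; -\,\frac{P_L^2}{\langle n|P_L|1]} .
\]
```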
3-particle seed amplitudes
We first check that (2.6) yields the correct 3-point amplitudes. For the 3-point $\overline{\rm MHV}$ amplitude we have n = 3 and d = 0, so that the map Z is constant, Z(σ) = Z. We can remove all three rows and columns of Φ, and it is simple to show that det′(Φ) cancels the factor of |123|² in the denominator of (2.6). We can also remove two of the three rows and columns from Φ̃. Choosing these to be the first and second rows and the first and third columns, the reduced determinant can be evaluated directly.
The denominator (12)(23)(31) is exactly compensated by the Jacobian from fixing the worldsheet SL(2; C) invariance, so (2.6) reduces to a simple integral; in going to the second line we fixed the C* scaling by setting t₁ = 1 and then performed the d^{4|8}Z integral. Using the two bosonic δ-functions involving the λ̃'s to fix t₂ and t₃ then yields the expected 3-point amplitude, expressed in terms of the Poisson bracket { , } associated to the infinity twistor. Notice that the negative homogeneity of the Poisson bracket ensures the interaction term scales the same way as the kinetic term, and that each balances the scaling of the N = 8 measure.
The linearized field equations of (3.2) state that h(Z) represents a class in H^{0,1}(PT, O(2)), as required by the Penrose transform for massless free fields of helicity +2. At the non-linear level, the field equations state that ∂̄ + {h, ·} defines an integrable almost complex structure on PT that is compatible with the Poisson structure. This is exactly the content of Penrose's nonlinear graviton construction [29]. A twistor space with an integrable almost complex structure corresponds to a conformal equivalence class of space-times with self-dual Weyl tensor. The additional information that the complex structure is compatible with the Poisson structure picks out a distinguished metric in the conformal class that satisfies the vacuum Einstein equations. The presence of the infinity twistor in the $\overline{\rm MHV}$ amplitude is thus a direct consequence of its presence in the self-dual action, reflecting the very nature of the non-linear graviton construction.
For the 3-point MHV amplitude we have (n, d) = (3, 1), so that Z(σ) = Aσ₀ + Bσ₁. We can now remove two rows and columns from Φ. Choosing these to be the first and second rows and the first and third columns, equation (A.2) gives an explicit expression whose second equality holds on the support of the δ-functions for the λ_i. We can remove all three rows and columns from Φ̃, so the integral M_{3,1} takes a simple form. The integrals over the t_i and σ_i may be fixed by the δ-functions. The integrals over (µ_{A,B}, χ_{A,B}) then provide a super-momentum-conserving δ-function, while the four remaining integrals over |A⟩ and |B⟩ are compensated by the GL(2). Overall, we obtain exactly the 3-particle MHV amplitude in N = 8 supergravity, as expected. The 3-particle amplitudes that seed BCFW recursion are thus correctly reproduced by the integral (2.6).
Decay as z → ∞
We now investigate the behaviour of M_{n,d} under the BCFW shift in the limit that the shift parameter z → ∞. We shall see that the highly non-trivial fact that gravitational amplitudes decay as 1/z² in this limit is made manifest by (2.6).
At degree d, we can remove d + 2 rows and columns from Φ̃ and n − d rows and columns from Φ. Hence, since we are only interested in BCFW recursion for 1 ≤ d ≤ n − 3, we can always remove at least two rows and columns from each. With the shift (3.1), which affects only |1⟩ and |n], we choose the removed rows and columns to include 1 and n in both cases. In addition, we choose one of the arbitrary points p_r ∈ Σ in (2.4) to be σ₁ and another to be σ_n, so that the terms j = 1, n drop out of the sum over j in the diagonal elements Φ_ii. Similarly, we choose the arbitrary points p_a ∈ Σ in (2.3) to include σ₁ and σ_n. With these choices, the external data |1⟩, |n⟩ and |1], |n] do not appear in the determinants det′(Φ) det′(Φ̃). Thus, after integrating out the (µ, χ) components of the map, the shift (3.1) affects (2.6) only by changing the arguments of the δ-functions involving λ₁ and λ̃_n. On the coordinate patch σ_α = (1, u) of the worldsheet, these shifted δ-functions take the form given below, where ρ^a is the λ part of the map Z.
To absorb these shifts, we introduce new worldsheet variables (û₁, t̂₁) for particle 1, defined in terms of the BCFW shift parameter z. This definition absorbs the shifts in the δ-functions, up to terms that vanish as z → ∞: both the argument of the shifted λ₁ δ-function and the arguments of the δ-functions involving the λ̃'s remain finite in the new variables. The important point is that the new variables (û₁, t̂₁) remain finite as z → ∞. Therefore, to study the behaviour of (2.6) in this limit, we should express its integrand in terms of these variables. Begin with the measure for particle 1. It follows from (3.3) that the measure transforms with a factor of z⁻⁴, where we have dropped terms that wedge to zero against the measure for particle n. Thus the integration measure of (2.6) falls as z⁻⁴ as the shift parameter tends to infinity. Similarly, we find that the special case of (1n) decays as z⁻¹, as the order z⁰ term cancels exactly. We now investigate the occurrence of (1n) and t₁ in the integrand of M_{n,d}, since these are the only quantities with non-trivial large-z behaviour. We can always choose to remove rows and columns 1 and n from both Φ and Φ̃. This does not quite suffice to remove (1n) and t₁ from the matrices, because they still appear in the diagonal terms. By further choosing one of the p_r and one of the p_a to be σ₁, the summand with j = 1 vanishes in each of these matrices, and since i ≠ 1, n the matrices themselves approach a constant value as z → ∞, obtained by simply replacing σ₁ → σ_n. Aside from the measure, then, the only pieces of the integrand which affect the large-z behaviour are the Vandermonde determinants in the definition of the reduced determinants. Since we have removed rows and columns 1 and n, the Vandermonde determinants associated with Φ are independent of (1n). However, we find that the definition of det′(Φ̃) involves a denominator containing (1n)², where r̃₁, . . . , r̃_d label the other rows and columns that were removed from Φ̃. This factor, appearing in the denominator of det′(Φ̃), behaves as 1/z² in the large-shift limit. Combined with the 1/z⁴ behaviour of the integration measure, we see that M_{n,d}(z) ∝ 1/z² as z → ∞. This ensures that the BCFW integrand M_{n,d}(z) dz/z has no pole at infinity, allowing the BCFW residue theorem to proceed. It is quite remarkable that the formula (2.6) for M_{n,d} reproduces the correct 1/z² behaviour of gravity so transparently. We repeat that this behaviour is highly non-trivial to prove by any other means, and yet is a key property of gravitational scattering amplitudes [33].
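Collecting the two scalings derived above gives the power counting at a glance:

```latex
\[
M_{n,d}(z)\;\sim\;
\underbrace{z^{-4}}_{\text{measure}}\times
\underbrace{z^{+2}}_{(1n)^{-2}\ \text{in the denominator of}\ \det{}'(\tilde\Phi)}\times
\underbrace{z^{0}}_{\text{remaining factors}}
\;=\; z^{-2}.
\]
```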
Incidentally, exactly the same argument as above may be applied to the Witten-RSV formula [30,36] for tree-level scattering amplitudes in N = 4 SYM. In that case the analogous computation shows that the measure decays as 1/z², while (1n) still behaves as 1/z under (3.3), so that the decay is softened to 1/z overall. Had we chosen to shift external particles that were not adjacent in the colour ordering, all the (i, i+1) brackets would have approached constants as z → ∞.
Representing Yang-Mills amplitudes by A_{n,d} thus makes it manifest that they behave as 1/z² under BCFW shifts of non-adjacent particles [6,17]. Once again, this fact is very difficult to see by any means other than the Grassmannian formulation of Yang-Mills amplitudes [3,25].
Multi-particle factorization
The main ingredient in the proof is multiparticle factorization. Gravitational tree amplitudes have a pole whenever the sum P of any two or more external momenta becomes null, and the residue of this pole is the product of two subamplitudes, summed over the helicities of the particle being exchanged. More specifically, divide the particles into two sets L and R, and let P_L denote the total momentum flowing between them. Then the amplitude behaves as in (3.5), where {Λ} is shorthand for the spinor momenta, and Λ represents the spinor momenta of the internal particle in the strict limit that P_L² = 0. In this equation, M represents an amplitude stripped of its overall (bosonic) momentum δ-function. We can restore these δ-functions by writing the residue as in (3.6), where in the last line we have parameterized p in terms of null momenta λλ̃ and q, with q fixed, and a scalar parameter s² chosen for later convenience. Any 4-momentum may be parameterized this way. Notice also that p² = P_L² = s² ⟨λ|q|λ̃].
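Schematically, the factorization statement and the momentum parameterization just described can be summarized as follows; this is a sketch consistent with the stated property p² = s²⟨λ|q|λ̃], with signs and normalizations following common conventions that may differ from (3.5)-(3.7).

```latex
\[
M_n \;\xrightarrow{\ P_L^2\to 0\ }\;
\sum_{h} M_L\big(\{\Lambda_i\}_{i\in L},\{\Lambda,h\}\big)\,
\frac{1}{P_L^2}\,
M_R\big(\{-\Lambda,-h\},\{\Lambda_j\}_{j\in R}\big),
\]
\[
p^{\alpha\dot\alpha} \;=\; \lambda^{\alpha}\tilde\lambda^{\dot\alpha} + s^2 q^{\alpha\dot\alpha},
\qquad q^2 = 0
\;\;\Longrightarrow\;\;
p^2 \;=\; s^2\,\langle\lambda|q|\tilde\lambda\,] .
\]
```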
Suppose we approach the factorization channel by taking the limit as s² → 0. If we wish to recover the amplitude (3.5), then the d⁴p integral in the second line of (3.6) should be performed over a copy of real momentum space. However, as the amplitude itself is diverging, it is more sensible to compute the residue of the pole. This may be done by changing the contour in the final line to be an S¹ encircling the pole at s² = 0, together with an integral over the on-shell phase space of the intermediate particle. One finds an expression in which the δ-functions are now naturally incorporated into the subamplitudes. In particular, this formula shows that the residue itself has no memory of the direction in which the factorization channel was approached. We must show that M_{n,d} in (2.6) has the same property. It will actually be convenient first to rewrite the residue on twistor space by transforming the external and internal Λ's to twistors, obtaining (3.9), where {Z_i} and {Z_j} are the sets of twistors associated with external states on the L and R subamplitudes. Notice that on twistor space, N = 8 gravitational amplitudes are homogeneous of degree +2 in each of their arguments. Under the assumption (valid at least for 3-particle amplitudes) that these gravitational subamplitudes are associated with curves in twistor space, we see that the residue on a factorization channel corresponds to a nodal curve, with the location Z of the node integrated over the space (see Fig. 1). Therefore, to prove that M_{n,d} as given by (2.6) obeys BCFW recursion, and therefore agrees with all tree amplitudes in N = 8 supergravity, we must show both that it has a simple pole on the boundary of the moduli space where the curve degenerates, and further that the residue of this pole is given by (3.9). A standard way to describe the decomposition of a rational curve into a nodal curve is to introduce a complex parameter s and model the rational curve as a conic, where (x, y, z) are homogeneous coordinates on the complex projective plane. The homogeneous coordinates σ_α = (σ₀, σ₁) intrinsic to the CP¹ worldsheet are related to these coordinates in the standard way. The degeneration of the curve is controlled by the parameter s², which we will show is the same parameter as appears in (3.7). In the limit s → 0 the conic degenerates to the union of two CP¹'s, Σ_L and Σ_R, defined so that (z, x) form homogeneous coordinates on Σ_L and (z, y) form homogeneous coordinates on Σ_R. The good homogeneous coordinates intrinsic to Σ_{L,R} follow accordingly, and the affine coordinate u = σ₁/σ₀ on Σ_s is related to the affine coordinates u_{L,R} on Σ_{L,R}. With this choice of coordinates, the node Σ_L ∩ Σ_R is the point x = y = 0 ∈ CP², and is also at the origin in each of the affine coordinates u_{L,R}. As the curve degenerates, the n marked points distribute themselves among the component curves Σ_{L,R}, with at least two of these points on each curve component. In the degeneration limit, any such distribution defines a boundary divisor in the moduli space M̄_{0,n} of n-pointed rational curves, with the locations of the marked points considered up to SL(2; C) transformations. The parameter s² is then a coordinate transverse to this boundary divisor, which lies at s² = 0.
Ordinarily, we think of coordinates on M̄_{0,n} as given by a choice of n − 3 independent cross-ratios of the marked points. No choice of these cross-ratios provides coordinates globally on M̄_{0,n}, but we can always make a choice such that a particular boundary divisor arises when one or more cross-ratios approach zero, so that in some conformal frame the marked points in the numerator of these cross-ratios are colliding. To relate this description to s², consider the cross-ratios (3.11), where without loss of generality we assume that our boundary divisor has 1 ∈ L and n − 1, n ∈ R.
To study the degeneration, marked points should be described in terms of the coordinates u_L or u_R as appropriate. Using the relation between u and u_{L,R}, we see that, as we approach the boundary divisor, any ratio x_i/x_j with i ∈ L and j ∈ R will vanish as s², whereas any such ratio with i and j limiting onto the same curve component remains finite, provided we approach a generic point of the boundary divisor (i.e., we only consider a single degeneration). We can now extract s² by defining rescaled cross-ratios y_i, where the y_i are to be considered only up to an overall scaling. The accompanying factor provides a meromorphic top form on the moduli space. This form cannot be written in terms of the cross-ratios alone, since it has non-zero homogeneity in each of the σ's. However, fixing the SL(2; C) by freezing 1, n − 1 and n, at least locally we can write it in terms of n − 3 of the cross-ratios (3.11) and a function that absorbs the homogeneity.
Upon transforming to the new coordinates, for i ∈ L we wish to replace the x_i cross-ratios by s² and the y_i, with the y_i treated projectively. To leading order in s², the measure for these L cross-ratios transforms accordingly; in the second line we work on the affine patch y₂ = const. The measure for the R cross-ratios is s-independent and stays as in (3.13). To put this measure in a more familiar form, we now undo the transformation (3.13) separately on the left and on the right. Putting all the pieces together, we have shown that in a neighbourhood of the boundary divisor defined by s² = 0, the measure for the integration over marked points may be written in factorized form, where we recall that the coordinates (3.10) assumed a gauge fixing in which the node was at the origin in each curve component. Bearing in mind that the boundary divisor is naturally the product M̄_{0,n_L+1} × M̄_{0,n_R+1} of the moduli spaces for rational curves with fewer marked points, the factors in square brackets are precisely the expected ((n_{L,R} + 1) − 3)-forms on these spaces. The form ds² is normal to this boundary divisor. Thus, to find the residue of our proposed gravity amplitude M_{n,d} in a factorization channel, we must interpret the contour in (2.6) to include an S¹ factor that encircles the boundary divisor s² = 0, and use this contour to compute the ds² integral. Of course, until we study the rest of the integrand in M_{n,d}, it is not clear that we actually have a simple pole there. Armed with this description of a neighbourhood of the factorization channel, we now investigate the behaviour of the rest of the integrand. Begin by considering the map Z : Σ_s → PT. It is useful to pull out a factor of u_L^d and write the map in the form (3.15), with the second or third lines the appropriate description for particles limiting onto Σ_L or Σ_R, respectively. The coefficients Z_•, Z_a and Z_b are related to the original map coefficients Z_c by the rescalings (3.16). This shows that as s² → 0, the twistor curve Z(Σ) degenerates into a pair of curves Z(Σ_L) and Z(Σ_R) that are the images of maps of degree d_L and d_R respectively, where d_L + d_R = d (see the following remark on the moduli space). As shown in [34], the δ-functions involving λ_i that are already present in the external wavefunctions combine with those in λ̃_i that are generated by Fourier transforming to momentum space to enforce the expected support condition, where ρ is the λ-component of a map coefficient that limits onto Σ_R. This shows that, as in (3.8) and (3.9), a factorization channel in momentum space corresponds to a nodal curve in twistor space, with the same parameter s² governing both degenerations. We can account for the various factors of u_L^d and the rescalings in (3.16) as follows. Firstly, unlike in the N = 4 Calabi-Yau case, the N = 8 measure is not invariant under the rescaling (3.16) of the map coefficients, but rather transforms as in (3.17). Secondly, bearing in mind that the wavefunctions and matrix elements each depend homogeneously on Z(u), we can treat the map purely as the terms in parentheses in (3.15), provided we also make the corresponding replacements. We do this henceforth. In terms of the new coordinates, the product of the replaced wavefunctions takes its factorized form to leading order in s².
(Remark: by forgetting the data of the map, the moduli space M̄_{0,n}(PT, d) of degree d rational maps from an n-pointed curve to PT admits a morphism to M̄_{0,n}. As we see in the text, a boundary divisor in M̄_{0,n}(PT, d) is specified by a boundary divisor in M̄_{0,n}, together with choices of a degree d_L and a degree d_R map on the two curve components, with d_L + d_R = d.)
The node itself is mapped to Z_• ∈ PT. It will be convenient to be able to treat the node separately on the two curve components. For this, we introduce an extra factor into the integrand of M_{n,d}. To understand this factor, first note that the powers of the scaling parameters t and r in the measure are chosen to ensure the whole expression has no weight in any of the three twistors. The integrals can all be performed against the δ-functions, which simply freeze Y_• to (t/r)Z_•. Now, whenever we describe a particle in R, we write the map in terms of coefficients Y_b that are a further rescaling of the d_R map coefficients Z_b. Note that we do not rescale the d_L coefficients Z_a. Pulling out this factor of r/t from all the wavefunctions h_j(Z) with j ∈ R, and from all the rows and columns of Φ and Φ̃ corresponding to particles in R, and also changing the Z_b measure into that for Y_b, leads to a factor of (r/t)². In the original formula (2.6), we divided by vol(C*) to account for an overall rescaling of the map coefficients. However, as a consequence of (3.21), the new d_R map coefficients Y_b are no longer locked to scale like {Z_•, Z_a} but instead are locked to scale like Y_•. This factor combines beautifully with the factors in the measure (3.20) to convert those integrals into our standard δ̄^{3|8}'s of homogeneity +2 in each entry. These δ-functions can thus be treated as 'external data' for the node. Thus, as in (3.9), as s² → 0 the map degenerates into two independent maps from (n_{L,R} + 1)-pointed curves Σ_{L,R}, each described by d_{L,R} + 1 twistor coefficients, with a point •_{L,R} on each curve mapped to the same point Z in the target space. The final integral D^{3|8}Z allows this twistor to be anywhere, just as in the residue calculation (3.9). Now that we have described the degeneration, we must show that (2.6) has a simple pole there, with the correct residue. Our first aim is to show that to leading order in s², the matrices Φ and Φ̃ become block diagonal, so that their determinants naturally factor into a product of determinants for Φ_{L,R} and for Φ̃_{L,R}. Consider first Φ, and assume that we choose the d̃ + 1 reference points p_r on the diagonal in (2.4) so that d̃_{L,R} of them limit onto Σ_{L,R}, where d̃_L + d̃_R = d̃.
Using (3.12) to transform to the limit coordinates, we find that Φ can be written as when i, j ∈ L, and where the subscript L on means we are using limiting coordinates appropriate for L throughout. Similarly when i, j ∈ R and again we use the R limiting coordinates. Once we extract a power of 1/u d L −1 iL from each row and column of (3.22), a power of u d L iR from each row and column of (3.23) and powers of s from both, these matrices are exactly of the form Φ L,R for the subamplitudes. Note that in both cases, we have extended the sum on the diagonal term to include the node (located at u • = 0 in our coordinates). This is possible because the choice of the node as a reference point means this term is zero. While Φ L,R as given here are n L,R × n L,R matrices (rather than (n L,R + 1) × (n L,R + 1) matrices), they still have the expected rank d L,R , because in each case we were forced to choose one of the reference points to be the node. It is as if the row and column corresponding to the internal particle have 'already' been removed.
The off-block-diagonal terms Φ_ij with i ∈ L, j ∈ R are of the same order in s² as the R block-diagonal ones in (3.23). Therefore, the leading term in the reduced determinant comes from the block-diagonal terms. After also changing variables u → u_{L,R} in the Vandermonde determinants, a straightforward but somewhat tedious calculation shows that the reduced determinant factorizes as in (3.24), exactly as required for a product of two subamplitudes, times an overall power of s.
In an exactly parallel computation, transforming Φ̃ into the L, R coordinates shows that (3.25) holds to leading order in s². Once again the matrices Φ̃_L and Φ̃_R are precisely as they should be for the left and right subamplitudes, where again we choose the node as one of the reference points. After these somewhat lengthy calculations, we are finally in a position to compute the residue of M_{n,d} on the boundary of the moduli space corresponding to a factorization channel. First, collecting powers of s² from equations (3.14), (3.17), (3.19), (3.24) and (3.25), a near-miraculous cancellation occurs, leaving simply an overall factor of ds²/s² and showing that the integrand of (2.6) indeed has a simple pole on boundary divisors in the moduli space. Combining all the pieces, the residue of this simple pole is precisely (3.9), in other words exactly the residue of the gravitational scattering amplitude.
We have now shown that M_{n,d} as given by equation (2.6) produces the correct seed amplitudes for BCFW recursion, has the correct 1/z² decay as the BCFW shift parameter z → ∞, and has a simple pole on any physical factorization channel, with residue correctly given by the product of the two subamplitudes on either side of the factorization, integrated over the phase space of the intermediate state.
The only remaining thing to check is that in momentum space, M n,d has no unwanted unphysical poles. This is straightforward. A simple dimension count of integrals versus constraints shows that, as for Yang-Mills [30], when evaluated on momentum eigenstates, M n,d is inevitably a rational function of the spinor momenta. Thus the only possible singularities are poles. Any unphysical poles in M n,d which carry some helicity weight would be detected by taking one of the external momenta to become soft. Unphysical "multiparticle" poles, i.e. poles that carry no helicity weight, would also be detected by sequentially taking many particles to become soft. However, the soft limits of M n,d have recently been checked to agree with those of gravity [13]. We therefore conclude that M n,d indeed obeys the correct BCFW recursion relation, and have thus demonstrated that it computes all tree amplitudes in N = 8 supergravity.
Parity invariance
One of the pleasing features of using (2.6) to describe gravitational scattering amplitudes is that the way these amplitudes break conformal symmetry becomes completely explicit: it arises purely from the infinity-twistor brackets ⟨ , ⟩ and [ , ] in Φ and Φ̃, respectively. On the other hand, parity transformations are not manifest, because parity exchanges twistor space with the dual twistor space. For example, the twistor space CP^{3|N} of conformally flat space-time is exchanged with the dual projective space CP^{3|N}*. On the original Z twistor space, [ , ] is a differential operator while ⟨ , ⟩ is multiplicative, so the roles of these brackets are interchanged under parity. We see this change of roles quite transparently at the level of amplitudes: a parity transformation flips the helicities of all external states, so it exchanges d ↔ d̃, and one of the key observations of [15] was that the n-particle N^{d−1}MHV amplitude M_{n,d} is a monomial of degree d in ⟨ , ⟩ and of degree d̃ in [ , ]. This strongly suggests that the determinants of Φ and Φ̃, which hitherto have seemed very different, are naturally parity conjugates of each other. Let us now see this explicitly.
Acting on either momentum or twistor eigenstates, the matrix Φ̃ has the elements shown above. To bring this to the form of Φ, consider making the change of variables t_i → s_i defined in (4.1). This transformation of the scaling parameters played a key role in studying the behaviour of the connected prescription for N = 4 SYM under parity [30,35]; its relation to a parity transformation will be reviewed below. Under this change of variables, we find the relation (4.2), where Φ̃^{(d)} is our usual Φ̃ matrix on a degree d curve, and Φ^{(d̃)}(⟨ , ⟩, s) is the Φ matrix appropriate for a degree d̃ curve; in passing from one to the other we also make the replacement [ , ] → ⟨ , ⟩. Finally, A is the diagonal matrix whose j-th entry is the product ∏_{k≠j}(jk). Acting with A as in (4.2) multiplies the rows and columns of the matrix by this product, which accounts for the denominator in (4.1).
In equation (A.3), we saw that det′(Φ) and det′(Φ̃) behave just as ordinary determinants under matrix multiplication. In the present case this yields the transformation (4.3). Similarly, if we start from Φ^{(d)}(⟨ , ⟩, t) and make the same change of variables (4.1), then reading this equation backwards gives the conjugate relation (4.4), with no extra factors. Note that the roles of Φ and Φ̃ have been exchanged, along with the exchanges ⟨ , ⟩ ↔ [ , ] and d ↔ d̃.
As mentioned above, in [30,35] it was shown that the parity transformation of all the other factors in the N = 4 SYM tree amplitudes A_{n,d} conspires to produce the transformation (4.1) of the scaling parameters. In N = 4 SYM, the measure for the scaling parameters transforms with a proportionality factor that is cancelled by the transformation of the fermions. In N = 8 supergravity, the scaling parameters' measure transforms differently, but now the transformation of the N = 8 fermions provides an extra factor of 1/s_i⁴. Exactly the same arguments as given in [30,35] thus establish the parity symmetry of our formulation of M_{n,d}. Rather than simply repeat those arguments verbatim, we instead make parity manifest by recasting the integral (2.6) in terms of the link variables introduced in [4] for N = 4 SYM.
Gravity and the Grassmannian
The aim of this section is to write the tree amplitude M_{n,d} as an integral over the Grassmannian G(k, n) (with k = d + 1), along the lines of [2,10,14,18,20,28,32] for the connected prescription of N = 4 SYM. The most obvious reason to perform this transformation is that, as an integral over G(k, n), all δ-function constraints involving external data become linear in the variables and hence trivial to perform. The price for such a simplification is that the number of integration variables is larger than before. The difference in the number of variables is (k − 2)(n − k − 2), and hence the amplitude becomes a multidimensional contour integral over that many variables.
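The count quoted here is simply the difference between the dimension of the Grassmannian and the dimension of the subvariety swept out by the rational maps (computed in (5.1) below):

```latex
\[
\dim G(k,n) \;-\; 2(n-2) \;=\; k(n-k) - (2n-4) \;=\; (k-2)(n-k-2).
\]
```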
In N = 4 super Yang-Mills something remarkable happens: repeated applications of the global residue theorem transform the integral into one where all variables can be solved for from linear systems of equations [2,10,18,19,20,28,32]. Computationally this is a major advantage, but it also gives a conceptual advantage, because the individual residues computed after the application of the global residue theorem coincide precisely with BCFW terms and hence, in Yang-Mills, with leading singularities of the theory. One can then write down a generating function for all leading singularities [3,25] that control the behaviour of the theory to all orders in perturbation theory and which has allowed the development of recursion relations for the all-loop integrand [1].
These remarkable properties of the Grassmannian formulation of N = 4 SYM should provide sufficient motivation to explore the same avenues in gravity.
It is important to realize that the existence of a Grassmannian formulation per se has nothing to do with N = 4 SYM, or Yangian invariance, or even twistors. Rather, it is a completely general consequence of dealing with degree d holomorphic maps from an n-pointed rational curve. To see this [14], recall that we can describe the map Z : CP¹ → PT by picking a basis {P₀(σ), . . . , P_d(σ)} of d + 1 linearly independent degree d polynomials in the worldsheet coordinates and expanding in this basis. The space of such polynomials is H⁰(CP¹, O(d)) ≅ C^k. Given n marked points on the worldsheet, we would like to define a natural embedding of this C^k into C^n by 'evaluating' each of the P_a(σ) at each of the marked points. This can be done once we fix a scale for σ at each marked point; in other words, once we pick a trivialization of O(d) at each of the σ_i. This is exactly the role of the scaling parameters t_i. Thus, for every choice of n marked points and n scaling parameters, our map defines a k-plane in C^n, i.e. a point in the Grassmannian G(k, n).
As we integrate over the moduli space of rational maps, we sweep out a 2(n − 2)-dimensional subvariety of G(k, n). This dimension arises as 2(n − 2) = (n − 3) + (n) + (−1), (5.1) where (n − 3) parameters come from the locations of the marked points up to worldsheet SL(2; C) transformations, a further n parameters are the scaling parameters t_i, and we lose one parameter from the overall rescaling. (Equivalently, we have 2n parameters from both components of the worldsheet coordinates σ_α of each particle, minus four from the quotient by GL(2; C).) The precise subvariety we obtain may be characterized as follows [2,14]. The map from the worldsheet to the space of degree d polynomials, considered up to an overall scale, is of course the Veronese map. The subvariety of the Grassmannian we sweep out is therefore defined by the condition that the n different k-vectors we get by evaluating our polynomials do not simply span a k-plane through the origin in C^n, or equivalently a CP^d ⊂ CP^{n−1}, but rather lie in the image of the Veronese map to that CP^d. As shown in [2,10,18,19,20,28,32] and obtained again below, this condition amounts to the vanishing of (d − 1)(d̃ − 1) quartics in the Plücker coordinates of the Grassmannian, giving the dimension expected from (5.1). On transforming to momentum space, the external data specifies 2(n − 2) divisors in G(k, n), defined by those k-planes in C^n that contain the 2-plane specified by the λ_i and are orthogonal to the 2-plane specified by the λ̃_i. The intersection number of these divisors with the Veronese subvariety is believed to be the Eulerian number ⟨n − 3, k − 2⟩ [30].
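Eulerian numbers satisfy a standard two-term recurrence, so the expected number of solutions is easy to tabulate. The following short routine (an illustration, not from the paper) computes ⟨p, q⟩ and evaluates ⟨n − 3, k − 2⟩ for a few small cases.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def eulerian(p, q):
    """Eulerian number <p, q>: permutations of {1..p} with exactly q descents.

    Standard recurrence: <p, q> = (q + 1)<p-1, q> + (p - q)<p-1, q-1>.
    """
    if q < 0 or q > max(p - 1, 0):
        return 0
    if p <= 1:
        return 1 if q == 0 else 0
    return (q + 1) * eulerian(p - 1, q) + (p - q) * eulerian(p - 1, q - 1)

# Expected number of solutions <n-3, k-2> for small (n, k):
for n, k in [(6, 3), (7, 3), (8, 4)]:
    print(n, k, eulerian(n - 3, k - 2))
```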
All the above features of the Grassmannian formulation should thus be common to both N = 4 super Yang-Mills and N = 8 supergravity, purely as a consequence of their having a description in terms of degree d rational maps to twistor space. Of course, the detailed form of the measure on the Grassmannian will be different in the two cases, coming from the external wavefunctions, from the Parke-Taylor worldsheet denominator in Yang-Mills, and from |Φ| and |Φ̃| in gravity.
Let us now construct the Grassmannian formulation of tree amplitudes in N = 8 supergravity. We will choose our external wavefunctions to be either twistor or dual twistor eigenstates. More precisely, we choose exactly d + 1 of the wavefunctions to be twistor eigenstates, which have support only when σ_a ∈ Σ is mapped to Z_a ∈ PT. The remaining d̃ + 1 wavefunctions are chosen to be h_r(Z(σ_r)) = ∫ (dt_r/t_r³) exp(i t_r W_r · Z(σ_r)), which have plane-wave dependence on a fixed dual twistor W_r. We sometimes write components W_I = (μ̃^α, λ̃_α̇, ψ_A) dual to the components Z^I = (λ_α, μ^α̇, χ^A) of the original twistors. Notice that both types of wavefunction have homogeneity +2 in Z(σ), as required for an N = 8 multiplet on twistor space. To recover the momentum space amplitude from these twistorial amplitudes, one Fourier transforms μ^a → λ̃_a in the twistor variables Z_a and μ̃_r → λ_r in the W_r dual twistor variables. Since μ and μ̃ appear only in the exponentials, this Fourier transform is straightforward.
The main virtue of these external wavefunctions is that they provide exactly enough δ-functions to perform all the integrals over the map Z. If we pick our basis of polynomials to be the one in (5.4), we can describe the map by (5.5). Then for a = 1, . . . , d + 1, we have simply Z(σ_a) = Y_a, so the k × n matrix is fixed to be the identity matrix in the k columns corresponding to the a-type particles. In other words, with this choice of basis, our k-plane inside C^n will be represented by the matrix (5.6) for some parameters c_ra. These parameters are known as 'link variables' [4]. Using a different choice of basis for H⁰(CP¹, O(d)) would lead to a GL(k; C) transformation of C, but the point it defines in the Grassmannian remains invariant. Note that, with the parametrization given in (5.6), since c_ab = δ_ab, the link variables can be thought of directly as minors of C_ai.
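The evaluation construction described here can be made concrete in a few lines of numpy. The sketch below is an illustration under stated conventions that are my own choices, not the paper's: the first k particles are taken as a-type, and the bracket (ra) is realized as u_r − u_a in an affine patch. It builds the evaluation matrix, gauge-fixes it to the link form, and checks the resulting c_ra against the Lagrange-basis formula implied by a basis of the type (5.4).

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2                               # map degree; k = d + 1
k, n = d + 1, 6
u = rng.standard_normal(n)          # affine worldsheet positions sigma_i = (1, u_i)
t = rng.standard_normal(n)          # scaling parameters (trivializations of O(d))

# Evaluation matrix in the monomial basis {1, u, ..., u^d}: V[a, i] = t_i * u_i**a.
V = np.vander(u, k, increasing=True).T * t

a_idx = [0, 1, 2]                   # the k 'a-type' columns
r_idx = [3, 4, 5]                   # the remaining 'r-type' columns

# Gauge-fix to the link representation: C = A^{-1} V has the identity in the a-columns.
C = np.linalg.solve(V[:, a_idx], V)
assert np.allclose(C[:, a_idx], np.eye(k))

# The surviving entries c_{ra} agree with the Lagrange-basis evaluation
# c_{ra} = (t_r / t_a) * prod_{b != a} (u_r - u_b) / (u_a - u_b), b over a-type points.
for j, a in enumerate(a_idx):
    for r in r_idx:
        others = [b for b in a_idx if b != a]
        pred = (t[r] / t[a]) * np.prod([(u[r] - u[b]) / (u[a] - u[b]) for b in others])
        assert np.isclose(C[j, r], pred)
print("link variables:\n", C[:, r_idx])
```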
There is a small subtlety in using (5.5) to describe the map, because (5.4) is not quite a basis for H⁰(CP¹, O(d)), since P_a(σ) in (5.4) has weight −d in σ_a. We can absorb this by declaring that for each a, Y_a likewise has weight +d under a rescaling of σ_a, so that Y_a really takes values in O_a(d). With the Calabi-Yau N = 4 supertwistor space, this may be done without comment, but with N = 8 supersymmetry we acquire a Jacobian in the measure for the integration over the map, written in terms of the map coefficients in (5.5). This Jacobian cancels the scaling of d^{4|8}Y_a. With this subtlety accounted for, the wavefunctions (5.3) enforce Y_a = Z_a/t_a, allowing us to integrate out the map directly.
We are left with a contribution ∏_a dt_a t_a ∏_r dt_r/t_r³ from the external wavefunctions, where the measure for the t_a's includes a factor of t_a⁴ from solving the δ-functions for the Y_a's. The factors in the exponential are precisely the Grassmannian coordinates c_ra that we obtain by the procedure described above. The ratio t_r/t_a in front of P_a(σ_r) defines a trivialization of O(d) at σ_r, and so sets a meaningful scale for this ratio of homogeneous coordinates. The next step is to manipulate our main formula (2.6) so as to write it purely in terms of the external data {W_r, Z_a} and the Grassmannian minors c_ra (i.e. 'link variables'), treated as independent variables. Many of the required steps follow in close parallel to the computations of [2,10,18,19,20,28,32] in N = 4 SYM. Since we did not find these manipulations to be particularly enlightening, we have postponed them to Appendix B.
The final result is that all tree amplitudes in N = 8 supergravity can be written as the Grassmannian integral (5.9). In this representation, the infinity twistor [ , ] sees only the W_r's (which contain the λ̃_r's), while the infinity twistor ⟨ , ⟩ sees only the Z_a's (containing the λ_a's). These are therefore multiplicative operators in both cases. Also notice that, as usual in the link representation, parity invariance is now completely manifest.
Note. As this manuscript was being prepared for submission, [13] and [22] appeared. The former has overlap with the parity invariance proof given in Section 4, while the latter overlaps with the link representation formula presented in Section 5.
A Some properties of determinants
In this appendix we give a careful definition of the determinants det′(Φ) and det′(Φ̃) that appear in (2.6), and gather a few general results about such determinants. All the material here is standard mathematics.
The symmetric matrix Φ defines an inner product on an n-dimensional vector space that we call V. The m-dimensional kernel of Φ is characterized by an m × n matrix of relations R. This is summarized in the sequence (A.1), which is exact if ker Φ = im R and ker Rᵀ = im Φ, so that the matrices otherwise have maximal rank (m for R and n − m for Φ). If we are given top exterior forms ϵ and ε on V and W respectively, the determinant det′(Φ) may be defined in an invariant way via the defining identity. That this identity is true for some det′(Φ) follows from the fact that, while the left-hand side is non-zero by the assumption on the rank of Φ, it vanishes if contracted with any further copy of Φ. Since the kernel is characterized by the m vectors R, the m upstairs skew indices i₁, . . . , i_m and j₁, . . . , j_m must each be a multiple of the m-th exterior power of the R's. Choosing any values for these free indices, we immediately obtain the standard formula (A.2). The above argument shows this expression is independent of the choice of indices. It will be useful to understand the behaviour of reduced determinants when Φ is multiplied by a non-singular n × n matrix A. If Φ has kernel R, then ΦA has kernel A⁻¹R. We now replace Φ by ΦA in the definition (A.1) and multiply through by m further A's. On the left we obtain a factor of the determinant of A, while on the right the multiplication cancels the factors of A⁻¹ in the m-fold exterior product of the (A⁻¹R)'s. We therefore obtain simply det′(ΦA) = det(A) det′(Φ). In particular, provided A is non-singular, conjugation Φ → A⁻¹ΦA does not change the reduced determinant. In our case, neither the map Φ nor the vector spaces V and W are really fixed, but depend on parameters such as the map to twistor space and the locations of the vertex operators. Because we can rescale these parameters, there are no preferred top exterior forms ϵ or ε. The determinant det′(Φ) is not really a number, but a section of the determinant line bundle (ΛⁿV*)² ⊗ (ΛᵐW)² over the space of parameters. We need to check that the determinant line bundles defined by Φ and Φ̃ combine with the rest of the factors in the integrand to form a canonically trivial bundle, so that the whole expression is invariant under local rescalings of the worldsheet and map homogeneous coordinates.
We can keep track of the behaviour under rescalings of the homogeneous coordinates σ and Z by defining a quantity to take values in O_i(1) if it has homogeneity 1 under rescaling of σ_i, and values in O[1] if it has homogeneity 1 under rescaling of Z. The weights of the elements Φ_ij then identify Φ as a symmetric form on the space V given in (A.4), so that Φ gives a pure number, invariant under rescalings, when evaluated on two elements of V.
With the kernel of Φ defined by (2.5), the map R can be written explicitly. Since V is given by (A.4), this identifies W as a space built on C^{n−d}, where C^{n−d} is (dual to) the space H⁰(Σ, O(d̃ + 1)) of degree d̃ + 1 polynomials in σ. The determinant line bundle associated to Φ then shows that det′(Φ) has homogeneity 2d in Z and 2n − 2 in each of the n points σ_i. Considering the analogous exact sequence for Φ̃ shows that det′(Φ̃) is a section of the corresponding determinant line bundle. Combining the two determinants and the explicit Vandermonde factor in (2.6) shows that the total weight correctly conspires to cancel the scaling −4(d + 1) of the measure ∏_a d^{4|8}Z_a and of the wavefunction factors (σ_i dσ_i) h_i(Z(σ_i)). The integrand of (2.6) is thus projectively well-defined.
Finally, let us comment on the relation between the definition of det′(Φ) given here and that given in [15]. Here, the denominator in (A.2) involves a ratio whose numerator is the Vandermonde determinant of all the worldsheet coordinates associated with the components of Φ that are absent in (A.1), and whose denominator comes from the factors in the denominator of R in (A.5). A little experimentation shows that when d > 1, (A.6) can also be written in an equivalent closed form. This is the definition that was used in [15], and it is often more convenient for explicit calculations.
B Transformation to the link variables
In this appendix we explain how M_{n,d}({W_r, Z_a}) can be manipulated so as to be written as an integral over the Grassmannian, gauge-fixed to the link representation.
With the aim of simplifying the argument of the exponentials in (5.8), it is useful to replace the scaling parameters (t_a, t_r) by parameters (S_a, T_r) defined as in [18,32]. In these variables the exponential takes the form exp( Σ_{r,a} W_r · Z_a T_r S_a/(ra) ), where the 1/(ra) factor appears in the exponential because (ra) is absent in (5.8) but present in the definition of T_r. The factor of ∏_a ∏_{b≠a} (ab)⁻¹ precisely cancels the Jacobian factor in (5.7) associated with our choice of basis polynomials.
The net effect is that the argument of the exponential in (5.8) becomes simply Σ_{a,r} c_ra W_r · Z_a. We can treat the c_ra's as (d + 1)(d̃ + 1) independent variables if we enforce the conditions (B.6) by introducing further δ-functions into M_{n,d} via 1 = ∏_{r,a} ∫ dc_ra δ̄(c_ra − T_r S_a/(ra)).
At this point, almost all of the formula for the amplitude can immediately be written in terms of the c's and the external data, and we find the expression given in (5.10).
To reach our final form of the Grassmannian representation of gravitational tree amplitudes, depending exclusively on the c_ra's and the external data, we must perform the (σ, T, S) integrals. This is a straightforward, if rather lengthy, exercise. We choose to fix the SL(2; C) freedom by freezing σ₁, σ_{n−1} and σ_n to some arbitrary values, at the usual expense of a Jacobian (1 n−1)(n−1 n)(n 1), and fix the scaling by freezing S_n = 1. The integrals are then performed using 2n of the (d + 1)(d̃ + 1) δ-functions, and lead directly to (5.9) given in the main text. Note in particular that the Veronese constraints δ̄(V_{a,n−1,n,1,2,r}) that remain in (5.9) arise simply from repeatedly substituting the support of one δ-function into another.
C Conventions
Let us list our conventions. We take PT to be the N = 8 supertwistor space CP^{3|8} with a line I removed. We use calligraphic letters to denote supertwistors, and lowercase and uppercase Roman indices to denote their four bosonic and N fermionic components, respectively. We often decompose the bosonic components into two 2-component Weyl spinors with dotted and undotted Greek indices. Thus Z = (Z^a, χ^A) = (λ_α, μ^α̇, χ^A). External states are labelled by lowercase Roman indices from the middle of the alphabet, i, j, . . . ∈ {1, . . . , n}. We use σ_α with α, β, . . . ∈ {1, 2} to denote homogeneous coordinates on the CP¹ worldsheet. We often choose italic letters from the beginning of the alphabet to run over the space of degree d polynomials in the worldsheet coordinates, so a, b, . . . ∈ {0, . . . , d}. It is also useful to separately allow r, s, . . . ∈ {0, . . . , d̃}. We use [ , ] to denote dotted spinor contractions, ⟨ , ⟩ for undotted contractions, and ( , ) for contractions of the homogeneous coordinates σ on the worldsheet. When affine coordinates are more convenient, we will choose them so that σ = (1, u). We shall denote the data of external spinor supermomenta by Λ = (λ_α, λ̃_α̇, η^A). | 13,463.8 | 2012-07-19T00:00:00.000 | [
"Mathematics"
] |
Multi-Step Ahead Short-Term Load Forecasting Using Hybrid Feature Selection and Improved Long Short-Term Memory Network
Short-term load forecasting (STLF) plays an important role in the economic dispatch of power systems. Obtaining accurate short-term load forecasts can greatly improve the safety and economy of power grid operation. In recent years, a large number of short-term load forecasting methods have been proposed. However, selecting the optimal feature set and accurately predicting multi-step ahead short-term load still pose huge challenges. In this paper, a hybrid feature selection method is proposed, and an Improved Long Short-Term Memory network (ILSTM) is applied to predict multi-step ahead load. The method first takes the influence of temperature, humidity, dew point, and date type on the load into consideration. Furthermore, the maximal information coefficient (MIC) is used for preliminary screening of the historical load, and Max-Relevance and Min-Redundancy (mRMR) is employed for further feature selection. Finally, the selected feature set is taken as the input of the model to perform multi-step ahead short-term load prediction with the Improved Long Short-Term Memory network. In order to verify the performance of the proposed model, two categories of contrast experiments are applied: (1) comparing the model with hybrid feature selection against the model that does not adopt hybrid feature selection; and (2) comparing different models, including the Long Short-Term Memory network (LSTM), Gated Recurrent Unit (GRU), and Support Vector Regression (SVR), all using hybrid feature selection. The results of the experiments, which were conducted over four periods in Hubei Province, China, show that hybrid feature selection can improve the prediction accuracy of the model and that the proposed model can accurately predict multi-step ahead load.
Introduction
With the rapid development of the economy, the application of electricity in various aspects of production and daily life has become increasingly widespread [1]. Faced with the difficulty of storing electrical energy, power plants need to generate electricity in accordance with the requirements of the power grid [2]. Short-term load forecasting (STLF) can provide a decision-making basis for generation dispatchers to draw up a reasonable generation dispatching plan [3], and it plays a vital role in the optimal combination of units, economic dispatch, optimal power flow, and power market transactions [4]. However, the short-term load is sensitive to the external environment, such as climate change, date types, and social activities [5]. The randomness of the load sequence is increased by these factors. In the Long Short-Term Memory (LSTM) network, gate structures (input gate, output gate, and forget gate) were proposed for the application of the network. After years of testing, LSTM showed a more prominent contribution to time-series prediction than the RNN. In recent years, numerous LSTM variants have been proposed. Zhang et al. unified the gates in the LSTM network into one gate, with the gates sharing the same set of weights, thereby reducing training time [36]. Pei et al. changed the structure of the LSTM network to achieve better prediction results and shorter training time [37]. These variants of LSTM are well suited to short-term prediction.
Nowadays, most research focuses on single-step STLF. However, accurate multi-step STLF has greater significance for formulating generation scheduling plans [38]: it supports longer-term planning of electric power dispatching and can reap greater benefits for power operators [39]. This paper is devoted to the exploration of multi-step STLF. In order to test the limiting predictive ability of the model under short-term load accuracy requirements, the proposed model is used to perform multi-step prediction of the power load in Hubei Province. This study provides technical support for a power system to formulate a generation plan.
Compared with the existing research on short-term load forecasting, the highlights and advantages of our study are as follows: (1) The model not only considers other influencing factors, such as the environment, but also pays particular attention to the influence of the historical load, adopting a two-stage feature selection method to select load features from the 168 time periods of the previous week; (2) This paper proposes an Improved Long Short-Term Memory network for load prediction. This network changes the gate structure of the original network and the cell-state update method; compared with the traditional LSTM, it has higher prediction accuracy; (3) Most popular short-term load forecasting models predict the load of the next period (hour-ahead or day-ahead). This article is dedicated to predicting the load of the next several periods (multi-step ahead), which is more conducive to rationally arranging the generation tasks of the power station and ensuring the stability of power system operation. This research therefore has greater practical significance.
Methodology
In this section, the feature selection method used in this paper is first introduced. This method combines a filter feature selection method and a wrapper feature selection method; the specific steps and formulas are given in Section 2.1. The main predictive model, the Improved Long Short-Term Memory network [37], is introduced in Section 2.2. Finally, the overall steps of the predictive model and its flow chart are given in Section 2.3.
Hybrid Feature Selection
In load forecasting, many factors influence the load, such as the previous load, the date type, temperature, dew point, and humidity. An accurate load forecasting model should therefore incorporate environmental factors and date types. When the temperature is high, the operating power of refrigeration equipment increases greatly, which directly affects the power load; when the temperature is low, heating equipment has a similar impact. During holidays, factory shutdowns also affect the load substantially. Therefore, quantitative analysis of these factors is important for modeling load changes.
For non-numeric features, such as date types, quantification is needed. The date types selected in this article are workdays, rest days, and holidays; such non-numeric features need to be encoded. The maximum length of a Chinese holiday is 7 days, so holidays can be mapped to the codes shown in Table 1, where Holiday1 represents a one-day holiday, and so on. The other, numerical features are normalized accordingly. The combined effect of these factors is complex. The maximal information coefficient (MIC) is applied to measure the nonlinear dependence between each factor and the power load; the closer MIC is to 1, the stronger the nonlinear dependence. The MIC is built from the mutual information
$$I(x, y) = \iint p(x, y)\,\log\frac{p(x, y)}{p(x)\,p(y)}\,\mathrm{d}x\,\mathrm{d}y,$$
where $p(x, y)$ is the joint probability density of the variables X and Y, and $p(x)$ and $p(y)$ are the marginal probability densities of X and Y, respectively; MIC is the maximum of the normalized mutual information over grid partitions of the data,
$$\mathrm{MIC}(x, y) = \max_{|X||Y| < B(n)} \frac{I(x, y)}{\log_2 \min(|X|, |Y|)}.$$
Peng et al. proposed a feature selection method named Max-Relevance and Min-Redundancy [40] that uses mutual information scores to select features; its purpose is to penalize the relevance of a feature by its redundancy with the already selected features. The relevance of a feature set S with the class c is defined as the average of all mutual information values between each feature $f_i$ and the class c:
$$D = \frac{1}{|S|} \sum_{f_i \in S} I(f_i; c).$$
The redundancy of the features in S is the mean of all mutual information values between pairs of features $f_i$ and $f_j$:
$$R = \frac{1}{|S|^2} \sum_{f_i, f_j \in S} I(f_i; f_j).$$
Combining these two quantities, a criterion φ is defined to optimize D and R simultaneously:
$$\max \varphi(D, R), \qquad \varphi = D - R.$$
In practice, an incremental search can be used to obtain near-optimal features:
$$\max_{f_j \in X - S_{m-1}} \left[\, I(f_j; c) - \frac{1}{m-1} \sum_{f_i \in S_{m-1}} I(f_j; f_i) \,\right],$$
where X represents the set of all features and $S_{m-1}$ represents the already selected feature subset containing m − 1 features. Based on the selected features, this method finds the feature in the remaining feature space that maximizes the above expression; in fact, each remaining feature is scored and the features are then sorted. The essence of the method is thus to rank features by a relevance-redundancy criterion, but a feature subset must be selected first before the criterion can be evaluated. The search is first-order incremental and can only rank the remaining features: it is preferable to move the first-ranked feature of the remaining set into the subset rather than a later one, but there is no guarantee that prediction accuracy improves after the feature is added. In this paper, the load features are preliminarily screened by MIC to form a feature subset, and the remaining features are then ranked by the mRMR method. The first-ranked feature of the remaining set is added to the subset, and the model is retrained for prediction; if the accuracy improves, the process continues until the accuracy starts to decrease.
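To make the two-stage selection concrete, here is a minimal sketch in Python. It is our illustration, not the authors' code: `mic_scores` is assumed to be precomputed (e.g., with the `minepy` package), scikit-learn's mutual information estimator stands in for the MI terms of mRMR, and `train_and_rmse` is a hypothetical callback that trains the forecaster on a feature subset and returns its validation RMSE:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mrmr_rank(X, y, candidates, selected):
    """Rank candidate features by relevance minus mean redundancy (mRMR)."""
    if not candidates:
        return []
    rel = mutual_info_regression(X[:, candidates], y)  # relevance I(f_j; c)
    scores = {}
    for k, j in enumerate(candidates):
        # mean redundancy of f_j with the already selected features
        red = np.mean([mutual_info_regression(X[:, [i]], X[:, j])[0]
                       for i in selected]) if selected else 0.0
        scores[j] = rel[k] - red
    return sorted(candidates, key=lambda j: -scores[j])

def hybrid_select(X, y, mic_scores, train_and_rmse, threshold=0.6):
    """Stage 1: keep features with MIC > threshold.
    Stage 2: greedily add mRMR-ranked candidates while the RMSE improves."""
    selected = [j for j, s in enumerate(mic_scores) if s > threshold]
    candidates = [j for j in range(X.shape[1]) if j not in selected]
    best = train_and_rmse(selected)
    for j in mrmr_rank(X, y, candidates, selected):
        rmse = train_and_rmse(selected + [j])
        if rmse >= best:          # stop once accuracy no longer improves
            break
        best, selected = rmse, selected + [j]
    return selected
```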
Improved Long Short-Term Memory Network
LSTM was first proposed by Hochreiter and Schmidhuber in 1997 [35]. It is an advanced version of the Recurrent Neural Network (RNN). Compared with the RNN, its essence lies in the introduction of the cell state, which determines which information should be retained and which should be forgotten; this solves the vanishing-gradient problem of the RNN. The LSTM network has three gates in the hidden layer (input gates, output gates, and forget gates). Input gates control the input flow into the memory cell, and output gates control the output flow into other cells. The role of forget gates is to selectively forget information in the cell state. The traditional LSTM network has a long training time due to its complex structure. In order to reduce the training time without affecting accuracy, the structure of LSTM is improved, and the Improved Long Short-Term Memory network (ILSTM) is proposed. ILSTM merges the input gates and forget gates into one new shared gate to reduce network complexity. The structure of the ILSTM network is shown in Figure 1. The forward propagation of ILSTM in the t-th period is elaborated as follows.
Calculate the shared gate:
$$net_{ut} = W_u \cdot [x_t, h_{t-1}] + b_u, \qquad u_t = \sigma(net_{ut}).$$
Calculate the current information state:
$$net_{ct} = W_c \cdot [x_t, h_{t-1}] + b_c, \qquad c_t = \tanh(net_{ct}).$$
Update the cell memory:
$$C_t = u_t \times c_t + (1 - u_t) \times C_{t-1}.$$
Calculate the output gate:
$$net_{ot} = W_o \cdot [x_t, h_{t-1}] + b_o, \qquad o_t = \sigma(net_{ot}).$$
Calculate the output of the hidden layer:
$$h_t = o_t \times \tanh(C_t).$$
The predicted value is then obtained from the hidden-layer output $h_t$ through the output layer, with pre-activation $z_t$. In the above formulas, $net_{ut}$, $net_{ct}$, $net_{ot}$, and $z_t$ are the pre-activation states of the current stage; $W_u$, $W_c$, and $W_o$ are the corresponding weight matrices, and $b_u$, $b_c$, and $b_o$ represent the bias vectors. $x_t$, $u_t$, $c_t$, and $o_t$ are the input, the shared gate, the information state, and the output gate in the current period, respectively. $C_{t-1}$ and $C_t$ represent the cell state in the previous and the current period. The symbol $\cdot$ denotes matrix multiplication, and the symbol $\times$ denotes element-wise multiplication. $\sigma(x)$ is the Sigmoid activation function and $\tanh(x)$ is the Tanh activation function:
$$\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}.$$
Compared with LSTM, ILSTM cuts back the number of gates, reducing the number of variables to be optimized in the weight matrices. In the cell-memory update, ILSTM first activates the current information state with Tanh(); it then forms a linear combination of the previous cell memory $C_{t-1}$ and the current information state $c_t$, using the shared (update) gate $u_t$ as the weight of $c_t$ and $1 - u_t$ as the weight of $C_{t-1}$, so that the two weights sum to one. In this way, the cell memory is updated.
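To make the shared-gate structure concrete, the following is a minimal sketch of an ILSTM cell as a custom Keras layer, consistent with the paper's Keras setting. The merged update gate and the convex cell-state update follow the equations above; the layer size, input shape, and the three-step output head are illustrative assumptions, not the authors' implementation:

```python
import tensorflow as tf

class ILSTMCell(tf.keras.layers.Layer):
    """Improved LSTM cell: input and forget gates merged into one shared
    gate u_t, so that C_t = u_t * c_t + (1 - u_t) * C_{t-1}."""
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.state_size = [units, units]   # [hidden state h, cell state C]

    def build(self, input_shape):
        d = input_shape[-1]
        # one weight matrix per gate, acting on the concatenation [x_t, h_{t-1}]
        self.W_u = self.add_weight(shape=(d + self.units, self.units), name="W_u")
        self.W_c = self.add_weight(shape=(d + self.units, self.units), name="W_c")
        self.W_o = self.add_weight(shape=(d + self.units, self.units), name="W_o")
        self.b_u = self.add_weight(shape=(self.units,), initializer="zeros", name="b_u")
        self.b_c = self.add_weight(shape=(self.units,), initializer="zeros", name="b_c")
        self.b_o = self.add_weight(shape=(self.units,), initializer="zeros", name="b_o")

    def call(self, x_t, states):
        h_prev, c_prev = states
        z = tf.concat([x_t, h_prev], axis=-1)
        u_t = tf.sigmoid(tf.matmul(z, self.W_u) + self.b_u)   # shared gate
        c_t = tf.tanh(tf.matmul(z, self.W_c) + self.b_c)      # information state
        o_t = tf.sigmoid(tf.matmul(z, self.W_o) + self.b_o)   # output gate
        c_new = u_t * c_t + (1.0 - u_t) * c_prev              # convex cell update
        h_t = o_t * tf.tanh(c_new)
        return h_t, [h_t, c_new]

# usage: wrap the cell in an RNN layer; 168 lagged hours and 12 input
# features are illustrative, as is the 3-step-ahead output head
model = tf.keras.Sequential([
    tf.keras.layers.RNN(ILSTMCell(64), input_shape=(168, 12)),
    tf.keras.layers.Dense(3),
])
```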
The Framework of the Proposed Model
In the overall prediction model, the features are first preprocessed. Since the power load changes periodically, features can be selected with reference to the period. The period of load change can be regarded as one week, so the hourly power loads within seven days (168 in total) are selected as the preliminary past-load features in this paper. First, we calculated the MIC separately between the load of the current period (t) and the load of each of the previous 168 periods (t − 1, t − 2, . . . , t − 168). Features with MIC > 0.6 were placed in the temporary feature subset, and features with MIC < 0.6 were placed in the candidate feature subset. Next, the root mean square error (RMSE) was used to judge the prediction performance: the mRMR method was used to rank features in the candidate subset and add them to the feature subset until the prediction accuracy decreased. The selected load features, environment features, and date features were then combined into the final feature set and put into the ILSTM network for training. Prediction models were established for single-step, two-step, three-step, and multi-step prediction, respectively. The complete framework of the model is shown in Figure 2, and the schematic diagram of multi-step prediction is shown in Figure 3.
Evaluation Criteria
In this paper, four indicators, including RMSE (Root Mean Square Error), MAE (Mean Absolute Error), MAPE (Mean Absolute Percentage Error), and $R^2$ (coefficient of determination), are adopted to evaluate the prediction accuracy of the model. The specific formula for each indicator is listed in Table 2.
N is the size of the test sample, and $y_i$ refers to the i-th predicted value. $Y_i$ refers to the i-th observed value, and $\bar{Y}$ is the mean of the observed values. The smaller the final MAE, MAPE, and RMSE, the higher the prediction accuracy; the closer $R^2$ is to 1, the higher the prediction accuracy.
In order to further compare the accuracy of two different models, three indicators, $P_{MAE}$, $P_{MAPE}$, and $P_{RMSE}$, are applied. In order to better evaluate the future operational risk of the model predictions, we also use the standard deviation of the error, S, as a criterion.
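The explicit formulas belong to Table 2 of the original; the following are the standard definitions consistent with the notation above, with the relative indicators written in one common convention (superscripts (1) and (2) denote the two compared models, and $e_i$ the prediction error):

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|y_i - Y_i\right|, \qquad \mathrm{MAPE} = \frac{100\%}{N}\sum_{i=1}^{N}\left|\frac{y_i - Y_i}{Y_i}\right|, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - Y_i\right)^2},$$
$$R^2 = 1 - \frac{\sum_{i=1}^{N}\left(y_i - Y_i\right)^2}{\sum_{i=1}^{N}\left(Y_i - \bar{Y}\right)^2}, \qquad P_{\mathrm{MAE}} = \frac{\mathrm{MAE}^{(1)} - \mathrm{MAE}^{(2)}}{\mathrm{MAE}^{(1)}} \times 100\%, \qquad S = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(e_i - \bar{e}\right)^2},\quad e_i = y_i - Y_i.$$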
Case Study
In this section, we first introduce the basic power load data and the corresponding factors applied to the model. In order to verify that the proposed model yields high-precision predictions, four datasets are used for testing and compared with existing popular models. In addition, experiments over multiple prediction horizons further confirm the practicality of the model. All deep learning models are implemented with the Keras framework, and SVR is implemented with the "sklearn" library in Python.
Data Introduction
In this case, the power load and related influencing factors are first introduced. The data used in this paper come from the Huazhong Power Grid Corporation and consist of hourly load data for 2015 from Hubei Province, China. In that year, the average annual temperature in Hubei was about 18 degrees. January and February were the coldest months, with minimum temperatures below zero; the power load in these months was higher than the annual average due to heating equipment. From March to June, the temperature remained stable below the average. Temperatures rose in July and August, reaching 40 degrees and more; in the summer, the large-scale use of refrigeration systems strongly affected the power load, which was the highest of the whole year during this period. As the weather cooled from September to November, the load weakened accordingly until the temperature began to rise steadily. The lowest power load of the year occurred during the Spring Festival due to factory holidays. The annual electrical load data and environmental data are shown in Figure 4. Missing periods are filled with the average of the loads of the previous and the next period. The first 80% of the original dataset is used as training data to train the model, and the remaining 20% is used as test data. The models are trained using cross-validation [41,42]. Tr and Te represent the number of training and test samples, respectively, and Sum represents the total number of samples. The four datasets are shown in Figure 5. In order to better evaluate the performance of the model, the load data are divided into four datasets by quarter. As the figure shows, the first dataset exhibits a large fluctuation range, the second is relatively stable, the third has a high peak value, and the load in the fourth drops sharply and then rises slowly. With this classification, the datasets are representative and can be trained effectively. The detailed parameters of the datasets are shown in Table 3.
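As a small illustration of this preprocessing, the sketch below fills an isolated missing hour with the mean of its neighbours and performs the chronological 80/20 split; the file and column names are assumptions:

```python
import pandas as pd

df = pd.read_csv("hubei_2015_hourly.csv")  # hypothetical file name
# fill an isolated missing period with the average of the previous and next period:
# ffill supplies the previous value, bfill the next one; existing values are unchanged
df["load"] = (df["load"].ffill() + df["load"].bfill()) / 2
# chronological 80/20 split: first 80% for training, last 20% for testing
cut = int(len(df) * 0.8)
train, test = df.iloc[:cut], df.iloc[cut:]
```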
Feature Combination Selection
In the proposed method, the original load features are pre-processed. First, the maximal information coefficient between each load feature of the first seven days (168 time periods) and the current-period load is calculated, and the features with a MIC value greater than 0.6 are selected. The MIC values between the loads of the first 168 periods and the current-period load for datasets 1 to 4 are shown in Figures 6-9. T represents the period of the day for which the load is to be predicted, T-1 represents the day before the predicted load, IN1 represents the load one hour before the current period, IN2 the load two hours before, and so on.
After the preliminary screening of features, a second selection is made from the remaining features. The environment features and the encoded date features are added to the feature subset, and the features of the candidate subset are ranked by the mRMR method. The first-ranked feature is placed in the feature subset for training; if the prediction accuracy improves, the feature is retained and the process continues. If the accuracy decreases, the selection stops. The features selected for the four datasets are listed in Table 4, where t-n denotes the load n hours before the predicted period. The final feature set is used by the model to make predictions.
Parameter Settings
To validate the performance of the model, other popular models (LSTM, GRU, SVR) are used for comparison. For fairness, the parameters of the different models are chosen to be as similar as possible. The parameters in this article are either optimized with a genetic algorithm (GA) or set to common values. To eliminate errors caused by randomness, each model is run multiple times and the results are averaged. The specific parameters are shown in Table 5.
Experiment Results and Discussion
In this section, the current popular models are compared with the proposed method for short-term load prediction. The results of the various models for 1-step, 2-step, and 3-step load forecasting are presented below. MAE, MAPE, RMSE, and $R^2$ are used to evaluate model accuracy, the CPU time of the algorithms (CT) indicates the computation time, and S evaluates the future operational risk of the model predictions.
(1) The analysis of the one-step prediction. The prediction results of each model on the datasets are shown in Table 6, where H-ILSTM denotes the model combining hybrid feature selection and ILSTM. The detailed index comparison for dataset 1 is shown in Figures 10 and 11; the figures show the load forecasts of 96 time periods in the test set of dataset 1. The bold entries in the table mark the best prediction results among the eight models. The hybrid feature selection method clearly improves every model, to varying degrees, on every dataset. Among them, the H-ILSTM model has the highest prediction accuracy and the smallest standard deviation of the prediction error. Compared with the model without hybrid feature selection, the specific improvement is shown in Table 7: the forecast accuracy improved by nearly 50%. H-ILSTM is also very competitive with recently popular models. As shown in Table 8, where the evaluation indices are averaged over the four datasets, H-ILSTM predicts better than the original LSTM network, with an accuracy improvement of about 20%; compared with the machine-learning baseline, the improvement is even more pronounced.
(2) The analysis of the two-step prediction. The prediction accuracy is lower than in the one-step case, but the model still maintains high accuracy. The two-step prediction results of the various models on the four datasets are shown in Table 9, with the best results among the eight models in bold. The comparison of forecast indicators is shown in Figures 12 and 13; the figures show the load forecasts of 96 time periods in the test set of dataset 2. H-ILSTM performs best on all four datasets. The use of hybrid feature selection greatly improves the predictions; the relative value of each evaluation index for the ILSTM model is shown in Table 10. Compared with not using hybrid feature selection, the average Mean Absolute Error (MAE) is improved by 23.8%, the Mean Absolute Percentage Error (MAPE) by 21.6%, and the Root Mean Square Error (RMSE) by 23.9%. This shows that the feature selection method plays an important role in two-step prediction. Compared with the other models, H-ILSTM improves to varying degrees on the four datasets; the improvement indicators are shown in Table 11.
(3) The analysis of the three-step prediction. The model can still maintain a relatively high prediction accuracy. The three-step prediction results of the various models on the four datasets are shown in Table 12. The hybrid feature selection method improves the ILSTM model, with each forecast evaluation index improved by more than 20%; the details are shown in Table 13. Compared with the other models, the average value of each evaluation index over the four datasets increases by about 15%; the details are shown in Table 14.
(4) The analysis of the multi-step prediction. This part tests the limit prediction ability of the proposed H-ILSTM model. Multi-step predictions are made for each of the four datasets, and a prediction threshold is established: if the prediction accuracy falls below this threshold, the prediction is stopped. The threshold is evaluated using the coefficient of determination $R^2$. The result is shown in Figure 16. The model performs best on dataset 2, where it can accurately predict the load for the next 24 h. On dataset 1 and dataset 4, the model can fairly accurately predict the load for the next 24 periods. The performance on dataset 3 is mediocre: the load can only be predicted for the next 6 h. Due to the large fluctuations at the peaks, the model's ability to predict this many steps on such data is slightly insufficient. On datasets 1, 2, and 4, by contrast, the model predicts well: it can accurately predict the load of the next 6 h and thus fairly accurately the load of the next 24 h. The model still has room for improvement on load datasets with large fluctuations.
(5) Comparison experiment between the proposed model and the persistence model. A good baseline for time series forecasting is the persistence model, a predictive model in which the last observation is persisted forward; it uses the "today equals tomorrow" concept [43]. In order to better evaluate the proposed method, we compared it against the persistence model using MAE, MAPE, RMSE, and $R^2$. This part reports the single-step prediction experiment on dataset 1. Figure 17 shows the prediction results of the two models, and the evaluation indicators are listed in Table 15. The persistence model is close to the proposed model on $R^2$; on all other indicators it is worse than the proposed model.
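The persistence baseline is a one-liner; a minimal self-contained sketch (with a synthetic stand-in series, since the Hubei data are not reproduced here):

```python
import numpy as np

def persistence_forecast(y: np.ndarray, steps: int = 1) -> np.ndarray:
    """Persist the last observation forward: y_hat[t] = y[t - steps]."""
    return y[:-steps]  # aligned against the targets y[steps:]

# synthetic stand-in for an hourly load series with a daily cycle
y = np.sin(np.arange(200) / 24 * 2 * np.pi) + 1.0
y_hat = persistence_forecast(y, steps=1)
rmse = np.sqrt(np.mean((y[1:] - y_hat) ** 2))
print(f"persistence RMSE: {rmse:.4f}")
```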
Conclusions
STLF plays a very important leading role in the power grid. In order to improve the accuracy of short-term load forecasting, this paper starts from feature engineering, taking into account the relevant factors that affect the load, such as weather conditions and date types, and adopts hybrid feature selection. The improved LSTM network is used for multi-step prediction. Datasets from four time periods in Hubei Province are selected, and the method is compared with the LSTM, GRU, and SVR models, all using the hybrid feature selection method. The prediction performance is measured by MAE, RMSE, MAPE, and $R^2$; $P_{MAE}$, $P_{MAPE}$, and $P_{RMSE}$ reflect the differences in prediction results between models. The experimental results show that the prediction accuracy of the ILSTM model with hybrid feature selection is, on average over the four datasets, more than 20% higher than without it, and that the accuracy of the H-ILSTM model is about 15% higher than that of the other models using hybrid feature selection. We also tested the multi-step prediction ability of the proposed model, which performs satisfactorily. In summary: (1) the hybrid feature selection method can improve the prediction accuracy of the model; (2) the ILSTM model outperforms other traditional forecasting models in short-term load forecasting; (3) the H-ILSTM model has a good prediction effect in multi-step prediction.
Therefore, the proposed method performs very well in short-term multi-step load forecasting and can accurately predict the load of the next several periods. The model is competitive in this field.
The proposed model also has some shortcomings. In feature selection, it only considers the optimal combination of historical loads; other influencing factors are simply normalized and used as model inputs. In addition, feature selection is time-consuming. We will address these issues in future research. | 8,934.8 | 2020-08-10T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
New boundary monodromy matrices for classical sigma models
The 2d principal models without boundaries have $G\times G$ symmetry. The already known integrable boundaries have either $H\times H$ or $G_{D}$ symmetries, where $H$ is such a subgroup of $G$ for which $G/H$ is a symmetric space while $G_{D}$ is the diagonal subgroup of $G\times G$. These boundary conditions have a common feature: they do not contain free parameters. We have found new integrable boundary conditions for which the remaining symmetry groups are either $G\times H$ or $H\times G$ and they contain one free parameter. The related boundary monodromy matrices are also described.
Introduction
In this paper we investigate 1 + 1 dimensional O(N) sigma and principal chiral models (PCMs). These are integrable at the quantum level, i.e., infinitely many conserved charges survive quantization [1,2]. The scattering matrices (S-matrices) factorize and can be constructed from the two-particle S-matrices, which satisfy the Yang-Baxter equation (YBE). Thus, integrable theories at infinite volume can be defined by solutions of the YBE. For example, it has been verified that the minimal solution of the O(N)-symmetric YBE is the S-matrix of the O(N) sigma model [3].
In this paper we are interested in boundary conditions for these systems. There are three interesting types of boundary conditions:
I. Classically conformal, meaning that the boundary condition does not break the classical conformal symmetry, which guarantees infinitely many conserved charges;
II. Boundary conditions with a zero curvature representation, meaning that there exists a κ-matrix (or classical reflection matrix) from which double row monodromy matrices can be constructed;
III. Quantum integrable, meaning that there exists a higher-spin conserved charge even on the half line.
These boundary conditions were investigated in [4,5,6,7,8,9], and all of them were shown to be conformal. What can we say about their quantum integrability? In some of these cases one can use the Goldschmidt-Witten argument [5,7], which gives a sufficient condition for quantum integrability; with this argument it can be shown that boundary conditions 1b and ib are integrable at the quantum level. There is also a necessary condition for quantum integrability which comes from the boundary bootstrap: quantum integrable theories with boundary can be defined by solutions of the boundary Yang-Baxter equation (bYBE), i.e., by reflection matrices. We can thus infer that if the center of the residual symmetry algebra is u(1), then the reflection matrix contains a free parameter [11]. We can also classify the residual symmetries of PCMs. The bulk theory has $G_L \times G_R$ symmetry, and the particles transform in representations of this symmetry. If the reflection matrix has a factorized form ($R = R_L \otimes R_R$), then the bYBE separates into equations for the left and right reflection matrices. Thus, in principle, arbitrary combinations of solutions $R_L$ and $R_R$ can be used to construct the full reflection matrix R. This implies that the remaining left and right symmetries can be different.
From the classification of the quantum reflection matrices [7,6,10,11,12] we can extract the possible residual symmetries; we can therefore conclude that 1a, 2a, ia, and iia cannot be quantum integrable because their residual symmetries are different.
The zero curvature description is also known for some boundary conditions [4,8]. Their classical reflection matrices are constant matrices without any parameters.
The state of the art concerning these boundary conditions and their integrability is summarized in Table 1, with question marks indicating the open questions. For example, 1b is quantum integrable (by the Goldschmidt-Witten argument) and has O(k) × O(N − k) symmetry, so it can be matched to the reflection matrix with the same symmetry coming from the bootstrap. Conversely, there is a U(N/2)-symmetric reflection matrix with a free parameter, and one can ask which boundary condition belongs to it. The boundary condition 2b is a natural candidate because it has a free parameter and the same symmetry. Indeed, in this paper we show that it has a zero curvature representation, which may indicate quantum integrability, in view of the observation that a restricted boundary condition has so far preserved integrability at the quantum level if and only if a zero curvature representation exists (see the table above).
In the PCM, the remaining symmetries for the known classically integrable boundary conditions are $H_L \times H_R$ with $H_L \cong H_R$, which means $R_L \cong R_R$ (or the residual symmetry is $G_D$, the diagonal subgroup of $G_L \times G_R$, but in this case the reflection matrix is not factorized) [5,6]. This paper also provides a zero curvature representation for boundary condition iib, where only the left or only the right symmetry is broken; these can therefore be candidates for reflection matrices with $R_L \ncong R_R$.
We also show that the traces of these new monodromy matrices Poisson-commute, so there are infinitely many conserved charges in involution. The Poisson algebras of the one and double row monodromy matrices are consistent if the r-matrix and the classical reflection matrix (κ-matrix) satisfy the classical Yang-Baxter equation (cYBE) and the classical boundary Yang-Baxter equation (cbYBE), respectively. In [4] and [8] the Poisson algebra was investigated for non-ultralocal theories with constant κ-matrix. In [13] this was done for ultralocal theories with dynamical κ-matrix, in the case when the Poisson bracket of the κ-matrix and the Lax connection vanishes. In this paper we derive the Poisson algebra of non-ultralocal theories with a κ-matrix whose Poisson bracket with the Lax connection does not vanish. The possible solutions of the cbYBE have so far been examined only in a few cases; here we classify the solutions of the field-independent cbYBE and check that the new field-dependent κ-matrix satisfies the cbYBE for O(N) sigma models.
The paper is structured as follows. In the next section we start with the Lax formalism of the PCMs, where we construct classical reflection matrices and use them to build double row transfer matrices. The conservation of these matrices (which is equivalent to the existence of infinitely many conserved charges) determines the boundary conditions of the theories belonging to these boundary Lax representations. Using these results, we derive new double row monodromy matrices for the O(2n) sigma models and determine the corresponding boundary conditions. In Section 4 we derive the Poisson algebra of the double row monodromy matrices and the cbYBE, which is satisfied by the new κ-matrices.
Principal Chiral Models on the half line
In this section the new boundary monodromy matrix is introduced. In the first subsection we review the Lax formalism of PCMs. After that, the new reflection matrix and the related boundary condition are derived. Finally, we present the corresponding Lagrangian descriptions and the unbroken symmetries of these models.
Lax formalism for PCMs
Let $\mathfrak{g}$ be a semi-simple Lie algebra and $G = \exp(\mathfrak{g})$. We use only matrix Lie algebras and work in the defining representation. The field variable is a map $g : \Sigma \to G$, where the space-time $\Sigma = \mathbb{R} \times (-\infty, 0]$ is parameterized by $(x^0, x^1) = (t, x)$. We can define two currents, the right current $J_R = g^{-1}\mathrm{d}g$ and the left current $J_L = \mathrm{d}g\, g^{-1}$. We also use the convention that ordinary letters denote forms and italic letters denote their local coordinate functions.
Using these, the zero curvature condition can be written as
$$\partial_0 L_1(\lambda) - \partial_1 L_0(\lambda) + [L_0(\lambda), L_1(\lambda)] = 0.$$
The usefulness of the Lax connection lies in the fact that one can generate from it an infinite family of conserved charges. First we define the one row monodromy matrix as the path-ordered exponential of the spatial component of the Lax connection along the half line,
$$T(\lambda) = \mathrm{P}\exp\left(\int_{-\infty}^{0} \mathrm{d}x\, L_1(x|\lambda)\right).$$
These monodromy matrices have an inversion property. The monodromy matrix in the boundary case takes a double row form, built from $T$ and the reflection matrices $\kappa_L(\lambda), \kappa_R(\lambda) \in G$, which will be specified later. In the following we use the right currents, so we introduce the notation $J(\lambda) = J_R(\lambda)$. The existence of infinitely many conserved quantities requires that the time derivative of the monodromy matrix vanish, $\dot{\Omega}(\lambda) = 0$, which is equivalent to a condition at the boundary, where we assumed that the currents vanish at $-\infty$. This is the boundary flatness condition. This equation can be translated into boundary conditions for the current $J_R$. The consistency of the theory requires that the number of boundary conditions equal $\dim(\mathfrak{g})$. Based on this, we call $\kappa(\lambda)$ a consistent solution of (5) if it leads to exactly $\dim(\mathfrak{g})$ boundary conditions.
The consistency of the definitions of the double row monodromy matrices $\Omega_L$ and $\Omega_R$ (the boundary flatness condition implies the same boundary conditions for $\Omega_L$ and $\Omega_R$) implies a relation between $\kappa_L$ and $\kappa_R$. Using this relation, the double row monodromy matrices also satisfy an inversion property. In the next subsection we will look for new consistent solutions with non-trivial spectral parameter dependence. Before that, we note that there is another possible definition of the double row monodromy matrix, which leads to a different set of boundary conditions. Let us count the number of boundary conditions; for this, we use the relation between the left and right currents, $J_L = g J_R g^{-1}$.
We saw previously that this type of boundary condition is consistent if the operator $\mathrm{Ad}_{U^{-1}g}$ is an involution on $\mathfrak{g}$. Clearly, this restricted boundary condition is invariant under the transformation $g \to U g_0^{-1} U^{-1} g g_0$, and therefore it has the diagonal symmetry $G_D$.
Finally, let us note that there is another representation of this boundary condition. Using the inversion property (3), we can obtain an equivalent double row monodromy matrix, whose conservation requires that the corresponding boundary flatness condition vanish.
Multiplying this by g from the right, we obtain an expression which leads to equations (8) and (9).
Spectral parameter dependent κ-matrices
In the previous subsection we summarized the spectral parameter independent κ-matrices. In this subsection, we try to find new spectral parameter dependent κ-matrices.
Solution of the boundary flatness equation
Let us use the following ansatz, where $k(z)$ is a scalar and $M \in \mathfrak{g}$. Using this ansatz, equation (5) leads to a system of equations (12)-(14) for $M$ and a second Lie algebra element $N$, where $[X, Y]_+ = XY + YX$ denotes the anti-commutator. Since equation (12) already provides $\dim(\mathfrak{g})$ boundary conditions, consistency requires that equations (13) and (14) follow from (12). In the following, we look for constraints on $M$ and $N$ which ensure this.
Taking the anti-commutator of equation (12) with $M$ gives a relation involving a constant $c$, from which we can see that $M$ commutes with $N$. Using this and equation (14), we can take the anti-commutator of equation (13) with $N$. Since $J_0$ spans the whole defining representation of $\mathfrak{g}$, $N^2$ has to be proportional to $\mathbf{1}$, so the automorphism $\mathrm{Ad}_N$ has $+1$ and $-1$ eigenvalues; we denote the corresponding eigenspaces by $\mathfrak{h}$ and $\mathfrak{f}$. Therefore $N$ defines a $\mathbb{Z}_2$ graded decomposition $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{f}$. Equation (14) means that $J_1 \in \mathfrak{f}$, i.e., $\Pi_{\mathfrak{h}}(J_1) = 0$, where $\Pi_{\mathfrak{h}}$ is the projection operator onto the $\mathfrak{h}$ subspace. Substituting this into (13), and using $[M, N] = 0$, which implies $M \in \mathfrak{h}$, we can see that equation (14) follows from (12) if $M$ commutes with $\mathfrak{h}$. Summarizing, consistency of the solutions requires that $\mathrm{Ad}_N$ generate a $\mathbb{Z}_2$ graded decomposition and that $M$ be an element of $\mathfrak{h}$ which also commutes with $\mathfrak{h}$. Therefore $\mathfrak{h}$ has a non-trivial center, which is generated by $M$. It follows that every $\mathbb{Z}_2$ graded decomposition for which $\mathfrak{h}$ is not semi-simple gives rise to reflection matrices and boundary conditions of this type. There are two classes of these κ-matrices. The first is $N \neq 0$. The second is $N = 0$, which implies $M^2 \sim \mathbf{1}$; in this case $M$ itself defines the $\mathbb{Z}_2$ graded decomposition. The projection operators onto $\mathfrak{h}$ and $\mathfrak{f}$ are
$$\Pi_{\mathfrak{h}} = \tfrac{1}{2}\left(1 + \mathrm{Ad}_U\right), \qquad \Pi_{\mathfrak{f}} = \tfrac{1}{2}\left(1 - \mathrm{Ad}_U\right),$$
where $U = N$ when $N \neq 0$ and $U = M$ otherwise. The classification of these κ-matrices for the classical Lie algebras is given in the following.
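For readability, the constraints just derived, i.e., the content of conditions (16), can be collected in one place:
$$N^2 \propto \mathbf{1}, \qquad [M, N] = 0, \qquad M \in \mathfrak{h}, \qquad [M, \mathfrak{h}] = 0,$$
$$\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{f}, \qquad \mathfrak{h} = \ker\left(\mathrm{Ad}_U - 1\right), \qquad \mathfrak{f} = \ker\left(\mathrm{Ad}_U + 1\right).$$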
Examples
We saw that the integrable boundary conditions described above belong to a $(\mathfrak{g}, \mathfrak{h})$ symmetric pair for which G/H is a symmetric space ($G = \exp(\mathfrak{g})$, $H = \exp(\mathfrak{h})$). The symmetric spaces are classified [14]. The spectral parameter dependent solutions belong to non-semi-simple $\mathfrak{h}$; therefore there are three types of spectral parameter dependent κ-matrices.
These matrices are the classical counterparts of the $\mathfrak{h} = u(1) \oplus su(m) \oplus su(n-m)$, $\mathfrak{h} = u(1) \oplus su(n)$, and $\mathfrak{h} = so(2) \oplus so(n-2)$ symmetric solutions of the quantum boundary Yang-Baxter equation [7,6,9]. The quantum reflection matrices involve dressing phases $\nu_i(\theta)$. For the classical limit we define a scaling variable h; the classical limit is h → 0, and in this limit the R-matrices become proportional to the κ-matrices.
Lagrangian and symmetries
In the previous subsection we found reflection matrices parameterized as (11), which lead to the boundary condition (17); using the left currents, this condition takes the form (18). One can obtain the same boundary condition in the Lagrangian description: adding a suitable boundary term to the bulk Lagrangian density yields the boundary condition (17). This boundary condition was already investigated in [7] and [9], where it was shown to be conformal for all $M \in \mathfrak{g}$. We have now shown that it also has a zero curvature representation for the special $M$'s satisfying the conditions (16). Let us continue with the residual symmetries. The bulk Lagrangian has $G_L \times G_R$ symmetry, acting by left/right multiplication with constant group elements: $g(x) \to g_L g(x)$ and $g(x) \to g(x) g_R$. Under these transformations, $J_R$ is invariant under left multiplication and transforms as $J_R \to g_R^{-1} J_R g_R$ under right multiplication, while $J_L$ is invariant under right multiplication and transforms as $J_L \to g_L J_L g_L^{-1}$ under left multiplication. We can see that the boundary Lagrangian breaks the $G_R$ symmetry; the remaining symmetry is $G_L \times H_R$. One can derive the Noether charges by varying the action, but there is an easier way: $J_L$ and $J_R$ are the Noether currents of the bulk $G_L$ and $G_R$ symmetries, so we can define charges from them and take their time derivatives, from which we can see that $\tilde{Q}$ and (20) are conserved. Finally, we note that we could equally have used the left current $J_L$ with a corresponding κ-matrix; this modifies the right reflection matrix, the boundary condition, and the boundary Lagrangian accordingly, and in that case the residual symmetry is $H_L \times G_R$.
O(N ) sigma model on the half line
The new reflection matrices of the PCM can be used to find new ones for the O(N) sigma model. In particular, using the equivalence between the SU(2) PCM and the O(4) sigma model, we immediately obtain new reflection matrices for the O(N) sigma model for N = 4. This solution can then be generalized to even N.
Lax formalism for the O(N ) sigma model
The field variables are $n : \Sigma \to \mathbb{R}^N$ with the constraint $n^T n = 1$. The bulk Lagrangian yields the equation of motion. We can define an O(N) group element $h = 1 - 2nn^T$, which satisfies the identities $h^T h = 1$ and $h = h^T$. Using this, one can define a current $\hat{J} = h\,\mathrm{d}h = 2n\,\mathrm{d}n^T - 2\,\mathrm{d}n\,n^T$, which is the Noether current of the bulk global SO(N) symmetry. The equation of motion in terms of this current is $\mathrm{d} * \hat{J} = 0$. The Lax connection is very similar to that of the PCM, but here the current is constrained.
The double row monodromy matrix can be defined as for the PCMs. In the following we look for solutions of the boundary flatness equation. Let us start with constant κ-matrices, i.e., $\kappa(\lambda) = U$ with $U \in O(N)$; the boundary flatness equation then implies the conditions (22) and (23). In this subsection we assume that $U^2 = \pm 1$ without deriving it; we return to this point in the next section. There are two kinds of U: symmetric ($U^T = U$) and antisymmetric ($U^T = -U$). Let us start with the first case, and let the numbers of +1 and −1 eigenvalues be N − k and k, respectively. We use the notation $n = \tilde{n} + \hat{n}$, with $\tilde{n} = (n_1, \ldots, n_{N-k}, 0, \ldots, 0)$ and $\hat{n} = (0, \ldots, 0, n_{N-k+1}, \ldots, n_N)$.
Using this, equation (22) becomes a condition on $\tilde{n}$ and $\hat{n}$. Multiplying by $\hat{n}$ from the right and $\tilde{n}^T$ from the left, we obtain two equations (24) and (25); similarly, from (23) we get (26). Let us assume that $\hat{n}^T\hat{n} = 0$, which is equivalent to $\tilde{n}^T\tilde{n} = 1$ and $\hat{n} = 0$. From equations (24) and (26), using
$$0 = \tfrac{1}{2}\,(n^T n)' = \hat{n}^T\hat{n}' + \tilde{n}^T\tilde{n}' = \tilde{n}^T\tilde{n}',$$
we can see that this is the boundary condition restricted to a sphere $S^k$ of maximal radius. Analogously, if we assume that $\tilde{n}^T\tilde{n} = 0$, then $\tilde{n} = 0$, which is the boundary condition restricted to $S^{N-k}$ of maximal radius.
What happens when $\hat{n}^T\hat{n} \neq 0$ and $\tilde{n}^T\tilde{n} \neq 0$? Let us multiply (24) by $\tilde{n}^T$ from the left:
$$\hat{n}^T\hat{n}\;\tilde{n}^T\dot{\tilde{n}} = \hat{n}^T\dot{\hat{n}}\;\tilde{n}^T\tilde{n}.$$
Using that $\hat{n}^T\dot{\hat{n}} + \tilde{n}^T\dot{\tilde{n}} = 0$, we obtain
$$0 = \left(\hat{n}^T\hat{n} + \tilde{n}^T\tilde{n}\right)\tilde{n}^T\dot{\tilde{n}} = \tilde{n}^T\dot{\tilde{n}}.$$
Let us continue with the second case, i.e., $U^T = -U$. Starting from equation (25) and multiplying by n from the right, we obtain two equations; multiplying these by U from the left and $U^T$ from the right gives further relations.
Using this and the original equation (25), we obtain $J_1 = 0$, which is equivalent to $n' = 0$. But we also have equation (24), so there are too many boundary conditions, which means that $\hat{J}_0 = U\hat{J}_0 U^{-1}$ and $\hat{J}_1 = -U\hat{J}_1 U^{-1}$ are not consistent boundary conditions in the second case. We will also see in Subsection 4.3 that the κ-matrix of the second case does not satisfy the classical boundary Yang-Baxter equation.
Spectral parameter dependent solution for N = 4
In the last section, we found a new spectral parameter dependent reflection matrix for the SU(2) PCM. Since this model is equivalent to the O(4) sigma model, we can obtain a new non-constant κ-matrix for the O(4) sigma model by translating to the language of the O(4) sigma model. We will see that this reflection matrix is both spectral parameter and field (!) dependent.
Thus we need to develop a dictionary between the SU(2) PCM and the O(4) sigma model. We introduce a tensor $\sigma^i_{\alpha\dot\alpha}$ satisfying suitable completeness relations, where $\bar\sigma^{\dot\alpha\alpha}_i$ is the complex conjugate of $\sigma^i_{\alpha\dot\alpha}$. Using this, we can change to a basis in which the group element $g_4 \in SO(4)$ factorizes.
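The identification underlying this dictionary is the standard one, stated here schematically for orientation (the index conventions are ours):
$$n = (n_1, n_2, n_3, n_4) \in S^3 \;\longleftrightarrow\; g = n_4\,\mathbf{1} + i\sum_{a=1}^{3} n_a \sigma_a \in SU(2),$$
with $SO(4) \cong \left(SU(2)_L \times SU(2)_R\right)/\mathbb{Z}_2$ acting as $g \mapsto g_L\, g\, g_R^{\dagger}$.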
We can also find the relation between the variables of the O(4) model $(h, \hat{J})$ and the SU(2) PCM $(g, J_{L/R})$. Using $n = g_4 n_0$ and $h = 1 - 2nn^T$, we obtain $h = g_4\, j\, g_4^T$, where $j = 1 - 2n_0 n_0^T = \mathrm{diag}(1, 1, 1, -1) \in O(4)$. Since $\det(j) = -1$, $j$ does not factorize in the new basis. The group element $h$ in the new basis takes a non-factorized form (g was defined in (28)); in the last step we used the property $\sigma_2 g \sigma_2^{\dagger} = \bar{g}$, where $\bar{g}$ denotes the complex conjugate of $g$. We can see that $h$ is not factorized; this is because $h$ is not an element of SO(4). It is convenient to introduce a new notation. Let us calculate $\hat{J}$ in the new basis.
Here $\bar{J}_R$ denotes the complex conjugate of $J_R$. Writing the Lax connection in the new basis, the monodromy matrix of the O(4) sigma model factorizes, and the double row monodromy matrix can be written in the new basis as well. Before calculating the new κ-matrix, let us apply the formula above to the known constant reflection matrices. The simplest known $\kappa_4$ is the identity matrix, which factorizes in the spinor basis as $\kappa_L = \kappa_R = 1$. Another known reflection matrix is $\kappa = \mathrm{diag}(-1, -1, 1, 1)$ in the vector basis; changing the basis, it factorizes into diagonal left and right factors. These two reflection factors are consistent if they satisfy the inversion property (6),
which means that g has to commute with them therefore g is restricted to H = U(1) at the boundary.
There is another known reflection matrix: $\kappa = \mathrm{diag}(1, 1, 1, -1)$ in the vector basis. Changing the basis, we can see that this matrix is not factorized. Using the corresponding formula for the monodromy matrix, we find that the theory is consistent in the principal model language if $g = g^{\dagger}$ at the boundary, which is the boundary condition (10). These were the relations between the well-known reflection matrices of the SU(2) PCM and the O(4) sigma model. Let us continue with the new one. In the last section we found new reflection matrices for the PCM, which for $\mathfrak{g} = su(2)$ simplify to an expression with an arbitrary $M_R \in su(2)$. Without loss of generality, one can choose $M_R = a\sigma_2$. We have seen that $\kappa_L(\lambda) = g\,\kappa_R(1/\lambda)\,g^{\dagger}$. Let us denote $1 \otimes M_R$ in the vector representation by $M$; in the spinor basis, $hMh$ can be computed explicitly. Based on the above formulas, the new κ-matrix for O(4) can be written down, and we can see that this κ is both spectral parameter and field dependent. The boundary condition corresponding to this κ follows from the boundary conditions (17), (18), and (29) of the SU(2) PCM; using the definition of $M$ and (31), the boundary condition can be stated in the language of the O(4) model. This boundary condition was investigated in [9]. Using the definition $\hat{J} = h\,\mathrm{d}h = 2n\,\mathrm{d}n^T - 2\,\mathrm{d}n\,n^T$, we can obtain an equivalent form. From the boundary Lagrangian of the SU(2) PCM, we get a boundary Lagrangian which agrees with [9], and it can also be written in the variables $n$. Finally, we can see that the residual symmetry is $U(2) \cong SU(2)_L \times U(1)_R$, which is a subgroup of $SU(2)_L \times SU(2)_R \cong SO(4)$. We saw for the PCMs that we have conserved charges $\tilde{Q}_L$ and $\tilde{Q}_R$. The conserved charge in the SO(4) language is equivalent to $\tilde{Q}$ with $\mathfrak{h} = su(2)_L \oplus u(1)_R$, where $Q$ is the bulk part of the charge.
Generalization for N = 2n
The result for N = 4 can be generalized to any even N. We assume that equation (32) can be used as the κ-matrix for N = 2n, i.e.,
where
$$M = a \begin{pmatrix} 0_{n\times n} & 1_{n\times n} \\ -1_{n\times n} & 0_{n\times n} \end{pmatrix}.$$
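As a quick numerical sanity check of the requirement $M^2 \sim \mathbf{1}$ from (16) for this block form (our script, not part of the paper):

```python
import numpy as np

n, a = 4, 0.7  # illustrative block size and parameter
Z, I = np.zeros((n, n)), np.eye(n)
M = a * np.block([[Z, I], [-I, Z]])
# M^2 should equal -a^2 times the 2n-dimensional identity
assert np.allclose(M @ M, -a**2 * np.eye(2 * n))
```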
We have to prove that the time derivative of the double row monodromy matrix vanishes when the boundary condition is satisfied. The quantity $\partial_0 \Omega$ vanishes when the boundary flatness condition holds; now, however, the right-hand side is not automatically zero, since κ is field dependent.
Using this, equation (39) leads to three equations. If we take the anti-commutator of the boundary condition (33) with $M$, we see that the third equation is satisfied. Using a suitable identity, the first also follows from the boundary condition.
Only the second equation remains. We have to prove that the corresponding term vanishes; using the definition of $h$, we obtain the required identity for any $X \in so(2n)$.
For conserved charges, we can generalize the formula (37).
We can check the conservation of these charges.
where we used (41). The boundary Lagrangian can be written in the same form as for the case $N = 4$, (35) or (36). These boundary conditions have been studied earlier in [9], where it was shown that this is a conformal boundary condition for any $M \in \mathfrak{so}(2n)$; in this paper we showed more, namely that it has a zero curvature representation only when $M^2 \sim 1$.
Poisson algebra of double row monodromy matrices
In the previous sections we found new zero curvature representations of PCMs and O(N) sigma models on a half line. This implies the existence of infinitely many conserved charges. In this section we want to prove that these conserved charges are in involution. For this we determine the Poisson algebra of the double row monodromy matrices (whose trace is the generating function of these charges). In the first subsection we summarize the formulas for general "bulk" non-ultralocal theories, based on [15]. After that we derive the Poisson algebra of the double row monodromy matrices and their consistency condition (which is the classical boundary Yang-Baxter equation) when the Poisson bracket of the reflection matrix and the Lax connection is not zero. This is a new result because, so far, Poisson algebras of non-ultralocal theories with boundaries have been investigated only when the κ-matrix was field independent [4,8].
In the second and third subsections we apply these general formulas to PCMs and nonlinear sigma models. We will use the following notations, where $X \in \mathrm{End}(V)$ and $Y \in \mathrm{End}(V) \otimes \mathrm{End}(V)$ for a vector space $V$.
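The notation intended here is presumably the standard tensor-leg convention $X_1 = X \otimes 1$ and $X_2 = 1 \otimes X$ (an assumption, since the display equations did not survive extraction). A small numerical sketch of this convention:

```python
import numpy as np

def leg1(X, d):
    """X_1 = X tensor 1, acting on the first factor of V tensor V (dim V = d)."""
    return np.kron(X, np.eye(d))

def leg2(X, d):
    """X_2 = 1 tensor X, acting on the second factor."""
    return np.kron(np.eye(d), X)

# Operators acting on different legs commute: [X_1, Y_2] = 0.
d = 2
X, Y = np.random.rand(d, d), np.random.rand(d, d)
assert np.allclose(leg1(X, d) @ leg2(Y, d), leg2(Y, d) @ leg1(X, d))
```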
We can generalize the one-row monodromy matrix to general paths from $y$ to $x$. Let $x_1, x_2, y_1, y_2$ be different positions with $x_{1,2} > y_{1,2}$; then the general non-ultralocal Poisson brackets of the monodromy matrices are the following [15], where $x_0 = \min(x_1, x_2)$ and $y_0 = \max(y_1, y_2)$. This Poisson bracket satisfies the Jacobi identity (for non-coinciding points) if the generalized classical Yang-Baxter equation is satisfied. For the calculation of the Poisson bracket of the global monodromy matrices (2) we have to take the limits $x_1 \to x_2$ and $y_1 \to y_2$. However, the Poisson bracket (43) is not continuous due to the non-ultralocality. It is obvious that the equal-intervals limit of the canonical brackets does not exist in a strong sense. More precisely, any strong definition implies the breakdown of the Jacobi identity for the canonical brackets of the global monodromy matrices (2).
However, it is possible to define this limit in a weak sense with respect to the canonical brackets, based on a split-point procedure and a generalized symmetric limit. We consider canonical brackets of several monodromy matrices defined on intervals having coinciding end points. In order to compute them, let us first split the coinciding points and use (43), which then gives a completely consistent expression. If we then symmetrize over all the possible splittings and go to the limit of equal points, we get the "weak" algebras, e.g. the weak algebra of the global monodromy matrices. The formulas above can be found in [15], but in this paper we use a different convention for the Lax pair, i.e. we have to change $L \to -L$ to recover the formulas of [15].

In the following we derive the Poisson algebra. For this we need the κ-matrices which were derived in the previous sections. We saw that these matrices can depend on the fields but not on the derivatives of the fields; we therefore make the corresponding assumption below. Let us continue with the generalized double row monodromy matrix. The Poisson bracket of $\Omega(x|\lambda)$ and $\Omega(y|\mu)$ is not well defined even when $x \neq y$; therefore we have to use the split-point procedure. For this, we can define a shifted double row monodromy matrix with $\Delta < 0$. A general κ-matrix depends on the boundary value of the fields $\varphi^a(0)$ (i.e. $\kappa(\lambda) = \kappa(\varphi^a(0)|\lambda)$), but we can extend this to an arbitrary space coordinate. Using these, the Poisson brackets of the monodromy matrices follow. Now we can calculate the symmetric limit, where $x_0 = \max(x_1, x_2)$. The existence of infinitely many conserved charges in involution requires that the following expression vanish.
This is the classical boundary Yang-Baxter equation (cbYBE). If the κ-matrix fulfills this equation, then the Poisson bracket of the double row monodromy matrix is
This Poisson-bracket satisfies the Jacobi identity (this can be derived by a straightforward but very long calculation). Using the split-point procedure and the symmetric limit we can calculate the "weak" Poisson algebra of the global double row monodromy matrix (4).
Poisson bracket in PCMs
Let us now specialize the previous findings to the PCMs. The Poisson algebra of the currents is the following [16,17]. In what follows we will need the Poisson bracket of the group element $g$ and the current $J_0^{L/R}$; for this, we can use the formula below, together with the definition and (48). The Poisson bracket of the space-like component of the Lax operator is given in [17]; there a different convention is used, which can be obtained by the changes $L \to -L$, $\lambda \to -\lambda$, $\gamma \to -1$. This Poisson bracket is the same as (42), but in this special case the r- and s-matrices are space independent. Furthermore, a consistency check for the classical boundary Yang-Baxter equation (cbYBE) can be found in Appendix C, where we prove that if $\kappa_R(\lambda)$ satisfies the cbYBE then $\kappa_L(\lambda) = g\,\kappa_R(1/\lambda)\,g^{-1}$ does as well, which has to follow from the inversion property of the reflection matrices. In this derivation we have to use a non-trivial identity of the r-matrix; in Appendix C we also show that this identity is a consequence of the inversion property, and that the s-matrix has a similar property. In the following we solve the classical boundary Yang-Baxter equation for constant κ-matrices.
Constant κ-matrices
Let $\kappa(\lambda) = U$, where $U \in G$ is a constant matrix. The cbYBE can then be written out explicitly. Since it has to be satisfied for every $\lambda_1, \lambda_2 \in \mathbb{C}$, it splits into two equations. The first equation is satisfied trivially because $C_{12}$ is invariant, i.e. $C_{12} = U_1 U_2 C_{12} U_1^{-1} U_2^{-1}$. Let us multiply the second by $U_1$ from the left and by $U_2^{-1}$ from the right. Using the explicit form of $C_{12}$, we obtain a condition that holds for all $X \in \mathfrak g$. Because we work with the defining representation (which is irreducible), $U^2$ has to be proportional to the identity. This is the same solution which we obtained from the analysis of the boundary flatness equation. Therefore we can conclude that the consistent solutions of the flatness condition and of the cbYBE are the same for constant κ-matrices.
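The last step is Schur's lemma: a matrix commuting with every element of an irreducible representation must be proportional to the identity. The sketch below (an illustration, not the paper's computation) checks this for the defining representation of su(2) by finding the commutant as the common kernel of the adjoint actions; it assumes the row-major vectorization identity vec([X, V]) = (X ⊗ 1 − 1 ⊗ Xᵀ) vec(V).

```python
import numpy as np

# Pauli matrices: a basis of the (irreducible) defining representation of su(2).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Stack the vectorized commutator maps ad_X for all basis elements X.
I2 = np.eye(2)
A = np.vstack([np.kron(X, I2) - np.kron(I2, X.T) for X in (sx, sy, sz)])

# The common kernel of the ad_X maps is the commutant of the representation.
s = np.linalg.svd(A, compute_uv=False)
print("dim of commutant:", int(np.sum(s < 1e-10)))  # 1: only multiples of the identity
```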
There is another consequence of the fact that we had to modify equation (47).
Spectral parameter dependent κ-matrix
The κ-matrices described in Section 2 fulfill the classical boundary Yang-Baxter equation (45). The derivation can be found in Appendix B.
In [12] the following theorem was proven.
Theorem. Let $U \in G$ be such that $\mathrm{Ad}_U$ defines a Lie-algebra involution, and let $\mathfrak h := \{X \in \mathfrak g \mid UXU^{-1} = X\}$. If $\kappa(\lambda)$ is a solution of the following cbYBE for reductive $\mathfrak h$, where $X_0$ is a central element of $\mathfrak h$, then the κ-matrix $\kappa(\lambda)$ is unique for a given $U$ (up to normalization) once the norm of $X_0$ is fixed.
Previously we showed that these solutions exist; therefore, we have classified the field-independent solutions of the cbYBE.
We close this subsection with the Poisson algebra of the Noether charges of the global symmetries. Let us start with the right charges. Using the Poisson algebra of the currents, we can obtain their brackets. We can decompose the basis $\{T^A\}$ into $\{T^a \in \mathfrak h\}$ and $\{T^\alpha \in \mathfrak f\}$; using these, the equation above shows that the right charges form the Lie algebra $\mathfrak h$, as expected. Let us continue with the Noether charges $\tilde Q_L$ of the left multiplication. Their Poisson bracket is not well defined because it contains the following expression; therefore we have to use the symmetric limit ($\Delta < 0$). Using this, we obtain an equation which shows that these charges form the Lie algebra $\mathfrak g$, as expected. This calculation shows the importance of the symmetric limit: if we do not use it properly, we cannot get the proper Poisson algebra of the Noether charges of the symmetry $G_L$.
Poisson bracket in O(N ) sigma models
The Poisson algebra of the fields $n_i$ is the following. From this one can calculate the Poisson algebra of the currents [18], where $(P)_{ij,kl} = \delta_{il}\delta_{jk}$ and $(K)_{ij,kl} = \delta_{ij}\delta_{kl}$ are the permutation and trace operators, and $(Z)_{ij} = n_i n_j$. Using this, one can obtain the non-ultralocal Poisson algebra (42) of the space-like component of the Lax connection, where the r- and s-matrices take the forms given above. At first we solve the cbYBE for constant κ-matrices, and after that we check the spectral parameter and field dependent κ-matrix.
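For concreteness, $P$ and $K$ can be realized as $N^2 \times N^2$ matrices and their standard relations checked numerically. The sketch below assumes the usual convention $(K)_{ij,kl} = \delta_{ij}\delta_{kl}$ for the trace operator (the index pattern for $K$ appears garbled in the extracted text):

```python
import numpy as np

def P_op(N):
    """Permutation operator: (P)_{ij,kl} = delta_il delta_jk, i.e. P(u x v) = v x u."""
    P = np.zeros((N * N, N * N))
    for i in range(N):
        for j in range(N):
            P[i * N + j, j * N + i] = 1.0
    return P

def K_op(N):
    """Trace operator: (K)_{ij,kl} = delta_ij delta_kl (assumed convention)."""
    K = np.zeros((N * N, N * N))
    for i in range(N):
        for k in range(N):
            K[i * N + i, k * N + k] = 1.0
    return K

N = 4
P, K = P_op(N), K_op(N)
assert np.allclose(P @ P, np.eye(N * N))  # P^2 = 1
assert np.allclose(P @ K, K)              # P K = K
assert np.allclose(K @ K, N * K)          # K^2 = N K
```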
Constant κ-matrix
For $\kappa(\lambda) = U \in O(N)$, the cbYBE can be written out explicitly. After substitution, we obtain the following four equations. The first equation follows from the fact that $U \in O(N)$. From the second equation it follows that $U^2 = \pm 1$, i.e. $U = \pm U^T$. Multiplying the fourth one by $U_1$ from the left and from the right, we can see that the third one follows from the fourth. Let us write the third one explicitly.
Multiplying by $U_1^T U_2^T$ from the left, we obtain the following. Using the explicit form of $C_{12}$, we can simplify it further. Let us multiply by $P_{12}$ from the left.
Taking the trace on the first site, and using that $\tilde Z_2^T = \tilde Z_2$, $\mathrm{Tr}\,\tilde Z = 0$ and $N > 2$, we obtain the condition below. Since $U$ can satisfy $U = \pm U^T$, there are two cases.
1. $U = U^T$. Using a global symmetry transformation, $U$ can be diagonalized, and $\tilde Z$ takes the same block-diagonal form. From this explicit form we can see that $\tilde Z = 0$ if and only if $\tilde n = 0$ or $\hat n = 0$.
2. $U = -U^T$. Using a global symmetry transformation, $U$ can be brought to the form $U = \begin{pmatrix} 0_{n\times n} & -1_n \\ 1_n & 0_{n\times n} \end{pmatrix}$, where $n = N/2$, and $\tilde Z$ takes the corresponding block form. Multiplying the off-diagonal terms by $\hat n$ from the right, we obtain $\tilde n\,\hat n^T\hat n = -\hat n\,\tilde n^T\hat n$, and multiplying this by $\hat n^T$ from the left, we obtain $\hat n^T\hat n\,\tilde n^T\hat n = 0$.
At first, let us assume that $\hat n \neq 0$; therefore $\tilde n^T\hat n = 0$. Substituting this into the previous equation, we obtain that $\tilde n = 0$. Using this in the diagonal term, we obtain that $\hat n\hat n^T = 0$, which contradicts $\hat n \neq 0$. Therefore $\hat n = 0$. From $n^T n = 1$ and from the diagonal term, we obtain that $\tilde n^T\tilde n = 1$ and $\tilde n\tilde n^T = 0$, which is a contradiction. Therefore an antisymmetric $U$ cannot be a solution of the cbYBE.
We can conclude that we have obtained the same constant κ-matrices from the cbYBE as we got from the boundary flatness condition.
Spectral parameter and field dependent κ-matrix
If we want to check that the new κ-matrix (32) satisfies the classical boundary Yang-Baxter equation (45), we have to compute $G_{12}(\lambda_1, \lambda_2)$; for this, we will need the following Poisson brackets. We checked the cbYBE for the O(4) and O(6) sigma models by explicit calculations using Wolfram Mathematica. For this, we parameterized the sphere with stereographic coordinates. Using this parameterization we can calculate the matrices $r(\lambda, \mu)$, $G(\lambda, \mu)$ and $\kappa(\lambda)$ explicitly and substitute them into the cbYBE. Using Mathematica we have checked that the cbYBE is satisfied for the O(4) and O(6) sigma models.
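The stereographic parameterization used for the Mathematica check is not reproduced in the text; a common choice, which automatically enforces $n^T n = 1$ so the constraint need not be imposed by hand, is sketched below (the exact convention in the paper may differ):

```python
import numpy as np

def n_of_w(w):
    """Unit vector n in R^N from stereographic coordinates w in R^(N-1):
    n = (2w, 1 - |w|^2) / (1 + |w|^2)."""
    w = np.asarray(w, dtype=float)
    w2 = w @ w
    return np.concatenate([2 * w, [1 - w2]]) / (1 + w2)

# O(4) case: the sphere S^3 needs three stereographic coordinates.
w = np.random.default_rng(1).normal(size=3)
n = n_of_w(w)
assert np.isclose(n @ n, 1.0)  # n^T n = 1 holds identically
```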
Conclusion
In this paper new double row monodromy matrices have been determined for the principal chiral models. The corresponding integrable boundary conditions break one chiral half of the symmetry down to $G_L \times H_R$, where $H_R$ is not arbitrary: $G/H_R$ has to be a symmetric space and the Lie algebra of $H_R$ is not semi-simple. We determined the boundary conditions which correspond to these monodromy matrices. Both the monodromy matrices and the boundary conditions contain free parameters.
We used these results to find new monodromy matrices for the O(N) sigma models. At first, the $SO(4) \cong SU(2)_L \times SU(2)_R$ isometry was used to determine the $SU(2)_L \times U(1)_R$ symmetric κ-matrices for the SO(4) sigma model. These new spectral parameter dependent κ-matrices were then generalized to the O(2n) sigma models; they correspond to U(n)-symmetric boundary conditions.
We also showed that these κ-matrices satisfy the classical boundary Yang-Baxter equation; therefore there exist infinitely many conserved charges in involution, i.e. the boundary conditions corresponding to these κ-matrices are classically integrable.
There exist quantum O(4) sigma models which have reflection matrices with two free parameters and residual symmetry $O(2) \times O(2)$ [10]. Therefore one interesting direction to pursue would be to find the classical field-theoretical description of these quantum theories, i.e. κ-matrices and boundary conditions which have two independent parameters and residual symmetry $O(2) \times O(2)$. In the language of the SU(2) PCM, this means boundary conditions which break the left and right symmetries independently. These results could then be generalized to general PCMs.
As a last remark, it would be interesting to check that the quantum versions of the κ-matrices determined in this paper are really the known reflection matrices. This could be done in the large-N limit. Recently, the large-N limit was studied for the $CP^N$ sigma models on finite intervals, e.g. [19, 20]. These methods may also be applicable to the models studied in this paper.
Acknowledgment
I thank Zoltán Bajnok and László Palla for the useful discussions and for reading the manuscript. The work was supported by the NKFIH 116505 Grant.
Appendix A. Non-local conserved charges
If we expand the monodromy matrix around $\lambda = \lambda_0$, we get infinitely many conserved charges which are generally non-local. In this section we deal with the expansions around $\lambda = \infty$ and $\lambda = 0$ and give the first two terms of these series.
Appendix A.1. Expansion around λ = ∞
We start with the expansion of the one-row monodromy matrix; the expansion leads to a series which gives the first two charges. In order to calculate the expansion of the double row monodromy matrix, we also need the following series, where $M = aU$. The conserved charges come from the expansion of the double row monodromy matrix.
From this, the first two conserved charges are the following. The first charge is equivalent to the charge (21) (up to a constant). $\hat Q_R^{(1)}$ is very similar to the charge for the $g \in H$ restricted boundary condition, but there is an extra term built from $U$ and $Q_R^{(0)}$ [21].
These charges also satisfy the relations given above. As a crosscheck, we can take the time derivatives of these charges and see that they all vanish.
Appendix A.2. Expansion around λ = 0
For the expansion around $\lambda = 0$, we can use the inversion property (7) of the double row monodromy matrix and repeat the same calculation as before. We can see that the first conserved charge is equal to the Noether charge of the left multiplication symmetry (20): $\hat Q_L^{(0)} = \hat Q_L$. The second set of charges vanishes. This is similar to the case of the free boundary condition ($g = h$) in [21].
The derivation of (B.4) is similar.
"Physics"
] |
Effects of high intensity non-ionizing terahertz radiation on human skin fibroblasts
Data on the effects of high-intensity pulsed THz radiation (peak intensity ~30 GW/cm², electric field strength ~3.5 MV/cm) on human skin fibroblasts have been obtained for the first time. A quantitative assessment of the number of histone H2AX phosphorylation foci in a cell as a function of irradiation time and THz pulse energy was obtained. It has been shown that the appearance of foci is associated with neither oxidative stress (the cells retain their morphology and cytoskeleton structure, and the content of reactive oxygen species does not exceed control values) nor thermal stress. Long-term irradiation of cells did not reduce their proliferative index.
INTRODUCTION
Terahertz (THz) radiation is electromagnetic radiation with a frequency of 0.1 to 10 THz (wavelength of 30 μm to 3 mm), which lies between the microwave and infrared regions of the spectrum. The energy of a THz photon is not enough to cause ionization, i.e. THz is considered to be non-ionizing radiation (NIR).1 Progress in the development of THz sources has extended the range of their applications, including medical, pharmacological, and security systems.2-6 However, the safety of non-ionizing radiation is still a controversial issue. The safety standards developed for the protection of human health from electromagnetic fields vary greatly around the world and mainly govern tissue heating.7 The action mechanism of NIR differs from that of ionizing radiation (IR) and is associated with oxidative stress (e.g. some effects of radio frequencies in the range used by phones cause the induction of reactive oxygen species), changes in gene expression, and epigenetic and genetic processes leading to DNA damage.7 Although there are many studies related to the effects of THz radiation, the data are insufficient, since opposite results have been demonstrated for similar radiation parameters. For example, a qualitative analysis8 showed the induction of histone γH2AX when cells were exposed to intense THz pulses for 10 min, while quantitative analyses did not reveal the same.9-11 Some studies report no changes in cells, but others indicate alterations in gene expression, DNA damage, the presence of micronuclei, and impaired membrane permeability as well.12-16
There is a detailed review17 of the possible mechanisms of interaction of THz radiation with biological objects. There is also evidence18 that linear and nonlinear radiation can cause local breakdown of double-stranded DNA molecules, which indicates the genotoxicity of this radiation. One of the most sensitive markers of genotoxicity is the phosphorylation of histone H2AX.19 Phosphorylation of histone H2AX at serine 139 was first reported when cells were irradiated with IR.20 The amount of phosphorylated histone depended on the dose of radiation and was associated with the presence of double-strand breaks.21-24 In the case of NIR, an increase in γH2AX foci may be associated with a change in the chromatin structure due to the heating effect and with a number of endogenous processes in the cell,25 including aging,26 oxidative stress,27 and processes in the cell cycle.
The aim of this work is to elucidate the molecular and cellular response mechanisms of human skin fibroblasts to high-intensity pulsed THz radiation. For the first time, the dependence of the number of γH2AX foci on the duration of exposure, as well as their post-irradiation kinetics, was obtained. We have analyzed a number of factors capable of initiating the formation of histone H2AX phosphorylation foci in human skin fibroblasts exposed to high-intensity pulses of non-ionizing THz radiation.
THz exposure set-up
Optical rectification of femtosecond laser pulses is an efficient approach for generating THz radiation. A nonlinear OH1 [2-(3-(4-hydroxystyryl)-5,5-dimethylcyclohex-2-enylidene)malononitrile] organic crystal (Rainbow Photonics, Switzerland) was used in the experimental setup28 (Figure 1A) to convert infrared laser pulses to THz ones. It was pumped by pulses from a Cr:forsterite laser system supplied with a multipass amplifier operating at 100 Hz (Avesta Project LLC)29 and emitting 100 fs pulses at a wavelength of 1240 nm with an energy of 1.1 ± 0.05 mJ. The laser pump radiation was cut off by an LPF8.8-47 THz filter (Tydex LLC) with 70% transmission in the THz spectral range, placed behind the crystal. The energy of THz pulses after the filter, E_THz = 18 ± 0.5 μJ, was measured by a calibrated Golay cell (a GC-1D optoacoustic detector, Tydex LLC). To expand the THz beam, a telescope consisting of two off-axis parabolic mirrors was assembled after the OH1 crystal. The THz radiation was focused to a spot with d_0.5 = 290 μm by an off-axis parabolic mirror with a reflected focal length of 50.8 mm. The pulse duration was measured using the well-known electro-optic sampling technique. A waveform and a spectrum (obtained by calculating the Fourier transform of the waveform) of a THz pulse are presented in Figures 1B and 1C. The FWHM duration of the Gaussian envelope approximating the registered electric field waveform, and the corresponding duration of the THz pulse by intensity, were determined from this measurement. The cell exposure was performed by focusing the THz radiation through the bottom of a plastic dish (#80466, ibidi, with a 180 μm thick polymer bottom) with a cell monolayer attached. A standard 35-mm Petri dish was placed in an incubating plate with a lid (heating system, ibidi) for long-term cell irradiation, mounted on a 3-dimensional motorized linear stage (8MT167-100 along the X and Y axes, 8MT173-20 along the Z axis, Standa). A video channel consisting of a 20× microobjective with a numerical aperture NA = 0.4 and a CCD camera was assembled to control the position of the dish with respect to the focal plane of the focusing parabolic mirror. The Petri dish could be moved along the X axis to either the "THz" or the "video" channel. To minimize the absorption of THz radiation by water vapour, the entire experimental setup was assembled in a sealed box purged with dry air; the relative humidity of the air in the box was locally reduced to 2-3%. Taking into account the transmission of the air along the pathway, the transmission of the plastic dish, and the THz pulse fill factor,30 the energy, peak intensity, and electric field strength of the THz pulses irradiating the cells were estimated to be 15 μJ, 32 GW/cm², and 3.5 MV/cm, respectively.31 A distinguishing feature of the experimental setup is the combination of a high peak power of the THz source with a low average power. The former enables us to overcome the strong absorption of THz radiation by water, penetrate the culture medium (needed for cell maintenance and viability) and reach the cells. The low average power enables us to minimize the thermal effects induced in the cells.
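As a rough plausibility check of the quoted numbers, the peak intensity can be estimated from the pulse energy, an effective pulse duration, and the focal spot area. The sketch below assumes an illustrative effective duration of 0.5 ps (the measured value is not reproduced in the text) and ignores the fill factor, so it only reproduces the order of magnitude:

```python
import numpy as np

# Order-of-magnitude estimate of the peak intensity of a focused THz pulse.
E_pulse = 15e-6   # J, pulse energy at the sample (from the text)
d_spot = 290e-6   # m, focal spot diameter d_0.5 (from the text)
tau = 0.5e-12     # s, assumed effective pulse duration (illustrative)

area = np.pi * (d_spot / 2) ** 2       # m^2
I_peak = E_pulse / (tau * area)        # W/m^2
print(f"{I_peak * 1e-4 / 1e9:.0f} GW/cm^2")  # ~45 GW/cm^2, same order as the quoted 32
```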
Cell culture and exposure to THz radiation
This study was conducted in accordance with the Declaration of Helsinki and GCP guidelines. It was approved by the local Ethical Committee of the Federal Research and Clinical Center of Specialized Medical Care and Medical Technologies (protocol No. 4/5 from December 2, 2019). The patient signed an informed consent before enrolling in this study. The primary fibroblast cell culture was obtained by a biopsy of the retroauricular (behind-the-ear) skin region, as described previously.32,33 The biopsy was taken from a healthy volunteer donor who signed the informed consent.
The THz radiation was focused to a 480 μm-wide spot at the 1/e level to maximize the intensity and electric field strength. The highest radiation intensity was reached in the center of the beam and diminished along the radius to the 1/e level at a distance of 240 μm from the maximum. Therefore, when evaluating the THz treatment effect, the analyzed field was limited to a 500 × 500 μm area. An area with a certain cell density was chosen; to simplify its identification, the corners of the 500 × 500 μm square were marked using laser engraving. Tight focusing of THz radiation results in a small irradiated area and, correspondingly, a low number of treated cells. For this reason flow cytometry, which demands a high cell number (~10⁶), could not be applied; the technique of immunocytochemical analysis was used instead.
Immunocytochemical assay
The cells were fixed, either 24 hours later or immediately after exposure, in a Petri dish with 4% paraformaldehyde solution containing 0.1% saponin in PBS (pH 7.4) for 20 minutes at room temperature, followed by two washes in PBS and additional permeabilization with 0.5% Triton X-100 and 0.5% Tween 20 (in PBS, pH 7.4), supplemented with 1% goat serum to block nonspecific antibody binding. The permeabilized, fixed cells were then incubated for 1 h at 37°C with a primary rabbit anti-γH2AX polyclonal antibody (1 μg/ml, ab11174; Abcam, USA) for the double-strand break assay, with a primary mouse anti-HSP70 monoclonal antibody (5 μg/ml, MS-482-B1, Thermo Fisher Scientific, USA) to determine the heat effect, and with a primary rabbit anti-Ki-67 polyclonal antibody (1 μg/ml, ab15580, Abcam, USA) to determine the proliferation index. After three washes with PBS, cells were incubated for 1 h with secondary goat antibodies against rabbit IgG (H+L) (conjugated to Alexa Fluor 488, 5 μg/ml; Invitrogen, USA) for γH2AX and Ki-67, and with Alexa Fluor 488-conjugated secondary goat anti-mouse IgG (H+L) antibodies for HSP70. The Petri dishes were then washed three times with DPBS. Cell nuclei were labeled with Hoechst 33342 dye (Thermo Fisher Scientific, USA).
The Ki-67 proliferation index was calculated as the ratio of the number of Ki-67-positive cells to the total number of cells stained with Hoechst 33342, expressed as a percentage.
An indicator based on H2DCFDA (C6827, Thermo Fisher Scientific, USA) was used to analyze reactive oxygen species (ROS) in cells exposed to THz radiation. A stock solution of CM-H2DCFDA (50 μg) was prepared in DMSO and diluted to a final concentration of 20 μM in 1× PBS. For a positive control, cells were incubated in serum-free medium supplemented with H2O2 for 30 min. The cells were then washed twice with DPBS and incubated at 37°C in 5% CO2 in serum-free medium containing 20 μM CM-H2DCFDA for 1 hour. After that, the cells of the experimental group and the cells of the positive control were fixed with 4% paraformaldehyde in DPBS for 20 min at room temperature.34 Then the cells were washed twice in DPBS and permeabilized with 0.5% Triton X-100 and 0.5% Tween 20 (in PBS, pH 7.4) with the addition of phalloidin-tetramethylrhodamine B isothiocyanate (P1951, Sigma-Aldrich, USA) to visualize actin.
Immunofluorescence was analyzed using a Celena Digital imaging system (Logos biosystems, South Korea) and a Nikon A1 scanning laser confocal microscope (Nikon Co., Japan).
Statistical analysis
The experiment was repeated three times for each irradiation time to estimate the significance of the data. One-way Fisher's analysis of variance (ANOVA) (significance level p < 0.05) was used for statistical analysis of the formation of histone H2AX phosphorylation foci. ImageJ software was used to evaluate the fluorescence of stained cells. An image file was split into its color channels and the contour of each cell was delimited. The area, integrated density, and mean grey value were measured for each cell in the image, as well as for the background. The corrected total cell fluorescence (CTCF) was calculated using the formula: CTCF = Integrated Density - (Area of the selected cell × Mean grey value of background readings).
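The CTCF formula above is straightforward to apply to the per-cell measurements exported from ImageJ; a minimal sketch is given below (the numeric values are hypothetical):

```python
def ctcf(integrated_density, cell_area, background_mean):
    """Corrected total cell fluorescence, as defined in the text:
    CTCF = IntDen - (cell area x mean grey value of background)."""
    return integrated_density - cell_area * background_mean

# Hypothetical ImageJ measurements for one cell.
print(ctcf(integrated_density=1.8e6, cell_area=950.0, background_mean=420.0))
```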
Formation of foci of histone H2AX (γH2AX) in human skin fibroblasts
In this study, the number of foci of phosphorylated histone H2AX in human skin fibroblasts was estimated after exposure to high-intensity pulses of THz radiation.

Figure: Immunofluorescence analysis of γH2AX foci in human skin fibroblasts exposed to THz radiation for 90 min and fixed on the next day after the irradiation. The area exposed to THz radiation is marked by the dotted circle. γH2AX was stained with goat anti-rabbit secondary antibodies with Alexa Fluor 633 (red) and cell nuclei were stained with Hoechst (blue) in all panels. Scanning confocal microscopy. Bar size 100 μm (A) and 50 μm (B).
Time dependence of the THz radiation effect
A series of experiments was carried out to determine the relation between the amount of phosphorylated histone, the irradiation time, and the time elapsed after the radiation was turned off, for a THz pulse energy of 15 μJ. The dynamics of the formation of γH2AX histones per cell under exposure to THz radiation for different times are shown in Figs. 3A and 3B. To study the dynamics of γH2AX foci formation, quantitative assessment was carried out immediately after cell exposure to THz radiation (0 h, Fig. 3A) and 24 hours after switching off the THz source (24 h, Fig. 3B).
Figure 3A demonstrates a correlation between the number of foci per cell and the exposure duration; the value in the experimental group is much higher than that in the control group for exposures longer than 30 min. However, neither morphological changes nor foreign inclusions in the cells were observed for any exposure time. Due to the minor differences observed between the 0 min and 10 min exposures, this set of parameters was excluded from the subsequent experiments. Since the numbers of γH2AX foci in cells irradiated for 30 and 90 min do not differ statistically, both for immediate fixation (0 h, Fig. 3A) and for fixation one day later (24 h, Fig. 3B), an additional study was carried out for 180 min of irradiation with fixation at 24 h. This resulted in a sixfold increase in γH2AX foci compared with the parallel control group and a twofold growth compared with the data for 30 and 90 min. No signs of apoptosis were observed, however.
Determination of heat shock proteins in fibroblasts after irradiation
A thermal effect on the cell could be one of the possible reasons for the formation of phosphorylated H2AX foci.35 Since direct temperature measurement in biological experiments on cell exposure to THz radiation can sometimes be difficult, a theoretical model36 based on solving the Kirchhoff equation for the heat capacity is widely applied. Theoretical estimations performed earlier31,37 demonstrated that the temperature increase in the center of the irradiated area does not exceed ΔT = 0.7°C for a single THz pulse and ΔT = 2.8°C for a series of pulses. The biological approach to estimating the heat effect is based on the expression of heat shock proteins.38,39 To test the hypothesis of a thermally induced increase in the number of phosphorylated H2AX foci, the expression of heat shock proteins (HSP70) was evaluated after exposure of fibroblasts to THz pulses with an energy of 15 μJ at a repetition rate of 100 Hz for 30 min. The immunocytochemical analysis was performed for cells after the THz treatment (Fig. 4A) as well as for the parallel (Fig. 4B) and positive control groups (not presented here). The corrected total cell fluorescence (CTCF) intensities for all the groups were calculated. There was no increase in the expression of heat shock proteins (HSP70) in the experimental group compared to the parallel control group, and the expression level was much lower than in the positive control group. The Mann-Whitney test demonstrated a statistical difference between the positive control group and the rest of the groups; no difference was observed between the experimental and parallel control groups (p<0.05). A detailed analysis of possible thermal effects of THz radiation on cells was performed earlier.40 In this study, we confirm that the histone foci are not associated with cells expressing HSP, that is, the origin of the foci is not the thermal effect of THz radiation.
Assessment of the effect of oxidative stress caused by THz radiation on the human skin fibroblast proliferation index
An increase in the reactive oxygen species (ROS) level under the influence of exogenous factors (ionizing radiation, microwaves, heat exposure, chemicals, etc.) can lead to destabilization of the chromatin structure, the appearance of DNA double-strand breaks (DNA DSB), and cell apoptosis.41,42 This can result in the formation of γH2AX foci in the cell. The mechanisms of DNA damage by non-ionizing radiation are associated with the suppression of cellular repair mechanisms; this can lead to oxidative stress and the emergence of cancer due to damage to DNA and other cellular components.1 Disruption of the cell cycle also affects the formation of γH2AX foci and can lead to a change in proliferative activity.10,26 ROS levels were assessed after irradiation of fibroblasts for 30, 90, and 180 min. Cells treated with hydrogen peroxide (H2O2) were used as a positive control. An increase in ROS levels and a change in F-actin filaments were found only in the cells treated with H2O2. No ROS increase was observed in any of the irradiated cells, and the F-actin filaments retained their structure and maintained cell morphology. Moreover, for all irradiation times, the number of cells expressing Ki-67 (a marker of proliferation) was approximately the same in both the control and experimental groups. A proliferation index of 50-65% suggests that THz radiation does not affect the proliferative activity of cells.
DISCUSSION
The formation of histone H2AX phosphorylation foci occurs spontaneously under normal conditions, and an increase in their number due to toxic substances can serve as one of the indicators of a genotoxic effect. Experimental studies with different types of cells, various irradiation parameters (frequency, average and peak intensities), and experimental conditions (exposure time, temperature) have been performed previously (see reviews 2,12,15,43).
An increase in γH2AX was detected,44 although only a qualitative analysis was performed; unfortunately, no information on quantitative changes in the number of γH2AX foci was presented for either the experimental or the control samples of artificial skin. A quantitative analysis of γH2AX foci was reported in another study,10 but it is difficult to calculate the exact number of foci per cell from the diagrams given; on average, the number varied from 0.3 to 1 and practically did not differ from the control group values. In the present study, the effect of high-intensity THz radiation (the peak intensity of the source used was ~30 GW/cm², much higher than the typical values used in other studies) on the formation of phosphorylated histone H2AX foci in human skin fibroblasts was estimated. The cells were irradiated for 30, 90, and 180 min. Cells of the 3rd-5th passage from one young and healthy donor were used in all experiments in order to exclude possible increases in γH2AX foci due to cell aging.
It has been demonstrated that THz radiation causes phosphorylation of histone H2AX in the experimental group compared to the control one. The number of foci increased with the duration of THz irradiation. The foci formed after THz exposure for 30 and 90 min were observed right after the end of irradiation and persisted the next day. This is consistent with data11,44 showing that phosphorylation foci are formed immediately after exposure and reach their maximum in 30 min.
At the moment, there are studies reporting that γH2AX foci at relatively low levels of phosphorylation may indicate not only DNA double-strand breaks46 but can also be caused by either micro-heating or oxidative stress associated with THz exposure. In this regard, the appropriate experiments were carried out.
It has been demonstrated earlier37 that the temperature in the center of the THz beam (for an average power of 1.5 mW) does not exceed 40°C, which may cause micro-heating in cells. Semi-quantitative immunocytochemical analysis of heat shock proteins (HSP) showed that the level of HSP70 expression in cells in the focus of THz radiation did not differ from the level in the control groups. Besides, the values were significantly lower than those in cells of the positive control, and foci of γH2AX were often found in cells not expressing HSP. However, due to the semi-quantitative nature of the analysis performed, we cannot rule out an effect of THz radiation on the genes responsible for the expression of chaperone proteins. There were also no signs of oxidative stress observed upon cell exposure to THz radiation: the cells retained their morphology and the structure of the cytoskeleton, and the amount of ROS and the proliferation index (50-65%) did not differ from those in the control groups, even for long-term irradiation (90 and 180 min).
The data obtained indicate a probable epigenetic mechanism of the effect of THz radiation on cells, which can lead to chromatin modification or the appearance of DNA DSB. The formation of DNA DSB upon exposure to THz radiation is still a controversial point, whereas changes in the expression of certain genes have been reported in many studies.39,47-49 A decrease in the expression of genes associated with the epidermal differentiation complex in the 1q21 chromosome region,48,49 dysregulation of 8 signaling pathways responsible for the development of many types of cancer in humans,47 and changes in the expression of genes regulating tumor growth have been observed. Aneugenic effects, associated according to the authors with the induction of aneuploidy, have also been reported.9,11,50,51 Understanding the mechanisms of interaction of non-ionizing THz radiation with biological objects would advance the establishment of safety standards and further progress in the applications of THz sources.
In the present study, it has been demonstrated that long-term exposure of cells to high-intensity THz radiation causes neither heating nor oxidative stress, and produces no changes in cell morphology or proliferative index. An increase in phosphorylated H2AX histones indicates a possible genotoxic effect, and the long-term preservation of the foci testifies to an epigenetic nature of the effect, which may result in the sensibilization of cells to other factors. Further investigations aimed at studying the safety of THz radiation and the establishment of standards would also help in understanding the mechanisms of interaction of non-ionizing radiation with biological objects, as well as in advancing the potential applications of THz radiation sources.
CONCLUSION
New data on the effect of high-intensity pulsed THz radiation (peak intensity of ~30 GW/cm², electric field strength of ~3.5 MV/cm) on human skin fibroblasts have been obtained. It has been shown that the exposure of cells to high-intensity THz radiation does not affect their morphology or proliferative activity. However, such a treatment results in epigenetic changes in the cell due to histone H2AX phosphorylation. The formation of γH2AX foci is time-dependent, and the foci persist in the cell for a long time.
The experiments were performed using the unique scientific facility "Terawatt Femtosecond Laser Complex" in the Center for Collective Usage "Femtosecond Laser Complex" of JIHT RAS. The reported study was funded by the Russian Foundation for Basic Research (RFBR) according to research project No. 19-02-00762.
"Environmental Science",
"Medicine",
"Physics"
] |
Representation of Experiential Meaning in Forestry Professional Report Genre
Despite previous genre studies investigating various professional report genres in different contexts, disciplines and languages, the professional report genre in the forestry discipline remains the least explored, particularly from a Systemic Functional Linguistics (SFL) perspective. This study explored how language represents the forest resource report genre's communicative roles and its predominant linguistic features. This qualitative genre analysis utilised SFL analytical frameworks; six reports written in Malay were used as research data. The findings on process types reveal that Action processes are dominant in the forest resource report genre, which indicates that the genre's central concern is the physical activities and events happening in the forest areas. The role of the genre is to provide preliminary observations and information to assist the forestry department when deciding future directions and planning of forestry-related matters. It is hoped that the results obtained will illuminate future research on Malay professional texts while simultaneously enriching present knowledge of the prominent linguistic features of the professional report genre, particularly in the discourse of the forestry discipline.
Introduction
The report genre has been explored in both academic and professional contexts (Bhatia, 1993; Nwogu, 1997; Forey, 2002; Flowerdew and Wan, 2010; Friginal, 2013) and continues to be a subject of interest among those exploring the relationship between language and text types and between forms and functions of a given genre. However, studies focusing on professional report genres are generally scarce due to the difficulties faced by researchers in gaining access to professional texts, which are often restricted by confidentiality issues and the reluctance of professionals to permit such explorations (Candlin, 2002; Louhiala-Salminen, 2002; Hanford and McCarthy, 2004; Sarangi, 2002). Although researchers have investigated how language is used in professional reports in various disciplines (Forey, 2002; Flowerdew and Wan, 2010; Nwogu and Bloor, 1991), some disciplines remain little explored, including forestry. A few researchers (Leipold, 2014; Winkel, 2012) attempted to address the relative lack of literature in forestry-related studies, but only a few studies focused on analysing the professional report genre in the forestry discipline from a linguistic perspective (Joseph et al., 2014; Friginal, 2013), while the majority of forestry-related studies mainly explored forest discourses from a Foucauldian discourse perspective focusing on forest policies and their governmentality (Garzon et al., 2020, 2021; Winkel, 2012; Leipold, 2014; Arts et al., 2010). Forestry is a multi-disciplinary field that incorporates many scientific disciplines: soils, wildlife, civil engineering, economics, ecology, agriculture, environmental science, recreation, silviculture and the utilisation of timber products (Green, 2006). Given such complexities, analysis of the professional report genre in forestry is most opportune, in the hope of uncovering the nature of the discipline and its prominent linguistic features, as well as the ways in which language functions and realises the communicative roles of the forestry report genre in its context of use, through the analysis of experiential meaning.
Sustainable forest management (SFM) has become a primary concern of forest professionals worldwide. SFM practices and governance are achieved through a comprehensive system of legislation, policies and standard guidelines. However, studies investigating how SFM is achieved and performed by forest professionals from a linguistic perspective are lacking. In Malaysia, the Forestry Department of Peninsular Malaysia (FDPM) is entrusted to perform SFM practices through the issuing of forest harvesting reports, which deliver information on potential forest areas to be harvested and systematically sustained. Systemic Functional Linguistics (SFL) was used as the analytical framework for analysing how language is used by forestry professionals in representing their experience of managing over 4.8 million hectares of forest lands in Peninsular Malaysia.
Objective(s)
In order to understand how language is used to accomplish the functions and communicative goals of achieving SFM reported in the forest resource report genre, this paper investigates, from an SFL perspective, the process types represented in the professional forestry report in realising experiential meaning. To achieve this, Halliday's (1994) framework for analysing experiential meaning was used, along with the SFL framework of Malay process types established by Idris (2012). The latter draws heavily on Halliday's (1994) view of the representation of human experience through language. The following objective was outlined to guide the current study: 1. To examine the process types represented in the forest resource report genre as realised through experiential meaning.
Methodology
This study analysed six forest resource reports, consisting of 236 clauses, written by forestry professionals working at the southern district forestry departments in Malaysia. The forest resource report genre issued by forestry professionals at the Forestry Department Peninsular Malaysia is considered an important genre to study: it is one of the most important documents concerning the administration, management, and conservation of tropical forests in Peninsular Malaysia. Analysis of the functions of the forest resource report genre was informed inductively and deductively by the data obtained from the analysis of experiential meaning. Halliday (1994) asserts that experiential meaning expresses the representational meaning of a speaker's particular situation through transitivity analysis. Findings from the experiential meaning analysis were used to identify the functions of the genre elements represented in the forest resource reports. The theoretical foundation of Idris' (2012) study draws heavily on Halliday's (1994, pg. 37) view that 'in all languages, the clause has the character of a message; it has some form of organisation giving it the status of a communicative event' and that 'our most powerful impression of experience is that it consists of goings-on -happening, doing, sensing, meaning, being and becoming…sorted out in the grammar of the clause' (Halliday, 1994, pg. 106). Apart from using Halliday's functional grammar as a theoretical framework to analyse language function, Idris asserts that his study also takes into account the views of Za'ba (1958), Asmah (1986) and Azhar (1993) on their categorisations and descriptions of Malay sentences and their grammatical elements. This study employed Idris' (2012) Malay process type categorisation for the experiential meaning analysis, as shown in Table 1.
Results
Analysis of experiential meaning provides insights into how we represent reality in language (Eggins, 2007) and the content of the message (Thompson, 2014). Table 2 sets out the functions accomplished by the various process types represented in the genre. Among the functions listed there, Existence processes inform of the existence of entities (e.g. tree species, orang asli settlements and cemeteries, and other entities found in forest areas), and Description processes provide further details regarding the context surrounding the forest areas described.
Overall, the findings on the experiential meaning represented in the FRR genre corroborate findings from previous studies focusing on similar elements. Action processes are the most dominant in the reports, similar to the findings obtained by Idris and Benazir Tanjung (2014) on the dominant use of the Action process in Malay scientific and social science journals. As shown in Table 2, Action processes are dominant in all reports analysed (38.56 per cent). This indicates that the genre is centrally concerned with physical activities and events and the participants who carry them out. Action processes related to human participants are used in the genre to indicate the physical actions and activities performed by forestry officers in their fieldwork and in evaluating forest areas for the approval of forest harvesting activities. Action processes involving material entities, on the other hand, relate to events experienced by material entities such as rivers, as well as to the representation of abstract entities such as water pollution and accessible roads surrounding the forest areas. Table 2 also shows that the FRR genre uses a high proportion of Situation processes (27.12 per cent). The Situation process represents a situation or a condition of an experience (Idris, 2012). The use of Situation processes in the genre constructs forest areas as the participants most affected by the forest harvesting activities that will take place, portrays the geographical conditions of the forest areas, and relates how forest areas are considered with respect to the forestry department's sustainable forest management plans and gazetted areas. Description processes are also significantly used in the reports, in roughly equal proportions (16.53 per cent overall) across all reports analysed. Description processes in Malay are represented through the descriptive particles 'adalah' and 'ialah' (Idris, 2012). To reiterate, descriptive particles used in Malay are not equivalent to to-be verbs in English, such as 'is' or 'are', which represent Relational processes in English (Safiah, 1995). In the forest resource report genre, the Description process ialah is used to state the size of the forest areas, while adalah is used to describe the terrain condition, the distribution of tree stands in terms of tree species, and the types of forest in which particular forest areas are classified.
The genre also utilises the use of Existence processes (10.59 per cent) although less dominantly compared to other process types. The Existence process concerns the representation of a state of existence (Idris, 2012). In the genre investigated, Existence processes are used to acknowledge the existence of material entities found in the forest areas concerning the existence of rivers, flora, fauna, orang asli settlements or cemeteries, and other elements found in the surrounding areas.
Another type of process used in the genre is the Relational process. As illustrated in Table 2, Relational processes occur less frequently (3.81 per cent) in all reports analysed. Relational processes concern the expression of equivalence and attributes of entities (Halliday, 1994).
In the forest resource report genre, Relational processes are used to relate the distribution of tree stands and their attributes in terms of the types of tree species and their qualities (diameter, size) found in the forest areas. Verbal processes (2.54%) are also used in the genre, as shown in Table 2. Verbal processes represent processes of utterance (Idris, 2012). In the reports analysed, the Verbal process (e.g. mencadangkan -suggest) is used to portray forestry officers' thoughts through verbal projections concerning the suggestions made for fieldwork monitoring plans to be executed in the forest areas. However, the Verbal process observed in the reports only occurred once in each report (2.54%). This indicates that the genre is not concerned with conveying thoughts or perceptions but rather presenting factual information related to the forest areas.
The final process type represented in the genre is the Mental process, which occurred the least in the genre (0.83 per cent). Mental processes represent processes of sensing involving the mind, thoughts, perception and cognition, and do not involve any physical action (Idris, 2012). The relative lack of Mental processes in the genre suggests that the reports are not concerned with the conscious cognition of the writer but rather with the physical actions and events relevant to the goings-on in the forest areas.
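The reported percentages can be reproduced from raw clause counts. The counts below are inferred from the stated percentages of the 236 analysed clauses (an assumption, since the paper reports percentages only); notably, 2/236 ≈ 0.85 per cent, which suggests that the 0.83 and 0.85 per cent figures for the Mental process are rounding variants of the same count:

```python
# Clause counts per process type, inferred from the reported percentages.
counts = {
    "Action": 91, "Situation": 64, "Description": 39, "Existence": 25,
    "Relational": 9, "Verbal": 6, "Mental": 2,
}
total = sum(counts.values())
assert total == 236  # the six reports comprise 236 clauses

for process, n in counts.items():
    print(f"{process:12s} {100 * n / total:5.2f} per cent")
# Reproduces 38.56, 27.12, 16.53, 10.59, 3.81, 2.54 and 0.85 per cent.
```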
Discussion
This paper reports the findings of an investigation into one professional forestry report genre used in the Malaysian context, the forest resource report. Overall, the findings on process types reveal that Action processes are dominant in the genre, indicating that the genre's central concern is the physical activities and events happening in the forest areas. Other process types were also represented in the forest resource report genre, each serving specific roles in the genre investigated in this study (see Table 2). The use of linguistic elements such as specialised terminology, nominalisation, and the passive voice in forest resource reports conforms to the typical conventions of scientific texts. These linguistic elements also help to represent the experiential meaning in the reports in a way that serves the functions and roles assumed by the genre: providing the information required for the evaluation of forest areas for potential forest harvesting activities, as well as for the implementation of the forestry department's sustainable forest management practices depicted in the forest resource report genre. The forest resource report is an example of a scientific text used in the forestry discipline. Scientific disciplines often use a specialised linguistic code which is realised by i) use of specialised terminology and notation, ii) use of nominalisations, iii) syntactic complexity and iv) use of the passive voice (Halliday and Martin, 1993). The forest resource report bears a resemblance to the specialised linguistic code of typical scientific texts. This is evident in the reports analysed, where specialised terminology related to the forestry discipline can be observed, including, among others, forestry terms such as kompartmen (compartment), Rancangan Tebangan dan Rawatan Tahunan (Annual Felling and Treatment Plan), taburan dirian pokok (distribution of tree stands), spesis pokok (tree species), kawasan tadahan air (water catchment area), and usahahasil (harvesting). Although widely used in the discipline, these terms are likely to be unfamiliar to many outside of it. Another linguistic element observed in the realisation of experiential meaning in forest resource reports is the use of nominalisation. Forey (2002) noted that nominalisation is used in texts to control the negotiability of texts through its complex packaging of information. In the genre analysed, nominalisation is used to represent participant roles, such as aktiviti usahahasil (harvesting activity), bancian (census), semakan (inspection), and zon penampan (buffer zone), which are all represented as Goal in the reports. Apart from that, nominalisation is also used in the forest resource report to represent circumstantial elements, such as projek tanaman (plantation project) and kutipan cukai (tax collection; circumstance of Cause), and tadahan air (water catchment; circumstance of Location). According to Halliday (1994), nominalisation facilitates the taxonomy of scientific terms while simultaneously compacting complex information and allowing for the development of reasoning. Thus, the use of nominalisation representing participant roles in the forest resource report genre is a resource for creating the technical entities to be presented, which allows complex information to be developed throughout the reports. The use of the passive voice in forest resource reports can also be observed in this study.
A total of 24.15 per cent of the clauses analysed in the study were written in the passive voice. Halliday and Martin (1993) note that the passive voice projects the 'objective' character of scientific knowledge and is used more frequently in scientific texts than in everyday writing. Passives in Malay are often construed with the genuine passive prefix di- (Asmah, 2014), in which the process in the clause emphasises the object as the element being described (Safiah and Wong, 2015).
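As an illustration of how such a proportion might be tallied, the sketch below flags clauses whose verb carries the Malay passive prefix di-. The sample clauses are invented for illustration, and a real analysis would have to exclude the locative preposition di and verify each hit manually:

```python
import re

# Toy illustration: flag clauses containing a di- prefixed verb form.
clauses = [
    "Kawasan ini telah dibanci oleh pegawai hutan.",      # passive (dibanci)
    "Pegawai hutan mencadangkan pemantauan lanjut.",      # active
    "Zon penampan ditetapkan di sepanjang sungai.",       # passive (ditetapkan)
]
passive = [c for c in clauses if re.search(r"\bdi[a-z]+", c)]
print(f"{100 * len(passive) / len(clauses):.2f} per cent passive")
```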
Conclusion
Overall, this study elucidates the potential for SFL to be applied to professional writing in the Malay context, complementing the work of Idris (2012). Since the exploration of Malay texts from an SFL perspective is still in its infancy, more contributions developing relevant frameworks and analytical approaches are needed to extend the literature on the use of Malay in accomplishing various communicative tasks. It is hoped that the findings discussed may contribute to the advancement of knowledge of the forestry professional report genre specifically and of Malay professional texts in general. The findings of this study shed some light on the genre of report writing in the forestry discipline, which has been lacking in linguistic exploration, particularly from an SFL perspective. The results obtained from the SFL analysis exploring the SFM practices of forestry professionals in the forest resource report genre have also expanded our knowledge of how language helps to explicate forestry professionals' concerns and practices in achieving forest sustainability.
"Linguistics"
] |
Last Words: Amazon Mechanical Turk: Gold Mine or Coal Mine?
Recently heard at a tutorial in our field: “It cost me less than one hundred bucks to annotate this using Amazon Mechanical Turk!” Assertions like this are increasingly common, but we believe they should not be stated so proudly; they ignore the ethical consequences of using MTurk (Amazon Mechanical Turk) as a source of labor. Manually annotating corpora or manually developing any other linguistic resource, such as a set of judgments about system outputs, represents such a high cost that many researchers are looking for alternative solutions to the standard approach. MTurk is becoming a popular one. However, as in any scientific endeavor involving humans, there is an unspoken ethical dimension involved in resource construction and system evaluation, and this is especially true of MTurk. We would like here to raise some questions about the use of MTurk. To do so, we will define precisely what MTurk is and what it is not, highlighting the issues raised by the system. We hope that this will point out opportunities for our community to deliberately value ethics above cost savings.
Figure 1: Evolution of MTurk usage in NLP publications.
MTurk is composed of two populations: the Requesters, who post the tasks to be completed, and the Turkers, who complete them. Requesters create so-called "HITs" (Human Intelligence Tasks), which are elementary components of complex tasks. The art of the Requester is to split a complex task into basic steps and to set a reward, usually very low (for instance, US$0.05 to translate a sentence). Using the MTurk paradigm, language resources can be produced at a fraction (1/10th at least) of the usual cost (Callison-Burch and Dredze 2010).
MTurk should therefore not be considered a game. It is only superficially similar to games with a purpose such as the French-language JeuxDeMots ("Play on Words"), which does not offer any prize (Lafourcade 2007), and Phrase Detectives (Chamberlain, Poesio, and Kruschwitz 2008), in which gain is not emphasized (only the best contributors receive a prize, consisting of Amazon vouchers).
MTurk is not a game or a social network; it is an unregulated labor marketplace: a system that deliberately does not pay fair wages, does not pay due taxes, and provides no protections for workers.
Why Are We Concerned?
Since its introduction in 2005, there has been steadily growing use of MTurk in building or validating NLP resources, and most of the main scientific conferences in our field include papers involving MTurk. Figure 1 was created by automatically searching the proceedings of some of the main speech and language processing conferences, as well as some smaller events specializing in linguistic resources, using the quoted phrase "Mechanical Turk." We then manually checked the retrieved articles, source by source, to identify those which really make use of MTurk, ignoring those which simply talk about it. (For example, in the LREC 2010 proceedings, eight articles talk about MTurk, but only five used it, and in 2008, out of two papers citing MTurk, only one used it.) The present journal, Computational Linguistics (CL), appears in the bar chart with a zero count, as none of the articles published in it so far mention MTurk. All of the other sources contained at least one article per year using MTurk. The total number of publications varies from year to year since, for example, conferences may accept different numbers of papers each year, and some conferences, such as LREC, occur only every two years.
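As an illustration of the first, automatic pass of this procedure, the sketch below searches locally stored plain-text proceedings for the quoted phrase. The directory layout is hypothetical, and the subsequent manual check of "mentions" versus "actually uses" is, of course, not automatable this way.

```python
# Minimal sketch of the first-pass count: search locally stored,
# plain-text proceedings for the phrase "Mechanical Turk".
# The layout proceedings/<venue>/<year>/<paper>.txt is hypothetical.

from collections import Counter
from pathlib import Path

hits = Counter()
for paper in Path("proceedings").glob("*/*/*.txt"):
    venue, year = paper.parts[1], paper.parts[2]
    if "mechanical turk" in paper.read_text(errors="ignore").lower():
        hits[(venue, year)] += 1

for (venue, year), n in sorted(hits.items()):
    print(f"{venue} {year}: {n} candidate papers")
```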
We performed another, less detailed, search, this time over the whole ACL Anthology (not source by source), using the same quoted phrase "Mechanical Turk," on 5 November 2010. We examined the hits manually, and out of the 124 resulting hits, 86 were papers in which the authors actually used MTurk as part of their research methodology. Interestingly, we noticed that at least one paper that we know to have used MTurk (Biadsy, Hirschberg, and Filatova 2008) was not returned by the search. The published version of this paper does not explicitly mention MTurk, but the corresponding presentation at the conference indicated that MTurk was used. This is some evidence that use of MTurk may be under-reported. It should be noted that these results include a specialized workshop, the NAACL-HLT 2010 Workshop on Amazon Mechanical Turk (35 papers), whose existence is, in itself, strong evidence of the importance of the use of MTurk in the domain.
The vast majority of papers present small to medium size experiments in which the authors were able to produce linguistic resources or perform evaluations at very low cost; at least for transcription and translation, the quality is sufficient to train and evaluate statistical translation/transcription systems (Callison-Burch and Dredze 2010; Marge, Banerjee, and Rudnicky 2010). Some of these papers, however, bring to light language resource quality problems. For example, Tratz and Hovy (2010, page 684) note that the user interface limitations constitute "[t]he first and most significant drawback" of MTurk, as, in their context of annotating noun compound relations using a large taxonomy, "it is impossible to force each Turker to label every data point without putting all the terms onto a single Web page, which is highly impractical for a large taxonomy. Some Turkers may label every compound, but most do not." They also note that "while we requested that Turkers only work on our task if English was their first language, we had no method of enforcing this," and that "Turker annotation quality varies considerably." Another important point is made in Bhardwaj et al. (2010), where it is shown that, for their task of word sense disambiguation, a small number of trained annotators is superior to a larger number of untrained Turkers. On that point, their results contradict those of Snow et al. (2008), whose task was much simpler (3 senses per word for the latter, versus 9.5 for the former). The difficulty of having Turkers perform complex tasks also appears in Gillick and Liu (2010, page 148), an article from the proceedings of the NAACL-HLT 2010 Workshop on Amazon Mechanical Turk, in which non-expert evaluation of summarization systems is shown to be "not able to recover system rankings derived from experts." Even more interestingly, Wais et al. (2010) show that standard machine learning techniques (in their case, a naive Bayes classifier) can outperform the Turkers on a categorization task (classifying businesses into Automotive, Health, Real Estate, etc.). Therefore, in some cases, NLP tools already do better than MTurk. Finally, as we said earlier, the vast majority of papers present only small or medium size experiments. This can be explained by the fact that, at least according to Ipeirotis (2010a), submitting large jobs in MTurk results in low quality and unpredictable completion time.
Who Are the Turkers?
Many people conceive of MTurk as a transposition of Grid Computing to humans, making it possible to harness humans' "spare cycles" to build a virtual computer of unlimited power. The assumption is that there is no inconvenience for the humans involved (as it is not real work), and the power comes from the myriad. This is a fiction.
Let us look first at how many Turkers are performing the HITs. This is quite difficult to establish, because Amazon does not release many figures about them. We know that over 500,000 people are registered as Turkers in the MTurk system. But how many Turkers are really performing HITs? To evaluate this, we combined two different sources of information. First, we have access to some surveys about the demographics of the Turkers (Ross et al. 2009, 2010; Ipeirotis 2010b). These surveys may be biased with respect to the real population of Turkers, as some Turkers may be reluctant to respond to surveys. Because the results of these surveys are quite consistent, and the surveys are usually easy to complete, not particularly boring, and paid above the usual rate, we may assume that this bias is minor, and accept what they say as a good picture of the population of Turkers. In these surveys we see many interesting things. For instance, there is a growing number of people from India: they were below 10% in 2008, above 33% in early 2010, and they represented about 50% of the Turkers in May 2010 (per a CrowdFlower survey: http://blog.crowdflower.com/2010/05/amazon-mechanical-turk-survey/). Even if these surveys show that the populations from India and the U.S. are quite different, we may take as an approximation that they have about the same reasons to perform HITs in MTurk, and produce about the same activity. We looked at how many HITs the 1,000 Turkers who completed the survey in Ipeirotis (2010b) claim to perform: between 138,654 and 395,106 HITs per week, a range rather than a point estimate because each Turker reported a range of activity rather than an average number of HITs. The second source of information comes from the Mechanical Turk Tracker (http://mturk-tracker.com), a system that keeps track of all the HITs posted on MTurk, each hour. According to it, 700,000 HITs are performed each week. But the tracker neither captures HITs that are posted and completed within less than one hour, nor can it quantify the fact that the same HIT can be completed by multiple workers, as in fact it should be according to regular users like Callison-Burch and Dredze (2010). The creator of the Mechanical Turk Tracker, who is also the author of Ipeirotis (2010b), suggested (in the comments of http://behind-the-enemy-lines.blogspot.com/2010/03/new-demographics-of-mechanical-turk.html) that we multiply the number given by the tracker by 1.7 × 5 to account for these two factors: the quickly completed HITs the tracker misses amount to roughly 70% of those it observes, and a 5x factor covers the unobserved HIT redundancy. This results in a (conjectural) total of 5,950,000 HITs per week. Taking those two data points, namely that 1,000 Turkers perform between 138,654 and 395,106 HITs per week and that the total is about 5.95M HITs per week, we can hypothesize that the real number of Turkers is between 15,059 and 42,912. However, from the surveys, we have access to another figure: eighty percent (80%) of the HITs are performed by the 20% most active Turkers (Deneme 2009), who spend more than 15 hours per week in the MTurk system (Adda and Mariani 2010), consistent with the Pareto principle, which says that 80% of the effects come from 20% of the causes. We may therefore say that 80% of the HITs are performed by 3,011 to 8,582 Turkers. These figures represent 0.6-1.7% of the
registered Turkers, which in turn is in accord with the "90-9-1" rule valid in Internet culture. Another important question is whether activity in MTurk should be considered as labor or something else (hobby, volunteer work, etc.). The observed mean hourly wage for performing jobs in the MTurk system is below US$2 (US$1.25 according to Ross et al. [2009]). Because they accept such low rewards, a common assumption is that Turkers are U.S. students or stay-at-home mothers who have plenty of leisure time and are happy to fill their recreation time by making some extra money. According to recent studies in the social sciences (Ipeirotis 2010b; Ross et al. 2010), it is quite true that a majority (60%) of Turkers think that MTurk is a fruitful way to spend free time getting some cash; but only 20% (5% of the India Turkers) say that they use it to kill time. And these studies also show that 20% (30% of the India Turkers) declare that they use MTurk "to make basic ends meet." From these answers, we find that money is an important motivation for a majority of the Turkers (20% use MTurk as their primary source of income, and 50% as a secondary source of income), and leisure is important for only a minority (30%). We cannot conclude from these studies that the activity in MTurk should be considered as labor for all the Turkers, but we can at least for the minority (20%) for whom MTurk represents a primary source of income. Moreover, using the survey in Ipeirotis (2010b), we find that this minority performs more than one third of all the HITs.
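The arithmetic behind these estimates is simple enough to reproduce; the following sketch recomputes the bounds from the figures quoted above.

```python
# Back-of-envelope estimate of the active-Turker population,
# using the figures quoted above.

tracker_hits_per_week = 700_000        # Mechanical Turk Tracker
correction = 1.7 * 5                   # missed fast HITs x redundancy
total_hits = tracker_hits_per_week * correction   # ~5.95M HITs/week

# The 1,000 surveyed Turkers reported 138,654 - 395,106 HITs/week.
lo_rate, hi_rate = 138_654, 395_106

upper = total_hits / lo_rate * 1_000   # slower raters -> more Turkers
lower = total_hits / hi_rate * 1_000

print(f"estimated active Turkers: {lower:,.0f} - {upper:,.0f}")
# ~15,059 - 42,912; the most active 20% (~3,011 - 8,582) do 80% of HITs
print(f"most active 20%: {0.2 * lower:,.0f} - {0.2 * upper:,.0f}")
```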
What Are the Issues with MTurk?
The very low wages (below US$2 an hour) are a first issue, but the use of Mechanical Turk raises other ethical issues as well. The position of many prototypical Turkers would be considered ethically unacceptable in major developed countries. Denied even the basic workplace right of collective bargaining (unionization), Turkers have no recourse to any channel for redress of employer wrongdoing, let alone the normal ones available to typical workers in the United States and many other developed nations (e.g., class action and other lawsuits, and complaints to government agencies), while simultaneously being subject to egregious vulnerabilities, including having no guarantee of payment for work properly performed.
Legal issues surrounding the use of MTurk have also been encountered. At least one university legal department was sufficiently concerned that Turkers working for several months might claim employee status and demand health and other benefits that it refused to allow grant funds to be expended on MTurk (personal communication, E. Hovy). A small number of universities have insisted on institutional review board approval for MTurk experiments (personal communication, K. Cohen). (Institutional review boards at U.S. universities are independent bodies that review proposed experiments for legal and ethical issues.)
Is MTurk the Future of Linguistic Resource Development?
The implicit belief that the very low cost of MTurk derives from the fact that incentivizing casual hobbyists requires only minimal payment is a mirage: once you admit that a majority of Turkers do not consider MTurk a hobby, but rather a primary or secondary source of income, and that one third of the HITs are performed by Turkers who need MTurk to make basic ends meet, you have to admit that MTurk is, at least for them, a labor marketplace. Moreover, the frequent assumption that the low rewards result from the classical law of supply and demand (large numbers of Turkers mean more supply of labor and therefore lower acceptable wages) is false. Firstly, we do not observe that there are too many Turkers; in fact, there are not enough Turkers. This can be seen in the difficulty of finding Turkers with certain abilities (e.g., understanding a specific language [Novotney and Callison-Burch 2010]) and in the difficulty of completing very large HIT groups (Ipeirotis 2010a). This is not surprising, as we have seen that the number of active Turkers is not that large. Secondly, the low cost is a result of the Requesters' view of the relation between quality and reward: many articles (e.g., Marge, Banerjee, and Rudnicky 2010) report that there is no correlation between the reward and the final quality. The reason is that increasing the price is believed to attract spammers (i.e., Turkers who cheat, not really performing the job, but using robots or answering randomly), who are numerous in the MTurk system because of an inadequate worker reputation system. We obtain here a schema very close to what the 2001 economics Nobel prize winner George Akerlof calls "the market for lemons," in which asymmetric information in a market results in "the bad driving out the good." He takes the market for used cars as an example (Akerlof 1970): owners of good cars (here, good workers) will not place their cars on the used-car market, because the presence of many cars in bad shape (here, the spammers) encourages the buyer (here, the Requester) to offer a low price (here, the reward), since he cannot know the exact value of a given car. After some time, the good workers leave the market because they are not able to earn enough money for the work done (and sometimes they are not even paid), which in turn decreases quality. At the moment, the system is stable in terms of the number of Turkers, because good workers are replaced by naive workers.
Amazon's attitude towards reputational issues has been passive. It maintains that it is a neutral clearinghouse for labor, in which everything else is the responsibility of the two consenting parties. This attitude has led to an explosion of micro-crowdsourcing start-ups, which observed MTurk's flaws and tried to overcome them. Some of these start-ups could become serious alternatives to MTurk (TheQuill 2010), like Samasource, which offers at least a fair wage to workers, who in turn are clearly identified on the Web site, with their resumes. But others are even worse than MTurk, ethically speaking. MTurk is ethically questionable enough; as a scientific community with ethical responsibilities we should seek to minimize the existence of even less-ethical alternatives to it.
What's Next?
If we persist in claiming that with MTurk we are now able to produce any linguistic resource or perform any manual evaluation of output at a very low cost, funding agencies will come to expect it. It is predictable that in assessing projects involving linguistic resource production or manual evaluation of output, funding agencies will prefer projects which propose to produce 10 or 100 times more data for the same amount of money. MTurk costs will then become the standard costs, and it will be very difficult to obtain funding for a project involving linguistic resource production at any level that would allow for more traditional, non-crowdsourced resource construction methodologies. Therefore, our community's use of MTurk not only supports a workplace model that is unfair and open to abuses of a variety of sorts, but also creates a de facto standard for the development of linguistic resources that may have long-term funding consequences.
Non-exploitative methods for decreasing the cost of linguistic resource development exist. They include semi-automatic processing, better methodologies and tools, and games with a purpose, as well as microworking Web sites (like Samasource) that guarantee workers minimum payment levels. We encourage the computational linguistics and NLP communities to keep these alternatives in mind when planning experiments. If a microworking system is considered desirable by the ACL and ISCA communities, then we also suggest that they explore the creation and use of a linguistically specialized special-purpose microworking alternative to MTurk that both ensures linguistic quality and holds itself to the highest ethical standards of employer/employee relationships. Through our work as grant evaluators and recipients, we should also encourage funding bodies to require institutional review board approval for crowdsourced experiments and to insist on adherence to fair labor practices in such work.
Web-based Platform for Subtitles Customization and Synchronization in Multi-Screen Scenarios
This paper presents a web-based platform that enables the customization and synchronization of subtitles in both single- and multi-screen scenarios. The platform enables the dynamic customization of the subtitles' format (font family, size, color...) and position according to the users' preferences and/or needs. Likewise, it allows configuring the number of subtitle lines to be presented, with the ability to restore the video playout position by clicking on a specific line. It also allows the simultaneous selection of various subtitle languages and the application of a delay offset to the presentation of subtitles. All these functionalities can also be made available on (personal) companion devices, allowing the presentation of subtitles in a synchronized manner with the ones on the main screen, as well as their individual customization. With all these functionalities, the platform enables personalized and immersive media consumption experiences, contributing to better language learning, social integration and an improved Quality of Experience (QoE) in both domestic and multi-cultural environments.
INTRODUCTION
Subtitles play a key role in TV and online video services. For many users, such as those with audiovisual impairments or non-natives, subtitles are essential for accessing and interpreting audiovisual content. Likewise, multimedia services and applications need to adapt to users' needs, preferences and resources. With these premises in mind, this paper presents a web-based platform that enables the customization and synchronization (sync, hereafter) of subtitles in both single- and multi-screen scenarios. The platform allows dynamically adapting and customizing the subtitles' presentation according to the users' needs (e.g., language), sensorial capabilities (e.g., format and size), preferences (e.g., format, size, location, number of lines…), the type (e.g., smartphone, tablet, TV…) and number of available devices, the application dynamics and the context of the targeted environment (e.g., domestic or public places…), while guaranteeing a synchronized playout.
During the demo session, the audience will be able to experiment with the platform, by interacting with it on main screens and using their own companion devices to create new, or join ongoing, multi-screen sessions. We also expect valuable feedback about its performance, design aspects, applicability, usability and future functionalities.
PLATFORM FOR SUBTITLES CUSTOMIZATION & SYNC
The technological components that have been used to develop the presented platform and its main functionalities are briefly described in this section.
Technological Components
The platform has been developed exclusively using web-based components, such as HTML5, CSS3 and JavaScript (e.g., Node.js, Socket.IO, jQuery…), which guarantees universal (i.e., cross-network, cross-device, cross-platform and cross-browser) support. More details about these components and the advantages of web platforms compared to native platforms can be found in [4].
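Although the platform itself is built on Node.js and Socket.IO, the following Python sketch (using the python-socketio package) illustrates the kind of session-based relay that underlies the synchronization; the event names and payload fields are hypothetical, not the platform's actual API.

```python
# Sketch of a session server relaying subtitle state to companion
# devices, in the spirit of the platform's Node.js/Socket.IO stack.
# Event names ('join', 'sync') and payload fields are hypothetical.

import eventlet
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

@sio.on("join")
def join(sid, data):
    # A companion device joins a multi-screen session, e.g. after
    # scanning the QR code shown on the main screen.
    sio.enter_room(sid, data["session"])

@sio.on("sync")
def sync(sid, data):
    # Relay playout position, delay offset, language and format
    # preferences to every other device in the same session.
    sio.emit("sync", data, room=data["session"], skip_sid=sid)

if __name__ == "__main__":
    eventlet.wsgi.server(eventlet.listen(("", 5000)), app)
```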
Functionalities
The platform is mainly comprised of an initial screen for selecting the (YouTube or HTML5) video or playlist and the initial language (see Figure 1), and a main (player) screen with the subtitles controls (see Figures 2 to 4). In particular, the main screen includes controls for: 1) enabling and disabling the subtitles; 2) selecting their language; 3) simultaneously selecting various languages; 4) selecting the number of lines to be displayed (e.g., the previous and next ones); and 5) customizing the subtitles' format (font family, size, color, background color and contrast). It is also possible to set a (positive or negative) delay offset for the presentation of subtitles and to restore the video playout position to the beginning of a specific subtitle line by clicking on it. In full screen mode (Figure 3), it is possible to modify the position of subtitles (subtitles vs. surtitles) and to apply a transparency percentage to them. Interestingly, the platform can show a QR code that companion devices scan to join the session. This way, the subtitles can be presented on companion devices in a synchronized and individually customized manner (Figure 4) in multi-screen scenarios. Remote playback control can also be enabled. The advantages of our platform compared to other existing ones are summarized in Table 1 (+ means better performance or extra functionalities).
The platform has been objectively and subjectively evaluated, obtaining very satisfactory results in terms of performance, usability, usefulness of its functionalities, applicability and the interest it aroused. Demo videos showing many of its capabilities are available at: http://iim.webs.upv.es/prototypes.html#streaming
FUTURE WORK
As future work, we plan to make our platform compatible with the Hybrid Broadcast Broadband TV (HbbTV) standard and to adopt artificial intelligence techniques, such as voice synthesis/recognition and image recognition.
ACKNOWLEDGMENTS
This work has been funded, partially, by the "Fondo Europeo de Desarrollo Regional (FEDER)" and the Spanish Ministry of Economy and Competitiveness, under its R&D&I Support Program, in project with Ref.
Chaotic Orbits in Thermal-Equilibrium Beams: Existence and Dynamical Implications
Phase mixing of chaotic orbits exponentially distributes these orbits through their accessible phase space. This phenomenon, commonly called "chaotic mixing", stands in marked contrast to phase mixing of regular orbits, which proceeds as a power law in time. It is operationally irreversible; hence, its associated e-folding time scale sets a condition on any process envisioned for emittance compensation. A key question is whether beams can support chaotic orbits, and if so, under what conditions? We numerically investigate the parameter space of three-dimensional thermal-equilibrium beams with space charge, confined by linear external focusing forces, to determine whether the associated potentials support chaotic orbits. We find that a large subset of the parameter space does support chaos and, in turn, chaotic mixing. Details and implications are enumerated.
I. INTRODUCTION
Rapid, inherently irreversible dynamics is a practical concern in producing high-brightness charged-particle beams. Time scales of irreversible processes place constraints on methods for compensating against degradation of beam quality caused by, for example, space charge. This is a very important practical matter because compensation must be fast compared to these processes, and this affects the choice and configuration of the associated hardware.
A beam bunch with space charge comprises an N-body system with typically 3N degrees of freedom. Upon coarse-graining, i.e., "smoothing" the system to remove granularity, the collective space-charge force remains. One might conjecture that this force, when nonlinear, may support chaotic orbits. One example is the University of Maryland five-beamlet experiment, which shows presumably irreversible dissipation of the beamlets after a few space-charge-depressed betatron periods [1]. Simulations of the experiment reveal a substantial fraction of globally chaotic orbits [2], and phase mixing of these orbits thereby presents itself as a contributing evolutionary mechanism. This example pertains to a strongly time-dependent nonequilibrium system, yet one might conjecture that nonlinear space-charge forces in a static system could support chaotic orbits as well. We shall explore this conjecture.
An initially localized clump of chaotic orbits will, via phase mixing, grow exponentially and eventually reach an invariant distribution. This is "chaotic mixing" [3,4]. Strictly speaking, the process is reversible in that it is collisionless and its dynamics is included in, e.g., Vlasov's equation. Nonetheless, when the invariant distribution spans a global region of the system's phase space, chaotic mixing is a legitimate relaxation mechanism in that it drastically smears correlations. Moreover, from a practical perspective, the process is strictly irreversible because infinitesimal fine-tuning is needed to reassemble the initial conditions. It is also distinctly different from phase mixing of regular orbits, i.e., linear Landau damping [5], a process that winds an initially localized clump into a filament over a comparatively narrow region of phase space. Whereas chaotic mixing proceeds exponentially over a well-defined time scale and can cause global, macroscopic changes in the system, phase mixing of regular orbits carries a power-law time dependence, proceeds on a time scale depending on the distribution of orbital frequencies across the clump, and acts only over a portion of the phase space. Accordingly, ascertaining conditions for, and time scales of, chaotic mixing in beams is an undertaking of practical importance.
In this paper we consider a family of thermal-equilibrium (TE) configurations of beam bunches with space charge, i.e., nonneutral plasmas, confined by linear external forces [6,7,8]. For simplicity, we treat the dynamics in a reference frame that comoves with the bunch and has its origin affixed to the bunch centroid. Particle motion in this reference frame is taken to be nonrelativistic; transforming from the bunch frame to the laboratory frame is straightforward [9]. In the laboratory frame the space-charge force decreases inversely with the square of the beam energy. For the transverse component, this arises from the partial cancellation between the self-magnetic and self-electrostatic forces; while for the longitudinal component, it is due to Lorentz contraction [10]. Nonetheless, there are many situations involving high-brightness beams wherein space charge is important. Contemporary examples include low-to-medium-energy hadron accelerators such as those that drive spallation-neutron sources or serve as boosters for high-energy machines, heavy-ion accelerators, and low-energy electron accelerators such as photoinjectors [11].
Thermal-equilibrium beams are of practical interest in connection with, e.g., high-current radiofrequency linear accelerators. While conventional designs of such machines lead to bunches that are out of equilibrium, a design strategy that keeps the beam at or near thermal equilibrium has been formulated [12]. The principal motivation for this alternative strategy is to circumvent equipartitioning processes that cause emittance growth and halo formation.
Because a TE configuration is a maximum-entropy configuration, is static, and is manifestly stable [13], one might expect its intrinsic dynamics to be entirely benign. The expectation is questionable. The density distribution of such a configuration is uniform in its interior and falls to zero over a distance commensurate to the Debye length. Thus, large-amplitude orbits will explore this "Debye tail," during which time they experience a nonlinear force.
The question we seek to answer is whether the nonlinear force in the Debye tail can cause a significant number of orbits to be chaotic. The answer is unequivocally "no" for spherically symmetric or infinitely long cylindrically symmetric configurations because their potentials are integrable and thereby support only regular orbits. However, breaking the symmetry can generate chaotic orbits, as will become apparent in the analysis to follow.
Our study involves a comprehensive suite of numerical experiments concerning orbital dynamics in smooth (coarse-grained) TE configurations. We establish a quantitative measure of chaos in orbits and use this measure to distinguish between regular and chaotic orbits. We then evolve initially localized clumps of particles in the smooth potentials. The experiments are fast if the potentials are analytic, but they are much slower if the potentials must first be tabulated numerically over a grid. As part of the preliminaries, Sec. II presents a semianalytic theory for estimating the time scale for chaotic mixing. In general the TE configurations, specified in Sec. III, must be found numerically. Section IV presents a means for rapidly constructing approximate, semianalytic models of their potentials. With these models we are able to survey the parameter space and obtain a zeroth-order assessment of the prevalence and degree of chaos; this is done in Sec. V. Section VI concerns examples for which the potential is accurately determined via a numerical solution of Poisson's equation on a grid. For these examples the experiments of Sec. V are repeated, and the results are compared to those derived from the semianalytic approximation. Section VII summarizes the findings, discusses their implications while providing a comparison with the theory of Sec. II, and presents a path for follow-on work.
II. ESTIMATED TIME SCALE FOR CHAOTIC MIXING
Before embarking on numerical studies, it is wise to ascertain whether chaotic mixing can indeed proceed rapidly. One can construct an analytic tool to estimate the chaotic-mixing rate, although its application involves the tacit assumption, or initial knowledge, that chaotic orbits are present. In this section we sketch the methodology leading to analytic predictions.
The past few years have seen development of a geometric method proposed by M. Pettini to quantify chaotic instability in Hamiltonian systems with many degrees of freedom. The central idea is to describe the dynamics in terms of average curvature properties of the manifold in which the particle orbits are geodesics. The method hinges on the following assumptions and approximations; they are discussed thoroughly in Ref. [16]: (1) a generic geodesic is chaotic; (2) the manifold's effective curvature is locally deformed but otherwise constant; (3) the effective curvature reflects a gaussian stochastic process; and (4) long-time-averaged properties of the curvature are calculable as phase-space averages over an invariant measure, specifically, the microcanonical ensemble. The gaussian process is the zeroth-order term in a cumulant expansion of the actual stochastic process; assumption (3) is that the zeroth-order term suffices. The end result relates chaotic instability to the geometric properties of the manifold defined by the long-time-averaged orbits. In short, the theory is based on (often questionable) assumptions that chaos exists and is characterized by ergodicity and a microcanonical ensemble, and it treats chaotic orbits as arising from a parametric instability that can be modeled by a stochastic-oscillator equation. It has recently been adapted for application to low-dimensional, autonomous (time-independent) Hamiltonian systems and, in tests against a wide variety of such systems, it was found commonly to yield estimates of mixing rates that are good to within a factor ∼ 2 [15].
Action principles in classical mechanics are tantamount to extremals of "arc lengths"; thus, one can infer a metric tensor from an action principle [17]. The metric tensor manifests all of the properties of the manifold over which the system evolves, with these properties being calculable following standard methods of differential geometry. Of special interest is the divergence of two initially nearby 3N-dimensional geodesics q and q + δq as governed by the equation of geodesic deviation:

D²δq^α/ds² + R^α_{βγδ} (dq^β/ds) δq^γ (dq^δ/ds) = 0,   (1)

in which D/ds denotes covariant differentiation with respect to the "proper time" s, R^α_{βγδ} is the Riemann tensor derivable from the metric tensor, and summation over repeated indices is implied, with each index spanning the 3N degrees of freedom. Equation (1) is the basis for determining the mixing rate χ as a measure of the system's largest Lyapunov exponent, a quantity that reflects the long-time behavior of the separation vector:

χ = lim_{t→∞} (1/t) ln[ ‖δq(t)‖ / ‖δq(0)‖ ].

Any number of action principles, and therefore any number of metric tensors, can be selected to proceed further. Eisenhart's metric [18], which is consistent with Hamilton's least-action principle, is probably the most convenient choice. It offers easy calculation of the Riemann tensor, and it avoids spurious results traceable to the singular boundary of the perhaps better-known Jacobi metric that is derivable from Maupertuis' least-action principle [19]. Eisenhart's metric operates over an enlarged configuration space-time manifold in which the geodesics are parameterized by the real time t, i.e.,

ds² = δ_{ij} dq^i dq^j − 2V(q) (dq⁰)² + 2 dq⁰ dq^{3N+1},

wherein V(q) is the potential energy per unit mass (hereafter called the "potential"); δ_{ij} (with the indices i, j running from 1 to 3N) is the unit tensor corresponding (without loss of generality) to a cartesian spatial coordinate system; q⁰ = t; q^{3N+1} = t/2 − ∫₀ᵗ dt′ L(q, q̇); and L is the Lagrangian. The resulting geodesic equations for the spatial coordinates q^i are Newton's equations of motion, so the particle trajectories correspond to a canonical projection of the Eisenhart geodesics onto the configuration space-time manifold. A convenient byproduct of the Eisenhart metric is that the only nonzero components of the Riemann tensor are

R_{0i0j} = ∂_i ∂_j V.

Using the aforementioned assumptions and approximations, Pettini and others [16,20] derive an expression for χ in terms of the curvature and its standard deviation averaged over the microcanonical ensemble. The idea is that, as t → ∞, chaotic orbits of total energy E mix through the configuration space toward an invariant measure, taken per assumption (4) to be the microcanonical ensemble δ(H − E), over which time averages become equivalent to phase-space averages. Specifically, for an arbitrary function A(q), the averaging process is

⟨A⟩ = ∫ d³ᴺq d³ᴺp A(q) δ(H − E) / ∫ d³ᴺq d³ᴺp δ(H − E).

Per Eisenhart's metric, the average curvature κ and the ratio ρ ≡ σ/κ, with σ denoting the standard deviation of the curvature, are

κ = ⟨∇²V⟩,   ρ = [⟨(∇²V)²⟩ − ⟨∇²V⟩²]^{1/2} / ⟨∇²V⟩,   (4)

in which ∇² denotes the Laplacian ∂_i ∂_i, and ρ corresponds physically to the ratio of the average curvature radius to the length scale of fluctuations [21]. By taking the curvature to vary randomly along a chaotic orbit, one can reduce Eq. (1) to a stochastic-oscillator equation that can be solved analytically. The solution yields an estimate, Eq. (5), of the largest Lyapunov exponent χ in terms of κ and ρ. The geometric quantities derive from the 6N-dimensional microcanonical distribution.
Anticipating that granularity takes a long time to affect mixing, and wishing to identify conditions for rapid mixing, we now consider the influence of the 3-dimensional coarse-grained space-charge potential V_s on a generic chaotic orbit. The largest Lyapunov exponent for the coarse-grained system equates to the chaotic-mixing rate. We presume the assumptions and approximations stated at the outset carry over to the coarse-grained system; the main justification is that the aforementioned previous work concerning low-dimensional autonomous Hamiltonians has shown that the mixing rate in such systems usually depends only weakly on the dynamical details [15]. We take the external focusing potential V_f to be quadratic in the coordinates x comoving with the bunch, i.e., V_f(x) = (ω · x)²/2, wherein ω = (ω_x, ω_y, ω_z) corresponds to the focusing strength; the total potential is V = V_f + V_s. Per Eq. (4) and Poisson's equation, the quantities κ and σ are determined from

∇²V(x) = ω_x² + ω_y² + ω_z² − (q²/ε₀m) n(x) ≡ |ω|² − ω_p²(x),

wherein n(x) is the (smoothed) particle density, q and m are the single-particle charge and rest mass, respectively, and ε₀ is the permittivity of free space. We then have

κ = |ω|² − ⟨ω_p²⟩,   σ = [⟨ω_p⁴⟩ − ⟨ω_p²⟩²]^{1/2}.

Inserting these results into Eq. (5) gives the associated time scale for chaotic mixing, t_m ≡ 1/χ. When the standard deviation of the density distribution is large, as can be the case when substructure is present, ρ will be appreciable, and in turn Eq. (5) makes clear that t_m will be a few space-charge-depressed periods 2π/√κ. Accordingly, the space-charge-depressed period, a quantity commensurate to the orbital period of a typical particle, constitutes a "dynamical time" t_D for charged-particle beams.
To underscore the potential impact of collisionless relaxation via chaotic mixing, it is of interest to compare t_m to the collisional relaxation time t_R. Perhaps the simplest way to develop an order-of-magnitude estimate of t_R in a charged-particle bunch (a nonneutral plasma) is to calculate the time required for a typical particle velocity to change by of order itself, presuming collisions comprise a sum of incoherent binary interactions [22]. The result is t_R/t_D ∼ 0.1 N/ln N, wherein the Coulomb logarithm is conservatively taken to be ln N. If we substitute plausible parameter values for real high-brightness beams, we find t_R ≫ t_D; for example, N = 6.25 × 10⁹ (1 nC) gives t_R ∼ 10⁷ t_D; hence, t_m ≪ t_R when chaotic mixing is prominent. The remaining question is whether there is a significant population of globally chaotic orbits to mix, a question to which we now turn our attention.
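As a quick check of this order-of-magnitude figure, the following sketch evaluates t_R/t_D = 0.1 N/ln N for the quoted bunch charge.

```python
# Check of the quoted order-of-magnitude figure: t_R / t_D ~ 0.1 N / ln N.

import math

N = 6.25e9          # particles in a 1 nC electron bunch
ratio = 0.1 * N / math.log(N)
print(f"t_R / t_D ~ {ratio:.2e}")   # ~ 2.8e7, i.e., t_R ~ 1e7 t_D
```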
III. THE EQUATIONS OF THERMAL EQUILIBRIUM
Consider a system, i.e., a bunch, of N identical charged particles, e.g., electrons or protons. For simplicity, invoke a Cartesian coordinate system whose origin lies at the bunch centroid. Assume all particle velocities in this coordinate system are nonrelativistic. The particles mutually interact via the Coulomb force and are confined by a static, externally applied, linear focusing force. The focusing force may have different strengths along the three Cartesian axes. Assume, apart from this focusing force, that the system is isolated and is in thermal equilibrium. Accordingly, the total energy E of each particle is conserved:

E = ½mv² + ½m(ω · x)² + qφ(x),

wherein ω = (ω_x, ω_y, ω_z) corresponds to the focusing strength; x = (x, y, z) denotes coordinates; m, v, and q are the particle's rest mass, speed, and charge, respectively; and φ(x) = (m/q)V_s is the space-charge potential arising from the collective Coulomb force.
To proceed, one would in principle work with the 6N-dimensional microcanonical distribution of particles. This distribution includes interactions at all scales, ranging from particle-on-particle to a single particle interacting with the bulk, smooth potential from all other particles. Discreteness effects from 1/r² particle collisions generate chaos [23]; they cause nearby particle trajectories to separate exponentially. The rate of exponential separation, i.e., the Lyapunov exponent, is an increasing function of N [24]. In this sense, larger N gives rise to more chaos. However, the scale at which the separation saturates is a decreasing function of N. Accordingly, in large-N, high-charge-density systems such as beams with space charge, discreteness establishes microchaos [25,26,27,28]. At the other extreme, that of a single particle interacting with the bulk, smooth potential, exponential separation of nearby chaotic particles (if any are present) saturates at a global scale, corresponding to a state of macrochaos. Thus, initially nearby chaotic orbits evolve in three stages [29]: (1) very rapid exponential divergence that saturates at a scale large compared to the initial interparticle spacing but small compared to the system size; followed by (2) rapid exponential divergence that persists until the particles are globally dispersed; followed by (3) less rapid power-law divergence on a time scale ∝ (ln N) t_D, in which t_D is a dynamical time commensurate to the orbital period. If, in the smooth potential, the initially nearby particles execute regular motion rather than chaotic motion, then stage (2) is absent, and stage (3) proceeds on the much longer time scale ∝ N^{1/2} t_D [27].
Our interest here is in stage (2). Specifically, we are concerned about the existence of, and time scale for, macroscopic chaos, i.e., chaotic mixing into the global region of phase space that is energetically accessible to the individual particles. Accordingly, we specialize to the smooth 6-dimensional distribution function of a single particle, recognizing that discreteness effects vanish on macroscopic scales as the number density grows. For the TE beam, this distribution is

f(x, v) ∝ exp(−H/kT),

wherein H is the Hamiltonian, k is Boltzmann's constant, and T is the beam temperature. The number density n(x) follows upon integrating over velocity space, and the space-charge potential follows upon solving Poisson's equation,

∇²φ(x) = −q n(x)/ε₀,

wherein ε₀ is the permittivity of free space.
A much more convenient formulation arises by using dimensionless variables. We introduce the Debye length λ_D0 and angular plasma frequency ω_p0, both defined in terms of the centroid number density n(0):

λ_D0 = [ε₀kT/(q²n(0))]^{1/2},   ω_p0 = [q²n(0)/(ε₀m)]^{1/2}.

We then measure all lengths in the unit of λ_D0, i.e., x ↔ x/λ_D0, and all times in the unit of 1/ω_p0, i.e., t ↔ ω_p0 t. In addition, we introduce the dimensionless potential Φ(x) ≡ qφ(x)/(kT), and we normalize n(x) to the centroid density n(0), i.e., n(x) ↔ n(x)/n(0).
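For concreteness, the sketch below evaluates these scaling units from their standard definitions; the beam parameters are illustrative assumptions, not values taken from this study.

```python
# Evaluate the scaling units lambda_D0 and omega_p0 from their standard
# definitions, for an electron bunch with an assumed centroid density
# and temperature (illustrative numbers only).

import math

eps0 = 8.854e-12        # F/m
q    = 1.602e-19        # C
m    = 9.109e-31        # kg (electron)
kT   = 1.602e-19 * 0.1  # J (0.1 eV, assumed)
n0   = 1.0e16           # 1/m^3, assumed centroid density

lam_D0 = math.sqrt(eps0 * kT / (q**2 * n0))
om_p0  = math.sqrt(q**2 * n0 / (eps0 * m))
print(f"lambda_D0 = {lam_D0:.3e} m, omega_p0 = {om_p0:.3e} rad/s")
```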
Equations (11) and (12) then take dimensionless forms in which the external focusing enters through a dimensionless focusing strength Ω and the geometry enters, through the ellipsoidal radius R(x), via the scale lengths (a, c). There is a minimum permissible focusing strength Ω_u: the bunch is unconfined if Ω < Ω_u, and the corresponding constraint on the parameter space is Ω ≥ Ω_u. Hence, the parameter set [a, c; Ω] fully specifies a TE configuration.
Upon solving for the space-charge potential Φ(x), one can calculate orbits of test particles in the total potential. Their trajectories follow from the (dimensionless) equation of motion

d²x/dt² = −∇V(x),

wherein V is the total (focusing plus space-charge) dimensionless potential. One can, of course, introduce arbitrary initial conditions for the orbits. In our experiments, the initial condition on the velocity is v(0) = 0, and the total energy E of a particle thereby corresponds to the potential energy associated with the initial position x(0).
A key challenge in exploring orbital dynamics throughout the parameter space is to integrate large numbers of orbits rapidly for sufficiently long evolutionary times. Ideally, one would have analytic solutions for the density-potential pairs, from which the force on a particle at each time step can be quickly evaluated. Unfortunately, the equations of equilibrium generally do not submit to analytic techniques. Thus, in principle, one must solve these equations numerically, e.g., over a grid. However, as delineated in the following section, it is possible to formulate approximate, semianalytic solutions, and these solutions enable a search of a broad range of the parameter space for regions that support chaotic orbits. We now turn to that exploration. Subsequently, for select cases, we compare these results against those derived from fully self-consistent numerical solutions.
IV. APPROXIMATE SOLUTIONS TO THE EQUATIONS OF EQUILIBRIUM
One method of solving the equations of equilibrium is through a sequence of successive approximations [30]. A way to begin such a sequence is as follows: (1) As a first approximation, represent the system as a configuration stratified on similar and similarly situated concentric ellipsoids, i.e., one in which the surfaces of constant density are ellipsoids that are similar to, concentric with, and similarly situated with respect to the bounding ellipsoid. (A "homeoid" is a shell bounded by two such similar and similarly situated concentric ellipsoids.)
Thus, the charge density is "homeoidally striated" in the first approximation (as is later illustrated in Fig. 13). Determine the stratification by solving a spherically symmetric model of the equations of equilibrium. (2) In the second approximation, derive the space-charge field corresponding to the homeoidally striated charge density, and then solve exactly the equations of equilibrium in this field. (3 and up) Repeat the process until the density and potential converge. In practice, one can carry out steps (1) and (2) of this recipe using semianalytic methods; to go further requires numerical techniques.
A. Determination of the structure in the first approximation

To invoke a spherically symmetric model of Eq. (12), we take the potential to be stratified over ellipsoids on which R(x) takes a constant value. Then the spherically symmetric model corresponds to solving the radial (one-dimensional) form of Poisson's equation, Eq. (15). This model defines the "zeroth approximation" Φ_0(R) to the potential. In general Eq. (15) must be solved numerically; however, the solution is rapidly and easily accomplished with the aid of, e.g., a Runge-Kutta algorithm.
Once Φ_0(R) is determined, the corresponding homeoidally striated density, n_1[R(x)], becomes the first approximation to the number density. By inspection [31], one can write down the space-charge potential corresponding to the number density n_1(R), and this becomes the first approximation to the potential, Φ_1(x), given by Eq. (17); there, the second equality follows from an integration by parts, and the quantities ∆(u) and R(x; u) are

∆(u) = (a² + u)(1 + u)(c² + u),   R²(x; u) = x²/(a² + u) + y²/(1 + u) + z²/(c² + u).

Hence, in the first approximation the number density is homeoidally striated, but the space-charge potential is not.
B. Determination of the structure in the second and higher approximations

The number density in the second approximation, n_2(x), follows upon substituting Φ_1(x) calculated from Eq. (17) into Eq. (11). For the special case of spherical symmetry, all orders of approximation agree with one another, but this is of course not true for a general triaxial geometry. To go further requires numerical methods, e.g., solving Poisson's equation for the potential Φ_2 corresponding to the density n_2, substituting the result into Eq. (11) to obtain n_3, and successively repeating the process until convergence is achieved. As discussed in Sec. VI below, we use a different method, a multigrid algorithm, for solving Eqs. (11) and (12) numerically.
V. SURVEY OF THE PARAMETER SPACE
Gathering sufficient data to support precise, statistically based conclusions concerning orbital behavior in a given potential requires integrating thousands of orbits in that potential.
And before these orbits can be tracked, the potential needs to be ascertained to sufficient accuracy. In principle, and for each choice of parameters, one must construct the "exact" potential Φ(x) by numerically solving the corresponding Poisson equation. This can be a computationally tedious process, and the solution is by necessity defined over a grid. Next, orbit integration through the grid requires accurate interpolation to evaluate the potential and corresponding particle acceleration between grid points. For sufficient resolution, the time steps need to be appropriately small; accordingly, many interpolations are required, and integrating many orbits is computationally time-consuming. This process is feasible for studying a few choices of parameter sets, and it underlies the results of Sec. VI below.
However, to survey the entire parameter space, i.e., to investigate many choices of parameter sets, the process becomes prohibitive. For this purpose one must resort to using approximate potentials.
Sec. IV above details a sequence of approximations, the first elements of which are semianalytic. The zeroth-order potential Φ_0, derived from Eq. (15), is easy to evaluate, and it enables fast, high-precision orbital integration. However, Φ_0 itself may be a crude approximation to the exact potential; the approximation gets progressively worse as the parameter sets deviate further from spherical symmetry. One might expect the potential Φ_1 of the first approximation to provide a better model. However, its underlying integral, given in Eq. (17), adds additional complexity and time to the orbit integrations. We tried evaluating this integral at each time (thus position) step along the orbit, but doing so made the orbit computations prohibitively long. The alternative is to evaluate the integral over a grid and then do orbit integrations through the grid. As previously mentioned, integrations through a grid are too computationally expensive to enable a parameter survey. Moreover, if one is able to solve Poisson's equation for the exact potential Φ(x), then there is neither computational benefit nor motivation for using Φ_1. Our strategy is to explore a few choices of parameter sets in the exact potential to strengthen conclusions from our survey, and this necessitated developing the Poisson solver described in Sec. VI. For these reasons, we use the potential of the zeroth approximation, Φ_0, to survey the parameter space. For a few specific parameter sets for which the results of the zeroth approximation look especially interesting, we then check the results using the numerically evaluated exact potential, Φ(x).
Plots of Φ_0(R) versus R derived from Eq. (15) appear in Fig. 1. Also shown are the corresponding profiles of the number densities n_1(R) constructed in the first approximation.
For larger "case numbers" i, the density contains larger quasi-uniform central regions. In the outer regions the density decreases, over a length commensurate to the Debye length, to a low-density tail. The space-charge force in the quasi-uniform "core" is correspondingly quasilinear; however, it is manifestly nonlinear in the "Debye fall-off region" (henceforth called the "Debye tail"). Fig. 1 reflects the choices of Ω_i made per Eq. (18).

For each configuration we integrated a sample of 2000 test-particle orbits. The integrations were done using a fifth-order Runge-Kutta algorithm [32] with variable time step. As the integration proceeded, we computed the largest short-time Lyapunov exponent of each orbit using a well-established algorithm in the field of chaotic dynamics [33]. The idea is to evolve two initial conditions that start a very small distance apart for about one dynamical time, then renormalize to bring the two particles close together again, and repeat the process until the average exponent associated with the orbital separation converges to an almost stable value. Typically convergence was achieved within ∼ 100 orbital periods.
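A minimal sketch of this two-orbit renormalization scheme follows, applied to a toy two-dimensional potential rather than the TE potential, and with a fixed-step leapfrog integrator in place of the adaptive Runge-Kutta used here; all parameter values are illustrative.

```python
# Minimal sketch of the two-orbit renormalization method: evolve a
# fiducial orbit and a nearby "shadow" orbit, renormalize the separation
# back to d0 roughly once per dynamical time, and average the log growth.
# Toy 2D potential V = 0.5*(x^2 + y^2) + x^2*y^2 stands in for the TE
# potential; a fixed-step leapfrog replaces the adaptive Runge-Kutta.

import numpy as np

def accel(x):
    # -grad V for V = 0.5*(x^2 + y^2) + x^2*y^2
    return -np.array([x[0] + 2.0*x[0]*x[1]**2,
                      x[1] + 2.0*x[1]*x[0]**2])

def step(x, v, dt):
    # one kick-drift-kick leapfrog step
    v = v + 0.5*dt*accel(x)
    x = x + dt*v
    v = v + 0.5*dt*accel(x)
    return x, v

def lyapunov(x0, v0, d0=1e-8, dt=1e-3, renorms=500, steps=1000):
    x, v = np.array(x0, float), np.array(v0, float)
    xs, vs = x + np.array([d0, 0.0]), v.copy()   # shadow orbit
    total = 0.0
    for _ in range(renorms):
        for _ in range(steps):
            x, v = step(x, v, dt)
            xs, vs = step(xs, vs, dt)
        d = np.sqrt(np.sum((xs - x)**2) + np.sum((vs - v)**2))
        total += np.log(d / d0)
        # pull the shadow orbit back to distance d0 along the separation
        xs = x + (xs - x)*(d0/d)
        vs = v + (vs - v)*(d0/d)
    return total / (renorms*steps*dt)

print(f"largest short-time Lyapunov exponent ~ "
      f"{lyapunov([2.0, 0.0], [0.0, 1.5]):.3f}")
```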
After computing the orbits, we extracted the power spectrum of each orbit using a fast-Fourier-transform algorithm [32]. In doing so, we recorded each orbit at a rate of ∼ 40 samples per orbital period. From the spectrum we computed the total power. Then we sorted the spectral lines in order of decreasing power, and starting from the strongest line we added as many lines as were needed to reach 90% of the total power. The required number of lines is defined to be the "complexity" n of the orbit [34].
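The complexity computation can be sketched in a few lines; the signals below are synthetic stand-ins (a two-line spectrum versus a noisy one), not orbits from the TE potentials.

```python
# Sketch of the complexity measure: Fourier-transform a sampled orbit
# coordinate and count how many of the strongest spectral lines are
# needed to accumulate 90% of the total power.

import numpy as np

def complexity(signal, threshold=0.90):
    power = np.abs(np.fft.rfft(signal))**2
    power = np.sort(power)[::-1]                 # strongest lines first
    cum = np.cumsum(power)
    return int(np.searchsorted(cum, threshold * cum[-1]) + 1)

t = np.linspace(0.0, 100.0, 4000)                # ~40 samples per period
regular = np.cos(2*np.pi*t) + 0.3*np.cos(4*np.pi*t)
rng = np.random.default_rng(1)
chaotic_like = np.cos(2*np.pi*t) + 0.8*rng.standard_normal(t.size)

print("complexity (regular-like):", complexity(regular))       # small
print("complexity (chaotic-like):", complexity(chaotic_like))  # large
```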
Criterion for chaos
Our first and foremost interest is to determine how many of the 2000 orbits in our sample are chaotic in a given TE configuration. Accordingly an objective, quantitative criterion for chaos is needed. There is no universally accepted criterion; hence, we developed our own using the following rationale. Both the largest short-time Lyapunov exponent χ and the complexity n are well-established, conventional measures of chaos [35]. A first piece of information for defining the criterion comes from plotting n versus χ for all of the orbits. We chose to base our criterion only on the complexity for the following reason. Though no problem arises in computing Lyapunov exponents in the zeroth approximation, such is not the case in the exact potential, wherein we found that longer integration times are required to achieve adequate convergence. Recall that the exact potential must be specified over a three-dimensional grid. Accordingly, interpolation errors and discontinuities between the cells can affect computations of Lyapunov exponents because they involve the distance between two initially nearby orbits, which is a local property that is sensitive to the grid size and the order of interpolation. By contrast, a computation of complexity, in that it involves the Fourier spectrum of an individual orbit, avoids reference to nearby orbits and is thus a global property influenced little by the grid size, a notion that we corroborated during the course of our numerical studies. For simulations in exact potentials, we chose a grid size and interpolation algorithm (cf. Sec. VI below) such that numerical errors had negligible effect on the computation of individual orbits. A standard measure of the "goodness" of an orbital integration is the degree to which total energy is conserved [36]; for every orbit we achieved conservation of total energy with relative error ≤ 10⁻⁶. This is some two orders of magnitude better than contemporary standard practice.
Investigations of n-vs.-χ plots for many zeroth-order TE potentials led us to a specific quantitative criterion: an orbit is categorized as chaotic if its complexity is n > 20. We also evolved initially localized clumps of particles in these potentials, taking the dynamical time t_D to be the orbital period corresponding to the total energy of the individual particles comprising the clump. After some tens of t_D, each clump had spread through a volume commensurate to its total particle energy.
C. Survey results
Our strategy for surveying the parameter space of the TE configurations is as follows. We conduct the survey using the zeroth-order potential Φ_0 found from Eq. (15). Recall that this potential depends on the focusing strength Ω and, through R(x), on the scale lengths (a, c).
VI. NUMERICAL EXPERIMENTS IN EXACT POTENTIALS
As a matter of principle, one must be concerned about the extent to which subtle structure in the potential can influence the qualitative behavior, and in particular the chaoticity, of an orbit. One well-known example is that of the Toda potential; the full Toda potential is integrable and supports only regular orbits, but generally a truncated Toda potential is not integrable and supports a population of chaotic orbits [3]. Our survey of the parameter space of TE configurations centered on the use of Φ_0, a generally crude approximation to the true potential. The survey suggests a large region of the parameter space supports sizeable populations of chaotic orbits, wherein all of these orbits reach into the Debye tail.
We may expect in general that the density profile, particularly that of the Debye tail, corresponding to the exact potential is considerably different from that corresponding to Φ_0. For example, in the limit of distances very far from the centroid, the exact space-charge potential will approach spherical symmetry, whereas Φ_0 is everywhere homeoidally striated. Accordingly, to check and have confidence in the qualitative results of Sec. V, we must repeat the numerical experiments in a suitably broad collection of exact potentials. As mentioned earlier, the reason we did not base the survey on exact potentials is that the respective numerical experiments are computationally expensive.
To integrate Eq. (12) governing the exact potential Φ(x), which is a fully three-dimensional partial differential equation (PDE), we chose a multigrid algorithm [37]. The algorithm requires that boundary conditions be specified over the surface of the volume occupied by the grid. We chose a cubic grid volume greatly exceeding the volume of interest, i.e., that spanning the Debye fall-off of the density. Then we calculated the boundary conditions over the surface of this volume using the formalism of the first approximation, specifically, Eq. (17). Because the resulting boundary conditions are only first approximations to the true boundary conditions, we checked our numerical solutions by varying the positions of the bounding surfaces of the grid by factors of five, and we found negligible change in the results over the volume of interest. Applying the multigrid algorithm in three dimensions involves nontrivial manipulations of the inherent restriction, interpolation, and relaxation routines. In the process, a nonlinear algebraic equation emerges due to the nonlinearity of the PDE. It was solved using an iterative method that combines Newton-Raphson and bisection techniques [32].
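The safeguarded root-finding idea, Newton-Raphson steps with a bisection fallback whenever a step leaves the bracketing interval, can be sketched as follows; the model equation is a hypothetical stand-in motivated by the exponential (Boltzmann) density, not the actual equation that arises in the relaxation step.

```python
# Minimal sketch of a safeguarded Newton solver (Newton-Raphson steps,
# falling back to bisection when a step leaves the bracket). The model
# equation f(phi) = phi + exp(-phi) - c is a hypothetical stand-in.

import math

def solve(f, df, lo, hi, tol=1e-12, max_iter=100):
    flo, fhi = f(lo), f(hi)
    assert flo * fhi <= 0.0, "root must be bracketed"
    x = 0.5 * (lo + hi)
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        # keep the bracket tight around the root
        if flo * fx <= 0.0:
            hi, fhi = x, fx
        else:
            lo, flo = x, fx
        step = fx / dfx if dfx != 0.0 else float("inf")
        x_new = x - step
        if not (lo < x_new < hi):       # Newton left the bracket
            x_new = 0.5 * (lo + hi)     # fall back to bisection
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

c = 3.0
root = solve(lambda p: p + math.exp(-p) - c,
             lambda p: 1.0 - math.exp(-p), lo=-1.0, hi=10.0)
print(f"root: {root:.12f}")
```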
After tabulating the exact potential in three dimensions, we used a fifth-order Runge-Kutta algorithm with variable time step to evolve the individual orbits in time, within which the force at every time (thus position) step was computed using a three-dimensional interpolation scheme. The resulting orbits conserved total energy with relative error better than 10^-6 and sometimes as low as 10^-7. Figure 9 exemplifies the difference between the zeroth-order and exact potentials. The top two panels present isopotential contours in the (x, z)-plane for Case 5 with a^2 = 0.5, c^2 = 1.5, a triaxial configuration. The bottom panels show how the two potentials compare along each of the (x, y, z)-axes. Obviously there are, and there should be, differences, but the important question is: to what extent do these differences alter the qualitative evolution of the orbits and, in turn, the complexities that characterize them? Analogous graphs for Case 5 and a^2 = 1.0, c^2 = 0.25, a strongly oblate spheroid, appear in Fig. 10, from which the respective differences are seen to be much more pronounced. Figure 11 provides a visual comparison of a chaotic orbit starting from the same initial condition and evolving in the zeroth-order and exact potentials of Fig. 9. Although orbits in the two potentials differ quantitatively, in many cases they are qualitatively similar in that they explore a similar volume of phase space and have similar morphology. This pertains to the example of Fig. 11; however, this one example does not in any way guarantee that every orbit that is chaotic in the potential of the zeroth approximation is also chaotic in the exact potential. Statistical comparisons of orbital complexities respective to the zeroth-order and exact potentials for several Case 5 configurations appear in Fig. 12, and these show that the complexities in the two potentials can differ considerably depending on the specific parameter set under study.
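The orbit-integration pipeline can be sketched schematically as follows, assuming a SciPy-style adaptive Runge-Kutta integrator and (tri)linear interpolation of the tabulated potential. The grid extent, resolution, interpolation order, and initial condition here are illustrative placeholders rather than the values used in the actual computations.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import RegularGridInterpolator

# Tabulated exact potential on a uniform (x, y, z) grid; the zero values
# below are a placeholder for the real multigrid tabulation.
x = y = z = np.linspace(-30.0, 30.0, 129)
phi_grid = np.zeros((129, 129, 129))
phi = RegularGridInterpolator((x, y, z), phi_grid, method="linear")

def force(r, h=1e-3):
    """Force from the interpolated potential via centered differences."""
    F = np.empty(3)
    for k in range(3):
        dr = np.zeros(3)
        dr[k] = h
        F[k] = -(float(phi(r + dr)) - float(phi(r - dr))) / (2.0 * h)
    return F

def rhs(t, w):
    # w = (x, y, z, vx, vy, vz); unit mass and charge absorbed into phi.
    return np.concatenate([w[3:], force(w[:3])])

w0 = np.array([5.0, 0.0, 0.0, 0.0, 0.5, 0.0])   # illustrative initial condition
sol = solve_ivp(rhs, (0.0, 100.0), w0, method="RK45", rtol=1e-10, atol=1e-12)

def energy(w):
    return 0.5 * np.dot(w[3:], w[3:]) + float(phi(w[:3]))

rel_err = abs((energy(sol.y[:, -1]) - energy(w0)) / energy(w0))  # target: <= 1e-6
```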
Following the procedure delineated in Sec. V C, we also computed the percentage of chaotic orbits in a broad range of exact Case 5 potentials. The results, juxtaposed against their counterparts computed using Φ0, appear in Table I and in Figs. 5 and 6. For these examples, there is generally a smaller percentage of chaotic orbits in the exact potential than in the zeroth approximation. The explanation is simple: compared to the density n_1[R(x)], the density derived from the exact space-charge potential Φ(x) is quasi-uniform over a larger volume and falls to small values over a shorter scale length. Accordingly, the configuration-space volume over which the space-charge force is markedly nonlinear, i.e., the Debye tail, is smaller. Figure 13 illustrates the difference in the density profiles corresponding to the approximate and exact solutions. In the limit of spherical symmetry the profiles are identical, and they disagree increasingly as the configuration departs from spherical symmetry. Most notable is the comparison between Figs. 13(e) and (f), concerning a strongly oblate spheroid, where we see that the corresponding exact density distribution is much more uniform than that of the zeroth approximation. This accounts for the strong discrepancy revealed in Fig. 12(d) concerning orbital chaoticity. It is also consistent with expectations based on first principles: the closer a system is to being one-dimensional (e.g., sphere, cylindrically symmetric disc, infinite symmetric cylinder, in which particle motion is integrable), the smaller the population of chaotic orbits. However, the essential observation is that, in most cases, the exact TE configurations do indeed support substantial populations of chaotic orbits, in keeping with expectations that surfaced from the survey based on approximate solutions.
VII. SUMMARY, IMPLICATIONS, AND FUTURE WORK
We have explored orbital dynamics and phase mixing in thermal-equilibrium beams for which the potential is the superposition of an external potential quadratic in the coordinates and the self potential arising from space charge. The associated parameter space spans the full range of symmetries, i.e., spherical, cylindrical, and triaxial, and the full range of density profiles, i.e., Gaussian (corresponding to negligible space charge) through uniform (corresponding to maximal space charge). To reiterate, the main findings concerning chaos in these systems, "discovered" in the context of zeroth approximations to the space-charge potentials and affirmed with the respective exact potentials, are: (1) configurations corresponding to a large portion of the parameter space support considerable populations of chaotic orbits, (2) essentially all of the orbits that are chaotic reach into the Debye tail where the collective space-charge force is manifestly nonlinear, (3) prolate axisymmetric configurations support little chaos, but prolate triaxial configurations can support considerable chaos, and (4) strongly oblate spheroids support little chaos, but moderately oblate spheroids can support considerable chaos.
It is of interest to compare theoretical predictions concerning TE configurations, for which we herein have established the existence of chaotic orbits, with results of our numerical experiments. In terms of the dimensionless quantities introduced in Sec. III, the parameters κ and ρ of Eq. (6) can be evaluated, and Eq. (5) then yields the mixing rate χ. A comparison between theory and numerical experiments appears in Fig. 14, wherein the simulation results reflect statistics from initially localized clumps of 2000 particles that were started at zero velocity at various points in configuration space corresponding to various total particle energies E. The figure presents a plot of the mixing rate χ versus |E| in the Case 5 configuration with a^2 = 4/5, c^2 = 4/3, a slightly triaxial system. This configuration is "not too far away" from spherical symmetry, which means the zeroth approximation Φ(x) = Φ0 is correspondingly reasonable. It also means only a modest population (∼ 5% for this parameter set) of chaotic orbits is supported.
The figure was derived within the framework of the zeroth approximation because therein the Lyapunov exponents, i.e., mixing rates, can be accurately computed from the simulations and the microcanonical averages required for the theory likewise can be easily and accurately evaluated. The numerical experiments span a range 0.5 ≤ |E| ≤ 60, corresponding to 11 ≤ R ≤ 25, i.e., extending from within to well beyond the Debye drop-off in the density profile. The agreement between theory and numerical experiments is remarkably close.
One can see from the numerical experiments described herein that chaotic mixing takes place on an e-folding time scale comparable to a dynamical time (an orbital period). This is very fast compared to, e.g., collisional relaxation; hence, one must account for this collisionless process when designing an accelerator for the production of high-peak-current, high-brightness beams. For example, particles comprising a beam out of equilibrium will, if globally chaotic, redistribute themselves globally and irreversibly on a dynamical time scale.
Because perturbations induced, e.g., by transitions in the beamline will drive a beam away from equilibrium, chaotic mixing can be a dynamic of practical importance. Consider the case of a TE configuration: a small perturbation from image charges, e.g., as the bunch passes an irregularity in the beamline, will distort the Debye tail. If a substantial fraction of particles in the Debye tail are chaotic, which is the case for a wide range of bunch geometries, a corresponding fraction of the orbits comprising the distortion will quickly mix throughout the volume of the configuration. The work done by the external perturbation in setting up the distortion will thereby appear in the form of a larger configuration-space volume. If the perturbation is strong enough that mixing in momentum space, associated with the consequent time-dependence in the potential, is also substantial, then some of the work done will also appear in the form of a larger momentum-space volume. The net effect is a larger emittance.
If there are many such perturbations along the beamline, the cumulative emittance growth may be troublesome.
The present investigation and its associated implications concern only very specific, time-independent, single-species systems, i.e., beams (or nonneutral plasmas) in thermal equilibrium. These are the most benign systems imaginable, yet we found even they can support chaotic orbits. Any perturbation will create a nonequilibrium, time-dependent system that will subsequently evolve self-consistently. Accordingly, the space-charge potential can be complicated, particularly if the perturbation is strong. The only sensible conjecture under such conditions is that the corresponding population of chaotic orbits will be larger, and in turn chaotic mixing will be more prevalent. Exploratory numerical simulations of an equipartitioning system and of merging beamlets have supported this notion [2]. Further exploration of time-dependent beams is warranted and will likely prove illuminating, particularly in regard to deciphering time scales for emittance growth, halo formation, etc.
By using only smooth potentials we have restricted our analysis to the six-dimensional phase space of a single particle. Accordingly we have suppressed dissipative effects of collisions in particular, and force fluctuations in general. Such effects can only enhance chaos, as has been demonstrated, e.g., in numerical experiments concerning self-gravitating systems [27]. As the next step, we have constructed frozen N-body representations of the charge densities of the TE configurations and with these representations are repeating the numerical experiments described herein. One of our objectives is to determine the minimum number of particles needed to reproduce the dynamics associated with smooth time-independent potentials. Results will be described in a forthcoming paper [28]. In the future it will be of interest to do likewise for time-dependent systems and ultimately ascertain, e.g., conditions under which the Vlasov equation governing the six-dimensional phase space of a single particle can be applied with confidence. | 9,570.4 | 2003-03-13T00:00:00.000 | [
"Physics"
] |
CED-Net: Crops and Weeds Segmentation for Smart Farming Using a Small Cascaded Encoder-Decoder Architecture
Convolutional neural networks (CNNs) have achieved state-of-the-art performance in numerous aspects of human life, and the agricultural sector is no exception. One of the main objectives of deep learning for smart farming is to identify the precise location of weeds and crops on farmland. In this paper, we propose a semantic segmentation method based on a cascaded encoder-decoder network, namely CED-Net, to differentiate weeds from crops. The existing architectures for weed and crop segmentation are quite deep, with millions of parameters that require longer training time. To overcome such limitations, we propose the idea of training small networks in cascade to obtain coarse-to-fine predictions, which are then combined to produce the final results. Evaluation of the proposed network and comparison with other state-of-the-art networks are conducted using four publicly available datasets: the rice seeding and weed dataset, the BoniRob dataset, the carrot crop vs. weed dataset, and a paddy-millet dataset. The experimental results and their comparisons demonstrate that the proposed network outperforms state-of-the-art architectures, such as U-Net, SegNet, FCN-8s, and DeepLabv3, on intersection over union (IoU), F1-score, sensitivity, true detection rate, and average precision comparison metrics while utilizing only (1/5.74 × U-Net), (1/5.77 × SegNet), (1/3.04 × FCN-8s), and (1/3.24 × DeepLabv3) fractions of the total parameters.
Introduction
Weeds and pests are the major causes of damage to any agricultural crop. Many traditional methods are used to control the growth of weeds and pests to obtain high yields [1]. The major disadvantages of these methods are environmental pollution and contamination of the crops, which have hazardous effects on human health. With the advent of advanced technologies, robots have recently been used for selective spraying that targets only weeds, without harming crops [2]. The main challenge for these autonomous platforms is to identify the precise location of weeds and crops [3]. One of the major applications of deep learning in smart farming is to enable these robots to detect weeds and to differentiate them from crops. To automate agricultural equipment, however, researchers first need to solve a variety of problems, including classification, tracking, detection, and segmentation.
In this respect, the agriculture industry is enthusiastically embracing artificial intelligence (AI) in its practice to overcome challenges such as reductions in the labor force and increasing demand.
The proposed cascaded encoder-decoder network (CED-Net), shown in Figure 2, consists of four small encoder-decoder networks divided into two levels. The encoder-decoder networks of each level are trained independently, either for crop segmentation or for weed segmentation. More specifically, Model-1 and Model-2 are trained for weed prediction while Model-3 and Model-4 are trained for crop prediction. The network was extended to two levels to extract features at different scales and to provide coarse-to-fine predictions. The contributions of this work can be summarized as follows: instead of building a big encoder-decoder network with millions of parameters, we implement the same system with small networks in cascaded form. The proposed architecture outperforms or is on par with U-Net [8], SegNet [9], FCN-8s [10], and DeepLabv3 [11] on intersection over union (IoU), F1-score, sensitivity, true detection rate (TDR), and average precision (AP) comparison metrics on the rice seeding and weed, BoniRob, carrot crop vs. weed, and paddy-millet datasets. The proposed network has significantly fewer parameters, (1/5.74 × U-Net), (1/5.77 × SegNet), (1/3.04 × FCN-8s), and (1/3.24 × DeepLabv3), making it more efficient and applicable to embedded applications in agricultural robots. The pre-trained models, dataset information, and implementation details are available at https://github.com/kabbas570/CED-Net-Crops-and-Weeds-Segmentation.
Related Work
In recent years, convolutional neural networks (CNNs) have been at the forefront of training algorithms, and are capable of both visualizing and identifying patterns in images with the minimum human intervention [12]. This capability has enabled the expansion of CNN's applications to all fields of computer vision, including self-driving cars [13], facial recognition [14], stereo vision [15], medical image processing [16], agriculture [7], and bioinformatics [17].
In agriculture, CNNs have been used to solve a variety of problems. To differentiate between healthy and diseased plants, [18] proposed a deep learning-based model that is capable of identifying 26 different diseases in 14 crop species. The authors used pre-trained AlexNet [19] and GoogleNet [20] on a dataset of 54,306 images, to achieve a classification accuracy of greater than 99%. To estimate weed species and growth stages, [21] presented a method using pre-trained Inception-v3 architecture [22]. Their proposed model is capable of estimating the number of leaves with an accuracy of 70%.
To identify weed locations in leaf-occluded crops, [23] used DetectNet [24]. Their network was trained on 17,000 annotations of weed images to identify weeds in cereal fields. The algorithm is 46% accurate in detecting weeds; however, it is unable to detect overlapping and small weeds. To specify herbicides for soybean crops, [25] proposed a CNN-based model to identify weeds and classify them either as grass or broadleaf. A sliding window-based approach was used in [3] for stem detection; each local window provides information about a stem location or a non-stem region. Fuentes et al. developed an automated diagnosis system for tomato disease detection based on a deep neural network; it also used long short-term memory (LSTM) to provide detailed descriptions of disease symptoms [23].
To obtain location information about weeds for site-specific weed management (SSWM), [5] introduced a dataset and performed experiments on a SegNet-based encoder-decoder network (via transfer learning) for semantic segmentation that achieved a mean average accuracy as high as 92.7%.
Precise estimation of the stem location of crops or weeds, as well as the total area of coverage, is crucial to remove weeds either mechanically or by selective spraying. Lottes et al. introduced a network based on a single encoder and two separate decoders for plant and stem detection [3]. The authors also reported semantic segmentation results achieving a mean average precision as high as 87.3%. To increase the application of computer vision for agricultural benefits, [6] presented a dataset of 60 images for carrot crop and weed detection. They also provided semantic segmentation results in terms of different evaluation metrics such as average accuracy, precision, recall, and F1-score.
Semantic segmentation-based weed and crop identification is the most challenging problem that needs to be solved for efficient smart farming, where the goal is to assign a separate class label to each pixel of the image [26]. The most popular deep supervised learning-based models for segmentation include FCN, SegNet, U-Net, DeepLabv3, ParseNet [27], PSPNet [28], MaskLab [29], and TensorMask [30], and attention-based models include DANet [31], Chen et al. [32], OCNet [33], and CCNet [34]. However, CNNs that use an encoder (down-sampling)-decoder (up-sampling) structure (such as SegNet and U-Net) or a spatial pyramid pooling module (such as DeepLabv3) are considered the most promising candidates for semantic segmentation tasks, as they obtain sharp object boundaries or capture contextual information at different resolutions [35].
FCN is considered a turning point in the segmentation literature, being designed to make dense predictions without any fully connected layer [10]. FCN uses VGG-16 to extract the input image features. Different variants of FCN (FCN-8s, FCN-16s, and FCN-32s) are available, differing in how they use the intermediate outputs. In contrast, SegNet is a symmetric encoder-decoder based segmentation network [9] in which the encoder uses convolution and pooling operations to reduce the spatial dimensions of feature maps while storing the index of each extracted value from each window. The decoder of SegNet performs the up-sampling using the stored max-pooling indices. Another symmetric encoder-decoder architecture is U-Net [8], in which the feature extraction of the encoder is performed in four stages with two consecutive 3 × 3 convolutions followed by max-pooling and batch normalization. The bottleneck performs a sequence of two 3 × 3 convolutions and feeds the feature maps forward to the decoder, which up-samples the feature maps by 2 × 2 convolution and halves the number of feature maps before concatenating with the corresponding encoder features. Afterwards, a sequence of two 3 × 3 convolutions is performed and the final segmentation map is generated with 1 × 1 convolutions. DeepLabv3, in turn, uses the concept of atrous convolution to adjust the filter's field-of-view and atrous spatial pyramid pooling (ASPP) to consider objects at different scales [11].
The proposed CED-Net is designed to perform the semantic segmentation task on crop and weed datasets and consists of a cascaded encoder-decoder structure. Thus, for experiments and comparisons of evaluation metrics, we compared the proposed network with FCN-8s, SegNet, U-Net, and DeepLabv3.
Proposed Architecture
The proposed network architecture is shown in Figure 2. The overall model training is performed in two stages. At each level, two models are trained independently. At Level-1, Model-1 is trained for coarse weed prediction and Model-3 for crop prediction. The predictions of Model-1 and Model-3 are up-sampled to the input image size, concatenated with the input image, and used as inputs by Model-2 and Model-4, respectively. Two cascaded networks (Model-1, Model-2) are thus trained for weed predictions, and the other two (Model-3, Model-4) for crop predictions. In total, then, we have four such small networks. The section that follows explains the network architecture and training details.
Spatial Sampling
A custom data generator function f(I_1, I_2, T_1, T_2) is defined for each encoder-decoder network to match input and output dimensions, and to prepare separate ground truths for crops and weeds. For Level-1, we used (I_1, T_1) and (I_1, T_2); all images and their corresponding ground truths were resized to a spatial dimension of 448 × 448. Level-2 models were trained on (I_2, T_1) and (I_2, T_2) with spatial dimensions of 896 × 896. Bilinear interpolation was used in each case to adjust the spatial dimension of input images and targets, as well as to up-sample the Level-1 outputs for each encoder-decoder network to match dimensions with the next level. We started to train the networks with inputs of dimensions 448 × 448 for both weeds and crops as separate targets. At Level-1, two models were trained independently, where for Model-1 the corresponding target was a binary mask of weeds and for Model-3 the target was a binary mask of crops. If M_i represents the model, then the output u_i for an input of dimensions n/2 × n/2 can be defined as

u_i = M_i(I_{n/2 × n/2}), i ∈ {1, 3},

where u_i is the output of Level-1 and has the same dimensions as the input I_{n/2 × n/2}, with n = 896. After training the Level-1 models, their predictions were up-sampled, denoted by U_i, and concatenated with the input image I_{n × n}, which was further used as the input for the Level-2 models. The output of Level-2, v_{n × n}, has dimensions n × n and is expressed as

v_{n × n} = M_i([I_{n × n}, U_{i-1}]), i ∈ {2, 4},

where U_{i-1} is the up-sampled output of the corresponding Level-1 model.
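The data flow of these two equations can be sketched as follows; this is a minimal illustration assuming TensorFlow-style resizing and concatenation, and the function and variable names are ours.

```python
import tensorflow as tf

def cascade_predict(model_1, model_2, image_896):
    """Coarse-to-fine weed prediction following the two-level cascade.

    image_896: batch of RGB images with shape (B, 896, 896, 3).
    model_1 operates at 448 x 448 (Level-1); model_2 at 896 x 896 (Level-2).
    """
    image_448 = tf.image.resize(image_896, (448, 448), method="bilinear")
    u1 = model_1(image_448)                                    # coarse mask, (B, 448, 448, 1)
    U1 = tf.image.resize(u1, (896, 896), method="bilinear")    # up-sampled prediction
    level2_input = tf.concat([image_896, U1], axis=-1)         # (B, 896, 896, 4)
    return model_2(level2_input)                               # refined mask, (B, 896, 896, 1)
```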
Encoder-Decoder Network
The detailed architecture of a single encoder-decoder network is shown in Figure 3. The input to this small network is an RGB image while the target is a binary mask with the same dimensions as the input. The network is similar to U-Net, but instead of going very deep, we limited the maximum number of feature maps to 256. In the encoder, the number of feature maps is increased as {16, 32, 64, 128} while the spatial dimensions are decreased using 2 × 2 max-pooling [24] with stride = 2, which subsamples the feature maps by a factor of 2. In the bottleneck, the maximum number of feature maps is set to 256. In the decoder, the number of feature maps is decreased as {128, 64, 32, 16} while the spatial dimensions are increased by a factor of 2 through bilinear interpolation. At each stage of the decoder, the up-sampled feature maps are concatenated with the corresponding feature maps of the encoder, indicated by a horizontal arrow in Figure 3.
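A minimal Keras sketch of this encoder-decoder is given below. The feature-map counts, pooling, and bilinear up-sampling follow the description above; the use of two 3 × 3 convolutions per stage, ReLU activations, and a sigmoid output layer are assumptions carried over from the U-Net design the network is modeled on.

```python
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions per stage (kernel size assumed, following U-Net).
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def small_encoder_decoder(input_shape=(448, 448, 3)):
    inputs = layers.Input(shape=input_shape)
    skips, x = [], inputs
    for f in (16, 32, 64, 128):                        # encoder stages
        x = conv_block(x, f)
        skips.append(x)                                # saved for skip connections
        x = layers.MaxPooling2D(pool_size=2, strides=2)(x)
    x = conv_block(x, 256)                             # bottleneck, capped at 256 maps
    for f, skip in zip((128, 64, 32, 16), reversed(skips)):   # decoder stages
        x = layers.UpSampling2D(size=2, interpolation="bilinear")(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, f)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)    # binary mask
    return Model(inputs, outputs)
```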
Post-Processing
As a post-processing step, the outputs of Level-2 are combined by concatenating their predictions, as shown in Figure 2, and the final output is then mapped onto the input images. To differentiate between crops and weeds, we assigned red to weeds and blue to crops for all four datasets. Background pixels were kept the same as in the original input image.
Network Training
For each target (i.e., either weed or crop), network training was performed in two stages. In the first phase, the Level-1 models (Model-1 and Model-3) were trained independently to produce coarse outputs. The Level-2 models (Model-2 and Model-4) were trained in the second phase, using the Level-1 predictions concatenated with the input image as their inputs.
All four models were trained using Adam optimization [25], with β1 = 0.9 and β2 = 0.99, a learning rate of 0.0001, and a batch size of 2. A custom loss function was defined in terms of the dice coefficient [26], i.e., minimizing 1 − DC, where DC = 2|P ∩ G|/(|P| + |G|) measures the overlap between prediction P and ground truth G.
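A minimal Keras implementation of such a dice-based loss might look as follows; the smoothing constant is our own addition to guard against division by zero on empty masks and is not specified in the text.

```python
import tensorflow.keras.backend as K

def dice_coefficient(y_true, y_pred, smooth=1.0):
    # Overlap between prediction and ground truth; `smooth` (assumed)
    # avoids division by zero when both masks are empty.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_loss(y_true, y_pred):
    return 1.0 - dice_coefficient(y_true, y_pred)

# Usage sketch, mirroring the stated training hyperparameters:
# model.compile(optimizer=tf.keras.optimizers.Adam(1e-4, beta_1=0.9, beta_2=0.99),
#               loss=dice_loss)
```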
Evaluation Metrics
To measure and compare the quantitative performance of the proposed network, different evaluation measures such as dice coefficient/F1-score, Jaccard similarity (JS)/intersection over union (IoU), sensitivity/recall, true detection rate (TDR), and average precision (AP) were computed. These metrics were derived from the true positive (TP), true negative (TN), false positive (FP), and false negative (FN) counts obtained from the confusion matrix between the prediction and the ground truth. The expressions for IoU, recall, and precision are

IoU = TP / (TP + FP + FN), Recall = TP / (TP + FN), Precision = TP / (TP + FP),

and TDR is likewise computed from these counts (Equation (6)). The F1-score is the harmonic mean of precision and recall,

F1 = 2 × Precision × Recall / (Precision + Recall).

The average precision for the paddy-millet dataset is calculated using 11-point interpolation [27]: the interpolated precision values P_interp(R) are found at a set of 11 equally spaced recall values R ∈ {0, 0.1, 0.2, ..., 1} and averaged to give

AP_11 = (1/11) Σ_{R ∈ {0, 0.1, ..., 1}} P_interp(R), where P_interp(R) = max_{R' ≥ R} P(R').

Thus the average precision considers only the maximum precision values P_interp(R) whose recall values are greater than or equal to R. The mean average precision (mAP) is simply the average of AP over all classes (rice and millet),

mAP = (1/N) Σ_{i=1}^{N} AP_i.
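The 11-point interpolated AP can be computed as in the following sketch; the function name and the input format (parallel arrays of precision and recall along the precision-recall curve) are ours.

```python
import numpy as np

def average_precision_11pt(precisions, recalls):
    """11-point interpolated AP: mean of P_interp(R) = max precision at
    recall >= R, over R = 0, 0.1, ..., 1."""
    precisions = np.asarray(precisions, dtype=float)
    recalls = np.asarray(recalls, dtype=float)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        mask = recalls >= r
        ap += precisions[mask].max() if mask.any() else 0.0
    return ap / 11.0

# mAP over the N = 2 classes of the paddy-millet dataset:
# mAP = (AP_paddy + AP_millet) / 2
```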
Datasets
To evaluate and compare the proposed model, we used four different publicly-available datasets that are related to the identification of crops and weeds for smart farming. For each dataset, the goal is to perform a pixel-wise prediction of crops and weeds. Table 1 summarizes the details of each dataset and distribution of data for training, validation, and testing.
Rice Seeding and Weed Segmentation Dataset
This dataset is provided by [5] and contains a total of 224 images of size 912 × 1024, captured using a Canon IXUS 1000 HS (EF-S 36-360 mm f/3.4-5.6 IS STM) camera. Each image comes with a corresponding ground-truth annotation with two classes: rice and Sagittaria trifolia weed, which is quite harmful to rice crops [28]. Of the 224 images, 160 were used for training, 20 for validation, and 44 for testing. The dataset is publicly available at: https://figshare.com/articles/rice_seedlings_and_weeds/7488830.
BoniRob Dataset
An autonomous robot, named BoniRob [4] was used to collect this dataset in 2016 from fields near Bonn, Germany. The BoniRob dataset contains sugar beet plants, dicot weeds, and grass weeds. For the experiments, we used a subset of the BoniRob dataset containing sugar beets and grass weeds; 492 images of size 1296 × 966 were used, divided into training (400), validation (30), and holdout test (62). This dataset is publicly available at: http://www.ipb.uni-bonn.de/data/sugarbeets2016/.
Carrot Crop and Weed
The carrot crop and weed dataset contains a total of 60 images of size 1296 × 966 and was introduced by [6]. Images were captured using the JAI AD-130GE camera from organic carrot fields in a region of northern Germany. Annotation of ground-truth labels for weeds and crops was conducted manually. Of the 60 images, 45, 5, and 10 were used for training, validation, and testing, respectively. The dataset can be found at: https://github.com/cwfid.
Paddy-Millet Dataset
The paddy-millet dataset was acquired from [7] and contains a total of 380 images of size 804 × 604, captured using a handheld Canon EOS-200D camera. Paddy and millet weeds have a similar appearance, making this a very challenging dataset; the goal is to identify and localize the paddy and weed locations using semantic graphics. Semantic graphics is the idea of labeling an area of interest with minimal human labor. In our experiments, we manually assigned a solid circle to the base of each paddy and millet weed plant, and the rest of the pixels were counted as background. The 380 images were distributed as 310 for training, 30 for validation, and 40 for testing.
Experimental Results and Discussion
All experiments mentioned in this paper were performed using a PC equipped with an NVIDIA Titan XP GPU. We used the Keras framework with a TensorFlow backend. Both quantitative and qualitative results of CED-Net and other state-of-the-art networks were compared for all datasets. Table 2 shows the number of parameters for the different architectures used in this paper. Observe that the proposed architecture has a smaller number of parameters than the others: almost 6 times fewer than U-Net and SegNet, and 3 times fewer than FCN-8s and DeepLabv3.
Rice Seeding and Weed Segmentation
For quantitative analysis of the proposed CED-Net against other networks on the rice seeding and weed dataset, we computed different metrics such as intersection over union (IoU) individually for each class (i.e., weed IoU and crop IoU), mean intersection over union (mIoU) for both classes together, F1-score, and sensitivity. For every evaluation index, our proposed CED-Net outperforms the other networks by distinctive margins. Table 3 summarizes the segmentation performance of our proposed architecture against each evaluation metric and all other networks. The experimental results of all the networks for the rice seeding and weed dataset are shown in Figure 4. The column on the far left shows input images for each network; the result is shown on the input image, with red indicating the Sagittaria trifolia weed and blue the rice crop. The proposed network performed well in differentiating between weeds and crops, whereas the other architectures were at times unsuccessful in assigning the label to pixels, which explains their higher FN rates (SegNet, 3.13%; U-Net, 4.76%; FCN-8s, 5.72%; and DeepLabv3, 3.2%) compared to the proposed network (2.63%), as mentioned in Table 4.
BoniRob Dataset Segmentation
For this dataset, 62 images were used as testing samples, and a comparative quantitative analysis was performed as shown in Table 5. The proposed CED-Net outperforms U-Net, SegNet, FCN-8s, and DeepLabv3 on the crop IoU, mIoU, and F1-score metrics. However, U-Net performs marginally better on the weed IoU and sensitivity metrics, with six times more parameters than CED-Net. It can be seen from the SegNet column that SegNet often misclassifies crop pixels as weed, whereas better performance is obtained from CED-Net. The confusion matrices in Table 6 show that the proposed CED-Net has ~1.7 times, ~2.5 times, and ~1.3 times fewer false negatives (FN) than SegNet, FCN-8s, and DeepLabv3, respectively, and marginally more than U-Net. The qualitative results of the BoniRob dataset for all the networks are shown in Figure 5.
Carrot Crop and Weed Segmentation
The carrot crop and weed dataset is a small dataset, containing only 60 images, of which 10 were used as the test set. The evaluation metrics of the proposed CED-Net and the other architectures are listed in Table 7. Except for the sensitivity metric, CED-Net outperforms all other networks by wide margins. CED-Net marginally underperforms SegNet on the sensitivity metric because SegNet generates the highest number of TPs (2.6% compared to CED-Net's 2.5%) and a lower number of FNs (0.33% compared to CED-Net's 0.48%), but SegNet produces 8 times more FPs than CED-Net, which reduces its overall performance, as shown in Table 8. In the case of U-Net, it generates the lowest number of FPs (19,138) but its performance is penalized by a higher number of FNs (111,531). The proposed CED-Net performed better than any other network for most evaluation indices and can compete with the other networks by predicting the minimum number of FPs and FNs while increasing the number of TPs and TNs. Figure 6 illustrates a qualitative comparison for all the networks. The proposed network performed well in classifying weed pixels, although in some cases it was unable to assign a label to crop pixels; thus, its IoU is lower for crops than for weeds. The SegNet column shows that it was unable to differentiate boundaries well, as indicated by its high FP rate.
Paddy-Millet Dataset
The quantitative performance for this dataset is measured using AP for weed and rice, mAP, and TDR. In the paddy-millet dataset, stamping-out is one of the most effective and environment-friendly techniques to remove the millet weed from rice crops. For the stamping-out technique, finding the class (i.e., paddy or millet weed) and the location of the weeds is more important than finding the area covered by them. Since the coordinates of the locations of millet weeds and paddy have higher significance, it is more useful to find the center point of the detections. Thus, for this dataset, we used TDR, AP, and mAP as evaluation metrics to analyze the performance of the network.
A prediction provided by the network is classified as TP, FN, or FP, where the category is determined using the Euclidean distance between the centers of the prediction and the ground truth. If the Euclidean distance between the centers of the prediction and the ground truth is less than a pre-defined threshold, it is counted as a TP. However, if the distance is greater than the threshold, two penalties are imposed on the network: (1) detection at the wrong location (FP) and (2) missing of the ground truth (FN). True detection rate (TDR) values are computed using Equation (6), which determines the performance of the network in identifying crop (paddy) and weed (millet) locations within the defined threshold. Table 9 shows the TDR values of the proposed CED-Net along with the comparison networks and illustrates that the proposed network outperforms all other networks with significantly fewer parameters. For further evaluation, we also provide the results in terms of AP for weeds, AP for paddy, and mAP. Precision is defined as the capability of a model to locate relevant objects only, and recall is true positive detections relative to all ground truths. The 11-point interpolation is used to find AP (see Equation (9)) for each class (i.e., rice crops and millet weeds) separately, and mAP is computed (from Equation (10)) with N = 2 (the number of classes). Table 10 lists the AP for weed, the AP for rice, and the mAP results. The proposed CED-Net has the highest mAP for all thresholds and can detect most of the millet weeds and rice crops compared to the other networks, as listed in Table 10. The qualitative results for the paddy-millet dataset are presented in Figure 7.
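The distance-threshold classification of detections described above can be sketched as follows. The greedy nearest-first matching order is our assumption, as the text does not specify how multiple predictions near one ground truth are resolved.

```python
import numpy as np

def match_detections(pred_centers, gt_centers, threshold):
    """Classify predicted centers as TP or FP against ground-truth centers.

    A prediction within `threshold` Euclidean distance of an unmatched
    ground truth counts as a TP; otherwise it is an FP. Ground truths
    left unmatched at the end are FNs.
    """
    unmatched_gt = [np.asarray(g, dtype=float) for g in gt_centers]
    tp = fp = 0
    for p in (np.asarray(p, dtype=float) for p in pred_centers):
        if unmatched_gt:
            d = [np.linalg.norm(p - g) for g in unmatched_gt]
            j = int(np.argmin(d))
            if d[j] <= threshold:
                tp += 1
                unmatched_gt.pop(j)   # each ground truth matched at most once
                continue
        fp += 1
    fn = len(unmatched_gt)
    return tp, fp, fn

# TDR can then be computed from these counts per Equation (6) of the text.
```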
Conclusions
This paper presents a small cascaded encoder-decoder (CED-Net) architecture to detect and extract the precise location of weeds and crops on farmland using semantic segmentation. The proposed network has comparatively fewer parameters than other state-of-the-art architectures, resulting in shorter training and inference times. The improved performance of CED-Net is attributed to its coarse-to-fine approach and cascaded architecture. The network architecture is extended to two levels, at each of which two small encoder-decoder networks are trained independently in parallel (i.e., one for crop predictions and the other for weed predictions). At each level, the network aims to predict a binary mask for either crops or weeds. The predictions of Level-1 are further refined by the Level-2 encoder-decoder networks to generate the final output. Thus, four small networks were trained, with two arranged in cascade for each target (i.e., crops and weeds). To evaluate and compare the performance of the proposed CED-Net with other networks, we used four different publicly available crop and weed datasets. The proposed network has 1/5.74, 1/5.77, 1/3.04, and 1/3.24 times the parameters of U-Net, SegNet, FCN-8s, and DeepLabv3, respectively, which makes it more robust and hardware-friendly compared to the other networks. Moreover, CED-Net either outperforms or is on par with other state-of-the-art networks in terms of different evaluation metrics such as mIoU, F1-score, sensitivity, TDR, and mAP. | 7,137.2 | 2020-10-01T00:00:00.000 | [
"Agricultural and Food Sciences",
"Computer Science"
] |
Influenza H5N1 virus infection of polarized human alveolar epithelial cells and lung microvascular endothelial cells
Background: Highly pathogenic avian influenza (HPAI) H5N1 virus is entrenched in poultry in Asia and Africa and continues to infect humans zoonotically, causing acute respiratory distress syndrome and death. There is evidence that the virus may sometimes spread beyond the respiratory tract to cause disseminated infection. The primary target cell for HPAI H5N1 virus in the human lung is the alveolar epithelial cell. The alveolar epithelium and its adjacent lung microvascular endothelium form host barriers to the initiation of infection and the dissemination of influenza H5N1 infection in humans. These are polarized cells, and the polarity of influenza virus entry and egress, as well as the secretion of cytokines and chemokines from the virus-infected cells, are likely to be central to the pathogenesis of human H5N1 disease.

Aim: To study influenza A (H5N1) virus replication and host innate immune responses in polarized primary human alveolar epithelial cells and lung microvascular endothelial cells and its relevance to the pathogenesis of human H5N1 disease.

Methods: We used an in vitro model of polarized primary human alveolar epithelial cells and lung microvascular endothelial cells grown in transwell culture inserts to compare infection with influenza A subtype H1N1 and H5N1 viruses via the apical or basolateral surfaces.

Results: We demonstrate that both influenza H1N1 and H5N1 viruses efficiently infect alveolar epithelial cells from both the apical and basolateral surfaces of the epithelium, but release of newly formed virus is mainly from the apical side of the epithelium. In contrast, influenza H5N1 virus, but not H1N1 virus, efficiently infected polarized microvascular endothelial cells from both the apical and basolateral aspects. This provides a mechanistic explanation for how H5N1 virus may infect the lung from the systemic circulation. Epidemiological evidence has implicated ingestion of virus-contaminated foods as the source of infection in some instances, and our data suggest that viremia, secondary to, for example, gastro-intestinal infection, can potentially lead to infection of the lung. HPAI H5N1 virus was a more potent inducer of cytokines (e.g. IP-10, RANTES, IL-6) than H1N1 virus in alveolar epithelial cells, and these virus-induced chemokines were secreted onto both the apical and basolateral aspects of the polarized alveolar epithelium.

Conclusion: The predilection of viruses for different routes of entry into and egress from the infected cell is important in understanding the pathogenesis of influenza H5N1 infection and may help unravel the pathogenesis of human H5N1 disease.
Introduction
Highly pathogenic influenza (HPAI) H5N1 virus first emerged as a cause of severe human disease in 1997 in Hong Kong [1,2]. Since then, it has become entrenched in poultry across Asia and Africa with zoonotic transmission to humans, sometimes with fatal outcome. In contrast to human seasonal influenza, H5N1 disease has a higher reported case-fatality rate, ranging from 33% in Hong Kong in 1997 to 61% more recently [1,3]. The reason for this unusual severity of human disease remains unclear. Within the lung, the alveolar epithelium is the primary target of influenza H5N1 virus [4][5][6]. Although a novel influenza H1N1 virus of swine origin has recently emerged to cause a pandemic [7,8], the pathogenesis of H5N1 virus infection remains an important public health issue because this virus continues to pose a pandemic and public health threat, either directly or through reassortment with the novel pandemic H1N1 virus.
Epithelial cells line the major cavities of the body, functioning in selective secretion and adsorption, and providing a barrier to the external environment. In the human lung, the alveolar epithelium consists of a continuous layer of tissue made up of two principal cell types: flattened type I alveolar epithelial cells and cuboidal type II alveolar epithelial cells. Type I alveolar epithelial cells cover over 80% of the alveolar surface, where they function as a broad thin layer for gaseous exchange. These cells are highly polarized, since their plasma membranes are divided into two discrete domains, namely, the apical domain (facing the luminal air surface) and the basolateral domain (facing the systemic circulation) [9]. This large, thin surface makes them extremely susceptible to injury from inhaled pathogens. While there are some data on H5N1 virus infection and cytokine responses in alveolar epithelial cells [10], there is no information on the effect of cell polarity on H5N1 virus replication or on virus-induced host responses.
Though influenza virus infection is localized primarily to the respiratory system, HPAI in some avian species is associated with systemic dissemination of the virus to multiple organs. There is increasing evidence that H5N1 influenza viruses are found in the peripheral blood, the gastro-intestinal tract and occasionally even the central nervous system of humans, and such dissemination may contribute to unusual disease manifestations including those of multiple organ dysfunction [11][12][13][14]. The close anatomical relationship between alveolar epithelium and the lung microvascular endothelium, together with the distribution of putative influenza A virus receptors on the endothelial cell surface [15], make it important that parallel investigations on the lung epithelium and lung endothelium are carried out.
In the present study, we investigated the infection of polarized, primary, human type I-like alveolar epithelial cells and lung microvascular endothelial cells by influenza A virus. The low pathogenic human seasonal influenza virus A/HK/54/98 (H1N1) and the HPAI A/HK/483/97 (H5N1) virus were studied. We found that both influenza H1N1 and H5N1 viruses efficiently infect alveolar epithelial cells from both the apical and basolateral surfaces of the epithelium. Irrespective of the route of infection, both viruses were preferentially released at the apical surface of the alveolar epithelium. In lung microvascular endothelial cells, by contrast, influenza H1N1 virus failed to replicate convincingly, whereas influenza H5N1 virus showed evidence of replication following infection by either the apical or basolateral route, and new virus was released from both sides of the cell. As previously reported [10], influenza H5N1 virus was a more potent inducer of cytokines and chemokines (e.g. IP-10, RANTES, IL-6) than H1N1 virus in alveolar epithelial cells. Influenza H5N1 virus-induced chemokines were secreted on both the apical and basolateral aspects of the polarized alveolar epithelium, while human influenza H1N1 virus led predominantly to apical secretion. These findings enhance the understanding of how virus infection may spread within and beyond the lung in influenza virus infection and how the innate host response may contribute to modulating or aggravating tissue pathology.
Isolation of primary human alveolar epithelial cells
Primary alveolar epithelial cells were isolated from human non-tumor lung tissue obtained from patients undergoing lung resection in the Department of Cardiothoracic Surgery, Queen Mary Hospital and Queen Elizabeth Hospital, Hong Kong SAR, under a study approved by The University of Hong Kong and Hospital Authority (Hong Kong West and Kowloon Central/East, respectively) Institutional Review Board, using a modification of methods previously described [16]. Briefly, lung tissue was minced into pieces of > 0.5 mm thickness using a tissue chopper. The tissue was digested using a combination of trypsin and elastase for 15 min at 37°C in a shaking water-bath. The cell population was purified by a combination of differential cell attachment, Percoll density gradient centrifugation and magnetic cell sorting. The cells were maintained in a humidified atmosphere (5% CO2, 37°C) under liquid-covered conditions, and the growth medium was changed daily starting from 60 h after plating the cells.
Type I-like alveolar epithelial cell differentiation and polarization
The purified cell pellet (passage 1 or 2) was resuspended in medium to a final concentration that allowed seeding at 5 × 10^5 cells/cm2 onto collagen I-coated Transwell supports (Corning) and cultured for 14 to 20 days with the small airway culture medium SAGM (Lonza) in the apical and basolateral chambers of the Transwell. The cells spread to form a confluent monolayer and the culture medium was changed every 48 h. The concomitant increase in transepithelial electrical resistance (TER) was measured using an epithelial tissue voltohmmeter (EVOM). TER was calculated as the measured electrical resistance (Ohms) multiplied by the surface area of the filter. This method was already established in our laboratory using a modification of methods previously described [16,17]. When the transepithelial electrical resistance (TEER) reached 1000 ohm·cm2, demonstrating the paracellular restrictiveness of the alveolar cell preparation and the formation of competent tight junctional complexes within the polarized alveolar epithelial cell model [16], the cells were used for virus infection experiments.
Culture and polarization of lung microvascular endothelial cell
Primary human lung microvascular endothelial cells (HLMVE) were purchased from Lonza Walkersville, Inc. (US) and maintained in the medium and growth supplements supplied by the manufacturer (EGM-2), which contained 5% fetal bovine serum (FBS), hydrocortisone, human endothelial growth factor, vascular endothelial cell growth factor, human fibroblast growth factor basic, long(R3)-insulin-like growth factor-1, ascorbic acid and antibiotics. Medium was changed every 48 h until confluence. The HLMVE were seeded in the apical compartment of a 0.4 μm pore size transwell support (Corning) at a cell density of 5 × 10^5 cells/cm2. The cells were cultured for 10 days with medium changed in both the apical and basolateral compartments every 48 h. When the transendothelial electrical resistance reached 25 ohm·cm2, the cells were used for virus infection experiments [18].
Viruses
We used HPAI H5N1 virus (A/Hong Kong/483/97), a virus isolated from a patient with fatal influenza H5N1 disease in Hong Kong in 1997, and A/Hong Kong/54/98 (H1N1) as a representative seasonal influenza virus for our comparative studies. Viruses were initially isolated and subsequently maintained in Madin-Darby canine kidney (MDCK) cells. They were cloned by limiting dilution and seed virus stocks were prepared in MDCK cells. Virus infectivity was assessed by titration of the 50% tissue culture infectious dose (TCID50) in MDCK cells. The influenza H5N1 virus used in this study was handled in a biosafety level 3 (BSL-3) facility in the Department of Microbiology, The University of Hong Kong.
Virus infection of cells
Virus inoculation procedures were designed to determine the role of cell polarity in direction of infection, virus release and cytokine secretion. Polarized type I-like alveolar epithelial cells and HLMVE were seeded on the apical surface of the transwell membrane and infected from the apical or basolateral surface, respectively. During apical infection, 200 μl of virus was added into the apical compartment of the transwell (Figure 1A), while during basolateral infection, 80 μl of virus was added onto the transwell membrane with the transwell oriented upside down (Figure 1B). The normal orientation of the transwell in the apical infection set-up was resumed at 1 h after virus inoculation and the washing steps. In this series of experiments, we used an MOI of 0.01 to evaluate the difference between the two routes of infection in terms of release of newly formed virus, and an MOI of 2 to determine the percentage of cell infection and cytokine release.

[Figure 1: Representation of the transwell insert set-up during the influenza virus infection experiments; panels A and B show the apical and basolateral inoculation configurations, respectively.]
Virus replication analysis
Evidence of viral infection was established by a) assaying viral matrix RNA at 1, 3, 6 and 24 h post infection by quantitative RT-PCR, b) viral antigen expression by immunofluorescence staining with mouse anti-influenza nucleoprotein and matrix antibody conjugated with FITC (DAKO Imagen, Dako Diagnostics Ltd, Ely, UK) and c) assaying infectious virus in cell culture supernatant by TCID 50 assay to demonstrate complete virus replication.
Quantification of cytokine and chemokine mRNA by real-time quantitative RT-PCR
DNase-treated mRNA was extracted from the infected cell models at 1, 3, 6 and 24 h post infection using the RNeasy Mini kit (Qiagen, Hilden, Germany). cDNA was synthesized from mRNA with Oligo-dT primers and Superscript III reverse transcriptase (Invitrogen) and quantified by real-time quantitative PCR analysis with a LightCycler (Roche, Mannheim, Germany). The gene expression profiles for cytokines (interferon beta (IFN-β), IL-6) and chemokines (IP-10, RANTES) and the viral matrix gene were quantified and normalized using the housekeeping gene product β-actin mRNA.
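For illustration, normalization of target-gene expression to a housekeeping gene such as β-actin is commonly performed with the 2^-ΔΔCt method. The sketch below assumes that scheme, which the text does not state explicitly, and the numeric values in the usage comment are invented.

```python
def fold_change_ddct(ct_gene, ct_actin, ct_gene_mock, ct_actin_mock):
    """Relative mRNA expression by the 2^-ddCt method (an assumed,
    commonly used normalization; the text states only that targets
    were normalized to beta-actin mRNA)."""
    d_ct_infected = ct_gene - ct_actin        # normalize to housekeeping gene
    d_ct_mock = ct_gene_mock - ct_actin_mock
    dd_ct = d_ct_infected - d_ct_mock         # express relative to mock infection
    return 2.0 ** (-dd_ct)

# e.g. fold_change_ddct(22.1, 16.0, 28.4, 16.2) gives the fold induction
# of a target gene over mock (Ct values here are purely illustrative).
```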
Quantification of cytokine and chemokine proteins by ELISA
The concentrations of IP-10, RANTES, IL-6 and IFN-β proteins in the culture supernatants of influenza virus-infected type I-like alveolar epithelial cells were measured by specific ELISA assays (R&D Systems, Minneapolis, MN, USA). Samples of culture supernatant were irradiated with ultraviolet light (CL-100 Ultra Violet Crosslinker) for 15 min to inactivate any infectious virus before the ELISA assays were done. Previous experiments had confirmed that the dose of ultraviolet light used did not affect cytokine concentrations as measured by ELISA (data not shown).
Lectin histochemistry
Type I-like alveolar epithelial cells cultured in transwell inserts and HLMVE cell pellets were fixed with 10% formalin and sectioned at 5 μm, followed by lectin histochemistry as published previously [19]. The cells were microwaved in 10 mM citrate buffer pH 6.0 at 95 °C for 15 min, then blocked with 3% H2O2 in TBS for 12 min and with an avidin/biotin blocking kit (Vector). They were then incubated with 1:100 HRP-conjugated Sambucus nigra agglutinin (SNA) (EY Laboratories), 1:100 biotinylated MAL-I and MAL-II (Vector) and digoxigenin-conjugated MAA (Roche) for 1 h at room temperature (RT), blocked with 1% bovine serum albumin for 10 min at RT, and then incubated with strep-ABC complex (Dako Cytomation, K-0377) diluted 1:100 for 30 min at RT. Development was performed using the AEC substrate kit (Vector) at RT for 10 min, the nuclei were counterstained with Mayer's hematoxylin, and the sections were then dried and mounted with DAKO aqueous mount (Dako Cytomation). Duck intestine sections were used as controls, with and without pre-treatment with sialic acid (Sia) α2-3-specific neuraminidase (Glyko), to ensure that Sias were specifically targeted.
Statistical analysis
A two-tailed Student's t-test was used to compare the viral titers in the supernatants of influenza virus-infected cells between the early and late time points. The quantitative cytokine and chemokine mRNA and protein expression profiles of mock-, influenza H1N1- and H5N1 virus-infected cells were compared using one-way ANOVA, followed by the Bonferroni multiple-comparison test. Differences were considered significant at p < 0.05.
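For illustration, comparisons of this kind can be reproduced with standard SciPy calls; the sketch below uses invented measurements and, in place of a dedicated Bonferroni routine, simply multiplies each pairwise p-value by the number of comparisons.

```python
import numpy as np
from scipy import stats

# Invented log10 viral titers for illustration only
early = np.array([2.1, 2.3, 2.0])
late = np.array([4.8, 5.1, 4.9])
t_stat, p_val = stats.ttest_ind(early, late)   # two-tailed by default

# Invented cytokine levels for mock-, H1N1- and H5N1-infected cells
mock = np.array([1.0, 1.2, 0.9])
h1n1 = np.array([3.1, 2.8, 3.3])
h5n1 = np.array([6.2, 5.9, 6.5])
f_stat, p_anova = stats.f_oneway(mock, h1n1, h5n1)

# Bonferroni correction: scale each pairwise p-value by the comparison count
pairs = [(mock, h1n1), (mock, h5n1), (h1n1, h5n1)]
p_adj = [min(stats.ttest_ind(a, b).pvalue * len(pairs), 1.0) for a, b in pairs]
print(p_val, p_anova, p_adj)
```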
Sialic acid receptor distribution on polarized type I-like alveolar epithelial cells and HLMVE cells
Lectin histochemistry on the primary cultures of human type I-like alveolar epithelial cells using SNA and MAA showed that MAL-II (which recognizes the accepted avian influenza receptor Siaα2-3Galβ1-3GalNAc) bound strongly to the type I-like alveolar epithelial cells (Figure 2A-2D). Staining with SNA, which recognizes the human influenza receptor Siaα2-6, was not prominent on the type I-like alveolar epithelial cells, a result similar to the report by Shinya et al [5]. Lectin histochemistry on the HLMVE cells showed binding of both SNA (Figure 2E) and MAL-II (Figure 2F), which agrees with previous reports [15].

Figure 2. Lectin binding assay to determine the Sia distribution on polarized type I-like alveolar epithelial cells and HLMVEs.

Following basolateral infection, the titers of virus shed at the apical aspect of the cells were higher (p = 0.017) following influenza H5N1 virus infection than following influenza H1N1 virus infection (Figure 5C).
Polarity of influenza virus infection and replication in lung microvascular endothelial cells
In order to better understand the implications of the effects of cell polarity on virus infection in relation to virus dissemination via the systemic circulation, we investigated the replication of influenza H5N1 and H1N1 viruses in polarized HLMVE cells. There was no convincing evidence of influenza H1N1 virus replication when HLMVE cells were infected via either the apical or basolateral aspect (Figure 6A). Interestingly, infection of HLMVE cells with influenza H5N1 virus via either the apical or basolateral aspect resulted in virus release from both the apical and basolateral aspects of the cell (p < 0.05) (Figure 6B).
Expression of cytokines and chemokines in type I-like alveolar epithelial cells infected with influenza virus through apical and basolateral routes
We next investigated the effects of cell polarity on cytokine and chemokine induction by influenza H1N1 and H5N1 viruses in infected primary human type I-like alveolar epithelial cells. Specifically, we wanted to determine whether the apical and basolateral infection routes led to qualitative or quantitative differences in the profile of cytokines induced. The efficiency of infection of the cells by the apical route was 70-100%, and by the basolateral route 30-50%. As previously reported by us, there was a trend for influenza H5N1 virus infection to lead to increased levels of cytokine mRNA at 24 h post infection when compared with influenza H1N1 virus, irrespective of whether infection occurred via the apical (black bars) or basolateral (grey bars) aspect (Figure 7). The differences between influenza H5N1 and H1N1 viruses achieved statistical significance for IFN-β following apical (p < 0.05) and basolateral (p < 0.01) infection (Figure 7A), for IL-6 following apical infection (p < 0.01) (Figure 7B), and for IP-10 following basolateral infection (p < 0.05) (Figure 7D). While there was a trend suggesting that the chemokine gene RANTES was hyper-induced by influenza H5N1 virus when compared with influenza H1N1 virus, statistical significance was not achieved (p = 0.08 for apical infection and p = 0.14 for basolateral infection) (Figure 7C). Similar cytokine and chemokine expression profiles were observed at 3 h and 6 h post infection in influenza virus-infected type I-like alveolar epithelial cells (data not shown). Inactivation of the virus by ultraviolet irradiation prior to infection of the type I-like alveolar epithelial cells abolished cytokine induction (data not shown), suggesting that virus replication was required for cytokine induction. Furthermore, even an increase in the MOI of influenza H1N1 virus up to 5 did not raise cytokine mRNA expression to levels similar to those induced by influenza H5N1 virus (data not shown). Broadly, the data from apical infection are consistent with our previous finding of differential induction of proinflammatory cytokines by influenza H5N1 virus in alveolar epithelial cells [10]. We have thus confirmed that these differences also apply in polarized alveolar epithelium following infection via the basolateral aspect.
Polarity of cytokine secretion in influenza H5N1 virus infected alveolar epithelium
We next investigated whether there was polarity in the secretion of cytokine proteins from type I-like alveolar epithelial cells infected by influenza H1N1 and H5N1 viruses. The concentrations of IP-10, RANTES and IFN-β were measured by ELISA in the apical and basolateral culture supernatants of type I-like alveolar epithelial cells infected by the apical route. In parallel with the gene expression profile, influenza H5N1 virus elicited more chemokine release from type I-like alveolar epithelial cells than influenza H1N1 virus at 24 h post infection. Influenza H5N1 virus-induced IP-10 protein secretion was found on the apical side of the polarized type I-like alveolar epithelial cells; this level was significantly higher than in mock-infected cells (p < 0.01) and influenza H1N1 virus-infected cells (p < 0.05). In addition, significantly more IP-10 was secreted from the basolateral side of influenza H5N1 virus-infected alveolar epithelial cells when compared with mock- and influenza H1N1 virus-infected cells (p < 0.05). In contrast, RANTES appeared only to be secreted at the apical aspect of influenza H5N1 virus-infected type I-like alveolar epithelial cells, although these results did not achieve statistical significance (Figure 8A). We failed to detect any IFN-β protein in the supernatant of type I-like alveolar epithelial cells after influenza virus infection (data not shown), but it should be noted that the limit of detection of the IFN-β ELISA was high (250 pg/ml), and this lack of assay sensitivity is likely responsible for the lack of detection.

Figure 8. Chemokine secretion from type I-like alveolar epithelial cells after influenza virus infection.
Discussion
In this study, we compared a human influenza H1N1 virus with a highly pathogenic influenza H5N1 virus to investigate whether there are differences in the polarity of virus infection and of host cytokine responses in human polarized type I-like alveolar epithelium. The basolateral aspect of the alveolar epithelium lies in close proximity to the basolateral aspect of the lung microvascular endothelial cells, raising the question of whether virus egressing from the basolateral aspect of the type I-like alveolar epithelial cells can infect the lung microvascular endothelial cells via their basolateral aspect. Alternatively, since influenza H5N1 virus is believed to disseminate systemically and has been detected in the peripheral circulation, it is relevant to understand whether endothelial cells can be infected via the apical aspect, thereby allowing virus in the blood circulation to infect these cells and traffic outward to infect the lung alveolar epithelium from the basolateral aspect. As the lung endothelium covers about 20% of the total surface area of the alveolar sac, the rest being covered by the alveolar epithelium [20], the tropism of influenza A virus in both alveolar epithelium and endothelium is important in the pathogenesis of influenza H5N1 virus infection in humans.
Previously, the polarity of influenza virus infection and release had only been studied with a low pathogenic influenza A virus (subtype H3N2) and low pathogenic avian influenza A viruses (subtypes H5N3 and H4N6) [21] in human tracheo-bronchial airway epithelial cells. It was demonstrated that newly formed influenza virus was released from the apical surface of the respiratory epithelium [21,22]. However, the mouse-adapted influenza H1N1 virus (WSN strain) and Sendai virus have been shown to bud from both the apical and basolateral domains [23,24]. Vesicular stomatitis virus and retroviruses bud from the basolateral surface [25], and basolateral release has been linked to the establishment of systemic infection, as with mouse hepatitis virus (MHV) [26]. In mouse studies, MHV initially replicates in the nasal epithelium before being disseminated throughout the body. The basolateral release of MHV from epithelial cells into the animal's circulation was postulated as the first step in the establishment of a systemic infection.
We showed that both influenza H1N1 and H5N1 viruses preferentially infect type I-like alveolar epithelial cells from the apical surface, with higher levels of viral M gene expression as well as higher percentages of cells being infected when compared with basolateral infection (Figures 3 and 4). This is expected, since respiratory viruses need to be adapted to efficiently infect cells via the apical surface, which is the surface exposed to the respiratory lumen and therefore accessible to infection. With influenza H1N1 virus, release of newly formed virus was restricted to the apical aspect, irrespective of whether the alveolar epithelial cells were infected by the apical or basolateral route (Figure 5). Again, this is expected for a virus that appears not to disseminate beyond the lung. Similar observations have been reported in parainfluenza virus-infected epithelia, with the virus preferentially entering and exiting via the apical surface [27,28]. Given its propensity to disseminate beyond the lung, we initially hypothesized that influenza H5N1 virus might be released from both the apical and basolateral aspects. But this proved not to be the case, and H5N1 was similar to H1N1 in this respect, i.e., virus was released predominantly via the apical aspect, irrespective of the route of infection of the cell (Figure 5B). While the efficiency of infection via the basolateral aspect was lower than that via the apical aspect for both viruses, cells infected with H5N1 virus via the basolateral aspect yielded greater than 10-fold higher virus titers on the apical surface than cells comparably infected with H1N1.

Figure 5. Virus titer detected in the supernatant of influenza virus infected type I-like alveolar epithelial cells.
We then investigated virus replication in polarized lung microvascular endothelium, which is anatomically in close proximity to the alveolar epithelium. There was no convincing evidence of replication of H1N1 virus in the polarized HLMVE cells (Figure 6A). In contrast, H5N1 virus could initiate productive replication in these cells from either aspect, and virus release also occurred from both the apical and basolateral aspects of the cell (Figure 6B). Although neither influenza H5N1 nor H1N1 virus is efficiently released via the basolateral aspect of the alveolar epithelium, virus replication is likely to lead to weakening of the tight junctions and to cell death, thus providing these viruses access to the underlying tissues and the basolateral aspect of microvascular endothelial cells. As the lung microvascular endothelium also comprises 20% of the total surface area of the alveoli [20], influenza virus entry via the basolateral aspect of HLMVE cells, replication within them, and release from the apical aspect of these cells could lead to viremia and dissemination of infection. The fact that the hemagglutinin precursor (HA0) of HPAI H5N1 virus can be cleaved by proteases not restricted to the lung [29] further facilitates disseminated virus infection.
Cytokine and chemokine gene expression in type I-like alveolar epithelial cells after influenza virus infection
The fact that influenza H5N1 (but not H1N1) virus can infect HLMVE cells from the basolateral aspect would facilitate dissemination of this virus via the blood stream. Furthermore, the observation that HLMVE cells can be infected via the apical aspect and release virus from the basolateral aspect (as well as the apical side) suggests that systemically circulating virus can initiate infection in the lung parenchyma via the endothelial route. This is particularly relevant since recent studies in mice have shown that HPAI H5N1 virus experimentally injected into muscle can lead to fatal virus infection, with the virus establishing infection in the lungs and brain [30]. Furthermore, there has been speculation and anecdotal evidence that H5N1 virus can initiate infection via ingestion and the gastrointestinal tract [31]. The possibility that virus in the systemic circulation can establish a foothold in the lung is therefore an important observation.
Infected type I alveolar epithelial cells undergo either cytolytic or apoptotic death. The shedding of infected type I alveolar epithelial cells may further the inflammation, and the underlying interstitial cells may then be exposed to the alveolar lumen fluid, which contains high concentrations of virus. Reconstitution of the alveolar epithelial surface depends on the regeneration of type I alveolar epithelial cells from their progenitor, the type II alveolar epithelial cells [32]. However, an intact basement membrane is essential for epithelial cell proliferation to occur. An alveolar basement membrane denuded of alveolar epithelial cells will accelerate epithelial proliferation until the epithelial layer becomes confluent [33]. Nevertheless, type II alveolar epithelial cells could dominate the epithelial surface and prevent the reappearance of type I alveolar epithelial cells when injury signals from type I alveolar epithelial cells persist in the microenvironment [34].
Previous reports on the human lung epithelial cell line A549 infected with human influenza H3N2 virus showed low production of interferons and TNF-α [35]. We have previously shown that, compared with influenza H1N1 virus, influenza H5N1 virus differentially upregulated cytokine and chemokine gene expression in alveolar epithelial cells [10] and macrophages [36] in in vitro experiments, and that the profile of differentially upregulated cytokines corresponds with the elevated IP-10 and MIG levels in the serum of H5N1 patients [37]. Interestingly, IP-10 and MIG have been reported to play roles in the pathogenesis of tissue necrosis and vascular damage associated with certain EBV-positive lymphoproliferative processes [38]. These differences may dictate the downstream cytokine and chemokine response events and contribute to the unusual adverse pathology in H5N1 patients.
This study is the first demonstration of polarized secretion of cytokines in influenza H5N1 virus-infected alveolar epithelium. Secretion of chemokines, notably IP-10, from both the apical and basolateral aspects was found in influenza H5N1 virus-infected type I-like alveolar epithelial cells but not in influenza H1N1 virus-infected cells. This could potentially be relevant to the pathogenesis of influenza H5N1 virus infection in patients. The basolateral release of chemotactic IP-10 from influenza virus-infected type I-like alveolar epithelial cells would recruit lymphocytes from the capillary circulation into the alveoli. The binding of chemokines to receptors on the recruited leukocytes is specific and leads to a rapid change in the cell shape and behavior of the leukocyte subpopulation, making them capable of migrating from the blood through the vascular endothelium into the site of inflammation [39,40]. Previous studies on IP-10 and transendothelial migration indicated that IP-10 retained on endothelial cells could induce transendothelial chemotaxis of activated T cells [41]. Another investigation on the biological activity of human recombinant IP-10 further verified its chemotactic properties towards human peripheral blood monocytes and stimulated human peripheral blood T lymphocytes, but not neutrophils [42,43]. One of these studies used an endothelial cell adhesion assay to demonstrate the effect of IP-10 in potentiating T cell adhesion to endothelium [43]. The potent secretion of IP-10 from both the apical and basolateral sides of infected type I-like alveolar epithelial cells that we observed suggests a possible directional recruitment, and hence migration, of T cells and monocytes from the lung blood capillaries through microvascular transendothelial migration. Thus macrophages, differentiated from the recruited monocytes, may dominate the alveolar space [13,17,37], and T lymphocytes may occupy the interstitial space [13,17,37,44,45], as previously documented in autopsy reports of patients dying with influenza H5N1 virus infection. As cytokine-secreting macrophages accumulate within the alveoli, further augmentation of the cytokine and chemokine cascades may result. Since influenza H5N1 virus is reportedly resistant to the anti-viral effects of interferons and TNF-α [46] and can lead to delayed apoptosis of infected macrophages [47], clearance of the virus and resolution of lung inflammation would take longer than with seasonal influenza infection. Such prolonged inflammation would eventually result in the pathological features of diffuse alveolar damage, hemorrhage [48] and, finally, interstitial fibrosis [13,45,49,50], which are key observations in H5N1 patients.
Conclusion
In this study, we demonstrate that both influenza H1N1 and H5N1 viruses efficiently infect alveolar epithelial cells from both the apical and basolateral surfaces of the epithelium, but release of newly formed virus is mainly from the apical side of the epithelium. In contrast, influenza H5N1 virus, but not influenza H1N1 virus, efficiently infected polarized lung microvascular endothelial cells from either the apical or basolateral aspect and was also released from either aspect of these polarized cells. This is likely to be of relevance to pathogenesis and provides a possible explanation for entry into the respiratory tract via the blood stream, as proposed by those who suggest that the gastro-intestinal tract can be a portal of entry for this virus. In addition, the release of inflammatory mediators such as IP-10 may be an important contributor to the pathogenesis of the disease. More detailed studies on the mechanisms of alveolar epithelial cell damage and regeneration, and on the mediators involved in this process, will be important in understanding the pathogenesis of human H5N1 disease.
"Biology",
"Medicine"
] |
Dimensioning Cuboid and Cylindrical Objects Using Only Noisy and Partially Observed Time-of-Flight Data
One of the challenges of using Time-of-Flight (ToF) sensors for dimensioning objects is that the depth information suffers from issues such as low resolution, self-occlusions, noise, and multipath interference, which distort the shape and size of objects. In this work, we successfully apply a superquadric fitting framework for dimensioning cuboid and cylindrical objects from point cloud data generated using a ToF sensor. Our work demonstrates that an average error of less than 1 cm is possible for a box with the largest dimension of about 30 cm and a cylinder with the largest dimension of about 20 cm that are each placed 1.5 m from a ToF sensor. We also quantify the performance of dimensioning objects using various object orientations, ground plane surfaces, and model fitting methods. For cuboid objects, our results show that the proposed superquadric fitting framework is able to achieve absolute dimensioning errors between 4% and 9% using the bounding technique and between 8% and 15% using the mirroring technique across all tested surfaces. For cylindrical objects, our results show that the proposed superquadric fitting framework is able to achieve absolute dimensioning errors between 2.97% and 6.61% when the object is in a horizontal orientation and between 8.01% and 13.13% when the object is in a vertical orientation using the bounding technique across all tested surfaces.
Introduction
This work presents a method for dimensioning cuboid and cylindrical objects from noisy and partially occluded point cloud data that are acquired using a Time-of-Flight (ToF) sensor. Over the last decade, there has been an increase in the types of applications where ToF sensors are used, such as 3D scanning [1][2][3][4][5], drone positioning [6][7][8], robotics [9][10][11], and logistics [12][13][14]. In applications such as metrology and logistics, the ability to accurately determine the dimensions of an object is critical, for example, for part picking, packaging, and estimating shipping costs and storage needs. The quality of the depth information that is provided using modern three-dimensional (3D) sensors depends on the underlying technology. Existing works on dimensioning objects use low-noise, high-resolution 3D sensors such as structured light, stereo vision, and LiDAR to generate depth information [15][16][17]. These 3D sensors each have different tradeoffs and limitations that may not make them suitable for certain applications. For example, stereo vision systems typically have high software complexity, low depth accuracy, weak low-light performance, and limited range [18,19]. As another example, structured light systems typically have high material costs, slow response times, and weak bright-light performance [18,19]. Compared to other 3D sensor technologies, ToF sensors provide a low-cost, compact design with low software complexity, fast response time, and good low-light and bright-light performance that can be used in real time for generating depth information [18,19]. Despite these advantages, ToF sensing suffers from noise artifacts and issues such as multipath interference, which can distort the shape and size of objects in point clouds captured with a ToF sensor. Figure 1 illustrates an example of a point cloud of the side profile of a regular box without and with multipath interference. As shown in Figure 1, multipath interference distorts the profile of the object such that the planar surfaces appear curved. The profile of a cylindrical object also experiences the same type of distortion. The distorted appearance of the object poses a challenge when trying to determine the dimensions of the object. In addition, ToF sensors also suffer from other issues such as low resolution, flying pixels, and self-occlusions, which further make metrology challenging [20].
A common approach to reducing the effect of multipath interference is to place the ToF sensor directly above an object that is placed flat on a ground surface in a top-view fronto-parallel configuration [21,22]. In this configuration, only the top surface of an object is visible to the ToF sensor. An example of a cuboid object in such a top-view fronto-parallel configuration is shown in Figure 2. In this configuration, the x- and y-dimensions of the object can be readily determined with respect to the x-y ground plane. The z-dimension can also be readily determined by taking the difference in depth measurements between the top surface of the object and the ground plane surface. In this configuration, since the interface between the object and the ground surface is not visible, the effect of multipath interference between the object and the ground surface is greatly reduced. However, this configuration is only feasible for a limited number of environments and applications.
Using a perspective view of an object provides the most flexibility for applications, but processing point cloud data captured using a perspective view poses challenges due to the presence of self-occlusions, noise, and multipath interference [20]. The amount of noise and multipath interference is scene-specific and depends on the ground surface material an object is resting on and the pose of the object. One approach for dimensioning cuboid objects involves using plane fitting to identify the various surfaces of a box [23]. This approach typically requires that at least three surfaces of the box are visible to the ToF sensor and uses a RANSAC algorithm [24] on the point cloud of a box for detecting planes that correspond with surfaces of the box. Notably, this approach is limited to cuboid objects and cannot be applied to non-cuboid objects. Our results demonstrate that the number of points that are present on the surface of an object depends on the pose of the object with respect to the ToF sensor. This plane-fitting approach begins to break down as the number of points decreases on the surfaces of the object.
In this work, we developed a superquadric fitting framework for dimensioning cuboid and cylindrical objects using point cloud data that are generated with a ToF sensor that has a perspective view of an object. Our approach allows for the dimensioning of objects without requiring a top view of an object and without requiring that three sides of the object be visible to the ToF sensor. Previous works in robotic grasping applications have implemented a type of superquadric fitting to point cloud data for determining the general orientation and size of an object that is to be picked up with a robotic hand [25][26][27][28]. However, these works focused on obtaining rough dimensions for an object for grasping and did not attempt to quantify how accurately the dimensions of the object can be determined in various environments and orientations. Other existing works have employed superquadric fitting to point cloud data for identifying and classifying objects [29][30][31]. The focus of these works is to generally classify objects. These works also do not attempt to quantify how accurately the dimensions of an object can be determined. Further, these works typically rely on point cloud data that are obtained using other types of 3D sensor technologies, which do not suffer from the same types of noise artifacts and issues as a ToF sensor. These works do not suggest or provide any evidence that their approaches can be similarly applied to the same type of noisy point cloud data that are obtained from a ToF sensor. As discussed above, there is a tradeoff between the quality of the point cloud data that can be obtained and the low-cost, compact design of a ToF sensor that can be used in real time for generating depth information.
Our proposed framework uses a non-linear least squares regression to determine a superquadric shape that best fits the point cloud data for an object while limiting the overgrowth of the superquadric shape. Our experiments show that during the fitting process, the dimensions of the superquadric shape tend to overgrow in the direction where data points are missing for the surfaces of an object due to self-occlusion. This overgrowth leads to significant errors in the dimensions of the object. A previous study by Quispe et al. also noted that superquadric models tend to overgrow during the superquadric fitting process [32]. When a superquadric model overgrows during fitting, the dimensions of the superquadric shape extend beyond the point cloud of the object. This type of overgrowth leads to inaccurate dimension estimates. Quispe et al. used an approach inspired by Bohg et al. that involves identifying a symmetry plane within the point cloud of an object and then using projection to artificially generate surfaces that are missing within the point cloud [16,32]. In their approach, the point cloud data that are used are generated using stereo vision and RGBD cameras, which do not suffer from the same type of noise issues (e.g., multipath interference) as point cloud data from a ToF sensor. This approach is not suitable for the types of noisy point clouds that are generated using a ToF sensor because multipath interference distorts the point cloud of objects and makes identifying the surfaces of the object more difficult.
This work contributes to the state of the art by (1) developing a framework for dimensioning cuboid and cylindrical objects using enhanced superquadric fitting techniques and noisy point cloud data that are generated with a single ToF sensor. Our results show that a traditional superquadric fitting technique alone is insufficient for accurately determining the dimensions of an object using point cloud data that suffer from issues such as low resolution, self-occlusions, noise, and multipath interference. Our enhanced superquadric fitting techniques include bounding techniques for limiting superquadric overgrowth as well as considerations for the orientation of an object; (2) quantifying the accuracy of dimensioning cuboid and cylindrical objects on various types of ground surfaces using the noisy and partially observed point cloud data from a ToF sensor. The ground surfaces considered in this work include aluminum foil, black posterboard, white posterboard, and black felt. Each of these ground surfaces has a different level of infrared reflectivity; (3) quantifying the accuracy of dimensioning cuboid and cylindrical objects with different rotation angles and orientations with respect to the ToF sensor; and (4) quantifying the accuracy of dimensioning cuboid and cylindrical objects using various techniques for limiting overgrowth when fitting superquadric models. In applications such as logistics, the ability to accurately dimension objects is critical for operations like object grasping, packaging, storing, and transportation [33][34][35][36][37]. The tolerances for dimensioning errors vary from system to system. As these systems are further developed, their tolerances are typically reduced to optimize efficiency, object handling, and packaging [33][34][35][36][37]. As such, it becomes increasingly important to understand and quantify the performance and accuracy of object dimensioning techniques and the various factors that affect their performance. As discussed above, the presence of issues such as low resolution, self-occlusions, noise, and multipath interference all negatively impact and limit the usage of traditional techniques for dimensioning objects using point cloud data. This work contributes to the state of the art by quantifying the performance and accuracy of our proposed framework compared to traditional techniques for dimensioning objects using point cloud data. In addition, this work further contributes by quantifying how various environmental factors, such as ground surface material and object orientation, impact the performance and accuracy of our proposed framework.
This work uses a Texas Instruments OPT8241 ToF sensor for its experiments due to its widespread use in research and engineering applications. Since other ToF sensors operate using the same principles, which involve emitting and capturing reflected IR light, and experience the same types of issues, such as self-occlusions, noise, and multipath interference [20], our framework for dimensioning cuboid and cylindrical objects using point cloud data from a ToF sensor can also be generally applied to other types of ToF sensors.
This paper is organized as follows: Section 2 discusses our proposed superquadric fitting framework for dimensioning cuboid and cylindrical objects. Section 3 discusses the experimental setup and the numerical results. Finally, Section 4 provides concluding remarks.
Methodology
As discussed above, point cloud data that are obtained from a single ToF sensor typically suffer from issues such as low resolution, self-occlusions, noise, and multipath interference. These issues tend to distort the shape and size of objects, which creates challenges for dimensioning objects using a ToF sensor. In this work, we propose an approach that overcomes these challenges by fitting a parametric model to the point cloud data. Given a set of point cloud data points (x_w, y_w, z_w) from a ToF sensor, our proposed approach uses non-linear least squares fitting to determine the parametric model that best fits the point cloud data. Our experiments show that directly applying a parametric fit to the point cloud data without any preprocessing results in large estimation errors. To address this issue, in our proposed framework, we preprocess the point cloud data using the following steps: ground plane rectification, ground plane segmentation, and reorienting the point cloud within a new local coordinate system before performing the initial pose estimation. Using this approach, the subsequent parametric fitting shows significantly lower dimensioning errors. Figure 3 provides an overview of our methodology.

The key steps to the parametric fitting for dimensioning methodology are as follows: A ToF sensor is configured to capture a point cloud from a perspective view of an object within a scene. A ground plane rectification process is first performed to compensate for the perspective view of the ToF sensor. Ground plane segmentation is then performed to segment the object from the rest of the scene. An initial pose is then determined for the object, and the point cloud for the object is reoriented such that the object is axis-aligned and centered about a user-defined local origin. The reoriented point cloud is then fed into a superquadric fitting algorithm. As part of the fitting process, we use either a bounding technique or a mirroring technique to limit any overgrowth of the superquadric model. The bounding technique limits superquadric shape overgrowth by applying adaptive upper and lower bounds to the dimensions of the superquadric shape during the fitting process. The mirroring technique limits superquadric shape overgrowth by synthetically generating data points for the surfaces of the object that are missing due to self-occlusion. The mirroring technique generates a more complete point cloud representation of an object, reducing the number of missing surfaces that would allow the superquadric shape to overgrow during the fitting. The object dimensions can then be obtained based on the determined parameters of the superquadric shape that is fitted to the point cloud.
Ground Plane Rectification
Figure 4 is an example of an intensity image of a scene with fiducial markers and a box positioned on top of a black felt surface. The dimensions of the box are labeled as a1, a2, and a3. In this example, a1 = 149 mm, a2 = 223 mm, and a3 = 286 mm. In our experiments, ArUco fiducial markers are initially used to determine the orientation of the ground plane with respect to the ToF sensor. An ArUco marker is a 2D binary-encoded fiducial marker that can be used for camera pose estimation [38]. ArUco markers were selected due to their widespread use in various computer vision-based applications, such as robotics and automation. This approach does not require a pre-calibrated camera mounting system with respect to the object plane and is more robust and applicable to dynamic settings where a camera is mounted to a movable arm or robot system. Although ArUco markers were used in this work, a similar approach can be implemented using other suitable types of fiducial markers.

To determine the ground plane orientation, an ArUco marker is placed on the ground plane within the field of view of the ToF sensor. We then capture and process an intensity image of the scene using the OpenCV libraries [39] to detect the presence and orientation of the ArUco marker. The orientation of the ArUco marker provides information about the orientation of the ground plane with respect to the ToF sensor. Once the orientation of the ground plane is determined, the point cloud for the entire scene is rotated such that the ground plane is aligned with a horizontal x-y plane in our coordinate system. Although only one ArUco marker is required to determine the orientation of the ground plane, we use four markers for redundancy. The ArUco markers are also used to identify and crop the region of interest by positioning the object between the outermost ArUco markers. Figure 5 is an example of a point cloud after ground plane rotation correction. As shown in Figure 5, the ground plane in our point cloud data is substantially parallel with the horizontal x-y plane after ground plane rotation.
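A minimal sketch of this rectification step is shown below, assuming the pre-4.7 cv2.aruco API from opencv-contrib-python and an already calibrated sensor; camera_matrix, dist_coeffs, and the marker side length are assumed inputs rather than values from this work.

```python
import cv2
import numpy as np

def ground_plane_rotation(intensity_img, camera_matrix, dist_coeffs,
                          marker_len_m=0.05):
    """Estimate the rotation that aligns the ground plane with the x-y plane.

    Assumes the pre-4.7 cv2.aruco API; marker_len_m (marker side length in
    metres) is an assumed value, not one taken from the paper.
    """
    aruco = cv2.aruco
    dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)
    corners, ids, _ = aruco.detectMarkers(intensity_img, dictionary)
    if ids is None:
        raise RuntimeError("no fiducial markers detected")
    rvecs, _, _ = aruco.estimatePoseSingleMarkers(
        corners, marker_len_m, camera_matrix, dist_coeffs)
    R_marker, _ = cv2.Rodrigues(rvecs[0])  # marker (ground) frame -> camera frame
    # Applying the inverse rotation expresses points in the marker frame, whose
    # z-axis is the ground-plane normal, so the ground becomes parallel to x-y.
    return R_marker.T

# usage: rectified = points_cam @ ground_plane_rotation(img, K, dist).T
```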
Ground Plane Segmentation
Following ground plane rectification, the position of the ground plane is known. Thresholding is then performed using an offset threshold value to segment the object of interest from the ground plane. As shown in Figure 5, the point cloud for the ground plane appears noisy, primarily due to multipath interference near the interface between the ground plane and the faces of the object. In this example, additional noise is also caused by the fiducial markers. The noise from the fiducial markers and the multipath interference between the ground plane and the object are removed during segmentation, and the remaining point cloud corresponds to the object of interest. Any residual multipath interference can be reduced by increasing the offset threshold value. The tradeoff for this approach is that increasing the offset threshold value reduces the number of points on the object that are available for the parametric fitting process. Since the offset threshold value is known, this value is added back later as a correction term to one of the dimensions of the object after performing the parametric fitting. Figure 6 shows an example of the remaining point cloud data for a box after performing ground plane segmentation.
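A sketch of this thresholding step, assuming a rectified cloud whose ground plane lies at a known height, is given below; the 1 cm default offset is an assumed value, not the threshold used in our experiments.

```python
import numpy as np

def segment_object(points, ground_z=0.0, offset=0.01):
    """Keep points more than `offset` metres above the rectified ground plane.

    `offset` is the threshold described in the text (an assumed 1 cm here);
    it is later added back as a correction to the fitted height dimension.
    """
    return points[points[:, 2] > ground_z + offset]
```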
Initial Pose Estimation
To determine the initial pose of the object, the remaining point cloud of the object is reoriented such that the object is centered about a user-defined origin and axis-aligned. In this work, this reorientation is performed by flattening the point cloud in the direction of the ground plane to form a top-view representation of the object. Figure 7 illustrates an example of the result of the flattening process for the point cloud with respect to the ground plane. By flattening the point cloud in this manner, dense clusters of points appear, which correspond with the edges of the object. Once the point cloud has been flattened, a RANSAC ("RANdom SAmple Consensus") algorithm [24] is used to identify an edge of the object by fitting a line to one of the edges of the point cloud. The RANSAC process first identifies the dense clusters of points that correspond with the edges of the object and then fits a line to one of these clusters of data points. In the example shown in Figure 7, the line that was determined from the RANSAC process is represented as a solid blue line. Once the orientation of an edge of the object is known, the point cloud is rotated such that the object edges are axis-aligned. First, an angle is determined between the line that was determined from the RANSAC process and either the x- or y-axis of the coordinate system. Then, the entire point cloud is rotated about the vertical z-axis by the determined angle to axis-align the point cloud with the x-y plane. The axis-aligned point cloud is then shifted such that the center of the point cloud is at the local origin. An example of the result of this process is also shown in Figure 7.

Once the point cloud is centered at the origin, the initial rotation parameters (φ, θ, ψ) and translation parameters (p_x, p_y, p_z) are set to zero. Although the fitting process is capable of solving for non-zero translation and rotation parameters, our experiments showed improvements in terms of accuracy and speed by reorienting our point cloud and initially setting these parameters to zero. The fitting process will later adjust the translation and rotation parameters to best fit the point cloud data. The initial dimension parameters (a1, a2, a3) of the object are determined based on the difference between the minimum and maximum values of the point cloud along each axis.
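The sketch below illustrates this step under simplifying assumptions: the cloud is flattened by dropping the z-coordinate, a basic two-point RANSAC line fit recovers the dominant edge direction, and the cloud is centred on its mean (the exact centring convention is not restated here).

```python
import numpy as np

def ransac_edge_angle(xy, iters=500, inlier_tol=0.005, seed=0):
    """Angle of the dominant edge of the flattened cloud, via 2-point RANSAC."""
    rng = np.random.default_rng(seed)
    best_count, best_dir = 0, np.array([1.0, 0.0])
    for _ in range(iters):
        p1, p2 = xy[rng.choice(len(xy), 2, replace=False)]
        d = p2 - p1
        if np.linalg.norm(d) < 1e-6:
            continue
        d = d / np.linalg.norm(d)
        normal = np.array([-d[1], d[0]])               # unit normal to the line
        inliers = int((np.abs((xy - p1) @ normal) < inlier_tol).sum())
        if inliers > best_count:
            best_count, best_dir = inliers, d
    return np.arctan2(best_dir[1], best_dir[0])

def axis_align(points):
    ang = ransac_edge_angle(points[:, :2])        # flatten onto the ground plane
    c, s = np.cos(-ang), np.sin(-ang)             # rotate the edge onto the x-axis
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    aligned = points @ Rz.T
    return aligned - aligned.mean(axis=0)         # centre at the local origin
```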
Limiting Superquadric Growth
At this point, we can fit a superquadric model to the remaining point cloud data. However, our experiments showed poor performance resulting from overgrowth in the direction where data points are missing for the surfaces of an object due to self-occlusion. In the first set of experiments, the fitting was performed on the axis-aligned point cloud. In these experiments, adaptive upper and lower bounds were used to limit overgrowth of the dimensions of the superquadric model that was generated. In our experiments, tolerances of 5%, 2%, and 1% are applied to the dimensions estimated during the initial pose estimation process to determine the bounds. The upper and lower limits were used because the point cloud of the object is incomplete due to self-occlusions. For example, with cuboid objects, the point cloud data only have points on the surfaces of the box that are visible to the ToF sensor and do not include points that represent the bottom or the backside of the box. In some instances, when a superquadric model is fit to the point cloud with partial data, the superquadric grows beyond the points in the point cloud where the surface of an object is not represented. This occurs because the minimum in Equation (2) does not guarantee returning the smallest superquadric model that fits the point cloud data. In a second set of experiments, the fitting process is performed on a mirrored version of the axis-aligned point cloud. The mirrored point cloud is generated by creating a duplicate of the axis-aligned point cloud, flipping it 180° vertically about its centroid and the x-axis, and then rotating it 90° about the vertical z-axis. The duplicate point cloud is then merged with the original axis-aligned point cloud to form the mirrored version of the point cloud. By using the mirrored point cloud, data points for any non-visible sides of the object are synthetically created, and bounds are no longer necessary to limit overgrowth in the fitting process. An example of the mirroring technique is shown in Figure 8.

Figure 8. Profile view of a point cloud of a cuboid before (left) and after (right) applying the mirroring technique for limiting superquadric overgrowth, in meter units. In the right plot, the red data points correspond with the initial point cloud of the cuboid and the blue data points correspond with the additional data points generated by applying the mirroring technique.
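A sketch of the mirroring operation, as we read it from the description above, is given below; the exact axis conventions are assumptions.

```python
import numpy as np

def mirror_cloud(points):
    """Synthesize self-occluded surfaces by duplicating and flipping the cloud."""
    centred = points - points.mean(axis=0)
    Rx180 = np.diag([1.0, -1.0, -1.0])            # 180 deg flip about the x-axis
    Rz90 = np.array([[0.0, -1.0, 0.0],            # 90 deg about the vertical z-axis
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 1.0]])
    duplicate = centred @ (Rz90 @ Rx180).T        # flip first, then rotate
    return np.vstack([centred, duplicate])        # merged, more complete cloud
```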
Non-Linear Least Squares Fi ing
Our work involves performing a non-linear least squares fit of a superquadric shape to the point cloud of an object to determine the dimensions of the object.A superquadric is a parametric shape that has parameters that describe the size, shape, and pose of the superquadric [40,41].In this work, a superquadric shape was selected because it can be morphed into a wide range of shapes [40].By adjusting the shape parameters, the superquadric shape can be morphed into a range of symmetric objects, which include cuboids and cylinders.The implicit form of the superquadric equation that is used in this work is given using the inside-outside function F, which is defined as the following: where variables ( , , ) are the data points from the captured point cloud, ( , , ) are the scaling dimensions along the x-, y-, and z-axis of the superquadric, respectively, (∈ , ∈ ) are shape parameters, and ( , , , , , , , , , , , ) are the twelve parameters of a homogenous transformation matrix that is the result of a rotation and a translation of the world coordinate plane [42,43].The eleven parameters that define the position and orientation of a superquadric are defined as ʌ = {a1, a2, a3, ∈1, ∈2, ϕ, θ, Ψ, px, py, pz} [42,43].Following our initial pose estimates, the initial rotation parameters (, , ) and translation parameters ( , , ) are set to zero, the initial object dimensions ( , , ) are determined as the difference between the minimum and maximum values of the point cloud along each axis, and the shape parameters (∈ , ∈ ) are initially set to an intermediate value of one.The final values of ʌ are determined using a least squares minimization process.We perform a least squares minimization using the Levenberg-Marquardt algorithm [44] to recover the parameter set ʌ that best fits the kth point, ( , , ), in the point cloud.The following expression describes the minimization process:
Non-Linear Least Squares Fitting
Our work involves performing a non-linear least squares fit of a superquadric shape to the point cloud of an object to determine the dimensions of the object. A superquadric is a parametric shape with parameters that describe its size, shape, and pose [40,41]. In this work, a superquadric shape was selected because it can be morphed into a wide range of shapes [40]. By adjusting the shape parameters, the superquadric can be morphed into a range of symmetric objects, including cuboids and cylinders. The implicit form of the superquadric equation used in this work is given by the inside-outside function F, defined as:

$$F(x_w, y_w, z_w) = \left[\left(\frac{n_x x_w + n_y y_w + n_z z_w - p_x n_x - p_y n_y - p_z n_z}{a_1}\right)^{2/\epsilon_2} + \left(\frac{o_x x_w + o_y y_w + o_z z_w - p_x o_x - p_y o_y - p_z o_z}{a_2}\right)^{2/\epsilon_2}\right]^{\epsilon_2/\epsilon_1} + \left(\frac{a_x x_w + a_y y_w + a_z z_w - p_x a_x - p_y a_y - p_z a_z}{a_3}\right)^{2/\epsilon_1} \quad (1)$$

where (x_w, y_w, z_w) are the data points from the captured point cloud; (a_1, a_2, a_3) are the scaling dimensions along the x-, y-, and z-axes of the superquadric, respectively; (ε_1, ε_2) are shape parameters; and (n_x, n_y, n_z, o_x, o_y, o_z, a_x, a_y, a_z, p_x, p_y, p_z) are the twelve parameters of a homogeneous transformation matrix that is the result of a rotation and a translation of the world coordinate plane [42,43]. The eleven parameters that define the size, shape, and pose of a superquadric are Λ = {a_1, a_2, a_3, ε_1, ε_2, φ, θ, Ψ, p_x, p_y, p_z} [42,43].

Following our initial pose estimates, the initial rotation parameters (φ, θ, Ψ) and translation parameters (p_x, p_y, p_z) are set to zero, the initial object dimensions (a_1, a_2, a_3) are determined as the difference between the minimum and maximum values of the point cloud along each axis, and the shape parameters (ε_1, ε_2) are initially set to an intermediate value of one. The final values of Λ are determined using a least squares minimization with the Levenberg-Marquardt algorithm [44], which recovers the parameter set Λ that best fits each kth point (x_k, y_k, z_k) in the point cloud:

$$\min_{\Lambda} \sum_{k} \left[\sqrt{a_1 a_2 a_3}\left(F^{\epsilon_1}(x_k, y_k, z_k) - 1\right)\right]^2 \quad (2)$$

where the coefficient √(a_1 a_2 a_3) is used to recover the smallest superquadric, and the exponent ε_1 promotes faster convergence by making the error metric independent of the shape parameters [43].
After determining the parameters (a_1, a_2, a_3) corresponding with the dimensions of the object, the offset threshold value that was used during the ground segmentation step is added to the vertical dimension of the object to compensate for the portion of the point cloud that was removed during the ground plane segmentation process. Through this process, the full dimensions of the object are recovered. Figure 9 is an example of a superquadric shape generated from the parameters determined with the non-linear least squares fitting, overlaid with the corresponding object of interest (i.e., a box) within an intensity image.
Figure 9. Example of a determined superquadric shape (shown in green) overlaid with a box in an intensity image. The superquadric shape is represented as a series of points corresponding with data points on the surface of the superquadric shape.
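To make the fitting procedure of Equations (1) and (2) concrete, here is a minimal Python/SciPy sketch of a bounded superquadric fit on an axis-aligned, centered point cloud. The authors implemented theirs in Matlab; omitting the pose parameters (the cloud is assumed axis-aligned), folding points into one octant via symmetry, initializing the dimensions as half-extents rather than the full min-max span, and the specific bound limits on the shape exponents are all simplifying assumptions of this sketch, not the paper's exact implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def superquadric_residuals(params, points):
    """Residuals sqrt(a1*a2*a3) * (F^eps1 - 1), per Equation (2).

    params: [a1, a2, a3, eps1, eps2]; rotation/translation are omitted
    because the cloud is axis-aligned and centered beforehand.
    points: (N, 3) array of x, y, z coordinates.
    """
    a1, a2, a3, e1, e2 = params
    x, y, z = np.abs(points).T  # symmetry lets us fold points into one octant
    f = ((x / a1) ** (2 / e2) + (y / a2) ** (2 / e2)) ** (e2 / e1) \
        + (z / a3) ** (2 / e1)
    return np.sqrt(a1 * a2 * a3) * (f ** e1 - 1.0)

def fit_superquadric(points, tol=0.05):
    """Bounded fit limiting dimension overgrowth to +/- tol (e.g., 5%)."""
    centered = points - points.mean(axis=0)
    dims0 = (centered.max(axis=0) - centered.min(axis=0)) / 2.0  # half-extents
    x0 = np.concatenate([dims0, [1.0, 1.0]])  # eps1 = eps2 = 1 initially
    lower = np.concatenate([dims0 * (1 - tol), [0.1, 0.1]])
    upper = np.concatenate([dims0 * (1 + tol), [2.0, 2.0]])
    # Bounded problems require the TRF solver; drop bounds for method="lm".
    result = least_squares(superquadric_residuals, x0, args=(centered,),
                           bounds=(lower, upper), method="trf")
    return result.x
```

The mirroring variant would instead pass the output of a mirror_point_cloud step and omit the bounds argument, since the synthetic back-side points themselves constrain overgrowth.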
Hardware Configuration
In our experiments, we use a single ToF sensor, TI OPT8241 [45], to generate depth information for a single object. This sensor is able to output both grayscale intensity images and point clouds. This sensor offers a resolution of 320 × 240 with a horizontal field-of-view of 74.4°. In each experiment, the object is located 1.5 m from the ToF sensor. The ToF sensor is positioned on a tripod with a downward perspective view of between 35° and 45° of the object. The physical dimensions of the boxes are between 122 mm and 365 mm in length. The cylinder has a height of 204 mm and a diameter of 155 mm. In each experiment, an object is placed on different ground plane surfaces that each have different levels of infrared reflectivity and multipath interference. The ground plane surfaces used in our experiments are aluminum foil, black poster board, white poster board, and black felt. For each ground plane surface, the object is rotated between angles of 30° and 75° with respect to the ToF sensor.
For capturing intensity images and point cloud data, we used the Voxel Viewer from Texas Instruments [46]. During the data collection period, an average of 400 frames of intensity images and point cloud data were collected for each experiment configuration. For implementing our framework, we used Matlab R2020b and the OpenCV libraries [39,47]. The OpenCV libraries were primarily used for identifying fiducial markers in our ground plane rectification process.
Dimensioning Performance for Cuboid Objects Based on the Ground Plane Surface
Table 1 shows the average of absolute errors for each of the ground surfaces using various dimensioning techniques. As the box is rotated with respect to the ToF sensor, an error for each dimension of the box is computed. The absolute errors for each dimension are then averaged to determine the average of the absolute errors at each rotation angle of the box. The average of absolute errors is the average of the errors across all the rotation angles for the box from 30° to 75°. Table 1 shows that a traditional approach of fitting an ellipsoid to the point cloud results in large errors compared to the superquadric fit. Errors can be further reduced by using bounding or mirroring techniques to limit the overgrowth of the superquadric model during the fitting process. Both techniques rely on fitting the superquadric shape after the point cloud is axis-aligned, which reduces the variation in the rotation parameters of the superquadric shape and improves performance. Our results show that the impact of multipath interference from the ground planes having different levels of infrared reflectivity is negligible using either technique.
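For clarity, the error aggregation described above can be expressed in a few lines. This is a sketch; the array shapes and names are our assumptions, not the authors' code:

```python
import numpy as np

def average_absolute_error(estimated, truth):
    """Average of absolute dimension errors across rotation angles.

    estimated: (num_angles, 3) estimated dimensions at each rotation angle.
    truth: (3,) ground-truth object dimensions.
    Per the text: absolute errors over the three dimensions are averaged
    at each angle, then averaged across all angles (30 to 75 degrees).
    """
    per_angle = np.abs(np.asarray(estimated) - np.asarray(truth)).mean(axis=1)
    return per_angle.mean()
```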
Dimensioning Performance for a Cuboid Object Based on Object Orientation
Figure 10 shows the average of absolute errors for each ground plane surface at each rotation angle of the box using the bounding technique and the mirroring technique. Figure 11 shows the average of absolute errors for all of the ground plane surfaces at each rotation angle of the box with respect to the ToF sensor using the bounding technique and the mirroring technique.
In Figures 10 and 11, the box is initially positioned at an angle of 45° with respect to the ToF sensor such that two side surfaces and the top surface of the object are visible to the ToF sensor. The box is then rotated about the vertical z-axis with respect to the ToF sensor to determine the effect of the rotation angle of the box with respect to the ToF sensor. The results from Figures 10 and 11 show that the smallest of the absolute errors for the various ground plane surfaces generally occur when the box is rotated about 45° with respect to the ToF sensor. In this orientation, both vertical faces are most visible to the ToF sensor, which results in more data points on the box surfaces being available for processing. As the box rotates away from 45°, one of the vertical sides of the box becomes less visible, resulting in fewer data points on the surfaces of the box being available for processing. Our results show that the average error increases as the number of data points decreases. In an extreme case, when the box is rotated head-on with the ToF sensor at 0° or 90°, only one vertical surface and the top surface are visible to the ToF sensor, which results in the fewest number of data points on the surfaces of the box. Although the average error increases as the number of data points on the surfaces of the box decreases, our experiments show that superquadric shapes can be used even when only two surfaces of the object are visible to the ToF sensor. While our framework can be applied when only one vertical surface and the top surface are visible to the ToF sensor, our results show that the average error further increases as the number of data points on the surfaces of the box decreases. In particular, the mirroring technique experiences a higher average error compared to the bounding technique since the mirroring technique relies on surface data points for synthetically creating the non-visible sides of an object.
Dimensioning Performance for a Cuboid Object Using Bounding Technique
Table 2 shows the average of absolute errors for each of the ground plane surfaces with a box rotation of 45° with respect to the ToF sensor using the bounding technique and the mirroring technique for fitting a superquadric shape to the point cloud data. Our results show that the bounding technique can provide lower dimensioning errors compared to the mirroring technique. In our experiments, we observed that the mirroring technique tends to result in underfitting the superquadric shape to the point cloud data, which results in larger dimensioning errors.
Dimensioning Performance for a Cylindrical Object Using Bounding Technique
Based on the findings from the cuboid object experiments, we conducted similar experiments on a cylindrical object using the bounding technique, since it provided better performance than the mirroring technique. Table 3 shows the average of absolute errors for each of the ground surfaces for different orientations of a cylinder using the bounding technique. As the cylinder is rotated with respect to the ToF sensor, an error for each dimension of the cylinder is computed. The absolute errors for each dimension are then averaged to determine the average of the absolute errors at each rotation angle of the cylinder. Table 3 shows that dimensioning the cylindrical object in the vertical orientation resulted in larger dimensioning errors compared to when the cylindrical object was in the horizontal orientation. In both orientations, our experiments showed an increase in the amount of missing data for cylindrical objects compared to cuboid objects due to the curved surfaces of the cylindrical object. These curved surfaces deflected more infrared light away from the ToF sensor, which resulted in less surface data being collected by the ToF sensor. Our experiments also showed that when the cylindrical object is in the horizontal orientation, the proximity of the ground plane to the curved surface of the cylinder reduces the amount of surface data that is lost compared to when the cylindrical object is in the vertical orientation. Figure 12 illustrates an example of the point cloud data for a cylindrical object and the corresponding superquadric fit to the point cloud data.
Conclusions
In this work, we developed a framework that can be used for dimensioning cuboid and cylindrical objects from point cloud data generated using a ToF sensor despite issues such as low resolution, self-occlusions, noise artifacts, and multipath interference. This work also quantifies the impact of various model fitting techniques, the pose of an object, the shape of an object, and the ground surface material under an object on dimensioning accuracy.
Our results show that the performance of dimensioning a cuboid object increases when more surfaces and surface area of the object are visible to the ToF sensor. Conversely, the performance of dimensioning a cuboid object decreases when fewer surfaces and less surface area are visible to the ToF sensor. In addition, the performance of dimensioning a cylindrical object increases when the object is in a horizontal configuration as opposed to a vertical configuration. Our results also showed that dimensioning performance improves when a bounding technique is employed in conjunction with the parametric fitting process to reduce overgrowth of the superquadric shape. Notably, the bound-based approach provides better performance compared to a mirroring-based approach that synthetically creates missing point cloud information for an object.
This work can be extended to examine the use of multiple ToF sensors to further improve dimensioning accuracy. Future work may also extend the parametric fitting approach to dimension more complex shapes with non-convex surfaces.
Figure 1 .
Figure 1.Point cloud of the side profile of a cuboid object without multipath interference (left) and with multipath interference (right).
Figure 2 .
Figure 2. Top-view fronto-parallel configuration of a cuboid object on a ground plane surface.
Figure 3 .
Figure 3. Process for performing parametric fitting for dimensioning of an object.
Figure 4 .
Figure 4. Intensity image of a scene with ArUco markers and a box positioned on a black felt surface.The region of interest (ROI) for our object is represented by the red bounding box.
Figure 5 .
Figure 5. Front view of a point cloud of a scene with a box on a ground plane surface after ground plane correction in meter units.An offset threshold for the ground plane segmentation process is represented by the solid black horizontal line.The color of a data point in the point cloud corresponds with a distance between the ToF sensor and a surface in the scene.Darker colors (e.g., dark blue) represent surfaces closer to the ToF sensor and lighter colors (e.g., yellow) represent surfaces further away from the ToF sensor.
Figure 6 .
Figure 6.Example of the remaining point cloud data for a box after performing ground plane segmentation in meter units.
Figure 7 .
Figure 7. Example of reorienting the point cloud data for a box before (left) and after axis alignment (right) in meter units.In the left plot, the blue line corresponds with the orientation for an edge of the box that was determined from the RANSAC process.
Figure 8 .
Figure 8. Profile view of a point cloud of a cuboid before (left) and after applying the mirroring technique for limiting superquadric overgrowth (right) in meter units.In the right plot, the red data points correspond with the initial point cloud of the cuboid and the blue data points correspond with the additional data points that were generated by applying the mirroring technique.
Figure 10 .
Figure 10.Dimension errors at each rotation angle of the box.Each marker type (i.e., o, +, *, x) corresponds with a ground surface material.Solid or dashed lines are used with the corresponding marker type based on whether the bounding technique or the mirroring technique was applied, respectively.
Figure 11 .
Figure 11.Average dimension errors of the box across all ground plane surfaces.The blue data points correspond with error measurements obtained using the bounding technique.The red data points correspond with error measurements obtained using the mirroring technique.
Figure 12 .
Figure 12. Profile view (top left) and perspective view (top right) of a point cloud for a cylindrical object, and profile view of the superquadric fit (bottom left) and perspective view of the superquadric fit (bottom right) to the point cloud for the cylindrical object, in meter units. In the bottom plots, the red data points correspond with the point cloud for the cylindrical object, and the black and blue data points correspond with data points on the surface of the superquadric shape that was determined from the superquadric fitting process.
Table 1 .
Dimension errors for a box using various surfaces and fitting techniques.
Table 2 .
Dimension errors for various fitting techniques at a box rotation angle of 45°.
Table 3 .
Dimension errors for a cylinder using the bounding technique.
| 17,365.6 | 2023-10-24T00:00:00.000 | ["Computer Science", "Engineering", "Physics"] |
Arbuscular Mycorrhizal Fungi and Soil Enzyme Activities in Different Fonio Millet ( Digitaria exilis Stapf.) Agroecosystems in Senegal
In plant roots, arbuscular mycorrhizal fungi (AMF) are the most prevalent microsymbionts, and thereby provide many key ecosystem services to natural and agricultural ecosystems. Despite AMF's significance for the environment and the economy, little is known about the mycorrhizal inoculum potential and diversity of AMF associated with orphan African cereal crops, especially fonio millet (Digitaria exilis Stapf.) under field conditions. We hypothesized that the type of fonio millet agroecosystem influences the AMF density and distribution in soils. We therefore assessed the inoculum potential, density and diversity of AMF spores and the soil enzyme activities in five fonio millet agroecosystems belonging to three climatic zones (Sudanian, Sudano-Sahelian and Sudano-Guinean). By combining AMF spore identification from field-collected soils and trap culture, 20 species belonging to 8 genera (Acaulospora, Ambispora, Dendiscutata, Gigaspora, Glomus, Racocetra, Sclerocystis and Scutellospora) were identified. Glomus was the most represented genus with 8 species, followed by Gigaspora (5 species) and Acaulospora (2 species); the remaining genera were each represented by one species. Except for Ambispora, which was not found in the Sudanian area, all genera occurred in the three climatic zones. The abundance and diversity of AMF species and the FDA-hydrolytic and phosphatase activities varied between fonio millet agroecosystems as well as between climatic zones. Soil pH and soil texture were the variables that best explained the density and distribution of AMF spores. Our results contribute to paving the way towards the development of microbial engineering approaches for agronomic improvement of fonio millet.
INTRODUCTION
Fonio millet (Digitaria exilis Stapf.), also called "Acha", is one of the oldest cereal crops that originated in West Africa.1 It has very good prospects for semi-arid and upland areas, as it tolerates poor soils and drought conditions, and matures very quickly (6-8 weeks).2 Moreover, fonio grains contain higher amounts of amino acids (e.g., methionine and cystine),3 iron, potassium, calcium and phosphorus.4,5 However, fonio consumption is still low, particularly in urban areas where it has long been considered as an orphan crop.6,7[13] On the other hand, arbuscular mycorrhizal fungi (AMF) form the most prevalent microbial symbiotic association with the majority of terrestrial plant species.14,15 These beneficial soil microorganisms have a great potential for contributing to crop production and thereby helping to achieve sustainable global food security.16,17[23] Hence, harnessing the potential of AMF is considered a potentially less costly solution to increase crop yields.16,24,25 Meanwhile, abiotic and biotic factors influence the effects of AMF taxa on plant development and production.24,26,27 In addition, it has been reported that AMF abundance and diversity vary depending on the ecological zone,28,29 soil properties,[30][31][32] vegetation type33 and agricultural management practice.34 In Senegal, fonio millet is cultivated under various agricultural management practices across different climatic zones.35 However, little is known about the AMF density, diversity and distribution across fonio millet agroecosystems. We hypothesized that the type of fonio millet agroecosystem and pedoclimatic conditions might influence the AMF density and distribution in soils. We therefore assessed in this study the inoculum potential, density and diversity of AMF spores, and the soil enzyme activities, in five fonio millet agroecosystems belonging to three climatic zones (Sudanian, Sudano-Sahelian and Sudano-Guinean).
In each fonio millet agroecosystem, a sampling area of 100 × 100 m was delimited, soil was collected from 6 points at a depth of 0 to 25 cm and then the soil samples were pooled together in plastic bags and brought to the lab.The soil samples were sieved to <4 mm and kept at 4°C.
Characterization of soil properties
Physical (sand, silt, clay) and chemical (C, N, P, P2O5, C/N) characteristics of the five soils were analyzed at the Laboratory of Soil, Water and Plant of ISRA/CNRA at Bambey (Senegal) using standard methods. The soil physical characterization was carried out as described in Disale et al.36 The soil samples were placed in a mechanical shaker and sieved for 5 min through a series of sieves to determine the size of the different soil particles. The combustion system Thermo-Finnigan Flash EA 1112 (Thermo Finnigan, France) was used to quantify the total amount of soil carbon and nitrogen.37 The amount of soil total and available phosphorus was evaluated as described by Bibi and colleagues.38 Soil organic matter (OM) was determined from organic carbon as follows: OM (%) = organic carbon (%) × 1.724. Soil pH was measured in soil-water (1:2.5) suspensions.37
Determination of mycorrhizal inoculum potential
The mycorrhizal inoculum potential of each soil sample was evaluated by the dilution technique.39 Briefly, a quarter-fold dilution series (1, 1/4, 1/16, 1/64, 1/256 and 1/1024) was prepared by thoroughly mixing defined proportions of non-sterilized and sterilized soil. Then, 50 g of each diluted soil sample were placed in 5 pots, and 3 seeds of Zea mays L. (a highly mycotrophic plant) were sown per pot. The seedlings were thinned to one per pot, and all plants were kept in a glasshouse and watered with demineralized water. After 45 days, the roots of all plants from the dilution ratios were harvested and stained with Trypan blue as described in Founoune-Mboup et al.40 The presence of mycorrhizal infection in stained root segments was observed by light microscope at a magnification of 100×. The most probable number (MPN) of AMF propagules that can colonize plant roots was calculated as follows: log MPN = (x · log a) − K, where x represents the mean number of mycorrhizal plants across all dilution ratios, a (the dilution factor) = 4, and K is a constant given by the table of Fisher & Yates.41,42
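As an illustration of the MPN computation, here is a small Python sketch. The constant K must be read from the Fisher & Yates table for the actual experimental design; the value below is a placeholder, and a base-10 logarithm is assumed:

```python
import math

def most_probable_number(mean_infected, dilution_factor=4.0, k=0.681):
    """Estimate the AMF propagule MPN from a quarter-fold dilution series.

    mean_infected: mean number of mycorrhizal plants across all dilutions (x).
    dilution_factor: ratio between successive dilutions (a = 4 here).
    k: constant from the Fisher & Yates table; 0.681 is a placeholder --
       look up the value matching the number of pots and dilution levels.
    Returns the most probable number of propagules (log10 back-transformed).
    """
    log_mpn = mean_infected * math.log10(dilution_factor) - k
    return 10 ** log_mpn
```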
Identification and enumeration of AMF spores from field-collected soils
The extraction of AMF spores from each field-collected soil was carried out using the wet sieving and decanting method. 43Briefly, 100 g of soil were mixed with 1 L of water and decanted in a series of 400-50 µm sieves.Then, the material of 200, 100 and 50 µm pore-sieves was re-suspended in water and collected in tubes.Two solutions of sucrose at 20% and 60% were successively added and centrifugation was done at 3000 g/min for 3 minutes.After that, the supernatant containing AMF spores was poured in a 50 µm mesh and rinsed with tap water.AMF spores were grouped and counted according to their morphological characters and using a dissecting microscope.The International Culture Collection of Arbuscular and Vesicular-Mycorrhizal Fungi was used for AM fungi description (https://invam.wvu.edu/methods/spores/enumeration-of-spores).The AMF spore density and abundance of each AMF species were expressed per 100 g of soil.Three replicates were made for each composite soil from each sampled site.
Determination of AMF species composition from trap culture
The trap culture method makes it possible to confirm AMF spore identification from the field-collected soils (spores are sometimes damaged). In addition, this method can induce sporulation of AMF that would not naturally sporulate.44 The trap culture was performed with field-collected soils using maize (Zea mays L.) for 3 months under glasshouse conditions. For this purpose, each field-collected soil was mixed with an autoclaved nutrient-poor sandy soil from Sangalkam (1:2 v/v) to serve as culture substrate. For each agroecosystem, 9 pots of 1 kg were filled with the culture substrate and 3 seeds were sown per pot (9 replicates × 5 soil sites). Plants were watered every two days for three months. At the end of the experiment, plants were harvested and AMF spores were isolated from soils, enumerated and identified as previously described.43
Soil enzymatic activities
Enzymatic activities were determined from field-collected soils as described in Ndoye et al.28,45 The activity of FDA (3′,6′-diacetylfluorescein) hydrolysis was measured according to Patle et al.46 For this test, 15 ml of 60 mM potassium phosphate buffer (pH 7.6) and 0.2 ml of 1000 µg FDA ml−1 were added to 1 g of soil (with 3 replicates per soil origin). An enzyme blank without FDA and a substrate blank without soil were included. After 1 h of shaking on an orbital incubator at 30°C, the flasks were removed, and 1 ml of each suspension was transferred into an Eppendorf tube and mixed with 1 ml of acetone to stop the reaction. After centrifugation (10,000 rpm for 5 min), 1 ml of the supernatant was measured at 490 nm on a spectrophotometer (Ultrospec 3000, Pharmacia Biotech). The concentration of fluorescein was calculated using the standard calibration graph and expressed as µg FDA/g of soil/h. Acid and alkaline phosphatase activities were quantified using a colorimetric determination of the p-nitrophenol released after soil incubation with p-nitrophenyl phosphate as substrate (pNPP, 5 mM).47,48 Briefly, 25 mg of soil sample were mixed with 400 µl of buffered sodium p-nitrophenyl phosphate solution (pH 6 and pH 11 for acid and alkaline phosphatase, respectively) and 100 µl of pNPP (5 mM). An enzyme blank without pNPP and a substrate blank without soil were included. After incubation at 37°C for 1 h, the reaction was complexed with 100 µl of CaCl2 (0.5 M) and then stopped by adding 400 µl of NaOH (0.5 M) solution. After centrifugation (10,000 rpm for 5 min), 1 ml of the supernatant was measured at 400 nm on a spectrophotometer (Ultrospec 3000, Pharmacia Biotech). The amount of released p-nitrophenol was determined at 400 nm and expressed as µg pNPP/g of soil/h.
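The conversion from an absorbance reading to an activity value via a calibration curve can be sketched as follows. This is a generic illustration, not the authors' exact procedure; the standards arrays, function name, and the linear-fit choice are assumptions:

```python
import numpy as np

def activity_from_absorbance(absorbance, standard_abs, standard_conc,
                             soil_mass_g=1.0, incubation_h=1.0):
    """Convert a sample absorbance reading to an enzyme activity value.

    Fits a linear calibration curve through the standards (e.g., fluorescein
    or p-nitrophenol) and expresses activity per gram of soil per hour.
    """
    slope, intercept = np.polyfit(standard_abs, standard_conc, deg=1)
    conc = slope * absorbance + intercept       # µg in the assay volume
    return conc / (soil_mass_g * incubation_h)  # µg / g soil / h
```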
Statistical analysis
The Shapiro-Wilk and Levene tests were used to check the normality and homogeneity of variance, respectively. Comparisons of means were performed with the Kruskal-Wallis test instead of one-way ANOVA when these tests were significant (i.e., when the assumptions of ANOVA were not met). Statistical analyses were performed using the rcompanion, FSA, TH.data and pgirmess packages in R software.49,50 The significance threshold (p-value) was set at 0.05 in order to establish statistically significant differences between groups.
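The test-selection logic can be illustrated with SciPy (the authors worked in R; the data below are placeholders and the 0.05 thresholds are the conventional ones):

```python
from scipy import stats

# Shapiro-Wilk and Levene checks decide between ANOVA and Kruskal-Wallis;
# groups would be, e.g., spore densities per agroecosystem (placeholder data).
groups = [[957, 1012, 903], [2320, 2291, 2350], [3165, 3140, 3191]]

normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)
equal_var = stats.levene(*groups).pvalue > 0.05

if normal and equal_var:
    stat, p = stats.f_oneway(*groups)
else:
    stat, p = stats.kruskal(*groups)
print(f"test statistic = {stat:.2f}, p = {p:.4f}")
```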
For each agroecosystem, the AMF species richness, the Shannon-Wiener diversity index (H') and the Simpson dominance index (D) were determined. The Shannon-Wiener diversity index was calculated as H' = −Σ p_i ln(p_i), where p_i represents the proportion of individuals found in the ith species, estimated as n_i/N, n_i being the number of individuals in the ith species and N the total number of individuals. The inverse of the Simpson dominance index was evaluated as D = 1/Σ(n_i/N)², where n_i represents the number of individuals of the ith species and N the total number of individuals in the population.
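Both indices follow directly from the per-species spore counts, as in this short Python sketch (the authors used R; the function name and input format are ours):

```python
import numpy as np

def diversity_indices(counts):
    """Shannon-Wiener (H') and inverse Simpson (D) indices from spore counts.

    counts: array of per-species spore counts (n_i); N = sum(n_i).
    """
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()           # p_i = n_i / N
    p = p[p > 0]                        # ln(0) is undefined; zero counts add nothing
    shannon = -np.sum(p * np.log(p))    # H' = -sum(p_i ln p_i)
    inv_simpson = 1.0 / np.sum(p ** 2)  # D = 1 / sum(p_i^2)
    return shannon, inv_simpson
```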
Pearson correlation coefficients were determined to investigate the relations between AMF spore diversity and density, soil enzyme activities, and soil physicochemical characteristics. All statistical analyses were conducted in R v4.3.1.
Soil physicochemical characteristics
Our results showed that the sampled soils were sandy silt clay in Missirah and Mandina Findifeto and sandy clay silt in the other three sites with pH ranging from 5.84 to 6.98 (Table 1).For soil C, N, P, P 2 O 5 , and organic matter (OM) contents, the highest values were obtained in Togue, and the lowest values in Mandina Findifeto.On the contrary, soil pH follows an opposite trend, showing the lowest value in Togue and the highest in Mandina Findifeto.Considering the climatic zones, the Sudanian zone had the lowest C, N, P, P 2 O 5 and OM contents in soil and the highest value of soil pH, whereas the Sudano-Sahelian zone had intermediate values, as compared to the Sudano-Guinean zone (Table 1).
Inoculum potential, AMF species diversity and spore density in field-collected soils
Mycorrhizal soil infectivity of the 5 field-collected soils ranged from 5 to 71 propagules in 50 g of dry soil (Table 2). The Mandina Findifeto soil showed a higher MPN value (70.90 propagules per 50 g of soil) than the other field-collected soils (ranging from 5.20 to 12.60 propagules per 50 g of soil). These latter soils did not differ significantly in terms of MPN values. The lowest MPN was obtained in Sare Yoba, located in the same climatic zone as Mandina Findifeto.
The spore density of AMF species varied depending on soil origin (Table 3). For instance, Glomus sp. 2 and Glomus sp. 3 displayed their highest density in Koumbidia Soce (1017.33 and 770.67 spores/100 g of soil, respectively), while Scutellospora sp. aff. dipurpurascens and Dendiscutata sp. aff. heterogama had their highest density in Mandina Findifeto (97.67 and 920.67 spores/100 g of soil, respectively). The total spore density of AMF also differed significantly between the five sites, ranging from 957 to 3166 spores per 100 g of dry soil (Table 3). The density of AMF spores was significantly higher in Mandina Findifeto (3165.33 spores/100 g of soil) than in the other sites. It was followed by those of Koumbidia Soce and then Togue (2320.33 and 1379.67 spores/100 g of soil, respectively). The lowest AMF density was recorded in Sare Yoba (957.67 spores/100 g of soil).
Shannon index ranged from 1.63 to 1.29, while Simpson index varied from 0.77 to 0.67 and Hill index from 0.75 to 0.59.The highest diversity indices were observed in Sare Yoba in the Sudanian zone, whereas the soil from Koumbidia Soce in the Sudano-Sahelian zone showed the lowest diversity indices (Table 3).
Composition of AMF species from trap culture
A total of 20 AMF species belonging to 8 genera (Racocetra, Dendiscutata, Scutellospora, Gigaspora, Acaulospora, Ambispora, Glomus and Sclerocystis) were recorded from trap culture (Table 4).Of the 20 AMF species, 17 were found in Koumbidia Soce, 16 in Missirah, 15 in Togue, 15 in Sare Yoba and 11 in Mandina Findifeto.Only 10 out of the 20 AMF species were shared by the 5 sites, while two AMF species, Glomus sp.5 and Glomus sp.6, were recorded exclusively in Sare Yoba.Besides, 8 of the 20 AMF species revealed by trap culture were not detected by spore identification from field-collected soils (Tables 3 & 4).
Soil enzyme activities
Soil FDA-hydrolytic activity in Koumbidia Soce (0.53 µg FDA/g of soil/h) was significantly higher than those in Togue, Sare Yoba and Mandina Findifeto.The lowest FDA-hydrolytic activity was obtained in the Mandina Findifeto soil with 0.27 µg FDA/g of soil/h (Figure 5A).The activity of acid phosphatase was significantly higher in soils from Koumbidia Soce (188.98 µg pNPP/g of soil/h) and Missirah (187.14 µg pNPP /g of soil/h), the two sites located in the Sudano-Sahelian zone, as compared to other sites (Figure 5B).There were no statistically significant differences in acid phosphatase activity between soils collected from Togue, Sare Yoba and Mandina Findifeto.However, the greatest alkaline phosphatase activity was obtained in soil collected from Togue (276.67 µg pNPP /g of soil/h), followed by that from Missirah, Sare Yoba, Mandina Findifeto and Koumbidia Soce (Figure 5C).
Correlation matrix between density and diversity of AMF, soil physicochemical properties and soil enzyme activities
Soil N, P, P2O5, C and OM had significant positive correlations among them (Figure 6). Soil pH was strongly positively correlated with AMF spore density (r = 0.650, p = 0.235) and soil mycorrhizal potential (r = 0.820, p = 0.089), and negatively correlated with the diversity of AMF from field-collected soil (r = −0.419, p = 0.235) and the diversity of AMF from trap culture (r = −0.810, p = 0.096). Soil C, N, P and available P were negatively correlated with soil MPN, spore density and the diversity of AMF from field-collected soils, and positively correlated with the diversity of AMF from trap culture, although those correlations were not significant.
DISCUSSION
[56] Here, we analyzed the density and diversity of AMF and the enzyme activities in soils from 5 fonio millet agroecosystems in Senegal. Our results revealed that agroecological conditions influence AMF spore density and diversity. These findings might be partially explained by differences in the physicochemical characteristics of soils and in rainfall. A previous study by Ndoye et al.28 reported the influence of environmental factors on soil AMF spore density. Also, a significant difference in AMF spore density between three agroecological zones of the Central African Republic was observed by Djasbe and colleagues.57 This is consistent with the results that Maffo and coworkers58 obtained from two agroecological zones in Cameroon.
On the other hand, it has been reported that AMF inoculum potential has a major influence on mycorrhizal effectiveness and early root infection.59 In this study, the observed high AMF spore density and mycorrhizal inoculum potential in Mandina Findifeto might be partially attributed to its lower clay, nutrient and organic matter contents, and its higher pH and silt content, compared to the other sites. Similarly, Swarnalatha and colleagues60 obtained a higher AMF spore density in a silty sandy loam soil compared to a silty clay loam soil. Moreover, the presence of clay might reduce the production of AMF spores.60 These findings indicate the influence of soil type on AMF density.
One of the objectives of the present work was to determine the AMF diversity in soils from five fonio millet agroecosystems. A total of 12 species from field-collected soils and 20 species from trap culture were recorded, with differences between sites. Those site effects could be linked to soil physicochemical properties and environmental conditions.60,61 The negative correlations obtained between soil nutrient contents and AMF spore density and diversity corroborate other findings.61,62 In fact, it has been reported that soil mineral nutrients, especially P, might influence AMF diversity and density.63 For example, in North China, the study of Lang and colleagues64 in a long-term field experiment revealed that AMF alpha diversity gradually decreased as the P fertilizer rate increased. On the other hand, Delroy and colleagues51 found that the diversity of AMF tends to expand at optimal P. However, evidence points out that P supply does not necessarily have a detrimental effect on AMF diversity.65 Those results suggest that, besides nutrient contents, other parameters (organic matter, humidity, pH, etc.) might influence soil AMF parameters.63[70] In this respect, Zhao and colleagues71 reported that increases in temperature and precipitation can promote mycelium and spore development by allowing the plant to supply more photosynthetic products to AMF.
[74] Congruently, our results, as well as those obtained from other agroecosystems in Senegal,75,76 showed site effects on AMF diversity. In the present work, AM fungal species richness in field-collected soils and trap culture was higher in the Koumbidia Soce site, which contains higher amounts of N, P, C, OM, sand and clay than the Mandina Findifeto site. It is apparent that abiotic factors, particularly soil chemical properties, can influence AMF community structure and abundance.76,77 The results of Song et al.78 on Sophora flavescens Ait. in China also support the hypothesis that soil chemistry exerts a selective effect on the soil AMF population.
On the other hand, due to their effects on several ecosystem processes such as soil geochemical cycles, plant diversity and productivity, and soil composition, Glomeromycota communities have a wide environmental impact.19,76,79 This might be related to their greater environmental adaptability and their capacity to colonize plant roots more widely because of their efficient production of mycelia and spores.80 Moreover, Glomus species have been reported to promote fonio growth and yield under glasshouse conditions.9 In the present study, the low spore density and diversity of Scutellospora and Racocetra might be explained by their huge spores, which take longer to mature than small spores,81 and/or by their ability to grow only from an intact mycelium or from live spores.82,83 In addition, Glomus, Dendiscutata and Scutellospora dominated in the Mandina Findifeto site, which contained the lowest amounts of nutrients and OM compared to the other sites. In contrast, Acaulospora had greater abundance in the Togue site and lower abundance in Mandina Findifeto. Songachan and Kayang83 noted that Glomus species dominated in natural sites and Acaulospora species in cultivated ones, probably due to the absence of hyphal network disturbance in natural environments, which might have benefited Glomus species.
In this work, some AMF species (belonging to Ambispora, Glomus and Gigaspora) revealed by trap culture were not detected by spore identification from field-collected soils. Similar findings were reported by Leal et al.,84 Chairul et al.85 and Rodriguez-Morelos et al.86 This demonstrates that cryptic AMF spores that are invisible during sampling or under field conditions can be encouraged to germinate through trap culture.44,87 This shows the importance of combining spore identification from trap culture and from field-collected soils in AMF analysis.86 Furthermore, it has been shown that soil pH might directly or indirectly affect AMF community composition by impacting P availability.80,88 Our results revealed positive relationships between soil mycorrhizal inoculum potential, AMF density and soil pH, even if these were not significant. Bainard and colleagues77 found a negative correlation between some AMF species and phosphate concentrations in the soil. Thus, the main factors influencing the spatial variation in the AMF community across the sites appeared to be soil pH, or pH-driven changes in soil chemistry and electrical conductivity.70,89 The lower AMF diversity observed in Mandina Findifeto compared to Togue and the other sites might be partially related to soil OM and carbon contents, as reported by Zhang et al.90 Moreover, many studies have shown correlations between soil nutrients and enzymatic activities,91,92 as confirmed by our study. Also, we found that soil FDA activity was positively and significantly correlated with AMF diversity, and negatively with soil MPN and soil pH. This is consistent with the study of Cheng and coworkers93 showing a positive correlation between AMF diversity and soil enzyme activities. However, the correlation between soil alkaline phosphatase and soil nutrient contents was positive, except for soil pH. Similarly, Moradi et al.94 observed a positive correlation between acid and alkaline phosphatase activity, soil OM and N.
CONCLUSION
Our study shows an appreciable AMF density and diversity in the five tropical soils in Senegal. The results from field-collected and trap-culture samples revealed 12 and 20 AMF species, respectively, belonging to 8 genera and 4 families, from 5 fonio millet agroecosystems across three climatic zones (Sudanian, Sudano-Sahelian and Sudano-Guinean). AMF diversity increased with trap culture. In both field-collected and trap-culture soils, Glomus was the dominant genus in terms of spore density and diversity in the five agroecosystems. The abundance and diversity of AMF species, and the FDA-hydrolytic and phosphatase activities, varied between fonio agroecosystems as well as between climatic zones. Thus, abiotic factors such as soil physicochemical properties might influence AMF spore density and diversity. Furthermore, soil pH and texture were the variables that best explained the distribution of AMF spores.
This work contributes to our understanding of the diversity and ecology of AMF in fonio millet agroecosystems. It thereby contributes to paving the way towards the development of microbial engineering approaches for the agronomic improvement of fonio millet. However, more studies are necessary to better identify and explain the main factors driving the AMF community at different locations.
Figure 4 .
Figure 4. Relative abundance of AMF genera found in the rhizosphere of Digitaria exilis Stapf
Figure 5 .
Figure 5. Enzyme activities of field-collected soils. (A) FDA (fluorescein diacetate, μg FDA/g of soil/h); (B) PHA and (C) PHB (acid and alkaline phosphatase, μg pNPP/g of soil/h). Boxes followed by the same letter are not significantly different according to the Kruskal-Wallis test (P < 0.05)
Figure 6 .
Figure 6. Correlation matrix of the different variables (AMF density and diversity, soil physicochemical properties and soil enzyme activities). Positive correlations are displayed in blue and negative correlations in red. Colour intensity and circle size are proportional to the correlation coefficients. On the right side of the correlogram, the legend shows the correlation coefficients and the corresponding colours. All statistical analyses were conducted in R v4.3.1.
Table 3 .
Density of arbuscular mycorrhizal fungal spores in the field-collected soils. Within rows, values followed by the same letter are not significantly different according to the Kruskal-Wallis test (P < 0.05) | 5,388.6 | 2024-08-23T00:00:00.000 | [
"Environmental Science",
"Agricultural and Food Sciences",
"Biology"
] |
Nanotechnology-Based Topical Drug Delivery Systems for Management of Dandruff and Seborrheic Dermatitis: An overview
Dandruff and seborrheic dermatitis (SD) are common skin disorders affecting the scalp and, in the case of SD, extending to other body sites. They are associated with pruritus and scaling, causing aesthetic disturbance in the affected population. Treatment of such conditions involves using a variety of drugs over long periods, so optimizing drug formulation is essential to improve therapeutic efficacy and patient compliance. Conventional topical formulations like shampoos and creams have been widely used, but their use is associated with disadvantages. To overcome these, novel topical nanotechnology-based formulations are currently under investigation. In the following article, we highlight recently published formulation approaches used to improve topical dandruff/SD therapy.
Introduction
Seborrheic dermatitis (SD) is a recurrent, chronic inflammatory skin condition characterized by pink to red, greasy-looking skin with yellowish flaky scales, accompanied by itching. It affects areas rich in sebaceous glands, such as the scalp, face, chest and intertriginous areas (1). Dandruff is considered a mild or initial form of seborrheic dermatitis and appears as white or gray flakes in the scalp, accompanied by itching with no apparent inflammation; it is widely regarded as an embarrassing disorder (2).
There are many possible causes of dandruff/SD, but it is most likely due to infection by Malassezia fungal species. Many factors are considered possible contributors to the development of SD/dandruff, including exogenous factors (e.g. humidity, heat and extended periods of sun exposure) and endogenous host factors (e.g. nutritional deficiency, stress and immune response) (3). Various topical treatment options are available, such as antifungal, keratolytic and anti-inflammatory agents. Nowadays, nanotechnology offers a revolutionary treatment for several skin diseases and has proved safe and effective in the targeted delivery of many medicaments. This review article looks into some of the nanotechnology-based drug delivery systems with a focus on their potential role as next-generation carriers for medicaments used in the topical therapy of dandruff/SD.
Topical pharmaceutical forms for the treatment of dandruff/seborrheic dermatitis
Conventional formulations
Many therapeutic agents are used for dandruff/SD, and these are formulated in a variety of pharmaceutical preparations, including liquid preparations (solutions, shampoos, lotions, emulsions, hair oils) or semisolid preparations (ointments, creams, gels), so as to provide ease of application at multiple sites while maintaining the effectiveness of the active agent. A summary of the main therapeutic agents presently used in different pharmaceutical formulations for the management of dandruff/SD is presented in Table 1.
Novel nanotechnology-based formulations
Dandruff/SD patients require regular, long-term use of therapeutic agents, mostly on a daily basis. These are usually available as several conventional topical dosage forms. There is a strong need to develop innovative pharmaceutical formulations which are aesthetically and cosmetically more acceptable to the patient and can be conveniently incorporated into a patient's routine hair- or skin-care regimen to improve compliance. Nanotechnology has emerged as an innovative drug delivery approach, allowing controlled, sustained and targeted drug delivery, thus minimizing undesirable drug side effects while maintaining or improving therapeutic efficacy (4).
In the following sections, we highlight recently published work describing nanotechnology-based formulation approaches used to improve the efficacy of topically applied therapeutic agents used for dandruff/ SD management. Table 2 summarizes research conducted with various nanotherapeutics as topical drug delivery systems used for dandruff/SD.
Microemulsions (MEs) and microemulsion gels
Microemulsions are clear/transparent, thermodynamically stable dispersions of oil and water stabilized by emulsifiers, with droplet diameter usually within the range of 10 -100 nm (5) . They have been widely studied to enhance the bioavailability of poorly soluble drugs, and represent an attractive option for enhanced dermal and transdermal administration of both hydrophilic and lipophilic drugs, as well as providing controlled or sustained drug release property (6) .
Microemulsions have been used as carriers for antifungal drugs to ensure effective drug concentrations in the skin after dermal administration. Several microemulsion formulations and microemulsion-based gels of azole antifungals (ketoconazole (7-10), clotrimazole (11), fluconazole (12,13), miconazole (14,15), sertaconazole (16-18)) and allylamine/benzylamine antifungals (butenafine (19,20), terbinafine (21) and naftifine (22)) have been developed with a view to providing controlled drug release and enhancing skin permeability, with potential efficacy for the eradication of cutaneous fungal infections. The benefits of a microemulsion-loaded hydrogel over conventional topical preparations are illustrated by butenafine hydrochloride microemulsion-loaded hydrogel. Aerosol OT (surfactant), sorbitan monooleate (cosurfactant) and isopropyl palmitate (oil) were used in the preparation of the microemulsion, and carbopol 940 (1%) was used as the gelling base for the microemulsion-loaded hydrogel. The developed hydrogel showed better ex vivo skin permeation and antifungal activity against Candida albicans compared to a marketed cream. The greater drug penetration-enhancing activity of microemulsions may be attributed to the combined effects of both the lipophilic and hydrophilic domains of microemulsions, while the greater antifungal activity may be due to enhanced permeation of drug-containing microemulsion oil globules through the fungal cell wall (19).
Salicylic acid (SA) is a keratolytic agent with antimicrobial actions that has been used in topical products for the treatment of SD and dandruff. However, the topical use of SA is associated with burning sensation and irritancy. To minimize skin irritation and increase SA solubility, a microemulsion loaded with SA was prepared and provided a better option for topical delivery, with enhanced solubility at all the studied concentrations (23). In another study, a microemulsion containing 12% salicylic acid and 4% lactic acid was prepared, composed of Tween 80 as surfactant, propylene glycol as co-surfactant, castor oil, ethyl alcohol and purified water. Increasing the concentration of surfactant or co-surfactant enlarged the microemulsion region. Such a microemulsion could be a suitable vehicle for the topical treatment of psoriasis, scaly patches, ichthyoses, dandruff, corns, calluses, and warts on the hands or feet (24).
Topical calcineurin inhibitors tacrolimus and pimecrolimus have shown safety and efficacy in the treatment of SD as an alternative to corticosteroids. Tacrolimus is a lipophilic drug that is commercially formulated as a lipophilic ointment. A microemulsion-type colloidal carrier, as well as microemulsion based hydrogel of tacrolimus, were developed to improve the dermal availability of tacrolimus (25-27) .
Nanoemulsions (NE) and nanoemulgels
Nanoemulsions are biphasic dispersions of two immiscible liquids: an oily system dispersed in an aqueous system, or an aqueous system dispersed in an oily system, stabilized by an amphiphilic surfactant. Droplet sizes in nanoemulsions are usually in the range of 100-400 nm. Recently, the term nanoemulsion has been used specifically for systems with droplet diameters smaller than 250 nm, which are in a metastable state compared with microemulsions (28). Depending on the constituents and the relative distribution of the internal dispersed phase(s) and the external phase, nanoemulsions are termed biphasic (o/w or w/o) or multiple nanoemulsions (w/o/w) (29).
Nanoemulsions offer several advantages for topical and transdermal delivery: they can deliver both lipophilic and hydrophilic drugs to the skin or mucous membranes, have the capacity for site-specific drug targeting and delivery, and can increase the solubility and dispersion of drugs in the skin, thereby enhancing skin permeation, extending drug release and minimizing side effects by reducing the administered dose. They are transparent/translucent with a pleasant appearance, can be washed away easily after application, and provide good skin hydration in cosmetic products (30,31). On the other hand, the disadvantages of these systems are their instability during storage and the fact that their preparation requires expensive, high-energy-input instruments, since they use smaller amounts of surfactants than microemulsions (32,33).
Antifungals are widely used in the treatment of SD. They are characterized by poor aqueous solubility and therefore poor dispersibility in topical vehicles. Formulating antifungals as nanoemulsions enhances their solubility, subsequently improves their absorption into the skin, and increases their efficacy for topical use. Nanoemulsions of antifungal drugs for topical use have been developed for ketoconazole (34-38) and clotrimazole (39).
The use of topical nanoemulsions is limited by their low viscosity and spreadability; this problem is solved by incorporating gelling agents into nanoemulsions, converting them into nanoemulgels (40). The latter can accommodate a higher amount of drug owing to their better solubilization capacity. Moreover, because of their adhesion, nanoemulgels provide longer retention times and higher skin penetration, along with a controlled drug-release profile at the target site and fewer side effects (41). A variety of nanoemulgel formulations for the treatment of fungal infection incorporating ketoconazole (42), bifonazole (43) and terbinafine hydrochloride (44,45) have been formulated as a means of more effective topical drug delivery. A comparative assessment of terbinafine nanoemulgel against the marketed product, Lamisil® emulgel, was conducted for ex vivo drug permeation and in vivo antifungal activity. Results showed that the skin permeation and in vivo antifungal activity of terbinafine against Candida infection from all the prepared nanoemulsion-based gel formulae were significantly improved over the marketed emulgel (46).
Polymeric micelles (PMs)
Polymeric micelles are nanoscopic core-shell structures with diameters typically smaller than 100 nm, formed by self-aggregation of amphiphilic block copolymers dispersed in aqueous media, with the hydrophobic part of the polymer on the inside (core) and the hydrophilic part on the outside (shell). PMs have great potential as a drug delivery system as they increase the solubilization of poorly soluble molecules, provide sustained-release properties, and increase drug stability by protecting encapsulated substances from degradation (47). Despite their promising potential, significant problems have impeded the progress of PMs and limited their applications as drug delivery systems, mainly a lack of stability, the limited range of polymers available for use, and the lack of suitable methods for large-scale production (48). Research has been conducted to utilize PMs as drug delivery systems for different azole antifungal compounds using different copolymers. In one study, different azole antifungals (clotrimazole, fluconazole and econazole nitrate) were loaded into polymeric micelles made with different copolymers. The best formulation was provided by MPEG-dihexPLA micelles loaded with econazole, incorporated with an efficiency of 98.3%. This micelle formulation showed significantly higher penetration than the commercial liposomal gel (Pevaryl®) in both porcine and human skin. The authors attributed the better skin delivery to the smaller size of the formulation, whereas the commercial formulation contains numerous penetration enhancers (49). Another study reported that ketoconazole incorporated into methoxy poly(ethylene glycol)-b-poly(δ-valerolactone) copolymeric micelles had an 86-fold higher water solubility than crude ketoconazole and showed activity similar to the crude drug with no skin irritation. In addition, the drug-loaded micelles demonstrated enhanced drug deposition in mouse skin with no penetration through the skin, as compared to a marketed ketoconazole cream, indicating selective skin delivery (50).
Liposomes
Liposomes are colloidal, spherical nanoparticle vesicles composed of one or more lipid bilayers that can be produced from cholesterol, non-toxic surfactants, sphingolipids, glycolipids, long-chain fatty acids and even membrane proteins. They have an aqueous core and can transport hydrophilic or hydrophobic drugs (51,52).
Topical liposome formulations offer several advantages: they act as a solubilizing matrix for poorly soluble drugs and provide good skin penetration, associated with improved therapeutic efficacy and reduced side effects. They also act as a local depot providing sustained drug release. However, the disadvantages of liposomes are associated with their low solubility and their physical and chemical instability during long-term storage (53,54).
Liposomes and liposomal gels have been used as a drug delivery system for a variety of antifungal drugs including fluconazole (55), miconazole (56-58), ketoconazole (59-64), terbinafine (65-67) and ciclopirox olamine (68). Liposomal dispersions and liposomal gels have also been developed for a variety of corticosteroids to increase their dermal delivery and hence improve their topical bioavailability, reflected in improved therapeutic effect and reduced side effects. Among the corticosteroids studied that have potential for use in dandruff/SD are hydrocortisone, betamethasone and triamcinolone (69-71). However, increased percutaneous penetration and efficacy combined with decreased toxicity cannot be found for all steroids; liposome characteristics can vary according to size, shape, surface charge and lipid composition (72).
Despite the improved therapeutic value of liposomes, it has become evident that classical liposomes remain confined to the upper layers of the stratum corneum and fail to penetrate the deeper skin layers (73). To improve the elasticity of conventional liposomes, researchers have developed a new family of liposomal structures called transferosomes.
Transferosomes
Transferosomes, also known as 'deformable liposomes' or 'elastic liposomes', are highly elastic vesicular systems consisting of a complex lipid bilayer surrounding a water-filled core. They differ from liposomes by the presence of edge activators (surfactants) in the lipid bilayer of the vesicles; these contribute to the deformability of the bilayers and provide transferosomes with better skin penetration ability (74). Transferosomes are used for topical or systemic administration of various hydrophilic and lipophilic drugs, delivering them either into or through the skin; they are capable of sustained-release action with high efficiency. The main disadvantages of transferosomes are their chemical instability and the cost of formulation (75).
In one study, miconazole transferosomes were prepared with high encapsulation efficiencies, ranging from 67.98 ± 0.66% to 91.47 ± 1.85%, and small particle sizes, ranging from 63.5 ± 0.604 nm to 84.5 ± 0.684 nm. The optimized miconazole transfersome formulation was incorporated into a Carbopol 934 gel base and showed higher antifungal activity than the marketed product (Daktarin® cream 2%), where the steady-state flux after 24 h for the miconazole transfersomal gel was 85.968 µg cm-2 h-1 compared to 72.488 µg cm-2 h-1 for Daktarin® cream 2%. This could be attributed to the high deformability and flexibility of transfersomes, which allowed them to overcome the skin's barrier properties (78).
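For orientation, the percentage figures of this kind follow from standard definitions; the short sketch below reproduces the arithmetic for encapsulation efficiency (with hypothetical total and free drug amounts as inputs) and for the flux enhancement of the transfersomal gel over the cream, using the values quoted above.

```python
# Encapsulation efficiency: fraction of the added drug actually entrapped
# in the vesicles. total_drug_mg and free_drug_mg are hypothetical inputs.
def encapsulation_efficiency(total_drug_mg: float, free_drug_mg: float) -> float:
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

# Relative enhancement of the steady-state flux of the transfersomal gel
# over Daktarin cream, using the values reported above.
flux_gel, flux_cream = 85.968, 72.488  # ug cm^-2 h^-1
enhancement = (flux_gel - flux_cream) / flux_cream * 100.0

print(f"EE example: {encapsulation_efficiency(10.0, 0.9):.1f}%")  # 91.0%
print(f"Flux enhancement over the cream: {enhancement:.1f}%")     # ~18.6%
```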
Sulfur and salicylic acid are used for topical delivery in many skin-care products for many clinical conditions, including SD, owing to their anti-inflammatory and keratolytic activities. Topical transferosomal gels of sulfur and salicylic acid were formulated and showed enhanced skin penetration compared with conventional gels (84). Transferosomes have also been used for the delivery of anti-inflammatory agents such as hydrocortisone (85) and tacrolimus (86), with improved site-specificity and overall drug safety compared with traditional topical formulations, making this carrier suitable for the treatment of inflammatory skin disorders.
A study reported the preparation of tacrolimus transfersomes using different kinds of surfactants (sodium cholate, Tween 80 and Span 80). Tween 80 was selected as the optimal surfactant owing to the best deformability and the highest drug retention.
The optimized transferosomal formulation was further made into a gel; the in vitro drug release after 24 h from the transferosomal gel and a liposomal gel was 2.8 times and 2.3 times higher, respectively, than from the commercial ointment (Protopic®). The optimized tacrolimus transferosomal gel displayed the highest skin retention compared with the liposomal gel and the commercial ointment. The amounts of tacrolimus in the epidermis and dermis from the transferosomal gel were 3.8 times and 4.2 times those from the ointment, respectively, while for the liposomal gel they were only 1.7 times and 1.4 times those from the ointment. In in vivo therapy of atopic dermatitis in mice, the tacrolimus transferosomal gel took effect more quickly than the liposomal gel and the commercial ointment. Thus, transferosomes displayed superior performance and effective skin targeting for topical delivery of tacrolimus (86).
Ethosomes
Ethosomes are a slight modification of liposomes. They are soft vesicles made of phospholipids, containing a high content of ethanol (20-45%) and water (87). Compared to liposomes, the skin penetration capacity of ethosomes is higher owing to the capability of ethanol to disturb skin lipids, making this carrier system suitable for dermal and transdermal delivery of hydrophilic and lipophilic drugs. As with other lipid-based vesicular systems, stability is a major challenge for ethosomes (88,89).
Ethosomes and ethosomal gels represent an efficient carrier for a variety of therapeutic agents used in the treatment of skin infections and inflammatory conditions, including SD. Although clinical studies are lacking, much research has been conducted to prepare ethosomal formulations of a variety of antifungal agents, including fluconazole (55), clotrimazole (80), ketoconazole (90), terbinafine (91-93) and ciclopirox olamine (94,95). In one study, tacrolimus ethosomes were prepared and showed a lower vesicle size and higher encapsulation efficiency compared with traditional liposomes. In addition, tacrolimus permeated to a greater degree from the ethosomes than from the commercial ointment (Protopic®), suggesting the greater ability of ethosomes to penetrate to the deep strata of the skin (96).
Niosomes
Niosomes are vesicular nanocarriers similar to liposomes except that they are composed of mixtures of non-ionic surfactants and cholesterol and may contain small amounts of phospholipids (97). They can be used as carriers for hydrophilic or lipophilic drugs but are more popular than liposomes in the field of topical drug delivery because of their higher chemical stability (owing to the use of surfactants instead of phospholipids in their preparation), low production cost, high loading capacity and ability to provide a sustained drug-release pattern (98,99).
In recent years, there has been much research on the use of niosomal dispersions and niosomal gels for the delivery of a variety of antifungal drugs such as miconazole (100), fluconazole (101), ketoconazole (102), terbinafine hydrochloride (103), naftifine hydrochloride (104) and ciclopirox olamine (105). In one study, ciclopirox olamine niosomes were prepared using Span 60, cholesterol and diacetyl phosphate. The obtained niosomes were in the size range 170-280 nm, with entrapment efficiencies of 38-68%. A niosomal gel of the optimized batch was prepared by incorporating the niosomal dispersion into 2% (w/w) carbopol 940P. Deposition of ciclopirox olamine into rat skin from the niosomal dispersion and its gel was significantly higher than that from plain ciclopirox olamine solution and its marketed product. These findings suggest that niosomes are promising tools for cutaneous retention of ciclopirox olamine, with an expected reduction in the frequency of application of the dosage form (106). Benzoyl peroxide is widely used in the treatment of acne but has also been effective for the treatment of trunk and facial SD because of its antibacterial and keratolytic effects (107). Benzoyl peroxide-loaded niosomes have been prepared to increase its solubility and were incorporated into a gel by addition to a 1% carbopol 934 base to increase skin contact time and gain maximum benefit from the treatment. The prepared niosomal gel was advantageous because it controlled drug release and enhanced transdermal permeation. Skin irritation studies conducted on mice showed that the optimized niosomal gel formulation caused a significant reduction in inflammation with far less irritation in comparison with plain benzoyl peroxide solution (108).
Polymeric nanoparticles (PNs)
Polymeric nanoparticles are solid colloidal particles with a diameter ranging from 1-1000 nm. They are made of non-biodegradable or biodegradable polymers (natural, semi-synthetic or synthetic) in which the active ingredient is dissolved, encapsulated, adsorbed or chemically attached. There are two types of nanoparticles, depending on the preparation process: nanospheres and nanocapsules. Nanospheres have a monolithic (matrix) structure in which drugs are dispersed, encapsulated within the particles or adsorbed onto their surfaces, whereas nanocapsules have the drug confined in a liquid-core cavity surrounded by a polymeric membrane (109,110). Polymeric nanoparticles have been extensively studied as promising particulate carriers in the pharmaceutical and medical fields because of their subcellular size, potential to protect unstable active ingredients, ability to enhance the skin permeation of poorly water-soluble lipophilic drugs, and utility in providing controlled and sustained drug delivery (111). Despite their proposed benefits, topically applied nanoparticles remain localized to proximal glands and hair follicles and are unable to penetrate the stratum corneum deeply; this makes their utility for obtaining prolonged skin retention and controlled release for the desired therapeutic effect debatable (112).
Many anti-inflammatory agents have been developed as PNs, including hydrocortisone (113-115), betamethasone (116-119) and tacrolimus (120), with the aim of increased drug permeability through lipid membranes and long-term drug-release potential, as well as providing a safer approach for the treatment of dermatitis. Zinc pyrithione (ZPT), a widely used agent in anti-dandruff shampoos, was prepared as nanoparticles with primary particle diameters in the range of 20-200 nm. Particles smaller than 25 nm in diameter would not be expected to scatter light significantly and should produce a clear anti-dandruff shampoo formulation which exhibits higher activity, is distributed more effectively on the scalp, and requires less thickening agent in the shampoo formulation to ensure stability against settling than the standard form of ZPT (121).
Lipid nanoparticles: solid lipid nanoparticles (SLNs) and nanostructured lipid carriers (NLCs)
Solid lipid nanoparticles (SLNs) are nano-sized spherical structures composed of an aqueous surfactant monolayer coat surrounding a high-melting-point lipid core that remains in a solid state at room as well as body temperature (122). SLNs can effectively encapsulate and solubilize lipophilic and hydrophilic drugs, although lipophilic drugs are better delivered by solid lipid nanoparticles (123). SLNs hold great promise for achieving controlled, site-specific drug delivery and an increase in skin hydration. However, drawbacks associated with SLNs are uncontrolled drug expulsion from the carrier and limited drug-loading capacity (124).
Nanostructured lipid carriers (NLCs) are a modified generation of SLNs consisting of a matrix composed of solid and liquid lipids, stabilized by an aqueous surfactant solution. The incorporation of liquid lipid causes structural imperfections in the solid lipids, forming a crystal lattice with many spaces. This arrangement increases the available space and allows a higher drug-loading capacity (125).
Lipid nanoparticles (SLN, NLC) have been reported as suitable carrier systems to control the penetration/permeation of highly lipophilic drugs and to offer epidermal and follicular targeting, as well as controlled release of drugs, protecting them from degradation and enhancing their stability (126). SLNs and NLCs are among the nano-carriers that have earned a better place in topical preparations and are applied either as an aqueous dispersion or incorporated into a suitable liquid or semi-solid preparation to provide an appropriate formulation for application to the skin. They have been used to improve skin absorption of a variety of drug molecules intended for the topical treatment of multiple diseases. The development of SLNs/NLCs of antifungals might have a significant advantage for their clinical use. Antifungal drugs such as miconazole nitrate (127-129), fluconazole (130,131), bifonazole (131,132), ketoconazole (133-136), clotrimazole (137) and terbinafine hydrochloride (138,139), formulated as SLNs/NLCs and incorporated into suitable semisolid preparations, have the potential to provide a targeted and sustained drug-release pattern with a reduction of fungal burden in the infected area. Such findings can be exemplified with miconazole nitrate-loaded SLNs (127). The SLN dispersions exhibited average sizes between 244 and 766 nm. All the dispersions had high entrapment efficiencies, ranging from 80% to 100%. A miconazole nitrate SLN gel (2%) was prepared by incorporation into a carbopol 940 gel base (0.3-1.0%), of which the 0.5% concentration showed good consistency. The miconazole nitrate SLN gel produced significantly higher deposition of the drug in skin (57 ± 0.65%) than the marketed gel (30 ± 0.87%); this colloidal nanoparticulate gel, being submicron in size, enhances drug penetration into the skin and remains localized in the skin for a longer period than the conventional gel, thus enabling better drug targeting to the skin.
Incorporation of corticosteroids such as hydrocortisone, betamethasone valerate and dipropionate (140-142) and clobetasol propionate (143-145) into lipid nanoparticles enables such drugs to be deposited on the skin with reduced systemic exposure and reduced local side effects, along with sustained drug release and more efficient penetration into the skin layers than traditional formulations. Tacrolimus, a calcineurin inhibitor used in the treatment of SD mainly for its anti-inflammatory effects, is not associated with the side-effect profile of corticosteroids but is reported to have a low penetration rate through the skin when applied topically. Solid lipid nanoparticle (SLN), nanostructured lipid carrier (NLC) and modified nano-lipid carrier formulations of tacrolimus were developed to overcome this problem and subsequently improve its bioavailability (146-148).
Metallic nanoparticles (MNs)
Recent advances in nanotechnology include the development of inorganic nanoparticles that remain stable for long periods and are useful for specific targeting and controlled release of carried drugs in the skin (149). A variety of metallic nanoparticles have been used in the treatment of various skin diseases, including SD.
Antidandruff shampoos have become popular in the treatment of dandruff using agents that combat the growth of the causative agent, Malassezia furfur. Recently, this yeast has developed resistance towards the commonly used antidandruff drugs, and as a result, it is necessary to develop a new class of novel antidandruff shampoos.
Silver nanoparticles (AgNPs) were developed for their bactericidal properties, have been used in the treatment of infectious diseases, and are included in several biomedical products, including wound and burn dressings (150). They have also been investigated as a potential fungistatic agent against various clinically relevant fungi, including M. furfur, which is involved in scalp-related diseases such as dandruff. It is also reported that AgNPs may have significant anti-inflammatory effects (151,152). The activity of silver nanoparticles depends on factors such as sensitivity to silver, the concentration of nanoparticles in the formulation, and their shape (153).
Silver nanoparticles can be synthesized from eco-friendly, cost-effective biological systems, making them amenable to large-scale industrial production; they are considered cost-effective fungistatic agents in shampoo formulations for treating scalp problems, especially since only a very small amount is required to produce the desired antidandruff activity. There have been many reports of using silver nanoparticles in the formulation of antidandruff shampoos with effective antifungal activity (154-157). A hybrid system of ketoconazole complexed with silver nanoparticles has been synthesized to enhance activity against Malassezia furfur; the anti-dandruff activity was highest with ketoconazole-coated AgNPs compared to ketoconazole and AgNPs individually (158,159).
There have been studies indicating that AgNPs are toxic to mammalian cells (160); therefore, sulfur nanoparticles have been developed as a safer, more cost-effective alternative to silver nanoparticles, as they are reported to possess broad-spectrum antimicrobial activity as well as extensive antifungal activity against M. furfur, the main causative agent of dandruff (161,162).
Other metallic nanoparticles with potential anti-dandruff activity owing to their antifungal activity against Malassezia include selenium nanoparticles (SeNPs) (163), reported to be more potent than the known anti-dandruff agent selenium sulphide, as well as zinc oxide nanoparticles (ZnO NPs) (164) and palladium nanoparticles (Pd NPs) (165), reported to have antimicrobial as well as antidandruff activity. However, clinical research is required before such metallic nanoparticles are introduced into anti-dandruff preparations. The potential benefits of the nanotechnology approaches discussed above over conventional dosage forms, and the potential advantages of each nanotechnology formulation compared to the other nanotechnology techniques, are summarized in Table 3.
Table 3. Potential advantages and limitations of each nanotechnology approach compared to the other nanotechnology techniques (Ref.)
Polymeric micelles: Limited benefit over other nanotechnology approaches due to lack of stability, low drug-loading capacity, the limited range of polymers for use and the lack of suitable methods for large-scale production. (48)
Liposomes, transferosomes and ethosomes: Suitable carriers for both lipophilic and hydrophilic drugs, with better skin penetration, reduced side effects, improved therapeutic efficacy and stability of encapsulated drugs, as well as the ability to provide a local drug depot with sustained drug-release action. Improved localized as well as transdermal skin delivery of drugs. (167)
Niosomes: Suitable carriers for both lipophilic and hydrophilic drugs, with enhanced bioavailability, targeted delivery and slow drug release. Compared to lipid vesicles, niosomes are more stable, with a higher drug-loading capacity leading to dose reduction, delayed clearance and ease of modification, at lower production cost. (168)
Polymeric nanoparticles: Enhance lipophilic drug penetration through the skin, with the ability to protect unstable active ingredients, reduce skin irritation and sustain drug release over prolonged periods of time. Limited degree of enhancement in skin permeation, with localization in the hair follicles; this may favour delivery of drugs to the site of application in the treatment of dermatological conditions. Higher stability and ability to protect chemically labile drugs against decomposition than lipid vesicles.
Solid lipid nanoparticles (SLN) and nanostructured lipid carriers (NLC): NLC provide greater drug loading and better stability compared to SLN.
Metallic nanoparticles: Useful for controlled, localized and targeted drug release in the skin. Good stability, in addition to antimicrobial properties in some types of metallic nanoparticles.
Conclusions
Dandruff and SD are stubborn skin disorders that require symptomatic relief of pruritus and long-term therapy with antifungal, keratolytic and anti-inflammatory agents to clear symptoms, as well as maintenance therapy to help sustain remission. Nanotechnology offers a new approach to the treatment of dandruff/SD, with the potential for better targeting, enhanced penetration and sustained delivery of active therapeutic agents. However, reported clinical studies using such drug delivery systems in topical applications have been limited. Consequently, further clinical investigations are required to elucidate the effectiveness of nanotechnology in the topical treatment of dandruff/SD. | 6,699 | 2020-06-21T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
Importance of Sand Grading on the Compressive Strength and Stiffness of Lime Mortar in Small Scale Model Studies
Mortars provide the continuity required for stability and the exclusion of weather elements in masonry assemblies. But because of the heterogeneity of mortar, its mechanism of behaviour under different load effects depends on the properties of its constituents. The aim of this paper is to determine the effect of sand grading, for various cement-sand-lime mortar designations (BS) and strength classes (EC), on the compressive strength and stiffness of mortar. Two silica sands, HST95 and HST60, were used to make mortars in three strength classes, M2, M4 and M6, corresponding to mortar designations (iv), (iii) and (ii) respectively. The results show that mortar made with the HST60 sand (coarser grading) usually resulted in mortar with higher compressive strength and stiffness. One-way ANOVA analyses of both compressive strength and stiffness at a significance level of 5% also show that the effect of sand grading on the two parameters is significant. There is also strong evidence of a linear correlation between stiffness and compressive strength. The results indicate that, in order to replicate the full-scale behaviour of masonry at model scales, the grading of the fine aggregate in the models should be similar to that of the prototype so as to properly model full-scale behaviour.
Introduction
Masonry is a composite material, with the constituents having distinct strength and deformation characteristics.
However, even though masonry has been used for thousands of years, it is not as well understood as it should be, because of the varying properties of its components as well as its failure mechanisms.
Mortars are used to bed and join masonry units, giving them the continuity required for stability and the exclusion of weather elements [1]. The proportion of the different constituents is usually determined by how the masonry is to be used, which is governed by the strength requirement of the application, the degree of resistance to movement required, the degree of frost resistance and rain penetration required, etc.
Because mortar is not a homogeneous material, the mechanics of its behaviour under load depend on a variety of factors that influence each of its constituent elements. This paper aims to investigate the effect of sand grading, for varying mortar designations, on mechanical properties of mortar such as stiffness and compressive strength, as they relate to small-scale model studies. It presents part of a research programme looking at the behaviour of brickwork at prototype (full) scale and model scales [2]. This necessitated carrying out various tests on the different mortars used for the prototype and model-scale tests.
The main agent responsible for the setting and strength development of cement mortars is the cement hydration process. Consequently, the higher the cement content in a mortar, the higher its strength. But because adequate cement hydration only takes place in the presence of sufficient water, the water/cement ratio of the mortar becomes one of the most important factors affecting the compressive strength of mortars [3].
Many parameters influence mortar strength apart from the water/cement ratio, including cement volume, workability and sand grading. Studies of the effect of sand grading on compressive strength have shown a higher strength yield in mortars with coarse sands, while the effect of sand grading on the tensile bond properties of mortars has been discussed by Anderson and Held [4], who found that the finer the grading of the sand, the lower the bond strength of the masonry. This suggests that, since very fine sands have to be used in relatively small brickwork models because of the thin joints, such models may show lower bond strengths than a comparable prototype for this reason. Generally, the higher the cement content of a mortar the stronger the bond, while the converse is true for the water/cement ratio.
The stiffness properties of mortar are also important because they greatly influence the stiffness properties of brickwork as well as its strength [1]. The stress/strain relationship in mortars usually shows distinct plastic characteristics.
Materials
In selecting an appropriate mortar for the tests, it was intended that a mortar best comparable to what is, and has been, in use for masonry structures would be most suitable. The first consideration was whether to use a cement-sand mortar or a cement-sand-lime mortar. Traditionally, lime has been used in mortar to improve its workability and water-retention properties. Both of these properties were thought desirable considering possible difficulties in adequately placing mortar in the bed joints of the model specimens, and the rapid suction of water from the model bed joints because of their small thickness. Consequently, cement-sand-lime mortar was adopted for the tests.
Three types of sand were used in this research. Ordinary building sand was used for tests involving full-scale specimens, while Congleton HST95 and HST60 silica sands were used for the model-scale tests. In order to ensure that the same sands were used throughout the study, all the sands were bought in one batch and in sufficient quantity to last the duration of the programme. The grading curves for the model sands and the ordinary building sand are shown in Figure 1: the HST60 sand and the building sand lie within the grading limits of the code, but nearer the fine limit, with the building sand just coarser than the HST60 sand, while the other model sand, HST95, has a grading finer than the fine limit set by the code. The grading of all the sands shows that they are within the limits set by BS EN 13139:2002 [5] for aggregates used in mortar.
The cement used conforms to BS EN 197-1:2000 [6]. It was acquired in different batches in order to ensure that the fresh qualities of the cement needed for strength development were maintained for the duration of the testing programme. Hydrated lime conforming to BS EN 459-1:2001 [7] was acquired in one batch and used throughout.
Three mortar designations according to BS 5628, (ii), (iii) and (iv), were used for the silica sands (used for the small-scale model tests), while the ordinary building sand was used to make just one type of designation (iii) mortar for the full-scale tests. Details of the different mortars used for the study are summarised in Table 1. The batching of the constituents of the dry mortar was carried out in accordance with the guidance given in BS 4551 [8] for batching by weight for the three chosen mortar designations.
Methods
Compressive strength and modulus of elasticity tests. The procedure outlined in BS EN 1015-11:1999 [9] was followed in testing the specimens. The tests were carried out under load control at a rate within the range 0.06-0.1 kN/s. Three prisms measuring 75 × 75 × 200 mm were used for determining the elastic properties of the prototype and model mortars in the modulus of elasticity tests, as well as their compressive strengths. Four LVDTs were attached to each specimen, as described for the brick specimens. The specimens were tested through two loading cycles of up to a third of the expected maximum load for some of the tests, but most of the tests were carried out without load cycling after it was seen that there was no noticeable difference between the loading and unloading cycles in the earlier tests. All stiffness values were determined as a secant modulus at a third of the maximum stress reached.
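As an illustration of the secant-modulus calculation described above, the following minimal Python sketch interpolates the strain at one third of the maximum stress; the stress/strain readings are invented for the example and are not the test data.

```python
import numpy as np

# Hypothetical stress (N/mm2) and axial-strain readings from the LVDTs.
stress = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.3])
strain = np.array([0.0, 0.00008, 0.00017, 0.00027,
                   0.00038, 0.00051, 0.00067, 0.00080])

# Secant modulus taken at one third of the maximum stress reached,
# as described for the prism tests above.
target_stress = stress.max() / 3.0
strain_at_target = np.interp(target_stress, stress, strain)  # stress is monotonic here
E_secant = target_stress / strain_at_target
print(f"Secant modulus: {E_secant:.0f} N/mm2")
```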
Results and Discussion
Typical failure of the mortar specimens was by shear cracks in the direction of loading. These tended to be triangular in shape, originating from the sides of the specimen at the top, slanting inwards towards the centre at mid-height and diverging again to the sides of the specimen at the bottom. The final outcome was a pyramidal-shaped mass at failure, considered to be due to platen restraint.
Compressive Strength
The average value of the compressive strength for the different batches of 1:1:6 prototype mortar (MP mortar) is given in Table 1, which provides a summary of the mortar test results. This value of compressive strength is higher than the minimum compressive strength of 3.6 N/mm2 stipulated in BS 5628 [10] for mortar designation (iii), which is an indication that the batching, mixing and curing conditions used were appropriate for the attainment of the specified minimum strength.
From Figure 2, which shows the variation of model mortar compressive strength as the mortar strength class is increased, it is seen that mortars made with HST60 sand consistently had higher compressive strengths than those made with HST95 sand. The strength class was substituted for the mortar designation on the X-axis as it better illustrates the increase in strength.
As expected, it can be seen from Figure 2 that the relationship between compressive strength and strength class is linear. For designation (ii) (class M6), there is a 60% difference between the compressive strengths of the M60 and M95 mortars, while for designation (iv) (class M2) there is a similar difference of about 58%. Because of the coarser grading of the HST60 sand, it has a higher bulk density and thus a lower water/cement (w/c) ratio than an equivalent weight of HST95 sand, which subsequently increases the compressive strength of the M60 mortars. The wider divergence at higher mortar grades could be attributed to the greater quantity of cement available for making a more cohesive mix in the case of the HST60 mortar, which has the coarser sand grading; there is therefore better cohesion between the coarse sand grains and the finer cement grains. An investigation into the effects of grading on mortar properties by Anderson and Held [4] yielded similar results: the sand with the coarsest grading within the BS EN 13139 [5] limits gave higher compressive strength as a result of the lower w/c ratio.
Since the prototype sands are coarser than the model sands, there is a possibility that full-scale tests could have higher mortar strengths. However, the influence of this on masonry strength might not be very significant, as suggested by Hendry [11], who noted that halving the mortar cube strength only results in a 12% reduction in masonry strength for a medium-strength brick. But the different gradings of the sands could still have an effect on flexural bond strength and shear bond strength tests, which are more susceptible to changes in the grading characteristics of the sand in the mortar, as reported by Anderson and Held [4].
Table 2 shows a one-way ANOVA analysis of all the strength results at a significance level of 5%. From the table it is clear that there is a significant difference in the means of the compressive strengths, judging from the very low value of P, implying that there is a real effect of the different sand gradings on mortar strength.
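A one-way ANOVA of this kind can be reproduced with standard statistical libraries; the sketch below is a minimal example, with placeholder strength replicates standing in for the measured values reported in the paper.

```python
from scipy import stats

# Hypothetical compressive-strength replicates (N/mm2) for the two sand
# gradings at one strength class; the real values are those summarised
# in Table 1, not these placeholders.
m60 = [6.1, 6.4, 5.9, 6.3]
m95 = [3.8, 4.0, 3.7, 3.9]

f_stat, p_value = stats.f_oneway(m60, m95)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")

# At the 5% significance level used in the paper, P < 0.05 indicates a
# real effect of sand grading on compressive strength.
if p_value < 0.05:
    print("Significant difference between gradings")
```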
The variation of compressive strength with w/c ratio, shown in Figure 3, shows a decrease in compressive strength with increasing w/c ratio. It is also seen from the plot that at a compressive strength of about 3.5 N/mm2 (grade (iii) mortar) the two mortars have the same w/c ratio of around 2. The plot also shows that the mortars with coarser sand (M60) are affected more by changes in w/c ratio than the mortars with finer sand (M95). This implies that the prototype tests could be more susceptible to changes in the w/c ratio than the model tests because of the coarser sands in the former.
Stiffness
It is seen in Figure 4 and Figure 5, which compare the stress/strain curves for axial strain and lateral strain respectively, that the M60-ii and M95-ii mortars were the stiffest and showed a more brittle response than the less stiff M95-iv and M60-iv mortars. However, from Table 1, the stiffnesses of M60-iv and MP-iii were found to be similar even though MP-iii is a designation (iii) mortar.
From the stiffness/strength plot in Figure 6 and the stiffness/strength-class plot in Figure 7, it is seen that there is a much greater difference in stiffness between the strength classes of the M60 mortars than of the M95 mortars. For instance, there is a 51% increase in stiffness between M95-iv and M95-ii, while the increase in stiffness between M60-iv and M60-ii is 150%. This shows that the coarser grading of the sand in the M60 mortars is more receptive to increases in cement content, as discussed earlier. Across strength classes, for the M2 strength class the mean M60 mortar stiffness is 2300 N/mm2 higher than the corresponding M95 mortar stiffness, while for the M6 strength class the mean M60 mortar stiffness is 4100 N/mm2 higher than the corresponding M95 mortar stiffness. This indicates that, even for suitable model sands, the stiffness and strength properties for the same designation of mortar can be different. The stiffness of MP-iii was determined to be 6300 N/mm2, which is about 3% less stiff than the M95-iii mortar and 86% less stiff than the M60-iii mortar.
The one-way ANOVA analysis of all the stiffness results at a significance level of 5% is shown in Table 3. It reveals that there is a significant difference in the means of the stiffnesses, as evidenced by the low value of P, suggesting that there is a real effect of the different sand gradings on mortar stiffness.
Therefore, when modelling prototype behaviour at model scale, the grading of the model sand should be similar to that of the prototype even though the average grain size is smaller.
Stiffness/Strength Correlation
The stiffness/strength plot in Figure 6 shows a very good linear correlation between stiffness and compressive strength for both the M95 and M60 mortars. The regression equations for the M95 and M60 mortars are given in Equations (1) and (2) respectively, and the corresponding R2 values are displayed on the chart. From the R2 values for both mortar types, there is strong evidence of a linear correlation between stiffness and compressive strength.
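The regression and R2 values referred to above correspond to an ordinary least-squares fit of stiffness against compressive strength. The sketch below shows the calculation with assumed data points; the actual coefficients are those of Equations (1) and (2).

```python
import numpy as np

# Hypothetical (compressive strength, stiffness) pairs for one mortar
# family; placeholders only, not the measured values.
strength = np.array([2.1, 3.6, 6.0])       # N/mm2
stiffness = np.array([4200, 6500, 10500])  # N/mm2

# Ordinary least-squares straight-line fit and its R2 value.
slope, intercept = np.polyfit(strength, stiffness, 1)
predicted = slope * strength + intercept
ss_res = np.sum((stiffness - predicted) ** 2)
ss_tot = np.sum((stiffness - stiffness.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(f"E = {slope:.0f} * fc + {intercept:.0f}, R2 = {r_squared:.3f}")
```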
Conclusion
The results show the importance and effect of sand grading on the strength and stiffness of mortar, even for sands with similar grain sizes. They reveal that mortar made with the HST60 sand (coarser grading) usually has higher compressive strength and stiffness. The one-way ANOVA analyses of both compressive strength and stiffness at a significance level of 5% also show a significant difference in their means, implying a real and discernible effect of sand grading on both parameters. There is also strong evidence of a linear correlation between stiffness and compressive strength. Accordingly, in order to replicate the full-scale behaviour of masonry at model scales, the grading of the fine aggregate in the models should be similar to that of the prototype so as to properly model full-scale behaviour.
Figure 1 .
Figure 1. Grading curves for prototype and model sands within the BS limits.
Figure 2 .
Figure 2. Variation of compressive strength with strength class for model mortars.
Figure 3 .
Figure 3. Variation of compressive strength of model mortars with w/c ratio.
Figure 4 .
Figure 4. Comparison of typical stress/axial strain plots for prototype and model mortars.
Figure 5 .
Figure 5. Comparison of typical stress/lateral strain plots for prototype and model mortars.
Figure 6 .
Figure 6. Variation of stiffness with strength for model mortars.
Figure 7 .
Figure 7. Variation of stiffness with strength class for model mortars.
Table 1 .
Properties of prototype and model mortars (COV in brackets).
The strength class is the new nomenclature used in Eurocode 6 (EC 6) to differentiate the mortar types. Strength classes M6, M4 and M2 correspond to mortar designations (ii), (iii) and (iv) respectively.
Table 2 .
The P-values of the mortar tests showing the effect of sand grading on mortar compressive strength at a significance level of 5%.
Table 3 .
The P-values of the mortar tests showing the effect of sand grading on mortar stiffness at a significance level of 5%. | 3,689 | 2015-11-24T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Irreducible background and interference effects for Higgs-boson production in association with a top-quark pair
We present an analysis of Higgs-boson production in association with a top-quark pair at the LHC investigating in particular the final state consisting of four b jets, two jets, one identified charged lepton and missing energy. We consider the Standard Model prediction in three scenarios, the resonant Higgs-boson plus top-quark-pair production, the resonant production of a top-quark pair in association with a b-jet pair and the full process including all non-resonant and interference contributions. By comparing these scenarios we examine the irreducible background for the production rate and several kinematical distributions. With standard selection criteria the irreducible background turns out to be three times as large as the signal. For most observables we find a uniform deviation of eight percent between the scenario requiring two resonant top quarks and the full process. In particular phase-space regions the non-resonant contributions cause larger effects, and we observe shape changes for some distributions. Furthermore we investigate interference effects and find that neglecting interference contributions results in an over-estimate of the total cross-section of five percent.
Introduction
The discovery of the long-sought Higgs boson at the LHC in July 2012 [1,2] ushered in a new era of probing the mechanism of spontaneous symmetry breaking and thus mass generation in nature. The determination of the properties of the discovered Higgs boson is a major goal of the LHC. In particular, the couplings of the Higgs boson to matter particles are important for understanding the origin of mass. The production of a Higgs boson in association with a top-quark pair is of particular interest, since it gives direct access to the top-quark Yukawa coupling. Although the production rate is small and the measurement is experimentally challenging, ATLAS [3-5] and CMS [6-10] have already performed searches using the data of the LHC runs at 7 and 8 TeV. With the upcoming run 2 of the LHC at 13 TeV, the determination of the ttH signal and the potential measurement of the top-quark Yukawa coupling will be pursued.
The production of a Higgs boson in association with a top-antitop pair has been studied theoretically by many authors. Leading-order (LO) predictions for ttH production with stable Higgs boson and top quarks have been presented in Refs. [11-15], while the corresponding next-to-leading-order (NLO) corrections have been calculated in Refs. [16-20]. More recently, the matching of the NLO corrections to parton showers has been performed [21,22]. Very recently, electroweak corrections to ttH have also been computed [23,24]. NLO QCD corrections for the most important irreducible background process, ttbb production in LO QCD, have been worked out [25-28] and matched to parton showers [29-31]. A combined analysis of ttH and ttbb production was carried out in Ref. [32]. Further analyses and results can be found in the yellow reports of the LHC Higgs Cross Section Working Group [33-35].
In this article we study Higgs-boson production in association with a top-quark pair (ttH), including the subsequent semileptonic decay of the top-quark pair and the decay of the Higgs boson into a bottom-antibottom-quark pair,

pp → ttH → ℓ+ νℓ jjbbbb. (1.1)

The final state under consideration consists of six jets, four of which are b jets, one identified charged lepton (electron or muon) and missing energy; our primary goal is the study of the irreducible background. We consider this process in three different scenarios. In the first scenario we include the complete Standard Model (SM) contributions, comprising all resonant, non-resonant and interference contributions to the 8-particle final state. For the second scenario we require the intermediate resonant production of a top-quark pair in association with a bottom-quark pair. We employ the pole approximation for the top quarks and include the leptonic/hadronic decay of the top/antitop quark. Similar approaches have been used for tt production in association with massless particles at NLO QCD [36,37].
In the third scenario, corresponding to the signal, we require in addition to the resonant top-quark pair an intermediate resonant Higgs boson decaying into a bottom-quark pair and employ the pole approximation for the Higgs boson as well. We use the matrix-element generator RECOLA [38] to compute all matrix elements at leading order in perturbation theory.
Comparing the predictions in the three scenarios allows us to examine the size of the irreducible background for Higgs production in association with a top-antitop-quark pair. Further, we determine the quality of the approximations compared to the calculation of the full process and quantify deviations for the total cross section and differential distributions. We investigate the total cross section and differential distributions for the LHC operating at 13 TeV, with particular emphasis on distributions that help to enhance the signal over the irreducible background. We study, in particular, different methods of assigning a b-jet pair to the Higgs boson and compare their performance in reconstructing the Higgs signal. Furthermore, we investigate the size of interference effects between contributions to the matrix elements of different orders in the strong and electroweak coupling constants.
The paper is organised as follows. In Section 2 we specify the setup of our calculation and identify the various partonic contributions to the signal process. In Section 3 we present numerical results for the total cross section and kinematical distributions and quantify the size of the irreducible background. Our conclusions are presented in Section 4. In Appendix A we explain in detail the on-shell projections needed for the pole approximations we apply and in Appendix B we present additional results.
Notation and setup
This section provides some technical details about our computation. We consider gluons and light (anti-)quarks (u, d, c, s) to be the only constituents of the proton and disregard contributions from bottom quarks and photons in the parton distribution functions. We neglect flavour mixing as well as finite-mass effects for the light quarks and leptons. The considered matrix elements involve (multiple) resonances of electroweak gauge bosons, top quarks and the Higgs boson. For a consistent description of these resonances we use the complex-mass scheme [39][40][41], where the masses of all unstable particles are defined by the poles of the propagators in the complex plane, µ² = m² − imΓ. In addition, all couplings, in particular the weak mixing angle, are consistently derived from the complex masses and are thus complex, too.
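To make this prescription concrete, the following minimal sketch (with assumed, PDG-like mass and width values rather than the paper's exact inputs) shows how complex squared masses and the resulting complex weak mixing angle are obtained:

```python
# Minimal sketch of the complex-mass scheme: pole masses and widths define
# complex squared masses, from which a complex weak mixing angle follows.
# The numerical values are assumed PDG-like inputs, not the paper's exact ones.

def complex_mass_squared(m: float, gamma: float) -> complex:
    """mu^2 = m^2 - i*m*Gamma, the propagator pole in the complex plane."""
    return m * m - 1j * m * gamma

mu2_W = complex_mass_squared(80.385, 2.085)    # W boson (GeV)
mu2_Z = complex_mass_squared(91.1876, 2.4952)  # Z boson (GeV)

# The weak mixing angle derived from the complex masses is itself complex:
cos2_theta_w = mu2_W / mu2_Z
sin2_theta_w = 1.0 - cos2_theta_w
print(sin2_theta_w)  # ~ (0.2229 - 0.0011j): a small imaginary part appears
```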
We consider three scenarios to calculate the process pp → ℓ⁺νℓ jj bb̄ bb̄:
• In the first scenario, the full process, we include all SM contributions to the process pp → ℓ⁺νℓ jj bb̄ bb̄. Counting u, d, c, and s quarks separately, we distinguish 48 partonic channels. Eight channels can be constructed from gg → ℓ⁺νℓ q′q̄′′ bb̄ bb̄ and 40 from qq̄ → ℓ⁺νℓ q′q̄′′ bb̄ bb̄ using crossing symmetries and substituting different quark flavours. Matrix elements involving external gluons receive contributions of O(α_s α^3), O(α_s^2 α^2) and O(α_s^3 α), whereas amplitudes without external gluons receive an additional O(α^4) term of pure electroweak origin. Some sample diagrams are shown in Figures 1a-1c.
• In the second scenario we only take those diagrams into account that contain an intermediate top-antitop-quark pair. The resulting amplitude, labelled ttbb production in the following, corresponds to the production of a bottom-antibottom pair and an intermediate top-antitop pair followed by its semileptonic decay, i.e. pp → ttbb → ℓ⁺νℓ jj bb̄ bb̄. Some sample diagrams are shown in Figures 1d-1f. Note that we use the pole approximation for the top quarks only; hence we take into account all off-shell effects of the remaining unstable particles. The details of our implementation of the pole approximation are described in Appendix A. This scenario involves 10 partonic channels, comprising 2 gluon-fusion and 8 quark-antiquark-annihilation channels. As a consequence of the required top-antitop-quark pair the amplitudes receive no contribution of O(α_s^3 α).

Figure 1: Representative Feynman diagrams for (a)-(c) the full process (top), (d)-(f) ttbb production (middle) and (g)-(i) ttH production (bottom).
• Finally, we consider the signal process pp → ttH → ℓ⁺νℓ jj bb̄ bb̄ and label it ttH production. In addition to the intermediate top-antitop-quark pair we require an intermediate Higgs boson decaying into a bottom-antibottom-quark pair and employ the pole approximation for the top-quark pair and the Higgs boson as well. Here the same 10 partonic channels as in the previous case contribute. The requirement of the Higgs boson eliminates contributions of O(α_s^2 α^2) from the amplitude. The implementation of the pole approximation applied to the Higgs boson is explained in Appendix A, and some sample diagrams are shown in Figures 1g-1i.
The full process involves partonic channels with up to 78,000 diagrams. All matrix elements are calculated with RECOLA [38], which provides a fast and numerically stable computation. RECOLA computes the matrix elements using recursive methods, i.e. the complexity does not scale with the number of Feynman diagrams, and it allows intermediate particles to be specified for a given process. The phase-space integration is performed with an in-house multi-channel Monte Carlo program. Here the number of diagrams matters for the construction of the integration channels.
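The construction of integration channels follows the generic multi-channel idea; the following sketch (a generic textbook-style illustration, not the in-house program used here) shows how several channel densities are combined into one sampling density:

```python
# Generic multi-channel Monte-Carlo sketch: each channel supplies a sampling
# map adapted to one propagator structure; the combined density is a mixture.
# Illustrative only -- not the in-house integrator used for this paper.
import math
import random

def multichannel_estimate(f, channels, alphas, n_events=200_000):
    """channels: list of (sample, density) pairs on [0, 1];
    alphas: channel weights a_i with sum(a_i) = 1.
    Estimates the integral of f with g(x) = sum_i a_i g_i(x)."""
    total = 0.0
    for _ in range(n_events):
        sample, _ = random.choices(channels, weights=alphas)[0]
        x = sample()
        g = sum(a * dens(x) for a, (_, dens) in zip(alphas, channels))
        total += f(x) / g
    return total / n_events

# Integrand with a Breit-Wigner-like peak at x = 0.5 (width 0.01):
m, w = 0.5, 0.01
f = lambda x: 1.0 / ((x - m) ** 2 + w ** 2)

# Channel 1: flat sampling; channel 2: mapped to the peak (importance sampling).
flat = (lambda: random.random(), lambda x: 1.0)
lo, hi = math.atan(-m / w), math.atan((1.0 - m) / w)
bw = (lambda: m + w * math.tan(random.uniform(lo, hi)),
      lambda x: w / (((x - m) ** 2 + w ** 2) * (hi - lo)))

print(multichannel_estimate(f, [flat, bw], [0.5, 0.5]))
# exact value: (atan((1-m)/w) - atan(-m/w)) / w ~ 310.2
```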
Results
We present results for the LHC operating at 13 TeV. For the calculation of the hadronic cross section we employ LHAPDF 6.0.5 with the CT10 LO parton distributions (LHAPDF ID 10800). Following Ref. [17] we use a fixed renormalisation and factorisation scale, and the corresponding value of the strong coupling constant α_s is provided by LHAPDF. All other masses and widths are considered as pole values. The width of the top quark Γ_t for unstable W bosons is computed at leading order according to Ref. [43], involving the kinematic function

λ(ε, y) = 1 + y² + ε⁴ − 2(y + ε² + yε²).  (3.6)

The masses and widths of all other quarks and leptons are neglected. We derive the electromagnetic coupling α from the Fermi constant in the Gµ scheme [44]. We impose cuts on the transverse momenta and rapidities of leptons, jets, b jets and missing transverse momentum, as well as distance cuts between all jets in the rapidity-azimuthal plane, with the distance between two jets i and j defined as

∆R_ij = √((φ_i − φ_j)² + (y_i − y_j)²),  (3.8)

where φ_i and y_i denote the azimuthal angle and rapidity of jet i, respectively. Our selection of cuts represents standard acceptance cuts and is neither deliberately chosen to enhance the contribution of ttH production nor to suppress any background process.
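As an illustration of these acceptance cuts, here is a small sketch; the ∆R_jj > 0.4 separation and the p_T,b threshold of about 25 GeV can be inferred from Section 3.3, while the remaining thresholds are placeholder values, since the explicit cut list did not survive in the source:

```python
# Sketch of the standard acceptance cuts described above. DeltaR > 0.4 and
# p_T,b > 25 GeV are consistent with values quoted later in the text; the
# remaining thresholds are illustrative placeholders only.
import math

def delta_r(phi1, y1, phi2, y2):
    """Rapidity-azimuthal-plane distance, Eq. (3.8)."""
    dphi = abs(phi1 - phi2)
    dphi = min(dphi, 2 * math.pi - dphi)  # wrap the azimuthal difference
    return math.hypot(dphi, y1 - y2)

def passes_cuts(jets, b_jets, lepton, met):
    """Each object: dict with keys 'pt' [GeV], 'y', 'phi'. met: missing pT."""
    if any(j['pt'] < 25 or abs(j['y']) > 2.5 for j in jets + b_jets):
        return False                     # |y| < 2.5 is a placeholder value
    if lepton['pt'] < 20 or met < 20:    # placeholder thresholds
        return False
    all_jets = jets + b_jets
    return all(delta_r(a['phi'], a['y'], b['phi'], b['y']) > 0.4
               for i, a in enumerate(all_jets) for b in all_jets[i + 1:])
```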
Total cross section
In this section we analyse the total hadronic cross section for the three scenarios and cuts defined above. In Table 1 we show the total cross section for ttH production and the corresponding contributions resulting from quark-antiquark annihilation and gluon fusion. In this scenario the total cross section is σ_ttH^Total = 7.39 fb, and about 70% of the events originate from the gluon-fusion process. While the bulk of the contributions results from matrix elements of order O(α_s α^3), quark-antiquark annihilation receives an additional small contribution from pure electroweak interactions. Note that there are no interferences between diagrams of O(α^4) and O(α_s α^3) in this scenario.

  pp     O(α^4)        O(α_s α^3)   Total
  qq̄    …             …            …
  gg     —             …            …
  pp     0.014887(2)   7.377(1)     7.3920(9)

Table 1: Composition of the total cross section in fb for ttH production at the LHC at 13 TeV. In the first column the partonic initial states are listed. In the second and third columns we list the contributions resulting from the square of matrix elements of specific orders in the strong and electroweak coupling. The last column provides the total cross section (entries marked … are not recoverable from the source text).
The cross section for ttbb production is presented in Table 2. In columns two to four we show the contributions to the cross section resulting from matrix elements of specific orders in α_s. The corresponding matrix elements include only Feynman diagrams that originate exclusively from one specific order in the strong and electroweak coupling constants, e.g. results in column three are built exclusively upon Feynman diagrams of O(α_s α^3) and do not include interferences between diagrams of O(α^4) and O(α_s^2 α^2). The fifth column, labelled "Sum", represents the sum of the contributions in columns two to four and thus contains no interferences between matrix elements of different orders. The total cross section in the last column is calculated from the complete matrix element for pp → ttbb → ℓ⁺νℓ jj bb̄ bb̄, including all interferences.
  pp     O(α^4)        O(α_s α^3)   O(α_s^2 α^2)   Sum         Total
  qq̄    0.018134(6)   2.4932(9)    0.9199(2)      3.4312(9)   3.4366(6)
  gg     —             7.818(4)     16.650(9)      24.47(1)    23.010(7)
  pp     0.018134(6)   10.311(4)    17.570(9)      27.90(1)    26.446(7)

Table 2: Composition of the total cross section in fb for ttbb production at the LHC at 13 TeV. In the first column the partonic initial states are listed. In the second, third and fourth columns we list the contributions resulting from the square of matrix elements of specific orders in the strong and electroweak coupling. The fifth column shows the sum of columns two to four, while the last column provides the total cross section incorporating all interference effects.
For the total cross section we find a significant enhancement of the production rate compared to ttH production, σ_ttbb^Total = 26.45 fb, and thus an irreducible background which exceeds the ttH signal by a factor of 2.6. Note that in this definition of the irreducible background interference effects between signal and background amplitudes are included. The major contribution to the irreducible background (87%) arises from gluon fusion via the square of the O(α_s^2 α^2) matrix elements, while quark-antiquark annihilation contributes about 5% at this order. A comparison of the results for the squared O(α_s α^3) contributions between Table 1 and Table 2 shows a rise of the cross section of 49% (16%) for the gluon-fusion (quark-antiquark-annihilation) process. These enhancements of the O(α_s α^3) contribution in the ttbb scenario result from Feynman diagrams involving electroweak interactions with Z bosons, W bosons and photons (as in Figure 1e).

  pp       O(α^4)       O(α_s α^3)   O(α_s^2 α^2)   O(α_s^3 α)   Sum        Total
  qq̄      …            …            …              …            3.511(6)   3.538(4)
  gg       —            8.01(2)      17.19(6)       0.00756(2)   25.21(6)   23.71(6)
  gq, gq̄  …            …            …              …            …          …
  qq(′)    …            …            …              …            …          …
  pp       0.02120(3)   10.87(2)     18.69(6)       0.516(2)     30.10(6)   28.60(6)

Table 3: Composition of the total cross section in fb for the full process at the LHC at 13 TeV. In the first column the partonic initial states are listed, where qq(′) denotes pairs of quarks and/or antiquarks other than qq̄. In columns two to five we list the contributions resulting from the square of matrix elements of specific orders in the strong and electroweak coupling (entries marked … are not recoverable from the source text). The sixth column shows the sum of columns two to five, while the last column provides the total cross section incorporating all interference effects.
A comparison between the fifth (Sum) and sixth (Total) columns in Table 2 allows a determination of the interference contributions between matrix elements of different orders in the coupling constants. Neglecting those interference effects results in an over-estimation of the cross section of about 5%. The main interference contributions originate from the gluon-fusion channel, where they reduce the cross section by 6%, while the qq̄ channel is hardly affected. We could trace the dominant contribution to interferences of diagrams of O(α_s α^3) with W-boson exchange in the t-channel (as in Figure 1e) with diagrams of O(α_s^2 α^2) that yield the dominant irreducible background (as in Figure 1d). We confirmed the size and sign of these contributions by switching off all other contributions in RECOLA. We note that these kinds of interferences are absent in the qq̄ channel. We also investigated the interference of the signal process, i.e. all diagrams of order O(α_s α^3) involving s-channel Higgs-exchange diagrams (as in Figures 1g and 1h), with the dominant irreducible background of order O(α_s^2 α^2) and found it to be below one per cent.
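The size of these interference effects can be checked directly from the numbers in Table 2; a small worked example:

```python
# Worked check of the interference size quoted above, using Table 2 (ttbb).
sum_no_int = {"qq": 3.4312, "gg": 24.47}   # fb, incoherent sum of squared orders
total      = {"qq": 3.4366, "gg": 23.010}  # fb, complete matrix element

for ch in ("qq", "gg"):
    interference = total[ch] - sum_no_int[ch]
    print(ch, f"{interference:+.3f} fb", f"({interference / total[ch]:+.1%})")
# gg: -1.460 fb (-6.3% of the total); qq: +0.005 fb (~0%).
# Overall: 27.90 fb vs 26.446 fb, i.e. neglecting interferences
# over-estimates the ttbb cross section by about 5%.
```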
The results for the full process are listed in Table 3. We show contributions resulting from matrix elements of different orders, similarly to Table 2. In addition we list the contributions of the additional partonic channels separately. Here gq and gq̄ denote channels with gluons and quarks or antiquarks in the initial state, and qq(′) all channels involving two quarks and/or antiquarks in the initial state other than qq̄ (including channels with gluons in the final state). We obtain a total cross section of σ_full^Total = 28.60 fb including all interference effects, with the major contribution (about 83%) arising from the purely gluon-induced process. The inclusion of contributions without an intermediate top-antitop-quark pair results in an increase of about 3% for both the gg and qq̄ processes compared to the ttbb scenario. The additional partonic channels contribute about 5%. For the total cross section we find a relative increase of 8%, while for the irreducible background we note an enhancement of 11% relative to ttbb production. The signal cross section σ_ttH^Total amounts to 26% of the full cross section σ_full^Total. In the full process sizeable interference effects only appear in the gluon-induced channel and are of the same size, roughly −6%, as for ttbb production. Since the total cross sections of the full process and ttbb production differ by only about 8%, we conclude that the major interference effects are those that we identified in the underlying ttbb production process.
Differential distributions
In this section we present differential distributions for all three scenarios. We compare results for the full process with ttbb production and ttH production to assess the irreducible background to ttH production in differential distributions. In the upper panels of each plot we show the differential distribution of the full process with a black solid line, ttbb production with a dashed blue line and ttH production with a dotted red line. The lower panels provide the ratio of ttbb production to the full process with a dashed blue line and the ratio of ttH production to the full process with a dotted red line.
We study specifically the distribution in the invariant mass of two b jets selected according to different criteria, in view of identifying the b-jet pairs originating from the Higgs-boson decay. In particular, we consider the invariant mass of
• the two b jets that form the invariant mass closest to the Higgs-boson mass, M_bb,Higgs,
• the two b jets that are most likely not originating from the (anti-)top-quark decay, M_bb,non-top,
• the two b jets that have the smallest ∆R distance as defined in (3.8), M_bb,∆Rmin,
• the two b jets that have the highest transverse momenta, M_b1b2.
Figure 2 displays the corresponding b-jet pair invariant-mass distributions. In all cases the shape of the full process is well represented by the ttbb approximation, including the Higgs-boson and the Z-boson resonance. In the lower panels we observe that the difference between ttbb production and the full process is generally of the same order as for the total cross section, i.e. about 8%. For ttH production the shape and the relative size compared to the full process depend on the observable. In the following we analyse the four invariant-mass distributions in detail; a schematic implementation of three of these pairing criteria is sketched below.
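The sketch below illustrates three of the four pairing criteria (the Breit-Wigner likelihood criterion is sketched in Section 3.2.2); the jet representation and the Higgs-mass value of 125 GeV are assumptions for illustration, and delta_r is the helper from the acceptance-cut sketch above:

```python
# Schematic implementation of three b-jet pairing criteria. Jets are dicts
# with four-momentum 'p4' = (E, px, py, pz) plus 'pt', 'phi', 'y'.
from itertools import combinations

M_H = 125.0  # GeV, Higgs-boson mass used in the Monte Carlo run (assumed)

def inv_mass(p4a, p4b):
    e, px, py, pz = (a + b for a, b in zip(p4a, p4b))
    return max(e * e - px * px - py * py - pz * pz, 0.0) ** 0.5

def pair_closest_to_higgs(b_jets):          # M_bb,Higgs
    return min(combinations(b_jets, 2),
               key=lambda p: abs(inv_mass(p[0]['p4'], p[1]['p4']) - M_H))

def pair_min_delta_r(b_jets):               # M_bb,DeltaR_min
    return min(combinations(b_jets, 2),
               key=lambda p: delta_r(p[0]['phi'], p[0]['y'],
                                     p[1]['phi'], p[1]['y']))

def pair_two_hardest(b_jets):               # M_b1b2
    return tuple(sorted(b_jets, key=lambda j: j['pt'], reverse=True)[:2])
```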
Invariant mass of b-jet pair closest to the Higgs-boson mass
For this observable we compute the invariant masses of all six b-jet pairs and choose the one that is closest to the Higgs-boson mass used in the Monte Carlo run. Figure 2a displays the resulting invariant-mass distribution. For ttH production it features a very clear peak
at the Higgs-boson mass with a strong drop-off according to the Breit-Wigner shape of the Higgs-boson resonance. Thus, the ratio of ttH production and the full process almost vanishes outside the resonant region. The finite off-shell contributions are a result of the pole approximation where the on-shell projected momenta are only applied to the matrix element evaluation, but not to the resonant propagator. The phase-space integration and thus the histogram binning is performed with off-shell momenta. The ttbb and the full process have contributions where the Higgs boson is replaced by a Z boson, visible as a bump around the Z-boson mass in the differential distribution.
This observable is strongly biased, since we always choose the b-jet combination that gives the invariant mass closest to the Higgs-boson mass. This explains the rise of the differential cross section for ttbb production and the full process towards the Higgs resonance outside the peak.
Invariant mass of b-jet pair determined by top-antitop Breit-Wigner maximum likelihood
Motivated by Ref. [3] we determine the two b jets that most likely originate from the decay of the top quark (t → W⁺b → ℓ⁺νℓ b) and antitop quark (t̄ → W⁻b̄ → ū d b̄, with u = u, c and d = d, s) and plot the invariant mass of the remaining b-jet pair. Since in most events the top quark and antitop quark in ttH production are nearly on shell, the two b jets maximising the corresponding propagator contributions are most likely to originate from the top-quark and antitop-quark decay. To determine the maximising b-jet combination we compute a top-momentum candidate from the charged lepton, the neutrino and a b-jet momentum, p_t,i = p_ℓ + p_ν + p_b_i, and an analogous antitop-momentum candidate from the two momenta of the non-b jets and a different b-jet momentum, p_t̄,j = p_j1 + p_j2 + p_b_j. As b jets originating from the top-quark and antitop-quark decay we select those that maximise the likelihood function L, defined as a product of two Breit-Wigner distributions corresponding to the top-quark and antitop-quark propagators:

L(i, j) = 1/[(p_t,i² − m_t²)² + m_t²Γ_t²] × 1/[(p_t̄,j² − m_t²)² + m_t²Γ_t²].  (3.12)

In Figure 2b we present the b-jet-pair invariant mass that has been identified to originate from the Higgs-boson decay by the maximum-likelihood method described above. In the off-shell region the ratio of ttH production to the full process drops considerably below the corresponding ratio for the total cross section of about a fourth. Owing to the absence of the bias, the peak in the resonant region is more pronounced for the full process compared to the method based on the b-jet-pair invariant mass closest to the Higgs-boson mass (Figure 2a), yielding a better signal-to-background ratio. Since this method tags the b jets resulting from the top and antitop quarks, any resonance in the invariant mass of the remaining b-jet pair is resolved, and thus the Z resonance is clearly visible in the plot.
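A sketch of this maximum-likelihood assignment (with assumed top mass and width values rather than the paper's exact inputs):

```python
# Sketch of the Breit-Wigner maximum-likelihood b-jet assignment, Eq. (3.12).
from itertools import permutations

M_T, GAMMA_T = 173.0, 1.35  # GeV (assumed values)

def mass2(p4):
    e, px, py, pz = p4
    return e * e - px * px - py * py - pz * pz

def add4(*ps):
    return tuple(sum(c) for c in zip(*ps))

def breit_wigner(p2):
    return 1.0 / ((p2 - M_T ** 2) ** 2 + (M_T * GAMMA_T) ** 2)

def higgs_candidate_pair(b_jets, lep, nu, j1, j2):
    """Return indices of the b-jet pair NOT assigned to the top quarks."""
    best, best_pair = -1.0, None
    for bi, bj in permutations(range(len(b_jets)), 2):
        p_top  = add4(lep, nu, b_jets[bi])   # t -> W+ b -> l+ nu b
        p_atop = add4(j1, j2, b_jets[bj])    # tbar -> W- bbar -> j j bbar
        likelihood = breit_wigner(mass2(p_top)) * breit_wigner(mass2(p_atop))
        if likelihood > best:
            best, best_pair = likelihood, (bi, bj)
    return [k for k in range(len(b_jets)) if k not in best_pair]
```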
Invariant mass of minimal ∆R-distance b-jet pair
Next we compute for each event the ∆R distance of all six b-jet pairs and plot the invariant mass of the pair with the smallest ∆R distance. Since the Higgs boson, as a scalar particle, decays isotropically, this observable is not sensitive to Higgs-boson production at the ttH threshold, but potentially to boosted Higgs bosons. Figure 2c displays the corresponding invariant-mass distribution for all three scenarios. The Higgs-boson peak is clearly visible in all cases. However, it is less pronounced compared to the Breit-Wigner maximum-likelihood method (Figure 2b), and the ratio of peak over background is only weakly enhanced compared to the purely combinatorial effect. Outside the regions of the Higgs and Z-boson resonances the ratio of ttH production to the full process equals the average value of about a fourth, which we see in the total cross-section ratio. This indicates that the b-jet pair with minimal ∆R distance is almost evenly distributed among all six possible b-jet pairs, i.e. the method fails to tag the b-jet pair originating from the Higgs-boson decay.
The bump around the Z-boson mass in the differential distributions is weaker compared to Figure 2b, and a dip appears in the ttH ratio because the Z-boson resonance is absent in ttH production. The dip in the ttbb ratio is due to relative differences in the contributions from resonant Z bosons in the full process and in ttbb production. Thus, in the considered set-up this method is not suitable to tag the Higgs boson. This might be different if strongly boosted Higgs bosons are required.
Invariant mass of two hardest b jets
For comparison we show the distribution in the invariant mass of the pair of the two hardest b jets in Figure 2d. Also here the Higgs-boson signal is clearly visible for all processes but there is no enhancement above the purely statistical level. The apparently smaller enhancement of the Higgs peak as compared to Figure 2c is basically due to the larger bin width (10 GeV as compared to 1.5 GeV). Furthermore, outside the resonance regions ttH production is about a fourth of the full process, as for the total cross section.
Further differential distributions
In this section we investigate further differential distributions, concentrating on observables that show deviations in shape between the full process and the approximations. Some other distributions which show no significant shape deviations are listed in Appendix B.

Figure 3a shows the azimuthal separation of the b-jet pair determined by top-antitop Breit-Wigner maximum likelihood according to Section 3.2.2. While ttbb production and the full process yield a very similar shape, ttH production clearly exhibits a different shape. This behaviour can be explained by the dominant production mechanisms of bottom-antibottom pairs. In the signal process these result from the Higgs boson and, owing to the finite Higgs-boson mass, tend to have a finite opening angle. In the background processes the bottom-antibottom pairs result mainly from gluons and thus tend to be collinear, leading to a peak at small φ_bb that is cut off by the acceptance function. Thus, this distribution can help to separate bottom-antibottom pairs resulting from Higgs bosons from those of other origin.

Figure 3b displays the transverse-momentum distribution of the third-hardest b jet. We find that all three approximations are similar in shape for p_T values below 150 GeV. For higher transverse momenta the distribution for ttH production diverges from those of the full process and ttbb production. We do not see this behaviour in the transverse-momentum distributions of the two harder b jets (see Figure 5c in Appendix B), but to some extent in that of the fourth-hardest b jet. This results from the fact that in the ttH signal all b jets originate from heavy-particle decays, while in the full process some are directly produced, yielding more b jets with high transverse momenta.

The distributions in the transverse momenta of the two non-b jets are displayed in Figures 3c-3d. For the hardest non-b jet we find a similar picture as for the third-hardest b jet, with an enhancement for the full process relative to ttbb production and ttH production for transverse momenta above 150 GeV. The explanation is similar as in the preceding case: while in the ttbb process the jets originate from the top-quark decay, in the full process they can be produced directly, leading to more jet activity at high transverse momenta. In the case of the second-hardest jet both approximations exhibit a strong drop near p_T,j = 200 GeV. For higher transverse momenta two jets originating from W-boson decay are too collinear to pass the rapidity-azimuthal-angle-separation cut of ∆R_jj > 0.4, such that the corresponding events are eliminated. On the other hand, events with jet pairs of higher invariant masses, which are present in the full process, are not cut.
The sum of all transverse energies (including missing transverse energy) is depicted in Figure 3e. For small H_T the different thresholds of the approximations are clearly visible. For H_T ∼ 400-800 GeV the ttbb approximation describes the full process within 10%. As it decreases more strongly with increasing H_T, the deviation becomes larger above 800 GeV. Since H_T incorporates all transverse energies of the process, it is a measure for the average deviation of the transverse energies between the approximations and the full process.

Finally, Figure 3f presents the invariant mass of the three hardest b jets. Below the threshold M_H + p_T,b,cut ≈ 150 GeV the signal process is strongly suppressed; above it, its ratio to the full process rises to 36% at M_b1b2b3 ∼ 195 GeV and then drops slowly to 26% at M_b1b2b3 ∼ 400 GeV. The ratio of ttbb production to the full process, on the other hand, remains close to the value observed for the total cross section.
Interference effects in differential distributions
In this section we study in detail the effects of the interference contributions between matrix elements of different orders in α_s. For most distributions we find a uniform shift by roughly the same amount as for the total cross section, i.e. about 5% for ttbb production and the full process (interference effects are absent in ttH production). For both scenarios we observe a few kinematical distributions that are sensitive to these interference effects. The upper panels of Figure 4 show the results for the full process, and the central and lower panels highlight the interference effects. Specifically, the central panels show the relative difference (σ_tot − σ_sum)/σ_tot for ttbb production with a solid blue line, and the lower panels the same relative difference for the full process with a dashed line.
In Figures 4a and 4b we see that the effect of the interference for the distributions in the transverse momentum of the third-hardest b jet and of the harder non-b jet, respectively, varies monotonically with increasing p_T from −6% to −2%. Figure 4c shows the interference effects on the distribution of the invariant mass of the b-jet pair determined by top-antitop Breit-Wigner maximum likelihood. The suppression of interference in the regions of the Higgs- and Z-boson resonances is clearly visible. For invariant masses above the Higgs threshold the interference effect exceeds −10%. As shown in Figure 4d, the relative interference effects grow with increasing azimuthal-angle separation of the b-jet pair determined by top-antitop Breit-Wigner maximum likelihood, from almost zero at small angles to −25% for φ_bb,non-top = 180°, while the cross section drops with increasing azimuthal-angle separation. Also in the distributions the dominant interference effects arise from diagrams of order O(α_s α^3) involving t-channel W-boson exchange interfering with diagrams of order O(α_s^2 α^2). Interferences with diagrams for ttH production are smaller by a factor of five or more.
Conclusion and outlook
We have presented an analysis of the irreducible background for ttH production at the LHC. Specifically, we have compared the full Standard Model cross section for the production of four b jets, two jets, one identified charged lepton and missing energy with the contributions from the subprocesses ttbb production and ttH production, obtained using the pole approximation. With standard acceptance cuts we find that the total cross section of ttH production and decay is roughly a fourth of the full process, while ttbb production constitutes the major contribution to the full process with about 92 %. For all scenarios the bulk of the cross section originates from gluon-induced processes.
We analysed various b-jet-pair invariant-mass distributions based on different methods to select two of the four b jets to be identified with the decay products of the Higgs boson. We find that assigning two b jets to the top-and antitop-quark decay by maximising a combined Breit-Wigner likelihood function and assigning the remaining two b jets to the potential Higgs boson yields a good unbiased determination of the b-jet pair originating from the Higgs-boson decay.
We investigated the interferences between contributions to the matrix element of different orders in the strong and electroweak coupling constants. We find that interference effects are only sizeable for gluon-induced processes and lower the hadronic cross section by about 5%. The dominant contributions result from interferences of the QCD ttbb production diagrams of order O(α_s^2 α^2) with diagrams of order O(α_s α^3) involving t-channel W-boson exchange. Interferences between the dominant background and the ttH signal, on the other hand, are below one per cent. In most of the differential distributions the interference effects lead to a constant shift. We found, however, a few distributions where non-uniform shape changes appear.
Our analysis demonstrates the complexity of this process and provides useful information for a future NLO calculation. We found that the ttbb process provides a good approximation to the full process, with a deviation of only about 8 % for the total cross section and a uniform shift for most differential distributions. While the QCD corrections to the leading QCD contributions to ttbb production are already known, a calculation of the NLO corrections to the full ttbb process should be feasible with available tools [38,45].
Acknowledgments
This work was supported by the Bundesministerium für Bildung und Forschung (BMBF) under contract no. 05H12WWE.
A On-shell projection
In this appendix we discuss some details of our implementation of the pole approximation. For ttbb production (ttH production) we compute the matrix element with on-shell momenta for the top quarks (and the Higgs boson) and consider off-shell effects only in the corresponding propagators of the unstable particles. Thus, starting from off-shell momenta, we use an on-shell projection to generate the momenta of the resonant particles. Since the procedure of on-shell projection is not uniquely defined, we impose suitable requirements. The matrix element is strongly sensitive to the resonances, and hence we project such that the relevant invariants are preserved as far as possible. As a consequence of the on-shell projection of the top quarks we also need to incorporate proper on-shell projections for their decay products.
Analogously we obtain the projection of the W⁻ decay products in W⁻ → ū d (c̄ s). Thus, we perform six projections in total for ttH production and five for ttbb production.
Of course, the on-shell projections do not change the momenta, if the resonant particles are already on shell.
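For illustration, a minimal sketch of one simple on-shell projection, keeping the energy and the momentum direction fixed and rescaling the momentum magnitude, is given below; the projection actually used in the paper additionally adjusts the decay-product momenta as described above:

```python
# One simple on-shell projection (illustrative choice; not necessarily the
# paper's exact prescription): keep the energy and the direction of the
# three-momentum, and rescale its magnitude so that p^2 = m^2 holds exactly.
import math

def project_on_shell(p4, m):
    e, px, py, pz = p4
    p_abs = math.sqrt(px * px + py * py + pz * pz)
    target = math.sqrt(max(e * e - m * m, 0.0))  # |p| required for p^2 = m^2
    if p_abs == 0.0:
        return (e, 0.0, 0.0, 0.0)
    s = target / p_abs
    return (e, s * px, s * py, s * pz)

# If the momentum is already on shell, s = 1 and nothing changes,
# matching the remark above.
```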
B Further differential distributions
In this appendix we present further differential distributions which exhibit only minor shape deviations between the full process and the approximations. As for the total cross section, we find for most distributions a constant offset of 8% between the full process and the ttbb approximation, and about a factor of four between the full process and the ttH approximation. Only in the tails of distributions do larger deviations between the full process and the approximations appear.
In Figures 5a-5b we show the transverse momentum and the rapidity distribution of the identified charged lepton, while the transverse momentum and the rapidity distribution of the hardest b jet are depicted in Figures 5c-5d. Finally, the missing transverse momentum and the transverse mass of the charged lepton, the neutrino and the hardest b jet are provided in Figures 5e-5f.
Acousto-optic Ptychography
Acousto-optic imaging (AOI) enables optical-contrast imaging deep inside scattering samples via localized ultrasound modulation of scattered light. While AOI allows optical investigations at depth, its imaging resolution is inherently limited by the ultrasound wavelength, prohibiting microscopic investigations. Here, we propose a novel computational imaging approach that achieves optical diffraction-limited imaging using a conventional AOI system. We achieve this by extracting diffraction-limited imaging information from 'memory-effect' speckle correlations in the conventionally detected ultrasound-modulated scattered-light fields. Specifically, we identify that since speckle correlations allow the Fourier magnitude of the field inside the ultrasound focus to be estimated, scanning the ultrasound focus enables robust diffraction-limited reconstruction of extended objects using ptychography, i.e. we exploit the ultrasound focus as the scanned spatial-gate 'probe' required for ptychographic phase retrieval. Moreover, we exploit the short speckle decorrelation time in dynamic media, which is usually considered a hurdle for wavefront-shaping based approaches, for improved ptychographic reconstruction. We experimentally demonstrate non-invasive imaging of targets that extend well beyond the memory-effect range, with a 40-times resolution improvement over conventional AOI, surpassing the performance of state-of-the-art approaches.
INTRODUCTION
Optical microscopy through scattering media is a long-standing challenge with great implications for biomedicine. Since scattered light limits the penetration depth of diffraction-limited optical imaging techniques to approximately one millimeter, finding a better candidate for high-resolution imaging at depth is the focus of many recent works [1]. Modern techniques that are based on using only unscattered, 'ballistic' light, such as optical coherence tomography and two-photon microscopy, have proven very useful, but are inherently limited to shallow depths where a measurable amount of unscattered photons is present [2][3][4][5][6][7].
The leading approaches for deep-tissue imaging, where no ballistic components are present, are based on the combination of light and ultrasound [1], such as acousto-optic tomography (AOT) [8][9][10] and photoacoustic tomography (PAT) [8,11]. PAT relies on the generation of ultrasonic waves by absorption of light in a target structure under pulsed optical illumination. In PAT, images of absorbing structures are reconstructed by recording the propagated ultrasonic waves with detectors placed outside the sample. In contrast to PAT, AOT does not require optical absorption but is based on the acousto-optic (AO) effect: in AOT a focused ultrasound spot is used to locally modulate light at chosen positions inside the sample. The ultrasound spot is generated and scanned inside the sample by an external ultrasound transducer. The modulated, frequency-shifted, light is detected outside the sample using an interferometry-based approach [8,10]. This enables the reconstruction of the light intensity traversing the localized acoustic focus inside the sample. Light can also be focused back into the ultrasound focus via optical phase conjugation of the tagged light in "time-reversed ultrasonically encoded" (TRUE) [12] optical focusing, or via iterative optimization, which can be used for fluorescence imaging [13,14]. AOT and PAT combine the advantages of optical contrast with the near scatter-free propagation of ultrasound in soft tissues. However, they suffer from low spatial resolution that is limited by the dimensions of the ultrasound focus, dictated by acoustic diffraction. This resolution is several orders of magnitude lower than the optical diffraction limit. For example, for an ultrasound frequency of 50 MHz the acoustic wavelength is 30 µm, while the optical diffraction limit is λ/NA, where NA is the numerical aperture of the system and λ is the optical wavelength, i.e. a 100-fold difference in resolution. This results in a very significant gap and a great challenge for cellular and sub-cellular imaging at depth.
In recent years, several novel approaches for overcoming the acoustic resolution limit of AOT based on wavefront shaping have been put forward. These include iterative TRUE (iTRUE) [15,16], time reversal of variance-encoded (TROVE) optical focusing [17] and the measurement of the acousto-optic transmission matrix (AOTM) [18]. Both iTRUE and TROVE rely on a digital optical phase-conjugation (DOPC) system [19], a complex apparatus which conjugates a high-resolution SLM to a camera. In AOTM, an identical resolution increase as in TROVE is obtained without the use of a DOPC system, by measuring the transmission matrix of the ultrasound-modulated light and using its singular value decomposition (SVD) for sub-acoustic optical focusing. A major drawback of these state-of-the-art 'super-resolution' AOT approaches is that they require performing a large number of measurements and wavefront-shaping operations in a time shorter than the sample speckle decorrelation time. In addition, in practice, these techniques do not allow a resolution increase of more than a factor of ×3-×6 over the acoustic diffraction limit, when sub-micron optical speckle grains are considered [18]. Recently, approaches that do not rely on wavefront shaping, and exploit the dynamic fluctuations to enable improved resolution [20] or fluorescence imaging [21], have been demonstrated, but these do not practically allow a resolution increase of more than a factor of 2-3. Closing the two-orders-of-magnitude gap between the ultrasound resolution and the optical diffraction limit is thus still an open challenge.
Diffraction-limited resolution imaging through highly scattering samples without relying on ballistic light is currently possible only by relying on the optical "memory effect" for speckle correlations [22][23][24]. These techniques retrieve the scene behind a scattering layer by analyzing the correlations within the speckle patterns. Unfortunately, the memory-effect has a very narrow angular range, which limits these techniques to isolated objects that are contained within the memory-effect field-of-view (FoV). For example, at a depth of 1mm the memory-effect range is of the order of tens of microns [25,26] making it inapplicable for imaging extended objects.
Here, we present acousto-optic ptychographic imaging (AOPI), an approach that allows optical diffraction-limited imaging over a wide FoV that is not limited by the memory-effect range, by combining acousto-optic imaging (AOI) with speckle-correlation imaging. Specifically, we utilize the ultrasound focus as a controlled probe that is scanned across the wide imaging FoV, and use speckle correlations to retrieve optical diffraction-limited information from within the ultrasound focus. Importantly, we develop a reliable and robust computational reconstruction framework that is based on ptychography [27][28][29], which exploits the intentional partial overlap between the ultrasound foci. We demonstrate in proof-of-principle experiments a >×40 increase in resolution over the ultrasound diffraction limit, providing a resolution of 3.65 µm using a modulating ultrasound frequency of 25 MHz.
A. Principle
The principle of our approach is presented in Figure 1 along with a numerical example. Our approach is based on a conventional pulsed AOI setup, employing a camera-based holographic detection of the ultrasound-modulated light [8,18]. In this setup (Fig. 1(a)) the sample is illuminated by a pulsed quasi-monochromatic light beam at a frequency f_opt. The diffused light is ultrasonically tagged at a chosen position inside the sample by a focused ultrasound pulse at a central frequency f_US. The acousto-optic modulated (ultrasound-tagged) light field at frequency f_AO = f_opt + f_US is measured by a camera placed outside the sample using a pulsed reference beam that is synchronized with the ultrasound pulses, via off-axis phase-shifting interferometry [20,30] (Supplementary section 1).
In conventional AOI, the ultrasound focus is scanned along the target object (Fig. 1(b,c)), and the AOI image, I_AOI(r), is formed by summing the total power of the detected ultrasound-modulated light at each ultrasound focus position r_m^US:

I_AOI(r_m^US) = Σ_{r_cam} |E_m(r_cam)|²,

where E_m(r_cam) is the ultrasound-modulated speckle field that is measured by the camera (Fig. 1(d)). Since the conventional AOI image (Fig. 1(g)) is a convolution between the target object and the ultrasound-focus pressure distribution [20], its resolution is limited by the acoustic diffraction limit.
Our approach relies on the same data acquisition scheme as in conventional AOI (Fig. 1(b-d)). However, instead of integrating the total power of the camera-detected ultrasound-modulated field at each ultrasound position, we use the spatial information in the detected field, E_m(r_cam), to reconstruct the diffraction-limited target features inside the ultrasound focus, via a speckle-correlation computational imaging approach [23,24,31]. Specifically, we estimate the autocorrelations of the hidden target inside each ultrasound focus position (Fig. 1(e)), and then use a ptychography-based algorithm [27,28] to jointly reconstruct the entire target from all estimated autocorrelations (Fig. 1(f)). Thus, our approach exploits the richness of the information in the detected ultrasound-modulated speckle fields, which contain a number of speckle grains limited only by the camera pixel count.
Beyond improving the resolution of AOI by several orders of magnitude, from the ultrasound diffraction limit to the optical diffraction limit, our approach tackles a fundamental requirement of speckle-correlation imaging that is generally very difficult to fulfill: the entire imaged object area must be contained within the memory-effect correlation range [23,24,32], i.e. all object points must produce correlated speckle patterns [24]. This requirement usually limits speckle-correlation imaging to small and unnaturally isolated objects. Recently, ptychography-based approaches were utilized to overcome the memory-effect FoV [33,34]. However, the implementations of all FoV-extending approaches to date required direct access to the target in order to limit the illuminated area, a requirement that is impossible to fulfill in noninvasive imaging applications. Our approach overcomes this critical obstacle by relying on noninvasive ultrasound tagging to limit the detected light to originate only from a small controlled volume that is determined by the ultrasound focus. The only requirement for speckle-correlation imaging is that the ultrasound focus (Fig. 1(b), dashed yellow circle) be smaller than the memory-effect range (Fig. 1(b), dashed green circle). This allows imaging, through scattering layers, objects that extend well beyond the memory-effect FoV, without a limit on their total dimensions.
Mathematically our approach can be described as follows: Consider a target object located inside a scattering sample (Fig. 1(a)). As a simple model, we describe the object by a thin 2D amplitude and phase mask, whose complex field transmission is given by O(r). The goal of our work is to reconstruct the object 2D intensity transmission |O(r)|² by noninvasive measurements of the scattered light distributions outside the sample.

Fig. 1. Acousto-optic ptychographic imaging (AOPI) principle and numerical results. a. Schematic of the experimental setup: An AOI setup is equipped with a rotating diffuser for producing controlled speckle realizations. An object hidden inside a scattering sample is imaged by scanning a focused ultrasound beam (in yellow) over the object, and the acousto-optic modulated (frequency-shifted) light is holographically detected using a high-resolution camera. b. The ultrasound beam scans the target (dashed yellow circle). The US beam is smaller than the "memory effect" range (dashed green circle), allowing the use of speckle-correlation imaging for each scan as part of our AOPI method. c. For m = 1...M scan positions, the ultrasound beam modulates the light at the target plane. d. The modulated light propagates from the target plane through a diffuser and reaches the camera. For each scan position, N different speckle-realization fields are recorded at the camera, due to different speckle illuminations that are obtained using the rotating diffuser. e. For each scan position, the autocorrelation of the ultrasound-modulated light is estimated via correlography, using the N recorded fields. f. Numerical result of the AOPI reconstruction: the M autocorrelations for all scan positions are fed into a ptychography-based phase-retrieval algorithm and a full reconstruction of the target is obtained. g. Conventional AOI reconstruction, obtained by plotting the total modulated power at each ultrasound focus position. Scale bar 100 µm.
A monochromatic spatially-coherent laser beam illuminates the object through the scattering sample (Fig. 1(a)). The light propagating through the scattering sample results in a speckle illumination pattern at the object plane. For a dynamic scattering sample, such as biological tissue, the illuminating speckle pattern on the object is time-varying. We denote the speckle pattern field illuminating the object at a time t_n by S_n(r). The field distribution of the light that traverses the object at time t_n is thus given by O_n(r) = O(r)S_n(r). This light pattern is ultrasound modulated by an ultrasound focus whose central position, r_m^US, is scanned over m = 1..M positions inside the sample. We denote the ultrasound focus pressure distribution at the m-th position by U(r − r_m^US). The shift-invariance of the ultrasound focus is assumed here for simplicity of the derivation, and is not a necessary requirement [35]. The ultrasound-modulated light field at the m-th ultrasound focus position is given by the product of O_n(r) and U(r − r_m^US): O_m,n(r) = O_n(r)U(r − r_m^US). This field propagates to the camera through the scattering sample, producing a random speckle field at the camera plane, E_m,n(r_cam). When the ultrasound focus dimensions are smaller than the memory-effect range, D_US < ∆r_mem ≈ L∆θ_mem, where L is the depth of the object inside the scattering sample from the camera side (Fig. 1(a)) and ∆θ_mem is the angular range of the memory effect, the scattering sample can be considered as a thin random phase mask with a phase distribution φ_sample(r_cam). The scattered light field measured by a camera that images the scattering sample facet is thus given by

E_m,n(r_cam) = e^{iφ_sample(r_cam)} P_L{O_m,n}(r_cam),  (1)

where P_L is the propagation operator for propagating the field from the object plane to the scattering sample facet. The complex field autocorrelation of the target illuminated by the n-th speckle pattern and multiplied by the ultrasound field distribution, O_m,n(r) ⋆ O_m,n(r), can be calculated from a single camera frame [36]. However, multiple (n = 1..N) camera frames, captured under different speckle illuminations, can be used to calculate the target intensity autocorrelation, AC_m(r) = |O_m(r)|² ⋆ |O_m(r)|², which is free from speckle artefacts, via correlography [36], in the exact same manner as demonstrated without ultrasound modulation by Edrei et al. [31]. In correlography [31,34,36] the estimate for the un-speckled object intensity autocorrelation at the m-th ultrasound focus position, ÂC_m(r), is calculated by averaging the Fourier transforms of the captured speckle frames' intensity distributions, after subtracting their mean value [34,36]:

ÂC_m(r) ≈ AC_m(r) * ⟨|S_n(r) ⋆ S_n(r)|²⟩_n,  (2)

where the second factor is the sharply peaked speckle-grain autocorrelation.
The calculated autocorrelation, ÂC_m(r), is the autocorrelation of the object convolved with the diffraction-limited point-spread function of the imaging aperture at the facet plane. A required condition for estimating the object autocorrelation from the Fourier transform of the captured speckle patterns (Eq. 2) is that the object distance from the measurement plane, L, is larger than 2Dr_c/λ, where D is the object dimension, i.e. the ultrasound focus diameter, and r_c is the illumination speckle grain size [31]. In deep-tissue imaging, r_c ≈ λ/2, and the condition becomes L > D, i.e. the imaging depth should be larger than the dimensions of the ultrasound focus, a naturally fulfilled condition.
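A minimal sketch of this correlography estimator (an assumed schematic of the processing described above, not the authors' code):

```python
# Correlography sketch: average the squared Fourier magnitudes of
# mean-subtracted speckle intensity frames, then invert (Wiener-Khinchin).
import numpy as np

def estimate_autocorrelation(frames):
    """frames: (N, H, W) array of speckle intensity images |E_{m,n}|^2
    for one ultrasound focus position m, one image per speckle realization n."""
    acc = np.zeros(frames.shape[1:], dtype=float)
    for frame in frames:
        centered = frame - frame.mean()          # subtract the mean value
        acc += np.abs(np.fft.fft2(centered))**2  # energy spectrum of one frame
    acc /= len(frames)                           # ensemble average over n
    # Inverse FFT of the averaged energy spectrum gives the (speckle-free)
    # intensity autocorrelation estimate AC_m(r) of Eq. (2).
    return np.fft.fftshift(np.real(np.fft.ifft2(acc)))
```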
The use of multiple illumination speckle realizations, N, is advantageous both for estimating the un-speckled intensity transmission of the object, |O_m(r)|², and for improved ensemble averaging of the estimation [24,31] (see Supplementary section 5). For a dynamic sample, the speckle illuminations naturally vary in time; in the case of a static sample, the speckle realizations can be easily obtained by e.g. a rotating diffuser. According to the Wiener-Khinchin theorem, the Fourier transform of the estimated object autocorrelation, ÂC_m(r), is the squared Fourier magnitude of the object intensity transmission: |F_m(k)|² = F{ÂC_m(r)}, with F_m(k) = F{|O_m(r)|²}. The object itself can thus be reconstructed from |F_m(k)| via phase retrieval [37].
If a partial overlap between the scanned ultrasound foci exists, the reconstruction problem can be reliably solved using ptychography, an advanced joint phase-retrieval technique [27][28][29][38] that has recently been shown to be extremely successful in stable, high-fidelity, robust reconstruction of complex objects, which is not possible by separately solving the M phase-retrieval problems. A numerical example of using our approach to image an extended object beyond the memory-effect FoV with diffraction-limited resolution is shown in Fig. 1(f), side-by-side with the conventional AOI image of the same object using the same measurements (Fig. 1(g)). A resolution increase of >×25 over conventional AOI is apparent in the high-fidelity reconstruction (Fig. 1(f)). A detailed explanation of the data processing and the implemented ptychographic reconstruction algorithm can be found in Supplementary sections 4 and 7.
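For illustration, a minimal ePIE-style sketch of such a joint reconstruction from the estimated Fourier magnitudes is given below; the actual reconstructions in this work use the rPIE variant of Ref. [28], detailed in Supplementary section 7, and the update rule here is a simplified assumption:

```python
# Schematic ePIE-style joint reconstruction. Inputs (assumed shapes):
# fourier_mags[m] = |F_m(k)| estimated from the m-th autocorrelation,
# probes[m] = ultrasound-probe intensity |U(r - r_m^US)|^2 on the full grid.
import numpy as np

def ptycho_reconstruct(fourier_mags, probes, n_iter=200, step=0.1):
    obj = np.random.rand(*probes[0].shape)       # initial guess for |O(r)|^2
    for _ in range(n_iter):
        for mag, probe in zip(fourier_mags, probes):
            exit_wave = obj * probe              # object gated by the US focus
            spectrum = np.fft.fft2(exit_wave)
            # enforce the measured Fourier magnitude, keep the current phase
            spectrum = mag * np.exp(1j * np.angle(spectrum))
            updated = np.fft.ifft2(spectrum)
            # ePIE-style object update, weighted by the probe
            obj += step * probe / (probe.max() ** 2 + 1e-12) * \
                   np.real(updated - exit_wave)
    return np.clip(obj, 0, None)                 # intensity is non-negative
```

The probe overlap between neighbouring scan positions is what couples the M sub-problems and stabilizes the joint solution.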
A. Experimental set up
To demonstrate our approach we built a proof-of-principle setup, schematically shown in Fig. 1(a). It is a conventional AOI setup with camera-based holographic detection based on phase-shifting off-axis holography (Supplementary section 1), with the addition of a controlled rotating diffuser before the sample (two 1° light-shaping diffusers, Newport), used for generating dynamic random speckle illumination. The illumination is provided by a pulsed long-coherence laser at a wavelength of 532 nm (Standa). An ultrasound transducer with a central frequency of f_US = 25 MHz and ultrasound focus dimensions of D_X = 149 µm, D_Y = 140 µm full-width at half maximum (FWHM) in the horizontal (transverse) and vertical (axial ultrasound) directions, correspondingly, is used for acousto-optic modulation. The ultrasound focus position was scanned laterally by a motorized stage, and axially by electronically varying the time delay between the laser and ultrasound pulses. The full setup description is given in Supplementary section 1. As controlled scattering samples and imaged targets for our proof-of-principle experiments we used a sample comprised of a target placed in water between two scattering layers composed of several 5° scattering diffusers that have no ballistic component (see Supplementary section 6). An sCMOS camera (Andor Zyla 4.2 plus) was used to holographically record the ultrasound-modulated scattered light fields, using a frequency-shifted reference beam. To minimize distortions in the recorded fields, no optical elements were present between the camera and the diffuser. The field at the diffuser plane, E_m,n(r_cam), was calculated from the camera-recorded field by digital propagation (Supplementary section 1).
B. Imaging an extended object beyond the memory effect
As a first demonstration, we imaged a transmissive target composed of nine digits (Fig. 2(a)) that extends over 3.5 times beyond the memory-effect range of the scattering sample, which is ∆r_mem ∼ 280 µm (Fig. 2(c(i))). For 2D imaging, the ultrasound focus (Fig. 2(c(ii))) was scanned over the target with a step size of ∆X = 44.7 µm, ∆Y = 37.3 µm, in the horizontal and vertical directions, respectively. These steps (along with the ultrasound spot size) define a probe overlap of ∼88% between neighboring positions (Fig. 2(b)). A study of the effect of the probe overlap on the reconstruction fidelity is presented in Supplementary section 8. For each ultrasound focus position, r_m^US (m = 1..224), we recorded N = 150 different ultrasound-modulated light fields, E_m,n(r_cam), each with a different (unknown) speckle realization, S_n(r). The target was reconstructed from the M = 224 autocorrelations using the rPIE ptychographic algorithm [28] (Supplementary section 7). Fig. 2(c(iii)) presents an example of one of the autocorrelations used as input. The AOPI reconstructed image (Fig. 2(e)) provides a resolution well beyond that of a conventional AOI reconstruction (Fig. 2(h)), and also well beyond the improved resolution of recent super-resolution AOI techniques such as AO-SOFI [20] (Fig. 2(i-j)). Importantly, since the target extends beyond the memory-effect range, as is the case in many practical imaging scenarios, conventional speckle-correlation imaging without AO modulation [24,31] fails to reconstruct the object (Fig. 2(d)), as expected. Interestingly, when only part of the measured positions from the center of the scanned area is used (only 6×8 positions from the center of the object area, instead of all 28×8 positions, Fig. 2(f)), a good reconstruction of the probed region is still obtained with the AOPI method (Fig. 2(g)), proving that the acoustic probe functions well as an isolation aperture.
C. Imaging resolution verification
To demonstrate the resolution increase of AOPI we performed an additional experiment where the target of Fig. 2 was replaced by elements 3-4 of group 6 of a negative USAF-1951 resolution test chart (Fig. 3(a)). For 2D imaging, the ultrasound focus (Fig. 3(d)) was scanned over the target with a step size of ∆X = ∆Y = 29 µm. These steps (along with the ultrasound spot size) define a probe overlap of ∼93% between neighbouring positions (Fig. 3(b)). For each ultrasound focus position, r_m^US, m = 1..72, we recorded N = 150 different ultrasound-modulated light fields, E_m,n(r_cam) (Fig. 3(c)), each with a different (unknown) speckle realization, S_n(r). The reconstruction of the target from the M = 72 autocorrelations using the rPIE ptychography algorithm [28] is presented in Fig. 3(g-h). A study of the effect of probe overlap on the reconstruction is presented in Supplementary section 8. The AOPI reconstructed image resolves resolution-target features of size (separation) of 5.52 µm (Fig. 3(g-h)).

Fig. 3. e. Conventional AOI reconstruction; the acoustic probe limits the imaging resolution to the probe dimensions. f. Speckle-correlation reconstruction using a classic phase-retrieval algorithm (the same algorithm as in the ptychography engine). g. AOPI reconstruction using the rPIE algorithm. h. Cross-sections of the AOPI reconstruction from (g), resolving features ∼×30 smaller than the acoustic focus, a ∼×40 increase in resolution compared to classical AOI methods. Horizontal cross-section line width is 6.2 µm (orange) and vertical cross-section line width is 5.52 µm (bright orange). Scale bar 40 µm.
The cross-sections of the reconstructed image (Fig. 3(h)) allow the imaging resolution to be estimated by fitting the result to a convolution of the known sample structure with a Gaussian PSF. This yields a resolution of 3.65 µm (FWHM), a 40-fold increase in resolution compared to the acoustic resolution of conventional AOI (Fig. 3(e)). Interestingly, although the target dimensions in this experiment are contained within the memory-effect range, conventional speckle-correlation imaging based on phase retrieval without AO modulation (Fig. 3(f)) results in a considerably lower reconstruction fidelity than the AOPI reconstruction (Fig. 3(g)). This improvement is attributed to the larger input data set and improved algorithmic stability of ptychographic reconstruction compared to simple phase retrieval [27,38].
DISCUSSION AND CONCLUSION
We proposed and demonstrated an approach for diffraction-limited, wide-FoV optical imaging deep inside scattering samples, combining ultrasound tagging with speckle-correlation computational imaging [31]. In contrast to previous approaches for super-resolved acousto-optic imaging [15][16][17][18], the resolution of our approach is optically diffraction-limited, independent of the ultrasound probe dimensions, the ratio between the speckle grain size and the ultrasound probe dimensions, or the number of realizations [18]. This allowed us to demonstrate a ×40 improvement in resolution over the acoustic diffraction limit, an order-of-magnitude larger gain in resolution compared to state-of-the-art approaches such as iTRUE [15,16], TROVE [17], and AOTM [18]. In addition, TROVE and AOTM allow the resolution increase only when unrealistically large speckle grains are considered [18]. Another important advantage of our approach is that, unlike transmission-matrix and wavefront-shaping based approaches [17,18], it does not require unrealistically long speckle decorrelation times. Similar to recent approaches that utilize random fluctuations [20,34], our approach benefits from the natural speckle decorrelation to generate independent realizations of coherent illumination, improving the estimation of the object autocorrelation [31]. While our approach relies on the memory effect to retrieve the diffraction-limited image, its FoV is not limited by the memory-effect range, as is the case in all other noninvasive memory-effect based techniques [23,24]. The FoV is dictated by the scanning range of the ultrasound focus, which is practically limited only by the allowed acquisition time. Such an extension of the FoV in speckle-correlation based imaging has only been obtained before by invasive access to the target object [33,[39][40][41]. The adaptation of a ptychographic image reconstruction significantly improves the reconstruction fidelity and stability compared to phase-retrieval reconstruction (Fig. 3(f-g)) [23,24].
Our super-resolution AOPI approach does not rely on wavefront shaping [15-18] or nonlinear effects [42], and it can be applied to any AOI system employing camera-based coherent detection.
The main limitation of our approach is the requirement for a memory-effect range on the order of the ultrasound probe dimensions, i.e., that the ultrasound-tagged fields have non-negligible correlations. This condition can be satisfied by relying on a small ultrasound focus, achieved by the use of high-frequency ultrasound, and by the use of a long laser wavelength, which increases the memory-effect range [32,43]. Importantly, while at very large imaging depths, deep within the diffusive light propagation regime, the memory-effect angular range is narrow, scaling inversely with the medium thickness [32,43], at millimeter-scale depths, which are of the order of the transport mean free path (TMFP), the memory-effect range has been shown to be orders of magnitude larger [44]. In addition, the requirement for a sufficiently large memory effect can be alleviated by relying on translation correlations [25] or the generalized memory effect [26]. Applying the above improvements (a higher ultrasound frequency, a longer optical wavelength, generalized speckle correlations) is the next step for bringing our proof-of-principle demonstrations to practical biomedical imaging applications. Another necessary technical improvement for biomedical application of our approach is the demonstrated acquisition speed. Like all super-resolution techniques that do not rely on object priors, our approach requires a large number of measurements for reconstructing a single image. The required number of frames is the product: (number of probe positions) × (number of realizations per probe position) × (number of phase-shifting frames). In our proof-of-principle demonstration system, which was not optimized for acquisition speed, we used 150 realizations per probe position and 16 phase-shifting frames. A diffuser mounted on a slowly rotating motor was used to change the realizations, and a conventional sCMOS camera was used to capture the images, resulting in an acquisition time of ∼1.6 seconds per realization. This time can be reduced by orders of magnitude using single-shot, off-axis, fast camera-based detection [18,21,45,46], a fast MEMS-based dynamic wavefront randomizer [47], and a 2D electronic-scanning ultrasound array instead of the mechanical scan of the single-element ultrasound transducer [48]. Assuming the acquisition speed is limited by a camera frame rate of ∼7,000 frames per second [18], the acquisition of 150 realizations for 72 probe positions (as in Fig. 3) would take on the order of ∼1.5s, excluding off-line data processing. This is expected to be adequate for imaging biological structures, since the requirement on the speckle decorrelation time is that of a single frame, i.e., ∼0.1ms. Moreover, the number of required realizations is expected to be significantly reduced by using advanced correlation-based reconstruction schemes such as those provided by deep neural networks (DNNs), which have recently been shown to significantly improve the estimation of the intensity autocorrelation from only a few coherent realizations [49].
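As a quick sanity check of the quoted estimate (single-shot off-axis detection removes the phase-shifting factor from the product above):

```latex
N_{\mathrm{frames}} = \underbrace{72}_{\text{probe positions}} \times \underbrace{150}_{\text{realizations}} = 10\,800,
\qquad
T_{\mathrm{acq}} \approx \frac{N_{\mathrm{frames}}}{7000\ \mathrm{fps}} \approx 1.54\ \mathrm{s}.
```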
The combination of state-of-the-art optical, ultrasound, and computational imaging approaches has the potential to significantly impact imaging deep inside complex samples.
ACKNOWLEDGMENTS
We thank Prof. Hagai Eisenberg for the Q-switched laser, and we thank the Nanocenter at the Hebrew University, with special thanks to Dr. Itzik Shweky and Galina Chechelinsky, for fabricating the target samples.
DISCLOSURES
The authors declare no conflicts of interest.
EXPERIMENTAL SETUP
The full experimental setup is shown schematically in Supplementary Figure S1. It is a pulsed acousto-optic imaging (AOI) experiment, employing camera-based digital phase-shifting holographic detection [1,2]. The light source is a single longitudinal-mode 532nm passive Q-switched laser (Standa STA-01-SH-4) providing <1ns duration pulses at an f_rep = 25kHz repetition rate. The laser output is split into a reference and an object arm by a polarization beam splitter (PBS), with a half-wave plate (HWP) controlling the splitting ratio.
At the object arm, the beam is expanded to a width of ∼4mm and passed through an f = 200mm focal-length lens and a diffuser (two stacked 1° holographic diffusers, Newport) mounted on a computer-controlled rotating stage, generating a controlled speckle decorrelation time in the illumination beam. The speckled beam was introduced into the water tank containing the target, which was placed between two scattering layers in a sandwich configuration. The first scattering layer was a 5° diffuser placed ∼1cm before the target, and the second layer was composed of two 5° diffusers stacked together. The distance from the target to the second layer was L = 5cm. The targets were fabricated on 1.5mm-thick glass slides coated with Ti and Ag: a 20nm-thick Ti layer above a 100nm-thick Ag coating, patterned using e-beam lithography.
An sCMOS camera (Andor Zyla 4.2) was placed at a distance of 9cm from the second diffuser. For acousto-optic tagging, an ultrasound transducer with a central frequency of 25MHz (Olympus V324, focal length = 12.7mm, F# = 2) was used. The transducer was driven by 100ns-long sinusoidal pulses at a central frequency of f_US = 25MHz + f_rep/2 + 1/T_exposure, where T_exposure is the camera exposure time. The pulses were generated by a function generator (Keysight 33600A) with a peak-to-peak amplitude of 2.4V_pp, amplified by 40dB by an RF power amplifier (Amplifier Research 25A250A), resulting in a 240V_pp driving amplitude.
Fig. S1 (caption): The experimental setup. PBS - polarizing beam splitter. AOM - two acousto-optic modulators. BD - beam dump. FG - function generator. AMP - amplifier. UST - ultrasonic transducer. f_opt - laser optical frequency (f_opt = c/λ). f_US - ultrasound driving frequency: f_US = 25MHz + f_rep/2 + 1/T_exposure, where f_rep is the laser pulse repetition rate and T_exposure is the camera exposure time.
The ultrasound transducer was mounted on a computer-controlled motorized translation stage (Thorlabs), which allowed transverse horizontal scanning of the acoustic focus. The vertical position of the ultrasound focus was controlled by varying the relative delay between the acoustic pulses and the laser trigger pulses. To ensure accurate temporal synchronization of the <1ns laser pulses, and to compensate for slow temporal drifts of the passively Q-switched laser pulse timing relative to the laser trigger, a photodiode was used to trigger each acoustic pulse from the previous laser pulse.
At the reference arm, two acousto-optic modulators (AA Opto-Electronic MT80-A1) connected to RF amplifiers (Mini-Circuits ZHL-3A) were used to frequency-shift the reference beam by f_AOM = f_US + f_phase-shifting, where f_phase-shifting = f_cam/4 and f_cam is the sCMOS camera frame rate.
The ultrasound frequency shift was chosen to be f_US = 25MHz + f_rep/2 + 1/T_exposure both to efficiently reject the coherent interference of the unmodulated speckle background, which originates from the use of laser pulses shorter than the ultrasound period and pulse width [3,4], and to average over the sinusoidal spatial modulation of the ultrasound pulse. Efficient rejection of the unmodulated light is obtained by adding a frequency shift of f_rep/2 = 12.5kHz [3], and the additional 1/T_exposure exploits the travelling ultrasound pulse to average the sinusoidal spatial modulation over a single ultrasound wavelength [2], effectively providing a smooth ultrasound probe in the pulse propagation (vertical) direction (Fig. S2(a,c)). The additional frequency shift of f_phase-shifting = 5Hz allows 4-frame phase-shifting holographic detection.
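For concreteness, a minimal Python sketch of the driving-frequency arithmetic; the exposure time used here is an illustrative assumption, not a value from the paper:

```python
# Minimal sketch of the ultrasound driving frequency used for background rejection
# and probe smoothing: f_US = f0 + f_rep/2 + 1/T_exposure.
f0 = 25e6          # transducer central frequency, Hz
f_rep = 25e3       # laser pulse repetition rate, Hz
T_exposure = 0.05  # camera exposure time, s (illustrative assumption)

f_us = f0 + f_rep / 2 + 1 / T_exposure
print(f"f_US = {f_us:.1f} Hz (= 25 MHz + 12.5 kHz + {1 / T_exposure:.0f} Hz)")
```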
ULTRASOUND PROBE CHARACTERIZATION
A direct optical characterization of the ultrasound focus, measured by removing the diffusers before the camera and placing an imaging lens, averaged over 200 different speckle illuminations, is presented in Fig. S2. One-dimensional profiles of the ultrasound focus along the horizontal (x-axis) and vertical (y-axis) dimensions are presented in Fig. S2(b-c). Their measured full widths at half maximum (FWHM) are D_X = 149µm and D_Y = 140µm, respectively.
EXPERIMENTAL CHARACTERIZATION OF THE SCATTERING LAYERS MEMORY-EFFECT RANGE
The memory-effect range of the scattering layer through which imaging was performed in our experiments (two 5° diffusers, Newport) was characterized by comparing two autocorrelations of the extended object used in Fig. 2 of the main text (Supplementary Fig. S3(a)): the autocorrelation of the direct image, obtained with an imaging lens replacing the scattering layer (Supplementary Fig. S3(b)), and the autocorrelation obtained by correlography through the scattering layer but without acousto-optic tagging, following Edrei et al. [5] (Supplementary Fig. S3(c)). The memory-effect correlation at each shift at the object plane was estimated by calculating the ratio between the two autocorrelations and taking the envelope of the resulting ratio. Supplementary Fig. S3(d) presents the horizontal cross-section of the memory-effect range, along with the acoustic probe cross-section and autocorrelation. The resulting memory-effect FWHM is ∼280µm at the target distance of ∼5cm, i.e., an angular memory-effect range of ∼5.6mrad = 0.32°. The requirement for acousto-optic ptychographic imaging (AOPI) is that the acoustic probe be contained within the memory-effect range. In this way, each position can be retrieved using the speckle-correlation method, while the full object can extend beyond the memory-effect range indefinitely. As can be seen from Fig. S3(d), under our experimental conditions the extended object (Fig. 2) indeed extends beyond the memory effect, while the ultrasound probe itself and its spatial autocorrelation are smaller than the memory-effect correlation range, as required. In cases where the probe dimensions are similar to the memory-effect range, one can digitally compensate for the reduction in memory-effect correlation by normalizing the autocorrelation calculated through the scattering medium by the memory-effect correlation function at each angle. This would relax the requirement on the probe size relative to the memory-effect range, which might be significant when imaging in biological samples.
Fig. S3 (caption excerpt): The memory-effect correlation FWHM is 280µm, ∼3.5 times smaller than the object size but ∼2 times larger than the ultrasound focus dimensions. Scale bars: 200µm.
DATA PROCESSING FOR CALCULATING THE ULTRASOUND-GATED AUTOCORRELATIONS
Here we provide the technical details of our experimental autocorrelation calculation. As discussed in the main text, the complex field autocorrelation of the target can be calculated from a single camera frame obtained for a single speckle illumination [6]. However, this is the autocorrelation of a speckled target, and thus contains a sharp diffraction-limited peak representing the diffraction-limited speckle grain size. To overcome this, multiple camera frames captured under different speckle illuminations can be used to estimate the target's incoherent intensity autocorrelation, which, in the case of sufficient speckle averaging, is free from speckle artifacts, via correlography [5-8]. The use of multiple speckle realizations is also advantageous for improving the SNR of the autocorrelation estimation via ensemble averaging [6]. While for an infinite number of speckle patterns the diffraction-limited speckle autocorrelation peak would decrease to zero, when a finite number of realizations is used the autocorrelation still contains speckle features (see next section) and a sharp diffraction-limited peak, which can be significantly larger than the object autocorrelation features. In our experiments, 150 different speckle realizations were used at each scan position, and the speckle-grain autocorrelation peak was reduced by replacing the peak intensity with an intensity ∼3 times larger than its neighbouring pixels, following [5]. For background-noise removal and filtering, the peak-reduced autocorrelation was additionally filtered by a Tukey window with a width adjusted to the ultrasound-focus autocorrelation width. Finally, the windowed autocorrelation was filtered by thresholding its Fourier magnitude, resulting in noise removal and spatial smoothing.
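A minimal, schematic Python sketch of this pipeline (assumed, not the authors' code; array shapes, the window width, and the threshold value are illustrative):

```python
# Schematic sketch of the ultrasound-gated autocorrelation pipeline described above:
# ensemble averaging over speckle realizations, suppression of the sharp
# speckle-grain peak, Tukey windowing, and Fourier-magnitude thresholding.
import numpy as np
from scipy.signal.windows import tukey

def estimate_autocorr(fields, tukey_alpha=0.5, fourier_thresh=0.01):
    """fields: (N, H, W) complex ultrasound-tagged fields E_{m,n}(r_cam)."""
    # Wiener-Khinchin: ensemble-averaged energy spectrum <-> autocorrelation
    spec = np.mean(np.abs(np.fft.fft2(fields, axes=(-2, -1))) ** 2, axis=0)
    ac = np.fft.fftshift(np.fft.ifft2(spec).real)
    # Replace the diffraction-limited peak with ~3x its neighbours (following [5])
    cy, cx = ac.shape[0] // 2, ac.shape[1] // 2
    neigh = ac[cy - 1:cy + 2, cx - 1:cx + 2].copy()
    neigh[1, 1] = np.nan
    ac[cy, cx] = 3.0 * np.nanmean(neigh)
    # Tukey window (its width would be matched to the US-focus autocorrelation)
    ac *= np.outer(tukey(ac.shape[0], tukey_alpha), tukey(ac.shape[1], tukey_alpha))
    # Denoise by thresholding the Fourier magnitude of the windowed autocorrelation
    AC = np.fft.fft2(ac)
    AC[np.abs(AC) < fourier_thresh * np.abs(AC).max()] = 0.0
    return np.fft.ifft2(AC).real
```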
EFFECT OF THE NUMBER OF SPECKLE REALIZATIONS ON AUTO-CORRELATION ESTIMATION
To determine the number of required speckle realizations (illuminations) we have studied the effect of the number of speckle realizations on the retrieved autocorrelation. The results of this study are given in Fig. S4. As expected, a lower number of speckle realizations (Fig. S4.(d-f)) results in speckle artefacts in the autocorrelation estimate, harming the reconstruction fidelity.
CHARACTERIZATION OF THE DIFFUSER USED IN THE EXPERIMENT
When imaging through a scattering medium, one may suspect that some of the imaging information is carried by ballistic components that pass through the scattering medium along with the diffused components. To verify that in our experiments there is no significant ballistic component that allows imaging, and to study the scattering of the diffusers used, we performed an experiment that directly measures the scattering function of the scattering layer used in our experiments. To this end, we focused a beam from a collimated laser diode at 520nm (Thorlabs CPS520) on an sCMOS camera (Andor Zyla) using an f = 100mm lens, and measured the focused intensity with and without the scattering layer placed in the path between the lens and the camera. The results of this experiment are shown in Fig. S5(a), and show that the transmission through the scattering layer does not contain any significant ballistic component, as expected.
PTYCHOGRAPHIC IMAGE RECONSTRUCTION PIPELINE
An iterative minimization algorithm was used to solve the joint phase-retrieval problem. We compared several algorithms from the ptychographic iterative engine (PIE) family [9,10]. The best results on our experimental data were obtained using the regularized PIE (rPIE) algorithm [10], which was used for all reconstructions displayed in this work. Below, we provide the details of the algorithmic steps for reconstructing the full objects from the set of estimated autocorrelations, which are calculated from the measured scattered patterns, as explained in the previous sections.
In a nutshell, the joint phase-retrieval algorithm follows a scheme similar to Fienup's hybrid input-output (HIO) algorithm for phase retrieval [11], but using a set of power spectra, which in our case are the Fourier magnitudes of the measured autocorrelations, where the scanned acoustic probe acts as the isolated-object constraint (spatial gate). The ptychographic algorithm starts by guessing an initial object and estimating the acoustic probe. During every iteration, the algorithm enforces the measured object Fourier amplitudes and the estimated probe as constraints following the PIE update rules, until convergence. Since our method aims at reconstructing the target intensity pattern, we also enforced a non-negativity constraint on the reconstructed object. rPIE is a slightly modified, advanced version of the original ePIE, differing in the update rules for the exit wave function, probe, and object [10]. For simplicity, we present here the original ePIE algorithm. A full flowchart of our method, including both the autocorrelation estimation and the ptychographic reconstruction, is given in Fig. S6.

At the m-th probe scan position and the j-th iteration of the algorithm, the exit wave function is

ψ_{j,m}(r) = O_{j,m}(r) · U_{j,m}(r − r_US_m),   (S2)

where O_{j,m}(r) is the object and U_{j,m}(r − r_US_m) is the probe illumination shifted by r_US_m relative to r, a two-dimensional coordinate along the ultrasound propagation direction and perpendicular to it. The initial probe guess is a Gaussian profile along the horizontal axis and a convolution between a Gaussian and a rectangle profile along the vertical axis, based on prior measurements of the acoustic probe (Supplementary section 2). In the next step, the algorithm constrains the Fourier amplitude of the exit wave function to the Fourier amplitude estimated from the object autocorrelation, keeping the Fourier phase, yielding an updated exit wave ψ′_{j,m}(r). The algorithm then uses the new exit wave function to update first the object, under the assumption that the probe is fixed, and then the probe, under the assumption that the object is fixed, with the update rules

O_{j+1}(r) = O_j(r) + α · [U*_j(r − r_US_m) / max|U_j(r − r_US_m)|²] · (ψ′_{j,m}(r) − ψ_{j,m}(r)),   (S3)

U_{j+1}(r) = U_j(r) + β · [O*_j(r + r_US_m) / max|O_j(r + r_US_m)|²] · (ψ′_{j,m}(r) − ψ_{j,m}(r)),   (S4)

where α, β are weight parameters for the update feedback; α is usually between 0.7-0.9 and β ≈ 0.1. When all M scan positions have been visited in random order, the j-th ptychography iteration is complete.
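For concreteness, a minimal Python sketch (schematic, not the authors' implementation) of one ePIE iteration with the Fourier-magnitude constraint and the non-negativity projection described above; `shifts` is assumed to be a list of integer pixel shifts (dy, dx), and rPIE would modify only the update denominators [10]:

```python
# Schematic ePIE iteration over the M probe positions, following Eqs. (S2)-(S4).
import numpy as np

def epie_iteration(obj, probe, fourier_mags, shifts, alpha=0.8, beta=0.1):
    """obj, probe: 2D arrays (initial guesses, nonzero); fourier_mags[m]: target
    Fourier magnitudes at probe position m; shifts[m]: pixel shift (dy, dx)."""
    for m in np.random.permutation(len(shifts)):
        pr = np.roll(probe, shifts[m], axis=(0, 1))     # U_j(r - r_m)
        psi = obj * pr                                   # exit wave, Eq. (S2)
        PSI = np.fft.fft2(psi)
        PSI = fourier_mags[m] * np.exp(1j * np.angle(PSI))  # enforce |FT|, keep phase
        diff = np.fft.ifft2(PSI) - psi                   # psi' - psi
        obj = obj + alpha * np.conj(pr) / np.abs(pr).max() ** 2 * diff   # Eq. (S3)
        upd = np.conj(obj) / np.abs(obj).max() ** 2 * diff
        # Shift the probe update back to the probe frame, equivalent to Eq. (S4)
        probe = probe + beta * np.roll(upd, [-s for s in shifts[m]], axis=(0, 1))
        obj = np.maximum(obj.real, 0)  # non-negativity of the intensity object
    return obj, probe
```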
NUMERICAL STUDY OF THE RECONSTRUCTION FIDELITY AS A FUNCTION OF NUMBER OF PROBE POSITIONS
The significant advantage of ptychography over individual reconstruction of the object at each probe position, is in the joint solution of the phase-retrieval problem, exploiting information contained in the overlap between neighboring probe positions [12]. To determine the required number of probe positions, or equivalently the overlap between neighbouring probe positions, we performed a numerical investigation of the influence of the number of probe positions on the reconstruction performance. The results of this study are shown in Figure S7.
For the numerical experiment, a USAF-1951 resolution test chart with a minimal line width of 6.24µm and an experimentally measured acoustic probe (Fig. S2) were used. The conventional AOI reconstruction from 868 probe positions is presented in Fig. S7(a); it is a low-resolution reconstruction in which even the largest features of the target cannot be distinguished. An ideal AOPI reconstruction from 868 scans, assuming an infinite number of speckle averages (i.e., effectively incoherent illumination), is presented in Fig. S7(b); here each autocorrelation was estimated from a single incoherent illumination, free of speckle, and the result shows very well resolved features and a high-resolution reconstruction of the target. Fig. S7(c-f) shows the AOPI reconstruction using 150 speckle realizations at each probe position, for different numbers of probe positions: 868 scans (89% overlap between neighboring positions), 460 scans (80% overlap), 272 scans (67% overlap), and 72 scans (24% overlap), respectively.
To quantify the reconstruction fidelity, we calculated the structural similarity index (SSIM) [13] between each reconstruction and the target object. The graph in Fig. S7(g) plots the SSIM as a function of the number of probe positions. For the AOPI reconstruction using 150 speckle realizations, the reconstruction fidelity does not degrade until the overlap drops below 67% (i.e., 272 scans). In our experiments an overlap of approximately 90% was used.
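A minimal sketch of the fidelity metric, assuming scikit-image is available; the function name and the normalization are illustrative:

```python
# Reconstruction fidelity via the structural similarity index (SSIM) [13],
# as plotted in Fig. S7(g).
import numpy as np
from skimage.metrics import structural_similarity

def fidelity(recon, target):
    r = (recon - recon.min()) / (np.ptp(recon) + 1e-12)
    t = (target - target.min()) / (np.ptp(target) + 1e-12)
    return structural_similarity(r, t, data_range=1.0)
```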
"Physics"
] |
Online Denoising Based on the Second-Order Adaptive Statistics Model
Online denoising is motivated by real-time applications in industrial processes, where the data must be utilizable soon after it is collected. Since the noise in practical processes is usually colored, it poses quite a challenge for denoising techniques. In this paper, a novel online denoising method is proposed to process practical measurement data with colored noise, with the characteristics of the colored noise considered in the dynamic model via an adaptive parameter. The proposed method consists of two parts within a closed loop: the first estimates the system state based on the second-order adaptive statistics model, and the other updates the adaptive parameter in the model using the Yule-Walker algorithm. Specifically, the state estimation is implemented via the Kalman filter in a recursive way, and the online purpose is therefore attained. Experimental data from a reinforced concrete structure test was used to verify the effectiveness of the proposed method. Results show the proposed method not only dealt with the signals with colored noise, but also achieved a tradeoff between efficiency and accuracy.
Introduction
With recent improvements in sensor technologies, information networks, and telemetry, an enormous amount of data is collected every day. At the same time, with the help of data processing techniques, policy-makers and scientists are now able to deploy these sampled data in significant applications such as target location [1], disease case count prediction [2], structural health monitoring [3], and financial forecasting [4]. However, signals may be subject to random noise in practical processes, due to such reasons as incorrect measurements, faulty sensors, or imperfect data collection. Any noise and instability can be considered as the source of error, which would result in signal distortion.
How to eliminate the influence of the noise in measured data and extract the useful information has been a focus of information science research. Currently, existing algorithms primarily focus on the offline denoising problem, which requires a full set of data to accomplish the denoising process. Common solutions can be divided into two categories, i.e., offline denoising in the time domain [5-7] and in the frequency domain [8]. Specifically, in the time domain, Weissman et al. [5] proposed the discrete universal denoiser (DUDE) algorithm for offline denoising. DUDE assumes the statistical characterization of the noise mechanism is known. It needs to be pointed out that, when using the Kalman filter, an accurate system dynamic model would offer great help in achieving the optimal estimation. Miao et al. [28] used the Kalman filter with several different kinds of system models to remove the noise of the storage volume data of an internet center. Due to the difficulty in obtaining the density characteristic of the practical data, the adaptive model was proposed to capture the characteristics of moving targets in [29], and to estimate the acceleration based on the adaptive parameter.
In this paper, a denoising method for real-time data with unstable fluctuation and colored noise was investigated. For the sake of the data features and the online requirement, the Kalman filtering method based on a second-order adaptive statistics model was proposed here, and its performance was verified by some real test data. Moreover, the test data was processed via another two representative methods: first-order exponential smoothing [18] and Holt's exponential smoothing [20], and the results demonstrated that the proposed method could give a better effect.
Compared to previous works, the contribution of this work is that we used a second-order adaptive model for online denoising, which can obtain a better denoising performance for the measurements in the reinforced concrete structure test experiment. The comparison between our model and the third-order model [29] is given in Section 3, and the results show that the developed second-order adaptive model here can obtain a smaller error and consume less time.
The structure of this paper is as follows. Section 2 presents the specific method of the second-order adaptive statistics model. The overview of the experiment is provided in Section 3. Section 4 discusses the robustness and the real-time performance. Some conclusions are given in Section 5.
Online Denoising Algorithm Based on Kalman Filtering and the Adaptive Statistics Model
For the purpose of removing the unexpected noise in an online mode, Kalman filtering was actually a competitive solution, where only the estimation derived in the previous step and the measurements in the current step were required to compute the new estimated values. However, this is not enough to obtain the desired results. A reasonable model that could describe the dynamic features of the data is another impact factor in the denoising process. Therefore, a second-order adaptive statistics model is presented later in this section, and the method to compute the adaptive parameter is explained in detail as well.
Online Denoising Algorithm Based on Kalman Filtering
Kalman filtering is one of the most classical recursive algorithms giving the optimal estimation of the state vector. The Kalman filter estimates a process by using a form of feedback control: the filter estimates the process state at some time and then obtains feedback in the form of (noisy) measurements. As such, the equations of the Kalman filter fall into two groups, the state update equation and the measurement update equation, which can be expressed as

x(k + 1) = Φ(k + 1|k) x(k) + U(k) u(k) + w(k),
z(k) = H(k) x(k) + v(k),

where x is the state vector of the system to be estimated, whose initial value and covariance are known as x_0 and P_0; Φ(k + 1|k) is the state-transition matrix; u(k) is the system input and U(k) is the corresponding matrix; w(k) and v(k) are the process noise and measurement noise, respectively, and the variance of v(k) is known (as R). Note that both w(k) and v(k) are white noise with zero mean and independent of the initial state x_0. z(k) is the measurement vector and H(k) is the observation matrix.
The Kalman filter accounts for the correlation between errors in the prediction and the measurements. The algorithm is in a predict-correct form, which is convenient for implementation:

Prediction:
x̂(k + 1|k) = Φ(k + 1|k) x̂(k|k) + U(k) u(k),
P(k + 1|k) = Φ(k + 1|k) P(k|k) Φ(k + 1|k)^T + Q(k),

where Q(k) is the variance of w(k).

Correction:
K(k + 1) = P(k + 1|k) H(k + 1)^T [H(k + 1) P(k + 1|k) H(k + 1)^T + R]^(−1),
x̂(k + 1|k + 1) = x̂(k + 1|k) + K(k + 1) [z(k + 1) − H(k + 1) x̂(k + 1|k)],
P(k + 1|k + 1) = [I − K(k + 1) H(k + 1)] P(k + 1|k).

According to the equations above, the algorithm works in a two-step process. In the prediction step, the Kalman filter produces estimates of the current state variables along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some amount of error, including random noise) is observed, these estimates are updated using a weighted average, with more weight given to estimates with higher certainty. Since the algorithm runs recursively, it can be implemented step by step; that is, the denoised data can be obtained in real time.
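A minimal Python sketch of one predict-correct cycle under the notation above; this is the standard Kalman recursion (matrix shapes assumed: x is n×1, z is m×1), not code from the paper:

```python
# One predict-correct cycle of the Kalman filter.
import numpy as np

def kalman_step(x, P, z, u, Phi, U, H, Q, R):
    # Prediction
    x_pred = Phi @ x + U @ u
    P_pred = Phi @ P @ Phi.T + Q
    # Correction
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```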
Adaptive Statistics Model for Online Denoising
Considering the unstable fluctuation of the data and the existence of colored noise, the linear time-invariant model with noise as used in Section 2.1 may not be suitable for describing this kind of data. Therefore, we proposed a second-order adaptive statistics model to deal with these challenges. Let x,ẋ be the data itself and the gradient, respectively. The state vector is expressed as x = [x,ẋ] T throughout this paper unless stated otherwise explicitly.
Referring to the colored noise, it mainly lies in the changing process of the data gradient. When the data varies with time, its gradient follows a certain rule: the value of the gradient at the next time tick is always within the neighborhood of the current predicted gradient value. Therefore, the gradient can be written as

ẋ(t) = ḡ(t) + ∆(t), (8)

where ḡ(t) is the predicted value of ẋ(t) in the current interval. In particular, ∆(t) stands for the maneuvering change with colored noise. Considering that the Kalman filter has specific requirements on the type of the noise, the colored noise in ∆(t) needs to be processed. Therefore, the Wiener-Khinchin theorem was introduced here, under the assumption that ∆(t) corresponds to a first-order stationary Markov process:

∆̇(t) = −α ∆(t) + w(t), (9)

where α is the parameter of maneuvering frequency [29], and w(t) is a Gaussian white noise with zero mean and a variance of σ²_∆. With the two equations above, and since ẍ(t) = ∆̇(t) over any sampling interval (ḡ is held constant within the interval), the change of the gradient can be written as

ẍ(t) = −α ẋ(t) + α ḡ(t) + w(t). (10)

Therefore, the state-space representation of the continuous-time adaptive model is

ẋ_s(t) = A x_s(t) + B ḡ(t) + C w(t), with A = [0 1; 0 −α], B = [0; α], C = [0; 1], (11)

where x_s = [x, ẋ]^T. The solution of this equation is

x_s(t) = e^{A(t−t_0)} x_s(t_0) + ∫_{t_0}^{t} e^{A(t−τ)} [B ḡ(τ) + C w(τ)] dτ. (12)

We assume t = t_0 + T and t_0 = kT. Then we can get the discrete-time equivalent as

x_s(k + 1) = Φ(k + 1|k) x_s(k) + U(k) ḡ(k) + w(k). (13)

With Laplace transforms, the matrix Φ(k + 1|k) = e^{AT} can be expressed as

Φ(k + 1|k) = [1 (1 − e^{−αT})/α; 0 e^{−αT}]. (14)

The matrix U(k) can be described as

U(k) = ∫_0^T e^{Aτ} B dτ = [T − (1 − e^{−αT})/α; 1 − e^{−αT}]. (15)

The variance of w(k) can be computed in the following way:

Q(k) = E[w(k) w(k)^T] = σ²_∆ ∫_0^T e^{Aτ} C C^T e^{A^T τ} dτ, (16)

where T is the sampling period.
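A minimal Python sketch of this discretization, consistent with Equations (14)-(16) above (numerical quadrature is used for Q to avoid transcription errors in the closed form; the function name is illustrative):

```python
# Discretization of the second-order adaptive model: Phi(k+1|k), U(k), and Q(k)
# for maneuvering-frequency parameter alpha and sampling time T.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

def discretize(alpha, T, sigma2_delta):
    A = np.array([[0.0, 1.0], [0.0, -alpha]])
    B = np.array([[0.0], [alpha]])
    C = np.array([[0.0], [1.0]])
    e = np.exp(-alpha * T)
    Phi = np.array([[1.0, (1.0 - e) / alpha],
                    [0.0, e]])                    # equals expm(A * T), Eq. (14)
    U = np.array([[T - (1.0 - e) / alpha],
                  [1.0 - e]])                     # = int_0^T expm(A*tau) B dtau, Eq. (15)
    # Q = sigma2 * int_0^T expm(A*tau) C C^T expm(A*tau)^T dtau, Eq. (16)
    def q(i, j):
        f = lambda tau: (expm(A * tau) @ C @ C.T @ expm(A * tau).T)[i, j]
        return sigma2_delta * quad(f, 0.0, T)[0]
    Q = np.array([[q(0, 0), q(0, 1)], [q(1, 0), q(1, 1)]])
    return Phi, U, Q
```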
Adaptive Parameter Adjustment via the Yule-Walker Algorithm
In the previous subsection, a statistics model was presented to capture the fluctuation features in the measured data. It needs to be pointed out that in the proposed model, the adaptive parameter α is not only unknown, but also self-adaptive. We adopted the following method to update the parameters α and σ²_∆, based on the Yule-Walker estimation algorithm [29]. First of all, we need to discretize Equation (9). Through substituting A with −α and C with 1 in Equations (14) and (16), we can obtain its discrete-time equivalent:

∆(k + 1) = e^{−αT} ∆(k) + w(k), with var[w(k)] = σ²_∆ (1 − e^{−2αT})/(2α).

Then, the method of parametric update is as follows. (1) Initialization: set starting values for α and σ²_∆ (Equation (19)). (2) Estimate the gradient ẋ(k) from the filtered state and set ḡ(k) as its one-step prediction; their difference σ(k) = ẋ(k) − ḡ(k) satisfies the first-order stationary Markov process above. (3) Parameter update: the Yule-Walker equations relate the lag-0 and lag-1 sample autocorrelations R(0) and R(1) of the residual sequence to the AR(1) coefficient, â = R(1)/R(0), from which α̂ = −ln(â)/T and, via the stationary-variance relation, σ̂²_∆ (Equations (23) and (24)). Then, we can use Equation (24) to get α and σ²_∆, so that we can achieve the purpose of updating the system parameters.
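A minimal, schematic Python sketch of this Yule-Walker AR(1) update (assumed; the exact update in [29] may differ in detail):

```python
# Yule-Walker update for the AR(1) residual sequence Delta(k) = xdot(k) - gbar(k):
# the AR coefficient a = e^{-alpha*T} follows from the lag-0/lag-1 sample
# autocorrelations, and sigma^2_Delta from the stationary-variance relation.
import numpy as np

def yule_walker_ar1(delta, T):
    d = delta - delta.mean()
    r0 = np.dot(d, d) / len(d)                 # lag-0 sample autocorrelation R(0)
    r1 = np.dot(d[:-1], d[1:]) / (len(d) - 1)  # lag-1 sample autocorrelation R(1)
    a = np.clip(r1 / r0, 1e-6, 1.0 - 1e-6)     # keep a in (0, 1) for stability
    alpha = -np.log(a) / T
    sigma2_delta = 2.0 * alpha * r0            # from r0 = sigma2/(2*alpha)
    return alpha, sigma2_delta
```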
Using the method described in this section, online denoising of data with unstable fluctuation and colored noise is accomplished. The flow chart of the proposed method is given in Figure 1. It can be seen that the method consists of two parts within a closed loop: the first estimates the system state with the Kalman filter based on the second-order adaptive statistics model, and the other updates the adaptive parameter in the model by the Yule-Walker algorithm. In the next section, the effectiveness of this method is evaluated on experimental data from a reinforced concrete structure test, and the results are compared with other representative online denoising methods.
Figure 1. The flow chart of the proposed online denoising method.
Experiments
In order to verify the effectiveness of the proposed algorithm, experimental data from the test of a reinforced concrete structure was adopted. The configuration of the experiment is shown in Figure 2. It was a quasi-static test of a column made of Chinese Grade 345 steel and C30 Grade concrete [30]. During the experiment, the column was tested under constant axial load and cyclic bending. Through this experiment, deformation displacements at different time samples were obtained, which correspond to the measurements in the proposed algorithm. Although the entire data set was available before denoising, the process was implemented in an 'online' mode, i.e., only the measurement at the 'current' sampling time and the previous result were used in computation. The online mode is necessary in this setting because the actual value of the measured state has a great effect on the identification of structural safety, and it needs to be known during the monitoring process. In this experiment, the sampling time was set to 0.001 s. Figure 3 gives the measurement and the real data used to test the performance of the developed method. The measurement data came from the experiment, and the real data came from an offline filter with a high degree of accuracy. As can be clearly seen from Figure 3, the measured data possessed an unstable fluctuation as well as colored noise. In this paper, we compared the second-order adaptive statistics model with various other methods, namely first-order exponential filtering, Holt's exponential filtering, and a third-order adaptive statistics model, for denoising the real-time deformation displacement data. In order to evaluate these methods, the mean and covariance of the error were compared. In addition, the root-mean-square error (RMSE) was used; the RMSE is very commonly used and makes for an excellent general-purpose error metric for numerical predictions. Specifically, 'mean' here represents the averaged absolute value of the difference between the real data and the denoised data, i.e.,

m = (1/n) Σ_{i=1}^{n} |r_i − d_i|,

where n is the number of measurements, r_i is the i-th real datum, and d_i is the corresponding denoised datum. Then, the covariance is defined as

cov = (1/n) Σ_{i=1}^{n} (|r_i − d_i| − m)²,

and finally the RMSE can be expressed as

RMSE = √[(1/n) Σ_{i=1}^{n} (r_i − d_i)²].

In the following context, three cases are implemented. The first two cases compare different denoising methods, while the third discusses the effect of the initial value on the denoising performance. In Section 3.1, the adaptive statistics models, including the second-order model and the third-order model, are used to process the data; in Section 3.2, we compare the developed method with first-order exponential filtering and Holt's exponential filtering, respectively; in Section 3.3, by eliminating data within the adjustment process and retaining the posterior convergent data, the denoising effect is noticeably improved.
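Before proceeding, a minimal Python sketch of the three evaluation metrics defined above, for reference:

```python
# Mean absolute error, its covariance, and RMSE between real and denoised data.
import numpy as np

def metrics(real, denoised):
    err = np.abs(real - denoised)
    m = err.mean()                            # mean absolute error
    cov = ((err - m) ** 2).mean()             # covariance of the absolute error
    rmse = np.sqrt(((real - denoised) ** 2).mean())
    return m, cov, rmse
```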
The Denoising Effect of the Adaptive Statistics Model
The performances of the second-order and the third-order adaptive methods for online denoising were compared in this part. The denoised results are shown in Figure 4. Since the difference is small, a zoomed-in inset shows a detailed part of the curves, with 500 points from 6.3 s to 6.8 s. The results demonstrate that this algorithm is feasible and reliable with reasonable precision. Furthermore, comparing the real data and the denoised data illustrates the satisfactory denoising effect of the second-order adaptive statistics model.
Comparing the second-order and third-order adaptive statistics models, we can find a satisfactory denoising effect in Figure 4a,b. However, from the result before 3 s, we might notice the third-order adaptive statistics model performs with poorer precision. Thus, the second-order adaptive statistics model can have advantages with respect to accuracy. Meanwhile, in order to better describe the error and compare the denoising precision, Figure 5 gives the error of the both models.
The results in Figure 5 show that the second-order adaptive statistics model has the smaller error. In order to better support this conclusion, more groups of data were adopted to test the method, each group containing 10,000 points. The symbol m here represents the mean of the whole data set. The results of the tests are shown in Table 1. Clearly, for each group, the results from the second-order model showed better performance in mean, covariance, and RMSE alike. As a whole, the covariance and RMSE of the second-order model were only about 0.0223 and 0.1461, respectively, better than those of the third-order model (0.1407 and 0.3129). On the other hand, Kalman filtering is an estimation algorithm that resembles one-step prediction: the next value is estimated using merely the last measurement. Therefore, it is an online algorithm; that is, the denoising process incurs negligible delay. In addition, the computational load of the second-order model is lower than that of the third-order model, due to the greater computational expense caused by the larger matrices in the higher-order model. Therefore, the results showed that the second-order adaptive statistics model could not only deal with signals with colored noise in real time, but also achieve a tradeoff between efficiency and accuracy. Based on the results in Table 1, it can be clearly seen that the second-order adaptive statistics model is better than the third-order one, because it provided better precision and faster speed in online denoising. Meanwhile, as can be seen in Figure 6, the second-order statistics model offers a more stable denoising effect and a smaller RMSE, where the 'orange column' is the RMSE and the 'blue column' is the covariance for each group.
Comparison of the Denoising Effect between the Proposed Method and the Exponential Smoothing
Exponential smoothing was originally developed for forecasting; at the same time, it can also be applied to online denoising [21]. When using exponential smoothing, parameter selection is very important, as it adjusts the development tendency of the data trend; however, it is usually very subjective. The primary methods for parameter selection fall into two categories: the empirical method and the trial method. In this paper, we adopted the empirical method. We utilized first-order exponential smoothing and Holt's exponential smoothing for comparison with the results in Section 3.1.
The Denoising Effect of the First-Order Exponential Smoothing
We utilized a priori knowledge to select the parameter values 0.2, 0.5, and 0.8. First-order exponential smoothing with these parameters was used to denoise the same five groups of data as in Section 3.1, and the results are given in Table 2. According to those test results, we can conclude that first-order exponential smoothing [18] with a parameter of 0.2 possessed the best denoising effect. Within Holt's exponential smoothing [20], two kinds of states are usually used: one is the signal of the backward smoothing, and the other is the tendency of the backward smoothing. As a result, we introduced two parameters, a and b. b was set to 0.8 as an empirical value, while parameter a was selected as in the first-order exponential smoothing method, i.e., 0.2, 0.5, and 0.8. The same data was used as before, and the results are shown in Table 3. Table 3. Mean, covariance, and RMSE of Holt's exponential smoothing with different parameter a.
It can be clearly seen in Table 3 that the best denoising effect is acquired with a parameter a of 0.2 and b of 0.8, but the values of the different indicators are still clearly larger than those of the proposed adaptive method. Table 4 gives a summary of the performance comparison among the different methods. Among these three categories of online denoising methods, the mean, covariance, and RMSE of the adaptive statistics model are clearly the smallest. The results indicate that online denoising is better achieved via the adaptive statistics model, because the system parameter is adjusted dynamically as the denoising process proceeds. Furthermore, by contrasting the second-order and the third-order adaptive models, we come to the tentative conclusion that the effect of the second-order adaptive model is more outstanding. To sum up, between the two exponential smoothing methods, Holt's exponential smoothing with a parameter a of 0.2 and b of 0.8 has the better denoising effect. However, among all the methods examined in this paper, the second-order adaptive statistics model presented the best performance: it not only showed good denoising accuracy, but also gave a faster processing speed.
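For concreteness, minimal Python sketches of the two smoothing baselines compared in Tables 2-4, assuming the standard recursions (the paper does not print its implementation):

```python
# First-order exponential smoothing (parameter a) and Holt's method (a, b).
import numpy as np

def exp_smooth(z, a=0.2):
    s = np.empty(len(z))
    s[0] = z[0]
    for k in range(1, len(z)):
        s[k] = a * z[k] + (1 - a) * s[k - 1]
    return s

def holt_smooth(z, a=0.2, b=0.8):
    s, t = np.empty(len(z)), np.zeros(len(z))
    s[0] = z[0]
    for k in range(1, len(z)):
        s[k] = a * z[k] + (1 - a) * (s[k - 1] + t[k - 1])  # smoothed signal state
        t[k] = b * (s[k] - s[k - 1]) + (1 - b) * t[k - 1]  # smoothed trend state
    return s
```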
The Effect of Initial Value on the Denoising Performance
In this case, we analyze the error data shown in Figure 7. From the figure it can be clearly seen that online denoising based on the adaptive statistics model had a regulatory process at the beginning. This is because the initial value x_0 was zero and P_0 was very large. It thus appears that more precise filtering results could be obtained by discarding this initial transient. In fact, it needs to be emphasized that a convergence procedure exists in the adaptive model; that is, the denoising effect becomes better as time goes on. Finally, we selected the last 5000 points to calculate the covariance and the mean.
As can be clearly seen from Table 5 and Figure 8, the mean, covariance, and RMSE decreased significantly compared with those in Table 1 and Figure 6. Assessing the data, the covariance of the second-order model is only 0.0171 and its RMSE only 0.1200, while for the third-order model these values are 0.0345 and 0.1760, respectively. Recall that the best filtering effect of exponential smoothing was about 0.2 and 0.43. This leads one to believe that the adaptive statistics model is superior to exponential smoothing. Comparing the two adaptive statistics models, the denoising effect of the second-order model is better than that of the third-order model. This is because the general trend of the data is more consistent with a second-order description.
In fact, besides precision, the second-order model has another advantage: a smaller computational burden. We computed the runtime of each denoising process and found the second-order adaptive model to be faster than the smoothing filters and the third-order model. Denoising a data set of 52,741 samples took 9.1423 s with the second-order model, versus 13.1245 s with the third-order model. Considering the statements above, we can conclude that the second-order adaptive statistics model is a more accurate and efficient method for online denoising.
Discussion
In the previous section, through the experiment data and the comparison with other classical denoising methods, the effectiveness and superiority of the proposed method have been verified. In this part, we will focus on some other features of our denoising method, that is, the robustness and the real-time performance.
Firstly, a good denoising method should be able to deal with various kinds of data. In order to demonstrate this, two groups of superposed sinusoidal signals with colored noise were adopted. The sampling time for both groups was 0.001 s. The main difference between the two reference curves was that one had more sharp points while the other changed more gently; the curves are shown in Figures 9 and 10, respectively, for comparison purposes.
The first group of data with noise is given in Figure 11, where the reference curve was totally drowned. With the proposed online denoising method, the estimated curve in Figure 9 could be derived. According to the comparison with the reference curve, the original noised signal was successfully processed.
For the second group of data with noise, shown in Figure 12, the denoising method was applied again. The difference between the denoised result and the reference values is given in Figure 10. It can be seen that the overall trend of the curve is in good accordance with the reference values; the residual oscillation arises because some features of the noise were preserved due to the high-dimensional process model.
Secondly, we discuss the real-time performance of the proposed method. In order to achieve online denoising, the algorithm must have a fast processing speed; otherwise, latency would exist and might affect the result. As stated before, the method proposed in this paper is based on Kalman filtering, which is a recursive algorithm. As long as the filtering process finishes before the new measurement is collected, the method can be implemented in real time. In the two simulations above, the time needed for one iteration was on average 0.0003 s, far smaller than the sampling time of 0.001 s. It needs to be pointed out that the difference visible in the subfigure of Figure 9 was not caused by latency; it was mainly due to the sharp point A. The estimated points change with inertia and are then corrected to the measurement values by the recursive process. Therefore, this difference actually resulted from an estimation error rather than from latency of the algorithm. In fact, the algorithm indeed performed in a real-time manner as described above.
Conclusions
A huge amount of the real-time data is collected every second around the world. However, due to the imperfect measurement and data collection mechanisms, real-time data is distorted by various types of noise and instability. Therefore, working with noisy time series is an inevitable part of any real-time data processing task and must be addressed precisely. In the past decades, the demand for real-time data analysis techniques such as the first-order exponential smoothing and Holt's exponential smoothing has grown dramatically. In this paper, we proposed an online denoising method for the real-time data with unstable fluctuation and colored noise.
This method consists of two parts within a closed loop. The first estimates the state based on the second-order adaptive statistics model; the other updates the adaptive parameter in the model by the Yule-Walker algorithm. The effectiveness of the method was demonstrated via an experiment, in which it not only processed signals with colored noise, but also achieved a tradeoff between efficiency and accuracy. In addition, the performance of the proposed method was compared with some existing methods. Results showed that a more accurate and efficient denoising effect can be obtained by employing the second-order adaptive statistics model with the Kalman filter for online denoising.
"Engineering"
] |
PHILOSOPHY OF INTERDISCIPLINARITY: JAN CORNELIUS SCHMIDT’S CRITICAL-REFLEXIVE PROBLEM-ORIENTED INTERDISCIPLINARITY
Philosophers were reluctant to address interdisciplinarity during the 20th century. But things have changed in the 21st century, since a two-level relationship between philosophy and interdisciplinarity has been established: philosophy of interdisciplinarity and philosophy as interdisciplinarity. Thus far scholars have shown more interest in exploring the first level of that relationship. The aim of this article is to closely examine the developmental path of a philosophy of interdisciplinarity envisioned and constructed by Jan Cornelius Schmidt in the past two decades. In our opinion, it has reached two milestones: the first (2008) being the one in which he clarified the vague notion of interdisciplinarity and classified its four types with the help of philosophy of science, and the second (2011) being the one in which he opted for problem-oriented interdisciplinarity. Schmidt's philosophy of interdisciplinarity has reached its (current) peak (2022), resulting in a philosophical framework which promotes problem-orientation and critical-reflexivity in interdisciplinary endeavors. Thereby Schmidt has created prerequisites for the construction of philosophy as interdisciplinarity.
INTRODUCTION
Specialization, professionalization, disciplining and departmentalization were some of the main outcomes of the establishment of the modern university in the 19th century, and these outcomes have not, expectedly, circumvented philosophy. Ever since, academic philosophy has been on a quest of finding its own disciplinary identity, as well as discovering its relationship with other disciplines. The latter has especially been so in the past 50 years, since new scientific paradigms or approaches have been presented to the general academic public in the 1970s, namely multi-, pluri-, cross-, inter-, and transdisciplinarity [1-3].
Here we shall offer a brief history of the relationship that academic philosophy has established with one of the aforementioned scientific paradigms: interdisciplinarity. Unfortunately, philosophers have not sufficiently considered the role and relationship of philosophy towards it. Evidence to support this claim is abundant. On this occasion we shall mention just one instance: Michael H.G. Hoffmann, Jan C. Schmidt and Nancy J. Nersessian state that "in general, philosophers have remained reluctant to address 'interdisciplinarity'" [4; p.1858]. However, in spite of the inattention of philosophers towards interdisciplinarity and the fact that "until quite recently the field of interdisciplinary studies has attracted few philosophers," Julie Thompson Klein and Robert Frodeman rightfully argue that the situation is changing today [5; p.150]. This change has been going on for at least 15 years.
The development of a more intense relationship between philosophy and interdisciplinarity can be traced to a series of international workshops and conferences, starting with a workshop held in Atlanta in 2009 and ending with a conference held in Tübingen in 2012. According to a report from the Atlanta workshop, its primary purpose was to "reflect on interdisciplinarity - for the first time - from a philosophical point of view" [6; p.42a]. Two outcomes emerged from this workshop: (1) it developed "the idea of philosophy not as a metadiscipline, but as an engaged participant and partner in interdisciplinary discourses"; (2) it resulted in establishing a network of philosophers and other scholars interested in interdisciplinarity, named the Philosophy of/as Interdisciplinarity Network (PIN-net) [7; pp.169-170].
Therefore, the mentioned workshops and conferences stimulated the progress of the relationship between philosophy and interdisciplinarity. Moreover, two levels of that relationship were identified and defined during the Atlanta workshop: philosophy of interdisciplinarity and philosophy as interdisciplinarity. According to Hoffmann and Schmidt, philosophy of interdisciplinarity encourages "philosophical inquiry into problems regarding the practices and theories of interdisciplinary research in the style of traditional philosophy of science." On the other hand, philosophy as interdisciplinarity is focused upon "initiating a new philosophical practice of reflective and reflexive engagement in the world - one that questions and overcomes the boundaries that have constituted philosophy as a discipline in the 20th century," with its leading idea being that "philosophers leave the study and enter the field, integrating their work with scientists, engineers, and policy makers" [7; p.170].
Besides the mentioned workshops and conferences, other proof of the ongoing progress of the relationship between philosophy and interdisciplinarity can be found elsewhere. One such proof is provided by the 2010 edition of The Oxford Handbook of Interdisciplinarity. As the handbook's editor-in-chief Robert Frodeman claims in the introductory text, this edition "heralds the centrality of philosophic reflection for twenty-first century society," since interdisciplinarity is "inherently philosophical, in the non-professionalized and non-disciplined sense of the term" [8; p.xxxi]. This edition of the Oxford handbook contains a short yet noteworthy textual addendum on prospects for a philosophy of interdisciplinarity authored by Schmidt [9]. The handbook's 2017 edition contains only one contribution which discusses not the relationship between philosophy and interdisciplinarity, but the one between interdisciplinarity and a single philosophic discipline, i.e., ethics, authored by Carl Mitcham and Wang Nan [10]. The other two hallmarks in the history of considerations on philosophy of and as interdisciplinarity we would like to point out are two special issues of scientific journals. Due to the fact that more literature regarding the first level of the relationship between philosophy and interdisciplinarity has recently emerged, e.g., Choudhary [11,12] and Curis [13], we shall examine what we consider the peak of its development. Thus, we shall analyze the opus of the German physicist and philosopher Jan Cornelius Schmidt, who has been developing his philosophy of interdisciplinarity for the past 20 years. Special attention will be given to Schmidt's latest monograph Philosophy of Interdisciplinarity. Studies in Science, Society and Sustainability (2022), which we perceive as his intellectual crown on the matter.
THE TRAJECTORY OF SCHMIDT'S PHILOSOPHY OF INTERDISCIPLINARITY
In this chapter, we shall shed light upon the development of Jan Cornelius Schmidt's thought on philosophy of interdisciplinarity. For that purpose, we have selected two of his articles which we consider to be milestones in the trajectory of his theory. These articles from 2008 and 2011 were, in our opinion, crucial for constituting his capital work published in 2022. Thus, we divided our article into three sections. The first section includes Schmidt's conceptual sketch of philosophy of interdisciplinarity, in which he clarified the role of philosophy in considering interdisciplinarity and elucidated the vital components of philosophy of interdisciplinarity. The main topic of the second section of our article is problem-oriented interdisciplinarity, that is, the dimension of interdisciplinarity which will turn out to be central for Schmidt's philosophy of interdisciplinarity. The third section is focused upon the realization of Schmidt's goal in the form of a comprehensive philosophy of interdisciplinarity, which has a critical-reflexive and problem-oriented variant of interdisciplinarity at its core.
BLUEPRINT OF A NEW APPROACH
In our opinion, the first milestone of Schmidt's thoughts on philosophy of interdisciplinarity is his article entitled "Towards a philosophy of interdisciplinarity. An attempt to provide a classification and clarification" (published online in 2007 but printed in 2008) [14]. It stemmed from his unconcealed intellectual irritation at the widespread and often perverted usage of the term interdisciplinarity, and the frivolous characterization of projects, as well as research and education programs, as being interdisciplinary, which often reduce the term to a mere fund-acquiring catchword, a vague concept deprived of meaning. In order to 'right the wrong,' Schmidt reached towards distinctions established in philosophy of science in approaching interdisciplinarity as a multi-faceted phenomenon with regard to four dimensions: (a) the ontological dimension, (b) the epistemological dimension, (c) the methodological dimension, and (d) the problem framing and problem perception dimension (problem-oriented dimension).
Yet the birth of Schmidt's considerations on these four dimensions can be traced back to the early 2000s. Namely, he applied them in the context of the inherently interdisciplinary scientific field of bionics [15], used them to pave a new way in the jungle of interdisciplinarity [16], then to address questions of technological reductionism in another interdisciplinary scientific field, i.e., nanotechnology [17], and his considerations culminated in an article entitled "Dimensionen der Interdisziplinarität. Wege zu einer Wissenschaftstheorie der Interdisziplinarität," in which he evoked Interdisziplinaritätsphilosophie for the first time [18].
Each of the four dimensions, according to Schmidt [14], can be matched with corresponding traditional philosophical stances. The ontological dimension of interdisciplinarity refers to objects and entities, hence being advocated by (a) realists, who mainly deal with "given or constructed objects of a human-independent reality"; the epistemological one refers to knowledge, theories and concepts, so the corresponding philosophical stance would be that of (b) rationalists; the methodological dimension, i.e., the one which refers to knowledge production, to the research process, the rule-based action of scientists, and to the languages in use, is matched with (c) methodological constructivists and pragmatists; the problem framing, problem perception or problem-oriented dimension, hence the one which includes considerations on "how to handle and solve problems pragmatically; the impact, effect and outcome of knowledge is of utmost relevance," resembles the stance supported by (d) instrumentalists, utilitarians and critical theorists [14; pp.59-62]. After identifying the four dimensions and their respective philosophical stances, Schmidt illustrated them using examples of popular research programs labelled as interdisciplinary: (a) nanoresearch and neurosciences (object-oriented: realism); (b) complex systems and chaos theory (theory-oriented: rationalism); (c) biomimicry/bionics and econophysics (method-oriented: methodological constructivism and pragmatism); (d) technology assessment and sustainability research (problem-oriented: instrumentalism, utilitarianism, and critical theory) [14; pp.62-66]. Finally, Schmidt concluded that "a minimal philosophy of science is the prerequisite in order to understand (and probably to promote) 'interdisciplinarity'". Even though Schmidt claimed that philosophy is "effectively helpful in analyzing and classifying interdisciplinarity", he emphasized that "philosophy of interdisciplinarity still remains a desideratum" [14; p.66].
Inappropriate use of interdisciplinarity led Schmidt to write another article or, as we called it earlier, a textual addendum on the topic. The addendum was published in Frodeman's 2010 edition of The Oxford Handbook of Interdisciplinarity in the form of a box entitled "Prospects for a philosophy of interdisciplinarity." Even though it does not offer anything new in comparison to the article from 2008, Schmidt's box fulfils its purpose "to foster the debate on ID," since it "presents elements of pluralist philosophy of interdisciplinarity," and in it Schmidt exclaims once again that he may have proposed "some elements for a philosophy of ID" [9; p.39, p.41]. More importantly, this box is significant on a symbolic level, being the only textual contribution devoted to the relationship between philosophy and interdisciplinarity in Frodeman's handbook, therefore indicating an ongoing change.
A STEP CLOSER TOWARDS A PHILOSOPHY OF INTERDISCIPLINARITY
After elucidating the plurality of dimensions of the philosophical approach to interdisciplinarity in the earlier phase, which offered a conceptual framework for its analysis, the focal point of this section will be what we consider the second milestone of Schmidt's theory: his thoughts on problem-oriented interdisciplinarity, i.e., its fourth type.
The most detailed and thus exemplary instance of Schmidt's reflections on problem-oriented interdisciplinarity is undoubtedly the article entitled "What is a problem? On problem-oriented interdisciplinarity," published in 2011. Schmidt's urge to write such an article came from the same source as the article we discussed in the first section, namely intellectual irritation, caused this time by the buzzword problem. The term itself, according to Schmidt, "plays a major role in the various attempts to characterize interdisciplinarity or transdisciplinarity," and it seems that the discourse and practice of interdisciplinarity have "problems with the 'problem'," since problems can "also be found in traditional disciplinary sciences as well as in the life world," which made him exclaim: "Problems seem to be everywhere and nowhere!" [19; p. 249, p. 251].
Recognizing the vagueness of the notion of problem as the cause of misunderstanding of problem-oriented interdisciplinarity, Schmidt insisted on clarifying the terms problem and interdisciplinarity, and on finding demarcation lines between problem-oriented and other types of interdisciplinarity. In order to clarify the notion of problem, he reached for and combined the integrative approaches of Dietrich Dörner (an undesired or initial state; a desired or final state; barriers between the two) and Roland W. Scholz (target, system, and transformation knowledge), concluding that the notion of problem includes "(i) the assessment of the actual or future state, from the angle of an anticipated target state, as being undesired or negative (negativity thesis) and (ii) the barrier to reaching or avoiding the target or anticipated state (barrier thesis)" [19; pp. 259-260]. From that emanates his definition of problem-oriented interdisciplinarity, which offers "system, target, and transformation knowledge, including a time-sensitive, temporal dimension, and an ex ante reflection on prospective future states," and which produces problem knowledge that is "intrinsically interlaced with action knowledge" [19; p. 260]. Therefore, the role of problem-oriented interdisciplinarity is threefold: to constitute, frame, and clarify a problem; to anticipate and prevent it; or to suggest actions for its solution.
When it comes to Schmidt's differentiation between problem-oriented and the three other iterations of interdisciplinarity, he drew demarcation lines as follows: (1) object-oriented interdisciplinarity does not "mainly refer to knowledge, methods, or problems, but to an external, human-independent reality"; (2) theory-oriented interdisciplinarity refers to "meta-disciplinary, or at least non-disciplinary, abstract knowledge"; (3) method-oriented interdisciplinarity refers to answering the question of "whether there are special canons or methods, rules, empirical settings, and hermeneutic forms which typify ID and positively determine it" [19; pp. 254-255]. These iterations of interdisciplinarity are insufficient, since they do not "cover the whole breadth of the notions of ID" [19; p. 256]. On the other hand, problem-oriented interdisciplinarity, or as it is sometimes called, transdisciplinarity, focuses "on the starting points, goals, and purposes of interdisciplinary research activities, in other words, on the constitution, identification, and framing of problems," and interdisciplinary problems are considered as "being external to disciplines or to academia. They are primarily societal ones that are (pre-)defined by society, e.g., lay people, politicians, and stakeholders" [19; pp. 256-257]. From a methodological standpoint, this type of interdisciplinarity tries to transgress the existing boundaries between science and society. It does that in two ways: it takes up "external (to science) societal problems, works on them internally, and transfers the results to the societal domain in order to contribute to extrascientific societal problem solving" [19; p. 261]. Seen from an epistemological perspective, this type of interdisciplinarity is the place in which constructivism and realism converge, calling for an epistemological position Schmidt calls constructivist realism, in which, "based on real situations and matters of fact, problems are constituted according to normative criteria" [19; p. 263]. Accordingly, Schmidt deemed that it was not enough to describe reality and the criteria of its cognition; rather, both reality and the criteria should be normatively defined or, to be more precise, constructed in accordance with the interdependence of natural objects, humans, and technology. Thus, he criticized previous tendencies in science, i.e., inclinations towards conducting a unilateral analysis of these three constituents from a non-dynamic perspective.
SCHMIDT'S PHILOSOPHY OF INTERDISCIPLINARITY
Schmidt's blueprint for a new, philosophical approach towards interdisciplinarity in 2008 and his commitment to its problem-oriented version in 2011 enabled him to construct the desired philosophy of interdisciplinarity. His thoughts on the matter have undoubtedly reached their (current) peak in his monograph Philosophy of Interdisciplinarity. Studies in Science, Society and Sustainability, published in 2022. It is the outcome of his long-lasting intellectual endeavor, his scientific venture through the interdisciplinary jungle.
Once again displaying his reluctance to accept the current state of academia, which is depriving interdisciplinarity of its semantic core, Schmidt opens the book with a reminder of the roots of interdisciplinary discourse, which date back to the 1960s and 1970s and emerged from discussions on environmental issues. Recognizing the weaknesses of a widespread instrumentalist or strategic approach to interdisciplinarity, he advocated one which would complement and upgrade it, namely its critical-reflexive variant. Schmidt clearly expressed his intention of departing from the Baconian, Cartesian, and Kantian philosophical heritage regarding the human-nature relationship, aligning his thought with the critical theory and cultural critique of the Frankfurt School, especially that of Theodor W. Adorno, Max Horkheimer, and Jürgen Habermas. The essence of Schmidt's understanding of the relationship between philosophy and interdisciplinarity in the form of philosophy of interdisciplinarity, as well as his clarification of both of its constituents, is best shown in the following lines, in which the Philosophy of Interdisciplinarity is described as both interdisciplinary and genuinely philosophical: "In comparison with the disciplinary mainstream of 20th-century philosophy with its subdisciplines, its reductionist approaches and regional ontologies (Frodeman 2014), the Philosophy of Interdisciplinarity can be characterized as truly interdisciplinary. Furthermore, it is genuinely philosophic because it is based on the rich and colourful intellectual tradition of philosophy that addresses fundamental metaphysical questions and develops frameworks of orientation. In other words, the Philosophy of Interdisciplinarity aims to (re)open the academic discipline of philosophy towards other disciplines and, beyond that, to society at large. It resonates with an interdisciplinary-oriented philosophy and therefore could also be called interdisciplinary philosophy" [20; pp. 7-8].
Therefore, Schmidt made it clear that his philosophy of interdisciplinarity is not another philosophical subdiscipline, merely a "philosophy of X," as was the trend during the 20th century due to overspecialization. It is rather an overarching critical-reflexive variant of a problem-oriented interdisciplinary framework which is deeply rooted in philosophical heritage.
It is worth noting that chapter 2 of his book on philosophy of interdisciplinarity relies upon two of his articles which we have previously discussed [14, 19], but it also shows the further advancement of his considerations on the matter. The novelties presented in the chapter, compared with the abovementioned articles, largely contribute to the constitution of philosophy of interdisciplinarity, so we shall focus only on the points of divergence.
Before constituting his philosophy of interdisciplinarity, Schmidt pointed out the plurality and diversity of views on knowledge and (inter-)disciplinarity taken by different philosophical traditions that deemed the unity of knowledge and the integration of disciplines an overall aim of academic inquiry, from Ancient Greek philosophy through German Idealism up to 20th-century philosophy of science and the analytical tradition. Furthermore, Schmidt tackled another plurality, namely that of the motives, values, or underlying goals of interdisciplinary research, which were often misinterpreted, and which resulted in viewing interdisciplinarity merely as a means for technological innovation and for achieving economic growth, thus being exclusively instrumentalist in nature. That led Schmidt to the conclusion that interdisciplinarity is a "double-edged sword," because it can serve "as a point of access and key catalyst for recognizing and reflecting on goals and motives of science and research in society," but it can simultaneously conceal these goals and motives [20; p. 22]. He recognized four motives pursued by interdisciplinarians and their respective values: (1) the epistemic motive stems from the attitude that science is guided by the value of truth; (2) the economic motive comes from the belief that utility is the base value of scientific research; (3) the ethical-societal motive centers on the value of human and natural well-being; (4) the personal motive is driven by the value of sense-making and self-understanding. The task of philosophy of interdisciplinarity is, in Schmidt's words, to consider and reflect upon that ambivalence, because interdisciplinarity has "the potential to spark deeper reflection on science and research in society," its guiding idea being to put that potential into practice [20; p. 22].
However, a classification of motives and values can lead to a limited, mainly descriptive understanding of interdisciplinarity. That is why Schmidt proposed that philosophy of interdisciplinarity aims "to critique, complement, and widen the view," while one of its central objectives is to "reveal underlying philosophical assumptions and fundamental convictions regarding the notion of 'interdisciplinarity'" and, on this basis, to advance "a critical perspective that opens up avenues towards sustainable knowledge within the academy" [20; p. 24]. Acknowledging the fact that the existence of disciplines is a conditio sine qua non of interdisciplinarity, Schmidt presented the unavoidable dilemma that arises from such a situation. At the core of interdisciplinary endeavors lies the so-called boundary paradox, i.e., the simultaneous tendency to conserve and to eliminate disciplinary boundaries. Schmidt suggested a philosophic view of that paradox, naming it the boundary dialectic, which includes both the separation and the integration of disciplines, and which enables us to reject the dominant conception of interdisciplinarity as being solely integrative to the extent of dissolving disciplinary boundaries, hence dissolving its own roots [19; pp. 252-253; 20; p. 25]. The dialectic view of disciplinary boundaries is offered by philosophy of interdisciplinarity, which possesses the ability to, as Schmidt concludes, "explicitly address boundaries and provide a conceptual framework encompassing both (a) separation or differentiation and (b) transgression, transcendence, or integration" [20; p. 26]. As we have already mentioned in an earlier chapter, he designed that conceptual framework in the early 2000s, and it consists of four interchangeable views of the multifaceted phenomenon of interdisciplinarity: object-oriented, theory-oriented, method-oriented, and problem-oriented interdisciplinarity.
The whole of Schmidt's monograph is interwoven with his bias towards problem-oriented interdisciplinarity, since he was convinced that it transcends the other three views of interdisciplinarity. That can be seen from his statement that problem-oriented interdisciplinarity, compared with the other three types, "frames science and research from a more comprehensive perspective," and that it "centres on problems and issues, and it includes the goals, purposes, initial conditions, and research agendas of scientific activities" [20; p. 29]. As we mentioned earlier, Schmidt [19; p. 256] wrote that this type of interdisciplinarity is sometimes called transdisciplinarity, because the two share many common features, e.g., they both deal with societal, ethical, real-world, extra- and trans-scientific problems. That is why it was of utmost importance to him to distinguish between them in his monograph. As is shown in Figure 1 and elaborated upon in the text, transdisciplinarity is a comprehensive concept which encompasses all four forms of interdisciplinarity, but only its problem-oriented type in its entirety. Namely, certain interdisciplinary objects, methods, and theories fall outside the transdisciplinary scope. Therefore, problem-oriented ID (including its critical-reflexive subtype) is a subset of transdisciplinarity, and thus always transdisciplinary. When he discussed knowledge politics and research programs in his book, Schmidt further developed the thoughts he had previously expressed in an article entitled "Knowledge Politics of Interdisciplinarity. Specifying the Type of Interdisciplinarity in the NSF's NBIC Scenario" [21]. In both cases he challenged the prevalent understanding of interdisciplinarity advocated in the NSF's (National Science Foundation) NBIC (Nanotechnology, Biotechnology, Information Technology and Cognitive Science) report from 2002. He did so by analyzing the report from the perspective of the four types of interdisciplinarity, recognizing that it advocates a weak type of interdisciplinarity: techno-object interdisciplinarity [20; pp. 50-51; 21; p. 322]. Given the fact that he criticized the report for lacking the other three types of interdisciplinarity, and taking the nature of his critique into consideration, it is, in our opinion, justified to conclude that Schmidt would like the report to contain each of the four types, with an emphasis on the problem-oriented one, more precisely on its critical-reflexive subtype. The conclusion we put forward is based upon Schmidt's critique of today's knowledge society, which does not take into consideration the consequences of technological advancement and its impact on humanity and the environment. Schmidt's vision of philosophy of interdisciplinarity provides a conceptual framework, based on the minimal philosophy of science, which should be used to consider and judge the interdisciplinarity present in the dominant knowledge politics, which largely impacts and builds the society of the future. Therefore, the role of philosophy of interdisciplinarity is to encourage criticism and foster reflection on interdisciplinarity, thus creating a reflexive society [20; p. 53].
Besides detecting the state of interdisciplinarity studies which dominates today's scientific discourse and suggesting how to improve it in the chapter on the NSF's NBIC report, Schmidt went on to determine the historical roots of such a view of science. He found them in the work of Francis Bacon, the early Modern philosopher and founding father of modern science, recognizing him as the precursor of today's technoscience, which does not reflect on the implications produced by scientific progress and technological advancement. Schmidt had already done so in 2011 in two articles entitled "The Renaissance of Francis Bacon" [22] and "Toward an epistemology of nano-technosciences" [23], in which he postulated that Bacon's program is now experiencing a rebirth (renaissance), reaching its full potential and nominally being put into action in various research programs: "Bacon-I in the 17th century is now followed by Bacon-II, supporting the well-known vision or fiction of an epochal break" [22; p. 38]. Both in his articles and in his monograph, Schmidt claimed that Bacon's vision of science, and consequently the one present in today's technoscience (object-oriented interdisciplinarity), was mostly instrumentalist, since it dealt only with the means of achieving progress, and not with its consequences. In Schmidt's words, Bacon's instrumentalism and his materialist real-constructivist epistemology are "now, in essence, more powerful than ever before, especially in the growing field of interdisciplinarity and interdisciplinary technosciences" [20; p. 72]. In order to overcome such a situation, philosophy of interdisciplinarity comes into play. According to Schmidt, it should be put to use in acknowledging and raising awareness of the persisting Baconian elements of modern technoscience and of the predominant type of interdisciplinarity, which enables us to assess it critically, especially from the perspective of the relationship between humans, technology, and nature. Put succinctly, philosophy of interdisciplinarity helps us "to go through Bacon and deal with his program, in order to go beyond him" [20; pp. 72-73].
Unlike object-oriented interdisciplinarity, which dominates the technoscientific neo-liberal era we live in, Schmidt expectedly advocated an interdisciplinarity oriented towards wicked societal problems and their resolution. That can be seen in the chapter entitled "Society and Societal Problems" of Schmidt's monograph, which has a lot in common with the theses he presented in an article we discussed earlier, namely "What is a problem?" published in 2011. Since we have already analyzed that article in detail, we shall focus on the differences between it and the book chapter. Special emphasis will be put on philosophy of interdisciplinarity's contribution to the discussion on problems in general, and on societal problems in particular. The first difference between the article and the chapter lies in the involvement of philosophy of interdisciplinarity in recognizing the types of interdisciplinarity present in two reports: the NSF's NBIC report from 2002 and the European Commission's CTEKS (Converging Technologies for the European Knowledge Society) report. The CTEKS report, in Schmidt's opinion, "shifts the perspective away from object-oriented interdisciplinarity [advocated by NSF's report] towards problem-oriented interdisciplinarity, which, by means of detailed specification of each component, aims to achieve a framing of the problem, a convergence of goals, and critical reflection on and the potential revision of purposes" [20; p. 88]. Furthermore, the second difference lies in the central place philosophy of interdisciplinarity should hold in the correct understanding of and orientation towards a problem, since the problem is "a key term in both the political and epistemological discourse and the practice of interdisciplinarity," and therefore philosophy of interdisciplinarity becomes indispensable for giving substance to problem-oriented interdisciplinarity [20; p. 90].
Although an advocate of problem-oriented interdisciplinarity, Schmidt saw that today, scarce as it is, even such an approach to interdisciplinarity has many shortcomings. He recognized that it is most often characterized by instrumentalism, which is oriented exclusively towards solutions to problems rather than towards their roots and prevention, and he called such an approach solutionism. That caused him to devote an interlude in his monograph to clarifying what problem-oriented interdisciplinarity should be and, more precisely, to promoting its critical-reflexive subtype. That subtype, in Schmidt's words, contributes to "thwarting new problems at their very root," since it "scrutinizes the underlying dynamics of scientific/technological advancement," focusing both on emerging problems and, even more so, on the prevention of problems in the early phases of scientific progress [20; p. 93].
The interlude reveals the direction and the main message Schmidt tried to convey in the next three chapters, in which he presented three case studies from a critical-reflexive perspective on the following topics: ethics and environment, nature and the sciences, technology and the future. When he dealt with ethics and environment, Schmidt largely relied on the philosophical approach of the German philosopher Hans Jonas. His inclination towards Jonas' thought is apparent in the article "Die Aktualität der Ethik von Hans Jonas. Eine Kritik der Kritik des Prinzips Verantwortung" [24] and in the article "Defending Hans Jonas' Environmental Ethics: On the Relation between Philosophy of Nature and Ethics" [25], as well as in the sixth chapter of his monograph on philosophy of interdisciplinarity. In all three cases, Schmidt approached Jonas' philosophy in a similar fashion. He aspired to critically assess the applicability and actuality of Jonas' philosophy in the 21st century, in order to find out whether it can prove useful in a critical analysis of the current state of affairs and the relation between society and environment. Schmidt used scholastic precision in analyzing the four objections (the diagnosis objection, the origin analysis objection, the argumentation and justification objection, and the practice objection) and the arguments put forward by Jonas' critics in an effort to repudiate his theses. His defense of Jonas and his theses can be brought down to the following two conclusions Schmidt expressed in the introductory part of the chapter: (1) "Jonas is a pioneer in driving the idea of critical-reflexive interdisciplinarity […] in order to shift the direction of scientific advancement onto an environmentally friendly path" [20; p. 103]; (2) Jonas' public philosophy "can be regarded as interdisciplinary in a (self-)critical-reflexive sense, an interdisciplinary philosophy that is part of any good reflexive and reflective practice" [20; p. 104]. So, Jonas' philosophy served Schmidt as an illustrative example of what critical-reflexive philosophy of interdisciplinarity should look like, since Jonas considered philosophy of nature and ethics twin sisters, both being requisite for facing life-world problems. In line with Jonas' thought, Schmidt's practically relevant environmental philosophy of interdisciplinarity would be one "in which ethics, anthropology, metaphysics, philosophy of nature, philosophy of science, as well as politics and the life-world are conceptualized as a converging domain in a critical-reflexive fashion" [20; p. 119]. He concluded his considerations on the relation between ethics and environment by first taking an ex negativo approach in showing what philosophy in general, and philosophy of interdisciplinarity in particular, should be like. He is convinced that it should not be "apathetic or indifferent about the world," since it should concern "the world's state of affairs, especially environmental issues and global change problems," and hence should not be "value-free." It should, in Schmidt's opinion, rather be "engaged in changing the situation," achieving this by "fostering people's awareness, the responsibility of scientists or, in general, humans' stewardship for nature," and by providing "a reflexive fundament for the betterment of societal praxis, and for a good life." However, in order to achieve such a philosophy, it is necessary for humans to develop a different mindset towards nature, a mindset which would "govern our approach to the natural environment and change our societal relations to nature" [20; p. 120]. In conclusion, Schmidt was a strong supporter of Jonas' philosophy and therefore a proponent of an ethically responsible human approach towards nature. Schmidt deemed a critical-reflexive philosophy of interdisciplinarity, which should be problem- and future-oriented, key to achieving such an approach.
The contents of Schmidt's chapter on the relation between nature and the sciences can be reformulated as a question: what kind of science do we need in a world marked by instability and complexity? The topics of instability and complexity occupied a large part of his scientific endeavors from the early 2000s. His thoughts on the matter reached their peak in two of his previous monographs: Instabilität in Natur und Wissenschaft: Eine Wissenschaftsphilosophie der nachmodernen Physik [26] and Das Andere der Natur. Neue Wege zur Naturphilosophie [27]. The science of the second half of the 20th century challenged the perspective in which the world was considered stable and static, showing that the world around us is mainly characterized by instabilities and the complexities which stem from them. Schmidt's view on the matter was, of course, a critical-reflexive one. In his opinion, such a view "opens avenues for exploring new directions within the sciences and for fostering a change in the way sciences conceptualize (ex ante and ex post) nature and our societal relations to nature," and it can also "encourage scientists (and all of us) to question what counts as legitimate science, entailing a cultural critique of present-day fragmented knowledge production, the institutionalized research system, and the related (Cartesian dualistic) worldviews" [20; p. 123]. He advocated and used it in order to find an alternative to the mainstream sciences by means of critical-reflexive interdisciplinarity. That type of interdisciplinarity involves four aspects: (1) self-enlightenment, which encompasses a critical stance towards one's own approach to the world and to the boundaries of our framing of the world's objects; (2) synthesis or synopsis of disciplinary and non-disciplinary knowledge, to be used for creating a new and comprehensive understanding of nature and the societal relation between humans and nature; (3) change or transformation in the orientation of science and scientific advancement; and (4) problem orientation, since it is related to grand societal changes [20; pp. 123-124]. Accordingly, Schmidt's analysis concerned the fact that the sciences of the second half of the 20th century recognized instabilities, which was followed by the emergence of self-organization theories and shook the foundations of science as it was then known. In turn, Schmidt saw instabilities as an opportunity for a new synthetic-synoptic view of scientific findings which would shed a different light on nature, and he did so with the help of philosophy of interdisciplinarity, which aims to unify various perspectives and create a scientific common ground [20; p. 130]. In Schmidt's opinion, despite offering a new view of nature, instabilities simultaneously reveal limitations of and in the sciences. This dialectic relation is central to critical-reflexive interdisciplinarity. Instabilities have posed new methodological challenges to the sciences by deconstructing the methodological dogma of reproducibility, predictability, testability, and describability which arose out of the Baconian scientific program. Hence a critique of the Baconian program is indispensable for a problem-oriented perspective. Instabilities turn out to be paradigmatic for a critical-reflexive orientation in interdisciplinarity and in philosophy as an academic discipline [20; p. 143]. In the given circumstances, quantitative methodology became insufficient and had to be complemented by its qualitative counterpart, which involves methods such as processuality, modelling, and
contextualism. Schmidt deemed self-organization theories a prominent example of a new methodological orientation which produces and tests holistic models and offers explanations rather than (re)producing rigorous scientific laws. The dominant instrumentalist approach to science, according to Schmidt, addresses problems only at a superficial level, whereas late-modern science requires it to be complemented by a critical-reflexive dimension which would facilitate problem prevention. Moreover, Schmidt is convinced that late-modern sciences "open pathways to a more contextual and democratic understanding of sciences" [20; p. 151]. In his opinion, self-organization theories, as a form of problem-oriented late-modern scientific paradigms, deal with problems on a deeper level. He argues that the emergence and wide recognition of instabilities do not "drive sciences into a dead end and render scientific inquiry impossible," but rather "engender a different concept of science and a change of view regarding what counts legitimately as science" [20; p. 152]. Finally, the new, appropriate scientific approach to phenomena in a world of instabilities and complexity is one which is critical-reflexive, problem-oriented, future-oriented, synthetic, synoptic, holistic, and methodologically contextual.
Schmidt concluded his monograph on philosophy of interdisciplinarity with a chapter on technology and the future. It is a topic on which he had written in, for example, the article "Towards a prospective technology assessment: challenges and requirements for technology assessment in the age of technoscience" [28] and the article "Prospective Technology Assessment of Synthetic Biology: Fundamental and Propaedeutic Reflections in Order to Enable an Early Assessment" [29].
Due to challenges related to the environment, sustainability, and global change, which are mostly caused by the reckless use of technology, a new interdisciplinary approach to dealing with these challenges emerged in the 1960s in the USA and in the 1980s in Europe: Technology Assessment (TA). The main goal of TA is to foster and facilitate the societal and political shaping of technoscientific advancement by politicians and legislation, its basic purpose being the early identification and assessment of new technologies, as well as influencing their development. However, TA has faced criticism, and Schmidt was one of the authors who criticized its current state. His remarks were aimed at TA's lateness in reacting to problems produced by emerging new technologies, urging it to address the "underlying technoscientific knowledge dynamics with its inherent tendency to continuously produce new problems," as well as at TA's lack of criticality in considering "the background of the issues we face today" [20; p. 158]. Besides being critical towards it, Schmidt and his colleague Wolfgang Liebert [28] created a new concept of a critical-reflexive interdisciplinary approach in TA which should diminish TA's shortcomings: Prospective Technology Assessment (ProTA). Such a concept encompasses self-enlightenment "in the sciences and engineering, in the academy and the research system, and furthermore in science politics and society at large," which intends to "hinder the creation of new problems" and which "matches perfectly with the concept of critical-reflexive interdisciplinarity." Schmidt considered ProTA paradigmatic for his philosophy of interdisciplinarity, since he perceived it as a "normative-descriptive hybrid at the interface between science, society, and politics"; moreover, he was convinced that it can be "deemed to truly epitomize the concept of critical-reflexive interdisciplinarity" [20; p. 158; p. 178].
Schmidt's vision of ProTA involves four dimensions or orientations which set it apart from TA: (1) early-stage orientation or timeliness in addressing emerging novel kinds of technology and in acquiring technoscientific knowledge; (2) consideration of purposes and options for realizing technoscientific potentials; (3) shaping orientation, since it aims to shape technoscientific knowledge production; (4) examination of technoscientific knowledge produced at the technoscientific core [20; pp. 158-160]. ProTA turns out to be crucial because "technical systems, devices, things, and objects based on instabilities and showing self-organizing phenomena are beginning to populate our life-world," and from this stems the necessity to address the "instability-based, late-modern type of technology and undertake the task of developing procedures either to restrict and contain or to shape and deal with it" from an ethical perspective [20; p. 177]. In order to illustrate the applicability of ProTA, Schmidt used the example of synthetic biology, which he considered to be a key technoscience of the future, the essence of its technoscientific core being the "idea(l) of harnessing self-organization, including the ability to set off complex dynamical phenomena, for technical purposes" [20; p. 177]. Synthetic biology being a relatively new technoscientific field, Schmidt claimed that societally relevant ethical issues arise from it and that they should be addressed as early as possible, especially because its further development and realization would cause us to enter "a new technological era in which technical systems possessed high levels of autonomy and agency properties" [20; p. 163]. Therefore, a critical-reflexive approach which would facilitate early prevention, consideration of purposes, and the shaping and examination of technoscientific knowledge is of the essence. Furthermore, Schmidt's conception of ProTA needs to be founded on a solid ethical basis, similar to the one put forward by Jonas in his seminal work The Imperative of Responsibility [30]. Schmidt argued that if ProTA were "in alignment with Jonas's ethics," it could truly offer "an interdisciplinary, critical-reflexive approach that enables us to analyse and assess the technoscientific core of this new wave of emerging technologies" [20; p. 177].
CONCLUSION
Philosophy has often been criticized for not being involved in the discourse on interdisciplinarity. However, discussions on the relationship between philosophy and interdisciplinarity have intensified in the last two decades. Moreover, two levels of that relationship were established in 2009: philosophy of interdisciplinarity and philosophy as interdisciplinarity. While philosophy of interdisciplinarity refers to a philosophical approach towards interdisciplinarity in the manner of philosophy of science, philosophy as interdisciplinarity encourages a philosophical practice characterized by both reflective and reflexive engagement in the life-world, investigating and transcending academic philosophy's disciplinary boundaries and doing integrative fieldwork with scientists, engineers, and decision makers. Scholars have thus far shown more interest in the first level, that is, in philosophy of interdisciplinarity.
The aim of our article was to thoroughly investigate the development of a specific philosophy of interdisciplinarity conceptualized by Jan Cornelius Schmidt, who devoted the past two decades to reflection on the first level of the relationship between philosophy and interdisciplinarity. We have traced the evolution of his philosophy of interdisciplinarity from a conceptual blueprint to its (current) peak, i.e., from his first utterance of the notion Interdisziplinaritätsphilosophie in 2005 to his monograph on the matter in 2022. We have recognized and emphasized two milestones in the evolutionary trajectory of Schmidt's philosophy of interdisciplinarity.
The first of them was his article "Towards a philosophy of interdisciplinarity. An attempt to provide a classification and clarification" from 2008. Schmidt was convinced that the first step towards a philosophy of interdisciplinarity is to approach the complex phenomenon of interdisciplinarity using a four-dimensional framework stemming from philosophy of science. The second milestone was his article "What is a problem? On problem-oriented interdisciplinarity." Inspired by Dörner and Scholz, Schmidt defined problem-oriented interdisciplinarity as that which serves to constitute, frame, and clarify a problem, to anticipate and prevent it, and to suggest actions for its solution. Besides naming the problems of today and identifying the lack of answers provided by contemporary science's selective and incomplete, theoretical and practical understanding and use of interdisciplinarity, Schmidt gave priority to its problem-oriented type, with the corresponding philosophical stance immersed in the critical theory of the Frankfurt School.
The development of Schmidt's philosophy of interdisciplinarity reached its (current) peak in 2022 in the form of the monograph entitled Philosophy of Interdisciplinarity. Studies in Science, Society and Sustainability. As we have shown, it is a theory he has been meticulously building for two decades and gradually exposing in numerous articles and books published in that period. The leitmotif of the whole monograph is Schmidt's criticism, from an ontological, epistemological, methodological, and problem-oriented perspective, of mainstream science, which is unaware of or ignores the true meaning of interdisciplinarity. His criticism was grounded in and carried out with the help of philosophy of interdisciplinarity, which he saw as a possible catalyst for improving the current relationship between philosophy, science, technology, society, and nature, as well as a possible pathway towards constituting a new science for the future. In his opinion, the nanotechnoscience we witness today originates from and still largely resembles the thought of Francis Bacon. Such science promotes instrumentalism, neglecting the negative implications and consequences of technological advancement. Schmidt's response to that is the critical-reflexive subtype of problem-oriented interdisciplinarity, which aims to prevent problems from emerging, thus being future-oriented. He recognized Hans Jonas as the precursor of such an approach, since Jonas promoted the imperative of responsibility in human conduct towards nature, their scientific endeavors included. Accordingly, Schmidt's philosophy of interdisciplinarity should be involved in the world's state of affairs, offering a critical-reflexive fundament for a responsible, value-laden, practically relevant philosophical consideration of the life-world. That implies adaptation to the new science, largely characterized by complexity, dynamics, and instability. Schmidt considered ProTA to be the embodiment of a responsible relationship towards technological advancement, involving problem prevention and critical reflection on the purpose of technology and science.
Schmidt's monograph represents the realization of one of two capital tendencies presented at the Atlanta Philosophy of/as Interdisciplinarity workshop in 2009, the fulfilment of the desideratum Schmidt evoked in many of his articles. However, Stephan Lingner rightly noticed that Schmidt's monograph does not explicate "how its critical reflexive ambition might be carried-out in practice and how it could effectively enter research policies and related techno-scientific innovation," and that it opens the following question: "which incentives or organizational changes could nudge the actors in a competitive world to more responsible innovation beyond volatile appellative considerations" [31; p. 79]. But that was not the aim of Schmidt's monograph. From our point of view, the philosophy of interdisciplinarity presented in his monograph is the prerequisite for constructing a philosophy as interdisciplinarity which would practically tackle the problems of the life-world and thus answer Lingner's questions. While Schmidt's monograph filled the theoretical gap, philosophy as interdisciplinarity remains a practical desideratum.
(1) issue 11 of volume 190 of Synthese (2013), edited by Hoffmann, Schmidt, and Nersessian; and (2) issue 3 of volume 6 of the European Journal of Philosophy of Science (2016), edited by Uskali Mäki and Miles MacLeod.
Figure 1. The four types or dimensions of interdisciplinarity, including transdisciplinarity and a central subtype, namely critical-reflexive interdisciplinarity [20; p. 30].
"Philosophy"
] |
An Analysis on Absolute Velocity
Introduction
The following analysis presents a practical approach for measuring the absolute velocity of an observer, which could be used for determining a spacecraft's state of motion from inside a closed cabin. The concept of absolute velocity generally refers to a standard uniform velocity of the various objects of a physical system relative to a postulated immobile space that exists independently of the physical objects contained therein (i.e., an absolute space).
Analysis on Absolute Velocity
Herein, we present an approach based on rigid kinematics to demonstrate that the absolute velocity of an observer can be determined from the fact that light travels through a vacuum at speed c regardless of the motion of the light source or that of an observer's frame of reference. Consequently, some time is required for light to travel from a light source to an observer in space, such that the emission of light and the observance of the emitted light are not simultaneous.
In the proposed analysis, we first provide the following conventions for the coordinate systems under consideration. We assume a Cartesian coordinate system composed of three pairwise perpendicular axes originating from point (0,0,0), where any point P in space can be defined by its coordinates along the x, y, and z axes, represented by an ordered triple of real numbers (x,y,z). An inertial reference system S is assumed to be represented by a space rectangular coordinate system (x,y,z) whose origin is O. A series of standard clocks (denoted as S clocks) are located at stationary points in S. The S clocks are mutually calibrated (i.e., they provide equivalent readings at the same instant in time) based on the transmission and reception of a light signal. Specifically, at time t_A, a light beam is projected from clock A to clock B; it is then reflected at time t_B by clock B back toward clock A, and arrives at clock A at time t_A'. If t_B − t_A = t_A' − t_B, the two clocks are calibrated. For any event located at coordinates x,y,z, its time coordinate t is given by the reading of the event-related S clock. Similarly, another inertial reference system S' is established based on a second space rectangular coordinate system (x',y',z') whose origin is O', and another series of mutually calibrated standard clocks (S' clocks) are located at stationary points in reference system S'. For any event located at coordinates x',y',z', its time coordinate t' is given by the reading of the event-related S' clock.
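The calibration condition can be checked numerically. The following is a minimal sketch; the function name and tolerance are our own and are not from the source:

```python
def clocks_calibrated(t_A, t_B, t_A_prime, tol=1e-12):
    """Einstein-style synchronization check: the outbound travel time
    (t_B - t_A) must equal the return travel time (t_A' - t_B)."""
    return abs((t_B - t_A) - (t_A_prime - t_B)) < tol

# Example: signal sent at t=0, reflected at t=5, received back at t=10.
print(clocks_calibrated(0.0, 5.0, 10.0))  # True
```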
Because the respective origins of the spatial and temporal coordinates, as well as the directions of the coordinate axes can be selected arbitrarily to a large extent, the relationships between S and S' employed herein are based on the following conventions, which have been applied for simplicity.
(1) At a particular instant in time, the origins and coordinate axes of the two systems are superposed, and the clocks respectively located at O and O' are set to zero, i.e., t = t′ = 0. (2) The x and x' axes are coincident in the direction of the relative motion of S and S'; thus, x and x' are coincident at all times, while y and y', as well as z and z', are parallel.
(3) S' moves along the +x direction of S.
The proposed analysis is based on the fundamental principle that light in a vacuum travels at a constant velocity c regardless of the motion of either the observer or the light source. Thus, we herein define the absolute velocity U of the observer, which is assumed to be linear and uniform, as U = c·f(φ), where f(φ) represents a function of the geometrical relationship between the direction of a light signal and the direction of U, denoted herein as φ.
Two new explanations of the relativity of time and length are provided in the proposed analysis, which differ from Einstein's explanations (please refer to "On the Electrodynamics of Moving Bodies"). These explanations are introduced in the following sections.
The relativity of time
Owing to the finite velocity of light, some time is required for light to travel from a light source to an observer in space. It can therefore be deduced that the emission of light and the observance of light cannot be simultaneous. Based on the constancy of the velocity of light and the conditions illustrated in Figure 1, we define the following factors. The positions of a light source and an observer at t = 0, at which time the light source emits light (denoted as event R), are given respectively as A and H0. During the period of light propagation from t = 0 to t = T, the observer travels a distance U·T from position H0 to H, whereupon the observer receives the emitted light. Accordingly, we define HA as the distance l between the position of the observer upon receiving the light signal at t = T and the position of the light source when emitting light at t = 0. Owing to the constancy of the velocity of light, l = c·T, and the distance H0H is equivalent to U·T. Finally, we define AH0 as the distance l0 between the position of the observer at t = 0 and the position of the light source at t = 0. Here, we introduce a consideration of the observer's position at t = 0, which was previously thought to have no physical significance. If the states of motion of the observer and the light source are equivalent, l0 is given, such that U can be calculated via the triangular relations illustrated in Figure 1.
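To make the geometry explicit, the following is a hedged reconstruction of the triangular relation implied by Figure 1, assuming φ denotes the angle between the line AH0 and the direction of U (our reading of the figure, not a statement from the source):

```latex
% Law of cosines on triangle A-H0-H, with AH0 = l0, H0H = UT, AH = cT:
\[ (cT)^2 = l_0^2 + (UT)^2 + 2\, l_0\, U T \cos\varphi \]
% Solving the quadratic for U (taking the physical, positive root):
\[ U = \frac{-\, l_0 \cos\varphi + \sqrt{c^2 T^2 - l_0^2 \sin^2\varphi}}{T} \]
```

Given l0 = AH0(φ) and the measured arrival time T, this yields U directly.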
In the present work, the time of an event is measured using the following method. Both the observer and a clock are placed at the origin. When the light signal representing the occurrence of an event reaches the observer, the light arrival time will correspond with the time indicated by the clock. The advantage of this correspondence is that it is always related to the position of the observer who employs the clock. As seen in Figure 1, spatial point O3 can be defined based on the position of the light source when emitting light at t = 0, and spatial point O2 can be defined based on the position of the observer when receiving light at t = T. However, the means of defining spatial point O1, at which the observer is located when the light source emits light at t = 0, is not obvious. To solve this problem, the concepts of absolute rest (i.e., U = 0) and absolute motion (i.e., U ≠ 0) are introduced. We respectively substitute the single moving observer at H0 and H with two observers H1 and H2 at rest at spatial points O1 and O2, respectively, while light source A is at rest at spatial point O3. Each of the observers and the light source employ calibrated standard clocks, and A emits a light signal at t = 0, denoted as event R0. When observer H1 receives the light signal traveling at c, the clock reading is T0, and when H2 receives it, the reading is T. As such, the distances can be defined based on the respective travel times of the light signal as follows:

O3O1 = c·T0   (1)
O3O2 = c·T   (2)

According to the triangular relations shown in Figure 1, the transformation of event R0 between observers H1 and H2 is given as follows:

c·T·cos φ′ = c·T0·cos φ + O1O2   (3)
c·T·sin φ′ = c·T0·sin φ   (4)

Here, φ is the angle between the line O3O1 and the x axis, and φ′ is the angle between the line O3O2 and the x axis. Returning now to the condition of a single moving observer, we assume that, in system S, both the observer H and light source A employ a calibrated standard clock [1]. With l0 = AH0 and O1O2 = U·T, Equations (3) and (4) can be rewritten as follows:

c·T·cos φ′ = l0·cos φ + U·T   (6)
c·T·sin φ′ = l0·sin φ   (7)
We also observe that, for φ = π/2, Equations (6) and (7) reduce to c·T·cos φ′ = U·T and c·T·sin φ′ = l0, which gives

cos φ′ = U/c   (8)

Therefore, based on the above analysis, a simple method for determining U can be obtained from Equation (6), given that AH0(φ) is known.
For the purpose of simplicity, a light source with the same state of motion as that of the observer is chosen as the reference frame [2,3]. An observer is set at the center of a straight rigid bar in uniform linear motion, and light signals from different positions on the bar arrive at the observer at different times. The observer detects a deflection of the bar at the observer's position, with a deflection angle π − 2φ′ that can be obtained from Equation (8).
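As a numerical illustration, and assuming that Equation (8) reduces to cos φ′ = U/c (our reconstruction above), the deflection angle is easily evaluated:

```python
import math

c = 299_792_458.0  # speed of light in vacuum, m/s

def bar_deflection_angle(U):
    """Apparent deflection pi - 2*phi' of the rigid bar, assuming
    cos(phi') = U/c (reconstructed Eq. (8)) for phi = pi/2."""
    phi_prime = math.acos(U / c)
    return math.pi - 2.0 * phi_prime

# Example: U = 30 km/s (roughly Earth's orbital speed)
print(math.degrees(bar_deflection_angle(30e3)))  # ~0.011 degrees
```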
Through this method, the state of motion of a spacecraft could be determined from a closed cabin.
The relativity of length
In the above analysis, two lengths have been introduced, i.e., H̄H and O1O2. In this case, we consider the length of a bar that is assumed to be a stationary rigid bar of length L within its coordinate system, for which L is measured with a stationary staff gauge, and where the bar axis is coincident with the x axis of a coordinate system that moves with uniform linear motion along the +x direction at an absolute velocity U. It is assumed that L is determined by the two operations defined below. (a) The observer resides in the same moving coordinate system as the bar and staff gauge, and L is measured by superposing the bar and staff gauge [4-6].
(b) With the aid of several synchronized clocks positioned at rest in the stationary coordinate system, the observer measures the positions of the two ends of the bar in stationary coordinates at a specific moment t, and the distance between the two positions is measured with the stationary staff gauge.
H̄H is the length calculated from operation (a), which is denoted herein as the bar length in the moving coordinate system. We assume that, in system S, observer H̄ is in the same state of motion as H. Then, as shown in Figure 1, H̄ is at point O1 when receiving the light signal at t = T0, and, simultaneously, observer H is at point O4. Therefore,

H̄H = O1O4   (10)

O2O4 is the distance traveled by observer H in the interval from T0 to T, such that

O2O4 = U·(T − T0)   (11)

As such, the following expression can be deduced:

H̄H = O1O2 − U·(T − T0)   (12)
Next, we will consider the coordinate and time transformations between two moving coordinate systems, both of which move with a uniform linear velocity, which represents a new explanation of the Lorentz transformation.
Assuming that observer H (in system S), observer H' (in system S'), and light source A'' (in system S'') all employ individually calibrated standard clocks, at t = t' = t'' = 0, light source A'' emits a light signal, denoted as event R'' [7]. At the instant of event R'', A'' is at point O3, and observers H and H' are coincident. When observer H receives the light signal, the reading of the S clock is T, and H is at point O1. When observer H' receives the light signal, the reading of the S' clock is T', and H' is at point O2. The transformation of event R'' between observers H' and H can then be calculated in analogy with Equations (3) and (4), yielding Equations (13) and (14), where u is the relative velocity between observers H' and H, from which the corresponding transformations follow.
"Physics"
] |
Finite-Difference Time-Domain (FDTD) Simulation of Novel Terahertz Structures
Previous work on compact, variable, efficient, high-brightness radiation sources is extended by calculating the radiated power and angular distributions for different configurations and drive sources. Figures of merit are defined in terms of efficiencies or effective impedances such as the radiation coupling impedance Zr. Characteristics of representative cases are discussed in terms of a few basic parameters. Conditions for interference are discussed and demonstrated. Finally, we discuss some further possibilities together with various impediments to realizing such devices. The differences between bound and free electrons are studied from the standpoint of the frequencies that are practicably achievable. With the ansatz that the transport physics together with Maxwell's equations remain valid but are modified by the material properties, a number of analogs exist between these two basic sources of radiation. In many cases, the differences are between macro and micro implementations, e.g., between klystrons and klystrinos (micro or nano), between solid-state and semiconductor lasers, or rare-earth doped transistors. Cases with no apparent analogs are ones due to unique quantum effects, e.g., radiation at 3kTc in superconductors. This is well above magnetic resonance imaging (MRI) photon energies around 0.4 μeV but well below room temperature at 25 meV. Bound and free possibilities for planar micro-undulators over this range are studied using FDTD techniques. To our knowledge, there have been no implementations of either possibility.
I. Introduction
Previously, we explored possibilities for producing narrow-band THz radiation using either free or bound electrons in micro-undulatory configurations [1]-[4], because integrated circuit technology appeared well matched to this region extending from about 300 GHz to 30 THz. This range [5]-[15] has largely been neglected until recently because it runs from the limit of WR-3 waveguide at 300 GHz up to CO2 lasers, where the laser regime dominates.
The present work is a byproduct of an ongoing goal of making an electro-optic electron accelerator on a chip (AOC). While lasers provide sufficient power, their use generally implies effective cell sizes proportional to their wavelength, which poses a major complication. Thus, devices bridging the gap between lasers and conventional RF could prove very useful. Because of their other potential uses [5]-[15], we decided to explore this THz region using wiggler or snake-like configurations such as shown in Fig. 1.
II. General Discussion
There are many ways to approach this problem, but the most direct is to determine the Poynting vector based on calculating the acceleration fields in the far field and, from it, the angular distribution:

dP/dΩ = (e²/16π²ε₀c) |n × ((n − β) × β̇)|² / (1 − n·β)⁵   (1)

For β ≪ 1, the above relation reduces to:

dP/dΩ = (e²a²/16π²ε₀c³) sin²Θ   (2)

where Θ is the angle between the observation direction n and the direction of acceleration a at emission time t.
An important aspect of any source is the ability to measure it, so we place a high value on reciprocity. Fast switching transistors or the production of x-rays via electron bremsstrahlung are classic examples of the inverse of the photoelectric effect, which is used to produce electrons when the photon energy hν is sufficient to assist the electron in overcoming the Schottky barrier. The phase space densities of the resulting electron or photon beams depend on their wave vectors, and both beams will diverge/diffract without confining potentials or guide structures that are properly matched to the incident beams. These are the reasons for the increased use of laser-driven, RF-assisted electron guns and photonic band gap crystals. If one runs the resulting beam of free electrons into a macroscopic undulator [16] having a wavelength λU, they will radiate at harmonics n of the device period:

λn = (λU/2nγ²)(1 + K²/2)   (3)

where the electron energy γ is in units of rest mass mc². Clearly, one can benefit from increasing the energy or reducing λU or the effective mass m* (making m* a tensor keeps m an invariant). For low-energy conduction band electrons, γ ≈ 1, so that a wiggle period of λU = 60 μm, achievable with standard IC techniques, might be expected to give 30 μm, 10 THz radiation. We explore the validity of these ideas and ways to implement such devices.
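The scaling above can be made concrete with the standard on-axis undulator resonance formula; the helper below is a sketch (γ is the total energy in rest-mass units and K the undulator parameter, both taken small following the text's estimate):

```python
def undulator_wavelength(lambda_u, gamma=1.0, K=0.0, n=1):
    """On-axis resonant wavelength: lambda_n = lambda_u/(2*n*gamma^2) * (1 + K^2/2)."""
    return lambda_u / (2.0 * n * gamma**2) * (1.0 + 0.5 * K**2)

c = 299_792_458.0
lam = undulator_wavelength(60e-6)  # 60 um period, gamma ~ 1, weak K
print(lam * 1e6, c / lam / 1e12)   # ~30 um -> ~10 THz
```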
III. FDTD Code Validation
Finite Difference Time Domain (FDTD) is a powerful and flexible technique that is expected to play a central role in development and simulation of sub-millimeter wave devices. It was chosen over others because it is very efficient and its implementation is straightforward. Also, the FDTD method is ideal for our problem which is non-linear and may include anisotropy.
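For readers unfamiliar with the method, the following is a minimal one-dimensional Yee-grid sketch in normalized units; it is purely illustrative and is not the validated code used in this work:

```python
import numpy as np

# 1-D FDTD in free space with c = 1 and Courant number S = c*dt/dx = 1.
nx, nt = 400, 300
ez = np.zeros(nx)       # E_z at integer grid points
hy = np.zeros(nx - 1)   # H_y at half grid points

for n in range(nt):
    hy += ez[1:] - ez[:-1]                      # H update from curl of E
    ez[1:-1] += hy[1:] - hy[:-1]                # E update from curl of H
    ez[50] += np.exp(-((n - 60) / 15.0) ** 2)   # soft Gaussian source

print(float(np.abs(ez).max()))  # pulse propagating on the grid
```

A full solver adds absorbing boundaries, material properties, and port extraction for S-parameters, but the leapfrog update above is the core of the method.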
Fig. 2. Benchmark filter used to validate the FDTD code.
Before presenting simulation results for any undulators, the developed FDTD code should be validated. The results are compared to those presented in [17]. The low-pass filter used to validate the code is shown in Fig. 2. Comparison results for the insertion loss (S21) and return loss (S11) are shown in Figs. 3 and 4. One observes good agreement between measured and calculated data, except at the highest frequency, which is somewhat shifted. Experience with planar circuit techniques leads one to conclude that this shift is caused mainly by the slight misplacement of the ports inherent in the choice of the spatial steps [17]. Related test structures were based on [18], with dimensions scaled to give the same low-frequency impedances for similar periods. Pulse currents greater than 1 A at 1 ns were obtained routinely without failures by careful conditioning. Different 2-D and 3-D implementations are interesting to pursue, as are other inductor-like topologies or laser-driven, high-mobility, direct-band-gap materials; but first, it is useful to check the consistency between the classical and microscopic pictures we have assumed.
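For readers unfamiliar with the method, a minimal 1-D vacuum FDTD loop (Yee leapfrog scheme) is sketched below; the grid size, Courant number, and Gaussian source are illustrative and unrelated to the benchmark filter of Fig. 2.

```python
import numpy as np

# Minimal 1-D vacuum FDTD (Yee scheme) in normalized units.
nz, nt = 400, 800
ez = np.zeros(nz)   # electric field
hy = np.zeros(nz)   # magnetic field
S = 0.5             # Courant number (c*dt/dz); stable for S <= 1 in 1-D

for n in range(nt):
    # Update H from the curl of E (staggered half step)
    hy[:-1] += S * (ez[1:] - ez[:-1])
    # Update E from the curl of H
    ez[1:] += S * (hy[1:] - hy[:-1])
    # Soft Gaussian source injected at the grid center
    ez[nz // 2] += np.exp(-((n - 60) / 15.0) ** 2)

print("peak |Ez| =", np.abs(ez).max())
```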
For conventional synchrotron radiation [19], one can estimate an energy loss per wiggle turn of

$$\Delta E \approx \frac{e^2 \beta^3 \gamma^4}{3\varepsilon_0 \rho} \qquad (4)$$

where $\rho$ is the bend radius in m. We note that Eq. (2) can be due to magnetic or other equivalent effects, because any change in the velocity or momentum of an electric charge results in radiation. Further, the average photon energy $\langle u\rangle$ can be written

$$\langle u\rangle \approx \frac{8}{15\sqrt{3}}\,\frac{3\hbar c \gamma^3}{2\rho} \qquad (5)$$

where we have assumed a radius of 10 μm from Fig. 1. For reference, a 0.5 THz photon has an energy of about 2 meV. Thus, the assumption of constant $\gamma$ in Eqs. (4)-(5) appears reasonable, ignoring intrinsic scattering in the material.
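A short numerical check of Eqs. (4)-(5) under the stated assumptions (γ ≈ 1, ρ = 10 μm); the constants are standard SI values, and the functions are our own illustrative implementations. Dividing the energy loss per turn by the mean photon energy reproduces the ~0.03 photons per electron per half loop quoted later in the text.

```python
import numpy as np

hbar = 1.054571817e-34
c = 2.99792458e8
e = 1.602176634e-19
eps0 = 8.8541878128e-12

def energy_loss_per_turn(gamma, rho, beta=1.0):
    """Eq. (4): synchrotron energy loss per turn, returned in eV."""
    return e * beta**3 * gamma**4 / (3 * eps0 * rho)

def mean_photon_energy(gamma, rho):
    """Eq. (5): (8/15*sqrt(3)) times the critical energy 3*hbar*c*gamma^3/(2*rho), in eV."""
    u_c = 1.5 * hbar * c * gamma**3 / rho     # critical photon energy (J)
    return (8 / (15 * np.sqrt(3))) * u_c / e

rho = 10e-6  # 10 um bend radius assumed from Fig. 1
dE = energy_loss_per_turn(1.0, rho)
u = mean_photon_energy(1.0, rho)
print(dE * 1e3, "meV/turn;", u * 1e3, "meV/photon")
print("photons per half loop ~", 0.5 * dE / u)  # ~0.03
```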
V. Results and Discussion
FDTD simulations were carried out for wiggler structures such as shown in Fig. 1 for 1.5 periods. The half-period circuit length L is 231.4 μm for λ_U = 30 μm. This gives a fundamental resonant frequency f0 of 0.437 THz. This is not the free-electron frequency f_U from Eq. (3). The return losses for a half period and for 1.5 periods are shown in Figs. 5 and 6, normalized to the frequency f0. None of these structures, in this form, are expected to be coherent. Figure 5 demonstrates that an electron wave passes through the structure with very small reflection at f0 because it does not resolve the half loop well at this frequency. Further, the broad reflections around 2, 4, 6, and 8 f0 are due to harmonics of the reflection coming from the loop at ¼ of the wiggler period. As the frequency increases, the reflection coefficient increases and broadens, consistent with the fact that higher frequencies resolve and sample the full loop better. From Fig. 1, Eqs. (3)-(4) and f0, we expect a radiation rate of 0.03 photons per electron per half loop with a diffuse pattern, based on a mean angular spread of 1/γ radians. While not optimal for brightness, this does imply out-of-plane radiation. We also expect the reflected electrons to radiate photons with a different radiation pattern in a competitive way. In Fig. 6, there are three small reflections around f0, corresponding to the same mechanism as in Fig. 5, before one reaches the strong first loop reflection at the same frequency as in Fig. 5. For coherence with such structures we would require multi-port feeds; in such cases, one could expect the three peaks to merge at f0 with a more pronounced resonance structure. Another observation is that the broad reflections around 2, 4, 6, and 8 f0 also exist for the 1.5-period case, except that there are now loops at ¼, ¾, etc. of the wiggler period. This explains why these reflections are higher for the 1.5-period case than for the half-period case, by direct analogy with HR coatings. Fig. 6 also shows the insertion loss versus frequency for 1.5 periods. As the frequency increases, transmission decreases; this is dual to the return-loss parameter. Figure 7 shows the input impedance (real and imaginary) as a function of frequency. At deep resonance, the input impedance is purely real (50 Ω). This corresponds to a matched load with zero reflection. Under the assumption of ballistic transport, this implies a broadband radiation spectrum having the mean energy given by Eq. (5), although such radiative losses are not explicitly reflected in these plots.
On the other hand, the broad reflections around 2, 4, 6, and 8 f0 have higher values of input impedance (mismatching), emphasizing that many more electrons are reflected at these frequencies, producing radiation with a more bremsstrahlung-like spectrum. This mode relates most closely to IMPATT devices. In the two cases, ballistic transport and reflection, the spectra and distribution patterns are expected to be very different, with the latter extending to higher frequencies and, in lowest order, having a dipole distribution whose axis is centered on the incident electron's wave vector, so that the radiation peaks in directions around the perpendicular to this vector. This is in direct contrast to the synchrotron-like radiation. With increasing frequency we expect such differences to become better defined because the classical conditions for radiation [19] improve.
To obtain a bound micro undulator that retains the 2-D structure of Fig. 1, we can add a thin covering dielectric layer followed by a broad strip of metal running perpendicular to the straight segments, as shown in Fig. 8. This is pulsed with shorter-duration, higher-peak currents that couple to the fields of the previous circuit to produce coherent radiation (Eq. (3)) whose wavelength varies with the angle of observation relative to the oscillation plane. This relates to Smith-Purcell radiation [20] but is more practical. There are many variants. For the free case, one can add a mirror-symmetric circuit above Fig. 1.
A useful figure of merit for such devices is the 6-D normalized brightness in the form of a photon density (Eq. (6)). Even for K ≪ 1, bound implementations are far preferable, since this is an intense source by virtually any standard. The differing uses of metals in such devices, as opposed to semiconductors, is too broad a topic to discuss here, as are the differences between metals such as Al and Au for use in fast laser drive systems [2], but we would be remiss not to mention materials such as poled, periodic lithium niobate [22] that could also be used with electrons. In a typical 2-port, lossy microwave structure, the power dissipated (normalized to the input power) can be estimated, on the assumption that the S-matrix is complex and orthogonal, as

$$\frac{P_{diss}}{P_{in}} = 1 - |S_{11}|^2 - |S_{21}|^2 \qquad (7)$$
The power dissipated can be due to radiation, conductor, or substrate loss. For instance, for a standard radiating structure with no output port (S21 = 0), the dissipated power depends on S11 only; in this case, small values of S11 indicate high loss. Further, if we assume no conductor or substrate loss, the radiated power must go as 1 − |S11|². If we define the radiation efficiency as

$$\eta_r = \frac{P_{rad}}{P_{diss}} \qquad (8)$$

then this radiating structure has 100% radiation efficiency, since all the power dissipated is due to radiation. If a second port exists, the dissipated power must depend on the transmission coefficient (S21) as well, with higher transmission indicating lower losses. In addition, if part of the dissipated power is due to substrate and/or conductor loss, the radiation efficiency, based on Eq. (8), will be less than 100%. An example of this is the half-period wiggler, where a portion of the loss is dissipated in the substrate (the substrate is assumed to be lossy with a small loss-tangent value). Another definition of the radiation efficiency is

$$\eta_t = \frac{P_{rad}}{P_t} \qquad (9)$$

where Pt is the total power applied to the structure. This definition states that the efficiency is the percentage of power lost into radiation compared to the total power applied to the structure, ideally the so-called wall-plug power. Fig. 9 shows the radiation efficiency as a function of normalized frequency for the half-period wiggler. One observes that the radiation efficiency increases with frequency. In addition, the radiation efficiency maxima track the minima of S21, which occur around 2, 4, 6, 8, and 10 f0, as discussed above. It is worth noting that the resonant frequencies are estimated based on a constant relative permittivity. This explains the results in the previous figures, where the higher resonant frequencies are overestimated because the increase of relative permittivity with frequency is not included. A simple estimate for the relative permittivity at 10 f0 gives ε_r(10 f0) ≈ 1.1 ε_r(f0).
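The bookkeeping of Eqs. (7)-(8) is simple enough to sketch directly; the S-parameter values below are hypothetical, chosen only to illustrate the conventions.

```python
def dissipated_fraction(s11, s21):
    """Eq. (7): power dissipated in a lossy 2-port, normalized to input power."""
    return 1.0 - abs(s11)**2 - abs(s21)**2

def radiation_efficiency(p_rad, p_diss):
    """Eq. (8): fraction of the dissipated power that is radiated."""
    return p_rad / p_diss

# Hypothetical values near a resonance: low reflection, moderate transmission.
s11, s21 = 0.1, 0.8
p_diss = dissipated_fraction(s11, s21)   # 0.35 of the input power dissipated
p_rad = 0.9 * p_diss                     # assume 10% substrate/conductor loss
print(p_diss, radiation_efficiency(p_rad, p_diss))
```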
HFSS simulations were carried out to calculate the radiation pattern of the half-period wiggler. Fig. 10 shows the radiation pattern at phi = 90° (the YZ plane) for different frequencies, with the angle theta measured from the −y axis. One observes that higher frequencies radiate more power; S11 trends higher while S21 decreases with frequency. Because the value of S21 (close to unity) is much larger than S11 (close to zero), the radiated power tracks S21. Further, at high frequency, and looking at the YZ plane, the half-period wiggler acts as a dipole antenna; the EM waves propagating along the different sides of the half-period wiggler have different directions. Moreover, the 90-degree turns generate radiation at high frequency. These turns were put in to avoid crosstalk between the input and output ports, and because a well-defined loop was required for study at this stage of the work. This explains the radiation pattern of Fig. 10, where a second loop is partially created at higher frequency. On the other hand, Fig. 11 shows the radiation pattern at phi = 0° (the XZ plane) for different frequencies, with the angle theta measured from the −x axis. One observes again that the radiated power increases with frequency. Further, the radiation pattern is asymmetric. Considering Fig. 11, one concludes that the half-period wiggler is equivalent to a dipole antenna resting at a certain angle in the XY plane, and this angle depends on the operating frequency.
To further investigate the characteristics of the half-period wiggler, the lengths of the two transmission lines (T) on either side of the wiggler are varied, so that the half-circle is connected to the ports via transmission lines of length T. As a result, the first resonant frequency shifts to a higher or lower frequency, which can be checked against the S-parameter curves, Figs. 13 and 14. It is worth noting that the purpose of reducing the length T is to obtain radiation at higher frequencies and also to achieve a purer dipole-like radiation pattern at these frequencies. It is found that the resonant frequencies for planar circuits follow Eq. (10):

$$f_{mn} = \frac{c}{2\sqrt{\varepsilon_{eff}}}\sqrt{\left(\frac{m}{a}\right)^2 + \left(\frac{n}{b}\right)^2} \qquad (10)$$

where a = L = 2W + T + R and b = R + 2T. Here ε_eff is the effective permittivity, which is a function of frequency. Further, the effective relative permittivity equals ~2.2, 1.96, and 1.32 for the respective modes. HFSS simulations were carried out to calculate the radiation pattern of case 1, shown in Fig. 12. The radiation patterns for phi equal to 0° and 90° are shown in Figs. 15 and 16. Considering Fig. 16 at f = 4.0 THz, one notices that the radiation pattern is more symmetric and has a dipole-like form. This is in contrast to the original wiggler (Fig. 10), where at 4.2 THz the radiation pattern is distorted, i.e., has poor directivity. Moreover, one notices that the radiation pattern at phi = 0° for f = 4.0 THz is not symmetric, simply because the XZ plane spans the distance between the input and output transmission lines. One also observes that trade-offs exist between radiation efficiency and directivity. This is noticeable from the peak values of the radiated electric fields, which show that the efficiency of the original case is higher than that of the case in Fig. 12, which has the better directivity. Fig. 17 shows the 3D radiation patterns at different frequencies.
One observes that at f = 18 THz the radiation pattern is distorted. This is due to the high-frequency radiation coming from the sharp edge connecting the port to the half-circle. One also observes that a new resonant frequency is created at 3.8 THz, corresponding to twice the wavelength of a single half-circle (7.9 THz). Good agreement between the results obtained by the FDTD code and HFSS can also be confirmed. Moreover, the number of resonant frequencies is doubled in the same frequency range. Fig. 20 shows the radiation efficiency versus frequency for three different cases. It can be seen that the radiation efficiency is increased because there are two half-circles radiating instead of one. In addition, the radiation efficiency is roughly quadrupled at 16 THz, indicating coherence at this frequency. Also, for the case without kinks, the radiation efficiency increases almost linearly, and coherence is not achieved because the loops are not well defined. Finally, considering the radiation patterns for both the half-circle and the two half-circles, one concludes that the directivity of the second case is much better. This is analogous to an antenna array: a two-element array has better directivity than a single-element antenna. The coupling impedance is a measure of how much power is lost into radiation. Table I emphasizes that the radiation coming out of the two half-circles is higher than that of the single half-circle, given that both half-circles radiate constructively.
Moreover, at f = 12 THz, the peak value of the radiated field decreases due to out-of-phase radiation, i.e., the radiation coming out of the two circles adds destructively. In the next section, a new structure is developed to exploit the fact that a change of frequency can achieve coherence. Fig. 24 shows a top view of the simulated structure. The main idea is to achieve constructive radiation from the two half-circles; as a result, a higher radiated power, or equivalently a higher radiation efficiency, can be attained. To do this, a transmission line of length d is inserted between the two half-circles. By changing the distance d, the phase difference of the EM waves propagating along the two half-circles is controlled. In this manner, constructive or destructive radiation can be achieved. It is important to mention that the radiated power will be a function of frequency and of the distance d when all other parameters are kept fixed.
FDTD simulations were carried out to obtain the radiation efficiency as a function of the distance d. Fig. 25 shows the results, where one observes that a change of the distance d affects both the amount of radiated power and the frequency at which the maximum radiation occurs. Also, d = 2R gives a very high radiation efficiency at f = 18 THz (almost all the input power is lost into radiation).
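A crude way to see the role of d is to treat the two half-circles as a two-element array fed in series; the radius and effective permittivity below are assumptions, and the model ignores element patterns and losses.

```python
import numpy as np

# Two half-circle radiators modeled as a two-element array fed in series
# through a line of length d; R and eps_eff are illustrative assumptions.
c = 3e8
eps_eff = 2.0  # assumed effective permittivity of the connecting line

def broadside_array_factor(f, d):
    """Normalized broadside array factor: the feed line adds a phase
    difference psi = beta*d between the two elements."""
    beta = 2 * np.pi * f / c * np.sqrt(eps_eff)
    return abs(np.cos(beta * d / 2))  # 1 -> constructive, 0 -> destructive

R = 3.75e-6  # assumed half-circle radius
for d in (R, 2 * R, 4 * R):
    print(f"d = {d*1e6:.2f} um -> AF = {broadside_array_factor(18e12, d):.2f}")
```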
VI. Conclusions and Future Research
This paper presents several possibilities for coherent radiation in the THz range based on micro undulator ideas. There are several interesting assumptions that we have made and are now attempting to justify, including the low-energy limit of synchrotron radiation and the differences between the quantum and classical pictures in this domain; e.g., we need to study mobility and whether a scattering event during radiation takes the process out of the classical SR picture. It has been argued that bound implementations have a number of advantages, including cost, based on using standard IC techniques as opposed to current macro FELs. Simple scaling needs to be checked, i.e., the radiation may not be scale invariant. A related question concerns the frequencies allowed when we add a mirror-symmetric combination above the structure in Fig. 1 or Fig. 8, since this will certainly serve as a high-pass filter in the case of structures such as Fig. 8. | 4,797.8 | 2014-04-17T00:00:00.000 | [
"Physics",
"Engineering"
] |
Quasi 3D Nacelle Design to Simulate Crosswind Flows: Merits and Challenges
This paper studies the computational modelling of the flow separation over engine nacelle lips under the off-design condition of significant crosswind. A numerical framework is set up to reproduce the general flow characteristics under crosswinds with increasing engine mass flow rate, which include: low-speed separation, attached flow, and high-speed shock-induced separation. A quasi-3D (Q3D) duct extraction method from the full 3D (F3D) simulations has been developed. Results obtained from the Q3D simulations are shown to largely reproduce the trends observed in the 3D intake (isentropic Mach number variations and high-speed separation behaviour), while reducing the simulation time by a factor of 50. The agreement between the F3D and Q3D simulations is encouraging when the flow is either fully attached or has modest levels of separation, but degrades when the flow fully detaches. Results deviate beyond this limit because the captured streamtube shape (and hence the corresponding Q3D duct shape) changes with the mass flow rate. Interestingly, the drooped intake investigated in the current study is prone to earlier separation under crosswinds than an axisymmetric intake. Implications of these results for industrial nacelle lip design are also discussed.
Introduction
Gas-turbine engines are generally housed in nacelles. An optimal intake design is one that provides a uniform distribution of the air flow to the downstream components with minimum total pressure loss over a wide range of operating conditions. Modern engine architectures are trending towards higher bypass ratios and lower fan pressure ratios, which offer further improvements in propulsive efficiency with a consequential reduction in emissions and noise. To circumvent the consequent increase in weight and drag, engine manufacturers are exploring the possibility of employing shorter intakes and slimmer nacelle lips [1,2]. Such designs are, however, much more susceptible to flow separation due to the reduced diffusion capability of the intake, specifically under the off-design conditions of high crosswind and high incidence.
Ideally the nacelle lip skin would be designed to maximise cruise performance, since cruise is generally the longest flight phase [3]. Nevertheless, to cater for the off-design conditions of high incidence and crosswind, the actual lip shapes used in commercial aircraft intakes are a compromise. Typically, the lip shapes at different circumferential locations may be designed independently, then blended together to form the full lip shape. Figure 1 outlines the key flow features under strong crosswind, when the flow around the nacelle lip either (a) reaches supersonic speed and encounters a flow separation due to shock-wave boundary layer interaction (Figure 1a from Wakelam et al. [4]) or (b) remains subsonic and encounters a low-speed separation (Figure 1b from Vadlamani and Tucker [5]). In either of these cases, inhomogeneous flow reaching the fan face has a detrimental effect on the engine efficiency. As shown by Freeman and Rowe [6], aerodynamic instabilities like fan stall can occur in extreme cases. Hall and Hynes [7] investigated the interaction of natural wind with the intake and observed hysteresis in the flow separation and reattachment at low speeds/low mass-flow rates. Colin et al. [8] simulated crosswind flow over an intake with nine different turbulence models and compared their predictive capability. Among these models, Spalart-Allmaras (SA), shear stress transport (SST) and the explicit algebraic Reynolds stress model (EARSM) were shown to accurately predict the separated flow at higher engine mass flow rates. Additional complications due to ground vortex ingestion under crosswinds are also analysed in [9] and Carnevale et al. [10]. Figure 1: (a) high-speed shock-induced separation (Wakelam [11]); (b) low-speed separation on the intake lip (Vadlamani and Tucker [5]).
To examine crosswind flows in detail (in the absence of a ground plane), Wakelam [11] adopted a novel experimental approach: a cost-effective quasi-3D (Q3D) sector rig, built in contrast to a full-3D (F3D) intake rig. Duct walls are modelled in line with the captured three-dimensional streamtube (extracted from a CFD simulation) ingested into the axisymmetric intake. This is shown to recreate the pressure distribution over the highly loaded section on the windward side of the intake lip. The sector rig also captured the three flow regimes at increasing engine mass flow rates: low-speed separation (with hysteresis), attached flow, and high-speed shock-induced separation. Wakelam et al. [4,12] further demonstrated the use of different flow-control strategies (passive boundary trips and active vortex generator jets) to reduce fan-face distortion. On the numerical front, Vadlamani and Tucker [13] studied the possibility of using plasma actuators for flow control. Oriji and Tucker [14] employed a Q3D computational framework that shares a similar geometry with the 90° sector rig; this setup was used to enhance the standard one-equation SA model to account for roughness and relaminarisation effects.
The aforementioned literature suggests that the Q3D sector rig is a plausible approach to qualitatively reproduce the dynamics of the flow over the intake lip under crosswinds, albeit in the absence of the ground plane. The Q3D approach offers a significant reduction in experimental and computational costs, which is particularly beneficial for iterative parametric studies. However, its validity needs to be further assessed if intake design is to be based on the results of this approach. Hence, the current work attempts to develop a numerical framework that can model the crosswind flow behaviour, specifically close to the high-speed shock-induced separation. The objectives of this paper are multi-fold: (a) identify the merits and challenges of the Q3D approach for intake design and flow control; (b) progressively invoke simplifications, both in terms of the geometry and the flow physics, to facilitate the Q3D extraction procedure; (c) at each stage, verify the qualitative and quantitative impact of these simplifications by comparing the isentropic Mach number (ISM) over the intake lip, and the stagnation pressure and distortion coefficient at the fan face, for different operating conditions.
Outline of the paper: details of the numerical methodology are discussed in the next section. The results section provides an overview of the (a) performance of a full 3D (F3D) real intake with droop (b) comparisons against an axisymmetric intake (c) quasi 3D (Q3D) duct extraction procedure and (d) reproducibility of the flow behaviour using Q3D. This is followed by concluding remarks in the final section.
Numerical Methodology
A nacelle design typical of modern wide-body aircraft is used as the baseline case for the analysis. Figure 2a exhibits the geometric features of a real intake, which include circumferentially varying lip thickness (front view) and intake droop (side view). Figure 2b shows the physical boundary conditions applied and the extent of the computational domain (7D × 8D × 10D in the x, y and z directions, respectively). The inlet freestream is set at atmospheric sea-level density, total pressure and total temperature. The crosswind speed is set to 27 knots. At the inlet, the Spalart variable (ν̃) is set to 1.76 × 10⁻⁴, corresponding to an inflow turbulent viscosity 10 times the laminar viscosity. The effect of increasing levels of ν̃/ν in the freestream on the boundary layer behaviour is expected to be minimal, since the Reynolds number is large enough and the flow over the intake lip is already turbulent (see [15]). An equivalent mass inflow is set at the windward face to supply a uniform 90° crosswind. The intake and spinner surfaces are modelled as viscous walls, and the remaining faces of the computational domain as inviscid slip walls. Additional complications due to ground vortex ingestion are not included in the current simulations. To simulate different engine power settings, the engine exit mass outflow is varied within a range of normalised mass flow rate ṁ = ṁ_engine/ṁ_ref = 0.55 to 1.35, where ṁ_ref is a reference value. A subsonic outflow boundary is imposed at the face opposite to the windward face. To satisfy continuity, the mass flow through the outlet is specified as the difference between the mass flow rates at the inlet and the engine exit. A mesh comprising 10.1 million nodes (15.2 million elements) is used for the full 3D domain. A hybrid meshing strategy is employed in which (a) hexahedral elements are used around the intake lip and spinner walls to accurately model the boundary layers, and (b) unstructured tetrahedral elements are used in the freestream to reduce computational cost and facilitate meshing. The standard one-equation Spalart-Allmaras (SA) turbulence model is used to close the system of Reynolds-averaged Navier-Stokes (RANS) equations, and has been demonstrated to reasonably predict the flow over intakes under off-design conditions [8,10,16]. Note that additional corrections to the SA model to account for relaminarisation and roughness (see [14]) are not included in the current study. It is well known that the post-separation predictive capability of steady RANS models is questionable; unsteady RANS or the presence of a fan (see [17]) can alleviate these deficiencies to some extent. Nevertheless, we limit the scope of the current study to steady RANS, since the objective of this paper is to verify whether the Q3D can reproduce the flow characteristics of the F3D on the intake lip using the same computational techniques. The minimum y⁺ values of the first off-wall nodes are typically 30-40, and hence wall functions are activated at the wall to ensure sufficient accuracy of the boundary layer predictions [18]. The numerical framework (and the grids employed in the current study) has been successfully validated against experimental data under the off-design condition of high incidence [16].
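As a side note on the wall-function meshing, the first off-wall cell height for a target y⁺ can be estimated from a flat-plate correlation; the freestream values below are illustrative stand-ins for the rig conditions, not data from the paper.

```python
import numpy as np

# Rough flat-plate estimate of the first off-wall cell height for a target y+.
rho, mu = 1.225, 1.81e-5      # sea-level air density and viscosity
U, L = 14.0, 1.0              # ~27 knots crosswind; reference length (assumed)
y_plus_target = 35.0          # wall-function range used in the paper (30-40)

Re = rho * U * L / mu
cf = 0.026 / Re**(1 / 7)                  # empirical skin-friction estimate
tau_w = 0.5 * cf * rho * U**2             # wall shear stress
u_tau = np.sqrt(tau_w / rho)              # friction velocity
y1 = y_plus_target * mu / (rho * u_tau)   # first cell height
print(f"first cell height ~ {y1 * 1e3:.3f} mm")
```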
A Rolls-Royce in-house solver, HYDRA, was used for all simulations. It is a density-based, finite-volume, unstructured solver, second-order accurate in space and time. The solver is suitable for multi-fidelity simulations including RANS, unsteady RANS and large eddy simulation (LES), and has been extensively validated over a wide range of test cases and complex flows [19]. Low-Mach-number preconditioning is also activated to accelerate convergence and improve the predictions in the low-Mach separated zones.
Full 3D Drooped Intake
In this section, results are reported for simulations of a full 3D (F3D) intake, which has realistic geometric features such as droop and circumferentially varying lip thickness. For a fixed crosswind speed of 27 knots, the characteristics of the flow over this intake are examined by varying the engine mass flow rate. Figure 3a-c shows the Mach number contours on the Z = 0 plane for increasing engine mass flow rates. Note that the mass flow values are normalised as ṁ = ṁ/ṁ_ref, where ṁ_ref corresponds to the engine mass flow rate for the 'attached' case close to high-speed separation. Consistent with experiments on an intake subjected to crosswinds (see [11]), three distinct flow regimes can be identified: low-speed separation on the intake lip for ṁ = 0.55, attached flow for ṁ = 1, and high-speed separation for ṁ = 1.3. When the engine mass flow is low, the pressure gradient is insufficient to promote flow reattachment over the intake lip. On the other hand, at higher ṁ, excessive acceleration results in a locally transonic flow over the lip; the subsequent formation of a shock wave over the lip triggers boundary layer separation. Between these two separation limits, the flow remains fully attached over the intake and the distortion is minimal. The local acceleration of the flow over the convex curvature of the spinner wall is evident in Figure 3c, reflecting the favourable pressure gradient that develops over convex bends. In addition, the shock-induced separation on the windward side of the intake lip increases the local blockage, due to which the flow further accelerates, particularly on the convex surface of the spinner wall facing the windward side of the intake lip. Similar flow behaviour was noted by Colin et al. [8].
Figure 3: Mach number contours on the Z = 0 plane, showing low-speed separation, attached flow, and high-speed (shock-induced) separation.
The performance of the fan is sensitive to the distortion transferred to the fan face. Frames (a-d) in Figure 4 compare the contours of stagnation pressure, p0, at the fan face (≈0.5D from the highlight) for engine mass flow ratios between ṁ = 0.55 and ṁ = 1.35. In frame (a), a considerable region of lower p0 is observable on the windward side of the intake; this is indicative of the distortion transferred to the fan face by the low-speed separation on the intake lip. The p0 distribution in frame (b) is almost uniform (≈ p0,atm), except within the thin boundary layer region, indicating fully attached flow. Frame (c) shows the distortion initiated by the shock-induced separation at a relatively higher mass flow. A further increase in the engine mass flow intensifies the shock strength and results in an abrupt increase in the distortion, as seen in frame (d). The p0 deficit in the separated region is much larger for high-speed shock-induced separation (≈30% of p0,atm) than for low-speed separation (≈5% of p0,atm). It can be noted that the separation is not symmetric about the Z = 0 plane; this asymmetric behaviour is attributed to (a) the droop of the intake and (b) the non-uniform thickness of the intake lip in the circumferential direction. This aspect is discussed further with Figure 7.
Simplification of F3D Problem
Initial attempts were made to extract a quasi-3D duct from the full 3D viscous drooped intake (see Appendix A). However, due to the asymmetry and the low-speed flow in the boundary layer, the resulting duct shapes were complex and mesh creation around them was extremely challenging. To facilitate the extraction of the Q3D duct, additional simplifications are essential. These include: (a) simplifying the geometry to an axisymmetric intake and (b) eliminating the boundary layers (and flow separation) by treating both the flow and the walls as inviscid. Subsequent sections show the impact of these simplifications by highlighting the key differences between (a) the axisymmetric and drooped intakes and (b) viscous and inviscid simulations on the axisymmetric intake.
Full 3D Axisymmetric Intake-Viscous Simulations
To construct the F3D axisymmetric intake, the highly loaded intake lip section of the drooped intake is first extracted and then revolved through 360° about the X-axis. It is important to examine the effect of this simplification on the dynamics of the flow. Due to the three-dimensional nature of the flow, the captured streamtubes in the two intake designs are expected to differ; hence a thorough comparison between the predictions from the drooped and axisymmetric intakes is made in the subsequent plots. Figure 5 compares the axial variation of the isentropic Mach number, ISM, around the intake lip for the axisymmetric and drooped intakes. The ISM profiles have been extracted from the intersection between the Z = 0 plane and the windward side of the inlet. It is defined as

$$ISM = \sqrt{\frac{2}{\gamma - 1}\left[\left(\frac{p_{0,atm}}{p}\right)^{(\gamma-1)/\gamma} - 1\right]}$$

where p_{0,atm} is the freestream (inlet) stagnation pressure and p is the local static pressure on the intake lip. For the separated cases in frames (a,c,d), deviations in the peak values of ISM are observable. However, the agreement is generally favourable for the ṁ = 1 case (frame (b)), where the flow remains attached in both intakes. For the current intake configuration in the absence of a ground plane, the highly loaded lip profile and the windward profile at Z = 0 are observed to be only marginally different, and thus have little impact on the streamwise variation of the ISM. Figure 6b compares the corresponding extent of distortion at the fan face. An industrial standard, DC60 = (p̄_0 − p̄_{0,60,min})/q̄, is used to quantify the distortion, where p̄_0 and q̄ are the mean fan-face stagnation and dynamic pressures, respectively, and p̄_{0,60,min} is the minimum mean stagnation pressure over any 60° sector at the fan face. Based on the extent of low-p0 regions and the DC60 values, it is apparent that the axisymmetric intake shows low-speed separation for ṁ = 0.55 (frame (i)), high-speed separation for ṁ = 1.3 (frame (iv)) and attached flow for both the ṁ = 1 and ṁ = 1.15 cases (frames (ii,iii)). It is worth noting that the p0 contours in Figure 6a for the axisymmetric cases are always symmetric about the Z = 0 plane, in contrast to the asymmetry observed for the drooped cases. It is also interesting to note from the ṁ = 1.15 case (frame (iii)) that the flow in the drooped intake has separated while the flow remains attached in the axisymmetric intake. This is also evident from Figure 6b, where the DC60 value at ṁ = 1.15 for the drooped intake is higher than for the axisymmetric intake. The steep increase in DC60 (an indicator of separation onset) has shifted to a higher mass flow rate for the axisymmetric intake, indicating that the flow in the drooped intake is prone to separation at a lower mass flow rate. Nevertheless, industrial intakes are generally drooped to accommodate the installation effects of the nacelle under the wing. Hence, the finding that the drooped intake separates at a lower mass flow rate than the axisymmetric intake cannot be generalised, since the effect of the wing is not considered in the current study. The reasons for the earlier separation and the asymmetric p0 contours of the drooped intake are explored next. Figure 7 compares the mass flow distribution at the fan face for the drooped and axisymmetric intakes at ṁ = 1 (the attached flow case). The fan face is subdivided into four quadrants, as shown in the inset, and the proportion of the mass flow through each quadrant is estimated.
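The ISM definition above is the standard isentropic relation and is straightforward to evaluate; a minimal sketch follows, with γ = 1.4 and an input chosen so the expected output (ISM = 1 at the critical pressure ratio) is easy to verify.

```python
import numpy as np

def isentropic_mach(p0_atm, p, gamma=1.4):
    """ISM from the freestream stagnation to local static pressure ratio."""
    return np.sqrt(2.0 / (gamma - 1.0)
                   * ((p0_atm / p)**((gamma - 1.0) / gamma) - 1.0))

# At the critical pressure ratio p/p0 ~ 0.5283, ISM should equal 1.0.
print(isentropic_mach(101325.0, 0.5283 * 101325.0))
```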
Blue/red colours in the figure respectively represent a deficit/excess in the mass flow rate through each quadrant, and the percentage deviation from a uniform mass flow distribution, ṁ_dev, is calculated as: ṁ_dev = (ṁ_quad/ṁ_actual − 0.25) × 100.
For the axisymmetric case, equal mass flows pass through quadrants I and IV. For the drooped intake, however, more mass flows through quadrant IV than through quadrant I. Hence, the flow experiences a higher acceleration and a stronger shock over the intake lip for the drooped case, thereby promoting earlier separation. The mass flow in quadrants II and III is relatively lower than that in I and IV, respectively. This is attributed to the thicker boundary layers in quadrants II and III, evident from a closer look at the p0 contours in Figure 6a. Despite the aforementioned differences between the drooped and axisymmetric intakes, Figures 5 and 6b have demonstrated a favourable agreement in the axial variation of ISM and the corresponding distortion levels for both intakes. Hence, the simplification of choosing an axisymmetric intake, to facilitate the extraction of a Q3D duct, is justified.
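A sketch of how the DC60 and quadrant-deviation metrics defined above might be computed from discrete fan-face samples; the sampling, sector sweep, and synthetic p0 field are our assumptions, not the paper's post-processing code.

```python
import numpy as np

def dc60(p0, q_mean, theta, n_sectors=360):
    """DC60 = (mean(p0) - min over any 60-degree sector of mean(p0)) / mean(q).
    p0: stagnation pressures at fan-face samples; theta: their angles (rad)."""
    sector_means = []
    for start in np.linspace(0, 2 * np.pi, n_sectors, endpoint=False):
        in_sector = ((theta - start) % (2 * np.pi)) < np.pi / 3
        sector_means.append(p0[in_sector].mean())
    return (p0.mean() - min(sector_means)) / q_mean

def mdot_deviation(mdot_quad, mdot_total):
    """Percentage deviation from a uniform quadrant mass-flow split."""
    return (mdot_quad / mdot_total - 0.25) * 100.0

# Hypothetical fan-face samples: uniform p0 with a deficit near theta ~ 0.2 rad.
theta = np.linspace(0, 2 * np.pi, 720, endpoint=False)
p0 = np.full_like(theta, 101325.0)
p0[np.abs(theta - 0.2) < 0.3] -= 5000.0   # synthetic separated region
print(dc60(p0, q_mean=3000.0, theta=theta))
print(mdot_deviation(0.28, 1.0))          # quadrant carrying 28% of the flow
```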
Full 3D Axisymmetric Intake-Inviscid Simulations
The boundary layers could potentially separate at high or low mass flow rates, thereby hindering the extraction of the Q3D duct. Even when the flow is attached, streamlines released close to the intake and spinner walls fail to adhere to these surfaces because of the boundary layers. Hence, inviscid simulations are carried out on the axisymmetric intake to eliminate boundary layer effects and undesirable flow separation. No turbulence model is used for these simulations, and the intake/spinner surfaces are treated as slip walls. Figure 8 compares the axial variation of ISM for the inviscid and viscous simulations. The viscous simulations in frames (b,c) remain attached, while those in frames (a,d) show low-speed and high-speed separation, respectively. For the attached cases, the ISM distributions from the viscous and inviscid simulations are in excellent agreement. A noticeable mismatch can be observed for the separated cases (frames (a,d)), as expected. Inviscid simulations should not produce any undesirable separation unless an artificial boundary layer develops due to artificial viscosity in the numerical scheme; it is therefore reassuring that no such separation is observed, confirming that the numerical schemes are robust. When compared to the viscous simulations, the inviscid simulations in frames (a,d) also predict a higher ISM peak on the intake lip at X/D = 0. In the inviscid simulations, in the absence of boundary layers, there is no scope for any low-speed or high-speed shock-induced separation; thus the flow accelerates to higher speeds on the intake lip.
Quasi-3D Axisymmetric Duct
The previous sections demonstrated that, for ṁ = 1, the simplifications (axisymmetric intake and elimination of viscosity) had minimal effect on the DC60 values and the ISM distributions on the intake lip. This corresponds to the case where the flow remains attached but close to the high-speed shock-induced separation. In this section, a quasi-3D duct is extracted from the F3D axisymmetric inviscid result for ṁ = 1. Subsequently, both inviscid and viscous simulations are carried out on the Q3D duct, and comparisons are made against the F3D results to verify whether the Q3D can reproduce similar trends at a substantially lower computational cost.
Extraction Procedure
Firstly, a 20° sector with the largest distortion at the fan face is identified; this corresponds to ±10° on either side of the Z = 0 plane on the windward side of the axisymmetric intake. As shown in Figure 9a, a total of six streamlines are released from the engine exit and traced to the inlet of the computational domain. Of these, two streamlines are released on the spinner (spinner lines, SL), two on the intake (intake lines, IL), and the remaining two midway between the spinner and intake surfaces (mid lines, ML). These streamlines compose the Q3D streamtube of interest. The three-dimensional streamtube contraction is crucial to achieving the appropriate lip loading, since it controls the local flow acceleration over the intake lip and the corresponding shock strength. Hence, it is prudent to qualitatively understand the variation of the captured streamtube with increasing engine ṁ. Figure 9b compares the captured streamtubes on the X-Y and Y-Z planes for values of ṁ corresponding to low-speed separation, attached flow and high-speed separation. With increasing mass flow, a clear increase in the cross-sectional area of the captured streamtube is evident from Figure 9b, specifically along the normal to the intake lip (marked as A-A in frame (b)(i)). For the scope of this project, the streamtube obtained for ṁ = 1 was chosen, since this corresponds to a highly realistic case close to the high-speed shock-induced separation.
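The streamline tracing underlying this extraction can be sketched as arc-length RK4 integration through a velocity field; the `velocity` callback and the toy sink flow below are hypothetical stand-ins for interpolation on the actual F3D CFD solution.

```python
import numpy as np

def trace_streamline(x0, velocity, ds=1e-3, n_steps=5000):
    """Trace a streamline by RK4 integration of the unit tangent field.
    `velocity(x)` returns the local velocity vector; here it is an assumed
    callback standing in for interpolation on the CFD solution."""
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        def f(p):
            v = velocity(p)
            n = np.linalg.norm(v)
            return v / n if n > 0 else v   # unit tangent -> arc-length steps
        k1 = f(x)
        k2 = f(x + 0.5 * ds * k1)
        k3 = f(x + 0.5 * ds * k2)
        k4 = f(x + ds * k3)
        x = x + ds / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(x.copy())
    return np.array(path)

# Toy sink flow toward the origin, a crude stand-in for the intake suction.
sink = lambda p: -p / (np.linalg.norm(p)**3 + 1e-9)
line = trace_streamline([1.0, 0.5, 0.0], sink, ds=1e-2, n_steps=100)
print(line[-1])
```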
For the ṁ = 1 case, 3D stereolithography (STL) surfaces are generated from the six streamlines, as shown in Figure 9c. A hybrid meshing strategy is employed for the Q3D grid, with structured hexahedral elements around the intake and unstructured tetrahedra elsewhere. The near-wall resolution is in line with that of the F3D drooped intake. For the viscous simulations, the SA turbulence model with wall functions is used. The top and bottom walls of the Q3D duct are always treated as inviscid to avoid undesirable boundary layer growth; the intake lip is treated with an inviscid or viscous boundary condition, as discussed in the subsequent subsection. As discussed above, a minimum of six streamlines is required to construct the Q3D duct. If more streamlines are considered, the cross-section of the Q3D duct at the inlet becomes nearly elliptic (refer to the duct shape in Figure A1, which was extracted using ≈100 streamlines), unlike the rhomboidal shape in Figure 9a. Although increasing the number of streamlines resolves the Q3D duct better, preliminary studies have shown that this has little effect on the cross-sectional area at section A-A, the corresponding streamtube contraction and the intake lip loading. The streamtube contraction at the cross-section A-A (in Figure 9b) largely dictates the lip loading, which can be accurately captured using six streamlines. Figure 10a,b compares the complex streamtube pattern (IL, SL, ML) extracted from the real F3D drooped viscous case against the inviscid axisymmetric case on the Y-Z and X-Y planes, respectively. It is evident that the extracted streamlines are not symmetric about Z = 0 for the drooped case. Also, even in the absence of flow separation, it is impossible to extract streamlines for the viscous case that entirely adhere to the intake surface, due to the no-slip condition imposed on the wall. Hence, the assumptions made in the previous section are necessary to obtain a simplified streamtube that is capable of reproducing the trends. Nevertheless, inviscid drooped simulations can ease the Q3D drooped-duct extraction procedure to a certain extent; this has been attempted and the results are presented in Appendix A.
Q3D Axisymmetric Duct Results-Comparison against F3D Simulations
The simulations reported in this section are carried out on the Q3D duct by imposing (a) an inviscid BC or (b) a viscous no-slip BC on the intake lip. In addition, the averaged static pressure at the engine exit (recorded from the corresponding F3D simulations) is prescribed as the exit BC for the corresponding Q3D simulations. Figure 11(i,ii) compares the ISM distribution on the intake lip between the F3D axisymmetric intake and the Q3D duct. Frame (i) reports the results obtained using inviscid BCs on the intake lip, while frame (ii) is obtained using viscous BCs (with the turbulence model). Since the Q3D duct has been designed at ṁ = 1, the ISM distributions for the Q3D and F3D simulations are in excellent agreement at this mass flow rate. Discrepancies are observed when ṁ deviates from the design value. For the inviscid cases, the peak values of ISM from the Q3D vary marginally: lower peaks for the ṁ = 0.55 case and higher peaks for the ṁ = 1.15 and ṁ = 1.2 cases. This is attributed to the fact that the shape of the Q3D duct is frozen in the current computations, so that its cross-sectional area normal to the intake lip is fixed (A-A in Figure 9b). However, recall from Figure 9b that the F3D simulations yielded a larger streamtube cross-sectional area at a higher engine mass flow, and vice versa. Hence, when compared to the F3D, the Q3D predicts a lower acceleration over the intake lip at lower ṁ (ṁ = 0.55) and vice versa (e.g., ṁ = 1.15, 1.2).
For the viscous cases, a similar observation can be made for ṁ = 1.15, where the acceleration is marginally higher for the Q3D (frame (ii)c). However, the ṁ = 0.55 case (frame (ii)a) shows a higher peak for the Q3D simulations compared to the F3D. Also, the absence of a plateau in the ISM distribution of the Q3D simulations implies that the low-speed separation is not captured. As discussed before, this is attributed to the fact that the current Q3D duct shape (designed for ṁ = 1.0) differs from the streamtube at ṁ = 0.55. Finally, at ṁ = 1.3 (frame (ii)d), high-speed shock-induced separation is observed in the Q3D simulations, but the predicted lip loading deviates from the F3D results. Figure 12 compares the contours of stagnation pressure for the F3D and Q3D simulations at the fan face, in a 20° sector, with increasing engine mass flow rates. The Q3D simulations accurately predict the onset of high-speed shock-induced separation; however, they fail to reproduce the low-speed separation. In addition, once separated, the distortion levels/patterns predicted by the Q3D and F3D differ. Note that the endwalls of the Q3D duct are treated as inviscid in the current study to avoid undesirable boundary layer growth from the inlet, and this might affect the distortion pattern when the flow separates; additional studies are needed to address this. Figures 11b,c and 12b demonstrate that the agreement between the F3D and Q3D is encouraging when the flow is attached or has modest levels of separation; the results deviate beyond this limit. Nevertheless, it is worth noting that the simulations on the Q3D duct are 50 times faster than the F3D, a speed-up achieved primarily through the substantial reduction in the size of the computational domain and the grid. The computational time required to run an inviscid simulation at the design mass flow rate and extract the Q3D duct is also lower than the time required to run an F3D viscous simulation. To summarise, the current study shows the potential saving that can be achieved using the Q3D strategy. It also, however, highlights the challenges of using the Q3D strategy to (a) optimise nacelle shapes, (b) develop the lip rig experimental setup and (c) extrapolate the effects of flow-control strategies from the Q3D (lip rig) to F3D simulations, given that the distortion patterns are largely different. To circumvent these challenges, it will be beneficial to explore additional strategies in the future, including (a) employing multiple Q3D ducts at different ṁ or (b) altering the ṁ passing over the intake lip using bleed slots.
Conclusions
A numerical framework has been developed to simulate the effects of crosswind on intake performance. The framework is demonstrated to be capable of capturing low-speed separation, attached flow and high-speed shock-induced separation with increasing ṁ. Interestingly, the drooped intake investigated in the current study is prone to earlier separation under crosswinds than the axisymmetric intake.
A quasi-3D duct extraction method from the F3D simulations has been developed. Results from the Q3D simulations are shown to largely reproduce the trends observed in the 3D intake (ISM variations and high-speed separation behaviour) at a substantially lower computational cost; the simulations on the Q3D duct are 50 times faster than the F3D.
Although the quasi-3D strategy reproduces the key trends, it is found that the captured streamtube shape (and hence the Q3D duct shape) changes with the corresponding mass flow rate. The agreement between the F3D and Q3D simulations is encouraging when the flow is attached or has modest levels of separation; the results deviate beyond this limit. For example, the duct used in the current study failed to capture the low-speed separation. When the flow is separated, the distortion levels also differ in magnitude. Hence, sufficient care has to be taken when (a) designing the lip rig experimental setup and (b) extrapolating the effects of flow-control strategies from the lip rig to F3D simulations. Further investigations will be carried out to alleviate these Q3D deficiencies. Funding: This research received no external funding; the APC was funded by Euroturbo. | 7,028 | 2019-08-13T00:00:00.000 | [
"Engineering"
] |
Pain in experimental autoimmune encephalitis: a comparative study between different mouse models
Background: Pain can be one of the most severe symptoms associated with multiple sclerosis (MS) and develops with varying levels and time courses. MS-related pain is difficult to treat, since very little is known about the mechanisms underlying its development. Animal models of experimental autoimmune encephalomyelitis (EAE) mimic many aspects of MS and are well suited to studying the underlying pathophysiological mechanisms. Yet, to date, very little is known about sensory abnormalities in different EAE models. We therefore aimed to thoroughly characterize hindpaw pain behavior in SJL and C57BL/6 mice immunized with PLP139-151 or MOG35-55 peptide, respectively. Moreover, we studied the activity of pain-related molecules and plasticity-related genes in the spinal cord and investigated functional changes in the peripheral nerves using electrophysiology. Methods: We analyzed thermal and mechanical sensitivity of the hindpaw in both EAE models over the whole disease course. Qualitative and quantitative immunohistochemical analyses of pain-related molecules and plasticity-related genes were performed on spinal cord sections at different timepoints during the disease course. Moreover, we investigated functional changes in the peripheral nerves using electrophysiology. Results: Mice in both EAE models developed thermal hyperalgesia during the chronic phase of the disease. However, whereas SJL mice developed marked mechanical allodynia over the chronic phase of the disease, C57BL/6 mice developed only minor mechanical allodynia over the onset and peak phases of the disease. Interestingly, the magnitude of glial changes in the spinal cord was stronger in SJL mice than in C57BL/6 mice, and their time course matched the temporal profile of mechanical hypersensitivity. Conclusions: Diverse EAE models bearing genetic, clinical and histopathological heterogeneity show different profiles of sensory and pathological changes, and thereby enable studying the mechanistic basis and the diversity of changes in pain perception associated with distinct types of MS.
Background
Multiple sclerosis (MS) is one of the most common neurological diseases, mostly affecting young adults. It is an incurable, chronic, progressive neuroinflammatory and neurodegenerative disease with a still unclear etiology. Among others, pain is one of the critical MS symptoms. While research on pain in MS is performed with increasing frequency, the literature remains ambiguous to date. Many studies are based on questionnaires, and reports of pain prevalence in MS patients vary from 29% [1] up to 86% [2]. Some studies report no difference in the frequency of pain in MS patients compared to the background population, but report a higher intensity and impact of pain on daily life in MS patients [3]. It has been reported that 32% of patients rank pain among the most severe symptoms of MS [4], and 12% even classify one of the various pain syndromes as the worst symptom of MS itself [5]. Symptoms of neuropathic pain, including mechanical or cold allodynia as well as thermal and mechanical hyperalgesia, have been described [6][7][8][9]. Chronic pain in MS severely reduces the patient's quality of life and therefore deserves detailed analysis. So far, not much is known about the mechanisms underlying MS-related pain, and its treatment remains difficult. Therefore, there is a major unmet need for basic research on the molecular mechanisms underlying the development and chronicity of pain in MS.
Various animal models mimicking the disease have been used for decades, the most prevalent being experimental autoimmune encephalomyelitis (EAE), which closely resembles MS [10]. The use of diverse immunogenic peptides directed against central nervous system (CNS) components in the EAE model enables the simulation of diverse types of MS (for example, relapsing-remitting, progressive, etc.). A major difference between MS and EAE is that whereas MS is a spontaneous disease, EAE has to be artificially induced using strong immune adjuvants. Only particular combinations of antigen and rodent strain can elicit EAE [11,12], leading to specific disease profiles [11][12][13][14]. Moreover, EAE is studied mainly in inbred strains; hence, the genetic heterogeneity that is critical in MS populations is only reflected when different models of EAE are studied in parallel [11].
Pain hypersensitivity of the hindpaw has been previously reported in mouse EAE models [15][16][17][18]. However, a comprehensive temporal analysis and comparison thereof in different models representing different subtypes of MS has been missing so far. In this study, we sought to comprehensively analyze nociceptive sensitivity during the whole disease course in two different EAE mouse models, namely SJL mice immunized with PLP 139-151 peptide and C57BL/6 mice immunized with MOG peptide. Moreover, we performed detailed immunohistochemical analyses to address pathophysiological changes that are potentially linked to differences in pain behavior between the two models, and we performed electrophysiological measurements on peripheral nerve terminals. Our results showed that distinct EAE models are associated with specific profiles and temporal courses of changes in pain sensitivity as well as particular patterns of neurochemical changes in the spinal cord.
Animals and induction of experimental autoimmune encephalomyelitis
Female SJL/J mice were purchased from Harlan Laboratories (Borchen, Germany) and C57BL/6J mice were purchased from Janvier (Le Genest Saint Isle, France). For the induction of EAE, female mice at the age of eight weeks received subcutaneous injections in both flanks of either 50 μg MOG 35-55 peptide or 100 μg PLP 139-151 peptide (synthesized at the German Cancer Research Center; DKFZ, Genomics and Proteomics Core Facilities, Peptide Synthesis, Heidelberg, Germany) in PBS, emulsified in an equal volume of complete Freund's adjuvant (CFA) containing Mycobacterium tuberculosis H37RA (Difco, Detroit, MI, USA) at a final concentration of 0.5 mg/ml, under isoflurane anesthesia. Control mice were immunized with ovalbumin (50 μg) in PBS/CFA. Two injections of pertussis toxin (List Biological Laboratories Inc., Campbell, CA, USA; 200 ng per mouse, intraperitoneal) were given on the day of immunization and 48 hours later. Animals were weighed and scored for clinical signs of disease on a daily basis. Disease severity was assessed using a scale ranging from 0 to 10; scores were as follows [19]: 0 = normal; 1 = reduced tone of tail; 2 = limp tail, impaired righting; 3 = absent righting; 4 = gait ataxia; 5 = mild paraparesis of hindlimbs; 6 = moderate paraparesis; 7 = severe paraparesis or paraplegia; 8 = tetraparesis; 9 = moribund; 10 = death. If necessary, food was provided on the cage floor.
Behavioral nociceptive testing
All animal procedures, including the EAE protocol described under 'Animals and induction of experimental autoimmune encephalomyelitis', were conducted with the approval of the ethics committee of the local governing body (Regierungspräsidium Karlsruhe, Germany). All behavioral measurements were done in awake, unrestrained, age-matched female mice. All tests were performed in an appropriately quiet room between 10 am and 4 pm.
Analysis of paw withdrawal latency in response to an infrared beam (which generates a heat ramp) was done as described in earlier publications [20,21] (e.g., Plantar Test apparatus, Hargreaves' method, Ugo Basile Inc.). Mechanical sensitivity was tested in the same cohort of animals via manual application of calibrated von Frey hair filaments (0.04 g to 1.4 g) to the plantar surface of the hindpaw, as described in earlier studies [20]. The hindpaw withdrawal latency upon heat stimulation using the plantar test apparatus and the hindpaw response to von Frey hair stimulation were assessed every second to third day, alternately.
Locomotion and exploratory activity
General activity and novelty-induced explorative behavior were measured using an open-field chamber (44 × 44 cm; Ugo Basile, Comerio, Italy) under normal lighting conditions. Video tracking software (ANY-Maze, Ugo Basile, Italy) was used to monitor the mice over ten minutes. The following parameters were analyzed: distance travelled (horizontal activity), speed, and immobility time.
Afferent recordings in skin-nerve preparation
An in vitro skin-nerve preparation was used to study the properties of mechanosensitive C fibers, two types of Aβ-afferent (slowly adapting fibers (SA) and rapidly adapting fibers (RA)), and Aδ-afferent fibers that innervate the skin of the hindpaw. Experiments were performed on the dissected skin of control mice and SJL-EAE mice in the chronic phase of the disease. Animals were killed by CO2 inhalation, and the saphenous nerve was dissected with the skin of the dorsal hindpaw attached and mounted inside-up in an organ bath to expose the dermis. The preparation was perfused with an oxygen-saturated modified synthetic interstitial fluid solution containing (in mM) 123 NaCl, 3.5 KCl, 0.7 MgSO4, 1.5 NaH2PO3, 1.7 NaH2PO4, 2.0 CaCl2, 9.5 sodium gluconate, 5.5 glucose, 7.5 sucrose, and 10 HEPES, at a temperature of 32 ± 1°C and pH 7.4 ± 0.05. Fine filaments were teased from the desheathed nerve and placed on a recording electrode in a separate chamber.
Nerve fibers were classified according to their conduction velocities, von Frey thresholds, and firing properties. Electrical stimulation of the nerve fiber was employed to calculate the conduction velocities of individual nerve fibers. Fibers conducting at <1 m/s, between 1 and 10 m/s, and at >10 m/s were considered to be unmyelinated C-fibers, myelinated Aδ-fibers, and thickly myelinated low-threshold mechanoreceptors (RA and SA), respectively. The threshold for each unit was tested using calibrated von Frey filaments; the thinnest filament that elicited three action potentials within approximately 2 seconds of pressing the filament onto the unit was taken as the threshold.
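The velocity-based classification described above maps directly onto a small helper; the thresholds are exactly those stated in the text, while the example velocities are hypothetical.

```python
def classify_fiber(conduction_velocity_m_per_s):
    """Classify an afferent fiber by its conduction velocity (as in the text)."""
    v = conduction_velocity_m_per_s
    if v < 1.0:
        return "C fiber (unmyelinated)"
    elif v <= 10.0:
        return "A-delta fiber (myelinated)"
    else:
        return "A-beta fiber (RA/SA low-threshold mechanoreceptor)"

for v in (0.6, 4.2, 15.0):
    print(v, "m/s ->", classify_fiber(v))
```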
Once the receptive field was identified using the glass rod, a computer-controlled linear stepping motor (Nanomotor Kleindiek Nanotechnik, Reutlingen, Germany) was used to apply standardized mechanical stimuli. Each fiber was tested with a series of displacement mechanical stimuli ranging from 6 to 384 μm for both control and EAE animals. Electrophysiological data were collected with a Powerlab 4.0 system (ADInstruments, Spechbach, Germany) and analyzed off-line with the spike histogram extension of the software.
Immunohistochemistry
Mice were perfused with 0.1 M phosphate-buffered saline and 4% paraformaldehyde (PFA). Spinal cords were isolated and post-fixed for up to 16 hours in 4% PFA. Free-floating vibratome sections (50 μm) were incubated for 30 minutes at 80°C in prewarmed 10 mM sodium citrate buffer (pH 8) for antigen retrieval [22] and then processed according to a standard immunofluorescence protocol. The following antibodies were used: rabbit poly-
Illustrations and densitometry
Fluorescence images were obtained using a laser scanning confocal microscope (Leica TCS AOBS, Bensheim, Germany). For quantitative measurement of microglia and astrocytes, images were obtained in a confocal series over a thickness of 50 μm, using the same laser intensity for all images. The fluorescence signal intensity per unit area was measured densitometrically using NIH ImageJ software (National Institutes of Health, Bethesda, Maryland, USA). Data were averaged from four areas per section and two sections per mouse, in groups of at least four animals, in three independent experiments.
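As a rough stand-in for this quantification step (our illustration, not the authors' ImageJ macro; the image arrays and ROI are hypothetical), the per-area measurement and section averaging can be expressed as:

# Mean fluorescence intensity within a region of interest, averaged over
# several areas per section, as described above.
import numpy as np

def mean_intensity_per_area(image, roi_mask):
    """Mean signal within the ROI (intensity per pixel; multiply by the
    pixel area to convert to physical units)."""
    return image[roi_mask].mean()

roi = np.zeros((512, 512), dtype=bool)
roi[100:200, 150:300] = True               # e.g. dorsal horn laminae I-II
# Four hypothetical areas from one section (random stand-ins for planes)
areas = [mean_intensity_per_area(np.random.rand(512, 512), roi)
         for _ in range(4)]
print(f"section mean: {np.mean(areas):.4f}")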
Statistics
If not indicated differently, all data are presented as mean ± standard error of the mean (S.E.M.). For comparisons of multiple groups, analysis of variance (ANOVA) for repeated measures was performed followed by post-hoc Bonferroni's test; for comparisons of two groups, Student's t-test was used to determine statistically significant differences. A value of P <0.05 was considered statistically significant.
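A hedged sketch of this statistical workflow in Python (the group values are simulated, and SciPy/statsmodels stand in for the statistics software, which the text does not name):

# One-way ANOVA across three groups, followed by pairwise t-tests with
# Bonferroni correction as the post-hoc comparison.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
control = rng.normal(10.0, 1.0, size=8)   # e.g. withdrawal latency (s)
eae_sjl = rng.normal(7.5, 1.0, size=8)
eae_c57 = rng.normal(8.0, 1.0, size=8)

f_stat, p_anova = stats.f_oneway(control, eae_sjl, eae_c57)

pairs = [(control, eae_sjl), (control, eae_c57), (eae_sjl, eae_c57)]
p_raw = [stats.ttest_ind(a, b).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(p_raw, alpha=0.05, method="bonferroni")
print(f"ANOVA p = {p_anova:.4f}; Bonferroni-adjusted p-values = {p_adj}")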
Disease progression, pain and locomotion
We actively immunized female mice of the SJL and C57BL/6 strains with either the PLP 139-151 peptide or the MOG peptide (referred to henceforth as SJL-EAE or C57-EAE mice, respectively). Control mice underwent the same immunization protocol using ovalbumin. SJL-EAE mice showed a typical relapsing-remitting disease pattern, whereas C57-EAE mice developed chronic EAE. After immunization, SJL-EAE mice displayed the first signs of disease onset, with tail weakness, on day 10 and reached a peak of motor deficits at day 12 (Figure 1A), whereas C57-EAE mice showed the first symptoms at day 11 and a maximal disease score at day 17 (Figure 1B). As typically observed, EAE mice lost 1 to 2 g of body weight immediately preceding disease onset (Figure 1). The degree of EAE in the chronic phase was comparable across both models, as indicated by a similar disease score (Figure 1).
In addition to monitoring clinical disease symptoms on a daily basis over 44 days (SJL-EAE mice) or 52 days (C57-EAE mice), we investigated nociceptive thresholds in response to heat and mechanical stimuli. We found that the response latency towards heat stimuli dropped significantly in SJL-EAE and C57-EAE mice following immunization as compared to basal response latencies (Figure 2A,B). Mice in both EAE models developed significant thermal hyperalgesia in the chronic phase of the disease (Figure 2A,B; Table 1). Thus, the time course of thermal hyperalgesia was not different across the two models.
We applied mechanical pressure via von Frey filaments (0.04 g to 1.4 g force) to the plantar surface of the hindpaws. The application of low-magnitude forces (von Frey filaments of 0.04 g to 0.07 g), which do not normally evoke nociceptive withdrawal in control mice, elicited withdrawal in SJL-EAE mice in the chronic phase of the disease, starting from day 36 onwards and lasting over the whole period of investigation (data for the 0.07 g force are shown in Figure 2C). The same stimuli also elicited withdrawal behavior in C57-EAE mice, but in a different time frame, namely during the onset and peak phases of the disease (Figure 2D). The application of more intense forces to the plantar surface of the paw (von Frey filaments of 0.16 g to 0.6 g), which normally evoke mild nociceptive withdrawal in control mice, resulted in a significant increase in withdrawal response frequency in SJL-EAE mice in the chronic phase of the disease, starting from day 28 after immunization and continuing over the whole observation period (data for the 0.4 g force are shown in Figure 2E), whereas the withdrawal behavior of C57-EAE mice did not differ from control mice (Figure 2F). Moreover, we found that mechanical allodynia correlated with the clinical scores: SJL-EAE mice with higher clinical scores (score 5 to 6) showed more pronounced mechanical allodynia than EAE mice with moderate symptoms (score 3 to 4) (Figure 3). Interestingly, the paw withdrawal response frequency upon application of von Frey filaments of stronger force (1 g or 1.4 g) was comparable between SJL-EAE mice and control mice (data for the 1.0 g force are shown in Figure 2G) and between C57-EAE mice and controls (Figure 2H). This shows that SJL-EAE mice develop nociceptive mechanical allodynia in the chronic phase of the disease. The differences in the behavioral phenotypes are summarized in Table 1.
Intrigued by the marked mechanical hypersensitivity in the chronic phase of EAE in SJL mice, we asked whether their locomotor activity would be altered. In the open field test, SJL-EAE mice did not demonstrate any difference in horizontal activity when compared either to control mice or to their basal behavior before the induction of EAE (Figure 4A). Additional parameters, such as movement speed (Figure 4B) and immobility time (Figure 4C), did not differ between EAE and control animals in the chronic phase of the disease or compared to basal behavior. Thus, SJL-EAE mice did not show aberrant locomotor changes associated with EAE despite the presence of nociceptive hypersensitivity to sensory stimuli.
Electrophysiological analyses of peripheral nerve activity
In order to characterize the firing properties of peripheral afferents in the chronic phase of the disease, the skin-nerve preparation of the saphenous nerve was employed in eight SJL-EAE mice and seven control mice in the chronic phase of the disease (day 35 to 45) (Figure 5). Firing properties of four different fiber types innervating the hindpaw were investigated in response to graded mechanical stimuli, namely mechanosensitive C-fiber nociceptors, Aδ mechanonociceptors, and SA and RA low-threshold Aβ mechanoreceptors, which were identified on the basis of stimulation as well as conduction and firing properties. Stimulus-response functions of C-fibers and Aδ mechanonociceptors from control and SJL-EAE mice demonstrated no significant changes in responsiveness to mechanical stimulation (Figure 5A, 5B). Low-threshold SA and RA Aβ fibers isolated from SJL-EAE animals showed a slight, in part statistically significant, increase in responses to higher stimulus intensities. Additionally, RA and SA low-threshold Aβ fibers and non-myelinated C-fibers (Figure 5E) showed a slight decrease in conduction velocity. There were no changes in the mechanical thresholds of the different afferent fibers (Figure 5F). Overall, the functional properties of the nerve fibers in the chronic phase of EAE are largely unaltered and therefore unlikely to contribute substantially to the sensory abnormalities.
Immunohistochemistry on the spinal cord
We investigated lumbar spinal cord sections of SJL-EAE mice and control-immunized mice at different time points during EAE for the expression of different pain- or EAE-related markers. Because not only white matter abnormalities but also grey matter abnormalities are a basic phenomenon in EAE, we investigated the expression of various key marker proteins at 2 to 3 days after immunization ('pre' time point), at disease onset, at peak, and in the chronic phase of the disease (day 35 to 45 after EAE induction).
We found a downregulation of NeuN expression throughout the whole spinal cord at disease onset and in the peak phase, and an almost complete recovery of NeuN immunoreactivity in the chronic phase, as compared to control mice (Figure 6A). Recently, NeuN has been identified as the Fox-3 gene product [23]. Therefore, we performed co-labeling of anti-NeuN with anti-Fox-3 antibody. Interestingly, we did not find any difference in Fox-3 expression during the time course of EAE (Figure 6B), indicating no alteration in the number of neuronal cells during the time course of the disease. The loss of NeuN immunoreactivity might instead reflect disease-specific changes that alter NeuN antigenicity, as has been reported for other conditions [24,25].
Additionally, we analyzed the patterning of the neuropeptide calcitonin gene-related peptide (CGRP) and the nonpeptidergic marker isolectin B4 (IB4). Although there was no difference in the density of CGRP-immunoreactive fibers in the spinal dorsal horn between SJL-EAE mice and control mice during the time course of EAE (Figure 7A), we observed an increase in IB4-positive signals throughout the whole spinal cord at the onset of the disease (Figure 7B). The maximal increase in IB4 expression occurred at the peak stage of the disease and decreased in the chronic phase (Figure 7B). Because IB4 selectively binds activated microglia [26], our results indicate a strong activation of microglia in SJL-EAE mice at disease onset and at the peak phase of the disease. Co-labeling studies with anti-GFAP, a marker for astrocytes, and anti-Iba1, a marker for microglia, confirmed the expression of IB4 specifically in microglia.
As glia cells play an important role in EAE, we investigated the time course of astrocyte and microglia activity in the spinal cord of SJL-EAE and control mice. Immunohistochemistry with anti-GFAP antibody showed an increase in GFAP-positive cells at disease onset in the spinal dorsal horn (Figure 8A). The number of GFAP-positive cells further increased in the peak and chronic phases of the disease, and the cells became activated, as seen by their morphological changes (Figure 8A). Similarly, using the microglia-specific anti-Iba1 antibody, we saw an induction of microglia at disease onset and in the chronic phase of the disease and an activation of microglia, evident from morphological changes (Figure 8B). Because microglia and astrocyte activation plays an important role in pain, we compared the time course of microglia and astrocyte activation in SJL-EAE and C57-EAE animals in more detail. Interestingly, we found a comparable activation of microglia, as shown with anti-Iba1 antibody, in the dorsal horn of the spinal cord during the onset phase in SJL-EAE and C57-EAE mice (Figure 9A), but to a lesser extent in C57-EAE mice as compared to SJL-EAE mice in the peak phase as well as in the chronic phase of the disease (Figure 9A).
[Figure 3 legend: SJL-EAE mice with a higher disease score (5 to 6) show more pronounced mechanical allodynia than SJL-EAE mice with a lower score (3 to 4), as compared to control mice. n = 6 mice/group; *P <0.05 versus control mice at this measuring point; † versus all other groups at this measuring point; ANOVA, post-hoc Bonferroni's test. All data points represent mean ± SEM. EAE, experimental autoimmune encephalomyelitis.]
To quantify the amount of microglia in the chronic phase of the disease, we measured the fluorescence intensity in laminae I and II of the spinal dorsal horn and found a significantly higher fluorescence intensity for Iba1 in SJL-EAE mice as compared to C57-EAE mice (see Figure 9C for an example, Figure 9E for quantification). Additionally, we compared the expression profile of astrocytes using an anti-GFAP antibody. We found a stronger activation of astrocytes in C57-EAE mice as compared to SJL-EAE mice in the onset phase of the disease (Figure 9B). Interestingly, there was an accumulation of GFAP-positive cells in the superficial spinal dorsal horn of SJL-EAE mice in the chronic phase of the disease as compared to C57-EAE mice (Figure 9B). Quantification of the GFAP fluorescence intensity in the spinal dorsal horn revealed a significantly stronger activation of astrocytes in SJL-EAE mice as compared to C57-EAE mice in the chronic phase of the disease (see Figure 9D for an example, Figure 9F for quantification).
The differences in microglia and astrocyte activation in the spinal dorsal horn between the two EAE models are summarized in Table 1.
Discussion
Clinically significant pain is a severe and debilitating symptom associated with MS; however, to date we are far from understanding the mechanisms underlying MS-related pain. Animal models mimicking diverse aspects of the disease have been used for decades to study pathological features of the disease and, more recently, to investigate behavioral changes with respect to pain hypersensitivity. Chronic pain symptoms in MS are very complex and diverse and may even be only indirectly related to MS (reviewed in [27,28]). Pain symptoms, the number of pain sites, and their severity vary among patients and are often unrelated to the duration of MS [29]. Pain has been reported at the onset of the disease [4] or even as an initial symptom of MS [30]. Pain syndromes have been described as increasing with patient age and disease progression [2,4,31], but in most MS studies chronic pain showed no significant correlation with age, disease duration, or disease course [29,[32][33][34][35][36][37]. Taking this into account, the use of animal models to study MS-related chronic pain syndromes is very limited. We aimed to investigate the sensory properties of the hindpaws as a readout for hyperalgesia and allodynia, which constitute one component of MS-related pain. Here, we provide a thorough investigation of nociceptive sensitivity of the hindpaw in two different mouse EAE models over the complete time course of the disease. Additionally, we substantiate the underlying mechanisms with detailed immunohistochemical data. We found that SJL mice immunized with PLP 139-151 peptide and C57 mice immunized with MOG 35-55 peptide clearly showed thermal hyperalgesia, whereas only SJL-EAE mice developed marked mechanical allodynia in the chronic phase of the disease. C57-EAE mice developed mechanical allodynia exclusively towards very low-intensity stimuli during the disease onset and peak phase. Our findings are in line with a study by Aicher et al. [15], who showed thermal hyperalgesia in SJL-PLP 139-151 EAE mice in the chronic phase of the disease, albeit on the tail and forepaw of the mice. Additionally, Olechowski et al. [16] and Rodrigues et al. [17] reported hindpaw mechanical allodynia and hypernociception before and around the onset phase of EAE in C57-MOG 35-55 mice. Our findings are supported by these studies and clearly demonstrate differences in the sensory properties of the two commonly used EAE models. The use of the same behavioral tests over a long-lasting investigation period under similar conditions enabled us to directly compare the sensory profiles of both EAE models.
Pain in MS patients is very diverse, and one EAE model cannot mirror the heterogeneity of the disease [11]; future research should therefore proceed from the understanding that a single EAE pain model is not sufficient to study MS-related pain. Moreover, depending on the immunization peptides used and their representation in the peripheral nervous system [38], peripheral pain may also add to the mechanism of increased pain in neuroinflammation, especially in models of autoimmune neuritis [39,40].
We found a strong activation of glia cells in the spinal dorsal horn of SJL-EAE and C57-EAE mice. This glia activation occurred to a different magnitude and over a different time course in the two models, matching the temporal profile of nociceptive hypersensitivity. It is known that microglia and astrocytes are critical players in the effector phase of EAE and MS [41,42], because there is a marked activation of glia cells in both the spinal cord and brain over the course of the disease [43,44]. We hypothesize that the time course and extent of microglia and astrocyte activation in SJL-EAE mice as compared to C57-EAE mice, and the subsequent release of diverse signaling molecules, underlie the marked differences in the development and maintenance of chronic pain. This theory is supported by a study by Olechowski et al. [16] suggesting inflammation and reactive gliosis as key mediators of allodynia in C57-MOG 35-55 EAE mice.
There is a large variety of molecules and mediators, and thus diverse signaling scenarios are possible. Temporally regulated key signaling mediators that may account for the development and maintenance of chronic pain in EAE include glial factors such as the chemokine monocyte chemoattractant protein-1 (MCP-1), which is released from glia cells and can attract various cell types involved in inflammation and also in pain. Previous studies have demonstrated the expression of MCP-1 in the CNS of patients with MS [56][57][58] and of EAE mice [59]. Additionally, the MCP-1 receptor CCR2 has been shown to be critical for the induction of EAE [60]. Accumulating evidence indicates that MCP-1 plays a critical role in chronic pain facilitation via CCR2 receptors [61][62][63][64]. Spinal MCP-1 can lead to neuropathic pain behavior [65,66] and induces the phosphorylation of the mitogen-activated protein kinase (MAPK) extracellular signal-regulated kinase (ERK) [65] in the spinal cord. In addition, Shin et al. [67] found a significant increase of different MAPKs (phosphorylated ERK, c-jun N-terminal kinase (JNK), and p38) in the rat spinal cord at the peak stage of EAE. The activation of ERK is known to play an important role in central sensitization [68], and JNK has been shown to be persistently activated in spinal cord astrocytes after nerve injury [69,70]. Moreover, MCP-1 has been shown to amplify excitatory glutamatergic currents [65] and to inhibit GABA-induced currents [71]. Thus, MCP-1 is strongly involved in mechanisms of chronic pain. Another example is the matrix metalloproteinases (MMPs), which are known to be largely implicated in MS and EAE progression [72,73] and are regulated to a different extent in different EAE models [77]. Moreover, MMP-9 plays an important role in neuropathic pain conditions [78,79] as well as in MS [80][81][82][83]. Additionally, the administration of MMP inhibitors or the genetic ablation of MMPs reduces disease severity in different murine EAE models [84][85][86][87].
To further support our theory, another mechanistic possibility might be via proinflammatory cytokines (for example, IL-1beta, IL-6 and TNFalpha), which have been shown to lead to the phosphorylation of CREB [79]. CREB is essential for the maintenance of long-term plasticity in dorsal horn neurons [79] and thereby plays an essential role in pain sensitization [79,[88][89][90]. Kim et al. suggested that increased phosphorylation of CREB in sensory neurons in the dorsal horns might be involved in the generation of neuropathic pain in EAE [91]. Taken together, various signaling pathways arise from activated glia cells and may thereby contribute to pain in EAE and possibly also in MS.
Given that neuro-immune interactions play a critical role in other pain states, and given that peripheral immune function is also changed in MS patients [7], it is possible that peripheral neuro-immune interactions contribute to MS-induced pain. In order to clarify potential changes in the peripheral nervous system of SJL-EAE mice, we investigated the electrophysiological properties of peripheral afferent fibers in EAE mice using the skin-nerve preparation. EAE is known to cause central demyelination, but there is only weak evidence for a peripheral component to the disease [92,93]. In the case of peripheral demyelination, one would expect a decrease in the conduction velocity of myelinated Aβ and Aδ fibers. Pender et al. observed an impaired response to noxious mechanical stimuli, potentially associated with a demyelination-induced conduction block, in the small-diameter myelinated afferent (Aδ) fibers in the dorsal root ganglia (DRGs) of rabbits or rats with EAE [94][95][96]. We observed a slight decrease in conduction velocity in myelinated Aβ fibers, but the observed changes in the peripheral afferents are very mild, indicating only a minor peripheral contribution to the disease phenotype, which might arise from a mechanism other than peripheral demyelination.
Conclusions
In summary, we show clear differences in pain behavior between different EAE mouse models, which may reflect the heterogeneity of human MS. Moreover, the observed differences in glia cell activation most likely contribute to the different pain behavior. This study suggests that microglia and astrocytes represent a good target for investigating pain mechanisms in different EAE mouse models. Future studies will be necessary to elucidate differences in downstream signaling cascades in the different EAE models.
"Biology",
"Psychology",
"Medicine"
] |
A Comparative Study of Particle Swarm Optimization and Artificial Bee Colony Algorithm for Numerical Analysis of Fisher’s Equation
The aim of this research work is to obtain the numerical solution of Fisher's equation using the radial basis function (RBF) with the pseudospectral method (RBF-PS). Two optimization techniques, namely particle swarm optimization (PSO) and artificial bee colony (ABC), which are employed to find the shape parameter of the RBF, have been compared in terms of the errors of the numerical results. Two problems of Fisher's equation are presented to test the accuracy of the method, and the obtained numerical results are compared to verify the effectiveness of this novel approach. The calculation of the error norms leads to the conclusion that PSO performs better than the ABC algorithm at minimizing the error over the shape parameter in a given range.
Introduction
Researchers have developed various methods to obtain numerical solutions with optimized results in a variety of scientific and engineering disciplines. Algorithms based on swarm intelligence have great potential in the field of numerical optimization, according to researchers Yagmahan and Yenisey [1], Eberhart and Kennedy [2], Price et al. [3], and Vesterstrom and Thomsen [4]. Swarm intelligence-based algorithms [5] and evolutionary algorithms [6] are two significant categories of population-based algorithms in the field of optimization. Optimization is the process of increasing the advantages of a mathematical model or function while minimizing its disadvantages. It is a combination of techniques that enables us to improve the output of a system. The primary goal of optimization is to find an optimal or nearly optimal solution with the least amount of computing work. The key to finding the solution is to optimize the parameters connected to a mathematical model.
Several research fields adopt optimization approaches for the numerical simulation of various linear and nonlinear partial differential equations (PDEs) and for optimizing the parameters of problematic models. There are several well-known optimization approaches. Ant colony optimization (ACO) [7] is a meta-heuristic algorithm inspired by the foraging behaviour of ants and how they find the shortest path between their nest and a food source. While it is commonly used for combinatorial optimization problems, it can also be adopted for numerical solutions of PDEs. Particle swarm optimization (PSO) [2] is a heuristic algorithm inspired by the social behaviour of birds and fish, where individuals in a group (particles) cooperate and communicate to find optimal solutions to a problem. Bacteria foraging optimization (BFO) [8] is stimulated by the foraging behaviour of Escherichia coli (E. coli) bacteria, mimicking the way bacteria forage for nutrients in their environment to find the optimal solution for a given optimization problem. When adapted for the numerical solution of PDEs, BFO can effectively explore the solution space for parameter settings that yield accurate and efficient numerical solutions. These nature-inspired meta-heuristic optimization algorithms have recently gained popularity for developing effective search algorithms.
Exploration and exploitation are two major determinants in the development of successful optimization algorithms for search mechanisms. A meta-heuristic optimization algorithm effectively explores the solution space, balancing exploration (global search) against exploitation (local refinement) to find near-optimal or optimal solutions for a variety of optimization problems. Exploration involves the search for new, unexplored regions of the solution space; it aims to discover potential solutions that might be superior to the current ones. Exploitation involves focusing on known promising regions of the solution space to improve the quality of solutions; it aims to refine and optimize the current solutions based on the information available. Researchers are motivated to develop such population-based optimization algorithms by the abundance of models found in nature. These population-based optimization methods assess fitness and provide almost perfect solutions to complex optimization problems.
Swarm intelligence (SI) is a field of study inspired by the collective behaviour of social insect colonies and other animal societies. It explores the principles and models of behaviour that emerge from the interactions of simple individuals within a group. The connection between SI and optimization lies in leveraging the collective behaviour observed in natural swarms to create effective optimization algorithms and strategies. SI uses social insect behaviour to create algorithms or distributed problem-solving tools, according to Bonabeau et al. [9], who studied social insects such as termites, bees, wasps, and ants. Social species first developed swarm intelligence through trial and error. SI simulates self-organizing swarms of interacting agents; an immune system, an ant colony, or a bird flock are swarm systems, and bees swarming around their hives illustrate swarm intelligence. Based on the social intelligence of honey bee swarms, the artificial bee colony (ABC) algorithm was first described by Karaboga [10] in 2005, and in 1995 Kennedy and Eberhart [2] proposed PSO for solving numerical optimization problems. Honey bees' search for nutritious food informed the procedure of ABC, while PSO was inspired by the behaviour of social animals. These population-based stochastic search methods are simple and fast; they also solve complex, continuous, and unbounded optimization problems with multimodal or unimodal characteristics. SI-based meta-heuristic algorithms are popular for solving various optimization models. For example, [11] employed a novel adaptive artificial bee colony (A-ABC) algorithm that can select the best search equation based on the current situation in order to more precisely predict transport energy demand (TED); [12] used four different meta-heuristic algorithms for natural gas demand forecasting based on meteorological indicators in Turkey; and [13] proposed a new modified artificial bee colony (M-ABC) method that can more precisely estimate Turkey's energy usage by adaptively choosing an optimal search equation. There are many more examples in biology, physics, evolution, and human behaviour that inspire nature-inspired algorithms, including ant colony optimization, artificial bee colony, the firefly method, particle swarm optimization, brain storm optimization, sine and cosine algorithms, and genetic algorithms. With inspiration from SI, many researchers have applied these meta-heuristic algorithms to the numerical simulation of various PDEs by optimizing over the solution space.
For numerically simulating ordinary and partial differential equations, the RBF has proven to be a useful basis function. In this study, numerical solutions to a nonlinear partial differential equation are found by employing a mesh-free method based on radial basis functions (RBFs) with the ABC and PSO optimization techniques. Both optimization strategies are used to determine the shape parameter (ϵ) of the RBF.
The reaction-diffusion equation is one of the most intriguing equations in physical processes. We concentrate on the form of reaction-diffusion known as Fisher's equation, which in its standard one-dimensional form reads

$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + a\,u(1-u), \qquad (1)$$

where a > 0 is the reaction rate parameter.
The equation is supplemented with appropriate initial and boundary conditions. Fisher [14] introduced this equation to model the kinetic advance rate of an advantageous gene, and it is used to describe many chemical and biological processes. Fisher's equation captures population evolution through opposing physical phenomena and underpins models in genetics, tissue engineering, growth modeling, and other areas of science and engineering. Fisher's equation was first simulated using the pseudospectral method developed by Gazdag and Canosa in 1974 [15]. Since then, many different approaches have been developed to solve it, such as the Petrov-Galerkin finite element method of Tang and Weber [16], the tanh method of Wazwaz [17], and the homotopy analysis method proposed by Tan et al. [18]. Other methods that have been applied to this problem include the alternating iterative method of Sahimi and Evans [19], the central finite difference algorithm of Hagstrom and Keller [20], the explicit and implicit finite difference algorithms of Parekh and Puri [21], the collocation of cubic B-splines of Mittal and Arora [22], and the pseudospectral approach of Bhatia and Arora [23].
In this paper, the ABC and PSO algorithms are applied, together with the RBF, to Fisher's partial differential equation to find the best shape parameter of the RBF by minimizing the error, and the RBF-PS method is used for the numerical simulation of Fisher's equation by converting it into a system of ordinary differential equations (ODEs). MATLAB is used for optimizing the parameter ε and for the numerical approximation of Fisher's equation. The results are reported as the error norms L∞, L2, and Lrms and the shape parameter values at different time intervals, and they compare well with the results available in the literature. The errors obtained by PSO are smaller than those obtained by the ABC algorithm; thus, the PSO values of the shape parameter are better than ABC's results. The structure of the paper is as follows. Section 2 describes the ABC algorithm in detail, with pseudocode for all of its phases. Section 3 presents the PSO algorithm with pseudocode. The results obtained for the two problems of Fisher's equation with the novel hybrid approach are discussed and compared in Section 4. Section 5 concludes the article with the key findings and the future scope.
Artificial Bee Colony (ABC) Algorithm
The artificial bee colony (ABC) algorithm was invented by Karaboga [10] in 2005. The algorithm seeks the nectar-rich flower region (the optimal solution). In the ABC algorithm, the swarm is structured into employed bees, onlookers, and scouts. The swarm member that finds the best food supply is more likely to be followed by the others. Like PSO, ABC remembers each individual's best location: a bee travels to a new location and evaluates it against its current favourite place; if the new site is better, the old one is forgotten and the new one is remembered; otherwise the memory remains unchanged. The ABC algorithm begins by deploying the bees and dispersing them to different locations. Employed bees are those actively foraging for nectar or pollen; they bring food information back to the hive and share it with the waiting bees. Onlooker bees wait in the hive for reports of new food sources. To share food-supply information, employed bees dance in a designated area, and the dance depends on the nectar of the food source. Onlooker bees observe the dance and choose a food source according to its quality, so better food sources attract more bees than poor ones. When a food supply is exhausted, the associated employed bees become scouts. Scout bees carry out the exploration process, while employed and onlooker bees carry out exploitation.
In this algorithm, each food source is a potential solution to the problem, and its nectar amount indicates an estimate of its quality (fitness value). For each food source there is exactly one employed bee, so the total number of food sources equals the total number of employed bees. An onlooker bee selects a food source based on the following probability value p_i:

$$p_i = \frac{fit_i}{\sum_{n=1}^{N_p} fit_n},$$
where fit_i is the fitness function (objective function value) of the i-th solution evaluated by the employed bees, which is proportional to the quality of the solution, and Np is the number of food sources. Through this process, the employed bees share their information with the onlooker bees. A roulette wheel selection method is used with the probability values of the employed bees: the likelihood of being chosen by onlooker bees increases with the amount of nectar a food source offers. With the aid of the chosen employed bee, the onlooker bee travels to a new site V_i using the following formula:

$$V_i = X_i + \varphi_i\,(X_i - X_j),$$

where the present position is denoted by X_i, the selected employed bee's position is represented by X_j, and φ_i is selected randomly from −1 to 1 to find food sources in the region of X_j. Any bee that is not capable of finding a better food source after several iterations is replaced with a scout bee X_k. The scout bee summoned to replace the unsuccessful bee flies about a random or uncharted area to investigate its surroundings according to

$$X_k = lb + rand(0,1)\,(ub - lb),$$

where ub and lb are the upper and lower bounds, respectively, and the random number lies between 0 and 1. The process is then repeated with the employed bees. The parameter limit (a number of cycles) is used to detect a location that cannot be improved during a fixed number of cycles. The three phases of the ABC algorithm, with their pseudocode, are as follows.
2.1. Employed Bee Phase. For the generation of new solutions in the employed bee phase, the following points apply: (1) the number of employed bees equals the number of food sources; (2) every solution gets an opportunity; (3) a partner is randomly selected for the generation of a new solution; (4) the current solution and the partner must not be the same; (5) a randomly selected variable is modified to generate the new solution according to

$$X_j^{new} = X_j + \phi\,(X_j - X_j^{p}),$$

where X_j^p is the j-th variable of the p-th (partner) solution, X_j^new is the j-th variable of the new solution, φ is a random variable between −1 and 1, and X_j is the j-th variable of the current solution; (6) the newly generated solution must stay within the bounds. After generating a new solution within the boundaries, we evaluate the objective function value and obtain its fitness. To update the current solution, we greedily select the newly generated solution. We track trial failures for each solution: if the new solution is worse, we increase the trial counter by one; if it is better, we reset the counter, and the better solution is counted in the population. The pseudocode for the employed bee phase is given as follows:

    Input: objective function, p, fit, trial, lb, ub; Np = s/2 = number of food sources (= employed or onlooker bees), s = swarm size
    for i = 1 to Np
        Select a partner (p) randomly such that i is not equal to p
        Select a variable (j) randomly and update the j-th variable
        Evaluate the objective function (f_new) and the fitness (fit_new)
        If fit_new is greater than the current fitness, accept X_new and set trial = 0;
        else increase trial by one
    end
Onlooker Bee Phase.
In this phase, a probability condition governs which food source a bee exploits. Since the fitness of each food source is known, we calculate for each food source the probability

$$prob_i = \frac{fit_i}{\sum_{n=1}^{N_p} fit_n},$$

where fit_i and prob_i are the fitness and the probability of the i-th solution, respectively. A solution with a higher fitness value has a higher probability. The onlooker bee phase then follows the same update and greedy-selection steps as the employed bee phase, applied to food sources drawn according to prob_i.
Scout Bee Phase.
As every solution is associated with an individual trial counter, we need to specify a parameter limit, a user-specified integer value. A solution enters this phase when its trial counter exceeds the limit, and the trial counter of the abandoned solution is then reset to zero. Not every solution passes through the scout phase. The limit can be set to Np * d for a d-dimensional problem space. The scout phase occurs only when the trial counter of at least one solution is greater than the limit. The pseudocode for the scout bee phase is as follows:

    Input: objective function, p, fit, trial, lb, ub, limit
    Identify the food source (t) whose trial > limit
    Replace X_t as X_t = lb + φ_i (ub − lb)
    Evaluate the objective function (f_t) and assign the fitness (fit_t)

2.4. Complete Pseudocode for the ABC Algorithm. ABC initializes the bee swarm and repeats the three phases until the stopping criteria are met, optimizing iteratively. Employed and onlooker bees carry out exploitation, while the exploration process is performed by the scout bees.
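A minimal, self-contained Python sketch of the three ABC phases described above (this is an illustration, not the authors' MATLAB implementation; the objective function, bounds, and parameter values are assumptions). It minimizes a one-dimensional function, mirroring the paper's search for the RBF shape parameter:

# ABC over a 1-D search space: employed, onlooker, and scout phases.
import numpy as np

def abc_minimize(f, lb, ub, n_food=10, limit=20, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, n_food)          # food sources (candidate solutions)
    cost = np.array([f(xi) for xi in x])
    trial = np.zeros(n_food, dtype=int)

    def fitness(c):                          # standard ABC fitness transform
        return np.where(c >= 0, 1.0 / (1.0 + c), 1.0 + np.abs(c))

    def try_move(i, partner):                # greedy update with trial counter
        phi = rng.uniform(-1.0, 1.0)
        v = np.clip(x[i] + phi * (x[i] - x[partner]), lb, ub)
        cv = f(v)
        if cv < cost[i]:
            x[i], cost[i], trial[i] = v, cv, 0
        else:
            trial[i] += 1

    for _ in range(max_iter):
        for i in range(n_food):              # employed bee phase
            try_move(i, rng.choice([j for j in range(n_food) if j != i]))
        p = fitness(cost)
        p = p / p.sum()                      # onlooker selection probabilities
        for _ in range(n_food):              # onlooker bee phase
            i = int(rng.choice(n_food, p=p))
            try_move(i, rng.choice([j for j in range(n_food) if j != i]))
        worst = int(np.argmax(trial))        # scout bee phase
        if trial[worst] > limit:
            x[worst] = lb + rng.random() * (ub - lb)
            cost[worst] = f(x[worst])
            trial[worst] = 0

    best = int(np.argmin(cost))
    return x[best], cost[best]

# Example: minimize a simple surrogate error curve over [0, 1]
eps, err = abc_minimize(lambda e: (e - 0.25) ** 2 + 1e-3, 0.0, 1.0)
print(f"best shape parameter ~ {eps:.4f}, error = {err:.6f}")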
Particle Swarm Optimization (PSO)
Kennedy and Eberhart [4] invented PSO, a popular swarm intelligence technique, in 1995. PSO is effective in solving optimization problems by changing the paths of particles; particle movement is partly stochastic and partly deterministic. Social animals, flocks of birds, marine animal communities, and swarms inspired this optimization strategy. In this nature-based swarm optimization technique, the population of particles (the swarm) transfers information to improve the search and discover the global optimum. Each particle remembers its best position (its best experience), and the global best position is the finest experience of the whole swarm; when a particle finds a target better than all others, the global best position is updated. In each iteration, every particle may obtain a new personal best solution. This method finds the best solution among all candidate solutions, and the process continues until the iteration budget is exhausted or the goal is met. In this algorithm, the velocity V_i^{k+1} and the position X_i^{k+1} of the i-th particle are updated as follows:

$$V_i^{k+1} = V_i^{k} + a_1\,rand\,(P_{best_i} - X_i^{k}) + a_2\,rand\,(G_{best} - X_i^{k}),$$
$$X_i^{k+1} = X_i^{k} + V_i^{k+1},$$

where V_i^k is the velocity of particle i at the k-th iteration, a_1 and a_2 are real parameters, rand is a random number whose value lies between 0 and 1, X_i^k is the position of the i-th particle at the k-th iteration, P_best_i is the personal best position of particle i, and G_best is the global best position of the whole search space. PSO is a computational technique that iteratively optimizes a problem to reduce error; it is a statistical method used to determine parameter values. To find the optimal answer to an optimization problem, the particles communicate, share their knowledge, and follow simple rules. It is an innovative method for evaluating the best shape parameter value of the RBF for the nonlinear partial differential equation; it is a global search optimization strategy and explores numerous candidates in the parameter space.
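A hedged Python sketch of the update rules above (an illustration, not the authors' implementation; the values of a_1, a_2 and the swarm size are assumptions):

# PSO over a 1-D search space, following the velocity/position updates above.
import numpy as np

def pso_minimize(f, lb, ub, n_particles=10, a1=2.0, a2=2.0,
                 max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, n_particles)         # positions
    v = np.zeros(n_particles)                    # velocities
    pbest = x.copy()
    pcost = np.array([f(xi) for xi in x])
    g = pbest[np.argmin(pcost)]                  # global best

    for _ in range(max_iter):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = v + a1 * r1 * (pbest - x) + a2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        cost = np.array([f(xi) for xi in x])
        improved = cost < pcost                  # update personal bests
        pbest[improved], pcost[improved] = x[improved], cost[improved]
        g = pbest[np.argmin(pcost)]              # update global best

    return g, f(g)

eps, err = pso_minimize(lambda e: (e - 0.25) ** 2 + 1e-3, 0.0, 1.0)
print(f"best shape parameter ~ {eps:.4f}, error = {err:.6f}")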
Numerical Applications
In this section, the numerical solution of Fisher's equation using the RBF pseudospectral method is obtained as an application of the above novel approach. Two problems of Fisher's equation are solved numerically using the present approach, and their results are reported using the different error norms L∞, L2, and Lrms, along with the absolute errors and the shape parameter values. A comparison of the obtained results is presented to demonstrate the effectiveness and applicability of the proposed method. First, derivatives are approximated using the RBF, and then solutions are determined with MATLAB (version R2022a) using both algorithms on an Intel(R) Pentium processor under the Windows 7 operating system. The cubic radial basis function is taken as the basis for the numerical simulation of the equation. The initial parameters for both algorithms are as follows: number of decision variables = 1, and the lower and upper bounds of the decision variable are 0 and 1, respectively. The same sets of values are used for obtaining the shape parameter that minimizes the errors in both problems of Fisher's equation.
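To make the RBF-PS step concrete, the sketch below (our illustration, not the paper's MATLAB code) builds RBF differentiation matrices on a 1-D grid; since the paper's cubic basis carries the shape parameter through its own scaling, this sketch uses the multiquadric RBF to show explicitly where ε enters, and the grid and ε values are assumptions:

# RBF pseudospectral differentiation matrices for the multiquadric
# phi(r) = sqrt(1 + (eps*r)^2); D1 and D2 approximate d/dx and d^2/dx^2.
import numpy as np

def rbf_diff_matrices(x, eps):
    r = x[:, None] - x[None, :]                      # signed node differences
    A = np.sqrt(1.0 + (eps * r) ** 2)                # phi evaluated on the grid
    dA = (eps ** 2) * r / A                          # d phi / dx
    d2A = (eps ** 2) / A - (eps ** 4) * r ** 2 / A ** 3   # d^2 phi / dx^2
    Ainv = np.linalg.inv(A)
    return dA @ Ainv, d2A @ Ainv

x = np.linspace(0.0, 1.0, 21)
D1, D2 = rbf_diff_matrices(x, eps=0.25)
# With u'' ~ D2 @ u, Fisher's equation (1) becomes the ODE system
# du/dt = D2 @ u + a * u * (1 - u), advanced by any time stepper.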
The process of the proposed approach is shown graphically in Figure 1.
The following formulae are used for the calculation of the errors:

$$L_\infty = \max_{1 \le i \le N} \left| u_i^{exact} - u_i^{num} \right|, \qquad L_2 = \sqrt{\sum_{i=1}^{N} \left( u_i^{exact} - u_i^{num} \right)^2}, \qquad L_{rms} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( u_i^{exact} - u_i^{num} \right)^2}.$$
Test Problem 1.
We consider the one-dimensional Fisher's equation (1) together with its initial condition and exact solution. The numerical simulation of problem 1 is carried out with Δt = 0.0001 and N = 21 at times T = 0.2, 0.5, and 1. Table 1 presents the error norms L∞, L2, and Lrms computed by the ABC and PSO algorithms, and a comparison is carried out with the results in the literature [24,25]. It can be seen from the results that the errors of the PSO algorithm are smaller than those of the ABC approach. The results in Table 1 are also compared to the results calculated by other numerical methods available in the literature. By optimizing the errors, the shape parameter took the value 0.018062 using PSO and 0.065821 using ABC, at which the best errors occur. The values of the shape parameter at different T for Δt = 0.0001 and N = 21 are given in Table 2. A comparative analysis of the absolute errors of both algorithms at N = 21 and Δt = 0.0001 with times 0.01, 0.02, and 0.03, shown in Table 3, confirms that the errors of the PSO algorithm are smaller than those of ABC. Figure 2 demonstrates the graphical solution for 21 node points with Δt = 0.0001 on the domain [0, 1].
Test Problem 2. We consider the general Fisher's equation (1),
whose exact solution and initial condition are given, respectively, as follows. Table 4 compares the results obtained by the ABC and PSO algorithms with those in the literature [26] for a = 1, Δt = 0.0001, and N = 21, along with the exact solutions. As shown in Table 5, a comparison of the different error norms calculated by both algorithms at Δt = 0.00001 and N = 21 with time intervals 0.001, 0.002, 0.003, and 0.004 and iteration = 71 is similar to the results in the literature [23]. The analysis of the shape parameter values is carried out in Table 6, which shows the PSO algorithm as the better optimizer of the error, giving a shape parameter value of 0.251490. Table 7 presents a comparative analysis of the absolute errors of the problem by ABC against [23] at different time levels, which appear better than the available results.
Here, the optimized shape parameter value is 0.246477. The numerical solution is presented in Figure 3 for N = 21 and Δt = 0.0001 at various time intervals. Using the current hybrid approach, the two problems of Fisher's equation are solved numerically, and the results are derived in terms of various error norms, including absolute errors, and shape parameter values. The comparison of the obtained results is performed and presented to test the efficacy and applicability of this novel approach.
Conclusion
In this paper, a novel hybrid technique is proposed for computing the numerical solution of Fisher's equation using the PSO and ABC optimization algorithms with RBF-PS. The concept of an optimal shape parameter determined by the PSO and ABC optimization algorithms is proposed because there is a trade-off between numerical stability and accuracy when using different radial basis functions. To show the accuracy and efficiency of the method, two problems are solved numerically. Based on their error norms and shape parameter values, the obtained results are compared with the results available in the literature and are found to be more accurate. From the results, it can be concluded that PSO gives more accurate results than the ABC algorithm in terms of smaller errors. Furthermore, the present work can be extended with various other optimization algorithms, such as the genetic algorithm, the ant colony optimization algorithm, the bacteria foraging optimization algorithm, and the firefly algorithm. Thus, the work has scope for solving partial differential equations arising in various other fields with minimum errors.
Figure 1: Graphical representation of the proposed approach.
3.1. Pseudocode for the PSO Algorithm. Enter the values of the parameters a_1, a_2, fitness, lb, ub, Np, and t. Assign P_best_i as P and f_best as f. Evaluate the finest fitness solution and allocate the solution to G_best and its fitness to f_best.
Table 1: Comparison of error norms of problem 1 with Δt = 0.0001 and N = 21 at varied T and iteration = 71.
Table 3: Comparison of absolute errors of problem 1 at N = 11 and Δt = 0.0001 with varied time T.
Table 4: Comparison of numerical and exact solutions of problem 2 at N = 21 and Δt = 0.0001 with time T.
Table 5: Comparative study of error norms at Δt = 0.00001 and N = 21 with time intervals at iteration = 71.
Table 6: Values of the shape parameter (ε) with time intervals at Δt = 0.00001 and N = 21.
"Mathematics",
"Computer Science"
] |
Multi-modality MRI fusion for Alzheimer’s disease detection using deep learning
Diffusion tensor imaging (DTI) is a recent magnetic resonance imaging technology that allows us to observe the fine structure of the human body in vivo and non-invasively. It identifies the microstructure of white matter (WM) connectivity by estimating the movement of water molecules at each voxel. This makes it possible to identify the damage to WM integrity caused by Alzheimer's disease (AD) at its early stage, called mild cognitive impairment (MCI). Furthermore, gray matter (GM) atrophy characterizes the main structural changes in AD, which can be sensitively detected by the structural MRI (sMRI) modality. In this research, we develop a novel multi-modality MRI (DTI and sMRI) fusion strategy to detect WM alterations and GM atrophy in AD patients, based on a 2-dimensional deep convolutional neural network (CNN) feature extractor and a Support Vector Machine (SVM) classifier. The fusion framework consists of merging features extracted with the 2D-CNN from the DTI scalar metrics (fractional anisotropy (FA) and mean diffusivity (MD)) and from the GM, and feeding them to the SVM to classify AD vs. cognitively normal (CN), AD vs. MCI, and MCI vs. CN. Our novel multimodal AD method demonstrates superior performance, with accuracies of 99.79%, 99.6%, and 97.00% for AD/CN, AD/MCI, and MCI/CN, respectively.
Introduction
Alzheimer's disease (AD) is an irreversible, progressive neurodegenerative disorder that affects people over the age of 65 and accounts for around 60% of dementia worldwide. It is caused by damage to nerve cells in certain brain regions, affecting a person's memory and cognitive abilities and disrupting their daily life. The Alzheimer's Association states that AD is the sixth leading cause of death in the USA; around 50 million people were diagnosed with this disease in 2018, and by 2050 this number will have tripled (1). At present, no effective treatment or prevention exists. Moreover, disease management is prohibitively costly. Early screening for this disease is of primary importance for researchers seeking to slow its progression and optimize treatment. In this context, advances in neuroimaging, primarily magnetic resonance imaging (MRI), have shown potential to improve the early diagnosis of AD.
AD is characterized by a progressive loss of gray matter (GM) that occurs pre-symptomatically in certain neuroanatomical structures (2). Structural MRI (sMRI) is the most widely used neuroimaging modality to detect brain atrophy. It has already highlighted many biomarkers of Alzheimer's disease, in particular the atrophy of structures such as the hippocampus, the amygdala, and the thalamus (3). In fact, hippocampal atrophy in prodromal patients has proved to be the best structural predictor of Alzheimer's disease progression (4). However, it is associated with a large number of neurodegenerative pathologies, thereby limiting its specificity for Alzheimer's disease (5).
Within this frame of reference, many studies of the AD prodromal phase, called mild cognitive impairment (MCI), have focused on the hippocampus. Nevertheless, some other structures appear interesting; for example, the volume of the amygdala could be a structural predictor as powerful as, or even more efficient than, the volume of the hippocampus for predicting MCI (6; 7). Furthermore, there are changes in white matter that precede gray matter atrophy but are not detectable by sMRI (8). The introduction of diffusion tensor imaging allows the identification of these changes while the patient still presents with MCI (9). MCI is the transitory phase between cognitively normal (CN) aging and AD or another dementia. DTI conventionally studies white matter microstructural integrity based on the estimation of water molecule diffusion in all directions (at least six directions) (10). The degree of anisotropy of water diffusion is represented by the fractional anisotropy (FA), while the mean diffusivity (MD) represents its magnitude. Studies have shown the importance of measuring these two DTI indices (FA and MD) to describe physiological aging in the MCI phase (11). Increased MD and decreased FA have been reported in AD patients compared to CN subjects, and higher MD has been observed in both hippocampi of MCI patients (12). Indeed, a considerable increase in MD and decrease in FA indicates a progressive loss of the barriers restricting the motion of water molecules in tissue compartments, associated with neuronal loss in AD (13). It therefore seems important to measure the DTI indices, because they can provide additional information on the pathophysiology of the disease.
The introduction of machine learning and deep learning techniques has greatly contributed to the diagnosis and prognosis of AD based on neuroimaging data (14). Numerous research works have been published on AD classification using DTI, where FA and MD were the most frequently used features. The most popular classifiers among these machine learning-based methods are the Support Vector Machine (SVM) and Random Forest (RF) (15; 16; 17; 18; 19). Most of them used the tract-based spatial statistics (TBSS) algorithm (37) to extract the white matter skeleton from FA and MD, selecting only the pertinent WM skeleton information to perform binary or multi-class classification on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The difference lies in the classification task: Maggipinto (18) used Random Forest, while Lella (19) proposed concatenating the best results from different classifiers (SVM, RF, and multi-layer perceptron (MLP)) across all feature groups (FA, MD, radial diffusivity (RD), longitudinal diffusivity (LD)). The use of DTI-based machine learning shows impressive performance; however, it requires extracting features and subsequently selecting the relevant ones to perform the classification task, which is difficult and time-consuming.
Deep learning is a state-of-the-art machine learning method (20).Classification techniques using deep convolutional neural networks (CNN) revealed higher AD detection performance (21).Most of the literature approaches have used CNN-based sMRI to classify the different stages of Alzheimer's disease.CNN can handle low to high automatic feature extraction from complex structures.Some authors have proposed a new CNN architecture (22) reaching promising results with an accuracy of 99.9%.Others have reported excellent results using transfer learning methods (23; 24; 25).However, others have suggested extracting deep discriminative features based on transfer learning methods and classifying them with SVM (26; 27).
In recent years, DTI indices, principally MD, combined with sMRI information have been adopted by many researchers, who proposed different techniques to combine DTI and sMRI. Massalimova et al. (28) used a multi-modal ResNet-18 network (sMRI and DTI) to classify CN, MCI, and AD on the OASIS-3 dataset; they suggested that classification performed by the softmax layer could be preferable to a separate classifier, in contrast to Kang et al. (26). Kang et al. (26) suggested a fusion technique consisting of merging slices with the same index from the T1w, FA, and MD images into an RGB slice; the pre-trained VGG16 network was then used to extract the features, and an SVM classifier to discriminate MCI patients from CN using the ADNI dataset. Aderghal et al. (29) proposed a LeNet-like CNN based on sMRI and DTI-MD images; they selected the median hippocampal slice and its two neighbors in each projection (axial, sagittal, and coronal). Their CNN was pre-trained on the MNIST database, then retrained first on sMRI and subsequently on DTI-MD, achieving classification accuracies of 86.83% for AD vs. CN, 69.85% for MCI vs. CN, and 71.75% for AD vs. MCI. Marzban et al. (30) proposed a simple 2D-CNN based on a single convolution layer, trained on diffusion scalar metrics (FA, MO, and MD) and GM; the cascaded MD and GM volumes achieved overall accuracies of 88.9% and 79.6% for AD vs. CN and MCI vs. CN, respectively. Ahmed et al. (31) extracted visual features from the hippocampus ROI in both sMRI and MD images; the extracted features and the amount of CSF calculated on the sMRI were combined and classified using multi-kernel learning (MKL).
Assessment of pathophysiological changes by neuroimaging is essential for predicting AD. A single modality cannot provide enough information; therefore, multiple modalities must be combined to detect AD. sMRI and DTI have received increasing attention in recent years for studying the progression of Alzheimer's disease. These two modalities are complementary: sMRI detects the shrinkage of gray matter and changes in brain volume, while DTI is a useful predictive marker of WM deterioration. In this context, we aim to detect patterns of micro- and macrostructural changes in the different AD stages using a multi-modality MRI (sMRI and DTI) fusion process. We propose a new methodology consisting of a new CNN that extracts the salient visual features from the DTI measurements and the GM images separately; these features are then merged and passed to an SVM to identify AD vs. MCI, AD vs. CN, and MCI vs. CN.
Database
The dataset used in this work has been obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) (http://adni.loni.usc.edu). The ADNI was launched in 2003 as a public-private partnership, led by Principal Investigator Michael W. Weiner, MD. The objectives of the ADNI study are the identification of biomarkers for clinical use and the early detection of AD (32). The selected balanced dataset includes both diffusion-weighted images (DWI) and sMRI brain scans from 150 individuals of both genders (50 AD, 50 CN, and 50 MCI), with ages varying from 55 to 90, acquired by GE Medical Systems scanners. The 50 MCI subjects comprise 25 early MCI and 25 late MCI, and the selected subjects come from the ADNI-GO and ADNI-2 phases.
In addition to these images, 5 T2-weighted images without diffusion weighting (b = 0) are used as reference scans. More information about the acquisition parameters can be found in the ADNI-2 protocol.
Methodology
Our proposed strategy consists of pre-processing, 2D slice selection, feature extraction, and classification. We work on the DTI measurements (FA, MD) and the GM segmented from T1-weighted sMRI to classify CN vs. AD, AD vs. MCI, and CN vs. MCI. A new 2D-CNN architecture is trained on the slice-level dataset (only the 32 relevant slices selected from the FA, MD, and GM images) to extract the salient features from the DTI maps and the GM. The optimal FA-CNN, MD-CNN, and GM-CNN models are saved based on the lowest loss value during the training process and then used to extract features from the last fully connected layer. After that, the features of each slice in the subject-level dataset (FA, MD, GM) are extracted by the corresponding optimal model (FA-CNN, MD-CNN, and GM-CNN). These features are merged and fed to the SVM classifier to improve the performance, as illustrated in Figure 1. A detailed description is given in the following subsections.
Fig. 1 Flowchart of the proposed fusion multi-modalities system using the 2DCNN-SVM approach for AD identification.
The pre-processing of the raw sMRI volumes to segment the GM is performed with the CAT12 toolbox (http://www.neuro.uni-jena.de/cat/), an extension of the SPM12 software (33). In short, all T1-weighted 3D sMRI volumes are normalized with the DARTEL algorithm (Diffeomorphic Anatomical Registration Through Exponentiated Lie algebra) using an affine transformation followed by a nonlinear registration, corrected for bias-field inhomogeneities, and then segmented into GM and WM components. DWI volumes are preprocessed using the Functional Magnetic Resonance Imaging of the Brain (FMRIB) Software Library (FSL) (34). First, DWI scans are corrected for eddy-current distortions and susceptibility artefacts with FSL eddy_correct. FSL's Brain Extraction Tool was used to remove the skull. The diffusion tensor is then fitted at each voxel of the corrected DWI scans with FSL dtifit. The eigenvalues of the diffusion tensor (λ1, λ2, λ3) are used to obtain maps of scalar anisotropy and diffusivity. Several diffusion metrics can be calculated; the most widely used are fractional anisotropy (FA) and mean diffusivity (MD). FA is calculated using equation (1), and MD, the magnitude of diffusion, is calculated by averaging the three eigenvalues, as in equation (2):

$$\mathrm{FA} = \sqrt{\tfrac{3}{2}}\,\sqrt{\frac{(\lambda_1 - \mathrm{MD})^2 + (\lambda_2 - \mathrm{MD})^2 + (\lambda_3 - \mathrm{MD})^2}{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}} \quad (1)$$

$$\mathrm{MD} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3} \quad (2)$$

Finally, FA and MD are co-registered with the corresponding sMRI scans using SPM12; each scan contains 121×145×121 voxels.
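To make equations (1) and (2) concrete, the following is a minimal NumPy sketch that computes the two scalar maps from the eigenvalue volumes; the function and variable names are ours, and loading the dtifit eigenvalue outputs is assumed to happen elsewhere.

```python
import numpy as np

def dti_scalars(l1, l2, l3, eps=1e-12):
    """Compute MD and FA maps from diffusion-tensor eigenvalue maps
    (e.g., the L1/L2/L3 outputs of FSL dtifit), per equations (1)-(2)."""
    md = (l1 + l2 + l3) / 3.0                        # equation (2)
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = np.sqrt(1.5) * np.sqrt(num / (den + eps))   # equation (1)
    return fa, md
```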
2D slice selection
Each FA, MD, and GM volume is decomposed into 2D slices along the axial view to highlight the most distinctive features and improve classification efficiency. We select 32 slices from each subject based on the highest entropy information (slices with indices 34-65). The selected slices cover most of the AD-affected brain regions reported in the literature, such as the hippocampus, the entorhinal cortex, and the thalamus. As a result, a total of 1600 (32×50) slices per class (CN, MCI, and AD) are selected. More details are shown in Table 1.
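The paper does not detail how the slice entropy is computed, so the sketch below shows one plausible reading: ranking the axial slices of each volume by the Shannon entropy of their intensity histograms and keeping the top 32 (the bin count and function names are assumptions).

```python
import numpy as np

def top_entropy_slices(volume, n_slices=32, bins=64):
    """Return the indices of the n_slices axial slices with the
    highest Shannon entropy of their intensity histogram."""
    entropies = []
    for k in range(volume.shape[2]):                  # iterate axial slices
        hist, _ = np.histogram(volume[:, :, k], bins=bins)
        p = hist[hist > 0] / hist.sum()
        entropies.append(-(p * np.log2(p)).sum())
    keep = np.argsort(entropies)[::-1][:n_slices]
    return np.sort(keep)
```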
Feature extraction using 2DCNN
Handcrafted feature extraction, which is hard and time-consuming, was the main limitation of traditional machine learning algorithms. A CNN can perform this task automatically without human intervention; it is the most common deep learning model among neural networks.
It is inspired by the human visual system. A typical CNN architecture mainly comprises an input layer, convolution layers, pooling layers, fully connected layers, and a classification layer. The convolution layer automatically extracts features from the input FA, MD, or GM images via element-wise multiplication with a filter. The pooling layer reduces redundant information by taking the average or the maximum of a region. The fully connected layer reduces and transforms the feature maps into a column feature vector. Classifiers are finally used for AD prediction.
In short, the 2DCNN architecture consists of three convolutional layers with 3×3 filters. Each convolutional layer is followed by a ReLU layer, a batch normalization (BN) layer, and a max-pooling layer; these are followed by two fully connected layers, a softmax layer, and an output layer. The ReLU layer sets negative values to zero, and BN accelerates the training process. More details are tabulated in Table 2.
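The exact filter counts and fully connected layer sizes come from Table 2, which is not reproduced here, so the PyTorch sketch below fills them in with placeholder values; only the overall layout (three 3×3 convolution blocks with ReLU, batch normalization, and max-pooling, followed by two fully connected layers) follows the text. Because two deep features per slice are later passed to the SVM, the last fully connected layer is sized to the two classes of each binary task.

```python
import torch
import torch.nn as nn

class Slice2DCNN(nn.Module):
    """Sketch of the described 2DCNN: three 3x3 conv blocks
    (conv -> ReLU -> BatchNorm -> max-pool) and two fully connected
    layers. Channel counts and FC width are assumed placeholders."""

    def __init__(self, n_classes=2, in_ch=1):
        super().__init__()
        blocks, prev = [], in_ch
        for ch in (8, 16, 32):                    # assumed filter counts
            blocks += [nn.Conv2d(prev, ch, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.BatchNorm2d(ch),
                       nn.MaxPool2d(2)]
            prev = ch
        self.features = nn.Sequential(*blocks)
        self.fc1 = nn.LazyLinear(64)              # assumed width
        self.fc2 = nn.Linear(64, n_classes)       # its activations feed the SVM

    def forward(self, x):
        x = self.features(x).flatten(1)
        x = torch.relu(self.fc1(x))
        return self.fc2(x)                        # softmax is applied inside the loss
```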
Classification using support vector machine (SVM)
SVM is a widely applied supervised learning method suited to small, high-dimensional datasets; it finds a maximal-margin hyperplane to separate classes and solve a binary classification problem (35). SVM has been reported to outperform the softmax layer in previously published studies (36; 37). The trained FA-CNN, MD-CNN, and GM-CNN are used to extract features, which are then passed to the SVM classifier instead of the softmax layer for AD classification. The features extracted from the FA, MD, and GM images form a matrix whose size is the number of slices multiplied by the number of features selected per slice. For the 32 slices of each subject, the feature representation has dimension 32×2. For all 100 subjects, the output of each model is therefore 100 matrices of size 32×2, which are concatenated into a total feature matrix of dimension 3200×2. The SVM classifier is trained and tested on these deep features, as shown in Figure 2.
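A minimal sketch of this hand-off from CNN to SVM, assuming a trained network like the Slice2DCNN sketch above; the loader names and data pipeline are hypothetical.

```python
import numpy as np
import torch
from sklearn.svm import SVC

@torch.no_grad()
def extract_features(model, loader):
    """Stack last-FC-layer activations of every slice into an
    (n_slices_total, 2) feature matrix with matching labels."""
    model.eval()
    feats, labels = [], []
    for x, y in loader:
        feats.append(model(x).cpu().numpy())
        labels.append(y.numpy())
    return np.vstack(feats), np.concatenate(labels)

# Hypothetical usage: gm_cnn is a trained GM-CNN, loaders yield (slice, label).
X_train, y_train = extract_features(gm_cnn, train_loader)
X_test, y_test = extract_features(gm_cnn, test_loader)
svm = SVC(kernel="rbf").fit(X_train, y_train)
print("test accuracy:", svm.score(X_test, y_test))
```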
Fig. 2 The pipeline of proposed GM-CNN with SVM method to distinguish between AD and CN.
Multi-modality MRI fusion process.
The automatic AD screening fusion algorithm developed using multi-modality MRI is illustrated in Figure 1. The three optimal CNNs (FA-CNN, MD-CNN, and GM-CNN) are used to extract features. We tried several fusion combinations, (FA, MD), (FA, GM), (MD, GM), and (FA, MD, GM), to select the best-scoring model. The fusion process consists of merging the features extracted from FA, MD, and GM into a global feature vector. Accordingly, the size of the fused FA + MD + GM feature matrix is 3200×6.
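The fusion step itself reduces to column-wise concatenation of the per-modality feature matrices; a short sketch with hypothetical file names:

```python
import numpy as np

# Per-modality deep features, each of shape (3200, 2): one row per slice.
X_fa = np.load("fa_features.npy")   # hypothetical file names
X_md = np.load("md_features.npy")
X_gm = np.load("gm_features.npy")

# Fusion: concatenate columns into the (3200, 6) global feature matrix.
X_fused = np.hstack([X_fa, X_md, X_gm])
assert X_fused.shape == (3200, 6)
```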
Experiments
In this work, several experiments are carried out to validate the effectiveness of our proposed method in classifying (AD vs. CN), (CN vs. MCI), and (AD vs. MCI). In the first experiment, we performed a direct unimodal classification of the features extracted from FA, MD, and GM, which indicates the best single modality and map. In the second experiment, we study whether multi-modality increases performance and allows better discrimination between the different classes, by examining the impact of merging features from two modalities. The proposed 2DCNN-SVM was implemented in MATLAB R2019a, running on a 3.1 GHz Intel i7 processor with 16 GB of RAM. The CNN model was trained using stochastic gradient descent with momentum (SGDM) with the back-propagation algorithm and cross-entropy as the loss function. The batch size is 64 and the learning rate 0.0001, for 25 epochs. There is a total of 3200 images for each map (FA, MD, and GM), 1600 images per class. The dataset is divided into 75% for training, 15% for validation, and 15% for testing the SVM. The same CNN architecture is used to train the FA, MD, and GM slices. For the SVM classifier, the extracted data are categorized into training, validation, and test sets: the features extracted from 2720 images are used for training and those from 480 images for testing.
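A PyTorch rendering of this training setup might look as follows; the momentum value and the use of validation loss for model selection are assumptions, since the paper only states that the model with the lowest loss is kept.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader

def train_cnn(model, train_set, val_set, epochs=25, lr=1e-4):
    """SGD with momentum, cross-entropy loss, batch size 64,
    keeping the weights with the lowest validation loss."""
    train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=64)
    opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)  # momentum assumed
    loss_fn = nn.CrossEntropyLoss()
    best_loss, best_state = float("inf"), None
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(x), y).item() for x, y in val_loader)
        if val_loss < best_loss:
            best_loss = val_loss
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    model.load_state_dict(best_state)
    return model
```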
The best SVM classification score, using a radial basis function (RBF, Gaussian) kernel, was obtained by 10-fold cross-validation. The optimal hyperparameters (cost and gamma) were determined using a grid search, which finds the best model over different combinations of parameters; the cost controls the error penalty, and gamma sets the curvature of the decision boundary.
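With scikit-learn, the grid search over the cost and gamma of an RBF SVM could be sketched as below; the search ranges are illustrative, as the paper does not report them.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Illustrative grid; C is the cost (error penalty), gamma the RBF curvature.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10, scoring="accuracy")
search.fit(X_fused, y)  # X_fused: (3200, 6) fused features; y: slice labels
print(search.best_params_, search.best_score_)
```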
Evaluation
The performance of our method was validated using accuracy and the area under the receiver operating characteristic curve (AUC). The validation results are given in Table 3, and the ROC curves of the 10-fold cross-validation are shown in Figures 3, 4, and 5. The fused FA, MD, and GM features improved the results and outperformed both the single modalities and the sMRI+MD fusion procedures adopted in many previous studies (26; 29; 30). We tested our method using 240 AD images, 240 CN images, and 240 MCI images. The evaluation metrics are the accuracy, sensitivity, and specificity determined from the confusion matrices. In the confusion matrices, the sensitivity is shown in the last row and the specificity in the last column; the diagonal boxes indicate the numbers and percentages of correctly classified samples, and the last box shows the overall accuracy of the model. Examples of the confusion matrices of the fused FA, MD, and GM features are shown in Figures 6, 7, and 8, and all test results are summarized in Table 4. Table 4 shows that FA, MD, and GM are all important for discriminating the different AD stages. Using FA, MD, and GM independently, MD obtained the best result for AD vs. CN with an accuracy of 98.96%, whereas GM yielded better results for AD vs. MCI and CN vs. MCI, with accuracies of 96.88% and 93.50%, respectively.
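For reference, the metrics read off a binary confusion matrix can be computed as follows; the counts in the example are illustrative, not the paper's.

```python
import numpy as np

def binary_metrics(cm):
    """Accuracy, sensitivity, and specificity from a 2x2 confusion
    matrix laid out as [[TN, FP], [FN, TP]] (scikit-learn convention)."""
    tn, fp, fn, tp = cm.ravel()
    accuracy = (tp + tn) / cm.sum()
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

cm = np.array([[236, 4], [1, 239]])   # illustrative counts
print(binary_metrics(cm))
```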
We investigated the best combinations of features (FA and MD, FA and GM, and MD and GM). Fused FA and MD outperformed the other combinations, with accuracies of 99.98% and 98.33% for AD vs. CN and AD vs. MCI. On the other hand, fused GM and MD achieved better results for CN vs. MCI, with an accuracy of 97.00%, a sensitivity of 97.20%, and a
Discussion
To validate the performance and efficiency of our novel workflow, we compared it to previous approaches in the literature that deal with the same database (ADNI) and the same modalities (sMRI and DTI).
Our results achieved higher accuracy in AD detection than these studies, as shown in Table 5.
In general, our results concerning early AD detection imply the existence of distinct pathophysiological processes. The hippocampus is known to be one of the earliest and most severely damaged structures in AD; however, other structures such as the amygdala, thalamus, and putamen are also involved in AD detection. Selecting the relevant slices appears to be a more powerful and simpler method than segmenting the hippocampus or other brain regions, which requires a human expert. Our network learns the complex patterns of brain atrophy from relevant sections that contain almost all of the AD-affected regions mentioned in the literature for each patient. This eliminates the need to segment the hippocampus and other brain regions. Moreover, our approach avoids a subsequent selection of the most discriminating characteristics.
Our results confirm the effectiveness of the DTI measurements FA and MD in the classification of AD vs. CN, AD vs. MCI, and CN vs. MCI, which is consistent with previous works (19; 18). In addition, GM atrophy in sMRI is of great interest for early AD detection. sMRI-based transfer learning has shown impressive results (23; 25). Generally, the VGG16 and VGG19 models have achieved higher accuracy than other pre-trained models (24). Recently, some authors (26; 27) succeeded in using a pre-trained VGG16 model for automatic feature extraction and an SVM for classification, achieving higher accuracy. However, transfer learning generally relies on natural images, with models trained on the ImageNet database (39). Conversely, our simple networks learn and extract the most pertinent features from scratch.
In the past few years, multi-modality approaches (DTI-MD and sMRI) have been reported by many researchers, who proposed different combination techniques to achieve the best classification. Aderghal et al. (29) used transfer learning to perform the fusion, and Marzban et al. (30) adopted a cascaded CNN. However, they achieved lower accuracy than ours, which exceeds 97%. This is probably due to the smaller sample size we used compared to theirs, the fact that we did not restrict the analysis to a specific ROI, or the impact of adding FA.
In summary, both the diffusion scalar metrics and the GM are powerful and important elements for AD stage discrimination. The multi-modality fusion process (FA+MD+GM) appears to be the best technique to improve AD classification performance.
Conclusion
In this paper, we have proposed a 2DCNN-SVM classification approach based on DTI scalar metrics (FA and MD) and GM segmented from T1w images from the ADNI database for AD detection and diagnosis. The fusion of features extracted from FA, MD, and GM by the proposed 2DCNN demonstrates the effectiveness of our method, achieving classification accuracies of 99.79%, 99.85%, and 97.00% for AD/CN, AD/MCI, and CN/MCI, respectively. In conclusion, using DTI-FA, DTI-MD, and GM separately gives lower results than fusing them together.
• Authors' contributions: All authors were involved in the work leading up to the manuscript. All sources used are properly disclosed (correct citations).
Fig. 9 Comparison of the performance of the proposed technique for binary classification of AD vs. CN, AD vs. MCI, and CN vs. MCI.
Table 1 Sample size of the preprocessed slice-selection process.
Table 2 Layer properties of the proposed 2DCNN architecture.
Table 3 Performance on the validation dataset.
Confusion matrix of AD vs. CN.
Confusion matrix of CN vs. MCI.
Table 4 Performance evaluation of the proposed 2DCNN-SVM technique on the test dataset.
Table 5
Comparison of results with state-of-the-art techniques applied to AD detection. | 5,117 | 2022-02-21T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Engineered fluidic systems to understand lymphatic cancer metastasis
The majority of all cancers metastasize initially through the lymphatic system. Despite this, the mechanisms of lymphogenous metastasis remain poorly understood and understudied compared to hematogenous metastasis. Over the past few decades, microfluidic devices have been used to model pathophysiological processes and drug interactions in numerous contexts. These devices carry many advantages over traditional 2D in vitro systems, allowing for better replication of in vivo microenvironments. This review highlights prominent fluidic devices used to model the stages of cancer metastasis via the lymphatic system, specifically within lymphangiogenesis, vessel permeability, tumor cell chemotaxis, transendothelial migration, lymphatic circulation, and micrometastases within the lymph nodes. In addition, we present perspectives for the future roles that microfluidics might play within these settings and beyond.
INTRODUCTION: THE RISE OF MICROFLUIDICS
The discrepancies between scientific data gathered in vitro and in vivo vs clinical settings suggest that new models are warranted to recapitulate human pathophysiological processes. 1 For years, there has existed a challenge in the field of bioengineering and drug discovery concerning the effectiveness of 2D cell cultures to model human physiology and drug interactions observed clinically. [1][2][3][4] Two-dimensional static cultures remain the standard for cellular biology; yet, these models lack physiological relevance and have often proven ineffective as clinical predictors due to the dilute and ineffective recapitulation of the cellular microenvironment. 4,5 While in vivo models remain necessary to assess drug interactions in the preclinical setting, the average success rate of translation from animal models to clinical cancer trials is less than 8%. 6 Aside from being "lost in translation," animal models raise ethical concerns and are problematic when using human cells due to host-immune cell interactions. 2,6 The combination of these shortcomings has pushed research toward 3D and microfluidic platforms that recapitulate the physical and chemical microenvironments seen in vivo, while providing the means to precisely control and visualize cellular interactions in a high-throughput manner. Microfluidics employs small channels, on the scale of tens to hundreds of micrometers in diameter, to process minute fluid volumes. 7 Their small size allows for the use of limited cell numbers or reagents, which are often expensive or low in quantity. Furthermore, these devices can more accurately model in vivo architectures through the integration of 3D extracellular matrix (ECM) components. 2,3,8 The spatiotemporal control of these devices has allowed researchers to study specific cellular interactions in a more precise and controlled manner. Microfluidics is also advantageous in that devices can be fabricated with small working distances to allow for high-resolution, real-time imaging.
The usage of biologically compatible material substrates from molds allows for high-throughput production of devices and subsequent analysis. A large majority of microfluidic devices for biological application use soft lithography techniques, which include fabricating a master "stamp" from a photocurable polymer such as SU-8. 9 This master can be used to imprint features into elastomeric materials, such as polydimethylsiloxane (PDMS), with high resolution. PDMS is widely used in microfluidics since it is easy to handle, can be purposed in diverse applications, is economically viable, ideal for imaging due to its optical properties, and, most importantly, is biologically inert. 10 While the field of microfluidics has advanced tremendously in terms of applications, the versatility of replication molding with PDMS meant that new fabrication techniques have lagged behind. Other means of fabrication often require advanced equipment and are not economically feasible at a small scale for research purposes, use materials that do not translate well with biological applications, or lack the high-resolution capabilities inherent with soft lithography. 9 However, certain applications may require more intricate fabrication techniques, such as micromachining, 3D printing, or dry etching. Table I illustrates the ubiquity of PDMS and photolithography in the field of microfluidics for biomedical research.
Replication of tissue microenvironments within microfluidic devices has allowed the modeling of complex physiological process systems in "organ on a chip" devices. [11][12][13][14][15][16] Meanwhile, the incorporation of multiple organs on a chip in one integrated device can be used for the scaling of "microHumans" to study complex anatomical interactions and systemic drug toxicity. 17,18 Although initially intended to bridge the gaps between 2D in vitro studies and in vivo work, 3D fluidic models are evolving to study pathologies and drug interactions directly in patient-specific devices. 2,[19][20][21][22][23] These technological advances have made it possible to study specific diseases, including cancer within a microfluidic device. 1,2,8,12,[24][25][26] The applications for microfluidic devices are evolving and emerging in the field of cancer research, both from a biological perspective of understanding roles of immune and stromal cells to a translational perspective of investigating the efficacy of therapeutics in preclinical models.
THE ROLE OF LYMPHATICS IN CANCER METASTASIS
Approximately 90% of all cancer related deaths are attributed to metastasis. 27 Despite its high morbidity, cancer metastasis is a very inefficient process in which less than 0.1% of circulating tumor cells (CTCs) will actually go on to colonize and form macrometastases. 28 The metastatic cascade is composed of several sequential steps, each of which selects for a specific cellular phenotype that is able to overcome inhospitable environments. First, cancer and stromal cells in the primary tumor secrete proangiogenic factors such as VEGF to promote tumor microvasculature networks of both blood and lymphatic vessels. Tumor cells then undergo an epithelial to mesenchymal transition (EMT) that promotes cell motility through the loss of cell-cell adhesion proteins such as E-cadherin and β-catenin. 29 Motile cells will then migrate and invade the basement membrane of the nearby vasculature through both physical (high intratumoral pressures) and chemical (chemokine gradients) cues. 28 Cells may enter the hematogenous or lymphatic circulation via transendothelial migration (TEM) from the tissue parenchyma into nearby blood or lymphatic vessels, respectively. 27 In the case of lymphatic intravasation, tumor cells will drain into collecting lymphatic vessels, eventually emptying into the sentinel or "tumor draining lymph nodes" (TDLN). 30 Successful migration to the sentinel lymph nodes provides cells with a direct route to systemic lymph nodes and the bloodstream via the thoracic duct and subclavian vein. 31,32 Once in circulation, cells can undergo extravasation, followed by a mesenchymal to epithelial transition (MET) and colonization of distant organs. 27 As suggested from Paget's "seed and soil" hypothesis, tumor cells will have genetic and phenotypic advantages to promote seeding in specific organs over others. 33,34 Just as a seed needs proper nutrients to grow, tumor cell proliferation and survival is highly dependent upon the microenvironment where CTC extravasation occurs. This sequential model of metastasis is well studied but has proven to be an oversimplification in many cancer models and in clinical observation. 31,35 It is estimated that 80% of carcinomas and melanomas metastasize via lymphatics. 30 Despite the fact that the majority of all human cancers metastasize initially via the lymphatic system, the mechanisms of lymphogenous metastasis remain poorly understood and understudied compared to that of hematogenous metastasis. 30,36 There are many factors that help determine which metastatic route a cell will take. 37 Typically, metastatic subpopulations will develop mutational burdens, which may be preferential for one mode over another. For instance, CCR7+ tumor cells will preferentially traffic toward CCL21 secreted by lymphatic endothelial cells (LECs), promoting initial metastasis through the lymphatic system. This signaling axis is typically used in CCR7+ dendritic cells (DCs) trafficking into lymph nodes, while T-cells expressing CCR7 follow gradients toward increasing CCL19 secreted within the thymus and lymph nodes. 38 Additionally, CCL21 can be upregulated by VEGF-C/VEGFR-3 signaling, which has been shown to be highly expressed in primary tumors and tumor-derived lymphatic neovasculature. 31 By harnessing these signaling mechanisms employed by immune cells to traffic toward LECs for antigen presentation, cancer cells experience directed migration toward the lymphatic circulation. 31,36 Physical and mechanical forces have also been shown to play a role in lymphatic homing. 
High interstitial pressures within solid tumors promote interstitial flow (IF) toward the periphery of the tumor where lymphatic vessels are concentrated. 39 Interstitial flow has been shown to promote vascular remodeling and even promote tumor cell invasion via autologous chemotaxis toward lymphatic vessels along the tumor periphery. Furthermore, secretion of specific prolymphangiogenic factors can favor lymphatics vs blood. 32 For example, upregulated VEGF-C and VEGF-D secretion by cancer cells have been shown to induce preferential lymphatic metastasis via LEC VEGFR-2 and VEGFR-3, whereas blood endothelial cell (BEC) angiogenesis prefers VEGF-A/VEGFR-1 signaling.
Lymphatic capillaries lack pericytes and the tight interendothelial junctions typically seen in blood vessels. 31,40 The leaky nature of lymphatic vessels facilitates tumor cell intravasation via transendothelial migration, promoting initial metastasis. Likewise, the lower fluid shear environment within lymphatic vessels (in the range of 0.4 dyne/cm² with surges between 4 and 12 dyne/cm²) 41,42 compared to blood vasculature (upward of 30 dyne/cm² in arteries) 43,44 increases the likelihood of cell survival in transit. 37 Mounting evidence suggests that lymphatics may also play a role in curbing antitumor immune responses. 45 For example, initial lymphatic vessels formed as a result of tumor-induced lymphangiogenesis exhibit upregulated expression of the immune checkpoint ligand programmed cell death ligand 1 (PD-L1), which induces CD8+ T-cell anergy upon tumor-associated antigen (TAA) presentation via MHC class I. 31,36,46 In addition, tumor-induced LECs have been shown to prevent dendritic cell maturation, increase T-cell tolerization, and inhibit proliferation of T-cells that have been stimulated by proinflammatory cytokines. 36,47,48 Despite the increasing evidence for the roles of lymphatics in promoting cancer progression and dissemination, there are innate characteristics of lymphatics that promote antitumor immunity. For instance, the presence of lymphatic networks at the primary tumor is important for TAA trafficking to immune cells to evoke a robust T-cell response. 45 In addition, lymphatic vessel density in solid tumors strongly correlates with the quantity of infiltrating cytotoxic CD8+ T-cells, promoting "hot" vs "cold" tumors in patients. 49,50 The involvement of lymphatics in cancer dissemination extends into the clinic. The presence of metastases in the sentinel lymph nodes of cancer patients is used as a basis for establishing tumor staging, predicting patient prognosis, and formulating treatment strategies. 51 Axillary and sentinel lymph node biopsies in melanoma and breast cancer patients have proven to be fundamental for assessing the aggressiveness and extent of disease. 52,53 In fact, sentinel lymph node (LN) biopsies will often reveal metastatic spread before detection by traditional imaging modalities such as positron emission tomography/computed tomography (PET-CT) or before the presence of blood-borne CTCs. 51 The clinical ramifications of the degree of lymphatic involvement in metastasis demonstrate the importance of the lymphatic system in cancer.
The complex roles of lymphatics in metastatic progression are not yet fully understood. What is apparent, however, is that better understanding of the interplay between tumor cells, their microenvironment, and the lymphatic system during metastasis is vital to the discovery of new therapeutics to exploit tumor cell weaknesses. The tight control of physical and chemical stimuli, high-throughput nature, high-resolution capacity, and physiologically relevant architecture of microfluidics provide excellent means for better understanding these intricate interactions.
Lymphangiogenesis and LEC barrier function
Lymphatic vasculature comprises initial lymphatic vessels and collecting vessels, which function to prevent the accumulation of fluid, termed edema, in tissue. 31,54 Additionally, these vessels transport pathogens, antigens, and antigen presenting cells (APCs) from tissues toward immune cells residing within the lymph nodes. Initial lymphatic vessels, also known as lymphatic capillaries, range from 35 to 70 μm in diameter and absorb interstitial fluids, facilitated by pressure gradients between the interstitium and vessel lumen. 55 The lymph then drains into downstream collecting vessels where unidirectional valves and smooth muscle contractions facilitate transport into the draining lymph nodes. Lymphangiogenesis is the analog of angiogenesis, the process in which lymphatic endothelial cells sprout to create new vasculature off of existing vessels. Tumor-induced lymphangiogenesis is characterized by VEGF-C or VEGF-D overexpression in tumors, which has been correlated with an increase in lymph node metastasis and high morbidity in patients. [56][57][58] Lymphatic vessels have a high permeability due to discontinuous interendothelial junctions and a sparse surrounding basement membrane. 32 Interstitial flow has been shown to be an important regulator of lymphangiogenesis in vitro and in vivo. 59,60 To model this, the Swartz Lab created a multichambered, high-throughput flow device capable of replicating interstitial flow pressures through a 3D extracellular matrix. 61 This PDMS device allowed live imaging of morphogenesis of lymphatic and blood endothelial cells while incorporating tumor microenvironment (TME) components with cocultures of tumor cells and fibroblasts. A related study by Kim et al. also investigated the roles of IF in lymphangiogenesis, discovering that the directionality of lymphatic sprouting was flow dependent. 62 Interstitial flow induced upstream lymphatic sprouting and suppressed downstream sprouting via LEC polarization. This PDMS device consisted of a central 3D fibrin channel containing LECs and microposts, surrounded by two flow channels and a 3D fibroblast culture. The same group later adapted the design to study the complex networks of lymphatic vessels in coculture with BECs, fibroblasts, and cancer cells in a 3D tumor microenvironment model, as shown in Fig. 1(a). 63 This is the only known study to mimic angiogenesis and lymphangiogenesis simultaneously in one 3D microfluidic platform.
Interstitial flow is also key in preserving endothelial barrier function in lymphatic neovasculature. 39,60 A study by Wong et al. used a new device to study endothelial cell perfusion to demonstrate the importance of lymphatic drainage in preserving vascular stability. 64,65 This device in particular has the capability of being repurposed by introducing coculture with cancer cells to study endothelial cell permeability and transendothelial migration under lymphatic drainage conditions. More recently, a device by Sato et al. investigated vascular permeability within cocultures of LECs and BECs. 66 The PDMS device consisted of two microfluidic channels where LECs and BECs were cultured back-to-back, separated by a fibronectin coated polyethylene terephthalate (PET) membrane. 67 Their findings demonstrated that physiological flow conditions enhanced cell-cell junctions and recapitulated microvascular architecture seen in vivo. 66 While angiogenesis is a well characterized process that has been incorporated into many microfluidic devices, 68-73 the process of lymphangiogenesis is less well understood. The described microfluidic devices have demonstrated advances in understanding the roles of lymphatic vessel sprouting, morphogenesis, and permeability in the context of the tumor microenvironment. In particular, within the last few years there has been emphasis on the fabrication of more complex systems that model lymphangiogenesis in parallel with angiogenesis. While multiple devices have been employed to study lymphatic microvasculature function, few have incorporated components to model lymphangiogenesis in the context of the tumor microenvironment. Modifying or reapplying these devices to represent tumor-induced vascular remodeling will be instrumental in the future. The field is mature from a device design standpoint, but there remain many opportunities within existing devices for further studies, especially regarding drug modulation of lymphangiogenesis and LEC barrier function. Currently, there are no FDA-approved compounds to prevent tumor-induced lymphangiogenesis, supporting the use of such microfluidic devices to study novel drug candidates before human trials. 74
Lymphatic-induced migration and tumor crosstalk
The likelihood of successful metastatic dissemination is contingent upon a tumor cell's migratory potential. 75 Cancer cells in the primary tumor undergo EMT characterized by a loss of adhesion proteins like E-cadherin and the upregulation of vimentin. 29 These cancer cells resemble a cancer stem cell phenotype and morphology, including a decrease in proliferation and increase in migratory capacity. As previously alluded to, these cancer cells are able to harness the same chemotactic mechanisms typically employed by immune cells to migrate toward and into initial lymphatics. 31,36 Meanwhile, high intratumoral pressures direct interstitial flows toward the tumor periphery into peritumoral lymphatic vessels. 39 High interstitial flows can cause fibroblast contraction and collagen fiber alignment, along with tumor stiffening. This will increase tumor invasiveness, as cells will more easily migrate through aligned collagen fibers. When taken together, all these mechanisms promote tumor cell migration away from the primary tumor and toward peritumoral vasculature.
The Swartz Lab has pioneered microfluidic modeling of tumor cell chemotaxis toward lymphatics. In one study, Shields et al. used a modified 3D Boyden chamber, consisting of a tumor cell culture in 3D ECM with LECs cultured on the underside of the chamber. 76 By introducing interstitial flows of just 0.2 μm/s through the 3D ECM in the absence of LECs, cancer cell migration was enhanced via autologous CCR7 signaling. This novel finding first suggested that IF not only promoted directed migration through physical mechanisms but also through autocrine signaling where a chemotactic gradient naturally forms at the leading edge of the cell. When cultured with LECs, paracrine CCL21 secretion enhanced CCR7 signaling and offered a complementary role for lymphatic-directed chemotaxis. A separate study from the Swartz Lab built upon this observation in a similarly modified Boyden chamber device. Issa et al. demonstrated that tumor cell VEGF-C enhanced LEC CCL21 secretion through VEGFR-3 signaling, thereby enhancing tumor cell proteolysis and migration toward LECs. 77 Polacheck et al. created a two-channel PDMS device separated by a collagen interface in which a pressure difference between channels drove interstitial flow through the device, as shown in Fig. 1(b). 78,79 This study further supported the previously demonstrated phenomena of autologous CCR7 chemotaxis downstream of interstitial flow. However, when blocking CCR7 signaling, migration was directed upstream of flow, hypothesized to be linked to flow-induced tension in integrins via phosphorylation of focal adhesion kinase (FAK).
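For a rough sense of scale for such pressure-driven interstitial flow, a Darcy's-law estimate can be sketched; all parameter values below are illustrative, not taken from the cited studies.

```python
# Darcy's law: superficial flow speed v = (k / mu) * (dP / L)
k = 1e-13    # collagen matrix permeability, m^2 (illustrative)
mu = 1e-3    # fluid viscosity, Pa*s (water-like)
dP = 50.0    # pressure difference across the gel, Pa (illustrative)
L = 1e-3     # width of the collagen region, m

v = (k / mu) * (dP / L)                             # m/s
print(f"interstitial speed ~ {v * 1e6:.1f} um/s")   # ~5 um/s for these values
```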
Tumor cells under confinement show preferential migration along paths of least resistance, through trajectories created by leader cells, 80 collagen fiber alignment, 81 or on the periphery of preexisting lymphatic and blood vessels. 82 Irimia and Toner created a high-throughput model of cancer cell migration under confinement using collagen-filled, cell-sized microchannels in 96-well plates. 83 When treated with paclitaxel chemotherapy, the overall migratory potential of MDA-MB-231 cells was significantly decreased. However, subpopulations of cells proved resistant to migratory inhibition and showed sustained migration in the presence of high concentrations of drug.
Tumor cells have been shown to induce tolerization of immune cells prior to inhabiting the TDLN, creating a premetastatic niche that can be ideal for tumor cell seeding. 30,84 Moreover, technological advancements in ex vivo tissue culture have enabled more translational studies of drug-tumor interactions and personalized medicine. 85,86 Recently, Shim et al. created the first ex vivo system for crosstalk via secreted factors between lymph node and tumor slices. They designed a multilayer PDMS device with integrated pumps to recirculate supernatant between tumor and LN tissue under physiological interstitial flow conditions. 87 Their findings demonstrated that LN tissues cultured with tumor tissue contained immunosuppressed T-cell populations, as characterized by decreases in IFN-γ secretion, supporting established in vivo findings.
Since cancer cell and immune cell migration employ the same chemokine signaling axes, it is important to understand how migration is modulated by the presence of drugs or immunotherapies. Parlato et al. created a PDMS device with an immune chamber and a tumor chamber separated by confined connecting chambers to demonstrate the mechanisms behind IFN-α-conditioned DC migration toward tumor cells. 88 Their results demonstrated that the CXCR4/CXCL12 axis guides dendritic cells toward apoptotic cancer cells, leading to TAA phagocytosis and cross-presentation to naïve T-cells. This study focused on immune cell migration upon treatment of tumor cells, but the device could straightforwardly be repurposed to examine simultaneous immune cell and cancer cell-directed migration in the presence or absence of drugs.
Overall, this has been the most studied stage in lymphatic metastasis using microfluidics. Precise control over device characteristics such as collagen density (and consequential stiffness), flow profiles, pressure gradients, chemotactic gradients, and channel architectures make microfluidic devices well suited for modeling tumor migration and lymphatic crosstalk. However, to date, no device yet exists to study LEC-induced chemotaxis of cancer cells simultaneously in the presence of immune cells. Most current systems are binary, only comprised of cancer cell lines in culture with LECs, or in the case of the previously described device, only address mechanisms of immune cell trafficking. Modeling the roles of LEC directed migration with lymphatics and cancer cells in tandem will be important for understanding drug interactions to prevent off target effects on immune cells. Moreover, the addition of immune cells will provide insights into symbiotic relationships between immune cells and cancer cells during lymphatic-directed migration. Furthermore, the inclusion of other cell types such as cancer associated fibroblasts (CAFs) will be key in elucidating the complex roles these cell types have during migration. 89 More recent studies incorporating primary samples and ex vivo tissue samples are gaining traction due to their translational relevance. 85,86 It is expected that this trend will continue, specifically with applications in personalized medicine. One could imagine the reapplication of the aforementioned Shim et al. ex vivo device, 87 with patient derived slices of an excised tumor and LN samples to study patient immune tolerization and immunotherapy efficacy.
Transendothelial migration through lymphatic endothelium
A cancer cell's ability to disseminate to other organs is fully dependent upon its ability to enter the lymphatic or hematogenous circulation. 28 Intravasation is the process in which cancer cells invade the basement membrane of the vasculature and then enter the circulation through a process known as transendothelial migration (TEM). There are two modes of TEM: paracellular (through endothelial cell-cell junctions) or transcellular (through endothelial cell bodies). 90 TEM in the context of cancer intravasation or extravasation has been studied in a range of microfluidic devices. 2,3,91,92 However, until recently, these studies were carried out predominantly with BECs in the context of blood vessel intravasation and extravasation.
The Kamm lab created a successful device that set a precedent for the study of endothelial barrier function in the context of tumor metastasis. 93 Their PDMS device consisted of two independent channels where endothelial cells and tumor cells were seeded, separated by a 3D ECM hydrogel region. The permeability of BEC monolayers cocultured with macrophages and the subsequent transmigratory potential of HT1080 fibrosarcoma cells were quantified. This study was instrumental in elucidating the role of macrophage-secreted TNF-α in endothelial monolayer permeability and tumor intravasation potential. Building on this, another group created artificial microvasculature from cylindrical channels lined with endothelial cells to study cancer cell migration and intravasation into perfusable vessels. 94 Another system was used to study extravasation, where a two-chamber PDMS device was split with a porous membrane containing an endothelial monolayer. 95 Cancer cells were perfused through the top chamber, while the bottom chamber contained a reservoir to collect any extravasated cancer cells. While the cancer cells adhered to the endothelium, no transendothelial migration was observed within the short time frame of cell rolling. These are just some of the many current microfluidic platforms used to study cancer cell migration through the blood endothelium. 3,91,92,[95][96][97] Multiple devices have been created to study TEM through vascular endothelium in concert with other metastatic processes. Lee et al. created a "metastasis chip" that modeled both angiogenesis and subsequent intravasation of MDA-MB-231 cells together in one platform. 70 Likewise, Chaw et al. created a multistep device where cells underwent deformation through 10 μm trenches before passing through an endothelial monolayer. 98,99 The latter has applications in studying cell confinement through vessel contraction and subsequent lymphatic extravasation, an understudied phenomenon.
More recently, microfluidic devices have been fabricated for the purpose of investigating the role of the lymphatic endothelium in TEM. Increasing emphasis on the role of lymphatics in initial metastasis along with the innate differences between blood and lymphatic endothelium has motivated such studies. For instance, unlike vascular endothelial monolayers, lymphatic monolayers are characterized by having increased permeability, an incomplete or absent basement membrane, and sparse, overlapping intercellular junctions. 100 The Swartz Lab pioneered one of the first devices of cancer cell transmigration in LECs. They fabricated a five-channel microfluidic chamber that was designed to deliver both luminal and transmural flow to LEC monolayers, as shown in Fig. 1(c). 101 Tumor cells in a 3D extracellular matrix were cultured above a membrane containing the monolayer. This device demonstrated that luminal, interstitial, and transmural flow promoted intravasation of MDA-MB-231 cells. The device was validated by demonstrating that luminal flow augmented LEC expressed CCL21 to drive cancer cell migration. Xiong et al. created a simplified version that used transwell inserts coated with an LEC monolayer to study vectorial migration and intravasation of immune cells and breast cancer cell lines. 102 This more recent model was designed to be more readily accessible and easier to use for other research labs to study TEM. A similar system using transwell inserts was used by Karpinich and Caron to study tumor cell interaction with lymphatic endothelium. 103 Their study demonstrated that the peptide adrenomedullin promotes coupling of cancer cells to LEC gap junctions and facilitates heterocellular communications to induce transendothelial migration.
Microfluidic platforms are excellent tools for studying transendothelial migration of cancer cells, largely due to the ease of visualization via live cell imaging. In addition, precise control of endothelial monolayers more closely mimics endothelial barrier function observed in vivo. Due to known differences between the lymphatic and blood endothelium, there exists a need to understand the different roles they play in relation to cancer cell intravasation. The majority of established hematogenous intravasation and extravasation devices could be readily modified to study LEC barrier function as well. Although straightforward in principle, inherent differences between blood and lymphatic endothelium will require these repurposed devices to be thoroughly screened and calibrated with LECs. If successfully implemented, future studies may examine differences in the transmigratory potential of cancer cells between BECs and LECs within the same device, potentially revealing subpopulations of phenotypes that are prone to lymphatic vs hematogenous TEM. In addition, cancer cell extravasation through lymphatics is poorly understood and few in vitro or in vivo models exist to study this phenomenon. Modifying existing extravasation devices by culturing LECs instead of BECs will allow for modeling cancer cell rolling, arrest within the lymph node, and extravasation from lymphatics into nearby or distal tissues.
Lymphatic circulating tumor cells
Once cancer cells transmigrate through the endothelium and enter the lymphatic circulation, they are subject to a unique physical and chemical environment. 28,31,36,37,54 There are many differences in the rheology and flow dynamics between lymphogenous and hematogenous circulation. With the absence of red blood cells and platelets, the viscosity of lymph and interstitial fluid can be two- to fourfold less than that of blood. 104,105 Initial lymphatic vessels have a low Reynolds number flow within the Stokes flow regime, and in the largest vessels draining into the thoracic duct, the flow remains laminar. 54 This deviates from arterial blood flow, which is mostly laminar but can become turbulent in larger arteries. Overall, the higher shear rates within blood flow make CTC survival less likely compared to survival within lymphatics. Early efforts to model shear effects of lymphatics on metastatic tumor cells included the use of parallel plate flow chambers, as demonstrated in a previous study with colorectal cancer cell lines. 106 In that study, a constant shear stress of 1.2 dyne/cm² was applied to cancer cells while cell proliferation, spreading, and apoptosis were quantified. In two separate studies, our lab modeled dynamic shear on CTCs in human blood using a cone-and-plate viscometer. 107,108 These studies demonstrated that physiological shear stress can sensitize cancer cells to TNF-α related apoptosis inducing ligand (TRAIL) via the activation of mechanosensitive ion channels. 109 Similar to parallel plate flow chambers, a cone-and-plate viscometer is widely available, easy to use, and readily adaptable to model a variety of different physiological shear processes, making it suitable for studying cancer cells in lymphatic transit.
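To relate such shear values to device operating conditions, the wall shear stress in a wide rectangular microchannel can be estimated with the plane-Poiseuille formula τ = 6μQ/(wh²); the sketch below picks an illustrative flow rate that lands in the quoted lymphatic range.

```python
def wall_shear_stress(q_ul_min, w_um, h_um, mu=1e-3):
    """Wall shear stress (dyne/cm^2) in a wide rectangular channel:
    tau = 6*mu*Q/(w*h^2), with mu in Pa*s, Q in uL/min, w and h in um."""
    q = q_ul_min * 1e-9 / 60.0           # flow rate in m^3/s
    w, h = w_um * 1e-6, h_um * 1e-6      # channel width and height in m
    tau_pa = 6 * mu * q / (w * h ** 2)   # shear stress in Pa
    return tau_pa * 10.0                 # 1 Pa = 10 dyne/cm^2

# Illustrative: ~0.4 dyne/cm^2, the quoted baseline lymphatic shear.
print(wall_shear_stress(q_ul_min=4.0, w_um=1000, h_um=100))
```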
The lymphatic system utilizes both extrinsic and intrinsic phasic pumping mechanisms from the surrounding lymphatic muscle to produce the pulsatile flow of lymph from tissue. 41,54 The contractile properties of the lymphatics can create confined architectures for cancer cells, augmenting cell motility, proliferation, and survival via the process of mechanotransduction. 110 Chen et al. created a migration device with choke points ranging from 6 to 30 μm to model metastasis through lymphatic capillaries. 111 This PDMS device, as shown in Fig. 1(d), consisted of two separated serpentine channels, one loaded with cells and the other with chemotactic agents, separated by straight migration channels with constricted choke points of various diameters. Using MDA-MB-231 cells, cell migration through tight choke points was revealed to be dependent on the MAP kinase family member p38γ.
Despite the high concentrations of immune cells surveilling the lymphatics, cancer cells are often able to seed within the sentinel lymph nodes and form micrometastases. 30 Often small and clinically undetectable without a lymph node biopsy, these cancer cells can reside and remain senescent for years while evading immune detection. To mimic this, our group created a PDMS microcavity device that recapitulated the architecture of the lymph node, as shown in Fig. 1(e). 112 The device was fabricated using deep reactive ion etching in silicon, followed by gas expansion molding in PDMS to create spherical microbubbles. 113 Natural killer cells were cocultured with cancer cells, modeling interactions between micrometastases, immune cells, and therapeutics. Although this study was only conducted under static conditions, the same device was used in a separate study under a continuously perfused flow to culture 3D spheroids. 114 Currently, there exist flow devices that can be applied to the study of lymphatic and hemodynamic shear on migratory cancer cells. Both parallel plate flow chambers and cone-and-plate viscometers are easily adaptable and widely available devices to study such phenomena. Despite this, surprisingly few studies exist to study cancer cell transit in lymphatic circulation. This represents an important research opportunity to make these devices more physiologically relevant. This may include culturing LEC monolayers on the inner walls of a device while perfusing intraluminal lymphatic flow to CTCs. Furthermore, the creation of a device that allows for vessel dilation and constriction via smooth muscle, or artificially via transmural pressure, would better replicate the lymphatic behavior experienced in vivo. These physiological conditions may be modeled after similar devices that use whole artery or vein segments ex vivo. 115,116 Meanwhile, there is a need for more devices to model cancer cell seeding and senescence within the tumor draining lymph nodes, specifically devices that incorporate immune cell interactions with cancer cells. There currently are multiple "lymph node on a chip" devices existing outside of the applications of cancer. [117][118][119] Although outside of the scope of this review, incorporating cancer cells within lymph node on chip microenvironments may elucidate the mechanisms behind cancer cell seeding and survival.
CONCLUSIONS AND FUTURE PERSPECTIVES
Despite their great potential and versatility, microfluidic devices have not been fully harnessed to study the intricacies of lymphogenous metastasis. While there is an abundance of microfluidic devices studying metastasis in the context of the bloodstream, few devices exist that incorporate lymphatics as part of or the focus of their model. This is surprising since the majority of all cancers metastasize via lymphatics, and the mechanisms of lymphogenous metastasis remain poorly understood compared to those of hematogenous metastasis. 30 Given that the infrastructure of the aforementioned devices is widely adaptable, we propose that progress within the field will mostly come from new applications of previously developed systems. A germane initial step would be to recreate previous studies, such as those modeling angiogenesis or TEM into blood vessels, but replacing blood microvasculature with lymphatic microvasculature. Even more ideal would be the incorporation of blood vessels and lymphatics within the same device, similar to that described by Sato et al. 66 This could be instrumental in elucidating how cells differentiate between lymphogenous vs hematogenous metastasis, while characterizing subpopulations that are predisposed to one mode over another. Moreover, combining multiple stages within one device, as done by Lee et al., who studied both angiogenesis and intravasation, 70 will be useful to determine how different metastatic steps affect one another. Fabrication of these all-in-one "lymphatic metastasis on a chip" devices will advance the field toward a device capable of modeling the entire metastatic cascade within one platform.
As previously mentioned, there may be difficulties with replacing BECs with LECs in existing devices. Although both are endothelial cells and carry out similar functions, they have distinct transcriptional profiles, which make them unique in culture. 120 For example, BECs appear to be more reliant on ECM interactions for proper functionality, indicating that ECM components in existing devices with endothelial monolayers may need to be tailored to suit LEC culture. Additionally, lymphatic endothelium is known to have relatively looser interendothelial junctions, which could pose challenges for culturing uniform monolayers within devices. 31,32 Incorporating LECs into existing devices will require careful observation and calibration to ensure physiological relevance, especially when adapting features such as flow profiles, cell densities, and ECM concentrations.
Immune cells play complex roles in relation to cancer development, and as such, numerous microfluidic devices exist to study these interactions. 2,117,119,[121][122][123] Surprisingly, these devices tend to look exclusively at cancer-immune cell interactions strictly within the tumor microenvironment, not in relation to their roles during metastasis. Cancer cells trafficking toward and into the lymph nodes are likely to interact in some capacity with both adaptive and innate immune cells, further warranting their inclusion within these microfluidic systems. Investigating the roles of the immune system will be key to not only understanding how cancer cells can leverage these interactions but also for exploiting cancer cell weaknesses with immunotherapies.
For more translational studies to exist, microfluidic devices must become more user-friendly and compatible with use in the clinical setting. This includes the integration of automated image processing, routine sample processing, and minimization of complex system components to allow the analysis to be carried out to completion within hospital laboratories. Meanwhile, reproducibility of such devices will be necessary for widespread implementation. The first step toward clinical microfluidic devices would be validating drug toxicology and biological phenomena observed in vivo and in humans. This includes examining drug interactions of FDA-approved compounds that have extensive clinical data and comparing those same interactions within relevant microfluidic devices for validation purposes. With regard to lymphatic devices, testing an FDA-approved compound such as sorafenib, which has been shown to interfere with LEC-expressed VEGFR-2 and VEGFR-3 and has been approved for the treatment of metastatic renal cell carcinoma, 124 within a lymphangiogenesis device would be applicable for validating modeling capabilities. In addition, testing well-characterized checkpoint inhibitors such as the PD-L1 targeting antibody atezolizumab to demonstrate blockage of checkpoint signaling by LECs would provide insights into the mechanisms behind an LEC-targeted therapy. 125 From a research perspective, new platforms to promote collaborations between biologists and engineers are warranted. This framework will in turn promote the fabrication of devices designed to answer pressing questions in the field of biology, rather than attempting to fit biological applications within preexisting, incompatible devices. A 2014 study by Sackmann et al. estimated that only 6% of all microfluidic devices are published in biology and medicine journals. 126 Interdisciplinary work within this field will be crucial for improved biological modeling and drug discovery. | 8,135.4 | 2020-01-01T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Cross-domain decision method based on instance transfer and model transfer for fault diagnosis
As the digitalization of industrial assets advances, data-driven fault diagnosis has increasingly garnered attention. However, models often underperform due to the lack of sufficient training data and the complexity of operational environments. In scenarios where a similar task with abundant data exists in the source domain, leveraging the knowledge embedded in this source data can be key to constructing an effective diagnostic model for the target domain. Following this idea, this study introduces a novel cross-domain decision method, weighted structure expansion and reduction (WSER), for fault diagnosis. The method first extracts features from the time, frequency, and time-frequency domains. It then estimates data weights following the idea of instance transfer to mitigate the dissimilarity between the source and target data distributions. Based on these estimated weights, feature selection is further performed. The extracted source knowledge is subsequently transferred to the target domain using the proposed WSER method. The proposed method is applied to two public engineering fault datasets, and the results demonstrate its effectiveness in increasing the accuracy of fault diagnosis.
Introduction
In recent years, the advent of increasingly complex systems in fields such as manufacturing, aviation, and power generation has given rise to unprecedented challenges in fault diagnosis. These systems, characterized by their intricate networks of interconnected components and subsystems, require sophisticated diagnostic techniques to ensure their reliable and safe operation. Furthermore, faults within these systems can have far-reaching implications, causing performance degradation, system failure, and even catastrophic accidents. 1 Therefore, developing effective and robust methods of fault diagnosis is of paramount importance.
Existing fault diagnosis methods can be classified into model-based, signal-based, and knowledge-based methods. 2 Knowledge-based methods, also known as data-driven methods, extract the underlying knowledge about a system from collected historical data, without previously known models or signal patterns. Such methods can be effective for complex systems where explicit system models or signal symptoms are hard to establish. Currently, machine learning methods have been widely applied in data-driven fault diagnosis, 3 such as the support vector machine (SVM), k-nearest neighbor (KNN), and neural network (NN). 1,4,5 These machine learning methods can help establish well-performing models given a large amount of data.
However, in some cases there may not be enough data for model training, such as situations where a complex system is newly deployed or used infrequently, and data collection would take too much time. In such cases, a model trained with insufficient data may not perform well on the target fault diagnosis task. In addition, machine learning methods work well under a general assumption: the training data and the testing data should be drawn from the same distribution. 6 Even if a large amount of data collected from a similar system exists, a model trained on those data can still perform poorly on a target task with a different data distribution.
To address the problems mentioned above, more and more attention has been paid to transfer learning. Transfer learning methods learn and transfer shared knowledge from a similar domain (the source domain) to the current domain (the target domain).7,8 Transfer learning methods have been widely applied to fault diagnosis. For example, Zhao et al.9 proposed a transfer learning method based on a bidirectional gated recurrent unit and manifold embedded distribution alignment to tackle fault diagnosis with limited labeled data; Wu et al.10 developed an adaptive deep transfer learning method for bearing fault diagnosis, constructed based on instance transfer and feature transfer; Liu et al.11 proposed a model-transfer-based method for fault diagnosis in building chillers; and Yang et al.12 proposed a feature-based transfer learning neural network to identify the health conditions of real machines using diagnosis knowledge obtained from experimental machines. These studies demonstrate the effectiveness of transfer learning in tackling fault diagnosis problems with few or even no labeled data, and transfer learning is thus studied in this paper.
In addition, many machine learning methods are considered black boxes, whose decision-generating process can be difficult for decision makers to understand. The resulting model may not be suitable for high-stakes decision making.13 In high-stakes decisions, there are often considerations outside the collected data that need to be combined with a risk calculation, and it may be hard to manually calibrate how much the additional information adjusts the estimated risk with a black-box model. For example, in fault diagnosis there can be conditions that are not easily collected as data but are very useful for the diagnosis of specific systems or components. Moreover, it can be unclear what factors are considered in the construction of the model, which can lead to risky or unreliable results. For example, the chatbot Tay, designed in 2016 to continuously learn and improve through interactions, became a "troubled girl" embodying gender discrimination and racial prejudice within less than 24 h of engaging with humans. The black-box nature of machine learning algorithms, in which the decision-making process becomes opaque and difficult to trace, exacerbates the potential for unintended consequences.
Building the diagnosis model with an explainable method, such as a decision tree, can thus be essential for complex systems requiring high reliability. Among existing machine learning methods, decision trees, logistic regression, and linear regression are more explainable than others from the model perspective. Compared with the linear methods, a decision tree can capture nonlinear relationships and extract more complicated patterns.15,16 Among transfer methods for decision trees, the STRUT method keeps the structure of the source decision tree but adjusts its threshold values, while the SER and TDT methods use the labeled target data to adjust the structure of the decision tree trained on the source data, which can be more flexible for knowledge transfer. Compared with the STRUT method, the SER and TDT methods can be more flexible in handling domain dissimilarity. Unlike the TDT method, the SER method follows the expansion of the source decision tree with a reduction operation that further improves the tree structure. This study therefore focuses on the SER method.
Existing transfer learning methods can be classified into four categories according to the transferred objects: instance, feature, model, and relationship.17 SER is a model-based transfer learning method, but it differs from most existing transfer learning methods. In instance-based transfer learning, the shared knowledge is assumed to be contained in the source data, and data weights are estimated or data are selected to help adapt the marginal distributions. For example, Huang et al.18 proposed Kernel Mean Matching (KMM) to match the means of the source and target data in a Reproducing Kernel Hilbert Space, and Sugiyama et al.19 proposed the Kullback-Leibler Importance Estimation Procedure (KLIEP) to minimize the Kullback-Leibler (KL) divergence between the source and target data. Feature-based methods focus on transforming one feature representation to align with the other, or transforming both feature representations to align them with each other. For example, Daumé20 proposed the Feature Augmentation Method to transform the original features by feature replication, Pan et al.21 proposed Transfer Component Analysis (TCA) to adapt the marginal distribution by minimizing the distribution difference measured by the Maximum Mean Discrepancy, and Fernando et al.22 proposed Subspace Alignment (SA) to transform the source subspace obtained with Principal Component Analysis into the target one. Model-based methods assume that knowledge can be shared through the model or its parameters. For example, Duan et al.23 proposed the Domain Adaptation Machine framework to construct a robust classifier from base classifiers pre-obtained on multiple source domains, Zhuang et al.24 proposed the Matrix Tri-Factorization Based Classification Framework to characterize, via parameters, the connections among document classes and the concepts conveyed by word clusters, and Gao et al.25 proposed the ensemble-based Locally Weighted Ensemble framework to combine various learners generated with different source domains or learning algorithms. Relation-based transfer learning approaches focus on transferring the learned logical relationships or rules of the source to the target domain. For example, Wang et al.26 proposed a relational knowledge transfer approach to extract relational knowledge from the data manifold structure and transfer it backwards to help generate virtual data for unseen categories, and Qin et al.27 proposed a relation-based transductive transfer learning method in which time series are clustered using a similarity measured with the relational knowledge.
Compared with the instance-based methods, SER can better exploit deep knowledge through the decision tree model, which helps avoid problems caused by high dissimilarity between the marginal distributions or high inconsistency between the label spaces. In addition, most feature-, model-, and relation-based transfer learning methods can act as black boxes for certain tasks, whereas SER, constructed on decision trees, offers better interpretability, making its results more reliable for diagnostic problems of complex systems. However, the original SER focuses only on the transferability between the tree structures of the source and target domains; the marginal distribution, which could further facilitate the knowledge transfer, is not considered.
In this work, a cross-domain decision method is proposed based on the improved SER method. In the proposed method, features are first extracted from the time domain, the frequency domain, and the time-frequency domain. The data weights are then estimated following the idea of instance transfer, and the extracted features are selected based on the estimated data weights. The knowledge contained in the source decision tree model is further transferred using the proposed weighted SER (WSER) method by considering the estimated data weights.
The main contributions of this paper include:
(1) A cross-domain decision method, WSER, is introduced based on decision trees with instance transfer and model transfer.
(2) The weights of the labeled source and target data are calculated following the idea of instance transfer.
(3) A new feature selection algorithm is developed to prioritize and select features with the estimated data weights.
(4) The effectiveness of the proposed method is demonstrated through its application on two engineering fault datasets, showcasing its practical utility.
The remainder of this paper is organized as follows. Section 2 briefly reviews the preliminaries of the related algorithms. Section 3 elaborates the details of the proposed method. The proposed method is further verified using two public engineering fault datasets in Section 4. Finally, the paper is concluded in Section 5.
Feature extraction
Fast Fourier transformation. The Fast Fourier Transformation (FFT) is an algorithm used to efficiently compute the discrete Fourier transform (DFT) of a sequence or time-domain signal.28-31 The main advantage of the FFT algorithm is its computational efficiency, making it possible to perform high-speed spectral analysis on large sets of data in real-time or near-real-time applications. The algorithm exploits the symmetry and periodicity properties of the DFT to reduce the number of computations required. It divides the DFT calculation into smaller subproblems and recursively combines the results, yielding a significant reduction in computational complexity.
Based on the FFT algorithm,30 the spectrum s(k) of a given signal x_n is defined as

s(k) = \sum_{n=1}^{N} x_n e^{-i 2\pi k n / N}, \quad k = 1, \ldots, K,

where K denotes the number of spectrum lines, N is the number of time-domain samples, and K equals N in the FFT algorithm. The frequency value of the k-th spectrum line can be calculated as

f_k = k F_S / N,

where F_S denotes the sampling frequency. Since N/2 of the frequency points can be derived from the remaining ones by symmetry, the N/2 redundant points can be discarded to improve computational efficiency. By comparing the vibration signals of fault conditions with those of the healthy condition through the FFT algorithm, fault diagnosis can be better conducted using specific frequency components.
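As a brief illustration of this step, the following sketch computes the one-sided FFT spectrum of a vibration signal with NumPy, discarding the redundant half of the symmetric output; the 157 Hz test tone is an arbitrary stand-in for a fault frequency, not a value from the paper.

```python
import numpy as np

def fft_spectrum(x, fs):
    """One-sided amplitude spectrum of a 1-D vibration signal.

    np.fft.rfft keeps only the non-redundant half of the symmetric
    FFT output; rfftfreq gives the frequency of each spectrum line.
    """
    s = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return s, f

# Synthetic test: a 157 Hz tone plus noise, sampled at 12 kHz
# (the sampling frequency used for the CWRU data later in the paper).
fs = 12_000
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 157.0 * t) + 0.1 * np.random.randn(len(t))
s, f = fft_spectrum(x, fs)
print(f[1 + np.argmax(s[1:])])  # ~157.0, the dominant spectral line
```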
Wavelet packet transform. The Wavelet Packet Transform (WPT) algorithm is a signal processing technique that extends the capabilities of wavelet analysis by providing a more detailed and flexible decomposition of signals into subbands.32,33 It is a multi-resolution analysis tool that allows for a more comprehensive exploration of signal features in both the time and frequency domains.
Unlike traditional wavelet analysis, which decomposes signals into a binary tree structure of low-pass and high-pass subbands, the WPT algorithm decomposes signals into multiple subbands at each level, allowing for a richer representation of signal components. This decomposition can be performed recursively to achieve greater granularity and capture fine-scale details in the signal.34 The WPT algorithm provides a flexible framework for signal analysis, offering the ability to select and analyze specific subbands of interest. The WPT algorithm is thus used in this paper to extract the time-frequency-domain features.
Given a wavelet packet function Ψ and three integer indices j, n, and g = 0, 1, ..., 2^j − 1, which are the scale (frequency localization) parameter, the translation (time localization) parameter,35 and the modulation or oscillation parameter, respectively, Ψ can be obtained as

\Psi_{j,n}^{g}(t) = 2^{-j/2} \Psi^{g}(2^{-j} t - n).

The wavelet packet coefficients c_{j,n}^{g} of a signal x are computed as the inner product between the signal and the corresponding wavelet packet function,

c_{j,n}^{g} = \langle x, \Psi_{j,n}^{g} \rangle = \int x(t) \Psi_{j,n}^{g}(t) \, dt.

The wavelet packet node energy E_j(g) is defined as

E_j(g) = \sum_{n} \left( c_{j,n}^{g} \right)^2,

and the obtained E_j(g) can represent the characteristics of vibration signals in both the time domain and the frequency domain.
Feature selection
Feature selection is a process of selecting a subset of relevant features or variables from a larger set of available features, which can be an important step in the preprocessing stage to improve model performance, reduce overfitting, and enhance interpretability.
The objective of feature selection is to identify and retain the features with the highest information value and discriminatory power for the target task, while discarding irrelevant or redundant features that may introduce noise or add unnecessary complexity to the model.
Feature selection algorithms can be broadly categorized into three types: filter, wrapper, and embedded algorithms.36 Filter algorithms rank features based on statistical properties or relevance to the target variable, wrapper algorithms evaluate feature subsets using a specific learning algorithm, and embedded algorithms incorporate feature selection into the learning algorithm itself.
In this paper, the recursive feature elimination (RFE) algorithm is mainly considered. As a wrapper algorithm, RFE can consider the interaction and combination effects of features, which can lead to more accurate feature selection.37,38 The RFE algorithm incorporates model performance into the feature selection process, ensuring that the selected features are directly related to model performance. In addition, RFE is a flexible algorithm that can be used with various model construction methods. The application of other feature selection algorithms in different scenarios is not extensively discussed in this paper.
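For reference, the unweighted decision-tree-wrapped RFE described here maps directly onto scikit-learn; the synthetic data below merely stand in for the 44 extracted vibration features.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 44 extracted vibration features.
X, y = make_classification(n_samples=500, n_features=44,
                           n_informative=10, random_state=0)

# Wrapper-style selection: refit a tree, drop the weakest feature,
# and repeat until half of the features remain (the paper's default).
selector = RFE(estimator=DecisionTreeClassifier(random_state=0),
               n_features_to_select=22, step=1)
selector.fit(X, y)
print(selector.support_)   # boolean mask of the selected features
print(selector.ranking_)   # 1 = selected; larger = eliminated earlier
```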
Methods
In this section, a cross-domain decision method aimed at fault diagnosis is proposed by considering instance and model transfer. Initially, features are extracted from the time series data, and subsequently the data weights are heuristically estimated. Feature selection then proceeds based on these estimated weights. The source knowledge related to fault diagnosis is acquired from the source domain via a decision tree and subsequently transferred to the target domain employing the WSER method.
Framework
The operational physical parameters of machinery serve as references that aid in abnormality detection and diagnosis, and high-sensitivity accelerometer-generated vibration signals are primarily utilized for this purpose.39,40 This paper thus focuses mainly on vibration signals collected as time series data.
Given two fault diagnosis problems from the source domain D_S and the target domain D_T, the vibration signals x of the machine in D_S and D_T are recorded at a fixed time step, and the operation states y of the machine are monitored along with x. The task is to construct a fault diagnosis model that determines the operation state of the machine from the vibration signals x in the target domain D_T. While abundant historical signal data x_S are collected with operation states y_S in D_S, only a few data x_L are recorded with y_L in D_T. A model constructed with only the few labeled signal data D_L = {x_L, y_L} may not perform well on the target testing data D_U. In such cases, the sufficient signal data D_S = {x_S, y_S} in D_S can help learn the patterns of fault diagnosis, which may facilitate the model construction in the target domain D_T and improve the model performance on target data. Following this idea, the process of the proposed method is depicted in Figure 1.
As stated in Figure 1, to help extract the fault patterns from the time-series signal data, features are first extracted from the data x and operation states y, including time-domain, frequency-domain, and time-frequency-domain features. Then, to improve the distribution similarity between the source and target data, the data weights are estimated following the idea of instance transfer. The extracted features are further selected using the improved decision-tree-based RFE (DT+-RFE) algorithm with the estimated data weights. The fault patterns learned from D_S using the decision tree are then transferred to D_T and further optimized with D_L using the proposed WSER method.
Feature extraction
The features derived from vibration signal data encapsulate the health status information of machine components, which is crucial for fault diagnosis and prognosis.41 Signal processing techniques across multiple domains, i.e., time, frequency, and time-frequency, have been applied to the collected vibration data to extract a variety of original features.34,42 In this section, various features are extracted from the time, frequency, and time-frequency domains, which are further used to help construct the diagnosis model.
Time-domain features.Time-domain analysis, a straightforward technique typically used in the initial stages of mechanical fault diagnosis, provides amplitude information of the signal in relation to time. 41Statistical attributes are often involved in time-domain features, which are particularly sensitive to impulse faults. 33The 16 dimensional features are calculated in this paper, such as mean, absolute mean, variance, and so on, which are defined as in Table 1.
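Since the definitions in Table 1 did not survive extraction, the sketch below computes a handful of the usual time-domain statistics under common textbook definitions; the paper's exact formulas may differ.

```python
import numpy as np

def time_domain_features(x):
    """A handful of common time-domain statistics (textbook variants;
    the paper's exact Table 1 definitions may differ)."""
    mu = np.mean(x)
    abs_x = np.abs(x)
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "mean": mu,
        "abs_mean": np.mean(abs_x),              # mean of |x_n|
        "variance": np.var(x),
        "maximum": np.max(x),
        "rms": rms,
        "sra": np.mean(np.sqrt(abs_x)) ** 2,     # square root of the amplitude
        "kurtosis": np.mean((x - mu) ** 4) / np.var(x) ** 2,
        "crest_factor": np.max(abs_x) / rms,
    }
```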
Frequency-domain features. Frequency-domain approaches typically entail an analysis of vibration signals to identify characteristic frequencies associated with the rotation of bearings.30 The FFT is applied to the time-domain vibration signals to extract the frequency-domain features, which can provide information on the defect frequencies of the components.43 Twelve features are calculated from the statistics of the spectrum, such as the mean, variance, and maximum,39 as defined in Table 2.
Time-frequency-domain features. As stated above, time-domain and frequency-domain features are easily extracted and commonly used in fault diagnosis. However, time and frequency information cannot be considered simultaneously in the features above. Time-frequency-domain analysis is thus further utilized to extract comprehensive features, which may be more effective in fault diagnosis.44 Many time-frequency analysis technologies have been developed, including the short-time Fourier transform (STFT), wavelet packet transform (WPT), and Hilbert-Huang transform (HHT) algorithms.33,45,46 In this paper, the WPT algorithm is adopted to extract the time-frequency-domain features from accelerometer sensor signals due to its flexible decomposition, excellent time-frequency localization, computational efficiency, and wide applicability.34,40,47 The vibration signals are first decomposed into four scales using the WPT algorithm; the procedure can be found in Rauber et al.34 The energy values of the wavelet packet nodes are then calculated at the 4th level, deriving 16 time-frequency-domain features.33 This analysis considers a 1-D time-domain vibration signal comprising N samples.
In the WPT algorithm, with a tree depth of j, 2^j final leaves W_{j,0}, ..., W_{j,2^j−1} are generated, each with approximately N/2^j wavelet coefficients. The features derived from the final 2^j leaf nodes of the decomposition tree represent the respective proportions of the energy contained in each leaf. Let c_{j,n}^{g}, n = 0, ..., N/2^j − 1, be the N/2^j wavelet coefficients of leaf node g at tree depth j, where g = 0, ..., 2^j − 1. The energy of the g-th node34 is calculated as

E_g = \sum_{n=0}^{N/2^j - 1} \left( c_{j,n}^{g} \right)^2 .

Then, the g-th wavelet packet feature is the proportion of total energy contained in that leaf,

f_g = E_g \Big/ \sum_{g'=0}^{2^j - 1} E_{g'} .
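One compact way to obtain such level-4 energy-proportion features is via PyWavelets, as sketched below; the Daubechies-4 mother wavelet is an assumption, since the surviving text does not name one.

```python
import numpy as np
import pywt  # PyWavelets

def wpt_energy_features(x, wavelet="db4", level=4):
    """16 energy-proportion features from a level-4 wavelet packet
    decomposition ('db4' is an assumed mother wavelet)."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    leaves = wp.get_level(level, order="freq")   # the 2**level leaf nodes
    E = np.array([np.sum(np.square(node.data)) for node in leaves])
    return E / E.sum()                           # f_g = E_g / sum of energies
```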
Weight estimation
After the data D_S = {x_S, y_S} and D_L = {x_L, y_L} of extracted features are obtained, the weights of the data are further estimated to help increase the distribution similarity between the source and target data. In this section, the weight estimation is conducted in the manner of instance transfer, following the idea of the Multiclass TrAdaBoost (MC-TAB) method.48 Compared with other weight estimation methods, the developed method makes effective use of the labeled target data. In addition, besides the source data weights, the weights of the labeled target data are also estimated in this process, which helps further identify the data and information that are representative of the target domain.
Given the source data D_S and the target data D_L with labels, the weights of x_S and x_L are first initialized. In the absence of supplementary information for setting the initial data weights, they can be set equal, for example, to 1.
The weights are then normalized as

p_n^r = w_n^r \Big/ \sum_{m} w_m^r .

A model h_r is then trained with the decision tree method on the labeled data D_S and D_L with the normalized weights p^r, where the decision tree is used here to keep consistency with the transfer model in the following steps of the proposed method.

The error of the derived model on D_L can then be calculated as

\epsilon_r = \sum_{x_n \in D_L} p_n^r \, I\!\left( h_r(x_n) \neq y_n \right) \Big/ \sum_{x_n \in D_L} p_n^r ,

where I(·) denotes the indicator function. The weight-updating parameters can be further calculated as

\alpha_r = \ln \frac{1 - \epsilon_r}{\epsilon_r} + \ln (K - 1), \qquad (10)

a = \frac{1}{2} \ln \left( 1 + \sqrt{2 \ln N_S / R} \right), \qquad (11)

where K is the number of classes, N_S is the number of source samples, and R is the maximum iteration number. In equation (10), the first part is correlated with the error rate, which helps adjust the weights by reflecting sample importance, and the second part helps the algorithm fit multi-class cases. The a in equation (11) adjusts the source weights at a fixed rate. Further details can be found in Hastie's work.49

The weights are then updated as

w_n^{r+1} = w_n^r \, e^{\alpha_r I(h_r(x_n) \neq y_n)} \ \text{for target samples}, \qquad w_n^{r+1} = w_n^r \, e^{-a \, I(h_r(x_n) \neq y_n)} \ \text{for source samples},

where w_n^r denotes the weight at the r-th iteration and e(·) denotes the exponential function with base e, which helps update the weights smoothly: misclassified target samples are up-weighted, while source samples the model fails to fit are down-weighted.
To avoid overfitting the labeled target data, the maximum iteration number R is set to 20 in this paper.
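A minimal sketch of this loop, assuming a SAMME-style target update and a TrAdaBoost-style fixed source discount as read from the description above (the exact update constants in the source may differ):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def estimate_weights(Xs, ys, Xl, yl, K, R=20):
    """Joint source/target weight estimation in the MC-TAB spirit:
    misclassified target samples are up-weighted (SAMME-style alpha),
    misclassified source samples are down-weighted at the fixed rate a."""
    ns = len(ys)
    X = np.vstack([Xs, Xl])
    y = np.concatenate([ys, yl])
    w = np.ones(len(y))                                    # equal initial weights
    a = 0.5 * np.log(1.0 + np.sqrt(2.0 * np.log(ns) / R))  # fixed source discount
    for _ in range(R):
        p = w / w.sum()                                    # normalization
        h = DecisionTreeClassifier().fit(X, y, sample_weight=p)
        miss = (h.predict(X) != y).astype(float)
        eps = np.clip(p[ns:] @ miss[ns:] / p[ns:].sum(), 1e-12, 1 - 1e-12)
        alpha = np.log((1 - eps) / eps) + np.log(K - 1)    # multi-class term
        w[ns:] *= np.exp(alpha * miss[ns:])                # target: up-weight errors
        w[:ns] *= np.exp(-a * miss[:ns])                   # source: down-weight errors
    return w[:ns], w[ns:]
```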
Feature selection
As stated above, 44 features are extracted from the data. However, not all features are relevant to model construction, and irrelevant or redundant features may lead to model overfitting or high complexity.50 Feature selection is thus conducted to find the most effective features, which also assists in reducing data dimensionality and complexity.36 To obtain the relevant feature subset from the 44 features, the RFE algorithm is applied in this paper, with decision trees serving as the base classifier wrapped by the RFE algorithm. Compared with other classification methods, the decision tree has better interpretability, and it is also used for model construction and knowledge transfer in the source and target domains in the subsequent sections. The decision tree is thus chosen as the base classifier for feature selection to keep consistency. The DT+-RFE algorithm is further developed in this paper based on DT-RFE, where the data weights are considered in the process of feature selection.
The DT+-RFE algorithm initially considers all features, progressively eliminating those deemed irrelevant until only pertinent features remain, as determined by assigned scores. Using the data weights, the algorithm outputs an array of positive integers representing the ranking of each feature: a lower score denotes a higher feature ranking, and conversely, a higher score indicates a lower ranking. The DT+-RFE algorithm eliminates low-ranked or irrelevant features, selecting only those with high rankings.
As stated above, source data with high weights are more similar to the target data, and the DT+-RFE algorithm constructed using the source data weights can thus better find the features that are most relevant to the tasks in the target domain.
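The paper gives no pseudocode for DT+-RFE; the following is one plausible reading, a manual recursive elimination in which each tree is fitted with the estimated sample weights and the feature with the lowest importance is dropped, using the ranking convention described above (lower score = higher rank).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def weighted_rfe(X, y, sample_weight, n_select):
    """Recursive elimination with weighted trees: at each step, refit
    and drop the remaining feature with the lowest importance.
    Returns the selected feature indices and a score per feature
    (1 = selected; higher scores were eliminated earlier)."""
    remaining = list(range(X.shape[1]))
    score = np.ones(X.shape[1], dtype=int)
    while len(remaining) > n_select:
        tree = DecisionTreeClassifier(random_state=0)
        tree.fit(X[:, remaining], y, sample_weight=sample_weight)
        worst = int(np.argmin(tree.feature_importances_))
        score[remaining[worst]] = len(remaining)  # earlier removal = higher score
        remaining.pop(worst)
    return np.array(remaining), score
```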
Model construction based on instance and model transfer
When the feature-space data sets D_S = {x_S, y_S} and D_L = {x_L, y_L} are obtained after the features are extracted and selected, knowledge can be learned from the data with specific methods. The decision tree method is selected in this paper to construct the model, as it retains high interpretability. In addition, the decision tree method has better fitting power for nonlinear problems than linear models. As stated above, when the labeled target data D_L are insufficient, the decision model constructed with the decision tree method may perform poorly in the target domain D_T. In such cases, the abundant source data D_S can help extract knowledge that is applicable to the target data.
The source model is first trained using the source data D_S and the data weights w_S. In this paper, the decision tree is constructed using the CART algorithm, where the Gini index is used to measure the reduction in class impurity from partitioning the feature space, as shown in equations (13) and (14):51

Gini(D) = 1 - \sum_{j=1}^{K} p_j^2, \qquad (13)

Gini_{split}(D) = \sum_{i} \frac{|D_i|}{|D|} \, Gini(D_i), \qquad (14)

where p_j denotes the relative frequency of class j, that is, the number of samples of class j divided by the total sample number, and D_i denotes the subsets produced by a candidate split. After the source model M_S is obtained, the knowledge of the source domain D_S is contained in M_S. To transfer the knowledge from D_S into D_T, one important problem is how the knowledge can be transferred.
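For concreteness, weighted versions of equations (13) and (14) can be computed as follows; the weighting is this sketch's assumption, consistent with the use of data weights elsewhere in the method.

```python
import numpy as np

def gini(y, w):
    """Weighted Gini impurity, 1 - sum_j p_j**2, as in equation (13)."""
    total = w.sum()
    p = np.array([w[y == c].sum() / total for c in np.unique(y)])
    return 1.0 - np.sum(p ** 2)

def gini_split(y_left, w_left, y_right, w_right):
    """Impurity after a candidate split, the weighted average of the
    children's impurities, as in equation (14); the split minimizing
    this value maximizes the impurity reduction."""
    total = w_left.sum() + w_right.sum()
    return (w_left.sum() / total) * gini(y_left, w_left) \
         + (w_right.sum() / total) * gini(y_right, w_right)
```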
Similar to the SER method, WSER applies two transformations using the limited labeled target data D_L, namely, expansion and reduction. In addition, the weights of D_L generated by the estimation method above are considered in WSER, which improves the effectiveness of the expansion and reduction.
Given a leaf node v of the source model M_S, WSER computes D_L^v, the subset of the target data D_L that reaches the node v. Subsequently, each leaf v is expanded to a full tree grown on D_L^v with the CART algorithm.52 The reduction is then conducted based on the leaf error and the subtree error, defined as the empirical error on v with respect to D_L^v if v were pruned into a leaf, and the empirical error of the subtree whose root is v.16 The weighted leaf error can be calculated as

err_{leaf}(v) = \sum_n w_{n,v}^L \, I\!\left( y_{n,v}^L \neq \hat{y}_v \right) \Big/ \sum_n w_{n,v}^L ,

where w_{n,v}^L and y_{n,v}^L denote the weight and the label of the n-th element in D_L^v, and \hat{y}_v denotes the majority class of the leaf v. The subtree error is obtained by aggregating the errors of all leaves v_j, each weighted by the fraction of D_L^v attributed to it,15

err_{sub}(v) = \sum_{v_j} \left( \sum_n w_{n,v_j}^L \Big/ \sum_n w_{n,v}^L \right) err_{leaf}(v_j),

where y_{n,v_j}^L denotes the label of the n-th element in D_L^{v_j} and \hat{y}_{v_j} denotes the majority class of the leaf v_j. If the leaf error on the node v is smaller than the subtree error, the subtree rooted at v is cut.
The WSER algorithm is summarized as follows.
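The algorithm listing itself did not survive extraction, so the following is a hedged sketch of the expansion-reduction recursion as described above. The simplified Node representation and the grow_fn hook (any weighted CART routine returning such a Node tree) are assumptions of this sketch, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class Node:
    feature: Optional[int] = None   # split feature index (None for a leaf)
    threshold: float = 0.0
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    label: Optional[int] = None     # majority class stored at leaves

def leaf_error(y, w, label):
    """Weighted error if the node were collapsed to a single leaf."""
    return float(w[y != label].sum()) / max(w.sum(), 1e-12)

def subtree_error(node, X, y, w):
    """Weighted error aggregated over the leaves of the subtree."""
    if node.feature is None:
        return leaf_error(y, w, node.label)
    m = X[:, node.feature] <= node.threshold
    total = max(w.sum(), 1e-12)
    return (w[m].sum() / total) * subtree_error(node.left, X[m], y[m], w[m]) + \
           (w[~m].sum() / total) * subtree_error(node.right, X[~m], y[~m], w[~m])

def wser(node, X, y, w, grow_fn):
    """Expansion: regrow each source leaf on the weighted target data
    reaching it. Reduction: prune a subtree whenever collapsing it to
    a leaf is no worse under the weighted errors."""
    if node.feature is None:
        return grow_fn(X, y, w) if len(y) else node      # expansion
    m = X[:, node.feature] <= node.threshold
    node.left = wser(node.left, X[m], y[m], w[m], grow_fn)
    node.right = wser(node.right, X[~m], y[~m], w[~m], grow_fn)
    if len(y):                                           # reduction
        classes = np.unique(y)
        maj = classes[np.argmax([w[y == c].sum() for c in classes])]
        if leaf_error(y, w, maj) <= subtree_error(node, X, y, w):
            return Node(label=int(maj))
    return node
```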
Experiments
To validate the effectiveness of the proposed fault diagnosis method, it is applied to two public engineering fault datasets: the bearing data provided by the Bearing Data Center of Case Western Reserve University (CWRU) and the gearbox dataset from Southeast University (SEU). Comparative experiments of the proposed method against machine learning and transfer-learning-based methods are performed to underscore its effectiveness.
Table 1. Definitions of the 16 time-domain features (mean, absolute mean, variance, maximum, square root of the amplitude, kurtosis factor, etc.; s denotes the variance).

Table 2. Definitions of the 12 frequency-domain features (mean frequency, etc.).
Dataset
CWRU dataset. The CWRU dataset, widely recognized as a standard rolling bearing fault diagnosis dataset, is collected from a test rig encompassing a driving motor, a torque transducer, and a load motor. Test bearings 6205-2RS JEM SKF and 6203-2RS JEM SKF are mounted at the drive end and the fan end of the driving motor, respectively, to support the motor shaft.53 Bearing vibration data are collected by acceleration sensors mounted at the ends of the driving motor under various operational loads and bearing conditions.33 The CWRU bearing data have been used extensively in prior research and provide an effective validation for bearing fault diagnosis.1,53,54 The vibration signals collected at a sampling frequency of 12 kHz are adopted in this paper. Four kinds of bearing health conditions are identified in the data: normal (N), inner race fault (IR), outer race fault (OR), and roller fault (RF). Three fault diameters, 0.007, 0.014, and 0.021 in., are contained in the three types of faults. All bearings are re-fitted onto the testing rig under four distinct operational conditions, that is, constant speeds at motor loads of 0, 1, 2, and 3 horsepower (HP). These loads correspond to four motor speeds of 1797, 1772, 1750, and 1730 rpm, respectively.
To extract samples from the signal data, the sample length is set to 1024, meaning each sample contains 1024 signal points. 9000 samples are randomly extracted from the signal data under each operating condition. The details of the preprocessed data samples are given in Table 3.
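A sketch of this segmentation step; whether the original windows were allowed to overlap is not stated, so random start points are assumed here.

```python
import numpy as np

def extract_samples(signal, n_samples=9000, length=1024, seed=0):
    """Cut n_samples random fixed-length windows out of one long
    vibration record (random, possibly overlapping start points)."""
    rng = np.random.default_rng(seed)
    starts = rng.integers(0, len(signal) - length, size=n_samples)
    return np.stack([signal[s:s + length] for s in starts])
```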
As shown in Table 3, four datasets are obtained after data preprocessing, where the data share the same label space of health conditions but are collected under different operating conditions. To validate the effectiveness of the proposed method, 12 transfer tasks Z_k (k = 1, ..., 12) of fault diagnosis are conducted in this paper, covering the transfer between each ordered pair of the four datasets. To simulate the situation where only few labeled data are available in the target domain, only 100 data samples are randomly selected from a dataset when it is chosen as the target data, and the rest of the data are used for testing.

SEU dataset. The SEU dataset is a gearbox dataset collected from the Drivetrain Dynamics Simulator by Shao et al.55 The details of the SEU dataset are given in Table 4. This dataset consists of two sub-datasets, the bearing and gear datasets, where eight channels were collected; the data of channel 2 are mainly used, following the setting of the work in Zhao et al.56 As shown in Table 4, five different health statuses can be found in the two sub-datasets, including one healthy status and four fault statuses, where the fault statuses differ between bearing and gear. The transfer tasks are established between two different working conditions, with the rotating speed-system load set to 20 Hz-0 V or 30 Hz-2 V for each sub-dataset, separately denoted as tasks 0 and 1. In total, there are four transfer learning settings, that is, the two transfer directions between the working conditions within each of the bearing and gear sub-datasets.
Results of CWRU dataset
Performance of the proposed method. Following data preparation, the proposed method is employed to verify its efficacy. As delineated above, 12 transfer tasks are performed. Each task consists of a source domain D_S and a target domain D_T, with 9000 pieces of training data in D_S, and 100 pieces of training data and 8900 pieces of testing data in D_T. With the collected data, 44 features are first extracted from the time, frequency, and time-frequency domains.
The data weights are then estimated, and the weighted data are used for feature selection with the DT+-RFE method, as stated in the Methods section, where half of the features are selected by default. The proposed WSER method is then used to generate the target diagnosis models based on the obtained data DS_k and DL_k and the data weights; the models are examined on DU_k to obtain the performance of the WSER models on tasks Z_k (k = 1, ..., 12). In addition, the DT_T models, trained using the decision tree method with only the labeled data DL_k in D_T, and the DT_ST models, trained using the decision tree method with the weighted data DS_k and DL_k, are also examined on DU_k, which helps further highlight the effectiveness of the proposed WSER method. The performance of the above models on the different tasks is given in Table 5. All performance in this study is measured by the accuracy rate, that is, the rate of correct predictions over all testing data.
As shown in Table 5, the DT_ST models perform better than the DT_T models on tasks Z_4-Z_12, which means the signal data collected under varying operational conditions can be similar, and leveraging source data can enhance the target model performance. In addition, the WSER models perform better than the other models on most tasks; the DT_ST models outperform the WSER models only on tasks Z_5 and Z_8. The results indicate that the proposed WSER method can effectively leverage the source knowledge for the target domain.
Effect of different categories of features on model performance.
To understand how time, frequency, and time-frequency features affect diagnostic model performance, models are constructed using each of these feature types separately. They are examined on DU_k (k = 1, ..., 12) to obtain the performance. The results are given as follows.
As shown in Table 6, the models constructed using only time features perform comprehensively worse than those based on frequency features and time-frequency features, while the frequency-feature-based models comprehensively outperform the time-frequency-feature-based models. In addition, the models constructed using all the features perform better in most cases. The results show that, among the three categories of features, the frequency features are more important than the others, which means the fault status tends to be reflected in the frequency information of the CWRU dataset.
Feature selection in transfer tasks on CWRU dataset. As stated in Section 3.4, the features are ranked using the RFE algorithm, and the ranking results are presented in Figure 2 to illustrate which features are more important for model construction on the specific tasks.
As shown in Figure 2, on tasks Z_1-Z_12, the time features 1 and 6, the frequency features 16, 17, 21, 23, and 24, and the time-frequency features 30, 34, 39, and 42 show higher importance than the other features. Comprehensively, the frequency features are more important than the others on tasks Z_1-Z_12 of the CWRU dataset.
Results of SEU dataset
Results of the proposed method. For the SEU dataset, four transfer tasks are performed. Each task consists of a source domain and a target domain, with 4500 pieces of training data in D_S, and 100 pieces of training data and 4400 pieces of testing data in D_T. The same 44 features are also extracted to help construct the models. After the weight estimation and feature selection, the results of the proposed method on the SEU dataset are given in Table 7.
As shown in Table 7, the DT_ST models perform better than the DT_T models on Z_14 and Z_15, which means the signal data collected under varying operational conditions may differ for the SEU dataset, and direct sample weighting may not improve the model performance in the target domain. However, the WSER models perform better than the other models on all tasks. The results again indicate the effectiveness of the proposed WSER method. In addition, note that all models perform poorly on datasets G_0 and G_1, possibly because the hand-crafted features do not suit such datasets.
Effect of different categories of features on model performance.
The performance of models constructed with time, frequency, and time-frequency features is given as follows to illustrate the effect of the different categories of features on the transfer tasks for the SEU dataset.
As shown in Table 8, the models constructed using all the features perform better in most cases. In contrast to the CWRU results, the time-feature-based models perform poorly compared with the other models, and the time-frequency-feature-based models show better performance than the frequency-feature-based models on tasks Z_13, Z_14, and Z_16. The results indicate that time-frequency features may be more important on the transfer tasks for the SEU dataset.
Feature selection in transfer tasks on SEU dataset. The ranking results for the SEU dataset are further presented in Figure 3 to show feature importance on the specific tasks.
As shown in Figure 3, on tasks Z_13-Z_16, the time feature 1, the frequency features 20 and 22, and the time-frequency features 28, 32, 33, 34, 37, and 39 show higher importance than the other features. Comprehensively, the time-frequency features are more important on the transfer tasks Z_13-Z_16 of the SEU dataset, which is consistent with the results presented above.
Comparative analysis

Comparison with machine learning methods
Results of CWRU dataset. To further highlight the effectiveness of the proposed method, its performance is compared with that of some typical machine learning methods, including Support Vector Machine (SVM), Logistic Regression (LR), K-Nearest Neighbor (KNN), AdaBoost (ADB), Fully Connected Neural Network (FNN), and Gaussian Naive Bayes (GNB). The six methods are used to train models with the labeled data DS_k and DL_k. The performance of the obtained models is examined using DU_k. The model results on the 12 tasks Z_k (k = 1, ..., 12) are given in Table 9.
As shown in Table 9, the WSER models perform better than the SVM, LR, KNN, ADB, FNN, and GNB models on most tasks; only the FNN model performs better than the WSER model on task Z_9. Comprehensively, the performance of the WSER method is the best in most cases. The Wilcoxon signed-rank test is performed to illustrate the performance discrepancies among the models based on their respective results.57 The results indicate that the performance of the WSER method significantly outperforms that of the KMM and KLIEP methods (T = 0, p = 0.0005 < 0.05) and outperforms that of the FNN method (T = 1, p = 0.0010 < 0.05), underscoring the effectiveness of the WSER method.
The models trained using the six machine learning methods with only the labeled target data of datasets H_0-H_3 are also obtained in this paper, and the results are given in Table 10.
As shown in Tables 9 and 10, most machine learning methods perform better with the assistance of the source data. This indicates that the model performance in the target domain can be effectively enhanced by the knowledge contained within the source data.
Results of SEU dataset. The above six machine learning methods are also used to train models with the labeled data DS_k and DL_k (k = 13, ..., 16), and with only DL_k. The performance examined using DU_k on tasks Z_k is given in Tables 11 and 12, respectively.
As shown in Table 11, the WSER models perform better than the SVM, LR, KNN, ADB, FNN, and GNB models on most tasks; only the FNN model performs better than the WSER model on task Z_16. Comprehensively, the performance of the WSER method is the best in most cases. The Wilcoxon signed-rank test is not performed here because it requires at least six sets of results.57 As shown in Tables 11 and 12, the models trained using labeled source and target data perform slightly better than those trained using only labeled target data, which indicates the feasibility of making use of the knowledge contained in the source data. In addition, the models trained using the proposed method perform better than the others in most cases, which also indicates the effectiveness of the proposed method.
Comparison with transfer learning based methods
Results of CWRU dataset. The WSER method is developed based on the integration of the decision tree method and transfer learning. Its efficacy can also be underscored by comparison with combinations of the decision tree method and other transfer learning methods.
Comparative experiments are performed by employing seven methods, derived in the following three ways:

1. Combining the decision tree method with two instance-based transfer learning methods, Nearest Neighbors Weighting (NNW)58 and the Kullback-Leibler Importance Estimation Procedure (KLIEP).19
2. Combining the decision tree method with three typical feature-based transfer learning methods, correlation alignment (CORAL),59 transfer component analysis (TCA),21 and subspace alignment (SA).22
3. Combining the decision tree method with two model-based transfer learning methods designed for decision trees, SER and STRUT.15

The performance of the models trained using the methods derived above is examined using DU_k. The relevant results on tasks Z_k (k = 1, ..., 12) are given in Table 13.
As shown in Table 13, compared with the WSER models, the TCA model performs better on task Z_8, the SER models perform better on tasks Z_4, Z_7, and Z_9, and the STRUT model performs better on task Z_9. In the remaining cases, the WSER models perform better than those of the compared methods. Comprehensively, the performance of the WSER method is the best in most cases. The Wilcoxon signed-rank test is performed to illustrate the performance discrepancies among the models based on the results. The results indicate that the WSER method significantly outperforms the NNW, KLIEP, CORAL, and SA methods (T = 0, p = 0.0005 < 0.05), the STRUT method (T = 5, p = 0.0049 < 0.05), the TCA method (T = 5, p = 0.0093 < 0.05), and the SER method (T = 13, p = 0.0425 < 0.05), which further highlights the effectiveness of the WSER method.
Results of SEU dataset. The models are also trained using the methods derived above on the SEU dataset and examined using DU_k (k = 13, ..., 16). The relevant results on tasks Z_k are given in Table 14.
As shown in Table 14, the WSER models perform better than those of the other compared methods in all cases, which further highlights the effectiveness of the proposed method.
Discussion
The results presented in Tables 5 and 7 offer insightful observations regarding model performance across different domains. When the models are constructed directly from the labeled data in the target domain, model performance can be limited. After reweighting the source data, notable improvements can be observed on some tasks for models built with the weighted data. The enhancement of model performance on specific tasks suggests that data weighting can help reduce the difference in feature distribution between the source and target domains on these tasks. However, there are still tasks where model performance worsens with the weighted data, which may indicate a large difference in feature distribution between the source and target domains that is difficult to bridge through data weighting alone. In contrast, the proposed method, which employs labeled data from both the source and target domains, demonstrates a clear advantage. Its performance surpasses that of models constructed either solely with the labeled data from the target domain or with the weighted labeled data from both domains. This indicates the validity of the proposed method not only in extracting shared knowledge from the source domain but also in facilitating a more effective transfer of this knowledge between the two domains. Consequently, the proposed method enhances model performance in the target domain even in situations where the feature distributions of the source and target domains are markedly distinct, which underscores its robustness in adapting to and overcoming challenges posed by significant differences in feature distribution.
In addition, the proposed WSER method outperforms the compared methods without transfer learning, indicating that the knowledge from the source domain can be utilized to construct an effective model for the target task. Moreover, according to Tables 13 and 14, the WSER method achieves better performance than other transfer-learning-based methods. These results demonstrate the effectiveness of the WSER method in extracting and transferring knowledge for fault diagnosis.
To sum up, when dealing with limited data in fault diagnosis problems, it can be challenging to construct an effective model due to cost or other limitations. The proposed method addresses this issue by extracting knowledge from a source domain and transferring it to the target domain. The fault diagnosis model built with the transferred knowledge provides better predictive power for the target task. Additionally, being based on decision trees, the proposed method offers better interpretability than black-box machine learning methods. This transparency allows engineers to understand how the recommended decisions are made, enhancing the reliability of system operation and maintenance.
Conclusion
Data-driven methods can be effective for fault diagnosis of complex systems. However, the application of data-driven fault diagnosis methods can be limited by the lack of data. To tackle this challenge, this study develops a cross-domain decision method for fault diagnosis that facilitates knowledge transfer from the source domain to the target domain. First, features are extracted from the time, frequency, and time-frequency domains. The data weights are determined following the idea of instance transfer, which reduces the distribution dissimilarity between the source and target data. The extracted features are then selected using the estimated data weights. Finally, the knowledge contained in the source model is transferred to the target domain using the proposed method. The efficacy of the method is thoroughly validated on the CWRU and SEU engineering fault datasets. This validation is further accentuated through a comparative analysis against machine learning methods and other transfer-learning-based methods, underlining the superior performance of the proposed method.
The principal limitations of this study are as follows: (1) the proposed method constructs the model with features extracted using specific methods, which may need adjustment in different decision scenarios; and (2) the feature spaces of the source and the target domains are assumed to be the same, which may not be applicable in some problems.
In the next step, the proposed method will be extended to situations where the source and target domains have heterogeneous feature spaces. In addition, the transfer task with no labeled data available in the target domain will be further investigated.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Note. Bolded results indicate the best model performance under the same conditions.

Table 3. The details of extracted CWRU data samples.

Table 4. The details of extracted SEU data samples.

Table 5. Performances of the DT_T, DT_ST, and WSER models on tasks Z_k (k = 1, ..., 12).

Table 6. The performance on the CWRU dataset of models constructed with different categories of features.

Table 7. Performances of the DT_T, DT_ST, and WSER models on tasks Z_k (k = 13, ..., 16).

Table 8. The performance on the SEU dataset of models constructed with different categories of features.

Table 12. Performances of machine learning models in the single target domain for the SEU dataset.

Table 13. Comparison of the proposed method with transfer learning based methods for the CWRU dataset.

Table 14. Comparison of the proposed method with transfer learning based methods for the SEU dataset.
Interaction of Radiopharmaceuticals with Somatostatin Receptor 2 Revealed by Molecular Dynamics Simulations
The development of drugs targeting somatostatin receptor 2 (SSTR2), which is generally overexpressed in neuroendocrine tumors, is the focus of intense research. A few molecules conjugated with radionuclides are in clinical use for both diagnostic and therapeutic purposes. These radiopharmaceuticals are composed of a somatostatin-analogue biovector conjugated to a chelator moiety bearing the radionuclide. To date, despite valuable efforts, a detailed molecular-level description of the interaction of radiopharmaceuticals in complex with SSTR2 has not yet been accomplished. Therefore, in this work, we carefully analyzed the key dynamical features and detailed molecular interactions of SSTR2 in complex with six radiopharmaceutical compounds selected among the few already in use (64Cu/68Ga-DOTATATE, 68Ga-DOTATOC, 64Cu-SARTATE) and some in clinical development (68Ga-DOTANOC, 64Cu-TETATATE). Through molecular dynamics simulations and exploiting recently available structures of SSTR2, we explored the influence of the different portions of the compounds (peptide, radionuclide, and chelator) on the interaction with the receptor. We identified the most stable binding modes and found distinct interaction patterns characterizing the six compounds, thereby unveiling detailed molecular interactions crucial for the recognition of this class of radiopharmaceuticals. The microscopically well-founded analysis presented in this study provides guidelines for the design of new potent ligands targeting SSTR2.
■ INTRODUCTION
Most efforts of modern medicine are directed toward personalized medicine, in which each patient is treated according to the molecular features of the disease of interest.1,2 In this context, radiopharmaceuticals have been extensively used to specifically target unhealthy tissues.3,4 According to the decay properties of the radionuclide, compounds can be employed for diagnostic or therapeutic purposes, or both (theranostics).5 Radionuclides emitting γ or β+ (e.g., 111In and 68Ga) are exploited for imaging with single-photon emission computed tomography (SPECT) and positron emission tomography (PET), respectively, while those emitting β− or α (e.g., 177Lu and 211At) are used for therapeutic treatments.6 In this last case, after the binding of a radiopharmaceutical to the given target and its subsequent internalization, a cytotoxic dose of radiation is delivered to the cancer cell.7 In some cases, the radionuclide emits both β+ and β− (64Cu), or γ and β− (177Lu), making it suitable for theranostics.4,8 However, the inability to precisely quantify the radiation doses delivered to tumors and normal tissues has been one of the main drawbacks of radionuclide-based treatments. For example, 111In decays by electron capture, emitting relatively high-energy γ photons with a half-life (t1/2) of 67.2 h, resulting in suboptimal imaging resolution and high radiation exposure in patients, which is even more pronounced when using short-lived isotopes such as 68Ga (t1/2 = 1.13 h). Therefore, to alleviate this problem, it is possible to use a longer-lived radionuclide, allowing a more accurate assessment of biodistribution and tissue clearance. An example of an alternative diagnostic agent is the positron-emitting isotope 64Cu (t1/2 = 12.7 h, β+ = 17.4%, Emax β+ = 653 keV).9,10 Both 68Ga and 64Cu are widely used in peptide receptor radionuclide therapy, a prominent example of which is the treatment of neuroendocrine tumors (NET),11 where cancer cells are detected by exploiting the high concentration of somatostatin receptors on their surface.12 These receptors are class A G-protein-coupled receptors (GPCRs) and include five distinct isoforms, SSTR1-5.13 Isoform 2 (SSTR2), belonging to the SRIF1 sub-class together with isoforms 3 and 5,14 is the most expressed in these types of tumors,15,16 and, as a result, several drugs have been developed to specifically bind this receptor.17 To date, eight peptide-based radiopharmaceutical compounds targeting somatostatin receptors have been approved by the FDA and are routinely used in clinics for different applications (Table 1).6 Radiopharmaceuticals targeting SSTR2 share a similar three-component structure made of (1) a biovector mimicking the structure of the endogenous ligand somatostatin, conjugated with (2) a chelator moiety carrying (3) a radionuclide (Figure 1).19 In the last year, different structures of SSTR2 in multiple conformational states have been published.13,15,20-23 Notably, most of these structures are in complex with agonist ligands, often somatostatin or its analogues, such as the octapeptide octreotide. Both experimental and computational studies have explored the conformational features of SSTR2 that are common to other class A GPCRs,24 and the key elements characterizing the binding of different types of ligands (see Figure S1 for an overview of the three-dimensional structure of SSTR2 and its main domains).25
The availability of such structural data can boost the development of new SSTR2 ligands able to bind the receptor with high affinity.26 However, a detailed molecular- and atomistic-level description of the interactions in the radiopharmaceutical/SSTR2 complex is missing, hampering the rational design of new effective ligands of this family. Therefore, exploiting the available structural knowledge, in this work we carefully analyzed the key dynamical features and detailed interactions of SSTR2 in complex with six radiopharmaceuticals. We focused on compounds loaded with either 68Ga or 64Cu: the former is the leading β+-emitting radiometal for PET imaging and is contained in two approved drugs (68Ga-DOTATATE and 68Ga-DOTATOC, Table 1); the latter is used for theranostic purposes and is contained in two approved drugs as well (64Cu-SARTATE and 64Cu-DOTATATE, Table 1). The two radionuclides were simulated in complex with three different chelators: 1,4,7,10-tetraazacyclododecane-N,N′,N″,N‴-tetraacetic acid (DOTA), 1,4,8,11-tetraazacyclotetradecane-1,4,8,11-tetraacetic acid (TETA), and 3,6,10,13,16,19-hexaazabicyclo[6.6.6]icosane (SAR) (Figure 1). For the peptide portion, we considered three derivatives of octreotide, namely, TOC, TATE, and NOC: the first is octreotide with F3 replaced by Y3, the second differs from TOC at the last residue (threonine T8 instead of threoninol T-ol8), and the third is octreotide with F3 replaced by naphthylalanine (Nal3) (Figure 1).
The choice of the radiopharmaceuticals was driven by the aim of exploring the influence of the different portions of the ligands on the interaction with the receptor by (1) considering the same radionuclide-chelator (68Ga-DOTA) and changing the peptide (TOC, TATE, and NOC), (2) considering the same chelator-peptide (DOTA-TATE) and changing the radionuclide (68Ga and 64Cu), and (3) considering the same radionuclide-peptide (64Cu-TATE) and changing the chelator (DOTA, TETA, SAR). Through multicopy μs-long molecular dynamics (MD) simulations building on a previous investigation of SSTR2 in different states,25 here we found analogies and differences in the interaction patterns characterizing the binding of the six compounds with SSTR2, and we discovered how each moiety can influence the dynamical behavior of the complexes. The detailed molecular-level analysis presented in this study, thoroughly mapping the SSTR2/ligand interactions, reveals previously unknown structural and mechanistic insights into the molecular recognition of radiopharmaceuticals at SSTR2.
■ RESULTS AND DISCUSSION
We performed multicopy all-atom MD simulations of six metal-based radiopharmaceutical compounds in complex with SSTR2 (total simulation time of 15 μs per system). We focused on the influence that each component exerts on the dynamic properties of the complexes and the resulting interaction pattern. In the following, we analyze the roles of the peptide moiety, the radionuclide, and the chelator by changing only one component at a time and comparing the MD results in terms of dynamics and detailed molecular interactions. Following this strategy, the role of the different components in the interaction can be evaluated more accurately. The Ballesteros-Weinstein numbering scheme for class A GPCRs is adopted throughout the paper.27 For better clarity, SSTR2 and ligand residues are indicated using the three- and one-letter nomenclature, respectively. Generally, according to the root-mean-square deviation (RMSD) of the ligand heavy atoms with respect to the initial frame of the production run, all compounds were highly stable inside the binding pocket (stability ordering in Figure S2). As expected, most fluctuations were found at the terminal portions of the ligands (i.e., the chelator moiety and the last residue of the peptide, T8 or T-ol8; Figure S3). Overall, for all compounds, we found the known conserved interactions involving residues located in the bottom part of the binding pocket of SSTR2 (i.e., Asp122 3.32, Gln126 3.36) and the D-W4 and K5 motif of the ligands (Figure S4), as well as other residues already reported in previous works.13,20,25 In the following, we focus only on the comparative analysis of protein-ligand interactions characterizing the selected radiopharmaceuticals under investigation.
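The analysis toolchain behind these RMSD values is not stated in the excerpt; purely as an illustration, a ligand heavy-atom RMSD of this kind could be computed with MDAnalysis as below, where the file names and the LIG residue name are hypothetical.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Hypothetical file and residue names; the study's actual inputs
# are not given in this excerpt.
u = mda.Universe("sstr2_complex.pdb", "production.xtc")

# Superpose on the receptor backbone (ref_frame=0, i.e., the first
# frame of the production run) and report the ligand heavy-atom RMSD.
R = rms.RMSD(u, select="protein and backbone",
             groupselections=["resname LIG and not name H*"])
R.run()
print(R.results.rmsd[:5])  # columns: frame, time, backbone, ligand RMSD
```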
Small Changes in the Peptide Structure Strongly Affect the Dynamics of the Complex. In the MD simulations of 68Ga-DOTATOC/TATE/NOC, we did not change the chelator-radionuclide portion (68Ga-DOTA), but only the peptide biovector (TOC, TATE, NOC). As a result, we were able to focus on the influence that small variations in the peptide structure (T8, T-ol8, Y3, or Nal3) exert on the interaction with SSTR2. In all cases, cluster analysis of the MD trajectories (see the Computational Details section and Table S1) reveals a dominating cluster (population in the range of 60-80%) that does not differ significantly from the other two (RMSD in the range of 0.9-4.7 Å), confirming the overall stability of the binding modes (Table S1). Inspecting how the population of the dominant binding mode changes with time in all replicas, we found that it is the most populated one along the whole μs-long time-scale simulation or starting from a few hundreds of ns (see Figure S5). The representatives of the most populated clusters for the three cases are shown in Figure 2.

Figure 2. Representative structures of the most populated clusters of 68Ga-DOTATOC, 68Ga-DOTATATE, and 68Ga-DOTANOC in complex with SSTR2 (Table S1). The top panels report the superimposition of the representative structural clusters with octreotide, taken from PDB 7T11; the bottom panels show the main interactions.

Interestingly, the peptide portion overlaps neatly with the cryo-EM conformation of octreotide in complex with SSTR2 (RMSD of octreotide vs the TOC/TATE/NOC portions: 1.3/1.4/1.5 Å).20 Looking at the structures, we found in all cases that the 68Ga-DOTA moiety was placed between TM5 and TM6, with some of the contacts involving ECL2 and ECL3 as well, while T8/T-ol8 interacted only with ECL2 (Figures 2 and S6). By combining the clustering of MD trajectories with the interaction fingerprint analysis (see the Computational Details section), we could identify the detailed protein-ligand interactions stabilizing the complexes (Figure 3).
First, we discovered the prominent role of residue Tyr205 5.35 in interacting with the 68 Ga-DOTA moiety for 61, 63, and 42% of the total simulation time in 68 Ga-DOTATOC/TATE/NOC, respectively. In addition, the binding of the ligands is stabilized by hydrophobic interactions of Y3/Nal3 with Tyr205 5.35 (persistence of 74, 86, and 78% for the three compounds, respectively). In turn, the ligand Y3/Nal3 residue also interacts with Ile195 (50, 51, and 45%) and Val280 6.59 (71, 82, and 74%) (Figure 3). Interestingly, previous works reported the key role of Tyr205 5.35 and Ile195 in the interaction of SSTR2 with SST14 and octreotide (through F7 and F3, respectively), and it was also pointed out that these two residues can contribute to the selectivity of the different SSTR isoforms. 20,21,13,23,28 Furthermore, Phe294 7.35 and Ser279 6.58 (belonging to the hydrophobic sub-pocket constituted by TM6−7 and ECL3 13 ) seem to be involved in isoform selectivity as well, 23 and we consistently found their interaction with the disulfide bridge featured by all compounds (75/70% for 68 Ga-DOTATOC, 86/81% for 68 Ga-DOTATATE, 77/41% for 68 Ga-DOTANOC). Although the aforementioned interactions are conserved, we noticed some differences in the persistence of the DOTA-Tyr205 5.35 interaction. In particular, the replacement of tyrosine with naphthalene at position 3 in 68 Ga-DOTANOC results in a higher steric hindrance that destabilizes the interaction between the chelator and the protein, letting the 68 Ga-DOTA fluctuate more than in the other compounds (see Figure S3). Conversely, the difference between 68 Ga-DOTATOC and 68 Ga-DOTATATE, which share the tyrosine residue at position 3, is to be sought in the terminal residue T-ol8/T8. Indeed, the presence of a negatively charged carboxylate on the T8 of 68 Ga-DOTATATE allows the peptide to interact with the basic Arg190, located at ECL2, for 59% of the simulation time (Figures 2B and 3), while this interaction was found neither in 68 Ga-DOTATOC nor in 68 Ga-DOTANOC. At the same time, this polar interaction appears to stabilize a second one between 68 Ga-DOTA and Arg184 (belonging to ECL2 as well), which in 68 Ga-DOTATATE was found in 64% of the simulation time, compared to 36 and 19% for 68 Ga-DOTATOC and 68 Ga-DOTANOC, respectively. Interestingly, previous studies reported the interaction between Arg184 and somatostatin, 20 and the Arg184Ala mutation was found to decrease the potency of somatostatin, but not that of octreotide. 22 This last finding supports the crucial role of the deprotonated C-terminus (T8) in the interaction with the receptor. This may also explain the higher selectivity of 68 Ga-DOTATATE toward isoform 2, which is characterized by the presence of two arginine residues at the ECL2
(Arg184 and Arg190), whereas 68 Ga-DOTATOC binds isoform 5 (belonging to the same sub-class, SRIF1), which features on the ECL2 an acidic residue (Glu182) instead of a basic one. 20,29 Further simulations of this class of radiopharmaceuticals interacting with both SSTR2 and SSTR5 are needed to confirm this hypothesis. The small differences in the peptide structure are reflected not only in single protein-residue interactions but also in the overall dynamics of the receptor, especially of the very mobile ECL2. This loop is known to play a key role in the interaction with ligands 30,31 and is characterized by opening and closing movements. 20,25 For this reason, we computed the percentage of MD frames in which the loop was found closed, according to the threshold values established in our previous work. 25 These thresholds refer to geometric parameters, namely, distances and angles, characterizing the movements of this loop. As a result, ECL2 was closed in about 50, 7, and 18% of the simulation time in 68 Ga-DOTATOC/TATE/NOC, respectively (Figure S8). These marked differences can be traced back to the characteristic behavior of the DOTA and T-ol8/T8 moieties in the three compounds described above: in 68 Ga-DOTATOC, the 68 Ga-DOTA portion stably interacts with Tyr205 5.35 (thanks also to the presence of residue Y3 of the peptide), moving this group away from ECL2 and allowing its closure. In 68 Ga-DOTATATE, the interaction with Tyr205 5.35 is still present, but the 68 Ga-DOTA moiety also strongly interacts with Arg184, mediated by the T8-Arg190 interaction. Since both arginine residues are located at the ECL2, their involvement in the interaction with the ligand very likely impairs its closure (see below). Differently from the other two compounds, in 68 Ga-DOTANOC the chelator loosely interacts with Tyr205 5.35, leading to a higher oscillation that prevents the closure of ECL2.
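A minimal sketch of this open/closed bookkeeping follows. The actual geometric criteria (which distances and angles, and their thresholds) come from the authors' previous work (ref 25) and are not listed here, so the residue selections and the 12 Å cutoff below are placeholder assumptions purely for illustration.

```python
# Hedged sketch of counting "closed" ECL2 frames; the selections and the
# threshold are placeholders, not the criteria from ref 25.
import MDAnalysis as mda
import numpy as np

u = mda.Universe("sstr2_complex.prmtop", "production.nc")  # hypothetical files

ecl2 = u.select_atoms("resid 184:200 and name CA")    # assumed ECL2 span
pocket = u.select_atoms("resid 122 126 and name CA")  # assumed pocket anchors

closed_frames = 0
for ts in u.trajectory:
    d = np.linalg.norm(
        ecl2.center_of_geometry() - pocket.center_of_geometry()
    )
    if d < 12.0:  # placeholder threshold for the "closed" state
        closed_frames += 1

frac = 100 * closed_frames / u.trajectory.n_frames
print(f"ECL2 closed in {frac:.1f}% of frames")
```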
Substitution of 68 Ga 3+ with 64 Cu 2+ Affects the Persistence of Ligand/SSTR2 Interactions. After assessing the role of the peptide moiety, we focused on the influence of the radionuclides by comparing 64 Cu- and 68 Ga-DOTATATE. Both gallium and copper ions are hexa-coordinated when in complex with DOTA (by four nitrogen and two oxygen atoms), showing a pseudo-octahedral geometry. 8 Due to the intrinsic properties of the two radionuclides (e.g., electric charge, van der Waals radius, Jahn−Teller distortion 8 ), the coordination geometries differ, with 64 Cu 2+ showing a more elongated one compared to 68 Ga 3+ (Table S2). Keeping in mind the limitations associated with classical/force field-based MD simulations when describing such challenging types of atoms, 32 these differences were reflected in the conformation assumed by the DOTA group during the MD simulations (Figure S9). Focusing on the whole ligands, inspection of the dynamical behavior of 64 Cu-DOTATATE and 68 Ga-DOTATATE reveals that these compounds share overall the same pattern of interactions with SSTR2. The only relevant differences are found for residue T8, which interacts with Arg184 and Ser192 in 68 Ga-DOTATATE, and for residue Y3, which interacts with Asn276 6.55 in 64 Cu-DOTATATE (Figures 3 and 4). However, these differences do not significantly affect the dynamics of ECL2 (which was found to be closed in 4 and 7% of the simulation time for 64 Cu-DOTATATE and 68 Ga-DOTATATE, respectively) (Figure S8).
Changes in the Chelator Moiety Influence the Interactions at the Peptide C-Terminus.
In the third part of this work, we considered the same radionuclide ( 64 Cu 2+ ) and the same peptide (TATE) while comparing three different chelators (DOTA, TETA, SAR). TETA differs from DOTA by just six atoms (60 vs 54 atoms, respectively), and both coordinate the copper ion through four nitrogen and two oxygen atoms (from the carboxylic groups). Differently from DOTA, in TETA the carboxylic groups are located one above and one below the plane formed by the nitrogen atoms, conferring a slightly higher steric hindrance (average Connolly surface area 33 computed on the MD trajectories: 307 ± 3 Å 2 vs 329 ± 2 Å 2, respectively). SAR has the lowest surface area (299 ± 2 Å 2 ) but, differently from the other chelators, it is attached through a butanediamide linker that increases its effective steric hindrance (405 ± 4 Å 2 ) as well as its flexibility (Figure S3F). Besides the presence of a linker, another important difference between SAR and the other two chelators is the absence of negatively charged groups (i.e., carboxylic acid moieties), as the chelator coordinates the copper ion through its six nitrogen atoms.
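As a rough illustration of such per-moiety surface-area estimates, the sketch below averages a surface area over a trajectory with MDTraj's Shrake-Rupley implementation; note that this computes a solvent-accessible surface area, a stand-in for (not identical to) the Connolly surface used in the paper, and the file and residue names are assumptions.

```python
# SASA averaged over the trajectory for one moiety (a stand-in for the
# Connolly surface; placeholder file and residue names).
import mdtraj as md

traj = md.load("production.nc", top="sstr2_complex.prmtop")  # hypothetical
chelator = traj.topology.select("resname DOTA")              # assumed name

sasa = md.shrake_rupley(traj, mode="atom")           # (n_frames, n_atoms), nm^2
chelator_area = sasa[:, chelator].sum(axis=1) * 100  # nm^2 -> A^2 per frame

print(f"{chelator_area.mean():.0f} +/- {chelator_area.std():.0f} A^2")
```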
Focusing on the MD simulations, Figure 5 shows the representatives of the most populated clusters of the 64 Cu-DOTA/TETA/SAR-TATE compounds. Consistently with what has been reported above for the 68 Ga-based systems, also in this case we found that all compounds interact with Phe294 7.35 and Ser279 6.58 via their disulfide bridge and with Tyr205 5.35, Ile195, and Val280 6.59 through Y3 and the chelator moiety (Figure 5). Notably, in contrast to the other radiopharmaceuticals, in 64 Cu-SARTATE residue Tyr205 5.35 interacts with the linker portion and not with the chelator (SAR), which thus remains free to oscillate during the MD trajectories.
Combining the clustering of the MD trajectories with the interaction fingerprint analysis, we found that DOTA in 64 Cu-DOTATATE interacts with SSTR2 with the highest overall persistence compared to the other two chelators (TETA and SAR) (Figure 4). As expected, this suggests that the presence of a hindering chelator destabilizes the interaction between the peptide and SSTR2; nonetheless, the improved stability of copper inside such a chelator is known to yield high-quality images. 34 Interestingly, although the TATE peptide was common to all Cu-labeled compounds, the change of the chelator affected the interactions involving the C-terminus and the residues located at the ECL2. In detail, 64 Cu-DOTA/TETA-TATE interacted with Arg190 through the terminal T8, while the chelator moieties were involved with Arg184. Conversely, in 64 Cu-SARTATE, the terminal T8 was found to interact mainly with Arg184 (54% of the simulation time) and to a lesser extent with Arg190 (23%), whereas the SAR portion interacted poorly with the ECL2 (only 26% with Glu200) compared to the other chelators (Figures 4 and 5). This different behavior can be traced back to the total +2 net charge of 64 Cu-SAR (compared to −1 of 64 Cu-DOTA/TETA) that, despite the presence of a negative C-terminus, penalizes the interactions with basic residues.
[Figure 5 caption: representative structural clusters of the 64 Cu-DOTA/TETA/SAR-TATE complexes. ECL2 is colored in red, the copper ion is represented as a green sphere, the receptor is represented as a transparent white cartoon, the peptide TATE is colored in lilac, the chelators DOTA in dark cyan, TETA in purple, and SAR in gold, with its linker in dark gray. The top panels report the superimposition with octreotide (green transparent sticks) taken from PDB 7T11. In the bottom panels, the main interactions are shown as black dotted lines.]
Interestingly, when simulating SSTR2 in complex with 64 Cu-DOTA/TETA-TATE, the ECL2 was found to be closed in 2.7 and 3.5% of the simulation time, respectively, consistent with what was registered for 68 Ga-DOTATATE (6.6%). On the contrary, in 64 Cu-SARTATE the ECL2 was able to close upon the binding pocket in 19.1% of the simulation time, similarly to 68 Ga-DOTANOC (18.1%) (Figure S8). As mentioned above, both 64 Cu-DOTA/TETA-TATE were able to strongly interact with both Arg184 and Arg190 (like 68 Ga-DOTATATE), while 64 Cu-SARTATE interacted only with Arg184 (like 68 Ga-DOTATOC/NOC). Therefore, these results suggest that the closure of the ECL2 loop is impaired mostly by the presence of strong polar interactions with the ligands, but also by high fluctuations of the chelator.
CONCLUSIONS
In this study, we investigated the interaction of six radiopharmaceuticals with SSTR2, a key drug target for neuroendocrine tumors. We predicted the binding modes of these compounds and rationalized the role of the three different moieties characterizing this class (i.e., radionuclide-chelator-peptide). Starting from the experimental structure of the receptor in complex with the somatostatin analogue octreotide, we generated the protein−ligand complexes, each of which underwent an overall 15 μs of MD simulation time. The analysis of the MD trajectories revealed that the substitution of the radionuclide ( 68 Ga 3+ with 64 Cu 2+ ) did not influence the dynamics and the main interactions established by the ligand, while the pattern of interaction of the C-terminus was strongly affected by changes of the chelator moiety (DOTA, TETA, SAR). The radionuclide-chelator portion is stabilized by cross-interactions between Tyr205 5.35, Ile195, and the third residue of the peptide (Y3 for TOC and TATE, Nal3 for NOC). Furthermore, we found that upon small changes in the peptide structure (at the C-terminal T8/T-ol8 and at the third residue Y3/Nal3), the dynamics of both the chelator portion and SSTR2 strongly differ, possibly paving the way to a molecular rationalization of the differences in SSTR isoform selectivity.
The detailed molecular-level analysis presented in this study and the overall computational platform can be extended to other radiopharmaceuticals of this class, thus contributing to the rational design of new potent ligands targeting SSTR2.
COMPUTATIONAL DETAILS
System Setup. The starting 3D structure of SSTR2 was retrieved from PDB ID 7T11, 20 in which the receptor was solved in complex with the synthetic agonist octreotide and the G-protein. Missing atoms were added by structure refinement using Modeller 10.2. 35 Given the close similarity between the peptide portion of the six radiopharmaceuticals and octreotide, it was reasonable to assume as the initial position of the ligands in the binding pocket the one obtained by direct superimposition of the peptide portion onto octreotide. The stability of the initial binding modes was thoroughly tested by monitoring the MD trajectories through analysis of RMSD/RMSF values (see below). To reduce the computational cost, we did not include the G-protein in the structures. The ionization states of the residue side chains, the tautomeric states of histidine residues, and the Asn/Gln flipping were checked with the H++ server. 36 The CHARMM-GUI server 37 was used to embed the protein into a double layer of phosphatidylcholine (POPC, 70%) and cholesterol (30%). 38 The system was inserted in an OPC water box 39 and neutralized by adding K + and Cl − ions, reaching a 0.15 M concentration. The AmberTools20 software 40 was used to assign the Lipid17 force field to POPC and cholesterol 41 and ff19SB to the protein. 42 The peptide portion of the ligands was obtained by manually modifying the experimental structure of octreotide, solved in complex with SSTR2 (PDB ID 7T11 20 ). The chelator structures were retrieved from the Cambridge Structural Database, 43 choosing entries solved in complex with a radionuclide (DOTA ID 1136299, 44 TETA ID 624742, 45 SAR ID 915824 34 ). Finally, the chelator portions carrying the radionuclide were manually bound to the peptide N-terminus. For the generation of the ligand force field parameters, we combined two approaches: (1) one for the peptide and (2) one for the chelator-radionuclide portion. (1) The force field ff19SB was assigned to the peptide, and nonstandard residues (i.e., D-phenylalanine, naphthalene, D-tryptophan, threoninol) were parametrized as described previously. 25 (2) Given the peculiarity of the metal coordination bond involving 68 Ga 3+ and 64 Cu 2+ and their challenging parametrization, we used the Metal Center Parameter Builder (MCPB.py) procedure 46 implemented in Amber20. In detail, the 3D structures of the chelator-radionuclide and the first residue of the ligands ( D F1), obtained as described above, underwent quantum mechanics (QM) calculations at the Density Functional Theory (DFT) level 47 with the B3LYP functional using the Gaussian16 package (Revision A.03). 48 We performed geometry optimization on 68 Ga-DOTA using different basis sets in order to identify the best one for our system (Table S2). We compared the coordination distances between the experimental and QM-optimized structures, and we computed the mean absolute error (MAE). We observed that a larger basis set does not lead to significant differences in the optimized geometry. Therefore, we employed the hybrid B3LYP functional, 49 in conjunction with the split-valence 6-31G(d,p) Gaussian basis set, 50 to save computational time. For each compound, the ground-state structure was optimized, and then a full vibrational analysis was performed.
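To make the parameterization workflow concrete, a hedged sketch of how these QM outputs could feed MCPB.py and Antechamber is given below (the charge-fitting step is described in the next paragraph). All file names are placeholders, the MCPB.py stage number follows the standard four-stage workflow, and the net-charge flag is system-dependent; treat this as an outline to check against the Amber manual, not the authors' actual commands.

```python
# Hedged sketch of the parameter-generation steps (not the authors' scripts):
# MCPB.py derives bonded terms for the metal center from the Gaussian
# frequency job, and Antechamber fits two-stage RESP charges to the
# Merz-Kollman ESP from the single-point calculation described below.
import subprocess

# Bonded force-field terms from the vibrational analysis (MCPB.py stage 2
# in the standard workflow; "mcpb.in" is a placeholder input file).
subprocess.run(["MCPB.py", "-i", "mcpb.in", "-s", "2"], check=True)

# Two-stage RESP charges from the Gaussian ESP output; the net charge (-nc)
# depends on the chelator-radionuclide fragment and must be set per system.
subprocess.run(
    ["antechamber",
     "-i", "chelator_mk.log", "-fi", "gout",
     "-o", "chelator_resp.mol2", "-fo", "mol2",
     "-c", "resp",
     "-nc", "0"],  # placeholder net charge
    check=True,
)
```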
In the case of 64 Cu-SAR, solvation effects were calculated using the integral equation formalism of the Polarized Continuum Model (IEF-PCM), 51 with water as the solvent, 52 to avoid the collapse of the chelator. In all cases, the DFT-based structural parameters are in good agreement with the available experimental data (Table S2). The vibrational analysis results were used by MCPB.py to generate the bonded terms of the force fields. Then, on the optimized geometry we performed B3LYP/6-31G(d,p) single-point energy calculations to generate the atomic partial charges fitting the molecular electrostatic potential. We used the Merz−Kollman scheme 53 to construct a grid of points around the molecule under the constraint of reproducing the overall electric dipole moment of the molecule. Atomic partial charges were then generated through the two-step restrained electrostatic potential (RESP) method 54 implemented in the Antechamber package. 55 These steps enabled the generation of the force field of the chelator-radionuclide moieties using the General Amber Force Field 2 (GAFF2). 56
MD Simulations. Each system underwent an energy minimization combining the steepest-descent and conjugate-gradient algorithms (2500 steps each), applying positional restraints on the protein−ligand complex (10.0 kcal mol −1 Å −2 ) and on cholesterol and the phosphate groups of phosphatidylcholine molecules (2.5 kcal mol −1 Å −2 ).
NVT and NPT equilibrations followed minimization, during which the positional restraints were incrementally reduced. The NVT equilibration was divided into two steps: first 125 ps with the same positional restraints as the minimization, then a further 125 ps decreasing the restraint strength to 5.0 kcal mol −1 Å −2 for the protein and the ligands while keeping 2.5 kcal mol −1 Å −2 for cholesterol and the phosphate groups of phosphatidylcholine molecules (overall NVT equilibration time = 250 ps). The following NPT equilibration was divided into four steps: (1) 125 ps using positional restraints of 2.5 kcal mol −1 Å −2 for the protein−ligand complex and 1.0 kcal mol −1 Å −2 for the membrane components, (2) 500 ps using 2.5 kcal mol −1 Å −2 for the protein−ligand complex and 1.0 kcal mol −1 Å −2 for the membrane, (3) 500 ps using 0.5 kcal mol −1 Å −2 for the protein−ligand complex and 0.1 kcal mol −1 Å −2 for the membrane, and finally (4) 500 ps using 0.1 kcal mol −1 Å −2 for the protein−ligand complex and leaving the membrane completely free to move (overall NPT equilibration time = 1.625 ns). We used the Langevin thermostat (collision frequency of 1 ps −1, 310 K) and the Berendsen barostat (1 atm), with a 9 Å cutoff; the time step was incremented from 1 to 2 fs with the SHAKE algorithm, 57 and the Particle Mesh Ewald method was employed for long-range electrostatics. 58 The production runs were carried out for 3 μs, using the NPT ensemble and a 4 fs time step, adopting the hydrogen mass repartitioning scheme. 59 Five replicas were generated for each system, resulting in a total simulation time of 15 μs. The MD simulations were conducted using the PMEMD module of Amber20, 40 and the trajectory frames were written every 100 ps.
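For quick reference, the staged protocol above can be restated as data; the values below are taken directly from the text (lengths in ps, restraint weights in kcal mol −1 Å −2 for the protein−ligand complex and the membrane components), while the tabular Python form is ours rather than a reproduction of the authors' input files.

```python
# Compact restatement of the staged equilibration protocol described above.
stages = [
    # (ensemble, length_ps, k_complex, k_membrane)
    ("NVT", 125, 10.0, 2.5),
    ("NVT", 125, 5.0, 2.5),
    ("NPT", 125, 2.5, 1.0),
    ("NPT", 500, 2.5, 1.0),
    ("NPT", 500, 0.5, 0.1),
    ("NPT", 500, 0.1, 0.0),  # membrane fully released
]

total_ps = sum(s[1] for s in stages)
print(f"total equilibration: {total_ps / 1000:.3f} ns")  # 1.875 ns
for ens, t, kc, km in stages:
    print(f"{ens}: {t:4d} ps  k(complex)={kc:4.1f}  k(membrane)={km:3.1f}")
```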
MD Trajectory Analyses. The MD replicas were first concatenated, and the CPPTRAJ software 60 was used to perform a cluster analysis. A hierarchical algorithm 61 was used to group all frames into conformational clusters according to the compound RMSD. Considering all cases, we found that three clusters sample significantly different conformations of the ligands while remaining, at the same time, reasonably populated (see Table S1). In all cases, the RMSD values of the ligands were computed on all of the heavy atoms, after aligning the backbone of the receptor in the MD trajectories, with respect to the first frame of the production run. Interaction fingerprints were computed using the ProLIF Python library 62 on all of the frames of the MD trajectories. The numbers of interactions were combined over all replicas and converted into persistence of interactions (%).
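Since ProLIF is the tool named here, a minimal sketch of the fingerprint-to-persistence conversion is shown below; the topology/trajectory names and the ligand selection string are illustrative assumptions.

```python
# Sketch of converting per-frame interaction fingerprints into persistence
# percentages with ProLIF (placeholder file and residue names).
import MDAnalysis as mda
import prolif as plf

u = mda.Universe("sstr2_complex.prmtop", "replicas_concatenated.nc")
ligand = u.select_atoms("resname LIG")   # hypothetical ligand residue name
protein = u.select_atoms("protein")

fp = plf.Fingerprint()                   # default interaction types
fp.run(u.trajectory, ligand, protein)

# Boolean frames-by-interactions table; the column-wise mean is the fraction
# of frames in which each contact is present, i.e. its persistence in %.
df = fp.to_dataframe()
persistence = (df.mean() * 100).sort_values(ascending=False)
print(persistence.head(10))
```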
"Biology",
"Chemistry"
] |
Emergency Information Communication Structure by Using Multimodel Fusion and Artificial Intelligence Algorithm
With the development of the times, social emergencies are increasing, and emergency management has gradually become a principal means of resolving crises in the public domain. Observation of many countries and regions shows that various types of public crises occur frequently around the world and have severely affected people's daily lives, safety, and property. Long-term research and analysis show that the emergency management mechanism currently established in China has certain shortcomings. Problems in communicating emergency information can prevent emergency work from proceeding smoothly. In addition, problems in the communication channels for emergency information can hamper cooperation among departments during emergency management work, and the government's efficiency in dealing with problems is also reduced in real scenarios. In order to improve the efficiency of emergency information management, this paper addresses the various problems existing in the construction of the emergency management system. On this basis, it analyzes and organizes the integration of relevant emergency information management plan models, building on research into the development of artificial intelligence algorithms. The main research results on emergency information management at home and abroad are comprehensively studied and evaluated. Finally, a QG algorithm based on multimodel fusion is developed. In the analysis, this paper uses artificial intelligence algorithms to build a multimodal prediction model and collects the data needed to build the model by random sampling. Different data sets are analyzed and used as the basic training data for prediction. Comprehensive analysis shows that the model constructed in this paper can, to a certain extent, promote the sharing of emergency information among departments.
Introduction
In order to improve the efficiency of problem handling as much as possible, it is necessary to enhance the effectiveness and immediacy of information communication between government departments, both at the beginning of public crisis handling and in the follow-up process. In emergency management, emergency information is the carrier of key signals transmitted among departments, and it also reveals the connections between the various actors in a public crisis [1]. Improving the efficiency of emergency information communication can, to a certain extent, promote the resolution of public crises. Under the current development background, countries and regions are increasingly interconnected, and many issues involve the interests of multiple parties [2]. Public crisis events have gradually shown new characteristics, which require all departments to handle information and communication work well. When dealing with emergencies, problems in the communication channels for emergency information can hamper the cooperation of departments carrying out emergency management work [3]. In reality, the government's efficiency in handling problems is also reduced. In actual scenarios, the transmission and sharing of information is an important determinant of whether public crisis events can be resolved quickly. Government departments must not only use professional measures to increase the speed of information transmission but also establish a crisis event alert system [4].
Due to technical difficulties, the data transmission speed and sharing functions of existing information and communication systems are limited to some extent [5]. In view of these problems, the future development of information and communication systems can rely on new technologies and new theoretical knowledge to improve and optimize the development and construction of the existing information and communication management system. To solve the above problems and improve the efficiency of early warning and emergency information management, this paper conducts a multimodel fusion study. Drawing on the development of artificial intelligence algorithms, it comprehensively analyzes the main achievements of emergency information management research at home and abroad and finally proposes a QG algorithm based on the multifusion model [6]. In the analysis, artificial intelligence algorithms are used to build a multimodal prediction model, and the data needed to build the model are collected by random sampling. The collected data serve as the training data for the prediction model [7], ensuring that the analysis results are scientific. Overall, the model constructed in this article can promote the sharing of emergency information among departments to a certain extent. Experimental analysis shows that the multifactor fusion model built in this paper, combined with an artificial intelligence algorithm and applied to real scenarios, can improve the efficiency and accuracy of information transmission and reception to a certain degree, thereby improving the communication efficiency of government departments and their overall capacity to manage public crisis events.
Related Work
The literature has introduced the specifics of emergency management, expounded the concept and types of emergency management, and analyzed the importance of information management to emergency management [8].
In the process of analysis, that work also introduced grid theory, whose associated technology can improve the effect of information integration to a certain extent and can realize information sharing and efficient dissemination between departments. The literature has also analyzed information transmission and sharing in crisis management, the integration of various models, the development of artificial intelligence algorithms, and the main research results on emergency information management at the present stage [9]. Finally, an algorithm using multiple-model fusion was proposed to improve the management efficiency of emergency information. Other work has analyzed the various factors that affect information transmission and used the knowledge and technology of grid theory to study methods of information transmission and communication [10], arguing that grid theory can improve the efficiency of information management in crisis management and confirming the feasibility of this method through specific analysis [11]. Further studies have introduced the details of information management, analyzed its development and existing problems, and studied methods of information sharing [12]. Analyses of the internal structure, operating means, and corresponding procedures of information management systems have found that a series of problems remain in current systems, such as uncoordinated and unbalanced communication, insufficient transparency of external information disclosure, and a lack of connection between communication channels [13]. The literature has also used existing theories and technologies to build an information communication model, supported mainly by the technology used in grid theory [14], and has tested and analyzed the role of this model in government crisis management. Finally, studies of the practical application of grid theory hold that government departments can use the advantages of gridding to realize remote office work and establish virtual information communication organizations in various regions, which can improve information communication efficiency [15] and help all departments and regions realize the sharing of information resources.
This article uses the knowledge of grid theory and related technologies to optimize the distribution of information communication channels and establishes a grid information-sharing system through integration. The integrated system has functions such as digital transmission, voice calls, and video chats [16]. The system can also back up various types of data at any time, realizing information sharing among different levels, departments, and regions. In the analysis process, an artificial intelligence algorithm is used to construct a fusion model of various modes. Under the realistic constraint of changing the organization's original structure as little as possible, the information communication channels are broadened and enriched, the level of information management is improved to a certain extent, and the quality of information management is guaranteed [17][18][19].
Multimodel Fusion and Artificial Intelligence Algorithm
Multimodel Fusion
The construction of the QG model relies on sequence generation. In building the model, an encoder must be used to compute a representation of the input; the decoder then operates on the answer vectors so that a complete question can be generated. The purpose of the encoder is to turn question-and-answer sentences of inconsistent lengths into fixed-length, continuous vectors, and different neural networks can be used to implement it. The decoder predicts the question output by the system based on the content of the sequence, which can be expressed as a conditional probability over the output words. This article uses an attention-based structure, which can calculate a probability for each word in the answer sentence. When encoding, the attention mechanism assigns a different probability, or weight, to each hidden state at each time step. During the running of the model, the model records the appearance of each word in different sentences and does not reuse words that have already appeared. In the QG model, each component is differentiable, so all of the parameters in the model can be learned using the backpropagation algorithm. In building the model, a grid beam search algorithm can expand the search range of the traditional beam search and can consider all candidate words before decoding, scoring the output sentences of all models.
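The equations this passage refers to did not survive extraction. As a reconstruction under that caveat, the standard attention-based encoder-decoder forms that the description appears to follow are sketched below; the notation is ours and is not necessarily the authors' exact formulation.

```latex
% Standard attention-based encoder-decoder forms matching the description
% above (a reconstruction, not the paper's original equations).
\begin{align}
  % decoder predicts the next question word given the answer sequence
  P(y_t \mid y_{<t}, x) &= \operatorname{softmax}\!\big(W_o\, s_t + b_o\big) \\
  % attention weight of encoder hidden state h_i at decoding step t
  \alpha_{t,i} &= \frac{\exp(e_{t,i})}{\sum_{k} \exp(e_{t,k})},
  \qquad e_{t,i} = v^{\top}\tanh\!\big(W_s s_{t-1} + W_h h_i\big) \\
  % context vector feeding the decoder state
  c_t &= \sum_i \alpha_{t,i}\, h_i \\
  % coverage-style record discouraging re-use of already generated words
  \mathrm{cov}_t &= \sum_{t' < t} \alpha_{t'} \\
  % training objective learned by backpropagation
  \mathcal{L}(\theta) &= -\sum_t \log P\big(y_t \mid y_{<t}, x;\, \theta\big)
\end{align}
```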
Evaluation Index.
In analyzing the question types, this study used the criteria of the QGSTEC 2010 evaluation. Assessors divide the test questions and sentences into five parts for scoring. The scoring details of the first part are shown in Table 1.
In the evaluation process, the question sentence needs to be converted into a corresponding query sentence so that the system can match the corresponding data; the output result is the answer to the question, which also makes it possible to judge whether the type of the question is correct. The details are shown in Table 2.
In the process of evaluation, it is also necessary to judge whether the syntax of the question sentence is correct.
The fourth part of the assessment is the clarity of the question statement. The details are shown in Table 3.
In order to improve the efficiency of emergency information management, this paper analyzes various fused emergency information management scheme models, starting from the problems existing in the current construction of the emergency management system. Building on research into the development of artificial intelligence algorithms and the main achievements of emergency information management research, a QG algorithm based on multimodel fusion is developed. The specific flow of the algorithm is shown in Figure 1.
In designing the model, this article mainly uses a question clustering algorithm, which can select the common questions needed to build the model from among many questions. The elements of the model input layer are computed from the clustered questions, and an attention distribution is then generated over each attention vector. After the convolutional layer and pooling layer are processed in sequence, the data are classified, packaged, and transferred to the output layer, where the resulting vector is computed through an activation function. During the operation of the convolutional layer, the data are obtained mainly by convolving the feature map output by the previous layer with the convolution kernels, which are learned by the system; different convolution kernels produce different feature maps. The encoder reads the fragments of the input layer, and the decoder predicts the output word sequence based on the read results. In the subsequent calculations, professional analysis tools are used to extract candidate results for the question; if candidate results can be extracted, the degree of matching between the answer and the question can be determined. If a placeholder represents a word in the question, that word can be chosen as the topic of the question, and the required features can be combined using a linear model.
Table 1 (question relevance): 1 = the generated question is completely related; 2 = the generated question is basically related; 3 = the generated question is basically irrelevant; 4 = the generated question is completely irrelevant.
Table 2 (question type): 1 = the generated question is of the given type; 2 = the generated question is not of the given type.
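Because the architecture formulas are likewise missing, a minimal runnable sketch of the convolution, pooling, and classification flow described above may help; the layer sizes, vocabulary size, and class count are invented for illustration and are not the paper's actual configuration.

```python
# Minimal PyTorch sketch of the convolution -> pooling -> classification flow
# described in the text (all sizes are illustrative assumptions).
import torch
import torch.nn as nn

class QuestionCNN(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=128, n_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # different convolution kernels produce different feature maps
        self.conv = nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)   # pooling layer
        self.out = nn.Linear(64, n_classes)   # output layer

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed, seq)
        x = torch.relu(self.conv(x))               # activation function
        x = self.pool(x).squeeze(-1)               # (batch, 64)
        return self.out(x)                         # class scores

logits = QuestionCNN()(torch.randint(0, 5000, (2, 20)))
print(logits.shape)  # torch.Size([2, 4])
```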
Artificial Intelligence Algorithm.
The convolutional neural network is composed of convolutional layers and pooling layers, which alternate in the network as processing proceeds. After many iterations, the network takes the pixels of the last pooling layer, expresses them in the form of a vector, and transmits the result to a fully connected artificial neural network, from which the final result is obtained. The operation of a convolutional layer is mainly to process the input image with a filter learned from the training data and to add a bias term, yielding the convolutional layer's output. The pooling layer computes the maximum or average value of a patch of pixels at regular intervals and uses the computed values to draw a feature map. Both the convolution process and the pooling process can use an activation function, chosen according to the speed of network convergence, so that complete image information can be extracted. The specific structure of the network is shown in Figure 2. The stacked sparse autoencoding neural network consists of two parts. The main function of the autoencoder is to extract the features contained in the image; the encoder of the network is composed of many sparse encoders, which can improve the accuracy of feature extraction. In addition, the Softmax classifier is an important part of the network: it can classify the extracted image features and improve the efficiency of analyzing them.
Stacked Sparse Autoencoder
The autoencoder can use unsupervised learning for network training, encoding the data input to the system, reconstructing the data structure, and reducing the reconstruction error, so that the characteristics and detailed structure of the input data can be captured in the hidden layer, as shown in Figure 3. The hidden layer extracted by a plain autoencoder cannot express the input data very well. For this reason, Olshausen and colleagues proposed the theory of sparse coding. Through research on the brain's unsupervised learning process, they found that when humans learn about external things, most neurons are in fact not in a working state: some are not involved at all, and only a small proportion of neurons are activated after being stimulated, meaning that the neural response is incomplete. Because of this, the human brain learns all kinds of data information effectively. This ability also applies to the autoencoders mentioned above: under incompleteness restrictions, a sparse encoder can learn sparse feature representations well, so that the extracted information is easy to distinguish. The incompleteness restriction means that a neuron is considered activated when the value of its output function is close to 1, and inhibited when the output is close to 0; keeping most outputs inhibited is called the sparsity limitation. During the operation of the network, the outputs of neurons are suppressed, so a sparse activation parameter is needed to shape the output behavior of the neurons. A loss function is used to measure how the model processes the data, and after adding the penalty factor to the model, the loss function of the encoder incorporates the sparsity term. In shallow networks, sparse autoencoders can extract features that have large differences, and combining multiple sparse autoencoders into a large encoder can improve the learning effect of deep networks. During model operation, the structure of the encoder is more complicated and the gradient vanishes, which is not conducive to training, so an unsupervised layer-by-layer greedy training method must be adopted so that the structure of the network can be optimized, thereby improving the performance of the encoder, as shown in Figure 4.
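The sparsity formulas referenced in this passage were also lost in extraction. A standard sparse-autoencoder formulation consistent with the description (average hidden activation, a KL-divergence sparsity penalty, and the penalized loss) is sketched below as an assumption, not the paper's exact notation.

```latex
% Standard sparse-autoencoder penalty consistent with the description above
% (a reconstruction of equations lost in extraction).
\begin{align}
  \hat{\rho}_j &= \frac{1}{m}\sum_{i=1}^{m} a_j\big(x^{(i)}\big)
  && \text{(average activation of hidden unit } j\text{)} \\
  \mathrm{KL}\big(\rho \,\|\, \hat{\rho}_j\big) &=
    \rho \log\frac{\rho}{\hat{\rho}_j}
    + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j}
  && \text{(sparsity penalty toward target } \rho\text{)} \\
  J_{\mathrm{sparse}}(W, b) &= J(W, b)
    + \beta \sum_{j} \mathrm{KL}\big(\rho \,\|\, \hat{\rho}_j\big)
  && \text{(loss with the penalty factor added)}
\end{align}
```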
Softmax Classifier
The Softmax classifier can classify the extracted image features and improve the efficiency of analyzing them. When classifying, a hypothesis function is needed to determine the class of the input, and the loss function of the classifier can be determined using the maximum entropy model. In the process of data training, if the data fit poorly, the parameter values grow too large under the penalty, so a weight-decay term is used to shrink them; the loss function then takes a regularized form.
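Again reconstructing lost equations, the standard softmax-regression forms matching this description are sketched below, with λ playing the role of the weight-decay coefficient mentioned in the text; the notation is ours, not the paper's.

```latex
% Standard softmax-regression hypothesis and weight-decayed loss
% (a reconstruction of equations lost in extraction).
\begin{align}
  h_\theta\big(x^{(i)}\big) &=
  \frac{1}{\sum_{l=1}^{k} e^{\theta_l^{\top} x^{(i)}}}
  \begin{bmatrix}
    e^{\theta_1^{\top} x^{(i)}} \\ \vdots \\ e^{\theta_k^{\top} x^{(i)}}
  \end{bmatrix} \\
  J(\theta) &= -\frac{1}{m}\sum_{i=1}^{m}\sum_{j=1}^{k}
    \mathbf{1}\{y^{(i)} = j\}
    \log \frac{e^{\theta_j^{\top} x^{(i)}}}{\sum_{l=1}^{k} e^{\theta_l^{\top} x^{(i)}}}
    + \frac{\lambda}{2}\sum_{j=1}^{k}\big\|\theta_j\big\|^2
\end{align}
```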
Analysis of the Status Quo of Emergency Information Communication Channels
The main objects of government emergency information communication are mostly the departments responsible for public crisis emergency management tasks. They hold much crisis-related data and are important nodes for the transmission of relevant emergency information. In China, the communication and exchange of this type of information generally involves the following types of institutions and personnel. (1) The highest administrative leading agency for the emergency management of sudden public crises is the State Council. As the highest authority, it is also the leading agency: many affairs are discussed and presided over by its Standing Committee, and, when necessary, designated working groups guide the work of other relevant agencies, forming an important organization that links related affairs together. (3) Subordinate offices. These institutions are mostly responsible for handling relevant emergency affairs, analyzing and processing them in accordance with legal regulations and their own mandates; they also draft and implement relevant measures and carry out to the end the matters decided by the Party Central Committee and the State Council. (4) Local institutions. These are the secondary management agencies responsible for the emergency management of sudden public crises; they rank below the agencies mentioned above and are mainly responsible for the relevant areas within their jurisdiction. (5) Specially dispatched expert groups and technical teams. The responsible agencies generally recruit relevant talents to form what we call the government's advisory team and, when the work requires, invite and dispatch relevant experts to join relevant organizations so that valuable opinions can be offered on the issues the government faces. The government strongly welcomes these professionals to participate actively in related work. In fact, when dealing with practical issues, communication is essential, and information is needed at all times. Beyond the institutions introduced above, many other exchanges can be regarded as part of emergency information communication as a whole, including many small temporary emergency command agencies, departments such as the information early-warning center, personnel such as commanders, information officers, and leaders of small teams, and broader actors such as the public and the media. In a public crisis, anyone may unwittingly become a subject of information communication: every word spoken is a message, may become a public concern, and can cause very serious consequences for others. Precisely because the scope of the subjects is so wide, the importance of information exchange should be all the more recognized and valued.
The design of the government's emergency information communication process determines the government's ability and efficiency in handling public crisis events. When a public crisis event occurs, the government should reasonably arrange the order of information transmission according to the needs of the work. In the Chinese government's handling of crisis events, the information communication process follows certain institutional requirements and can be divided into three steps. (1) After a crisis event occurs, government departments need to report the information collected in the preceding period. Once a department discovers a crisis event, it must report the specific information to the relevant management department within four hours; while the incident is being handled, each department also needs to report the progress of the handling in a timely manner.
(2) Government departments should make a preliminary response according to the severity of the crisis event. If a crisis event has a large and severe impact, each department should use its powers to initiate emergency plans and then discuss specific solutions. (3) The detailed information of the crisis event should be publicized, and its accuracy confirmed before release, to avoid public misunderstanding. Although government departments are required to publish relevant information in a timely manner, in actual practice the government has some omissions in this work. After a crisis occurs, government departments should not only prepare for crisis management but also verify the reliability of information through the differences and correlations among release channels, on the basis of verifying that the source of the information is reliable, so as to ensure the authenticity of the source and content of the information to the greatest extent. The specific process of government emergency information communication is shown in Figure 5.
Triangular Interactive Communication Model.
This model is one of the most widely used models for the early handling of crisis events. It establishes connections between government departments, the public, and the media and promotes the exchange and dissemination of information among these local subjects. The social public part of the model mainly refers to the social groups that are affected when a public crisis event occurs; government departments should promptly inform these groups of the progress of the crisis while handling it. In addition, civil servants are the part of the affected groups who shoulder greater social responsibilities, and they should consciously fulfill their responsibility for social development.
Model Construction of Grid Emergency Information Communication Channels
In establishing grid-based information communication channels, a separate organization should be set up to manage them. This organization should consist of systematic organizational departments; although it is under government management, its daily work is independent. In forming the system, the supervisory center matches supervisors to the grid units that have been divided, so that supervisory efficiency in the relevant areas can be guaranteed, forming the supervisory axis. The command center and the various work departments deal with crisis information together, forming the execution axis. In the grid-based communication channel, the district-level platform has two command centers, which can refine the handling of crisis events in the city. In managing urban objects, things in the city can be materialized, and urban public facilities, road traffic, environmental protection work, emergency resource allocation, and other types of urban affairs can be managed in a unified way.
In the government's work process, each divided area should carry out corresponding activities according to the requirements of the work, and the supervisory and executive parts should cooperate with each other. When the command center and the control center release and inspect the work of each department, the monitoring axis should cooperate with the specific work of the execution axis. The two centers should always issue reasonable tasks to each area and supervise their execution. Practice shows that all parts of the system are connected to a certain degree, which ensures that there are no omissions in the process of information communication, and each part of the system can check the content of the information. Once a crisis event occurs, the information of each part is released to the center of the system, improving the efficiency of government departments in handling crisis events.
Conclusion
In recent years, many countries and regions in the world have frequently experienced various types of public crises, which have severely affected people's daily lives, safety, and property. Long-term research and analysis show that the emergency management mechanism currently established in China has certain shortcomings. Problems in communicating emergency information can prevent emergency work from proceeding smoothly. In addition, problems in the communication channels for emergency information can hamper cooperation among departments during emergency management work, and the government's efficiency in dealing with problems is also reduced in real scenarios. In analyzing these issues, this article elaborates the concept of information communication channels, the concept of emergency management, the principles of information communication, and the characteristics of information sharing, and argues that this knowledge and technology can greatly help the communication and sharing of information. When dealing with emergencies, problems in the communication channels for emergency information can hamper the cooperation of departments carrying out emergency management work, and in reality the government's efficiency in handling problems is also reduced. In actual scenarios, the transmission and sharing of information is an important determinant of whether public crisis events can be resolved quickly. Government departments must not only use professional measures to increase the speed of information transmission but also establish a crisis event alert system based on the characteristics of the information. This ensures that the crisis management department can grasp detailed incident information in a timely manner and that the public can understand the specific situation of incident handling. This article also analyzes the problems that Chinese government departments face in handling public crises and provides a reasonable plan for information communication and sharing between departments and subjects. In future development, an emergency information management system based on multimodel fusion and artificial intelligence algorithms will provide more convenience to the work of government departments.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The author(s) declare that they have no conflicts of interest.
"Computer Science"
] |
Bluetooth in Intelligent Transportation Systems: A Survey
The rise of Bluetooth-equipped devices in personal consumer electronics and in-car systems has revealed the potential to develop Bluetooth sensor systems for applications in intelligent transportation systems. These applications may include measurements of traffic presence, density, and flow, as well as longitudinal and comparative traffic analysis. A basic Bluetooth sensor system for traffic monitoring consists of a Bluetooth probe device(s) that scans for other Bluetooth-enabled device(s) within its radio proximity and then stores the data for future analysis and use. The scanned devices are typically on-board vehicular electronics and consumer devices carried by the driver and/or passengers which use Bluetooth communications, and which then reasonably proxy for the vehicle itself. This paper surveys the scope and evolution of these systems, with system attributes and design decisions illustrated via a reference design. The work provides motivation for continued development of non-invasive systems that leverage the existing communication infrastructure and consumer devices that incorporate short-range communication technology like Bluetooth.
Introduction
Intelligent traffic systems (ITS) hold the promise of improving roadway congestion and transportation infrastructure management by capitalizing on information derived from traffic monitoring. The increasing requirement and public expectation for accurate vehicular traffic information to manage traffic flows has precipitated the deployment of large-scale traffic monitoring infrastructures. Typically, this has included the use of inductive loop detectors, microwave sensors, and relatively expensive video cameras.
On-board vehicular electronic devices as well as consumer electronic devices are emerging as an alternative traffic sensing modality to complement the existing traffic monitoring and management infrastructure. This evolving infrastructure has the benefit of providing cost-effective, real-time traffic data by leveraging existing telecommunication infrastructure such as the cellular phone network.
This paper reviews the application of Bluetooth sensing in relation to ITS. It is a field that has seen very rapid evolution in the past 10 years and will undoubtedly continue to evolve rapidly. A survey of the current state of the art can serve as a point of reference for both future and past works. In this paper, wireless sensor networks based on Bluetooth sensing are presented as a practical means of collecting a statistical representation of traffic density and flow. A basic system configuration consists of a Bluetooth probe device(s) that scans for other Bluetooth-enabled device(s) within its radio proximity and then stores or forwards the data for future analysis and use. The scanned devices are typically on-board vehicular electronics and consumer devices carried by the driver and/or passengers which use Bluetooth communications, which reasonably proxy for the vehicle itself. Thus, the data provide the information needed to extract a reasonable approximation of traffic presence, density, and flows.
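A minimal sketch of the probe-side inquiry scan described here is given below, using the PyBluez bindings to classic Bluetooth; the storage format and scan cadence are illustrative choices rather than a reference implementation from the survey, and real deployments typically hash or anonymize the recorded addresses.

```python
# Sketch of a roadside Bluetooth probe: repeated inquiry scans, with each
# sighting logged as (timestamp, MAC address) for later traffic analysis.
import csv
import time
import bluetooth  # PyBluez

def scan_once(duration_s: int = 8):
    """One inquiry cycle: return the MAC addresses currently discoverable."""
    return bluetooth.discover_devices(duration=duration_s, lookup_names=False)

def log_detections(path: str = "detections.csv", cycles: int = 10):
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(cycles):
            ts = time.time()
            for mac in scan_once():
                # each row is one sighting; the device serves as a proxy
                # for a passing vehicle
                writer.writerow([ts, mac])

if __name__ == "__main__":
    log_detections()
```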
The work reported here is contextualized within many of the uses initially suggested under the IntelliDrive initiative [1] that are oriented toward improving mobility within surface transportation systems. However, advanced applications are not practical to deploy at scale without leveraging implementations based on networked and web services and evolving internet technologies. Currently, web service realization for ITS applications is promising, since a uniform middleware can be achieved while utilizing the underlying network infrastructure [2]. The applications reviewed in this paper demonstrate this integration.
Early vehicular telematic applications were user-centric, whereas the innovative applications now within reach often combine crowdsourced information with the objective of data collection and analysis for statistical reliability and generalizability [3]. These newer-class applications are cohort-centric rather than individual- or user-centric. An early reference to the use of crowdsourcing for ITS [4] pursued the idea via a Smartphone app rather than a wireless sensor network. These applications often rely on inferencing from GPS-equipped probes or floating cars, with the intent to capture the behaviours of a statistically significant portion of vehicles, such that meaningful inferences can be made and potentially generalized to the entire population of vehicles [5], [6]. Being statistical in nature, in some cases only a very small amount of floating-car or probe data is required to infer significant events such as congestion build-up or dissolution [7]. In addition to the work on estimating traffic from GPS-enabled probe devices and map services, an alternative is the possibility of using drivers' cellphone trajectories directly as proxies for vehicles, without an intermediary Smartphone app, as could be provided by a mobile cellular service provider [8][9][10]. Cellphone trajectories are typically coarser-grained in both space and time but can be overlaid on a traffic grid and allow for some degree of traffic flow inferencing [11].
This survey is directly related to the use of Bluetooth transceivers to crowdsource data from a statistically significant sample of vehicles, where the data can be analyzed for reliable traffic inferencing. Bluetooth is a short-range wireless telecommunications standard that defines how mobile phones, computers, personal digital assistants, car radios, GPS units, and other digital devices can be easily interconnected. A prototypical example of the technology configuration is the interconnection of a driver's or passenger's mobile phone to a wireless earpiece or vehicle audio system for hands-free operation while driving.
The remainder of this paper is organized as follows. Section 2 provides a background of Bluetooth in Intelligent Transportation Systems (ITS), from some of the earliest academic references in the research area to the present. Section 3 discusses Bluetooth relative to the methods and data that can be readily obtained in a non-intrusive manner in an ITS context. Section 4 overviews the design considerations within a Bluetooth configuration for ITS applications, and Section 5 provides an illustrative reference design which encompasses many of the phenotypes of a typical Bluetooth sensor network for ITS and illustrates the ease with which data can be collected, stored, and presented.
A Survey of Bluetooth in ITS
The earliest references to using Bluetooth for tracking purposes were typically unrelated to vehicular traffic and ITS. Within overall objectives of safety and flow monitoring, early examples included Bluetooth systems to track children at a zoo [12] and students at a university [13]. The realization that wireless sensor networks generally may play a significant role in traffic monitoring emerged in the literature in the mid-2000s [14], without explicit mention of Bluetooth.
Bluetooth may now appear to be an obvious methodology for non-intrusive traffic detection and estimation; however, industry reports and tests as late as 2010 evaluated various traffic sensors without considering Bluetooth as an option [15]. In a 2010 study, the authors almost apologetically presented the restriction of their system to Bluetooth networks as a potential limitation, indicating that their work could be applied to Wi-Fi devices as well [16]. The field has since borne out that Bluetooth devices serve as better proxies for vehicles than Wi-Fi devices would have.
Another concurrent stream of research has been associated with the role of GPS in traffic monitoring, relative to estimating travel times, vehicle density, and vehicle flows. In a more ambitious program in which data were collected from GPS-enabled cellular devices, the work indicated that a 2-3 % penetration of GPS-enabled cell phones in the driver population was sufficient to provide accurate measurement of traffic flow velocity [6]. Although the percentage of GPS-enabled cell phones is clearly above 2-3 % of the population, it is also required that the GPS unit be physically on and that driver data be provided voluntarily (either implicitly or explicitly). In comparison, Bluetooth only requires that the probed device have its Bluetooth enabled and discoverable, which is more often the case than having GPS enabled as well.
The potential of Bluetooth in traffic monitoring began to appear in the academic literature around 2010 [16,17], although a small number of early field trials by local government transportation departments and agencies date to as early as 2008 [18] and 2010 [19]. Another early reference to Bluetooth sensing, a 2009 academic thesis, emphasized optimal sensor location as opposed to data collection [20]. These were some of the first publications to consider Bluetooth as a means of collecting data for traffic monitoring and ITS management. A potential exception is a 2004 reference to Bluetooth for ITS, where Bluetooth is considered as a means of inter-vehicle communication rather than as a traffic monitoring sensor [21].
The reliability of Bluetooth sensor systems for ITS depends on widespread penetration of Bluetooth-enabled devices. In a study of traffic through the Limfjord Tunnel, Bluetooth penetration was estimated at between 27 and 29 % [22]. This appears to be considerably higher than many other reports from about the same time frame. In a longitudinal study, Bluetooth penetration was seen to increase dramatically, with the number of unique MAC addresses observed increasing by 26 % over a year [22]. The rationale given for the increased penetration was the popularity of GPS units combined with an increasing number of vehicles with built-in Bluetooth. This reported level of Bluetooth penetration augurs well for continued investigation into these technologies and for increased levels of statistical confidence as more Bluetooth devices become discoverable.
Most often, one thinks of the Bluetooth-enabled cellular phone as a foundational component of a Bluetooth sensor system for ITS. This premise is supported by cellular penetration rates in Canada and the U.S., as well as elsewhere. In the first quarter of 2012, Canada had 28 million cellular subscribers in total, a penetration rate of up to 80 % if no duplicate devices per subscriber are assumed. While not all cellular phones are Bluetooth-capable, Nielsen reported Smartphone penetration of up to 64 % of mobile phone owners in the U.S. by August 2013 [23].
In one of the earliest references, researchers in 2008 envisioned the basic system components of a Bluetooth sensor system for ITS that have since evolved, stating that "one could easily imagine a battery-powered, Bluetooth enabled, smart cell phone in a plastic case chained to the side of the road to collect much more substantive travel time estimates over 24 h or 7 d to much more precisely characterize operational characteristics of either a signalized corridor or a construction work zone. Those data might be logged for later download" [18].
In general, these early studies typically focused on applications to vehicle travel time estimates (including travel time delays due to roadway obstructions) and origin-destination estimates on urban arterials and freeways. Potential applications to network analysis (shortest path), congestion reporting, bicycle and pedestrian travel times, and before-after studies were also anticipated [19]. Other research interests included the quality of the data produced by Bluetooth detection of mobile devices for applications to travel time forecasting and estimation of time-dependent origin-destination matrices within an Advanced Traffic Information System (ATIS) that supplies information to drivers [24]. More recent studies likewise investigate the role of Bluetooth sensors in estimating travel times [25,26] and vehicle velocities [27].
Privacy considerations were also foreseen early on, with the recommendation that organizations implementing Bluetooth-based tracking for ITS applications develop practices that encrypt MAC addresses and preclude storing them for more than a few hours [18].
Early studies also observed asymmetry in traffic data collected in opposite directions (e.g., westbound vs. eastbound) and attributed this to antenna position. Others extended this line of investigation, characterizing antenna patterns and observing that an omnidirectional radiation pattern is most suitable for Bluetooth data collection [28]. This is not unexpected, as it reduces the complexity of antenna placement and analysis when implementing a larger system of Bluetooth sensors.
In one such study of antenna characterization relative to travel time data collection using Bluetooth, the proportion of unique Bluetooth MAC addresses read relative to the known total traffic flow was reported to be 10 % [28]. Although published in 2013, these data appear to have been collected in 2011; as such, they represent a significant increase over the 1 % of Bluetooth MAC address reads reported just three years earlier [18]. This improvement is likely due both to the development of higher-gain antennas and to the precipitous increase in the number of Bluetooth devices in vehicles over the same period. Currently, the collection of Class of Device (CoD) information is largely absent from the available literature.
Furthermore, the role that the Received Signal Strength Indicator (RSSI) may play in the Bluetooth data collection process for ITS arose as an explicit research question around 2011 [29], with little other apparent work investigating RSSI as a Bluetooth data source specifically for vehicular traffic. One of the few available studies explores using RSSI as a means of estimating distance from the scanning point, and using Class of Device information as a potential means of differentiating pedestrians from vehicles [30].
System calibration and validation studies are becoming more rigorous as more Bluetooth sensor systems for ITS are deployed. Opportunities exist to calibrate the data against existing traffic measurement devices, including loop detectors, mechanical counters, travel survey data of various forms, and, increasingly, data inferred from cellphone trajectories. In one novel study, Bluetooth sensor data were compared with automatic license plate recognition (ALPR) systems [17]. Where available, ALPR provides a nice alternative for validation, as both ALPR and Bluetooth sensors label the data, in contrast to loop detectors, counters, and survey data. Systems have been developed that employ a variety of sensors and their fusion when scaling up [31], such as the fusion of loop-based detection with Bluetooth-enabled devices [32]. While the Bluetooth devices in that work appear to have been limited to defined probe vehicles, resulting in a low quantity of Bluetooth data, it demonstrated the opportunities of multi-sensor data fusion using Bluetooth data in conjunction with auxiliary measurements. Another instructive example of multi-system calibration potential is the Anonymous Wireless Address Matching (AWAM) proof-of-concept demonstration with the City of Houston on an urban arterial [33]. Travel times and speeds collected on identical roadway segments using a probe-based toll tag (AVI) system and the AWAM system were compared, with excellent correlation.
While large-scale deployment of commercial Bluetooth traffic monitoring is still in its infancy, a number of pilot programs of various scales are being deployed. In Clark County, a pilot program costing $540,000 has approximately 20 Bluetooth probes installed, collecting data along the relatively high-traffic Andresen corridor, in an effort to determine whether the system can provide the information that traffic engineers need [34]. The study reports the system reading 3-5 % of vehicular traffic via Bluetooth MAC addresses, which has been recognized as sufficient to provide information on traffic flow. According to the authors, sufficient data and analysis were anticipated by early 2014 for agencies to use the outcomes to adjust traffic signal timing. Similar pilot programs are ongoing in many other countries [35][36][37]. While many pilot programs have focused on traffic flow on corridors and arterials, other Bluetooth scanning applications are associated with work zone diversions [19,38,39]. At present, the majority of pilot programs are related to travel time informatics and Bluetooth data assessment towards building evidence-based cases for traffic signal retiming [40].
One of the earliest commercial systems that used Bluetooth for vehicle identification for travel time estimation appears to be BLIDS [41] (http://www.blids.cc/). BLIDS was introduced in early 2008, with over 50 systems deployed primarily along corridors. Traffax Inc. (http://www.traffaxinc.com/) is also one of the early commercial vendors of Bluetooth traffic monitoring systems, with a system known by the trade name BluFax, introduced in 2009. Traffax has a patent application pending (CA2711278 A1) claiming a priority date of January 2008 (provisional filing), but it may face challenges, as the system of [18] was already deployed at that time. Blipsystems (http://www.blipsystems.com/home/) supplies a commercial solution for Bluetooth tracking and traffic monitoring, with a product denoted BlipTrack. Another commercial system is TrafficCast (http://trafficcast.com/), with a product denoted BlueTOAD. Iteris (http://www.iteris.com/) also has a similar product, denoted Vantage Velocity, for capturing Bluetooth MAC data, but it does not appear to have integrated cellular connectivity. In these commercial systems, most if not all of the components discussed in this survey are included as product offerings. Somewhere in between commercial systems and academic prototypes, various transportation institutes and agencies are leveraging intellectual property developed at or in conjunction with universities. An excellent example is the Texas A&M Transportation Institute [37,42].
With the observed data-gathering potential of Bluetooth for ITS applications, combined with the emergence of a 'big data' culture, numerous references to systems and implementations have appeared from industry, governments, and academia. This diversity of approaches, motivations, and originators of the research lends credibility to the expanding role that Bluetooth sensing can play in ITS. In spite of this, there continues to be a need to explore system integration and validation for the various combinations of system configurations that can be envisioned.
Bluetooth Technology
Bluetooth is one of several available wireless technologies that may be employed to assist in resolving location information extracted from a consumer electronic device (Table I). This survey focuses on classic Bluetooth 2.0, whose range is well-tailored for monitoring or detecting devices residing in or integrated into vehicles, such as Smartphones, Bluetooth earpieces, and car audio. NFC and BLE 4.0 are more recent market entries, with an emphasis on low power and more personal or body-centric networking. WiFi and cellular are intended for wider-area networking.
In addition to the technologies in Table I, there is a variety of wireless networking technologies built on the IEEE 802.15.4 protocol, such as ZigBee or XBee. These and similar technologies are not considered here, as they have not established a dominant presence in consumer devices such as smartphones. However, they are technologies to stay aware of as automobile manufacturers incorporate greater degrees of low-cost wireless technology into their product lines. Relative to the basic functionality of inferring vehicle presence by scavenging radio signals, ZigBee could provide an alternative to Bluetooth, although at this time it is difficult to foresee ZigBee becoming as pervasive as Bluetooth.
In general, Bluetooth scanning requires a device to probe the local wireless environment and detect the proximity of Bluetooth radios. In the ITS context, the proximate Bluetooth radios (typically, drivers' or passengers' Smartphones, earpieces, or on-board car audio) detected by the probe device serve as proxies for vehicles. The objective of the probe devices (probes) is to collect information on vehicle presence (detection by one probe) and vehicle trajectory (detection by multiple probes in sequence) via Bluetooth device discovery, and then transmit this information via either an intermediate wireless tier or directly over a cellular network to a web service or backend server. The communication protocols between probes and backend servers are typically based on the TCP/IP protocol stack, leveraging not only the physical communication infrastructure but also the highly developed Internet IP infrastructure. The data collected from a Bluetooth scan can be fairly detailed, including the detected Medium Access Control (MAC) address, Class of Device (CoD) information, and metadata from the manufacturer or as specified by the device owner (Fig. 1). While gathering and storing all of this information could raise security and privacy concerns, the MAC address, the primary means of identifying a Bluetooth device, can easily be anonymized such that a device remains uniquely distinguishable from other devices while preserving the user's identity to a reasonable degree.
The Bluetooth address itself is a unique 48-bit device identifier, where the first 3 bytes of the address are assigned to a specific device manufacturer by the IEEE (www.ieee.org/), and the last 3 bytes are freely allocated by the manufacturer. Even if the manufacturer of a device is known, the number of possible Bluetooth addresses is immediately limited to 16,777,216. As well, since only devices that are in discoverable mode can be detected by the scan, a user can simply set their device to non-discoverable mode to avoid being detected. Software tools are available that allow brute-force discovery of non-discoverable devices, an early example of which is RedFang, but this is usually too complex for a minimal hardware configuration and not necessary for data collection purposes associated with ITS. It is also possible to burrow deeper into Bluetooth connections to provide connection-based tracking for fine-granular or building-wide device tracking [43].
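To illustrate the anonymization just described, a salted one-way hash of the detected address yields a stable pseudonym without retaining the MAC itself. The following is a minimal sketch; the hash choice, salt handling, and truncation length are our assumptions, not taken from any surveyed system:

```python
import hashlib

SALT = b"operator-secret"  # assumption: operator-managed secret, rotated periodically

def anonymize_mac(mac: str) -> str:
    """Map a Bluetooth MAC address to a stable pseudonym without storing the address."""
    digest = hashlib.sha256(SALT + mac.encode("ascii")).hexdigest()
    return digest[:12]  # truncated pseudonym: still enough space to keep devices distinct

print(anonymize_mac("00:1A:7D:DA:71:13"))
```

Rotating the salt bounds how long any pseudonym remains linkable, in the spirit of the retention practices recommended in [18].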
It is desirable, but not necessary, to employ probes in such a way that the system is scalable. This is one of the most unique features of a Bluetooth data collection system: an organization can start with a very limited, even portable, system and easily scale it to meet increased sensing demands. In this regard, the backend data collection, storage, and analysis hardware requirements are modest. Modern web servers and connectivity are more than sufficient to handle the volume of data that could be collected even from a large number of Bluetooth probes. The scalability issue has essentially been further resolved by existing cellular and internet infrastructure, web servers, and data services collectively being developed as a Service-Oriented Architecture (SOA) [44,45].
The primary equipment used as the Bluetooth probe device must have the ability to be set for device discovery. Several of the existing Bluetooth module libraries are designed to detect eight devices per inquiry, which is insufficient for most vehicular traffic applications. The limitation of eight appears to be a consequence of the anticipated use cases of Bluetooth-enabled consumer devices, where supporting eight connections is likely more than sufficient. In effect, many existing Bluetooth modules are designed for connecting with the Bluetooth devices they discover, whereas for ITS Bluetooth data collection the primary requirement is only for devices to be detected. During periods of traffic congestion, it is desirable to discover as many surrounding devices as possible, but a connection to the surrounding devices is not required. The important aspect is to ensure that the selected Bluetooth probe modules can discover as many devices as possible. In the reference design (Section 5), up to 250 unique devices per inquiry were detectable and hence recordable. This detection capability is not a restriction of the standard but rather of particular implementations. For example, a detailed description of the Bluetooth discovery protocol that simulates the detection of 15 devices within a few seconds is available in the literature [46].
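As a concrete sense of what an inquiry-based probe loop looks like, the sketch below uses the PyBluez library, which wraps the classic Bluetooth inquiry procedure; the repeated-inquiry loop and CSV-style logging are illustrative choices, not the design of any system surveyed here:

```python
import time
import bluetooth  # PyBluez

def scan_loop(inquiry_s: int = 8) -> None:
    """Repeatedly run Bluetooth inquiries and log (timestamp, MAC) detections."""
    while True:
        t = time.time()
        # lookup_names=False keeps the inquiry short; addresses alone suffice for ITS sensing
        for mac in bluetooth.discover_devices(duration=inquiry_s, lookup_names=False):
            print(f"{t:.0f},{mac}")

scan_loop()
```

A stock library such as this inherits whatever per-inquiry device limit the underlying module imposes, which is exactly the constraint discussed above.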
Bluetooth Sensor System: Attributes & Design Decisions
When designing a Bluetooth sensor system for ITS applications, there are choices in system attributes that become design decisions unique to the context and objectives of the system in deployment (Fig. 2). The basic configuration requires the designer to decide what type of probe device(s) will be used, how many probe devices are required, and where and how they will be located and fixed in the environment.
- Commercial/Prototype: In academia, a prototype system can often be assembled for several thousand dollars and provide entry-level data for exploring Bluetooth device detection. A commercial-grade installation of similar scope would likely cost an order of magnitude more. Intermediate-grade systems are also available, for example [30].
- Fixed/Portable: A fixed system implies a permanent or semi-permanent installation, whereas a portable system implies a set-up or tear-down time of a few hours at most and a deployment period measured in days, weeks, or months. In the case of a portable system, consideration has to be given to the application domain; a portable system offers the convenience of being more easily redeployed, and typically implies a storage battery and possibly a battery charging system, such as solar. A fixed system is typically more robust, but greater attention to sensor placement is required, as the initial placement decisions may not be easily changeable. A fixed system may also rely on battery power, although a permanent power supply may be cost-effective as well. In both fixed and portable systems, GPS positioning is required. Both fixed and portable systems may be online or offline.
- Online/Offline: An online system has the potential for real-time data collection (and potentially analysis), with data stored in the probe for only very short periods of time and backhauled at regular intervals over a wireless connection to a dedicated server. An offline system requires much more significant data storage and relatively simple retrieval. A minimum requirement would be for the probes to write data to an onboard SD card or similar, with a planned manual data retrieval protocol. On-board data storage is typically Flash-based and very robust.
- Wireline power/battery/solar: The choice of power to the probe devices depends largely on the intended application. A portable application will most probably be battery powered and, depending on its intended duration and the environment, may also be equipped with a solar charging system. Wireline power may be an economical alternative in a fixed system, although a fixed system may also run on battery power.
- Networking: Networking considerations are typically limited to online systems, where the designer must consider the means of data transport. Two of the more likely networking technologies are cellular (e.g., GSM) and WiFi. The consideration in selecting WiFi would be to ensure adequate coverage over a wide area.
- Tiered wireless configuration: A tiered wireless sensor network is a very common architecture for sensor networks (e.g., [30] and the reference design in Section 5). In a system with multiple probe devices, a design decision must be made whether each probe will be equipped with its own WiFi or GSM/GPRS module, allowing direct communication from each probe to the back-end server. The alternative would be to implement a middle wireless tier, adding some complexity to the system. An example of a multi-tiered wireless implementation is presented in Section 5. It is these authors' opinion, however, that a tiered wireless system should be avoided and direct communication with the probes be supported.
- RSSI capable: Only a very limited number of published works have incorporated RSSI for traffic monitoring in an ITS context [29,30,47]. At the time of writing, RSSI data does not play a significant role in Bluetooth systems for ITS applications, although the rapid evolution of this area may illuminate the potential and utility of RSSI data within a few years.
- Bidirectional: A bidirectional system may be able to proactively communicate over Bluetooth to 'subscribers' or discovered devices, for example by providing traffic alerts. A potential business model would allow users to purchase low-cost devices solely for the purpose of being tracked [18], with a subsequent feature of being able to backcast from the ITS to the device. This device may be as simple as a Smartphone running a suitable app. The opportunity to automatically log a subscriber's OBD-II data would also require bidirectional communication via an established connection with a simple Bluetooth OBD-II dongle.
- Remote monitoring and/or control: The Bluetooth probes should be chosen so that they can be remotely monitored and preferably also configured. The physical environment of installation sites (heavy traffic areas, exposure to all weather conditions) may make on-site monitoring and configuration both uncomfortable and potentially dangerous. Remote monitoring allows for early detection of malfunctioning probes and other inconsistencies. Remote configuration can range from adjusting the sampling rate to sleeping sensors when not required, which becomes critical for battery-powered units. The implication of remote monitoring and configuration is that wireless access to the probes via WiFi or cellular is available.
- Cross validation/calibrating: As a Bluetooth sensor samples a proportion of the by-proxy vehicle population, considerable emphasis has, by definition, to be placed on validation and calibration. A Bluetooth sensor network is relatively easy to install and provides considerable opportunity for sensor fusion. At minimum, designers must consider the Bluetooth radio ranges of the probe(s) relative to the sampling area and clock synchronization between multiple probes. In data analysis, considerations include, but are not limited to, the ability to handle multiple MAC address reads from a given probe sensor that represent different vehicles as well as multiple reads of the same vehicle (e.g., a vehicle stopped at a traffic signal for several sampling periods), simultaneous MAC address reads from two or more probe sensors, one or more reads of a single MAC address by multiple sensors either simultaneously or in sequence, and MAC address reads of devices that are not necessarily sourced from a vehicle. The fact that the data are labeled can be used to provide some level of differentiation. There will always be some degree of uncertainty; for example, the data analysis is unlikely to be able to definitively differentiate a single vehicle from public transportation (e.g., a bus with 40 passengers and multiple Bluetooth reads). Managing uncertainty is one of the more academic aspects associated with Bluetooth traffic monitoring and promises to be a rich area of research [48].
- Security: By virtue of the fact that all wireless communication devices need to signal to some degree in plain sight, security will always be an issue. It is not possible to alter this fact, as communicating devices need a standard means of identifying one another.
Bluetooth scanning for ITS applications such as simple traffic flow will become contextualized within big data concepts, in which the opportunities for exploring and exploiting the rich suite of data such systems are capable of generating constitute a significant research area in their own right. Examples include generating trajectories from uncertainties in measurement and detection [49] using techniques like Hidden Markov Models. Similar efforts will produce forecasting models that use massive amounts of real-time Bluetooth device data.
A Reference Design
This section presents a reference design as an example of one combination of many of the system attributes and design choices overviewed above. The system used multiple Bluetooth transceivers, consisting of one master node (access point) and multiple sensor probes, deployed around a major intersection in Winnipeg, Canada, during winter 2013 (Fig. 3) [50]. The probes collected vehicle presence information (detection by one probe) and vehicle trajectory information (detection by multiple probes in sequence) via Bluetooth device discovery, and then transmitted this information to the master node via the 802.15.4 protocol.

Architecture

The basic system architecture is that of a Bluetooth sensor network, interconnected with an XBee/802.15.4 middle tier to the master node, and then a GSM wireless backhaul tether to a web data collection and processing portal. XBee Pro was selected as the middle-tier wireless networking technology, as it offers a low-power solution with sufficient range for the interconnection between the master node and the sensor probes. GSM was selected as the cellular tier as a means of aggregating and forwarding the collected data to a web server for processing and display. The data sent to the central server were used to display the current traffic density and average velocity of vehicles at the intersection on a web front end (including a mobile website) at five-minute intervals.
Probe design
The probe design uses an Arduino Uno development board, based on the ATmega328 microcontroller, together with an XBee module and a Bluetooth Pro module, both of which connect to the Arduino board (Fig. 4).
During the data collection process at each probe, the information was organized in a consistent manner. Each device frame is 8 bytes, comprising a marker byte, a 4-byte timestamp, and 3 bytes for the truncated MAC address. To increase privacy, only half of the MAC address is recorded, which still provides roughly 16 million unique combinations for one probe network. The first 3 bytes of the packet are reserved: a control byte, a length byte, and a probe id byte. The maximum packet size that can be produced is 100 bytes, as this is the size of the receive buffer of the XBee module. The XBee module on the master node was set as the network coordinator, and the XBee module on each probe was set to associate with the network coordinator. To save power, the XBee modules on the probes were configured to hibernate when not in use. Due to changes in temperature and other internal and external factors on the probes and master node, the calculated time offset at various nodes slowly drifted over time, requiring scheduled clock resynchronization.
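To make the frame layout concrete, the sketch below packs one detection into the 8-byte device frame (marker, 4-byte timestamp, 3-byte truncated MAC) and prepends the 3 reserved header bytes; the specific marker, control, and probe-id values are placeholders, since the deployed values are not given:

```python
import struct
import time

MARKER = 0xAA                    # placeholder marker byte
CONTROL, PROBE_ID = 0x01, 0x07   # placeholder control and probe-id bytes

def device_frame(mac: str) -> bytes:
    """8-byte frame: marker | 4-byte timestamp | 3-byte truncated MAC."""
    ts = int(time.time()) & 0xFFFFFFFF
    mac_tail = bytes.fromhex(mac.replace(":", ""))[-3:]  # only half the address, for privacy
    return struct.pack(">BI", MARKER, ts) + mac_tail

def packet(frames: list) -> bytes:
    """Header (control, length, probe id) plus device frames, capped at the 100-byte XBee buffer."""
    body = b"".join(frames)
    assert 3 + len(body) <= 100, "would exceed the XBee receive buffer"
    return struct.pack(">BBB", CONTROL, len(body), PROBE_ID) + body

print(packet([device_frame("00:1A:7D:DA:71:13")]).hex())
```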
The function of the probe is illustrated in the main flow diagram of Fig. 5, and the scan flow sub diagram is illustrated in Fig. 6.
Master node
The master node consisted of an Arduino-based GBoard from iteadstudio.com. For the GSM module, a preexisting library was used, with modifications and additional functionality to meet system requirements.
Power

The desired minimum run time of each probe was 24 h. A combination of battery-only and battery-plus-solar configurations was used, demonstrating the viability of a solar-rechargeable source for the wireless sensor network. The rechargeable power sources were two 4,400 mAh, 3.7 V lithium-ion batteries connected in parallel to a step-up converter (to 7 V). Both batteries are connected to lithium-ion chargers powered by a 3.7 W, 6 V solar panel. A custom step-up converter was also designed to have a lower current draw than the off-the-shelf converter, thereby increasing the life expectancy of the supply.
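A rough energy-budget check (our own back-of-envelope arithmetic, assuming lossless conversion) shows the average current draw the stated pack can sustain over the 24 h target:

```python
# Two 4,400 mAh, 3.7 V cells in parallel, stepped up to 7 V, 24 h target runtime
energy_wh = 2 * 4.4 * 3.7                      # about 32.6 Wh stored
avg_current_ma = energy_wh / 7.0 / 24 * 1000   # about 194 mA average at 7 V, ideal converter
print(f"{avg_current_ma:.0f} mA average budget")
```

Converter losses and cold-weather derating would reduce this budget, which is why the custom low-draw step-up converter mattered.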
Database and front-end

A MySQL database was used to store the data, refreshed at five-minute intervals, and to display it to users through a website or a smartphone app. Initial scripts were written in which device MACs are compared to a table of existing MACs to find matching devices at a different probe location. When a match is found, a probe-pair record is created using the start probe, end probe, and detection time difference; this is used to estimate the velocity of the device. Another PHP script runs automatically every five minutes, refreshing the traffic data displayed on the websites. The data are stored as the total travel time for all vehicles and the total vehicle count through each probe-pair.
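The probe-pair matching just described can be expressed in a few lines; the detection records, pairing rule, and segment length below are simplified assumptions (the deployed scripts were PHP against MySQL):

```python
from collections import defaultdict

# (probe_id, anonymized_mac, unix_time) detection records -- illustrative data only
detections = [
    (1, "a3f9c2", 1000.0), (2, "a3f9c2", 1072.5),
    (1, "7b10de", 1005.0), (2, "7b10de", 1080.0),
]

first_seen = defaultdict(dict)
for probe, mac, t in detections:
    first_seen[mac].setdefault(probe, t)  # keep first read per probe to absorb re-reads

SEGMENT_M = 900.0  # assumed distance between probe 1 and probe 2
for mac, seen in first_seen.items():
    if 1 in seen and 2 in seen:
        dt = seen[2] - seen[1]
        print(f"{mac}: travel time {dt:.1f} s, speed {SEGMENT_M / dt * 3.6:.0f} km/h")
```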
Implementation

A Bluetooth discovery test was conducted to ensure devices traveling in excess of 80 km/h could be detected. The XBee distance test consisted of testing connectivity while increasing the separation between master and slave devices; line-of-sight difficulties were encountered for distances exceeding 200 m. As a complete test of the system, four probes were placed along a major thoroughfare to capture traffic over several 24 h timeframes for multiple iterations representative of different traffic and weather conditions. Probes and master nodes were mounted to light standards along the thoroughfare using steel strapping at an elevation of approximately 2.5 m. Several single-probe and multi-probe trials were carried out over winter 2013. Data were cross-referenced to data obtained by mechanical traffic counters and to cellular service provider data that can serve as proxies for users' movements between cell towers. Qualitatively similar trends in traffic density and flow were observed from the data sources. The reference design was found to capture an average of 4.5 % of real vehicle traffic (when compared to mechanical counters), which is above the 2-3 % conjectured as being required for statistically accurate traffic flow inferencing. These initial findings lent credibility to the reference design as having the characteristics of a viable full-scale system. The cost of the reference design described above was approximately $1,500 in 2012 and 2013.
The reference system described above provides insight into the type of data that can be easily and cost-effectively collected at relatively low capital cost. The reference design, which was an academic prototype, is validated conceptually by others who report similar systems and investigations. In a study on traffic monitoring with Bluetooth sensors over ZigBee, an intervening multi-tier network is discussed [30]. Although a different protocol is deployed, the basic architecture and rationales are similar. Others similarly discuss the use of Bluetooth data to infer vehicle proximity as a means of estimating traffic characteristics, including the use of solar power to meet system energy requirements [51,52].
Additional Considerations
As of the time of writing, the emerging wireless technology for inferring traffic is that based on Bluetooth 2.x. Bluetooth 2.x is the most widely deployed mid-range communication protocol and has a range commensurate with distances typical of traffic and traffic control systems. The ubiquity of Bluetooth 2.x and its ready application to traffic-related contexts currently makes it a natural choice for ITS applications. In the future, communication alternatives and superior wireless versions (ostensibly developed for different user applications) will undoubtedly contribute to the data collected for ITS. For example, Bluetooth 4.0 is currently available for 'coin-cell-powered' devices and is not obviously applicable to common ITS applications; however, this applicability is likely to develop as new opportunities emerge. The same is true for other technologies, in the spirit that the most effective uses of a technology often emerge after its development rather than as a bounded, fixed a priori specification.
From our experience, one significant recommendation in the field of Bluetooth sensor systems for ITS is to avoid ad hoc intermediate networks. The difficulties with installation and reliability make data collection networks that rely on lines of sight for communication too constraining. In addition, the XBee communications used in the reference design were power-intensive and not tuned for power conservation. As an alternative, each sensor should ideally be equipped with GSM/GPRS, with the backend service parsing data from each probe directly. The additional costs of this approach are likely to be outweighed by the benefits of much simpler deployment and lower maintenance. The intervening XBee communication network also added complexity through the use of a less efficient protocol for data transmission than GSM directly. This follows the principle of "lex parsimoniae": the simplest system is most likely the best.
Future work should investigate the integration of mobile Bluetooth probes alongside stationary probes. This increases the processing and analysis burden for the data collected, but would augment the data gathered by the stationary Bluetooth probes. An example of a vehicle trajectory recorded while scanning is given in [53], while a patent application for a mobile probe can be found in [54]. Utilizing mobile probes would also allow augmenting the data with information such as acceleration at intersections, and would provide a direct means of inferring environmental conditions such as ice and snow.
Finally, a decided advantage of a Bluetooth traffic monitoring system delivered through a Service-Oriented Architecture is that it can easily and inexpensively augment any existing traffic measurement system.
The real academic, engineering, and organizational challenges will lie in full-scale deployment of a Bluetooth sensor system at many intersections across a large urban centre.The probe sensor network may be augmented by a large number of probe vehicles that could also be configured to upload data from proximate devices detected while in transit.The only requirement for this additional data from probe vehicles is to augment the discovered Bluetooth MAC address and timestamp with GPS information.Once scaled to city-wide deployment, the data mining challenges will be considerable and will require a whole new big data approach, but will also become an invaluable input to an ITS.
Table I Wireless technology ranges: BLE 4.0, <25 m (estimated); Bluetooth, 1-100 m (class dependent); WiFi, 5-100 m (typical); Cellular, km+ (cell sector, typical).
Fig. 1 Simple Bluetooth Probe Scan Data
Fig. 2 Bluetooth traffic monitoring design decisions
Fig. 3 High level system overview
"Computer Science",
"Engineering",
"Environmental Science"
] |
Energy Efficient Communications for Reliable IoT Multicast 5G/Satellite Services
Satellites can provide strong value-add and complementarity with the new fifth generation (5G) cellular system in cost-effective solutions for a massive number of users/devices/things. Due to the inherent broadcast nature of satellite communications, which assures access to remote areas and support for a very large number of devices, satellite systems will gain a major role in the development of the Internet of Things (IoT) sector. In this vision, reliable multicast services via satellite can be provided to deliver the same content efficiently to multiple devices on the Earth, for software updating to groups of cars in the Machine-to-Machine (M2M) context, or for sending control messages to actuators/IoT embedded devices. The paper focuses on Network Coding (NC) techniques applied to a hybrid satellite/terrestrial network to support reliable multicast services. An energy optimization method is proposed based on the joint adaptation of: (i) the repetition factor of data symbols on multiple subcarriers of the transmitted orthogonal frequency division multiplexing (OFDM) signal; and (ii) the mean number of needed coded packets according to the requirements of each group and the physical satellite link conditions.
Introduction
The new fifth generation (5G) cellular system has recently been defined as the network of networks because it should consist of multiple seamlessly-integrated radio access technologies (RATs). In addition, the 5G system has been addressed as a densification of networks for its ability to dynamically integrate heterogeneous networks, from Wi-Fi access points, machine-to-machine (M2M) communications, and Device-to-Device (D2D) communications up to small cells also operating in the millimeter wave bands [1,2]. Finally, 5G could be considered a dense network for its challenge to connect a massive number of things, i.e., the billions of objects and devices expected to be on the Internet of Things (IoT) by 2022 [3][4][5].
Recently, terrestrial low-power wide-area network (LPWAN) technologies have been proposed to interconnect massive numbers of IoT devices on licensed/unlicensed frequencies, including proprietary and open-standard options, for instance: (1) the proprietary, unlicensed SigFox ultra-narrowband technology, operating over a public network in the 868 MHz or 902 MHz bands; (2) the proprietary LoRa spread-spectrum technology, transmitting in several sub-gigahertz frequencies of the Industrial, Scientific and Medical (ISM) band; and (3) the open standards of the 3rd Generation Partnership Project (3GPP), i.e., (i) Narrowband-IoT (NB-IoT) and (ii) LTE-M, operating on existing licensed bands of the Global System for Mobile communications (GSM) and Long Term Evolution (LTE), respectively.
In this context, satellite technology plays a key role in providing global IoT connectivity anywhere, especially for mission-critical and industrial applications. LPWAN terrestrial solutions can support massive numbers of IoT devices, while a satellite link, usually via a concentrator (gateway), can provide connectivity among LPWAN networks in remote areas such as desolate tundra or oil platforms at sea. For these scenarios, the latency requirements are often relaxed for many IoT applications, e.g., continuous monitoring of air pollution or wildlife positions, and the low data rate transmission allows the use of low-bandwidth satellite infrastructure [6].
The IoT platform provider Stream Technologies and the LoRa Alliance already support reliable delivery of near real-time communications for global connectivity over the Iridium mesh architecture, which comprises 66 Low Earth Orbit (LEO) satellites. In addition, Vodafone and Inmarsat provide satellite backhaul for IoT devices to remotely track animals in wide rural geographical areas. Additionally, the support of direct access of IoT devices to satellite is the challenge taken up by ORBCOMM, even if the requirements of low maintenance costs and extremely long battery duration are quite demanding. In any case, the satellite segment must be part of the mix of 5G access technologies, as considered in 3GPP Release 16, for its major advantage of covering global areas and for providing the increasingly dominant IoT connectivity promised by the advent of the 5G era.
Another key point of satellite technology is the ability to multicast a message to a group of IoT devices over a large area, hence driving down cost. This is particularly relevant in several M2M/IoT applications, where smart devices can be grouped according to their service requirements and/or the information they need to receive. In this context, reliable satellite multicast communications can reduce control signaling overhead and overall energy consumption. For example, the forward link could be used to provide alerts or commands to a group of actuators deployed in a large, inaccessible area, or, in emergency situations, to support communications among rescuers for periodically updating their positions in a disaster area and/or for sending updates including warnings, alarms, and security messages [7]. Besides, satellites can simultaneously deliver software/firmware updates to a large number of cars, or multicast traffic and road condition information together with indications of alternative routes, guaranteeing continuity of communications without the risk of losing the connection, as can happen in terrestrial wireless networks due to sudden traffic peaks. Moreover, satellite high-speed backhaul connectivity allows multicasting the same content (video or HD/UHD TV), since content distribution remains a key satellite-based service. In particular, for these bandwidth-hungry services, the satellite can be used for off-loading terrestrial 5G network traffic.
In this paper, we apply an advanced Network Coding (NC) scheme to reliable multicast, typically for content delivery or reconfiguration to a group of M2M/IoT devices, and we propose an approach to optimize the overall end-to-end energy consumption for a hybrid IoT 5G/satellite scenario. The adopted NC method assures an efficiency improvement because the acknowledgments usually needed from each terminal can be avoided, thus minimizing both the delivery delay and the energy consumption needed to decode the data flow. In addition, to minimize the energy required for the overall multicast packet transmission, an optimized transmission scheme is introduced, in which multiple data symbols are replicated over multiple subcarriers of the orthogonal frequency division multiplexing (OFDM) signal, by varying: (i) the adopted modulation schemes according to the channel conditions; and (ii) the number of coded packets K that need to be transmitted to all the on-Earth devices under the constraint of a target outage probability for the group-based sensors/devices.
In [8], a similar reference scenario for reliable multicast content delivery applications and NC techniques is also introduced. However, in this paper, a completely different methodology is proposed: (i) NC operations are carried out at the ground control center (CC), as a transparent Geostationary Earth Orbit (GEO) satellite is considered, while in [8] a satellite with on-board processing capabilities for NC is assumed; (ii) data symbols are replicated in the frequency domain, while in [8] a rate reduction factor is used to increase the time duration of data symbols; and (iii) we stress that our goal is different from that in [8], because here the repetition factor is used to reinforce the transmission so as to minimize the overall delivery energy of the hybrid satellite/terrestrial system.
The evaluation and performance assessment of the proposed scheme are addressed by providing numerical results. Finally, the conclusions are outlined.
IoT over 5G/Satellite
The massive number of connected sensors/actuators and vehicles in IoT scenarios, often distributed over wide and remote areas, usually requires satellite technology. The hardware- and energy-constrained IoT communications in unlicensed bands below 1 GHz (or in the ISM band) and satellite communications at frequencies above 10 GHz favor a hybrid terrestrial/satellite solution, where terrestrial LPWAN networks act together with 5G systems, which could share technology and waveforms with the satellite segment, e.g., in part of the C-band, to provide global connectivity [9,10].
Satellite could complement terrestrial networks in facing the challenges of: (1) direct connection of billions of IoT devices in a resource-efficient way with the new narrowband satellite communication technologies; (2) interoperability between satellite systems and a plethora of different RAT technologies and heterogeneous sensors/devices; and (3) efficient multicast delivery of the same content stored in 5G small-cell data caches to support applications that require very low (sub-1 ms) latency.
The unprecedented opportunity for the satellite to provide a significant contribution is linked to the definition and implementation of network slices, one of the pillars introduced in 5G, which represent the set of infrastructures and protocols made available to satisfy the requirements of a service, also drawing on different technological domains (multi-tenant) [11]. In addition, to enhance satellite network device interoperability and the integration of satellite and terrestrial networks, Software-Defined Networking (SDN) has recently been introduced into this domain due to its expected flexibility, programmability, simplified management, and reduced operating costs [12].
Satellite systems are historically efficient point-to-multipoint distribution platforms. An interesting overview of the use of satellite in remote IoT environments can be found in [7]. This work highlights the main applications of IoT via satellite, such as smart grids, environmental monitoring, and emergency management, identifying challenges and issues for efficient satellite support of M2M/IoT networks. Specifically, the following have been pointed out: (1) specific MAC protocols to assure the access of sensors to satellite resources; and (2) support for IPv6 over satellite.
IoT applications usually rely on group-based data distribution according to service requirements or the location of the group; as a consequence, multicast communication schemes are well suited to the IoT domain, where many-to-many information passing schemes have been proposed for resource-constrained embedded devices [13]. In addition, a recent application trend requires multicast-based group addressing, which has been introduced and compared with the traditional unicast mode in [14]. To this purpose, in [15], a radio resource management policy is performed on a per-group basis to provide multimedia content delivery over 5G/satellite networks, validating its performance for the effects of subgrouping approaches, well known in terrestrial networks, and its robustness to the long propagation delay, a typical constraint in satellite systems.
Network Coding
The Network Coding (NC) approach, starting from the work of Ahlswede et al. [16] for wired networks, has recently gained traction in wireless terrestrial networks [17][18][19][20] and can be a key enabling technology for 5G/satellite, mainly for reliable multicast services. NC techniques applied to satellite networks allow: (1) reducing the overall capacity needed to transfer data between peers, saving bandwidth on the forward link; and (2) increasing network robustness in the case of residual packet erasures after the channel coding operations performed at the physical layer. As satellites commonly support simultaneous transmissions of the same packets toward a subset of nodes within the whole network, NC schemes further improve reliable multicasting. In the basic concept, NC allows an intermediate node (e.g., a relay node) between the source and the destinations to process the received packets and send linear combinations of them (in the simplest case, a XOR operation). Packets are linearly encoded with coefficients chosen independently and randomly from a Galois field of size q. The destination nodes can decode them only if enough independent packets have been received, according to the random linear network coding (RLNC) concept. To apply this two-way relay NC technique to a satellite network, we can consider a hub station and a satellite with a bent-pipe payload, or directly a satellite with a regenerative payload, to perform the XOR combining of previously received packets, for example sent by two terrestrial terminals. Each Earth terminal receives the XOR-combined packet and can process it with its own stored packet to retrieve the packet transmitted by the other terminal, as shown in Figure 1a. The advantages are in terms of lower delivery delay and reduced bandwidth utilization (up to 50 % if NC is performed on board the satellite). In this case, NC improves applications such as videoconferencing between two satellite terminals or the use of satellite backhauling in geographically remote regions.
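The two-way relay case can be made concrete with a toy XOR over byte strings: the relay broadcasts A ⊕ B once, and each terminal recovers the other's packet from its own stored copy (equal-length packets assumed):

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a, pkt_b = b"TERMINAL-A-DATA!", b"TERMINAL-B-DATA!"
relayed = xor(pkt_a, pkt_b)          # one broadcast replaces two separate forwardings
assert xor(relayed, pkt_a) == pkt_b  # terminal A recovers B's packet
assert xor(relayed, pkt_b) == pkt_a  # terminal B recovers A's packet
```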
For reliable multicast communications, the data are segmented and transmitted by an Earth station to the end terminals through the satellite. Each terminal sends an acknowledgment (or negative acknowledgment) message indicating whether or not it has correctly received each packet, as in Figure 1b. NC is performed on board the satellite, providing redundancy in the packets to allow each multicast terminal to recover missed packets.
In Figure 1c, another transmission scheme is shown that does not require any acknowledgment message, thus minimizing the delivery delay, which usually represents a heavy constraint for reliable multicast services. As further explained, this is the assumption we made in deriving our system model. To this purpose, as explained in Section 4, we adopt the analytical derivation recently proposed in [21], which simplifies the one in [22], characterizing the exact decoding probability that a receiver node obtains N linearly independent coded packets among K (K ≥ N) received coded packets. In the literature, the advantages of NC for reliable multicast communications via satellite are widely analyzed, and a specific RFC has also been proposed by the IETF [23]. The benefits of adopting NC for content delivery in DVB-SH systems for handheld terminals without a return link, and in DVB-S2/RCS systems with fixed and mobile terminals, are highlighted in [24]. NC multicast is also analyzed in [25] for a multibeam satellite system, where receivers can tune to different frequencies or polarizations to simultaneously decode orthogonal transmissions from adjacent beams, and NC is used to enable decoding of signals from adjacent beams (spatial diversity).
Besides, the performance of NC-based satellite communications connecting two remote clusters on the Earth is analyzed in [8], where an optimization procedure is defined to minimize the data flow delivery delay. In addition, a comparative performance analysis is provided with respect to a classical negative-acknowledgment (NACK) scheme, as depicted in Figure 1b, for a reliable multicast satellite communication protocol relying on feedback channels (typically, the DVB-RCS/DVB-RCS2 systems), as well as with the classical NC scheme already investigated in [24], where no transmission rate optimization is performed.
Recently, NC has been combined with Multipath TCP (MPTCP), which exploits multiple TCP connections using different paths, to protect TCP transmissions from packet losses and improve user throughput, as depicted in Figure 1d. Differently from analytical and simulative approaches, De Cola et al. addressed the integration of NC in a real protocol stack, taking the DVB-S2/RCS2 architecture as reference [26]. The implemented NC is positioned between the network and data-link layers, and an emulator for the integration and validation of NC applications in DVB-S2/RCS2 is also presented in the paper.
Multicast NC Satellite Approach
In our system model, different groups of M2M/IoT devices can be envisaged according to their service requirements, as presented in Figure 2. In each group (or cluster), actuators/smart devices need to receive the same forward control signals and/or data packets, typically from a ground control center (CC). It is worth noticing that the clusters are located in the same spotbeam. The forward link communication is shown in Figure 2 only to highlight the multicast scenario. Direct access can be provided for sensors/devices to communicate with the satellite in the case of an M2M-via-satellite proprietary protocol; alternatively, communications could be based on the DVB-S2 standard for the forward link (the link from the CC on the ground to the actuators/devices) and on the DVB-RCS2 standard for the return link (the link from the on-Earth sensors/devices to the CC), or even on the protocols selected by 3GPP for long-term evolution (LTE) via satellite [7]. Figure 2 also highlights: (1) the multicast communications from the satellite to a cluster of sensors and actuators; and (2) the indirect access mode, where each sensor communicates with the satellite through a gateway (GW). Since a satellite generally has limited computation capability, for the sake of reducing complexity, the NC operations are carried out at the CC and a transparent Geostationary Earth Orbit (GEO) satellite is considered.
In the following, we focus on the dissemination of a data flow to all the on-Earth nodes. Each content is composed of N packets, which have been processed by the CC station according to the RLNC principle [18,21,27], i.e., by transmitting K > N linear combinations of the N original data packets with random coefficients belonging to GF(q), a Galois field of size q. To this purpose, it is required that the overall outage probability be lower than a predefined value θ, where (1 − θ) represents the probability that all the on-Earth devices correctly received the content. Due to satellite channel errors and the consequent packet erasures, the NC redundancy, i.e., K − N, remarkably increases. To mitigate this effect and to satisfy the outage probability, since no acknowledgement is provided by the intended destinations, we adopt an OFDM signal in which, at the source, m replicas of the data symbols are transmitted over multiple subcarriers. Since sensors/devices in the multicast group may experience heterogeneous propagation conditions, we assume that the subcarriers used are selected at random among all the available ones. This increase in data reliability impacts the overall energy consumption, which is directly proportional to m. The redundancy (and, hence, the energetic cost) jointly introduced by the NC scheme (K) and the symbol repetition factor (m) needs to be traded off against the reliability of the delivery with respect to different application scenarios. Therefore, an optimization policy is proposed in the following section to identify the minimum value of m that yields the lowest overall energy consumption to complete the delivery to all the on-Earth nodes, while matching a target outage probability. This policy considers an adaptive transmission strategy based on the signal-to-noise ratio (SNR), which affects packet erasure. As in most modern standards (for both cellular and satellite systems), different modulations (M-PSK and M-QAM) can be considered according to the channel impairments. Further, different services can be characterized according to different Quality of Service (QoS) requirements. Usually, sending control commands to a group of actuators requires a low data rate and, consequently, a QPSK modulation with low spectral efficiency can be adopted; otherwise, higher-order modulations can be used, e.g., for content distribution services, though this is obviously limited by the channel propagation conditions.
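As a toy illustration of the RLNC encoding step (restricted to GF(2) for brevity, whereas the system model allows general GF(q)), each of the K coded packets is a random XOR of the N source packets; a receiver decodes by Gaussian elimination once it holds N linearly independent combinations (decoder omitted here):

```python
import random

def rlnc_encode(sources, k, rng=random.Random(1)):
    """Emit k coded packets, each a random GF(2) combination of the source packets."""
    n = len(sources)
    coded = []
    for _ in range(k):
        coeffs = [rng.randint(0, 1) for _ in range(n)]
        if not any(coeffs):
            coeffs[rng.randrange(n)] = 1  # skip the useless all-zero combination
        pkt = bytes(len(sources[0]))
        for c, s in zip(coeffs, sources):
            if c:
                pkt = bytes(x ^ y for x, y in zip(pkt, s))
        coded.append((coeffs, pkt))
    return coded

for coeffs, pkt in rlnc_encode([b"pkt0", b"pkt1", b"pkt2"], k=5):
    print(coeffs, pkt)
```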
In the considered multicast scenario, we assume that an OFDM signal is transmitted and that multiple data symbols are repeated on multiple subcarriers. At each multicast terminal, the decision variables carried by the m replicas of the same symbol are coherently combined, improving the Symbol Error Rate (SER) and, as a consequence, reducing the Packet Error Rate (PER). Under the assumption of static or slowly moving devices, the considered Land Mobile Satellite (LMS) channel [28] is a flat fading channel. By assuming, without loss of generality, that: (1) the communications from the satellite to all the on-Earth nodes occur over independent and identically distributed (i.i.d.) links; (2) the channel propagation conditions are constant during the transmission of a data packet; and (3) detection at the Earth terminal is ideally coherent, we can obtain, for the ith node, a SER for each modulation order M that depends on the repetition factor m and on the average SNR γ_i, which can be generally expressed as $P^{M}_{s,i}(m, \gamma_i)$.
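For intuition on the role of m (our own reading of the model, assuming ideal coherent combining of i.i.d. replicas, with γ_i the average per-symbol SNR): the m replicas add up to an m-fold effective SNR, so for QPSK the per-symbol error rate behaves roughly as

```latex
% Sketch under assumed ideal coherent combining of m i.i.d. replicas
\gamma_i^{\mathrm{eff}} = m\,\gamma_i ,
\qquad
P^{\mathrm{QPSK}}_{s,i}(m,\gamma_i) \;\approx\; 2\,Q\!\left(\sqrt{m\,\gamma_i}\right),
\quad Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\, dt .
```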
Optimization Problem
With reference to the application scenario previously introduced, we characterize here the specific optimization problem, whose aim is to reliably deliver a content to a community of devices. According to the authors of [18,21], the probability φi(K, m) that the ith device correctly receives the content, for a given value of γi, modulation order M, and repetition factor m, is given by:

φi(K, m) = Σ_{n=N}^{K} (K choose n) P_{pd,i}^n (1 − P_{pd,i})^{K−n} ∏_{j=0}^{N−1} (1 − q^{−(n−j)}),  (1)

where P_{pd,i} represents the probability of an error-free delivery of a packet of length L, and the last product is the probability of having at least N independent packets over a generation of n ≥ N coded ones, which has been characterized in [21,22]. As a consequence, the proposed approach is based on the optimization of the energy E(K, m) = Km needed to reliably deliver the content in a multicast way, as follows:

(K*, m*) = argmin_{K,m} E(K, m) = Km,  (2)

subject to:

∏_{i=1,...,ν} φi(K, m) ≥ 1 − θ,  K ≥ N,  m ≥ 1,

where the first constraint guarantees an overall decoding outage probability lower than an acceptable value θ, while the others are inherently related to the NC operations. Furthermore, we note that, to perform the optimization procedure given in Equation (2), the satellite must be aware of the channel propagation conditions for each of the ν links, as in [29]. This feature is usually provided in modern satellite standards [30,31]. Moreover, since our investigation focuses on geostationary satellite systems, an almost stationary channel can be assumed, whose estimation can be performed within an initial set-up phase with sporadic updating.
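A toy version of this optimization policy can be sketched as follows (again our own illustration: the conditional AWGN M-PSK error model stands in for the paper's LMS-averaged SER, Eq. (1) is implemented as reconstructed above, and all function names and search ranges are ours):

```python
import numpy as np
from math import comb
from scipy.stats import norm

def per(m, gamma, M, L):
    """Illustrative PER: M-PSK SER with ideally combined effective SNR m*gamma
    (conditional AWGN form; the paper instead averages over the LMS channel)."""
    ser = min(float(2 * norm.sf(np.sqrt(2 * m * gamma) * np.sin(np.pi / M))), 1.0)
    return 1.0 - (1.0 - ser) ** (L / np.log2(M))   # L bits -> L/log2(M) symbols

def phi_i(K, m, gamma, M, L, N, q):
    """Eq. (1): at least N linearly independent coded packets among those received."""
    p_ok = 1.0 - per(m, gamma, M, L)
    indep = lambda n: np.prod([1.0 - q ** -(n - j) for j in range(N)])
    return sum(comb(K, n) * p_ok**n * (1 - p_ok)**(K - n) * indep(n)
               for n in range(N, K + 1))

def optimize(gammas, M=4, L=1000, N=10, q=16, theta=1e-6, m_max=8, K_max=300):
    """Grid search of Eq. (2): minimize E = K*m s.t. prod_i phi_i >= 1 - theta."""
    best = None
    for m in range(1, m_max + 1):
        for K in range(N, K_max + 1):          # phi_i grows with K, so the first
            group = np.prod([phi_i(K, m, g, M, L, N, q) for g in gammas])
            if group >= 1.0 - theta:           # feasible K is the smallest one
                if best is None or K * m < best[0]:
                    best = (K * m, K, m)
                break
    return best                                # (energy, K, m)

gammas = np.full(10, 10 ** (6 / 10))           # 10 nodes at an average SNR of 6 dB
print(optimize(gammas))
```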
Numerical Results
In this section, the performance of the proposed NC scheme with repetition factor m, optimized according to the policy in Equation (2), is investigated in terms of overall energy, under the constraint of a specified outage for content dissemination over the on-Earth devices.
In performing our analysis, we considered a typical scenario with a static or semi-static group (i.e., cluster) of devices, as in Figure 2. The FFT length was 2048, and QPSK, 16QAM, and 64QAM modulations were considered. The packet size is L = 1000 [bit] (the packet size is arbitrary, and the results can be extended to different values), delivered to a set of ν receiving nodes randomly located within the satellite footprint. In addition, the LMS channel model for the case of slow variation is considered [28]. In deriving these results, we assumed an outage decoding probability threshold θ = 10⁻⁶ and m* = 8.
In Figures 3-5, the normalized multicast content delivery energy as a function of the SNR for ν = 100 devices, L = 1000 [bit], N = 10 [pkt], θ = 10⁻⁶, and QPSK, 16QAM, and 64QAM modulation schemes, respectively, is shown. A remarkable advantage with respect to the basic NC scheme (i.e., m = 1) can be noticed, especially in the low-to-medium SNR range, where the proposed approach is able to reliably deliver a content to a multicast group at a lower energetic cost. Moreover, the achieved gain is more evident for higher-order modulations, allowing a higher bit-rate with a lower instantaneous transmitted power at the satellite side. In addition, Figure 6 shows the optimal m value as a function of the SNR for the different adopted M-QAM modulations. The existence of a typical SNR range where the m adaptation is effectively performed can be noticed: specifically, it is equal to [0 ÷ 11], [0 ÷ 17], and [0 ÷ 24] for the QPSK, 16QAM, and 64QAM modulation schemes, respectively.
To complete the investigation, Figure 7 presents the mean normalized packet delivery delay for the same scenario and the QPSK modulation scheme. It is evident that the proposed approach is almost optimal also in terms of delivery latency.
The impact of the Galois field size q on the performance is further addressed in Figure 8; it is highlighted that a good trade-off between complexity and effectiveness is represented by q = 16, which is the value adopted in our simulations. Further, the scalability of the proposed approach is investigated in Figure 9, which shows the mean normalized content delivery energy vs. SNR for ν ∈ [10 ÷ 1000], under the constraint of the same maximum group outage θ = 10⁻⁶. The limited performance decrease with increasing ν, especially for low-to-medium SNR values, is evident.
Finally, in Figure 10, the normalized multicast content delivery energy is investigated for two different values of the group outage probability, namely θ = 10⁻³ and θ = 10⁻⁶, which correspond to different requirements for the content to be reliably delivered, showing the remarkable efficiency of the proposed approach.
Conclusions
In this paper, the main challenges and requirements that lead to the integration of satellite and terrestrial networks are reviewed, where 5G could be the enabling technology towards the era of convergence. Hundreds of billions of smart devices connected to the 5G network will create a true "Internet of Everything" and, in this scenario, the integration of the satellite segment can provide global connectivity and multicast distribution or configuration applications transparent to the end users. Specifically, the application of NC schemes to SatComm is addressed, pointing out the improvement in terms of delivery delay, network capacity saving, and the reduction of packet erasures in multicast services. In addition, an energy-efficient multicast satellite communications scheme is investigated in view of applications to M2M/IoT scenarios. To provide reliable content delivery to a group of remote devices, the proposed approach jointly adopts: (i) an RLNC scheme; (ii) an optimized repetition of the OFDM data symbols over multiple subcarriers; and (iii) different modulations adapted to the channel status, so as to minimize the overall energy consumption needed to deliver the content to a group of devices in a reliable way. Remarkable performance in terms of energetic cost, scalability, adaptability, and robustness is consistently observed, with a significant delivery delay reduction, which also indicates a beneficial effect on the packet throughput.
Figure 2. Reference multicast satellite communications scenario with a couple of device clusters within the satellite spot beam.
Figure 9. Normalized delivery energy as a function of SNR values for a QPSK modulation and L = 1000 [bit], N = 10 [pkt], and θ = 10 −6 with respect to different values of ν devices.
Figure 10. Normalized delivery energy as a function of SNR values for a QPSK modulation and ν = 100 devices, L = 1000 [bit], and N = 10 [pkt] with respect to different values of θ. | 5,804.4 | 2019-07-25T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
Theoretical analysis and simulation study of the deep overcompression mode of velocity bunching for a comblike electron bunch train
Premodulated comblike electron bunch trains are used in a wide range of research fields, such as wakefield-based particle acceleration and tunable radiation sources. We propose an optimized compression scheme for bunch trains in which a traveling wave accelerator tube and a downstream drift segment are together used as a compressor. When the phase injected into the accelerator tube for the bunch train is set to ≪ −100°, velocity bunching occurs in a deep overcompression mode, which reverses the phase space and maintains a velocity difference within the injected beam, thereby giving rise to a compressed comblike electron bunch train after a few-meter-long drift segment; we call this the deep overcompression scheme. The main benefits of this scheme are the relatively large phase acceptance and the uniformity of compression for the bunch train. The comblike bunch train generated via this scheme is widely tunable: For the two-bunch case, the energy and time spacings can be continuously adjusted from +1 to −1 MeV and from 13 to 3 ps, respectively, by varying the injected phase of the bunch train from −220° to −140°. Both theoretical analysis and beam dynamics simulations are presented to study the properties of the deep overcompression scheme.
I. INTRODUCTION
The application ranges of free electron lasers (FELs), wakefield-based accelerators, and tunable radiation sources can be greatly extended by driving them with a train of short electron microbunches rather than with a single bunch [1][2][3][4][5][6][7][8][9]. For example, a bunch train in which the energy increases or decreases along its length can produce a corresponding train of radiation pulses of variable delays and wavelengths, which has potential applications in ultrafast pump-probe FEL experiments [8,[10][11][12][13][14][15][16][17][18]. In addition, a bunch train with a subpicosecond bunch length and a repetition rate of a few terahertz can be quite useful in the coherent excitation of plasma waves in plasma wakefield accelerators [19,20] and in the generation of frequency-tunable narrow-band terahertz radiation [7,21,22], as well as in other beam dynamics applications [5,[23][24][25]. All of these challenges and developments have generated considerable interest in electron bunch trains, in which two or more electron bunches are generated and are then accelerated and compressed in the same accelerating bucket.
There are several typical methods of generating such bunch trains, including the conversion of a transverse modulation into a periodic longitudinal distribution through the use of a dispersive beam line and an initial energy chirp [3], as well as the emittance-exchange technique [26,27]. However, the generation of a transverse modulation using a mask will always lead to particle loss. Another possibility is to introduce an energy modulation by means of self-excited wakefields and then convert it into a density modulation through a magnetic chicane [28]. However, the most straightforward method of generating microbunches is to impose modulations on the beam directly at the cathode and then attempt to maintain or recover these initial modulations as the beam propagates along the beam line [2,[6][7][8].
The well-known "laser-comb" operating concept, in which terahertz bunch trains are generated by illuminating a photocathode with trains of laser pulses, was proposed by Boscolo et al. [29]. With this approach, it is possible to generate bunch trains with a charge of several hundred picocoulombs within the same rf gun accelerating bucket. Downstream of the gun exit, a velocity bunching technique is applied within a traveling wave accelerator (TWA) tube to modulate the bunch train by compressing each subbunch and adjusting the time and energy spacings along the train [2,[29][30][31]. Such comblike bunch trains have been successfully used in x-ray FEL experiments [12,14,17]. However, from previously reported beam dynamics results [31][32][33], in which the velocity bunching of the bunch train mainly occurred when the phase injected into the TWA tube was in the range of ∼[−100°, −80°], it is evident that this method suffers from a smearing effect due to longitudinal space charge forces, which causes the initial modulation to blur and tend to disappear at high beam currents.
Here, we investigate the optimization of the velocity bunching of a bunch train generated via the laser-comb method under a different working condition; we call it the deep overcompression scheme for short. This scheme was developed at the Tsinghua Thomson x-ray (TTX) beam line [34] and is illustrated in Fig. 1. With a TWA tube and a downstream drift segment together serving as a compressor, we have studied the injection of a very wide range of phases into the TWA tube for the first time. From both theoretical predictions and beam dynamics simulations, we find that, when the phase of the TWA tube is varied over a large range of values ≪ −100°, the generated laser-comb bunch train is widely tunable in both energy and bunch interval after a few meters of drift. In this scheme, the TWA tube overcompresses the beam, reversing the phase space of the bunch train while maintaining a velocity difference (or energy chirp) within the bunch train at the TWA exit; then, the long drift segment gives rise to gentle and uniform compression of the bunch train. The main benefits of the deep overcompression scheme compared with other velocity bunching schemes are (i) the relatively large phase acceptance of the TWA tube and (ii) the uniformity of the compression for each subbunch in the laser-comb train.
This paper is organized as follows: First, a theoretical model without a space charge effect is built to study the deep overcompression scheme, and simulations of the compression of a single bunch with and without a space charge effect are also presented for comparison with the theory. Next, beam dynamics simulations with a space charge effect performed using ASTRA [35] are reported for the two-bunch case, in which we show that the two compressed bunches are relatively uniform under deep overcompression and that the time and energy spacings of the two bunches are continuously adjustable over a wide range. Finally, the key results for four bunches under deep overcompression are presented for both the low-charge case and the high-charge case; these results demonstrate the applicability of the optimized compression scheme for generating comblike beams.
II. THEORY OF VELOCITY BUNCHING FOR A SINGLE BUNCH
The velocity bunching scheme, also called rf compression, was previously investigated in Refs. [36,37]. The key principle of velocity bunching is that the compression and acceleration of the beam occur simultaneously within the same linac section.
Here, we concentrate on optimizing the velocity bunching scheme for more uniform compression of the bunch train, considering the effect of the drift downstream of the TWA tube and performing a very wide scan of the phase injected into the TWA tube. We take the TTX beam line setup as an example, as shown in Fig. 1. A single laser pulse (or a comblike laser train generated with α-β-barium borate crystals) illuminates the photocathode to produce an electron beam (or bunch train), which is then accelerated by a 1.6-cell S-band rf gun with a high gradient to control the space charge effect. The 3-meter-long S-band TWA tube is placed ∼1.5 m away from the gun, and a few-meter-long drift segment follows the TWA tube.
In the TTX beam line, the TWA tube and the downstream drift segment together act as a compressor. In this section, we elaborate on a simple model for single-bunch compression without a space charge effect that describes how the drift segment affects the velocity bunching scheme.
In an rf TWA structure, the electrons experience a longitudinal electric field, and, thus, the longitudinal phase space obeys the following expressions:

dγ/dz = α cos ϕ,  dϕ/dz = k (γ/√(γ² − 1) − 1),  (1)

where dγ/dz and dϕ/dz, respectively, describe the changes in the Lorentz factor (energy) and in the electron phase with respect to the rf wave. Here, α = eE_peak/(m0 c²), where E_peak is the peak field of the TWA, k is the rf wave number, and e, m0, and c are constants representing the electron charge, the electron mass, and the speed of light, respectively.
In the drift segment, where α = 0,

dγ/dz = 0,  dϕ*/dz = k (γe/√(γe² − 1) − 1).  (2)

The symbol ϕ* denotes the phase after the drift segment. The second expression in Eq. (2) can be simplified to

ϕ* = ϕe + Δϕ,  (3)

where ϕe is the phase at the exit of the TWA tube and Δϕ is the phase change induced by the drift segment:

Δϕ = kL (γe/√(γe² − 1) − 1).  (4)

Here, L is the length of the drift segment after the TWA tube, and γe is the Lorentz factor of the electrons at the exit of the TWA tube, which remains unchanged in the drift segment. Equation (4) shows that Δϕ is larger when γe is smaller.
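As a quick numerical illustration of Eq. (4) (the S-band frequency of 2.856 GHz, i.e., k = 2πf/c ≈ 59.8 rad/m, is our assumption here and not a value quoted above): for γe ≫ 1, Δϕ ≈ kL/(2γe²), so with L = 6 m an exit energy of γe = 20 gives Δϕ ≈ 59.8 × 6/(2 × 400) ≈ 0.45 rad ≈ 26°, whereas γe = 30 gives only ≈ 11°. This is why a lower TWA gradient, which keeps γe small, enlarges the drift-induced phase slippage.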
For an electron with an initial state (γ0, ϕ0), where γ0 and ϕ0 denote the injected energy and phase, respectively, at the entrance of the TWA tube, Eq. (1) can be numerically solved using the Runge-Kutta method; thus, we can obtain the electron state (γe, ϕe) at the exit of the TWA tube. By combining the results with Eqs. (4) and (3), we can determine Δϕ and ϕ*, respectively. We solve Eqs. (1)-(4) for the beam line setup depicted in Fig. 1, with the operation parameters listed in Table I. The rf gun is set to a high gradient and the maximum acceleration phase (0°) to control the upstream space charge effect; thus, the initial state of the beam injected into the TWA tube is defined. We conduct a scan over the possible values of the phase injected into the TWA tube, ϕ0, to optimize the compression scheme. In this particular case, the gradient of the TWA tube is set to 8 MV/m, which results in a relatively small γe and, hence, a large Δϕ.
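As a minimal numerical sketch of this procedure (our own illustration: the S-band frequency of 2.856 GHz and the cos convention of Eq. (1) as reconstructed above are assumptions; the 8 MV/m gradient, 6 m drift, and 5.5 MeV injection energy are values quoted in the text):

```python
import numpy as np

c = 299_792_458.0
f = 2.856e9                                 # assumed S-band frequency [Hz]
k = 2 * np.pi * f / c                       # rf wave number [1/m]
alpha = 8e6 / 0.511e6                       # eE_peak/(m0 c^2) for 8 MV/m [1/m]
L_twa, L_drift = 3.0, 6.0                   # TWA length and drift length [m]
gamma0 = 1 + 5.5 / 0.511                    # 5.5 MeV injection energy

def rhs(state):
    gamma, phi = state
    # Eq. (1) as reconstructed above (cos convention: 0 deg = crest)
    return np.array([alpha * np.cos(phi),
                     k * (gamma / np.sqrt(gamma**2 - 1) - 1.0)])

def track(phi0_deg, dz=1e-3):
    """Integrate Eq. (1) through the TWA with 4th-order Runge-Kutta,
    then add the drift phase slippage of Eq. (4). Returns (phi_e, phi*)."""
    s = np.array([gamma0, np.deg2rad(phi0_deg)])
    for _ in range(int(L_twa / dz)):
        k1 = rhs(s); k2 = rhs(s + 0.5 * dz * k1)
        k3 = rhs(s + 0.5 * dz * k2); k4 = rhs(s + dz * k3)
        s = s + dz / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    gamma_e, phi_e = s
    dphi = k * L_drift * (gamma_e / np.sqrt(gamma_e**2 - 1) - 1.0)
    return phi_e, phi_e + dphi

# Scan the injected phase and locate the valleys of phi*(phi0), i.e., the
# maximum-compression points (points B and C discussed below)
phis0 = np.arange(-220.0, -40.0, 1.0)
phistar = np.array([track(p)[1] for p in phis0])
valleys = [phis0[i] for i in range(1, len(phis0) - 1)
           if phistar[i] < phistar[i - 1] and phistar[i] < phistar[i + 1]]
print("maximum-compression phases (valleys of phi*):", valleys)
```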
The curve of ϕe versus ϕ0 is shown as the blue dashed curve in Fig. 2(a); it is a smooth curve with a single valley (point A). The curves of Δϕ versus ϕ0 and ϕ* versus ϕ0 are also shown in Fig. 2(a) as the black and red curves, respectively. We can see that the addition of the phase induced by the drift segment downstream of the TWA tube (Δϕ) to the phase at the exit of the TWA tube (ϕe) leads to a double-valley structure (points B and C) in the final phase curve ϕ*.
For an electron beam, the injected phase ϕ0 is defined as the phase injected into the accelerating field for a reference particle located at the average position of the beam. We let σϕ0 denote the initial rms phase (or rms bunch length) distribution of the beam at the entrance of the TWA tube and define the compression factor as follows:

C = σϕ0/σϕ,  (5)

where σϕ is the rms phase at the observation point (ϕe at the TWA exit or ϕ* after the drift). Then, we can obtain the compression factor curves of C versus ϕ0 for an electron beam with σϕ0 = 1° from the curves in Fig. 2(a); the results are shown in Fig. 2(b). Peaks appear in the compression factor curves at points A′, B′, and C′, because the ϕe and ϕ* values of every particle in the beam tend to be the same at these points. Figure 2(b) shows that, at the exit of the TWA tube, the maximum compression occurs for ϕ0 ∼ −90° (point A′), whereas after the long drift segment, two compression peaks appear at ϕ0 > −90° (point B′) and ϕ0 < −90° (point C′).
In the more general case, the drift length L and the accelerating field E in the TWA tube can also be tuned in the deep overcompression scheme. The compression factor curves corresponding to different drift lengths when the gradient of the TWA tube is fixed at 8 MV/m are shown in Fig. 3(a), and the curves corresponding to different gradients of the TWA tube when the drift length is fixed at 6 m are shown in Fig. 3(b). The deep overcompression scheme benefits from a longer drift length and a lower gradient in the TWA tube, as the theory predicts. To strike a balance between the cost (which is higher for a longer beam line) and the energy of the beam at the target position, values of L = 6 m and E = 8 MV/m have been chosen for the TTX beam line; these values are used throughout the following discussions.
Velocity bunching can occur in several different compression modes depending on the phase ϕ0 injected into the TWA tube, as defined in Refs. [31,38]. As shown in Fig. 4, the different compression modes induced by different injected phases correspond to different variations in the rms bunch length along the beam line (position Z); the results are distinguished by differently colored curves (no space charge effect is included here). In all cases, the initial injected rms bunch length before compression is 1 ps (black dashed line). In the full compression mode, corresponding to an injected phase of ∼ −90° (magenta curve), the bunch length decreases to its minimum value at the accelerator exit. With an injected phase of > −90° (say, −80°), the compression is inadequate, and some energy chirp remains in the beam at the exit of the tube, causing the minimum rms bunch length to be observed after 3 m of drift (blue curve). In the overcompression mode, corresponding to an injected phase of < −90° (say, −100°), the rms bunch length monotonically increases after the TWA tube, and the minimum rms bunch length occurs inside the TWA tube (green curve). By contrast, when the injected phase satisfies ≪ −90° (say, ∼ −130°), there are two minima in the rms bunch length: One occurs inside the TWA tube, as in the overcompression mode, and the other occurs after a long drift of 6 m downstream from the TWA tube, as the red curve shows.

Now, let us consider the changes in the beam's phase space (distribution in E − ϕ space) in the deep overcompression scheme. We have marked points A-G on the red curve in Fig. 4, and Fig. 5 shows the corresponding phase space at each of these points. The initial injected beam is a 2D Gaussian beam with an average energy of 5.5 MeV, as shown in Fig. 5 for point A. We have marked regions 1 and 2 of the beam to distinguish the time distributions. Since the phase injected into the TWA tube satisfies ϕ0 ≪ −90°, the decelerated beam at point B shows a chirp. Region 1 becomes slower than region 2, because it is subjected to more deceleration. Then, region 1 catches up with region 2 at point C, corresponding to the first occurrence of the minimum bunch length (inside the TWA tube). Because the beam is still inside the rf bucket, the beam continues to slip in phase and is subsequently accelerated, with an energy chirp again, as shown at point D. At point E, region 2 has surpassed region 1 such that the bunch length reaches its maximum value inside the tube. Subsequently, the rf bucket continues accelerating and chirping the entire beam; at point F (exit of the TWA tube), region 1 still has a higher energy than region 2, which results in another occurrence of the minimum bunch length after 6 m of drift, as shown for point G. It is clear that the energy space of the beam at point G is reversed compared with that at point C. A long drift segment is necessary to obtain a compressed beam outside the tube in this scheme.

Up to this point, we have built a simple theoretical model without a space charge effect to explain the deep overcompression scheme. The results of beam dynamics simulations with and without a space charge effect performed using the ASTRA code are also presented for comparison. The beam line setup and the detailed parameters used in the simulations are shown in Fig. 1 and Table I, and the results are shown in Fig. 6.
For the velocity bunching scheme, we define the beam at the entrance of the TWA tube as the initial beam, which is traced from the laser with a space charge effect by means of the ASTRA code.
In Fig. 6, the beam charge is 100 pC, and the initial rms bunch length is σϕ0 = 1 ps. The compression factor is calculated as σϕ0/σf, where σf is the final rms bunch length after 6 m of drift downstream from the TWA tube. Two peaks are clearly seen in the compression factor curves for both cases (with and without a space charge effect), showing good agreement with the prediction represented by the red curve in Fig. 2(b). The green curve in Fig. 6(a), without a space charge effect, is more consistent with the theoretical prediction; the wide peak between the injected phases of −150° and −110° corresponds to the deep overcompression mode discussed above. From a comparison with the blue curve, we find that the complicated space charge effect that occurs when the beam is rotated during compression tends to increase both the height and the width of the peak in the compression factor curve that corresponds to the deep overcompression mode. By combining these findings with the behavior of the average energy curve (red curve), we find that the compression is stronger when the beam energy is relatively low; this finding is also consistent with Eq. (4). Figure 6(b) shows the simulated beam emittance versus ϕ0. The emittance is increased by up to a factor of 2 in the deep overcompression mode compared with the initial emittance at ϕ0 = 0.
Notably, the compression factor curve with the space charge effect [blue curve in Fig. 6(a)] is relatively flat and smooth in the deep overcompression range compared with the sharp peak at ϕ0 ∼ −80°. This observation indicates that the deep overcompression mode has two advantages: (a) There is no abrupt change in the compression factor as the injected phase ϕ0 is varied over a relatively large range; (b) the compression is more uniform across all subbunches when the separation or distribution (i.e., the occupied range in the phase space) of the injected bunches is relatively large. Because of these beneficial characteristics, the deep overcompression mode is broadly applicable to bunch trains with a wide variety of characteristics.
III. TWO-BUNCH SIMULATIONS WITH A SPACE CHARGE EFFECT
Typical simulation results for two-bunch compression with a space charge effect are shown in Fig. 7. The initial state of each subbunch, with a charge of 100 pC, injected into the TWA tube is similar to that of the single bunch in Fig. 6. We use the colors red and blue to represent bunches 1 and 2, respectively; bunch 1 is in front of bunch 2 at injection. Not only the rms bunch length of each subbunch but also the bunch interval varies with the injected phase ϕ0. Two special cases (ϕ0 = −80° and ϕ0 = −130°) are seen in the figure in which only one bunch (shown in magenta) appears, meaning that bunches 1 and 2 are merged together; these cases correspond to the peaks in the compression factor curve in Fig. 6. We find that ϕ0 = −80° is a dividing point that marks the reversal of bunches 1 and 2 in the time space; i.e., bunch 1 remains in front in the undercompression mode (ϕ0 > −80°) but is behind bunch 2 when ϕ0 < −80°.
Figure 8 shows more details on two-bunch compression, including the bunch length and bunch emittance of each subbunch. As shown in Fig. 8(a), the bunch lengths of both bunches are compressed to relatively low values, and the lengths of the two bunches are more similar to each other in the deep overcompression mode (when ϕ0 is in the range of [−210°, −150°]), demonstrating the compression uniformity of the deep overcompression scheme. The bunch emittances of the subbunches are shown in Fig. 8(b); they exhibit similar trends as in the single-bunch case [magenta curve in Fig. 6(b)] but are obviously affected by the space charge effect between the bunches in the deep overcompression mode.
Figure 8(c) shows the bunch intervals with and without the space charge effect as well as the energy spacing versus the injected phase ϕ0. The initial interval and all intervals in the undercompression mode have positive values, indicating that bunch 1 is in front of bunch 2, whereas negative values indicate the reverse case, with bunch 2 in front. The two regions of zero values in the bunch interval curves correspond to the merged cases (ϕ0 = −70°, −80° and ϕ0 = −120°, −130°). The accompanying values of the energy spacing between the two bunches have similar interpretations; i.e., positive values indicate that the average energy of bunch 1 is higher than that of bunch 2, whereas negative values indicate the opposite. In contrast to the marked space charge effect on the single-bunch compression curve [Fig. 6(a)], here the space charge effect on the bunch interval is slight, mainly because the charge of each subbunch is not very high (100 pC) and the interval between the bunches is relatively large (a few picoseconds).
FIG. 8. Two-bunch compression results: (a) variation of the bunch length of each subbunch with ϕ 0 , (b) variation of the bunch emittance of each subbunch with ϕ 0 , and (c) variation of the bunch interval (with and without a space charge effect) and the energy spacing with ϕ 0 .
We focus on the region ϕ0 = −220° to −140°, where both the time and energy spacings are continuously adjustable over quite a large range (+1 to −1 MeV for the energy spacing between the two bunches and 13-3 ps for the time spacing). These results demonstrate the optimized tunability of the bunch train under the deep overcompression scheme.
IV. APPLICATION TO THE CASE OF A FOUR-BUNCH TRAIN
The deep overcompression scheme can also be applied to improve the compression of a four-bunch train.We investigated the results in two cases: a low-charge bunch train, with a charge of 20 pC in each subbunch, and a high-charge bunch train, with a charge of 200 pC in each subbunch.The former case helps us to understand the compression uniformity of the deep overcompression scheme, and the latter case is more practical in application.
Figure 9 shows the low-charge case, in which the charge of each subbunch is 20 pC. The initial four-bunch train is generated by a four-bunch uniform laser with an interval of 3 ps upstream of the beam line in Fig. 1. It becomes nonuniform at the entrance to the TWA tube due to the energy chirp induced in the rf gun and the space charge effect. Then, the longitudinal distribution and phase space of the four-bunch train vary depending on the phase ϕ0 injected into the TWA tube. Our results mostly reproduce those previously reported in Refs. [32,33] for the case in which the injected phase satisfies ϕ0 > −110°, as shown in the top two rows in Fig. 9. However, with our understanding of the deep overcompression mode corresponding to ϕ0 < −130°, we obtain the typical optimized four-bunch train distributions shown in the bottom two rows in Fig. 9. For the case of ϕ0 = −150°, we achieve a relatively uniform distribution of the four-bunch train in the time space, with a current of ∼50 A for each compressed subbunch and a decreasing energy distribution. The average interval between adjacent subbunches is ∼1 ps. The four-bunch train distribution remains relatively uniformly spaced in time even when ϕ0 = −210°, for which the average interval between adjacent bunches is ∼5 ps and their energy distribution is increasing.
To provide a closer look at the compression properties in the four-bunch case, we list detailed parameters in Tables II [Fig. 10(a)] and III [Fig. 10(b)], which give the rms bunch lengths of each subbunch and the bunch intervals, respectively, for a 4 × 20 pC bunch train with different ϕ0 values. Note that results are listed only for those ϕ0 values at which the four bunches remain separated.
Figure 10 presents a summary of the parameters in the tables. The bunch lengths [Fig. 10(a)] and bunch intervals [Fig. 10(b)] show that the deep overcompression scheme is optimized for uniform compression of the bunch train.
The high-charge four-bunch case is shown in Figs. 11 and 12. The charge of each subbunch in the train is 200 pC, and the initial bunch current is ∼100 A upon injection into the TWA tube. The initial UV laser interval is 8 ps to generate separated bunches before the TWA tube, shown as the initial case in Fig. 11. It is still difficult to obtain a uniform bunch train when the injected phase is ϕ0 > −110°, as shown in the top and middle rows in Fig. 11. However, when working in the deep overcompression mode (ϕ0 < −130°), we obtain the typical optimized high-current four-bunch train distributions shown in the bottom row in Fig. 11. For the case of ϕ0 = −150°, the four-bunch distribution is relatively uniform in the time space, with a current of ∼300 A for each compressed subbunch and a decreasing energy distribution. The average interval between adjacent subbunches is ∼4 ps. For the case of ϕ0 = −170°, the four-bunch train is still relatively uniform, with an increased average interval of ∼6 ps, an increasing energy distribution, and a current of ∼200 A for each compressed subbunch. The four-bunch train distribution remains relatively uniformly spaced in time even when ϕ0 = −200° (average interval of ∼7 ps between subbunches). The parameter values are plotted (with error bars) versus the injected phase ϕ0 in Fig. 12.
From this figure, we can see that, in the deep overcompression mode, the bunch lengths and intervals tend to be uniform (small error bars in Fig. 12), and the bunch intervals are continuously tunable over a relatively wide range, which is beneficial for bunch-train compression.
V. SUMMARY
In summary, the deep overcompression mode of the velocity bunching scheme is greatly beneficial for bunch-train compression due to its relatively large phase acceptance and its uniformity of compression. A theoretical model was built, and careful beam dynamics simulations based on the ASTRA code were performed to study the deep overcompression scheme, in which the TWA tube reverses the phase space of the injected beam while maintaining a velocity difference (energy chirp) within the beam, thus giving rise to a compressed comblike electron bunch train after a few-meter-long downstream drift segment. The relatively uniform laser-comb bunch trains (both two-bunch trains and four-bunch trains) that are generated via this optimized compression scheme, with charges of several hundred picocoulombs, subpicosecond subbunch lengths, and wide tunability in time and energy space, should have great potential for application in scenarios involving FELs, advanced particle acceleration, and terahertz radiation sources.
FIG. 1. Sketch of the velocity bunching scheme at the TTX beam line.
FIG. 3. Effects on the compression factor (a) when the gradient of the TWA tube is fixed at 8 MV/m and the drift length is varied and (b) when the drift length is fixed at 6 m and the gradient of the TWA tube is varied.
FIG. 5. Sketches of the changes in the electron beam phase space (E − ϕ) in the deep overcompression scheme at different positions along the beam line, where positions A-G correspond to points A-G on the red curve in Fig. 4.
FIG. 7. ASTRA simulations of the longitudinal distributions and phase spaces for two-bunch compression with different injected phases ϕ 0 .
FIG. 9. Low-charge four-bunch case: ASTRA simulations of the longitudinal distributions and phase spaces for the compression of a four-bunch train with different injected phases ϕ0, where the charge of each subbunch is 20 pC.
FIG. 10. Bunch lengths and bunch intervals with different injected phases ϕ0 for a 4 × 20 pC bunch train; (a) and (b) correspond to the statistics parameters in Tables II and III, respectively.
FIG. 11. High-charge four-bunch case: ASTRA simulations of the longitudinal distributions and phase spaces for the compression of a four-bunch train with different injected phases ϕ0, where the charge of each subbunch is 200 pC.
TABLE I. Parameters of the TTX beam line setup.
TABLE II. Bunch lengths (rms) of subbunches in a 4 × 20 pC bunch train with different injected phases ϕ0.
TABLE III. Bunch intervals in a 4 × 20 pC bunch train with different injected phases ϕ0. | 6,610.8 | 2018-02-27T00:00:00.000 | [
"Physics",
"Engineering"
] |
Development and Application of a High-Performance Triangular Shell Element and an Explicit Algorithm in OpenSees for Strongly Nonlinear Analysis
The open-source finite element software, OpenSees, is widely used in the earthquake engineering community. However, the shell elements and explicit algorithm in OpenSees still require further improvements. Therefore, in this work, a triangular shell element, NLDKGT, and an explicit algorithm are proposed and implemented in OpenSees. Specifically, based on the generalized conforming theory and the updated Lagrangian formulation, the proposed NLDKGT element is suitable for problems with complicated boundary conditions and strong nonlinearity. The accuracy and reliability of the NLDKGT element are validated through typical cases. Furthermore, by adopting the leapfrog integration method, an explicit algorithm in OpenSees and a modal damping model are developed. Finally, the stability and efficiency of the proposed shell element and explicit algorithm are validated through the nonlinear time-history analysis of a high-rise building.
Introduction
The performance of structures against extreme hazards has become an important research topic. By discovering the damage evolution process and failure mechanism, the research outcomes will support the identification and optimization of vulnerable structures. In addition to physical experiments, numerical simulations based on the finite element method, as an important and effective approach, have been widely used [Nesnas and Abdul-Latif (2001); Bradford and Pi (2012); Lin, Li, Lu et al. (2016)]. Thus far, strongly nonlinear analyses of structures have been performed extensively, and corresponding simulation strategies have been proposed [Lu, Lu, Guan et al. (2013); Lu, Tian, Cen et al. (2018)]. OpenSees, as an open-source finite element software, is now widely used owing to its high transparency and freedom [McKenna, Scott and Fenves (2009)]. For strongly nonlinear problems, on the one hand, the elements must consider material and geometric nonlinearity simultaneously; on the other hand, the time integration algorithm should remain stable during the entire computational process. However, further improvement on these two aspects is still required in OpenSees. In terms of element technology, a typical modeling strategy for the nonlinear analysis of buildings is to adopt fiber elements for beams/columns and shell elements for shear walls and coupling beams [Lu and Guan (2017)]. In OpenSees, the collapse simulation of frame structures has been performed successfully by using fiber elements [Lignos, Chung, Nagae et al. (2011); Xie, Lu, Guan et al. (2015)]. However, it would be difficult for fiber elements to represent the axial-flexural-shear coupled behavior of shear walls. Therefore, based on the generalized conforming theory, Lu et al. [Lu, Tian, Cen et al. (2018)] proposed and successfully implemented a quadrilateral flat shell element, NLDKGQ, in OpenSees. Subsequently, they performed a collapse simulation of a high-rise reinforced concrete (RC) frame-core tube building using NLDKGQ. The NLDKGQ element, consisting of the plate element DKQ and the membrane element GQ12, can avoid shear locking. By introducing the updated Lagrangian formulation, NLDKGQ can simulate geometric nonlinearity and is suitable for large deformation problems. However, NLDKGQ is not adaptable to triangular meshes; therefore, it is not easy to adopt the NLDKGQ element for cases with complicated boundaries or curved surfaces. On the contrary, triangular shell elements are more adaptive to complicated boundaries, and they can effectively solve mesh distortion and warpage problems. Therefore, it is necessary to propose a triangular shell element for OpenSees. In terms of the time integration algorithm, two types of algorithms exist: implicit algorithms and explicit algorithms. The implicit algorithm is typically used, but a convergence test is essential at each time step. It is noteworthy that the implicit algorithm may fail to complete an analysis owing to strong-nonlinearity-induced nonconvergence. Therefore, explicit algorithms, which avoid convergence problems, are preferred for strong nonlinearity [Lu, Lin, Cen et al. (2015); Pham, Tan and Yu (2017)]. Among the existing explicit algorithms, the central difference method is the most popular one. Theoretically, the central difference method can be highly efficient when the system of equations can be decoupled. However, the decoupling criterion for the central difference method requires a diagonal damping matrix.
The mass-proportional damping matrix is diagonal, but it obviously underestimates the damping ratio of high-order modes and consequently does not yield a satisfactory accuracy [Xie (2015)]. In contrast, if the stiffness-proportional damping model is introduced to restrain high-order modes, the system of equations would fail to decouple, leading to an increased computational time.
To solve the problems above, researchers have proposed numerous solutions. For example, Li et al. [Li, Liao and Du (1992)] derived an explicit difference method for viscoelastic dynamic equations; Du et al. [Du and Wang (2000)] derived an explicit integration formula for damped elastic lumped-mass structures. However, although the algorithms proposed by Li et al. [Li, Liao and Du (1992)] and Du et al. [Du and Wang (2000)] can ensure the decoupling of the system of equations, the equations for displacement and velocity are required to be established and solved separately at each time step, which significantly increases the computational time. Consequently, based on the generalized conforming theory and the updated Lagrangian formulation [Long, Cen and Long (2009)], a new triangular shell element NLDKGT is proposed in this work that is suitable for cases with complicated boundary conditions and problems with strong nonlinearities. Furthermore, by adopting the leapfrog integration method, an explicit algorithm in OpenSees and a modal damping model are developed in this work. The explicit integration algorithm can ensure the decoupling of the system of equations. The accuracy and reliability of the triangular shell element are validated through typical cases. Finally, the stability and efficiency of the proposed shell element and the explicit algorithm are validated through the nonlinear time-history analysis of a high-rise building.
Basic formulation under small deformation
To develop a suitable triangular shell element, the triangular planar membrane element GT9 [Xu and Long (1993)] and the triangular thin plate element DKT [Batoz, Bathe and Ho (1980)] were used to construct the new triangular shell element in this work. The planar membrane element GT9 contains three degrees of freedom (DOFs) at each node by introducing a rigid rotational freedom. In addition, a higher accuracy is achieved by defining higher-order displacement fields [Xu and Long (1993)]. The plate element DKT is based on the Kirchhoff theory and can effectively avoid shear locking. Fig. 1 illustrates the decomposition of the NLDKGT element. Consisting of GT9 and DKT, the NLDKGT element has six DOFs at each node. This greatly reduces the workload of modeling the connections between the shell and the beam/column elements. The nodal displacement q is defined as follows:

q = [q1^T, q2^T, q3^T]^T, with qi = [(qi^m)^T, (qi^b)^T]^T (1)

Here, qi^m and qi^b denote the nodal displacement components of GT9 and DKT, respectively. They can be expressed as follows:

qi^m = [ui, vi, θzi]^T (2)
qi^b = [wi, θxi, θyi]^T (3)

The displacement u of GT9 can be obtained through the superposition of u0 and uθ, as shown in Eq. (4):

u = u0 + uθ (4)

Here, u0 denotes the linear part of the displacement field, while uθ is an additional rotation displacement. Through Eqs. (5) to (8), u0 and uθ can be derived [Xu and Long (1993)].
where (xi, yi) is the coordinate of node i of the GT9 element in the local system. Then, the membrane strain field and the corresponding strain matrix Bm follow from Eqs. (9) and (10). The stiffness matrix of GT9 is as follows:

Km = ∫∫ Bm^T Dmm Bm dA (11)

Here, Dmm represents the material matrix of the GT9 element. Generally, if the element is made of an isotropic linear elastic material, Eq. (12) can be adopted to derive Dmm:

Dmm = (E h/(1 − ν²)) [1 ν 0; ν 1 0; 0 0 (1 − ν)/2] (12)

In Eq. (12), E represents the elastic modulus, h represents the element thickness, and ν represents Poisson's ratio.
Eq. (3) defines the rotational DOFs of the DKT element [Batoz, Bathe and Ho (1980)]. The relation between the nodal displacements q^b and the rotational strain χb is as follows:

χb = Bb q^b (13)

where the strain matrix Bb is assembled from the derivatives of the shape function vectors H^x and H^y. Details on H^x and H^y are available in Batoz et al. [Batoz, Bathe and Ho (1980)].
The stiffness matrix of DKT is as follows:

Kb = ∫∫ Bb^T Dbb Bb dA (14)

Here, Dbb denotes the material matrix of DKT. Generally, if the element is made of an isotropic linear elastic material, Dbb can be derived as:

Dbb = (E h³/(12(1 − ν²))) [1 ν 0; ν 1 0; 0 0 (1 − ν)/2] (15)

For small deformations, based on the plate stiffness matrix Kb and the membrane stiffness matrix Km, the local stiffness matrix of the NLDKGT element can be derived according to the DOF sequencing in Eq. (1). Then, the global element stiffness matrix can be obtained through coordinate transformation.
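For an isotropic material, the two material matrices above share the standard plane-stress and Kirchhoff-plate forms, which can be assembled as in the following sketch (ours, for illustration only; the numerical values are hypothetical):

```python
import numpy as np

def membrane_and_bending_matrices(E, nu, h):
    """Isotropic elastic material matrices of the flat shell element:
    Dmm (membrane, Eq. (12)) and Dbb (thin-plate bending, Eq. (15))."""
    base = np.array([[1.0, nu, 0.0],
                     [nu, 1.0, 0.0],
                     [0.0, 0.0, (1.0 - nu) / 2.0]])
    Dmm = E * h / (1.0 - nu**2) * base
    Dbb = E * h**3 / (12.0 * (1.0 - nu**2)) * base
    return Dmm, Dbb

Dmm, Dbb = membrane_and_bending_matrices(E=2.06e11, nu=0.3, h=0.01)
print(Dmm, Dbb, sep="\n")
```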
Geometric nonlinearity
At each time step t, through the updated Lagrangian formulation, the current deformation can be adopted to update the stresses and strains in incremental forms. Based on Kirchhoff's and von Karman's assumptions [Podio-Guidugli (1989)], a linear part (∆e) and a nonlinear part (∆η) constitute the shell element strain increment (∆ε). The linear part (∆e) can be derived from plate rotational strain increment ∆χb and membrane strain increment ∆εm, as shown in Eqs. (16) and (17), respectively.
From t to t + dt, Eq. (18) is adopted to update the shell element stress tensor:

σ_{t+dt} = σ_t + Dtan Δε (18)

Here, Dtan is the tangential constitutive matrix at time t. In the local coordinate system of the shell element, the system of equations using the updated Lagrangian formulation is as follows:

(Kl + Knl) Δu = t+dt F − t R (19)

Here, t+dt F and t R represent the external and internal force vectors, respectively; the left superscript represents the time step. In terms of the stiffness matrix, Kl is the linear part and Knl is the nonlinear part, which can be obtained through Eqs. (20) and (21), respectively:

Kl = ∫∫ B^T D B dA (20)
Knl = ∫∫ G^T N_t G dA (21)

Here, by integrating Dtan through the element thickness as shown in Eq. (22), Dmm, Dmb, Dbm, and Dbb can be solved. Using the bending plate element interpolation function, the matrix G can be derived [Batoz, Bathe and Ho (1980)]. Variables corresponding to the membrane element internal force vector constitute the matrix N_t (Eq. (23)). Eq. (24) can be used to solve the elemental internal force vector t R in Eq. (19).
Implementation in OpenSees
Under the shell element class in OpenSees, a new class named ShellNLDKGT is added. During the implementation of the NLDKGT element, nearly no source code outside the shell element domain is changed. Users can download the corresponding source code of the NLDKGT element from the official website of OpenSees (http://opensees.berkeley.edu). It is worth noting that the proposed NLDKGT element is compatible with the other elements in OpenSees. Thus, for a real finite element model, users can use NLDKGT elements in areas with complicated boundaries and other four-node shell elements in regular-shaped areas.
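A minimal usage sketch is given below, assuming the OpenSeesPy bindings expose the element under the same name ShellNLDKGT and using an elastic membrane-plate section purely for illustration (all tags, coordinates, and property values are hypothetical):

```python
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 3, '-ndf', 6)      # 3D model, six DOFs per node

# Elastic membrane-plate section: tag, E [Pa], nu, thickness [m], mass density
ops.section('ElasticMembranePlateSection', 1, 2.06e11, 0.3, 0.01, 7850.0)

ops.node(1, 0.0, 0.0, 0.0)
ops.node(2, 1.0, 0.0, 0.0)
ops.node(3, 0.0, 1.0, 0.0)
ops.fix(1, 1, 1, 1, 1, 1, 1)                  # clamp node 1

# Three-node shell element: tag, nodes i, j, k, section tag
ops.element('ShellNLDKGT', 1, 1, 2, 3, 1)
```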
Scordelis-Lo roof problem
The Scordelis-Lo roof problem is shown in Fig. 2. The cylindrical panel is loaded vertically by a uniform dead weight of g=90. The panel is supported by end diaphragms, but its sides are free. Owing to the symmetry, only one quarter of the panel is modeled. Three types of meshes were adopted in this analysis, as listed in Tab. 1. The vertical deflection at point A was recorded. For this case, geometric nonlinearity was not considered. The exact solution of 0.3024 provided by MacNeal and Harder [MacNeal and Harder (1985)] was used as a reference. The results obtained using the DKT-CST-15RB element [Nicholas, Henryk and Ted (1986)] and the OLSON element [Olson and Bearden (1979)] were compared with those obtained using the NLDKGT element. The DKT-CST-15RB element is a superposition of the DKT plate bending element and the CST plane stress element, with 15 DOFs [Nicholas, Henryk and Ted (1986)]. The OLSON element is an 18-DOF flat triangular shell element reformulated by combining a bending triangle with a plane stress triangle incorporating in-plane rotations at each vertex [Olson and Bearden (1979)]. Tab. 1 shows the comparison results. The NLDKGT element is more accurate than the other two elements.

Figure 2: Scordelis-Lo roof problem

Fig. 3 shows the twisted beam problem [MacNeal and Harder (1985)]. A concentrated load is applied at the tip along the in-plane (P) and out-of-plane (Q) directions, respectively. A mesh of 2×12 was adopted in this problem. Two load cases were analyzed: (1) P=1, Q=0; and (2) P=0, Q=1. The displacement along the loading direction at the tip was recorded. For this case, geometric nonlinearity was not considered. The exact solutions provided by MacNeal et al. [MacNeal and Harder (1985)] were used as a reference. Tab. 2 shows the comparison results and illustrates the accuracy of the NLDKGT element.

Figure 3: Twisted beam problem
Large deformation problem of a cantilever beam
To validate the capacity of the NLDKGT element for simulating geometric nonlinearity, a cantilever beam subjected to a pure bending load (out-of-plane) is analyzed [Horrigmoe and Bergan (1978); Park, Cho and Lee (1995)], as shown in Fig. 4. A mesh of 1×10 is adopted. Fig. 5(a) shows the relationship between the normalized moment (κ=M/Mmax) and the horizontal and vertical displacements at the loading point. Fig. 5(b) shows the deformed shape of the cantilever beam under different bending moments. The results show that the NLDKGT element can simulate large deformation and rotation problems with good accuracy, similar to the S4 element in ABAQUS. Such a large-deformation capacity makes the NLDKGT element highly suitable for geometric nonlinearity problems.
Buckling analysis of an H-shaped beam
An H-shaped beam, shown in Fig. 6, is used to demonstrate the buckling analysis. An isotropic elastic material (E=2.06×10¹¹ Pa, ν=0.3) is used for the beam. Both the NLDKGQ and NLDKGT elements were adopted in this analysis, and the corresponding meshes are shown in Fig. 6. The two ends of the H-shaped beam were simply supported. In finite element simulations, initial defects (e.g., initial bow imperfections leading to an additional moment at the middle of components) are theoretically necessary to trigger the buckling phenomenon. According to the recommendations in EN 1993-1-1 [CEN (2005)], a distributed load of p=0.5 N was imposed at each node on the web of the H-shaped beam to simulate the initial defects. Subsequently, a pressure load was applied at the top of the H-shaped beam, and the relation between the vertical load and the displacement along the loading direction was recorded, as shown in Fig. 7. As shown in Fig. 7, the model using NLDKGQ fails to converge when the imposed load approaches 200 kN, i.e., when the H-shaped beam just begins to buckle. This phenomenon is due to the warping of the quadrilateral shell element. In contrast, a stable result is obtained using the triangular shell element NLDKGT. Fig. 8 shows the deformation of the H-shaped beam along the Z direction. The blue and red solid lines denote the deformation shapes using the NLDKGT and NLDKGQ elements, respectively, at the time step when NLDKGQ fails to converge. The dashed blue line denotes the final deformation of the model using the NLDKGT element. Through the analysis of this case, the NLDKGT element is proven to be more stable and reliable than the NLDKGQ element for buckling analysis.
RC shear wall experiments
To investigate the performance of the NLDKGT element in simulating RC specimens, the hysteretic behavior of two shear wall specimens is analyzed using OpenSees based on the multilayered shell section model and the NLDKGT element. The test specimens include one rectangular wall (denoted as SW1-1) [Zhang (2007)] and one coupled wall (denoted as CW-3) [Chen and Lu (2003)]. The meshing schemes and corresponding hysteretic curves are shown in Fig. 9. The comparisons between the test and simulation results indicate that the NLDKGT element can provide satisfactory simulation results for the nonlinear behavior of RC shear walls.

Turning to the time integration algorithm, the dynamic equilibrium equation of the system can be written as:

M ü_t + C u̇_t + R_t = P_t (28)
where M and C are the mass and damping matrices of the system, respectively; R and P are the resisting and external force vectors of the system, respectively; u, u̇, and ü denote the displacement, velocity, and acceleration, respectively; and the subscript denotes the time.
It is difficult to decouple the equations if C is not a diagonal matrix. To avoid this problem, most researchers adopt the mass-proportional damping model for the central difference method. However, mass-proportional damping will underestimate the damping ratio of high-order vibration modes, which sometimes leads to unreasonable results.
Leapfrog integration method
The leapfrog integration method [Hockney (1970)] is an improved format proposed based on the Verlet integration method [Verlet (1967)]. In the leapfrog method, the equations for updating velocity and displacement are as follows:

u̇_{t+Δt/2} = u̇_{t−Δt/2} + Δt ü_t (29)
u_{t+Δt} = u_t + Δt u̇_{t+Δt/2} (30)

To adopt the leapfrog method, Eq. (28) is written at time t as:

M ü_t + C u̇_t + R_t = P_t (31)

and the velocity at time t is approximated by the backward difference format:

u̇_t ≈ u̇_{t−Δt/2} (32)

Substituting Eqs. (29)-(30) and Eq. (32) into Eq. (31) yields:

ü_t = M⁻¹ (P_t − R_t − C u̇_{t−Δt/2}) (33)

Eq. (33) shows that the system of equations can be decoupled when the mass matrix is diagonal. However, in this method, the kinetic and potential energies of the system are not defined at the same time step, leading to a failure in calculating the total energy directly. To solve this problem, certain additional steps are added to revise the algorithm. The entire process of the revised format is as follows [Sandvik (2018)]: (1) First, calculate u_{t+Δt} through Eq. (33); (2) Subsequently, calculate u̇_t again through the central difference method using the u_{t+Δt} obtained in Step (1); (3) Solve ü_t using the newly obtained u̇_t:

ü_t = M⁻¹ (P_t − R_t − C u̇_t) (34)

(4) Solve u̇_{t+Δt/2} and u_{t+Δt} using Eqs. (29)-(30).
The revised format above is performed through an iterative process. However, the additional computational cost is still relatively small, because the equations involved are simple. In addition, the revised format provides the velocity and displacement at the same time step, which makes it convenient to calculate the total energy at each time step directly. Because the backward difference format is adopted for the velocity, the stability criterion of the algorithm differs from that of the central difference method. Here, the conclusion is given directly as follows (more details are provided in Appendix A):

Δt ≤ (Tn/π) (√(1 + ζ²) − ζ) (35)

where ωn is the highest angular frequency of the system; Tn = 2π/ωn is the shortest period of the system; and ζ is the damping ratio corresponding to ωn. Eq. (35) shows that the numerical stability of the algorithm is related not only to the system frequency but also to the damping ratio.
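The revised format translates directly into code. The following sketch (ours; the first-order bootstrap of u at t = −Δt is our choice, not prescribed above) implements steps (1)-(4) for a lumped-mass system with an arbitrary, possibly full, damping matrix:

```python
import numpy as np

def explicit_leapfrog(M_diag, C, restoring, P, u0, v0, dt, nsteps):
    """Revised leapfrog format (steps (1)-(4) above) for M*a + C*v + R(u) = P(t).
    M_diag is the lumped (diagonal) mass as a vector; C may be a full matrix
    (e.g., modal damping): it only ever multiplies a known velocity, so no
    coupled linear solve is required at any step."""
    u = u0.copy()
    u_prev = u0 - dt * v0                 # first-order bootstrap of u at t = -dt
    v_half = v0.copy()                    # approximates the velocity at t - dt/2
    for i in range(nsteps):
        t = i * dt
        R = restoring(u)
        # step (1): trial acceleration, Eq. (33), and trial displacement
        a = (P(t) - R - C @ v_half) / M_diag
        u_trial = u + dt * (v_half + dt * a)          # Eqs. (29)-(30) combined
        # step (2): full-step velocity by the central difference method
        v_full = (u_trial - u_prev) / (2.0 * dt)
        # step (3): corrected acceleration, Eq. (34)
        a = (P(t) - R - C @ v_full) / M_diag
        # step (4): final half-step velocity and displacement, Eqs. (29)-(30)
        v_half = v_half + dt * a
        u_prev, u = u, u + dt * v_half
    return u, v_half

# 2-DOF linear demo: stiffness K, lumped mass, and an arbitrary full damping matrix
K = 100.0 * np.array([[2.0, -1.0], [-1.0, 1.0]])
u, v = explicit_leapfrog(M_diag=np.ones(2), C=0.05 * K,
                         restoring=lambda u: K @ u, P=lambda t: np.zeros(2),
                         u0=np.array([0.01, 0.0]), v0=np.zeros(2),
                         dt=1e-3, nsteps=5000)
print(u, v)
```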
For a finite element model, the shortest period Tn can be determined by solving the generalized eigenvalue problem of the system. To simplify this procedure, an alternative method is often adopted: solving the shortest period of each element (denoted as min(Tn^(e))) [Wang (2003)]. It has been proven that the substitute period min(Tn^(e)) is never longer than Tn. The min(Tn^(e)) of each element is usually approximated as πL/C, where L is the characteristic length of the element and C is the wave speed. These parameters may differ for different kinds of elements. For example, for truss and beam elements, L is the length of the element, and C can be taken as √(E/ρ), where E is the Young's modulus and ρ is the mass density. For shell elements, three kinds of L are provided by Hallquist [Hallquist (2006)], and C can be taken as √(E/(ρ(1 − ν²))), where ν is the Poisson's ratio.
Although different estimation methods can be found for min(Tn (e) ), the basic concept is identical: The stable time step will be smaller for models with smaller element sizes and larger stiffness. Therefore, appropriate meshing schemes should be adopted for models using explicit algorithms. The explicit algorithm above was implemented in OpenSees through a new class called Explicitdifference, which falls under the class of Integrator. The new algorithm fits the OpenSees framework. The source code of the method is available at the official website of OpenSees (http://opensees.berkeley.edu).
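Under these estimates, a conservative stable time step for a shell-element model can be computed as in the following sketch (ours; it combines the element-period estimate πL/C with the damping-dependent criterion of Eq. (35) as reconstructed above, and the numerical values are illustrative):

```python
import numpy as np

def stable_dt_shell(L_char, E, rho, nu, zeta=0.0, safety=0.9):
    """Element-level stable time step: T_min ~ pi*L/C with the shell wave speed
    C = sqrt(E/(rho*(1 - nu**2))), reduced by the damping term of Eq. (35)."""
    C = np.sqrt(E / (rho * (1.0 - nu**2)))
    T_min = np.pi * L_char / C
    return safety * (T_min / np.pi) * (np.sqrt(1.0 + zeta**2) - zeta)

# e.g., a 0.5 m concrete shell element with 5% damping on the highest mode
print(stable_dt_shell(L_char=0.5, E=3.0e10, rho=2500.0, nu=0.2, zeta=0.05))
```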
Damping model adopted in explicit algorithm
To avoid a high computational cost, restrain unreasonably high-frequency vibrations, and ensure that the equations can be decoupled, it is necessary to use the superposition of the modal damping and mass-proportional damping models. Thus, the modal damping model was implemented in OpenSees. The modal damping can be expressed as follows [Clough and Penzien (2003)]:

Cm = M (Σ_n (2 ζn ωn / mn) φn φn^T) M (36)

where Cm is the modal damping matrix; M is the mass matrix; and mn, ζn, ωn, and φn are the modal mass, modal damping ratio, natural vibration frequency, and mode shape corresponding to the nth mode, respectively. The damping model added to OpenSees in this work ensures that the superposition of the damping ratios from the modal damping and the mass-proportional damping for each vibration mode is equal to the assigned total damping ratio.
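A direct construction of Cm for the first few modes can be sketched as follows (our illustration of Eq. (36) as reconstructed above; the mode shapes come from the generalized eigenproblem, and the small 2-DOF system is hypothetical):

```python
import numpy as np
from scipy.linalg import eigh

def modal_damping_matrix(K, M, zetas):
    """Assemble Cm = M * (sum_n 2*zeta_n*omega_n/m_n * phi_n phi_n^T) * M
    for the first len(zetas) modes, following the form of Eq. (36)."""
    w2, Phi = eigh(K, M)                  # generalized eigenproblem K phi = w^2 M phi
    Cm = np.zeros_like(K)
    for n, zeta in enumerate(zetas):
        phi = Phi[:, n]
        omega = np.sqrt(w2[n])
        m_n = phi @ M @ phi               # modal mass
        Cm += 2.0 * zeta * omega / m_n * np.outer(phi, phi)
    return M @ Cm @ M

K = 1e4 * np.array([[2.0, -1.0], [-1.0, 1.0]])
M = np.diag([1.0, 1.0])
Cm = modal_damping_matrix(K, M, zetas=[0.05, 0.05])
print(Cm)
```

Note that Cm is generally a full matrix; as discussed above, this poses no problem for the proposed explicit algorithm, since Cm only multiplies known velocity vectors and never needs to be inverted.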
Collapse simulation of a high-rise RC frame-core tube building
In this section, a 42-story RC frame-core tube building with a height of 141.8 m (Fig. 10) (denoted as Building 2N by Lu et al. [Lu, Xie, Guan et al. (2015)]) was simulated using OpenSees. More details about this building are provided by Lu et al. [Lu, Xie, Guan et al. (2015)]. The beams and columns were simulated with fiber beam elements. Shell elements combined with the multilayered shell section were adopted to model the shear walls. Hence, both material and geometric nonlinearity can be considered. The shear walls in this high-rise building are of regular shapes. In this work, the coupling beams of the core tube were simulated using NLDKGT, while the other shear walls were modeled using NLDKGQ. First, the El-Centro 1940 ground motion was adopted as the input along the X direction. The peak acceleration was adjusted to 5.1 m/s² (2% probability of exceedance in 50 years, as defined in the Chinese code [CMC (2010)]). According to the mesh size, material properties, and element size, the time step was set to 4×10⁻⁵ s for the explicit algorithm and 0.01 s for the implicit algorithm. Tab. 3 provides the information of the analyzed cases. As shown in Fig. 11, the superposition of the mass-proportional and modal damping (i.e., the Ex-MS+MD model) provides results similar to those of the implicit algorithm using Rayleigh damping (i.e., the Im-RL model). However, if only the mass-proportional damping model is adopted, the IDR results are much greater.
For large-scale engineering structures, the fundamental periods are relatively long, and the high-order vibration modes contribute significantly to the structural responses. The mass-proportional damping model alone is therefore not well suited to large-scale structures; it is more appropriate to superpose the modal and mass-proportional damping models so that unreasonably high-frequency vibrations are suppressed. An incremental dynamic analysis (IDA) was performed using the Ex-MS+MD10 model, with the peak acceleration adjusted to 5.1 m/s², 20 m/s², 40 m/s², and 50 m/s², respectively. The collapse intensity is identified as the point where the slope of the IDA curve drops to 20% of its initial slope [FEMA (2000); Jalayer (2003); Villaverde (2007)]. According to this criterion, Building 2N collapses when the peak acceleration of the El-Centro record exceeds 40 m/s².
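A minimal sketch of the 20%-of-initial-slope criterion applied to an IDA curve of PGA versus maximum IDR (the sample points below are illustrative, not the paper's data):

```python
import numpy as np

def collapse_intensity(pga, idr, frac=0.20):
    """Return the first PGA at which dPGA/dIDR falls below frac * initial slope.

    pga, idr : 1-D arrays of IDA results, ordered by increasing intensity.
    """
    slopes = np.diff(pga) / np.diff(idr)   # local slope of the IDA curve
    for i, s in enumerate(slopes):
        if s < frac * slopes[0]:
            return pga[i + 1]              # intensity at the flattening point
    return None                            # no collapse detected

# Illustrative IDA points (m/s^2 vs. max inter-story drift ratio)
pga = np.array([5.1, 20.0, 40.0, 50.0])
idr = np.array([0.004, 0.011, 0.024, 0.060])
print(collapse_intensity(pga, idr))        # -> 50.0 for these toy points
```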
(Figure: (a) inter-story drift envelope; (b) relation between PGA and the maximum IDR.)
Tab. 4 shows the efficiency comparison among the cases in Tab. 3 plus an additional case (explicit algorithm + Rayleigh damping). Among the cases using the explicit algorithm, the Ex-MS and Ex-MS+MD models require the least computational time. However, when the Rayleigh damping model is adopted with the explicit algorithm, the time cost becomes much larger (approximately three times that of the Ex-MS model).
The primary reason is that the Ex-RL model spends significantly more time at each step on the damping matrix contribution from the stiffness matrix. The total computational cost of the two cases using the implicit algorithm is less than that of the explicit algorithm. Notably, however, when the implicit algorithm is used for collapse analysis (the Im-RL2 model), the computational time increases significantly, because the number of iterations grows sharply once structural components enter strong nonlinearity. Even with a relaxed convergence tolerance, the average time cost per step is still 2.4 times that of the Im-RL1 model (which uses a smaller ground motion intensity). The explicit algorithm requires no iteration, so its time cost is simply proportional to the number of time steps. Thus, for strongly nonlinear problems, the implicit algorithm requires more time than the explicit algorithm and may even fail to produce results because of convergence failure.
Although the Ex-MS model also demands little computational time, its results are not accurate because mass-proportional damping alone cannot suppress the spurious high-order vibrations. The Ex-MS+MD model is therefore the best option for the collapse analysis of this building.
Conclusions
For strongly nonlinear analysis, the element formulation and the time integration algorithm are two key challenges, and the shell elements and explicit algorithms in OpenSees still required improvement. A triangular shell element NLDKGT and an explicit algorithm were therefore proposed and implemented in OpenSees in this work. The conclusions are as follows: (1) Validation against classical benchmarks showed the triangular shell element NLDKGT to be accurate and reliable. Compared with the quadrilateral element, NLDKGT not only captures geometric nonlinearity well but also shows clear advantages in strongly nonlinear and warped-geometry problems such as buckling analysis; in addition, NLDKGT elements are more flexible along complicated boundaries, helping to avoid mesh distortion. (2) An explicit algorithm based on the leapfrog method, together with a modal damping model, was implemented in OpenSees. In the nonlinear time-history analysis of a high-rise RC frame-core tube building, the proposed shell element and explicit algorithm demonstrated higher efficiency and more stable results for strongly nonlinear problems. | 5,826.8 | 2019-01-01T00:00:00.000 | [
"Engineering"
] |
Can One Phase Induce All CP Violations Including Leptogenesis?
In the framework of a SUSY SO(10) model, a phase is generated spontaneously for the B−L breaking VEV. Fitting this phase to the observed CP-violating K and B decays, all other CP-breaking effects are uniquely predicted. In particular, the amount of leptogenesis can be explicitly calculated and is found to be of the right magnitude and sign for the BAU.
CP violation is directly observed only in the decays of the K and B mesons. The present experimental results [1] are consistent with the standard model (SM), i.e., CP breaking induced by a phase in the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix. Extensions of the SM with right-handed (RH) neutrinos, which account for the neutrino oscillations, generally involve phases that allow for CP violation in the leptonic sector as well. This CP breaking is difficult to observe but may be detected once neutrino factories become available; the observation of neutrinoless double beta decay may also indicate Majorana phases in the neutrino sector [2]. Spontaneous generation of the baryon asymmetry of the universe (BAU) requires CP violation [3], and it is now clear that it also requires an extension of the SM; generating the BAU à la Fukugita and Yanagida [4] via leptogenesis [5] is the most popular and promising scenario.
Where does the CP breaking come from? CP breaking can be induced via phases in the Yukawa couplings, in the interactions of the LH and RH gauge bosons, and in the VEVs. Phases in spontaneously generated VEVs lead naturally to CP violation, and such spontaneous breaking can also help solve the strong CP problem [7][8]. Spontaneous violation of CP was suggested long ago by T. D. Lee [9]. In the framework of SO(10) GUTs, spontaneous breaking was first discussed by Harvey, Reiss and Ramond [10]. Recently, Bento and Branco [11] added to the SM a heavy Higgs scalar with a B−L violating VEV to generate spontaneous CP violation.
In general, the known CP violation in the hadronic sector is not related to the leptonic one, and even the CP breaking needed for leptogenesis is usually independent of that in the leptonic sector. Hence, CP violation in the leptonic sector is in general not predictable; predictability can be gained only within a specific model. There are quite a few models relating CP violation in the neutrino sector to leptogenesis [12], but no conventional SUSY GUT connecting leptogenesis to the observed CP violation in the K and B decays is presently known.
I would like to suggest in this paper that the one and only origin of CP violation is a spontaneous breaking at high energies. A phase in the B−L breaking VEV can induce all manifestations of CP violation. This phase can be fixed by the observed breaking in the K and B decays, and the other CP violations are then predicted. In particular, we will show explicitly that within a SUSY SO(10) model the amount and sign of leptogenesis come out as needed for the right BAU.
Let me first show how a phase can be spontaneously generated in the SU(5) singlet component of a scalar 126 representation of SO(10). It was already pointed out by Harvey, Ramond and Reiss [10] that there is a natural way to break CP spontaneously at high energies, owing to the fact that (126)⁴ is SO(10) invariant. Φ126 is the Higgs representation used to break B−L, and its SU(5) singlet component also gives masses to the heavy RH neutrinos. The corresponding large VEV induces small VEVs in those components of Φ126 that transform like 2L doublets under the SM [13], which play a role in the light fermion mass matrices.
Assume that all the parameters in the SO(10) invariant Lagrangian are real. If the three fermionic families sit in Ψ16's, only Φ10, Φ126 and Φ120 can contribute to the mass terms. Suppose we have chosen global symmetries that dictate a (super-)potential of the form given in [11], and that those are the only phase-dependent terms after the spontaneous breaking. Let the SU(5) singlet component of Φ126 acquire a VEV, as well as the right component of Φ10; the phase-dependent part of the potential can then be written down explicitly, and for B positive and |A| > 4B its absolute minimum fixes a nontrivial phase. This spontaneous generation of a phase in the large VEV Υ also generates phases in the induced small VEVs which give mass to the light fermions, and these lead to CP violation in the quark and lepton sectors. The value of the spontaneously generated phase α depends on arbitrary parameters in (Note that 10 is a real representation. For a detailed discussion of possible scalar potentials see Ref. [10]; the [(Φ126)⁴_S + (Φ̄126)⁴_S] part also serves to break the continuous global symmetries, avoiding massless Nambu-Goldstone bosons.)
the Higgs potential. Its actual value can, however, be fixed by requiring that the phases of the induced light VEVs reproduce the observed CP violation in the K and B decays. All other manifestations of CP violation are then explicitly predicted; in particular, the amount and sign of leptogenesis are predicted in models where M^Dirac_ν is known.
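The logic of fixing α can be illustrated numerically. Purely for illustration, assume a two-harmonic phase potential V(α) = A·cos 2α + B·cos 4α, which has the generic structure of a phase-dependent part built from quadratic and quartic invariants; this is not the paper's exact (elided) potential:

```python
import numpy as np

# Illustrative-only potential: A and B stand for combinations of the real
# Lagrangian parameters. A nontrivial minimum alpha (neither 0 nor pi)
# signals spontaneous CP breaking.
def V(alpha, A, B):
    return A * np.cos(2 * alpha) + B * np.cos(4 * alpha)

alpha = np.linspace(0.0, np.pi, 100001)
for A, B in [(1.0, 0.1), (1.0, 1.0)]:      # |A| > 4B and |A| < 4B cases
    a_min = alpha[np.argmin(V(alpha, A, B))]
    print(f"A={A}, B={B}: alpha_min = {a_min:.4f} rad")
# -> alpha_min = pi/2 for (1.0, 0.1) and ~0.912 rad for (1.0, 1.0)
```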
Let me now explicitly calculate the amount of leptogenesis in a SUSY SO(10) model where a phase is generated spontaneously in the B−L breaking VEV. The model was developed in a series of papers [14][15]. It was originally aimed at finding explicitly the mixing angles that are hidden in the SM, such as the RH rotations; these allow one to calculate explicitly, e.g., the proton decay branching ratios as well as all mass matrices, in particular the Dirac neutrino mass matrix and the RH neutrino mass matrix needed for the calculation of leptogenesis. We use here the mass matrices given in Ref. [15]. This is a renormalizable SUSY SO(10) model, i.e., B−L is broken via Φ126 + Φ̄126, while Φ126 gives mass to the RH neutrinos (without using non-renormalizable contributions). The origin of CP breaking in the model is a phase in the SU(5) singlet component of one Φ126. A global horizontal symmetry U(1)F dictates the asymmetric Fritzsch texture [16] for the fermionic mass matrices and the possible VEVs in the different Higgs representations. By fitting the free parameters to the observed masses and CKM matrix, a set of non-linear equations is obtained. These equations have five solutions obeying all the restrictions, i.e., five sets of explicit mass matrices. The Dirac neutrino mass matrices, given explicitly in Appendix I, have this texture, and the RH neutrino mass matrices take a form with real parameters a, b > 0; the corresponding eigenmasses are given in Table 1. What is leptogenesis? Out-of-equilibrium CP-violating decays of RH neutrinos Ni produce a lepton-number excess δL ≠ 0, which induces a baryon asymmetry through (B−L)-conserving sphaleron processes [4][5][6]. Knowing the amount of CP violation in these decays, the details of CP violation in the leptonic sector, and the RH mixing angles (note that M†M is diagonalized using the RH mixing matrix), one is able to calculate the BAU via leptogenesis explicitly. This is the main test of the model.
Let us denote by MD the Dirac neutrino mass matrix M^Dirac_ν in the basis where MνR is real and diagonal with positive eigenvalues. In this basis εi can be expressed in terms of MD, with v = 174 × sin β GeV (tan β = 10 is used in the model [15]).
MνR is given in eq. (7) and its eigenmasses in Table 1.
It is diagonalized by a matrix U, and in this basis, in terms of eq. (6), MD determines the CP asymmetries. Due to the degeneracy of N1 and N2, the decays of both contribute to εi; eq. (8), however, avoids the possible singularity in f(x). The BAU is then given (in the minimal supersymmetric SM) in terms of g* = 228.75 and the dilution factor dB−L, which accounts for inverse-decay washout effects and lepton-number violating scattering and must be obtained by solving the corresponding Boltzmann equations. Different approximate solutions exist in the literature: the frequently used approximation [17] is good only for K > 1, whereas in our model K ≈ 10⁻². Buchmüller, Di Bari and Plümacher [6] recently studied both cases K > 1 and K < 1 in detail; they found that for K < 1 one must take into account thermal corrections due to the gauge bosons and the top quark, so that dB−L depends on "initial conditions" (see Figure 9 in their paper, where dB−L is called κf). Hirsch and King [18] give empirical approximate solutions for the case K ≪ 1, and I use the expression corresponding to our model to obtain a definite prediction. The results for the five solutions are given in Table 2 (the CP asymmetry εL, the dilution factor dB−L, and the baryon asymmetry YB for the five solutions). I must emphasize that there is no ambiguity in the prediction of the sign, for the following reasons: a) the sign of M1 must be positive, because εi is calculated in terms of MD, the neutrino Dirac mass matrix in the basis where the RH neutrino mass matrix (7) is diagonal, real, and positive; b) the parameters, and especially the phases, of M^Dirac_ν (6) are fixed without ambiguity for each of the above solutions, although one cannot write their dependence on α explicitly. As mentioned before, the entries of the mass matrices are solutions of non-linear equations in which the induced components of Φ126 (with the phase α) are involved. The physical value of α is then fixed by requiring J_Jarlskog ∼ 10⁻⁵, giving α ∼ 0.003.
To complete the predictions of the model, let me use the complex lepton mixing matrix U_PMNS of Ref. [15] (see Appendix II) to give the amount of CP violation in neutrino oscillations and the value of <m_ee> relevant for neutrinoless double-beta decay (ββ0ν); see Table 3. (m1 in our solutions is of O(10⁻³ eV).)
CONCLUSIONS
I presented in this paper the following observations: CP is naturally broken spontaneously at high energies in SO(10) GUTs. A phase is generated in a VEV and not in the Yukawa couplings, as is usually done. This can serve as the only origin of CP violation.
In the framework of a SUSY SO(10) model that uses this idea, fitting to the observed CP violation as reflected in the CKM matrix uniquely fixes the CP breaking in the leptonic sector, without free parameters. An explicit calculation of leptogenesis in this model gives solutions consistent with the range and sign of the observed BAU. (A "common origin for all CP violations" was also suggested recently by Branco, Parada and Rebelo [21]. They use a non-SUSY SM extended by adding scalar Higgs bosons, leptons, and exotic vector-like quarks; the complex phase is generated spontaneously in the VEV of a heavy singlet scalar, and the connection with low-energy CP violation in the hadronic sector is obtained only via mixing with the exotic quarks. They give no explicit value for the leptogenesis.) Our model applies the conventional see-saw mechanism [22]; it is possible, however, to use a similar program for the type II see-saw [23] as well [24]. The large value of the RH neutrino mass can be incompatible with the gravitino problem if SUSY is broken in the framework of mSUGRA. Possible solutions are discussed in the literature; e.g., Ibe, Kitano, Murayama and Yanagida [25] very recently presented a nice solution based on anomaly-mediated SUSY breaking.
APPENDIX I
The Dirac neutrino mass matrices for the five solutions (for tan β = 10) in GeV. | 2,972.2 | 2004-03-31T00:00:00.000 | [
"Physics"
] |
Gene mutations responsible for primary immunodeficiency disorders: A report from the first primary immunodeficiency biobank in Iran
Background Primary immunodeficiency (PID) is a heterogeneous group of inheritable genetic disorders with increased susceptibility to infections, autoimmunity, uncontrolled inflammation, and malignancy. Timely, precise diagnosis of these patients is essential: untreated, patients may not survive their congenital immune defects, whereas with appropriate treatment they can. DNA biobanks of such patients can be used for molecular and genetic testing, facilitating the detection of underlying mutations in known genes as well as the discovery of novel genes and pathways. Methods According to the latest update of the International Union of Immunological Societies (IUIS) classification, patients were registered in our biobank over a period of 15 years. All patients' data were collected via questionnaire, and their blood samples were taken in order to extract and preserve their DNA content. Results Our study comprised 197 patients diagnosed with PID. Antibody deficiency in 50 patients (25.4%), phagocytic defects in 47 patients (23.8%), and combined immunodeficiency with associated/syndromic features in 19 patients (9.6%) were the most common PID diagnoses. The most common single PID entity in our study was common variable immunodeficiency, which accounted for 20 cases (10.1%), followed by chronic mucocutaneous candidiasis in 15 patients (7.9%) and congenital neutropenia in 13 patients (7%). Mean age at onset of disease was 4 years and mean age at diagnosis was 9.6 years; the average diagnostic delay was 5.5 years, with a range of 6 months to 46 years. Parental consanguinity and a family history of PID were observed in 70.2% and 48.9% of the patients, respectively. The majority of PID patients (93.3%) were from families with low socioeconomic status. Conclusion This prospective study was designed to establish a PID biobank providing a high-quality DNA reservoir of these patients, shareable for international diagnostic and therapeutic collaborations. This article emphasizes the need to raise the awareness of society and general practitioners to achieve timely diagnosis of these patients and prevent current mismanagement.
Background
Primary immunodeficiency (PID) refers to a complex group of genetic disorders characterized by defects in the immune system, resulting in high susceptibility to various infections [1]. Publications on PID patients have improved our knowledge such that, at present, about 250 genes are known to be involved in distinct immunodeficiency disorders [2].
A report from the Iranian Primary Immunodeficiency Registry (IPIDR) established the incidence of PID at 13 per 1,000,000 population and a mortality rate of 18.7%, approximately similar to the global mortality rate. Although significant advances in the identification of PIDs have been made, their prevalence is underestimated owing to a lack of awareness among the public and general practitioners [3,4].
Since most PIDs are inherited in an autosomal recessive pattern, consanguineous marriage leads to a higher rate of their prevalence [5,6]. The frequency of inter-family marriage in Muslim societies such as Iran is higher than in non-Muslim societies [7][8][9]. Data from the IPIDR revealed that 63% of the PID patients have consanguineous parents [4].
Biobanks are designed to store samples and data from patients willing to participate in biomedical research. They also provide access to patients' samples for long-term evaluation, appropriate diagnosis, and treatment [10], and they enable links between medical centers around the world for research and therapeutic purposes, which is particularly helpful for rare diseases [11]. In this study, we introduce a biobank for PID patients (PIDB) with the aim of collecting and preserving sensitive data and biological samples. The PIDB permits assessing the proportion of affected individuals by genome sequencing and determining undiagnosed types of PID.
Necessity of creating a biobank
Inadequate sample availability and poor biospecimen quality limit case-control and cohort studies, particularly in the field of genetic disorders [12,13]. Without a data and specimen bank, much time and material are wasted in each cross-sectional study. These limitations can be overcome by establishing a comprehensive biobank and database for affected patients. Storage of blood samples accelerates laboratory investigations and offers the opportunity to study several specimens simultaneously. Human DNA, a stable molecule containing genetic information, is extensively used for research purposes. The benefit of setting up a DNA bank is to overcome diagnostic limitations in our country through the expansion of international collaborations: the collected samples can be distributed across borders for basic and clinical research projects, resulting in definite diagnoses [14].
PIDB management and funding
Our PID biobank was established with the support of our Immuno-Deficiency Research Center (IDRC) and is managed by the head of this research center. The Isfahan Immunodeficiency Association, a private charity, financially supports the project. As explained below, samples are prepared in our center and sent to partner centers across the world; academic research collaboration agreements have been designed so that the genetic studies are carried out free of charge.
PIDB database
This retrospective study comprised patients with a diagnosis of PID who were referred to the clinical immunology clinics in Isfahan or hospitalized in Alzahra hospital to receive IVIg and other parenteral therapies. We used the criteria of the European Society for Immunodeficiencies (ESID) and the International Union of Immunological Societies (IUIS-2014) for the diagnosis of PID, from 2000 to 2015 [15,16]. Family members of the patients who were suspected of any kind of PID, or who presented even atypical manifestations, were included in our survey; in particular, individuals born to consanguineous parents took priority in being investigated. All individuals were provided with an information sheet describing the purposes of creating the PIDB and possible further research. Each patient who agreed to participate was given a unique code, which served as the label for the specimen tube and the data sheet. Data were collected through questionnaires covering detailed demographic information, socioeconomic status, parental consanguinity, family history of PID, number of deaths due to PID, first clinical manifestations of disease, history of recurrent infections, history of autoimmune disease, history of atopy, laboratory results, treatments, information from stored medical documents, and interviews with patients. Data were collected in an Excel database and converted for analysis with the SPSS statistical software package, version 16. Means, maxima, and minima were used to describe quantitative variables. ANOVA was used to compare quantitative variables across more than two groups, and Pearson's χ² test was used to compare nominal/ordinal variables among groups. A p value lower than 0.05 was considered statistically significant.
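A minimal sketch of the two statistical comparisons described above (hypothetical toy data; the study used SPSS, and scipy is shown here only to make the tests concrete):

```python
import numpy as np
from scipy import stats

# Hypothetical ages at diagnosis (years) in three PID groups -> one-way ANOVA
g1, g2, g3 = [4, 6, 9, 12], [2, 3, 5, 7], [10, 14, 18, 25]
F, p_anova = stats.f_oneway(g1, g2, g3)

# Hypothetical consanguinity counts (yes/no) per group -> Pearson chi-square
table = np.array([[30, 10],   # group A: consanguineous / not
                  [22, 18],   # group B
                  [15, 25]])  # group C
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

print(f"ANOVA p = {p_anova:.3f}, chi-square p = {p_chi:.3f} (alpha = 0.05)")
```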
PIDB sampling
After the informed consent had been signed, a 10-ml blood sample was taken from each patient in a tube with anticoagulant for the extraction of DNA and RNA. The extraction was carried out with calibrated instruments according to standard protocols. It has been shown that adding citrate as anticoagulant yields higher-quality RNA and DNA; ethylenediaminetetraacetic acid (EDTA)-coated collection tubes are also suitable for extraction of DNA and protein but may show some unwanted side effects [17]. Peripheral blood mononuclear cells (PBMC) were isolated from the buffy coat by Ficoll-Hypaque density gradient. All specimens were processed and archived immediately after sample collection to avoid potential degradation, because a delay of more than 24 h between sampling and cryopreservation can affect the quality of the biomolecules. Freshly obtained PBMC were processed for DNA and RNA extraction with the High Pure PCR Template Preparation Kit, and the cDNA, synthesized with reverse transcriptase, was kept together with the DNA and RNA at −70 °C. It has been established that DNA is stable at 4 °C for weeks, at −20 °C for months, and at −70 °C for years, so −70 °C is suitable for long-term stable storage of DNA; however, there is some evidence that RNA may be damaged over 5 years of maintenance at this temperature. All steps of sample preparation were performed by a trained team of nurses and technicians. Because the quality of samples in biobanks must be guaranteed, the protocol used provided rigorous quality assurance and control. We used spectrophotometry for rapid evaluation of the yield and purity of the extracted DNA and RNA; an OD 260/280 ratio higher than 1.8 was considered an indicator of acceptably pure DNA/RNA, relatively free of protein.
Since spectrophotometry does not reflect the integrity of the genomic content, agarose gel electrophoresis was applied as well. Concentration and yield were determined by comparing the sample DNA intensity to that of a DNA quantitation standard. To preserve anonymity, specimens including DNA, RNA, and cDNA were labeled with their identification codes.
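A minimal sketch of the spectrophotometric purity screen described above (the 1.8 threshold is from the text; the sample readings are hypothetical):

```python
def dna_purity_ok(a260, a280, threshold=1.8):
    """An OD 260/280 ratio >= 1.8 is taken as acceptably pure DNA/RNA,
    relatively free of protein contamination."""
    return (a260 / a280) >= threshold

# Hypothetical absorbance readings keyed by anonymized specimen code
samples = {"PID-001": (0.95, 0.50), "PID-002": (0.80, 0.52)}
for code, (a260, a280) in samples.items():
    status = "pass" if dna_purity_ok(a260, a280) else "re-extract"
    print(f"{code}: ratio = {a260 / a280:.2f} -> {status}")
```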
PIDB role in medical research
Creation of a PIDB not only provides an opportunity to secure genetic information for further molecular studies but also yields epidemiological data on different types of PID [18]. This biorepository plays a significant role in the recognition of known gene mutations and the discovery of new ones through proper storage and processing of the collected samples. A PIDB is a resource of large amounts of DNA, which is especially beneficial for genetic analysis of families with complex pedigrees resulting from consanguineous marriages.
Informed consent
Informed consent is one of the major principles of ethics that should be considered in DNA biobank studies, as in other research in which specimens are obtained through intervention [19]. Consent allows patients to decide whether they are interested in participating in a study with a given sample of their body. All individuals have to be well informed about the purpose of the study and the associated risks and benefits, after which voluntary consent, with the rights of patients preserved, is obtained. All participants are allowed to withdraw consent at any time during the study. Since biobank samples are not prepared for just one study, consent from the donors has to be obtained for all further research, unless the patients have initially agreed to continued use of their samples under a broad consent form. We preferred to seek informed consent for a single study; this, however, requires re-contacting patients for any new purpose not declared in the primary form.
Frequent contact with the donor assures patients that the process of diagnosing their disease is continuing and has not been stopped for lack of noticeable progress. Since a deceased participant cannot be re-contacted, we agree with the authors who hold that, if obtaining re-consent is impossible, it is acceptable to re-use the samples without consent [20][21][22].
Most biobank studies do not include samples from children, because children do not always understand the purpose of research studies, are not easily accessible, require more skill to sample, and suffer relatively more than adults. Some authors believe that sharing data on children should not be allowed ethically until they reach adulthood and can decide for themselves whether to be involved in the investigation [23]; other authors argue, however, that by then the data will have expired and the individuals themselves cannot benefit [24]. In our study, which mainly comprised children, the parents were the only ones eligible to sign the informed consent on their behalf. This investigation was approved by the Medical Ethics Committee of the Isfahan University of Medical Sciences under approval no. 290130.
Participants' privacy
One of the participants' concerns in blood donation is the confidentiality of their personal information [25]. Biobanks contain genetic information about each individual with a specific phenotype. With anonymous samples, the link between laboratory and personal data is broken and only longitudinal epidemiologic results can be obtained; the best solution is coding the data, which guarantees protection and is acceptable in standard research practice [26][27][28].
We established clear policies to secure patients' privacy, such as defining different levels of data access for PIDB employees and encoding biospecimens and data [29].
There are issues with exchanging data and sharing databases in international collaborations: the risk of a confidentiality breach increases, which is why many researchers are reluctant to share data sets [30]. Our international collaboration is strongly encouraged, even by the patients, as it not only serves research goals but also aids the diagnosis of their disease. Hence, we took measures to maximize the patients' privacy by transmitting only coded data.
In this type of research, which targets a genetic biobank of a rare-disease population, patients have to be notified of concrete genetic findings relevant to their medical therapies once the final results are available. Informing patients equally and immediately after their definite diagnosis is an obligatory principle.
Partnership
Identification of the genetic basis of a primary immunodeficiency disease requires a sufficient number of cases and the availability of advanced technologies to discover the molecular origin of genetic disorders. Following the establishment of the PID biobank, a memorandum of understanding was signed between the Isfahan University of Medical Sciences and the Hannover Medical School and Ludwig Maximilian University of Munich, Germany; academic partnerships with France and Sweden for research and clinical collaborations were developed afterwards. As stated in the Methods section, DNA, RNA, and cDNA samples from patients were prepared according to the type of information sought and the required sensitivity. One of the following techniques for gene expression and variant profiling was applied: RNA (transcriptome) sequencing, exome sequencing, or other next-generation sequencing. RNA transcriptome sequencing mostly focuses on gene expression profiles and also detects alternative splicing events, but apart from being costly and time-consuming, its usefulness is limited for genes with relatively low expression. In exome sequencing, because DNA is targeted, there is no difficulty in detecting genes with low expression.
Results
The biobank of PID comprises 197 samples, 121 from males and 76 from females, collected from 2000 to 2015. The classification of registered patients according to the IUIS is presented in Table 1. Antibody deficiency and phagocytosis defects (in number and/or function) were the most common groups of PID, with 50 and 47 patients, respectively. The other subcategories of PID were as follows: combined immunodeficiency with associated/syndromic features, 19 cases; innate immunity disorders and autoinflammatory diseases, 13 cases each; combined immunodeficiency, 12 cases; immune dysregulation, 7 cases; and complement deficiency, 3 cases. Thirty-four patients presented with various manifestations of primary immunodeficiency disease but are not yet categorized into a specific group. CVID (common variable immunodeficiency) was the most common disorder, with 20 patients; CGD (chronic granulomatous disease, n = 14), CMC (chronic mucocutaneous candidiasis, n = 13), and MSMD (Mendelian susceptibility to mycobacterial disease, n = 12) were also common PIDs among the registered patients.
Our survey resulted in a molecular genetic diagnosis for 33 of the 197 patients. Another 160 patients suspected of having PIDs according to clinical symptoms have not yet received a genetic diagnosis. No mutation was detected in 4 cases by a targeted gene panel (which focuses on a set of relevant candidate genes with known diagnostic yield); these samples are under further investigation by whole exome sequencing (WES). The detailed genetic diagnoses of patients, including the affected gene, the mutation, the PID subclass, the molecular genetic test by which the diagnosis was made, and confirmation by Sanger sequencing, are listed in Table 2. Most of the patients presented typical clinical manifestations of known PIDs, so they could be diagnosed by a classical approach. Identifying gene mutations in the Iranian PID population resulted in the discovery of 2 novel mutations, in JAGN1 and STK4, which underlie a type of CVID and CID, respectively [31,32].
Data analysis
Our data analysis revealed that the median age of the registered patients is 15 years and 10 months, with a wide range from 2 to 58 years. The median age at onset of disease across the different types of PID is 2.5 years, ranging from infancy to 58 years. First signs and symptoms of PID appeared in 34.8% of patients under 1 year of age and in 50% under 2.5 years; only 12 cases showed first clinical manifestations of PID after the age of 10 years. The median age of onset among patients with different types of PID ranges from a minimum of 1 year to a maximum of 5.5 years, attributable to phagocytic defects and complement deficiency, respectively.
The median age at diagnosis was 7 years, with a range of 7 months to 51 years. Phagocytic defects in number/function account for the lowest median age at diagnosis (4 years), while innate immunity defects are diagnosed at an older age, with a median of 14 years and 7 months. Diagnostic delay varied from 6 months to 46 years, with a median of 3 years. The longest median diagnostic delay (4.5 years) was seen in patients with autoinflammatory disease, and the shortest (2 years) in antibody deficiency patients. 20.3% of patients were diagnosed one year after disease onset, and in about 38% of patients diagnosis was made more than 4 years after onset.
Ethnicity and religion were assessed in all patients. Almost all patients were Muslims, and 97% were Persian; two patients were Arab, one Turkish, and one Kurdish. CID, combined immunodeficiency with associated/syndromic features, and antibody deficiency were observed roughly threefold more frequently in males (M:F = 62:19), whereas the other PIDs affected males and females equally.
Among the different types of PID, parental consanguinity did not differ statistically. One hundred and twenty-two patients had consanguineous parents, and 63.9% of these were children of first-cousin marriages. The highest rates of consanguinity were seen in combined immunodeficiency with associated/syndromic features and in CID. A family history of PID was present in 48.9% of patients and was significantly more frequent in patients with CID and innate immunity defects. Forty-three patients belonged to 16 kindreds, and the gene mutations responsible for the PIDs were determined in 21 individuals from 8 families. One hundred and seventy-five patients were from families with low socioeconomic status, and no significant difference in socioeconomic status was seen between the different PID groups (Table 1).
Discussion
In the last decade, much effort has been devoted to establishing biobanks for different research purposes, and there has been a long-standing need for a PID biobank in Iran. Such a biobank, as a resource center for data and genomic material, facilitates the identification of genetic defects underlying different types of PID, in addition to the generation of descriptive statistics. We created the PIDB in order to replace ad-hoc research studies, which may lack standard quality of sampling, processing, and storage.
In our population, 30 different PIDs were diagnosed and categorized into 8 main groups. This is a preliminary report of our PID biobank and database. Our study showed that antibody deficiency is the most common group of PIDs in Iran, consistent with other studies [29,[33][34][35][36][37]. The overall proportion of patients with antibody deficiency is similar to previous reports from Iran but lower than in the latest ESID report (26 vs. 57%) [4,37]. Congenital defects of phagocyte number and/or function were the second most common PID group in our study, as in studies from France, Malaysia, Korea, USIDNET, Iran, and Iceland [4,[36][37][38][39], in contrast with registries that report combined immunodeficiency with associated/syndromic features as the second most common group [33,35,39,40]. Phagocyte defects involved 23.9% of our patients and 42% of PIDs in Oman [41]; however, much lower frequencies have been observed in the ESID registry, the UK, Turkey, and Spain [35,39,42]. Combined immunodeficiency with associated/syndromic features was the third most prevalent group, accounting for 10% of patients, in accordance with studies from France, Malaysia, and the UK. This group of PID was formerly known as well-defined syndromes but has been renamed combined immunodeficiency with associated/syndromic features in the updated IUIS classification. Innate immunity defects and autoinflammatory diseases are followed by combined immunodeficiency and diseases of immune dysregulation. Complement deficiency was the least prevalent subcategory in our study, as in other surveys [39,43]. CVID was the most frequent single disorder, as in most studies [34,39], in contrast with the latest update of the IPIDR, which indicated SCID (severe combined immunodeficiency) as the most common disorder in Iran [4].
It was not an unexpected finding that PID is more frequent in males than females (1.7:1), but we found a significant difference in sex distribution in CID, combined immunodeficiency with associated/syndromic features, antibody deficiency, and autoinflammatory disorders compared with the other PID groups. The higher frequency of PID in males is partially explained by known X-linked disorders, yet our registry included only 10 BTK deficiencies, 1 CD40 ligand deficiency, 2 WAS, and 1 IPEX patients. An important finding is that, beyond these known X-linked diseases, there is still a significant difference in sex distribution in the mentioned groups; this may suggest the presence of undiscovered X-linked patterns of inheritance among our PID population. Few studies have evaluated consanguinity in PID families. Ours is the first study to analyze consanguinity among different groups of PIDs, although no significant difference was observed. Recent studies and registries from Islamic countries, and also from Germany, showed a higher rate of consanguineous marriage in parents of PID patients than reported from the UK and Turkey [35,42]. In our PID population, consanguinity was mostly observed in patients with CID and CID with associated/syndromic features. Our study evaluated the history of any known or suspected case of PID in the patients' family members, which was more frequent than in another report from Iran (48.9 vs. 28.9%) [4], probably because of the consideration of a positive family history of PID in relatives of patients who did not yet have a definite genetic diagnosis. The diagnostic delay in our study (5 years and 6 months) was much longer than in other studies. This may be due to ignorance of symptoms and mismanagement of patients by inept physicians; the clinical evidence for this is that 22 patients (11.7%) suffer from bronchiectasis. The delay from onset of symptoms to diagnosis was shorter in CID with syndromic/associated features, because these patients show distinctive presentations in childhood or intrauterine defects, including coarse facies, microcephaly, mental retardation, dwarfism, and IUGR, among others, which cannot be neglected. Our registry also recorded 7 deaths due to primary immunodeficiency disease, which can be partly related to delayed or missed diagnosis. Because patients with PID struggle lifelong with various recurrent infections, and no definite treatment is known except bone marrow transplantation in some cases, many receive only therapies to control signs and symptoms; consequently, these patients visit their clinical immunologists frequently for follow-up and prescription renewal. Our last update revealed that 61 patients were not followed in the previous year, so we are unaware of their disease status. The mortality rate in our study (5.5%) is much lower than in the other report from Iran (18.7%) [4]. We established that most of the deceased patients suffered from SCID; however, our study included few cases of SCID and other severe PIDs leading to death. We also observed that most of the affected patients are of low socioeconomic status, which might be due to a lower level of education and unawareness of the risks associated with consanguineous marriage.
Other studies indicated that the lack of accurate clinical PID diagnostic criteria, the unawareness of general practitioners, and the failure to consult physicians in cases with mild presentations cause PIDs to go undiscovered. Societies with advanced social awareness report statistics closer to the actual figures. Therefore, long-term projects for improving social knowledge about primary immunodeficiency diseases, their clinical presentations, and consanguineous marriage and its genetic effect on the occurrence of PIDs are planned. We have begun cooperating with the Standing Committee on Public Health of the International Federation of Medical Students' Associations, which is responsible for raising awareness of global public health issues. As a rule, the earlier the diagnosis is made, the sooner treatment can commence and the lower the expected morbidity and mortality.
Conclusion
Although a national registry has been established in Iran (IPIDR) [4], this is the first comprehensive biobank for PID patients in our country, offering cross-country collaborations with the goal of establishing genetic diagnoses for these patients. Genome sequencing and definite diagnosis may help patients receive effective treatment and enjoy higher survival. Our study was designed to provide the PID Biobank (PIDB) as a high-quality DNA reservoir of these patients, shareable for international diagnostic and therapeutic collaborations. We are keen to link with other international research centers in order to share data and samples; the high prevalence of consanguinity makes PIDB samples valuable for collaborative projects. This article emphasizes the need to raise the awareness of society and general practitioners in order to achieve timely diagnosis of these patients and prevent current mismanagement. | 5,857.6 | 2016-12-02T00:00:00.000 | [
"Medicine",
"Biology"
] |
Illumina sequencing‐based analysis of sediment bacteria community in different trophic status freshwater lakes
Abstract Sediment bacterial communities are a main driving force for nutrient cycling and energy transfer in aquatic ecosystems, and a thorough understanding of their spatiotemporal variation is critical for understanding the mechanisms of such cycling and transfer. Here, we investigated sediment bacterial community structures and their relations with environmental factors, using Lake Taihu as a model system to explore the dependence of biodiversity on trophic level and seasonality. To overcome the limitations of conventional techniques, we employed Illumina MiSeq sequencing and LEfSe cladograms to obtain a more comprehensive view of the bacterial taxonomy and its spatiotemporal distribution. The results revealed a 1,000-fold increase in the total number of sequences harvested and an inverse relationship between trophic level and bacterial diversity in most seasons of the year. A total of 65 phyla, 221 classes, 436 orders, 624 families, and 864 genera were identified in the study area. Delta-proteobacteria and gamma-proteobacteria prevailed in spring/summer and winter, respectively, regardless of trophic conditions; meanwhile, the two classes dominated the eutrophic and mesotrophic lake regions, respectively, but only in fall. In the LEfSe analysis, most taxa with the strongest seasonal variation peaked in spring/summer, and most taxa with the strongest spatial variation peaked in the moderately eutrophic region. Pearson's correlation analysis indicated that 5 major phyla and 18 sub-phylogenetic groups correlated significantly with trophic status. Canonical correspondence analysis further revealed that porewater NH₄⁺-N as well as sediment TOM and NOₓ-N are likely the dominant environmental factors affecting bacterial community composition.
Because the spatial and temporal distribution of these microbes is controlled by the physicochemical conditions of the sediments, in particular temperature, nitrogen level, and organic matter (Haller et al., 2011; Song, Li, Du, Wang, & Ding, 2012), a shift in sediment bacterial communities can provide important insights into environmental changes in the local ecosystem.
Bacterial communities are characterized by their structure and biodiversity, which have so far been well studied by conventional experimental techniques such as polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) and clone libraries. These investigations helped establish a broad understanding of microbial communities' temporal and spatial distribution patterns.
For example, a PCR-DGGE/clone library study of the bacterial community in the Sitka stream, Czech Republic, found that most of the mcrA gene clones showed low affiliation with known species and probably represented genes of novel methanogenic archaeal genera/species (Rulik et al., 2013). Another study in the Yangtze Delta (Huang, Xie, Yuan, Xu, & Lu, 2014), using the same techniques, found that the number of total cultivable bacteria in an estuary reservoir was significantly lower than in the main river. Despite this progress, the knowledge obtained by these studies has its limitations, because the low-throughput methods employed often underestimate overall diversity and lack the ability to detect rare species in complex environmental systems. For example, Berdjeb, Pollet, Chardon, and Jacquet (2013) used similar methods to examine the archaeal community structure in two neighboring peri-alpine lakes of different trophic status but found no spatiotemporal dynamics in their study area, suggesting the potential inadequacy of conventional techniques for probing the complexity of biodiversity and community structure in natural environments.
Compared to the conventional methods, high-throughput sequencing has the advantage of generating multi-million sequence reads and thousands of Operational Taxonomic Units (OTUs) from environmental samples. For example, Conrad et al. (2014) used pyrosequencing to obtain more than 1000 bacterial OTUs in sediments of the Amazon region and found that rewetting of the sediments resulted in a dramatic increase in the relative abundance of Clostridiales. The chosen study area, Lake Taihu (2,338 km²), is highly heterogeneous in trophic level owing to differences in river input to different regions; as such, the water body of the lake can be divided into different ecological types based on trophic status and plankton community structure. Spatial variation of bacterial communities in the lake sediments has been documented by a number of researchers, but no consensus has been reached so far. For example, Liu et al. (2009) reported the absence of Actinobacteria in the eutrophic area of the lake, but this was contradicted by Chen et al. (2015), who detected as much as 5% abundance for this phylum. Similar inconsistencies are found for the spatial distribution of Cyanobacteria, alpha-proteobacteria, and Planctomycetes when comparing the results of Shao et al. (2011) and Chen et al. (2015).
On the vertical dimension, Ye et al. (2009) reported similarity of bacterial communities in different layers of sediments taken from Meiliang Bay, but Shao et al. (2013) later discovered variation of the bacterial community and an overall decrease of biodiversity with depth in the same bay. A literature review indicates that such disagreement may originate largely from the limitations of the clone library, because most of these previous studies employed conventional analytical techniques. In this study, we assessed the sediment bacterial community in a lake with a known trophic gradient using a high-throughput sequencing method (Illumina MiSeq) to circumvent the technical limitations of the traditional methods.
For data processing, we used Linear discriminant analysis Effect Size (LEfSe) to recover the spatiotemporal variations of the bacterial community. The aims of this study were (1) to determine the relations of sediment bacterial taxa with the trophic status of the lake water and sedimentary environmental factors, and (2) to provide evidence for further elucidating the bacteria-driven nutrient cycling and accumulation mechanisms in aquatic ecosystems.
| Sampling site and procedure
The study area (Figure 1) is located on the north to east side of Lake Taihu, with total nitrogen decreasing from Meiliang Bay (region A-1, north) to Gonghu Bay (region A-2, northeast) and finally Xukou Bay (region A-3, east). Area A-1 is highly enriched in nutrients and has frequent algal bloom incidents. In contrast, the low-nutrient water body in A-3 is characterized by submersed vegetation and diverse communities of fishes and invertebrates, and is in fact a drinking water source for local communities. The water in A-2 was similar to that in A-1 until about 15 years ago but has since improved in quality owing to the intervention of the local government.
Sample collection was carried out in the fall of 2014 and in the winter, spring, and summer of 2015. Loose sediment samples at depths of less than 5 cm were collected using a 1/16 m² Petersen grab sampler; triplicate samples from three separate grabs were homogenized to generate one composite sample at each sampling site. Water samples were taken at the same locations.
All samples were immediately stored in an icebox and transported back to the lab within 3 hr. Once in the lab, an aliquot of each sediment sample was placed in a 15 ml sterile centrifuge tube at −80°C until DNA extraction. The remaining portion was further processed (freeze-dried to collect sediment particles, and centrifuged to collect the pore water) for physicochemical analyses.
| Physicochemical analyses
Seventeen physicochemical parameters of the overlying water, pore water, and freeze-dried sediments were analyzed (Table 1).
| DNA extraction and purification
Total genomic DNA of each sediment sample was extracted, and amplification followed Caporaso et al. (2012). The amplification conditions were as follows: 95°C for 2 min; 27 cycles of 95°C for 30 s, 55°C for 30 s, and 72°C for 45 s; and a final extension at 72°C for 10 min. The PCR products were gel-purified using the UltraClean PCR Clean-Up Kit (Mo Bio Laboratories) and quantified using a Qubit system (Invitrogen). Equimolar amounts of purified amplicons were pooled and stored at −20°C until sequencing. Library construction and sequencing were performed commercially (Beijing Genomics Institute).
| Sequences data analyses
Illumina sequence reads were processed using MOTHUR version 1.27.0 (Schloss et al., 2009). Briefly, after sequencing on the Illumina MiSeq platform, the reads from the original DNA fragments were merged using FLASH (v1.2.7, http://ccb.jhu.edu/software/FLASH/), and quality filtering was performed according to the literature (Caporaso et al., 2011). Chimeric reads were removed by checking against a chimera-free database of 16S rRNA gene sequences using UCHIME (DeSantis et al., 2006). Sequences were assigned to OTUs at a maximum distance of 3% using MOTHUR (Schloss et al., 2009). Community diversity indices and the rarefaction curve of each sample were generated using the UPARSE pipeline (Edgar, 2013). The RDP classifier was used to assign a taxonomic identity to the representative sequence of each OTU.
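As a concrete illustration of the diversity step, a minimal sketch computing per-sample Shannon indices from an OTU count table (toy counts; the study itself used the MOTHUR/UPARSE pipelines):

```python
import numpy as np

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over OTUs with nonzero counts."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Toy OTU table: rows = samples, columns = OTUs (97% similarity clusters)
otu_table = {
    "A1-spring": [120, 80, 40, 10, 5],
    "A3-spring": [60, 55, 50, 45, 40],   # more even distribution -> higher H'
}
for sample, counts in otu_table.items():
    print(sample, round(shannon(counts), 3))   # ~1.21 vs ~1.60
```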
| Statistical analysis
The Trophic Status Indices (TSI) (Aizaki, 1981) of all sampling sites were calculated from the measured Chl-a, W-TP, W-TN, COD, and SD by the following expression:

TSI(Σ) = Σj wj · TSI(j)

where TSI(Σ) is the composite TSI; wj is the relative weight of the TSI of the jth parameter; and TSI(j) is the TSI of the jth parameter.
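A minimal sketch of the composite index; the per-parameter TSI values and relative weights below are hypothetical placeholders (the actual weights follow Aizaki (1981)):

```python
def composite_tsi(tsi_values, weights):
    """TSI(sum) = sum_j w_j * TSI(j), with the weights normalized to 1."""
    total_w = sum(weights)
    return sum(w / total_w * t for w, t in zip(weights, tsi_values))

# Hypothetical per-parameter indices for Chl-a, W-TP, W-TN, COD, SD
tsi_j = [68.0, 64.0, 61.0, 58.0, 55.0]
w_j   = [1.00, 0.84, 0.82, 0.83, 0.83]     # illustrative relative weights
print(round(composite_tsi(tsi_j, w_j), 1)) # values above ~50 are commonly classed eutrophic
```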
| Physicochemical properties of the samples
Measured TSI in the study area decreased in the direction of A-1, A-2, and A-3, consistent with the trophic gradient described above.
| Bacterial community structures via Illumina MiSeq sequencing
The similarity of sediment bacterial communities within individual lake regions was first analyzed by PCR-DGGE. The dendrograms (Figure S1) indicated that the communities in each region could be grouped into two defined clusters corresponding to winter and summer.
Each cluster can be further divided into two sub-clusters. Guided by this understanding, we selected two sites in each region, one from each sub-cluster, and performed further analysis by Illumina MiSeq sequencing.
A total of 1,918,768 high-quality sequences (average length 253 bp) were obtained by Illumina MiSeq sequencing, and the rarefaction curves of Shannon diversity approached a plateau, suggesting that the bacterial community at each site was captured essentially completely.
Based on a 97% sequence similarity cutoff, these sequences yielded bacterial OTU numbers ranging from 2,279 to 4,331 per sample (Table S2).
Of the three regions, the sites in A-2 showed the highest diversity in fall, while the sites in A-3 peaked in the other three seasons; seasonally, the lowest diversities were observed in fall and winter (Table S6). At the genus level, Acinetobacter, with relative abundance ranging from 0.01% to 61.8%, was the most dominant. GOUTA19 and LCP-6 were the other abundant genera, present in all sediment samples with relative abundances of 0.5-6.0% and 0.8-5.8%, respectively (Table S7).
| The spatial-temporal distribution of bacterial communities
Spatial-temporal variation of the bacterial community can be evaluated either by direct comparison of the relative abundances of individual taxa or by the LEfSe algorithm. Direct comparison showed that, within the dominant phylum Proteobacteria, the major classes varied greatly with trophic status and season. For example, delta-proteobacteria and gamma-proteobacteria prevailed in spring/summer and winter, respectively, regardless of trophic conditions; meanwhile, the two classes dominated the eutrophic and mesotrophic lake regions, respectively, but only in fall (Figure 5 and Table S4).
The strongest seasonal dependence was shown by gamma-proteobacteria, whose abundance decreased most markedly from winter and fall to summer and spring (Figure 5); the spatial variation is exemplified by the behavior of Planctomycetes and Chloroflexi, among others. Of the taxa showing the strongest seasonal variation, the majority had their highest abundance in spring and summer (Table 2); of those showing the strongest spatial variation, the majority had their highest abundance in region A-1 (Table 3). Some bacterial taxa varied consistently across taxonomic levels (from phylum down to family or genus).
(Figure 5 caption: Temporal and spatial variation of bacterial community structure at different taxonomic levels (only bacteria accounting for >1% of all sequences are shown). The size of a circle represents the relative abundance of the bacteria at each site, and its color represents the taxonomic level: red, phylum; blue, class; black, order, family, and genus.)
| Relationship between bacterial community structure and environmental variables
The overall level of biodiversity in the study area appeared to be inversely related to trophic status, as noted above.
| DISCUSSION
The bacterial OTU counts and Shannon diversity obtained in the present study are, respectively, more than two orders of magnitude and twofold higher than results acquired via low-throughput molecular techniques for the same or similar eutrophic lakes (Zhao et al., 2013; Szabó et al., 2011), and are similar to results from high-throughput pyrosequencing, suggesting that high-throughput sequencing captures the sediment bacterial diversity far more completely than conventional methods.
The characteristics of bacterial community structure
The bacterial communities observed in this study were dominated by gamma-, delta-, and beta-proteobacteria, a pattern similar to those found in soils (Liu, Zhang, Zhao, Zhang, & Xie, 2014) and other freshwater lake sediments, but distinct from those found in saltwater lake sediments (Xiong et al., 2012) and marine coastal waters (Fortunato et al., 2013). The phylum Proteobacteria is known to be involved in a variety of biogeochemical processes in aquatic ecosystems (Zhang, Zhang, Liu, Xie, & Liu, 2013; Liu et al., 2014).
For example, numerous studies, using both conventional and high-throughput methods, have shown the predominance of Proteobacteria in sediments of various lakes, with large shifts in the composition and relative proportions of its major classes (Ye et al., 2009; Haller et al., 2011; Song et al., 2012; Bai et al., 2012). At the class level, both gamma-proteobacteria (Sinkko et al., 2013; Liu et al., 2014) and delta-proteobacteria (Rodionov, Dubchak, Arkin, Alm, & Gelfand, 2004; Lehours, Evans, Bardot, Joblin, & Gerard, 2007) have been observed in organic-rich lacustrine sediments. Beta-proteobacteria, a major class in most of the samples in this study, occurs almost exclusively in freshwater environments (Hempel, Blume, Blindow, & Gross, 2008) and is regarded as the most abundant group in the sediments of eutrophic lakes (Bai et al., 2012).
The predominance of Proteobacteria classes, and the strong correlation observed between these bacteria and nitrogen conversion (Table 4), suggest that they were actively involved in the functioning and processes of the lake sediment ecosystem (Song et al., 2012). Numerous studies point to a linkage between nitrogen conversion and Proteobacteria classes; for example, Zhang et al.
Spatial and seasonal variations in bacterial community structure
The spatial variation of the bacterial community is characterized by the dominance of delta-proteobacteria in the eutrophic regions (A-1 and A-2) and gamma-proteobacteria in the mesotrophic region (A-3) in fall. This pattern may be due to regional differences in sediment organic matter at different trophic levels. In the mesotrophic region, the sediment organic matter derives mainly from decomposing and dead residues of large vascular plants; in the eutrophic regions, by comparison, the sediment organic fraction originates primarily from the organic remains of algae (Qin, Xu, Wu, Luo, & Zhang, 2007). Our results differ from previous research (Shao et al., 2011), in which the authors reported that delta-proteobacteria was the prevailing class in the macrophyte-flourishing areas.

FIGURE 6: Cladograms indicating the phylogenetic distribution of bacterial lineages associated with the four seasons. The phylum, class, order, family, and genus levels are listed from the inside to the outside of the cladogram; labels for the order, family, and genus levels are abbreviated by a single letter. Green, blue, red, and purple circles represent bacteria enriched in the sediments of spring, summer, fall, and winter, respectively; yellow circles represent taxa with no significant differences between the four seasons.
High-abundance bacterial phyla under eutrophic conditions
The high abundance of five phyla in the eutrophic region may indicate that these microbes have specific nutritional or environmental preferences. For example, the observed relation between TP and the Chloroflexi assemblage (Table 4), along with previous studies in a different lake (Song et al., 2012), suggests a possible role of phosphorus in promoting the growth of Chloroflexi. In addition, this phylum has been reported as the predominant taxon (57-82%) in the sediment of a copper mine (Lucheta, Otero, Macias, & Lambais, 2013). Following this lead, we hypothesize that the high abundance of Chloroflexi in region A-1 may be due to the discharge of phosphorus- and heavy metal-containing industrial wastewater in this area. The high TP concentrations observed in the overlying water and sediment of region A-1 (Figure 2) provide additional support for this view.
For Verrucomicrobia, the high abundance may be due to the prosthecate morphology of these bacteria, which confers a distinctive ability for nutrient uptake (Zwart et al., 1998). Verrucomicrobia, which can take advantage of nutrient-rich environments, has been found in eutrophic ponds and lakes such as those in recreational parks where visitors feed waterfowl (Schlesner, 2004). Chlorobi are photosynthetic bacteria and hence require adequate light penetration in the water (Vila, Abella, Figueras, & Hurley, 1998). Region A-3, in contrast, teems with submersed vegetation and other aquatic plants; the dense leaves of macrophytes effectively block the transmission of light to the sediment surface, producing an opaque condition that slows the growth of Chlorobi. Nitrospirae is a significant group related to nitrite oxidation in freshwater lake sediments (Bartosch, Hartwig, Spieck, & Bock, 2002); consequently, these bacteria flourish under high-nitrogen conditions such as those in regions A-1 and A-2. Lastly, the positive relation between Planctomycetes and eutrophication can be understood from genome analysis (Gloeckner et al., 2003), which revealed these microbes' ability to derive energy from the degradation of sulfated polysaccharides of algal origin; Planctomycetes has also been present at high levels in diatom blooms (Morris, Longnecker, & Giovannoni, 2006).

FIGURE 7: Cladograms indicating the phylogenetic distribution of bacterial lineages associated with the sediments of the three lake regions. The phylum, class, order, family, and genus levels are listed from the inside to the outside of the cladogram; labels for the order, family, and genus levels are abbreviated by a single letter. Green, red, and blue circles represent bacteria enriched in the sediments of Meiliang Bay (A-1), Gonghu Bay (A-2), and Xukou Bay (A-3), respectively; yellow circles represent taxa with no significant differences between the three regions.
Factors affecting bacterial community structures
In agreement with previous studies of similar eutrophic freshwater lakes (Zeng et al., 2008; Dang et al., 2010; Macalady, Mack, Nelson, & Scow, 2000), the CCA results from our analyses showed that pore-water NH4+-N, together with sediment TOM and NO3--N, were key environmental factors shaping bacterial community structure (Zhong et al., 2015). The TOM content exceeded 45 g kg-1 in winter but was below 20 g kg-1 in the other three seasons (Figure 2). The increase of TOM in winter directly influenced the abundance of Cloacibacterium, because these bacteria participate in organic matter degradation and TOM provides nutrients for their growth (Bauer et al., 2006).
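As a complement to the ordination analysis, a simple way to screen for taxon-environment relationships like the Cloacibacterium-TOM link above is a rank correlation. The sketch below uses invented values, not the study's measurements.

```python
# A hedged sketch of relating one taxon's abundance to an environmental
# variable, in the spirit of the correlations reported in Table 4.
import numpy as np
from scipy.stats import spearmanr

# Relative abundance (%) of a taxon across samples (toy data)
abundance = np.array([0.8, 2.1, 4.5, 5.2, 1.0, 0.6, 3.9, 4.8])
# Matching sediment TOM content (g/kg) for the same samples (toy data)
tom = np.array([18.0, 25.0, 46.0, 48.0, 19.5, 17.0, 44.0, 47.5])

rho, p_value = spearmanr(abundance, tom)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```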
CONCLUSIONS
High-throughput Illumina MiSeq sequencing was used to investigate the biodiversity and bacterial community structure in Lake Taihu. More than 1,910,000 sequences were analyzed in the context of changing environmental conditions to evaluate the impact of trophic status on the bacterial community, and the results showed significant correlations with trophic status for 5 major phyla and 18 sub-phylogenetic groups. Findings from this investigation can be summarized as follows:
1. The diversity of the bacterial community is inversely related to the trophic level of the water body in most seasons of the year.
2. The bacterial taxa delta-proteobacteria and gamma-proteobacteria, which dominated in the eutrophic and mesotrophic regions respectively, showed the strongest seasonal variation.
"Biology",
"Environmental Science"
] |
Accuracy of Conventional and Digital Radiography in Detecting External Root Resorption
Introduction: External root resorption (ERR) is associated with physiological and pathological dissolution of mineralized tissues by clastic cells, and radiography is one of the most important methods in its diagnosis. The aim of this experimental study was to evaluate the accuracy of conventional intraoral radiography (CR) in comparison with digital radiographic techniques, i.e. charge-coupled device (CCD) and photo-stimulable phosphor (PSP) sensors, in the detection of ERR. Methods and Materials: This study was performed on 80 extracted human mandibular premolars. After taking separate initial periapical radiographs with the CR technique, CCD and PSP sensors, artificial defects resembling ERR of variable sizes were created in the apical half of the mesial, distal, and buccal surfaces of the teeth. Ten teeth were used as control samples without any resorption. The radiographs were then repeated at 2 different exposure times, and the images were evaluated by 3 observers. Data were analyzed using SPSS version 17 with the chi-squared and Cohen's kappa tests at a 95% confidence interval (CI=95%). Results: The CCD had the highest percentage of correct assessment compared to the CR and PSP sensors, although the difference was not significant (P=0.39). A higher radiation dose increased diagnostic accuracy; however, this was only significant for the CCD sensor (P=0.02). Diagnostic accuracy also increased with lesion size (P=0.001). Conclusion: No statistically significant difference was observed between conventional and digital radiographic techniques in the accurate detection of ERR.
Introduction
External root resorption (ERR) is a condition associated with physiological and pathological dissolution of mineralized tissues by odontoclastic cells [1,2]. Early diagnosis is the key factor in detecting and preserving the involved teeth [3]. Root resorption usually does not present with any clinical sign or symptom; hence, the diagnosis is generally based on its detection during radiographic examination [2]. Numerous imaging modalities are currently available. Image acquisition is improved and made easier by tools that incorporate sensors using solid-state technology, i.e. the charge-coupled device (CCD), or photo-stimulable phosphor (PSP) technology, known respectively as direct and semi-direct/indirect acquisition modalities [4-6]. Conventional intraoral film radiography (CR) remains another option, but it compresses the three-dimensional anatomy into a two-dimensional image or shadowgraph, which greatly limits diagnostic performance, as the important features of the tooth and its surrounding tissues are detectable only in the proximal plane (mesiodistal direction) [7]. Similar features presenting in the buccolingual plane (i.e. the third dimension) may not be fully visible; this shortcoming can, however, be overcome by taking several intraoral views at different angles [8].
CCD sensors and PSP plates are the intraoral digital radiographic techniques most commonly used in clinical dentistry for diagnosing different lesions [9].

Table 1. The number (N) and location of artificial root resorption in different groups (R: resorption detected, RN: resorption not detected)

Group  Location                    N    R (N)  RN (N)
1      Buccal                      10   10     20
2      Mesial                      10   10     20
3      Distal                      10   10     20
4      Buccal and mesial           10   20     10
5      Buccal and distal           10   20     10
6      Mesial and distal           10   20     10
7      Buccal, mesial and distal   10   30     0
8      No resorption (control)     10   0      30

Solid-state detectors consist of a CCD or complementary metal oxide semiconductor (CMOS) chip that is sensitive to light, and a scintillator layer that converts x-rays to light. The quality of the image produced by a solid-state detector depends on the dimensions of the chip pixels, the type and configuration of the scintillation layer, the electronics (including analog-to-digital conversion), and the acquisition and display software. The CCD system uses a thin wafer of silicon as the basis for image recording [10], while PSP consists of a polyester base coated with a crystalline halide emulsion composed of a europium-activated barium fluorohalide compound. PSP plates absorb and store x-ray energy, which is then released as phosphorescence upon stimulation by light of an appropriate wavelength. Digital systems offer several advantages over conventional silver-halide analogue radiographic films, including reusability, reduced radiation dose, time savings, the possibility of image enhancement, and ease of storage, retrieval, and communication between dentists [11]. Considering the importance of radiologic diagnosis of external root resorption and the potential difference in diagnostic performance of different imaging systems, the aim of this study was to evaluate the accuracy of CR, CCD, and PSP sensors in the detection of ERR on three root surfaces (buccal, mesial, and distal) with different cavity sizes and exposure times.
Methods and Materials
In this experimental study, 80 extracted human mandibular premolars were collected. Teeth with root canal fillings, root resorption, fractures, cracks, or incomplete apices were excluded. The samples were divided into 8 groups (n=10). After taking initial radiographs with conventional E-speed intraoral film (AGFA-Gevaert, Mortsel, Belgium), a CCD sensor (DIXi3, Planmeca Oy, Helsinki, Finland), and a PSP sensor (Digora; Soredex, Helsinki, Finland), artificial defects resembling ERR were created with round diamond burs (Tizkavan, Tehran, Iran) of 0.8, 1, 1.2, and 1.4 mm diameter, drilled to the full bur depth in the apical half of the mesial, distal, and buccal surfaces of the teeth; 10 teeth were kept as a control group without any resorption (Table 1). Following the factorial design, each bur was used 30 times. All teeth were randomly numbered from 1 to 80, and the number, location, and size of the cavities were recorded.
Teeth were separately repositioned in the mandibular alveolar sockets of a cadaver skull, borrowed with ethical approval from the Faculty of Dentistry, Babol University of Medical Sciences. Soft tissue was simulated with wax plates. The radiographs were then repeated at 2 different exposure times: 0.04 and 0.08 sec for digital imaging (CCD and PSP) and 0.08 and 0.16 sec for CR, at 60 kVp. The CR films were processed in an automatic processor (HOPE Dentamax, Warminster, PA, USA) according to the manufacturer's recommendations. Digital intraoral images were taken using PSP sensors (Digora; Soredex, Helsinki, Finland) with a pixel size of 85-167 µm and a resolution of 6 lp/mm, and a CCD sensor (DIXi3, Planmeca Oy, Helsinki, Finland) with a pixel size of 19 µm and a resolution of 25 lp/mm. The CR images were taken using intraoral E-speed size 2 films. The distance between the digital detectors or CR films and the teeth was fixed by holders; the focus-receptor distance was 30 cm.
The radiographic results were analyzed by three observers (a radiologist and two endodontists). The films were evaluated using a light box, and digital images were displayed on a 17-inch monitor (SyncMaster, Samsung, Seoul, Korea) without enhancement. The images were evaluated for the presence/absence and the surface of the defect. Reliability and degree of agreement were determined by means of Cohen's kappa analysis (CI=95%). Sensitivity (true positive rate) was defined as the correct detection of a surface with a defect, and specificity (true negative rate) as the correct detection of a surface without a defect; positive and negative predictive values were also computed. Data were analyzed using SPSS software (version 17.0, SPSS, Chicago, IL, USA) and the chi-square test.
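For illustration, the agreement and accuracy statistics described above can be computed as in the following sketch; the observer calls are invented (1 = resorption detected, 0 = not detected), not the study's readings.

```python
# A minimal sketch of Cohen's kappa and sensitivity/specificity calculations.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

ground_truth = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
observer_a   = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]
observer_b   = [1, 1, 1, 0, 1, 1, 0, 0, 0, 0]

# Inter-observer agreement (Cohen's kappa)
print("kappa:", cohen_kappa_score(observer_a, observer_b))

# Sensitivity and specificity of one observer against the ground truth
tn, fp, fn, tp = confusion_matrix(ground_truth, observer_a).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
```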
Results
A total of 240 root surfaces were included in the study: 120 with resorption and 120 without any defect (Table 1), which also gives the number and location of the cavities in each group. The CCD had the highest rate of correct detection compared with the CR and PSP sensors, although the difference was not significant (P=0.39). Table 2 shows that the highest percentages of precise detection according to resorption surface were observed, in descending order, in the mesial, distal, and buccal surfaces for the CR (P=0.55), CCD (P=0.58), and PSP (P=0.26) sensors.
According to the results, a higher radiation dose increased the accuracy of diagnosis (Figure 1); however, this effect was significant only for the CCD sensor (P=0.02). Figure 2 also shows that surfaces without a cavity had the highest diagnostic accuracy, and that accuracy increased with cavity size (P=0.001). For the CCD sensor, sensitivity and specificity at the higher exposure time were 81.4% and 68.2%, versus 78.7% and 66.9% at the lower exposure time. The highest kappa coefficient was obtained at the higher exposure time (0.458±0.055).
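The significance tests reported above compare correct-detection counts across modalities with a chi-squared test; a minimal sketch follows, with purely illustrative counts rather than the study's data.

```python
# A sketch of the chi-squared comparison of correct vs. incorrect detections
# across the three modalities (counts are invented for illustration).
from scipy.stats import chi2_contingency

#                 correct  incorrect
table = [[190, 50],   # CCD
         [182, 58],   # CR
         [175, 65]]   # PSP

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f} (not significant if p > 0.05)")
```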
For the PSP sensor, sensitivity and specificity at the higher exposure time were 79.7% and 63.9%, versus 82.4% and 66.2% at the lower exposure time. Here too, the highest kappa coefficient was associated with the higher exposure time (0.458±0.055).
Discussion
This experimental study showed no significant differences between conventional and digital intraoral radiographic techniques in the detection of ERR of variable sizes on different tooth surfaces.
The diagnosis of ERR is highly important, as early detection increases the chance of treating and maintaining the tooth [12]. Given the limitations of CR, digital radiographic techniques such as CCD and PSP have recently gained notable acceptance among clinicians. Although the CCD technique showed the highest efficiency, the difference in accuracy between the conventional and digital radiographic methods was not significant. In the study by Kamburoğlu et al. [10], CCD and CR yielded more correct readings than PSP. The lower accuracy of PSP appears to stem from the quality of the phosphor plate, its lower resolution, its lower signal-to-noise ratio, and the scanning mechanism. Borg et al. [13] showed that digital radiography has sensitivity similar to CR in resorption diagnosis, but with a lower radiation dose. Digital radiography also offers capabilities that CR does not, such as image manipulation (enlargement, inversion, and contrast enhancement) [14]. Contrary to the present study, Westphalen et al. [15] showed that the sensitivity of the digital radiographic method was statistically higher than that of CR.
Similar to the findings of Levander et al. [3] and Borg et al. [13], the percentage of correct assessment increased with cavity size in this study. Removing a larger amount of dental tissue produces a wider radiolucent area; the root resorption detection rate is therefore higher for larger cavities with both conventional and digital radiographic methods.
In contrast to the present study, Shokri et al. [2] found no significant differences in the detection of resorptive cavities of different sizes among cone-beam computed tomography (CBCT), CCD, and CR methods.
In this study there were no significant differences in the detection of ERR among the buccal, mesial, and distal surfaces of the root. In another study, the most accurate assessment was related to the proximal surfaces, with no difference in the diagnosis of cavities in the cervical, middle, and apical portions of the root [16]. Kamburoğlu et al. [10] showed that the most difficult surfaces for resorption diagnosis are the buccal and proximal aspects in apical areas, while the proximal, cervical, and middle surfaces yielded the most accurate readings. According to Shokri et al. [2], CBCT did not show any significant superiority in cavity detection compared to other methods, except for cavities in the apical area. Table 3 shows that the highest sensitivity and specificity of CR were 82.6% and 70% at the higher exposure time, versus 80% and 61% at the lower exposure time; the highest kappa coefficient was obtained at the higher exposure time (0.5±0.052). Similar to the results of Borg et al. [13], this investigation showed a higher percentage of correct readings for all radiographic methods at higher exposure times.
Some studies have found that the radiographic angulation plays an important role in the correct detection of resorption. In the study by Westphalen et al. [15], radiographic images were taken at orthoradial, mesial, and distal angulations, and for cavities that were not visible in orthoradial images, changing the horizontal angle increased the chance of detection. Some cavities were detectable only in images taken with mesial or distal angulation, consistent with the results of Borg et al. [13] and Andreasen et al. [16]. In contrast, Kamburoğlu et al. [10] obtained a higher correct detection rate with orthoradial angulation than with distoradial or mesioradial angulation; however, the highest detection rate was achieved when images from all angulations were evaluated together.
Conclusion
There was no significant difference between conventional and digital radiographic methods in detecting external root resorption.
"Materials Science",
"Medicine"
] |
MSDeepAMR: antimicrobial resistance prediction based on deep neural networks and transfer learning
Introduction Antimicrobial resistance (AMR) is a global health problem that requires early and effective treatment to prevent the indiscriminate use of antimicrobial drugs and poor infection outcomes. Mass spectrometry (MS), and more particularly MALDI-TOF, has been widely adopted by routine clinical microbiology laboratories to identify bacterial species and detect AMR. The analysis of AMR with deep learning is still recent, and most models depend on filters and preprocessing techniques manually applied to spectra. Methods This study proposes a deep neural network, MSDeepAMR, that learns from raw mass spectra to predict AMR. The MSDeepAMR model was implemented for Escherichia coli, Klebsiella pneumoniae, and Staphylococcus aureus under different antibiotic resistance profiles. Additionally, a transfer learning test was performed to study the benefits of adapting the previously trained models to external data. Results MSDeepAMR models showed good classification performance in detecting antibiotic resistance. The AUROC of the model was above 0.83 in most cases studied, improving on the results of previous investigations by over 10%. The adapted models improved the AUROC by up to 20% compared to a model trained only with external data. Discussion This study demonstrates the potential of the MSDeepAMR model to predict antibiotic resistance and its use on external MS data. This allows the MSDeepAMR model to be used in laboratories that need to study AMR but lack the capacity for extensive sample collection.
Introduction
Antimicrobial resistance (AMR) has become one of the most urgent global public health problems (O'Neill, 2016); its current growth leads to an estimated death toll of more than ten million annually by 2050, at a cost of approximately 100 trillion USD worldwide (Brogan and Mossialos, 2016; O'Neill, 2016). In general, AMR is the process by which bacteria survive exposure to antibiotics that, under normal conditions, would kill them or stop their growth. According to a Nature report ("The Antibiotic Alarm"), antibiotics have been consistently and heavily over-prescribed by doctors worldwide for decades (Nature, 2013). In addition, the indiscriminate use of antibiotics in livestock (Li et al., 2018; Hickman et al., 2021) and environmental factors that favor the distribution of resistance genes (Lin et al., 2021) have contributed directly to the development of antibiotic resistance.
Antibiotic-resistance mechanisms can be either intrinsic or acquired. In the former, structural or functional characteristics of the bacteria allow them to resist a particular antibiotic. In the latter, bacteria develop resistance to an antibiotic through different mechanisms: (i) minimization of the intracellular concentration of the antibiotic as a result of poor penetration into the bacterium or of antibiotic efflux; (ii) modification of the antibiotic target by genetic mutation or post-translational modification of the target; and (iii) inactivation of the antibiotic by hydrolysis or modification (Blair et al., 2014).
Regarding AMR detection, the antibiotic sensitivity test (AST) is key in clinical treatment. Testing for antibiotic resistance/susceptibility is typically based on measuring bacterial growth in the presence of the antibiotic, which takes up to 72 h to yield results. Hence, new, rapid, and effective techniques are needed to address these challenges.
Mass spectrometry
Mass spectrometry (MS) is a technique that measures the mass/charge ratio (m/z) of the atoms or molecules of a sample after ionizing them. The potential of MS lies in its ability to measure the exact mass of these molecules and to obtain information from the ion fragments of the analyte. MALDI-TOF MS (Matrix-Assisted Laser Desorption/Ionization Time-Of-Flight Mass Spectrometry) is one of the most used techniques in this field (Tanaka et al., 1988). It corresponds to a matrix-assisted laser desorption/ionization system coupled with a time-of-flight (TOF) ion analyzer. MS has had a significant impact in clinical microbiology, allowing quick identification of bacteria from an intact-cell or whole-cell peptide mass fingerprint (PMF) (Singhal et al., 2015). It provides higher accuracy, rapidity, and cost-effectiveness than conventional methods used in microbiology, yielding results in minutes rather than hours (Singhal et al., 2015). This technique has also shown better resolution and reproducibility than gel-based protein or DNA fingerprint techniques (Fenselau and Demirev, 2001; Lay, 2001). The discovery of suitable matrices and the use of whole/intact cells for recording the PMF of bacteria in the mass range of 2-20 kDa, followed by databases for bacterial identification, has made MALDI-TOF MS an excellent alternative in this area. Specifically, "MALDI Biotyper," developed by Bruker Daltonics, has been adopted as a platform to operate and analyze samples with a simple extraction/preparation method (Seng et al., 2009). Since MALDI received regulatory approval from the United States Food & Drug Administration (FDA) in 2013, it has been available worldwide for routine identification of cultured bacteria from human specimens (in vitro diagnosis). MALDI-TOF MS has rapidly become a reference method for identifying a wide range of microorganisms. Its application for detecting microorganisms such as bacteria has also been widely established, reducing turnaround time and simplifying workflows in clinical microbiology laboratories (Patel, 2015; Welker et al., 2019; Oviaño and Rodríguez-Sánchez, 2021).
These advantages highlight MALDI-TOF MS as a fast, reliable method to identify AMR (Florio et al., 2020), allowing a rapid antibiogram in under 3 h. The methodology for bacterial resistance detection consists of incubating the microorganisms with the antibiotic, centrifuging, and analyzing the resulting supernatant by MALDI-TOF.
A bacterium is considered resistant when an enzyme that degrades the antibiotic [such as a carbapenemase or an extended-spectrum beta-lactamase (March-Rosselló, 2017)] is detected in its spectrum. On the one hand, the peak corresponding to the mass/charge of the antibiotic disappears; on the other, new peaks appear in the spectrum, corresponding to metabolites released by the breakdown of the antibiotic. For a susceptible (i.e., non-resistant) bacterium, only the antibiotic peak is seen. The sensitivity of this experimental technique is close to 100%, which means it can be used on grown colonies, isolation plates (Lasserre et al., 2015), and grown blood culture bottles from patients (Oviaño et al., 2014). Several methods have been proposed to analyze MALDI-TOF spectra for subspecies discrimination. Some focus on visual examination of the spectra to discover strain-specific peaks (Wolters et al., 2011; Lasch et al., 2014), while others rely on the ClinProTools software to identify strain-representative peaks (Mather et al., 2016; Villarreal-Salazar et al., 2022).
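The peak-based readout described above can be illustrated with a conceptual sketch: check whether a peak survives near the antibiotic's m/z after incubation. The spectrum, the m/z value of 478 Da, and the tolerance are all invented for illustration.

```python
# A conceptual sketch of the antibiotic-peak check described above.
import numpy as np
from scipy.signal import find_peaks

def antibiotic_peak_present(mz, intensity, antibiotic_mz, tol=1.0, min_height=0.1):
    """Return True if a peak lies within `tol` Da of the antibiotic's m/z."""
    peaks, _ = find_peaks(intensity, height=min_height * intensity.max())
    return bool(np.any(np.abs(mz[peaks] - antibiotic_mz) < tol))

# Toy spectrum: a susceptible sample keeps the antibiotic peak (here ~478 Da)
mz = np.linspace(400, 600, 2000)
intensity = np.exp(-((mz - 478.0) ** 2) / 0.5) + 0.01 * np.random.rand(mz.size)
print("susceptible" if antibiotic_peak_present(mz, intensity, 478.0) else "resistant")
```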
Machine learning on mass spectrometry
The similarity between MALDI-TOF spectra of highly related strains hinders their visual interpretation (Camoez et al., 2016). This analysis therefore involves searching for particular, possibly complex patterns in large volumes of data. In this context, the potential of artificial intelligence, and of machine learning techniques in particular, is very promising (Mather et al., 2016).
Machine learning (ML) allows computers to learn without being explicitly programmed for the task at hand. The type of problem and data this research addresses (MALDI spectra with known information about antibiotic resistance) calls for supervised learning algorithms, which are trained on a dataset of instances (in this case, each spectrum is an instance), each labeled with a discrete class or a real value (here, the resistance/susceptibility of the bacteria). A trained classifier can then predict the class of new instances. In recent years, the field of medicine has focused on applying ML-based methods to MS data because of their capacity to analyze complex data and identify biomarkers (Olate-Olave et al., 2021; Tapia-Castillo et al., 2021; López-Cortés et al., 2022; González et al., 2023). Specifically, MS coupled with ML techniques has been widely used in different areas, including health: (i) detection/diagnosis of diseases in humans (Drew et al., 2017) and animals (López-Cortés et al., 2017, 2019), among others; (ii) detection of pathogens such as bacteria (Bruyne et al., 2011; Didelot et al., 2012; Dematheis et al., 2022) and fungi (Becker et al., 2014; Bolt et al., 2016); and, most recently, (iii) AMR prediction (Florio et al., 2020; Huang et al., 2020; Weis C. et al., 2020; Weis et al., 2022; Feucherolles et al., 2022; Wang et al., 2022; Zhang et al., 2022; Guerrero-López et al., 2023).
Recent studies have focused on refining species identification (Guajardo et al., 2022) and determination of AMR (Wang et al., 2018, 2019; Huang et al., 2020; Weis et al., 2022). A recent systematic review (Weis C. V. et al., 2020) concluded that, despite the number of studies and their quality, there are still limitations related to poor reproducibility, small sample sizes, and a lack of external validation. In this sense, it is necessary to keep improving the algorithmic techniques used to classify antibiotic resistance, which is reflected in the current state of the art, where researchers deploy novel and complex classification techniques such as ensemble models (Zhang et al., 2022) and convolutional neural networks (CNNs) (Wang et al., 2022). In the latter reference, a CNN architecture is presented for identifying Enterococcus faecium resistance to vancomycin, marking a promising research avenue, since CNNs have already been shown to outperform classical ML algorithms on high-dimensional data problems (LeCun et al., 2015; Lippeveld et al., 2020).
Regarding the study of AMR using ML approaches, different studies have focused on other experimental techniques, such as (i) MS (Wang et al., 2019; Delavy et al., 2020; Huang et al., 2020); (ii) genome sequencing (Bhattacharyya et al., 2019; Kim et al., 2020); (iii) infrared microscopy (Sharaha et al., 2019); and (iv) PCR (Athamanolap et al., 2017). Specifically, several works have used MS coupled to ML to study fluconazole resistance detection in Candida albicans (Delavy et al., 2020), discrimination of contagious strains of Streptococcus (Esener et al., 2018), detection of carbapenem-resistant Klebsiella pneumoniae (Huang et al., 2020), and rapid classification of group B Streptococcus serotypes (Wang et al., 2019), among others.
These advances and the increasing prevalence of AMR worldwide highlight the need for efficient techniques to detect bacterial resistance to antibiotics and to facilitate pathogen-directed clinical treatment of infections. Combining MALDI-TOF with artificial intelligence is thus an excellent opportunity: it could improve patients' quality of life and recovery, since they would receive timely and targeted treatment, while also reducing public health costs.
In terms of data availability, a recent study generated a public database called DRIAMS (Weis et al., 2022), with more than 750,000 antibiotic-resistance mass spectra profiles collected in four different laboratories in Switzerland. The study implemented three classification algorithms: logistic regression, LightGBM (Light Gradient Boosting Machine), and a deep neural network (multilayer perceptron). LightGBM gave the best classification results for E. coli and S. aureus, while the multilayer perceptron obtained the best score for K. pneumoniae. Such extensive public databases open the way for new and advanced AMR analysis methodologies, such as the deep learning (DL) approach investigated in the present work. DL is distinguished by its ability to detect new patterns in complex datasets, but it requires a large amount of data to train the models.
In this context, transfer learning (Weiss et al., 2016) has become a hot research topic in many fields, allowing training to start from models already pre-trained on large (often publicly available) datasets. These pre-trained models can be fine-tuned with small datasets by laboratories with limited sample collection and computing capacity, which can thereby take advantage of powerful models. A recent proposal in this direction detects AMR with deep learning and transfer learning based on whole-genome sequence data (Ren et al., 2022). However, to the best of our knowledge, no transfer learning proposals have been made for AMR based on MS techniques. Our research proposes a complete and novel methodology based on deep learning (DL) and transfer learning for directly analyzing raw MS data to identify antibiotic resistance in three different bacterial species. Using raw MS data implies a significant reduction in the preprocessing (smoothing, baseline correction, peak picking, among others) typically applied to MS data. The dataset for our study was constructed from DRIAMS (Weis et al., 2022). The bacteria with the highest number of samples and clinical relevance were included: Escherichia coli, Klebsiella pneumoniae, and Staphylococcus aureus. The set of antibiotics studied for resistance identification is detailed in Table 1. First, the dataset was formed from the raw mass spectra. Next, the MSDeepAMR model was trained and tested to obtain the area under the receiver-operating characteristic curve (AUROC). In total, 13 antibiotic-resistance models were implemented, with AUROC > 0.80 in most of the cases studied, showing a 10% improvement over the state of the art. Then, transfer learning was applied to evaluate our models on external databases, to study whether laboratories with a lower sample collection capacity can use these models. Our results demonstrate that transfer learning substantially improves model performance on external data. The MSDeepAMR model generally showed excellent results for classifying antibiotic resistance in different bacterial species.
Finally, as mentioned above, conventional antibiotic sensitivity tests (AST) take up to 72 h to yield results. Our approach (MSDeepAMR) can significantly reduce this part of the AST time, implying lower public costs for the health industry and an improvement in patients' quality of life. Moreover, this research opens the door to integrating MSDeepAMR within the MALDI-TOF device to enable on-the-fly AMR detection, since the network classifies the raw data directly without manual preprocessing.
Considering the proposed methodology and the obtained results, the article's contributions are as follows:
• A systematic and reproducible methodology for antibiotic resistance detection is proposed, based on deep neural networks and transfer learning, achieving state-of-the-art results.
• The MSDeepAMR model architecture has been evaluated in several scenarios, demonstrating its ability to predict antibiotic resistance in E. coli, K. pneumoniae, and S. aureus against different types of antibiotics.
• The proposed methodology applies transfer learning to evaluate reproducibility on external datasets, a pioneering study in the context of MS, highlighting the model's improvement and its successful adaptation to external data.
The rest of the paper is structured as follows. Section 2 details the proposed MSDeepAMR methodology, describing the experimental settings, the transfer learning evaluation, and the ablation study. Section 3 presents the results of the experiments. Section 4 discusses the obtained results. Finally, concluding remarks and future work are stated in Section 5.
Materials and methods
This study implements a DL architecture to identify antibiotic resistance in different bacterial species from raw MS data. As detailed in Figure 1, the first step is the construction of the dataset from DRIAMS (Weis et al., 2022), chosen for its high number of samples. The second step is the extraction of the bacterial data used in the present study. In the third step, binned mass spectra are computed to obtain vectors of the same length. Finally, data splitting is performed to train and test the proposed architecture.
Datasets
In the present study, we used the public DRIAMS database (Weis et al., 2022), which contains about 300,000 mass spectra of different types of bacteria with more than 750,000 antibiotic resistance profiles. This database consists of four sub-collections (DRIAMS-A, DRIAMS-B, DRIAMS-C, and DRIAMS-D), corresponding to the different clinical laboratories where the samples were collected. DRIAMS-A has the largest number of samples and was therefore used to implement and train the MSDeepAMR model, while the remaining sub-collections were used for external testing and transfer learning. Initially, the dataset included 803 different types of bacterial and fungal pathogens. However, given the high number of samples required to train deep neural networks, the following bacteria were selected for their relevance according to the World Health Organization (WHO) (Asokan et al., 2019) and their number of samples: Escherichia coli (n = 5,000), Klebsiella pneumoniae (n = 2,800), and Staphylococcus aureus (n = 3,800). These bacteria are on the WHO list of priority pathogens. Table 1 details the number of samples for each bacterium and antibiotic under study. Our neural network was trained on raw mass spectra. A bin size of 3 Da over the range 2,000 to 20,000 Da was applied; this binning produces a fixed-length vector suitable for DL algorithms.
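The binning step described above can be sketched as follows; it assumes raw (m/z, intensity) pairs and produces the fixed-length 6,000-feature vector used as network input. The toy spectrum is invented.

```python
# A sketch of the 3 Da binning over 2,000-20,000 Da described above.
import numpy as np

def bin_spectrum(mz, intensity, lo=2000, hi=20000, bin_size=3):
    """Sum intensities into fixed-width m/z bins -> (hi-lo)/bin_size features."""
    edges = np.arange(lo, hi + bin_size, bin_size)   # 6,001 bin edges
    binned, _ = np.histogram(mz, bins=edges, weights=intensity)
    return binned                                    # shape (6000,)

mz = np.random.uniform(2000, 20000, size=5000)       # toy raw spectrum
intensity = np.random.rand(5000)
x = bin_spectrum(mz, intensity)
print(x.shape)  # (6000,)
```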
Deep learning
This study proposes a deep learning approach for identifying E. coli, K. pneumoniae, and S. aureus samples with resistance to different types of antibiotics (Table 1). The input data correspond to the raw MS data represented by a binned vector of 6,000 features, while the output corresponds to identifying the resistance (class 1) or susceptibility (class 0) of the given sample to the studied antibiotic.
Model implementation: MSDeepAMR
The MSDeepAMR model was applied to 13 different study cases (Table 1), covering three of the most clinically relevant bacteria and the antibiotics most commonly used to treat them: E. coli (ciprofloxacin, ceftriaxone, cefepime, piperacillin-T., tobramycin), K. pneumoniae (ciprofloxacin, ceftriaxone, cefepime, meropenem, tobramycin), and S. aureus (ciprofloxacin, fusidic acid, oxacillin). To find a structure that would perform well across all study cases, we took as a starting point the architecture presented in Wang et al. (2022). The architecture was optimized on the bacteria-antibiotic pair with the highest number of samples (E. coli-ceftriaxone), for which a hyperparameter grid search was performed. This architecture was then applied to the remaining bacteria-antibiotic pairs, each optimized in the same way until the final architecture was reached. The grid search covered the following parameters:
• The number of convolutional layers (1 to 5).
• The number of filters and kernel size of each convolutional layer (filters: 32-256 in steps of 32; kernels: 3-19 in steps of 1).
• The number of fully connected layers (1 to 5).
• The number of neurons within each fully connected layer (32-256 in steps of 32).
As shown in Figure 2, our model comprises four one-dimensional convolution layers, allowing the network to learn to differentiate the locations of the m/z peaks. Each convolutional block also contains a batch normalization layer to reduce overfitting and a pooling layer to reduce dimensionality and focus the CNN's attention on the m/z peaks in each convolution. The classification module consists of four fully connected layers preceded by a dropout layer. The last layer has one output neuron with sigmoid activation, whose output corresponds to the probability that the studied sample presents a resistant profile to the antibiotic under study. As for the network parameters, the four convolutional layers contain 64, 128, 256, and 256 filters, respectively; the kernel sizes are 17, 9, 5, and 5; and the three fully connected layers before the output layer comprise 256, 64, and 64 units, respectively. Mean and max pooling were both tested, after which mean pooling was selected due to the higher AUROC and AUPRC obtained. The dropout probability was set to 0.65. For training, a maximum of 100 epochs was set, together with early stopping (patience = 4). The learning rate of the Adam optimizer was initialized at 10^-4, with a reduction factor of 0.1 when the loss function plateaued.
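A hedged Keras reconstruction of this architecture follows. Layer counts, filter/kernel sizes, dropout rate, and learning rate are taken from the text; details the text does not fix (pool size, padding, activation choice) are assumptions.

```python
# A sketch of an MSDeepAMR-like network, under the assumptions stated above.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_msdeepamr(input_len=6000):
    m = models.Sequential([tf.keras.Input(shape=(input_len, 1))])
    # Four convolutional blocks: Conv1D -> BatchNorm -> mean pooling
    for filters, kernel in [(64, 17), (128, 9), (256, 5), (256, 5)]:
        m.add(layers.Conv1D(filters, kernel, activation="relu", padding="same"))
        m.add(layers.BatchNormalization())
        m.add(layers.AveragePooling1D(pool_size=2))  # mean pooling, per the text
    # Classification module: dropout, then 256-64-64 dense layers and sigmoid
    m.add(layers.Flatten())
    m.add(layers.Dropout(0.65))
    for units in [256, 64, 64]:
        m.add(layers.Dense(units, activation="relu"))
    m.add(layers.Dense(1, activation="sigmoid"))  # P(resistant)
    m.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
    return m

model = build_msdeepamr()
```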
Ablation study
An ablation study was performed to evaluate the behavior of MSDeepAMR when different modifications were applied to the final model. For this purpose, a comparison was made through 10-fold cross-validation for each of the study cases (detailed in Section 2.2.1), assessing how the normalization and regularization layers improve model performance after the hyperparameter grid search. The evaluation considered three model variants: (i) the baseline model; (ii) the model with batch normalization; and (iii) the model with batch normalization and dropout (the final model). A sketch of the cross-validation loop is given below.
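```python
# A sketch of the 10-fold cross-validation used for the ablation comparison.
# `build_variant` is a hypothetical factory returning one of the three model
# variants; X, y are assumed to hold binned spectra and resistance labels.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def cross_validate(build_variant, X, y, n_splits=10):
    aurocs = []
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(X, y):
        model = build_variant()
        model.fit(X[train_idx], y[train_idx], epochs=100, verbose=0)
        scores = model.predict(X[test_idx]).ravel()
        aurocs.append(roc_auc_score(y[test_idx], scores))
    return np.mean(aurocs), np.std(aurocs)
```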
External test and transfer learning
Transfer learning (Pan and Yang, 2010) consists of adapting a model trained on a "source" dataset to perform well when applied to a "target" dataset, typically by using a few instances from the target set to fine-tune the pre-trained model. It enables external laboratories with little sample collection capacity to adapt complex models, pre-trained on much larger datasets, to their specific needs (Ebbehoj et al., 2022). The implementation of transfer learning on MALDI-TOF data deserves study because only two mass spectrometry systems dominate the market: the MALDI Biotyper System from Bruker Daltonics and VITEK MS from bioMérieux (Dierig et al., 2015; Hou et al., 2019). Differences between data collected by two laboratories with similar equipment are therefore expected to be limited, which should facilitate the application of transfer learning and reduce the need to train large models from scratch.
As mentioned above, the DRIAMS database contains three sub-collections of external data with smaller numbers of samples (DRIAMS-B, DRIAMS-C, and DRIAMS-D) corresponding to data collected by different laboratories using the same mass spectrometry system.
To evaluate the potential benefits of transfer learning on MALDI-TOF data, this paper describes four experimental scenarios:
• Models trained and tested only on the external datasets.
• The best-performing model trained on DRIAMS-A, applied to the external data without transfer learning.
• The same model with transfer learning, freezing the weights of the four convolutional layers and retraining only the fully connected layers (as shown in Figure 3).
• The same model with transfer learning, retraining the weights of all layers.
In all cases, the same 20% of each external dataset was used to evaluate model performance, while the remaining 80% was used to train the models (first scenario) or fine-tune the pre-trained model (transfer learning scenarios). To avoid overfitting, an Adam optimizer with a learning rate of 10^-7 was used for 10 epochs with a batch size of 32, sufficient for the model to fit the external data. The implementation of MSDeepAMR and example experiments are publicly available at: https://github.com/xlopez-ml/DL-AMR.
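A minimal sketch of the fine-tuning step follows, using the hyperparameters stated above. Whether the batch normalization layers are frozen along with the convolutions is an assumption, since the text specifies only the four convolutional layers.

```python
# A sketch of fine-tuning a pre-trained MSDeepAMR model on external data.
import tensorflow as tf
from tensorflow.keras import layers

def fine_tune(pretrained, X_target, y_target, freeze_conv=True):
    if freeze_conv:
        # Assumption: BatchNormalization is frozen together with Conv1D blocks
        for layer in pretrained.layers:
            if isinstance(layer, (layers.Conv1D, layers.BatchNormalization)):
                layer.trainable = False
    pretrained.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-7),
                       loss="binary_crossentropy",
                       metrics=[tf.keras.metrics.AUC()])
    pretrained.fit(X_target, y_target, epochs=10, batch_size=32, verbose=0)
    return pretrained
```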
Furthermore, the evolution of the AUROC and AUPRC metrics was studied when applying transfer learning by retraining all layers using different percentages (25%, 50%, 75%, and 100%) of the training set of the target datasets (DRIAMS-B, -C, -D), to study how the models are affected by an increase in the number of samples available for transfer learning.
Feature importance analysis
To interpret the results obtained by the best-performing models, SHAP values (using DeepExplainer) were computed to identify which m/z peaks are the most important in determining antimicrobial resistance or susceptibility. Specifically, we analyze how the most predominant peaks in the external datasets are affected before and after applying transfer learning.
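The SHAP computation can be sketched as follows, assuming `model` is a trained MSDeepAMR network and `X_background`/`X_test` are arrays of binned spectra (both hypothetical names).

```python
# A sketch of the DeepExplainer-based feature importance analysis.
import numpy as np
import shap

explainer = shap.DeepExplainer(model, X_background)  # background for expectations
shap_values = explainer.shap_values(X_test)          # per-bin attributions

# shap_values may be a list (one entry per output); take the single output
sv = shap_values[0] if isinstance(shap_values, list) else shap_values
importance = np.abs(sv).reshape(sv.shape[0], -1).mean(axis=0)

# Map the top bins back to approximate m/z (bin i covers 2000 + 3*i Da)
top_bins = np.argsort(importance)[::-1][:20]
print("Most influential m/z:", [2000 + 3 * int(i) for i in top_bins])
```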
Evaluation metrics
The main models trained on DRIAMS-A were evaluated with 10-fold cross-validation to avoid overfitting. The transfer learning scenarios described in Section 2.3 were then evaluated using ten random train-test splits. The results reported in both cases are therefore the mean of 10 iterations.
The area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC) were calculated. AUROC and AUPRC are metrics commonly used in binary classification problems of a biological nature (Chicco, 2017). The AUROC is the area under the ROC curve, which plots the true positive rate, or recall [Recall = TP/(TP+FN)], against the false positive rate (1 - specificity) [Specificity = TN/(TN+FP)]. This metric measures the model's discriminative ability: an AUROC of 1 indicates a perfect model, whereas 0.5 indicates performance similar to random guessing.

The AUPRC is calculated analogously but from precision [Precision = TP/(TP+FP)] and recall, focusing on correctly classified positive instances (the minority class); it is a more reliable indicator for imbalanced datasets. We also report balanced accuracy, the arithmetic mean of sensitivity and specificity, which is helpful in these cases. For transfer learning, model performance was likewise evaluated with AUROC and AUPRC.
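For concreteness, these metrics can be computed as in the sketch below; the labels and scores are invented. Note that scikit-learn's average precision is the standard estimator of the area under the precision-recall curve.

```python
# A minimal sketch of the three evaluation metrics, with invented predictions.
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             balanced_accuracy_score)

y_true  = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.1, 0.4, 0.8, 0.6, 0.3, 0.9, 0.2, 0.7]   # model probabilities
y_pred  = [int(s >= 0.5) for s in y_score]           # thresholded labels

print("AUROC:", roc_auc_score(y_true, y_score))
print("AUPRC:", average_precision_score(y_true, y_score))
print("Balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
```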
Results
The main objective of this study was to develop MSDeepAMR, a set of DL-based models that correctly classify and identify antibiotic resistance for different bacteria. All models were implemented from the raw MS data, improving on the current state of the art, which was achieved with traditional machine learning algorithms (Wang et al., 2022; Weis et al., 2022; Zhang et al., 2022). As input data, we used raw mass spectra from the public DRIAMS database (Weis et al., 2022), selecting the bacteria with the highest number of samples with antibiotic resistance profiles. Models were trained on DRIAMS-A and subjected to 10-fold cross-validation. Each model was then tested to evaluate its prediction performance using AUROC and AUPRC.

FIGURE 3: Transfer learning. A model is trained on a database containing an extensive number of samples; this model can then be used as a starting point to be adapted to problems with similar characteristics. Transfer learning was implemented to evaluate our MSDeepAMR models on external databases. In the first case, we freeze the weights of the convolutional layers and only retrain the fully connected layers; in the second case, we update the weights of the entire model. R, resistant; S, susceptible.
Results of MSDeepAMR models
To evaluate the classification performance of the MSDeepAMR models, the AUROC, AUPRC, and balanced accuracy metrics were used. Models were implemented for different bacteria-antibiotic profiles: E. coli (ciprofloxacin, ceftriaxone, cefepime, piperacillin-T., tobramycin), K. pneumoniae (ciprofloxacin, ceftriaxone, cefepime, meropenem, tobramycin), and S. aureus (ciprofloxacin, fusidic acid, oxacillin). As shown in Figure 4, most models performed well (AUROC > 0.80); the E. coli-Ciprofloxacin, E. coli-Ceftriaxone, and E. coli-Cefepime models showed AUROCs of 0.85, 0.87, and 0.88, respectively. For the antimicrobial resistance profiles of K. pneumoniae, three of the five models stand out: K. pneumoniae-Ceftriaxone, K. pneumoniae-Cefepime, and K. pneumoniae-Meropenem, which reach AUROCs of 0.82, 0.83, and 0.83, respectively. Finally, for S. aureus, the S. aureus-Oxacillin model stands out with an AUROC of 0.93. It is also worth noting that for ciprofloxacin resistance, all three bacteria performed well (Figure 4), with AUROCs of 0.85 (E. coli-Ciprofloxacin), 0.76 (K. pneumoniae-Ciprofloxacin), and 0.85 (S. aureus-Ciprofloxacin).
Analyzing the results in terms of AUPRC shows how the models are affected by class imbalance in some cases; the positive class, corresponding to samples resistant to a given antibiotic, is the harder one to classify correctly. Nevertheless, for E. coli the best AUPRCs were 0.75 (E. coli-Ciprofloxacin), 0.79 (E. coli-Ceftriaxone), and 0.70 (E. coli-Cefepime). For K. pneumoniae, the best AUPRC was 0.68, for K. pneumoniae-Ceftriaxone.
Ablation study
To obtain the most robust model, we evaluated the effect of batch normalization and dropout layers on the baseline model obtained after the hyperparameter grid search. A 10-fold cross-validation was applied for each of the 13 cases under study (Table 1). The experiments considered three scenarios: (i) the baseline model; (ii) the MSDeepAMR model with batch normalization; and (iii) the MSDeepAMR model with batch normalization and dropout (final model).
As shown in Table 2, adding normalization and regularization layers improves the model's performance in most cases. The best-performing models for each bacterium-antibiotic pair were E. coli-Ciprofloxacin, K. pneumoniae-Ceftriaxone, and S. aureus-Oxacillin. Using these layers in scenarios (ii) and (iii) improved the metrics by 1% to 2% and reduced the standard deviation. In cases where AUPRC values were low, regularization substantially improved performance; for example, for K. pneumoniae-Ciprofloxacin the AUPRC increased from 0.18 to 0.53 when the batch normalization and dropout layers were applied (Table 2).
External test and transfer learning results
The best-performing model was selected for each of the bacteria studied in the previous section, namely E. coli-Ceftriaxone, K. pneumoniae-Ceftriaxone, and S. aureus-Oxacillin. These models were tested on the external data sub-collections (DRIAMS-B, -C, -D). We then studied whether implementing transfer learning improved the adaptation of the models to the external data. Table 3 shows the number of samples available in each case, of which 80% were used for training and 20% for testing.
Tables 4-6 show the AUROC and AUPRC obtained in each of the transfer learning experiments described above. For the E. coli-Ceftriaxone model (Table 4), transfer learning achieved the best AUROC and AUPRC; DRIAMS-B adapted best to the pre-trained model, reaching an AUROC of 0.943 and an AUPRC of 0.752, compared with 0.740 and 0.542 for a model trained from scratch. For the K. pneumoniae-Ceftriaxone model (Table 5), the best results were also obtained with transfer learning, except on DRIAMS-C, where the model trained from scratch exceeded the transfer learning experiment (0.594 vs. 0.512 in AUROC and 0.325 vs. 0.165 in AUPRC). Finally, for the S. aureus-Oxacillin model (Table 6), transfer learning yielded the best AUROC among the three experiments on both DRIAMS-B and DRIAMS-C; in terms of AUPRC, however, training from scratch outperformed the transfer learning test that retrained all layers on DRIAMS-B (0.385 vs. 0.274). Moreover, retraining the network with the weights of the convolutional layers frozen gave lower results in all cases than retraining the entire network.
The results of the analysis that increased the amount of target data used for fine-tuning are shown in Supplementary Figure S1. DRIAMS-B was the subset that adapted best to the models trained on DRIAMS-A, despite having the smallest number of samples available for training. The DRIAMS-C and -D subsets, despite not achieving large improvements in prediction accuracy, improved consistently with the number of samples used in fine-tuning. These results show that as the percentage of samples increases, the AUROC and AUPRC also improve, demonstrating that a small number of new samples can have a large impact on the model's performance after fine-tuning.
Regarding the feature importance analysis, the SHAP results are shown in Supplementary Figures S2-S5. SHAP values were computed for the three best models obtained for each bacterium under study: E. coli-Ceftriaxone (Supplementary Figure S3), K. pneumoniae-Ceftriaxone (Supplementary Figure S4), and S. aureus-Oxacillin (Supplementary Figure S5). The SHAP values were computed on DRIAMS-B, -C, and -D to analyze the impact of the most important features (m/z peaks) on the fine-tuning process.
Analyzing the results obtained on DRIAMS-A (Supplementary Figure S2), the proposed model focuses its attention on the first part of the spectrum (2,000-7,000 Da), which contains ions of lower mass that separate easily, allowing better differentiation between spectra of susceptible and resistant bacteria.
In the case of E. coli-Ceftriaxone, when the model is tested on DRIAMS-B (Supplementary Figure S3A), most of the influential m/z peaks appear in the 6,800-6,900 Da range, but after transfer learning they become closer to those of the base model. Notably, when transfer learning is applied, the 8,450 Da peak, previously attributed to antibiotic multi-resistance in Escherichia coli, appears among the top 20 features. For DRIAMS-C (Supplementary Figure S3B) and DRIAMS-D (Supplementary Figure S3C), there are no major differences with respect to the base model, except that in DRIAMS-C some peaks in the 6,800-6,900 Da range also stand out, although their direct relationship with antibiotic resistance has not yet been documented.
For K. pneumoniae-Ceftriaxone, the tendency of the base model remains similar: a large part of the most important peaks lie in the 2,000-3,000 Da range. However, when testing on the external datasets (Supplementary Figure S4), the spectra are differentiated mainly by the m/z peaks at 7,770, 4,736, 2,135, and 7,706 Da, which, together with other peaks, coincide with those reported by Weis et al. (2022), helping to confirm their relationship with the identification of antimicrobial resistance.
Finally, for S. aureus-Oxacillin, the base case (DRIAMS-A, Supplementary Figure S5A) is notable for the absence of the m/z peaks at 2,414 Da (PSM-mec) and 3,006 Da (agr-positive), which have been widely documented as directly attributable to the MRSA subspecies (methicillin-resistant Staphylococcus aureus). In the SHAP values for the DRIAMS-B dataset (Supplementary Figure S5B), the 2,414 Da peak is identified, along with the 4,517 Da peak, also reported by Weis et al. (2022) and previously associated with antibiotic resistance [MRSA clonal complex CC398]. For DRIAMS-C (Supplementary Figure S5C), some of the peaks previously associated with antibiotic resistance do not stand out, but m/z peaks at 2,411 and 2,417 Da are found, which could correspond to the 2,414 Da peak given calibration differences in the equipment used.
Discussion
In this study, MSDeepAMR models based on DL were implemented to predict AMR. Specifically, the MSDeepAMR model was applied to three bacteria with varied antibiotic resistance profiles: E. coli (ciprofloxacin, ceftriaxone, cefepime, piperacillin-T., tobramycin), K. pneumoniae (cefepime, ciprofloxacin, ceftriaxone, meropenem, tobramycin), and S. aureus (oxacillin, ciprofloxacin, fusidic acid). Raw MS data were used, and deep learning methods were applied to obtain the MSDeepAMR models. Among the trained models, the best AUROC and AUPRC performances were obtained for E. coli-Ceftriaxone, K. pneumoniae-Ceftriaxone, and S. aureus-Oxacillin (Table 2). These models were subsequently used to study their adaptability to external data (Table 3). For the remaining models, we consider that the lower AUPRC performances are due to the predominant class imbalance in the datasets, so future research should focus on developing classifiers that are robust to class imbalance in the study of antibiotic resistance.
Table 7 shows the results obtained with MSDeepAMR, comparing them with state-of-the-art machine learning algorithms and specifically with the research of Weis et al. (2022). A Wilcoxon test was applied and detected statistically significant differences (p-value < 0.05) between our MSDeepAMR model and the state-of-the-art results. In detail, considering that the data used were the same, Table 7 shows that the MSDeepAMR model improves the AUROC values by an average of 13% compared with the more traditional machine learning algorithms implemented by Weis et al. (LightGBM for E. coli, a multi-layer perceptron for K. pneumoniae, and LightGBM for S. aureus). As for the AUPRC, the performance of our model considerably exceeded the previous results, even doubling the AUPRC obtained in the best cases: E. coli-Ceftriaxone (0.79 vs. 0.30), E. coli-Cefepime (0.70 vs. 0.24), and K. pneumoniae-Ceftriaxone (0.68 vs. 0.33).
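The paired comparison reported above can be reproduced with a standard Wilcoxon signed-rank test; the scores below are placeholders, not the values from Table 7:

```python
# Hedged sketch of the paired Wilcoxon signed-rank comparison.
from scipy.stats import wilcoxon

auroc_msdeepamr = [0.94, 0.89, 0.91, 0.87, 0.92]  # hypothetical per-task AUROC scores
auroc_baseline  = [0.80, 0.78, 0.83, 0.75, 0.81]  # hypothetical baseline scores

stat, p_value = wilcoxon(auroc_msdeepamr, auroc_baseline)
print(f"W={stat:.1f}, p={p_value:.4f}")  # p < 0.05 indicates a significant difference
```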
Concerning the ablation study, normalization and regularization layers constitute a fundamental part of the neural network architecture for this type of data: as shown in Table 2, the use of these layers improved the results in most of the cases presented.
Regarding the implementation of transfer learning or domain adaptation methodologies, we found that, although the equipment used for sample collection in each laboratory belonged to the Microflex Biotyper System product family by Bruker Daltonics, adapting a pre-trained model to data from a new laboratory is not a simple task. This is partially due to the large number of genetic and biological factors that distinguish bacterial strains according to their origin, and to slight differences in sample collection parameters. Nevertheless, the experiments demonstrated that retraining all layers of a model to adjust it to data from a new laboratory is a better starting point than training a model from scratch. These promising results open the way for further research on transfer learning in models that include MALDI-TOF mass spectrometry data.
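The two retraining strategies compared here can be expressed compactly; the following is a minimal sketch assuming a Keras checkpoint of the DRIAMS-A model, where the file name and the data variables `X_new`, `y_new` (the new laboratory's spectra and labels) are illustrative assumptions:

```python
# Sketch of full retraining vs. frozen-convolution fine-tuning.
from tensorflow import keras

model = keras.models.load_model("msdeepamr_driams_a.h5")  # hypothetical checkpoint

# Strategy 1 (better in the reported experiments): retrain every layer.
for layer in model.layers:
    layer.trainable = True

# Strategy 2 (worse in all cases tested): freeze the convolutional feature
# extractor and retrain only the dense head.
# for layer in model.layers:
#     layer.trainable = not isinstance(layer, keras.layers.Conv1D)

model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[keras.metrics.AUC(name="auroc")])
model.fit(X_new, y_new, epochs=20, batch_size=32, validation_split=0.1)
```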
Moreover, it was demonstrated that the transfer learning results improve considerably as the sample size increases (Supplementary Figure S1). This implies that our methodology enables AMR detection even with a very small amount of data, although the availability of a larger number of samples can further improve the model's performance.
Finally, it was demonstrated that when a large number of samples (over 3,000) is available, it is possible to generate deep learning models with high performance in identifying resistance or susceptibility to a given antibiotic. These models can be used in clinical routine to quickly and efficiently identify the optimal treatment, avoiding the wait for traditional bacterial cultures and the indiscriminate use of broad-spectrum antibiotics.
Conclusion
This work proposes a complete methodology for antimicrobial resistance prediction from raw mass spectrometry data, based on deep learning, which is designed to identify patterns in complex and extensive data. In our case, MS data and their m/z peaks allow us to characterize whether a bacterium is resistant or susceptible to an antibiotic. To demonstrate the effectiveness of this approach, the mass spectra of Escherichia coli, Klebsiella pneumoniae, and Staphylococcus aureus were analyzed together with their AST profiles. The datasets were constructed from a recently published free database (Weis et al., 2022). Our results showed that the implemented MSDeepAMR models were efficient and effective for AMR prediction on this type of data. Furthermore, the MSDeepAMR models showed better performance (AUROC) than the state-of-the-art results (Wang et al., 2022; Weis et al., 2022; Zhang et al., 2022), which were obtained with traditional machine learning algorithms. Since deep learning models require a significant number of samples for training, a complication for laboratories with a low sample collection rate, the implementation of transfer learning was also studied.
The transfer learning results demonstrated that the developed MSDeepAMR models can serve other laboratories as a starting point for adaptation to their own data, supporting the reproducibility of our models. Our results also showed that the MSDeepAMR models work directly on raw MS data and gave good classification and prediction results. In addition, transfer learning will allow these models to be used on new samples, providing the reproducibility needed when predicting AMR across different laboratories. Nevertheless, it is still necessary to continue optimizing methodologies for antimicrobial resistance analysis from MALDI-TOF mass spectra and to keep contributing to the creation of public databases from laboratories worldwide.
Finally, one limitation is that we consider MALDI-TOF data from Bruker equipment, which produces data with a different length dimension compared with other equipment, for example the MALDI-TOF from bioMérieux. In future research, adaptations of this methodology to inputs from other MALDI-TOF devices may be explored, potentially opening the door to cross-device AMR models.
In future work, MSDeepAMR could be embedded within the MALDI-TOF device to enable on-the-fly AMR detection, because the proposed network classifies the raw data directly, avoiding any manual preprocessing. In this study, three main bacteria in the DRIAMS dataset were examined; the methodology could nevertheless be evaluated on more bacterium/antibiotic pairs. For this purpose, we published our code as open source to enable other researchers and practitioners to extend this line of research.
FIGURE Scheme of the methodology proposed for the identification of AMR. (A) MS database selection followed by an exploratory analysis of the content; (B) extraction of the bacteria chosen to be studied; (C) binning of the spectra into equal-sized feature vectors to obtain an ad hoc DB for deep learning model implementation; (D) data split into fixed training and test percentages, stratified by both antimicrobial class and sample case number; (E) DL model implementation: ten-fold cross-validation and hyperparameter optimization for training, and ten-fold cross-validation for evaluating the final model; (F) model performance evaluation and comparison according to AUROC, AUPRC, and balanced accuracy.
• Baseline model, without any normalization or regularization layer.
• MSDeepAMR model with batch normalization after each convolutional layer.
• MSDeepAMR final model, with batch normalization after each convolutional layer and dropout after the first fully connected layer.
FIGURE MSDeepAMR architecture: four convolutional layers followed by three fully connected layers. The last layer corresponds to the sigmoid classifier, which indicates the probability of belonging to one of the classes.
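Reading the architecture caption and the ablation variants together, an MSDeepAMR-like network can be sketched as follows; the filter counts, kernel sizes, and input length are placeholders rather than the published hyperparameters:

```python
# Hedged sketch of an MSDeepAMR-like 1D CNN: four conv blocks with batch
# normalization, then three dense layers ending in a sigmoid unit, with
# dropout after the first dense layer (per the ablation description).
from tensorflow.keras import layers, models

def build_msdeepamr_like(input_len=6000):
    m = models.Sequential()
    m.add(layers.Input(shape=(input_len, 1)))       # binned m/z intensity vector
    for filters in (32, 64, 128, 256):              # four convolutional blocks
        m.add(layers.Conv1D(filters, kernel_size=5, activation="relu"))
        m.add(layers.BatchNormalization())
        m.add(layers.MaxPooling1D(pool_size=2))
    m.add(layers.Flatten())
    m.add(layers.Dense(128, activation="relu"))
    m.add(layers.Dropout(0.5))                      # dropout after first dense layer
    m.add(layers.Dense(64, activation="relu"))
    m.add(layers.Dense(1, activation="sigmoid"))    # P(resistant)
    return m

model = build_msdeepamr_like()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
```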
TABLE Number of samples of each bacterium and antibiotic under study in DRIAMS-A.
TABLE Performance results of ten-fold cross-validation in the ablation study and final MSDeepAMR model.
The best results for each Bacterium-Antibiotic pair are highlighted in bold font. The best global metric for each bacterium under study is highlighted in bold red font. B. Acc, balanced accuracy; BN, batch normalization; DO, dropout.
TABLE Number of samples of each bacterium and antibiotic in the external datasets (DRIAMS-D did not contain samples for the S. aureus-Oxacillin case).
TABLE AUROC and AUPRC of external testing and transfer learning of the E. coli-Ceftriaxone model trained on DRIAMS-A. The best result for each case of study is highlighted in bold font.
TABLE AUROC and AUPRC external testing and transfer learning of K. pneumoniae-Ceftriaxone model trained on DRIAMS-A.
TABLE AUROC and AUPRC of external testing and transfer learning of the S. aureus-Oxacillin model trained on DRIAMS-A. The best result for each case of study is highlighted in bold font.
TABLE MSDeepAMR performance results, comparing the present study with those previously obtained by the state of the art (Weis et al.). The best result for each case of study is highlighted in bold font.
| 9,427.6 | 2024-04-17T00:00:00.000 | ["Medicine", "Computer Science", "Chemistry"] |
Robustness and timing of cellular differentiation through population-based symmetry breaking
During mammalian development, cell types expressing mutually exclusive genetic markers are differentiated from a multilineage primed state. These observations have invoked a single-cell multistability view as the dynamical basis of differentiation. However, the robust regulative nature of mammalian development is not captured therein. Considering the well-established role of cell-cell communication in this process, we propose a fundamentally different dynamical treatment in which cellular identities emerge and are maintained on the population level, as a novel unique solution of the coupled system. Subcritical organization of the system enables symmetry breaking to be triggered by cell number increase in a timed, self-organized manner. Robust cell type proportions are thereby an inherent feature of the resulting inhomogeneous solution. This framework is generic, as exemplified for the early embryogenesis and neurogenesis cases. Distinct from mechanisms that rely on pre-existing asymmetries, we thus demonstrate that robustness and accuracy necessarily emerge from the cooperative behaviour of growing cell populations during development.
Main
Functional diversification during mammalian development arises through symmetry breaking events that characterize a transition from an initially homogeneous group of multilineage primed cells towards a heterogeneous population of differentiated cellular identities (Zhang and Hiiragi, 2018; Simon et al., 2018). These events are typically generated, and the corresponding states maintained, through self-organized cooperative processes whose dynamics cannot be deduced from the dynamical features of the individual cells (Zhang and Hiiragi, 2018; Kauffman, 1993). Moreover, the relatively low information content of a system residing in a symmetrical homogeneous state implies that information originates with the cell types generated at the symmetry breaking event. The onset of this event must be accurately timed, such that not only are the distinct cell fates suitably specified, but the number distribution of each type is also robustly established.
The observations that the expression of mutually exclusive genetic markers distinguishes the differentiated fates from each other and from the multilineage primed state have, however, promoted the hypothesis that multistability on the level of single cells sets the dynamical basis for differentiation (Kauffman, 1969; Andrecut et al., 2011; Wang et al., 2011; Enver et al., 2009). The most common functional motif that accounts for bistability on the single-cell level is a two-component genetic toggle switch (Thomas, 1981; Cherry and Adler, 2000; Snoussi, 1998), whereas the addition of self-activating loops (Huang et al., 2007; Bessonnard et al., 2014; Jia et al., 2017) gives rise to a third stable state, the multilineage primed co-expression state. Such single-cell multistable circuits have been used to describe the Gata1/PU.1 switch that governs lineage commitment in multipotent progenitor cells (Huang et al., 2007; Graf and Enver, 2009), the Cdx2/Oct4 switch in the differentiation of the totipotent embryo (Niwa et al., 2005), the T-bet/Gata3 switch in the specification of T-helper cells (Huang, 2013), as well as the Gata6/Nanog switch in the branching process of the inner cell mass (ICM) (Bessonnard et al., 2014; Chickarmane and Peterson, 2008). In these systems, pre-existing asymmetries, typically attributed to stochastic events or cell-to-cell heterogeneities, are assumed to be amplified by intercellular signaling and are necessary to drive the individual cellular states out of the multilineage primed attractor and into one of the differentiated attractors (De Mot et al., 2016; De Caluwé et al., 2019). However, as all of the differentiated states are initially present in this description, symmetry breaking does not formally occur. Moreover, experimental evidence suggests that cell-cell communication and signaling are crucial for obtaining differentiated cell types during early mammalian development (Nichols et al., 2009; Yamanaka et al., 2010). Thus, cells do not function as isolated entities that process information from the environment in a unidirectional input-output fashion, but operate as a joint dynamical system in which they continuously communicate by secreting growth factors or other signaling molecules. The principle of an emergent symmetry breaking mechanism that characterizes how cells differentiate into heterogeneous types, while simultaneously accounting for the onset and robustness of the process, therefore remains unclear.
We propose a dynamical mechanism in which a population of identical cells breaks the symmetry due to cell-cell communication, giving rise to a novel heterogeneous dynamical solution that is different from the solutions of the isolated cells. We identified that the transition from a homogeneous to a heterogeneous population is uniquely governed by a subcritical pitchfork bifurcation, resulting in the formation of an inhomogeneous steady state (IHSS) that reflects the cooperatively occupied differentiated cell fates. The formation of this new, population-based heterogeneous attractor additionally demonstrates how information is generated at a symmetry breaking event. Parameter organization in the vicinity of its bifurcation point enables cell number increase to trigger the symmetry breaking event in a self-organized manner, which renders the timing of cellular differentiation an emergent property of growing populations. Moreover, reliable cell proportions in the differentiated fates are an inherent feature of this symmetry-breaking solution. The proposed mechanism is generic and applies to systems with diverse gene expression dynamics in single cells, as we additionally demonstrate for a bistable circuit describing the differentiation into epiblast (Epi) and primitive endoderm (PrE) states from the homogeneous ICM during the blastocyst stage of the mammalian preimplantation embryo, as well as for single-cell oscillations capturing vertebrate neurogenesis dynamics.
Heterogeneous cellular identities emerge via a population-based inhomogeneous steady state solution
We consider a generic case where the single cell dynamics is governed by a minimal model of a genetic toggle switch, composed of two genes u and v that inhibit each other's transcription via their respective promoters P u and P v . We assume here that the extracellular signaling molecules s negatively affect the transcription of the switch gene u via intracellular signaling processes, while their expression is in turn regulated by the dynamics of the toggle-switch (Eq. (1), Fig. 1a, inset). As the communicating signals are secreted by the cells themselves, their concentration is no longer a parameter, but rather a variable in the system. This couples the states of the cells, creating interdependence between them, thus effectively establishing a single joint dynamical system.
The dynamics of the system on the level of a single cell, explored with respect to changes in the expression strength of the promoter P_u, α_u, exhibits monostability over the full parameter range (Fig. 1a). However, the bifurcation analysis for coupled systems revealed multiple different dynamical regimes, even for a minimal population of two identical cells. For a given range of α_u, only a single fixed point is stable (Fig. 1b and Fig. 1c). This homogeneous steady state (HSS) represents the multilineage primed state (mlp), in which both u and v are co-expressed in all cells. At a critical α_u value, the HSS's symmetry is broken via a pitchfork bifurcation (PB): the HSS loses its stability, and a pair of fixed points is stabilized, giving rise to an inhomogeneous steady state (IHSS) (Fig. 1b, red). The IHSS is a single dynamical solution with a heterogeneous manifestation: the unstable HSS splits into two symmetric branches that gain stability via saddle-node (SN) bifurcations and correspond to a high u-expression state in one cell (u_2) and a low u-expression state in the other (u_1 < u_2, yellow), and vice versa (violet, Fig. 1d). These branches reflect the differentiated cellular identities that emerge from the multilineage primed HSS. The results therefore show that tristability in single cells, i.e. the initial coexistence of all possible attractors, is not a necessary requirement for describing differentiation from a multilineage primed state; rather, the emerging cellular fates are generated by the coupling cell-cell interactions. These observations furthermore hold for a number of different network topologies (Supplementary Figs. 1a to 1c, Eqs. (2)).
Unlike the independent steady states of a classical bistable system, in the two IHSS branches the cell states are interdependent, and the branches are conjugate to one another. For two coupled cells, therefore, a high u-expressing state in one cell and a low u-expressing state in the other always emerge jointly, even when the cells are completely identical in their parameters. In that sense, the IHSS arising via a pitchfork bifurcation is a true symmetry breaking solution, since it inevitably and robustly leads to differentiated cellular identities. More generally, for N globally coupled cells, N − 1 different distributions of the cells between the upper and lower branches are possible, manifested by stable attractors in phase space (Koseska et al., 2010). For example, for N = 4 globally coupled cells, three different IHSS distributions are stable: 1u+3v+ denotes one cell having the high- (u+) and 3 cells the low-expressing u state (v+), 2u+2v+ denotes 2 cells in each state, and 3u+1v+ denotes 3 cells with the high- and one with the low-expressing u state (Fig. 1e). These distributions span the parameter space and are always sequentially ordered towards an increasing number of cells in the high-u expression state for increasing α_u, where branches of neighbouring distributions with similar proportions overlap in parameter space. It thus follows that reliable proportions of cells in the differentiated fates are an inherent property of the IHSS solution; which proportion is observed for a specific system depends only on the organization of that system in parameter space.
Parameter differences between single cells on the other hand increase the stability region of the IHSS (Koseska et al., 2009). Bifurcation analysis in the case where cell-to-cell variability in α u was present (Methods) revealed that the HSS is already characterized with a slightly different mlp value in each cell, whereas its stability interval was relatively decreased (black, Fig. 1f). The parameter range where the IHSS solution is stable was on the other hand further expanded, and the overlapping intervals between neighbouring distribution branches were reduced (compare relative decrease from Fig. 1e to Fig. 1f). This effectively increases the robustness of the cell proportions in the two differentiated fates for a given α u . Thus, the number of cells acquiring specific fates is conserved under cell-to-cell variability.
These results demonstrate that several crucial dynamical characteristics emerge from population-based symmetry breaking: i) the description of the undifferentiated co-expression state does not require evoking higher-order multistability on the level of a single cell, ii) heterogeneous cellular identities are generated from initially identical cells and are maintained on a population level due to cell-cell communication, and iii) reliable proportions of differentiated cells are a direct consequence of the IHSS solution and can be robustly maintained under cell-to-cell variability.
To probe whether these principles generally apply for signaling with different communication ranges, we considered, in addition to global (all-to-all) coupling, three further cell-cell coupling scenarios: local (nearest neighbor only) and non-local (nearest and second-nearest neighbour) communication, as well as distance-dependent cell-cell interactions forming an irregular grid (Supplementary Figs. 1d to 1g). The last of these was implemented by a coupling scheme in which the probability of interaction declines with increasing distance between the cells (Supplementary Fig. 1f, left, Methods). Although the HSS was destabilized at different α_u values depending on the coupling type, the proportion of high-u expressing cells progressively increased with increasing α_u for non-locally, globally and irregularly coupled N = 32 cells on a 4×8 grid (Fig. 1g).

Fig. 1 Emergence of cellular identities via population-based symmetry breaking. a Bifurcation diagram depicting the monostable increase in u levels with respect to changes of the promoter strength α_u in a single-cell system. Inset: underlying network topology. b Bifurcation analysis for two coupled identical cells (scheme in inset) reveals the emergence of an inhomogeneous steady state (IHSS) solution. PB: symmetry-breaking subcritical pitchfork bifurcation; SN: saddle-node bifurcation. Solid lines depict stable states: homogeneous steady state (HSS, black) and IHSS (red); dashed lines: unstable steady states; dotted lines: organization points in parameter space. c u_1-u_2 phase plane analysis for organization of the two-cell system in the HSS (α_u = 2.3); solid lines: nullclines. d u_1-u_2 phase plane analysis for organization of the two-cell system in the IHSS (α_u = 2.52). Inset: u-v phase plane manifestation of the IHSS on the marginal level of single cells. e Bifurcation analysis of a system of N = 4 globally coupled cells (inset). Three stable IHSS distributions with increasing u+/v+ cell ratios appear sequentially (legend). Gray shaded area: HSS/IHSS coexistence parameter range for subcritical organization. Line description as in b. f Bifurcation analysis for N = 4 non-identical globally coupled cells (equivalent to e, see Methods). g u+/v+/mlp cell proportions for increasing α_u for a non-locally coupled population of N = 32 cells on a 4×8 grid. The width of each sub-bar within a bar reflects the fraction of occurrence of the respective u+/v+/mlp proportion in the 10 independent realizations. The initial conditions were randomly drawn from a normal distribution N(µ_ics, σ²_ics) around the corresponding α_u-specific mlp state as mean (µ_ics), with σ_ics = 0.1. Model parameters in Methods.

Simulations from such randomly drawn initial conditions (Methods) demonstrated that although neighbouring IHSS distributions were populated, the α_u-specific cell type ratios still remained reliable, with a deviation of 10%. For local coupling, however, a 50-50% ratio was maintained over a large α_u interval, indicating that the likelihood of visiting an IHSS manifestation different from a regular salt-and-pepper pattern on a 4×8 lattice only increases for higher α_u values (Supplementary Fig. 1e). This analysis also highlighted that for N = 32 cells, specifications were observed at α_u values for which, in the case of N = 2 coupled cells, only the multilineage primed state (the HSS) was stable (compare Fig. 1g and Fig. 1b). This inevitably opens the question of how the timing of cellular differentiation comes about.
Timing of cellular differentiation emerges in a self-organized manner for critically organized growing populations

It is typically assumed that a change of a bifurcation parameter, such as the extracellular concentration of signaling molecules s, drives the system through a dynamical transition, thereby relating the onset of differentiation to characteristic reaction rates (De Mot et al., 2016). Considering, on the one hand, that s is not a parameter but a dynamical variable of the system and, on the other hand, that system parameters such as the promoter expression strength in the studied example are biochemical constants that cannot deviate significantly, the question arises of what triggers the symmetry breaking event during differentiation. We therefore study next how cell fate specification from the multilineage primed state occurs for a fixed system organization in parameter space.
Experimental observations show that the multilineage primed state is maintained for several cell cycles before differentiation occurs (Saiz et al., 2016; Hatakeyama et al., 2004). Given this initial symmetry, it follows that for N = 2 coupled cells the system is poised in the HSS, before the pitchfork bifurcation (as for α_u = 2.3, Fig. 1b). Since for N globally coupled cells N − 1 distinct IHSS distributions are possible (as in Fig. 1e), it can be deduced that the number of distributions increases as 2^n, where n denotes the step in the lineage tree. As the IHSS emerges via a subcritical PB and thereby coexists with the HSS in the vicinity of the PB point, the inclusion of these new distributions with each cell division widens the coexistence regions in parameter space, eventually capturing the system's organization point (compare the gray shaded area in Fig. 1e to Fig. 1b). As a result, in the parameter region where for N = 2 coupled cells only the HSS was stable (Fig. 2a, green), stable IHSS solutions appeared for N = 4 non-locally coupled cells (red/blue u+/v+ stackbar markers, Methods). Furthermore, for growing populations of N > 4 cells, the HSS loses its stability as the PB position shifts due to an increase in the intrinsic spatial inhomogeneities arising from the differing number of neighbors near the boundaries of the non-locally coupled grid. This shift is akin to the one observed previously when cell-to-cell variability was introduced in the globally coupled system, reducing the HSS stability range (Fig. 1f). Therefore, while the PB position in a globally coupled system of identical cells does not change (dotted line), for smaller communication ranges reflecting non-locally (solid line) and locally coupled (dashed line) systems, the symmetry breaking point shifts towards lower α_u values with cell number increase (Fig. 2b). This two-parameter bifurcation diagram therefore shows that differentiation timing is not constrained to a narrow parameter region, reducing the necessity for fine-tuning in locally and non-locally coupled systems. Taken together, the coexistence between the IHSS and the HSS and the subsequent loss of HSS stability with increasing system size effectively generate a subcritical pitchfork bifurcation with respect to N (Fig. 2a). This renders the number of cells an effective bifurcation parameter that triggers symmetry breaking and cellular differentiation.
To demonstrate how organization before the bifurcation point, in conjunction with cell division, can serve as a timing mechanism that regulates the onset of differentiation, we generated a lineage tree in which the population growth of non-locally coupled cells on a grid is represented in a simplified manner: after a given time period T that mimics the cell cycle length, all cells divide and the number of cells is doubled. The initial gene expression states of the daughter cells are inherited from the final state of the mother cell. Starting from the mlp state (green), as the cell population grows in size and the IHSS distributions appear via the subcritical PB, the loss of HSS stability triggers a switch to the already existing IHSS solution (n = 4th step of the lineage tree, Fig. 2c). The distribution proportions for increasing system size showed a steady ratio above a certain population size (N ≈ 16 cells, Fig. 2c).

Fig. 2 (partial caption): ... fates are separated, and the respective evolution of the sub-systems is followed to generate: d lineage tree seeded from the N = 2 cells that before separation adopted the high u-expression state (u+); e lineage tree seeded from the N = 6 cells that initially adopted the low u-expression state (v+). Both upper panels reflect the respective cell type proportions.
The self-organized manner of generating heterogeneous cellular identities on a population level that we propose here also implies that the differentiated fates cooperatively coexist. To check the importance of cooperativity in maintaining cell fates, we performed a numerical experiment in which the cells at the n = 4th step of the lineage tree (N = 8 cells, Fig. 2c) were separated according to their fates, forming two single-fate sub-populations of different size that could further continue to grow and divide (Figs. 2d and 2e). The sub-population of two coupled cells with high u-expression reverted to the only stable 2-cell system attractor, the mlp HSS, but after 2 cell cycles (N = 8 cells) both differentiated fates re-emerged (Fig. 2d). The other sub-population, of N = 6 cells with low u-expression, initially briefly re-visited the multilineage primed state before both cell types stably re-emerged and the population settled in the IHSS attractor (Fig. 2e).
The difference in timing between the two cases again points to the cell number dependence of the triggering of the symmetry breaking (Figs. 2a and 2b). The cell type ratios for both sub-populations of different size stabilized to a steady value similar to that of the full system before separation, and differed from each other by around 6%. This scaling and regenerating capability of the self-organizing system is a direct consequence of the properties of the IHSS solution: dynamically, it is not permitted to populate the upper without populating the lower u-expressing state. Thus, even when the cells are separated such that only cells with the high (low) u-expression state remain, the cell division and the cell-cell communication through which the IHSS is established in the first place enable the system to recover both cell types with reliable ratios.
Reliable proportions of differentiated cells are an intrinsic feature of the IHSS solution
To systematically probe the robustness of the cell type ratios, we investigated the effects of variation in the initial conditions, as well as the presence of the intrinsic noise that is ubiquitous in gene expression. Results were obtained for a population of N = 32 cells under the four distinct coupling types and a fixed α_u organization before the symmetry breaking bifurcation, as in Fig. 2c (α_u = 2.3). Sampling the single-cell initial conditions from normal distributions with increasing standard deviations σ_ics around the mlp value produced distributions with reliably conserved proportions between u+ and v+ cells for each coupling type. The different communication ranges yielded different stable u+/v+ proportions for this fixed α_u value, in agreement with the values in Fig. 1 and Supplementary Fig. 1: ∼0.45 for non-local coupling (Fig. 3a), 0.5 salt-and-pepper patterns for local coupling (Supplementary Fig. 2a), and ∼0.4 for irregular coupling (Supplementary Fig. 2g), whereas the HSS remained stable against moderate perturbations for global coupling (Supplementary Fig. 2d). Stochastic realizations with a gradual shift in the initial mean value from the high v-expression to the high u-expression state (µ_ics from 0 to 1, Fig. 3b, Supplementary Figs. 2b, 2e and 2h), or with increasing noise intensity (Fig. 3c, Supplementary Figs. 2c, 2f and 2i), also demonstrated reliable u+/v+ proportions. As observed previously, the organization in parameter space for a given communication range determines the obtained value of the steady proportions of cells in the differentiated fates. We also observed a manifestation in which, besides populating the u+/v+ states within the IHSS solution, a few cells also populated the mlp state, resembling a chimera-like state (Kuramoto and Battogtokh, 2002; Abrams and Strogatz, 2004). This was, however, only observed for non-local or irregular coupling (Fig. 3a, Supplementary Figs. 2g to 2i). Since chimera states have predominantly been characterized for systems of coupled oscillators, a detailed theoretical study is required to classify this solution dynamically.
We next analyzed whether reliable cell proportions can be achieved when cellular identity is considered an intrinsic feature of single cells. Following this current view of differentiation, tristability on the level of single cells is the necessary requirement for simultaneously accounting for the multilineage primed as well as the differentiated fates, where intercellular signaling only drives the distribution of cells into one of the existing attractors (De Mot et al., 2016). It has been shown, however, that cell-cell communication can lead to novel dynamical solutions of the coupled system that are different from those of the isolated cells, indicating that the features of the coupled system cannot be formally explained by those of single cells (Suzuki et al., 2011; Goto and Kaneko, 2013; Koseska et al., 2007; Ullner et al., 2008). The proposed symmetry breaking mechanism is also a demonstration of this principle. We therefore explore whether the concept of multistability, on the level of a single cell as well as on the level of a joint communicating system, is necessary and sufficient to serve as a basis for differentiation.
For this purpose, we adapted a paradigmatic multistability model in which a toggle switch with self-activation leads to tristability on the level of single cells (Jia et al., 2017) so that it also exhibits tristable behavior when cells are coupled (Eq. (3), Fig. 3d). In this case, the lineage commitment was not robust, and the fate decisions were completely determined by the distributions of initial conditions or the amplitude of the noise intensity. All cells either remained in the multilineage primed state, or all cells jointly populated one of the two differentiated states (Figs. 3e to 3h). Thus, while a system of independent cells can theoretically populate any combination of the coexisting single-cell attractors (Fig. 3e, gray squares), within a coupled system all of the cells populate the same state in the multistability solutions (Fig. 3e, coloured squares). This stems from the lack of symmetry breaking (no PB, Fig. 3d), which highlights the difference in dynamical behavior from the IHSS solution: the IHSS arises from a true symmetry breaking event, its branches emerge together at the PB and, as previously noted, are conjugate to each other (van Kekem and Sterk, 2019), always leading to a joint occupancy of each of the differentiated fates by the cells.
The existence of multistability on a single-cell level is therefore not a sufficient condition for the co-existence of heterogeneous cellular identities and reliable proportions on the population level of a coupled system. While coupled multistable systems could in principle generate a more complex PB-induced IHSS solution, as described here they are not a necessary condition for the emergence of symmetry breaking.
IHSS as a generic mechanism for cellular differentiation: examples of the early embryo and vertebrate neurogenesis
The proposed symmetry breaking solution, together with the system's organization before the pitchfork bifurcation point, uniquely provides a dynamical mechanism of differentiation that simultaneously accounts for robustness in proportions and self-organized timing of the event. These properties derive directly from the population-based dynamical transitions and therefore apply to systems with diverse gene expression dynamics in single cells. In addition to the generic model that displays monostability in a single-cell system, we demonstrate this using bistable and oscillatory single-cell behavior, as is pervasive during embryogenesis and neurogenesis, respectively.
During early embryogenesis, differentiation from the ICM state results in the formation of Nanog-positive epiblast (Epi) and Gata6-positive primitive endoderm (PrE) cells. Multistability on a single cell level has been proposed as an underlying mechanism, where cell fate specification is mediated by intercellular interactions involving Fgf4 communication and Erk signaling (Schröter et al., 2015;Bessonnard et al., 2014;Chazaud et al., 2006). The heterogeneities in extracellular Fgf4 concentrations that each cell perceives have been determined as crucial for cells to populate one of the remaining stable attractors (De Mot et al., 2016;Bessonnard et al., 2014). This single-cell identity view has also been used to explain the occurrence of purely Epi or PrE states when development occurs in the absence of Fgf4 signaling or in presence of a constant high level of exogenous Fgf4 (Nichols et al., 2009;Yamanaka et al., 2010).
Considering again that Fgf4 is not a parameter but a variable of the system, we studied the dynamical properties of a minimal model of two coupled cells (Eq. (4)), where the single-cell dynamics is characterized by bistable behavior (Schröter et al., 2015). Bifurcation analysis demonstrated that not only does a co-expression ICM-like state (black) emerge due to cell-cell communication, but a symmetry-breaking IHSS also occurs, reflecting the differentiation into PrE and Epi fates (red, Fig. 4a). Numerical simulations of a population of N = 32 locally coupled cells organized before the pitchfork bifurcation (α_N = 5) showed the transition of the joint system towards either Epi or PrE fates upon signaling perturbations, as experimentally observed (Nichols et al., 2009; Yamanaka et al., 2010). Administering increasing doses of an Fgf4 inhibitor gradually decreased the cell-cell communication, eventually unravelling the Nanog+ solution of the single-cell system (Fig. 4a, gray profile); thus, a gradual increase of the Epi/PrE proportions towards an all-Epi state was observed (Fig. 4b). On the other hand, increasing the dose of exogenous Fgf4 overrode the intercellular communication such that the system reflected the cells' dose-response behavior, resulting in an abrupt joint switch of the population to high Gata6 expression. The coupled system either remained in the salt-and-pepper pattern or transitioned jointly to the PrE state (Fig. 4c), in line with the experimental observations in (Kang et al., 2013). Taken together, these results suggest that the IHSS, in conjunction with critical organization before the pitchfork bifurcation, is consistent with the existing experimental observations, and thus this general principle of population-based symmetry breaking can serve as a basis for the robust Epi and PrE specification crucial during early development.

Fig. 4 Symmetry breaking via a pitchfork bifurcation as a generic mechanism for differentiation during early embryogenesis and neurogenesis. a Combined bifurcation plot depicting bistable Nanog-Gata6 behavior on the single-cell level (gray profile) and the IHSS on a 2-cell population level (red) (Eq. (4)); line description as in Fig. 1b. Remaining panels (partially recovered): differentiation (Eq. (5)) into two distinct differentiated fates as a function of the cell number; stochastic simulations were performed as described in Methods.
In the developing mouse brain, on the other hand, oscillatory expression of transmembrane ligands of the Delta, Serrate, and Lag-2 (DSL) family, Hes-Her proteins, and proneural proteins has been observed in neural precursors before patterned steady states are reached (Kageyama et al., 2007; Shimojo et al., 2008; Momiji and Monk, 2009). Since the oscillations possibly play a central role in delaying the onset of neural differentiation, multiple models have already provided dynamical descriptions of these observations (Momiji and Monk, 2009). However, a dynamical mechanism that leads to heterogeneous steady state levels from initially homogeneous oscillatory dynamics has been lacking. Without describing the molecular details of the system, but rather using a paradigmatic model that captures oscillations on the level of single cells (Eq. (5), Methods), we next explore the possibility that the symmetry of the synchronized solution of the coupled system can be broken via a pitchfork bifurcation induced by the intercellular communication. The bifurcation analysis of a minimal (N = 2) globally coupled system showed coexistence between a limit cycle solution corresponding to synchronized oscillations and an IHSS corresponding to the differentiated state (Fig. 4d, top). An increase in cell number, as before (Fig. 1b to Fig. 1e), enlarged the parameter region where the IHSS is stable; therefore, in the parameter range where for N = 2 coupled cells only a stable limit cycle was observed, stable IHSS distributions appeared for N = 4 cells. Numerical simulations in which cell division was explicitly considered demonstrated that a transition from synchronized oscillations to a symmetry-broken IHSS emerges for critical organization before the pitchfork bifurcation (Fig. 4e). Even though the model does not reflect the molecular details of the Notch-pathway oscillatory dynamics during vertebrate neurogenesis, it demonstrates that population-based symmetry breaking can in principle serve as a mechanism to describe differentiation in growing populations characterized by oscillatory dynamics. Thus, using these two models that resemble the early embryogenesis and neurogenesis cases, together with the generic examples (Fig. 1, Supplementary Figs. 1a to 1c), we demonstrate the general applicability of the proposed population-based symmetry breaking mechanism via a pitchfork bifurcation. Subcritical organization before the pitchfork bifurcation, in turn, enables self-organized timing of cellular differentiation to emerge through cellular division.
Discussion
Important insights regarding symmetry breaking mechanisms unquestionably came from Turing's seminal work (Turing, 1952) and have subsequently been widely explored to describe the emergence of spatial organization during development (Raspopovic et al., 2014; Economou et al., 2012). The population-based symmetry breaking mechanism we propose here, however, not only provides a unique dynamical transition from a homogeneous to a heterogeneous distribution of cellular fates, but, in conjunction with subcritical organization, also accounts for the reliability and timing of the differentiation event. Although a similar mechanism of population-based symmetry breaking via a pitchfork bifurcation has previously been suggested for the Delta-Notch lateral inhibition model when the strength of the interaction between two cells is varied (Ferrell, 2012), it relied on a supercritical PB, and it was not proposed how robustness and accuracy emerge from this dynamical transition. This is, to our understanding, more limiting, since subcritical organization, as we have demonstrated here, is crucial for determining the timing of cellular differentiation with reliable cell type proportions.
We argue that intercellular communication, an integral part of developing mammalian embryos, gives rise to differentiated fates whose dynamical manifestation cannot be directly inferred from the gene regulatory network dynamics of single cells. We demonstrated that such a novel dynamical solution with a heterogeneous manifestation within the population can emerge with cell division even from systems of single cells characterized by monostability. This is distinct from systems that rely on pre-existing asymmetries, where intercellular signaling influences the distribution of cellular fates between co-existing attractors.
In order to understand the basic mechanisms of cell-to-cell cooperative behavior, several theoretical principles have already been successfully developed by investigating both natural and synthetic genetic networks (McMillen et al., 2002; Taga and Bassler, 2003; Kuznetsov et al., 2004; Garcia-Ojalvo et al., 2004; Ullner et al., 2007). These studies have shown that multiple coexisting attractors arise in multicellular systems, enabling very diverse dynamics, different from the dynamics of the isolated cells, to be manifested on a population level in conjunction with high adaptability. Cooperative behavior has therefore been profiled as necessary for the emergence of typical features observed in cellular systems. In that respect, it has been proposed that phase-repulsive coupling generally leads to inhomogeneous solutions such as the IHSS discussed here, whereas the size of the system affects the relative sizes of the basins of attraction of the coexisting regimes, promoting the inhomogeneous solutions (Ullner et al., 2008; Koseska et al., 2010). Another inhomogeneous solution directly associated with the IHSS is the inhomogeneous limit cycle, a periodic solution of a system of coupled oscillators rotating around two spatially non-uniform centers (Ullner et al., 2007, 2008; Koseska et al., 2010). Manifestation of this solution has in turn been related to stem cell differentiation with self-renewal (Suzuki et al., 2011). The derived generalizations of how these solutions emerge, however, mainly refer to populations of coupled genetic networks that exhibit oscillations. To understand the minimal coupling principles that lead to novel inhomogeneous solutions in populations that exhibit mono- or bistability on a single-cell level, a more detailed theoretical analysis is required. The hypothesis that timing is triggered by the number of cells can be experimentally tested, for example, in stem cell cultures where the specification of populations of different sizes is tracked over time. Such experiments can also shed light on the self-organizing properties of mammalian embryos by probing the regeneration of cell type proportions upon external physical or chemical perturbations, as suggested in Fig. 2. The cooperativity necessary for these organizational principles to arise can be further tested in systems of coupled synthetic genetic networks that mimic intercellular communication.
It is important to note here that one of the main characteristics of the IHSS solution is the robustness of the differentiated cell type numbers. The defined number of possible distributions between the two cell fates, all of which represent stable attractors, ensures that stochasticity in gene expression dynamics or variability in the initial conditions can only switch the system between neighbouring attractors with similar proportions, thus preserving the overall robustness. This in turn allows one to envision a further extension of this principle of population-based symmetry breaking to describe the pluri-/multipotency of stem cells. Conceptually, this would correspond to a finite cascade of subsequent pitchfork bifurcations occurring simultaneously on both branches of the existing IHSS solutions (Zakharova et al., 2013; van Kekem and Sterk, 2019).
Our results overall suggest that the cooperative behavior of growing populations enables the symmetry of a homogeneous population to be broken, as a pre-requisite for novel information regarding the different cellular types to emerge, whereas organization in the vicinity of this dynamical transition allows one to comprehensively capture how robustness and accuracy are generated during development.
Generic cell-cell communication system
The generic model from Figs. 1a to 3c and Supplementary Figs. 1d to 2i is described by Eq. (1). Here u and v are the two genetic markers coupled by mutual inhibition, while s is the secreted signaling molecule whose production is regulated by u; i is the single-cell index. In the single-cell case, $s_{i,\mathrm{ext}} = s_i$, as in Fig. 1a. For multiple cells, the system is distributed spatially on a regular two-dimensional lattice with no-flux boundary conditions. Four different communication ranges, R, of the secreted signaling molecule were considered: a globally connected network (all-to-all communication, $R = \infty$), a locally connected network (cells communicate only with direct neighbors on the lattice, $R = 1a$, where a is the lattice constant), a non-locally connected network (cells communicate with direct neighbors and cells two hops away on the lattice, $R = 2a$), and distance-based coupling on an irregular grid (cells communicate with other cells with probability $e^{-d^2/(2R^2)}$, where d is the cell-cell distance and $R = 1$ in this case). In these cases,

$$s_{i,\mathrm{ext}} = \frac{1}{|N(i)|+1} \sum_{j \in N(i) \cup \{i\}} s_j$$

is the external amount of signal perceived by cell i from its neighborhood N(i), which negatively regulates the production of u. This effectively creates a joint 3N-dimensional system, where N is the total number of cells.
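Since Eq. (1) itself did not survive extraction, the following is only a plausible reconstruction of the coupled toggle-switch dynamics with standard Hill-type kinetics; all functional forms and parameter values are assumptions made for illustration, not the published model:

```python
# Plausible sketch (assumed kinetics): u and v repress each other, the
# perceived external signal s_ext additionally represses u, and u drives
# production of s; shown for two globally coupled cells.
import numpy as np
from scipy.integrate import solve_ivp

alpha_u, alpha_v, alpha_s, beta = 2.5, 3.0, 2.0, 2.0  # illustrative values

def coupled_cells(t, y):
    # y = [u1, v1, s1, u2, v2, s2]
    u, v, s = y[0::3], y[1::3], y[2::3]
    s_ext = s.mean()                                   # average perceived signal
    du = alpha_u / (1 + v**beta) / (1 + s_ext**beta) - u
    dv = alpha_v / (1 + u**beta) - v
    ds = alpha_s * u**beta / (1 + u**beta) - s
    return np.ravel(np.column_stack((du, dv, ds)))

y0 = [0.5, 0.6, 0.1, 0.6, 0.5, 0.1]                    # slightly asymmetric start
sol = solve_ivp(coupled_cells, (0, 200), y0, method="LSODA")
print(sol.y[0, -1], sol.y[3, -1])                      # u1 vs. u2 at the end state
```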
For the case of non-identical cells (Fig. 1f), the α u parameter was uniformly varied between the cells in the range from −2% to 2% of its value.
For the stochastic simulations (Fig. 2, Fig. 3 and Supplementary Fig. 2), a stochastic differential equation model was constructed from Eq. (1) by adding a multiplicative noise term σX dW_t, where dW_t is the Brownian motion increment and X is the variable state. Unless otherwise stated, the noise intensity σ was set to 0.1. The model was solved with Δt = 0.01 using the Milstein method (Mil'shtein, 1974).
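For reference, a single Milstein update for the multiplicative-noise scheme described here might look as follows; the drift f below is a toy stand-in for the right-hand side of Eq. (1):

```python
# One-dimensional Milstein step for dX = f(X) dt + sigma * X dW.
import numpy as np

def milstein_step(x, f, sigma, dt, rng):
    dW = rng.normal(0.0, np.sqrt(dt))
    # For g(X) = sigma*X, the Milstein correction is 0.5*g*g'*(dW^2 - dt).
    return (x + f(x) * dt
              + sigma * x * dW
              + 0.5 * sigma**2 * x * (dW**2 - dt))

rng = np.random.default_rng(0)
x, dt, sigma = 1.0, 0.01, 0.1
f = lambda x: 2.0 / (1 + x**2) - x   # toy drift, not the actual Eq. (1)
for _ in range(10_000):
    x = milstein_step(x, f, sigma, dt, rng)
print(x)
```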
To discriminate between the multilineage-primed (mlp), u-positive (u+), and v-positive (v+) cell fates for a given realization, each marginal cell state vector (u_i, v_i) within the converged state of the system (IHSS or HSS) was individually categorized as one of the three fates, and the three-term ratio (proportions) of the realization was subsequently calculated. The reference mlp fate vector was pre-determined for a given parameter set, e.g. for a specific value of α_u, from the steady state of a 1-cell monostable system realization, since the bifurcation analysis demonstrated that the mlp HSS for a single cell and for coupled systems are equivalent. A marginal cell state of a deterministic realization was categorized as the mlp fate when its value fell within 1% of the pre-determined value, whereas cell states with a larger v-value were categorized as v+ and those with a smaller v-value as u+. Transient states in the stochastic realizations in Figs. 2c and 2d were categorized as mlp if they fell within 5% of the deterministic mlp state. End states of all stochastic realizations were allowed to converge to their deterministic attractor state in a noise-free fashion before categorization (with 1% std around the mlp fate). Finally, results from 10 repetitions were grouped according to matching proportions, and each proportion was plotted as a stacked sub-bar within a bar plot whose width corresponds to the number of repetitions in the group, i.e. the fraction of occurrence of that proportion in the simulations.
Estimating IHSS distributions as a function of the number of cells (N)
By analogy to Fig. 1e, the different branches of the IHSS (i.e. the proportions of cells in them) were estimated using the number of cells as a bifurcation parameter. For this, exhaustive scanning was performed to locate the different fixed-point attractors in phase space for each N. The scanning process involved 20 repeated executions with different noise intensities (varying from 0 to 0.3). Each repetition consisted of 30 alternating cycles of stochastic execution (for exploration) followed by deterministic execution (for convergence to an attractor), after which the reached state was tested for stability and recorded. For every distinctly detected attractor, the u+/v+/mlp proportion of cells was estimated, after which the average u-value was calculated and plotted for each of the branches (the u+ and v+ cells for the IHSS, or mlp cells for the HSS; color-coded, see Fig. 2a). Chimera-like states were omitted from the diagram for brevity.
Lineage tree generation
Generation of lineage trees was performed using stochastic simulations in which the system doubles in size at regular time intervals, starting from a single-cell system and growing to an 8×8-cell system. At every cell division the mother cell's steady state is passed on as the daughter cells' initial conditions. Cell divisions occur alternately along the horizontal and vertical axes of the grid, sequentially yielding lattices of 1×1, 1×2, 2×2, 2×4, 4×4, 4×8 and 8×8. Cellular states were categorized at every time instance to plot the single-cell temporal evolutions in the lineage trees (Figs. 2c to 2e). Cellular proportions in the system were then estimated from those values, and their temporal evolution is shown in the panels above the lineage trees.
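A minimal sketch of this division rule, showing only the state inheritance on the doubling grid (the stochastic integration between divisions is omitted); the array layout and values are illustrative:

```python
# Grid-doubling with state inheritance: daughters copy the mother's state.
import numpy as np

def divide(states, axis):
    """Duplicate every cell along the given grid axis (0: vertical, 1: horizontal)."""
    return np.repeat(states, 2, axis=axis)

grid = np.array([[[0.5, 0.6, 0.1]]])      # 1x1 grid, one cell with state (u, v, s)
for step in range(6):                     # 1x1 -> 1x2 -> 2x2 -> ... -> 8x8
    grid = divide(grid, axis=1 - step % 2)
print(grid.shape[:2])                     # (8, 8)
```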
In the cell fate separation case (Figs. 2d and 2e), the steady states of the cells at the end of the fourth cycle were categorized and the differentiated cells were then separated: u+ cells were given as seeds to a new lineage tree (1 × 2 grid), while v+ cells were seeds for a separate one (2 × 3 grid). Following this, multiple cell divisions were again performed and the cell proportions were estimated.
Multistability model on a single-cell/population level
Following (Jia et al., 2017), which demonstrated tristability on a single-cell level, we introduced cell-cell communication to achieve tristability on a population level (Eq. (3), Figs. 3d and 3e), where the shifted Hill function is used to capture the regulation of the production of X by Y:

$$H^{Y}_{X}(Y, \lambda_{Y,X}, n_{Y,X}, Y_{0,X}) = \frac{1 + \lambda_{Y,X}\,(Y/Y_{0,X})^{n_{Y,X}}}{1 + (Y/Y_{0,X})^{n_{Y,X}}}$$

Cells communicate via diffusive coupling of A (D_A = 0.5). Other parameters: λ_{A,A} = λ_{B,B} = 3, λ_{A,B} = λ_{B,A} = 0.1, A_{0,A} = B_{0,B} = 80, A_{0,B} = B_{0,A} = 20, n_{A,A} = n_{B,A} = n_{B,B} = n_{A,B} = 4, k_A = k_B = 0.1, g_b = 5. g_a = 4.035 was set to place the system in the tristability regime.
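The shifted Hill function above translates directly into code; a minimal sketch using the parameter values quoted in the text:

```python
# Direct implementation of the shifted Hill function defined above.
def shifted_hill(Y, lam, n, Y0):
    """H(Y) -> 1 as Y -> 0 and -> lam as Y -> infinity
    (activation if lam > 1, repression if lam < 1)."""
    r = (Y / Y0) ** n
    return (1 + lam * r) / (1 + r)

# Example: self-activation of A with lambda_{A,A} = 3, n = 4, A_{0,A} = 80.
print(shifted_hill(100.0, lam=3.0, n=4, Y0=80.0))
```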
Modeling of the Nanog-Gata6 differentiation system in the early embryo
Similarly to the generic u-v symmetry breaking system, the Nanog-Gata6 system was modeled with Eq. (4). Here N, G and F are Nanog, Gata6 and Fgf4, respectively. α_N = 5 places the system before the first bifurcation point (Fig. 4a), by analogy to the u-v system; α_G = α_v and α_F = α_s;

$$F_{i,\mathrm{ext}} = \frac{1}{|N(i)|+1} \sum_{j \in N(i) \cup \{i\}} F_j;$$

F_{inh,1/2} = 0.1 for the cases with external inhibition F_inh of Fgf4, and F_exo is the exogenous Fgf4 concentration. The other parameters were as in the u-v system. A locally connected network was used for the corresponding stochastic simulations, with a multiplicative noise term and σ = 0.5.
Paradigmatic model mimicking the vertebrate neurogenesis process
It has previously been demonstrated that the presence of time delays in models of lateral inhibition can result in significant oscillatory transients before patterned steady states are reached. The impact of local feedback loops in a model of lateral inhibition based on the Notch signaling pathway, elucidating the roles of intra- and intercellular delays in controlling the overall system behavior, has also been studied (Momiji and Monk, 2009). Here, we aim to understand whether a population-based pitchfork bifurcation can provide the dynamical background behind the observed symmetry-breaking phenomenon. Since our aim is to demonstrate the validity of this concept, we omit the molecular details of the Notch-pathway example and model a generic case where the gene expression dynamics in each cell is characterized by oscillatory behavior, whereas intercellular communication between the cells is, for simplicity, realized in a global manner. The dynamics of the system is described by Eq. (5). Here p and q are two genes that mutually inhibit each other's expression, and r controls the production of the signaling molecule, whose extracellular concentration is denoted as r_ext. This system has been demonstrated to exhibit synchronized oscillations in a population of communicating cells (Kuznetsov et al., 2004; Koseska et al., 2007). α_p = 2.95 places the system in the region where the limit cycle is stable (N = 2), and subsequently within the stability region of the IHSS for N = 4 (Fig. 4d). Other parameter values: α_q = 5, α_{p,r} = 1, α_r = 4, β = 2, γ = 2, δ = 2, ε = 0.01, d = 0.008, d_e = 1. The lineage tree was generated by stochastic simulations (executed in C) with an additive noise term of σ = 0.0008.
| 10,252.2 | 2019-03-16T00:00:00.000 | ["Biology", "Physics"] |
Development Study of Binding Agent in Diffusive Gradient in Thin Films (DGT) Technique for Absorption of Phosphate Compounds Using Nano-La₂O₃
The abundance of phosphate is a concern because it causes problems in aquatic ecosystems. The diffusive gradient in thin films (DGT) technique is a promising method for phosphate absorption because it can be used in situ. The DGT device consists of a membrane filter, a diffusive gel, and a binding gel. The binding agent in the binding gel makes the specific analyte bind to the binding gel. One binding agent that can be used for phosphate absorption is La₂O₃. The La₂O₃ binding gel was successfully synthesized, as proven by the similarity of the FTIR peaks of the diffusive gel and the binding gel; the typical La-O absorptions of the binding gel at 642 cm⁻¹ and 423 cm⁻¹ also confirm this. The La₂O₃ binding gel was made with N,N′-methylenebisacrylamide as a cross-linker and achieved an elution factor of 97.4%. DGT-La₂O₃ proved capable of adsorption for 72 hours, absorbing 1.91 × 10⁵ ng of phosphate, and absorbed phosphate optimally at pH 3 (1.93 × 10⁵ ng).
INTRODUCTION
Eutrophication has developed into one of the most discussed problems because its effects can damage the ecosystem function of water bodies. Eutrophication is a process of enrichment of soil and water by several nutrients, characterized by the excessive growth of algae due to the supply of nutrients to the aquatic system (Kitsiou & Karydis, 2011; Kobetičová & Černý, 2019). Eutrophication encourages the growth of cyanobacteria, resulting in poor water quality, oxygen depletion, and fish death in the waters (García, 2021). Excess phosphorus is one of the triggers for eutrophication. The use of phosphorus-containing fertilizers, along with soil erosion and wastewater discharges, delivers more phosphorus to watersheds than natural sources do, significantly altering the global phosphorus cycle. Rapid population growth and rising living standards are the main driving factors changing the global phosphorus cycle (Tao et al., 2019).
Excess phosphorus occurs due to anthropogenic activities and its bioavailability. The efficiency of phosphorus utilization is relatively low and phosphorus recovery remains weak, causing an almost one-way flow of phosphorus from mining to natural waters, where the excess degrades water quality (Alam et al., 2021). Phosphorus is trapped in bedrock, soil, and sediment. The conversion of unavailable forms to available forms, especially soluble orthophosphate, occurs through geochemical and biochemical reactions at various stages in the global phosphorus cycle (Ruttenberg, 2019). In general, phosphorus can be divided into inorganic and organic phosphorus; the main form of phosphorus in soil is inorganic phosphorus. Thus, a method is needed to monitor the presence of phosphorus in water bodies, one of which is the diffusive gradient in thin films (DGT) technique.
The diffusive gradient in thin films (DGT) technique is a new approach for adsorbing phosphorus in the environment. The DGT technique has been applied to the in situ measurement of dissolved phosphorus in natural waters. Phosphorus species can change during sample storage because of their dynamic interactions in natural water; this technique therefore allows in situ measurement of reactive phosphorus (Pichette et al., 2009). DGT has practical advantages because it can be used directly in the field, can identify multi-element samples, and provides time-integrated measurements (Yabuki et al., 2014). DGT measurements are calculated from the linear flux of solute to the DGT device and represent the time-averaged concentration at the device surface during deployment. DGT has been used to measure various labile species in soil, sediment, and water, depending on the type of binding gel used, with a binding agent specific to a particular analyte (Wang et al., 2016).
In recent years, various fields of study have shown great interest in applying nanotechnology, one of which is environmental remediation. Oxides and hydroxides based on rare earth metals have been extensively studied for removing pollutants and restoring water bodies. One rare earth metal with considerable potential for the recovery of water bodies is lanthanum. Lanthanum is relatively abundant, giving it great potential as a promising adsorbent for phosphate uptake (Zhi et al., 2020). La naturally has a strong affinity for phosphate, and La³⁺ ions can attract oxygen donor atoms from phosphates through the anion-ligand exchange process (Razanajatovo et al., 2021). In this research, a binding agent was developed for the diffusive gradient in thin films (DGT) technique for the adsorption of phosphate compounds using La₂O₃. This study aimed to modify the binding agent components of DGT devices using La₂O₃ and to test the La₂O₃ binding agent for phosphate uptake. This research provides information regarding the synthesis of DGT-La₂O₃, polyacrylamide polymerization, and the role of the N,N′-methylenebisacrylamide (MBA) crosslinker. It can be used to monitor phosphate concentrations in water bodies to prevent eutrophication and control water pollution.
Materials
The materials used in this work were lanthanum (
Instrumentations
The instrumentation used in this work comprised a Fourier Transform Infrared (FT-IR) spectrometer (Shimadzu IR Prestige 21), an X-Ray Powder Diffractometer (XRD, PANalytical), a Field Emission Scanning Electron Microscope with Energy Dispersive X-Ray Spectroscopy (FESEM-EDS, Jeol JIB-4610F), and an Ultraviolet-visible (UV-Vis) spectrophotometer (Shimadzu UV-2450). Characterization was carried out at the Departments of Chemistry and Physics, University of Indonesia, and the National Research and Innovation Agency (BRIN).
Fabrication of La₂O₃
As much as 2.5 g of starch was dissolved in 100 mL of water. Then 0.1 M lanthanum nitrate was added to the starch solution at a starch-to-lanthanum-nitrate volume ratio of 1:1. The resulting solution was stirred for 30 minutes, evaporated at 100 ºC, and calcined at 750 ºC for 2 hours. The obtained La₂O₃ was then characterized by FTIR, XRD, and FESEM-EDS.
Preparation of N,N′-Methylenebisacrylamide
0.3 g of solid N,N′-methylenebisacrylamide (MBA) was dissolved in 100 mL of demineralized water.
Fabrication of Diffusive Gel
This diffusive gel consisted of 1.9 mL of 40% acrylamide, 0.75 mL of 0.3% N,N′-methylenebisacrylamide, and 2.35 mL of demineralized water, to which 35 μL of ammonium persulfate solution and 12.5 μL of N,N,N′,N′-tetramethylethylenediamine (TEMED) were added. The mixture was stirred until homogeneous for 10-15 seconds and pipetted into a glass mould cleaned with 0.05 M HNO₃ solution, ensuring no bubbles formed. The solution in the mould was then heated at 42-46 ºC for 1 hour until a gel formed. The formed gel was cut using a DGT cutter with a diameter of 2.5 cm, washed with demineralized water, and soaked for 24 hours for hydration; the demineralized water was replaced 3-4 times during this period. The gel was then stored in 0.01 M NaNO₃ solution until use.
Fabrication of La₂O₃ Binding Gel
The La₂O₃ binding gel consisted of 2 g of La₂O₃, acrylamide, N,N′-methylenebisacrylamide, and demineralized water, to which ammonium persulfate solution and N,N,N′,N′-tetramethylethylenediamine (TEMED) were added. The mixture was stirred until homogeneous for 10-15 seconds and pipetted into a glass mould cleaned with 0.05 M HNO₃ solution, ensuring no bubbles formed. The solution in the mould was then heated at 42-46 ºC for 1 hour until a gel (not liquid) formed. The formed gel was cut using a DGT cutter with a diameter of 2.5 cm, washed with demineralized water, and soaked for 24 hours for hydration; the demineralized water was replaced 3-4 times during this period. The gel was then stored in 0.01 M NaNO₃ solution until use.
La₂O₃ Binding Gel Elution Factor Test
The La₂O₃ binding gel was soaked in 20 mL of 10 ppm KH₂PO₄ solution for 24 hours and then washed with demineralized water. The binding gel was then eluted for 24 hours with 10 mL of NaOH solution at concentrations of 0.5, 1.0, and 1.5 M to determine the eluted concentration.
Installation of DGT Devices
DGT devices were washed and rinsed with demineralized water. The filter membrane was soaked in demineralized water for 5 minutes. The binding gel was placed first, followed by the diffusive gel and the membrane filter. The DGT device was then closed tightly and firmly.
Diffusion Coefficient Test
The diffusion coefficient test was carried out by immersing the DGT-La₂O₃ device in 30 mL of 10 ppm KH₂PO₄ solution. The solution was stirred for 2, 4, 6, 8, 12, and 24 hours of DGT immersion.
Soaking Time Variation Test
The variation in immersion time was tested by immersing the DGT-La₂O₃ device in 30 mL of 10 ppm KH₂PO₄ solution. The solution was stirred for 2, 4, 6, 8, 12, 24, 48, and 72 hours of DGT immersion.
Phosphate Concentration Variation Test
The concentration variation test was carried out by immersing the DGT-La₂O₃ device in 30 mL of KH₂PO₄ solution. The concentration of KH₂PO₄ was varied over 0.5, 1, 2, 3, 4, 5, and 10 ppm, with a soaking time of 24 hours under stirring.
Phosphate Solution pH Variation Test
The pH variation test was carried out by immersing the DGT-La₂O₃ device in 25 mL of 10 ppm KH₂PO₄ solution at pH 3, 5, 7, 9, and 11 for 24 hours with stirring. The pH was adjusted with 0.1 M NaOH and 0.1 M HCl.
Sample Analysis
Phosphate concentration was determined from the elution solution by the molybdenum blue method using a UV-Vis spectrophotometer; the absorbance of the formed phosphomolybdenum blue complex was measured.
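A minimal sketch of the implied calibration step, assuming a linear Beer-Lambert response; the standard concentrations and absorbances below are hypothetical, not measured values from this study.

```python
import numpy as np

# Hypothetical calibration standards (ppm PO4-P) and their absorbances;
# the numbers are illustrative, not measured values from this study.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
absorbance = np.array([0.002, 0.061, 0.118, 0.239, 0.475])

slope, intercept = np.polyfit(conc, absorbance, 1)   # Beer-Lambert: A = m*C + b

def to_concentration(A):
    """Convert a measured absorbance into concentration via the calibration."""
    return (A - intercept) / slope

print(f"slope = {slope:.4f} /ppm, intercept = {intercept:.4f}")
print(f"A = 0.150 -> {to_concentration(0.150):.2f} ppm")
```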
La₂O₃
La₂O₃ was fabricated using the method of Moothedan and Sherly (2016). La₂O₃ synthesis was carried out by reacting starch with 0.1 M lanthanum nitrate at a volume ratio of 1:1. Starch was chosen as the capping agent owing to its environmental friendliness, non-toxicity, and virtually unlimited availability in nature (Moothedan & Sherly, 2016). When the capping agent surrounds the lanthanum nitrate core, it inhibits its growth, thereby preventing agglomeration. When the concentration of the capping agent is insufficient to cover the core completely, crystal growth is only partially inhibited; hence, the resulting particles are large (Kabir et al., 2019).
The mixture was stirred for 30 minutes and evaporated at 100 ºC. After that, the mixture was calcined at 750 ºC. The calcination temperature affects La₂O₃ crystal growth: the average particle size increases with increasing calcination temperature, growing relatively slowly at low temperature and rapidly once the temperature exceeds 750 ºC. Pure La₂O₃ is obtained when the calcination temperature reaches 750 ºC (Wang et al., 2006). The XRD pattern is shown in Figure 2, where peaks appear at 2θ = 15.65º, 27.30º, 27.98º, 39.50º, and 48.65º. These peaks agree with previous literature, indicating that La₂O₃ was successfully synthesized (Sulaiman et al., 2018), and also match JCPDS card no. 04-0856 for La₂O₃ (Mustofa et al., 2020). Furthermore, the crystal size of La₂O₃ was calculated by the Debye-Scherrer equation (Equation (1)).
Equation (1) is the Debye-Scherrer equation, D = 0.9λ / (FWHM · cos θ), where D is the crystal size, λ is the X-ray wavelength (1.54 Å), FWHM is the full width at half maximum, and θ is the peak position (Ravi et al., 2019). Based on the calculation results, the average crystal size of La₂O₃ is 28.51 nm.
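A small helper reproducing this calculation; only the wavelength and the 0.9 shape factor follow the text, while the peak width used in the example is hypothetical.

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_A=1.54, K=0.9):
    """Crystallite size D = K * lambda / (FWHM * cos(theta)), with the
    FWHM converted to radians and theta = two_theta / 2."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * wavelength_A / (beta * np.cos(theta))   # in Angstrom

# The 2-theta value is one of the observed peaks; the FWHM is hypothetical.
print(f"D = {scherrer_size(27.98, 0.30) / 10.0:.1f} nm")
```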
La₂O₃ Binding Gel and Diffusive Gel
The La₂O₃ binding gel and the diffusive gel were characterized by FT-IR. Based on Figure 4, the two gels show peaks at 1119 cm⁻¹ and 1123 cm⁻¹, which are absorptions from C-N stretching. Absorption from N-H stretching appears at 3335 cm⁻¹ and 3197 cm⁻¹, and N-H bending absorption appears at 1664 cm⁻¹ and 1610 cm⁻¹ for the La₂O₃ binding gel and at 1650 cm⁻¹ and 1601 cm⁻¹ for the diffusive gel. C-H stretching absorption appears at 2945 cm⁻¹ and 2950 cm⁻¹, and C-H bending at 1451 cm⁻¹ and 1456 cm⁻¹ (Li et al., 2020; Sabbagh & Idayu, 2017). The characteristic absorptions that most distinguish the La₂O₃ binding gel from the diffusive gel are at 642 cm⁻¹, indicating La-O stretching, and 423 cm⁻¹, indicating La-O bending (Li et al., 2020).
Elution Factor Test Results
The elution factor is needed for the desorption of phosphate that the material has adsorbed. Eluent optimization was carried out with NaOH at concentrations of 0.5, 1.0, and 1.5 M for 24 hours. The optimum eluent concentration for eluting the La₂O₃ binding gel was 1 M NaOH: based on Table 1, it achieved an elution factor of 97.4% and was therefore used as the eluent to desorb phosphate.
Diffusion Coefficient Test Results
The diffusion coefficient test was carried out by immersing the DGT device in 30 mL of 10 ppm KH₂PO₄ solution with a contact time of 2 to 24 hours, with stirring at room temperature. In Figure 5, the resulting regression value is 0.98148, with a slope of 2.1636. This slope was used in calculating the diffusion coefficient of the La₂O₃ binding gel, which is 1.4542 × 10⁻⁵ cm² s⁻¹. The diffusion coefficient governs the diffusion rate of the analyte species: a larger diffusion coefficient results in a greater diffusion rate (Kuntari et al., 2019). The resulting diffusion coefficient is greater than that of the previous experiment by Zhang et al. (1998), which was 7.39 × 10⁻⁶ cm² s⁻¹. This is due to the difference in the crosslinkers used in gel manufacture, which gives the diffusive gel a different porosity.
The MBA cross-linker was used in this work, while the previous experiment used agarose (Zhang et al., 1998).
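For reference, the standard DGT relation C_DGT = M·Δg/(D·A·t) can be sketched as below; the diffusive-layer thickness and window area are typical DGT values assumed for illustration, not values reported in this work.

```python
def c_dgt(mass_ng, t_s, D_cm2_s, dg_cm=0.08, area_cm2=4.91):
    """Time-averaged labile concentration from DGT: C = M * dg / (D * A * t).
    dg (diffusive layer thickness) and A (window area) are typical DGT
    values assumed for illustration, not values reported in this work."""
    return mass_ng * dg_cm / (D_cm2_s * area_cm2 * t_s)   # ng/mL (= ng/cm^3)

# e.g. the 72 h accumulation with the fitted diffusion coefficient:
print(f"C_DGT = {c_dgt(1.91e5, 72 * 3600, 1.4542e-5):.0f} ng/mL")
```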
DGT-La₂O₃ Ability in Phosphate Absorption
The ability of DGT-La₂O₃ was tested by immersing it in 10 ppm KH₂PO₄ at pH 7.4 for 2 to 72 hours with stirring. As shown in Figure 6a, the absorbed phosphate increases with time and saturates from 24 to 72 hours; therefore, 24 hours was chosen for the pH and concentration variation tests. The highest amount of phosphate absorbed, at 72 hours, was 1.91 × 10⁵ ng. DGT-La₂O₃ was also tested by varying the pH of the KH₂PO₄ solution from 3 to 11 to examine its absorption ability (Figure 6b). Absorption by DGT-La₂O₃ decreased with increasing pH. This is because, at acidic pH, the dominant orthophosphate species are H₃PO₄ and H₂PO₄⁻; the excess H⁺ protonates the hydroxyl groups of La, making it easier for the metal to bind phosphate. At alkaline pH, however, the dominant orthophosphate species is PO₄³⁻, which must compete with OH⁻ to bind to the metal (Zhang et al., 2022). The pH condition giving the best absorption was pH 3, with 1.93 × 10⁵ ng of phosphate absorbed.
The ability of DGT-La₂O₃ was also tested by varying the concentration of the KH₂PO₄ solution from 0.5 to 10 ppm at pH 7.4. As shown in Figure 6c, as the concentration increases, the amount of phosphate absorbed by DGT-La₂O₃ increases, reaching 2.29 × 10⁵ ng. A higher phosphate concentration increases the mass of phosphate that diffuses to and binds with the binding gel.
The underlying mechanism is that, at acidic pH, H⁺ ions in solution protonate the hydroxide groups on the surface, which then attract the negatively charged phosphate species (H₂PO₄⁻ and HPO₄²⁻) and increase the phosphate absorption capability. At alkaline pH, the adsorbent surface is rich in negative charges due to OH⁻ in solution, resulting in electrostatic repulsion of HPO₄²⁻ and PO₄³⁻ and thereby reducing the phosphate absorption capability (Zhang et al., 2022).
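This speciation argument can be checked numerically from the well-known pKa values of phosphoric acid; the sketch below is illustrative and independent of the paper's data.

```python
import numpy as np

PKA = np.array([2.15, 7.20, 12.35])   # pKa values of phosphoric acid

def phosphate_fractions(pH):
    """Equilibrium fractions of H3PO4, H2PO4-, HPO4^2-, and PO4^3-."""
    h = 10.0 ** (-pH)
    k1, k2, k3 = 10.0 ** (-PKA)
    terms = np.array([h ** 3, h ** 2 * k1, h * k1 * k2, k1 * k2 * k3])
    return terms / terms.sum()

for pH in (3, 5, 7, 9, 11):
    print(pH, [f"{f:.2f}" for f in phosphate_fractions(pH)])
```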
CONCLUSIONS
La₂O₃ was successfully synthesized, as confirmed by FT-IR, XRD, and FESEM-EDS. La₂O₃ can be used as a binding agent in the DGT technique as an absorbent for phosphate compounds. DGT-La₂O₃ could absorb phosphate for 72 hours, accumulating 1.91 × 10⁵ ng of phosphate. DGT-La₂O₃ could also absorb phosphate at various concentrations, accumulating 2.29 × 10⁵ ng at a concentration of 10 ppm, and has an optimum pH of 3, with 1.93 × 10⁵ ng of phosphate accumulated. | 4,061.4 | 2022-11-30T00:00:00.000 | [
"Chemistry"
] |
A Novel Modulator of STIM2-Dependent Store-Operated Ca2+ Channel Activity
Store-operated Ca2+ entry is one of the main pathways of calcium influx into non-excitable cells, which entails the initiation of many intracellular processes. The endoplasmic reticulum Ca2+ sensors STIM1 and STIM2 are the key components of store-operated Ca2+ entry in mammalian cells. Under physiological conditions, STIM proteins are responsible for store-operated Ca2+ entry activation. The STIM1 and STIM2 proteins differ in their potency for activating different store-operated channels. At the moment, there are no selective modulators of the STIM protein activity. We screened a library of small molecules and found the 4-MPTC compound, which selectively inhibited STIM2-dependent store-operated Ca2+ entry (IC50 = 1 μM) and had almost no effect on the STIM1-dependent activation of store-operated channels.
INTRODUCTION
An increase in the concentration of cytoplasmic Ca²⁺ ions is one of the common cellular responses to extracellular stimulation of membrane receptors by physiologically active substances that trigger a wide range of intracellular cascades. Under physiological conditions, the intracellular Ca²⁺ response to an agonist includes not only entry of extracellular Ca²⁺ into the cell, but also depletion of the intracellular Ca²⁺ stores located in the endoplasmic reticulum (ER) [1]. Plasma membrane channel-mediated Ca²⁺ entry into the cell in response to the depletion of intracellular Ca²⁺ stores, or store-operated Ca²⁺ entry [2], provides a significant part of the Ca²⁺ ion influx into the cell. The entry is induced by STIM proteins (STIM1 and STIM2), which are Ca²⁺ sensors in the ER lumen. The STIM1 protein, which is the main activator of store-operated Ca²⁺ entry, was the first to be characterized [3,4]. The STIM1 and STIM2 proteins differ in their affinity for Ca²⁺ ions and their ability to interact with plasma membrane channels [5]. STIM2 is more sensitive to small changes in the concentration of stored Ca²⁺ and is a weaker activator of store-operated Ca²⁺ entry than STIM1. STIM1 is most likely responsible for the cellular Ca²⁺ response to an extracellular signal, while STIM2 regulates the basal levels of cytosolic and stored Ca²⁺ [6]. In addition, STIM2 facilitates STIM1 transition to the active state [7]. Under physiological conditions, STIM1 and STIM2 activate various store-operated channels in the cell [8], which are formed by proteins belonging to the Orai [9,10] and TRP [11][12][13] families. STIM proteins are involved in a wide range of pathologies. For instance, a long-term increase in the neuronal Ca²⁺ concentration, which is caused by an enhanced activity of STIM proteins and leads to cell death, is observed in Huntington's disease [14,15], Alzheimer's disease [16,17], cerebral ischemia [18], and traumatic brain injury [19,20]. Changes in STIM expression levels are typical for several breast cancers [21] and colon carcinoma [22]. Thus, modulating the activity of STIM proteins, in particular decreasing STIM2 activity, may have therapeutic potential. In basic research, a STIM2 activity modulator would be an essential tool for distinguishing between STIM1- and STIM2-mediated signaling pathways, because such pharmacological agents are currently unavailable.
Researchers have actively used a wide range of store-operated Ca²⁺ entry inhibitors. Most of these inhibitors modulate the activity of store-operated Ca²⁺ channels. However, these compounds are often poorly characterized and have more than one target. One of the most commonly used compounds, 2-aminoethoxydiphenyl borate (2-APB), was first characterized as a blocker of IP₃-induced Ca²⁺ release [23]. It is now widely used as a store-operated Ca²⁺ entry inhibitor at concentrations exceeding 50 μM. In addition, 2-APB, at a concentration of 5 μM, can potentiate store-operated entry [24]. The mechanism of 2-APB action is not fully understood; this compound is known to have several targets and, in particular, to exert a modulatory effect on the activity of various channels, e.g., TRPV [25,26] and Orai3 [27] channels. The 2-APB compound also enhances non-specific Ca²⁺ leak from the ER lumen [28].
When ER Ca²⁺ stores are filled, STIM proteins are in an inactive conformation stabilized by the interaction between the CC1 (Coiled-Coil 1) and SOAR (STIM-Orai Activating Region) domains. Following Ca²⁺ store depletion, STIM proteins undergo multimerization, change their conformation, and expose the SOAR domain for interaction with plasma membrane channels [29]. The 2-APB compound, at concentrations of about 10 μM, is known to induce store-operated Ca²⁺ entry by transforming STIM2 into its active conformation [30]. On the contrary, 2-APB at a higher concentration (50 μM) stabilizes the inactive STIM1 conformation by enhancing the interaction between the CC1 and SOAR domains; it thereby inhibits the interaction of the SOAR domain with Orai1 channels and the activation of these channels. Interestingly, increased Orai1 expression partially reverses this action [31].
Thus, 2-APB directly interacts with STIM proteins and provides a good basis for the search for a more selective modulator of store-operated Ca²⁺ entry. In this work, we tested a library of 250 chemical compounds, received from InterBioScreen Ltd., possessing chemical structures similar to that of 2-APB, in order to identify a selective modulator of STIM2 activity. The 4-MPTC compound was found to inhibit STIM2-dependent Ca²⁺ entry (IC₅₀ = 1 μM) but had almost no effect on the STIM1-mediated mechanism of store-operated channel activation. The other 249 compounds from the library had divergent, non-selective effects.
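A hedged sketch of how such an IC₅₀ would typically be estimated from concentration-response data; the data points below are hypothetical placeholders, not the screening results.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(c, top, bottom, ic50, h):
    """Four-parameter logistic (Hill) inhibition curve."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** h)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # uM
resp = np.array([99.0, 97.0, 92.0, 80.0, 52.0, 25.0, 12.0, 8.0])   # % of control

popt, _ = curve_fit(hill_inhibition, conc, resp, p0=[100.0, 0.0, 1.0, 1.0])
print(f"fitted IC50 = {popt[2]:.2f} uM")
```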
Fluorescence analysis
Changes in the intracellular Ca²⁺ concentration were measured using the Fluo-4 AM calcium indicator (Thermo Fisher Scientific, USA). The cells were plated into 96-well culture plates 48 h prior to the analysis. The cells were first incubated in a HBSS solution (2 mM CaCl₂, 130 mM NaCl, 25 mM KCl, 1.
Electrophoresis and immunoblotting
The cells were grown in 60-mm Petri dishes and then lysed by adding a protease inhibitor cocktail. Proteins were separated by 8% denaturing PAGE. The proteins were transferred to a nitrocellulose membrane using a semi-dry transfer unit (Hoefer Pharmacia Biotech., Germany). Primary antibodies to STIM1 (Cell Signaling #4917, USA), STIM2 (Cell Signaling #5668, USA), and α-tubulin (Sigma-Aldrich #T6074, USA) were diluted at a ratio of 1 : 1000. Next, secondary anti-mouse IgG antibodies (Sigma-Aldrich #A0168, USA) against α-tubulin and anti-rabbit IgG antibodies (Sigma-Aldrich #A0545, USA) against STIM1 and STIM2 were used. Blots were visualized on a BioRad Cell Imaging System (Bio-Rad Laboratories, Inc., USA).
Low-molecular-weight compounds for screening, including 4-MPTC, were kindly provided by InterBioScreen Ltd. (ibscreen.com) in dry form. The compounds were dissolved in DMSO to a final concentration of 10 mM.
Statistical analysis
Statistical analysis was performed using the Origin 8 software. The results of fluorescence measurements were checked for normality using Fisher's test. Data groups were compared using the Bonferroni test. Statistically significant differences are denoted in the figures as follows: * p < 0.05, ** p < 0.01, *** p < 0.001; n.s., not statistically significant.
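As a minimal stand-in for the Origin workflow, pairwise Welch t-tests with a Bonferroni correction and the same star notation can be sketched as follows; the data are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

def compare_groups(groups, labels, alpha=0.05):
    """Pairwise Welch t-tests with Bonferroni correction, annotated with
    the same star notation used in the figures."""
    pairs = [(i, j) for i in range(len(groups)) for j in range(i + 1, len(groups))]
    for i, j in pairs:
        _, p = stats.ttest_ind(groups[i], groups[j], equal_var=False)
        p_adj = min(p * len(pairs), 1.0)          # Bonferroni adjustment
        stars = ("***" if p_adj < 0.001 else "**" if p_adj < 0.01
                 else "*" if p_adj < alpha else "n.s.")
        print(f"{labels[i]} vs {labels[j]}: p_adj = {p_adj:.3g} ({stars})")

rng = np.random.default_rng(0)
compare_groups([rng.normal(1.0, 0.2, 12), rng.normal(0.6, 0.2, 12)],
               ["DMSO", "4-MPTC"])
```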
RESULTS AND DISCUSSION
In order to search for low-molecular-weight compounds that modulate the activity of STIM2 proteins, we used a model cell line derived from HEK293 cells stably expressing exogenous STIM2 and Orai3 proteins (the STIM2Orai3 cell line) (Fig. 1A). The effect of the test compounds on the amplitude of the cellular Ca²⁺ signal in response to the depletion of intracellular Ca²⁺ stores was recorded using the Fluo-4 AM calcium indicator. Intracellular Ca²⁺ stores were depleted by adding 1 μM thapsigargin (Tg), a selective inhibitor of the ER Ca²⁺ pump, to the extracellular solution. At the first stage, the effect of the library of 2-APB analogs on the Tg-induced Ca²⁺ response was tested. For this purpose, the cells were incubated in HBSS solutions containing one of the 250 test compounds (at a concentration of 100 μM) for 30 min prior to starting the experiments. Next, the amplitude of the Ca²⁺ response to the addition of 1 μM Tg was assessed. As a result of library screening, we selected 4-MPTC (Fig. 1C), the compound that most strongly affected the Tg-induced Ca²⁺ response in STIM2Orai3 cells: the Ca²⁺ response was inhibited by 39 ± 3% compared to that in the cells incubated in a solution supplemented with 1% DMSO (Fig. 2A). Since 4-MPTC significantly inhibits the Tg-induced Ca²⁺ response in cells with increased STIM2 and Orai3 levels, we may suggest that 4-MPTC modulates the activity of these proteins. A direct action of 4-MPTC on Orai3 is plausible given that 2-APB can activate the Orai3 channel [27]. To test the effect of 4-MPTC on Orai3 channels, HEK293 cells with Orai3 knockout (the Orai3 KO cell line) were used. Incubation of Orai3 KO cells with 4-MPTC changed the shape of the Tg-induced Ca²⁺ response and decreased its amplitude by 12 ± 3% (Fig. 2B). Furthermore, incubation of HEK293 cells expressing exogenous STIM1 and Orai3 proteins (the STIM1Orai3 cell line) with 4-MPTC did not inhibit the amplitude of the Tg-induced Ca²⁺ response (Fig. 2B) and, therefore, did not decrease the activity of the Orai3 channels. Hence, the Orai3 protein is not a selective target for 4-MPTC.
The activity of store-operated channels in a cell is known to be modulated by both the STIM1 and STIM2 proteins [8]. The predominant pathway of store-operated entry activation can be modulated through either the STIM1 protein or the STIM2 protein by changing their expression levels. HEK293 cells expressing exogenous STIM1 and Orai3 proteins were used to test the effect of 4-MPTC on STIM1.
As mentioned above, incubation of STIM1Orai3 cells with 4-MPTC changes the shape of the Tg-induced Ca²⁺ response without decreasing its amplitude (Fig. 2B). Since 4-MPTC significantly reduced the Ca²⁺ response amplitude but did not alter the curve's shape in STIM2Orai3 cells (Fig. 2A), we may suggest that this compound affects the pathway of store-operated calcium entry activation through STIM2, but not through the STIM1 protein. A change in the curve's shape for the Orai3 KO and STIM1Orai3 cell lines is quite typical and reflects a decrease in the rate of the Ca²⁺ response. Since the endogenous STIM2 protein is present in Orai3 KO and STIM1Orai3 cells (Fig. 1B), 4-MPTC can reduce its activity and, thereby, change the dynamics of both the release of Ca²⁺ from the store into the cytoplasm and the entry of extracellular Ca²⁺ ions. Knockdown of STIM2 using short interfering RNAs has a similar effect on the Ca²⁺ response: it decreases Ca²⁺ release from the store [33] and subsequent Ca²⁺ entry [4,34]. Cell lines overexpressing STIM proteins (STIM1Orai3 and STIM2Orai3) contain endogenous STIM1 and STIM2 (Fig. 1A,B), which complicates data interpretation. Therefore, we further used STIM1-knockout (the STIM1 KO cell line) and STIM2-knockout cells (the STIM2 KO cell line), which are devoid of this drawback (Fig. 1A,B). When STIM1 expression is completely suppressed, the STIM2 protein becomes the key and only activator of store-operated Ca²⁺ entry [4]. Pre-incubation of STIM1 KO cells with 4-MPTC decreased the Tg-induced Ca²⁺ response by 57 ± 8% compared to that in control cells (incubated with 1% DMSO) (Fig. 3A). It should be noted that 4-MPTC inhibits store-operated Ca²⁺ entry more effectively under these conditions: the Tg-induced Ca²⁺ response was inhibited by 57% in STIM1 KO cells but by only 39% in STIM2Orai3 cells. A significant change in the shape of the Tg-induced response is observed after incubation with 4-MPTC of STIM2-knockout cells, in which the STIM1 protein is the only activator of store-operated Ca²⁺ entry. The calcium concentration increases more slowly in these cells than in the control cells, with the maximum Ca²⁺ response amplitude being 61 ± 5% higher than in the control (Fig. 3B). 4-MPTC was thus experimentally demonstrated to act divergently in the STIM1 KO and STIM2 KO cell lines: it inhibits the Ca²⁺ response through the STIM2-dependent pathway and enhances it through the STIM1 pathway. The selected compound, 4-MPTC, therefore enables differentiation between the pathways activating store-operated Ca²⁺ entry through the different STIM proteins; however, its mechanism of action requires further clarification.
Thus, given our findings, we may conclude that the use of 4-MPTC in cell lines expressing predominantly the STIM2 protein (STIM1 KO and STIM2Orai3) significantly inhibits the amplitude of the Tg-induced Ca²⁺ response, while the use of 4-MPTC in cell lines producing predominantly the STIM1 protein (STIM2 KO, STIM1Orai3), on the contrary, changes the shape of the Ca²⁺ response curve without decreasing its amplitude. Thus, 4-MPTC selectively inhibits store-operated Ca²⁺ entry via the STIM2-mediated pathway, but not the STIM1-mediated pathway. Although 2-APB is widely used as a store-operated Ca²⁺ entry inhibitor, it does not appear to inhibit store-operated Ca²⁺ entry selectively and also has a divergent concentration-dependent effect. 2-APB derivatives have been investigated in the search for an inhibitor lacking these disadvantages [35][36][37][38][39][40][41]. Most of the identified compounds inhibit store-operated Ca²⁺ entry at lower concentrations than 2-APB and are, at the same time, unable to activate Ca²⁺ entry at certain concentrations; in other words, they have better inhibitory properties than the parent compound. More attention in the search for new inhibitors of store-operated Ca²⁺ entry has been paid to the STIM1-dependent pathway of activation, while the STIM2-mediated pathway has often remained unexplored. For example, MDA-MB-231 cells, in which STIM1 and Orai1 proteins play a key role in store-operated Ca²⁺ entry, as well as HEK293 cells expressing STIM1 and Orai-family proteins, have been used as model cell lines in experiments [42,43]. A study of the compounds DPB163-AE and DPB162-AE demonstrated that they interact differently with STIM1 and STIM2 but eventually inhibit store-operated Ca²⁺ entry through both proteins [37]. The 4-MPTC compound identified in our study has an inhibitory effect on the STIM2-mediated pathway and does not inhibit Ca²⁺ entry through the STIM1-dependent pathway. | 3,214.4 | 2021-01-01T00:00:00.000 | [
"Medicine",
"Chemistry"
] |
Modeling and test of flywheel vibration isolation system for space telescope
Focusing on the flywheel micro-vibration isolator of a space telescope, the relationship between the input and output (I-O) disturbance force and velocity vectors is described by the characteristic transfer matrices of the subsystems of the flywheel vibration isolation system. The elastic-support coupled vibration transfer matrix of the vibration isolator is derived, and the vibration transfer characteristics of the isolation system are studied. A dynamic model of the three-degree coupled vibration isolation system, comprising the flywheel micro-vibration excitation, multiple elastic supports, and the base structure, is established based on the admittance method and subsystem partitioning. Model simulation and test results for the flywheel vibration isolation system show that the two spectra are basically consistent in frequency components, the form of the whirl frequency curves, and the amplitude variation, which indicates that the key factors of the vibration characteristics of the flywheel vibration isolation system are captured accurately and that the theoretical analysis is correct. The sub-structure analysis method effectively avoids the complexity of solving for the state vectors at the sub-structure coupling interfaces. The elastic-support coupled vibration transfer matrix can solve the problem of sub-structure integration and merging, and of integrated modeling and analysis of active and passive support systems.
Introduction
The attitude control actuator is a core technology of precision, high-stability space telescopes [1,2]. As the attitude control and stabilization equipment of a space telescope, the flywheel generates additional disturbance forces and torques when spinning on orbit, which affect the accuracy, stability, and imaging quality of the telescope [3,4].
The influence of micro-vibration on the payload is complex, involving the structure, control, and optical systems. Integrated modeling is an effective analysis method: on the basis of models of the structure, control, and payload subsystems, and according to the physical connections along the micro-vibration transmission path, the subsystem models are integrated, finally forming a system-level dynamic I-O mathematical model that fully reflects the influence of the various coupling effects on the key performance indicators of the spacecraft. Many research institutes worldwide have studied integrated modeling. NASA established the Integrated Modeling Environment [5], the Jet Propulsion Laboratory established the Integrated Modeling of Optical Systems [6], and the Massachusetts Institute of Technology established Disturbance Optics Controls Structures [7]. These provide system-level comprehensive performance evaluation and error analysis methods, which have been successfully applied to the development of the high-resolution space telescopes JWST [8], SIM [9], and TPF [10]. Luo Q., Li D., and Zhou W. addressed the dynamic modeling and analysis of the micro-vibration isolation of flywheel assemblies [11]. Yu J., Yamaura H., Oishi T., et al. focused on vibrations caused by the loading torque of printing media delivered into machinery, modeling the system as a four-roller belt with a stepper motor based on existing apparatus [12]. By considering the flywheel and the platform as an integral system with gyroscopic effects, Wei Z., Li D., Luo Q., et al. developed an equivalent dynamic model verified through eigenvalue and frequency response analysis [13].
The above literature on flywheel micro-vibration mainly concerns integrated modeling; modeling of the transmission characteristics of the flywheel vibration isolator has not been carried out. In this paper, a dynamic model of the three-degree coupled vibration isolation system, comprising the flywheel micro-vibration excitation, multiple elastic supports, and the base structure, is established based on the admittance method and subsystem partitioning. The relationship between the I-O disturbance force and velocity vectors is described by the characteristic transfer matrices of the flywheel vibration isolation subsystems.
According to the structural parameters and micro-vibration characteristics of a certain type of flywheel, the model was simulated to obtain the output disturbance waterfall plot. The waterfall plot of the flywheel vibration isolation system was also obtained by testing it with a multi-component force measurement device. The results indicate that the key factors of the vibration characteristics of the flywheel vibration isolation system are captured accurately and that the theoretical analysis is correct.
Flywheel vibration isolation system model
The flywheel vibration isolator adopts a four-point, symmetric radial arrangement, with the isolation devices arranged in a radiating pattern, as shown in Fig. 1. At the coupling interfaces, the flywheel isolation system is divided into three subsystems: the source mass A, the vibration isolation support B, and the supporting base C. Each subsystem represents one or more rigid or distributed-parameter components of the total system. A rigid element has six degrees of freedom, and a distributed-parameter structure is allowed more than one degree of freedom. According to the direction of energy flow, the force and velocity vectors at the system I-O terminals are defined, and the overall characteristic transfer matrix model of the system is established, as shown in Fig. 2. Each transfer structure represents a subsystem of the whole system.
3.1. Flywheel sub-structure disturbance characteristic transfer matrix
The natural frequency of the flywheel is much larger than the disturbance excitation frequency, so the flywheel can be regarded as a general three-dimensional rigid structure. F_e denotes the generalized disturbance force vector acting at the flywheel centroid, and v_e the generalized velocity response vector of the flywheel centroid. The generalized force and velocity vectors of the flywheel disturbance output, connected to the subsystem through n (n = 4) coupling points, are F_a and v_a: $$F_e = [F_{ex}, F_{ey}, F_{ez}, T_{ex}, T_{ey}, T_{ez}]^T,\quad v_e = [v_{ex}, v_{ey}, v_{ez}, \dot\theta_{ex}, \dot\theta_{ey}, \dot\theta_{ez}]^T,$$ $$F_a = [F_{a1}, F_{a2}, \dots, F_{an}]^T,\quad v_a = [v_{a1}, v_{a2}, \dots, v_{an}]^T.$$
According to the laws of dynamics and the structural geometric relations, the translational and rotational equations of motion of the flywheel micro-vibration source under generalized harmonic excitation [14] are expressed in admittance-matrix form as Eq. (2) [15]. The characteristic transfer matrix α_ij is derived from the admittance matrix and is expressed as Eq. (10).
Vibration isolator characteristics
The flywheel vibration isolator is modeled [16] as a cylindrical continuous elastic rod with density ρ and damping η. A dynamic model of the multi-input, multi-output structure is established, as shown in Fig. 3.
In Fig. 3, the generalized force and velocity response vectors of the I-O terminals are defined accordingly. The transfer matrix equation for the dynamic characteristics of the vibration isolation system is Eq. (12), where B_ijk is the three-dimensional coupled vibration isolator transfer matrix, which can be determined from the structural generalized admittance frequency response functions [17] based on modal analysis [18,19]. The detailed derivation is as follows.
As shown in Fig. 3, the I-O forces and moments are F_by1, F_bz1, T_bx1 and F_by2, F_bz2, T_bx2. The transfer admittance frequency response functions are obtained by the modal analysis method (Eq. (20)), where λ_f is the structural bending wave number, λ_1 is the longitudinal complex wave number, h is the supporting structure height, A is the section area, I is the moment of inertia, and E_σ* is the elastic modulus.
Considering the coupling relationship between the I-O of each motion component, the vibration isolation support characteristic transfer matrix B̄_ij for the symmetry plane Oyz is derived as: $$\bar{B}_{ij} = \begin{bmatrix} -T_{21}^{-1} T_{22} & T_{21}^{-1} \\ T_{12} - T_{11} T_{21}^{-1} T_{22} & T_{11} T_{21}^{-1} \end{bmatrix}$$ In the same way, the state vector transfer matrix B̂_ij of the Oxz plane can be determined. The flywheel is installed on a honeycomb panel of the telescope structure; for convenience of study, the honeycomb panel is modeled as a rectangular plate, and the admittance matrix equation of its structural characteristics is described by Eq. (21), where F_c is the input generalized force at each support structure and v_c is the corresponding velocity response vector. The characteristic matrix can be determined by the modal analysis method [7,14,15]. According to the force and displacement continuity conditions at the subsystem coupling junctions, combining the dynamic characteristic matrix equations of the subsystems above (Eqs. (9), (18), and (21)) yields the state vectors at the overall system coupling interfaces: $$v_e = (-\Lambda_{21} + \Lambda_{22}\gamma_{ij})(-\Lambda_{11} + \Lambda_{12}\gamma_{ij})^{-1} F_e,$$ $$F_c = (-\Lambda_{11} + \Lambda_{12}\gamma_{ij})^{-1} F_e,$$ $$v_c = \gamma_{ij}(-\Lambda_{11} + \Lambda_{12}\gamma_{ij})^{-1} F_e.$$
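A numerical sketch of the interface solution, assuming the block structure reconstructed above; the matrices here are random placeholders rather than the paper's admittance data.

```python
import numpy as np

def interface_response(L11, L12, L21, L22, gamma, Fe):
    """State vectors at the coupling interface:
        Fc = (-L11 + L12 @ gamma)^-1 @ Fe
        vc = gamma @ Fc
        ve = (-L21 + L22 @ gamma) @ Fc
    """
    Fc = np.linalg.solve(-L11 + L12 @ gamma, Fe)
    vc = gamma @ Fc
    ve = (-L21 + L22 @ gamma) @ Fc
    return ve, Fc, vc

rng = np.random.default_rng(3)
n = 6   # six generalized force/velocity components
L11, L12, L21, L22, gamma = (rng.normal(size=(n, n)) for _ in range(5))
ve, Fc, vc = interface_response(L11, L12, L21, L22, gamma, rng.normal(size=n))
print(np.round(ve, 3))
```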
Model simulation and experiments
5.1. Model simulation
According to the flywheel structural parameters and the isolator micro-vibration features provided by the manufacturers (the parameters are shown in Table 1, the honeycomb panel parameters in Table 2, and the panel layup in Fig. 4), the model in Chapter 2 was simulated under the flywheel imbalance condition. The output disturbance force waterfall plot is shown in Fig. 5.
Flywheel vibration isolation system experiments
The disturbance test was carried out in an ultra-clean environment laboratory; the test site is shown in Fig. 6. A six-component quartz force plate (model HR-FP3402) with a sensor sampling frequency of 5000 Hz was used to test the flywheel disturbance force characteristics along the X and Z axes. The testing process accelerates the flywheel from 0 RPM (revolutions per minute) to the specified speed, holds it for 20 s, and then decelerates it back to 0 RPM; a single test lasts 60 s in total. The disturbance data at stable speed were signal-processed for each speed, and the resulting frequency-domain waterfall plot is shown in Fig. 7. (Fig. 6: experiment site.) According to Fig. 7, the vibration isolation system exhibits flywheel disturbance excitation force at the rotation frequency, along with a series of harmonics at its multiples. Apart from individual frequency points, the X-axis disturbance force of the vibration isolation system is on the order of 0.1 N, and the Z-axis disturbance force is on the order of 0.2 N.
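A minimal sketch of how such a waterfall matrix can be assembled from force records at a set of stable speeds; the signals are synthetic, and parameters such as the FFT length are assumptions rather than the test settings.

```python
import numpy as np

def waterfall(signals, fs=5000, nfft=4096):
    """Waterfall matrix (speed x frequency) from force records measured
    at a set of stable flywheel speeds."""
    win = np.hanning(nfft)
    rows = []
    for x in signals:
        seg = np.asarray(x[:nfft]) - np.mean(x[:nfft])  # remove static preload
        X = np.fft.rfft(seg * win)
        rows.append(2.0 * np.abs(X) / win.sum())        # single-sided amplitude
    return np.fft.rfftfreq(nfft, 1.0 / fs), np.array(rows)

# Synthetic example: an unbalance harmonic whose amplitude grows with speed
speeds = np.arange(600, 3001, 300)                      # RPM
t = np.arange(0, 2.0, 1.0 / 5000)
sigs = [1e-4 * (rpm / 60.0) ** 2 * np.sin(2 * np.pi * rpm / 60.0 * t)
        for rpm in speeds]
freqs, W = waterfall(sigs)
print(W.shape, f"peak at {freqs[np.argmax(W[-1])]:.1f} Hz for 3000 RPM")
```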
Result discussion
Figs. 5 and 7 show that the two spectra are basically consistent in frequency components, the form of the whirl frequency curves, and the amplitude variation. The disturbance force at the frequency shown as the first-order line in Figs. 5 and 7 is caused by the unbalance of the flywheel. Consistent with the actual working conditions of the flywheel, a series of harmonics is also produced by the flywheel rotation frequency at high frequencies.
The test result in Fig. 7 shows an amplification of the X-direction disturbance around 200 Hz that is not captured in the simulation. This vibration of the flywheel is caused by modal amplification of the supporting structure. Structural modal amplification factors are complex, and the modeling and simulation in this paper do not account for them; hence, no significant amplitude appears in the X-direction waterfall plot around 200 Hz in Fig. 5. The experimental results show that the modal amplification effect is significant and, owing to its complexity, needs to be modeled separately.
The disturbance amplitude is significant around 294 Hz in Figs. 5 and 7; this is the characteristic signature of the flywheel whirl mode, caused by the flywheel axial and radial translational modes. The analysis and test results also show that when the natural frequency of the flywheel intersects a harmonic frequency, significant resonance occurs, amplifying the disturbance, especially around 270 Hz.
Although there is some error between the simulation and experimental results, the theoretical modeling and simulation analysis match the experimental results well, which indicates that the key factors of the vibration characteristics of the flywheel vibration isolation system are captured accurately and that the theoretical analysis is correct.
Conclusions
In this paper, a dynamic model of the three-degree coupled vibration isolation system, comprising the flywheel micro-vibration excitation, multiple elastic supports, and the base structure, is established based on the admittance method and subsystem partitioning. The relationship between the I-O disturbance force and velocity vectors is described by the characteristic transfer matrices of the flywheel vibration isolation subsystems.
1) Model simulation and flywheel vibration isolation system test results show that the two spectra are basically consistent in frequency components, the form of the whirl frequency curves, and the amplitude variation, indicating that the key factors of the vibration characteristics of the flywheel vibration isolation system are captured accurately and that the theoretical analysis is correct.
2) The three-dimensional coupled micro-vibration transfer matrix dynamic model shows that the transfer matrix method effectively avoids the complexity of solving for the state vectors at the sub-structure coupling interfaces. This method can be applied to the vibration analysis of multi-layer or branched multi-body systems.
3) The transfer matrix can play a broader role in studying the dynamic characteristics of micro-vibration isolation systems and in integrated modeling and analysis of active and passive control; at the same time, it establishes a theoretical basis for structural parameter optimization and active control strategies for the flywheel micro-vibration system.
| 3,332 | 2017-05-15T00:00:00.000 | [
"Engineering",
"Physics"
] |
Quantitative Evaluation of Hypomimia in Parkinson’s Disease: A Face Tracking Approach
Parkinson’s disease (PD) is a neurological disorder that mainly affects the motor system. Among other symptoms, hypomimia is considered one of the clinical hallmarks of the disease. Despite its great impact on patients’ quality of life, it remains under-investigated. The aim of this work is to provide a quantitative index for hypomimia that can distinguish pathological and healthy subjects and that can be used in the classification of emotions. A face tracking algorithm was implemented based on the Facial Action Coding System. A new easy-to-interpret metric (face mobility index, FMI) was defined considering distances between pairs of geometric features, and a classification based on this metric was proposed. A comparison was also provided between healthy controls and PD patients. Results of the study suggest that this index can quantify the degree of impairment in PD and can be used in the classification of emotions. Statistically significant differences were observed for all emotions when distances were taken into account, and for happiness and anger when FMI was considered. The best classification results were obtained with Random Forest and kNN according to the AUC metric.
Introduction
Parkinson's disease (PD) is a neurodegenerative disorder characterized by motor symptoms such as tremor, rigidity, bradykinesia, and gait and balance problems. There is also a plethora of non-motor symptoms experienced by PD individuals that have a strong impact on patients' and their care-partners' quality of life [1]. Emotional processing is impaired at different levels in PD [2], including facial expressivity and facial emotion recognition. Hypomimia/amimia is a term used to describe reduced facial expression in PD, which is one of the most typical features of the disease [3]. Despite being clinically well recognized, its significance, pathophysiology, and correlation with motor and non-motor symptoms are still poorly explored [4,5]. This is partially due to the scarcity of objective and validated measures of facial expression [6].
Facial expressions are an important natural means of communicating and have been the object of several studies since the beginning of the 20th century [7], both in healthy and in different clinical populations. Hjortsjo [8] provided an anatomic description of muscular movements during facial expressions and their subdivision depending on the displayed emotions. Around the same period, other authors approached a subdivision of the meaning of expressions by their inherent emotionality. This can be found in the work of Ekman and Friesen [9], who defined a precise, small, universal discretization of the six basic emotions according to Darwin [10]: fear, anger, disgust, happiness, sadness, and surprise. Furthermore, the Facial Action Coding System (FACS) [9] was developed, which describes facial expressions by means of action units (AUs). Of 44 defined FACS AUs, 30 AUs are anatomically related to the activation of specific facial muscles, and they can occur either individually or in combination. Through this encoding system, more than 7000 different AU combinations have been observed [9]. This system is still used in manifold fields and applications.
The analysis of facial expressions has advanced in many domains, such as face detection, tracking, pattern recognition, and image processing. In recent years, different algorithms and architectures have been proposed in Facial Expression Recognition (FER) systems. In order to extract relevant information for face and facial expression analysis, they generally follow three main steps: (a) Face landmark detection: Identification of landmarks is based on specific facial feature positions (i.e., eyes, mouth, nose, eyebrows, etc.). Usually, after landmarks have been detected, a normalization step is performed by aligning each face to a local coordinate framework in order to reduce the large variation introduced by different faces and poses [11]. (b) Feature extraction: Feature construction and/or selection is usually based on the coordinates obtained from (a), and either an appearance or a geometric approach can be used. The former employs the texture of the skin and facial wrinkles, whereas the latter employs the shape, i.e., distances and angles of facial components [12]. (c) Classification: The last step concerns the classification of different emotions or expressions. Different methods are applied in the literature depending on the previous phases. The most-used classification algorithms in conventional FER approaches include Support Vector Machines, Adaboost, and Random Forest [13].
At present, algorithms for automatic facial analysis employing these kinds of methodologies are gaining increasing interest. The aims of these systems are facial comparison and/or recognition (e.g., OpenFace software [14]), in addition to the identification and classification of different emotions (e.g., EmoVu, FaceReader [15], FACET, and Affectiva Affdex [16]). Regarding the latter objective, it is crucial to note that these algorithms usually adopt machine or deep learning (DL) techniques that exploit enormous databases of healthy subjects' images. When using these methods to assess impairments in face mobility in a given pathology (e.g., PD, depression, obsessive-compulsive disorder [17]), the evaluation of the symptom is based on the measurement of the deviation of the acquired expressions from the corresponding ones in healthy individuals. Despite the growing interest in the application of FER algorithms to hypomimia, in particular in PD [4,18,19], there is still a paucity of work regarding the quantitative assessment of the degree of impairment in these individuals.
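As a minimal sketch of the classification step with one of the classifiers and the metric named in this work (Random Forest, AUC), assuming geometric distance features; the data below are synthetic placeholders, not the study's recordings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 40 geometric distance features per sample, one of six
# emotion labels; in practice the features come from the landmark pipeline.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))
y = rng.integers(0, 6, size=120)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc_ovr")
print(f"mean one-vs-rest AUC: {scores.mean():.2f}")   # ~0.5 for random labels
```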
Emerging literature points towards the quantification of hypomimia as a potential marker for diagnosis and disease progression in PD, and some attempts in this area have been recently made. Bandini et al. [20] evaluated hypomimia in a cohort of PD subjects. They estimated a quantitative measure from the neutral expression in a subset of basic emotions (happiness, anger, disgust, and sadness), considering both the actuated and the imitated ones. Grammatikopoulou and colleagues [21] proposed an innovative evaluation of this symptom in PD based on images captured by smartphones. Two different indexes of hypomimia were developed without discriminating among different emotions. A review of automatic techniques for detecting emotions in PD was recently carried out by Sonawane and Sharma [22]; they investigated both machine and DL algorithms used in the classification of emotions in PD subjects with hypomimia. Moreover, they addressed the problem of expression quantification and related pending issues. In 2020, Gomez and colleagues [19] proposed a DL approach to model hypomimia in PD exploring different domains. The main issue they encountered when using such techniques was the lack of large databases of PD subjects' videos and/or images to be exploited in this approach. In summary, the current state-of-the-art hypomimia evaluation proposes methodologies that aim, first, to distinguish PD and healthy control subjects, and second to develop quantitative metrics. The indexes available to date still have some limitations, such as the assessment of the symptom without considering the specific face muscles involved or the disregard of the basic emotions in the analysis [5]. The objective of the present study is to provide a quantitative measure of hypomimia that tries to overcome some of these limitations and is both able to differentiate between pathological and physiological states and classify the basic emotions.
In particular, the main contributions of this work are:
• the design of a new index based on facial features to quantify the degree of hypomimia in PD and link it to the different emotions;
• the definition of a stand-alone metric able to quantify the degree of hypomimia in each subject independently of comparisons with healthy subjects' databases, thus enabling tracking of disease progression over time;
• a spatial characterization in face regions strictly related to the movement of specific muscles, thus enabling the targeting of specific rehabilitation treatments.
Participants
A total of 50 PD subjects and 20 healthy control (HC) subjects were enrolled for the study. Power analysis for sample size estimation was applied [23] (p = 0.05, power = 80%, values from [24], Appendix A). People with idiopathic PD were recruited from the Department of Casa di Cura "Villa Margherita" in Vicenza, and healthy controls were recruited from hospital personnel. This study was approved by the local ethics committee (ARS_PD1/100-PROT). A written informed consent was obtained from all participants. Data from 3 healthy subjects were discarded from the analysis due to artifacts in the video sequences. Table 1 reports the demographic data of the participants. For PD individuals, data on disease duration and Unified Parkinson's Disease Rating Scale (UPDRS) Part III in the ON medication status were collected.
Inclusion and Exclusion Criteria
Patients were eligible for inclusion if they were diagnosed with Parkinson's disease according to UK Brain Bank criteria. The diagnosis was reviewed by a movement disorders neurologist. Exclusion criteria were: presence of clinically significant depression (according to Diagnostic and Statistical Manual of Mental Disorders-V (DSM-V) criteria and Beck's Depression Inventory (BDI-II) score > 17); presence of dementia (according to DSM-V criteria and MMSE score < 24); and history of deep brain stimulation surgery.
Pipeline
A schematic representation of the processing pipeline is reported in Figure 1. Data acquisition, processing, and statistics are described in Sections 2.2.1-2.2.3 respectively. Data were imported into MATLAB (R2017a) and custom code was developed to perform the analysis. Moreover, unsupervised classification was implemented in Orange data mining toolbox [25], as described in Section 2.2.4.
Data Acquisition
Frontal face videos of the participants were recorded while they were instructed by the researcher to perform, in random order, the six basic facial emotions: anger, disgust, fear, happiness, sadness, and surprise. The neutral face expression was also acquired either at the beginning or at the end of the video session while the participant was invited to remain silent and look at the video camera while resting. Subjects were comfortably seated in front of a commercial camera (GoPro Hero 3, 1920 × 1080 pixels, 30 fps) placed at eye level. A neutral background was located behind them [5].
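To make the subsequent processing concrete, the frame-grabbing step can be sketched as follows. This is a minimal illustration using OpenCV, not the authors' tooling: the file name and frame indices are hypothetical placeholders, and the paper's actual selection rule (the frames immediately following the clinician's instruction; see Data Processing below) is assumed to be known per video.

# Minimal sketch: grab a handful of frames from a 30 fps frontal-face video.
import cv2

def extract_frames(video_path, frame_indices):
    cap = cv2.VideoCapture(video_path)
    frames = []
    for idx in frame_indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # seek to the requested frame
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames

# Hypothetical usage: four consecutive frames right after the instruction.
frames = extract_frames("subject01_happiness.mp4", [150, 151, 152, 153])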
Data Processing
For each of the six emotions and the neutral expression, four frames were extracted from the acquired videos; these were selected as the frames immediately following the instruction given by the clinician. Based on the FACS encoding system, a set of facial landmarks was defined, corresponding to forty points in the 2D image space; Figure 2 describes the different landmarks. Following Cootes et al. [26], three types of facial feature points were adopted: points labeling parts of the face with application-dependent significance, such as the eyebrows and the lip contour (see Figure 2, feature numbers 1, 2, 3, 4 and 33, 34, 35); points labeling application-independent elements, such as curvature extrema (the highest point along the bridge of the nose, feature number 18 in Figure 2); and points interpolated from the previous two types, such as feature numbers 19 and 23 (Figure 2). Each point was tracked with TrackOnField (BBSoF S.r.l. [27]). From the coordinates of these landmarks, forty Euclidean distances were computed per frame (Figure 3). Each distance was then averaged over the extracted frames, obtaining a single value per distance, and normalized to the corresponding value in the neutral expression:

ratio_i = d_i(emotion) / d_i(neutral),  i = 1, ..., 40  (1)

where d_i denotes the i-th distance averaged over the extracted frames.
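As a sketch of the normalization in Equation (1), the computation can be written as follows, assuming the tracked landmark coordinates are already available as arrays (the tracking itself was done with TrackOnField in the original study; the pair list and array layout here are illustrative assumptions).

import numpy as np

def normalized_distances(landmarks_emotion, landmarks_neutral, pairs):
    # landmarks_*: arrays of shape (n_frames, 40, 2) holding (x, y) per landmark;
    # pairs: list of (i, j) landmark index pairs defining the forty distances.
    def mean_dists(lm):
        d = np.array([np.linalg.norm(lm[:, i] - lm[:, j], axis=1) for i, j in pairs])
        return d.mean(axis=1)  # average each distance over the extracted frames
    return mean_dists(landmarks_emotion) / mean_dists(landmarks_neutral)  # ratio_i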
Values outside the interquartile range were excluded from the analysis. Lastly, a total FMI was defined: for each emotion (j = 1, ..., 6), the FMI was calculated as the summation of the percentage deviations from the neutral expression of all the distances, normalized to the number of available distances (n_dist):

FMI_j = (1 / n_dist) · Σ_i |1 − ratio_i| · 100%  (2)

Overall, the FMI represents an intuitive description of the mobility of the face muscles in the different emotions with respect to the neutral expression. Moreover, three indexes per face region (FMI_up, FMI_mid, FMI_low) were computed by applying Equation (2) to the distances belonging to the upper, middle, and lower face regions, respectively. Finally, a further FMI was computed by considering only the statistically significant distances for each emotion (Appendix B).
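A minimal sketch of Equation (2), assuming the normalized ratios for one emotion are given; variable names are illustrative, and the outlier exclusion implements the interquartile-range wording above literally (a 1.5·IQR fence would be the common alternative).

import numpy as np

def fmi(ratios):
    q1, q3 = np.percentile(ratios, [25, 75])
    kept = ratios[(ratios >= q1) & (ratios <= q3)]        # IQR-based exclusion
    return np.sum(np.abs(1.0 - kept)) * 100.0 / kept.size  # normalized by n_dist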
Statistics
Statistical analysis was performed in order to compare, first, the normalized distances, and then the different FMIs. Non-parametric tests were applied to the two cohorts of subjects and to each emotion. The Kruskal-Wallis test (p < 0.05) was implemented to compare the normalized distances (ratio in Equation (1)). The Wilcoxon rank sum test (p < 0.05) was used to compare the different FMIs (FMI, FMI_up, FMI_mid, FMI_low) between healthy and PD individuals.
Finally, a correlation analysis was performed between the FMIs and the values of UPDRS III, age, disease duration, and gender for each emotion, in the PD cohort only. Pearson correlation coefficients (r) were computed for all quantities except gender which, being a binomial variable, required the point-biserial correlation coefficient (r_PB) [28].
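The statistical battery described above maps directly onto standard SciPy routines; the following sketch uses placeholder arrays in place of the study's measurements.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hc, pd_ = rng.random(17), rng.random(50)        # placeholder cohort values
h_stat, p_kw = stats.kruskal(hc, pd_)           # normalized distances (ratio)
z_stat, p_rs = stats.ranksums(hc, pd_)          # FMI, FMI_up, FMI_mid, FMI_low

updrs = rng.random(50)
r, p_r = stats.pearsonr(pd_, updrs)             # UPDRS III, age, disease duration
gender = rng.integers(0, 2, 50)                 # binomial variable
r_pb, p_pb = stats.pointbiserialr(gender, pd_)  # point-biserial for gender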
Supervised Classification
Different supervised classification algorithms were applied. The following models were evaluated: k-Nearest Neighbors (kNN), Tree, Random Forest, Neural Network, Naïve Bayes, and the CN2 rule inducer. The algorithms were applied to both the normalized distances and the FMIs of the two cohorts, with the aim of discriminating the different emotions. Only the distances dataset was preprocessed with principal component analysis (PCA, 10 components, 81% explained variance), due to the high correlation among the data. Given the reduced size of the training datasets, the test phase of the classification was performed with leave-one-out cross-validation on both datasets. In order to evaluate the best classification technique, the following standard performance metrics were calculated: area under the curve (AUC), F1 score, precision, and recall [29].
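For illustration, the same protocol (PCA to 10 components, leave-one-out testing, AUC and F1 scoring) can be reproduced with scikit-learn; Orange was the toolbox actually used, and the feature matrix and labels below are random placeholders for the distances dataset.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.random((120, 40))                     # placeholder distance features
y = rng.integers(0, 6, 120)                   # placeholder emotion labels
X_pca = PCA(n_components=10).fit_transform(X)

for clf in (KNeighborsClassifier(), RandomForestClassifier()):
    proba = cross_val_predict(clf, X_pca, y, cv=LeaveOneOut(), method="predict_proba")
    pred = proba.argmax(axis=1)
    print(type(clf).__name__,
          "AUC = %.3f" % roc_auc_score(y, proba, multi_class="ovr"),
          "F1 = %.3f" % f1_score(y, pred, average="weighted"))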
Results of the Statistical Analysis of Distances and FMIs
With reference to Figure 3 (column N), each normalized distance (Equation (1)) is identified by a number. Figure 4 reports the normalized distances (ratio) for each emotion in the three face regions: upper, middle, and lower. Values greater than 100% represent an increase in the specific distance with respect to the neutral expression; conversely, values lower than 100% represent a decrease. Therefore, the closer a distance is to 100%, the smaller its variation from the neutral expression. When comparing the two cohorts of subjects, statistically significant differences (p < 0.05) between corresponding distances were highlighted for each emotion (Figure 4 and Table 2). Results of the analysis for the FMI computed by considering only the distances reported in Table 2 can be found in Appendix B. When combining all the distances in the FMI, the comparison between the two populations of subjects (see Figure 5) revealed statistically significant differences only for the happiness emotion (p < 0.05), even though HC subjects displayed higher absolute values for almost all the emotions. When considering the FMIs associated with the three face regions (Figure 6(d)), it can be noted that the lower-part index (Figure 6(c)) was the only one displaying statistically significant differences between the two populations of subjects, in both the anger and happiness emotions (p < 0.05).
In Table 3, the correlation coefficients between the FMIs and the clinical and demographic variables (UPDRS III, disease duration, age, gender) are reported for each emotion. The analysis was performed on the PD cohort only, and FMI values were employed. No statistically significant correlations (p < 0.05) were found between the different quantities. Greater values of FMI represent greater deviation from the neutral expression. Red * highlights statistically significant differences at the 0.05 significance level. FMI is dimensionless.
Results of the Supervised Classification
Results of the supervised classification are reported in Table 4 in terms of AUC and F1 score values, whereas results referring to the other metrics are included in Appendix C. Classification of the distances database was performed as a validation phase to assess the feasibility of classifying through the FMI database. The Random Forest algorithm showed the best score on the distances databases in both the HC and PD cohorts, obtaining AUC values of 94.3 and 91.6 and F1 scores of 76.2 and 71.5, respectively. By comparison, kNN was found to be the optimal technique in the classification with the FMI; AUC values of 88.9 and 88.4 and F1 scores of 70.1 and 73 were obtained in the HC and PD datasets, respectively.
Discussion
Developing an automatic system for AU recognition is challenging due to the dynamic nature of facial expressions. Emotions are communicated by subtle changes in one or a few facial features occurring in the area of the lips, nose, chin, or eyebrows [30]. To capture these changes, different numbers of facial features have previously been proposed and, irrespective of their number, these landmarks cover the areas that carry the most important information, such as the eyes, nose, and mouth [31]. Although more points provide richer information, they also require more time to be detected. In order to quantify the involvement of each muscle in each specific emotion, a face mobility index was developed based on distances between the insertion points of each muscle (see Figures 2 and 3) coupled with significant facial features. A total index (FMI) was defined in order to summarize the overall face muscle involvement.
Based on these metrics, a population of PD subjects was compared with a group of healthy controls matched by age and gender. Through the distances analysis, a fine spatial characterization of movements related to muscle activity was obtained. Statistically significant differences were found among emotions between the two cohorts of subjects. According to [30], each emotion can be described by a specific set of AUs, and this dataset highlighted impairments related to specific AUs and their muscles. A notable example involves the happiness emotion: statistically significant differences were found in distances 15, 32, 33, 34, and 35 in the lower part of the face; these quantities represent the movement of the combination of AUs 12 and 25, the characteristic AUs for happiness. Because AUs and face muscles are strictly related (see Appendix D), it can be noted that PD people displayed impairments in the Zygomatic Major and Depressor Labii muscles, in agreement with [32]. Another example, considering the upper face, is the surprise emotion, described by AUs 1 and 2. Values greater than the neutral expression were found in both HC and PD people, but the latter displayed less mobility in the AUs corresponding to the Frontalis muscle [33]. The anger and sadness emotions showed statistically significant differences in the distances of the upper and lower face regions, respectively, with deficits in the characteristic AUs 4 and 7 for anger, and AU 15 for sadness; it can be concluded that the corresponding muscles, Orbicularis Oculi and Triangularis, were impaired in PD subjects. Fear displayed statistically significant differences in the upper region (distance 36), associated with AUs 1 and 4 (Frontalis, Pars Medialis, and Corrugator muscles), in the middle region, associated with AU 20 (Risorius), and in the lower region, associated with AU 25 (Orbicularis Oris). Finally, disgust revealed statistically significant differences in the upper region, related to the activity of the Orbicularis Oculi muscle, and in the lower region, in those distances associated with AU 17, in accordance with [34].
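For reference, the emotion-to-AU correspondences invoked in this discussion can be collected into a small lookup table; this condensed subset is read off the paragraph above (the full mapping is given in Table A4, following [30]).

# Emotion -> characteristic AUs, as discussed above (condensed subset).
CHARACTERISTIC_AUS = {
    "happiness": [12, 25],
    "surprise": [1, 2],
    "anger": [4, 7, 24],     # AU 24 appears in the lower-face FMI discussion
    "sadness": [15],
    "fear": [1, 4, 20, 25],
    "disgust": [17],
}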
When considering face mobility through the overall metric, as expected, the FMI reported generally higher values in HC than in PD individuals, even though only the happiness emotion revealed statistically significant differences. When comparing the three FMIs of the upper, middle, and lower regions, happiness was still the most impaired emotion in the middle and lower parts of the face. Furthermore, anger also showed statistically significant differences between the two cohorts in the lower part (Figure 6(d)), indicating, in PD people, greater impairment of the related AU 24 and the corresponding Orbicularis Oris muscle.
Regarding the analysis of the correlation between the different demographic and clinical data, and the FMI values in the PD subjects, surprisingly, no significant correlations emerged. This may be interpreted as the ability of the proposed metric to measure different aspects of the symptom, which could be considered to be complementary to the standard clinical scales. In this regard, it is worth mentioning that UPDRS III primarily assesses patients' appendicular function [35].
The classification algorithms showed good results in the preliminary analysis with the normalized distances databases. As expected, the AUC and F1 scores calculated on the HC individuals were higher than those of the PD cohort, despite the difference in dataset size (17 vs. 50 subjects). These outcomes validated the possibility of using the newly developed FMI index to perform classification and demonstrated the differences in expressivity between the two cohorts. The second step of classification involved the FMI datasets. Encouraging results were achieved, even though performance values were lower than those obtained in the former analysis. The kNN algorithm outperformed the other techniques on both the HC and PD datasets.
Some limitations of the present study must be highlighted. Firstly, emotions were performed according to indications given by clinicians. This limitation could be overcome by naturally inducing the emotions with other stimuli (e.g., videos or movies); however, the downside of this approach is the uncertainty about the specific emotion elicited in the subject. Secondly, the total UPDRS III score, rather than the hypomimia-specific items, was employed in the correlation analysis. Furthermore, images were analyzed in the 2D image space, leading to reduced accuracy in the measured quantities. The authors are aware of this limit, but this type of method was employed in order to simplify the setup, thus avoiding multiple-camera acquisitions and calibrations. In terms of classification, it is important to note that all the analyses were validated with the leave-one-out cross-validation technique in order to cope with the limited sample of subjects. Finally, straightforward conventional machine learning techniques were employed rather than DL methods, which may be considered the most promising emerging approaches in this domain. However, due to the limited dataset, this study should be considered a feasibility analysis to assess whether the new index (FMI) may be an effective metric.
Future analyses could involve more advanced techniques, such as DL, increasing the number of subjects with the corresponding FMIs. This approach would enable automatic metric computation and real-time applications, with possible analyses of time evolution. Overall, the final aim of the proposed study could be the combination of all the proposed methods into a single, easy-to-use tool for clinical and research applications, able to track disease progression, tailor targeted therapies, and evaluate their efficacy. Comparison among different rehabilitation interventions for hypomimia could be performed by assessing the newly developed metric in the pre- and post-treatment conditions. Moreover, spontaneous emotion expressiveness could also be evaluated, since the present research only includes emotions triggered by external instructions.
Nevertheless, other future investigations could be carried out in order to link the standard clinical assessment (UPDRS III items specifically related to hypomimia, i.e., facial expression and speech) with the proposed metrics.
Finally, by considering the relationship between face anatomical landmarks and muscle functions, future developments could also consider including the simultaneous acquisition of muscle activity through surface electromyography, as in [32,34], for validation purposes.
Conclusions
Although copious research has been undertaken on PD, hypomimia remains substantially under-investigated. State-of-the-art practice evaluates the symptom by means of clinical scales (UPDRS III item 3.2), which suffer from poor inter-rater repeatability, thus justifying the need for a more objective measure of facial expressiveness and recognition [5]. The present contribution showed the possibility of quantitatively characterizing the degree of hypomimia in the PD population. Moreover, through the proposed methodology, the face muscles associated with a specific emotion (i.e., its AUs [9]) can be identified, thus providing a tool for planning targeted interventions. The overall metric represents a stand-alone methodology for measuring the degree of impairment without requiring comparison with a database of healthy subjects [20]. Nevertheless, the application of the same methodology to the control group showed the ability to better highlight the specific impairments associated with PD, thus also supporting the adoption of the index for classification purposes. Finally, both the proposed normalized distances and the FMI can be considered a comprehensive description of face mobility that can become a powerful tool to quantitatively measure the degree of hypomimia associated with specific emotions in PD subjects.
Institutional Review Board Statement:
The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Ethics Committee of Vicenza (protocol code ARS_PD1/100-PROT, 17/06/2020).
Informed Consent Statement:
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in the manuscript:
Appendix A
Power analysis for sample size estimation was applied according to [23], following their Equation (5) for unequal-sized groups. The chosen p value and power were p = 0.05 and power = 80%, respectively. Values of the FMI metric for the happiness emotion were taken from [24].
First, N was computed assuming equal-sized groups, according to Equation (2) of [23]:

N = 2 · c_(p,power) / d^2

where N is the required number of subjects in each group, d is the standardized difference (target difference divided by the standard deviation), and c_(p,power) is a constant defined by the chosen p value and power; in this case, c_(p,power) = 7.9. According to [24]: mean FMI_HC = 14.458, mean FMI_PD = 11.295, SD = 3.5, so that

d = (14.458 − 11.295) / 3.5 ≈ 0.90

The total sample size was then adjusted according to the actual ratio of the two groups, k = 50/17 ≈ 2.94, with the revised total sample size

N' = N_total · (1 + k)^2 / (4k)

Finally, the individual sample sizes in the two groups are N'/(1 + k) and k·N'/(1 + k), resulting in N_HC = 12.96 subjects and N_PD = 38.12 subjects (the larger requirement falling on the larger, PD, group).
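Since the equation layout of [23] is not preserved in the extracted text, the expressions above were reconstructed from the quoted quantities; the following few lines verify that they reproduce the reported values and should be read as a check rather than a transcription of [23].

c = 7.9                                  # constant for p = 0.05, power = 80%
d = (14.458 - 11.295) / 3.5              # standardized difference ≈ 0.904
n_equal = 2 * c / d**2                   # per-group size for equal groups ≈ 19.35
N_total = 2 * n_equal                    # total for equal groups ≈ 38.69
k = 50 / 17                              # actual PD:HC ratio ≈ 2.94
N_rev = N_total * (1 + k)**2 / (4 * k)   # revised total ≈ 51.09
print(N_rev / (1 + k), k * N_rev / (1 + k))  # ≈ 12.96 (HC) and 38.12 (PD)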
Appendix B
The FMI computed on the statistically significant distances (Equation (2), distances listed in Table 2) is reported in Figure A1; no statistically significant differences were highlighted. Figure A2 reports the FMI computed on the same quantities grouped by face region; statistically significant differences (p < 0.05) were found for surprise in FMI_up.
Figure A1. FMI computed on the statistically significant different distances.
Figure A2. FMI computed on the statistically significant different distances per face region. Red * highlights statistically significant differences at the 0.05 significance level.
Appendix C
Classification results for the precision and recall metrics of the different classification techniques are reported in Tables A1 and A2, respectively.
Appendix D
Table A3 describes the AUs, their names according to FACS [9], and the corresponding muscles. Table A4 reports the specific AUs involved in the basic emotions according to [30]. | 7,078.4 | 2022-02-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Involvement of CD26 in Differentiation and Functions of Th1 and Th17 Subpopulations of T Lymphocytes
CD26, acting as a costimulator of T cell activation, plays an important role in the immune system. However, the role of CD26 in the differentiation of T cell subsets, especially of new paradigms of T cells, such as Th17 and Tregs, is not fully clarified. In the present study, the role of CD26 in T cell differentiation was investigated in vitro. CD26 expression was analyzed in the different subsets of human peripheral blood T lymphocytes after solid-phase immobilized specific anti-CD3 mAb stimulation. Here, the percentage of CD4+ cells significantly increased and most of these cells coexpressed CD26, suggesting a close correlation of CD26 expression with the proliferation of CD4+ cells. Subsequently, after immobilized anti-CD3 mAb stimulation, CD26 high-expressing cells (CD26high) were separated from CD26 low-expressing cells (CD26low) by magnetic cell sorting. We found that the percentages of cells secreting Th1 typical cytokines (IL-2, IFN-γ) and Th17 typical cytokines (IL-6, IL-17, and IL-22) or expressing Th17 typical biomarkers (IL-23R, CD161, and CD196) in the CD26high group were markedly higher than those in the CD26low group. In addition, a coexpression of CD26 with IL-2, IFN-γ, IL-17, IL-22, and IL-23R in lymphocytes was demonstrated by fluorescence microscopy. These results provide direct evidence that the high expression of CD26 is accompanied by the differentiation of T lymphocytes into Th1 and Th17, indicating that CD26 plays a crucial role in regulating the immune response.
Introduction
CD26/DPPIV (dipeptidyl peptidase IV) is a multifunctional integral type II transmembrane glycoprotein with a broad cell-surface distribution [1]. As a serine protease, DPPIV cleaves dipeptides after a proline or alanine at the penultimate position of the N-terminus of several bioactive peptides and thereby modulates their activities in diverse biological processes [2]. Besides its enzyme activity, CD26 has also been shown to act as a costimulator involved in T cell activation and differentiation through its interaction with other cellular molecules, such as adenosine deaminase (ADA), the receptor-type protein tyrosine phosphatase CD45, CARMA1, and caveolin-1 [3,4]. The expression of CD26 in T lymphocytes is differentially regulated during T cell development. As an activation marker of T cells, CD26 is mainly expressed on CD4+ T cells, and it is thought to be a marker of T helper type 1 cells [4,5]. Although both Th1 and Th2 cells express CD26, Th1 cells express three- to sixfold more CD26 protein than Th2 cells [6]. Other studies have indicated that CD26 expression induced the cytokine production of Th1 cells, including IL-2, IFN-γ, IL-10, and IL-12 [7]. In vivo, CD26 deficiency decreased the production of IL-2 and IL-4, delayed the production of IFN-γ in the sera of mice after pokeweed mitogen (PWM) stimulation, and increased the secretion of IL-4, IL-5, and IL-13 in bronchoalveolar lavage (BAL) after ovalbumin-induced airway inflammation [8,9]. In recent years, a new major effector population of CD4+ T cells has been defined and designated as Th17 cells, which play important roles in many diseases [10-12]. One of the Th17 signature cytokines is IL-17, a proinflammatory factor. Besides IL-17, Th17 cells can produce other proinflammatory cytokines, including IL-22, IL-26, and IFN-γ, and recent studies have shown that Th17 cells express IL-23R, the lectin-like receptor CD161, and the chemokine receptor CCR6 (CD196) [13,14]. It has been reported that human Th17 cells also express a high level of CD26/DPPIV [15]. However, the role of CD26 in the differentiation of Th17 cells has not been clearly investigated. Besides Th17 cells, regulatory T cells (Tregs) are another subpopulation of T helper cells [16]. Tregs modulate immune activities through their immunosuppressive effect on other self-reactive T cells, thereby contributing to the maintenance of immunologic self-tolerance [17]. Previous studies found that the majority of human Tregs strongly and constitutively express CD25 (CD25high), and the forkhead transcription factor Foxp3 is required for the development and function of CD4+CD25+ regulatory T cells and is regarded as one of the specific markers of Tregs [16,17]. Recently, we demonstrated a delayed allogeneic skin graft rejection in CD26-deficient mice. During graft rejection, the concentration of IL-17 in serum and the percentage of cells secreting IL-17 in mouse peripheral blood lymphocytes (MPBLs) were both significantly lower, while the percentage of regulatory T cells (Tregs) was significantly higher, in MPBLs of CD26−/− mice than in those of CD26+/+ mice [18]. To further investigate the role of CD26 in the differentiation of Th17 subpopulations of human T lymphocytes, in this work the correlation of CD26 expression with the differentiation of subsets of human T lymphocytes after solid-phase immobilized specific anti-CD3 mAb stimulation was investigated in vitro.
We demonstrated that CD26 is closely involved in regulating the differentiation and functions of Th1 and Th17 subpopulations of T lymphocytes.
2.1. Separation of Human Peripheral Blood Lymphocytes.
Healthy human blood collection was performed according to German ethics laws, and approval (EA4/106/13) was obtained from the Ethics Committee of Charité-Universitätsmedizin Berlin. Lymphocytes from human peripheral blood were isolated using Ficoll density gradient centrifugation (GE Healthcare, Sweden), performed according to the manufacturer's instructions. Briefly, human peripheral blood was collected and then centrifuged in a simple and rapid procedure. Differential migration of cells during centrifugation results in the formation of layers containing different cell types: the bottom layer contains erythrocytes; the layer immediately above it contains mostly granulocytes; at the interface between the plasma and the Ficoll-Paque layer, mononuclear cells are found together with other slowly sedimenting, low-density particles (e.g., platelets). The interface layer of mononuclear cells was collected and the cells were cultured in a tissue-culture-treated dish overnight, after which the monocytes were adherent to the dish and the lymphocytes remained in suspension. The suspended lymphocytes were then collected and their purity was determined by flow cytometry; more than 95% of the cells were lymphocytes. The lymphocytes were next cultured in non-tissue-culture-treated flasks for the further experiments.
2.2. Activation of Human Lymphocytes by Stimulation with Solid-Phase Immobilized Anti-CD3 mAb.
It has previously been reported that lymphocytes can be activated by stimulation with solid-phase immobilized specific monoclonal anti-CD3 antibodies (mAbs, such as OKT3), whereby CD26 is selectively involved in the activation pathway triggered by anti-CD3 [19]. According to Hegen's protocol [19], human peripheral blood lymphocytes (HPBLs) were stimulated with immobilized anti-human CD3 mAb (OKT3, IgG2a) (Thermo Fisher Scientific, USA). Briefly, 100 μL PBS with 2 μg/mL anti-CD3 mAb (stimulated group) or without antibodies (PBS, negative control) was immobilized in each well of a 96-well plate overnight. After removal of the PBS buffer, 200 μL of lymphocyte culture, containing 2 × 10^5 freshly isolated lymphocytes in RPMI-1640 growth medium (supplemented with 10% FBS, 100 μg/mL streptomycin, and 100 IU/mL penicillin), was cultured directly in each well of the 96-well plate with or without immobilized antibody at 37°C in a humidified atmosphere with 5% CO2 for 72 h.
2.3. Measurement of Lymphocyte Proliferation.
The proliferation of lymphocytes after stimulation was measured by flow cytometry after cells were labeled with carboxyfluorescein succinimidyl ester (CFSE) assay kit (Thermo Fisher Scientific, USA) according to the instructions of the manufacturer.
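As an illustration of how CFSE data yield a generation count, the following sketch finds the equally spaced dilution peaks on a log2 fluorescence axis (each cell division halves the CFSE signal); the intensity array is a synthetic placeholder, not the study's cytometry export, and real divided populations would show several peaks rather than the single undivided one generated here.

import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
intensity = rng.lognormal(mean=8.0, sigma=0.4, size=10_000)  # placeholder per-cell CFSE values
hist, edges = np.histogram(np.log2(intensity), bins=200)     # generations are ~1 apart on log2 axis
peaks, _ = find_peaks(hist, prominence=hist.max() * 0.05)    # one peak per generation
print("generations beyond the undivided peak:", max(len(peaks) - 1, 0))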
2.4. Measurement of Cytokine Secretion of HPBLs after Stimulation Using ELISA.
Three days after stimulation, the cell culture suspensions of HPBLs were collected. After centrifugation, the supernatant was transferred into new tubes. Cytokine levels in the supernatant were measured with ELISA kits (R&D Systems, Minnesota, USA), following the manufacturer's instructions.
2.5. Separation of CD26+ Cells by Magnetic Cell Sorting (MACS).
MACS MicroBeads (Miltenyi Biotec, Germany) were used for the separation of cells expressing CD26. Lymphocytes were collected on day three after stimulation. First, the mouse anti-human CD26 mAb (anti-CD26 mAb 350, prepared in our own laboratory) was used to label the lymphocytes for 1 h at 4°C. Following two washing steps, magnetic MicroBeads labeled with anti-mouse IgG were added to the cells and incubated for a further 15 min at 4°C. After a washing step, cells were loaded onto the column, which had been placed in the magnetic field of a suitable MACS separator (Miltenyi Biotec, Germany). The unlabeled cells were collected in the flow-through after two washing steps, while the labeled CD26+ cells remained bound to the column. After the column was removed from the separator and placed in a suitable collection tube, the labeled CD26+ cells were flushed out of the column with the help of a plunger. Finally, two groups of cells, the CD26 high-expressing (CD26high) group and the CD26 low-expressing (CD26low) group, were obtained and then analyzed by flow cytometry.
For determination of the coexpression of CD26 with intracellular cytokines, after incubation with FITC-conjugated anti-human CD26, the cells were washed, fixed with 4% formaldehyde for 5 min, washed again, and subsequently permeabilized with 0.1% Triton X-100 in PBS for 10 min.
After further washing steps following permeabilization, the cells were incubated with the corresponding PE-conjugated antibody.
2.7. Fluorescence Immunomicroscopy.
The immunofluorescence staining of cell surface or intracellular proteins was performed as above. Thereafter, cells were washed twice with PBS, resuspended in 20 μL PBS, and spread on a slide in a thin layer. After air drying, the cell layers were covered with mounting solution (Thermo Fisher Scientific, USA) and coverslips for fluorescence microscopy. Images were taken at a magnification of ×600.
2.8. Statistical Analysis.
All data represent the mean value ± SD from a minimum of five independent experiments with at least five healthy donor HPBL samples, and each experiment was repeated more than three times. The statistical differences of values were calculated using ANOVA. Differences between groups were considered significant at p < 0.05, p < 0.01, p < 0.005, and p < 0.001; p values for percentage comparisons were calculated with a chi-square test.
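The two tests named above correspond to scipy.stats.f_oneway and scipy.stats.chi2_contingency; the sketch below runs them on placeholder data standing in for the donor measurements.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
g1, g2, g3 = (rng.random(5) for _ in range(3))   # placeholder per-donor values
f_stat, p_anova = stats.f_oneway(g1, g2, g3)     # one-way ANOVA across groups

observed = np.array([[61, 39], [33, 67]])        # e.g., %positive vs %negative cells per group
chi2, p_chi, dof, expected = stats.chi2_contingency(observed)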
Results
3.1. Part of the Lymphocytes Was Activated and Proliferated, and the Expression of CD26 Was Upregulated after Antigen Stimulation.
After the isolation of mononuclear cells and removal of monocytes, the expression level of CD26 was tested 24 h, 48 h, and 72 h after stimulation with solid-phase immobilized specific anti-CD3 mAb (OKT3, Thermo Fisher Scientific, USA); the CD26 expression level was highest at day 3 after stimulation (Supplementary Figure 1). Three days after stimulation, cell viability was tested using Annexin V/PI; more than 98% of the cells were alive (Supplementary Figure 2) and could therefore be used for the subsequent studies. The activation of HPBLs was then determined by measuring the expression of different lymphocyte activation markers (CD69, CD25, CD71, and CD26). In comparison to non-activated control cells, the percentage of CD26+ HPBLs increased significantly after stimulation, by 85% (33 ± 8% vs. 61 ± 14% of total HPBLs, p < 0.001) (Figure 1(a)), while the percentages of CD69+ and CD71+ cells were 6-fold and 5-fold those of control cells (54.29 ± 20.87% vs. 9.07 ± 7.28%, p < 0.01; 30.6 ± 14% vs. 5.8 ± 2.46%, p < 0.05), respectively, and the percentage of CD25+ HPBLs was 68% higher than the value in the control group (17.65 ± 6.58% vs. 10.49 ± 9.41%) (Figure 1(b)). These results indicate that a substantial part of the HPBLs was activated after stimulation with immobilized anti-CD3 mAb.
To determine the proliferated new generations of lymphocytes after stimulation, the CFSE assay was used. As shown in Figures 1(c) and 1(d), at day three after stimulation, the stimulated group (hollow black histogram) showed five additional peaks that represent five increased generations of HPBLs whereas the PBS control group (shaded red histogram) showed only one peak remaining in the original position, indicating that no new generation was generated. These results provide evidence that the lymphocytes proliferated and increased by up to five new generations after stimulation compared to the lymphocytes of the PBS control group that had not proliferated within three days.
3.2. Increased Percentages of CD4+, CD4+CD26+, and CD8+CD26+ HPBLs after Stimulation.
In order to clarify the role of CD26 in lymphocyte differentiation, the percentages of CD4+ T lymphocytes (T helper cells) and CD8+ T lymphocytes (T cytotoxic cells), as well as the percentages of cells coexpressing each of these two subpopulation markers with CD26, were analyzed after stimulation. As shown in Figure 2, after stimulation the percentage of CD4+ cells increased from 32.57 ± 8.91% to 54.72 ± 12.85% of total HPBLs, while the percentage of CD8+ cells did not increase significantly. This result suggests a strong proliferation of the T helper (CD4+) subpopulation of T lymphocytes after stimulation. Further analysis revealed that, after stimulation, the percentage of cells coexpressing CD4 and CD26 (CD4+CD26+) in total HPBLs was 2.8-fold of that in the control group (39.98% vs. 14.43%). In the stimulated CD4+ subpopulation, about 73% of the CD4+ cells coexpressed CD26, while in the control CD4+ subpopulation only 40% did (Figures 2(a) and 2(b)). As CD26 is a known costimulator of T cell activation, the observation in the present work that the increased T helper (CD4+) cells after stimulation mostly coexpressed CD26 indicates that the activation and proliferation of CD4+ cells are closely related to CD26 expression.
While the percentage of CD8+ cells did not increase significantly after stimulation, we found that the percentage of CD8+CD26+ cells in the stimulated group was about 2.1 times that of the control group (14.28 ± 3.35% vs. 6.72 ± 4.21%). In the stimulated group, approximately 40% of CD8+ cells coexpressed CD26, compared with 21% of the CD8+ cells in the control group (Figures 2(c) and 2(d)). The increased percentage of CD8+CD26+ cells suggests that CD26 is also related to the activation of CD8+ cells. Interestingly, the percentage of total CD8+ cells was not increased significantly. Since cell survival analysis showed almost no dead lymphocytes after stimulation (data not shown), this suggests that T cytotoxic (CD8+) cells hardly proliferated, or that their proliferation rate was much slower than that of CD4+ cells.
3.3. Higher Percentages of CD4+, CD4+CD26+, and CD8+CD26+ Cells in the CD26high Group.
For further analysis of the correlation of CD26 with T cell differentiation, CD26+ cells were separated after stimulation using MACS MicroBeads conjugated with anti-mouse IgG, after binding of the CD26+ lymphocytes with anti-human CD26 mAb (Figure 3). After separation, two groups of cells were obtained: the CD26 low-expressing group (CD26low) and the CD26 high-expressing group (CD26high). The expression profiles of CD4 and CD8 and their coexpression with CD26 on the surfaces of cells in the CD26low and CD26high groups were analyzed. As shown in Figure 4, the percentage of CD4+ cells in the CD26high group was 2.2-fold of that in the CD26low group (62.70 ± 14% vs. 28.28 ± 9%, p < 0.005), while the percentage of CD8+ cells was lower in the CD26high group than in the CD26low group (32.24 ± 5% vs. 45.11 ± 9%, p < 0.05). Further analysis showed that the percentage of CD4+CD26+ cells in the CD26high group was 6-fold of that in the CD26low group (44.27 ± 15% vs. 7.13 ± 7%, p < 0.01) (Figures 4(a) and 4(b)), while the percentage of CD8+CD26+ cells in the CD26high group was about 3.5-fold of that in the CD26low group (Figures 4(c) and 4(d)). These results show that, after stimulation, CD26 expression occurred mostly in T helper (CD4+) cells and only a small part of the T cytotoxic (CD8+) cells expressed CD26, indicating activation of most T helper cells but only a few T cytotoxic cells. Considering the greatly increased percentages of CD4+ and CD4+CD26+ cells after stimulation, CD26 is undoubtedly closely involved in the proliferation of T helper (CD4+) cells.
3.4. Higher Secretion of Th1 and Th17 Typical Cytokines or Expression of Th17 Molecular Markers in Cells of the CD26high Group.
After three days of stimulation, the levels of different cytokines were measured by ELISA. As shown in Figure 5, great amounts of IL-2, IFN-γ, and IL-6 were produced after stimulation. The level of IL-13 also increased after stimulation, but only to a very limited extent, while the secretion level of IL-4 did not increase significantly. As is known, IL-2 and IFN-γ are mainly secreted by Th1 cells, and IL-4 and IL-13 are mainly secreted by Th2 cells. Although IL-6 is mainly secreted by macrophages during acute inflammation, more and more reports suggest that IL-6 is also secreted by T cells, such as Th17 cells.
To investigate the association of CD26 expression with CD4+ cell differentiation, the percentages of T helper subpopulations were determined by flow cytometry after cells were labeled with fluorescence-conjugated antibodies against the corresponding cytokines or cell surface markers. The results showed that the percentages of cells secreting the Th1 typical cytokines IL-2 and IFN-γ in the CD26high group were significantly higher than those in the CD26low group (Figure 6(a)). The percentage of cells secreting IL-2 in the CD26high group was approximately three times that of the CD26low group (25.93 ± 5.39% vs. 8.89 ± 5.85%), and the percentage of cells secreting IFN-γ in the CD26high group was about seven times that of the CD26low group (30.17 ± 11.14% vs. 4.45 ± 2.63%). Similarly, as shown in Figure 6(b), the percentages of cells secreting Th17 typical cytokines (IL-6, IL-17, and IL-22) or expressing Th17 biomarkers (IL-23R, CD196, and CD161) were evidently higher in the CD26high group than in the CD26low group. The percentages of cells secreting IL-6 or IL-17 in the CD26high group were about 7-fold of those in the CD26low group, and the percentage of cells expressing IL-23R was likewise 7-fold of that in the CD26low group (35.93% vs. 4.98%). In addition, the percentages of cells expressing the Th17 surface biomarkers CD196 and CD161 in the CD26high group were 2.8-fold and 3-fold of those in the CD26low group (34.73% vs. 12.35% and 42.52% vs. 13.59%), respectively. Histogram analysis showed that the expression levels of the Th1 and Th17 typical cytokines (IL-2, IFN-γ, IL-6, IL-17, and IL-22) or of the Th17 typical surface marker IL-23R in cells of the CD26high group were much higher than in cells of the CD26low group (Figure 6(c)). These results suggest that the expression of CD26 is involved in the regulation of the differentiation and functions of the Th1 and Th17 subpopulations of T lymphocytes.
On the other hand, the percentages of cells secreting the Th2 typical cytokines IL-4 or IL-13 were not only exceptionally low but also showed no significant differences between the CD26high and CD26low groups (Figure 6(a)). Similarly, histogram analysis showed no significant differences in the expression levels of the Th2 cytokines (IL-4 and IL-13) between cells of the CD26high and CD26low groups (Figure 6(c)). In addition, the percentages of cells expressing molecular markers of regulatory T cells (CD25+Foxp3+ or CD4+Foxp3+) in the CD26high group did not differ significantly from those in the CD26low group (Figure 6(d)). These results suggest that CD26 expression is not correlated with the differentiation and functions of the Th2 and Treg subpopulations of T lymphocytes after antigen stimulation.
3.5. Coexpression of CD26 with Th1 or Th17 Typical Cytokines in Cells of the CD26high Group.
The association of CD26 expression with the differentiation of the Th1 or Th17 subset was further analyzed by determining the coexpression of CD26 with each of the Th1 typical cytokines (IL-2 or IFN-γ), the Th17 typical cytokines (IL-6, IL-17, and IL-22), or the Th17-specific surface marker (IL-23R). In comparison to the CD26low group, the percentages of cells coexpressing CD26 with each of these cytokines were markedly higher in the CD26high group (Figure 7(a)). The percentages of cells coexpressing CD26 with IL-2 (CD26+IL-2+) or IFN-γ (CD26+IFN-γ+) in the CD26high group were 3.5- and 3-fold of those in the CD26low group (20.31% vs. 5.83% and 15.66% vs. 5.18%), respectively. Notably, the percentages of cells coexpressing CD26 with IL-17 (CD26+IL-17+), IL-6 (CD26+IL-6+), or IL-22 (CD26+IL-22+) in the CD26high group were nearly 6-fold, 5-fold, and 6.5-fold of those in the CD26low group (20.14% vs. 3.43%, 14.81% vs. 3%, and 18.64% vs. 2.86%), respectively. Also, the percentage of cells coexpressing CD26 with the Th17 marker IL-23R (CD26+IL-23R+) in the CD26high group was 6-fold of that in the CD26low group (23.14% vs. 3.7%) (Figure 7(a)). Fluorescence microscopy showed that the CD26 protein was predominantly located on the cell plasma membrane, while IL-2, IFN-γ, IL-17, and IL-22 were present in the cytosol and IL-23R was also mainly located on the cell surface. After merging the photos, CD26 was found to be coexpressed with IL-2, IFN-γ, IL-17, IL-22, or IL-23R in the same lymphocytes (Figure 7(b)). Since IL-2 and IFN-γ are typical Th1 cytokines, the coexpression of Th1 cytokines with CD26 suggests a correlation of CD26 with the differentiation and function of Th1 cells. Similarly, IL-17 and IL-22 are typical Th17 cytokines, and IL-23R is a typical Th17 cell surface marker. Therefore, the coexpression of Th17 cytokines or markers with CD26 suggests a correlation of CD26 with the differentiation and function of Th17 cells.
Discussion
CD26 has been determined to be one of the costimulators of T cell activation [3,4], and the costimulatory effect of CD26 on T cell activation may be mediated by its interaction with ecto-adenosine deaminase (ADA), the tyrosine phosphatase CD45, CARMA1, or caveolin-1 [20,21]. In the present work, lymphocytes were stimulated with an immobilized anti-CD3 mAb (OKT3, IgG2a) to further investigate the role of CD26 in T cell differentiation. Three days after stimulation, the activation of lymphocytes was confirmed by the enhanced expression of the lymphocyte activation markers CD26, CD69, CD71, and CD25 (Figures 1(a) and 1(b)). CD69 is one of the earliest cell surface antigens expressed by T cells following activation; it acts as a costimulatory molecule and surface marker of T cell activation and proliferation. CD71 (the transferrin receptor) and CD25 (the IL-2 receptor alpha chain) are two other molecular surface markers of T cell activation and proliferation [22,23]. The significant increase in the expression of CD69, CD71, and CD25 indicates that most of the lymphocytes were activated after stimulation [23]. In addition, CD26 expression was also significantly upregulated after stimulation (Figure 1(a)), suggesting an association of CD26 with the activation of T lymphocytes, which is consistent with previous studies [3,4]. After stimulation, the coexpression of CD26 with CD4 or CD8 increased markedly (Figure 2), indicating that the expression of CD26 is related not only to the activation of CD4+ cells but also, to a certain extent, to the activation of CD8+ cells. A previous study reported that a unique pattern of high CD26 expression was identified on influenza-specific CD8+ T cells but not on CD8+ T cells specific for cytomegalovirus, Epstein-Barr virus, or HIV, which suggested that high CD26 expression may be a characteristic of long-term memory cells [24]. A later study indicated that CD26+CD8+ cells belong to the early effector memory T cell subsets; CD26-mediated costimulation of CD8+ cells provokes effector function via granzyme B, tumor necrosis factor-α, IFN-γ, and Fas ligand [25]. The role of CD26 in the differentiation and function of CD8+ cells needs further investigation.
Thereafter, the proliferation of lymphocytes after stimulation was analyzed. It was found that, in comparison to lymphocytes without stimulation (PBS control), which did not proliferate, the lymphocytes stimulated with immobilized anti-CD3 mAb proliferated for up to five generations (Figures 1(c) and 1(d)). Further analysis showed that, after stimulation, the percentage of CD4+ cells in total HPBLs increased significantly while the percentage of CD8+ cells did not change (Figure 2). The upregulated percentage of CD4+ cells suggests that the immobilized anti-CD3 mAb triggered mainly the proliferation of CD4+ lymphocytes [26]. It was found that the percentage of CD4+ cells in the CD26high group was significantly higher than that in the CD26low group, and that most of the CD4+ cells coexpressed CD26 (Figures 4(a) and 4(b)). Previously, Ohnuma et al. reported that CD26 is thought to be mostly expressed by memory T helper cells, and that its expression is preferential on CD4+ cells and associated with T cell activation as a costimulatory molecule [4]. Blockade of CD26-mediated T cell costimulation with soluble caveolin-1 induced anergy in CD4+ cells [20]. Besides studies on the involvement of CD26 in the activation and proliferation of CD4+ T cells in vitro, in vivo investigation using CD26 knockout mice showed a decreased percentage of CD4+ cells [8]. CD4+ cells are T helper cells that secrete different cytokines upon T cell activation, and these cytokines play a crucial role in the activation and/or proliferation of other effector cells, such as B cells, cytotoxic T cells, and macrophages [27,28]. The higher percentage of CD4+ cells in the CD26high group and the high CD26 expression in activated CD4+ cells observed in the present work further confirm that CD26 expression is involved not only in the activation but also in the proliferation and further bioprocesses and functions of CD4+ cells.
After activation, CD4+ cells proliferate and differentiate into different subpopulations. Th1 and Th2 are the two main and earliest defined subpopulations of T helper cells [27]. Th1 cells can produce large amounts of the cytokines IFN-γ and IL-2, while Th2 effector cells are characterized by the production of IL-4 and IL-13 [28]. In the current work, after three days of stimulation with immobilized anti-CD3 mAb, large amounts of IL-2, IFN-γ, and IL-6 were detected in the cell culture by ELISA, while the levels of IL-13 and IL-4 were very low (Figure 5). After cell sorting of CD26-expressing cells, the percentages of cells secreting each of the Th1 typical cytokines IFN-γ and IL-2 in the CD26high group were significantly higher than those in the CD26low group (Figures 6(a) and 6(c)). Moreover, most of the cells secreting IFN-γ or IL-2 coexpressed CD26 (Figure 7). In a previous study, the upregulation of CD26 expression on CD4+ cell surfaces was identified to be related to the production of Th1 cytokines [4]. It was reported that solid-phase immobilized anti-CD26 mAb had a comitogenic effect, inducing CD4+ lymphocyte proliferation and enhancing IL-2 production in conjunction with submitogenic doses of anti-CD3 [19]. Inhibition of DPPIV/CD26 enzyme activity has been suggested to reduce the production of IL-2, IL-6, and IFN-γ by human and mouse T cells under mitogen stimulation [7]. Supporting these findings, the results of the present work show that the expression of CD26 is associated with the differentiation of Th1 cells. Th1 is an important subset of T helper cells, and the positive relation between the activation of CD4+ cells and CD26 expression (Figures 4(a) and 4(b)) favors the differentiation of CD4+ cells into the Th1 subset.
Interestingly, the percentages of cells secreting the Th2 typical cytokines IL-4 or IL-13 were not only very low (<5%) in the CD26low and CD26high groups, but they also did not differ between the two groups (Figure 6(a)). As one of the main subpopulations of T helper cells, the Th2 subset is often recognized as an opposite of Th1 cells, since Th2 cytokines may suppress the activity and proliferation of Th1 cells during immune responses [29]. Our results indicate that CD26 expression is not related to the differentiation of CD4+ cells into the Th2 subset after antigen stimulation. Besides the Th1 and Th2 subsets, Th17 cells and Tregs are the other two important T helper subpopulations. Th17 is a more recently identified subset of CD4+ cells [10], distinct from the classic Th1 and Th2 subsets [11,30]. These cells originate from naive CD4+ precursor cells mainly in the presence of TGF-β and IL-6, and their differentiation requires IL-23 [13,14]. As Th17 is a novel member of the CD4+ T subset, it is important to clarify the role of CD26 in its differentiation and function. After cell sorting, the percentage of cells secreting Th17 typical cytokines (IL-17 and IL-22) or expressing Th17 molecular markers (IL-23R, CD161, and CD196) was found to be significantly higher in the CD26high group than in the CD26low group (Figure 6(b)). Moreover, most of the cells secreting IL-17 and IL-22 or expressing IL-23R, CD161, and CD196 coexpressed CD26 (Figure 7). This indicates an involvement of CD26 in the differentiation of CD4+ cells into the Th17 subset. A previous study showed that Th17 cells express a high level of CD26 and that Th17 cells can be phenotypically identified by their CD26 expression [15]. Th17 cells play an important role in preventing pathogen invasion through the secretion of proinflammatory cytokines. Clinical research has found that CD26 is related to diseases involving Th17-initiated immune responses with chronic inflammation or autoimmunity, such as rheumatoid arthritis and multiple sclerosis [31].
Recently, it has been reported that inhibition of the enzyme activity of CD26 by sitagliptin reduced the proliferation and Th1/Th17 differentiation of human lymphocytes in vitro [32], and that CD26 costimulatory blockade improves lung allograft rejection and is associated with enhanced IL-10 expression in vivo [33]. We have also shown recently that CD26 deficiency results in a delayed allogeneic skin graft rejection after allogeneic skin transplantation. The concentrations of serum IgG, including its subclasses IgG1 and IgG2a, were significantly reduced in CD26−/− mice during graft rejection. The secretion levels of the cytokines IFN-γ, IL-2, IL-6, IL-4, and IL-13 were significantly reduced, whereas the level of IL-10 was increased, in the serum of CD26−/− mice compared to CD26+/+ mice. Additionally, the concentration of IL-17 in serum and the percentage of cells secreting IL-17 in mouse peripheral blood lymphocytes (MPBLs) were both significantly lower, while the percentage of regulatory T cells (Tregs) was significantly higher, in MPBLs of CD26−/− mice than in those of CD26+/+ mice [18]. In line with these in vivo experiments, the results of the present in vitro study confirm that the expression of CD26 is not only highly correlated with the differentiation of Th1 and Th17 cells but also plays an important role in their functions. Precisely because CD26 plays an indispensable role in the differentiation and function of Th1 and Th17 lymphocytes, its absence results in a lack of effective Th1 and Th17 cells under relevant pathological conditions. The present study provides more insight into the role of CD26 in the function of Th17 cells and related diseases and will support future research in this field.
It is reported that CD26 can be used as a negative selection marker for Tregs [34]. In the present study, the percentages of Tregs were very low in the CD26 high and CD26 low groups, and no significant difference was found between the two groups ( Figure 6(d)), indicating that the expression of CD26 is not necessary for the differentiation of Tregs after immobilized anti-CD3 mAb stimulation.
In conclusion, CD26 is not only an activation marker for T lymphocytes, but its expression is closely related to the subsequent proliferation, differentiation, and functions of T lymphocytes. Considering that the balance between Th1 and Th2 and the balance between Th17 and Tregs play a prominent role in immune responses [35,36], our results in this study demonstrated that the high expression of CD26 is beneficial to the differentiation of T lymphocytes into Th1 and Th17 subpopulations after antigen stimulation, indicating a crucial role of CD26 in regulating the immune response to inflammation and autoimmune reactions. The correlation of CD26 with the differentiation balance between Th1 and Th2 and between Th17 and Tregs observed in this study provides more insights into the role of CD26 in related diseases. The important role of CD26 in immune regulation suggests that it would become a therapeutic target for related diseases [37].
Data Availability
This manuscript describes original work; neither the entire content nor any part of it has been published previously or accepted elsewhere. The version presented at https://www.authorea.com/users/364553/articles/484935involvement-of-cd26-in-differentiation-and-functions-of-th1and-1-th17-subpopulations-of-t-lymphocytes is only a preprint and has never been accepted or published.
Conflicts of Interest
The authors declare no financial or commercial conflict of interest.

Acknowledgments

and 449) and the China Scholarship Council. We acknowledge support from the German Research Foundation (DFG) and the Open Access Publication Fund of Charité-Universitätsmedizin Berlin.
"Biology",
"Medicine"
] |
Nonlinear Model Predictive Growth Control of a Class of Plant-Inspired Soft Growing Robots
Recently, researchers have shown increased interest in considering plants as a source of inspiration for designing new robot locomotion. Growing robots, which imitate the biological growth exhibited by plants, have proved well suited to unpredictable and distal environments due to their morphological adaptation and tip-extension capabilities. However, as a result of the irreversible growing process exhibited by growing robots, classical control schemes can fail to obtain feasible solutions that respect the permanent growth constraint. Thus, in this article, a Nonlinear Model Predictive Control (NMPC) scheme is proposed to guarantee the robot's performance in point stabilization while respecting the constraints imposed by the growing process and the control limits. The proposed NMPC-based growth control is applied to the kinematic model of recently proposed plant-inspired robots in the literature, namely vine-like growing robots. Numerical simulations have been performed to show the effectiveness of the proposed NMPC-based growth control in terms of point stabilization, disturbance rejection, and obstacle avoidance, and encouraging results were obtained. Finally, the robustness of the proposed NMPC-based growth control is analyzed against various input disturbances using Monte Carlo simulations that can guide the tuning process of the NMPC.
I. INTRODUCTION
Motivated by the morphological adaptation capacity shown by snakes, elephant trunks, and octopus tentacles, soft continuum robots have demonstrated the potential to facilitate manoeuvring in tight and restricted environments [27]. Compared to rigid robots, continuum robots have curvilinear structures with continuously bending backbones that make them highly adaptable to their surroundings [22], [29]. However, continuum robots are commonly designed to have small lengths, which restricts their applicability in the navigation of distant environments [16].
Investigating the growing process exhibited by plants, a new mobility-by-growth approach has recently been proposed, giving rise to growing robots. These kinds of robots emulate biological growth by incrementally expanding their lengths, volumes, or knowledge [5]. Soft growing robots can reach narrow spaces searching for victims or can serve as channels to transfer air or water to them in emergency scenarios [25]. Earlier studies have reported the realization of long flexible robots in congested environments. For instance, Tsukagoshi et al. [25] proposed a multiple-degrees-of-freedom growing robot, called ''Active Hose,'' used for rescue and search scenarios. This robot was designed to be flexible, with the capability of expanding its length by connecting small flexible units of two degrees of freedom in series. A long flexible cable with a ciliary vibration mechanism was developed by Isaki et al. [14] to achieve navigation in narrow spaces. Expandable soft robots have also been proposed, such as the ''Slime Scope'' [19], a pneumatically driven expandable arm with a camera attached to its tip, used to search for and rescue people in rubble environments. Tsukagoshi et al. [24] developed a flexible hose-like robot that was able to steer in narrow environments under manual control.
Lately, vine-like growing robots, which mimic the growing process displayed by plants, have demonstrated excellent performance in investigation and rescue missions [21], [28]. Hawkes et al. [13] developed a novel growing robot using the concept of a tip-eversion mechanism [2]. These vine-like robots are made of thin-walled polyethene tubing that can expand up to several tens of meters while navigating challenging environments, either through teleoperation [8] or guided by obstacles [9], [10]. A steerable vine robot version was developed by Greer et al. [11] by inflating multiple series of pneumatic artificial muscles placed around the robot's spine. The increased length-to-diameter ratios, the lengthening capability, and the flexible structures allow vine-like robots to penetrate cluttered environments, as evaluated in [4].
Despite the potential of vine growing robots in unstructured and congested environments, there is still a notable paucity of work on feedback control of their growth in spatial environments. This is due in particular to the challenges posed by vine robots in terms of their coupled dynamics and the difficulty of practically deploying sensors on their lengthy bodies. In general, controlling soft continuum robots in joint and task spaces has been addressed in the literature. For instance, dynamic control of a planar multi-link soft continuum robot is proposed in [6], considering interaction with the environment. The curvature of each segment is selected as the controlled variable for the robot to achieve the target, while the robot's length is assumed to be inextensible. Seleem et al. [23] developed a computed-torque control based on the derived dynamics of a multi-section spatial continuum robot. The physical constraints on the sections' lengths were considered in the control loop as saturation blocks, which could potentially lead to nonlinearity and under-utilization of the control scheme.
There have been many attempts at controlling the growth of vine robots, either in the joint or in the task space. For instance, in [21] a stimulus-oriented control that imitates plant-root behaviour is employed to control the movement of root-like plant-inspired robots based on the tactile information received from the sensor embedded in the robot's root. Due to the relatively slow growing process reported in root-like robots, considering irreversibility constraints is not crucial in their control. An optimal control problem is formulated in [20] to control the tip of a plant-inspired root so as to minimize the energy spent by the root while penetrating the soil. The proposed control approach assumed planar robot dynamics, with the robot's length and curvature as the controlled variables. In [7], a Proportional-Derivative (PD) controller with gravity compensation is applied to the derived dynamics model of vine robots to achieve trajectory-following performance in the robot's joint space. Despite the success of these conventional control schemes, handling the irreversible growing process exhibited by growing vine robots with such schemes can be challenging, since once the robot has grown to a certain length, it cannot be retracted to a shorter length.
In this article, inspired by the significant improvement achieved by applying Model Predictive Control (MPC) [3] to the control of planar redundant manipulators [26], we develop a Nonlinear MPC scheme to control the growth of vine-like growing robots in task space, while considering the irreversible growing process and the actuator limits in the control loop. MPC is a class of optimal control that has long been used for large multiple-input, multiple-output control problems in chemical process control.
The key idea is to minimize an objective function over a finite prediction horizon subject to the dynamics of the robot model, represented as an equality constraint [17]. Meanwhile, other constraints, such as the irreversible growing process and the actuator limits of vine robots, can be described as inequality constraints over the prediction horizon as well. Hence, an optimization problem is solved at each time step to find the optimal control sequence suitable for driving the robot model to the required position in spatial space while considering the system's constraints. Since this optimization problem is solved at each time step before any control inputs are applied to the process, MPC-based control schemes have the potential to succeed in controlling growing robots compared to the conventional control approaches mentioned in the literature.
The key challenge in applying MPC to growing robots is the coupled nonlinear dynamics that complicate the prediction model to be incorporated in the control scheme. Nonlinear MPC has been applied to hydraulic systems, as in [12], where nonlinear dynamics were incorporated in the prediction model. Although a dynamics model implies a better representation of the real system, it requires heavy computation. Thus, the contributions of this article consist of the following aspects. (1) Application of NMPC-based growth control to plant-inspired vine growing robots to control their spatial movements in task space, considering the irreversibility constraints exhibited by the robot's growing process. (2) Incorporation of the robot's kinematics model as the NMPC prediction model to reduce the required computational cost while achieving significant performance, exploiting the relatively slow movements of vine robots while navigating the working environment. (3) A Monte Carlo simulation-based approach to assess the robustness of the proposed NMPC while guiding the process of parameter tuning.
After introducing the kinematic model of growing robots in Section II, the proposed Nonlinear Model Predictive Control (NMPC) for growth control of vine robots is discussed in Section III: first, the robot model is introduced; then, the objective function and the controller design are summarized. After simulation validations of the proposed NMPC-based growth control in Section IV, a final conclusion is drawn in Section V.
II. KINEMATICS MODEL OF VINE ROBOT
In this research, the ''vine robot'' developed in [4] is under discussion; this kind of robot can elongate its tip up to tens of meters via an eversion mechanism [11]. Air pressure is applied to its core tube, as depicted in Figure 1, to facilitate tip extension, while steering is achieved by applying air pressure through one or two of the serial Pneumatic Artificial Muscles (sPAMs) that are placed around the robot's circumference. A camera or other sensing device can be added to its tip to facilitate the navigation capability of the robot.
A. DIRECT KINEMATICS
The constant-curvature model [15], commonly applied in modeling continuum-like robots, is assumed here to derive the forward kinematics of the vine growing robot. The distal tip pose T_r^b with respect to the robot's base is expressed in terms of the robot configuration parameters q ∈ R^3, comprising its length s, the bending angle θ, and the plane angle φ, as shown in Figure 2; this yields the homogeneous transform of Eq. (1). The robot tip position p = [x, y, z]^T ∈ R^3 in Cartesian space can then be extracted from Eq. (1), giving Eq. (2). Although the actual actuation space of the vine robot is the set of sPAM lengths l = [s, l_1, l_2, l_3], using the shape space generalizes the control problem to suit any kind of continuum-like robot with a constant-curvature model.
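Since Eqs. (1)-(2) are not reproduced above, the following minimal sketch computes the tip position under the standard constant-curvature parameterization. The exact form is an assumption (the paper's equations are not shown here); the straight-configuration limit θ → 0 is handled explicitly.

```python
import numpy as np

def tip_position(s, theta, phi, eps=1e-9):
    """Tip position of a constant-curvature segment (assumed standard form):
    a segment of length s bends by theta in a plane rotated by phi about
    the base z-axis."""
    if abs(theta) < eps:
        # Straight segment: the tip lies on the base z-axis.
        return np.array([0.0, 0.0, s])
    r = s / theta  # radius of curvature
    return np.array([
        r * (1.0 - np.cos(theta)) * np.cos(phi),
        r * (1.0 - np.cos(theta)) * np.sin(phi),
        r * np.sin(theta),
    ])

# Example: 1 m of grown tube bent by 90 degrees in the x-z plane.
print(tip_position(1.0, np.pi / 2, 0.0))  # ~[0.637, 0.0, 0.637]
```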
B. DIFFERENTIAL KINEMATICS
The growing robot tip velocity ṗ ∈ R^3 is related to the time derivatives of the robot configuration parameters q̇ by ṗ = J(q) q̇, where the Jacobian matrix J(q) ∈ R^{3×3} is computed analytically as J(q) = ∂p/∂q, with p the robot tip Cartesian position given in Eq. (2).
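Because the NMPC in this article is built on the CasADi framework [1] (Section IV), the analytic Jacobian can also be obtained by symbolic differentiation rather than hand derivation. The sketch below assumes the constant-curvature tip position from the previous sketch (θ ≠ 0):

```python
import casadi as ca

q = ca.SX.sym('q', 3)              # q = [s, theta, phi]
s, theta, phi = q[0], q[1], q[2]
r = s / theta                      # radius of curvature (theta != 0 assumed)
p = ca.vertcat(
    r * (1 - ca.cos(theta)) * ca.cos(phi),
    r * (1 - ca.cos(theta)) * ca.sin(phi),
    r * ca.sin(theta),
)
J = ca.jacobian(p, q)              # 3x3 analytic Jacobian, J(q) = dp/dq
J_fun = ca.Function('J', [q], [J])
print(J_fun([1.0, 1.5708, 0.0]))   # evaluate at s = 1 m, theta = pi/2, phi = 0
```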
III. NONLINEAR MODEL PREDICTIVE GROWTH CONTROL
In this section, we present the NMPC-based growth control scheme proposed to control the growth of the vine robot in closed loop. The NMPC considers the irreversible growth constraint and the input constraints exhibited by vine robots while achieving the control objectives discussed below: point stabilization, obstacle avoidance, and trajectory tracking in task space.
A. MODEL DESCRIPTION
To incorporate the irreversible growth constraint exhibited by vine robots, the state x = [p q]^T ∈ R^6 is selected to combine both the robot tip position p in Cartesian space and its joint variables q. Hence, the nonlinear model representing the movement kinematics of vine robots is described as ẋ = [J(q); I_3] u (i.e., ṗ = J(q) u and q̇ = u), where J(q) ∈ R^{3×3} is the robot Jacobian obtained in Eq. (3), while u = q̇ ∈ R^3 is the velocity in the robot's configuration space, representing the manipulated variables. The vine robot state x is the controlled variable and is assumed to be fully observable. Although the full-observability assumption could be challenging in real applications of vine robots, this step aims to prove the applicability of MPC in controlling the growth of such robots. In future work, state estimation could be incorporated to relax this assumption. The irreversible growing process exhibited by vine robots is represented as inequality constraints imposed on the growing velocity and the length, i.e., ṡ ≥ 0 together with bounds on s.

B. COST FUNCTION

The key aim of the proposed MPC-based growth control is to guarantee the growing robot's stabilization performance over a desired reference state x_r = [x_r, y_r, z_r, s_r, θ_r, φ_r]^T defined in task and joint space. The controller should also consider the constraint imposed physically by the irreversible robot growth while searching for optimal control actions. Thus, the cost function J is chosen so as to evaluate the tracking performance and the control action over a prediction horizon N:

J = sum_{k=1}^{N} ( e_k^T Q e_k + Δu_k^T R Δu_k ),

where e = x − x_r denotes the tracking error, while Δu indicates the predicted control increment. The matrices Q ≥ 0 and R ≥ 0 are weighting matrices assumed to be constant over the prediction horizon N.
C. CONTROLLER DESIGN
The MPC strategy proposed to control the growth of vine robots is shown in Figure 3. The manipulated variable (u = q̇) is the velocity in configuration space, used to either elongate or steer the vine robot. The aim is to bring the robot state x(t) to the reference input x_r in the case of point stabilization, or to the reference trajectory x_r(t) in the case of trajectory tracking, for all instants t. Meanwhile, the growth and control-input constraints mentioned earlier have to be respected.
In conventional MPC, a discrete-time linear model of the plant under control is usually employed as the prediction model. However, as seen from Eq. (5), the kinematic model of vine robots is nonlinear and continuous, since it depends on the robot's configuration q. Thus, in the proposed NMPC-based growth control, the prediction model is a discrete version of the robot's kinematics model obtained using Euler discretization at each sample k along the prediction horizon, x_{k+1} = x_k + T f(x_k, u_k), where T denotes the sampling time and f is the continuous kinematics of Eq. (5). Using this prediction model, the NMPC predicts the robot's state x_p along the prediction horizon while applying all admissible control inputs ū, as highlighted in Figure 3.
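The following is a minimal sketch of how such an NMPC problem could be assembled in CasADi: Euler-discretized prediction model, quadratic cost, and the control sequence as decision variables. For brevity, the control effort u_k is penalized directly, whereas the cost above penalizes the increment Δu_k; the weights follow Section IV-A, and J_fun is the Jacobian function from the earlier sketch.

```python
import casadi as ca

T, N = 0.1, 10                               # sampling time and horizon (Sec. IV-A)
Q = ca.diag(ca.DM([1, 1, 1, 0, 0, 0]))       # state weights
R = ca.diag(ca.DM([0.5, 0.5, 0.5]))          # input weights

x0 = ca.SX.sym('x0', 6)                      # parameters: current state [p; q]
x_ref = ca.SX.sym('x_ref', 6)                # and reference state
U = ca.SX.sym('U', 3, N)                     # decision variables: u_k = qdot_k

cost, xk = 0, x0
for k in range(N):
    q = xk[3:6]
    # Eq. (5): xdot = [J(q) u; u], stepped forward with Euler discretization.
    xdot = ca.vertcat(ca.mtimes(J_fun(q), U[:, k]), U[:, k])
    xk = xk + T * xdot
    e = xk - x_ref
    cost = cost + ca.mtimes([e.T, Q, e]) + ca.mtimes([U[:, k].T, R, U[:, k]])

nlp = {'x': ca.vec(U), 'f': cost, 'p': ca.vertcat(x0, x_ref)}
solver = ca.nlpsol('solver', 'ipopt', nlp)   # re-solved at every control step
```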
IV. RESULTS AND DISCUSSION
In this section, we present simulation experiments conducted to evaluate the proposed NMPC-based growth controller while considering the locomotion and input constraints of vine-like robots. The NMPC-based growth controller is built using the CasADi framework [1]. MATLAB/Simulink with the ode45 solver is used to simulate the vine robot model in (1) together with the proposed NMPC-based growth control. First, we explain the experiment scenarios, accompanied by the results that confirm the capabilities of the proposed NMPC scheme.
A. POINT STABILIZATION RESULTS
As mentioned earlier, one of the applications of vine-like robots is to serve as a conduit delivering essentials to people in disaster scenarios. Thus, in the first simulation experiment, starting from an initial state x_0 = [0, 0, 0.4, 0.4, 0, 0]^T, the proposed NMPC-based growth controller is utilized to stabilize the tip of the vine-like robot at a set of predefined goal states in the space, x_d ∈ R^6. These states could represent potential locations for the robot to visit within the environment. The sampling time is chosen as T_s = 0.1 s with a prediction horizon N = 10. The state and input weighting matrices in (7) are chosen to be diagonal, with Q = diag(1, 1, 1, 0, 0, 0) and R = diag(0.5, 0.5, 0.5).
The first three elements of the robot's state are constrained between [−4, 4], defining the reachable space in the environment, while the remaining three state elements are constrained according to the robot's configuration limits highlighted earlier in the kinematics section.
On the other hand, the input inequality constraints that respect the irreversible nature of the growing vine-like robot and the actuator limits are imposed: the growing velocity ṡ is constrained to be non-negative, and all configuration velocities are bounded by the actuator limits. While achieving a new goal, both the x and y positions were slightly affected, as illustrated in Figure 4. To tackle this issue and bring the robot's tip back, the NMPC actuated the curvature angle θ in the positive direction while simultaneously increasing the robot length s. After a while, only the robot length was increased to compensate for the changed x and y positions. Finally, after 50 seconds of simulation time, the robot was required to reach a new z goal lower than the previous one. This would require the robot to shrink its length. However, due to the irreversible growing process, the robot cannot shrink its grown length. Thus, the NMPC decreased the robot curvature to approach the new desired goal. Although that helped in obtaining a reasonable error in the z coordinate, the other two coordinates were significantly affected. In all stages, the NMPC satisfied the state and input saturation constraints of the vine-like robot.
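Continuing the CasADi sketch above, box bounds on vec(U) can encode both the irreversibility (ṡ ≥ 0) and the actuator limits. The numeric limits and goal state below are hypothetical, chosen only for illustration; the paper's exact values are not reproduced in the text.

```python
import numpy as np

sdot_min, sdot_max = 0.0, 0.1        # m/s; zero lower bound forbids retraction
w_max = 0.5                          # rad/s bound on thetadot, phidot (assumed)

lbu = np.tile([sdot_min, -w_max, -w_max], N)   # column-major vec(U) bounds
ubu = np.tile([sdot_max,  w_max,  w_max], N)

x0_val = np.array([0, 0, 0.4, 0.4, 0, 0])        # initial state (Sec. IV-A)
xref_val = np.array([0.5, 0.5, 1.0, 1.0, 0, 0])  # an illustrative goal state
sol = solver(x0=np.zeros(3 * N), lbx=lbu, ubx=ubu,
             p=np.concatenate([x0_val, xref_val]))
u_opt = np.array(sol['x']).reshape(N, 3).T       # apply only u_opt[:, 0]
```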
B. OBSTACLE AVOIDANCE
In the second simulation scenario, the proposed NMPC is evaluated on avoiding obstacles that could exist in the environment. Thus, a static point obstacle is located at x_o = [x_o, y_o, z_o]^T within the robot pathway from a starting point x_0 to the end goal x_g = [1, 1, 1]^T meters away from its base. The prediction horizon at this stage is chosen as N = 30, while the sampling time is T = 0.1 s. To avoid the obstacle, a new nonlinear inequality constraint is introduced into the optimization problem to keep the Euclidean distance between the robot's tip p = (x, y, z) and the obstacle's position beyond a safe distance: ‖p − x_o‖_2 ≥ r_t + r_o, where ‖p − x_o‖_2 is the distance between the robot's tip and the obstacle, and r_t = 0.1 m and r_o = 0.1 m are the robot's tip and obstacle radii, respectively. As depicted in Figure 5, the NMPC succeeded in planning a safe path for the vine robot around the obstacle. The corresponding actuation shows that the robot had to alter its curvature and bending angle during navigation to avoid the obstacle. It is worth mentioning that this approach to avoiding obstacles cannot guarantee that the whole body of the vine robot will avoid the obstacle, since only the tip position is considered in Eq. (11). However, this could be tackled in future work by dividing the robot's body into segments whose positions could be anticipated from the robot shape parameters; one more constraint could then be added to ensure that these segments stay away from the obstacle as well.
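A sketch of how the constraint of Eq. (11) could be appended to the NLP built earlier follows. The squared distance is used so the constraint stays smooth near zero; the obstacle position is illustrative, while the radii follow the values quoted above.

```python
x_obs = ca.DM([0.5, 0.5, 0.5])       # illustrative obstacle position
r_t, r_o = 0.1, 0.1                  # tip and obstacle radii (Sec. IV-B)

g_obs, xk = [], x0
for k in range(N):
    q = xk[3:6]
    xk = xk + T * ca.vertcat(ca.mtimes(J_fun(q), U[:, k]), U[:, k])
    # Squared tip-obstacle distance minus squared safety radius: must be >= 0.
    g_obs.append(ca.sumsqr(xk[0:3] - x_obs) - (r_t + r_o) ** 2)

nlp_obs = {'x': ca.vec(U), 'f': cost, 'g': ca.vertcat(*g_obs),
           'p': ca.vertcat(x0, x_ref)}
solver_obs = ca.nlpsol('solver_obs', 'ipopt', nlp_obs)
# Solve with lbg = 0 and ubg = +inf so the tip stays outside the safety radius.
```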
C. TRAJECTORY TRACKING
A spiral reference trajectory is considered to assess the proposed NMPC-based growth controller's trajectory-tracking performance. This spiral movement could be useful if the robot is required to wrap around a pillar, for instance, to reach its top. The robot starts from an initial state x_0 = [0, 0, 0.9, 0.9, 0, 0]^T. The controller time step is chosen as 0.1 s, the prediction horizon is N = 20, and the total simulation time is 20 seconds. The state and input weighting matrices are chosen as Q = diag(10, 10, 1, 0, 0, 0) and R = diag(0.1, 0.1, 0.1), respectively. Figure 6 shows the NMPC performance in terms of the difference between the actual and reference trajectories. The obtained Root Mean Square (RMS) errors between the reference and actual robot trajectories are (0.19, 0.193, 0.11) meters in the x, y, and z directions, respectively. As noted in Figure 6, the errors in the x and y coordinates increase with time. This is because the z coordinate increases linearly with time, which requires the robot's length to be increased and subsequently affects the actual x and y tip positions. We believe that properly designed weighting matrices Q and R would mitigate this issue. In Figure 7, we imposed an inequality constraint on the robot's state y as a workspace limitation. The proposed NMPC shows satisfactory tracking performance while respecting the imposed constraint alongside the robot's other locomotion constraints.
To compare our proposed NMPC-based growth control with attempts found in the literature, we implemented two Jacobian-based trajectory-tracking controllers. In the first controller, the irreversible growing process and the actuator limits were not considered, while in the second controller these constraints were imposed as saturation blocks. In Jacobian-based trajectory tracking, the control action is computed from the tracking error e = x_ref − x between the reference trajectory and the robot's feedback state through the robot Jacobian, where K is a positive definite diagonal gain matrix chosen as K = I. As shown in Figure 8(a), the performance of the Jacobian-based controller without the process constraints is satisfactory and close to that achieved by our proposed NMPC-based growth control. However, when the process constraints are considered, the Jacobian-based control fails to follow the reference trajectory, as highlighted in Figure 8(b). This shows how the proposed NMPC controller outperforms the Jacobian-based controller when the irreversible process and actuator constraints are considered, owing to the capability of MPC controllers to anticipate the future state of the process and plan accordingly while considering any process constraints.
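The paper's exact control law is not reproduced in the text above; the sketch below uses one common resolved-rate form, qdot = J(q)^{-1}(ṗ_ref + K e), as an assumed stand-in, with K = I as stated. The final clipping line emulates the saturation block of the second baseline.

```python
import numpy as np

def jacobian_rate_control(q, p, p_ref, pdot_ref, K=np.eye(3)):
    """A common resolved-rate Jacobian-based tracking law (assumed form):
    qdot = J(q)^{-1} (pdot_ref + K e), with e = p_ref - p."""
    Jq = np.array(J_fun(q))                  # evaluate the analytic Jacobian
    e = p_ref - p
    return np.linalg.solve(Jq, pdot_ref + K @ e)

# Emulating the constrained baseline: clipping qdot[0] to be non-negative acts
# as the saturation block, which is what degrades tracking in Figure 8(b).
qdot = jacobian_rate_control(np.array([1.0, 0.5, 0.2]),
                             np.zeros(3), np.ones(3), np.zeros(3))
qdot[0] = max(qdot[0], 0.0)
```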
D. ROBUSTNESS ANALYSIS
One of the key factors ensuring the robustness of an MPC control system is an insignificant level of discrepancy between the prediction model and the real system under control. Having a nonlinear kinematics model as the prediction model in our proposed NMPC-based growth control of vine robots plays a crucial role in satisfying this condition. In fact, the system behaviour can be anticipated at each future time step by relying on the nonlinear prediction model, which acts as a replica of the vine robot model under control.
In this experiment, we assess the robustness of the proposed NMPC-based growth control over a wide range of input disturbances. Monte Carlo simulations are utilized to evaluate the robustness in terms of tracking performance with respect to variations in model uncertainties. This approach evaluates the NMPC controller without the need to simulate each parameter variation separately, which could take significant time.
Thus, 150 values of the variances σṡ, σθ̇, and σφ̇ are randomly selected from uniform distributions over [0.1, 0.3] m/s, [2, 10] rad/s, and [2, 10] rad/s, respectively. These model uncertainties could represent, for example, the effect of varying wind speed on the NMPC performance in each control coordinate. The NMPC-based growth control is assessed through the tracking performance on the trajectory of Subsection IV-C by calculating the RMSE in each simulation scenario. In the first set of analyses, the weighting matrices were chosen as Q_1 = diag(10, 10, 1, 0, 0, 0) and R_1 = diag(0.1, 0.1, 0.1). Due to limited space, Figure 9 shows the results of 15 scenarios selected and sorted according to their RMSE values. The RMSE did not change significantly across the simulated scenarios with the chosen disturbances.
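A minimal sketch of this Monte Carlo loop follows. The helper run_tracking is hypothetical: it stands for a closed-loop simulation of the Subsection IV-C trajectory with zero-mean input noise of the given standard deviations, returning the tip-position tracking errors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs = 150
# Disturbance levels drawn uniformly, as in the text: sdot in [0.1, 0.3] m/s,
# thetadot and phidot in [2, 10] rad/s.
sig = np.column_stack([rng.uniform(0.1, 0.3, n_runs),
                       rng.uniform(2.0, 10.0, n_runs),
                       rng.uniform(2.0, 10.0, n_runs)])

rmse = np.empty(n_runs)
for i in range(n_runs):
    # run_tracking(...) is a hypothetical helper simulating the closed loop
    # of Subsection IV-C under the sampled input disturbance.
    errors = run_tracking(input_noise_std=sig[i])
    rmse[i] = np.sqrt(np.mean(errors ** 2))
print(rmse.mean(), rmse.std())       # spread of RMSE across scenarios
```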
Subsequently, to show the effect of the weighting matrices on robustness, three more Monte Carlo simulations were conducted with the same disturbances but with different values of Q and R, as highlighted in Figure 9. These matrices were chosen as follows:

Q_2 = diag(10, 10, 10, 0, 0, 0), R_2 = diag(0.1, 0.1, 0.1)
Q_3 = diag(10, 10, 1, 0, 0, 0), R_3 = diag(1, 0.1, 0.1)
Q_4 = diag(10, 10, 1, 0, 0, 0), R_4 = diag(0.05, 0.05, 0.05)

As shown in Figure 9, the simulation with matrices Q_4 and R_4 yields the lowest RMSE, corresponding to increased state weights relative to the control-input weights. On the other hand, the pairs Q_2, R_2 and Q_3, R_3 show the worst performance. These results can suggest the direction for choosing the weighting matrices that give the best performance.
V. CONCLUSION
In this article, a Nonlinear Model Predictive Control (NMPC) scheme is presented that is capable of automatically driving the tip of a vine growing robot to a spatial target position in the environment. The proposed NMPC-based growth control succeeded in controlling the vine robot in closed loop while respecting the irreversible-growth and actuation constraints. The nonlinear kinematics model of the vine robot is used as the controlled plant, and a discrete version of it serves as the controller's prediction model. The proposed NMPC growth control is simulated over different scenarios, ranging from point stabilization and trajectory following to obstacle avoidance, with satisfactory performance results. In addition, a robustness analysis based on Monte Carlo simulations has been conducted to evaluate the vine robot's growth under various disturbance conditions, as well as to guide the choice of the weighting matrices in the control problem so as to maximize tracking performance. In future work, building a Moving Horizon Estimation (MHE) [18] scheme is promising for relaxing the full-state-observability assumption made in this research. Our work could also be extended to the case where the dynamics model of vine growing robots is used instead of the kinematics model, either as a prediction model or as the model of the process under control.

IBRAHIM A. HAMEED (Senior Member, IEEE) received the Ph.D. degree in industrial systems and information engineering from Korea University, Seoul, South Korea, and the Ph.D. degree in mechanical engineering from Aarhus University, Aarhus, Denmark. He is currently a Professor with the Department of ICT and Natural Sciences, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology (NTNU), Norway, where he is also the Deputy Head of research and innovation. His current research interests include artificial intelligence, machine learning, optimization, and robotics. He is also the elected Chair of the IEEE Computational Intelligence Society (CIS) Norway Section.

JEE-HWAN RYU (Senior Member, IEEE) received the B.S. degree in mechanical engineering from Inha University, Incheon, South Korea, in 1995, and the M.S. and Ph.D. degrees in mechanical engineering from the Korea Advanced Institute of Science and Technology, Daejeon, South Korea, in 1995 and 2002, respectively. He is currently an Associate Professor with the Department of Civil and Environmental Engineering, Korea Advanced Institute of Science and Technology. His research interests include haptics, telerobotics, teleoperation, exoskeletons, and autonomous vehicles.
"Engineering"
] |
Bamboo as reinforcing material in concrete structures: A literature study
The production of conventional building materials such as steel, concrete, and brick causes severe exploitation of natural resources and emission of greenhouse gases. Therefore, alternative eco-friendly, sustainable, and inexpensive building materials are required. Bamboo is a natural material that can replace steel in various structures, and several studies have evaluated its potential as a steel replacement. This paper provides a literature review on the use of bamboo-reinforced concrete (BRC) in various countries.
Introduction
Recently, the construction of buildings and infrastructure projects has been increasing rapidly. Owing to huge population growth in urban areas, there is a significant demand for homes, which has led to the exploitation of traditional natural resources. Concrete is widely used as a construction material worldwide, as it exhibits high compressive strength. However, it has very low tensile strength, only about 10 % of its compressive strength. Owing to its poor tensile strength and brittleness, concrete requires tensile reinforcement, typically in the form of steel rebar. Although steel works well with concrete, it has many disadvantages, including heavy weight, corrosion susceptibility, high cost, and ecological unfavorability. To overcome these drawbacks, numerous researchers are developing novel approaches to provide sustainable replacements for steel reinforcement. Bamboo is a potential replacement for steel in reinforced concrete owing to its mechanical properties and positive economic, social, and environmental impacts, particularly for low-cost structures in rural and urban areas. In only a few months, bamboo reaches its entire growth potential, and within years it reaches its maximum mechanical strength. Bamboo is known to have a roughly 50 % lower longitudinal ultimate tensile strength than mild steel, yet it has a far higher specific tensile strength than cast iron, structural steel, aluminium alloys, wood, and concrete [1]. Bamboo is a useful alternative to steel reinforcement owing to its low weight, high tensile strength, and ability to regenerate, particularly in locations with easy access to locally produced bamboo. One of the most significant benefits of growing bamboo is its ability to absorb carbon dioxide; therefore, it is also known as a carbon sink. In recent times, the rapid increase in global warming is of great concern. Bamboo can help reduce the ill effects of greenhouse gases and combat global warming. The large-scale use of bamboo as a reinforcement material will lead to increasing demand and, consequently, increasing production, that is, cultivation. Live bamboo absorbs large amounts of CO2 from the air and releases O2, thereby purifying the atmosphere, whereas steel production negatively affects the environment. Mature bamboo can be cut, dried, and treated to make it suitable for use in construction. Thus, although dead bamboo has no atmospheric purifying effect, its production results in a greener environment. According to Akwada and Akinlabi [2], bamboo plays a significant role in mitigating global climate change. Bamboo has one of the highest rates of carbon sequestration worldwide. It grows rapidly; thus, it produces oxygen at a higher rate than other equivalent stands of trees. According to a report by the Environmental Bamboo Foundation (2001), bamboo releases 35 % more oxygen than equivalent stands of trees and sequesters up to 12 tonnes of carbon dioxide from the air per hectare per year. Bamboo can significantly reduce greenhouse gas emissions, create jobs, and thus produce high incomes for cultivators. Bamboo is a fast-growing plant with strong roots and rhizomes that improve soil stability. This indicates that it can stabilise and regenerate land, thereby preventing landslides. Its roots are highly effective in preventing soil erosion by firmly holding the soil together. Bamboo has several advantageous characteristics which render it a suitable building material. However, several shortcomings are also observed. Bamboo degrades faster than steel because it is a natural
material. It has many species, whose properties vary widely, and not all species are suitable as building materials. Therefore, cultivating suitable species is as important as their use for construction purposes. In addition, as natural materials, bamboo culms are not uniform in size and shape. Despite these limitations, bamboo has been a popular research topic for years. These studies and their findings are categorised in this paper.
Early research
Chow [3] conducted the first documented investigation of bamboo usage as a reinforcing component in concrete at the Massachusetts Institute of Technology in 1914. The high tensile strength of bamboo led researchers to conceive the idea of applying bamboo on the tension side of beams. Four beams were cast; two were tested under a single concentrated load at the centre, and the other two were tested under two-point loading 60 days after casting in a beam-testing machine, as shown in Figure 1. Results showed that the maximum loads at which the steel and bamboo beams failed were 19.57 kN and 13.87 kN, respectively. The factor of safety, taken as the ratio of the actual maximum load to the theoretical load, was 3.2 for the steel beam and 2.3 for the bamboo beam under a concentrated load at the centre. For the second setup, in which the beams were subjected to two-point loading, the maximum loads at which the steel and bamboo beams failed were 30.69 kN and 20.46 kN, respectively, and the factors of safety (actual load to expected load) were 3.37 and 2.25 for the steel and bamboo beams, respectively. It was concluded that the behaviour of bamboo in concrete resembled that of steel, and that it may be used instead of steel for small structures. However, it was suggested that more experimental data are required before designing bamboo beams in practice. After Chow [3], the viability of employing bamboo as an alternative type of reinforcement in structural concrete has been tested by several researchers. For more than a century, many researchers have used bamboo to strengthen concrete constructions. In those studies, bamboo splints (semi-round strips) or bars (whole culms of moderate diameter) were used. In 1950, Glenn [4] led a study funded by the US War Production Board on bamboo-reinforced concrete (BRC), which included building experimental structures and conducting mechanical tests. From the test results, Glenn [4] drew numerous conclusions that helped in developing design guidelines for the use of bamboo splints as reinforcement in concrete. Glenn [4] noted issues with BRC beams under loading, including significant deflection, limited ductility, and early brittle fractures. Furthermore, he discovered bonding problems caused by the extreme swelling and cracking of bamboo, a lower ultimate load-carrying capacity than that of steel-reinforced components, and the need to employ asphalt emulsions. Based on maximum stress values of 55 to 69 MPa for concrete beams with 3-4 % bamboo reinforcement, Glenn [4] suggested using bamboo tensile stresses of 34-41 MPa. It was suggested to utilize 3-4 % bamboo reinforcement to keep the beam deflection under l/360 of the span. A report was created in 1966 by Brink and Rush [5] to assist field crews in designing and building BRC using an allowable-stress technique, a method similar to that described in ACI 318 [6] for steel-reinforced concrete. Based on a bond strength of 0.34 MPa and an ultimate capacity of 124 MPa, they suggested an allowable bamboo tensile stress of 28 MPa. For the serviceability criterion, a bamboo elastic modulus of 17.2 GPa was suggested. Subsequently, a BRC flexural component was designed as an unreinforced concrete component with a maximum tensile stress of 0.67 √fc' (where fc' is the compressive strength of concrete in MPa), according to the hybrid design technique proposed by Geymayer and Cox [7] in 1970. They found that with 3-4 % bamboo reinforcement, a
total safety factor of 2-2.5 can be achieved. Numerous studies describing bamboo-reinforced flexural components have provided evidence supporting the design methods proposed by Geymayer and Cox [7]. Moreover, at optimum longitudinal bamboo reinforcement ratios of 3-5 %, a concrete flexural member that would otherwise be unreinforced exhibited a capacity increase of at least 2.5 times. In one of the early studies in India, Kurian and Kalam [8] investigated structural elements made of bamboo-reinforced soil-cement material. The main aim of that study was to identify a cost-effective alternative for rural housing in India. Soil-cement is a mixture of pulverized soil with small quantities (4-10 % by weight of soil and water) of Portland cement. This material was widely used for the construction of road bases and airports in the mid-thirties. The study investigated bamboo-reinforced soil-cement foundations, building walls, and pavements. The bamboo was treated with a solution of 40 % rosin in alcohol and coated with white lead paint for waterproofing. It was reported that soil-cement exhibits a considerable increase in strength with age. Bamboo is not as effective as a compressive reinforcement because of its low compressive strength, which results from its fibrous nature. The results showed that the structural models made with bamboo-reinforced soil-cement were good at resisting flexure. In addition, it was reported that reinforcing soil-cement with bamboo imparts considerable rigidity to flexible pavements. Furthermore, it was found that when the plain soil-cement section was reinforced with bamboo without attempting to reduce the depth of the section, it performed well in resisting moment. It was concluded that bamboo-reinforced soil-cement has the potential to be used in rural construction, especially for building walls, foundations, and pavements. Mansur and Aziz [9] conducted an experimental investigation into the viability of employing a woven bamboo mesh as reinforcement for cement mortar. The addition of bamboo mesh increased the tensile, flexural, and impact strengths of the mortar, as well as its ductility and toughness. Studies have also found that BRC beams with 2-3 % bamboo reinforcement handle loads significantly better than plain concrete beams in a four-point bending test [10].
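As a quick illustration of the Geymayer and Cox [7] limit quoted above, the allowable flexural tensile stress for an assumed concrete strength of fc' = 30 MPa (a value chosen only for illustration) works out as follows:

```latex
f_t = 0.67\sqrt{f_c'} = 0.67\sqrt{30} \approx 3.7~\text{MPa}
\qquad (f_c' = 30~\text{MPa, assumed})
```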
Bamboo reinforcement in different structural members
Early research showed that BRC is an emerging field of research; however, further experimental work is required before arriving at any firm conclusions. Various experimental studies have been conducted on the use of bamboo as a reinforcing material in different structural members, some of which are discussed in the following sections.
Bamboo as reinforcement in slab
Kankam and Odum-Ewuakye [11] conducted experiments on 13 simply supported one-way slabs reinforced with babadua (Thalia geniculata) bars. Four-point loading was applied to the slabs; a schematic is shown in Figure 2. The results showed that the slabs collapsed owing to excessive slab deflection, flexural failure of babadua bars under tension, or concrete crushing. A short-term factor of safety of approximately 2 against cracking and 3 against collapse was attained for span-to-effective-depth ratios between 12.5 and 9.3 and shear-span-to-effective-depth ratios between 4.2 and 6.44. These slabs exhibited extremely ductile behaviour and underwent significant deflection before failure. Kankam and Odum-Ewuakye [12] also used babadua bars as reinforcement in two-way slabs supported on all four sides. A significant improvement in the flexural and punching shear strengths of the slabs was observed when tested under monotonic and cyclic loads. In addition, it was observed that the concrete slab with babadua reinforcement offered appropriate stiffness against deflection.
In 2005, Ghavami [13] investigated permanent-shutter concrete slab panels using Dendrocalamus giganteus bamboo. Half-sectioned bamboos, which functioned as permanent shutter forms, were filled with concrete, as shown in Figure 3. Sikadur 32 gel was applied to the bamboo to prevent it from absorbing water from the concrete. The shear resistance of full- and half-bamboo diaphragms of Dendrocalamus giganteus was investigated; the half-bamboo had a shear strength of 10.89 MPa with a standard deviation of 2.56 MPa. Perera and Lewangamage [14] used strips of Bambusa vulgaris to reinforce slab panels with dimensions of 600 × 60 × 100 mm. They investigated the flexural behaviour of the slab panels under a central uniformly distributed load. According to their findings, when bamboo and steel were combined, the slabs performed better than both the control specimens with steel reinforcement and the specimens with bamboo reinforcement alone. Muda et al. [15] examined the effectiveness of BRC slab panels subjected to impact loads. The bamboo was spliced and chopped into the required diameters of 7.5 mm, 5 mm, and 2.5 mm and tied to form a mesh at a spacing of 50 mm. In this experiment, oil palm shells (OPS) were used in the concrete mix as an alternative to traditional aggregates, with OPS-to-cement ratios of 0.45 and 0.6. They concluded that the bamboo diameter had a significant effect on the impact strength at first crack, whereas the slab thickness had an even greater effect. In 2016, Muda et al. [16] examined the impact behaviour of 300 × 300 mm simply supported one-way BRC slabs under impact loading. The bamboo reinforcement was prepared from Buloh kuning (Bambusa vulgaris schrad) bamboo. Rice husk was added to the concrete used to create the slab panels in proportions of 5 % and 10 % with respect to Ordinary Portland Cement complying with ASTM Type I [17]. During the experiment, the impact strengths of the slab panels were examined in relation to the bamboo diameter and slab thickness. It was reported that, for both types of concrete mixes, the bamboo diameter and slab thickness have a linear relationship with the initial and ultimate crack strengths. The impact strength of these BRC slabs in comparison with typical reinforced cement concrete (RCC) slabs needs further investigation. Chithambaram and Kumar [18] studied the flexural behaviour of slab panels made of ferro-cement and bamboo, with fly ash used to partially replace cement. They used chicken-wire mesh and bamboo strips as reinforcement in one-way slabs. The effects of partially replacing cement with fly ash and of varying the slab thickness were explored. Twelve ferro-cement slab panels with dimensions of 470 × 940 mm and thicknesses of 40 and 50 mm, six of each, were tested as part of the experimental program. Six of these slabs were created using typical mortar at a ratio of 1:3, whereas the other six were created using fly ash in place of 15 % of the cement. All slabs were cured for 28 days in wet gunny bags before testing under an evenly distributed load. According to the test results, both slab types exhibited similar initial crack and ultimate loads. Compared with the experimental ultimate load capacity, the bamboo strips increased the estimated ultimate capacity of the slab at a rate approximately three times higher than that of the mortar and wire mesh. Mali and Datta [19] investigated BRC slab panels using semicircular grooved bamboo strips (Figure 4) as reinforcement.
They used a bond-tite epoxy adhesive to reduce the water absorption of the bamboo strips from the surrounding concrete. Fifteen concrete slab panels with dimensions of 600 × 600 × 100 mm, complying with Eurocode EN-14488-5 (2006) [20], were cast and tested. The effects of completely replacing the primary steel reinforcement with bamboo were investigated in terms of failure modes, crack patterns, energy absorption capacity, and load-deformation characteristics. By comparison with plain cement concrete and reinforced cement concrete slab specimens, it was discovered that the concrete slab panels reinforced with grooved bamboo strips exhibited significantly higher load-bearing and deformation capacities. Additionally, the structural behaviour of the slabs in flexure was significantly improved and was only moderately inferior to that of RC slabs utilising mild steel bars as the main reinforcement. From the above discussion, it can be concluded that slab panels reinforced with bamboo strips performed almost as well as those reinforced with steel. When used as reinforcement, grooved bamboo strips provide greater strength than plain bamboo strips. In addition, bamboo treatment is required to reduce its water absorption and increase its durability.
Bamboo as reinforcement in beam
Beam members are one of the most important components of a building structure.They carry both horizontal and vertical loads.
Traditionally, steel has been used as reinforcement in beams to increase their load-carrying capacity. Research has been conducted to improve the mechanical quality of plain concrete by substituting naturally occurring materials for steel.
Mali and Datta [22] tested BRC beams with dimensions of 140 × 150 × 1100 mm in a four-point bending test (Figure 5) to understand their flexural behaviour. Experimental investigations were conducted on three distinct types of concrete beams: beams with traditional steel reinforcement, beams with bamboo reinforcement, and plain concrete beams (without reinforcement). The energy absorption capacity, ultimate load, flexural strength, shear strength, and linear stiffness of these beams were analysed to better understand their flexural behaviour. Two types of BRC beams with longitudinal and shear reinforcement (stirrups) made of bamboo strips were examined. BRC beams with 2.8 % and 3.8 % longitudinal bamboo reinforcement relative to the beam cross-section were cast and analysed. It was discovered that plain cement concrete (PCC) beams were significantly outperformed by both forms of BRC beams in terms of ultimate load, first crack load, ductility, and energy absorption capacity. Additionally, it was found that the flexural strength of the BRC beam with 3.8 % bamboo reinforcement was comparable to that of the reinforced cement concrete (RCC) beam with 1.23 % steel reinforcement. Kankam and Odum-Ewuakye [23] investigated the flexural strength and behaviour of babadua (Thalia geniculata)-reinforced concrete beams of sizes 100 × 180 × 1500 mm and 135 × 235 × 1800 mm with different percentages of tensile reinforcement, from 2.87 to 12.13. Stirrups were formed from babadua strips approximately 8 mm thick. The beams were tested to failure under four-point and cyclic loading conditions, as shown in Figure 6.
From the above discussion, it was concluded that the flexural capacity of the beam increased when it was reinforced with bamboo strips.3.8 % of bamboo reinforcement was comparable with that of 1.23 % of steel reinforcement with respect to beam cross-section.In addition, bamboo-reinforced beams performed better when steel stirrups were used.
Bamboo as reinforcement in column
A column is a structural member which transfers the compressive load of a superstructure to the substructure. In one reported study, the bamboo used had a modulus of elasticity of 24.46 GPa and an average tensile strength of 185.93 MPa. Different chemical adhesives, namely Tapcrete P-151, Anti Corr RC, Araldite, and Sikadur 32 gel, were applied to the bamboo to examine their impact on the bond strength at the bamboo-concrete interface. M20-grade concrete was used to cast 24 columns with dimensions of 150 × 150 × 1000 mm. Three columns were cast for each configuration: plain, 0.89 % steel reinforcement, untreated bamboo (with reinforcement ratios of 8 %, 5 %, and 3 %), and treated bamboo (with reinforcement ratios of 8 %, 5 %, and 3 %).
From the above results, it can be concluded that Sikadur 32 gel provided the strongest average bond between the bamboo and the concrete. The treated bamboo-reinforced column with 8 % reinforcement could support a load comparable to that carried by the steel-reinforced column. Owing to the poor bonding between bamboo and concrete, the untreated bamboo-reinforced columns withstood significantly less load than the treated ones.
Bamboo as reinforcement in walls
Walls are among the most important components of building structures. They normally occupy the majority of a building's space and require large quantities of construction materials. Bricks are typically used in walls, increasing the dead load of a building; they also significantly increase the cost of walls and degrade the ground by consuming fertile soil. Puri et al. [32] studied prefabricated wall panels with bamboo reinforcement, which are useful for affordable housing. Wall panels 2440 mm long, 300 mm wide, and 50 mm thick were created. Bamboo strips of 3-5 mm thickness of the Bambusa balcoa species were used as reinforcement, arranged in a crisscross pattern forming grids of approximately 50 × 50 mm. Because bamboo contains cellulose, lime-water treatment was employed to stop the degradation of the bamboo resulting from termite attack and fungus formation. To reduce the water absorption of the bamboo, Sikadur 32 LP epoxy, an alternative to Sikadur 32 gel, was used. A mortar mix with a cement-to-sand ratio of 1:2 was used to cast the wall panels. Rebound hammer and transverse loading tests were conducted on the panels. It was reported that, compared with standard brick walls, the proposed wall panel system was significantly more affordable, energy-efficient, and lightweight: it lowered the dead weight of the walls by 56 % and the price by 40 %. Ganesan et al. [33] studied the strength and behaviour of BRC wall panels under two-way in-plane action. Splints of Bambusa bambos, 20 mm wide and varying in thickness from 8-15 mm, were used as reinforcement. A varnish coating was applied to the splints to make them water-resistant, and the splints were sandblasted to obtain a better bond with the concrete. Three prototypes of BRC wall panels, with aspect ratios of 1.667, 1.818, and 2 and thickness ratios of 12.5, 13.75, and 15, were considered. All samples had a consistent slenderness ratio of 25. A uniformly distributed in-plane load applied at an eccentricity of t/6 was used to examine the failure of wall panels with varied aspect and thickness ratios.
It was reported that, owing to the two-way action of the wall panels, biaxial bending occurs in the planes parallel and perpendicular to the axis of loading, causing diagonal cracks that extend from the corners of the wall to the centre of the panel. With increasing aspect ratio, the wall panel deflection increased. According to the cited study, wall panels made of BRC with aspect ratios ranging from 1.667 to 2 and thickness ratios ranging from 12.5 to 15 could withstand loads of up to 630 kN.
From the above discussion, it can be concluded that bamboo-reinforced wall systems are more affordable and lightweight than traditional wall systems, and they have higher load-carrying capacities than traditional brick walls. Treatment should be performed to avoid deterioration of the bamboo strips. Overall, they are good replacements for traditional brick walls.
Performance of bamboo-reinforced structures under dynamic loading
BRC is a popular composite material owing to its strength, durability, and low cost. The effect of dynamic loading on bamboo-reinforced elements is an important factor to consider when designing structures subjected to seismic activity. One of the limitations of BRC is its limited ductility, which is a concern in seismic areas where structures must sustain large deformations during earthquakes without collapsing. Few studies have been conducted on the dynamic loading of BRC elements.
To understand the seismic behaviour of houses built using bamboo as reinforcement, in 2006 Kaushik et al. [34] reviewed the performance of structures during the Sikkim earthquake of 14 February 2006. This earthquake was of moderate level, with a magnitude of 5.7 on the Richter scale. Heritage structures, masonry structures, and reinforced concrete buildings performed poorly during the earthquake, whereas traditionally constructed wooden/bamboo houses withstood it remarkably well. One such traditional housing system commonly used in Sikkim is the 'Ikra' housing system, shown in Figure 7. 'Ikra' houses are one-story buildings with masonry walls made of brick or stone that extend up to roughly one metre above the plinth. This brickwork supports plastered walls made of bamboo braided into wooden frames, and GI roofing sheets supported on bamboo trusses are typically used. It was found that there was no significant damage to Ikra housing structures during the earthquake, leading to the conclusion that traditionally constructed bamboo-structured housing systems perform well in earthquakes. González and Gutiérrez [35] investigated the performance of bamboo bahareque walls under cyclic loading. Bahareque walls consist of cement plaster on both sides of a timber frame with split bamboo at the centre. The primary goal of the study was to experimentally evaluate the rigidity and deformation properties of prefabricated 'bamboo bahareque' shear walls, developed in Costa Rica by the Bamboo Foundation (FUNBAMBU), under horizontal cyclic loads simulating earthquake effects. Seven wall panels, with length 2.7 m, height 2.4 m, and thickness varying from 40 to 60 mm, were investigated experimentally. The findings demonstrated that the tested 'bamboo bahareque' walls have sufficient strength to sustain loads caused by earthquakes of sizeable magnitude, and they exhibited ductile behaviour during cyclic loading. Bamboo has a high strength-to-weight ratio, indicating that it can withstand heavy loads without becoming too heavy. Additionally, bamboo has a high damping capacity that allows it to absorb and dissipate energy in the form of heat. These properties make bamboo an ideal material for earthquake-resistant structures. Moroz et al. [36] investigated the performance of BRC masonry shear walls. Two types of walls were constructed: one reinforced with conventional steel reinforcement, and the other with Tonkin cane bamboo reinforcement, both vertically and horizontally in bond beams. It was reported that walls reinforced with bamboo exhibit enhanced shear capacity and ductility compared with unreinforced concrete block masonry. In addition, the bamboo-reinforced shear walls showed remarkably similar behaviour to those reinforced with steel. However, special care must be taken to prevent moisture absorption by bamboo in a cementitious matrix.
In terms of dynamic loading, BRC has been found to resist fatigue and impact loading well. However, further research is required to fully understand the dynamic properties of BRC and its behaviour under different types of loading. Overall, the use of BRC in seismic-resistant structures is a promising area of research with the potential to revolutionise the construction industry.
Different codes for bamboo
For decades, researchers have investigated the potential of bamboo as a reinforcement material for structural members. These studies have helped develop codes for bamboo so that its behaviour can be readily understood. IS 8242:1976 [37] provides methods for testing split bamboo and is used to assess its physical and mechanical characteristics, such as moisture content, specific gravity, static bending, compression perpendicular to the grain, and shear perpendicular to the grain. Similarly, IS 6874:2008 [38] specifies test methods for determining the physical and mechanical properties of round bamboo, including methods for determining density, shrinkage, and tensile strength parallel to the grain of round bamboo samples.
Untreated bamboo typically decomposes after one or two years; however, bamboo has a service life of two to five years when used in a concealed, off-the-ground environment. As fungal deterioration of the sclerenchymatous fibres (Figure 8) begins, the mechanical strength of bamboo rapidly deteriorates, so an appropriate preservation treatment must be applied to increase its durability. IS 15912:2018 [40] prescribes fire safety provisions for bamboo structures, indicating that, with the help of chemical treatments, bamboo can be made fire-resistant. IS 15912:2018 [40] also notes that bamboo has a high tensile strength owing to its fibrous nature and, subject to the restrictions on design and construction, can serve as a substitute for steel reinforcement in concrete. The ultimate tensile strengths of some bamboo species under direct tension are almost identical to those of steel, ranging from 1400 to 2000 kg/cm². Design guidelines for steel-reinforced concrete structures have also been applied to concrete members reinforced with bamboo.
International standards are available to guide designers and researchers. ISO 22156:2004 [42] provides information on the use of bamboo structures, including those constructed of round bamboo, split bamboo, glued laminated bamboo, and panels made of bamboo fastened together using adhesives or mechanical fasteners. It is based on structural performance and limit state design and addresses only the serviceability, durability, and mechanical resistance requirements of structures. ISO 22157-1:2004 [43] outlines test procedures for assessing the strength and physical characteristics of bamboo, including moisture content, mass per volume, shrinkage, compression, bending, and tension. It also covers tests on bamboo samples conducted to acquire data that may be used to define characteristic strength functions and establish permissible stresses. For quality control purposes, this information can be used to relate the mechanical characteristics to factors such as moisture content, mass per volume, growth site, location along the culm, and the presence of nodes and internodes. Several bamboo species are available worldwide, so grading bamboo is an important procedure for determining its suitability for structural applications. To grade round or pole bamboo for structural purposes, ISO 19624:2018 [44] outlines specific mechanical and visual grading procedures.
Visual sorting is performed based on the observable features of the specimen, whereas mechanical sorting involves a non-destructive assessment of qualities known to correlate with the characteristic values defining a grade.
Bamboo-concrete bond
The bond formed between concrete and reinforcing bars enables strain compatibility by ensuring that stresses are adequately transmitted from the reinforcing material to the concrete. This guarantees that there is no slippage between the reinforcement and the surrounding concrete, which is necessary for composite behaviour. The bond development mechanism influences the crack control patterns, section stiffness, and anchorage of the reinforcing bars.
The bond behaviour in reinforced concrete is affected by numerous factors, including the concrete cover, spacing between reinforcing bars, bar size, transverse reinforcement, properties of the concrete and steel, surface condition of the bars, casting position, development length, and splice length [45]. The anchorage length of the reinforcing bars is governed by the bond strength between the steel and concrete, and an inadequate anchorage length contributes to a variety of failures, particularly in lap splices, cantilever supports, and beam-column joints in conventional structural designs. This emphasises the importance of the anchorage length, as it depends on adequate bond strength. When the end anchorages are reliable, sufficient bond is available for the beam to carry the imposed load even if local bond is lost in other areas of the beam [46]. The behaviour of BRC members is significantly affected by the bond between bamboo and concrete, particularly the post-cracking behaviour [47]. The importance of this bond was first emphasised by Mansur and Aziz [9] in 1983, who reported that adding bamboo mesh considerably increased the ductility, toughness, and tensile, flexural, and impact strengths of mortar. However, despite these improvements, particularly in tension, significant cracking was observed owing to the weak bond between bamboo and concrete. The bond strength between bamboo and concrete is significantly affected by dimensional variations in bamboo caused by changes in moisture and temperature; the swelling and shrinking of bamboo reinforcement during the casting and curing of concrete is a major issue [13], as shown in Figure 9.
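To make the role of bond strength more tangible, the required anchorage (development) length can be estimated from the equilibrium of a round bar pulled against a uniform bond stress, L = d·σ/(4·τ). The sketch below is illustrative only: the formula is the standard simplification for a round bar, and the numerical values are assumptions rather than data from the cited studies.

```python
# Anchorage length from force equilibrium on a round bar pulled against a
# uniform bond stress:  pi*d*L*tau = (pi*d^2/4)*sigma  =>  L = d*sigma/(4*tau).
# All numbers below are assumptions for illustration, not data from [45-47].

def development_length(diameter_mm: float, stress_mpa: float,
                       bond_strength_mpa: float) -> float:
    """Required anchorage length (mm) to develop the given bar stress."""
    return diameter_mm * stress_mpa / (4.0 * bond_strength_mpa)

for label, tau in [("weak bond (assumed 0.5 MPa)", 0.5),
                   ("epoxy + sand coating (3.65 MPa)", 3.65)]:
    L = development_length(diameter_mm=12.0, stress_mpa=100.0,
                           bond_strength_mpa=tau)
    print(f"{label}: required anchorage ≈ {L:.0f} mm")
```

The comparison illustrates why the surface treatments discussed below matter: a roughly seven-fold increase in bond strength shortens the required anchorage by the same factor.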
During hydration of the concrete while curing, the bamboo splints absorb and store moisture. Consequently, the bamboo expands and the concrete begins to crack, as shown in Figure 9(b). As hydration continues into the post-curing period, the concrete reabsorbs the water stored in the bamboo, causing the bamboo to contract. Although the cracks then begin to close, the resulting voids weaken the bond between the bamboo and concrete, as shown in Figure 9(c). A proper bond between bamboo and concrete cannot develop because of this ongoing cycle of swelling and shrinking, which severely restricts the use of bamboo as a replacement for steel reinforcement [13]. The bond strength is affected by three main factors:
(i) the swelling and shrinkage of the bamboo; (ii) the adhesion-promoting qualities of the cement, which develop frictional stresses on the surface of the bamboo strips; and (iii) the shear resistance of the concrete arising from the surface configuration and roughness of the reinforcing strip [10]. According to Mali and Datta [49], increasing the bond strength can improve the uniaxial and flexural responses of BRC beams, and they suggested that suitable surface treatments could minimise the swelling and shrinkage of bamboo in concrete. Researchers have used different chemicals to reduce the water absorption capacity of bamboo splints; Table 3 lists the chemical treatments applied to bamboo. In 1995, Ghavami [10] conducted a preliminary evaluation of bamboo reinforcement with various coatings, and in 2000, Janssen [57] noted that treating bamboo before its use as reinforcement would significantly extend its life. Among the tested coatings, bamboo treated with an epoxy agent exhibited the strongest bond [31]. The bond strength between bamboo and concrete was studied in 2016 by Javadian et al. [54] using water-based epoxy coatings, TrueGrip EP, TrueGrip BP, and Exaphen, applied with or without sand. They discovered that the addition of sand increased the bond between the bamboo and concrete owing to increased surface friction between the concrete and sand particles. Using epoxy on the bamboo surface together with a sand coating, the bond strength could be increased to 3.65 MPa.
Durability of BRC
Durability plays a very important role when natural fibres are used as construction materials, because it governs their long-term behaviour. Although bamboo has been shown to exhibit good short-term performance in concrete structures, it is important to understand its long-term performance. A potential concern regarding the use of bamboo in concrete is its susceptibility to decay and insect damage; several studies have shown that proper treatment and protection can significantly reduce these risks [61]. For example, bamboo can be treated with boron to render it resistant to insects and decay. Additionally, durability varies from species to species. Another important factor is the durability of the concrete itself: over time, concrete can undergo various forms of deterioration, such as cracking, spalling, and corrosion of steel reinforcement, which can weaken the structure and reduce long-term performance. Lima et al. [63] analysed the durability of bamboo used as reinforcement in concrete, using a total of 500 Dendrocalamus giganteus bamboo specimens. The inner cross-section of the bamboo was also studied to understand the behaviour of the bamboo fibres. The outermost layer of bamboo, the bark, is composed of epidermal cells that include a waxy layer of cutin. Bamboo culms behave as a composite material, and diaphragms or nodes divide them into segments. The innermost layer is composed of sclerenchyma cells. A tissue-like matrix known as parenchyma wraps around the fibres, veins, and sap conductors that make up the middle layer, which are randomly arranged in the transverse section. On average, parenchyma makes up 30 % of the culm, fibres 60 %, and sap-conducting tissue 10 %; these proportions, which vary among species, directly affect the physical and mechanical characteristics of bamboo. The majority of the bamboo fibres were found to be entirely enclosed within the parenchyma and thus not directly exposed to the alkalinity of the cementitious matrix. Durability was assessed through changes in the tensile strength and Young's modulus of the bamboo. The specimens were subjected to soaking and drying cycles: each sample was soaked and then dried for 24 h. Samples with concrete were placed in tap water, whereas those without concrete were immersed in calcium hydroxide solution. Young's modulus and tensile strength were assessed after 7, 15, 30, 45, and 60 cycles, and no considerable change in the mechanical properties of the bamboo was observed. According to Moh and Khatib [61], the resistance of bamboo to fungi can be increased by protective finishes and coatings that prevent wetting. Heat treatment is another method of countering biological degradation, increasing the resistance of bamboo to fungi and insects. Recently, Awolusi et al.
[63] studied the flexural behaviour and durability of BRC prisms, investigating the resilience of BRC under challenging working conditions, including hot, acidic, and saline environments. Rectangular prisms of BRC and steel-reinforced concrete, 150 × 150 × 550 mm in size and cast with M25 grade concrete, were used. Following a 60-day curing period during which the samples were exposed to these unfavourable conditions, the flexural strength and weight loss of the concretes were evaluated. Compared with the steel-reinforced concrete samples, the BRC samples showed a smaller strength loss in the high-temperature tests: 0.407 N/mm² for BRC versus 5.5 N/mm² for steel-reinforced concrete. The acid and chloride attack tests likewise indicated slower weight and strength losses for the BRC samples: the weight losses for steel-reinforced concrete and BRC beams were 0.95 kg and 0.54 kg under acid attack and 0.893 kg and 0.087 kg under chloride attack, respectively. Under these aggressive conditions, BRC thus exhibited several encouraging traits relative to steel-reinforced concrete. While these studies provide some evidence of the long-term performance of BRC, more research is required to fully understand its behaviour over extended periods; factors such as exposure conditions, loading patterns, and maintenance practices can all affect the long-term performance of BRC structures. In summary, although studies have shown promising results, further research is required to fully understand the durability and performance of BRC structures over extended periods of time.
Conclusion
Due to the current energy crisis, scientists and engineers are searching for natural materials to replace steel in the construction industry. One of the most interesting candidates is bamboo, which is readily available in tropical regions and has unique qualities such as rapid growth and a high tensile-strength-to-weight ratio. Based on this literature review, it can be concluded that bamboo is an effective and suitable material for replacing steel in concrete and can be used as reinforcement for structural members. It is lightweight, sturdy, versatile, and cost-efficient: a wall constructed using bamboo as reinforcement was 56 % lighter and 40 % cheaper than a brick wall. One shortcoming of bamboo is that, being a natural material, it degrades more quickly than steel; therefore, untreated bamboo should not be used as reinforcement. Unlike steel, bamboo requires two phases of preservative treatment before use: first, a chemical preservative to protect against insect and fungal attack, and second, an epoxy coating to make the bamboo waterproof. However, bamboo preservation requires further research so that degradation by termites and fungi can be prevented. Research has shown that many chemical treatments can overcome this problem, but they are costly, so less expensive methods need to be developed. Further research should also investigate the durability of BRC members. Finally, bamboo absorbs carbon dioxide from the atmosphere, which helps mitigate global warming, and sequestering carbon dioxide from the air is an important means of combating climate change. In addition, bamboo is a fast-growing plant with strong roots that can help prevent landslides and reduce soil erosion. In general, bamboo is advocated as the best, most affordable, and most environment-friendly substitute for steel.
Figure 4.
Figure 4. Semi-circular grooved bamboo strips [19]. Recently, Haryanto et al. [21] examined the structural behaviour of concrete footplate foundation slabs reinforced with bamboo under concentrated loading. Three slab panels made of BRC and one panel made of steel-reinforced concrete (SRC), each 600 × 600 × 70 mm in size, were cast and tested. To establish the benefit of using bamboo instead of steel, the ultimate load, stiffness, load-deflection characteristics, cracking pattern, energy absorption capacity, and ductility of the slabs were measured. The uppermost part of a locally available string bamboo (Gigantochloa apus) with an average tensile strength of 138 MPa was used, and the bamboo strips were carefully greased to reduce their water-absorption capacity. The mix design and sample testing followed the SNI 1974:2011 (BSN 2011) criteria. The coarse aggregate was divided into two nominal sizes, 20 mm and 10 mm in a 70:30 ratio, to ensure appropriate interlocking between the bamboo and concrete. According to the findings, the BRC slabs achieved 82 % of the strength of the steel-reinforced slabs, and the ductility of the two types of samples was nearly comparable (up to 93 %). The authors concluded that the structural performance of slabs reinforced with bamboo and with steel was similar. From this discussion, slab panels reinforced with bamboo strips perform almost as well as those reinforced with steel; grooved bamboo strips provide greater strength than plain strips when used as reinforcement, and treatment is required to reduce water absorption and increase durability.
Figure 9.
Figure 9. Performance of bamboo used in concrete during curing: a) bamboo in concrete; b) bamboo at the time of curing of concrete (cracks developed); c) visible voids and cracks in concrete after curing [48]. Kute and Wakchaure [53] used bitumen-based black Japan paint to reduce the water-absorption capacity of bamboo. They found that black Japan inhibited water absorption by 75 %, while affecting the bond stress by only 10 %. Researchers have conducted pull-out tests after treating bamboo with various chemicals to determine the bond strength between bamboo and concrete. Pull-out tests in line with IS 2770 (Part 1) [58] are commonly used to evaluate the development of bond strength between steel bars and concrete, and are primarily used to assess the interfacial strength between concrete and reinforcing bars. A typical pull-out test setup is shown in Figure 10.
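In such a pull-out test, the average bond stress is usually computed as the peak force divided by the embedded lateral surface area, τ = P/(π·d·L). A minimal sketch of that calculation follows; the diameter, embedment length, and force are invented for illustration and are not taken from IS 2770 or the cited studies.

```python
import math

def average_bond_stress(peak_force_n: float, diameter_mm: float,
                        embedded_length_mm: float) -> float:
    """Average bond stress (MPa): force over embedded lateral surface area."""
    return peak_force_n / (math.pi * diameter_mm * embedded_length_mm)

# Invented example: a 12 mm specimen embedded 150 mm, pulled out at 15 kN.
tau = average_bond_stress(15_000, 12.0, 150.0)
print(f"average bond stress ≈ {tau:.2f} MPa")   # ≈ 2.65 MPa
```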
Table 1. Recent studies on BRC beams
Leelatanon et al. [29] examined the ductility and compressive strength of short concrete columns (125 × 125 × 600 mm) with both longitudinal and shear reinforcement (stirrups) made of bamboo, tested under concentric loading. The bamboo strips were treated with a water-repellent substance (Sikadur-31 CFN). Compared with columns using untreated strips, this treatment increased the strength and ductility of the columns, and 3.2 % treated bamboo reinforcement could replace 1.6 % steel reinforcement in a column with equivalent behaviour, strength, and ductility. To repair and strengthen treated bamboo-reinforced square concrete columns of 150 × 150 × 600 mm, Akinyemi and Omoniyi [30] examined the use of an acrylic polymer as a concrete matrix modifier and ferrocement jacket confinement. Thirty columns were cast: ten, made using both conventional and modified concrete, were tested to failure; ten more from each design mix were preloaded at 75 %, 50 %, and 25 % of the ultimate load, repaired with a ferrocement jacket, and then subjected to an axial test; and the final ten were ferrocement-jacketed before axial testing. The tests measured the axial and lateral deflections of the bamboo-reinforced columns to determine their lateral deflection, load-carrying capability, and failure mode patterns. Bamboo culms of Muli bamboo (Melocanna bambusoides) with a brownish appearance were acquired at 3-4 years of age.
Table 4
Table 4 [48] provides a thorough analysis of the bond strengths obtained with various chemical treatments. From Table 4, it can be concluded that, compared with lightweight BRC beams made of plain cement concrete, treating bamboo with negrolin-sand-wire boosts the bond strength by 90 % and the load-carrying capability by 400 % [10]. The bond strength of bamboo treated with Sikadur 32 gel has been found to be improved (2.75 MPa) in comparison with bamboo treated with negrolin-sand-wire [13]. According to Maity et al. [48], the bond between bamboo and concrete is strengthened when bamboo mats coated with asphalt and sprayed with sand are used to produce a BRC wall. Additionally, a variety of epoxy agents, including Tapecrete P-151, Sikadur 32 gel, Araldite, and Anti Corr RC, have been used to treat bamboo surfaces; of these, Sikadur 32 gel-treated bamboo was found to give the strongest bond.
Table 4. Bond strength achieved with chemical treatments
Bamboo as reinforcing material in concrete structures: A literature study | 10,147.2 | 2023-09-01T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Materials Science"
] |
Discrimination of Rice Varieties using LS-SVM Classification Algorithms and Hyperspectral Data
Fast discrimination of rice varieties plays a key role in the rice processing industry and benefits the management of rice in supermarkets. In order to discriminate rice varieties in a fast and nondestructive way, hyperspectral technology and several classification algorithms were used in this study. The hyperspectral data of 250 rice samples of 5 varieties were obtained using a FieldSpec®3 spectrometer. Multiplicative Scatter Correction (MSC) was used to preprocess the raw spectra, and Principal Component Analysis (PCA) was used to reduce their dimension. To investigate the influence of different linear and non-linear classification algorithms on the discrimination results, K-Nearest Neighbors (KNN), Support Vector Machine (SVM) and Least Squares Support Vector Machine (LS-SVM) were used to develop discrimination models, and the performance of these three multivariate classification methods was compared according to discrimination accuracy. The number of Principal Components (PCs), the K parameter of KNN, and the kernel function of SVM and LS-SVM were optimized by cross-validation in the corresponding models. One hundred and twenty-five rice samples (25 of each variety) were chosen as the calibration set and the remaining 125 samples formed the prediction set. The experimental results showed that the optimal number of PCs was 8, the cross-validation accuracies of KNN (K = 2), SVM and LS-SVM were 94.4, 96.8 and 100 %, respectively, and the prediction accuracies were 89.6, 93.6 and 100 %, respectively. These results indicate that LS-SVM performed best in the discrimination of rice varieties.
INTRODUCTION
Rice, one of the major staple foods, is the main raw material for the daily meals of people in China. The nutritional value and taste of rice differ across regions and varieties. In China, the main rice-producing regions lie east and south of the Yangtze River. In order to meet the nutritional needs and purchase demands of customers, it is necessary to classify rice by quality and variety, and doing so is also a trend in the marketing management of large-scale food supermarkets. At present, the classification of rice varieties in China is still performed by manual sorting, which is time-consuming and laborious. There have been a few reports on the application of variety classification or grading to fruit, fish and meat. Sarbu et al. (2012) used UV-Vis spectroscopy to classify kiwi and pomelo based on a combination of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Pholpho et al. (2011) used pattern recognition to classify intact and bruised longan based on the visible spectrum. Cen et al. (2007) used visible/near-infrared spectroscopy to classify orange varieties and compared the classification accuracy of a neural network with that of partial least squares.
In addition, Wang et al. (2011) used hyperspectral reflectance imaging to discriminate insect infestation from other confounding surface features in jujubes. Zhu et al. (2013) investigated the potential of visible and near-infrared hyperspectral imaging as a rapid and nondestructive technique to determine whether fish had been frozen-thawed and obtained good classification performance. Barbin et al. (2012) developed a hyperspectral imaging technique to achieve fast, accurate and objective determination of pork quality grades. Zhao et al. (2010) used NIR spectroscopy and support vector data description to discriminate egg freshness with good results. In the above literature, the fruit classification methods were effective, showing the feasibility of using spectral technology to classify fruit grades. However, there are few reports on the classification of rice according to variety and quality.
In recent years, hyperspectral technology has gained wide application in different fields by virtue of its advantages over other analytical technologies, and it has become one of the dominant tools in nondestructive detection (Qin et al., 2012; Li et al., 2012; Watanabe et al., 2013). However, hyperspectral data contain overlapping bands and a large amount of information, so it is difficult to process them directly. Therefore, spectral preprocessing, feature extraction and classification algorithms are investigated in this study to find an optimal model for classifying rice variety.
MATERIALS AND METHODS
Hyperspectral data acquisition device: The hyperspectral data acquisition device is composed of a portable spectrometer, an auxiliary light source, a notebook computer, round containers and an experiment platform. A FieldSpec®3 portable spectrum analyzer made by the American company ASD was used to obtain hyperspectral data over the spectral measurement range of 350-2500 nm. In the spectral range of 350-1000 nm, the sampling interval is 1.4 nm and the spectral resolution is 3 nm, while in the range of 1000-2500 nm, the sampling interval is 2 nm and the spectral resolution is 10 nm. The hyperspectral data were exported in ASCII form and stored on the computer for subsequent processing. The spectral data analysis software was ASD ViewSpecPro. The hyperspectral data acquisition device is shown in Fig. 1.
A halogen lamp, which has a wide spectral range and adjustable intensity, was chosen as the auxiliary light source, as it meets the needs of spectral detection. Spectral information was captured using a fiber-optic probe and transferred to the spectrometer through an optical fiber. The signal was parsed by the spectrometer and transferred to the portable computer, where the hyperspectral data were read using the spectral analysis software and saved automatically as a binary file.
Sample preparation and spectral data acquisition:
In this study, rice samples were purchased from the WAL-MART supermarket at No. 198 DingMao Road, Zhenjiang, Jiangsu, and comprised a total of 5 varieties. The samples were manually labeled with five tags and then stored in plastic bags at room temperature.
During spectral data acquisition, the rice sample was first placed on black velvet in a circular vessel with a diameter of 9.2 cm and a height of 2 cm, and the spectral probe was placed 4 cm above the table, perpendicular to the vessel. The angle between the auxiliary light and the experimental platform was kept at 45°, and the vertical distance between them was 20 cm. The field of view was set to 25°. Before each measurement, a standard reflectance plate was measured to eliminate systematic error caused by environmental factors such as light intensity. Finally, the measurement of each sample was repeated 3 times and the average value was taken as the final result.
Multiplicative Scatter Correction (MSC):
Sample inhomogeneity causes great differences between sample spectra, and the spectral changes caused by scattering can be greater than those caused by sample components. In the MSC method, each spectrum is assumed to be linearly related to an ideal spectrum, which can be approximated by the average spectrum of the calibration set. The reflectance absorbance value at any wavelength of each sample has an approximately linear relationship with the corresponding value of the average spectrum. The linear intercept and slope can be obtained by regression against the spectra set and used to correct each spectrum: the intercept reflects the specific reflection behaviour of the sample, while the slope reflects the uniformity of the samples (Sirisomboon et al., 2012; Zhang et al., 2012).
The average spectrum, the linear regression, and the MSC correction are given by formulas (1)-(3):

$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$  (1)

$X_i = m_i \bar{X} + b_i$  (2)

$X_{i,\mathrm{MSC}} = (X_i - b_i)/m_i$  (3)

where $X$ is the spectral matrix of the calibration set, $X_i$ is the spectrum of the $i$-th sample, and $m_i$ and $b_i$ are the slope and intercept of the linear regression of the $i$-th spectrum $X_i$ on the average spectrum $\bar{X}$. Through the adjustment by $m_i$ and $b_i$, the spectral differences are reduced while the original information relevant to chemical composition is retained, so that random variation is removed to the greatest possible degree.
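A minimal NumPy sketch of this correction, assuming spectra are stored row-wise in a 2-D array; the function and variable names are ours, not from the paper.

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative Scatter Correction.

    spectra:   (n_samples, n_wavelengths) array, one raw spectrum per row.
    reference: ideal spectrum; defaults to the mean spectrum, formula (1).
    Returns the corrected spectra, formula (3), and the reference used.
    """
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra, dtype=float)
    for i, x in enumerate(spectra):
        m, b = np.polyfit(ref, x, deg=1)   # fit x = m*ref + b, formula (2)
        corrected[i] = (x - b) / m         # formula (3)
    return corrected, ref

# Usage on simulated data: 250 samples, 2151 points (350-2500 nm at 1 nm).
rng = np.random.default_rng(0)
raw = rng.random((250, 2151))
corrected, mean_spectrum = msc(raw)
```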
Principal Component Analysis (PCA):
PCA is an unsupervised pattern recognition method used for visualizing data trends in a lower-dimensional space, and it has been applied in many fields. Generally, PCA is one of the techniques commonly used to eliminate redundant information and reduce the computational burden by mathematical means.
The main objective of PCA is to explain most of the variation in the raw data with fewer variables, transforming many highly correlated variables into ones that are independent or unrelated to each other (Peng et al., 2014; Liu and Ngadi, 2013; Serranti et al., 2013).
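A short scikit-learn sketch of this dimension-reduction step, using the 8 PCs found optimal later in the paper; the random array merely stands in for the MSC-corrected spectra.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((250, 2151))      # stand-in for the MSC-corrected spectra

pca = PCA(n_components=8)        # 8 PCs were found optimal in this study
scores = pca.fit_transform(X)    # (250, 8) score matrix fed to the classifiers
print(scores.shape, pca.explained_variance_ratio_.sum())
```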
K-Nearest Neighbors (KNN):
KNN is a simple linear classification algorithm in machine learning and one of the most popular classification methods in pattern recognition. In KNN classification, an unknown sample from the prediction set is classified according to the majority of its K nearest neighbors in the calibration set (Ji-yong et al., 2011). In this study, for each test sample, the K nearest samples among all N calibration samples are found. The recognition rate of the KNN model is influenced by the parameter K, which is determined during calibration.
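A minimal scikit-learn sketch of the KNN step with the K = 2 later found optimal; the PC scores, labels, and split are placeholders, not the study's data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
scores = rng.random((250, 8))              # PC scores from the PCA step
labels = np.repeat(np.arange(5), 50)       # 5 varieties, 50 samples each

knn = KNeighborsClassifier(n_neighbors=2)  # K = 2, as optimized below
knn.fit(scores[::2], labels[::2])          # odd/even split just for the demo
print("accuracy:", knn.score(scores[1::2], labels[1::2]))
```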
Support Vector Machine (SVM):
SVM is a supervised learning method for linear or nonlinear classification problems, developed by Vapnik and co-workers. Initially, SVM could only be used for binary classification; with the development of its theory, it can also be applied to multi-class problems. It works by finding the optimal boundary between two groups in vector space, independent of the probabilistic arrangement of vectors in the calibration set. When a linear boundary in the low-dimensional input space is not enough to separate the two classes, SVM can create a hyperplane that allows linear separation in a higher-dimensional feature space by using a kernel function (Teye et al., 2013; Li et al., 2011).
Least Squares Support Vector Machine (LS-SVM):
LS-SVM is an improved version of the standard support vector machine and has been successfully applied to many classification problems. LS-SVM works on the margin-maximization principle of structural risk minimization and trains more easily than SVM (Wu et al., 2012). In this study, three crucial choices, the optimal input subset, the kernel function and the kernel parameters, were resolved using grid search and 10-fold cross-validation (Gao et al., 2013). The free LS-SVM toolbox (LS-SVM v1.5) for MATLAB was used to develop the calibration and prediction models.
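The study used the MATLAB LS-SVM toolbox; as a rough illustration of what LS-SVM computes, the following NumPy sketch trains a binary LS-SVM classifier with a linear kernel (the kernel found optimal below) by solving the dual linear system. Extending it to the 5-class problem would require, e.g., a one-vs-rest scheme; all data here are toy values.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0):
    """Binary LS-SVM classifier (linear kernel); y must be in {-1, +1}.

    Solves the dual system  [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1],
    where Omega_ij = y_i * y_j * <x_i, x_j>  (Suykens' formulation).
    """
    n = len(y)
    K = X @ X.T                                 # linear kernel matrix
    Omega = np.outer(y, y) * K
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                      # alpha, bias b

def lssvm_predict(X_train, y_train, alpha, b, X_new):
    """Decision: sign(sum_i alpha_i y_i <x_i, x> + b)."""
    return np.sign((alpha * y_train) @ (X_train @ X_new.T) + b)

# Toy usage: two separable clouds in an 8-dimensional "PC score" space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (25, 8)), rng.normal(1, 0.3, (25, 8))])
y = np.array([-1] * 25 + [1] * 25)
alpha, b = lssvm_train(X, y)
print((lssvm_predict(X, y, alpha, b, X) == y).mean())   # training accuracy
```

Unlike standard SVM, which solves a quadratic program with inequality constraints, LS-SVM reduces training to a single linear solve, which is why it "trains more easily".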
The spectral characteristics of each rice variety:
The raw spectra of all samples (350-2500 nm) are shown in Fig. 2 and the average spectrum of each rice variety is shown in Fig. 3. From Fig. 3, it can be seen that there are obvious differences among the spectra of the 5 rice varieties, especially at the peaks of the spectral curves. Therefore, the 5 rice varieties can in principle be classified from the hyperspectral data.
MSC preprocessing:
The spectra may be affected by inevitable noise from the hardware, and large differences among the spectral data can be caused by the inhomogeneity of the rice samples and by light scattering. Therefore, it is necessary to apply a suitable preprocessing method to correct the raw spectra before developing the models. In this study, the MSC method was used to process the raw spectra, and the processed spectral curves are shown in Fig. 4.
RESULTS AND DISCUSSION
Preliminary: dimension reduction using PCA: As hyperspectral data provide much more information than general spectral data, the problems of huge, noisy and redundant data are more prominent when processing them. In order to improve processing efficiency and meet the needs of online industrial application, dimension reduction methods should be investigated. In this study, PCA was used to reduce the dimension of the raw spectra of the 5 rice varieties. The three-dimensional map of PC1, PC2 and PC3 of the 5 rice varieties is shown in Fig. 5, where "1" denotes Ruan-Ya-Xiang-Si rice, "2" denotes Jiang-Su rice, "3" denotes Chang-Li-Xiang rice, and "4" and "5" denote the remaining two varieties.
Determination of PCs and model parameters:
The parameters used in a model have a great influence on the performance of the final discrimination model, and different parameters may lead to great differences for the same classification algorithm. Therefore, the parameters were first determined on the calibration set. In this study, 10-fold cross-validation was used to choose the optimal number of PCs, the K value of KNN, and the kernel functions of SVM and LS-SVM. PCs from 1 to 20, K values from 1 to 10 in KNN, and linear and RBF kernel functions in SVM and LS-SVM were investigated, and the final parameters were chosen according to the maximum cross-validation discrimination rate. The cross-validation results of KNN, SVM and LS-SVM are shown in Fig. 6 to 8, respectively. It can be seen that the cross-validation discrimination rate improved as the number of PCs increased and changed little once the number of PCs reached 8. From Fig. 6, the optimal number of PCs is 8 and the optimal K of KNN is 2; from Fig. 7, the optimal number of PCs for SVM is 8 and its kernel function is the linear kernel.
Similarly, the optimal number of PCs for LS-SVM is also 8 and its kernel function is the linear kernel.
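A compact sketch of this kind of parameter search using scikit-learn's 10-fold cross-validation, shown here for KNN; the grid follows the ranges above, while the data are placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
scores_pca = rng.random((125, 20))   # calibration-set scores, up to 20 PCs
labels = np.repeat(np.arange(5), 25)

best = (-1.0, 0, 0)                  # (accuracy, n_pcs, K)
for n_pcs in range(1, 21):           # PCs from 1 to 20
    for k in range(1, 11):           # K from 1 to 10
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=k),
                              scores_pca[:, :n_pcs], labels, cv=10).mean()
        best = max(best, (acc, n_pcs, k))
print("best 10-fold CV accuracy, PCs, K:", best)
```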
Analysis of the three classification models on the prediction set: All samples were divided into two parts. One hundred and twenty-five rice samples (25 of each variety) were randomly chosen as the calibration set and the remaining 125 samples formed the prediction set.
Using the optimal PCs and parameters, the three classification algorithms (KNN, SVM and LS-SVM) were used to establish discrimination models for the rice varieties, and the performance of the three models was evaluated mainly by cross-validation accuracy and prediction accuracy; the final results are shown in Table 1.
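Putting the pieces together, a hedged end-to-end sketch of the comparison: PCA is fitted on the calibration half and the models are scored on the prediction half. A scikit-learn SVC stands in for the SVM step, since scikit-learn provides no LS-SVM; all arrays are simulated.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((250, 2151))                 # stand-in for MSC-corrected spectra
y = np.repeat(np.arange(5), 50)

Xc, Xp, yc, yp = train_test_split(X, y, test_size=125, stratify=y,
                                  random_state=0)
pca = PCA(n_components=8).fit(Xc)           # fit PCA on the calibration set only
Zc, Zp = pca.transform(Xc), pca.transform(Xp)

for name, model in [("KNN (K=2)", KNeighborsClassifier(n_neighbors=2)),
                    ("SVM (linear)", SVC(kernel="linear"))]:
    model.fit(Zc, yc)
    print(name, "prediction accuracy:", model.score(Zp, yp))
```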
Among the three models, the KNN model had the lowest cross-validation and prediction accuracies, while the other two were relatively high; the nonlinear models (SVM and LS-SVM) therefore performed better than the linear model (KNN). This is probably because there are nonlinear relationships within the hyperspectral data, making it hard to separate all the samples with a linear algorithm alone. In addition, of the two nonlinear models, the LS-SVM model performed better than the SVM model, with a cross-validation accuracy of 100 % and a prediction accuracy of 100 %, because LS-SVM is an improved algorithm based on SVM and is more suitable for these hyperspectral data.
CONCLUSION
A FieldSpec®3 spectrometer was used to discriminate rice varieties over the spectral range 350-2500 nm. Principal Component Analysis (PCA) was used to reduce the dimension of, and remove noise from, the hyperspectral data. K-Nearest Neighbors (KNN), Support Vector Machine (SVM) and Least Squares Support Vector Machine (LS-SVM) were then used to develop three discrimination models for the 5 rice varieties. Cross-validation was employed to determine the optimal number of Principal Components (PCs) and the parameters of the KNN, SVM and LS-SVM models. Based on the cross-validation and prediction accuracies, the KNN model gave lower performance than the SVM and LS-SVM models, and the LS-SVM model achieved the best performance, with an accuracy of 100 %. These results indicate that nonlinear classification algorithms performed better than the linear algorithm on hyperspectral data of rice varieties, and that LS-SVM can be used to develop the optimal discrimination model for rice variety.
Fig. 5:
Fig. 5: Three-dimensional map of the first three PCs of the spectra of the 5 rice varieties
Fig. 7:
Fig. 7: Results of cross-validation recognition rates for the SVM model
Table 1:
Results of 3 discrimination models | 3,589.6 | 2015-03-25T00:00:00.000 | [
"Agricultural and Food Sciences",
"Computer Science"
] |
Identification of new benzofuran derivatives as STING agonists with broad-spectrum antiviral activity
Highlights
• Benzofuran derivatives were shown to induce IFN-I expression in a STING-dependent luciferase assay.
• Activity as STING agonists was confirmed by mutagenesis studies.
• The antiviral effect of BZFs was demonstrated on HCoV-229E and SARS-CoV-2 replication.
• The IFN-I-mediated antiviral effect was confirmed by immunofluorescence analysis.
Hence, STING agonists are host-targeting molecules that induce innate immunity with potentially broad-spectrum antiviral activity. To identify novel antiviral agents, and given the reported STING-agonist activity of benzothiophene (Pan et al., 2020) and benzimidazole derivatives (Zhu et al., 2021), we studied the activity of a new series of benzofuran derivatives (BZFs), whose scaffold is a bioisostere of both the benzothiophene and benzimidazole substructures (Barillari and Brown, 2012; Brown, 2012). Furthermore, BZF is a common moiety present in many biologically active natural and therapeutic compounds, making it a suitable scaffold for the development of novel bioactive molecules (Duncan et al., 2021; Khanam and Shamsuzzaman, 2015; Miao et al., 2019; Naik et al., 2015; Nevagi et al., 2015; Pan et al., 2020; Xu et al., 2019). Hence, thirteen in-house BZF derivatives bearing different substituents were selected (Delogu et al., 2016, 2021, 2022) and subjected to biological assays to assess their ability to induce IFN and to inhibit viral replication.
Plasmid mutagenesis
The plasmid pUNO1-hSTING-HA3x was mutated with the QuikChange Lightning Site-Directed Mutagenesis Kit (Agilent Technologies) according to the manufacturer's instructions. The primers used were forward: CCG TGC GGA GAG GGA GTT GCT TTT CCA TTC CAC T and reverse: AGT GGA ATG GAA AAG CAA CTC CCT CTC CGC ACG G; mutagenesis was confirmed by sequencing.
Western blot
HEK293T cells were seeded in 12-well plates at 10^5 cells per well; 24 h after seeding, cells were treated with the indicated compound concentrations diluted in culture medium. Doxorubicin was used as a genotoxicity control at 0.5 μM. After 24 h, the culture medium was removed, cells were washed with cold Phosphate-Buffered Saline (PBS), and proteins were extracted with 200 μL RIPA buffer (0.05 M Tris-HCl, pH 7.4, 0.15 M NaCl, 0.25 % deoxycholic acid, 1 % NP-40, 10 mM EDTA) supplemented with protease and phosphatase inhibitors (PhosSTOP™, Roche). Cells were lysed on ice in RIPA buffer for 20 min on an orbital shaker at 250 rpm, and whole-cell lysates were cleared for 20 min at 12,000 × g. Protein concentration was quantified with the Pierce™ BCA Protein Assay kit (Thermo Fisher Scientific); 20 ng of protein was processed with 4X loading buffer, boiled for 3 min, and loaded on SDS-PAGE (NuPAGE 4-12 %) for protein separation. Proteins were blotted with
HCoV-229E viral replication assay in MRC-5 cells
MRC-5 cells were seeded at 1×10^5 per well in 12-well plates and incubated overnight. Cells were then infected at a MOI of 0.2 and treated with the indicated concentrations of compounds for 1 h at 35 °C with 5 % CO2, after which the inoculum was removed and replaced with compounds diluted in complete medium. 48 h post infection, RNA was extracted with TRIzol™ Reagent (Invitrogen), reverse transcribed and amplified using the Luna universal one-step quantitative real-time PCR (RT-qPCR) kit (New England BioLabs). HCoV-229E Envelope protein mRNA expression levels (fw_primer: CGTCAGGGTAGAATACCTT; rv_primer: CCTGTGCCAAGATAAAA) were normalized to the level of GAPDH. Results are expressed as the percentage of viral replication with respect to the infected control. The GC376 compound was used as a positive control of viral inhibition (Hu et al., 2021). Compound cytotoxicity was assessed in parallel: 2×10^4 MRC-5 cells/well were seeded in 96-well plates, treated 24 h later with decreasing concentrations of compounds, and cell viability was measured 48 h after treatment with the MTT method as described above.
SARS-CoV-2 viral replication assay in BEAS-2B cells
BEAS-2B cells were seeded at 3×10^5 per well in 12-well plates and incubated overnight to reach 90 % confluency. Cells were then infected at a MOI of 0.2 and treated with the indicated concentrations of compounds for 1 h at 37 °C with 5 % CO2, after which the inoculum was removed and replaced with compounds diluted in complete medium. 48 h post infection, RNA was extracted with TRIzol™ Reagent (Invitrogen), reverse transcribed and amplified using the Luna universal one-step quantitative real-time PCR (RT-qPCR) kit (New England BioLabs). SARS-CoV-2 Spike protein mRNA expression levels (fw_primer: GTGTTTATTTTGCTTCCACT; rv_primer: GGCTGAGAGACATATTCAAAA) were normalized to the level of GAPDH. Results are expressed as the percentage of viral replication with respect to the infected control. The GC376 compound was used as a positive control of viral inhibition (Hu et al., 2021).
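The normalization to GAPDH described above corresponds to the standard 2^-ΔΔCt method. A minimal sketch of the percent-replication calculation from Ct values; all numbers are invented for illustration and are not data from this study.

```python
def relative_expression(ct_target: float, ct_gapdh: float,
                        ct_target_ctrl: float, ct_gapdh_ctrl: float) -> float:
    """2^-ΔΔCt of the viral target relative to the infected, untreated control."""
    ddct = (ct_target - ct_gapdh) - (ct_target_ctrl - ct_gapdh_ctrl)
    return 2.0 ** (-ddct)

# Illustrative Ct values for a treated well vs. the infected control.
pct_replication = 100.0 * relative_expression(
    ct_target=24.0, ct_gapdh=18.0,            # treated sample
    ct_target_ctrl=21.0, ct_gapdh_ctrl=18.0)  # infected control
print(f"viral replication ≈ {pct_replication:.1f} % of control")  # ≈ 12.5 %
```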
SARS-CoV-2 viral replication assay in Vero-E6 GFP
The SARS-CoV-2 viral replication assay in Vero-E6 GFP cells was performed as previously described (Corona et al., 2022). The inhibition of viral replication was calculated as the percentage of virus-induced cytopathic effect relative to infected untreated controls. The EC50 value was calculated with Prism 9 (version 9.1.2) via non-linear regression.
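Prism's non-linear regression for EC50 amounts to fitting a four-parameter logistic dose-response curve; an equivalent SciPy sketch might look like the following, with invented dose-response points.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Invented dose-response data: % inhibition at increasing concentrations (μM).
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
inhib = np.array([5.0, 12.0, 35.0, 68.0, 90.0, 97.0])

params, _ = curve_fit(four_pl, conc, inhib,
                      p0=[0.0, 100.0, 2.0, 1.0], maxfev=10_000)
print(f"EC50 ≈ {params[2]:.2f} μM")
```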
SARS-CoV-2 viral replication assay in Calu-3
The SARS-CoV-2 viral replication assay in Calu-3 cells was performed as previously described (Stefanelli et al., 2023). Compound cytotoxicity was assessed in parallel: 2×10^4 Calu-3 cells/well were seeded in 96-well plates, treated 24 h later with decreasing concentrations of compounds, and cell viability was measured 48 h after treatment with the MTT method as described above.
Immunofluorescence
BEAS-2B cells were seeded at 5×10^4 cells per well in transparent 24-well plates. 24 h after seeding, cells were treated with compound or 0.1 % DMSO (untreated controls) and infected with HCoV-229E at a MOI of 0.06 in the presence of compound or 0.1 % DMSO for 1 h at 35 °C, 5 % CO2. The inoculum was then removed and replaced with compound or 0.1 % DMSO in complete medium. 6 h post infection, cells were fixed with 4 % PFA for 15 min, washed three times with PBS, incubated 7 min with 100 mM glycine, washed three times with PBS, permeabilized with 0.3 % Triton X-100 in PBS for 10 min, blocked with 0.1 % Triton X-100, 5 % BSA in PBS for 60 min, and incubated 60 min with primary antibody Phospho-IRF3 (Ser396) (Invitrogen cat. 720012) diluted 1:2000 in blocking solution. Cells were washed three times with blocking solution, incubated with secondary antibody Anti-Rabbit IgG-Atto 488 (Sigma-Aldrich cat. 18772) diluted 1:500 for 60 min, and washed three times with PBS. Post-fixation was performed for 10 min with 4 % PFA, and nuclei were stained with Hoechst 1 μg/ml in PBS. Cells were washed three times with PBS and kept in PBS for image acquisition. Images were acquired with the Cytation 5 Cell Imaging Multimode Reader (BioTek) and analysed with the Gen5 Software for Imaging & Microscopy (BioTek).
Molecular modelling studies
Ligand preparation. The global minimum conformation of the compounds was determined by molecular mechanics conformational analysis performed with Macromodel version 9.2 (Mohamadi et al., 1990), using the Merck Molecular Force Field (MMFFs) and accounting for solvent effects with the generalized Born/surface area (GB/SA) water implicit solvation model (Halgren, 1996; Kollman et al., 2000). The simulations comprised a 5000-step Monte Carlo analysis with the Polak-Ribiere Conjugate Gradient (PRCG) method and a convergence criterion of 0.05 kcal/(mol Å); all other parameters were left at their defaults.
Protein preparation. The three-dimensional coordinates of the protein complexes were obtained from the Protein Data Bank (PDB) (Burley et al., 2019). The proteins were then processed: hydrogen atoms were added, and multiple bonds and bond lengths were optimized using the algorithm implemented in Maestro's Protein Preparation Wizard with default settings (Madhavi Sastry et al., 2013). The available 3D models were aligned and the protein structure was analysed in detail, in particular the overlap of secondary structures and the individual residues involved in the interaction with agonists.
The new compounds were then docked using the extra-precision (XP) docking mode on the grid generated from the protein structure, and the Glide score was used to evaluate the final ligand-protein binding.
Druggable site detection. SiteMap was applied to the prepared protein to identify druggable pockets. SiteScore, its scoring function, was used to assess a site's propensity for ligand binding (Halgren, 2009a).
Establishment of a reporter gene assay to select STING agonists
Given STING's involvement in the DNA damage response, most transformed cell lines carry alterations of the cGAS-STING pathway. Hence, in establishing a reporter gene assay to test molecules potentially acting as STING agonists, it was considered more robust and controlled to use a STING-defective cell line, Human Embryonic Kidney 293T (HEK293T), transfected with a plasmid expressing exogenous STING, and then to measure specifically the STING-dependent induction of the IFN-β gene (Miao et al., 2019; Suter et al., 2021; Thomsen et al., 2016). Therefore, HEK293T cells were transfected with a vector encoding wt STING and a reporter plasmid encoding the luciferase gene under the control of the IFN-β promoter, as described in Materials and Methods. The STING agonist MSA-2 was used as an induction control (Reus et al., 2020). Optimization of the assay identified the conditions giving the best ratio of MSA-2-induced signal to background (Fig. 1).
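The readout of this assay is a fold induction: the luciferase signal of each condition divided by that of the unstimulated empty-vector control. A trivial sketch of the normalization; the triplicate readings are placeholders, not measured values.

```python
import numpy as np

# Raw luminescence readings (triplicates); values are placeholders.
ev_dmso   = np.array([1000., 1100., 950.])   # empty vector, unstimulated
sting_msa = np.array([9200., 8800., 9600.])  # wt STING + MSA-2

baseline = ev_dmso.mean()
fold_induction = sting_msa / baseline
print(f"fold induction: {fold_induction.mean():.1f} "
      f"± {fold_induction.std(ddof=1):.1f}")
```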
In addition, a mutated, inactive form of STING was used. In this mutant, STINGP371Q, the STING residue Pro371 is replaced with Gln, which prevents STING from binding TBK-1 and hence blocks IFN-I induction. Indeed, MSA-2 was not able to induce IFN-β promoter expression in the presence of the vector encoding STINGP371Q, even at the highest plasmid concentration tested (Fig. 1).
STING-dependent IFN-β promoter induction by BZF derivatives
Based on previous observations that benzothiophene (Pan et al., 2020) and benzimidazole derivatives (Zhu et al., 2021) are STING agonists, and given that the BZF scaffold is a bioisostere of both substructures (Barillari and Brown, 2012; Brown, 2012), 13 BZF derivatives (Fig. 2) were evaluated in the assay described above to verify their ability to act as STING agonists. In the presence of wt STING, 7 of the 13 BZFs strongly induced IFN-β transcription (Fig. 3). In particular, compounds BZF-2OH, BZF-3OH, BZF-5OH, BZF-8OH, BZF-9OH, BZF-37OH and BZF-46OH significantly induced the IFN-I reporter gene (Fig. 3), while BZFs with three hydroxyl groups on the 2-phenyl ring (BZF-7OH and BZF-45OH), as well as those with only one hydroxyl in the meta position (BZF-177OH and BZF-183OH), were inactive. In addition, compound BZF-52OH, which carries an isopropyl group in position 7, was inactive compared with BZF-2OH, and compound BZF-47OH, which bears a chlorine atom in position 5, was inactive compared with BZF-3OH, BZF-5OH and BZF-9OH. Overall, these results define structure-activity relationships for this chemical series.
To confirm that these BZFs induce IFN-I expression in a STING-dependent manner, the compounds were also tested in the presence of the inactive STINGP371Q. The BZFs active on wt STING did not induce IFN-I expression in the presence of STINGP371Q, confirming their ability to act as STING agonists (Fig. 3).
BZFs do not induce DNA damage
Given that the cGAS-STING pathway can also be activated by cytosolic DNA released upon nuclear DNA damage, we wanted to exclude that the BZF compounds were genotoxic. Potential DNA damage induced by the BZFs was therefore assessed by measuring p53 levels in the presence of the compounds by western blot. HEK293T cells were treated for 24 h with BZF-2OH, BZF-5OH or BZF-37OH, which were active in the IFN-β reporter gene assay, using doxorubicin as a genotoxic positive control (Fig. 4) (Lin et al., 2018). The p53 levels in the presence of the BZF compounds were comparable to the untreated control, excluding that BZFs induce IFN-I expression through cytosolic DNA release.
Inhibition of HCoV-229E replication by BZF derivatives
To verify whether the BZF-mediated induction of IFN-I expression translates into an antiviral effect, we first tested BZF efficacy against HCoV-229E replication in BEAS-2B cells, using compound GC376 as a positive control (Seng et al., 2014). Among the seven compounds that induced IFN-β reporter gene expression, three effectively inhibited HCoV-229E replication, namely BZF-2OH, BZF-5OH and BZF-37OH, with EC50 values in the μM range (Table 1).
In contrast, BZF-8OH and BZF-46OH were cytotoxic, while BZF-3OH and BZF-9OH did not inhibit viral replication even though they were not highly cytotoxic (Table 1). Of note, the known STING agonist MSA-2 was not able to inhibit viral replication and, to the best of our knowledge, no report has shown an antiviral effect of MSA-2. To further assess the compounds' antiviral activity against HCoV-229E, the active BZFs were also tested against HCoV-229E replication in MRC-5 cells, confirming their antiviral activity in the same concentration range (Table 1).
Inhibition of SARS-CoV-2 replication by BZF derivatives
To verify whether compounds BZF-2OH, BZF-5OH and BZF-37OH could also inhibit other HCoVs, we tested their effect on SARS-CoV-2 replication. For better comparison, we first assessed SARS-CoV-2 replication in BEAS-2B cells. Since SARS-CoV-2 is known to replicate less efficiently than HCoV-229E, we determined the replication efficiency, observing roughly 2-fold lower efficiency for SARS-CoV-2 than for HCoV-229E (data not shown). As SARS-CoV-2 replication in BEAS-2B cells was sufficient for evaluating compound effects, we tested the compounds and found that BZF-2OH and BZF-5OH inhibited SARS-CoV-2 replication with EC50 values in the μM range, while compound BZF-37OH was unexpectedly inactive (Table 2). To further assess the antiviral activity against SARS-CoV-2, the BZFs were also evaluated in Calu-3 cells, in which SARS-CoV-2 replicates more efficiently than in BEAS-2B, showing an antiviral effect in the nM range for all three tested compounds (Table 2). Given that a strong cGAS/STING induction (up to 98-fold) has been reported in Calu-3 cells infected by SARS-CoV-2 (Mösbauer et al., 2021; Zhou et al., 2021), the higher potency of SARS-CoV-2 inhibition observed in Calu-3 is consistent with the compounds' mode of action.
To further confirm that the compounds' inhibition was indeed due to IFN-I induction, we tested their inhibitory effect on SARS-CoV-2 replication in Vero E6 cells, which are defective for IFN-I production. As expected, the BZFs did not inhibit SARS-CoV-2 in Vero E6 cells (Table 2), confirming that they act by inducing IFN-I expression.
pIRF3 expression analysis
Phospho-IRF3 is a key downstream effector of the cGAS-STING pathway; hence, to further verify that the BZFs act as STING agonists, we evaluated whether they trigger IRF3 phosphorylation. IRF3 phosphorylation was evaluated in uninfected BEAS-2B cells (Fig. 5A) and in BEAS-2B cells infected with HCoV-229E (Fig. 5B), in the absence and presence of BZF-2OH or MSA-2. MSA-2 was used as a control of STING-mediated IRF3 phosphorylation (Pan et al., 2020). Images were taken 6 h post infection and subpopulation analysis was performed with the Gen5 software (BioTek).
The subpopulation analysis showed that nuclear and cytoplasmic pIRF3 levels in infected BEAS-2B cells increased by 2.3- and 13-fold, respectively, with respect to uninfected control cells. In the presence of BZF-2OH, nuclear and cytoplasmic pIRF3 levels increased by 9.4- and 32.2-fold, respectively, compared with uninfected control cells; similarly, in the presence of MSA-2 they increased by 3.6- and 21-fold, respectively.
Interestingly, comparing the effects of BZF-2OH in infected versus uninfected BEAS-2B cells showed that both nuclear and cytoplasmic pIRF3 levels are reduced: in infected cells, BZF-2OH induced only a 2.2- and 19.1-fold increase in nuclear and cytoplasmic pIRF3, respectively. Of note, the effect of MSA-2 in infected BEAS-2B cells was reduced even further, to a 1.1- and 13-fold increase in nuclear and cytoplasmic pIRF3, respectively. Overall, these results demonstrate that BZF-2OH acts as a STING agonist and show that viral infection (probably through innate immunity evasion mechanisms) reduces the effect of STING induction by both BZFs and MSA-2, to different degrees. The fact that MSA-2 does not increase IRF3 phosphorylation in infected BEAS-2B cells may explain its lack of antiviral effect.
Docking studies
To gain further insight into BZF interaction with STING, the most promising and selective compounds, BZF-2OH and BZF-37OH, were subjected to molecular docking studies to predict their putative binding mode, using the STING crystal structure with PDB code 6UKZ (Pan et al., 2020). The docking protocol was validated through re- and cross-docking, taking into account the crystallographic data of seven ligands. The predicted binding mode of the ligands in the STING extracellular cavity is shown in Fig. 6. A further analysis was performed with SiteMap to understand how the BZF derivatives could be optimized. The analysis highlights areas within the BZF binding pocket that are suitable for occupancy by ligands with hydrogen-bond acceptors (red maps), donors (violet maps) or hydrophobic groups (yellow maps) (Fig. 7) (Halgren, 2009b). The differentiation of the binding-site sub-regions allows a quick assessment of a ligand's complementarity. We observed that both donor (violet) and acceptor (red) maps are well represented (Fig. 7B).
Discussion
Ongoing viral evolution, climate change and spillover events represent a major health issue worldwide. Hence, innovative therapeutic approaches are required to effectively counteract and control viral spread, also in view of potential future epidemics. On the path to the discovery and development of broad-spectrum antiviral agents, one possibility is to target cellular proteins in order to trigger a strong innate immune response capable of blocking viral replication. STING has been identified as a potential target for this strategy owing to its central role in the innate immune response (Deng et al., 2014b; Maringer and Fernandez-Sesma, 2014; Unterholzner and Dunphy, 2019b; Woo et al., 2014b).
Known STING agonists often share a moiety that mimics the purine bases of the natural substrate, e.g., benzothiophene derivatives (MSA-2) and benzimidazole derivatives (di-ABZI). Hence, BZF derivatives were tested as a promising scaffold for the design of novel STING agonists. In fact, typical isosteric substitutions are -S- with -O-, and -N= with -CH= (Barillari and Brown, 2012; Brown, 2012).
To study potential STING agonists, we first established a novel luciferase gene-reporter cell-based assay, which was then used to test thirteen BZFs; seven BZFs were identified that significantly induce IFN-β-driven luciferase expression in the presence of wt STING. The lack of BZF induction of IFN-β expression in the presence of the mutant, inactive STING P371Q confirms STING engagement in their mode of action. Antiviral assays showed that BZF derivatives are able to inhibit HCoV replication, namely HCoV-229E and SARS-CoV-2, in different cell lines. The different potencies of inhibition of viral replication observed in the different cell lines are probably linked to the different levels of STING expression, activation upon viral infection, and inhibition by viral infection. In fact, the difference in antiviral efficacy among some BZF derivatives, as well as the lack of antiviral effect of the known STING agonist MSA-2, points to the need for further investigation of their interplay with viral proteins that may reduce their ability to act as STING agonists.
The hypothesis that BZF derivatives inhibit viral replication by acting as STING agonists is clearly supported by the lack of induction of luciferase production with the mutant STING P371Q, the lack of SARS-CoV-2 inhibition in Vero E6 cells, and the induction of IRF3 phosphorylation. Docking experiments allowed us to predict the binding mode of the best compounds, BZF-2OH and BZF-37OH. The complexes are stabilized by hydrogen-bond interactions (with Arg312 and Gly166 for BZF-2OH, and Gly166 for BZF-37OH) and strong cation-π interactions between the ligands and the Arg238 residues of the STING dimer. Furthermore, several van der Waals interactions, involving Leu159, Tyr163, Tyr167, Leu170, Ile235, Tyr240, and Pro264 from both dimer chains, also contribute. The binding mode analysis helps to understand the SAR for this chemical series. Although mono-substitution of the benzofuran ring is relatively well tolerated, the steric hindrance of a larger substituent, as in position 7 of BZF-52OH, is associated with a loss of activity. The SAR also suggests that the presence of three OH groups on the 2-phenyl ring, as in BZF-7OH and BZF-45OH, is detrimental to the compounds' activity. However, given the overall druggable site, the results suggest that it is possible to increase the compound size, and this could lead to increased selectivity. Indeed, cGAMP and some known agonists such as di-ABZI (Ramanjulu et al., 2018) and the MSA-2 dimers (Pan et al., 2020) are reported to occupy this large area. Altogether, this might help to increase the activity of the studied molecules while possibly reducing their toxicity.
Conclusions
Overall, the cellular testing combined with in silico studies demonstrated that some BZF derivatives are selective STING agonists, able to induce the innate immune response and thus inhibit HCoV replication in different cell lines. The presented data indicate that BZF derivatives can be used as a chemical scaffold to target STING and develop broad-spectrum antivirals.
Fig. 1. Establishment of the pUNO-STING concentration for the IFN-I induction gene-reporter assay. HEK293T cells were transfected with pGL-IFN-β-luc (60 ng/well) and 1, 10, or 50 ng of pUNO-STING, or 50 ng of empty vector (EV), or 50 ng of pUNO-STING P371Q. 24 h after transfection, cells were stimulated with MSA-2 (blue oblique-stripe columns) at 10 μM or an equal volume of complete medium with DMSO (blue filled columns). 24 h after stimulation, cells were harvested and luciferase activity was measured. Results are shown as pGL-IFN-β-luc fold induction over the non-stimulated control in the presence of EV. Values represent the mean ± SEM of three independent experiments based on triplicates. Asterisks indicate a significant difference obtained comparing EV-DMSO with pUNO-STING at the different concentrations (two-way ANOVA, n ≥ 3); **p < 0.01, ****p < 0.001.
Fig. 4. BZF effect on p53 expression. HEK293T cells were treated with DMSO, MSA-2, and BZFs at 10 μM, or doxorubicin at 0.5 μM, for 24 h. Cells were then lysed and 20 ng of cell lysate was subjected to western blot. The experiment was repeated three times independently with similar results.
a EC50: compound concentration able to reduce by 50% the HCoV-229E-induced cytopathic effect in BEAS-2B cells, as compared to the untreated control. b SI: selectivity index, calculated as the ratio between the CC50 and EC50 values. c CC50: compound concentration able to reduce by 50% BEAS-2B cell viability. d EC50: compound concentration able to reduce by 50% HCoV-229E viral RNA accumulation in MRC-5 cells, as compared to the untreated control. e CC50: compound concentration able to reduce by 50% MRC-5 cell viability. BEAS-2B values represent the mean ± SD of three independent experiments based on at least 6 compound concentrations in triplicate; MRC-5 values represent the mean ± SD of two independent experiments based on at least 4 concentrations in duplicate.
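The selectivity index in footnote b is simply the ratio of the cytotoxic to the effective concentration. A minimal sketch of that computation, with placeholder concentrations rather than values from the tables:

```python
# Selectivity index as defined in the table footnotes: SI = CC50 / EC50.
# The concentrations below are illustrative placeholders, not data.
def selectivity_index(cc50_um: float, ec50_um: float) -> float:
    """SI = CC50 / EC50; higher values indicate a wider safety window."""
    return cc50_um / ec50_um

print(selectivity_index(cc50_um=100.0, ec50_um=5.0))  # -> 20.0
```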
Fig. 5. Effect of BZF-2OH and MSA-2 on IRF3 phosphorylation in mock and infected cells. Immunofluorescence of BEAS-2B cells uninfected (A) and infected (B) with HCoV-229E at MOI 0.06, in the presence and absence of 10 μM BZF-2OH or MSA-2. 6 h post infection and treatment, cells were fixed and stained as described. The images are representative of two independent experiments. Nuclei are stained in blue and S396 pIRF3 in green. Images were acquired at 20× magnification. Scale bar = 200 μm. (For interpretation of the references to color in this figure legend, the reader is referred to the Web version of this article.)
Fig. 6. Putative binding mode of BZF-2OH and BZF-37OH. Panels A, C: 3D representation of the putative binding mode of the ligands to STING (A-chain in light grey, B-chain in dark grey); panels B, D: corresponding 2D representation of the interactions.
Fig. 7. SiteMap analysis of the STING binding pocket. Panel A: druggable site identified by SiteMap and relative maps; panel B: hydrogen-bond acceptor map in red; panel C: hydrogen-bond donor map in violet; panel D: hydrophobic map in yellow. The best docked compound, BZF-2OH, is shown in magenta sticks.
Table 2. Effect of BZF derivatives 2OH, 5OH and 37OH on SARS-CoV-2 replication. BEAS-2B and Vero E6 GFP values represent the mean ± SD of three independent experiments based on at least 6 compound concentrations in triplicate; Calu-3 values represent the mean ± SD of two independent experiments based on at least 6 compound concentrations in duplicate.
"Chemistry",
"Medicine"
] |
A candidate gene based approach validates Md-PG1 as the main gene responsible for a QTL impacting fruit texture in apple (Malus × domestica Borkh.)
Background Apple is a fruit crop widely cultivated for its quality properties and extended storability. Among the several quality factors, texture is the most important and appreciated, and within the apple variety panorama the cortex texture shows a broad range of variability. Anatomically, these variations depend on degradation events occurring in both the fruit primary cell wall and the middle lamella. This physiological process is regulated by an enzymatic network generally encoded by large gene families, among which polygalacturonase is devoted to the depolymerization of pectin. In apple, Md-PG1, a key gene belonging to the polygalacturonase gene family, was mapped on chromosome 10 and co-localized within the statistical interval of a major hot-spot QTL associated with several fruit texture sub-phenotypes. Results In this work, a QTL corresponding to the position of Md-PG1 was validated and new functional alleles associated with the fruit texture properties of 77 apple cultivars were discovered. A set of 38 SNPs, genotyped by full-length gene re-sequencing, and 2 SSR markers targeted ad hoc in the gene meta-contig were employed. Out of this SNP set, eleven were used to define three significant haplotypes statistically associated with several texture components. The impact of Md-PG1 on fruit cell wall disassembly was further confirmed by scanning electron microscopy of the cortex structure in two apple varieties characterized by opposite texture performance, 'Golden Delicious' and 'Granny Smith'. Conclusions The results presented here represent a step forward in the genetic dissection of fruit texture in apple. This new set of haplotypes and microsatellite alleles can represent a valuable toolbox for more efficient parental selection as well as for the identification of new apple accessions distinguished by superior fruit quality features.
Background
Fruit quality is defined by four principal factors: appearance, flavour, texture, and nutritional properties [1]. Of these, texture is the most important component, especially for fruit with a crisp flesh [2], due to its influence on overall fruit quality. Texture decay causes substantial fruit loss during shipping and storage; it results from the degradation of the internal cellular compartments of the fruit, which in turn promotes the development of diseases typical of postharvest storage and shelf-life [3]. Texture is recognised as a complex set of different sub-phenotypes, which can be divided into two main categories [2,4]. The first encompasses mechanical features and is fundamentally related to the strength exerted by the chemical bonds of the cell wall/middle lamella upon application of external pressure. The second category is defined by acoustic signatures and is related to the cell wall breaking phenomenon, with the consequent release of internal pressure [4,5]. Texture change is a physiological event which occurs naturally throughout fruit development and ripening [6], and the magnitude of texture decay is extremely variable between different apple varieties [7]. The variability observed is the result of physiological mechanisms activated during fruit maturation and ripening, in which a large number of enzymes are co-ordinately expressed to remodel the cell wall/middle lamella polysaccharide structure, regulated, amongst other factors, by the effect of ethylene and transcription factors [8,9]. The remodelling process is associated with a decrease in cell-to-cell adhesion, resulting in the separation of cells along the middle lamella (mealy texture) rather than primary cell wall breakage (crispy texture; [1,7]) when the fruit is consumed. Fruit softening and textural changes thus involve a coordinated modification of the primary cell wall and middle lamella polysaccharide structure, a process which initially takes place with a dissolution of the pectin polysaccharides of the middle lamella, followed by a disruption of the ordered structure of the primary cell wall [10,11]. In several fruits, the most active enzymes responsible for pectin modification are polygalacturonase (PG) and pectin methylesterase (PME), while those acting on the primary cell wall are xyloglucan endotransglycosylase (XET) and expansin (Exp). Among this inventory, polygalacturonase is the major enzyme involved in the solubilization of the pectin polysaccharides [12][13][14]. The degradation of the cell wall/middle lamella architecture is in practice considered the final result of the concerted activity of these enzymes, which are usually encoded by multigene families, confirming the complex genetic control of fruit texture metabolism [15][16][17][18][19].
Because of the impact that such physiological changes have on the marketability of edible fruit, researchers have for many years attempted to unravel the genetic basis of this mechanism, with the final goal of elucidating the genes underlying this dynamic process and developing molecular markers suitable for phenotype prediction [6,20,21]. Quantitatively inherited traits can be studied using a QTL mapping approach, which is generally carried out on bi-parental crosses. In apple, several reports have already identified major genomic loci putatively involved in fruit firmness and softening control [22][23][24][25][26], with the largest texture QTL mapping survey described by Longhi et al. [27]. However, QTL mapping carried out using full-sib progenies presents important limitations due to the number of alleles that can be simultaneously analyzed, as the approach samples only a small portion of the total allelic diversity within the cultivated apple germplasm pool. Moreover, linkage analysis requires the development of a segregating population, making this procedure laborious and time consuming. In addition, in this type of material the number of recombination events per chromosome is generally low, limiting genetic mapping resolution [16,28,29]. To overcome these limitations, the analysis of a wider genetic background is rapidly becoming the main strategy for the dissection of complex genetic architectures in plants, establishing genotype-phenotype associations complementary to bi-parental linkage mapping [30][31][32][33][34][35][36].
The main purpose of this study was to validate a QTL by identifying a new set of valuable alleles associated with apple fruit texture sub-phenotypes in 77 cultivars. The phenotype was measured using an extremely precise technique to improve association resolution [37], and the impact of this gene on fruit texture was investigated further by electron microscope scanning of the cortex cell wall of two apple cultivars displaying contrasting texture phenotypes. Finally, a novel set of haplotypes and microsatellite marker alleles specifically related to important texture components is presented, providing valuable markers suitable for marker assisted parent selection (MAPS) as well as for assisting traditional breeding towards the selection of novel apple accessions characterized by superior fruit quality properties.
Plant material
A panel of 77 apple varieties, including both modern and old apple cultivars (Table 1), was chosen from two germplasm collections available at the Research and Innovation Centre of the Edmund Mach Foundation and the Laimburg Research Centre for Agriculture and Forestry, both located in northern Italy (Trentino Alto Adige region). All the apple cultivars were planted in triplicate on M9 rootstocks and maintained following standard technical management procedures. Apple fruits were collected at the commercial harvest stage, defined by monitoring changes in standard pomological parameters, such as skin and seed colour, Brix value (total sugar content), cortex firmness assessed on site, and the starch conversion index. Fruit were picked at a starch index of 7, on a 1-to-10 scale.
Total genomic DNA was isolated from young leaf tissue using the Qiagen DNeasy Plant mini kit (Qiagen), following the manufacturer's protocol. DNA quantity and quality were measured spectrophotometrically with a Nanodrop ND-8000 (Thermo Scientific, USA).
Apple fruit texture assessment
Fruit samples were stored in a controlled-temperature cellar at 2°C for two months after harvest to maximize the trait phenotypic variance, as reported in Costa et al. [7], and high-resolution phenotyping was carried out for two years. In order to avoid any effect of low temperature, samples were kept at 20°C prior to analysis. Fruit texture was phenotypically dissected by simultaneously assessing both the mechanical and the acoustic fruit profiles using a TA-XTplus texture analyser coupled with an AED acoustic envelope device (Stable Micro System Ltd., Godalming, UK). Sample preparation, instrument settings, and parameter characterization are described in detail in Costa et al. [7]. The fruit texture assessment was performed in an isolated room, avoiding external noise that could interfere with the acoustic recording.

Table 1 legend: apple cultivars are listed by name and trademark (in brackets). N° is the code used to identify varieties in Additional file 3 and Figure 4. The letters "a" and "b" indicate the varieties used for the phenotypic assessments performed in years 1 and 2, respectively. "Type" indicates whether the variety is considered old (O) or elite (E, new). The "Alleles" column shows the allelic sizes of the microsatellite marker Md-PG1 SSR 10kd for each cultivar.

Candidate gene SNP genotyping was performed by re-sequencing (Sanger technology) the regions described above in the 77 apple cultivars, using the specific forward and reverse primers listed in Additional file 1. Sequences were assembled and analysed with the Pregap4 software version 1.3 (Staden Package). For fine mapping of the Md-PG1 region, in addition to the SNPs genotyped by re-sequencing, two microsatellites located in the assembled gene meta-contig were also used. The first was located 3 kb upstream of the Md-PG1 start codon and was retrieved from Longhi et al. [27]. The second SSR marker, here named Md-PG1 SSR 10kd (kd: kilobases downstream), was positioned 10 kb downstream of the stop codon and was identified de novo using the software Sputnik (http://espressosoftware.com/sputnik/index.html). PCR for SSR marker genotyping was performed as reported in Longhi et al. [27] (Additional file 1). Fragment sizes were called with GeneMapper (Applied Biosystems, by Life Technologies).
Md-PG1 SSR 10kd mapping and QTL co-localization
The novel microsatellite motif found in the Md-PG1 meta-contig was mapped onto the framework map of the 'Fuji × Delearly' population [27] using specific primer sequences designed with the software Primer3 (http://primer3.sourceforge.net/). The marker was integrated employing the software JoinMap 4 [38], using a LOD of 5.0 and a recombination frequency of 0.45. To investigate the co-location of this marker with QTL regions already associated with dissected texture sub-traits, an MQM computation was performed de novo using MapQTL 6 [39], selecting Md-PG1 SSR 10kd as a co-factor in order to reduce the residual variance. A LOD threshold value of 3.0, established after running 1000 permutations, was chosen to consider a QTL significant. The linkage group was visualized using MapChart 2.1 [40].
Population structure
To correct the analysis for population structure, the molecular profiles of 17 SSR markers (Additional file 2) and 368 SNPs [27] (16 of the initial 384 failed to hybridize) were combined and used. Each microsatellite marker was selected according to map position, amplification efficiency, and the allelic size information available at the HiDRAS website (www.hidras.unimi.it). The population structure of the 77 apple cultivars was computed using a principal component analysis (PCA, computed with the Statistica v7 software), which is a faster alternative to the MCMC model-based strategy, especially with large marker sets [34,41,42]. To account for genetic relatedness among individuals, the same marker data set used for population structure (Q matrix, fixed effect) was also employed to generate a kinship matrix (K matrix), considered as a random factor in the Mixed Linear Model performed using TASSEL [43,44].
Linkage disequilibrium and marker-trait association
The linkage disequilibrium level among the markers (SNPs and SSRs) identified within the apple cultivar collection was calculated and visualised using Haploview 4.2, a software package designed for linkage disequilibrium statistics and haplotype block inference from genotype data [45]. This software was used to illustrate the pairwise r² among the 40 markers identified for the Md-PG1 gene. To illustrate the LD decay within the Md-PG1 region investigated here (from 3 kb upstream to 10 kb downstream of the gene start and stop codons, respectively), the marker pairwise r² values were plotted against their physical distance on chromosome 10. To fit the data, a smoothed line, represented by the logarithmic trend, was also added. The distribution of 63,190 pairs of unlinked markers (368 SNPs) was employed to compute the r², and its 95th percentile was used as the critical point above which linkage between syntenic marker loci was considered genuine.
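For illustration, the following sketch computes pairwise r² as the squared Pearson correlation between genotype dosages, a common unphased approximation (Haploview infers haplotypes first, so its r² values can differ), and derives a baseline from the 95th percentile as described above. The genotype matrix is invented for the example.

```python
import numpy as np

# Toy genotype matrix: rows are markers, columns are cultivars, entries are
# 0/1/2 copies of the minor allele. Values are made up for this example.
genotypes = np.array([
    [0, 1, 2, 1, 0, 2, 1, 0],   # marker 1 across 8 cultivars
    [0, 1, 2, 1, 0, 2, 0, 0],   # marker 2
    [2, 0, 1, 2, 1, 0, 1, 2],   # marker 3
])

def pairwise_r2(g):
    c = np.corrcoef(g)          # Pearson correlation between marker rows
    return c ** 2

r2 = pairwise_r2(genotypes)
print(r2.round(3))

# The LD baseline in the study is the 95th percentile of r^2 among unlinked
# marker pairs; pairs above it are treated as genuinely linked.
baseline = np.percentile(r2[np.triu_indices_from(r2, k=1)], 95)
print(f"baseline (95th percentile): {baseline:.3f}")
```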
Marker-phenotype association analysis was performed using markers with a MAF ≥ 0.05 (minor allele frequency of at least 5%), employing both a fixed-effect general linear model (GLM) and a mixed linear model (MLM) with random factors. Initially, the GLM algorithm implemented in the software package PLINK release 1.07 ([46]; http://pngu.mgh.harvard.edu/~purcell/plink/) was used to find associations between the marker set and the first two principal components (PC1 and PC2) derived from the PCA computed on the texture parameters. Genome-wide adjusted empirical P-values were then computed and corrected by running 1000 permutations. In a second step, the same phenotypic and genotypic data sets were used to find associations with the MLM model of TASSEL, where P-values were corrected for false positives using the False Discovery Rate approach (FDR ≤ 0.05), performed with the QVALUE package implemented in R [47]. A P-value ≤ 0.05 was considered the criterion for marker-phenotype association. The MLM corrected by FDR was further used to explore specific associations between markers and each single dissected texture sub-phenotype.
Considering that the phenotypic variability is more likely associated with SNPs assembled in a haplotype configuration rather than individually, an additional analysis was performed with haplotypes, inferred by FastPhase [48] using only the significant SNPs. Haplotype-phenotype association was computed with the GLM algorithm, and P-values were adjusted by running 1000 permutations.
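A minimal sketch of a permutation-adjusted single-marker test in the spirit of the GLM step above: the statistic is the squared genotype-phenotype correlation (the R² of a one-marker GLM), and the empirical P-value is its rank among 1000 phenotype permutations. Genotypes and phenotypes are random placeholders, not study data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 77                                   # cultivars
genotype = rng.integers(0, 3, size=n)    # 0/1/2 dosage at one SNP
phenotype = rng.normal(size=n)           # e.g., PC1 of the texture parameters

def assoc_stat(g, y):
    """Squared correlation = R^2 of a one-marker general linear model."""
    return np.corrcoef(g, y)[0, 1] ** 2

observed = assoc_stat(genotype, phenotype)
# Permute phenotypes to build the null distribution of the statistic.
perms = np.array([assoc_stat(genotype, rng.permutation(phenotype))
                  for _ in range(1000)])
p_empirical = (1 + np.sum(perms >= observed)) / (1 + len(perms))
print(f"observed R^2 = {observed:.4f}, empirical P = {p_empirical:.3f}")
```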
Scanning electron microscopy of the apple cortex structure
To depict the different anatomical structures of a mealy ('Golden Delicious') and a crispy ('Granny Smith') apple fruit, a cortex portion from both cultivars was isolated and observed using a scanning electron microscope (SEM). Apple flesh slices were prepared by pulling apart the cortex portions, which were then fixed for 2 hours at 4°C with 5% formaldehyde in a 0.1 M phosphate buffer (Na2HPO4 and NaH2PO4, pH 7). Samples were subsequently washed overnight with 0.1 M phosphate buffer at pH 7 at 4°C. Dehydration was performed by incubating the slices for 15-20 min in solutions with increasing concentrations of ethanol, and an Emscope 750 (Emitech, Ashford, Kent) was used for critical point drying. Samples were finally coated with an SC 500 gold sputter coater (Bio-Rad Microscience Division) and examined using a Cambridge Instruments Stereoscan 260 scanning electron microscope.
Results and discussion
Apple fruit texture phenotype dissection

The 77 apple cultivars were phenotypically assessed for fruit texture using a TA-XTplus-AED instrument. The trait dissection was performed by identifying fourteen parameters over the combined mechanical-acoustic profile, ten of which were derived from the mechanical profile and four from the acoustic signature. The fruit texture variability evaluated within the apple collection over two years of observation is illustrated by the PCA plot (Additional file 3). The first principal component (PC1), describing 74.14% and 70.95% of the entire phenotypic variability for the two years respectively, together with the second principal component (PC2), accounting for an additional 12.44% and 12.95%, discriminated the orientation of the mechanical parameters from the acoustic group, suggesting a possibly different genetic control for these two components [7]. The variable projection on the PCA space distinguished the two general texture components (Additional file 3), with all the mechanical parameters plotted in the negative-PC1/positive-PC2 area of the graph, and the acoustic parameters oriented more towards the area characterized by negative values for both PCs. The consistent variable orientation and cultivar distribution between the two years confirm this novel strategy as an efficient and reliable method to dissect the complexity of fruit texture. In both years, the data distribution clearly distinguished mealy varieties (such as 'Delearly', 'Golden Delicious' and 'Gelber Edelapfel', plotted at positive PC1 values) from known firm and crispy varieties (such as 'COOP39', 'Granny Smith', 'Fuji' and 'Cripps Pink') placed in the area characterized by negative PC1 values.
Candidate gene based marker genotyping
The apple genome underwent a recent duplication resulting in pairwise colinearity of large chromosome segments [49]; because of this, Md-PG1, the candidate gene investigated in this work and located on chromosome 10, shows 86% similarity with its homoeologue Md-PG5 on chromosome 5 [27]. To enable the characterization of the sequence specific to Md-PG1, the sequences of the two genes were retrieved from the 'Golden Delicious' genome.
Out of the 38 SNPs genotyped over the Md-PG1 genomic region, 22 were identified by re-sequencing the full-length gene (2395 bp) within the apple collection, with an average frequency of 1 SNP/108.9 bases. Among them, ten were located in exons (total length of 1380 bp), with a frequency of 1 SNP/138 bp, and 12 SNPs in introns (total length of 1015 bp), with a frequency of 1 SNP/84.5 bp.
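These densities follow directly from the region lengths and SNP counts; a one-line arithmetic check:

```python
# SNP density check for the Md-PG1 re-sequencing numbers reported above.
regions = {"full length": (2395, 22), "exons": (1380, 10), "introns": (1015, 12)}
for name, (bp, n_snps) in regions.items():
    print(f"{name}: 1 SNP / {bp / n_snps:.1f} bp")
```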
These frequencies are consistent with previous observations made in apple of 1 SNP/149 bp [50], as well as in other outcrossing plant species such as pine, with 1 SNP/102 bp [51], but lower than in white clover (1/59 bp [52]) and grapevine (1/64 bp [53]). The gene structure analysis showed that SNPs in non-coding regions were twofold more frequent than in coding ones [54]. Within the Md-PG1 predicted gene (with an intron/exon structure consistent with Bird et al. and Atkinson and Gardner [55,56]), the SNP locations, along with their functional annotation, are presented in Additional file 4 and Additional file 5. Among the remaining sixteen SNPs, three were located in the 3′ UTR region, two were found 1 kb upstream of the start codon, and eleven 1 kb downstream of the stop codon of the gene. In addition to the 38 SNPs, two microsatellite markers were also included. The first (Md-PG1SSR) was retrieved from the data of Longhi et al. [27], while the second, named Md-PG1 SSR 10kd, was identified de novo by screening for microsatellite repeats over the Md-PG1 genomic contigs (MDC000024.376 and MDC004966.443, available at http://genomics.research.iasma.it).
Md-PG1 SSR 10kd co-localizes with a texture hot-spot QTL
The newly developed Md-PG1 SSR 10kd microsatellite marker was amplified and integrated into the 'Fuji × Delearly' genetic map, where several QTLs for apple fruit texture had previously been mapped [27], among which the major hot-spot cluster coincided with the Md-PG1 gene. The allele segregation allowed this marker to be mapped at the same position as the gene, 22.5 cM from the top of the linkage group. This second version of the 'Fuji × Delearly' map was used to calculate an improved version of the QTL profile for fruit texture, implementing this new marker as a co-factor for multiple QTL detection (MQM algorithm). The QTL cluster was confirmed on chromosome 10 (Figure 1) and associated with ten sub-traits representative of fruit texture, such as yield force; maximum, final and mean force area; Young's modulus; number of force and acoustic peaks; and mean and maximum acoustic pressure. It is interesting to note that the highest LOD value corresponded to this novel marker, with LOD values ranging from 3.85 to 8.80 and explained phenotypic variance between 19% and 41.8%, confirming its impact on the fruit texture association. Among the texture parameters employed in the QTL mapping, Young's modulus, related to flesh elasticity, showed the lowest level of association. This is consistent with the observations of Costa et al. [7], who reported this index as more related to cell-layer compression behaviour than to cell wall fracturing, the causal event of the mealy/crispy fruit texture, and thus under the control of other genes encoding cell wall degrading enzymes.
QTL validation and allelic survey within an apple collection
A collection of 77 apple cultivars was analyzed using 38 SNPs genotyped by re-sequencing and two SSR markers genotyped by PCR amplification, contained in a region of approximately 16 kb. Among them, 22 SNPs were associated with the Md-PG1 full length, while the other 16 were located in the flanking regions. The LD decay (Figure 2b) was plotted with reference to the LD baseline set at r² = 0.106, represented by the 95th percentile of the r² distribution of unlinked markers. The baseline was determined following the method of Breseghello and Sorells [57], which proposed that the LD extent should be defined by comparing the target LD with that observed among unlinked loci, since LD depends on the sampling scheme. In this computation, the intersection between the data-fitting curve and the LD baseline defined an LD extent of ~2 kb, pointing to a rapid LD decay within this gene and confirming the suitability of the candidate gene approach [58] to find associations between fruit texture and markers based on Md-PG1. Among the set of 40 markers, only those with a MAF value higher than 5% were further used to find associations with the texture phenotype and, to avoid spurious associations (due to false positive effects), the population structure was taken into consideration as a covariate. When phenotypic traits are correlated with population structure, loci that are not related to the trait under investigation may nonetheless be statistically associated [33]. Statistical correction for multiple testing and MAF ≥ 0.05 were also employed in order to improve the QTL detection confidence with a small sample set. For a better estimation of the size effect of this QTL, a wider collection will be assembled and implemented in future analyses. The genetic relationship among the 77 apple cultivars was investigated by Principal Component Analysis. From the total number of PCs, ten were finally selected as covariates to represent the population structure, accounting for 32% of the total genetic variance. The traits employed in the association analysis were represented by a phenotypic dataset containing 14 texture parameters, clustered in two main categories, mechanical and acoustic. These two groups, distinguished by the two principal components (computed on the phenotypic data set), captured 85% of the total phenotypic variance. Initially, the analysis considered the first two PCs as traits, and the association with markers (MAF ≥ 0.05) was computed by running both the GLM and MLM modules. Six markers were commonly identified by both algorithms as statistically associated with PC1, consistent with the higher textural variation explained by this component compared to PC2 (Table 2). Among this set, five SNPs are specifically located within the full-length Md-PG1 gene, and three are included in haploblocks. In particular, PG-full 10 is located in the 1st haploblock, and PG-full 19 and 20 in the 2nd haploblock. The remaining two SNPs, PG-full 1 and 12, were not present in any of the haploblocks defined here. This association additionally confirmed the role of one SNP in particular, here named PG-full 1. This SNP was in fact originally used to map Md-PG1 to linkage group 10 [26,27]. The effect of this marker is validated here in a wider germplasm collection, supporting the previously formulated hypothesis about the effect of the amino acid change due to this SNP on fruit firmness control [26]. It is worth noting that the last marker of this set is the microsatellite Md-PG1 SSR 10kd, located in the 4th haploblock.
In the MLM computation (corrected for false discovery), this microsatellite was also associated with PC2 (q-value: 0.035739, not shown in the table), the principal component explaining a lower share of the phenotypic variability but oriented towards the dissection of the mechanical/acoustic components.
To further explore the association between the markers and the fruit texture sub-traits, each SNP was analyzed against each single texture parameter. Eight of the fourteen texture components were statistically associated with the marker set employed in the analysis (Additional file 6), including acoustic linear distance, number of force and acoustic peaks, area, final force, yield, and maximum and mean force. The remaining six parameters showed a limited number of associated markers. Maximum and mean acoustic pressure were associated only with PG-full 9 and 1 kb down 5. Force linear distance was associated with PG-full 9 and 1 kb down 5, while Young's modulus showed a significant P-value only with the third allele of Md-PG1 SSR 10kd. Parameters related to the force direction (Δ force and force ratio) were associated with PG-full 1, 12, and 13 and with alleles 2 and 3 of the Md-PG1 SSR 10kd marker.
To estimate the SNP effects more accurately, markers assembled into haplotypes were tested for association with the texture sub-traits. From the total number of SNPs significantly associated with the texture components, eleven, with a MAF ≥ 0.05 and located in the Md-PG1 full length, were selected and used to infer three significant haplotypes (H1, H2, and H3; Table 3 and Figure 3). H1 (the most frequent) showed a relevant association with nine texture sub-traits, and it was shared by cultivars distributed along the PC1 axis of the PCA plot, thus characterized by medium/low texture behaviour (mealiness, like 'Golden Delicious'). H2, associated with six texture sub-traits of both a mechanical and an acoustic nature, characterized cultivars known for favourable texture properties (crispness), such as 'Cripps Pink', 'Granny Smith', and 'Nicogreen'. The last haplotype, H3, was associated with only two texture sub-traits (10.7% of the explained phenotypic variance), but it is worth noting that these are specifically related to the acoustic components (acoustic linear distance and number of acoustic peaks). As with H2, H3 was present in high texture performing apple cultivars, such as 'CIVG198', 'Coop39', 'Ligol', and 'Minnewashta'. H2 and H3 also share four SNPs which lead to changes in the Md-PG1 primary sequence. These changes were analyzed in order to see whether they might have an impact on polygalacturonase enzyme activity, explaining, at least partially, the high flesh firmness typical of the varieties harbouring these two haplotypes. SNP1 (V/F) is located in a non-conserved region, and F is one of the most frequent residues; thus it is not expected to negatively influence PG activity (Additional file 7). On the contrary, both the Q/R (SNP6) and the C/R (SNP10) conversions might slightly change the Md-PG1 activity; indeed, R residues are very rare among plant PGs at both positions. The last substitution considered (A/V, SNP18) is close to a highly conserved region, with A as the predominant amino acid, while V, being slightly bigger and more hydrophobic, might decrease the PG activity. As the alleles leading to the three changes are homozygous in all the tested crispy varieties (except for 'Minnewashta', which is heterozygous only for SNP6), we hypothesize that a less active polygalacturonase isoenzyme could be less effective in middle lamella depolymerization. This finding is moreover supported by the fact that apple cultivars carrying haplotype H1 in a homozygous state are characterized by extremely low texture properties (mealiness), such as 'Dalla Rosa', 'Early Gold', 'Limoncini', 'Napoleone', 'Permain Dorato', 'Rosmarina Bianca', and 'Tavola Bianca' (a set mostly represented by old apple varieties).
Parental selection
These SNPs and haplotypes can be considered a novel toolbox to improve the phenotype prediction efficiency of breeding programs, towards the planned identification of the most suitable parents (MAPS, marker assisted parent selection) and the subsequent selection of novel accessions (MASS, marker assisted seedling selection) with improved fruit texture quality. It is also worth emphasizing, as a marker useful for breeding, the microsatellite Md-PG1 SSR 10kd, which was highly associated with the set of texture sub-traits. This microsatellite was targeted in the Md-PG1 meta-contig, in strong LD with SNPs located within the gene. The allelic configuration of this marker within the apple cultivars (Table 1) showed a clear dosage effect when compared to the texture distribution over the PCA plot for both years (Figure 4 and Additional file 8). Apple cultivars homozygous for the allele Md-PG1 SSR 10kd_3 were located in the positive PC1 area of the PCA, thus showing a generally low texture behaviour. When this allele was absent, the cultivars, distinguished by PC1 values from -8 to 0, showed superior textural properties. In contrast, apple cultivars with a heterozygous state for allele "3" showed an intermediate texture distribution. As additional proof of the utility of this microsatellite marker for texture selection programs in apple, a correlation with the three significant haplotypes was also observed. Apple cultivars characterized by H2 and H3 (the two favourable haplotypes associated with valuable texture performance) lack allele "3" of this microsatellite marker, which showed a dosage effect associated with fruit texture decay. The haplotype survey carried out on the 77 apple cultivars, and the validation of their association with the texture components, highlighted that cultivars showing the two haplotypes H2 and H3, or lacking allele "3" of the microsatellite marker, are distinguished by a favourable fruit texture behaviour. These varieties (Table 1) are already employed as valuable potential parents in breeding programs addressing the improvement of fruit quality in apple, while the SSR alleles/haplotypes can be further exploited to investigate the breeding potential of other apple accessions not yet characterized.
Apple fruit cortex structural characterization
The impact of the Md-PG1 gene on fruit development and ripening was also investigated by SEM (scanning electron microscopy). The mealy/crispy texture behaviour of the two cultivars was assessed using the texture analyser (Figure 5a and b). The analysis of the combined texture profiles (mechanical and acoustic), performed at a ripe stage, showed that 'Golden Delicious' displayed a lower texture performance (mealiness) with respect to 'Granny Smith', in which a better texture behaviour was observed (crispness). The digital extraction of the parameters underlined the different textures of these two cultivars, showing a maximum force of 11.12 and 14.11 N, and 13 and 104 acoustic peaks, for 'Golden Delicious' and 'Granny Smith', respectively. The polysaccharide depolymerization of the middle lamella is one of the major events distinguishing mealy from crispy cultivars, and excessive degradation controlled by the polygalacturonase enzyme determines a significant weakening of the chemical binding between adjacent cells, facilitating cell-to-cell slippage along the middle lamella upon mechanical compression. This hypothesis is consistent with the fruit cortex structural observations made by SEM. Fruit cortex cells of 'Golden Delicious' had generally collapsed, due to a loss of internal turgor pressure, but were structurally intact, meaning that mechanical rupture followed the cell boundaries at the level of the middle lamella, which in this cultivar was highly degraded. In 'Granny Smith', the cells were completely broken, showing increased laceration of the cell walls rather than of the middle lamella, most likely due to a reduced degradation activity that prevented cell separation from occurring (Figure 5c and d).

Figure 3. Structure of the three Md-PG1 haplotypes. For each haplotype, the association with the presence or absence of allele "3" of the microsatellite marker Md-PG1 SSR 10kd is reported. At the bottom, the four significant amino acid changes differentiating haplotypes 2 and 3 from haplotype 1 are highlighted.

Figure 4 legend (fragment): cultivars are numbered as in Table 1. Colours indicate the allelic dosage for Md-PG1 SSR 10kd-3, with blue used for cultivars lacking the "3" allele, green for cultivars carrying this allele in a heterozygous state, and red for cultivars carrying this allele in a homozygous state (thus present twice).
The distinct anatomical structures of the two cultivars also correlate with the different haplotype structures found within the Md-PG1 gene. It is worth noting that both cultivars present their Md-PG1 haplotypes in a heterozygous state. One is a common haplotype shared between the two, which is not statistically associated with any fruit texture parameter. The other haplotype is H1 for 'Golden Delicious' and H2 for 'Granny Smith', each distinguishing a particular fruit texture behaviour.
Conclusion
The results of this work validated the impact of a QTL associated with fruit texture in apple, presenting a new set of Md-PG1 alleles valuable for marker assisted parent selection. Fruit texture is one of the principal quality factors in apple, and is a priority worldwide in modern apple breeding programs. Many works have already been presented to the scientific community, generally limited to QTL surveys focused on bi-parental maps. In this study, we identified a new set of markers and haplotypes related to the Md-PG1 gene and associated with dissected texture sub-traits. In particular, three haplotypes and a novel microsatellite marker, with a clear allelic dosage effect, were specifically associated with several texture components.
"Agricultural and Food Sciences",
"Biology"
] |
Mining Key Skeleton Poses with Latent SVM for Action Recognition
Human action recognition based on 3D skeletons has become an active research field in recent years with the development of commodity depth sensors. Most published methods analyze entire 3D depth sequences, construct mid-level part representations, or use trajectory descriptors of spatio-temporal interest points for recognizing human activities. Unlike previous work, a novel and simple action representation is proposed in this paper, which models an action as a sequence of inconsecutive and discriminative skeleton poses, named key skeleton poses. The pairwise relative positions of the skeleton joints are used as features of the skeleton poses, which are mined with the aid of the latent support vector machine (latent SVM). The advantage of our method is its resistance against intraclass variation such as noise and large nonlinear temporal deformation of human actions. We evaluate the proposed approach on three benchmark action datasets captured by Kinect devices: the MSR Action 3D dataset, the UTKinect Action dataset, and the Florence 3D Action dataset. The detailed experimental results demonstrate that the proposed approach achieves superior performance to state-of-the-art skeleton-based action recognition methods.
Introduction
The task of automatic human action recognition has been studied over the last few decades as an important area of computer vision research. It has many applications, including video surveillance, human-computer interfaces, sports video analysis, and video retrieval. Despite remarkable research efforts and many encouraging advances in the past decade, accurate recognition of human actions is still quite a challenging task [1].
In traditional RGB videos, human action recognition mainly focuses on analyzing spatiotemporal volumes and their representation. According to the variety of visual spatiotemporal descriptors, human action recognition work can be classified into three categories. The first category is local spatiotemporal descriptors: an action recognition method first detects interest points (e.g., STIPs [2] or trajectories [3]) and then computes descriptors (e.g., HOG/HOF [2] and HOG3D [4]) based on the detected local motion volumes. These local features are then combined (e.g., with bag-of-words) to represent actions. The second category is global spatiotemporal templates that represent the entire action; a variety of image measurements have been proposed to populate such templates, including optical flow and spatiotemporal orientation [5,6] descriptors. Besides the local and holistic representation methods, the third category is mid-level part representations, which model moderate portions of the action. Here, parts have been proposed that capture a neighborhood of space-time [7,8] or a spatial key frame [9]. These representations attempt to balance the trade-off between the generality exhibited by small patches, for example visual words, and the specificity exhibited by large ones, for example holistic templates. In addition, with the advent of inexpensive RGB-depth sensors such as the Microsoft Kinect [10], many efforts have been made to extract features for action recognition from depth data and skeletons. Reference [11] represents each depth frame as a bag of 3D points along the human silhouette and utilizes an HMM to model the temporal dynamics. Reference [12] learns semi-local features automatically from the data with an efficient random sampling approach. Reference [13] selects the most informative joints based on discriminative measures of each joint. Inspired by [14], Seidenari et al. model the movements of the human body using kinematic chains and perform action recognition with a Nearest-Neighbor classifier [15]. In [16], skeleton sequences are represented as trajectories in an n-dimensional space; these trajectories are then interpreted in a Riemannian manifold (shape space), and recognition is finally performed using NN classification on this manifold. Reference [17] extracts a sparse set of active joint coordinates and maps these coordinates to a lower-dimensional linear manifold before training an SVM classifier. The methods above generally extract spatial-temporal representations of the skeleton sequences with well-designed handcrafted features. Recently, with the development of deep learning, several Recurrent Neural Network (RNN) models have been proposed for action recognition. In order to recognize actions according to the relative motion between the limbs and the trunk, [18] uses an end-to-end hierarchical RNN for skeleton-based action recognition. Reference [19] uses skeleton sequences to regularize the learning of Long Short-Term Memory (LSTM), which is grounded via a deep Convolutional Neural Network (DCNN) onto the video for action recognition.
Most of the above methods rely on entire video sequences (RGB or RGBD) to perform action recognition, with spatiotemporal volumes typically selected as the representative feature of an action. These methods suffer from sensitivity to intraclass variation such as temporal scale or partial occlusion. For example, Figure 1 shows two athletes performing somewhat different poses when diving, which makes the spatiotemporal volumes different. Motivated by this case, the question we seek to answer in this paper is whether a few inconsecutive key skeleton poses are enough to perform action recognition. As far as we know, this is an unresolved issue which has not yet been systematically investigated. In our early work [20], it was shown that some human actions can be recognized with only a few inconsecutive and discriminative frames in RGB video sequences. Related to our work, very short snippets [9] and discriminative action-specific patches [21] have been proposed as representations of specific actions. However, in contrast to our method, these two approaches focus on consecutive frames.
In this paper, a novel framework is proposed for action recognition in which key skeleton poses are selected as the representation of an action in RGBD video sequences. In order to make our method more robust to translation, rotation, and scaling, Procrustes analysis [22] is conducted on the 3D skeleton joint data. Then, the pairwise relative positions of the 3D skeleton joints are computed as discriminative features to represent the human movement. Finally, key skeleton poses, defined as the most representative skeleton models of the action, are mined from the 3D skeleton videos with the help of the latent support vector machine (latent SVM) [23]. In early exploratory experiments, we noticed that the number of inconsecutive key skeleton poses should be no smaller than 4. During testing, the temporal position and similarity of each of the key poses are compared with the model of the action. The proposed approach has been evaluated on three benchmark datasets, all captured with Kinect devices: the MSR Action 3D dataset [24], the UTKinect Action dataset [25], and the Florence 3D Action dataset [26]. Experimental results demonstrate that the proposed approach achieves better recognition accuracy than a number of existing methods. The remainder of this paper is organized as follows. The proposed approach is elaborated in Section 2, including feature extraction, key pose mining, and action recognition. Experimental results are shown and analyzed in Section 3. Finally, we conclude the paper in Section 4.
Proposed Approach
Due to the large variation in how an action is performed, the appearance, temporal structure, and motion cues exhibit large intraclass variability. Selecting inconsecutive and discriminative key poses is therefore a promising way to represent the action. In this section, we answer the questions of what the discriminative key poses are and how to find them.
Definition of the Key Poses and Model Structure

The structure of the proposed approach is shown in Figure 2. Each action model is composed of a few key poses, and each key pose in the model is represented by three parts: (1) a linear classifier $w_i(\cdot)$ which can discriminate the key pose; (2) its expected temporal position $\mu_i$ within the action; and (3) the Gaussian temporal tolerance term $\Delta$ (both defined below).
Given a video $V$ that contains $T$ frames, $V = \{I_1, \ldots, I_T\}$, where $I_t$ is the $t$-th frame of the video, the score is computed as follows:

$$\mathrm{score}(V) = \max_{H} \sum_{i=1}^{K} \big( w_i \cdot \varphi(I_{h_i}) + \Delta(h_i) \big), \quad (1)$$

in which $H$ is the set of key pose positions of video $V$, $H = \{h \mid h = (h_1, \ldots, h_K),\ 1 \le h_i \le T\}$, and $h_i \in H$. For example, $H$ is $\{1, 9, 10, 28\}$ in Figure 3(a). $K$ is the total number of key poses in the action model; in the following experiments, $K$ ranges from 1 to 20. $h_i$ is the serial number of the $i$-th key pose in the frame sequence of the video. $\Delta$ is defined as follows:

$$\Delta(h_i) = \exp\left(-\frac{(h_i - t_0 - \mu_i)^2}{2\sigma_i^2}\right), \quad (2)$$

in which $t_0$ is the frame at which the action begins. $\Delta$ is a Gaussian function and reaches its peak when $h_i - t_0 = \mu_i$. $t_0$ has been manually labeled on the training set; the method for finding $t_0$ in a test video is discussed in Section 2.4.
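Assuming fixed key pose positions H, Eqs. (1)-(2) translate directly into code. The sketch below is an illustrative re-implementation, not the authors' code; the feature dimension, Gaussian widths, and toy values are assumptions.

```python
import numpy as np

# Minimal sketch of the action score in Eq. (1)-(2) for given positions H.
# w[i] is the linear classifier of the i-th key pose, features[t] the
# feature vector of frame t, mu[i]/sigma[i] the Gaussian temporal prior.
def delta(h_i, t0, mu_i, sigma_i):
    """Temporal term: peaks when the key pose sits mu_i frames after t0."""
    return np.exp(-((h_i - t0 - mu_i) ** 2) / (2.0 * sigma_i ** 2))

def action_score(features, H, w, mu, sigma, t0):
    return sum(w[i] @ features[h] + delta(h, t0, mu[i], sigma[i])
               for i, h in enumerate(H))

# Toy usage: 40 frames, 630-dim features, a 4-key-pose model.
rng = np.random.default_rng(0)
features = rng.normal(size=(40, 630))
w = rng.normal(size=(4, 630)); mu = [0, 8, 9, 27]; sigma = [3.0] * 4
print(action_score(features, H=[1, 9, 10, 28], w=w, mu=mu, sigma=sigma, t0=0))
```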
Feature Extraction and Linear Classifier.
With the help of a real-time skeleton estimation algorithm, the 3D joint positions are employed to characterize the motion of the human body. Following the method of [1], we also represent the human movement as the pairwise relative positions of the joints. For a human skeleton, the joint positions are tracked by the skeleton estimation algorithm, and each joint has 3 coordinates at each frame. The coordinates are normalized based on Procrustes analysis [22], so that the motion is invariant to the initial body orientation and the body size. For a given frame, the feature concatenates the pairwise relative positions of the joints with the joint coordinates themselves; it is a 630-dimensional vector (570 pairwise relative position values and 60 joint position coordinates) for the MSR Action 3D and UTKinect Action datasets, and a 360-dimensional vector for the Florence 3D Action dataset. (The selection of alternative feature representations is discussed in the Experiment Result section.) Then, a linear classifier $w_i(\cdot)$ is trained for each key pose; the question of which frames should be used for training $w_i(\cdot)$ is discussed in Section 2.3.
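A sketch of this pairwise feature for one frame, assuming a Procrustes-normalized (N, 3) joint array; the dimensionalities quoted above (630 for 20 joints, 360 for 15) fall out of the construction:

```python
import numpy as np
from itertools import combinations

# Pairwise relative position feature for a single frame. For N = 20 joints
# this yields 190 pairs * 3 + 20 joints * 3 = 630 values; for N = 15 it
# yields 105 * 3 + 15 * 3 = 360, matching the text.
def pairwise_feature(joints: np.ndarray) -> np.ndarray:
    diffs = [joints[i] - joints[j]
             for i, j in combinations(range(len(joints)), 2)]
    return np.concatenate([np.ravel(diffs), joints.ravel()])

frame = np.random.default_rng(0).normal(size=(20, 3))  # toy skeleton
print(pairwise_feature(frame).shape)                   # (630,)
```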
Latent Key Poses Mining.
It is not easy to decide which frames contain the key poses, because the key pose space is too large to enumerate all possible poses. Enlightened by [23], and since the key pose positions are not observable in the training data, we formulate the learning problem as a latent structural SVM, regarding the key pose positions as the latent variable.
Rewrite (1) as follows:

$$\mathrm{score}(V) = \max_{H}\; w \cdot \Phi(V, H),$$

in which $H = (h_1, \ldots, h_K)$ is treated as the latent variable. Given a labeled training set $D = \{\langle V_n, y_n \rangle\}_{n=1}^{N}$, with $y_n \in \{-1, +1\}$, the model parameters are learned by minimizing

$$\min_{w}\; \frac{1}{2}\lVert w \rVert^2 + C \sum_{n=1}^{N} \max\big(0,\; 1 - y_n\, \mathrm{score}(V_n)\big), \quad (6)$$

in which $C$ is the penalty parameter. Following [23], the model is first initialized: $D^{+}$ and $D^{-}$ are the positive and negative subsets of $D$, and the model is initialized with key frames as shown in Algorithm 1. In Algorithm 1, $F^{+}$ and $F^{-}$ are the positive frame set and the negative frame set, respectively; they are used to train the linear classifier $w_i(\cdot)$. To initialize the model, we first compute $\varphi(I_j)$, the feature of the $j$-th frame of the first video sample in $D^{+}$. Then the Euclidean distances between $\varphi(I_j)$ and the features of the frames of the other samples in $D^{+}$, within the temporal neighborhood of position $j$ with radius $r$, are computed. The frame with the minimum Euclidean distance from $\varphi(I_j)$ in each sample is added to $F^{+}$. $F^{+}$ is then used to train the linear classifier $w_i(\cdot)$, and $\mu_i$ is chosen as the average frame number in $F^{+}$. To select the next key pose, the frame $j$ with the minimum score under $w_i(\cdot)$ is chosen for the next loop; in other words, the frame that is most different from the previous key pose is selected in the next iteration. Finally, all the $w_i(\cdot)$ and $\mu_i$ are trained with the linear SVM once Algorithm 1 is completed.
Once the initialization is finished, the model is iteratively trained as follows. First, for each positive video example, the optimal latent value $H_{\mathrm{opt}} = \arg\max_{H}\big(w \cdot \Phi(V, H)\big)$ is found; $\mu_i$ is updated with the average value of all the $H_{\mathrm{opt}}$, and the new linear classifier $w_i(\cdot)$ is trained with the modified $F^{+}$ for each key pose. Second, (6) is optimized over $w$, where $\mathrm{score}(V) = w \cdot \Phi(V, H_{\mathrm{opt}})$, with stochastic gradient descent. Thus, the models are modified to better capture the skeleton characteristics of each action.
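The alternation described above (latent inference of H, update of the temporal means, then gradient steps on the hinge loss of Eq. (6)) can be sketched as follows. This is a simplified illustration, not the authors' implementation: the temporal term is omitted from the gradient, and the hyperparameters are arbitrary.

```python
import numpy as np

def infer_H(features, w, mu, sigma, t0=0):
    """Pick, for each key pose, the frame maximizing its partial score
    (the sum in Eq. (1) is separable over key poses)."""
    T = len(features)
    H = []
    for i in range(len(w)):
        scores = [w[i] @ features[t] +
                  np.exp(-((t - t0 - mu[i]) ** 2) / (2 * sigma[i] ** 2))
                  for t in range(T)]
        H.append(int(np.argmax(scores)))
    return H

def train(videos, labels, w, mu, sigma, lr=1e-3, C=1.0, iters=10):
    for _ in range(iters):
        # Step 1: latent inference on positives, then update the means mu.
        H_all = [infer_H(v, w, mu, sigma)
                 for v, y in zip(videos, labels) if y > 0]
        mu[:] = np.mean(H_all, axis=0)
        # Step 2: stochastic subgradient descent on the hinge loss.
        for v, y in zip(videos, labels):
            H = infer_H(v, w, mu, sigma)
            score = sum(w[i] @ v[h] for i, h in enumerate(H))
            if 1 - y * score > 0:                 # margin violated
                for i, h in enumerate(H):
                    w[i] += lr * (C * y * v[h] - w[i] / len(videos))
    return w, mu
```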
Action Recognition with Key Poses.
The key technical issue for action recognition in real-world video is that we do not know where the action starts, and searching over all possible start positions takes a lot of time. Fortunately, the score of each possible start position can be computed independently, so a parallel tool such as OpenMP or CUDA can be helpful.
Given a test video with $T$ frames, first, the skeleton feature response $w_i \cdot \varphi(I_t)$ of each frame is computed in advance, so that it can be reused later. Then, for each possible action start position $t_0$, we compute the score of each key pose according to the following equation:

$$s_i(t_0) = \max_{1 \le t \le T}\big(w_i \cdot \varphi(I_t) + \Delta(t)\big).$$

These scores are summed together as the final score of $t_0$. If the final score is larger than a threshold, an action beginning at $t_0$ has been detected and recognized. Figure 3 shows key poses for different actions in the Florence 3D Action dataset.
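A sketch of this test-time procedure: the per-frame classifier responses are precomputed once and reused for every candidate start position t0. The threshold value and all names are illustrative placeholders.

```python
import numpy as np

# Slide the action start t0 over the video, score each key pose from the
# precomputed responses, and fire a detection when the summed score
# exceeds a threshold.
def detect(features, w, mu, sigma, threshold=2.0):
    T = len(features)
    resp = np.stack([[w[i] @ features[t] for t in range(T)]
                     for i in range(len(w))])        # reused for every t0
    detections = []
    for t0 in range(T):
        total = 0.0
        for i in range(len(w)):
            gauss = np.exp(-((np.arange(T) - t0 - mu[i]) ** 2)
                           / (2 * sigma[i] ** 2))
            total += np.max(resp[i] + gauss)         # per-key-pose score
        if total > threshold:
            detections.append((t0, total))
    return detections
```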
Experiment Result
This section presents all experimental results. First, in order to eliminate the noise generated by translation, scale, and rotation changes of the skeleton poses, we preprocess the datasets with Procrustes analysis [22], and we conduct an action recognition experiment with and without Procrustes analysis on the UTKinect dataset to demonstrate its effectiveness. Second, an appropriate feature extraction method is selected from four existing ones according to experimental results on the Florence 3D Action dataset. Third, a quantitative experiment is conducted to select the number of inconsecutive key poses. Last, we evaluate our model and compare it with state-of-the-art methods on three benchmark datasets: the MSR Action 3D dataset, the UTKinect Action dataset, and the Florence 3D Action dataset.
Datasets
(1) Florence 3D Action Dataset. The Florence 3D Action dataset [26] was collected at the University of Florence during 2012 and captured using a Kinect camera. It includes 9 activities; 10 subjects were asked to perform the actions two or three times, resulting in a total of 215 activity samples. Each frame contains 15 skeleton joints.
(2) MSR Action 3D Dataset. The MSR Action 3D dataset [11] consists of skeleton data obtained with a depth sensor similar to the Microsoft Kinect. The data was captured at a frame rate of 15 frames per second. Each action was performed by 10 subjects in an unconstrained way, two or three times. (3) UTKinect Action Dataset. The UTKinect Action dataset [24] was captured using a single stationary Kinect and contains 10 actions. Each action was performed twice by 10 subjects in an indoor setting. Three synchronized channels (RGB, depth, and skeleton) were recorded at a frame rate of 30 frames per second. The 10 actions are: walk, sit down, stand up, pick up, carry, throw, push, pull, wave hands, and clap hands. It is a challenging dataset due to the huge variations in viewpoint and the high intraclass variation, so this dataset is used to validate the effectiveness of Procrustes analysis [22].
Data Preprocessing with Procrustes Analysis.
Skeleton data in each frame of a given video usually consists of a fixed number of predefined joints. The position of each joint is determined by three coordinates $(x, y, z)$. Figure 4 shows the skeleton definition in the MSR Action 3D dataset: it contains 20 joints, each represented by its coordinates.
Using the raw human skeleton in the video directly as a feature is not a good choice, given that skeletons vary under rotation, scaling, and translation. So, before the experiments, we normalize the datasets with Procrustes analysis. In statistics, Procrustes analysis is a form of statistical shape analysis used to analyze the distribution of a set of shapes, and it is widely applied in computer vision, for example to face detection. In this paper, it is used to align the skeleton joints and eliminate the noise due to rotation, scaling, or translation. The details of Procrustes analysis are described next.
Given skeleton data with $n$ joints $((x_1, y_1, z_1), (x_2, y_2, z_2), \ldots, (x_n, y_n, z_n))$, the first step is a translation transformation. We compute the mean coordinate $(\bar{x}, \bar{y}, \bar{z})$ of all joints and move it to the origin; the translation is completed by subtracting the mean coordinate from each joint: $(x_i', y_i', z_i') = (x_i - \bar{x}, y_i - \bar{y}, z_i - \bar{z})$. The purpose of scaling is to make the root mean square of all joint coordinates equal to 1. For the skeleton joints we compute $s = \sqrt{\tfrac{1}{n}\sum_{i=1}^{n}(x_i'^2 + y_i'^2 + z_i'^2)}$, and the scaling result is $(x_i'', y_i'', z_i'') = (x_i'/s, y_i'/s, z_i'/s)$. The rotation of the skeleton is the last step of Procrustes analysis. Removing the rotation is more complex, as a standard reference orientation is not always available. Given a group of standard skeleton joint points $U = ((u_1, v_1, w_1), (u_2, v_2, w_2), \ldots, (u_n, v_n, w_n))$, which represent an action facing the positive direction of the x-axis, with mean coordinate at the origin and root mean square equal to 1, we compute the rotation matrix for the skeleton $P = ((x_1'', y_1'', z_1''), \ldots, (x_n'', y_n'', z_n''))$, translated and scaled as above, via Eq. (9): the $3 \times 3$ matrix $M = P^{\top}U$ has singular value decomposition $M = A\,\Sigma\,B^{\top}$ with orthogonal $A$ and $B$ and diagonal $\Sigma$, and the rotation matrix $R$ equals the matrix $A$ multiplied by the transpose of $B$. At last, the skeleton joint points can be aligned with $U$ by computing $P$ multiplied by $R$.
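The three steps just described translate almost line for line into NumPy; the sketch below assumes skeletons given as (n_joints, 3) arrays and a reference skeleton U that is already centered and unit-scaled. It is an illustration of the method, not the authors' code.

```python
import numpy as np

def procrustes_align(P, U):
    """Align skeleton P (n x 3) to the reference skeleton U (n x 3)."""
    P = P - P.mean(axis=0)                    # translation: centroid to the origin
    P = P / np.sqrt((P ** 2).sum() / len(P))  # scaling: RMS of coordinates = 1
    M = P.T @ U                               # 3 x 3 matrix of Eq. (9)
    A, _, Bt = np.linalg.svd(M)               # SVD: M = A Sigma B^T
    R = A @ Bt                                # rotation (reflections not excluded here)
    return P @ R                              # joints aligned with U
```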
We followed the cross-subject test setting of [30] on the UTKinect dataset to test the validity of Procrustes analysis. The results are shown in Table 1. It is easy to see that the recognition rate of almost all actions is improved after preprocessing the skeleton joint points with Procrustes analysis; for one action, the recognition rate improves by 10%. This shows that translation, scaling, and rotation of the human skeleton in the video affect recognition accuracy, and that Procrustes analysis is an effective way to eliminate the influence of these geometric transformations.
Feature Extraction Method Selection.
Skeleton-based action recognition has been studied in depth, and many efficient feature representations exist. We select four of them (Pairwise [1], the most informative sequences of joint angles (MIJA) [31], histograms of 3D joints (HOJ3D) [24], and sequence of the most informative joints (SMIJ) [13]) as alternative feature representations.
Given a skeleton $S = \{p_1, p_2, \ldots, p_n\}$, where $p_i = (x_i, y_i, z_i)$, the Pairwise representation is computed as follows: for each joint $p_i$, we extract pairwise relative position features by taking the difference between the position of joint $p_i$ and the position of another joint $p_j$, $p_{ij} = p_i - p_j$, so the feature of $S$ is $f(S) = \{p_{ij} \mid p_{ij} = p_i - p_j,\ 1 \le i < j \le n\}$. Because the original joint positions are themselves informative, we improve this representation by concatenating $f(S)$ and $S$; the new feature is the concatenation $[f(S), S]$. The most informative sequences of joint angles (MIJA) representation regards joint angles as features; the shape of the joint trajectories encodes local motion patterns for each action. It uses 11 of the 20 joints to capture information for an action and centers the skeleton, using the hip center joint as the origin $(0, 0, 0)$ of the coordinate system. From this origin, vectors to the 3D position of each joint are calculated. For each vector, it computes the angle $\theta_1$ between its projection onto the x-z plane and the positive x-axis, and the angle $\theta_2$ between the vector and the y-axis. The feature consists of the two angles of each joint.
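As an illustration, the improved Pairwise feature for a single frame can be computed in a few lines; the function below is a hypothetical helper, not taken from the paper.

```python
import numpy as np

def pairwise_feature(joints):
    """joints: (n, 3) array of joint coordinates for one frame."""
    n = len(joints)
    i, j = np.triu_indices(n, k=1)     # all pairs with i < j
    rel = joints[i] - joints[j]        # pairwise relative positions p_i - p_j
    return np.concatenate([rel.ravel(), joints.ravel()])  # [f(S), S]

# For a 20-joint skeleton this yields 20*19/2 = 190 relative vectors,
# i.e. a 190*3 + 20*3 = 630-dimensional feature per frame.
```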
The histograms of 3D joints (HOJ3D) representation chooses 12 discriminative joints out of the 20 skeletal joints. It takes the hip center as the center of the reference coordinate system and defines the reference direction according to the left and right hip joints. The remaining 8 joints are used to compute a 3D spatial histogram: the spherical coordinate space is partitioned into 84 bins, and for each joint location a Gaussian weight function spreads its vote over the 3D bins. Counting the votes in each bin and concatenating them yields an 84-dimensional feature vector.
The sequence of the most informative joints (SMIJ) representation also takes joint angles as features, but differently from MIJA. It partitions the joint-angle time series of an action sequence into a number of congruent temporal segments and computes the variance of each joint's angle time series over each segment. The top 6 most variable joints in each temporal segment are selected, and features are extracted with a mapping function $\Phi(\cdot) : \mathbb{R}^{|T|} \to \mathbb{R}$ that maps a time series of scalar values to a single scalar value.
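The SMIJ construction can be sketched as follows; the choice of the mean as the summary function Φ is an illustrative assumption, since the text leaves Φ generic.

```python
import numpy as np

def smij(angles, n_segments=4, top_k=6, phi=np.mean):
    """angles: (T, J) joint-angle time series of one action sequence."""
    feats = []
    for seg in np.array_split(angles, n_segments, axis=0):
        var = seg.var(axis=0)                      # variance of each joint's angles
        top = np.argsort(var)[::-1][:top_k]        # top-6 most variable joints
        feats.extend(phi(seg[:, j]) for j in top)  # Phi maps a series to a scalar
    return np.array(feats)
```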
In order to find the optimal feature, we conduct an experiment on the Florence 3D Action dataset, in which each video is short. We estimate the coordinates of 5 additional joints from the original 15 joints of each frame in the Florence dataset so that the number of joints per frame matches the MSR Action 3D and UTKinect datasets. The experiment uses the cross-subject test setting: one half of the dataset is used to train the key-pose model and the other half for testing. The model has 4 key poses, and Procrustes analysis is applied before feature extraction. Results are shown in Figure 5. The overall accuracy of the Pairwise feature across the actions is better than SMIJ and MIJA, and for all actions except sit down and stand up the Pairwise representation shows promising results. So, in the following experiments, we select the Pairwise feature for action recognition. The estimated joint coordinates introduce additional noise, so the accuracy here is lower than the results on the original Florence 3D Action dataset (shown in Table 6).
Selection of Key Pose Numbers.
In this section, we run experiments to determine how many key poses are necessary for action recognition. The experimental results are shown in Figure 6; the horizontal axis denotes the number of key poses, ranging from 1 to 20, and the vertical axis denotes the recognition accuracy of the proposed approach. Accuracy increases with the number of key poses when the number is less than 4, almost reaches its maximum when the number of key poses equals 4, and does not increase further beyond 4. Balancing accuracy and computation time, we select 4 as the number of key poses for action recognition in the following experiments.
Table 2 enumerates the recognition accuracy for each action in the UTKinect Action dataset when the number of key poses ranges from 4 to 8. The recognition accuracy of an individual action varies with the number of key poses, but the average recognition accuracy is nearly the same across these settings, so 4 is the most cost-effective choice.
Results on MSR Action 3D Dataset.
According to the standard protocol provided by Li et al. [11], the dataset was divided into three subsets, shown in Table 3. AS1 and AS2 were intended to group actions with similar movement, while AS3 was intended to group complex actions together; for example, some AS1 actions are easily confused with one another, and the action pickup & throw is a composition of high throw and another simpler action. We evaluate our method using a cross-subject test setting: videos of 5 subjects were used to train our model, and videos of the other 5 subjects were used for testing. Table 4 gives the results for AS1, AS2, and AS3, comparing our performance with Li et al. [11], Xia et al. [24], and Yang and Tian [25]. Our algorithm achieves a considerably higher recognition rate than Li et al. [11] in all testing setups on AS1, AS2, and AS3. For AS2, the accuracy of the proposed method is the highest; for AS1 and AS3, our recognition rate is only slightly lower than Xia et al. [24] and Yang and Tian [25], respectively. However, the average accuracy of our method over the three subsets is higher than that of the other methods. Table 5 shows the results on the MSR Action 3D dataset: Histogram of 3D joints [24] 78.97%, EigenJoints [25] 82.30%, Angle similarities [27] 83.53%, Actionlet [1] 88.20%, Spatial and temporal part-sets [28] 90.22%, Covariance descriptors [29] 90.53%, and our approach 90.94%. The average accuracy of the proposed method reaches 90.94%, better than the other six methods.
3.6. Results on UTKinect Action Dataset. On the UTKinect dataset, we followed the cross-subject test setting of [30], in which one half of the subjects is used for training our model and the other half to evaluate it, and we compare our model with Xia et al. [24] and Gan and Chen [30]. Figure 7 summarizes the results of our model along with the competing approaches on the UTKinect dataset. Our method achieves the best performance on three actions: pull, push, and throw. Most importantly, the average accuracy of our method reaches 91.5%, better than the other two methods (90.9% and 91.1% for Xia et al. [24] and Gan and Chen [30], respectively). The accuracy on actions such as clap hands and wave hands is not as good; the reason may be that the movement range of the skeleton joints in these actions is not large enough and the skeleton data contain more noise, which hinders our method from finding the optimal key poses and degrades the accuracy.
3.7. Results on Florence 3D Action Dataset. We follow the leave-one-actor-out protocol suggested by the dataset collectors on the original Florence 3D Action dataset: all sequences from 9 out of 10 subjects are used for training, while the remaining one is used for testing. We repeat the procedure for each subject and average the 10 classification accuracy values. For comparison with other methods, the average action recognition accuracy is also computed. The experimental results are shown in Table 6; in each column, the data represent each action's recognition accuracy when the corresponding subject is used for testing. The challenges of this dataset are human-object interaction and the different ways of performing the same action. Analyzing the results, we notice that the proposed approach obtains high accuracies for most of the actions and overcomes the difficulty of intraclass variation for actions such as bow and clap. The proposed approach obtains lower accuracies for actions such as answer the phone and read watch; this can be explained by the fact that these are human-object interactions with a small range of motion, which the Pairwise feature cannot capture well. Furthermore, comparisons with other methods are listed in Table 7: our average accuracy is better than Seidenari et al. [15] and the same as Devanne et al. [16].
Table 7: Comparison of our method with others on the Florence 3D Action dataset.
Conclusion
In this paper, we presented an approach to skeleton-based action recognition that mines key skeleton poses with a latent SVM. Experimental results demonstrated that human actions can be recognized from only a few frames containing key skeleton poses; in other words, a few inconsecutive, representative skeleton poses can describe a video action.
Starting from feature extraction using the pairwise relative positions of the joints, the positions of the key poses are found with the help of the latent SVM, and the model is then trained iteratively with positive and negative video examples. In the test procedure, a simple method recognizes the action by computing the score of each start position. We validated our model on three benchmark datasets: the MSR Action 3D dataset, the UTKinect Action dataset, and the Florence 3D Action dataset, where it performs as well as or better than the compared methods. Because our method relies on descriptors of simple relative joint positions, its performance degrades when the motion is small and uninformative, for instance for actions performed only by forearm gestures, such as clap hands in the UTKinect Action dataset. In the future, we will explore other local features that reflect minor motion for a better understanding of human actions.
Figure 1: Two athletes perform the same action (diving) in different ways.
Figure 3: Key poses for different actions in the Florence 3D Action dataset.
and offset $d_i$, where key pose $i$ is most likely to appear in the neighborhood of $d_i$ with radius $r$, and (3) the weight $w$ of the linear classifier and the weight of the temporal information.
Table 1: Results of action recognition with or without Procrustes analysis.
Table 2: Recognition accuracy for different numbers of key poses.
Table 3: The three subsets of actions used in the experiments.
Table 4: Comparison of our method with others on AS1, AS2, and AS3.
Table 5: Comparison of our method with others on MSR Action 3D.
Table 6: Results on the Florence 3D Action dataset.
Figure 7: Results on the UTKinect Action dataset. | 6,381.2 | 2017-01-01T00:00:00.000 | [ "Computer Science" ] |
Gravitational lensing in a topologically charged Eddington-inspired Born–Infeld spacetime
In the present paper, we study several aspects of gravitational lensing caused by a topologically charged monopole/wormhole, both in the weak field limit and in the strong field limit. We calculate the light deflection and then use it to determine the observables with which one can investigate the existence of these objects through observational tools. We emphasize that the presence of the topological charge changes the observables relative to the General Relativity Ellis-Bronnikov wormhole.
I. INTRODUCTION
It is known that, despite its enormous success, Einstein's General Relativity (GR) has some weaknesses that may point to a broader gravitational theory. In particular, we highlight the singularity problem, i.e., the termination of geodesics at a black hole singularity. The theory also does not naturally account for the accelerating expansion of the universe or for the cosmological problem of the Big Bang singularity [1][2][3][4][5][6]. These questions, among others, motivated physicists to search for a theory capable of avoiding such problems [7][8][9][10][11]. Among these theories, we highlight the so-called Eddington-inspired Born-Infeld modification of gravity (EiBI gravity) [11]. The structure of this theory is inspired by the nonlinear electrodynamics of Born and Infeld [12] and, when approached in the metric-affine formalism, avoids problems such as phantom degrees of instability [13]. Even at the classical level (without quantum corrections), this theory provides singularity-free black hole solutions and sustains wormholes without the need for exotic matter [14][15][16][17][18]. These new theoretical possibilities, combined with the technological tools arising from large international collaborations, have rekindled the search for regular black holes and wormholes.
Gravitational lensing is one of the main forms of investigation in cosmology and gravitation [19][20][21][22][23]; Einstein himself used this method to make GR truly relevant in the scientific community [24,25]. Gravitational lensing can occur in the weak field limit, when the light ray passes very far from the source responsible for the lens, or in the so-called strong field limit, when the light passes very close and the deflection of light becomes very large [26,27].
So far, the simplest solution obtained in EiBI gravity is the global monopole/wormhole (GM/WH for short) [17,74]. This solution interpolates between a modified global monopole and an Ellis-Bronnikov-like wormhole with topological charge. We must emphasize that this solution was obtained with a matter source that does not violate the energy conditions and, impressively, it is as simple as the well-known Ellis-Bronnikov solution [75,76]. Given its simplicity and its potential applications to systems in condensed matter, some studies on the quantum dynamics of particles in this spacetime have already been published [78][79][80][81][82]. With regard to gravitational lensing, the authors of [74] calculated the deflection of light by the EiBI GM in the weak field limit, and the authors of [17] obtained a general expression for the light deflection in the topologically charged WH.
In the present paper, we carry out a more general study of the topologically charged EiBI spacetime. We consider both possibilities, WH and GM, and treat them in the same theoretical framework. We address the deflection of light not only in the weak field limit, but also, for the first time, in the strong field limit. Furthermore, we calculate the observables and discuss the possibility of their detection. It should also be noted that the results obtained here can shed light on the optical properties of liquid crystals and crystalline lattices with topological defects [77].
The paper is organized as follows. In Section II we present the metric that describes the GM/WH in EiBI gravity and calculate the light deflection in the weak field limit. In Section III we calculate the light deflection in the strong field limit and investigate the lens equation in the wormhole spacetime. In Section IV we present some observational perspectives on the wormhole and the plausibility of its detection. Finally, in Section V, we present the conclusions of the paper and our future perspectives.
II. THE GM/WH METRIC AND LENSING IN THE WEAK FIELD LIMIT
The line element describing the GM/WH in EiBI gravity, in spherical coordinates $(t, r, \theta, \phi)$, is given in [17] as Eq. (1), where $\kappa^2 = 8\pi G$, with $G$ the gravitational constant, $\eta$ the energy scale associated with spontaneous symmetry breaking, and $\varepsilon$ the parameter associated with the nonlinearity of EiBI gravity. For simplicity, after a few reparametrizations, Eq. (1) becomes Eq. (2). This metric can describe a global monopole, when $\varepsilon > 0$, or a topologically charged wormhole, when $\varepsilon < 0$. It should be noted that solution (2) was originally obtained, in [17,74], from an energy source corresponding to the region external to the Barriola-Vilenkin global monopole core [83]. For that reason, in the case $\varepsilon > 0$, (2) also describes the spacetime external to the global monopole core, but now within EiBI gravity; when $\varepsilon = 0$, the metric reduces to the well-known metric outside the global monopole core. For $\varepsilon < 0$, we can compare Eq. (2) with the more general wormhole proposed by Morris and Thorne [84] to conclude that the redshift function vanishes, $\Phi(r) = 0$, and the form function is $b(r) = r - \alpha^2 r - |\varepsilon|/r$. Because of the topological charge, the EiBI wormhole is not asymptotically flat, unlike the Ellis-Bronnikov wormhole: $\lim_{r \to \infty} b(r)/r = 1 - \alpha^2 \neq 0$. Let us now obtain the geodesics associated with spacetime (2) using the variational method. For a smooth curve in a space with metric (2), the length $S$ of the curve is given by Eq. (3), where $\lambda$ is the affine parameter of the curve. Taking $S$ as an affine parameter in (3), one can show that the curves that minimize (3), $\delta S = 0$, also minimize the functional of Eq. (4). Therefore, for $\theta = \pi/2$, the Lagrangian $\mathcal{L}$ takes the form of Eq. (5). The Euler-Lagrange equations for the coordinates $t$ and $\phi$ lead to the conserved quantities $E$ and $L$ of Eq. (6). In terms of these quantities, the Lagrangian, Eq. (5), becomes Eq. (7). For null geodesics, $\mathcal{L} = 0$; in this case, (7) leads to Eq. (8). This equation can be interpreted as describing the one-dimensional motion of a particle of energy $E$ subject to an effective potential $V_{\text{ef}} = L^2/r^2$. In Fig. 1 we plot the effective potential for some values of $L$. Of course, there is radial motion only when $dr/d\lambda > 0$, and the smaller the angular momentum $L$, the closer the approach. The turning point $r_0$ occurs for $dr/d\lambda = 0$, i.e., $r_0 = L/E$. It is worth mentioning that in the wormhole case, $\varepsilon < 0$, the solution has a minimum radius given by $r = \sqrt{|\varepsilon|}$; thus, photons with sufficient energy pass to the other side of the wormhole.
From (6), it can be shown that Eq. (8) becomes Eq. (9), where $\beta = L/E$. We want the change in the coordinate $\phi$, i.e., $\Delta\phi = \phi_- - \phi_+$. By symmetry, the contributions to $\Delta\phi$ before and after the turning point are equal, so Eq. (9) leads to the integral expression (10) for $\Delta\phi$. Defining new variables (Eq. (11)), expression (10) becomes Eq. (12), written in terms of the complete elliptic integral of the first kind $K(m)$, given in terms of the parameter $m$ itself rather than the modulus. This expression is valid both for the WH case ($\varepsilon < 0$) and for the GM case ($\varepsilon > 0$). Recalling that $\beta$ is the turning point, we have $0 < m < 1$ for the WH. In the GM case, we can write $m = -\varepsilon/\beta^2 = -a$, with $a = \varepsilon/\beta^2 > 0$; the corresponding expansion, Eq. (13), is derived in Appendix A. The light deflection, the angle between the new direction of propagation and the previous direction, as shown in Fig. 2, is given by $\delta\phi = \Delta\phi - \pi$; this yields Eq. (14), valid for both $m > 0$ and $m < 0$, although for clarity we discriminate below between the deflection for a WH and for a GM. Of course, if $\varepsilon = 0$ and $\alpha = 1$, we have $\delta\phi = 0$, which corresponds to flat spacetime, where there is no angular deflection since the gravitational attraction is null. In Fig. 3, we plot the deflection for $\alpha = 0.8$ in order to clarify some aspects of lensing in the GM/WH spacetime. Note that in the WH case ($m > 0$), when the turning point tends to the throat radius, the deflection diverges; we call this limit the strong field limit. In the case of the modified GM ($m < 0$), on the other hand, the deflection is always finite. First, let us investigate the deflection in the weak field limit, i.e., for $\beta^2 \gg |\varepsilon|$, where $m \to 0$. In this limit, Eq. (14) can be expanded as Eq. (15). The first term of that expansion corresponds to the deflection of a pure GM [83], and the second term is a contribution from EiBI gravity. We emphasize that in the WH case ($\varepsilon < 0$) the deflection clearly becomes more pronounced than in the Barriola-Vilenkin GM case; the opposite occurs for the EiBI GM. For the EiBI GM case, the deflection was first obtained in [74].
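As a numerical illustration, the deflection can be evaluated with scipy's complete elliptic integral, which conveniently takes the parameter m rather than the modulus. The closed form δφ = (2/α)K(m) − π with m = −ε/β² used below is a reconstruction (the explicit Eqs. (12)-(14) were lost in extraction); it is consistent with the limits quoted in the text: δφ = 0 for ε = 0 and α = 1, the pure-GM deficit for m → 0, and a logarithmic divergence as m → 1.

```python
import numpy as np
from scipy.special import ellipk  # complete elliptic integral, parameter convention

def deflection(beta, alpha, eps):
    m = -eps / beta**2        # m > 0 for a WH (eps < 0), m < 0 for a GM (eps > 0)
    return (2.0 / alpha) * ellipk(m) - np.pi

print(deflection(10.0, 1.0, 0.0))    # flat spacetime: 0
print(deflection(10.0, 0.8, 0.0))    # pure GM deficit: pi(1 - alpha)/alpha ~ 0.785
print(deflection(1.001, 0.8, -1.0))  # turning point near the throat: diverges
```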
III. WORMHOLE LIGHT DEFLECTION IN THE STRONG FIELD LIMIT
As already discussed, Eq. (15) is valid for both the WH and the GM. However, as we can see in Fig. 3, the light deflection never diverges in the GM case ($\varepsilon > 0$), while it diverges in the WH case ($\varepsilon < 0$) when $m \to 1$. This limit, which corresponds to the turning point approaching the wormhole throat, is called the strong field limit, and it is on this limit that we focus from now on. For simplicity, let us take $\varepsilon = -a^2$, so the parameter $a$ corresponds to the wormhole throat. Thus, from (14), we are left with Eq. (16). In the strong field limit, $\beta \to a$ and consequently $a^2/\beta^2 \to 1$. According to [17], in this limit the deflection of light diverges logarithmically, as expressed by Eq. (17).¹ With this result, we can study gravitational lensing in the strong field limit. To that end, let us review the lens equations in this limit in order to clarify our findings. In Fig. 4 we present the visual profile of the lensing. The light emitted by a source (S) is deflected by the lens (L), the wormhole, towards the observer (O). With respect to the optical axis (LO), $\psi$ and $\theta$ give the angular positions of the source and of the image (I), respectively. From the geometry of the figure, one can derive the lens equation of [31] (Eq. (18)). Let us assume that the source and the lens are almost perfectly aligned. In this case, although the angular positions of the source and the image are small, the light ray circles the lens several times before heading towards the observer; thus the total deflection $\Lambda$ must be very close to a multiple of $2\pi$ [28]. Therefore, we can write $\Lambda = 2\pi n + \Delta\Lambda_n$, where $\Delta\Lambda_n$ is the residual deflection angle after the $n$ loops around the lens, so that $\tan(\Lambda - \theta) \sim \Delta\Lambda_n - \theta$. With these observations, the lens equation becomes Eq. (19). In agreement with what we have stated, we can then write the critical impact parameter as in Eq. (20), and with this we can rewrite the angular deflection (17) as Eq. (21). We must remember, however, that it is $\Delta\Lambda_n$ that enters the lens equation (19). To get $\Delta\Lambda_n$, we expand $\Lambda(\theta)$ around $\theta^0_n$ (Eq. (22)). Taking $\Lambda(\theta^0_n) = 2\pi n$ in (21), we find Eq. (23). Substituting (21) and (I) in (22) gives Eq. (24), and substituting (24) in the lens equation (19) yields Eq. (25), which gives the angular position of the $n$th relativistic image. Note that these positions are influenced by the topological charge, and when $\alpha \to 1$ they reduce to the Ellis-Bronnikov case [86]. In general $\alpha < 1$, which implies that in the topologically charged case the angular positions are greater than for the Ellis-Bronnikov WH, which has no topological charge [86]. The total flux of the $n$th lensed image is proportional to the magnification $\mu_n$, given by $\mu_n = \left[\frac{\psi}{\theta}\,\frac{\partial\psi}{\partial\theta}\big|_{\theta^0_n}\right]^{-1}$ [28]. Therefore, from (25) and (I), we can show Eq. (26). As we can see, the magnification decreases rapidly with $n$, indicating that the brightness of the first image, $\theta_1$, is higher than that of the others. On the other hand, since in general $D_{OL} \gg a$, the magnification is always small. We can also observe that the greater the alignment between the source and the lens ($\psi \ll 1$), the stronger the magnification. ¹ We could also obtain $\delta\phi$ in the strong field limit directly, taking into account the limiting behavior of $K(m)$ as $m \to 1$.
Equations (25) and (26) express the positions of the relativistic images and the magnifications in terms of the parameters that characterize the wormhole ($a$ and $\alpha$). We can also think in reverse: we can define observables that can be measured by observational methods and then solved for the parameters that characterize the wormhole, which makes them a good way to search for wormholes. In addition, this methodology can also be useful as a tool for research beyond GR, since from the observables we can discriminate between a GR WH and an EiBI WH. In [29], Bozza defined the following observables: $s$, the angular separation between the first and the remaining relativistic images, and $R$, the ratio between the flux of the first image and the flux of all the others. From (25), we can derive the expression for $s$ (Eq. (28)), and from (26) we can derive the expression for $R$ (Eq. (29)). For $\alpha \simeq 1$, as we believe it should be, expression (29) reduces to Eq. (30). Compared to the Ellis-Bronnikov WH, the presence of the topological charge $\alpha$ decreases the value of the observable $R$ and increases the observable $s$; compare Eqs. (30) and (28) with Eqs. (5.15) and (5.14) of [86], respectively. In order to have an observational perspective on the EiBI WH, we model three astrophysical scenarios, one for the strong field limit and two for the weak field limit; the choices were made so as to make the data clear and avoid too many approximations. First, let us assign well-motivated values to the parameters $a$ and $\alpha$.
IV. OBSERVATIONAL PERSPECTIVES

A. Strong field limit
In the strong field limit, we model the wormhole with data from Sagittarius A*, at the center of our galaxy. The mass of SgrA* is estimated at $4.4 \times 10^6\,M_\odot$, at an approximate distance of $D_{OL} = 8.5$ kpc [88]. As discussed before, the radius of the EiBI WH throat is estimated to have an upper bound $a \le 10^{15}$ km. In Table I we present the values of the observables $\theta_\infty$, $s$, and $R$ for some values of the throat radius $a$. The observable $\theta_\infty$ is defined as the critical angle obtained from (20), $\theta_\infty = \beta_c/D_{OL}$; the observable $s$ is given by (28); and $\mathcal{R} = 2.5 \log_{10} R$, where $R$ is given by (30), the latter redefinition being useful for comparing our results with those found in the literature. For the well-known Schwarzschild black hole, the values of these observables in the scenario considered here are already known and are given by [31]: $\theta_\infty = 26.547$ µarcsec, $s = 0.03322$ µarcsec, and $\mathcal{R} = 6.821$ magnitudes. According to Table I, for $10^{10}$ km $\le a \le 10^{11}$ km, the critical angle and the angular separation have the same order of magnitude as in the Schwarzschild case, but they differ significantly for other values. In fact, the data indicate that for $a \ge 10^{11}$ km the observables assume values more compatible with the current observational range than in the Schwarzschild case. As for the magnification (30), $\mathcal{R}$ does not depend on the throat radius and is given by $\mathcal{R} = 6.821$ magnitudes, similar to the Schwarzschild case. However, we emphasize that using the critical angle and the angular separation we can distinguish between a Schwarzschild black hole and an EiBI WH using strong-field-limit lensing.
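As a sanity check on the reference values quoted above, the Schwarzschild critical angle can be reproduced from the standard critical impact parameter 3√3 GM/c²:

```python
import numpy as np

GM_SUN_OVER_C2_KM = 1.4766            # GM_sun / c^2 in kilometres
KPC_KM = 3.0857e16                    # kilometres per kiloparsec
RAD_TO_MUAS = np.degrees(1) * 3600e6  # radians -> microarcseconds

M = 4.4e6                             # SgrA* mass in solar masses
D_OL = 8.5 * KPC_KM                   # lens distance in km
beta_c = 3 * np.sqrt(3) * M * GM_SUN_OVER_C2_KM
print(beta_c / D_OL * RAD_TO_MUAS)    # ~26.5 microarcseconds, matching [31]
```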
B. Weak field limit
In the weak field limit, $a \ll 1$ and $a/\beta \ll 1$, there are no loops and the deflection is given by (15), which for feasible values of the topological charge reduces to Eq. (31). It is worth remembering that, as we are dealing with the wormhole case, we take $\varepsilon = -a^2$ in (15). For perfect alignment, $\psi = 0$, the angular position $\theta_E$ can be obtained from (19), (20), and (31), which leads to Eq. (32), whose real solution for values of $\alpha$ close to unity is given by Eq. (33). From (20), we can then calculate the Einstein radius $R_E$ (Eq. (34)). Let us now estimate the observables $R_E$ and $\theta_E$, taking into account reasonable values for the model parameters. Following the example of [56], we consider the lensing of a bulge star and of a star in the Large Magellanic Cloud (LMC). For a bulge star, the following parameters are adopted: $D_{OS} = 8$ kpc and $D_{OL} = 4$ kpc; for the Large Magellanic Cloud: $D_{OS} = 50$ kpc and $D_{OL} = 25$ kpc. Based on these estimates, in Tables II and III we present some values of the observables for the two scenarios. We find that, within the possibilities of EiBI gravity, the present wormhole has feasible theoretical prospects of being detected for throat radii of the order of $10^9$ km or more, which is well accommodated within the restrictions on the parameters of this gravity theory. The angular deflection and the angle of the Einstein ring are of the order of 10 arcsec for realistic parameter values, and are certainly within the observable range.
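For orientation, the weak-field Einstein angle can be estimated under the assumption (Eq. (31) itself was not recoverable here) that for α → 1 the deflection reduces to the Ellis-Bronnikov form δφ ≈ πa²/(4β²); combined with the small-angle lens equation for ψ = 0, this gives θ_E³ = πa²D_LS/(4 D_OS D_OL²):

```python
import numpy as np

KPC_KM = 3.0857e16                # kilometres per kiloparsec
RAD_TO_ARCSEC = np.degrees(1) * 3600

def einstein_angle(a_km, d_ol_kpc, d_os_kpc):
    d_ol, d_os = d_ol_kpc * KPC_KM, d_os_kpc * KPC_KM
    d_ls = d_os - d_ol            # lens-source distance
    theta = (np.pi * a_km**2 * d_ls / (4 * d_os * d_ol**2)) ** (1 / 3)
    return theta * RAD_TO_ARCSEC

print(einstein_angle(1e10, 4, 8))    # bulge-star scenario
print(einstein_angle(1e10, 25, 50))  # LMC scenario
```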
V. CONCLUSIONS AND PERSPECTIVES
In this work, we explored the gravitational lensing of a GM/WH spacetime within EiBI gravity. Initially, we calculated the light deflection in a general and exact way, both for the GM case and for the WH case, and we emphasized that the presence of a topological charge makes it possible to discriminate observationally whether such a solution is described by GR or by EiBI gravity. In the latter case, the solution is physically more attractive, as it does not presuppose any source of exotic matter [17]. For the time being, we conjecture that a topologically charged wormhole may arise within EiBI gravity as a result of the evaporation of a topologically charged black hole; we hope to publish our theoretical research in this direction soon. In the weak field limit, we showed that the light deflection increases for a wormhole and decreases for a GM compared to the Barriola-Vilenkin GM. We argued that in the GM case the light deflection never diverges; in the WH case, on the other hand, it diverges logarithmically in the strong field limit, that is, when the impact parameter tends to the radius of the WH throat. We studied the lensing of the EiBI WH in the strong field limit and showed that this methodology allows one to distinguish between an EiBI WH and an Ellis-Bronnikov WH. We also analyzed the observables, taking into account plausible and well-motivated parameter values; the data suggest that in the weak field limit the observables are within the observational range. Finally, we emphasize that the solution studied in this paper can also help in understanding the optical properties of liquid crystals, since the spatial section of (2) adequately describes these condensed-matter systems [77,89].
FIG. 1: Effective potential for some values of angular momentum L. | 4,615.8 | 2023-05-18T00:00:00.000 | [ "Physics" ] |
Operating conditions of hydraulic structures and results of their multi-factor analysis
Nowadays, the state of the Zhiguli Hydroelectric Station (HS) and its main hydraulic structures is controlled through observations of the control and measuring equipment installed on these structures, together with systematic inspections and surveys carried out both by the hydroelectric power station personnel and by invited experts. The last analysis of the state of the HS and its hydraulic structures was made in 1991. Since that time, a computer-based information and diagnostic system for monitoring the state of hydraulic structures has been introduced at the HS. The introduction of this system made it necessary to pay much more serious attention to the reliability of the results obtained by the control and measuring equipment, because the system monitors the condition of all hydraulic structures on the basis of these results. Thus, the purpose of this research is to perform a multi-factor analysis of the state of the HS and its hydraulic structures based on the results of field studies of all the devices installed in the water-retaining structures. The results show that the impervious elements of the HS underground circuit are not effective enough, for the specific geological conditions, in terms of dissipating the filtration head. The main drop of the water head in the base occurs at the upper cutoff of the HS building, i.e., the maximum filtration gradient occurs in this zone, which is very dangerous; this process should be carefully monitored. It was also revealed that several piezometers in the base do not work or give unreliable readings, so they must be repaired and, in some cases, replaced. In the future, it is necessary to equip the main piezometers of the HS with remote water-level measuring sets and to build an automated condition-monitoring system.
Introduction
Nowadays, the state of the Zhiguli Hydroelectric Station (HS) and its main hydraulic structures is controlled according to observation data from the control and measuring equipment installed on these structures. There are also systematic inspections and surveys carried out both by the hydroelectric power station personnel and by invited experts. As a result of the analysis of field studies and surveys, qualified experts draw conclusions on the state of the hydraulic structures.
In accordance with the latest version of the Maintenance Rules for Power Plants (RD 34.20.501-95), the wording "the analysis of field studies results" was replaced by the wording "multifactorial analysis of the structure state". This makes it possible to evaluate all the operating factors when analyzing the state of hydraulic structures of waterworks.
In all cases, it is evident that the analysis of the state of the structures should be carried out every five years by qualified experts. In this analysis, in addition to assessing the state of the structures from the control and measuring equipment, the reliability and validity of the readings and the operability of the equipment itself should also be assessed.
The last analysis of the state of the hydraulic structures of the Zhiguli Hydroelectric Station was carried out in 1991. Since that time, a computer-based information and diagnostic system for monitoring the state of the structures has been introduced at the hydroelectric station. The introduction of this system made it necessary to pay the most serious attention to the reliability of the results obtained with the help of the control and measuring equipment, since structure-state control is carried out on the basis of these results.
The article analyzes the field studies, from 1975 to 2015, of all the equipment installed in the water-retaining structures of the Zhiguli Hydroelectric Station [1][2][3].
For some groups of equipment, data from the whole period were used in the analysis. On the basis of the processed materials, the authors carried out a multi-factor analysis of the state of the hydraulic structures of the Zhiguli Hydroelectric Station, which is presented below.
Materials and methods
The HS building is situated on the right bank of the Volga River, and its base is embedded in the ancient ravine bedrock formed by erosion of the Zhiguli massif.
The base plates with cutoff and sheet piling, as well as the anchored upstream apron, are the impervious elements of the Zhiguli HS. The sheet piling is joined to the sheet piles of the right- and left-bank retaining walls, thus preventing bypass filtration. To remove the filtration water and relieve the pressure, a subsurface drain linked to the lower pool is installed behind the upper cutoff of the HS building.
The base under the HS building is not homogeneous: sections 1-6 are located over the gully, where the upper cutoff of the HS building cuts into dense Kinel clay. In the river-bed part, where the crest of the Kinel clays lies deeper, the upper cutoff rests on the relatively more permeable Quaternary Mindel-Riss soil sediments [4][5][6].
To control the filtration regime, 186 piezometers were installed in the base of the station building and at the junctions with the retaining walls. One group of piezometers controls the work of the entrance section of the HS building: the anchored upstream apron, the SCS cutoff, and the upper cutoff of the HS building base plate. The second group of devices controls the filtration pressure on the HS building base plate. To control the deformation of the HS building and associated devices, 366 settlement marks (currently 361), 84 slit-metering devices (currently 76), and 23 range marks (currently 30) were installed.
The impervious circuit of HS building is known to have been created by the base plate of trash-rack structure with cutoff, sheet piling, anchor upstream apron and upper cutoff of HS building [7,8].
Observations of the piezometers controlling the filtration regime in the entrance area under the HS building showed that the efficiency of this section in reducing the pressure head is small. Behind the upper sheet pile of sections 1, 5, 6, and 7, the water head drop was in the range of 20 to 30%. Within sections 3-4, the water levels measured by the piezometers behind the sheet pile were equal to the water levels in the reservoir, i.e., there was no head dissipation. The greatest head dissipation at the upper sheet pile was observed in sections 8, 9, and 10 and amounted to 30-40% of the head. The main head decrease in the base occurred at the upper cutoff of the HS building. This is quite natural, because there is a surface drain behind the upper cutoff with an outlet to the lower pool.
Thus, the circuit of impervious elements at the entrance part of the HS building is not effective enough, for the specific geological conditions, in terms of dissipating the filtration head.
Studies have shown that the greatest head drop along the damp section of the HS building is observed in sections 7, 8, and 9. Consequently, a minimum pressure gradient is observed at the base of these sections under the upper cutoff of the building.
The maximum pressure gradient is observed at the base of sections 3 and 4, and it is somewhat smaller at the base of sections 5 and 10: for sections 3 and 4 it is 0.82-0.85, and for sections 5 and 10 it is 0.6 and 0.9, respectively.
The maximum permissible gradients are 0.8 for loam and 1.35 for clay. Since the upper cutoff of sections 1-4 of the HS building is embedded in dense Kinel clays, the measured head gradient does not exceed the permissible values. However, it should be borne in mind that there are no piezometers directly below the cutoff, so the gradient can only be estimated over the length between piezometers P3 and P4, or between P3 and the surface drain. In fact, the water level measured by piezometer P3 may occur much closer to the surface drain, for example directly under the cutoff; in that case, the actual pressure gradient would be much greater than the value given above.
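The controlling quantity discussed here is simple to state: the filtration head gradient is the head difference between two piezometers divided by the distance between them. The numbers below are hypothetical and only illustrate the computation:

```python
def head_gradient(level_a_m, level_b_m, distance_m):
    """Filtration head gradient between two piezometers (dimensionless)."""
    return abs(level_a_m - level_b_m) / distance_m

# A hypothetical 8.2 m head difference over 10 m gives a gradient of 0.82,
# right at the permissible limit of 0.8 quoted for loam.
print(head_gradient(31.4, 23.2, 10.0))  # -> 0.82
```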
All this makes it necessary to provide constant and especially careful monitoring of the piezometers at the base of sections 3, 4, 5, and 10.
Thus, all the observed data confirm that the zone under the upper cutoff of sections 3, 4, 5, and 10 of the HS building is the most dangerous from the viewpoint of ensuring the filtration stability of the base. The main controlling parameter for this zone should be the filtration head gradient measured between piezometers P3 and P4.
It is also interesting to analyze the change in the piezometric levels measured by the equipment over the operation period. The readings of piezometers P1, P2, and P3 fluctuate in accordance with the upper pool level; the levels in piezometer P4, as well as in piezometers P7, P8, P9, and P10, change synchronously with the fluctuations of the lower pool levels.
In general, the readings of the control piezometers (P1, P2, P3, P4) over the entire operation period show a horizontal trend, i.e., apart from fluctuations associated with changes in the upper-pool and lower-pool levels, they do not change over time. The exceptions are the readings of piezometers P2 and P3 in sections 1 and 2: over the past 35 years, the P2 readings under these sections have been decreasing at a rate of 0.1 m/year, and the P3 readings at a rate of 0.2 m/year.
If these data are reliable, such a decrease indicates colmatation (clogging) of the filtration pathways in the upstream-apron part of the base of these sections, which in turn reduces the filtration gradients in the upper cutoff zone.
The reverse process, a growth of piezometric levels, is observed at piezometer P3 at the base of sections 7 and 9, with a growth rate of 0.1 m/year, indicating an increase in the head gradient under the upper cutoff of the HS building. For these sections this process is not dangerous, as the filtration head gradients are in the range of 0.2-0.28, much lower than the permissible values [9].
Analysis of the piezometer observation data showed that the piezometers are generally serviceable and provide reliable information. However, a number of piezometers require verification and repair: P4 in sections 3 and 5 (large scatter of readings) and in sections 2 and 6, P5 in section 6, and P7 in sections 1 and 7.
To monitor the subsidence and relative displacements of the sections, groups of slit-metering devices were installed on the HS building: No. 3 on the upstream side (in the zone of the upper-pool shutter) and No. 5 on the downstream side (in the zone of the emergency-repair shutters of the lower pool). Moreover, two groups of slit-metering devices were installed on the trash-rack structure: No. 1 on the upper-pool side and No. 2 on the downstream side. Figure 1 shows the corresponding observations. Although the building subsidence has practically damped out, observations on the slit-metering marks must be continued, with an accuracy even higher than at the beginning of the operational period; modern geodetic instruments should be used, and thorough monitoring of the surveying reference points should be carried out.
The frequency of observations should in no case be reduced; observations should be conducted at least once every two years, or even annually.
The horizontal displacement measurements carried out on the slit-metering marks give only the relative displacements of the sections and do not allow the overall horizontal movements of the building to be estimated. Nevertheless, according to the available observations, the measured horizontal displacements of the sections along the flow are within the observation accuracy. The largest movements are measured in the direction of joint opening and closing. Figure 2 shows the approximated curves of the opening and closing of the cross-sectional joints along Gate 3. As can be seen, the joints are steadily closing, the closing rate in recent years being constant, with no visible damping of this process. The only exception is the joint in section 9 (Gate 3), where continuous joint opening is observed.
Discussion
Analysis of the observations of the filtration regime at the base showed that the impervious circuit of the HS building, including the base plate with cutoff and sheet piling as well as the anchored upstream apron, works insufficiently effectively in these geological conditions. Under sections 1, 5, 6, and 7, the sheet pile dissipates 20-30% of the head, and in the area of sections 3-4 it does not dissipate the head at all.
As a result, the filtration gradients directly under the upper cutoff of sections 3 and 4, and to a lesser extent of sections 5 and 10, lie within the range of 0.6 to 0.9, which is close to the limiting values. Taking into account that it is not possible to measure the piezometric head distribution directly under the cutoff, the actual gradient can be much larger.
Several piezometers at the base do not work or give unreliable readings; these piezometers must be repaired or replaced.
Subsidence observations of the station showed that the minimum subsidence values occur in sections 1-6 and the maximum in sections 8-10. This allows us to assume that the soil at the base of sections 8-10 has a lower density and is more deformable. In general, the subsidence values even in sections 8-10 are very small, and their growth over time does not exceed 1 mm per year, which indicates damping.
Horizontal deformations of the sections in the direction along the flow are practically absent, while the transverse deformations measured with the slit-metering devices indicate continuous joint closing at an undamped rate. This process is linear in nature and affects almost all section joints, which casts doubt on the reliability of the instrument readings; special test measurements are recommended to verify the reliability of the observation results.
Conclusions
The main conclusions of the work can be formulated as follows: 1. The impervious elements of the underground circuit of the HS building, especially at the entrance section, turned out to be insufficiently effective for the specific geological conditions of the base from the point of view of dissipating the filtration head. Thus, the main drop of the water head at the base of the sections under consideration occurs at the upper cutoff of the HS building, i.e., the maximum filtration gradient is observed in this zone. This appears to be very dangerous; therefore, this process should be carefully monitored, especially in the area of sections 3 and 4.
2. P3 and P4 are the main piezometers determining the reliability assessment of the impervious elements of the entire HS building; special attention should be paid to the piezometers in sections 3, 4, and 10. In the future, it is necessary to equip the main piezometers of the HS building with remote water-level measuring sets and to build an automated condition-monitoring system. | 3,343.2 | 2018-01-01T00:00:00.000 | [ "Engineering" ] |
A loop structure allows TAPBPR to exert its dual function as MHC I chaperone and peptide editor
Adaptive immunity vitally depends on major histocompatibility complex class I (MHC I) molecules loaded with peptides. Selective loading of peptides onto MHC I, referred to as peptide editing, is catalyzed by tapasin and the tapasin-related TAPBPR. An important catalytic role has been ascribed to a structural feature in TAPBPR called the scoop loop, but the exact function of the scoop loop remains elusive. Here, using a reconstituted system of defined peptide-exchange components including human TAPBPR variants, we uncover a substantial contribution of the scoop loop to the stability of the MHC I-chaperone complex and to peptide editing. We reveal that the scoop loop of TAPBPR functions as an internal peptide surrogate in peptide-depleted environments stabilizing empty MHC I and impeding peptide rebinding. The scoop loop thereby acts as an additional selectivity filter in shaping the repertoire of presented peptide epitopes and the formation of a hierarchical immune response.
Introduction
Nucleated cells of higher vertebrates provide information about their health status by presenting a selection of endogenous peptides on MHC I molecules at the cell surface. By sampling these peptide-MHC I (pMHC I) complexes, CD8+ T lymphocytes are able to detect and eliminate infected or cancerous cells (Blum et al., 2013; Rock et al., 2016). In a process called peptide editing or proofreading, peptides derived from the cellular proteome are selected for their ability to form stable pMHC I complexes. This peptide editing is known to be catalyzed by the two homologous MHC I-specific chaperones tapasin (Tsn) and TAP-binding protein-related (TAPBPR) (Fleischmann et al., 2015; Hermann et al., 2015; Morozov et al., 2016; Tan et al., 2002; Thomas and Tampé, 2019; Wearsch and Cresswell, 2007; Wearsch et al., 2011). The selection of high-affinity MHC I-associated peptide epitopes is of pivotal importance not only for immunosurveillance by effector T lymphocytes, but also for priming of naïve T cells and T cell differentiation. As an integral constituent of the peptide-loading complex (PLC) in the endoplasmic reticulum (ER) membrane, the ER-restricted Tsn functions in a 'nanocompartment' characterized by a high concentration of diverse, optimal peptides. The peptides are shuttled into the ER by the heterodimeric ABC (ATP-binding cassette) transporter associated with antigen processing TAP1/2, the central component of the PLC (Abele and Tampé, 2018). In the ER, most peptides are further trimmed by the aminopeptidases ERAP1 and ERAP2 to an optimal length for binding in the MHC I groove (Evnouchidou and van Endert, 2019; Hammer et al., 2007). In contrast to Tsn, TAPBPR operates independently of the PLC and is also found in the peptide-depleted cis-Golgi network. Fundamental insights into the architecture and dynamic nature of the Tsn-containing PLC have come from a recent cryo-EM study of the fully-assembled human PLC (Blees et al., 2017), while the basic principles underlying catalyzed peptide editing have been elucidated by crystal structures of the TAPBPR-MHC I complex (Jiang et al., 2017; Thomas and Tampé, 2017a): TAPBPR stabilizes the peptide-binding groove in a widened conformation primarily through the MHC I α2-1 helix, distorts the floor of the binding groove, and shifts the position of β2-microglobulin (β2m). Furthermore, one of the two TAPBPR-MHC I complex structures revealed a remarkable structural feature in TAPBPR named the scoop loop (Thomas and Tampé, 2017a). In TAPBPR, this loop is significantly longer than the corresponding region in Tsn, which was not resolved in the X-ray structure of Tsn (Dong et al., 2009). Notably, the scoop loop of TAPBPR is located in the F-pocket region of the empty MHC I binding groove (Figure 1A,B). By anchoring the C-terminal part of the peptide, the F-pocket region is crucially involved in defining pMHC I stability (Abualrous et al., 2015; Hein et al., 2014). The scoop loop occupies a position that is incompatible with peptide binding and displaces or coordinates several key MHC I residues responsible for binding the C terminus of the peptide. We therefore proposed that the scoop loop can be regarded as a surrogate for the C terminus of the displaced peptide, stabilizing the inherently labile empty MHC I molecule (Thomas and Tampé, 2017a). At the same time, by occupying a region critical to peptide binding, the scoop loop might allow only high-affinity peptides to re-enter the MHC I binding groove after displacement of a sub-optimal peptide.
The proposed importance of the scoop loop for TAPBPR function has recently been scrutinized in a study by Ilca et al. investigating TAPBPR scoop-loop variants using immunopeptidomics and cell-based assays (Ilca et al., 2018). Ilca et al. found that a specific leucine residue in the scoop loop facilitates peptide displacement on MHC I allomorphs favoring hydrophobic peptide side chains in their F pocket. Here, we aimed to clarify the role of the scoop loop during TAPBPR-catalyzed peptide editing using in vitro interaction and peptide-exchange studies with defined, purified components. We demonstrate that the scoop loop is of critical importance for TAPBPR-mediated stabilization of empty MHC I clients in peptide-depleted environments and contributes to peptide quality control during editing by impeding released peptide from rebinding in the MHC I groove. Collectively, our data support a crucial role for the TAPBPR scoop loop in establishing a hierarchical immune response.

eLife digest

Cells in the body keep the immune system informed about their health by showing it fragments of the proteins they have been making. They display these fragments, called peptides, on MHC molecules for passing immune cells to inspect. That way, if a cell becomes infected and starts to make virus proteins, or if it becomes damaged and starts to make abnormal proteins, the immune system can 'see' what is happening inside and trigger a response.
MHC molecules each have a groove that can hold one peptide for inspection. For the surveillance system to work, the cell needs to load a peptide into each groove before the MHC molecules reach the cell surface. Once the MHC molecules are on the cell surface, the peptides need to stay put; if they fall out, the immune system will not be able to detect them. The problem for the cell is that not all peptides fit tightly into the groove, so the cell needs to check each one before it goes out. It does this using a protein called TAPBPR.
TAPBPR has a finger-like structural feature called the "scoop loop", which fits into the end of the MHC groove while the molecule waits for a peptide. The experiments revealed that the scoop loop plays two important roles. The first is to keep the MHC molecule stable when it is empty, and the second is to hinder unsuitable peptides from binding. The scoop loop sticks into one side of the groove like a tiny hairpin, so that pushed-out, poorly fitting peptides cannot reattach. At the same time, it holds the MHC molecule steady until a better peptide comes along and only releases when the new peptide has slotted tightly into the groove.
Understanding how cells choose which peptides to show to the immune system is important for many diseases. If cells are unable to find a suitable peptide for a particular illness, it can stop the immune system from mounting a strong response. Further research into this quality control process could aid the design of new therapies for infectious diseases, autoimmune disorders and cancer.
Design of TAPBPR scoop-loop variants
To investigate the function of the scoop loop, we prepared two human TAPBPR variants: TAPBPR Tsn-SL, in which the TAPBPR scoop loop was replaced with the corresponding shorter loop of Tsn, and TAPBPR ΔSL, in which the original scoop loop was essentially deleted by replacing it with three glycine residues to preserve proper folding of the MHC I chaperone (Figure 1C). The ER-lumenal domains of wildtype (wt) TAPBPR and the variants, each harboring a C-terminal histidine tag, were expressed in insect cells and purified from the cell culture supernatant via immobilized-metal affinity chromatography (IMAC) and size-exclusion chromatography (SEC). As MHC I chaperone clients, we chose mouse H2-Db and human HLA-A*02:01, which are known to interact with TAPBPR (Ilca et al., 2019; Morozov et al., 2016). HLA-A*02:01, the major MHC I allomorph in the Caucasian population, found in more than 50% of the global population, presents a diverse spectrum of immunodominant autoimmune, viral, and tumor epitopes and is therefore medically highly relevant (Boucherma et al., 2013). The MHC I allomorphs were expressed in E. coli as inclusion bodies and refolded in the presence of β2m and fluorescently-labeled or photo-cleavable peptide (Rodenko et al., 2006). The highly pure TAPBPR variants and pMHC I complexes eluted as monodisperse samples at the expected size during SEC (Figure 1D-F).
Scoop-loop variants have reduced chaperone activity towards peptide-free MHC I
During peptide exchange, MHC I molecules pass through a peptide-free, high-energy intermediate state after peptide release and before entry of a new peptide. A hallmark of peptide editors like TAPBPR is their ability to recognize and chaperone this intermediate until it is located in a peptide-rich environment where a high-affinity peptide ligand can enter the MHC I binding groove (Thomas and Tampé, 2019; Thomas and Tampé, 2017b). To scrutinize the role of the scoop loop in chaperoning empty MHC I, we tested the ability of our TAPBPR variants to stabilize peptide-free H2-Db. Hence, H2-Db (10 μM) loaded with a photo-cleavable peptide was incubated with TAPBPR (3 μM) under UV exposure. Subsequent SEC analysis revealed that both TAPBPR Tsn-SL and TAPBPR ΔSL are, in principle, competent to form complexes with MHC I (Figure 2A). However, in comparison to TAPBPR wt (Figure 2A,B), the amount of H2-Db complex detected for TAPBPR Tsn-SL and TAPBPR ΔSL during SEC was reduced by around 40% and 90%, respectively (Figure 2C). After reanalysis of the MHC I chaperone complexes by SEC, the mutant complexes were mostly dissociated, indicating kinetic instability.

Figure 2: (A) H2-Db (10 μM) loaded with a photo-cleavable peptide (RGPGRAFJ*TI, J* denotes a photo-cleavable amino acid) was irradiated with UV light in the presence of TAPBPR wt (3 μM, red), TAPBPR Tsn-SL (blue), or TAPBPR ΔSL (yellow) and subsequently analyzed by SEC. The different elution volumes of the first main peak, marked by dashed lines, already hint at different complex stabilities. (B) Deconvolution of the size-exclusion chromatogram from TAPBPR wt complex formation (an experiment independent of the sample shown in (A)). The experimental chromatogram (red) was deconvoluted using three Gaussian functions (gray) that can be ascribed to the TAPBPR-H2-Db complex (1.06 mL), free TAPBPR (1.12 mL), and free H2-Db (1.20 mL). The sum of the three Gaussians is shown as a dotted curve. The residual plot beneath the main panel shows the difference between the experimental data and the sum. (C) Stability of the complexes formed by TAPBPR wt, TAPBPR Tsn-SL, and TAPBPR ΔSL, respectively, as judged by the area of the complex peak obtained by deconvolution. Data represent mean ± SD (n = 2).
Scoop-loop variants retain their function in catalyzing peptide dissociation from MHC I

After investigating the chaperone activity of the TAPBPR scoop-loop mutants, we tested their ability to displace MHC I-bound peptide. To this end, we employed an in-vitro peptide exchange assay similar to the one previously described for measuring the activity of Tsn (Fleischmann et al., 2015; Chen and Bouvier, 2007). Dissociation of a medium-affinity fluorescent peptide from refolded and purified p*MHC I (p* denotes fluorescently-labeled peptide) was monitored by fluorescence polarization after addition of a 1000-fold molar excess of unlabeled high-affinity competitor peptide in the absence or presence of TAPBPR (Figure 3A). The large molar excess of unlabeled competitor peptide ensures that once a fluorescent peptide dissociates, it does not rebind, but is replaced by an unlabeled competitor-peptide molecule. The observed rate constant is thus solely determined by the dissociation rate constant of the fluorescent peptide. The conditions of this assay mimic the environment of the PLC, where optimal, high-affinity peptides abound. For the mouse MHC I allomorph H2-D^b, TAPBPR^wt and the scoop-loop variants accelerated the uncatalyzed peptide release (2.53 ± 0.37 × 10⁻³ s⁻¹) to a similar extent. The TAPBPR^ΔSL mutant lacking the entire scoop loop exhibited slightly reduced activity (7.68 ± 1.17 × 10⁻³ s⁻¹) compared to the wt protein (10.41 ± 0.54 × 10⁻³ s⁻¹), whereas TAPBPR^Tsn-SL was slightly more active (12.64 ± 1.03 × 10⁻³ s⁻¹) (Figure 3B,C). When we performed the experiment at a much lower TAPBPR concentration (75 nM), the TAPBPRs retained their activity, and the gradual activity differences between the variants remained (Figure 3-figure supplement 1). This suggests that TAPBPR^wt and the scoop-loop mutants have similar affinities for H2-D^b. TAPBPR^wt was even able to catalyze displacement of a high-affinity peptide from H2-D^b, although the catalytic effect was considerably smaller (1.8-fold acceleration) than for H2-D^b loaded with the medium-affinity peptide (4.1-fold acceleration) (Figure 3-figure supplement 2A,B). In a second set of experiments, we analyzed peptide dissociation from the human MHC I allomorph HLA-A*02:01. As for H2-D^b, in a peptide-rich environment (1000-fold molar excess of peptide), the highest catalytic activity towards HLA-A*02:01 was observed for TAPBPR^Tsn-SL, followed by TAPBPR^wt and TAPBPR^ΔSL; yet, the differences in activity between the three TAPBPRs were more pronounced, and the acceleration of the uncatalyzed peptide dissociation from HLA-A*02:01 (1.90 ± 0.04 × 10⁻³ s⁻¹) by TAPBPR^Tsn-SL (26.31 ± 2.59 × 10⁻³ s⁻¹) and TAPBPR^wt (15.79 ± 0.71 × 10⁻³ s⁻¹) was significantly higher than for H2-D^b, while the activity of TAPBPR^ΔSL (8.52 ± 1.18 × 10⁻³ s⁻¹) remained almost the same (Figure 3D,E).
The validity of our peptide exchange assay was confirmed by two interface mutants of TAPBPR^wt, TN3-Ala and TN6. The TN3 (E72K) and TN6 (E185K, R187E, Q189S, Q261S) mutants were initially described for Tsn, where they significantly reduce or abolish MHC I binding (Dong et al., 2009). The impact of the TN6 mutations on MHC I interaction was later confirmed for TAPBPR (Morozov et al., 2016). According to the TAPBPR-MHC I crystal structures (Jiang et al., 2017; Thomas and Tampé, 2017a), the residue in TAPBPR (E105) corresponding to the mutated residue in Tsn-TN3 forms a hydrogen bond with the swung-out Y84 of the MHC heavy chain, which is involved in coordinating the C terminus of the peptide in liganded MHC. We reasoned that a mutation to Ala instead of Lys might increase the mutational effect and therefore generated the TN3-Ala mutant. Two of the mutated residues in TN6 (R210 and Q212) are part of the jack hairpin of TAPBPR and form several interactions with MHC I heavy-chain residues, while Q275 lies in the interface with the α2-1 helix and the β8 sheet in the floor of the MHC I binding groove. Consequently, TN3-Ala and TN6 displayed drastically reduced activity towards H2-D^b in our peptide-exchange experiment, with peptide dissociation rate constants close to the value of the uncatalyzed reaction (Figure 3C,F). In summary, the results of our exchange assays demonstrate that, under peptide-rich conditions, the tested TAPBPR variants differ gradually in their displacement activity in an allomorph-dependent manner. But even the TAPBPR^ΔSL mutant lacking the scoop loop is still able to substantially accelerate peptide dissociation from MHC I.
The scoop loop acts as an internal peptide competitor
In the TAPBPR-MHC I crystal structure, the scoop loop binds in the F-pocket region of the MHC binding groove and appears to act as a surrogate for the peptide C terminus (Thomas and Tampé, 2017a). This notion is corroborated by our SEC analyses, which show that the scoop loop stabilizes peptide-free MHC I. We therefore wondered whether the scoop loop impedes rebinding of displaced peptide and functions 'in cis' as a tethered, internal peptide competitor in the F pocket with an extremely high effective concentration. To test this hypothesis, we modified the peptide exchange assay for H2-D^b and HLA-A*02:01 by adding, in a first step, only TAPBPR without competitor peptide, which allowed us to monitor the change in free and bound fluorescent peptide under the influence of peptide rebinding in the presence of TAPBPR (Figure 4A). This condition mimics the physiological environment TAPBPR operates in, where optimal replacement peptides are scarce. Strikingly, after addition of the different TAPBPRs to H2-D^b loaded with fluorescent peptide, the polarization changes, which correspond to the changes in the ratio of free to bound peptide, diverged dramatically (Figure 4B). Peptide dissociation was most pronounced for TAPBPR^wt with the native scoop loop, reaching ~60% peptide release, whereas only ~12% of the peptide population was released from H2-D^b by TAPBPR^Tsn-SL, and almost no decrease in polarization was caused by TAPBPR^ΔSL. As in our original peptide exchange assay (Figure 3), differences between the two MHC I allomorphs were observed: in comparison to H2-D^b, TAPBPR^Tsn-SL-induced peptide dissociation from HLA-A*02:01 was significantly stronger, approaching the level of peptide release induced by TAPBPR^wt (Figure 4-figure supplement 1A). Peptide release was also peptide-dependent, as H2-D^b loaded with a high-affinity peptide showed a significantly smaller decline in bound peptide (Figure 3-figure supplement 2C). After addition of competitor peptide (second step), the observed dissociation rate constants were in the same range as the values determined for the one-step experiment. Moreover, the level of released peptide after TAPBPR addition was titratable and reached saturation at 3 μM TAPBPR (Figure 4C-E, Figure 4-figure supplement 1B). Under the given conditions, TAPBPR^wt was able to dissociate 70% (H2-D^b) and 80% (HLA-A*02:01) of the total MHC I-associated peptide, respectively (Figure 4C, Figure 4-figure supplement 1B). These results suggest that the scoop loop interferes with re-binding of displaced peptide and can only be completely dislodged from the MHC I binding pocket by a high-affinity peptide. The scoop loop thus acts as a crucial selectivity filter during peptide editing on MHC I.
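As an aside on the analysis underlying these percentages: the fraction of released peptide is obtained from the measured polarization by linear interpolation between the free- and bound-peptide endpoints. The sketch below illustrates this standard fluorescence-polarization conversion; it is our illustration, not the authors' analysis code, and the endpoint values are hypothetical.

```python
import numpy as np

def fraction_bound(P, P_free, P_bound):
    """Estimate the fraction of MHC I-bound fluorescent peptide from a
    polarization reading P, given the polarization of fully free peptide
    (P_free) and of fully bound peptide (P_bound). This linear
    interpolation neglects quantum-yield differences between the free
    and bound states, a common first approximation."""
    return (P - P_free) / (P_bound - P_free)

# Hypothetical endpoints (millipolarization units) and a time trace:
P_free, P_bound = 60.0, 240.0
trace = np.array([240.0, 180.0, 130.0, 95.0, 75.0])
released = 1.0 - fraction_bound(trace, P_free, P_bound)
print(released)  # fraction of peptide released at each time point
```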
Discussion
Tsn and TAPBPR are MHC I-dedicated chaperones which facilitate loading and selective exchange of antigenic peptides, thereby generating stable pMHC I complexes that shape a hierarchical immune response. The molecular underpinnings of their chaperone and peptide proofreading activities have only recently been uncovered by crystal structures of the TAPBPR-MHC I complex (Jiang et al., 2017; Thomas and Tampé, 2017a). Notably, one of the X-ray structures resolved a loop structure, termed the scoop loop, that is wedged into the F-pocket region of the empty MHC I binding groove and has been postulated to play an important role during peptide exchange (Thomas and Tampé, 2017a). Here, we show that the TAPBPR scoop loop is indeed critically important in chaperoning intrinsically unstable empty MHC I clients in a peptide-depleted environment. This is illustrated by the reduced chaperone activity of TAPBPR^Tsn-SL, which harbors the shorter Tsn scoop loop, and by the dramatically reduced lifetime of the TAPBPR^ΔSL complex. In a peptide-rich, PLC-like environment, emulated by our one-step displacement experiments, the TAPBPR^Tsn-SL mutant displays the highest activity, while TAPBPR^ΔSL retains the ability to displace peptide. The latter observation appears to be in contrast to the study by Ilca et al., which found that TAPBPR with a mutated, but full-length, scoop loop loses its ability to effectively mediate peptide dissociation (Ilca et al., 2018). In addition to stabilizing the chaperone-MHC I complex, we demonstrate that the TAPBPR scoop loop acts as an internal peptide competitor and, thus, as a selectivity filter in the discrimination between low- and high-affinity peptides. Although direct competition appears to be the most obvious explanation for the effect on peptide rebinding, we cannot exclude that the scoop loop exerts its influence on peptide rebinding through an allosteric mechanism. The peptide-filtering activity seems to be allomorph-dependent for TAPBPR^Tsn-SL. Our current interpretation of this allomorph specificity is that the Tsn scoop loop interacts more strongly with the F-pocket region of HLA-A*02:01 and is therefore able to impede peptide rebinding more efficiently than in the case of H2-D^b. In contrast, TAPBPR^wt shows strong peptide release activity towards both MHC I allomorphs.
Based on these new insights, we propose the following model of TAPBPR-catalyzed peptide optimization on MHC I (Figure 5): The large concave surface formed by the N-terminal domain of TAPBPR mediates its initial encounter with a suboptimally-loaded MHC I, assisted by the C-terminal domain of TAPBPR, which contacts the α3 domain of the MHC I heavy chain and β2m. TAPBPR facilitates the release of low- to medium-affinity peptides primarily by widening the peptide-binding groove through the MHC I α2-1 helix, fastening the peptide-coordinating Tyr84, distorting the floor of the binding groove, and shifting the position of β2m (Jiang et al., 2017; Thomas and Tampé, 2017a). This remodeling is made possible by the intrinsic plasticity of MHC I molecules (Garstka et al., 2011; McShan et al., 2019; Natarajan et al., 2018; Thomas and Tampé, 2017b; van Hateren et al., 2017; van Hateren et al., 2015; Wieczorek et al., 2017), and it appears to be induced primarily by structural elements of TAPBPR that lie outside the scoop loop. As a result, the TAPBPR^ΔSL mutant lacking the scoop loop is still able to catalyze peptide displacement. Once the suboptimal peptide has been released, the scoop loop occupies the position of the peptide C terminus in the F-pocket region, thereby contributing to the stabilization of the peptide-deficient binding groove. Our two-step peptide exchange assay, mimicking a peptide-depleted environment, demonstrates that the scoop loop functions at the same time as a peptide selectivity filter by impeding re-binding of the replaced peptide, either through direct competition with the C terminus of the incoming replacement peptide or through an allosteric mechanism. Hence, the scoop loop contributes to the significant decrease in affinity of incoming peptides for the MHC I groove in the presence of TAPBPR (McShan et al., 2018). Assuming a mode of direct competition, the replacement peptide would dock into the MHC I groove first with its N terminus, before competing with the TAPBPR scoop loop over the F-pocket region (Hafstrand et al., 2019; Thomas and Tampé, 2017a). Negative allosteric coupling between different parts of the MHC I molecule might play a role in the final release of TAPBPR. The shorter scoop loop in Tsn suggests that its selective pressure on the replacement peptide is weaker than in TAPBPR. Indeed, our fluorescence polarization and SEC analyses show that the tapasin scoop loop in TAPBPR^Tsn-SL is less efficient in preventing re-binding of dissociated peptide.
Physiologically, these observations might be explained by the fact that Tsn functions within the PLC, a 'nanocompartment' characterized by an abundant and diverse supply of optimal peptides, reaching a bulk concentration of up to 16 μM before the TAP transporter is arrested by trans-inhibition (Grossmann et al., 2014). Moreover, Tsn is supported by other PLC chaperones in stabilizing empty MHC I clients. In contrast, TAPBPR operates as a single MHC I-dedicated chaperone outside the PLC, in environments where the concentration of high-affinity peptides is drastically lower and MHC I clients have to be stabilized in a peptide-receptive state for extended periods of time. Long-term stabilization of suboptimally-loaded or empty MHC I by TAPBPR also allows the major ER/cis-Golgi glycoprotein folding sensor UGGT1 (UDP-glucose:glycoprotein glucosyltransferase 1) to re-glucosylate the MHC I molecule in order to feed it back into the calnexin/calreticulin cycle and/or allow recruitment of the MHC I to the PLC (Thomas and Tampé, 2019). In conclusion, the evidence provided by our study indicates that the scoop loop is evolutionarily fine-tuned to enable Tsn and TAPBPR to accomplish their dual function as chaperone and proofreader in the specific subcellular location they operate in. By serving both as a stabilizing element and as a selectivity filter in TAPBPR, the scoop loop influences peptide editing and impacts the repertoire of MHC I-associated epitopes presented on the cell surface.

Figure 5. Proposed mechanistic functions of the scoop loop in catalyzed peptide proofreading. MHC I molecules bound to low-affinity peptide are recognized by the peptide editor (TAPBPR) (step 1). The editor lowers the peptide affinity of the suboptimally-loaded MHC I and induces dissociation of the low- to medium-affinity peptide (step 2). The scoop loop, which inserts into the F-pocket region of the peptide-binding groove, crucially contributes to the stabilization of the empty MHC I. In the absence of suitable peptides, empty MHC I clients are thereby held in a stable state until they can be loaded with an optimal epitope, for example in the PLC. Re-binding of the low-affinity peptide (step 3) is impeded by the scoop loop, through direct competition and/or via allosteric means. Only high-affinity peptides are able to compete with the editor over key regions of the peptide-binding groove (step 4) to eventually displace the scoop loop and the editor from the MHC I (step 5). The displaced editor is now ready for a new round of peptide selection, and the stable pMHC I complex is licensed to travel via the Golgi apparatus to the cell surface.
Materials and methods
DNA constructs
The DNA constructs of human β2m, the ectodomain of mouse H2-D^b, and TAPBPR^wt were identical to the ones previously described (Thomas and Tampé, 2017a), except for position 97 in TAPBPR^wt, which contained the native cysteine. The TAPBPR scoop-loop mutants TAPBPR^Tsn-SL and TAPBPR^ΔSL were generated by overlap extension PCR; the TN3-Ala and TN6 mutants were generated by site-directed mutagenesis. The TN3-Ala and TN6 mutants harbored the same mutations that were described for the corresponding mutants of Tsn (Dong et al., 2009), except that in TN3-Ala, E105 was mutated to alanine. TAPBPR^Tsn-SL, TAPBPR^ΔSL, TN3-Ala, and TN6 all contained the C97A mutation. Human HLA-A*02:01 (amino acids 1-278) was cloned into pET-28 (Novagen, Merck Millipore, Darmstadt, Germany) and ended in a C-terminal His6-tag preceded by a linker (sequence: HE). The amino acid numbering of TAPBPR is based on the mature protein as defined by N-terminal sequencing (Zhang and Henzel, 2004).
Protein expression
Human β2m and the ectodomains of mouse H2-D^b and human HLA-A*02:01 were expressed as inclusion bodies in Escherichia coli BL21(DE3) as described before (Rodenko et al., 2006; Thomas and Tampé, 2017a). TAPBPR proteins were expressed in Spodoptera frugiperda (Sf21 or Sf9) insect cells according to standard protocols for the Bac-to-Bac system (Thermo Fisher Scientific, Waltham, MA). A high-titer recombinant baculovirus stock was used to infect the insect cells at a density of 1.5-2.0 × 10⁶ cells/mL, which were cultivated in Sf-900 III SFM medium (Thermo Fisher Scientific) at 28 °C. The cell culture medium containing secreted TAPBPR was harvested 72 hr after infection.
Refolding and purification of MHC I allomorphs
H2-D^b and HLA-A*02:01 were refolded from inclusion bodies by rapid dilution in the presence of purified β2m and peptide according to established protocols (Rodenko et al., 2006). Refolded MHC I complexes were purified by SEC (Superdex 200 Increase 10/300, GE Healthcare) in 1xHBS and concentrated by ultrafiltration (Amicon Ultra, Merck Millipore).
Purification of TAPBPR proteins
TAPBPR proteins were purified from the insect cell culture medium by IMAC according to a protocol published earlier (Thomas and Tampé, 2017a), polished by SEC (Superdex 200 Increase 10/300, GE Healthcare) in 1xHBS, and concentrated by ultrafiltration (Amicon Ultra, Merck Millipore).
Peptide exchange
Dissociation of fluorescently labeled peptide from MHC I was monitored at 23 °C in 1xHBS by fluorescence polarization (Fluorolog-3 spectrofluorometer, Horiba Jobin Yvon, Bensheim, Germany) with λex/em of 530/560 nm. One-step and two-step dissociation assays were carried out with 300 nM MHC I loaded with TAMRA-labeled peptide, 1 μM TAPBPR, and 300 μM competitor peptide. Dissociation rate constants were determined in GraphPad Prism using a one-phase exponential decay regression.
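The rate constants were determined in GraphPad Prism; the sketch below reproduces the same one-phase exponential decay regression in Python, on synthetic data. Function and variable names are ours, and the numbers are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_phase_decay(t, span, k_obs, plateau):
    """Y(t) = span * exp(-k_obs * t) + plateau, the model Prism calls
    'one-phase decay'; k_obs is the observed dissociation rate constant
    in s^-1."""
    return span * np.exp(-k_obs * t) + plateau

# Synthetic example: time in seconds, polarization in mP, with noise
t = np.linspace(0, 1800, 60)
rng = np.random.default_rng(0)
y = one_phase_decay(t, 180.0, 10.4e-3, 60.0) + rng.normal(0, 2, t.size)

popt, pcov = curve_fit(one_phase_decay, t, y, p0=(150.0, 1e-3, 50.0))
span, k_obs, plateau = popt
k_err = np.sqrt(np.diag(pcov))[1]
print(f"k_obs = {k_obs:.2e} +/- {k_err:.1e} s^-1")
```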
MHC I-chaperone complex formation
In the presence of purified TAPBPR (3 μM), photo-P18-I10-loaded H2-D^b (10 μM) was irradiated with UV light (365 nm, 185 mW/cm², 120 s) on ice and afterwards incubated for 10 min at room temperature. Samples were subsequently centrifuged at 10,000 × g for 10 min and analyzed by analytical SEC on a Superdex 75 (3.2/300) column (GE Healthcare). SEC runs were conducted in 1xHBS and monitored by absorbance at 280 nm. Chromatograms were deconvoluted into three Gaussian functions using the program Fityk 1.3.1 (Wojdyr, 2010). The amount of complex was assessed by the area of the complex peak.
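The deconvolution itself was done in Fityk; a minimal Python sketch of the same three-Gaussian fit, with hypothetical initial guesses taken from the elution volumes quoted in the Figure 2 caption, might look as follows.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, a, mu, sigma):
    return a * np.exp(-0.5 * ((v - mu) / sigma) ** 2)

def three_gaussians(v, a1, m1, s1, a2, m2, s2, a3, m3, s3):
    """Sum of three Gaussians: complex, free TAPBPR, free MHC I."""
    return gauss(v, a1, m1, s1) + gauss(v, a2, m2, s2) + gauss(v, a3, m3, s3)

# v: elution volume (mL); a280: absorbance trace (synthetic stand-in)
v = np.linspace(0.9, 1.4, 200)
a280 = (gauss(v, 0.5, 1.06, 0.02) + gauss(v, 0.3, 1.12, 0.02)
        + gauss(v, 0.2, 1.20, 0.02))

p0 = [0.4, 1.06, 0.02, 0.3, 1.12, 0.02, 0.2, 1.20, 0.02]
popt, _ = curve_fit(three_gaussians, v, a280, p0=p0)

# Area of the complex peak Gaussian: a * sigma * sqrt(2*pi)
complex_area = popt[0] * popt[2] * np.sqrt(2 * np.pi)
residual = a280 - three_gaussians(v, *popt)  # cf. residual plot in Fig. 2B
print(f"complex peak area: {complex_area:.4f}")
```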
TAPBPR-MHC I complex stability
Purified peptide-deficient TAPBPR^wt-H2-D^b, TAPBPR^Tsn-SL-H2-D^b, and TAPBPR^ΔSL-H2-D^b complexes were analyzed via analytical SEC either on a Superdex 75 (3.2/300) or a Superdex 200 (3.2/300) column (GE Healthcare) at a flow rate of 0.075 mL/min. A separate sample of purified TAPBPR^wt-H2-D^b complex was incubated with a 100-fold molar excess of high-affinity peptide prior to re-analysis by SEC.
"Biology"
] |
Three-dimensional simulations of rockfalls in Ischia, Southern Italy, and preliminary susceptibility zonation
Abstract. Ischia Island is a volcano-tectonic horst in the Phlegrean Volcanic District, Italy. We investigated rockfalls in Ischia using STONE, a three-dimensional model that simulates trajectories from given block detachment locations. We propose methodological advances regarding the use of high-resolution LiDAR elevation data, the localization of possible detachment sources, and the inclusion of scenario-based seismic shaking as a trigger for rockfalls. We demonstrate that raw LiDAR data are useful to distinguish areas covered by tall vegetation, allowing realistic simulation of trajectories. We found that the areas most susceptible to rockfalls are located along the N, N-W and S-W steep flanks of Mt. Epomeo, the S and S-W coast, and the sides of some steep exposed hydrographic channels in the southern sector of the island. A novel procedure for dynamic activation of sources depending on ground shaking, in the event of an earthquake, allowed us to infer a seismically-triggered source map and the corresponding rockfall trajectories for a scenario with a 475-year return time. We thus obtained preliminary rockfall susceptibility maps of Ischia both in a "static" (trigger-independent) scenario and in a seismic-shaking triggering scenario. These maps must not be considered risk maps, but a starting point for detailed field analysis.
Introduction
Landslides are natural phenomena which shape the Earth's surface, and in populated areas they can endanger lives and cause huge economic damage. Rockfalls are among the most dangerous types of landslides for their rapidity and destructive potential. Research on landslides focuses on methods that allow spatial zonation of the probability of occurrence (susceptibility), while hazard assessment requires knowledge of magnitude and temporal frequency. Additional information about vulnerable elements exposed to hazard allows one to estimate the risk associated with landslides in general, and rockfalls in particular.
For example, Copons and Vilaplana (2008) discussed the exposure of human lives and land to rockfalls in Andorra; Youssef et al. (2015) investigated the relationship between rockfalls and urban expansion in Saudi Arabia; Alvioli et al. (2021) produced rockfall susceptibility maps along the whole Italian railway network; Dorren et al. (2022) discussed mitigation of rockfall risk in the Swiss Alps.
Here, we performed a preliminary zonation of rockfall susceptibility on Ischia Island, with a three-dimensional model and high-resolution topographic data. We further implemented a novel mechanism to link spatial susceptibility with a seismic forcing with a well-defined return time, adding a temporal component to get one step closer to full hazard assessment; we also critically analyzed the possible impact of rockfalls on roads and built-up areas.
Rockfalls are widespread geomorphological processes that represent one of the main hazards in mountain areas (Whalley 1984), with mobilized volumes ranging from less than 1 m³ to 10⁵ m³ (Evans and Hungr 1993; Hungr et al. 1999; Ruiz-Carulla and Corominas 2020) and with very rapid to extremely rapid movement (Broili 1973; Dorren 2003). To describe them, we adopted the model STONE (Guzzetti et al. 2002), suitable for events involving the detachment of individual blocks from a steep slope or a rocky cliff, followed by a path consisting of falling, bouncing and/or rolling on the ground. Thus, we do not discuss rock avalanches and other phenomena here.
The most relevant input of STONE is a map of possible rockfall sources. This is a common input to many similar models, namely: RockGIS (Matas et al. 2017); Rockyfor3D (Dorren et al. 2022, and references therein); 2D CRSP (Jones et al. 2000); Hy-STONE (Agliardi and Crosta 2003). All of these models calculate geometrical rockfall trajectories for given starting points, and none of them contains a physical mechanism to determine if and when blocks actually detach from a specific point. We applied STONE because we have been working with the model for several years, in different settings (Guzzetti et al. 2003, 2004; Santangelo et al. 2019; Sarro et al. 2020; Santangelo et al. 2021; Alvioli et al. 2021).
Several methods exist to gather information about slope stability and to map landslide features, including rockfall sources and the trajectories followed by individual falling blocks on individual slopes. They may involve direct field observation (Heckmann et al. 2016; Rossi et al. 2021), laser scanning (Copons and Vilaplana 2008), photogrammetry based on photographs acquired on the ground (Matas et al. 2022) or by aerial vehicles (Santangelo et al. 2019; Buyer et al. 2020; Giordan et al. 2020), and infrared thermography (Baron et al. 2014; Loche et al. 2022). Such information is useful to infer the likely location of future rockfall sources, and represents a valuable input for the application of three-dimensional models of rockfall trajectories. Simulation of boulder trajectories represents an essential tool in hazard/risk analyses (Lan et al. 2007, 2010; Pellicani et al. 2016).
Here, we present the results of an analysis based on a new application of LiDAR elevation data and on the use of the model STONE (Guzzetti et al. 2002) for a probabilistic assessment of rockfall trajectories. We investigated the various steps of the procedure: preparation of input data for running the computer code, assessment of the results of the simulations and of their impact on built-up areas and roads, and a scenario-based hazard assessment for seismic triggering of rockfall events. The aim is to identify the most rockfall-susceptible areas of the island, in order to plan detailed field studies aimed at full rockfall risk assessment in Ischia.
This work is organized as follows: Section 2 describes the geological setting, the historical seismicity and the historical flank instability of Ischia, and introduces the three-dimensional rockfall simulations. Section 3 describes and lists the data used for this study, specifying which were already available and which were newly developed. Section 4 describes the different steps performed in this work: interpolation of raw LiDAR data; values of input parameters, also in relation to vegetation; rockfall source localization; application of a new probabilistic model for triggering seismically induced rockfall. Section 5 describes the main results for the assessment of trajectories, Section 6 discusses results and draws conclusions about the simulation of rockfalls in Ischia island.
Geographical and geological setting
The Ischia island is one of the three volcanic complexes located in the Phlegrean Volcanic District (Orsi et al. 1999), which last erupted in 1302 (Civetta et al. 1991). It is an active volcanic field at rest that covers an area of about 46 km² and is characterized by remarkable ground uplift caused by the resurgence of an ancient collapsed caldera (Acocella and Funiciello 1999), whose rim faces the southwest and northwest sectors of the island.

Figure 1. (a) Location of the study area; the background represents seismic hazard levels in Italy, expressed as the maximum ground acceleration with 10% exceedance probability in 50 years (Stucchi et al. 2004), for illustrative purposes. (b) Shaded relief obtained from 2 × 2 m elevation data of the island, colorized with shades of green as a function of the difference between a digital surface model (including vegetation) and a digital terrain model (excluding vegetation). Brown polygons are buildings and black lines are roads, both from the OpenStreetMap project.
Being a volcanic island, Ischia displays several processes, e.g. fumarolic activity, earthquakes, slope instabilities and volcanic climax eruptions. Volcanic edifices experience slope instability as a consequence of different solicitations such as (i) eruption mechanisms and depositional processes, (ii) tectonic stress, (iii) extreme weather conditions and (iv) seismic shaking. All these events induce the mobilization of unstable fractured volcanic flanks, initiating rockfalls (Carracedo 1996; Hurlimann et al. 2000; Delcamp et al. 2017; Roberti et al. 2021). The island is also densely populated, and the national inventory of landslide phenomena (Trigila et al. 2010; ISPRA 2018) shows rockfall runout/deposition areas substantially overlapping buildings and infrastructure (Figure 1). Thus, assessing the possible locations of initiation of new falling blocks, and their possible trajectories downhill, is of paramount importance.
The regional extensional tectonics is thought to be related to the opening of back-arc basins caused by the east-retreating subduction of the Apulo-Adriatic lithosphere (Doglioni et al. 1996). The resurgent dome generated an uplifted block, the Mount Epomeo horst, located in the central and western sector of the island; it is marked by a system of subvertical faults striking NW-SE and NE-SW on the edges of the dome and away from it, and N-S and W-E mainly located at the borders of the dome (Acocella and Funiciello 1999). The southeastern part of the island is characterized by highly dipping ENE-WSW normal faults. Volcanism at Ischia started over 150 ka B.P. (Cassignol and Gillot 1982) and continued, with very long periods (centuries to millennia) of quiescence, until the last eruption in 1302 A.D. (de Vita et al. 2006). The oldest exposed rocks belong to a partially eroded volcanic complex, which crops out in the south-eastern part of the island, covered by more recent deposits composed of volcanic effusive and explosive rocks, mostly trachytes and phonolites (de Vita et al. 2006). The large Mount Epomeo Green Tuff caldera formed during the eruption that took place 55 ka ago. In addition to volcanic soils, debris flow deposits cover the southern slope of Mount Epomeo and the northern and western sectors of the island (de Vita et al. 2006). Collapses of the southern sector of Ischia, which probably occurred between 8.6 and 5.7 ka ago as a consequence of the resurgence, generated three major debris flow deposit units (Tibaldi and Vezzoli 2004).
Historical and recent seismicity
Historical seismicity of Ischia is dominated by the earthquakes of 1881 (D'Auria et al. 2018) and 1883 (Carlino et al. 2010), which mainly affected Casamicciola Terme and the other municipalities of the island. Both events, especially the second one, strongly affected the towns on the island, causing the death of 2,313 people (Cubellis et al. 2004), and triggered several ground and environmental effects: large landslides on the north-western slopes of Mt. Epomeo, ground cracks, and variations in the springs' flow rate, chemistry and temperature (Alessio et al. 1996). Between 1883 and 2017, the seismicity of Ischia was characterized by small and shallow events, most of which were detectable only in Casamicciola Terme (D'Auria et al. 2018). In August 2017, a seismic sequence starting with a Mw 4.0 earthquake occurred 1-2 km below the town of Casamicciola Terme (De Novellis et al. 2018). Given that the highest geothermal activity is located in the southwest sector of the island while the main seismogenic portion of Ischia is the Casamicciola Terme area, the recorded seismicity does not seem to be correlated with the geothermal activity (Chiodini et al. 2004). It is therefore reasonable to speculate that seismogenesis at Ischia is related to the structural dynamics of the northern part of the island (Paoletti et al. 2013).
Slope stability
The North, North-West and South-West facing steep flanks of Mount Epomeo are considered the gravitationally least stable slopes of the island (Della Seta et al. 2011). Mass movement deposits are widespread on the island and were generated by rockfalls, slides, toppling, debris flows and debris avalanches (Mele and del Prete 1998;Tibaldi and Vezzoli 2004;del Prete and Mele 2006). Slope movements on the island are correlated with vertical uplift and volcanism (Fusi et al. 1990;Alessio et al. 1996;Mele and del Prete 1998). This relationship is documented by displacement of volcanic and non-volcanic deposits, whose age and original stratigraphic position are in some cases well constrained (de Vita et al. 2006).
Old mass movements have been correlated to volcanic eruptions (Buchner 1986), whilst recent events (del Prete and Mele 2006) have been related to exceptionally heavy rains (Mele and del Prete 1998) or to seismicity (Guadagno and Mele 1992;Molin et al. 2003;Rapolla et al. 2010). Rockfalls mainly occur as an effect of the gravitational instability on the cliffs along the coastline or on the major scarps generated by the rapid resurgence. Their activation was favored by the widespread rock fracturing (favored in turn by hydrothermal weathering), which allowed even huge blocks to be detached (Della Seta et al. 2011).
Rockfall dynamics can be defined as a two-step process: the detachment of a rock block from a slope (first step) and the subsequent downslope movement (second step). In general terms, landslide hazard analysis should determine the likelihood of occurrence of failures in both time and space. Due to their characteristics and rapidity, rockfall hazard assessments should also consider the travel distance of the falling blocks, and their trajectories (Guzzetti et al. 2004).
More specifically, a comprehensive analysis of rockfall hazard should include a wealth of specific data: accurate source area locations on a rock cliff (Loye et al. 2009), the volume of rock mass that can be released (Mavrouli and Corominas 2020; Francioni et al. 2020; Hantz et al. 2021), the dissipation of energy during rolling and rebounds on the slope surface and during impacts on trees (Dorren et al. 2006; Lundstrom et al. 2009; Lu et al. 2021), the penetration depth into the soil during impact (Pichler et al. 2005; Lu et al. 2019) and, possibly, a model for fragmentation of the block after each impact (Ruiz-Carulla and Corominas 2020).
In this research, we did not adopt a physical model to infer the timing of rockfalls and the detachment process. The existing methods to analyze the source area, i.e. to investigate the precise location of the possible detachment points, the travel distance of the falling blocks, the expected volume of the blocks, and the frequency and size of rockfalls, are only applicable to very small areas and are beyond the scope of this work. Expert mapping of possible source areas by photo and satellite interpretation is an adequately robust and effective approach to identify source points over relatively large areas (Santangelo et al. 2019), in combination with morphometric analysis (Michoud et al. 2012; Alvioli et al. 2021).
Here, we are interested in obtaining a preliminary rockfall zonation of the island. We also estimated the likelihood of built-up locations and roads being hit by a block falling from upslope, according to the proposed zonation; risk assessment is not the primary focus of this work, though. To evaluate rockfall trajectories, the following information is needed: (i) the point of origin of the falling rock, (ii) the underlying topography, and (iii) the friction and energy restitution coefficients of the rocks cropping out downhill, used by the model to describe how the mass reaches a rest state.
Eventually, we performed an assessment of seismically-induced rockfalls within an earthquake scenario with a specific return time. We propose a new model to link scenario-based seismic ground shaking to a probability of triggering rockfalls. This additional step requires additional input with respect to the data listed above: the seismic-triggering mechanism is based on spatially distributed peak ground acceleration (PGA), and it is explained in detail in the following.
Materials
We used materials which were either already available to us (a small-scale geo-lithological map; an inventory of rockfalls occurred on the island; a vector map of buildings and roads; and a map of ground shaking for a specific scenario) or newly developed (a digital elevation model, and a map of sources prepared in an expert way). A short description of these data follows.
1. Raw LiDAR data provided by the Italian Ministry of the Environment, interpolated to obtain digital terrain and surface models, used to run the computer code of the model STONE and to obtain spatial information about vegetation height (Figure 1).
2. Vector layer containing the street network and the polygons of buildings, from OpenStreetMap (Figure 1). The map contains about 580 km of roads and more than 16,000 buildings.
3. Inventory of landslide phenomena in Ischia, selected from the inventory of landslide phenomena in Italy known as IFFI (Trigila et al. 2010; ISPRA 2018). The IFFI inventory contains 308 landslide polygons in Ischia island, of which 100 are identified as rockfalls and diffused rockfalls (Figure 2).
4. Map of sample potential source areas for rockfalls on Ischia, from expert interpretation of Google Earth(TM) images (Figure 2).
5. Geo-lithological map of Ischia, scale 1:50,000 (Rapolla et al. 2010). Figure 3 shows the map, with eight lithological classes (listed in Table 1).
6. Map of PGA for the major Casamicciola earthquake of 1883, obtained from Rapolla et al. (2010), and corresponding to a return time of 475 years (Figure 4). We are aware of a recent work (Albano et al. 2018) proposing a model for a PGA map of the 2017 earthquake in Ischia, also in relationship with ground movements. We did not consider that specific event here, because we deal with scenarios associated with specific return times; we discuss this choice further in the Methods section.
7. A land cover map, obtained from ISPRA (2012), useful to compare inferred vegetation areas with actual land cover, though at lower resolution (10 m) than the DEM adopted here.
Methods
The different steps performed in this work can be summarized as: (1) interpolation of raw LiDAR data, to obtain digital elevation models; (2) assignment of values for input parameters, based on geo-lithological data and the presence of vegetation; (3) localization of prospective sources for the initiation of rockfall trajectories and runs of the computer program STONE, for different options of the input source map; (4) application of a new model for selective triggering of prospective sources, and corresponding trajectories, due to seismic shaking. We illustrate these points in the following four paragraphs.
Interpolation of raw LiDAR data
We used data from the Italian "Geoportale Nazionale," managed by the Ministry of Environment and providing different kinds of spatial data. In particular, the archive contains an extensive LiDAR survey covering a substantial portion of Italy, with data stored at an intermediate processing level. Each point contains the x, y coordinates of the point, its elevation, and a binary classification flag. For this research, we selected point clouds covering the Ischia island and interpreted the two values of the flag as referring to reflections of the laser impulses either from the ground (first returns, labeled with flag "1") or from the upper vegetation or other elevated surfaces (second returns, flagged with "2"). The two sets of data contain, respectively, 39,896,156 and 25,320,537 points. We interpolated the two point clouds separately, using the module r.surf.rst within GRASS GIS, specifically designed to perform surface interpolation from vector point maps by splines (Mitasova and Hofierka 1993). Interpolation resulted in two digital elevation models with a square grid of 2 m × 2 m, the nominal resolution of the digital elevation model provided "as-is" from the same data source as the point clouds. In the following, we will refer to the first one as the digital terrain model (DTM) and to the second one as the digital surface model (DSM).

(Table 1 caption: Figure 3 shows the corresponding geo-lithological map. We set STONE to perform random sampling of values of the parameters in a ±10% range around the nominal values listed here.)
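As an illustration of the gridding step (not of the regularized-spline algorithm that r.surf.rst implements internally within GRASS GIS), a simple linear gridding of the two point clouds onto a 2 m raster could be sketched as follows; array names are ours.

```python
import numpy as np
from scipy.interpolate import griddata

def grid_point_cloud(points, res=2.0):
    """Grid a LiDAR point cloud (N x 3 array of x, y, z) onto a regular
    raster with cell size `res` by linear interpolation. A simple
    stand-in for the spline interpolation used in the paper."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    xi = np.arange(x.min(), x.max(), res)
    yi = np.arange(y.min(), y.max(), res)
    gx, gy = np.meshgrid(xi, yi)
    return griddata((x, y), z, (gx, gy), method="linear")

# Ground returns (flag 1) and upper-surface returns (flag 2), gridded
# separately onto the same resolution:
# dtm = grid_point_cloud(points_flag1)
# dsm = grid_point_cloud(points_flag2)
# veg_height = dsm - dtm  # interpreted as vegetation height
```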
Moreover, we interpreted the point-to-point difference between DSM and DTM as due to vegetation, and exploited this information to infer modifications of ground parameters relevant for the simulations with STONE. We partially took into account disturbances due to the presence of anthropic structures and buildings using additional land cover data, which we correlated with point-to-point DSM -DTM differences, as explained in the next section.
Numerical input parameters
In STONE, the behavior of individual blocks rolling and bouncing down a slope on different types of substrata is controlled by three main parameters. Friction controls the rolling stage, while normal and tangential restitution control bounces. We inferred the values of the three parameters by expert comparison of the geo-lithological formations existing on the island with values of the parameters reported in the literature, mainly by Guzzetti et al. (2002) and Alvioli et al. (2021). The selected values are listed in Table 1.
The criteria adopted to match lithotypes with parameter values were as follows (Guzzetti et al. 2004). Massive and/or thickly bedded rocks (e.g. lavas, limestone, thickly bedded siltstone, claystone), as well as pyroclastic deposits, breccias and pumice, are characterized by very high values of the normal and tangential energy restitution coefficients and very small values of the dynamic rolling angle; thinly bedded limestone, clay and reworked materials are characterized by intermediate values of the normal and tangential restitution coefficients and of the dynamic rolling angle. Empirical field data revealed that alluvial deposits show low values of the normal and tangential energy restitution coefficients and a very high value of the dynamic rolling friction angle (Guzzetti et al. 2002, 2004). The program STONE performs a random modification of the individual values of the parameters, within configurable intervals and probability distribution functions. Moreover, we assumed that vegetation interferes with the trajectories of falling blocks, reducing their ability to proceed downhill (Dorren et al. 2006; Masuya et al. 2009; Dorren et al. 2022). Thus, we modified the parameters exploiting information about the presence of vegetation obtained from the available LiDAR data (Section 4.1). The map of vegetated areas used in Figure 1 to colorize the shaded relief of the island was obtained by subtracting the DEM generated using the "first impulse" (the DTM) from that obtained interpolating the "second impulse" (the DSM), and considering only points with a positive altitude difference of more than 5 m.
We assumed that vegetation interferes with the trajectories of falling blocks only by modifying the parameters of the cells identified as containing vegetation; we did not set up physical barriers. Thus, in the grid cells with vegetation, we enhanced the friction coefficients in Table 1 by 50% and reduced both restitution coefficients by 50%. We considered as vegetation only the grid cells with a positive difference between the DSM and DTM larger than 5 m. We chose the 5 m threshold as follows. We used a land cover map (ISPRA 2012; 10 m resolution) and calculated the percentage of cells falling in each land cover class where the DEM difference was larger than a threshold, using three different thresholds, namely 1 m, 5 m and 10 m (Table 2). We observed that 5 m provided a reasonable balance between the accumulated percentages of classes which potentially affect rockfall trajectories ("broad leaved," "needle leaved," "orchards" and "vineyards") and classes which surely correspond to false positives for the DEM difference interpreted as upper vegetation, namely "artificial surfaces." Thus, we selected 5 m as the minimum difference required to modify terrain parameters in STONE.

Table 2 (caption). Comparison between the DEM difference used in this work as a proxy for the presence of higher vegetation, and land cover obtained from a national land cover map, at 10 m resolution, obtained from the ISPRA website. Values are percentages of each land cover type within the pixels in which the DEM difference is larger than 1, 5 or 10 m; pixels with buildings and roads (cf. Figure 1) were excluded from the calculations.
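A minimal sketch of this parameter modification, assuming numpy rasters for the DSM, the DTM and the three STONE parameter grids (all names are ours), is given below.

```python
import numpy as np

def apply_vegetation_correction(dsm, dtm, friction, rest_n, rest_t,
                                threshold=5.0):
    """Raise friction by 50% and lower both restitution coefficients by
    50% wherever the DSM - DTM difference exceeds `threshold` metres,
    i.e. in cells interpreted as covered by tall vegetation."""
    veg = (dsm - dtm) > threshold
    friction = np.where(veg, friction * 1.5, friction)
    rest_n = np.where(veg, rest_n * 0.5, rest_n)
    rest_t = np.where(veg, rest_t * 0.5, rest_t)
    return friction, rest_n, rest_t, veg
```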
Localization of rockfall sources
The most relevant input of the program STONE is a raster grid specifying the location of sources. For small areas, i.e. at the slope or small catchment scale, sources can be mapped in the field and by photointerpretation. Over larger areas, both procedures are impractical and sources must be inferred in some probabilistic way. We follow a similar approach as in Alvioli et al. (2021), where the analysis concerned the whole Italian railway system: sources were localized by morphometric analysis of a few sample locations mapped by experts, and statistical generalization to the whole area of interest identified the locations of possible future sources.
Here, we used the results of the generalization procedure of Alvioli et al. (2021) as a starting point, and applied additional cuts based on slope angle and relief (elevation range) values. The additional cuts are based on the mapping of possible sources of rockfalls on Google Earth(TM) images, shown as blue polygons in Figure 2. The expert mapping analysis consists of a detailed investigation of the location of potential rockfall sources through the photointerpretation of satellite images spanning a period of about 10 years (Guzzetti et al. 2004; Santangelo et al. 2019, 2021). The mapped polygons were used to calibrate the statistical procedure, providing additional information on the local conditions that may induce detachment of blocks. We analyzed the distribution of values of slope and relief within the mapped polygons first, and within the whole study area then (Figure 5). Relief can be calculated in different ways, as can virtually any morphometric quantity derived from a DTM (Trevisani and Cavalli 2016; Voros et al. 2021).
Generally speaking, relief is defined in each grid cell as the difference between the maximum and minimum values of elevation in a neighborhood of that cell; thus, it clearly depends on the size of the neighborhood. We investigated the values of relief within mapped sources, in relation to the values in the whole study area, for different sizes of the neighborhood, using the module r.neighbors in GRASS GIS. It turned out that two specific sizes, namely square neighborhoods of 15 and 25 cells, have distributions that help generalize the morphometric characteristics of mapped sources to the whole area. This is clearly shown by the ratios of histograms corresponding to values of slope, 15 × 15 relief and 25 × 25 relief within the sources and in the whole area, in Figure 5.
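The moving-window relief computation (performed with r.neighbors in GRASS GIS) can be reproduced with a maximum filter minus a minimum filter; a sketch, assuming dtm is a numpy raster at 2 m resolution:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def relief(dtm, size):
    """Elevation range (max - min) in a size x size cell neighborhood,
    equivalent to running r.neighbors twice (method=maximum and
    method=minimum) and subtracting the results."""
    return maximum_filter(dtm, size=size) - minimum_filter(dtm, size=size)

# The two neighborhood sizes used in this work:
# r15 = relief(dtm, 15)
# r25 = relief(dtm, 25)
```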
From the shape of the ratios, we concluded that the probability of any grid cell in the study area being a source of rockfalls is much larger for increasing values of slope, 15 × 15 relief and 25 × 25 relief. Thus we assigned each grid cell a probability P(S), proportional to the source/whole-area ratio of slope shown in the figure. In addition, we heuristically defined thresholds for relief, i.e. we set the grid cells with joint values of 15 × 15 relief below 35 m and 25 × 25 relief below 45 m to null probability. We refer to the relief-constrained probability as P(S, R), where R = (R15, R25). The probability, in turn, controls the number of trajectories initiating from each grid cell of the DTM, providing a probabilistic output for the model. The number of simulated trajectories per source cell varies from a minimum of 10 trajectories, at lower probabilities, to a maximum of 100, for probability equal to unity. The resulting raster source map is called SRC_Prob, where SRC stands for "source" and Prob stands for "probabilistic."
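Putting the pieces together, the construction of SRC_Prob can be sketched as follows. The relief thresholds (35 m and 45 m) and the 10-100 trajectory scaling follow the text; the histogram-ratio lookup and all array names are our assumptions about implementation details.

```python
import numpy as np

def source_probability(slope, r15, r25, ratio_bins, ratio_vals):
    """P(S, R): look up, for each cell, the normalized source/whole-area
    histogram ratio of slope, then zero the probability where relief is
    below the joint thresholds (15x15 relief < 35 m and 25x25 < 45 m)."""
    ratio_vals = np.asarray(ratio_vals, dtype=float)
    idx = np.clip(np.digitize(slope, ratio_bins) - 1, 0, ratio_vals.size - 1)
    p = ratio_vals[idx]
    p[(r15 < 35.0) & (r25 < 45.0)] = 0.0
    return p

def trajectories_per_cell(p):
    """Number of simulated trajectories per source cell: 10 at the
    lowest non-null probability, up to 100 where probability is one."""
    n = np.where(p > 0.0, np.rint(10.0 + 90.0 * p), 0.0)
    return n.astype(int)
```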
A new model for seismically induced rockfalls
To investigate the impact of earthquake-induced rockfalls, we illustrate here a new model linking ground shaking to the natural predisposition of slopes to be affected by rockfalls. The method was developed within a national project called FRA.SI, "Integrated multi-scale methodologies for the zonation of earthquake-induced hazard in Italy," jointly developed by CNR IRPI, IREA and IGAG (https://frasi-project.irpi.cnr.it, in Italian). The basic assumptions of the method are that seismic shaking is one of the possible triggers of rockfalls, and that rockfalls typically affect slopes with specific topographic characteristics, or slopes where rockfalls occurred already (Govi 1977). We briefly summarize the rationale of the method below.
We consider the probabilistic source map described in Section 4.3 as the baseline map, in which the different grid cells with non-null probability can be actually activated (i.e. trajectories initiating from these cells are actually simulated within STONE) as a function of ground shaking due to an earthquake. We selected peak ground acceleration (PGA) as a scalar, distributed measure of the intensity of an earthquake event. We defined the overall probability of a grid cell with given values of slope angle S, relief R = (R15, R25), and PGA of representing a source of rockfalls as follows:

P_seismic(S, R; PGA) = P(S, R) × (PGA − PGA_min) / (PGA_max − PGA_min),   (1)

where the first factor on the right-hand side, P(S, R), is the relief-constrained probability as a function of S, R15 and R25, defined in Section 4.3, and the second factor is a damping factor. The latter is a linear function of PGA and assumes values between 0 and unity; thus, it sets the overall probability P_seismic(S, R; PGA) to zero where PGA is null, and leaves P(S, R) unmodified where PGA = PGA_max. The values PGA_min and PGA_max in Eq. (1) were determined from the minimum and maximum values of expected PGA calculated for return times of 475 y and 975 y over the whole of Italy, with the model of Mori et al. (2020) and Falcone et al. (2021), and are 0.0 and 0.81, respectively (in units of g, the Earth's gravitational acceleration). The rationale for using these minimum and maximum values relies on the possibility of defining P_seismic(S, R; PGA) in a parametric way, as a function of the values of PGA contained in specific maps, corresponding to specific earthquake scenarios with return times within the range 475 y to 975 y. Thus, the method is applicable using scenario-based PGA maps within that range of return times.
For the specific PGA map available for the study area of interest in this work, we obtained a raster source map called SRC_seismic, using Eq. (1). We stress here that we used the PGA scenario of Rapolla et al. (2010) because it was explicitly associated with a specific return time. This is consistent with the meaning of the parameters PGA_min and PGA_max of Eq. (1), which were calibrated against national PGA maps for the same specific scenarios. Use of a different PGA map, corresponding to a specific event as e.g. in Albano et al. (2018), would be inconsistent with this definition, as it would require an absolute calibration of the parameters PGA_min and PGA_max against a number of specific events for which rockfalls were observed and mapped. This is beyond the scope of this work; such a study is underway and will be presented elsewhere (Alvioli et al. 2022b).
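Eq. (1) reduces to a linear rescaling of the static probability by the PGA damping factor; a minimal sketch, with PGA_min = 0.0 g and PGA_max = 0.81 g as quoted above:

```python
import numpy as np

PGA_MIN, PGA_MAX = 0.0, 0.81  # in units of g, from national 475-975 y maps

def p_seismic(p_static, pga):
    """P_seismic(S, R; PGA) = P(S, R) * (PGA - PGA_min)/(PGA_max - PGA_min):
    zero where PGA is null, equal to the static probability P(S, R)
    where PGA reaches PGA_max."""
    damping = np.clip((pga - PGA_MIN) / (PGA_MAX - PGA_MIN), 0.0, 1.0)
    return p_static * damping
```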
Simulation of rockfall trajectories
The software STONE, adopted here, assumes point-like masses falling under the sole action of gravity and the constraints of topography, and calculates trajectories dominated by ballistic dynamics during falling, and by bouncing and/or rolling on the ground, in a probabilistic way. The assessment of trajectories derives from several steps, the first of which is the identification of prospective rockfall source areas. Identification of potential rockfall sources is a key step of numerical, physically based simulation of rockfall runout (Section 4.3).
The software STONE was designed to assess rockfall hazard at the regional and local scales using thematic data (e.g. geological, geomorphological, land use maps etc.), available at large scale or obtained in field surveys (Guzzetti et al. 2002). In the model, a kinematic simulation is performed, computing the trajectory at discrete time steps. Within each time step, the boulder can be in a free-falling state, in a rolling state, or in a bouncing state (Guzzetti et al. 2002). The trajectory of each boulder is computed from the digital topography, and it depends on the starting point and on a set of coefficients used to simulate the loss of velocity at the impact points or during the rolling state, i.e. the dynamic friction where the boulder rolls, and the normal and tangential restitution coefficient values at each impact point.
Attribution of value ranges for the dynamic friction and the normal and tangential restitution coefficients depends on the lithotype, which was inferred from a geo-lithological map (Figure 3). We assigned values by comparison with the information available in the literature for each lithological class (Guzzetti et al. 2002; Alvioli et al. 2021). For each simulation, the program allows a random variation of the coefficients (within 10% of the nominal values listed in Table 1, uniformly distributed) and of the initial direction of motion, resulting in an output with probabilistic meaning.
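The random parameter variation can be illustrated as a uniform draw within ±10% of each nominal value; this is a sketch of the idea, not STONE's internal sampler.

```python
import numpy as np

rng = np.random.default_rng()

def sample_parameter(nominal, spread=0.10):
    """Draw a coefficient value uniformly within +/- spread (here 10%)
    of the nominal value, emulating the per-trajectory randomization
    described in the text."""
    return rng.uniform(nominal * (1 - spread), nominal * (1 + spread))

# e.g. a nominal tangential restitution coefficient of 0.65:
# value = sample_parameter(0.65)
```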
Furthermore, since vegetation can be considered an important element that can influence and reduce the distance a rolling boulder can reach, the friction between the falling block and the terrain was increased by 50% and the normal and tangential restitution coefficients were reduced by 50% in all cells classified as vegetated. Thus, we ran STONE with two identical sets of input data, except for the set of numerical parameters describing the soil response, corresponding to considering or not the effect of vegetation. We decided to use these thresholds on the basis of the diagrams shown in Figure 5. The analysis of Figure 5 highlights that, in cells mapped as sources, the probability of observing values of slope and relief higher than a certain limit can be used to locate sources through a probabilistic analysis. Therefore, we considered the "static" source probability as a function of slope, according to the normalized ratio of Figure 5b; then, we used the values of relief calculated in 15 × 15 and 25 × 25 cell neighborhoods as thresholds to establish which of the identified sources have a higher probability of representing a source of falling boulders (Figure 5d and f). We note here that considering relief only provides a correction to the probabilities obtained using slope alone.

Figure 8. (a) Impact on buildings; red squares represent buildings which could potentially be hit by falling boulders, and yellow squares represent buildings identified as potential hits by STONE but ruled out as "false positives" in an expert way. (b) Roads; the color ramp represents intersection with a very high (purple), high (red), medium (orange) and low (green) number of trajectories.
Results & discussion
As highlighted in Figure 7a and b, the presence of vegetation reduces the reach of boulder trajectories, by construction. As explained in Section 4.2, we considered as vegetation the grid cells with a positive difference between the DSM and DTM larger than 5 m. We decided to use the threshold of 5 m because, as shown in Table 2, it represents a good compromise between larger percentages in the land cover classes which can influence rockfall trajectories ("broad leaved," "needle leaved," "orchards" and "vineyards") and a small percentage in the "artificial surfaces" class.
We stress that LiDAR data provide high-resolution DEMs suitable for physically-based slope stability models, including STONE and comparable models (Matas et al. 2017; Dorren et al. 2022; Jones et al. 2000; Agliardi and Crosta 2003). The strategy was applied before, though with no distinction between DTM and DSM (Santangelo et al. 2019). High-resolution elevation data are useful both to run morphometric analyses to locate potential rockfall sources and to carry out a detailed analysis of boulder trajectories. In the present case, the availability of both DTM and DSM data allowed us to include the effect of vegetation.
As one can see in Figure 6a and b, the areas most susceptible to rockfall events are those located along the south and south-west coast of the island, characterized by steep cliffs, the north, north-west and south-west steep flanks of Mount Epomeo, and the exposed sides of the hydrographic channels located in the southern sector of the island. This is a qualitative confirmation of the findings by Della Seta et al. (2011) and del Prete and Mele (2006); the polygons denoted as diffused rockfalls in the IFFI inventory also overlap with the runout calculated with STONE. Moreover, Ischia is a densely populated island (about 1,380 people per km², vs. an average of about 200 people per km² in Italy) and, as shown in Figure 6, it is characterized by the presence of potential rockfall sources in several places. Thus, built-up areas (Figure 8a) and roads (Figure 8b) could be affected by rapid phenomena such as rockfalls. Figure 8a and b show the result of overlaying the cell-by-cell trajectory count (Figure 6b) with roads and buildings on the island (Figure 1), to assess the possible impact of rockfalls on infrastructure. We stress that, at this stage, the trajectory count only represents susceptibility, and its intersection with vulnerable infrastructure cannot be considered a risk map, because the magnitude and temporal components are not yet considered. Moreover, potential source areas were identified through a statistical approach, derived from a morphometric analysis of terrain within polygons mapped from Google Earth(TM) images. Therefore, a field survey would be useful to ascertain actual potential sources and to validate the inferred source areas. The same goes for field evidence of the actual deposition areas of rockfalls occurred in the past.
Rockfall sources were established on the basis of slope and relief values. Therefore, the trajectory count output of STONE represents susceptibility maps, in which false positives are possible. On the other hand, to go beyond traditional susceptibility maps and give the best possible estimate of rockfall hazard, one can consider a specific trigger, with a specific probability in time (or return time), as we did in this work. We considered a seismic trigger for rockfalls, thus approximating an assessment of seismically-induced rockfall hazard, given that a magnitude component is still missing. The rockfall trajectories simulated on the basis of the PGA map for the major Casamicciola earthquake of 1833, obtained from Rapolla et al. (2010) and corresponding to a return time of 475 years, show that the source map differs from the probabilistic source map described in Figure 6 first of all in the number of potential sources: fewer sources are recognized across the island in the seismic scenario than in the baseline ("static") probabilistic source map (Figure 6). Furthermore, the overall trajectory count is smaller than in the full simulation, as one can see by comparing Figures 6 and 9. Almost all of the earthquake-induced rockfalls are located on the steep N-NW flanks of Mount Epomeo and on the SE and SW cliffs of the coastline.
Conclusions
In this study we performed a statistical analysis to infer the potential sources of rockfall on Ischia island and their possible trajectories, both in a "static" (trigger-independent) scenario, corresponding to a susceptibility assessment, and in a seismic-shaking triggering scenario, going in the direction of seismically-induced rockfall hazard. We performed our analysis using the 3D model STONE, which requires as input the potential rockfall sources, the geo-lithological characteristics of the study area to assign terrain parameters, and a DEM describing the topography. We can draw the following conclusions:

- We improved an existing morphometric method to generalize expert-mapped rockfall sources, previously applied at national scale and at 10 m resolution. In this work, at 2 m grid resolution, we included relief calculated with two different moving-window sizes along with slope angle, to generalize mapped polygons to additional potential rockfall sources with a probabilistic method.
- Raw LiDAR data are useful to distinguish areas covered by tall vegetation, allowing a more realistic simulation of rockfall trajectories and their travel distance. This can in principle be reproduced in all areas where LiDAR points are available from the Geoportale Nazionale, a large portion of Italy.
- A preliminary assessment of rockfall susceptibility in Ischia showed that the areas with the highest susceptibility are located along the N, N-W and S-W steep flanks of Mount Epomeo and the S and S-W coast of the island. Many of the susceptible areas overlap roads and built-up areas.
- We implemented a new method to model a seismic trigger for rockfalls, for physically-based rockfall simulations with models that are otherwise completely independent of a specific trigger and of time. This effective method allows us to approximate seismically-induced rockfall hazard.

Figure 9. Simulation of seismically-induced rockfall trajectories, using a seismic trigger corresponding to an event with a return time of 475 y and a peak ground acceleration with the spatial pattern of Figure 4. The model adopted in this work singles out rockfall sources on the basis of slope, S, and values of PGA, as defined in Eq. (1) (Alvioli et al. 2022a, 2022b).
Many of the methods presented here can be further improved, either with additional modeling and/or with data from field surveys. To include the effects of vegetation, we simply modified the terrain parameters describing the static and dynamic response in STONE; future work may focus on calibrating these parameters with proper field data. A LiDAR campaign was recently performed by a few of us, and results will be reported elsewhere. The survey will also be useful to locate actually unstable areas, to further constrain the source maps proposed here, and to pinpoint deposition areas or individual blocks from past rockfalls, to constrain the parameters of the model.
Eventually, we stress that the method presented here is amenable to application at small, medium and large scales. While this work is an example application at the small scale, the same approach was applied at the national level (Alvioli et al. 2022a) using ground shaking maps corresponding to different return times (Mori et al. 2020; Falcone et al. 2021). Ground shaking maps corresponding to a return time of 475 y were actually used to calibrate the seismic trigger for the "static" rockfall sources adopted here. The same method can be applied at intermediate scale using ground shaking maps corresponding to specific earthquake events. The final aim of the seismic-trigger method is application in near-real time, with ground shaking maps obtained after an earthquake event.

Note 1. http://www.pcn.minambiente.it/mattm/en/ | 9,619.4 | 2022-10-09T00:00:00.000 | [
"Geology",
"Environmental Science"
] |
Observing galaxy clusters and the cosmic web through the Sunyaev Zel’dovich effect with MISTRAL
Galaxy clusters and the surrounding medium can be studied using X-ray bremsstrahlung emission and the Sunyaev Zel'dovich (SZ) effect. Both astrophysical probes sample the same environment, with different parameter dependences. The SZ effect is relatively more sensitive in low density environments and is thus useful to study the filamentary structures of the cosmic web. In addition, observations of the matter distribution require high angular resolution in order to map the matter distribution within and around galaxy clusters. MISTRAL is a camera working at 90 GHz which, once coupled to the Sardinia Radio Telescope, can reach 12″ angular resolution over a 4′ field of view (f.o.v.). The forecasted sensitivity is NEFD ≃ 10–15 mJy·√s and the mapping speed is MS = 380 arcmin²/mJy²/h. MISTRAL was recently installed at the focus of the SRT and will soon take its first photons.
Introduction
The Cosmic Microwave Background (CMB) represents one of the most unique sources of cosmological information. Studying the primary anisotropies and the polarization of the CMB has allowed us to enter the era of so-called precision cosmology. Within this framework, we can derive the cosmological parameters with extreme precision and know the energy content of our universe to a fraction of a percent [1,17].
On the other hand, the nature and the physics of most of the estimated energy content of our universe are still unknown. 68.3% of today's energy content is in the form of dark energy, which is responsible for the acceleration of the universe; 26.8% is in the form of dark matter, which can only interact gravitationally with the remaining baryonic matter. In addition, the observed baryonic matter in the local universe is still small compared to what is predicted by Big Bang Nucleosynthesis and by measurements of the CMB power spectrum (see, e.g., [17]). A diffuse baryonic dark matter (missing baryons) could explain, at least in part, the apparent discrepancy between observations and cosmological estimates [12].
Hydrodynamical simulations of large-scale structures (see, e.g., [5]) show that at low redshifts these missing baryons should lie in the temperature range 10⁵ < T < 10⁷ K, in a state of warm-hot gas not yet observed through its soft X-ray emission. This warm-hot intergalactic medium (WHIM) is arranged in the form of filamentary structures of low-density intergalactic medium connecting (and surrounding) the clusters of galaxies in the so-called cosmic web.
The Sunyaev Zel'dovich effect in galaxy clusters and in filaments

Thermal Sunyaev Zel'dovich effect
It is well known that the CMB has an almost perfect black body spectrum. However, when CMB photons scatter off hot electrons present in the Inter Cluster Medium (ICM) of galaxy clusters, they undergo inverse Compton scattering, resulting in a distortion of the frequency spectrum.
This effect (the Sunyaev Zel'dovich, SZ, effect [18]) is due to the energy injected by the hot electron gas in galaxy clusters and the surrounding medium. This secondary anisotropy produces a brightness change in the CMB that can be detected at millimeter and submillimeter wavelengths, appearing as a negative signal (with respect to the average CMB temperature) at frequencies below 217 GHz and as a positive signal at higher frequencies. The change in SZ intensity depends directly on the electron density of the scattering medium, n_e, and on the electron temperature T_e, both integrated over the line of sight l, and its spectrum can be described by the following differential intensity:

\Delta I(x) = I_0 \, y \, \frac{x^4 e^x}{(e^x - 1)^2} \left[ x \coth\!\left(\frac{x}{2}\right) - 4 \right]

where I_0 = 2 (k_B T_{CMB})^3 / (h c)^2, T_CMB is the CMB temperature, x = h\nu / (k_B T_{CMB}) is the adimensional frequency, and

y = \int \sigma_T \, \frac{k_B T_e}{m_e c^2} \, n_e \, dl

is the Comptonization parameter; σ_T is the Thomson cross section, k_B is the Boltzmann constant, m_e is the electron mass, and c is the speed of light in vacuum. The Comptonization parameter y is the integral along the line of sight l of the electron density n_e weighted by the electron temperature T_e, and is the quantity that quantifies the SZ effect: it can be seen as the integrated pressure over the galaxy cluster.
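For illustration, the spectral shape can be evaluated numerically. The sketch below assumes the standard non-relativistic thermal SZ expression and reproduces the sign change of the distortion near 217 GHz:

```python
import numpy as np

H = 6.626e-34      # Planck constant, J s
K_B = 1.381e-23    # Boltzmann constant, J/K
T_CMB = 2.725      # CMB temperature, K

def delta_i_over_i0(nu_ghz, y):
    """Thermal SZ distortion Delta I / I0, with I0 = 2 (kB T_CMB)^3 / (h c)^2."""
    x = H * nu_ghz * 1e9 / (K_B * T_CMB)
    fx = x**4 * np.exp(x) / np.expm1(x)**2 * (x / np.tanh(x / 2.0) - 4.0)
    return y * fx

# Negative below ~217 GHz, positive above:
for nu in (90.0, 217.0, 353.0):
    print(f"{nu:5.0f} GHz: {delta_i_over_i0(nu, y=1e-4):+.2e}")
```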
It turns out that the same electrons that scatter the CMB photons in galaxy clusters also emit in the X-ray band by bremsstrahlung. The bremsstrahlung emission depends on n_e and on T_e with different dependencies with respect to the SZ effect. In particular, X-ray emission is proportional to n_e² and thus the SZ effect, which is proportional to n_e, is more sensitive to low density regions. For this reason, it has been proposed to use the SZ effect to probe low density environments such as the outskirts of galaxy clusters and the filamentary structures between them.
Matter distribution
The matter distribution in our universe is clearly non-uniform, and hydrodynamical simulations predict that matter is distributed in a so-called cosmic web. Simulations can test how structures form and thus investigate the interplay between baryonic matter, dark matter and dark energy. Focusing on the few-Mpc scale allows us to track the progenitors of groups of galaxies or galaxy clusters. Small-mass objects form first at z > 5, quickly grow in size, and violently merge with each other, creating increasingly larger systems. Hydrodynamical simulations of pre-merging pairs, adapted to the observable Comptonization parameter y, show observable over-densities at angular resolutions ranging from arcminutes to tens of arcseconds [19]. This drives the necessity to observe the SZ effect with high angular resolution, without losing the large scales, and with high sensitivity (10″ resolution with a few-arcmin f.o.v.).
MISTRAL receiver
The MIllimeter Sardinia radio Telescope Receiver based on an Array of Lumped element kids (MISTRAL) is a cryogenic camera working at 90 GHz. It receives radiation from the 64 m Sardinia Radio Telescope. MISTRAL hosts an array of 415 Kinetic Inductance Detectors (KIDs) and will map the sky with 12″ angular resolution over a 4′ f.o.v. MISTRAL has recently started its commissioning phase and in 2024 it will start operations as a facility instrument, part of the renewed Sardinia Radio Telescope (SRT) receiver fleet.
The SRT [3] is a Gregorian-configured, fully steerable radiotelescope with a 64 m primary mirror, which can work from 300 MHz to 116 GHz. It is a multipurpose instrument with a wide variety of applications and started its scientific programs in 2016. In 2018, a National Operational Program (PON) grant was assigned to INAF with the aim of fully exploiting the SRT capability to reach mm wavelengths up to 116 GHz [10]. Among other scientific upgrades, one of the work packages includes a millimetric camera working, in continuum, at 90 ± 15 GHz: the MISTRAL receiver, which was built at Sapienza University [2,9].
MISTRAL cryogenic system
MISTRAL is a cryogenic camera hosting refocusing optics and an array of KIDs. Our KIDs are superconducting detectors made out of a Titanium-Aluminium (Ti-Al) bilayer. The critical temperature T_c of this alloy is 945 mK, so the detectors have to be cooled to temperatures well below T_c. This, in addition to the necessity of cooling the detectors to reduce noise, makes MISTRAL a fairly complicated cryogenic camera. MISTRAL employs a Sumitomo 1.5 W Pulse Tube cryocooler and a twin Helium-10 closed-cycle refrigerator, and was assembled in the UK by QMC Instruments.
One of the biggest challenges of MISTRAL is the necessity to work in the Gregorian room of the SRT. This implies that the receiver moves with the telescope, so the cryostat will be neither steady nor in the vertical position in which cryogenic equipment usually needs to stay. This has two consequences: a) the Pulse Tube head and the refrigerator are inserted into the cryostat such that both work in the nominal vertical position when the telescope points at an elevation of 57.5°; b) the compressor that ensures the operation of the cryocooler has to be placed in a position that does not change its inclination. This is possible only in the compressor room, which is 120 m away from the Gregorian room. The possibility of operating the cryocooler at such a distance, with 120 m flexible helium lines, was previously tested and proved feasible, although with some loss of efficiency [7]. In this configuration, MISTRAL has been tested to work properly over an inclination range of ±25°, corresponding to an elevation range of 32.5–82.5°, with no degradation of the thermal performance.
MISTRAL optics
The optical design of MISTRAL includes two Anti-Reflection Coated (ARC) silicon lenses able to image the Gregorian focus onto the array of detectors. The detectors are coupled to radiation through open space (filled array), so a cryogenic cold stop placed between the two lenses is needed to reduce the background and avoid stray light. The bandwidth of operation, as well as the reduction of the load onto the different stages of the cryostat, is provided by a set of quasi-optical radiation filters produced by QMC Instruments, anchored at the different thermal stages of the cryostat (see Fig. 1).
The two silicon lenses re-image 4′ of the SRT focus onto the 415-KID array. They are anti-reflection coated with Rogers RO3003. Their diameters are 290 mm and 240 mm respectively, while the aperture cold stop diameter is 125 mm. The whole lenses + cold stop system is kept at 4 K. The in-band average simulations report excellent values, with a Strehl ratio from 0.97 on-axis to 0.91 at the edge positions. Analogously, the FWHM is 12.2″ on axis and 12.7″ at 45 mm off axis (which corresponds to 2′ in the sky).
MISTRAL detectors
MISTRAL takes advantage of the high sensitivity of KIDs, as well as the capability to frequency-domain multiplex such resonators [4,14,15]. MISTRAL KIDs are Ti-Al bilayers of thickness 10 + 30 nm with critical temperature T_c = 945 mK, and are fabricated at CNR-IFN on 4″ silicon wafers [6,13] (see Fig. 2). The feedline is made of 21 nm thick Aluminium with a critical temperature T_c = 1.1 K; this was done to reduce its susceptibility to millimetric radiation. The 415-detector array is arranged so that each KID samples the f.o.v. with an angular spacing of 10.6″, smaller than the pixel angular resolution, thus oversampling the observed source. We use ROACH2-based electronics to manage the frequency-domain multiplexing readout and to send the tones that bias each of the resonators.
MISTRAL calibration, installation, and sensitivity forecast
MISTRAL has undergone extensive laboratory calibration, noise measurements, and pixel recognition, which certified the good health of the instrument. The electrical characterization started with the tuning of the KIDs, the choice of the resonant frequencies, and the adjustment of the power to be sent to each KID. Our KIDs are designed to work and be read out between 200 MHz and 800 MHz. The resulting tones are spaced with an average separation of 0.96 MHz (see Fig. 2, right panel).
The optical performance was then measured using an artificial source and a custom-designed optical system which sends to the MISTRAL KIDs millimetric radiation with the same beam divergence (i.e. the same f/#) they receive from the SRT. 84% of the MISTRAL detectors are alive and usable. The average optical efficiency of the receiver was measured to be 35%. The figure of merit for the sensitivity of the KIDs is their Noise Equivalent Power (NEP), which represents the incoming power that produces a signal equal to the noise power spectrum of the KIDs. In Fig. 3
Conclusions
The full comprehension of the matter distribution in the universe is crucial both for cosmology and for astrophysics. The Sunyaev Zel'dovich effect is a powerful tool to study low density environments and search for bridges and filaments in the cosmic web. High angular resolution is crucial to understand and map galaxy clusters and the surrounding medium. We developed MISTRAL which, coupled with the SRT, is an ideal instrument to map the sky at 90 GHz with 12″ angular resolution. MISTRAL is a cryogenic camera with an array of 415 KIDs.
Figure 1. A cut of the MISTRAL cryostat highlighting the optics of the receiver: an Ultra High Molecular Weight (UHMW) polyethylene window [8] is followed by two infrared (IR) filters. The band selection is actuated by a sequence of Low Pass Edge (LPE) filters and a final Band Pass (BP) filter.
Figure 2. Left: the MISTRAL array of KIDs in its holder [14]. Right: an image of the response of the KIDs to the tones generated by the ROACH2 electronics and sent to MISTRAL.
(right panel) we show a histogram of the resulting measurements, with a median value of 8.07 × 10⁻¹⁶ W/√Hz. The MISTRAL receiver was transported and installed at the focus of the SRT between May and June 2023 (see Fig. 3, left panel). The aforementioned NEPs would nominally translate into a Noise Equivalent Flux Density (NEFD) of 2.8 mJy·√s [16]. However, this estimate does not take into account the telescope efficiency and the noise added by atmospheric fluctuations (i.e. static loading is taken into account while fluctuations are not). We have therefore undertaken a realistic simulation which assumes a telescope efficiency of 30% and takes into account the real atmospheric noise at the SRT observatory at 22 GHz, extrapolated to 90 GHz using the am code. This results in an approximate NEFD of 10–15 mJy·√s. Assuming the definition reported by Perotto et al. 2020 [16], we extracted a mapping speed (MS) of MS = 380 arcmin² mJy⁻² h⁻¹ [11].
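As a rough consistency check (assuming a Perotto-style definition MS = Ω_fov/NEFD², which may differ in detail from the one actually used), a 4′ field of view and an NEFD in the 10-15 mJy·√s range indeed give a mapping speed of a few hundred arcmin² mJy⁻² h⁻¹, compatible with the quoted 380:

```python
import math

fov_area = math.pi * (4.0 / 2.0) ** 2      # ~12.6 arcmin^2 for a 4' f.o.v.
for nefd in (10.0, 12.0, 15.0):            # mJy * sqrt(s)
    ms = fov_area / nefd**2 * 3600.0       # arcmin^2 / mJy^2 / h
    print(f"NEFD = {nefd:4.1f} mJy*sqrt(s)  ->  MS ~ {ms:5.0f} arcmin^2/mJy^2/h")
```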
Figure 3. Left: the MISTRAL receiver installed in the Gregorian room of the SRT. Right: a histogram of the optical NEP of the MISTRAL KIDs. The dashed line represents the limit obtained by comparing the optical NEP with the photon noise at the SRT. | 3,185 | 2023-10-27T00:00:00.000 | [
"Physics"
] |
Fuzzy Pattern Classification Based Detection of Faulty Electronic Fuel Control (EFC) Valves Used in Diesel Engines
In this paper, we develop mathematical models of a rotary Electronic Fuel Control (EFC) valve used in Diesel engines, based on dynamic performance test data and a system identification methodology, in order to detect faulty EFC valves. The model takes into account the dynamics of the electrical and mechanical portions of the EFC valves. A recursive least squares (RLS) type system identification methodology was utilized to determine the transfer functions of the different types of EFC valves investigated in this study; both frequency domain and time domain methods were used for this purpose. Based on the characteristic patterns exhibited by the EFC valves, a fuzzy logic based pattern classification method was utilized to evaluate the residuals and distinguish faulty EFC valves from good ones. The developed methodology has been shown to provide robust diagnostics for a wide range of EFC valves.
Introduction
An Electronic Fuel Control (EFC) valve regulates the fuel flow to the injector fuel supply line in a Pressure-Time (PT) fuel system in many heavy duty Diesel engines. The EFC system controls the fuel flow by means of a variable orifice that is electrically actuated. The valve inspection test results provide a characteristic curve that captures the relationship between pressure and current input to the EFC valve. These frequency response curves document the steady state characteristics of the valve, but they do not adequately capture the valve's dynamic response. To overcome this deficiency, a dynamic test procedure was developed in order to evaluate the performance of the EFC valves. The test itself helps to understand the effects of design modifications on the stability of the overall engine system. Additionally, such a test is expected to provide the ability to evaluate returned/failed EFC valves that have experienced stability issues or severe performance degradation. This test is also aimed at determining whether an EFC valve has failed before it is integrated into a diesel engine. The characteristics of a good valve and a bad valve can be observed through the dynamic performance tests, which can be used to identify failed valves via a fault detection methodology.
Isermann [1] provides an overview of fault detection applications that use process and/or signal models. A number of examples are discussed in that paper, including the fault detection of a diesel engine using a fuzzy inference engine. Venkatasubramanian et al. [2] discussed fault diagnosis methods that are based on historic process knowledge. They observed that integrating various complementary features in model based detection is one way to develop hybrid systems that could overcome the limitations of individual solution strategies. He and Wang [3] presented a fast pattern recognition based fault detection method, termed principal component-based kNN (PC-kNN), which takes advantage of both principal component analysis (PCA) for dimensionality reduction and FD-kNN for nonlinearity and multimode handling. Two simulation examples and an industrial example are used to demonstrate the performance of the proposed PC-kNN method in fault detection. Lou and Loparo [4] presented a scheme for the diagnosis of localized defects in ball bearings based on the wavelet transform and neuro-fuzzy classification. Vibration signals for normal bearings, bearings with inner race faults and ball faults were acquired from a motor-driven experimental system. The wavelet transform was used to process the accelerometer signals and to generate feature vectors, and an adaptive neuro-fuzzy inference system (ANFIS) was trained and used as a diagnostic classifier. He et al. [5] reviewed the application of fuzzy pattern recognition in intelligent fault diagnosis systems and provided some results with an illustrative example, while Bhushan and Romagnoli [6] discussed a method for unsupervised pattern classification, called a self-organizing self-clustering network, in the context of chemical process plants. Podvin [7] provided a fuzzy-logic-based fault recognition method using phase angles between current symmetrical components in automatic DFR record analysis, while Detroja et al. [8] presented a possibilistic clustering approach to novel fault detection and isolation.
In this work, both frequency domain and time domain system identification methods were explored in order to determine the characteristics of the EFC valves. Bode diagrams and step responses were utilized to identify the EFC valve; combining the two methods offered an estimate of the order of the system while maintaining the integrity of the results when compared to one another. The two methods proved to be efficient in processing speed as well as robust, with outcomes that do not vary significantly. This led to the development of a pattern classification approach contributing to the robust fault diagnosis of EFC valves based on the dynamic performance test data.
A Recursive Least Squares (RLS) algorithm was used in the discrete time domain to estimate the transfer functions of the EFC valves. The transfer functions thus obtained show distinctive features depending on the nature of the EFC valve, i.e., whether it is a failed part, a good part, or a prototype part. This information is later used in the development of the pattern classification algorithm for fault diagnosis purposes.
As indicated above, this work involves fuzzy pattern classification based fault detection of electronic fuel control valves using data obtained from the proposed dynamic performance tests. The proposed methodology is based on a step response test of the EFC valves. Crisp logic based residual evaluation is prone to less effective diagnosis, since the residual error threshold for faulty EFC valves varies within a certain range. Instead, a fuzzy logic based residual evaluation methodology was adopted, which handled the variable error thresholds better in this application.
Experimental Set Up
The EFC Test Stand is used in a production environment to verify the proper operation of EFC valves [9]. It is capable of accommodating a variety of EFC valves with various voltage and normal valve position conditions. Mimicking the placement of the valve onto a pump in an engine, the EFC valve is placed in a housing on the EFC Test Stand that lines up the inlets and outlets so that a continuous stream of fluid can be transferred, based on the proportional variation of the orifice size. The EFC valve spool displacement is regulated via the duty cycle of a pulse width modulated (PWM) DC voltage applied to the valve solenoid. The hydraulic fluid that runs through the EFC valve is regulated by a Test Fluid System. The purpose of the Test Fluid System is to maintain the pressure, temperature, and cleanliness of the fluid being tested. Figure 1 shows a frontal view of the typical setup of the Test Stand. A representative EFC valve actuation current profile with respect to desired common rail pressure is shown in Figure 2. This map is utilized to generate the valve input current for both the frequency response and step response tests [10].
Frequency Response
Frequency sweep tests were performed on the test stand for different EFC valve types, and experimental data were recorded. Figures 3 and 4 show the Bode diagrams resulting from a medium amplitude sweep using the dynamic performance test bench [11] for the various categories of EFC valves: returned valve, prototype valve, and good valve. Figure 3 shows the experimental results for the normalized gain of the EFC valves, and Figure 4 shows the phase plot. In both plots, it can be seen that the three types of EFC valves exhibit clearly distinct signatures in the characteristic curves.
Figures 5 and 6 show the Bode plots (normalized magnitude and phase) for the high amplitude frequency sweep of the different categories of EFC valves. These plots are very similar to those for the medium amplitude sweep, with minor variations.
In the normalized gain plots, the valve categorized as "returned" starts decaying the earliest, followed by the valve categorized as "prototype", and lastly the valve categorized as "good". As expected, the same pattern repeats itself in the phases associated with the normalized gains.
Frequency domain identification techniques offer the following advantages: ease of reducing the noise, reduction of the amount of data when compared to time domain data, ease of removing the DC offset errors found in the input and output signals, no need to initially estimate the states of the system, and ease of removing the output drift [12,13]. The Bode diagrams of the EFC valves constructed from the frequency response give a good indication of the characteristics of the transfer function associated with these EFC valves. An educated estimate of the transfer functions [14] can be made by analyzing the characteristics of these curves, such as the slopes of the asymptotes in the normalized gain plots, the corner frequencies, and the phase conditions. The poles and zeros of a transfer function can be estimated through minimization of the estimation error, and the order of the system dictates how many parameters are to be estimated. From observation of the Bode diagrams, the EFC valve system order is estimated to be in the range between five and nine. This magnitude of system order can be attributed to the fluid dynamics within the system, the electro-mechanical system dynamics, as well as nonlinearities in the system. The structure of the model for the EFC valve is thus constructed. The algorithm developed by Santos and Carvalho [14], in which the estimation error is minimized, has been used to estimate the transfer function.
E = \sum_{k} \left| G(j\omega_k) - \frac{B(j\omega_k)}{A(j\omega_k)} \right|^2 \qquad (1)

Here, G(jω_k) denotes the frequency response data, and B/A denotes the estimated transfer function.
Model Structure
Let us assume a transfer function of the following form [14]:

G(s) = K \, \frac{\prod_{i=1}^{n_z} (1 + s/z_i)}{\prod_{i=1}^{n_p} (1 + s/p_i)} \qquad (2)

For such a transfer function, with x = \log_{10}\omega and x_i = \log_{10} p_i (or \log_{10} z_i), the contribution of each first-order factor to the Bode magnitude plot is proportional to:

M(x) = \pm 10 \log_{10}\!\left(1 + 10^{2(x - x_i)}\right) \qquad (3)

and the corresponding asymptote is given by:

M_a(x) = \begin{cases} 0, & x < x_i \\ \pm 20\,(x - x_i), & x \ge x_i \end{cases} \qquad (4)

By computing the difference between Equations (3) and (4), the magnitude of the error in the normalized gain plots can be estimated:

e(x) = M(x) - M_a(x) \qquad (5)

The error magnitude, which depends on the distance (x − x_i), is largest when x = x_i (about 3 dB for a first-order factor) and approaches zero as |x − x_i| grows. This is taken into consideration in the transfer function estimation process.
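A short numerical check of the single-pole error term (hypothetical corner frequency; the maximum deviation between the exact magnitude and its asymptote is the familiar ~3 dB at the corner):

```python
import numpy as np

x_i = 2.0                             # pole corner at 10^2 rad/s
x = np.linspace(0.0, 4.0, 401)        # x = log10(omega)

exact = -10.0 * np.log10(1.0 + 10.0 ** (2.0 * (x - x_i)))
asymptote = np.where(x < x_i, 0.0, -20.0 * (x - x_i))

err = np.abs(exact - asymptote)
print(f"max error = {err.max():.2f} dB at x = {x[err.argmax()]:.2f}")  # ~3.01 dB at x_i
```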
Asymptotic Approximation to the Bode Diagram
With the assumption of using a continuous set of measurements in the range [x_min, x_max], the estimates can be refined through minimization of the following objective function: (6) where n = n_p + n_z. For transfer functions with poles and zeros sufficiently far apart, the minimum of J will lie in a region where V is convex. Therefore, minimization of J leads to the minimum of V as well.
Step Response
Time domain identification methods can provide a simple yet robust approach for identifying complex systems. Such system identification techniques can also utilize boundary condition data that is already known.
The pressure response curves of the EFC valves constructed from the step response already proved effective in capturing the characteristic signatures, as indicated earlier. The transfer function of the EFC valves can be estimated by identifying the model parameters from a given set of data with the help of a system identification tool already proven effective in this field. This can be done in an offline manner; however, periodic online identification would also be effective as new data points become available. The raw data acquired through the step response tests is analyzed and then utilized to estimate the transfer function using the Recursive Least Squares (RLS) algorithm [15][16][17]. A brief description of the RLS algorithm is given below.
Recursive Least Squares (RLS) Algorithm
For the purpose of identifying the model parameters of the EFC valve, the RLS algorithm is based on the following model [18]:

A(q^{-1})\, y(t) = B(q^{-1})\, u(t) + C(q^{-1})\, e(t) \qquad (7)

We assume the coefficient of the correlated noise to be zero; thus Equation (7) becomes:

y(t) = \varphi^T(t)\, \theta + e(t) \qquad (8)

Here e(t) represents an error that is assumed to be statistically independent of the inputs and outputs. φ(t) and θ are the regression vector and parameter vector respectively, and are defined as

\varphi(t) = [\,-y(t-1), \ldots, -y(t-n_a),\; u(t-1), \ldots, u(t-n_b)\,]^T \qquad (9)

\theta = [\,a_1, \ldots, a_{n_a},\; b_1, \ldots, b_{n_b}\,]^T \qquad (10)

where

A(q^{-1}) = 1 + a_1 q^{-1} + \cdots + a_{n_a} q^{-n_a}, \quad B(q^{-1}) = b_1 q^{-1} + \cdots + b_{n_b} q^{-n_b} \qquad (11)

The parameters making up the transfer function are estimated by finding the estimate θ̂ of the unknown parameter vector θ that minimizes the error function:

V(\hat{\theta}, t) = \sum_{k=1}^{t} \lambda^{\,t-k} \left[ y(k) - \varphi^T(k)\, \hat{\theta} \right]^2 \qquad (12)

Here λ is a weighting factor in the range 0 < λ ≤ 1 that weighs new data more heavily than old data.
The Recursive Least Squares algorithm used to estimate the transfer functions of the EFC valves is expressed as follows:

\hat{\theta}(t) = \hat{\theta}(t-1) + K(t) \left[ y(t) - \varphi^T(t)\, \hat{\theta}(t-1) \right] \qquad (13)

K(t) = \frac{P(t-1)\, \varphi(t)}{\lambda + \varphi^T(t)\, P(t-1)\, \varphi(t)} \qquad (14)

P(t) = \frac{1}{\lambda} \left[ P(t-1) - K(t)\, \varphi^T(t)\, P(t-1) \right] \qquad (15)

P is the covariance matrix of the estimation error of the parameter estimates, the prediction ŷ(t) = φᵀ(t)θ̂(t−1) follows from Equation (8) for e = 0, and K(t) is the Kalman filter gain, which multiplies the prediction error in order to form the correction term for the model parameter vector. Equation (13) requires an initial estimate of the parameter vector θ̂(0), and Equations (14) and (15) require an initial estimate of P(0).
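A compact sketch of the recursion in Equations (13)-(15), applied to a synthetic second-order system in place of the valve test data (model orders, forgetting factor and initialisation are illustrative choices, not the authors' settings):

```python
import numpy as np

def rls_identify(u, y, na=2, nb=2, lam=0.98):
    """Recursive least squares for an ARX model
    y(t) = -a1*y(t-1) - ... - a_na*y(t-na) + b1*u(t-1) + ... + b_nb*u(t-nb) + e(t)."""
    n = na + nb
    theta = np.zeros(n)                 # parameter estimates, Eq. (13)
    P = 1e4 * np.eye(n)                 # covariance of the estimation error
    for t in range(max(na, nb), len(y)):
        phi = np.concatenate([-y[t - na:t][::-1], u[t - nb:t][::-1]])
        K = P @ phi / (lam + phi @ P @ phi)          # gain, Eq. (14)
        theta = theta + K * (y[t] - phi @ theta)     # update, Eq. (13)
        P = (P - np.outer(K, phi @ P)) / lam         # covariance, Eq. (15)
    return theta

# Usage on a synthetic 2nd-order system standing in for step-response data:
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(2, 2000):
    y[t] = 1.5 * y[t-1] - 0.7 * y[t-2] + 0.5 * u[t-1] + 0.2 * u[t-2]
print(rls_identify(u, y))  # converges to ~[-1.5, 0.7, 0.5, 0.2]
```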
The step response test was conducted for different levels of mean maximum fluid pressure.The pressure levels reached are as follows: 3.1 psig which was achieved with a current input of 1.2 Amps, 26 psig with a current input of 1.4 Amps, 120 psig with a current input of 1.6 Amps, 160 psig with a current input of 1.8 Amps, and 210 psig with a current input of 2.0 Amps.
The notations X_R and X_S represent the real and simulated output pressure data, respectively. The real data is what we obtained through data acquisition of the step response, and the simulated data was obtained through the Recursive Least Squares (RLS) procedure [18]. X_S is included in the response plots in order to visualize characteristic differences between the EFC valves.
Figures 7 and 8 show the step response diagrams resulting from a current input signal of 1.6 A, for the returned valve and good valve categories. Figure 7 shows the experimental result for a good EFC valve, and Figure 8 shows the experimental result for a returned/failed EFC valve. From both plots, it can be seen that the two types of EFC valves exhibit different signatures in the characteristic curves. In the returned valve plot, the rise time of the response is slower than that of the good valve. Another observation is that when the input current is increased to 1.8 A, a distinctive signature can be seen in the settling portion of the response: the settling portion for the returned valve lands far away from the simulation, while that for the good valve lands flat on it or within close proximity. These variations are seen in Figures 9 and 10.
Transfer Function Estimations
Methods for both frequency domain [12] and time domain were used to estimate the transfer functions of the valves.
Frequency Domain Method
The transfer functions of the EFC valves have been estimated by taking into consideration the contributing factors mentioned earlier. The estimated transfer function is of ninth order. Transfer function (16) represents the dynamics of a good EFC valve [11], while the transfer function of a returned/failed EFC valve was estimated as (17) [11]. These results demonstrate that there are in fact significant differences between a returned/failed EFC valve and a good EFC valve.
Once the transfer function estimation is satisfactory, the Bode plots of the transfer functions are simulated and then superimposed onto the original Bode plots generated earlier, for verification purposes. Improved results were obtained after fine-tuning the transfer function parameters via a trial and error approach. Figures 11 and 12 present the Bode plots with the simulated results for both the good and returned categories of EFC valves.
Figure 12. Bode plot for returned valve simulations.
Time Domain Method
The discrete transfer function for a good EFC valve obtained from time domain system identification is of 7th order, and is given in (18) [11]. Similarly, the estimated discrete transfer function of a returned/failed EFC valve is given in (19). While the system identification (in both the frequency domain and the time domain) of the EFC valves correlated with the test data and exhibited significantly different transfer function coefficients, this did not offer a robust approach, because the coefficients did not maintain a clear pattern within each category of EFC valve. The fluid (fuel) leakage in the EFC valve, which can vary randomly from one EFC valve to another, may have contributed to the discrepancy between the different types of EFC valves. Additionally, the "stickiness" phenomenon may have caused the valve opening and closing to behave in an unstable manner across the different types of EFC valves studied in this work. Due to such variations, failure detection via crisp logic type residual evaluation is considered less effective and less accurate, as the error threshold varies within a certain range. A fuzzy pattern classification of the residuals between the measured data and the identified model outputs is considered a better solution, since it handles the variable error thresholds more effectively through fuzzy sets.
Fuzzy Pattern Classification
From the results of the system identification, each valve type demonstrated a distinctive characteristic. These characteristics eventually evolved into certain patterns depending on the type of valve tested. This section discusses how the implementation of fuzzy logic helps classify the different types of valves based on their patterns. The fuzzy pattern classification algorithm starts by determining the membership values to be processed in the decision system and converting these crisp data into fuzzy data. Next, the membership rules must be defined to fittingly represent the characteristics of the membership values. Once these values are processed, they are defuzzified and a decision is made accordingly. A representation of the fuzzy pattern classification based fault detection is shown in block diagram form in Figure 13.
Initialization of the Fuzzy Decision System
The fuzzy system is made up of a list of fuzzy sets as well as the rule set they are associated with [19]. The system has two inputs and one output. Each input the system takes is considered a fuzzy variable, and each input has its own membership functions, primarily constructed from trapezoidal and triangular functions.
The inputs to the system come from the data acquired from the step responses performed earlier on the EFC valves. One input is the current amplitude: 1.4 A is defined as Low, and 1.6 A is defined as High. The other input was constructed by evaluating a modified version of the root mean square error between the real response of the EFC valve and the simulated response. Figures 14 and 15 provide graphical representations of the fuzzy membership function definitions for the EFC valve inputs/outputs. There are two responses, one generated from the simulated data (X_S) and the other from the real data (X_R). Using these two variables, a residual value representing the modified root mean square error (Equation (20)) within a certain period is defined. This residual value differs from one type of EFC valve to another (e.g., good valve vs. returned valve).
e = \sqrt{\frac{1}{N} \sum_{k \in t} \left[ X_R(k) - X_S(k) \right]^2} \qquad (20)

where t is the time period and N represents the number of data points. The output is the conclusion of the fuzzy system, where a decision is made in classifying the EFC valve's type.
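A minimal sketch of this residual computation (hypothetical pressure traces and window; since the exact "modification" of the RMSE is not specified, a plain windowed RMSE over the settling portion is shown):

```python
import numpy as np

def residual(x_real, x_sim, window):
    """Windowed RMSE between measured and RLS-simulated pressure."""
    d = x_real[window] - x_sim[window]
    return np.sqrt(np.mean(d ** 2))

# Hypothetical step-response traces sampled at 1 kHz; evaluate the last
# 0.5 s, where the valve signatures differ most.
rng = np.random.default_rng(2)
x_r = rng.normal(120.0, 2.0, 3000)     # measured pressure, psig
x_s = np.full(3000, 120.0)             # simulated pressure, psig
print(f"residual = {residual(x_r, x_s, slice(2500, 3000)):.3f} psig")
```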
The fuzzy rules are a set of fuzzy if-then rules that define the inference engine mapping the input data set to the output data set, based on knowledge of the characteristics of the EFC valves.
Once all initializations have been performed, the intended tests can be run. A model of the above fuzzy pattern classification algorithm was built using the MATLAB Fuzzy Logic Toolbox [20]. This model was then simulated in parallel with m-file scripts and SIMULINK models; the results are provided in the next section. In a few instances, the data acquired from the good EFC valve had overlaps in the parameters making up the membership functions. Although these valves were fundamentally still classified accurately as good EFC valves, there were unavoidable inconsistencies in the data due to noise and unforeseen responses. The degrees of membership assigned the categorized EFC valves to their respective classes. Table II provides the results after the training data was processed in the fuzzy system. These classes are a result of the defuzzification procedure, in which the EFC valves were classified according to the ranges that the error values fell under. From the classification results, it is evident that a pattern exists between good EFC valves and bad (returned) EFC valves. Furthermore, this pattern allows us to distinguish among the EFC valves depending on their functional condition. The fuzzy system is able to satisfy the pattern classification for both low amplitude and high amplitude inputs. The fuzzy system classified the types of the EFC valves correctly for 80 different sets of data, making only 4 "soft" errors in the classification of functional conditions, providing close to 95% accuracy in fault diagnosis.
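The flavour of the rule evaluation can be sketched with a toy min-max inference; all membership breakpoints below are hypothetical and do not reproduce the authors' MATLAB Fuzzy Logic Toolbox model:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def classify(current_amps, residual_psig):
    """Toy two-input fuzzy classifier with illustrative breakpoints."""
    lo_i = tri(current_amps, 1.2, 1.4, 1.6)     # "Low" current ~1.4 A
    hi_i = tri(current_amps, 1.4, 1.6, 1.8)     # "High" current ~1.6 A
    small_e = tri(residual_psig, 0.0, 1.0, 3.0)
    large_e = tri(residual_psig, 2.0, 6.0, 12.0)
    # Rules: small residual -> good valve; large residual -> returned valve.
    good = max(min(lo_i, small_e), min(hi_i, small_e))
    bad = max(min(lo_i, large_e), min(hi_i, large_e))
    return "good" if good >= bad else "returned"

print(classify(1.6, 0.8))   # -> good
print(classify(1.6, 7.5))   # -> returned
```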
Conclusions
Insight into the mathematical model of the EFC valves, relating the input (current) and the output (pressure) of the system, was used to estimate the order of the linearized EFC dynamic system. The time domain approach proved more efficient and effective with the use of the step response. The signature characteristics of the response curves became evident across the different types of EFC valves, whether good or faulty. The decision to use inputs of different amplitude levels proved fruitful, especially for low current (1.4 A) and high current (1.6 A). A fuzzy logic based methodology was implemented for the pattern classification of residuals. This method provided robustness in fault diagnosis over residual evaluation via crisp logic, due to the variability in the error thresholds. Each type of EFC valve exhibited a certain residual pattern in the form of a modified root mean square error. This, along with the current input, was used in the fuzzy system to classify the type of EFC valve being tested. The method proved very effective, as all the EFC valves that were already pre-classified were verified accurately for their respective types.
Figure 1. Experimental setup for EFC valve test.
Figure 2. EFC valve actuation current profile with respect to desired rail pressure.
Figure 3. Normalized gains of the EFC valves with medium amplitude.
Figure 4. Phase plot of the EFC valves with medium amplitude.
Figure 5. Normalized gains of the EFC valves with high amplitude.
Figure 6. Phase plot of the EFC valves with high amplitude.
Figure 11. Bode plot for good valve simulations.
Figure 13. Block diagram representation of fuzzy pattern classification based fault diagnosis.
Figure 14. Two membership functions used in the fuzzy system.
Figure 15. Four membership functions used in the fuzzy system.
| 5,282.2 | 2014-05-07T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
An anti-aging polymer electrolyte for flexible rechargeable zinc-ion batteries
Polymer electrolytes have been extensively applied in zinc-ion batteries, especially those based on hydrogels; however, the densification of hydrogel electrolytes during cycling affects durability, resulting in capacity attenuation. It is revealed in this work that the surface electrical resistance of hydrogels is particularly affected by the aging effect. Hence, an adhesive bonding solid polymer electrolyte (ABSPE) for zinc-ion batteries was developed, exhibiting significantly enhanced anti-aging properties: its surface resistance remains constant for over 200 hours, twice the time achieved by conventional hydrogel electrolytes, whose surface resistance remains constant for less than 100 hours. The ionic conductivity increases with plasticizer loading, reaching 3.77 × 10⁻⁴ S cm⁻¹. The kinetic mechanism probed in this work revealed a diffusion-controlled mechanism for Zn/ABSPE/β-MnO₂, instead of the capacitive-dominated process observed with the hydrogel electrolyte. In addition, a flexible device was fabricated using a carbon fibre-reinforced polymer composite; this device showed superior power supply performance even under twisting, cutting and bending conditions.
Introduction
Flexible batteries, integrating a constant electrical power supply and physical flexibility, break the constraints of the current rigid battery design and have applications in wearable electronics, roll-up displays and implantable electronics. 1 The liquid electrolytes used in conventional batteries have the challenges of leakage, flammability and mechanical stability that inhibit the development of flexible batteries. 2 Considerable efforts have been made to commercialise thin-film lithium batteries by replacing the liquid electrolyte with ceramic materials such as lithium phosphorus oxynitride (LiPON) 3 and organic polymer electrolytes such as polyethylene oxide (PEO). 4,5 However, their poor ionic conductivity (10⁻⁴–10⁻⁷ S cm⁻¹) 6,7 limits the volumetric energy density (100 W h L⁻¹), 8 and stringent fabrication conditions restrict commercialisation.
To date, an alternative strategy, flexible zinc-ion batteries (ZIBs), has attracted considerable interest owing to the high specific capacity of zinc (5855 mA h cm⁻³), 9,10 inherently superior safety characteristics and the availability of robust manufacturing methods. Attributed to the pioneering work in rechargeable aqueous zinc-ion batteries (AZIBs), [11][12][13][14][15][16][17] hydrogel electrolytes have been developed that combine the benefits of aqueous electrolytes with the flexibility offered by polymers. Zhi [18][19][20] and co-workers made a significant contribution in evaluating the ionic conductivities of hydrogel polymer electrolytes (HPEs) and integrating additional functionalities such as anti-freezing 21 and self-healing 22 abilities. As summarised in Table S1,† the average ionic conductivity of solid polymer electrolytes is around 10⁻⁵ S cm⁻¹, and the hydrogel polymer electrolyte based on a polyacrylamide (PAM) framework even achieves 10⁻² S cm⁻¹ with a specific capacity similar to AZIBs (306 mA h g⁻¹). Moreover, the water molecules present in the hydrogel electrolyte enhance the contact of the solid-solid interface between the electrodes and the electrolyte, enabling continuous chemical reactions through the interface. While these results are very promising, there remain challenges associated with the aging of hydrogel electrolytes that leads to performance degradation. As such, 'anti-aging' properties need to be integrated into this component to achieve a viable commercial offering. In addition, Zhi et al. have noticed the side effect of the induced hydrogen evolution reaction (HER) in the aqueous system and hence developed hydrogen-free and dendrite-free all-solid-state zinc-ion batteries by complexing the ionic liquid 1-ethyl-3-methyl-imidazolium tetrafluoroborate ([EMIM]BF₄) with 2 M zinc tetrafluoroborate (Zn(BF₄)₂) into the polymer matrix poly(vinylidene fluoride-hexafluoropropylene) (PVDF-HFP). 23 Assembled with a cobalt hexacyanoferrate cathode, the as-fabricated solid state zinc-ion batteries delivered a stable cycling capability with nearly 100% coulombic efficiency.
For hydrogel electrolytes, a higher ionic conductivity requires a greater amount of aqueous electrolyte absorbed in the porous polymer framework. The more water molecules absorbed, the greater the free volume provided for the segmental motion of Zn²⁺ in the quasi-solid-state electrolyte. However, the evaporation of water from the hydrogel densifies the electrolyte during cycling, resulting in increasing internal resistance and capacity loss.
In this work, an adhesive bonding solid polymer electrolyte (ABSPE) was developed for zinc-ion batteries by combining a polymer framework (poly(ethylene glycol)diglycidylether (PEGDGE)), a zinc-ion salt (zinc trifluoromethanesulfonate (ZnOTf)) and a plasticizer (propylene carbonate (PC)). Owing to the presence of hydroxyl groups and the aromatic rings in the diglycidylether, interfacial contact could be enhanced by adhesive bonding across the solid-solid interface via attractive interactions, largely avoiding the aging issue of hydrogel electrolytes. Hydroxyl groups (-OH) formed in the polymerisation induce bond formation or strong polar attraction to oxide or hydroxyl surfaces. The as-fabricated solid polymer electrolyte exhibits an ionic conductivity of 3.77 × 10⁻⁴ S cm⁻¹ and excellent aging stability, maintaining a constant surface resistance (R_f) for at least 200 hours, as examined using periodic electrochemical impedance spectroscopy (EIS) testing. Using carbon cloth as the substrate for commercial electrodes based on β-MnO₂ and zinc powder, the as-assembled battery can be regarded as a carbon fibre-reinforced polymer composite capable of delivering constant power under various physical deformations.
Results and discussion
The adhesive bonding solid polymer electrolyte was synthesised based on an epoxy-based thermosetting polymer, poly(ethylene glycol)diglycidylether (PEGDGE, M_n = 500), as shown in the schematic diagram (Fig. 1a), by free radical polymerisation. A zinc-ion salt, zinc trifluoromethanesulfonate (Zn(OTf), ≥98%), was dissolved into the polymer framework PEGDGE, stirring for at least 3 hours until a homogeneous transparent solution was obtained. Afterwards, the curing agent triethylenetetramine (TETA, ≥97%), an aliphatic amine, was added to the complex in the molar ratio of 1:4 (TETA:PEGDGE). Propylene carbonate (PC, anhydrous), a common organic solvent for ionic salts, was also added to the solution. Since the maximum solubility of ZnOTf in PC is 0.04 mol L⁻¹ (ref. 24), PC is regarded as a filler, expanding the free volume for ionic transportation. To address the relation between ionic conductivity and the plasticizer in zinc-ion batteries, ABSPEs with various quantities of PC were fabricated. Differential scanning calorimetry (DSC) was applied to determine the polymerisation curing temperature (see Fig. S1†). All chemicals were supplied by Sigma-Aldrich UK and the experimental details are in the ESI.† PEGDGE and TETA were polymerised by free radical polymerisation, in which the diglycidylether ring of PEGDGE opens and reacts with two primary amino groups in TETA, as shown in Fig. S2.† The polymerisation mechanism of the epoxide ring-opening process was also evaluated by Raman spectroscopy analysis (Fig. S10†). Compared to pure PEGDGE, the missing peaks centered at 1256 cm⁻¹ and 1133 cm⁻¹ belong to the epoxide ring deformation, 25 while the missing peak at 912 cm⁻¹ corresponds to an epoxide ring breathing mode. 25 The polymer film exhibits a light brown color, as shown in Fig. 1b. Due to the low contrast, the structure was characterized by optical microscopy (Fig. 1c), where the observed crystalline zones are located in an amorphous background with a transparent structure. 26 The as-fabricated solid polymer electrolyte is homogeneous, as shown by the energy-dispersive X-ray spectroscopy (EDX) mapping images (Fig. S3†). As shown in Fig. 1d, the polymer electrolyte has good thermal stability, with rapid degradation occurring at 313 °C. The 5% weight loss observed at 119 °C is likely caused by the loss of PC. Although the zinc ion transfer mechanism in the polymer electrolyte has not been fully investigated, a reasonable mechanism can be deduced from detailed observations of lithium-ion gel polymers. There is general agreement that ion transmission happens via the segmental motion of the small chains in the amorphous region of the polymer host. 27,28 When ZnOTf is added, PEGDGE, with its sequential polar groups such as -O- and C-N, dissolves the zinc salt and forms polymer-salt complexes, as shown in Fig. 1a. To further increase the content of the amorphous phase, the plasticizer PC was used because of its low glass transition temperature and high dielectric constant. By incorporating low-molecular-weight compounds into the polymer host, the intermolecular and intramolecular forces between the polymer chains are reduced, consequently reducing the rigidity of the three-dimensional structure. 27,29,30 As a common organic solvent, PC can dissolve only 0.04 mol L⁻¹ of ZnOTf; 24 hence, PC is regarded as a low-molar-mass filler impeding chain folding, increasing free volume and speeding segmental relaxation.
7,31 The X-ray powder diffraction (XRD) pattern of the polymer electrolyte (Fig. 2a) shows incoherent broad scattering around 10°, which demonstrates that most regions of the polymer are in the amorphous phase. The amorphous phase enables Zn²⁺ ion transport through the polymer electrolyte to the electrodes. The crosslinking mechanism has also been verified by X-ray photoelectron spectroscopy (XPS), by which the hypothesis of zinc ion transportation has also been confirmed. As displayed in the O 1s spectra (Fig. 2b), the presence of the ionic Zn-O bond at a binding energy of 530.5 eV proves that zinc ions couple with the oxygen in the free hydroxyl groups, hopping to neighbouring sites. The Zn 2p spectra, shown in Fig. S8d,† also reveal that Zn-O has a binding energy of 1045 eV. In the conventional 2p spectra, the area enclosed by Zn 2p₁/₂ and S 2p₁/₂ is half that of the area enclosed by Zn 2p₃/₂ and S 2p₃/₂, respectively. Fourier-transform infrared spectroscopy (FTIR) (see Fig. 2c) was used to elucidate the polymerisation mechanism of the polymer. In the background spectrum of ZnOTf, C-F groups and S=O groups are observed at 1229 cm⁻¹ and 1039 cm⁻¹, respectively. Focusing on the polymer with PC, there is an additional band at 1700 cm⁻¹ compared to the polymer without PC, due to the stretching of C=O. The peak at 1095 cm⁻¹ is in the region of C-O stretching in the polymer PEGDGE. Hydroxyl groups (-OH) formed during the crosslinking between PEGDGE and the aliphatic amine are observed at ca. 3476 cm⁻¹, related to free non-hydrogen-bonded groups. The majority of zinc cations are bonded with free-electron donors, -OH, forming coordination bonds, while the addition of the plasticizer PC offers free C=O groups to form polymer-salt complexes. As displayed in Fig. 1a, the transportation of zinc ions in the polymer is amenable to segmental motion. The solid-state ZIBs for electrochemical tests were assembled in a coin cell (CR2032) under open-air conditions with a zinc foil anode, the fabricated β-MnO₂ cathode, and the solid electrolyte ABSPE. The cathode was fabricated by casting the commercial active cathode material, β-MnO₂, onto carbon paper before assembly in the sandwich structure. The cyclic voltammograms (CV) (Fig. 3a), recorded at scan rates of 0.5 mV s⁻¹ to 5 mV s⁻¹ from 0.8 V to 2.0 V, exhibit a reduction peak and an oxidation peak located at 1.69 V and 1.21 V at a scan rate of 1 mV s⁻¹, respectively. The redox peaks were assigned to the change of the valence state of Mn in β-MnO₂ from Mn⁴⁺ to Mn³⁺ at 1.69 V, accompanying the redox reaction of Zn to Zn²⁺ at the anode. To further understand the kinetic process of the epoxy-based solid-state ZIBs, the diffusion-controlled and capacitive contributions to the performance (Fig. 3b and c) were analysed based on the following relations: 32

i = a v^b, \quad \text{hence} \quad \log i = \log a + b \log v

where i refers to the peak current and v is the scan rate. a and b, interpreted from the log-log plot (Fig. 3b), are variables related to the diffusion-controlled and capacitive contributions: if b is close to 0.5, the kinetic process is mainly influenced by diffusion control; if b is close to 1, a capacitive process dominates. The b values for the redox peaks C1 and D1 are 0.73 and 0.63, respectively, indicating that the kinetic process is mainly diffusion-controlled rather than under capacitive control. A detailed estimation of the diffusion and capacitive contributions is shown in Fig.
3c, which further reveals that the electrochemical reaction is dominated by ionic diffusion; as the scan rate is increased, there is an increasing proportion of pseudocapacitance, reaching 48% at a scan rate of 5 mV s⁻¹. The ionic conductivities of Zn/ABSPE/β-MnO₂ were determined by electrochemical impedance spectroscopy (EIS) for polymer electrolytes with varying amounts of PC. Focusing on the Nyquist plots (see Fig. 3d), the two semi-circles obtained reveal an equivalent series resistance (ESR) consisting of a bulk resistance (R_b), a surface resistance (R_f) and a charge transfer resistance (R_ct), as displayed in Fig. 3e. The ionic conductivities, summarised in Table S2,† were calculated based on R_b. As displayed in Fig. S9a,† the ionic conductivity increases with plasticizer (PC) content, reaching 3.77 × 10⁻⁴ S cm⁻¹, which correlates with the hypothesis that the free volumes for the segmental motion of Zn²⁺ are expanded by PC, resulting in higher ionic conductivity. However, there is a decrease in ionic conductivity beyond an optimal concentration ratio of PC (56 wt%). When PC exceeds 50 wt% in the polymer electrolyte, the lower amount of ZnOTf (Fig. S9b†) results in a slight reduction in ionic conductivity. As shown in Fig. S13a,† the as-prepared solid-state Zn/β-MnO₂ battery exhibits a specific capacity of 177 mA h g⁻¹ at a current density of 0.1 A g⁻¹; even at 2 A g⁻¹ it can deliver a specific capacity of 47 mA h g⁻¹. The charge/discharge rate performance (Fig. S13b†) exhibits two typical voltage plateaus at 1.7 V and 1.2 V, consistent with the two pairs of redox peaks in the CV curves (Fig. 3a). Meanwhile, it also exhibits stable cycling at 0.5 A g⁻¹, with 100% coulombic efficiency and a capacity retention of 85% after 300 cycles.
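The b-value analysis above reduces to a linear fit in log-log space. A minimal sketch with hypothetical peak currents, chosen so that the fitted b ≈ 0.7 is close to the reported values:

```python
import numpy as np

v = np.array([0.5, 1.0, 2.0, 5.0])           # scan rates, mV/s
i_peak = np.array([0.21, 0.35, 0.58, 1.10])  # hypothetical peak currents, mA

# i = a * v^b  =>  log10(i) = log10(a) + b * log10(v)
b, log_a = np.polyfit(np.log10(v), np.log10(i_peak), 1)
print(f"b = {b:.2f}  (~0.5: diffusion-controlled, ~1: capacitive)")
```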
A flexible device, in the form of a carbon-fibre-reinforced polymer composite, was fabricated in a sandwich structure with carbon cloth substrates acting as the electrodes. As shown in Fig. 3f, a glass fibre mat was placed in the middle of the stack to avoid short-circuiting and to improve the mechanical properties. The entire device was manufactured by the vacuum resin molding strategy shown in Fig. S7.† The as-fabricated flexible device can power a temperature-humidity meter (see Fig. 3g) as an indicator of continuous operation. The device continues to generate power during twisting, bending and even cutting, as shown in Fig. 3h and i. Moreover, as revealed in Fig. S13d,† under deformation the device still exhibits a coulombic efficiency above 95% at a current density of 2 A g⁻¹.
Owing to the absence of water molecules in the polymer electrolyte, ionic diffusion in ABSPE exhibits a wider electrochemical stability window, as shown in Fig. S5,† where a symmetric Zn/ABSPE/Zn cell was tested. Scanning the potential over the range −3 to 3 V, zinc plating was detected at ±1.9 V during the first cycle. Owing to irreversible zinc deposition at the electrode surfaces (see Fig. S3†), the potential window narrowed to ±1.2 V in the second cycle, accompanied by a reduction in the peak current. The cathodic and anodic peaks relate to the reversible reaction in the polymer electrolyte ABSPE, Zn ⇌ Zn²⁺ + 2e⁻. Zinc stripping/deposition for aqueous and hydrogel ZIBs generally occurs at ±0.3 V in symmetric cells; the large operational voltage window (−1.9 V to 1.9 V) of ABSPE is beneficial for ZIBs to achieve high voltage while avoiding the hydrogen evolution reaction (HER) at the cathode.
The aging stability of the polymer electrolytes (see Fig. 4a) was investigated by recording EIS spectra every 10 hours for 400 hours under ambient conditions, so as to simulate realistic working scenarios. As shown in Fig. 4b, R_b remains constant at ~200 Ω throughout the 400-hour test, while R_f remains constant at ~2000 Ω for ~200 hours; the smoothed curve for R_f then shows a substantial, nearly exponential increase in surface resistance after 200 hours. For comparison, the aging behaviour of a conventional alginate hydrogel polymer electrolyte (HPE) was also investigated (see Fig. 4c). As displayed in Fig. 4d, R_b of the hydrogel electrolyte remains constant at ~6 Ω for 170 hours, but subsequently jumps to 20 Ω; R_f remains around 300 Ω for 100 hours, followed by an exponential increase, as observed for ABSPE. For both ABSPE and HPE, the good stability of the bulk resistance reveals that ion transport within the pristine polymer electrolyte is only weakly influenced by aging, whereas the surface resistances are far more sensitive to it. The different bulk-resistance behaviour of ABSPE and HPE can be explained by thermal stability. Since the ionic conductivity of an HPE generally increases with water content,33-35 densification of the HPE induced by evaporation of water molecules lowers its ionic conductivity and hence raises its bulk resistance. TGA results for HPE (Fig. S11†) confirm this difference in thermal stability: the degradation temperature of HPE is below 100 °C, compared with 313 °C for ABSPE, as shown in Fig. 1d.
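As a rough illustration of how the "nearly exponential" growth of R_f after ~200 hours could be quantified, the sketch below fits R_f(t) = R0·exp(k·(t − t0)) to the post-onset portion of an aging series. The data array is synthetic and invented purely for illustration; it does not reproduce Fig. 4b.

import numpy as np

# Synthetic R_f aging series (ohm), sampled every 10 h as in Fig. 4a.
t = np.arange(0, 410, 10.0)  # hours
R_f = np.where(t < 200, 2000.0, 2000.0 * np.exp(0.012 * (t - 200)))

# Fit ln(R_f) = ln(R0) + k * (t - t0) on the post-onset region only.
t0 = 200.0
mask = t >= t0
k, ln_R0 = np.polyfit(t[mask] - t0, np.log(R_f[mask]), 1)
print(f"onset resistance R0 ~ {np.exp(ln_R0):.0f} ohm, growth rate k ~ {k:.3f} per hour")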
The attenuation of ion transport at the electrode-polymer interface can be ascribed to densification of the polymer at the surface and to the growth of zinc deposits on the anode. Densification, identified as the primary cause, is induced by solvent evaporation (water in HPEs, PC in ABSPEs), which reduces the wettability between electrodes and electrolyte; the surface resistances are therefore the quantities most affected by aging. Compared with HPEs, the epoxy-based polymer electrolytes exhibit relatively high stability in R_f. Owing to the diglycidyl ether groups in the ABSPEs, the hydroxyl and amine groups formed after the ring-opening reaction36 generate strong intermolecular and hydrogen bonds at the interface,37 which restrict evaporation of the PC solvent. As reported by Fourche,38 chemical bonding occurs extensively at polymer-metal interfaces; moreover, metal-polymer adsorption bonds (M-O-C) have been observed for metals such as Al, Fe and Ni coated with epoxy.39 Therefore, the covalent bonds (Zn-O-C) shown in Fig. 4e could form in ABSPEs through Lewis acid-base reactions activated by the interfacial electric field.40 These metal-polymer adsorption bonds strongly stabilise the interface, mitigating the aging effect. For HPEs, water molecules on the surface of the hydrogel enhance the wettability for ionic diffusion, but the intermolecular coulombic forces are lost upon ingress of media with a high dielectric constant, such as water41 (see Fig. 4f). Because the evaporation rate of water (BuAc = 0.3 (ref. 42)) is much greater than that of PC (BuAc = 0.005 (ref. 43)), the free dissociated water molecules absorbed in HPEs, which interact via dipole-dipole forces, are easily lost, resulting in a shorter aging time. Nazarov,40 who also investigated a zinc/epoxy interface, showed that water adsorption and desorption rapidly change the potential drop across the zinc/polymer interface. The potential variation during water drying and restoration reflects how sensitively the surface resistance responds.
Conclusions
In summary, an adhesive bonding solid polymer electrolyte (ABSPE) was developed for rechargeable zinc-ion batteries, exhibiting excellent anti-aging properties and a large electrochemical window. With increasing plasticizer (PC) content, the ionic conductivity reached 3.77 × 10⁻⁴ S cm⁻¹. The carbon-fibre-reinforced polymer composite fabricated by VARTM provides a reliable power supply under bending, twisting and cutting. With respect to aging, the surface resistance is clearly more sensitive than the bulk resistance for both the polymer and hydrogel electrolytes. Interfacial intermolecular interactions and solvent-evaporation-induced densification were identified as the explanation for the attenuation in capacity during cycling at the interface.
Conflicts of interest
There are no conflicts to declare. | 4,730.6 | 2020-11-10T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
A survey for the presence of microcystins in aquaculture ponds in Zaria, Northern Nigeria: Possible public health implication
Aquaculture ponds in Zaria, Nigeria, were screened for the occurrence of the hepatotoxic microcystins using an ELISA method. Four genera of cyanobacteria (Microcystis, Nostoc, Planktothrix and Anabaena) were recorded from the 11 aquaculture ponds screened. These cyanobacteria are generally known to produce microcystins and other bioactive substances. Six of the 11 ponds had detectable concentrations of microcystins (ranging from 0.6 to 5.89 μg/L). This raises the possibility of bioaccumulation of microcystins in fish, so that people who feed on contaminated fish from these ponds risk microcystin poisoning.
INTRODUCTION
Aquaculture ponds contain a community of photosynthesizing organisms belonging to different groups, among them the cyanobacteria (blue-green algae). The blue-green algae possess the basic morphology of Gram-negative bacteria; the distinct differences between them and other bacteria are that they are usually larger and able to photosynthesize. In addition, cyanobacteria can fix atmospheric nitrogen (Codd et al., 2005; Huisman et al., 2005; Hudnell, 2008). Species of blue-green algae may be unicellular, colonial or filamentous in form, and are usually enclosed in mucilaginous sheaths, either individually or in colonies. Whatever their morphological form, blue-green algae can become the dominant photosynthesizing group in an aquaculture pond. Under the right conditions and concentrations of nutrients (nitrogen and phosphorus), blue-green algae proliferate excessively, leading to a cyanobacterial bloom (Nwaura et al., 2004). This is also called a harmful algal bloom (HAB), even when the species constituting the bloom do not produce harmful bioactive substances, because the excessive growth of blue-green algae can harm other plants through competition for light, space, nutrients and oxygen (Mitrovic et al., 2004). All of these effects may be detrimental to cultured organisms that cannot withstand the problems associated with HABs. Some species of cyanobacteria must reach high cell numbers to produce significant concentrations of bioactive substances (toxins), while others need not. Even within a species, toxin production varies with time, space, environmental conditions and strain type; hence, morphologically identical species can be chemically different, with one chemotype producing toxins and another not (Oberholster et al., 2006).
Cyanobacteria produce a wide range of bioactive substances or secondary metabolites. The toxins produced by cyanobacteria are of several types, comprising neurotoxins (anatoxins and saxitoxins), hepatotoxins, cylindrospermopsin and lipopolysaccharides (LPS) (Carmichael and Falconer, 1993; Carmichael, 1997). The hepatotoxins are cyclic peptides and include the microcystins and nodularin. Microcystins contain five invariant and two variant amino acids; Adda, a unique β-amino acid, is one of the invariant residues. Microcystin nomenclature assigns each toxin a two-letter suffix (XY) specifying its two variant amino acids; microcystin-LR, for example, carries leucine (L) and arginine (R) at the variant positions. Several reports document variants of the 'invariant' amino acids and the replacement of the 9-methoxy group of Adda by an acetyl moiety. Over 90 microcystin variants have been characterized to date (Rinehart et al., 1994; Sivonen and Jones, 1999; Natural Resources and Mines, 2005; Pichardo et al., 2007). The International Agency for Research on Cancer has classified microcystins as 'possible human carcinogens' (Class 2B) on the basis of the accumulated toxicological data, a classification influenced by the toxins' ability to inhibit certain protein phosphatases, enzymes critical to cell-cycle regulation (Grosse et al., 2006; Humpage and Burch, 2007).
The toxins produced by cyanobacteria can pose a significant threat to public health because they bioaccumulate and thereby pass through the food chain. Fish and other animals that feed on organisms containing high concentrations of these toxins can be killed, causing severe economic losses in aquaculture when the toxins are implicated in the death of cultured animals or harm to their consumers; losses of about one billion US dollars per decade have been reported (Landsberg, 2002; Hudnell, 2008). Among these toxins, the microcystins present the greatest public health concern (Chen et al., 2009a). Microcystins are known to inhibit fish growth, and toxicity at environmental concentrations has been reported for several fishes, including salmon (Anderson et al., 1993), carp (Xie et al., 2004), zebrafish (Oberemm et al., 1999) and catfish (Zimba et al., 2001).
In Nigeria, there has been no report of microcystins in any aquaculture facility. Most work on aquaculture ponds has addressed the diversity and abundance of algae (Onuoha et al., 1991; Akpan and Okafor, 1997; Ekpenyong, 2000; Chindah and Pudo, 1991). To date, studies of algal toxins have been confined to other aquatic ecosystems rather than fish farms or aquaculture ponds, and have consisted mainly of bioassays, ranging from fish and shellfish bioassays (Unyimadu, 2002) to mouse bioassays (Odokuma and Isirima, 2007). Although much has been published in other countries on the effects of microcystins (and other algal toxins) on fish, little is known about the presence of such toxins in Nigerian aquaculture facilities. In Nigeria, fish is a good source of protein for poor and rich alike; people who cannot afford meat buy fish such as Tilapia. If this cheap source of protein is contaminated, the implications may be far-reaching, including both short-term and long-term poisoning from consumption of contaminated fish. Moreover, there is no legislation in Nigeria to support proper monitoring and management of fish farms with respect to cyanotoxins, and the success of any legislation on controlling cyanotoxin poisoning will depend on the amount of data available. This calls for research aimed at generating data from surveys of fish farms and aquaculture facilities in Nigeria, to help appraise the extent of contamination and thereby enable effective monitoring and management of these facilities. This project was carried out with the aim of screening aquaculture ponds in Zaria, Nigeria, for the presence of microcystins.
Study area
Zaria is situated centrally in the Northern Guinea Savanna of Nigeria (11°3′ N, 7°42′ E). Climatic conditions in Zaria are tropical, with well-defined wet and dry seasons: the rainy season lasts from May to October and the dry season from November to April. The aquaculture ponds selected for this survey are privately owned and managed. The cultivated fish species were Tilapia and the African catfish. A few ponds were enriched with fertilizers to enhance the primary productivity of algae. Fish production in most of these ponds is for commercial purposes, and some ponds supply their harvested fish directly to specific markets for human consumption. The ponds were mostly concrete; the water supply for some came from boreholes, while for others tap water was the primary source. Table 1 gives the names and locations of the ponds.
Sampling
Sampling for the present survey took place from September 2008 to November 2008. Samples for physical, chemical and biological analysis were collected from three fixed sampling points per aquaculture pond using an integrated hose-pipe sampler (2.5 cm diameter, 5 m length). Samples were collected in replicate at about 30 cm depth and 1 m from the shore (APHA, 1998). Three litres of water were collected for microcystins analysis. Samples were preserved on ice and transported to the laboratory, where the samples for microcystins analysis were stored at −20 °C.
Cyanobacteria analysis
Analysis of cyanobacteria samples was carried out in the Hydrobiology Laboratory of the Department of Biological Sciences, Ahmadu Bello University, Zaria, Nigeria. For cyanobacteria analysis, 100 ml concentrates were obtained from the water samples collected with the integrated hose-pipe sampler. These subsamples were fixed with 0.1 ml of Lugol's solution to precipitate and preserve the algae (APHA, 1998). Laboratory analysis of cyanobacteria followed the procedures of Prescott (1977), APHA (1998) and Bartram and Rees (2000). Cyanobacteria biomass (number of cells per ml) was determined using the drop count technique (Bartram and Rees, 2000); cells in colonies were counted without separating the cells.
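The drop count calculation itself is not detailed in the text; as a hedged illustration of the kind of arithmetic involved, the sketch below converts a mean count per drop into cells per ml of the original pond water, assuming a 0.05 ml drop and a 100 ml concentrate prepared from a known sample volume. Both of those values, and the counts, are assumptions for illustration, not values from this study.

# Hypothetical drop-count calculation (all values are assumptions).
mean_cells_per_drop = 42        # mean count over several Lugol-fixed drops
drop_volume_ml = 0.05           # assumed volume of one drop
concentrate_ml = 100.0          # volume of the concentrate (as in the text)
original_sample_ml = 3000.0     # assumed volume of pond water concentrated

cells_per_ml_concentrate = mean_cells_per_drop / drop_volume_ml
concentration_factor = original_sample_ml / concentrate_ml
cells_per_ml_pond = cells_per_ml_concentrate / concentration_factor
print(f"{cells_per_ml_pond:.0f} cells per ml of pond water")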
Analysis of physiochemical parameters
Physicochemical analysis and sample preservation were carried out in the Hydrobiology Laboratory of the Department of Biological Sciences, Ahmadu Bello University, Zaria, Nigeria. A mercury thermometer was used to determine water temperature. Electrical conductivity (EC, µmhos/cm) was measured with an E.B.A/10 conductivity meter, and pH with a Pye Unicam model 292 pH meter at 25 °C. The modified Winkler-Azide method (Lind, 1974; APHA, 1985) was used for dissolved oxygen (DO) and biochemical oxygen demand (BOD) analysis. Total dissolved solids (TDS) were determined following Lind (1974). Nutrient concentrations (phosphate-phosphorus, PO4-P, and nitrate-nitrogen, NO3-N) were determined spectrophotometrically with a HACH DR/2000 direct-reading spectrophotometer, with specific concentrations read from a calibration curve (Mackereth, 1963; Lind, 1974; APHA, 1985).
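The calibration details are not given in the text; as a minimal sketch of reading a nutrient concentration from a spectrophotometric calibration curve, the following fits a straight line to the absorbances of known standards and inverts it for an unknown sample. All numbers are invented for illustration.

import numpy as np

# Hypothetical NO3-N standards (mg/L) and their measured absorbances.
standards = np.array([0.00, 0.05, 0.10, 0.20, 0.40])
absorbance = np.array([0.002, 0.051, 0.098, 0.201, 0.395])

# In the Beer-Lambert region A = m * C + c; fit, then invert for the sample.
m, c = np.polyfit(standards, absorbance, 1)
sample_A = 0.132                      # illustrative sample reading
sample_C = (sample_A - c) / m
print(f"NO3-N ~ {sample_C:.3f} mg/L")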
Immunological detection of microcystins
An enzyme-linked immunosorbent assay (ELISA) was used to determine the total microcystins concentrations of the pond water samples. Samples were frozen and thawed three times to release the intracellular toxins (Chorus and Bartram, 1999), permitting analysis of dissolved as well as intracellular toxins in the ponds. The ELISA analysis was carried out in the Algae Laboratory, National Research Institute for Chemical Technology (NARICT), Zaria, Nigeria. The assay is based on the polyclonal antibody method of Chu et al. (1990) as adapted by Carmichael and An (1994). Antibody-coated tubes, standards and all reagents were supplied by Abraxis LLC (Warminster, PA 18974, USA). The sensitivity of this kit for microcystins was approximately 0.15 µg/L. Microcystins were quantified using a Jenway spectrophotometer (Model 6400) at a wavelength of 450 nm in conjunction with a reference wavelength of 630 nm (Fischer et al., 2001; Hawkins et al., 2005).
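As a hedged sketch of how such a competitive ELISA is typically quantified (the kit's own protocol is not reproduced in the text), the code below corrects each reading against the 630 nm reference implicitly (by using the net absorbance), normalises to the zero standard (%B/B0), fits a line to %B/B0 versus log concentration, and inverts it for a sample. All absorbances and concentrations are illustrative, not study data.

import numpy as np

# Illustrative competitive-ELISA standards: concentration (ug/L) vs net A450.
std_conc = np.array([0.15, 0.40, 1.00, 2.00, 5.00])
std_abs = np.array([1.10, 0.85, 0.60, 0.42, 0.22])
B0 = 1.30                                 # zero-standard absorbance (assumed)

# %B/B0 falls roughly linearly with log10(concentration) in a competitive assay.
pct_B = 100.0 * std_abs / B0
slope, intercept = np.polyfit(np.log10(std_conc), pct_B, 1)

sample_abs = 0.50                         # illustrative pond-water reading
sample_pct = 100.0 * sample_abs / B0
conc = 10 ** ((sample_pct - intercept) / slope)
print(f"microcystins ~ {conc:.2f} ug/L")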
Statistical analysis
Analysis of variance (ANOVA), performed in Microsoft Office Excel 2007 for Windows, was used to test for differences between the means of the observed parameters (Fisher, 1925). Relationships between analysed parameters were assessed using Pearson's correlation coefficient.
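The same two tests can be reproduced outside Excel; the sketch below shows the equivalent computations in Python with SciPy, applied to invented placeholder measurements rather than the study's data.

import numpy as np
from scipy import stats

# Illustrative measurements from three hypothetical ponds.
pond_a = np.array([25.1, 26.3, 24.8])   # e.g. temperature, deg C
pond_b = np.array([29.0, 30.2, 31.8])
pond_c = np.array([27.5, 28.1, 26.9])

# One-way ANOVA: do the pond means differ?
f_stat, p_anova = stats.f_oneway(pond_a, pond_b, pond_c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

# Pearson correlation between two paired parameters (e.g. EC vs TDS).
ec = np.array([307.0, 289.0, 71.0, 150.0, 210.0])
tds = np.array([153.5, 140.0, 35.5, 76.0, 104.0])
r, p_corr = stats.pearsonr(ec, tds)
print(f"Pearson r = {r:.3f}, p = {p_corr:.3f}")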
RESULTS
The highest temperature recorded in this study was 31.80 °C in LH pond A, while the lowest was 24.80 °C in RRGRA pond. PN pond B had the peak EC value of 307 µmhos/cm, followed closely by Aliyu fish pond with 289 µmhos/cm; the lowest EC, 71.0 µmhos/cm, was recorded in BS pond A. TDS followed a similar trend to EC: the highest value (153.5 mg/L) was observed in PN pond B and the lowest (35.5 mg/L) in BS pond A. DO concentrations in all ponds ranged from 14.9 to 25.4 mg/L, while biochemical oxygen demand (BOD) ranged from 1.00 to 11.55 mg/L; the highest BOD was observed in BS pond B and the lowest in LH pond C. The lowest concentration of nitrate-nitrogen (NO3-N), 0.07 mg/L, was recorded in both RRGRA pond and BS pond C, while the highest, 0.14 mg/L, was in PN pond B. The highest concentration of PO4-P was 0.26 mg/L (in Engineering pond) and the lowest 0.10 mg/L (in Aliyu fish pond). Except for LH pond B and BS pond A, all ponds had pH values ranging from 6.58 to 7.88; LH pond B and BS pond A had high pH values of 9.40 and 9.75, respectively (Table 2).
Table 3 shows the density and relative abundance of cyanobacteria in the fish ponds. The cyanobacteria species with the highest density was Microcystis spp., observed in PN pond A; Microcystis spp. also had the highest frequency of occurrence (72.72%) across all ponds. The biomass of Microcystis spp. in the ponds was significantly correlated with EC (r = 0.7542), DO (r = 0.8993), BOD (r = 0.6669), PO4-P (r = 0.7988), TDS (r = 0.7545) and pH (r = 0.8444) at P < 0.05 (Table 4).
Microcystins were detected in 54% of the screened aquaculture ponds, and five of the six positive ponds had microcystins levels above the WHO limit (1 µg/L). The highest concentration of microcystins was 5.89 µg/L, in Engineering pond; the lowest detected was 0.60 µg/L, in LH pond C. Concentrations of microcystins in BS pond C, Aliyu pond A, LH pond A, LH pond B and RRGRA pond were below the level of detection (BLD) of the ELISA kit used in the current study (Table 5). The concentration of microcystins correlated significantly with temperature (r = 0.6341), EC (r = 0.7867) and TDS (r = 0.7877) at P ≤ 0.05 (Table 4).
DISCUSSION
All of the aquaculture ponds in the present study were relatively rich in dissolved oxygen: even the lowest recorded value, about 14 mg/L, is high compared with the critical level of 3 mg/L in aquatic ecosystems (Lind, 1974). The high DO values can also be attributed to the rate of photosynthesis in the ponds, which is proportional to the plant biomass present; the more the plants photosynthesize, the more oxygen they release into the system. The observed similarity in the variations of TDS and EC is not surprising, since the two parameters usually show a linear relationship. Differences in conductivity among the ponds may also depend on their water sources and the nature of the dissolved substances; variations in activities around the catchment of the water body that serves as the source of the ponds' tap water could account for the observed differences in TDS and EC (Chia, 2007). Differences in nutrient levels among the ponds can be attributed to the practices of the pond owners, so the nutrient content of a pond may reflect the extent of fertilization by its owner.
Where ponds are artificially enriched, algal growth increases correspondingly; as phosphorus concentrations rise, so does cyanobacterial biomass in an aquatic ecosystem (Havens et al., 2003).
The density of these species was closely associated with the physicochemical parameters of the aquaculture ponds. The water temperature of the screened ponds ranged from 24.80 to 31.00 °C, a range optimal for the growth of cyanobacteria in aquatic systems (Konopka and Brock, 1978; Howard and Easthope, 2002), which may explain the increased density of most cyanobacteria species recorded in this study. Different strains of cyanobacteria have been shown to alter their growth rates significantly in response to variations in temperature, light level and nutrient availability (Lee et al., 2000; Oh et al., 2000; Wiedner et al., 2003). The presence of Microcystis spp. in these ponds may indicate constant nutrient enrichment, since they are known not to tolerate nutrient-poor conditions in aquatic ecosystems (Finni et al., 2001).
Species of Microcystis (Botes et al., 1982; Watanabe and Oishi, 1985; Henriksen, 1996), Nostoc (Sivonen et al., 1990), Planktothrix (Henriksen, 1996) and Anabaena (Krishnamurthy et al., 1986; Vezie et al., 1998) have been reported to produce microcystins in aquatic systems. Although it is impossible to state which species in the current study was responsible for producing the microcystins, 6 of the 11 ponds screened had detectable concentrations of microcystins in the water. Microcystis is the most probable source, as 2 of the 6 positive ponds had 100% Microcystis presence. This could have serious negative implications for people who use fish from these ponds as a source of meat, since there is ample published evidence that microcystins bioaccumulate in fish tissues, further supported by histopathological effects in muscle, gill and kidney tissue of several fish species (Rodger et al., 1994; Carbis et al., 1996; Kotak et al., 1996; Fischer and Dietrich, 2000; Sipia et al., 2001; Magalhães et al., 2003; Xie et al., 2005; Ibelings et al., 2005; Gkelis et al., 2006). Zhao et al. (2006) showed that microcystin accumulation rates in muscle and liver tissue are directly proportional to ingestion rates in Oreochromis niloticus.
Owing to technical limitations, the present study did not measure microcystin concentrations in fish collected from the aquaculture ponds. However, there is compelling evidence establishing the bioaccumulation of these toxins in fish tissues, so fish from these ponds may well be contaminated, given the detection of microcystins in the water. Wilson et al. (2008) regard the consumption of fish containing cyanobacterial toxins as a poorly studied but potentially important route for human ingestion of harmful cyanotoxins. Chen et al. (2009a) recently presented milestone work on microcystin exposure, showing that microcystins were transferred mainly from contaminated fishery products to a chronically exposed human population (fishermen at Lake Chaohu in subtropical China), together with indications of hepatocellular damage; they identified for the first time the presence of microcystins in serum samples (average 0.39 ng/ml) of humans (fishermen) exposed by a natural route. It is therefore quite likely that, where microcystins are present in aquaculture ponds, the toxins contained in fish tissues pose an alternative route of human exposure. Field data continue to accumulate indicating the potential chronic risk to human health from consumption of microcystin-contaminated fish muscle (Magalhães et al., 2001; Xie et al., 2004; Wood et al., 2006; Wilson et al., 2008; Chen et al., 2009b).
Other probable negative effects of microcystins in aquaculture ponds are reductions in fish growth and productivity. Sublethal effects of fish exposure to microcystins include liver damage (Best et al., 2001), altered startle response and disoriented swimming, and changes in ventilation rates (Li et al., 2008). The effect on the liver is explained by the accumulation of the toxins in the liver or hepatopancreas of exposed fish, where they bind to the nucleophilic site of protein phosphatases PP1 and PP2A (Robinson et al., 1991; Craig et al., 1996; Fischer and Dietrich, 2000; Fischer et al., 2001); the possible consequences are hepatocyte degradation and fatal liver haemorrhaging (Fischer and Dietrich, 2000; Zimba et al., 2001). Some unexplained fish deaths in Nigerian aquaculture ponds may therefore have been caused by microcystins or other algal toxins. Consistent with this, studies of freshwater ecosystems in other countries have reported fish kills associated with microcystin-producing strains of Microcystis (Anderson et al., 1993; Tencala and Dietrich, 1997).
In conclusion, microcystins were detected in 6 of the 11 aquaculture ponds screened, and the physicochemical parameters of these ponds favour the growth of the four cyanobacterial genera recorded. The published literature supports the possibility of bioaccumulation of these toxins in fish tissues, so people who eat fish from these and similar ponds may be at risk of chronic microcystin poisoning. It is therefore recommended that fish farmers reduce the rate of artificial enrichment of their aquaculture ponds. Further studies are needed to examine the rate of bioaccumulation and contamination of fish by microcystins in Nigerian aquaculture ponds, and future work should quantify the type and abundance of toxic intracellular cyanobacterial compounds in commercially important fish species in Nigeria. Such studies are required to generate the data needed to support legislation for managing the incidence of fish contamination by microcystins (and other algal toxins) in Nigerian aquaculture facilities.
Table 2.
Physicochemical parameters of selected aquaculture ponds in Zaria.
Table 3.
Density of dominant cyanobacteria (no. of cells per ml × 10³), with relative abundance (%) in parentheses, in selected aquaculture ponds in Zaria, Nigeria.
Table 4.
Correlation coefficients between observed parameters in selected aquaculture ponds in Zaria, Nigeria.
Table 5.
Concentrations of microcystins in the selected aquaculture ponds in Zaria, Nigeria. | 4,562.6 | 2009-11-16T00:00:00.000 | [
"Biology",
"Engineering"
] |