Infinite Deformed Groups and Their Geometrical and Physical Applications

In this article we realize Klein's Program for geometrical structures with arbitrary variable curvature (Riemannian spaces and fiber bundles with connection) within the framework of infinite deformed groups. These groups generalize gauge groups to the case of nontrivial action on the base space of bundles, using the idea of group deformations. We also show that infinite deformed groups give a group-theoretic description of gauge fields (the gravitational field, with its metric or vierbein part, on the same footing as gauge fields of internal symmetry) which is an alternative to their geometrical interpretation.

Introduction

An infinite (local gauge) symmetry lies at the basis of modern theories of fundamental interactions. The theory of gravitation (general relativity) is based on the idea of covariance with respect to the group of space-time diffeomorphisms. Theories of the strong and electroweak interactions are gauge theories of internal symmetry. Moreover, the existence of these interactions is considered necessary for ensuring the local gauge symmetries. But any physical theory can be written in covariant form without introducing a gravitational field. Similarly, as was first emphasized in [1], for any theory with global internal symmetry G the corresponding gauge symmetry G^g can be ensured without introducing nontrivial gauge fields, by means of pure gauging. The presence of the gravitational field or of gauge fields of internal symmetry manifests itself as a deformation: the curvature of a Riemannian space or of a fiber bundle with connection, respectively. Formally, nontrivial gauge fields are introduced by promoting derivatives to covariant derivatives, ∂_µ → ∇_µ. Their commutators characterize the strength of the field under consideration and, from the geometrical point of view, the curvature of the corresponding space. On the other hand, covariant derivatives generate infinitesimal space-time translations in gauge fields (curved spaces). It is therefore natural to suppose that, in order to introduce arbitrary nontrivial gauge fields, one must consider groups (wider than the gauge groups G^g) which generalize the gauge groups G^g to the case of nontrivial action on the space-time manifold and which contain the information about the arbitrary gauge fields in which the motion occurs. Consequently, such groups must contain the information about the corresponding geometrical structures with arbitrary variable curvature and must induce these geometrical structures on the manifolds on which they act. Hence, from the mathematical point of view, such groups should realize Klein's Erlanger Program [2] for these geometrical structures. For a long time it was believed that such groups do not exist. E. Cartan [3] called this situation the Riemann-Klein antagonism: the antagonism between Riemann's and Klein's approaches to geometry. There have been attempts to modify Klein's Program for geometrical structures with arbitrary variable curvature by abandoning the group structure of the transformations used, in favor of categories [4], quasigroups [5], and so on. One often encounters the opinion that nonassociativity is the algebraic counterpart of the geometrical notion of curvature [5].
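For orientation, the standard construction alluded to in the previous paragraph can be written out explicitly. The notation below (a gauge potential A^i_µ, internal generators T_i with structure constants F^i_{jk}, and the chosen normalization) is conventional Yang-Mills notation and is not taken from this paper; it only illustrates how commutators of covariant derivatives measure field strength, i.e. curvature:

\[
\nabla_\mu = \partial_\mu + A^i{}_\mu(x)\, T_i, \qquad
[\nabla_\mu, \nabla_\nu] = F^i{}_{\mu\nu}(x)\, T_i, \qquad
F^i{}_{\mu\nu} = \partial_\mu A^i{}_\nu - \partial_\nu A^i{}_\mu + F^i{}_{jk}\, A^j{}_\mu A^k{}_\nu .
\]

Vanishing field strength (zero curvature) corresponds to a pure-gauge configuration, which is the case obtained by the "pure gauging" mentioned above.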
In this article we shall show that the realization of Klein's Program for geometrical structures with arbitrary variable curvature (Riemannian spaces and fiber bundles with connection) can be carried out within the framework of the so-called infinite deformed groups, which generalize gauge groups to the case of nontrivial action on the base space of bundles, using the idea of group deformations. Such groups were constructed in [6]. Klein's Erlanger Program was realized for fiber bundles with connection in [7] and for Riemannian space in [8]. This is important for physics because the widely known gauge approaches to gravity (see, for example, [9]) in fact give a gauge interpretation neither to the metric fields nor to the vierbein ones. An interpretation of these as connections in appropriate fibrations has been achieved by introducing (explicitly or implicitly) the assumption of a background flat space (see, for example, [10]), which is unnatural for gravity. The reason for these difficulties lies in the fact that the fiber bundle formalism is appropriate only for internal symmetry Lie groups, which do not act on the space-time manifold. For gravity this restriction is clearly inadequate, because it does not permit treating gravity as the gauge theory of the translation group. In this article we shall also show that infinite deformed groups give a group-theoretic description of gauge fields (the gravitational field, with its metric or vierbein part, on the same footing as gauge fields of internal symmetry) which is an alternative to their geometrical interpretation [11]. This approach allows one to overcome the well-known Coleman-Mandula no-go theorem within the framework of infinite deformed groups and opens new possibilities for unifying gravity with gauge theories of internal symmetry [12].

Generalized Gauge Groups

Gauge groups of internal symmetry G^g are a special case of infinite groups and have a simple group structure: the infinite direct product of finite-parameter Lie groups, G^g = ∏_{x∈M} G, where the product is taken over all points x of the space-time manifold M. The groups G^g act on M trivially: x'^µ = x^µ. To generalize the groups G^g to the case of nontrivial action on the space-time manifold M, let us now consider a Lie group G_M with coordinates \tilde g^α (indices α, β, γ, δ) and multiplication law (\tilde g · \tilde g')^α = \tilde φ^α(\tilde g, \tilde g'), which acts (possibly non-effectively) on the space-time manifold M with coordinates x^µ (indices µ, ν, π, ρ, σ) according to the formula x'^µ = \tilde f^µ(x, \tilde g) (2). The infinite Lie group G^g_M is parameterized by smooth functions \tilde g^α(x) satisfying a condition formulated in terms of d_ν := d/dx^ν. The multiplication law in G^g_M is determined by the corresponding formulae, among them x'^µ = \tilde f^µ(x, \tilde g(x)). It is a simple matter to check that these operations indeed make G^g_M a group. Formula (2) sets the action of G^g_M on M. In the case of trivial action of the group G_M on M, the group G^g_M becomes the ordinary gauge group G^g = ∏_{x∈M} G. The groups G^g_M we call generalized gauge groups. To clarify the idea of group deformations, one may think of spheres of different radius R. For gauge groups G^g_M certain isomorphisms also play the role of deformations of the space of group representations; but, in contrast to isomorphisms of finite-parameter Lie groups, such isomorphisms are more substantial, as they allow the space to be deformed independently at its different points. Let us pass from the group G^g_M to the group G^{gH}_M isomorphic to it via the formula g^a(x) = H^a(x, \tilde g(x)) (Latin indices take the same values as the corresponding Greek ones).
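The explicit composition law of the generalized gauge group is not legible in the extracted text above. A plausible reconstruction, consistent with the stated ingredients (our reading, not a verbatim quotation from the paper), is:

\[
(\tilde g \cdot \tilde g')^{\alpha}(x) = \tilde\varphi^{\alpha}\big(\tilde g(x),\, \tilde g'(x')\big),
\qquad x'^{\mu} = \tilde f^{\mu}\big(x,\, \tilde g(x)\big),
\]

so that the second factor is evaluated at the point to which the first factor has moved x. For trivial action (x' = x) this reduces to the pointwise product of the ordinary gauge group G^g = ∏_{x∈M} G, as stated in the text.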
The smooth functions H^a(x, \tilde g) satisfy, in particular, the invertibility property 2): there exist functions K^α(x, g) such that K^α(x, H(x, \tilde g)) = \tilde g^α for all \tilde g ∈ G, x ∈ M. Formula (4) sets the action of G^{gH}_M on M. Such transformations between the groups G^g_M and G^{gH}_M we call deformations of infinite Lie groups, since (together with the change of the multiplication law) the corresponding deformations of the geometric structures of the manifolds subjected to the group action are directly associated with them. The functions H^a(x, \tilde g) we call deformation functions, the functions h^a_α(x) := ∂H^a(x, \tilde g)/∂\tilde g^α |_{\tilde g=0} deformation coefficients, and the groups G^{gH}_M infinite (generalized gauge) deformed groups. Let us consider the expansion (5) of the functions ϕ^a which set the multiplication law (3) in the group G^{gH}_M. These functions are explicitly x-dependent, so the coefficients of the expansion (5) are x-dependent as well. In particular, the structure coefficients of the group G^{gH}_M become x-dependent (structure functions instead of the structure constants of ordinary Lie groups), F^a_{bc}(x) := γ^a_{bc}(x) − γ^a_{cb}(x) (6), as do further coefficients, which we call the curvature coefficients of the deformed group G^{gH}_M. Here ∂_b := ∂/∂g^b, and h^α_a(x) denotes the matrix inverse of h^a_α(x). The generators X_a := ξ^µ_a(x) ∂_µ (with ∂_µ := ∂/∂x^µ) of the deformed group G^{gH}_M are expressed through the generators \tilde X_α := \tilde ξ^µ_α(x) ∂_µ of the group G^g_M by means of the deformation coefficients. So, at the infinitesimal (algebraic) level, the deformation reduces to nondegenerate linear transformations of the generators of the initial Lie group, performed independently at every point x ∈ M.

Theorem 1. Commutators of the generators of an infinite (generalized gauge) deformed group are linear combinations of the generators, with the structure functions as coefficients [6]. For the generalized gauge nondeformed group G^g_M the corresponding coefficients are the structure constants \tilde F^γ_{αβ} of the initial Lie group G_M.

Group-Theoretic Description of Connections in Fiber Bundles and Gauge Fields of Internal Symmetry

Let P = M × V be a principal bundle with base M (space-time) and structure group V with coordinates \tilde υ^i (indices i, j, k) and multiplication law (\tilde υ · \tilde υ')^i = \tilde φ^i(\tilde υ, \tilde υ'). As usual, we define the left translation l_{\tilde υ}: P = M × V → P' = M × \tilde υ^{−1} · V and the right translation r_{\tilde υ}. Let us consider the group G_M = T_M ⊗ V, where T_M is the group of space-time translations. The group G_M is parameterized by the pairs (\tilde t^µ, \tilde υ^i), has the corresponding product multiplication law, and acts on M non-effectively: x'^µ = x^µ + \tilde t^µ. One can define the left action of the group G_M on the principal bundle P: x'^µ = x^µ + \tilde t^µ, υ'^i = l^i_{\tilde υ}(υ). The group G^g_M is parameterized by functions \tilde t^µ(x), \tilde υ^i(x) satisfying the corresponding condition. The multiplication law in G^g_M is given analogously, and (10) determines the non-effective action of G^g_M on M, with the gauge group V^g as the kernel of non-effectivity. The group G^g_M has the structure Diff M ⋉ V^g (a semidirect product), acts on P as x'^µ = x^µ + \tilde t^µ(x), υ'^i = l^i_{\tilde υ(x)}(υ), and is the group aut P of automorphisms of the principal bundle P. Let us deform the group G^g_M → G^{gH}_M with the help of deformation functions possessing additional properties, of which property 3) reads: H^µ(x, \tilde t, \tilde υ) = \tilde t^µ for all \tilde t ∈ T, \tilde υ ∈ V, x ∈ M. The deformed group G^{gH}_M is parameterized by functions t^µ(x), υ^i(x). Obviously, the group G^{gH}_M, like the group G^g_M, has the structure Diff M ⋉ V^g and acts on P according to (11), where the functions K^i(x, t(x), υ(x)) are determined by the equation K^i(x, t(x), υ(x)) = \tilde υ^i(x). Properties 3) and 4) imply that among the deformation coefficients of the group G^{gH}_M only h^i_µ(x) = ∂_{\tilde µ} H^i(x, \tilde t, 0)|_{\tilde t=0} =: −A^i_µ(x) (where ∂_{\tilde µ} := ∂/∂\tilde t^µ) is x-dependent.
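The formulae connecting the deformed and undeformed generators, and the commutation relations referred to in Theorem 1, are not reproduced legibly above. A plausible reconstruction, consistent with the definitions of the deformation coefficients h^a_α(x), their inverse h^α_a(x), and the structure functions F^a_{bc}(x) (again our reading, not a verbatim quotation), is:

\[
X_a = h^{\alpha}{}_{a}(x)\,\tilde X_{\alpha}, \qquad
[X_a, X_b] = F^{c}{}_{ab}(x)\, X_c, \qquad
[\tilde X_{\alpha}, \tilde X_{\beta}] = \tilde F^{\gamma}{}_{\alpha\beta}\, \tilde X_{\gamma},
\]

i.e. the deformation acts on the generators as a point-dependent nondegenerate linear transformation, and the structure constants of the undeformed group are replaced by structure functions.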
The generators of the G^{gH}_M-action (11) on P split into the pair X_µ, X_i, where the X_i are the generators of the left action of the group V on P. This results in a natural splitting of the tangent space T_u at any point u ∈ P into the direct sum of subspaces T_u = T^τ_u ⊕ T^υ_u. The distribution T^τ_u is invariant with respect to the right action of the group V on P, and T^υ_u is tangent to the fiber. Hence T^τ_u can be treated as the horizontal subspace of T_u, and the generators X_µ as covariant derivatives. This defines a connection in the principal bundle P, and the deformation coefficients A^i_µ(x) are the components of the connection form, which on the submanifold M ⊂ P may be written as ω^i = A^i_µ(x) dx^µ. The necessary condition for the existence of the group G^{gH}_M, namely relation (8), applied to the generators X_µ yields relationship (12), in which the structure functions of the group G^{gH}_M appear together with the structure constants \tilde F^i_{jk} of the Lie group V. Relationship (12) can be rewritten as equation (14), in which a certain form plays the role of the curvature form on the submanifold M. So equation (14) is the structure equation for the connection which has been set on the principal bundle P by the action of the group G^{gH}_M.

Theorem 2. Acting on the principal bundle P = M × V, the deformed group G^{gH}_M = Diff M ⋉ V^g endows P with the structure of a connection. Any connection on the principal bundle P = M × V may be obtained in this way [7]. This theorem realizes Klein's Erlanger Program for fiber bundles P = M × V with connection.
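The explicit forms of relations (12) and (14) are likewise not legible in the extracted text. For orientation, the standard expressions to which they should correspond, written in terms of the connection components A^i_µ(x) and the structure constants \tilde F^i_{jk} of V (our reconstruction; signs and factors depend on the conventions chosen for the group action), are:

\[
X_\mu = \partial_\mu - A^i{}_\mu(x)\,\tilde X_i, \qquad
[X_\mu, X_\nu] = F^{i}{}_{\mu\nu}(x)\,\tilde X_i, \qquad
F^{i}{}_{\mu\nu} = \partial_\mu A^{i}{}_{\nu} - \partial_\nu A^{i}{}_{\mu} + \tilde F^{i}{}_{jk}\, A^{j}{}_{\mu} A^{k}{}_{\nu},
\]
\[
\Omega^{i} = d\omega^{i} + \tfrac{1}{2}\, \tilde F^{i}{}_{jk}\, \omega^{j} \wedge \omega^{k} = \tfrac{1}{2}\, F^{i}{}_{\mu\nu}\, dx^{\mu} \wedge dx^{\nu},
\]

with Ω^i playing the role of the curvature form on M in the structure equation.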
Practical Reasoning and Practical Argumentation: A Stakeholder Commitment Approach

This paper examines the conceptual and terminological overlap between theories and models of practical deliberation developed within the fields of Practical Reasoning (PR) and Practical Argumentation (PA). It carefully delineates the volitional, epistemic, normative, and social commitments invoked and explicates various rationales for attributing the label ‘practical’ to instances of reasoning and argumentation. Based on these analyses, the paper develops a new approach to practical deliberation called the Stakeholder Commitment Approach (SCA). By distinguishing between ‘problem holder’ and ‘problem solver’, and specifying the distributions of attributable commitments among the stakeholders, the SCA introduces an extension and refinement of the grounds for assigning the label ‘practical’ that brings PR and PA closer together.

Introduction

Philosophers in the fields of Practical Reasoning (PR) and Practical Argumentation (PA) have developed a variety of theories and models of deliberation as a means for resolving so-called 'practical' problems. These are problems expressed in questions such as: 'Should I buy this house?', 'What can you do to be on time for your job interview?' and 'What is the best way to improve the situation of precarious groups in society?' Within both fields, practical deliberation is usually described in terms of the specific premises leading to a conclusion about an action and the various types of commitments attributed to the agent(s) (supposedly) performing that action. As a result, the theories and models developed within PR and PA of how people address practical problems-through reasoning, group deliberation, and argumentation-show considerable conceptual and terminological overlap. Apart from these similarities, one can also identify some crucial differences. In the field of PR, for example, various cases of reasoning aimed at resolving practical problems are often labeled 'theoretical reasoning' rather than 'practical reasoning'. An example is the following inference resulting in advice to a second party: To be on time for your job interview, you should take the bike because that is the fastest means of transportation in Amsterdam. Other examples of what counts as 'theoretical' within PR are inferences expressing the benefits or consequences of certain actions and the specific conditions for carrying them out. Within PA, in contrast, such inferences often count as 'practical' because they are an integral part of a proposal for action rather than a theoretical hypothesis about reality. In this paper, we analyze prominent theories and models developed within the fields of PR and PA, focusing on their procedures for labeling (elements of) reasoning and argumentation as 'practical'. Our aim in this endeavor is not only to generate a deeper understanding of the complex relationship between the two fields but also to provide the building blocks for an integrated approach to practical deliberation, which we call the Stakeholder Commitment Approach (SCA). This approach bridges the apparent methodological distance between PR and PA by incorporating insights from both. In particular, SCA yields an extension as well as a refinement of the possible grounds for assigning the label 'practical' by looking at the involved stakeholders, the roles assigned to them, and the practical commitments (potentially) generated through the reasoning and argumentation. We begin the paper, in Sect.
2, by analyzing specific characteristics of principal theories within PR. This analysis includes the relationship between practical and theoretical reasoning, the notion of 'means-end reasoning', the nature of the involved premises and conclusion, the distinction between first-and other-person-perspective reasoning, and the distinction between sufficient and necessary means. Next, in Sect. 3, we first contrast the notions of 'argumentation' and 'reasoning'. Based on an analysis of the literature within PA, we then propose to distinguish between two main approaches: those theories and models taking practical argumentation as publicly performed practical reasoning, to which we refer as the 'public performance approach' (PPA), and those based on insights about policy debates, which we call the 'policy debate approach' (PDA). Finally, we outline the relationship between these two approaches. In Sect. 4, we investigate the key discrepancies between the discussed theories in PR and PA and identify the conditions for labeling a particular instance of reasoning or a particular piece of argumentation as 'practical'. This enables us to locate the source of the discrepancies between the labeling procedures of PR and PA in their understanding and use of commitment. In Sect. 5, we outline our Stakeholder Commitment Approach (SCA). Based on a systematic variation of the problem-related and communicative roles of the stakeholders involved in practical deliberation, we specify the distribution of commitments invoked. In particular, we propose a distinction between the roles of problem holder and problem solver as a central aspect in identifying argumentation as practical. We then articulate how this novel approach enables the inclusion of second-and third-person practical reasoning as well as a refinement of the models of practical argumentation. In Sect. 6, we conclude the paper with a short reflection on how SCA integrates insights from the fields of PR and PA, and how it compares to other integrative proposals. Furthermore, we provide an indication of future work. Practical Reasoning Reasoning has been characterized as a cognitive process, a rule-based procedure, a method for belief revision, and a tool for knowledge expansion and decision-making (see, e.g., Walton 1990). It occurs within dialogical settings between conversing interlocutors and within the monological setting of a single reasoner. Furthermore, one is not committed to a cognitive interpretation of reasoning and can likewise think of reasoning as performed by AI agents. The term 'reasoning' may also indicate the result of the above activity, e.g., a set of linked statements including premises and conclusions. In this respect, there is a close relationship between reasoning and the study of logic (see, e.g., Streumer 2010). Practical reasoning, more specifically, can be understood as the type of reasoning invoked by questions about what to do. Such questions are referred to as 'practical problems', and their answers relate to and motivate the performance of an action, ideally one that satisfactorily addresses the issue at hand. Thus far, in the philosophical literature, the phenomenon of practical reasoning has received considerably more attention than practical argumentation. In this section, we provide an overview of central themes and challenges in the field of PR. We do this by focusing on those theories that, to the best of our knowledge, have received central attention in the debate. 
1 In particular, we discuss the relation of practical reasoning to theoretical reasoning, its subcategory means-end reasoning, the nature of its premises and conclusion, the distinction between firstand other-person reasoning, and that between sufficient and necessary means. Our discussion serves to facilitate a comparison with our survey of PA in the next section. In Sect. 5, we will provide our own stance on PR. Present-day debates about what counts as 'practical' are informed by Aristotle's distinction between practical and theoretical philosophy. The term 'practical' stems from the Ancient Greek praxis (i.e., 'action') and relates to praxeology, the study of agency and action. It contrasts with 'theoretical', stemming from the Ancient Greek theōría (i.e., 'contemplation' or 'things looked at'), which refers to knowledge of things, for instance, through perception. Aristotle defined the distinction in terms of deliberation ultimately resulting in (rightful) conduct, respectively contemplation directed towards attaining truth and knowledge (Hintikka 1991). In light of this classical distinction, as practical beings (i.e., as agents), we engage in practical reasoning when addressing problems of desire, wants, means, and obligation. When engaged in theoretical reasoning, we are considered epistemic beings (i.e., knowers), addressing problems of knowledge, belief, and truth. While the above distinction may seem straightforward, a complication arises when using it to label the various elements of reasoning. For example, assessing a meansend premise of practical reasoning, such as 'taking the bike is a means for getting to a job interview', is a theoretical endeavor. The premises of practical reasoning typically concern not only desire, obligation, and intention but also knowledge and belief, reflecting the fact that practical problems engage us in the role of agents as well as knowers. This double role is a distinguishing feature of the practical perspective and is characterized by the agent's efforts to change the world based on knowing it. Two other aspects of Aristotle's account of practical reasoning relevant to understanding present-day debates are his focus on correct (ethical) conduct through deliberation (von Wright 1963) and his conceptualization of the conclusion of practical reasoning as an action: "Now when the two premisses are combined, just as in theoretical reasoning the mind is compelled to affirm the resulting conclusion, so in the case of practical premises you are forced at once to do it" (Ethica Nicomachea 1147a27-28, translation Rackham 1996, our italics). During the twentieth century, a significant shift took place concerning both aspects. First, the study of practical reasoning has been narrowed down to the analysis of means-end inferences. Second, the conclusion of practical reasoning became an intention to act or a normative claim necessitating action rather than the action itself. We now turn to discussing both shifts in more detail. The focus on means-end inferences stems from the work of Anscombe (2000), who is considered the founder of the modern study of practical reasoning. Means-end inferences are taken to consist of three types of statements: (P1) a premise expressing wants, desire, or motivation; (P2) a theoretical premise concerning the relationship between an action (as a means) and a state-of-affairs (as the action's outcome); and (C1) a conclusion expressing a normative statement, intention, or action. 
Without loss of generality, we focus on wants, necessary means, and normative statements. The corresponding means-end scheme (S1) is: (P1) I want X; (P2) I know Y is the only action leading to X; (C1) Hence, I must do Y. Von Wright (1963) refers to this scheme as the primary practical inference. In (s1) we present an instantiation of this scheme expressing a normative commitment (must) to an action (taking the A-train) that is the only means for accomplishing the given end (to go to Harlem). The example is borrowed from Condoravdi and Lauer (2016). (s1) I want to go to Harlem; Taking the A-train is the only way for me to get to Harlem; Hence, I must take the A-train. There are also secondary schemes, which replace (P1) with a premise expressing a practical conclusion derived from earlier inferences and, thus, enable chaining.3 The reasoning in (s2) is an instantiation of a secondary scheme chained with (s1). (s2) I must take the A-train; Only through buying a ticket can I take the A-train; Hence, I must buy a ticket. Means-end reasoning, in short, enables us to determine which actions are required (or sufficient) to secure the want expressed in the first premise. Central to this type of reasoning is the second premise (P2), called 'the means-end premise', which typically expresses an instrumentality relation between an action (means) and its outcome (end) and is therefore theoretical in nature. Means-end reasoning is goal-directed and thus provides immediate guidance in ascertaining ends, desires, and goals.4 For this reason, it is considered the most prevalent type of practical reasoning-see, e.g., Clarke (1985) and Walton (2007).5

3 Clarke (1985), von Wright (1963), and Walton (2007) all talk about chaining inference schemes.
4 This view strongly relates to the influential Belief-Desire-Intention model developed by Bratman (1987) and has been extensively investigated in other fields such as Artificial Intelligence (Rao and Georgeff 1995).
5 Notwithstanding the focus on means-end inferences, many philosophers stress that this inference is just one instance of a variety of practical reasoning schemes. Audi (1991) emphasizes the plurality of practical reasoning, of which, he argues, means-end reasoning is a common practical scheme. Clarke (1985) takes practical reasoning to comprise a variety of inference schemes, ranking the means-end scheme among them. Other authors mentioning the plurality of practical schemes are Broome (2001) and Walton (2007). Walton introduces a distinction between a narrow conception of practical reasoning, which is means-end reasoning, and a broader conception which incorporates value-reasoning and extends means-end reasoning. Surprisingly, with the notable exception of Clarke (1985), the above authors only mention that there are forms of practical reasoning different from means-end reasoning but do not specify which these are. Moreover, Searle (2001) argues that practical reason is more than just (short-term) means-end reasoning and criticizes the idea that rational agents are essentially goal-driven […].

However, the associated scheme (S1) has three features that pose particular challenges. First, since the three statements are of a completely distinct nature (desire (P1), knowledge (P2), and obligation (C1)), the logical status of their relationship is unclear. Second, instantiating (S1) with other perspectives than the above first-person perspective (FPP) causes problems.
Third, premise (P2) represents the action as a necessary instead of a sufficient means to the desired outcome. We address each in turn. The conclusion of theoretical reasoning is widely recognized to be a belief or a doxastic attitude (cf. Streumer 2010). The nature of the conclusion of practical reasoning, however, is highly controversial. In the literature, we find three main candidates: (i) action (Aristotle, Ethica Nicomachea and De motu animalium-see e.g., Broadie 1991; Dancy 2018); (ii) intention (Anscombe 2000;Broome 2001;Lewiński 2021;Raz 1978;von Wright 1963); and (iii) normative statements (Audi 1991;Clarke 1985;Walton 2007;von Wright 1972). As seen from the quote at the beginning of this section, Aristotle takes the conclusion of practical reasoning to be the actual performance of an action. Von Wright (1963) argues against this position, taking the conclusion to indicate a setting oneself to act (a perlocutionary effect, nevertheless). Likewise, Anscombe (2000) emphasizes that practical reasoning does not compel any action but instead concludes with intention. According to Broome, concluding intentions is "as practical as reasoning can get" (2001, p. 175). Audi (1991) observes that, although the conclusion (whether it be an intention or a normative statement) is perhaps likely to cause the intended act, causation is not part of a reasoning process. In fact, Searle (2001) argues that the gap between reasoning and deciding is a necessary condition for rationality. This separation of the conclusion of practical reasoning from action serves to explain problematic cases such as failure to act (e.g., through incontinence, change of mind, or intervention) and weakness of the will (akrasia)-see Audi (1991) andvon Wright (1963). More recently, Dancy (2018) argues for reconsideration and modification of Aristotle's approach, taking action as the conclusion of PR. The most prevalent approach is to take the conclusion of practical reasoning as a normative judgment: a statement necessitating the agent to certain action; cf. (C1) in scheme (S1). 6 Whereas intentions and actions do not qualify as propositions implied by a reasoning process, so the argument goes, a normative conclusion does-see Audi (1991) andvon Wright (1972). Streumer (2010) adopts the view that all three types of conclusions are possible, representing various kinds of practical reasoning. The three candidates (i)-(iii) share their consideration of the conclusion as a non-descriptive statement concerning action that differs in nature from the premises from which it is supposed to be derived. Regarding the nature of these premises, Audi (1991) distinguishes between motivational and cognitive premises. The former motivate the reasoning process through the active desiring and wanting of a certain state of affairs, e.g., (P1) of (S1), and the latter express the reasoner's beliefs and knowledge about the world and the relations between actions and outcomes, e.g., (P2) of (S1). Audi stresses that a motivational commitment of the reasoner to the first premise, by means of actively desiring what is stated, and a cognitive commitment to the second premise, by means of actively believing in the accuracy of the meansend relation, are necessary for practical commitment to the conclusion drawn-cf. the sincerity condition of the speech act of asserting (Austin 1962;Searle 1969). Clarke (1985) adopts a similar position, emphasizing that (P1) expresses a volitional attitude and (P2) an epistemic disposition. 
He furthermore emphasizes that the wants, desires, or needs described in a volitional attitude additionally require awareness. For Broome (2001), practical reasoning is a rule-based process over cognitive attitudes-including beliefs, desires, and intentions-concluding in intention. Similar positions emphasizing the presence of non-descriptive content can be found in the seminal works of Anscombe (2000) and von Wright (1963). We emphasize that what these approaches have in common is that they assume the reasoner's cognitive commitment to the content of the reasoning. To avoid confusion, what Audi calls a cognitive commitment with respect to the second premise can be better called an epistemic and doxastic commitment. In what follows, we exclusively use 'cognitive commitment' as an overarching term for a reasoner's volitional, doxastic, epistemic, and normative commitments. Following the above, the reasoning process captured by the inference (S1) can thus be seen as a transition from motivations and beliefs to a (normative) commitment to action. One of the central challenges concerning such inferences is then to determine the logical relation between the involved statements and the validity of the transition from wants and beliefs to practical necessitation. In almost all of the works mentioned above, we find that the reasoner's motivational/ volitional commitment to the first premise is a distinguishing feature of practical reasoning. It is for this reason that the FPP takes up a central position in the literature on practical reasoning. This brings us to the next aspect, the essential role of the 'I' in models of practical reasoning. As discussed above, the presence of an agent actively endorsing what is desired, wanted, or intended is often considered to lie at the very heart of what makes such reasoning practical. Moreover, the view that practical reasoning must somehow affect the reasoner's intentions or actions inevitably entails the FPP of PR (Streumer 2010). Consider the following instantiation of (S1) in the third-person perspective (TPP): Billy wants to go to Harlem; (s3) Billy knows the A-train is the only means of getting there; Hence, Billy must take the A-train. While the occurrence of 'must' in (s3) denotes a normative judgment concerning what is rational for Billy to do, the reasoning itself is often not considered practical, and even labeled theoretical (see the works of Clarke 1985;Hunter 2017;von Wright 1963 for discussions). In (s3), the premises are descriptive (facts and observation) of Billy and are neither (required to be) cognitively nor motivationally endorsed by Billy. 7 Thus, so the argument goes, (S1) can only be properly practical from the first-person perspective. For von Wright (1963), too, only the first person setting concludes in practical commitment: the rise of an intention. In the third-person perspective in (s3), the conclusion expresses a necessity (normative judgment), which is descriptive as in predictive or reconstructive reasoning. In such settings, the nature of 'must' changes from practical (necessitation) to theoretical (rational prediction concerning facts). Clarke (1985), extensively discusses first-and otherperson perspectives of practical inference, distinguishing between second-person perspectives (SPP) and third-person perspectives (TPP). The SPP is employed to persuade the hearer to perform the action specified in the inference. 
The TPP has a different perlocutionary force: it intends to induce a belief in the hearer concerning the truth of the conclusion. While von Wright takes the TPP to conclude in a categorical, i.e., detached, normative judgment, Clarke emphasizes that such conclusions are most often hypothetical: 'If Billy wants to go to Harlem, then Billy must take the A-train'. 8 Only when the reasoner themselves cognitively endorses the volitional premise (P1), does the conclusion become categorical. For this reason, the FPP cannot but conclude categorically, since the reasoner always endorses their own wants. Clarke argues that only categorical conclusions are satisfactory for practical reasoning since only those can constitute an appropriate answer to the question 'What must I do?' (or 'What must X do?'). Since the central component of practical reasoning is the endorsement of a want expressed in the volitional attitude, SPP and TPP are commonly considered instances of theoretical reasoning. What von Wright and Clarke (a.o.) have in common, is that they require the reasoner's volitional commitment in the first premise as a conditio sine qua non for identifying it as 'practical'. A notable exception, in this respect, is Hunter (2017) who argues against the common view that the FPP (and even the SPP) is an essential characteristic of PR. A final important aspect of present-day models of practical reasoning is the distinction between necessary and sufficient means. The distinction particularly generates certain challenges for the role of choice-making within the reasoning process. Most of the authors mentioned above-except for Clarke (1985), Walton (2007), and Lewiński (2017)focus on practical inferences based on necessary means only. Hare (1971) points out that the overlooked distinction between sufficient and necessary means leads to a misleading focus in the philosophy of practical reasoning. Whereas reasoning with necessary conditions is more common to theoretical reasoning (deduction), reasoning with sufficient conditions is more common to practical reasoning (cf. abduction). According to Hare, the problem is that practical necessitation does not follow from sufficient means. The reasoning in (s4) may be considered invalid due to the possibility of alternative sufficient means (e.g., 'taking the car to Harlem instead of the A-train') and, thus, does not lead to any conclusion normatively binding the agent to action. I want to go to Harlem; (s4) I know that taking the A-train is a way to get there; Hence, I must take the A-train. Hare subsequently observes that, in the case of sufficient-means reasoning, the resolve to act is not a logical conclusion but a decision following the reasoning process. Likewise, Clarke (1985) states that decision-making is part of the post-deliberative phase. Walton (2007) similarly distinguishes between practical reasoning as an inferential process and practical deliberation as a goal-directed method of decision-making. Also, recall Searle's (2001) position that this gap between reasoning and deciding is in fact necessary. In contrast, Lewiński (2017) takes the decision-making process to be at the heart of practical reasoning, incorporating the weighing and selecting of sufficient means into the process. 7 Some of these accounts take the conclusion of (s3) as invalid (e.g., Clarke 1985), whereas others claim that the 'must' in the conclusion is a logical or epistemic must, rather than a practical must (e.g., von Wright 1963). 
8 Such hypothetical statements are known as anankastic conditionals. These conditionals have been considered as enthymemes of practical inferences, but contemporary linguists take them as conditionals in their own right, posing essential difficulties for the evaluation of the involved modals in the conditional. See the work by Condoravdi and Lauer (2016) for an overview.

Practical Argumentation

Colloquially, 'reasoning' and 'argumentation' are often taken to be interchangeable notions. When looking at them as products of mental activities, they share many features, most notably their constituents: both a piece of reasoning and a piece of argumentation consist of a structured set of one or more premises and a conclusion. Only when describing these activities does a crucial difference between the two notions become visible. As we elucidated in the previous section, reasoning is an individual, cognitive process in which an intelligent agent draws a particular conclusion from certain premises. Following the general characterizations and definitions of argumentation provided by, for instance, van Eemeren et al. (2014, pp. 1-7) and Wagemans (2019a, pp. 58-59), we say that arguing is primarily a social, communicative process in which someone, the 'arguer', tries to convince someone else, the 'addressee', of the acceptability of a particular conclusion by offering certain premises in support. As the latter would be superfluous if the addressee already accepted that conclusion, we can describe the pragmatic aim of argumentation as changing the attitude of the addressee regarding the conclusion from 'doubt' to 'acceptance'. From a historical point of view, the distinction between reasoning and argumentation is reflected in the coexistence of the philosophical subdisciplines of, on the one hand, logic as the art of reasoning and, on the other hand, dialectic and rhetoric as the art of philosophical debate and that of public speaking, respectively. As emphasized by Aristotle in his debate manual the Topica, in contrast to the philosopher, who reasons on their own, the dialectician has to present their reasoning in reference to another party and thus has to consider not only its content but also its arrangement and framing (Wagemans 2019b). In classical rhetoric, the similar tasks of finding (inventio), ordering (dispositio), and wording (elocutio) of the material in preparing a speech for an audience are canonized as the first three of the five so-called 'tasks of the speaker'. In short, while logic studies the abstract structure of reasoning, either in itself or underlying an argument, dialectical and rhetorical approaches to argumentation study the communicative practice of conducting discussions and giving speeches, respectively (Wagemans 2021). Based on these considerations, we can see argumentation as a means to invite others to reason. This articulation of the relationship between the two stems from Pinto, who defines 'inference' as "the mental act or event in which a person draws a conclusion from premisses" (2001, p. 31) and proposes that "an argument is best viewed as an invitation to inference, that it lays out grounds or bases from which those to whom it is addressed are invited to draw a conclusion" (2001, p. 68). Viewed from this perspective, the relationship between reasoning and argumentation is thus an asymmetric one. Reasoning manifests itself in argumentation without being restricted to it: every argumentation contains reasoning, but not every reasoning is expressed as argumentation.
Practical argumentation, more specifically, can be conceived as inviting an audience to reason about a practical problem. The asymmetric relationship between reasoning and argumentation is reflected in the theories and models developed in the field of PA. While some emphasize the cognitive and inferential aspects of practical argumentation, others focus on the social and communicative aspects. In describing their general characteristics, we propose, therefore, to distinguish between two main approaches. The first approach focuses on the specific argumentative discourse structure supporting a practical point of view. This approach is close to practical reasoning, using similar terminology for naming the premises involved and taking practical argumentation as externalization or publicly performed practical reasoning, to borrow a phrase from Lewiński (2021, p. 435). We shall refer to it as the 'public performance approach' or PPA for short. The second approach uses the notions of 'policy statement' and 'stock issues' to characterize how debates about practical problems, called 'policy debates', are (or should be) conducted. This approach is more remote from practical reasoning as it employs a different terminology and considers any set of premises supporting a practical conclusion as practical argumentation. Since it is inspired by debate theory, we refer to it as the 'policy debate approach' or PDA for short. In the remainder of this section, we discuss both approaches in more detail, paying special attention to their usage of the label 'practical'. Instead of discussing different variants of the PPA, we discuss here the most recent iteration of the 'scheme of practical argumentation' as developed by Lewiński (2015Lewiński ( , 2017Lewiński ( , 2021. This scheme is based on literature on practical reasoning (a.o., Audi 2006;Broome 2013;Searle 2001) and other instances of the PPA by different authors (a.o., Fairclough and Fairclough 2012;Walton 2007). Lewiński develops his point of view by criticizing specific aspects of practical reasoning. First of all, as he observes, "[p]hilosophical accounts of practical reasoning […] are still dominated by the first-person perspective of a single reasoning agent" (2021, p. 422)-cf. Section 2 of this article. As a result, the premises and conclusion of such reasoning are named after the propositional attitudes or intentional states involved. Paraphrasing Lewiński (ibidem): from my belief that a means m leads to achieving a goal G and my desire or intention to achieve that goal G, it is concluded that I intend to do m, i.e., that I should do m. This observation leads Lewiński to propose a first amendment, which draws from the idea of argumentation taking place within a communicative setting-see the beginning of this section. According to him, the account of practical reasoning can be improved by considering it as a social activity, thus connecting it to "an argumentative activity of deliberation […]. One main consequence of it is a shift of focus away from the internal propositional attitude of intention to some externalized and collective speech act" (2021, p. 427). In performing this shift, practical reasoning turns into practical argumentation or, as Lewiński puts it: "[practical reasoning], when publicly performed, can better be called practical argumentation" (2021, p. 435). Another amendment concerns the characterization of the conclusion of practical argumentation. 
While others have focused on the speech act of 'imperatives' and 'proposals' as the paradigmatic conclusion of deliberation, Lewiński argues that this is too restricted because it limits the conclusion of deliberation to second person singular and first person plural. He proposes, therefore, to represent the conclusion of practical argumentation as an "action-inducing speech act" (2021, p. 437).9 As mentioned above, the general scheme of practical argumentation that Lewiński (2021, pp. 435-436) presents is a summarizing account of many sources-in particular, Fairclough and Fairclough (2012)-and includes the two proposals for improvement discussed above. The scheme is pictured in Fig. 1.

Fig. 1 The scheme of PA as presented in Lewiński (2021, p. 436)

While practical reasoning concludes intentional states, the PPA to practical argumentation concludes speech acts from these intentional states. It is in this sense that the PPA takes practical argumentation as externalized practical reasoning, i.e., externalized through speech acts.10 To see this, observe the similarities between Fig. 1 and the common accounts of PR given in Sect. 2: the central role of means-end premises, the presence of values as desires and normative statements, and the resolution of various sufficient means by means of deliberation. In fact, the PPA developed by Lewiński (2021) is explicitly rooted in the PR theories of Audi (2006), Broome (2013), and Searle (2001). We believe that the importance of this externalization lies in the fact that through speech acts, more nuances can be observed in the analysis of practical arguments, including the introduction of second-person perspectives (cf. Sect. 2). Moreover, the introduction of speech acts introduces a different type of commitment to the picture: publicly accessible commitments. Furthermore, as Lewiński points out, "[p]ublicity of practical arguments invokes socially and institutionally recognizable commitments" (2021, p. 435). We come back to this in Sect. 4.

9 See also Corredor (2023) who, in this special issue, discusses the nature of the conclusion of practical argument.
10 Lewiński (2021) adopts the distinction between internal versus external. We adopt the term 'communication' for PA instead of externalized PR to stress the interaction between arguer and addressee, which we believe is more than just externalized reasoning on the part of the arguer. We come back to this in Sect. 5.

The models developed within the second type of approach to practical argumentation, which we have called the 'policy debate approach' (PDA), differ in two ways from those philosophical accounts of PR discussed in the previous section. First, they do not typically contain the premises identified in PR, such as desire and means-end premises, but rather contain premises reflecting the arguer's position regarding the 'stock issues' conventionally addressed in a 'policy debate'. Second, the PDA labels argumentation as 'practical' if it is put forward in support of a so-called 'statement of policy', which is the central claim supported and attacked by the participants in the debate. Below, we discuss these two differences in more detail, starting with an explanation of the concept of 'stock issues'.

Stock issues are questions that are typically addressed by the participants in a debate. Their content depends on the domain in which the debate takes place as well as on the nature and content of the debated claim. The notion of 'stock issue' derives from that of 'stasis' or 'status', a term used in classical rhetoric theory for indicating the main topics or points of discussion in speeches belonging to the judicial genre. The notion has been revived in the twentieth-century debate tradition, where its application has been extended to other genres of discourse (see, e.g., Braet 1984, 1999; Carter 1988; Freeley and Steinberg 2014; Ihnen Jory 2012; McCroskey and Camp 1964; Schut and Wagemans 2014). In a recent paper, Popa and Wagemans conclude from a survey of relevant literature that descriptions of stock issues usually contain one or more of the following points: (i) Stock issues are general in the sense that they apply to more than one interaction and often, by definition, to all discussions of a certain type. For example, in a legal discussion about guilt, arguers usually draw upon the deeds of the ones involved, their knowledge of the risks, aggravating and attenuating circumstances, alibis, and the like. […] (ii) Stock issues have normative force in the sense that the speakers are expected to address them in their argumentative discussions-choosing and ordering them relative to the institutional setting in which the discussion takes place […]. (iii) Depending on the context, stock issues are accompanied by a decision rule which stipulates the weight of each issue in the ultimate decision and thus directs the parties from exchanging arguments pro and con to taking a decision based on the exchange. In the legal context, such decision rules are stipulated by law […]. In less formalized contexts, more often than not they remain implicit and thus need to be reconstructed in order to fully understand the motivation for the decision. (Popa and Wagemans 2021, p. 130)

While each genre of argumentative discourse has its own specific set of stock issues, in general, the term is reserved for the standard issues to be addressed in so-called 'policy debates', which center around a particular statement of policy (e.g., 'The government should increase income tax'). The first of these issues is called 'problem' (or 'harm'), and the main reason the proponent should address this issue is that when there is no problem, there is no need for action either. Even if the statement of policy defended by the proponent contains the best plan among competing alternatives, if the status quo is unproblematic, there is no need to change it. A similar reason applies to the two stock issues called 'urgency' and 'inherency'. Apart from the existence of a problem, the proponent should prove that the problem is urgent and inherent to the status quo, i.e., caused by a factor that is characteristic of or belongs to the present situation. Even if a problem has been identified, if it is neither urgent nor inherent in the status quo, there is no need for action. A fourth stock issue is called 'solvency'. This issue relates to the requirement that the policy should have an effect such that the problem is solved. Another stock issue, called 'workability', requires the policy to be feasible. The last point that the proponent must demonstrate is that, in case of undesirable side effects, the positive effects outweigh the negative ones. This stock issue is called 'advantages' or 'cost-benefit'. Within an actual policy debate, the specific content of the premises put forward in support of a statement of policy may seem somewhat arbitrary since they depend on the particularities of the case at hand and the subjective contributions of the participants to the discussion.
The general idea behind the PDA is that these premises reflect, to a lesser or greater extent, the stock issues involved. The latter express particular presumptions, expectations, and conventions regarding how such debates are usually conducted and therefore have a certain normative force. In Fig. 2, we picture the stock issues described above in the form of premises, i.e., in the way they are addressed by the proponent in the debate.

Fig. 2 A generic argumentation structure for policy debates

The stock issues 'problem' or 'harm', 'urgency', and 'inherency' address potential criticisms regarding the relationship between the premise that the proponent is defending a good action (or, when there are alternatives, the best action) and the conclusion that the action should be carried out. This group of stock issues can therefore be interpreted as premises supporting the relationship between the conclusion and the second premise. The stock issues 'solvency', 'workability', and 'advantages' or 'cost-benefit' address potential criticisms regarding the relationship between the second premise that the proposed action leads to the result in question and the first premise that the proposed action is good or the best. This second group of stock issues can, therefore, be interpreted as premises supporting the relationship between the second and the first premise. This representation of the main stock issues in a generic argumentation structure for policy debates is modeled on Wagemans (2016), who indicates how the issues specified in classical rhetorical status theory can be interpreted in terms of a generic argumentation structure for legal debates. The primary purpose of the representation is to indicate the argumentative function of the stock issues in policy debates. In the literature, one finds several partial instantiations of this generic structure. Ihnen Jory (2012), for instance, provides a detailed representation of stock issues as supporting premises in pragmatic argumentation, and van der Geest (2015) presents a structure of argumentation supporting a choice containing several stock issues as sub-premises.

While the PPA and the PDA thus differ in their characterization of the content and structure of the premises of practical argumentation, their conceptualization of its conclusion is very similar. As is clear from the general model just presented, the PDA takes practical argumentation as support for statements of policy. This is related to its origins in debate theory, where it is common to make a distinction between three types of statements participants in a debate may put forward (see, e.g., Broda-Bahm et al. 2004; Kruger 1975; Schut and Wagemans 2014; Skorupski 2010; Freeley and Steinberg 2014; Wagemans 2023): statements of fact, statements of value, and statements of policy. Wagemans (2023, p. 125) defines a statement of policy as "a directive or hortative statement that expresses advice to do something or to refrain from doing something". Statements of policy typically predicate of a specific act (course of action, policy) that it should be carried out and may also include as their constituents an actor, an object of the act, and a temporal indication. An example of all these constituents being present is 'The city of Vienna should legalize soft drugs in 2023'. Linguistically, statements of policy are expressed in various ways: as incitements, advice, imperatives, or proposals (ibidem).
In this respect, the PDA conceptualization of the conclusion is close to that of the PPA, which works with a similar set of statements expressed in terms of speech act theory. Last, we briefly indicate the relation between the PDA and PPA, and their distance to PR. As is clear from the above analyses, the PPA takes practical argumentation as publicly performed practical reasoning employing a variety of speech acts. The PDA, by contrast, takes practical argumentation as any kind of argumentation with a statement of policy as its conclusion, irrespective of its premises. By implication, every instance of a practical argument in the PPA is also an instance in the PDA (or can be rephrased in terms of it), but not vice versa. Concerning PR, we find that the PPA is closer in many respects, assuming the presence of practical reasoning in any practical argument. However, an important difference is that the PPA includes second-and third-person perspectives, mainly by virtue of the speech act involved. The PDA is more remote from PR since it considers particular instances of arguments that are by definition excluded in the main approaches to PR as 'practical'. We now turn to analyzing these discrepancies in greater detail. Understanding the Discrepancies In Sect. 3, we characterized the relationship between the social, communicative activity of argumentation and the individual, cognitive activity of reasoning by describing argumentation as an invitation to engage in reasoning. The arguer invites, so to speak, the addressee to reconstruct a particular instance of reasoning by offering a set of premises and a conclusion as its 'materials'. Reasoning thus occurs within argumentation, providing the arguer with the content to be communicated and the addressee with the means for reconstructing the reasons offered. The arguer's aim in this activity is to incite commitment to the conclusion on the part of the addressee. Argumentation also brings along specific commitments related to the act of arguing itself. As van Eemeren et al. put it: rather than being just an expressive act free of any obligations, as a rational activity of reason, argumentation involves putting forward a constellation of propositions the arguer can be held accountable for. The commitments created by argumentation depend not only on the propositions that are advanced but also on the communicative function they have in the discourse. (van Eemeren et al. 2014, p. 5, original italics) To understand the discrepancies between PR and PA (both PPA and PDA), it is helpful to note they have a different understanding of commitment: in PR, commitments are actual commitments in terms of cognitive attitudes, and in PA, commitments are reasonably attributable commitments based on the felicity conditions associated with the type of utterance, i.e., the speech act involved. While, in the context argumentation, the arguer may try to elicit cognitive commitments by inviting the addressee to reason, attributable commitments can be seen as public or interpersonal commitments, i.e., they derive from argumentation as a form of communication governed by social conventions. In the remainder, we recapitulate how the discussed approaches assign the label 'practical' and how this relates to the distribution of various types of commitments among the participants in the activities of reasoning and arguing. As discussed in Sect. 
2, most accounts of PR only consider reasoning from the first-person perspective (FPP), concluding in 'I should (not) do X', as practical, and label any reasoning with a conclusion addressing an agent other than the reasoner as 'theoretical'. The practical necessitation expressed in the conclusion represents the agent's normative or intentional commitment to a given action. For assigning the label 'practical', however, the reasoner must additionally have specific cognitive commitments to all the involved premises, containing at least one volitional or normative commitment. The reasoning in (1) below, for instance, generates a practical commitment to the prescribed action only if the reasoner endorses both the involved desire (a volitional attitude) and the belief concerning the best means available (an epistemic attitude). (1) I want to go to Amsterdam, and taking the train is, all things considered, the most optimal, hence I must take the train. Thus, we say that in PR it is the notion of commitment as cognitive attitude together with the expression of an intention or necessitation in the conclusion that justifies the label 'practical'. The PPA, as we explained in Sect. 3, opens the door to labeling reasoning from the second-person perspective (SPP) and the third-person perspective (TPP) as 'practical' because it generalizes practical reasoning to practical argumentation via the inclusion of speech acts. That is, practical reasoning may conclude in practical argumentation by speech acts such as 'Just take the bike!'. At the same time, we point out that the PPA assumes a more general notion of practical reasoning than the common approach to PR. Take, for instance, the practical argument (2a)-(2b): (2a) You want to be at work as soon as possible, so you should take the bike. (2b) Just take the bike! The reasoning in (2a) is considered theoretical reasoning, not practical, in common approaches to PR, even though (2a) and (2b) together classify as a practical argument in the PPA. Therefore, we argue that the PPA assumes a wider conception of practical reasoning, closer to the inclusive account provided by Clarke (1985). Furthermore, the use of 'practical' in the PPA is closer to PR than the one in the PDA, for which all argumentation in support of statements of policy is labeled as 'practical'. We emphasize that the PPA's account of commitment differs from the one adopted in PR. Rather than referring to cognitive attitudes-i.e., in psychological terms-the PPA conceives commitments as reasonably attributable commitments following from the felicity conditions associated with performing specific speech acts, i.e., in communicative terms (see also Macagno and Walton 2018). In the PDA, the nature of the claim receiving support through argumentation is the only criterion for such labeling, which occurs irrespective of whether the support contains, for instance, premises expressing volitional attitudes. In contrast to practical reasoning, despite the central role of commitments in argumentation theory in general (see, e.g., Walton and Krabbe 1995), the notion of commitment does not play a decisive role in the PDA understanding of 'practical'. For example, when (1) occurs in an argumentative setting, it would be classified as practical argumentation by virtue of the nature of its conclusion only. In both the PPA and the PDA, practical argumentation involves types of reasoning that are commonly not labeled 'practical' in the context of practical reasoning. 
Thus, the label 'practical' is assigned differently for PR and PA (PPA and PDA). Since any argumentation contains reasoning, an instance of reasoning/argumentation, such as for example (3), will be labeled differently depending on the perspective from which it is analyzed. (3) You should take the bicycle since it is the best means of transportation in the center of Amsterdam (given that you want to move around in Amsterdam). When conceived as reasoning proper, the general approach in PR is to label (3) theoretical. That is, the conclusion does express necessity, but only in a descriptive way: my reasoning cannot incite a commitment or disposition to act in you, the subject of the conclusion. When analyzed in the context of argumentation, by contrast, (3) is considered practical by virtue of the nature of the conclusion 'You should take the bicycle', which is an imperative in the PPA and a policy statement in the PDA. How to explain this discrepant use of 'practical'? To answer this question, we first note that the labeling procedures in PR and PA adopt two different perspectives on the reasoning/argumentation under scrutiny: both assign a central role to the nature of the involved claim, be it a proposition or a speech act. Still, within PR, the label 'practical' is assigned under stricter conditions, namely, that the cognitive commitments of the reasoner must be of a specific kind (volitional, intentional, or normative). These stricter conditions are why PR focuses mainly on FPP reasoning, excluding SPP and TPP altogether, and may thus be identified as the cause for criticism of PR from the perspective of PPA (cf. Lewiński 2021). From the perspective of PA, we see that, although argumentation might generate various cognitive and attributable commitments, these commitments do not play a decisive role in calling instances of argumentation 'practical', neither in the PPA nor the PDA. For the PDA this is straightforward. To see this point for the PPA, we observe that the types of premises and the speech acts involved qualify a piece of argumentation as practical. The use of speech acts inevitably involves commitments, but it is the speech act that serves as a classifier. From this analysis we conclude that the notion of 'commitment' plays a central role in understanding the discrepancies between the common procedures in PR and PA for labeling something as 'practical'. In the next section, we use this insight as a starting point for developing an integrated approach to practical deliberation that specifies the distribution of commitments among the stakeholders involved in the process.

The Stakeholder Commitment Approach

Based on our analyses of the relationship between theories and models of PR and PA, we propose in this section our Stakeholder Commitment Approach (SCA). As the name indicates, this approach centers around the commitments of the stakeholders engaged in practical deliberation, be it reasoning or argumentation. Starting from the idea that an interactive engagement to convince the addressee to accept, and so commit to, a practical conclusion is a central aspect of practical argumentation, in our approach to practical reasoning and argumentation, we focus on the different roles of stakeholders and the way this influences the distribution of commitments among them. Before explaining the SCA in full detail, we take three preparatory steps.
The first one is to introduce a distinction between the problem holder, i.e., the party that holds the problem, and the problem solver, i.e., the party that is invited or necessitated to solve it. For most accounts of PR, we note, this distinction collapses because, in the FPP, the problem holder is the problem solver (an exception is Clarke 1985). 11 The one who reasons is both the one who desires and the one who is necessitated (or, more generally, addressed) by the conclusion, so commitment relates to a single agent called the reasoner. The SCA employs this distinction between problem holder and problem solver, together with insights from PA, to render accounts of PR more inclusive, e.g., not prima facie excluding SPP and TPP. In PA, both in the PPA and PDA, the situation is different as the roles of problem holder and problem solver may be distributed over the (in)directly involved parties (e.g., I, you, we, they) in many ways. Consequently, commitments can likewise be distributed among different parties involved. While this distribution is underspecified in PA, the SCA provides a fine-grained specification of parties and commitments. As a second preparatory step, we adopt the notion of 'practical commitment' as an additional criterion for labeling argumentation as 'practical'. Henceforth, using the terminology of step one, we define a practical commitment as a commitment to action by the problem solver. Consider (4), in which the identified problem is 'being on time at a band rehearsal'. The problem holder is me. The problem solver is you. Namely, by lending me your bike, I can be on time for my rehearsal. (4) To make it in time for my band rehearsal, you must lend me your bike. We say there is a practical commitment in (4) when you commit as a problem solver to lending me your bike. For instance, your replying with 'of course, here is my bike' would yield such a practical commitment. We use the term practical commitment to emphasize the commitment's relation to action and to differentiate it from theoretical commitments such as doxastic commitments ('I believe that…') and epistemic commitments ('I know that…'). Nevertheless, the two do not form an exhaustive partition, e.g., neither subsume bouletic commitments ('I want…'). The previous sections demonstrated that reasoning and argumentation involve different types of commitment: reasoning deals with cognitive (or psychological) commitments, whereas argumentation deals with commitments that can be reasonably attributed based on communicative conventions governing speech acts. Combined with the distinction between practical and theoretical commitments, we have at least four types of commitments. In brief, with the SCA we obtain different degrees of 'practical', based on how various types of commitments are distributed among the arguer, the addressee, and third parties, as well as among the problem holder and problem solver. As a third and final preparatory step, we emphasize that arguing aims at generating cognitive commitments and thus may cause reasoning previously labeled as 'theoretical' to become 'practical'. Consider (5), put forward by me in a dialogue between you and I. (5) Remember that we want to go to Amsterdam, and since this is only possible if you fill out this form in time, you should fill out the form! (It appears that I am slightly stressed.) The problem solver in (5) is you and the alleged problem holders are you and I together. 
However, whether (5) is practical depends on the context of this dialogue, e.g., see (6) in response to (5). From the perspective of PR, (5) is theoretical, and although I have a volitional commitment to the motivational premise (wanting to go to Amsterdam), the conclusion is not practically necessitating. However, in arguing with you, I invite you to reconstruct my reasoning and to accept the corresponding conclusion. In doing so, I invite you to reconstruct my theoretical reasoning practically. Depending on whether you (i) accept my reasoning, (ii) are volitionally committed to the first premise, and (iii) are epistemically committed to the second, the reconstructed reasoning will become practical. If you disagree with either one of (i)-(iii), you may still hypothetically agree with the reasoning. For instance, you may retort with (6). (6) Indeed, if we want to go to Amsterdam, I must indeed fill in the form (but I don't want to). In your response, I may attribute to you a theoretical commitment, not a practical one. 12 Only if the addressee is committed to the involved premises in the accepted argument, and one of those commitments is practical or volitional, does the reconstructed reasoning become practical. Recall that the PPA and the PDA would label all of (1)-(6) practical. We stress that although the activity of arguing aims at generating cognitive commitments, the assessment of such commitments is done via communication. That is, the arguer (or audience, for that matter) can only evaluate the success of an argument through the communicated response of the addressee and the attributable commitments it generates. This differentiation is rather intricate but should not cause any complications in what follows. Namely, henceforth, when we talk about commitments in a piece of argumentative discourse, we mean attributable commitment and assume that the attribution is the result of some (implicit) communication. The SCA systematically investigates practical argumentation by looking at the stakeholders, the roles assigned to them, and the practical commitments (potentially) generated through the reasoning contained in the argumentation. Stakeholders are the actual persons (indirectly) involved in the argumentation, and they can be assigned (several) different roles. In our approach, we distinguish between problem-related roles-problem holder and problem solver-and communicative roles-arguer, addressee, and third party. The communicative role of 'third party' is assigned to those stakeholders absent in the activity of arguing but present in the subject of the argumentation. For example, the argumentation in (7) has three stakeholders: 'I', 'you', and 'the government'. You and I have the role of problem holder, whereas the government has the role of problem solver. The communicative roles of I, you, and the government are arguer, addressee, and third party, respectively. (7) You and I want climate change to stop, so the government should reserve more of its annual budget for reducing CO 2 emissions. Table 1 represents the different distributions of roles for a given instance of argumentation, which enable us to determine the presence of practical commitments. For instance, when the arguer is the problem solver, there is practical commitment. Whenever the addressee is the claimed problem solver, there is an invitation to a practical commitment by means of argumentation. 
When neither the arguer nor the addressee is the problem holder or solver, there is no potential for practical commitment because none of the stakeholders has a problem-related role. The different distributions of presence of, invitation to, and absence of practical commitment provide us with a way to distinguish between different degrees of practicality in argumentation. Instance (I) is the most practical since there is practical commitment concerning both the problem and its solution. In the case of (IV), there is an invitation of practical commitment for the addressee. In (VI), only the solution can potentially be practically committed to. Case (IX) is the least practical since, although we can identify a problem holder and solver (cf. statements of policy), due to the third party's absence, there is no potential of generating practical commitment through this instance of argumentation. 13 In its most general characterization, we may say that SCA stipulates that an instance of argumentation is practical (to some degree) whenever there is a problem holder and problem solver identifiable in the argumentation. The degree of practicality is determined by the different commitments distributed among the stakeholders. We point out that within the context of PA, the cognitive commitments of the parties involved are not and are also not required to be considered because in argumentation, attributable commitments suffice (see Sect. 4 and the beginning of this section). Last, we remark that in the case of group deliberation, we take 'we' as the collective arguer consisting of both the arguer and the addressee. The distribution of roles in (I) tells us of the presence of practical reasoning in the arguer, who is both the problem holder and the problem solver. The argument thus communicates a commitment of the arguer to both the problem (cf. volitional commitment) and its solution (cf. normative commitment). The reasoning in (8) is an instance of (I). In communicating the argument, the arguer presupposes some doubt in the addressee. For instance, I believe that you disagree that I should take the bike because you might think public transport is faster. In that case, you disagree theoretically with my practical reasoning. (8) If I want to be at work on time, I should take the bike since it is the fastest option. In (II), the arguer tries to convince the addressee to solve their problem for them. Part of the argumentation here is directed at convincing the addressee that the arguer's problem must be solved. Take, for instance, (9), where you are my parent. (9) You must make me a sandwich since I am hungry. You may agree, in (9), with me being hungry, but still, you may believe that you are not the one that must solve the problem. Your response may be the following: 'Well, you are old enough to make your own sandwich'. Instance (III) can be seen as an argument that aims at helping the addressee with solving their problem. In such cases, to qualify as an argument, the arguer may, for instance, doubt whether the arguer is the right person to solve the problem, or whether the addressee has a problem at all. The reasoning in (10) is an example. In those cases, the practical commitment on behalf of the arguer concerning solving the problem indicates that the problem solver has an implicit commitment to the addressee's problem (e.g., 'I don't want you to be hungry' or 'I am your parent and I have a duty to make sure you are not hungry'). (10) You are so hungry! I should make you a sandwich. 
Scheme (IV) can be seen as a form of advice: 'If this is your problem, you should do that' (or it may be conceived of as patronizing 'you have this problem, you should do that to solve it'). In order to qualify as an argumentation, and not as an explanation, we must assume here that the addressee who receives advice for their problem, will not readily accept this advice: you may disagree with having a problem (that needs to be solved), with the proposed means to solve it, or with some of the inferences applied. The instances (V) and (VI) of Table 1 are practical in the sense that the arguer (V) or addressee (VI) may become practically committed to a certain action based on the argumentation. It must be noted that although the problem holder is the third party, once the arguer or addressee becomes practically committed to solving the problem, it is reasonable to assume a commitment from the solver to the problem itself (which is not only hypothetical). In such cases, there could be an additional motivational premise at play to ensure the practical commitment to solving the problem (e.g., I want to help solve the third party's problem). (11) is an example of (V). (11) They want to go to Amsterdam, so I must get the paperwork ready. Communicating an 'I must', such as in (11), suggests a commitment to the problem that has to be solved (e.g., 'I want them to go to Amsterdam'). Example (12) is an instance of (VI). In such cases, there is no initial practical commitment involved, although the addressee is invited to solve the third party's problem. Whether such commitments, in fact, arise is something that the course of the discussion will decide. (12) They want to go to Amsterdam, so you must get the paperwork ready. (I might be your boss, in a grumpy mood.) Both cases (VII) and (VIII) can be called practical argumentation due to the fact that either the arguer or the addressee is committed to the problem being solved. The fact that the problem solver is a third party indicates that no direct practical commitments can be generated through the argumentation. Think of cases in which the addressee and the arguer are trying to find out (theoretically) what the best conduct of the third party would be in order to solve the arguer's problem, as in (13), or the addressee's. (13) I want climate change to stop, so the government should reserve more of its annual budget for reducing CO 2 emissions. In the case of (IX), where both the problem holder and problem solver are third parties, the argumentation occurring between the arguer and addressee is not about generating practical commitments anymore. Such argumentation may be rightly called theoretical since none of the commitments involved is directed toward action. The argumentation in (14) is an instance of such third-party argumentation. We stress that we label such argumentation still as practical due to the presence of an explicit problem holder and problem solver. (14) Billy wants to go to Harlem, so Eduard should book Billy's ticket for the A-train today. (Provided the arguer and addressee are neither Billy nor Eduard.) We point out that under the PDA all instances (I)-(IX) would be labeled practical with the same degree. We can now better understand how the SCA provides refinements in types of practical argumentation by looking at problem-related and communicative roles in the argumentation. By taking into account the different roles of the involved stakeholders, we can also introduce an extension of the analysis of practical reasoning. 
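To make the role bookkeeping behind Table 1 easier to survey, the short Python sketch below encodes the nine distributions together with the commitment rules quoted above. It is offered purely as an illustration and is not the authors' formalization: the holder/solver assignments for (I)-(IX) are reconstructed from the worked examples (8)-(14), and all names in the code are hypothetical.

# A minimal illustrative sketch (not the authors' formalization) of the SCA
# role bookkeeping behind Table 1. The holder/solver assignments for (I)-(IX)
# are reconstructed from the worked examples (8)-(14); the helper names are ours.
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    ARGUER = "arguer"
    ADDRESSEE = "addressee"
    THIRD_PARTY = "third party"

@dataclass
class ArgumentInstance:
    problem_holder: Role
    problem_solver: Role

    def solution_commitment(self) -> str:
        # Rules as stated in the text: the arguer as solver communicates a
        # practical commitment; the addressee as solver is invited to one;
        # a third-party solver yields no direct practical commitment.
        if self.problem_solver is Role.ARGUER:
            return "practical commitment (arguer)"
        if self.problem_solver is Role.ADDRESSEE:
            return "invitation to practical commitment (addressee)"
        return "no direct practical commitment"

# The nine distributions, e.g. (I) = FPP practical reasoning as in (8),
# (IV) = SPP advice, (IX) = third-party-only argumentation as in (14).
TABLE_1 = {
    "I": ArgumentInstance(Role.ARGUER, Role.ARGUER),
    "II": ArgumentInstance(Role.ARGUER, Role.ADDRESSEE),
    "III": ArgumentInstance(Role.ADDRESSEE, Role.ARGUER),
    "IV": ArgumentInstance(Role.ADDRESSEE, Role.ADDRESSEE),
    "V": ArgumentInstance(Role.THIRD_PARTY, Role.ARGUER),
    "VI": ArgumentInstance(Role.THIRD_PARTY, Role.ADDRESSEE),
    "VII": ArgumentInstance(Role.ARGUER, Role.THIRD_PARTY),
    "VIII": ArgumentInstance(Role.ADDRESSEE, Role.THIRD_PARTY),
    "IX": ArgumentInstance(Role.THIRD_PARTY, Role.THIRD_PARTY),
}

for label, instance in TABLE_1.items():
    print(f"({label}) holder={instance.problem_holder.value:11s} "
          f"solver={instance.problem_solver.value:11s} -> "
          f"{instance.solution_commitment()}")

Printed out, the table shows at a glance which distributions carry a practical commitment of the arguer, which merely invite one from the addressee, and which carry none.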
Whereas traditional approaches often take the distribution in (I) of Table 1 as the only case of practical reasoning proper, we can now include SPP reasoning into the analysis: in case of successful communication on behalf of the arguer, the addressee accepts the invitation to reasoning and commits practically to both the problem and its solution. In this case, the arguer contributes to generating practical reasoning on the side of the addressee, namely, as a reconstructed instance of theoretical reasoning on the side of the arguer. Instances (I)-(IV) of Table 1 are practical in the sense that all relevant commitments are distributed over the directly present parties: arguer and addressee. Instance (I) would be a case of FPP practical reasoning in an argumentative context, (IV) denotes a case of SPP practical reasoning. (According to Clarke 1985, those cases would be practical.) Furthermore, the role assignments in (II) and (III) are instances of reasoning which have not yet been properly investigated in the context of practical reasoning. For example, here, one may investigate whether the reconstructed reasoning on the side of the addressee is practical whenever the addressee commits to solving the problem to which the arguer is committed. What does this say about the commitments of the addressee? Does an epistemic commitment suffice, e.g., 'I know that the arguer has problem X'? Or is there a subproblem expressed through the volitional commitment, e.g., 'I want to solve the problem of the arguer'? In the latter case, there is practical reasoning on the side of the addressee. In that sense, the communicative model presented in the SCA tells us something about the (potential) presence of practical reasoning in the arguer and addressee. As an example, consider the argumentation in (15). (15) I want to go to Amsterdam for work, so Billy must sign my travel forms (Billy is from human resources). Following von Wright (1972), reasoning instances such as the one in (15) implicitly contain the potential of practical reasoning. Namely, each claim necessitating an action to a second or third party that is not the problem holder implies a necessitating conclusion for the problem holder themselves. This is expressed in (16), which can be seen as a practical reasoning consequence of (15). (16) I want to go to Amsterdam for work, so I must ensure that Billy signs my travel forms. Concerning the SCA, this means that any of the instances (II) and (VII) imply arguments of the form (I), and instances of (III) and (VIII) imply arguments of the form (IV). For instance, (15) is an instance of (VII), which can be rephrased as the practically committed argument (16) belonging to category (I). Hence, through the SCA, what is commonly taken as theoretical reasoning is susceptible to reconsideration. A reconceptualization would be more in line with the inclusive account provided by Clarke (1985). Additional to classifying second-and third-person practical inferences as instances of practical reasoning, one may inquire about the distinction between these two perspectives and the role of communication, absent in the TPP, but often present in SPP through an invitation to reproduce and accept the offered reasoning. In fact, the SPP accommodates a starting point for practical argumentation. In common accounts of practical reasoning, such nuances between SPP and TPP are lost. 
By putting aside SPP and TPP as both theoretical and thereby moving them outside the scope of practical reasoning, one a priori excludes notions fruitful to a better understanding of 'practical'.

Conclusion

In this paper, we analyzed how prominent theories in the fields of Practical Reasoning (PR) and Practical Argumentation (PA) employ the label 'practical'. After explaining the discrepant use of this label, we developed an integrated approach to practical deliberation called the Stakeholder Commitment Approach (SCA). This approach yields an extension as well as a refinement of the possible grounds for assigning the label 'practical' to instances of reasoning and argumentation by specifying the distribution of the various types of commitments among the stakeholders. Both the extension and the refinement are based on our introduction of a distinction between the roles of problem holder and problem solver. While the extension serves to include those aspects of reasoning that influence practical reasoning but are commonly prima facie excluded, the refinement serves to distinguish aspects of reasoning and argumentation that are often grouped together. The extension is accomplished in comparison to PR. A central feature of the theories and models developed within this field is the plurality of constituents involved in the reasoning process: practical reasoning contains (1) a premise expressing the agent's motivational disposition towards the problem to be solved via an actual desire, want, or intention, i.e., a volitional attitude (in the case of secondary inferences, such motivational disposition is at least indirectly present); (2) theoretical reasoning by virtue of means-end premises expressing the reasoner's epistemic dispositions on actions and their potential outcomes; (3) a non-descriptive conclusion often expressing a normative judgment or intention. Since the reasoner must be cognitively committed to the premises for the conclusions to be practically binding, practical reasoning is commonly considered as reasoning from the first-person perspective. Compared to PR, the SCA is more inclusive in that it considers not only first- but also second- and third-person reasoning as practical. The differentiation between these perspectives takes place by looking at the assigned roles of the stakeholders. The refinement is achieved in comparison to PA. Based on an analysis of prominent theories and models developed within this field, we introduced a distinction between the 'public performance approach' (PPA) and the 'policy debate approach' (PDA). Compared to PR, in both these approaches, the notion of commitment plays a less important role in classifying argumentation as practical. Their criteria for labeling argumentation as 'practical' are not formulated in terms of the commitments invoked but in terms of the nature of the conclusion (the speech acts expressing it and statements of policy, respectively). Nevertheless, in the PPA, commitments are an immediate and important consequence of the involved speech acts. The SCA, in contrast, investigates practical argumentation by looking at the stakeholders, the problem-related and communicative roles assigned to them in the argumentation, and the commitments (potentially) attributable through the argumentation. As a result, it facilitates the characterization of a larger variety of different instances of practical reasoning and practical argumentation. We stress that our approach is fully compatible with both the PPA and the PDA.
Through the SCA, we gain a better insight into the practical aspects of argumentation by looking at how the distribution of certain roles influences the practical commitments (potentially) generated through acts of communication. Furthermore, the SCA allows for a better understanding of the relationship between practical reasoning and practical argumentation. On our account, practical argumentation is both more than a communicative externalization of PR (cf. PPA) and more than the mere presence of policy statements (cf. PDA). Among the remaining challenges, we count the delineation of attributable commitments in group deliberations, where the 'we' seems to include both the arguer and the addressee. One may wonder, for instance, whether 'We want to go to the party' involves a practical commitment to the problem of all members of 'we' or only of the speaker. Another issue arises when the problem or the problem holder remains implicit, as is the case in, for example, 'It starts raining, you should hurry home!' We leave such investigations for future work. Furthermore, it would be interesting to study the relationship between the criteria for labeling a piece of argumentation 'practical' and those for assessing the success of the argumentation. Successful communication may lead to a practical commitment. For instance, when I argue that you should be on time for our meeting, and you agree. However, it is not an exhaustive criterion, and failed argumentation may be practical too. For instance, you argue that I should do my taxes today, and I disagree, saying that the deadline is only next week, but I will do my taxes tomorrow instead. The initial argumentation may not be successful, but the overall argumentation remains practical. Apart from addressing these challenges, further research needs to be carried out to compare the SCA to other theories and models proposing or assuming a more integrative approach. One example of such an approach is Sàágua and Baumtrog's (2018) ideal model for practical reasoning and argumentation, which provides a detailed specification of argumentation schemes and allows for analyzing the commitments involved from the perspective of argumentation and reasoning. Another example is Baumtrog's (2018) multifaceted differentiation between dialectic, dialogue, and quasi-dialogue, reasoning and argumentation, among individual and multiple participants, which challenges the assumption that argumentation always involves conversational interchange with one (or more) other person(s) or imagining such an interchange. On a more general level, it would also be interesting to explore how SCA's viewpoint on the relationship between reasoning and argumentation relates to recent developments in cognitive psychology, which suggest empirical test results can be better explained if we hypothesize that the function of reasoning is argumentative rather than corrective (Mercier and Sperber 2011). With our new approach to practical reasoning and argumentation, we hope to contribute to connecting the two fields and exchanging insights between them, importing additional nuances in the investigation of what is 'practical'. We are better positioned to apply such a conceptual modification if we understand why the label is used in distinctive ways. The extensions and refinements of the conceptual framework not only facilitate clarity but may also yield novel questions and investigations within PR and PA and where they interact.
Remote Delivery of Service: A Survey of Occupational Therapists' Perceptions

Background: Telehealth has been declared an accepted method of occupational therapy (OT) service delivery and has been shown to be effective. However, studies done before the outbreak of coronavirus disease (COVID-19) show that most occupational therapists did not use it. Aim: The aim of this exploratory study was to examine the perceptions of occupational therapists regarding remote delivery of service following the COVID-19 outbreak. Material and methods: An online survey, including an 11-item five-point Likert scale and 2 open-ended questions, was distributed to occupational therapists. Results: Responses were received from 245 Israeli occupational therapists. The majority of the participants (60%) strongly agreed that remote delivery allows an ecological and effective intervention, while 76% strongly agreed that an ideal treatment is one that would combine telehealth with in-person intervention. Qualitative findings indicated that the most significant advantages were providing care in the natural environment and improving accessibility to the service. The most salient barriers were limitations of the therapeutic relationship and threats on clinical reasoning. Conclusion: The study results highlight the complexity of telehealth. Findings indicate that overall occupational therapists perceive remote care as an effective and legitimate service delivery method that nevertheless cannot be used as an alternative to in-person treatment. These findings can help in developing intervention programs for remote treatment and in their implementation.

Introduction

Telehealth is defined as the application of evaluative, consultative, preventative, and therapeutic services delivered through information and communication technology. 1 Telehealth is commonly interchanged with other related terms (e.g., telemedicine, telehealth, telerehabilitation, teletherapy, telecare, telepractice, etc.) and describes the delivery of care to patients through synchronous videoconferencing, asynchronous telephone calls and store-and-forward imaging, or remote monitoring. 2 Telehealth is growing rapidly and has the potential to transform the delivery of health care for millions of people. It emerges as a viable strategy that can enable individuals with disabilities to gain access to effective services, regardless of any limitations imposed by geography and local resource capabilities. In addition, it can help to overcome physical accessibility barriers and assist in cases of isolation due to extreme weather, war zones, or epidemics. 3 Over the past 2 decades, OT practice has been increasingly influenced by technological advances that have offered increasing opportunities to support telehealth. 1 Remote delivery of OT to a client who is in a different physical location than the therapist has the potential to improve functional outcomes, enhance communication and continuity of care, enhance management of chronic diseases, and promote health and wellness. 4 OT delivered remotely has proven to be highly acceptable and effective for individuals with a variety of health conditions across their life-span. 5 Efficacy has also been established in multiple studies analyzing interventions for varying populations such as children with autism, 6 adolescents with myelomeningocele, 7 adult cancer survivors, 8,9 and people with acquired brain injury.
10,11 The World Federation of Occupational Therapists affirmed the efficacy of telehealth for the delivery of rehabilitation and OT services, stating its use "leads to similar or better clinical outcomes when compared to conventional in-person interventions." 12 However, despite the encouraging declarations, 13,14 and the robust research evidence that showed its effectiveness, 15 studies done before the outbreak of coronavirus disease show that most occupational therapists in Israel as well as around the world did not use it in their routine clinical practice. 16,17 The slow implementation of remote delivery of service is also influenced by the lack of providers' acceptance and clinicians' hesitation to embrace this changing delivery model. 18,19 Indeed, occupational therapists reported little utilization and low self-efficacy with telehealth technology 20,21 and were not ready to adopt new technological systems due to increased workload. 22 In addition, the main barriers perceived by the therapists were their inability to diagnose patients and perform an evaluation process. 23 The rapid spread of COVID-19 overwhelmed health care systems worldwide. 24 One of the most important challenges during the COVID-19 pandemic seems to be its high transmissibility, necessitating social distancing as a strong defense. 25 The requirement to stay at home for extended periods of time put a strain on the health care system. Specifically, it had a profound impact on rehabilitation services, causing increasing difficulties in providing in-person rehabilitation care. The unavoidable need to communicate virtually thrust health care practitioners into the use of a telehealth service delivery model. Suddenly, telehealth was on the front lines, offering patients the opportunity to get the care they needed via telecommunications. 15,26 A recent study conducted at a large medical center in Israel found that 87% of clinicians (physicians, psychologists, dietitians, speech therapists, social workers, and nurses) recognized the benefit of telehealth via video consultations for patients during the COVID-19 pandemic. However, only 68% of the clinicians supported continuation of the service after the pandemic. 27 Since the outbreak of the epidemic, several studies have been conducted among occupational therapists. 18 These studies found overall positive perceptions toward telehealth, with most respondents satisfied and perceiving telehealth to be an effective delivery model for OT services. 5,28-30 The overarching aim of this study is to examine the perceptions of Israeli occupational therapists regarding remote delivery of service following the COVID-19 outbreak and to explore the perceived benefits and barriers.

Procedures and participants

This online cross-sectional exploratory study used a design analyzing qualitative and quantitative data. All occupational therapists who have a license from the Israeli Ministry of Health were eligible to participate. The study was approved by the ethics committee of the Faculty of Medicine at The Hebrew University of Jerusalem. The survey was anonymous and no identification data was collected. Participation was voluntary; completion of the questionnaire was considered to be consent for participation in the survey, and no incentives were offered. The survey was built using a free online survey tool (Google Forms).
Snowball sampling was used to recruit Israeli occupational therapists through institutional and personal networks, mailing lists, and closed groups related to occupational therapy on social media platforms (e.g., Facebook). The web link was available for 30 days in July 2020, a timeframe corresponding to the period between the first and second strict lockdowns due to COVID-19 and the beginning of the de-escalation phase in Israel. Because participants joined via social media, it is impossible to determine an accurate response rate. The recommended sample size using a power analysis for generalizability for a population of 5000 registered occupational therapists, using a 5% margin of error with a 95% confidence level, was 234 completed surveys. 31

Instrument

For the current study we developed an online self-administered questionnaire designed to investigate the perceptions of Israeli occupational therapists toward remote delivery of treatment. The questionnaire items were chosen based on a survey used in previous research, 32 a literature review, 15,33-39 and input from 4 occupational therapists experienced in telehealth. The final version of the questionnaire consisted of 3 sections: (1) a short demographic questionnaire whose purpose was to collect basic information regarding the participants' gender, years of experience, age, and field of work; (2) 11 quantitative questions aiming to capture information regarding the perceptions toward different aspects of remote delivery of OT, for which the respondents were asked to grade their level of agreement on a 5-point Likert scale ranging from 1 (slightly agree) to 5 (strongly agree); and (3) 2 open-ended questions about the main advantages and disadvantages of remote delivery to glean more in-depth information. The open-ended questions provided the participants with the opportunity to reflect on the quantitative-response questions and further describe thoughts, concerns, feelings, and experiences in writing. Expected duration for completing the online survey was about 5 minutes.

Data analysis

Descriptive statistical analysis was used with frequency distributions to describe the demographic characteristics of the sample and to analyze the survey results. Percentages were calculated based on the number of respondents for each question. Internal consistency within the subscale was calculated using Cronbach's alpha. Spearman's rank correlation coefficient was used to assess the association between years of experience and the perceptions toward remote delivery of OT, both for each item individually and for the average of all items together. P < .05 denoted the presence of a statistically significant difference. Qualitative content analysis of the 2 open-ended questions provided more in-depth insights into the advantages and disadvantages of telehealth care. Answers to the open-ended questions were analyzed using established methods for deductive content analysis through the following steps: (1) selecting the textual unit of analysis, (2) developing a codebook of mutually exclusive categories, (3) data coding, and (4) reporting the data by category. 40 The research team compiled the responses to the open-ended questions in Excel, then used open coding to generate codes from the responses, then categories and subcategories. The research team reached a consensus about codes, categories, and subcategories through discussion. Categories that were noted most frequently appear first.
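As a side note on the figures just mentioned, the reported sample size (and the internal-consistency statistic reported in the Results) can be illustrated with a short Python sketch. The sketch is ours, not the authors': it assumes Cochran's formula with a finite population correction, under which the reported 234 is reproduced only for an assumed response distribution of about 20% (the maximally conservative 50% assumption would give roughly 357), and the cronbach_alpha helper merely restates the standard formula, since the raw survey data needed to reproduce the reported alpha are not available here.

# Illustrative sketch only (not taken from the paper): checking the reported
# sample size with Cochran's formula plus a finite population correction, and
# restating the Cronbach's alpha formula used for internal consistency.

def sample_size(population, margin, z=1.96, p=0.5):
    """Cochran's sample size with finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return n0 / (1 + (n0 - 1) / population)

# With the conservative p = 0.5 the required n is about 357; the paper's 234
# is reproduced if the assumed response distribution is about 0.2 (our
# assumption, not a detail stated by the authors).
print(round(sample_size(5000, 0.05, p=0.5)))  # -> 357
print(round(sample_size(5000, 0.05, p=0.2)))  # -> 234

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    n = len(scores)        # respondents
    k = len(scores[0])     # items (here: the 11 Likert questions)

    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    item_variances = [variance([row[j] for row in scores]) for j in range(k)]
    total_variance = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_variances) / total_variance)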
The questions were answered in Hebrew and direct quotes were translated into English for this report by a bilingual member of the research team.

Results

This study includes a convenience sample of 245 occupational therapists, representing approximately 4% of the occupational therapy workforce in Israel (there were 5961 registered occupational therapists in Israel in 2019). 41 Table 1 lists the characteristics of the current sample population. The participants were mostly female, within a large range of years of experience. The majority of the sample work in the pediatric field, with only 9 occupational therapists working in other areas, including assistive technology and adults with ADHD. Figure 1 lists the respondents' perceptions collected for the 11 items, which were graded on a Likert scale from 1 (slightly agree) to 5 (strongly agree). Internal reliability for the quantitative questions was tested in the current study using Cronbach's alpha and found to be α = .837. The 2 items which received the highest scores were: (1) an ideal treatment is one that will combine remote delivery with in-person treatment (M = 4.11, ±1.08) and (2) remote rehabilitation treatment in the patient's natural environment allows an ecological and effective intervention (M = 3.64, ±0.94). The 2 items which received the lowest scores were: (1) remote delivery of service is an option whose overall advantages outweigh its disadvantages (M = 2.76, ±0.99); and (2) patient progression will be impaired in remote rehabilitative care (M = 2.64, ±0.98). A Spearman test was performed to examine the relationship between therapist perceptions toward remote delivery of service and years of experience. No significant correlation was found with any of the statements.

Content analysis

A summary of participant responses to the 2 open-ended questions is provided in Table 2. All answers regarding the advantages and disadvantages of telehealth care were divided into 2 broad categories, clinical and logistical. Subcategories are sorted in the order of their frequency. The n refers to the number of participants who mentioned the topic in their response; since multiple responses per participant were allowed, the percentages sum to above 100.

Providing ecological care in the patients' natural environment

Treatment in the natural environment. The main advantage, mentioned by 51% of the respondents, is that remote treatment takes place in the natural environment of the patients. Therefore, it promotes transfer of training and leads to efficient treatment in terms of functional gains. For example, one respondent shared that ". . . (telehealth is) an opportunity to transfer and generalize the treatments in a natural way." A second respondent described the uniqueness of working in the natural environment allowing the clinician to "enter the house through the screen and work with the family and equipment that are there."

Maintaining continuity of care. Remote delivery of OT allows the maintenance of the therapeutic relationship in circumstances where in-person contact cannot occur, enabling continuity of care toward promoting goals, preventing deterioration, and encouraging guided therapeutic practice. As one of the respondents noted, the main advantage of remote delivery of care is "the ability to maintain a sequence of treatments even in challenging situations such as corona or physical inaccessibility of treatment."
Another added: "The possibility to continue treatment when it is not possible to take place in person, preservation of condition and monitoring of changes and needs."

Familiarity with the patient's environment. Remote intervention was also perceived as an opportunity to get to know the patient's human and physical environment. The familiarity with the environment can guide a more precise treatment. For example, one respondent commented: "An additional perspective on the patient and his family allows us to get to know his environment better. . .." Another respondent stated ". . .The advantage in the pediatric field is seeing the natural environment, the human and physical environment which makes it possible to get real-time information about the contexts." A third respondent commented: "Treatment in the natural environment makes it possible to identify strengths and weaknesses in daily functioning by observing during real-time, compared to observing in a clinic."

Presence and involvement of the main caregiver. Remote service requires higher engagement of family members, parents, or the primary caregiver in treatment. This was noted as a significant advantage by several respondents since it enables optimal utilization of the environmental resources. This collaborative work was also seen as an opportunity to understand the needs of the primary caregiver or family. One respondent stated, for example: "If a caregiver will implement and continue what is being done even later, there may be an advantage treating in the patient's home and regular environment. Also the patient may feel safer in his natural environment." Another participant commented: "the possibility to incorporate the caregiver in the treatment (is a significant advantage)."

Improve accessibility to the service

Accessibility for patients who cannot leave their home. Approximately 23% of the respondents noted that remote delivery of service allows patients to receive treatment they otherwise would not have been able to receive at all, due to the long distance between the clinic and the patient's residence or to mobility restrictions, which are usually permanent causes. One respondent summed up: "Accessibility; For those living in the periphery or anywhere that is not close or accessible to treatments, home remedies, people with poor immune systems. . ."

Time and money saving. Respondents indicated that another advantage is the time and money saved for both the patient and therapist while using remote delivery of service, as there is no need to travel. In addition, time and money are saved due to the possibility of simultaneous treatment for several people together using a remote treatment platform. One respondent stated "(the main advantage is) cost savings of travel and time for the customer and his family. . .." Another respondent added: "Efficiency; and savings in patient and caregiver travel time. Sometimes it is even possible to treat several patients at the same time. . .." A third respondent defined simply "(remote care) makes it easier for the therapist and patient in terms of mobility."

Treatment is possible in times of emergencies. The issue of accessibility to treatment during various temporary emergency situations, for example COVID-19, war, and extreme weather conditions, was a benefit reported by 15% of the respondents.
As a respondent noted: "the possibility of reaching a person even when it is difficult to leave the house because of a physical or mental reason, or when it is difficult to reach him for example, the current situation during the corona time. . .."

Threats on clinical reasoning

A limited therapeutic relationship. Approximately 45% of the respondents strongly agreed that the unmediated connection between OT and patient cannot be replaced. They thought remote care may impact interpersonal components of the treatment and lead to a reduction of essential elements such as therapeutic relationship, trust, cooperation, perseverance, and commitment, as well as to a lack of human warmth. As one respondent wrote, "In my opinion, there is no substitute to a close relationship in terms of recruiting the patient for treatment and his commitment to his personal advancement, creating a non-verbal relationship, and a more comprehensive look at the patient. All of these are more difficult to perform remotely." Another respondent well defined the complexity of the therapeutic relationship: "(a main disadvantage is) negative influence on personal contact, (and difficulty) catching nuances that cannot be seen through a screen."

Limited ability in the use of hands-on intervention techniques. Over 40% of the respondents stated that the main disadvantage of remote delivery of OT is the absence of physical touch and the limitation on the use of treatment techniques that require it. Lack of physical contact may challenge tests of motor skills such as strength and range of motion and impede interventions that require direct touch, such as passive activation of limbs. In addition, the absence of a therapist standing close by might be problematic in terms of safety and may even endanger the patients. One respondent commented: "(While providing remote delivery of care it is) Difficult to perform assessment and treatment, passive and manual sessions in some cases, or practice functional tasks among patients with balance difficulties due to the danger of falls."

Dependency on the primary caregiver. Some respondents (20%) noted that remote care may be perceived as a burden on the primary caregiver, who is usually a parent in cases where the patient is a child. Remote intervention mostly requires their physical presence, investment of additional time, physical effort, and active collaboration beyond what is required in in-person sessions. One of the participants referred to the remote care of children who study in the special education system and honestly shared: "With children being cared within the educational system, online care requires effort from the parents and often makes the treatments difficult instead of appreciated. . .." Another respondent noted that remote care requires additional resources from the family: "The parent's ability emotionally, technologically and financially is critical in the process."

Inadequacy for specific populations. A smaller percentage of respondents (9%) noted that the main disadvantage is the inadequacy of remote OT for certain populations. For example, elderly people or people with cognitive decline might have difficulty operating the technology independently, and people with significant visual impairment or hearing loss, who will have a hard time seeing or hearing the OT through the screen, are less suited to remote care. One respondent noted: "(remote delivery of service is) not suitable for everyone and very much depends on the physical and cognitive state of the patient.
It is less suitable for cognitive therapy or for younger children." Another respondent added: "It is less appropriate for someone with significant sensory impairment (hearing and vision)."

Logistical difficulties

Difficulties with technology. About 23% of the respondents commented that various technological problems are the biggest drawback of remote delivery. These divided into 2 main issues: first, some patients are inexperienced and have limited knowledge in operating the technology; second, the technological means, some of which are complicated, are not suitable for people with disabilities and require equipment and internet infrastructure that do not always exist. One of the respondents summed it up: "(Remote delivery sometimes leads to) Technological difficulties with technological products or of the patient ability to get along with the technology."

Inadequacy in home environment infrastructure. The physical and human environment at home is often different from the existing conditions at the clinic, which therapists control to a greater extent. Several respondents (14%) indicated that the natural home environment leads to various challenges beyond technology, such as missing or unsuitable equipment, lack of infrastructure, and noise. In addition, there might be poor conditions in the physical environment such as small and crowded houses, lack of privacy, and other people around. One respondent noted: "The patient's environment doesn't always allow treatment. For example, there might be noise and different distractions."

Discussion

This exploratory study investigated the perceptions of 245 occupational therapists regarding specific elements related to the remote delivery of service during the COVID-19 outbreak in Israel. Collectively, the responses to the survey questions indicate a subjective positive attitude toward remote delivery of care. However, participants identified several benefits and barriers. The results of this survey have a number of implications for the implementation of remote OT practice initiatives, since therapists' perceptions will continue to have a significant effect after the epidemic is over. 30,42 In addition, these findings add to existing research by identifying aspects of telehealth services that need to be considered when evaluating whether telehealth is an appropriate form of service delivery, as well as identifying aspects that may need to be adapted in order to increase the feasibility and effectiveness of telehealth services. A majority of participants in this survey, in line with previous studies, perceived remote delivery of care as a suitable method for promoting functional goals (M = 3.43 ± 0.91) and quality of life (M = 3.68 ± 0.88), leading to greater independence. 4,43,44 However, the item proposing that an ideal treatment is a combination of remote care with in-person treatment was rated highest (M = 4.11 ± 1.08), while the item stating that remote rehabilitation treatment is a possibility in which the total advantages outweigh the disadvantages was rated lowest (M = 2.76 ± 0.99) on the Likert scale. Therapeutic techniques that require touch cannot be used without modifications in remote service, and this was reported by respondents as one of its limitations. This is in line with a previous study, which reported that even if it is possible to compensate for some hands-on activities that cannot be done, participants value being in the clinic.
45 However, the ecological aspect of the intervention was rated second to highest (M = 3.64 ± 0.93) on the Likert scale and was noted by many of the respondents as the main advantage in the open-ended question. The participants in the current study highlighted the benefit of interventions that meet the clients beyond the simulated clinic setting in their "real life," and specifically at home. These findings are in line with a previous study 46 that aimed to explore what occupational therapists perceive to be the values of OT. They found that one of the values perceived by occupational therapists is the ecological approach considering their clients' environments and allowing interventions in the client's natural milieu. It reflects the ultimate goal of OT to apply an intervention to promote life roles, routines, and occupational functioning in natural contexts and to take into account clients' dynamic environments and the bidirectional influences that exist between individuals and their environment. The therapeutic relationship has emerged as a complex issue. On the one hand, the participants in the current survey were concerned that remote delivery of care threatens the ability to create and maintain a therapeutic relationship. This is in line with previous reports that found health care providers considered telehealth to be a barrier in developing and maintaining a therapeutic relationship, including difficulties with the therapists' verbal and nonverbal communication abilities. 45,48,49 Moreover, occupational therapists are concerned about the potential negative impact of remote care on the therapeutic relationship. 50,51 On the other hand, most occupational therapists agreed that it is possible to develop therapeutic contact remotely, and the Likert scale question on this issue was rated fairly high (M = 3.2 ± 1.04). This result supports a recent study which found that developing and maintaining a therapeutic relationship is feasible also in a remote mode of delivery. 8 Mixed results were also obtained regarding the influence of remote delivery of service on the primary caregiver. On average, the participants did not view remote delivery as a way to reduce the burden on the primary caregiver, and this item was rated second to lowest. Moreover, the burden on the parents or primary caregiver was noted by several participants as a main disadvantage. These findings suggest that remote programs must be carefully developed to avoid increasing caregiver burden. 52 However, the necessary presence of a primary caregiver or family member during remote sessions was perceived by some of the respondents as a main advantage too. The presence requirement enables collaborative working, increased motivation, and a better opportunity to understand the needs of the primary caregiver and the family. Indeed, previous studies have demonstrated the potential of remote delivery of services to decrease caregiver burden, depression, stress, and anxiety and to increase caregivers' perceived self-efficacy for caregiving skills and social support among primary caregivers of dementia patients. 53-55 These findings show that when the treatment focuses on the primary caregiver, or alternatively provides dedicated information, it can alleviate the burden placed on their shoulders. Our conclusion joins the call to consider the impact of the intervention on caregivers, in order to reduce, and not increase, the burden on them. This can be achieved by giving direct attention to them and their needs.
Additional issues to consider are the technological difficulties that patients and therapists have to cope with during remote delivery of care. Technical issues such as internet connectivity, software availability, and limited knowledge and experience in operation were noted by respondents in this current study as main barriers to telehealth. These findings are consistent with previous studies which found that technological difficulties cause frustration among patients and providers, and constitute a disadvantage of remote rehabilitation. 5,11,45 Remote delivery of services should take the technology into account and make efforts to make it accessible to all. Moreover, training on how to use videoconferencing and other technology to deliver effective rehabilitation interventions should be integrated into the curriculum for all health providers. 56 One might hypothesize that younger people, including younger therapists, are highly knowledgeable in technology and therefore feel more comfortable using technology as a means for their clinical practice. 57,58 Surprisingly, no correlations were found between years of experience and perceptions toward telehealth. The results of this current study suggest that perceptions toward telehealth are not related to experience, but depend on the specific individual and the targeted training he or she has undergone. 59 Therefore, the assumption, which our results cast doubt on, that young clinicians are technologically capable and therefore hold more positive attitudes toward telehealth compared with older clinicians still needs empirical evidence. 60 Telehealth has been gaining traction as a service delivery method across healthcare professions worldwide, and COVID-19 rapidly expanded the exposure of occupational therapists to remote delivery of care. The results of this exploratory study highlight the complexity of telehealth, although findings indicate that, overall, occupational therapists perceive remote care as an effective and legitimate service delivery method. The results point out significant benefits for both therapists and patients, such as improving accessibility to the service, allowing the saving of time and money, providing ecological care in the natural environment, and maintaining therapeutic contact and continuity of care. Nevertheless, respondents noted several barriers, such as difficulties with technology, dependence on the primary caregiver, limitations of the therapeutic relationship, and limited ability to use hands-on intervention techniques. Several limitations of this study that should be addressed in future studies can be identified. Firstly, it should be taken into account that the survey examined the perceptions of occupational therapists only, without obtaining additional information which would have enhanced our understanding, such as previous experience in remote care delivery before and during COVID-19 or the availability of established telehealth infrastructure. The use of telehealth in Israel before the COVID-19 outbreak was rare, and the transition to telehealth that occurred during the first lockdown was minor and disorganized, based on personal initiative and equipment. Therefore, it would be reasonable to hypothesize that most of the participants did not have any significant previous experience with telehealth.
Secondly, although attempts were made to obtain a representative sample, the therapists who voluntarily participated in the survey through snowball sampling may have been biased toward either a positive or negative view of telehealth depending on their previous experience. Non-respondent bias may limit the generalizability of the results, as respondents with strong beliefs were more likely to participate. Moreover, the focus of the current work was on the occupational therapist's perspective relative to telehealth integration. A fundamental pillar of evidence-based practice in health care is the patient's values and preferences. We encourage investigators to assess the patient's experience with telehealth to see if the improved aspects of care from the provider's perspective reflect the patient's experiences. In addition, the survey was adapted for occupational therapists in Israel, who speak Hebrew, and excluded the minority in Israel who do not speak Hebrew. It is recommended in the future to adapt the survey for a sample that represents a wider population worldwide, using a variety of languages. In conclusion, these findings are in alignment with the latest position paper published by the Israeli and the American Occupational Therapy Societies, which recommends providing remote services according to the patient's preferences and needs 14,61 as an adjunct to traditional in-person contact. 30 These results reflect that, overall, therapists hold the position that in-person treatment is preferable to remote delivery of care, and that the latter should not stand on its own but should at least be integrated with in-person treatment. These results indicate that perceptions toward remote delivery of care are not dichotomous, but rather constitute a complex issue that needs to be addressed in accordance with each patient. Given that telehealth seems to be here to stay, the perceived benefits and barriers as experienced by occupational therapists in this study may inform future training initiatives and ongoing telehealth use in occupational therapy.
2022-09-09T17:39:09.603Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "11ec887bf45771b1d9c757837cabf906e1954986", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/11795727221117503", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "205593ce9e15f30fd147cd95f616c3808d679b91", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
201836557
pes2o/s2orc
v3-fos-license
Does increased interdisciplinary contact among hard and social scientists help or hinder interdisciplinary research? Scientists across disciplines must often work together to address pressing global issues facing our societies. For interdisciplinary projects to flourish, scientists must recognise the potential contribution of other disciplines in answering key research questions. Recent research suggested that social sciences may be appreciated less than hard sciences overall. Building on the extensive evidence of ingroup bias and ethnocentrism in intergroup relations, however, one could also expect scientists, especially those belonging to high status disciplines, to play down the contributions of other disciplines to important research questions. The focus of the present research was to investigate how hard and social scientists perceive one another and the impact of interdisciplinary collaborations on these perceptions. We surveyed 280 scientists at Wave 1 and with 129 of them followed up at Wave 2 to establish how ongoing interdisciplinary collaborations underpinned perceptions of other disciplines. Based on Wave 1 data, scientists who report having interdisciplinary experiences more frequently are also more likely to recognise the intellectual contribution of other disciplines and perceive more commonalities with them. However, in line with the intergroup bias literature, group membership in the more prestigious hard sciences is related to a stronger tendency to downplay the intellectual contribution of social science disciplines compared to other hard science disciplines. This bias was not present among social scientists who produced very similar evaluation of contribution of hard and social science disciplines. Finally, using both waves of the survey, the social network comparison of discipline pairs shows that asymmetries in the evaluation of other disciplines are only present among discipline pairs that do not have any experience of collaborating with one another. These results point to the need for policies that incentivise new collaborations between hard and social scientists and foster interdisciplinary contact. Introduction Complex problems faced by our society such as climate change are unlikely to be overcome by a single academic discipline [1]. Despite barriers such as lower funding rates [2] institutional support [3], interdisciplinary research is growing [4] and some funders even expect research teams to be interdisciplinary. Benefits and costs of interdisciplinary endeavours are hotly discussed among scientific community but simultaneously, little is known about the processes underlying willingness to engage in interdisciplinary collaborations. For interdisciplinary research to bear fruits, scientists must take a step outside their own subject matter and recognise the intellectual contribution of other disciplines to inspire novel ways of thinking in addressing these big issues. Otherwise, seeing one's own discipline as the only valid perspective can genuinely hinder the development of innovative interdisciplinary research. Indeed, interdisciplinary research is sometimes associated with lower productivity in terms of papers published [5]. The aim of this empirical article is to examine the experiences of scientists from a broad range of academic disciplines as they encounter and work with researchers from other disciplines. 
Addressing the 'how' of interdisciplinary research, we test whether, in line with the contact theory [6], interdisciplinary contact can foster positive perceptions of other disciplines.

From mono- to interdisciplinary research

Throughout this paper, we refer to any research involving two or more individuals from different disciplinary perspectives with the goal of producing new knowledge as interdisciplinary research. Scientific disciplines differ from one another, each involving different subject matter and methodologies. Researchers are highly socialised within those circles, as most university departments are compartmentalised into specific disciplines. One can reason about academic disciplines as cultures in the sense that physical departmental divisions foster the transmission of core values within the system and over time [7]. Indeed, research has shown that there are striking differences in attitudes, beliefs and values across disciplines [8]. Moreover, there is evidence of disciplinary socialisation effects: over the course of their training, students acquire a set of beliefs and values and learn different ways of explaining societal phenomena [8]. What happens, therefore, when a pair of disciplines that may not have shared knowledge of a research question work together? Chiu and colleagues [7] argue that academic progression within those monodisciplinary systems can be averse to interdisciplinary research. While viewpoint diversity may increase the quality of the output [9], Chiu et al. [7] take a rather pessimistic stance, arguing that exposure to other disciplines can prompt evaluation of differences and the noting of faults in other disciplines. This can lead to conclusions that other disciplines do not perceive the problems in the same way and give rise to intellectual centrism, a form of ingroup bias whereby one displays a strong preference for one's own discipline over other disciplines and fails to appreciate the potential contribution of other disciplines [10]. In other words, they propose that contact with other disciplines, by harming perceptions that other scientists can contribute intellectually to the research questions of interest, may hinder interdisciplinary research and collaboration. Social groups are typically organised within a status hierarchy, with some groups enjoying higher power and status than others [14,15]. Meta-analytic research has revealed that for both artificial groups created in the laboratory and real-life groups, intergroup bias is stronger among members of higher status groups than among members of lower status groups [16]. Consistent with social identity theory [15], this is especially likely to occur when the status hierarchy is perceived as stable and legitimate. In such conditions, members of higher status groups simply emphasise their higher value, especially in competence-related domains, in order to justify their higher status [17], whereas members of low status groups tend to display no bias or even a bias in favour of the outgroup, acknowledging the status hierarchy [18]. While researchers from different disciplines may arguably be of similar social status in the eyes of the public, it is also evident that some disciplines are perceived with higher regard and status than others [8]. Common sense distinctions between "hard" and "soft" sciences suggest that some disciplines are perceived as more prototypical of the scientific ethos than others (e.g., see [19,20]).
Generally, there is an agreement that researchers coming from hard sciences such as physics or biology may more readily be perceived as 'real' sciences, conquering discovery of scientific laws. For social sciences such as sociology or economy, on the other hand, the study of human behaviour is highly contextualised, making it more difficult to advance and test theories in highly controlled environments. Indeed, the idea that the sciences are hierarchical is certainly not a new one and it dates back to almost 200 years [21], The question, however, is whether these hierarchies impact how scholars in those disciplines perceive one another. Reflecting the hierarchical rhetoric, scholars have noted that methodologies used by social scientists may appear as more straightforward than those employed by hard sciences; social scientists' expert contribution may be deemed unnecessary [22,23]. Moreover, the suggestions that "social scientists are less likely than researchers in other disciplines to want to participate in interdisciplinary projects" ( [3], p. 525) and that they are "less optimistic about the challenges involved in interdisciplinary working" [24], p. 13) are not uncommon, even when there is little empirical evidence to back them up. In line with those characterisations, it may be expected that those belonging to relatively higher status hard sciences would display a bias in favour of their own academic discipline in contrast to the social sciences. To our knowledge, only one study has so far assessed whether such bias may exist. Kirby, Jaimes, Lorenz-Reaves, and Libarkin [25] have recently analysed data from some 400 earth scientists showing that they perceive social sciences as significantly less competent than the natural sciences, supporting the notion that social sciences are perceived as being of lower status. There was a silver lining, however: those who reported having some experience of working with social scientists held more positive views about the competence of social scientists compared to those with no experience. This highlights the potential importance of interdisciplinary contact and prompts the question of whether exposure to different disciplines may actually have positive effects on perceptions of other sciences. Can working with individuals from disciplines other than one's own foster increased appreciation for these disciplines rather than simply exposing faults and differences? Interdisciplinary contact is an intergroup contact A rich history of research on the effects of intergroup contact counting at least 700 studies shows that interacting with different others tends to reliably decrease intergroup bias and ethnic prejudice [26]. Any form of actual interaction between members of clearly defined groups usually counts as contact. This research tradition is known as the contact hypothesis [6]. Contact, when perceived to be positive by those who engage in it, is argued to 'work' because it facilitates a reduction of anxiety [27]. While most of the attention in the area has been on reducing intergroup bias, contact can also promote harmony between groups and foster diversity values [28] among other outcomes [29]. Intergroup contact can take many forms. Allport, for example, specified the ingredients that should increase the chance that high-quality contact will reduce biases. These elements are commonly referred to as conditions of contact. 
They state that: (a) group members should be of equal status, (b) work towards the same goal, (c) cooperate with one another, and (d) receive institutional support. Surprisingly, the extent to which these conditions are met has rarely been given attention [30]. If anything, in their meta-analysis, Pettigrew and Tropp argued that these conditions are not essential for contact to have positive effects [26]. In terms of interdisciplinary research, empirical evidence on the impact of contact is extremely limited and it has not been established whether benefits of intergroup contact can extend to increasing positive attitudes between scientists from different disciplines. Understanding which aspects of collaborations in interdisciplinary research are important in promoting more positive and more frequent contact is important in encouraging such projects. In line with this, some suggested that feeling of having input into the collaboration predicts more frequent engagement in interdisciplinary collaboration [31] and that the lack of institutional support may well be a barrier to interdisciplinary research [32]. While there is some data to suggest that more senior scientists engage in more frequent interdisciplinary collaborations [33], the empirical evidence focusing on both frequency and quality of collaborations and how they are related not only to demographics, but also to the four elements of contact, however, is lacking. On the other hand, contact can also differ in its occurrence rates and its duration. In the context of interdisciplinary collaborations, it means that researchers can feel that their work is more or less frequently carried out with colleagues from other disciplines overall. Both the frequency and quality of contact contribute to more positive evaluations between social groups [34]. Although evaluating contact frequency and quality is a common approach within psychological science research, such evaluations tell us little about the specific impact of collaboration pairs and their dynamics. This is where the social network approach can shed more light on how the nature and the dynamics of interdisciplinary research may impact perceived intellectual contributions of other disciplines [35]. The social network approach permits the evaluation of social interactions whereby the focus falls on the relationship between specific nodes that interact with one another [36]. A node represents an individual, or a group of individuals sharing, for example, the same discipline. Moreover, these relations can exist on multiple layers with each layer characterising a different type of relationship between two nodes [37]. Therefore, scientists coming from discipline X can position themselves in relation to other disciplines by compartmentalising the type of interdisciplinary contact they have had. We propose that this can be conceptualised at four levels: (1) scientists from discipline X have no direct experience of working with someone from discipline Y, (2) scientists from discipline X have a recent new collaboration with discipline Y, (3) scientists from discipline X have an experience of working with discipline Y in the past, but not currently in the present, and (4) scientists from discipline X have a continuing collaborative relationship with discipline Y. For example, an engineer could have an ongoing collaboration with a sociologist, but maybe they have only collaborated once in the past with a psychologist. 
Those two types of collaboration, integrated into two layers of analysis, will have varied contributions to how the engineer perceives not only sociologists and psychologists separately but also social sciences more generally. The present research The present research set out to investigate the nature and consequences of interdisciplinary contact in a two-wave online survey of researchers from a broad range of academic disciplines. Scholars need to recognise the contribution of other disciplines and perceive a sense of commonality to break down the strong 'us' and 'them' divisions in academic disciplines [13]. The first aim of the study was to examine the relationships between the properties of interdisciplinary contact (its frequency, quality, and temporal stability) and their impact on perceptions of intellectual contribution and perceived commonality with other disciplines. Given the evidence for the positive effects of contact, it was expected that having more frequent and more positive contact would be positively related to the perceived intellectual contribution of other disciplines. In addition to participant-level evaluations, we carried out a multilevel network analysis to evaluate how pairs of specific disciplines, based on the temporal stability of these collaborations, impacted perceptions of intellectual contributions of other disciplines. To evaluate the effect of collaboration history between discipline pairs, we compared disciplines that have never collaborated with one another with those for which a new collaboration between Wave 1 (W1) and Wave 2 (W2) has been reported. If commencement of a new collaboration requires an initial perception of intellectual contribution of any discipline, we expect perceived intellectual contribution at W1 to be higher among participants who have begun a new collaboration between W1 and W2 than among participants who do not report any collaboration history. Furthermore, we can evaluate whether perceptions of intellectual contribution were higher among those with sustained collaborations reported at both W1 and W2 than only those who recently began a collaboration. To provide further evidence on what may encourage interdisciplinary contact, we explored whether conditions of contact outlined by Allport [6] predicted how frequently researchers engage in interdisciplinary contact and whether these interactions were generally positive or negative. Our prediction was that all four conditions (equal status between group members, a common goal, warm cooperation, institutional support) would predict more frequent and more positive interdisciplinary contact. The second aim of the study related to evaluating the impact of scientific hierarchies between hard and social sciences on the interdisciplinary collaborations and perceptions of other disciplines. Following Kirby et al. [25], it was expected that social sciences disciplines will be perceived as less capable to contribute intellectually to scientific research questions than hard science disciplines. Moreover, going beyond Kirby et al. [25] and following the literature on the role of status in intergroup bias, it was hypothesised that there would be an asymmetry in the way scientists perceive other disciplines as a function of their group membership. More specifically, we predicted hard sciences, as a higher status discipline, would display a stronger ingroup bias, that is, a stronger preference towards other hard science disciplines than social science disciplines. 
We build on the existing research in multiple ways. First, extending on Kirby et al.'s research, we sample researchers at different career stages and from multiple scientific disciplines. Second, we provide the first direct test of the effect of interdisciplinary contact in terms of its quantity and quality on the scientists' perceptions of other disciplines. Third, we take a dynamic approach in evaluating the nature of these collaborations across time, how they relate to perceptions of intellectual contribution of other disciplines and perceived commonality, and assessing the influence of beginning a new research collaboration (versus continuing an existing one or not having any direct collaborative experience with certain disciplines). Fourth, we provide some initial evidence on the specific aspects of interdisciplinary collaborations which contribute to more frequent and more positive interdisciplinary contact. Overall, this research strived to provide essential empirical evidence that can stimulate discussion regarding the processes underpinning interdisciplinary research and how to best promote future interdisciplinary collaborations. Materials and methods This two-wave study was conducted as a part of a larger research programme which aimed to promote interdisciplinary approaches bridging social and hard sciences to build more adaptive and resilient societies. The research programme consisted of 12 partner universities in seven countries, representing 12 academic disciplines of the research programme partners (see Table 1). We sent invitation emails to all research partners to complete an online survey and asked them to forward the link to their colleagues and collaborators. The research was presented as a study of "the underpinnings of innovative research, including the mechanisms of interdisciplinary research". In total, 160 academic researchers from the hard sciences and 120 researchers from social sciences participated in the online survey at W1. They reflected all levels of seniority with 26 PhD students, 19 postdoctoral researchers, 53 assistant professors, 104 associate professors, 37 full professors and 37 head of departments (information missing for 4 participants). They were affiliated with 26 different countries. Following the completion of W1, 220 out of 280 participants left their email addresses to be contacted of which 59% participated (n = 129). Attrition analysis suggests that the dropout rates are more systematic in terms of demographics with those who are in more senior positions and hard sciences more likely to drop out at W2. However, there were no differences in the key measured variables between those who completed both waves and those who dropped out (see supplementary analyses for details: https://osf.io/6spxh/). W1 took place between December 2017 and April 2018 whereas W2 between November and December 2018. Both parts of the study were available online and took around 10 minutes to complete. Research was conducted according to the principles expressed in the Declaration of Helsinki. Participants gave written consent to participate in the study voluntarily. Their email addresses were kept confidentially and were destroyed following the W2 of the study. This research was non-interventional and without the use of deception and therefore was not required to go through the ethics committee. Table 1. 
Rotated (Oblimin) factor loadings of intellectual contribution and commonality items on two factors (social and hard sciences) extracted with principal axis functioning. Number of participants representing each discipline is also displayed. Measures employed across both waves of the online survey were almost identical with an exception of demographic variables collected only at W1 and additional variables measuring conditions of contact at W2. Not all measures collected are presented here, but we list all study materials, including a list of variables not reported in this manuscript in the supplementary materials. Demographic variables (W1) Information regarding gender, academic position, and country of residence was collected. These are included in the analyses for exploratory purposes. Participants also indicated their main academic discipline from a list of 12 which were classified as either hard or social science and all disagreements were discussed by all authors. Inter-rater reliability for coding of disciplines marked as 'other' was 87%. Factor analysis further confirmed this classification (see Table 1). Interdisciplinary contact measures Participants were asked to report on the interdisciplinary collaborations they had in terms of their frequency and quality. The single-item measure for each dimension was adapted from Tam and colleagues' research [38]. In W1, they were asked about their contact experiences in their career up to date and at W2, they reported on more recent collaborations (since January 2018). For contact frequency, they stated how frequently they work with others who come from another discipline (1 = never; 7 = very frequently). For contact quality, they rated how positive or negative these interdisciplinary experiences were overall (1 = extremely negative; 7 = extremely positive). Moving on to questions regarding specific disciplines, participants first specified which disciplines they collaborated in the past (W1) and with which disciplines they collaborated since January 2018 (W2). For those who completed both waves of the study, information regarding all pairs of discipline was extracted, creating 11 scores for each participant. Four categories of histories of collaboration pairs were derived from this information: (1) no collaboration reported in either waves, (2) a new collaboration pair reported in W2, (3) a collaboration pair that was reported in W1, but was not continued, and (4) a continuing collaboration pair reported in both waves (see Fig 1). Outcome measures: Intellectual contribution and perceived commonality We estimated the extent to which researchers viewed other disciplines as capable of contributing to intellectual knowledge underpinning the relevant research questions. All 12 disciplines were displayed and for each, participants were asked about the extent to which other listed disciplines can contribute to the research questions the participant studies (1 = not at all; 7 = a great deal). There was also an option to indicate that one of the listed disciplines is one's own in which case the item has been coded as missing and excluded from the scale construction. A higher score reflected a higher perception of intellectual contribution of scientific disciplines other than one's own (α W1 = 0.79; α W2 = 0.78). For this measure, we further split the scale into perceived intellectual contribution related to hard science disciplines and social science disciplines separately based on Table 1. 
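As an illustration of the scale construction just described, the sketch below shows how per-target ratings could be averaged into hard- and social-science subscales and how their internal consistency could be checked. This is a minimal sketch under stated assumptions, not the authors' analysis script: the file name, the discipline column labels, and the use of the pandas and pingouin packages are assumptions.

```python
import pandas as pd
import pingouin as pg  # assumed to be installed; provides cronbach_alpha()

# Hypothetical wide-format data: one column of 1-7 contribution ratings per target discipline.
# The discipline names below are placeholders, not the study's actual item labels.
hard_items = ["physics", "biology", "chemistry", "engineering"]
social_items = ["sociology", "psychology", "economics", "political_science"]

df = pd.read_csv("wave1_ratings.csv")  # hypothetical file

# Ratings of one's own discipline were coded as missing, so coerce any non-numeric entry to NaN.
ratings = df[hard_items + social_items].apply(pd.to_numeric, errors="coerce")

# Average the available items into the two subscales (higher = greater perceived contribution).
df["contrib_hard"] = ratings[hard_items].mean(axis=1, skipna=True)
df["contrib_social"] = ratings[social_items].mean(axis=1, skipna=True)

# Internal consistency (Cronbach's alpha) of each subscale.
alpha_hard, _ = pg.cronbach_alpha(data=ratings[hard_items])
alpha_social, _ = pg.cronbach_alpha(data=ratings[social_items])
print(f"alpha (hard) = {alpha_hard:.2f}, alpha (social) = {alpha_social:.2f}")
```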
Following this exercise, the internal reliability remained high for the contribution of hard sciences (α W1 = 0.70, α W2 = 0.75) and social sciences (α W1 = 0.87, α W2 = 0.83). Perceived commonality between one's own discipline and other disciplines was assessed using Venn diagrams. Participants had to choose one out of five degrees of overlap, ranging from no overlap at all (1) to almost a total overlap (5). This measure was collected at both wave points. The ratings for each discipline (other than participants' own) were merged into one score, with a higher score reflecting a higher degree of perceived commonality with other disciplines (α = 0.73 and 0.71 in W1 and W2, respectively). As with the intellectual contribution items, we further derived two variables for hard and social science targets. These also had a high internal reliability (hard science α W1 and W2 = 0.75; social science α W1 = 0.81, α W2 = 0.83). Providing evidence for the validity of these outcome measures, the perceived commonality measure was strongly correlated with the perceived intellectual contribution, r(228) = 0.65, p < 0.001 and r(119) = 0.73, p < 0.001 in W1 and W2, respectively.

Conditions for contact

Finally, W2 further included items assessing Allport's conditions for positive contact. Since no existing measure was found, new items were created. On a scale from 1 (not at all) to 7 (completely), participants were asked about the extent to which both sides of the collaboration were working to achieve the same goal (goals), the collaboration was supported by their department or lab (institution), and the interdisciplinary collaboration was meaningful and warm (warm). To assess whether the collaboration was equal, one item asked about the extent to which the collaboration was between equal partners and another item, which was reverse-coded, about the extent to which some partners were dominant over others. These two items correlated highly, r(121) = 0.60, p < 0.001, and were merged into a single variable (col_equal).

Open science note

All materials, data and analyses are available via the Open Science Framework project page: https://osf.io/ynerz/.

Social or hard science?

To confirm the classification of disciplines into social and hard sciences, confirmatory factor analysis was conducted with principal axis factoring and Oblimin rotation. As the perceived intellectual contribution and perceived commonality items used the same disciplines, we used these items to confirm that in both measures there is a clear distinction between social and hard science disciplines, and thus two factors were extracted. Inspection of the factor loadings suggests that there were no cross-loadings and that all disciplines consistently loaded on the relevant factors across the two measures, as originally classified by the authors (see Table 1).

The impact of interdisciplinary contact on perceived intellectual contribution and commonality

To investigate the relationship between interdisciplinary contact and the outcome variables, we first inspected cross-correlations between those variables at both waves. Fig 2 shows that higher frequency of reported interdisciplinary contact was related to a higher perceived intellectual contribution of other disciplines at W1, r(241) = 0.27, p < 0.001. Moreover, this pattern of results was further replicated at W2, r(128) = 0.29, p < 0.001.
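For readers who wish to see how the two-factor classification reported above (Table 1) could be obtained in practice, the following sketch runs a two-factor extraction with an oblique rotation. It is purely illustrative: the factor_analyzer package, the file name, and the variable names are assumptions, and the package's "principal" method approximates, rather than exactly reproduces, the principal-axis procedure reported in the paper.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # assumed dependency

# Hypothetical DataFrame with one column of ratings per target discipline (see the sketch above).
ratings = pd.read_csv("wave1_ratings.csv").apply(pd.to_numeric, errors="coerce")

# Two factors, principal-factor extraction, oblique (oblimin) rotation.
fa = FactorAnalyzer(n_factors=2, method="principal", rotation="oblimin")
fa.fit(ratings.dropna())

loadings = pd.DataFrame(fa.loadings_, index=ratings.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))  # disciplines are expected to separate into hard vs. social factors
```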
Higher frequency of interdisciplinary contact was also related to higher perceived commonality, although the effects were weaker: r(236) = 0.14, p = 0.037 for W1 measures and r(121) = 0.23, p = 0.012 for W2 measures. Therefore, there is initial evidence that more frequent interdisciplinary contact is related to more positive perceptions of other disciplines. Contact quality, however, was not related to perceived intellectual contribution for either the W1 measures, r(237) = 0.09, p = 0.152, or the W2 measures. To control for demographic factors, additional variables were also entered into a multiple linear regression predicting intellectual contribution and commonality at W1, including hard/soft science discipline, gender, and academic position. This model explained 14% of the variance, F(5, 230) = 8.37, p < 0.001. Contact frequency remained the strongest predictor of perceived intellectual contribution, β = 0.29, t = 4.70, p < 0.001. While gender was a non-significant predictor, β = -0.04, t = -0.63, p = 0.528, being a hard scientist (versus social scientist), β = -0.25, t = -4.06, p < 0.001, and having a less senior academic position, β = -0.13, t = 2.12, p = 0.035, were both associated with higher perceptions of intellectual contribution of other disciplines. For perceived commonality, the predictors showed a similar but slightly weaker pattern, explaining only 7% of the variance, F(5, 226) = 4.28, p < 0.001, with less senior positions again associated with higher perceived commonality with other disciplines (see Table 2). Therefore, reporting more frequent (but not necessarily more positive) interdisciplinary contact overall was related to more positive perceptions of other disciplines.

Elements of positive contact

Finally, as part of the exploratory analyses in W2, the four conditions of positive contact as outlined by Allport [6] were examined in the context of reported contact quality and contact frequency (see Table 3). In the regression model predicting contact frequency, only a small effect of institutional support was observed: those who perceived their institution to be more supportive of interdisciplinary collaborations engaged in them more frequently, β = 0.20, t = 2.20, p = 0.030. For contact quality, the model suggests that higher quality contact was associated with a higher perception that both sides of the collaboration shared a common goal, β = 0.31, t = 3.39, p < 0.001, and with the perception that the nature of the collaboration was warm, β = 0.36, t = 3.97, p < 0.001. Institutional support and equal-level partnership were non-significant predictors. Both models were statistically significant, F(4, 117) = 2.92, p = 0.024, adjusted R² = 0.06 and F(4, 117) = 13.47, p < 0.001, adjusted R² = 0.29 for contact frequency and contact quality, respectively. In neither model did the perception that the collaboration was between equal partners predict contact frequency or contact quality.

Perceptions of intellectual contribution and commonality with other disciplines across hard and social science group lines

For exploratory purposes, we first compared those in hard and social science disciplines on a number of key outcomes to evaluate any differences using a series of independent t-tests. On almost all measures, there were no significant differences between participants from hard and social sciences.
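The demographic-controlled regressions reported above (Table 2) could be specified, for example, with statsmodels' formula interface. The sketch below is illustrative only: the file and column names are hypothetical, and the exact set and coding of predictors would need to match the original analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wave1_survey.csv")  # hypothetical file with one row per respondent

# Perceived intellectual contribution regressed on contact frequency plus demographic controls.
# C() treats discipline, gender and academic position as categorical predictors.
model = smf.ols(
    "intellectual_contribution ~ contact_frequency + C(hard_science) + C(gender) + C(position)",
    data=df,
).fit()
print(model.summary())  # coefficients, t-values and overall R-squared

# The same specification can be refit with perceived commonality as the outcome.
model_common = smf.ols(
    "perceived_commonality ~ contact_frequency + C(hard_science) + C(gender) + C(position)",
    data=df,
).fit()
print(model_common.rsquared)
```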
The results show that while hard scientists did not report engaging in more frequent or more positive interdisciplinary contact than social scientists in either W1 or W2, they did report perceiving other disciplines as contributing less to the research questions they are studying than the social scientist participants did (see Table 4 for statistics). Next, we tested whether social science disciplines were perceived as more able to contribute intellectually than hard science disciplines in the eyes of hard science and social science participants, and vice versa. To evaluate this, a 2 (Participant's discipline: Hard science versus Social science; between factor) × 2 (Target discipline: Hard science versus Social science; within factor) mixed ANOVA with perceived intellectual contribution as the outcome was conducted (see Fig 3A). Scientists from hard and social sciences did not differ in their overall levels of perceived intellectual contribution, F(1,85) = 0.13, p = 0.716, η²G < 0.01. However, there was an interaction between participant's discipline and target discipline for perceived commonality (see Fig 3B). As with previous analyses, we analysed this interaction using two paired t-tests with a Bonferroni correction applied (adjusted α = 0.0025). Participants from a hard science background perceived more commonality with other hard science disciplines (M = 3.52, SD = 0.91) than with social science disciplines (M = 2.08, SD = 0.81), t(138) = 15.57, p < 0.001, d = 1.32. Social science participants perceived more commonality with social science disciplines (M = 2.79, SD = 0.71) than with hard science disciplines (M = 2.47, SD = 0.92), but this difference was not statistically significant under the adjusted alpha, t(106) = 3.10, p = 0.003, d = 0.30. This analysis further confirms that there are asymmetries in the way hard scientists and social scientists perceive one another.

Does contact aid positive perceptions or do positive perceptions enable contact?

Finally, we present results from the network analysis of 1,260 discipline pairs based on the history of collaboration (see Fig 1). Given the complexities of these analyses, we only report perceived intellectual contribution as the outcome. Analyses regarding perceived commonality are available in the supplementary materials (https://osf.io/6spxh/). Using this approach, we evaluated whether the history of collaboration directly affects perceptions of intellectual contribution. To this end, we conducted a one-way ANOVA with collaboration history (none, new, non-continuing, and continuing) as the factor predicting perceived intellectual contribution. The size of the main effect of collaboration type on perceived intellectual contribution was medium-to-large, F(3, 1365) = 187.10, p < 0.001, f = 0.64 (see Fig 4). A comparison of the four types of collaboration histories using Tukey's test showed that among non-collaborating discipline pairs, perceived intellectual contribution of other disciplines (M = 3.03, SD = 1.90) was considerably lower than among newly established discipline pairs (M = 4.84, SD = 1.78), 95% CI [1.32, 2.29], as well as in comparison to those with non-continuing collaborations (M = 5.04, SD = 1.70), 95% CI [1.59, 2.43]. The difference between new and non-continuing collaborations was non-significant, 95% CI [-0.40, 0.81].
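The omnibus test and Tukey comparisons just described could be computed along the following lines. This is a minimal sketch with hypothetical file and variable names, not the authors' code; the long-format table of discipline pairs with a collaboration-history label is assumed to have been prepared beforehand from the W1 and W2 reports.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format data: one row per discipline pair, with a collaboration-history
# label ("none", "new", "non-continuing", "continuing") and the contribution rating.
pairs = pd.read_csv("discipline_pairs.csv").dropna(subset=["history", "contribution"])

# Omnibus one-way ANOVA across the four collaboration histories.
groups = [g["contribution"].values for _, g in pairs.groupby("history")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise comparisons with Tukey's correction (95% confidence intervals).
tukey = pairwise_tukeyhsd(endog=pairs["contribution"], groups=pairs["history"], alpha=0.05)
print(tukey.summary())
```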
This suggests that a recognition of intellectual contribution may be initially necessary to enable future interdisciplinary collaboration: discipline pairs that launched a new interdisciplinary collaboration within the coming months had a higher appreciation of the intellectual contribution of those disciplines months before the collaboration was initiated. Furthermore, discipline pairs that reported a continued collaboration had a significantly higher level of perceived intellectual contribution (M = 5.74, SD = 1.34) than new discipline pairs, 95% CI [0.36, 1.44], and non-continuing pairs, 95% CI [0.21, 1.18]. Therefore, there is evidence that having a sustained, prolonged collaboration can further augment the perceived intellectual contribution of other disciplines.

Does contact diminish the asymmetries?

Using the data of discipline pairs, we also investigated whether the previously reported asymmetries in the way that hard and social sciences perceive one another may change as a result of collaboration history. To this end, we carried out four one-way ANOVAs, one for each type of collaboration history (none, new, non-continuing, continuing), with discipline pair as the factor (hard scientists evaluating other hard science or social science disciplines, and social scientists evaluating hard science or other social science disciplines). As with the previous analysis, we used perceived intellectual contribution as the outcome (see Fig 4). Among those who never collaborated, there was a significant effect of discipline pair on perceptions of intellectual contribution, F(3, 721) = 25.03, p < 0.001, f = 0.64. A comparison of the four types of discipline pairs using Tukey's test revealed a pattern of findings similar to the one presented in Fig 3A. Social scientists perceived other social science disciplines and hard science disciplines with which they had never collaborated as not significantly different in their ability to contribute intellectually to relevant research questions. Hard scientists, on the other hand, perceived hard science disciplines with which they had not collaborated as able to contribute intellectually more than social science disciplines with which they had not collaborated. For the remaining collaboration histories, differences in perceived intellectual contribution within the disciplinary pairs were non-significant. This demonstrates that the asymmetries in the way hard scientists and social scientists perceive one another's contribution are likely to stem from a lack of experience working with those disciplines.

Discussion

We tested the role of interdisciplinary contact on perceptions of other sciences with the aim of understanding how intellectual centrism, a strong preference for one's own discipline and a failure to appreciate the potential contribution of other disciplines, may be reduced. Multiple regression analyses demonstrated that those who engage in more frequent interdisciplinary contact tend to report higher appreciation of the other disciplines' intellectual contribution. More frequent contact was also related to an increased perception of commonality with other disciplines. These effects persisted even when controlling for demographic factors such as gender, seniority, and disciplinary affiliation. Examination of the evolution of collaboration between pairs of disciplines between W1 and W2 further showed that an initial appreciation of the intellectual contribution of other disciplines may be necessary to commence a new interdisciplinary collaboration in the future.
However, those with a prolonged history of collaboration with the same discipline held the most positive perceptions of intellectual contribution of those disciplines. These findings are in line with the contact theory ( [6], and in contrast to [7]): those who engage in more contact with social groups that are not considered their own, in this case, scientists from other academic disciplines, tend to report more favourable perceptions of those disciplines. The mere frequency of contact as opposed to its quality played a particularly important role. Scientists engaging in a more frequent collaborative endeavours with other disciplines, via exposure and getting to know these disciplines, report higher appreciation of their intellectual input. In our sample, increased interdisciplinary contact also reliably predicted increased sense of commonality with other disciplines. Moreover, continuous and sustained interdisciplinary collaborations were further elevating perceived intellectual contribution and perceptions of commonality between researchers. Therefore, increased contact between scientists of different disciplines should be actively encouraged. This could be achieved by, for example, introducing departmental policies fostering and supporting interdisciplinary endeavours. Having said that, scientists embarking on a new interdisciplinary journey have generally more positive attitudes towards disciplines with which they will start working, highlighting that a degree of intellectual recognition may be what scientists need before they can engage in interdisciplinary contact. It is possible, however, that these positive attitudes may stem from indirect contact, in other words, hearing from other colleagues about their successful interdisciplinary collaborations with another discipline (see [39]). A multiple regression analysis also showed that perceiving local institutions as supportive of interdisciplinary collaborations was related to more frequent interdisciplinary research. This effect was small and based on a smaller W2 sample, but it is in line with previous research highlighting how institutional policies directly impact behaviour by changing norms [40,41]. Green and colleagues also recently showed that introduction of pro-multiculturalism policies is associated with more frequent and more effective contact between two social groups [42]. Based on these findings, a new hypothesis can be generated: that when scientists perceive institutional support encouraging interdisciplinary collaborations, this will increase their involvement in interdisciplinary collaborations and improve their perceptions of the potential scientific contributions of other disciplines. In terms of other conditions of contact, the analysis suggests that having common goals and warm interactions during the collaboration predicted more positive experience of interdisciplinary collaborations. Contrary to our expectations, Allport's fourth condition, having equal status, was not related to either contact frequency or contact quality. While we cannot conclude that this condition is irrelevant in all contexts, we find no evidence that researchers need to perceive their interdisciplinary collaborators as having an equal voice or status for the collaboration to take place and be positive. One possibility is that status differences are simply not a barrier: scientific teams are often made up of members varying in seniority and expertise [43]. 
Addressing whether and how these conditions underlie interdisciplinary collaborations is a fruitful avenue for future research, and would fill the gap in contact research advocated by Paluck and colleagues [30]. It is worth reiterating that in the present research we provided a first attempt at measuring contact conditions with items of unknown validity. More work is needed to build on this exploratory research and develop stronger measures before we can engage in confirmatory research.

Asymmetries in perceived intellectual contribution

Our findings highlight a strong asymmetry in perceptions of other disciplines among those working in hard science disciplines and those working in social science disciplines. In line with Kirby et al. [25], the analysis demonstrated that social sciences are generally perceived as less capable of contributing to relevant research questions. Moreover, our study is the first to show that social scientists do not have different perceptions of other disciplines, regardless of whether they were reporting about hard sciences or social sciences. In other words, social scientists do not necessarily perceive other social sciences, excluding their own, as more able to contribute to the relevant research than hard science disciplines. The story is quite different among respondents from hard science disciplines. Hard science participants displayed a strong preference for other hard science disciplines in terms of both how much they think they contribute intellectually and how much they perceive themselves to have in common with them. These findings resonate with observations within social psychological research demonstrating that intergroup bias is stronger among members of higher status groups than among members of lower status groups [14,16]. The present study further demonstrates this idea in a previously untested context. Moreover, the asymmetry in the way hard and social scientists perceive other disciplines is particularly strong within pairs that never had any collaborative experience. We can only speculate about the causes of those effects, but the direction of the relationship is again very clear: negative perceptions of other disciplines are related to a lack of contact with them. While some argue that social scientists may be perceived as partners that are unwilling [3,24] or less skilled [25] in their interdisciplinary research, future research should consider how the visibility of social sciences could be increased to create space for interdisciplinary endeavours. One should keep in mind that these results are based on perceptions only and not on whether other disciplines can actually contribute meaningfully to the relevant research question. However, if scientists perceive certain disciplines as incapable of contributing to their own research questions, this may indeed be a real first barrier to embarking on such interdisciplinary journeys. Examining perceived intellectual contribution at the discipline-pair level, the asymmetry in perceptions of hard and social sciences' contribution was smaller for those who have ongoing collaborations with different disciplines than for those who did not have any experience of collaboration. While these asymmetries exist among those without any experience of collaboration, with hard scientists displaying a strong preference for other hard science disciplines, this was no longer the case when hard scientists had experience of working with social sciences. This is in line with the findings from Kirby et al.
[25], according to which those with experience of working with social scientists held more positive views about the competence of social scientists compared to those with no experience. Through the experience of collaboration, group differences in appreciation of different disciplines are considerably weakened. This again highlights the importance of encountering and working with other researchers as a pathway to growing intellectual appreciation of one another. It is also worth noting that, although we did not set out to test the following hypothesis directly, the data suggest that those more senior in their positions tend to appreciate other disciplines less in terms of perceived contribution and perceived commonality. This effect was small and therefore more evidence is needed, but it is in line with previous research showing socialisation effects [8] whereby people grow to appreciate the values of their own discipline as they progress through career ranks. Another explanation could be that, due to the growing appreciation of interdisciplinary research [4], early career researchers are more likely to be encouraged to pursue projects in collaboration with other disciplines. Having said that, in line with previous research [33], our data also suggest that those holding more senior positions reported having interdisciplinary collaborations more frequently than those earlier in their career. For this reason, future research should directly test whether monodisciplinary socialisation across career stages and within hard and social sciences strengthens ingroup biases and may be associated with more resistance to interdisciplinary research.

Limitations.

Limitations of the present research should be acknowledged. First, not all academic disciplines were included representatively in this study, and the sample was admittedly constrained by the requirements of the larger project. Social and hard sciences were also hand-coded by the authors, and one may contest whether each discipline belongs purely to the social or the hard sciences. We concede that this binary classification is not ideal, and perhaps future research should consider the extent to which scholars feel that they strictly belong to those scientific categories. For this reason, one needs to be careful when generalising these findings to all hard science and social science disciplines. Second, while we provided participants with a definition of interdisciplinary research, it is quite likely that participants could have referred to closely neighbouring disciplines as interdisciplinary. For example, within a psychology department, a social psychologist can collaborate with a cognitive neuroscientist and may well count it as interdisciplinary since it requires integration of methods or theories. In line with our data and considering the history of collaboration pairs within the disciplines we studied, interdisciplinarity may be a relatively rare phenomenon overall, and even more so across hard/social science group boundaries. Relatedly, what constitutes a collaboration can vary in intensity from a simple consultation to developing grant proposals together, and we did not consider these complexities in the present research. Third, our findings regarding the effects of the contact conditions on contact quality and frequency were exploratory and carried out on a smaller sample in W2. Likewise, the reduced sample size did not give us enough statistical power to carry out the relevant longitudinal tests of the reported effects.
Finally, there is no evidence that having a more favourable perception of another discipline can result in a successful interdisciplinary collaboration. This has not been directly tested in the present research. The question of what consists of "successful" interdisciplinary collaboration in itself can be extensively debated: having a collaborator with whom one enjoys working, does it consist of multiple publications, high impact factor publications, high societal impact, receiving scientific awards from peers? While this is something future research can consider, we argue that being open to such collaborations to happen when opportunities arise and frequently participating in interdisciplinary endeavours can increase the likelihood that such research will be considered fruitful and even groundbreaking. However, it is a limitation that we did not measure any indices of perceived productivity within those interdisciplinary projects such as publication success [44]. Given that our study only followed the scholars for less than a year, it was not sufficiently long to evaluate such productivity metrics. More bottom-up approaches exploring which outcomes of interdisciplinary research matter to scholars should be considered to verify whether perceived intellectual contribution and perceived commonality are pertinent for researchers involved in interdisciplinary collaborations and whether they lead to long-term productivity outcomes. It is also possible that the period between W1 and W2 was too short for the new collaborations to develop closely so future designs should consider multiple waves spanning across multiple years. Such design would allow to answer the question of whether intellectual appreciation of other disciplines is related to higher productivity with those disciplines. Implications and conclusion The findings of the present research suggest a number of implications for policy-makers and grant funders. Interdisciplinary contact is generally related to higher intellectual appreciation of other disciplines. Increasing opportunities for contact, whether it is through interdisciplinary grants that aim to connect disciplines that are of seemingly different approaches or whether through increasing spaces where such discussions can be happening, policymakers have a clear role in shifting norms and desires to conduct interdisciplinary research. This is particularly important in reducing asymmetries in the ways hard and social sciences perceive one another. If interdisciplinary research is a solution to addressing global issues, then scientists must recognise the ability of other disciplines to contribute intellectually to important research questions.
Lithium: one drug, five complications

Background Lithium poisoning can trigger multiple complications. We report a case of lithium poisoning with five complications that are described together here for the first time. Case report A 60-year-old woman was admitted to our intensive care unit for altered consciousness. Severe lithium intoxication was diagnosed (lithium plasma level 8.21 mmol/l) associated with acute oliguric kidney failure. Continuous renal replacement therapy was started immediately. Orotracheal intubation was quickly required because of status epilepticus. Bone marrow aplasia developed 48 h after the patient was intubated. Infectious and immunological causes were ruled out and lithium poisoning was considered the most likely etiology. Repeated blood and platelet transfusions were required. Severe polyneuropathy was diagnosed on the 5th day after admission. The patient showed peripheral tetraparesis and cranial nerve dysfunction while the lithium plasma level had decreased to a therapeutic level. Conversely, urine output increased and hypernatremia promptly occurred, which led to the diagnosis of diabetes insipidus. The neuropathy improved within 72 h and the patient was definitively extubated by the 11th day. Hematologic disturbances improved and no blood transfusion was required after the 8th day. The patient retained sequelae of the poisoning: fine motor skills remained impaired and polyuria persisted. Diffuse alopecia was promptly observed, with no iron deficiency or thyroid disturbance. Conclusion In addition to presenting this case report, we herein discuss the drug causality, the consequences, and the plausible pathophysiology of these five complications.

Background Lithium is the first-line treatment for bipolar disorder. Lithium poisoning was historically associated with a 25% mortality [1]. A more recent survey showed that mortality is less than 1% [2], largely thanks to progress in intensive care and the more widespread use of extracorporeal treatment [2]. Most lithium poisonings are associated with infections or drug interactions [3], the most frequent being with diuretics or angiotensin-converting enzyme inhibitors. Self-administration of toxic doses may account for 20% of hospitalized poisonings [4]. The case we report shows multiple lithium complications: pancytopenia, polyneuropathy, nephrogenic diabetes insipidus, seizures, and alopecia. These complications are described together for the first time in our case report, which illustrates how wide the spectrum of lithium-poisoning-related complications can be.

Case presentation A 60-year-old woman was admitted to our intensive care unit (ICU) for altered consciousness. She had been receiving lithium for a bipolar disorder for more than 10 years. Her last lithium level was within the therapeutic window (1.19 mmol/l, 2 months before admission) on a 400-mg twice-daily lithium carbonate regimen, without recent change in her drug regimen. Her kidney function was normal (creatinine level 50 μmol/l). The physical exam showed altered consciousness (Glasgow Coma Scale score 9) with bilateral nystagmus. She was afebrile and there was no hemodynamic or respiratory failure. Urine output was less than 100 ml over 6 h. ALAT 30 IU/l; ASAT 30 IU/l; bilirubin 7 mg/l; screening tests for benzodiazepines, barbiturates and tricyclic antidepressants were negative. Brain computed tomography did not show any bleeding or ischemic stroke.
The lumbar puncture did not show any sign of central nervous system infection (0 cells/mm3; protein level 0.52 g/l; glucose level 0.97 g/l). The electrocardiogram was normal, with normal QT and PR intervals and without any pattern of ischemia. The patient's condition deteriorated dramatically after the sixth hour. Status epilepticus occurred and orotracheal intubation was required after clonazepam and phenytoin failed. Urine output was still under 100 ml over 6 h. Seizure cessation was obtained with midazolam and sufentanil. Antiepileptic therapy was instituted (levetiracetam and clobazam). The lithium level, obtained belatedly, became available soon after the patient was intubated. It was 8.21 mmol/l (therapeutic window 0.7-1.2 mmol/l). We decided to begin continuous renal replacement therapy by continuous veno-venous hemodiafiltration (CVVHDF), the only technique available in our center. CVVHDF was delivered via femoral venous access. The mean session duration was 24 h. Mean blood flow rate was 180 ml/min, predilution 600 ml/h, postdilution 1200 ml/h; the ultrafiltration rate was adapted to the estimated volemia. Anticoagulation was obtained with unfractionated heparin. The lithium concentration was determined on admission, at the beginning, and at the end of each CVVHDF session. The lithium level quickly decreased (Fig. 1; Table 1); a rebound occurred but the level remained within the therapeutic window (Fig. 2). CVVHDF was continued until the fifth day after admission because of persistent oliguric kidney failure. Meanwhile, we noticed a profound bicytopenia: the hemoglobin level fell below 8 g/dl by the sixth day and platelets fell below 30,000/mm3. Hemolysis parameters were normal. The reticulocyte count was low (20,000/mm3). Bone marrow aspiration showed a hypocellular marrow without any malignancy. Biological exams ruled out infections (parvovirus B19, hepatitis, HIV), vitamin deficiency, and immunological etiologies (antinuclear antibodies; anti-neutrophil cytoplasmic antibodies). Repeated blood and platelet transfusions were required. Sedation was withdrawn by the seventh day. Consciousness rapidly returned to normal without any further seizures. We noticed tetraplegia with depressed reflexes: muscle strength was graded 2/5 in each limb. Swallowing and cough were severely impaired. The electromyogram showed a severe axonal polyneuropathy without a demyelinating pattern. Kidney function quickly returned to normal, with a urine output of 1200 ml/24 h and a plasma creatinine of 50 μmol/l on day 8. The patient soon became polyuric and the sodium level rose to 146 mmol/l with inappropriately low urinary osmolarity (212 mosmol/l), indicating diabetes insipidus. A normal sodium level was obtained with a water intake of 3500 ml per 24 h. We noticed diffuse severe alopecia without any scalp lesion. Muscle weakness rapidly resolved: strength was only mildly reduced in the four limbs, the gag reflex was effective, and the patient was then able to cough. Extubation was achieved on day 11 without any ventilatory failure. Eating was possible without aspiration. Hematological disturbances corrected slowly and transfusions could be discontinued by the 11th day. Hemoglobin was still low at 8 g/dl and the platelet count was still low at 50,000/mm3. Two weeks later, the hematological parameters had reached normal values. Our patient firmly denied any intentional overdose and her psychiatrist confirmed she had stable bipolar disorder.
However, we failed to find any infectious or iatrogenic trigger that could have led to this extremely high lithium concentration, so we cannot formally exclude an intentional overdose. Acute kidney injury with oliguria may have increased the toxicity of lithium, although the etiology of the acute kidney failure remained unclear. Lithium was definitively stopped and she received divalproate instead. As a new seizure occurred, she was given an antiepileptic dose of divalproate. Polyuria and polydipsia remained. She still complained of sensory deficits and impaired fine motor skills. To conclude, we report the case of a severe acute-on-chronic lithium intoxication with five rarely reported complications.

Discussion Acute-on-chronic lithium poisoning is associated with a poorer prognosis compared to previously unexposed patients, mainly because of more frequent central nervous system (CNS) failure [5]. In those cases, orotracheal intubation is required for 5% of the patients and 3% of them will present with seizures [5]. These patients do not show higher lithium levels than patients with acute lithium poisoning. This may be linked to the slow diffusion of lithium, which prevents it from penetrating the central nervous system in acute poisoning, whereas chronically exposed patients already have a high lithium concentration in the CNS before poisoning. As a matter of fact, the lithium level does not seem to be the most important parameter to consider when treating lithium intoxication. Lithium has already been tested for its erythropoietic side effect [6]. This effect might implicate inhibition of glycogen synthase kinase 3 beta (GSK3β), a repressive kinase which inhibits vascular endothelial growth factor synthesis. Patients under lithium therapy may show neutrophilia, as was noticed on our patient's first biological exam. Effects on erythroid and megakaryocytic cells are less well known, even though lithium has already been used successfully to treat aplastic anemia. In lithium poisoning, cytopenias are rarely reported. Lithium can reduce erythroid progenitor growth in vitro [7]. In mice [8], lithium treatment at 20 mg/kg daily can reduce erythropoiesis. To our knowledge, only one case of neutropenia has been published [9]. We found one case of peripheral thrombocytopenia [10] and one case of severe thrombocytopenia with myocardial infarction during acute-on-chronic poisoning [11]. Only one case of pancytopenia during lithium therapy has been published [12]: it concerns a chronically treated woman who was never intoxicated; her lithium level had always been below 0.76 mmol/l and she did not show any other sign of lithium toxicity. Lithium poisoning may be responsible for the pancytopenia we observed. The hemoglobin level gradually fell and the laboratory work-up, including bone marrow aspiration, indicated central cytopenias. Other drugs were unlikely to be responsible for the pancytopenia we observed, as clobazam and levetiracetam had been introduced less than 48 h before hemoglobin, platelets, and white blood cells began to fall. We investigated other possible factors, which all came back negative. Nonetheless, there are very few case reports of hematologic toxicity during lithium poisoning and there may be other causes we failed to find. In our defense, lithium levels are rarely as high as the one reported in our case. Hematotoxicity might only occur at very elevated lithium levels, whether blood level or total body burden.
When a cytopenia occurs, bone marrow aspiration should always be performed, as some cytopenias may be immune-mediated, as shown in [10], a situation that may benefit from corticosteroids. Polyneuropathy is a very rare concern during lithium poisoning, but we doubted that our patient's presentation was critical illness polyneuropathy, given the early onset, the lack of the main risk factors for ICU-acquired polyneuropathy (no use of neuromuscular blocking agents or corticosteroids), and the impairment of cranial nerve function. The main alternative diagnoses were also ruled out: there was no dyscalcemia or thyroid dysfunction, and infectious and immunologic disorders had previously been investigated. We chose not to repeat the lumbar puncture because of the non-demyelinating character of the neuropathy. We found only a few cases of such a condition [13][14][15]. The pathophysiology is still controversial and may implicate lithium accumulation in neural cells. Most of the cases concern acute-on-chronic poisoning [13][14][15] with lithium levels only moderately increased, around 3.2 mmol/l at presentation. Polyneuropathy often begins 3 or 4 days after ICU admission, sometimes when the lithium level has decreased below the toxic range [15]. Most authors report long tract neuropathy and oculomotor abnormalities [13][14][15], without signs of demyelinating neuropathy. Diabetes insipidus is a common finding in these patients, and it may be either a risk factor for developing polyneuropathy or a sign of acute-on-chronic intoxication. The polyneuropathy resolves in most cases and major sequelae are very rare, though it may take several months before patients are able to walk [15]. The case we report is similar to the previously published cases concerning the delayed symptoms and the spontaneous recovery. As in the case published by Chan [15], polyneuropathy was associated with the new onset of diabetes insipidus. However, our patient did not experience complete recovery of her motor function; she still presents impaired fine motor skills and paresthesia. This may be linked to the longer exposure to lithium as compared with the published cases [13][14][15], or to the severity of the lithium intoxication. We cannot exclude a delayed recovery that it is still too early to observe. In our case, polyneuropathy lengthened the mechanical ventilation period by at least 4 days. As a matter of fact, it seems important to wait a few days before proceeding to tracheotomy when a lithium-intoxicated patient shows early polyneuropathy that prevents extubation. One should consider extracorporeal renal replacement therapy if the lithium level is above 4 mmol/l [2] with impaired kidney function, or whenever there are neurological symptoms, as lithium levels correlate poorly with neurological involvement. Above 5 mmol/l [2], continuous renal replacement therapy (CRRT) is required whatever the kidney function. Hemodialysis should be preferred over hemofiltration due to the ionic, readily diffusible nature of lithium. The only extracorporeal technique available in our center at admission was CVVHDF, which is why our patient did not benefit from hemodialysis. Even though all the described complications occurred after the beginning of CVVHDF, we do not think that CVVHDF was ineffective. The complications must have occurred because the organ injuries had already been established but were still subclinical when we admitted this patient to our ICU. For instance, pancytopenia was the first manifestation to occur and alopecia was the last.
This is very close to what is described with cytotoxic agents: rapidly regenerating tissues like bone marrow are the first to suffer from the intoxication and slowly regenerating ones like hair are the last. In our view, the treatment of lithium poisoning must focus on the control of seizures and status epilepticus rather than on delayed toxicities. As status epilepticus was rapidly controlled, we decided to pursue CVVHDF; we would have changed our CRRT technique if it had not improved. Due to its ionic properties, there is an important intracellular pool of lithium and a major risk of "rebound" after a CRRT session. We chose to determine the lithium level before and after each CVVHDF session and we did not find any rebound. In any case, it is very unlikely that we missed an important rebound because our patient still had severe kidney injury and could not lower her lithium concentration in any other way when CVVHDF was discontinued. We think that if a rebound had occurred after CVVHDF, the lithium concentration would have remained high until the next CVVHDF session because the residual clearance of lithium was drastically reduced. Data are missing concerning the effects of early CRRT in preventing intoxication sequelae, namely the underestimated ones like peripheral neuropathies and diabetes insipidus. Concerning kidney function, chronic exposure to lithium does not seem to affect the glomerular filtration rate [16]. The maximal urine-concentrating ability is, however, significantly impaired [2]. Polyuria is in fact a classic complication of chronic lithium exposure. Acute poisoning can increase the risk of polyuria [17]. Lithium intoxication is often associated with new-onset nephrogenic diabetes insipidus. This situation can lead to dehydration and consequently increase lithium retention. For instance, patients presenting with nephrogenic diabetes insipidus may show a 25-fold increased risk of developing CNS involvement during lithium poisoning [18]. When lithium cannot be stopped, amiloride may be used to lower the risk of the patient developing diabetes insipidus [19]. This effect on urine concentration may be mediated by adenylate cyclase inhibition [20], which prevents aquaporins from migrating to the renal collecting duct. GSK3β is suspected to participate in this diabetes insipidus but its exact role is still controversial. Lithium poisoning may cause alopecia. Most reports are single cases [21], and a recent meta-analysis has not shown any increased incidence of alopecia in chronically treated patients. However, this review does not focus on lithium poisoning. Alopecia may be linked to GSK3β inhibition, as this kinase promotes hair growth. Although this complication is not life-threatening, alopecia may unfavorably impact the patient's quality of life.

Conclusion To conclude, we report the case of a severe lithium poisoning requiring urgent CVVHDF. We noticed hematological, neurological, and renal complications which are rarely described. The patient recovered but retained unanticipated sequelae, among which are distal neuropathy and diabetes insipidus.
A Search for Candidate Li-Rich Giant Stars in SDSS DR10

We report the results of a search for candidate Li-rich giants among 569,738 stars of the SDSS DR10 dataset. With small variations, our approach is based on that taken in an earlier search for EMP/CEMP stars and uses the same dataset. As part of our investigation, we demonstrate a method for separating post-main sequence and main sequence stars cooler than Teff ~ 5800 K using our feature strength measures of the Sr II 4078, Fe I 4072, and Ca I 4227 lines. By taking carefully selected cuts in a multi-dimensional phase space, we isolate a sample of potential Li-rich giant stars. From these, using detailed comparison with dwarf and giant MILES stars, and our own individual spectral classifications, we identify a set of high-likelihood candidate Li-rich giant stars. We offer these for further study to promote an understanding of these enigmatic objects.

INTRODUCTION The many aspects of the evolution of the Li abundance from the Big Bang to the present have generated a very large and complex literature. The existence of Li-rich giants, first discovered by Wallerstein and Sneden (Wallerstein & Sneden 1982), is one part of the Li story. The origin of these unexpected stars is still not fully understood. Here we outline the main points needed to appreciate the strangeness of Li-enriched post-main sequence (post-MS) stars. For those desiring a deeper review, we recommend the useful and detailed summaries of the observed and predicted post-MS evolution of the Li abundance given by Brown et al. (1989); Ruchti et al. (2011); Casey et al. (2016) and the references cited therein. The maximum Li abundances of F and G main sequence (MS) stars range from A(Li) ≈ 2.1 for ancient, low-metallicity stars all the way up to A(Li) ≈ 3.3 for young, metal-rich stars (Lambert & Reddy 2004; Prantzos et al. 2017). The Li abundance is expected to change dramatically once stars leave the main sequence. Classical stellar evolution theory predicts that the Li abundance will monotonically decrease once the bottom of the outer convective envelope begins to move inward during post-MS evolution. As the envelope deepens, it increasingly entrains mass zones where complete Li destruction has occurred. This in turn dilutes the MS surface abundance of Li. By the time a star reaches the base of the giant branch, the Li abundance is expected to be only ≈ 5-10% of its MS value (Iben 1967a,b). As the star ascends the giant branch in the classical models, the convective envelope continues to deepen and the Li abundances are predicted to drop even further. Moreover, observational evidence (e.g., Brown et al. 1989; Mallik 1999; Liu et al. 2014) indicates that the actual Li depletion is substantially greater than the classical models predict for the majority of G and K giants. With this background, it was a great surprise when Wallerstein and Sneden in 1982 reported the discovery of a metal-rich, field K giant, HD 112127, whose Li I 6708 resonance doublet line had an equivalent width of 0.45 Å. A model atmosphere abundance analysis of the weaker Li I 6104 transition led to A(Li) ≈ 3.0, very substantially higher than expected theoretically. Kraft et al. (1999) reported the first Li-rich giant in a globular cluster, a star on the first ascent of the giant branch in M3, with A(Li) ≈ 3.0 and a Li I 6708 equivalent width of 0.52 Å. Over the years, other Li-rich giants have been found, both in Population I and Population II.
These stars are quite rare, however, comprising ≤ 1% of giant stars (Brown et al. 1989; Ruchti et al. 2011; Liu et al. 2014; Kirby et al. 2016). The recent tabulation by Casey et al. (2016) lists only 127 known giant stars with A(Li) > 2.0; the values of A(Li) in this listing range all the way up to A(Li) = 4.55. The temperatures of these known Li-rich evolved stars span a considerable range: the hottest is HD 172481, an F supergiant with Teff ≈ 7250 K (Reyniers & Van Winckel 2001), and the coolest is IRAS 12556-7731, an M giant with Teff ≈ 3460 K (Alcalá et al. 2011). Most known examples, however, are found among the G and K stars where our search is optimized. Numerous hypotheses have been put forward to explain these rare Li-rich giants. The majority appeal to some form of "extra" stellar mixing on the giant or asymptotic giant branches that manages to incorporate the Cameron-Fowler process (Cameron & Fowler 1971) to generate 7Li. Other scenarios rely on acquiring Li-rich material from a companion. The reader is referred to Kirby et al. (2016) for a brief but insightful review of all these mechanisms. There is a need to find as many of these rare stars as possible to map the full parameter space describing their occurrence and begin to narrow down the numerous possibilities for producing them. One of the most successful efforts in finding new Li-rich giants was that undertaken by Martell and Shetrone (M&S13) (Martell & Shetrone 2013). They searched the Sloan Digital Sky Survey Data Release 7 (SDSS DR7) (Abazajian et al. 2009) stars for candidates and identified 27 new Li-rich giants, 23 of which they then subjected to high resolution abundance analysis. It is the purpose of the current paper to find additional Li-rich giant candidates by carrying out a search using the data of SDSS DR10 (Ahn et al. 2014). The DR10 dataset of optical stellar spectra is roughly 1.6X larger than that of the DR7 dataset. This means that there are a substantial number of additional stars now available to examine for Li-enhancement. We carry out our investigation using a slight variation of the approach we employed earlier in a search for extremely metal-poor (EMP) stars (Carbon et al. 2017) (CHN17). In Section 2, we briefly describe how we processed the DR10 dataset through our reduction pipeline so that we could extract the individual feature measurements that are the basis of our approach. In the subsections of Section 3, we detail how we chose our initial Li-rich candidates. In particular, we describe how we selected (Section 3.1) and tested (Section 3.2) feature measurements to impose temperature and luminosity constraints, and how we extracted (Section 3.3) a coarse sample of candidate Li-rich post-MS stars using a set of cuts in a multi-dimensional phase space. Next we describe how we refined the sample to select only the most likely candidates (Section 3.4) and then carried out a detailed spectral classification of these stars (Section 3.5). In Section 4 we discuss our final list of candidate post-MS Li-rich giants. Our principal results are summarized in Section 5.

THE DATASET In our search for Li-rich giants, we use the previously prepared dataset (CHN17) composed of calibrated optical fluxes and associated data for 569,738 unique stellar spectra drawn from SDSS DR10. The associated data include each star's coordinates, heliocentric radial velocity, median S/N, pixel-by-pixel inverse-variance values, and u,g,r,i,z point spread function magnitudes (psfMag).
The reader is referred to Section 2 of CHN17 for details concerning the selection of the stellar data from the whole SDSS DR10 database, the processing of the stellar fluxes through our data reduction pipeline, and the feature strength measurements that were subsequently made from the spectra. Here, we review briefly only the salient points needed for understanding the arguments in the current paper. The first step in our pipeline was establishing a continuum for each spectrum. Once the continuum level was established, it was possible to compute quantitative measures for the spectral lines in each spectrum. Two types of feature measures were adopted. The first was S(λi), which is the fractional depth, relative to the interpolated local continuum, of an individual spectral feature at wavelength λi. The second was D(λi), which is the depth of the line at λi in units of the local noise level in the spectrum, as determined from the spectrum's pixel-by-pixel inverse-variance values. S(λi) is a direct measure of the line's strength while D(λi) gives a handy measure of the line's strength relative to the local noise level. The latter can be particularly helpful when dealing with intrinsically weak lines like the Li I 6708 line central to this paper. Measurements of S(λi) and D(λi) were made for 1659 spectral features (each with a unique λi) for all 569,738 spectra of our dataset. This produced the final dataset of nearly 2 billion feature measures used in this study. In the CHN17 study, stars with specific interesting characteristics were isolated from the above SDSS DR10 dataset by using linked scatter plot (LSP) tools implemented on the NASA Advanced Supercomputing hyperwall. (See CHN17, Section 1.1 for a detailed description of LSPs and the hyperwall.) For example, by making judiciously selected cuts in successive 2-D phase spaces, CHN17 were able to extract numerous candidate extremely metal poor (EMP) stars from the general dataset. Because of its flexibility, the LSP method has powerful exploratory capability. For this reason, we used LSPs on the hyperwall to make the initial reconnaissance of the Li-rich giant problem. It quickly became apparent that there were indeed stars with strong Li I 6708 lines in the DR10 dataset. However, because the 569,738 spectra of the dataset include a very wide range of temperatures, luminosities, and compositions, we needed to determine how to extract candidate Li-rich giants from the rest of the stars. In the next section, we explain how we accomplished this.

EXTRACTING THE CANDIDATE LI-RICH GIANTS We require an approach which will effectively separate the relevant post-MS stars from MS stars which may have comparable Li line strengths. The domain of chief interest is that occupied by the late G and K giants. We need to select feature measures, S(λ) and D(λ), in our dataset that can be used to isolate stars in this desired temperature and luminosity range. Note that our approach relies solely on our feature measures. We specifically chose not to employ SDSS-provided quantities such as Teff and log g simply because of the large errors that can occur in individual values of these quantities (e.g., M&S13, Appendix A). The following subsections detail how we used the feature measures to arrive at a list of Li-rich giant candidates.
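To make the two feature measures concrete, the short Python sketch below computes S(λ) and D(λ) for a single line from a flux array, its fitted continuum, and the per-pixel inverse variances. The single-pixel depth and nearest-pixel lookup are simplifying assumptions of ours, not a description of the actual CHN17 pipeline.

```python
import numpy as np

def feature_measures(wavelength, flux, continuum, inverse_variance, line_wavelength):
    """Illustrative S(lambda) and D(lambda) for one spectral feature.

    S: fractional depth of the feature relative to the local continuum.
    D: the same depth expressed in units of the local noise level, with the
       noise estimated from the pixel inverse variance.
    Sketch only; the real pipeline is described in CHN17, Sections 2.1-2.3.
    """
    # Nearest pixel to the nominal line wavelength (simplifying assumption).
    k = int(np.argmin(np.abs(wavelength - line_wavelength)))

    # Fractional depth below the interpolated local continuum.
    s = 1.0 - flux[k] / continuum[k]

    # Local noise as a fraction of the continuum, from the inverse variance.
    sigma = 1.0 / (np.sqrt(inverse_variance[k]) * continuum[k])

    return s, s / sigma

# Example call for the Li I 6708 feature (wavelengths in Angstroms):
# s_li, d_li = feature_measures(wl, fl, cont, ivar, 6707.8)
```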
MILES spectra to establish temperature and luminosity constraints Gray & Corbally (2009) note that the Ca I 4227 line progressively strengthens with decreasing temperature in going from G through K spectral types, while the hydrogen lines progressively weaken over the same spectral range. This suggests that S(Ca I 4227) or S(H I) could be used as a first-order surrogate for stellar temperature. (We note here that we investigated using colors based on the SDSS u,g,r,i,z magnitudes as temperature surrogates but found that they did not lead to a clean separation between MS and post-MS stars.) In order to estimate luminosity in the G and K star domain, Gray & Corbally (2009) and White et al. (2007) suggest a number of possible metal line strengths and ratios. To determine whether any of these might be helpful in our investigation, we turned to the MILES spectrum library (Sánchez-Blázquez et al. 2006). The 985 stars in the MILES library were selected for the purposes of stellar population synthesis. As a result, they cover a wide range of temperatures, luminosities, and metallicities with particularly good coverage for the spectral ranges of most interest to us (e.g., Sánchez-Blázquez et al. 2006, Figure 1). The MILES spectra span the whole SDSS optical wavelength range relevant to our study and have essentially the same spectral resolution as the SDSS spectra. Moreover, the stars in this library have carefully researched Teff, log g, and metallicity ([Fe/H]) drawn from the literature (Cenarro et al. 2007). These attributes make the MILES spectra ideal for determining which luminosity criteria might be most effective for separating G and K MS stars from post-MS stars. To take advantage of the MILES library, we ran the entire dataset of nearly 1000 MILES spectra through the same spectral reduction pipeline that we used in our earlier study. The pipeline computed continua for each of the MILES spectra and then computed S(λ) and D(λ) measures for each of the 1659 spectral features we use. Details of the pipeline process may be found in CHN17, Sections 2.1-2.3. To explore which of the Gray-Corbally and White et al. luminosity criteria might be best for our purposes, we extracted two subsets from our full MILES dataset of feature measurements. The first subset, which we used to represent MS stars, was comprised of the 344 MILES A through K stars with Cenarro et al. (2007) log g > 3.80. The second subset, representing post-MS stars, was comprised of the 254 MILES A through K stars with Cenarro et al. (2007) log g ≤ 3.80. The division in log g was chosen to be comparable to that adopted by M&S13 in isolating post-MS stars for their study. Many of the Gray-Corbally and White et al. luminosity criteria in the G-K spectral range are ratios of line strengths (or sums of line strengths), e.g., the ratio of Y II 4375 to Fe I 4384. We represented these by taking the ratios of the corresponding line strength measures, as in S(Y II 4375)/S(Fe I 4384). Using the MS and post-MS subsets of MILES data, we examined the various luminosity-sensitive line ratios versus the feature strengths of the likely temperature-sensitive lines: S(Hα), S(Hβ), S(Hγ), and S(Ca I 4227). After considerable experimentation, we found that S(Sr II 4078)/S(Fe I 4072) vs S(Ca I 4227) gave the clearest separation between MS and post-MS stars for the G-K stars. We show this separation in Figure 1.
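The calibration step just described amounts to splitting the MILES measurements at log g = 3.80 and examining the line-ratio diagnostic against S(Ca I 4227). A minimal sketch of that bookkeeping is given below; the array-based inputs and the function name are our own assumptions, not the format of the MILES catalogue or of the authors' code.

```python
import numpy as np

def diagnostic_plane(logg, s_sr4078, s_fe4072, s_ca4227, logg_cut=3.80):
    """Split MILES-like measurements into MS and post-MS subsets and return the
    (S(Ca I 4227), S(Sr II 4078)/S(Fe I 4072)) coordinates used in Figure 1.

    Sketch under assumed array inputs; log g <= 3.80 marks post-MS, as in the text.
    """
    ratio = s_sr4078 / s_fe4072          # luminosity-sensitive line ratio
    post_ms = logg <= logg_cut           # post-MS (giant/subgiant) subset
    ms = ~post_ms                        # MS (dwarf) subset
    return {
        "MS": (s_ca4227[ms], ratio[ms]),
        "post-MS": (s_ca4227[post_ms], ratio[post_ms]),
    }
```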
The luminosity sensitivity of the S(Sr II 4078)/S(Fe I 4072) ratio is a result of the rather different electron pressure sensitivities of these two lines in the cooler stars. The line ratio systematically shifts as the gravity, and hence electron pressure, decreases with increasing luminosity. A discussion of such effects may be found in Gray (1992), for example. [Figure 1 caption, partial: ... stars are plotted as cyan filled diamonds. The region of this phase space where we searched for candidate Li-rich post-MS stars lies to the right of the vertical cyan dashed line at S(Ca I 4227) = 0.4 and above the lower bound defined by the cyan dashed lines running to the right. The candidate Li-rich stars identified in Table 1 are displayed as green filled circles.] The locations of the MS and post-MS stars (black and red circles, respectively) in Figure 1 show that, while there are a few exceptions, the Sr II/Fe I ratio nicely discriminates between MS and post-MS stars in the region S(Ca I 4227) ≥ 0.4. In contrast, the Sr II/Fe I ratio becomes unreliable as a luminosity diagnostic for S(Ca I 4227) < 0.4. Examination of the S(Ca I 4227) vs Teff relation for the MILES stars indicates that the S(Ca I 4227) = 0.4 boundary occurs at ≈ 5800 K, a temperature that corresponds to early-mid G stars in the case of dwarfs (Boyajian et al. 2012). Thus, since it places us in the stellar temperature range most relevant to our search, restricting our search to stars with S(Ca I 4227) ≥ 0.4 should pose no difficulty. However, in the next subsection, we will note one important caveat.

Comparison with confirmed Li-rich giants It is helpful to illustrate how a known set of Li-rich giants is distributed in Figure 1. M&S13 searched for Li-rich giants among the stars of SDSS DR7. They chose a set of 8535 stars from DR7 whose SEGUE Stellar Parameter Pipeline (SSPP) Teff and log g values indicated that they would be red giant branch (RGB) stars lying somewhere between slightly below the red giant bump and the red giant tip. They estimated the Li 6708 strength in these stars using a spectral index they computed centered on the line. Selecting only those stars with the most promising Li spectral indices (162 stars), they used low resolution spectrum synthesis to sub-select a set of 36 for follow-up high resolution study. Of these 36, they confirmed that 27 were indeed Li-rich based on high resolution spectroscopy and spectrum synthesis. (We note that M&S13 present derived Li abundances for only 23 of the 27 stars because of S/N problems. Nevertheless, we will consider all 27 as "confirmed Li rich" as indicated in their Table 1.) Of these 27 SDSS DR7 Li-rich stars, 19 are included in the download for our DR10 dataset; the 8 missing stars violated one or more of the selection criteria we adopted in selecting the stars for our dataset. In Figure 1, these stars are plotted as cyan filled diamonds, and only some of them lie to the right of the S(Ca I 4227) = 0.4 boundary. This suggests that, by restricting ourselves to stars with S(Ca I 4227) ≥ 0.4, we may run the risk of missing Li-rich giants with low metallicities. Since we see no obvious way at this time to devise a luminosity criterion that does not risk excluding such stars, we shall proceed. Low metallicity giants that are sufficiently cool (hence having intrinsically stronger Ca I 4227) might still land to the right of the S(Ca I 4227) = 0.4 boundary and thus be detectable by us.
Imposing the final constraints In order to arrive at a useful set of Li-rich giant candidates it is necessary to constrain more than just the value of S(Li I 6708) and the region in S(Sr II 4078)/S(Fe I 4072) vs S(Ca I 4227) space. For good quality results, one must also add constraints on the noise levels, both overall and locally in the 6700 Å and 4070 Å regions. Similarly, it is necessary to guard against TiO contamination of the Li I 6708 line region. After considerable experimentation, we chose the set of constraints listed below. The additional constraints eliminated many spectra in which noise/contamination produced uncertain results for the value of S(Li I 6708) and/or the S(Sr II 4078)/S(Fe I 4072) ratio. The measures we selected and the constraints we imposed on them are summarized in the logical expressions below. All of these constraints, 1-6, are applied at the same time to the feature measures of the 569,738 stars in the dataset. Only stars that simultaneously satisfy all the specified constraints are considered in the remaining discussion. We now briefly describe the rationale for each cut:

Constraints 1a and 1b apply the luminosity discriminant ratio S(Sr II 4078)/S(Fe I 4072) described in Section 3.1. Constraint 1a applies to the left-hand portion of the region outlined by cyan dashed lines in Figure 1 (i.e., 0.4 ≤ S(Ca I 4227) < 0.69 and above the sloping cyan dashed line); 1b applies to the right-hand portion (i.e., S(Ca I 4227) ≥ 0.69 and above the horizontal cyan dashed line).

Constraint 2 further selects out those stars which have Li I absorption strengths above a minimum threshold. We selected the threshold to be equal to the S(Li I 6708) measure of the M&S13 Li-rich giant with the weakest Li I feature.

Constraint 3 isolates the objects which have sufficient S/N in their spectra to make estimating luminosity and Li I strength more robust. We found that the spectra of stars with poorer S/N are generally much too noisy to yield reliable measures of either S(Sr II)/S(Fe I) or S(Li I).

Constraint 4 further limits the subset of stars to those with Li I 6708 line depths more than 1 σ above the local noise level. This helps eliminate stars that have excessive noise near the Li I line.

Constraint 5 limits the stars to only those with detected Sr II 4078 absorption, removing stars for which random noise, or poor continuum placement, produces a false emission feature.

Constraint 6, which uses a 48Ti16O (3,2) gamma system band head, was introduced to bias against stars for which the TiO bands were becoming sufficiently strong that they were noticeably affecting the region of the Li I line.

To impose the constraints described above, we constructed a suite of MATLAB (MATLAB 2011) codes implemented on a single computer workstation. The suite reads in the constraints on specified feature variables and returns a list of those stars for which the specified variables simultaneously satisfy all the constraints. This is logically equivalent to the CHN17 method of making a series of successive cuts in 2-D phase spaces that was the basis of the LSP approach. Applying constraints 1 through 6 to the dataset of 569,738 SDSS DR10 stars produces a subset of 1,523 stars which are potentially Li-rich giants. In the next sub-section, we will describe how we select out the most likely candidate Li-rich giants.
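The paper's exact logical expressions are not reproduced in the text above, but the selection logic they implement, with all cuts evaluated simultaneously over the feature-measure table, is easy to sketch. The Python fragment below shows the idea; apart from the published S(Ca I 4227) = 0.4/0.69 split, the 1σ requirement on D(Li I 6708) and the requirement that Sr II 4078 be seen in absorption, every threshold and column name here is a placeholder of ours rather than a value from the paper.

```python
import numpy as np

def select_candidates(t, sn_min, li_min, ratio_floor_sloped, ratio_floor_flat, tio_max):
    """Apply constraints 1-6 simultaneously to a dict `t` of per-star arrays.

    `ratio_floor_sloped` is a callable giving the sloping lower bound on
    S(Sr II 4078)/S(Fe I 4072) for 0.4 <= S(Ca I 4227) < 0.69; `ratio_floor_flat`
    is the horizontal bound used at S(Ca I 4227) >= 0.69. All numerical inputs
    other than the published cuts are placeholders.
    """
    ratio = t["S_SrII_4078"] / t["S_FeI_4072"]
    ca = t["S_CaI_4227"]

    c1a = (ca >= 0.4) & (ca < 0.69) & (ratio >= ratio_floor_sloped(ca))   # 1a
    c1b = (ca >= 0.69) & (ratio >= ratio_floor_flat)                      # 1b
    keep = (
        (c1a | c1b)
        & (t["S_LiI_6708"] >= li_min)      # 2: minimum Li I strength
        & (t["median_SN"] >= sn_min)       # 3: overall S/N floor
        & (t["D_LiI_6708"] > 1.0)          # 4: Li depth > 1 sigma above local noise
        & (t["S_SrII_4078"] > 0.0)         # 5: Sr II 4078 detected in absorption
        & (t["S_TiO_6815"] <= tio_max)     # 6: guard against TiO contamination
    )
    return np.flatnonzero(keep)
```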
Eliminating the obvious false positives The feature constraint on median S/N in Equation (3) was intentionally left "softer" than it might have been so as to capture as many candidates as possible. However, this means that stars may slip through the constraints whose spectra are too contaminated by noise in crucial spectral regions to be sure of their status. In addition, some of the feature strengths used in the constraints may have erroneous values caused by poor continuum placement. (A more detailed discussion of these issues may be found in CHN17, Section 4.) We dealt with these issues by visually examining the spectra of each of the 1,523 stars selected by the constraints of the previous section. The visual examination was done in two steps. In the first step, the chief criteria were the strength and apparent position of the purported Li I line, whether the spectral regions around the Li I line and the luminosity indicators appeared relatively unaffected by noise, whether there appeared to be TiO contamination of the Li region, and whether the continuum placement was appropriate. A secondary consideration was whether the Li I 6708 line was comparable to or stronger than the Ca I 6718 line (see Casey et al. (2016, Figure 5)). The ratio of these two lines was used by Kumar et al. (2011) in their study to identify candidate Li-rich stars from low-resolution giant spectra. This coarse initial cull was straightforward and was accomplished relatively quickly. It eliminated 1,350 stars from further consideration, the vast majority because the local noise level was too large to be confident of the Li line strength. In the second step, the spectra of the 173 remaining stars were subjected to a more prolonged and careful visual inspection which concentrated on the position, shape, and strength of their Li I 6708 line and the quality of the spectrum. Stars were eliminated if the apparent Li I feature appeared to be strongly asymmetric, shifted significantly from its nominal position, or was too similar in appearance to the surrounding noise features. This second visual cull left 49 candidates. These 49 stars included all 9 of the M&S13 Li-rich giants which fell into our search region, evidence that our selection procedure was working well. The next sub-sections describe how we confirmed whether the final 40 previously unrecognized Li-rich candidates were indeed giants.

Comparison with MILES stars To increase our confidence in the likelihood that we were selecting stars that were good Li-rich giant candidates, we first carried out a systematic comparison of each of the 40 stars with the MILES MS and post-MS stars. First we normalized the spectrum of each Li-rich giant candidate and each MILES MS and post-MS star. This was accomplished by normalizing each spectrum by its continuum and then by its flux at 5837 Å so as to keep the scales of the different spectra consistent. Next we interpolated the resulting MILES spectra onto the SDSS DR10 wavelength set over the interval [3850-7400 Å]. Using the resulting fluxes, we computed the following summed square differences, SSD, for each of the 40 stars against each of the spectra of all the MILES MS stars seriatim and then, separately, all the MILES post-MS stars: SSD = Σ_{k=1}^{n} [F_cand(λ_k) − F_MILES(λ_k)]², where F_cand(λ_k) is the normalized flux at wavelength λ_k of one of the candidate Li-rich stars, F_MILES(λ_k) is the corresponding normalized flux of one of the MILES stars, and n is the number of wavelengths in the common wavelength set.
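A minimal sketch of the SSD comparison follows, assuming both spectra have already been continuum-normalized and scaled at 5837 Å as described; the use of simple linear interpolation and the dictionary-style MILES container are assumptions of ours, not the authors' stated implementation.

```python
import numpy as np

def ssd(wl_sdss, f_candidate, wl_miles, f_miles, lo=3850.0, hi=7400.0):
    """Summed square difference between a candidate and one MILES spectrum,
    evaluated on the SDSS wavelength grid restricted to [lo, hi] Angstroms."""
    in_band = (wl_sdss >= lo) & (wl_sdss <= hi)
    wl = wl_sdss[in_band]
    f_miles_on_sdss = np.interp(wl, wl_miles, f_miles)   # resample MILES onto the SDSS grid
    return np.sum((f_candidate[in_band] - f_miles_on_sdss) ** 2)

def closest_matches(wl_sdss, f_candidate, miles_library, n_best=10):
    """Rank a (hypothetical) {name: (wavelength, flux)} MILES container by SSD."""
    scores = [(ssd(wl_sdss, f_candidate, wl, fl), name)
              for name, (wl, fl) in miles_library.items()]
    return sorted(scores)[:n_best]
```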
For each candidate Li-rich star, we compared its spectrum with the closest matching (i.e., smallest SSD values) MILES MS and post-MS stars to see whether the candidate spectrum appeared more consistent with the spectra of dwarfs or giants. Attention was paid not only to the Sr II 4078/Fe I 4072 ratio, but also to the strengths of Sr II 4078 relative to the Fe I 4064 and Fe I 4046 lines (Gray & Corbally 2009). We also considered the values of SSD for the candidate star and the ten closest matches from the MILES main sequence and post-MS lists. For many stars, the SSD values for the top ten closest matches were very strongly in favor of a candidate being most like a giant or a dwarf. Based on the above comparisons, 31 stars were rejected because they more closely matched the spectra of MILES MS stars both in their Sr II to Fe I line ratios and in their SSD values. Nine stars remained as candidates to be Li-rich giants. We decided it was prudent to subject these 9 stars to an additional final check. The next sub-section describes the effort by one of us (ROG) to examine the spectra of the 9 stars in detail and make definitive spectral type classifications based on more than the limited number of spectral features we have considered up to this point.

Detailed spectral classification While the SDSS spectra have a much larger spectral range, the most sensitive temperature and luminosity criteria are found in the violet-green region, 3800-5600 Å. Because of the unavailability of an MK standard star library for the SDSS spectra, we convolved the SDSS spectra with a gaussian to reduce the resolution to that of the libnor36 MK Standards library (3.6 Å/2 pixels) of Gray & Corbally (2014). Gray & Corbally (2009) detail the temperature and luminosity criteria used in the MK classification of G- and K-type stars. In summary, temperature criteria involve the ratio of low-excitation neutral metal lines to hydrogen lines (Fe I λ4046/Hδ, Fe I λ4144/Hδ, Fe I λ4383/Hγ, as well as similar line ratios in the vicinity of Hβ). Those ratios, however, are invalid in metal-poor stars, and in that case the ratio of lines of the Cr I triplet (λλ4254, 4275, 4290 - all resonance transitions) with the higher-excitation Fe I λλ4250, 4260, and 4326 lines provides metallicity-independent temperature criteria. Luminosity criteria include the ratios of Sr II λ4077 to nearby Fe I lines (λλ4046, 4064, and 4072), Sr II λ4216/Ca I λ4226, Y II λ4376/Fe I λ4383, as well as the strength of the CN violet system, in particular the band blueward of the λ4215 bandhead. However, in stars with carbon abundance peculiarities, the CN band strength can give spurious results, as proved to be the case with a number of stars in the candidate Li-rich sample under consideration. The spectral types were determined by eye on the computer screen by direct comparison with the libnor36 MK standards. The spectral types we obtained are listed in Table 1 for the 8 candidates which proved to be giants. One candidate (J215914.37+004515.8) turned out to be a G9 dwarf and will not be considered further. Three out of the final 8 appear to be normal late G- and early K-type giants. The remaining stars, all late G- to early K-type giants (with the exception of J150029.54+010744.8, which is a Ib-II supergiant), show carbon peculiarities in the form of weak CH (G-band) and CN bands.

RESULTS The final set of 8 stars that survived the vetting process described in the previous section are presented in Table 1.
For completeness, we retain J150029.54+010744.8 in the set of candidates despite its luminosity class. A model atmosphere analysis will be needed to accurately place it relative to the giant branch. The table gives the date of observation of the measured SDSS DR10 spectrum, selected feature strengths and ratios as described in the text, the SDSS-assigned spectral type, and the spectral type determined by us. The S(Li I 6708) values show that, despite the comparatively low resolution of the SDSS spectra, the absorption depths of the Li I lines in the candidates are not trivial, ranging from 5% to 17%. The D(Li I 6708) values, the line depth in units of the local noise level, all suggest solid detections. Comparing the two columns of spectral types in the table, it is immediately apparent that our spectral types are all systematically earlier than the SDSS assignments. The differences are generally small and perhaps partially reflect the coarseness of the ELODIE library used by SDSS to classify the stars (Lee et al. 2008). Nuances introduced by weakening of CH and CN bands within a spectral type, captured by our approach and indicated in the "SpT Notes" column, might have confused the SDSS classification as well. The DR7 dataset used by M&S13 contains observations obtained up to July 2008 (Abazajian et al. 2009), whereas the DR10 dataset we used contains SDSS optical observations through June 2012 (Ahn et al. 2014). As we mentioned in Section 3.4.1, our approach captures all 9 M&S13 stars in our dataset which have S(Ca I 4227) ≥ 0.4. We note that Table 1 contains 4 additional stars that, according to their dates of observation, were present in the DR7 dataset. These stars apparently failed to pass one of the selection criteria used by M&S13 to derive their list of 36 Li-rich candidates suitable for high resolution examination. It will be interesting to see whether or not future analysis of these stars confirms that they are Li-rich giants as we suggest. The spectra of the remaining 4 stars in Table 1 were obtained after July 2008 and could not have been considered by M&S13. We find it somewhat surprising that we discovered only 4 new candidates among the stars observed after the end of the DR7 dataset. Our downloaded CHN17 dataset has 364,265 stars observed before July 2008 and 205,473 stars observed after that date. This makes the post-July 2008 portion of the dataset 56% of the size of the earlier portion. Given that we found a total of 13 Li-rich candidates in the earlier dataset (the 9 M&S13 stars plus our 4 new candidates), one naively might expect that the more recent portion alone would yield roughly 7 candidate Li-rich giants. That we found only 4 may be only a reflection of the uncertainty of small number statistics. It also may be the result of a shift in the spectral type mix between the two portions of the dataset given that the stellar classes targeted by the SDSS changed with time as the survey went on. We show in Figure 2 spectra of the stars of Table 1 in the vicinity of the Li I line. We have marked with dashed lines the Li I 6708 doublet, the Ca I 6718 feature used by Kumar et al. (2011) and used by us as a secondary criterion, and the TiO 6815 band head we used in Section 3.3. For comparison, we also show at the bottom the two high S/N M&S13 stars with S(Ca I 4227) ≥ 0.4 having the weakest and the strongest Li I lines. It is apparent that the Li I features in our candidates are comparable in strength or stronger than those in stars identified as Li-rich giants by M&S13. 
[Figure 2 caption, partial: spectra of the Table 1 stars together with two bounding M&S13 stars; the spectra have been normalized to their mean flux in the interval 6693-6695 Å.] Finally, we show our Li-rich giant candidates in Figure 1 as green dots. Our 8 candidate Li-rich giants are distributed in the plot much like the 9 M&S13 already-confirmed Li-rich giants. The most luminous are well away from the boundary between the MS and the post-MS stars. Like the majority of M&S13 stars, the remainder of our candidates lie closer to the boundary. The locations of our candidates in Figure 1, their spectral types in Table 1, and their strong Li I lines (Figure 2) all suggest that they are Li-rich giants. We offer these candidates to researchers for closer examination, an undertaking well beyond the limited scope of this paper. Model atmosphere analyses of higher resolution spectra will be required to definitively determine the Li abundance and evolutionary status of our candidate Li-rich giants.

SUMMARY In the current paper, we describe a new approach to identifying candidate Li-rich giants using the SDSS DR10 data release. As part of an earlier investigation (CHN17), 569,738 SDSS DR10 spectra were processed through a pipeline which yielded feature strength measurements for each of 1659 unique spectral features in each spectrum. The resulting nearly 2 billion feature measurements can be used to construct phase spaces of measurements. One may then introduce constraints that can be used to isolate stars with desired characteristics. In CHN17 this was accomplished using linked scatter plots and the hyperwall. In the current paper, we introduced a simple procedure for applying constraints that can be accomplished on a single workstation.
Effect of the Road Environment on Road Safety in Poland

Run-off-road accidents tend to be very severe because when a vehicle leaves the road, it will often crash into a solid obstacle (tree, pole, supports, front wall of a culvert, barrier). A statistical analysis of the data shows that Poland's main roadside hazard is trees and the severity of run-off-road crashes in which a vehicle strikes a tree. The risks are particularly high in north-west Poland, with many of the roads lined with trees. Because of the existing rural road cross-sections, i.e. trees directly on the road edge followed immediately by drainage ditches, vulnerable road users are prevented from using shoulders and forced onto the roadway. With no legal definition of the road safety zone in Polish regulations, attempts to remove roadside trees lead to major conflicts with environmental stakeholders. This is why a compromise should be sought between the safety of road users and protection of the natural environment and the aesthetics of the road experience. Rather than simply cutting down the trees, other road safety measures should be used where possible to treat the hazardous spots by securing trees and obstacles and through speed management. Accidents that are directly related to the road environment fall into the following categories: hitting a tree, hitting a barrier, hitting a utility pole or sign, vehicle rollover on the shoulder, vehicle rollover on slopes or in a ditch. The main consequence of a roadside hazard is not the likelihood of an accident itself but its severity. Poland's roadside accident severity is primarily the result of poor design or operation of road infrastructure. This comes as a consequence of a lack of regulations or poorly defined regulations and failure to comply with road safety standards. The new analytical model was designed as a combination of the different factors so as to serve as a comprehensive model. It is assumed to describe the effect of the roadside on the number of accidents and their consequences. The design of the model was based on recommendations from analysing other models. The assumptions were the following: the model will be used to calculate risk factors and accident severity; the indicators will depend on the number of vehicle kilometres travelled or traffic volumes; analyses will be based on accident data (striking a tree, hitting a barrier, hitting a utility pole or sign). Additional data will include roadside information, and casualty density measures will be used - killed and injured.

Introduction The risk of becoming involved in an accident is the result of a malfunctioning element of the transport system (man - vehicle - road - environment). The road and its traffic layout and safety equipment have a critical impact on road user safety [1]. This gives infrastructural work a priority in road safety programmes and strategies at the global [2], European [3] and national level [4]. Run-off-road accidents tend to be very severe because when a vehicle leaves the road, it will often crash into a solid obstacle (tree, pole, supports, front wall of a culvert, barrier). The risks are particularly high in north-west Poland, with many of the roads lined with trees. This may have dire consequences, as could be seen in the tragic accident near Gdansk in 1994 when a bus hit a tree, killing more than 40 people. Because of the existing rural road cross-sections, i.e.
having trees directly on the road edge followed immediately by drainage ditches, vulnerable road users are prevented from using shoulders and forced onto the roadway. With no legal definition of the road safety zone in Polish regulations, attempts to remove roadside trees lead to major conflicts with environmental stakeholders. This is why a compromise should be sought between the safety of road users and protection of the natural environment and the aesthetics of the road experience. Roadside issues are some of the most critical road safety problems. Research has been conducted for a number of years to help identify roadside hazards and ensure effective road user safety measures. Road safety has been on Poland's agenda since 1994, following a World Bank mission which defined the gravity of the problem compared to other countries, mainly in Western Europe. Over the years different road safety programmes emerged. The programmes mainly focussed on the need to change how the roadside should be designed, developed and used, especially on single-carriageway non-built-up sections, to reduce the severity of run-off-road crashes [4,5]. The main consequence of a roadside hazard is not the likelihood of an accident itself but its severity [6,7]. Poland's roadside accident severity is primarily the result of poor design or operation of road infrastructure. This comes as a consequence of a lack of regulations or poorly defined regulations and failure to comply with road safety standards. As we know from a number of studies looking at how specific road factors affect safety, the roadside environment and its components (vegetation, shoulders, embankments, drainage ditches, poles, signs, engineering objects, etc.) are very critical [6], [8-12]. Roadsides also include barriers. While they stop vehicles from hitting obstacles or going down a roadside slope, they constitute obstacles themselves. If poorly designed in terms of function and structure, barriers may pose a serious hazard. An analysis of crash statistics shows that Poland's main roadside hazard is trees and the severity of accidents in which vehicles run off the road and collide with trees. While the other elements are also a source of safety hazard, they are so to a lesser extent. The severity of the different types of run-off-road accidents was analysed (measured as the number of fatalities per 100 accidents), with the following results: hitting a barrier - 10, hitting a tree - 23, hitting a sign or pole - 9, rollover - 7. As the figures show, run-off-road accidents are clearly most severe when they involve hitting a tree. The next analysis looked at roadside accidents by road category. The following categories are applied: national roads, regional roads and other roads (county and municipal). Run-off-road accidents are most common on regional roads (15%), followed by other roads at 10% and national roads at 9%. As regards fatalities, the highest share occurred on other roads at 24%, followed by regional roads at 22% and national roads at 11%. The safety of national roads is much better than in the other categories. This is because more investments are made to upgrade these roads and the removal of roadside trees is easier. Roadside accidents were also analysed for their regional distribution in the years 2012-2015 (figure 2), with Pomorskie clearly having the worst record. New measures are required to reduce the hazards posed by dangerous roadside environments.
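The severity figure quoted above (fatalities per 100 accidents of a given type) and the exposure-based indicators assumed for the new analytical model can be computed directly from crash counts and traffic data. The short sketch below illustrates both; the function names, the per-million-vehicle-kilometre scaling and the example counts are illustrative assumptions of ours, not values from the paper.

```python
def severity_per_100_accidents(fatalities, accidents):
    """Accident severity: number of fatalities per 100 accidents of a given type."""
    return 100.0 * fatalities / accidents

def fatality_density_per_million_veh_km(fatalities, aadt, section_length_km, years):
    """Casualty density per million vehicle-kilometres travelled on a road section.

    Exposure = AADT * 365 * years * section length; the per-million scaling is an
    illustrative convention rather than one prescribed by the paper.
    """
    vehicle_km = aadt * 365.0 * years * section_length_km
    return 1.0e6 * fatalities / vehicle_km

# Example: the hitting-a-tree severity quoted in the text (23 fatalities per
# 100 accidents) recovered from hypothetical counts of 230 deaths in 1,000 crashes.
# severity_per_100_accidents(230, 1000)  # -> 23.0
```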
Roadside hazard A number of in-the-field tests were conducted looking at road infrastructure and its safety. Based on the findings, a number of elements were identified which present a potential roadside hazard to road users. In 2013 a road safety inspection method was developed and implemented. The development of the Polish method took account of the experience of other countries [13 -15]. Selected sources of hazards were illustrated with photographic documentation ( figure 3 and figure 4). The sources of Poland's most prevalent roadside hazards include: • trees close to the edge of the road (up to 3 metres away from the edge of the carriageway the risk is the highest, especially in the area of bends in horizontal alignment, junctions and exits), • other green restricting visibility, • elements of infrastructure which are unyielding (concrete or wooden poles, masts, etc.), • supports of civil engineering objects too close to the edge of the road, unsecured (e.g. bridge supports), • drainage facilities -vertical concrete front walls of culverts, • steep embankments, • poor technical condition of shoulders, • inadequately terminated, too short, wrong operating width and damaged road barriers. As well as being the direct cause of an accident, these sources of hazards cause other types of accidents because of where they are. This includes head-on collisions if there are structures within the road, hitting a pedestrian or bicyclist because there is no space for the vehicle to use beyond the carriageway. When these types of accidents are reported, the statistics does not take account of the roadside as a cause or circumstance (e.g. no trees were hit but it was the trees that restricted visibility and eventually led to the accident). As a result, roadside conditions are underreported in road accident databases. All the above examples were identified during a check of Poland's national and regional roads. Sadly, this does not stop there. Accident statistics presented in Section 2 are the consequence of hazardous roadsides, including those along important routes. Below is an illustration of the roadside along one of the most important routes in Warminsko-Mazurskie (north-east of Poland). The region had the highest number of fatalities in run-off-road crashes between 2013 -2015, (figure 2). The trees along the road are very close to the edge of the carriageway. The hazards here are heightened during winter increasing the risk of run-off-road accidents which in this case will have very serious consequences. Methods to solve the problem An important part of reducing the "aggressive" effects of roadside on road safety is to ensure that road network safety is managed as set out in the Directive of the European parliament 2008/96/European Commission [16] and that it forms part of road safety management in the broad sense [1]. Road network safety management involves a procedure divided into several steps designed to: • assess safety and identify high-risk sections, • carry out road safety checks and identify the hazards and sources of hazards on high-risk sections, • select the most effective and efficient corrective measures that are appropriate for the funding available, • communicate the hazards to road users and partners (local authorities, police, and partnered businesses), Road safety management can be delivered at three levels: strategic, tactical and operational. This also applies to the problem in question which can be studied for the different levels of risk management. 
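One of the steps listed above, selecting the most effective and efficient corrective measures for the available funding, can be sketched as a simple benefit-per-cost ranking under a budget cap. The measures, costs and benefit figures below are invented placeholders, intended only to illustrate the mechanics of such a selection.

```python
# Hypothetical corrective measures: (name, cost in kPLN, expected casualties prevented).
measures = [
    ("safety barrier on curve", 400, 6.0),
    ("tree removal and replanting", 250, 3.5),
    ("speed management (70 km/h)", 50, 1.5),
    ("crash cushion at bridge pier", 120, 2.0),
]
budget = 600  # kPLN available for this high-risk section

# Greedy selection by benefit-to-cost ratio (a simple stand-in for a full cost-benefit analysis).
selected, spent = [], 0
for name, cost, benefit in sorted(measures, key=lambda m: m[2] / m[1], reverse=True):
    if spent + cost <= budget:
        selected.append(name)
        spent += cost

print("selected:", selected, "| spent:", spent, "kPLN")
```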
Strategic risk management occurs primarily when road networks are planned and operated. This is delivered by central authorities and central road authorities. The main sources (factors) of hazard at the strategic level that contribute to the severity of run-off-road accidents include: • historical legacy -alleys lined up with trees, these can be found in regions previously under Prussian rule, a typical roadside design (this is well reflected on the map in figure 2), • speeding because drivers notoriously drive over the speed limit, • existing infrastructure which "forgives" drivers their mistakes on some sections only, • lack of safe roadside design standards or guidelines for designers, • conflicts with environmental services (the hermit beetle, an insect, is more important than human life). If the effects of these factors are to be reduced, we need well programmed actions, effective road safety programmes and plans to support legislation. Roadside hazards are also caused by poor design, construction and maintenance of roadsides. This problem is addressed in a number of programmes and road safety plans. The National Road Safety Programme GAMBIT 2005 [5] dedicates two of its five strategic objectives to the problem of accidents involving striking a tree: • construction and maintenance of safe road infrastructure, • reduction of accident severity (by e.g. a "soft" roadside and "forgiving" roads) • The GAMBIT National Roads programme's objective number 3 aims to "Reduce road deaths as a result of running off the road". This is to be achieved by implementing four strategic actions designed to: • make roads more recognisable, clearer and more consistent, • ensure that vehicles stay in lane (signage, narrow hard shoulders), • shape a safe roadside (safe embankments, safe drainage facilities, removal of hazardous objects (including trees) • and secure hazardous objects (barriers, crash terminals). Many of the actions proposed in the programmes (even when facing opposition from a lack of legislative support, activists and environmental bodies and too little funding) have been a significant help with the reduction target. Reinforced by the GAMBIT National Roads programme, the National Road Safety Programme GAMBIT 2005 helped to reduce run-off-road fatalities within 10 years by 30%. It is estimated that by removing roadside hazards (removing trees and securing trees and utility poles) 2 250 people could be saved from death [17]. Despite that in the period of analysis as many as 6 300 people were killed in run-off-road accidents involving striking a tree or other roadside objects. The work started in the previous programming period must be continued under the National Road Safety Programme in the years to come. Tactical risk management occurs primarily when road networks and parts of roads are planned and operated. This is delivered by regional authorities and regional and county road authorities. 
The main sources (factors) of hazard at the tactical level that contribute to the severity of run-off-road accidents and require action include: • the region of the country; these problems occur in the north and west of the country, and as an example, in the region of Pomorskie sections of roads with trees less than 1.5 m away from the road occur on 20% of national roads, 40% of regional roads and 65% of county roads; • road category; roadsides are safer (fewer obstacles, more safety measures) on national roads of higher technical class, while regional and county roads are severely affected, • type of road section (straight section or horizontal curve), • limited visibility, especially at night-time. The main actions at the tactical level include the design, construction and operation of roads to take account of high-risk road sections. They are designed to: • identify high-risk sections on the road network; risk maps prepared in the EuroRAP project are very helpful with that (figure 3), • remove hazardous objects: felling trees, rearranging the objects or relocating the road away from the objects, • secure hazardous objects by using safety barriers and other structures, • apply speed management and hazard notification, • implement roadside safety standards. (Figure 3. Map of individual risk on Poland's national roads, 2010-2012 - run-off-road accidents.) A major problem for this level of management is to obtain permits to fell roadside trees posing a hazard to road users. The process can be helped by a recent Supreme Chamber of Audit report on road safety management which addresses this particular problem (NIK, 2014). Operational risk management occurs primarily when road structures are built, operated and deconstructed. This is delivered by local road authorities and local authorities. The main sources (factors) of hazard at the operational level that contribute to the severity of run-off-road accidents and require action include: • narrowing of the road and roadside which forces vehicles to drive in the opposing traffic lane (head-on collisions), • reduced visibility at junctions and exits (side impact), • road signs covered up (road not clear and understood), • no space for pedestrian traffic and reduced visibility at pedestrian crossings (hitting a pedestrian), • damage caused to road infrastructure. The main actions at the operational level include construction and operation of roads to take account of: • better visibility through special signage or removing trees at junctions to ensure visibility, • using the 2-1 cross-section on county and regional roads (tests have been conducted in the Chojnice area - Fig. 7a), • use of local speed limits (70 or 50 km/h), • special signage - least effective (Fig. 7b). The success of operational level management depends on the quality of efforts at the higher levels. The funding at this level of management is insufficient to fund prevention and treatment. As a result, local roads are usually treated by putting up signs, reducing speed and felling trees. Poland's efforts to reduce roadside hazards frequently build on European initiatives [18]. Modelling the effects of roadside on road safety Analyses of models of how roadside elements affect road safety [19 -22] showed that the methodologies and data differ from model to model. Because the models focus on different factors, they each have strengths and weaknesses.
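Crash prediction models of the kind compared in the literature typically relate accident or casualty density to exposure (AADT or vehicle kilometres) and roadside features through a log-linear (exponential regression) form. The sketch below evaluates a model of that general family with invented coefficients; it illustrates the mechanics only and is not the model estimated for Pomorskie in the next section.

```python
import math

def casualty_density(aadt, features, alpha, beta_q, betas):
    """Generic log-linear crash model: alpha * AADT**beta_q * exp(sum(beta_j * x_j))."""
    exponent = sum(betas[name] * value for name, value in features.items())
    return alpha * aadt ** beta_q * math.exp(exponent)

# Invented coefficients and roadside features (shares of section length).
alpha, beta_q = 1.0e-4, 0.8
betas = {"trees_within_3.5m": 0.9, "barriers": -0.6, "hard_shoulder": -0.4}

section = {"trees_within_3.5m": 0.30, "barriers": 0.10, "hard_shoulder": 0.50}
print(round(casualty_density(8000, section, alpha, beta_q, betas), 3),
      "casualties per km per period (illustrative only)")
```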
The new analytical model was designed as a combination of the different factors and one that will serve as a comprehensive model for Polish conditions. It was assumed that it will describe the effect of the roadside on the number of accidents and their consequences. The design of the model was based on recommendations from analysing other models. The assumptions were the following: the model will be used to calculate risk factors and accident severity, the indicators will depend on the number of vehicle kilometres travelled or traffic volumes, and analyses will be based on accident data: striking a tree, hitting a barrier, hitting a utility pole or sign. Additional data will include roadside information, and casualty density measures will be used - killed and injured. The study was conducted for national roads in the region of Pomorskie. The first phase of the study was designed to build an inventory of roads and build accident databases. The next stage was to develop mathematical models to show the correlations between roadside and accidents. All analyses were based on the SEWIK database. The accident database included information about accidents and collisions involving running off the road. A period of three years (2013-2015) was selected and served as the basis for all calculations and models. The inventory covered sections of national roads with a total length of about 777 km (except national roads in urban areas). There were separate inventories for the left and right edge of the roadway and the central reservation (in the case of dual carriageways). Potential roadside hazards were identified (trees, embankments, utility poles, engineering structures) as well as selected types of barriers (concrete, steel, ropes). To ensure that data were collected consistently, two databases were built: a roadside database and an accident database. The primary data that were imported into the databases at the start included Road Data Bank information about reference sections with details on: section length, traffic volume, number of vehicle kilometres travelled and share of hard shoulders. With the large set of data already in place, reference sections were used for collecting roadside information and creating computational models. The roadside database had about eight thousand records - measurement sections assigned to reference sections 1 - 5 km long. The records contained data about section length, annual average daily traffic flow, number of junctions, exits, signs, utility poles and the percentage share of sections with barriers, trees, embankments and hard shoulders depending on their width. This section presents the analyses and results of the GOF victim density rate. The objective of the model is to estimate the expected number of victims of accidents on national roads per kilometre of road over a specific period. The victim density model is described with formula (1), which expresses the expected victim density as a function of traffic volume and the roadside risk factors, where: GOF(Y) - expected number of accident victims per kilometre of road (dependent variable); α - adjustment coefficient; Q - annual average daily traffic (AADT); βj (j = 1, 2, …, n) - calculation coefficients; B, S, T1, T2, T3, C, P1, P2, P3 - factors related to the risk of an accident (independent variables). The model has a determination coefficient (R²) equal to 0.85. (Table 1. Parameter estimates of the crash prediction models of Eq. (1).) Results of the study The effectiveness of road safety measures largely depends on how intensively evaluation tools are used to understand the benefits. These tools include prognostic models.
They can be used to identify high risk sections or study the relation between road section features and the potential for accidents. An analysis of the study in Pomorskie shows that victim density declines as the percentage of section with barriers and hard shoulders increases. The results of the study are presented for single carriageways in outside built-up areas of class GP (trunk road with higher speed limits). The number of victims depends on trees within a distance of up to 3.5 m, embankments and trees further away (more than 3.5 m from road edge). A number of road projects struggle with the choice of the most effective safety measures. Choosing the cheapest option may turn out to be hazardous for road users. The consequences and direct and indirect costs of accidents may exceed the original financial gains. If equipped with the right tools, each of the options can be assessed for its pros and cons (cost and benefit analysis, multiple criteria analysis). Cost analysis of safety barriers is an excellent example. Just as any other road safety equipment, barriers are an additional cost when new roads are built or upgraded. The price, lack of good will on the part of road authorities or lack of knowledge are usually the reasons why safety barriers are frequently ignored. Analyses show that putting in safety barriers may reduce the number (density) of victims compared to the same sections with the same hazards and no safety barriers: it is three times in the case of trees more than 3.5 away from the road edge, five times in the case of embankments and as many as seven times in the case of trees up to 3.5 m from the edge. Conclusion Over the last twenty five years more than 20,000 people were killed on Polish roads in run-off-road crashes (of which a clear majority involved hitting a tree). Analyses and studies of roadside hazards offer the following conclusions: • the main factors that influence the risk of being involved in such a crash are: historic developments, road class, length and element of carriageway, hazardous elements at the edge of carriageway (mainly trees), safety measures in place or lack of safety measures, • the risk is the highest in the north and east of Poland considering the entire road network, and in the east of Poland in the case of national roads. • with no regulations, design standards and cooperation with environmental organisations and institutions, human life is valued below that of trees, lichens and insects. • to improve roadside safety we must: identify the hazards on the road network, conduct checks, conduct research (build models of the effects of selected factors on road safety, effectiveness evaluation), implement safety standards, develop guidance and principles for safe roadsides, ensure that there is collaboration between designers, road authorities and environmental organisations and institutions, exchange experience with other countries. For years roadside environments have been one of the most neglected aspects of road safety efforts in Poland. Clarity is needed on the effects of roadsides on road safety. We must understand the hazards roadsides cause and implement effective solutions.
Generality Challenges and Approaches in WSNs Ignoring the generality in the design of Wireless Sensor Networks (WSNs) applications limits their benefits. Furthermore, the possibilities of future extension and adaptation are also restricted. In this paper, several methods to enhance the generality in WSNs are explained. We have further evaluated the suitability of these methods in centralized and de-centralized management scenarios. Introduction The field of wireless sensor networks (WSN) is becoming a very popular area that is finding many applications in different fields. In addition to the benefits which one can get, this expansion also comes with many difficulties. In this paper, we target one of them, namely to provide standard and general paradigms in dealing with WSNs. Supporting platform- and hardware-independent applications for WSN is very important in order to ease the use of the big diversity of WSN applications. Moreover, in WSNs, finding generic management architectures that can be reused for managing different sensor node platforms is a challenge that has been posed and emphasized by the WSN community for a long time. The management architecture should be capable of running over a broad range of WSN platforms and supporting a wide variety of WSN applications. The purpose of this paper is to present a generality solution which can provide this feature. WSN management provides control and management of the system components such as the network layer, operating system parameters and application settings in real sensor nodes. Tools that can provide such generic remote interactions with the nodes are largely missing. Initially, this generality issue should be viewed in the context of the current traditional network management challenges, so that the lessons learned from generality in traditional networks can be exploited. Therefore, we are going to discuss some similarities and distinctions of generality between traditional and wireless sensor networks. In traditional networks, the generality of the system provides several important benefits such as compatibility, interchangeability, and simplicity. Compatibility enables users to take advantage of different components in similar architectures. Interchangeability among components with different architectures is enabled. Simplicity enables having a simple and similar way of interacting with the components and using them. Generality solutions provide other advantages such as resolving heterogeneity and providing openness. Although generality provides many advantages, it adds additional overhead to the system. However, this overhead should not cause a significant loss of efficiency.
WSNs are application-specific, which is obviously contrary to generality. Here, it is very difficult to find general solutions for the following reasons. The nodes in WSNs have limited resources such as small physical size, small memory, weak computation, a limited energy budget, and narrow bandwidth. Moreover, there is a wide range of applications in which one can apply WSNs, such as environmental tracking applications, medical and industrial applications, home automation applications, surveillance systems, etc. Furthermore, there is diverse hardware equipment used in designing sensor nodes. For example, there are many sensor technologies which can be built into sensor nodes, such as ECG, EMG, humidity, temperature and vibration sensors. This diversity requires a specific design and specific settings. Finally, over the last few years of WSN evolution, a large number of software solutions have been introduced which have increased the software diversity and hence have further complicated finding general solutions. This paper is organized as follows: Section 2 discusses the parameters that influence generality and the proposed generality paradigms. Discussion In WSNs' applications, supporting the generality can be achieved through many solutions. Furthermore, the degree of the supported generality mainly depends on the degree of similarity and distinction among the different WSNs' applications. Additionally, the generality schemes can be evaluated from many aspects such as complexity, mobility, openness and scalability. In the following, we have specified the parameters which influence the generality of the system and the management components. These parameters can be classified into two main categories: Compatible System Matrices By these matrices of components, we mean common components which can potentially be compatible with a large number of system or management architectures. An example of a compatible system matrix is the identification of a sensor node or a group of nodes. In current WSN applications, there are mainly two ways of naming and identifying nodes. They are the following: • Data-Centric paradigm: Here, the node is named by one or more attributes. This paradigm is the most preferred in WSNs because data in WSNs is demanded based on certain attributes, not on the node identity itself. It also supports efficient energy consumption as compared with the Address-Centric paradigm [1]. Furthermore, Data-Centric naming provides to a group or to a single node an identification which is based on their attributes such as geographical placement, events, sensor values, time-of-occurrence, etc. • Address-Centric paradigm: This paradigm is commonly used in traditional networks to identify a particular node depending on an initially assigned address. Each node has a fixed address which can be used to classify the originator of each received message. This paradigm can be used mostly in small-scale WSN applications. Mapping between these two paradigms is possible, whether the applications are based on Data-Centric or Address-Centric components. This means that a generic architecture component can be realized to resolve the heterogeneity between applications based on both identification schemes. An example of this generic identification scheme is supported by SP (Sensornet Protocol) [2].
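A generic identification component of the kind described above essentially maps between attribute-based (Data-Centric) and address-based (Address-Centric) selection of nodes. The following toy sketch illustrates such a mapping layer; the node table and attributes are invented.

```python
# Toy node table: address -> attributes (the data-centric view of the same nodes).
nodes = {
    0x01: {"sensor": "temperature", "region": "field-A"},
    0x02: {"sensor": "humidity",    "region": "field-A"},
    0x03: {"sensor": "temperature", "region": "field-B"},
}

def resolve(query):
    """Data-centric query (attribute constraints) -> set of node addresses."""
    return {addr for addr, attrs in nodes.items()
            if all(attrs.get(k) == v for k, v in query.items())}

def attributes_of(address):
    """Address-centric lookup -> attributes, bridging back to the data-centric view."""
    return nodes.get(address, {})

print(resolve({"sensor": "temperature"}))   # addresses of all temperature nodes
print(attributes_of(0x02))                  # attributes of one addressed node
```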
Incompatible System Matrices These abstractions represent those components of the system which cannot be mapped to each other. In other words, they represent the components that cannot be shared with or reused in distinct WSN applications. An example of this scheme is the use of contention-based MAC protocols such as CSMA alongside scheduling-based MAC protocols such as TDMA or FDMA. Although these two schemes fulfill the same objectives, they are based on entirely different characteristics. Figure 1 represents the compatibility and incompatibility between a few of the system abstraction components. Basically, the main objective of generality is to find a way to overcome the incompatibility between the incompatible matrices. To achieve general management solutions for such heterogeneous matrices of WSNs, we propose the two following schemes: • Management based on the compatible system abstraction matrices, in which vendors of a specific type of sensor node have to set already-agreed existing management matrices, for instance by following either an existing standard or using compatible general abstractions. • Management based on additional system abstractions. For example, adding middleware that manages the heterogeneity among incompatible components. Another way is using formal languages to converge the incompatibility of the syntax and the semantics among different components. This can be done using a derivation of ASN.1, which is used to provide efficient communication between heterogeneous applications in traditional networks. Generality Paradigms To achieve generic solutions for the heterogeneous matrices of WSNs, we propose five schemes. These are based on a survey of existing solutions and on schemes we have proposed. These solutions can also be integrated with the management of WSNs to provide a generic management. Middleware The middleware aims at providing transparent common-level abstractions of one or different levels of the local or remote nodes. In other words, middleware techniques are used to resolve two main challenges. The first is interfacing homogeneous applications on different platforms, and the second is interfacing two heterogeneous applications running on homogeneous platforms. In WSNs, middleware can be used to reduce and address the limitations of application specificity. It also supports the commonness of the systems, deployment, development and maintenance. Many middleware solutions for WSNs have been proposed. Salem et al. [3] have covered in their classification a large number of the WSN middleware proposals such as Mate' [4], TinyDB [5], SINA [6], and many others. In Mate', a middle abstraction layer is provided. This layer interprets the applications as byte code. Mate' has 24 one-byte-long instructions which can be injected into the network and then propagated to the nodes. Such an abstraction hides the original platform (hardware and operating system) to unify the interpretation of the Mate' instructions; hence, it provides the portability of applications, which are built using the Mate' instruction set, among different platforms that support the Mate' virtual machine. As it is not feasible to accomplish common middleware for all kinds of WSN nodes, WSNs should be classified into subcategories. As a proposal, WSN applications can be divided into high-rate WSNs (industrial applications, medical applications, home surveillance, etc.)
and low-rate WSNs (habitat monitoring, agriculture monitoring, etc.). For each of these main categories, common middleware specifications can be given. In order to provide a general solution by using middleware, the sensor node designer should adopt one of the middleware techniques, so that these particular sensor node types support the generality of all nodes using the same middleware. The middleware approach is applicable to both centralized and de-centralized management solutions. In centralized management, where nodes are globally managed by one or multiple external entities such as special strong central nodes or a cluster head, the management framework on these strong nodes should follow the middleware specifications followed on the individual nodes. This can easily be achieved, as these central nodes have unlimited resources as compared to ordinary sensor nodes. Therefore, they can accommodate multiple middleware of diverse existing sensor nodes. Hence, such strong nodes can provide support for different heterogeneous sensor nodes at the same time. In decentralized management, where nodes locally establish the management among each other, vendors should adopt the middleware which is used by other existing individual nodes, so that these different nodes have a common interface. Here, a general management is difficult to achieve due to the nodes' restrictions. Figure 2 shows a representation of the system layers for both centralized and de-centralized management. Dynamic and Mobile Agents Mobile agents, in this context, are small pieces of code which can be exchanged between the nodes in order to resolve the heterogeneity. These mobile agents can also be added at compilation time as the management entities in [7]. Here, the sensor nodes' manufacturer provides an agent which comprises the node specifications. These specifications can later be used to enhance compatibility with the other nodes' specifications. In Agilla [8], the users are able to inject the mobile agents, which are special code segments, into the nodes. These agents propagate into the nodes to perform the application-specific tasks. The fluidity of these agents has the potential to convert the nodes in a WSN into a shared, general-purpose computing platform. Such platforms are capable of running several autonomous applications in parallel. In decentralized management, the sensor nodes initially announce their specifications by exchanging their agents. Then, the received agents are configured with the management core in order to establish a general management compatible with other heterogeneous nodes. This method is very limited due to the restricted resources on the sensor nodes. Also, agents should be sent as binary code due to the complexity correlated with sending agents as sources. In centralized management, this method is more efficient due to the unlimited resources available on central nodes as compared to the sensor nodes. As shown in Figure 3, the management core on the base station can launch the agents as dynamic libraries to resolve the specificity of other nodes.
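The common thread in the middleware and mobile-agent approaches is a platform-neutral interface that the management core programs against, with platform specifics hidden behind adapters. A minimal sketch of such an abstraction is given below; the platform adapters and operations are invented for illustration and do not correspond to any particular middleware.

```python
from abc import ABC, abstractmethod

class NodePlatform(ABC):
    """Platform-neutral interface the management core programs against."""
    @abstractmethod
    def read_sensor(self, kind: str) -> float: ...
    @abstractmethod
    def set_parameter(self, name: str, value) -> None: ...

class TinyOSNode(NodePlatform):           # hypothetical adapter for one platform
    def read_sensor(self, kind): return 21.5
    def set_parameter(self, name, value): print(f"TinyOS node: {name}={value}")

class ContikiNode(NodePlatform):          # hypothetical adapter for another platform
    def read_sensor(self, kind): return 21.7
    def set_parameter(self, name, value): print(f"Contiki node: {name}={value}")

def manage(node: NodePlatform):
    """Management logic written once and reused over heterogeneous nodes."""
    if node.read_sensor("temperature") > 21.0:
        node.set_parameter("sampling_interval_s", 60)

for node in (TinyOSNode(), ContikiNode()):
    manage(node)
```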
Semantic Methods This method is based on agreement on the functional meanings of data fields and the functionality of components. Here, the messages exchanged among heterogeneous applications are semantically interpreted in order to produce a general messaging structure. This general message format can then be used to provide general management. Figure 4 represents an example of semantically parsing two different messages from heterogeneous platforms (TinyOS and Contiki) in order to obtain a common message structure. Due to the complexity of applying formal language concepts to generate semantically common messages from different messages, this method can mainly be set on base stations in centralized management. The complexity and the effectiveness of this method depend on the degree of overlap of the functionality and data structures among the heterogeneous WSNs. This method is followed in a tool called Message Interface Generator (MIG) [9] in TinyOS applications. It is specific to TinyOS applications; however, it provides generality among heterogeneous applications using TinyOS. In this method, the user assigns semantic names to the messages' fields according to well-known naming conventions. On a base station, compatible messages can be semantically generated from the different message formats. Then, these generated messages can be used by the management core on the base station through the management framework, which can be based on JAVA, C or any other programming language. Standards Normally, in all standards, one or more of the system abstraction matrices are fixed. These fixed matrices have to be followed by all vendors or manufacturers. They should fulfill and cover all needed functionality at a specific level of the system. An example of that is SNMP. SNMP, which is proposed by the IETF, has specifications which should be fulfilled by all vendors who are going to introduce a solution compatible with or general to this protocol. In the WSN field, there are standards that have been adopted such as ISO-18000-7 [10], 6LoWPAN [11], WirelessHART [12], ZigBee [13] and Wibree [14]. All these standards were not developed explicitly for WSNs; rather, they are mainly proposed for supporting general low-power and low-rate networks, which is one category of current WSNs. SP [2] (Sensornet Protocol) is, so far, the only standard-like protocol that has been proposed specifically for WSNs. SP is a significant step forward towards the generalization of wireless sensor networks. SP represents a unifying abstraction layer that bridges the different network and application protocols to the different underlying data link and physical layers. SP is not at the network layer, which is the IP layer in the OSI model; instead it sits between the network and data-link layers (because in sensor nodes data processing normally occurs at each hop, not just at the end points). SP can be used to identify an individual node, a set of nodes, or a communication structure such as a tree. SP provides generality because it enables the users to use any MAC layer without having to care about the overlying network layers. Also, it enables users to use any network layer without having to care about the MAC specifications. In WSNs, using standards is the most efficient paradigm to obtain generality due to its simplicity and its efficiency. Standards cause lesser overhead in both centralized and decentralized management as compared to the other techniques that provide generality.
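As an illustration of the semantic method discussed above, the following sketch maps two invented, differently named message layouts (standing in for TinyOS- and Contiki-style packets) onto one common schema on the base station; the field names and the unified schema are illustrative only.

```python
# Invented field-name conventions for two heterogeneous platforms.
TINYOS_MAP  = {"src_attr": "origin", "temp": "temperature", "ts": "time"}
CONTIKI_MAP = {"node_addr": "origin", "t_val": "temperature", "stamp": "time"}

def unify(message: dict, field_map: dict) -> dict:
    """Translate a platform-specific message into the common (semantic) schema.
    Fields with no counterpart in the schema are discarded, as in MIG-style parsing."""
    return {common: message[local] for local, common in field_map.items() if local in message}

msg_a = {"src_attr": "field-A/temperature", "temp": 21.4, "ts": 1700000000, "rssi": -71}
msg_b = {"node_addr": 0x03, "t_val": 21.9, "stamp": 1700000002}

print(unify(msg_a, TINYOS_MAP))   # common schema; the extra 'rssi' field is dropped
print(unify(msg_b, CONTIKI_MAP))
```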
COTS (Commercial Off The Shelf) The COTS paradigm means using the components, hardware and software, available on the market in the design. The use of well-known legacy components based on existing standards enhances generality, while introducing new components increases the specificity of the system. For instance, using the common AVR microcontrollers would ease dealing with sensor nodes rather than designing new application-specific microcontrollers for WSNs, since users have to design more specific development tools and conventions in the latter case. This is still one of the dominating ways of designing general WSN applications. Conclusions To sum up, generality is an important feature used to address the specificity of heterogeneous platforms. As WSNs are application-specific, solutions that support generality provide a great help in management, deployment and development. Many methods can be applied to support generality, such as using middleware, semantic interpretation, mobile agents and agreed standards. The selection of one of these methods is based on the degree of application specificity and on the management type, whether it is centralized or de-centralized. In general, centralized management schemes are not suitable in large-scale WSNs; however, decentralized schemes perform well in such scenarios, as the management is distributed over all the nodes. The implementation and the use of different generalization schemes in centralized management are simpler than in decentralized management. Since the management in centralized schemes is performed at the central nodes having sufficient resources, the resources of the individual sensor nodes are less consumed. The table below summarizes the comparison of the generalization methods in centralized (first entry) and decentralized (second entry) management: Middleware - not scalable, easy to use, resources-efficient / scalable, difficult to use, resources-inefficient; Dynamic and Mobile Agents - not scalable, easy to use, resources-efficient / scalable, difficult to use, resources-inefficient; Semantic Methods - not scalable, easy to use, resources-efficient / scalable, difficult to use, resources-inefficient; Standards - not scalable, easy to use, resources-efficient / scalable, easy to use, partially resources-efficient. Figure 2. Representation of a middleware to have a general interface between nodes from multiple vendors. Figure 3. Representation of the mobile agent paradigm to enhance the generality of WSN applications. Figure 4. How to semantically produce general packets used in managing heterogeneous WSNs: a message from a node using TinyOS based on Data-Centric naming and a message from a node using the Contiki operating system based on Address-Centric naming; D represents fields discarded due to non-overlap with the other message format, U a unified field due to semantically missing corresponding fields.
Bisphenol A Detection in Various Brands of Drinking Bottled Water in Riyadh, Saudi Arabia Using Gas Chromatography/Mass Spectrometer Purpose: To assess whether bisphenol A contamination occurred in seven brands of bottled drinking water in Riyadh, Saudi Arabia. Methods: Liquid-liquid extraction (using dichloromethane) was used to analytically extract bisphenol A from drinking water bottles and a gas chromatograph-mass spectrometer was employed for its detection using a splitless capillary column and helium as the carrier gas. Results: The concentration of bisphenol A (BPA) was high in all the bottled water brands tested. The mean concentration of BPA of the bottled water stored indoors (4.03 ng/L) was significantly lower than that stored outdoors (7.5 ng/L). Conclusion: Our results show that significant amounts of BPA leached from bottle containers into the water. Long storage of bottled water under direct sunlight should be avoided to reduce the risk of human exposure to BPA. INTRODUCTION Bisphenol A (BPA) has been well characterized as an endocrine disruptor which can mimic the body's own hormones [1,2], potentially leading to reproductive defects, cancer, obesity and diabetes [2,3]. BPA is a key monomer and plasticizer in the production of epoxy resins and polycarbonate plastic. Epoxy resins are used to coat the interior of food cans, storage vats, water containers, infant bottles, and other consumer products and water pipes. Human safety levels for BPA are currently under review as a result of new scientific studies [4,5]. A 2011 study that investigated the number of chemicals to which pregnant women in the U.S. are exposed found BPA in 96 % of the studied women [6]. Furthermore, drinking water and other beverages from plastic bottles made with BPA increased the urinary levels of the toxic chemical by nearly 70 % [7]. Many questions had been raised about the possibility of BPA migrating from the bottles under poor conditions such as high temperature and sun radiation. General population exposure (99 %) is by eating food or drinking beverages that contain trace amounts of BPA. In general, plastics that are marked with recycle codes 1, 2, 4, 5, and 6 are very unlikely to contain BPA. Some, but not all, plastics that are marked with recycle codes 3 or 7 may be made with BPA. Type 7 is the catch-all "other" class, and some type 7 plastics, such as polycarbonate (sometimes identified with the letters "PC" near the recycling symbol) and epoxy resins, are made from BPA monomer [8,9]. Bottled water consumption is important in Saudi Arabia, being the main source of drinking water. The aim of the present study was to investigate the presence of BPA in seven brands of bottled drinking water under different storage situations in the province of Riyadh, Saudi Arabia. EXPERIMENTAL Materials Bisphenol A, and bisphenol A (BPA-d 16 ) were purchased from Loba Chemie Labrotary (Mumbai, India). Dichloromethane was obtained from King Saud University, Riyadh, Saudi Arabia. Bisphenol A (BPA-d 16 ) was used as an internal standard. Bottled water samples were purchased from local stores in Riyadh, Saudi Arabia. Sample preparation Seven brands (labeled 1 -7) of commonly consumed bottled water were randomly purchased from local supermarkets in Riyadh, Saudi Arabia. A set of seven different brands of bottled water was purchased from among those stored in indoors while the second set (the same brands as those in the first set) was set was purchased from among those stored outdoors. 
In each case, water in the bottles was immediately extracted and analyzed. Seven water bottles per brand were used in the analysis. The first set of bottles was stored for a couple of days at room temperature prior to extraction. A sample of water (1 L) was transferred from each bottle to a separating glass funnel. Liquid-liquid extraction with dichloromethane (3 x 50 ml) was employed for the isolation of BPA. The extract was concentrated under a gentle stream of nitrogen. Determination of bisphenol A Gas chromatography/mass spectrometric (GC-MS) analysis of the BPA from water samples was carried out using a Perkin Elmer (Clarus 500) gas chromatography/ mass spectrometer. Separation was done using a capillary column Elite-5-MS (30 m x 0.25 nm, 0.5 μm film thickness). Helium was used as the carrier gas with a constant flow rate of 0.1 ml/min. The temperature of both ion source and quadropole was set at 150 °C. A sample volume of 1 µL was injected in splitless mode at an inlet temperature of 300 °C. The GC oven temperature was set at 100 °C (for 2 min) and then at 300 °C subsequently. The MS interface temperature was at 310 °C. The linearity of the method was tested with a calibration standard curve at seven different concentrations within the range of 0.2 -1.0 ng/L. By using the internal standard, calibration was done by linear least square regression with concentration ranging from 0.2 to 1.0 ng/L for each of the BPA concentrations. Using the GC-MS data calibration handling method, the peak areas of each standard and their respective internal standards were calculated for each concentration. The percent relative standard deviation (RSD %) was taken as the measure of precision of the method. It was calculated by dividing the standard deviation of the seven check standards by their theoretical mean concentration and then multiplied by 100. All the glassware were carefully washed with dichloromethane and left in a furnace at 500 °C for 2 h. Glassware and solvents were carefully handled to avoid contamination. Reagent procedural blanks were regularly analyzed and all data presented in the study were collected for blank values. Statistical analysis The Prism 5 software was used in a one-way ANOVA analysis for statistical treatment of the results. A p ≤ 0.05 was set as statistical significance. Table 1 shows the physical characteristics of the examined bottled water brands. A linear fit of the ratios of the BPA over the internal standard peak areas was obtained with correlation coefficients (R 2 ) ≥ 0.99. Recovery was in the range of 79 -94 %. The detection limit for BPA was calculated from the standard deviation of seven replicates. RESULTS Bisphenol A identification was based on the relative abundance (i.e., based on a comparison between abundance of isotopes), while quantification was carried out using the relative response factor to the surrogate internal standard, BPA-d 16 . Table 1 shows that the mean concentration of BPA in the seven bottled water was 4.03 ng/L for those stored at 25 °C and 7.5 ng/L for those stored at 40 °C. The concentration of BPA in bottled water stored outdoors (40 °C) was significantly higher (p ≤ 0.05) than in those stored indoors (25 °C). Among the water brands stored indoors, the difference between their BPA concentrations was not significant, e.g., 1 vs 5, 1 vs 7, 2 vs 3, 2 vs 4, 3 vs 4, and 5 vs 6. The same observation applies to those stored at the outdoors. 
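The calibration and precision figures reported above follow from a linear least-squares fit of the analyte to internal-standard peak-area ratio against concentration, and from the relative standard deviation of replicate check standards. The sketch below reproduces these two calculations on invented peak-area data; the numbers are not measurements from this study.

```python
import statistics

# Invented calibration data: concentration (ng/L) vs BPA / BPA-d16 peak-area ratio.
conc  = [0.2, 0.4, 0.6, 0.8, 1.0]
ratio = [0.21, 0.39, 0.62, 0.79, 1.02]

# Ordinary least-squares slope/intercept and R^2, computed with the standard library only.
n = len(conc)
mx, my = sum(conc) / n, sum(ratio) / n
slope = sum((x - mx) * (y - my) for x, y in zip(conc, ratio)) / sum((x - mx) ** 2 for x in conc)
intercept = my - slope * mx
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(conc, ratio))
ss_tot = sum((y - my) ** 2 for y in ratio)
r2 = 1 - ss_res / ss_tot
print(f"slope={slope:.3f} intercept={intercept:.3f} R^2={r2:.4f}")

# Precision: %RSD of seven check standards relative to their theoretical concentration.
checks, theoretical = [0.58, 0.61, 0.60, 0.57, 0.62, 0.59, 0.60], 0.6
rsd = 100 * statistics.stdev(checks) / theoretical
print(f"RSD = {rsd:.1f} %")
```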
DISCUSSION Bisphenol A is a chemical building block used primarily to make epoxy resins and polycarbonate plastic. Toyo'oka and Oshige found the concentration of BPA in drinking water bottles made from PET to be between 3 and10 ng/L [10], while another study did not detect BPA in different plastic containers for beverages, including drinking water [11]. Our results are in agreement with other studies which reported the presence of plasticizer residues in water stored in bottles [10,12]. This could be attributed to the migration of plasticizers from the bottle material to the water since bottle quality may vary depending on the raw material and the technology used in bottle production [12]. Cross contamination during analytical procedure due to wide use of plasticizer may be another cause. Although it is recommended to store water bottles in a cool place and away from the sun and outdoors, in practice this is not always the case. Various processes are involved in water contamination by BPA, such as leaching from the container due to photolytic formation or degradation of organic compounds that could take place during the storage of bottled water [12]. BPA concentration released from bottles increases with storage time and under elevated temperature [12]. It had been reported that during the photolysis of BPA in deionized water under natural sun radiation, the concentration of BPA remained at 80% of the initial concentration during the first 5 days of exposure followed by a progressive decrease [13]. However, another study reported that BPA presents slow direct photolysis in neutral pure water under simulated solar irradiation, but the process is rapid in the presence of humic substances [13]. Another study determined endocrine disrupting chemicals (EDCs) such as BPA in water samples from PET and polyethylene (PE) bottles after outdoor exposure for 10 weeks at temperatures up to 30 • C [14]; the present study, however, did not analyze samples over different periods of time. Food packaging could be a source of xenobiotics, especially those with endocrine disrupting properties. Chemical leaching from food packaging into food contribute to human EDCs exposure and might lead to chronic disease in the light of current knowledge in this field. Even at low concentrations, chronic exposure to EDCs is toxicologically relevant. Concern increases when humans are exposed to combination of EDCs and/or during embryonic development and growth age. Exposure to endocrine disruptors can occur through direct contact with chemicals or through ingestion of contaminated water and food, or air. Bisphenol A is listed as a possible priority substance subject to review for identification in the field of water policy. The ester bonds in BPA-based polymers are subject to hydrolysis and, therefore, BPA leaches into food and drinks from their storage containers. Heat and/or acids speed up the leaching process, and repeated washing of polycarbonate products have all been shown to result in an increase in the rate of leaching of BPA [2]. CONCLUSION Bottled drinking water is an important avenue for human exposure to BPA. However, it is not clear whether the BPA detected in this study originated from the bottle or the water itself, or both. This will be a subject of a future study. Exposure to BPA poses risk to human health. Further research is required to study the broader effects and ingestion routes including food and water to make more realistic human health assessment of daily intake of BPA.
The New Zealand Neuromuscular Disease Patient Registry; Five Years and a Thousand Patients The New Zealand Neuromuscular Disease Patient Registry has been recruiting for five years. Its primary aim is to enable people with neuromuscular disease to participate in research including clinical trials. It has contributed data to large anonymised cohort studies and many feasibility studies, and has provided practical information and advice to researchers wanting to work with people with neuromuscular conditions. 1019 people have enrolled since the Registry’s launch in August 2011 with over 70 different diagnoses. Of these; 8 patients have been involved in clinical trials, 134 in other disease-specific research and 757 have contributed anonymised data to cohort studies. As a result the Registry is now effectively facilitating almost all neuromuscular research currently taking place in New Zealand. INTRODUCTION The New Zealand Neuromuscular Disease Patient Registry (NZNMD Registry) is a nation-wide registry for people living in New Zealand (NZ) diagnosed with any disorder supported by the Muscular Dystrophy Association of New Zealand (MDANZ). Because NZ has a relatively small population of approximately 4 250 000 as determined by 2013 NZ population census data [1] MDANZ has become the patient support organisation of choice for people diagnosed with myopathies and neuropathies as well as disorders less * Correspondence to: Miriam Rodrigues commonly associated with neuromuscular patient support organisations; inherited ataxias, hereditary spastic parapareses, leucodystrophies and neurocutaneous disorders. Patients diagnosed clinically with or without molecular confirmation and those who test positive predictively are included. MDANZ is the primary sponsor of the Registry and actively promotes its work. The primary aim of the Registry is to facilitate neuromuscular research by lowering the barriers faced by researchers in finding study participants, and faced by patients in finding studies to take part in [2]. The nature of rare disease research, and therefore the role of registries, evolves as effective treatments are developed. At the time of the Registry's establishment there were very few treatments for these conditions and so the first way of achieving this aim was to contribute data to international natural history studies [3][4][5][6][7], mainly facilitated by the TREAT-NMD Neuromuscular Network of which the Registry is a member. More recently patients have been enrolled in preclinical research and even in clinical trials and we are considering the Registry taking up roles in post-marketing surveillance. This process closely follows the model of registry development proposed by Betsy Bogard (personal communication -see acknowledgements) ( Fig. 1a). A developing aim of the Registry is to obtain molecular confirmation of diagnoses. The Registry facilitates this in two ways; either by highlighting a clinical genetic test to the patient's clinician or, for patients who remain without molecular confirmation of their diagnosis despite clinical genetic testing, by identifying appropriate research studies concerned with finding the genetic basis of their disorder. METHODS To achieve these aims the NZNMD Registry operates as a longitudinal opt-in registry for both children and adults living with neuromuscular disease in NZ. 
Led by a principal investigator, who is a consultant neurologist with the Auckland District Health Board, and managed by a genetic counsellor who is the Registry Curator, demographic, pre-specified, disease-specific clinical and genetic information is collected, curated and regularly updated. More recently patient self-reported data has been incorporated for some disorders. Data are stored with the same degree of security as clinical data within the hospital firewall. Some data for specific disorders; Duchenne muscular dystrophy (DMD), spinal muscular atrophy (SMA), myotonic dystrophy, facioscapulohumeral muscular dystrophy (FSHD), and Charcot-Marie-Tooth disease (CMT) are housed on secure platforms provided by overseas collaborators [2,7]. The NZNMD Registry is an ongoing research project. The Registry is physically located within the neurology department at ADHB, the largest neurology centre in NZ. The Registry has an oversight committee made up of individuals living with neuromuscular disorders, clinical scientists, adult and paediatric neurologists, geneticists and representation from the sponsor, MDANZ. The NZNMD Registry has a simple referral process. Referrals can come from any source including self-referral, doctor or patient support organisations such as MDANZ, and require, at a minimum, the NZ national health unique patient identifier (NHI). The MDANZ has tasked its fieldworkers with informing and consenting patients to the Registry. Clinicians are not involved in collecting or entering data though their assistance may be required for obtaining genetic tests or clinical records to support the patient's diagnosis. Formal written consent is obtained from all participants before joining the Registry using a short information sheet and consent form, in keeping with the low risks associated with being on the Registry. The Registry receives ethics approval annually. Researchers, from both within NZ and internationally, contact the Registry Curator to request access to the Registry's services. Enquiries to the Registry take different forms. They may be from industry conducting feasibility studies for clinical trials, or from clinicians or academics seeking data for service planning or for natural history studies. In these cases, the relevant population is identified from the database and anonymised data is disseminated in aggregate form to researchers. For studies where researchers are using the Registry to recruit participants, those who are likely to be eligible are identified by the curator and information about the study is disseminated through a variety of means including email, telephone, post and advertisement. Academic research is facilitated free of charge, except for the recovery of direct costs associated with the dissemination of information, not including the time of the Registry Curator. Industry-led research is facilitated along with a request for costs to be met including a contribution towards overheads and the time of the Registry Curator. The NZNMD Registry collects ethnicity data and, in this way, it is able to support and facilitate research involving people with neuromuscular disease from minority ethnic groups. Māori are the indigenous people of NZ and their rights to self-determination are confirmed in the Treaty of Waitangi. Māori have specific values regarding (but not limited to) collection, processing and disposal of DNA. The Registry has policies in place to act in a culturally responsive manner in these regards. 
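In practice, identifying likely-eligible participants for a study amounts to filtering the curated records on a few pre-specified fields. The sketch below shows such a filter on an invented, anonymised record structure; the field names and criteria are illustrative and are not the Registry's actual schema.

```python
# Invented, anonymised registry records (not the NZNMD Registry schema).
registry = [
    {"id": "P001", "diagnosis": "DMD",  "age": 9,  "molecular_confirmation": True,  "ambulant": True},
    {"id": "P002", "diagnosis": "DMD",  "age": 14, "molecular_confirmation": True,  "ambulant": False},
    {"id": "P003", "diagnosis": "LGMD", "age": 32, "molecular_confirmation": False, "ambulant": True},
]

def likely_eligible(record, criteria):
    """Return True if a record satisfies every study criterion (one test per field)."""
    return all(test(record.get(field)) for field, test in criteria.items())

# Example trial criteria: ambulant DMD patients aged 5-12 with a confirmed mutation.
criteria = {
    "diagnosis": lambda d: d == "DMD",
    "age": lambda a: a is not None and 5 <= a <= 12,
    "molecular_confirmation": lambda m: m is True,
    "ambulant": lambda w: w is True,
}

print([r["id"] for r in registry if likely_eligible(r, criteria)])  # ['P001']
```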
Participants' information and confidentiality are highly respected and protected. Their time is also valued, and requests for participation in studies for which the participant is ineligible are avoided. Researchers contacting the registry are provided with rapid response times for information requests, which are approved through the oversight committee. RESULTS The NZNMD Registry has enrolled 1019 people with neuromuscular disorders over the past five years with over 70 different diagnoses, which is estimated to represent around one quarter of all people living in NZ with a neuromuscular disorder [8]. Recorded referral sources include, among others, GPs (2), the National Metabolic Service (1) and General Paediatricians (3). The Registry has contributed data to 23 feasibility studies of which 17 were coordinated through the TREAT-NMD Neuromuscular Network. (Figure: Diagnosis, demographics and diagnosis rates compared with our earlier publication in 2013 [13]; "not specified" means that these diagnoses were not separated out in the graphs in our previous publication.) It has also disseminated information and assisted in recruitment for nine disease-specific studies [13-16], including the landmark optical coherence tomography study that first discovered the presence of epiretinal membranes as a feature of myotonic dystrophy [13], and has had an active role in patient recruitment for 2 clinical trials for treating DMD with a further 2 planned. It has played a vital role in large prevalence studies of genetic muscle disorders and CMT. The Registry has assisted in recruiting 28 patients with limb girdle muscular dystrophy (LGMD) [17] and 12 patients with CMT, all without molecular diagnoses, into whole exome sequencing studies that aim to find the genetic cause of their condition. Molecular diagnoses have been obtained in 19 patients, with significant changes in treatment and management indicated in 6, for example the diagnosis in one case of congenital myasthenic syndrome treatable with salbutamol [18]. Vitally important genetic counselling implications for family members have been identified in several patients. DISCUSSION We have demonstrated that an overarching registry serving all neuromuscular diseases managed by a single project team is effective; this is in contrast to countries such as the UK, Spain and Germany where disease-specific registries are commonly deployed, but similar to Canada where the Canadian Neuromuscular Disease Registry (CNDR) covers a range of disorders [19]. A registry's role is dynamic and should be responsive to the changing needs of its stakeholders: patients, researchers and clinicians, as illustrated in the Bogard model of registry development (Fig. 1a). In five years the NZNMD Registry has evolved from carrying out roles important during preclinical drug development, such as Advancing disease understanding in the absence of treatment [4][5][6][7] and Connecting patients with researchers, to performing vital work in the clinical trial arena by Identifying patients for clinical trials, and Informing study eligibility criteria. For DMD and SMA we are now considering the role of Post-marketing surveillance.
The role of Providing additional data supportive of trial findings has been performed for a new SMA treatment and, as new drugs for both DMD and SMA enter the commercial market, we anticipate the later roles of Data collection to support expansion of drug indication and Advancing the understanding of treatment response becoming important to the Registry (see Fig. 1b). Only 58% of people enrolled in the Registry have a molecular diagnosis (see Table 1), which is a 7% improvement compared with three years ago [20]. However, as lack of a molecular diagnosis limits access to treatment, genetic counselling and research, a goal of the Registry is to gain sufficient funding so that it can facilitate access to genetic testing for patients who are not able to access testing through the usual pathways. This would be similar to other registries including Duchenne Connect [21], the Jain Foundation Dysferlin Registry [22] and the Myotubular Myopathy Registry [11]. The Registry has changed the face of neuromuscular research in NZ. The two most important factors in achieving this are the integral involvement of the patient support organisation and the minimal dependence upon clinicians, which can only occur with dedicated staff. Prior to the NZNMD Registry, a person with a neuromuscular condition living in NZ could only register in an overseas registry and was very unlikely to be offered the opportunity to participate in research of any sort. Now hundreds of New Zealanders have been involved in research, and the vast majority of neuromuscular research in NZ is now facilitated by the Registry. ACKNOWLEDGMENTS The New Zealand Neuromuscular Disease Registry is fully funded by the research trust of MDANZ, Neuromuscular Research New Zealand. *Betsy Bogard is the Director of Global Research Development of the International Fibrodysplasia Ossificans Progressiva Association (IFOPA). She devised her Registry model while employed by Fulcrum Therapeutics. She has kindly given permission for this to be used in this paper, but there is no other relationship, real or implied, between either Fulcrum Therapeutics or IFOPA and the New Zealand Neuromuscular Disease Registry.
2018-03-25T22:48:50.651Z
2017-08-11T00:00:00.000
{ "year": 2017, "sha1": "ef93392fbfd1885057c9378c65bbdc6b988d120e", "oa_license": "CCBYNC", "oa_url": "https://content.iospress.com/download/journal-of-neuromuscular-diseases/jnd170240?id=journal-of-neuromuscular-diseases/jnd170240", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "ef93392fbfd1885057c9378c65bbdc6b988d120e", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
118119002
pes2o/s2orc
v3-fos-license
An X-ray Polarimeter for Constellation-X Polarimetry remains a largely unexploited technique in observational X-ray astronomy which could provide insight in the study of the strong gravity and magnetic fields at the core of the Constellation-X observational program. Adding a polarization capability to the Constellation-X instrumentation would be immensely powerful. It would make Constellation the first space observatory to simultaneously measure all astrophysically relevant parameters of source X-ray photons: their position (imaging), energy (spectroscopy), arrival time (timing), and polarization. Astrophysical polarimetry requires sensitive, well-calibrated instruments. Many exciting objects are extra-galactic (i.e. faint) and may have small polarization. Recent advances in efficiency and bandpass make it attractive to consider a polarimetry Science Enhancement Package for the Constellation-X mission. Polarimetry remains a largely unexploited observational technique in X-ray astronomy which could provide insight in the study of the strong gravity and magnetic fields at the core of the Constellation-X observational program. Adding a polarization capability to the Constellation-X instrumentation would be immensely powerful. It would make Constellation the first space observatory to simultaneously measure all astrophysically relevant parameters of source X-ray photons: their position (imaging), energy (spectroscopy), arrival time (timing), and polarization. This would provide unprecedented leverage for modeling X-ray emission from a whole range of source types, and could break frustrating degeneracies in cases where imaging, timing, and spectroscopy are not definitive. Magnetic fields and scattering are ubiquitous processes in astrophysical sources and they naturally lead to polarized X-ray fluxes. In addition, the strong gravity of black holes imposes unique changes of direction of polarized flux from a disk, which should apply to stellar black holes in "high-soft" states and to the reflection and fluorescence from the disk in AGN in their "low-hard" states. Scattering off electrons in slab-like geometries has signatures which could resolve current ambiguities about the geometry and dynamics of these coronae. The polarization from synchrotron emission and scattering could resolve our ambiguities about Blazar X-ray emission regions and allow us to understand the connection of highly collimated jets to the black hole. These objects are so variable that it would be extremely valuable to have the polarization measurements at the same time as spectroscopy and timing information. Neutron stars are a bridge from matter as we know it in the lab to black holes. Polarization provides an important probe of their diverse magnetic fields, ranging from 10^8 to 5x10^14 G, and their compact structure imparts polarization signatures that could tell us whether or not their centers contain not only macroscopic nuclear matter, but even quark matter, as it adjusts to being forced to approach collapse to a black hole. Astrophysical polarimetry requires sensitive, well-calibrated instruments. Many exciting objects are extragalactic (i.e. faint) and may have small polarization. Recent advances in efficiency and bandpass make it attractive to consider a polarimetry Science Enhancement Package (SEP) for the Constellation-X mission. Figure 1 presents the 3σ sensitivity limits for an SEP which is orders of magnitude more sensitive than SXRP, the most sensitive astrophysical polarimeter ever built (Kaaret 2004), which was, however, never flown.
Figure 1: The fractional polarization detection threshold for observations of 100 ksec. The Spectrum X-Γ X-ray Polarimeter (SXRP) was built but not flown; the Advanced X-ray Polarimeter (AXP) was a proposed Small Explorer using gaseous photoelectric track-imaging polarimeters. The Constellation-X improvements (red) are due to collecting area and increased polarimeter efficiency. An E^-2 spectrum is assumed throughout. The Constellation-X sensitivity assumes that polarimeters are inserted behind two of the four mirror modules.
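For orientation, detection thresholds of the kind plotted in Figure 1 can be related to the exposure, source rate, background rate, and polarimeter modulation factor through the minimum detectable polarization (MDP). The sketch below uses the conventional 99%-confidence MDP expression rather than the exact 3σ prescription adopted for the figure, and the numerical inputs (counting rates, background) are illustrative assumptions rather than values from the mission design; only the ~45% modulation factor echoes the demonstration polarimeter measurement quoted later.

```python
import math

def mdp99(mu, rate_src, rate_bkg, t_obs):
    """Conventional 99%-confidence minimum detectable polarization.

    mu       : modulation factor (response to a 100% polarized beam)
    rate_src : source counting rate in the detector [counts/s]
    rate_bkg : background counting rate [counts/s]
    t_obs    : exposure time [s]
    """
    return 4.29 / (mu * rate_src) * math.sqrt((rate_src + rate_bkg) / t_obs)

# Illustrative numbers only (assumed, not taken from the Constellation-X design):
mu = 0.45          # ~45% modulation, as measured for the demonstration polarimeter
rate_src = 1.0     # focused source rate [counts/s]
rate_bkg = 0.01    # assumed background rate [counts/s]
t_obs = 100e3      # 100 ksec, as in Figure 1

print(f"MDP(99%) ~ {100 * mdp99(mu, rate_src, rate_bkg, t_obs):.2f}%")
```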
The sensitivity limits in this figure are based on recent laboratory measurements (Figure 2). In addition to sensitive polarimetry, the SEP provides simultaneous timing from the polarimetry detector and spectroscopy from the primary Constellation-X focal plane instrument. The instrument can follow the change in polarization through an AGN flare or allow folding the data on a Quasi-Periodic Oscillation to study phase effects.
Figure 2: The modulation response of Goddard's demonstration polarimeter to unpolarized 5.9 keV Fe-55 X-rays (upper left) and 100% polarized X-rays with the detector in three orientations with respect to the beam. The measured polarization of the Fe-55 X-rays is 0.49±0.54%. The other three measurements are consistent with 45±1.1% modulation. The detector was filled with 460 Torr of a 50/50 Ne/dimethyl ether mixture.
Scientific Benefit of Polarimetry This section provides an overview of polarimetry's expected contributions to the study of strong gravity via observations of black holes and strong magnetic fields via observations of neutron stars. Black Holes Overview of the black hole environment: A prime objective of Constellation-X is the study of "strong gravity," which is found in the neighborhoods of the event horizons of black holes. What is the physics here really like? The strong gravity controls the propagation of photons and rotates the polarization of a photon as it bends its direction. The Fe-lines from sources which are super-massive black holes can be explained in terms of gravitational red-shifts and Doppler shifts of disk material at the innermost stable orbit of Kerr black holes (e.g. Brenneman & Reynolds 2006; Miniutti et al. 2003). Tests of this picture, as well as of our understanding of General Relativity (GR), would come from measurements of X-ray polarization as a function of energy. The starting point for models of accretion onto black holes is an optically thick accretion disk. Its inner radius can be as close as the Innermost Stable Circular Orbit (ISCO), which depends on the black hole's mass and angular momentum. But observations and theory suggest that this picture is too simple. Radiation and magnetic pressure will act to drive mass loss from the black hole, either in a broadly distributed outflow or in collimated jets (see the review by Konigl 2006). Emission from the disk, the outflows, and the plunging region between the ISCO and the event horizon (Beckwith, Hawley, and Krolik 2006) must all be considered. Polarization measurements of the fluoresced iron-lines and the continuum would help determine the contributions of the possible processes. The energy dependence of both the degree and angle of linear polarization would be especially useful.
Inverse-Compton radiation from coronal clouds or outflows: The spectra and the variability properties of Seyfert galaxies and the stellar black hole Cygnus X-1 in its "low-hard" state are similar, if scaling with the mass is considered (e.g. Uttley and McHardy 2001). The power-law emission from the stellar black holes in this state has long been modeled as thermal Comptonization of photons from a disk up-scattered by high-temperature electrons in a corona around the black hole. Evidence of the disk in soft X-ray emission has usually been interpreted as implying the disk is truncated outside of the ISCO, at 100-1000 GM/c^2 (with G the gravitational constant, M the mass of the black hole, and c the velocity of light). AGN spectra have been understood to have a similar inverse-Compton origin. The low-energy wing of the fluoresced iron-line, which extends to ~4 keV, is consistent with the AGN disks extending down to the ISCO. The corona of hot (30-200 keV) electrons could have a spherical geometry or a slab geometry, sandwiching the black hole's equatorial plane and any inward extension of a disk. Many AGN are associated with compact radio sources. Radio emission is also observed to be correlated with the X-ray continuum emission from galactic transient sources (e.g. Gallo et al. 2003). It has been proposed that in both cases the corona is actually the base of a compact jet. Models of the polarization for both spherical and slab coronae and for the outflows have been computed (Matt, Fabian, and Ross 1993; Poutanen and Svensson 1996; Bao et al. 1997; Beloborodov 1998), with polarizations ranging from a few percent to 20%, depending on the optical depth of the corona and the inclination angle of a slab. Self-lensing: When the emission region is very near the black hole, such as a disk in the "high-soft" or "thermally dominant" state of stellar black holes, the photon trajectories are bent by the gravity of the black hole and frame-dragging of a rotating black hole can cause large rotations in the polarization (Connors, Piran and Stark 1980). Recent work on the continuum spectra from disks for stellar black holes implies the angular momentum of GRS 1915+105 is near maximal. The inclination angle is about 60 degrees. The polarization would be a few percent to 15% by Newtonian physics or GR, but with different behavior of the polarization as a function of energy. For GR the polarization would rotate from 15 to 30 degrees going from 3 keV to 10 keV, while for Newtonian gravity there would be no rotation. The magnitude should decline from about 2.5% to 1% with increasing energy. At higher energy the observer sees a mixture of photons issuing from in front of the black hole mixed with those suffering rotation on their journey from behind the black hole. Bright stellar black hole transients occur at the rate of about 2 per year and would be found by soft or hard X-ray monitors. Recurrence time scales of 1-50 years for about 20 sources would make it feasible to monitor them in radio or X-ray bands, a project that might be undertaken with Constellation-X for spectroscopic goals in any case, in order to study quiescent evolution of Advection Dominated Accretion Flows (ADAFs). While there have been indications that for the galactic black holes in the low state, the disk does not always extend to the ISCO, it is thought to do so for the temperature conditions of the much larger super-massive black holes.
In this case the continuum might emanate from above the disk and from a relatively symmetric distribution. The fluoresced line and the reflection spectrum would come from the disk and be subject to the lensing effect. The polarization, like the red-shift and Doppler smearing, would depend on the height above the disk of the non-thermal continuum and the angle of inclination (Matt 1993; Matt, Fabian & Ross 1993; Dovciak, Karas & Matt 2004). The predicted polarization again ranges from tenths to tens of percent and, for significant ranges of conditions, the amount and angle depend strongly on energy in the 3-15 keV range. The polarization must in this case be considered as the net of the polarization of the non-thermal continuum (discussed above) seen directly and the second-hand fluorescence and reflection. It is expected that flares are associated with hot spots spiraling into the black hole. Polarization magnitude and direction changes will accompany each circuit (Bao et al. 1997). The polarization may exhibit quasi-periodic oscillations, which would be directly observable in AGN flares; for stellar-mass black holes the phase dependence of the polarization could be studied by folding at a QPO frequency derived from the same data. Jet-dominated AGN (Blazars): Blazar emission is interpreted to be due to relativistic and highly collimated jets, viewed along the axis or obliquely to it. The radius from the black hole at which the jet emerges, the way the magnetic field may thread the black hole's event horizon, and the temperature of the matter in the jet influence the polarization and are crucial to understanding the effect of a black hole on matter near it (Begelman and Sikora 1987; Poutanen 1994; Celotti and Matt 1994; Wolfe and Melia 2006). In these cases polarizations of 40% are typical, and values as high as 70% are possible, depending on parameters that include the viewing angle, the uniformity of the magnetic field, and the electron energy distribution. Summary: What we know about sources of strong gravity leads us to believe the polarization should be stronger than 10% for some sources, 0.1-1% for others, and that the determinations will contribute significantly to the strong gravity measurements. Estimated instrument detection thresholds (3σ) in the energy ranges 2-6 keV and 6-12 keV are better than 3% for a 1 mCrab Crab-like power-law in 50 ks of observations (1 day at an efficiency of 58%). In 3 days, for a 3 mCrab source the sensitivity would reach 1%. There are tens of Seyferts, quasars, and Blazars that would fall in that category. Very significant polarimetric measurements would be made. They could verify (by rotation of the linear polarization with energy) that black holes are at the centers of accreting black hole candidates in the high-soft state and at the center of both stellar and super-massive black holes in low-hard states. They could determine (measuring an energy-dependent polarization magnitude) whether power-law continua in the low-hard (and intermediate) states were due to spherical or slab coronae or the bases of relativistic outflows. They could help determine characteristics of the collimated outflows in Blazars, from the viewing angles of the jets to the position of the radio-emitting electrons. The stellar black holes have changes of state and the AGN have flares. The polarization measurements would be sensitive enough to see 5-minute changes in bright black holes (> 300 mCrab) and day-timescale changes in bright Blazars (50 mCrab). Neutron Stars Overview.
Neutron stars provide unique laboratories for the study of matter and radiation under the most extreme conditions in the universe. They contain the most dense matter and strongest magnetic fields known. The properties of matter at such high densities are still largely a mystery. For example, the cores of neutron stars may contain rare states of quark matter not found anywhere else in the universe. Our ignorance reflects an inability to uniquely solve from first principles the structure of matter within the theory of strong interactions, Quantum Chromodynamics. One of the few ways to obtain additional constraints on the theory is to accurately determine masses and radii of neutron stars. A major goal of Constellation-X observations is to constrain the properties of superdense neutron star matter and search for evidence of new or exotic states of matter. As we outline below a polarization capability coupled with the large collecting area of Constellation-X would provide a powerful new probe of neutron stars. Efforts to measure neutron star properties depend largely on detecting radiation directly from their surfaces. Observations of gravitationally redshifted spectral lines, for example, can place accurate constraints on the mass to radius ratio, GM/c 2 R (also called the compactness), of neutron stars (Cottam, Paerels and Mendez 2002;Chang et al. 2006). Detection of the Doppler effect associated with rotational motion of the neutron star, as can be done by measuring widths of surface spectral lines, can provide complementary constraints on neutron star radii (Villareal and Strohmayer 2004). Accurate timing of photons from neutron star surfaces, in the context of measuring the amplitude and shape of pulsations at the spin period of the neutron star, can also provide constraints on neutron star structure (Nath, Strohmayer and Swank 2002;Bhattacharyya et al. 2005). Emission from the Neutron Star Surface. A key physical attribute of the photon emission from neutron star surfaces, not yet been exploited to constrain neutron star structure, is polarization, particularly in the context of strongly magnetized neutron stars with surface fields greater than about 10 12 G. In the strong magnetic fields of these objects the scattering and propagation of photons is intimately coupled to polarization. The primary reason for this is that it is much easier to scatter an electron in a direction along the magnetic field than perpendicular to it. For example, in the magnetized plasma that characterizes a neutron star atmosphere, an X-ray photon can propagate in two normal modes: the ordinary mode (O-mode) polarized parallel to the field and the extraordinary mode (X-mode) polarized perpendicular to the field. For photon energies much less than the cyclotron energy, typically about 12 keV for characteristic field strengths, the mean free path of X-mode photons is much longer than that of the O-mode. Hence the X-mode photons stream out from hotter and deeper layers of the atmosphere, and the emergent radiation is highly polarized, with the local direction of polarizion reflecting the local magnetic field direction (Pavlov and Zavlin 2000). While the emission from a localized patch on the neutron star surface is likely highly polarized, the flux detected by a distant observer represents a sum over all surface elements visible to the observer. Now, the fraction of a neutron star's surface which is visible at infinity is strongly influenced by the gravitational deflection of light rays. 
The strength of this light bending is proportional to GM/c 2 R, with more compact stars having more visible surface area. Thus the polarization fraction measured by a distant observer encodes information about the compactness of the neutron star. The effect is such that more compact stars will have lower overall polarization fractions. The spectrum of polarization, that is, how the polarization fraction changes with photon energy, is sensitive to the strength of the magnetic field as well as the inclination of the magnetic axis. If the star rotates, then measurements of the polarization fraction and position angle with rotational phase can be used to constrain the inclinations of the rotation and magnetic axes (see for example, Pavlov and Zavlin 2000). Models indicate that polarization fractions in the 10 -35% range can be expected from thermally emitting neutron star surfaces. Higher values are expected for phase resolved measurements Vacuum Polarization. While the above effects are always present, they assume that nothing else influences photon propagation or polarization. However, an additional effect of Quantum electrodynamics (QED) predicts that in a strong magnetic field the vacuum itself becomes birefringent. This prediction of QED has never been tested, but it may reveal itself in the radiation from magnetized neutron stars. This vacuum polarization modifies the dielectric properties of the vacuum and hence the polarization of photon modes, thereby influencing the opacities of photons propagating from the star. Several observational effects are possible. First, the average polarization fraction of neutron star surface emission is predicted to be larger than without the QED effect because vacuum polarization decouples the polarization modes of photons leaving the surface (Heyl, Shaviv, and Lloyd 2003). Second, for field strengths common in many neutron stars vacuum polarization gives rise to a unique energy dependent polarization signature (Lai and Ho 2003). That is, the plane of linear polarization of photons with energies less than about 1 keV is perpendicular to that of photons with energies greater than 4 keV. While the baseline detector we describe is not sensitive below 1 keV, a transition may still be inferred, especially for the strongest magnetic fields. Although many predictions of QED have been tested in terrestrial labs the prediction of vacuum polarization in strong fields has not, and X-ray polarization measurements of magnetized neutron stars may be the only way to do so. Observational Prospects. The classes of neutron stars for which X-ray polarization measurements with Constellation-X would likely be most informative are the thermally emitting isolated neutron stars, and the "magnetar" candidates, the Soft Gamma-ray Repeaters (SGRs) and Anomalous X-ray Pulsars (AXPs). For example the persistent emission from AXPs and SGRs have a spectrum which can heuristically be modelled with a 0.4 -0.6 keV black body, and a steep power-law with index between 2 and 4 (Woods and Thompson 2006). These spectra peak in the X-ray band in which our polarimeter's sensitivity is well matched to the Soft X-ray Telescopes (SXT) of Constellation-X (2 -10 keV). Fluxes of these sources span the range from a few tenths to a few mCrab, and our estimates suggest that polarization as low as a few percent can easily be reached in 100 ksec for many of these objects. 
Because their expected intrinsic polarization is high (perhaps many tens of percent), it is likely that both sensitive phase and energy resolved polarization measurements will be possible (most likely in the 2 -7 keV band), and almost certainly for the brightest objects. These measurements could provide snapshot "views" of the magnetic field geometry, and perhaps show how it evolves over time. For example, measurements of the polarization properties before and after an AXP or SGR outburst could elucidate details of the magnetic field evolution known to occur in these objects. While the intrinsic surface emission is likely highly polarized, some of these objects may require more complex modelling to fully extract all of the physics because strong and dense magnetospheric currents may be present which can influence the spectrum and polarization via scattering of emergent photons. There has been substantial progress in modeling such effects recently (see Lyutikov and Gavriil 2006;Fernandez and Thompson 2006). While the thermally emitting, isolated neutron stars are also expected to have substantial intrinsic polarization, these objects have cooler (~40 -80 eV) thermal spectra which are less well matched to the nominal > 2 keV polarization sensitivity of our instrument. Nevertheless, a few of the hotter objects, such as PSR J0538+2817, could provide important constraints on the average polarization expected for thermally emitting neutron stars. TPC Polarimeter Concept Photoelectric X-ray polarimetry with finely spaced, pixelized gas detectors has matured into a powerful and practical technique for astronomical observations. In 2003, Black et al. demonstrated the first gas pixel polarimeter suitable for use at the focus of a conical foil mirror. Based on that technology we proposed the Advanced X-ray Polarimeter (AXP) to NASA's Small Explorer program. AXP received the program's highest science rating, and was awarded technology development funding to bring gas pixel polarimeters to greater flight readiness. As a result of those efforts and others, gas pixel polarimeters have now reached a mature level of development (Bellazzini et al. 2006;Hill et al. 2006). Nevertheless, gas pixel polarimeters are fundamentally limited to quantum efficiencies of ~10%. We recently demonstrated the Time Projection Chamber (TPC) as a photoelectric polarimeter without this sensitivity limit. Like the gas pixel detector, this new polarimeter forms images of photoelectron tracks to extract the polarization information. Our small demonstration polarimeter is comparable to the best pixel polarimeters in intrinsic polarization. The TPC technique can be extended to near unit quantum efficiency at energies of astrophysical interest without loss of sensitivity. The TPC polarimeter instruments a volume of gas to image photoelectron tracks produced by X-rays. An X-ray absorbed by the K-shell of an atom emits a photoelectron in a direction that tends to be perpendicular to the X-ray's incident direction and parallel to the X-ray's electric field vector. A uniform electric field is applied in the gas-filled volume to drift ionization electrons formed along the photoelectron track into a Gas Electron Multiplier (GEM), which amplifies the ionization (Figure 4a). The multiplied charge is deposited on a readout anode connected to charge-sensitive electronics, and the resulting signal is interpreted as an image (Figure 4b). 
For each strip, the electronics record the waveforms of the charge signals before and after a trigger signal from the GEM. A track image is formed in the plane perpendicular to the GEM by binning the charge pulse trains into pixels whose coordinates are defined by strip location in one dimension and arrival time multiplied by the drift velocity in the orthogonal dimension. In such a track image, the horizontal axis is time, the vertical axis is readout strip, and the size of each dot is proportional to charge. We derive the magnitude and direction of a source's polarization by measuring an ensemble of photoelectron directions and constructing a histogram of emission angles (Figure 2). The polarization is related to the modulation (N_max - N_min)/(N_max + N_min) of this histogram, and the polarization direction is the angle corresponding to N_max. The key advantage of the TPC over the pixel polarimeter is that its quantum efficiency is independent of its modulation response. The depth of the interaction volume determines the quantum efficiency, while the drift distance determines the amount of electron diffusion, which limits the track image resolution. Behind a grazing-incidence X-ray (i.e. high f-number) mirror, a deep detector is possible without requiring a large drift distance, and image resolution is independent of the interaction depth. In a pixel polarimeter, X-rays enter through the drift electrode; higher efficiency requires increasing the drift distance, which reduces track image quality and modulation response. The TPC also has a simple readout with high rate capability. The TPC readout anode can be fabricated as a ceramic printed circuit board, and the electronics can be implemented with discrete commercial-off-the-shelf electronics. The TPC signals are continuously digitized and pipelined. This scheme incurs a deadtime of less than 1 microsecond per event. In exchange for higher sensitivity and a simpler readout, the TPC trades focal plane imaging. Since detection occurs when the charge arrives at the GEM, not when the X-ray interacts, the distance between the interaction point and the GEM is unknown. Demonstration Polarimeter. The demonstration TPC polarimeter which produced the data in Figure 2 consists of a GEM with 75-micron-diameter holes on a 150-micron hexagonal spacing. The active area is 3 cm × 1.3 cm. Readout strips, 3 cm long on a 132-micron pitch (Figure 6), are placed 0.5 mm behind the GEM. This readout plane is a standard printed circuit board. The drift electrode is 2 cm away from the GEM. There are 96 readout strips that are grouped into four sets of 24, giving a total active area of 30 mm × 12.7 mm (depth × width). Every 24th strip is tied together and connected to an analog-to-digital converter through a charge-sensitive amplifier. In this way, the entire active area can be read out with only 24 electronics channels. As long as valid electron tracks cross fewer than 24 strips, the tracks can be reconstructed without confusion. This scheme can be expanded in both length (detector depth) and width (number of sets of strips) without increasing complexity or power consumption significantly. The strips are read out with commercial preamplifiers and A/D converters. The demonstration polarimeter has a polarization response (Figure 2) comparable to the best pixel polarimeters, and has several times higher quantum efficiency. We are currently fabricating a prototype polarimeter with 75-micron readout pitch that will achieve similar modulation at 1 atm.
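The modulation analysis sketched above lends itself to a compact numerical illustration. The following is an assumed, minimal implementation, not the flight analysis: it histograms an ensemble of reconstructed photoelectron emission angles, fits the standard A + B·cos²(φ − φ0) modulation curve, and converts the fitted modulation into a polarization estimate by dividing by the modulation factor measured with a 100% polarized beam (the ~45% value quoted for the demonstration polarimeter); the synthetic input angles are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def modulation_curve(phi, a, b, phi0):
    # Photoelectric modulation curve: N(phi) = A + B*cos^2(phi - phi0)
    return a + b * np.cos(phi - phi0) ** 2

def fit_polarization(angles_rad, mu100, n_bins=20):
    """Estimate polarization degree and angle from photoelectron emission angles.

    angles_rad : reconstructed emission angles in radians, folded into [0, pi)
    mu100      : modulation factor measured with a 100% polarized beam (e.g. ~0.45)
    """
    counts, edges = np.histogram(angles_rad, bins=n_bins, range=(0.0, np.pi))
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.mean(), float(np.ptp(counts)), 0.0]          # crude initial guess
    (a, b, phi0), _ = curve_fit(modulation_curve, centers, counts, p0=p0)
    n_max, n_min = a + max(b, 0.0), a + min(b, 0.0)            # extrema of the fit
    modulation = (n_max - n_min) / (n_max + n_min)             # (Nmax-Nmin)/(Nmax+Nmin)
    angle = phi0 if b >= 0 else phi0 + np.pi / 2.0             # flip if B fitted negative
    return modulation / mu100, np.degrees(angle) % 180.0

# Example with synthetic angles drawn from a 30%-modulated distribution (assumed inputs).
rng = np.random.default_rng(1)
phi = rng.uniform(0.0, np.pi, 200000)
keep = rng.uniform(size=phi.size) < (1.0 + 0.3 * np.cos(2 * (phi - 0.6))) / 1.3
degree, angle = fit_polarization(phi[keep], mu100=0.45)
print(f"polarization ~ {degree:.2f} at {angle:.1f} deg")
```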
Proposed Science Enhancement Package The polarimetry SEP consists of two TPC polarimeters and a mechanism to move the instruments onto and off of the optical axis in front of one or more calorimeters. Since the polarimeters are non-imaging, placing them in front of calorimeters, forward of the mirror focus, does not impact their performance. This design allows for simultaneous polarization and spectroscopy measurements with the same optic, with the polarimeter filtering out most low-energy events. The instrument concept is shown in Figure 6, and key parameters are listed in Table 1. The reference design's predicted instrumental sensitivity as a function of energy, assuming polarimeters in front of two focal planes in the four-mirror design, is shown in Figure 7. The detection limits derived from these calculations are shown in Figure 1. We find that the reference instrument can detect a 3% polarization in a 1 mCrab source in 40 ksec. This sensitivity opens the door to interesting observations of massive extra-galactic black holes (i.e. AGN) with modest exposures. The energy resolution of the polarimeter is characteristic of gas proportional counters, i.e. about 20% FWHM at 6 keV. However, as polarimeters would only be in place for some of the mirror modules, the input spectrum will be simultaneously measured with excellent energy resolution. Technology Readiness and Total Cost A three-year project is currently underway to advance the TPC polarimeter from its present level of development, TRL 4, to a state of flight readiness. The ROSES APRA program funds this project through FY09. Additional support is being provided by a related APRA polarimetry project and by Goddard Space Flight Center. The major technical steps in going from the demonstration polarimeter to a flight-like detector are: Increasing the GEM area. GEMs with suitable pitch and area are available from commercial sources. Development is focused on fabricating GEMs that are low-outgassing and highly breakdown-tolerant. Reducing the anode pitch. A readout anode with the required 75-micron pitch was recently fabricated and is currently being integrated into a second-generation polarimeter. Measuring and understanding the performance of the TPC polarimeter. A substantial effort is required to calibrate and optimize the detector. Of particular concern is the measurement and control of systematic errors. On orbit, we anticipate that tracking minimum ionizing particles will be a technique that controls many sources of systematics, for example, a change in drift velocity. A primary goal of the development project is to determine whether such monitoring techniques adequately control systematics, or if the instrument must also be rotated to achieve the required accuracy. While it is not possible to provide a total cost over the life of the Constellation-X mission, there is high likelihood that the mission integrated cost (design, development, analysis, archiving) would be less than a few tens of millions of dollars. The instrument itself is constructed of simple components, and the eventual instrument costs will be driven by project requirements on the instruments. This instrument levies no driving requirements on the mission. Figure 6: Instrumental sensitivity for the reference design (solid line), and for a two-detector-volume alternative which places 5 cm of Ne/dimethyl ether (dashes) in front of 10 cm of Ar (dots). An Ar cell would extend the polarimeter's bandpass at the cost of somewhat reduced sensitivity at low energies.
Polarimeters with sensitivity at lower energies can also be constructed with pure dimethyl ether.
2019-04-14T01:47:52.370Z
2007-01-04T00:00:00.000
{ "year": 2007, "sha1": "6336d4990b7d29ebd23a585bc20b36f2cc96f936", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "76d32f59f4d394870903d4e1b2b4473c5b1a5d7c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
267029553
pes2o/s2orc
v3-fos-license
Effects of flaxseed supplementation on weight loss, lipid profiles, glucose, and high‐sensitivity C‐reactive protein in patients with coronary artery disease: A systematic review and meta‐analysis of randomized controlled trials Abstract This meta‐analysis aimed to evaluate the effects of flaxseed supplementation on weight loss, lipid profiles, high‐sensitivity C‐reactive protein (hs‐CRP), and glucose levels in patients with coronary artery disease (CAD). A systematic search was performed using various online databases, including Scopus, PubMed, Web of Science, EMBASE, and Cochrane Library, to identify relevant randomized controlled trials (RCTs) until June 2023. To evaluate heterogeneity among the selected studies, the Q‐test and I 2 statistics were employed. Data were combined using either a fixed‐ or random‐effects model and presented as a weighted mean difference (WMD) with a 95% confidence interval (CI). Of the 428 citations, six RCTs were included. The pooled results did not show significant changes in the WMD of lipid factors (high‐density lipoprotein cholesterol, triglycerides (TG), low‐density lipoprotein cholesterol, and total cholesterol) following flaxseed intake. However, after performing a sensitivity analysis to determine the source of heterogeneity, flaxseed supplementation resulted in a significant decrease in TG levels (WMD = −18.39 mg/dL; 95% CI: −35.02, −1.75). Moreover, no significant differences were observed in either weight or BMI following flaxseed intake. However, the circulating levels of fasting blood glucose (WMD = −8.35 mg/dL; 95% CI: −15.01, −1.69, p = .01) and hs‐CRP (WMD = −1.35 mg/L; 95% CI: −1.93, −0.77, p < .01) significantly decreased after the intervention. Flaxseed supplementation was associated with lowering FBS, hs‐CRP, and TG levels but did not affect weight loss parameters and other lipid markers in CAD. | INTRODUCTION Cardiovascular diseases (CVDs) continue to be a prominent cause of global mortality despite significant endeavors towards risk factor management and treatment improvement. 1,2Therefore, CVD prevention is a significant public health concern and has wide-ranging effects on the economy and healthcare system. 3,4Moreover, developing countries face high rates of conventional risk factors for coronary artery disease (CAD), such as abdominal obesity and low physical activity. 5,6Associations between inflammatory markers, lipid profiles, anthropometric indices, and CAD have been extensively studied. 7,8Recent guidelines have placed great emphasis on reducing visceral fat and controlling dyslipidemia and blood pressure. 91][12] These alternative therapies are gaining popularity as individuals seek natural and safe approaches for disease prevention or to mitigate CAD risk. 12,135][16] Flaxseed (Linum usitassimum) is a widely used herbal medicine that significantly affects multiple CVD risk factors. 17This is mainly because flaxseed contains a high amount of α-linolenic acid (ALA) (22%), phytoestrogen, phenolic compounds, 18 and lignans (0.2-13.3 mg/g flaxseed). 19,20Furthermore, flaxseed serves as a highly beneficial dietary means of obtaining protein (28%-30%), vitamins, minerals, and a significant quantity of dietary fiber, weighing 28% of its total mass, with one-third being soluble fiber. 21,22Promising effects on lipid and glucose metabolism have been observed in patients with metabolic diseases who consumed flaxseed oil.In a study by Soleimani et al. 
23 the addition of flaxseed oil to the diet resulted in a significant decline in insulin resistance among patients with diabetic foot ulcers.22 Conversely, another study revealed a lack of evidence supporting the beneficial impact of flaxseed oil supplementation on glycemic control and lipid profiles among individuals with diabetes.24 Existing information regarding the association between flaxseed and inflammation is inconclusive and lacks agreement among researchers.22 Numerous studies have been conducted on flaxseed, but the results have been inconsistent with respect to its effects on glycemic and lipid profiles, weight loss, and inflammatory markers. Some studies have found positive effects, while others have not. Because of these inconsistent findings and the absence of a comprehensive meta-analysis specifically focusing on patients with CAD, we aimed to incorporate all existing randomized controlled trials (RCTs) to assess the efficacy of flaxseed/flaxseed oil supplementation on glycemic and lipid profiles, body weight loss, and high-sensitivity C-reactive protein (hs-CRP) in patients with CAD. | Search and studies section The current meta-analysis was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines (Table S1). Clinical trials published from inception to June 2023 were identified using a systematic search of online resources, including PubMed/Medline, Scopus, Embase, Web of Science, and the Cochrane Library. We used the Patient, Intervention, Comparison Group, Outcome, and Study Design framework to identify studies that met our criteria: RCTs with either a crossover or parallel design among patients with CAD that reported the effects of flaxseed consumption on weight loss, lipid profiles, glucose, and hs-CRP as the primary outcomes in both the intervention and placebo groups, and that were published in the English language without a time restriction. The following medical subject headings, subject terms, and keywords were applied to retrieve potentially relevant articles: ("flaxseed oil" OR "flaxseed" OR "linseed" OR "lignin" OR "Linum sitatissimum" OR "L. usitatissimum" AND "supplementation" OR "intake" OR "administration") and ("CAD" OR "coronary artery disease" OR "coronary heart disease" OR "coronary disease" OR "CHD" OR "coronary arteriosclerosis") and ("randomized controlled trial" OR "randomized clinical trial" OR "RCT" OR "controlled clinical trial" OR "randomized" OR "intervention studies" OR "controlled trial" OR "random*" OR "trial" OR "clinical trial"). The reference lists of the selected articles and previous reviews were manually checked to increase the sensitivity of our search strategy.
| Inclusion and exclusion criteria To incorporate RCTs into the study, the authors established certain criteria for inclusion and exclusion. Included were (1) articles with either a crossover or parallel design and (2) human clinical trials that investigated the effects of flaxseed/flaxseed oil on glucose and lipid profiles, weight loss, and inflammation biomarkers in patients with CAD and reported sufficient data on the mean change of at least one of the studied outcomes, with standard error, standard deviation (SD), or the related 95% confidence interval (CI), at baseline and at the end of intervention in both groups. Nonclinical studies, trials without a control group, and trials with insufficient information on the intervention and control groups were excluded. We only considered outcomes reported in at least three clinical trials. Any discrepancies were resolved through discussion with a third author (M.R.). | Data extraction Two authors (M. A. and M. M.) independently extracted the following items from the eligible articles: first author's name, study location, publication year, main features of participants, sample size of participants in the flaxseed oil and placebo groups, duration and dosage of intervention, study type, intervention type, placebo type, baseline dietary intake of energy, protein, carbohydrate, and total fat in the intervention and placebo groups, and the mean and SD for fasting blood glucose (FBS), lipid profiles (total cholesterol [TC], low-density lipoprotein cholesterol [LDL-c], triglycerides [TG], and high-density lipoprotein cholesterol [HDL-c]), weight loss (weight and body mass index [BMI]), and inflammatory biomarkers (hs-CRP) at baseline and the end of intervention in both the intervention and placebo groups. | Risk of bias (RoB) assessment Applying the Cochrane RoB tool, the quality of the included articles was systematically assessed, as described elsewhere.25 Finally, as shown in Figure 1, the RoB in each trial was judged as low (green), high (red), or unclear (yellow). | Certainty of evidence assessment The evaluation of the evidence for outcomes was carried out utilizing the modified Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach. This process was led by two independent investigators, O. K. and M. A., who evaluated the certainty of the body of evidence based on criteria such as the presence of RoB, indirectness, inconsistency, imprecision, and evidence of publication bias. | Statistical analysis STATA 16.0 (Stata Corp.) and Review Manager 5.3 (Cochrane Collaboration) were used for all the statistical analyses. To quantify the effects of flaxseed oil on changes in the outcomes of interest, data were pooled and the weighted mean difference (WMD) with 95% CI was utilized for (1) FBS, (2) TG, (3) TC, (4) LDL-c, (5) HDL-c, (6) weight, (7) BMI, and (8) hs-CRP. The effect size (ES) was determined using the mean change (SD). The DerSimonian-Laird technique with a random-effects model was used for the ES meta-analysis. The existence of significant interstudy heterogeneity among trials was assessed using Cochran's Q test (p < .1) and the I² statistic (>50%). Sensitivity analysis was carried out using the leave-one-out technique, removing trials individually, to determine the source of heterogeneity. Egger's linear regression tests and funnel plots were applied to evaluate evidence of potential publication bias across studies. | Search results and characteristics of included trials The literature search yielded 531 reports. After removing duplicate articles (n = 318), 420 articles were left, 113 of which were disqualified based on their titles and abstracts. Finally, the full texts of 17 pertinent papers were retrieved for evaluation in accordance with the inclusion eligibility criteria. Eleven of these were omitted for the following reasons: not measuring the outcomes of interest (n = 4), performed with other study designs (n = 1), not a population with CAD (n = 5), and not controlled for flaxseed/flaxseed oil (n = 1). Finally, six clinical trials [26][27][28][29][30][31] that were published between 2017 and 2022 were selected for the current meta-analysis. Figure 2 shows the step-by-step process used to identify and select included articles. Overall, 309 participants were randomized and completed the study period, of whom 153 were randomly allocated to the flaxseed supplementation group and 156 to the placebo group in the six included clinical trials. FBS and hs-CRP levels were investigated in three trials, lipid profiles in four, and weight and BMI in five. The intervention lasted between 10 and 24 weeks, and all the included trials used a parallel study design. Table 1 summarizes the general features of the six trials. | Impact of flaxseed supplementation on FBS With respect to the effects of flaxseed supplementation on FBS levels in patients with CAD, a meta-analysis using a random-effects model of the three included trials revealed a significant decrease in this parameter following flaxseed treatment (WMD = −8.35 mg/dL; 95% CI: −15.01, −1.69, p = .01), with no significant statistical heterogeneity among trials (p = .72, I² = 0.0%) (Figure 3A).
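The pooled estimates reported in the Results (and the heterogeneity statistics quoted alongside them) follow the DerSimonian-Laird procedure described in the statistical analysis. The sketch below is a minimal, self-contained illustration of that pooling, Cochran's Q, I², and the leave-one-out sensitivity loop; the trial labels, mean differences, and standard errors are placeholder values, not the data extracted from the included trials.

```python
import math

def dersimonian_laird(effects, std_errors):
    """DerSimonian-Laird random-effects pooling of per-trial mean differences.

    effects    : per-trial mean differences (e.g. change in FBS, mg/dL)
    std_errors : corresponding standard errors
    Returns (pooled WMD, 95% CI, Cochran's Q, I^2 in percent).
    """
    k = len(effects)
    v = [se ** 2 for se in std_errors]
    w = [1.0 / vi for vi in v]                        # fixed-effect weights
    y_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                # between-trial variance
    w_re = [1.0 / (vi + tau2) for vi in v]            # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    i2 = max(0.0, (q - (k - 1)) / q) * 100.0 if q > 0 else 0.0
    return pooled, ci, q, i2

def leave_one_out(labels, effects, std_errors):
    """Recompute the pooled WMD and I^2 with each trial excluded in turn."""
    for i, label in enumerate(labels):
        eff = effects[:i] + effects[i + 1:]
        ses = std_errors[:i] + std_errors[i + 1:]
        wmd, ci, _, i2 = dersimonian_laird(eff, ses)
        print(f"without {label}: WMD = {wmd:.2f} "
              f"(95% CI {ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.1f}%")

# Placeholder trial-level values (illustration only).
wmd, ci, q, i2 = dersimonian_laird([-10.0, -6.5, -9.0], [4.0, 5.5, 6.0])
print(f"WMD = {wmd:.2f} (95% CI {ci[0]:.2f}, {ci[1]:.2f}); Q = {q:.2f}, I^2 = {i2:.1f}%")
leave_one_out(["trial A", "trial B", "trial C"], [-10.0, -6.5, -9.0], [4.0, 5.5, 6.0])
```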
Based on the significant heterogeneity in TG levels between studies (p = .06, I² = 59.59%), we conducted a sensitivity analysis to identify the source of heterogeneity by excluding each study individually. After removing the study conducted by Talari et al.,28 we observed a reduction in interstudy heterogeneity to I² = 37%. Moreover, flaxseed supplementation had a noticeable impact on TG levels (WMD = −18.39 mg/dL; 95% CI: −35.02, −1.75) after excluding this particular trial (Figure S1). | Impact of flaxseed supplementation on hs-CRP As shown in Figure 3H, the meta-analysis using a random-effects model of three trials showed that there was a significant reduction in hs-CRP levels (WMD = −1.35 mg/L; 95% CI: −1.93, −0.77, p < .01) following flaxseed supplementation. | Publication bias There was no significant indication of publication bias in Egger's linear regression test for trials examining the impact of flaxseed supplementation on FBS (p = .73), TG (p = .95), TC (p = .23), HDL-c (p = .98), LDL-c (p = .94), weight (p = .14), and hs-CRP (p = .21) levels. With respect to BMI, we found significant publication bias (p = .03). However, after conducting a trim-and-fill sensitivity analysis including censored studies, we found that the pooled WMD for BMI remained unchanged. Visual examination of the funnel plots, shown in Figure S2, found no evidence of possible publication bias. | Certainty of evidence Table 2 presents the certainty of the evidence pertaining to the outcomes, obtained by employing a revised GRADE method. The certainty of the evidence was deemed to be low for three outcomes and very low for the five other outcomes that were taken into account. The primary factors that resulted in downgrading the certainty of the evidence included bias, indirectness, and imprecision. | DISCUSSION The present meta-analysis investigated the efficacy of flaxseed on anthropometric, glycemic, lipid, and inflammatory indices in individuals with CAD. The pooled results indicated that 10-24 weeks of supplementation with flaxseed or flaxseed oil had considerable therapeutic effects on glucose, hs-CRP, and TG levels in individuals with CAD. Dyslipidemia is a condition that increases the risk for atherosclerosis and CAD, resulting in numerous fatalities globally.32 We observed a notable decrease in circulating TG levels, suggesting the potential cardioprotective properties of flaxseed oil. However, no alterations were observed in the other lipid parameters. Previous studies have presented varying outcomes regarding the impact of flaxseed supplementation on lipid profiles. A comprehensive meta-analysis by Hadi et al.,4 including 62 RCTs, showed that flaxseed supplementation may improve blood levels of TC, TG, and LDL-c, but not HDL-c, in various healthy and unhealthy subjects, indicating that it may delay the development of heart disease. In addition, Masjedi et al.33 also conducted a meta-analysis to determine the impact of flaxseed on blood levels of lipid markers in healthy and dyslipidemic individuals, demonstrating the same results as those obtained by Hadi et al.4 Similarly, Dodin et al.34 conducted a study that found that the daily consumption of 40 g of flaxseed led to improved lipid factor levels in healthy menopausal women after 12 months. However, another study indicated that the lipid profile did not exhibit any reduction after a 6-week period of consuming 15 mL/day of ALA flaxseed oil when compared to the consumption of olive oil during the same duration.35 Interestingly, another RCT also demonstrated the opposite effect of flaxseed supplementation on TG (an increase of 12.0% from baseline to 8 weeks following intervention) in elderly subjects.36 The changes in lipid profiles observed in this study may be attributed to several factors, including the consumption of flaxseed or flaxseed oil, variations in absorption efficiency among participants, adherence to the prescribed diet, and individual characteristics such as genetic background, lifestyle choices, age, and sex.
37 Moreover, it is worth noting that the absence of a substantial impact of flaxseed oil on lipid profiles, as observed in the current meta-analysis, may be partially attributed to the majority of participants having normal initial levels of these measures. This could potentially be attributed to prior statin therapy administration. The positive effects of flaxseed oil on lipid metabolism may be related to its ability to enhance lipid balance within the adipose tissue-liver axis and promote fatty acid β-oxidation.38 Furthermore, flaxseed oil has been shown to reduce lipogenesis, resulting in decreased TG levels.39 The antidyslipidemic effects of flaxseed have been attributed to multiple mechanisms. Flaxseed, a rich source of ALA and secoisolariciresinol diglucoside, has been demonstrated to have antidyslipidemic properties.40 ALA undergoes conversion into long-chain omega-3 fatty acids and promotes cholesterol efflux from macrophages. Its high polyphenol content may also contribute to its hypolipidaemic effects. The high fiber content of flaxseed exerts antidyslipidemic effects by promoting satiety, reducing caloric intake, and reducing food transit time.4,41 Askarpour et al.42 investigated the effects of flaxseed supplementation on proinflammatory markers and endothelial function in an overall population. A significant decrease in circulating CRP levels was observed in 35 RCTs, which is consistent with the findings of this study. In fact, a subgroup analysis conducted by the researchers showed that flaxseed significantly lowered CRP levels in unhealthy or overweight subjects. These effects were observed in RCTs in which whole flaxseed or lignan supplements were given for more than 12 weeks. In contrast, another meta-analysis did not find any significant beneficial effects of flaxseed or its derived products (such as lignans or flaxseed oil) on CRP levels in the general population.21 There have been several reported mechanisms to account for flaxseed's ability to reduce inflammation. One such mechanism is the presence of ALA, a PUFA omega-3 fatty acid found abundantly in flaxseed. Long-chain omega-3 fatty acids, which are recognized to have anti-inflammatory qualities, can be produced from ALA.40 However, ALA itself may not exert the same impacts on inflammation. Furthermore, the amount of fiber present in flaxseed may contribute to its ability to protect against inflammation by producing short-chain fatty acids, such as propionate, acetate, and butyrate, which can reduce the activity of certain proinflammatory cytokines in adipose tissue.21 The discrepancies in the findings could be attributed to other factors, such as dietary components, lifestyle choices, or the genetic factors of the participants. Furthermore, various bioactive components found in flaxseed oil and flaxseed, as well as differences in production methods, compound stability, storage conditions, and the use of whole flaxseed or its derivatives, could affect bioavailability and subsequently affect the response of adhesion molecules and inflammatory markers.20,42 The results of the study indicated that there was no substantial impact on body weight and BMI between the two groups when consuming flaxseed. Pan et al.
43 conducted a meta-analysis that revealed that whole flaxseed, rather than flaxseed oil, had a notable impact on weight and BMI reduction. This effect is likely due to the abundant fiber content in flaxseed, which aids in controlling energy intake and enhancing satiety. However, the outcomes did not demonstrate statistical significance in investigations in which flaxseed oil supplementation (four trials) was administered instead of flaxseed (one trial). The specific mechanism by which flaxseed exerts its antiobesity effects remains unclear. Current evidence suggests that the bioactive components of flaxseed oil, particularly ALA, may play a role in reducing obesity.44 Furthermore, flaxseed oil is rich in various unsaturated fatty acids such as linoleic and eicosadienoic acids, along with ALA, all of which possess antiobesity properties.45 The variations in the obtained results could be explained by several factors, including variations in the duration and dosage of the intervention, the type of flaxseed oil and ALA sources used, differences in dietary intake, the clinical condition of the participants, and levels of physical activity. The impacts of flaxseed and flaxseed oil on FBS have been evaluated in a restricted range of studies.26,35,46 A meta-analysis found that while whole flaxseed had a significant impact on improving glycemic control in the general population, lignan extract and flaxseed oil did not have the same effect.46 This finding was consistent with that of the present meta-analysis. However, other RCTs have not demonstrated a significant reduction in FBS levels following flaxseed supplementation. More extensive trials are necessary to fully understand the effects of flaxseed on the glycemic profile of individuals with CAD. Although reports on the positive impacts of flaxseed on human health have been documented, concerns exist within certain populations, particularly pregnant women. Animal studies have demonstrated that exposure to estrogen during the neonatal stage can lead to a decrease in sperm production. Therefore, the consumption of flaxseed, especially over a prolonged period, should be approached with caution by pregnant women and men of reproductive age, particularly in cases of chronic consumption.47 This study has several limitations that require attention. First, the absence of data in the included RCTs prevented the measurement of other well-known risk factors for coronary heart disease, such as inflammatory markers. It is also important to consider the impact of confounding parameters, such as lifestyle factors and genetic background, on the effectiveness of flaxseed supplements, as well as their formulation. The relatively short intervention period and small sample size are acknowledged as limitations of this study. However, it is worth noting that this is the first meta-analysis to assess the effects of flaxseed administration on FBS, hs-CRP, body weight loss, and lipid parameters in patients with CAD. Furthermore, efforts were made to minimize bias and heterogeneity in the review process through a comprehensive literature search and sensitivity analysis. | CONCLUSION The findings showed that flaxseed intake had a positive impact on the reduction of TG, FBS, and hs-CRP levels in patients with CAD. This finding suggests that flaxseed consumption may significantly mitigate the overall risk factors associated with CAD. Future research should focus on designing large, long-term trials that minimize bias and adhere to current reporting criteria for clinical trials. Additionally, it is important to assess whether any positive effects are sustained over time. Further RCTs are needed to investigate the effects of flaxseed supplementation on other cardiovascular consequences, such as stroke and myocardial infarction.
Figure 1: Risk of bias assessment result using the Cochrane RoB tool.
Figure 2: Flowchart of the study identification and selection process.
Table 1: Characteristics of the six selected clinical trials.
A Multi-strategy Shuffled Frog-leaping Algorithm for Numerical Optimization This study proposes a multi-strategy shuffled frog-leaping algorithm for numerical optimization (MSSFLA), which combines the merits of a new frog-leaping step rule, the crossover operator, and a novel recursive programming scheme. First, the frog-leaping step rule depends on the level of attraction between the worst frog and the other frogs in a memeplex; it exploits the advantages of the frogs around the worst frog, making the worst frog more conducive to the evolution direction of the whole population. Second, the crossover operator of the genetic algorithm is used to yield new frogs based on the best and worst individual frogs, instead of the random mechanism in the original shuffled frog-leaping algorithm (SFLA). The crossover operation aims to enhance population diversity and is conducive to the memetic evolution of each memeplex. Finally, recursive programming is presented to store the results of preceding attempts as a basis for the computations that follow, which saves a large amount of repeated computation in the local search. Experimental results show that MSSFLA performs better than other algorithms in terms of convergence and search effectiveness. It can therefore be considered a more competitive improvement of SFLA with respect to both the efficiency and the accuracy of the best solution. Introduction The shuffled frog-leaping algorithm (SFLA) was first proposed in 2003 by Eusuff and Lansey [1][2], drawing inspiration from the social behavior of frogs seeking food in a pond. The characteristics of SFLA, such as stochasticity, simplicity, and easy implementation, allowed it to gain considerable importance and to be widely used for solving many complex optimization problems [3][4][5][6][7][8][9]. In spite of its applicability, becoming trapped in a local optimum is a general problem of the SFLA and other heuristic optimization algorithms: they may work well on one problem but fail on another. Moreover, although the SFLA can usually yield good solutions, it is still time-consuming, and its performance deteriorates rapidly with increasing complexity and dimensionality of the search space. To overcome these problems, new improvement techniques are needed to obtain better performance in terms of solution quality and competitive computational complexity of the local search [10][11][12][13][14][15]. To further improve the global information exchange among memeplexes and speed up convergence, we designed a recursive strategy in which a recursive programming technique stores the results of preceding attempts as a basis for the computations that follow. To store the temporary results, an external archive set is introduced to memorize the fitness values that have already been calculated. Furthermore, we draw on the operators of the GA to improve the search capacity while avoiding entrapment in local optima. The crossover operator of the genetic algorithm (GA) is used to yield new frogs based on the best and worst individual frogs instead of the random mechanism in the original SFLA. Moreover, experimental comparisons with other algorithms verify the efficiency and capability of the proposed algorithm. The rest of this paper is organized as follows: Section 2 introduces the mathematical model used to describe the concepts of the standard SFLA. Section 3 describes the proposed method in detail.
The experimental results and comparisons with other common methods are given in Section 4 to verify the efficiency and capability of the proposed algorithm. Finally, conclusions and future work are presented in Section 5.

SFLA Scheme
SFLA is a meta-heuristic stochastic search method that mimics the behavior of frogs seeking food. Its population-based concept is derived from a virtual population of frogs in which individual frogs represent possible solutions. The population is sorted according to fitness values and the frogs are then divided into m memeplexes. Subsequently, the frogs alternate between two processes, namely local exploration within a memeplex and global information exchange among all memeplexes. The former tries to evolve fitter frogs: by applying a frog-leaping rule (a measurement of step length and displacement) according to Eqs. (1) and (2), the worst frog attempts to move to a new, better location in the search space,

D_k = rand() × (p_bm^k − p_wm^k),    (1)
p_wm^{k+1} = p_wm^k + D_k,  with −D_max ≤ D_k ≤ D_max,    (2)

where rand() is a uniform random number in [0, 1] and D_max is the maximum admissible step. If no improvement is obtained, Eqs. (1) and (2) are repeated with respect to the global best frog p_gw, that is, with p_gw replacing p_bm^k. For convenience, p_wm^k, p_bm^k, and p_gw denote the worst frog of a memeplex, the best individual of the memeplex, and the best individual of the entire population, respectively.

New Frog-leaping Step Rule
Eq. (1) shows that the improvement benefits only from p_bm^k and p_gw. However, a careful examination of the local exploration reveals that the worst frog's leaping step is influenced by all individual frogs in a memeplex, not just p_bm^k and p_gw. Hence, Eq. (1) is modified into Eq. (3), in which the step is computed over a submemeplex consisting of the n−1 individual frogs around p_wm^k in a memeplex of n frogs, with h denoting the index of a particular frog. The first term of Eq. (3) captures the degree of influence of the frogs surrounding the current worst frog p_wm^k and is scaled by a normalization factor that controls their impact while executing the leaping step rule. On the other hand, each frog tends to keep its current moving state while leaping to find food, which can be described by a movement inertia similar to that of the standard PSO. A new frog-leaping step rule, Eq. (4), is therefore reformulated by adding an inertia term w·D_{k−1} to Eq. (3), where D_{k−1} denotes the leaping step of the (k−1)-th iteration and can be regarded as the velocity of the worst frog at that iteration. The parameter w is an inertia weight similar to that of the standard PSO, which helps to regulate the search process in the MSSFLA. For the local exploration in each memeplex, we choose a linearly decreasing weight w in the range [wmin, wmax]:

w = wmax − (wmax − wmin) × iter / itermax,    (5)

where wmax and wmin are the initial and final weights, respectively, iter is the current iteration, and itermax is the maximum number of iterations.
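As an illustration of the leaping rules just described, the sketch below (in Python) implements the standard leap of Eqs. (1)–(2), the linearly decreasing inertia weight of Eq. (5), and one plausible reading of the inertia-weighted, multi-attraction step of Eqs. (3)–(4). Because the exact attraction weighting of Eq. (3) is not preserved in the extracted text, the random coefficients used below are an assumption rather than the authors' formula.

```python
import numpy as np

def linear_inertia(w_max, w_min, it, it_max):
    # Eq. (5): inertia weight decreasing linearly from w_max to w_min.
    return w_max - (w_max - w_min) * it / it_max

def standard_leap(p_worst, p_best, d_max):
    # Eqs. (1)-(2) of the standard SFLA: the worst frog jumps toward a better frog.
    step = np.random.rand(p_worst.size) * (p_best - p_worst)
    return p_worst + np.clip(step, -d_max, d_max)

def inertia_leap(p_worst, submemeplex, prev_step, w, d_max):
    # Sketch of Eqs. (3)-(4): an inertia term plus the normalized attraction of
    # the other frogs in the memeplex toward the worst frog.  The random
    # attraction coefficients are an assumption.
    coeffs = np.random.rand(len(submemeplex), 1)
    attraction = (coeffs * (submemeplex - p_worst)).sum(axis=0) / len(submemeplex)
    step = np.clip(w * prev_step + attraction, -d_max, d_max)
    return p_worst + step, step
```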
Crossover Operator
In the standard SFLA, if the worst frog cannot be improved using the information from p_bm^k and p_gw, a randomly generated frog substitutes for it after the predefined number of iterations and becomes a new member of the current memeplex. This weakens the search efficiency and affects the global performance of the memetic evolution. Given that the best frog usually carries useful information, the search space around it is likely the most promising region. Hence, in the embedded crossover operator, the new frog is generated by performing one crossover operation involving p_bm^k or p_gw. This strategy aims to enhance the diversity of the population and is conducive to the memetic evolution of each memeplex. The detailed steps of the crossover operator are as follows.
Step 1. Select the two parent individuals (P1, P2) of the crossover operation. A random number rn is generated; if rn < 0.5, two random individuals are selected as the parents, otherwise p_gw and p_wm^k are selected for the crossover operation.
Step 2. Perform the crossover operation on the parent individuals P1 and P2. A random integer r is generated as a position-based crossover point, and the gene fragments located to the left or right of the intersection point r are exchanged, producing two child individuals Ch1 and Ch2.
Step 3. Compare the fitness values of Ch1 and Ch2 and select the better child individual to substitute for p_wm^k.

Recursive Programming
Intuitively, the positions of many frogs do not change during the local exploration of a memeplex, so there is no need to re-evaluate the fitness of every frog in the SFLA. Here, an external archive set is introduced to memorize the fitness values that have already been calculated. Figure 1 depicts this recursive programming based on the standard SFLA; for convenience, the iteration number for the local exploration of a memeplex is set to 1, and the recursive region for the frogs is surrounded by a dotted box. Figure 1 also briefly illustrates the flowchart of the proposed MSSFLA algorithm. To describe the recursive programming, assume that there are six individual frogs, which have been sorted according to their fitness values and divided into 2 memeplexes by executing Steps 1, 2, and 3. Next, the two worst frogs (P4, P5) are improved and two new positions (P4_new and P5_new) are obtained according to Eqs. (4) and (2); this improvement process includes the above-mentioned frog-leaping step rule and the crossover operation for the local search of a memeplex. Thereafter, the recursive programming is performed according to a predetermined procedure in the recursive region. In this case it consists of five basic steps, of which the first (Step 4.2) stores the fitness values of all frogs of each memeplex except the worst frogs P4 and P5. P4_new is then inserted into a specified external archive with a fixed capacity according to a specified bubble-sorting algorithm. Similarly, the process in Steps 6 and 7 is repeated, and P5_new is inserted into the external archive by the specified sorting algorithm. Finally, after completion of Step 8, the external archive set serves as the basis for the next grouping of the individual frogs. In the standard SFLA, by contrast, the individual frogs are remixed and re-evaluated after the completion of the local search of each memeplex, which ignores the fact that the positions of many frogs did not change significantly; this wastes computing resources and is not conducive to speeding up the convergence of the algorithm.
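A minimal sketch of the two mechanisms above — the position-based crossover of Steps 1–3 and the external fitness archive of the recursive programming — might look as follows. Minimization is assumed, and keying the archive on rounded position tuples is an implementation assumption; the paper only states that previously computed fitness values are stored and reused via a fixed-capacity, bubble-sorted archive, for which the plain dictionary below stands in.

```python
import numpy as np

def crossover_replace(p_worst, p_best, population, objective):
    # Steps 1-3: pick parents, do single-point crossover, keep the fitter child.
    if np.random.rand() < 0.5:
        i, j = np.random.choice(len(population), 2, replace=False)
        parent1, parent2 = population[i], population[j]
    else:
        parent1, parent2 = p_best, p_worst
    r = np.random.randint(1, parent1.size)            # crossover point
    child1 = np.concatenate([parent1[:r], parent2[r:]])
    child2 = np.concatenate([parent2[:r], parent1[r:]])
    return child1 if objective(child1) < objective(child2) else child2

def evaluate_with_archive(frogs, objective, archive):
    # Recursive programming: reuse fitness values of frogs whose positions did
    # not change instead of recomputing them.
    fitness = []
    for frog in frogs:
        key = tuple(np.round(frog, 12))
        if key not in archive:
            archive[key] = objective(frog)            # evaluate only new positions
        fitness.append(archive[key])
    return np.asarray(fitness)
```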
Proposed MSSFLA Algorithm
Algorithm 1 – Pseudo-code of the MSSFLA
Step 1: Initialization phase. Set the algorithm parameters, randomly generate the initial population, and evaluate each frog in the population.
Step 2: Sort the frogs according to their fitness values and divide the population into several memeplexes.
For i = 1 to NGlobal
    Step 3: Start a local search in each memeplex.
    For j = 1 to m
        Step 3.1: If j = 1, store the fitness values of the frogs of the j-th memeplex in the external archive set Se; otherwise, insert the fitness values of the frogs of the j-th memeplex into the external archive set Se according to a particular bubble-sorting algorithm.
        For k = 1 to NLocal
            Step 3.2: Improve the worst frog of the memeplex using the new frog-leaping step rule and the crossover operator.
        End For
    End For
    Step 4: Test the stop condition. Go to Step 3 if the stop condition is not met, and use the external archive set as the basis for the next grouping of individual frogs; otherwise, output the best solution.
End For

Benchmark Functions
The performance of MSSFLA is compared with three other methods, namely the standard SFLA [2], MSFLA [8], and LPSO [16]. The experimental tests use six widely used functions listed in Table 1, which have several local optima and saddles in their solution space. Table 1 briefly describes the function expressions, dimensions (D), search ranges, global optimal solutions (x*), global optimum values (fmin), and acceptable values (ε). If a solution found by an algorithm is better than the acceptable value, the run is deemed successful. For a fair comparison, all functions are tested on 30 dimensions.
Table 1. Benchmark test problems.

Parameter Settings
For the standard SFLA, we use the parameter settings in [2]: there are 20 memeplexes, each containing 10 frogs, and the local exploration in each memeplex is executed for 10 iterations. For MSFLA, we follow the parameter settings in [8]. For LPSO, the cognitive and social parameters c1 and c2 are both set to 2.0, the inertia weight w decreases linearly from 0.9 to 0.4 with the iteration number, the population size is set to 30, and the maximum flight distance is set to half the size of the range, i.e., vmax = 0.5 × range. For MSSFLA, the number of frogs is set to Sp = 200 and the number of memeplexes m is set to 20. To speed up convergence, the local evolution number NLocal and the global iteration number NGlobal are set to 2 and 1000, respectively. For a fair comparison, the global iteration number is also set to 1000 for the other compared algorithms.

Table 2 reports the mean best solution and standard deviation (SD), which are used to assess the convergence efficiency of the compared algorithms. In the experiments, MSSFLA and the three other algorithms were applied to six 30-dimensional functions, f1–f6. The typical convergence curves of the mean fitness value over 100 runs for the four algorithms on the six functions are illustrated in Figure 2(a)–(f). As shown in Table 2, the approach proposed in this study performs very well on these problems compared with the other competing heuristic algorithms. Figure 2 shows that the proposed algorithm converges towards the global optimal solution earlier in terms of iteration numbers. For instance, Figure 2(a) shows that MSSFLA converged at around 500 iterations for the f1 function, which is only half of the predefined NGlobal. Although the three other algorithms can also converge to the global optimal value, the convergence speed of MSSFLA significantly outperforms them within the predefined iteration numbers. For f2, MSSFLA converged to a bounded region at around 700 iterations; for f3 and f5, at around 900 iterations; and for f6, at around 750 iterations.
The MSSFLA was able to converge quickly to a bounded region around the best optimum in a finite number of iterations for all functions except f4. In terms of the iteration of the mean fitness value, the improved MSSFLA has a faster convergence speed than the standard SFLA, and from the perspective of solution quality it also achieves better convergence accuracy, as shown in Table 2. In summary, according to the above experimental results and analysis, MSSFLA is a very promising algorithm for finding the globally optimal solution and can be applied to complex engineering problems to quickly find better globally stationary solutions.

Conclusions
This study presents a novel multi-strategy shuffled frog-leaping algorithm to improve the local and global search ability for complex optimization problems; the standard SFLA is modified by integrating a variety of strategies, including a new frog-leaping step rule, recursive programming for the local exploration, and a crossover operator from the GA. In future work, we will further improve the MSSFLA by testing different crossover operators within a memeplex to make it more suitable for the discrete case, which will be discussed in detail in a future article.
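To make the procedure of Algorithm 1 concrete, the following sketch assembles the helper functions sketched in the preceding sections into a minimal driver loop. It is an illustrative reading of the pseudo-code under the stated assumptions, not the authors' reference implementation; the sphere function is a stand-in for the benchmark functions of Table 1.

```python
import numpy as np

def sphere(x):                                        # stand-in objective (minimization)
    return float(np.sum(x ** 2))

def mssfla(objective, dim=30, n_frogs=200, n_memeplexes=20,
           n_local=2, n_global=1000, lo=-100.0, hi=100.0, d_max=10.0):
    frogs = np.random.uniform(lo, hi, (n_frogs, dim))
    steps = np.zeros_like(frogs)                      # previous leap of each frog
    archive = {}                                      # external fitness archive
    fit = evaluate_with_archive(frogs, objective, archive)
    for it in range(n_global):
        order = np.argsort(fit)                       # sort: best frog first
        frogs, steps, fit = frogs[order], steps[order], fit[order]
        w = linear_inertia(0.9, 0.4, it, n_global)
        for m in range(n_memeplexes):                 # partition into memeplexes
            idx = np.arange(m, n_frogs, n_memeplexes)
            for _ in range(n_local):
                worst = idx[np.argmax(fit[idx])]
                others = frogs[idx[idx != worst]]
                cand, step = inertia_leap(frogs[worst], others,
                                          steps[worst], w, d_max)
                cand = np.clip(cand, lo, hi)
                cand_fit = objective(cand)
                if cand_fit < fit[worst]:
                    frogs[worst], steps[worst], fit[worst] = cand, step, cand_fit
                else:                                 # fall back to the crossover operator
                    frogs[worst] = crossover_replace(frogs[worst], frogs[idx[0]],
                                                     frogs, objective)
                    fit[worst] = objective(frogs[worst])
                archive[tuple(np.round(frogs[worst], 12))] = fit[worst]
        fit = evaluate_with_archive(frogs, objective, archive)
    best = int(np.argmin(fit))
    return frogs[best], fit[best]

best_x, best_f = mssfla(sphere, n_global=200)         # reduced iterations for a quick run
print("best objective value:", best_f)
```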
Neurological Adverse Reactions to SARS-CoV-2 Vaccines SARS-CoV-2 vaccines are not free of side effects and most commonly affect the central or peripheral nervous system (CNS, PNS). This narrative review aims to summarise recent advances in the nature, frequency, management, and outcome of neurological side effects from SARS-CoV-2 vaccines. CNS disorders triggered by SARS-CoV-2 vaccines include headache, cerebro-vascular disorders (venous sinus thrombosis [VST], ischemic stroke, intracerebral hemorrhage, subarachnoid bleeding, reversible, cerebral vasoconstriction syndrome, vasculitis, pituitary apoplexy, Susac syndrome), inflammatory diseases (encephalitis, meningitis, demyelinating disorders, transverse myelitis), epilepsy, and a number of other rarely reported CNS conditions. PNS disorders related to SARS-CoV-2 vaccines include neuropathy of cranial nerves, mono-/polyradiculitis (Guillain–Barre syndrome [GBS]), Parsonage–Turner syndrome (plexitis), small fiber neuropathy, myasthenia, myositis/dermatomyositis, rhabdomyolysis, and a number of other conditions. The most common neurological side effects are facial palsy, intracerebral hemorrhage, VST, and GBS. The underlying pathophysiology is poorly understood, but several speculations have been generated to explain the development of CNS/PNS disease after SARS-CoV-2 vaccination. In conclusion, neurological side effects develop with any type of SARS-CoV-2 vaccine and are diverse, can be serious and even fatal, and should be taken seriously to initiate early treatment and improve outcome and avoid fatalities. INTRODUCTION It is now undisputed that SARS-CoV-2 vaccines not only have positive but also negative effects, i.e., side effects (adverse reactions) [1,2]. Side effects can be mild, moderate, severe, or fatal [1,2]. Side effects occur with all currently marketed vaccine brands (Table 1), but the spectrum and frequency of side effects may differ slightly between brands [2]. Side effects occur after any dose, in both sexes, and with variable latency after vaccination. A causal relationship is considered established if side effects occur within four [1] to six [3,4] weeks after vaccination. Side effects can affect any organ or tissue, but the most commonly affected organ is the central or peripheral nervous system (CNS, PNS) [1]. Various CNS/PNS dis-orders have been attributed to SARS-CoV-2 vaccines, but the causal relationship often remains unproven. This narrative review aims to summarise recent advances and future perspectives regarding the nature, frequency, management, and outcome of side effects from SARS-CoV-2 vaccinations (SC2Vs). Original articles and reviews published between 1966 and December 2022 were included. Abstracts, proceedings, and editorials were excluded from data analysis. The review does not claim to be complete, since the number of words and references was necessarily limited. RESULTS In most studies female preponderance of neurological side effects was reported [3]. In a study of 60 patients with venous sinus thrombosis (VST) due to a SC2V, 75% were female. According to a multicentre study on 4,478 healthcare workers from Sri Lanka, low age and female sex were associated with an increased frequency of developing systemic reactions to vaccination with the Astra Zeneca vaccine (AZV) [5]. CNS Diseases Complicating SC2Vs CNS disease reported as side effects of anti-SC2Vs are listed in Table 2 [4,. SC2V-related CNS disease includes headache, cerebrovascular disease, inflammatory disorders, epilepsy, and other conditions. 
Additionally, CNS diseases due to side effects manifesting in extra-neural tissues (e.g., myocarditis or vaccine-induced, immune thrombotic thrombocytopenia [VITT]) have to be taken into account. Headache Headache is one of the most common neurological side effects of SARS-CoV-2 vaccines. Headache has been reported in 30−51% of vaccinees [2,5]. According to a multicentre study on 4,478 healthcare workers, 50.8% developed transient headache after vaccination with the AZV [5]. In a study of 13,809 vaccinees experiencing neurological side effects, 30% reported headache [2]. Headache usually occurs within a few hours after vaccination and lasts either for a few hours or for several days. If headache persists for longer, diseases of the CNS, PNS, ears, eyes, or of the cardiovascular system should be considered. SC2V-related headache can go along with or without an identifiable cause [6]. Identifiable causes of headache include arterial hypertension, vasospasm, intracerebral hemorrhage (ICH), subarachnoid bleeding (SAB), reversible, cerebral vasoconstriction syndrome (RCVS), autoimmune encephalitis (AIE), aseptic meningitis, giant cell arteritis (GCA), pituitary apoplexy, VST, or cranial nerve neuralgia [6]. In the majority of cases, however, the cause of post-vaccination headache remains elusive. Headache has been reported with any brand of SARS-CoV-2 vaccines. A 21-year-old female experienced thunderclap headache with nausea, and vomiting 8 hours after the first AZV dose [7]. All investigations for specific causes of headache were non-informative [7]. No cause of recurrent thunderclap headache could be detected also in a 62year-old female after a Biontech Pfizer vaccine (BPV) dose [7]. Headache after a SC2V may also derive from apoplexy of a pituitary adenoma [8]. A 50-year-old male experienced nausea, vomiting, diplopia, and drug-resistant headache one day after the third Moderna vaccine (MOV) dose [8]. Pituitary magnetic resonance imaging (MRI) revealed bleeding inside a macro-adenoma, consistent with pituitary apoplexy [8]. In patients with known migraine SARS-CoV-2 vaccines can increase the frequency of attacks or trigger new clinical manifestations. In a study of eight patients with a history of migraine focal neurological deficits (lateralised sensory disturbances, motor deficits, or both) developed within 24 hours after SC2V and lasted 2−14 days. Migraine occurred in four of them [9]. Because cerebral MRI was normal and single photon emission computed tomography studies showed large regions of hypoperfusion and small regions of hyperperfusion, neurological deficits were interpreted as migraine aura [9]. SC2V-related headache responds favourably to nonsteroidal, anti-rheumatic drugs (NSARs), opioids, or opiates. Venous sinus thrombosis VST is one of the most common and most severe CNS complications of SC2Vs but the prevalence of VST may vary significantly between studies. In a review of 86 articles about neurological side effects of SC2Vs, VST was reported in 706 patients [2]. However, in a study of 232,603 vaccinees, SC2V was complicated by VST in only three cases [3]. VST was first described in July 2021 in two males who developed fatal VST following the first AZV dose [10]. Meanwhile, several hundred cases of SC2V-related VST have been reported. VST manifests commonly with focal neurological deficits, seizures, and a dip in sensorium. VST can be associated by venous and arterial thrombosis in vascular beds other than the cerebral vasculature. VST may occur with or without VITT. 
VITT is due to IgG antibodies directed against the platelet factor-4 (PF-4) polyanion complex located on the surface of thrombocytes, which additionally bind to un-complexed PF-4. These antibodies activate thrombocytes, resulting in platelet aggregation, hyper-coagulation, and thrombocytopenia. Intravenous immunoglobulins (IVIGs) and non-heparin-based anticoagulation are the mainstay of treatment for VITT [11]. Second doses/boosters of mRNA COVID-19 vaccines appear safe in patients with adenoviral vector-associated VITT [11]. The cause of VST in patients without VITT remains elusive. The outcome of VST due to VITT also varies considerably between studies. In a retrospective study of 36 patients with VST due to VITT, the outcome of SC2V-related VST was fatal in up to 40% of cases [12]. Ischemic stroke There is increasing evidence that SC2Vs can be complicated by acute ischemic stroke [13]. The prevalence of ischemic stroke, however, varies considerably between studies [13]. In a meta-analysis of 782,989,363 vaccinees, acute ischemic stroke was reported in 17,481 of them [14]. The pooled proportion of SC2V-related ischemic stroke amounted to 4.7 cases per 100,000 vaccinations [14]. In a retrospective study of Mexican vaccinees receiving six different types of vaccines between 12/2020 and 8/2021, acute ischemic stroke was found in 0.54 per 1,000,000 doses (95% confidence interval [CI] 0.40−0.73) [15]. The pathophysiology of ischemic stroke after SC2V is poorly understood, but it is speculated to be due to vasculopathy (arterial hypertension, endothelialitis, vasculitis, vasospasms, dissection), coagulopathy (cardioembolism, dysfunctional thrombocytes [e.g., VITT]), VST, or cardio-embolism (endocarditis, myocarditis, intraventricular thrombus formation, atrial fibrillation). A few patients have been reported who developed RCVS time-linked to a SC2V, manifesting clinically with severe headache and consecutive ischemic stroke [16]. In a 60-year-old female, occlusion of the right internal carotid artery due to VITT seven days after the first AZV dose resulted in fatal stroke in the territory of the right middle cerebral artery (MCA) and anterior cerebral artery [17]. There was thrombocytopenia of 5,000 per μl, and PF-4 antibodies were positive [17]. Treatment of SC2V-related ischemic stroke is not at variance with treatment of stroke not associated with SC2V, except for stroke due to VITT. Patients with SC2V-related ischemic stroke may benefit from thrombectomy or thrombolysis, as do patients with stroke due to other causes [18]. An 83-year-old Japanese female with long-term atrial fibrillation and anticoagulation with rivaroxaban experienced occlusion of the left MCA three days after the first BPV dose but recovered almost fully after systemic thrombolysis and thrombectomy [19]. Three days after the second BPV dose she suffered occlusion of the right MCA, but thrombectomy was unsuccessful this time [19]. Several patients with a transient ischemic attack shortly after SC2V have been reported [20]. Intracerebral bleeding SC2V-related ICH, with or without breakthrough to the ventricles or the subarachnoid space, is not infrequent. In a review of 86 articles about the neurological side effects of SC2Vs, 2,412 patients with ICH were listed [2]. In a retrospective study of vaccinees receiving 79,399,446 doses of six different SARS-CoV-2 vaccines, ICH was reported in nine patients (16.1%; 0.11 per 1,000,000 doses [95% CI 0.06−0.22]) [15].
The outcome was favourable with a modified Rankin scale of 0−2 in 41.1% of cases but fatal in 21.4% [15]. SC2V-related ICH can be generally due to arterial hypertension or hypocoagulability (VITT). SC2Vrelated ICH is more commonly associated with than without VITT. SC2V-related ICH usually manifests as macro-bleeding. Microbleeds have been only rarely reported. In a 57-year-old female, ICH occurred five days after the first AZV dose. Digital subtraction angiography ruled out aneurysm or occlusion and platelet counts were normal. Because she had taken acetyl-salicylic acid for general malaise immediately after vaccination, ICH was attributed to the antithrombotic effect of the drug [21]. VITT-associated ICH was reported in a 60-year-old female who suffered fatal right frontal lobar ICH 16 days after the first AZV jab [22]. There was only moderate thrombocytopenia but PF-4 antibodies were elevated [22]. In a 40-yearold female with Moyamoya disease, Sjögren syndrome, and autoimmune thyroiditis, application of the second MOV dose was complicated by left ICH and intraventricular bleeding 3 days after SC2V [23]. The patient benefited from an external ventricular drainage and subsequent stereotactic evacuation of the hematoma [23]. A 46-year-old female developed ICH due to vasculitis following the first BPV dose [24]. The patient benefited from surgical evacuation of the clot [24]. Several other cases of SC2V-related ICH have been reported [25][26][27][28][29][30]. Subarachnoid bleeding SAB is a rare complication of SC2Vs. It usually occurs in patients who experience VITT with consecutive VST but without an aneurysm. A 48-year-old male developed hematuria, petechial rash, and headache two weeks after the first AZV dose [31]. Work-up revealed thrombocytopenia (14 × 10 9 /L), VITT, extensive VST, and SAB [31]. He made a complete recovery after urgent thrombectomy, heparin, steroids, and IVIGs, and was discharged with dabigatran for 6 months [31]. VITT-associated SAB was also reported in one of 23 patients with VITT following an AZV dose [32]. In a 54-year-old female disseminated intravasal coagulation with multi-district thrombosis developed 12 days after vaccination with the AZV [33]. A brain computed tomography scan showed multiple subacute intra-axial hemorrhages in atypical locations, including the right frontal and temporal lobes with ipsilateral SAB [33]. Magnetic resonance angiography (MRA) and magnetic resonance venography revealed acute basilar artery thrombosis together with superior sagittal sinus thrombosis [33]. The patient died 5 days later despite maximum therapy. SAB adjacent to the falx was reported in a 22-year-old female 4 days after the first AZV dose [34]. SAB was attributed to VST from VITT [34]. SAB after VITT-associated VST was also reported in a Norwegian healthcare worker in her thirties who experienced fatal ICH with breakthrough to the subarachnoid space one week after the first AZV dose [35]. Vasculitis Vasculitis is an autoimmune disease affecting the small, medium-sized, or large arteries. Vasculitis is subdivided into several subtypes, one of which is GCA. GCA affects the medium-sized or large arteries, particularly those of the head, especially those of the temples, which is why GCA is also called temporal arteritis. GCA frequently causes headaches, scalp tenderness, jaw pain, or visual impairment. Untreated, it can lead to blindness due to ischemic optic neuropathy. SC2V-related GCA of cerebral arteries has been reported in several patients [3,36]. 
An 82-year-old male presented with a 4-month history of headaches, jaw claudication, weight loss, bilateral temporo-parietal skin necrosis, and almost complete vision loss, which had developed about 10 days after the second BPV dose [36]. Biopsy of the temporal arteries confirmed bilateral, late-stage GCA [36]. In a study of 232,603 vaccinees, vaccination was complicated by GCA in one [3]. In a prospective case study with a median follow-up of 387 days, GCA was reported in a single patient and worsened during 12 months of follow-up despite immunosuppressive treatment [4]. In a monocentric study of 27 patients with previous immune-mediated disease, SC2V-related GCA was associated with polymyalgia rheumatica [37]. Flares of GCA after SC2Vs were not reported in a study of 17 GCA patients [38]. Arteritic, anterior, ischemic optic neuropathy (AAION) is a vasculitis of the small arteries supplying the optic nerve. A 79-year-old female developed sudden, bilateral visual loss 2 days after the second BPV dose [39]. At presentation her best-corrected visual acuity was 20/1,250 and 20/40 in the right and left eye on the Snellen acuity chart, respectively [39]. There was pallor of the optic nerve head bilaterally [39]. Temporal artery biopsy was compatible with AAION [39]. She received prednisone with a slow taper and subcutaneous tocilizumab 125 mg weekly [39]. Systemic vasculitis, including the temporal arteries, was also reported in an 80-year-old male 7 days after the second dose of an mRNA-based anti-SARS-CoV-2 vaccine [40]. Pituitary apoplexy Pituitary apoplexy is a rare complication of SC2Vs. It manifests with headache and bitemporal hemianopia. The pathophysiology is explained by bleeding into a pre-existing pituitary adenoma. The bleeding enlarges the tumour, damages the pituitary gland, and blocks its blood supply. The larger the tumour, the higher the risk of a future pituitary apoplexy. A 28-year-old female developed new tension-type headache lasting one month after the first dose of the Vaxzevria vaccine [41]. After the second dose the headache returned, more intense than after the first dose and in association with amenorrhoea and hyperprolactinemia [41]. Serial MRIs revealed pituitary apoplexy that partially resolved after three months [41]. Pituitary apoplexy was also reported in a 50-year-old male who developed nausea, vomiting, diplopia, and headache 1 day after the third MOV dose [42]. The patient profited from trans-sphenoidal resection [42]. Immunohistochemical evidence of SARS-CoV-2 proteins next to pituitary capillaries was provided [42]. The bleeding was explained by endothelialitis of the pituitary capillaries, cross-reactivity of SARS-CoV-2 with pituitary proteins, coagulopathy due to PF-4 antibodies, or an acutely increased blood demand [42]. Pituitary apoplexy 1 day after the second AZV dose was also reported in a 24-year-old female who manifested with new, sudden-onset frontal headache and benefited from hormone substitution for pituitary insufficiency [43]. Reversible, cerebral vasoconstriction syndrome RCVS represents a group of conditions that are pathophysiologically characterised by reversible, multifocal narrowing of cerebral arteries, with clinical manifestations typically including thunderclap headache and sometimes neurologic deficits due to cerebral edema, ischemic stroke, or seizure. The outcome is usually fair, although major strokes can result in severe disability or death in some patients.
SC2V-related RCVS was first described in a 38-year-old female who developed sudden-onset thunderclap headache together with bilateral scotomas 18 days after the second MOV dose [16]. MRI revealed acute cortical stroke in the territory of the right posterior cerebral artery (PCA) and absence of the PCA on MRA (Fig. 1). Epileptiform discharges were recorded on electroencephalography [16]. The patient benefited significantly from nimodipine and anti-seizure drugs [16]. SC2V-related RCVS has also been reported in a 30-year-old male with a history of RCVS [44]. He experienced an accumulation of RCVS attacks 12 hours after receiving the first BPV dose [44]. The patient profited from losartan, which was given until 3 days after the second dose [44]. The authors concluded that targeting the angiotensin-2 receptor could be a therapeutic and preventive option in patients susceptible to RCVS [44]. Susac syndrome Susac syndrome is clinically characterised by the triad of encephalopathy (encephalitis), occlusion of retinal arteries, and hearing loss. It is due to an endotheliopathy mediated through the release of perforin and granzyme B from activated cytotoxic CD8 T-lymphocytes. Perforin and granzyme B destroy the blood–brain barrier through destruction of endothelial cells. A 50-year-old female developed fever, myalgia, and unilateral scotoma one month after receiving the Sinovac vaccine (Table 1) [45]. Ophthalmologic investigation revealed paracentral, acute middle maculopathy, and neurological work-up revealed aseptic pleocytosis. She was diagnosed with Susac syndrome and profited from empirical antibiotic and virostatic treatment and from high-dose prednisolone [45]. Inflammatory disease Because SC2V does not primarily induce infection, inflammatory disease following SC2Vs is predominantly immunogenic. Either new-onset immunological disease or flares of previously diagnosed immunological disease (e.g., myasthenia, myositis, multiple sclerosis) have been reported as complications of SC2Vs [3]. Encephalitis Encephalitis complicating SC2Vs has been repeatedly reported and is usually due to the immunological reaction to the vaccine (AIE). In rare cases, encephalitis may be due to superinfection with an infectious agent owing to immunosuppression from the vaccination. SC2V-related AIE can occur with or without specific AIE antibodies (seropositive, seronegative). Various subtypes of SC2V-related AIE have been published, such as non-specific encephalitis with or without AIE antibodies, limbic encephalitis, rhombencephalitis, acute, (hemorrhagic) necrotizing encephalopathy (ANE, AHNE), acute, disseminated encephalomyelitis (ADEM), acute, hemorrhagic encephalomyelitis (AHEM), multifocal necrotising encephalitis (MNE), and cerebellitis. Non-specific encephalitis: The first reported patient with SC2V-related AIE was a 35-year-old female who developed fever, skin rash, and headache two days after the second BPV dose, followed by behavioural changes and refractory status epilepticus [46]. She was diagnosed with seronegative AIE and recovered upon methyl-prednisolone and plasma exchange [46]. Since then, SC2V-related AIE has been reported in several other patients [47][48][49][50][51][52][53]. SC2V-related encephalitis with positivity for AIE antibodies has only rarely been reported. A 48-year-old man presented with severe fatigue a few days after his second BPV dose, which rapidly evolved into progressive cognitive decline and hyponatremia, but he recovered under high-dose methyl-prednisolone [54].
He was later diagnosed with anti-leucine rich glioma inactivated-1 (LGI1) positive AIE [54]. Anti-LGI1 AIE is characterized by cognitive impairment or rapid progressive dementia, psychiatric disorders, facio-brachial dystonic seizures and refractory hyponatremia [54]. There is also one report about a female in her twenties who developed N-methyl-Daspartate receptor positive encephalitis 7 days after the first BPV dose [55]. Limbic encephalitis: Limbic encephalitis after SC2V has been reported in only a few cases. In a case series of 21 patients with neurological autoimmune disease following a SC2V, 1 patient had limbic encephalitis [3]. Limbic encephalitis was also reported in a single patient out of 20 cases with neuro-immunological complications after SC2V and a long-term follow [4]. In a 35-year-old female with seizures two days after the second MOV dose, limbic encephalitis was diagnosed and successfully treated with steroids, IVIG, and rituximab [56]. A case of limbic encephalitis has been also described by Maramotam in a case series from India [48]. Rhombencephalitis: Rhombencephalitis after a SC2V was reported in a single patient so far [57]. The 30-yearold male, a neurologist himself, reported generalised malaise, headache, hypogeusia, ataxia, and tongue weakness a few weeks after the second BPV dose [57]. He was diagnosed with BPV-associated rhombencephalitis upon imaging and cerebrospinal fluid pleocytosis [57]. He improved significantly upon methyl-prednisolone [57]. Acute, (hemorrhagic), necrotizing encephalopathy: ANE was reported in a 29-year-old female presenting with fever, tachycardia, seizure, and stupor 8 days after vaccination with the BBIBP32-CorV vaccine [58]. MRI revealed bilaterally symmetric hyperintensities in the thalamus and cerebellum, typical for ANE [58]. Serum interleukin-6 was markedly elevated and she carried a RANBP2 variant, which is typical for familial ANE. The course was complicated by pyelonephritis, acute kidney failure, acute hepatic failure, and septic shock, coma, and death 6 days after presentation [58]. Because the patient was also positive for SARS-CoV-2, it cannot be ruled out that ANE was due to the infection. ANE was also described in a 56-year-old male who presented with fever and akinetic mutism 2 days after the first AZV dose [59]. ANE was diagnosed upon typical, diffusion-weighted imaging hyperintense thalamic lesions on MRI [59]. He also carried a RANBP2 variant and profited significantly from methyl-prednisolone [59]. AHNE has been reported in a 75-year-old female after the first AZV dose [60]. Despite administration of methyl-prednisolone and IVIGs, the patient died one month after onset [60]. Acute disseminated encephalomyelitis: ADEM is a monophasic autoimmune demyelinating disease of the CNS that typically presents with multifocal neurological deficits and is commonly triggered by viral infections or immunization in genetically susceptible individuals. ADEM occurs more commonly in children than adults. ADEM is diagnosed upon clinical and radiological features. Cerebral imaging shows deep and subcortical white-matter lesions and grey matter lesions in the thalami or basal ganglia. ADEM favourably responds to methyl-prednisolone or IVIGs. There is increasing evidence that SC2Vs of any brand trigger the development of ADEM. It has been reported in several patients meanwhile [48,[61][62][63][64][65][66][67][68][69][70][71][72][73]. Acute, hemorrhagic encephalomyelitis: AHEM is a rare hyper-acute form of ADEM [74]. 
AHEM is charac-terized by fulminant inflammation and demyelination in the brain and spinal cord and is often preceded by an infection or vaccination [74]. SC2V-related AHEM has been reported in a 53-year-old male with rheumatoid arthritis under methotrexate and etanercept who developed fatal AHEM following the second BPV dose [74]. AHEM has been also reported in a case series of three patients after the first AZV dose [75]. Patient-1, a 61-year-old male, and patient-2, a 25-year-old female, benefited from methyl-prednisolone and plasma exchange [75]. Patient-3 died despite application of methyl-prednisolone [75]. Multifocal, necrotizing encephalitis: MNE has been described in a single patient so far. A 76-year-old male with Parkinson's disease (PD) experienced pronounced cardiovascular side effects, which were not specified, after the first AZV dose [76]. After the second BPV dose, behavioural and psychological changes were noticed [76]. He did not want to be touched anymore and presented with increased anxiety, lethargy, and social withdrawal even from close family members [76]. Additionally, striking worsening of PD was noted, leading to severe motor impairment and recurrent need for wheelchair support [76]. Two weeks after the third vaccination with the BPV, he suddenly collapsed but recovered [76]. One week later, he collapsed again due to cardio-pulmonary arrest but was successfully resuscitated after > 1 hour [76]. Unfortunately, he died shortly after starting mechanical ventilation. Autopsy and histopathological analyses of the brain uncovered acute, predominantly lymphocytic vasculitis and MNE with pronounced inflammation including glial and lymphocytic reaction [76]. In the heart, signs of chronic cardiomyopathy as well as mild acute lympho-histiocytic myocarditis and vasculitis were found [76]. Because only the spike protein (S-protein) but no nucleo-capsid protein could be detected within the foci of inflammation particularly in the endothelial cells of small blood vessels, of both, brain and heart, the condition was attributed to the vaccination rather than to a SARS-CoV-2 infection [76]. Cerebellitis: Cerebellitis has been only rarely reported as a complication of SC2Vs. In a 39-year-old female with stable multiple sclerosis since age 22 under treatment with interferon, natalizumab, and ocrelizumab, developed fatigue, fever, and sopor 17 days after the first BPV dose [77]. Cerebellitis was diagnosed that required posterior fossa decompression for imminent herniation [77]. She died six months after onset following deterioration of multiple sclerosis and development of tetraplegia, bulbar involvement, and recurrent pulmonary infections after having been switched to cladribine [77]. Meningitis Only few cases with SC2V-related aseptic or infectious meningitis have been reported. In a multicentre study on the reactivity to the AZV vaccine in 4,478 healthcare workers, vaccination was complicated by aseptic meningitis in only a single patient [5]. In two other patients, a 43-year-old female and a 38-year-old female, aseptic meningitis developed 4 and 10 days after the second and first dose of the BPV respectively [78]. Both patients presented with fever, headache, neck pain, and generalised papulous exanthema, lymphocytic pleocytosis, but both recovered completely upon symptomatic treatment [78]. A 42-year-old female developed aseptic meningitis seven days after the first BPV dose but also recovered completely [79]. 
Aseptic meningitis was also reported in a 17-year-old female who developed fever and severe headache three weeks after the first BPV dose [80]. Work-up revealed optic disc edema, multifocal well-circumscribed chorio-retinal lesions in the periphery, and aseptic pleocytosis [80]. Multiple evanescent white dot syndrome was diagnosed [80]. Clinical manifestations and abnormal findings resolved spontaneously within one month [80]. Several other cases have been reported. Transverse myelitis Generally, SC2V-related transverse myelitis can occur together with optic neuritis or encephalitis, with both, or as an isolated condition. Isolated transverse myelitis manifests with motor, sensory, and autonomic deficits and has been repeatedly reported as a complication of SC2Vs. In a meta-analysis of 49 studies, transverse myelitis was reported in 20 cases [61]. In a prospective study of 25 patients with SC2V-related acute inflammatory CNS disease, transverse myelitis was found in 4 [62]. Patients with SC2V-related transverse myelitis are frequently positive for MOG-IgG [62]. In a study of 476 children with multisystem inflammatory syndrome in children (MIS-C) and neurological deficits, one child had transverse myelitis [88]. A rare subtype of transverse myelitis, known as longitudinally extensive transverse myelitis and extending over > 3 cord segments, has been reported as a complication of SC2Vs in some patients [89][90][91][92]. The most common therapy for SC2V-related transverse myelitis is methyl-prednisolone. Epilepsy New-onset seizures are a common complication of SC2Vs. SC2V-related seizures may be due to another CNS disease occurring as a complication of SC2Vs or may occur in the absence of another CNS disease. Seizures may occur with or without structural lesions on cerebral imaging. Symptomatic epilepsy may be due to ischemic stroke, bleeding, VST, encephalitis, meningitis, or other CNS disorders manifesting with structural lesions. If vaccinees develop seizures after SC2V in the absence of a structural lesion, they should undergo CSF investigations to rule out encephalitis/meningitis. In patients with a history of epilepsy, there is no increase in seizure frequency or severity. Some patients developed status epilepticus after a SC2V. In a single patient, status epilepticus was attributed to systemic capillary leak syndrome (SCLS) [93]. SCLS is a rare but potentially life-threatening disorder clinically manifesting with recurrent episodes of arterial hypotension, hypoalbuminemia, elevated haematocrit, and generalised edema. SCLS is due to endothelial hyper-permeability triggered by viral infections or vaccinations. SCLS is increasingly recognised as a complication of SC2Vs but has only rarely been described to manifest with neurological compromise. A 36-year-old male was admitted for syncope, hypotension, and tachycardia and developed status epilepticus, cardiac arrest, anasarca, acute kidney injury, disseminated intravascular coagulation, pulmonary edema, rhabdomyolysis, and pleural effusions on hospital day 3 [93]. He was diagnosed with SCLS and recovered completely [93]. Other SC2V-related CNS complications Several other CNS complications of SC2Vs have been occasionally reported (Table 2). These include opsoclonus myoclonus syndrome (OMS), narcolepsy, Tolosa–Hunt syndrome, cytotoxic lesions of the corpus callosum, neuroleptic malignant syndrome, hypophysitis, the wine glass sign, idiopathic intracranial hypertension, and isolated adrenocorticotropic hormone deficiency (Table 2).
Although OMS has been repeatedly reported as a complication of COVID-19 in pediatric and adult patients, it has only rarely been reported as a complication of a SC2V [48]. In an Indian study, a single patient with OMS after SC2V was reported [48]. The patient was a 65-year-old male who developed behavioural changes 10 days after the second AZV dose [48]. Over the next three weeks, jerky movements became apparent. There was mild pleocytosis. OMS was diagnosed, and he profited from IVIGs and methyl-prednisolone [48]. Narcolepsy is due to disturbed sleep–wake cycle regulation and is characterised by excessive daytime sleepiness and brief involuntary sleep episodes. Narcolepsy can be associated with cataplexy (sudden loss of muscle strength) in 70% of cases or occur without it. It may go along with or without structural cerebral lesions in the hypothalamus or brainstem. Narcolepsy with cataplexy is evidenced to be an autoimmune disorder. The first reported patient with SC2V-related narcolepsy was a 57-year-old female who developed narcolepsy and impaired memory immediately after the first BPV dose [94]. Despite this reaction she received the second dose and experienced the same side effects, this time more intense than before [94]. The patient did not carry human leukocyte antigen alleles associated with narcolepsy [94]. There are also case reports documenting aggravation or exacerbation of Kleine–Levin syndrome, hypersomnia, excessive daytime sleepiness, and narcolepsy after SC2V [95]. CNS disease due to vaccination-related complications outside the nervous system CNS disease after a SC2V may be due not only to primary but also to secondary affection of the CNS. Secondary CNS complications of SARS-CoV-2 vaccines mainly include embolism to the brain due to endocarditis, myocarditis, intraventricular thrombus formation, heart failure, or arrhythmias. Physicians should be aware of cardiac complications after SC2Vs and should consider a cardiogenic origin of cerebrovascular disease after a SC2V. There are also reports of patients who experienced breakthrough infections of the CNS after SC2Vs. For example, infectious meningitis due to reactivation of the zoster virus was reported in a 12-year-old male 11 days after the first BPV dose [96]. The patient recovered completely upon intravenous acyclovir [96]. VITT and SCLS are also well known for causing secondary CNS disease. Cranial nerve lesions The cranial nerve most commonly affected by SC2Vs is the facial nerve (cranial nerve VII). Affection of other cranial nerves (I, V, VI, VIII, IX) has been reported in only some patients. Whether facial palsy is due to the same pathophysiological mechanism as viral infection-related facial palsy is under debate. Because facial palsy is also a common manifestation of GBS, it is conceivable that isolated facial palsy represents an abortive form of GBS. Hypogeusia, ageusia, hyposmia, and anosmia have also been reported as complications of SC2Vs but are less common than in SARS-CoV-2 infections. In a prospective, exploratory observational study of 258 vaccinees who had received either AZD1222 or BBV152, only 0.8% reported ageusia [97]. SC2V-related trigeminal neuralgia has been reported in only three patients [98,99]. The first patient was a 77-year-old female with a history of microvascular decompression for previous trigeminal neuralgia [99]. One month later, she received the first BPV dose [99]. Twelve hours after vaccination she experienced pain and numbness in the face.
Antibodies against the zoster virus were negative. Carbamazepine and pregabalin improved the condition, but right-sided facial numbness persisted [99]. The second patient was a 45-year-old female who developed trigeminal neuralgia 3 days after having received the first BPV dose [98]. Because NSARs did not result in complete discontinuation of the complaints, pregabalin was started [98]. However, pregabalin also did not result in complete recovery, which is why glucocorticoids were given, with success [98]. The third patient was a 48-year-old male who complained of left-sided facial pain (stabbing, electric shock-like) one day after the second BPV dose [100]. After one week, numbness of the left upper limb additionally developed. He received steroids, which were tapered and resulted in almost complete recovery at the three-month follow-up [100]. Guillain–Barre syndrome GBS is a neuro-immunological disorder due to an autoimmune reaction against components of the PNS, affecting the roots of cranial or peripheral nerves. Depending on the site of the antibody attack (myelin sheath or node of Ranvier), demyelinating and axonal forms are delineated. The most common subtypes of GBS are acute, inflammatory, demyelinating polyneuropathy (AIDP), acute, motor, axonal neuropathy (AMAN), acute, motor and sensory, axonal neuropathy (AMSAN), Miller-Fisher syndrome, the pharyngo-cervico-brachial variant, mono- or polyneuritis cranialis, and Bickerstaff encephalitis. In the Western world the most common subtype is AIDP, whereas in Asia AMAN and AMSAN prevail. The clinical presentation and treatment of SC2V-related GBS are not at variance with GBS due to other causes. More than 300 cases of SC2V-related GBS have been reported to date [3,4,101]. The prevalence of SC2V-related GBS varies considerably between studies. In a study of 2,163 GBS patients, vaccination with the AZV was associated with an increased risk of GBS [102]. In a prospective case study with a median follow-up of 387 days, GBS developed within 6 weeks after SC2V in only four patients [4]. Although there is an ongoing debate whether GBS is truly causally related to SC2Vs, more arguments speak for than against a causal relation. Chronic inflammatory demyelinating polyneuropathy (CIDP) CIDP is characterised by symmetric sensorimotor deficits in the limbs and is diagnosed according to European Federation of Neurological Sciences criteria. In an ambispective, multicentre, hospital-based cohort study carried out between March and October 2021 in India on the neurological side effects of anti-SC2Vs with the AZV or BBV152 (Covaxin), a single patient developed CIDP [103]. Plexitis (Parsonage–Turner syndrome) PTS is clinically characterised by neuralgic neck and shoulder pain, muscle weakness, and sensory disturbances. Only a few cases of SC2V-related PTS have been reported [104]. Patients with SC2V-related PTS usually recover partially under steroids, analgesics, and occupational therapy, but complete recovery can take months. Small fiber neuropathy SFN is due to affection of A-delta or C fibers and manifests clinically with pain in a highly variable distribution, sensory disturbances, and autonomic dysfunction. The gold standard for diagnosing SFN is skin biopsy showing reduced intra-epidermal nerve fiber density or reduced sweat gland nerve fiber density. Nerve conduction studies are usually normal unless SFN is associated with large-fiber neuropathy.
SC2V-related SFN has been first described in a 57-year-old female who presented with burning dysesthesia initially in the feet and consecutively spreading to the calves, and minimally to the hands one week after the second BPV dose [105]. Several other biopsy-proven cases have been reported since then. Myasthenia Several patients with newly onset myasthenia or exacerbation of myasthenia after SC2V have been reported. In a prospective case study with a median follow-up of 387 days, myasthenia was reported in a single patient [4]. In a study of > 200,000 vaccinees, one patient developed myasthenia [3]. In a multi-centre study from India, only one developed myasthenia [103]. Treatment of SC2V-related myasthenia is not at variance from myasthenia due to other causes. Myositis or dermatomyositis Myositis is a common complication of SC2Vs [3,4] but often remains a suspicion due to unavailability of muscle MRI or muscle biopsy [103]. Because myalgia is a common complication of SC2Vs, and because these patients often present with creatine-kinase (CK) elevation, myositis is suspected. Myositis-specific antibodies are usually absent in these patients. A few patients have been reported who developed dermatomyositis after a SC2V [106]. Rhabdomyolysis Rhabdomyolysis is due to acute muscle cell necrosis and manifests clinically with fatigue, myalgia, exercise intolerance, cola-coloured urine, or even muscle weakness. Causes of rhabdomyolysis are variegated but in association with SC2V it may be due to myositis, previous seizure, or SCLS. A 38-year-old Brazilian male was admitted for intense pain in lower limbs, rock hard calves, arthro-myalgia, diarrhea, and some isolated episodes of fever [107]. One day prior to onset of symptoms he had received the second BPV dose [107]. On admission, there was arterial hypotension, hypoalbuminemia, the haematocrit was 70%, lactic acidosis (6.88 mmol/L), and renal insufficiency [107]. On hospital day 2 he developed rhabdomyolysis with a CK value of 39,000 U/L and respiratory failure and required mechanical ventilation. After ruling out all differential diagnoses, SCLS was diagnosed and rhabdomyolysis attributed to increased compartment pressure [107]. The patient recovered completely upon symptomatic treatment [107]. A 68-year-old female was admitted for nausea/vomiting, syncope, hypotension, and tachycardia [93]. Consecutively, she developed protracted hypotensive shock, anasarca, acute kidney failure, disseminated intravascular coagulation, bilateral lowerextremity compartment syndrome with rhabdomyolysis, and widespread digital necrosis [93]. She was diagnosed with SCLS but deceased despite maximum treatment [93]. VACCINES UNDER DEVELOPMENT OR UNDER APPROVAL In addition to approved and marketed anti-SARS-CoV-2 vaccines a number of vaccines are in development or approval. In a randomised, placebo controlled phase 1/2 trial, the recombinant protein-based anti-SARS-CoV-2 vaccine S-268019 (Shionogi, Japan) was rated as safe in adults up to day 50. The vaccine elicited a robust IgG antibody response, but failed to elicit adequate levels of neutralising antibodies. In a randomised, observer-blinded, phase 2/3 study, S-268019-b demonstrated non-inferiority to BNT162b2 on the co-primary endpoints for neutralizing antibodies. Most participants reported mild reactogenicity on days 1−2, the most common being fatigue, fever, myalgia, and injection-site pain but no serious adverse events. 
In a study of the protein-based SARS-CoV-2 vaccine MVC-COV1901 (Medigen, Taiwan) in healthy adolescents, the most commonly reported adverse events were pain, tenderness, malaise, and fatigue. No serious adverse events were reported. In another study, on the effect of booster doses of the MVC-COV1901 vaccine, mild or moderate adverse events were reported. In a cohort study on the efficacy and tolerability of the CIGB-66 (Abdala) vaccine (CIGB, Cuba), no serious adverse events were reported in any of the enrolled Cuban vaccinees. In a study of 480 participants who received the third booster dose of the ZF2001 (Zifivax) vaccine (Anhui Zhifei Longcom, China), the incidence of adverse reactions within 30 days of vaccination was 5.8%. No serious adverse events were reported in this study either. Numerous other anti-SARS-CoV-2 vaccines are being studied (e.g., ARCT-154, Noora vaccine, Turcovac, SCTV01C, COVID-19 vaccine Hipra, DelNS1-2019-nCoV-RBD-OPT1, Covax19, Razi Cov-Pars, CoviVac, GRAd-COV2, VXA-CoV2-1, ChulaCoV19, BBV154, PTX-COVID19-B, SC-Ad6-1, ReCOV, ABNCoV2, GX-19N, EpiVacCorona, AV-COVID-19, rVSV-SARS-CoV-2-S, BECOV2, GBP510, COVAC-1, DS-5670a, etc.), but for most of them either no serious adverse reactions have been reported or the side-effect profile has not yet been evaluated in clinical trials.

DISCUSSION
This review shows that the spectrum of neurological side effects of SC2Vs is broad, ranging from mild to severe, and that the outcome ranges from full recovery to death. Most of these neurological side effects are due to the physiological or an enhanced immune response to the vaccine or its components. However, the immunological response to the vaccine can also be diverse. Several explanations have been proposed to clarify the pathophysiology of the neurological side effects of SARS-CoV-2 vaccines. According to one of them, adverse reactions result from vaccine-induced S-protein generation. According to this hypothesis, the S-protein or some of its peptide fragments not only stimulate the immune system but also bind to angiotensin-converting enzyme-2 (ACE-2) receptors, not only on endothelial cells but also on several cell types surrounding the capillary beds. Through this mechanism, the S-protein enters the cell and stimulates a series of intracellular reactions, mimicking SARS-CoV-2 infection. There is also evidence that adverse reactions result from the abundance of ACE2 receptors on cell surfaces. When ACE2 receptors are upregulated, which is the case with nicotine or anti-cancer drugs, SARS-CoV-2 infections can be more severe. When ACE2 upregulation is suppressed by inhibition of transient receptor potential canonical 3 (TRPC3)-NADPH oxidase (Nox2) complex formation, pseudovirus-induced contractile and metabolic dysfunction of rat cardiomyocytes can be attenuated. These results suggest that downregulation of ACE2 expression could represent a future therapeutic option for SARS-CoV-2 infection. A third hypothesis suggests that adverse reactions result from the induction of a pro-inflammatory response by the nanoparticles used for mRNA delivery (pegylation). There is also some evidence that SARS-CoV-2 vaccines elicit an allergenic response, reflected in reports of vaccination-induced mast cell activation syndrome. Arguments supporting this hypothesis are skin lesions and the beneficial effect of antihistamines in some patients after SC2Vs. This hypothesis is further supported by two reports of chronic, spontaneous urticaria (CSU) after SC2Vs.
The likelihood of CSU recurrence within three months of BPV was correlated with a positive autologous serum skin test, allergic comorbidities, and basopenia. Some authors also propose that all adverse reactions of SARS-CoV-2 vaccines can be explained by MIS-C/MIS-A. An argument for MIS-C/MIS-A is that inflammatory markers, such as cytokines, chemokines, glial factors, 14-3-3, and others, are often found elevated in patients with SARS-CoV-2 vaccine side effects. Another hypothesis suggests that SARS-CoV-2 vaccines cause side effects by suppressing the immune response via G-quadruplexes, exosomes, and microRNAs. One argument for this is that SC2Vs can reduce immune competence and can cause superinfections, flares of pre-existing immunological disease, or even trigger a new immunological disease. The role of VITT should not be neglected in terms of side-effect generation, but VITT does not occur in every vaccinee who develops side effects. There are also side effects that cannot easily be explained by thrombosis of venules, arterioles, or larger vessels. It is also speculated that SCLS may play a role in the pathophysiology of neurological side effects. However, not all patients with SARS-CoV-2 vaccine side effects develop SCLS, so it cannot serve as a general explanation for SC2V-related side effects.

CONCLUSIONS
Neurological adverse reactions occur with any type of SARS-CoV-2 vaccine, are varied, range from mild to severe and from treatable to hardly treatable, and should be taken seriously in order to initiate early treatment and thus improve the outcome and avoid fatalities.

No potential conflict of interest relevant to this article was reported.
Failure to detect equid herpesvirus types 1 and 4 DNA in placentae and healthy new-born Thoroughbred foals

Equid herpesvirus type 1 is primarily a respiratory tract virus associated with poor athletic performance that can also cause late gestation abortion, neonatal foal death and encephalomyelopathy. Horizontal transmission is well described, whereas evidence of vertical transmission of equid herpesvirus type 1 associated with the birth of a healthy foal has not been demonstrated. This study sampled a population of Thoroughbred mares (n = 71), and their healthy neonatal foals and foetal membranes, to test for the presence of both equid herpesvirus types 1 and 4 using a quantitative polymerase chain reaction assay. Foetal membrane swabs and tissue samples were taken immediately post-partum, and venous blood samples and nasal swabs were obtained from both mare and foal 8 h after birth. Neither equid herpesvirus type 1 nor equid herpesvirus type 4 nucleic acid was detected in any sample, and it was concluded that there was no active shedding of equid herpesvirus types 1 and 4 at the time of sampling. Consequently, no evidence of vertical transmission of these viruses could be found on this stud farm during the sampling period.

In horses, multiple herpesviruses have been detected, some of which are associated with clinical diseases. The equid alphaherpesviruses 1 and 4 (EHV-1 and EHV-4) have an economically significant impact on athletic and reproductive performance (Gilkerson et al. 1999). Respiratory disease caused by EHV-1 and EHV-4 is seen most frequently in weanlings and yearlings (Van Maanen 2002), with associated poor performance and loss of training time (Gilkerson et al. 1999). Reproductive losses usually occur because of the late gestation abortions and neonatal foal death caused by EHV-1 (Gilkerson et al. 1999; Van Maanen 2002). Outbreaks of the neurological form of EHV-1 are usually sporadic and may result in the death or euthanasia of the affected animal (Charlton et al. 1976; Wilsterman et al. 2011). Primary infection with EHV-1 occurs via the respiratory tract (MacLachlan & Dubovi 2011) following contact with infected secretions from virus-shedding horses (Rusli, Mat & Harun 2014), or contact with an aborted foetus or foetal membranes (Allen et al. 2004). Replication of the virus begins in the epithelium of the upper respiratory tract or conjunctivae and continues in the draining lymph nodes (Allen et al. 2004; Rusli et al. 2014). Within 24 hours (h), EHV-1-infected mononuclear cells are detectable in lymph nodes associated with the respiratory tract (Kydd et al. 1994). Virus-infected cells can be detected in the trigeminal ganglion within 48 h of initial infection (Allen et al. 2004; Slater et al. 1994). Immunologically naive horses may shed the virus from the nasopharynx for up to 15 days after first exposure, whereas previously exposed horses typically shed for only two to four days (Allen et al. 2004; Burrows & Goodridge 1975). The resultant leukocyte-associated viraemia can then infect the endothelium in the uterus (Allen et al. 2004; Lunn et al. 2009; Rusli et al. 2014). Infection of the endothelial cells of the uterine blood vessels allows for transmission of the virus from the mare to the foetus (Kimura et al. 2004) or placental infarction and detachment (Smith et al. 1992).
The pathogenesis of neurological disease relates to the strong endotheliotropism of virulent strains of EHV-1. Vasculitis and subsequent thrombosis can occur in the central nervous system (CNS), with resultant ischaemic damage and myelomalacia (Friday et al. 2000). The establishment of latency is a key feature of all herpesvirus infections (Dunowska 2016): EHV-1 becomes latent in the trigeminal ganglia and lymphoid tissue (Slater et al. 1994). A review of the literature on latent EHV-1 infection suggested that more than 50% of the horse population is latently infected with EHV-1 (Brown et al. 2007). It has been suggested that shedding of the virus through reactivation of latent infection is an important biological source of the virus (Allen et al. 2004; Edington, Welch & Griffiths 1994). The development of chronic, low-grade infections through reactivation of latency is an effective strategy for EHV-1 to maintain itself within the global horse population (Allen et al. 2004; Brown et al. 2007). Arguably, it is against the interest of the virus to cause the death of its host, and initiating abortion would create a 'dead end' in viral replication. An EHV-1-positive abortion or neonatal death may assist horizontal transmission because the abortus or neonate serves as a source of infection. However, a seemingly superior viral evolutionary strategy may be to disseminate the virus via the birth of an infected but viable foal. This may result in immediate infection of vulnerable animals in the same cohort but may also permit the development of latency. Future reactivation events might then continue to disseminate the virus to an even wider population of horses. In a recent preliminary study, a strong correlation was found between the presence of a major histocompatibility complex (MHC) class I B2 allele and pregnancy loss in horses, which was present regardless of the EHV-1 status of the foetus (Kydd et al. 2016). The presence of this allele was found to be a statistically significant risk factor among many risk factors for abortion (Kydd et al. 2016).
While this association needs further investigation, it raises the possibility that in mares carrying this particular allele, abortion caused by EHV-1 infection may be an accident, rather than a specific viral propagation strategy. Major histocompatibility complex class I plays a key role in the generation of host immune responses and, in vitro, acts as an entry receptor via viral glycoprotein D (Sasaki et al. 2011). The present study aimed to detect the presence of EHV-1 and -4 DNA in the placentae, blood and nasopharynx of a stud farm's population of Thoroughbred broodmares and their new-born, viable and healthy foals during a single foaling season.

Research methods and design
The study population consisted of 71 maiden and multiparous Thoroughbred mares, aged 5-19 years, together with their neonatal foals. All animals were resident on a stud farm near Piketberg, Western Cape, South Africa. The pregnant mares were maintained outside but were stabled during parturition to allow closer supervision. Foetal membrane sampling was performed immediately after placental expulsion. Foetal membranes were inspected to determine their integrity and note any signs of pathology. A dry cotton swab was rubbed over the villous surface of the chorion at three sites, namely the pregnant horn, non-pregnant horn and body (Figure 1). Approximately 8 h after foaling, venous blood samples and nasal swab samples from both mare and foal were collected into EDTA BD Vacutainer® tubes (Becton Dickinson, Johannesburg, South Africa) and onto 10-cm plastic-shafted, cotton-tipped nasal swabs, respectively. A duplex quantitative polymerase chain reaction (qPCR) assay was performed for EHV-1 and EHV-4 (Diallo et al. 2006). Nasal and placental swabs were agitated in 0.5 mL of 0.1 M phosphate buffered saline (PBS) (pH 7.4) in a 1.5 mL Pierce™ microcentrifuge tube (Thermo Fisher Scientific, United States) for 5 seconds (s). Samples were then centrifuged for 60 s at 10 000 g using a desktop centrifuge (Rotanta 460, Germany) to concentrate cellular and pathogen material, if present. Excess supernatant was removed from each sample container and discarded to reduce the sample volume. Then, 100 µL of distilled water was added to each container. Samples were then agitated and placed in a temperature-controlled heat block at 95 °C. The 0.1 mL PCR (polymerase chain reaction) plates were prepared in a separate section of the laboratory. The master mix (17 µL per sample) was placed into each sample well of the PCR plate, and a foil seal was placed over the plate. The prepared samples (3 µL) were then added to the individual wells of the plate by introducing the pipette tip through the foil seal. Lastly, the positive and negative controls were added. Nucleic acids extracted from EHV-1 and EHV-4 reference viral cultures obtained from the Equine Virology Research Laboratory, University of Pretoria, were used as positive controls. Endonuclease-free water was used as a negative control. The qPCR was performed according to the manufacturer's guidelines and followed the standard operating procedure (SOP) of the Veterinary Genetics Laboratory using the Applied Biosystems™ StepOnePlus™ Real-Time PCR System (Thermo Fisher Scientific). A cut-off value of 40 cycles (Ct) was assigned for the detection of viral DNA in the prepared samples. Ethical approval for the research was obtained from the University of Pretoria's Animal Ethics Committee (project number V109-16).
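As an illustration only, the short Python sketch below shows how a 40-cycle Ct cut-off of the kind described above can be turned into per-sample positive/negative calls for a duplex assay. It is a minimal, hypothetical example, not the laboratory's actual software; the sample names, Ct values and helper functions are invented for demonstration.

# Minimal, hypothetical sketch: applying a 40-cycle Ct cut-off to duplex qPCR
# results for EHV-1/EHV-4. Sample names and Ct values are invented for
# illustration and are not study data; None marks wells with no amplification.

CT_CUTOFF = 40.0  # cycles; a Ct at or below this value counts as detection

def call_result(ct):
    """Classify a single Ct value against the cut-off."""
    return "positive" if ct is not None and ct <= CT_CUTOFF else "negative"

def classify_duplex(ct_ehv1, ct_ehv4):
    """Return (EHV-1 call, EHV-4 call) for one sample."""
    return call_result(ct_ehv1), call_result(ct_ehv4)

# Hypothetical plate readout: (sample, EHV-1 Ct, EHV-4 Ct)
plate = [
    ("mare_01_nasal", None, None),     # no amplification in either channel
    ("foal_01_nasal", None, None),
    ("positive_control", 22.4, 24.1),  # reference viral DNA, expected positive
    ("negative_control", None, None),  # endonuclease-free water
]

for sample, ct1, ct4 in plate:
    ehv1, ehv4 = classify_duplex(ct1, ct4)
    print(f"{sample}: EHV-1 {ehv1}, EHV-4 {ehv4}")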
Results
The qPCR failed to detect either EHV-1 or EHV-4 nucleic acid in any nasal swabs collected from the study population of 71 mares and their foals, or from their foetal membranes (Table 1). As EHV-1 and EHV-4 are respiratory tract viruses, the failure to detect viral shedding suggests that cell-associated viraemia in any of the sampled horses was unlikely, and consequently blood samples for serology and viral detection were not tested.

Discussion
Our study was designed to gather evidence to test the hypothesis that horizontal dissemination is not the only means of transmission of EHV-1 and that vertical transmission is an alternative mechanism for viral propagation. We did not find any evidence of active shedding of EHV-1 or EHV-4 DNA in healthy post-partum mares and their foals, nor in the placentae, and were therefore unable to support this theory. In considering potential pitfalls of our study, an entire batch of false-negative samples, as a result of damage to viral DNA during transport, was considered unlikely; prior studies using identical sampling, transport and extraction methods and the same qPCR assay to detect EHV-1 and -4 DNA were successful (Badenhorst et al. 2015; Schulman et al. 2014). Furthermore, the positive control reacted as anticipated. Nevertheless, neither EHV-1 nor -4 viral DNA was detected in this relatively large sample set. A reported EHV-1 abortion-associated epizootic occurred on the same farm in 2007, with 9 of the then 30 resident pregnant broodmares aborting (Schulman et al. 2012). The current study included five mares that, although present, did not abort during the 2007 outbreak but were probably exposed to infectious EHV-1. An additional mare, present during the previous outbreak, was also resident on the farm but was not sampled because of her barren status in 2016. Given this history, we concluded that at least some mares sampled for the current trial had been previously exposed to EHV-1. Based on this assumption, the mares in the current study may simply not have demonstrated viral recrudescence with subsequent viraemia and shedding (Dunowska 2016). The percentage of latently infected mares was unknown at the time of the study, and the farm's protocol of routine, comprehensive vaccination of pregnant mares may have suppressed viral reactivation and shedding (Goehring et al. 2010; Minke, Audonnet & Fischer 2004). The detection of active viral shedding in animals that are possibly latently infected presents a challenge that is discussed extensively in the literature. A recent study found a low rate of detection of EHV-1 in adult horses, even among those showing pyrexia and respiratory signs (Pusterla et al. 2016). In another study of 124 hospitalised critically ill horses, no evidence of EHV-1 shedding was detected, although low levels of latency could not be excluded (Carr, Schott & Pusterla 2011). Sonis and Goehring (2013) concluded from a study of hospitalised febrile horses that nasal shedding of EHV-1 and EHV-4 was a rare event, as only one of the 64 febrile horses was PCR positive for EHV-4 and none were positive for EHV-1. Several studies have reported the time point between birth and weaning at which foals became EHV-1 and -4 positive (Foote et al. 2004; Gilkerson et al. 1999). Foote et al. (2004) showed the presence of EHV-1 and EHV-4 DNA in nasal swabs from a group of foals, some of which were as young as 11 days.
The foals were sampled at an average age of 40 days to determine seroprevalence using a glycoprotein G-specific ELISA (27% of the foals were seropositive). The young age at which these foals seroconverted has two potential explanations: firstly, a very rapid post-partum infection and seroconversion, despite the presence of maternally derived antibody; secondly, as a result of vertical transmission, intrauterine priming may have occurred, leading to rapid seroconversion on exposure immediately after birth. During an EHV-1 abortion storm, EHV-1 was identified by virus isolation in 4 out of 39 foals aged 7-9 days, 3 of which showed no clinical signs (Mumford et al. 1987). In a study by Gardiner and co-workers, EHV-1 was isolated from the chorioallantois of infected mares that gave birth to premature foals, which shed EHV-1 for the first week of life (Gardiner et al. 2012). This repeated discovery of EHV-1 and EHV-4 DNA and infectious virus in very young healthy foals was a significant factor in the justification of the present study.

Conclusion
A field study sampling a single stud farm with a single management system over one season obviously limits the extrapolation of the findings to either the South African or the global horse population. On this particular farm, there was no evidence of active EHV-1 or EHV-4 infection at the time of sampling. Given the cyclic nature of herpesviral disease, repeat sampling in successive breeding seasons or in a breeding season affected by a confirmed EHV-1 outbreak may better represent the actual risk of vertical transmission of EHV-1 in actively shedding horses. Although this study did not yield any evidence of vertical transmission of EHV-1, the possibility of vertical transmission was not conclusively excluded. Further research is required to address this intriguing hypothesis. Any evidence for vertical EHV transmission would have important consequences for management practices on stud farms and improve our understanding of the dynamics of equid herpesviral disease in horse populations.
Endoscopic management of acute colorectal anastomotic complications with temporary stent

Acute postoperative anastomotic complications following colorectal resection include leak and obstruction. Often an operation is necessary to treat these complications. The role of endoluminal procedures to treat these complications has been limited. This article illustrates that such an approach is technically feasible and can be used to treat some colorectal anastomotic complications.

INTRODUCTION
Advances in endoscopy over the last decade have provided the endoscopic surgeon an armamentarium of tools that have shifted the field from a purely diagnostic modality to a therapeutic one. One illustration of this progress is endoscopic decompression of malignant colorectal obstruction with self-expanding stents, a procedure that has gained wide acceptance as an alternative to surgical diversion or excision, especially in the setting of metastatic disease or in poor surgical candidates [1-7]. Stenting of benign colorectal conditions has been described, but its role has been limited due to technical issues and lack of long-term data on permanent metal stents for nonmalignant disease [8-13]. However, with the recent introduction of nonmetal esophageal stents, there is growing interest and some experience with their role in the treatment of benign conditions, including postoperative complications following esophageal or gastric surgery [14-16]. In this article, I describe the technical aspects and results of endoscopic stenting in 2 patients with acute postoperative anastomotic complications following colorectal surgery.

Patient 1
A 52-year-old man underwent an elective left hemicolectomy for a 2-cm adenoma. He was discharged home on the third postoperative day but readmitted to the hospital a week later with abdominal distention, nausea, and vomiting. Computed tomography of the abdomen at the time of admission revealed colonic obstruction at the anastomosis with surrounding inflammatory changes without evidence of intraabdominal abscess (Figure 1). He was managed with nasogastric tube decompression, bowel rest, and intravenous peripheral nutrition. The bowel obstruction persisted, and an operation was advised, but the patient inquired about alternative options. The colorectal surgery service was consulted. At the time of consultation, the patient's vital signs were stable and he was afebrile. His abdomen was significantly distended without peritoneal signs. His laboratory findings included a normal white count. A Gastrografin enema demonstrated complete anastomotic obstruction without a leak. On postoperative day 15, the patient consented to endoscopic stenting with possible laparotomy, colonic resection, and fecal diversion.

Patient 2
A 67-year-old woman presented with massive lower gastrointestinal hemorrhage secondary to diverticulosis. The bleeding persisted, and following transfusion of 8 units of packed red blood cells, the patient underwent emergent abdominal colectomy with ileorectal anastomosis. Her postoperative course was significant for persistent bowel obstruction managed with bowel rest, nasogastric tube decompression, intravenous antibiotics, and fluids. Computed tomography performed on postoperative day 10 revealed an anastomotic obstruction with a leak, perianastomotic inflammatory changes, and free air under the diaphragm. The colorectal surgery service was consulted. At the time of evaluation, the patient was ill-appearing and debilitated.
Her vital signs showed sinus tachycardia with a heart rate of 115 beats per minute and a temperature of 38.5 degrees Celsius. Her abdomen was significantly distended, and she had localized peritoneal signs in the lower quadrants. Her laboratory findings included a white blood count of 12.1 × 10⁹/L, hemoglobin of 8.9 g/dL, and albumin of 1.8 g/dL. The patient was advised to undergo laparotomy with diverting ileostomy, but her family inquired about alternative options with the hope of avoiding a stoma. Endoscopic intervention was discussed, and the patient consented to endoscopic stenting with possible laparotomy and fecal diversion.

ENDOSCOPIC TECHNIQUE AND RESULTS
The procedures were performed in the endoscopy suite with the patient under intravenous sedation. The patients were placed in the left lateral decubitus position. The flexible sigmoidoscope was advanced to the anastomosis. An edematous and obstructed anastomosis was noted in patient 1 at 60 cm from the anal verge (Figure 2). In patient 2, there was dehiscence of approximately 40% of the anastomosis with an associated abscess cavity (Figure 3). Under endoscopic and fluoroscopic guidance, the anastomosis was carefully traversed in both patients by using an Amplatz Super Stiff wire (0.09652 cm × 260 cm) (Boston Scientific, Natick, Maryland). A stiff wire was used to support the rigid stent delivery device. The endoscope was withdrawn keeping the wire in place, and the delivery device of the Polyflex stent (120 mm, 18 × 23 mm diameter) (Boston Scientific, Natick, Maryland) was advanced over the wire under fluoroscopic and endoscopic guidance. In patient 1, the adult flexible sigmoidoscope was used to straighten the sigmoid loop to allow safe advancement of the rigid delivery device. The stent was centered across the anastomosis, and the position was verified by fluoroscopy with the flexible endoscope tip at the distal aspect of the stent. The stent was successfully deployed in both patients, and the delivery device and endoscope were withdrawn without difficulty (Figure 4). Patient 1 had spontaneous decompression of his colonic obstruction with passage of flatus and a bowel movement the day of the procedure. The nasogastric tube was removed, and a liquid diet was initiated the following day. Seventy-two hours later, there was spontaneous passage of the stent. The patient was discharged home on post-

Patient 2 had spontaneous decompression of her obstruction the day of her procedure but 2 days later redeveloped obstructive symptoms. Repeat endoscopy revealed migration of the stent into the distal rectum. Repeat stenting was performed, and the distal aspect of the new stent was clipped to the rectal mucosa by using 4 Resolution clips (Boston Scientific, Natick, Maryland) to minimize migration. The patient's condition improved, and she was started on a liquid diet; however, 10 days following her second stent, her symptoms recurred. Repeat endoscopy showed migration of the stent. Stenting was repeated, and the distal aspect of the stent was secured to the rectum by using TriClip clips (Wilson-Cook Medical, Winston-Salem, North Carolina) (Figure 6). The obstructive symptoms resolved, and the patient was started on a liquid diet that was advanced to a regular diet. She was discharged from the hospital on postoperative day 28. Except for a urinary tract infection treated on an outpatient basis, she had a full recovery. One month after discharge, the Polyflex stent was removed in the endoscopy suite.
Repeat endoscopy 3 months later demonstrated a patent and healed anastomosis without evidence of stricture (Figure 7). She remains well and free of obstructive symptoms 6 months following her hospitalization.

DISCUSSION
Although uncommon, acute anastomotic complications following colorectal resection do occur and are distressing to the patient and the surgeon. They include bleeding, obstruction, or leakage that often results in reoperative surgery in the early postoperative period. Reoperative intervention in such a setting carries increased morbidity and prolonged recovery, and may lead to fecal diversion, committing the patient to additional operations to close the stoma. Endoscopic stenting can effectively palliate malignant colorectal obstruction in patients with metastatic disease or in patients who are poor surgical candidates due to significant medical comorbidities [1-7]. However, stenting for benign colorectal disorders has not been widely practiced or advocated due to technical limitations, lack of instrumentation, and absence of long-term data on its effectiveness and the safety of metal stents [7-13]. Furthermore, numerous operative interventions are available to successfully address the needs of most patients with benign colorectal disorders. But with the recent introduction of covered metal and nonmetal esophageal stents, there has been interest in exploring their role in the management of benign gastrointestinal conditions or postoperative complications following gastric or esophageal surgery [14-16]. Eubanks and colleagues [14] recently reported their experience with off-label use of polyester or silicone covered esophageal stents in 19 patients with gastrojejunal anastomotic complications following bariatric surgery. The stenting was undertaken for acute leaks in 11 patients, chronic gastrocutaneous fistulas in 2, and stricture in 6. Immediate symptomatic improvement was noted in 90% of their patients, including 91% of the patients with leaks and 84% of patients with strictures. During a mean follow-up of 3.6 months, resolution of the anastomotic complications was noted in 16 patients (84%). A total of 34 stents were used in the 19 patients, with an endoscopic reintervention rate of 42%. The most common complication was stent migration, which occurred in 58% of stent placements. While most patients with stent migration were treated endoscopically, 3 required laparoscopic extraction of the stent from the distal small intestine. In another study, from Germany, Kauer and colleagues [15] reported their stenting experience with thoracic anastomotic leaks after esophagectomy. Ten consecutive patients underwent placement of a covered metal stent. Successful leak occlusion was noted in 90% of the patients. Stent migration occurred in 40% of patients. Endoscopic reintervention to treat stent-related complications was performed in 50% of patients. No patient required surgical intervention. During the study period, 5 patients underwent endoscopic stent removal after full healing, 3 had spontaneous passage of the stent per anus, and 2 patients died with the stent in place from non-stent-related causes. A recent study from Karbowski and colleagues in Seattle reported their experience with the Polyflex esophageal stent in a variety of esophageal conditions. The Polyflex stent is a self-expanding and removable stent composed of a polyester infrastructure with a silicone covering. Currently, it is approved in the United States for esophageal insertion.
In their study, a total of 37 stents were placed in 30 patients with esophageal conditions. Twenty patients (66%) had benign conditions. Indications for stenting included esophageal fistula in 7, anastomotic stricture in 5, acute esophageal perforation in 3, acute postoperative leak in 1, and other causes in 14. Technical success was 100%, but stent migration was noted in 30% of patients. Endoscopic stenting for malignant colorectal disease is an option for patients with metastatic disease or those who are poor surgical candidates [1-7]. However, little data exist on its role in benign conditions [8-13]. The 2 patients described in this report illustrate that endoscopic stenting can be helpful in patients with acute postoperative complications after colorectal resection. While the standard of care in 2009 is an operative procedure when an intervention is needed under such circumstances, an endoscopic approach may successfully treat the anastomotic complication and avoid the morbidity and recovery of another laparotomy and the potential need for fecal diversion. However, it is important to note that there is a lack of data in the literature on the short- and long-term outcomes of the interventions described in this report. Furthermore, the Polyflex stent is FDA approved for esophageal indications and was used off-label in this study due to the lack of other commercially available stents to treat benign colorectal conditions. However, the patients' request to seek an alternative to operative intervention and a possible stoma raised the possibility of an endoscopic option. Only after assessing the patients' full understanding that endoscopic stenting for their conditions is experimental at this stage, and after obtaining full informed consent, was the procedure undertaken. Fortunately, this nonconventional approach resolved the anastomotic complications in both patients but also demonstrated the challenges noted by the above studies in terms of risk of migration and the need for multiple endoscopic reinterventions. The risk of stent migration and the need for reintervention in the setting of benign disease are higher than when stents are used for malignant obstruction [9,14-16]. In the case of the second patient, the use of a TriClip (Wilson-Cook Medical, Winston-Salem, North Carolina) clip to secure the distal end of the stent to the bowel wall helped prevent further migration. The use of the TriClip in this setting was off-label, and there are currently no data in the literature to support such a use. To my knowledge, no other means are currently available to properly anchor a temporary stent to the bowel. Future technologic development may yield stent devices that are less prone to migration or ancillary fixation clips that help fixation to the colorectal wall.

CONCLUSION
Some postoperative anastomotic complications, traditionally managed operatively, may be amenable to endoscopic interventions, as illustrated by the 2 cases in this report. Hopefully, future technological advances will increase the armamentarium of tools and enable endoscopic surgeons to tackle a broader spectrum of colorectal conditions with endoluminal procedures.
P113 Starting a research collaborative in the midst of a pandemic – the Humberside experience

Introduction: The COVID-19 pandemic severely impacted research activities. Large international research collaboratives have successfully produced high quality COVID-related research. We aimed to investigate factors that influence trainee engagement in collaborative research and quantify engagement in our local area to propose methods to ensure all trainees have the opportunity to take part in impactful research.
Methods: This is a mixed methods study consisting of a survey to trainees regarding their experiences in research and an assessment of engagement in COVID-related research. The survey was circulated to undergraduates and trainees of all grades in February 2020.
Results: Engagement with the survey was poor, with a < 10% return rate. 43% of respondents stated that they had no experience of research. Engagement with collaborative research was popular, with responders declaring involvement with at least 7 other collaboratives. Reasons for participating in research were improving patient care (23%), producing high quality research (19%) and CV building (19%). Barriers included perceived lack of time (23%) and lack of knowledge about research (23%). The CASSH collaborative co-ordinated the local response to COVIDSurg, COVIDSurg-Cancer and the COVER study, involving 20 trainees contributing data on 368 patients.
Conclusions: All research collaboratives rely on the enthusiasm of participants in order to succeed. We have presented some of the motivators and barriers to participation in our region and outlined how we have built on national projects to improve engagement on a local level. Further projects are planned to capitalise on this improved engagement.

STING. Steroid Injections DurinG Covid-19.
A cross-speciality study of steroid injections undertaken during this part of the pandemic. Clinicians will be able to input information on patient demographics, background Covid risk and steroid injection specifics. At follow-up at 4-6 weeks, complications and Covid-specific outcomes will be recorded, as well as patient-perceived symptom improvement. Each unit collecting data will have assigned collaborator(s), with a senior consultant validating the data. Data will be collected and managed using Research Electronic Data Capture (REDCap). Data collection and management will adhere to Caldicott II principles and GDPR.
Results: Results will be analysed through REDCap and compared to national Covid incidence. Local complication and patient-reported outcomes will be compared between specialities, environments and steroid specifics (volume, location etc.).
Conclusion: A pan-speciality look at steroid injection use during the pandemic will be useful primarily to contribute to understanding the safety of steroid use, and secondarily to examine cross-speciality differences in administration and PROMs, and to identify patient groups who may be excluded from steroid treatment. Join the team! Head to RSTN to get involved.

P115 Students' experiences of graduate attribute development, a University
Introduction: Graduate attributes are recognised as the skills and values that should be acquired by all students in tertiary education, regardless of their field of study. Graduate attributes have significant relevance to employability and in enhancing students' personal development. This research aimed to explore students' lived experiences of developing two graduate attributes embedded within their undergraduate medical curriculum: communication and collaboration skills.
Methods: Phenomenology was the validated, qualitative research method most suited to address the research question. Phenomenology allows one to better understand peoples' direct, lived experiences of a particular phenomenon; in this case, graduate attribute development. Purposefully sampled participants were chosen for semi-structured interviews, all of whom had completed the same undergraduate medical education programme. Semi-structured interviews were performed until thematic saturation occurred, at participant eight. The Braun and Clarke method of thematic analysis was utilised to identify recurring themes from the interview process.
Results: The five key themes that emerged were (1) the value placed on graduate attribute development, (2) the presence of graduate attribute learning opportunities, (3) the presence of barriers against graduate attribute development, (4) graduate attribute literacy and preconceptions and (5) the students' transition into employment.
Conclusions: There is a need for improved graduate attribute assessment methods and the development of meaningful learning strategies that promote transformative learning opportunities. Barriers against communication and collaboration skills development exist and need to be addressed. Students need to better understand the relevance of communication and collaboration skills to their future professional careers before the student-to-doctor transition has occurred.

InciSioN UK Collaborative
Corresponding Author: Mr.
Michal Kawka (mik17@ic.ac.uk)
Introduction: It is estimated that over 10% of the global burden of disease can be treated with surgery, most of which is located in low and middle-income countries (LMICs), underpinning the importance of the
Alon: Journal for Filipinx American and Diasporic Studies

This article brings into question the ethics of conducting feminist research on and with Filipina American women as a Filipina American researcher. Through identifying and challenging the assumptions of kapwa, a "pillar" of Filipino cultural values that refers to viewing the "self-in-the-other", 1 I ask, how does one research communities they have deep and personal stakes in without reproducing the existing "fissures and hierarchies of power" existing in Filipinx American studies? 2 Drawing from personal experiences of navigating researcher-participant conflict during fieldwork, I center this methodological question to interrogate the affective assumptions of sameness and unity amongst Fil-Ams in diaspora and to address what responsibilities we might have as Fil-Am feminist researchers to challenge such assumptions in our research and writing. In order to center women's complex lived experiences and disrupt positivist, static representations of Filipinx American diaspora, kapwa must be reimagined as a critical standpoint and "sameness" de-centered through the feminist methodological tool of critical self-reflection.

small town of Lancaster is predominantly White, conservative, and working- to middle-class, with their family being the only non-White/interracial family in their neighborhood. This all goes to show that my niece had (and has) very little exposure to other non-White children such as herself, outside of her visits to Illinois where her Filipino American relatives live. For this reason, my sister-in-law, a very caring woman who always embraced our family's cultural differences with compassion, was both surprised and humored when Ada ran to a group of brown people on the beach in South Carolina, inserting herself into their afternoon picnic with such ease and comfort. "I was a little mortified," she said with laughter. "We had no idea who they were, and then some random little girl just runs up to them thinking she knows them, like that's her family!" It wasn't until my brother and his wife spoke to the other family that they learned they were also, in fact, Filipino Americans. Ada's seemingly intuitive comfort around other Filipinos garnered amusement, adoration, and an unanticipated moment of cultural camaraderie on the beach. Upon hearing this story at the kitchen table, my mom and I laughed as she said, "I can't believe she recognized that they were Filipino! She must have thought that because they looked like us, she knew them." After the laughter died down my mom continued to say, "I guess that's a pretty typical Filipino thing though… always saying 'hi' even if we don't know each other. I'm just surprised she recognized that already." I was also familiar with the unspoken practice of giving a smile, hello, or "Are you Filipino?"
when coming across another Fil-Am in the store, classroom, or non-family social gathering. I came to intellectualize this cultural characteristic after learning about kapwa, a concept popularized by psychologist Virgilio Enriquez in the 1970s to explain Filipinos' interpersonal behaviors as rooted in an internal view of another not as separate from ourselves, but connected through a "shared self." Similar to my niece, I remember the first time I saw another Fil-Am girl in my predominantly white elementary school in the Chicago suburbs and the excitement I felt when realizing I wasn't the only Filipino at our school. I immediately approached her during recess and asked if she wanted to be friends. During our first playdate her mom made us lumpia and torta; for some reason, this is the only part of the hours spent together I can actually remember. However, after my mom picked me up, met Marie's mom, and drove us back to our house, she told me that Marie seemed like a nice girl, but she wasn't sure if we should keep being friends outside of school. Upset and confused, I asked her why: "Her family is different from ours," she explained. Although I still couldn't understand my mom's desire to distance her family from theirs, she remained steadfast in her decision. For reasons of her own, my mother had assumptions about our differences that ended the playdates with Marie.

The memory of hearing about Ada's adorable mistake and the one of my last playdates with Marie now exist in juxtaposition, illuminating the dark underbelly of 'typical' Filipino behavior and the assumptions of community, sameness, and unity that come with it. On the one hand, it's 'typical', and perhaps even expected, for complete strangers to warmly embrace the other as a friend, or to at least acknowledge each other as a fellow member of the Filipino American diaspora. In private, however, unspoken divisions rooted in classism, colorism, homophobia, etc. highlight the ways in which ethnic sameness is complicated by the internalization of Western colonialism and its practices for enforcing (dis)empowerment. Put simply: Filipino cultural values, such as kapwa, are rooted in a strong belief in community, but when left unchallenged, they can also be the source of intra-community conflict. Without a critical interrogation of what kapwa actually means or looks like in practice, solutions for healing from histories of colonization, assimilation, and the power-laden hierarchies within the Filipino American diaspora are limited.

In this article I explore and challenge the assumptions of kapwa in our daily lives and in the field as Filipinx American researchers. My ultimate goal is to propose a new understanding of kapwa outside of the traditional frameworks of Filipino Psychology or Filipino Virtue Ethics, which treat kapwa as a defining characteristic of a homogenously defined Filipino identity. Rather, I draw from feminist methodologies to conceptualize kapwa as a critical positioning that de-centers sameness when working with other Filipina American women. Such a methodology, I argue, requires deep introspection and an interrogation of what Philippine personhood really entails.
In this discussion I conduct a literature review of kapwa and detail its contributions and limitations; I then bring in scholarship on feminist methodologies that call for critical self-reflection and standpoint epistemologies. As I discuss in the literature review, feminist theory directly challenges the universality of knowledge production (which I argue is present in the literature on kapwa), and can disrupt assumptions of sameness in the field. Following this trajectory and the actions of feminist researchers before me, I place myself under a speculative scope as I reflect on a personal experience of conflict with a research participant that was in large part caused by my internalized assumptions of sameness based on an imagined notion of Filipina American womanhood. By discussing my own methodological mistakes, I hope to exemplify how intentional research methods are central to producing innovative scholarship that highlights the complexities of Filipinx American identity and the field.

I began to think more critically about the implications of kapwa, community, and diaspora when conducting fieldwork for my dissertation on the Filipina American diaspora in Chicago, a project deeply rooted at the intersection of Asian American Studies and Women's and Gender Studies. Overall, I worry about the dangerous implications that 'sameness' has when conducting feminist research in Filipinx American Studies, specifically the danger of replicating an over-romanticized view of a diasporic community that does not always challenge power dynamics in the field, but simply masks them. This would be the complete opposite of what I originally set out to do in my research on Filipina American women in the Chicago metro area: an ethnic, gendered, and geographical community that I identify as belonging to. However, one's belongingness to the communities they research can become an assumption when there is not enough critical interrogation of how community is defined in the first place. These assumptions can lead to the reproduction of systemic violence and further marginalize or exploit the women that I interact with and analyze: women that I may see myself as similar to, but whose own intersectional identities and life experiences make them inherently different than me.

As an "intimate insider," 4 how do I translate my belongingness to that community as I step into the role of a researcher? Cultural studies scholar Jodie Taylor describes the interrelational dynamics between a researcher and those they research when friendship becomes a factor shaping fieldwork, particularly when it comes to "the liberties that friends take with each other; their sometimes insightful gazes; their sometimes myopic familiarity; their choices between honesty and flattery; and their levels of reciprocity among other things." 5 What boundaries should be made so as not to overly blur the lines between myself and my interviewees, subsequently masking the always-existing power dynamics that are at play in the field? How can I apply feminist methodologies in "the field" when that field becomes the Filipina American diaspora: a concept, feeling, and history rather than a physical location?
When conducting interviews with other second-generation Filipina American women in the Chicago metro area, I had to ask myself, "What does an anti-racist, feminist methodology look like when conducting research that I see myself intimately implicated in?" I used to believe that by sharing an identity with those I wrote about, I could more easily conduct feminist research the "right" way. Our shared identity would allow me to speak with them, not for them; I wouldn't just be representing their stories, but our stories. Perhaps, I thought, sharing an identity with my participants would prevent me from playing the same God Trick produced by Whiteness within the academy; my sameness would protect me from being a voyeuristic outsider peeping into marginalized communities without representing them in nuanced, multidimensional ways. 6 However, during the first few weeks of conducting interviews I quickly learned that having an (assumed) shared identity presented an even more urgent need to recognize my responsibility as a researcher producing scholarship on an already underrepresented community; I needed to pay even closer attention to the ways in which power is always at play when conducting interviews, even when bonding over shared experiences with sexism, racism, and immigrant family life veiled those power dynamics. Furthermore, I learned that an important part of conducting feminist research on one's own community requires a conscious introspection of any internalized beliefs of a homogenous diasporic identity.

5 Taylor, "The Intimate Insider," 4.
6 Donna Haraway, "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective," Feminist Studies, vol. 14, no. 3 (1988): 575-599; Juanita Sundberg, "Masculinist Epistemologies and the Politics of Fieldwork in Latin Americanist Geography," The Professional Geographer, vol. 55, no. 2 (2003): 180-190.

Demystifying the Filipino Orientation & Sikolohiyang Pilipino

In recent years, scholars writing in and on the field of Filipinx American Studies have increasingly challenged homogenous understandings of "Filipino"/x American identity, or what American studies scholar Martin Manalansan refers to as "Philippine peoplehood." 7 According to Manalansan, this new wave of scholars signals the "Filipino turn" in Asian American studies, as well as a rising critical mass of Filipinx American activists and artists who share a strong commitment to and investment in the project of Filipinx American Studies. 8 However, Manalansan warns against treating this Filipino turn as simply a cause for celebration and instead urges scholars to use this as "an occasion to grapple with existing intellectual fissures and structural hierarchies that have animated and continue to animate the field". 9 Similarly, when challenging hiya, another cultural value in Filipino Virtue Ethics (FVE) that roughly translates to social shame or guilt, Manalansan argues that any monolithic notion of a national character or identity becomes "a particularly 'Filipino' problematic character flaw, an ingredient for a putative national personality trait, and a collective feeling caused by some deficiency or lack." 10
I understand this "lack" or "deficiency" as related to displacement in a postcolonial landscape, both in the Philippines and in diaspora. However, rather than completely doing away with hiya, or any other term that seeks to encapsulate the "meaning" of Philippine peoplehood, Manalansan urges us to think of a more productive use of such terms, one that involves "a sensitivity to agents and contexts." 11 Following this epistemological approach allows me to reflect on memories of my niece, my childhood classmate, my mom, and my present-day research through a more inquisitive lens that reframes kapwa as an emotional positioning, and not a cultural value. Kapwa can therefore be more accurately described as a search for oneself in another in response to feelings of isolation, confusion, and disconnection from one's Filipinx/o/a identity.

Following this trajectory, my present analysis of kapwa and its limitations is meant to highlight the need for Filipinx American studies to think deeper about our approach to conducting research on and in diaspora, beginning with the work of feminist researchers whose innovative methodologies disrupted masculinist practices across disciplines. While Filipinx American studies is becoming increasingly intersectional and transnational, scholarship concerning the Filipino American diaspora and history often engages with gender in relation to labor (i.e., the feminization and exploitation of overseas Filipino workers) or roles in a heteropatriarchal family (i.e., mothers as guardians of their children), but not always as a methodological and epistemological lens that calls for the critical self-reflection of researchers ourselves.

In contrast to Manalansan's call for an interrogation of Philippine peoplehood, the dominant literature surrounding kapwa has not taken a critical positioning towards the assumption of sameness. I first learned about loób and kapwa when conducting research on Filipino American Postcolonial Studies, which was also one of my first introductions to work on the Filipinx American diaspora. The oversimplified English translation of kapwa ("another person") does not adequately describe the cultural significance of the term. In more non-academic spaces, kapwa has increasingly become a popular theme in Filipinx American social media platforms, branding, and marketing. For Filipinx American content creators on platforms such as Instagram, the concept of kapwa is represented as a unique feature of Filipino American community practices and diasporic identity that sets us apart from other Asian Americans. Loób refers to one's "relational will" towards another, or kapwa. In practice, kapwa can be more accurately understood as a feeling of an inseparable, spiritual connection to others in the community. In order to better grasp how Filipinx Americans engage with kapwa (as a phrase, conscious practice, or behavior), we must first understand the historical and institutional roots from which the concept emerged. Although kapwa first emerged prior to Spanish colonization, the most common understandings of kapwa are based on Virgilio Enriquez's construction of Sikolohiyang Pilipino, or Filipino Psychology, which "is anchored on Filipino thought and experience as understood from a Filipino perspective." 12
12 The core of Sikolohiyang Pilipino is kapwa, and it is used to describe "the Filipino personality" as always shaped by interpersonal values and social interactions. Similarly, Filipino Virtue Ethics (developed out of an Aristotelian-Thomistic perspective) interprets kapwa as "together with the person" and is positioned as one of the foundational pillars that aims to support a "special collection of virtues dedicated to the strengthening and preserving human relationships" in Filipino culture. 13 After receiving his master's and doctoral degrees in Psychology from Northwestern University, Enriquez returned to the Philippines in 1971 with the goal of decolonizing a Western psychology that led to "the native Filipino invariably [suffering] from the comparison [to American categories and standards] in not too subtle attempts to put forward Western behavior patterns as models for the Filipino." Alternatively, Sikolohiyang Pilipino focuses on "identity and national consciousness, social awareness and involvement, psychology of language and culture, and applications and bases of Filipino psychology in health practices, agriculture, art, mass media, religion" and more. 14 Enriquez also drew from indigenous techniques of healing, religion, politics, and more to conceptualize the Filipino Orientation of Sikolohiyang Pilipino. 15 According to Enriquez, while Filipino behavior has been studied and interpreted by Western institutions for centuries, these interpretations are always-already informed by histories of domination and have either reinforced Orientalist notions of Filipino infantilization or ignored the unique cultural factors in the Philippines that create the Filipino Orientation. Therefore, the Filipino Orientation stresses an "indigenization from within" that is "based on assessing historical and socio-cultural realities, understanding the local language, unraveling Filipino characteristics, and explaining them through the eyes of the native Filipino." 16 Similarly, Jeremiah Reyes wrote about Filipino Virtue Ethics (FVE) as a "revised interpretation" of twentieth-century American scholarship produced on Filipino values. Such an interpretation was necessary after American social scientists observed Filipino behavior without a deeper cultural and historical understanding of the Philippines. For example, the American anthropologist Frank Lynch coined the term "smooth interpersonal relationships" when describing "the greatest value of Filipino culture." 17 However, Lynch's seemingly positive evaluation of Filipino culture and behavior exemplifies the historical White-centricity of Western social sciences and the reproduction of colonialist perspectives of Filipino people as willing subjects of Western colonization whose presumed submissiveness and docility created harmonious relationships between Filipinos and their colonial aggressors. Reyes instead points to the ways in which Filipino cultural values are a product of Southeast Asian tribal and animist traditions and the traditions of Spanish colonial culture that lasted for over 300 years. In contrast to Enriquez, however, Reyes does not place a critical lens on Western colonialism, which can be noted through his tendency to refer to Spanish colonizers as passing on their "traditions" to the native Filipinos, and not violently erasing the existing cultures of the islands and replacing them with their own religious, educational, and political institutions that disrupted family and community networks.
The persistence of kapwa in and outside of scholarly spaces illustrates the impact of Enriquez's work nearly fifty years after Sikolohiyang Pilipino was established, and I want to acknowledge the importance and power of studying the emotive processes that organize our interpersonal connection and identity-formation, something that I think both Enriquez and Reyes aim to do in at least some ways. 18However, my feminist critiques of Sikolohiyang Pilipino and FVE's conceptualization of kapwa targets their homogenization of Filipino culture, identity, and behavior through the concept of the Filipino Orientation and a 16 Enriquez, From Colonial to Liberation Psychology, 51. 17 Frank Lynch, "Philippine values II: Social acceptance," Philippine Studies, vol. 10, no. 1 (1962): 89. 18In more recent years, kapwa has transcended academic borders and become increasingly popularized in Filipinx American popular culture.Filipinx American social media influencers, tattoo studios, yoga collectives, and more have used kapwa to promote themselves and their brand as dedicated to Filipino culture, traditions, and history. Remoquillo, The Problem with Kapwa Alon: Journal for Filipinx American and Diasporic Studies, 3 no. 1 (2023) reliance on "the native Filipino."While I believe that decolonizing the social sciences and humanities to revise Orientalist constructions of Filipino culture is a necessary task, I take issue with the over-romanticizing of a "native" perspective that constructs indigeneity in universalist terms and the tendency to portray Filipinos born and/or still living in the Philippines as the only "authentic" producers of cultural knowledge. For example, Sikolohiyang Pilipino stresses that part of our socialization is "being sensitive to non-verbal cues, having concern for the feelings of others, being truthful but not at the expense of hurting others' feelings" that result in an "indirect pattern of communication of Filipinos."19However, Enriquez suggests that the Westernized Filipino is "impatient" with this mode of communication (due to their cultural detachment from the native Filipino perspective) and is therefore insensitive to such non-verbal cues.Enriquez also uses this to describe "the great cultural divide" caused by Westernized Filipinos' elitism and apparent rejection of all things Filipino.20Therefore, the Westernized Filipino (such as the Filipino American) is unable to truly understand or feel kapwa.However, conflating Filipinx Americans' Westernization with elitism or cultural ignorance ignores the ways that Filipinxs experience identity, self, and community differently based on one's geographic location.Using his theory of "positions in process," ethnic studies scholar Rick Bonus argues that Filipinx identity is never singular, and that Filipinx American identity must be understood as a spatial and temporal negotiation. 21The "cultural ignorance" and disconnection Enriquez critiques are not voluntary; rather, they are the direct products of socio-emotional pressures of assimilating to the dominant culture, intergenerational trauma, and internalized perceptions of Filipino inferiority/Western superiority. 
Additionally, Filipino Psychology and FVE are predominantly male-dominated and adopt a gender-neutral approach when defining kapwa as a racial or ethnic construct.In reality, identity is an intersectional experience shaped by one's gender, socio-economic class, geographical positioning and more.While colonization negatively impacted all Filipinos, the introduction of Western heteropatriarchy was particularly damaging for women and girls who occupied a "displaced position" as second-class citizens in the Philippines and the Filipino diaspora. 22Therefore, I argue that the scholarship dominating conversations about kapwa are illustrative of how ethnic-sameness is treated as the organizing category for understanding Filipino interpersonal behaviors and cultural norms, while ignoring the ways in which women and girls experience Filipino culture as subjects of heteropatriarchy. In the Filipino American diaspora, Filipina immigrant women and their contributions to uplifting Filipino culture in the United States have been recorded as directly tied to their roles as dutiful wives and attentive mothers who raise children in accordance to "respectable" Filipino behavior. 23As feminist scholars have showcased, however, Filipina American girls continue to experience higher pressures to behave in respectable manners through hyper-surveillance of their sexuality and expectations to silently obey their parents' orders.Such gendered disparities in parenting is one of the most persistent ways that Filipino immigrants have countered Orientalist notions of Filipina women's alleged hypersexuality and immorality. 24In turn, women and girls are expected to carry quite a heavy load when it comes to not only cultural preservation, but ethnic representation when faced with the threats of Western colonialism and White supremacy.Therefore, without a critical understanding of how masculinist standpoints dominate narratives of the Filipino orientation and experience, methodological approaches to conducting research and understandings of kapwa fail to adequately and accurately represent women's experiences in diaspora. posiTionAliTy And The god TricK: feminisT inTervenTions inTo posi-TivisT meThodologies Feminist scholars have developed their own set of methodologies to challenge the heteropatriarchal gaze and positivism that dominated research in the humanities and social sciences; while traditional schools of thinking in psychology favor measur-22 Lou Collette S. Felipe, "The Relationship of Colonial Mentality with Filipina American Experiences with Racism and Sexism," Asian American Journal of Psychology, vol. 7, no. 1 (2016): 25. 23 Fred Cordova, Filipinos: Forgotten Asian Americans.A Pictoral Essay / 1763-Circa-1963 (Seattle: Demonstration Project for Asian Americans, 1983), 147. 24Yen Le Espiritu, Home Bound: Filipino American Lives Across Cultures, Communities, and Countries (Berkeley: University of California Press, 2003), 157. Remoquillo, The Problem with Kapwa Alon: Journal for Filipinx American and Diasporic Studies, 3 no. 
1 (2023) able, quantitative data to understand behavior, feminists have promoted qualitative methods.Since the 1980s, feminists collaborated, debated, and disagreed as they attempted to create a new set of ethical research practices that could produce "authentic" feminist scholarship, or scholarship that was produced by women and for women with the intention of challenging the masculinist and positivist representations of The Human Experience, a universalist construction of human relations and society through the perspective of a select few.25 I argue that drawing from these interventions can disrupt the male-dominated narratives and methodologies surrounding Filipino culture and kapwa. Urban planning and policy scholar Shirley Hune explains, "In Asian American Studies, race is the organizing category and the master narrative remains male-centered.Hence, the historical significance of women is rendered invisible when their lives, interests, and activities are subsumed within or considered to be the same as those of men." 26 In the same trajectory, other anti-racist feminists developed their own methods for conducting ethical research by engaging with an intersectional lens that not only addressed the gendering, racialization, and sexualization that informed the positions of their research subjects, but also encouraged researchers themselves to reflect on how their intersectional identities rearranged the centers and margins of the communities they were working with and within. 27Feminist geographers in particular challenged the very notion of "the field" in fieldwork as they drew from feminist methodologies as a tool for dismantling the assumptions that the researcher and researched are inherently separate (opposite), and that the field is somewhere "over there" or "back then," rather than being in the here and now. 28verall, feminist interventions in conducting fieldwork have gone great lengths to reinvent the ways in which traditionally White, masculinist disciplines produce scholarship on marginalized communities, disavowing the God Trick that attempted to produce "authentic" knowledge about already real people, systems, and socio-political networks. 
29According to Haraway, the God Trick signified the ways in which the social sciences, dominated by masculinist perspectives, attempted to create universal truths regarding the human experience without any consideration of how their power and privilege through gender, race, and class skewed their world view.The theory of situated knowledges, however, helped open up epistemological spaces for the voices and perspectives of women of color researchers who invested in feminist scholarship as a way to represent the marginalized communities they came from.Such a return, however, requires "the emotionally laborious weighing of accountability for kin and other relations" 30 when the researcher's presence transforms "home" into the field.Anthropology scholar Dada Docot's examination of the conflicts and crises of returning to her hometown of Nabua in the Philippines as an expat and researcher calls into question the meanings of home, belongingness, and ultimately, power.Such questions are at the root of feminist methodologies, and can offer a new and critical perspective to approaching ethnographic research in Filipinx American studies not only by disrupting masculinist approaches to conceptualizing ethnic identity, but by focusing on the responsibility of researchers to reflect on the complex, power-laden relationships between researchers and research subjects. My feminist critiques of Sikolohiyang Pilipino's or Filipino Virtue Ethics' theorization of kapwa are not meant to act as a distraction from my own assumptions of sameness based on a shared ethnic identity.In fact, it was feminist scholar Donna Haraway herself who cautioned feminists from assuming that they were safe from playing the God Trick simply because they were women conducting research on other women. 31Rather, my personal reflections of conducting fieldwork aim to show that any person is capable of falling into the comforting assumptions of sameness (be it race, ethnicity, gender, or age), and that such slippages are symptoms of larger problems or realities: the minoritized presence of Filipinx American Studies in Asian American 29 Donna Haraway, "Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective," Feminist Studies, vol.14, no. 3 (1988): 575-599. 30Dada Docot, "Negative Productions during Fieldwork in the Hometown," GeoHumanities, vol.3, no. 2 (2017): 308. 31Haraway, "Situated Knowledges," 580. Remoquillo, The Problem with Kapwa Alon: Journal for Filipinx American and Diasporic Studies, 3 no. 1 (2023) Studies; the newness of Filipinx American Studies as separate from Philippines Studies; and the struggles that Filipinx American women and nonheteronormative folks face when attempting to find accessible representations of their unique experiences in diaspora.Therefore, without a critical feminist understanding of the intersectionality of Filipino culture and diaspora, such shortcomings are reincarnated through everyday practices of kapwa and can create damaging interpersonal environments when reimagining psychological models of behavior, conducting Filipinx American fieldwork, and building community. In essence, I hope to convey the notion that kapwa is less of a reality for Filipinx Americans, and more symbolic of how minoritized groups in the United States engage in practices of (be)longing, or the acts of longing to belong to a larger group and place. 
32My experiences when conducting fieldwork articulate those same longings, and call attention to the dire need of feminist interventions into how we understand and practice kapwa.As I shared in my opening vignette, as a Pinay 33 I oftentimes view myself and other Pinays as having a shared identity, which fed into my belief that I am naturally more fit than someone outside of our ethnic and gender communities to conduct feminist research on our experiences.However, as educator and scholar Allyson Tintiangco-Cubales suggests, we must work towards community-building through a critical Pinayist standpoint that pushes us to "check [ourselves] and how [we] wish to seek out and keep allegiances with allies, including each other." 34Following Tintiangco-Cubales's call for a critical Pinayist standpoint, I've learned that assumptions about a community without an internal interrogation of what community actually means and looks like when conducting field work may cause us, as Peminist scholars, to run the risk of reinforcing the fissures and hierarchies in Filipinx American studies rather than challenging them. new conversations surrounding feminist methodologies in Filipinx American fieldwork can ensue.As powerful of a sensation kapwa can be, particularly for those who very rarely had conversations with others who had similar gendered experiences as an ethnic minority, this was also where I found the limitations and dangers of kapwa when specific boundaries weren't kept in place.Although we emotionally resonated with similar experiences, cultural norms, and colloquialisms, my subconscious temptation to find sameness in these conversations also lead to the assumption of sameness.While I believed my intentions to be altruistic and for the sake of creating a community with those involved in the project, such assumptions also highlight the limitations of imagining a diasporic community based on sameness and camaraderie.Rather than regarding such limitations -and my own experience of navigating through conflict in the field-as an obstacle in the search for writing about diasporic community, I hope to shed light on the importance of embracing moments of difference, contention, and confusion when exploring new terrains of Filipinx American identity and diaspora. reThinKing "The field" & meThodology Recruiting research participants in the middle of a pandemic inevitably changed the geographical and conceptual terrains of what is considered "the field," and therefore directly altered the ways I maneuvered through the formative stages of fieldwork.Rather than physically traveling to Chicago to meet with participants in person, sit in on meetings organized by Filipina American clubs, or attend social events with the participants and their organizations, I arranged Zoom meetings or socially-distanced interviews in outdoor coffee shops in neighborhoods such as Hyde Park, Bridgeport, and Lakeview.However, in my research, I position these virtual spaces and short windows of moments as "the field" by use of theories written on emotional geographies, ethnic belonging, and imagined communities. 
Scholars in the field of Emotional Geographies contend that our emotions directly shape how we experience spaces, and that "We live in worlds of pain or of pleasure; emotional environs that we sense can expand or contract in response to our experience of events-though there is rarely a clear or consistent sense of simple "cause" and "affect," further reiterating the ways Remoquillo, The Problem with Kapwa Alon: Journal for Filipinx American and Diasporic Studies, 3 no. 1 (2023) in which emotion creates immaterial spaces that "can clearly alter the way the world is for us, affecting our sense of time as well as space.Our sense of who and what we are is continually (re)shaped by how we feel."35Furthermore, theories on Asian American ethnic enclaves draw from anthropological concepts of primordialism and instrumentalism in immigrant communities, the former explaining immigrant-ethnic cohesion in a host country as rooted in biological and ancestral sameness because of their origins, whereas instrumentalism typically explains ethnic cohesion as more of a choice dictated by shared goals and interests. 36Lastly, Benedict Anderson's oft-cited theory of "imagined communities" continues to inform the ways in which scholars conceptualize national belonging as more of an intellectual, imaginative, and emotional process than a geographically determined one.For Anderson, "[The community] is imagined because the members of even the smallest nation will never know most of their fellow members, meet them, or even hear of them, yet in the minds of each lives the image of their communion," further explaining the characteristics of being imaginative because the "finite, if elastic, boundaries" of a community are shaped by the personalized image constructed by different members, therefore creating multiple definitions and boundaries of that community occurring all at once yet in the same "space." 37The emotional and reflective conversations held with my Filipina American interviewees, paired with my own positioning as a Filipina American who grew up in similar circumstances, created an immaterial diasporic space in which a type of imagined community was fostered.Discussing similar memories and shared feelings created a space that challenged the notion of fieldwork as geographically rooted, and instead introduced an emotional terrain in which we all could step into and explore. 
The women in my study were all second-generation Filipina Americans born and/or raised in the Chicago metro area.I met several of them through Filipino/a/x American organizations and clubs based in Chicago, while others I met through word of mouth-posting electronic flyers on my social media pages, asking friends to pass on my information to anyone who would fit recruitment criteria for my research.Within two weeks, I quickly accumulated almost fifteen volunteers who all expressed interest in talking about their experiences as Filipina American women and daughters of immigrants.The virtual and distanced interviews I conducted and the virtual events I attended were all tied together through their feelings of belonging to a gendered and ethnic community that they felt was separate from the Filipino American community at large because they were women.Comments were often made that signaled participants' identification with me as a part of their imagined Filipina American community as they would say things like, "Oh, you know how Filipino moms are…" or "You know Filipina titas (aunties), they all like tsismis (gossip)."One of my favorite interactions with a participant was when she talked about her past relationships with men, referring to them as "basura (trash) boys."This colloquial use of "trash" in the contemporary English language to describe her past partners-one she assumed I'd be familiar with because of my age -was translated to Tagalog when used in conversation with me.Even though neither one of us spoke Tagalog fluently, we both knew exactly what she was talking about and were able to share a moment of laughter. Our interactions created a space shaped by emotions, memories, and imaginations of gendered diasporic belonging, a space in which we each stepped into as we logged onto Zoom or sat six feet apart with masks on at a coffee shop.The more I listened to them talk about their experiences-and we found that we shared many of them-the more our imagined diasporic community grew and the more the field developed around us and from us, momentarily creating what felt like kapwa.However, as my experience with one participant in particular revealed, the feelings of kapwa are temporal and subjectively experienced: what may have been a positive experience for me ended up as an emotionally triggering one for her.It is through my first experience with confrontation in the field that I learned more about the meaning of conducting feminist fieldwork, building community, and seeking connection in the Filipinx American diaspora. The disillusionmenT of KApwA Through conflicT in The field In August 2020, I sat at the dining table in my apartment with my open laptop and notepad, ready to jot down any memora-Remoquillo, The Problem with Kapwa Alon: Journal for Filipinx American and Diasporic Studies, 3 no. 
1 (2023) ble quotes and observations to be used in the dissertation.As I launched the Zoom meeting room designated for interviews, I reviewed my short list of opening questions to get our conversation going: did you grow up around other Filipinx Americans?How was culture talked about in your household?When and why did your parents immigrate to the United States?Sam* was one of the first women who responded to my call for participants after receiving a flyer through the Filipino American Historical Society's (FANHS) listserv.It should be noted that I do not use her real name, provide any personal information, nor do I discuss any specific interview materials gathered during our conversations.I draw from my experiences with Sam to further examine the politics of kapwa in the field, but not to reveal any sensitive information regarding her personal experiences that were shared with me during the conversations. Much like the other participants, Sam sent me an email briefly explaining her participation in FANHS and expressed her interest in getting involved in the project.She briefly described her upbringing as a second-generation Filipina American in a Chicago suburb about an hour from where I grew up, although for several years now she had been living out of state for school and work.In her message, Sam wrote about her excitement to talk about her experiences because she felt that that "more representation of Filipina Americans' experiences need to be shared."After scheduling a time to meet, I sent Sam a more detailed description of the project-its goals, focuses, and methodological scope.I included a list of the general, open-ended questions I'd ask in the interviews, but clarified that it would be mostly a conversation that could go in any direction that she as the participant would like it to go.Because I was exempt from IRB approval, I was not required to obtain written consent, although I received recorded, verbal consent indicating that they understood that if the topics became too sensitive or emotionally difficult for them, they had the right to refuse to answer a question, end the interview at any point, and say things off record that would be left out of the dissertation.Once she acknowledged and accepted these terms, I began the interview: "tell me about yourself." 
I could immediately tell that Sam was highly intelligent and not afraid of voicing her opinions or expressing her emotions.There were very rarely (if any) lulls in our conversations as we seemed to swiftly move from one topic to another as she let her stories of high school and college friends, her immigrant parents, and relationship with her brother flow so freely.Other than a few words of acknowledgment, I was almost completely silent for the first twenty or so minutes, giving her the space to take things in the direction she wanted and allowing myself to gauge her energy and adjust to her pace.As her nerves seemed to calm down and she began asking questions about myself and my own up-bringing, I noticed a shift in our dynamic.We became much more conversational, transitioning into more of a back and forth dialogue as we compared and contrasted the neighborhoods and schools we grew up in.I then asked Sam the same question I ask all participants: how did you come to learn about your identity as a Filipina?In the interviews before and after this one, participants had a tendency to refer to their relationships with their immigrant parents (usually their moms who they were closest to) who passed down cultural values and shared family histories.Similarly, Sam talked about the close relationship that she and her younger brother had with their mom who was a Filipina immigrant.Soon after, however, she began talking about the contentious relationship she had with her dad while growing up.As she seemed to get deeper into the memories of her childhood, Sam's pace began to speed up again as she shared memories of an immigrant household affected by alcoholism, domestic disputes, violence, the trauma of being sexually abused by a family friend, and the deep-seated pain of feeling abandoned by her mother, who she felt didn't protect her and brother enough. As I tried to process Sam's pain and remember all the training I received in my graduate methodologies courses, I felt my own emotions and memories of a similar past come flooding in.I told her that I understood the confusion she must have felt from having a "close knit" family that was also the source of a lot her pain and trauma.I also mentioned to her that, unfortunately, these were common occurrences in Filipino American families.Some theorists explain the common occurrence of domestic violence as a result of Western colonialism and the forced implementation of Eurocentric gender roles organized by heteropatriarchal domination (Espiritu, 1997, p. 13), while others explain issues with mental health and substance abuse amongst Filipino Americans as symptoms of colonial mentality. 
38I wanted her to know that she wasn't alone in this trauma, that I also grew up in a household where yelling, physical fights, and substance abuse deeply shaped my experiences as a child and adolescent.However, my responses remained vague and non-specific as I found myself tiptoeing around the unspoken expectation to share my own stories with the same detail that Sam gave me.I wasn't ready to confront my own traumas and accept that my pain and that of my families' were always-already implicated in the project.Perhaps out of panic and stress, I chose to intellectualize our feelings, treating them representative of a shared cultural issue explained by postcolonial theory and psychology.By doing so, I was able to extract myself from the surprising and uncomfortable emotive space we now found ourselves in and fall back into the role of The Researcher.However, I didn't realize that while I had the ability to "pull out," Sam was stuck, and unready to make an intellectual pivot when remembering her traumas.As I made the split-second call to take this turn away from my own discomfort and anxiety, I unknowingly exercised my power as a researcher in a way that prioritized my emotions above hers. Although the rapid pace of our conversations made it difficult, I did my best to check in on how she was feeling, asking her if she would like to stop to take a breather before continuing on.At the end of our hour and a half long conversation, we were both physically and emotionally spent.I thanked her again for her time, and she expressed her desire to have a follow-up discussion during my second round of interviews-she even texted me the names of a few different Filipino American podcasts that she felt I would be interested in.I felt relieved that things had gone smoothly-that I had handled such difficult moments correctly-and that she wanted to keep participating in the project.However, less than a week later, I received an email from Sam that was starkly different than our last interactions.This email was filled with panic, worry, anger, and accusations.She asked what my methodology was; what feminist scholarship I was using to support my analyses (and included a list of sources that I "should follow next time [you] interview someone"); how I was storing all of the interview materials; and asked for proof that my project was exempt from IRB approval.She asked how I would protect her identity in my research, and then made a comment that as a researcher herself, she felt that I didn't know what I was doing.Shocked and embarrassed, I typed out my answers to her questions and apologized for any discomfort she felt during or after the interview process.I told her that it was okay if there were parts of the interview she wanted to be left out, and that there would be no animosity if she wanted to withdraw from the project all together.I never heard from her again, but ultimately decided to leave her out in an attempt to respect her feelings of discomfort and regret. 
After reviewing all the steps, I took the issue to my advisor and an IRB officer at the University of Texas at Austin.Ultimately, they came to the conclusion that I did everything I was supposed to do, and that I handled the situation with as much care and caution as I could have.My advisor told me that things like this happen in the research process, and to treat it as an experience to learn from rather than fearing that it would be detrimental to my entire project.In retrospect, my fear of being seen as a faulty researcher by my advisor and the university took precedent as I relied on them to affirm my credibility.Reexamining my reaction to Sam's emails reveals how I unconsciously reassumed the position of a researcher (not just a fellow Filipina American to Sam) because I knew that I could receive some degree of institutional protection, when really what I was feeling was extreme emotional vulnerability.Not only was I worried about my dissertation, but I was also plagued with very real, raw emotions.I was shockedeven angry-by what felt like an abrupt change in her view of me.I felt the discomforts of rejection as I realized that I misinterpreted or overestimated a connection with Sam, when she did not feel the same as me.I originally left the interview feeling like we talked about such important and revelatory topics and shared such vulnerable parts of ourselves to each other.Once leaving that space, however, Sam re-oriented herself in opposition to me.To her, we were no longer two Filipina American women from Illinois who were trying to figure out our own identities through family memories, but I was the researcher and she was the researched; she was the vulnerable one while I was a threat to her safety.As the reality of our complicated relationship dawned on me, I felt my cheeks burn as I thought about the shame and embarrassment I would feel if word got back to my advisor that I wasn't good at my job-that I wasn't a trustworthy researcher and community member.The kapwa I thought existed through our similarities was demystified, and out of our attempts Remoquillo, The Problem with Kapwa Alon: Journal for Filipinx American and Diasporic Studies, 3 no. 
1 (2023) for self-preservation, we turned on each other and retreated into our own anxieties, fears, and pain.Now, I can reflect on our interactions and my internal reactions to her emails following the interview out of the terrains of "good" or "bad."Rather, I see it as an outcome of a complicated web of different feelings, people, and positionalities.Even though I never intended to express the sentiments of kapwa in my interviews by treating the interviewees and myself as one in the same person, connected by our shared identities as Filipina Americans, the underlying assumptions of kapwa and Filipino American diasporic community still informed our interactions.At first, these assumptions provided a space in which we were able to share the burden of familial trauma.Once leaving, however, those assumptions made Sam feel unsafe and too vulnerable-feelings not acknowledged when kapwa and community are imagined.Yet, these difficult emotions and the interpersonal conflicts became equally important when exploring new definitions of the Filipinx American diaspora in my research.Although my project ended up going in a different direction that no longer included multiple narratives of Filipina Americans (including Sam's), my experience with Sam challenged my assumptions of conducting feminist research with(in) a community I identified with, while also initiating the implementation of boundaries in the field-for my protection and the participants' . Surface-level understandings of kapwa can over-emphasize unity and connection at the expense of one's boundaries as well as mental and emotional safety.Creating boundaries-or establishing a clear sense of self as distinct from another-helps ensure the safety and care without sacrificing interpersonal connections that humanize our research.Feminist geographers Dana Cuomo and Vanessa Massaro similarly found that reconstituting and reconstructing the physical and emotional boundaries of field space was essential when researching their resident community in Philadelphia.Their reflections on conducting fieldwork in their own community, and with people they had friendly ties to as neighbors and not researchers, illustrated the much-needed yet under-discussed topic of boundary-making as a methodological practice in feminist research.In Cuomo and Massaro's joint introduction they wrote, "While such blurred lines may be desirable for geographers looking to get 'inside' their research site, we found that we needed to create physical and emotional boundaries to construct us explicitly as researchers in the eyes of our participants." 39The "blurred lines" that the geographers mention refer to the ways in which "the field" that was subject to their analytical eyes was not physically distinguishable from "home," thus blurring the lines between insider or outsider, friend or neutral third party.Similar to Cuomo and Massaro, my emotional and physical closeness to my participants constructed "the field" as both "spatially and temporally messy and difficult to discern," and therefore resulted in the unintentional collapsing between myself as the researcher and those that I was still researching. 
40 While un-blurring the lines between the researcher and participant-and exposing the assumptions of kapwa-may spark anxieties about producing work that leans too far into the formality of oppositional positionalities, the work of feminist geographers sets an example of how boundary-making can be one solution to nuancing Filipinx American methodological entanglements with kapwa. Implementing boundaries to create some degree of distance could have helped keep Sam emotionally and physically safe; boundaries would have also helped me better navigate these feelings of confusion, loss, and hurt. Furthermore, having clearly set boundaries can benefit participants by allowing them to "imagine how the outside world would receive their stories," rather than forgetting that our conversations would not necessarily remain within the immaterial walls of our temporal diasporic community.

39 Dana Cuomo and Vanessa A. Massaro, "Boundary-making in Feminist Research: New Methodologies for 'Intimate Insiders'," Gender, Place & Culture: A Journal of Feminist Geography, vol. 23, no. 1 (2014): 95.
40 Ibid., 96.

Kapwa: A Critical Standpoint & Methodology

I do not suggest that boundary-making and kapwa are mutually exclusive-that researchers must choose between a consciousness of (dis)empowerment in the field or seeking a deeper connection with those we research on and for. Rather, I propose a reframing of kapwa as a critical standpoint that actively interrogates the meanings of community and sameness: what if kapwa did not begin and end with the assumption of sameness, but with a commitment to representing the diversity of diasporic identities and intra-community healing? Kapwa as a critical standpoint challenges the notion that interpersonal and internal conflict are antithetical to community, and resembles Manalansan's call to embrace the "wildness" and "mess" of qualitative research in order to better obtain a "sensitive, visceral, affective, and emotional literacy about the struggles of queer subjects such as immigrants, people of color, and single mother on welfare." 41 Similar to Manalansan, I argue that embracing the ambiguity of identity and the discomforts of conflict are crucial methods toward healing the pains of disconnection and producing work that truly represents the multifaceted and complex positionalities of the Filipinx American diaspora.
After deeply reflecting on my experience with Sam, I now believe that there is a way to un-romanticize kapwa when conducting research with-and on-other Filipina American women, while simultaneously remaining true to the feminist ideologies of practicing empathy and creating emotional connections in the field.Over the years and dozens of interviews conducted since Sam, I have learned how to take a critical positioning towards kapwa while still paying close attention to the ways in which emotions are ever-present in the field.I am careful about the pace at which our conversations move to ensure that they are in charge of what is shared and when they choose to share it; I check myself whenever I have the urge to finish their sentences, or reframe what they are saying in a way that mirrors my own internal dialogue.I also try to take better care of myself through the interview processes by listening to my body when it tells me that we are emotionally charged, exhausted, content, or confused.Like all other qualitative researchers, I will continue to face challenges in the field that make me question myself-I like this.As feminist researchers teaching and contributing to the growing field of Filipinx American Studies, we must continue to produce and practice ethical methodologies that keep our participants and ourselves safe.We must continue to challenge the assumptions of community, grieve the pains of disunity, and search for new modes and methods of finding connection and fostering kapwa.
Ferromagnetism and Metal-Insulator Transition in the Disordered Hubbard Model

A detailed study of the paramagnetic to ferromagnetic phase transition in the one-band Hubbard model in the presence of binary alloy disorder is presented. The influence of the disorder (with concentrations $x$ and $1-x$ of the two alloy ions) on the Curie temperature $T_c$ is found to depend strongly on the electron density $n$. While at high densities, $n>x$, the disorder always reduces $T_c$, at low densities, $n<x$, the disorder can even enhance $T_c$ if the interaction is strong enough. At the particular density $n=x$ (i.e., not necessarily at half filling) the interplay between disorder-induced band splitting and the correlation-induced Mott transition gives rise to a new type of metal-insulator transition.

In correlated electron materials it is a rule rather than an exception that the electrons, apart from strong interactions, are also subject to disorder. The disorder may result from non-stoichiometric composition, as obtained, for example, by doping of manganites (La$_{1-x}$Sr$_x$MnO$_3$) and cuprates (La$_{1-x}$Sr$_x$CuO$_4$) [1], or in the disulfides Co$_{1-x}$Fe$_x$S$_2$ and Ni$_{1-x}$Co$_x$S$_2$ [2]. In the first two examples, the Sr ions create different potentials in their vicinity which affect the correlated d electrons/holes. In the second set of examples, two different transition metal ions are located at random positions, creating two different atomic levels for the correlated d electrons. In both cases the random positions of different ions break the translational invariance of the lattice, and the number of d electrons/holes varies. As the composition changes so does the randomness, with x = 0 or x = 1 corresponding to the pure cases. With changing composition the system can undergo various phase transitions. For example, FeS$_2$ is a pure band insulator which becomes a disordered metal when alloyed with CoS$_2$, resulting in Co$_{1-x}$Fe$_x$S$_2$. This system has a ferromagnetic ground state for a wide range of x with a maximal Curie temperature $T_c$ of 120 K. On the other hand, when CoS$_2$ (a metallic ferromagnet) is alloyed with NiS$_2$ to make Ni$_{1-x}$Co$_x$S$_2$, the Curie temperature is suppressed and the end compound NiS$_2$ is a Mott-Hubbard antiferromagnetic insulator with Néel temperature $T_N$ = 40 K. Our theoretical understanding of systems with strong interactions and disorder is far from complete. For example, it was realized only recently that in gapless fermionic systems the soft modes couple to order parameter fluctuations, leading to different critical behavior in the pure and the disordered cases [3]. A powerful method for theoretical studies of strongly correlated electron systems is the dynamical mean-field theory (DMFT) [4,5,6]. The DMFT is a comprehensive, conserving, and thermodynamically consistent approximation scheme which emerged from the infinite-dimensional limit of fermionic lattice models [7]. During the last ten years the DMFT has been extensively employed to study the properties of correlated electronic lattice models. Recently the combination of DMFT with conventional electronic structure theory in the local density approximation (LDA) has provided a novel computational tool, LDA+DMFT [8,9], for the realistic investigation of materials with strongly correlated electrons, e.g., itinerant ferromagnets [10]. The interplay between local disorder and electronic correlations can also be investigated within DMFT [11,12,13,14,15].
Although effects due to coherent backscattering cannot be studied in this way [11], since the disorder is treated on the level of the coherent potential approximation [16], there are still important physical effects remaining. In particular, electron localization, and a disorder-induced metal-insulator transition (MIT), can be caused by alloy-band splitting. In the present paper we study the influence of disorder on the ferromagnetic phase. We will show that in a correlated system with binary-alloy disorder the Curie temperature depends non-trivially on the band filling. In the disordered one-band Hubbard model we find that for a certain band filling (density) $n = N_e/N_a$, where $N_e$ ($N_a$) is the number of electrons (lattice sites), disorder can weakly increase the Curie temperature provided the interaction is strong enough. A simple physical argument for this behavior is presented. We also find that at special band fillings $n \neq 1$ the system can undergo a new type of Mott-Hubbard MIT upon increase of disorder and/or interaction.

In the following we will study itinerant electron ferromagnetism in disordered systems, modeled by the Anderson-Hubbard Hamiltonian with on-site disorder

$$H = \sum_{ij\sigma} t_{ij}\, c^{\dagger}_{i\sigma} c_{j\sigma} + \sum_{i\sigma} \epsilon_i\, n_{i\sigma} + U \sum_i n_{i\uparrow} n_{i\downarrow}, \qquad (1)$$

where $t_{ij}$ is the hopping matrix element and $U$ is the local Coulomb interaction. The disorder is represented by the ionic energies $\epsilon_i$, which are random variables. We consider binary alloy disorder where the ionic energy is distributed according to the probability density

$$P(\epsilon_i) = x\, \delta(\epsilon_i + \Delta/2) + (1-x)\, \delta(\epsilon_i - \Delta/2).$$

Here $\Delta$ is the energy difference between the two ionic energies, providing a measure of the disorder strength, while $x$ and $1-x$ are the concentrations of the two alloy ions. For $\Delta \gg B$, where $B$ is the band-width, it is known that binary alloy disorder causes a band splitting in every dimension $d \geq 1$, with the number of states in each alloy subband equal to $2xN_a$ and $2(1-x)N_a$, respectively [16].

We solve (1) within DMFT. The local nature of the theory implies that short-range order in position space is missing. However, all dynamical correlations due to the local interaction are fully taken into account. In the DMFT scheme the local Green function $G_{\sigma n}$ is given by the bare density of states (DOS) $N_0(\epsilon)$ and the local self-energy $\Sigma_{\sigma n}$ as $G_{\sigma n} = \int d\epsilon\, N_0(\epsilon)/(i\omega_n + \mu - \Sigma_{\sigma n} - \epsilon)$. Here the subscript $n$ refers to the Matsubara frequency $\omega_n = (2n+1)\pi/\beta$ for the temperature $T$, with $\beta = 1/k_B T$, and $\mu$ is the chemical potential. Within DMFT the local Green function $G_{\sigma n}$ is determined self-consistently by

$$G_{\sigma n} = \left\langle \langle c_{\sigma n} c^{\star}_{\sigma n} \rangle_{\mathcal{A}_i} \right\rangle_{\rm dis} \qquad (2)$$

together with the k-integrated Dyson equation $\mathcal{G}^{-1}_{\sigma n} = G^{-1}_{\sigma n} + \Sigma_{\sigma n}$. The single-site action $\mathcal{A}_i$ for a site with the ionic energy $\epsilon_i = \pm\Delta/2$ has the form

$$\mathcal{A}_i = -\sum_{\sigma n} c^{\star}_{\sigma n}\left(\mathcal{G}^{-1}_{\sigma n} - \epsilon_i\right) c_{\sigma n} + U \int_0^{\beta} d\tau\; c^{\star}_{\uparrow}(\tau) c_{\uparrow}(\tau)\, c^{\star}_{\downarrow}(\tau) c_{\downarrow}(\tau),$$

where we used a mixed time/frequency convention for the Grassmann variables $c_{\sigma}$, $c^{\star}_{\sigma}$. Averages over the disorder are obtained by $\langle \cdots \rangle_{\rm dis} = \int d\epsilon\, P(\epsilon)\, (\cdots)$. Since an asymmetric DOS is known to stabilize ferromagnetism in the one-band Hubbard model for moderate values of $U$ [17,18,19], we use the DOS of the fcc-lattice in infinite dimensions [20]. This DOS has a square-root singularity at $\epsilon = -1/\sqrt{2}$ and vanishes exponentially for $\epsilon \to \infty$. In the following the second moment of the DOS, $W$, is used as the energy scale and is normalized to unity [21]. The one-particle Green function in Eq. (2) is determined by solving the DMFT equations iteratively [17,18] using Quantum Monte Carlo (QMC) simulations [22]. Curie temperatures are obtained from the divergence of the homogeneous magnetic susceptibility [17,23].
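To make the flow of these equations concrete, the following minimal Python sketch iterates a DMFT-type self-consistency loop with a binary-alloy disorder average. It is only an illustration of the structure of Eqs. (1)-(2): a semicircular DOS stands in for the infinite-dimensional fcc DOS, a static Hartree-like term replaces the Hirsch-Fye QMC impurity solver actually used in the paper, and all parameter values are placeholders rather than the paper's settings.

```python
import numpy as np

# Illustrative parameters only; the paper's QMC calculations use a different solver and values.
U, mu, beta = 4.0, 1.0, 14.0              # interaction, chemical potential, inverse temperature
delta, x = 2.0, 0.5                        # alloy splitting Delta and ion concentration x
n_mats = 256
iwn = 1j * (2 * np.arange(n_mats) + 1) * np.pi / beta   # fermionic Matsubara frequencies

# Semicircular DOS as a stand-in for the d = infinity fcc DOS (keeps the Hilbert transform simple).
eps = np.linspace(-1.0, 1.0, 2001)
dos = (2.0 / np.pi) * np.sqrt(1.0 - eps**2)

def local_green(zeta):
    """G(zeta) = integral d(eps) N0(eps) / (zeta - eps), evaluated on a frequency grid."""
    return np.trapz(dos / (zeta[:, None] - eps[None, :]), eps, axis=1)

def impurity_self_energy(weiss, eps_i):
    """Placeholder (static Hartree-like) solver; the paper uses Hirsch-Fye QMC here instead."""
    n_guess = 0.5                          # crude fixed occupancy per spin, illustration only
    return np.full_like(weiss, U * n_guess)

sigma = np.zeros(n_mats, dtype=complex)
for iteration in range(100):
    g_loc = local_green(iwn + mu - sigma)          # k-integrated lattice Green function
    weiss_inv = 1.0 / g_loc + sigma                # Dyson equation for the Weiss field
    # Binary-alloy average: one impurity problem per ionic energy, weighted by P(eps_i)
    g_avg = np.zeros(n_mats, dtype=complex)
    for conc, eps_i in ((x, -delta / 2), (1.0 - x, +delta / 2)):
        sig_i = impurity_self_energy(1.0 / (weiss_inv - eps_i), eps_i)
        g_avg += conc / (weiss_inv - eps_i - sig_i)
    sigma_new = weiss_inv - 1.0 / g_avg            # self-energy of the disorder-averaged medium
    if np.max(np.abs(sigma_new - sigma)) < 1e-8:
        break
    sigma = 0.5 * (sigma + sigma_new)              # linear mixing for stability
```

With a realistic impurity solver in place of the Hartree-like stub, the same loop structure yields the interacting, disorder-averaged Green functions from which spectra and susceptibilities are computed.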
We find a striking difference in the dependence of the Curie temperature $T_c$ on disorder strength $\Delta$ for different band fillings $n < x$ and $n > x$ (we chose $x = 0.5$ for numerical calculations). At $n = 0.7$, the critical temperature $T_c(\Delta)$ decreases with $\Delta$ for all values of $U$ and eventually vanishes at sufficiently large disorder [Fig. 1(a)]. By contrast, at $n = 0.3$, $T_c(\Delta)$ weakly decreases with $\Delta$ at small $U$, but increases with $\Delta$ at large values of $U$ [Fig. 1(b)]. As will be explained below, this striking difference originates from three distinct features of interacting electrons in the presence of binary alloy disorder: i) $T_c^p \equiv T_c(\Delta = 0)$, the Curie temperature in the pure case, depends non-monotonically on band filling $n$. Namely, $T_c^p(n)$ has a maximum at some filling $n = n^*(U)$, which increases as $U$ is increased [17]; see Fig. 2. ii) In the alloy disordered system the band is split [16] when $\Delta \gg W$. As a consequence, for $n < 2x$ and $T \ll \Delta$ electrons only occupy the lower alloy subband while the upper subband is empty. Effectively, one can therefore describe this system by a Hubbard model mapped onto the lower alloy subband. Hence, it corresponds to a single band with the effective filling $n_{\rm eff} = n/x$. It is then possible to determine $T_c$ from the phase diagram of the Hubbard model without disorder [17]. iii) The disorder leads to a reduction of $T_c^p(n_{\rm eff})$ by a factor $x$, i.e., we find $T_c(n) = x\, T_c^p(n/x)$ when $\Delta \gg W$ [24]. Hence, as illustrated in Fig. 2, $T_c$ can be determined by $T_c^p(n_{\rm eff})$. Surprisingly, then, it follows that, if $U$ is sufficiently strong, the Curie temperature of a disordered system can be higher than that of the corresponding pure system [cf. Fig. 2].

Figure 2 (caption): $T_c$ for interacting electrons with strong binary alloy disorder. Curves represent $T_c^p$, the Curie temperature for the pure system, as a function of filling $n$ at two different interactions $U_1 \ll U_2$ (cf. [17]). For $n < x$, $T_c$ of the disordered system can be obtained by transforming the open (for $U_1$) and the filled (for $U_2$) point from $n$ to $n_{\rm eff}$, and then multiplying $T_c^p(n/x)$ by $x$ as indicated by arrows. One finds $T_c(n) < T_c^p(n)$ for $U_1$, but $T_c(n) > T_c^p(n)$ for $U_2$. This difference originates from the non-monotonic dependence of $T_c^p$ on $n$.

To illustrate the alloy band splitting in the presence of strong interactions discussed above [see (ii)], we calculate the spectral density from the QMC results by the maximum entropy method [25]. The results in Fig. 3 show the evolution of the spectral density in the paramagnetic phase at $U = 4$ and $n = 0.3$. At $\Delta = 0$ the lower and upper Hubbard subbands can be clearly identified. The quasiparticle resonance is merged with the lower Hubbard subband due to the low filling of the band, and is reduced by the finite temperature. At $\Delta > 0$ the lower and upper alloy subbands begin to split off. A similar behavior was found at $n = 0.7$. The separation of the alloy subbands in the correlated electron system for increasing $\Delta$ is one of the preconditions [cf. (ii)] for the enhancement of $T_c$ by disorder when $n < x$, as discussed above.

The splitting of the alloy subbands and, as a result, the changing of the band filling in the effective Hubbard model implies that $T_c$ vanishes for $n > x$. Namely, in the ferromagnetic ground state each of the alloy subbands can accommodate only $xN_a$ and $(1-x)N_a$ electrons, respectively. Therefore, if the ground state of the system were ferromagnetic, the upper alloy subband would be partially occupied for all $n > x$.
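A tiny numerical illustration of points (ii) and (iii): given a non-monotonic pure-case curve $T_c^p(n)$, the strong-disorder estimate $T_c(n) = x\,T_c^p(n/x)$ can come out either below or above the pure value at the same filling. The two model curves below are invented stand-ins for the weak- and strong-coupling curves sketched in Fig. 2, not the paper's QMC data.

```python
# Hypothetical pure-case Curie temperature curves Tc_p(n); the functional forms and numbers
# are invented for illustration and are NOT the QMC results of the paper.
def tc_pure_weak_u(n):
    return 0.2 * n * (1.0 - n) if 0.0 < n < 1.0 else 0.0      # maximum near n = 0.5

def tc_pure_strong_u(n):
    return 0.4 * n**3 * (1.0 - n) if 0.0 < n < 1.0 else 0.0   # maximum near n = 0.75

def tc_disordered(tc_pure, n, x):
    """Strong-disorder estimate Tc(n) = x * Tc_p(n/x); only meaningful while n/x < 1."""
    n_eff = n / x
    return x * tc_pure(n_eff) if n_eff < 1.0 else 0.0

x, n = 0.5, 0.3
for label, curve in (("weak U", tc_pure_weak_u), ("strong U", tc_pure_strong_u)):
    print(f"{label}: pure Tc = {curve(n):.4f}, strong-disorder Tc = {tc_disordered(curve, n, x):.4f}")
```

With the curve peaked at low filling the mapping lowers $T_c$, while with the curve peaked at higher filling it raises it, mirroring the qualitative argument (i)-(iii) above.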
This would, however, increase the energy of the system by $\Delta$ per particle in the upper alloy subband. Therefore, in the $\Delta \gg U$ limit the paramagnetic ground state is energetically favorable. This explains why $T_c$ vanishes at $n = 0.7$, as found in our QMC simulations [Fig. 1(a)]. Our conclusion that $T_c$ vanishes for $n_{\rm eff} = n/x > 1$ when $\Delta \gg W$ is consistent with the observation in [17] that there is no ferromagnetism for $n > 1$ in the Hubbard model without disorder on the fcc-lattice in infinite dimensions.

The filling $n = x$ is very particular because a new MIT of the Mott-Hubbard type occurs. Namely, when $\Delta$ increases (at $U = 0$), the non-interacting band splits, leaving $2xN_a$ states in the lower and $2(1-x)N_a$ states in the upper alloy subband. Effectively, it means that at $n = x$ the lower alloy subband is half filled ($n_{\rm eff} = 1$). Consequently, a Mott-Hubbard MIT occurs in the lower alloy subband at sufficiently large interaction $U$ [26]. In fact, for $\Delta \gg U$ we may infer a critical value $U_c = 1.47 W^*$ at $T = 0$ from the results of Refs. [27,28], where $W^*$ is the renormalized bandwidth of the lower alloy subband. Furthermore, from the analogy of this MIT with that in the pure case [29] we can expect a discontinuous transition for $T \lesssim T^* \approx 0.02 W^*$, and a smooth crossover for $T \gtrsim T^*$. From the results shown in Fig. 4 it follows that $T^* < 0.071$, since for $T = 0.071$ and $U = 6$ a gap-like structure develops in the spectrum at $\Delta \approx 1.6$, implying a smooth but rapid crossover from a metallic to an insulator-like phase [30]. The MIT described above is not obscured by the onset of antiferromagnetic long-range order because in infinite dimensions the fcc-lattice is completely frustrated [20]. Hence the insulator is paramagnetic. The transition therefore occurs between a paramagnetic insulator (PI) at high $T$ and a ferromagnetic metal (FM) at low $T$, at least at large $U$, as shown in the inset of Fig. 4. The actual boundary between the paramagnetic metal (PM) and the paramagnetic insulator-like phase has not yet been determined. The thick line in the inset of Fig. 4 indicates the approximate position of the phase boundary between the PM and PI phases.

In summary, we showed within DMFT that the interplay between binary-alloy disorder and electronic correlation can result in unexpected effects, such as the enhancement of the transition temperature $T_c$ for itinerant ferromagnetism by disorder, and the occurrence of a Mott-Hubbard type MIT off half-filling. An observation of these effects requires good control of the system parameters over a wide range, as was recently shown to be possible in experiments with optical lattices [31].

We thank B. Velicky for valuable correspondence. KB is grateful to R. Bulla and K. Wysokiński for discussions, and to G. Keller for computer assistance. This work was supported by a Fellowship of the Alexander von Humboldt-Foundation (KB), and through SFB 484 of the Deutsche Forschungsgemeinschaft.
Morphology and I-V Characteristics of Electrochemically Deposited Zinc Oxide on Silicon

This paper reports on the electrochemical deposition of zinc oxide (ZnO) on (100) p-silicon (p-Si) substrates using electrolyte mixtures of different volume ratios. The deposition process was carried out at room temperature with a current density of 10 mA/cm2 for 30 minutes. Prior to the experiment, all samples were treated by RCA cleaning steps. Electrolyte mixtures of 0.1 M zinc chloride (ZnCl2) and potassium chloride (KCl) were used at volume ratios of 1:1, 3:1 and 5:1, denoted Samples A, B and C respectively. All samples were characterized using scanning electron microscopy (SEM) and energy-dispersive X-ray (EDX) spectroscopy. The results show that all samples have the same flake-like morphology, with Zn:O ratios between 2 and 3. The current-voltage (I-V) characteristics were obtained by dark current measurement using a Keithley SMU 2400, and the threshold voltage (Vth) values were determined as 2.21 V, 0.85 V and 1.22 V for samples A, B and C respectively, which correlates with the Zn:O ratio. The highest Zn:O ratio was found in sample A and the lowest in sample B. Based on these results, the electrochemical deposition technique is capable of depositing flake-like ZnO structures on a semiconductor material to form a p-n junction which behaves like a diode. The value of Vth appears to depend on the Zn:O ratio: a higher ratio of Zn to O leads to a higher intrinsic carrier concentration and built-in potential, which increases the Vth value.

INTRODUCTION

A semiconductor is a material with a conductivity level somewhere between that of an insulator and a conductor, such as silicon (Si) and germanium (Ge) [1]. These materials are unique in that they conduct electricity only once a certain voltage or current is reached. However, current flow depends on the junction created between two different layers built from p-doped and n-doped material [2], which relies on the combination and recombination of electrons and holes. The p-doped layer has holes as the majority carriers and the n-doped layer has electrons as the majority carriers. When these two layers are brought together, they form a p-n junction, and electrons from the n-doped layer can diffuse into the p-doped layer to combine with holes, thus conducting electricity. Usually, the junction is created by doping one side of the Si with boron to form a p-type layer and the other side with phosphorus to form an n-type layer [2]. Alternatively, the p-n junction can be formed by depositing a layer of p-type material on an n-type Si substrate, or vice versa [3]. Zinc oxide (ZnO) is an inorganic compound with a wide bandgap (3.4 eV) and a large exciton binding energy [4]. By nature, ZnO is an n-type material, which makes it suitable for deposition on a p-type substrate to form a p-n junction. Applications such as solar cells are expected to perform better in the presence of ZnO because of its wider bandgap compared to Si. Typically, a p-n junction created by doping Si with boron and phosphorus is inefficient at harvesting higher-energy photons, whose excess energy is converted into heat. This lowers the efficiency of the junction. Better performance with high-energy photons would be achieved when ZnO is deposited onto the Si substrate.
Various deposition processes have been reported for depositing ZnO on Si substrates, such as arc discharge [5,6], pulsed laser deposition (PLD) [7,8], pyrolysis [9,10] and electrochemical deposition [11][12][13]. Electrochemical deposition is a process in which a film of conducting material is deposited from an ion-containing solution onto an electrically conducting surface. The process is cost-effective, offers a high deposition rate, and is simple and quick. The deposition can be tuned by altering several parameters such as current density [14][15][16], temperature [17], electrolyte concentration [18], deposition time [19], pH [20] and agitation [19,21]. Electrochemical deposition is mostly used for depositing conducting material onto conducting substrates; only a few reports describe deposition onto a semiconductor substrate, owing to the thermodynamic and kinetic constraints of the deposition process involved [22]. Therefore, this work explored the feasibility of electrochemical deposition of ZnO on p-Si to better understand the mechanism involved and the characteristics of the deposited samples. Different electrolyte volume ratios were used to observe any changes in the structural properties of the deposited ZnO. In particular, initial studies of the morphology and current-voltage (I-V) characteristics of the ZnO/Si samples were carried out, as both are basic parameters that determine overall electrical performance and application. METHODOLOGY In this work, ZnO was deposited on a p-Si (100) substrate with a resistivity of 0.008-0.018 Ωcm by the electrochemical deposition method. A zinc (Zn) plate was used as the anode and the p-Si (100) substrate as the cathode. All samples were treated by RCA cleaning steps prior to the experiment. The details of the sample preparation and pre-treatment have been reported in [23]. Deposition of ZnO on the Si substrates was done at room temperature using different electrolyte compositions of zinc chloride (ZnCl2) and potassium chloride (KCl) with volume ratios of 1:1, 3:1 and 5:1, named sample A, B, and C respectively. The applied current density was set to 10 mA/cm2 for 30 minutes. The setup used in this study is shown in Figure 1. In this setup, the Zn plate used as the anode is expected to play an important role in the deposition process. According to the reaction shown in equation (1) below, oxidation occurs at the anode, where the Zn plate is oxidized to Zn2+ ions that dissolve into the electrolyte. These Zn2+ ions from the Zn plate replenish the Zn2+ concentration in the electrolyte as Zn2+ is consumed by deposition at the cathode surface. The Zn2+ ions then react with hydroxide ions in the electrolyte to form zinc hydroxide (Zn(OH)2), as in equation (2). Finally, Zn(OH)2 is converted into ZnO, with loss of water, and deposited onto the Si substrate at the cathode, as represented by equation (3). Zn → Zn2+ + 2e- (1) Zn2+ + 2OH- → Zn(OH)2 (2) Zn(OH)2 → ZnO + H2O (3) Next, the samples were characterized using scanning electron microscopy (SEM) (JEOL JEM-ARM 200F) with an energy-dispersive X-ray (EDX) spectroscopy module to observe their morphology and composition. The samples were also characterized using a Keithley SMU 2400 to determine the I-V characteristic of the junction created between ZnO and Si. Copper tape was used as the contact for both the back and the top surface of the substrates.
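As a rough consistency check on the deposition parameters above (10 mA/cm2 for 30 minutes), Faraday's law bounds how much ZnO the passed charge can produce. The short Python sketch below is illustrative only: it assumes 100% faradaic efficiency and a fully dense film, neither of which is claimed in the paper, and the porous flake-like deposits described later need not follow it closely.

# Idealized Faraday's-law estimate for the stated deposition conditions.
# Assumes 100% current efficiency and a fully dense ZnO film (illustrative
# assumptions, not claims of the paper).
F = 96485.0          # Faraday constant, C/mol
j = 10e-3            # current density, A/cm^2
t = 30 * 60          # deposition time, s
n = 2                # electrons transferred per Zn2+ ion
M_ZnO = 81.38        # molar mass of ZnO, g/mol
rho_ZnO = 5.61       # bulk density of ZnO, g/cm^3

charge = j * t                        # charge per area, C/cm^2
moles = charge / (n * F)              # mol of ZnO per cm^2
mass = moles * M_ZnO                  # g/cm^2
thickness_um = mass / rho_ZnO * 1e4   # equivalent dense-film thickness, um
print(f"Charge passed: {charge:.1f} C/cm^2")
print(f"Equivalent dense ZnO thickness: {thickness_um:.1f} um")

Under these idealized assumptions the estimate comes out on the order of 10 um of dense-equivalent ZnO; an open flake network, as observed by SEM, can of course present a larger apparent layer thickness.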
From the I-V characteristic, the threshold voltage (Vth) was determined to identify the operating condition at which a conducting path between the p-layer and the n-layer is created [24]. Figures 2, 3 and 4 show the SEM images and EDX spectra for samples A, B, and C. The SEM images indicate that flake-like ZnO was formed on the Si substrate in all samples, similar to that reported by [25]. Based on our previous work in [23], the thickness of the ZnO layer produced in these samples was estimated to be 20 to 50 µm. The ZnO flakes were around 1 to 3 µm long and randomly criss-crossed over the substrates. As reported by [26], ZnO nanoflakes may exhibit a strong photoluminescence peak, which points to their relevance for solar cell applications. In general, the growth of ZnO on the Si substrate follows the Volmer-Weber growth mechanism, also known as 3D island formation [22]. The process starts with nucleation on the Si surface. The nucleation sites then allow the formation of nuclei (islands) that evolve into grains. Further nuclei form on the existing islands and, finally, the stacked nuclei form a ZnO layer on the Si substrate. As the ZnCl2 volume ratio increased from sample A to sample C, the uniformity of the deposited ZnO distribution on the Si substrate surface increased. The EDX spectra also show that the amount of Si detected varied significantly: sample A had 50.5%, sample B had 28.7%, and sample C had 0.4%. This trend suggests that more of the Si surface was covered by Zn and O deposits in sample C than in sample A. Moreover, the amounts of Zn and O detected are inversely proportional to the amount of Si detected, as shown in Figure 5, which further confirms that more Zn and O were deposited on sample C. Therefore, a higher percentage of Zn and O can be expected to be deposited on the substrate when a higher ratio of ZnCl2 to KCl is used, most likely because of the larger amount of Zn2+ present in the electrolyte during the process. As reported by [27], the balanced stoichiometry of ZnO is Zn:O (1:1). Yet the EDX results of this study give Zn:O ratios of 2.81, 2.35 and 2.49 for samples A, B and C respectively. These ratios imply that a number of Zn atoms remain free and are not bonded to O atoms to fully form ZnO [15]. From the I-V characteristics shown in Figure 6, the Vth values were determined as 2.21 V, 0.85 V and 1.22 V for samples A, B and C respectively. Vth appears to be associated with the Zn:O ratio of the samples: sample B, which has the lowest Zn:O ratio, gave the lowest Vth reading, while the highest Vth was obtained for sample A, which had the highest Zn:O ratio in this work. As reported by [28], as the Zn:O ratio increases, the intrinsic carrier concentration also increases. Thus, the built-in potential increases, which causes the Vth value to increase as well, in line with the relation φ_bi = φ_n − φ_p = (kT/q) ln(N_A N_D / n_i^2), where φ_bi is the built-in potential, φ_n and φ_p are the n-side and p-side potentials, N_A and N_D are the acceptor and donor concentrations, k is the Boltzmann constant, T is the temperature, q is the electronic charge and n_i is the intrinsic carrier concentration. In this work, the ZnO/Si samples formed a heterojunction structure. The obtained reverse current was found to be considerably too high for solar cell application. Normally, the dark current in a p-n junction is governed by the reverse saturation current, which is typically very small because carriers must overcome the junction barrier.
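The paper does not state how Vth was read off the dark I-V curves in Figure 6; one common convention is to fit a straight line to the steep forward-bias region and extrapolate it to zero current. The sketch below illustrates that generic procedure on synthetic data; the function and the diode parameters are assumptions for illustration, not the authors' documented extraction method.

import numpy as np

def threshold_voltage(v, i, frac=0.1):
    # Fit a line to the steep forward-bias region (currents above `frac`
    # of the maximum) and return its extrapolated voltage-axis intercept.
    v, i = np.asarray(v, dtype=float), np.asarray(i, dtype=float)
    mask = i >= frac * i.max()
    slope, intercept = np.polyfit(v[mask], i[mask], 1)
    return -intercept / slope

# Synthetic diode-like forward curve (illustrative numbers, not measured data)
v = np.linspace(0.0, 2.5, 500)
i = 1e-9 * np.expm1(v / (2 * 0.02585))   # ideality factor 2, 300 K
print(f"Estimated V_th = {threshold_voltage(v, i):.2f} V")

The extracted value depends on the fitted range, so the same convention has to be applied to all samples for the comparison between samples A, B and C to be meaningful.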
It is speculated that this high current indicates a low junction barrier, so that electrons can move freely across the junction even under reverse-bias (reverse saturation current) conditions. More analysis, such as the ideality factor, reverse saturation current, parallel resistance and series resistance values, would be needed to further verify the properties of the junction created with a view to solar cell application. CONCLUSION This work has shown that a ZnO layer can be deposited on a semiconductor material (p-Si substrate), although the thermodynamics and kinetics of the process are reported to be quite complex. The SEM results show that flake-like ZnO is obtained for all samples, which has possible applications in solar cells and photoluminescent devices. Increasing the volume ratio of ZnCl2 in the electrolyte mixture increases the amount of Zn and O deposited on the Si substrate. The balanced stoichiometry of ZnO, Zn:O (1:1), might be achieved by further optimization work. The Vth value shows a relationship with the Zn:O ratio. A lower Vth value might be useful for low-power-consumption device applications.
2021-12-30T16:14:48.520Z
2021-12-27T00:00:00.000
{ "year": 2021, "sha1": "9522f92170bc69f0348de58d275d4dd70c46ac26", "oa_license": null, "oa_url": "https://elektrika.utm.my/index.php/ELEKTRIKA_Journal/article/download/307/210", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "27b9bb9be614968b95a81b30cdd9cfa40a3779e4", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
230717950
pes2o/s2orc
v3-fos-license
Characterization of focal liver lesions using sulphur hexafluoride (SF6) microbubble contrast-enhanced ultrasonography Focal hepatic lesions incidentally detected during ultrasound usually need a further step for proper characterization. The aim of this study was to highlight the efficacy of microbubble contrast-enhanced ultrasonography (CEUS) in the characterization of focal liver lesions. This prospective study was conducted on 60 patients presenting with hepatic focal lesions in the period from January 2019 to June 2020. CEUS studies were performed after a baseline conventional ultrasound with the same machine and by the same operator. The ultrasound contrast agent used was a second-generation US contrast agent. The enhancement patterns of the hepatic lesions were studied during the vascular phases up to 5 min, and the data were correlated with histopathology, triphasic contrast-enhanced CT, and clinical follow-up. CEUS demonstrated a sensitivity of 94.2%, specificity of 88.9%, positive predictive value of 91%, negative predictive value of 94.1%, and accuracy of 92.3% for the characterization of hepatic focal lesions, compared to a sensitivity of 100%, specificity of 81.8%, positive predictive value of 84%, negative predictive value of 100%, and accuracy of 90.7% for triphasic CT. CEUS is an effective tool in the characterization of HFLs and is recommended as a second diagnostic step after conventional ultrasound to immediately establish the diagnosis, especially in patients with contraindications to CECT. Background Focal liver lesions are usually detected incidentally during an abdominal ultrasound examination, during surveillance in chronic liver diseases and cirrhosis, or during first evaluation or follow-up for a primary neoplasm, but the accuracy of the final definitive diagnosis can be limited [1]. In fact, even though color Doppler imaging during an ultrasound study of the liver can improve diagnostic confidence in the characterization of focal liver lesions, it has important limitations: sensitivity and specificity are limited because benign and malignant lesions may show a similar appearance on B-mode and Doppler ultrasound [2]. Further examinations such as CT, MRI, or PET CT can be used for the characterization and post-therapeutic follow-up of HFLs; however, each examination has its own limitations, and consequently CEUS is considered an easier, safer, faster, and accurate alternative [3][4][5][6]. CEUS is a novel imaging technique using microbubble contrast agents that has been approved in many countries to improve the detection and characterization of focal liver lesions by providing real-time enhancement of lesions [7][8][9]. Although CEUS is subject to the same limitations as ordinary US and is inferior to CECT/CEMRI in some aspects, CEUS has proved to be of great value in the management of HCC, with inherent advantages such as a sufficiently high safety profile that makes it suitable for patients with renal failure or allergy to iodine, absence of radiation, easy reproducibility, and high temporal resolution [10]. The use of CEUS is recommended in official guidelines and suggested as a second diagnostic step after ultrasound detection of indeterminate focal liver lesions to immediately establish the diagnosis, especially for benign liver lesions such as hemangiomas, avoiding further and more expensive examinations [11]. The purpose of the study was to evaluate the efficacy of contrast-enhanced ultrasonography in the characterization of hepatic focal lesions.
Patients This was a prospective study conducted on 60 patients with hepatic focal lesions from January 2019 to June 2020, included 38 males (63.3%) and 22 female (36.7%) ranging in age between 32 and 68 years with mean age of 53.83 ± 10.664. Inclusion criteria include age between 18 and 70 years, incidental detection of hepatic focal lesion on sonography, patients with cirrhosis being evaluated for hepatocellular carcinoma or post-therapeutic follow-up and suspected liver metastasis. Exclusion criteria include age less than 18 years, pregnant or lactating women, acoustic window insufficient for adequate sonographic examination of the liver, critically ill or medically unstable patients, known allergy to any component of US contrast agent. The present study was approved by the institutional review board and all patients were informed about the study and provided written informed consents. Conventional B-mode and color Doppler scanning Ultrasonography was performed using a real-time machine (Hitachi, EUB-7500-Hitachi Medical Systems, Japan) with a 3.5 MHz convex array probe. Contrast-enhanced ultrasound (CEUS) CEUS studies were performed after a baseline ultrasound with the same machine used in conventional ultrasound. CEUS was performed by the same operator (has 15 years of experience) with a preinstalled contrastspecific sonographic imaging mode (a low frame rate (5 Hz) and a very low mechanical index (MI) < 0.08, were used for real-time imaging). The ultrasound contrast agent UCA used is 2nd generation US contrast agent SonoVue (Bracco, Italy). SonoVue is a kit including one vial [containing 25 mg of lyophilized powder the active substance is sulphur hexafluoride in the form of microbubbles, macrogol4000, distearoyl phosphatidyl choline, dipalmitoyl phosphatidyl glycerol sodium, and palmitic acid], one pre-filled syringe containing 5 ml sodium chloride 0.9% and one Mini-Spike transfer system. The microbubble dispersion is prepared before use by injecting 5 ml of sodium chloride 0.9% solution to the contents of the vial. The vial is then shaken for a few seconds until the lyophilisate is completely dissolved, and a homogeneous white milky liquid is obtained. CEUS studies were carried out after the administration of 2.4 ml of the SonoVue (for each lesion to be characterized) as a bolus via a 20 gauge peripheral intravenous cannula, followed by a 10 ml saline flush. The injection was repeated using the same dose (2.4 ml) or double dose (4.8 ml), up to a total dose of 9.6 ml for multilesion assessment if needed. All patients were monitored for adverse events, for 4 h after the procedure. The clinical status, blood pressure, and heart rate were followedup, yet no adverse events occurred in any patient. Image interpretation Complete assessment of the liver by conventional Bmode scanning was done with special focus on focal lesions assessing the number, size, site, the echogenicity of the focal lesion. Portal vein diameter and patency were assessed using color and power Doppler. Splenic size and texture as well as presence of ascites were assessed. Any abdominal masses or lymph node enlargement were also commented upon. By CEUS, the enhancement patterns of the hepatic lesions were studied during the vascular phases up to 5 min, including the arterial (10-45 s), portal (45-120 s), and late phases (120-300 s). All sonographic examinations were digitally recorded. 
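Since all examinations were recorded digitally, frames can be tagged by vascular phase directly from their time stamps using the windows defined above. The helper below simply encodes those windows (arterial 10-45 s, portal 45-120 s, late 120-300 s after the bolus); the function itself is illustrative and not part of the study's workflow.

def vascular_phase(seconds_after_bolus: float) -> str:
    # Map time after contrast injection to the vascular-phase windows used here.
    if 10 <= seconds_after_bolus < 45:
        return "arterial"
    if 45 <= seconds_after_bolus < 120:
        return "portal"
    if 120 <= seconds_after_bolus <= 300:
        return "late"
    return "outside the evaluated window"

print(vascular_phase(90))   # a frame recorded 90 s after injection -> "portal"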
A CEUS examination was considered conclusive if following contrast administration, the focal lesion had a typical enhancement pattern and no other diagnostic methods were required, while considered inconclusive if the enhancement pattern of the lesions was not typical and correlation with other diagnostic methods were performed (contrast CT or biopsy of the lesion). All the studied cases with metastasis, adenomas, and few cases of HCCs were diagnosed by biopsy. Diagnosis of rest of cases was done by correlation between patient history, available triphasic CT study, serum AFP particularly in patients with post-therapeutic recurrence, and malignant PVT as well as patient follow up. Statistical analysis Results are expressed as mean ± standard deviation or number (%). Comparison between categorical data was performed using chi-square test. Standard diagnostic indices including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and diagnostic efficacy were calculated as described by (Galen, 1980). SPSS computer program (version 12 Windows) was used for data analysis. P value less than or equal to 0.05 was considered significant. Results Of total 60 patients included in the study, 30 patients had HCCs (22 males and 8 females). Among them, 18 patients underwent therapeutic intervention (8 patients underwent RFA, 8 patients underwent TACE, and 2 patients underwent ethanol injection). A total of 12 patients were diagnosed with liver metastases, with one lesion was studied in each patient. Triphasic CT abdomen and CEUS were done to all patients. The mean age of patients with liver metastasis was 50.33 ± 12.61 years. Six patients (50%) were females, and 6 patients (50%) were males. The pattern of CT enhancement of the metastatic lesions (biopsy proven) was as follows: 6 hypervascular metastases (50%), which were metastatic renal cancer, melanoma, and choriocarcinoma, 4 hypovascular metastases (33.3%) from rectal and ovarian cancers, and 2 post-therapeutic metastases (16.7%) from pathologically proven neuroendocrine were non-enhanced. CEUS showed typical enhancement pattern in the 12 (100%) lesions. There was no statistically significant difference (p value > 0.05) between CEUS and CT in the characterization of hepatic metastases (Table 2). Eighteen patients (8 females and 10 males) were diagnosed with 22 benign hepatic focal lesions by triphasic CT, and their mean age was 44 ± 8.047 years. Twelve lesions (54.5%) were located in the right lobe and 10 phase, 6 lesions (30%) showed homogenous enhancement, 4 lesions (20%) showed incomplete enhancement with non-enhancing cores, 6 lesions (30%) were isoenhanced, and 4 lesions (20%) were non-enhanced. No statistically significant difference was found in the findings between CEUS and triphasic CT (p value > 0.05). Regarding the enhancement pattern of triphasic CT scan in the diagnosis of benign hepatic focal lesions, CT showed typical enhancement pattern in all 22 (100%) lesions [16 hemangiomas, 2 adenomas, 2 regeneration nodules, and 2 hydatid cysts]. CEUS showed typical enhancement pattern in 18 lesions [12 lesions were hemangiomas, 2 regeneration nodules, 2 adenomas (Fig. 5) and 2 hydatid cysts (Fig. 6)]. CEUS showed atypical enhancement pattern in 2 lesions which were hemangiomas. There was no statistically significant difference (p value > 0.05) between CEUS and triphasic CT in the characterization of benign hepatic focal lesions (Table 3). 
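The sensitivity, specificity, PPV, NPV and accuracy figures quoted in the next paragraph follow the usual 2x2 contingency-table definitions referenced above (Galen, 1980). The sketch below spells out those formulas; the counts are hypothetical placeholders, not the study's raw data.

def diagnostic_indices(tp, fp, tn, fn):
    # Standard diagnostic indices from a 2x2 contingency table
    # (TP/FP/TN/FN = true/false positives and negatives).
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for illustration only (not the reported patient data)
for name, value in diagnostic_indices(tp=49, fp=5, tn=40, fn=3).items():
    print(f"{name}: {value:.1%}")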
Triphasic CT demonstrated a sensitivity of 100%, specificity of 81.8%, positive predictive value of 84%, negative predictive value of 100%, and accuracy of 90.7%, while CEUS demonstrated a sensitivity of 94.2%, specificity of 88.9%, positive predictive value of 91%, negative predictive value of 94.1%, and accuracy of 92.3% for the characterization of hepatic focal lesions. Discussion CEUS has improved the characterization of focal liver lesions, showing results comparable to those of CT and MRI, and when performed by experienced operators it significantly improves overall diagnostic accuracy by more than 30% compared with unenhanced ultrasound [12]. This study was conducted on 60 patients with 70 hepatic focal lesions. CEUS and triphasic CT were performed in all patients. No adverse events occurred after the administration of SonoVue. Of the 70 focal lesions assessed, CEUS missed 4 lesions (2 HCCs and 2 hemangiomas), due either to very small size, to deep location, or to lesions seated within the hepatic dome hindered by the costal margin and not easily accessible. Thus, in this study investigating the role of CEUS in distinguishing malignant from benign hepatic focal lesions, the sensitivity, specificity, PPV, NPV, and accuracy of CEUS in the differentiation between benign and malignant hepatic focal lesions were 94.2%, 88.9%, 91%, 94.1%, and 92.3%, respectively, and for triphasic CT were 100%, 81.8%, 84%, 100%, and 90.7%, respectively. There was no statistically significant difference between CEUS and triphasic CT. The two most important multi-center studies regarding CEUS application for the characterization of focal liver lesions, the German Society of Sonography (DEGUM) multi-center study and a French study, showed good value for CEUS in focal liver lesion characterization [13,14]. The DEGUM study included 1349 patients with focal liver lesions on ultrasound. A total of 1328 focal liver lesions (755 malignant and 573 benign) were assessed. The reference standard diagnosis was made by means of liver biopsy in 75% of cases and by contrast-enhanced CT or contrast-enhanced MRI in the other cases. The accuracy of CEUS for the diagnosis of focal liver lesions was 90.3%. CEUS showed 95.8% sensitivity and 83.1% specificity, with a 95.4% positive predictive value and a 95.9% negative predictive value for differentiating benign versus malignant lesions. The French study assessed the clinical value of CEUS using SonoVue for the characterization of focal liver lesions discovered in patients with a cancer history or in those with chronic liver disease. The study included 1034 focal liver lesions undiagnosed on ultrasound alone. The reference standard methods were contrast-enhanced CT, contrast-enhanced MRI, or liver biopsy, and CEUS had 79.4% sensitivity and 88.1% specificity in differentiating benign versus malignant focal liver lesions. These findings are also close to those of the study by Sporea and Sirli, which included 573 benign lesions and 755 malignant lesions and investigated whether CEUS is ready for use in daily practice for the evaluation of focal liver lesions. The overall accuracy of CEUS for the diagnosis of HFLs was 90.3%. CEUS had 95.8% sensitivity and 83.1% specificity, with a 95.4% positive predictive value (PPV) and a 95.9% negative predictive value (NPV) for differentiating benign versus malignant lesions [15]. Another study by Trillaud et al.
for the characterization of focal liver lesions with SonoVue-enhanced sonography in comparison to CT, in which 68 focal liver lesions were benign and 55 were malignant, showed sensitivity, specificity and accuracy of 95.5%, 75.0%, and 90.0% for CEUS and 72.7%, 37.5%, and 63.3% for CT. In that comparison, CT was significantly less sensitive (p < 0.0001), less specific (p < 0.029), and less accurate (p < 0.0001) than SonoVue-enhanced ultrasound, unlike our study [16]. Although our results using SonoVue-enhanced ultrasound were close to those of that study regarding sensitivity, specificity and accuracy, we did not find any statistically significant difference between the different imaging modalities (p < 0.452). In this study, 8 patients with HCC had associated portal vein thrombosis, of which 2 were malignant thrombi. CEUS and triphasic CT detected and correctly characterized 8/8 thrombi (100%), with no statistically significant difference between them. These findings are in disagreement with a study conducted by Rossi et al., who compared CEUS and triphasic CT in the detection and characterization of PVT complicating HCC in 50 patients, in which 44 thrombi were pathologically diagnosed as malignant and 6 were benign. CEUS detected 50/50 (100%) thrombi and correctly characterized 49/50 (98%), while CT detected 34/50 (68%) thrombi and correctly characterized 23/34 (68%). Thus, CEUS outperformed triphasic CT in terms of both thrombus detection and characterization in that study [19]. Another study by Sorrentino et al. investigated CEUS versus biopsy for the differential diagnosis of PVT in 108 HCC patients, 58 patients (53.7%) with malignant PVT and 50 (46.3%) with benign PVT. Sensitivity, specificity, positive and negative predictive values of biopsy and CEUS were the same for both: 89.6%, 100%, 100% and 89.2%, respectively [20]. Regarding the role of CEUS in the assessment of HCC after therapeutic intervention, this study showed that CEUS correctly identified 8 (40%) ablated HCC and 8 (40%) incompletely ablated HCC but misdiagnosed 4 (20%) incompletely ablated HCC as ablated. These findings are in agreement with a multi-center study [22]. (Fig. 5 caption: A 32-year-old female with an accidentally discovered focal lesion. Triphasic CT revealed a homogeneously enhancing focal lesion in the arterial and portal phases that became isodense to the hepatic parenchyma in the delayed phase. a NEUS revealed an average-sized liver with a segment IV hypoechoic focal lesion measuring about 2.3 cm. b-d Real-time CEUS examination of the focal lesion showed early homogeneous contrast uptake that became iso- to slightly hypoechoic in the delayed phase; histopathologically proven to be a hepatic adenoma.) In this study, no adverse events occurred after SonoVue administration to any of the patients. The safety profile of SonoVue in this study is in agreement with Sporea et al., who used SonoVue CEUS to evaluate hepatic focal lesions in 294 patients and reported no adverse events in any of their patients [23]. The principal limitation of this study was the limited lesion number, mainly due to the exclusion of patients with a suboptimal US scan because of body habitus or intervening bowel gas, which should be considered a major limitation in the applicability of the technique. In the study by Cantisani et al., CEUS still presents the same important drawbacks as every US examination, including operator dependency, obese patients, and non-compliant subjects.
For these reasons, if the B-mode US is unsatisfactory, the subsequent CEUS examination will be suboptimal. A specific limitation of CEUS in studying the liver is that limited spatial resolution and, as such, very small lesions may be missed. The US study of the subdiaphragmatic liver by subcostal scanning is sometimes inadequate, especially in patients with a high lying diaphragm [24]. Also, SonoVue role in percutaneous ablation is limited because of its short-lasting enhancement effect and thus, a new second-generation sonographic contrast agent, Sonazoid, with post-vascular phase is more useful as a contrast agent during thermal ablation of HCCs [25]. Sonazoid allows real-time vascular imaging, stable Kupffer phase imaging lasting up to 60 min (which is not possible with SonoVue), its use is tolerable for multiple scanning and enables the detection of Bmode ill-defined nodules, facilitating correct staging of HCC before treatment [26]. Conclusion CEUS is an effective tool in characterization of hepatic focal lesions and recommended as a second diagnostic step after conventional ultrasound to immediately establish the diagnosis especially for benign lesions avoiding further and more expensive examinations and also should be a valuable alternative when a contrast study is needed and CT and MRI contrast agents are contraindicated, as in patients with renal failure and patients with known allergic reaction to CT/MRI contrast agents. Also, CEUS could be used in patients who need shortterm interval repeated regular follow-up by triphasic liver CT in whom, we can do CEUS and CECT alternately and consequently reduce the patient exposure to ionizing radiation and to iodine-based contrast agents.
2021-01-05T15:55:44.900Z
2021-01-05T00:00:00.000
{ "year": 2021, "sha1": "02597dceb7a22557c34503b676c81b878eaf4dcb", "oa_license": "CCBY", "oa_url": "https://ejrnm.springeropen.com/track/pdf/10.1186/s43055-020-00374-0", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "02597dceb7a22557c34503b676c81b878eaf4dcb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
219202608
pes2o/s2orc
v3-fos-license
Pancreatic β-Cell-specific Repression of Insulin Gene Transcription by CCAAT/Enhancer-binding Protein β Chronic exposure of β-cells to supraphysiologic glucose concentrations results in decreased insulin gene transcription. Here we identify the basic leucine zipper transcription factor, CCAAT/enhancer-binding protein β (C/EBPβ), as a repressor of insulin gene transcription in conditions of supraphysiological glucose levels. C/EBPβ is expressed in primary rat islets. Moreover, after exposure to high glucose concentrations the β-cell lines HIT-T15 and INS-1 express increased levels of C/EBPβ. The rat insulin I gene promoter contains a consensus binding motif for C/EBPβ (CEB box) that binds C/EBPβ. In non-β-cells C/EBPβ stimulates the activity of the rat insulin I gene promoter through the CEB box. Paradoxically, in β-cells C/EBPβ inhibits transcription, directed by the promoter of the rat insulin I gene by direct protein-protein interaction with a heptad leucine repeat sequence within activation domain 2 of the basic helix-loop-helix transcription factor E47. This interaction leads to the inhibition of both dimerization and DNA binding of E47 to the E-elements of the insulin promoter, thereby reducing functionally the transactivation potential of E47 on insulin gene transcription. We suggest that the induction of C/EBPβ in pancreatic β-cells by chronically elevated glucose levels may contribute to the impaired insulin secretion in severe type II diabetes mellitus. Insulin is a hormone essential for the control of mammalian glucose homeostasis and is produced predominantly in pancreatic ␤-cells of adult animals (1). The expression of the insulin gene occurs to a large extent at the level of transcription. Control elements residing in the 5Ј 350-base pair sequence flanking exon 1 of the rat insulin I gene are sufficient to direct ␤-cell-specific expression (2) (Fig. 1A). Arrays of A and E elements 1 (Far-FLAT, Nir-P1) constitute symmetrical enhansons that cooperatively account for Ͼ90% of the transcriptional activity of the insulin gene promoter (3). The E elements are recognition motifs for transcription factors in the basic helixloop-helix (bHLH) 2 family, such as E12 and E47, which activate the insulin promoter in close synergism with A element binding homeobox transcription factors, such as IDX-1. Chronic hyperglycemia may contribute to the pancreatic ␤cell dysfunction observed in patients with type II diabetes, a phenomenon attributed to the concept of glucose toxicity (4). Studies using in vivo animal models and in vitro ␤-cell lines have demonstrated that a reduction of insulin gene transcription by glucose toxicity is associated with the loss of transactivator proteins such as IDX-1/IPF-1/STF-1 and RIPE3b1-binding protein (5)(6)(7)(8)(9)(10). Because insulin gene transcription is both positively and negatively regulated, we sought to identify repressors that might also mediate the effects of glucose toxicity on insulin gene transcription. In this report we describe CCAAT/enhancer binding protein-␤ (C/EBP␤) as a glucoseinduced repressor of insulin gene transcription. C/EBPs are a family of transcription factors that regulate genes of the acute phase response, cell growth, differentiation, and the expression of cell type-specific genes (11)(12)(13)(14)(15)(16). The C/EBPs consist of the activators C/EBP ␣, ␤, ␥, ␦, and ⑀ and the repressors CHOP, LIP, and C/EBP-30; the latter two repressors arise by alternative downstream translation of the mRNAs (17). 
The C/EBPs bind to DNA exclusively as dimers and contain a conserved C-terminal basic region-leucine zipper domain that is characterized by a DNA-contacting basic region linked to a leucine zipper dimerization motif (18). They bind preferentially to a consensus DNA sequence T(T/G)NNGNAA(T/G) (19,20). The founding member of the family of C/EBP transcription factors, C/EBP␣, is expressed during terminal differentiation of cells such as adipocytes (13) and keratinocytes. C/EBP␤ is abundant in liver, is expressed in response to stressactivated signaling pathways, and activates the expression of genes involved in the acute phase response such as cytokine genes. It has been shown that the expression of C/EBP␤ transactivates the transcription of genes encoding the insulin receptor and glucose transporter-2 (21,22), suggesting that C/EBP␤ may play an important role in glucose homeostasis and the metabolic stress associated with diabetes mellitus. The promoters of both of the rat insulin I and II gene, as well as the human insulin gene, contain sequence elements that closely resemble the consensus C/EBP-binding site (see Fig. 1). The sequence similarities among the elements imply that the C/EBP family of DNA-binding proteins may regulate the expression of the insulin gene. In addition, the activity of the insulin gene promoter is regulated by glucose and hormones, which elevate ␤-cell [Ca 2ϩ ] and cAMP levels and possibly protein kinase C activity (2,23). C/EBP␤ may mediate the effects of multiple second messengers on insulin gene expression, since its activity can be influenced by Ca 2ϩ , cAMP, and protein kinase C signaling pathways (24 -26). In the present study, we find that C/EBP␤ is expressed in pancreatic ␤-cells and is up-regulated by supraphysiologic glucose concentrations in the culture media of pancreatic ␤-cell lines. C/EBP␤ inhibits insulin promoter activity in ␤-cell lines, but not in the non-␤-cell HeLa and BHK cell lines. In pancreatic ␤-cells C/EBP␤ specifically interacts with a heptad leucine repeat sequence within activation domain 2 (AD2) of the basic helix-loop-helix transcription factor E47, thereby inhibiting the DNA binding activity and the transactivation potential of E47. EXPERIMENTAL PROCEDURES Reagents-DNA-modifying enzymes were purchased from New England Biolabs (Beverly, MA) or Boehringer Mannheim; radioactive compounds were from NEN Life Science Products; D-luciferin-potassium was from the Analytical Luminescence Laboratory (San Diego, CA); RPMI 1640 and DMEM medium and fetal bovine serum (FBS) were purchased from Life Technologies, Inc. Nucleotides were obtained from Pharmacia Biotech Inc. All other reagents were purchased from Sigma. Cell Culture-The pancreatic ␤-cell line HIT-T15 (27) at passage 64 and COS-7 cells were purchased from the ATCC. Ins-1 (28) cells at passage 99 were a gift from Dr. Claes B. Wollheim (University of Geneva, Switzerland). HIT-T15 cells were maintained in RPMI 1640 medium (Life Technologies, Inc.) with 10% FBS at 37°C in a 5% CO 2 , 95% air atmosphere as described (7). Ins-1 cells were grown in RPMI 1640 medium with 10% FBS, 50 M ␤-mercaptoethanol, 1 mM sodium pyruvate, and 10 mM HEPES as reported (28). Both cell lines were passaged weekly. For the model of long term exposure, HIT cells were cultured from passage 64 to passage 82 in 11.1 mM D-glucose or 0.8 mM D-glucose with adjustment of osmolality by the addition of mannitol to the low glucose medium. 
Glucose concentrations for HIT-T15 cells were chosen according to the left-shifted insulin response curve as previously reported (29). Ins-1 cells were grown in 25 mM or 5.6 mM D-glucose, respectively, with mannitol adjustment as reported previously (30). The ␤TC-6 cell line (31) was a gift from Dr. Shimon Efrat (Albert Einstein University College of Medicine, New York). The ␤TC-6 cells were cultured in Dulbecco's modified Eagle's medium (25 mM glucose) supplemented with 10% FBS. Passages from 23 to 33 were used for transfection experiments. Islet Isolation-Male Sprague-Dawley rats (150 -200 g) were anesthetized with 100 mg/kg intraperitoneal pentobarbital sodium. Islets were isolated from the pancreata using an adaptation for rat islets of the method of Gotoh et al. (32). Briefly, after cannulation of the common bile duct and instillation of 10 ml of a prewarmed (37°C) solution containing 1 mg/ml Collagenase P and 0.5 mg/ml DNase I, the pancreas was removed and digested for 30 min at 37°C in a shaking water bath followed by dilution and washing of the digest and hand picking of the released islets under a dissecting microscope. Liver nuclei were prepared by the method of Gorski et al. (33). Antisera and Western Immunoblot-Polyclonal rabbit antisera for C/EBP␤ and E47 were obtained from Santa Cruz Biotechnology (Santa Cruz, CA). The IDX-1 antiserum was described previously (34). Western immunoblot analysis was performed on nuclear extracts prepared from the cell lines according to standard techniques (35). Extracts of pancreatic islet whole cells and liver nuclei were prepared by lysing isolated rat islets and liver nuclei in SDS-PAGE sample buffer (36). In each lane, a sample containing 100 g of protein was loaded. Purification of Proteins Expressed in Bacteria-A C/EBP␤ protein fragment containing the basic region-leucine zipper binding domain that starts at an internal methionine site (12) was made by insertion into the BamHI-XhoI site of the pRSET-A vector (Invitrogen Inc., Carlsbad, CA). The protein was transcribed and translated and was purified with a nickel-chelating resin column and eluted by a pH gradient. DNase I Footprinting-C/EBP␤ isolated from bacteria was incubated with a fragment of the 5Ј-flanking region of the rat insulin I gene (nucleotides Ϫ280 to ϩ1). The fragment was end-labeled on the coding strand by the polymerase chain reaction. The DNase I footprinting analysis was carried out as described previously (38). Plasmids, DNA Transfection, and Luciferase Assays-The plasmid Ϫ410INS-LUC contains a fragment of the rat insulin I gene promoter from base pairs Ϫ410 to ϩ49 cloned into the pXP2 vector containing the coding sequence of firefly luciferase cDNA. The luciferase-reporter plasmid DNA containing the CEB mutation within the rat insulin I gene promoter was created by mutation of the CEB box from Ϫ128 TG-TAAT Ϫ133 to Ϫ128 CTCGGC Ϫ133 using oligonucleotide-directed mutagenesis (39). The luciferase plasmid containing 190 bp of the promoter of IDX-1 (Ϫ190IDX-LUC) was a gift from Dr. Marc Montminy (The Salk Institute, La Jolla, CA) and has been described previously (40). The luciferase reporter construct containing the proximal 180 bp of the promoter sequence of the human islet sulfonylurea receptor (Ϫ180SUR-LUC) 3 was a gift from Dr. Jorge Ferrer (Harvard Medical School, Boston, MA). All insulin promoter deletions were generated by polymerase chain reaction mutagenesis and subsequently sequenced. 
For expression of C/EBP␤ in COS-7 cells and in vitro transcription/translation, the plasmid C/EBP␤-pcDNA I was used (41). For bHLH factors, the plasmid shPanI.pBAT14 (42) (hamster homolog of E47; a gift from Dr. M. German, University of California, San Francisco, CA) was used for expression in COS cells. Pan I (rat homolog of E47) was produced by in vitro transcription/translation from the plasmid pARP5/P2 (43) (gift from C. Nelson, University of California, San Francisco). To produce the leucine zipper minus mutant form of Pan I, two point-mutations were introduced into the plasmid pARP5/P2 by polymerase chain reactionbased site-directed mutagenesis (QuikChange® site-directed mutagenesis kit, Stratagene, La Jolla, CA) to replace the second and the third leucines within the heptad leucine repeat of the AD2 of Pan I/E47 by phenylalanines. The mutated sequences were confirmed by sequencing using the dideoxy chain termination method (44) with Sequenase version 2.0 (U.S. Biochemical Corp.). The plasmid for bacterial expression of the GST-C/EBP␤-fusion protein (pGEX-KG) has been described (41). At 50% confluence (10-cm culture dishes), HIT-T15 or ␤TC-6 cells were transfected with 3 g of the rat insulin I promoter-luciferase plasmid DNA using the DEAE-dextran method in cell suspension (45). HeLa cells were transfected using CaPO 4 (46) with 3 g of the Ϫ410INS-LUC (or deletion constructs) and 0.5 g of C/EBP␤-pcDNA I expression plasmid. After a 48-h incubation, the cells were harvested, and the luciferase activity was determined as described previously (47). The plasmids containing fusions between the GAL4 DNA-binding domain and either AD1 or AD2 of E47 (48) were a gift from Dr. Roland Stein (Vanderbilt University, Nashville, TN). At 50% confluence (10-cm culture dishes), HIT-T15 cells were cotransfected with 10 g of a luciferase reporter-construct containing a multimerized Gal4-binding site ((GBS) 3 -p59RLG) and one of each of the GAL4 constructs with a C/EBP␤ expression plasmid (C/EBP␤-pcDNA I) or empty vector (pcDNA I) using the DEAE-dextran method in cell suspension (45). Rous sarcoma virus-CAT was used as an internal control for monitoring of transfection efficiency. After a 48-h incubation, the cells were harvested, and the luciferase activity was determined as described previously (47). Values were expressed as means Ϯ S.E. of three independent experiments. Transfection of COS-7 cells for protein expression was performed by liposomal transfer (Lipofectin®, Life Technologies, Inc.) according to the manufacturer's manual. In Vitro Transcription/Translation-Recombinant proteins were produced by in vitro transcription/translation with the TNT-coupled reticulocyte lysate system (Promega, Madison, WI) according to the manufacturer's instructions using T7 polymerase for all plasmids transcribed. Each translation reaction was performed in duplicate with and without inclusion of [ 35 S]methionine, and protein identity was confirmed by autoradiography of products separated by SDS-polyacrylamide electrophoresis or Western immunoblot analysis. In Vivo Labeling, Immunoprecipitation, and GST Pull Down Assays-Proteins from transfected COS-7 cells were labeled in vivo by incubation in L-methionine/L-cysteine-free Dulbecco's modified Eagle's medium (Life Technologies) containing 200 Ci/ml [ 35 S]methionine/ [ 35 S]cysteine (NEN Life Science Products) and 10% dialyzed FBS for 4 h. Nuclear extracts from COS-7 cells and from the ␤-cell lines HIT-T15 and Ins-1 were prepared as described previously (35). 
For immunoprecipitation, nuclear extracts and in vitro translated protein solutions were adjusted to 200 mM NaCl, 0.1% Nonidet P-40, 50 mM HEPES, 1 mM phenylmethylsulfonyl fluoride, 5 mM EDTA, 0.5 mM dithiothreitol and precleared with protein A-Sepharose (Pharmacia Biotech, Uppsala, Sweden). After the addition of the respective antisera for C/EBP␤ and E47 (Santa Cruz Biotechnology) and incubation at 4°C for 15 h, the immune complexes were precipitated with protein A-Sepharose, washed, and subjected to SDS-PAGE followed by autoradiography or Western immunoblot analysis. For GST pull down analysis, the GST fusion proteins of C/EBP␤ were prepared as described by Ron and Habener (41), except the proteins were not eluted from the glutathione-Sepharose. Interacting proteins were precipitated with the glutathione-Sepharose-coupled fusion proteins in the same buffer used for immunoprecipitation and analyzed by SDS-PAGE autoradiography and Western immunoblot as described. Statistics-All values were expressed as means Ϯ S.E. Statistical analysis was performed via Student's t test for paired and unpaired values (49). Potential C/EBP␤-binding Sites in the Promoters of the Rat and Human Insulin Genes-Inspection of the promoters of the rat insulin I, rat insulin II, and human insulin genes reveals nucleotide sequence elements that resemble the consensus motif that binds the C/EBP family of transcription factors (Fig. 1, A and B). Expression of C/EBP␤ in Rat Islets and Cultured ␤-Cell Lines and Regulation by High Glucose Levels-Isolated rat pancreatic islets and several ␤and non-␤-cell lines were assayed for C/EBP␤ expression by Western immunoblot using liver nuclei extracts as a positive control, since C/EBP␤ was originally defined as the liver-enriched activator protein, LAP (14). The 32-kDa C/EBP␤ protein was detected in isolated rat islet whole cell extracts ( Fig. 2A), although the antiserum also recognized a more abundant protein with an apparent molecular mass of 42 kDa. C/EBP␤ was also detected in the nuclear extracts from Ins-1, ␤TC-6, and HIT-T15 cells, which are islet ␤-cell lines derived from rat, mouse, and hamster, respectively. To determine whether the expression of C/EBP␤ is altered in ␤-cells during chronic or short term exposure to supraphysiologic concentrations of glucose, we used standard in vitro glucose desensitization models (6,30). HIT-T15 cells were serially passaged in RPMI 1640 medium containing 11.1 mM glucose or 0.8 mM glucose for 16 weeks. Since the EC 50 for glucose-stimulated insulin secretion is left-shifted to 1 mM in HIT-T15 cells rather than about 8 mM in normal islets (50), 11.1 mM glucose was chosen as a supraphysiological concentration, and 0.8 mM was considered a physiological concentration of glucose for the HIT-T15 cells. As shown in Fig. 2B, the expression of IDX-1 decreased from week 4 to 16, confirming the published observations (9). In contrast, the expression of C/EBP␤ was markedly enhanced from week 8 to 16. These observations indicate that the level of C/EBP␤ in HIT-T15 cells is up-regulated by prolonged exposure to high glucose concentrations and that C/EBP␤ might serve as a repressor of insulin gene transcription. The increased expression of C/EBP␤ in HIT-T15 cells after chronic exposure to high glucose was prevented by culturing HIT-T15 cells in the RPMI 1640 medium containing 0.8 mM glucose (Fig. 2B). 
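The Results open by noting that the rat and human insulin gene promoters contain elements resembling the C/EBP consensus T(T/G)NNGNAA(T/G). That kind of inspection can be automated with a simple single-strand motif scan like the sketch below; the regular expression restates the consensus given above, while the example sequence is invented for illustration and is not one of the promoter sequences analysed in this work.

import re

# C/EBP consensus binding motif T(T/G)NNGNAA(T/G) as a regular expression.
CEBP_CONSENSUS = re.compile(r"T[TG]..G.AA[TG]")

def scan_motif(sequence):
    # Return (position, matched site) pairs on the given strand only;
    # a full search would also scan the reverse complement.
    seq = sequence.upper()
    return [(m.start(), m.group()) for m in CEBP_CONSENSUS.finditer(seq)]

# Invented example sequence for illustration (not an insulin promoter)
print(scan_motif("ccgaTTAAGTAATgcctgacgtccTTCAGCAAGttag"))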
To validate the in vitro long term model and to show an effect of high glucose concentrations on the regulation of insulin secretion, we measured the insulin concentration in the culture medium in response to increasing glucose concentrations in HIT-T15 cells at different passages in 2-h static incubation intervals. Whereas HIT-T15 cells cultured in high glucose displayed a passage-dependent decrease in glucose-responsive insulin secretion (50 Ϯ 7.7% after 16 weeks in 11.1 mM glucose), no such decrease was seen in cells cultured in low glucose (data not shown). The findings of long term high glucose exposure on C/EBP␤ expression were confirmed using Ins-1 cells, which respond to glucose similarly to isolated islets. C/EBP␤ expression was enhanced in Ins-1 cells cultured at 25 mM but not 5.6 mM glucose, whereas IDX-1 levels were decreased in Ins-1 cells cultured at 25 but not at 5.6 mM glucose (data not shown). To more precisely test the regulation of C/EBP␤, we sought to examine the role of supraphysiologic glucose concentrations in a short term model, which has recently been used to examine IDX-1 expression in response to high glucose concentrations (30). Exposure of Ins-1 cells to 25 mM glucose for 72 h and then reversing the high glucose concentration back to normal resulted in a marked up-regulation of C/EBP␤ after 24 h. The up-regulation of C/EBP␤ was reversible by a subsequent 24-h period in 5.6 mM glucose (Fig. 2C). As a control, the expression of IDX-1 was examined to confirm the down-regulation of IDX-1 protein in this system described previously. We also measured the insulin content of the cells to validate the effect of high glucose on this parameter. A decrease in insulin content during the 72-h high glucose period was observed (from 98 Ϯ 4.3 milliunits/mg protein before, to 37.4 Ϯ 6.1 milliunits/mg protein after 72 h in 25 mM glucose), which was partially reversible by the subsequent culture period of 24 h in low glucose (59.4 Ϯ 6.2 milliunits/mg protein) (data not shown). Bacterially Expressed C/EBP␤ Binds to Insulin Promoter Sequences-Mapping of the C/EBP␤-binding sites within the rat insulin I promoter was carried out using DNase I protection footprinting assays on a 280-base pair fragment of the 5Јflanking region of the rat insulin I gene (nucleotides Ϫ280 to ϩ1) labeled on the coding strand. Incubation of this labeled fragment of DNA with bacterially expressed C/EBP␤ resulted in three regions of DNase I protection and additional hypersensitive sites, indicating that C/EBP␤ interacts with specific sequence regions in the promoter (Fig. 3A). Counting from the transcription initiation site (ϩ1), the first protected region from nucleotide Ϫ70 to Ϫ86 contains the previously characterized A1 (P1) element (between nucleotides Ϫ64 and Ϫ85). The second region, from nucleotide Ϫ107 to Ϫ121, corresponds to the E1 (Nir) box (between nucleotides Ϫ104 and Ϫ112). DNase I hypersensitive sites flank both the A1 (P1) and E1 (Nir) elements. The third region, from nucleotide Ϫ126 to Ϫ147, however, contains the CEB box and has not been described previously. Notably, the bacterially expressed C/EBP␤ also gave a hypersensitive digestion pattern at the boundaries of the A3/4 (FLAT) element (between nucleotides Ϫ207 and Ϫ227), suggesting that C/EBP␤ may distort the DNA in this region without completely protecting the FLAT element from DNase I digestion. 
These data confirm that C/EBP␤ binds to the CEB box and suggest that the recombinant protein also binds to the A1 (P1) element and the E1 (Nir) box and may interact with the A3/4 (FLAT) element. C/EBP␤ does not appear to bind to the rat insulin I gene cAMP-response element (CRE, Ϫ184 TGACGTCCAAT Ϫ174 ), although the rat insulin I gene promoter CRE contains a nearly canonical TGACGTCC core sequence for binding of C/EBPs and cAMP-response element-binding protein (CREB). To examine whether the bacterially expressed C/EBP␤ binds to the insulin promoter sequence element (CEB box), predicted to contain the consensus binding site (Fig. 1A), EMSAs with oligonucleotides corresponding to this region were performed (Fig. 3B). C/EBP␤ forms DNA-protein complexes with a 51-bp oligonucleotide probe comprising the CEB box of the rat insulin I promoter. Binding of C/EBP␤ to this element was also demonstrated by using the corresponding sequences of the rat insulin II and using the human insulin gene promoters as probes (data not shown). These data indicate that the bacterially expressed C/EBP␤ binds to the CEB boxes within the promoters of these three insulin genes as predicted by sequence comparison (Fig. 1B). To assess the relative affinity of C/EBP␤ interaction with the A1 (P1) element, the E1 (Nir) box, the CEB box, and the A3/4 (FLAT) element, EMSA was performed using the CEB probe in the presence or absence of 30-or 300-fold excesses of the unlabeled oligonucleotides containing the CEB box, mutated CEB box, P1 element, Nir box, or FLAT element. The oligonucleotides containing the Nir box, the P1 element, and the FLAT element competed with the CEB probe for binding to the bacterially expressed C/EBP␤, but with at least 10-fold less efficiency than the unlabeled oligonucleotide containing the CEB box (Fig. 3B). These data suggest that C/EBP␤ interacts with the P1, Nir, and FLAT elements with a relatively lower efficiency compared with the CEB box. Endogenously Expressed C/EBP␤ in HIT-T15 Cells Binds to the CEB Box-EMSAs were used to characterize the DNAbinding properties of endogenously expressed C/EBP␤ in HIT-T15 cells (Fig. 3C). The DNA probe was the CEB consensus oligonucleotide. Binding reactions were performed with or without the addition of C/EBP␤ antiserum or preimmune serum. One of the slowest migrating complexes for the CEB box probe (lane 2) was disrupted by the addition of C/EBP␤ antiserum (lane 4) but not by the preimmune serum (lane 3). Moreover, the C/EBP␤ antiserum resulted in the appearance of The binding specificities of the protein complex that binds to the probe containing the CEB box were examined by EMSA under conditions of competition with unlabeled oligonucleotides containing the wild type and mutated CEB box (Fig. 3C). The addition to the binding reaction of a 30-or 300-fold unlabeled wild type oligonucleotide containing the CEB box resulted in the abolishment of the C/EBP␤ complex (Fig. 3C, lanes 7 and 8). In contrast, the unlabeled oligonucleotide containing the mutated CEB box (from 5Ј-agcTGTAAT-3Ј to 5Ј-agcCTGCCG-3Ј) was a much weaker competitor for the binding to the labeled CEB box probe (Fig. 3C, lanes 10 and 11). These observations suggest that the endogenously expressed C/EBP␤ in HIT-T15 cells binds specifically to the CEB box. 
Transactivation of the Rat Insulin I Gene Promoter by C/EBP␤ in Non-␤-cells-Because HeLa and BHK-21 cells lack certain ␤-cell-specific transcription factors required for the ex-pression of the insulin gene, such as IDX-1 and BETA-2 (51, 52), they have been widely used to assess the transactivation of the insulin gene by recombinant proteins. The effect of C/EBP␤ on the rat insulin I gene promoter was examined by cotransfection experiments using HeLa and BHK-21 cells with a C/EBP␤ expression vector (pcDNA I) and a reporter construct containing portions of the rat insulin I gene 5Ј-flanking region in the plasmid pXP2 (Ϫ410INS-LUC). The Ϫ410INS-LUC consists of nucleotides Ϫ410 to ϩ49 of the rat insulin I gene linked to the gene encoding the firefly luciferase (LUC) and was activated 22-fold by C/EBP␤ in HeLa cells (Fig. 4A). Under the same conditions, C/EBP␣ also stimulated Ϫ410INS-LUC expression, but the effect was weaker than that of C/EBP␤ (4.2fold; data not shown). The empty expression vector for C/EBP␤, pcDNA I, had no significant effect on Ϫ410INS-LUC expression. In addition, C/EBP␤ did not affect the truncated thymidine kinase promoter of herpes simplex virus (Ϫ81 to ϩ52 bp, pTK81-LUC) that lacks a C/EBP␤-binding site. To examine whether the transactivation of Ϫ410INS-LUC by C/EBP␤ was mediated through interactions with the CEB box, nucleotide FIG. 3. Binding of recombinant and endogenous C/EBP␤ to insulin promoter elements. A, 32 P-labeled probe of the rat insulin I promoter was incubated with recombinant C/EBP␤ before DNase I digestion. Base numbers and control elements are indicated at the right. The arrows denote hypersensitive patterns. B, 32 P-labeled CEB box oligonucleotide was incubated with recombinant C/EBP␤. Unlabeled oligonucleotides comprising the rat insulin I promoter wild type CEB box, mutated CEB box, Nir box, P1 box, and FLAT element were used as competitors. C, 32 P-labeled CEB box oligonucleotide was incubated with nuclear extracts (Nu. Ex.) from HIT-T15 cells. C/EBP␤-specific antiserum (␣C/EBP␤) and preimmune and normal rabbit serum (NRS) were used in supershift experiments; unlabeled wild type or mutant CEB box probe was used as competitor. The asterisk denotes a major DNA-probe complex that probably consists of a C/EBP isoform other than C/EBP␤. substitution mutations were introduced into this site in the rat insulin I promoter from Ϫ125 agcTGTAAT Ϫ133 to Ϫ125 agcCT-GCCG Ϫ133 (CEB mutation). The same mutations of CEB that were introduced into the oligonucleotide probes of the gel shift assays depicted in Fig. 3B resulted in a marked decrease in the binding affinity of C/EBP␤. Mutation of the CEB box significantly reduced the transactivation by C/EBP␤ on the rat insulin I gene promoter (from 22-to 2-fold), suggesting that C/EBP␤ transactivates the rat insulin I gene promoter mainly through the newly identified CEB box in non-␤-cells (Fig. 4A). Other gene promoters containing the C/EBP␤-binding sites, such as an angiotensinogen gene promoter construct, containing four copies of its C/EBP␤-binding site (APRE-LUC), the rat glucagon gene promoter (Ϫ350Glu-LUC) (53) were also activated by C/EBP␤ in HeLa cells, with 190 Ϯ 11-fold (n ϭ 3) and 20 Ϯ 2-fold (n ϭ 8) stimulation, respectively (Fig. 4B). Moreover, the promoters of the transcription factor IDX-1 gene (Ϫ190IDX-LUC) and the sulfonylurea receptor gene (Ϫ180SUR-LUC) were also activated in non-␤-cells (Fig. 4B). 
Co-transfection of C/EBP␤ and Ϫ410INS-LUC into BHK21 cells (a baby hamster kidney cell line that was used to characterize the effects of the homeodomain proteins Lmx-1 and Cdx-3 on rat insulin I gene transcription (54)) confirmed the findings of the HeLa cell transfection experiments (data not shown). These data suggest that C/EBP␤ is a positive regulatory factor for the rat insulin I gene in non-␤-cells. Repression of the Rat Insulin I Gene Promoter by C/EBP␤ in ␤-Cells-To examine whether C/EBP␤ also transactivates the FIG. 4. Transactivation and repression potential of C/EBP␤ in non-␤-and ␤-cells. HeLa cells (A and B) or HIT-T15 cells (C, D, E, and F) were transiently cotransfected with C/EBP␤ expression plasmid (ϩC/EBP␤) or empty pcDNA I vector (Control) and several reporter-gene constructs as described under "Experimental Procedures." Wild type rat insulin I promoter (Ϫ410INS-LUC), CEB box mutated rat insulin I promoter (CEB-Mutation), truncated thymidine kinase promoter of herpes simplex virus (pTK81-LUC), multimerized C/EBP␤-binding sites of the angiotensinogen promoter (APRE-LUC), glucagon promoter (Ϫ350GLU-LUC), rat IDX-1 promoter (Ϫ190IDX-LUC), human ␤-cell sulfonylurea receptor promoter (Ϫ180SUR-LUC), rat insulin II promoter CAT reporter (Ϫ410INS-II-CAT), and 5Ј-deletion mutant luciferase reporter constructs of the rat insulin I promoter from Ϫ410 to Ϫ90 are shown. rat insulin I gene promoter in ␤-cells, the C/EBP␤ expression vector and Ϫ410INS-LUC were co-transfected into HIT-T15 cells. Surprisingly, C/EBP␤ markedly inhibited the rat insulin I gene promoter activity without affecting the 81 bp of the thymidine kinase promoter (pTK81-LUC) (Fig. 4C). In contrast, both the APRE-LUC and glucagon promoter (Ϫ350Glu-LUC) were activated by C/EBP␤ in HIT-T15 cells, with 23.8 Ϯ 2.1-and 2.3 Ϯ 0.1-fold stimulation, and the promoters of IDX-1 and SUR were stimulated 5.8 Ϯ 0.2 and 2.1 Ϯ 0.3-fold, respectively (Fig. 4D). These findings indicate that the C/EBP␤-mediated repression of the rat insulin I gene promoter activity in HIT-T15 cells is unique to the rat insulin I gene inasmuch as C/EBP␤ activates the glucagon, APRE, IDX-1, and SUR promoters in HIT-T15 cells. C/EBP␣ also inhibited the rat insulin I gene promoter activity in HIT-T15 cells (data not shown). The rat insulin II gene promoter (Ϫ410INS-II-CAT), which also contains a putative C/EBP␤-binding site (Fig. 1B), was also inhibited by C/EBP␤ (Fig. 4E). In contrast to the transactivation of the rat insulin I gene promoter in the non-␤ HeLa and BHK cells, mutation of the CEB box within the 410-bp rat insulin I gene promoter did not affect the inhibition of the mutated rat insulin I gene promoter activity by C/EBP␤ in HIT-T15 cells (Fig. 4C, CEB-Mutation). These observations suggest that C/EBP␤ inhibits the rat insulin I gene promoter through cis-elements or their corresponding transcription factors other than the CEB box and its binding proteins. Similar results were obtained from co-transfection experiments using ␤TC-6 and Ins-1 cells (data not shown). Localization of the DNA Sequences Important for C/EBP␤ Repression of Rat Insulin I Gene Promoter Activity in ␤-Cells- The DNA sequences within the rat insulin I 5Ј-flanking region essential for the C/EBP␤-mediated repression of promoter activity was investigated by introducing a series of 5Ј-deletions into a rat insulin I promoter-LUC fusion gene (Fig. 4F). 
Deletion of the DNA sequence between bp −410 and −282 did not affect the C/EBPβ-mediated inhibition of the rat insulin I gene promoter activity in HIT-T15 cells, indicating that this region is not important for the C/EBPβ action. Subsequent deletions suggest that there are two regions critical for the negative regulation of the rat insulin I gene promoter activity by C/EBPβ. The removal of the sequence between −282 and −190 that contains the E2 (Far) box and the A3/4 (FLAT) element resulted in a decreased basal promoter activity but also in a significantly decreased repression by C/EBPβ. The involvement of the sequence proximal to bp −120, which contains the E1 (Nir) box and the A1 (P1) element, in C/EBPβ-mediated repression of the rat insulin I gene promoter activity was also suggested, although not unequivocally established, because basal promoter activity dropped to background levels upon deletion of the E1 (Nir) element. More important, however, the deletion of the promoter region between −190 and −120, which contains the CEB box, did not completely abolish C/EBPβ-mediated inhibition of the rat insulin I gene promoter, suggesting that this element is not exclusively mediating the C/EBPβ effects on this promoter in pancreatic β-cells. However, deletion to −90, which eliminates the E1 (Nir) element, eliminates inhibition by C/EBPβ. The results of these experiments furthermore suggest that the rat insulin I gene promoter contains multiple negative regulatory DNA elements or their corresponding DNA-binding protein factors as targets for the C/EBPβ-mediated repression in β-cells. The co-transfection of the deletion constructs and the C/EBPβ expression vector into βTC-6 cells gave similar results (data not shown). Interaction of C/EBPβ with Basic Helix-Loop-Helix Transcription Factors-Because deletion or mutation of the CEB box from an insulin-promoter-reporter plasmid did not abolish repression of reporter gene activity by C/EBPβ in β-cells (but did abolish activation in non-β-cells), we reasoned that the molecular mechanism for the inhibition of insulin promoter activity in β-cells may be different from direct binding to the CEB box. One possibility is an inhibitory interaction of C/EBPβ with activating transcription factors of the insulin gene. One family of transcription factors that has been shown to transactivate the insulin gene promoter consists of bHLH proteins, such as E12/E47 and BETA-2. We examined by co-immunoprecipitation experiments whether C/EBPβ would interact with the bHLH factor E47. Indeed, endogenous E47 immunoreactivity, co-migrating with E47-protein expressed in COS cells (Fig. 5A, lane 2), was readily co-immunoprecipitated from nuclear extracts of the β-cell line Ins-1 using an antiserum directed against C/EBPβ (Fig. 5A, lane 4), providing evidence for a direct protein-protein interaction of these two transcription factors. This molecular interaction was further confirmed in a β-cell-independent cell system by co-immunoprecipitation of E47 with C/EBPβ antiserum from extracts of COS-7 cells, transfected with expression plasmids for C/EBPβ and E47 and labeled in vivo with [35S]methionine for visualization of the immune complexes by autoradiography (Fig. 5B, upper panel). No specific immune complexes were precipitated from cells transfected with empty vectors (Mock, lanes 1 and 2). After transfection with E47 expression plasmid alone, the expressed E47 protein could be immunoprecipitated with the E47 antiserum (lane 4), but not with the C/EBPβ antiserum (lane 3).
Only when both C/EBPβ and E47 were expressed in COS-7 cells was co-immunoprecipitation of C/EBPβ and E47 proteins accomplished by the C/EBPβ antiserum (lane 5). Of note, C/EBPβ could not be co-immunoprecipitated with the antiserum directed to E47 (Fig. 5B, lane 6, and Fig. 5A, lane 3), a finding we attribute to a possible masking effect of the antigenic epitope in E47 upon interaction with C/EBPβ. The specificity of the C/EBPβ antiserum is demonstrated by immunoprecipitating only C/EBPβ protein from extracts of COS-7 cells transfected with the C/EBPβ expression vector only (Fig. 5B, lanes 7 and 8). The identity of the bands in the autoradiography was confirmed by Western immunoblotting with specific antisera for C/EBPβ and E47 (Fig. 5B, lower panels). Interference of C/EBPβ with the Leucine Zipper Domain of E47-The bHLH proteins of the E2A family, E12 and E47, contain two structurally and functionally distinct transactivation domains, AD1 and AD2 (55). In contrast to AD1, which is equally active in all cells, the AD2 transcriptional activation domain functions preferentially in pancreatic β-cells. Analysis of the amino acid sequence of E47 reveals a heptad leucine repeat within AD2, which is conserved in the E47 proteins of different animal species. Therefore, we reasoned that an interaction of the leucine zipper transcription factor C/EBPβ with the leucine repeat within AD2 of E47 could underlie the demonstrated physical interaction of both proteins. To further characterize the molecular basis of this interaction, we introduced two point mutations into the leucine repeat domain of E47, changing the first two leucines to phenylalanines (Fig. 5C) and yielding the protein E47-LZ−. In vitro translated and labeled proteins of C/EBPβ, E47, and E47-LZ− were subjected to immunoprecipitation, and immune complexes were visualized by autoradiography after fractionation by SDS-PAGE (Fig. 5D, top). Aliquots of the reticulocyte lysates before immunoprecipitation were included as controls in the electrophoresis (Fig. 5D, lanes 1-3). In reactions containing only single proteins, both wild type and E47-LZ− proteins were immunoprecipitated by the E47 antiserum (lanes 4 and 5), and in vitro translated C/EBPβ protein was precipitated by C/EBPβ antiserum (lane 6). When mixed together, E47 antiserum again precipitated only wild type E47 (lane 7) or mutated E47 (lane 9) alone. Both wild type E47 and C/EBPβ, however, were co-immunoprecipitated by the C/EBPβ antiserum (lane 8). In contrast, only C/EBPβ alone was precipitated by the C/EBPβ antiserum when used with the mutated E47-LZ− (lane 10). These findings provide evidence for the notion that C/EBPβ may interact with E47 via the leucine repeat within AD2. This protein-protein interaction was also confirmed by an antiserum-independent assay, namely the GST pull-down procedure. Labeled and in vitro translated wild type E47 specifically bound to GST-C/EBPβ (Fig. 5E, lane 4). In contrast, the E47-LZ− mutant protein showed no interaction with the GST-C/EBPβ fusion protein (lane 5). GST-C/EBPβ was able to precipitate both wild type E47 and C/EBPβ (lane 7) but not mutant E47-LZ− together with C/EBPβ (lane 8). C/EBPβ was included in the reactions in lanes 6-8 to ensure that the GST-C/EBPβ was functional (that it dimerized with C/EBPβ).
Inhibition of E47 Binding Activity by Interaction with C/EBPβ-To determine whether interactions of C/EBPβ with E47 affect the binding activity of the E47 homodimer to the E-box DNA response elements within the insulin promoter, we tested the binding of in vitro translated proteins in electrophoretic mobility shift assays on the E1 elements Nir and RIPE3 of the rat insulin I and II promoters, respectively (Fig. 6, A and B). On both elements, the specific complex containing the E47 homodimer (lanes 1), as determined by competition with an excess of cold probe (lanes 2) and antiserum supershift analysis (lanes 3), is either diminished (Nir) or displaced (RIPE3) by the addition of C/EBPβ (lanes 4). In contrast, the E47-LZ− mutant protein, although capable of forming a binding complex on the DNA elements (lanes 5), is resistant to interference by C/EBPβ (lanes 8). This observation further supports the involvement of the leucine repeat sequence within activation domain 2 of E47 in the interaction with C/EBPβ. Functional Inhibition of the Transcriptional Activation Potential of E47 by C/EBPβ-The question whether the protein-protein interaction between C/EBPβ and E47 would also lead to a functional inhibition of the transactivation potential of E47 was assessed by transient transfection into HIT-T15 cells of expression plasmids, encoding fusion proteins of the yeast Gal4-DNA binding domain with either AD1 or AD2 of E47, together with a luciferase reporter construct containing a multimerized Gal4-binding site (GBS) linked to 59 bp of the angiotensinogen gene promoter (Fig. 7). The Gal4 constructs containing AD1 and AD2 were equally active in transactivating the luciferase reporter gene in HeLa cells (not shown), whereas the AD2 construct (Gal4-DBD:E47-(329-436)) was significantly more active than the AD1 construct (Gal4-DBD:E47-(1-99)) in HIT-T15 cells. FIG. 5. Physical interaction of C/EBPβ with E47. A, nuclear extracts from COS cells transfected with empty vector (Mock) or an E47 expression vector (E47) and nuclear extracts from untransfected Ins-1 cells were immunoprecipitated with the indicated antisera (IPα), and immunoprecipitates were subjected to Western immunoblotting. B, COS cells transfected with empty vector (Mock) or expression plasmids for E47 and/or C/EBPβ were labeled in vivo with [35S]methionine, and extracts were immunoprecipitated with the indicated antisera (IPα). Labeled immunoprecipitates were visualized by autoradiography after SDS-PAGE (top). The identity of bands was confirmed by Western immunoblotting (lower parts). The asterisks denote nonspecific bands. C, point mutations changing leucine to phenylalanine introduced within the AD2 of E47. D, 35S-labeled in vitro translated wild type E47 (E47), leucine zipper mutated E47 (E47-LZ−), and C/EBPβ proteins were co-precipitated with the indicated antisera (IPα). Aliquots of translation products (lanes 1-3) or immunoprecipitates (lanes 4-10) were visualized by autoradiography after SDS-PAGE (top). The identity of the bands was confirmed by Western immunoblotting (lower parts). E, 35S-labeled in vitro translated wild type E47 (E47), leucine zipper mutated E47 (E47-LZ−), and C/EBPβ proteins were precipitated with a GST-C/EBPβ fusion protein (GST pull-down), and pure translation products or precipitates were visualized by autoradiography after SDS-PAGE (top). The identity of the bands was confirmed by Western immunoblotting (lower parts).
Cotransfection of an expression plasmid for C/EBPβ repressed the activity of Gal4-DBD:E47-(329-436) (AD2) on the luciferase reporter gene, leaving the transactivation potential of Gal4-DBD:E47-(1-99) (AD1) unaffected (Fig. 7). These findings provide evidence for a functional inhibitory interaction of C/EBPβ with the AD2 domain of E47 leading to a reduced transactivation potential of E47. DISCUSSION Chronic hyperglycemia in patients with type II diabetes mellitus may contribute to defective glucose-induced insulin secretion, a phenomenon that has been attributed to glucose toxicity (4). After culture in high glucose concentrations for 7 days, human islets contain markedly reduced insulin content, a change that can be partially reversed by subsequent culture in lower glucose concentrations (56). Using immortalized β-cell lines, it was found that chronic exposure to supraphysiologic glucose concentrations is associated with decreased insulin gene transcription and decreased expression of the insulin gene transactivators IDX-1 and RIPE3b1-binding proteins (5-9). It was recently reported, however, that the chronic glucotoxic alterations of insulin gene expression in the pancreatic β-cell line HIT-T15 are only partially reversible upon subsequent lowering of the high glucose levels, although the expression of IDX-1 and RIPE3b1 factors was readily restored (57). These findings imply that an additional inhibitory factor, which regulates insulin gene transcription under these conditions, may be involved. In this study, we have identified C/EBPβ as a transrepressor of insulin gene transcription, which is up-regulated by supraphysiological glucose levels in pancreatic β-cell lines. A high affinity binding site for C/EBPβ in the rat insulin I gene promoter, the CEB box, and several relatively lower affinity sites, namely the A1 (P1), the E1 (Nir box), and the A3/4 (FLAT) elements, were identified. DNase I footprint analysis using recombinant C/EBPβ indicates that C/EBPβ binds to the CEB box, the Nir box, and the P1 element and may interact with the FLAT element. DNA-protein binding assays using short oligonucleotides indicated that the CEB box is the high affinity binding site, whereas the other sites interact with C/EBPβ with at least a 10-fold lower affinity compared with the CEB box. Although C/EBPβ is capable of binding to the CREs of the phosphoenolpyruvate carboxykinase and somatostatin genes (58, 59), it did not interact with the rat insulin I gene CRE as indicated by both DNase I footprint analysis and EMSA. C/EBPα was shown not to bind to the glucagon CRE (53, 58), suggesting that the CRE alone is not sufficient for the binding of C/EBP proteins, and the flanking sequences may play a critical role for the converged binding of C/EBP proteins and CREB. FIG. 6. Inhibition of DNA binding of E47 by C/EBPβ. In vitro translated unlabeled wild type E47 (E47) and leucine zipper-mutated E47 (E47-LZ−) were used in EMSA with 32P-labeled double-stranded oligonucleotides comprising the E1 elements of the rat insulin I gene promoter (Nir) (A) and the rat insulin II gene promoter (RIPE3) (B). Supershift and competition experiments were performed with E47 antiserum (αE47) or a 100-fold excess of unlabeled probe (100x).
Previous work has demonstrated that the E2 (Far) box and the A3/4 (FLAT) element, and their counterparts located proximal to the transcription initiation site, termed the E1 (Nir) box and the A1 (P1) element, are the most important cis-acting DNA elements required for rat insulin I gene expression. The E2A gene products, E12, E47, and/or E2-5, bind to the Far and Nir boxes and activate the rat insulin I gene promoter synergistically with the homeodomain proteins IDX-1 (IPF-1/STF-1/PDX-1), HNF-1α, and Lmx-1 (52, 54, 60), which bind to the FLAT and P1 elements. Although C/EBPβ activates the rat insulin I gene promoter in non-β-cells through binding to the newly identified CEB box, this interaction does not mediate the repression of the rat insulin I promoter by C/EBPβ in β-cells. Our studies indicate that C/EBPβ inhibits rat insulin I gene transcription through physical and functional interaction with the basic helix-loop-helix protein, E47. The bHLH protein family of transcription factors is divided into three classes according to their DNA-binding properties, structural features, and tissue distribution (61). Factors of the E2A family (E12, E47, E2-5) are ubiquitously expressed members of class A. Class A factors of the E2A family are components of the major β-cell nuclear binding complex (insulin enhancer factor, IEF) of the rat insulin I and human insulin gene promoters (62). Furthermore, the class B bHLH factor BETA-2 is expressed specifically in pancreatic β- and α-cells and is reported to heterodimerize with E2A proteins on the RIPE3 element of the rat insulin II promoter, an interaction that is believed to contribute to the tissue specific expression of the insulin II gene (51). Importantly, more than 90% of the overall activity of the rat insulin I promoter is attributable to the synergistic transactivation by E2A proteins and homeobox factors (IDX-1, Lmx-1) (3). Thus, it is conceivable that molecular interference of C/EBPβ with E47 disrupts not only the homo- or heterodimerization of the bHLH factors themselves but also their synergistic transactivation with homeodomain proteins (Fig. 8). Two transactivation domains (AD1 and AD2) have been identified in E47. The AD2 subdomain contains a heptad leucine repeat sequence. Mutation of this "leucine zipper" altered the transcriptional activity of E47. Interestingly, AD1 functions in a wide variety of tissues and cells, whereas the expression of AD2 activity is largely restricted to pancreatic β-cells, supporting a potentially important role for the AD2 domain in regulating gene transcription in β-cells. In addition to the ability of the AD2 domain of E47 to contribute to transactivation, we uncovered evidence that the leucine repeat serves as a domain for a direct protein-protein interaction with C/EBPβ; mutations of two of the leucines abrogated this interaction. Furthermore, C/EBPβ inhibited binding of the E47 homodimer to E-box-containing elements within the rat insulin I and II promoters. Whether this is due to the formation of a classical leucine zipper dimerization between C/EBPβ and E47 has not been unequivocally established by our studies. A report showing an inhibition of insulin gene transcription by the leucine zipper transcription factor c-Jun via targeting of the AD2 of E47 in β-cells (48, 63) parallels our findings in part. c-Jun functionally inhibited the transactivation potential of the AD2 of E47, but in contrast to our observations with C/EBPβ, c-Jun did not appear to physically interact with the AD2 of E47.
Therefore, it is conceivable that leucine zipper transcription factors of different families may interact by different mechanisms with the AD2 of E47. Recently, it has been suggested that E2A factors are not required for insulin gene transcription, based on the observation that mice with a targeted disruption of the E2A gene do not appear to have abnormalities in the morphology of the endocrine pancreas and do not develop overt diabetes (64). Because the basal and stimulated, as well as the tissue-specific, expression of the insulin gene is regulated in a complex manner, however, the absolute contribution of different bHLH proteins to insulin promoter activity within different animal species is largely unknown. E2A gene products represent only one subfamily of bHLH factors, and other ubiquitously expressed members could substitute for E2A in the mouse gene knockout model. This notion is further supported by the observation that BETA-2 knockout mice do develop diabetes (66), a circumstance that may be attributable to the tissue-specific action of this bHLH transcription factor in mice, and the E47/BETA-2 heterodimer may also be a target for C/EBPβ-mediated repression of insulin II gene activity. Furthermore, it remains to be determined whether or not E47/BETA-2 heterodimers also bind to and activate E-box-containing elements within the rat insulin I promoter, a finding that would further support the importance of C/EBPβ as a glucose-induced repressor of insulin gene transcription. In conclusion, C/EBPβ may serve as a transcription factor mediating the dysregulation of insulin gene expression under conditions of sustained supraphysiological glucose concentrations. In fact, we have extended our studies toward examination of the expression of C/EBPβ in pancreatic β-cells in animal models of diabetes mellitus, with preliminary results implying an involvement of this factor in the pathophysiology of glucotoxic alterations during the development and progression of this disease. FIG. 8. Model of C/EBPβ-mediated inhibition of insulin gene transcription. Proposed mechanism of inhibitory action of C/EBPβ exemplified for the rat insulin I gene promoter. A, in the absence of C/EBPβ, E47 and IDX-1, binding to the A and E elements of the rat insulin I gene promoter, constitute two symmetrical enhansons that exhibit synergism in their transactivation potential. B, in the presence of C/EBPβ, the DNA binding of E47 homodimers is inhibited by protein-protein interaction with C/EBPβ, which disrupts both the transactivation potential of E47 and the synergistic interaction with IDX-1. A similar mechanism is conceivable for the rat insulin II promoter.
Tunable Fiber Laser with High Tuning Resolution in C-band Based on Echelle Grating and DMD Chip The tunable fiber laser with high tuning resolution in the C-band is proposed and demonstrated based on a digital micromirror device (DMD) chip and an echelle grating. The laser employs a DMD as a programmable wavelength filter and an echelle grating with high-resolution features to design a cross-dispersion optical path to achieve high-precision tuning. Experimental results show that wavelength channels with 3 dB-linewidth less than 0.02 nm can be tuned flexibly in the C-band and the wavelength tuning resolution is as small as 0.036 nm. The output power fluctuation is better than 0.07 dB, and the wavelength shift is below 0.013 nm in 1 h at room temperature. Introduction Tunable lasers as a powerful tool have been widely applied in spectroscopy, photochemistry, biomedicine, and optical communications for decades. For example, in dense wavelength division multiplexing (DWDM) optical communication, tunable lasers can not only replace multiple fixed-wavelength lasers to save the operation cost but also realize the remote dynamic allocation of networks resources. The number of wavelength channels in C-band determines the information transmission capacity in networks. Therefore, how to improve narrow-linewidth channels with a high tuning accuracy from laser sources has been receiving an increasing amount of attention from researchers and network service vendors. To date, various technologies have been proposed and implemented to realize tunable filters in laser sources, including fiber Bragg grating (FBG), Fabry-Perot (F-P) cavity, acousto-optics, interferometer, liquid crystal on silicon (LCoS), etc. FBG can be tuned easily through either heating or applying strain along the device. For example, it is reported that FBG can achieve 0.2 nm/V tuning accuracy from 1555-1565 nm driven by direct current (DC) voltage of multilayer piezoelectric transducers [1]. However, FBG-based tunable lasers are affected by the environment fluctuation, resulting in a high packaging cost and limited tuning range. The fiber-optic self-seeding F-P cavity achieves a wide range of single longitudinal modes tuning from 1153.75 to 1560.95 nm with a tuning step of 1.38 nm [2]. Avanaki et al. investigate a fiber Fabry-Pérot tunable filter using a well-established optimization method, simulated annealing (SA), to achieve maximum amplitude for the Fourier transformed peaks of the photodetected interferometric signal [3]. Furthermore, Y. Ding implemented a small-scale tuning with the accuracy of approximately 0.6 nm by using micro-ring Mach-Zehnder interferometers [4]. These technologies generally need additional matching devices, like an F-P laser, saturable absorber-based filters, which makes them complex and expensive to commercialize. Nowadays, a LCoS spatial light modulator as a programmable filter produced by Very-Large-Scale-Integrated (VLSI) technology has been applied to laser systems [5]. A digital micromirror device (DMD), another Opto-VLSI processor has also been attempted in a non-projection field. In 2006, Chuang and Lo proposed a spectral synthesis method with a spectral tuning accuracy of 0.076 nm/pixel in the C-band based on a DMD chip [6]. In 2009, W. Shin used the DMD-based tunable laser system as light sources for the optical time domain reflectometry, with a tuning range of 1525-1562 nm, and an improved laser tuning accuracy of 0.1 nm [7]. 
Our research group also reported a multi-wavelength tunable fiber laser based on a DMD chip with a step of 0.055 nm [8]. Echelle gratings are a special type of blazed gratings featured by a large blazing angle of grooves and often operate at high diffraction orders to obtain high dispersion. They are different to conventional gratings [9][10][11][12]. An echelle grating splits the radiant energy into a multitude of diffraction orders that overlap in the narrow interval of the grating diffraction angle. Therefore, in practical application, an additional order separator like a prism or grating, whose dispersion direction is perpendicular to that of an echelle grating, is inserted to separate the overlapping orders. By focusing the two-fold dispersed radiation, a two-dimensional spectrum is produced, thus achieving an applicable high-resolution spectrum. So far, echelle gratings are mainly applied in ultraviolet and visible high-resolution spectrometers [10,11]. In this work, we first apply an echelle grating into a DMD-based tunable laser to realize the high tuning resolution in C-band. The echelle-based tunable fiber laser is designed for a cross-dispersion structure of a closed-loop fiber system. The laser wavelength was tuned in the range of 1540-1560 nm with a tuning step of 36 pm. The 3 dB-linewidth of the signals was less than 0.02 nm. The side mode suppression ratio (SMSR) reaches 40 dB, and the maximum output power was 7.5 dBm.

Echelle Grating and System Design

The spectral order of an echelle grating is the result of mutual modulation of multi-slit interference and single-slit diffraction. The echelle equation is expressed as:

mλ = d (sin α + sin β) cos γ    (1)

where m, λ, and d are the diffraction order, wavelength, and grating constant, respectively. α, β and γ are the incident angle, corresponding diffraction angle, and off-axis angle. As shown in Figure 1a, θ_B is the blaze angle of an echelle grating and θ is the incident angle to the facet. So, the relation of angles is written as:

α = θ_B + θ,  β = θ_B − θ    (2)

Substituting Equation (2) into (1), the diffraction of an echelle grating is characterized as follows:

mλ = 2d sin θ_B cos θ cos γ    (3)

An echelle grating has the maximum diffraction efficiency only when the Littrow condition is satisfied, that is, the incidence is at the blaze angle. On both sides of the blaze angle, the diffraction efficiency of a grating decreases rapidly as θ increases. However, the strict Littrow condition leads to difficulty in the arrangement of the actual optical path. Therefore, a quasi-Littrow structure is usually employed with the incident ray at an off-axis angle γ from the principal section of a grating, as shown in Figure 1b. The condition of the quasi-Littrow configuration is:

α = β = θ_B (θ = 0),  γ ≠ 0    (4)

Therefore, the echelle grating equation under the quasi-Littrow condition is:

mλ = 2d sin θ_B cos γ    (5)

The free spectral range λ_FSR is:

Δλ_FSR = λ/m = λ²/(2d sin θ_B cos γ)    (6)

The range of the dispersion angle of the m-th order is:

Δβ_m = Δλ_FSR · (dβ/dλ) = λ/(d cos θ_B cos γ) = 2 tan θ_B / m    (7)

It can be seen from Equations (5)-(7) that an echelle grating has the following features: (1) The free spectral range is small and the spectral order is seriously overlapped. Therefore, it is necessary to use auxiliary dispersion elements for cross-dispersion to obtain a two-dimensional spectrum. (2) The angular dispersion is so high that the wavelength resolution is greatly improved.
(3) The dispersion angle of one order is small, and the wavelengths in the free spectral range of each stage are concentrated near the blazed order, so an echelle grating can blaze in the entire band.

In Figure 2, the two-dimensional cross-dispersion is realized by a diffraction grating and an echelle grating with main sections that are perpendicular to each other. As we know, echelle gratings are, to date, mainly applied in ultraviolet (UV) and visible (VIS) spectrometers, so most of the prisms are used as auxiliary dispersers placed before or after an echelle grating to achieve cross-dispersion. In our work, the laser operates in the C-band and the prism glass material shows strong absorption in infrared. Therefore, a diffraction grating is adopted to replace the prisms in fiber lasers.

Figure 3 demonstrates the tunable laser structure employing a DMD chip as a programmable filter in bulk optics and a fiber resonator with an erbium-doped fiber amplifier (EDFA). The lasing process in a fiber cavity is achieved by optical pumping and erbium gain. The bulk optics obtain the high-precision mode selection by an echelle grating and a 0.55" DMD in the experiment. The detailed working principle of a 0.55" DMD and its diffraction efficiency have been analyzed in [13]. The EDFA emits the amplified spontaneous emission (ASE) spectrum from 1530-1560 nm. After a 90/10 optical fiber coupler, 90% of the ASE light energy returns into a ring and then continues to be coupled into the bulk optics via a circulator and an optical fiber collimator. The bulk optics consists of two cylindrical lenses, a diffraction grating, an echelle grating, and a DMD chip. The fiber collimator and the 1200 line/mm diffraction grating are located at the front and the rear focal planes of the lens (f0 = 100 mm), respectively. The diffraction grating and the 79 line/mm echelle grating are separated by 100 mm. In order to ensure that the echelle grating adheres to the quasi-Littrow condition, the incident beam is arranged at an off-axis angle γ so that the diffracted beam and the incident beam are in the same horizontal plane. The cylindrical lens 1 (f1 = 150 mm) and cylindrical lens 2 (f2 = 100 mm) are 50 mm and 100 mm from the echelle grating, respectively. Therefore, the busbars of the two cylindrical lenses are perpendicular to each other, and the two dispersion directions after the two gratings are collimated, respectively. The DMD is at the back focal plane of the two cylindrical lenses, as shown in Figure 4. By uploading steering holograms onto the DMD controlled by remote software, any waveband of the ASE spectrum can be routed and coupled into the optical system along the original path, and the others are dropped out with dramatic attenuation, thereby achieving the laser longitudinal mode selection and wavelength tuning. The selected wavebands returning through the collimator and circulator into the ring cavity are amplified by the EDFA, leading, after several recirculations, to high-quality single-mode laser generation.

The off-axis arrangement greatly influences the laser tuning range and accuracy. We optimize the off-axis angle γ of the echelle grating (79 line/mm) for the laser system. According to Equation (5), m = 15 and λ = 1550 nm are selected as the calibration blazed wavelength, and the corresponding γ under the quasi-Littrow condition is calculated as 18.05°. Using Zemax OpticStudio software, we design the optical system to analyze the beam distribution on the DMD surface. The simulation results illustrate that the length of the two-dimensional dispersion strip is 12.2 mm in Figure 4, matching with the experimental pattern in the inset of Figure 4. The 0.55" DMD receiving wavelength range is around 20 nm, from 1540-1560 nm, and is limited by the DMD size. The tuning accuracy of the laser wavelength is 0.0177 nm/pixel, in theory. Considering the used echelle grating has a wide working range from UV to 25 µm, this laser system is convenient to be extended into the 2 µm-band, which has potential applications in the biomedical domain [14,15].

Experimental Results

When the optical loop is closed, Figure 5 shows a typical laser signal with the center of the wavelength at 1546.733 nm when the pump power is 120 mW. The power of the laser output is around 7.5 dBm, the 3 dB-linewidth is less than 0.02 nm (limited by the resolution of the YOKOGAWA spectrum analyzer, Yokogawa Test & Measurement Corporation, Tokyo, Japan), and the SMSR exceeds 40 dB. Different holograms are loaded onto the DMD chip, and each hologram corresponds to a different selected wavelength. Each selected wavelength is amplified by the EDFA to achieve lasing.
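As a quick numerical cross-check of the design values quoted above, the short Python sketch below evaluates the quasi-Littrow echelle equation (Eq. (5)) and the free spectral range (Eq. (6)) for the 79 line/mm grating at m = 15 and λ = 1550 nm. The blaze angle is not stated in the text, so the 75° used here is an assumption chosen only to illustrate how an off-axis angle near 18° follows from Eq. (5); this is a sketch, not the authors' actual design calculation.

```python
import math

# Grating and operating point taken from the text; the blaze angle is an
# assumption (not stated in this excerpt) chosen so that the quoted ~18 deg
# off-axis angle is reproduced.
groove_density = 79e3          # lines per metre (79 line/mm echelle)
d = 1.0 / groove_density       # groove spacing in metres
theta_B = math.radians(75.0)   # assumed blaze angle
m = 15                         # diffraction order used for calibration
lam = 1550e-9                  # calibration wavelength in metres

# Quasi-Littrow echelle equation, Eq. (5):  m*lambda = 2*d*sin(theta_B)*cos(gamma)
cos_gamma = m * lam / (2.0 * d * math.sin(theta_B))
gamma = math.degrees(math.acos(cos_gamma))

# Free spectral range, Eq. (6):  delta_lambda = lambda / m
fsr = lam / m

print(f"off-axis angle gamma ~ {gamma:.2f} deg")    # about 18 deg
print(f"free spectral range  ~ {fsr*1e9:.1f} nm")   # about 103 nm at order 15
```

With these inputs the computed off-axis angle comes out close to the 18.05° quoted in the text, and the free spectral range at order 15 is on the order of 100 nm, much wider than the 20 nm band imaged onto the DMD.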
Figure 6 is the measured outputs of the echelle-grating-based fiber laser tuning from 1542 to 1558 nm by remotely uploading the 8 × 768 pixel-holograms at different positions along the DMD active window when the threshold pumping power is 28 mW. It demonstrates an excellent tuning capability. Notice that the range of the actual tuning wavelength is a little wider than 16 nm. The wavelength outside the tuning range requires a higher threshold power to lasing due to the off-axis angle and the influence of stray light. Figure 7 is the fine tuning characteristics of laser outputs with the fine tuning accuracy 0.036 nm. We modulate the selected wavelength each time by moving 2-pixels on the hologram. The tuning accuracy corresponding to each pixel is related to the number of DMD pixels covered by the ASE spectrum on the surface of the DMD. Note that the tuning accuracy can be further improved by employing a DMD with a smaller pixel size, like the DLP2010NIR (Texas Instruments Incorporated, Dallas, TX, USA; each pixel size is 5.4 µm). The shoulders on both sides of the laser spectrum may be due to self-phase modulation or other nonlinear phenomena arising from a high-level of output power [16]. Figure 8 shows the drift of wavelength (dotted line) and the fluctuation of peak power (solid line) at the pump power 40 mW during 1-h observation at the center wavelength of 1546 nm. The maximum wavelength drift is less than 0.013 nm and the maximum peak power fluctuation is 0.07 dB at room temperature. The linewidth is better than that reported in [5] (0.05 nm) and [8] (0.02 nm), and the maximum peak power fluctuation is better than that in [8] (0.25 dB).
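A back-of-envelope check can relate the simulated 12.2 mm dispersion strip to the quoted 0.0177 nm/pixel accuracy and to the 0.036 nm step obtained by moving the hologram by two pixels. The 10.8 µm mirror pitch assumed below is the nominal pitch of 0.55" XGA-class DMDs and is not given in the text, so this is only an illustrative sketch.

```python
# Back-of-envelope check of the quoted tuning resolution, assuming the nominal
# 10.8 um mirror pitch of a 0.55" XGA DMD (the pitch is not stated in the text).
strip_length_mm = 12.2          # simulated dispersion strip length on the DMD
band_nm = 20.0                  # wavelength span imaged onto the strip (1540-1560 nm)
pitch_um = 10.8                 # assumed mirror pitch

pixels_on_strip = strip_length_mm * 1000.0 / pitch_um
nm_per_pixel = band_nm / pixels_on_strip

print(f"{nm_per_pixel:.4f} nm per pixel")              # ~0.0177 nm/pixel, as quoted
print(f"{2 * nm_per_pixel:.3f} nm per 2-pixel step")   # ~0.035 nm, close to the 0.036 nm step
```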
Compared with other tunable lasers with the same tuning mechanism, the laser output stability has been further improved.

Finally, due to the off-axis angles introduced into the tunable fiber laser, the aberration-like coma and astigmatism influences the tuning range and accuracy. Therefore, we will continue to optimize the optical path and reduce the stray light effect caused by an echelle grating in the follow-up work, which will be helpful to further improve the tuning property of devices. Also, loading the modulation algorithm on the DMD is an attractive solution, and our research process in the future will also consider using algorithms to further improve the performance of tunable fiber lasers.

Conclusions

The C-band tunable fiber laser based on a DMD chip and an echelle grating is proposed and demonstrated experimentally. The laser employs a DMD as a programmable wavelength filter and an echelle grating with high-resolution features to design a cross-dispersion optical path to achieve high-precision tuning. The optimal off-axis angle of an echelle grating under the quasi-Littrow condition is simulated and analyzed in detail. Experimental results show that wavelength channels are tuned in the range of 1542-1558 nm with a tuning step of 0.036 nm. The 3 dB-linewidth of the signals is less than 0.02 nm, the SMSR reaches 40 dB, and the maximum output power is 7.5 dBm. At room temperature, the output power fluctuation is better than 0.07 dB in 1 h, and the wavelength shift is below 0.013 nm.

Author Contributions: X.C. and G.X. conceived and designed the experiments; J.L. performed the experiments; J.L., Y.G. and D.D. analyzed the data; G.X. and M.L. contributed reagents/materials/analysis tools; J.L. and X.C. wrote the paper.
Analysis of Stress Intensity Factor in a Cracked Plate

SUMMARY The present paper aimed to study the effect of the hole on the stress distribution and stress intensity factor of the mode I fracture in a perforated plate with an edge crack. For this purpose, a plate with an edge crack and one or two holes near the crack tip is investigated. First, the cracked plate without a hole was analysed with Abaqus software, and the results were validated through Westergaard's analytical method. Then, a plate with an edge crack and one or two holes near the crack tip was also analysed with Abaqus software. The effects of the distance between the hole and the crack tip, as well as the radii of the holes, on the stress distribution σy and stress intensity factor KI were also investigated. The findings indicated that making a hole in the plate significantly reduces the amounts of the stress and stress intensity factor at the crack tip if the hole has an appropriate radius and position relative to the crack tip.

INTRODUCTION Stress concentration at the crack tip is considered one of the most important issues in linear fracture mechanics. Determining the stress distribution at the crack tip depends on calculating and determining a parameter called the stress intensity factor (SIF). This parameter can not only determine the amount of stress at the crack tip and its perimeters but also indicates the condition of the material against the crack. The SIF is regarded as one of the important parameters in the theory of linear fracture mechanics. Stress intensity factors for different loads and geometries are provided in the handbooks. Griffith first proposed the relationship between sample fracture stress and crack length in 1920 [1]. He performed stress analysis on an elliptical hole as an elliptical crack that was growing unstably. Griffith was the first to formulate and present a theory of crack growth based on the criterion of energy conservation. He is the founder of modern fracture mechanics. Accordingly, a crack grows if the strain energy resulting from its growth is equal to or greater than the surface energy of the material. Irwin [2] introduced a new crack growth criterion as an energy release rate based on the Griffith energy criterion that could be used more easily by engineers. Then, Westergaard [3] suggested a method for determining the stress distribution and displacement at the crack tip in 1938 by introducing the complex functions. Later, Irwin found that the stress distribution obtained at the crack tip by Westergaard can be expressed based on a parameter called the stress intensity factor. Williams [5] used another technique to obtain the stress distribution at the crack tip. Fracture mechanics investigations have continued ever since. Oliver [6] was one of the first researchers who used the finite element method to model cracks after the introduction of this method in engineering. He used the mesh removal technique to model the perforation in the sample and simulated the crack. Rashid [7] used the moving meshes technique and the replacement of meshes to model cracks. In 1999, Moes et al. [8] used a mesh-free method for simulation and solved the problems of the previous methods. The first use of software such as Abaqus for crack simulation was proposed by Hang et al.
[9]. In the extended finite element method (X-FEM), the software can calculate the stress intensity factor by creating a detachable surface in the sample and defining elements at the crack tip. Zohali and Fariba [10] introduced a relation to determine the stress intensity factor in an edge crack under concentrated loading based on the distance from the loading point using the Abaqus software. Evans and Luxmoore [11] determined the full equations for stresses and displacements around a central crack in an infinite plate subjected to uniaxial and biaxial tension using the Westergaard stress function. Hyde and Warrior [12] presented an improved method for determining photoelastic stress intensity factors using the Westergaard stress function. Cirello et al. [13] developed a numerical procedure that combines two hybrid finite element formulations to analyse the stress intensity factors in cracked perforated plates with a periodic distribution of holes and square representative volume elements. The accuracy of the method in predicting the stress intensity factor was verified by a comparison with experimental measurements carried out by a photoelasticity method and by commercial finite element software.

The present study investigates the effect of geometric changes on the stress distribution and stress intensity factor in an edge-cracked plate. For this purpose, a hole-free plate with an edge crack was analysed with Abaqus software, and the stress distribution and stress intensity factor were calculated. Furthermore, the Westergaard analytical method was used to validate the results, and the numerical and analytical results were compared to each other, which showed good consistency. Then, a plate with an edge crack and a hole near the crack tip was analysed, and the effect of the distance of the hole centre from the crack tip on the stress distribution and on the stress intensity factor was investigated. Additionally, a plate with an edge crack and two aligned holes was analysed, and the distance of the holes from the crack tip and the radii of the holes were evaluated according to their effects on the stress distribution and stress intensity factor.

MODEL DESCRIPTION The model investigated here consists of a finite rectangular plate with an edge crack. This plate is under uniform uniaxial loading at the top and bottom edges. Figure 1 shows a schematic of the plate with the edge crack. Also, the plate is made of SAE 304 stainless steel.

THEORETICAL AND NUMERICAL ANALYSIS OF A PLATE WITH AN EDGE CRACK The procedure for computing the stress state near a crack tip was established by Westergaard [3]. The process based on the Airy stress function concept is used in its Cartesian form. The complex stress function Z is defined as follows:

Z(z) = Re Z + i Im Z    (1)

z = x + iy    (2)

In Eq. (1), Re Z and Im Z are the real and imaginary parts of the Z function, respectively. Also, Eq. (2) represents the general form of a complex number. If Z is an analytic function, we can use the Westergaard stress function. The Westergaard stress function is defined as follows:

Φ = Re Z̿ + y Im Z̅    (3)

In Eq. (3), Z̅ and Z̿ are the first and second integrals of the complex function Z, so that:

Z̅ = ∫ Z dz,  Z̿ = ∫ Z̅ dz    (4)

From the Cauchy-Riemann conditions, i.e.

∂(Re Z)/∂x = ∂(Im Z)/∂y,  ∂(Im Z)/∂x = −∂(Re Z)/∂y    (5)

the plane stress equations are as follows:

σx = Re Z − y Im Z′,  σy = Re Z + y Im Z′,  τxy = −y Re Z′    (6)

The stress components ahead of a crack tip for mode I of loading are as follows:

σx = (KI/√(2πr)) cos(θ/2) [1 − sin(θ/2) sin(3θ/2)]
σy = (KI/√(2πr)) cos(θ/2) [1 + sin(θ/2) sin(3θ/2)]    (7)
τxy = (KI/√(2πr)) sin(θ/2) cos(θ/2) cos(3θ/2)
Equation (7) calculates the stress field ahead of a crack tip, in which σx, σy and τxy are the stress components in the direction of the x-axis (in the direction of the crack), in the direction of the y-axis, and in the xy plane, respectively. These equations are approximations, limited to an area close to the crack tip. The distance r and angle θ are as defined in Figure 2.

Fig. 2 Definition of the coordinate axis ahead of a crack tip

The KI parameter in Eq. (7) is called the stress intensity factor (SIF), indicating the singularity of the stress at the crack tip. The KI equation for an edge-cracked plate under uniaxial loading is as follows [3,11,14]:

KI = Y σ √a    (8)

where Y is called the geometry factor, signifying the geometry of a crack system in relation to the applied load, and σ is the applied load at the plate edge:

Y = 1.99 − 0.41(a/W) + 18.7(a/W)² − 38.48(a/W)³ + 53.85(a/W)⁴    (9)

where a and W represent the crack length and plate width. Table 1 displays the stress values σx, σy and the stress intensity factor KI calculated by Eqs. (7) and (8) for different distances from the crack tip (r), θ=0 and σ=1 MPa. Also, Figure 3 shows the stress values σx, σy and τxy versus distance r from the crack tip for θ=0, calculated by Eq. (7). According to Eq. (7), it is clear that for θ=0, σy = σx and τxy will be zero.

Fig. 3 The plot of stress values versus distance r from the crack tip calculated by the theoretical method for θ=0

The Abaqus FEA software was used for finite element simulation. The contour integrals technique is used for calculating the stress intensity factors of the crack. First, the plate was modeled and then meshed in the software. 6600 quadratic elements of CPS8R type are used for model meshing. Further, singular elements were used around the crack tip. Figure 4 shows the view of the entire plate's elements and the crack tip area.

Fig. 4 The finite element mesh of the plate without a hole

The mechanical properties of SAE 304 stainless steel assigned to the model are represented in Table 2. Figure 5 shows the Von Mises stress contour near the crack tip with the maximum stress indicated.

Fig. 5 The Von Mises stress contour plot of the plate without a hole

Also, Table 3 displays the stress σy and stress intensity factor KI obtained by the numerical analysis with Abaqus for different distances from the crack tip (r) and θ=0. The results of stress and stress intensity factors analysed by the theoretical and numerical methods (using Abaqus) were compared to determine the accuracy of the results. Table 4 compares the stress values and stress intensity factors using theoretical and numerical analysis, and the error is calculated. Figure 6 compares the stress σy values versus distance r in the two methods. The calculated errors in Table 4 indicate that the accuracy of the numerical results is satisfactory, and thus the same modeling conditions, such as the size of the elements, are used for the subsequent numerical analyses.

Fig. 6 The plots of the stress σy versus distance r from the crack tip calculated by theoretical and numerical methods
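To make Eqs. (7)-(9) concrete, the following Python sketch evaluates the reconstructed edge-crack relations: the geometry factor polynomial of Eq. (9), the stress intensity factor of Eq. (8) (paired here with √a, the form in which the 1.99... polynomial is normally quoted), and the near-tip stresses of Eq. (7) at θ = 0. The plate dimensions are not given in this excerpt, so the a and W values below are placeholders, not the values behind Table 1.

```python
import math

def K_I_edge_crack(sigma: float, a: float, W: float) -> float:
    """Mode-I SIF for a single edge crack under uniaxial tension, Eqs. (8)-(9).

    sigma in MPa, a and W in consistent length units; returns K_I in MPa*sqrt(length).
    """
    x = a / W
    Y = 1.99 - 0.41 * x + 18.7 * x**2 - 38.48 * x**3 + 53.85 * x**4
    return Y * sigma * math.sqrt(a)

def crack_tip_stresses(K_I: float, r: float, theta: float):
    """Near-tip stress components of Eq. (7); r in the same length unit used for K_I."""
    c, s = math.cos(theta / 2.0), math.sin(theta / 2.0)
    s3, c3 = math.sin(3.0 * theta / 2.0), math.cos(3.0 * theta / 2.0)
    amp = K_I / math.sqrt(2.0 * math.pi * r)
    sigma_x = amp * c * (1.0 - s * s3)
    sigma_y = amp * c * (1.0 + s * s3)
    tau_xy = amp * s * c * c3
    return sigma_x, sigma_y, tau_xy

# Illustrative values only (the plate geometry is not stated in this excerpt):
sigma, a, W = 1.0, 10.0, 50.0          # MPa, mm, mm -- assumed values
K = K_I_edge_crack(sigma, a, W)
for r in (0.2, 0.4, 0.6, 0.8, 1.0):    # distances ahead of the tip, theta = 0
    sx, sy, txy = crack_tip_stresses(K, r, 0.0)
    print(f"r = {r:.1f} mm: sigma_y = {sy:.3f} MPa (sigma_x equal, tau_xy = {txy:.1f})")
```

At θ = 0 the sketch reproduces the behaviour described in the text: σy equals σx, τxy vanishes, and both stresses fall off as 1/√r away from the tip.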
FINITE ELEMENT ANALYSIS OF A PERFORATED PLATE WITH AN EDGE CRACK The effect of the existence of a hole near the crack tip on the stress distribution was investigated, as well as the value of the stress intensity factor. A plate with a hole on the x-axis with a distance of S from the crack tip and a radius of 5 mm is modeled in Abaqus software. Figure 7 shows the dimensions and the elements view of the plate. All the conditions and sizes of the elements in this simulation are similar to the finite element analysis of the previous section. Further, Figure 8 displays the Von Mises stresses in the model near the crack tip. It can be seen from Table 5 that by increasing the distance S (the hole is farther away from the crack tip), the stress value at the crack tip decreases, which causes a decrease in the value of σy. Also, with the increase of r (distance from the crack tip), the position of stress measurement moves away from the place of stress concentration, and the value of σy should decrease.

Fig. 9 The plots of the stress σy versus distance r from the crack tip for different values of distance S (in the plate with one hole)

From the above results, it can be concluded that by increasing the distance of the hole from the crack tip (S), the stress intensity factor KI and stress σy decrease. The three-dimensional diagram of σy in terms of distances r and S is shown in Figure 10. Also, the polynomial relationship resulting from the curve fitting extracted from the Matlab software is shown in this figure. According to this plot, the minimum stress value occurs at r=5 mm and S=35 mm, and the maximum at r=0.2 mm and S=15 mm.

FINITE ELEMENT ANALYSIS OF A PLATE WITH AN EDGE CRACK AND TWO HOLES (WITH DIFFERENT DISTANCES FROM THE CRACK TIP) In this section, a plate with an edge crack and two aligned holes is investigated. The horizontal position of the two holes is the same, and the aim is to investigate the effect of the horizontal distance S (the horizontal distance of the centre of the circles from the crack tip) on the stress value σy and the stress intensity factor KI. The vertical distance of the two holes from the crack tip is equal to 20 mm. Figure 11 shows the dimensions and elements view of the model, and Figure 12 shows the Von Mises stresses in the model near the crack tip for S = 0. Tables 7 and 8 compare the stress σy and stress intensity factor KI for different values of distance S, respectively. Also, Figure 13 displays the effect of distance S on the stress value σy. As shown in Table 7, the holes increase the stress value σy near the crack tip compared to the hole-free plate (for r=0.2, 0.4, 0.6, 0.8 mm). However, the holes decrease stress σy in the areas farther from the crack tip compared to the hole-free plate. A similar trend is observed in Figure 13. In addition, the results of Table 8 show that the presence of two holes in the plate increases the stress intensity factor KI compared to the hole-free plate.

FINITE ELEMENT ANALYSIS OF A PLATE WITH AN EDGE CRACK AND TWO HOLES (WITH DIFFERENT RADIUS) In this section, a plate with an edge crack and two aligned holes is investigated. The horizontal position of the two holes is the same and S is equal to zero. The aim is to investigate the effect of the radius of the holes on the stress value σy and the stress intensity factor KI.
Figure 14 shows the dimensions and elements view of the model. Figure 15 shows the Von Mises stresses in the model near the crack tip for radius R = 10 mm. Tables 9 and 10 compare the stress σy and stress intensity factor KI for different values of radius R, respectively. Figure 16 displays the effect of radius R on the stress value σy. As shown in Table 9 and Figure 16, at the area near the crack tip the value of stress σy increases with increasing radius of the holes (for r=0.2, 0.4, 0.6, 0.8, 1 mm), while at the area farther from the crack tip, the stress value σy decreases with increasing radius of the holes. It is also observed in Table 10 that with increasing radius of the holes, the stress intensity factor KI also increases. CONCLUSION In this study, the effect of the hole on the stress distribution and stress intensity factor of a plate with an edge crack was investigated. Based on the obtained results, several important findings are summarized: • The stress intensity factor KI and σy decrease when the distance of the hole from the crack tip increases in the plate with one hole in front of the crack tip. This is because the closer the hole is to the crack tip, the more it affects the stress distribution around the crack tip, and the higher the stress intensity factor becomes as a result. • In the case of a plate with two holes with different distances from the crack tip, the presence of the holes in the plate near the crack tip (for r=0.2, 0.4, 0.6, 0.8 mm) increases the stress σy compared to a plate without a hole. However, in areas farther from the crack tip, the presence of the holes reduces the value of stress σy compared to a plate without a hole. • It can be concluded that the presence of the hole in the plate increases the stress intensity factor KI compared to the hole-free plate. For example, in the case of a plate with two holes with a radius of R=10 mm, the value of the stress intensity factor is about 18% higher than the plate without holes. • In the plate with two holes with different radii, the stress σy increases when the hole radius increases near the crack tip (for r=0.2, 0.4, 0.6, 0.8, 1 mm), while at distances farther from the crack tip, the stress σy decreases with increasing hole radius. Also, with increasing hole radius, the stress intensity factor increases. • In general, according to the obtained results, it can be said that the presence of the hole in front of a crack causes a change in the geometry and stress distribution around the crack tip, which causes a change in the stress intensity factor. These changes are a function of the number, radius, and distance of the holes from the crack. Fig. 1 The plate schema with an edge crack Fig. 7 a) The dimensions of the perforated plate, b) the finite element mesh of the perforated plate Fig. 10 The 3D plot of the stress σy versus distances r and S (in the plate with one hole) Fig. 11 The dimensions and elements view of the model Fig. 12 The Von Mises stresses in the model near the crack tip for S = 0 Fig. 13 The plots of the stress σy versus distance r from the crack tip for different values of distance S (in the plate with two holes) Fig. 14 a) The dimensions of the plate with two holes (S=0), b) The finite element mesh of the plate with two holes (S=0) Fig. 16 The plots of the stress σy versus distance r from the crack tip for different values of radius R (in the plate with two holes)
Table 1 The stresses σx, σy and KI calculated by the theoretical analysis Table 2 Mechanical properties of SAE 304 stainless steel Table 3 The stress σy and KI calculated by the numerical analysis Table 4 Comparison between the theoretical and numerical results Table 5 The stress σy for different values of distance S (in the plate with one hole) Table 6 The stress intensity factor KI for different values of distance S (in the plate with one hole) Table 7 The stress σy for different values of distance S (in the plate with two holes) Table 8 The stress intensity factor KI for different values of distance S (in the plate with two holes) Table 9 The stress σy for different values of radius R (in the plate with two holes) Table 10 The stress intensity factor KI for different values of radius R (in the plate with two holes)
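The σy(r, S) surface fit behind Figure 10 was obtained with Matlab curve fitting; a minimal least-squares equivalent is sketched below in Python. The sample points are made-up placeholders standing in for values read from Table 5, so only the fitting procedure, not the coefficients, carries over.

import numpy as np

# placeholder observations (r in mm, S in mm, sigma_y in MPa), not values from Table 5
r = np.array([0.2, 0.6, 1.0, 2.0, 5.0, 0.2, 0.6, 1.0, 2.0, 5.0])
S = np.array([15.0, 15.0, 15.0, 15.0, 15.0, 35.0, 35.0, 35.0, 35.0, 35.0])
sigma_y = np.array([2.8, 1.6, 1.2, 0.9, 0.6, 2.4, 1.4, 1.0, 0.8, 0.5])

# quadratic surface: sigma_y ~ c0 + c1*r + c2*S + c3*r^2 + c4*r*S + c5*S^2
A = np.column_stack([np.ones_like(r), r, S, r**2, r * S, S**2])
coeffs, *_ = np.linalg.lstsq(A, sigma_y, rcond=None)
print("fitted coefficients:", np.round(coeffs, 4))

# evaluate the fitted surface at r = 1 mm, S = 25 mm
r0, S0 = 1.0, 25.0
pred = np.array([1.0, r0, S0, r0**2, r0 * S0, S0**2]) @ coeffs
print(f"predicted sigma_y at r={r0} mm, S={S0} mm: {pred:.3f} MPa")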
2024-06-17T15:28:28.734Z
2024-06-14T00:00:00.000
{ "year": 2024, "sha1": "9d5e69227dd181f26ed06b39109b73e8d8be6539", "oa_license": null, "oa_url": "https://hrcak.srce.hr/file/459319", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "098a7485bf88e5f737d25426a5ca296255e3b5bd", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
236379171
pes2o/s2orc
v3-fos-license
A review of application of natural products as fungicides for chili Anthracnose disease in chillies is a serious problem for farmers. So far, synthetic fungicides have been used as solution for the treatment of this disease. However, the side effects of synthetic fungicides to public health and environment raised awareness on alternative fungicides derived from natural resources. This paper aims to review plants that are potential as an alternative to fungicides for chili plantation, fabrication of test solutions, in vitro and in vivo fungicide test. Many plants were investigated as alternatives to plant-based fungicide. The utilization of leaves as samples including rhizomes, roots, tubers, weevils, seeds, fruit, flowers and other parts of the plant. The extract fabrication method used as a fungicide test include: maceration method, gradual fractionation method, and decoction method. The maceration method is the method most widely used to extract fungicidal active compounds from plants. Some studies that carried out in vitro tests were unable to compare with synthetic fungicides so it was not possible to determine their effectiveness for plant-based fungicide for chillies when compared to synthetic fungicides. In vitro extract of 80% alcohol and 10%/60% n-hexane of pacar cina (Aglaia odorata L.) leaves can be compared with the performance of propineb 0.2%. In addition, the 60% and 70% kirinyuh (Chromolaena odorata L.) leaf extracts were also able to match Acrobat 0.2% performance in vitro. Based on the in vivo test, suren (Toona sureni Merr) leaf extract and nut bulbs can be used as an alternative to vegetable / natural fungicides to help overcome the problem of anthracnose in chilies. Introduction One of the goals of the Sustainable development goals (SDGs) is to achieve food security and declare as sustainable agriculture. Chili is one of the food commodities whose production must be increased in order to realize food security in Indonesia. Every year, there are increased in demand for chilies which is in line with the growth in population and the development of food industry that require the chilies as raw material (Subagyono et al., ). In addition, there is always increase in the price of chili in particular month due to low productivity of chili harvest. The decrease in chili productivity can be caused by pests and plant diseases (Warisno Dahana, ). The pests attack the plants and causes chilies suffered severe damage and crop failure. The pests that can attack chili plants include: peach aphids, thrips pests, mites, fruit fly pests, and fruit borer pests. On the other hand, chili plant diseases include: anthracnose, phytophthora rot, fusarium wilt, cercospora leaf spot, bacterial wilt, yellow virus, mosaic disease (Piay et al., ). Therefore, control of plant pest organisms must be done in order to increase the production of chilies (Badan Pusat Statistik Republik Indonesia, ). Some plants that have the potential to be used as natural pesticides include: tembelekan/cherry pie (Lantana camara), jarak tintir/coral plant (Jatropha multifida), pacar cina/chinese rice (Aglaia odorata L.), mengkudu/noni (Morinda citrifolia L.), mimba/neem (Azadirachta indica A. Juss.), kenikir/compositae (Cosmos caudatus Kunth.), sirih/betel (Piper batle L.), awar-awar (Ficus septica) and others. Basically, natural pesticides do not only come from plants, but also from bacteria, viruses, and fungi (Novizan, ). 
The purpose of this paper is to review: ) plants that have the potential as an alternative natural fungicide for chili, ) fabrication of solution for in vitro and in vivo test, ) in vitro test as fungicide for chili, and ) in vivo test as fungicide for chili. Potential plants as alternative fungicide for chili Many plants have been investigated on the potential as an alternatives to plant-based / natural fungicides for chili. Table shows the names and parts of the plant and the method tested for fungicide. The part of plants that is widely used in research on finding alternative natural fungicides is the leaves. Few studies have used parts of rhizomes, roots, tubers, weevils, seeds, fruit, flowers or all parts of a plant (combination of flowers, leaves, stems, roots, and seeds). Betel leaf is a part of the plant that has been investigated both in vitro and in vivo. The researchers only used one plant type separately to determine its potential as natural fungicide. Only few researchers have combined plants, for example: mixture of betel and tobacco leaf (Oktarina et al., ), (Anjani, ), (Nur Rohmah, ) and mixture of kenikir/compositae (Cosmos caudatus Kunth.) and betel (Maimunah et al., ). In general, fungicide test methods used in many studies are divided into categories, namely: ) in vitro and ) in vivo. There are researchers who only focus on using in vitro test methods or in vivo test methods. In addition, the researchers also used both test methods in combination . In the in vitro test method, many types of fungi that cause Anthracnose disease in chilies are used, for example: Collectotrichum capsici, Colletotrichum gloesporioides and Colletotrichum acutacum mushrooms. Several parameters that can be observed in the in vitro test include: percentage of inhibition, diameter of fungal colony growth, zone of inhibition, spore growth, spore germination and percentage of spore density. On the other hand, in the in vivo test more parameters can be observed which include: anthracnose disease severity, intensity of fungal attack on chilies, percentage of disease incidence, effectiveness of fungicides, diameter of chili spots, incubation period of fungi in chilies, plant height, number of fruit and the weight of the chilies. In this in vivo test, the success of the research is strongly influenced by environmental factors, for example: temperature, humidity and rainfall (Suwastini et al., ). Preparation of extract The preparation of test solutions for chilies fungicide was summarized in Table . In general, the method of extracts preparation used can be classified into types, namely: ) maceration method, ) graded fractionation method, and ) decoction method. . Maceration method This method are most widely used to extract active compounds from certain plants for fungicide. Plants were prepared in the powder or flour form are added with a solvent and then are soaked for a designated time. The filtrate is separated from the dregs and the maceration process can be continued with new solvent until color filtrate is clear. The filtrate is concentrated using rotary evaporator with temperature control according to the type of solvent used until concentrated extract is free solvent (K Ngibad, ), (Khoirul Ngibad, ), (Wibowo et al., ). The solvents used in the extract preparation for test fungicide on chilies, including: water solvent (ultrapure water) and organic solvents ( % methanol, methanol, % ethanol, % ethanol, ethanol, ethyl acetate, and n-hexane). 
The usage of solvents in the maceration process is expected to be able to extract the large fraction of possible fungicidal active compounds. In addition, there are differences in the ratio of sample weight and volume of solvent used between researchers, ranging from : to : . The greater the ratio of solvent volume and sample weight will maximize the extract or active fungicidal compound produced. However, it is necessary to pay attention to the effectiveness of usage the solvent volume. . Stratified fractionation method Practices of the graded fractionation method have been carried out, for example: the fine powder of Chinese henna leaves was fractionated in stages using filter made of various sizes of paralon to form funnel containing activated charcoal as filter and adsorption of nonpolar compounds. The liquid-liquid solvent extraction method used cold distilled water. Then, it was followed by solution of alcohol or n-hexane with concentrations of , , , , , , , , and %, respectively (Efri et al., ). Then, babadotan/goatweed (Ageratum conyzoide) leaf powder was placed into simple fractionation tool, then the filtered residue was collected and air-dried. The filtrate or crude extract was added with methanol solvent then was collected and air-dried to obtain the methanol fraction of the babadotan leaf extract. In the same way, to get ethyl acetate and n-hexane extract (Wulandari et al., ). The water solvent is expected to be able to extract the active polar fungicide compound which is polar. Decreasing the level of polarity starting from methanol, ethyl acetate, and nhexane solvents is expected to be able to separate the active fungicide compounds based on their polarity level. . Decoction method Decoction method has been used to extract the fungicidal active compounds found in betel leaf. Samples were boiled in water with ratio of : for hour. The extract are filtered and sterilized using autoclave at temperature of °C to obtain sterile betel leaf extract (Trisnawati et al., ). The boiling process of the Cassia alata Linnaeus sample which was blended with water was carried out for minutes (Arneti Sulyanti, ). This decoction method is rarely used because it is feared that the active fungicidal compounds present in the sample could be damaged by heat treatment. In vitro test as fungicide for chili The review results of research related to in vitro fungicide test are summarized in Table . The concentration of the test solution was carried out in various ways. For example, the concentration of mixture of betel leaf and tobacco extract with concentration of % was made by mixing ml of PDA (Potato Dextrosa Agar) and ml of mixture of betel and tobacco extracts (Nur Rohmah, ). Another technique was found in preparation of kenikir leaf extract test solution which is done by mixing the extract with Tween as emulsifier with ratio of : (w / v) and diluted using sterile distilled water to get concentration of %, %, %, and % (Amelia et al., ). In other cases, the suren concentrated extract was assumed to be % concentration then the concentrated extract was diluted using distilled water into several concentrations ( %, %, and %) (Andriyani et al., ). Besides water, methanol was also used as solvent to make test solution for the Curcuma sp. rhizome with concentration of -ppm (Sari et al., b). The synthetic fungicide control used by several researchers in in vitro tests included: propineb %, . % propineb, azoxistrobin, diphenoconazole, benomyl, anthracol, . % acrobat and . % carbendazim. 
The usage of synthetic fungicide controls is very useful as a comparison against the plants being studied. Many studies do not use synthetic fungicide controls, so the potential of these plants is less known when compared to synthetic fungicide controls. On the other hand, the most widely used fungi for in vitro tests are Colletotrichum capsici and then Colletotrichum gloeosporioides. The percentage of inhibition of fungal mycelium (%): . -. (Lestari et al., ) Betel and Tobacco % with concentration ratio Some of the parameters used in the in vitro test include: colony diameter, percentage of colony inhibition, density / number of spores, and colony area. Colony diameter is measured by making vertical and horizontal lines perpendicular to each other at the bottom of the petri dish as vertical and horizontal diameters. Then, the colony diameter is calculated using the formula (Andreas et al., ): D = (D1 + D2)/2, with: D1 = diameter of the horizontal colony, D2 = diameter of the vertical colony. Observations are made by measuring the diameter of the growth of C. capsici colonies. The inhibition is measured using the formula: DH = ((a - b)/a) × 100%, with: DH = inhibition (percent), a = diameter of the C. capsici colony (mm) in the negative control, b = diameter of the C. capsici colony (mm) in the treatment. Spore density was determined by taking a volume of spore suspension from the isolate propagation treatment. Furthermore, the spore density was calculated using a hemocytometer onto which the suspension had been dropped, under a double lens (binocular), which is one type of lens from a light microscope, at a given magnification (Herlinda et al., ). The spore density was calculated using the Gabriel and Riyatno formula ( ): C = t / (n × 0.25) × 10^6, with: C = spore density per ml of solution, t = total number of spores in the sample boxes observed, n = number of sample boxes ( large x small boxes), 0.25 = correction factor for the use of a small-scale sample box on the haemacytometer. Colony area was measured using millimeter plotting paper by depicting the colony area on plastic glass (Liswarni Edriwilya, ). The plants studied as an alternative to natural fungicides for chili have the ability to inhibit the growth of anthracnose-causing fungi in chilies by in vitro study, which include: Colletotrichum capsici, Colletotrichum gloeosporioides, and Colletotrichum acutacum. However, many in vitro studies do not compare with synthetic fungicides. So, it is not possible to know the effectiveness of the performance of natural fungicides for chilies when compared to synthetic fungicides. Based on Table , it can be seen that the 80% alcohol extract and the 10% and 60% n-hexane extracts of Chinese henna leaves are comparable with the 0.2% propineb performance by in vitro study. In addition, the 60% and 70% kirinyuh leaf extracts were also able to match the 0.2% acrobat performance by in vitro study. In vivo test as fungicide for chili In an effort to find alternative natural fungicides, the researchers focused not only on in vitro studies but also on in vivo studies of various plants with certain concentrations, as shown in Table . This in vivo test was directly applied to chili plants to be treated with natural fungicides under test conditions appropriate to the actual environment in a chili farm. Anthracnose disease in chilli is characterized by the appearance of blackish brown spots that will expand into soft rot with black dots in the middle, which are a collection of setae and conidia of C. capsici fungi. The attack of C. capsici fungi begins by attaching the spores to the fruit and then the spores will germinate.
Furthermore, through the fungal hyphae inject the fruit tissue and take nutrients in it so that it can interfere with metabolism and even cause cell death. The more severe the disease attack, the more extensive the rotting area on the fruit will be, this is due to damage to the fruit tissue and even cell death which ultimately results in the fruit experiencing dry rot or shrinking. (Andriyani et al., ). Disease severity is the surface area of chilies that shows symptoms of disease. Disease severity can also be interpreted as the part of the plant affected by disease or the disease area of the sample plant. Determination of the percentage of disease severity can be calculated by the formula as follows (Suwastini et al., ): Several studies have also identified other parameters in the in vivo test, for example: fruit weight, mycelium dry weight, spot diameter, yield / number of red chilies, number of fruits, effectiveness and level of fungicidal ability, incubation period of anthracnose disease, morphometry of cayenne pepper, period incubation, percentage and fresh weight of healthy cayenne pepper affected by anthracnose disease, when the early symptoms of anthracnose disease appeared in red chilies, and the height of chili plants. The plants studied for chilies had the effectiveness of being used as a natural fungicide. However, many in vivo studies do not compare with synthetic fungicides. So, it is not possible to know the effectiveness of the performance of natural fungicides for chilies when compared to synthetic fungicides. Table shows that suren leaf extract and nut bulbs can be used as alternatives to natural fungicides to help overcome the problem of anthracnose in chilies. Conclusion This paper reviews the potential plants as an alternative to chili fungicides, the preparation of test solutions, in vitro and in vivo fungicide tests. The part of the plant that is widely studied as fungicide for chilies is the leaves, while the parts of the plant that are rarely used as samples are the parts of the rhizome, roots, tubers, weevils, seeds, fruit, flowers or all parts of the plant. The methods of extract preparation used as fungicide test include: maceration method, stratified fractionation method, and decoction method. The plants studied had the ability to inhibit the growth of the Colletotrichum capsici, Colletotrichum gloeosporioides, and Colletotrichum acutacum. The % alcohol extract and % and % n-hexane extract of Chinese henna leaves can be equal with the performance of . % propineb by in vitro study. In addition, the % and % kirinyuh leaf extracts were also able to match acrobat . % performance by in vitro study. Two parameters that are often observed in the in vivo test are the percentage of anthracnose disease incidence and the percentage of anthracnose disease severity. Suren and nut bulbs leaf extract can be used as alternative to natural fungicides to help overcome the problem of anthracnose in chilies. Declaration of competing interest The authors declare no known competing interests that could have influenced the work reported in this paper.
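The two in vitro quantities reviewed above — the percentage inhibition of colony growth and the haemocytometer spore density — come down to short arithmetic; a sketch is given below with illustrative numbers only, and it assumes the C = t/(n × 0.25) × 10^6 reading of the Gabriel and Riyatno formula.

def inhibition_percent(control_diameter_mm, treated_diameter_mm):
    # DH = (a - b) / a * 100; a = colony diameter in the negative control, b = in the treatment
    return (control_diameter_mm - treated_diameter_mm) / control_diameter_mm * 100.0

def spore_density_per_ml(total_spores, sample_boxes, correction=0.25):
    # C = t / (n * 0.25) * 10^6 spores per ml (assumed form of the Gabriel and Riyatno formula)
    return total_spores / (sample_boxes * correction) * 1e6

# illustrative values, not data from any of the reviewed studies
print(f"colony inhibition: {inhibition_percent(62.0, 24.5):.1f} %")
print(f"spore density: {spore_density_per_ml(total_spores=180, sample_boxes=5):.2e} spores/ml")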
2021-07-27T00:05:53.979Z
2021-05-22T00:00:00.000
{ "year": 2021, "sha1": "3f09cdbd93f10449ebd1e40cd2af4dd4229a32d6", "oa_license": "CCBYSA", "oa_url": "https://journal2.unusa.ac.id/index.php/etm/article/download/2022/1368", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "d0dddf27c58c8b5be96b7825ccf3f0a6ac631fc6", "s2fieldsofstudy": [ "Biology", "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
235298967
pes2o/s2orc
v3-fos-license
Micellar Nanocarriers of Hydroxytyrosol Are Protective against Parkinson’s Related Oxidative Stress in an In Vitro hCMEC/D3-SH-SY5Y Co-Culture System Hydroxytyrosol (HT) is a natural phenolic antioxidant which has neuroprotective effects in models of Parkinson’s disease (PD). Due to issues such as rapid metabolism, HT is unlikely to reach the brain at therapeutic concentrations required for a clinical effect. We have previously developed micellar nanocarriers from Pluronic F68® (P68) and dequalinium (DQA) which have suitable characteristics for brain delivery of antioxidants and iron chelators. The aim of this study was to utilise the P68 + DQA nanocarriers for HT alone, or in combination with the iron chelator deferoxamine (DFO), and assess their physical characteristics and ability to pass the blood–brain barrier and protect against rotenone in a cellular hCMEC/D3-SH-SY5Y co-culture system. Both HT and HT + DFO formulations were less than 170 nm in size and demonstrated high encapsulation efficiencies (up to 97%). P68 + DQA nanoformulation enhanced the mean blood–brain barrier (BBB) passage of HT by 50% (p < 0.0001, n = 6). This resulted in increased protection against rotenone induced cytotoxicity and oxidative stress by up to 12% and 9%, respectively, compared to the corresponding free drug treatments (p < 0.01, n = 6). This study demonstrates for the first time the incorporation of HT and HT + DFO into P68 + DQA nanocarriers and successful delivery of these nanocarriers across a BBB model to protect against PD-related oxidative stress. These nanocarriers warrant further investigation to evaluate whether this enhanced neuroprotection is exhibited in in vivo PD models. Introduction Hydroxytyrosol (HT) is a natural phenolic compound that has generated interest in Parkinson's disease (PD) research due to its antioxidant properties. HT is a major component of olive oil and therefore prominent in the Mediterranean diet [1,2], which has been related to lower mortality [3,4], improved cardiovascular health [5,6] and slower cognitive decline [7]. A wide body of research supports a protective role of HT against neurodegeneration [8][9][10][11][12][13]. HT's catecholic structure provides reactive oxygen species (ROS) scavenging properties through the ability of the benzene ring-bound hydroxyl groups to donate either an electron or hydrogen atom to stabilise ROS [14][15][16]. In PD-related cellular models, HT protects dopaminergic neurons against cell death following oxidative stress [8,11] and protects against alpha synuclein fibril formation and aggregation in the PC12 cell line [17]. Animal studies have shown resistance to oxidative stress via reduced lipid peroxidation in dissociated brain cells following administration of thoroughly at 80 • C for 1-2 min and sonicated for another 1 min using a VWR Ultrasonic cleaner bath USC300T (VWR International Limited, Lutterworth, UK) to fully dissolve the film in the water. The obtained solution was filtered through a sterile 0.22 µm filter to remove any unloaded HT and DFO. Some samples were freeze dried (lyophilized) using a Virtis AdVantage 2.0 BenchTop freezedryer (SP Industries, Ipswich, UK) for the X-ray diffraction (XRD) and fourier-transform infrared (FTIR) analyses. Table 1. Hydrodynamic diameter (d), polydispersity index (PDI), surface charge, drug loading (DL) and encapsulation efficiency (EE) of blank and drug-loaded P68 + DQA nanoformulations prepared at 80 • C (mean ± S.D., n = 6). 
Sample Contents ( Size and Surface Charge of the Nanoformulations The Zetasizer Nano ZS (Malvern Instruments, Worcestershire, UK) was used to analyze the dimensions and surface charge of the nanoformulations. Photon correlation spectroscopy was used to measure size distribution as Z-Ave hydrodynamic diameter and polydispersity index. Determination of Drug Loading and Encapsulation Efficiency Drug loading and encapsulation efficiency of the nanoformulations was studied using UV-Visible (UV-Vis) spectroscopy based on the calibration curves of the free drugs. Methanol and water were used in a 1:1 ratio to dissolve the carrier and release the drug to achieve a theoretical concentration of 20 µg/mL HT and DFO. HT and DFO content were calculated using UV-Vis spectroscopy (Cary 100 UV-Vis, Agilent Technologies, Santa Clara, CA, USA) at 280 and 204 nm, respectively. The following equations were used to calculate the percentage of drug loading and encapsulation efficiency: Drug loading (%) = (determined mass of drug within nanocarriers/mass of drug-loaded nanocarriers) × 100 (1) Encapsulation efficiency (%) = (determined mass of drug within nanocarriers/theoretical mass of drug within nanocarriers) × 100 (2) Structural Analysis The Rigaku MiniFlex600 x-ray diffractometer (Rigaku Corporation, Tokyo, Japan) was used for atomic and molecular structural analysis of the samples at a 5−35 • range and step size of 0.01 • (the scanning rate was 2 • /min). XRD patterns were obtained for pure HT, DFO, P68 and DQA, as well as the P68 + DQA HT and HT + DFO nanocarriers (in lyophilized form). All XRD analysis was carried out at room temperature. A PerkinElmer Spectrum 100 FTIR spectrometer (PerkinElmer, Waltham, MA, USA) was used to analyse the chemical structure of the pure drugs, nanocarriers alone, their physical mixtures and lyophilised drug-loaded nanocarriers from 650 to 4000 cm −1 , at a resolution of 4 cm −1 . Antioxidant Power of the Antioxidant Nanoformulations The modified ferric iron reducing antioxidant power (FRAP) assay was used to determine the potential antioxidant activity of HT-loaded nanoparticles compared to free HT at a range of concentrations, as previously described [22,25]. Briefly, FRAP reagent (a mixture of pH 3.6 acetate buffer, tripyridyl triazine, and iron (III) chloride) and samples of free and P68+DQA HT were incubated for 30 min at room temperature before being read at 593 nm. In line with previous reports [25,45,46], trolox was used as the standard and the antioxidant capacity of the samples was given as the trolox equivalent concentration. For cytotoxicity evaluation of the drug-loaded nanoformulations, SH-SY5Y cells were treated with free HT and HT + DFO or the corresponding concentrations of each drugloaded nanoformulation for 24, 48 and 72 h. The MTT assay was used to assess cell viability based on the reduction of the yellow thiazolyl blue tetrazolium bromide salts (3-(4, 5dimethylthiazol-2-yl)-2, 5-diphenyltetrazolium bromide, MTT) to the purple formazan by mitochondrial dehydrogenases, as previously described [24,25]. Briefly, once confluent cells were treated with either free or nanoformulated HT at varying concentrations for up to 72 h. 20 µL of MTT diluted in DPBS (5 mg/mL) was then added to the cells. Following a 4 h incubation at 37 • C and aspiration of the wells, 100 µL of DMSO was used to dissolve any resulting formazan crystals and the plates were incubated for 15 min on a shaker (75 rpm). The absorbance was then read at 570 nm on a spectrophotometer. 
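Equations (1) and (2) above are straightforward ratios; the short sketch below separates the two so drug loading and encapsulation efficiency are not confused. The input masses are hypothetical placeholders, not measurements from this study.

def drug_loading_percent(determined_drug_mass, loaded_nanocarrier_mass):
    # Eq. (1): determined mass of drug within nanocarriers / mass of drug-loaded nanocarriers x 100
    return determined_drug_mass / loaded_nanocarrier_mass * 100.0

def encapsulation_efficiency_percent(determined_drug_mass, theoretical_drug_mass):
    # Eq. (2): determined mass of drug within nanocarriers / theoretical mass of drug x 100
    return determined_drug_mass / theoretical_drug_mass * 100.0

# hypothetical example in consistent mass units (e.g. mg per batch)
determined, theoretical, carrier_batch = 1.94, 2.00, 55.0
print(f"EE = {encapsulation_efficiency_percent(determined, theoretical):.1f} %")  # ~97 %
print(f"DL = {drug_loading_percent(determined, carrier_batch):.1f} %")            # ~3.5 %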
Trans-Endothelial Electrical Resistance Assessment The resistance of the BBB model was assessed using trans-endothelial electrical resistance (TEER) measurements as previously described by Burkhart et al. [54]. TEER values were read using an epithelial Volt-Ohm meter and sterile Chopstick Electrodes and expressed as Ω·cm 2 (resistance of the tissue (Ω) × membrane area (cm 2 )). Tight junctions increase the resistance, therefore, high TEER values are desired [51]. TEER values of hCMEC/D3 cells have been shown to reach 300 Ω·cm 2 in the presence of hydrocortisone [49,53,55]). Therefore, before carrying out any of the BBB passage experiments, TEER values were measured each day post seeding into Transwell ® plates, until a resistance of close to 300 Ω·cm 2 was reached. Assessment of Nanocarrier Passage across the hCMEC/D3 BBB Model The ability of the nanoformulations to pass across the model BBB was assessed using a transport assay as previously described [56,57]. Phenol red-free HBSS was used to carefully wash each chamber three times, avoiding disturbance to the hCMEC/D3 monolayer. In total, 1 and 2.5 mL HBSS was then added to the apical and basolateral chambers (respectively) and incubated for 10 min at 37 • C. The apical chamber was then aspirated and treated with 1.5 mL nanoformulated or corresponding free HT and HT + DFO treatments (in HBSS) at a range of concentrations for 1 h at 37 • C. Following sampling of the basolateral chambers, HT content was calculated using UV-Vis spectroscopy at 280 nm as described above. To assess the stability of the BBB model and potential cytotoxicity of the treatments, TEER measurements were taken immediately after each transport assay. hCMEC/D3 and SH-SY5Y Co-Culture in the Costar Transwell ® System The hCMEC/D3 cells were seeded at 300,000 cells/cm 2 into the 3.0 µm pore polycarbonate membrane inserts of 96-well Costar Transwell ® plates as described above. In parallel, SH-SY5Y cells were seeded at 1,000,000 cells/cm 2 into 96-well plates. Once the hCMEC/D3 cells reached a membrane potential of at least 300 Ω·cm 2 and the SH-SY5Y cells reached confluence, the hCMEC/D3 cultured Transwell ® inserts were place into the 96-well plates containing the confluent SH-SY5Y cells ready for immediate treatment. Assessment of the Protective Effects of the Nanocarriers against Rotenone Following Passage across the BBB Model SH-SY5Y cells in the basolateral chamber of the Transwell ® co-culture system were treated with 200 µL MEM. hCMEC/D3 cells in the apical chamber were treated with 150 µL of nanoformulated or free HT or HT + DFO treatments (in HBSS) at a range of concentrations. The cells were then incubated at 37 • C for 1 h. The Transwell ® inserts containing the hCMEC/D3 cells were then removed and the SH-SY5Y cells were incubated for a further 2 h at 37 • C. Following incubation, SH-SY5Y cells were treated with 100 µM rotenone for 24 h at 37 • C. The MTT assay was then carried out as described above to assess the ability of the treatments to protect against reduced cell viability induced by rotenone after passing the BBB model. The mitochondrial hydroxyl radical detection assay was conducted to assess the protective effects of the nanoformulations against rotenone induced oxidative stress in this co-culture model but using black-walled, clear-bottom 96-well microplates for the SH-SY5Y cells. This assay was carried out as previously described by Mursaleen et al. [25], in accordance with the manufacturer's protocol (ab219931; Abcam, Cambridge, UK). 
Following treatment with free or nanoformulated HT and HT + DFO and the removal of the hCMEC/D3 cultured Transwell ® inserts (as described above), SH-SY5Y cells were washed with DPBS and treated for 1 h at 37 • C with 100 µL of 6.25X OH580 probe. The cells were then incubated with 100 µM rotenone for 24 h at 37 • C. DPBS was used to wash the cells before the fluorescence was read on the Fluostar Optima Fluorescence Plate Reader (BMG LABTECH, Aylesbury, UK). Statistical Analysis The mean of six replicates was calculated for each treatment in all experiments. Data are expressed as mean ± standard deviation (S.D.). Two-way analysis of variance (ANOVA) followed by the Tukey's or Šidák multiple comparisons post hoc test was used to analyse the FRAP, TEER and BBB passage data. The MTT and mitochondrial hydroxyl assay results were analysed using one-way ANOVA followed by the Dunnett's T3 post hoc test (PRISM software package, Version 8, Graphpad Software Inc., San Diego, CA, USA). Results Both HT and HT + DFO loaded nanocarriers exhibited high mean encapsulation efficiency (95% and 97%, respectively) ( Table 1). The drug-loaded nanocarriers exhibited a significantly higher mean particle size compared to the unloaded blank nanoformulation (p < 0.0001) ( Table 1). The addition of DFO into the formulation increased the mean encapsulation efficiency of HT by 2%. The mean size of the HT and HT + DFO loaded nanocarriers were 166 and 146 nm, respectively (Table 1). Although the addition of DFO to the HT P68 + DQA nanoformulation appeared to lower the particle size, this was not a significant difference ( Table 1). The mean polydispersity indices of the nanoformulations were < 0.24 which indicates that the majority of the nanocarriers within each formulation sample were of similar size ( Table 1). The mean surface charges of the drug loaded nanocarriers were moderately positive (7-10 mV) whereas the surface charge of the blank unloaded nanoformulation was slightly negative (−0.78 mV) ( Table 1). XRD spectra for free and nanoformulated HT and HT + DFO are shown in Figure Results Both HT and HT + DFO loaded nanocarriers exhibited high mean encapsulation efficiency (95% and 97%, respectively) ( Table 1). The drug-loaded nanocarriers exhibited a significantly higher mean particle size compared to the unloaded blank nanoformulation (p < 0.0001) ( Table 1). The addition of DFO into the formulation increased the mean encapsulation efficiency of HT by 2%. The mean size of the HT and HT + DFO loaded nanocarriers were 166 and 146 nm, respectively (Table 1). Although the addition of DFO to the HT P68 + DQA nanoformulation appeared to lower the particle size, this was not a significant difference ( Table 1). The mean polydispersity indices of the nanoformulations were < 0.24 which indicates that the majority of the nanocarriers within each formulation sample were of similar size ( Table 1). The mean surface charges of the drug loaded nanocarriers were moderately positive (7-10 mV) whereas the surface charge of the blank unloaded nanoformulation was slightly negative (−0.78 mV) ( Table 1). XRD spectra for free and nanoformulated HT and HT + DFO are shown in Figure 1605 cm −1 represents the vibration of aromatic C=C bonds. Methyl group stretching and deformation is represented by the peaks at 2928 and 2847 cm −1 . The physical mixtures for each formulation (Figure 2(iD,iiE)) show peaks corresponding to each constituent within the mixture. However, the HT and DFO peaks appear less intense in the mixtures ( Figure 2). 
The FTIR spectrum for each of the lyophilized formulations are similar to those of the physical mixtures but in each case the peaks corresponding to the HT and DFO elements are less intense (Figure 2(iD,iE,iiE,iiF)). (Figure 2(iD,iiE)) show peaks corresponding to each constituent within the mixture. However, the HT and DFO peaks appear less intense in the mixtures (Figure 2). The FTIR spectrum for each of the lyophilized formulations are similar to those of the physical mixtures but in each case the peaks corresponding to the HT and DFO elements are less intense (Figure 2(iD,iE,iiE,iiF)). Figure 3 shows the antioxidant capacity of free and P68 + DQA nanoformulated HT (10-200 µM), analysed using the FRAP assay. When comparing the different concentrations of HT (F(6, 70) = 427.5, p < 0.0001) and free vs nanoformulated HT (F(1, 70) = 1029, p < 0.0001), significant differences were observed. Each P68 + DQA concentration of HT, except 10 µM, exhibited significantly higher trolox equivalent antioxidant capacity than the corresponding concentrations of free HT (p < 0.0001) (Figure 3). The percentage increase in antioxidant capacity of the P68 + DQA HT compared to the free HT preparations were over 100% for most concentrations (20,40,80, 100, and 200 µM) but all were over 93% (Figure 3). Figure 3 shows the antioxidant capacity of free and P68 + DQA nanoformulated HT (10-200 μM), analysed using the FRAP assay. When comparing the different concentrations of HT (F(6, 70) = 427.5, p < 0.0001) and free vs nanoformulated HT (F(1, 70) = 1029, p < 0.0001), significant differences were observed. Each P68 + DQA concentration of HT, except 10 μM, exhibited significantly higher trolox equivalent antioxidant capacity than the corresponding concentrations of free HT (p < 0.0001) (Figure 3). The percentage increase in antioxidant capacity of the P68 + DQA HT compared to the free HT preparations were over 100% for most concentrations (20,40,80, 100, and 200 μM) but all were over 93% (Figure 3). The same concentration ranges of free and P68 + DQA HT were then tested on the SH-SY5Y cell line to evaluate the cytotoxicity of each concentration, using the MTT assay. Cell viability was maintained at control levels or above following treatment for 24 h with 10-200 μM free and P68 + DQA HT (F(21, 81.45) = 6.801, p < 0.0001) (Figure 4). No significant reduction in cell viability was observed for any concentration of HT (free or formulated) following 48 h treatment ( Figure 4B). Although Figure 4B shows a significant reduction of cell viability compared to control, when treating with 200 μM free HT (p = 0.0135) and the corresponding blank formulations at 80 μM (p = 0.0338) and 100-200 μM (p < 0.0001), cell viability was above 80% in all cases (F(21, 57.30) = 6.9155, p < 0.0001). By the 72 h time point, a significant reduction in cell viability was observed for free HT at 40 μM (p = 0.0141), free and P68 + DQA formulated HT at 60-200 μM (p < 0.0001), and with the corresponding blank formulations (p < 0.0005) ( Figure 4C). However, no cytotoxicity was observed with 40 μM treatment of free and P68 + DQA formulated HT (F(21, 73.11) = 29.41, p < 0.0001) ( Figure 4C). The same concentration ranges of free and P68 + DQA HT were then tested on the SH-SY5Y cell line to evaluate the cytotoxicity of each concentration, using the MTT assay. Cell viability was maintained at control levels or above following treatment for 24 h with 10-200 µM free and P68 + DQA HT (F(21, 81.45) = 6.801, p < 0.0001) (Figure 4). 
No significant reduction in cell viability was observed for any concentration of HT (free or formulated) following 48 h treatment ( Figure 4B). Although Figure 4B shows a significant reduction of cell viability compared to control, when treating with 200 µM free HT (p = 0.0135) and the corresponding blank formulations at 80 µM (p = 0.0338) and 100-200 µM (p < 0.0001), cell viability was above 80% in all cases (F(21, 57.30) = 6.9155, p < 0.0001). By the 72 h time point, a significant reduction in cell viability was observed for free HT at 40 µM (p = 0.0141), free and P68 + DQA formulated HT at 60-200 µM (p < 0.0001), and with the corresponding blank formulations (p < 0.0005) ( Figure 4C). However, no cytotoxicity was observed with 40 µM treatment of free and P68 + DQA formulated HT (F(21, 73.11) = 29.41, p < 0.0001) ( Figure 4C). The 10 and 20 µM concentrations of free and P68 + DQA HT exhibited no cytotoxicity at any time point (24, 48 or 72 h) and were therefore used in the subsequent evaluations. The 50 and 100 µM DFO were used for the combined treatments based on our previous reports [24,25]. The mean TEER of hCMEC/D3 cell monolayers grown on Transwell ® inserts peaked at 320 Ω·cm 2 on day five post seeding and no significant difference in TEER was observed following any of the free and nanoformulated HT and HT + DFO treatments ( Figure 5). The 10 and 20 μM concentrations of free and P68 + DQA HT exhibited no cytotoxicity at any time point (24, 48 or 72 h) and were therefore used in the subsequent evaluations. The 50 and 100 μM DFO were used for the combined treatments based on our previous reports [24,25]. The mean TEER of hCMEC/D3 cell monolayers grown on Transwell ® inserts peaked at 320 Ω·cm 2 on day five post seeding and no significant difference in TEER was observed following any of the free and nanoformulated HT and HT + DFO treatments ( Figure 5). When comparing the P68 + DQA nanoformulation and free drug treatments of HT and HT + DFO, significant differences in the percentage of HT were observed following BBB passage (F(1, 32) = 406.4, p < 0.0001) ( Figure 6). All P68 + DQA formulations of HT and HT + DFO resulted in significantly more HT (between 34.8 and 50.1%) compared to the free drug treatments (p < 0.0001 in all cases), reaching more than 76% HT following passage across the hCMEC/D3 monolayer with the P68 + DQA 10 μM HT treatment (Figure 6). When comparing the P68 + DQA nanoformulation and free drug treatments of HT and HT + DFO, significant differences in the percentage of HT were observed following BBB passage (F(1, 32) = 406.4, p < 0.0001) ( Figure 6). All P68 + DQA formulations of HT and HT + DFO resulted in significantly more HT (between 34.8 and 50.1%) compared to the free drug treatments (p < 0.0001 in all cases), reaching more than 76% HT following passage across the hCMEC/D3 monolayer with the P68 + DQA 10 µM HT treatment ( Figure 6). When assessing the ability of free and P68 + DQA HT and HT + DFO to protect against rotenone induced cytotoxicity following BBB passage, significant differences were observed (F(9, 22.26) = 49.87, p < 0.0001) (Figure 7). All free and P68 + DQA HT and HT + DFO pretreatments resulted in significantly higher cell viability compared to rotenone treatment alone ( Mitochondrial hydroxyl levels were also assessed using the Transwell ® model to evaluate the ability of the free and nanoformulated treatments to protect against rotenone induced oxidative stress. 
Significant differences were observed when using the mitochondrial hydroxyl assay to assess the ability of free and P68 + DQA HT and HT + DFO to protect against rotenone induced oxidative stress in the Transwell ® model (F(8, 28.41 = 107.9, p < 0.0001) (Figure 8). Both 10 and 20 μM P68 + DQA HT conditions resulted in significantly lower levels of hydroxyl compared to the corresponding free drug conditions (p = 0.0298 and p = 0.0003, respectively) ( Figure 8). However, the combination of 10 μM HT and 100 μM DFO in P68 + DQA nanoformulations resulted in the lowest percentage mitochondrial hydroxyl levels relative to control (1.3%) compared to all the other HT and HT + DFO treatments (Figure 8). Treatments were added to the apical compartment of the Transwell ® system and incubated for 3 h, the SH-SY5Y cells were then incubated with 100 μM rotenone for 24 h. These results were compared to rotenone treatment alone. MEM represents the control condition where cells were only treated with media (mean ± S.D., n = 6). * represents significance values of control or pre-treatment conditions compared to rotenone treatment alone (**** p < 0.0001, *** p < 0.001, ** p < 0.01). # represents significance values of nanoformulated drug compared to free drug within the same treatment condition (### p < 0.001, # p < 0.05). Figure 7. SH-SY5Y MTT assay results for free and P68 + DQA preparations of 10 and 20 µM HT and combined HT and DF0 (10 or 20 µM HT + 50 or 100 µM DFO, respectively) following passage across the hCMEC/D3-SH-SY5Y co-culture Transwell ® system. The hCMEC/D3 cells were grown on the insert and the SH-SY5Y cells were located at the bottom of the basolateral compartment. Treatments were added to the apical compartment of the Transwell ® system and incubated for 3 h, the SH-SY5Y cells were then incubated with 100 µM rotenone for 24 h. These results were compared to rotenone treatment alone. MEM represents the control condition where cells were only treated with media (mean ± S.D., n = 6). * represents significance values of control or pre-treatment conditions compared to rotenone treatment alone (**** p < 0.0001, *** p < 0.001, ** p < 0.01). # represents significance values of nanoformulated drug compared to free drug within the same treatment condition (### p < 0.001, # p < 0.05). Mitochondrial hydroxyl levels were also assessed using the Transwell ® model to evaluate the ability of the free and nanoformulated treatments to protect against rotenone induced oxidative stress. Significant differences were observed when using the mitochondrial hydroxyl assay to assess the ability of free and P68 + DQA HT and HT + DFO to protect against rotenone induced oxidative stress in the Transwell ® model (F(8, 28.41 = 107.9, p < 0.0001) (Figure 8). Both 10 and 20 µM P68 + DQA HT conditions resulted in significantly lower levels of hydroxyl compared to the corresponding free drug conditions (p = 0.0298 and p = 0.0003, respectively) ( Figure 8). However, the combination of 10 µM HT and 100 µM DFO in P68 + DQA nanoformulations resulted in the lowest percentage mitochondrial hydroxyl levels relative to control (1.3%) compared to all the other HT and HT + DFO treatments (Figure 8). Figure 7. SH-SY5Y MTT assay results for free and P68 + DQA preparations of 10 and 20 μM HT and combined HT and DF0 (10 or 20 μM HT + 50 or 100 μM DFO, respectively) following passage across the hCMEC/D3-SH-SY5Y co-culture Transwell ® system. 
The hCMEC/D3 cells were grown on the insert and the SH-SY5Y cells were located at the bottom of the basolateral compartment. Treatments were added to the apical compartment of the Transwell ® system and incubated for 3 h, the SH-SY5Y cells were then incubated with 100 μM rotenone for 24 h. These results were compared to rotenone treatment alone. MEM represents the control condition where cells were only treated with media (mean ± S.D., n = 6). * represents significance values of control or pre-treatment conditions compared to rotenone treatment alone (**** p < 0.0001, *** p < 0.001, ** p < 0.01). # represents significance values of nanoformulated drug compared to free drug within the same treatment condition (### p < 0.001, # p < 0.05). Figure 8. SH-SY5Y mitochondrial hydroxyl assay results for free and P68 + DQA preparations of 10 and 20 μM HT and combined HT and DF0 (10 or 20 μM HT + 50 or 100 μM DFO, respectively) following passage across the hCMEC/D3-SH-SY5Y co-culture Transwell ® system. The hCMEC/D3 cells were grown on the insert and the SH-SY5Y cells were located at the bottom of the basolateral Figure 8. SH-SY5Y mitochondrial hydroxyl assay results for free and P68 + DQA preparations of 10 and 20 µM HT and combined HT and DF0 (10 or 20 µM HT + 50 or 100 µM DFO, respectively) following passage across the hCMEC/D3-SH-SY5Y co-culture Transwell ® system. The hCMEC/D3 cells were grown on the insert and the SH-SY5Y cells were located at the bottom of the basolateral compartment. Treatments were added to the apical compartment of the Transwell ® system and incubated for 3 h, the SH-SY5Y cells were then incubated with 100 µM rotenone for 24 h. These results were compared to rotenone treatment alone. Mitochondrial hydroxyl levels are expressed as the percentage of hydroxyl identified in control cells (SH-SY5Y cells treated with MEM media only, for 24 h). (mean ± S.D., n = 6). * represents significance values of control or pre-treatment conditions compared to rotenone treatment alone (**** p < 0.0001, ** p < 0.01). # represents significance values of nanoformulated drug compared to free drug within the same treatment condition (### p < 0.001, # p < 0.05). Discussion There is increasing evidence suggesting that HT is protective in numerous models of PD [2,11,[17][18][19]58,59]. Yet, the full therapeutic potential of HT as a disease modifying treatment for PD is unlikely to be reached due to issues such as low bioavailability and stability, lack of targeted delivery, and limited brain delivery [20]. The aim of this study was firstly to assess the ability of the P68 + DQA micellar nanocarriers (developed by Mursaleen et al. [24,25]) to incorporate HT, alone or in combination with DFO, and sec-ondly to assess whether these nanoformulations could protect against reduced cell viability and increased oxidative stress induced by a rotenone model of PD in a hCMEC/D3-SH-SY5Y Transwell ® co-culture system. HT, alone or combined with DFO, was successfully incorporated into P68 + DQA nanocarriers with high loading efficiency (Table 1). This is consistent with the encapsulation efficiencies of these nanocarriers with other antioxidants [24,25]. The HT and HT + DFO P68 + DQA nanocarriers exhibited consistent particle sizes (polydispersity indices < 0.24), each below 170 nm ( Table 1), suggesting that these formulations should be of dimensions sufficient to cross the BBB based on previous reports [23,60,61]. 
The mean surface charges of the HT and HT + DFO P68 + DQA nanoformulations were similarly neutral (+7.43 and +9.87 mV, respectively) ( Table 1). These relatively neutral surface charges suggest that these HT P68 + DQA nanocarriers, with and without the combination of DFO, should be able to access the brain without causing toxicity to the BBB [23,28,30,31,62,63]. XRD studies revealed the crystalline nature of free HT and DFO. This was suppressed by formulation into P68 + DQA nanocarriers (Figure 1). This amorphous transformation is of benefit to these formulations due to the known association with increased solubility and stability [64]. This suggests that these HT and HT + DFO P68 + DQA nanoformulations would be suitable for oral or nasal delivery as they should remain stable once ingested or inhaled and would be more easily absorbed into the blood for systemic or neuronal circulation than free HT and HT + DFO, due to the increased solubility [65]. The decrease in intensity of the HT and DFO peaks, with minimal shifting, in the FTIR spectra for the relevant lyophilized formulations compared to the physical mixtures ( Figure 2) indicates the incorporation of each of these drugs into the P68 + DQA nanoformulations, without any conjugation interactions between the chemical groups [22,66]. The concentration ranges selected for HT (10-200 µM) and tested in the FRAP and MTT assays were based on and consistent with previous literature [2,[67][68][69]. The FRAP results show a correlation between increased concentration of HT and increased antioxidant capacity (Figure 3). Generally, the P68 + DQA HT nanoformulations exhibited significantly higher antioxidant capacity than the corresponding free HT concentrations (Figure 3). This is likely due to the improved stability of HT when loaded into the P68 + DQA nanocarriers as low stability is a possible disadvantage for polyphenols such as HT due to the extraction process [20]. Ultimately, the 10 and 20 µM concentrations of HT were selected for further evaluation as these were the highest concentrations of both the free drug and P68 + DQA nanoformulations that resulted in no observable cytotoxicity in SH-SY5Y cells after treatment for up to 72 h (Figure 4). The hCMEC/D3 cell line was used to model the BBB in a Transwell ® system based on previous studies [47][48][49][50][51][52]. The different free and nanoformulated HT treatments, alone and in combination with DFO, were tested on this model to assess whether they are likely to enter the brain in vivo and to evaluate the protective effects of these treatments against rotenone induced oxidative stress. Importantly, no significant differences in TEER values were observed following treatment with all concentrations of free and P68 + DQA HT and HT + DFO ( Figure 5), suggesting that none of these treatments are likely to cause toxicity to the BBB. The results of this study indicate that HT can pass across the BBB to some extent as supported by previous literature [11,18,19]. However, in every case P68 + DQA nanoformulation increased the percentage of HT reaching the basolateral compartment of the Transwell ® model by up to 50% (with 10 µM HT) ( Figure 6). Rotenone was used to model PD in this system as it is a pesticide and insecticide that is commonly used to induce the characteristic features of PD in both in vitro and in vivo models [70]. It is a strong inhibitor of mitochondrial complex 1 and has been linked to the higher incidences of PD in agricultural areas [71,72]. 
Rotenone inhibits electron transfer from the iron-sulphur clusters in complex I to ubiquinone which blocks oxidative phosphorylation and limits ATP synthesis [73]. Such incomplete electron transfer also results in the excessive formation of ROS and together eventually leads to apoptosis of the affected cells [74][75][76]. Unlike other neurotoxin models of PD, rotenone models have been shown to produce the most PD-like motor symptoms in animals as well as the most histopathological hallmarks of PD, from iron accumulation and oxidative stress to Lewy body pathology [77][78][79][80][81][82]. When using the Transwell ® model to evaluate the protective effects of free and P68 + DQA HT and HT + DFO against rotenone induced cytotoxicity and mitochondrial hydroxyl in SH-SY5Y cells following passage across the hCMEC/D3 membrane, the P68 + DQA nanoformulated treatments were superior in every case (Figures 7 and 8). This indicates that the P68 + DQA formulations were able to mostly stay intact until reaching the mitochondria within the SH-SY5Y cells, as it is here where rotenone exerts its effects as a mitochondrial complex 1 inhibitor [72]. The highest concentrations of the P68 + DQA combinations of HT and DFO were the most effective of the treatments at protecting against rotenone induced cytotoxicity and increased mitochondrial hydroxyl, in both cases maintaining cell viability above 80% and hydroxyl at least in line with control levels (Figures 7 and 8). However, there was no significant difference between the 20 µM HT and 20 µM HT + 100 µM DFO pre-treatments. This perhaps relates to the reported iron chelating properties of HT [83], suggesting that there may be little added value in combining HT and DFO, despite the combination treatments being the most effective overall. Taken together, these results suggest that P68 + DQA HT and HT + DFO nanocarriers have the relevant characteristics to access the brain without producing cytotoxicity and protect against rotenone induced oxidative stress. Conclusions This study demonstrates for the first time the incorporation of HT and HT + DFO into P68 + DQA nanocarriers and successful delivery of these nanocarriers across a BBB model to protect against PD-related oxidative stress. These results highlight the benefit of using micellar nanocarriers to improve the passage of HT across biological membranes and enhance its therapeutic effects. The ability of the P68 + DQA nanocarriers to enhance the protective effects of HT and HT + DFO against rotenone induced oxidative stress warrants further investigation in in vivo models as it suggests that these nanocarriers have potential to become therapeutic agents for PD. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
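Two simple calculations recur throughout the results above: area-normalised TEER and the share of HT recovered in the basolateral chamber after the transport assay. A minimal sketch of both is given below; the blank-insert subtraction and all numbers are assumptions for illustration rather than part of the published protocol.

def teer_ohm_cm2(measured_ohm, blank_insert_ohm, membrane_area_cm2):
    # TEER = (resistance of the cell-covered insert - resistance of a blank insert) x membrane area
    return (measured_ohm - blank_insert_ohm) * membrane_area_cm2

def percent_bbb_passage(basolateral_conc, basolateral_volume_ml, dose_conc, dose_volume_ml):
    # percentage of the applied HT recovered in the basolateral chamber
    recovered = basolateral_conc * basolateral_volume_ml
    applied = dose_conc * dose_volume_ml
    return recovered / applied * 100.0

# illustrative numbers only
print(f"TEER = {teer_ohm_cm2(980.0, 120.0, 0.33):.1f} ohm*cm2")       # ~283.8 ohm*cm2
print(f"passage = {percent_bbb_passage(3.2, 2.5, 10.0, 1.5):.1f} %")  # ~53.3 %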
2021-06-03T06:17:23.417Z
2021-05-31T00:00:00.000
{ "year": 2021, "sha1": "0f638091c3874ca5689da3683b28d512393301db", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3921/10/6/887/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "79d4e75babf6adb65d268d711ffcbd03f50017fb", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
271831445
pes2o/s2orc
v3-fos-license
Perioperative antithrombotic medication: An approach for the primary care clinician
The primary care clinician faces many challenges and is often left to manage complex pathology because of resource constraints at higher levels of care. One of these complex conditions is the perioperative management of antithrombotic medication. This narrative review is focused on helping the clinician navigate the complex path and multiple guidelines related to the perioperative use of antithrombotic medication. Perioperative antithrombotic guidelines (American College of Chest Physicians, European Society of Regional Anaesthesia, and American Society of Regional Anesthesia) and relevant publications were identified by a PubMed search using the terms perioperative AND anticoagulants OR antithrombotics AND guideline. Issues relevant to clinical practice were identified, and attempts were made to explain any ambiguity that arose. Adhering to basic pharmacological principles and evidence-based guidelines allows for the safe usage of antithrombotics. Knowing when to stop, continue, bridge and restart antithrombotic medication prevents perioperative morbidity and mortality. Stopping antithrombotic medication too early can lead to thromboembolic complications associated with the primary disease process. Not stopping antithrombotic medication or stopping it too late can potentially cause life-threatening bleeding, haematomas and increased transfusion requirements.
Introduction
It is common for clinicians involved in perioperative care to encounter patients on antithrombotic medication. In 2017, over 4 million patients were prescribed an anticoagulant in the United States Medicare database.1 The number of inpatient and day-case surgeries is increasing yearly, and it is a case of when, rather than if, the primary care clinician encounters a patient on antithrombotic therapy presenting for elective or emergency surgery.2,3 Patients on antithrombotics are at increased risk of dying, cerebrovascular accidents (CVAs), gastrointestinal bleeds (GIB), permanent paralysis because of spinal haematomas, endovascular stent thrombosis, mechanical heart valve thrombosis, deep venous thrombosis (DVT) and the complications arising from embolisation.4 Knowing how to manage patients on these agents is essential in the primary care clinician's armamentarium. The pre-operative challenge is balancing the risk of bleeding from surgery and weighing it up against the risks associated with cessation of medication. Stopping antithrombotics carries a significant risk of thrombosis of coronary, carotid, intracerebral or other stents, intravascular devices and arterial and venous vessels.5 In the immediate postoperative period, a hypercoagulable state ensues, but it is also when a patient is at the highest risk of bleeding from a surgical wound.6 Reinitiating antithrombotic therapy must be performed promptly, but it must be balanced against the risk of haemorrhagic complications. This review will attempt to guide the primary care clinician by summarising the most recent published guidelines on the topic. These include the American College of Chest Physicians (ACCP) Perioperative Management of Antithrombotic Therapy practice guidelines, the European Society of Regional Anaesthesia (ESRA) Regional Anaesthesia in Patients on Antithrombotic Drugs guidelines and the American Society of Regional Anesthesia (ASRA) guidelines on Regional Anesthesia in the Patient Receiving Antithrombotic or Thrombolytic Therapy.7,8,9
The South African Society of Anaesthesiologists (SASA) published the Guidelines for Regional Anaesthesia in South Africa in 2016, but the section on perioperative anticoagulation and regional anaesthesia is based on ASRA guidelines; hence the ASRA guidelines will be discussed.10
Of the 44 recommendations, 33 were of very low certainty of evidence. This speaks to the relative paucity of information on the subject and is a confounder when applying these guidelines to clinical practice. However, given that these guidelines are written with a primary focus on safety, it is still reasonable to apply them. The American Society of Regional Anesthesia and ESRA guidelines used a similar approach in synthesising and compiling the recommendations, but grading was performed differently. Similar limitations were found in all guidelines, as complications such as spinal haematomas are rare. The bulk of the recommendations are made from observational and epidemiological studies, with some being made from expert opinion.
Pharmacology
Agents that inhibit haemostasis are classified into either antiplatelets or anticoagulants (Figure 1). Injectable antithrombotics have dual effects. The antiplatelet medication inhibits platelet aggregation and thrombus formation. They are most useful at preventing arterial thrombi, and indications for their use include treatment and prophylaxis against vascular occlusive disease states and prevention of stent thrombosis.4
Aspirin (ASA), a non-selective cyclo-oxygenase (COX) inhibitor, is used in secondary prophylaxis against cardiovascular disease, treatment of peripheral vascular disease and acute coronary syndromes (ACS) and as primary prophylaxis against stent thrombosis. The P2Y12 inhibitors (clopidogrel, ticlopidine, ticagrelor, prasugrel and cangrelor) are used in combination with ASA to treat ACS and prevent stent thrombosis. If not stopped preoperatively, P2Y12 inhibitors have a powerful antiplatelet effect and increase blood loss and the need for transfusions.7,11
Oral anticoagulants are used to prevent and treat venous thromboembolism (VTE).
[Figure 1 abbreviations: COX, cyclo-oxygenase; VKA, vitamin K antagonist; DOAC, direct-acting oral anticoagulant; LMWH, low molecular weight heparin; UFH, unfractionated heparin.]
Vitamin K antagonists (VKAs)
Warfarin is also used to treat DVTs and pulmonary embolism.12 Regular international normalised ratio (INR) monitoring is required to assess the adequacy of warfarin therapy. An INR of two to three is desirable for most conditions, while an INR of 3-4 is recommended for mechanical valves at high risk of thrombosis.13
Direct-acting oral anticoagulants (DOACs) are a group of drugs that directly inhibit thrombin or factor Xa. Dabigatran is the classic direct thrombin inhibitor, and rivaroxaban, apixaban and edoxaban are factor Xa inhibitors. These agents are used for similar indications as warfarin but are considered superior as they do not require routine drug monitoring and have less bleeding risk. Direct-acting oral anticoagulants have three significant drawbacks: they cannot be used in patients with mechanical heart valves as they increase mortality because of thrombotic complications, emergent reversal of anticoagulation is challenging as antidotes are not as readily available as vitamin K is for warfarin, and they are more expensive than vitamin K antagonists and not readily available in the public sector.14 The single exit price (SEP) of 30 tablets of Xarelto 10 (Bayer [Pty] Ltd) is R1130.39 compared to an SEP of R180.09 for 100 tablets of Cipla-Warfarin 5 mg (Cipla Medpro [Pty] Ltd).15,16
Injectable antithrombotics include heparins (unfractionated and low molecular weight), fondaparinux (direct factor X inhibitor) and glycoprotein IIb/IIIa inhibitors. Heparins are used ubiquitously and will be discussed further. Unfractionated heparin (UFH) exerts its effects by binding to anti-thrombin III, which inactivates factors X and II (thrombin). Low molecular weight heparin (LMWH) exerts most of its effects directly on factor X. Enoxaparin sodium, better known by its trade name Clexane (Sanofi Aventis South Africa [Pty] Ltd), is the most commonly used LMWH in the public sector. Low molecular weight heparin is more expensive than UFH (SEP R784.02 per packet of 10 prefilled Clexane 40 mg injections vs. SEP R66.71 for 5 x 5 mL vials of 5000 IU/mL UFH). However, given its predictable pharmacokinetics, safety of being prefilled and ease of administration, LMWH is the preferred injectable anticoagulant in clinical practice and is used extensively in the public sector in South Africa.17,18 It is administered through the intravenous or subcutaneous route, as a bolus. Protamine sulphate can fully reverse UFH's anticoagulant effect, while LMWH can only be partially reversed with protamine (60%). Heparins are used as prophylaxis against thrombosis, as well as for treating thromboembolism and perioperatively to bridge patients who cannot take oral anticoagulants and are at high risk of thrombotic complications.
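As a compact illustration of the classification just described (essentially the structure of Figure 1), the sketch below encodes the drug classes as a nested lookup table in Python. It is only an example: the dictionary layout and helper function are assumptions, and the glycoprotein IIb/IIIa entries (abciximab, eptifibatide, tirofiban) are added as commonly cited members of that class rather than names taken from this review.

# Illustrative sketch: the antithrombotic classification described above,
# encoded as a simple lookup table.
ANTITHROMBOTICS = {
    "antiplatelet": {
        "COX inhibitor": ["aspirin"],
        "P2Y12 inhibitor": ["clopidogrel", "ticlopidine", "ticagrelor", "prasugrel", "cangrelor"],
    },
    "oral anticoagulant": {
        "vitamin K antagonist": ["warfarin"],
        "direct thrombin inhibitor (DOAC)": ["dabigatran"],
        "factor Xa inhibitor (DOAC)": ["rivaroxaban", "apixaban", "edoxaban"],
    },
    "injectable": {
        "heparin": ["unfractionated heparin", "enoxaparin (LMWH)"],
        "factor X inhibitor": ["fondaparinux"],
        "glycoprotein IIb/IIIa inhibitor": ["abciximab", "eptifibatide", "tirofiban"],  # examples not named in this review
    },
}

def classify(drug):
    """Return (class, subclass) for a listed drug name, or None if it is not listed."""
    for drug_class, subclasses in ANTITHROMBOTICS.items():
        for subclass, drugs in subclasses.items():
            if drug.lower() in drugs:
                return drug_class, subclass
    return None

print(classify("rivaroxaban"))  # ('oral anticoagulant', 'factor Xa inhibitor (DOAC)')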
Fondaparinux is an alternative agent to classical heparins that can be used for heparin resistance or heparin-induced thrombocytopaenia. Its widespread use is mainly limited by its cost (SEP R2333.69 per 10 injections).19
Perioperative considerations
Patients on antithrombotics present for elective or emergency surgery. Emergency surgery must proceed, and one must deal with the consequences of bleeding as best as possible. Elective surgical patients on antithrombotics are subdivided into three groups: high risk, low risk and intermediate risk of thromboembolism.20 Low bleeding-risk surgery includes minor dermatological procedures, cataract procedures, minor dental procedures and pacemaker implantation.20 The most important patient groups to identify are those at high risk of bleeding and thrombosis. Following the correct classification, perioperative management largely depends on the antithrombotic agent administered.
Antiplatelet agents
Antiplatelet agents prevent coronary, cerebral and peripheral vascular events. Patients with vascular risk factors are at high risk of perioperative myocardial adverse events (cardiac death, non-fatal myocardial infarction and cardiac arrest).21 Prematurely or incorrectly stopping medication in vasculopathy is potentially fatal. European Society of Regional Anaesthesia guidelines recommend that ASA at doses less than 200 mg can be continued in patients undergoing procedures other than neurosurgical or ophthalmological procedures with a minimal increase in bleeding events.9,18 The same recommendation does not apply to the P2Y12 inhibitors, and they should be stopped routinely before elective surgery. Prasugrel therapy must be stopped 7-10 days prior to surgery, clopidogrel 5-7 days and ticagrelor 3-5 days. P2Y12 inhibitors can be recommenced within 24 h after surgery, as it takes 4-5 days for them to reach a therapeutic effect when administered without a loading dose.7 Patients with a recently placed coronary artery stent should have all non-emergent surgery delayed for at least 6 weeks to 3 months as they require dual antiplatelet therapy (DAPT). In the elective setting, DAPT should be continued for 6-12 months, depending on the type of stent. Emergency surgery should proceed knowing that the patient has an elevated bleeding risk.7 Should the patient require elective surgery outside of the critical 6 weeks to 3 months period but still within the recommended DAPT period, the following approach is recommended: preoperatively, ASA should be continued and the P2Y12 inhibitor stopped. The P2Y12 inhibitor can be reinitiated as soon as the bleeding risk after surgery allows; ideally, this should occur within the first 24 h.7 Cangrelor is used where bridging of a P2Y12 inhibitor is deemed essential. The drug must be initiated within 72 h of stopping the P2Y12 inhibitor and stopped 1-6 h prior to surgery. It can be reinitiated 4-6 h after surgery. Bridging therapy with cangrelor or a heparinoid is recommended only if a stent has been placed in a critical area, such as the left main stem, or it has been placed within the prior 3 months.7 Minor ophthalmological, dermatological and dental procedures can continue without DAPT interruption with no increased risk of bleeding.7
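The fixed preoperative hold times quoted above lend themselves to a small lookup helper. The sketch below is illustrative only and simply encodes the intervals stated in this review rather than implementing any society guideline; the function name and the choice to return the conservative upper bound of each range by default are assumptions made for the example.

# Days each P2Y12 inhibitor should be held before elective surgery, as quoted above.
P2Y12_HOLD_DAYS = {"prasugrel": (7, 10), "clopidogrel": (5, 7), "ticagrelor": (3, 5)}

def preop_hold_days(drug, conservative=True):
    """Return the preoperative hold time in days for an antiplatelet agent."""
    drug = drug.lower()
    if drug == "aspirin":
        return 0  # ASA <= 200 mg is usually continued outside neurosurgical/ophthalmological surgery
    low, high = P2Y12_HOLD_DAYS[drug]
    return high if conservative else low

for agent in ("clopidogrel", "ticagrelor", "prasugrel", "aspirin"):
    print(agent, preop_hold_days(agent))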
Vitamin K antagonists
Vitamin K antagonists should be stopped at least 5 days prior to surgery. Routine monitoring of INR is not indicated after cessation of VKAs unless the patient is known to have a labile INR. Medication should be restarted within 24 h after surgery unless there is ongoing bleeding, an expected additional intervention or a clinical reason to withhold. Full anticoagulation will only occur after 4-8 days. This recommendation applies to all elective surgery, and bridging anticoagulation, usually with LMWH or UFH, is only required for high-risk patients. Simple dental extractions, dermatological procedures and cataract removal are the exception and do not require cessation of anticoagulation.7
In patients presenting for emergency surgery with an elevated INR and at high risk of bleeding, the following strategy is recommended:
1. Stop warfarin.
2. Administer intravenous vitamin K 10 mg if actively bleeding.
3. Administer prothrombin complex 25 IU/kg - 50 IU/kg, OR
4. Fresh frozen plasma 10 mL/kg - 20 mL/kg.
5. Monitor INR every 30 min until the goal INR has been achieved.22,23
Direct oral anticoagulants
Direct oral anticoagulants confer a significant advantage over VKAs because they have a predictable pharmacological onset and offset time. The anti-factor Xa agents should be stopped 2 days before major surgery, and dabigatran should be stopped 2 days before elective surgery in a patient with a creatinine clearance (CrCl) > 50 mL/min and 4 days prior to surgery in those with a CrCl < 50 mL/min. In low bleeding-risk procedures, all DOACs can be resumed 24 h after surgery, while in high bleeding-risk procedures, DOACs should be resumed 48-72 h after surgery. Neither routine perioperative drug monitoring nor heparin bridging is recommended because of the favourable pharmacokinetics and the rapidity (2 h) with which DOACs reach peak effect.7
Bridging and heparins
The decision to continue or withhold anticoagulants or bridge the patient with heparin is based on three principles: the type of oral anticoagulant given, the bleeding risk of surgery and the risk of thrombosis. Routine heparin bridging after cessation of VKAs is not recommended as it increases bleeding risk without conferring additional protection against thrombosis. The exception to this, where bridging therapy is reasonable, is in patients at high risk of thrombosis, i.e., patients in whom VKAs have been stopped. Low molecular weight heparin (1 mg/kg 12 hourly) should be stopped 24 h prior to surgery. Treatment can be recommenced 24 h after low bleeding-risk surgery and 48-72 h after high bleeding-risk surgery. Intravenous (IV) UFH should be stopped 4-6 h prior to surgery and can be recommenced 24 h after surgery (depending on postoperative bleeding risk). Figure 2 summarises antithrombotic therapy recommendations in high-risk surgery.
Neuraxial and deep nerve block considerations
The most feared complication after neuraxial anaesthesia (spinal and epidural anaesthesia) is a spinal haematoma leading to transient or permanent loss of function. The incidence of significant complications after neuraxial anaesthesia (spinal haematoma and abscess) ranges from 1:6000 cases in non-obstetric anaesthesia to 1:154 000 in obstetric anaesthesia, with epidural anaesthesia carrying a greater risk than spinal anaesthesia.24 Spinal haematoma formation carries high morbidity; hence, specific recommendations are made regarding anticoagulants and neuraxial procedures or therapies (Figure 3).8
The American Society of Regional Anesthesia and Pain Medicine gives guidance on three clinical situations: how to manage antithrombotic medication prior to neuraxial anaesthesia, how to manage antithrombotic medication after neuraxial anaesthesia and how to manage antithrombotic medication with an epidural catheter in situ.8 The European Society of Regional Anaesthesia considers deep nerve blocks to have a high risk of bleeding. The same recommendations apply to neuraxial anaesthesia and deep nerve blocks.18 Examples of commonly used deep nerve blocks are the deep cervical plexus block, infraclavicular brachial plexus nerve block, thoracic and lumbar paravertebral block, lumbar plexus block, proximal sciatic nerve block and pericapsular nerve block.18 The superficial nerve blocks can generally be performed without cessation of anticoagulation.18
Vitamin K antagonists
Neuraxial anaesthesia should only be performed if the INR is less than 1.5. Vitamin K antagonists can be restarted with an epidural catheter in situ, but the catheter should only be removed if the INR is less than 1.5.8
Unfractionated heparin
Subcutaneous use of UFH has largely been superseded by LMWH in clinical practice. For this review, IV use of UFH will be discussed. Unfractionated heparin must be stopped 4-6 h prior to spinal anaesthesia or epidural catheter insertion. The activated partial thromboplastin time (aPTT) must also be normal. Intravenous UFH can be reinitiated within 1 h after spinal anaesthetic or epidural catheter removal. Maintaining an indwelling neuraxial catheter while a patient is on IV UFH is not advised. In the event of a traumatic neuraxial anaesthetic, it is reasonable to withhold initiation of UFH. However, the guidance regarding how long to wait before the first postoperative dose after a traumatic neuraxial anaesthetic is unclear.8
Low molecular weight heparin
Therapeutic LMWH (1 mg/kg, 12 hourly) should be stopped 24 h before neuraxial anaesthesia and can be reinitiated 24 h after catheter removal. In high bleeding-risk surgery, LMWH should only be administered 48-72 h after removal. Therapeutic dosages should not be used with a catheter in situ.8 If a decision is made to initiate therapeutic LMWH postoperatively, the indwelling catheter should be removed at least 24 h after it was inserted, and therapeutic dosing can be given 4 h after it has been removed. Neuraxial anaesthesia can be administered 12 h after prophylactic LMWH (0.5 mg/kg daily or 40 mg daily). Daily dosing prophylaxis can be reinitiated 12 h after catheter removal.8 In contrast to therapeutic LMWH, epidural catheters can be maintained in situ with once-daily prophylactic dosing. Twelve hours should elapse between the last dose of prophylactic LMWH and catheter removal, and the next prophylactic dose can be given 4 h after catheter removal.
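The heparin intervals around neuraxial anaesthesia quoted above can be collected into a small lookup table. The sketch below is purely illustrative: the numbers are the hours stated in the preceding paragraphs (using the longer end of the 4-6 h UFH range), and the table layout and helper function are assumptions made for the example, not part of any guideline text.

# Hours from last dose to block / earliest next dose after block or catheter removal,
# as quoted above (UFH entry uses the conservative end of the 4-6 h range).
NEURAXIAL_HEPARIN_HOURS = {
    "IV unfractionated heparin":       {"hold_before_block": 6,  "restart_after": 1},
    "therapeutic LMWH (1 mg/kg q12h)": {"hold_before_block": 24, "restart_after": 24},
    "prophylactic LMWH (40 mg daily)": {"hold_before_block": 12, "restart_after": 4},
}

def earliest_next_dose(agent, removal_time_h):
    """Earliest clock time (in hours) for the next dose after catheter removal."""
    return removal_time_h + NEURAXIAL_HEPARIN_HOURS[agent]["restart_after"]

print(earliest_next_dose("therapeutic LMWH (1 mg/kg q12h)", 10.0))  # 34.0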
Direct-acting oral anticoagulants
Factor Xa inhibitors must be stopped 3 days prior to neuraxial anaesthesia and can be restarted 6 h after the procedure. Dabigatran in patients with a CrCl > 80 mL/min must be stopped 3 days prior to neuraxial anaesthesia, similar to the factor Xa inhibitors. Patients with a CrCl of 50 mL/min - 79 mL/min require cessation of medication 4 days before the procedure, and those with a CrCl of 30 mL/min - 49 mL/min should have medication stopped 5 days prior. It is best to avoid neuraxial anaesthesia in individuals with a CrCl < 30 mL/min as the duration of action of dabigatran is unpredictable. All DOACs can be restarted 6 h after catheter removal or spinal insertion. Epidural catheters should not be maintained in situ while a patient is on a DOAC.8
Antiplatelet agents
Aspirin can be safely continued during neuraxial anaesthesia and requires no dose adjustment. The intervals from the cessation of medication to neuraxial anaesthesia placement are the same as described above for elective surgery. If no loading dose is used, P2Y12 inhibitors can be initiated immediately post-neuraxial procedure. If a loading dose is needed, 6 h should pass between the procedure and the administering of medication. Neuraxial catheters should not be maintained in situ after antiplatelet medication has been given (except ASA).8 Figure 3 summarises the recommendations when performing neuraxial anaesthesia.
Conclusion
The perioperative management of antithrombotic medication is complex and fraught with risk if inappropriate cessation or initiation of treatment is prescribed. However, by adhering to basic pharmacological principles and best practice guidelines, the risks involved can be mitigated, and patient outcomes can be improved.7
Patients are deemed to be at high risk if they have a greater than 10% chance of developing VTE. Three patient populations form the high-risk category. The first group of patients has a known diagnosis of AF and a CHA2DS2-VASc score greater than seven, or a lower score but with a recent CVA or rheumatic heart disease.7 A subset of patients with mechanical heart valves forms the second group. The high-risk groups are those with a mitral valve prosthesis and an associated major risk factor for stroke, aortic and mitral valves with a tilting disc or caged-ball mechanism, and recent stroke. The final group of patients are those with hypercoagulable conditions. Examples include VTE within the previous 3 months, severe thrombophilia or antiphospholipid syndrome. In addition to the risk of VTE, the procedure can be classified as having a high, intermediate or low risk of associated bleeding. Surgeries with a 30-day bleeding risk greater than 2% are deemed high bleeding-risk surgeries.
[Figure 2 legend: Perioperative management of common anticoagulants in high bleeding-risk surgery. LMWH, low molecular weight heparin; start 24 h after low bleeding-risk surgery and 48-72 h after high bleeding-risk surgery; ‡, intravenous heparin infusion; §, ASA in patients undergoing non-neurosurgical, non-ophthalmological surgery; ¶, dabigatran in a patient with CrCl > 50 mL/min.]
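As a worked illustration of the renal-function-dependent intervals for dabigatran around neuraxial anaesthesia described above, the sketch below encodes the quoted hold times as a simple function. It is illustrative only; the function, its name and the handling of boundary values (a CrCl of exactly 80 mL/min is treated conservatively) are assumptions made for the example and not part of the guideline text.

def dabigatran_neuraxial_hold_days(crcl_ml_min):
    """Days to stop dabigatran before neuraxial anaesthesia, by creatinine clearance (mL/min)."""
    if crcl_ml_min > 80:
        return 3
    if crcl_ml_min >= 50:
        return 4
    if crcl_ml_min >= 30:
        return 5
    return None  # CrCl < 30 mL/min: neuraxial anaesthesia is best avoided

FACTOR_XA_NEURAXIAL_HOLD_DAYS = 3    # rivaroxaban, apixaban, edoxaban
DOAC_RESTART_AFTER_BLOCK_HOURS = 6   # after spinal insertion or catheter removal

for crcl in (90, 65, 40, 20):
    print(crcl, dabigatran_neuraxial_hold_days(crcl))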
2024-08-11T15:18:50.944Z
2024-07-31T00:00:00.000
{ "year": 2024, "sha1": "63d685bbd2a9203ef650fad34004abf33f849a79", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "84754ba6279d381f54983bf70bd5aa2dab0298d9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
222181950
pes2o/s2orc
v3-fos-license
The role of gambling type on gambling motives, cognitive distortions, and gambling severity in gamblers recruited online
The recent literature shows that the type of gambling practiced influences problem gambling. This study was aimed at investigating the factors associated with gambling type, including gambling severity, gambling motives, and cognitive distortions. A total of 291 regular male gamblers (229 skill gamblers and 62 mixed gamblers, i.e., those who play at least one game of chance and one skill game) were recruited online and assessed for gambling severity (South Oaks Gambling Screen), gambling motives (Gambling Motives Questionnaire-Financial), cognitive distortions (Gambling-Related Cognition Scale), and psychological distress (Hospital Anxiety and Depression Scale). After controlling for the number of games played and psychological distress, we found that gambling type was significantly associated with gambling severity. Moreover, controlling for psychological distress showed that gambling type was also significantly associated with coping motives and interpretative bias. First, mixed gamblers had higher severity scores and higher coping motivation than skill gamblers; second, skill gamblers seemed more at risk of developing interpretative bias. Thus, the gamblers presented different psychological, motivational, and cognitive profiles according to gambling type, indicating that different clinical interventions may be relevant. Working on coping motives and anxiety and depression symptoms with an abstinence purpose would be more suitable for mixed gamblers. Indeed, working on these points could lead to the gambler reducing or eventually ceasing gambling, as the need to regulate negative emotions through gambling behavior would fade in parallel. Gambling type, psychological distress, gambling motives, and cognitive distortions should be taken into consideration systematically in clinical interventions for patients with a plural and mixed practice of games.
Introduction
While gambling is a leisure activity perceived as a source of entertainment for the majority of gamblers, this behavior can become problematic for some; the experience of craving and loss may be related to the gambler's psychological distress, which refers to the feelings of negative affects such as stress, anxiety, and depression [1,24]. The literature has demonstrated that gambling modalities or gambling types are important factors to consider when studying problem gambling. However, few studies have investigated the factors associated with gambling type by simultaneously taking into account gambling motives, gambling-related cognitions, and gambling severity. Thus, the gambler's psychological distress tends to change depending, on the one hand, on the gambling intensity and, on the other hand, on the game's outcome. In fact, frequent comorbidities have been found between pathological gambling and emotional distress such as anxiety and depression symptoms [3,25]. Although there is no consensus about the order of emergence of these disorders, some authors have indicated that anxiety and depression symptoms (as well as anxiety and mood disorders considered as categorical diagnoses) constitute risk factors in the development of gambling severity [25,26]. However, this does not alter the fact that pathological gambling causes or reinforces initially present anxiety and/or depression symptoms [27].
Therefore, pathological gambling is a risk factor for the emergence of anxiety and depression symptoms as well as a way of coping with unpleasant and negative affect [13,28,29]. As described above, gambling motives are associated with the development of erroneous beliefs [15,17], in that gambling motives and cognitive distortions are directly and indirectly involved in the development of gambling severity [1,16,17]. The literature has also shown that playing in order to regulate negative feelings is more present as a coping motive among problem gamblers. Although these variables appear to be closely related, we may assume that the strength of the association between these variables may differ depending on the type of gambling modality. Differences in the psychological and psychopathological profiles of gamblers with problematic gambling activity can be observed depending on the type of games played or the types of gambling modalities [30][31][32]. Games can currently be classified according to whether the rewards associated with the gambling activity are immediate or delayed [33], depending on the level of arousal provided by the game played [10], or according to the presence or absence skills [34,35]. Some means of classifying games overlap in the sense that skill games generally provide high arousal while chance games provide low arousal [10,19]. Gamblers using gambling to escape or avoid negative affect usually orient towards games of luck, whereas gamblers using gambling to upregulate positive emotions usually choose skill games that provide sensations, excitement, and arousal [30,36]. Thus, gambling motives, such as gambling for experience enhancement or as a way of coping (two sides of affect regulation), seem to depend on gambling type [10]. This difference in motivation can subsequently lead to differences in cognitive distortions (in terms of nature and/or intensity) and gambling severity. In addition, engaging in several types of games seems to be frequent, particularly among problem gamblers [36][37][38], leading researchers to take an interest in the mixed gamblers category, in which both skill and chance games are practiced. According to the current literature on motivational, cognitive, and emotional variables, it is necessary to differentiate gamblers depending on gambling activity first because they each constitute a specific population, and second because they can have different characteristics from gamblers moving towards only one type of game. This highlights the need to obtain information on each type of gambler to think about prevention actions and to offer appropriate treatment. Taking into consideration the reality of gambling practices [36,38], the present study was aimed at comparing skill and mixed gamblers in terms of gambling severity, gambling motives, and cognitive distortions, while taking into account psychological distress and the number of games played. We investigated two hypotheses: first, that mixed gamblers present a higher psychological distress score than skill gamblers, and second, that gambling type is associated with gambling severity, gambling motives, and cognitive distortions. Participants and procedure Participants were recruited through online gambling forums (betclever, Club Poker). Once the agreement of the administrators was received, the same announcement was published on these two forums, briefly presenting the research and its objectives. 
Interested members were invited to click on the LimeSurvey link to access the online scales and questionnaires (preceded by an information note and a consent form). All participants were at least 18 years old, fluent French speakers, and had regular gambling practice (i.e., at least once per week). This criterion of regularity has been used in previous studies [2,17,30]. In addition, participants were not undergoing treatment for a gambling problem. Ethical approval for the study was obtained from the Research Committee of Paris University (IRB number: 20162200001072), and before taking part, all participants provided their written informed consent. A total of 291 male regular gamblers were included in the study. The absence of women in our sample will be discussed within the limits of the study. The participants were divided into two groups according to the type of games played: skill gamblers who played only skill games (n = 229, 78.7%), and mixed gamblers who played both skill games and games of luck (n = 62, 21.3%).
Measures
Sociodemographic and gambling data. Participants were assessed for age, marital status, level of education, household composition, professional activity, socio-professional category, and games played using an 11-item questionnaire constructed for this study.
Gambling severity. Gambling severity was assessed with the 20 items (e.g., "When you gamble, how often do you go back another day to win back money you lost?") from the French validated version of the South Oaks Gambling Screen (SOGS [39,40]). A score of ≤2 represents the absence of problem gambling, a score of 3 or 4 defines a problematic use of gambling, while a score of ≥5 corresponds to probable pathological gambling. However, consistent with previous studies [32,36], we used a gambling severity dichotomy: scores of 0-2 indicated no problem gambling and scores of ≥3 suggested problem gambling, which includes both at-risk and probable pathological gamblers. As almost no mental health problem is categorical [41], the dimensional score was used in the statistical analysis, and the categorical score was only used to describe the two subsamples. According to the literature, gambling behavior evolves over time, so only the current assessment of a gambling problem was used and the lifetime assessment was removed. Cronbach's alpha for the total scale (α = .72) was higher than .70, indicating good reliability [42].
Gambling motives. Gambling motives were measured with the French validated version of the GMQ-F [3,14]. This tool is an improved version of the GMQ [13], which was directly adapted from the Drinking Motives Questionnaire [43]. Initially, the GMQ only measured three types of motivation: enhancement (five items, e.g., "Because it's exciting"), coping (five items, e.g., "Because it helps when you are feeling nervous or depressed"), and social (five items, e.g., "Because it makes a social gathering more enjoyable"). Subsequent studies showed the importance of the financial aspect in gambling, thus a fourth dimension of financial motives (nine items, e.g., "Because winning would change your lifestyle") was added and assessed [14], resulting in the GMQ-F. As item 9 of the social motives subscale posed a problem during the French validation of this tool, it was deleted, leading to a model with 23 items.
Cognitive distortions. Cognitive distortions were measured with the 23 items from the French validated version of the GRCS [20,44].
Items are grouped into five subscales: gambling expectancies (four items, e.g., "Gambling makes things seem better"), illusion of control (four items, e.g., "Specific numbers and colors can help increase my chances of winning"), predictive control (six items, e.g., "I have some control over predicting my gambling wins"), inability to stop gambling (five items, e.g., "It is difficult to stop gambling as I am so out of control"), and interpretative bias (four items, e.g., "Relating my losses to bad luck and bad circumstances makes me continue gambling"). These were scored on a 7-point Likert scale ranging from 1 (strongly disagree) to 7 (strongly agree). Regarding Nunnally's criterion (1978), Cronbach's alpha coefficients were acceptable for inability to stop gambling (α = .83) and illusion of control (α = .73), and lower for gambling-related expectancies (α = .59), predictive control (α = .54), and interpretative bias (α = .54) [42]. Psychological distress. Psychological distress was assessed using the French version of the Hospital Anxiety and Depression Scale (HADS [45,46]). This is a 14-item self-report scale, seven of which relate to anxiety (e.g., "Worrying thoughts go through my mind") and the other seven to depression (e.g., "I feel as if I have slowed down"). Items were scored 0-3. Although the literature highlights the existence of a bidimensional structure, the HADS does not provide good separation between anxiety and depression [47]. Thus, we used the HADS total score to obtain information on the participants' overall psychological distress. Cronbach's alpha for the scale was .83, which indicates good internal consistency [42]. Statistical analysis All data were analyzed with SPSS version 21 and were tested with a two-sided significance level of .05. To conduct relevant statistical analyses, we performed a skewness test: scores obtained for gambling severity, gambling motives, cognitive distortions, and psychological distress were between -1.96 and +1.96, suggesting compatibility with the realization of parametric statistics [48,49]. First, univariate analyses (Student's t-test and chi-square test) were carried out to describe and compare skill gamblers and mixed gamblers. Second, multivariable analyses were conducted (multiple linear regressions) on the whole sample to determine whether gambling type was associated with gambling severity, gambling motives, and cognitive distortions. To control the risk of being wrong for all the tests carried out, we adjusted the p-values by taking the confounding factors into account (i.e., the psychological distress and the number of games played). Thus, the results reported come from two models: one unadjusted and the other adjusted by controlling the confounding factors. Table 1 details the sociodemographic data. Descriptive analyses revealed that gamblers were mainly higher education graduates (68.7%), employed (64.9%), married or in a relationship (47.1%), mostly without children (64.6%), and were 34.0 years old on average (SD = 10.2). Statistical analyses (Student's t-test and chi-square test) showed no significant differences in terms of sociodemographic characteristics between skill and mixed gamblers (all, p � .603). Sociodemographic and gambling data The prevalence of problem gambling was 30% (n = 69) in skill gamblers and 46.8% (n = 29) in mixed gamblers: the prevalence levels were not significantly higher in mixed gamblers (χ 2 = 2.51; p = .113). 
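To make the analysis pipeline described in the Statistical analysis subsection concrete, the snippet below runs the same kinds of tests (a skewness check, a Mann-Whitney U test for a quantitative variable, and a chi-square test for proportions) on hypothetical, randomly generated SOGS scores. None of these numbers are the study data; only the group sizes and the SOGS cut-off of 3 are borrowed from the text for illustration.

# Illustrative sketch on hypothetical data (not the study data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sogs_skill = rng.poisson(2.0, size=229)   # hypothetical SOGS scores, skill gamblers
sogs_mixed = rng.poisson(3.0, size=62)    # hypothetical SOGS scores, mixed gamblers

# Skewness within +/-1.96 was taken as compatible with parametric statistics
print("skewness:", stats.skew(np.concatenate([sogs_skill, sogs_mixed])))

# Group comparison of a quantitative variable (Mann-Whitney U test)
u, p = stats.mannwhitneyu(sogs_skill, sogs_mixed, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")

# Comparison of problem-gambling prevalence (chi-square test on a 2x2 table, SOGS >= 3)
table = [[(sogs_skill >= 3).sum(), (sogs_skill < 3).sum()],
         [(sogs_mixed >= 3).sum(), (sogs_mixed < 3).sum()]]
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")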
In our sample, 15.8% of participants played online games exclusively, while 3.4% played exclusively land-based games. The majority of gamblers recruited played both on the Internet and in land-based venues (casinos and tobacconists). In addition, most of the participants (n = 229, 78.7%) reported playing only skill games (poker, sports betting, horse betting, blackjack), and no one reported playing only games of luck (scratch cards, slot machines, roulette, lottery). However, some participants (n = 62, 21.3%) indicated that they played both games of skill and games of chance. Skill gamblers mainly played poker and sports betting, whereas mixed gamblers mainly played poker, scratch cards, sports betting, roulette, and slot machines (Table 2). Among the skill gamblers, 44.5% played at least two games with a strategy aspect (mixed gamblers, by definition, all played at least two types of games).
Gambling severity and number of games played. The number of games played explained the gambling severity significantly (t = 2.815; p = .005). To dissociate the effect of the number of games played from playing different types of games, the number of games played was first included as a confounding factor in the model aimed at predicting gambling severity.
Gambling type and psychological distress. Mixed gamblers presented significantly higher HADS total scores (t = -2.63; p = .009; Cohen's d = .36) than skill gamblers. Based on these results, psychological distress was included as a potential confounder in all regression analyses performed to limit bias in the analysis of the link between gambling type and the variables studied. As psychological distress can also be higher due to gambling severity, we conducted regressions aimed at explaining gambling severity with and without this covariate.
Factors associated with gambling type. To clarify the association between gambling type and gambling severity, regressions were carried out with gambling type as the independent variable, and the number of games played and psychological distress were introduced into the model as adjustment variables, respectively. A third regression model aimed at explaining gambling severity was carried out without introducing psychological distress as a covariate. Statistical analyses revealed a significant predictor effect of gambling type on gambling severity when the model was adjusted for the number of games played (η² = .015; adjusted p = .038) and for psychological distress (η² = .021; adjusted p = .013), with a larger effect size, although remaining small, when the model was not adjusted for psychological distress (η² = .040; adjusted p = .001). Moving from skill to mixed gamblers led to a significant increase in gambling severity, especially when the model was adjusted for the number of games played and psychological distress (confounding factors). Multiple regressions were also conducted to determine whether gambling type (independent variable) could predict gambling motives and cognitive distortions when controlling for psychological distress. The multiple regressions revealed a significant predictor effect of gambling type on coping motives (η² = .015; adjusted p = .036) and interpretative bias (η² = .014; adjusted p = .040). When controlling for psychological distress (HADS total score), moving from skill to mixed gamblers also led to a significant increase in coping motives and interpretative bias (Table 3).
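A hedged sketch of the type of adjusted regression reported above is shown below, again on hypothetical data: gambling type is entered as the predictor of a severity score, with the number of games played and a distress score as covariates, and a partial eta-squared is derived from a type II ANOVA table. The variable names, the simulated effect sizes and the use of statsmodels are assumptions made for the example, not the authors' actual analysis code.

# Illustrative adjusted regression with a partial eta-squared effect size (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 291
df = pd.DataFrame({
    "mixed": np.r_[np.zeros(229), np.ones(62)],   # 0 = skill gambler, 1 = mixed gambler
    "n_games": rng.integers(1, 6, n),
    "hads_total": rng.normal(12, 5, n),
})
df["sogs"] = 2 + 0.8 * df["mixed"] + 0.3 * df["n_games"] + 0.1 * df["hads_total"] + rng.normal(0, 2, n)

model = smf.ols("sogs ~ mixed + n_games + hads_total", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)
partial_eta2 = anova.loc["mixed", "sum_sq"] / (anova.loc["mixed", "sum_sq"] + anova.loc["Residual", "sum_sq"])
print(model.params["mixed"], partial_eta2)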
Discussion In this study, we investigated two hypotheses: first, that mixed gamblers present a higher psychological distress score than skill gamblers, and second, that gambling type is a predictor of gambling severity, gambling motives, and cognitive distortions. Our study sheds light on the specificity of mixed gambling, which was associated with higher gambling severity, higher coping motives, and higher interpretative bias. Moreover, our study shows that mixed gambling is associated with greater psychological distress. Finally, we found that a relation between gambling type and gambling severity exists, as does the association between the number of games played and the aforementioned variable, which nevertheless appears to be greater. Almost half of the skill gamblers (44.5%) played several games, but only games involving strategy. Although gambling multi-activity constitutes a risk factor for developing pathological gambling [50], the number of games played does not seem to be the only factor to take into account. Indeed, gambling type was significantly associated with higher gambling severity when controlling for the number of games played and for psychological distress, respectively. These results suggest that mixed gambling may be a risk factor for the development of problem gambling when compared to having a purely strategic gambling activity. However, when examining size effects, our results suggest that gambling severity and gambling type are associated, but that the association may be secondary relative to the number of games played. This refers to the involvement effect, which in the literature, has been approached through the number of games played [51] and through the media used to play [52]. In this regard, Wardle and colleagues (2011) showed that gamblers using both game media (offline and online) displayed problem gambling more frequently and were more involved in the game than those using only one game medium [52]. This mixed-mode playing joins the mixed practice, as we call it in this study: gamblers have to move around to play certain games, especially for games of luck in France. Thus, the practice of mixed games, i.e., different types of games requiring the use of online and offline media, contribute more to gambling problems than the practice of playing only one type of game [32,53]; which does not prevent the possibility of online and offline playing for the same skill game. Moreover, the practice of several games with different characteristics among the mixed gamblers (including at least one game of luck) suggests that they are probably not addicted to a particular game, but rather tend to continue their plural gambling activity more for the functions it fulfills. The results also showed that gambling type was significantly associated with specific coping motives, suggesting that mixed gambling and coping motives are closely linked. This result is not surprising, as previous studies have highlighted that gamblers who play to escape or reduce negative affect usually move to games of luck where no reflection is required [10,31,36,54]. Moreover, the association between coping motives and problematic gambling is one of the most solid results in gambling research. Indeed, the literature highlights that coping motives appear to be a strong predictor of gambling severity [3,10,17]. 
Additionally, one of the gambling disorder criteria (Diagnostic and Statistical Manual of Mental Disorders, fifth edition [DSM-5]; [1]) refers to both the gambler's psychological distress and the coping motivation: "Often gambles when feeling distressed (e.g., helpless, guilty, anxious, depressed)." Thus, gambling appears to be a way to regulate repetitive or intrusive negative emotions among mixed gamblers. All these elements raise the hypothesis that mixed gamblers would be more likely than skill gamblers to present emotional vulnerability. Indeed, gamblers of skill games tend to play for the sensations, arousal, and excitement the game provides [10,36,54]. Playing these types of games can also regulate emotions, but in the sense of increasing positive feelings [36]. Problematic gambling seems to be related to an emotion regulation deficit [32,[55][56][57]. Among skill gamblers, this seems to refer more to the presence of alexithymia [36], while among mixed gamblers, it seems to refer to difficulty in regulating negative affect efficiently and appropriately. In other words, this implies that skill gamblers have a lack of feeling and that mixed gamblers, on the contrary, have too many (negative) feelings. Our results as well as data from the literature indicate the importance of distinguishing these two groups of gamblers. Neither strictly playing skill games nor playing both games of luck and games of skill was associated with the inability to stop gambling. As a reminder, the perceived inability to stop gambling refers to the perceived loss of control of the gambling activity and the sense of being unable to stop it (to reduce or control it), which corresponds to one of the DSM-5 diagnostic criteria [1]. Thus, whatever the games played, with or without skills, this belief appears to be common to all gamblers with problem gambling. However, anxiety and depression seem to be related to the inability to stop gambling. The development of the belief that one is unable to stop gambling behavior can be linked to the presence of low self-esteem [56], especially in individuals who suffer from anxiety and depression.
(Table 3. South Oaks Gambling Screen, Gambling Motives Questionnaire-Financial, and Gambling-Related Cognitions Scale scores as a function of gambling type; columns: Skill (n = 229) M (SD), Mixed (n = 62) M (SD), η², p-value.)
Finally, we investigated the link between gambling type and cognitive distortions. While all gamblers are likely to develop erroneous beliefs, our study points out a difference based on the type of games played, which is consistent with previously highlighted results [58,59]. Indeed, our results show that, when anxiety and depression are controlled for, playing only skill games significantly predicts interpretative bias. As skill games are based on strategy, experience, knowledge, and chance, it is understandable that a gambler who has done research on and has experience with the game played can attribute his successes to himself. In poker, a player who has studied the odds of a winning or losing hand can attribute a loss to bad luck after having learned that the hand in question can statistically produce a win eight times out of ten. However, because of the characteristics of these games, skill gamblers tend to overestimate the role of their skills in the outcome and thus underestimate the role of chance. This is particularly the case in poker, where gamblers overestimate their own ability more than gamblers who play other games [58,60].
In short, gamblers who exclusively play skill games seem to be at higher risk of misinterpreting their outcomes. Although our findings suggest that gambling type is associated with gambling severity, coping motives, and interpretative bias, this could be explained by the fact that mixed gamblers play chance games, or the fact that they play more games, or more game types. To determine the potential impact of gambling type more precisely, our study should be replicated with gamblers who exclusively play skill games, chance games, and both types of games, while controlling for the confounding factors. However, with regards to the present results, systematically asking gamblers about the type of game they play, their gambling motives, cognitive distortions, and psychological distress can already help health professionals identify the most effective clinical interventions. One of the aims of the clinical intervention supported by our data could thus focus more on abstinence than on risk reduction, especially because the regulation of negative emotions through gambling behavior can decrease with work directly carried out on emotion regulation. Nevertheless, this study has limitations that should be taken into account for the interpretation and generalization of the results. The main limitation is the online recruitment method (gambling forums focused on skill games) of self-selected participants, i.e., a sample that may not be fully representative of the gambling population and which contributed to the over-representation of male gamblers [61,62]. However, the presence of only male gamblers can also be explained by the higher sex ratio of male gamblers. Indeed, the male sex represents a risk factor both in the gambling experience and the development of problematic gambling behavior [63,64]. Moreover, female gamblers seem to present a different motivational, cognitive, and emotional profiles compared to male gamblers [14,65,66]. Further studies should therefore focus more specifically on female gamblers to better understand their psychological functioning with regard to gambling motives, cognitive distortions, and psychological distress. This same limitation also contributes to the absence of gamblers who exclusively play chance games and to the presence of two numerically non-homogeneous groups. Another limitation is the use of the SOGS, which is known to produce false positives, especially in the general population [67], which may explain the high prevalence of problem gamblers. Finally, certain subscales used in the present study have questionable reliability with regard to the Nunnally criteria (1978) widely used in scientific research [42]. Items within these subscales may not measure the same characteristic consistently. For example, the predictive control subscale (GRCS) assesses several types of beliefs: predictive control and probabilistic control, including the gambler's fallacy. The same is true for interpretative bias, which is another GRCS subscale measuring attributional belief and memory bias. The low number of items can also contribute to lower internal reliability. Despite these limitations, the results obtained support the presence of distinct psychological, motivational, and cognitive profiles according to gambling type. If the influence of the number of games played is greater, the influence of gambling type seems to exist and be at least present, requiring consideration thereof. Thus, our results suggest new research perspectives and different clinical interventions. 
First, it would be interesting to verify the existence of a pathological gambler typology and determine how the gambling type or the number of games played may relate to each of the three distinct subgroups of gamblers as described in the pathways model of Blaszczynski and Nower [68]. Thus, mixed gamblers with gambling problems may more frequently belong to the emotionally vulnerable group of gamblers, while skill gamblers with gambling problems may be found more frequently in the behaviorally conditioned group of gamblers. Indeed, because skill games are partly governed by strategy, experience, and knowledge, these gamblers continue the gambling behavior especially because they tend to underestimate the contribution of chance.
2020-10-08T13:05:50.796Z
2020-10-06T00:00:00.000
{ "year": 2020, "sha1": "148e9f81ad3494f7caf1394b9c15f27e183f0f08", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0238978&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "be3414174ce7a6d331f21202144ff997815274e6", "s2fieldsofstudy": [ "Economics", "Psychology", "Law" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
4124708
pes2o/s2orc
v3-fos-license
The serum zinc concentration as a potential biological marker in patients with major depressive disorder
Despite many clinical trials assessing the role of zinc in major depressive disorder (MDD), the conclusions still remain ambiguous. The aim of the present clinical study was to determine and compare the zinc concentration in the blood of MDD patients (active stage or remission) and healthy volunteers (controls), as well as to discuss its potential clinical usefulness as a biomarker of the disease. In this study 69 patients with a current depressive episode, 45 patients in remission and 50 controls were enrolled. The zinc concentration was measured by electrothermal atomic absorption spectrometry (ET AAS). The obtained results revealed that the zinc concentration in the depressed phase was significantly lower than in the healthy volunteers [0.89 vs. 1.06 mg/L, respectively], while the zinc level in patients in remission was not significantly different from the controls [1.07 vs. 1.06 mg/L, respectively]. Additionally, among the patients in remission, a significant difference in zinc concentration was observed between the groups with and without drug resistance in the previous episode of depression. Also, patients in remission demonstrated a correlation between the zinc level and the average number of depressive episodes in the last year. Serum zinc concentration was not dependent on atypical features of depression, presence of psychotic symptoms or melancholic syndrome, age, age of onset or duration of disease, lifetime number of episodes, duration of the episode/remission, or severity of depression measured by the Hamilton Rating Scale for Depression (HDRS) and the Montgomery-Asberg Depression Rating Scale (MADRS). Concluding, our findings confirm the zinc deficit present in the depressive episode and are consistent with the majority of previous studies. These results may also indicate that serum zinc concentration might be considered a potential biological marker of MDD.
Introduction
Zinc is a crucial element of living organisms, which is involved in many basic physiological processes (mainly as a cofactor of over 300 enzymes). This cation is recognized as being important for transcription, translation, DNA repair, proliferation and maturation of cells, apoptosis, neurogenesis, synaptogenesis, neuron growth, and keeping the balance of oxidative and nitrosative redox potential [see (Jurowski et al. 2014 and Szewczyk et al. 2011) for review]. Zinc ions also play a role in the regulation of the immune system and inflammation processes, influencing the level of some inflammatory cytokines (e.g. interleukin 6, IL-6; tumor necrosis factor α, TNFα; or interleukin 1β, IL-1β) (Szewczyk et al. 2011; Gapys et al. 2014; Maes et al. 1994, 1997, 1999; Maret and Sandstead 2006; Maserejian et al. 2012; Ranjbar et al. 2013; Russo 2011). It has also been shown that Zn concentration declines during the inflammatory state (Bonaventura et al. 2015). The aim of this study was to determine the correlation between the symptomatology of MDD and changes in the serum zinc concentration. The presented results are a part of the large clinical study named De-Me-Ter ("Depression - Mechanisms - Therapy", task 3.2 - Identification of endogenous marker of depression and therapy effectiveness) (Siwek et al. 2016a, b; Siwek et al. 2015; Styczeń et al. 2015).
Recruitment of the study participants
The study participants (Caucasian men and women) were recruited among the in- and outpatients of the Department of Psychiatry, University Hospital, Cracow, Poland, between September 21, 2009 and August 30, 2013. Patients fulfilling the DSM-IV-TR criteria for Major Depressive Disorder (MDD) (both in the active phase of depression and in remission) were recruited to the case-control study. All the study participants signed an informed consent and were provided with detailed information (verbally and in writing) about the aims and rules of this clinical study. Each potential study participant had the opportunity to ask questions about the study before signing the consent, and all those questions were answered by the doctor responsible for recruitment. The Jagiellonian University Bioethical Committee approved this study (decision number KBET/77/B/2009; dated June 25, 2009). The most important exclusion criteria were: diagnosis of a severe psychiatric disorder other than MDD (for example: schizophrenia, schizoaffective disorder, bipolar disorder), substance use disorders (excluding addiction to nicotine and caffeine), comorbidity of serious physical illness (acute or chronic), diagnosis of severe personality disorders, breastfeeding, pregnancy, or medication which could significantly interfere with the blood zinc concentration. The severe somatic diseases excluding a participant from the study (due to the possibility of a statistically significant change in the concentrations of the biomarkers examined in the De-Me-Ter study) were the following: chronic autoimmune and inflammatory diseases, acute inflammation or infections present within a month prior to recruitment into the study, primary adrenocortical insufficiency, renal failure, chronic pancreatitis, hypoparathyroidism, hyperthyroidism, primary hypoaldosteronism, cancer, megaloblastic anemia due to iron deficiency, thalassemia, hemochromatosis, liver cirrhosis, Wilson's disease, nephritic syndrome and burns. An additional exclusion criterion was the use by the participants of the following drugs: hydralazine, nonsteroidal anti-inflammatory drugs (acetylsalicylic acid, ibuprofen, indometacin), tetracyclines, fluoroquinolones, calcium, iron, chelating agents or glucocorticosteroids. All the patients were receiving pharmacotherapy with proven efficacy (mono- or polytherapy), in accordance with the up-to-date treatment guidelines for MDD. The group of healthy volunteers consisted of people with no present or past history of severe and chronic somatic or psychiatric diseases, without a history of substance use disorders (except for caffeine and nicotine abuse), and with no psychiatric disorders in their first-degree relatives. This group was recruited through advertisements on hospital notice boards, by referral from hospital staff, or through their relatives and friends. The detailed socio-demographic and clinical characteristics of the examined population have been presented previously.
The diagnostic tools
For the measurement of the severity of depressive symptoms, the Montgomery-Asberg Depression Rating Scale (MADRS; Montgomery and Asberg 1979) and the Hamilton Rating Scale for Depression (HDRS; Hamilton 1960) were used.
Collection and processing of blood samples. Quantitative analysis of zinc in the blood serum samples
According to the study protocol, a maximum of
9.8 ml of blood was obtained from each study participant, at the same time of day (between 8 and 9 a.m.). Blood was collected from a brachial vein using the Monovette system (Sarstedt, Germany). After clot formation, the blood was centrifuged at 1800 × g for 30 min, and the serum (only non-hemolysed) was kept frozen at -80 °C until it could be analyzed. The samples were stored in zinc-free tubes for a maximum period of 4 months. The assessment of zinc was performed in a specialized laboratory of trace element analysis, Department of Analytical Chemistry, University of Science and Technology, Cracow. Serum zinc levels were measured by electrothermal atomic absorption spectrometry (ET AAS) using a PerkinElmer spectrometer model 3110 (USA). The following measurement conditions were used: air-acetylene flame, 285.2 nm wavelength, 0.7 nm slit and single-element HCL lamps. Gas flow and burner position were optimized before the measurements to achieve high sensitivity. The samples were diluted appropriately to fit into the linear range (1-50 ng/ml) of the calibration curves. Standards (Zinc standard solution; Merck Millipore Corp., Darmstadt, Germany) and samples were prepared as water solutions. Diluted samples were homogenized by means of sonification. In the case of samples of extremely low volume (a few microliters), the additive method of sample microdilution was used. The lowest concentration traceability for zinc was 0.5 μg/L. Apart from dilution, no sample pre-treatment procedures were applied prior to the quantitative element determination. Depending on the total sample volume, triplicate determinations were performed. The accuracy of the ET AAS technique was tested by means of recovery analysis, which for Zn was in the range of 94-99%. All reagents used were of analytical grade. The test tubes for Zn were thoroughly acid washed (0.1% nitric acid) and rinsed with double-distilled deionized water.

Statistical methods
The χ² test was used to analyze the differences between the qualitative variables. The Shapiro-Wilk test was performed in order to evaluate the normal distribution of the quantitative data. Because the data were not normally distributed, we used the Kruskal-Wallis ANOVA or the Mann-Whitney U-test. Correlations between quantitative variables - due to the lack of normal distribution - were analysed with Spearman's rank correlation.

Results
One hundred and fourteen patients (including 28 men and 86 women) who met the DSM-IV-TR criteria for MDD (69 patients were in a depressive episode and 45 were in remission) were enrolled into the De-Me-Ter case-control study. Among the recruited group of participants there were patients using Selective Serotonin Reuptake Inhibitors (SSRI; 63 patients), Serotonin-Norepinephrine Reuptake Inhibitors (SNRI; 34 patients), tricyclic antidepressants (TCA; 15 patients), mirtazapine (5 patients), and also patients taking atypical antipsychotic drugs (olanzapine or quetiapine; 15 patients) or lithium and lamotrigine (a total of 5 patients) to enhance the antidepressant therapy. The control group consisted of 50 healthy volunteers (including 14 men and 36 women). The mean age in the group of patients (49.4 ± 10.7 years) did not differ significantly from that of the control group (45.8 ± 12.4 years) (p = 0.064; Mann-Whitney U-test). There were also no statistically significant differences between the sexes in the two groups (χ² test; p = 0.64). The percentage of women in the examined population of patients was 75%, and in the control group it was 72%.
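The group comparisons and correlations named in the Statistical methods subsection correspond directly to standard SciPy routines. The snippet below is only an illustrative sketch of those tests: the zinc concentrations and episode counts are placeholder numbers, not the study data, while the 2 × 2 sex-distribution table uses the counts quoted above.

import numpy as np
from scipy import stats

# Placeholder serum zinc concentrations (mg/L); the real values are summarized in Tables 1-3
depressed = np.array([0.85, 0.92, 0.78, 0.95, 0.88, 0.91])
remission = np.array([1.02, 1.10, 0.98, 1.05, 1.12, 1.01])
controls = np.array([1.04, 1.08, 1.00, 1.11, 1.06, 1.09])

# Shapiro-Wilk normality check decides between parametric and non-parametric tests
print(stats.shapiro(depressed))

# Non-parametric group comparisons used in the study
print(stats.kruskal(depressed, remission, controls))                     # Kruskal-Wallis ANOVA
print(stats.mannwhitneyu(depressed, controls, alternative="two-sided"))  # Mann-Whitney U-test

# Spearman's rank correlation, e.g. zinc level vs. number of depressive episodes in the last year
episodes = np.array([3, 1, 4, 2, 1, 2])
print(stats.spearmanr(remission, episodes))

# Chi-squared test on the sex distribution (28/86 men/women in patients, 14/36 in controls)
print(stats.chi2_contingency(np.array([[28, 86], [14, 36]])))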
In the patient group there were no differences in the mean zinc concentrations between the women and men subgroups (p = 0.91, Mann-Whitney U-test). The zinc concentration in the serum samples of patients in a depressive episode was significantly lower than that obtained in the healthy volunteer group (p = 0.003, Mann-Whitney U-test). However, there were no statistically significant differences in zinc levels between patients in remission and the control group (p = 0.348, Mann-Whitney U-test) or between the depressed stage and remission (p = 0.096, Mann-Whitney U-test) (Table 1). Among the group of depressed patients there was no statistically significant difference in zinc levels between patients with and without the following clinical features: atypical or psychotic symptoms of depression, melancholic syndrome, or drug resistance (Table 2). However, among the group of patients in remission, a significant difference in zinc concentration between patients with and without drug resistance in the previous episode of depression (p = 0.035, Mann-Whitney U-test) was observed. Furthermore, our results showed no significant correlations between zinc levels and the age of the patients or several clinical features (for all of: patients in the depressive phase, patients in remission, and the total group population): duration of the disorder, average number of hospitalizations in the last year, average number of depressive episodes in the last year, number of total hospitalizations throughout life, severity of depression measured by the HDRS (total score) or MADRS (total score), age of onset, or duration of the current episode of severe depression or of remission. The only significant correlation was obtained between the zinc concentration in patients in remission and the average number of depressive episodes in the last year (Table 3).

Discussion
The presented case-control study demonstrated that the serum zinc levels in patients with a depressive episode were significantly lower than those obtained in healthy volunteers' samples. Moreover, there were no statistically significant differences in Zn2+ concentration between patients in remission and the healthy volunteer group. The average zinc concentration in patients in remission almost reached the control level, although zinc levels in the acute stage of the disease and in remission did not differ statistically. Additionally, no significant differences were noticed between zinc concentrations in patients with or without such clinical features as atypical features of depression, drug resistance, or the presence of psychotic symptoms or melancholic syndrome. Probably the first article to indicate a link between zinc and depression in clinical studies was published in 1983 and presented a case of major depressive disorder with a low serum zinc level (Hansen et al. 1983). The next article appeared in 1989: Little et al. examined the zinc concentration in the serum and urine of depressed patients and demonstrated hypozincemia in 9 out of 30 patients with mood disorders; however, this study did not include a control (healthy) group (Little et al. 1989). In a study of 31 depressed patients from Karachi, a significantly lower serum zinc level was shown, but only in depressed women (not in men) compared to healthy controls (Manser et al. 1989). In the following year (1990), McLoughlin and Hodge clearly demonstrated hypozincemia in 14 untreated depressed patients, compared with 14 healthy controls (McLoughlin and Hodge 1990).
Our results are similar to those demonstrated by Maes and colleagues (Maes et al. 1994). In their first study, on 48 MDD patients, they showed not only that the zinc level was significantly lower in the patient group (when compared to healthy volunteers), but also that it was negatively correlated with the severity of depressive symptoms (Maes et al. 1994). In two further studies, on 31 and 48 patients (respectively) diagnosed with MDD, Maes et al. also confirmed the previous observations (Maes et al. 1997, 1999). Additionally, they observed that the zinc level was markedly lower in drug-resistant patients (Maes et al. 1997). Our results are also supported by those obtained by Siwek et al. (2010), Salimi et al. (2008) and Amani et al. (2010). The data concerning pre- and postpartum depression demonstrated a relationship between zinc and depression status (Wójcik et al. 2006; Roomruangwong et al. 2016), as did data on bipolar depression, in which a lower zinc level was characteristic of the depressive episode (Siwek et al. 2016b; Stanley and Wakwe 2002). Moreover, growing evidence for the valuable augmentation effects of zinc adjunctive therapy is coming from clinical studies. The studies reported by Ranjbar et al. (2013), Russo (2011), Siwek et al. (2009) and Nowak et al. (2003) showed that zinc supplementation significantly reduced depression severity (depression rating scores) and facilitated the outcome of antidepressant therapy, especially in treatment-resistant patients. Also, zinc monotherapy reduced symptoms of depression in patients with coexisting obesity (Solati et al. 2015). On the other hand, there are several clinical studies that did not confirm the presented results. The results obtained by Gronli et al. (2013) on 100 patients aged over 64 years (62 women, 38 men) with different psychiatric diagnoses showed that the patients without depression were characterized by a greater zinc deficit. Similarly, Nguyen et al. (2009) noticed in their observational study of 369 women that there was no correlation between the presence of depressive symptoms and zinc levels. Some other groups also did not demonstrate zinc alterations in the blood serum of depressed patients (Narang et al. 1991; Irmisch et al. 2010; Salustri et al. 2010). The dissimilarity between the data presented in those reports may originate from differences in the time of zinc measurement (e.g. depression/remission phase), depression diagnosis (treatment-resistant/non-resistant), previous/current pharmacotherapy, duration of illness, gender, and dietary variability (Swardfager et al. 2013a, b; Siwek et al. 2013; Nowak 2015). The mechanisms of the antidepressant activity of zinc may be connected with its influence on (modulation of) the neuro-, immuno- and/or oxidative systems [see (Szewczyk et al. 2011; Siwek et al. 2013; Nowak 2015; Maurya et al. 2016; Tyszka-Czochara et al. 2014) for review]. The results obtained in the presented study confirm the zinc deficit present in the depressive episode and are consistent with the majority of previous studies. The presented results indicate that blood zinc concentration might be a biological marker of MDD. Nevertheless, the data on zinc levels in the course of MDD still remain ambiguous, and larger populations of depressed patients should be examined to widen the knowledge of the role of zinc in the pathophysiology of MDD.
Reliability and Uncertainties of the Analysis of an Unstable Rock Slope Performed on RPAS Digital Outcrop Models: the Case of the Gallivaggio Landslide (Western Alps, Italy)

A stability investigation based on Digital Outcrop Models (DOMs) acquired in emergency conditions by photogrammetric surveys based on a Remote Piloted Aerial System (RPAS) was conducted on an unstable rock slope near Gallivaggio (Western Alps, Italy). The predicted mechanism of failure and volume of the unstable portion of the slope were subsequently verified on the DOMs acquired after the rockfall that actually occurred on May 29th, 2018. The comparison of the pre- and post-landslide 3D models shows that the estimated mode of failure was substantially correct, while the predicted volume of rock involved in the landslide was overestimated by around 10%. To verify whether this error was due to the limited accuracy of the models georeferenced in emergency conditions using only the Global Navigation Satellite System/Inertial Measurement Unit (GNSS/IMU) information of the RPAS, several Ground Control Points (GCPs) were acquired after the failure. The analyses indicate that the instrumental error in the volume calculation due to the direct-georeferencing method is only 1.7%, whereas the major part of the error is due to the geological uncertainty in the reconstruction of the real, irregular geometry of the invisible part of the failure surface. The results, however, confirm the satisfying relative accuracy of the direct-georeferenced DOMs, which is compatible with most geological and geoengineering purposes.

Introduction
In recent years, the application of remote sensing techniques for the development of Digital Outcrop Models (DOMs) and the analysis of unstable rock slopes has been continuously increasing (e.g. [1][2][3][4][5][6][7]). The geological and engineering investigations performed on DOMs easily overcome most of the limitations of the traditional field-based survey, such as the inaccessibility of the rock slopes, their unfavourable orientation and the high risk for the operators [8]. In particular, as shown by [9] and [10], thanks to the recent developments in Remote Piloted Aerial Systems (RPAS), RGB camera performance and reduced costs, RPAS-based Digital Photogrammetry (RPAS-DP) seems the best solution for the analysis of high, steep slopes, because it permits data to be acquired with a similar resolution and cheaper instrumentation compared with a laser scanner [11], while also avoiding the occlusion effects that affect terrestrial remote sensing techniques [10]. The RPAS-DP survey is often matched with accurate topographic measurements of a large number of Ground Control Points (GCPs) using several possible solutions [12], such as differential Real Time Kinematic (RTK) Global Positioning System (GPS) or a total station, because these allow high-accuracy DOMs to be obtained. However, in large, dangerous and scarcely accessible areas, high-accuracy topographic surveys can take a long time (e.g. a few days) and slow down the analysis; therefore, in emergency conditions, DOMs can be georeferenced using only the information recorded by the RPAS onboard Global Navigation Satellite System/Inertial Measurement Unit (GNSS/IMU) instrumentation, without a GCP acquisition survey. This approach is referred to as direct georeferencing (DG).
Whereas several authors have investigated the effect of the DG approach on the absolute accuracy of 3D models of gently dipping areas ([13][14][15][16][17]), only a few studies have investigated the absolute and relative accuracy of direct-georeferenced DOMs representing steep rock cliffs. References [18] and [7] show that the DG approach can strongly affect the absolute accuracy of the DOMs, giving positioning errors higher than ten meters, but that it only slightly influences the relative accuracy, giving orientation and scale errors lower than 1° and 0.3-0.5%, respectively. One of the most common misconceptions is that the uncertainty of an RPAS-based engineering geological analysis equals the accuracy of the DOMs, without considering other sources of error such as methodological ones (e.g. inaccuracies that derive from discontinuity trace mapping, discontinuity attitude estimation and reconstruction of the mesh representing the rock wedge). A previous work [19] investigated the uncertainty of discontinuity trace mapping based on DOMs, while other authors [20][21][22][23] have evaluated the reliability and uncertainty of discontinuity attitude estimation by DOM mapping using synthetic discontinuities. Other authors [24] investigated the error of the post-failure rockfall volume calculation due to the algorithm used for mesh interpolation by means of synthetic rock blocks, but no one has investigated the uncertainty of the potential failure mechanism and volume estimation performed for a real case study by RPAS-DP. The principal aim of this research is to evaluate the reliability and the uncertainties of the stability analysis of a high, steep and unstable rock slope based on Digital Outcrop Models (DOMs) acquired in emergency conditions by RPAS photogrammetric surveys. The study was conducted on the unstable, sub-vertical and inaccessible rock slope of Gallivaggio (Western Alps, Italy), where the predicted mechanism of failure and volume of the unstable portion of the slope offered the rare opportunity to be verified after the rockfall that actually collapsed the slope on May 29th, 2018 (Figure 1). The investigations took place following these main phases (Figure 1):
• A 3D Direct-Georeferenced model (DOM) was developed during the emergency, before the rockfall, in a relatively short time, without measuring Ground Control Points (GCPs), but using only the positions of the photographs registered by the RPAS onboard GPS;
• the stability analysis performed on this DOM defined the fracture network affecting the slope, the potential sliding surfaces, the possible failure mechanism and the volume of rock possibly involved in the landslide;
• after the landslide of May 29th, 2018, a second emergency photogrammetric survey was carried out and a post-landslide Direct-Georeferenced DOM was produced, with which the mode of failure and the volume of rock involved were verified;
• 30 GCPs were subsequently measured in the field by a laser reflector-less total station, and the pre- and post-landslide photogrammetric surveys were used to produce two new GCP-georeferenced models, with which the accuracy of the preceding analyses, and in particular the volume estimation of the landslide, was checked.

Site Description
The unstable rock slope studied is located along the left side of the Spluga Valley (also called San Giacomo Valley), in the San Giacomo Filippo district (Western Alps, Sondrio Province, Italy), approximately at km 126 of the SS36 road (Figure 2).
Geological and Geomorphological Setting
The Spluga Valley, like many other Alpine valleys, originated from the erosional phenomena connected to the Last Glacial Maximum (LGM) and is characterized by an N-S trend, a U-shape, and steep slopes. It is included mainly in the Penninic Zone, which is characterized by the presence of the Tambò and Suretta nappes and the Spluga syncline [25]. According to [26], the Spluga Valley is principally affected by the presence of two sets of lineaments related respectively to the Insubric Line, with a direction ranging from WNW-ESE to NW-SE, and to the Engadine Line, represented by a shear plane with a NE-SW direction. Moreover, N-S trending tensional fractures caused by the post-glacial rebound, and N-S shear fractures that may be related to reactivated minor pre-existing lineaments, are also present. The study slope is located in the southern part of the Tambò nappe, which is predominantly composed of polycyclic and polymetamorphic paragneiss [27]. However, the study area is entirely formed by the Truzzo granite formation, a late Variscan granitic complex dated 284 ± 21 Ma (Rb-Sr whole-rock and minerals: [28]) that can be considered a monocyclic basement [25], because it is affected only by the Tertiary Alpine deformation [29]. The study slope, like the main part of the Truzzo granite, consists of a two-mica orthogneiss with pluri-centimetric phenocrysts of potassium feldspar [25,26].

Rock Slope and Slope Toe Area
The Truzzo granite that outcrops along the rock slope is locally characterized by a metamorphic foliation that dips at a low angle toward the NE (Figure 2). Different authors [26,30,31] suggested the presence of at least 3 or 4 strongly persistent discontinuity sets affecting the studied rock slope that can cause rock block instabilities. In particular, [26] identified the following sets: (i) fractures parallel to the main foliation; (ii) sub-vertical shear fractures with a WNW-ESE and NW-SE direction (like the main tectonic lineaments related to the Insubric Line); (iii) N-S tension fractures, parallel to the valley (and probably related to the post-glacial rebound). Just below the rock slope, several anthropic structures are endangered, such as the ancient Sanctuary of Gallivaggio (1598 A.D.), a few houses, a restaurant, and the SS36 national road (Figure 3).

Figure 3. Geological map of the area surrounding the studied rock slope (modified after [31]). The map shows all the elements at risk at the slope toe, such as the ancient sanctuary of Gallivaggio, the power line, the SS36 national road and some now-evacuated buildings. At the slope toe there is a rockfall protection structure composed of an embankment and a catch ditch.

Evolution of the 29th May 2018 Failure
After the occurrence of some isolated rock block falls, the Lombardy Region Civil Protection and the Mountain Community began to build a rockfall embankment in 2008 [30]. Due to the presence of several structures, in particular the SS36 national road and the Gallivaggio sanctuary, between 2011 and 2012 the Regional Agency for the Environmental Protection of Regione Lombardia (ARPA Lombardia) installed the first monitoring system. Subsequently, in 2016, the stability of the rock slope was also monitored by a permanent Ground-Based Interferometric Synthetic Aperture Radar (GBInSAR) system [30,31]. From the end of 2017, the GBInSAR registered an increase in the slope movements and, in particular, a dangerous acceleration from April 13th, 2018 [30].
These acceleration values, together with the previous minor rockfalls, indicated an incipient state of failure of the rock slope, in which the frictional resistance of the discontinuities and the rock bridges maintained a precarious equilibrium [31]. Therefore, on April 17th an emergency RPAS survey was conducted in order to investigate the condition of the unstable rock slope and, by means of the obtained high-resolution DOM, to quickly identify the possible failure mechanism and estimate the unstable rock volume. On May 24th ARPA Lombardia noticed that the acceleration trigger thresholds had been exceeded, and on May 29th, 35 minutes before the rock slope collapsed, ARPA Lombardia sent the last warning before the landslide event [30]. As suggested by [31], the failure probably occurred through an excess of shear stress that induced the propagation and coalescence of micro-cracks and, therefore, the breakage of the intact rock bridges along the sliding surfaces, allowing the rockfall to reach failure conditions. On May 30th, another emergency RPAS survey was conducted to rapidly verify the condition of the slope after the landslide.

Digital Photogrammetric Survey
As already outlined, two different RPAS surveys were conducted before and after the landslide of May 29th, 2018, due to the difficulty and danger of directly investigating the unstable portion of the rock slope in the field (Figure 4). The specifications of the RPAS and the camera used in these surveys are given in Table 1. The RPAS-DP survey of April 17th, 2018 (before the event) was flown manually, acquiring 171 images with a mean camera-outcrop distance of 35.5 meters. Considering this distance and the camera features (Table 1), the mean resolution of the images is about 9 mm/pixel. The area covered by the DP survey is around 5890 m² and mainly includes the unstable portion of the rock slope. The RPAS-DP survey of May 30th, 2018 (the day after the primary failure) was also flown manually, acquiring 246 images with a mean camera-outcrop distance of 191 meters and a consequent mean resolution of the photos of about 39 mm/pixel. The resulting Digital Outcrop Model (DOM) covers an area of 79,957 m², showing the landslide effects on the entire slope.
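As a rough cross-check of the resolutions quoted above, the mean ground sample distance of a survey can be estimated from the camera-outcrop distance, the focal length and the sensor pixel pitch. The sketch below uses assumed camera parameters purely for illustration, since the real specifications are those listed in Table 1.

def ground_sample_distance(distance_m, focal_length_mm, pixel_pitch_um):
    # GSD (m/pixel) for a roughly frontal view: distance * pixel pitch / focal length
    return distance_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# Assumed (hypothetical) camera parameters; see Table 1 for the actual ones
focal_length_mm = 8.8
pixel_pitch_um = 2.4

for distance in (35.5, 191.0):  # mean camera-outcrop distances of the two surveys
    gsd = ground_sample_distance(distance, focal_length_mm, pixel_pitch_um)
    print(f"distance {distance:6.1f} m -> GSD ~ {gsd * 1000:.1f} mm/pixel")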
Digital Outcrop Model Development
The Digital Outcrop Models (DOMs) were developed using the commercial Structure-from-Motion-based software Photoscan© (Agisoft©, St. Petersburg, Russia). The DOMs were georeferenced with two different methods: (i) by direct georeferencing and (ii) by GCP georeferencing. The first procedure (direct georeferencing) considers only the positions registered by the RPAS onboard GNSS/IMU, whereas the second builds the model on the basis of the locations of the GCPs recorded in the field with high-accuracy topographic instrumentation. References [18] and [7] showed that Direct-Georeferenced (DG) DOMs may be offset from the real coordinates by a rigid translation ranging from 10 cm to 10 meters; still, their scale and orientation are sufficiently accurate for geological purposes, with a mean homogeneous scaling error of 1-5‰ and a rigid rotation of up to 1°. The workflow used with Photoscan© involves the following steps:
1) alignment of the images using their full resolution and their orientation using the GNSS/IMU information recorded by the RPAS onboard GPS or the GCP positions measured in the field by a total station;
2) dense point cloud reconstruction using the high-quality setting of Photoscan (half of the image resolution);
3) generation of the textured mesh using the dense cloud and the high face count suggested by Photoscan©, with the generic mapping and mosaic blending modes, creating 20 texture files of 4096 × 4096 pixels.
The main features of the developed DOMs (point clouds and meshes) are indicated in Table 2. Table 2. Statistics of the point clouds and meshes representing the slope before and after the landslide. The accuracy of the different DOMs is described in Section 4.2.

Digital Outcrop Model Analysis
The 3D Digital Outcrop Models (DOMs) were analyzed in a 3D stereoscopic environment using the open-source software CloudCompare v2.9 on a computer equipped with an SD2220W stereoscopic device, which is composed of two separate polarized display monitors placed one above the other in a clamshell configuration, with a half-silvered glass plate bisecting the angle between the two displays. On the pre-failure DOMs, the visible discontinuities and the traces of the potential failure surfaces affecting the rock slope were recognized and then measured directly on the point cloud using a point-picking tool and the calculation of best-fit planes. These data were projected onto a stereonet, and the principal discontinuity sets were identified. The possible mechanism of failure was predicted by evaluating the attitude of the detected surfaces in relation to the average orientation of the slope. To estimate the volume of the potential landslide, the pre-event DOMs were analyzed through a sequence of stages. To determine the rock volume effectively involved in the landslide of May 29th, 2018, the pre- and post-event DOMs were compared as described in Section 4.2.

GCP Survey
In the Gallivaggio case study, due to the emergency conditions, the first evaluation of the unstable slope was done on the DOMs developed using the direct-georeferencing approach. Only some days after the collapse did the acquisition of GCPs allow DOMs with better absolute orientation to be developed and an uncertainty analysis to be performed (Figure 1). The relative and absolute accuracy of the digital outcrop models were assessed using the 30 Ground Control Points (GCPs) measured in the field after the failure event of May 29th, 2018 with a GPT-7001L reflector-less total station (Topcon, Tokyo, Japan), which has a nominal accuracy of 2 mm + 2 ppm (Figure 6). We acquired 34 points: 30 on the rock wall and four benchmarks close to the total station. The positions of the four benchmarks were also acquired using a 1200 GPS RTK (Leica, Wetzlar, Germany). Considering the mean distance between the targets and the total station (900 m), and the use of the reflector-less measurement mode, the accuracy can be conservatively estimated at 5 cm. The acquired GCPs were used to develop GCP-referenced DOMs of the pre- and post-failure slopes. These DOMs were used to evaluate the accuracy of the direct-georeferenced 3D models (generated using only the positions recorded by the RPAS onboard GNSS/IMU instrumentation) and the consequent errors in estimating the volume of unstable rock.
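Before turning to the accuracy definitions, the best-fit plane computation used above for the discontinuity measurements can be illustrated with a short, generic sketch: a plane is fitted to the picked points by eigen-decomposition of their covariance, and its normal is converted into the dip direction/dip pair that is plotted on the stereonet. This is not the CloudCompare routine itself, and the example coordinates are hypothetical.

import numpy as np

def fit_plane_attitude(points):
    # points: N x 3 array of (East, North, Up) coordinates picked on the point cloud
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The plane normal is the eigenvector of the covariance with the smallest eigenvalue
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    normal = eigvecs[:, 0]
    if normal[2] < 0:            # force the normal to point upwards
        normal = -normal
    dip = np.degrees(np.arccos(np.clip(normal[2], -1.0, 1.0)))
    dip_direction = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
    return dip_direction, dip

# Hypothetical points picked along a discontinuity (metres, local East-North-Up frame)
picked = [(0.0, 0.0, 10.0), (2.0, 1.0, 9.2), (4.1, 0.5, 8.5), (1.0, 3.0, 9.6), (3.0, 2.5, 8.9)]
dd, dip = fit_plane_attitude(picked)
print(f"best-fit plane: dip direction {dd:.0f} deg, dip {dip:.0f} deg")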
According to [33] and [7], the accuracy of a DOM can be distinguished into absolute and relative accuracy. The first is represented by the error in the positioning of all the 3D points of the DOM in an absolute coordinate system, whereas the second represents the difference between the length and azimuth of the vectors that join two points on the Earth and the length and azimuth of the same vectors in the model. For example, a DOM with correct orientation and scale but incorrect positioning in 3D space has high relative accuracy and low absolute accuracy.

Pre-Failure Analysis
The main goal of the pre-failure study was the identification of the critical discontinuities in the most unstable sector and the estimation of the volume of the possible rockfall. From the stereoscopic inspection of the DOMs, it was possible to identify and map 134 discontinuities that can be subdivided into four different sets, DS1, DS2, DS3 and DS4 (Figure 7). However, many non-systematic fractures were observed, and the identified sets also show a significant dispersion in attitude. The mean length of the visible traces of the mapped discontinuities is 14.30 meters, whereas the minimum and maximum lengths are 2.40 and 60.60 meters, respectively. In addition to this fracture network, the traces of four main critical discontinuities, characterized by a high aperture (even more than 1 m) and signs of incipient instability, were identified. The combination of these fractures formed a highly unstable rock wedge (Figure 8). The attribution of these fractures to the defined sets is problematic, especially as regards the F2 fracture, while the other three fractures can be roughly ascribed to sets DS1, DS3 and DS4. These open fractures are rather irregular, and the measurement of their orientation is difficult (precisely because they are open) and shows some variability. To minimize this drawback, the outcropping traces of the four fractures were sampled considering only the most reliable intersection points, and four clouds of 37 (fracture F1), 66 (F2), 23 (F3) and 32 (F4) points were obtained. Then, the attitudes of all the planes determined from the different possible combinations of the points of each cloud were calculated. Groups of a minimum of four points were chosen, and only the planes calculated from groups with collinearity (K) < 0.8 and coplanarity (M) > 4 were considered (Figure 9), as proposed by [20]. This analysis (Figure 9) indicates that the distribution and orientation of the fractures F1 and F4 are fairly regular, with a clear maximum of relative density at attitudes of 238°/41° and 270°/70°, respectively. On the contrary, fractures F2 and F3 show two different maxima of relative density (Figure 9b,c), suggesting that the traces of these fractures can be the envelope of two kinds of fractures with two different orientations: 227°/47° and 353°/88° for F2, and 306°/85° and 240°/40° (which coincides with the orientation of F1) for F3. According to these results, the presence of five critical discontinuities (Figure 10a) was considered (F1, F2a, F2b, F3 and F4) for the stability analysis (Figure 10c,d) and the volume calculation. In particular, the stability analysis (Figure 10b) suggested that the discontinuities F1 and F2a could be critical for planar sliding failure and could act as sliding surfaces of the rock wedge. The fracture F4 also has an attitude very close to being critical for planar sliding.
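The trace-sampling procedure used above to handle the open, irregular fractures can be sketched as follows: all groups of at least four points along a trace are tested, and only well-conditioned planes are retained. The thresholds K < 0.8 and M > 4 come from the text, but the exact definitions of the collinearity and coplanarity indices follow reference [20] and are not reproduced in the paper; the eigenvalue-ratio forms used here are therefore illustrative assumptions. The retained normals can be converted to dip direction/dip exactly as in the previous sketch.

import numpy as np
from itertools import combinations

def plane_quality(pts):
    # Eigen-analysis of a small group of (E, N, U) points.
    # K and M are eigenvalue-ratio proxies for collinearity and coplanarity (assumed forms).
    pts = np.asarray(pts, dtype=float)
    eigvals, eigvecs = np.linalg.eigh(np.cov((pts - pts.mean(axis=0)).T))
    l3, l2, l1 = np.maximum(eigvals, 1e-12)           # ascending order: l3 <= l2 <= l1
    K = np.log(l1 / l2) / max(np.log(l2 / l3), 1e-9)  # small for planar, large for collinear groups
    M = np.log(l2 / l3)                               # large when the group defines a clear plane
    normal = eigvecs[:, 0]
    return K, M, normal if normal[2] >= 0 else -normal

def candidate_planes(trace_points, group_size=4, k_max=0.8, m_min=4.0):
    # Normals of all well-conditioned planes fitted to group_size-point subsets of a trace cloud
    trace_points = np.asarray(trace_points, dtype=float)
    kept = []
    for idx in combinations(range(len(trace_points)), group_size):
        K, M, normal = plane_quality(trace_points[list(idx)])
        if K < k_max and M > m_min:
            kept.append(normal)
    return kept   # plot as poles on a stereonet and look for relative-density maxima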
Taking into account the large opening and the morphological characteristics of the potential sliding surfaces, the basic friction angle was prudently used in the stability tests; in the absence of experimental data, and by analogy with similar lithologies [34], it was deemed to be equal to 30°. The same five fractures were used to reconstruct the geometry of the unstable rock wedge (Figure 11), except for the discontinuity F2a, which plays a minor role because it does not influence the shape of the wedge but only cuts it in the middle. The volume of the unstable rock wedge was estimated using the procedure described in the previous chapter, extending the potential sliding fractures visible as traces on the outcrop to delimit the internal part of the unstable block, and cutting the point cloud appropriately to delimit the external part (Figure 11).

Figure 11. (a) 3D model of the rock slope before the landslide; (b) identification of the critical discontinuities approximated as plane surfaces (see Figure 10 for the discontinuity orientations); (c) delimitation of the critical rock wedge; (d) front, (e) rear, (f) left and (g) right views of the unstable rock wedge.

After the selection of the points that delimit the unstable block both inside and outside the outcrop, a 'watertight' mesh was created using the Poisson surface reconstruction plugin [32] of the CloudCompare software. The volume of the potentially unstable rock block represented by this mesh was 8240 m³.

Post-Failure Analysis
The DOMs created after the landslide of May 29th, 2018 show what happened. In particular, the real mode of failure, the rock volume effectively involved in the event, and the actual failure surfaces were identified. Four principal sliding surfaces are visible (Figure 12). They are rather irregular and wavy, although with orientations similar to those of the four fractures identified before the landslide (Figures 8 and 10). Their mean orientation was evaluated by calculating the normal vectors of the points of the DOM that represent the different failure surfaces (Figure 13); the maximum density of the poles indicates the best-fitting fractures with their mean orientation. The failure surfaces S2 (Figure 13b) and S3 (Figure 13c) show the highest orientation variability, and this observation confirms that they can only approximately be considered as planes. Nevertheless, the comparison with the pre-event analysis confirms that the mode of failure of the landslide (Figure 14) was controlled by planar sliding on the S1 and S4 fractures (Figure 14b) and by wedge sliding (Figure 14c) of a complex rock wedge formed by the four discontinuity intersections (S1-S2, S1-S4, S2-S4 and S3-S4).
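Both the pre-failure wedge volume reported above and the post-failure rockfall volume discussed next rely on the same final step: building a watertight mesh from a delimited point cloud and computing its enclosed volume. The sketch below uses the open-source Open3D library rather than the CloudCompare Poisson plugin employed by the authors, so it is an illustrative alternative; the file name and parameter values are assumptions.

import open3d as o3d

# Point cloud delimiting the rock block, inside and outside the outcrop (hypothetical file)
pcd = o3d.io.read_point_cloud("unstable_wedge_points.ply")

# Poisson reconstruction needs consistently oriented normals
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)

# Screened Poisson surface reconstruction (depth controls the mesh resolution)
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# The enclosed volume is meaningful only for a closed ('watertight') mesh
if mesh.is_watertight():
    print(f"block volume: {mesh.get_volume():.0f} m^3")
else:
    print("mesh is not watertight; trim low-density vertices and retry")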
To determine the rock volume effectively involved in the landslide of May 29th, 2018, it was decided to adopt a quantitative approach that minimizes the effect of user-related bias (interpretation and manual delimitation of the sliding and the external surfaces of the rockfall), similar to those applied in [35,36,37]. This approach uses the M3C2 algorithm, proposed by [38] and implemented in CloudCompare, and can be summarized as:
a) selection of the portions of the DOMs that represent the same rock slope area before and after the landslide (Figure 15a,b);
b) calculation of the distance between the pre- and post-event 3D models using the M3C2 plugin (Figure 15c), a tool that permits the distance between two DOMs to be calculated and plotted onto the 3D model surfaces;
c) delimitation of the external and the sliding surfaces using a distance threshold of 10 cm (red lines in Figure 15d,e);
d) reconstruction of a closed mesh representing the rockfall using the Poisson surface reconstruction plugin of CloudCompare [32] and calculation of its volume (Figure 16).
The distance threshold was set to 10 cm because it is double the uncertainty of the total station measurements. To avoid an overestimation of the rock volume, all the tall vegetation represented in the models was removed using both the semi-automatic (RGB and density values of the points of the cloud) and manual procedures (point selection and elimination) proposed by [7]. At the end of this quantitative approach, the calculated volume of the fallen rock mass was ca. 6730 m³. The comparison of the pre- and post-landslide 3D models shows that the estimated mode of failure (Figure 10b,c) was substantially correct, with four discontinuity intersections critical for wedge sliding (F1-F2b, F1-F3, F1-F4, F3-F4 and S1-S2, S1-S3, S1-S4, S3-S4 on the pre- and post-failure DOMs, respectively) (Figures 10c and 14c) and two fractures (F1, F4 and S1, S4) acting as sliding surfaces (Figures 10b and 14b). Nevertheless, the predicted volume of rock involved in the landslide was overestimated by around 1510 m³ (8240 vs. 6730 m³). However, accurate inspection of the post-failure 3D models shows that not all the unstable portions of the slope collapsed, as a block in precarious conditions of equilibrium is still in place (Figure 17). The volume of this block is about 809 m³. Therefore, the difference between the estimated and the real landslide volume is reduced to about 701 m³ (~10% of the real collapsed volume). The longitudinal sections of Figure 18 indicate that the difference between the predicted and the real volume of the landslide is probably essentially due to the geometry of the discontinuities: whereas the estimated failure surfaces (D1, D2, D3 and D4) are considered planes, the real detected surfaces (S1, S2, S3 and S4) are, on the contrary, wavy and rather irregular, and can in many cases be considered the envelope of discontinuities belonging to different sets.

Relative and Absolute Accuracy of the DOMs
The Direct-Georeferenced (DG) models are characterized by a low absolute accuracy, with high values of the mean errors of the coordinates (> 1 m) but a low standard deviation (< 0.45 m) that suggests a rigid translation (Table 3). On the other hand, the GCP-georeferenced DOMs show a high level of absolute accuracy (Table 4). To analyze the relative accuracy of the DG DOMs, the lengths and orientations (azimuth and plunge) of the 435 vectors that join all the possible couples of GCPs measured by the total station were compared with those calculated from the DG models. The results are reported in Table 5. The maximum angular difference in attitude (Table 5) of the vectors is < 1°, while their mean difference in length is about 4.5‰ (Figure 19).
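The relative-accuracy check summarized above - comparing the 435 GCP-pair vectors measured by the total station with the same vectors extracted from the direct-georeferenced model - can be reproduced with a few lines of code. This is a generic sketch: the two coordinate arrays are placeholders for the 30 GCP positions in the field and in the model, both expressed in a local East-North-Up frame.

import numpy as np
from itertools import combinations

def length_azimuth_plunge(p, q):
    # Length (m), azimuth (deg from North) and plunge (deg, positive downwards) of vector p -> q
    d = np.asarray(q, dtype=float) - np.asarray(p, dtype=float)
    length = np.linalg.norm(d)
    azimuth = np.degrees(np.arctan2(d[0], d[1])) % 360.0
    plunge = np.degrees(np.arcsin(-d[2] / length))
    return length, azimuth, plunge

def relative_accuracy(gcp_xyz, model_xyz):
    # Per-pair differences: length (per mil), azimuth (deg), plunge (deg)
    rows = []
    for i, j in combinations(range(len(gcp_xyz)), 2):   # 30 GCPs -> 435 pairs
        L_ref, az_ref, pl_ref = length_azimuth_plunge(gcp_xyz[i], gcp_xyz[j])
        L_mod, az_mod, pl_mod = length_azimuth_plunge(model_xyz[i], model_xyz[j])
        d_az = (az_mod - az_ref + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        rows.append(((L_mod - L_ref) / L_ref * 1000.0, d_az, pl_mod - pl_ref))
    return np.array(rows)

# gcp_xyz and model_xyz would be (30, 3) coordinate arrays; summary statistics of
# relative_accuracy(gcp_xyz, model_xyz) correspond to the entries of Table 5.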
Table 5. Statistics of the differences between the lengths and orientations (azimuth and plunge) of the 435 GCP-pair vectors measured by the total station and those estimated from the direct-georeferenced model. Columns: length (m), length (‰), azimuth (°), plunge (°).
It is essential to highlight that a scale error of 5‰ on each axis corresponds to a 3D volumetric error of about 1.5%, which was confirmed by the difference between the landslide volumes calculated on the direct-georeferenced and GCP-referenced models (6730 m³ vs. 6864 m³; ca. 1.7%). These results confirm that the Direct-Georeferenced models are generally correctly oriented and usable for most geological and geoengineering purposes [7].

Discussion
The slope of Gallivaggio, located along the left side of the Spluga Valley (Sondrio Province, Western Alps, Italy), was investigated by Remote Piloted Aerial System (RPAS) Digital Photogrammetric (DP) surveys before and after the rock landslide of May 29th, 2018. Two pairs of 3D Digital Outcrop Models (DOMs) were produced. The Direct-Georeferenced models were developed during the emergency, before and after the event, in a relatively short time, without measuring Ground Control Points (GCPs), but using only the positions of the photographs registered by the RPAS onboard GPS. On the contrary, the GCP-georeferenced models were developed using the locations of 30 GCPs measured in the field by a total station. The analysis of the pre-landslide models was aimed at defining the fracture network affecting the slope, the potential sliding surfaces, the possible failure mechanism and the volume of rock possibly involved in the landslide. The analysis was particularly complex due to the irregular fracture network of the slope and the presence of potential sliding surfaces that were open, poorly exposed and challenging to measure.

Case-Specific Evaluations
Four main discontinuity sets were recognized (see also [26]). Set DS1 is attributable to NW-SE and WNW-ESE sub-vertical shear fractures (like the main tectonic lineaments related to the Insubric Line), set DS2 represents the fractures that dip ENE and are sub-parallel to the main foliation, set DS3 is NE-SW trending, like the sub-vertical shear fractures related to the Engadine Line, and set DS4 coincides with the N-S tension fractures parallel to the principal valley, probably related to the post-glacial rebound of the area. Moreover, by the analysis of the pre-failure DOMs, four potential failure surfaces were identified (F1, F2, F3 and F4 - Figure 10), whose attitudes and intersections can induce both planar and wedge sliding. The measurement of these open and partially loose fractures was verified by sampling all the most reliable points of their traces on the outcrop. Finally, the volume of the unstable rock mass delimited by the intersection of these fractures was calculated as 8240 m³.
The comparison of these results with what happened after the landslide, as quantified by analyzing the post-failure models, allows the following considerations to be made:
• The discontinuities really involved in the landslide (S1, S2, S3 and S4 - Figures 12 and 13) and detected in the post-failure DOMs have similar orientations, and the mechanism of failure of the landslide was correctly determined, with two fractures (F1, F4 and S1, S4 on the pre- and post-failure DOMs, respectively) acting as sliding surfaces (Figures 10b and 14b) and four discontinuity intersections critical for wedge sliding (F1-F2b, F1-F3, F1-F4, F3-F4 and S1-S2, S1-S3, S1-S4, S3-S4) (Figures 10c and 14c);
• the predicted volume of the landslide was overestimated by about 10% (~700 m³), considering that a rock block of about 809 m³ is in precarious conditions of equilibrium but still in place (Figure 17). This difference seems due to the geometry of the effective failure surfaces, which are wavy and rather irregular, probably because they are the envelope of differently oriented fractures, while the sliding surfaces considered in the calculation were assumed to be planar;
• the differences in the fracture measurements and in the calculation of the volume of unstable rock performed on the Direct-Georeferenced (DG) and on the GCP-georeferenced models are negligible from a geological point of view. This confirms the satisfactory relative accuracy of the DG DOMs, which show an orientation error of < 1° and a length error of 4.5‰; even the difference in the volume calculation was only 1.7% (134 m³). However, it must be considered that the absolute accuracy of these models is limited, as they are affected by displacement errors of even a few meters (in this case around 2 m and 5 m of planar and vertical displacement, respectively), while the GCP-georeferenced models showed high absolute accuracy, with errors on the X, Y and Z axes always around 5 cm (similar to the total station accuracy).

Uncertainty Evaluation and Analysis
The main steps of the uncertainty analysis performed in this work, from the pre-failure estimation to the post-failure and GCP-based analyses, are shown in Figure 1. In general, an RPAS photogrammetric survey gives the best results when coupled with GCP measurements by high-accuracy topographic instrumentation, allowing 3D models with centimetric accuracy to be obtained (Table 4) and, therefore, discontinuities to be mapped and rockfall mechanisms and volumes to be estimated with high precision. Nevertheless, if the risk of collapse of the monitored area is high, it is often difficult or impossible, and extremely dangerous, to perform a GCP survey. Therefore, the 3D models must be directly georeferenced using the RPAS positions registered by the onboard instrumentation. In our research we demonstrate that the effects of direct georeferencing on the absolute and relative accuracies of DOMs, and therefore on the reliability of the orientation and length measurements, are in good agreement with the results obtained in other studies [7,18]. We showed that, whereas the absolute accuracy of these 3D models is low, the relative accuracy is generally high: the length and orientation errors are equal to or smaller than 5‰ and 1°, respectively, while the calculation of the rockfall volume produced an error of 1.7%. This error can be considered the instrumental error of the direct-georeferencing RPAS photogrammetry approach and depends on the accuracy of the RPAS onboard instrumentation.
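As a back-of-the-envelope check of the error figures quoted above, a homogeneous per-axis scale error inflates a volume by roughly three times that relative error, which reproduces the ~1.5-1.7% instrumental contribution and shows why it is dwarfed by the ~10% methodological uncertainty. The quadrature combination in the last line assumes the two error sources are independent, which is an illustrative assumption only.

def volume_error_from_scale(scale_error_per_mil):
    # Relative volume error produced by a homogeneous per-axis scale error
    s = scale_error_per_mil / 1000.0
    return (1.0 + s) ** 3 - 1.0

instrumental = volume_error_from_scale(5.0)   # ~1.5%, close to the observed 1.7%
methodological = 0.10                         # ~10% from the planar-surface assumption

print(f"instrumental  : {instrumental:.1%}")
print(f"methodological: {methodological:.1%}")
print(f"combined (quadrature): {(instrumental**2 + methodological**2) ** 0.5:.1%}")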
During the evaluation of the possible rockfall volume performed before the failure event on a direct-georeferenced DOM, the instrumental error can be neglected in comparison with other kinds of uncertainties, which are related to the procedures used to analyze and measure the features of the rock slope on the DOMs (methodological errors). In the literature, these errors are indicated as intrinsically connected to discontinuity trace mapping [19], discontinuity attitude estimation ([20][21][22][23]) and the interpolation of the mesh representing the unstable rock block whose volume is to be calculated [24]. Reference [19] claims that discontinuity trace mapping based on DOMs can produce errors that are at least double the resolution of the digital models because of the ambiguity of the colours of adjacent pixels. In our research, we tried to overcome this drawback by producing a high-resolution DOM (9 mm/pixel) and selecting only the most reliable outcrop-discontinuity intersection points, avoiding any type of automatic tracing of discontinuities. Other possible sources of error lie in the orientation estimates produced by plane fitting, which can be highly uncertain, especially when the observed data are approximately collinear [21]. Some authors [20][21][22][23] have proposed different methods based on geometrical parameters and statistical analysis of the sampled points to minimize these possible errors and to evaluate the accuracy of the results. Our approach involved the accurate detection of the best points visible along the traces of the sliding surfaces and the calculation of all the possible planes, considering only groups of a minimum of four points characterized by collinearity (K) < 0.8 and coplanarity (M) > 4 (Figure 9), as proposed by [20]. This procedure proved to be effective, because the predicted variability in the orientation of the discontinuities (Figure 9) was similar to the real variability detected on the post-failure DOMs (Figure 13). The reconstruction of the mesh that represents the unstable or fallen rock block is commonly realized by converting a point cloud to a triangulated mesh. For accurate volume calculations, the triangulated mesh needs to be watertight (without missing triangles or surface holes), free of intersecting triangles, and must have consistent normal vectors and correct topology. Reference [24] claims that the accuracy of the mesh interpolation can be influenced by the density of the point cloud representing the rock block and by the reconstruction algorithm chosen. Several reconstruction algorithms were tested using a synthetic rockfall object and three natural rockfall events; the comparison shows that the calculated volume can vary by as much as 10%, even considering only meshes with a similarly high density of points [24]. In our research, to minimize this effect, we used very dense point clouds (10,000 pts/m²) representing the rock blocks and the Poisson surface reconstruction method which, as reported by [39] and [40], gives the best results for dense point clouds and closed geometries such as a rock block. Nevertheless, the volume estimation of the unstable portion of the rock slope of Gallivaggio, which is characterized by an irregular fracture network, was affected by an error of 10%. This error is mainly due to the assumption that the potential sliding surfaces are planar, whereas the real ones are wavy and irregular because of the presence of rock bridges and minor discontinuities that can make the failure surface complex [41].
Therefore, it is essential to emphasize that, in emergency conditions, the GCP survey can become superfluous, because the major influence on the uncertainty of the rockfall volume is not the instrumental error (1.7%) but the methodological one (10%), in particular the treatment of the critical failure surfaces as planar (Figure 18). Nevertheless, it is important to consider that, using the proposed methodology, it is possible to correctly estimate the orientation variability of the critical fractures (Figures 9 and 13) and, therefore, to accurately estimate the failure mechanisms (Figures 10 and 14).

Conclusions
Digital Outcrop Models developed by photogrammetric surveys with RPAS equipped with onboard GNSS/IMU can be considered powerful tools to analyze the fracture network and the stability conditions of rock slopes, especially when the slopes are inaccessible, highly unstable and dangerous. This methodology permits information to be acquired from inaccessible areas in a short time, with a dramatic increase in the data, principally due to the possibility of examining large portions of the slope using a safe methodology, and with substantial time savings with respect to classic field-based geomechanical analyses. RPAS photogrammetric surveys, compared with Terrestrial Laser Scanner investigations, also have the great advantage of avoiding holes in the reconstruction of the point cloud and mesh of a DOM caused by the occlusion effect that can affect slopes with complex geometry and without optimal terrestrial viewpoints, as in the case of Gallivaggio. Hole-filling procedures are one of the major problems for a correct mesh reconstruction and can produce errors in the volume estimations [24]. The DOMs developed by a direct-georeferencing procedure (i.e. without measuring ground control points, but using only the positions of the photographs registered by the RPAS onboard GPS) are generally correctly oriented and usable for most geological purposes [7]. In particular, it is possible to obtain information on unstable slopes and perform reliable stability analyses and volume estimations in a short time, especially during an emergency, even without planning a ground control point acquisition survey. The procedure used in the study of the Gallivaggio rockfall, aimed at estimating the volume of rock potentially involved in a landslide, proved correct and substantially reliable even when applied to a slope affected by irregular and rather dispersed sets of fractures. It consists of five steps: 1) identification and mapping of the critical discontinuities; 2) fitting of 3D mean discontinuity planes after an evaluation of the uncertainties of their measurements; 3) delimitation of the surface of the unstable wedge; 4) creation and correction of the point cloud representing the unstable wedge; 5) creation of a closed 3D mesh and calculation of its volume. In this case study, the errors in the estimation of the volume are 1.7% related to the use of the direct-georeferencing approach and 10% linked to the uncertainty in the definition of the sliding surface due to the presence of irregularities. The uncertainty due to the direct approach can be avoided by the use of GCPs; the geological uncertainty is not entirely preventable.
To evaluate the variability of the measurements of the discontinuities taken on the pre-failure DOMs, their outcropping traces were sampled and transformed into point clouds, and the attitudes of all the planes determined from the different possible combinations of four points of each cloud (characterized by low collinearity and sufficient coplanarity) were calculated. The analysis of the results allows the best-fit plane of every discontinuity, together with its representativeness and reliability, to be determined. This procedure appears particularly necessary to establish the orientation of a single plane characterized by a non-regular geometry, or in the presence of a non-regularly fractured rock mass. However, the hardly predictable orientation and geometry of the non-visible parts of the sliding surfaces prevented, at least in this case, a more precise prediction of the volume of rock subject to failure.
The impact of biotechnological advances on the future of US bioenergy

Modern biotechnology has the potential to substantially advance the feasibility, structure, and efficiency of future biofuel supply chains. Advances might be direct or indirect. A direct advance would be improving the efficiency of biochemical conversion processes and feedstock production. Direct advances in processing may involve developing improved enzymes and bacteria to convert lignocellulosic feedstocks to ethanol. Progress in feedstock production could include enhancing crop yields via genetic modification or the selection of specific natural variants and breeds. Other direct results of biotechnology might increase the production of fungible biofuels and bioproducts, which would impact the supply chain. Indirect advances might include modifications to dedicated bioenergy crops that enable them to grow on marginal lands rather than land needed for food production. This study assesses the feasibility and advantages of near-future (10-year) biotechnological developments for a US biomass-based supply chain for bioenergy production. We assume a simplified supply chain of feedstock, logistics and land use, conversion, and products and utilization. The primary focus is how likely developments in feedstock production and conversion technologies will impact bioenergy and biofuels in the USA; a secondary focus is other innovative uses of biotechnologies in the energy arenas. The assessment addresses near-term biofuels based on starch, sugar, and cellulosic feedstocks and considers some longer-term options, such as oil-crop and algal technologies. Published 2015. This article is a U.S. Government work and is in the public domain in the USA. Biofuels, Bioproducts and Biorefining published by Society of Industrial Chemistry and John Wiley & Sons Ltd.

Background
The global biofuels industry is growing.1,2 A cellulosic biofuels industry is emerging with a total projected capacity worldwide of about 195 M gal/year (a modest amount compared with the 20 to 30 B gal/year for first-generation starch- and sugar-based ethanol production). Cellulosic biorefineries have opened in Italy, the USA, Brazil, China, and Spain, and others are scheduled to launch in 2014-2015. Most employ biological processes. Second-generation biofuel refineries have been announced for 2017 and beyond with a projected capacity of about 5 to 6 B gal/year. Global biofeedstocks could produce about 914 million tons of residues by 2030, which could replace half of the gasoline needed by then. The USA remains the deployment and technology leader in biofuels - both conventional (grain-based) and advanced (e.g., cellulosic ethanol, algal biodiesel, drop-in biofuels). However, other nations are gaining (e.g., Brazil, China, the European Union, and Southeast Asia). In 2011, Brazil produced just over half as much ethanol as the US total of ~53 B L/year.3 Scaling up the bioenergy industry to meet US and global bioenergy goals will require a supply chain with a volume rivaling the current agricultural and energy industries combined.4 Although biofuel production has similar key requirements to other energy supply chain networks, it has several unique aspects related primarily to biofeedstocks and the potential use of biology for conversion. Consideration must be given to the feedstocks employed (e.g., agricultural residues, dedicated energy crops), logistics and land-use decisions associated with the selected feedstocks, conversion technologies used to convert the feedstocks (e.g., thermochemical conversion, fermentation) to products (e.g., ethanol), and utilization of the products produced (Fig. 1). See Supplemental S1 for longer definitions. This assessment examines the influence of biotechnology on US biofuel production through the lens of the supply chain. Currently, ethanol from corn provides close to 10% of the US liquid 'gasoline' fuel supply. Mandates of the US Energy Independence and Security Act of 2007 will require significant increases in feedstock amounts and sources. Thus, future supply chains may include cellulosic feedstocks for ethanol production (currently in deployment) and advanced fungible biofuels based on cellulosic or other biomass feedstocks (e.g., algal oils).

Introduction and key findings
Advanced biotechnology will be crucial to improving biofuel supply chains.
Over the next 10 years, most of the progress based on biotechnology research and development will be in conversion processes. Improvements in feedstocks will follow over a slightly longer period. Potential biotechnology game changers for bioenergy include:
• parallel yield and convertibility improvements in biofeedstocks and residues
• robust, easily convertible lignocellulosic feedstocks and residues with minimal pre-treatment
• predictable agronomic feedstock improvements for yield and sustainability (e.g., low nitrogen and water use and increased soil organic carbon sequestration)
• the ability to control rhizosphere (soil microbial) communities to improve biofeedstock traits
• economically stable bioconversions able to handle biofeedstock variability
• new tools to rapidly and rationally genetically engineer new microbial isolates with unique complex capabilities (e.g., new enzymes and fuels, or the capability to thrive in harsh conditions such as pH or temperature extremes)
• rational, reproducible control of energy fluxes and carbon balance in microbes (e.g., decoupling growth from metabolism)
• cellular redesign to overcome fermentation product inhibition while maintaining yield and rate
• stable, high-rate microalgal lipid production in open systems
• expanded compatible biotechnology processes to produce co-products (e.g., from lignin) along with fuels
The developing bioenergy industry faces several broad issues that are critical to accelerating bioenergy deployment but that will be influenced only indirectly by biotechnology:
• Scaling up an industry capable of achieving US and global bioenergy goals requires building a huge supply chain.
• Given the large scale of biofuel production needed, massive market demand for co-products is necessary.
• The low density and decomposition of biomass are challenging for feedstock storage and logistics.
• Land-use and sustainability issues must be addressed. Biotechnological and agronomic approaches can increase sustainability and manage water use.

Discussion
Advanced biotechnology can benefit the bioenergy supply chain with near-term developments in conversion and co-products. (Most of our discussion covers this area. Feedstocks have great mid-term potential and receive less attention.) A main current driver for continued improvement in bioenergy is that liquid biofuels remain the only renewable alternative for the diesel and jet markets, which are benefiting from biotechnological improvements. Table 1 summarizes factors in the biofuel supply chain and highlights how advanced biotechnology might significantly impact them. The following sections consider these factors in sequence - feedstocks, logistics and land use, conversion, and products - and present analyses of the feasibility of advances, risks, barriers to major advances, and crosscutting impacts on the supply chain. Drawing on improved understanding of mechanisms, advanced biotechnologies lead to targeted manipulations using genetic engineering, synthetic biology, directed evolution, or genetically assisted breeding. The biotechnological approaches most likely to have the greatest impact on the biofuel supply chain include synthetic biology, protein design, and associated tools. Synthetic biology can be defined as the rational design of biological systems on the basis of engineering principles.
Central to this approach is the capability to regulate genes and metabolic pathways in a controlled fashion in both native and introduced organisms. Examples of bioengineering options include:
• varying promoter activity and efficacy
• using differently inducible promoters
• modifying ribosome binding strength

This assessment examines the influence of biotechnology on US biofuel production through the lens of the supply chain. Currently, ethanol from corn provides close to 10% of the US liquid 'gasoline' fuel supply. Mandates of the US Energy Independence and Security Act of 2007 will require significant increases in feedstock amounts and sources. Thus, future supply chains may include cellulosic feedstocks for ethanol production (currently in deployment) and advanced fungible biofuels based on cellulosic or other biomass feedstocks (e.g., algal oils).

Discussion

Advanced biotechnology can benefit the bioenergy supply chain with near-term developments in conversion and co-products; most of our discussion covers this area.

'Rational design' is defined as the knowledge-based design of new proteins; 'directed evolution' refers to the generation of new protein functions through functional screening of randomly generated variant copies of the protein. DNA writing and site-specific genome engineering capabilities form the basis of the rational design approach. The directed evolution approach requires high-throughput, automated design, construction, engineering, and evaluation of tens of thousands of combinations using robotics and computational techniques. Other important biotechnological tools include:
• molecular techniques relevant to
  - robust, reliable, and rapid single-cell genomics
  - protein profiling from very small samples
  - transient and stable transformation
• phenotyping techniques for in situ measurements
• standards-adhered reconstructions of transcriptional networks and nodes
• annotated genome sequences from hundreds of relevant species and genotypes
The creation of transgenics or GMOs can prove the function of key genes or pathways leading to the desired phenotypes. Armed with this knowledge, genetically assisted breeding can greatly accelerate the development of specific phenotypes that are nontransgenic and thus not regulated as GMOs. This will be especially useful for improved non-GMO plant biofeedstocks, especially where the traits are either cisgenic or already within the natural variation in the population.

Site-specific genome bioengineering tools that facilitate targeted genome editing and transcription modulation are essential for elucidating the function of deoxyribonucleic acid (DNA) elements. Such tools include homologous gene targeting, transposases, site-specific recombinases, meganucleases, integration of viral vectors, and artificial zinc-finger technology. When coupled to effector domains, an emerging technique - customizable transcription activator-like effectors - provides a promising platform for achieving a wide variety of targeted genome manipulations.
5 Th ese site-specifi c genome bioengineering tools may achieve greater precision and predictability in modifying biological traits in feedstocks and potentially in microbial-base conversion technologies. Variations in enzymatic function and protein features ultimately have functional relevance at the organism level and thus have been tested over evolutionary time via natural selection. Th e nascent fi eld of protein engineering has been most used for sector design in microbial sciences. 6 Th ere are two main approaches to improving or creating new protein functions: rational design of proteins duction by advanced breeding and biotechnology if the markets make such eff orts worthwhile. 11 Similar changes might also be seen in residues from other food crops. Biotechnology will be used to improve robustness and decrease water/nutrient requirements for energy crops. Two active avenues are in control of the plant (such as osmotic capacities, transpiration, nutrient transporters) and manipulation and control of the rhizosphere, where certain microbes have been shown to enhance drought resistance in switchgrass. 13 Leading candidate crops that may supply secondgeneration feedstocks for lignocellulosic ethanol include perennial grasses such as switchgrass, Miscanthus, energy cane, and short-rotation woody crops like poplar and willow. Perennial oilseed species such as Jatropha may serve as biodiesel feedstocks, 14 and algae may eventually become a major feedstock. Th e attractiveness of microalgae stems from their rapid growth under a broad variety of conditions, including in saline or nutrient-poor water or waste water. Th ey can be grown in open ponds, closed chambers, or vertical stacks, eliminating the use of arable land. However, there are barriers to making algal biofuels economically viable, including high infrastructure and energy costs for growth and harvest; the necessity of year-round warm weather; and a high possibility of contamination, evaporation, or inconsistent mixing. 15 Additional barriers are outlined in Cheng and Timilsina 16 and in the DOE algal fuels roadmap. 17 Traditional biology or advanced biotechn ology may be able to improve the salt tolerance of algae, allowing the use of brackish water for growth. Most research targets improvements in eukaryotic algae, especially those that naturally accumulate large quantities of lipids under stress. Recent advances suggest a combination of metabolic engineering and process control can alleviate algal challenges. Further discussion of how biotechnology could advance algal feasibility is in supplemental material section S2. Feedstock composition Lignocellulosic-biomass-based biofuels would use dedicated energy crops such as switchgrass, along with green waste or residue from food or nonfood crops (e.g., corn, wheat, rice, sorghum). Th e lignin and cellulose, their organization and quality, structural features of the plant cell wall, and patterns of cell wall development directly aff ect biomass properties and in turn the sugar yield and the potential for producing liquid transportation fuels. A major goal of feedstock biotechnology eff orts is making more easily convertible plants, for instance by reducing lignin content. Early work dispelled the intuitive idea either cisgenic or already within the natural variation in the population. 
Feedstocks

Biotechnology can improve the feedstock component of the supply chain in multiple ways:
• increasing feedstock yields
• improving robustness and reducing amendment (e.g., fertilizer) requirements
• increasing tolerance for environmental and biological stresses such as drought and pest attacks
• modifying feedstock composition (e.g., to reduce biomass recalcitrance to conversion)
Improvements will likely occur through genetic understanding that affects the phenotypic characteristics of interest. Potential useful manipulations include breeding efforts that select for traits of interest and direct genetic modifications. Traditional breeding will transform into genetically assisted breeding, using biotechnological tools to rapidly identify desired traits in plants at earlier stages. Plant biotechnology will have a major impact (although field trials slow the impact of new discoveries). Some concepts in yield and robustness are already in field trials, and research into composition will follow.

Yield, robustness, and planting requirements

Agricultural and forest residues are critical feedstocks for the first biofuel conversion facilities. 8 Corn stover, a particularly important agricultural residue, is the feedstock source for cellulosic ethanol facilities currently under construction by POET 9 and DuPont. 10 But because corn grain probably will continue to be a higher-value commodity than stover for bioenergy, most biotechnology efforts to improve corn focus on optimizing grain yield and quality. Average grain yields have been projected to approach 250 bu/ac in 2030, 11 up from an average of 160 bu/ac in 2013. 12 This increase could have the paradoxical effect of actually reducing corn stover feedstocks. The total amount of stover produced depends on the harvest index (the ratio of grain mass to total aboveground crop mass). Currently, the harvest index averages 0.5 (half of the aboveground plant mass is grain), but genetic selection to increase grain yields could increase the harvest index to as much as 0.7, 11 greatly reducing the amount of corn stover available. For example, if the corn grain yield were 250 bu/ac with a harvest index of 0.5, the pre-harvest stover yield would be 15.7 Mg/ha (7 ton/ac). If the harvest index increased to 0.7, the stover yield would drop to 6.7 Mg/ha (3 ton/ac). However, the harvest index can also be modified to increase biomass production.

In practice, genes are typically manipulated one at a time, and the manipulation of one gene often has both expected and unexpected pleiotropic and undesirable effects on the plants. The paucity of publications reporting dramatic improvements may be due to the failure to target the right combination of gene(s) and promoter elements, and to the lack of understanding of the correlation between protein structure and the most effective enzyme biochemistry. Therefore, a whole-pathway or systems view is needed. These bottlenecks to realizing biotechnological crop advancements can be addressed with:
• robust statistical analysis methods to handle complex and diverse datasets
• plant models with shorter life cycles
• field trials coordinated tightly with greenhouse assessments
• parallel and concerted investment by industry

Logistics and land and water use

Efforts to genetically enhance energy crop production and conversion will affect supply-chain logistics (i.e., harvest, storage, transport, and size reduction) and resource utilization.
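Returning briefly to the harvest-index arithmetic in the feedstock discussion above, the following minimal Python sketch reproduces the quoted stover figures. The bu/ac-to-Mg/ha conversion for corn grain (56 lb per bushel, 0.4047 ha per acre) is a standard factor assumed here for illustration; it is not stated in the article.

```python
# Corn stover availability as a function of grain yield and harvest index.
# A minimal sketch reproducing the worked example in the text; the
# bu/ac -> Mg/ha factor for corn grain (~0.0628) is an assumed standard
# conversion (56 lb/bu, 0.4047 ha/ac), not a number taken from the article.

BU_PER_AC_TO_MG_PER_HA = 56 * 0.45359237 / 1000 / 0.40468564  # ~0.0628

def stover_yield_mg_ha(grain_yield_bu_ac: float, harvest_index: float) -> float:
    """Pre-harvest stover (Mg/ha) remaining after accounting for grain.

    harvest_index = grain mass / total aboveground biomass.
    """
    grain_mg_ha = grain_yield_bu_ac * BU_PER_AC_TO_MG_PER_HA
    total_biomass_mg_ha = grain_mg_ha / harvest_index
    return total_biomass_mg_ha - grain_mg_ha

if __name__ == "__main__":
    for hi in (0.5, 0.7):
        print(f"HI={hi}: {stover_yield_mg_ha(250, hi):.1f} Mg/ha stover")
```

Running it gives roughly 15.7 Mg/ha of stover at a harvest index of 0.5 and about 6.7 Mg/ha at 0.7, matching the values quoted in the text.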
Speculative biotechnological improvements that would improve logistics include increased biomass density, improved storability, decreased variability, and reduced ash content (especially for thermochemical processes). Higher yields for biomass crops - a primary goal of feedstock biotechnology research - typically help reduce crop production and logistics costs. Sokhansanj et al. 31 show that increasing switchgrass yields from 4.5 to 13.4 ton/ac dropped production costs by more than 50% (from $37.66 to $17.37 per ton). Although many biotechnological advances, such as increasing yields, will improve logistical efficiency and reduce costs, others may increase the difficulty of building supply chains capable of delivering high-quality year-round feedstocks. For example, how will genetic improvements that make plants more easily convertible impact size-reduction and storage operations? More discussion is in the supplemental material, sections S3 and S4.

Competition between bioenergy crops and food/feed production (real and perceived) for land and water is a challenge. Measures to reduce land competition will include increasing biomass yield (dry mass produced per unit of land per unit of time) without harming soil health, 32 to minimize the land area required for bioenergy feedstocks, 33 and designing plants that tolerate environmental stresses such as drought and salt and could be grown on currently unproductive or underused lands. 33,34 Strategies such as double-cropping, either seasonal or spatial, can help energy crops coexist with food crops.

The intuitive idea had been that reducing lignin content would reduce the strength of plants and lead to lodging (i.e., plants falling over in the fields, making them difficult to harvest). 18 Instead, increased lignin was found to be more likely to cause lodging, possibly by making plants more brittle. As researchers continue to develop plants with lower lignin content, there may be a point at which improvements in digestibility are not worth the reductions in strength, water movement or pest resistance. A review by Pedersen et al. 19 found that, although results were mixed, reduced lignin tends to decrease the agricultural fitness of plants. However, they noted that the significance of reduced lignin content is strongly affected by interactions with the environment and genetic background. Lignin has been shown to play a role in plant shear strength; 20 since grinding equipment designed to exert shear forces on biomass is more energy-efficient, reducing lignin could in turn reduce the energy required for grinding. As adequate quantities of modified bioenergy crop material become available in field trials, experiments are needed to test the relationship between lignin content and grinding energy. There has been a rapid rise in reports on the genetic and molecular underpinnings of biomass properties in both woody and herbaceous plants. Several excellent recent reviews are available on genomics and biotechnological approaches to improving plant cell wall characteristics and saccharification efficiency. 21-27 These reviews report that changes in cellulose, xylan, and pectin can all have beneficial impacts. Another goal of feedstock-related biotechnology is increasing plant-oil production in natural oil-seed crops, algae, or new plant species. These plant oils can be directly made into biodiesel or upgraded into a biojet fuel, as several recent reviews report.
[28][29][30] Barriers to improvement of feedstocks A nearly universal goal of plant biotechnology research is increased yields, in terms of overall plant growth (height or mass) and/or the plant component needed for the feed, food, or fi ber market. It remains a colossal challenge to precisely defi ne the genetic basis of a plant trait. Th e more complex the trait, the more factors there are that control the trait, making it harder to pin down targets for transformation. Recent biotechnological approaches include studying pathways and genome-wide-omics*, but when it comes to validating a hypothesis for a role or function, one gene is *The term '-omics' refers to the growing suite of large-data biological analyses, including genomics, transcriptomics, proteomics, and metabolomics. the past two decades and probably will continue to do so, but major SSF breakthroughs are unlikely. Consolidated bioprocessing (CBP) uses microorganisms that produce their own hydrolytic enzymes and complete fermentation into the product in one unit operation. CBP probably will be realized in some form within the next 10 years; hybrid CBP/SSF approaches also will be deployed that combine cellulase-expressing industrial microbes that are incapable or poorly capable of converting plant biomass alone, but that require less added cellulase than current strains. Th ere has been progress both in engineering cellulolytic microbes to make fuels 38,39 and engineering fuel-making microbes to degrade cellulose. [40][41][42] Synthetic biology and metabolic engineering tools have been brought to bear on issues such as co-utilization of glucose (the sole component of cellulose) and xylose (the primary component of hemicellulose) in a variety of organisms. [43][44][45][46] Further near-term progress toward efficient co-utilization is certain. Non-xylose hemicellulosic sugars such as rhamnose, arabinose, and galactose are underutilized; advanced synthetic biology tools for metabolic control are being developed 47 that will enable them to be used more effi ciently, thus increasing the fuel/chemical yield per ton of biomass without decreasing the fi tness or robustness of the CBP micro-organism. Preliminary reports suggest un-pretreated biomass could be a feasible substrate for bioconversion. 48 If biotechnology can improve the yield and rate of product formation from un-pretreated biomass, it would be a game-changing technology because it would eliminate the need for costly chemical pretreatment. Planned waste streams from bioprocessing -including lignin, acetic acid, and glycerol -need to be utilized to add value for biorefi neries. Biotechnology could help remedy the underutilization of lignin, 49 a highly amorphous and hydrophobic polymeric network of substituted aromatic compounds that accounts for approximately 25% of plant biomass. 50 It is typically burned for process heat and electricity, b ut it could be used to make fi ne and/or bulk chemicals to increase the economic viability of biorefi neries. One challenging potential solution would be engineering a microbe to depolymerize lignin and convert it to a fuel or a bulk chemical. Given the vast quantities of lignin that would be produced by a mature biofuels industry, the target products would need pre-existing large markets to prevent market saturation and enable rapid commercialization. Glycerol is a by-product of both biodiesel and bioethanol production. 
Both native and engineered organisms have been demonstrated to convert crude glycerol to valueadded products such as succinic acid, propanediol, and ( planting biomass as a winter cover crop) or spatial (planting grasses between rows of a tree plantation) can help energy crops coexist with food crops. Some energy crops, particularly trees, appear to be able to tolerate saline soils that are not suitable for food crop production. It has been estimated that trees selected or designed for ability to grow on salt-aff ected land could supply up to 8% of global primary energy consumption. 35,36 Miscanthus, which can substitute for corn in ethanol production, requires less land and water than corn. Carefully selecting and transforming crops for higher yields and lower water use will be necessary to achieve sustainability in the use of basic resources such as land and water. Expanded research to improve the productivity of bioenergy crops with low or no chemical inputs will achieve the dual goals of expanding the production and use of biomass and improving sustainability. 32 Conversion technologies Advanced biotechnology is critical to developing and deploying solutions for biomass conversion. Th ere are two aspects of this issue: (1) modifi cation of the microbes used in conversion processes, via metabolic engineering or synthetic biology, to produce new or improved products and (2) improvement in the key factors of bioconversion: yield, titer, and rate. Biotechnology has the potential to enhance three primary aspects of conversion: • additional feasible feedstock streams • product diversifi cation • process technologies and effi ciencies Metabolic engineering, synthetic biology, and other biotechnological advances will allow more rapid optimization of conversion processes by rational enzyme engineering, by introduction of new pathways and regulation, and by control of the fl ux. Conversion of additional feedstocks and components Th e most critical bioenergy feedstocks will be cellulosic and hemicellulosic sugars from plant biomass. Several companies (Abengoa, Beta Renewables, and DuPont 37 ) are using conversion processes driven by separately produced cellulolytic enzymes [e.g., simultaneous saccharifi cation and fermentation (SSF)]. Enzyme engineering will lead to advances in rational improvement of hydrolytic cellulolytic enzyme activity and will help lower cellulase production costs. Enzyme cocktails have steadily improved over for either expanded feedstock streams or new products to be economically viable. Additional barriers are outlined in Cheng and Timilsina. 16 Th e fi nal titer of a desired product (e.g., ethanol) is oft en controlled by product inhibition or tolerance. Much research in a variety of organisms has targeted understanding and mitigating the toxicity of process inhibitors, including alcohols, hydrocarbons, phenolic compounds, and organic acids. [54][55][56][57] Most eff orts to increase product tolerance involve adaptation and evolution; learning how to truly engineer tolerance would be a major advance. High-titer (10% to 20%) soluble products are usually separated via distillation. Direct production of insoluble hydrocarbons provides a distinct biotechnologydriven advantage for processing because the initial process can be a liquid/liquid phase separation. However, insoluble solvents can cause cellular disruption and inhibition in many microorganisms. 
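To make the notion of fuel yield per ton of biomass discussed above concrete, here is a hedged back-of-envelope sketch. Only the stoichiometric ceiling of roughly 0.511 g ethanol per g of hexose follows from the fermentation chemistry (glucose yields two ethanol and two CO2); the carbohydrate fraction and process efficiencies are illustrative assumptions rather than values from the article, and the water added during hydrolysis is ignored for simplicity.

```python
# Back-of-envelope ethanol yield per dry tonne of biomass, illustrating how
# the "yield" lever in a bioconversion process propagates through the chain.
# The stoichiometric maximum (~0.511 g ethanol per g hexose) follows from
# glucose -> 2 ethanol + 2 CO2; the carbohydrate fraction and process
# efficiencies below are illustrative assumptions, not values from the text.
# The mass gained when glucan is hydrated to glucose is neglected here.

STOICH_ETOH_PER_HEXOSE = 2 * 46.07 / 180.16   # ~0.511 g ethanol / g hexose
ETHANOL_DENSITY_KG_PER_L = 0.789

def ethanol_l_per_dry_tonne(carbohydrate_fraction=0.60,
                            hydrolysis_yield=0.90,
                            fermentation_yield=0.90):
    """Liters of ethanol obtained per dry tonne (1000 kg) of biomass."""
    sugar_kg = 1000 * carbohydrate_fraction * hydrolysis_yield
    ethanol_kg = sugar_kg * STOICH_ETOH_PER_HEXOSE * fermentation_yield
    return ethanol_kg / ETHANOL_DENSITY_KG_PER_L

if __name__ == "__main__":
    base = ethanol_l_per_dry_tonne()
    better = ethanol_l_per_dry_tonne(fermentation_yield=0.95)
    print(f"base case: {base:.0f} L/t, improved fermentation: {better:.0f} L/t")
```

The point of the sketch is simply that a few percentage points of fermentation yield translate directly into additional liters of fuel per ton of delivered feedstock, which is why yield sits alongside titer and rate as a key conversion metric.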
Moderate but important advances are likely in the realm of tolerance of products and other inhibitors, which will allow higher titer production. Rational decoupling of growth from metabolic fl ux has the potential to increase yield by eliminating the fl ux of substrate to production of new cells while also increasing titer by reducing product inhibition. 58 Previous technologies include microbial or biocatalyst retention processes such as cell recycling or immobilization to maintain a high rate. Future developments in synthetic biology to control cellular physiology have a moderate chance of enabling the rational decoupling of growth and metabolism. However, economical decoupling with a concomitant increase in yield and titer would result in substantial process improvements. Challenges associated with growing and processing algae -including concerns over water use, pond contamination, use of GMOs in open ponds, photobioreactor scale-up, product harvest, and product dewatering -will continue to be barriers 17 and will be diffi cult to address using biotechnology only. In this case, issues associated with phototrophic growth are combined with the challenge of genetic manipulation of non-model bacteria. Current bioprocesses tend to rely on mesophilic microorganisms (20 to 45°C), but the ability to operate under extreme conditions could simplify process operations and lower costs. Extremophiles can tolerate high levels of temperature (up to 100°C), pH, or salts, and thermophilic microbes oft en operate at increased rates. 59 Th ermophilic processes are speculated to improve separations (i.e., in situ vapor extraction) and lower heating and cooling costs. Partial analyses show generally small advantages because some heating, cooling or product separations will still be polyhydroxyalkanoates (PHAs). 51 Acetic acid, a low-value or waste substance present in plant biomass in the form of acetylated xylan, is also a product of microbial metabolism. Microbes will be engineered to convert acetic acid to value-added products rather than allowing the carbon to go to waste. Recently, Saccharomyces cerevisiaewas engineered to consume acetic acid. 52 It is unclear whether substantial value will be added to these waste streams within 10 years, but the rapid progress of synthetic biology makes these high-risk, high-reward projects more feasible. Increased product diversifi cation New biomass-derived fuels, chemicals, and other products are likely to become more prominent over the next decade. Cellulose-derived short-chain alcohols such as ethanol will be the fi rst to be commercially successful; other products will follow. Th ere are substantial cost barriers to the commercialization of microbially produced hydrocarbons (biogasoline, biojet fuel) and medium-to long-chain fatty acids (for esterifi cation to biodiesel) as fuels, but an intense research eff ort over the next decade would have a moderate chance of reducing the cost to a competitive range. Even if costs remain too high to allow their use as fuels, many of these compounds are potential high-value co-products. For example, farnesene can be sold as a highvalue precursor for cosmetics and other applications, and Amyris's business model targets doing so in the near term. Even bio-compounds such as n-butanol and isobutanol are current feedstocks for the chemical industry and may begin displacing petroleum-derived compounds. Other compounds with high potential for deployment within the next decade include organic acids (e.g. 
succinic acid, malic acid, lactic acid, adipic acid), diols (e.g. propanediols, butanediols), and PHAs, which add to the current biocommodities of acetic acid and 1,3-propanediol. A long-range, high-risk application of synthetic biology would be to combine the metabolic engineering approaches used to convert sugars to useful products with the engineering of photosynthetic microbes to provide direct conversion of sunlight into chemical products, fuels, or electricity. 53 However, this combines the challenges of effi cient bioproduct formation with the major challenges of algal bioprocessing and is seen as unlikely in the next decade. Improved process technologies and effi ciencies Th e key factors in a bioconversion process are yield, titer, and rate. Synthetic biology and metabolic engineering are the main drivers of increased yield and will be required At the same time, fundamental studies will continue to provide an intellectual foundation for future work. Th is might include discovery of unique enzymatic activities or pathways that could be harnessed for biotechnology, along the lines of the recently discovered enzyme fatty aldehyde decarbonylase 61 that produces hydrocarbons from fatty acid derivatives. Oft en, a microbe newly isolated from the environment has desirable properties but a lack of genetic tools hinders its development as an industrial biocatalyst. Typically, several years of intensive trial-and-error genetic tool development are needed to genetically engineer any novel microbe. More rational, reliable approaches to developing genetic tools and cultivation methods would allow the use of many unique features of novel microbes (such as extremophiles or microbes that can consume complex substrates or produce novel products). Th is development is only moderately likely over the next decade but could be a game-changer if accomplished. Genetic tools for these non-model organisms are improving. 62 Products Potential applications of biotechnology to by-product production are primarily in • altering feedstock characteristics so by-products are more suitable for the intended use. • altering enzymes, bacteria, or yeast used in the fermentation process to yield higher-value or more consistent co-products. Given the scale of biofuel production necessary to make an impact on the market, co-products must be useful in massive quantities to avoid collapsing their market prices. For example, a major by-product of bioenergy production, particularly biodiesel, is glycerin. Glycerin is valuable because it is useful in producing hundreds of products; however, because of increased biofuel production, the glycerin supply has exceeded demand and the price has collapsed. 63 Th is example illustrates the problem of targeting specialty chemicals as co-products of biofuel production. At least two by-products, distillers' grain and lignin, could supply a large market and thus improve the economics of biofuel production. Th e use of distillers' grain in animal feed is a proven contributor to the economics of biofuel production from corn. Lignin has potential uses as a precursor in carbon fi ber production or as an additive to plastics to improve the qualities of plastic products. 49 US exports of distillers' grain as an animal feed have risen sevenfold since 2005/2006 12 with the rise in ethanol required. Th e greater advantage of extremophilic processes is indirect -they are likely to resist contamination by undesired microbes. 
In addition, feed stream cleanup for these processes (e.g., for salts or low pH) would be less stringent. However, extensive biotechnological modifi cation will be required either to use most current extremophiles (likely for selected extremophiles) or to make current robust microbial hosts into extremophiles (unlikely soon). Biotechnology can accelerate thermochemical conversion of plant material and municipal waste into syngas followed by syngas bioconversion. Bacterial strain engineering and development by companies such as LanzaTech have demonstrated the potential for bioconversion of syngas to a suite of fuels and chemicals. Although biotechnology is unlikely to signifi cantly overcome all limitations of syngas as a feedstock, such as mass transfer of CO, the product diversifi cation barrier is ripe for biotechnological innovation. In addition to using microbes alone for bioconversion, an alternate approach is in vitro product formation with enzymes alone. It removes the desired enzyme pathway from the living microorganism and uses a mixture of specifi c enzymes. Pathways of more than ten enzymes have been devised and tested. 60 Breakthroughs are needed in enzyme stability and cofactors, as well as in lowering the cost of enzyme production. Improved in vitro enzymatic processes are likely but will be cost prohibitive for most commodities. Microbial communities may be assembled to perform desired functions. However, nearly all industrial product formation uses single isolated microbes or enzymespure microbial cultures -as opposed to mixed cultures. Exceptions are in food production or waste treatment (e.g., biomethane production) and are driven by consumption for growth. A major breakthrough would be the ability to design, assemble, and control a mixed culture to produce a specifi c desired product (such as a biofuel). However, many challenges in pure culture apply also to mixed populations, making this approach unlikely in the next decade. Barriers to improving conversion processes Th e major barriers to biotechnological advancements in conversion of additional feedstock streams and new products are limited knowledge of microbial metabolism and diffi culty in controlling metabolic fl ux. Advances in synthetic biology, particularly, will begin to enable more dynamic control of metabolic fl ux via coupling sensing of the environment with gene expression, translational and posttranslational control, and allosteric regulation. altered composition of soil symbionts). Biotechnological improvements in these areas will reduce plants' dependence on fertilizers, pesticides, and other agrochemicals. Th is in turn is expected to result in (i) reduced pollution and (ii) reduced production costs; however, these sustainability factors will need to be measured. 67 If microbes can be engineered to effi ciently convert cellulosic material into liquid fuels and chemicals from unmodifi ed plants, this would suggest that plant biomass yield per acre would become a primary plant engineering target. However, if the former goal remains out of reach, then an essential target for improvement would be to modify plants so that they are more easily converted to products. Another niche use to provide additional value within the supply chain might be the use of biorefi nery wastewater for bioproduction of fuels and chemicals. 68 Conversion will also impact how fuels are used. Th e choice of compounds produced is highly pertinent. 
If ethanol remains the dominant product, then the blend wall will continue to be a barrier and to limit the size of the biofuels market. However, if biogasoline, biodiesel, and biojet drop-in fuels become economical, then the blend wall will no longer be an issue. Developing microbes designed for mutually beneficial interactions with plants has been identifi ed as a way to increase crop yields, decrease nutrient applications, improve resistance to pests and diseases, and improve plant water use. 69 Th ese symbiotic relationships are expected to increase grower profi t by stabilizing yields across years with varying weather conditions, and by reducing grower costs. Th ey should help address the challenges of greenhouse gas emissions and competition for land by improving yields with reduced inputs. Th ere are other important questions related to the sustainability of biofuel plantations: we do not understand how the biota of ecosystems (e.g. pollinators, aphids, birds, small mammals) will be aff ected by the cultivation of plants improved using advanced biotechnology, in nonnative or untested geographical niches, and the related land use changes. Th e impacts of improved biofuel crops on the carbon cycle, biosequestration, 70 and the promise of biofuel as a carbon-neutral fuel option need to be demonstrated. Th e impacts of changes in atmospheric CO 2 levels, precipitation regimes, and temperatures require that bioenergy plant performance and sustainability be assessed within the context of climate change models. Th e ultimate success will come from combining advances that result from biotechnology approaches with more traditional chemical engineering to develop costeff ective processes. production from corn. Distillers' grain is a good source of animal feed, and using it as such improves the greenhouse gas benefi ts of biofuel production. It is oft en blended with other plant materials and supplemented with specifi c amino acids for optimal nutritional quality. Because there is little information on equivalent materials that will come from cellulosic biofuel plants, research is needed on potential issues with using them, including nutritional quality, presence of toxic products, and predictability of the content. Feedstock characteristics and the processes used in preprocessing it will aff ect the utility of the by-products as an animal feed. Genetic manipulation of the feedstock could increase the value of the distillers' grain by increasing total protein content and the content of amino acids that might be lacking (e.g., lysine). Alternatively, the fermentative microorganisms could be engineered to do the same, improving the value of the product. However, given the time scale of deploying engineered plants and the current lack of nutritional data, this has a low probability of completion within the next 10 years. Potential uses of lignin include use as a feedstock for structural materials such as carbon fi ber 64 and materials for energy-related applications, such as anodes for lithium ion batteries. 65 Th e current price of carbon fi ber limits its use to specialty applications, but a price drop would open new markets for it as a structural material. 66 One challenge is the variability of the structure and composition of lignin, 65 and no one has demonstrated cost-eff ective production of carbon fi ber from lignin that meets the strength requirements. 
66 Ongoing research into how genetic variation in the feedstock aff ects the suitability of the lignin for use as a carbon fi ber precursor could lay the foundation for future plant-engineering strategies for improving lignin conversion to carbon fi ber. Lignin pathways are being altered in bioenergy crops to increase conversion. Th ese same plants will likely also be assessed for the eff ect of lignin composition on the production of useful products, including carbon fi ber. Crosscut impacts on supply chain It is essential to study, understand, and improve the environmental and ecological sustainability of future bioenergy crops that may be planted at the scale of millions of hectares of land. Th e ideotypic or most desirable bioenergy plant will need to be productive on marginal lands (land having poor soil structure, nutrient composition, and moisture) and in fl uctuating weather conditions and will need to be resilient in the face of various abiotic stresses (e.g., water, nutrients, and temperature) and biotic stresses (e.g., pathogens or Conclusions Projections based on current research and development indicate that biotechnology will continue to improve biofuel supply chains. Over the next 10 years, most biotechnological advances will be in conversion processes, followed over a slightly longer period by feedstock improvements. In conversion, genetic engineering of microbes and enzymes, combined with other biotech process modifi cations, will continue to improve yields, rates, and titers. Improved microbes and enzymes will more completely convert biomass feedstocks and handle a broader variety of biofeedstocks. Th ere will be rapid advances in developing and deploying additional marketable co-products from biorefi neries -including fuel blendstocks beyond ethanol (e.g., butanol or hydrocarbons) to get past the ethanol 'blend-wall' -as well as commodity chemicals to improve the biorefi nery economy. For feedstocks, advances are anticipated in yield, nutrient uptake (lower fertilizer use), and tolerance of environmental stresses (e.g., drought). Th ese probably will require the adoption of (GMOs). Because GMO crops have longer fi eld testing and deployment cycles, and because of societal reservations about GMOs, deployment of these improvements is likely later within the 10-year timeframe. A biofuels industry of the scale required to produce enough energy to make a major impact will also generate massive quantities of co-products and by-products. Biotechnology can be useful in tailoring feedstocks so that they produce by-products suitable for widespread use and in altering conversion processes to yield consistently highvalue co-products. Some biotechnology fi elds could allow more rapid impacts and should be monitored for breakthroughs. Strategic and knowledge gaps in advanced biotechnology related to bioenergy are primarily in two scientifi c areas: (i) understanding of underlying biology to allow rational changes and (ii) improvement of tools for implementing changes. Craig C. Brandt Mr Brandt is a research staff member in the Biosciences Division at Oak Ridge National Laboratory. He has a diverse background in data management, statistics, and software development. His research has focused on the application of advanced data analysis techniques to a variety of problems in bioremediation, biogeochemical cycling, and resource analysis. His recent work has been in the area of analysis of biomass resource supply and supply chain optimization. 
Adam Guss Adam Guss is currently a Genetic and Metabolic Engineer at Oak Ridge National Laboratory, where his primary focus is engineering micro-organisms to convert lignocellulosic biomass to fuels and chemicals. He holds a PhD in Microbiology from the University of Illinois at Urbana-Champaign.
Broadly applicable oligonucleotide mass spectrometry for the analysis of RNA writers and erasers in vitro Abstract RNAs are post-transcriptionally modified by dedicated writer or eraser enzymes that add or remove specific modifications, respectively. Mass spectrometry (MS) of RNA is a useful tool to study the modification state of an oligonucleotide (ON) in a sensitive manner. Here, we developed an ion-pairing reagent free chromatography for positive ion detection of ONs by low- and high-resolution MS, which does not interfere with other types of small compound analyses done on the same instrument. We apply ON-MS to determine the ONs from an RNase T1 digest of in vitro transcribed tRNA, which are purified after ribozyme-fusion transcription by automated size exclusion chromatography. The thus produced tRNAValAAC is substrate of the human tRNA ADAT2/3 enzyme and we confirm the deamination of adenosine to inosine and the formation of tRNAValIACin vitro by ON-MS. Furthermore, low resolution ON-MS is used to monitor the demethylation of ONs containing 1-methyladenosine by bacterial AlkB in vitro. The power of high-resolution ON-MS is demonstrated by the detection and mapping of modified ONs from native total tRNA digested with RNase T1. Overall, we present an oligonucleotide MS method which is broadly applicable to monitor in vitro RNA (de-)modification processes and native RNA. INTRODUCTION Ribonucleic acids (RNA) contain a vast variety of chemical modifications, which derive from the four canonical nucleosides adenosine, guanosine, uridine and cytidine. Modifications are introduced by dedicated enzymes, sometimes referred to as RNA writers. In analogy, RNA erasers exist, which demethylate adenosine in messenger RNA (mRNA) (1,2) or transfer RNA (tRNA) (3,4). Many RNA modifying enzymes add methyl groups to either the nucleobase or the ribose. For example, tRNA methyltransferase 1 (TRMT1) is responsible for the dimethylation of guanosine to 2,2dimethylguanosine in tRNA (m 22 G) (5) and methyltransferase like proteins 3/14 (METTL3/METTL14) methylate position 6 of adenosine and the epitranscriptomic mark m 6 A forms in mRNA. In addition to RNA modification by methylation, the conversion of adenosine to inosine by deaminases such as the adenosine deaminase tRNA specific enzyme 2/3 (ADAT2/3) has been reported (6). Interestingly, many neurological diseases are connected with mutations in RNA modifying enzymes such as TRMT1 (5) and ADAT2/3 (7). In addition to the active decoration of RNA with modifications, the removal by active demethylation is also possible. Demethylation of m 6 A and its ribose methylated variant m 6 Am has been reported in human mRNA (1,2). Bacterial RNAs, including tRNA and rRNA, are also methylated, however there are no reports of active demethylation of enzymatically methylated sites. In many bacteria, the methylation of adenosine at position N1 (1-methyladenosine, m 1 A) is not found in DNA or tRNA. However, alkylation stress can lead to direct m 1 A formation in DNA and RNA. Due to methylating agents, which chemically methylate nucleic acids, bacteria have the ability for active demethylation. This m 1 A is removed by the alphaketoglutarate-dependent dioxygenase AlkB (8). In human tRNAs, m 1 A is found in 42% of all tRNAs at position 58. m 1 A58 has been reported to be substrate to the human homologues of AlkB, namely ALKBH1 (3) and ALKBH3 (9). The detection of modified moieties in RNA is possible by chemical means (10), by sequencing (11) and mass spectrometry (MS) (12). 
Even with the ever-rising number of sequencing techniques, which detect modified nucleosides in whole transcriptomes, MS remains the key technique for characterization of modified nucleosides. RNA MS analytics can be subdivided into three major principles. The first relies on complete enzymatic digestion of the RNA into e41 Nucleic Acids Research, 2020, Vol. 48, No. 7 PAGE 2 OF 16 the nucleoside building block and is highly sensitive with lower limits of detection (LLOD) in the fmol and amol range (nucleoside-MS). This technique is commonly used for detection (12), quantification (13) or discovery (14,15) of modified nucleosides. The second uses enzymes, which only partially digest the RNA and smaller oligonucleotides (ON) emerge (in this manuscript referred to as ON-MS). In the case of ON-MS, some of the sequence context surrounding a modified nucleoside is preserved and the technique is used to place modified nucleosides in known and unknown RNA sequences. Here, the pioneering work of the McCloskey (16,17), Limbach (18) and Suzuki (19) lab have largely contributed to the establishment of ON-MS. Disadvantages of this bottom-up approach are the loss of (most) sequence information, congestion of peaks in a small mass range, which mess up spectra, as well as the detection loss of low-abundant-modified RNAs by domination of their unmodified counterparts (20)(21)(22). The third principle of MS-based RNA analytics is the analysis of full-length RNAs (top-down-MS) at sophisticated mass spectrometers which has been pioneered by the Breuker lab (23)(24)(25)(26). Topdown-MS of RNA is suited for many types of modification and it reveals the sample heterogeneity. It provides a high sequence coverage and information without a timeconsuming digestion step. The disadvantage of top-down approaches is the need for RNA shorter ∼60 nts and the available dissociation steps in the MS field are low-efficiency processes. Another challenge is the underdevelopment of back-end bioinformatic tools. Nevertheless, quantitation of modified nucleobases has been also shown by top-down-MS (27). Both bottom-up-and top-down-MS approaches utilize MS/MS analysis and the underlying RNA dissociation reactions have been reviewed (28). From the perspective of instrumentation, high-resolution mass spectrometers are ideally suited for the MS and MS/MS analysis of RNA and its fragments. Instruments, such as time-of-flight, iontrap or orbitrap MS, deliver the necessary resolution to determine the charge state of the ONs and they allow sequence prediction based on their accuracy. Low resolution instruments, such as triple quadrupole MS, are not commonly applied for oligonucleotide MS analysis as they lack the resolution to distinguish similar ONs and to clearly determine the charge state of an ionized ON. In contrast to top-down-MS, bottom-up-MS relies on the chromatographic separation of the ONs to solve the problem of congested MS spectra. The liquid chromatography is achieved by using an RP-18-based column and an ion-pairing reagent such as triethyl-ammonium-acetate (TEAA), which separates the oligonucleotides by their length. The eluting oligonucleotides are then analyzed in negative ionization mode by a high-resolution mass spectrometer and sequenced according to their fragmentation pattern (29,30) or modification footprints (31). 
With TEAA-dependent flow chromatography (32), it is now possible to analyze 2-5 ng of purified tRNA isoacceptors (∼80-200 fmol pure tRNA) and determine the sequence and modification status of these tRNAs (33). Due to the common use of organic bases as ion-pairing reagents for ON separation, and thus negative ionization, oligonucleotide MS is mostly used in labs with liquid chromatography-coupled mass spectrometry (LC-MS) instruments dedicated to ON analysis. For other labs, the limitation of using ion-pairing reagents is the difficulty of their removal from the instrument. Residual ion-pairing reagents stay on the LC system and interfere with other types of chromatography, and in addition they reduce the sensitivity of the mass spectrometer. To overcome this problem, ion-pairing-reagent-free chromatography can be used on a reverse-phase (RP) column (34); here, however, the retention behavior of oligonucleotides is largely unexplored. Another alternative is chromatography on a hydrophilic interaction liquid chromatography (HILIC) column, which allows separation in a similar fashion to ion-pairing chromatography (35). Both methods are reported in combination with negative ionization mode MS detection and are thus not applicable for labs with mass spectrometers that preferably operate in positive ionization mode. Recently, a method was presented which used positive ionization detection of oligonucleotides after ion-pairing chromatography (36). Thus, laboratories currently have the option of doing classical ON-MS (ion-pairing reagent chromatography in negative ionization mode), ion-pairing-reagent-free ON-MS in negative mode, or ion-pairing chromatography in positive mode; there is currently no method available which overcomes both limitations for a broad application of ON-MS.

A general challenge for MS-based modification analysis is its non-quantitative nature. For quantification, the signal intensity of an analyte must correlate with its concentration or amount. In MS, the signal intensity depends of course on the amount of analyte, but in addition on a multitude of other parameters such as salt load, ionization properties of the analyte, instrument parameters, and so on. These detection fluctuations make quantification by MS a challenging task, which can only be done by using stable isotope labeled internal standards (SILIS) of the analyte of interest. For nucleoside-MS, this problem has been overcome by synthetic (37) or biosynthetic (13,38) preparation of stable isotope labeled nucleosides. For ON-MS, biosynthetic approaches have been reported (39,40). Another elegant way to solve the problem was presented by the Limbach lab (41): they performed the enzymatic digest of the unknown RNA in the presence of H₂¹⁸O, which results in oxygen-18 incorporation into the oligonucleotide 3′-phosphate. As a third option, in vitro transcribed RNA is prepared in the presence of stable isotope-labeled nucleoside triphosphates (NTPs) and then used as an internal standard (42,43).

Although ON-MS has become a powerful tool for the analysis of RNA modifications within their sequence context, it is not commonly applied. Due to the benefits of ON-MS, we developed a TEAA-free chromatography, which separates the ONs not by length but by the chemical composition of the sequence. In this manuscript, we describe the development and separation principle of the method using synthetic ONs. Detection is achieved by low-resolution and high-resolution MS in positive ionization mode.
We present MS/MS data for the analyzed ONs and determine the LLOD in various detection modes. We describe the automated purification of unlabeled and stable isotope labeled in vitro transcripts of tRNAValAAC and tRNASerUGA by ribozyme-fusion transcription. These in vitro transcripts and native tRNA from HEK cells are analyzed by our ON-MS method. To verify correct folding of the produced tRNAs, we use the adenosine-to-inosine deaminating enzyme ADAT2/3 on tRNAValAAC. Inosine formation is observed by both nucleoside-MS and ON-MS. Importantly, these experiments are done on the same day using the same instrument, which highlights the compatibility of our ON-MS method with sensitive small compound analysis. Furthermore, we use the developed ON-MS method to monitor the demethylation of short oligonucleotides containing 1-methyladenosine by bacterial AlkB in vitro. Overall, we provide a method for automated purification of RNA transcripts and a broadly applicable ON-MS method for instruments commonly used for other types of small compound analysis, especially nucleoside analysis.

Salts, reagents and nucleosides

All salts, solvents and reagents were obtained from Sigma Aldrich (Munich, Germany) at molecular biology grade unless stated otherwise. All solutions and buffers were made with water from a Millipore device (Milli-Q, Merck, Darmstadt, Germany). Nucleosides: adenosine (A), cytidine (C), guanosine (G) and uridine (U) were purchased from Sigma Aldrich; 1-methyladenosine (m1A) and inosine (I) were purchased from Carbosynth (Newbury, UK).

Oligonucleotides

All oligonucleotides were delivered at a stock concentration of 100 µM in water and are listed in Supplementary Table S1.

AlkB in vitro assay

An aliquot of bacterial AlkB protein (Peak Proteins, Cheshire, UK) was thawed on ice. Every assay was performed in a volume of 50 µl with final concentrations of 50 mM TRIS-HCl pH 7.5, 15 mM KCl, 2 mM L-ascorbate, 300 µM α-ketoglutarate and 300 µM Fe(II)(NH4)2(SO4)2 × 6H2O. L-ascorbate, α-ketoglutarate and diammonium iron(II) sulfate hexahydrate stock solutions were made afresh. A total of 10 µM of the synthetic RNA oligonucleotide was incubated with 1 µM AlkB enzyme. All assays were incubated at 37 °C for 1 h and immediately stopped afterward by filtering through a molecular weight cut-off filter (VWR, part number 516-0229) for oligonucleotides, or by RNA precipitation for tRNA in vitro transcripts, respectively.

tRNA digestion for nucleoside mass spectrometry

Up to 1 µg RNA in 30 µl aqueous digestion mix was digested to single nucleosides by using 0.2 U Alkaline Phosphatase, 0.02 U Phosphodiesterase I (VWR, Radnor, PA, USA) and 0.2 U Benzonase in 5 mM TRIS (pH 8.0) and 1 mM MgCl2. Furthermore, 0.5 µg tetrahydrouridine (Merck, Darmstadt, Germany), 1 µM butylated hydroxytoluene and 0.1 µg pentostatin were added. The mixture was incubated with the RNA for 2 h at 37 °C and filtered through 96-well filter plates (AcroPrep Advance 350 10 K Omega, PALL Corporation, New York, USA) at 4 °C for 30 min at 3000 × g, or through single tubes (VWR, part number 516-0229) at room temperature for 7 min at 5000 × g. The filtrate was mixed with 1/10 Vol. of 10× yeast SILIS (stable isotope labeled internal standard) (38) for absolute quantification.
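Since the assay recipes above are specified as final concentrations in a fixed reaction volume, a small C1·V1 = C2·V2 helper makes the pipetting scheme explicit. The stock concentrations in the example are placeholders chosen for illustration and are not taken from the protocol.

```python
# Pipetting helper for assembling a reaction from stock solutions using
# C1*V1 = C2*V2. The stock concentrations below are illustrative
# placeholders; substitute whatever stocks are actually on the bench.

def volume_to_add_ul(stock_conc, final_conc, total_volume_ul):
    """Volume of stock (in µl) needed to reach final_conc in total_volume_ul."""
    if final_conc > stock_conc:
        raise ValueError("final concentration exceeds stock concentration")
    return total_volume_ul * final_conc / stock_conc

if __name__ == "__main__":
    total = 50.0  # µl, as in the AlkB assay above
    recipe = {
        # component: (stock, final) -- same units within each pair (assumed stocks)
        "TRIS-HCl pH 7.5 (mM)":         (1000, 50),
        "KCl (mM)":                     (1000, 15),
        "L-ascorbate (mM)":             (100, 2),
        "alpha-ketoglutarate (µM)":     (30000, 300),
        "Fe(II) ammonium sulfate (µM)": (30000, 300),
    }
    used = 0.0
    for name, (stock, final) in recipe.items():
        v = volume_to_add_ul(stock, final, total)
        used += v
        print(f"{name}: add {v:.2f} µl")
    print(f"water to {total} µl: {total - used:.2f} µl (minus enzyme and RNA volumes)")
```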
Nucleoside mass spectrometry

For nucleoside-MS measurements, a liquid chromatography unit (1290 Infinity II, Agilent Technologies, Waldbronn, Germany) equipped with a diode-array detector (DAD, Agilent Technologies) was used, interfaced with a triple quadrupole mass spectrometer (G6470A, Agilent Technologies) via an electrospray ionization (ESI) source (Jet Stream, Agilent Technologies). For separation of nucleosides, a Synergi Fusion-RP column (Phenomenex, Torrance, CA, USA; Synergi 2.5 µm Fusion-RP 100 Å, 150 × 2.0 mm) at 35 °C and a flow rate of 0.35 ml/min were used. The eluents were 5 mM NH4OAc, brought to pH 5.3 with glacial acetic acid (buffer A), and pure acetonitrile (buffer B). The gradient started at 100% A for 1 min, followed by an increase of solvent B to 10% over 5 min. From 5 to 7 min, solvent B was increased to 40% and was maintained for 1 min before returning to 100% solvent A in 0.5 min, followed by a 2.5 min re-equilibration period. The QQQ mass spectrometer was operated in dynamic multiple reaction monitoring (dMRM) mode between 1.1 min and 9 min with a cell accelerator voltage of 5 V. Operating parameters: positive-ion mode, skimmer voltage of 15 V, cell accelerator voltage of 5 V, N2 gas temperature of 230 °C and N2 gas flow of 6 l/min, sheath gas (N2) temperature of 400 °C with a flow of 12 l/min, capillary voltage of 2500 V, nozzle voltage of 0 V and nebulizer at 40 psi. The detailed mass spectrometric parameters for each nucleoside are given in Supplementary Table S2.

Calibration for nucleoside mass spectrometry

For calibration, the synthetic nucleosides cytidine, uridine, guanosine, adenosine, 1-methyladenosine (m1A) and inosine (I) were weighed and dissolved in water to a stock concentration of 1-10 mM. Calibration solutions ranging from 0.15 to 500 pmol for each canonical nucleoside and from 0.15 to 500 fmol for each modified nucleoside were prepared by serial dilution (1:10). The calibration solutions were mixed with 1/10 Vol. of 10× yeast SILIS and analyzed by nucleoside-MS. Data were analyzed using Agilent's Quantitative or Qualitative Software. The absolute amount determined for m1A and I was normalized to the amount of injected RNA, as determined from the absolute abundance of all four canonical nucleosides (38).

Mammalian cells

HEK 293T and HeLa ACC 57 cells (DSMZ, Braunschweig, Germany) were cultured in Dulbecco's Modified Eagle Medium (DMEM). DMEM medium was prepared by dissolving 8.4 g DMEM powder D5030 in 1 l pure water. Before sterile filtration, carbonate and phenol red were added to final concentrations of 3.7 g/l NaHCO3 and 0.0159 g/l phenol red. Stocks of glucose (225 g/l) and L-glutamine (15 g/l) were prepared and sterile filtered. These solutions were added to the DMEM medium before usage to final concentrations of 4.5 g/l glucose, 0.584 g/l L-glutamine and 10% fetal calf serum (FCS). The methionine concentration was 0.15 g/l in the final media. For splitting, the cells were treated with TrypLE Express (Gibco, Carlsbad, CA, USA). The cells were incubated and cultivated in a 10% CO2 atmosphere.

RNA isolation

HEK and HeLa cells were harvested directly in cell culture flasks using 1 ml TRI-Reagent per 25 cm². Isolation was performed according to the manufacturer's protocol with chloroform (Roth, Karlsruhe, Germany). The RNA was finally dissolved in 30 µl water.
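As an illustration of how the calibration series and SILIS normalization described above are typically evaluated outside the vendor software, here is a minimal sketch. All peak areas below are invented demonstration numbers; the actual analysis in the study was performed with Agilent's quantitation software.

```python
# Sketch of external calibration with a stable-isotope-labeled internal
# standard (SILIS): analyte signals are first normalized to the co-injected
# SILIS signal, a line through the origin is fitted to the normalized
# responses of the calibration series, and unknowns are read off that line.
# All numbers are invented for demonstration purposes.

def fit_through_origin(amounts, responses):
    """Least-squares slope of response = slope * amount (intercept forced to 0)."""
    num = sum(a * r for a, r in zip(amounts, responses))
    den = sum(a * a for a in amounts)
    return num / den

# 1:10 serial dilution of a modified nucleoside, amounts in fmol
cal_amounts = [0.5, 5, 50, 500]
cal_analyte = [120, 1180, 12050, 118900]   # analyte peak areas (made up)
cal_silis   = [10000, 9900, 10100, 9950]   # SILIS peak areas (made up)
cal_norm    = [a / s for a, s in zip(cal_analyte, cal_silis)]

slope = fit_through_origin(cal_amounts, cal_norm)

# Quantify an unknown sample and relate it to the injected RNA amount,
# which itself would come from the summed canonical-nucleoside signals.
unknown_norm = 5400 / 10050                # analyte area / SILIS area (made up)
amount_fmol = unknown_norm / slope
injected_rna_pmol = 8.2                    # from the canonical nucleosides (made up)
print(f"unknown: {amount_fmol:.1f} fmol, "
      f"i.e. {amount_fmol / injected_rna_pmol:.2f} fmol per pmol injected RNA")
```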
PCR All polymerase chain reactions (PCR) were performed in a total volume of 50 l with a final concentration of 1fold Phusion Buffer HF (New England Biolabs, Ipswich, MA, USA) and 0.8 M forward and reverse primer. The sequence of templates and primers are given in Supplementary Table S1. Additionally, 1 l dNTPs, 0.5 l Phusion polymerase and 100 ng of the desired DNA template were added. All samples were amplified with the same PCR program: 95 • C for 2 min, 95 • C for 30 s for 20 amplification cycles, 57 • C for 30 s for 20 times and 68 • C for 1 min for 20 times. At the end of the program, the PCR reaction was incubated at 68 • C for 1 min and was cooled down to 4 • C. Every PCR reaction was performed twice and pooled afterward for the T7 in vitro transcription. T7 in vitro transcription The total volume of the T7 in vitro transcription was 200 l. A total of 100 l PCR product were added to T7 buffer mix and T7 enzyme (TranscriptAid T7 High Yield Transcription Kit, Thermo Fisher Scientific, Waltham, MA, USA) and 1.6 l of each rNTP ( 14 N-rNTPs were provided by the kit, 15 N-rNTPs were purchased by Silantes, Munich, Germany, Partnumber.: 121306100). The mixture was incubated for 2 h at 37 • C and 600 rpm. After 2 h incubation, the sample was treated with 2 l T7 enzyme mix and 5 l 50 mM MgCl 2 and incubated for additional 2 h. After another 2 h incubation, the sample was treated again with 2 l T7 enzyme mix and 5 l 50 mM MgCl 2 and incubated for additional 2 h to improve the yield of the transcription. In total, the transcription was finished after 6 h. DNA template was removed by addition of 4 l DNase 1, which is provided in the kit, 1 h at 37 • C. In the next step, MgCl 2 was added with a final concentration of 5 mM and the sample was incubated at 60 • C for 1 h to auto-catalytically cleave the precursor in vitro transcript into its target tRNA. Prior to RNA precipitation, the sample was centrifuged at 5000 × g for 5 min at room temperature to remove the insoluble pyrophosphate of the transcription reaction. The supernatant was precipitated by addition of 0.1 Vol. of 5 M ammonium acetate and 2.5 Vol. of ice-cold ethanol (100%) followed by overnight incubation at −20 • C. The RNA was pelleted by centrifugation (12 000 × g, 40 min, 4 • C), washed with 70% ethanol and resuspended in 30 l water. The column temperature was set to 40 • C for native tRNA purification using the Advance Bio 300Å and 60 • C for in vitro transcript purification using the AdvanceBio 130Å. For elution, a 1 ml/min isocratic flow of 0.1 M ammonium acetate was used. Eluting RNA was detected at 254 nm with a diode array detector. The eluted RNA was collected by a fraction collector and the eluent was evaporated (GeneVac, EZ-2 PLUS, Ipswich, UK) to a volume of ∼50 l before ethanol precipitation. The purified RNA was dissolved in 30 l water for further enzymatic assays or MS analysis. Handling guide for SEC columns For prolongation of the column lifetime, it is essential to avoid sudden pressure changes. Thus we recommend to run a conditioning method, which slowly increases the flow rate from 0 ml/min to 1 ml/min within 20 min. For tRNA purification from total RNA, a column temperature of 40 • C is sufficient. For purification of in vitro transcripts, 60 • C yielded better separation results. Note: Column lifetime is shortened at 60 • C and thus long exposures to high temperatures should be avoided. After use, the column was stored in 0.05% NaN 3 . 
RNA concentration and quality measurements
The RNA concentration was determined with an Implen Nanophotometer NP 80 (Munich, Germany). For quality control of SEC-purified in vitro transcripts, the Agilent 2100 Bioanalyzer (Small RNA analysis chip, part number 5067-1548, Agilent, Waldbronn, Germany) was used.
Isoacceptor purification
The procedure was adapted from Hauenschild et al. (45). For tRNA isoacceptor purification, pre-purified total tRNA was used. The sequences of the biotinylated 2′-deoxyoligonucleotide probes are listed in Supplementary Table S1.
Purification of ADAT2/3
Open reading frame sequences encoding human ADAT2 and ADAT3 were cloned into the pETDuet plasmid. ADAT2 was cloned into multiple cloning site (MCS) 1, which allows fusion with a His tag. ADAT3 was cloned into MCS2. BL21(DE3) RIPL Escherichia coli cells (Agilent) were transformed with the above plasmid. A starter culture was grown to an O.D. of 0.1-0.5 and inoculated into a larger culture to an O.D. of 0.05, and cells were induced with isopropyl-β-D-thiogalactoside (IPTG) at an O.D. of 0.6-0.8. After induction, E. coli were grown at 20 °C for 15 h. Cells were harvested and lysed by sonication in a buffer containing 20 mM Tris-HCl pH 7.6, 5% glycerol, 0.1% Triton, 1 mM dithiothreitol (DTT), 0.1 mM phenylmethylsulfonylfluoride (PMSF), 500 mM NaCl and 25 mM imidazole. Cell lysates were agitated with HisPur Cobalt Resin (Thermo Scientific #89964) at 4 °C for 2 h. Beads were washed 3× with buffer containing 20 mM Tris-HCl pH 7.6, 5% glycerol, 0.1% Triton, 1 mM DTT, 0.1 mM PMSF, 300 mM NaCl and 25 mM imidazole. Elution of protein was carried out with the above buffer containing 300 mM imidazole. The elution buffer was exchanged to a buffer containing 20 mM TRIS, 5% glycerol, 0.1% Triton, 150 mM NaCl and 1 mM DTT. Expression and purification were verified by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) followed by western blotting. Quantification of purified protein was carried out by Coomassie staining with BSA size standards. The final concentration of ADAT2/3 was ∼8 ng/µl.
ADAT2/3 assay
An aliquot of ADAT2/3 protein was thawed on ice. A total of 6 µl tRNA substrate (115 ng/µl) was incubated in 6 µl water and 6 µl melting mix. The melting mix buffer was prepared with 30 mM TRIS pH 7.5 and 1 mM ethylenediaminetetraacetic acid. To denature the tRNA, the mix was heated to 95 °C for 2 min and immediately placed on ice for 3 min for tRNA folding. In the next step, 3 µl folding mix was added and incubated for 20 min at 37 °C. The folding mix buffer was prepared with final concentrations of 333 mM HEPES pH 7.5, 20 mM MgCl2 and 333 mM NaCl. A total of 9 µl of ADAT2/3 was added to obtain a total reaction volume of 30 µl and incubated for 1 h at 37 °C; the reaction was immediately stopped afterward by RNA precipitation with 300 µl LiClO4 in acetone (2%). After incubation at room temperature for 5 min, the sample was centrifuged at 5000 × g for 5 min. The supernatant was discarded and the tRNA pellet was dissolved in 20 µl milliQ water for further analysis.
tRNA digestion for oligonucleotide mass spectrometry
RNase T1 was diluted to a 10 U/µl solution by mixing 2 µl of the RNase T1 stock (186 U/µl, Sigma-Aldrich, Munich, Germany) with 35.2 µl TRIS pH 7.5 (25 mM). The diluted RNase T1 should be stored at 4 °C. Up to 1 µg RNA was digested with RNase T1 at 37 °C for 1 h in a total volume of 50 µl with final concentrations of 25 mM TRIS pH 7.5, 100 mM NaCl, 1 U/µl RNase T1 and 0.2 U/µl alkaline phosphatase.
The digested samples were filtered through a 10 kDa molecular weight cut-off filter (VWR, Dreieich, Germany, part number 516-0229) and analyzed by MS. The gradient started at 100% buffer A, followed by an increase of B to 5% over 10 min. From 10 to 12 min, buffer B was increased to 50% and was maintained for 1 min before returning to 100% buffer A and a 4 min re-equilibration period. For source optimization experiments, a shorter gradient was used: starting at 100% buffer A, buffer B was increased to 2% over 1 min, to 10% by 4 min, then to 50% by 5 min, held at 50% for 0.5 min and then returned to 100% A. The same re-equilibration period as in the previous method was used. The QQQ mass spectrometer was operated in full scan mode (MS2Scan) between 500 and 1000 m/z with a fragmentor voltage of 100 V and a cell accelerator voltage of 5 V in positive ionization mode. For determination of CID spectra, collision energies of 5-40 eV were used and the instrument was operated in Product Ion Scan mode. Mass transitions of ONs were used in a targeted MRM method. Data were analyzed using Agilent's Qualitative Analysis Software. High-resolution mass spectra of oligonucleotide ions were recorded on a Thermo Finnigan LTQ Orbitrap XL with a heated electrospray ionization (HESI) source operated in positive ionization mode with a capillary voltage of −10 V and a temperature of 310 °C. The spray voltage was set to 3.3 kV, and the atmospheric pressure chemical ionization (APCI) temperature was set to 135 °C. Sheath, auxiliary and sweep gases were set to 5, 35 and 7 arbitrary units, respectively. MS1 spectra were collected from 200 to 1000 m/z and data-dependent acquisition (DDA) was set to acquire MS2 spectra of the top three most abundant ions. Data acquisition and analysis were completed on the Thermo Xcalibur software platform.
Exploring the applicability of a TEAA-free chromatography for oligonucleotide mass spectrometry
In a first step, we wanted to find a chromatography for the separation of oligonucleotides that is free of TEAA (triethylammonium acetate) and other ion-pairing reagents and that might be compatible with commonly used mass spectrometers and small compound analysis. A literature search aimed at avoiding ion-pairing reagents revealed a method from 1994, where an RP-18 column was used for separation of tRNA isoacceptors (46). Here, the elution was achieved with a gradient of simple ammonium acetate and acetonitrile, which is comparable to the separation procedure for nucleosides (38). Inspired by this early work, we tested the separation of synthetic oligonucleotides (ON) on our nucleoside column (Phenomenex, Fusion-RP) using a 10 mM ammonium acetate buffer and acetonitrile for elution. Due to the negatively charged phosphate backbone of RNA, we did not expect good retention on a hydrophobic RP-18 column. However, due to the special column material of the tested column, we observed excellent retention for the tested ONs. As the UV chromatogram (260 nm) in Figure 1A shows, the tested 8-mer ONs elute first, followed by 9-mer ONs and 5-mer ONs. To understand which properties of the ONs influence the retention behavior, we used ONs which differ in only one nucleobase in their sequence. As shown in Figure 1A, the cytidine-containing 8-mer elutes first, while the exchange to a uridine leads to better retention. The permutated 8-mer with adenosine is retained most strongly on the column. In the 5-mer, we observe better retention of the cytidine-containing ON compared to its U-containing permutation.
In the case of the 9-mer ONs, the deamination of one adenosine into inosine leads to reduced retention. In these sequence contexts, we clearly see a dependence of the retention behavior on the chemistry of the present nucleobases and not on the length of the ON. From our data, we conclude that the retention behavior of ONs in our TEAA-free system depends on two factors: the distribution of nucleobases and the sequence. We observe no rules which allow prediction of ON elution. In a next step, we connected our developed chromatographic system to our low-resolution triple quadrupole (QQQ) mass spectrometer. We tuned the instrument in both negative and positive ionization mode using the tune mix supplied by the manufacturer and determined the optimal source parameters for sensitive detection of ON 13 (Supplementary Figure S1). With these parameters, we scanned for the eluting ONs in a mass range of m/z 500-1000 in positive and negative ionization mode. We received MS signals of all tested ONs below 10 nt in both positive and negative ionization mode (Figure 1B/C). Our instrument is commonly operated in positive ionization mode, and thus it was no surprise to find a substantially higher sensitivity in positive ionization mode. We attribute this observation to the fact that the instrument is only operated in positive ion mode with the respective buffers for ideal protonation in the ESI source. Thus, the ionization efficiency in negative ion mode is reduced. On other instruments, the sensitivity is higher in negative mode compared to positive mode (47). We recommend assessing the most sensitive ionization mode on every mass spectrometer by running a mixture of short synthetic ONs. The addition of the modifier 1,1,1,3,3,3-hexafluoro-2-propanol (HFIP) did not improve the detection in positive or negative ion mode. Due to the common use of our instrument in positive ion mode and the higher sensitivity, we decided to use the positive ionization mode for further analyses. We used the Mongo Oligo mass calculator tool (https://mods.rna.albany.edu/masspec/Mongo-Oligo) to compare our experimental signals with the predicted signals. For the tested ONs, we observed the predicted m/z signals, commonly of the +2 charge state for 5-mers, +3 charge for 8-mers and +4 charge for 10-mers (Supplementary Figure S2). The charge state of an ON is commonly calculated from the natural abundance of carbon-13 on high-resolution instruments. The difference between the 12C m/z value and the 13C m/z value is ∼1 in a +1 charge state, ∼0.5 in a +2 charge state, ∼0.3 in a +3 charge state, etc. On a low-resolution instrument, such as the QQQ MS used here, the natural isotope abundance is not resolved and cannot be used to determine the charge state. As an alternative, the difference between the (multi-)protonated ion and the sodium-charged ion can be used. In a +1 charge state, the difference is 22 units, in a +2 charge state it is 11 units (charged by one protonation and one sodium cation), in a +3 charge state it is ∼7 units (two protons and one sodium cation), etc. Our mass spectrum for the C-permutated 8-mer ON revealed a main signal at m/z 808.1 (protonated species) and a signal at m/z 816 for the Na+ adduct, and thus we conclude a +3 charge state of this ON (see Figure 1D). For the 8-mer ONs, other charge states are of minor abundance (see full mass spectra in Supplementary Figure S3). In addition to the analyses of ONs by low-resolution MS, we wanted to test the compatibility of our method with a high-resolution mass spectrometer.
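As a small worked example of the charge-state arithmetic described above, the sketch below (Python) estimates z from the spacing between the protonated ion and its sodium adduct on a low-resolution instrument, and from the 13C isotope spacing on a high-resolution instrument; the m/z values are those reported for the C-permutated 8-mer, everything else is generic.

```python
# Minimal sketch: estimate the charge state of an oligonucleotide ion on a
# low-resolution instrument from the spacing between the protonated species
# and its sodium adduct, as described in the text.
NA_MINUS_H = 22.9898 - 1.0073   # mass difference of Na+ vs H+ (~21.98 Da)

def charge_from_na_spacing(mz_protonated, mz_sodiated):
    """Replacing one proton by Na+ shifts the m/z by ~21.98/z."""
    delta = mz_sodiated - mz_protonated
    return round(NA_MINUS_H / delta)

def charge_from_isotope_spacing(delta_mz_13c):
    """On high-resolution instruments the 13C isotope spacing is ~1.0034/z."""
    return round(1.00336 / delta_mz_13c)

# Values reported for the C-permutated 8-mer: m/z 808.1 (protonated) and ~816 (Na+ adduct)
print(charge_from_na_spacing(808.1, 816.0))   # -> 3, i.e. a +3 charge state
print(charge_from_isotope_spacing(0.33))      # -> 3, from a 13C spacing of ~0.33 m/z
```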
The instrument was optimized with the 9-mer ON 13 and the optimal parameters were chosen according to the results given in Supplementary Figure S1. As shown in Figure 1E and Supplementary Figure S2, all tested ONs were detectable by high-resolution MS in positive ionization mode. On this instrument, Na+ adducts were of minor abundance and protonated species dominated the MS spectra. MS of oligonucleotides is commonly used to determine the sequence and location of modified nucleosides (26,48). For this purpose, fragmentation of the ONs by collision-induced dissociation (CID) is a useful tool, and the behavior of ONs in positive and negative ionization mode is well described (47). On the low-resolution QQQ instrument, we analyzed ON 13 and ramped CID energies from 0 to 30 eV. The recorded product ion scans are displayed in Figure 2A. With increasing collision energy, the signal of the precursor ion decreases, while multiple new signals appear in the spectra. At higher collision energies, the most prominent product ions in positive ionization mode are protonated nucleobase fragment ions. At energies around 15-20 eV, c and y ions become apparent (Figure 2B). Due to the low resolution of a QQQ instrument, the charge state of these c- and y-ions cannot be determined and thus QQQ MS is not suitable for de novo sequencing. The signal-to-fragment matching in Figure 2B and for other ONs in Supplementary Figure S4 was only possible due to the known sequence of the ON and the use of the Mongo Oligo tool (https://mods.rna.albany.edu/masspec/Mongo-Oligo). CID analysis on the high-resolution MS produced more c- and y-fragment ions of known charge states and thus de novo sequencing is theoretically possible (Supplementary Figure S5). Low-resolution instruments such as our QQQ mass spectrometer are commonly used for the sensitive detection and quantification of small compounds. In our hands, the LLOD and lower limit of quantification (LLOQ) of modified nucleosides are in the single-digit fmol range or even amol range (38). For ON analysis, we injected dilutions of ON 13 and ON 14 and found LLODs/LLOQs of 200 fmol in MS2Scan mode, 800 fmol in MS/MS mode and 800 fmol in Product Ion (PI) Scan mode for sequence determination purposes (Figure 2C). For isolated tRNA isoacceptors, the lowest sensitivity was achieved with a TEAA-dependent nano-LC setup (33). With nano-LC, 2-5 ng tRNA (∼80-200 fmol) are sufficient for analysis, which is comparable to our results. Although the sensitivity towards ONs is lower compared to small molecule analysis, it is sufficient to analyze ONs derived from synthesis, in vitro experiments or potentially even native RNAs.
Preparation of in vitro transcribed tRNA by SEC
To expand the field of application, we wanted to use our method on partial RNA digests of full-length in vitro transcripts and also native RNAs. Thus, we decided to produce two human tRNAs by ribozyme-fusion in vitro transcription and analyze the transcripts after RNase T1 digestion by ON-MS. After transcription, the cleaved transcript is commonly purified by PAGE, and the ribozyme and the full-length transcript are removed (49). However, PAGE-purified RNA always contains small amounts of polyacrylamide, which potentially interferes with later MS. Thus, we adapted a previously described method based upon size exclusion chromatography (SEC, SEC-3 column) that is rapid, automatable and produces clean RNA (44).
We have found the AdvanceBio SEC column to be more robust and thus recommend this column for tRNA purification from total RNA. We supply a detailed handling guide to prolong column lifetime to several hundred injections in the Materials and Methods section. A comparative profile of an RNA ladder (1000 nts to 50 nts) loaded onto these two 300 Å columns is shown in Figure 3A and B. At a column temperature of 60 °C, both columns nicely separate the 50 and 80 nts markers from the longer RNAs >150 nts. As expected, native tRNA elutes at the same time as the 80 nts marker, which indicates that secondary structures play no role in the separation of RNAs. Thus, these columns are ideal for separation of large RNAs such as ribosomal RNA (rRNA) and mRNA from the smaller tRNAs. The AdvanceBio column is also available with a 130 Å pore size. We tested the separation efficiency of this column with the ladder and, as is to be expected, the large RNA markers can no longer be separated due to reduced interaction with the smaller pores (Figure 3C). The 130 Å column is less suitable for the separation of total RNA. However, it has potential for the separation of RNAs smaller than 80 nts, e.g. tRNA and tRNA-derived fragments (tRF or tiRNA) (Supplementary Figure S6). For the sake of column preservation, we tested the chromatographic resolution at lower temperatures. We find sufficient separation of the small 80 nts RNAs from the larger RNAs (>150 nts) at 40 °C (Supplementary Figure S7). We have applied the AdvanceBio 300 Å SEC column at 40 °C in several studies for the purification of tRNA from total RNA (5,50,51). The purification was always reliable and thus we recommend this column with the developed parameters for tRNA purification. For the purification of tRNA from ribozyme-fusion in vitro transcription, both 300 Å and 130 Å pore size columns have been tested. tRNA-Ser-UGA is an 85 nts long tRNA, and fused with the ribozyme the fusion transcript is 135 nts in length. The fusion transcript auto-catalytically cleaves itself into the 85 nts long tRNA and the 40 nts long ribozyme. As shown in Figure 3D, the tRNA peak is separated from the full-length transcript, while the cleaved ribozyme partly co-elutes with the tRNA (130 Å column at 60 °C). We collected the tRNA peak and analyzed the fraction by high-resolution automated electrophoresis (Bioanalyzer). Here, we find that some of the ribozyme (40 nts) is still detectable in the tRNA fraction, but the full-length transcript (135 nts) is completely removed (Figure 3E). If necessary, collection of the tRNA peak in a smaller time window or a second purification round is possible, whereby the ribozyme can be completely removed (Supplementary Figure S8). After successful purification of the comparably long tRNA-Ser, we were curious whether purification of a tRNA with a short variable loop is possible with our automated system. For this purpose, we prepared a fusion transcript of tRNA-Val-AAC. The full-length transcript is 124 nts long, the tRNA itself 76 nts and the ribozyme again 40 nts. For this tRNA, the size difference between tRNA and ribozyme is only 36 nts. As a rule of thumb, SEC separation of biomolecules is possible as long as they differ in size by a factor of two. This prerequisite is not met in the case of tRNA-Val and, as expected, the separation of the produced tRNA and the ribozyme is not possible (Figure 3D, blue chromatogram). The ribozyme co-elutes as a shoulder and subsequently, more ribozyme is found in the purified tRNA transcript (Figure 3E).
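The rule of thumb invoked above, that SEC resolves biomolecules differing in size by roughly a factor of two, can be applied directly to the fusion transcripts; the short sketch below simply encodes this heuristic, using length in nucleotides as a crude proxy for size (an approximation, not a property of the columns).

```python
# Minimal sketch of the SEC rule of thumb used above: two RNA species are
# expected to separate if they differ in size by roughly a factor of two.
def sec_separable(len_a_nt, len_b_nt, min_ratio=2.0):
    longer, shorter = max(len_a_nt, len_b_nt), min(len_a_nt, len_b_nt)
    return longer / shorter >= min_ratio

print(sec_separable(85, 40))   # tRNA-Ser (85 nt) vs ribozyme (40 nt): True,  ratio ~2.1
print(sec_separable(76, 40))   # tRNA-Val (76 nt) vs ribozyme (40 nt): False, ratio ~1.9
```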
Similar to the presented solution for tRNA-Ser, a reduced collection time window can help to remove the remaining ribozyme if complete removal of the ribozyme is necessary for subsequent experiments. Overall, SEC purification of ribozyme-fusion transcribed tRNAs is an alternative to purification by PAGE. The complete transcription mix can be loaded onto the SEC column without pre-purification, and within 20 min the pure tRNA fraction is received in a volume of 500-1000 µl in 0.1 M ammonium acetate. Concentration of the tRNA product is possible by subsequent solvent evaporation and/or ethanol precipitation.
Analysis of RNase T1 treated tRNA by oligonucleotide mass spectrometry
With the tRNA transcripts in hand, we were ready to test our LC-MS method by injecting a partial digest of the produced tRNAs. For this purpose, 1 µg of each transcript was incubated with RNase T1. The enzyme was removed by molecular weight cut-off filtration and 200 ng (∼8 pmol) of the resulting ON mixture were injected onto the LC-QQQ set-up in scanning mode. As expected from our experiments with synthetic ONs, we observe separation of the RNase T1 derived ON mixture, as seen in the total ion chromatograms shown in Figure 4A (tRNA-Val-AAC) and 4B (tRNA-Ser-UGA). The color-coded tRNA sequences are shown in Supplementary Figure S9. For tRNA-Val-AAC, we prepared transcripts containing the stable isotope labeled nucleotides 15N3-cytidine or 15N5-guanosine. The fragments of the unlabeled and labeled transcripts elute at the same retention time. The low-resolution mass spectra of the eluting peaks allowed the assignment of the ON sequences, which are derived from various sequences of the tRNA. As expected, we observe differences in the m/z value due to the stable isotopes in the 15N-containing fragments compared to the unlabeled fragments (Figure 4A). Thus, our method is capable of separating RNase T1 derived tRNA digests. In this context, we observed a low MS abundance of the 9-mer fragment (#4) compared to the 8-mer fragment (#6) from tRNA-Val. This is explained by the different amount of acetonitrile (ACN) at these retention times. While the 9-mer elutes at around 4.5% ACN, the 8-mer elutes later and with ∼25% ACN. With higher amounts of ACN, the ionization efficiency is increased and thus later-eluting compounds have a higher signal intensity. When the gradient is eliminated by direct injection of ONs or elution at isocratic ACN conditions (Supplementary Figure S2), the abundance of all injected synthetic ONs is comparable, and thus we conclude that the ionization efficiency is not majorly impacted by the composition of canonical nucleosides. RNA, and especially tRNA, is heavily post-transcriptionally modified. To study the applicability of our method to native RNA, we treated total tRNA from HEK cell culture with RNase T1, separated the resulting fragments with our chromatographic method and analyzed the effluent by high-resolution MS. Here, the mass resolving power of the Orbitrap instrument is necessary to confirm the sequence and modification status of the observed tRNA-derived ONs. With this approach, we could identify several modified ONs from total tRNA. The corresponding chromatograms and MS/MS spectra are shown in Figure 5.
ADAT2/3 deaminates tRNA-Val-AAC at position 34 in vitro
Since tRNA-modifying enzymes depend on a correctly folded tRNA substrate for their activity, we tested whether the SEC-purified IVTs fold into the expected tRNA shape and are thus usable in in vitro modification assays.
For these assays, we used human ADAT2/3 enzyme, which has been shown to catalyze the deamination of the wobble adenosine at position 34 of in vitro transcribed tRNA-Val-AAC to inosine (52). We incubated tRNA-Val-AAC with purified ADAT2/3, expecting deamination of A34 to I34. The presumably deaminated tRNA was digested by RNase T1 and we screened for the tRNA sequence covering position 34, namely CCUAACACG and CCUIACACG (see Figure 6A). Surprisingly, we only found the unmodified sequence CCUAACACG and not the inosine-containing sequence (Figure 6B). Due to this unexpected result, we digested an aliquot of the ADAT2/3-treated tRNA to nucleosides and switched to quantitative nucleoside analysis. While no inosine was detected in the untreated tRNA by nucleoside MS, we could detect an 8.0% conversion of adenosine to inosine per tRNA molecule in the ADAT2/3-modified transcript (Figure 6F). From this finding we conclude that our SEC-purified tRNA folds correctly and is thus recognized as a substrate by the tRNA-modifying enzyme ADAT2/3. Intrigued by the absence of the inosine-modified ON in the Oligo-MS analysis, and with the goal to assign the location of the A-to-I editing in tRNA-Val-AAC, we utilized the synthetic 9-mer ONs presented in Figure 1A. These ONs represent the sequences expected from an RNase T1 digestion of tRNA-Val-AAC (unmodified: CCUAACACG and modified: CCUIACACG). RNase T1 is a commonly used endoribonuclease, which specifically cleaves RNA after guanosine moieties. Guanosine and adenosine differ at position 6 and position 2 of their purine structures. Inosine and guanosine are identical at position 6 but differ at position 2. Due to the high chemical similarity of inosine and guanosine, we wondered whether RNase T1 is capable of inosine recognition and subsequent cleavage. A literature search revealed that the detection of inosine by sequencing is commonly done utilizing RNase T1 (53,54). In MS-based RNA modification studies, this knowledge is not yet widespread. To test the impact of RNase T1 on I-containing RNA in MS, we incubated the A- and I-containing ONs with RNase T1 and analyzed the resulting mixture. For CCUAACACG, we observe one prominent peak which corresponds to the full-length ON and a second peak (less than 30%) corresponding to the 3′ cleavage product ACACG (Supplementary Figure S11). In contrast, full-length CCUIACACG is barely detectable after RNase T1 treatment. Instead, two new peaks are found (Supplementary Figures S10 and S11). One corresponds to the 3′ fragment ACACG and the other to the inosine-containing 5′ cleavage product CCUI (Supplementary Figure S10). From our observation it is now clear that RNase T1 cleaves both guanosine- and inosine-containing RNA sequences, which impacts bottom-up oligonucleotide mass spectrometry. Based upon our results, the substrate preference of RNase T1 can now be summarized as G > I > A. In the case of tRNA-Val-AAC, RNase T1 will then result in an additional cleavage after I34 (Supplementary Figure S12). We re-analyzed ADAT2/3-treated and untreated tRNA by oligonucleotide MS and screened for the newly identified ONs. We detected the CCUI fragment in unlabeled and 15N-cytidine-labeled tRNA digests after ADAT2/3 treatment, which indicates the A-to-I conversion at position 34 of the tRNA. Due to the similar mass of the CCUI and the CCUA fragment (1 Da), the result must be confirmed in the +1 charge state on a low-resolution instrument or by high-resolution MS.
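The cleavage behavior worked out above can be summarized in a few lines of code. The sketch below is a simple in silico digest (a hypothetical helper for illustration, not part of our analysis pipeline) that splits a sequence 3′ of every guanosine and, optionally, of every inosine, reproducing the fragments observed for the synthetic 9-mers.

```python
# Minimal in silico RNase T1 digest. RNase T1 cleaves 3' of guanosine; based on
# the observations above, inosine (I) can be included as an additional cleavage site.
def rnase_t1_digest(seq, cleave_after="G"):
    """Split an RNA sequence 3' of every residue listed in `cleave_after`."""
    fragments, current = [], ""
    for nt in seq:
        current += nt
        if nt in cleave_after:
            fragments.append(current)
            current = ""
    if current:                      # keep a trailing fragment without a cleavage site
        fragments.append(current)
    return fragments

print(rnase_t1_digest("CCUAACACG"))                      # ['CCUAACACG']
print(rnase_t1_digest("CCUIACACG", cleave_after="GI"))   # ['CCUI', 'ACACG']
```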
Analysis of the sample on the Orbitrap instrument indeed revealed formation of CCUI and thus confirms our findings from low-resolution MS (Supplementary Figure S13). For quantification of the deamination reaction, we used the synthetic ON CCUIACACG, prepared an external calibration with its RNase T1 digest and re-analyzed the samples on the QQQ MS. We detected 7.6% inosine formation using this external calibration method (Figure 6C and D). Mass spectrometric quantification is ideally done using stable isotope labeled internal standards. Here, we used our 15N3-cytidine labeled transcript of tRNA-Val as the substrate and the unlabeled synthetic ON as the internal standard. With this method, we find a 6.2% conversion of A34 to I34 (Figure 6E). This result is 1.3-fold lower compared to our accurate nucleoside-MS method (Figure 6F), which is acceptable for most potential applications of our method. Our data suggest that our method is capable of absolute quantification, but further studies using internal standards as suggested by (39-43) must be performed to determine the accuracy and precision for a given biological context.
The activity of AlkB is sequence dependent
Another context which benefits from ON-MS on a low-resolution MS is in vitro modification/demodification experiments. Here, we applied our method to the analysis of 1-methyladenosine (m1A)-modified ONs to study their demethylation by AlkB (8). With the goal to study the substrate specificity of AlkB by ON-MS, we designed three m1A-containing ONs. In human tRNA, m1A is commonly found at position 58. A sequence overlay of 13 random human tRNAs revealed the weight matrix of the nucleobases surrounding m1A58 in human tRNAs (Supplementary Figure S14). The first ON (UUCG(m1A)UUC) was designed to match this sequence surrounding m1A58 from human tRNAs and is thus referred to as ON-human. The second ON reflects a mutated version of ON-human, where three preserved bases are exchanged (ON-mutated, UAGG(m1A)UUG). The last ON, ON-ribo, reflects the sequence surrounding the only possible m1A site of E. coli 16S rRNA (CGUC(m1A)CAC). This site is found to be methylated in bacteria resistant to certain antibiotics (55). With this experiment, we wanted to evaluate the activity of AlkB toward these three ONs and whether a sequence dependence is observable. All three ONs were incubated with 10 µM AlkB. The subsequent ON-MS analysis revealed the formation of demethylated ONs as shown in Figure 7A-C. The demethylated ONs elute at a later retention time compared to the respective m1A-modified ONs on our TEAA-free chromatography due to the lost positive charge of m1A. For ON-human and ON-ribo, we observe only trace formation of the demethylated ON (Figure 7A/C). In contrast, we observed more demethylated ON for ON-mutated. At first glance, our observation indicates that the m1A site of E. coli rRNA and the ON with the sequence of eukaryotic tRNA m1A58 are less suitable substrates for AlkB compared to the completely unnatural substrate, ON-mutated. However, without the use of stable isotope labeled internal standards, MS is non-quantitative and the data may not be interpreted in such a way. To provide a quantitative method, we digested the AlkB-treated and untreated ONs to nucleosides and performed nucleoside-MS with our established stable isotope dilution MS (38).
For this purpose, we equilibrated the system for 1 h by flushing the instrument with our aqueous nucleoside-MS buffer and performed our quantitative nucleoside experiments. Before AlkB treatment, we find 0.23 m1A per ON-human, 0.12 m1A per ON-mutated and 0.19 m1A per ON-ribo. At a low AlkB concentration (1 µM), we observe 9.8% demethylation for ON-human (P = 0.0067). ON-mutated shows 12.9% demethylation (P < 0.0001) and ON-ribo 18.6% demethylation (P < 0.0001). The absolute quantification data are in stark contrast to the non-quantitative results observed by oligonucleotide MS. Intriguingly, AlkB seems to prefer the only potential m1A site found in bacteria, namely m1A1408 of the 16S rRNA. Our observation is confirmed by incubation of the same ONs with higher amounts of AlkB (10 µM, Supplementary Figure S15).
DISCUSSION
In this manuscript we present a method for the separation of oligonucleotides and subsequent detection by oligonucleotide MS (ON-MS). While methods for ON-MS have been available for several years, key hurdles toward their broader application were the use of ion-pairing reagents such as TEAA and negative ionization. We have overcome these hurdles by successfully using a modified RP-18 column with a simple ammonium acetate buffer in combination with acetonitrile. These are common solvents for LC-MS instruments, which do not contaminate the system or interfere with subsequent analysis of other compounds. The broad applicability is demonstrated by our analyses of synthetic ONs, RNase T1 digestion derived RNA fragments of in vitro transcripts and native tRNA on both low- and high-resolution mass spectrometers. A higher resolving power is required to distinguish the mass differences of oligonucleotides, their sequence and their modification content. A key challenge is that the nucleosides uridine and cytidine (and adenosine and inosine) have a mass difference of only 1 Da, which is already the resolution limit of the QQQ MS. For the analysis of unknown sequences, the mass accuracy and resolving power of the mass spectrometer must therefore be sufficient to resolve this difference (56). A second benefit of high-resolution MS is the determination of charge states even in MS/MS experiments, which allows sequence determination and localization of modified nucleosides as demonstrated in Figure 5. Nevertheless, low-resolution MS instruments are valuable for the analysis of oligonucleotides of known sequence which have been prepared for, e.g., in vitro analysis of RNA-modifying (Figure 6) or demethylating enzymes (Figure 7). A single sample run is <15 min and ONs below 10 nt are separated according to their chemistry, but the elution order is not predictable. Thus, 2D analysis of sequences as shown by the Szostak lab is not possible with our method (57). In our hands, switching from ON-MS to sensitive nucleoside-MS is possible within 60 min to allow system and column equilibration. A comparison of our TEAA-free and the common ON-MS method is given in Table 1. For the analysis of RNA-modifying enzymes, the preparation and use of in vitro transcribed RNA is highly useful. In this manuscript, we use ribozyme-fusion transcription of a 76 nts long tRNA and an 85 nts long tRNA. Instead of laborious PAGE purification, we use automated SEC. With nucleoside-MS (Figure 6F), we show that the thus purified tRNA is recognized and converted by ADAT2/3, which demonstrates the correct folding of the tRNA.
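As a back-of-the-envelope check of the resolution argument made in this Discussion, the sketch below shows how a 1 Da neutral-mass difference (U versus C, or A versus I) shrinks on the m/z axis with increasing charge state; the assumed QQQ peak width of 0.7 m/z is a typical unit-resolution value and not a measured property of our instrument.

```python
# Minimal numeric sketch: a 1 Da difference in neutral mass appears as 1/z on the m/z axis.
def mz_separation(delta_mass_da, charge):
    return delta_mass_da / charge

QQQ_PEAK_WIDTH = 0.7   # assumed unit-resolution peak width (m/z, FWHM)

for z in (1, 2, 3, 4):
    sep = mz_separation(1.0, z)
    print(f"+{z}: peaks {sep:.2f} m/z apart -> resolvable on QQQ: {sep > QQQ_PEAK_WIDTH}")
# Only the +1 charge state keeps the two species clearly separated on a unit-resolution
# instrument, which is why the CCUA/CCUI distinction was confirmed in +1 or by Orbitrap.
```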
With the goal to determine the position of the A-to-I conversion, the produced tRNAs were digested by RNase T1 and the resulting fragments separated by our TEAA-free chromatography. It is noteworthy that our chromatography allows the separation of ONs that differ only by the exchange of an amino group for an oxygen. The RNase T1 derived fragments are assigned by the detected m/z values in positive ionization mode by low-resolution MS (Figures 4 and 6). For such in vitro experiments, there is no immediate need for high-resolution MS, as the resulting fragments and MS/MS products can be predicted and resolved by low-resolution MS. In the case of the studied A-to-I conversion, the target fragment differs by only 1 Da, which can be resolved by low-resolution MS in the +1 charge state, but not in higher charge states. In such cases, the use of high-resolution instruments might become necessary for confirmation (Supplementary Figure S13). We observe efficient cleavage after inosine by RNase T1 in the A-to-I converted tRNA and synthetic ONs, as expected from approaches to sequence inosine-containing RNA (53,54). This substrate diversity of RNase T1 is not commonly described in the bottom-up MS literature, but due to the high abundance of inosine in tRNA (6) and mRNA (58), it should be kept in mind when using RNase T1. Altogether, we present various MS-based approaches for the bottom-up analysis of RNA which are easily transferable to laboratories which have avoided ON-MS in the past due to the problems introduced by ion-pairing reagents and negative ion mode. We demonstrate that switching from ON-MS to small compound analysis and back is possible after a brief equilibration procedure (Figures 6 and 7). With this, ON-MS can be routinely used on any LC-MS system to study RNA modifications in their sequence context.
Theoretical and Experimental Investigations of Spatio-Temporally Modulated Metasurfaces with Spatial Discretization
A dual-polarized, spatio-temporally modulated metasurface is designed and measured at X-band frequencies. Each column of subwavelength unit cells comprising the metasurface can be independently biased, to provide a tunable reflection phase over a range of $330^\circ$. In this work, the bias waveform applied to adjacent columns is staggered in time to realize a discretized traveling-wave modulation of the metasurface. An analytic model for the metasurface is presented that accounts for its discretized spatial modulation. The analysis considers a finite unit cell size and thus provides increased accuracy over earlier analysis techniques for space-time metasurfaces that commonly assume continuous spatial modulation. Theoretical and experimental results show that for electrically-large spatial modulation periods the space-time metasurface allows simultaneous frequency translation and deflection. When the spatial modulation period on the metasurface is electrically small, new physical phenomena such as subharmonic frequency translation can be realized. When the spatial modulation period of the metasurface is wavelength-scale, simultaneous subharmonic frequency translation and deflection can be achieved. For certain incident angles, retroreflective subharmonic frequency translation is demonstrated.
I. INTRODUCTION
Metasurfaces are two-dimensional structures textured at a subwavelength scale to achieve tailored control of electromagnetic waves. Developments in tunable electronic components have allowed dynamic control over the electromagnetic properties of metasurfaces. Devices such as varactors, transistors and MEMS [1-3], in addition to 2D and phase-change materials [4-6], can be integrated into metasurfaces to tune their electric, magnetic and magneto-electric responses. Often, the properties of a metasurface are spatially modulated to shape electromagnetic wavefronts and achieve focusing, beamsteering, and polarization control [7-10]. By incorporating tunable elements into their design, the properties of metasurfaces can also be modulated in time [11-13]. While spatial modulation redistributes the plane-wave spectrum of the scattered field, temporal modulation provides control over the frequency spectrum. Applying both spatial and temporal variation is known as spatiotemporal modulation, and has recently been applied to metasurfaces [14-22]. Space-time modulation can simultaneously allow frequency conversion and beam steering and shaping. It can also be used to break Lorentz reciprocity and enable magnetless nonreciprocal devices such as gyrators, circulators and isolators [23-26]. Spatio-temporally modulated structures are typically analyzed and designed as continuous surfaces. That is, the unit cell size of the physical structure is assumed to be deeply subwavelength. However, accounting for the discretization of the unit cell provides increased accuracy and can yield useful effects which are not predicted in the continuum limit. For example, when the spatial period of the modulation is smaller than a wavelength, subharmonic frequency translation can be achieved in the specular direction. In this case, scattered waves can radiate at a higher-order frequency harmonic determined by the number of unit cells in a subwavelength spatial period.
Such a behavior does not arise from a continuous analysis of sub-wavelength modulation periodicity, since higher-order spatial harmonics introduced by the spatial discretization are not considered. In this paper, we demonstrate a spatio-temporally modulated metasurface consisting of discrete unit cells, as shown in Fig. 1. Varactor diodes are surface-mounted onto the metasurface, acting as tunable capacitances. Each column of unit cells on the metasurface can be independently, temporally modulated, allowing space-time modulation along one axis. This structure was first presented in [13], where the varactor diodes on each column were temporally modulated with the same bias signal. In [13], a sawtooth reflection phase in time was used, resulting in Doppler-like (serrodyne) frequency translation. Structures of a similar design have been subsequently reported in [27]. Here, we consider a discretized, traveling-wave modulation in which the capacitance variation of adjacent columns is staggered in time. This modulation scheme is reminiscent of N-path networks that have received significant attention in the circuits community [28-33] as of late. In the context of electronic circuits, an N-path network consists of a set of linear, periodically time-varying (LPTV) signal paths connected to a common input and output. Each path includes at least one time-varying circuit component. The time-modulation of adjacent LPTV paths is staggered in time by T_p/N, where T_p is the modulation period and N is the number of paths in the network. The N-path configuration suppresses certain harmonic mixing products. Specifically, for a modulation frequency f_p = 1/T_p and excitation frequency f_0, the only harmonics present at the input and output are those at f = f_0 + rN f_p, where r ∈ Z [28-33]. N-path networks have attracted widespread attention in the circuits community due to their filtering capabilities [31,34,35] and as a method to break time-reversal symmetry and realize non-reciprocal devices such as circulators [36,37] and isolators [26,30]. The non-reciprocal behavior of N-path networks can find various applications in full-duplex wireless communication and radar. Non-reciprocal devices are also needed in optical fiber communications and the protection of sensitive electronic equipment from high-power microwaves. Periodic space-time modulation of a metasurface imparts tangential momenta (an impressed wavenumber) onto each frequency harmonic of the scattered field. The tangential wavenumber of the modulation is set by the spatial modulation period. Provided that each unit cell is sub-wavelength, the behavior of the metasurface can be divided into three regimes based on whether the spatial period of the modulation is (1) electrically small (much smaller than a wavelength), (2) electrically large (much greater than a wavelength), or (3) on the order of a wavelength. When the spatial modulation period is electrically small, the columns of the metasurface appear collocated and the behavior approaches that of an N-path network. The staggered modulation scheme in this case results in harmonic cancellation which can be exploited to achieve subharmonic frequency translation. Reflected harmonics at frequencies f = f_0 + rN f_p (r ∈ Z) correspond to propagating wavenumbers, while the remaining frequencies are evanescent.
In contrast, when the spatial modulation period is much larger than a wavelength, the capacitance variation between adjacent sub-wavelength cells is reduced. In this limit, the discretized metasurface approaches a continuous spatio-temporally modulated structure: the incident wave undergoes frequency translation [12,13] and angular deflection. Finally, when the spatial period is comparable to the wavelength, both subharmonic frequency translation and angular deflection can be simultaneously achieved. At certain incident angles, the metasurface can perform retroreflective subharmonic frequency translation. In this paper, the proposed metasurface design will be explored both in theory and experiment. In Section II, the semi-analytical procedure for computing the response of the presented metasurface is discussed. This includes the homogenization of each unit cell, followed by a treatment of both time and space-time variation. This procedure is carried out for various modulation schemes in Section III. Specifically, effects such as specular subharmonic frequency translation, deflective/retroreflective serrodyne frequency translation, and deflective/retroreflective subharmonic frequency translation are examined. The results of the theoretical study are then validated experimentally in Section IV.
II. ANALYSIS OF THE TEMPORALLY AND SPATIO-TEMPORALLY MODULATED METASURFACE
The spatio-temporally modulated metasurface is depicted in Fig. 1. It is a reflective, electrically-tunable impedance surface [38], consisting of a capacitive sheet above a grounded dielectric substrate. The capacitive sheet is realized as an array of metallic patches interconnected by varactor diodes. It can be modulated in both space and time with a bias signal that is applied through the metallic vias that penetrate the substrate. A unit cell of the designed metasurface is shown in Fig. 2a. The varactor diodes connecting the metallic patches are biased through the vias located at the edges of the unit cells, while the via at the central patch is connected to ground. The remainder of the biasing network is shielded behind the ground plane. The biasing network and diode orientations allow the reflection phase of the metasurface to be independently tuned for two orthogonal (TE and TM) polarizations. Bias waveforms V^x_bias(t, x) and V^y_bias(t, x), as shown in Fig. 1, control the sheet capacitance for the two orthogonal polarizations. A detailed description of the fabrication and biasing network is provided in Section IV. A cross section of the metasurface is shown in Fig. 3 under TE and TM excitations. The biasing vias can be seen perforating the dielectric substrate. Here, we derive a semi-analytical procedure for computing the response of the metasurface shown in Fig. 1 with a discretized traveling-wave modulation. In Section II A, the metasurface unit cells are homogenized and represented by an equivalent circuit model. The semi-analytical procedure for obtaining the scattered fields in the presence of time modulation alone is examined in Section II B. Building upon this framework, a procedure for computing the scattered fields produced by the space-time modulated metasurface is then presented in Section II C.
A. Homogenization of the spatio-temporally modulated metasurface
In the analysis that follows, the unit cells of the metasurface are homogenized. Within each unit cell, the metallic patches interconnected by varactor diodes will be treated as a capacitive sheet.
This sheet can be modulated in time, independently of adjacent unit cells. The dielectric substrate perforated by vias (−l < z < 0) will be treated as a uniaxial anisotropic material with a relative permittivity tensor of the form diag(ε_h, ε_h, ε_zz) [39], where ε_h is the relative permittivity of the host medium and ε_zz is the effective relative permittivity along the vias. Since the metasurface is electrically thin at the operating frequency of 10 GHz (l = 0.016λ = 0.508 mm), a local model can be used to describe the wire medium and relate ε_zz to the plasma wavenumber of the vias [39]. Here, k_p = 541.81 rad·m^−1 is the plasma wavenumber of the wire medium extracted from a full-wave simulation of the unit cell shown in Fig. 2a, and k_0 is the free-space wavenumber of the incident wave. The anisotropic substrate supports TE (ordinary mode) and TM (extraordinary mode) polarizations. The normal wavenumber of each polarization in the substrate follows from this uniaxial model and depends on k_x, the tangential wavenumber of the incident wave. Each unit cell can be modeled with the shunt resonator depicted in Fig. 2b. The circuit model consists of a tunable capacitance (representing the capacitive sheet) backed by a shorted transmission-line section (representing the grounded dielectric substrate) that acts as an inductance. As a result, the bias voltage applied to the varactor diodes can be used to tune the reflection phase. The phase range of this topology is 2π − Δφ, where Δφ is the round-trip phase delay through the substrate. Details on the phase range of the realized metasurface are provided in Section IV. In this paper, two different reflection phase waveforms are considered: a sawtooth reflection phase with respect to time, which allows serrodyne frequency translation [12,13], and a sinusoidal reflection phase with respect to time. As mentioned earlier, each column of unit cells can be biased independently, allowing for space-time modulation along a single (x) axis. As a result, the homogenized model consists of capacitive strips whose widths are given by the unit cell size d_0 = λ_0/5 = 6 mm, where λ_0 is the wavelength in free space at 10 GHz. The capacitance seen by each polarization can be controlled independently and is uniform over the strip. It is instructive to first consider the analysis of the metasurface when it is uniformly biased across all unit cells. In this case, there is no spatial variation in the homogenized model, and the reflected power will spread into discrete frequency harmonics due to the periodic time variation of the reflection phase. The tangential incident and reflected fields above the metasurface (z = 0^+) can be expanded into frequency harmonics, where ω_0 is the radial frequency of the incident wave, ω_p is the radial frequency of the modulation, and k_x is the tangential wavenumber of the incident wave. The incident and reflected electric field harmonics can be written in vector form as V^inc and V^ref, respectively. The vector V^inc represents the incident tangential electric field and contains only a single entry (V^inc_m = V^inc_0 δ_m), since the incident field is monochromatic. The vector V^ref contains all the reflected tangential electric field coefficients V^ref_m. Based on the detailed derivation in Supplemental Material I and II, the reflected electric field can be calculated for each polarization using Eq. (9), in which the superscript "X" is "E" for TE-polarized waves and "M" for TM-polarized waves. Y^TX_0 is the free-space tangential wave admittance matrix.
It is a diagonal matrix containing entries of the free-space admittances at the corresponding frequency harmonics. Y^TX is the input admittance matrix of the time-modulated metasurface. It is not a diagonal matrix, since the time-modulated capacitive sheet introduces coupling between different harmonics. This analysis procedure can be used to predict the scattered field for arbitrary time-periodic modulating waveforms. Suppose the reflection phase is modulated by a sawtooth waveform. In this case, serrodyne frequency translation is expected [12]. The frequency of the incident wave is f_0 = 10 GHz and the modulation frequency is f_p = 25 kHz. The incident wave impinges on the metasurface at an oblique angle of 25°. The capacitance modulation needed to upconvert the wave to f_0 + f_p is calculated in Supplemental Material I and shown in Fig. 4 for each polarization. The calculation includes 141 temporal harmonics for the field expansion and 101 temporal harmonics for the capacitance modulation. The reflected spectra for the two orthogonal polarizations reveal a Doppler shift to a frequency of f_0 + f_p. For TE polarization, a 0.107 dB conversion loss and 22.83 dB sideband suppression are achieved. For TM polarization, a 0.125 dB conversion loss and 21.64 dB sideband suppression are achieved. As mentioned earlier, the equivalent circuit model of the unit cell provides a phase range that is slightly less than 2π (1.6π for TE polarization and 1.54π for TM polarization used in the analysis), resulting in undesired sidebands. For TM polarization, the reflection phase range is slightly smaller than for TE at the oblique angle of 25°, resulting in a slightly higher conversion loss.
C. Space-time modulation of the metasurface
A space-time gradient can be applied to the metasurface by introducing a time delay between the capacitance modulation applied to adjacent columns. This staggered modulation scheme is shown in Fig. 5. It provides a discretized travelling-wave (N-path) modulation. The capacitance modulation on each path is chosen to produce either a sawtooth or a sinusoidal reflection phase with respect to time. In this staggered modulation scheme, there are N columns of subwavelength unit cells within one spatial modulation period d. This impresses a modulation wavenumber β_p = 2π/d onto the metasurface. N adjacent columns of the metasurface are modulated with bias signals staggered in time by an interval T_p/N, where T_p = 1/f_p is the temporal modulation period. One can view each column as a path in an N-path network. The capacitance on each path v is related to that of the adjacent path by a time delay of T_p/N, as expressed in Eq. (10). Here, C_v(t, x) is a pulse function in space and a periodic function in time (see Supplemental Material III). In contrast to an N-path circuit, the paths (columns of unit cells) are not connected to a common input and output. Instead, each path is displaced by a subwavelength distance d_0 = 6 mm (d_0 = λ_0/5) from its adjacent paths. Examples of 2- and 3-path spatio-temporal modulation schemes are shown in Fig. 6. At any given time, the spatial variation of the reflection phase is a discretized sawtooth (blazed grating) ranging from 0 to approximately 2π over a period d = N d_0. As shown in Supplemental Materials III and IV, the capacitance relationship given by Eq. (10)
allows the spatio-temporally modulated sheet capacitance to be expanded as a double Fourier series over spatio-temporal harmonic pairs (r, q), where β_p = 2π/d is the modulation wavenumber and β_d = 2π/d_0 = Nβ_p is an additional wavenumber which results from the discretization of the spatial modulation into paths (unit cells). The summation over r accounts for the discontinuity in capacitance at the boundary of each path, as well as the microscopic variation of capacitance within the paths (which in this case is constant). The summation over q accounts for the macroscopic capacitance variation over one spatial modulation period d. The sheet capacitance of each path is capable of generating a staggered sawtooth reflection phase in time, as shown in Fig. 7. The N-path symmetry of the system establishes a relation between the fields on adjacent paths [28], which the total electric field must satisfy (Eq. (12)). Eq. (12) is used in Supplemental Material IV to show that the fields can be expanded in terms of a modified Fourier series. At z = 0^+, the tangential incident and reflected fields on the metasurface can be expressed as in Eqs. (13)-(16). In these expansions, the spatio-temporal harmonic pair (r, q) of the electromagnetic field on the surface has a tangential wavenumber k_xrq = k_x + qβ_p + rβ_d and a corresponding radial frequency ω_rq = ω_0 + qω_p, where N is the number of paths within a spatial period of the metasurface. It can be seen that the staggered modulation between paths impresses a tangential wavenumber of qβ_p onto the q-th frequency harmonic. The reflected angle of each harmonic pair is given by Eq. (19). The coefficients of the incident/reflected electric and magnetic fields are related by the free-space tangential wave admittance defined for each spatio-temporal harmonic pair. Meanwhile, the spatio-temporally modulated sheet capacitance provides coupling between different harmonic pairs. To solve for the scattered field, the incident and reflected tangential electric field harmonics are once more organized into vectors, V^inc and V^ref. Each entry corresponds to a unique spatio-temporal harmonic pair (r, q). The reflected electric field can then be calculated for each polarization using Eq. (9). A detailed derivation of the entries of the metasurface admittance matrix Y^TX and the free-space tangential admittance matrix Y^TX_0 is provided in Supplemental Material IV. Note that when the unit cell size is infinitesimally small (d_0 ≪ d), the variation of the field across a unit cell is negligible. Therefore, the only harmonic pairs that remain are those with r = 0. The capacitance modulation of the metasurface can thus be seen as continuous, as given by Eq. (20). For such a modulation, the metasurface supports harmonics at frequency f_0 + qf_p, with a corresponding wavenumber k_x + qβ_p. Note that (20) has the form of a traveling wave. This case models the continuum limit, in which the spatial discretization of the traveling-wave bias can be neglected. With the modulation waveform (sawtooth reflection phase in time) shown in Fig. 4a and Fig. 4c, the metasurface converts the incident wave at (f_0, k_x) to (f_0 + f_p, k_x + β_p), as shown in Fig. 8b. When k_x + β_p > (ω_0 + ω_p)/c, the metasurface can convert an incident wave to a surface wave (as shown in Fig. 8a), provided that the corresponding surface wave is supported by the metasurface. However, when the corresponding surface wave is not supported, the metasurface reflects all the power back in the specular direction at the same frequency f_0.
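A minimal numerical sketch of this staggered modulation (assuming the simple normalized sawtooth phase profile described above, not the actual capacitance waveforms of Fig. 4) illustrates that delaying the same single-column waveform by vT_p/N turns the surface into a discretized blazed grating at every instant:

```python
import numpy as np

def column_phase(t, v, N, fp):
    """Sawtooth reflection phase (0..2*pi) of column v: the single-column
    waveform phi(t) = 2*pi*frac(fp*t), delayed by v*Tp/N."""
    return 2 * np.pi * ((fp * t - v / N) % 1.0)

N, fp = 3, 25e3                                                # 3-path modulation at 25 kHz
phases = np.degrees([column_phase(0.0, v, N, fp) for v in range(N)])
print(phases)   # [0., 240., 120.]: adjacent columns are offset by 2*pi/N (mod 2*pi),
                # i.e. a discretized linear phase ramp spanning ~2*pi over d = N*d0
```

With the sign convention chosen here the phase steps by 2π/N from one column to the next; reversing the delay reverses the direction of the impressed wavenumber.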
More generally, the transverse resonance condition can be used to identify surface modes supported by the spatially discretized metasurface. For harmonic pairs with tangential wavenumbers larger than that of free space (k_xrq > ω_rq/c), the transverse resonance condition can be used to judge whether the corresponding surface waves are supported. Solving Eq. (21) yields the ω_0−k_x dispersion relationship of the supported surface wave. Note that when a surface wave is supported, the reflection coefficient V^ref/V^inc in Eq. (9) diverges. This is not the case for the incident angle and path number N combinations considered in this paper. Thus, for all of the examples presented in this paper, a surface wave is not supported by the metasurface. For our metasurface, the unit cell size is chosen to be d_0 = λ_0/5. Since the unit cell size of the metasurface is fixed, the total spatial period d = N d_0 can be controlled by changing N, the number of paths. This enables the same metasurface to achieve different functions depending on the number of paths included in a spatial period. If the modulation period d is electrically small (N is small), then the spatial modulation wavenumber β_p is large. As depicted in Fig. 8a, this can lead to a number of higher-order harmonics existing outside the light cone. Since the spatial modulation period is electrically small, the paths can be viewed as collocated and an N-path circuit model can be used to approximate the physical structure. The equivalent circuit model of the spatio-temporally modulated metasurface for d ≪ λ_0 is depicted in Fig. 9. If the spatio-temporally modulated metasurface does not support surface waves at the operating frequency, power is only coupled to radiating harmonics: those within the light cone. Based on Eq. (17), the radiated harmonics are those with q + Nr = 0 (Eq. (22)). Since r ∈ Z, Eq. (22) implies that propagating harmonics correspond to q = 0, ±N, ±2N, . . . Therefore, the radiated reflected wave only contains frequency harmonics at f_0 + rN f_p, where r ∈ Z. This phenomenon can only be observed when the spatial discretization of the metasurface is considered. In the continuum limit, sub-wavelength spatial modulation results in specular reflection at the same frequency as the incident wave. However, the spatial discretization introduces additional spatial harmonics (the summation over r in Eqs. (13)-(16)) that can couple to the incident wave. As a result, the metasurface can achieve subharmonic frequency translation. Note that the tangential wavenumbers of the radiated subharmonic mixing terms are all equal to the incident tangential wavenumber (since k_xrq = k_x when q + Nr = 0). With the capacitance modulation shown in Fig. 4a and 4c, the sawtooth reflection phase on each path enables the metasurface to upconvert the frequency to the first propagating harmonic pair. In this case, the metasurface performs subharmonic frequency translation from f_0 to f_0 + Nf_p. When the modulation period d is electrically large (N is large), the spatial modulation wavenumber β_p is small, as depicted in Fig. 8b. When N is a very large value, according to Eq. (S.39) in the Supplemental Material, the capacitance coefficient C_rq is zero for r ≠ 0. For this case, the field variation across each unit cell is small, and the capacitance modulation waveform simplifies to the continuum limit given by Eq. (20). In other words, the metasurface shows a similar performance to one with an infinitely small unit cell size.
Serrodyne frequency translation to a deflected angle can be achieved using the sawtooth waveform given in Fig. 4. In addition, when the modulation period d is on the order of a wavelength, the metasurface can simultaneously perform subharmonic frequency translation and angular deflection. The deflected angle of the harmonic pair of interest is given by Eq. (19). Setting θ rq = −θ i in Eq. (19) yields an expression for the incidence angles at which retroreflection occurs for a particular spatio-temporal harmonic. The deflective and retroreflective behavior of the metasurface is showcased in Section III for various scenarios.

III. SCATTERING FROM A SPACE-TIME MODULATED METASURFACE FOR DIFFERENT SPATIAL MODULATION PERIODS

Computed results are given here for various spatio-temporal modulation cases. In Section III A, space-time modulation schemes are designed to achieve subharmonic frequency translation. The spatial modulation period is kept electrically small. In Section III B, the spatial modulation period is electrically large such that beam deflection and frequency translation can be achieved simultaneously. Finally, in Section III C, the spatial modulation period is on the order of a wavelength, allowing simultaneous retroreflection and subharmonic frequency translation. For each of the cases that follow, the conversion loss and sideband suppression for the desired frequency harmonic at a given observation angle are provided in Table I. The table will be referred to throughout this section. In all of the cases studied, the incident signal frequency is f 0 = 10 GHz. The modulation frequency is f p = 25 kHz, the maximum frequency that could be experimentally validated using the available equipment (see Section IV). For each angle of incidence, the capacitance modulation is calculated based on Eq. (S.4) to achieve the desired time-varying reflection phase. Unless specifically stated otherwise, the reflection phase of each column (path) is a sawtooth function in time. For both polarizations, the field is expanded into 141 × 141 harmonic pairs. The temporal capacitance modulation on each path is truncated to 101 temporal harmonics.

A. Small spatial modulation period (|kx ± βp| > k0)

In this section, electrically small spatial modulation periods (|k x ± β p | > k 0 ) are considered. In this regime, both the +1 (k x + β p ) and −1 (k x − β p ) spatial harmonics are outside of the light cone. Since the unit cell size of the metasurface is fixed to d 0 = λ 0 /5, 2- and 3-path modulation (N = 2, 3) are chosen to satisfy the small period condition. The incident wave is chosen to impinge on the metasurface with an oblique angle of 25 • . The modulation schemes for the 2-path (N = 2) and 3-path (N = 3) examples are shown in Fig. 6 and Fig. 7. As explained in Section II C, the metasurface performs subharmonic frequency translation at the specular angle.

Case 1: Reflective (specular) subharmonic frequency translation

For 2- and 3-path modulation, the reflection spectra computed from Eq. (9) are shown in Fig. 10. As expected, the strongest reflected harmonic is at a frequency of f 0 + N f p . For the case of subharmonic frequency translation, all the reflected propagating harmonics share the same tangential wavenumber as the incident wave. Since the modulation frequency is much lower than the incident frequency, f p ≪ f 0 , all the harmonics are at a reflection angle of 25 • , as depicted in Fig. 11.
If the modulation frequency f p is comparable with f 0 , then each of the reflected, propagating frequency harmonics will have a different radiated angle due to their substantially different free space wavenumbers. The conversion loss and sideband suppression for both polarizations are provided in Table I. As mentioned earlier, the unit cell provides a phase range that is slightly smaller than 2π, resulting in conversion loss and undesired sidebands. It can be seen that, as the converted frequency harmonic (which is equal to the path number N in this case) is increased, the conversion loss increases and the sideband suppression decreases. This is because the N-path metasurface upconverts the frequency to the first propagating harmonic pair. The higher the upconverted frequency, the longer this process takes and the larger the conversion loss due to the formation of sidebands that results from the imperfect reflection phase range.

B. Large spatial modulation period (|kx ± βp| < k0)

In this section, we consider two cases where the modulation period d is larger than the free space wavelength λ 0 (N is large). In this regime, both the +1 (k x + β p ) and −1 (k x − β p ) spatial harmonics are inside the light cone. In addition, when N is a large value, the harmonic pairs (r, q) with r = 0 dominate (Eq. (S.38)). In the first case, the metasurface exhibits serrodyne frequency translation to a deflected angle. In the second case, the incident angle is specifically chosen to achieve serrodyne frequency translation in retroreflection.

Case 2: Deflective serrodyne frequency translation

First, let us consider the example shown in Fig. 12a, where a wave is incident at an angle θ 1 = 25 • and the number of paths is large, N = 20. From Eq. (17), the tangential wavenumbers of the reflected harmonic pairs are determined, given that d = 20d 0 = 4λ 0 . The harmonics located inside the light cone (propagating harmonics) are those with q = 0, ±1, ±2, −3, −4, −5. For the capacitance variation shown in Fig. 4a and Fig. 4c, the metasurface acts as a serrodyne frequency translator. It upconverts the incident wave to the harmonic pair (r = 0, q = 1) with frequency f = f 0 + f p . Note that in this case, each radiated harmonic has its own tangential wavenumber, and thus reflects at a different angle given by Eq. (19). The harmonic of interest (f 0 + f p ) reflects to θ 2 = 42 • , as shown in Fig. 12a. The reflected spectra for both polarizations are given in Fig. 13. The conversion loss and sideband suppression for both polarizations are provided in example 3 of Table I.

Let us consider another example (shown in Fig. 12b) where the spatio-temporal modulation of the metasurface and incident frequency are kept the same, but the incident and reflected angles are swapped. The incident angle is θ 2 = −42 • . Each radiated (propagating) harmonic pair of the reflected field has its own tangential wavenumber, where q = 0, ±1, 2, 3, 4, 5, 6. The metasurface frequency translates the incident signal at f 0 to the harmonic pair (r = 0, q = 1), which is at frequency f = f 0 + f p . The tangential wavenumber of the harmonic pair (r = 0, q = 1) can be easily calculated as −k 0 sin(25 • ). When the modulation frequency f p is comparable to the incident frequency f 0 , the reflection angle can differ from θ 1 = −25 • due to the significant change in the free-space wavenumber at the reflected frequency [14], as shown in Supplemental Material V. However, the modulation frequency here is f p = 25 kHz, which is far smaller than the incident frequency of f 0 = 10 GHz. As a result, the reflection angle is −25 • .
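As a numerical cross-check of the two deflection examples above, the sketch below evaluates the reflection angle of the (r = 0, q = 1) harmonic for N = 20 under the same assumed harmonic relations as before; it reproduces the quoted 25° → 42° deflection and the swapped −42° → −25° case (the modulation frequency is so small compared with f 0 that the reflected free-space wavenumber is essentially k 0 ).

```python
import numpy as np

# Sketch reproducing the deflection angles quoted for Case 2 (N = 20, d = 4*lambda0),
# under the assumed relations k_x,rq = kx + (q + N*r)*beta_p, omega_rq = omega0 + q*omega_p.
c, f0, fp, N = 3e8, 10e9, 25e3, 20
lam0 = c / f0
beta_p = 2 * np.pi / (N * lam0 / 5)          # d = N*d0 = 4*lambda0

def refl_angle(theta_in_deg, q, r=0):
    kx = 2 * np.pi * f0 / c * np.sin(np.deg2rad(theta_in_deg))
    k_xrq = kx + (q + N * r) * beta_p
    w_rq = 2 * np.pi * (f0 + q * fp)
    return np.degrees(np.arcsin(k_xrq * c / w_rq))

print(refl_angle(25.0, q=1))    # ~42 deg: harmonic (0, 1) of the 25-deg incidence
print(refl_angle(-42.0, q=1))   # ~-25 deg: incidence and reflection roles swapped
```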
The reflection spectra for both polarizations are given in Fig. 14. From example 4 of Table I, it can be seen that the conversion loss and sideband suppression are nearly identical to those of the previous example (shown in example 3 of Table I).

Case 3: Retroreflective serrodyne frequency translation

Here, we consider the case where the incident angle is chosen to achieve retroreflection. According to Eq. (19), setting θ rq = −θ i yields an expression for the incidence angles at which retroreflection occurs for the converted spatio-temporal harmonic. For this case, the modulation wavenumber β p = 2k x , as shown in Fig. 15. The reflected wave propagates back to the source with an upconverted frequency. The retroreflection angle θ i can be calculated by solving θ 0,1 = −θ i in Eq. (19). Here, the number of paths is chosen to be N = 20, and the retroreflection angle is calculated to be θ i = −7.18 • . The calculated reflection spectra for both polarizations are shown in Fig. 16. The spectra clearly show a Doppler shift to frequency f 0 + f p . The conversion loss and sideband suppression for both polarizations are provided in example 5 of Table I. Note that in Fig. 16, only the harmonic pair (r = 0, q = 1) (at frequency f 0 + f p ) is retroreflective. The reflection angle of other harmonics can be calculated based on Eq. (19).

C. Wavelength-scale spatial modulation period

In this section, we consider three cases where the spatial modulation period is on the order of the wavelength of radiation (|k x | + |β p | > k 0 and ||k x | − |β p || < k 0 ). In this regime, either the +1 (k x + β p ) or the −1 (k x − β p ) spatial harmonic is inside the light cone. For the fixed unit cell size of d 0 = λ 0 /5, 4-path modulation (N = 4) is chosen to satisfy the wavelength-scale period condition. In the first case, the metasurface exhibits simultaneous subharmonic frequency translation and deflection. In the second case, the incident angle is specifically chosen to achieve subharmonic frequency translation in retroreflection. In the last case, we show that the retroreflective frequency can be switched by changing the temporal modulation waveform to sinusoidal.

Case 4: Deflective/retroreflective subharmonic frequency translation

First, let us consider the example shown in Fig. 17a, where a wave is incident on the metasurface with a positive k x value. According to Eq. (17), the radiated harmonics are those satisfying Eq. (26), which implies that the radiated reflected wave contains frequency harmonics at f 0 + rN f p and f 0 + (rN − 1)f p , where r ∈ Z. Under a capacitance variation that generates a sawtooth reflection phase, the reflected wave is upconverted to the first radiated frequency harmonic, which in this case is the harmonic pair (r = −1, q = 3). Therefore, the reflected wave is Doppler shifted to a frequency f 0 + 3f p . In addition, we choose an incident angle such that the wave is retroreflected. As explained in Section III B 2, the modulation wavenumber is set to β p = 2k x (see Fig. 17a). The retroreflection angle can be calculated by setting θ −1,3 = −θ i in Eq. (19), which is 39 • for a path number of N = 4. The calculated retroreflection spectra are shown in Fig. 18a and 18c. Doppler-like frequency translation to frequency f 0 + 3f p occurs for the incident angle of 39 • , for both polarizations. The frequency of interest f 0 + 3f p exhibits retroreflective subharmonic frequency translation. The conversion loss and sideband suppression for both polarizations are provided in example 6 of Table I.
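Before moving on, the retroreflection condition β p = 2k x quoted above can be checked with a one-line estimate: with f p ≪ f 0 the retroreflection angle is approximately arcsin(β p /2k 0 ). The sketch below evaluates this for the two path numbers used in the paper and recovers the quoted 7.18° (N = 20) and ≈39° (N = 4) angles; this is an approximation under the stated assumption, not the exact solution of Eq. (19).

```python
import numpy as np

# Sketch of the retroreflection-angle condition beta_p = 2*k_x, assuming
# f_p << f_0 so the reflected free-space wavenumber is ~k_0.
c, f0 = 3e8, 10e9
lam0, d0 = c / f0, (c / f0) / 5
k0 = 2 * np.pi / lam0

for N in (20, 4):
    beta_p = 2 * np.pi / (N * d0)
    theta_i = np.degrees(np.arcsin(beta_p / (2 * k0)))
    print(f"N = {N:2d}:  retroreflection at theta_i ~ {theta_i:.2f} deg")
# Expected:  N = 20 -> ~7.18 deg,  N = 4 -> ~38.7 deg (quoted as 39 deg).
```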
Note that in Fig. 18a and 18c, only the harmonics represented by a solid line are propagating in the retroreflective direction. The harmonics represented by dashed lines are propagating in the specular direction. Note that, with a wavelength-scale spatial modulation period, the performance of the metasurface is direction-dependent. When the incident angle is −39 • , as shown in Fig. 17b, a different set of harmonics is radiated. In this case, the harmonic pair (q = 1, r = 0) is inside the light cone. Therefore, the metasurface performs serrodyne frequency translation: upconversion to a frequency f 0 + f p . Since β p = 2k x , the frequency of interest f 0 + f p is also retroreflected. The calculated reflection spectra are shown in Fig. 18b and 18d. Doppler-like frequency translation to frequency f 0 + f p is observed for both polarizations. The conversion loss and sideband suppression for both polarizations are provided in example 7 of Table I. An illustration of the direction-dependent retroreflective behavior of the metasurface, with a wavelength-scale spatial modulation period, is depicted in Fig. 19a.

Here, retroreflection is also achieved using a sheet capacitance that generates a staggered sinusoidal reflection phase with respect to time on adjacent columns. The capacitance modulation waveform is shown in Fig. 20a and Fig. 20c for each polarization. Each column of the metasurface generates a sinusoidal reflection phase in time. When all the columns of the metasurface are biased with the same waveform, the reflection spectra take the form of a Bessel function, as shown in Fig. 20b and Fig. 20d. The peak-to-peak modulation amplitude is chosen to be 276 • to suppress the zeroth harmonic in reflection [26]. Unlike the sawtooth modulation, the sinusoidal reflection phase excites both the +1 (r = 0, q = 1) and −1 (r = 0, q = −1) frequency harmonics. Fig. 17 shows that for a positive k x incident wavenumber, the reflected +1 frequency harmonic is outside of the light cone (|k x + β p | > k 0 ), and the reflected −1 frequency harmonic is inside the light cone (|k x − β p | < k 0 ). Since the +1 frequency harmonic does not radiate and is not supported by the metasurface as a surface wave, the power is reflected from the metasurface with a frequency of f 0 − f p . In other words, for a wavelength-scale spatial modulation period, the metasurface supports single-sideband frequency translation with the sinusoidal modulation. When the incident angle is 39 • , the frequency of interest f 0 − f p is retroreflective. Again, the retroreflective behavior of the metasurface is directionally dependent. When the incident angle is −39 • , the +1 frequency harmonic is inside the light cone, while the −1 frequency harmonic is outside, as shown in Fig. 17b. For the capacitance variation shown in Fig. 20a and 20c, the retroreflected wave is radiated at a frequency f 0 + f p . For this case, the direction-dependent retroreflective behavior of the metasurface is depicted in Fig. 19b. The calculated reflection spectra are shown in Fig. 21, and the corresponding conversion loss and sideband suppression are provided in Table I. Note that the retroreflection angle for both of the last two cases (with sawtooth and sinusoidal reflection phase) was ±39 • for a path number of N = 4. By simply changing the temporal modulation waveform, the retroreflection frequency was changed between f 0 − f p and f 0 + 3f p , for the same incident angle of 39 • .
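As a brief aside on the sinusoidal case above, the choice of a 276° peak-to-peak phase swing can be motivated with the ideal phase-modulation picture: for a lossless reflection coefficient Γ(t) = exp(jA sin(ω p t)), the harmonic amplitudes are Bessel functions J n (A), and the zeroth harmonic vanishes at the first zero of J 0 , A ≈ 2.405 rad, i.e. a peak-to-peak swing of about 276°. This idealization ignores the metasurface's amplitude variation and finite phase range.

```python
import numpy as np
from scipy.special import jv

# Why ~276 deg peak-to-peak suppresses the zeroth harmonic: for the idealized,
# lossless phase modulation Gamma(t) = exp(j*A*sin(wp*t)), the harmonic
# amplitudes are J_n(A), and J_0(A) = 0 at its first zero A ~ 2.405 rad.
A = 2.405                                   # first zero of J_0
print(np.degrees(2 * A))                    # ~275.6 deg peak-to-peak
for n in range(-3, 4):
    print(f"harmonic n = {n:+d}:  |J_n(A)| = {abs(jv(n, A)):.3f}")
```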
IV. METASURFACE DESIGN AND FABRICATION

In this section, a prototype metasurface is described and measurements are reported for several of the space-time modulation cases described earlier. Details of the metasurface realization, as well as the measurement setup used to characterize its performance, are given in Section IV A. The static performance of the metasurface under various DC bias conditions is presented in Section IV B. Based on this static (DC) characterization, the required bias waveform for the time-modulated metasurface is determined in Section IV C. This section also includes the measured reflection spectra for time-variation alone. Finally, measured results are given in Section IV D for several of the spatio-temporal modulation cases explored theoretically in Section III.

A. Metasurface design and measurement setup

A unit cell of the dual-polarized, ultra-thin (0.06λ) metasurface is shown in Fig. 2a. Varactor diodes (MAVR-000120-1411 from MACOM [40]) are integrated onto the metasurface to act as tunable capacitances for two orthogonal polarizations. The biasing networks for each of these polarizations were printed behind the ground plane of the unit cell, as shown in Fig. 22a and Fig. 22c. Each bias layer consists of 28 metallic lines that can independently modulate all 28 columns of the metasurface. A total of 3136 MAVR-000120-1411 varactor diodes were mounted onto the metasurface. The varactor diodes are biased through vias located on the center of the metallic patches. A photo of the fabricated metasurface is shown in Fig. 22b. A substrate with tan δ = 0.0027 and a thickness of 0.508 mm was chosen for each layer. Rogers 4450F (ε r = 3.52 and tan δ = 0.004) bondply, with a thickness of 0.101 mm, was used as an adhesive layer. A cross section of the material layers used to fabricate the metasurface is shown in Fig. 22c. The total thickness of the fabricated metasurface is 1.726 mm (0.06λ).

The metasurface was experimentally characterized using the quasi-optical Gaussian beam system shown in Fig. 23. In the experimental setup, the fabricated metasurface is illuminated by a spot-focusing lens antenna (SAQ-103039-90-S1). The antenna excites a Gaussian beam with a beamwidth of 50 mm at a focal length of 10 cm. The width of the fabricated metasurface is larger than 1.5 times the beamwidth to limit edge diffraction. A continuous wave signal provided by an Anritsu MS4644B vector network analyzer at f 0 = 10 GHz was used as the incident signal. The amplitude of the incident signal impinging on the metasurface was measured to be −20 dBm. An Agilent E4446A spectrum analyzer was used to capture the reflected spectrum. The path loss of the system was measured and calibrated out of the measurements. The metasurface was modulated by four Keysight M9188A 16-channel D/A converters. Each channel of the D/A converter was synchronized and staggered in time. The D/A converter has an output voltage range from 0 V to 30 V, and a maximum modulation frequency of f p = 25 kHz.

B. Measurements of a DC biased metasurface: tunable reflection phase

We will first look at the simulated and measured DC performance of the proposed metasurface. The capacitance provided by the varactors ranges from 0.18 pF to 1.2 pF. Using the commercial electromagnetic solver ANSYS HFSS, a full-wave simulation of the unit cell shown in Fig. 2a is conducted in the absence of time-variation. In the simulations, each varactor diode is modeled as a lumped capacitance in series with a resistance.
The capacitance and resistance values of the varactor diode were extracted as a function of bias voltage from its SPICE model [40]. The simulated reflection coefficients of the metasurface for various varactor capacitance values are given in Fig. 24. The incident angle is set to 25 • . At the operating frequency of 10 GHz, the reflection phase of the metasurface can be varied from −181.1 • to 155 • for TE polarization, providing a maximum phase range of 336.1 • . For TM polarization, the reflection phase of the metasurface can be varied from −181.8 • to 146.3 • , providing a maximum phase range of 328.1 • . At the operating frequency of 10 GHz, the simulated reflection amplitude for both polarizations remains greater than −3 dB across the entire phase range. Note that at the resonant frequency of the unit cell, the input susceptance goes to zero, and the surface admittance becomes purely resistive. The effective resistance seen by the incident wave is determined by the losses within the dielectric, the finite conductivity of the metallic patches and the losses of the varactor. As a result, the reflection coefficient magnitude dips at resonance, where the reflection phase becomes zero (a high impedance condition). The highest return loss at 10 GHz is 3.41 dB for TE polarization, occurring for a varactor capacitance of 0.313 pF. For TM polarization, the highest return loss at 10 GHz is 2.47 dB, occurring at a varactor capacitance of 0.30 pF. The metasurface suffers higher loss for TE polarization than TM polarization. This is because, at the incident angle of 25 • , the value of the free-space tangential wave impedance for TE polarization is closer (better impedance matched) to the purely resistive input impedance of the metasurface at resonance. The simulated cross-polarization behavior of the metasurface is lower than −50 dB for all the orthogonal varactor capacitance combinations.

The static (DC biased) performance of the metasurface was measured at an oblique angle of 25 • . The measured TE and TM reflection coefficients under various bias voltages are given in Fig. 25. The bias voltage used in measurement ranged from 0 V to 15 V, providing a varactor capacitance range of 0.18 pF to 1.2 pF. At the operating frequency of 10 GHz, the measured reflection phase of the metasurface could be varied from −182.7 • to 149.9 • for TE polarization, providing a phase range of 332.6 • . For TM polarization, the measured reflection phase could be varied from −176.9 • to 147.6 • , providing a phase range of 324.5 • . At resonance, the measured reflection amplitude was found to be much lower than in simulation, indicating higher losses in the fabricated metasurface. This could be attributed to additional ohmic loss within the diode as well as losses introduced by the tinning and soldering procedures used to mount the diodes. Nevertheless, the simulated and measured static performances of the metasurface are in good agreement. A detailed comparison between simulation and measurement for each bias voltage is given in Supplemental Materials VI.

A harmonic balance simulation with the Keysight ADS circuit solver was used to verify the theoretical analysis and compute the reflection spectrum. However, to use the harmonic balance circuit solver, a circuit equivalent of the fabricated metasurface needed to be extracted for each polarization. The equivalent circuits are extracted from full-wave scattering simulations.
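The qualitative behavior described in this subsection (a wide phase sweep with a magnitude dip near resonance) can be reproduced with a minimal homogenized model: a varactor-loaded capacitive sheet, represented as a series R-C branch, in parallel with the inductive input admittance of a grounded dielectric slab. The sketch below assumes normal incidence and uses placeholder slab and loss values, so it is illustrative only and not a model of the fabricated stack-up.

```python
import numpy as np

# Hedged sketch of the DC (static-bias) behaviour: a series R-C sheet branch in
# parallel with the inductive admittance of a grounded slab (shorted line).
# Normal incidence; the slab and loss values below are placeholder assumptions.
c0, f0 = 3e8, 10e9
w0 = 2 * np.pi * f0
eps_r, l, R = 3.5, 1.0e-3, 3.0              # assumed slab permittivity/thickness, varactor loss
eta0 = 376.73
Y0 = 1.0 / eta0                             # free-space wave admittance
Ys = np.sqrt(eps_r) / eta0                  # wave admittance inside the slab
kz = w0 / c0 * np.sqrt(eps_r)
Y_slab = -1j * Ys / np.tan(kz * l)          # shorted line -> inductive input admittance

for C in np.linspace(0.18e-12, 1.2e-12, 6):     # varactor tuning range
    Y_in = 1.0 / (R + 1.0 / (1j * w0 * C)) + Y_slab
    gamma = (Y0 - Y_in) / (Y0 + Y_in)
    print(f"C = {C*1e12:.2f} pF  |Gamma| = {abs(gamma):.2f}  "
          f"phase = {np.degrees(np.angle(gamma)):7.1f} deg")
```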
A voltage-dependent resistance is added to the extracted circuit to account for the added losses observed in measurement. The equivalent circuits for the two polarizations under an oblique incident angle of 25 • are given in Supplemental Materials VI. From the equivalent circuits, the capacitance modulation required to obtain a given reflection phase versus time dependence can be obtained.

C. Measurements of a time-modulated metasurface: serrodyne frequency translation

As discussed in Section II B, if all columns of the metasurface are biased with the same modulation waveform, providing a sawtooth reflection phase versus time, the metasurface can perform serrodyne frequency translation. As shown in Fig. 24 and Fig. 25, the reflection amplitude is not unity due to the loss in the metasurface. Therefore, the capacitance modulation waveform had to be numerically optimized. The optimization process is detailed in Supplemental Materials VII. In the experiment, the optimized waveform was sampled at 20 data points per period T p = 40 µs, and the sampled waveform was entered into the D/A converter. All channels of the D/A converter were synchronized with the same bias waveform. The bias waveform across several diodes was measured using a differential probe (Tektronix TMDP0200) and a Tektronix MDO3024 oscilloscope. The optimized and measured bias voltage waveforms are shown in Fig. 26a for TE polarization and Fig. 26c for TM polarization. The measured reflection spectrum for an oblique angle of 25 • is shown in Fig. 26b for TE polarization and Fig. 26d for TM polarization. Both polarizations show serrodyne frequency translation to f = f 0 + f p . For TE polarization, a 4.604 dB conversion loss and 9.196 dB of sideband suppression are achieved. For TM polarization, a 3.67 dB conversion loss and 9.86 dB sideband suppression are achieved. For each polarization, the measured reflection spectrum in Fig. 26b and 26d generally agrees with harmonic balance simulations of its extracted circuit model, as shown in Fig. S6 of Supplemental Materials VIII.

D. Measurements of a space-time modulated metasurface

As shown in Section III, various functions can be realized by spatio-temporally modulating the metasurface, including specular subharmonic frequency translation, deflective/retroreflective serrodyne frequency translation, and deflective/retroreflective subharmonic frequency translation. Measured results are given here as a validation of our analysis. Again, the incident frequency f 0 and modulation frequency f p are set to 10 GHz and 25 kHz, respectively. Unless stated otherwise, the reflection phase of each column (path) is a sawtooth function in time. It is optimized as described in Supplemental Material VII. In Section IV D 1, the spatial modulation period of the metasurface is set to be electrically small (N = 2, 3). Subharmonic frequency translation is demonstrated for specular reflection. In Section IV D 2, the spatial modulation period is electrically large. The number of paths is chosen to be N = 20. Simultaneous beam steering and frequency translation are demonstrated. In Section IV D 3, the spatial modulation period is on the order of a wavelength. The number of paths is set to N = 4, and retroreflective subharmonic frequency translation is demonstrated. In Section IV D 4, the spatial modulation period is still on the order of a wavelength (N = 4); however, a staggered sinusoidal reflection phase is used to demonstrate retroreflective frequency translation.
The measured conversion loss and sideband suppression for each of the examples are provided in Table II, which will be referred to throughout this section. Table II lists the measured conversion loss to the desired reflected frequency harmonic f and the sideband suppression, given: N — the number of paths, θ i — the incident angle, θ obs — the observation angle, and the temporal phase modulation waveform (either a sawtooth or a sinusoid). Note that positive values of θ i and θ obs correspond to waves traveling along the positive x direction.

Reflective (specular) subharmonic frequency translation

In this section, electrically small spatial modulation periods are considered (|k x ± β p | > k 0 ). The incident wave is chosen to impinge on the metasurface with an oblique angle of 25 • . The measured reflection spectra for 2-path (N = 2) and 3-path (N = 3) modulation schemes are given in Fig. 27. The reflection spectra are measured at a reflection angle of θ = 25 • (see Fig. 11). The measured spectra for both polarizations clearly demonstrate subharmonic frequency translation, where the only radiated harmonics are those reflected at frequencies f = f 0 + rN f p , where r ∈ Z. Doppler-like frequency translation is observed for both polarizations. The measured conversion loss and sideband suppression for both polarizations are shown as examples 1 and 2 in Table II. Compared to the homogenized, lossless metasurface presented in Section III A, the conversion loss and sideband suppression degrade more as the number of paths is increased. This is attributed to the evanescent harmonic pairs on the surface of the structure. This is discussed further in Supplemental Material VIII.

Measured deflective serrodyne frequency translation

In this section, the spatial modulation period is chosen to be electrically large (|k x ± β p | < k 0 ). The path number is set to N = 20. For the capacitance variation shown in Fig. 26a and Fig. 26c, the metasurface acts as a serrodyne frequency translator, and simultaneously deflects the wave to a different angle. When the incident angle is θ 1 = 25 • , the measured reflection spectra at the reflection angle θ 2 = 42 • are shown in Fig. 28. The spectra for both polarizations clearly show a Doppler shift to frequency f 0 + f p . The measured conversion loss and sideband suppression for both polarizations are provided in example 3 in Table II. Note that the reflection angles for harmonics with frequency f 0 and f 0 + 2f p are 25 • and 67 • respectively. We can see from Fig. 28 that a fraction of the reflected power from these two harmonics was still captured by the finite aperture of the receive antenna due to its relatively close placement. By simply interchanging the transmitting and receiving antennas, we can measure the reflection spectra for the case where the incident angle is θ 2 = −42 • . The measured reflected spectra at the reflection angle of θ 1 = −25 • are given in Fig. 29. Again, the spectra for both polarizations clearly show a Doppler shift to a frequency f 0 + f p . The measured conversion loss and sideband suppression for both polarizations are shown in example 4 in Table II. The harmonics with frequency f 0 and f 0 + 2f p are reflected at −42 • and 9.74 • respectively, which are also captured by the receiving antenna. Note that the reflected spectra in Fig. 28 and Fig. 29 are almost identical. This is due to the fact that the modulation frequency is far lower than the signal frequency.
Otherwise, the reflection angle would differ from θ 1 = −25 • , as discussed in Supplemental Material V.

Measured retroreflective subharmonic frequency translation

In this section, the spatial modulation period is on the order of the wavelength of radiation (|k x | + |β p | > k 0 and ||k x | − |β p || < k 0 ). The path number is chosen to be N = 4. As in Section III C, the retroreflective angle is chosen to be ±39 • . In the experiment, a 3 dB directional coupler (Omni-spectra 2030-6377-00) was attached to the antenna in order to measure the retroreflected spectra. Note that the modulation waveform on each column is optimized with the same procedure given in Supplemental Material VII, for an incident angle of 39 • . The measured retroreflection spectra at an oblique angle of 39 • are given in Fig. 30a and 30c for TE and TM polarization, respectively. As expected, frequency translation to f 0 + 3f p is observed for both polarizations. The measured conversion loss and sideband suppression for both polarizations are shown in example 5 in Table II. Note that, comparing Fig. 30a and 30c to Fig. 18a and 18c, only the harmonics shown as solid lines are captured by the antenna. For an incident angle of −39 • , the measured retroreflection spectra are shown in Fig. 30b and 30d for TE and TM polarization, respectively. As expected, the spectra for both polarizations show a Doppler shift to frequency f 0 + f p . The measured conversion loss and sideband suppression for both polarizations are shown in example 6 in Table II. Again, comparing Fig. 30b and 30d to Fig. 18b and 18d, only the harmonics shown as solid lines are captured by the antenna.

Measured retroreflective frequency translation with a sinusoidal reflection phase

In this section, the bias waveform on adjacent columns generates staggered sinusoidal reflection phases. The modulation waveform on each column is optimized with the same procedure given in Supplemental Material VII, for an incident angle of 39 • . Again, a wavelength-scale spatial modulation period (N = 4) is used. As in Section III C, the retroreflection angle is ±39 • . The measured retroreflection spectra at an oblique angle of 39 • are given in Fig. 31a and 31c. As expected, frequency translation to f 0 − f p is observed for both polarizations. The measured conversion loss and sideband suppression for both polarizations are provided in example 7 in Table II. The measured retroreflection spectra at an oblique angle of −39 • are given in Fig. 31b and 31d. Frequency translation to f 0 + f p is observed for both polarizations. The measured conversion loss and sideband suppression for both polarizations are provided in example 8 in Table II. Both of the two previous retroreflection cases used 4 paths per spatial modulation period for a retroreflection angle of 39 • (examples 5 and 7 in Table II). The only difference between the two cases was the time-dependence of the reflection phase (sawtooth versus sinusoidal). Thus, we have shown that by simply changing the temporal modulation waveform, the retroreflection frequency can be changed. In this case, it changed from f 0 − f p to f 0 + 3f p .

V. CONCLUSION

We reported a spatio-temporally modulated metasurface that can simultaneously control the reflected frequency and angular spectrum. A proof-of-principle metasurface was designed and fabricated at X-band frequencies. Additionally, a theoretical treatment of the spatio-temporally modulated metasurface was presented which accounts for the spatial discretization of the structure.
The theoretical treatment provides an accurate model of the metasurface as well as insight into the subharmonic frequency translation possible with subwavelength spatial modulation periods. Specifically, when the spatial modulation is electrically large, the metasurface exhibits serrodyne frequency translation, where the metasurface can upconvert or downconvert the incident frequency f 0 by the modulation frequency f p . Meanwhile, tuning the spatial modulation period allows the metasurface to steer the reflected beam, and even exhibit retroreflection. When the spatial modulation is electrically small, the metasurface exhibits sub-harmonic frequency translation. In this case, all the radiated harmonics are reflected in the specular direction. When the spatial modulation period is on the order of a wavelength, retroreflective subharmonic frequency translation can be achieved. The retroreflected wave carries a frequency that can be switched by changing the temporal modulation waveform. The designed metasurface provides a new level of reconfigurability. Multiple functions including beamsteering, retroreflection, serrodyne frequency translation, and subharmonic frequency translation can all be achieved with one ultra-thin (0.06λ) metasurface by appropriately tailoring the space-time modulation waveform. The designed metasurface can find various applications in next-generation communication, imaging and radar systems.

I Finding the time-modulated sheet capacitance

In this section, the relation between the reflection phase and the sheet capacitance will be derived. A homogenized model of the spatio-temporally modulated metasurface is shown in Fig. 5. It consists of a discretized, space-time modulated capacitive sheet over a grounded dielectric substrate. The spatial modulation period of the capacitive sheet is d, and its temporal modulation period is T p = 2π/ω p . Each spatial modulation period is discretized into N paths (unit cells of width d 0 ) over which the sheet capacitance is uniform. At the operating frequency of f 0 = ω 0 /2π = 10 GHz, the wave admittances in the substrate for TE and TM polarizations are as follows, where the subscripts s, 0 and z denote the substrate, the operating frequency ω 0 , and the z-component of the wavenumber, respectively. The relative permittivity of the substrate is ε h . Note that the substrate admittance looking down from z = 0 − is simply that of a ground plane translated by a distance l, which provides an inductive input reactance (see Fig. 2b), where "X" is either "E" for TE polarized waves or "M" for TM polarized waves. If there is no spatial variation, the entire capacitive sheet has the same time variation. The sheet capacitance C T X (t) is assumed to be a periodic function of time, where C T X 0 is a static capacitance designed to resonate with the inductive reactance given by (S.3) at frequency f 0 for each polarization; it is the value that cancels this reactance. Since (S.3) resonates with C T X 0 at frequency f 0 , the reflection phase φ(t) at the incident frequency is fully controlled by ∆C T X (t), where Y T X 00 is the tangential wave admittance in free space at radial frequency ω 0 for each polarization, k 0 is the free space wavenumber at radial frequency ω 0 , and k x is the tangential wavenumber of the incident wave. In this paper, the reflection phase of each column of the metasurface is either a sawtooth (φ(t) = ω p t) or a sinusoidal function (φ(t) = A 0 sin(ω p t)) in time, where ω p is the radial frequency of the modulation.
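The step from a target phase profile to a capacitance modulation can be illustrated with a small numerical sketch. Assuming the idealized lossless, resonant sheet model described above (the static capacitance cancels the substrate's inductive susceptance at f 0 , leaving an input admittance of jω 0 ∆C(t)), the reflection coefficient is Γ = (Y 00 − jω 0 ∆C)/(Y 00 + jω 0 ∆C), which inverts to ∆C(t) = −(Y 00 /ω 0 ) tan(φ(t)/2). This is a hedged reconstruction under those assumptions, not necessarily identical to the paper's Eq. (S.8), and the numbers below are placeholders.

```python
import numpy as np

# Hedged sketch: recover the capacitance perturbation dC(t) that produces a
# target reflection phase phi(t), under the lossless resonant-sheet assumption
#   dC(t) = -(Y00/w0) * tan(phi(t)/2).
c0, f0, fp = 3e8, 10e9, 25e3
w0, wp = 2 * np.pi * f0, 2 * np.pi * fp
Y00 = 1 / 376.73                              # normal-incidence free-space admittance (assumed)
t = (np.arange(8) + 0.5) / (8 * fp)           # sample times within one period
phi = np.mod(wp * t, 2 * np.pi) - np.pi       # sawtooth phase ramp in (-pi, pi)

dC = -(Y00 / w0) * np.tan(phi / 2)
gamma = (Y00 - 1j * w0 * dC) / (Y00 + 1j * w0 * dC)
print(np.allclose(np.angle(gamma), phi))      # True: the phase target is reproduced
print(dC * 1e12)                              # required capacitance swing in pF
```

The required ∆C grows rapidly as the target phase approaches ±π, which is consistent with the finite (slightly under 2π) phase range of the physical unit cell limiting the achievable modulation.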
From Eq. (S.6), the capacitance ∆C T X (t) can be found for a desired time-varying phase φ(t). The capacitance C T X (t) is a periodic function in time. Therefore, it can be expressed as a Fourier series (Eq. (S.10)), where the coefficient C T X q is given by Eq. (S.11). This Fourier representation of the capacitance is used to find the fields scattered from the metasurface: a time-modulated capacitive sheet over a grounded uniaxial dielectric substrate.

II Calculation of the reflected spectrum from the time-modulated metasurface

The total tangential fields above (z = 0 + ) and below (z = 0 − ) the time-modulated capacitive sheet take the form of Eqs. (S.12)-(S.14), where Y T X sq and k T X szq denote the tangential wave admittance and normal (z-directed) wavenumber in the substrate at the frequency ω q = ω 0 + qω p . If the magnitude of the voltage signal that modulates the varactors comprising the time-modulated capacitive sheet is much larger than the incident signal at f 0 , the sheet can be treated as a linear, time-varying capacitance. Therefore, the boundary condition [1] at z = 0 is given by Eq. (S.17). Inserting the Fourier series expansion for capacitance from Eq. (S.10) and the field expressions from Eqs. (S.12)-(S.14) into (S.17) yields the coupled harmonic equations. In matrix form, these can be written as Eq. (S.19), where I T X is a vector with the complex coefficients I q of the total tangential magnetic field above the metasurface, V T X is a vector with the coefficients V q of the total tangential electric field, and Y T X is the input admittance matrix of the metasurface (looking into the metasurface at z = 0 + ), whose entries follow from this expansion. The incident and reflected tangential fields above the metasurface are given by Eqs. (5)-(8). Further, the coefficients of the incident electric and magnetic fields, as well as the reflected electric and magnetic fields, are related by the free-space tangential admittance. For each polarization, the diagonal admittance matrix Y T X 0 contains the corresponding entries. From Eqs. (S.19) and (S.21), the reflected electric field can be calculated for each polarization using Eq. (9) in the main text.

III Finding the discretized, space-time modulated sheet capacitance

The space-time capacitance, C(t, x), of the capacitive sheet can be expanded as a 2-D Fourier series (Eq. (S.24)), where β p = 2π/d is the spatial modulation wavenumber and ω p = 2π/T p is the radial frequency (temporal modulation wavenumber) of the modulation. The coefficients C mq of the 2-D Fourier series can be calculated from Eq. (S.25). As noted in the main text, the capacitive sheet is assumed to be spatially invariant across a given path. According to Eq. (10), the capacitance modulation of a path is staggered in time by T p /N with respect to its adjacent path. Therefore, if there are N unit cells in one spatial modulation period d (N-path configuration), the sheet capacitance of a path can be expressed as a pulse function in space and a periodic function in time (see Eq. (S.10)). The spatio-temporally varying sheet capacitance can then be expressed as a sum over the paths. The capacitance of path v, C v (t, x), can also be expanded as a 2-D Fourier series, and this expansion can be used to derive the relationship between the Fourier coefficients of the sheet capacitance on adjacent paths given in Eq. (S.30). The Fourier coefficients of the overall capacitive sheet, C mq , given by Eq. (S.25), can be found by summing the capacitance over all the paths and employing Eq. (S.30); each path contributes an integral of the form ∫ 0 T p C(t, x) e jmβ p x e −jqω p t dt dx.
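A quick numerical experiment can make the staggering argument above concrete. The sketch below builds a capacitance map whose path-v waveform is the path-1 waveform delayed by vT p /N, takes a 2-D transform with the same sign convention as the coefficient integral above (spatial factor e jmβ p x , temporal factor e −jqω p t ), and checks that the only nonzero coefficients satisfy m = q + rN. The specific waveform used is an arbitrary placeholder, not the capacitance of the paper.

```python
import numpy as np

# Numerical check of the N-path selection rule: with path-v delayed by v*Tp/N,
# the 2-D Fourier coefficients C_mq vanish unless m - q is a multiple of N.
N, Nt, Nxc = 4, 64, 16                 # paths, time samples, spatial samples per path
Nx = N * Nxc
t = np.arange(Nt) / Nt
g = np.cos(2 * np.pi * t) + 0.3 * np.cos(4 * np.pi * t)     # placeholder waveform

C = np.zeros((Nt, Nx))
for v in range(N):
    gv = np.roll(g, v * Nt // N)       # delay by v*Tp/N
    C[:, v * Nxc:(v + 1) * Nxc] = gv[:, None]

# Spatial transform with e^{+j m beta_p x}, temporal with e^{-j q omega_p t}.
F = np.fft.fft(np.fft.ifft(C, axis=1), axis=0) / Nt
q_idx, m_idx = np.meshgrid(np.fft.fftfreq(Nt, 1 / Nt),
                           np.fft.fftfreq(Nx, 1 / Nx), indexing="ij")
nonzero = np.abs(F) > 1e-6 * np.abs(F).max()
print(np.all((m_idx[nonzero] - q_idx[nonzero]) % N == 0))   # True
```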
It is clear from Eq. (S.31) that the coefficient C mq is zero except when the indices satisfy Eq. (S.32). Therefore, given the staggered modulation of the paths (unit cells), the metasurface functions as an N-path system, and the indices m and q are related by Eq. (S.32). Inserting Eq. (S.32) into Eq. (S.24), the sheet capacitance can be rewritten in the form of Eq. (S.33), where the wavenumber β d = N β p = 2π/d 0 is an additional wavenumber resulting from the discretization of the spatial modulation. The summation over r accounts for the discontinuity in capacitance at the boundary of each path as well as the microscopic variation of capacitance within the paths (which in this case is constant). The summation over q accounts for the macroscopic capacitance variation over one spatial modulation period d. Given Eq. (S.33), the spatio-temporal coefficients of the capacitance variation are given by Eq. (S.35). From Eq. (S.36), we see that the capacitance C v (t, x) along a path is a separable function, where f v (t) is the temporal modulation of capacitance C T X (t) along path 1. In this paper, f v (t) generates either a sawtooth reflection phase in time (see Fig. 5a and 5c) or a sinusoidal reflection phase in time (see Fig. 20a and 20c). In addition, g v (x) is a function describing the spatial dependence of capacitance along path 1, which is assumed to be a pulse function. Inserting Eqs. (S.36)-(S.37) into Eq. (S.35), we obtain Eqs. (S.38) and (S.39), where C T X q are the temporal coefficients of the capacitance modulation for a single path, given by Eq. (S.11).

IV Calculation of the reflected spectrum from a discretized, space-time modulated metasurface

As a result of the symmetry introduced by the staggered modulation scheme, the tangential field above the metasurface satisfies the following N-path field relation [2]. This space-time field distribution on the surface can also be expressed in terms of a modified 2-D Floquet expansion (Eq. (S.42)), where the wavenumber β d = N β p = 2π/d 0 results from the discretization of the spatial modulation. The summation over r accounts for the microscopic field variation along each path (unit cell size of length d 0 = d/N ), while the summation over q accounts for the macroscopic field variation over one spatial modulation period d. Observing the field expression in Eq. (S.42), it can be concluded that the staggered modulation between paths impresses a tangential wavenumber of qβ p onto the q th harmonic. The total tangential magnetic field on the spatio-temporally modulated metasurface (see Fig. 3) can be expressed as Eq. (S.43), where Y T X srq and k T X srqz are the tangential wave admittance and normal wavenumber in the substrate for each spatio-temporal harmonic pair (r, q); the subscripts s, (r, q), and z denote the substrate, the harmonic pair, and the z-component of the wavenumber, respectively. We can separate the total tangential field into incident and reflected tangential fields, as given in Eqs. (12)-(15). The coefficients of the incident electric and magnetic field, as well as the reflected electric and magnetic field, are related by the free-space wave admittance. In order to simplify the calculation, each harmonic pair (r, q) is mapped to one harmonic index α = (r + M )(2M + 1) + q + 1 [3], as shown in Table S1. The harmonic mapping allows the tangential fields, given by Eqs. (S.42) and (S.43), to be represented as vectors V T X and I T X , where each contains (2M + 1) 2 entries: V α and I α , respectively. The boundary condition and free-space admittance given by Eqs. (S.45) and (S.48) can then be written in matrix form.
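The bookkeeping step of flattening the (r, q) grid into a single index can be sketched directly. Below, the stated formula α = (r + M)(2M + 1) + q + 1 is applied to all (2M + 1)² pairs and checked to be one-to-one; the exact offset convention in the paper's Table S1 may differ, but any bijection serves the same purpose of stacking the field coefficients into vectors.

```python
# Sketch of the harmonic-pair bookkeeping: map each (r, q), with r, q in
# {-M, ..., M}, to a scalar index alpha so field coefficients can be stacked
# into vectors of length (2M + 1)^2.
M = 2
pairs = [(r, q) for r in range(-M, M + 1) for q in range(-M, M + 1)]
alpha = {(r, q): (r + M) * (2 * M + 1) + q + 1 for r, q in pairs}

assert len(set(alpha.values())) == (2 * M + 1) ** 2        # mapping is one-to-one
inverse = {a: rq for rq, a in alpha.items()}               # recover (r, q) from alpha
print(alpha[(0, 1)], inverse[alpha[(0, 1)]])
```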
The size of the metasurface admittance matrix Y T X and the free-space tangential admittance matrix Y T X 0 is (2M + 1) 2 × (2M + 1) 2 . The reflected electric field can be calculated for each polarization using Eq. (9). The metasurface admittance matrix Y T X contains entries that couple the different harmonic pairs.

V Theoretical study of the spatio-temporally modulated metasurface for a high modulation frequency

In this section, we study the spatio-temporally modulated metasurface under a modulation frequency f p that is comparable to the incident frequency f 0 . The modulation frequency is chosen to be f p = 0.6 GHz and the incident frequency is f 0 = 10 GHz. The incident plane wave is obliquely incident on the metasurface. When the incident signal impinges on the metasurface at θ 2 = −39 • , the propagating reflected harmonics are those with q = 0, ±1, 2, 3, 4, 5, 6, 7, 8. The harmonic of interest (f 0 + f p ) travels with a reflection angle of θ 3 = −21 • . Note that the reflected angle θ 3 ≠ θ 1 [4]. The reflected spectrum for TE and TM polarization is given in Fig. S1c and Fig. S1d, respectively. For TE polarization, a 0.89 dB conversion loss and 7.51 dB sideband suppression are observed. For TM polarization, a 0.77 dB conversion loss and 8.23 dB sideband suppression are observed.

VI Reflection from a DC biased unit cell for an incident angle of 25 •

As described in the paper, the tunability of the metasurface is provided by surface-mounted MAVR-000120-1411 varactor diodes. In the full-wave simulations, the varactor diode is modeled as a lumped capacitance in series with a resistance. The capacitance and resistance values are extracted as a function of bias voltage from the varactor's SPICE model [5]. The simulated reflection coefficient of the unit cell for various capacitance values is shown in Fig. 24 for an incident angle of 25 • . In addition, the reflection coefficient of the fabricated metasurface is measured for various bias voltages for the same angle of incidence, and is shown in Fig. 25. Comparing the simulated and measured reflection coefficients, we noticed that the varactor capacitance versus bias voltage characteristic given by the SPICE model did not accurately match the experimental results. Therefore, the varactor capacitance versus experimental bias voltage characteristic was obtained by aligning the measured reflection phase to simulation. In addition, the measured reflection amplitude indicated that there was higher loss in measurement than in simulation. The additional loss can be introduced by a higher measured varactor resistance or by the tinning and soldering processes used to mount the varactor diodes. In order to conduct harmonic balance simulations (see Section VIII) and predict the reflection spectrum of the metasurface when modulated, a circuit representation of the fabricated metasurface was extracted for each polarization. The circuit models are extracted from the full-wave scattering simulations, with an added voltage-dependent resistance R T X to account for the additional loss observed in measurement. The voltage dependence of R T X is obtained by aligning the measured and simulated reflection amplitudes. The extracted circuit models are shown in Fig. S2. The values of the extracted circuit parameters are shown in Table S2. The varactor diode C d is modeled using the SPICE model for MAVR-000120-1411 varactors.
For each varactor capacitance, the corresponding bias voltages used in circuit simulation (given by the SPICE model) and measurement, as well as the additional resistances R T X , are given in Tables S3 and S4 for the TE and TM polarizations. The varactor characteristics and additional losses R T X are slightly different for the two polarizations. This is likely due to tolerances in the varactor capacitance and resistance values. The extracted circuits shown in Fig. S2 are simulated with the commercial circuit solver Keysight Advanced Design System (ADS). Comparisons between full-wave simulation, measurement, and circuit simulation are shown in Figs. S3 and S4 for various capacitance values. The reverse bias voltage values used in circuit simulation are given in Tables S3 and S4. The circuit simulations agree closely with full-wave simulations and measurements of the metasurface, confirming the accuracy of the circuit model shown in Fig. S2.

VII Calculating the optimized bias waveform

In order to achieve serrodyne frequency translation, a bias waveform is needed that generates a sawtooth reflection phase, which varies 2π radians over each modulation period. To obtain the bias waveform, the following procedure was followed. Using the extracted circuit models shown in Fig. S2, the reflection amplitude and phase were plotted versus bias voltage at 10 GHz, as shown in Fig. S5. The plots show that the bias voltage versus reflection phase curves follow a tangent function. Since the targeted sawtooth reflection phase is linear with respect to time over each modulation period, the bias waveform was assumed to follow a tangent function of time within each period. A harmonic representation of the optimized bias waveform (referred to as the simulated bias waveform) was then used in the harmonic balance simulation, which is detailed in the next section. In addition, a mapping between the bias voltage used in circuit simulation and in measurement for each varactor capacitance value was obtained from Tables S3 and S4. The experimental bias waveform was determined by applying this mapping to the optimized bias waveform. A sampled version (20 points per period) of the experimental bias waveform (referred to as the measured bias waveform) was applied to the metasurface through the D/A converter in measurement. The optimized and measured bias waveforms used in the measurement are shown in Fig. 26a and 26c for serrodyne frequency translation.

VIII Harmonic balance simulation of the extracted circuit model

If all the columns of the metasurface are biased with the same waveform, the metasurface's response can be predicted by performing a harmonic balance simulation of a single unit cell's extracted circuit model. Harmonic balance simulations of the circuit model shown in Fig. S2 were performed using Keysight ADS. The incident signal was set to an amplitude of −20 dBm at frequency f 0 = 10 GHz. The optimized waveforms V y bias and V x bias were calculated as described in the previous section. When the reflection phase is a sawtooth function in time, the simulated reflection spectra are given in Fig. S6b and S6d. The simulation results agree with the measurement results shown in Fig. 26. However, the measured results show higher conversion loss and lower sideband suppression. This can be attributed to the fact that the measured bias waveform is a coarsely sampled version of the optimized waveform. The sampling rate of the D/A converter used in experiment is 0.5 MHz. Therefore, only 20 samples per period could be taken of the 25 kHz modulation waveform.
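The waveform-generation step just described can be sketched as a simple table inversion: a target reflection phase ramp is mapped to a bias voltage by inverting a phase-versus-voltage curve, then sampled at the 20 points per period that the D/A converter can deliver. The phase-versus-voltage curve and phase range below are synthetic placeholders, not the measured characteristic of the fabricated metasurface, and the real procedure additionally optimizes the waveform against the non-unity reflection amplitude.

```python
import numpy as np

# Hedged sketch of Supplemental VII: invert a (placeholder) phase-vs-voltage
# curve to obtain a bias waveform that produces a linear phase ramp in time,
# then sample it at 20 points per modulation period for the D/A converter.
fp = 25e3
Tp = 1 / fp
V_grid = np.linspace(0, 15, 201)                         # bias range used in measurement
phase_of_V = np.interp(V_grid, [0, 5, 7, 9, 15],         # placeholder curve (deg),
                       [-180, -120, 0, 120, 150])        # monotonic in V

t = np.arange(20) / 20 * Tp                              # 20 samples per period
phi_target = np.interp(t, [0, Tp], [-180, 150])          # sawtooth ramp over the range
V_bias = np.interp(phi_target, phase_of_V, V_grid)       # invert phase -> voltage
print(np.round(V_bias, 2))
```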
For a reflection phase that is a sinusoidal function of time, the simulated reflection spectra are given in Fig. S7b and S7d. As mentioned in the paper, when the spatial modulation period is deeply subwavelength, the metasurface can be viewed as an N-path system. Subharmonic frequency translation is supported in this case, and the metasurface exhibits Doppler-like frequency translation to a high-order frequency harmonic. The metasurface can be represented using the N-path circuit model shown in Fig. 9, where there are N branches of time-varying circuits connected to a common port. Each path (column of the metasurface) is represented by the circuit model shown in Fig. S2. The bias waveform of each path is given in Fig. S6a and S6c, and is staggered in time by T p /N with respect to that of its adjacent path. The simulated reflection spectra for 2-path and 3-path configurations are shown in Fig. S8. Note that, as the path number N increases, the conversion loss increases as well. This is because the N-path metasurface upconverts the frequency to the first propagating harmonic pair. The higher the upconverted frequency, the more loss there is in the frequency conversion process. The simulated results agree with the measurement results shown in Fig. 27. However, the conversion loss of the measured results degrades more severely as the path number increases. This is due to the fact that when the metasurface is lossy, the evanescent harmonic pairs on the metasurface consume energy as well. Those harmonic pairs are not represented in the N-path circuit network, where the N branches of time-varying circuits are considered perfectly co-located.

(Partial figure caption: extracted circuit shown in Fig. S5a, bias waveform given in Fig. S6a; (c) optimized bias waveform for TM polarization; (d) reflection spectrum from harmonic balance simulation of the extracted circuit shown in Fig. S5b, with the bias waveform given in Fig. S6c.)
Spontaneous tumor lysis syndrome in a patient with advanced gastric adenocarcinoma: a case report

Tumor lysis syndrome (TLS) is an oncologic emergency that usually occurs after initial treatment of a malignant tumor. It manifests as hyperuricaemia, hyperkalaemia, hyperphosphataemia and hypocalcaemia, ultimately resulting in acute kidney failure, seizures, cardiac arrhythmias, and even death. Here, we report a very rare case of spontaneous TLS in a patient with advanced gastric adenocarcinoma who eventually succumbed to renal failure. Extra vigilance towards electrolyte imbalances should be given during initiation of therapy in cases of large gastric cancer with severe distant metastasis. Risk assessment prior to surgery, early diagnosis and comprehensive treatment strategies are vital in improving the prognosis of gastric cancer patients with TLS. Urgent hemodialysis should be implemented as soon as possible in order to prevent further renal deterioration.

Introduction

Tumor lysis syndrome (TLS) is an oncologic emergency that usually occurs after the initial treatment of a malignant tumor. Chemotherapy often causes lysis of large quantities of tumor cells, leading to the release of intracellular contents into the blood, which causes hyperuricaemia, hyperkalaemia, hyperphosphataemia and hypocalcaemia. These electrolyte and metabolic abnormalities ultimately result in symptoms such as acute kidney failure, seizures, cardiac arrhythmias, and even sudden death (1-3). TLS commonly occurs in hematologic cancers in view of their high cell turnover, rapid proliferation rates and increased chemosensitivity (4). While the incidence of TLS has been increasingly reported in solid tumors (5-9), TLS in gastric cancer is uncommon (10). TLS usually occurs within a week of initiation of cytotoxic therapy and may occur spontaneously in certain circumstances (11-13). Here, we report a rare case of spontaneous TLS in a patient with advanced gastric cancer.

Case presentation

A 62-year-old gentleman was admitted to the Second Affiliated Hospital of Zhejiang University, School of Medicine, with a 20-day history of worsening weakness and poor appetite. This was associated with discomfort of the right flank, hard stools and a loss of weight of 5 kg over one month. No nausea, vomiting or fever were reported. The patient had no history of renal disease. Physical examination on admission revealed a BMI of 24.2 (a height of 172 cm and a weight of 71.6 kg). Left cervical lymph nodes were not palpable. Mild tenderness was present in the epigastric region. Gastroscopy showed a huge annular irregular mass on the wall of the lower two thirds of the gastric body with peripheral mucosal edema (Figure 1). Biopsy confirmed the presence of a low-grade adenocarcinoma. Abdominal enhanced CT scan revealed extensive thickening (9 cm) of the gastric wall with serosal invasion, infiltration of the left lower ureter, multiple enlarged perigastric lymph nodes (lymph nodes No. 2, 4, 5, 6), suspicious metastatic nodes in the greater omentum, and a small amount of pelvic effusion (Figure 2). Furthermore, an 18 F-fluorodeoxyglucose (FDG) PET/CT scan showed diffuse enhancement in the stomach (SUVmax = 6.72) and perigastric lymph nodes (SUVmax = 4.22), a blurry omentum and a small amount of pelvic effusion (Figure 2). Laboratory findings at admission are as follows: white blood cells: 13.
Laparoscopic exploration was performed on the ninth day of admission, revealing numerous white nodules in the peritoneum, ligamentum teres hepatis, and pelvic cavity (Figure 3). Intraoperative frozen section of one nodule revealed metastatic low-grade adenocarcinoma. Numerous free tumor cells were observed upon microscopic inspection (Figure 4). An intraabdominal catheter was inserted in preparation for subsequent intraperitoneal chemotherapy. Surprisingly, the patient's renal function deteriorated immediately after surgery, as evidenced by a serum creatinine level of 746 μmol/L. On the nineteenth day of admission, bilateral ureteral stents were inserted in order to alleviate ureteral obstruction. Subsequently, serum creatinine declined to 150 μmol/L. Nevertheless, this decrease was short-lived, as serum creatinine began to rise gradually again. Potassium and phosphorus levels were slightly increased, while calcium levels remained low during the entire course of treatment. Inflammatory biomarkers such as white blood cells and C-reactive protein (CRP) were relatively high despite anti-infection therapy (Table 1, Figure 5). The patient was diagnosed as having TLS during a multi-disciplinary team (MDT) discussion of the case. Subsequently, the patient was treated with volume expansion, diuretics, sodium bicarbonate, and anti-infective therapy, with the aim of correcting his electrolyte imbalances. Despite these interventions, serum creatinine continued rising, reaching a peak of 1,005 μmol/L. The patient refused hemodialysis and eventually succumbed to renal failure a month after his initial surgery.

Discussion

TLS is frequently reported during initiation of therapy against malignant tumors. It comprises a series of metabolic abnormalities caused by massive release of intracellular contents into the blood that exceeds the capacity of renal clearance. It is manifested by hyperuricaemia, hyperkalaemia, hyperphosphataemia and hypocalcaemia (1-3). While there is currently no universal definition of TLS, the most accepted TLS classification was proposed by Cairo and Bishop, modified from the work of Hande and Garrow (14). Based on the Cairo and Bishop classification, TLS is stratified into laboratory TLS (L-TLS) and clinical TLS (C-TLS). L-TLS is defined as changes in any two or more measurable values, including uric acid of 8 mg/dL or greater, potassium of 6 mmol/L or greater, phosphorus of 2.1 mmol/L or greater, calcium of 1.75 mmol/L or less, or a 25% change from baseline in any of these electrolytes. C-TLS is defined as the presence of L-TLS and at least one clinical manifestation including renal insufficiency, arrhythmia, seizure or sudden death (15). The patient in our case suffered from hyperuricaemia (8.74 mg/dL) and hyperphosphataemia (2.20 mmol/L) at baseline, and experienced a decrease in calcium of almost 25% from baseline as well as renal insufficiency (serum creatinine 1,005 μmol/L). These findings fulfill the criteria of both L-TLS and C-TLS. TLS is usually associated with haematological malignancies in view of the large cell turnover, higher proliferation rates and increased chemosensitivity of these cells (4,14,16,17). Despite the increasing reports of TLS in solid tumors, the incidence of TLS in gastric adenocarcinoma is very rare (10). To the best of our knowledge, there have been only 5 reports of TLS in gastric adenocarcinoma (10,18-21). Three of them were chemotherapy-induced, while the other two were spontaneous.
However, the patient in one of the case reports of spontaneous TLS was noted to have received chemotherapy 2 months prior to the development of TLS, while the other case report of a patient with spontaneous TLS was diagnosed at time of presentation. While the risk factors for the development of spontaneous TLS has yet to be identified, previous studies have revealed several risk factors that may be associated with spontaneous TLS in solid tumor, such as tumor extension, a large initial tumor burden, bulky tumors, extensive metastasis, extrinsic compression of the genitourinary tract by the tumor, tumor cells with high proliferative rate, and abnormal pretreatment laboratory findings such as elevated LDH, serum creatinine, and uric acid. Patient-related factors such as preexisting nephropathy, hypotension, and obstructive uropathy are also risk factors for developing TLS (10). In this report, the patient was diagnosed with metastatic gastric cancer, an indication of a large tumor burden. Furthermore, this patient was found to have elevated LDH, serum creatinine, and uric acid levels at the point of presentation. Ureteral obstruction was also present in this patient. All these factors placed this patient at a high risk of developing TLS. Additionally, surgical procedures and severe infection may have further increased the risk of TLS in this patient as operative procedures have the potential to trigger tumor cell death or to stimulate tumor cell growth. The metabolic abnormalities in TLS, if untreated, will lead to life-threatening complications such as acute kidney (10). Early recognition and prevention are especially crucial for patients at high risk. Treatment of TLS focuses on correction of electrolyte disturbances and preservation of renal function. A typical TLS treatment regimen involves volume expansion, urinary alkalinisation, allopurinol, rasburicase, or even dialytic modalities if necessary (1,22). Our patient received volume expansion, urinary alkalinisation and correction of electrolyte disturbances as soon as he was diagnosed as TLS. Nevertheless, his renal function continued to deteriorate, and without the initiation of lifesaving haemodialysis, this patient eventually succumbed to his disease. Acknowledgments We acknowledge Liang Han for his revision of English grammar. We acknowledge Lin Chen for providing PET-CT imaging. Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee(s) and with the Helsinki Declaration (as revised in 2013). Written informed consent was obtained from the relatives of the patient for publication of this case report and any accompanying images. Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the noncommercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
2019-09-09T18:39:18.004Z
2019-08-01T00:00:00.000
{ "year": 2019, "sha1": "ca9fda6e7ff1bc6f4f9d4856bb76d86f38bc87a5", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.21037/tcr.2019.07.53", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5463aab3672e41445e4545bebfaa1cb9a238585f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236451424
pes2o/s2orc
v3-fos-license
Assocation Between Leber's Hereditary Optic Neuropathy and MT-ND1 3460G>A Mutation-Induced Alterations in Mitochondrial Function, Apoptosis, and Mitophagy Purpose To investigate the molecular mechanism underlying the Leber's hereditary optic neuropathy (LHON)-linked MT-ND1 3460G>A mutation. Methods Cybrid cell models were generated by fusing mitochondrial DNA-less ρ0 cells with enucleated cells from a patient carrying the m.3460G>A mutation and a control subject. The impact of m.3460G>A mutations on oxidative phosphorylation was evaluated using Blue Native gel electrophoresis, and measurements of oxygen consumption were made with an extracellular flux analyzer. Assessment of reactive oxygen species (ROS) production in cell lines was performed by flow cytometry with MitoSOX Red reagent. Assays for apoptosis and mitophagy were undertaken via immunofluorescence analysis. Results Nineteen Chinese Han pedigrees bearing the m.3460G>A mutation exhibited variable penetrance and expression of LHON. The m.3460G>A mutation altered the structure and function of MT-ND1, as evidenced by reduced MT-ND1 levels in mutant cybrids bearing the mutation. The instability of mutated MT-ND1 manifested as defects in the assembly and activity of complex I, respiratory deficiency, diminished mitochondrial adenosine triphosphate production, and decreased membrane potential, in addition to increased production of mitochondrial ROS in the mutant cybrids carrying the m.3460G>A mutation. The m.3460G>A mutation mediated apoptosis, as evidenced by the elevated release of cytochrome c into the cytosol and increasing levels of the apoptotic-associated proteins BAK, BAX, and PARP, as well as cleaved caspases 3, 7, and 9, in the mutant cybrids. The cybrids bearing the m.3460G>A mutation exhibited reduced levels of autophagy protein light chain 3, accumulation of autophagic substrate P62, and impaired PTEN-induced kinase 1/parkin-dependent mitophagy. Conclusions Our findings highlight the critical role of m.3460G>A mutation in the pathogenesis of LHON, manifested by mitochondrial dysfunction and alterations in apoptosis and mitophagy. The m.3460G>A mutation changed a highly conserved alanine at position 52 with threonine (A52T) in MT-ND1, the essential subunit of complex I. [21][22][23][24][25] Therefore, we hypothesized that the m.3460G>A mutation may alter both the structure and function of complex I, thereby causing mitochondrial dysfunction. Recently, we identified 19 unrelated Han Chinese families bearing the m.3460G>A mutation among a large cohort of 1793 genetically unrelated Han Chinese subjects. 14,26,27 The occurrence of the m.3460G>A mutation in the genetically unrelated families of different ethnic backgrounds strongly supports the notion that this mutation is involved in the pathogenesis of LHON. 8,[23][24][25][26][27] In the present study, we performed clinical evaluations and Sanger sequence analyses of the mtDNA of these Chinese families bearing the m.3460G>A mutation. The functional significance of the m.3460G>A mutation was further investigated through the use of cybrid cell lines constructed by transferring mitochondria from lymphoblastoid cell lines derived from an affected matrilineal relative carrying the m.3460G>A mutation and from a control subject belonging to the same mtDNA haplogroup into human mtDNA-less (ρ 0 ) cells. 
28,29 Using western blot and Blue Native polyacrylamide gel electrophoresis (BN-PAGE) analyses, we examined whether or not the m.3460G>A mutation influenced the stability of MT-ND1 and the assembly and activity of complex I. These cell lines were then assessed for effects of the mtDNA mutations on the enzymatic activities of respiratory chain complexes, the rate of O 2 consumption, mitochondrial ATP production, mitochondrial membrane potential, and generation of ROS. These cell lines were further evaluated for the effect of these mtDNA mutations on apoptotic state and mitophagy. Families and Subjects A total of 19 unrelated Han Chinese families bearing the m.3460G>A mutation were recruited from eye clinics across China, as described previously. 14,20,27 This study was in compliance with the tenets of the Declaration of Helsinki. Informed consent, blood samples, and clinical evaluations were obtained from all participating family members under protocols approved by the ethics committees of Zhejiang University School of Medicine and Wenzhou Medical University. Comprehensive histories were obtained and physical examinations were performed for these participating subjects to identify personal or family medical histories of visual impairment or other clinical abnormalities. The ophthalmic examinations of probands and other members of these families were conducted as detailed previously. 30 Sanger Sequence Analysis Genomic DNA was isolated from whole blood of family members and control subjects using the QIAamp DNA Blood Mini Kit (51104; QIAGEN, Hilden, Germany). The entire mitochondrial genomes of family members were PCR amplified in 24 overlapping fragments using sets of light-strand and heavy-strand oligonucleotide primers; they were subsequently analyzed by direct sequencing as described elsewhere. 32 These sequence results were compared with the updated Cambridge Reference Sequence (GenBank accession number: NC_012920). 33 Cell Cultures and Culture Conditions Lymphoblastoid cell lines were immortalized by transformation with the Epstein-Barr virus, as described elsewhere. 34 Cell lines derived from the affected matrilineal relative (WZ83-III1) carrying the m.3460G>A mutation and one ageand sex-matched control subject (A70) lacking the mutation but belonging to the same mtDNA haplogroup D4 (Supplementary Fig. S2) were grown in the RPMI 1640 medium (Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 10% fetal bovine serum (FBS). The 143B.TKcell line was grown in Dulbecco's modified Eagle's medium (containing 4.5 mg/mL glucose and 0.11 mg/mL pyruvate), supplemented with 100 μg/mL bromodeoxyuridine (BrdU) and 5% FBS. The mtDNA-less ρ 0 206 cell line, derived from 143B.TK -, was grown under the same conditions as the parental line, except for the addition of 50 μg/mL uridine. 28 Transformation by cytoplasts of mtDNA-less ρ 0 206 cells was performed using immortalized lymphoblastoid cell lines, as detailed previously. 28,29 The cybrids derived from each donor cell line were analyzed for the presence and level of the m.3460G>A mutation and mtDNA copy numbers as detailed elsewhere. 11,27 Three cybrids derived from each donor cell line with homoplasmy of mtDNA mutations and similar mtDNA copy numbers were used for the following biochemical characterization. All cybrid cell lines were maintained in the same medium as the 143B.TKcell line. Quantification of the density in each band was performed as detailed previously. 
29 BN-PAGE and In-Gel Activity Assays BN-PAGE and in-gel activity assays were performed using mitochondrial proteins isolated from mutant and control cybrid cell lines, as detailed elsewhere. 35,36 Samples containing 30 μg of mitochondrial proteins were separated on 3% to 11% Bis-Tris NativePAGE gel (Thermo Fisher Scientific). The primary antibody used for this experiment was the total human OXPHOS antibody cocktail (Abcam), with voltage-dependent anion channel as a loading control. Alkaline phosphatase-labeled goat anti-mouse IgG and goat anti-rabbit IgG (Beyotime, Jiangsu, China) were used as secondary antibodies, and protein signals were detected using the BCIP/NBT Alkaline Phosphatase Color Development Kit (Beyotime). The in-gel activity assay was performed as detailed elsewhere. 37 Briefly, the NativePAGE gels were prewashed in ice-cold water and then incubated with the substrates of complex I (1-mM Tris-HCl, pH 7.4; 1 mg/mL nitroblue tetrazolium [NBT]; 0.1 mg/mL NADH), complex II (84-mM sodium succinate; 50-mM phosphate buffer, pH 7.4; 2 mg/mL NBT; 0.2-mM phenazine methosulfate) at room temperature (RT), and complex V (35-mM Tris-HCl, pH 7.4; 14-mM magnesium sulfate; 270-mM glycine; 10-mM ATP; 0.2% lead nitrate). The gels were then incubated at 37°C overnight. After the reaction was stopped with 10% acetic acid, the gels were washed extensively in water and scanned to visualize the activities of the respiratory chain complexes. ATP Measurements The CellTiter-Glo Luminescent Cell Viability Assay (Promega, Madison, WI, USA) was used to measure cellular and mitochondrial ATP levels, modifying the manufacturer's instructions. 29 Briefly, the assay buffer and substrate were equilibrated at RT and transferred to and gently mixed with the substrate to obtain a homogeneous solution. After a 30-minute equilibration of the cell plate at RT, 100 μL of the assay reagents was added to each well with 2 × 10 4 cells and the content was mixed for 2 minutes on an orbital shaker to induce cell lysis. After a 10-minute incubation at RT, the luminescence was read on a microplate reader (Synergy H1; BioTek, Winooski, VT, USA). Assessment of Mitochondrial Membrane Potential Mitochondrial membrane potential was measured with a JC-10 Mitochondrial Membrane Potential Assay Kit (Abcam), generally following the manufacturer's recommendations with some modifications, as detailed elsewhere. 39,40 In brief, ∼2 × 10 6 cells of each cybrid cell line were harvested, resuspended in 200 μL 1× JC-10 assay buffer, and then incubated at 37°C for 30 minutes. Alternatively, harvested cells were preincubated with 10 μM of FCCP for 30 minutes at 37°C prior to staining with JC-10 dye. After they were washed twice with PBS, the cells were resuspended in 200 μL PBS. The fluorescent intensities for both J-aggregates and monomeric forms of JC-10 were measured at excitation (Ex)/emission (Em) = 490/530 and 490/590 nm with a BD Biosciences LSR II flow cytometer (Becton, Dickinson and Company, Franklin Lakes, NJ, USA). Measurement of ROS Production The levels of mitochondrial ROS generation were determined using the MitoSOX assay (Thermo Fisher Scientific) as detailed previously. 41,42 Briefly, approximate 2 × 10 6 cells of each cell line were harvested, resuspended in PBS supplemented with 5 μM of MitoSOX, and then incubated at 37°C for 20 minutes. After they were washed twice with PBS, the cells were resuspended in PBS in the presence of 0.8-mM freshly prepared H 2 O 2 and 2% FBS and then incubated at RT for another 45 minutes. 
Cells were further washed with PBS and resuspended with 1 mL of PBS with 0.5% paraformaldehyde. Samples with or without H 2 O 2 stimulation were analyzed using the BD Biosciences LSR II flow cytometer, with excitation at 488 nm and emission at 529 nm. In each sample, 10,000 events were analyzed. Tunel Assay The TUNEL assay was carried out using the In Situ Cell Death Detection Kit (Sigma-Aldrich) according to the manufacturer's protocol. Briefly, cybrid cells were seeded at a density of 2 × 10 4 cells per well into a V-bottomed 96well microplate overnight. After they were washed twice with PBS, the cells were incubated in the presence of 0.8-mM freshly prepared H 2 O 2 at RT for 30 minutes, fixed with freshly prepared 4% paraformaldehyde in PBS for 60 minutes at RT, permeabilized with 0.1% Triton X-100 (Sigma-Aldrich) for 2 minutes on ice, and incubated in the TUNEL reaction mixture for 60 minutes at 37°C. Samples were analyzed under a fluorescence microscope (DMi8; Leica Microsystems, Wetzlar, Germany). Statistical Analysis Statistical analysis was carried out using Prism 8.0 (Graph-Pad Software, San Diego, CA, USA) and was conducted on data from three or more biologically independent experimental replicates. Comparisons between groups were planned before statistical testing, and target effect sizes were not predetermined. In all graphs, error bars displayed on graphs represent means ± SEM of at least three independent experiments. Statistical analysis included two-tailed Student's t-test (with 95% confidence interval) for two groups or ordinary one-way analysis of variance (ANOVA) with Dunnett's multiple comparison test for three or more groups. Differences were considered statistically significant at P < 0.05. Clinical Genetic Evaluation of 19 Chinese Families Carrying the m.3460G>A Mutation Nineteen Han Chinese pedigrees bearing the m.3460 G>A mutation were identified in a large cohort of 1793 Chinese probands with LHON. 14,20,27 All available members of these families underwent comprehensive physical and ophthalmo-logic examinations to identify personal or family medical histories of visual impairments and other clinical abnormalities. As shown in Supplemental Table S1 and Supplementary Figure S1, 42 (28 males/14 females) of 247 matrilineal relatives exhibited variable penetrance and expression of optic neuropathy among and within families. In particular, the severity of visual loss ranged from profound visual loss to normal vision. The age at onset of optic neuropathy of matrilineal relatives bearing the m.3460G>A mutation ranged from 5 to 29 years, with an average of 18.9 years. As shown in Supplementary Table S2, the penetrance of optical neuropathy among these pedigrees varied from 2.8% to 50%, with an average of 20.3%. Furthermore, all affected matrilineal relatives of these families revealed no other clinical abnormalities, including hearing loss, diabetes, and neurological diseases. Sanger sequence analysis of whole mtDNAs among these Chinese pedigrees revealed the presence of the m.3460G>A mutation and distinct sets of mtDNA polymorphisms, including 269 known variants, as shown in Supplementary Table S3. As shown in Supplementary Figure S2 and Supplementary Table S4, the mtDNAs from 19 pedigrees resided at mtDNA haplogroups A, B5, C4a1, D, F, M12, M7, M8a2, R9, and H2. 43 These mtDNA variants included 76 in the D-loop region, 11 in the 12S rRNA gene, 10 in the 16S rRNA gene, Supplementary Table S3. 
These variants were further assessed for their presence in 485 control subjects and potential structural and functional alterations. The lack of potential structural and functional alterations indicates that these mtDNA variants may not play an important role in the phenotypic manifestation of the m.3460G>A mutation. Effect of the m.3460G>A Mutation on the Stability of MT-ND1 We evaluated the effect of the m.3460G>A (A52T) mutation on the structure and function of MT-ND1. Based on the cryo-electron microscopy (cryo-EM) structure of mammalian complex I (PDB entry: 5XTD and 6G2J), 22,44 A52 forms hydrophobic interactions with P48, L55, and 56F in MT-ND1 (Fig. 1A). Hence, the substitution of non-polar hydrophobic alanine at position 52 with the hydrophilic threonine due to m.3460G>A mutation in MT-ND1 may destabilize these interactions inside MT-ND1, thereby perturbing the structure and stability of MT-ND1. To test this hypothesis, we analyzed the levels of MT-ND1and MT-ND4 by western blot in these mutant and control cell lines. As shown in Figures 1B and 1C, the levels of MT-ND1 in three mutant cell lines ranged from 45.7% to 50.6%, with an average of 47.4% (P = 0.0010), relative to the mean value measured in three control cell lines, whereas the levels of MT-ND4 in the mutant cybrids were comparable with those in control cybrids. Complex I of human and mice is composed of 45 subunits, including seven subunits encoded by mtDNA and 38 subunits encoded by nuclear genes. 21 In fact, MT-ND1 interacts with NDUFA1, NDUFA3, NDUFA8, and NDUFA13, whereas MT-ND4 interacts with MT-ND5, NDFS2, NDUFB1, NDUFB4, DDUFB5, NDUFB8, and NDUFB11. 22,44 To examine whether m.3460G>A mutation affects the expression of other subunits of complex I, we measured the levels of NDUFA3, NDUFA8, and NDUFA13 by western blot analysis among mutant and control cell lines. As shown in Figure 1B, the levels of these subunits in mutant cybrids were comparable to those in control cybrids. To analyze whether the m.3460G>A mutation affects mitochondrial protein homeostasis, we measured the levels of the caseinolytic mitochondrial matrix peptidase proteolytic subunit (CLPP) involved in mitoribosome assembly 45 and AFG3 like matrix AAA protease stress 46 in the various cell lines. As shown in Figure 1B, there were no significant differences in the levels of AFG3L2 and CLPP between the mutant and control cybrids. These data indicate that the m.3460G>A mutation may not affect proteostasis stress. Perturbed the Assembly and Activity of Complex I To determine whether or not the mutated MT-ND1 affects the assembly of complex I, mitochondrial membrane proteins isolated from mutant and control cell lines were separated (B) Graphs show the ATP-linked OCR, proton leak OCR, maximal OCR, reserve capacity, and non-mitochondrial OCR in the mutant and control cell lines. The non-mitochondrial OCR was determined as the OCR after rotenone/antimycin A treatment. The basal OCR was determined as the OCR before oligomycin minus the OCR after rotenone/antimycin A treatment. The ATP-linked OCR was determined as the OCR before oligomycin minus the OCR after oligomycin. The proton leak OCR was determined as the basal OCR minus the ATP-linked OCR. The maximal OCR was determined as the OCR after FCCP minus the non-mitochondrial OCR. Reserve capacity was defined as the difference between the maximal OCR after FCCP minus the basal OCR. The average values of four independent experiments for each cell line are shown. 
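The OCR components quoted in the legend text above are simple differences of the measured rates. A minimal sketch of that arithmetic is given below; the function name and the numerical readings are hypothetical and only illustrate the stated definitions, they are not the study's data.

```python
# Minimal sketch of the OCR decomposition described above, using purely
# hypothetical Seahorse readings (pmol O2/min), not the study's data.
def ocr_components(pre_oligomycin, post_oligomycin, post_fccp, post_rotenone_aa):
    non_mito = post_rotenone_aa                    # OCR remaining after rotenone/antimycin A
    basal = pre_oligomycin - non_mito              # basal OCR
    atp_linked = pre_oligomycin - post_oligomycin  # OCR blocked by oligomycin
    proton_leak = basal - atp_linked               # basal minus ATP-linked
    maximal = post_fccp - non_mito                 # FCCP-uncoupled minus non-mitochondrial
    reserve = maximal - basal                      # spare respiratory capacity
    return {"basal": basal, "ATP-linked": atp_linked, "proton leak": proton_leak,
            "maximal": maximal, "reserve": reserve, "non-mitochondrial": non_mito}

print(ocr_components(pre_oligomycin=120.0, post_oligomycin=45.0,
                     post_fccp=200.0, post_rotenone_aa=20.0))
```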
The horizontal dashed lines represent the average value for each group. Graph details and symbols are explained in Figure 1. by BN-PAGE, electroblotting, and hybridizing, with NDUFA9, SDHA, UQCRC2, and CO2 (subunits of complexes I, II, III, and IV, respectively) and ATP5A (subunit of complex V) as loading controls. As shown in Figures 2A and 2B, the levels of complex I in the mutant cybrids were 67.2% relative to the average values in the control cybrids (P = 0.0267), whereas the levels of complexes II, III, and IV in the mutant cybrids were comparable to those in the control cybrids. We analyzed the potential consequence of m.3460G>A mutation on the stability and activity of complex I using the in-gel activity assay. Mitochondrial membrane proteins isolated from mutant and control cell lines were separated by BN-PAGE and stained with specific substrates of complexes I, II, and V (complex V as a loading control). 36,37 As illustrated in Figures 2C and 2D, the activities of complex I in the mutant cell lines were 78.2% (P = 0.0023) relative to the average values of control cell lines. In contrast, the average in-gel activities of complex II in the mutants were comparable to those of the controls. Respiration Deficiency To further elucidate whether the m.3460G>A mutation affects cellular bioenergetics, we examined the OCRs of various mutant and control cybrid cell lines with the Seahorse XF96 extracellular flux analyzer. 38 As shown in Figure 3, the basal OCR in the mutant cybrids carrying only m.3460G>A was 33.1% (P = 0.0098), relative to the mean value measured in the control cybrids. To assess which of the enzyme complexes in the mitochondrial respiratory chain was perturbed in the mutant cell lines, the OCR was measured after the sequential addition of oligomycin (to inhibit ATP synthase), FCCP (to uncouple the inner mitochondrial membrane and allow for maximum electron flux through the electron transport chain), rotenone (to inhibit complex I), and antimycin (to inhibit complex III). The difference between the basal OCR and the drug-insensitive OCR yielded the ATP-linked OCR, proton leak OCR, maximal OCR, reserve capacity, and nonmitochondrial OCR. As shown in Figure 3, the basal OCR, ATP-linked OCR, proton leak OCR, maximal OCR, reserve capacity, and nonmitochondrial OCR in mutant cell lines carrying the m.3460G>A mutation were 33.1%, 27.8%, 82.2%, 52.9%, 74.5%, and 110.9%, respectively, relative to the mean values measured in the control cybrids. Impaired Mitochondrial ATP Synthesis The capacity of oxidative phosphorylation in mutant and control cybrids was examined by measuring the levels of cellular ATP using a luciferin/luciferase assay. Populations of cells were incubated in media in the presence of glucose (total cellular ATP), and 2-deoxy-D-glucose (2-DG) with pyruvate to inhibit glycolysis (oxidative phosphorylation). 29 As shown in Figure 4, the levels of mitochondrial ATP production in the mutant cybrids ranged from 73.3% to 87.5%, with an average of 81.1% relative to the mean values measured in the control cybrids (P = 0.0114), whereas there were no differences in the levels of total cellular ATP between mutant and control cybrids. Decrease in Mitochondrial Membrane Potential The JC-10 Mitochondrial Membrane Potential Assay Kit was used to measure the mitochondrial membrane potential ( m) in three mutant and three control cybrids. 39,40 The ratios of fluorescence intensities Ex/Em = 490/590 and 490/525 nm (FL590/FL525) were recorded to determine the m of each sample. 
The geometric means of the relative ratios of FL590/FL525 between mutant and control cybrids were calculated to represent the m levels, as described elsewhere. 39 As shown in Figure 5, the levels of m in three mutant cybrids carrying the m.3460G>A mutation were decreased, ranging from 28.9% to 35.7%, with a mean value of 31.5% (P = 0.0006) measured in three control cybrids. By contrast, the levels of m in mutant cybrids in the presence of FCCP were comparable to those measured in the control cybrids. ROS Production Increase The levels of ROS generation in the vital cells in three mutant cybrids carrying the m.3460G>A mutation and three control cybrids lacking this mutation were measured using the MitoSOX assay via flow cytometry under normal condi- tions and H 2 O 2 stimulation. 41,42 Geometric mean intensity was recorded to measure the rate of ROS in each sample. The ratio of geometric mean intensities between unstimulated and stimulated cell lines with H 2 O 2 was calculated to determine the reaction to increasing levels of ROS under oxidative stress. As shown in Figures 6A and 6B, the levels of mitochondrial ROS in the three mutant cybrids varied from 123.5% to 135.2%, with an average of 129.3% (P = 0.0040) of the mean value measured in three control cybrids. To determine whether the m.3460G>A mutation affects antioxidant systems, we used western blot analysis to measure the levels of three antioxidant enzymes in the mutant and control cybrids: SOD2 in the mitochondrial matrix and SOD1 and catalase in the cytosol and mitochondrial intermembrane space. 47 As shown in Figure 6C, the mutant cell lines revealed marked increases in the levels of SOD1 and catalase and mild increases in the levels of SOD2, as compared with those in the control cell lines. Promoting Intrinsic Apoptosis Deficient activities of oxidative phosphorylation have been linked to protecting against certain apoptotic stimuli. 11,18 To evaluate whether or not the m.3460G>A mutation affects the apoptotic process, we performed TUNEL assays to examine cell death in the mutant (III1-1) and control (A70-1) cybrids with or without H 2 O 2 stimulation. As shown in Figure 7A, cell death in the mutant cybrids (III1-1) increased as compared with the control cybrids (A70-1). Without H 2 O 2 stimulation, there were 36 cell deaths out of 3147 total cells (n = 10 sides) in the control cybrids (A70-1) and 43 cell deaths out of 2686 total cells (n = 10 sides) in the mutant cybrids (III-1). With H 2 O 2 stimulation, the mutant cybrids (III1-1) had 243 cell deaths out of 2724 total cells (n = 10 sides), as compared with 81 cell deaths out of 2721 total cells (n = 10 slides) in the control cybrids (A70-1). We then examined the apoptotic state of the cybrids by using immunocytostaining assays to determine the immunofluorescence pattern of cybrids that were double labeled with rabbit monoclonal antibody specific for cytochrome c and mouse monoclonal antibody to TOM20, a nuclear-encoded mitochondrial protein. As shown in Figure 7B, the mutant cybrids carrying the m.3460G>A mutation revealed markedly increased levels of cytochrome c, as compared with the control cybrids. 
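The TUNEL counts reported above translate directly into cell-death fractions. The sketch below recomputes those fractions from the counts quoted in the text and adds a chi-square comparison of the H2O2-stimulated groups; the statistical test is an illustrative choice of ours, not the authors' analysis.

```python
# Cell-death fractions from the TUNEL counts quoted in the text (H2O2-stimulated),
# with an illustrative chi-square test (not the authors' analysis).
from scipy.stats import chi2_contingency

dead_mut, total_mut = 243, 2724   # mutant cybrids (III1-1)
dead_ctl, total_ctl = 81, 2721    # control cybrids (A70-1)

print(f"mutant: {dead_mut / total_mut:.1%}  control: {dead_ctl / total_ctl:.1%}")

table = [[dead_mut, total_mut - dead_mut],
         [dead_ctl, total_ctl - dead_ctl]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.1e}")
```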
The impact of the m.3460G>A mutation on the apoptotic process was confirmed with western blot analysis, which showed that the mutant cybrids exhibited 67.4%, 53.0%, and 51.8% increased levels of cytochrome c, BAX, and BAK, respectively, which are able to pierce the mitochondrial outer membrane to mediate cell death by apoptosis, 48 as compared with the control cybrids ( Figures 7C, 7D). Furthermore, we used western blot analysis to measure the levels of apoptosis-related proteins: uncleaved/cleaved PARP and uncleaved/cleaved caspases 3, 7, and 9 in mutant and control cell lines. 49 As shown in Figures 7E and 7F, the average levels of cleaved PARP protein and caspases 3, 7, and 9 in the three mutant cell lines were 153.5%, 116.2%, 140.8%, and 114.5%, respectively, of the average values measured in the three control cell lines. However, there were no significant differences in the average levels of uncleaved PARP or caspases 3, 7, and 9 between control and mutant cybrids (Figs. 7G, 7H). Alteration in Mitophagy Mitophagy, an important mitochondrial quality control system, selectively degrades damaged mitochondria with autophagosomes and their subsequent catabolism by lysosomes. 50 To investigate the effect of the m.3460G>A mutation on mitophagy, we evaluated the mitophagic states of mutant and control cell lines using immunoblotting and endogenous immunofluorescence assays. As shown in Figure 8A, mutant cell lines displayed ∼16% reduced levels of lysosome-associated membrane glycoprotein 1 (LAMP1), indicating that the m.3460G>A mutation affects the mitophagy process. The levels of autophagy in mutant and control cell lines were then examined using two markers: microtubule-associated protein 1A/1B light chain 3 (LC3) and Beclin-1. 51 During autophagy, the cytoplasmic form (LC3-I) is processed into a cleaved and lipidated membrane-bound form (LC3-II), which is recruited to autophagosomal membranes. The amount of LC3-II is clearly correlated with the number of autophagosomes. 52 The sequestosome 1/P62 protein, (SQSTM1, hereafter referred to P62), is an autophagy substrate that colocalizes with ubiquitinated protein aggregates in many neurodegenerative diseases. 53 As shown in Figures 8B and 8D, reduced levels of LC3 and increased levels of P62 were observed in mutant cell lines carrying the m.3460G>A mutation, compared with controls. In particular, the average levels of LC3-II/I and P62 in the three mutant cell lines carrying the m.3460G>A mutation were 47.0% (P = 0.0140) and 137.5% (P = 0.0375) of the mean values measured in the three control cell lines, respectively. These data suggest that the m.3460G>A mutation impaired autophagy in the mutant cell lines. Mitophagy promotes mitochondrial turnover and prevents accumulation of damaged mitochondria. It is regulated by the PINK1/parkin pathway. Damaged mitochondria are recognized by PINK1, which builds up on the outer mitochondrial membrane and recruits parkin. The accumulation of PINK1 and recruitment of parkin target mitochondria for degradation by lysosomes. 50 To assess if the m.3460G>A mutation affects PINK1/parkin-mediated mitophagy, we evaluated the mitophagic states of mutant and control cell lines using immunoblotting. As shown in Figures 8C and 8E, reduced levels of PINK1 and parkin were observed in mutant cell lines carrying the m.3460G>A mutation compared with the controls. 
In particular, the average levels of PINK1 and parkin in the three mutant cell lines carrying the m.3460G>A mutation were 36.7% (P = 0.0105) and 56.0% (P = 0.0202), respectively, of the mean values measured in the three control cell lines lacking the mutation. These data reveal that the m.3460G>A mutation alters mitophagy. DISCUSSION In the current study, we investigated the pathophysiology of the LHON-associated m.3460G>A mutation. Nineteen Han Chinese families bearing the m.3460G>A mutation were identified among a large cohort of Chinese patients with LHON. 14,26,27 Optic neuropathy as a sole clinical phenotype was only present in the maternal lineage of these pedigrees, which showed a wide range of penetrance and expression of the optic neuropathy. The average age of onset of LHON ranged from 5 to 29 years, with an average of 18.9 years among the 19 Chinese families, whereas the average age of onset of optic neuropathy was 20 years in eight Caucasian families. 54,55 The penetrance of optic neuropathy (affected The distribution of cytochrome c from mutant III1-1 and control A70-1 cybrids was visualized by immunofluorescent labeling with TOM20 antibody conjugated to Alexa Fluor 594 (red) and cytochrome c antibody conjugated to Alexa Fluor 488 (green) and analyzed by confocal microscopy. DAPI-stained nuclei were identified by their blue fluorescence. (C, E, G) Western blotting analysis. Cellular proteins (20 μg) from various cell lines were electrophoresed, electroblotted, and hybridized with several apoptosis-associated protein antibodies: cytochrome c, BAK, and BAX (C); cleaved caspases 3, 7, and 9 and PARP (E); and uncleaved caspases 3, 7, and 9 and PARP (G), with β-actin as a loading control. (D, F, H) Quantification of apoptosis-associated proteins: cytochrome c, BAK, and BAX (D); cleaved caspases 3, 7, and 9 and PARP (F); and uncleaved caspases 3, 7, and 9 and PARP (H). The levels of apoptosis-associated proteins in various cell lines were determined as described elsewhere. 11 Three independent determinations were made in each cell line. Graph details and symbols are explained in Figure 1. matrilineal relatives/total matrilineal relatives) in the 19 Chinese pedigrees bearing the m.3460G>A mutation ranged from 2.8% to 50%, with an average of 20.3%; whereas, 134 affected matrilineal relatives out of 768 matrilineal relatives among 21 Caucasian families harboring the m.3469G>A mutation developed optic neuropathy. 5,55 Furthermore, 28 males and 14 females of the 247 matrilineal relatives among the 19 Han Chinese families exhibited visual impairment, whereas the average ratio for affected male and female matrilineal relatives among 21 Caucasian families was 3.35. 5,55 Moreover, the mtDNA haplogroups (A, B5, C4a1, D, F, M12, M7, M8a2, R9, and H2) of the 19 Chinese pedigrees and those (B4d1, F2, A5b, M12a, D4b2b, and D4b2) in six other Chinese families bearing the 3460G>A mutation differed from those (H, J, K, U, V, W, and X) in 11 Dutch and seven Italian families carrying the m.3460G>A mutation. [56][57][58] The occurrence of the m.3460G>A mutation in these genetically unrelated families affected by optic neuropathy and that they differed considerably in their mtDNA haplogroups highlight the impact of the mutation on the pathogenesis of LHON. The phenotypic variability, including the penetrance and expression of the m.3460G>A mutation, may be due to the influence of mitochondrial haplotypes or nuclear modifier genes. 
19,26,27,30,[58][59][60] The m.3460G>A mutation replaced the highly conserved alanine at position 52 with a threonine in MT-ND1, which is the essential subunit of complex I comprised of an additional six mitochondrion-encoding and 39 nucleus-encoding subunits. 21 Based on the cryo-EM structure of mammalian complex I, the hydrophobic side chains of A52 contribute to the hydrophobic interactions with P48, L55, and 56F in MT-ND1. 22 The substitution of nonpolar hydrophobic alanine at position 52 with the hydrophilic threonine by the m.3460G>A mutation in MT-ND1 may destabilize these interactions, thereby altering the structure and function of MT-ND1. The instability of mutated ND1 was evidenced by a 52.6% decrease in the levels of MT-ND1 observed in the cybrids harboring the m.3460G>A mutation. However, the m.3460G>A mutation did not affect the expression of NDUFA3, NDUFA8, or NDUFA13, which interact with MT-ND1. 22 The mutated MT-ND1 altered the assembly of complex I, as is the case for m.3394T>C and m.3866T>C mutations. 18,20 Alterations in the stability of MT-ND1 and the assembly of complex I were responsible for significant reductions in the activity of complex I observed in the cybrids in this study and lymphoblastoid cell lines carrying the m.3460G>A mutation. 15 Furthermore, the m.3460G>A mutation gave rise to reduced basal OCRs, as well as reduced levels of ATP-linked OCR protons, leak OCRs, maximal OCRs, and reserve capacity in the mutant cybrids. These OXPHOS deficiencies diminished ATP synthesis, impaired the mitochondrial membrane potentials, and increased the production of oxidative reactive species in the mutant cell lines bearing the m.3460G>A mutation, as is the case for cells carrying the m.3394T>C and m.3866T>C mutations. 18,20 These results indicate that defects in complex I in the mutant cell lines carrying the m.3460G>A mutation play a critical role in the mitochondrial dysfunction that leads to the development of LHON. Mitochondrial dysfunction and overproduction of ROS caused by LHON-associated mtDNA mutations often affect apoptotic death and mitophagy. 11,18,20,[61][62][63] In the present investigation, we have shown that mitochondrial dysfunctions caused by the m.3460G>A mutation promote the apoptotic process. Both immunocytostaining assays and western blot analysis revealed the elevated releases of cytochrome c into cytosol in the mutant cybrids carrying the m.3460G>A mutation compared with the control cybrids lacking the mutation. Additional effects of the m.3460G>A mutation include the increased expression of BAK and BAX, which mediate cell death by apoptosis, and the elevated levels of apoptosis-activated proteins (cleaved caspases 3, 7, and 9 and PARP) in the mutant cybrids carrying the m.3460G>A mutation compared with the control cybrids. 48,49 Mitophagy disposes of damaged mitochondria and maintains a healthy mitochondria population in cells. 50 The effect of the m.3460G>A mutation on autophagy was first evidenced by reduced levels of LAMP1 in the mutant cell lines carrying the m.3460G>A mutation as compared with the control cell lines. The reductions in LC3 level in the mutant cell lines suggested a general decrease in the capacity of the mutant cells to generate autophagosomes, thereby perturbing the autophagic degradation of ubiquitinated proteins. Furthermore, the increased levels of p62 in cybrids carrying the m.3460G>A mutation indicate an accumulation of autophagic substrates, such as misfolded proteins, leading to the deleterious effects. 
Notably, the reduced levels of PINK1 and parkin observed in the mutant cell lines carrying the m.3460G>A mutation indicated that the m.3460G>A mutation altered the mitophagic removal of damaged mitochondria. Photoreceptor cells and RGCs account for the biggest demand for ATP by cells in the retina. 37,64 Mitochondrial dysfunction may lead to the dysfunction or death of RGCs, thereby contributing to the development of optic neuropathy. Our findings highlight the critical role of the m.3460G>A mutation in the pathogenesis of LHON, manifested by mitochondrial dysfunction and alterations in apoptosis and mitophagy.
2021-07-28T06:17:56.066Z
2021-07-01T00:00:00.000
{ "year": 2021, "sha1": "67a8b3e1fb03ecd1e909497756f3bd486cebfa5c", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1167/iovs.62.9.38", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2a52bf17ef923d77d74edcb880a40c4cad75bf87", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
55139397
pes2o/s2orc
v3-fos-license
EFFECTS OF SOIL DEGRADATION AND ORGANIC TREATMENTS ON VEGETATIVE GROWTH, YIELD AND GRAPE QUALITY Delimited degraded soil areas caused by improper land preparation before vine plantation and/or by management can be observed in conventional and organic European vineyards. Soil malfunctioning includes poor organic matter content, imbalanced nutritional status, altered pH, water deficiency, soil compaction and/or scarce oxygenation. The goal of the present study was to compare the effects of some agronomic strategies to restore optimal soil functionality in degraded areas in organic commercial vineyards located in five countries, and to evaluate the impact of these soil management practices on vegetative growth, yield and grape quality parameters. Grapevines located in non-degraded soils showed higher vegetative growth and yield, and lower total soluble solids in grapes. Generally, there were no significant differences in vegetative growth, yield and grape quality among the soil management strategies in degraded areas. Introduction Viticulture is one of the most erosion-prone land uses (Garcia-Ruiz, 2010). Some vineyards are located on steep slopes and shallow soils where heavy rain events generate runoff, and soil tillage exacerbates soil losses (Le Bissonnais and Andrieux, 2007). In both conventional and organic European vineyards, it is not rare to have delimited areas characterized by problems in vine health, grape production and quality that reveal poor soil functioning. Causes of soil malfunctioning can be manifold and sometimes interact, such as poor organic matter content, imbalanced nutritional status, altered pH, water deficiency, soil compaction and/or scarce oxygenation. In the literature, cover cropping has been extensively assessed in a variety of soil and climate conditions across the world, largely under Mediterranean climate (Quader et al., 2001; Pardini et al., 2002; Dinatale et al., 2005; Ingels et al., 2005; Gaudin et al., 2010; Marques et al., 2010). These studies identify a large variety of ecosystem services provided by cover crops in vineyards, such as weed control, pest and disease regulation, water supply, water purification, field trafficability, soil biodiversity and carbon sequestration. The EU Organic Farming Regulations (834/2007 and 889/2008) provide general considerations on the maintenance of soil fertility and biodiversity, but do not include guidelines on the preparation of soil for planting of perennial crops and the maintenance of its functionality. The recovery of optimal production and ecosystem functionality of degraded vineyards is the object of the research project ReSolVe - Restoring optimal soil functionality in degraded areas in organic Vineyards, funded by the European FP7 ERA-net project CORE Organic Plus. The ReSolVe project involves eight research groups in six different countries: Italy, Spain, France, Sweden, Slovenia and Turkey. The aim of the present study was to compare the effects of selected organic agronomic strategies to restore optimal soil functionality in degraded areas of the different country partners, and to evaluate the impact of these soil management practices on grapevine growth, yield and grape quality parameters. Materials and methods The experimental study was carried out in season 2017 in nine organic commercial vineyards located in Italy (Chianti Classico wine district and Maremma), France (Montagne St. Émilion and La Clape), Spain (La Rioja) and Slovenia (Koper) for wine grapes, and in Turkey (Dokuztekne and Sarıveli) for table grapes. Four different soil treatments, Control degraded, Composted Organic Amendment (COMP), Green Manure (GM) and Dry Mulching (DM), were implemented in the degraded areas to appraise the restoration and improvement of these soils, as described by Priori et al., this issue. Additionally, a control treatment corresponding to a non-degraded soil was also included and named T0. The effects of these soil treatments on each experimental plot were evaluated through measurements of vegetative growth (chlorophyll, SPAD units), yield components (yield per vine) and grape composition (total soluble solids) in order to characterize and compare the three different soil remediation strategies. Data analyses were performed by one-way ANOVA, comparing the means to test the influence of all treatments involved in the nine experimental vineyard plots. The ANOVA statistical analysis obtained for each parameter of vegetative growth, yield components and grape composition is displayed below. These parameters were also evaluated farm-by-farm in each country. Means separation was performed using the Student-Newman-Keuls test at a significance level of p = 0.05 (an illustrative sketch of this analysis is given at the end of this section). Results and discussion Figure 1 provides vegetative growth, determined as leaf chlorophyll content, in the different organic vineyards for the five soil treatments. The highest SPAD index was reached in the non-degraded soil. Statistically significant differences between degraded and non-degraded soils were obtained in the Fontodi (Italy) and Çelebi (Ceyhan/Turkey) farms. No statistically significant differences were found between the four soil treatments in degraded soils in the four organic vineyards in Turkey and Italy. The mean values of yield per vine in the nine vineyards and in the four soil treatments are displayed in Figure 2. As with vegetative growth, the highest values of yield per vine were observed in non-degraded soils, except in the Puelles farm (Spain). No statistically significant differences were found between the four soil treatments in degraded areas, whatever the site. Regarding grape composition, the values of total soluble solids (sugars) in the organic vineyards for the five soil treatments are presented in Figure 3. Significant differences were found between degraded and non-degraded soils in Turkey (Çelebi farm) and Slovenia (Bonini and Prade farms). However, no significant differences in total soluble solids were found among the soil management strategies in degraded areas. Conclusions Grapevines located in non-degraded soils showed higher vegetative growth and yield, and lower sugar concentration in grapes. The impact of the different soil treatments was consistent among the five countries. Generally speaking, there were no significant differences in vegetative growth, yield and grape quality among the soil management strategies in degraded areas. This is most probably because the project period (3 years) was too short to capture the effects of the organic amendments, as the response of plant and soil to organic material enrichment needs more than a couple of years. Figure 2. Impact of soil treatment on grape yield in nine organic vineyards in five countries. No statistically significant differences were found between the four treatments applied in all sites.
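As referenced in the Materials and Methods, the sketch below illustrates the per-parameter analysis (one-way ANOVA followed by means separation at p = 0.05). The yield data are entirely hypothetical, and Tukey's HSD is used here only as a readily available stand-in for the Student-Newman-Keuls test.

```python
# Hypothetical yield-per-vine data (kg) for the five soil treatments of one farm.
# One-way ANOVA as described in the Methods; Tukey's HSD is a stand-in for the
# Student-Newman-Keuls means separation (alpha = 0.05).
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

data = {
    "T0":   [3.9, 4.2, 4.5, 4.1],   # non-degraded control
    "CTRL": [2.6, 2.9, 2.7, 3.0],   # degraded control
    "COMP": [2.8, 3.1, 2.9, 3.2],
    "GM":   [2.7, 3.0, 2.8, 3.1],
    "DM":   [2.9, 2.8, 3.0, 3.1],
}
F, p = f_oneway(*data.values())
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4f}")

values = np.concatenate(list(data.values()))
groups = np.repeat(list(data.keys()), [len(v) for v in data.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```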
2018-12-05T12:28:05.243Z
2018-06-18T00:00:00.000
{ "year": 2018, "sha1": "d00701a242f0966597a7b676ed47ba6f38d1115d", "oa_license": "CCBYNC", "oa_url": "https://hal.inrae.fr/hal-02628410/document", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "bb30c3e5fadad3e2fe39920c43042cdac6f0b7d5", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
234279556
pes2o/s2orc
v3-fos-license
Risk and Income Evaluation Decision Model of PPP Project Based on Fuzzy Borda Method In order to effectively analyze the risk-return decision-making model of PPP projects proposed by Yuan et al. (2020), this paper, based on the fuzzy Borda method and synergy effect theory, considers the synergistic effect of PPP projects, constructs models of investment risk sharing, incentive, and supervision-punishment, and determines the investment risk sharing, incentive, and supervision-punishment decision-making mechanisms of PPP project investment in order to achieve the goals of the PPP project. The research results show that the increased synergy of project participants not only reduces the impact of investment risk on project revenue but also promotes project participants to increase their willingness to undertake risks, actively undertake project risks, and achieve the synergy effects of PPP projects. Through the cooperation of both parties, the total income of PPP projects is increased. The research results also show that the government should choose social capital participants with complementary advantages to form synergy, as shown by Jiang et al. (2016); with the increase of synergy, the government needs to increase the incentive intensity, improve the performance behavior of social capital participants as proposed by Junlong et al. (2020), curb their speculation, and promote cooperation between the two sides. Because the increased synergy raises the willingness of social capital participants to cooperate and reduces speculation, the government should reduce the intensity of supervision and punishment. Introduction PPP stands for the English full name "Public-Private-Partnership," in which "Public" mainly refers to the government and "Private" mainly refers to private enterprises. This can be translated as government-enterprise cooperation, that is, a public-private partnership, which is an emerging project financing and project management model. This model introduces foreign capital and private capital into public infrastructure and public service projects and encourages private companies and the government to participate in the construction of public infrastructure in a cooperative manner. As early as the 1980s, China explored the application of the PPP model in the field of infrastructure. This was the first time that the government participated in the construction of public infrastructure and public service projects through BOT (build, operate, and transfer), such as the construction of power plants, but the scale of investment is usually large. After a company invested in and built an initial power plant through the BOT model, it signed a 20-year concession agreement. During this 20-year period, the company could recover its cost and profit through its own operation. Twenty years later, the government regained the management rights. As an important financing means, the PPP model can comprehensively utilize the advantages of the government and the private sector [1] and establish a full-scale cooperative relationship between the government and private enterprise, the so-called "risk sharing and benefit sharing." In the construction of public infrastructure, this model has advantages that the use of many other models does not have. In [2,3], the authors propose a conceptual model that includes three risk dimensions and 15 SRFs to mitigate social risks and improve the social sustainability of transport PPP projects. A survey conducted to investigate stakeholder comments on the proposed SRFs indicated that all SRFs are important.
SRFs can be used to assess social, economic, and environmental risks. Confirmatory factor analysis (CFA) validates the classification of SRFs and points out that all risk dimensions contribute to social risk. The impact of society and the environment on social sustainability may contribute more to the generation of social risks. In [4], the authors used the analytic hierarchy process to develop a generic risk assessment model; the primary purpose of the next step was to investigate, early in development, experts' preferences for weights, rankings, and risk factor scores. The risk assessment leads to a consistent set of risk factors affecting brownfield decisions in the context of Melbourne. The results show that site-specific and project risks are the most important in brownfield development decisions. Financial, market, and planning risks are critical. Political, legal, and socioeconomic risks are relatively less important. In [5], the authors evaluated the decision-makers (DMs) responsible for maintaining pipelines based on different perspectives of risk assessment models. It was found that this could help prioritize maintenance efforts and optimize the use of human and other resources. As for the transportation of natural gas through pipelines [6], risk analysis must consider the physical and operational characteristics of the product, the failure modes, and their consequences, depending on each unexpected situation considered. Therefore, that paper enhances previous recommendations for multicriteria decision models by using visualization tools and statistical tests that evaluate multidimensional risks. In [7], the authors construct a multicriteria decision-making framework for risk assessment of construction projects with picture fuzzy information and propose a picture fuzzy normalized projection (PFNP) model, which overcomes the shortcomings of existing picture fuzzy projection models. That paper also establishes the entropy weight method of picture fuzzy sets to calculate the criteria weight vector [8]. In the picture fuzzy environment, the PFNP model is combined with the VIKOR method to construct a VIKOR method based on integrated picture fuzzy normalized projection. In [9], the authors introduce a practical model to select the best and most appropriate portfolio of projects, taking into account project investment capital, returns, and risks. The article addresses the ever-changing and highly uncertain project environment by using interval type-2 fuzzy sets (IT2FS). The authors also introduce a new R&D project evaluation model that comprehensively addresses the constraints and limitations of R&D portfolio selection. In recent years, the state has strengthened investment in infrastructure, which has mitigated the "bottleneck" constraints to a certain degree [10], but as the demand for infrastructure grows, China's infrastructure investment demand keeps growing, placing enormous financial pressure on the country. China needs a great deal of money to improve its infrastructure construction. It is difficult to meet the requirements of infrastructure development simply by relying on the government as the sole investment entity [1,11]. In this case, the government urgently needs to introduce a new model to alleviate the funding problem of infrastructure construction. Owing to the advantages of high recognition rate and good accuracy of the fuzzy Borda method, many research teams at home and abroad have applied it in different fields.
In [12], the authors apply the fuzzy Borda method to agriculture, develop fuzzy Borda counts to account for some ambiguities, and derive criteria weights through linguistic comparison of the decision criteria. The results are found in the comparison of decision criteria in multicriteria decision-making (MCDM). The study extended the fuzzy Borda count to take into account the agent's confidence in its criteria weight preferences [13]. These criteria include soil erodibility, hydrological sensitivities, wildlife habitats, and impervious surfaces, and capture the role of buffer-protected ecosystems in reducing soil erosion, controlling runoff generation, enhancing wildlife habitats, and mitigating storm water impacts, respectively. In [14], the authors apply the fuzzy Borda method to algorithm optimization and introduce a dynamic multiobjective optimization algorithm that integrates the cat swarm algorithm and the Borda count sorting method. In the proposed method, the cats of the population are sorted and classified based on the Borda selection method before and after the change. The cat with the worst Borda grade is then reinitialized to improve the diversity of the population. In addition, multiobjective cat swarm optimization (CSO) is applied as an aggregation-based evolutionary algorithm to converge to the optimal frontier. The simulation results show that the algorithm achieves competitive results compared with traditional methods. In [15], the authors apply the fuzzy Borda method to noise handling and use the weighted least squares method based on the local outlier factor (LOF) to solve the weighted fuzzy tree (W-FT) algorithm. The authors validate the least-squares estimation of the consequent parameters of the fuzzy rules by two typical nonlinear examples [16]. Based on W-FT, a soft-sensor model of boiler NOx emission is established and compared with other modeling methods. It shows that the proposed W-FT algorithm can effectively identify noise and outliers, and the model based on W-FT has higher prediction accuracy and stronger generalization ability. In [17], the authors apply the fuzzy Borda method to an optimization model and propose a new interval-valued intuitionistic fuzzy set (IVIFS) ranking function, which considers the quantity and reliability information of IVIFS and combines the advantages of TOPSIS to build an optimization model for determining the attribute weights when they are unknown or only partially known. In addition, the authors developed an effective method to solve the MAGDM problem, where the attribute values are represented by IVIFS. The paper studies numerical examples of supplier selection problems to prove the applicability and feasibility of the method [18]. In [19], the author uses a multirisk matrix and FMI qualitative screening methods, uses MSI evaluation criteria to establish methods, and develops effective preventive maintenance strategies based on AHP, MSI quantitative analysis methods, and the Borda counting method. In order to effectively analyze the risk-return decision-making model of PPP projects, this paper builds models of investment risk sharing, incentive, and supervision by the fuzzy Borda number analysis method, considering the synergistic effect of PPP projects based on the PPP model and synergy effect theory, and determines the investment risk sharing, incentive, and supervision-punishment decision-making mechanisms of the PPP project in order to achieve the goals of the PPP project.
In the process of PPP project investment risk allocation, selecting social capital participants with complementary advantages is the precondition for generating a synergy effect; accordingly, the synergistic effect of the PPP project and the synergy effect of enterprise cooperation are analyzed. Based on the enterprise cooperation output model, the total output model of the PPP project is obtained. In the PPP (Public-Private Partnership) model, "Public" mainly refers to the government and "Private" mainly refers to private capital. In a broad sense, the PPP model means that government departments and private capital provide public goods and services in the form of cooperation, which is a comprehensive arrangement; in a narrow sense, different countries translate the PPP model differently, for example as public-private partnership or government-enterprise partnership.

Fuzzy Borda Algorithm and Its Application in PPP Project Risk and Return Evaluation Decision. Combining the definition of the PPP model by international organizations with the description of the PPP model in the normative documents of China's Ministry of Finance, the meaning of the PPP model can be summarized as follows: the government and social capital sign long-term contracts with each other and participate in the whole process of the partnership. In the course of public infrastructure and public service projects, through cooperation, the two parties share the responsibility of the project, share the project risks, and share the project benefits [19]. Game Theory. Game theory refers to the situation in which individuals, groups, or organizations that have conflicts of interest, under certain rules and constraints and according to their own information conditions, make one or more policy choices simultaneously or sequentially, and each choice eventually leads to a corresponding result. In this process, when one party chooses a strategy, it is affected by the other parties, and its decision in turn influences the decision-making behavior of the other parties regardless of the decision made. Therefore, game theory is also called the theory of games [3]. Cooperative Game. A cooperative game refers to a game in which the parties involved in the project cooperate and form an alliance. Cooperative gaming can increase the interests of at least one party, and sometimes even creates benefits for both parties; this also increases the interests of the entire society to a certain extent, so it is also called a positive-sum game. The existence of a cooperative game must meet the following basic conditions: first, for the alliance, the total benefit obtained after the cooperation is greater than the sum of the benefits obtained by each participant when operating alone; second, the income distribution rules should give participants more than they would obtain when not joining the alliance. In a cooperative game, the two sides mainly adopt a cooperative approach to enhance the interests of both parties, emphasizing team rationality and the maximization of collective interests [20]. The prerequisite for cooperation is the fair and equitable distribution of interests. Synergies. The synergy effect of PPP refers to the government's choice of social capital participants with management advantages to cooperate: the government handles political, economic, and social risks, while the social capital participants provide capital and management experience, and the complementary advantages of both parties are used to form synergy.
In the pre-project construction period and the operation period, the complementary advantages of the government and the social capital participants are integrated, a reasonable organization is established through clear government roles and responsibilities, coordination and communication among organizations are strengthened, and a reasonable risk-sharing mechanism is established. The incentive mechanism also promotes synergy between the two parties so as to achieve the objectives of management coordination, financial coordination, and operational synergy of PPP projects.

Fuzzy Borda Number Analysis Based on Grid Acquisition. Suppose an expert scores the mth attribute of the indicator C_p as B_m(C_p), where m = 1, 2, ..., M denotes the mth attribute of the element and p = 1, 2, ..., N denotes the pth indicator. Determining the Degree of Membership of Each Indicator. For the mth evaluation attribute, the degree to which each evaluation index C_p belongs to the "most important" fuzzy set is calculated according to equation (1). In equation (1), f_hp is the fuzzy frequency of each indicator, R_p is the sum of the fuzzy frequencies of index C_p, and δ_h^m(C_p) expresses the rank position of each indicator in the priority relationship. δ_h^m(C_p) is defined as follows: the mth attribute of the element is sorted according to its size, the order relationship of the attribute is obtained, and the index C_p is assigned the value corresponding to its rank h in this order. When the index C_i and the index C_j have the same membership degree U_mp in the order relationship of the mth attribute, that is, when the two indicators are tied in the priority relationship, they share rank h; similarly, if three indicators C_i, C_j, and C_k are tied at rank h in the order relationship of the mth attribute, they are treated in the same way. The fuzzy Borda number FB(C_p) is then calculated as follows: a weight is defined for each evaluation index C_p according to its rank h in the order relationship of the attribute, with W_hp = f_hp / R_p, and the fuzzy Borda number FB(C_p) is obtained from these weights. The fuzzy Borda number FB(C_p) of each index is finally normalized, yielding the weight value of each evaluation index. At this point, the weights of the evaluation indicators at all levels can be calculated.
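The display equations for the membership degrees and for FB(C_p) are not reproduced above, so the following sketch only illustrates how a fuzzy Borda weighting of this kind can be organized in practice. The rank-to-score convention, the membership term standing in for f_hp/R_p, and all function names and data are illustrative assumptions rather than the paper's exact formulas.

```python
import numpy as np

def fuzzy_borda_weights(rank_matrix):
    """Illustrative fuzzy Borda weighting (assumed conventions, not the paper's exact formulas).

    rank_matrix[m, p] = rank position h (1 = best) of indicator p under attribute m,
    as aggregated from the expert questionnaires.
    """
    M, N = rank_matrix.shape
    # Membership degree of indicator p under attribute m: scaled by rank,
    # standing in for the frequency-based term f_hp / R_p.
    membership = (N - rank_matrix + 1) / N
    # Borda score for holding rank h among N indicators; a plain linear score
    # N - h is used here for simplicity.
    borda_score = N - rank_matrix
    # Fuzzy Borda number: membership-weighted Borda score summed over attributes.
    fb = (membership * borda_score).sum(axis=0)
    # Normalize to obtain indicator weights.
    return fb / fb.sum()

# Toy example: 3 attributes (rows) ranking 4 indicators (columns).
ranks = np.array([[1, 2, 3, 4],
                  [2, 1, 4, 3],
                  [1, 3, 2, 4]])
print(fuzzy_borda_weights(ranks).round(3))
```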
Risk-Sharing Model of PPP Project Based on the Fuzzy Borda Method. Model Assumptions. Hypothesis 1. The variance of the return under investment risk is σ², indicating the degree of impact of investment risk on project income. The government's rate-of-return variance (σ1) and the social capital participant's rate-of-return variance (σ2) reflect the uncertainty of the return when each party independently bears the risk, indicating the degree of impact of the investment risk on the participants' income as a measure of project participation. Risks of the same type are treated together, and the variances of the yields of the project participants are compared on the premise of a given income; the party with the largest variance, that is, the party whose income is most affected by the risk, has the smaller risk control ability. Hypothesis 2. F(σ1, σ2) reflects the impact of the synergy between the government and the social capital participants on investment risk, where ρ takes values in [−1, 0) and represents the degree of complementarity of the risk management and control capabilities among the project participants. The more complementary the risk management and control capabilities of the project participants, that is, the smaller the value of ρ, the greater the synergy. Since PPP projects adopt a "limited recourse" financing model, x and y, respectively, represent the proportion of risk taken by the government and by the social capital participants. Model Construction. Through the cooperation of both parties, the project investment risk is dealt with so that the risk has little effect on the project's revenue. In the case of complementary risk management and control capabilities, the synergy of the project participants has an impact on investment risk allocation. When ρ = 0, the respective risks are independent. When −1 < ρ < 0, the risk management and control abilities of the government and the social capital participants are complementary, and as ρ decreases, the degree of complementarity increases and the synergy capacity increases. That is, the government has advantages in coordination, organization, and credibility but lacks technical and operational management advantages, while the social capital participants have a comparative advantage exactly where the government is weak. Because the risk after cooperation satisfies σ² < (σ1x + σ2y)², that is, σ < σ1x + σ2y, the total risk after the cooperation between the government and the social capital participants is less than before the cooperation, achieving the goal of "1 + 1 > 2". Through the cooperation of both parties, the impact of investment risk is reduced. When −1 < ρ < 0, as the degree of complementarity of risk management and control increases (the smaller the value of ρ), the synergy capacity increases and the impact of investment risk on project returns decreases. This explains that when the government chooses social capital participants with strong complementary advantages, the greater the degree of complementarity between the two parties, the greater the synergy ability; the project participants then actively deal with the investment risks of the project and reduce the impact of investment risks on the PPP project. This requires the government, during the bidding stage, to evaluate the social capital participants involved in the bidding and select those with strong complementary strengths and strong synergies with the government.
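The model equations themselves did not survive extraction; the sketch below assumes the usual portfolio-style combination σ² = (σ1x)² + (σ2y)² + 2ρσ1σ2xy, which is consistent with the two statements above: when ρ = 0 the individual risks simply add in quadrature, and when −1 < ρ < 0 the combined risk σ stays strictly below σ1x + σ2y.

```python
import numpy as np

def combined_risk(sigma1, sigma2, x, y, rho):
    """Assumed combined risk of the government (share x) and the social capital
    participant (share y) with complementarity coefficient rho in [-1, 0]."""
    var = (sigma1 * x) ** 2 + (sigma2 * y) ** 2 + 2 * rho * sigma1 * sigma2 * x * y
    return np.sqrt(var)

sigma1, sigma2, x, y = 1.0, 1.0, 0.6, 0.4
for rho in (0.0, -0.5, -1.0):
    sigma = combined_risk(sigma1, sigma2, x, y, rho)
    print(f"rho={rho:+.1f}  sigma={sigma:.3f}  sigma1*x + sigma2*y={sigma1*x + sigma2*y:.3f}")
# As rho decreases, the combined risk falls further below the simple weighted sum,
# mirroring the sigma < sigma1*x + sigma2*y statement in the text.
```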
For the expert evaluation, Table 1 gives the evaluation levels, judgment basis, and criterion quantification; the questionnaire is issued to the experts, who are asked to answer separately. (3) Multiple Rounds of Investigation by Experts. The investigators conduct a statistical analysis of the expert questionnaires and send the summary results, together with the next round of questionnaires, to the experts. The experts complete a new round of questionnaires based on the other experts' opinions, and the investigators collect the opinions again for statistical induction. After repeated rounds, the expert opinions gradually converge until P < 0.05. Judgment Matrix Construction Method. This paper proposes a new form of evaluation matrix, which contains the information of each expert, in order to assess the accuracy of the expert opinions. The matrix form is shown in Table 2, in which x_ij represents the evaluation level given by expert h_i to the jth factor. Considering the hesitation, uncertainty, and other psychological states that may exist in the experts' evaluations, a single evaluation value is often difficult to give. In order to give the experts more room for expression, this paper expresses x_ij in the interval-number form [a_ij, b_ij]: [a_ij, b_ij] is the expert's assessment of the expectations and status quo of the evaluation item, with the evaluation ranging over ranks between levels 1 and 5. Expressing opinions or results as interval numbers better reflects the ambiguity and uncertainty of the information, and using this form in data processing reduces the loss of fuzzy information. Calculating the Weight of the Indicator. (1) Calculate the product M_i of the elements of each row of the judgment matrix A. (2) Calculate the nth root W_i = M_i^(1/n), i = 1, 2, 3, ..., n, of each row product M_i, where n is the order of the matrix. (3) Normalize the W_i to obtain the weight Z_i of each indicator. (4) Calculate the maximum eigenvalue λ_max of the judgment matrix. Standardization of Indicator Data. Indicator data are standardized with separate formulas for benefit-type, cost-type, fixed-type, deviation-type, and interval-type indicators.
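Steps (1)–(4) correspond to the standard geometric-mean (root) method for extracting weights from a pairwise judgment matrix; since the formulas are not reproduced above, the sketch below shows the usual form together with a simple min-max standardization for benefit- and cost-type indicators. The example matrix, the consistency-index line, and the treatment of the remaining indicator types are illustrative assumptions.

```python
import numpy as np

def ahp_weights(A):
    """Geometric-mean (root) method: row products M_i, nth roots W_i, normalized weights Z_i,
    and the maximum eigenvalue lambda_max used for the consistency check."""
    n = A.shape[0]
    M = np.prod(A, axis=1)            # step (1): product of each row
    W = M ** (1.0 / n)                # step (2): nth root of each row product
    Z = W / W.sum()                   # step (3): normalized indicator weights
    lam_max = np.mean((A @ Z) / Z)    # step (4): maximum eigenvalue estimate
    return Z, lam_max

def standardize(values, kind="benefit"):
    """Min-max standardization; larger-is-better for benefit indicators,
    smaller-is-better for cost indicators (other indicator types omitted)."""
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    s = (v - v.min()) / span if span else np.ones_like(v)
    return s if kind == "benefit" else 1.0 - s

# Illustrative 3x3 pairwise judgment matrix (reciprocal by construction).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
Z, lam_max = ahp_weights(A)
CI = (lam_max - A.shape[0]) / (A.shape[0] - 1)   # consistency index
print(Z.round(3), round(lam_max, 3), round(CI, 4))
print(standardize([2.0, 5.0, 3.5], kind="cost").round(3))
```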
PPP Project Model Results Based on the Fuzzy Borda Method: Risk Sharing. In order to better understand the investment risk allocation problem of PPP projects, the following example is used to verify the correctness of the conclusions. With parameters σ1 = 1 and σ2 = 1, Figure 1 simulates the impact of the project participants' synergy on the investment risk of PPP projects: the greater the synergy capacity, the more the joint risk management capacity of the government and the social capital stakeholders is strengthened, and the smaller the impact of investment risk on the PPP project. Figure 2 simulates the impact of the project participants' synergy and risk ratio on risk sharing. The simulation results show that the government's risk-sharing ratio decreases as the ratio of the government's risk to that of the social capital participants increases. It can also be seen that when m = 2 the government's risk-sharing ratio decreases with increasing synergy ability (the smaller the value of ρ), but the effect of synergy ability on the government's risk-sharing ratio cannot be accurately identified for m < 2. On the basis of Figures 1 and 2, we simulate the relationship between the government's risk-sharing ratio and its risk loss for the parameter σ2 = 1, with ρ = −1, ρ = −0.5, and ρ = 0. Figure 2 shows that as the government's risk-return variance increases, that is, as the government's risk control ability becomes smaller, the proportion of investment risk it bears becomes smaller, in line with the risk-sharing principle of PPP projects. In Figure 2, when σ2 > σ1, as the synergistic capacity increases (the smaller the value of ρ), the social capital participants take on risks and help the government share certain risks; when σ2 < σ1, the smaller the synergy value, the higher the government's willingness to take risks in the cooperation process and to help the social capital participants. Figure 2 also shows that the government's risk-sharing ratio decreases with the government's risk control ability, which indicates that the risk assumed should be within the risk control ability of the project participants.

PPP Project Model Results Based on the Fuzzy Borda Method: Incentive. The following example is used to verify the accuracy of the conclusions on PPP project synergy and to analyze the impact of the cooperation between the government and the social capital participants on the overall income of PPP projects and on the behavior of the social capital participants. With the parameters b1 = 1, b = 1.5, ρ = 0.8, and σ = 0.7 fixed, the effects of the synergy coefficient on the incentives, the efforts of the project participants, the speculative behavior of the social capital participants, and the income of the PPP project are analyzed. Figure 3 shows that the government's Pareto-optimal effort level increases as the synergy coefficient increases; the social capital participants' efforts increase as the synergy coefficient increases; and the government's incentives for the social capital participants also increase as the synergy effect coefficient increases. Figure 4 shows that as the synergy coefficient increases, the efforts of the social capital participants move closer to the Pareto-optimal effort level. Due to information asymmetry, the government cannot fully grasp the information of the social capital participants. Social capital participants may speculate for their own economic interests, violating the willingness of cooperation between the government and the social capital participants and impairing the overall PPP project. However, as the synergy between the government and the social capital participants increases, the willingness of the social capital participants to cooperate increases, reducing the possibility of their speculative behavior. Figure 4 also shows that the total return of the PPP project increases as the synergy coefficient increases: increasing the total income of the PPP project through the synergy between the two parties will not only attract social capital participants to enter the PPP project but also ease the government's debt pressure and achieve a win-win situation.

PPP Project Model Results Based on the Fuzzy Borda Method: Supervision and Punishment. The following example is used to verify the accuracy of the conclusions on PPP project synergy. With the parameters b1 = 1, b = 1.5, ρ = 0.8, σ = 0.7, d = 0.5, L = 1, and M = 2 fixed, the reliability of the conclusions herein is verified. Figure 5(a) shows that the government's incentive intensity increases with the synergy coefficient and that, as the government's punishment increases, the social capital participants' compliance efforts increase with the synergy coefficient. Figure 5(b) shows that the speculative behavior of the social capital participants decreases with the increase of the synergy coefficient and decreases with the increase of government punishment. The government's supervision intensity increases with the synergy coefficient; it first increases and then decreases as the government's punishment increases. The threshold value of the penalty force in Figure 5 decreases as the synergy coefficient increases. From the above research, we can find that choosing the right social capital participants and constructing a reasonable supervision mechanism can achieve the stated goals and promote cooperation between the two parties.
This paper comprehensively considers the efforts of both parties and the synergy effect of PPP projects on the basis of principal-agent theory, divides the efforts of social capital participants into performance behaviors and speculation behaviors, introduces the government's supervision and punishment, and builds a PPP project supervision and punishment model based on the synergy effect, setting up an effective supervision and punishment mechanism to achieve the project goals.

Discussion. Through the above analysis of the nature of the PPP project, the government selects social capital participants with complementary advantages to form a cooperative relationship. The government relies on the funds of the social capital participants to ease its financial pressure, uses their management capabilities to save costs and increase project returns, deals with some investment risks through their market-based means, and, to a certain extent, is helped to achieve the social benefits of the project. Reasonable risk sharing of the project must ensure that the benefits of the project participants after the cooperation are greater than the benefits before the cooperation, so that the social capital participants are willing to enter the PPP project and synergy between the two parties can be achieved. Reasonable risk sharing not only needs to satisfy the interests of the project participants but also needs to match the risk control ability of the project participants and the ability of the two parties to cooperate, so as to ensure the long-term effectiveness of the cooperation between the two parties. The synergy effect of PPP projects stems from the cooperation of the government and social capital participants with synergistic capabilities. When selecting the social capital participants, the government should focus on those whose strengths make up for the government's deficiencies; at the same time, this promotes the success of PPP projects, thus achieving financial coordination, management coordination, and operational synergy of PPP projects, reducing project costs, and improving efficiency. The research results show that the government chooses social capital participants with complementary advantages to form synergy; with the increase of synergy, the government needs to increase the incentive intensity, improve the performance behavior of the social capital participants, curb their speculation, and promote cooperation between the two sides. Because increased synergy raises the social capital participants' willingness to cooperate and reduces speculation, the government can accordingly reduce the intensity of supervision and punishment. In the process of allocating the investment risks of PPP projects, it is necessary to minimize the impact of investment risks on earnings under the established income. By analyzing the risk-taking abilities of the government and the social capital participants and the synergistic ability formed by their complementary risk management and control capabilities, the investment risk allocation ratio of the project participants is determined and the project risk-sharing goal is achieved.

Conclusions. Based on the fuzzy Borda method and synergy effect theory, this paper considers the synergistic effect of PPP projects, constructs a model of investment risk sharing, incentive, and supervision punishment, determines the investment risk sharing, incentive, and supervision penalty decision-making mechanisms of the PPP project, and realizes the goals of the PPP project.
The study is concluded as follows: (1) Choosing social capital participants with complementary advantages in the process of allocating investment risk to PPP projects is a prerequisite for creating synergy. In this article, we comprehensively consider the synergistic effects between the government and the social capital participants and set up a risk allocation model for PPP project investment that produces synergistic effects. The research results show that the increased synergy of project participants not only reduces the impact of investment risk on project revenue but also encourages project participants to increase their willingness to undertake risks, actively undertake project risks, and achieve the synergy effects of PPP projects. (2) We analyzed the synergistic effect of the PPP project and the synergy effect of enterprise cooperation. Based on the enterprise cooperative output model, the total output model of the PPP project is proposed. Considering the efforts and synergy of both parties, a PPP project incentive model based on synergy is constructed to study the incentive problem of PPP projects with synergistic effects. The research results show that with the improvement of the synergy between project participants, the government will increase incentives for the social capital participants, the project participants will improve their efforts, and the synergy of PPP projects will be achieved. Through the cooperation of both parties, the total income of PPP projects is increased. (3) Considering the efforts of both parties and the synergy effect of PPP projects, the efforts of the social capital participants are divided into compliance behaviors and speculation behaviors, government supervision and punishment are introduced, and a PPP project supervision and punishment model based on the synergy effect is constructed. The research results show that the government chooses social capital participants with complementary advantages to form synergy; with the increase of synergy, the government needs to increase the incentive intensity, improve the performance behavior of the social capital participants, curb their speculation, and promote cooperation between the two sides. Because the increased synergy raises the social capital participants' willingness to cooperate and reduces speculation, the government can reduce the intensity of supervision and punishment. Data Availability. No data were used to support this study. Conflicts of Interest. The authors declare that they have no conflicts of interest.
Analyzing the genetic diversity and biotechnological potential of Leuconostoc pseudomesenteroides by comparative genomics Leuconostoc pseudomesenteroides is a lactic acid bacteria species widely exist in fermented dairy foods, cane juice, sourdough, kimchi, apple dumpster, caecum, and human adenoid. In the dairy industry, Ln. pseudomesenteroides strains are usually found in mesophilic starter cultures with lactococci. This species plays a crucial role in the production of aroma compounds such as acetoin, acetaldehyde, and diacetyl, thus beneficially affecting dairy technology. We performed genomic characterization of 38 Ln. pseudomesenteroides from diverse ecological niches to evaluate this species’ genetic diversity and biotechnological potential. A mere ~12% of genes conserved across 38 Ln. pseudomesenteroides genomes indicate that accessory genes are the driving force for genotypic distinction in this species. Seven main clades were formed with variable content surrounding mobile genetic elements, namely plasmids, transposable elements, IS elements, prophages, and CRISPR-Cas. All but three genomes carried CRISPR-Cas system. Furthermore, a type IIA CRISPR-Cas system was found in 80% of the CRISPR-Cas positive strains. AMBR10, CBA3630, and MGBC116435 were predicted to encode bacteriocins. Genes responsible for citrate metabolism were found in all but five strains belonging to cane juice, sourdough, and unknown origin. On the contrary, arabinose metabolism genes were only available in nine strains isolated from plant-related systems. We found that Ln. pseudomesenteroides genomes show evolutionary adaptation to their ecological environment due to niche-specific carbon metabolism and forming closely related phylogenetic clades based on their isolation source. This species was found to be a reservoir of type IIA CRISPR-Cas system. The outcomes of this study provide a framework for uncovering the biotechnological potential of Ln. pseudomesenteroides and its future development as starter or adjunct culture for dairy industry. One of the primary phenotypic differences between Ln. mesenteroides and Ln. pseudomesenteroides is their acid production capability from mannitol and starch, with the former producing acid from mannitol but not from starch upon 7 days of incubation; however, Ln. pseudomesenteroides shows the opposite reaction. Moreover, strains of Ln. pseudomesenteroides are negative for L-proline arylamidase and L-isoleucine arylamidase. Leuconostoc pseudomesenteroides and Ln. citreum strains are easy to distinguish from the remaining Leuconostoc species, as reported by Farrow et al. (1989). Both Ln. pseudomesenteroides and Ln. mesenteroides can convert fructose into mannitol, a low-calorie sugar metabolized independently of insulin that could substitute glucose, fructose, lactose, or sucrose in foods (Hemme and Foucaud-Scheunemann, 2004). Leuconostoc pseudomesenteroides has widely been found in fermented food systems such as fermented dairy products, wine, olives, kimchi, beans, cacao, and meat (Ludbrook et al., 1997;De Bellis et al., 2010;Nieto-Arribas et al., 2010;Kim et al., 2011;Mesas et al., 2011;Papalexandratou et al., 2011). In dairy starter culture technology, Ln. pseudomesenteroides strains exist in mesophilic starter culture formulations in conjunction with Ln. mesenteroides and lactococci species. The presence of Ln. pseudomesenteroides is instrumental for various technological traits. 
For instance, due to the heterofermentative carbohydrate fermentation capacity of this species, CO 2 gas releases from the pentose phosphate pathway, which leads to desired open texture in blue-veined cheese so that aerobic Penicillium roqueforti could colonize and proliferate in the cheese. In Gouda-type cheese, Ln. pseudomesenteroides produces diacetyl by degrading citrate and contributes to eye formation by its heterofermentative lifestyle (Hemme and Foucaud-Scheunemann, 2004). This species also produces dextrans that contribute to food systems' textural attributes and sensorial properties by improving their viscosity and final stability (Duboc and Mollet, 2001). Moreover, the bioproduction of aromatic compounds such as acetoin, acetaldehyde, and more importantly diacetyl further emphasizes Ln. pseudomesenteroides' contribution to organoleptic attributes in several fermented dairy foods (Vedamuthu, 1994;Hemme and Foucaud-Scheunemann, 2004;Nieto-Arribas et al., 2010). Since Ln. pseudomesenteroides is a versatile LAB species and has widely been isolated from diverse food sources, comparative genomics of Ln. pseudomesenteroides strains will contribute to our understanding of the adaptation of this species to fermented foods. Thus, the present study aimed to mine insights into the evolution, environmental adaptation, and biotechnological potential of Ln. pseudomesenteroides isolated from different ecological niches. The limited studies regarding Ln. pseudomesentreoides have led to relatively limited knowledge with regard to genomic diversity at the species level. In order to completely uncover the potential of Ln. pseudomesenteroides, we should evaluate genetic diversity within the species and define strains of industrial and scientific interest. In this study, we evaluated 38 strains including the type strain FDAARGOS_1003 through comparative genomic analyses to establish the genetic diversity of the overall species and their biotechnological potential. Annotation and genetic diversity analysis A total of 40 Ln. pseudomesenteroides strains were acquired from the NCBI GenBank database (Clark et al., 2016). CheckM tool was utilized to determine the quality of genome assemblies (Parks et al., 2015). Thirty-eight genomes were annotated with Prokka (Seemann, 2014) with the following argumentskingdom Bacteria -compliant. Identification of core-and pangenomes and presence/absence of genes across all strains were performed by feeding Roary (Page et al., 2015) with GFF files from Prokka using the following arguments: -e -n -v -r. Open/close classification of the pangenome was determined by fitting Heap's law model with 10,000 permutations by micropan (Snipen and Liland, 2020) package. Evaluation of the similarity between genomes by presence/absence of genes was performed with principal coordinate analysis (PCoA) in R (version 4.1.1; R Core Team, 2021) by calculating Jaccard distance with prabclus (Hennig and Hausdorf, 2020) package. Neighbor joining phylogenetic analysis of whole genome alignment provided TYGS (Meier-Kolthoff and Göker, 2019). The phylogenetic tree of the whole genome alignment was built with the iTOL web tool (Letunic and Bork, 2021). Core orthogroups shared between Ln. pseudomesenteroides strains were annotated to clusters of orthologous groups (COG) categories using eggNOG-mapper (Huerta-Cepas et al., 2017). 
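As a rough illustration of the openness test described above, the following Python sketch (standing in for the Heaps' law fit in the cited micropan package) fits P(N) = κ·N^γ to permuted gene-accumulation curves built from a Roary-style presence/absence matrix; a positive fitted exponent indicates a still-expanding (open) pangenome. The matrix here is randomly generated and only 100 permutations are used instead of the 10,000 reported, so all numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pangenome_curve(pa, order):
    """Cumulative pangenome size as genomes are added in the given order.
    pa: boolean gene presence/absence matrix (genomes x gene clusters)."""
    seen = np.zeros(pa.shape[1], dtype=bool)
    sizes = []
    for g in order:
        seen |= pa[g]
        sizes.append(seen.sum())
    return np.array(sizes)

def heaps_exponent(pa, n_perm=100):
    """Fit Heaps' law P(N) = kappa * N**gamma to permuted accumulation curves."""
    n_genomes = pa.shape[0]
    curves = [pangenome_curve(pa, rng.permutation(n_genomes)) for _ in range(n_perm)]
    mean_curve = np.mean(curves, axis=0)
    N = np.arange(1, n_genomes + 1)
    gamma, log_kappa = np.polyfit(np.log(N), np.log(mean_curve), 1)
    return gamma, np.exp(log_kappa)

# Toy presence/absence matrix standing in for the Roary gene_presence_absence output:
# 38 genomes x 2000 gene clusters, each cluster present with a random frequency.
pa = rng.random((38, 2000)) < rng.uniform(0.1, 1.0, size=2000)
gamma, kappa = heaps_exponent(pa)
print(f"gamma = {gamma:.3f}, kappa = {kappa:.1f}")
```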
Identification of genetic potentials Identification of clustered regularly interspaced short palindromic repeat (CRISPR) elements and Cas enzyme clusters were performed by the CRISPRCasFinder (Couvin et al., 2018) web tool, and CRISPRviz was utilized to detect spacer and repeat sequences and their alignment. Putative carbohydrate-active enzyme (CAZyme) domains were identified by using the HMMER (Potter et al., 2018) tool on the dbCAN database v10) according to protocol dbCAN. Using default settings, putative prophage sequences were identified with the PHASTER (Hennig and Hausdorf, 2020) tool. Detection of potential bacteriocin-like sequences was performed with the BAGEL4 (van Heel et al., 2018) web tool, and the detected sequences were validated with the NCBI's BLASTP (Cantarel et al., 2009) suite. Screening of antimicrobial resistance genes was performed with the Comprehensive Antibiotic Resistance Database (CARD; Alcock et al., 2020) web tool with perfect hits only. PLSDB (Galata et al., 2019;Schmartz et al., 2022) web tool was used to identify plasmid sequences with default settings. ISfinder (Siguier et al., 2006) web tool was utilized for detecting insertion sequences in Ln. pseudomesenteroides genomes by adjusting the e-value threshold to 0.01. Horizontally transferred sequences were identified with COLOMBO (Waack et al., 2006) tool. Annotation of interspersed repeats and low-complexity sequences was performed with RepeatMasker (Smit and Rubley, 2008) tool. Putative secondary metabolite gene clusters were screened using antiSMASH (Blin et al., 2021). Genome characteristics and genetic diversity Forty Ln. pseudomesenteroides genome assemblies retrieved from NCBI GenBank were quality-checked using CheckM (Parks et al., 2015). Table 1 shows the 38 Ln. pseudomesenteroides genomes (i.e., LMGCF08 and 4882 were discarded due to their low-quality CheckM outputs) isolated from different ecological niches including dairy, cheese starter culture, sourdough, kimchi, apple dumpster, cane juice, caecum, and human adenoid representing a broad range of ecological environments. The genome sizes of the strains ranged from 1.81 to 2.32 Mb (average 2.04 Mb). The G + C content of each strain slightly deviated and ranged from 38.5 and 39.2% (average 39%), which is consistent with the reference strain FDAARGOS_1003 (Table 1). Total CDS in each genome ranged from 1,825 to 2,359 proposing variability across Ln. pseudomesenteroides genomes (Li et al., 2021). To approximate the overall gene pool of Ln. pseudomesenteroides, we calculated the core genome ( Figure 1A) and pangenome ( Figure 1B) based on 38 strains. A total of 7,724 COGs were estimated, and the pangenome curve showed an asymptotic trend that did not plateau in 38 genomes implying new genes were still identified. Therefore, the pangenome of Ln. pseudomesenteroides is open ( Figure 1B). Core genome analysis of 38 Ln. pseudomesenteroides strains revealed that the number of shared COGs reduced with an increase in the number of sequenced genomes. A total of 919 COGs were identified that were present in the core genome of all 38 strains, which represent ~12% of the entire pangenome ( Figure 1B). In addition, a total of 6,805 variable COGs were determined, of which 2,907 of them were characterized as unique ( Figure 1C). Across all Ln. pseudomesenteroides strains screened, 17-2 carried the unique COGs of 237 ( Figure 1C). The core-and pangenomes were annotated using eggNOG-Mapper (Huerta-Cepas et al., 2017) and assigned to functional groups ( Figure 2). 
The largest core-and pangenome category included coding sequences with functions pertained to function unknown. The second and third largest pangenome categories contained replication, recombination and repair and transcription. Pangenome categories of amino acid transport and metabolism had similar number of CDS with carbohydrate transport and metabolism. The smallest pangenome categories included cell motility and secondary metabolites biosynthesis, transport and catabolism. The second largest core genome category was composed of CDS with functions pertained to translation, ribosomal structure and biogenesis. Carbohydrate transport and metabolism, nucleotide transport and metabolism, and transcription related CDS formed the third largest core genome category. The smallest core genome categories consisted of CDS associated with cell motility and secondary metabolites biosynthesis, transport and catabolism. To analyze the phylogenetic relationship among Ln. pseudomesenteroides PCoA plot and neighbor joining rooted phylogenetic tree were constructed on 38 Ln. pseudomesenteroides strains ( Figures 3A,B). To confirm that these strains are all Ln. pseudomesenteroides, the outgroup strains of Ln. lactis CBA3625, Ln. mesenteroides SRCM102733, and Ln. carnosum CBA3620 were included in phylogenomic analysis. Dairy strains were clustered into two groups on the negative side of PCo1. On the other hand, 13 lay at positive values of PCo2. Plant-associated strains lay only Frontiers in Microbiology 04 frontiersin.org at the positive side of PCo1 and PCo2 except LMG 11482, which was located on the positive side of PCo1 and negative side of PCo2 ( Figure 3A). Phylogenomic analysis revealed that there are seven major branches within Ln. pseudomesenteroides strains and outgroup strains form two separate clades ( Figure 3B). The first four branches close to each other were composed of dairyassociated strains. The fifth branch consisted of one clinical isolate, caecum, Bryndza cheese, and unknown source strains. The last two clades were far from the first five and mainly consisted of plant-based systems such as sourdough, kimchi, cane juice, and apple dumpster. Carbohydrate active enzymes Identification of carbohydrate-active enzymes (CAZyme) revealed that glycosyltransferase and glycoside hydrolase family enzymes were the most prevalent CAZymes across all Ln. pseudomesenteroides genomes analyzed. 17-2, TMW21073, and CBA3630 possessed the highest number of GH family CAZymes (Supplementary Figure S1). The abundance of GT family CAZymes was higher in TMW21073, NCDO 768, LMG 11482, and the type strain FDAARGOS_1003. The concentration of CE, AA, and CBM family CAZymes was remarkably lower than GH and GT family CAZymes across 38 genomes (Supplementary Figure S1). Five genomes were not predicted to encode AAs. Four main clades were identified based on the abundance of CAZymes in each genome. The first clade from the bottom-up contained sourdough, food and kimchi associated strains. However, the second clade members were pertained to plant-associated, caecum, and human clinical isolate. The third and fourth clade members were primarily belonged to dairy. Carbohydrate metabolism Putative carbohydrate metabolism, citrate metabolism, malate metabolism, and mannitol metabolism genes of 38 Ln. pseudomesenteroides strains were detected based on the presence or absence of key genes annotated by Prokka (Table 2). Thirtyeight Ln. 
pseudomesenteroides genomes were found to encode phosphoketolase and fructose-bisphosphate aldolase but lacked phosphofructokinase. Comparative analysis of carbohydrate metabolism-associated genes showed pronounced differences among certain strains of Ln. pseudomesenteroides. 76% of the Ln. pseudomesenteroides genomes analyzed carried fructokinase, which converts fructose to fructose 6-phosphate. All Ln. pseudomesenteroides studied in the present work harbored beta-galactosidase. Two different beta-galactosidases, lacLM and lacZ, were found; while all the genomes encoded lacLM, only TMW21195 was predicted to encode lacZ, and that gene was severely truncated. lacS, a lactose-specific transporter, was carried by all Ln. pseudomesenteroides genomes analyzed. (Figure 1 caption: Estimation of the core genome (A) and pangenome (B) of 38 Leuconostoc pseudomesenteroides strains by including genomes one by one; the R programming language (R Core Team, 2021) and the ggplot2 package (Wickham, 2016) were used to plot the graph. (C) Venn diagram representing the core and unique gene families of Ln. pseudomesenteroides obtained by MCL clustering analyses.) Genes encoding malL (sucrose-isomaltase) and malP (maltose phosphorylase) were found in all Ln. pseudomesenteroides genomes. However, malR (HTH-type transcriptional regulator) was found in 47% of the strains, which belonged to dairy. Sucrose 6-phosphate hydrolase (scrB) was found in 17 of the genomes analyzed, of which 76% belonged to non-dairy-associated niches (i.e., sourdough, cane juice, food, caecum, and unknown). The bglA gene encoding beta-glucosidase was found in all Ln. pseudomesenteroides isolates except AMBR10. trePP, encoding trehalose 6-phosphate phosphorylase, was present across all strains. treA (trehalose 6-phosphate hydrolase) was absent from all genomes except the Bryndza cheese isolate KMB_610, although in that strain the gene was truncated. xylA, xylB, and xylG, encoding xylulose isomerase, xylulose kinase, and a xylulose transport protein, respectively, were found in all Ln. pseudomesenteroides tested. The araBAD operon encoding the arabinose metabolism pathway was only present in the plant-associated strains CBA3630, Dm-9, FDAARGOS_1003, LMG 11482, LMG 11483, NCDO 768, TMW21073, and TR070. The 38 Ln. pseudomesenteroides strains were predicted to carry the key enzymes for galactose utilization through the Leloir pathway, except for BM2, LMGH278, LMGTW6, LN12, and UBA11295, which lacked galE encoding UDP-glucose 4-epimerase. The presence of the maltose operon genes malF, malG, malL, malP, and malR was not homogeneously distributed across the 38 Ln. pseudomesenteroides strains. For example, malF was not present in any of the strains except LMGH280. Although malG existed in LMGH100, LMGTW3, LMGTW6, LN02, and LN12, only the first strain carried a complete gene while the latter four contained a truncated gene. The citCDEFGOS operon comprises citrate lyase ligase (citC), citrate lyase (citDEF), holo-ACP synthase (citG), a transcriptional regulator (citO), and a sodium-dependent citrate transporter (citS). The citrate uptake and metabolism operon was encoded by all strains except the cane juice isolates FDAARGOS_1003, LMG 11482, and NCDO 768, the sourdough isolate TR070, and the unknown-source isolate LMG 11483. The gene encoding malate dehydrogenase was present in NCDO 768, LMG 11482, FDAARGOS_1003, Dm-9, 17-2, TMW21195, TR070, and TMW21073.
Fumarate hydratase was found in 17-2, AMBR10, CBA3630, Dm-9, KMB_610, LMG 11482, MGBC116435, NCDO 768, TMW21073, and UBA11295. Malolactic enzyme was found in 76% of all genomes. The gene encoding aspartate aminotransferase, which produces aspartate from oxaloacetate, was possessed by all genomes. Acetolactate synthase and acetolactate decarboxylase, which transform pyruvate into alpha-acetolactate and acetoin, were also harbored by all genomes. Mannitol dehydrogenase (mdh), responsible for mannitol production from fructose, was likewise evident in all genomes; however, mdh was truncated in ~13% of strains, primarily belonging to dairy. All Ln. pseudomesenteroides genomes encoded dextransucrase, which converts sucrose to dextran and fructose. However, 50% of Ln. pseudomesenteroides strains carried a severely truncated dextransucrase gene; these strains were primarily isolated from dairy, with the exception of Dm-9 (apple dumpster). (Figure 2 caption: Comparison of functional COGs in the pan- (light green) and core (red) genomes of 38 Ln. pseudomesenteroides strains.)

Proteolytic activity. The genes involved in proteolytic activity showed differences between strains (Table 3). The genes of the peptide ABC transporter operon, oppABCDF, were found in all genomes; however, oppF was truncated in 63% of the genomes. oppB was truncated in FDAARGOS_1003, LMG 11482, LMGH100, and NCDO 768, and oppA and oppC were truncated in LMGH280 and PS12, respectively. The prtP gene, encoding a PII-type serine proteinase active against casein (Frantzen et al., 2017), was found in ~53% of Ln. pseudomesenteroides genomes. Leuconostoc pseudomesenteroides genomes also contained a range of aminotransferases and peptidases. For example, pepN (aminopeptidase) was carried by all genomes, although LMG 11483 and TR070 harbored a truncated gene. pepA, pepC, pepQ, and pepX were also found in all genomes. 47% of the Ln. pseudomesenteroides genomes had the pepV gene encoding beta-alanine dipeptidase. All Ln. pseudomesenteroides genomes had complete pepS and pepT genes, with the exception of UBA11295, which carried truncated forms of those genes. We also analyzed Ln. pseudomesenteroides genomes for genes encoding arginine deiminase (ADI) metabolism (arcA, arcB, arcC, and arcD) and found that only two genomes (i.e., 17-2 and TMW21195) carried the complete gene set required for ADI metabolism (Figure 5).

Mobile genetic elements. All 38 genomes of Ln. pseudomesenteroides were explored for the existence of the mobile genetic elements plasmids, transposable elements, prophages, and CRISPR loci (Table 4). A total of 62 plasmids were identified in ~68% of the Ln. pseudomesenteroides genomes analyzed (Table 1, Supplementary Table S2), and ten unique plasmids were detected. Transposable elements were explored using the RepeatMasker (Smit and Rubley, 2008) tool, and these elements accounted for less than 0.1% of each Ln. pseudomesenteroides genome. The CBA3630, FDAARGOS_1003, and FDAARGOS_1004 genomes were predicted to harbor the largest numbers of transposable elements (i.e., 145, 140, and 141, respectively). The analysis of prophages via PHASTER (Arndt et al., 2016) identified 189 prophage-like elements, of which 20 were intact, 123 were incomplete, and 46 were questionable. Out of the 38 Ln. pseudomesenteroides screened, only 16 were predicted to encode intact prophages, while all strains carried incomplete and/or questionable prophages. By classifying the prophage-like elements, it was found that PHAGE_Lactob_phiAT3_NC_00589 accounted for the largest portion, followed by PHAGE_Lactoc_bIL309_NC_00266, PHAGE_Lactob_Lb_NC_04798, and PHAGE_Lactob_T25_NC_04862.
By classifying the prophage-like elements, it was found that PHAGE_Lactob_phiAT3_NC_00589 accounted for the largest portion, followed by PHAGE_Lactoc_ bIL309_NC_00266, PHAGE_Lactob_Lb_NC_04798, and PHAGE_Lactob_T25_NC_04862. Strain araA araB araD citC citD citE citF citG citO citS fba fruA galE galK galT lacL lacM lacS lacZ malF malG malL malP malR malX manA manX manZ scrB treA trePP xylA xylB xylG treA trePP bglA fruA levE scrB mdh mdh1 gtfC gmue fumC mleA aspC Thirty-eight Ln. pseudomesenteroides strains were also evaluated for CRISPR locus using CRISPRviz and CRISPRCasFinder tools. Because CRISPR varied in evidence level when using the CRISPRCasFinder tool, we considered the ones that exceeded evidence level one (Couvin et al., 2018). A total of 28 strains included complete CRISPR-Cas systems which belonged to Type IIA according to CRISPRCasFinder results. To further elaborate our understanding of the CRISPR-Cas system in Ln. pseudomesenteroides, we identified and located repeats and spacers and successfully assigned them to canonical types and subtypes ( Figure 6A). Six distinct groups of spacers were predicted to have 100% identity in their corresponding groups. The first group from the top down consisted of LN23 and LMGH100; the second group was composed of LMGCF15, LMGCF06, and HPK01; the third group was comprised of LN02, LMGTW3, and LMGTW6; the fourth group members were LMGH278, and LMGH61; the fifth group contained LMGH97 and LMGTW1; and the last group had IM1374 and BM2. The following strains of LMGH95, AMBR10, MGBC116435, and UBA11295 were not aligned into a group due to their diverse spacer compositions. LN12, LMGH280, and FDAARGOS_1004 did not participate in a group due to a single mismatch of spacer content. In parallel, LMGH284, and IM1427 were not laid into six groups owing to double mismatches. Nevertheless, LMGH95 and AMBR10 had no spacer identity with any CRISPR-Cas containing Ln. pseudomesenteroides genomes. By grouping according to repeat identity across 28 Ln. pseudomesenteroides strains, eight groups emerged ( Figure 6B). LMGH95 and AMBR10 did not show any repeat identity with any of the genomes shown in Figure 6B. LMGH280 did not participate in a group because of a double mismatch in its repeats. Secondary metabolites The 38 Ln. pseudomesenteroides genomes were screened for the existence of gene clusters encoding for secondary metabolites using antiSMASH (Blin et al., 2021). A total of 10 secondary metabolites found were classified as: Alkoloid (62) Table S5). However, the similarity scores achieved for all secondary metabolites including bacteriocins were in the range of 0.07 and 0.3 which reveal that the likelihood of presence of putative secondary metabolite gene clusters in Ln. pseudomesenteroides is low. Hence, Ln. pseudomesenteroides can be considered as low potential producer of secondary metabolites. This is also supported by Figure 2 results that number of COG functions associated with secondary metabolites biosynthesis had the lowest number of CDS in pangenome of 38 Ln. pseudomesenteroides. Bacteriocin screening of 38 strains were also performed using BAGEL4 which predicted three kinds of bacteriocins (Supplementary Table S1), of which two of them were undefined and carried by AMBR10 and CBA3630. Garvicin Q family class II bacteriocin was predicted to be encoded by MGBC116435 only. Discussion The present study aimed to explore the genetic diversity and biotechnological potential of Ln. 
pseudomesenteroides strains isolated from diverse ecological niches such as dairy, sourdough, kimchi, apple dumpster, cane juice, caecum, and human adenoid through comparative genomics. The average genome size of 38 Ln. pseudomesenteroides was 2.04 Mb (ranging from 1.81 to 2.32 Mb) which is in the range with lactic acid bacteria in general. In addition, average G + C content was found at 39% (ranging from 38.5% to 39.2%) in consensus with low G + C LAB implying Ln. pseudomesenteroides had gone through genetic drift (Makarova et al., 2006;Brandt and Barrangou, 2018). All 38 strains shared a mere 12% of COGs in the core genome, revealing the genotypic differences in Ln. pseudomesenteroides were primarily determined by the accessory genome. In addition, ~21.4% of the core genome is composed of sequences without a known function producing future candidates for functional studies . Pangenome analysis of Ln. pseudomesenteroides revealed an open genome, resulting in this species' functional diversity (Li et al., 2021). It is generally accepted that broadly distributed bacterial species often carry open pangenomes, which leads to the acquisition of alien genes from the environmental niche and adjust against environmental conditions such as E. coli (Fu and Qin, 2012), Bacillus cereus (Bazinet, 2017), and Streptococcus pneumonia (Tettelin et al., 2008). The variable genes found in Ln. pseudomesenteroides via Roary was ~88% of the total gene content in the pangenome, which proposes a large degree of diversity within this species (Medini et al., 2005). The distribution of genomes in the PCoA plot and neighborjoining rooted phylogenetic tree demonstrate that strains of the same origin are usually clustered together or not too distant. For example, all dairy-associated strains were located on the lefthand side of the PCoA plot as two clusters ( Figure 3A). On the other hand, plant-associated isolates from cane juice (FDAARGOS_1003, LMG 11482, and NCDO 768), sourdough (17-2 and TR070), kimchi (CBA3630), apple dumpster (Dm-9) positioned on the right-hand side of the plot as a group. In parallel, the phylogenetic association of dairy or plantoriginated strains showed a similar clade formation in the phylogenetic tree ( Figure 3B). This perhaps implies that Ln. pseudomesenteroides had experienced evolutionary adaptation to their corresponding microniche. We would expect that phylogenetically related strains would share similar isolation origins (Brandt et al., 2020). Outliers to this would be human adenoid and caecum isolates of AMBR10 and MGBC116435, respectively, which lay together with dairy isolates at the negative side of the PCo2. Nevertheless, those two strains formed a separate clade with KMB_610 and UBA11295. The discrepancy of location of AMBR10 in PCoA plot might be due to harboring a single plasmid in its genome. CAZyme distribution across 38 Ln. pseudomesenteroides genomes revealed four distinct clades, with plant-derived strains, caecum, and human adenoid isolates forming the first two closely related clades. Dairy-associated isolates and two unknown sourced genomes comprised the third and fourth clades. This finding supports the phenomenon that isolation source is an important factor causing genomic diversity in the carbohydrate metabolism of Ln. pseudomesenteroides. In silico prediction of GHs encoded in 38 Ln. pseudomesenteroides genomes revealed pangenome is composed of genes encoding GHs that belong to 21 different GH families involved in carbohydrate metabolism (Figure 4). 
Members of the GH13 family represented the largest proportion of all GHs predicted in the 38 Ln. pseudomesenteroides genomes, accounting for 18%, which is consistent with a previous report for the genus Leuconostoc (Sharma et al., 2022). GH13 belongs to the alpha-amylase group and possesses its catalytic machinery and conserved sequence regions (Martinovičová and Janeček, 2018). It was reported that GHs including beta-galactosidase, beta-glucosidases, beta-xylosidases, and amylases exist in Leuconostoc sp. MTCC 10508 (Kaushal and Singh, 2020). Lysozyme, which belongs to GH73 or GH25, was also found in the 38 Ln. pseudomesenteroides genomes; it catalyzes the hydrolysis of the beta(1-4) bond between N-acetylmuramic acid and N-acetylglucosamine of the cell wall and might possess antimicrobial potential (Michlmayr and Kneifel, 2014). In silico evaluation of the pangenome predicted 11 GT families that participate in carbohydrate metabolism. GT2 and GT4 family members represented the largest proportion of GTs in Ln. pseudomesenteroides genomes, accounting for 42% and 32%, respectively, in alignment with previous observations for the genus Leuconostoc (Sharma et al., 2022). The third largest GT family was GT51, involved in glycan metabolism. This enzymatic machinery degrades the complex polysaccharides of plants into mono- and oligosaccharides, which can later be transported by ABC transporters (Sharma et al., 2022). GTs form glycosidic bonds by transferring a sugar component from an activated sugar donor to an acceptor compound (Cantarel et al., 2009). GT51, which comprises the murein polymerases, was found in all Ln. pseudomesenteroides genomes. Murein polymerases participate heavily in peptidoglycan synthesis and have a key role in maintaining cell wall integrity (Sauvage et al., 2008). Leuconostoc pseudomesenteroides genomes carry crucial CAZymes that participate in carbohydrate hydrolysis and synthesis during fermentation. A bacterium's carbohydrate fermentation potential is a critical indicator of the biotechnological functionality of the strain and sets the fundamentals for strain selection and cultivation (Jiang et al., 2020). All Ln. pseudomesenteroides strains studied in the present work were predicted to carry phosphoketolase, a key enzyme of the pentose phosphate pathway. The genus Leuconostoc is obligately heterofermentative, meaning that six-carbon sugars are utilized through the pentose phosphate pathway, also known as the phosphoketolase pathway (Axelsson, 2004). A phosphofructokinase-encoding gene was not found in any of the Ln. pseudomesenteroides genomes, indicating that the Embden-Meyerhof pathway is not functional in this species (Frantzen et al., 2017). In contrast, a fructose-bisphosphate aldolase-encoding gene was present across all Ln. pseudomesenteroides genomes. This might imply a potential biosynthesis of fructose 1,6-bisphosphate and glyceraldehyde 3-phosphate via fructose 1-phosphate and therefore a homofermentative cleavage of fructose in Ln. pseudomesenteroides (Frantzen et al., 2017). However, Grobben et al. (2001) reported that a mannitol-producing Ln. pseudomesenteroides variant grown on sucrose could produce mannitol, CO2, lactate, acetate, or ethanol through the phosphoketolase shunt. The putative fructose fermentation route in mannitol-producing heterofermentative lactic acid bacteria is shown in Figure 7. In the presence of sucrose or fructose, Ln. pseudomesenteroides could potentially produce mannitol through the mannitol dehydrogenase enzyme.
Mannitol was reported to have an osmoprotectant effect that improves the survival of dried Lactococcus lactis cells (Efiuvwevwere et al., 1999). Moreover, it is an antioxidant (Shen et al., 1997) and about 50% as sweet as sucrose, and it is thus considered a low-calorie sweetener (Furia, 1972). Fructose can also be converted to fructose 6-phosphate by fructokinase at the expense of one ATP. Fructose 6-phosphate then feeds into the pentose phosphate pathway, because the species lacks 1-phosphofructokinase, the key enzyme of the Embden-Meyerhof pathway. Even though Ln. pseudomesenteroides carries fructose-bisphosphate aldolase, the metabolism of hexose sugars such as fructose likely occurs through the phosphoketolase pathway (Grobben et al., 2001). While all Ln. pseudomesenteroides genomes contained lacLM, interestingly only TMW21195, a strain clustered with plant-associated isolates, harbored lacZ. (Figure 6 caption: Alignment of spacers (A) and repeats (B) of each detected CRISPR locus. Each colored diamond represents a unique repeat, and each colored square represents a unique spacer in the CRISPR-Cas system. Grey "x" boxes show missing spacers.) To our knowledge, lacZ had not been reported in Ln. pseudomesenteroides before the present study, as Frantzen et al. (2017) reported that Ln. pseudomesenteroides encodes beta-galactosidase only through lacLM. The lacZ gene found in TMW21195 was severely truncated, which might imply gene decay, a sign of prolonged degenerative evolution, perhaps due to a long period of growth in plant-based systems where no lactose exists. In Leuconostoc, lactose is transported into the cytoplasm via lacS, a lactose-specific transporter. lacS includes a C-terminal EIIAGlc-like domain that can be phosphorylated, leading to an improved rate of lactose uptake in Streptococcus thermophilus (Gunnewijk and Poolman, 2000). All Ln. pseudomesenteroides isolates screened in the present study carry lacS, but in Ln. cremoris this gene is truncated and lacks the C-terminal domain, perhaps impacting lactose transport and thus the growth rate on lactose (Frantzen et al., 2017). Genetic potential for arabinose metabolism (araBAD) was only found in the plant-associated isolates CBA3630, Dm-9, FDAARGOS_1003, LMG 11482, LMG 11483, NCDO 768, and TR070 and the food isolates TMW21073 and TMW21195. These isolates clustered together in the PCoA plot and were closely related in the neighbor-joining phylogenetic tree. None of the dairy-associated Ln. pseudomesenteroides carried genes encoding arabinose metabolism, implying that these lineages are not capable of metabolizing arabinose from the environment. We speculate that dairy Ln. pseudomesenteroides might have lost the araBAD operon as a consequence of a repetitive and prolonged period of growth in milk. Dextransucrase, which transforms sucrose into fructose and dextran, is a crucial biotechnological trait of Ln. pseudomesenteroides, as dextran contributes to the textural and sensorial attributes of food systems by improving their viscosity and final stability (Duboc and Mollet, 2001). All plant-associated genomes evaluated in the present study contained a complete dextransucrase gene except for Dm-9. However, all dairy-associated Ln. pseudomesenteroides strains possessed a deletion in the dextransucrase gene. The dairy Ln. pseudomesenteroides show telltale signs of prolonged degenerative evolution, perhaps as a consequence of a long period of proliferation in milk, where no sucrose exists. All dairy-associated Ln.
pseudomesenteroides genomes analyzed contained the cit operon composed of citC, citDEF, citG, citO, and citS. The existence of the citCDEFGOS operon allows co-fermentation of citrate and sugar, yielding increased proton motive force and energy yield to the cell (Marty-Teysset et al., 1996). This perhaps indicates that the capability to metabolize citrate plays a crucial role in successful adaptation to the milk environment (Frantzen et al., 2017). ~40% of non-dairy-associated Ln. pseudomesenteroides strains lacked the complete operon for citrate metabolism, perhaps a sign of evolutionary gene loss as a result of a long period of proliferation in non-dairy niches where no citrate exists. Lactic acid bacteria produce acetoin, lactate, acetate, ethanol, succinate, and aspartate from citrate (Gänzle, 2015). The final products obtained from pyruvate depend heavily on pH; for example, acetoin formation is favored at low pH (Ramos et al., 1995). An elevated transformation of citrate or pyruvate to acetoin at low pH was reportedly shown in both heterofermentative (Drinan et al., 1976; Cogan et al., 1981) and homofermentative (McFall and Montville, 1989; Starrenburg and Hugenholtz, 1991) lactic acid bacteria.
FIGURE 7 Putative carbohydrate and citrate metabolism pathway of Ln. pseudomesenteroides.
Citrate is first converted to oxaloacetic acid by citrate lyase and then transformed into pyruvate via oxaloacetate decarboxylase. Pyruvate is further decarboxylated to acetaldehyde-TPP, after which it is converted to alpha-acetolactic acid by alpha-acetolactate synthase. Alpha-acetolactic acid can divert into acetoin or diacetyl. While conversion into acetoin requires alpha-acetolactate decarboxylase, transformation to diacetyl occurs through non-enzymatic decarboxylative oxidation of alpha-acetolactate. Acetoin can also be produced from diacetyl by diacetyl reductase (Ramos et al., 1995; Gänzle, 2015). Leuconostoc pseudomesenteroides could break down citrate to lactate, acetate, acetoin, and diacetyl because it carries the relevant enzymes required for such conversions. However, it is not capable of producing succinate and 2,3-butanediol because it does not carry succinate dehydrogenase and acetoin reductase, respectively (Figure 7). Moreover, none of the dairy-associated genomes were predicted to encode diacetyl reductase, preventing conversion of diacetyl to acetoin. This is supported by a previous study in which Ln. pseudomesenteroides P4 isolates lacked the genes required for reduction of diacetyl to acetoin and 2,3-butanediol (Frantzen et al., 2017). The transformation of oxaloacetate to aspartate occurs in a single-step reaction catalyzed by a transaminase. Aspartate is the precursor of other amino acids such as asparagine, threonine, and methionine (Gottschalk, 1986). It was reported that aspartate is used to biosynthesize amino acids in Ln. oenos (Ramos et al., 1995). It was also reported that aspartate was not an essential amino acid for the proliferation of Ln. oenos when citrate and malate were supplemented to the growth medium (Amoroso et al., 1993), indicating that aspartate could be synthesized from citrate. In dairy fermentations, Leuconostoc spp. grow in association with Lactococcus spp. It is not yet clear whether the associative growth is of mutual interest to Leuconostoc spp. and Lactococcus spp. (Frantzen et al., 2017). Although Leuconostoc spp. were reported to grow poorly due to the lack of proteolytic activity (Thunell, 1995), Frantzen et al.
(2017) described the genetic potential for caseinolytic activity, and Cardamone et al. (2011) reported the capacity for milk acidification by Ln. pseudomesenteroides (Cardamone et al., 2011). It has been shown that Ln. pseudomesenteroides is an important species in the production of cheese (Frantzen et al., 2017). Lactic acid bacteria use the ADI pathway to transform arginine into ornithine via citrulline, producing ATP and ammonia. The ammonia being produced elevates the pH and protects the bacteria against stressful acidic conditions (Cotter and Hill, 2003). We found that only 17-2 and TMW21195 carried putative arcA (arginine deiminase), arcB (ornithine transcarbamoylase), arcC (carbamate kinase), and arcD (arginine-ornithine transporter), which catalyze the ADI pathway (Figure 5). Although the majority of Ln. pseudomesenteroides genomes analyzed in the present study encoded ornithine transcarbamoylase, they lacked the remaining genes completing ADI metabolism. Typically, strains from the same species would be highly similar. However, only about 12% of COGs were shared across the 38 Ln. pseudomesenteroides strains, which could perhaps be explained by the presence of mobile genetic elements. It may also be due to inaccurate assemblies (Brandt et al., 2020). The putative mobilome of Ln. pseudomesenteroides comprised plasmids, transposable elements, IS elements, and prophage-like elements. Although the mobilome accounts for a minor part of the Ln. pseudomesenteroides genome, occasional transposition of these elements to other regions of the genome is a significant contributor to genomic plasticity and the evolution of bacteria (Arber, 1991). The foreign DNA acquired from transposition results in the existence of the CRISPR-Cas system in variable sites of bacterial species (Li et al., 2021), which confers adaptive immunity to bacterial species to combat invasive elements (Barrangou et al., 2007). Because the CRISPR-Cas system is an instrumental toolbox for Cas-based genome editing, we identified the presence and diversification of CRISPR in the 38 Ln. pseudomesenteroides genomes. We found that 74% of genomes encoded a putative CRISPR system at the species level. This is larger than in lactobacilli (62%) and bacteria in general (46%), suggesting that Ln. pseudomesenteroides could be a potential reservoir for new CRISPR-based tools (Sun et al., 2015). The type IIA CRISPR-Cas system was the primary and single type found in Ln. pseudomesenteroides genomes. Type IIA carries the signature Cas9 programmable endonuclease and is the most widely used CRISPR tool (Jinek et al., 2012). Generally, strains belonging to the same species have similar vaccination records, yielding shared spacers or a similar spacer history (Brandt et al., 2020). We observed limited shared spacer content across Ln. pseudomesenteroides strains. Among the putative type IIA loci, [LN23 and LMGH100], [LMGCF15, LMGCF06, and HPK01], [LN02, LMGTW3, and LMGTW6], [LMGH278 and LMGH61], [LMGH97 and LMGTW1], and [IM1374 and BM2] shared a common spacer history. In a parallel manner, these genomes shared the same clade in the phylogenetic tree (Figure 3B). The distribution of these genomes on the phylogenetic tree implied a possible relationship between the immunity system against exterior genetic material and the evolutionary route (Jiang et al., 2020). Despite the discrepancy in spacer content, a relatively higher degree of similarity was observed in the putative repeats of Ln. pseudomesenteroides (Figure 6B).
This perhaps implies a hypervariability across Ln. pseudomesenteroides strains with regards to CRISPR and genomic rearrangements. A few reports described bacteriocin production in Ln. pseudomesenteroides (Chen et al., 2018;Wang et al., 2018). We found three types of bacteriocins among 38 Ln. pseudomesenteroides strains, namely two unidentified bacteriocin-like structures in AMBR10 and CBA3630 and garvicin Q family class II bacteriocin in MGBC116435. Garvicin Q is a non-lantibiotic class II bacteriocin showing robust activity against Listeria spp and Lactococcus spp. (Tymoszewska et al., 2017). GarQ is a subclass IID bacteriocin biosynthesized by Lactococcus garviae BCC43578 isolated from sausage. Among the IS elements found in 38 Ln. pseudomesenteroides, both Lactococcus garviae and Lactococcus lactis appeared to be IS donors to MGBC116435. Leuconostoc pseudomesenteroides strains are used in mesophilic dairy starter co-cultures in conjunction with lactococci (Server-Busson et al., 1999). We speculate that Ln. pseudomesenteroides MGBC116435 acquired the garvicin Q biosynthesis capability as a competitive inhibition strategy, perhaps to compete with the competitor strain in the same ecological niche (Chaucheyras-Durand and Durand, 2010;Li et al., 2020). Since AMBR10 is a clinical isolate, we do not emphasize its bacteriocin-like structure owing to the potential pathogenicity of this strain. Bacteriocin-like structure found in CBA3630 and garvicin Q in MGBC116435 suggest that screening for unique antimicrobials requires further attention as a consequence of diverse microniches occupied by Ln. pseudomesenteroides strains and a large number of genes without known function. Conclusion Overall, the present study comparatively evaluated 38 Ln. pseudomesenteroides to determine genetic diversity across strains from different ecological niches and their biotechnological potential. Whole genome analysis demonstrated high genomic diversity across the strains, perhaps due to a large portion of accessory genomes, mobile genetic elements, and genes with unknown functions (i.e., hypothetical genes). Furthermore, comparative genomic analysis of the strains paves the way for describing ecological fitness to the host environment, for example, immunity against foreign DNA invasion through the CRISPR-Cas system and carbohydrate fermentation capacity differences seen between plant vs. non-plant associated strains. Only the plantassociated strains were predicted to carry arabinose sugar metabolism, which empowers the adaptation and survival of these strains in plant-associated environments. The present work explored the bacteriocin production capacity, evolutionary adaptation, and ecological fitness of Ln. pseudomesenteroides in the light of comparative genomics and enables genome-guided strain selection for industrial biomanufacturing. The findings of the current study set the baseline for the genetic characterization of Ln. pseudomesenteroides strains. Moreover, present work facilitates genome-guided strain selection with specific biotechnological features for industrial bioprocesses and creates a groundwork for characterizing traits of commercial relevance. Data availability statement Publicly available datasets were analyzed in this study. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary material. Author contributions FO: conceptualization and supervision. 
FO and IG: investigation, data curation, formal analysis, visualization, and writing-original draft. All authors contributed to the article and approved the submitted version.
2023-01-11T15:56:59.161Z
2023-01-11T00:00:00.000
{ "year": 2022, "sha1": "5532f7732648172ac6e22d44c06c744eab5aa312", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "5532f7732648172ac6e22d44c06c744eab5aa312", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
262063124
pes2o/s2orc
v3-fos-license
The effects of carvacrol and p‐cymene on Aβ1‐42‐induced long‐term potentiation deficit in male rats Abstract Aims Alzheimer's disease (AD) is the most common type of dementia in which oxidative stress plays an important role. In this disease, learning and memory and the cellular mechanism associated with it, long‐term potentiation (LTP), are impaired. Considering the beneficial effects of carvacrol (CAR) and p‐cymene against AD, their effect was assessed on in vivo hippocampal LTP in the perforant pathway (PP)‐dentate gyrus (DG) pathway in an Aβ1‐42‐induced rat model of AD. Methods Male Wistar rats were randomly assigned to five groups: sham: intracerebroventricular (ICV) injection of phosphate‐buffered saline, Aβ: ICV Aβ1‐42 injections, Aβ + CAR (50 mg/kg), Aβ + p‐cymene (50 mg/kg), and Aβ + CAR + p‐cymene. Administration of CAR and p‐cymene was done by gavage daily 4 weeks before and 4 weeks after the Aβ injection. The population spike (PS) amplitude and field excitatory postsynaptic potential (fEPSP) slope were determined in the DG in response to stimulation applied to the PP. Results Aβ‐treated rats exhibited impaired LTP induction in the PP‐DG synapses, resulting in a significant reduction in both fEPSP slope and PS amplitude compared to the sham animals. Aβ‐treated rats that consumed either CAR or p‐cymene separately (but not their combination) showed an enhancement in fEPSP slope and PS amplitude of the DG granular cells. Conclusions These data indicate that CAR or p‐cymene can ameliorate Aβ‐associated changes in synaptic plasticity. Surprisingly, the combination of CAR and p‐cymene did not yield the same effect, suggesting a potential interaction between the two substances. | INTRODUCTION Dementia is characterized by a decline in thinking, memory, behavior, cognitive function, calculation, orientation, comprehension, language, learning capacity, and judgment, and in the capability to do daily activities. 1 Dementia mostly affects the elderly, but it is not a normal part of aging. About 50 million people suffer from dementia worldwide, with about 10 million new cases annually. 2 The physical, social, psychological, and economic effects of dementia have been reported to affect the patient as well as their carers, families, and society. 3 Alzheimer's disease (AD) is known as the most common form of dementia, accounting for 60%-70% of cases. 4 The accumulation of beta-amyloid (Aβ) plaques and tau tangles (neurofibrillary tangles, NFT) are some of the brain alterations associated with AD. 5 Lack of cholinergic and adrenergic function, 6 oxidative stress, 7 inflammation, 8 steroid hormone deficiencies, 9 and excitotoxicity 10 have been suggested as other mechanisms for developing AD. 12,13 Synaptic plasticity occurs locally in individual synapses and consists of long-term potentiation (LTP) or long-term depression (LTD). 14,15 AD is associated with the suppression of LTP and an elevation of LTD in the hippocampus. 13,16,17 Aβ 1-42 is the major mediator of the cognitive impairments in AD. 17 Some drugs and therapeutic techniques can temporarily control AD symptoms, but there is no permanent treatment. Therefore, the identification of novel therapeutic candidates to slow down or inhibit AD progression is highly important. 18
None of the available medications for AD stops the neuronal damage and destruction leading to AD symptoms and mortality. Two classes of drugs have been approved for the treatment of AD: acetylcholinesterase inhibitors (donepezil, rivastigmine, and galantamine) and memantine, an NMDA receptor antagonist. Despite significant advances in understanding AD pathogenesis, current therapies address only moderate signs of impaired brain function, 19,20 and there is no effective treatment to control AD-related hippocampal synaptic plasticity impairments. Here, we examined the effect of carvacrol (CAR) (5-isopropyl-2-methylphenol) and p-cymene on Aβ-induced LTP deficit in male rats. CAR and p-cymene have been reported to have therapeutic potential in preventing or modulating AD. CAR is a phenolic monoterpenoid that is present in the essential oil of some aromatic plants, such as species of Zataria, Origanum, Thymbra, Thymus, Satureja, Lepidium flavum, Citrus uranium Bergama, and Coridothymus belonging to the family Lamiaceae, and Lippia of the Verbenaceae. 21,22 The effectiveness and safety of CAR in alleviating cognitive deficits due to an increase in Aβ levels or cholinergic hypofunction in rats have been reported. 23 Also, it has been reported that CAR attenuates cytotoxicity induced by Aβ via activating protein kinase C (PKC) and inhibiting oxidative stress in PC12 cells. 24 CAR has antimicrobial, antioxidant, anti-cancer, and anti-inflammatory activities. 22,25 CAR has also been reported to act as an inhibitor of acetylcholinesterase. 26,27 p-Cymene is a naturally occurring aromatic organic compound classified as an alkylbenzene related to a monoterpene. It is found in essential oils, such as cumin and thyme oils. It is employed as a prominent intermediate in medications as well as a flavoring compound. 28 It improves AD-induced disorders, such as memory impairment, through antioxidant and anti-inflammatory properties and also a direct anti-fibril effect. 29 Also, it has been shown that p-cymene acts as an inhibitor of Aβ peptide aggregation and Aβ-induced cytotoxicity. 30 The novelty of this work lies in the investigation of the effects of CAR and p-cymene on Aβ-induced LTP deficit in male rats. While the effects of CAR and p-cymene in alleviating cognitive deficits due to increased Aβ levels have been reported before, there are no existing data on their specific impact on Aβ-induced LTP impairment. Understanding how these compounds may affect LTP, a crucial neural mechanism underlying memory and learning processes, is important. Considering the beneficial effects of CAR and p-cymene (Figure 1), here we assessed their effect on in vivo hippocampal LTP in the perforant pathway (PP)-dentate gyrus (DG) pathway in an Aβ 1-42 -induced rat model of AD. | Ethics statement The experiments using rats were performed following the animal care and use guidelines confirmed by the Institutional Ethics Committee (IR.UMSHA.REC.1394.582), Hamadan University of Medical Sciences, and according to the National Institutes of Health Guide for Care and Use of Laboratory Animals. 31 Minimizing animal suffering was considered. Experiments leading to pain and distress were conducted in another room where other animals were not present. | Animals and experimental design Male Wistar rats (200-250 g body weight) provided by the animal breeding colony of Hamadan University of Medical Sciences were kept at 22 ± 2°C under a 12/12-h light/dark cycle (lights on at 7 a.m.)
with free access to food and water. The rats were kept in cages with 2-3 animals per cage. One week of adaptation was allowed, and the rats were then randomly assigned to five groups (n = 6-8 per group): sham, Aβ, Aβ + CAR (50 mg/kg), Aβ + p-cymene (50 mg/kg), and Aβ + CAR + p-cymene. Doses of CAR 32,33 and p-cymene 29,34 were selected based on previous reports. Two months later, LTP was induced in the DG using high-frequency stimulation (HFS). Figure 2 displays the experimental design and timeline. | ICV injection of Aβ 1-42 and neurosurgical procedure Aβ 1-42 (1 mg; Tocris Bioscience, Bristol) was dissolved in 1 mL of PBS (vehicle) and incubated (7 days at 37°C) before usage, which led to the formation of Aβ fibrils (with neurotoxic activity). 35,36 Animals were anesthetized by i.p. injection of ketamine (100 mg/kg) and xylazine (10 mg/kg) and transferred to the stereotaxic device (Stoelting Co.). 37 The skull was exposed over the ventricular area based on the following coordinates: 2 mm lateral to the midline, 1.2 mm posterior to bregma, and 4 mm ventral to the surface of the cortex. 38 A Hamilton microsyringe (5 μL; USA) was employed for injections within 5 min, and the injections were done gently (1 μL/min) into the right lateral ventricle. After injection, the syringe was kept in place for 5 min and then gently removed. Animals had a recovery time of 7 days. 39 | The surgical procedure, electrophysiological recording, and LTP induction CAR and p-cymene were administered intragastrically by gavage daily 4 weeks before and 4 weeks after the Aβ injection. Then, after anesthetization with urethane, animals were placed in the stereotaxic device for surgery, implantation of the electrodes, and field potential recording (Figure 3A). [41][42][43][44][45] In brief, under urethane anesthesia (intraperitoneal injection, 1.5 g/kg), the rat's head was fixed in the stereotaxic device for the surgical procedure and electrophysiological recording. Animals' body temperature was kept at 36.5 ± 0.5°C using a heating pad. After drilling small holes in the skull, two Teflon-coated stainless steel bipolar electrodes (125 μm diameter, Advent Co.) were positioned in the right cerebral hemisphere. The stimulating electrode was located in the PP (AP: −8.1 mm from bregma; ML: +4.3 mm from midline; DV: 3.2 mm from the skull surface), whereas the recording electrode was located in the DG granular cell layer (AP: −3.8 mm from bregma; ML: +2.3 mm from midline; DV: 2.7-3.2 mm from the skull surface) based on the Paxinos and Watson atlas. 38,42,46 To minimize trauma to the brain tissue, electrodes were lowered very gently (0.2 mm/min) from the cortex to the hippocampus. Input-output current profiles were obtained through stimulation of the PP to determine the stimulus intensity for each rat (80% of the maximal population spike) (Figure 4). Single biphasic pulses (0.1 ms) were delivered by constant current isolation units (A365 WPI) at a frequency of 0.1 Hz. For each time point, 10 consecutive evoked responses were averaged at a stimulus interval of 10 s. [47][48][49][50] The stimulus parameters were defined in homemade software and sent through a data acquisition board connected to a constant current isolator unit (A365 WPI) before delivery to the PP using the stimulating electrodes. A preamplifier was employed to pass the induced field potential response from the DG, followed by amplification (1000×) (differential amplifier DAM 80 WPI) and filtering (bandpass 1 Hz to 3 kHz). The response was digitized at 10 kHz and observed on a computer (and an oscilloscope). It was then saved in a file for subsequent offline analyses.
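The input-output step described above selects, for each rat, the stimulus intensity that evokes roughly 80% of the maximal population spike. A minimal sketch of that selection by linear interpolation is given below; the intensity and amplitude values are hypothetical and only illustrate the procedure, not the authors' software.

```python
import numpy as np

# Hypothetical input-output profile: stimulus intensities (uA) and the
# population-spike amplitudes (mV) they evoked in one rat.
intensity = np.array([100, 200, 300, 400, 500, 600, 700], dtype=float)
ps_amplitude = np.array([0.1, 0.6, 1.4, 2.3, 2.9, 3.1, 3.2])

target = 0.8 * ps_amplitude.max()  # 80% of the maximal population spike
# Interpolate the intensity that evokes the target amplitude
# (amplitudes increase monotonically in this illustrative profile).
test_intensity = np.interp(target, ps_amplitude, intensity)
print(f"Stimulus intensity for ~80% max PS: {test_intensity:.0f} uA")
```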
| Measurement of evoked potentials PS and fEPSP are two components of the evoked field potential in the DG. During electrophysiological recordings, alterations in PS amplitude and fEPSP slope were examined. 42 Equations 1 and 2 were used to calculate PS amplitude and EPSP slope, respectively (Figure 3B), where ΔV is the potential difference between points 3 and 4, ΔV1 is the potential difference between points 5 and 6, and ΔV2 is the potential difference between points 6 and 7:
PS amplitude = (ΔV1 + ΔV2) / 2 (1)
EPSP slope = ΔV / ΔT (2)
FIGURE 3 Recording of LTP (A) and measurement of PS amplitude and EPSP slope (B). PS amplitude and EPSP slope were determined using Equations 1 and 2, respectively (see the text). ΔT, time difference; ΔV, potential difference.
FIGURE 4 Input-output current profiles were achieved through the stimulation of the PP for determining the stimulus intensity for each rat.
| Statistical analysis Data are expressed as mean ± SEM and were analyzed with GraphPad Prism® 8.0.2 software. The Shapiro-Wilk test was used to check the normal distribution of the data. LTP data were analyzed through two-way repeated-measures ANOVA followed by the Bonferroni test. LTP data were normalized to the mean value of fEPSP slopes and PS amplitudes recorded before LTP induction (Equation 3). 43,51,52 p-values less than 0.05 were regarded as significant. | Effects of CAR and p-cymene on the fEPSP slopes of DG granular cells of Aβ-treated rats Field potential recordings were obtained in the DG granular cells after PP stimulation. Our results indicated that CAR and p-cymene attenuated the LTP deficit induced by Aβ 1-42 . 54,55 CAR has been shown to decrease the expression of caspase-3. 56 Caspases are involved in apoptosis and also play a role in the development of AD. 57,58 They can be activated upon exposure to Aβ. 59 Recent evidence suggests that caspase-3 activation by mitochondria is needed for hippocampal synaptic plasticity. 60 Moreover, it has been reported that Aβ 1-42 -induced inhibition of LTP is mediated by a signaling pathway involving caspase-3, Akt1, and glycogen synthase kinase-3 beta (GSK-3β) 17 in rats and mice, where caspase-3 is needed for the suppression of LTP by Aβ. Accordingly, it can be concluded that CAR may prevent the destructive effects of Aβ on hippocampal LTP by reducing caspase-3 expression. Also, Aβ 1-42 leads to a sequential decrease in the levels of phosphorylated Akt (also known as protein kinase B, PKB), 61 and treatment with CAR increases the levels of phosphorylated Akt. 56 PI3K/Akt is involved in LTP induction in the PP-DG synapses. 62 PKC is also critical for the induction of LTP. 63 Signaling deficits of the PKC pathway are involved in the pathophysiology of AD. 64,65 CAR has been shown to have a stimulatory effect on PKC activity. 24 PKC activation can prevent synaptic loss, an elevation in Aβ, and cognitive impairments in AD mice. 66 In addition, CAR increases the expression of proteins associated with neuronal and synaptic plasticity (F-actin, β-tubulin III, GAP-43, 200-kDa neurofilament, and synapsin-I) and improves bioenergy sensing (p-AMPKα, AMPKα, and ATP). 55 The neuroprotective effects of CAR have also been attributed to its ability to block transient receptor potential channels, inhibit neuronal NOS, and regulate Ca2+ homeostasis. 67 Therefore, it seems that these mechanisms may be associated with the results obtained in the current study; however, there is a need for more investigations in the future.
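A minimal sketch of the per-trace measures defined in the Measurement of evoked potentials subsection above (Equations 1-3), assuming the marked points of Figure 3B have already been extracted as (time, voltage) pairs. The numeric values and helper names are hypothetical, and the use of the mean of ΔV1 and ΔV2 for the PS amplitude follows the reconstruction of Equation 1 given above rather than published analysis code.

```python
def epsp_slope(p3, p4):
    """Equation 2: slope of the rising fEPSP between points 3 and 4 (mV/ms)."""
    (t3, v3), (t4, v4) = p3, p4
    return (v4 - v3) / (t4 - t3)

def ps_amplitude(p5, p6, p7):
    """Equation 1 (as reconstructed): mean of the drops from the two positive
    peaks (points 5 and 7) to the negative trough (point 6), in mV."""
    (_, v5), (_, v6), (_, v7) = p5, p6, p7
    return ((v5 - v6) + (v7 - v6)) / 2

def ltp_percent(value, baseline_values):
    """Equation 3: response after HFS expressed as % of the mean baseline response."""
    baseline = sum(baseline_values) / len(baseline_values)
    return 100.0 * value / baseline

# Hypothetical points (time in ms, voltage in mV) taken from one evoked trace.
slope = epsp_slope((2.0, 0.1), (3.5, 1.3))
amp = ps_amplitude((4.0, 1.8), (5.0, -1.2), (6.2, 1.5))
print(f"fEPSP slope: {slope:.2f} mV/ms, PS amplitude: {amp:.2f} mV")
print(f"LTP at 60 min: {ltp_percent(1.1, [0.7, 0.75, 0.72]):.0f}% of baseline")
```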
| Effects of CAR and p-cymene on the PS amplitude of DG granular cells of Aβ -treated rats p-cymene possesses antioxidant, anti-inflammatory, and direct anti-fibril effects, 29 and can inhibit Aβ aggregation and Aβ-induced cytotoxicity. 30Inflammation has been shown to play a major role in AD pathogenesis 68 and the p38 MAPK pathway plays a key role in this regard. 69There is an association between p38 MAPK pathway immunoreactivity and neurotic Aβ plaques and neurofibrillary tangle-bearing neurons. 69The p38 mitogen-activated protein kinase 75 Also, p-cymene reduced MAPK and NF-κB activity as well as TNFα production. 76Inhibition of LTP by TNFα has been reported in the literature, 77 and p-cymene can prevent the destructive effects of Aβ on hippocampal LTP by reducing TNFα production. Finally, it was interesting to observe that the combined treatment with CAR and p-cymene could not prevent the destructive effects of Aβ on hippocampal LTP, which may be due to the extensive elimination of reactive oxygen species (ROS).ROS act as signaling molecules and regulate many physiological processes. 78,79They also cause reversible post-translational protein changes for regulating signaling pathways. 78ROS at normal levels play a role in mediating several cellular responses, such as cell growth and immunity. 80ROS generation is caused by basic metabolic processes; nonetheless, high levels of ROS result in DNA damage, lipid peroxidation, and even cell death. 81,82CAR and p-cymene have antioxidant activity.However, the imbalance of the excessive antioxidant capacity (antioxidative stress) is just as harmful as oxidative stress.Despite destructive effects, the consequences of oxidative stress may be beneficial for several physiological processes in cells.On the other hand, "antioxidative stress," particularly in the cases of overconsumption of synthetic antioxidants is associated with destructive effects. 83The combined treatment with CAR and p-cymene may lead to antioxidant stress that failed to counteract the destructive effects of Aβ on hippocampal LTP.It is possible that, despite individually demonstrating potential therapeutic benefits, the combination of CAR and p-cymene did not produce a synergistic effect and may have even interacted in a way that reduced their overall effectiveness.Additionally, the timing of treatment may have been suboptimal, as the compounds may have needed to be administered earlier or later in the disease progression to have a positive impact.Further studies are needed in this regard. | CON CLUS ION In summary, treatment with the CAR or p-cymene alone can prevent synaptic plasticity impairment caused by Aβ 1-42 .Interestingly, our results showed that the combined treatment with CAR and p-cymene could not prevent the destructive effects of Aβ on hippocampal LTP. Further investigations are needed to determine the detailed mechanism (s) of action of CAR or p-cymene. 
FIGURE 1 Chemical structure of carvacrol and p-cymene.
The five experimental groups (n = 6-8 per group) were: group 1 (sham; intracerebroventricular [ICV] injection of phosphate‐buffered saline [PBS] into the right lateral ventricle and administration of saline through gavage once daily, starting 4 weeks before and continuing until 4 weeks after the ICV injection), group 2 (Aβ; ICV injection of Aβ 1-42 and administration of saline through gavage once daily, starting 4 weeks before and continuing until 4 weeks after the ICV injection), group 3 (Aβ + CAR; 50 mg/kg of CAR administered via oral gavage, once a day, for 4 weeks before and 4 weeks after the Aβ injection), group 4 (Aβ + p-cymene; 50 mg/kg of p-cymene administered through gavage, once a day, 4 weeks before and 4 weeks after the Aβ injection), and group 5 (Aβ + CAR + p-cymene; 50 mg/kg of CAR and 50 mg/kg of p-cymene administered through gavage, once a day, 4 weeks before and 4 weeks after the Aβ injection).
FIGURE 2 Experimental design and timeline. Following 1 week of adaptation, the rats received carvacrol (CAR) and/or p-cymene via oral gavage 4 weeks before and 4 weeks after the ICV Aβ 1-42 injection. Then, after anesthetization with urethane, animals were positioned in a stereotaxic device for surgery, implantation of the electrodes, and field potential recording. After observing a stable baseline for at least 40 min, LTP induction was done with high-frequency stimulation (HFS) (10 bursts of 20 stimuli, 0.2 ms stimulus duration, 10-s interburst interval) in the dentate gyrus (DG) of the rats.
The field potential recordings were obtained in the DG granular cells after stimulating the PP. The PP received test stimuli every 10 s. Electrodes were positioned to elicit the maximal amplitude of the population spike (PS) and field excitatory postsynaptic potentials (fEPSP). After ensuring a steady-state baseline response that lasted about 40 min, LTP induction was done using an HFS protocol of 400 Hz (0.2 ms stimulus duration, 10 bursts of 20 stimuli, 10-s interburst interval) at an intensity that could evoke a PS amplitude and fEPSP slope of approximately 80% of the maximum response. The fEPSP and PS were recorded 5, 30, and 60 min following the HFS to determine the alterations in the synaptic response of DG neurons.
Figure 5 displays a representative example of evoked field potential in the DG recorded before and 60 min following HFS. The effects of CAR and p-cymene on the EPSP slopes as well as the PS amplitudes of Aβ-treated rats are illustrated in Figures 6 and 7, respectively. HFS did not induce LTP in Aβ-treated rats (F [3, 20] = 1.740, p = 0.1911, one-way ANOVA). The one-way ANOVA was used to test whether there were significant differences before and after LTP induction (at different time points). Aβ-treated rats showed a significantly smaller change in the slope of the fEPSP immediately and 60 min following HFS than the sham group. A two-way ANOVA was used to assess the differences between the groups. A significant effect of time (F [2.463, 66.50] = 43.32, p < 0.0001) and treatment (F [4, 27] = 6.197, p = 0.0011) on the EPSP slope of the DG granular cells was observed (Figure 6). The post-hoc analysis revealed a significant difference between the sham and Aβ-treated groups. The EPSP slope was reduced in the Aβ-treated rats compared with the sham group (p = 0.0015; Figure 6). CAR or p-cymene consumption by the Aβ-treated rats increased the EPSP slope of the DG granular cells (p < 0.05; Figure 6). The changes in the fEPSP slope immediately and 60 min following HFS were significantly greater in the Aβ + CAR and Aβ + p-cymene groups than in the Aβ-treated rats. A significant effect of time (F [3, 120] = 64.80, p < 0.0001) and treatment (F [4, 120] = 6.510, p < 0.0001) on the PS amplitude of the DG granular cells was observed (Figure 7). The post-hoc analysis revealed a significant difference between the sham and Aβ-treated groups (p < 0.05; Figure 6). PS amplitude was reduced in the Aβ-treated rats compared with the sham group. CAR and p-cymene consumption by the Aβ-treated rats increased the PS amplitude of the DG granular cells (p < 0.05; Figure 7). 4 | DISCUSSION We investigated the effect of the treatment with CAR and p-cymene on in vivo hippocampal LTP in the PP-DG pathway in the Aβ 1-42 -induced rat model of AD. Aβ impaired LTP induction in the PP-DG synapses. This observation is confirmed by a decrease in the EPSP slope and PS amplitude of LTP. Therefore, the observations of previous studies, in which Aβ caused synaptic plasticity impairment, were confirmed. In the present study, it was observed for the first time that treatment with CAR or p-cymene alone can prevent the destructive effects caused by Aβ on hippocampal synaptic plasticity. Interestingly, our results showed that the combined treatment with CAR and p-cymene could not prevent the destructive effects of Aβ on hippocampal LTP.
LTP (%) = (the fEPSP or PS value after HFS induction / the average fEPSP or PS at baseline) × 100 (3)
FIGURE 5 Representative sample traces of evoked field potential in the dentate gyrus recorded before and 60 min following the high-frequency stimulation in all groups.
F I G U R E 6 Time-dependent alterations in hippocampal responses against perforant path stimulation after high-frequency stimulation (HFS).The groups were found with significantly different long-term potentiation (LTP) of the EPSP slope in dentate gyrus (DG) granular cell synapses of the hippocampus.The left panel displays changes (%) in the fEPSP slope versus time after HFS in all groups.Bar graphs display the average fEPSP slope changes (%) within 60 min after HFS.Treatment with carvacrol (CAR) and p-cymene (but not their combination) prevented Aβ-induced impairment of LTP expressed as the slope of fEPSP in the DG.Data are expressed as mean ± SEM % of baseline.*p < 0.05 and **p < 0.01.F I G U R E 7Time-dependent alterations in hippocampal responses against perforant path stimulation after high-frequency stimulation (HFS).The groups were found with significantly different long-term potentiation (LTP) of the PS amplitude in dentate gyrus (DG) granular cell synapses of the hippocampus.The left panel displays PS amplitude changes (%) versus time after HFS in all groups.Bar graphs display the average PS amplitude changes (%) within 60 min after HFS.Treatment with carvacrol (CAR) and p-cymene (but not their combination) prevented Aβ-induced impairment of LTP expressed as the slope of the fEPSP in the DG.Data are expressed as mean ± SEM % of baseline.
2023-09-21T06:17:31.787Z
2023-09-19T00:00:00.000
{ "year": 2023, "sha1": "b7b8140a4f1a89b9b6c29da36a8d1e111d64007d", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/cns.14459", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "36bb2fd21ec8a2560dc1198b55319a0cf3ff2441", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
216641931
pes2o/s2orc
v3-fos-license
Zero-Determinant strategies in repeated multiplayer social dilemmas with discounted payoffs In two-player repeated games, Zero-Determinant (ZD) strategies enable a player to unilaterally enforce a linear payoff relation between her own and her opponent's payoff irrespective of the opponent's strategy. This manipulative nature of the ZD strategies attracted significant attention from researchers due to its close connection to distributively controlling the outcome of evolutionary games in large populations. In this paper, necessary and sufficient conditions are derived for a payoff relation to be enforceable in multiplayer social dilemmas with a finite expected number of rounds that is determined by a fixed and common discount factor. Thresholds exist for such a discount factor above which desired payoff relations can be enforced. Our results show that depending on the group size and the ZD-strategist's initial probability to cooperate there exist extortionate, generous and equalizer ZD-strategies. The threshold discount factors rely on the desired payoff relation and the variation in the single-round payoffs. To show the utility of our results, we apply them to multiplayer social dilemmas, and show how discounting affects ZD Nash equilibria. I. INTRODUCTION THE functionalities of many complex social systems rely on their composing individuals' willingness to set aside their personal interest for the benefit of the greater good [18]. One mechanism for the evolution of cooperation is known as direct reciprocity: even if in the short run it pays off to be selfish, mutual cooperation can be favoured when the individuals encounter each other repeatedly. Direct reciprocity is often studied in the standard model of repeated games, and it is only recently, inspired by the discovery of a novel class of strategies called zero-determinant (ZD) strategies [20], that repeated games began to be examined from a new angle by investigating the level of control that a single player can exert on the average payoff of the opponents. In [20], Press and Dyson showed that in infinitely repeated 2 × 2 prisoner's dilemma games, if a player can remember the actions in the previous round, this player can unilaterally impose some linear relation between his/her own payoff and that of the opponent. It is emphasized that this enforced linear relation cannot be avoided even if the opponent employs some intricate strategy with a larger memory. Such strategies are called zero-determinant because they enforce a part of the transition matrix to have a determinant that is equal to zero. Later, ZD strategies were extended to games with more than two possible actions [23], continuous action spaces [17], alternative moves [16], and observation errors [15]. These game-theoretical advancements were subsequently applied to a variety of engineering contexts including cybersecurity in smart grid systems [3], sharing of spectrum bands and resources [2], [28], and power control of small cell networks [29]. The success of ZD strategies was also examined from an evolutionary perspective in [6], [24]. For a given population size, in the limit of weak selection it was shown in [25] that all ZD strategies that can survive an invasion of any memory-one strategy must be "generous", namely enforcing a linear payoff relation that favors others.
This surprising fact was tested experimentally in [7]. In [5] the literature on ZD strategies, direct reciprocity and evolution is reviewed. Most of the literature focuses on two-player games; however, in [19] the existence of ZD-startegies in infinitely repeated public goods games was shown by extending the arguments in [20] to a symmetric public goods game. Around the same time, characterization of the feasible ZD strategies in multiplayer social dilemmas and those strategies that maintain cooperation in such multiplayer games were reported in [9]. Both in [9] and [19] it was noted that group size n imposes restrictive conditions on the set of feasible ZD strategies and that alliances between co-players can overcome this restrictive effect of the group size. The evolutionary success of ZD strategies in such multiplayer games was studied in [10] and the results show that sustaining large scale cooperation requires the formation of alliances. ZD strategies for repeated 2 × 2 games with discounted payoffs were defined and characterized in [8]. In this setting, the discount factor may also be interpreted as a continuation probability that determines the finite number of expected rounds. The threshold discount factors above which the ZD strategies can exist were derived in [11]. In this paper we use the framework of ZD strategies in infinitely repeated multiplayer social dilemmas from [9] and extend it to the case in which future payoffs are discounted with a fixed and common discount factor, or equivalently, to a repeated game with a finite expected number of rounds. We then build upon our results in [4], in which enforceable payoff relations were characterized, by developing new theory that allows us to express threshold discount factors that determine how fast a desired linear payoff relation can be enforced in a multiplayer social dilemma game. These results extend the work of [11] to a broad class of multiplayer social dilemmas. Our general results are applicable to multiplayer and two player games and can be applied to a variety of complex social dilemma settings including the famous prisoner's dilemma, the public goods game, the volunteer's dilemma, the multiplayer snowdrift game and much more. The derived threshold discount factors show how the group size and the payoff functions of the social dilemma affect one's possibilities for exerting control given a constraint on the expected number of interactions, and shows how the discount factor affects Nash equilibria of the repeated game. These results can thus be used to investigate, both analytically and experimentally, the effect of the group size and the initial condition on the level of control that a single player can exert in a repeated multiplayer social dilemma game with a finite but undetermined number of rounds. From an evolutionary perspective, our results may also open the door for novel control techniques that seek to achieve or sustain cooperation in large social systems that evolve under evolutionary forces. [21] The paper is organized as follows. In section II, preliminaries concerning the game model and strategies are provided. In section III, the mean distribution of the repeated multiplayer game with discounting and the relation to a memory-one strategy is given. In section IV, ZD strategies for repeated multiplayer games with discounting are defined, and in section V the enforceable payoff relations are characterized. 
In section VI, threshold discount factors are given for generous, extortionate and equalizer ZD strategies. We apply our results to the multiplayer linear public goods game and the multiplayer snowdrift game in Section VII. In Section VIII, we provide the proofs of our main results. We conclude the paper in Section IX. A. Symmetric multiplayer games In this paper we consider multiplayer games in which n ≥ 2 players can repeatedly choose to either cooperate or defect. The set of actions for each player is denoted by A = {C, D}. The actions chosen in the group in round t of the repeated game are described by an action profile σ_t ∈ A = {C, D}^n. A player's payoff in a given round depends on the player's own action and the actions of the n − 1 co-players. In a group in which z co-players cooperate, a cooperator receives payoff a_z, whereas a defector receives b_z. As in [9], [19] we assume the game is symmetric, such that the outcome of the game depends only on one's own decision and the number of cooperating co-players, and hence does not depend on which of the co-players have cooperated. Accordingly, the payoffs of all possible outcomes for a player can be conveniently summarized as in Table I:

Number of cooperating co-players z:   n−1      n−2      . . .   1     0
Cooperator's payoff:                  a_{n−1}  a_{n−2}  . . .   a_1   a_0
Defector's payoff:                    b_{n−1}  b_{n−2}  . . .   b_1   b_0

We have the following assumptions on the single-round payoffs of the symmetric multiplayer game. Assumption 1 (Social dilemma assumption [9], [12]). The payoffs of the symmetric multiplayer game satisfy the following conditions: a) For all 0 ≤ z < n − 1, it holds that a_{z+1} ≥ a_z and b_{z+1} ≥ b_z: irrespective of one's own action, players prefer other group members to cooperate. b) For all 0 ≤ z < n − 1, it holds that b_{z+1} > a_z: within a mixed group, defectors obtain strictly higher payoffs than cooperators. c) a_{n−1} > b_0: mutual cooperation is favored over mutual defection. Assumption 1 is standard in multiplayer social dilemma games and ensures that there is an immediate benefit to defect against cooperators, while mutual cooperation leads to a better, if not the best, collective outcome. Example 1 (Public goods game). As an example of a game that satisfies Assumption 1, consider a public goods game in which each cooperator contributes an amount c > 0 to a public good. The sum of the contributions gets multiplied by an enhancement factor 1 < r < n and is then divided evenly among all group members. The payoff of a cooperator is a_z = rc(z + 1)/n − c, while the payoff of a defector is b_z = rcz/n, for z = 0, 1, . . . , n − 1. Example 2 (Multiplayer snowdrift game). Another example is the multiplayer snowdrift game, which traditionally describes a situation in which cooperators need to clear out a snowdrift so that everyone can go on their merry way. By clearing out the snowdrift together, cooperators share a cost c required to create a fixed benefit b [13], [22], [26], [30]. If a player cooperates together with z group members, their one-shot payoff is a_z = b − c/(z + 1). If there is at least one cooperator (z > 0) who clears out the snowdrift, then defectors obtain a benefit b_z = b. If no one cooperates, the snowdrift will not be cleared and everyone's payoff is b_0 = 0. B. Strategies In repeated games the players must choose how to update their actions as the game interactions are repeated over rounds. A strategy of a player determines the conditional probabilities with which actions are chosen by the player. To formalize this concept we introduce some additional notation. A history of plays up to round t is denoted by h_t = (σ_0, σ_1, . . .
, σ_{t−1}) ∈ A^t such that each σ_k ∈ A for all k = 0, . . . , t − 1. The union of possible histories is denoted by H = ∪_{t=0}^∞ A^t, with A^0 = ∅ being the empty set. Finally, let ∆(A) denote the probability distribution over the action set A. As is standard in the theory of repeated games, a strategy of player i is then defined by a function ρ : H → ∆(A) that maps the history of play to the probability distribution over the action set. An interesting and important subclass of strategies are those that only take into account the action profile in round t − 1 (i.e., σ_{t−1} ∈ h_t) to determine the conditional probabilities to choose some action in round t + 1. Correspondingly, these strategies are called memory-one strategies. The theory of Press and Dyson showed that, for determining the best performing strategies in terms of payoffs in two-action repeated games, it is sufficient to consider only the space of memory-one strategies [20], [23]. III. MEMORY-ONE STRATEGIES IN REPEATED MULTIPLAYER GAMES WITH DISCOUNTING In this section we zoom in on a particular player that employs a memory-one strategy in the multiplayer game and refer to this player as the key player. In particular, we focus on the relation between the mean distribution of the action profile and the memory-one strategy of the key player. Let p_σ ∈ [0, 1] denote the probability that the key player cooperates in the next round given that the current action profile is σ ∈ A. By stacking the probabilities for all possible outcomes into a vector, we obtain the memory-one strategy p = (p_σ)_{σ∈A} whose elements determine the conditional probability for the key player to cooperate in the next round. Accordingly, the repeat strategy p^rep = (p^rep_σ)_{σ∈A} gives the probability to cooperate when the current action is simply repeated. To be more precise, write σ = (σ_i, σ_{−i}), where σ_i is the key player's action and σ_{−i} collects the co-players' actions. Then, for all σ_{−i}, the entries of the repeat strategy are given by p^rep_{(C,σ_{−i})} = 1 and p^rep_{(D,σ_{−i})} = 0. To describe the relation between the memory-one strategy and the mean distribution of the action profile we introduce some additional notation. Let v_σ(t) denote the probability that the outcome of round t is σ ∈ A, and let v(t) = (v_σ(t))_{σ∈A} be the vector of these outcome probabilities. As in [8], [11], [16], [17] we focus on repeated games with a finite but undetermined number of rounds. Given the current round, a fixed and common discount factor, or continuation probability, 0 < δ < 1 determines the probability that a next round takes place. By taking the limit of the geometric sum of δ, the expected number of rounds is 1/(1 − δ). As in [8], the mean distribution of v(t) is

v = (1 − δ) Σ_{t=0}^∞ δ^t v(t). (1)

In this paper we are interested in the expected and discounted payoffs of the players in the repeated game. Let g^i_σ denote the single-round payoff that player i receives in the action profile σ. The vector g^i = (g^i_σ)_{σ∈A} thus contains all possible payoffs of player i in a given round. The expected single-round payoff of player i in round t is then given by π_i(t) = g^i · v(t). The average discounted payoff of player i in the repeated game is then [14] π_i = (1 − δ) Σ_{t=0}^∞ δ^t π_i(t) = g^i · v. The following lemma relates the limit distribution v to the memory-one strategy p of the key player. The presented lemma is a straightforward multiplayer extension of the two-player case that is given in [8] and relies on the fundamental results from [1]. Lemma 1 (A fundamental relation). Suppose the key player applies memory-one strategy p and the strategies of the other players are arbitrary, but fixed.
Then, it holds that

(p^rep − δ p) · v = (1 − δ) p_0, (2)

where p_0 is the key player's initial probability to cooperate. Proof: The probability that i cooperates in round t is q_C(t) = p^rep · v(t), and the probability that i cooperates in round t + 1 is q_C(t + 1) = p · v(t). Now define u(t) = q_C(t + 1) − q_C(t). (3) Multiplying equation (3) by (1 − δ)δ^t and summing up over t = 0, . . . , τ we obtain a telescoping sum. The result follows by substituting the definition of u(t) and of v and letting τ → ∞. Remark 1. Note that in the limit δ → 1, the infinitely repeated game is recovered. In this setting, the expected number of rounds is infinite. If the limit exists, the payoff is given by the time average π_i = lim_{τ→∞} (1/(τ + 1)) Σ_{t=0}^τ π_i(t). By Akin's Lemma (see [1], [9]), for the repeated game without discounting, irrespective of the initial probability to cooperate, it holds that (p − p^rep) · v = 0. Hence, a key difference between the repeated game with and without discounting is that p_0 remains important for the relation between the memory-one strategy p and the mean distribution v when the game has a finite number of expected interactions. In the limit δ → 1, the importance of the initial conditions on the relation between p and v disappears [9]. IV. ZD STRATEGIES IN REPEATED MULTIPLAYER GAMES WITH DISCOUNTING We now investigate the effect that the key player's memory-one strategy can have on the average discounted payoffs in the repeated game. We will use i to indicate the key player and j to indicate his/her co-players. Let g^j_σ denote the single-round payoff of player j in action profile σ ∈ A, and let g^j = (g^j_σ)_{σ∈A} be the vector of these payoffs. Based on Lemma 1 we now formally define a ZD strategy for a multiplayer game with discounting. For this we let 1 = (1)_{σ∈A}. Definition 1 (ZD strategy). A memory-one strategy p with all entries in the closed unit interval is a ZD strategy if there exist constants s, l, φ and weights w_j, for j ≠ i, such that p satisfies equation (4), under the conditions that φ ≠ 0 and Σ_{j≠i} w_j = 1. Remark 2. When δ = 1, the ZD strategy in Definition 1 recovers the ZD strategies studied in [9]. We will elaborate on the effect of the discount factor δ in Sections VI and VII. When the weights are identical, i.e., w_j = 1/(n − 1) for all j ≠ i, the formulation of the ZD strategy for a symmetric multiplayer social dilemma can be simplified using only the number of cooperators in the social dilemma. To this end, let g^{−i}_{σ_i,z} denote the average single-round payoff of the n − 1 co-players of i when player i selects action σ_i ∈ {C, D} and 0 ≤ z ≤ n − 1 co-players cooperate. Using the single-round payoffs in Table I this can be written as g^{−i}_{C,z} = [z a_z + (n − 1 − z) b_{z+1}]/(n − 1) and g^{−i}_{D,z} = [z a_{z−1} + (n − 1 − z) b_z]/(n − 1). We obtain g^{−i} = (g^{−i}_{σ_i,z}) by stacking these payoffs into a vector. Similarly, let v_{σ_i,z}(t) denote the probability that at round t, player i chooses action σ_i and 0 ≤ z ≤ n − 1 co-players cooperate, and let v(t) = (v_{σ_i,z}(t)) ∈ [0, 1]^{2n} be the vector of these outcome probabilities. The expected payoff of player i at time t is again given by π_i(t) = g^i · v(t). Moreover, the average expected payoff of the co-players at time t can be conveniently written as π_{−i}(t) = g^{−i} · v(t). The mean distribution of v(t) is again obtained using (1), but now the entries of v provide the fraction of rounds in the repeated game in which player i chooses σ_i and z co-players cooperate. Then, as before, π_i = g^i · v and π_{−i} = g^{−i} · v, which leads to the corresponding reduced form of the ZD strategy. Let w = (w_j) ∈ R^{n−1} denote the vector of weights that the ZD strategist assigns to her co-players. The following proposition shows how the ZD strategy can enforce a linear relation between the key player's average discounted payoff (from now on simply called payoff) and a weighted average payoff of his/her co-players.
Proposition 1 (Enforcing a linear payoff relation). Suppose the key player employs a fixed ZD strategy with parameters s, l and weights w as in Definition 1. Then, irrespective of the fixed strategies of the remaining n − 1 co-players, payoffs obey the equation

π_{−i} − l = s (π_i − l), (5)

where π_{−i} = Σ_{j≠i} w_j π_j. Proof: The proof follows by substituting (4) into (2). For positive slopes s and weights w_j > 0 for all j ≠ i, the linear payoff relation in (5) ensures that the collective best response of the co-players also maximizes the benefits of the key player. This is particularly interesting in social dilemmas in which the payoff increases with the number of cooperating co-players. The strength of the correlation between the payoffs is determined by the slope s of the linear payoff relation. For positive slopes 0 < s < 1, the baseline payoff results in a generous (l = a_{n−1}) or extortionate (l = b_0) payoff relation. The former typically implies a relative performance in which co-players, on average, do better than the ZD strategist (π_i ≤ π_{−i}), while the latter typically implies that the ZD strategist outperforms the average payoff of his/her co-players (π_{−i} ≤ π_i) [9]. The theoretical ability of generous and extortionate ZD strategies to promote selfless cooperation of co-players was empirically studied in [7], [27]. Two special cases of the linear payoff relation remain of interest. When s = 1 the average payoff of the co-players is equal to the payoff of the key player. Such ZD strategies are called fair and were proven to exist in infinitely repeated social dilemmas in [9]. In the other extreme, s = 0, payoffs are not correlated, but the key player can set the average payoff of the co-players to the baseline payoff l. Table II summarizes the most studied ZD strategies. Because the entries of the ZD strategy correspond to conditional probabilities, they are required to belong to the unit interval, and not every linear payoff relation with parameters s, l can be enforced. For repeated games with discounting, the discount factor that determines the expected number of rounds is part of the ZD strategy and therefore influences the set of enforceable payoff relations. We will focus on the role of the discount factor δ in the remainder of the paper. Consider the following definition that was given in [8] for two-player games. Definition 2 (Enforceable payoff relations). Given a discount factor 0 < δ < 1, a payoff relation (s, l) ∈ R^2 with weights w is enforceable if there are φ ∈ R and p_0 ∈ [0, 1], such that each entry in δp according to equation (4) is in [0, δ]. We indicate the set of enforceable payoff relations by E_δ. An intuitive implication of decreasing the expected number of rounds in the repeated game (by decreasing δ) is that the set of enforceable payoff relations will decrease as well. This monotone effect is formalized in the following proposition that extends a result from [8] to the multiplayer case. Proposition 2. If 0 < δ′ ≤ δ < 1, then E_{δ′} ⊆ E_δ. Proof: Albeit with different formulations of p, the proof follows from the same argument used in the two-player case [9]. It is presented here to make the paper self-contained. From Definition 2, (s, l) ∈ E_δ if and only if one can find φ and p_0 ∈ [0, 1] such that inequality (6) holds. Then, by substituting (4) into the above inequality, we obtain inequality (7). Now observe that p_0(1 − δ)1 on the left-hand side of inequality (7) is decreasing for increasing δ. Moreover, δ1 + (1 − δ)p_0 1 on the right-hand side of the inequality is increasing for increasing δ.
The middle part of the inequality, which is exactly the definition of a ZD strategy for the infinitely repeated game in [9], is independent of δ. It follows that by increasing δ the range of possible ZD parameters (s, l, φ) and p_0 increases, and hence if 0 ≤ p ≤ 1 is satisfied for some δ′, then it is also satisfied for any δ ≥ δ′. We are now ready to state the existence problem studied in this paper. Problem 1 (The existence problem). For the class of multiplayer social dilemmas with payoffs as in Table I that satisfy Assumption 1, what are the enforceable payoff relations when the expected number of rounds is finite, i.e., δ ∈ (0, 1)? Characterizing the set of enforceable payoff relations is important not only because it describes the possibilities for a single player to exert control in the repeated game, but also because it allows us to characterize the equilibrium set for ZD strategies. If all weights are equal and players apply the same ZD strategy, then all players' payoff is l. The incentive to deviate from the common ZD strategy can then be analysed with respect to the enforced baseline payoff l and the enforced linear payoff relations to obtain Nash equilibrium conditions. If the set of enforceable payoff relations includes the minimum and maximum average group payoff per round, then the Nash equilibrium conditions can be extended to arbitrary "mutant" strategies [9]. Including the discount factor in the characterization of the enforceable payoff relations thus allows us to explain how Nash equilibria of the repeated social dilemma can change under the influence of discounting. In Section VII, we return to this using Examples 1 and 2. V. ENFORCEABLE PAYOFF RELATIONS IN MULTIPLAYER SOCIAL DILEMMAS WITH DISCOUNTING In this section, we present our results on the existence problem. The proofs of these results are found in Section VIII. We begin by formulating conditions on the parameters of the ZD strategy that are necessary for the payoff relation to be enforceable in the finitely repeated multiplayer game. Proposition 3 (Necessary conditions). The enforceable payoff relations (l, s, w) in the repeated multiplayer game with δ ∈ (0, 1) and single-round payoffs as in Table I that satisfy Assumption 1 must satisfy b_0 ≤ l ≤ a_{n−1}, with at least one strict inequality. In the following theorem we extend the results from [9] to multiplayer social dilemmas with discounting. To write the statement compactly, we let a_{−1} = b_n = 0. Moreover, let ŵ_z = min_{w_h ∈ w} (Σ_{h=1}^z w_h) denote the sum of the z smallest weights, and let ŵ_0 = 0. Theorem 1 (Characterization of enforceable payoff relations). For the repeated multiplayer game with one-shot payoffs as in Table I that satisfy Assumption 1, the payoff relation (s, l) ∈ R^2 with weights w ∈ R^{n−1} is enforceable for some δ ∈ (0, 1) if and only if −1/(n − 1) < s < 1 and the inequalities in (8) hold, where at least one inequality in (8) is strict. Remark 4. For n = 2 the full weight is placed on the single opponent, i.e., ŵ_1 = 1. When the payoff parameters are defined as b_1 = T, b_0 = P, a_1 = R, a_0 = S, the result in Theorem 1 recovers the earlier result in [8]. An immediate consequence of Theorem 1 is that fair strategies with s = 1, which always exist in infinitely repeated social dilemmas without discounting [9], never exist in these games when payoffs are discounted. For example, proportional Tit-for-Tat, which is a fair ZD strategy for the infinitely repeated public goods game [9], is not a ZD strategy in the repeated public goods game with discounting.
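To make the necessary conditions concrete, the sketch below evaluates the public goods payoffs of Example 1 for assumed parameters (n = 4, r = 3, c = 1), checks Assumption 1, and prints the ranges allowed by Proposition 3 and Theorem 1 for the baseline payoff l and the slope s. It only tests the stated inequalities; the full condition (8) of Theorem 1 is not reproduced here.

```python
n, r, c = 4, 3.0, 1.0  # assumed group size, enhancement factor, contribution

# Example 1: public goods single-round payoffs for z cooperating co-players.
a = [r * c * (z + 1) / n - c for z in range(n)]  # cooperator's payoff a_z
b = [r * c * z / n for z in range(n)]            # defector's payoff b_z

# Assumption 1: (a) monotonicity, (b) defectors beat cooperators in mixed groups,
# (c) mutual cooperation beats mutual defection.
assert all(a[z + 1] >= a[z] and b[z + 1] >= b[z] for z in range(n - 1))
assert all(b[z + 1] > a[z] for z in range(n - 1))
assert a[n - 1] > b[0]

# Necessary ranges from Proposition 3 and Theorem 1.
print(f"baseline payoff l must lie in [{b[0]}, {a[n-1]}]")   # b_0 <= l <= a_(n-1)
print(f"slope s must lie in (-{1/(n-1):.3f}, 1)")            # -1/(n-1) < s < 1
```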
In Section VII, we apply our results to two well-known multiplayer social dilemmas to illustrate the crucial role of δ in the possibilities for enforcing a linear payoff relation. Theorem 1 does not stipulate any conditions on the key player's initial probability to cooperate other than p 0 ∈ [0, 1]. However, the existence of extortionate and generous strategies does depend on the value of p 0 . This is formalized in the following Lemma that was also observed in [9], [11]. Lemma 2 (Necessary conditions on p 0 ). For the existence of extortionate strategies it is necessary that p 0 = 0. Moreover, for the existence of generous ZD strategies it is necessary that p 0 = 1. These requirements on the key player's initial probability to cooperate make intuitive sense. In a finitely repeated game, if the key player aims to be an extortioner that profits from the cooperative actions of others, she cannot start to cooperate because she could be taken advantage off by defectors. On the other hand, if she aims to be generous, she cannot start as a defector because this will punish both cooperating and defecting co-players. The requirements on the key player's initial probability to cooperate are also useful in characterizing the effect of the discount factor δ on the set of enforceable slopes. This will be investigated in the next section. VI. THRESHOLDS ON DISCOUNT FACTORS In the previous section we have characterized the enforceable payoff relations of ZD strategies in multiplayer social dilemma games with discounted payoffs. Our conditions generalize those obtained for two-player games and illustrate how a single player can exert control over the outcome of a multiplayer social dilemma with a finite number of expected rounds. The conditions that result from the existence problem do not specify requirements on the discount factor other than δ ∈ (0, 1). However, as we will see the discount factor or "patience" of the players in the multiplayer social dilemma heavily influences the possibilities to exert control in the repeated multiplayer social dilemma. Threshold discount factors, above which a payoff relation can be enforced, provide insight into the minimum number of expected interactions that are required to enforce a desired linear payoff relation. In this section we address the following problem, that was studied for two player games in [11]. Problem 2 (The minimum threshold problem). Suppose the desired payoff relation (s, l) ∈ R 2 satisfies the conditions in Theorem 1. What is the minimum δ ∈ (0, 1) under which the linear relation (s, l) with weights w can be enforced by the ZD strategist? We consider the three classes of ZD strategies separately. Before giving the main results it is necessary to introduce some additional notation. Definew z = max w h ∈w z h=1 w h to be the maximum sum of weights for some permutation of σ ∈ A with z cooperating co-players. Additionally, for a payoff relation (s, l) ∈ R 2 and weights w ∈ R n−1 define In the following, we will use these extrema to derive threshold discount factors for extortionate, generous and equalizer strategies in symmetric multiplayer social dilemma games. The proofs of our results can be found in Section VIII. A. Extortionate ZD strategies We first consider the case in which l = b 0 and 0 < s < 1, such that the ZD strategy is extortionate. We have the following result. Theorem 2 (Extortion thresholds). Assume p 0 = 0 and the payoff relation (s, b 0 ) ∈ R 2 satisfies the conditions in Theorem 1, then ρ C > 0 and ρ D + ρ C > 0. 
Moreover, the threshold discount factor above which the extortionate payoff relation can be enforced is determined by B. Generous ZD strategies If a player instead aims to be generous, in general, different thresholds will apply. Thus, we now consider the case in which l = a n−1 and 0 < s < 1 such that the ZD strategy is generous. Theorem 3 (Generosity thresholds). Assume p 0 = 1 and the payoff relation (s, a n−1 ) ∈ R 2 satisfies the conditions in Theorem 1. Then ρ D > 0 and ρ C + ρ D > 0. Moreover, the threshold discount factor above which the generous payoff relation can be enforced is determined by C. Equalizer ZD strategies The existence of equalizer strategies with s = 0 does not impose any requirement on the initial probability to cooperate. In general, one can identify different regions of the unit interval for p 0 in which different threshold discount factors exist. For instance, the boundary cases can be examined in a similar manner as was done for extortionate and generous strategies and, in general, will lead to different requirements on the discount factor. In this section, we derive conditions for the discount factor such that the equalizer payoff relation can be enforced for a variable initial probability to cooperate that is within the open unit interval. Theorem 4 (Equalizer thresholds). Let s = 0 and assume l satisfies the bounds in Theorem 1. The equalizer payoff relation can be enforced for p 0 ∈ (0, 1) if and only if the following inequalities hold In this case, δ τ is determined by the maximum right-hand-side of (10)- (13). These conditions on δ also hold when s = 0 and b 0 < l < a n−1 . Remark 5. Because the maxima and minima in (9) depend on the slope s, the expressions of the threshold discount factors for a fixed baseline payoff l typically varies over the set of enforceable slopes. This is exemplified in Section VII. However, the expressions of the threshold discount factors also provide insight into why fair payoff relations π −i = π i with a slope s = 1 cannot be enforced in a repeated social dilemma with a finite expected number of rounds. By Assumption 1b and w 0 = 0 it follows that both ρ D and ρ C in (9) are zero when s = 1. As a result, all expressions for δ τ are equal to one. With Theorems 2, 3, and 4, we have provided expressions for deriving the minimum discount factor for some desired linear payoff relation. Because the expressions depend on the 'single-round payoff of the multiplayer game, in general they will differ between social dilemmas. In order to determine the thresholds, one needs to find the global extrema of a function over z that, as we will show in the next section, can be efficiently done for a many social dilemma games. Essentially, the obtained threshold discount factors ensure that a suitable φ > 0 exists for which the ZD strategy is well-defined. Section VIII contains a detailed derivation of the thresholds that also indicates how to set φ to fully define the ZD strategy in terms of the game parameters and the desired enforceable payoff relation. VII. APPLICATIONS TO MULTIPLAYER SOCIAL DILEMMAS In this section the above theory is applied to the linear public goods game of Example 1 and the multiplayer snowdrift game of Example 2 to illustrate the role of the discount factor on the set of enforceable slopes s for generous and extortionate strategies and subsequently, the Nash equilibria of the repeated game. 
Characterizing this effect is important also because the slope determines the correlation between the payoffs and thus also the degree to which cooperative actions of opponents can be incentivised [25] within a finite expected number of rounds. The weights are assumed to be equal, that is w j = 1 n−1 for all j = i. In this case, the conditions for existence and the thresholds become relatively easy to obtain. All the proofs of this section are found in the Appendices. We first apply Theorem 1 to the public goods game to characterize the enforceable slopes and baseline payoffs. Proposition 4 (Enforceable slopes in the public goods game). Suppose p 0 = 0, l = 0 and 0 < s < 1, so that the ZD strategy is extortionate. For the public goods game with discounting and r > 1, every slope s ≥ r−1 r can be enforced independent of n. If s < r−1 r , the slope can be enforced if and only if Generous strategies with p 0 = 1 and l = rc − c have the same set of enforceable slopes. Extortionate strategies in the public goods game (and the multiplayer snowdrift game) satisfy l = b 0 = 0 and thus the enforced linear payoff relation simply becomes π −i = sπ i . Slopes close to one thus imply π −i and π i are approximately equal, while slopes close to zero imply a high level of extortion that allows the strategic player to do better than the average of his/her co-players. From Proposition 4 it follows that in the public goods game the lower bound on enforceable slope is s ≥ 1 − n r(n−1) . Both n and r thus determine how much better the strategic player can do than the average of his/her coplayers. Because full cooperation leads to the highest singleround average group payoff an opposite argument can be made for generous strategies: low values of s ensure the average payoff of the co-players is close to optimal. Just like the set of enforceable slopes, also the threshold discount factors for generous and extortionate strategies are the same in the public goods game and are characterized in the following proposition. Proposition 5 (Thresholds for extortion and generosity). For the enforceable slopes s ≥ 1 − n r(n−1) , in the public goods game the threshold discount factor for extortionate and generous strategies is determined as One can notice that when s = 1, the threshold discount factor in (14) evaluates as δ τ = 1. This is consistent with Theorem 1 and illustrates that fair strategies can only be enforced when the expected number of rounds is infinite (see Figure 1 for a numerical example). From Propositions 4 and 5 one can also obtain insight in the effect of δ on the Nash equilibrium of the repeated public goods game. The result in [9, SI Proposition 3] ensures that extortionate strategies are a symmetric Nash equilibrium if and only if s < n−2 n−1 , while generous strategies are a symmetric Nash equilibrium if and only if s > n−2 n−1 . At s = n−2 n−1 both types of ZD strategies are a Nash equilibrium. Combining this with the lower-bound of enforceable slopes and the expression for the threshold discount factor it follows that extortionate ZD strategies are a Nash equilibrium in the repeated public goods game from any discount factor δ > 0 provided that the slope is sufficiently small. On the other hand, generous ZD strategies can only be a Nash equilibrium in the public goods game when the discount factor is sufficiently high δ ≥ (n−r)(n−1) (n−1) 2 +(r−1) . with the same discount factor as generous strategies, however only the latter are a Nash equilibrium, as indicated by the region 3 4 ≤ s < 1. 
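The quantities in Propositions 4 and 5 and in the equilibrium discussion reduce to elementary arithmetic for the linear public goods game. The sketch below only evaluates the stated lower bound 1 − n/(r(n−1)) on enforceable slopes and the crossover slope (n−2)/(n−1) between extortionate and generous Nash equilibria; it does not reproduce the threshold expression (14), and the (n, r) values are arbitrary examples.

def pgg_slope_bounds(n, r):
    """Slope quantities for the linear public goods game with multiplication
    factor r and group size n, as discussed around Propositions 4 and 5."""
    s_min = 1.0 - n / (r * (n - 1))      # smallest enforceable slope
    s_star = (n - 2) / (n - 1)           # extortion/generosity Nash crossover
    return s_min, s_star

for n, r in [(3, 2.0), (5, 3.0), (10, 4.0)]:
    s_min, s_star = pgg_slope_bounds(n, r)
    print(f"n={n}, r={r}: enforceable slopes s >= {s_min:.3f}; "
          f"extortion is a Nash equilibrium for s < {s_star:.3f}, "
          f"generosity for s > {s_star:.3f}")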
These conditions also hold when the deviating player is not restricted to ZD strategies and thus provide rather general equilibrium conditions for the repeated public goods game with discounting. Let us now investigate the multiplayer snowdrift game in which the cost of cooperation is shared by all cooperators, resulting in a nonlinear payoff function with respect to the number of cooperating co-players z. The following proposition characterizes the enforceable payoff relations of generous and extortionate strategies by applying Theorem 1 to the payoffs in Example 2. The possibilities for extortion are thus limited by the payoff parameters b, c and the group size n. In fact, the lower-bound on the enforceable slopes of extortionate strategies in Proposition 6 prevents extortionate strategies from being a Nash equilibrium in the multiplayer snowdrift game. In contrast, generous strategies can enforce any slope independent of the game parameters. However, the threshold discount factors of these strategies do depend on the game parameters as is characterized in the following proposition. Proposition 7. For the multiplayer snowdrift game with b > c and n ≥ 2, for slopes s ≤ 1 − c b(n−1) the threshold discount factor for generous strategies is determined by For higher slopes s > 1 − c b(n−1) the threshold of generous strategies is determined by The threshold discount factor of enforceable slopes of extortionate strategies are also given by (16). Note that again the expression of the threshold discount factor for high slopes of both generous and extortionate strategies in (16) becomes one when s = 1. Because generous strategies have no restriction on the enforceable slopes, they can also be a Nash equilibrium provided that the slope and discount factor are large enough (see Figure 2 for a numerical example). In both the public goods game and the multiplayer snowdrift game the set of enforceable payoff relations, and thus the degree of extortion and generosity, are strongly influenced by the game parameters and the discount factor. Up to now, we focused on deriving the minimum expected number of rounds that are necessary to enforce some desired or given payoff relation. However, the examples in this section also illustrate the reverse problem: given an expected number of rounds or discount factor, what is the set of payoff relations that a single player can exert? And does the employed strategy constitute a Nash equilibrium? Here, we have answered these question for two well known multiplayer games but using our theoretical results many other games including the prisoner's dilemma, the volunteer's dilemma, and the multiplayer stag-hunt game, can be analysed in the same manner. VIII. PROOFS OF THE MAIN RESULTS This section provides detailed proofs of the main results in Sections V and VI. A. Proof of Proposition 3 Suppose all players are cooperating e.g. σ = (C, C, . . . , C). Then from the definition of δp in equation (1) and the payoffs given in Table I, it follows that δp (C,C,...,C) = 1 + φ(1 − s)(l − a n−1 ) − (1 − δ)p 0 . (17) Now suppose that all players are defecting. Similarly, we have In order for these payoff relations to be enforceable, it needs to hold that both entries in equations (17) and (18) are in the interval [0, δ]. Equivalently, and Combining (19) and (20) it follows that 0 < (1 − δ) ≤ φ(1 − s)(a n−1 − b 0 ). From the assumption that a n−1 > b 0 listed in Assumption 1, it follows that Now suppose there is a single defecting player, i.e., σ = (C, C, . . . 
, D) or any of its permutations. In this case, the entries of the memory-one strategy are as given in equation (22). Again, for both cases we require δp σ to be in the interval [0, δ]. This results in the inequalities given in equations (23) and (24). By combining the equations (23) and (24) we obtain Again, because of the assumption b z+1 > a z it follows that The inequalities (26) and (21) together imply that Because at least one w j > 0, it follows that Combining with equation (21) we obtain In combination with equation (27) it follows that The inequalities in the equations (29) and (30) finally produce the bounds on s: Moreover, because it is required that n j=1 w j = 1, it follows that min j =i w j ≤ 1 n−1 . Hence the necessary condition turns into: We continue to show the necessary upper and lower bound on l. From equation (19) we obtain: From equation (21) we know φ(1 − s) > 0. Together with equation (33) this implies the necessary condition l − a n−1 ≤ 0 ⇔ l ≤ a n−1 . We continue with investigating the lower-bound on l, from equation (20) Because φ(1 − s) > 0 it follows that l ≥ b 0 . Naturally, when l = a n−1 by assumption 1 it holds that l > b 0 and when l = b 0 then l < a n−1 . This completes the proof. B. Proof of Lemma 2 For brevity, in the following proof we refer to equations that are found in the proof of Proposition 3. Assume the ZD strategy is extortionate, hence l = b 0 . From the lower bound in (20) in order for l to be enforceable, it is necessary that p 0 = 0. This proves the first statement. Now assume the ZD strategy is generous, hence l = a n−1 . From the lower bound in (19) in order for l to be enforceable, it is necessary that p 0 = 1. This proves the second statement and completes the proof. C. Proof of Theorem 1 In the following we refer to the key player, who is employing the ZD strategy, as player i. Let σ = (x 1 , . . . , x n ) such that x k ∈ A and let σ C be the set of i s co-players that cooperate and let σ D be the set of i s co-players that defect. Also, let |σ| be the total number of cooperators in σ including player i. Using this notation, for some action profile σ we may write the ZD strategy as (36) Also, note that n j =i and because n j =i w j = 1 it holds that l∈σ C w l = 1 − k∈σ D w k . Substituting this into equation (37) and using the payoffs as in Table I we obtain n j =i w j g j σ = a |σ|−1 + j∈σ D w j (b |σ| − a |σ|−1 ). Accordingly, the entries of the ZD strategy δp σ are given by equation (39). For all σ ∈ A we require that 0 ≤ δp σ ≤ δ. The inequality (42) together with the necessary condition s < 1 from Proposition 3 implies that and thus provides an upper-bound on the enforceable baseline payoff l. We now turn our attention to the inequalities in equation (41) that can be satisfied if and only if for all σ such that x i = D the following holds Combining equations (44) and (43) From equation (40) it follows that p 0 = 1 is required. Then equation (41) implies Which is exactly the corresponding lower-bound of l, that is thus required to be strict when the upper-bound is non-strict. (39) Now suppose we have a non-strict lower bound, e.g. From equation (41) it follows that p 0 = 0 is required. Then, the inequalities in equation (40) require that This completes the proof. D. Proof of Theorem 2 For brevity in the following proof we refer to equations that can be found in the proof of Theorem 1. From Lemma 2 we know that in order for the extortionate payoff relation to be enforceable it is necessary that p 0 = 0. 
By substituting this into equation (40) it follows that in order for the payoff relation to be enforceable it is required that for all σ such that x i = C the following holds: Hence, Equation (40) with p 0 = 0 implies that for all σ such that . (49) Naturally, ρ C ≥ ρ C . In the special case in which equality holds, it follows from equation (49) that δ ≥ 0, which is true by definition of δ. We continue to investigate the case in which ρ C > ρ C . In this case, a solution to equation (49) for some φ > 0 exists if and only if which leads to the first expression in the theorem. Now, from equation (41) with p 0 = 0, it follows that in order for the payoff relation to be enforceable it is necessary that Because φ > 0 is necessary for the payoff relation to be enforceable, it follows that ρ D (σ) ≥ 0 for all σ such that x i = D. Let us first investigate the special case in which ρ D (z,w z ) = 0. Then (51) is satisfied for any φ > 0 and δ ∈ (0, 1). Now, assume ρ D (z,w z ) > 0. Then, equations (51) and (49) imply In order for such a φ to exist it needs to hold that This completes the proof. E. Proof of Theorem 3 The proof is similar to the extortionate case in the proof of Theorem 2. From Lemma 2 we know that in order for the generous payoff relation to be enforceable it is necessary that p 0 = 1. By substituting this into equation (41) it follows that in order for the payoff relation to be enforceable it is required that for all σ such that x i = D the following holds: Hence, equation (41) with p 0 = 1 implies that for all σ such that . which leads to the first expression in the theorem. Moreover, from equation (40) we know that the following must hold: Because φ > 0 it follows that ρ C (σ) ≥ 0 for all σ such that x i = C. Let us now consider the special case in which φρ C (z,w z ) = 0. Then, equation (57) is satisfied for any φ > 0 and δ ∈ (0, 1). Now suppose ρ C (z,w z ) > 0. Then, (57) and (55) imply that in order for the generous strategy to be enforceable it is necessary that Such a φ exists if and only if This completes the proof. F. Proof of Theorem 4 For brevity, we refer to equations found in the proof of Theorem 1. From (40) and (41) it follows that in order for the payoff relation to be enforceable for p 0 ∈ (0, 1) it must hold that for all σ such that x i = C, ρ C (σ) > 0, and for all σ such that x i = D, ρ D (σ) > 0. For the existence of equalizer strategies this must also hold for the special case in which s = 0. Hence, we can rewrite (40) and (41) to obtain the following set of inequalities There exists such a φ > 0 if and only if the following inequalities are satisfied By collecting the terms in p 0 and δ for (62)-(65) the conditions can be derived as follows. The condition in (62) can be satisfied if and only if In the special case that ρ D (z,w z ) − ρ D (z,ŵ z ) = 0, this is satisfied for every p 0 ∈ (0, 1) and δ ∈ (0, 1). On the other hand, if ρ D (z,w z ) − ρ D (z,ŵ z ) > 0, then the inequality can be satisfied for every p 0 ∈ (0, 1) if and only if (10) holds. Likewise, (64) can be satisfied if and only if If ρ C −ρ C = 0, this inequality is satisfied for every p 0 ∈ (0, 1). On the other hand, if ρ C − ρ C > 0, the inequality is satisfied if and only if the condition in (12) IX. FINAL REMARKS We have extended the existing results for ZD strategies to multiplayer social dilemmas with discounting. 
However, the fundamental relation between the memory-one strategy and the mean distribution is independent of the structure and symmetry of the game and thus the results in this paper can be extended by considering discounted multiplayer games that are not social dilemmas or have asymmetric single-round payoffs. Our theory supports the finding that due to the finite expected number of rounds the initial probability to cooperate of the key player remains important for the opportunities to exert control. Based on the necessary initial probability to cooperate we derived expressions for the minimum discount factor above which a ZD strategy can enforce some desired generous or extortionate payoff relation. Because equalizer strategies do not impose any conditions on the initial probability to cooperate, we have derived a condition that ensures the desired equalizer strategy is enforceable for a variable initial probability to cooperate in the open unit interval. By combining the set of enforceable payoff relations and the threshold discount factors our results can also be used to investigate under which conditions on the expected number of rounds, generous and extortionate ZD strategies, that both can promote mutual cooperation in social dilemmas, constitute a symmetric Nash equilibrium in the multiplayer social dilemma. Future research can include individual or time-varying discounting functions, and the analysis of subgame perfection of the ZD strategy Nash equilibria. We focus first prove the case in which l = 0 and 0 < s < 1, and thus the strategy is extortionate. In this case the equations in (74) We continue to obtain the maximizers and minimizers of equations (75) and (76), that because of linearity in z can only occur at the extreme points z = 0 and z = n − 1. When n > r and r > 1, as is the case when the linear public goods game is a social dilemma, we have the following simple conditions on the slope of the extortionate strategy. If − 1 n−1 < s ≤ 1− n r(n−1) no extortionate or generous strategies can exist. Hence assume s ≥ 1 − n r(n−1) . Then, We focus now on the case in which l = rc − c and 0 < s < 1, and hence the strategy is generous. If l = rc − c the equations in (74) become The extreme points of these functions read as ρ C g = ρ C g (0) = ρ D e , ρ C g = ρ C g (n − 1) = ρ D e , ρ D g = ρ D g (n − 1) = ρ C e , ρ D g = ρ D g (0) = ρ C e . It follows that the fractions in Theorem 3 are equivalent to those in Theorem 2.
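As noted earlier, evaluating the threshold expressions requires the global extrema of certain functions of the number of cooperating co-players z. Since z ranges over the finite set {0, ..., n − 1}, a brute-force scan always suffices, and for functions that are affine in z, as in the public goods computation above, the extrema necessarily sit at the endpoints z = 0 and z = n − 1. A small generic sketch, with an arbitrary example function:

def extrema_over_z(f, n):
    """Brute-force global extrema of a function of the number of cooperating
    co-players z over the finite set {0, ..., n - 1}; f is user supplied."""
    values = [f(z) for z in range(n)]
    return min(values), max(values)

n = 8
lo, hi = extrema_over_z(lambda z: 0.3 * z - 1.0, n)   # affine example
print(lo, hi)   # endpoints z = 0 and z = n - 1 attain the extrema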
2019-10-17T12:36:33.000Z
2019-10-17T00:00:00.000
{ "year": 2019, "sha1": "2eb817ee5948b5113e6b2df051e2309451fabb68", "oa_license": null, "oa_url": "https://pure.rug.nl/ws/files/181805152/Zero_Determinant_Strategies_in_Repeated_Multiplayer_Social_Dilemmas_With_Discounted_Payoffs.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b97559f4e1a81801ca67d7f7494ff7a43c8ae084", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Computer Science", "Economics" ] }
139686999
pes2o/s2orc
v3-fos-license
Pseudo-Elastic Analysis with Permanent Set in Carbon-Filled Rubber Via cyclic loading and unloading tests of natural/styrene-butadiene rubber (NSBR) blends at room temperature, the effects of the stretching, rate, temperature, and volume fraction of carbon black in the filled rubber on a permanent set (residual strain) were studied. The results showed that increasing the stretching, rate, and volume fraction of carbon black and reducing the temperature yielded greater residual strain. The uniaxial tensile behaviors of composites with the Mullins effect and residual strain were simulated using the ABAQUS software according to the aforementioned data. An Ogden-type constitutive model was derived, and the theory of pseudo-elasticity proposed by Ogden and Roxburgh was used in the model. It was found that the theory of pseudo-elasticity and the Ogden constitutive model are applicable to this composite, and if combined with plastic deformation, the models are more accurate for calculating the residual strain after unloading. Introduction Carbon black-filled rubber on the virgin loading shows significant hysteresis during the load-unload-reload-unload cycle, which is called the Mullins effect. The Mullins effect mainly depends on the proportion of filler in the rubber composites [1]. The Mullins effect is negligible in unfilled rubber but becomes obvious in rubber filled with a high carbon black content. The difference in stress during the first loading and unloading paths of a virgin carbon black-filled rubber specimen is greater than for any other loading and unloading cycle. The hysteresis area enclosed by the stress-strain curve between loading and unloading represents the dissipated energy. If some cycles with a constant stretch amplitude are conducted, the hysteresis area stabilizes. With some loading cycles, the stress heals after a certain amount of recovery time. The energy dissipation of rubber after a recovery is less than that of virgin rubber, but each subsequent stretching is always less severe than the previous stretching [2]. A theoretical model for the hyperelastic behavior of filled rubbers was proposed by Mullins and Tobin (1957) [3] and Mullins (1969) [4]. Ogden and Roxburgh (1999) [5,6] used a single softening damage variable to model the idealized Mullins effect in filled rubber, which experiences not only uniaxial tension but also biaxial and multiaxial tension, while ignoring the residual strain. In 2001, Ogden calculated the stress softening and residual strain during the azimuthal shearing of a pseudo-elastic circular cylindrical tube [7]. Dorfmann and Ogden (2004) [8] proposed a constitutive model for the Mullins effect with a permanent set employing two damage variables in the energy-dissipation function. Related studies have been performed [9,10], and different damage variables have been proposed [11][12][13][14]. In addition, the Mullins effect has been studied from a molecular viewpoint [15][16][17], and discussions on the Mullins effect have continued.
In this study, uniaxial tensile experiments involving different stretching ratios of natural/styrene-butadiene blends with different carbon black contents were performed, and the influences of different factors on the residual strain were investigated according to these experiments.A combination of the Mullins effect and plastic deformation theory was introduced using the Ogden-Roxburgh pseudo-elastic model, and the Mullins effect with the residual strain of the blends was simulated and verified using the finite-element software ABAQUS. Mullins Effect The cyclic stress-strain curves with different stretching amplitudes, as well as the virgin loading curve, for virgin rubber are presented in Figure 1.For loading and unloading cycles, the specimen was first loaded to a particular point and then unloaded, and the unloading curve was significantly lower than the loading curve.The second loading curve was lower than the virgin loading curve but higher than the first unloading curve prior to the virgin loading strain and recovered with a bigger strain.It was not fully restored to its original state when unloaded to zero stress, and residual strain (permanent set) was produced.The Ogden-Roxburgh pseudo-elasticity model and plastic deformation theory are used to describe the Mullins effect and residual strain. Ogden-Roxburgh Pseudo-Elasticity Model 3.1.Ogden Constitutive Model.The Ogden (N = 3) (1972,1982) [18] constitutive equation of the strain energy function for in-compressible stress-softening materials is as follows: The uniaxial extension of a stretch is described as follows: The Ogden (N = 3) strain energy function under uniaxial tensile stress is given as follows: 3.2.Ogden-Roxburgh Pseudo-Elasticity Model.The axial components of the Cauchy stress tensor for an incompressible isotropic material such as carbon-reinforced rubber are as follows: The material is continuously damaged during the loading and unloading paths, assuming that the damage function [5] defined as () indicates the energy dissipated during the loading and unloading paths.The damage variable was defined as , which is given by (5).In the loading path, we set = 1 and, in the unloading path, was decreased within the range of 0 < ≤ 1.The strain energy function was defined as ( 1 , 2 , ), and we denote 0 (, −1/2 ) in the loading path and W(, −1/2 , ) in the unloading path, as described by (6).The Cauchy stress is described by (7), and the nominal stress is given by (8).The damage function and damage variable are described by ( 9) and (10), respectively. Elasticity-Plasticity Theory. With the development of continuum elastoplastic theory, Lee [19] presented new concepts that must be introduced in order to formulate a satisfactory elastic-plastic theory when both the elastic and the plastic components of the strain can be finite, and the function is as follows: Here, is the multiplicative decomposition of the deformation gradient, is the elastic portion, and is the plastic portion. There is substantial discussion regarding elastoplastic separation, and it has been investigated by many scholars [20,21].It appears that a permanent set is generated by the plastic portion, and we can use the nonlinear elasticity model combined with plasticity and the Mullins effect to present the Mullins effect with the permanent set of rubber. Uniaxial Tensile Experiments. 
The prepared specimens satisfied the shape and size for dumbbell specimens of the National Standard, GB/T528-2009.Carbon black-filled natural/styrene-butadiene rubber (NSBR) blends were prepared using the same technological process and vulcanization.The ingredients of the carbon black-filled NSBR blends are shown in Table 1, including three kinds of rubber filled with different volumes of carbon black.The experiment was performed using an AG-Xplus 50KN Shimadzu material tester. Uniaxial tensile loading-unloading-reloading-unloading experiments with different elongation ratios were performed at room temperature using a Shimadzu AG-plus 50 kN automatic control-testing machine.The residual strains during the experiments with different strain rates, stretching ratios, and carbon black volumes were examined. Residual Strain. The carbon black-filled rubber samples were restored after a uniaxial tensile stress for a certain period of time.As a result, there remains a certain degree of residual strain that cannot be neglected.The residual strain is related to the stretching ratio, carbon black volume fraction, loading rate, and temperature, among other aspects.Figures 2-4 show the relationship between different factors and residual strain under the same conditions. The relationships between the residual strain and the strain rate of the uniaxial loading-unloading cycle in the experimental curve of NSBR5 at different strain rates of 0.005/s, 0.03/s, and 0.15/s at room temperature are shown Figure 2.During the loading process, the rubber molecular chains and carbon black developed a slip deformation, and the stress increased with the increase of the strain rate because there was insufficient time to complete the slip, and the elastic modulus of the material increased.The hysteresis was related to the tensile rate during the loading-unloadingreloading-unloading process.When the tensile rate was higher, the residual strain became larger, and the modulus of the material was greater; however, this was not obvious at a low tensile rate. Figure 3 shows the experimental curve for a rubber composite with carbon black volume fractions of 8.37%, 15.45%, and 20.08% at different stretching ratios of the uniaxial loading-unloading cycle, as well as the relationship between the residual strain and carbon black volume fractions at a tensile rate of 0.005/s at room temperature.As shown in Figure 3(b), the residual strain after unloading increased with the carbon black amount.With the addition of carbon black, the material was hardened, the modulus increased and rigid chains were formed between the carbon black and the rubber molecular chain.In addition, the residual strain increased with the increase of the stretching ratio for the same carbon content, but it increased slowly with a higher carbon black content. 
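Since the residual strain reported in Figures 2 and 3 is read off from the unloading branches, a small data-processing sketch of that step may be useful; the unloading curve below is made up for illustration and is not the measured NSBR data.

import numpy as np

def residual_strain(strain, stress, tol=1e-3):
    """Estimate the permanent set from an unloading branch: the strain at which
    the nominal stress first returns to (approximately) zero."""
    strain, stress = np.asarray(strain), np.asarray(stress)
    idx = np.argmax(stress <= tol)      # first index along the branch with ~zero stress
    return strain[idx]

# Made-up unloading branch: strain decreasing from 0.8 to 0, stress dropping to zero
# at a residual strain of roughly 0.06.
strain = np.linspace(0.8, 0.0, 50)
stress = np.clip(1.4 * (strain - 0.06), 0.0, None)
print(residual_strain(strain, stress))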
Uniaxial tensile test curves obtained at different temperatures and the relationship between the residual strain and the temperature of carbon black-filled composites are shown in Figure 4. The uniaxial tensile stretching ratio was λ = 2 under a tensile rate of 0.005/s at temperatures ranging from -40 to 70 ∘ C. As shown in Figure 4(a), in the low-temperature region of -40 to −15 ∘ C, the loading curve tended to be linear, indicating that the material was close to its vitrification temperature. When the temperature was decreased from room temperature, the forces of the rigid chains between the rubber molecular chains and the carbon black were increased, the movement of rubber molecules was reduced, and the material was hardened; thus, the elastic modulus was increased. Therefore, the residual strain after unloading was larger at a lower temperature. However, when returning to room temperature, the flexible chains were restored, and the residual strain was dramatically decreased. The mechanical properties were different when the temperature was above room temperature. In the range of 25-70 ∘ C, with the increase of the temperature, the composite material became softer, and the elastic modulus decreased, although the change was very slight. As shown in Figures 4(b) and 4(d), the residual strain of the two kinds of rubbers decreased with the increase of the temperature, but the change was significantly smaller than that in a low-temperature environment. Ogden-Roxburgh Pseudo-Elastic Model. Since Ogden and Roxburgh (1999) first proposed the pseudo-elasticity theory model, research on the Mullins effect of carbon black-filled rubber composites has continued, and it now involves the development of constitutive models. The Mooney-Rivlin constitutive model has good accuracy when the strain is small (λ < 1.5), although the Ogden constitutive model (N = 3) remains more accurate within a larger range. The Ogden constitutive model or the Marlow model [22] is usually chosen in studies, and the main problem has been that the pseudo-elastic model parameters are not sufficiently accurate when other constitutive models are used. Therefore, the applicability and precision of the Ogden (N = 3) constitutive model are discussed for λ < 2. Bi-square robust control and the Levenberg-Marquardt nonlinear least-squares method were used to fit the parameters using the MATLAB software. The parameters of the model were fitted to the experimental data from the loading and unloading tests with different stretching amplitudes of the NSBR blends at room temperature, as shown in Table 2. The hyperelastic behavior and Mullins effect were defined in the model. The Ogden-Roxburgh pseudo-elastic model was used to simulate the uniaxial loading-unloading cycle of carbon black-filled rubber at different stretching ratios. Experimental and simulation curves obtained without considering the residual strain of NSBR5 and NSBR7 are shown in Figures 5 and 6, respectively, and the stretching ratio is λ = 1.2, 1.4, 1.6, and 1.8 with a tensile rate of 0.005/s.
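Before turning to the comparison in Figures 5 and 6, the fitting step mentioned above, which is described only verbally, can be set up along the following lines with a Levenberg-Marquardt least-squares solver. The uniaxial nominal stress uses the classical incompressible Ogden convention W = Σ_i (μ_i/α_i)(λ1^α_i + λ2^α_i + λ3^α_i − 3); the synthetic data, noise level, and starting values are assumptions for illustration and are not the values of Table 2.

import numpy as np
from scipy.optimize import least_squares

def ogden_nominal_stress(lam, mu, alpha):
    """Uniaxial nominal (engineering) stress of an incompressible Ogden (N = 3)
    material: sum_i mu_i (lam^(alpha_i - 1) - lam^(-alpha_i/2 - 1))."""
    lam = np.asarray(lam, dtype=float)
    return sum(m * (lam**(a - 1.0) - lam**(-a / 2.0 - 1.0)) for m, a in zip(mu, alpha))

# Hypothetical virgin-loading data (stretch, nominal stress in MPa); in practice
# these would be the measured NSBR loading curves at 0.005/s.
lam_data = np.linspace(1.0, 2.0, 30)
stress_data = ogden_nominal_stress(lam_data, mu=[0.8, 0.02, -0.1], alpha=[1.8, 5.0, -2.0])
stress_data = stress_data + 0.01 * np.random.default_rng(0).normal(size=lam_data.size)

def residuals(theta):
    mu, alpha = theta[:3], theta[3:]
    return ogden_nominal_stress(lam_data, mu, alpha) - stress_data

theta0 = np.array([1.0, 0.1, -0.2, 2.0, 4.0, -2.0])   # assumed initial guess
fit = least_squares(residuals, theta0, method='lm')   # Levenberg-Marquardt
print("mu_i =", fit.x[:3], "alpha_i =", fit.x[3:])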
As shown in the figures, the simulation curves for the two kinds of materials are basically in agreement with the experimental curves, and their root-mean-square error (RMSE) values are 0.1011 and 0.1744 MPa for NSBR5 and NSBR7, respectively. As reasons for these RMSE values, the deviations between the simulation and experimental unloading curves within the small strain range (below 0.5) were greater than those of any other loading and unloading cycles, which indicates that the pseudo-elastic model is insufficiently accurate within a small strain range. In addition, the hysteresis effect causes the reloading curves to not exactly coincide with the unloading curves. However, the hysteresis and Mullins effects cannot be considered together in the ABAQUS software; thus, the error is unavoidable. The simulation and experimental unloading curves were different where the stress was close to zero, which indicates that the simulation curves are not effective for expressing the residual strain in an elastic material with continuous loading. Ogden-Roxburgh Pseudo-Elastic Model with Plastic Deformation. Different residual strains are shown in the experimental curves of Figure 3 after unloading, and to reduce the deviation between the simulation and experimental results, the hyperelastic behavior and Mullins effect were defined in the model through a combination with the plastic deformation theory in the ABAQUS software. The given isotropic hardening function represents the degree of plastic deformation. The true stress and true strain curves of NSBR5 and NSBR7 correspond to the nominal stress and nominal strain curves shown in Figure 3(a). The residual strain is the plastic strain that can be obtained from the plasticity data at the different stretching ratios, i.e., λ = 1.2, 1.4, 1.6, and 1.8, as well as the true stress and elastic strain. The true stress-strain curve for NSBR5 is shown in Figure 7, for which the plasticity data were obtained from the results shown in Table 3.
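The bookkeeping behind plasticity data of the kind collected in Table 3 can be illustrated with the usual incompressible uniaxial conversions (true stress = nominal stress × λ, true strain = ln λ), taking the measured permanent set as the plastic strain; the numbers below are made up and are not the NSBR measurements.

import numpy as np

def nominal_to_true(lam, nominal_stress):
    """Incompressible uniaxial conversions: true stress = nominal * lambda,
    true (logarithmic) strain = ln(lambda)."""
    return np.log(lam), nominal_stress * lam

# Hypothetical loading amplitudes, peak nominal stresses, and residual stretches
# after unloading (illustrative values only).
lam_amp = np.array([1.2, 1.4, 1.6, 1.8])
peak_nominal = np.array([0.6, 0.9, 1.2, 1.5])     # MPa
residual_lam = np.array([1.01, 1.03, 1.05, 1.08])

for la, sn, lr in zip(lam_amp, peak_nominal, residual_lam):
    eps_true, sig_true = nominal_to_true(la, sn)
    eps_plastic = np.log(lr)                       # residual (permanent) true strain
    print(f"amplitude {la}: true stress {sig_true:.3f} MPa, "
          f"total true strain {eps_true:.3f}, plastic strain {eps_plastic:.4f}")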
According to the simulation results shown in Figures 5 and 6, the plastic deformation theory was combined with the pseudo-elastic model to simulate the uniaxial loading-unloading cycle of carbon black-filled composites at different stretching ratios. Experimental and simulation curves for the NSBR5 and NSBR7 materials with a tensile rate of 0.005/s are shown in Figures 8 and 9, respectively, where the stretching ratio is λ = 1.2, 1.4, 1.6, and 1.8. The RMSE values of the nominal stress between the simulation and the experimental curves are 0.0909 and 0.1496 MPa for the NSBR5 and NSBR7 materials, respectively, which are smaller than those in Figures 5 and 6. At the initial position, the simulation curves showed the residual strain well. However, the reloading and unloading curves did not coincide, owing to the hysteresis. If the simulation and experimental unloading curves are consistent, a deviation between the simulation unloading curves and the experimental reloading curves is bound to exist because of the hysteresis. Even if the pseudo-elastic parameters of each cycle are calculated and substituted into the model separately, the unloading simulation curves of each cycle will be closer to the experimental unloading curves [23] but will increase their deviation from the experimental reloading curves. Therefore, a deviation is definitely unavoidable if the hysteresis cannot be considered in the model. Thus, it was shown that the pseudo-elastic model with plastic deformation can be used to determine the residual strain well, with a low RMSE and high precision. This is a good model for studying the Mullins cycle process of hyperelastic materials under stress softening and residual strain (permanent set). Conclusion The residual strain was shown to be greater when the tensile stretching ratio is higher, the carbon black content is higher, the tensile rate is higher, or the temperature (within the range of -40 to 70 ∘ C) is lower. According to a comparison of the experimental nominal stress-strain curves of the uniaxial tensile loading-unloading cycle of carbon black-filled rubber composites with no loading history and simulation results obtained using the ABAQUS software, the Ogden-Roxburgh model can describe the hyperelasticity and Mullins effect well without residual strain. We can obtain better results by combining the Ogden-Roxburgh model with plastic deformation, which allows the residual strain to be calculated with high accuracy. Figure 2: Curves of the residual strain and rate for the rubber: (a) the experimental results of NSBR5 for the Mullins effect at different rates of 0.005/s, 0.03/s, and 0.15/s and (b) the residual strain and rate of NSBR5 and NSBR7. Figure 3: Curves of the residual strain and carbon volume fraction for the rubber: (a) experimental results of NSBR for the Mullins effect at a rate of 0.005/s and (b) the residual strain and carbon volume fraction. Figure 4: Curves of the residual strain and temperature of the rubber: (a) experimental results of NSBR7 for the Mullins effect at a rate of 0.005/s in the temperature range of -40-25 ∘ C; (b) the residual strain vs. the temperature of NSBR5 and NSBR7 in the temperature range of -40-25 ∘ C; (c) experimental results of NSBR7 for the Mullins effect at a rate of 0.005/s in the temperature range of 25-70 ∘ C; (d) the residual strain and temperature of NSBR5 and NSBR7 in the temperature range of 25-70 ∘ C.
Figure 5: Experimental and simulation results of NSBR5 for the Mullins effect without the permanent set. Figure 6: Experimental and simulation results of NSBR7 for the Mullins effect without the permanent set. Figure 7: True stress-strain curve for the Mullins effect tests of NSBR5. Figure 8: Experimental and simulation results of NSBR5 for the Mullins effect with the permanent set. Figure 9: Experimental and simulation results of NSBR7 for the Mullins effect with the permanent set. Table 1: Ingredients of carbon black-filled NSBR composites. Note. The data in the table indicate the quality of the components, in addition to the carbon black content. Table 2: Summary of the model parameters for virgin loading curves of NSBR5 and NSBR7. Table 3: Plasticity data obtained from the Mullins effect tests for NSBR5 and NSBR7.
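For completeness, a sketch of how softened unloading branches of the kind compared above could be generated from a fitted primary Ogden curve. The erf-based damage variable with parameters r, m, and beta is the Ogden-Roxburgh variant implemented in ABAQUS and is stated here as an assumption, since the paper's own damage expressions are not reproduced above; all parameter values are illustrative.

import numpy as np
from scipy.special import erf

def ogden_energy(lam, mu, alpha):
    """Strain energy of an incompressible Ogden material in uniaxial tension."""
    l1, l2 = lam, lam**-0.5
    return sum(m / a * (l1**a + 2.0 * l2**a - 3.0) for m, a in zip(mu, alpha))

def ogden_stress(lam, mu, alpha):
    """Primary (virgin) nominal stress of the same material."""
    return sum(m * (lam**(a - 1.0) - lam**(-a / 2.0 - 1.0)) for m, a in zip(mu, alpha))

def damage_eta(W, W_max, r, m, beta):
    """Ogden-Roxburgh-type damage variable (ABAQUS form, assumed here):
    eta = 1 on the primary curve and eta < 1 on unloading/reloading."""
    return 1.0 - (1.0 / r) * erf((W_max - W) / (m + beta * W_max))

# Hypothetical material and damage parameters for illustration only.
mu, alpha = [0.8, 0.02, -0.1], [1.8, 5.0, -2.0]
r_dmg, m_dmg, beta_dmg = 1.5, 0.1, 0.2
lam_max = 1.8
W_max = ogden_energy(lam_max, mu, alpha)

for lam in np.linspace(1.0, lam_max, 5):
    W = ogden_energy(lam, mu, alpha)
    eta = damage_eta(W, W_max, r_dmg, m_dmg, beta_dmg)
    print(lam, ogden_stress(lam, mu, alpha), eta * ogden_stress(lam, mu, alpha))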
2019-04-30T13:08:55.083Z
2019-01-03T00:00:00.000
{ "year": 2019, "sha1": "130615b47c34848fb205cb06eb34d7ff6575e5bb", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2019/2369329", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "130615b47c34848fb205cb06eb34d7ff6575e5bb", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
244488435
pes2o/s2orc
v3-fos-license
Invariant manifolds for stochastic partial differential equations in continuously embedded Hilbert spaces We provide necessary and sufficient conditions for stochastic invariance of finite dimensional submanifolds for solutions of stochastic partial differential equations (SPDEs) in continuously embedded Hilbert spaces with non-smooth coefficients. Furthermore, we establish a link between invariance of submanifolds for such SPDEs in Hermite Sobolev spaces and invariance of submanifolds for finite dimensional SDEs. This provides a new method for analyzing stochastic invariance of submanifolds for finite dimensional It\^{o} diffusions, which we will use in order to derive new invariance results for finite dimensional SDEs. Introduction The problem of finding invariant submanifolds of solutions of stochastic partial differential equations (SPDEs) arises in connection with stochastic models in finance wherein the submanifolds offer the possibility of finite dimensional realizations of the solutions which are otherwise infinite dimensional (see, for example [5,4,3,14,15,39,40,41,42]). The problem, related to the computability of 'forward interest rate models', is also known as the 'consistency problem' for such models [12]. In this paper we study the mathematical problem of finding invariant submanifolds for a class of SPDEs that include apart from the quasi-semilinear and semilinear SPDEs (see, for example [11,28,41]) a more recent class of SPDEs studied in [33,34]. We will refer to this latter class as Itô type SPDEs. Consider an SPDE of the form dY t = L(Y t )dt + A(Y t )dW t Y 0 = y 0 (1.1) taking values in a separable Hilbert space H, let M ⊂ H a finite dimensional submanifold and (W t ) a standard Brownian motion in R r . Let T y M denote the tangent space of M at y ∈ M and DA(z), z ∈ H the Fréchet derivative of a smooth function A : H → H at z. By local invariance of M we mean the following property of the solutions (Y t ) : if y ∈ M , then Y t ∈ M for all t < τ , for some positive stopping time τ . If the A j are smooth, then necessary and sufficient conditions for local invariance are that (1) The vector fields A j (y) ∈ T y M , the tangent space of M at y, and (2) L(y) − 1 2 r j=1 DA j (y)A j (y) ∈ T y M , see, for example [11,28], and also [13] for the more general situation of jumpdiffusions and submanifolds with boundary. The conditions (1) and (2) above are referred to in [11] as 'Nagumo type consistency' conditions. However the term 1 2 r j=1 DA j (y)A j (y) in condition (2) can also be viewed as a 'Stratonovic' correction term (see Remark 4.22 and Remark 6.8). While conditions (1) and (2) suffice for the quasi-semilinear and semilinear cases mentioned above, they no longer suffice for the Itô type SPDEs. One of the principal challenges that we deal with in this paper is to find a suitable generalization of condition (2) for these SPDEs. In Theorem 6.3, we prove a general result giving coordinate free necessary and sufficient conditions for a target manifold to be invariant for the solutions of SPDEs that we consider, by replacing the term 1 2 r j=1 DA j (y)A j (y) by a term of the form 1 Here the bracket terms [A j | M , A j | M ] M in the latter sum are vector fields (in a quotient space of tangent vectors) that arise as the quadratic variation terms in an Itô formula, when we realize the solutions (Y t ) of our SPDE on M as the image Y t = φ(X t ) of a finite dimensional process (X t ) and φ : V ⊂ R m → M , a coordinate chart (see Definition 6.1 and the proof of Theorem 6.3). 
The advantage in this formulation is clearly that it does not require smoothness of the vector fields A j , which is also seen in subsequent results; see, for example Theorem 6.21. While diffusions on manifolds in R d is a well studied topic (see for a partial list [19,8,9,17,18,38]), the submanifolds considered above in [5,11,28] are finite dimensional submanifolds of a single Hilbert space H. On the other hand the Itô type SPDEs have the following feature: The solution (Y t ) lies in a Hilbert space G ⊂ H and the equation holds in H since the mappings L, A j : G → H. In the framework of [33,34], the spaces G and H are realized as G = S p+1 (R d ) and H = S p (R d ) where S p (R d ), p ∈ R are the Hermite Sobolev spaces and L, A j are quasi-linear second and first order differential operators respectively. In the present paper, our analysis is carried out in a framework that we call a (G, H)-submanifold, where G is continuously embedded in H; see Section 3 (in particular Definition 3.20) for details. For such manifolds, the topology comes from G (or equivalently the relative topology of H on G) whereas the differentiable structure comes from H. It turns out that the case of quasi-semilinear equations in a Hilbert space H can also be realized, with less restrictive conditions on the volatilities (Remark 6.26), in the framework of continuously embedded Hilbert spaces G and H (Section 5) where G ⊂ H is related to the domain of the operators L and A j (Assumption 5.1). In particular we generalize the results of [11,28] in the semilinear case to non-smooth volatilities. An important feature of the solutions (Y t ) of the SPDE considered in [33,34] is that they are translation invariant i.e. Y t = τ Xt Φ, where (X t ) is a R d -valued diffusion and Φ ∈ G. In other words, Y t = ψ(X t ) where ψ(x) := τ x Φ : R d → G := S p+1 (R d ) is the orbit map associated with the translation operators τ x : S p (R d ) → S p (R d ). In Theorem 3.40 we show that the S p+ 1 2 (R d ) is a core for the infinitesimal generators of the translation group viz. −∂ i : The map x → τ x Φ : R d → S p+1 (R d ) is continuous and C 2 when we compose with the inclusion map from S p+1 (R d ) → S p (R d ). Thus, for example when p + 1 < − d 4 and Φ = δ 0 we show that the map ψ(·) : R d → G is a homeomorphism and since τ x δ 0 = δ x , the set M := {δ x : x ∈ R d } is a topological submanifold of G with one chart. On the other hand ψ(·) : R d → H := S p (R d ) is an immersive C 2 -map, giving rise to a differentiable structure on M when considered as subset of H. In the case of quasi-semilinear SPDEs one can likewise consider submanifolds M ⊂ H as (G, H)-submanifolds when M ⊂ G (Lemma 5.14). The invariance property of SPDEs with coefficients as differential operators generalizes to the case when the coefficients L, A j are given in terms of the generators of a multi parameter group (T (t), t ∈ R d ) which replaces the translation group (τ x ). In this case invariant submanifolds are given in terms of the orbit map viz. M = {T (t)y 0 : t ∈ R d } for an initial condition y 0 ∈ G. It turns out that, under some assumptions on L, A j , the target manifold M is in turn induced by a submanifold N ⊆ R d and that M is invariant for the SPDE iff N is invariant for an associated finite dimensional Itô SDE (Theorem 4.16). Consequently, one can use the invariance of M for (Y t ) to prove the invariance of a submanifold N ⊆ R d for the solutions of an Itô diffusion. 
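As a concrete finite dimensional illustration of the tangency and corrected-drift conditions for an Itô diffusion, the following sketch checks them numerically on the unit sphere: for rotation vector fields sigma_j(x) = S_j x with skew-symmetric S_j and an Itô drift consisting of the Stratonovich correction plus an extra tangential term, both conditions hold at every point of S^(d-1). The matrices and the extra drift term are hypothetical choices used only to exercise the check.

import numpy as np

rng = np.random.default_rng(1)
d, r = 3, 2

# Skew-symmetric generators S_j; sigma_j(x) = S_j x is tangent to the sphere.
S = [(lambda A: A - A.T)(rng.normal(size=(d, d))) for _ in range(r)]

def sigma(j, x):
    return S[j] @ x

def drift(x):
    # Ito drift: the Stratonovich correction (1/2) sum_j D sigma_j sigma_j
    # plus an extra tangential component S_0 x.
    return 0.5 * sum(Sj @ (Sj @ x) for Sj in S) + S[0] @ x

def grad_f(x):
    # Gradient of f(x) = |x|^2 - 1, whose zero level set is the unit sphere.
    return 2.0 * x

for _ in range(5):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)                  # a point on the unit sphere
    tangency = [float(sigma(j, x) @ grad_f(x)) for j in range(r)]
    corrected = float((drift(x) - 0.5 * sum(S[j] @ sigma(j, x) for j in range(r))) @ grad_f(x))
    print(tangency, corrected)              # all values are numerically zero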
We do this in Theorem 4.19 (see also Proposition 4.20) and Theorem 4.26; we illustrate the latter result with the example of the unit sphere S d−1 (Corollary 4.28 and Example 4.30) and recover an earlier result of Stroock. In the preceding analysis an important role is played by the quasi-linear structure of the operators L, A j . These ultimately determine the form of invariant manifolds M (Theorem 6.21). Throughout Section 4, we assume this structure. The paper is organized as follows: Section 2 introduces SPDEs in the framework of continuously embedded Hilbert spaces G ⊂ H. In Section 3 we introduce the notion of a (G, H)-submanifold. Section 3.1 is devoted to calculus on such submanifolds. In Section 3.2 we consider (G, H)-submanifolds generated by the infinitesimal generators of a multi parameter C 0 -group. In Section 4 we prove the equivalence between invariance of M ⊂ G for the solutions (Y t ) of our SPDE and invariance of submanifolds N ⊂ R d for solutions (X t ) of an Itô SDE with Y t = ψ(X t ). Here y 0 ∈ G and ψ(s) := T (s)y 0 is given by the orbit map T (s)y 0 , s ∈ R d of a multiparameter C 0 -group (T (s)) acting on G. Section 5 deals with invariance of submanifolds for solutions (Y t ) for quasi-semilinear and semilinear SPDEs. Note that to illustrate our main results from Section 6, we already use these results in Section 4. We present the main invariance result and its proof in Section 6, (Theorem 6.3). Appendix A is devoted to results on multi-parameter groups and Appendix B to results on Hermite Sobolev spaces. This includes a proof of the Sobolev embedding theorem for Hermite Sobolev spaces. Stochastic partial differential equations in continuously embedded Hilbert spaces In this section we provide the required prerequisites about SPDEs in continuously embedded Hilbert spaces. 2.1. Definition. Let G and H be two normed spaces. Then we call (G, H) continuously embedded normed spaces (or normed spaces with continuous embedding) if the following conditions are fulfilled: (1) We have G ⊂ H as sets. Now, let (G, H) be separable Hilbert spaces with continuous embedding. Let r ∈ N be a positive integer. Furthermore, let L : G → H and A 1 , . . . , A r : G → H be continuous 1 mappings. 1 More precisely, here and in the sequel, we call a mapping A : G → H continuous if A : (G, · G ) → (H, · H ) is continuous. Definition. Let y 0 ∈ G be arbitrary. A triplet (B, W, Y ) is called a local martingale solution to the SPDE (1.1) with Y 0 = y 0 if the following conditions are fulfilled: (1) B = (Ω, F , (F t ) t∈R+ , P) is a stochastic basis; that is, a filtered probability space satisfying the usual conditions. (2) W is an R r -valued standard Wiener process on the stochastic basis B. (3) Y is a G-valued adapted 2 process such that for some strictly positive stopping time τ > 0 we have P-almost surely and P-almost surely where we agree on the notation The stopping time τ is also called the lifetime of Y . If we can choose τ = ∞, then (B, W, Y ) is also called a global martingale solution (or simply a martingale solution) to the SPDE (1.1) with Y 0 = y 0 . 2.4. Remark. In the particular case G = H = R d the SPDE (1.1) is rather an SDE, and a martingale solution (B, W, Y ) is a weak solution. 2.5. Remark. If there is no ambiguity, we will simply call Y a local martingale solution or a global martingale solution to the SPDE (1.1) with Y 0 = y 0 . 2.6. Remark. 
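The conditions (2.1) and (2.2) referred to in the following definition are not displayed here; under the stated assumptions they are of the standard local form below. This is a hedged reconstruction written only for orientation, with tau denoting the lifetime introduced in the definition:

\[
  \int_0^{t \wedge \tau} \Big( \|L(Y_s)\|_H + \sum_{j=1}^{r} \|A_j(Y_s)\|_H^2 \Big)\, ds < \infty
  \qquad \text{P-a.s. for all } t \in \mathbb{R}_+,
\]
\[
  Y_{t \wedge \tau} = y_0 + \int_0^{t \wedge \tau} L(Y_s)\, ds
  + \sum_{j=1}^{r} \int_0^{t \wedge \tau} A_j(Y_s)\, dW_s^j
  \qquad \text{P-a.s. for all } t \in \mathbb{R}_+ .
\]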
As it is apparent from the integrability condition (2.1), the stochastic integrals appearing in (2.2) are understood as stochastic integrals in the Hilbert space (H, · H ). Therefore, the right-hand side of (2.2) is generally H-valued, whereas the left-hand side is G-valued. This indicates that the existence of martingale solutions to the SPDE (1.1) can generally not be warranted. If there exists a martingale solution Y , then its sample paths are continuous with respect to the norm · H , but they do not need to be continuous with respect to the norm · G . 2.7. Remark. Let B be a stochastic basis. In our situation, there are two reasonable ways to define what it means that a G-valued process Y is adapted; namely: (1) We regard Y as a process taking its values in the subspace G of the Hilbert space (H, · H ) and call it adapted if for each t ∈ R + the mapping Y t : Ω → G is (2) We regard Y as a process taking its values in the Hilbert space (G, · G ) and call it adapted if for each t ∈ R + the mapping Y t : Ω → G is F t -B(G)-measurable. However, by Kuratowski's theorem (see, for example [29, Thm. I. 3.9]) we have B(G) = B(H) G , showing that these two concepts of adaptedness are equivalent. Now, let M ⊂ G be a subset. In this paper, the subset M will typically be a submanifold. 2.8. Definition. The subset M is called locally invariant for the SPDE (1.1) if for each y 0 ∈ M there exists a local martingale solution Y to the SPDE (1.1) with Y 0 = y 0 and lifetime τ > 0 such that Y τ ∈ M up to an evanescent set 3 . 2.9. Definition. The subset M is called globally invariant (or simply invariant) for the SPDE (1.1) if for each y 0 ∈ M there exists a global martingale solution Y to the SPDE (1.1) with Y 0 = y 0 such that Y ∈ M up to an evanescent set. Finite dimensional submanifolds in embedded Hilbert spaces In this section we provide the required background about finite dimensional submanifolds in embedded Hilbert spaces. It is divided into two parts. In Section 3.1 we provide the preliminaries about submanifolds in Hilbert spaces, and later on we introduce submanifolds in embedded Hilbert spaces. In Section 3.2 we deal with submanifolds given by orbit maps of group actions, in particular in Hermite Sobolev spaces. Finite dimensional submanifolds in Hilbert spaces In this section we deal with finite dimensional submanifolds in Hilbert spaces. Let H be a Hilbert space. Furthermore, let m ∈ N and k ∈ N be positive integers. Here we use the notation N = N ∪ {∞}, which means that k = ∞ is allowed. be an open subset, and let φ ∈ C k (V ; H) be a mapping. ( For what follows, let M be an m-dimensional C k -submanifold of H. 3.4. Definition. Let y ∈ M be arbitrary. The tangent space of M to y is the subspace where x := φ −1 (y), and φ : V → U ∩ M denotes a local parametrization of M around y. 3.5. Remark. By Lemma 3.3 the Definition 3.4 of the tangent space does not depend on the choice of the parametrization. 3 A random set A ⊂ Ω × R + is called evanescent if the set {ω ∈ Ω : (ω, t) ∈ A for some t ∈ R + } is a P-nullset, cf. [21, 1.1.10]. 3.6. Remark. Let y ∈ M be arbitrary, and let φ : 3.8. Definition. The tangent bundle of M is defined as We denote by Γ(T M ) the space of all vector fields on M . We denote by Γ z (T M ) the space of all local vector fields on M around z. We denote by Γ * (T M ) the space of all locally simultaneous vector fields on M . 3.12. Definition. Let φ : V → U ∩ M be a local parametrization of M . (1) For a mapping a : 3.13. Remark. 
Note that for every vector field A ∈ Γ(T M ) and every local parametrization φ : V → U ∩ M there exists a unique mapping a : V → R m such that A| U∩M = φ * a. As we will see now, tangent spaces can also be characterized by means of curves. In the next result we consider the particular situation of submanifolds in Euclidean space which are the zeros of smooth functions. For two normed spaces X and Y and an integer n ∈ N we denote by L n (X, Y ) the space of all continuous n-multilinear maps T : X n → Y . 3.18. Lemma. Let X, Y, Z be normed spaces, and let U ⊂ X and V ⊂ Y be open subsets. Let f ∈ C 2 (U ; Y ) with f (U ) ⊂ V and g ∈ C 2 (V ; Z) be mappings. Then we have g • f ∈ C 2 (U ; Z), and for each x ∈ U we have Proof. This follows from the higher order chain rule; see [1, pages 87, 88]. Now, we turn back to the situation where M be an m-dimensional C k -submanifold of the Hilbert space H. Then for all y ∈ U ∩M and w 1 , w 2 ∈ T y M we have Proof. By assumption we have W := U ∩M = ∅. Thus, by Lemma 3.3 the mapping By the usual chain rule, we have and hence Dϕ( Therefore, by the second order chain rule (Lemma 3.18) we obtain Since Dφ 1 (x 1 )D 2 ϕ(x 2 )(u 1 , u 2 ) ∈ T y M , this completes the proof. Now, let G be another Hilbert space such that (G, H) is a pair of continuously embedded Hilbert spaces. Denoting by τ G and τ H the respective topologies, we have (iv) M ⊂ G and each local parametrization φ : . If any of the previous conditions is fulfilled, then we have T M ⊂ G × H. Let y ∈ M be arbitrary, and let φ : V → U ∩ M be a local parametrization around y, which is also a homeomorphism φ : V → (U ∩ M , · G ). Then the restricted identity is homeomorphism. The additional statement T M ⊂ G × H is a direct consequence of (vi). 3.22. Remark. If M is an m-dimensional (G, H)-submanifold of class C k , then, according to Proposition 3.21, it is also an m-dimensional topological submanifold of G. Recall that (G, H) denotes a pair of continuously embedded Hilbert spaces, and that M is an m-dimensional C k -submanifold of H. Now, let (H 0 , H 1 , . . . , H k−1 , H k ) be continuously embedded Hilbert spaces such that G = H 0 and H = H k . 3.23. Definition. We call M an m-dimensional (H 0 , . . . , H k )-submanifold of class C k if the following conditions are fulfilled: (1) M is an m-dimensional (G, H)-submanifold of class C k . (2) M is an m-dimensional C j -submanifold of H j for each j = 1, . . . , k. 3.24. Proposition. The following statements are equivalent: (ii) ⇒ (iii): Let y ∈ M be arbitrary, and let φ k : V k → U k ∩ M be a local parametrization of the C k -submanifold M around y. By Proposition 3.21 we have φ k ∈ C(V k ; G) ∩ C k (V k ; H). Let j ∈ {1, . . . , k − 1} be arbitrary, and let φ j : V j → U j ∩ M be a local parametrization of the C j -submanifold M of H j around y. Of course, M is also a C j -submanifold of H, and φ k : V k → U k ∩ M is a local parametrization around y. The mapping φ j : V j → U j ∩ M is also such a local parametrization around y, because For what follows, let H 0 be another Hilbert space such that (G, H 0 , H) are continuously embedded Hilbert spaces. We assume that M is an m-dimensional (G, H 0 , H)-submanifold of class C 2 . By Proposition 3.24 we have T M ⊂ G × H 0 . For the following result, recall the notation from Definition 3.12. In particular, ifĀ(·, z) ∈ L(H 0 , H) for each z ∈ G, then we have the decompositionĀ for all y ∈ G andĀ(·, z)| MU ∈ Γ(T M U ) for each z ∈ M U . 
We defineā : In particular, ifĀ(·, z) ∈ L(H 0 , H) for each z ∈ G, then we have the decomposition 3.26. Remark. Before we proceed with the proof, let us clarify some notation. In general, the symbols D 1 and D 2 denote the partial derivatives with respect to the first and the second coordinate. We use the notation DA · B| MU for the mapping M U → H, y → DA(y)B(y), and the mapping Da · b is defined analogously. Furthermore, we use the notation D 1Ā · B| MU for the mapping M U → H, y → D 1Ā (y, y)B(y), and the mapping D 1ā · b is defined analogously. The mapping φ −1 Proof of Proposition 3.25. By Proposition 3.24 we have Therefore, we have A • φ ∈ C 1 (V ; H) and B • φ ∈ C(V ; H 0 ), and hence, by [12, Prop. 6.1.1] we deduce that a ∈ C 1 (V ; R m ) and b ∈ C(V ; R m ). Let y ∈ U ∩ M be arbitrary and set x := φ −1 (y) ∈ V . There exists ǫ > 0 such that Consequently, the curve is well-defined and satisfies γ(0) = y. Since φ ∈ C 1 (V ; H 0 ), we have On the other hand, since A| MU = φ * a, we have A(γ(t)) = Dφ(x + tb(x))a(x + tb(x)), t ∈ (−ǫ, ǫ). Thus, noting that Dφ ∈ C 1 (V ; L(R m , H)), by the Leibniz Rule we have Combining the latter two identities we obtain the decomposition (3.2). Now, suppose that the additional assumptions from the second statement are fulfilled. Similar as above, by [12, Prop. 6.1.1] we deduce that a, b ∈ C(V ; R m ) and a ∈ C 1,0 (V × V ; R m ). Let y, z ∈ U ∩ M be arbitrary, and set x := φ −1 (y) ∈ V and ξ := φ −1 (z) ∈ V . By the decomposition (3.2) we have With y = z this in particular proves the decomposition (3.3). IfĀ(·, z) ∈ L(H 0 , H) for each z ∈ G, then the decomposition (3.4) is a direct consequence. Now, suppose that the additional assumptions from the third statement are fulfilled. Similar as above, by [12,Prop. 6 Therefore, for all x ∈ V we have Hence, by the decompositions (3.2) and (3.3) for all y ∈ U ∩ M we obtain where x := φ −1 (y) ∈ V , proving the decomposition (3.5). IfĀ(·, z) ∈ L(H 0 , H) for each z ∈ G, then the decomposition (3.6) is a direct consequence. For what follows, let d ∈ N be a positive integer such that m ≤ d, and let N be an m-dimensional C k -submanifold of R d . The following definition generalizes the concept of an immersion from Definition 3.1. 3.28. Definition. Let X ⊂ R d be an open subset such that X ∩ N = ∅, and let ψ ∈ C k (X; H) be a mapping. (1) Let x 0 ∈ X ∩ N be arbitrary. The mapping ψ is called a C k -immersion on N at For what follows, we fix a mapping ψ ∈ C k (R d ; H). Thus, we consider the situation X = R d . 3.29. Lemma. Let x 0 ∈ N be arbitrary, let {v 1 , . . . , v m } be a basis of T x0 N , and let h 1 , . . . , h m ∈ H be such that the matrix is invertible. Then ψ is a C k -immersion on N at x 0 . Proof. It suffices to show that the vectors Dψ(x 0 )v i , i = 1, . . . , m are linearly independent. For this purpose, let c 1 , . . . , c m ∈ R be such that Then for each j = 1, . . . , m we have and by invertibility of the matrix (3.7) we deduce that c 1 = . . . = c m = 0. 3.30. Lemma. Let x 0 ∈ N be such that ψ is a C k -immersion on N at x 0 . Then there exists an open neighborhood W 0 ⊂ R d of x 0 such that: (1) The submanifold W 0 ∩ N has one chart. Proof. Let ϕ : V → W ∩ N be a local parametrization around x 0 . We set ξ 0 := ϕ −1 (x 0 ) ∈ V and φ := ψ • ϕ. Then by the chain rule we have φ ∈ C k (V ; H) and Dφ(ξ 0 ) = Dψ(x 0 )Dϕ(ξ 0 ), showing that φ is a C k -immersion at ξ 0 . By [ 3.31. Lemma. Suppose that ψ| N : N → ψ(N ) is a homeomorphism, and that ψ is a C k -immersion on N . 
Then the following statements are true: Proof. Let ϕ : V → W ∩ N be a local parametrization of N , and set φ := ψ • ϕ. Since ψ| N is a homeomorphism, there exists an open subset U ⊂ H such that ψ(W ∩ N ) = U ∩ M . Hence, the mapping φ : V → U ∩ M is a homeomorphism. Furthermore, by the chain rule, for each ξ ∈ V we have Dφ(ξ) = Dψ(x)Dϕ(ξ), where x := ϕ(ξ) ∈ W ∩ N , showing that φ is a C k -immersion. Hence, the first two statements follow, and the third statement is a consequence of Proposition 3.24. 3.32. Definition. We say that a submanifold M as in Lemma 3.31 is induced by (ψ, N ). From now on, we assume that ψ| N : N → ψ(N ) is a homeomorphism, and that ψ is a C k -immersion on N . According to Lemma 3.31, let M be the mdimensional C k -submanifold of H, which is induced by (ψ, N ). The structure of local parametrizations is illustrated in the following diagram: Proof. This is a consequence of Lemma 3.31. 3.34. Lemma. Let y ∈ M be arbitrary, and set x := ψ −1 (y) ∈ N . Then we have Proof. Let ϕ : V → W ∩ N be a local parametrization around x. By Lemma 3. 31 there exists an open subset U ⊂ H such that the mapping φ := ψ • ϕ : V → U ∩ M is a local parametrization around y. Setting ξ := ϕ −1 (x) ∈ V , by the chain rule we obtain completing the proof. For the upcoming result, let ϕ : V → W ∩ N be a local parametrization of N , and let φ := ψ • ϕ : V → U ∩ M be the corresponding local parametrization of M ; see Lemma 3.31. For a mapping a : V → R m we define φ * a : U ∩ M → H and ϕ * a : W ∩N → R d according to Definition 3.12, and for a mapping b : 3.35. Lemma. The following statements are true: (1) For a mapping a : V → R m we have φ * a = ψ * ϕ * a. For the following auxiliary result, recall the Definition 3.10 of a local vector field, and the Definition 3.11 of a locally simultaneous vector field. 3.36. Lemma. For every mapping a : N → R d the following statements are true: (1) Let A : M → H be the mapping A := ψ * a. If a ∈ Γ(T N ), then we have A ∈ Γ(T M ). Proof. For the proof of the first statement, let y ∈ M be arbitrary, and set x := ψ −1 (y) ∈ N . By Lemma 3.34 we obtain We proceed with the proof of the second statement. Let z ∈ M be arbitrary, and set ξ : Therefore, by Lemma 3.34 for each y ∈ U ∩ M we obtain Finite dimensional submanifolds generated by orbit maps of group actions In this section we deal with finite dimensional submanifolds given by orbit maps of group actions, in particular in Hermite Sobolev spaces. Let H be a separable Hilbert space. We also fix positive integers k ∈ N and m, d ∈ N such that m ≤ d. If G is the higher-order domain of closed operators, then there is another criterion for a (G, H)-submanifold, which adds to Proposition 3.21. Now, let T = (T (t)) t∈R d be a multi-parameter C 0 -group on H with generator A; see Appendix A for details. As a consequence of Lemmas 3.30, 3.31 and Proposition A.11, we obtain the following examples of submanifolds generated by the orbit maps of the group T . Proposition. Let Then the following statements are true: Now, we turn to Hermite Sobolev spaces; see Appendix B for further details. For submanifolds in Hermite Sobolev spaces, there is another criterion for a (G, H)submanifold, which adds to Proposition 3.21. Recall that H denotes the Hermite operator. 3.39. Proposition. Let p ∈ R and l ∈ N be arbitrary. We set G := S p+l (R d ) and H := S p (R d ). Let M be an m-dimensional C k -submanifold of H. Then the following statements are equivalent: Proof. By is a homeomorphism as well. 
is a homeomorphism, then by Lemma B.8 the identity is a homeomorphism as well. For the rest of this section, we will present examples of submanifolds in Hermite Sobolev spaces which are generated by the orbit maps of the translation group. For this purpose, recall the translation group τ = (τ x ) x∈R d from Appendix B; see in particular Lemma B.10. For each i = 1, . . . , d we define the family Let p ∈ R be arbitrary. Then τ 1 , . . . , τ d are commutative C 0 -groups on S p (R d ), and we have , and that it is even a large subspace of D(A p,i ) in the sense that it is a core for A p,i . 3.40. Theorem. For each p ∈ R and each i = 1, . . . , d the following statements are true: ( Lemma. Let p, q ∈ R with p ≤ q and n ∈ N 0 be arbitrary. Then the following statements are true: Taking into account Lemma B.2, we deduce that ξ Φ ∈ C n (R d ; S p (R p )), and hence, by Proposition A.10 we have Φ ∈ D(A n p ). Furthermore, taking into account Lemma B.2 again, we obtain A α q Φ = A α p Φ for all m ∈ N 0 with m ≤ n and α ∈ {1, . . . , d} m . 3.42. Proposition. Let p ∈ R and n ∈ N be arbitrary. Then the following statements are true: consists of separable Hilbert spaces with continuous embedding. (2) For all m ∈ N 0 with m ≤ n and α ∈ {1, . . . , d} m we have Proof. By induction we prove S p+ n 2 (R d ) ⊂ D(A n p ) and the identity (3.9) for each n ∈ N. For n = 1 this is a consequence of Theorem 3.40. We proceed with the induction step n − 1 → n: By induction hypothesis and Lemma 3.41 we have Now, let Φ ∈ S p+ n 2 (R d ) be arbitrary. By Lemma 3.41 and induction hypothesis, for all α ∈ {1, . . . , d} n−1 we have . Furthermore, using Theorem 3.40 we obtain (3.9). Finally, by Lemma B.5 the pair (3.8) consists of separable Hilbert spaces with continuous embedding for each n ∈ N. 3.43. Proposition. Let p ∈ R, n ∈ N 0 and Φ ∈ S p+ n 2 (R d ) be arbitrary. Then the following statements are true: Proof. This is a consequence of Propositions A.11, 3.42 and Lemma B.5. For the next result, recall that every finite signed measure µ on Proof. First, we show that ψ is injective. Let x, y ∈ R d be such that ψ(x) = ψ(y). Then we have τ x µ = τ y µ. Since supp(µ) is compact, by Lemma B.1 there exists a Schwartz function ϕ ∈ S (R d ) such that ϕ(x + z) = x + z and ϕ(y + z) = y + z for all z ∈ supp(µ). By Lemma B.13 we obtain Since µ(R d ) = 0, we deduce that x = y. Next, we show that ψ : Since ϕ has compact support, by Lebesgue's dominated convergence theorem we deduce that On the other hand, we have ψ(x n k ) → ψ(x), and hence the contradiction Since the sequence (x n ) n∈N is bounded and supp(µ) is compact, by Lemma B.1 there exists a Schwartz function ϕ ∈ S (R d ) such that ϕ(z + x) = z + x for all z ∈ supp(µ) as well as ϕ(z + x n ) = z + x n for all z ∈ supp(µ) and all n ∈ N. This gives us . Let x 0 ∈ R d be arbitrary. Since supp(µ) is compact, by Lemma B.1 for each j = 1, . . . , d there exists a Schwartz function ϕ j ∈ S (R d ) such that ϕ j (z + x 0 ) = z j for all z ∈ supp(µ). Therefore, by Proposition 3.43 and Lemma B.13 for all i, j = 1, . . . , d we have Since µ(R d ) = 0, by Lemma 3.29 it follows that ψ is a C k -immersion at x 0 . For the next results, recall that every polynomial f : Now, let i = 1, . . . , d be arbitrary. Note that Therefore, for all x ∈ R d we obtain completing the proof. Proof. First, we show that ψ| N is injective. Let x, y ∈ N be such that ψ(x) = ψ(y), that is τ x f = τ y f . By Lemma 3.45 we have Taking partial derivatives with respect to z, inductively we deduce that x = y. 
Next, we show that ψ| N : N → ψ(N ) is a homeomorphism. Let (x m ) m∈N ⊂ R n and x ∈ R n be such that ψ(x m ) → ψ(x). By Lemma 3.45 we have Taking partial derivatives with respect to z, inductively we deduce that x m → x. Now, we prove that ψ is a C k -immersion on N . By Proposition 3.43 we have Then for all i, j = 1, . . . , m we have z j,i − x 0,i = 1 − δ ij , and by Lemma 3.45 we obtain Proof. First, we show that ψ is injective. Let x, y ∈ R d be such that ψ(x) = ψ(y). Then we have τ x ϕ = τ y ϕ. Suppose that x = y. Then for all z ∈ R d we have We set ∆ := y − x = 0. Inductively, for all z ∈ R d and n ∈ N 0 we obtain Since the matrix (3.10) is invertible, we have ϕ = 0. Hence, there exists z ∈ R d such that ϕ(z − x) = 0. However, by Theorem B.19 we have ϕ ∈ C 1 0 (R d ), and hence, we obtain the contradiction showing that ψ is injective. Next, we show that ψ : However, by Theorem B.19 we obtain the contradiction ϕ ∈ C 1 0 (R d ), showing that the sequence (x n ) n∈N is bounded. Now, let (n k ) k∈N be an arbitrary subsequence. Since (x n k ) k∈N is bounded, there exists another subsequence (n k l ) l∈N such that lim l→∞ x n k l = y for some y ∈ R d . This gives us ψ(x n k l ) → ψ(y), and hence ψ(x) = ψ(y). By the injectivity of ψ we deduce that x = y. Therefore, we have lim l→∞ x n k l = x. Since the subsequence (n k ) k∈N was arbitrary, we deduce that . Let x 0 ∈ N be arbitrary. We set Φ j := δ x0+zj for j = 1, . . . , n. By Proposition 3.43 and Lemmas B.9, B.12, for all i, j = 1, . . . , n we have Since the matrix (3.10) is invertible, by Lemma 3.29 we deduce that ψ is an immersion on N at x 0 . As an immediate application of Lemma 3.31, Proposition 3.43 and our previous findings (Propositions 3.44, 3.46 and 3.47), we obtain the following examples of submanifolds generated by the orbit maps of the translation group. 3.48. Examples. Let k ∈ N be arbitrary, and let N be an m-dimensional C ksubmanifold of R d . We assume that Φ ∈ S p+ k 2 (R d ) with a suitable p ∈ R belongs to one of the following three types: Invariant manifolds generated by orbit maps In this section we investigate invariant manifolds generated by orbit maps. In Section 4.1 we treat SPDEs with coefficients given by generators of group actions, in Section 4.2 we discuss the structure of invariant submanifolds, and in Section 4.3 we consider such SPDEs in Hermite Sobolev spaces. As we will see, there is an interplay between these SPDEs and finite dimensional SDEs, which we investigate in Section 4.4. Coefficients given by generators of group actions Let (G, H 0 , H) be separable Hilbert spaces with continuous embeddings. We consider the SPDE (1.1) with continuous mappings L, A 1 , . . . , A r : G → H for some r ∈ N. Let d ∈ N be a positive integer, and let T = (T (t)) t∈R d be a multi-parameter C 0 -group on H such that T | G is a multi-parameter C 0 -group on G, and T | H0 is a multi-parameter C 0 -group on H 0 . We denote by B = (B 1 , . . . , B d ) the generator of T ; see Appendix A for further details. We assume that H 0 ⊂ D(B) and G ⊂ D(B 2 ). Furthermore, we assume that B i | H0 ∈ L(H 0 , H) and B i | G ∈ L(G, H 0 ) for each i = 1, . . . , d. Let y 0 ∈ G be arbitrary, and denote by ψ ∈ C 2 (R d ; H) the orbit map given by ψ(t) := T (t)y 0 for each t ∈ R d . Let N be an m-dimensional C 2 -submanifold of R d , and let M be an m-dimensional (G, H 0 , H)-submanifold of class C 2 , which is induced by (ψ, N ); see Definition 3.32. 
Recall that this requires that ψ| N : N → ψ(N ) is a homeomorphism, and that ψ is a C 2 -immersion on N . For vectors Σ 1 , . . . , Σ r ∈ R d we define the matrix Σ ∈ R d×r as Σ ij := e i , Σ j for i = 1, . . . , d and j = 1, . . . , r. 4.1. Theorem. The following statements are equivalent: where the continuous mappingsσ 1 , . . . ,σ r ,b : N → R d are the unique solutions of the equations Proof. Let y ∈ M be arbitrary, and set x := ψ −1 (y) ∈ N . By Proposition A.11 for j = 1, . . . , r we have as well as Therefore, applying Theorem 6.10 concludes the proof. 4.2. Proposition. Suppose that the following conditions are fulfilled: (1) The submanifold M is locally invariant for the SPDE (1.1). (2) The submanifold N has one chart with a global parametrization ϕ : whose coefficients a 1 , . . . , a r , ℓ : V → R m are the unique solutions of the equationsσ where the continuous mappingsσ 1 , . . . ,σ r ,b : N → R d are the unique solutions of the equations (4.2) and (4.3) Then the submanifold M is globally invariant for the SPDE (1.1), and the submanifold N is globally invariant for the SDE (4.1). Proof. This is a consequence of Proposition 6.11. The structure of invariant submanifolds In the previous we have considered invariant submanifolds which are induced by (ψ, N ), and shown that the coefficients of the SPDE (1.1) must be of the form (4.2) and (4.3). In this section, we will show that for such coefficients an invariant submanifold must, subject to appropriate regularity conditions, necessarily be an induced submanifold. Let T = (T (t)) t∈R d be a multi-parameter C 0 -group on H as in Section 4.1. Furthermore, let M be an m-dimensional (G, H 0 , H)-submanifold of class C 2 , which is locally invariant for the SPDE (1.1). Suppose that for each j = 1, . . . , r we have For simplicity, we assume that m = r. Furthermore, we assume there exists a mapping Λ : V → R m×d of class C 1 such that Proof. Let x ∈ V be arbitrary, and set y := φ(x) ∈ U ∩ M . Noting (4.8), the two sets . This gives us a mapping Γ : V → R m×m satisfying (4.9). The mapping Consequently, the mapping Γ is of class C 1 , which concludes the proof. Now, we consider the product Φ := Γ · Λ : V → R m×d , which is again of class Proof. We may assume that the open set V is a connected neighborhood of x 0 . By (4.7) and (4.9) the mapping φ ∈ C 2 (V ; H) is a D(B)-valued solution to the PDE By assumption the mapping Φ has a primitive ϕ : V → R d . We may assume that ϕ(x 0 ) = 0. Thus, by Proposition A.12 we obtain φ = ψ • ϕ. Since ∇ϕ = Φ and rk Φ(x 0 ) = m, the mapping ϕ is a C 2 -immersion at x 0 . Hence, by Lemma 3.30 there exists an open neighborhood V 0 ⊂ V of zero such that ϕ| V0 : 4.6. Remark. We may assume that the open set V is a simply connected neighborhood of x 0 . Then Φ has a primitive if and only if . . , m and k = 1, . . . , d. Invariant submanifolds in Hermite Sobolev spaces In this section we will apply our findings from Section 4.1 in order to construct examples of invariant submanifolds in Hermite Sobolev spaces; see Appendix B for further details about Hermite Sobolev spaces. Let p ∈ R be arbitrary and set G : . . , d and j = 1, . . . , r be distributions. We define the coefficients L, A 1 , . . . , A r : G → H of the SPDE (1.1) as σ ij , y ∂ i y, j = 1, . . . , r, (4.11) where ·, · denotes the dual pair on S −(p+1) (R d ) × S p+1 (R d ); see Lemma B.3 and also Remark B.4. Furthermore σ, y ∈ R d×r denotes the matrix with elements σ ij , y for i = 1, . . . , d and j = 1, . . . , r. 
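The coefficients (4.10) and (4.11) act on the unknown y only through the dual pairings b_i, y and σ_ij, y. When y is a Dirac mass δ_x (the situation Φ = δ_0 studied in Section 4.4 below), these pairings reduce to the pointwise values b_i(x) and σ_ij(x), which is why the associated finite-dimensional SDE carries exactly the coefficients b and σ (cf. Remark 4.13). The following minimal numerical sketch illustrates this reduction; the coefficient functions are hypothetical, and δ_x is approximated by narrow Gaussians.

import numpy as np

def pair_with_dirac(f, x, eps):
    # approximate <f, delta_x> by integrating f against a narrow Gaussian centred at x
    grid = np.linspace(x - 10 * eps, x + 10 * eps, 4001)
    dz = grid[1] - grid[0]
    kernel = np.exp(-(grid - x) ** 2 / (2 * eps ** 2)) / (np.sqrt(2 * np.pi) * eps)
    return np.sum(f(grid) * kernel) * dz

b = lambda z: np.sin(z) + 0.5 * z          # hypothetical drift coefficient b_i
sigma = lambda z: 1.0 / (1.0 + z ** 2)     # hypothetical volatility coefficient sigma_ij
x = 0.7
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, pair_with_dirac(b, x, eps) - b(x), pair_with_dirac(sigma, x, eps) - sigma(x))
# both differences vanish as eps -> 0: <b_i, delta_x> = b_i(x) and <sigma_ij, delta_x> = sigma_ij(x)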
Let Φ ∈ G be arbitrary, and denote by ψ ∈ C 2 (R d ; H) the orbit map given by ψ(x) = τ x Φ for each x ∈ R d . Due to our results from Section 3.2 we are in the mathematical setting of Section 4.1. In particular, by Proposition 3.42 we have H 0 ⊂ D(−∂) and G ⊂ D((−∂) 2 ). Concerning notation, for a matrix Σ ∈ R d×r we agree to set Σ j := (Σ ij ) i=1,...,d ∈ R d for j = 1, . . . , r. Furthermore, for distributions c i ∈ S −(p+1) (R d ), i = 1, . . . , d and y ∈ G we define c, y ∈ R d as c, y := ( c i , y ) i=1,...,d . Let N be an m-dimensional C 2 -submanifold of R d , and let M be an m-dimensional (G, H 0 , H)-submanifold of class C 2 , which is induced by (ψ, N ). Recall that this requires that ψ| N : N → ψ(N ) is a homeomorphism, and that ψ is a C 2 -immersion on N . Proof. This is a consequence of Proposition 4.2. Example Then the set is an m-dimensional (G, H 0 , H)-submanifold of class C 2 with one chart, which is locally invariant for the SPDE (1.1). Note that the invertibility of the matrix (∂ i ϕ(z j )) i,j=1,...,m is required in order to ensure that ψ is an immersion on N ; see Proposition 3.47. If b i ∈ L 2 (R d ) for i = 1, . . . , d and σ ij ∈ L 2 (R d ) for i = 1, . . . , d and j = 1, . . . , r, then M is even globally invariant for the SPDE (1.1). Recalling that L 2 (R d ) = S 0 (R d ), this follows from Lemma B.16, which ensures that the coefficients a 1 , . . . , a r , ℓ : R m → R m are bounded. 4.12. Remark. Note that in each of the previous examples we have considered the submanifold N := R m × {0}, which ensures that in any case the assumptions from Examples 3.48 concerning N are fulfilled. Since the submanifold N is a linear space, in any case the respective condition (4.15), (4.16) or (4.17) ensures that N is locally invariant for the SDE (4.12); see Corollary 6.7. Of course, we can also consider other choices of the submanifold N such that the assumptions from Examples 3.48 are fulfilled. In particular, noting Theorem 6.3, in the situation of Example 4.9 we can choose any m-dimensional C 2 -submanifold N of R d such that for each x ∈ R d . 4.13. Remark. Consider the particular situation m = d, N = R d and Φ = δ 0 , which is covered by Example 4.9. Then, by Lemma B.12 the invariant submanifold is given by and the coefficients of the SDE (4.12) are simply given byb = b andσ j = σ j for j = 1, . . . , r. 4.14. Remark. Note that the findings of this section are in accordance with [33,Lemma 3.6], where it was shown that solutions to the SPDE (1.1) with coefficients (4.10) and (4.11) can be realized locally as Y t = τ Xt Φ with an R d -valued Itô process X. Interplay between SPDEs and finite dimensional SDEs In this section we illustrate how our findings from the previous Section 4.3 can be used in order to study stochastic invariance for finite dimensional diffusions. Taking into account Remark 4.13, our idea is to link invariance of the submanifold N for the SDE (4.18) with invariance of the submanifold M for the SPDE (1.1) in Hermite Sobolev spaces, where M is defined in (4.19) below. For this purpose, we set p := −(q + 1). Then we have p + 1 < − d 4 , and hence, we can consider the SPDE (1.1) with coefficients (4.10) and (4.11) in the framework of the previous Section 4.3 with Φ = δ 0 . As pointed out in Remark 4.13, then the coefficients of the SDE (4.12) are simply given byb = b andσ j = σ j for j = 1, . . . , r, and hence, the SDE (4.18) from this section coincides with the SDE (4.12). 
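The unit sphere example treated later (Corollary 4.28, Example 4.30) recovers a result of Stroock. As a purely numerical illustration of the finite-dimensional side of this correspondence, the following sketch simulates what we assume to be the classical Itô representation of spherical Brownian motion, dX_t = -((d-1)/2) X_t dt + (I - X_t X_t^T) dW_t (the SDE (4.40) considered later in the text may be written in a slightly different but equivalent form), and checks that the Euler-Maruyama path stays close to the sphere.

import numpy as np

rng = np.random.default_rng(0)
d, T, n_steps = 3, 1.0, 100_000
dt = T / n_steps
x = np.zeros(d); x[0] = 1.0                      # start on the unit sphere

max_dev = 0.0
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=d)
    proj = np.eye(d) - np.outer(x, x)            # volatility sigma(x) = I - x x^T (tangential projection)
    x = x - 0.5 * (d - 1) * x * dt + proj @ dW   # Ito drift correction -(d-1)/2 x
    max_dev = max(max_dev, abs(np.linalg.norm(x) - 1.0))
print("max deviation of |X_t| from 1:", max_dev)  # small, and decreases further as dt -> 0

The drift term is precisely the Itô correction of the Stratonovich equation dX = (I - XX^T) ∘ dW, so the tangency conditions of Section 6 hold on the sphere, and the discretization error is the only source of deviation in this sketch.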
By Lemma B.12, the orbit map ψ ∈ C 2 (R d ; H) is given by ψ(x) = δ x for each x ∈ R d . Therefore, by Examples 3.48 with k = 2 the set is a d-dimensional (G, H 0 , H)-submanifold of class C 2 , which is induced by (ψ, N ). The following result shows how local invariance of the submanifold N for the SDE Proof. The first statement is a consequence of Proposition 4.8. In the situation of the second statement, let x 0 ∈ N be arbitrary, and let X be a global weak solution to the SDE (4.18) with X 0 = x 0 . We define the stopping time and, since N is closed as a subset of R d , arguing by contradiction we can show that P(τ = ∞) = 1; see, for example, the proof of [13, Thm. 2.8]. Consequently, when we are interested in proving local invariance of the submanifold N for the SDE (4.18), we can alternatively show local invariance of the submanifold M for the SPDE (1.1), which turns out to be simpler in certain situations. We illustrate this procedure in the proof of the following We are interested in finding an additional condition ensuring that N is locally invariant for the SDE (4.18). In the general framework of Section 6, such a condition is provided by Proposition 6.6. In the present situation, we will establish another equivalent condition by using the connection to the SPDE (1.1). More precisely, for each j = 1, . . . , r we defineĀ j : G × G → H 0 as where ·, · denotes the dual pair on S −(p+1) (R d ) × S p+1 (R d ). ThenĀ j is continuous, by (4.11) we have A j (y) =Ā j (y, y), y ∈ G, and it has an extensionĀ j ∈ C 1,0 (H 0 × G; H) such thatĀ j z :=Ā j (·, z) belongs to L(H 0 , H) for each z ∈ G. (2) In particular, we have Proof. Recalling (4.10) and (4.11), these statements follow from Proposition 3.43, where for the third statement we note that for each y ∈ M we have This completes the proof. Concerning the notation used in equations (4.22) and (4.23) below, we refer to Definition 6.1. 4.19. Theorem. Suppose that condition (4.20) is fulfilled. Then the following statements are equivalent: The latter relation shows that and thus, the stated equivalence is a consequence of Theorem 4.16 and Theorem 6.3. If the submanifold N is affine, then it is locally invariant for the SDE (4.18) if and only if we have (4.20). This is a consequence of Corollary 6.7. More generally, we have the following result. Recall that Γ * (T N ) denotes the space of all locally simultaneous vector fields on N ; see Definition 3.11. Remark. Consider the R d -valued Stratonovich SDE with a continuous mapping c : R d → R d . It is well known that the submanifold N is locally invariant for the Stratonovich SDE Proof. The first statement is a consequence of the Sobolev embedding theorem for Hermite Sobolev spaces (Theorem B.19), and the second statement follows from the definition (4.11). Now, let y ∈ M be arbitrary. Then we have y = δ x , where x := ψ −1 (y) ∈ N . Therefore, by the Leibniz rule and by duality, we obtain Hence, using Proposition 3.43 completes the proof. 4.24. Remark. If σ ij ∈ S q+ 1 2 (R d ) for all i = 1, . . . , d and j = 1, . . . , r, then by the first part of Lemma 4.23 the Itô SDE (4.18) can equivalently be expressed by the Stratonovich SDE (4.28), where the continuous mapping c : R d → R d is given by The following result provides simple representations for the vector fields in (6.19) and (6.21) in the present situation; see also Remark 6.22. These representations are given in terms of the vector fields b| N and c| N . Dσ j · σ j | N ∈ Γ(T N ). 
(iii) We have c| N ∈ Γ(T N ), where the continuous mapping c : R d → R d is given by (4.30). If any of the previous conditions is fulfilled, then we have Concerning the components of f we assume that f k ∈ S q+1 (R d ) for all k = 1, . . . , n. Recalling that q > d 4 , by the Sobolev embedding theorem for Hermite Sobolev spaces (Theorem B. 19) we have f ∈ C 2 (R d ; R n ). We also assume that Df (x)R d = R n for all x ∈ N . Then, by Lemma 3.17 we have We define the operator L : and for each j = 1, . . . , r we define the operator A j : 4.26. Theorem. The following statements are equivalent: (i) The submanifold N is locally invariant for the SDE (4.18). (ii) For all k = 1, . . . , n we have Before we provide the proof of Theorem 4.26, let us state some consequences. Note that we can decompose the operator L as L = L 1 + L 2 , where the first order operator L 1 : and the second order operator L 2 : C 2 (R d ) → C(R d ) is given by x, σ j (x) = 0, j = 1, . . . , r. For each x ∈ O we obtain the partial derivatives with an R d -valued Wiener process W ; see [17, Example 3.3.2]. With our notation, the volatilities σ 1 , . . . , σ d : R d → R d are given by Let us compute the corresponding Itô dynamics. For this purpose, let x ∈ R d be arbitrary. Then we have and hence, for each j = 1, . . . , d we obtain In particular, for x ∈ S d−1 we obtain Therefore, we may alternatively consider the R d -valued Itô SDE cf., for example, equation (2.1) in [25]. Using Corollary 4.28, we will show that the unit sphere S d−1 is globally invariant for the SDE (4.40). First, note that the SDE ..,d , and there exist σ ij ∈ S q (R d ), i, j = 1, . . . , d such that where σ = (σ ij ) i,j=1,...,d . Hence, we may assume that the coefficients b : R d → R d and σ : R d → R d×d of the SDE (4.18) are given by these mappings with components from S q (R d ). Now, let x ∈ S d−1 be arbitrary. Since the matrix σ(x) is symmetric, taking into account the identification R d ∼ = R d×1 we have Therefore, we have and hence • More generally, let N be a (d − 1)-dimensional submanifold of R d which is compact and connected. By the Jordan-Brouwer separation theorem its complement R d \ N consists of two connected components N 1 and N 2 . Now, we approach the proof of Theorem 4.26. Recall that ψ ∈ C 2 (R d ; H) denotes the orbit map ψ = ξ Φ with Φ = δ 0 . Thus, we have ψ(x) = δ x for all x ∈ R d , and by Proposition 3.44 the mapping ψ is a C 2 -immersion, and ψ : R d → ψ(R d ) is a homeomorphism. By Examples 3.48 the set For the next auxiliary result note that f k ∈ S −p (R d ) for all k = 1, . . . , n. (2) For each y ∈ M we have T y M = T y K ∩ n k=1 ker( f k , · ). Proof. Let y ∈ K be arbitrary. Setting x := ψ −1 (y) ∈ O, we have y = δ x . We have y ∈ M if and only if x ∈ N , and by (4.31) we have x ∈ N if and only if f k (x) = 0 for all k = 1, . . . , n. This is equivalent to f k , δ x = 0 for all k = 1, . . . , n, which is satisfied if and only if y ∈ n k=1 ker( f k , · ), proving the first statement. For the proof of the second statement, let y ∈ M be arbitrary. Setting x := ψ −1 (y) ∈ N , we have y = δ x . By Lemma 3.34 we have Let w ∈ T y K be arbitrary. There is a unique vector v ∈ R d such that w = Dψ(x)v. We have w ∈ T y M if and only if v ∈ T x N . By Proposition 3.43 and by duality, for each k = 1, . . . , n we have Therefore, by (4.32) we have v ∈ T x N if and only if w ∈ n k=1 ker( f k , · ), completing the proof. Now, we are ready to provide the proof of Theorem 4.26. Proof of Theorem 4.26. 
(i) ⇒ (ii): Let x ∈ N be arbitrary. There exist a global weak solution X to the SDE (6.14) with X 0 = x and a positive stopping time τ > 0 such that X τ ∈ N up to an evanescent set. Let k = 1, . . . , n be arbitrary. By Itô's formula we have P-almost surely Noting that f k (X τ ) = 0, we deduce (4.33) and (4.34). Indeed, let y ∈ M be arbitrary. Setting x := ψ −1 (y) ∈ N , we have y = δ x . Thus, taking into account the definitions (4.10) and (4.11) of the coefficients, by duality for all k = 1, . . . , n we obtain as well as In particular, we have A j | M ∈ Γ(T K U ), j = 1, . . . , r, (4.45) where K U := U ∩ K . From these equations, it follows that ϕ * * (a j , a j ) + 1 2 r j=1 φ * * (a j , a j ). Thus, noting (4.44), by Lemma 4.33 we obtain Therefore, by Lemma 3.34 we deduce that Hence, there is a continuous mapping ℓ : V → R m which is the unique solution to the equationb ϕ * * (a j , a j ) = ϕ * ℓ. (4.50) Therefore, using Lemma 3.35, by (4.49) and (4.50) we obtain Now, by Proposition 6.9 we deduce that the submanifold M is locally invariant for the SPDE (1.1). Consequently, by Theorem 4.16 it follows that the submanifold N is locally invariant for the SDE (4.18). Quasi-semilinear stochastic partial differential equations In this section we investigate quasi-semilinear SPDEs. For such equations, we can consider analytically weak solutions, and accordingly study weak local invariance of a submanifold. In Section 5.1 we will show that, under suitable assumptions, local invariance is actually equivalent to weak local invariance. This result in particular applies to semilinear SPDEs, which we will treat in Section 5.2. General results In this section we treat general quasi-semilinear SPDEs. Let (G, H) be separable Hilbert spaces with continuous embedding, and let L : G → H and A j : G → H, j = 1, . . . , r be continuous mappings. Throughout this section, we assume that the following assumption is satisfied. Assumption (Quasi-semilinearity). We suppose that the following conditions are fulfilled: (1) G is a dense subspace of H. Example. Let H := S p (R d ) for some p ∈ R. We set G := S p+ 1 2 (R d ) and assume that the coefficients L, A 1 , . . . , A r : G → H are given by (1) B = (Ω, F , (F t ) t∈R+ , P) is a stochastic basis; that is, a filtered probability space satisfying the usual conditions. (2) W is an R r -valued standard Wiener process on the stochastic basis B. (3) Y is an H-valued adapted, continuous process such that, for some strictly positive stopping time τ > 0, for each ζ ∈ H 0 we have P-almost surely and P-almost surely where we agree on the notation assume that Y τ ∈ U ∩ M up to an evanescent set. Now, we define the continuous R m -valued process X := ψ(Y ). Then we have X τ ∈ V , and since ζ 1 , . . . , ζ m ∈ H 0 , the process X is a local strong solution to the SDE . . , r are given by Since φ ∈ C 2 (V ; H), by Itô's formula we obtain that the process Y is a local solution to the SPDE with lifetime τ , where we recall the notation from Definition 3.12. Let ξ ∈ H 0 be arbitrary. Then we have On the other hand, the process Y is a local analytically weak martingale solution to the original SPDE (1.1) with Y 0 = y and lifetime τ . Therefore, we have Thus, taking into account the continuity of the mappings (5.2) and (5.3), we have . . , r. for all j = 1, . . . , r. This proves y ∈ D(L y ) and y ∈ D(Ā j y ) for all j = 1, . . . , r. Taking into account (5.1), we deduce that y ∈ G. Consequently, we have M ⊂ G. By (5.8) and (5.9) we obtain for each x ∈ V . 
Furthermore, from (5.11) and (5.12) we obtain . . , r for all ξ ∈ H 0 and all y ∈ U ∩ M . Since H 0 is dense in H, we obtain A j (y) =Ā j (y, y) + σ j (y) = (φ * A j ζ )(y), j = 1, . . . , r for all y ∈ U ∩ M . Since y 0 ∈ M at the beginning of the proof was chosen arbitrary, we deduce that L| M : (M , · H ) → (H, · H ) and A j | M : (M , · H ) → (H, · H ), j = 1, . . . , r are continuous. Furthermore, by taking into account (5.10), we see that Y is local strong solution to the SPDE (1.1) with Y 0 = y 0 , proving that M is locally invariant for the SPDE (1.1). Semilinear stochastic partial differential equations In this section we present consequences of our previous findings for semilinear SPDEs of the form Such equations have been studied, for example, in [7,16,24,31]. Here the state space H is a separable Hilbert space, and B : H ⊃ D(B) → H is a densely defined, closed operator. Moreover α : H → H and σ j : H → H for j = 1, . . . , r are continuous mappings. We endow G := D(B) with the graph norm y G := y 2 H + By 2 H , y ∈ G. (5.14) By Proposition A.7, the pair (G, H) consists of separable Hilbert spaces with continuous embedding. 5.13. Remark. If B generates a C 0 -semigroup on H, then we can also consider mild solutions. However, this is not required for our upcoming results. Let M be a finite dimensional C 2 -submanifold of H. Invariant manifolds of weak solutions to semilinear SPDEs have been studied, for example, in [11,28]; see also [13] for the case of jump-diffusions and submanifolds with boundary. 5.14. Lemma. The following statements are equivalent: We will derive further consequences in the upcoming section. The general invariance result In this section we provide the general invariance result. Let (G, H) be separable Hilbert spaces with continuous embedding, and consider the SPDE (1. , where we recall the notation from Definition 3.12. where the continuous mappings ℓ : V → R m and a j : V → R m , j = 1, . . . , r are the unique solutions of the equations then the submanifold M is globally invariant for the SPDE (1.1). 6.5. Remark. Choosing G = H = R d , we see that Theorem 6.3 and Proposition 6.4 cover the well-known situation of finite dimensional SDEs. Before we provide the proofs of Theorem 6.3 and Proposition 6.4, let us state some consequences of these results. Consider the condition We are interested in finding an additional condition which ensures such that M is locally invariant for the SPDE (1.1). 6.6. Proposition. Suppose that condition (6.5) is fulfilled. Then the following statements are equivalent: (i) M is locally invariant for the SPDE (1.1). Proof. This is a consequence of Theorem 6.3. We say that the submanifold M is affine if for any local parametrization φ : V → U ∩ M we have D 2 φ = 0. 6.7. Corollary. Suppose the submanifold M is affine. Then the following statements are equivalent: (i) M is locally invariant for the SPDE (1.1). Proof. This is a consequence of Theorem 6.3 and Proposition 6.6. Then we can rewrite the SPDE (1.1) in Stratonovich form as where K : H → H is given by If we have (6.1), then by the decomposition (3.2) from Proposition 3.25 we have and hence condition (6.2) is equivalent to We will present a corresponding result for continuously embedded Hilbert spaces with an additional intermediate space later on; see Theorem 6.20 below. We can express the statement of Theorem 6.3 in local coordinates as follows. Proof. This is an immediate consequence of Theorem 6.3. 
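The displayed equations of this local-coordinate reformulation are not reproduced here; as a rough finite-dimensional illustration (cf. Remark 6.5) of what such equations amount to, take M to be the unit circle in R^2 with parametrization φ(x) = (cos x, sin x), volatility A(y) = (-y_2, y_1) and drift L(y) = -y/2. Assuming the usual Itô-derived form A(φ(x)) = Dφ(x)a(x) and L(φ(x)) = Dφ(x)ℓ(x) + (1/2)D^2φ(x)(a(x), a(x)) of the local-coordinate equations, the solutions are a ≡ 1 and ℓ ≡ 0, which the following sketch verifies numerically.

import numpy as np

phi   = lambda x: np.array([np.cos(x), np.sin(x)])
Dphi  = lambda x: np.array([-np.sin(x), np.cos(x)])
D2phi = lambda x: np.array([-np.cos(x), -np.sin(x)])
A = lambda y: np.array([-y[1], y[0]])        # tangential volatility field
L = lambda y: -0.5 * y                       # drift field
a, ell = 1.0, 0.0                            # candidate local-coordinate coefficients
for x in np.linspace(0.0, 2 * np.pi, 7):
    y = phi(x)
    res_vol   = A(y) - Dphi(x) * a
    res_drift = L(y) - (Dphi(x) * ell + 0.5 * D2phi(x) * a * a)
    print(x, np.max(np.abs(res_vol)), np.max(np.abs(res_drift)))   # both residuals are zero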
Since y ∈ M was arbitrary, by Proposition 6.9 we deduce that the submanifold N is locally invariant for the SDE (6.8). Furthermore, by Lemma 3.35 we obtain as well as Since the element y ∈ M was arbitrary, this procedure provides us with continuous mappings b, σ 1 , . . . , σ r : N → R d which are the unique solutions of the equations (6.9) and (6.10). ϕ * * (a j , a j ). For the next result, recall that the submanifold M has one chart if N has one chart; see Lemma 3.33. 6.11. Proposition. If the submanifold M is locally invariant for the SPDE (1.1) and the submanifold N has one chart with a global parametrization ϕ : V → N , then for continuous mappings ℓ, a 1 , . . . , a r : V → R m the following statements are equivalent: (i) ℓ, a 1 , . . . , a r : V → R m are the unique solutions of the equations (6.3) and (6.4). Proof. Taking Proof. This is a consequence of Proposition 3.21. 6.14. Proposition. For a mapping A : M U → H the following statements are equivalent: If any of the previous two conditions is fulfilled, then we have ψ * A = φ −1 * A. 6.16. Lemma. Let E ⊂ G be a subset, let K : (E, · G ) → (H, · H ) be a continuous function, let y 0 ∈ G be arbitrary, let τ > 0 be a positive constant, and let Y : Then we have K(y 0 ) = 0. Proof. Let y ∈ H ′ be an arbitrary continuous linear functional. By assumption we have By a monotone class argument, we even have and therefore By the continuity of the mapping and hence, in particular y ′ (K(y 0 )) = 0. Since the functional y ′ ∈ H ′ was arbitrary, we arrive at K(y 0 ) = 0. 6.17. Lemma. Let E ⊂ G be a subset, let L, A 1 , . . . , A r : (E, · G ) → (H, · H ) be continuous mappings, let y 0 ∈ G be arbitrary, let Y be an E-valued process with Y 0 = y 0 such that the sample paths are continuous with respect to · G , and let τ > 0 be a positive stopping time such that P-almost surely with continuous mappings ℓ : V → R m and a j : V → R m for j = 1, . . . , r. 6.18. Lemma. Every open subset V ⊂ R m is a C ∞ -submanifold of R m , which is locally invariant for the SDE (6.14). Then we have X τ ∈ V , and since ψ is linear, the process X is a local weak solution to the SDE dX t = (ψ * L)(X t )dt + (ψ * A)(X t )dW t X 0 = x with lifetime τ . The sample paths of Y τ = φ(X τ ) are continuous with respect to · G , because φ ∈ C(V ; G). Since also φ ∈ C 2 (V ; H), by Itô's formula we obtain that the process Y is a local martingale solution to the SPDE Proof. It is obvious that On the other hand, the process Y is a local martingale solution to the original SPDE (1.1) with Y 0 = y and lifetime τ . We set M U := U ∩ M and let j = 1, . . . , r be arbitrary. By Lemmas 6.12 and 6.13, the mapping φ * ψ * A j | MU : (M U , · G ) → (H, · H ) is continuous. Therefore, and since the sample paths of Y τ are continuous with respect to · G , we may apply Lemma 6.17, which gives us A j (y) = (φ * ψ * A j )(y). Since y ∈ M U was arbitrary, we obtain Therefore, by Proposition 6.14 we deduce that By Lemmas 6.12 and 6.13, the mapping is continuous. Therefore, and since the sample paths of Y τ are continuous with respect to · G , we may apply Lemma 6.16, which gives us Hence, using Proposition 6.15 we obtain Therefore, by Proposition 6.14 we deduce that Since the point y 0 ∈ M chosen at the beginning of this proof was arbitrary, we deduce (6.1) and (6.2). We set x 0 := φ −1 (y 0 ) ∈ V . By Lemma 6.12 for each j = 1, . . . , r the mapping a j := φ −1 * A j | MU : V → R m is continuous, and the mapping is continuous as well. 
Therefore, by Lemma 6.18 the open set V is locally invariant for the SDE (6.14). Hence, there exist a stopping time τ > 0 and a local weak solution X to (6.14) with X 0 = x 0 and lifetime τ such that X τ ∈ V up to an evanescent set. We define the M -valued process Y := φ(X). Then we have Y τ ∈ U ∩ M . Furthermore, since φ ∈ C(V ; G), the sample paths of Y τ are continuous with respect to · G . Taking into account Lemma 6.12, we have Hence, by Lemma 6.13 the mapping A j | MU : (M U , · H ) → (H, · H ) is continuous for each j = 1, . . . , r. Furthermore, taking into account Lemma 6.12, we have and hence φ * * (a j , a j ). (6.16) Therefore, by Lemma 6.12 the mapping L| MU : (M U , · H ) → (H, · H ) is continuous. Moreover, by Itô's formula and relations (6.15), (6.16) we obtain that Y τ is a local martingale solution to the SPDE which is just the original SPDE (1.1), with Y 0 = y 0 and lifetime τ . This proves that M is locally invariant for the SPDE (1.1). Proof. This is a consequence of the decomposition (3.2) from Proposition 3.25 as well as Lemma 6.12. 6.20. Theorem. Suppose that for each j = 1, . . . , r we have A j ∈ C(G; H 0 ) with an extension A j ∈ C 1 (H 0 ; H). Then the following statements are equivalent: (i) The submanifold M is locally invariant for the SPDE (1.1). (ii) We have Proof. This is a consequence of Theorem 6.3 and Lemma 6.19. In the next result we present sufficient conditions for local invariance under the assumption that the volatilities A 1 , . . . , A r have a quasi-linear structure. Recall that for any z ∈ M the space Γ z (T M ) denotes the space of all local vector fields on M around z; see Definition 3.10. 6.21. Theorem. We suppose that for each j = 1, . . . , r there exists a continuous mappingĀ j : G × G → H 0 such that A j (y) =Ā j (y, y), y ∈ G having an extensionĀ j ∈ C 1,0 (H 0 × G; H) such thatĀ j z :=Ā j (·, z) belongs to L(H 0 , H) for each z ∈ G. Furthermore, we assume that Proof. Note that condition (6.20) implies (6.1). Furthermore, using the decomposition (3.4) from Proposition 3.25 we obtain and hence, condition (6.21) is equivalent to (6.2). Consequently, applying Theorem 6.3 completes the proof. 6.22. Remark. Suppose that conditions (6.20) and (6.21) from Theorem 6.21 are fulfilled such thatĀ j even has an extensionĀ j ∈ C 1 (H 0 × H 0 ; H) for each j = 1, . . . , r. Then the submanifold M is locally invariant for the SPDE (1.1), and the mapping A j ∈ C(G; H 0 ) has an extension A j ∈ C 1 (H 0 ; H) for each j = 1, . . . , r. Hence, by Theorem 6.20 the invariance condition (6.19) is satisfied as well. The vector fields in (6.19) and (6.21) do not, in general, coincide. Using Proposition 3.25, we can determine their difference by using local coordinates. Namely, if φ : V → U ∩ M is a local parametrization, then by the decomposition (3.6) we have where the notation is analogous to that in Proposition 3.25. 6.23. Remark. Note that Theorem 6.21 applies if the Hilbert spaces are Hermite Sobolev spaces G := S p+1 (R d ), H 0 := S p+ 1 2 (R d ) and H := S p (R d ) for some p ∈ R, and the the volatilities A 1 , . . . , A r are given by differential operators as specified in (4.11); see Sections 4.3 and 4.4 for further details. Indeed, then the mappings A j : G × G → H 0 , j = 1, . . . , r are given by (4.21). In view of Remark 6.22, we also point out that for each j = 1, . . . , r the mappingĀ j even admits an extension A j ∈ C 1 (H 0 × H 0 ; H) if and only if σ ij ∈ S −(p+ 1 2 ) (R d ) for all i = 1, . . . , d. 
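Remark 6.22 states that the correction vector fields in (6.19) and (6.21) need not coincide. A small finite-dimensional sketch with a hypothetical quasi-linear volatility Ā(y, z) = B(z)y makes the discrepancy concrete: the product rule gives DA(y)v = B(y)v + (DB(y)[v])y, whereas D_1Ā(y, y)v = B(y)v, so the two correction terms differ by (1/2)(DB(y)[A(y)])y.

import numpy as np

B = lambda z: np.array([[0.0, -z[0]], [z[0], 0.0]])   # hypothetical coefficient matrix B(z)
A = lambda y: B(y) @ y                                # A(y) = A_bar(y, y) with A_bar(y, z) = B(z) y

def jacobian(f, y, h=1e-6):
    # numerical Jacobian of f at y by central differences
    n = len(y); J = np.zeros((n, n))
    for k in range(n):
        e = np.zeros(n); e[k] = h
        J[:, k] = (f(y + e) - f(y - e)) / (2 * h)
    return J

y = np.array([0.8, -0.3])
full_corr   = jacobian(A, y) @ A(y)                    # DA(y)A(y), the correction of the type entering (6.19)
frozen_corr = B(y) @ A(y)                              # D_1 A_bar(y, y)A(y), the correction of the type entering (6.21)
extra_term  = jacobian(lambda z: B(z) @ y, y) @ A(y)   # (DB(y)[A(y)]) y
print(full_corr - frozen_corr)                         # nonzero in general
print(full_corr - frozen_corr - extra_term)            # agrees with the product-rule term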
Now, let us consider semilinear SPDEs of the form (5.13), which we have already encountered in Section 5.2. The following result provides a connection between the results from [11] (see also [28]) and our findings. Recall that the state space of the semilinear SPDE (5.13) is a separable Hilbert space H, and that G = D(B), equipped with the graph norm given by (5.14). 6.24. Theorem. Let M be a finite dimensional C 2 -submanifold of H. Then the following statements are equivalent: (i) The submanifold M is weakly locally invariant for the semilinear SPDE (5.13). Proof. This is a consequence of Theorem 5.15 and Theorem 6.3. 6.25. Remark. If we even have σ j ∈ C 1 (H) for all j = 1, . . . , r, then conditions (i)-(iii) are equivalent to the following: (iv) M is a (G, H)-submanifold of class C 2 , and we have (6.22) as well as This follows from Lemma 6.19. 6.26. Remark. Let k ∈ N and l ∈ N 0 be arbitrary, let M be a C k -submanifold of H and assume that σ j ∈ C l (H) for all j = 1, . . . , r. Then k is the degree of smoothness of the submanifold, and l is the degree of smoothness of the volatilities. In the literature, the following situations have been considered: (1) In [11] it is assumed that k = 2 and l = 1. (2) In [28] (which uses the support theorem from [27]) it is assumed that k = 1 and l = 1. (3) Here, in Theorem 6.24 we assume that k = 2 and l = 0. Summing up these degrees of smoothness, we see that in our result we have also achieved k + l = 2. We conclude this section by indicating a result analogous to Theorem 6.3 for deterministic PDEs of the kind with a continuous mapping K : G → H. Here G and H may be Banach spaces, and M only needs to be a (G, H)-submanifold of class C 1 . The proof of following result is similar to that of Theorem 6.3; indeed the arguments are even simpler. Appendix A. Multi-parameter strongly continuous groups In this appendix we provide the required results about multi-parameter strongly continuous groups. For this purpose, we begin with reviewing one-parameter strongly continuous groups. Let H be a separable Hilbert space. A family T = (T (t)) t∈R of continuous linear operators T (t) ∈ L(H) is called a strongly continuous group (or a C 0 -group) on H if the following conditions are fulfilled: (1) T (0) = Id. (2) We have T (t + s) = T (t)T (s) for all t, s ∈ R. A.5. Definition. A family T = (T (t)) t∈R d of continuous linear operators T (t) ∈ L(H) is called a multi-parameter strongly continuous group (or a multi-parameter C 0 -group) on H if the following conditions are fulfilled: (1) T (0) = Id. (2) We have T (t + s) = T (t)T (s) for all t, s ∈ R d . (3) For each x ∈ H the orbit map is continuous. Let T = (T (t)) t∈R d be a multi-parameter C 0 -group on H. For each i = 1, . . . , d we define the family T i = (T i (t)) t∈R of continuous linear operators T i (t) ∈ L(H) as Then T 1 , . . . , T d are commutative C 0 -groups on H, and we have A.6. Remark. As a consequence of Remark A.1, there are constants M ≥ 1 and w ∈ R such that Inductively, for each n ≥ 2 we define the higher-order domain where we use the notation Furthermore, we agree on the notation D(A 0 ) := H. Then for each n ∈ N 0 the space D(A n ) equipped with the graph norm is a separable Hilbert space. Indeed, the completeness is a consequence of the closed graph theorem, and the separability follows from considering the linear isometry Proof. This is a consequence of [10, II.2.7]. A.9. Proposition. 
The adjoint group T * = (T (t) * ) t∈R d is also a multi-parameter C 0 -group on H, and for each i = 1, . . . , d the generator of T * i is given by A * i . Proof. This is a consequence of [30, Cor. 1.10.6]. A.10. Proposition. Let n ∈ N 0 be arbitrary. Then the following statements are true: (1) We have (2) Let x ∈ D(A n ) be arbitrary. Then for all m ∈ N 0 with m ≤ n and all α ∈ {1, . . . , d} m we have Proof. Taking into account Lemma A.2 and the representation (A.1) of the group T , this follows by induction on n. A.11. Proposition. Let n ∈ N 0 and x ∈ D(A n ) be arbitrary. Then the following statements are true: (2) In particular, we have ξ x ∈ C n (R d ; H), and for each m ∈ N 0 with m ≤ n we have where we use the notation v α := v α1 · . . . · v αm . Proof. This is a consequence of Proposition A.10. Then we have φ = ξ x0 • ϕ, where the primitive ϕ : V → R d is chosen such that ϕ(t 0 ) = 0. p ∈ R the space S −p (R d ) is dual to S p (R d ). More precisely, we have the following result. B.3. Lemma. For each p ∈ R there is a unique continuous bilinear mapping −p ·, · p : S −p (R d ) × S p (R d ) → R extending ·, · L 2 : S (R d ) × S (R d ) → R. Furthermore, the following statements are true: (1) For each p ∈ R the triplet (S −p (R d ), S p (R d ), ·, · ) is a dual pair. (4) For all p, q ∈ R with p ≤ q we have −p Φ, ϕ p = −q Φ, ϕ q for all Φ ∈ S −p (R d ) and ϕ ∈ S q (R d ). Proof. Using the ONBs which completes the proof. Note that for the last two statements we also use Lemma B.2 and the statements preceding this result. B.4. Remark. In the sequel, we will simply write Φ, ϕ whenever Φ ∈ S −p (R d ) and ϕ ∈ S p (R d ) for some p ∈ R, which is justified by the last two statements of Lemma B.3. is an isometric isomorphism. B.9. Lemma. The following statements are true: (1) For each p ∈ R there exists a polynomial P k of degree k = 2([|p|] + 1) such that for all Φ ∈ S p (R d ) and x ∈ R d we have In particular, we have τ x Φ ∈ S p (R d ). (2) For each p ∈ R and every Φ ∈ S p (R d ) the map R d → S p (R d ), x → τ x Φ is continuous. B.10. Lemma. For each p ∈ R the family τ = (τ x ) x∈R d is a multi-parameter C 0group on S p (R d ). Proof. This is an immediate consequence of Lemma B.9. B.11. Lemma. Let p ∈ R, Φ ∈ S p+ 1 2 (R d ) and i = 1, . . . , d be arbitrary. Then there exists a continuous mapping R : R × R → S p (R d ) with R(x, 0) = 0 for all x ∈ R such that in the space S p (R d ), where we use the notation τ i x := τ xei for each x ∈ R. Proof. let ϕ ∈ S (R d ) be arbitrary. By Taylor's formula, for all x 0 ∈ R d we obtain ϕ(x 0 + (x + h)e i ) = ϕ(x 0 + xe i ) + h∂ i ϕ(x 0 + xe i ) Therefore, we obtain the equation where the integral is an S −p (R d )-valued Bochner integral, which is well-defined by virtue of Lemma B.9. Now, applying Φ, · we obtain where the integral is a S p (R d )-valued Bochner integral, which is well-defined by virtue of Lemma B.9. The mapping R : R × R → S p (R d ) defined as is continuous by Lemma B.9 and Lebesgue's dominated convergence theorem. Furthermore, we have R(x, 0) = 0 for all x ∈ R. Since ϕ ∈ S (R d ) was arbitrary, the claimed identity follows. For each x ∈ R d we define the Dirac distribution δ x ∈ S ′ (R d ) as δ x , ϕ := ϕ(x) for all ϕ ∈ S (R d ). B.12. Lemma. The following statements are true: (1) We have τ x δ y = δ x+y for all x, y ∈ R d . (2) In particular, we have τ x δ 0 = δ x for all x ∈ R d . (3) For each p < − d 4 and every x ∈ R d we have δ x ∈ S p (R d ). More generally, let µ be a finite signed measure on (R d , B(R d )). 
Then we define a tempered distribution, again denoted by µ, by duality as µ, ϕ := R d ϕ(y)µ(dy) for all ϕ ∈ S (R d ). B.13. Lemma. Let µ be a finite signed measure on (R d , B(R d )). Then the following statements are true: (1) We have τ x µ, ϕ = R d ϕ(y + x)µ(dy) for all x ∈ R d and ϕ ∈ S (R d ). (2) For each p < − d 4 we have µ ∈ S p (R d ). Proof. Let x ∈ R d and ϕ ∈ S (R d ) be arbitrary. By duality we have τ x µ, ϕ = µ, τ −x ϕ = R d ϕ(y + x)µ(dy). Now, let p < − d 4 be arbitrary. By Lemma B.12 the function R d → S p (R d ), x → δ x is continuous and bounded. Therefore, the Bochner integral Proof. Recalling that L 2 (R d ) = S 0 (R d ), for each x ∈ R d we have completing the proof. For the next result, we recall that S p (R d ) ⊂ L 2 (R d ) for each p ≥ 0. Also recall that C 0 (R d ) denotes the space of all continuous functions f : R d → R such that lim x →∞ f (x) = 0. Equipped with the supremum norm, this space is a Banach space. B.17. Lemma. Let p > d 4 and f ∈ S p (R d ) be arbitrary, and define the mapping g : R d → R, g(x) := δ x , f . Then we have g ∈ C 0 (R d ) and f = g almost everywhere. Proof. By Lemma B.12 we have g ∈ C 0 (R d ). Furthermore, there exists a sequence (ϕ n ) n∈N ⊂ S (R d ) such that ϕ n → f in S p (R d ). Thus, we also have ϕ n → f in L 2 (R d ), and hence there is a subsequence (n k ) k∈N such that ϕ n k → f almost everywhere. Therefore, for almost all x ∈ R d we obtain completing the proof. More generally, for each k ∈ N 0 the space C k 0 (R d ) denotes the space of all B.18. Lemma. Let k ∈ N 0 and p > d 4 + k 2 as well as f ∈ S p (R d ) be arbitrary, and define the mapping g : R d → R, g(x) := δ x , f . Then the following statements are true: (1) We have f = g almost everywhere. (3) For each α ∈ N d 0 with |α| ≤ k we have D α g(x) = δ x , ∂ α f for all x ∈ R d . Proof. By Lemma B.17 we have f = g almost everywhere. We prove the remaining statements by induction on k ∈ N 0 . For k = 0 these follow from Lemma B.17. We proceed with the induction step k −1 → k. Let β ∈ N d 0 with |β| = k −1 be arbitrary, and let i = 1, . . . , d be arbitrary. We set α := β + e i . Let x ∈ R d be arbitrary. By induction hypothesis we have D β g(x) = δ x , ∂ β f . Hence, for each h ∈ R with h = 0 we have Note that δ 0 ∈ S −p+ k 2 (R d ). Hence, by Lemma B.11 (applied with Φ = δ 0 and x = 0) we have is a continuous function such that R(0) = 0. Therefore, applying the translation operator τ x on both sides, we obtain . Therefore, we obtain and by Lemma B.12 we deduce that D α g ∈ C 0 (R d ). B.19. Theorem (Sobolev embedding theorem for Hermite Sobolev spaces). For each k ∈ N 0 and p > d 4 + k 2 the pair (S p (R d ), C k 0 (R d )) consists of continuously embedded Banach spaces. B.20. Remark. Note that the inclusion S p (R d ) ⊂ C k 0 (R d ) in Theorem B.19 is meant in the sense that for each f ∈ S p (R d ) there exists a version g ∈ C k 0 (R d ) such that f = g almost everywhere. Moreover, we have with suitable constants C αβ ∈ R for all α, β ∈ N d 0 with |α| + |β| ≤ 2m. By Lemma B.8 we obtain completing the proof. B.22. Corollary. Let m ∈ N 0 be arbitrary. If f ∈ W 2m (R d ) has compact support, then we have f ∈ S m (R d ). Proof. This is an immediate consequence of Proposition B.21. (1) By the classical Sobolev embedding theorem we have f ∈ C k (R d ).
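To complement the appendix, the membership δ_x ∈ S_p(R^d) for p < -d/4 from Lemma B.12, on which Section 4.4 relies, can be made concrete numerically. Assuming the usual Hermite-coefficient description of the norm in dimension one, namely ||Φ||_p^2 = Σ_n (2n+1)^{2p} Φ, h_n ^2 with h_n the Hermite functions and δ_0, h_n = h_n(0), the following sketch evaluates h_n(0) by the stable three-term recurrence and compares partial sums of the series for p on both sides of the threshold -1/4.

import numpy as np

N = 200_000
h0 = np.zeros(N)
h0[0] = np.pi ** (-0.25)                          # h_0(0) = pi^(-1/4), and h_1(0) = 0
for n in range(2, N):
    h0[n] = -np.sqrt((n - 1) / n) * h0[n - 2]     # h_n(0) = -sqrt((n-1)/n) h_{n-2}(0)

n = np.arange(N)
for p in (-0.35, -0.15):                          # the threshold is p = -1/4 for d = 1
    partial = np.cumsum((2 * n + 1) ** (2 * p) * h0 ** 2)
    print(p, partial[10**3], partial[10**4], partial[10**5], partial[-1])
# for p = -0.35 the partial sums stabilise (delta_0 belongs to S_p); for p = -0.15 they keep growing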
2021-11-24T02:16:28.287Z
2021-11-23T00:00:00.000
{ "year": 2021, "sha1": "27520933c4c85bc1010ddae60368d3421e4c9396", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "7dc4c6fba30c67f26c8f160bdc2bd2ae76198531", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
9413196
pes2o/s2orc
v3-fos-license
Coordinated Control of a DFIG-Based Wind-Power Generation System with SGSC under Distorted Grid Voltage Conditions This paper presents a coordinated control method for a doubly-fed induction generator (DFIG)-based wind-power generation system with a series grid-side converter (SGSC) under distorted grid voltage conditions. The detailed mathematical models of the DFIG system with SGSC are developed in the multiple synchronous rotating reference frames. In order to counteract the adverse effects of the voltage harmonics upon the DFIG, the SGSC generates series compensation control voltages to keep the stator voltage sinusoidal and symmetrical, which allows the use of the conventional vector control strategy for the rotor-side converter (RSC), regardless of grid voltage harmonics. Meanwhile, two control targets for the parallel grid-side converter (PGSC) are identified, including eliminating the oscillations in total active and reactive power entering the grid or suppressing the fifth- and seventh-order harmonic currents injected to the grid. Furthermore, the respective PI-R controller in the positive synchronous reference frame for the SGSC voltage control and PGSC current control have been developed to achieve precise and rapid regulation of the corresponding components. Finally, the proposed coordinated control strategy has been fully validated by the simulation results of a 2 MW DFIG-based wind turbine with SGSC under distorted grid voltage conditions. Fundamental, fifth-order and seventh-order components. Introduction With the increased penetration of wind energy into power grids all over the World, more and more large-scale wind turbines and wind power plants have been installed in rural areas or offshore where the grids are generally quite weak.The operation and control of such remote wind turbines under non-ideal voltage conditions, including severe voltage sags, network unbalance, and harmonic voltage distortions, have attracted more and more attention [1][2][3][4][5][6][7][8][9][10][11][12][13][14].With many excellent merits such as low rating converter capacity, variable speed constant frequency operation and independent power regulation capability, wind turbines based on doubly-fed induction generators (DFIGs) have become one of the mainstream types of variable speed wind turbine in recent years.Unlike wind generators with the full-sized grid-connected converters (such as permanent magnet synchronous generators), DFIG is very sensitive to aforementioned grid disturbances as its stator is directly connected to the grid and the rating of the back-to-back converter is limited. Recently, some improved operation and control strategies for DFIG were investigated under non-ideal grid voltage conditions.For the severe grid short-circuit fault and unbalanced grid voltage conditions, some improved excitation control strategies or an additional series voltage compensation method using a dynamic voltage restorer (DVR) have been proposed to effectively enhance the low voltage ride through (LVRT) capability of the DFIG system [1,2,[15][16][17][18][19]. 
Besides, the overall operation performance of the whole DFIG system can be improved by coordinately controlling the rotor-side converter (RSC) and parallel grid-side converter (PGSC) during a network unbalance, and some enhanced operation functionalities such as eliminating the oscillations in the active or reactive power from the whole system, or suppressing the negative-sequence currents injected to the grid have been achieved [6,11].To further improve the operation performance of DFIG system under distorted grid voltage conditions, some enhanced control strategies for the DFIG have been studied in [13,14].As mentioned in [14], a rotor current PI regulator and a harmonic resonant compensator tuned at six times the grid frequency in the positive (dq) + reference frame are designed to provide different operation functionalities, i.e., removing the stator or rotor current harmonics, or eliminating the oscillations at six times the grid frequency in the stator output active and reactive powers.However, due to the limited RSC control variables, the proposed method cannot eliminate the stator and rotor current harmonics and the output power pulsations in the DFIG simultaneously under network harmonic distortions.Therefore, harmonic power losses in the stator and rotor windings or the stator power oscillations and torque pulsations in the DFIG still exist, which might degrade the life time of the winding insulation materials or deteriorate the output power quality. The main reason causing stator and rotor current distortions, electromagnetic torque and power pulsations in the DFIG is harmonically distorted stator voltages, so if the stator voltage harmonics can be eliminated and only the positive-sequence voltage is left, the adverse effects of network voltage distortion upon the DFIG will be removed naturally.As similarly discussed in [20][21][22], the DFIG system with a series grid-side converter (SGSC) can be used to enhance the overall operation performance under distorted voltage conditions.As mentioned in [20][21][22], by coordinately controlling the SGSC, PGSC and RSC, the DFIG system with SGSC has been fully demonstrated to be able to deal with the LVRT operation under severely symmetrical and unsymmetrical grid faults or network unbalance in steady-state operation.However, the operation and control of such a DFIG system under grid voltage harmonic distortion have not been discussed in detail.Unlike other series voltage compensation methods using a DVR mentioned in [15][16][17][18][19], the DFIG system with SGSC can also cope with the case of steady-state grid voltage harmonic distortions as the SGSC is directly connected with the PGSC and RSC through the dc-link. 
For the operation and control of SGSC, PGSC and RSC during network harmonic distortions, this paper investigates ways to further improve the operation performance of the DFIG system with SGSC.In the grid voltage-oriented positive (dq) + and harmonic (dq) 5− , (dq) 7+ reference frames, the mathematical models of SGSC and PGSC under 5th and 7th grid voltage harmonics are developed.Besides, the control target for the SGSC and different control targets for the PGSC under the distorted voltage conditions are identified, and the reference values of the PGSC's fundamental and harmonic currents are deduced.Furthermore, a coordinated control of the SGSC, PGSC and RSC and control schemes for SGSC and PGSC using a PI controller and a harmonic resonant regulator tuned at six times the grid frequency in the positive (dq) + reference frame are developed.Finally, numerical simulations on a 2 MW DFIG system with SGSC are presented to verify the proposed control scheme. Modeling of DFIG System with SGSC during Network Harmonic Distortions Figure 1 shows the configuration of the DFIG system with SGSC.The stator voltage of DFIG can be flexibly regulated by controlling the series injected voltage of SGSC to improve the operation performance, which can be expressed in the stationary αβ reference frame as: In this paper, only the fundamental component and the low order (fifth-and seventh-order) harmonic components are considered.In the stationary αβ reference frame, the voltage and current vectors can be represented as the combinations of the fundamental positive-sequence vector and the harmonic fifth-and seventh-order vectors.In order to simplify the analysis, the multiple synchronous rotating reference frames are adopted, as shown in Figure 2.For the positive (dq) + reference frame, the d + -axis is aligned with the positive-sequence grid voltage vector. SGSC In order to counteract the effect of the 5th and 7th grid voltage harmonic components and compensate the impedance voltage drop of the series transformer, a series compensation voltage vector generated by the SGSC should be injected to keep the DFIG stator voltage in line with the positive-sequence grid voltage, which can be expressed as: where u com+ is the positive-sequence voltage vector error which needs to be compensated.Under distorted grid voltage conditions, the instantaneous active and reactive power flowing through the SGSC can be represented as: ) As can be seen from Equation (4), although there are zero oscillations in generator's power output, the pulsations of six times the grid frequency in SGSC's active and reactive power still exist due to the interaction between the fifth-and seventh-order harmonic grid voltages and positive-sequence stator currents, which inevitably leads to oscillations of the total output power entering the grid.On the other hand, the oscillating active power from the SGSC is also delivered into the dc-link capacitor, which produces the pulsations in dc-link voltage. 
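Working in the multiple synchronous rotating reference frames requires separating the fundamental positive-sequence, fifth-order and seventh-order components of a measured voltage or current space vector. The following is a minimal sketch of one common way to do this (frame rotation followed by a one-fundamental-cycle average); the function names, the amplitude-invariant Clarke transform and the averaging window are assumptions of this sketch rather than the paper's implementation.

import numpy as np

def abc_to_space_vector(va, vb, vc):
    # Amplitude-invariant Clarke transform to a complex alpha-beta space vector.
    v_alpha = (2.0 / 3.0) * (va - 0.5 * vb - 0.5 * vc)
    v_beta = (1.0 / np.sqrt(3.0)) * (vb - vc)
    return v_alpha + 1j * v_beta

def rotating_frame_components(v_ab, theta, n_per_cycle):
    """Extract the (dq)+, (dq)5- and (dq)7+ components of the space vector
    v_ab(t): rotate into each frame and average over one fundamental cycle,
    which removes the +/-6th-harmonic ripple the other components produce
    in that frame. v_ab and theta are equal-length 1-D arrays."""
    kernel = np.ones(n_per_cycle) / n_per_cycle
    frames = {"+1": +1, "-5": -5, "+7": +7}
    return {name: np.convolve(v_ab * np.exp(-1j * order * theta), kernel, mode="same")
            for name, order in frames.items()}

The dc values returned for the "-5" and "+7" frames correspond to the harmonic phasors that the SGSC must cancel at the stator terminals.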
PGSC Considering that the PGSC is directly connected to the grid, its operation behavior is similar to a grid-connected voltage-source converter (VSC) system.Unlike the power oscillations in SGSC, due to the existence of fifth-and seventh-order harmonic grid currents, the active and reactive power flowing through the PGSC include three parts, i.e., the dc average power, the pulsations at the six times the grid frequency and the pulsations at the twelve times the grid frequency, which can be expressed as: where: As it can be seen from Equation ( 6), under distorted grid voltage conditions, the significant active and reactive power oscillations at six times the grid frequency could result from the interactions among the fundamental or harmonic grid voltages and currents.However, it still can be found from Equation (6) that the power pulsations at twelve times the grid frequency are only produced by the fifth-and seventh-order harmonic grid voltages and currents.Compared with the pulsations terms at the 6th grid frequency, the pulsations terms of the 12th grid frequency is relatively small, which could be reasonably neglected for the system control analysis and design. As shown in Figure 1, the common dc-link voltage can be obtained as: As derived from Equations ( 6) and ( 7), like the case illustrated by Equation ( 4), the power exchanges between the PGSC and grid contain active and reactive power oscillating at the six times the grid frequency, which further leads to pulsations in the dc-link voltage and deteriorates the quality of output powers. RSC By eliminating the fifth-and seventh-order stator voltage harmonics, the DFIG stator voltages become sinusoidal and balanced.As a result, a conventional vector control (VC) strategy or direct power control (DPC) scheme for the RSC remains in full force under distorted grid voltage conditions.With effective control of SGSC, the adverse effects of voltage harmonics upon DFIG such as large stator and rotor current harmonics, electromagnetic torque and power pulsations will be eliminated naturally.Consequently, the enhanced operation performance for the overall DFIG system can be significantly improved.As the operation and control of RSC under normal condition have been well documented in numerous references [23,24], these topics will be not discussed in detail in this paper. SGSC As mentioned in Sections 2.1, the SGSC should be controlled to achieve the following control target: In order to meet the demand of Equation ( 8), positive-sequence component of stator voltage vector should be controlled to equal to that of grid voltage vector, while the 5th and 7th harmonic component of stator voltage vectors should be controlled to zero. 
PGSC As it can be seen from Equation ( 6), under distorted grid voltage conditions, there are six grid current components of PGSC, i.e., i + gd+ , i + gq+ , i 5- gd5-, i 5- gq5-, i 7+ gd7+ and i 7+ gq7+ can be controlled to improve the overall system performance.Therefore, for the PGSC, apart from the average grid-side active and reactive powers P g_av and Q g_av , shown in Equation ( 6), there are four more power oscillating terms of the 6th grid frequency can be controlled when ignoring the power pulsations of the 12th grid frequency.It is worth noting that, unlike the PGSC control during a network unbalance mentioned in [22], the simultaneous elimination of the oscillations in total active and reactive power can be realized due to the enough control variables of grid currents, which can also simplify the control target selection and system control design.Consequently, the PGSC may be controlled to achieve one of the following two control targets: Target 1: to eliminate the oscillations at the six times the grid frequency in the total active and reactive powers entering the grid simultaneously; Target 2: to suppress the whole system's fifth-and seventh-order harmonic currents injected to the grid. Target 1 As shown in Figure 1, in order to eliminate the oscillation of the total powers, the oscillations at the six times the grid frequency in the active and reactive powers flowing through the SGSC should be equal to the corresponding oscillations in the active and reactive powers flowing through the PGSC, i.e.: series_cos6 g_cos6 series_sin6 g_sin6 series_cos6 g_cos6 series_sin6 g_sin6 , , Based on Equations ( 6) and ( 9), and taking into account the fact that the d + -axis is aligned with the positive-sequence grid voltage vector, which means u + gq+ = 0, the required current reference values for the PGSC to realize Target 1 can be given as Equation (10), where the oscillating terms of active and reactive power flowing through the SGSC are calculated from Equation (4).The term i +* gd+ and i +* gq+ represents the dq-axis fundamental component of grid current, respectively: Unlike the unbalanced voltage conditions, considering that the fifth-and seventh-order grid voltage harmonics are much smaller than the fundamental grid voltages, the dc average active and reactive power are mainly determined by the fundamental grid voltage and current.Therefore, under distorted grid voltage conditions, the term i +* gd+ and i +* gq+ can be assumed to be proportional to the required average active power P * g_av and reactive power Q * g_av from the PGSC delivered to the grid, respectively.As the required average active power should maintain the constant dc-link voltage, it would be related with the average component of the dc-link voltage.If a conventional dc-link voltage PI regulator is adopted, the average active power reference value can be expressed as: where K pu and τ iu are the proportional and integral time parameters of the PI controller, respectively.On the other hand, when neglecting the power pulsations of the 12th grid frequency, the common dc-link voltage can be rewritten using the average and oscillation power components as: which indicates that the oscillations of the common dc-link voltage can also be diminished when eliminating the total power oscillations of the whole system. 
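Because Equation (11) is only available in words here, the following sketch illustrates the two steps it describes: a discrete PI regulator that turns the dc-link voltage error into the average active power reference, and the proportionality between the average powers and the fundamental dq current references when the d+-axis is aligned with the positive-sequence grid voltage. Gains, sampling time and the 2/3 scaling convention are illustrative assumptions, not the paper's values.

class DcLinkVoltagePI:
    """Discrete PI regulator producing the average active power reference
    P*_g_av from the dc-link voltage error (cf. Equation (11))."""
    def __init__(self, k_pu, tau_iu, ts):
        self.k_pu, self.tau_iu, self.ts = k_pu, tau_iu, ts
        self.integral = 0.0

    def step(self, u_dc_ref, u_dc):
        error = u_dc_ref - u_dc
        self.integral += error * self.ts / self.tau_iu
        return self.k_pu * (error + self.integral)

def fundamental_current_refs(p_av_ref, q_av_ref, u_gd_pos):
    """With u+_gq+ = 0, the fundamental current references are proportional to
    the required average powers; the 2/3 factor assumes amplitude-invariant
    dq quantities and the usual reactive-power sign convention."""
    i_gd_ref = 2.0 * p_av_ref / (3.0 * u_gd_pos)
    i_gq_ref = -2.0 * q_av_ref / (3.0 * u_gd_pos)
    return i_gd_ref, i_gq_ref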
Target 2 As shown in Figure 1, the total current delivered into the grid is the sum of the currents from DFIG stator-side and PGSC.Since the fifth-and seventh-order current harmonics of the DFIG stator are eliminated with the effective control of SGSC, the required current reference for Target 2 thus can be expressed as: While achieving the goal of no fifth-and seventh-order harmonic currents injected to the grid, Equation ( 13) also indicates that currents generated from the PGSC are sinusoidal and balanced. RSC For RSC, as the stator voltage of DFIG is still sinusoidal and balanced during network distortions, the conventional vector control strategy or direct power control strategy can be still used without modification. System Implementation Using PI-R Controllers Under distorted grid voltage conditions, several separate dual current PI controllers can be used in the multiple rotating synchronous reference frames to provide the required system response.However, the decomposition of positive-sequence, fifth-and seventh-order feedback components will introduce extra time delays and errors when detecting the amplitude and phase signals, which might degrade the transient performance and stability of the system.In order to avoid such a sequential decomposition, the controllers including a standard proportional-integral (PI) regulator and a resonant (R) compensator tuned at six times the grid frequency in the rotating positive (dq) + reference frame are developed for the SGSC voltage control and PGSC current control, respectively. Under distorted grid voltage condition, the voltage and current in the (dq) + reference frame consist of two parts, i.e., the dc fundamental positive-sequence component and the ac fifth-and seventh-order harmonic components oscillating at the frequencies of ±6ω.As the resonant controller is a generalized double-side ac integrator [25], it can simultaneously eliminate the ac errors of the positive-and negative-sequence components at the frequencies of ±6ω.Therefore, a resonant compensator tuned at six times the grid frequency can be introduced to regulate the fifth-and seventh-order voltages or currents to their reference values in the positive (dq) + reference frame, while the fundamental positive-sequence voltages or currents can be controlled by using a traditional PI regulator.Consequently, the PI plus R (PI-R) controllers for the SGSC and PGSC in the rotating positive (dq) + reference frame can be designed to directly regulate both the fundamental positive-sequence component and the harmonic components without involving sequential decomposition, significantly improving the transient performance of the whole system.A detailed study on the PI-R controller has been provided in [14].Therefore, only a brief description is given in this paper. The designed transfer function of the PI-R controller is given as: where K p , K i and K r are the proportional, integral and resonant parameters of the PI-R controller, respectively.And ω c is the cutoff frequency used to widen the resonant frequency bandwidth. In the positive (dq) + reference frame, the respective SGSC and PGSC output control voltage can be obtained as: (16) where: gdq gdq g gdq g gdq is defined as the feedforward grid voltage of PGSC. 
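As a complement to Equation (14), the sketch below shows one common continuous-time realization of a PI plus resonant regulator tuned at six times the grid frequency and a straightforward way to evaluate it on a sampled error signal; the exact transfer function used in the paper is not reproduced here, so the band-pass form of the resonant term and the Tustin discretization are assumptions of this sketch.

import numpy as np
from scipy.signal import bilinear, lfilter

def pir_output(error, kp, ki, kr, w0, wc, ts):
    """Apply a PI-R regulator to an error sequence sampled every ts seconds.
    The assumed continuous prototype is
        G(s) = kp + ki/s + 2*kr*wc*s / (s^2 + 2*wc*s + (6*w0)^2),
    i.e. a PI part plus a resonant part centred at 6*w0 with bandwidth wc."""
    error = np.asarray(error, dtype=float)
    # PI part: trapezoidal integration of the error.
    integral = np.concatenate(([0.0], np.cumsum((error[:-1] + error[1:]) * 0.5 * ts)))
    u_pi = kp * error + ki * integral
    # Resonant part: bilinear (Tustin) discretization of the band-pass term.
    wr = 6.0 * w0
    bz, az = bilinear([2.0 * kr * wc, 0.0], [1.0, 2.0 * wc, wr ** 2], fs=1.0 / ts)
    u_r = lfilter(bz, az, error)
    return u_pi + u_r

Because the regulator acts on (dq)+ frame errors, the single 6ω resonance covers both the fifth-order (−6ω) and seventh-order (+6ω) components, which is exactly why no sequential decomposition of the feedback signals is needed.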
Based on Equations (15) and (16), the control voltages in the positive-sequence reference frame can be transformed into the stationary αβ reference frame, i.e.: The resulting control voltages u_seriesαβ and u_cαβ for the SGSC and PGSC can then be applied by using standard space vector pulse width modulation (SVPWM) techniques. Figure 3 shows the schematic diagram of the proposed control scheme for the DFIG system with SGSC during network voltage distortions. The reference values for both the SGSC and PGSC in the positive-sequence synchronous reference frame are compared with their respective feedback voltage and current signals to generate the final regulating control voltages. It is worth noting that the feedback signals need not be decomposed into positive-sequence, fifth- and seventh-order components in either the SGSC or the PGSC controller, which naturally improves the dynamic response of the whole system. Furthermore, the conventional vector control strategy for the RSC can still be used without modification. Evaluation Studies For the evaluation of the proposed control strategy, simulations of a 2 MW DFIG-based wind power generation system with SGSC have been conducted using Matlab/Simulink. Details of the simulated DFIG system are given in Appendix A. The PI-R controller parameters for both the SGSC voltage and the PGSC current, obtained with the traditional transfer-function design method, are listed in Table 1. Figure 4 shows the configuration of the simulated DFIG system with SGSC. In the simulation model, three discrete control systems are built for the SGSC, PGSC and RSC, respectively. The DFIG is rated at 2 MW/690 V, and the dc-link voltage is set at 1200 V. The discrete control periods of the three converters are all 100 μs, and the switching frequency of each converter is 2 kHz. The DFIG system is connected to a programmable power grid rated at 20 MVA via a step-up transformer. The programmable ac voltage source is used to generate the fifth- and seventh-order harmonic grid voltages during the simulation studies, which are set to 4% and 3%, respectively. During the initial simulation, the rotor speed is assumed to be fixed at a normal speed of 1950 r/min (maximal slip: -0.3). The total active and reactive power outputs of the generation system are 2 MW and zero (rated power and unity power factor), respectively. Figures 5-7 show the simulation results of the DFIG system with SGSC under the aforementioned grid voltage harmonic condition between 1.6 and 1.7 s. Figure 5 shows the simulation results of the system with the conventional control strategy, i.e., with no harmonic control during the grid voltage harmonics, while Figures 6 and 7 show the simulation results of the system with the proposed control Targets 1 and 2, respectively. During the simulation process, the RSC is controlled with the conventional vector control strategy to achieve decoupled control of the stator active and reactive powers, the SGSC is controlled to keep the DFIG stator voltage always in line with the positive-sequence grid voltage, and the PGSC is controlled with the two different control targets. Panels (p)-(t) of Figures 5-7 show the PGSC 7th harmonic dq-axis currents, the grid and stator positive-sequence dq-axis voltages, the grid and stator 5th harmonic d-axis and q-axis voltages, and the grid and stator 7th harmonic dq-axis voltages (all in pu).
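For readers who wish to reproduce the test condition, the sketch below generates a three-phase voltage with 4% fifth-order and 3% seventh-order harmonics, mirroring the programmable source used in the simulations. The peak phase amplitude derived from the 690 V rating and the 50 Hz base frequency are assumptions of this sketch; the fifth harmonic comes out negative-sequence and the seventh positive-sequence, consistent with the (dq)5- and (dq)7+ frames used above.

import numpy as np

def distorted_grid_voltage(t, v1=690.0 * np.sqrt(2.0 / 3.0), f=50.0, h5=0.04, h7=0.03):
    """Return the phase a, b, c voltages (shape (3, len(t))) of a grid with
    h5 per-unit 5th and h7 per-unit 7th harmonic distortion."""
    w = 2.0 * np.pi * f
    phases = np.array([0.0, -2.0 * np.pi / 3.0, 2.0 * np.pi / 3.0])
    v = [v1 * (np.cos(w * t + ph)
               + h5 * np.cos(5.0 * (w * t + ph))    # appears as a negative-sequence 5th
               + h7 * np.cos(7.0 * (w * t + ph)))   # appears as a positive-sequence 7th
         for ph in phases]
    return np.array(v)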
As it can be seen from Figures 5-7a, due to the existence of the fifth-and seventh-order harmonic grid voltage components, the grid voltages are obviously distorted between 1.6 and 1.7 s.In a conventional control system, a only single PI controller in the positive (dq) + synchronous reference frame is used for the SGSC voltage and PGSC current.As the traditional single PI controller of SGSC has limited regulating gain for the fifth-and seventh-order harmonic voltage components which are oscillating at six times the grid frequency in the positive (dq) + reference frame, the fifth-and seventh-order voltage harmonics in the stator still exist and the stator voltages will contain 300 Hz pulsations in the positive synchronous reference frame, as shown in Figures 5b,q-t.The harmonically polluted stator voltages will lead to badly distorted stator currents, which inevitably make the rotor currents contain both fundamental component of 15 Hz, and harmonic components of 315 Hz (300 + 15 Hz) and 285 Hz (300 -15 Hz), respectively.Consequently, the significant oscillations at 300 Hz in the electromagnetic torque and instantaneous stator powers of DFIG could occur, as shown in Figure 5g,j,l.In the meanwhile, distorted currents and power pulsations in the PGSC will also result from the failure regulation of harmonic currents in the PGSC when a single current PI controller is used, which further degrading the operation performance of whole system, as shown in Figure 5e,f,h,i,k,n.As it can be seen from Figures 6 and 7b,q-t, when the proposed control strategy for the SGSC under distorted grid voltage condition is implemented, the harmonic voltage at the DFIG's stator terminal can be eliminated by injecting appropriate series compensation voltages of SGSC to counteract the grid voltage harmonics, although the grid voltage harmonics always exist.Compared with the conventional control method, the fundamental component of the stator voltage is controlled to be equal to the positive-sequence grid voltage, while the harmonic components of the stator voltage are effectively controlled to zero by using the proposed PI-R voltage control strategy.As analyzed in Section 2, once the stator voltage harmonics are suppressed, the stator and rotor current harmonics, electromagnetic torque and power oscillations in the DFIG will be eliminated naturally, which are nicely demonstrated in Figures 6 and 7c,d,g,j,l. The system performances with the two different control targets for the PGSC are compared in Figures 6 and 7.As seen in these figures, the objectives of the two control targets have been fully achieved.With Target 1, the 300 Hz oscillations in the total active power and reactive power entering the power grid can be eliminated simultaneously, and the pulsation of the common dc-link voltage is also suppressed effectively, as shown in Figure 6h,k,m.When Target 2 is selected, the fifth-and seventh-order harmonic currents of PGSC can be eliminated successfully, as analyzed in Section 3, the total current distortions can also be suppressed, as shown in Figure 7e.With the limited control variables of the PGSC, the oscillation of the total active and reactive power output could not be eliminated simultaneously, as shown in Figures 7h,k, respectively. 
Figures 5-7 also show the system dynamic responses with the conventional standard PI control scheme and the developed PI-R control method when the voltage distortions occurring between 1.6 and 1.7 s, respectively.As shown, compared with the case without harmonic control, the required voltage and current references can be obtained with the improved control targets, which allow us to achieve the goal of eliminating 300 Hz pulsations in the total output powers entering the grid or removing fifth-and seventh-order harmonic currents injected to the grid.With the proposed PI-R controllers for the SGSC and PGSC, the respective fifth-and seventh-order harmonic voltages or currents oscillating at 300 Hz in the positive (dq) + reference frame can be tuned to the their references by using the resonance regulator.Consequently, it can be seen that the voltage and current feedback signals of SGSC and PGSC precisely track their corresponding reference values, which indicates that the developed PI-R controllers have excellent dynamic response performance.It is also worth noting that the function of SGSC does not need to be changed during the normal grid condition and the distorted voltage condition for the PGSC's two control strategies, and the R regulator can eliminate the harmonic voltages or currents when the voltage distortions is cleared, which means that the developed PI-R regulator can work under both distorted grid voltage conditions and the normal conditions, without any modifications. For detailed comparison, the harmonic spectrums of the grid and stator voltages, the stator and rotor currents are given in Figure 8, and the harmonic and pulsating components are listed in Table 2.As shown, the two control targets have been fully achieved with the proposed control strategies.With the effective control of SGSC, the fifth-and seventh-order voltage harmonics in the stator have been successfully suppressed, reduced to 0.08% and 0.05% with respect to the fundamental component, respectively.As a consequence, not only the stator and rotor current harmonics but also the significant oscillations in the DFIG's electromagnetic torque and output powers can be avoided simultaneously, which undoubtedly improves the operation performance and stability of the generator under distorted network conditions.With Target 1, the 300 Hz pulsations in both the total active and reactive power entering the grid and dc-link voltage can be eliminated simultaneously.In addition, Target 2 can effectively diminish the total harmonic currents injected to the grid.While for a practical system, the control target can be flexibly selected by considering the operation of the network to meet different requirements.To further illustrate the robustness of the proposed control strategies, the system responses with both a step change of PGSC's reactive power and the variations of generator's speed and active power under 4% 5th and 3% 7th steady-state voltage condition are carried out.Figure 9a,b show the results when a step change of PGSC's reactive power from 0 to -0.1 pu at 2.0 s.As shown, the dynamic response of the PGSC's reactive power is relatively satisfactory, which means that the PGSC can participate in the auxiliary reactive power regulation if needed.Meanwhile, the fifth-and seventhorder PGSC current harmonics can accurately track their reference values during the PGSC's reactive power quick regulation, and the other operation requirements for the DFIG system can also be fully met with the two control targets.Simulations during 
variable rotor speed and power with the proposed control strategies are shown in Figure 9c,d. During the simulation process, the generator speed is changed from 1.3 to 0.8 pu during 2.0 s to 2.7 s, and the stator active power is changed from -0.8 pu to -0.2 pu with the stator reactive power being zero. It is obvious that the operation performances of the DFIG system with the variations of rotor speed and power are nicely demonstrated. With the developed control strategies, the oscillations in electromagnetic torque, stator active and reactive powers, and total power outputs or total harmonic currents are eliminated during the whole process. In addition, the three phase stator and rotor currents of the DFIG are all sinusoidal and balanced as well. As the power pulsations in the PGSC and SGSC are proportional to the stator current, the peak amplitudes of the oscillating active and reactive power in the PGSC and in the DFIG system all decrease with the reduction of the generator's output power, as shown in Figure 9c,d.
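The harmonic contents reported in Figure 8 and Table 2 can be reproduced from the simulated steady-state waveforms with a standard FFT-based spectrum. The sketch below is a generic way to do this, assuming the analysed record spans an integer number of fundamental periods; it is not the exact post-processing used by the authors.

import numpy as np

def harmonic_spectrum(signal, fs, f1=50.0, max_order=13):
    """Return the magnitudes of harmonics 2..max_order relative to the
    fundamental for a waveform sampled at fs Hz over whole fundamental cycles."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def mag_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]

    fund = mag_at(f1)
    return {k: mag_at(k * f1) / fund for k in range(2, max_order + 1)}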
Conclusions This paper has investigated the dynamic modeling and enhanced control of a grid-connected DFIG-based wind turbine with a series grid-side converter under grid voltage harmonic distortion.Based on the deduced mathematic models of the DFIG system with SGSC, a coordinated control strategy for the SGSC, PGSC and RSC has been proposed to improve the performance of the system.Two alternative control targets for the PGSC, including eliminating the pulsations oscillating at six times of the grid frequency in the total active and reactive power entering the grid or keeping the three phase total current sinusoidal and symmetrical have been achieved by coordinately controlling the SGSC and PGSC, while the RSC is able to be controlled with the conventional vector control strategy regardless of grid voltage harmonics.The respective PI-R regulator in the positive synchronous reference frame for the SGSC voltage control and PGSC current control have been proposed to directly regulate both the fundamental positive-sequence component and the harmonic components, without involving sequential decomposition.Therefore, excellent dynamic response performance can be achieved.The simulation results of a 2 MW DFIG system with SGSC under distorted grid voltage conditions fully demonstrate the effectiveness of the developed control strategies. Figure 3 . Figure 3. Schematic diagram of the proposed control scheme for the DFIG system with SGSC. Figure 4 . Figure 4. Configuration of the simulated DFIG system with SGSC. Figure 9 . Figure 9. Simulation results with PGSC's reactive power step at 2.0 s and generator speed variations during 2.0 s to 2.7 s. (a) Reactive power step with Target 1; (b) Reactive power step with Target 2; (c) Variable rotor speed with Target 1; (d) Variable rotor speed with Target 2. Stator output active and reactive powers.P r and Q r Rotor output active and reactive powers.P g and Q g PGSC output active and reactive powers.P series and Q series Active and reactive powers through SGSC.P total and Q total Total output active and reactive powers of the DFIG system with SGSC. Table 1 . Parameters of the PI-R controllers. Table 2 . Comparisons of different control Targets.
2014-10-01T00:00:00.000Z
2013-05-17T00:00:00.000
{ "year": 2013, "sha1": "7e7ea5c590ab6ffc3e967832a898c8e899a77c0a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/6/5/2541/pdf?version=1426590397", "oa_status": "GOLD", "pdf_src": "CiteSeerX", "pdf_hash": "7e7ea5c590ab6ffc3e967832a898c8e899a77c0a", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Engineering" ] }
256076839
pes2o/s2orc
v3-fos-license
A Novel Decomposition-Based Multi-Objective Evolutionary Algorithm with Dual-Population and Adaptive Weight Strategy : Multi-objective evolutionary algorithms mainly include the methods based on the Pareto dominance relationship and the methods based on decomposition. The method based on Pareto dominance relationship will produce a large number of non-dominated individuals with the increase in population size or the number of objectives, resulting in the degradation of algorithm performance. Although the method based on decomposition is not limited by the number of objectives, it does not perform well on the complex Pareto front due to the fixed setting of the weight vector. In this paper, we combined these two different approaches and proposed a Multi-Objective Evolutionary Algorithm based on Decomposition with Dual-Population and Adaptive Weight strategy (MOEA/D-DPAW). The weight vector adaptive adjustment strategy is used to periodically change the weight vector in the evolution process, and the information interaction between the two populations is used to enhance the neighborhood exploration mechanism and to improve the local search ability of the algorithm. The experimental results on 22 standard test problems such as ZDT, UF, and DTLZ show that the algorithm proposed in this paper has a better performance than the mainstream multi-objective evolutionary algorithms in recent years, in solving two-objective and three-objective optimization problems. Introduction In recent years, the field of multi-objective optimization has developed rapidly.A multiobjective optimization problem (MOP) refers to the existence of two or more conflicting objectives, the optimization of one which may lead to the deterioration of the other objectives, so that the globally unique optimal solution cannot be obtained, as in a single objective optimization problem.Multi-Objective Evolutionary Algorithm (MOEA) is often used to solve this kind of problem, which is usually based on the continuous iterative evolution of individuals in the population, and it finally obtains the solution set with uniform distribution and good convergence. The current mainstream MOEAs can be divided into three categories, respectively: the method based on the Pareto dominance relationship, the method based on evaluation metrics, and the method based on decomposition.The basic idea of MOEAs, based on Pareto dominance relationship is to generate the next generation population according to certain hybridization and mutation strategies, order all individuals in the population according to the dominance relationship, and screen individuals according to the degree of individual dominance and the sparsity of the objective space.Deb et al. [1] improved the classical NSGA algorithm, introduced the concept of congestion degree, proposed the elite selection strategy, and designed the fast non-dominated sorting algorithm, NSGA-II, with the elite selection strategy.On the basis of this algorithm framework, they also proposed a multi-objective evolutionary algorithm, NSGA-III [2], based on reference points, which pays more attention to the non-dominant individuals in the population, and achieves a good performance in solving high-dimensional MOPs.Yuan et al. 
[3] improved the NSGA-II algorithm for job-shop scheduling problems in an intelligent manufacturing environment, adopted a process-based random mutation strategy and a crossover method to generate a new generation of population, and adopted the analytic hierarchy process to determine the optimal solution.Zhang et al. [4] introduced the rotation characteristic into the simulated binary crossover operator SBX and proposed an improved simulated binary crossover algorithm RSBX based on rotation and the combination with NSGA-II to significantly improve the performance of the algorithm.Yi et al. [5] introduced an adaptive mutation operator to improve the standard NSGA-III algorithm and to enhance the ability of the algorithm to solve complex optimization problems.Based on the NSGA-III algorithm, Cui et al. [6] used the selection operator to determine the reference point of the minimum ecological digit, and then selected an individual with the shortest boundary intersection distance, based on punishment, to better balance convergence and diversity.Gu et al. [7] introduced the information feedback model and proposed an improved algorithm, IFM-NSGA-III, which used the historical information of individuals in previous generations in the updating process of the current generation. The method is based on evaluation metrics such as Inverted Generational Distance (IGD) and Hypervolume (HV), which are used to guide the population closer to the Pareto front.Sun et al. [8] proposed a method based on IGD metrics in order to select excellent individuals in each generation of individuals, and designed an efficient dominant comparison method to rank the results.Hong et al. [9] developed a new metrics-based algorithm that uses an enhanced diversification mechanism, combined a new solution generator and an external archive, and used a double local search mechanism to search different subregions of the Pareto front.Yuan et al. [10] introduced a cost-value-based distance into the target space to evaluate the contributions of individuals to explore potential fields, proposed a metrics-based CHT algorithm and embedded it into the evolutionary algorithm, achieving good experimental results in the MOPs where individuals appear in the local infeasible region.Li et al. [11] proposed a multi-modal MOEA based on weighted indexes, named MMEA-WI, and integrated the diversity information of solutions in the decision space into a performance index of target space to ensure that the Pareto front can be approached more effectively.Li et al. [12] believed that the evaluation of evolutionary algorithms is essentially a binary classification, and then proposed an online asynchronous training method of a support vector machine model based on empirical kernel, to classify promising and unpromising solutions, and then reversely added the newly generated solutions as training samples to improve the accuracy of the classifier.Garcia et al. [13] proposed an metrics-based algorithm COARSE-EMOA to solve the MOPs with equality constraints.A reference set of a feasible Pareto front was used to calculate the generation distance, and then it was used as the density estimation to obtain a better solution set distributed near the Pareto front. Zhang et al. 
[14] proposed a Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), which provided a new idea for MOEAs.This kind of algorithm no longer uses the traditional Pareto dominance relationship to guide the population evolution, but it uses the aggregation function to decompose the multi-objective problem into a number of simple single-objective problems, and co-evolve.The weight vector is used to control the direction of population evolution, which greatly reduces the computational complexity of the algorithm and has strong searching ability.It has become a classic method to solve MOPs, and has been continuously improved and extended by scholars, based on this algorithm.Zhu et al. [15] proposed a decomposed multi-objective evolutionary algorithm MOEA/D-DAE based on detection escape strategy in order to solve the problem that the algorithm is prone to stagnation in the complex feasible domain of constrained multi-objective optimization problems.Cao et al. [16] used the multi-population heuristic algorithm PBO as an effective search engine, and proposed a new multi-objective evolutionary algorithm MOEA/D-PBO based on decomposition.Wang et al. [17] proposed an improved algorithm AES-MOEA/D based on an adaptive evolutionary strategy, and adopted the evolutionary strategy of competition between the SBX and DE operators to overcome the problem of species diversity degradation caused by a single operator.Xie et al. [18] designed an improved algorithm, DTR-MOEA/D, with local target space knowledge and the dynamic transfer of reference points, established the dynamic transfer criterion of reference points according to the population density relationship in different regions, and adopted the population diversity enhancement strategy guided by regional target space knowledge to improve the population diversity.Chen et al. [19] used the improved directions in the current and historical populations to generate new solutions, and introduced this mechanism into the MOEA/D-PBI algorithm, greatly improving the convergence ability of the algorithm. In solving practical problems, Zhang et al. [20] designed a novel integral squeeze film bearing damper based on the multi-objective optimization problem, and combined the non-dominated sorting genetic algorithm and grey correlation analysis for the multiobjective optimization of stiffness and stress.Akbari et al. [21] designed and optimized the blades of small wind turbines to maximize power output and starting torque.They took the chord length and the twist angle as design variables, and used a multi-objective optimization study to evaluate the optimal blade geometry.Jiang et al. [22] proposed a multi-objective optimization procedure combined with the NSGA-II algorithm with entropy weighted TOPSIS for the lightweight design of a dump truck carriage, and the multi-objective lightweight optimization of the dump truck carriage was carried out based on the Kriging surrogate model and the NSGA-II algorithm. 
On the basis of referring to plenty of relevant work and experimental analysis, this paper makes a comprehensive comparison and analysis of two major methods, respectively: MOEAs based on Pareto dominance relationship and MOEAs based on decomposition.MOEAs based on Pareto dominance relationship usually use a non-dominated sorting algorithm to sort population individuals and to introduce a crowding degree operator to filter out overlapping individuals in the population, which can better maintain the diversity of population individuals, but there is also an obvious problem, where in the face of MOPs with a complex Pareto front, the searching ability of the algorithm is poor, and the convergence rate of the individual population is slow.However, MOEAs based on decomposition are different.Since the decomposition method is used to transform multiobjective optimization problems into multiple single-objective optimization problems, the individual searching ability of such methods is strong, and it is easier to search the boundary of Pareto front, but the problem of an uneven searching ability is likely to occur when facing complex MOPs.Therefore, it is very necessary to combine these two types of mainstream algorithms and to put forward a more comprehensive method which can avoid the defects of these two types of algorithms.At the same time, some new mechanisms and strategies should be introduced to ensure the smooth progress of the search process.Based on the above analysis, the research roadmap of the algorithm proposed in this paper is formed, as shown in Figure 1. In this paper, we combined MOEAs based on Pareto dominance relationship and MOEAs based on decomposition, and proposed an improved algorithm called MOEA/D, with Dual-Population and Adaptive Weight strategy (MOEA/D-DPAW).In the process of iterative evolution, two different populations are set up to complete the evolution in their own way.The two groups exchange and share information with each other, resulting in better individuals.Furthermore, MOEA/D-DPAW used the Pareto dominance relationship between individuals and the crowding degree operator to ensure the diversity of the population, and used the weight vector adaptive adjustment strategy and enhanced neighborhood search mechanism to improve the local search ability of the algorithm in complex space to ensure the convergence of the algorithm.The final solution set obtained by the algorithm can maintain a more uniform distribution of a population of individuals on the premise of approximately fitting the real Pareto front. The remainder of the paper is organized as follows.Section 2 introduces the previous knowledge.In the Section 3, the proposed algorithm framework and related improvement strategies are described in detail.Section 4 is the experimental part which compares and analyzes the proposed algorithm with the mainstream evolutionary multi-objective optimization algorithm in recent years.Section 5 summarizes the main work of this paper and points out further research directions. Problem Model Taking minimization MOP as an example, it can be formulated as follows: where x = (x 1 , x 2 , . . 
., x n ) T ∈ X is an n-dimensional decision vector in space R n , and X represents the decision space formed by all decision variables.y = ( Y is an m-dimensional optimization objective, and Y represents the objective space formed by all optimization objectives.m is the number of optimization objectives.For a viable solution x * ∈ X, if and only if there is no other viable solution x ∈ X satisfying . ., m, and there is at least one j ∈ {1, 2, . . ., m} that makes f j (x) ≤ f j (x * ) valid, then x is called a Pareto optimal solution of the MOP.In the decision space, all feasible solutions satisfying the Pareto optimal solution conditions form the Pareto optimal solution set PS ⊆ X.In the corresponding objective space, the definition of Pareto front is PF = {F(x)|x ∈ PS}.The essence of MOPs is to make the individuals in the population approach the real Pareto front, and to finally find a group of compromise solutions approaching the Pareto front. Dominance Relationship and Crowding Degree For MOPs, it is assumed that individuals p and q are two solutions in the population.If p is better than q, then p dominates q.Specifically, two conditions must be satisfied: (1) For all targets, individual p is no worse than q; that is, f i (p) ≤ f i (q), i = 1, 2, . . ., m. (2) There is at least one objective for which p is better than q, that is, ∃i ∈ {1, 2, . . ., m} satisfies f i (p) < f i (q).In the process of population evolution, there will be many non-dominated individuals.When the population size exceeds the capacity initially set, it is necessary to use the fast non-dominated sorting algorithm to select individuals with high convergence as much as possible, and to maintain the diversity of the population through the crowding degree operator.In this paper, the crowding degree of individual p in population P is defined as follows: (2) where Crowding(p) represents the crowding degree of individual p, and distance(p, q) represents the Euclidean distance between individuals p and q in the decision space.R is the size of the neighborhood radius.According to the Equations ( 2) and ( 3), the crowding degree of individuals is always within the range of [0,1].The crowding degree of an individual in a population depends on the number and distance of other individuals in its neighborhood.The greater the number of individuals in the neighborhood, or the smaller the Euclidean distance between an individual in the neighborhood and the current individual, the greater the crowding degree of the individual will be, and the easier it will be to be eliminated during population maintenance.In the process of the population maintenance operation, the most crowded individuals are constantly removed.If there are multiple individuals with the most crowded degree, one of them will be randomly selected, and then the crowding degree of other individuals belonging to the removed individual neighborhood will be updated.This process is repeated until the population size is satisfied. Method of Decomposition In the MOEA/D algorithm and its variants, the core idea is to split the multi-objective optimization problem into a set of scalar optimization problems, and to optimize a set of scalar optimization problems simultaneously.Each subproblem only needs to combine the information provided by several neighboring subproblems to complete the optimization calculation.First, a set of weight vectors λ = (λ 1 , λ 2 , . . 
., λ m ) T , λ i ≥ 0 should be initialized to meet the condition ∑ m i=1 λ i = 1.There are three common decomposition methods for all decision variables x in the decision space x ∈ X, as follows: • Weighted Sum approach: The weight vector is used as a coefficient corresponding to the objective function one by one, and the mathematical formula is shown as below: where g ws is the aggregate function that needs to be minimized.The idea of the decomposition method is simple, and it is only applicable to the regular Pareto front, and the effect is poor when dealing with problems with a complex Pareto front. • Tchebycheff approach: The decomposition method formula of this method is shown as below: where g te is the aggregate function that needs to be minimized, and For any Pareto optimal solution x, there will be a set of weight vector λ which makes x become the optimal solution in the Tchebycheff equation.This method can be used to obtain different Pareto optimal solutions by changing the weight vector.Because this method has a good effect on most problems, has universality, and is easy to implement, so in the experimental part, this paper chooses the Tchebycheff decomposition method. • Penalty-based Boundary Intersection approach: This method attempts to find the intersection point between a group of rays passing through the target space from an ideal point and the Pareto front.If these rays are uniformly distributed, then the intersection points found will be approximately uniformly distributed: where g pbi is the aggregate function that needs to be minimized, θ is the penalty parameter, and the meaning of z * is the same as the Tchebycheff approach.Let y be the projection point of F(x) on the ray, with z * as the origin and −λ as the direction vector, then d 1 is the distance between y and z * , and d 2 is the distance between y and F(x).When using this method, the penalty parameter θ is very important, and the setting of θ will directly determine the final performance of the algorithm.However, when solving practical problems, it is difficult to determine the size of parameter θ at the beginning, and so this method is not used in the experimental part of this paper. Framework In this paper, we proposed a Multi-Objective Evolutionary Algorithm Based on Decomposition with Dual-Population and Adaptive Weight strategy (MOEA/D-DPAW).The framework of the algorithm is shown in Figure 2.There are two different populations in MOEA/D-DPAW, which are respectively used in the evolutionary algorithm based on decomposition and the evolutionary algorithm based on Pareto domination.During each iteration, two populations evolve in their own way, exchanging and sharing information with each other, resulting in better individuals. 
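Since the displayed formulas of the three decomposition approaches described in Section 2.3 are not fully legible here, the sketch below gives their standard MOEA/D forms for reference (weighted sum, Tchebycheff, and penalty-based boundary intersection); the default penalty value theta = 5 is a common choice and an assumption of this sketch.

import numpy as np

def g_weighted_sum(f, lam):
    """Weighted Sum: inner product of the weight vector and the objectives."""
    return float(np.dot(lam, f))

def g_tchebycheff(f, lam, z_star):
    """Tchebycheff: weighted maximum deviation from the ideal point z*."""
    return float(np.max(np.asarray(lam) * np.abs(np.asarray(f) - np.asarray(z_star))))

def g_pbi(f, lam, z_star, theta=5.0):
    """Penalty-based boundary intersection: d1 is the length of the projection
    of F(x) - z* onto the weight direction, d2 the distance to that projection."""
    lam = np.asarray(lam, dtype=float)
    diff = np.asarray(f, dtype=float) - np.asarray(z_star, dtype=float)
    unit = lam / np.linalg.norm(lam)
    d1 = float(np.abs(np.dot(diff, unit)))
    d2 = float(np.linalg.norm(diff - d1 * unit))
    return d1 + theta * d2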
In order to ensure the convergence of the algorithm, the evolutionary population P 1 will continue to evolve according to the MOEA/D based on the Tchebycheff decomposition approach.Firstly, the corresponding weight vector will be assigned to all individuals, and then the genetic operator will be used for each subproblem to generate new solutions in its neighborhood.After that, the update operation of the population individuals and ideal points will be completed.An external population EP is maintained in the process, to collect all the non-dominant solutions during the evolution of the population.In addition, because the evenly distributed fixed weight vector is not conducive to solving the complex MOPs, in the process of population P 1 evolution, MOEA/D-DPAW uses the weight vector adaptive adjustment strategy to periodically adjust the weight vector, which makes the algorithm more applicable to practical problems, and makes the population of individuals closer to the Pareto front.This is covered in more detail in Section 3.2.At the same time, due to the limitation of neighborhood, the individual exploration of population P 1 will always focus on some specific areas in the objective space, so the algorithm has a poor search ability when facing a complex Pareto front.To solve this problem, MOEA/D-DPAW uses the enhanced neighborhood exploration mechanism.By means of further interaction between the individuals in population P 2 and population P 1 , the exploration range of individuals in population P 1 is expanded, thus further improving the search ability of the algorithm.This part will be introduced in detail in Section 3.3.In terms of maintaining the individual diversity of the population, MOEAs based on decomposition often lack some specific methods to avoid the problem of uneven distribution of individuals within the population.In the face of complex MOPs, the individuals of the population may be concentrated in some specific areas.In view of this, the evolutionary population P 2 based on the Pareto dominance relationship is introduced into the MOEA/D-DPAW algorithm.Before each iteration evolution, population P 2 will merge with individuals in population P 1 , and then it conducts a fast non-dominated sorting of the resulting population.Based on the concept of crowding degree that is proposed in Section 2.2, population maintenance operations will be carried out.On the premise of ensuring the uniform distribution of individuals, excellent individuals will be selected for the subsequent enhanced neighborhood exploration mechanism.To sum up, the overall algorithm of MOEA/D-DPAW is shown in Algorithm 1. Lines 7 to 13 are the evolutionary process, based on the Pareto dominance relationship.Lines 14 to 33 are the evolutionary process based on decomposition; specifically, lines 19 to 23 are the process of updating the subproblem using the Tchebycheff formula, and lines 24 to 31 are the process of maintaining the external population EP.Finally, lines 34 to 36 are the call of the weight vector adaptive adjustment strategy, and line 37 is the call of the enhanced neighborhood exploration mechanism, which can refer to the contents of Sections 3.2 and 3.3, respectively. 
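The merge-and-truncate step that population P 2 performs before each iteration can be sketched as follows. The dominance test follows the definition in Section 2.2; the product form of the crowding degree is an assumption consistent with the stated properties of Equations (2)-(3) (a value in [0, 1] that grows when neighbours within the radius are more numerous or closer), and the non-dominated filtering here is simplified to keeping only the first front.

import numpy as np

def dominates(fp, fq):
    """True if objective vector fp Pareto-dominates fq (minimization)."""
    fp, fq = np.asarray(fp, dtype=float), np.asarray(fq, dtype=float)
    return bool(np.all(fp <= fq) and np.any(fp < fq))

def crowding_degree(dec, idx, radius):
    """Neighbourhood-based crowding degree of individual idx in decision space."""
    p = np.asarray(dec[idx], dtype=float)
    prod = 1.0
    for j, q in enumerate(dec):
        if j == idx:
            continue
        d = float(np.linalg.norm(p - np.asarray(q, dtype=float)))
        if d < radius:
            prod *= d / radius
    return 1.0 - prod

def maintain_population(dec, obj, radius, capacity):
    """Keep non-dominated individuals, then repeatedly drop the most crowded
    one (ties broken arbitrarily) until the capacity is met."""
    keep = [i for i in range(len(obj))
            if not any(dominates(obj[j], obj[i]) for j in range(len(obj)) if j != i)]
    dec, obj = [dec[i] for i in keep], [obj[i] for i in keep]
    while len(dec) > capacity:
        crowd = [crowding_degree(dec, i, radius) for i in range(len(dec))]
        worst = int(np.argmax(crowd))
        dec.pop(worst)
        obj.pop(worst)
    return dec, obj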
Adaptive Weight Strategy It can be seen from the three decomposition methods in Section 2.3 that the setting of the size of the weight vector is crucial.Each different weight vector corresponds to a unique sub-problem that is formed after decomposition.However, in the original MOEA/D algorithm, the weight vector is fixed at the initialization, and there will be no changes afterwards, so that it is difficult to immediately determine the most appropriate weight vector size in the face of complex MOPs.To solve this problem, the MOEA/D-DPAW algorithm proposed in this paper uses the weight vector adaptive adjustment strategy, and periodically adjusts the weight vector in the process of population evolution, which can make these decomposed subproblems approach to the real Pareto front better.In the initialization phase, MOEA/D-DPAW uses a uniform random method to generate an initial set of weight vectors λ = (λ 1 , λ 2 , . . ., λ m ) T ∈ R m .WS-transform [23] is applied to project the weight vector of the scalar quantum problem to its solution mapping vector: In the process of population evolution, MOEA/D-DPAW will periodically adjust the weight vector every 5%T interval, where T is the number of evolution generation.During the weight vector adjustment, Equation (10) was used to calculate individual sparsity first for the individual p, and then the most crowded 5%N subproblems were removed, where N is the population size. MOEA/D-DPAW uses the external population EP to store the non-dominated solutions found during the search.When adjusting the weight vector to create a new subproblem, Equation ( 10) should be used to calculate the sparse degree of individuals in EP relative to the current population.Then, the sparsest individual x s in EP should be selected each time to generate a new subproblem and to calculate its objective function value F(x s ) = ( f s 1 , f s 2 , . . ., f s m ).Finally, Equation ( 11) should be used to generate a new weight vector λ and associate with it, and this new subproblem is added to the population.The specific process of the Adaptive Weight Strategy algorithm is shown in Algorithm 2. Calculate the sparsity degree of each individual using Equation ( 2) and ( Remove the individual with the minimum sparsity degree 6: count ← count + 1 7: end while 8: while count > 0 do 9: Calculate the sparsity degree of each individual between P and EP using Equation (10) 10: Generate a new weight vector λ s using Equation (11) 11: Select the individual x s which has highest sparsity degree of EP 12: Add the newly subproblem λ s associated with x s to P 13: count ← count − 1 14: end while 15: Update neighbors of each weight vector of λ 16: return P, λ Enhanced Neighborhood Exploration Mechanism In general, MOEAs based on decomposition tend to search in the direction of the Pareto front in the process of population evolution.When faced with MOPs with a complex Pareto front, it may lead to repeated searching in some specific areas by the individual population.However, MOEAs based on the Pareto dominance relationship always maintain a group of representative non-dominant individuals in the process of population evolution, and coupled with the limitation of crowding degree, and so they perform well in individual diversity.Therefore, MOEA/D-DPAW uses the enhanced neighborhood exploration mechanism to combine the evolutionary characteristics of two different populations, as shown in Algorithm 3. 
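Before turning to Algorithm 3, the core operations of the weight-adaptation step (Algorithm 2) can be sketched as follows. The exact forms of Equations (10) and (11) are not legible in this copy, so the sparsity measure (product of distances to the k nearest neighbours) and the reciprocal construction of the new weight vector are assumptions following the usual MOEA/D-AWA convention; the WS-transform follows [23].

import numpy as np

def ws_transform(lam):
    """WS-transform: map the weight of a scalar subproblem to its
    solution-mapping vector (componentwise reciprocal, renormalized)."""
    inv = 1.0 / np.maximum(np.asarray(lam, dtype=float), 1e-12)
    return inv / inv.sum()

def sparsity_level(point, population, k=3):
    """Sparsity of a point relative to a population, taken here as the product
    of its distances to the k nearest distinct neighbours."""
    d = np.linalg.norm(np.asarray(population, dtype=float) - np.asarray(point, dtype=float), axis=1)
    d = np.sort(d[d > 1e-12])[:k]
    return float(np.prod(d))

def new_weight_from_objectives(f_s, z_star):
    """Weight vector associated with the archive member x_s that spawns a new
    subproblem (cf. Equation (11)), using the reciprocal-of-offset form."""
    inv = 1.0 / np.maximum(np.asarray(f_s, dtype=float) - np.asarray(z_star, dtype=float), 1e-12)
    return inv / inv.sum()

In each adjustment period the 5%N most crowded subproblems would be removed and the same number of new subproblems created from the sparsest members of EP using these helpers.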
In the process of each iterative evolution, the search in population P 2 is based on a Pareto domination relationship.For all individuals in population P 2 , the number of individuals in population P 1 within the neighborhood range is checked; if the number is less than one, it indicates that the current space is a region that is difficult to explore using the decomposition algorithm, but that there is a high possibility of excellent solutions.Therefore, this individual is asked to use the mutation operator to generate new solutions, and to add them to the population P 1 .Finally, we perform population maintenance operations on P 1 , according to Section 2.2, to obtain the population for the next iteration.The size of the neighborhood radius parameter r is equal to the average distance between population P 2 and the nearest several individuals of the current individual, which is equal to the size of the neighborhood in the evolution process of population P 1 .Via the enhanced neighborhood exploration mechanism, the population of individuals can avoid repeated searching in a fixed area, improve the diversity of the population individuals, and have a better ability to deal with MOPs with a complex Pareto front. Algorithm 3 Enhanced Individual Exploration Require: P 1 (Population based on decomposition), P 2 (Population based on Pareto domination) 1: E ← ∅ 2: for all q ∈ P 2 do 3: for all q ∈ P 1 do 5: if distance(p, q) ≤ r then Computational Complexity The MOEA/D-DPAW algorithm proposed in this paper mainly consists of two parts.The first part is the algorithm is based on decomposition with population P 1 , and the computational complexity of this part mainly comes from the decomposition and updating of subproblems and individual exploration.The complexity of this part is O(mTN 2 ), where m is the number of objectives and T is the neighborhood size.N is the number of individuals in population P 1 or P 2 .The second part is the algorithm based on the Pareto dominance relationship with population P 2 .The computational complexity of this part is mainly derived from the non-dominated sorting of population individuals, the complexity of which is O(mN 2 ).The complexity of the individual replication operation of population P 1 and population P 2 is O(N), while the complexity of the individual replacement operation is O(mNlogN) because it involves the calculation of crowding degree.In the adaptive weight vector adjustment strategy used in this paper, the computational complexity is O(θmNlogN), where θ is the periodic adjustment parameter of the weight vector, which is set to 0.05N.In the enhanced neighborhood exploration mechanism proposed in this paper, the computational complexity is mainly derived from the traversal and screening of all individuals in population P 1 and population P 2 , and the time complexity is O(N 2 ).Based on the above analysis, the overall computational complexity of MOEA/D-DPAW algorithm is O(mTN 2 + mN 2 + θmNlogN + N 2 ) ≈ O(mTN 2 ).It can be seen that the introduction of dual-population, weight vector adaptive adjustment strategy, and enhanced neighborhood exploration mechanism exploration mechanism do not significantly increase the computational complexity of the algorithm.The overall computational complexity of the MOEA/D-DPAW algorithm is the similar to the original MOEA/D algorithm, and it can still complete the iterative evolution of the population relatively quickly. 
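The neighbourhood check at the heart of Algorithm 3 can be sketched as below: any member of the Pareto-based population P 2 that has no member of the decomposition-based population P 1 inside its neighbourhood spawns a mutated offspring that is appended to P 1 (population maintenance then trims P 1 back to size). The mutation operator is user-supplied; polynomial mutation, as used elsewhere in the paper, is a natural choice.

import numpy as np

def enhanced_exploration(pop1_dec, pop2_dec, radius, mutate):
    """Sketch of the enhanced neighbourhood exploration mechanism (Algorithm 3)."""
    pop1_dec = np.asarray(pop1_dec, dtype=float)
    new_individuals = []
    for q in np.asarray(pop2_dec, dtype=float):
        dists = np.linalg.norm(pop1_dec - q, axis=1)
        if np.sum(dists <= radius) < 1:            # region unexplored by P1
            new_individuals.append(mutate(q))      # generate a new solution there
    if new_individuals:
        pop1_dec = np.vstack([pop1_dec, np.asarray(new_individuals, dtype=float)])
    return pop1_dec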
Experimental Setup

The experimental environment of this paper is an Intel (R) Core (TM) i5-9500 CPU @ 3.00 GHz with 16 GB RAM. All comparison methods are implemented in the PlatEMO [24] platform based on MATLAB. To test the performance of the proposed algorithm, 22 standard test problems were selected, covering two-objective and three-objective optimization. Specifically, this paper uses ZDT1-ZDT4, ZDT6, and UF1-UF7 as the two-objective test set, and DTLZ1-DTLZ7 and UF8-UF10 as the three-objective test set. For all two-objective problems, the population size is set to N = 150 and the maximum number of fitness evaluations is set to 60,000. For all three-objective problems, the population size is set to N = 200 and the maximum number of fitness evaluations is set to 100,000. The neighborhood size is set to T = 5. The simulated binary crossover operator and the polynomial mutation operator are used, with distribution index η = 20, crossover probability p_c = 1, and mutation probability p_m = 1/N.

Method of Comparison

In this paper, the following five MOEAs from recent years are selected as comparison baselines, with their parameters set according to the corresponding references during the experiments.

• AGEMOEA [25]: A method that uses a non-Euclidean distance to estimate the geometric structure of the Pareto front and dynamically adjusts diversity and population density to achieve good convergence.
• MOEA/D-URAW [26]: A variant of the MOEA/D algorithm, which uses uniform random weight generation and an adaptive weight method based on population sparsity to solve complex multi-objective optimization problems.
• NSGA-II-SDR [27]: A variant of the NSGA-II algorithm which, based on the angle between candidate solutions, uses an adaptive niching technique that identifies only the best-converging candidate solution in each niche as non-dominated, thus better balancing the convergence and diversity of evolutionary multi-objective optimization.
• CMOPSO [28]: An improvement of the multi-objective particle swarm optimization algorithm based on a competition mechanism, in which particles are updated on the basis of pairwise competitions within each generation of the population.
• MOEA/D-DAE [29]: A variant of the MOEA/D algorithm, which uses a detect-and-escape strategy to detect stagnation of the algorithm from the feasibility ratio and the rate of change of the overall constraint violation, and then adjusts the constraint violation weight in time to guide the population search out of the stagnation state.

Performance Metric

In this paper, the Inverted Generational Distance (IGD) and the Hypervolume (HV) are used to evaluate the performance of the algorithms. The IGD value is calculated as follows: IGD(P, P*) = ( Σ_{x∈P*} distance(x, P) ) / |P*|, where P* is a set of reference points uniformly distributed on the true Pareto front, and distance(x, P) is the Euclidean distance between x and the individual in population P closest to it. IGD reflects the average of the minimum distances between points on the true Pareto front and the approximate front obtained by the algorithm, and therefore captures both the convergence and the diversity of the algorithm. The smaller the IGD value, the higher the quality of the obtained solution set. The HV value is calculated as follows: HV(P) = δ( ∪_{x∈P} [f_1(x), z_1*] × [f_2(x), z_2*] × · · · × [f_m(x), z_m*] ), where [f_i(x), z_i*] represents the interval spanned between solution x and the reference point z* in the i-th objective, and δ denotes the Lebesgue measure, used to compute the volume. The HV index can be regarded as the hypervolume of the region enclosed by the solution set obtained by the algorithm and the reference point. The higher the HV value, the better the algorithm performance.
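For concreteness, the two metrics can be computed along the following lines. This is a minimal sketch for the minimization case (the HV routine is a simple two-objective sweep rather than a general Lebesgue-measure computation), and all names are hypothetical.

```python
import numpy as np

def igd(P, P_star):
    """Inverted Generational Distance: mean distance from each reference
    point on the true front P* to its nearest solution in P."""
    P, P_star = np.asarray(P, dtype=float), np.asarray(P_star, dtype=float)
    d = np.linalg.norm(P_star[:, None, :] - P[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def hv_2d(P, z_ref):
    """Hypervolume of a two-objective solution set P (minimization) with
    respect to the reference point z_ref, computed by a simple sweep."""
    P, z_ref = np.asarray(P, dtype=float), np.asarray(z_ref, dtype=float)
    pts = P[np.all(P < z_ref, axis=1)]
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]      # sort by the first objective
    hv, prev_f2 = 0.0, z_ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                  # each non-dominated point adds a strip
            hv += (z_ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```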
Results and Analysis

All experiments in this paper were independently run 30 times, and the mean and standard deviation of the results were recorded. The Wilcoxon rank-sum test with a significance level of 0.05 was used for the statistical analysis of the results. The symbols "+", "−", and "≈" indicate that the results of a comparison algorithm are significantly better than, significantly worse than, or statistically comparable to the results of the MOEA/D-DPAW algorithm, respectively.

The IGD results of the six algorithms on the 12 two-objective ZDT and UF test problems are shown in Table 1. According to the table, the MOEA/D-DPAW algorithm proposed in this paper obtained the best IGD values on six test problems: ZDT1, ZDT3, UF1, UF2, UF4, and UF6. For the ZDT2 and ZDT4 problems, although the CMOPSO algorithm achieves the best results, the IGD values obtained by MOEA/D-DPAW are close to it. For the ZDT6, UF3, UF5, and UF7 problems, the performance of MOEA/D-DPAW is slightly inferior to that of the MOEA/D-DAE and CMOPSO algorithms. The HV results of the six algorithms on the 12 two-objective ZDT and UF test problems are shown in Table 2. According to the table, MOEA/D-DPAW obtained the best HV values on seven test problems: ZDT1, ZDT2, ZDT4, UF2, UF4, UF5, and UF6. For the ZDT3 and ZDT6 problems, the MOEA/D-URAW and CMOPSO algorithms, respectively, obtain the best performance, but the results of MOEA/D-DPAW are almost equal to them. For the UF1, UF3, and UF7 problems, the CMOPSO algorithm performs best; although MOEA/D-DPAW is not as good as CMOPSO on these problems, it still performs well compared with the remaining algorithms. Therefore, the MOEA/D-DPAW algorithm performs excellently on the two-objective optimization problems, which demonstrates the effectiveness of the improved strategies proposed in this paper, and both the convergence and the diversity of the solution set are guaranteed.
The results of IGD in six algorithms in this experiment on 10 three-objective test problems of DTLZ and UF are shown in Table 3.According to the table, it can be seen that the MOEA/D-DPAW algorithm proposed in this paper obtained the best IGD values on the DTLZ2, DTLZ4, DTLZ5, DTLZ7, UF8, and UF9 test problems.For the UF10 problem, the AGEMOEA algorithm achieves the best solution, but the solution by the MOEA/D-DPAW algorithm is very close to it.For the problems of DTLZ1, DTLZ3, and DTLZ6, the performance of the MOEA/D-DAE algorithm is better.Although the performance of the MOEA/D-DPAW algorithm is inferior to that of the MOEA/D-DAE algorithm, it is still more outstanding compared with the remaining four algorithms.The results of HV in six algorithms in this experiment on nine three-objective test problems of DTLZ and UF are shown in Table 4.According to the table, the MOEA/D-DPAW algorithm proposed in this paper has obtained the best HV value on the four test problems of DTLZ2, DTLZ4, UF8, and UF9.The MOEA/D-DAE algorithm also obtained the best HV value for the DTLZ1, DTLZ3, DTLZ5, DTLZ6, and DTLZ7 problems, indicating the excellent performance of these two algorithms in solving three-objective optimization problems.From the perspective of data distribution, both the MOEA/D-DPAW algorithm and the MOEA/D-DAE algorithm have their advantages and disadvantages.Regarding the UF10 problem, the AGEMOEA algorithm achieved the best running results, and the MOEA/D-DPAW algorithm performed slightly worse than the AGEMOEA algorithm.Based on the above analysis, the MOEA/D-DPAW algorithm proposed in this paper also has a good performance when solving the three-objective optimization problem, and it can finally obtain the solution set with uniform distribution and a good convergence effect. In order to display the operation results of the algorithm more intuitively, this paper selects three two-objective optimization problems, ZDT1, ZDT2, and ZDT3, three threeobjective optimization problems DTLZ1, DTLZ4, and DTLZ7, with six representative MOPs altogether.The results of six algorithms on the ZDT3 problem are shown in Figure 5.The real Pareto front of this problem is discrete distribution, which is more complex than the first two problems.As can be seen from the figure, the MOEA/D-DPAW algorithm proposed in this paper has the best convergence effect with the CMOPSO algorithm and the highest degree of closeness to the real Pareto front, and the population has good diversity.It shows that the improved strategy in the MOEA/D-DPAW algorithm is still effective, and that good results can be obtained when facing relatively complex MOPs.Among the other four comparison algorithms, their convergence is relatively worse than that of the MOEA/D-DPAW algorithm, the final population obtained by the MOEA/D-URAW algorithm is also relatively discrete, and the diversity is not fully guaranteed.Based on the above analysis, the effectiveness and superiority of the proposed MOEA/D-DPAW algorithm in solving two-objective optimization problems can also be intuitively seen. 
The results of the six algorithms on the DTLZ1 problem are shown in Figure 6. It can be seen from the figure that the performance of the MOEA/D-DPAW algorithm proposed in this paper is the best. This indicates that the advantages of MOEA/D-DPAW become more obvious when facing higher-dimensional MOPs, and that the enhanced neighborhood exploration mechanism can more effectively guide population individuals toward the Pareto front. Although the populations obtained by the other five comparison methods are also close to the true Pareto front, their distributions are more dispersed than that of MOEA/D-DPAW. The population distribution obtained by the NSGA-II-SDR algorithm is the worst; the distributions obtained by the other four algorithms are significantly better than that of NSGA-II-SDR, but still inferior to that of MOEA/D-DPAW.

Conclusions

Aiming at the multi-objective optimization problem, this paper proposed an improved MOEA/D algorithm with a dual-population and adaptive weight strategy. In the algorithm, two different populations evolve according to their own criteria and exchange information so that each can exploit its own advantages, further improving the quality of the solutions. In addition, the weight vector adaptive adjustment strategy is used to periodically change the weight vectors during evolution, which makes the algorithm better suited to MOPs with a complex Pareto front. At the same time, the enhanced neighborhood exploration mechanism is used to improve the local search ability of the algorithm and to prevent the population individuals from always concentrating on a specific area when facing complex MOPs. The comparative experimental results on 22 standard test problems show that, compared with several mainstream evolutionary multi-objective optimization algorithms of recent years, the proposed algorithm achieves better solution accuracy and a better convergence effect on both two-objective and three-objective optimization problems. Additionally, it maintains the diversity of the population well, and the population individuals are distributed more evenly while remaining close to the true Pareto front.

In the future, the proposed algorithm can be extended to MOPs with higher dimensions, exploring the performance of the related strategies in more complex and higher-dimensional spaces. More attention should also be paid to its effectiveness in the parameter optimization of practical problems.

Table 1. IGD values obtained by the six algorithms on the two-objective ZDT and UF test problems.
Table 2. HV values obtained by the six algorithms on the two-objective ZDT and UF test problems.
Table 4. HV values obtained by the six algorithms on the three-objective DTLZ and UF test problems.
TOI-1696: a nearby M4 dwarf with a $3R_\oplus$ planet in the Neptunian desert

We present the discovery and validation of a temperate sub-Neptune around the nearby mid-M dwarf TIC 470381900 (TOI-1696), with a radius of $3.09 \pm 0.11 \,R_\oplus$ and an orbital period of $2.5 \,\rm{days}$, using a combination of TESS and follow-up observations using ground-based telescopes. Joint analysis of multi-band photometry from TESS, MuSCAT, MuSCAT3, Sinistro, and KeplerCam confirmed the transit signal to be achromatic as well as refined the orbital ephemeris. High-resolution imaging with Gemini/'Alopeke and high-resolution spectroscopy with the Subaru/IRD confirmed that there are no stellar companions or background sources to the star. The spectroscopic observations with IRD and IRTF/SpeX were used to determine the stellar parameters, and found the host star is an M4 dwarf with an effective temperature of $T_{eff} = 3185 \pm 76\,\rm{K}$ and a metallicity of [Fe/H] $=0.336 \pm 0.060 \,\rm{dex}$. The radial velocities measured from IRD set a $2$-$\sigma$ upper limit on the planetary mass of $48.8 \,M_\oplus$. The large radius ratio ($R_p/R_\star \sim 0.1$) and the relatively bright NIR magnitude ($J=12.2 \,\rm{mag}$) make this planet an attractive target for further followup observations. TOI-1696b is one of the planets belonging to the Neptunian desert with the highest transmission spectroscopy metric discovered to date, making it an interesting candidate for atmospheric characterizations with JWST.

INTRODUCTION

Exoplanet population statistics from the Kepler mission (Borucki et al. 2010) revealed that there is a dearth of planets around the size of Neptune (∼3-4 R⊕) with orbital periods less than 2-4 d. This has been referred to as the "Neptunian Desert", the "photo-evaporation desert", or simply the "evaporation desert" (Szabó & Kiss 2011; Mazeh et al. 2016; Lopez 2017). The scarcity of planets in this region of parameter space can be explained by photoevaporation, that is, atmospheric mass loss due to high-energy irradiation from the host star (Owen & Wu 2017). The small number of planets that have so far been found in the desert (e.g. West et al. 2019; Jenkins et al. 2020) are believed to retain substantial atmospheres (or are still in the process of losing them), but the physical mechanisms are not well understood. Comparing planets that have lost their atmospheres with those that have retained them will be useful for understanding processes such as photo-evaporation. Therefore, it is important to increase the number of planets in this region and reveal the nature of their atmospheres. TESS (Ricker et al. 2015), which has identified over 5000 exoplanet candidates so far 1, made it possible to discover more planets in the Neptunian Desert.

Figure 1. Archival image (Reid et al. 1991) with the TESS photometric aperture (black outline) and Gaia sources (gray circles). The cyan circle indicates the position of TOI-1696; we note the proper motion is low enough that its current position is not significantly offset in the archival image.

In this paper, we report the validation of a new planet around the mid-M dwarf TOI-1696, whose transits were identified by the TESS mission. The planet TOI-1696 b has a sub-Neptune size (3.09 ± 0.11 R⊕) and an orbital period of 2.5 days, which places it within (or near the boundaries of) the Neptunian desert.
The large radius ratio (R p /R ∼ 0.1) makes the planet's transits deep, and combined with the relatively bright near-IR (NIR) magnitude (J=12.2 mag) of the star, the planet is one of the best targets for future atmospheric research via transmission spectroscopy. The rest of this paper is organized as follows. In Section 2, we present the observational data and the reduction procedures used for the analyses. In Section 3, we explain the analyses methods and results. In Section 4, we discuss the features of the planet and its future observational prospects, concluding with a summary in Section 5. Transit photometry -TESS TESS observed TOI-1696 with a 2 min cadence in Sector 19 from 2019 Jul 25 to Aug 22, resulting in photometry spanning approximately 27 days with a gap of about one day in the middle when the satellite reoriented itself for data downlink near perigee. Light curves were produced by the Science Processing Operations Center (SPOC) photometry pipeline (Jenkins 2002;Jenkins et al. 2010;Jenkins & et al. 2020) using the aperture shown in Figure 1. We used the PDCSAP light curves produced by the SPOC pipeline Smith et al. 2012;Stumpe et al. 2014) for our transit analyses. TOI-1696 is located in a fairly crowded field, owing to its low galactic latitude (b = −0.81 • ). The SPOC pipeline applies a photometric dilution correction based on the CROWDSAP metric, which we independently confirmed by computing dilution values based on Gaia DR2 magnitudes 2 . TOI-1696.01 was detected by the SPOC pipeline in a transiting planet search, and the candidate was subsequently reported to the community by the TESS Science Office (TSO) on 2020 January 30 via the TESS Object of Interest (TOI; Guerrero et al. (2021)) Releases portal 3 . The candidate passed all data validation diagnostic tests (Twicken et al. 2018) performed by the SPOC 4 . The SPOC pipeline removed the transit signals of TOI-1696.01 from the light curve and performed a search for additional planet candidates (Li et al. 2019), but none were reported. We independently confirmed the transit signal found by the SPOC. After removing stellar variability and residual instrumental systematics from the PDCSAP light curve using a 2nd order polynomial Savitzy-Golay filter, we searched for periodic transit-like signals using the transit least-squares algorithm (TLS; Hippke & Heller 2019) 5 , resulting in the detection of TOI-1696.01 with a signal detection efficiency (SDE) of 11.6, a transit signal-to-noise 2 approximating Gaia Rp as the TESS bandpass, and assuming a full width at half maximum (FWHM) of 25 3 https://tess.mit.edu/toi-releases/ 4 Full vetting report available for download at https://exo.mast.stsci.edu/exomast planet.html? planet=TOI169601 5 https://transitleastsquares.readthedocs.io/en/latest/ index.html ratio (SNR) of 7.4, orbital period of 2.50031 ± 0.00001 days, and transit depth of 10.6 parts per thousand (ppt), which is consistent with the values reported by the TESS team on ExoFOP-TESS 6 . We subtracted this signal and repeated the transit search, but no additional signals with SDE above 10 were found. TLS also reports the approximate depths of each individual transit; we note that these transit depths and uncertainties are useful for diagnostic purposes only, as they are simplistically determined from the mean and standard deviation of the in-transit flux. 
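As a rough illustration of this kind of search (not the exact configuration used here), the transit least-squares package can be run on a detrended, normalized light curve along the following lines; the function name is hypothetical and the default period grid is assumed.

```python
from transitleastsquares import transitleastsquares

def search_for_transits(time, flux):
    """Run a transit least-squares search on a detrended light curve.

    time : array of timestamps (days); flux : normalized, detrended flux.
    """
    model = transitleastsquares(time, flux)
    results = model.power()              # default period grid and priors
    print("best period [d]:", results.period)
    print("SDE:", results.SDE)
    print("best-fit depth reported by TLS:", results.depth)
    return results
```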
The depths of the odd transits are within 1.5σ of the even transits, suggesting a low probability of either signal being caused by an eclipsing binary at twice the detected period. The TLS detection is shown in Figure 2. 2.2. Transit photometry -FLWO/KeplerCam 6 https://exofop.ipac.caltech.edu/tess/ We used KeplerCam, mounted on the 1.2m telescope located at the Fred Lawrence Whipple Observatory (FLWO) atop Mt. Hopkins, Arizona, to observe a full transit on 2020 February 17. KeplerCam has a 23 .1 × 23 .1 field-of-view and operates in binned by 2 mode producing a pixel scale of 0.672 . Images were obtained in the i-band with an exposure time of 300 seconds. A total of 29 images were collected over 144 minutes. The data were reduced using standard IDL routines and photometry was performed using the AstroImageJ software package ). Transit photometry -LCO/SINISTRO We observed a full transit on 2020 November 13, using Sinistro, an optical camera mounted on a 1m telescope located at McDonald Observatory in Texas, operated by Las Cumbres Observatory (Brown et al. 2013). Sinistro has a 26 .5 × 26 .5 field of view with a pixel scale of 0.389 . We observed 62 images in total during 339 minutes, using a V -band filter, with an exposure time of 5 min. The data were reduced by the standard LCOGT BANZAI pipeline (Mc-Cully et al. 2018), and photometry was performed using AstroImageJ software. Transit photometry -LCO/MuSCAT3 MuSCAT3 is a multi-band simultaneous camera installed on the 2m Faulkes Telescope North at Las Cumbres Observatory (LCO) on Haleakala, Maui (Narita et al. 2020). It has four channels, enabling simultaneous photometry in the g (400-550 nm), r (550-700 nm), i (700-820 nm) and z s (820-920 nm) bands. Each channel has a 2048×2048 pixel CCD camera with a pixel scale of 0.27 , providing a 9 .1 × 9 .1 field of view. We observed a full transit of TOI-1696.01 on 2020 December 23, from BJD 2459206.703523 to 2459206.827246. We took 36, 41, 89, and 131 exposures with exposure times of 300, 265, 120, and 80 s in the g, r, i, and z s bands, respectively. The data reduction was conducted by the standard LCOGT BANZAI pipeline. Then the differential photometry was conducted by a customized aperture-photometry pipeline for MuSCAT series (Fukui et al. 2011). The optimized aperture radii are 8, 6, 10, and 8 pixels (2.16 , 1.62 , 2.7 , and 2.16 ) for the g, r, i, and z s bands, respectively. We optimized a set of comparison stars for each band to minimize the dispersion of the light curves. For computational efficiency, and to achieve a more uniform signal-to-noise ratio (SNR), we subsequently binned the g, r, i, and z s data to 300, 240, 180, and 120 s, respectively. Transit photometry -NAOJ 188cm/MuSCAT We also observed a full transit with MuS-CAT (Narita et al. 2015), which is installed on the 188cm telescope of National Astronomical Observatory of Japan (NAOJ) in Okayama, Japan. MuSCAT has a similar optical design as MuSCAT3 but has three CCD cameras for the g, r and z s bands. On the night of 2021 July 28 we observed TOI-1696 from BJD 2459424.228358 to 2459424.30679. At that point, the r-band camera was not available due to an instrumental issue, so we observed with only the g and z s bands, using an exposure time of 60 s for both bands. The data reduction and differential photometry was performed using the pipeline described in Fukui et al. (2011). The optimized aperture radii were 4 and 6 pixels (1.44 and 2.16 ) for the g and z s bands, respectively. 
Similarly to the MuSCAT3 data, we binned the g and z s data to 300 and 120 s, respectively. 2.6. Speckle imaging -Gemini/'Alopeke On the nights of 2020 December 03 and 2021 October 14, TOI-1696 was observed with the 'Alopeke speckle imager (Scott 2019), mounted on the 8.1 m Gemini North telescope on Mauna Kea. 'Alopeke simultaneously acquires data in two bands centered at 562 nm and 832 nm using high speed electron-multiplying CCDs (EMCCDs). We collected and reduced the data following the procedures described in Howell et al. (2011). The resulting reconstructed image achieved a contrast of ∆mag = 5.8 at a separation of 1 in the 832 nm band. No secondary sources were detected. The data taken on 2021 October 14 is shown in Figure 3. Adaptive optics imaging -Palomar/PHARO On 2021 September 19 we conducted nearinfrared high-resolution imaging using the adaptive optics instrument PHARO mounted on the 5 m Hale telescope at Palomar Observatory (Hayward et al. 2001). We observed TOI-1696 seperately in the Brγ (2.18 µm) and H cont (2.29 µm) bands, reaching a contrast of ∆mag = 8 at a separation of 1 in both bands. The AO images and corresponding contrast curves are shown in Figure 4. High-resolution spectroscopy -Subaru/IRD We obtained high-resolution spectra of TOI-1696 in the NIR with IRD (Tamura et al. 2012;Kotani et al. 2018), mounted on the 8.2 m Subaru telescope. IRD can achieve a spectral resolution of ∼70,000 in the wavelength range 930 nm to 1740 nm. The derived spectra were used for the three purposes: to search for spectral companions (e.g. SB2 scenarios), to measure fundamental stellar parameters (e.g. effective temperature and metallicity), and to rule out large radial velocity (RV) variations that would indicate an eclipsing binary (EB), as well as placing a limit on the mass of the planet. From UT 2021 January 30 to 2022 January 08, we obtained 13 spectra of TOI-1696 using 1800 s exposure times, as part of a Subaru Intensive Program (Proposal IDs S20B- 088I and S21B-118I). The raw data were reduced using IRAF (Tody 1993) as well as a pipeline for the detector's bias processing and wavelength calibrations developed by the IRD instrument team (Kuzuhara et al. 2018;Hirano et al. 2020). For the RV analyses and stellar parameter derivation, we computed a high-SNR coadded spectrum of the target following the procedures described in Hirano et al. (2020). For use as a spectral template in the analysis described in Section 3.2, we also downloaded archival IRD data of GJ 699 (Barnard's Star) 7 , which was obtained on 2019 March 23 (HST). We reduced and calibrated the GJ 699 data following the same procedures as the TOI-1696 data. Medium-resolution spectroscopy -IRTF/SpeX We collected observations of TOI-1696 on UT 2020 December 09 using SpeX, a mediumresolution spectrograph on the NASA Infrared Telescope Facility (IRTF) on Maunakea . We obtained our observations in SXD mode with a 0. 3 × 15 slit, providing a spectral resolution of R ≈ 2000 over a wavelength range 700 nm to 2550 nm. In order to remove sky background and reduce systematics, the spectra were collected using an ABBA nod pattern (with a separation of 7. 5 between the A and B positions) and with the slit synced to the parallactic angle. We reduced our spectra using the Spextool reduction pipeline (Cushing et al. 2004) and removed telluric contamination using xtellcor (Vacca et al. 2003). The derived spectra were used to calculate the stellar metallicity. 
Stellar parameters estimation In the next subsections we estimate the fundamental stellar parameters of TOI-1696. First, the stellar effective temperature T eff and metallicity [Fe/H] are derived from two independent methods; one is from the IRD spectra and the other is from the SpeX spectra and photometric relations. Second, the stellar radius R , mass M , and other related parameters are derived using empirical relations and the above We derived the effective temperature T eff and abundances of individual elements [X/H] from the coadded IRD spectrum. To avoid amplifying noise in the spectrum, we decided not to deconvolve the instrumental profile prior to these analyses. We determined the parameters by the equivalent width comparison of individual absorption lines between the synthetic spectra and the observed ones. For T eff estimation, 47 FeH molecular lines in the Wing-Ford band at 990 − 1020 nm was used as same as in Ishikawa et al. (2022). We also derived the abundance of eight metal elements as described in Section A.1. We iterated the T eff estimation and the abundance analysis alternately until T eff and metallicity were consistent with each other. First, we derived a provisional T eff assuming solar metallicity ([Fe/H] = 0), and then we determined the individual abundances of the eight elements [X/H] using this provisional T eff . Second, we redetermined T eff adopting the iron abundance [Fe/H] as the input metallicity, and then we redetermined the abundances using the new T eff . We iterated the estimation of T eff and [Fe/H] until the final results and the results of the previous step agreed within the error margin. As a result, we derived T eff = 3156 ± 119 K and [Fe/H] = 0.333 ± 0.088 dex. 3.1.2. Estimation of T eff and [Fe/H]: from SpeX spectra and photometric relations Before analyzing our SpeX spectra, we corrected the data to the lab reference frame using tellrv 8 (Newton et al. 2014(Newton et al. , 2022. We then determined metallicity with metal 9 (Mann et al. 2013), using only the K-band part of the spectrum, which is historically the most reliable, although the metallicities from H-and J-band are broadly consistent. We calculated the stellar parameters using a series of photometric relations, following the Section 4.3 of (Dressing et al. 2019). First, we calculated the luminosity of the star using the Gaia EDR3 distance (Stassun & Torres 2021), 2MASS J magnitude, r magnitude (from the Carlsberg Meridian Catalogue; Muiños & Evans 2014), and the metallicitydependent r-J bolometric correction in Table 3 of Mann et al. (2015). Next, we calculated the radius of the star using the relation between R , absolute K magnitude, and [Fe/H] defined in Table 1 of Mann et al. (2015). Lastly, we calculated T eff using the Stefan-Boltzmann law. As a result, we derived T eff = 3207 ± 99 K and [Fe/H] = 0.338 ± 0.083 dex. The strong agreement in T eff and [Fe/H] between the two methods suggests a high degree of reliability of the measurements. For the following analyses, we used the weighted mean of the two respective measurements for T eff and [Fe/H], specifically, T eff = 3185 ± 76 K and [Fe/H] =0.336 ± 0.060 dex. Estimation of stellar radius and mass We estimated other stellar parameters such as stellar mass M , radius R , surface gravity log g, mean density ρ , and luminosity L following the procedure described in Hirano et al. (2021). 
In short, the distributions of the stellar parameters are derived from a Monte Carlo approach using a combination of several empirical relations as well as the observed and literature values. The R value was calculated through the empirical relation from Mann et al. (2015), and M from Mann et al. (2019). In deriving the stellar parameters by Monte Carlo simulations, we adopted Gaussian distributions for T eff and [Fe/H] based on our spectroscopic analyses (see Sections 3.1.1 and 3.1.2), the apparent K sband magnitude from 2MASS, and the parallax from Gaia EDR3 (Stassun & Torres 2021). We assumed zero extinction (A V = 0), considering the proximity of the star to Earth. As a result, we derived R = 0.2775 ± 0.0080 R and M = 0.255 ± 0.0066 M along with the other parameters listed in A.1. By interpolating Table 5 of Pecaut & Mamajek (2013) we determined the spectral type of TOI-1696 to be M4V (M3.9V ± 0.2). To check the robustness of this analysis, we confirmed them to be in good agreement with stellar parameters derived through independent analyses based on SED fitting and isochrones (see Section A. Search for spectroscopic binary stars If a stellar companion orbits the target star, the observed spectra will generally be the combination of two stellar spectra with different radial velocities. To see if TOI-1696 is a spectroscopic binary (i.e. an SB2), we calculated the cross-correlation function (CCF) of the TOI-1696's IRD spectra with that of the well-known single-star GJ 699 (Barnard's Star). The spectrum of TOI-1696 used for the analysis was obtained on UT 2021 January 30 08:53, which corresponds to the an orbital phase of 0.247 based on the TESS ephemeris. For the analysis, we divided the spectra into six wavelength bins that are less affected by . We corrected the telluric absorption signal using the spectra of the rapid-rotator HIP 74625, which was observed at the same night. The CCF to the template spectrum was calculated for each segment, after barycentric velocity correction. Finally, we computed the median of the CCFs from each segment. As shown in Figure 5, the resulting CCF is clearly single-peaked. If the observed transit signals were actually caused by an eclipsing stellar companion, the RV difference at quadrature would be > 100 km s −1 , which would result in a second peak in the CCF given that the flux of such a companion would be detectable. We thus conclude TOI-1696 is not an eclipsing binary. Stellar age Because young stars are active and rapidly rotating, stellar activity and rotation period can be used as proxy for determining its youth. We did not find any stellar rotational signal in the TESS SPOC light curve, suggesting that the star is not very active. Similarly, no strong rotational signal was found in archival photometric data from ZTF Data Release 9 (Bellm et al. 2019;Masci et al. 2019) and ASAS-SN (Kochanek et al. 2017). GJ 699 has a rotation period of 145 days and v sin i of less than 3 km s −1 (Toledo-Padrón et al. 2019), which is below the limit of IRD's resolving power (∼70000, corresponding to ∼ 4.5 km s −1 ). While the CCF of TOI-1696 has a FWHM value consistent with that of GJ 699 (see Figure 5), even if we assume the rotation axis of TOI-1696 is in the plane of the sky, relatively short rotation periods cannot be ruled out, as their rotational broadening would not be resolvable with IRD. However, fast rotation would most likely be accompanied with surface magnetic activity levels that would produce detectable photometric signals. 
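For reference, the velocity resolution implied by the quoted resolving power follows directly from Δv ≈ c/R; a quick, purely illustrative check with the numbers above:

```python
c_km_s = 299_792.458   # speed of light [km/s]
R = 70_000             # IRD resolving power quoted above
print(c_km_s / R)      # ~4.3 km/s, consistent with the ~4.5 km/s limit quoted
```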
We also used banyan Σ (Gagné et al. 2018) to check if TOI-1696 is a member of any known stellar associations, using its proper motion and the parallax from Gaia EDR3. banyan Σ tool 10 returned a value of 99.9% field star, suggesting it is not a member of any nearby young moving group. The non-detection by GALEX also means that the star is not young enough to be bright in the UV. We thus conclude that TOI-1696 is most likely a relatively old, slow rotator. Transit analysis We jointly fit the TESS , KeplerCam, Sinistro, MuSCAT3, and MuSCAT datasets using the PyMC3 ( The model assumes a chromatic transit depth, a linear ephemeris, a circular orbit, and quadratic limb darkening. For efficient and uninformative sampling, the quadratic limb darkening coefficients were transformed 10 http://www.exoplanetes.umontreal.ca/banyan/ 11 https://docs.exoplanet.codes/en/stable/ following Kipping (2013). To account for systematics in the ground-based datasets we included a linear model of airmass and other covariates, such as the pixel response function peak, width, and centroids, when available. To account for stellar variability and residual systematics in the TESS SPOC light curve, we included a Gaussian Process (GP Rasmussen & Williams 2005) model with a Matérn-3/2 covariance function. To account for the possibility of under-or over-estimated uncertainties, we included a white noise scale parameter for each dataset/band, enabling the errors to be estimated simultaneously with other free parameters; we placed Gaussian priors on these white noise scale parameters, with center and width equal to unity. We placed Gaussian priors on the stellar mass and radius based on the results in Table 1. We also placed Gaussian priors on the limb darkening coefficients based on interpolation of the parameters tabulated in Claret et al. (2012) and Claret (2017), propagating the uncertainties in the stellar parameters in Table 1 via Monte Carlo simulation. To optimize the model we used the gradientbased BFGS algorithm (Nocedal & Wright 2006) implemented in scipy.optimize to find initial maximum a posteriori (MAP) parameter estimates. We then used these estimates to initialize an exploration of parameter space via "no U-turn sampling" (NUTS, Hoffman & Gelman 2014), an efficient gradient-based Hamiltonian Monte Carlo (HMC) sampler implemented in PyMC3. Detailed plots showing the model fits to the ground-based datasets are shown in Figure 7, Figure 8, and Figure 9. We did not detect any significant wavelength dependence of the transit depth (see Figure 10), which rules out many plausible false positive scenarios involving eclipsing binaries (see Section 3.6 for more details). The results of this fit are listed in Table 2. Having established the achromaticity of the transit depth, we conducted a second fit with an achromatic model to robustly estimate the planet radius. This fit resulted in a final value of R P /R = 0.1025±0.0014, corresponding to an absolute radius of 3.09±0.11 R ⊕ , and all other parameters were unchanged. Companion mass constraints To put a limit on the mass of TOI-1696.01, we fit an RV model with a circular orbit to the RV Table 2. Results of joint fit to the TESS and ground-based transit datasets. Parameter Value data from Subaru/IRD. Between the H-band and the Y J-band spectra obtained with IRD, we opted to use the H-band spectra for RV analysis because of its higher SNR. 
12 There have been reports of unpredictable systematic errors caused by persistence light on the detector in H-band, especially when bright stars are observed before fainter stars. We checked the objects observed before TOI-1696 and found that none were more than 1.2 mag brighter in the H-band, i.e. persistence light is not likely to be a problem with these data.

Figure 6. The phase-folded TESS light curve after removing the best-fit GP noise model, with the best-fit transit model (blue) from our joint analysis of the TESS and ground-based light curves.

The data observed on 2021 January 29 were excluded because of the possibility of an RV offset, as there was a gap of 8 months relative to the succeeding observations. We also removed any data taken while clouds were passing, which can cause systematic errors. The final dataset consisted of 9 RV measurements from 2021 September 29 to 2022 January 8. We used the RV model included in PyTransit, which we simplified to have five free parameters: phase-zero epoch T_0, period, RV semi-amplitude, RV zero point, and an RV jitter term. For T_0 and the period, we placed Gaussian priors based on the values derived from the transit analysis. For the other parameters we placed wide uniform priors. We ran the built-in Differential Evolution optimizer and then sampled the parameters with Markov Chain Monte Carlo (MCMC) using 30 walkers and 10^4 steps. We use the following equation to derive the planet mass: M_p ≈ K √(1 − e²) (P / 2πG)^{1/3} M_*^{2/3} / sin i, where M_p is the planet mass, M_* is the stellar mass, P is the orbital period, K is the RV semi-amplitude, e is the eccentricity (fixed to zero), and i is the inclination (fixed to 90°). To propagate uncertainties, we use the posteriors for M_* and P from the previous analyses. In Figure 11 we plot Keplerian orbital models corresponding to different masses encompassing the 68th, 95th, and 99.7th percentiles of the semi-amplitude posterior distribution. The 2-σ upper limit is 48.8 M⊕, which places the companion 2 orders of magnitude below the deuterium burning mass limit. The best-fit semi-amplitude is K = 14.4 m s−1, which corresponds to a mass of M_p = 12.3 M⊕, and the best-fit jitter value is σ_K = 62 m s−1. We calculated an expected planetary mass of ∼8 M⊕ with MRExo 13, which uses a mass-radius relationship calibrated for planets around M dwarfs (Kanodia et al. 2019). This mass corresponds to a semi-amplitude of 9.4 m s−1, but the observed RV data exhibit significantly larger variability (σ ≈ 52 m s−1). We interpret this variability as being responsible for the large jitter value found by the fit, which suggests it is out of phase with TOI-1696.01. Since the star appears to be quiet, one explanation for this signal is the existence of an additional (possibly non-transiting) planet, but more RV measurements would be required to determine if this is the case. Furthermore, if such a planet were dynamically interacting with TOI-1696.01, then this could help explain TOI-1696.01's location in a sparsely populated part of the period-radius plane (see Section 4).

Eliminating false positive scenarios

A number of astrophysical scenarios can mimic the transit signal detected from TESS photometry, including an eclipsing binary (EB) with a grazing transit geometry, a hierarchical EB (HEB), and a diluted eclipse of a background (or foreground) EB (BEB) along the line of sight of the target. In the following, we examine the plausibility of each scenario. First, the Renormalised Unit Weight Error (RUWE) from Gaia EDR3 is 1.12, which suggests that TOI-1696 is single (Belokurov et al.
2020). We can also rule out the EB scenario based on the analysis of the IRD CCF in Section 3.2, and the mass constraint derived in 13 https://github.com/shbhuk/mrexo Figure 11. Phase-folded RVs with Keplerian models corresponding to the 1-, 2-, and 3-σ mass upper limits. Gray points with the error bars show the errors estimated from the data processing method described in Section 2.8. The error bars in orange show the original errors + jitter term value of 62 m s −1 (added in quadrature) from the best-fit RV model (orange line, best Mp = 12.3M⊕.) Section 3.1.1. Finally, the absence of any wavelength dependence of the transit depth from our chromatic transit analysis (Section 3.4) is incompatible with contamination from a star of different spectral type (colour) than the host star, the details of which are discussed in the Appendix B. In the absence of dilution, the measured radius of 3.09 ± 0.11 R ⊕ (0.27 R Jup ) equals the true radius, which makes it significantly smaller than the lower limit of 0.8 R Jup expected for brown dwarfs (Burrows et al. 2011). Grazing transit geometries can also be eliminated, as the impact parameter is constrained to b < 0.7 at the 99% level based on our transit and contamination analyses. The apparent boxy shape of our follow-up lightcurves is in stark contrast with the V-shaped transit expected for grazing orbits. Hence, grazing EB scenario is ruled out. Moreover, we can constrain the classes of HEBs that can reproduce the observed transit depth and shape using our multi-band observations. We aim to compute the eclipse depths for a range of plausible HEBs in the bluest and reddest bandpasses where they are expected to vary significantly. We adopt the method presented in Bouma et al. (2020) to perform the calculation taking into account non-zero impact parameter, the details of which are discussed in the Appendix C. Comparing the simulated eclipse depths with the observed depth in each band, we found that there is no plausible HEB configuration explored in our simulation that can reproduce the observed depths in multiple bands simultaneously. Hence, the HEB scenario is ruled out. Although, TOI-1696's probability of being a BEB is very high a priori given its location at the galactic plane, we argue in the following that the BEB scenario is extremely unlikely. Our MuSCAT3 observation can resolve the signal down to 3 , which represents the maximum radius within which the signal must originate. Furthermore, our high-resolution speckle imaging ruled any nearby star and blended sources down to 0.1 at a delta mag of 4.5. We checked archival images taken more than 60 years apart, but the proper motion of TOI-1696 is not enough to obtain a clear view along the line of sight of the star. However, we can use statistical arguments to estimate the probability of a chance-aligned star. To do this, we use the population synthesis code Trilegal 14 (Girardi et al. 2005), which can simulate the Galactic stellar population along any line of sight. Given the position of TOI-1696, we found a probability of 5 × 10 −8 to find a star brighter than T =16 15 , within an area equal to the smallest MuSCAT3 photometric aperture (aperture radius = 3 ). Assuming all such stars are binary and preferentially oriented edge-on to produce eclipses with period and depth consistent with the TESS detection, 14 http://stev.oapd.inaf.it/cgi-bin/trilegal 15 T denotes the TESS bandpass. 
The maximum delta magnitude was computed using dT=-2.5log 10 (depth), which translates to the magnitude that can produce a 100% eclipse then this can represent a very conservative upper limit of a BEB scenario. Despite the small probability of a BEB based on the trivial star counting argument, we discuss relevant tools in the following section for a more thorough statistical modeling. Statistical validation Here we quantify the false positive probability (FPP) of TOI-1696.01 using the Python package Vespa and Triceratops (Morton 2015; Giacalone & Dressing 2020), the details of which are discussed in Section D. Although we were able to rule out the classes of EB, BEB, and HEB in Section 3.6, we ran Vespa considering all these scenarios for completeness and computed a formal FPP< 1 × 10 −6 which robustly quantifies TOI-1696.01 as a statistically validated planet. Additionally, we validated TOI-1696.01 using Triceratops and found FPP=0.0020. Giacalone et al. (2021) noted that TOIs with FPP < 0.015 have a high enough probability of being bona fide planets to be considered validated. The low FPPs calculated using Vespa and Triceratops added further evidence to the planetary nature of TOI-1696.01. We now refer to the planet as TOI-1696 b in the remaining sections. Here, we consider the nature of TOI-1696 b by placing it in context with the population of known exoplanets 16 . Figure 12 shows a radius vs period diagram, indicating that there are only a handful of planets with similar characteristics to TOI-1696 b. The measured planetary radius R p of 3.09±0.11 R ⊕ and the orbital period P of 2.50031 ± 0.00001 days, places it securely within the bounds of the Neptunian desert as defined by Mazeh et al. (2016). The region occupied by TOI-1696 remains sparsely populated despite recent discoveries of TESS planets within the Neptunian desert (e.g. Murgas et al. 2021;Brande et al. 2022). It should be noted that the Neptunian desert was originally determined based on a population of planets orbiting mainly solar-type stars from the Kepler mission. Because TOI-1696 is an M dwarf, the incident flux at a given orbital separation will be less than solar-type stars. Nevertheless, we emphasize that the target exists in a sparsely populated region of parameter space, despite the large number of planets discovered around M dwarfs since the Kepler mission (i.e. from K2 and TESS ). For example, if we limit the comparison to the 279 confirmed planets around M dwarfs with T eff below 3800 K, only 14 planets have been found so far with orbital periods shorter than 10 days and planetary radii in the range 2.5R ⊕ <R p < 5 R ⊕ . As shown in Figure 12 (Cointepas et al. 2021), and TOI-2406 b (Wells et al. 2021) in terms of orbital period and radius. In particular, TOI-2406 b appears most similar to TOI-1696 b as it orbits around a mid-M dwarf with an effective temperature of 3100 ± 75, and has a radius of 2.94 ± 0.17 R ⊕ and orbital period of 3.077 days. TOI-2406 is also thought to be relatively old without any activity signal. As both TOI-1696 b and TOI-2406 b are excellent targets for detailed characterization studies, together they may provide unique insights into this class of planet. There is also some similarity between TOI-1696 b and the Neptunian Desert planets orbiting young host stars, such as AU Mic b and c, K2-25 b, K2-95 b and K2-264 b. It has been suggested that these planets may have inflated radii and could possibly still be undergoing atmospheric mass-loss (e.g. Mann et al. 2016). 
Further study of TOI-1696 b could reveal whether its similarity to these planets (despite being older) is only superficial, or if it is indicative of an inflated radius.

Prospects for transmission spectroscopy

Given the rarity of this planet, it is useful to assess its prospects for future atmospheric observations to understand its formation and evolution. In particular, the relatively large size of the planet compared to its host star makes it a good candidate for transmission spectroscopy. Using Equation 1 in Kempton et al. (2018), we calculated the transmission spectroscopy metric (TSM) of TOI-1696 b from its mass, radius, equilibrium temperature, stellar radius, and J-band magnitude. We used the values in Tables 1 and 2, and assumed a mass of 8 M⊕ estimated by MRExo. The derived TSM value of TOI-1696 b is 105.6. For reference, Kempton et al. (2018) suggested that planets with TSM > 90 are ideal targets for atmospheric follow-up. For comparison, we calculated the TSM for the known population of transiting M dwarf planets. We selected planets with T_eff < 3800 K, R_p < 10 R⊕, and H < 11 mag 17. For planets without mass measurements, we assumed the masses predicted by MRExo. For planets without an equilibrium temperature, we estimated it from the semi-major axis and the host star's effective temperature (assuming zero albedo). Figure 13 shows the computed TSM values for the selected sample of planets. The TSM of TOI-1696 b places it in the top 10, making it one of the best targets for future atmospheric investigations.
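For reference, the TSM of Kempton et al. (2018) can be evaluated directly from the quantities quoted above. The sketch below uses the rounded values from the text (including the assumed 8 M⊕ mass from MRExo) and the scale factor of 1.28 appropriate for planets with 2.75 R⊕ < R_p < 4 R⊕; with these rounded inputs it reproduces the quoted value to within a few percent. The function name is hypothetical.

```python
def tsm(r_p, m_p, t_eq, r_star, j_mag, scale=1.28):
    """Transmission spectroscopy metric (Kempton et al. 2018, Eq. 1).

    r_p in Earth radii, m_p in Earth masses, t_eq in K, r_star in solar
    radii; scale=1.28 applies to planets with 2.75 < Rp < 4 Earth radii.
    """
    return scale * r_p**3 * t_eq / (m_p * r_star**2) * 10 ** (-j_mag / 5)

# Rounded values quoted in the text for TOI-1696 b:
print(tsm(r_p=3.09, m_p=8.0, t_eq=489, r_star=0.2775, j_mag=12.2))  # ~108
```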
Existence of a primordial atmosphere

Up to this point in the section, the discussion has been based on the assumption that the target has an atmosphere. Usually it is thought that planets above the so-called radius gap can retain their atmospheres (Weiss & Marcy 2014; Rogers 2015). However, does TOI-1696 b actually have an H2/He atmosphere? Here we study the atmospheric mass that TOI-1696 b can retain after ∼8 Gyr under stellar XUV irradiation. The mass of TOI-1696 b remains poorly constrained, as discussed in Section 3.5. We modeled TOI-1696 b as a rocky planet with an Earth-like core composition (MgSiO3:Fe = 7:3) in the mass range from 0.5 M⊕ to 20 M⊕. The silicate mantle and iron core were described by the third-order Birch-Murnaghan EoS for MgSiO3 perovskite (Karki et al. 2000; Seager et al. 2007) and the Vinet EoS for ε-Fe (Anderson et al. 2001), respectively. The Thomas-Fermi-Dirac EoS (Salpeter & Zapolsky 1967) was applied as the high-pressure EoS for MgSiO3 at P ≥ 4.90 TPa and for Fe at P ≥ 2.09 × 10^4 GPa (Seager et al. 2007; Zeng & Sasselov 2013). The pressure and temperature in the H2/He envelope were calculated using the SCvH EoS (Saumon et al. 1995). We computed the thermal evolution of TOI-1696 b with an H2/He atmosphere by calculating its interior structure in hydrostatic equilibrium for ∼8 Gyr, together with its mass loss. The initial mass fraction of the H2/He atmosphere on the rocky planet ranges from 0.001% to 30% of its core mass. Energy-limited hydrodynamic escape (Watson et al. 1981) controls the mass loss rate, given by Ṁ = η L_XUV R_p^3 / (4 G M_p a^2 K_tide), where η is the heating efficiency due to stellar XUV irradiation, L_XUV is the stellar XUV luminosity, G is the gravitational constant, M_p is the planetary mass, a is the orbital distance, and R_p is the planetary radius (Erkaev et al. 2007). Since the heating efficiency for a hydrogen-rich upper atmosphere is lower than 20% (Shematovich et al. 2014; Ionov & Shematovich 2015), we adopted η = 0.1. K_tide is the reduction factor of the gravitational potential owing to the effect of the stellar tide, K_tide = 1 − 3/(2ξ) + 1/(2ξ^3) with ξ = R_H/R_p, where R_H is the Hill radius. The XUV luminosity (L_XUV) of TOI-1696 follows from the X-ray-to-bolometric luminosity relations of M-type stars (Jackson et al. 2012), where we adopted the current luminosity of TOI-1696 as its bolometric luminosity. We also considered a 10 L_XUV model because of the large uncertainty in the L_XUV of young M dwarfs.

Figure 14. Initial H2/He atmospheric mass fraction of a TOI-1696 b-like planet that satisfies the radius of 3.09 ± 0.11 R⊕ and T_eq = 489 ± 13 K after photo-evaporative mass loss for 8 Gyr under the standard XUV radiation field (L_XUV) and 10 L_XUV. The grey region shows the H2/He atmospheric mass fraction that reproduces the observed radius of TOI-1696 b with a rocky core.

Figure 14 shows the initial H2/He atmosphere of a TOI-1696 b-like planet that reproduces the radius of 3.09 ± 0.11 R⊕ at the current location (i.e., T_eq = 489 ± 13 K) after the mass loss driven by the standard XUV radiation (L_XUV: blue) and the 10 times higher one (10 L_XUV: red). The grey region shows the H2/He atmospheric mass fraction of TOI-1696 b with a rocky core that satisfies its observed radius. The observed radius of TOI-1696 b favors the existence of an H2/He atmosphere of ∼3 wt% unless its core contains icy material. We find that TOI-1696 b can retain such an H2/He atmosphere for 8 Gyr if its core mass is larger than ∼1.5 M⊕ (∼4 M⊕ for the 10 L_XUV models); for core masses below these limits, an initial H2/He atmosphere of 3% would be completely lost. Also, a TOI-1696 b with a mass of 10 M⊕ can retain almost all of the H2/He atmosphere accreted from the disk. These results suggest that TOI-1696 b with a rocky core of 1.5-4 M⊕ is likely to be a sub-Neptune with an H2/He atmosphere.

CONCLUSIONS

TESS found transit signals of a sub-Neptune planet orbiting the mid-M dwarf TOI-1696. To validate and characterize the planetary system, we conducted follow-up observations of this system including ground-based transit photometry, high-resolution imaging, and high- and medium-resolution spectroscopy. We used several methods to determine the stellar parameters based on the spectroscopic observations, and confirmed that the results are consistent. The host star, TOI-1696, is an M-type star with M_* = 0.255 ± 0.0066 M_⊙ and T_eff = 3185 ± 76 K. The fact that this target is located near the Galactic plane makes validation difficult; we used the results obtained to rule out various scenarios that could reproduce the TESS signal (grazing EB, HEB, and BEB). The validated planet, TOI-1696 b, is a sub-Neptune-sized planet with a radius of 3.09 R⊕ and an orbital period of 2.5 days, which places it in the Neptunian desert. To assess its atmospheric properties, we calculated how much atmosphere it can currently retain, and found that the planet is likely to retain an H2/He atmosphere if it has a core of ≳1.5-4 M⊕. To statistically evaluate the feasibility of transmission spectroscopy of this planet, we also calculated and compared the TSM, and concluded that this target is one of the planets with the best prospects for atmospheric detection among the currently known sub-Neptune-sized planets. In addition, future RV observations with high-resolution infrared spectrographs such as IRD will allow us to place more substantial limits on the planetary mass.

ACKNOWLEDGEMENTS

Funding for the TESS mission is provided by NASA's Science Mission Directorate.
We acknowledge the use of public TESS data from pipelines at the TESS Science Office and at the TESS Science Processing Operations Center. This research has made use of the Exoplanet Follow-up Observation Program website, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center for the production of the SPOC data products. This paper includes data collected by the TESS mission that are publicly available from the Mikulski Archive for Space Telescopes (MAST). This work makes use of observations from the Las Cumbres Observatory global telescope network. Some of the observations in the paper is based on observations made with the MuS-CAT3 instrument, developed by Astrobiology Center and under financial supports by JSPS KAKENHI (JP18H05439) and JST PRESTO (JPMJPR1775), at Faulkes Telescope North on Maui, HI, operated by the Las Cumbres Observatory. This research is in part on data collected at the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan, and at the Gemini North telescope, located within the Maunakea Science Reserve and adjacent to the summit of Maunakea. We are honored and grateful for the opportunity of observing the Universe from Maunakea, which has cultural, historical, and natural significance in Hawaii. Our data reductions benefited from PyRAF and PyFITS that are the products of the Space Telescope Science Institute, which is operated by AURA for NASA. This research made use of Astropy, 18 a community-developed core Python package for Astronomy (Astropy Collaboration et al. 2013, 2018. Some of the observations in the paper made use of the High-Resolution Imaging instrument(s) 'Alopeke. 'Alopeke was funded by the NASA Exoplanet Exploration Program and built at the NASA Ames Research Center by Steve B. Howell, Nic Scott, Elliott P. Horch, and Emmett Quigley. 'Alopeke (and/or Zorro) was mounted on the Gemini North (and/or South) telescope of the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. on behalf of the We calculated the abundances of seven other elements other than iron from IRD spectra. We used 28 lines in total caused by neutral atoms of Na, Mg, Ca, Ti, Cr, Mn, and Fe and singly ionized Sr. The detailed procedures of abundance analysis and error estimation are described in Ishikawa et al. (2020). As an independent determination of the basic stellar parameters, we performed an analysis of the broadband spectral energy distribution (SED) of the star together with theGaia EDR3 parallax (Stassun & Torres 2021), in order to determine an empirical measurement of the stellar radius, following the procedures described in Stassun & Torres (2016); Stassun et al. (2017Stassun et al. ( , 2018. We pulled the JHK S magnitudes from 2MASS, the W1-W3 magnitudes from WISE, and the grizy magnitudes from Pan-STARRS. Together, the available photometry spans the full stellar SED over the wavelength range 0.4-10 µm (see Figure A.16). We performed a fit using NExtGen stellar atmosphere models, with the effective temperature (T eff ) and metallicity ([Fe/H]) constrained from the spectroscopic analysis. 
The remaining free parameter is the extinction A V , which we fixed at zero due to the star's proximity. The resulting fit ( Figure A.16) has a reduced χ 2 of 1.7. Integrating the (unreddened) model SED gives the bolometric flux at Earth, F bol = 5.20 ± 0.25 × 10 −11 erg s −1 cm −2 . Taking the F bol and T eff together with the Gaia parallax, gives the stellar radius, R = 0.276±0.015 R . We used the T eff and [Fe/H] values from spectroscopic results as priors for the parameter estimation. In addition, we estimated the stellar mass from the empirical relations of Mann et al. (2019), giving M = 0.279±0.014 M . Finally, the radius and mass together imply a mean stellar density of ρ = 18.79 ± 3.26 g cm −3 . A.3. Stellar parameter comparison In addition to the methods described above, we used the Python package isochrones, which calculates stellar parameters from the stellar evolution models. The three methods are not fully independent, as some of them use the same relations such as mass derivation from Mann et al. (2019), but comparing three results are useful to confirm the results are robust. The derived stellar parameters agreed within 1 ∼ 2σ, as shown in A.1. We pick up the results from the empirical relations as our final stellar parameters in Table 1. B. CONTAMINATION ANALYSIS Contamination leads to a decrease in the observed transit depth (the planet appears to be smaller than it truly is), and this effect is achromatic even if the host and the contaminant(s) are of different spectral types. Having simultaneous multicolor photometry allows us to measure possible contamination and consequently provides strong constraints on the false positive scenarios discussed in Section 3.6. Following the methods presented in Parviainen et al. (2020Parviainen et al. ( , 2021, we used the physicsbased contamination model included in Py-Transit v21 to model the light curves using a transit model that includes a light contamination component based on model stellar spectra leveraging multicolor photometry. Fitting the transit+contamination model to MuS-CAT3 lightcurves allows us to measure the contamination in i-band 19 , the effective temperature of the host (T eff,H ), and the effective temperature of the contaminant (T eff,C ). We used normal priors for the period and T 0 based on the results of our transit analysis. We also used normal priors on limb darkening, host effective temperature, and host star density, based on our spectroscopic analysis. Among them, the spectroscopic priors are the most important. Without a limb darkening prior, the transit fit in g-band is boxy perhaps due to the sparse data sampling. Without the T eff,H prior, the posteriors are not well behaved. Without the host ρ prior, the model converges to very high values (∼33g cm −3 ) which is inconsistent with the results from our previous analyses. The joint and marginal posteriors of the relevant parameters are shown in Figure B.1. Significant levels of blending from sources with effective temperature different from that of the host star are excluded, and also the blending 19 We adopt i as reference passband for simplicity from sources with T eff,C ∼ T eff,H are strongly constrained. C. HEB SIMULATION We assumed that each system was composed of the primary star (TOI-1696, Star 1), plus a tertiary companion (Star 3) eclipsing a secondary companion (Star 2) every 2.5 d. 
For a grid of secondary and tertiary star masses ranging from 0.1 to 0.4M , we then calculated the observed maximum eclipse depth caused by Star 3 eclipsing Star 2 in MuSCAT3 g-and z-bands using the following procedure. First, we interpolated L and T eff of Star 2 and Star 3 from MIST isochrones given their masses, and the age, metallicity, and mass of Star 1 in Table 1. We then computed the blackbody function of each star given their T eff then convolved it with the transmission functions for each band downloaded from the SVO filter profile service 20 . We then integrated the result using the trapezoidal method and computed the bolometric flux F bol , using the integrated functions above. Using Stefan-Boltzmann law and given T eff and L , we computed the component radii and luminosities to derive the eclipse depth. Figure C.2 shows the HEB configurations that produce eclipse depths in g-(blue) and z-bands (red) that are consistent with the observed depth for two given impact parameters. The lower impact parameter corresponds to the 3-σ lower limit derived from our contamina- tion analysis while the other impact parameter corresponds to the median value derived in our transit analysis. We confirm that indeed eclipses of an HEB are always deeper in the red than in the blue bands (i.e higher m 2 /m 1 in z-than g-band) since the eclipsing companions are usually redder than the central star. The important point here is that the HEB configurations that produce eclipses consistent with our observation do not overlap within 1-σ in g-and z-bands for any reasonable impact parameters. Note also that our contamination analysis constrained possible contaminants to have the same colour as the host star, so only masses very close to TOI-1696 (vertical dashed line in Figure C.2) are allowed. Thus, we can rule out the HEB false positive scenario. D. VALIDATION WITH Vespa AND Triceratops Vespa 21 was originally developed as a tool for statistical validation of planet candidates identified by the Kepler mission (e.g. Morton et al. 2016), but has also been used extensively to validate planets from subsequent missions, such as K2 (e.g. Livingston et al. 2018;de Leon et al. 2021). Vespa compares the likelihood of a planetary scenario to the likelihoods of several astrophysical false positive scenarios involving eclipsing binaries (EBs), hierarchical triple systems (HEBs), background eclipsing binaries (BEBs), and the double-period cases of all these scenarios. The likelihoods and priors for each scenario are based on the shape of the transit signal, the star's location in the Galaxy, and single-, binary-, and triple-star model fits to the observed photometric and spectroscopic properties of the star generated 21 https://github.com/timothydmorton/VESPA using isochrones. We used the MuSCAT3 lightcurve because of its high SNR and low levels of limb darkening, which provides the best constraint on the transit shape. We also used the Gemini and Palomar contrast curves described in Section 2.6, a maximum aperture radius of maxrad =3 (interior to which the transit signal must be produced), and ran the simulation using a population size of n=10 6 , resulting to a formal FPP< 1 × 10 −6 . We also used Triceratops 22 which is a tool developed to validate TOIs (Giacalone & Dressing 2020;Giacalone et al. 
2021) by calculating the Bayesian probabilities of the observed transit originating from several scenarios involving the target star, nearby resolved stars, and hypothetical unresolved stars in the immediate vicinity of the target. These probabilities were then compared to calculate a false positive probability (FPP; the total probability of the transit originating from something other than a planet around the target star) and a nearby false positive probability (NFPP; the total probability of the transit originating from a nearby resolved star). Given our follow-up photometry rules out nearby stars as a potential source of the transit signal, we eliminate all stars except the target in the Triceratops analysis. As an additional constraint, we use the contrast curve from our follow-up speckle imaging as a direct input in Triceratops. For the sake of reliability, we performed the calculation 20 times for the planet candidate and found FPP=0.0020. The low FPPs calculated using Vespa and Triceratops are small enough to statistically validate TOI-1696.01 as a planet. HEB mass configurations which produce eclipse depths in g-band (blue) and z-band (red) consistent with the observed depths (indicated in the upper left corner of the first panel). The left panel corresponds to the lower limit of the impact parameter and the right for the median value. The colored solid line and dashed lines correspond to confidence regions that are consistent with the observed depths within 1-and 2-σ, respectively. The vertical black line corresponds to the mass of the central star (i.e. TOI-1696). The fact that the red and blue regions do not overlap within 1-σ taking into account impact parameter rules out the HEB false positive scenario.
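As a concrete illustration of the Appendix C procedure, the following minimal Python sketch estimates the diluted depth of Star 3 eclipsing Star 2 in two bands. The filter curves are simple top-hats and the component luminosities and temperatures are assumed placeholder values (the paper interpolates them from MIST isochrones and uses SVO filter profiles), so the numbers are purely illustrative:

```python
import numpy as np

# Minimal, illustrative sketch of the Appendix C eclipse-depth estimate for a
# hierarchical eclipsing binary (HEB): Star 3 eclipses Star 2, and the depth seen
# by the photometer is diluted by the light of Star 1 (TOI-1696). The filter
# curves, component luminosities/temperatures, and helper names are assumptions.

H = 6.62607015e-27      # Planck constant [erg s]
C = 2.99792458e10       # speed of light [cm s^-1]
KB = 1.380649e-16       # Boltzmann constant [erg K^-1]
SIGMA = 5.670374419e-5  # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
LSUN = 3.828e33         # solar luminosity [erg s^-1]

def planck_lambda(wl, teff):
    """Blackbody spectral radiance B_lambda(T); wavelengths in cm."""
    return (2.0 * H * C**2 / wl**5) / np.expm1(H * C / (wl * KB * teff))

def radius_from_lum(lum_lsun, teff):
    """Stefan-Boltzmann radius [cm] from luminosity and effective temperature."""
    return np.sqrt(lum_lsun * LSUN / (4.0 * np.pi * SIGMA * teff**4))

def band_flux(teff, radius, wl, throughput):
    """Band-integrated flux, proportional to surface flux times stellar area."""
    return radius**2 * np.trapz(planck_lambda(wl, teff) * throughput, wl)

def heb_depth(stars, wl, throughput):
    """Maximum depth of Star 3 centrally eclipsing Star 2, diluted by Star 1.
    `stars` is a list of (luminosity [Lsun], Teff [K]) for Stars 1, 2 and 3."""
    radii = [radius_from_lum(lum, teff) for lum, teff in stars]
    fluxes = [band_flux(teff, r, wl, throughput) for (_, teff), r in zip(stars, radii)]
    covered = min((radii[2] / radii[1]) ** 2, 1.0)  # fraction of Star 2's disk blocked
    return covered * fluxes[1] / sum(fluxes)

# Illustrative top-hat g- and z-band throughputs and made-up component properties
wl = np.linspace(350e-7, 1050e-7, 4000)                          # 350-1050 nm in cm
g_band = ((wl > 400e-7) & (wl < 550e-7)).astype(float)
z_band = ((wl > 820e-7) & (wl < 920e-7)).astype(float)
stars = [(6.8e-3, 3185.0), (1.5e-3, 3000.0), (8.0e-4, 2900.0)]   # (L/Lsun, Teff)
print(heb_depth(stars, wl, g_band), heb_depth(stars, wl, z_band))
```

Repeating this over a grid of Star 2 and Star 3 masses (mapping mass to luminosity and temperature with isochrones) reproduces the kind of g- versus z-band depth comparison shown in Figure C.2.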
2022-03-08T06:47:31.404Z
2022-03-05T00:00:00.000
{ "year": 2022, "sha1": "b4e2ae813cbf2a7806d7f17ebf85c1840706d48c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "de7cbe28e4e29afbb236ac8487a987e34b27eda7", "s2fieldsofstudy": [ "Physics", "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
218495494
pes2o/s2orc
v3-fos-license
Distinct and combined responses to environmental geometry and features in a working-memory reorientation task in rats and chicks The original provocative formulation of the ‘geometric module’ hypothesis was based on a working-memory task in rats which suggested that spontaneous reorientation behavior is based solely on the environmental geometry and is impervious to featural cues. Here, we retested that claim by returning to a spontaneous navigation task with rats and domestic chicks, using a single prominent featural cue (a striped wall) within a rectangular arena. Experiments 1 and 2 tested the influence of geometry and features separately. In Experiment 1, we found that both rats and chicks used environmental geometry to compute locations in a plain rectangular arena. In Experiment 2, while chicks failed to spontaneously use a striped wall in a square arena, rats showed a modest influence of the featural cue as a local marker to the goal. The critical third experiment tested the striped wall inside the rectangular arena. We found that although chicks solely relied on geometry, rats navigated based on both environmental geometry and the featural cue. While our findings with rats are contrary to classic claims of an impervious geometric module, they are consistent with the hypothesis that navigation by boundaries and features may involve distinct underlying cognitive computations. We conclude by discussing the similarities and differences in feature-use across tasks and species. features can interact with the progression of geometry learning. However, successful use of featural cues in these paradigms has been argued to reflect the engagement of associative processes that do not recruit the specialized cognitive computations engaged in the use of geometric cues following disorientation. That is, reference memory tasks such as those discussed above recruit general learning systems parallel to the specialized computations employed in the use of boundary geometry. Reference memory tasks, therefore, may mask detection of the modular nature of boundary-based spatial mapping by recruitment of associative processes. For such reasons, some researchers emphasize the importance of using a working memory task in observing distinctions between geometry and feature use 25,26 . It is important to note, however, that even in working memory or spontaneous tasks, researchers have found varying degrees of feature use depending on the task (e.g., aversive vs. appetitive tasks in rats 27 ; verbal cuing in children 28 ) or environment (e.g., large vs. small environments 29,30 , circular/octagonal environments 10,31 ). Because alternative explanations for the observation of interactions between geometry and features are available, it seems that evidence for or against independent computation of geometric and featural cues rests not on whether a cue influences behavior, but on how it influences behavior 25 . Such differences in function may additionally lead to hypotheses about both the cognitive computations and neural representations underlying spatial navigation. A recent study with fish provides a clear demonstration of the importance of both the task and the available environmental cues in disoriented spatial behavior 32,33 . The two types of tasks (working memory and reference memory) were tested across three conditions: geometry-only, features-only, and a combination of geometry and features. 
As in previous studies, environmental geometry immediately guided navigation in both tasks, and features were reliably learned in the reference memory task (although they learned local features significantly faster than the distal ones). In contrast, in the working memory type task, visual landmarks acquired perceptive salience and attracted the fish but without serving as a spatial landmark when they were located far from the target location. Interestingly, however, when provided simultaneously with environmental geometry, the featural cues -whether those cues were distinctive corner panels or a single uniquely-colored wall 33 -led the fish to limit their choices to the correct corner significantly more often, even in the absence of reinforced training. Similarly, behavioral studies with mice have independently tested the use of geometry and landmarks in both a working memory task and a reference memory task 34 . While mice reoriented by geometry from the very start in both tasks, a featural cue (i.e. striped wall) was successfully used only as a local marker (but without any sense of left versus right) in the working-memory task. With reinforcement in the reference-memory task, however, mice became increasingly accurate in identifying the one target corner. Interestingly, genetically modified mouse models of neurological disorders showed selective learning of features and geometry, depending on their hippocampal dysfunction or attentional deficit 35 . In those studies, however, there was no simultaneous testing of geometry with a featural cue, making it unclear whether the presence of a geometric structure enhances feature-use in rodents the way it did in fish, particularly in the spontaneous "working-memory" task. The present study tests the generalizability of such findings across different species of animals through a working-memory test of rats and domestic chicks, presenting geometry and features in isolation (Experiments 1 and 2) and in conjunction (Experiment 3). The task involved observing disoriented animals' corner preferences (in the absence of reward) after having previously found food in one corner of an arena. Experiment 1 examined the use of rectangular environmental geometry using a uniformly black arena; Experiment 2 examined feature-use in a square arena with a single striped wall; Experiment 3 tested the simultaneous use of geometry and features using a rectangular arena with a single striped wall. Given the potential effects of arena size, we kept our rectangular arena's dimensions the same as the original Cheng study 2 . However, instead of corner panels, we chose to implement a single striped wall, in accord with the above studies on mice and based on single landmark control over head direction cell responses 36,37 , using a spatial frequency based on considerations of rodent visual acuity 38,39 , which is lower than that of chicks 40 . Different subjects were used for each experiment. These arenas were set in well-controlled testing rooms designed to offer no cues to orientation other than those in the arenas themselves. www.nature.com/scientificreports www.nature.com/scientificreports/ indicates an effective disorientation procedure and provides us with an internal control that ensures the absence of cues that might make the correct corner uniquely identifiable. Experiment 2: Feature. Experiment 2 tested reorientation by a feature in a square arena with a single striped wall (see Fig. 2a). 
If animals distinguished all four corners with respect to the feature (i.e., relative directions or "sense"), there should be a preference for the correct corner over all others. A more limited use of the feature would be if animals used the striped wall as a direct local feature, without using it to compute sense relations. In this case, for instance, we should detect a preference for the two striped corners when the goal was at a striped corner. Nine rats (mean = 5 trials) and eight chicks (mean = 5 trials) were observed. Rats. The proportion of time spent at the correct corner was not significantly different from a chance value of 0.25 (t(8) = 1.64, p = 0.14, Cohen's d = 0.55). This indicates a failure to compute 'sense' (i.e., left/right-ness) from the featural cue. Nevertheless, there was a clear statistical preference for featurally correct over featurally incorrect corners (t(9) = 3.27, p = 0.01, Cohen's d = 1.09, see Fig. 2b). Interestingly, however, further inspection showed that this effect was driven by trials in which the goal was near the striped wall: when the goal was at a striped corner, rats spent significantly more time at the two striped corners (72%: t(8) = 4.21, p = 0.003, Cohen's d = 1.40) but when the goal was at an all-black corner, rats did not exhibit a preference for the black corners (46%: t < 1, p > 0.25, Cohen's d = 0.16). In both cases, rats did not distinguish the correct corner from the symmetric corner (t's < 1, p's> 0.25), indicating a complete failure to use the striped wall to compute directional relationship between locations. In summary, Experiment 2's results indicated a clear, if relatively modest, influence of the featural cue; the rats could not use it to extract 'sense' but leveraged a 'goal proximity' benefit from it, consistent with use of the feature as a beacon. Chicks. The proportion of time spent at the correct corner was not statistically different from chance of 0.25 (t < 1, p > 0.25, Cohen's d = 0.27); whether the corner was striped or black did not make a significant difference in performance (t(7) = 1.03, p = 0.34, Cohen's d = 0.39, Fig. 2). Moreover, chicks did not respond to the striped feature as a cue to distinguish between the two striped corners and the two black corners, whether the goal was at a striped or all-black corner (all t's <1, p's> 0. 25). Although this null effect should not be over-interpreted, the fact that chicks were not significantly influence by the featural cue cannot be attributed to an inability to perceive Experiment 3: Geometry and Feature. Experiment 1 demonstrated that both chicks and rats spontaneously used boundary geometry to compute spatial locations. Experiment 2 showed that although rats did not compute relative positions with respect to the feature, they used it as a local feature to guide search. In contrast, the chicks showed no sign of feature-use with respect to the striped wall and explored the corners at random. If both geometry and features are present simultaneously, will one of those cues dominate over the other? Or, will both of them influence behavior? Experiment 3 tested the combined use of environmental geometry and a feature when both were present, in a rectangular arena with one striped wall (see Fig. 3a). Nine rats (mean = 5.7 trials) and eight chicks (mean = 5.8 trials) were observed. Rats. 
As in Experiment 1, the combined proportion of time spent at the two geometrically correct corners was significantly higher than a chance level of 0.5, (t(8) = 3.05, p = 0.02, Cohen's d = 1.02, see Fig. 3b). However here, unlike Experiment 1, rats tended to prefer the correct corner over the geometrically identical diagonal corner (t(8) = 2.27, p = 0.05, Cohen's d = 0.76). There was no significant difference in accuracy between goals at the striped and all-black corners (t(8) = 1.39, p = 0.20). In summary, contrary to some interpretations of the original geometric module hypothesis, search time was immediately guided by the use of both geometric and featural information. Interestingly, use of both sets of information occurred at the level of individual rats: e.g., the majority of the nine rats showed a pattern whereby search at the correct corner was at least 12% higher than at the rotationally equivalent corner and at least 9% higher than at the featurally similar corner (fifth-ranked rat scored: 36% at correct corner, 24% at rotationally equivalent corner, 27% at featurally similar corner, 13% at error corner). In fact, the group average of 17% at the error corner was significantly lower than chance (t(8) = −3.11, p = 0.015) Chicks. Similarly to the rats, the combined proportion of time spent at the correct corner and its geometric equivalent was significantly higher than a chance level of 0.5 (t(7) = 3.19, p = 0.02, Cohen's d = 1.13, Fig. 3); but, in contrast to the rats, there was no discrimination between the correct corner and its geometric equivalent (t < 1, p > 0.25). The proportion of time spent at the correct corner was not significantly different from chance (t < 1, p > 0.25, Cohen's d = 0.27)). Figure 2. Results of Experiment 2. (a) Corner preferences by rats and chicks, as measured by the proportion of time spent in each corner. The correct corner is denoted with a star. Because the target corner was varied across trials, the data have been rotated prior to averaging and are displayed in this rotated form. Rats preferred the correct and featurally symmetric corners (i.e., correctly matching the presence/absence of the striped wall with the target) over the other two, while the chicks did not use the striped wall to guide their behavior. (b) The rats' use of the striped feature as a cue (the proportion of time spent in the correct and featurally symmetric corners) was limited to the trials in which the goal was near the stripes (rather than the all-black side of the arena). The asterisk denotes a significant t-test against a 0.5 chance level with p < 0.05. Chicks did not use the striped feature, even when it served as a local cue to location. (2020) 10:7508 | https://doi.org/10.1038/s41598-020-64366-w www.nature.com/scientificreports www.nature.com/scientificreports/ In summary, rats were able to use both environmental geometry and a feature to guide search at the correct goal, while chicks reoriented only on the basis of environmental geometry, despite their tendency to rely on visual features when trained to do so 9 (see Exp. 4, Supplementary Materials). Discussion The present experiments shed new light on the decades-old debate over the geometric module by accomplishing the following: First, Experiment 1 replicated the main finding that environmental geometry guides spatial navigation in both mammals and birds in the absence of path integration (or positional tracking). 
Rats and chicks spontaneously used geometry in a working-memory task with varied goal locations and did not need repeated training at one rewarded location to compute spatial relationships among the arena boundaries. This is consistent with the wealth of existing studies demonstrating that spatial mapping relies on a neurocognitive representation of boundary geometry 25,26,[42][43][44][45] . Second, Experiments 2 and 3 show, for the first time, that rats can use a featural cue in spatial reorientation. Although chicks did not seem to do so in the present experiment, rats successfully leveraged a featural cue to guide their search in a working-memory task, which, importantly like Cheng's, tapped 'spontaneous' behavior without requiring extensive training. In Experiment 3, they jointly (in one trial) used two sets of information to guide their behavior, one based on environmental geometry, the other based on the featural cue. These findings clearly weaken the empirical basis underlying a 'strong' version of modularity that is deterministic at the level of behavior (i.e. that the output of a geometric module assumed control over behavior without the influence of other processes such as landmark-use). In the critical Experiment 3, the effect size of greater search in the correct than rotationally equivalent corner is Cohen's d = 0.76, which is far from negligible. In other words, disoriented animals are not limited only to environmental geometry in guiding initial search under disorientation. Overall, comparison of effect sizes in our findings is consistent with a greater influence of environmental geometry over features in reorientation, but it is clear that rats can also take features into consideration when performing such tasks. Nevertheless, the way in which the feature was used in this working-memory task was limited to a local-marker of the goal (apparent in their preference of locations based on the presence of the striped cue), unlike the relative spatial relationships that were computed with respect to the environmental geometry. This is in line with past findings of disoriented spatial behavior in mice, fish, as well as human children, using features and geometry in isolation 10,25,32-35 . The rats use the striped wall to discriminate between the correct corner and the rotationally symmetric corner, while the chicks are only guided by boundary geometry. Asterisks denote significant t-tests of geometry (correct + rotationally symmetric corners) against a 0.5 chance level with p < 0.05. Star denotes the paired t-test between the correct and rotationally symmetric corners, with p = 0.05. The results taken together describe an underlying spatial representation of environmental geometry which, in the absence of repeated reinforcement, operates alongside a separate feature-detector with an associative bias (direct-marking) 25,33,42 . This suggests that while the computations involved in the use of geometry and features are different, the output of a single behavioral choice (i.e., in Exp. 3) may involve an adaptive weighting of cues according to properties such as salience and experienced validity 4,46 . However, the patterns of behavior we observed here (and in the studies mentioned above) directly contradict a global image-matching strategy 38 , which would allow animals to distinguish between the two striped corners. What could explain the rats' use of the feature as a local cue, given the past findings of failure and the failure on the part of the chicks? 
First, it may be important to point out that there were several differences between the original methods from Cheng's 2 study and ours. First, we used a different strain of rats (Lister Hooded) from those tested in Cheng's study (Sprague Dawley), although both studies used male rats. Lister Hooded rats have better vision than Sprague Dawleys, but not all the featural cues in Cheng's study were visual. More importantly, when designing our apparatus, we chose to keep the same size apparatus as Cheng's but to simplify the featural cue, based upon knowledge of cues used to orient spatial cells 36,37 . Instead of having multiple visual cues and odors, which might introduce issues related to multiple cue discrimination and recognition, we chose a simple, prominent striped pattern with clearly visible contrast edges, at a spatial frequency that rats were sure to perceive, perhaps even more easily than a uniformly white wall 38,40 . We also chose to limit our target locations to corner feeders in an empty arena (that could be cleaned to get rid of odor cues), rather than having the targets be anywhere in a sandbox and allowing rats to dig for the reward. The clear visibility of the feature within our well-lit room and its salience against an otherwise black arena, along with the clear distinction between possible choices of feeders near and far from the feature, may have prompted rats in our study to use the featural cue. Along the same logic, it is possible that our across-species standardization of the striped cue may have favored rats over chicks, given chicks' innate preference for smaller visual features 47 . We note, however, that the chicks used that very same cue quite proficiently in a reinforced reference-memory task (Exp. 4, Supplementary materials). Another possibility is that, given the difference in their body size, by using the same-sized apparatus for both chicks and rats we may have inadvertently tested the chicks in comparatively larger environments. However, according to previous studies, larger environments favor feature-use 30,31 , which makes this an unlikely explanation for the species differences in this task. One other possibility is that our task protocol in having the rats sample the target location twice (see Methods) provided them with a better representation of the environment. However, even if that were somehow true, it still would not be able to explain the fact that the chicks and rats performed so similarly in their use of boundary geometry (61% vs. 62% geometrically correct preference in Exp. 1), yet so differently in their use of the feature (46% vs. 68% featurally correct preference in Exp. 2). Study after study, the variability in performance across tasks and species is with respect to feature-use (or the competition between features pitted against geometry), not about the use of geometry itself. Perhaps the crucial point is that variation across species, whether it is due to differences in their perceptual system or other ecological factors 4 , affects the representation of featural landmarks to a greater extent than the representation of environmental structure. For the past few decades, the geometric module hypothesis has provided a powerful theoretical framework for understanding the central role of environmental boundaries in navigation and spatial mapping. 
During that time, spatial navigation research has witnessed tremendous progress and a wealth of scientific knowledge that is likely unparalleled in any other area of cognition research, and the study of environmental boundaries and landmarks in spatial representation and behavior has made a significant contribution to that end. Behavioral evidence in a wide range of animal species suggests that computations of environmental geometry involve representations of distance and directions with respect to three-dimensional boundary layouts 48,49 . Evidence from human neuroimaging studies suggests that learning locations with respect to boundaries and landmarks preferentially recruits the hippocampus and dorsal striatum, respectively 50 ; a similar distinction has been found in the avian 51 and rodent brains [52][53][54] . At the level of single neurons, geometry-based navigation may be supported by boundary-coding neurons such as boundary vector cells in the hippocampal formation [55][56][57] . Prior to their discovery 42 , boundary vector cells were hypothesized as a major input to hippocampal place cells, which in turn form allocentric "maps" of the surrounding environment in a manner sensitive to environmental geometry [58][59][60] . Direct visualization of neurons responding to a single-shot experience of environmental boundary transformations in young domestic chicks 61 and intracranial recordings of boundary-specific increases in theta power from human epilepsy patients performing a computer-based navigation task 62 suggest that such neural representation of boundaries may be a commonly shared underlying neural correlate of boundary-based navigation. Arguably, a vital challenge now is to combine investigation of these and other types of hippocampal spatial neurons 36 with reorientation behavior 43,44 across various species. Thirty-five years after Ken Cheng's 2 formulation of the geometric module, we find ourselves in a new era of research on spatial cognition. We envisage thirty more fruitful years of research to successfully integrate behavioral and neural evidence on the representations of boundaries and features, to provide deeper insight as to how and at what level such representations interact, and to better characterize the principles that are shared and distinct across vertebrate species that ultimately give rise to behavior. Subjects. Rats. Twenty-nine adult male Lister Hooded rats (Rattus norvegicus, Harlan Olac, Bicester, England) were housed in groups of five with continuous access to water. They were held on a 12-hr light/dark cycle; testing occurred during the light phase. Rats were on a restricted diet of 15 g of food per animal, beginning two days prior to testing. No animals dropped below 95% of free-feeding weight. procedures. Rats. One day before testing, rats were provided with some chocolate chips in their cages. Before the first trial of each day, animals received a familiarization trial during which they were given three minutes to explore the arena and eat one chocolate chip placed at its center. For each test trial, chocolate chips were added to one feeder. The animal was allowed to explore until it had eaten a piece of chocolate, at which point it was removed for 15 seconds, before being placed back in to the arena from the same starting point. The animal was allowed to explore the arena until it had eaten a second piece of chocolate. This was to discourage the use of an alternation strategy, documented in rats as a method of foraging 63 . 
The animal was then disorientated for 30 seconds by placing it in a dark, covered box and rotating the box. Disorientation involved clockwise and then anticlockwise rotations (at least 720° in each direction). During this time, the feeder containing chocolate was removed from the arena and replaced with an identical, but empty, feeder. The arena was cleaned with 15% ethanol and rotated 90° clockwise to counteract the use of possible uncontrolled extra-maze cues. The animal was placed back into the arena from a randomly selected wall and allowed to search for 60 seconds. Trials in which the rat was unresponsive (no feeder approaches) were omitted; if a rat was unresponsive for more than three of the six trials, data from that rat were omitted entirely 64 . Chicks. After being moved from the incubator to its home cage, each chick was given 2-3 mealworms. For the next two days, chicks were individually taken to the testing room and placed in a plain square arena (devoid of informative landmarks or geometry) with all black walls and identical black feeders at the corners (for familiarization). An object, identical to the imprinting object, was fixed to the arena floor. On the first day, chicks ate mealworms from a feeder at the center of the arena, and then in a corner. On the second day, chicks found worms in one corner, before being disoriented. Chicks were then released from the center of a randomly chosen wall and given 60 seconds to search for mealworms. Test trials were administered for two days following familiarization. Before the first trial of each day, the chick was given a 60-second familiarization period inside the testing arena with a feeder containing mealworms at the center. For the test trials, the chick was released from the center of a randomly chosen wall. It was allowed to explore until it had eaten a mealworm, removed from the arena, and disorientated by rotating clockwise and anticlockwise for 30 seconds. During this time, the feeder containing mealworms was removed from the arena and replaced with an identical, but empty, feeder. The arena was wiped and rotated 90° clockwise. The chick was placed back into the arena from the center of a randomly selected wall and allowed to search for 60 seconds. Trials in which the chick was unresponsive (no feeder approaches) or distressed (made two attempts to jump out of the arena) were omitted. If a chick was unresponsive or distressed for more than three of the six trials, then data from that chick were omitted entirely. All husbandry and experimental procedures complied with European Legislation for the Protection of Animals used for Scientific Purposes (Directive 2010/63/EU). Experiments with rats were carried out in the Psychology Department of Durham University, in accordance with the U.K. Animals (Scientific Procedures) Act of 1986; all experiments were approved by the university internal review board on animal testing. Experiments with chicks were carried out in the Animal Cognition and Neuroscience Laboratory of the Center for Mind/ Brain Sciences at the University of Trento and were previously authorized by the University of Trento's Ethics Committee for the Experiments on Living Organisms, and by the Italian Ministry of Health (auth. num. 201/2013-B). All experiments were performed in accordance with relevant guidelines and regulations.
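For readers who want to reproduce the style of analysis reported in the Results, the following minimal sketch shows how per-subject corner-time proportions can be tested against chance and compared between corners. The data layout and values are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy import stats

# Minimal sketch of the corner-preference analysis style used in the Results.
# Each row is one subject; columns are time (s) spent near the four feeders,
# already rotated so column 0 = correct, 1 = rotationally symmetric,
# 2 = featurally symmetric, 3 = error corner. Values are illustrative only.
times = np.array([
    [22.0, 15.0, 14.0,  9.0],
    [18.0, 17.0, 13.0, 12.0],
    [25.0, 12.0, 15.0,  8.0],
])

props = times / times.sum(axis=1, keepdims=True)     # per-subject corner proportions

# Correct corner against the 0.25 chance level (one-sample t-test, Cohen's d)
t_corr, p_corr = stats.ttest_1samp(props[:, 0], 0.25)
d_corr = (props[:, 0].mean() - 0.25) / props[:, 0].std(ddof=1)

# Geometry use: correct + rotationally symmetric corners against a 0.5 chance level
geometry = props[:, 0] + props[:, 1]
t_geom, p_geom = stats.ttest_1samp(geometry, 0.5)

# "Sense": paired comparison of the correct vs. rotationally symmetric corner
t_sense, p_sense = stats.ttest_rel(props[:, 0], props[:, 1])

print(t_corr, p_corr, d_corr, t_geom, p_geom, t_sense, p_sense)
```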
2020-05-05T14:33:29.560Z
2020-05-05T00:00:00.000
{ "year": 2020, "sha1": "057d73741ae0f3a19148acb696ab4a0511a5352f", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-64366-w.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "41f3cb853875d89de723e6f19c4a766f2bca624d", "s2fieldsofstudy": [ "Biology", "Psychology" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
247346549
pes2o/s2orc
v3-fos-license
THE EFFECTIVENESS OF USING SEQUENCE PICTURE MEDIA IN TEACHING EFL STUDENTS IN WRITING PROCEDURE TEXT Teaching EFL students are both challenging and demanding as English is neither the students’ mother tongue nor their second language. Therefore, it is essentially required to select an effective way or technique in achieving the objectives of the lesson. This study was aimed to investigate the use of sequence picture media and its effectiveness in teaching EFL students how to write a procedure text. This study was conducted by using quantitative research with one group pretest-posttest design at Universitas Prima Indonesia. The steps involve administering pretest measuring the dependent variable, applying the experimental treatment X to the subjects; and administering a posttest again measuring the dependent variable. There were 40 students taken as the subject of the study. The data were collected through a pretest and a posttest. The result on this study showed that the students’ responses and performances were improved. It can be seen from the finding which showed that students’ percentage in using the language components such as grammar, sequence words, commands or imperative sentences, and adverbial phrases accurately. The result of the questionnaire also showed that the students considered sequence picture technique was effective in helping them write procedure text correctly. INTRODUCTION Writing is one of the four skills that must be mastered by students who use English as a foreign language (EFL) since it is one of the ways for them to express their ideas, thoughts and feelings. Aside from that, writing is also essentially required in academic purposes such as writing procedures on how to operate a new electronic device or how to install that device, writing application letters, writing emails, even writing daily journals and other necessary needs. Furthermore, students also have need of writing in order to enable them communicate with their pen pals or their cyber friends on social media. Being able to successfully demonstrate their ideas, thoughts and feelings will help them overlook challenges on their future. Leki (2001:199) also described that English writing in both educational and professional settings is increasingly important in countries of non-native speakers of English. But unfortunately, writing has been perceived as the most difficult thing to do by most of EFL students. Before conducting this research, the writer has observed students' responses toward writing-based tasks during the on-going class. It was found that the majority of the students were reluctant to do their task when it came to writing. After being observed and asked several questions, some of the causes of this problem are lack of vocabulary. Lacking of vocabulary puts boundaries for students to expand their ideas and thoughts. It was often found that their writing was stuck or remained unfinished due to the absence of the words they need in writing. Another cause is failing to structure their ideas effectively. This happened because they did not have the outlines of the writing which actually would help them organize their ideas in writing. Besides that, poor understanding of grammar and syntax skills have been one of the major obstacles for students to write. This has been affecting their willingness to write. They were afraid that their writing would be scored low due to the errors on their grammar. 
And the result of this observation is in line with the previous finding by Tampubolon (2020) which described that writing in English was not only viewed as a difficult task to do due the lack of the vocabulary mastery but also resulted from the lack of self-confidence the students have when they are asked to write their own composition. And the other one is plagiarism. Plagiarism is the most common case been found in writing especially among students. One of the main factors of this plagiarism was due to the lack of argumentative ideas within the students. So they always tried to find ways to copy their friends' work or taking existed writing on internet. As a result, they showed tendency to use Google translation by copying the whole sentences from source language and some of them even did not submit their work with an excuse that they did not have any ideas on how to do that. This phenomenon must be taken seriously and needs an effort on how to deal with it. Therefore, the writer was interested to do this research to investigate this problem and to find out the answer to the phenomenon as stated above. English as a foreign language (EFL) is the term used to describe the study of English by non-native speakers in countries where English is not the dominant language (Nordquist, 2020). Teaching EFL students are both challenging and demanding as English is neither the students' mother tongue nor their second language. English educators are demanded to select an effective technique in aiming the objective of the lesson in possible way. In this study, the writer was using sequence picture technique in teaching how to write procedure text. Procedure text is one of so many genres of writing which allows students to elaborate the process of doing something or showing how something is done. This could be burdensome for them if they are not familiar with the terms and the things related to it. Therefore they need some guidance to help them brainstorm the ideas and stimulate their schemata on things they have to work on. Hence, the writer was investigating the use of sequence picture technique to teach them on how to write a procedure text effectively. Procedure text is a type of text which is designed to elaborate the procedure of doing or using something, for example the procedure of using washing machine, cooking spaghetti and so on. Its generic structure consists of introductory paragraph, list of material or tools needed, and sequence of steps (Hartono, 2005). The purpose of writing procedure text is to give an overview for the readers about the process of doing something so they can visualize the process even though they have not done or used it. Sequence picture media is a teaching media which consists of series of pictures compiled into a collage that is described in order. Sequencing pictures in writing will help the students write their ideas in order. So they will know what to do first, second, and next. Besides that, it also gives them a visualization about what they are going to write. RESEARCH METHODOLOGY This study is a quantitative research with one group pretest-posttest design aiming to investigate the effectiveness of sequence picture technique in teaching how to write a procedure text among the students of Universitas Prima Indonesia. There were 40 students taken as the subject of this study. The study started by observing the students' attitude towards writing-based tasks during teaching experiences. 
After obtaining preliminary data through observation, then the writer planned to administer a writing test as a pretest then analyzed the result. Afterwards, the writer conduct a teaching process by implementing the use sequence picture media in order to find the solution toward the problem gathered in the previous observation and enhance the writing score on pretest. On the implementation of sequence picture media, on the very first step, the writer prepared a series of picture that have been sequenced accordingly. The pictures were presented in the classroom during teaching and learning process. Then the writer prepared an explanation on how to write a procedure text by following its generic structure which consists of introductory paragraph, list of material or tools needed, and sequence of steps (Hartono, 2005). Aside from that, the writer also provided and explanation the language components such as target grammar and vocabulary used in writing procedure text and other writing features like transition markers (first, second, third, etc.) and also punctuation and capitalization. The students were given an exposure by showing the series of pictures that have been prepared and giving them a proper explanation about how to write a procedure text. The students then were formed into groups of 4 to have discussion to do the first task which is writing a procedure text for cycle one. While doing the task, the writer observed the classroom by recording students' attitude towards the task such as their behavior, cooperation, action and expression. During group work, most of students participated in the discussion and showed good cooperative behavior towards the given task. And to see individual performance, the writer then gave a posttest. The writing were analyzed based on the language features used in the text such as using simple present tense correctly, using command or imperative sentence appropriately, using adverbial phrases properly such as adverb of time, manner, and place and using adverbs of sequence accurately such first, second, third, etc. The above figure was one of the pictures used by the writer in teaching students how to write a procedure text which is about process of making chocolate. The sequence picture was presented to accommodate the students with their writing so they could get the ideas and guidance in deliberating it. FINDING AND DISCUSSION The result on cycle one showed the effectiveness of applying sequence picture technique in teaching procedure text as seen as follow. The result on pretest as indicated on the above chart shows that 19% of the students could write the procedure text very good which was categorized into good and 37% of the students could demonstrate their writing well which reached good category while the rest of the students which consists of 46% performed fairly. While in term of vocabulary, 16% of the subjects were able to use the vocabulary appropriately which then categorized into very good, then there were 34% of the students could utilize the vocabulary quite well which then grouped into good, and the rest of the students which is the other 50% still needed help and more guidance in writing the procedure text using proper words. The last language component which was also analyzed by the writer is the use of grammar. As seen, 18% of the students could make use of the grammar correctly in writing their sentences, which is categorized in very good. 
And 27% of the participants were able to demonstrate the use of grammar well which labeled into good. While the other 28% of the students still could not use the grammar accordingly then labeled into fair, which means they need more exposure and encouragement about how to build correct sentences in English. Based on the above result, it was necessarily required to conduct an alternative way to see the improvement of the students' grades in writing procedure text through sequence picture media. After conducting pretest, the writer gave some feedback towards students' writing by showing what they did wrong and which part they were already good at. The writer encouraged the students to select the right words used in their sentences since the lowest point in cycle one was in term of vocabulary. And majority of the inappropriateness was due to the error in word choice. The writer also reminded students to use the proper grammar when writing their sentence which is present simple tense since most sentences in procedure text are written in that kind of tense. After making sure all of the students understood the parts they had to work on, the writer then gave another writing task for them by making some little changes. It only took around 15 minutes for some students to finish their writing, while some other students spent around 20-30 minutes to do it. After making sure all of the students have submitted their writing, the writer then analyzed them by considering the three main writing components in procedure text such as grammar, vocabulary and body structure. Then the writer found the data as follow. The above chart shows that 60% of the students could follow the generic structure of the procedure text in writing their text very well which means there are 24 students performed very good. And the chart also indicates that 30% of the students could write well, meaning there are 12 students who are categorized into good performance. While the rest of the students which is 10% still unable to write well. It means there are 4 students who performed fair. On second part which is vocabulary, it shows that 55% of the students used the vocabulary appropriately which means there are twenty two students who performed well in term of vocabulary that is categorized into very good. And another 30% of the students were able to utilize the vocabulary properly. It means there were around eleven students who were categorized into good. While the other 15% number of the subject consisting of six students could make use of the vocabulary fairly. While on the last language component namely grammar, there were also twenty two students as shown on the above chart as much as 55% who could use the grammar correctly. And another 30% which consists of 12 students were able to use the grammar well and grouped into good category. Then the last 15% of the students which consists of six showed less understanding in the use of grammar in their writing. The findings on this study are in line with the findings on the previous study conducted by Anisa which shows that there was a significant progress on students' writing after being taught using sequence picture technique. Therefore the study suggests that the use of sequence picture technique can improve students' writing performance. CONCLUSION AND SUGGESTION Based on the findings and discussions on the previous chapter, the writer comes into some conclusions. Teaching EFL students in writing procedure text using sequence picture media is significantly effective. 
The students' writing scores are impressively increased after being taught using picture sequence media. The use of this media is considered successful because the students find it helpful in giving them ideas about the writing. Furthermore, the media can help them write their composition smoothly by following the sequence of the pictures therefore their writing organization is found easily to be followed. Besides that, the students' score in terms of vocabulary is also improved due to the use of this media. It is because the media can activate their schemata and stimulates their mind in selecting the proper words to be used in their writing. Lastly, the students' grammar is also significantly improved. By using the sequence picture media, students can view the use of appropriate type of verbs in their sentences through the help of this media. Therefore they can write their sentences with proper types of verbs and tenses. The writer suggests that the use of sequence pictures must in line with English educators' guidance especially in explaining the procedure of the writing the outlines of the students' composition and also giving example of the model writing. Aside from that, the explanation of the grammar use in writing procedure text is essentially needed. It is also suggested to the further writer to expand the complexity of the writing so the students will find it more challenging. And this sequence picture media might be considerably used for another type of writing with some necessary adjustment.
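As a small illustration of how the posttest rubric results quoted above can be tabulated, the sketch below converts the reported percentages into approximate student counts for a class of 40. The counts are reconstructed from the percentages and may differ slightly from the rounded numbers of students mentioned in the text:

```python
# Illustrative tabulation of the posttest rubric results quoted above (n = 40).
n_students = 40
categories = ["very good", "good", "fair"]
posttest_pct = {
    "structure":  [60, 30, 10],
    "vocabulary": [55, 30, 15],
    "grammar":    [55, 30, 15],
}

for component, pcts in posttest_pct.items():
    counts = [round(p * n_students / 100) for p in pcts]
    row = ", ".join(f"{cat}: {n} students ({p}%)" for cat, n, p in zip(categories, counts, pcts))
    print(f"{component} -> {row}")
```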
2022-03-09T18:47:20.884Z
2022-01-24T00:00:00.000
{ "year": 2022, "sha1": "38def2465afe5d5feb4cdd9348eafecece7a2c7d", "oa_license": "CCBYSA", "oa_url": "https://journal.eltaorganization.org/index.php/joal/article/download/48/66", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "c8f4e9f582901dbd8ec82e3ca330d349cc74506a", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
907075
pes2o/s2orc
v3-fos-license
Phytotoxicity of 4,8-Dihydroxy-1-tetralone Isolated from Carya cathayensis Sarg. to Various Plant Species The aqueous extract from Carya cathayensis Sarg. exocarp was centrifuged, filtered, and separated into 11 elution fractions by X-5 macroporous resin chromatography. A phenolic compound, 4,8-dihydroxy-1-tetralone (4,8-DHT) was isolated from the fractions with the strongest phytotoxicity by bioassy-guided fractionation, and investigated for phytotoxicity on lettuce (Latuca sativa L.), radish (Raphanus sativus L.), cucumber (Cucumis sativus L.), onion (Allium cepa L.) and wheat (Triticum aestivum L.). The testing results showed that the treatment with 0.6 mM 4,8-DHT could significantly depress the germination vigor of lettuce and wheat, reduce the germination rate of lettuce and cucumber, and also inhibit radicle length, plumule length, and fresh weight of seedlings of lettuce and onion, but could significantly promote plumule length and fresh weight of seedlings of cucumber (p < 0.05). For the tested five plants, the 4,8-DHT was the most active to the seed germination and seedling growth of lettuce, indicating that the phytotoxicity of 4,8-DHT had the selectivity of dosage, action target (plant type) and content (seed germination or seedling growth). Introduction The phenomenon of inhibition of one plant's growth by chemicals released by another plant into environment is generally defined as allelopathy [1]. The term "allelopathy" was first use by Hans Molisch from a physiological perspective to describe the effect of ethylene on fruit ripening [2]. Allelopathic effects are mainly determined by the dose and property of the exuded chemicals, mostly consisting of some secondary metabolites with phytotoxicity, and also by various other environment factors [3,4]. Allelochemicals, delivered through decomposition, volatilization, leaching and root exudation [5], play an important role in the distribution of plant populations [6], the succession of communities, as well as the nutrient chelation [7], and was also suggested as a mechanism driving exotic plant invasion [8]. Found mainly near Tianmu Mountain in the Zhejiang Province of China (30°18'30''-30°24'55''N, 119°23'47''-119°28'27''E) [9], Carya cathayensis Sarg. is famous for its daintiness and nutritional content, and has generated increasing interest as a healthy foodstuff to decrease the risk of heart disease [10]. After collecting the fruits of C. cathayensis, the exocarp as a forest residue were piped up along the hillside. When the leached exocarp solution flowed down the mountainside, weeds and low shrubs along the flow path and nearside gradually withered and died, suggesting a phytotoxic phenomenon. Phytotoxicity could be a starting point for investigation of allelopathy, and also might be the function of static (i.e., the existing concentration in soils) and dynamic (i.e., the renewal rate) availability of allelochemicals [11,12]. In the last decade, the study of allelopathy has made significant progress. Today, more and more papers on this subject are being published in high impact journals [13]. Recent advances in our efforts to isolate and identify trace amount of biologically active substances including various allelochemicals have encouraged us to study allelopathy deeper and farther, especially in controlled environments [14]. Obviously, it was necessary to collect, extract, separate and then analyze allelochemicals in the natural environment to bioassay their phytotoxicity. 
Currently, ecologists try to identify and gain pure allelochemicals from complex mixtures by various technologies including UPLC, GC, LC-MS, GC-MS, GC-MS-MS, TLC, CE, paper chromatography, etc. [15][16][17]. In order to answer if and how allelopathy influences plant interactions and invades plant, however, multidisciplinary efforts involving plant ecology, genetics, physiology, biochemistry, soil science, microbiology and so on must be made to address this complex research area [18,19]. Over the last years, Juglandaceae [9], the constituent of C. cathayensis as a member of the walnut family, has been shown to include five flavonoids (cardamonin, pinostrobin chalcone (PC), wogonin, chrysin, pinocembrin) in the leaves, and α-tetralonyl glucosides from the fresh rejuvenated fruit [20]. Previous studies showed that cathayenone A isolated from C. cathayensis husk exhibited obvious antifungal activities [21]. However, the chemical composition of C. cathayensis exocarp has not been reported so far. The present study will describe the process of isolation of 4,8-dihydroxy-1-tetralone (4,8-DHT) and investigate the phytotoxicity activities of this novel compound, to provide a basis for development and utilization of C. cathayensis exocarp and 4,8-DHT as allelochemicals. Effects on Seed Germination of Lettuce Effects of C. cathayensis exocarp extract and eluting fractions on lettuce seed germination were measured by germination rate and germination vigor ( Figure 1). Using distilled water as a reference (CK), germination vigor was significantly inhibited by the aqueous extract (W) and 11 eluted ethanol solution fractions, and the eluting fraction of 60% ethanol solution completely inhibited seed germination vigor (p < 0.05, Figure 1A). Germination rate was also significantly inhibited by the aqueous extract and the 10 eluted fractions, and the fraction eluted by 60% ethanol solution completely inhibited seed germination rate (p < 0.05, Figure 1B). The three fractions eluted by the solutions with the water/ethanol volume ratio of 5:5, 4:6 and 3:7, obviously demonstrated stronger inhibition to seed germination of lettuce than the other fractions. Effects of Extract on Seedling Growth of Lettuce Effects of C. cathayensis exocarp extract and eluted fractions on lettuce seedling growth were measured with regard to radicle length, plumule length and fresh weight of seedling ( Figure 2). Compared with the control group, the ethanol free water fraction significantly promoted and the fraction elutied by the solution with a water/ethanol ratio of 9:1 slightly promoted the radicle growth, while the other eluted fractions and overall water extract significantly inhibited the radicle growth of lettuce, and the fractions corresponding to the eluents with water/ethanol ratios of 4:6 and 3:7 showed the strongest inhibitory effects (p < 0.05, Figure 2A). As shown in Figure 2B, the fractions eluted by the solutions with water/ethanol ratios of 10:0, 9:1 and 7:3 significantly stimulated the plumule growth, but the fractions eluted with water/ethanol ratios of 8:2 and 0:10, as well as the overall water extract, slightly inhibited the plumule growth, and other eluting fractions significantly inhibited the plumule growth of lettuce, for which the inhibitory effects of the fractions corresponding to water/ethanol ratios of 4:6 and 3:7 were the strongest (p < 0.05). 
As shown in Figure 2C, the fractions eluted by solutions with water/ethanol ratios of 9:1, 10:0 and 0:10 significantly increased the fresh weight of seedlings, whereas the other eluted fractions and the overall water extract significantly reduced it, and the eluted fraction with a water/ethanol ratio of 4:6 exhibited the strongest inhibition (p < 0.05). In sum, the fractions eluted by the solutions with water/ethanol ratios of 5:5, 4:6 and 3:7 showed very strong inhibition of seed germination and seedling growth of lettuce, indicating that there might exist some active phytotoxic chemicals in them.

Extraction and Purification of 4,8-DHT

UPLC analysis of the eluted fractions with the strongest inhibitory activity was conducted. A compound with a retention time of ~3 min could be found in the chromatograms of all three fractions, and its content in the fraction corresponding to the water/ethanol ratio of 4:6 was 76.15% (Figure 3A-C). After purification by silica gel and Sephadex LH-20 column chromatography, the residue was recrystallized from EtOAc to give colourless block crystals with a purity of 99.8% (Figure 3D).

Identification and Characterization of 4,8-DHT

The above compound in Figure 3D was identified and characterized by GC-MS, 1H-NMR and 13C-NMR analysis and comparison with literature data.

As shown in Figure 6B, the solution with 0.6 mM 4,8-DHT significantly promoted plumule length of cucumber, but displayed no significant promotion of that of radish and wheat, and significantly inhibited plumule length of lettuce and onion; the solution with 6 mM 4,8-DHT significantly reduced plumule length of all plants. The data in Figure 6C show that the solution with 0.6 mM 4,8-DHT caused a significant increase in the fresh weight of cucumber seedlings, no significant effect on radish and wheat seedlings, and a significant decrease in the fresh weight of lettuce and onion seedlings, while the solution with 6 mM 4,8-DHT significantly decreased the fresh weight of seedlings of all plants. Among the tested plants, seedling growth of lettuce was the most sensitive to 4,8-DHT treatment.

4,8-Dihydroxy-1-tetralone (1) has a pair of enantiomers found in both fungi and plants; the (-)-1 enantiomer is commonly named regiolone and the (+)-1 isomer is called isosclerone. According to ab initio calculations of ORs and ECD spectra, the absolute configurations of the two naturally occurring enantiomeric naphthalenones were assigned as (R) for (-)-regiolone and (S) for (+)-isosclerone [23]. The green husks, leaves, stem and bark of the genus Juglans (Juglandaceae) have been widely used as folk medicines for the treatment of cancer and dermatosis in Korea, Japan and China since ancient times. The green walnut husks (Juglans regia L.), named Qin-Long-Yi in Chinese, are a traditional herbal medicine which has long been used for clearing heat, eliminating toxin, alleviating pain and treating skin disease [24]. Several diarylheptanoids and regiolone isolated from the extracts of the green walnut husks have been studied for their structure and various biological activities such as antitumor, anti-inflammatory, antifungal and antibacterial properties. They have also been reported to inhibit NF-κB activation and NO and TNF-α production, to show free radical scavenging activities, and to have related effects [25][26][27].
Phytotoxicity tests and morphological investigations on plant species of horticultural interest indicated that regiolone as principal component of walnut husk washing waters could induce a concentration-dependent stimulating effect on the growth of radish, lettuce cv. cavolo Napoli up to 165%, and elicit an opposite inhibitory effect up to 70% on spinach and lettuce cv. Gentilina [28]. Isosclerone [(+)-1] was first isolated from Sclerotinia sclerotium as a new bioactive metabolite in a plant growth regulating test, later from Scytalidium species and as a phytotoxin produced from Scolecotrichum graminis, the causal agent of a leaf streak disease in orchard grass [29,30]. Isosclerone was also isolated from other fungal and plant species, it was reported as a phytotoxin of Botrytis cinerea, known as a pathogen of a number of crops, especially as the pathogen to the gray mold rot of grapes, as an antitumor metabolite from Penicillium diversum var. aurem [31,32]. 4,8-DHT isolated from the exocarps of C. cathayensis was confirmed as a racemate (1:1). In this preliminary investigation, bioassay-guided fractionation of an aqueous extract of C. cathayensis exocarps lead to the finding of the key active component 4,8-DHT, proving bioassay to be an efficient method for this purpose. The fraction eluted by the solution of water/ethanol at a volume ratio of 4:6 demonstrated the strongest inhibitory intensity, in comparison with other eluting fractions. UPLC analysis revealed the existence of an active compound which was further identified to be 4,8-DHT by MS-GC and NMR analyses. For the moment, we would not exclude that there might exist other phytotoxic chemicals in water extract of C. cathayensis exocarps. However, the eluting fraction with regard to the solution of water/ethanol at ratio of 4:6 displayed a more important research value. Higher plants release a diversity of allelochemicals into the environment, including phenolics, alkaloids, long chain fatty acids, terpenoids and flavonoids [33]. In this study, we isolated a phenolic substance, 4,8-DHT, from the exocarps of C. cathayensis as the major active compound responsible for the observed phytotoxicity. Seed germination and seedling growth studies using phytochemical extracts are most widely used to determine the phytotoxic potential in vegetation [34]. Crop seeds are commonly selected for use in phytotoxic bioassays, because they satisfy a number of selection criteria: they are readily available, affordable, repeatable and reliable; and they germinate quickly, completely, and uniformly. In this study, we selected five crops as test species. Since the content of 4,8-DHT in exocarps of C. cathayensis was about 0.25 mg/g, one kilogram of exocarps soaked in ten kilograms of water could generate the solution of 4,8-DHT at 1.4 mM concentration. According to this, we selected a concentration range of 4,8-DHT in treatment solution. The treatment with 0.6 mM 4,8-DHT could significantly reduce germination vigor of lettuce and wheat, germination rate of lettuce and cucumber, and could also significantly inhibit radicle length, plumule length, and fresh weight of seedlings of lettuce and onion. The treatment with 6 mM 4,8-DHT could significantly inhibit the progression of seed germination and seedling growth of the test plants. On the contrary, the treatment with 0.6 mM of 4,8-DHT could significantly promote plumule length and fresh weight of seedlings of cucumber. 
For seed germination and seedling growth, lettuce was the most sensitive to 4,8-DHT treatment in the five test plants. In summary, the allelopathy must be an interaction between a pair of donor and receptor, and different receptors have variable sensitivity to the same donor; an allelochemical might have the duality of inhibition and promotion to plant growth, and the action intensity and direction could vary with its amount. Chemicals from a plant alone are not sufficient to ensure their allelopathic potential. Abiotic and biotic environmental conditions determine the allelopathic potential of chemicals in soil [6]. Recent studies have deepened our understanding of allelopathy by examining it in environmental, biogeographic, and evolutionary contexts [35][36][37]. Field examination of intraspecific chemical inhibition of water extract of C. cathayensis exocarps and 4, 8-DHT might yield further insights into the role of allelopathic potential of chemicals in the ecosystem. Furthermore, the mechanisms that 4,8-DHT induces growth stress and alters the biochemical and physiological processes needed to be determined. Degradation of 4,8-DHT in the natural environment should be studied to confirm whether 4,8-DHT was completely friendly to environment before it was developed. Plant Materials Exocarps of C. cathayensis were collected from Chun-an country in Zhejiang province (29°22'~29°50'N/118°34'~119°15'E). The selected exocarps were not exposed to rain, otherwise some active compounds soluble in water would be washed away. After some sundries such as nut fragments were picked out, the exocarps were kept in a cool and well-ventilated place. Seeds of lettuce (Latuca sativa L.), radish (Raphanus sativus L.), cucumber (Cucumis sativus L.), onion (Allium cepa L.) and wheat (Triticum aestivum L.) were purchased from the market in Ningbo of China, and used for bioassay. Extraction and Isolation of Active Compounds The ground exocarps of C. cathayensis were soaked in distilled water (1 g per 20 mL) at room temperature for 48 h. After exhaustive extraction with stirring, the mixture was sieved through cheesecloth and squeezed to extract as much liquid as possible. The liquid was treated by centrifugation at 4000 rpm for 2 min followed by vacuum filtration through Whatman No. 4 filter paper. Then, the filtrates went through a glass column (60 mm × 610 mm) packed with X-5 macroporous resin. After adsorption saturation, the resin column was sequentially eluted with aqueous solutions containing 0%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90% and 100% ethanol (2.5 L per step). All the eleven eluants were respectively concentrated by decompression into the solutions with concentration about 2.0 mg/mL for testing the allelopathic activity on lettuce. Among them, three fractions with stronger allelopathy activity were analyzed by UPLC. One component with high content was targeted at same retention time, and then was separated through silica gel column with the eluted solution of petroleum ether/ethyl acetate (6:4). Quantitatively, the targeted component of 2.5 ± 0.2 g was obtained from C. Cathayensis exocarps of 10 kg. After further purification with Sephadex LH-20 column chromatography, the substance was recrystallized in EtOAc to form the colourless block crystals with [α] 20 D = ± 0° (c1.3, CH2Cl2) and m.p. 366-368 K. For the identification of this active substance, GC-MS analysis showed a molecular ion peak at m/z 178 Daltons, and both 1 H and 13 C-NMR gave a complete and reliable assignment signals. 
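As a quick consistency check on the mass-spectrometric identification above, the molecular ion at m/z 178 matches the nominal mass expected for 4,8-dihydroxy-1-tetralone if one assumes the molecular formula C10H10O3 (the formula is not stated explicitly in the text and is our assumption). The short Python sketch below makes the arithmetic explicit.

```python
# Sketch: check that the assumed formula C10H10O3 is consistent with the m/z 178 molecular ion.
# Assumption: 4,8-dihydroxy-1-tetralone = 1-tetralone (C10H10O) bearing two extra hydroxyl oxygens.
average_mass = {"C": 12.011, "H": 1.008, "O": 15.999}   # average atomic masses (g/mol)
nominal_mass = {"C": 12, "H": 1, "O": 16}                # integer masses used for a nominal m/z

formula = {"C": 10, "H": 10, "O": 3}                     # assumed molecular formula

mw = sum(n * average_mass[el] for el, n in formula.items())
mz = sum(n * nominal_mass[el] for el, n in formula.items())

print(f"average molecular weight ~ {mw:.2f} g/mol")      # ~178.18 g/mol
print(f"nominal [M]+ m/z = {mz}")                        # 178, matching the reported molecular ion
```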
Instrumental Analysis The active compound was identified by GC-MS, 1 H-and 13 C-NMR. An Agilent 6890N/5973 gas chromatograph-mass spectrometer equipped with a HP-5MS column (30 m × 0.25 mm I.D × 0.25 μm) was used for the identification of the active compound. Gas chromatography was operated under an initial oven temperature of 80 °C for 2 min, then programmed to 180 °C at a rate of 15 °C /min followed by 5 min at 180 °C, and 25 °C/min to 280 °C for 5 min of final isotherm. The carrier gas helium flowed through the column at a rate of 1.0 mL/min. The splitless injection was adopted with the sample volume of 10 μL. The temperature of injector and detector was maintained at 280 °C. The mass spectrometer was operated at 70 eV with the scan range of m/z ratio from 30 to 550. 1 H-NMR (400 MHz) and 13 C-NMR spectra (100 MHz) were recorded in NMR spectrometer (Bruker Ac-400 spectrometer). Samples were dissolved in DMSO and chemical shifts were reported in parts per million (ppm) relative to an internal standard of tetramethylsilane. Effects of 4,8-DHT on Seed Germination 100 grains of surface-sterilized lettuce seeds were placed in each sterile Petri dish (15 cm diameter) lined with Whatman No. 3 filter paper in three replicates. Ten mL of treatment solutions and distilled water as control were added to each Petri dish. The Petri dishes were placed in programmable illuminated incubator with an L/D cycle of 12 h/12 h and a temperature cycle of 25/15 °C. Treatments were carried out in a complete randomized design with three replicates for each treatment. Germination (radicle emergence) was measured 4 and 7 days after treatment. To test the effects of 4,8-DHT on seed germination of lettuce, radish, cucumber, onion and wheat, we conducted experiments similar to those described above. Germination (radicle emergence) was measured in 4 and 7 days after treatment for lettuce, 4 and 10 days for radish, 4 and 8 days for cucumber and wheat, 6 and 12 days for onion. Effects of 4,8-DHT on Seedling Growth Pre-germination of lettuce seeds were conducted in plastic boxes (20 × 15 × 10 cm) lined with Whatman No. 3 filter paper for 3-4 days until radicle emergence. One hundred successfully germinated seeds were placed in Petri dishes in three replicates and 10 mL treatment solutions and distilled water as control were added to each Petri dish. Seedlings were incubated in programmable illuminated incubator (incubation conditions were the same as seed germination). Five seeds were randomly taken out from each Petri dish and the length of plumule and radicle were measured with a vernier caliper (GB/T 1214.2-1996, Measuring Instrument LTD, Shanghai, China). Fresh weight of seedlings was also recorded (Mettler Toledo Instrument Ltd., Boston, MA, USA). The measurements were taken on once every two days after incubation and continued for a total of 18 days. Bioassays of 4,8-DHT on seedling growth of five tested species were conducted with the same procedure as above. Plumule, radicle length and fresh weight of seedling were measured once every two days after incubation and continued for a total of 18 days. Statistical Analysis We calculated germination rate and germination vigor for each of the five tested species. 
The percentage of germinated seeds measured on the fourth day for lettuce, radish, cucumber and wheat, and on the sixth day for onion, was counted as germination vigor, while the percentage of germinated seeds measured on the seventh day for lettuce, the eighth day for cucumber and wheat, the tenth day for radish, and the twelfth day for onion was counted as germination rate. Significant differences between the treatment solutions and the control in seed germination and seedling growth of the test species were first examined by ANOVA (p < 0.05) and then analyzed using Fisher's test at the p < 0.05 level (analyses were performed using the SPSS statistical package v.20, IBM Corp, Armonk, NY, USA).

Conclusions

The phenolic compound 4,8-dihydroxy-1-tetralone was found to be a key phytotoxic chemical in the exocarp of C. cathayensis and demonstrated significant phytotoxicity toward seed germination and seedling growth of lettuce, radish, cucumber, onion and wheat. The phytotoxic intensity depended on the 4,8-DHT amount and the tested plant. Whereas a high amount of 4,8-DHT (6 mM) generally inhibited seed germination and seedling growth of each plant, a small amount of 4,8-DHT (0.6 mM) actually promoted seedling growth of cucumber, indicating that the phytotoxicity of 4,8-DHT was selective with respect to dosage, action target (plant type) and response (seed germination or seedling growth). It is expected that the results of this investigation could establish a basis for the development and utilization of the exocarp of C. cathayensis and 4,8-DHT.
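To make the germination metrics and the statistical screening described in the Methods above concrete, the sketch below computes germination vigor and germination rate from raw counts and runs a one-way ANOVA followed by unprotected pairwise t-tests (equivalent in spirit to Fisher's test when the ANOVA is significant). The counts, group names and threshold are illustrative, not data from this study.

```python
# Minimal sketch of the germination bioassay analysis; all numbers are invented for illustration.
from itertools import combinations
from scipy import stats

SEEDS_PER_DISH = 100  # 100 seeds per Petri dish, three replicate dishes per treatment

# germinated-seed counts per replicate dish at the "vigor" day (e.g., day 4 for lettuce)
# and at the "rate" day (e.g., day 7 for lettuce)
counts_day4 = {"control": [92, 90, 94], "0.6 mM": [70, 66, 73], "6 mM": [12, 9, 15]}
counts_day7 = {"control": [97, 95, 96], "0.6 mM": [81, 78, 84], "6 mM": [20, 17, 22]}

vigor = {g: [100 * c / SEEDS_PER_DISH for c in reps] for g, reps in counts_day4.items()}
rate = {g: [100 * c / SEEDS_PER_DISH for c in reps] for g, reps in counts_day7.items()}

# one-way ANOVA across treatments on germination vigor
f_stat, p_anova = stats.f_oneway(*vigor.values())
print(f"ANOVA on germination vigor: F = {f_stat:.2f}, p = {p_anova:.4f}")

# if the ANOVA is significant, compare treatment pairs with unprotected t-tests (Fisher-style)
if p_anova < 0.05:
    for a, b in combinations(vigor, 2):
        t, p = stats.ttest_ind(vigor[a], vigor[b])
        print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}")
```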
2016-03-14T22:51:50.573Z
2014-09-26T00:00:00.000
{ "year": 2014, "sha1": "ea3f9e7f16fc39ad5b7c1845e102413e21a791fd", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/19/10/15452/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ea3f9e7f16fc39ad5b7c1845e102413e21a791fd", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
247531470
pes2o/s2orc
v3-fos-license
Is Ankle Kinesio Taping Effective to Immediately Change Balance, Range of Motion, and Muscle Strength in Healthy Individuals? A Randomized, Sham-Controlled Trial Background The ankle–foot complex plays a key role in maintaining balance because it collects proprioceptive data. Kinesio taping (KT) is a rehabilitative method performed by the cutaneous application of a special elastic tape. The mechanical correction technique of KT was suggested to reposition the joints and alter balance parameters. The aim was to reveal the pure effects of ankle KT on balance, range of motion (ROM), and muscle strength in healthy individuals. Methods Forty healthy students were recruited for this randomized, sham-controlled study at a local university. Participants were divided into two groups—experimental and sham application groups. The primary outcome measures were balance parameters. Athlete Single Leg (ASL), Limits of Stability (LoS), and Clinical Test of Sensory Interaction and Balance (CTSIB) were used to measure single-leg dynamic balance, dynamic postural control, and sensory interaction of balance, respectively. Dorsiflexion ROM and dorsiflexor muscle strength were the secondary outcomes. Results The ASL score showed significant improvement only in the experimental KT group (P=0.02); however, the LoS score increased significantly in both groups (P<0.05). CTSIB scores, dorsiflexion ROM, and dorsiflexor muscle strength for both ankles did not change in any of the groups after intervention (P>0.05). Moreover, there was no superiority of one intervention over the other in improving any of the variables (P>0.05). Conclusion The mechanical correction technique of KT can be useful in providing immediate improvement in single-leg dynamic balance in healthy individuals. However, it may not be effective to significantly change the sensory interaction of balance, dorsiflexion ROM, and muscle strength. INTRODUCTION Balance is an interrelated component of physical function and is defined as the ability to align body segments against gravity to maintain or move within the available base of support. 1) In particular, mechanoreceptors are located within the joint capsular tissues, ligaments, tendons, muscles, and skin of the ankle and provide essential information to enable adjustment of ankle positions, which play a major role in the maintenance and correction of balance in different directions during weight-bearing activities. 2) Superior balance ability is necessary to avoid lower limb injuries and achieve the highest competitive level. Many researchers have attempted to improve balance by applying appropriate interventions to the ankle in healthy individuals and athletes. 3) Kinesio taping (KT) is a popular elastic taping method with proposed mechanisms of action such as improved proprioception through increased stimulation of cutaneous mechanoreceptors, altered muscle function by supporting weakened muscles, and repositioned joints by mechanical correction. 4) Several application techniques of KT have been suggested according to its purpose. For instance, muscle technique is used to alter muscle activation, mechanical correction is suggested to reposition the joints, and the epidermis-dermis-fascia technique aims to improve wound healing and edema. 5) Researchers commonly used muscle techniques since they were mainly focused on investigating the effects of KT on muscle function and strength, and studies on the effects of mechanical corrections are limited. 
The mechanical correction method of KT on the ankle was thought to have an effect similar to that of joint mobilization. 6) Since it was proposed that articular stretching due to joint mobilization of the ankle increases sensory outputs of the mechanoreceptors, which are related to balance, 7) we anticipated that mechanical correction may change balance parameters. The effects of KT on variables such as muscle strength, pain, range of motion (ROM), and balance in people with ankle disorders have previously been investigated. 8) However, it is essential to differentiate the effects of KT from those of placebo and the normal healing process of the disease itself. To reveal the pure effects of KT, the present study was conducted in healthy individuals as a sham-controlled trial. This study aimed to determine whether the mechanical correction technique of KT applied to healthy ankles immediately affected balance, ROM, and muscle strength. We hypothesized that KT would significantly change balance in individuals with healthy ankles when compared with sham application. The results of this study would be useful for clinicians in making clinical decisions regarding the use of KT in healthy individuals and athletes when balance improvement is desired.

Design

This was a prospective, randomized, double-blinded, sham-controlled study with a parallel design and an allocation ratio of 1:1.

Participants

Forty healthy students aged 18-25 years volunteered for the study. Participants were recruited from a local university. Participants with (1) any neurological, musculoskeletal, or vascular disease; (2) previous history of surgery in any of the lower extremities; and (3) previous experience with KT were excluded. Participants were randomly allocated to the experimental KT group and the sham KT group, and block randomization (AABB, ABAB, ABBA, BBAA, BABA, BAAB paradigm) was used (an illustrative sketch of this blocking scheme is shown below). A is the experimental group and B is the sham group. Participants were assigned to one of the two groups using this paradigm by an independent investigator. To ensure blinding, the paradigm was concealed in a sealed envelope and provided to each participant. Participants were then asked to give the sealed envelope only to the researcher who would be performing KT before the intervention. Outcome measures were evaluated by a blinded assessor before and 45 minutes after the intervention procedure in a different hall from where the interventions were applied. Participants were not informed about the intervention that would be applied during data collection. All measurements were taken at the same time of day.

Procedures

KT was applied according to the procedures recommended by Kase et al. 5) A 5-cm wide, pink Kinesio Tex tape was used for both groups. For the experimental KT group, a posterosuperior glide was first manually applied to the distal end of the fibula (lateral malleolus). To sustain this glide, the mechanical correction technique of KT was used for both ankles. 5) For each ankle, participants were asked to stand barefoot, and the foot that would be taped was positioned on a tool 30 cm high in a neutral position. An I-shaped Kinesio tape with 75%-100% stretch (20 cm in length) was applied around the lower leg, attaching from the lateral malleolus to the middle third of the medial tibia (Figure 1). A tension of 75%-100% has been suggested to provide sensory stimulation and mechanical assistance to facilitate motion. 5)
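A minimal sketch of the block randomization scheme mentioned under Participants, using the six blocks listed there (A = experimental KT, B = sham KT). The seed and the way blocks are drawn are illustrative assumptions, since the paper does not specify how the block sequence was generated.

```python
# Sketch of block randomization with blocks of four (2 experimental, 2 sham per block).
import random

BLOCKS = ["AABB", "ABAB", "ABBA", "BBAA", "BABA", "BAAB"]  # blocks listed in the Participants section
N_PARTICIPANTS = 40

random.seed(7)  # illustrative seed so the example is reproducible

allocation = []
while len(allocation) < N_PARTICIPANTS:
    allocation.extend(random.choice(BLOCKS))   # draw one block of four assignments at a time
allocation = allocation[:N_PARTICIPANTS]

print("".join(allocation))
print("experimental (A):", allocation.count("A"), "| sham (B):", allocation.count("B"))  # 20 / 20
```

Because every block contains exactly two A and two B assignments, any multiple of four participants ends up split evenly between the groups, which is what an allocation ratio of 1:1 requires.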
Once applied, the physiotherapist warmed the tape by rubbing his hand over it three times to maximize tape adhesion. To simulate the experimental taping technique with the mechanical effect eliminated for the sham KT group, the tape was applied in the same manner, but without tension and without any glide to the fibula. A longer I-shaped tape (28 cm in length) was used, as it was not stretched and otherwise would not cover the same distance on the skin as in participants of the experimental KT group. The same technique was applied to the other ankle.

Baseline Measurements

The demographics of participants were noted. Leg dominance was determined by instructing participants to kick a soccer ball. In this study, the physical activity (PA) level of the participants was evaluated as it could be a confounding factor for the results. Therefore, the Turkish version of the International Physical Activity Questionnaire Short Form was used to determine PA levels. The short form has nine items and provides data on PA in the last 7 days at four intensity levels: (1) vigorous-intensity activity such as aerobics, (2) moderate-intensity activity such as leisure cycling, (3) walking, and (4) sitting.

1) Balance parameters

Balance was assessed using the Biodex Balance System SD (BBS).

2) Limits of stability

This test assesses dynamic postural control and challenges participants to move and control their center of gravity within their base of support. Participants were asked to shift their weight to move the cursor from the center target to a blinking target and back as quickly and with as little deviation as possible. The same process was repeated for each of the eight targets. Targets on the screen blinked in a random order. The test was repeated three times with a 10-second rest between trials. The test duration and overall scores were recorded. The maximum score for the test was 100. Higher scores indicated better balance and greater control of the participants' stability.

3) Athlete single leg test

The ASL assesses single-leg dynamic balance. Participants were asked to stand barefoot with their dominant foot centered on the balance platform. In the single-leg stance, sway in the platform causes the participant to move, and the degrees of motion of the platform are recorded as the participant attempts to balance on the moveable surface. Subjects were instructed to keep the cursor in the middle of the target as they balanced without any support from the upper extremities or the non-dominant foot. Participants were allowed to receive simultaneous visual feedback of the balance platform's position and movement by a cursor on a target where the center was the optimal neutral position. Three 20-second dynamic trials were performed, and the average value of the three trials was recorded for each participant. Balance ability was measured in units of the stability index (StI). The lower the StI, the better the single-leg dynamic balance.

4) Clinical test of sensory integration of balance

The CTSIB provides information about the ability to stand upright under several sensory conditions. The sway index (SwI) was recorded for each condition of each trial. Lower SwI scores indicated better balance.

1) Dorsiflexion range of motion

Ankle dorsiflexion ROM (DFROM) was measured using the weight-bearing lunge test (WBLT). Participants were instructed to stand in front of a wall with the second toe, center of the heel, and knee kept in line. The distance between the second toe and the wall was measured.
Participants performed three trials for each foot, and the average value was used for statistical analysis.

2) Muscle strength

Dorsiflexor muscle strength was measured using a hand-held dynamometer (Lafayette Hand Held Dynamometer, model 01165; Lafayette Instrument, Lafayette, IN, USA). Participants were asked to sit on the side of a bed (height = 100 cm) with the hip and knee at 90° flexion. All participants warmed up before the test to perform it correctly. They were first shown the movement to be tested and then asked to perform it. After bringing the ankle to dorsiflexion, the dynamometer was placed over the metatarsal heads on the dorsum of the foot. Then, the participants gradually increased their muscle force to a maximum that had to be sustained for 6 seconds against the dynamometer. Three measurements were performed, and the highest score was used for analysis. A two-minute rest was given between measurements.

The distribution of demographic and clinical characteristics between the groups was analyzed using the chi-square test. Since the normality assumption was violated, non-parametric tests were used for statistical analysis. The Mann-Whitney U test and Wilcoxon test were applied for between-group and within-group analyses, respectively. A 5% type-I error level was used to determine statistical significance (P<0.05). The effect size for each nonparametric comparison was calculated as r=Z/√N, where Z is the Z score of the comparison and N is the number of total observations.

RESULTS

The groups were analyzed for all outcomes (Figure 2), and the analyses were performed according to the originally assigned groups. Post hoc power analysis with 5% type-I error was performed using effect sizes of the ASL score, revealing 82% power. There were no statistically significant differences in the distribution of demographic and clinical characteristics between the groups (P>0.05) (Table 1). No significant differences were found between the groups in terms of age, body mass index, and baseline values of the measured variables after the initial assessment (P>0.05) (Table 2). After the intervention, the ASL score improved significantly only in the experimental KT group, whereas the LoS score increased significantly in both groups; CTSIB scores, dorsiflexion ROM, and dorsiflexor muscle strength for both ankles did not change in any of the groups after intervention (P>0.05) (Table 3). Statistical analyses of between-group mean differences showed that neither intervention was superior to the other in improving any of the variables (P>0.05) (Table 3).

DISCUSSION

The results of this study showed that KT could be useful in improving single-leg dynamic balance immediately after application. According to relevant studies conducted in healthy individuals, the effect of KT on balance appears to be controversial. 13,14) Nakajima and Baldridge 14) applied the tendon correction technique of KT and found that overall dynamic postural control did not improve initially compared to that seen in the placebo group. Parallel to this study, Wilson et al. 13) applied KT to the gastrocnemius to facilitate the muscle and concluded that KT did not improve dynamic single-leg balance evaluated using the ASL test of the BBS compared with a sham application. Although the results of the current study contradict those reported by Wilson et al., 13) they are consistent with those reported by Nakajima and Baldridge. 14) However, it should be noted that a comparison of the results of these three studies may not be optimal because different KT techniques were used in each study. Some studies have investigated the effects of similar mechanical correction techniques using different taping materials. 6,15-17)
Most of these had no beneficial effects on dynamic postural control; however, the current study showed that both experimental and sham KT were effective in improving dynamic postural control. These findings suggest that the elastic properties of KT might have been more effective than the desired mechanical effects, such that dynamic postural control had also improved in the sham group, in which no tension was applied.

This study has several strengths. A computer-assisted balance measurement device (BBS) was used to assess balance-related outcomes, which is more sensitive and reliable than other non-computerized methods used in relevant studies. Another distinctive feature is that the effects of the mechanical correction technique of KT on balance were examined and compared to those of a sham application. The limitations of this study are as follows. The results should not be generalized since only healthy, pain-free individuals were included. In addition, the implications of laboratory or clinical findings for function and performance are essential; therefore, it might have been useful if performance had been measured using vertical jump or single hop tests. In this study, the immediate effects of KT were examined in a relatively small sample, and future studies including a larger sample size and long-term follow-ups would more clearly elucidate the effects of ankle KT on balance.

In conclusion, the mechanical correction technique of KT was useful in providing an immediate improvement in the single-leg dynamic balance of healthy individuals. However, it did not significantly change the sensory interaction of balance, DFROM, and muscle strength. Although both experimental and sham KT were effective in improving postural control, neither intervention was superior to the other.
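As a worked illustration of the effect-size formula r = Z/√N quoted in the statistical analysis, the sketch below runs a Mann-Whitney U test on two illustrative groups, recovers |Z| from the two-sided p-value via the normal approximation, and converts it to r. The data are invented, and recovering Z from the p-value is only one common way of obtaining it.

```python
# Sketch: effect size r = Z / sqrt(N) for a between-group Mann-Whitney U comparison.
import math
from scipy import stats

# illustrative ASL stability-index scores (lower = better balance), not data from this study
experimental = [1.8, 2.1, 1.6, 2.4, 1.9, 2.0, 1.7, 2.2, 1.5, 2.3]
sham = [2.6, 2.9, 2.4, 3.1, 2.7, 2.8, 2.5, 3.0, 2.2, 2.6]

u_stat, p_two_sided = stats.mannwhitneyu(experimental, sham, alternative="two-sided")

n_total = len(experimental) + len(sham)
z_abs = stats.norm.isf(p_two_sided / 2)       # |Z| recovered from the two-sided p-value
r = z_abs / math.sqrt(n_total)                # r = Z / sqrt(N), as in the text

print(f"U = {u_stat:.1f}, p = {p_two_sided:.4f}, |Z| ~ {z_abs:.2f}, r ~ {r:.2f}")
```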
2022-03-19T15:12:11.252Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "2ed36800d70f4168f59c34244fa2a2232914651d", "oa_license": "CCBYNC", "oa_url": "https://www.kjfm.or.kr/upload/pdf/kjfm-21-0015.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "872db2911d87169147230037db77de1416f4af46", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
52960836
pes2o/s2orc
v3-fos-license
Hominin evolution was caused by introgression from Gorilla

The discovery of Paranthropus deyiremeda in 3.3-3.5-million-year-old fossil sites in Afar, together with 30% of the gorilla genome showing lineage sorting between humans and chimpanzees, and a NUMT ("nuclear mitochondrial DNA segment") on chromosome 5 that is shared by gorillas, humans and chimpanzees and shown to have diverged at the time of the Pan-Homo split rather than the Gorilla/Pan-Homo split, provides conclusive evidence that introgression from the gorilla lineage caused the Pan-Homo split, and the speciation of both the Australopithecus lineage and the Paranthropus lineage.

Introduction

The Gorilla lineage as the "missing link"

"During many years I collected notes on the origin or descent of man, without any intention of publishing on the subject, but rather with the determination not to publish, as I thought that I should thus only add to the prejudices against my views." -Charles Darwin, 1871

Genome sequencing has been evolving along the law of accelerating returns (Kurzweil, 2004), with the total amount of sequence data produced doubling approximately every seven months (Stephens, 2015). With the genetic revolution, phylogenetic relationships are no longer limited to morphological characters; they can instead be read like an open book. The fossil record, when combined with genomics, can reveal an evolutionary history that was unimaginable based on morphological analyses alone. This thesis will explore a new chapter that shows how hominin evolution is not a single continuous lineage but instead the hybridization of two separate lineages, separated over millions of years, whose genomes recombined into the hybrid lineages Paranthropus and Australopithecus. Curiously, that hybridization also accounts for the "missing link": the hybridization of two lineages explains the absence of a single continuous lineage. The protagonist of the thesis is a single gene, a pseudogene on chromosome 5, tentatively called "ps5", that originates from the mitochondrial genome and belongs to a class of genes that have unique properties for tracing hybridization where it would have otherwise been impossible to read (Perna, 1996; Bensasson, 2001; Hazkani-Covo, 2010). This pseudogene alone provides definitive evidence that there was gene transfer between Gorilla, Pan and Homo at the time of the Pan-Homo split. With clear evidence of introgression, the rest of the genetic trail of hybridization can be read with ease, standing on a strong foundation of indisputable proof.

Ps5

In early screening of mitochondrial pseudogenes within the human genome, a pseudogene sequence on chromosome 5 was discovered (Li-Sucholeiki et al., 1999), which later turned out to be a large (~9 kb) NUMT, tentatively called "ps5" (Popadin, 2017). With advances in genome sequencing of Gorilla and Pan, the same ~9 kb pseudogene sequence was discovered at homologous chromosomal positions in both those lineages, while it was absent in Pongo. The pseudogene, when compared to the mitochondrial branches of Gorilla, Pan and Homo, is shown to have diverged between the three lineages not at the Gorilla/Pan-Homo split but rather at the Pan-Homo split (Popadin, 2017), clear evidence that there was gene transfer between the three lineages at that time. The ps5 pseudogene shares affinities with the gorilla lineage mtDNA (Popadin, 2017), which suggests that it originated in the gorilla lineage.
Since the probability of a NUMT insertion is unaffected by hybridization, it is clear that the insertion happened prior to the introgression event, and that the pseudogene had been evolving in the gorilla lineage for a period of time before introgressing into Pan and Homo (Popadin, 2017). With the high availability of genetic data for both the mitochondrial DNA and the pseudogene sequence, the exact history of ps5 can be read by comparing mutations within all three lineages. The ratio of synonymous to non-synonymous mutations is a marker to distinguish between coding and non-coding gene sequences, because non-synonymous mutations are selected against until the gene is inactivated (Tomoko, 1995). For the "stem" of the ps5 pseudogene (the mutations that accumulated prior to its divergence into three lineages), the fraction of coding ("mitochondrial") mutations to non-coding ("pseudogenic") mutations is 3/4 (Popadin, 2017). The mutation rate in the mitochondrial genome is significantly higher than in the nuclear genome, which means that the 25% pseudogenic mutations needed a proportionally longer time to accumulate. With an estimated 10x higher mutation rate in mtDNA (Brown, 1979) and 3x more "mitochondrial" mutations, it took about 3.3x longer to accumulate the "pseudogenic" mutations, giving a rough estimate of the insertion happening 1.8 Myr after the Gorilla/Pan-Homo split and 4.2 Myr before the introgression event that led to the Pan-Homo split (see Fig. 3).

Insights into hominin evolution from the Gorilla Genome Project

The Gorilla Genome Project produced the first complete genome of Gorilla, from a female western lowland gorilla, and it revealed a closer relationship between humans and gorilla than what morphological analyses had shown: in 30% of the genome, gorilla is closer to human or chimpanzee than the latter are to each other. At the time this was interpreted as incomplete lineage sorting (Scally, 2012), but the ps5 NUMT, as definitive evidence of gene transfer between Gorilla, Pan and Homo around the time of the Pan-Homo split (Popadin, 2017), shows that the lineage sorting is more parsimoniously explained as a result of introgression. Introgression is the transfer of genetic information from one species into the gene pool of another by repeated backcrossing of an interspecific hybrid with one of its parent species. Introgression may lead to speciation, in which the new hybrid lineages become reproductively isolated from parental populations (Baack, 2007), and since Pan and Homo have diverged through lineage sorting, with 15% of the introgressed genes ending up in Pan and another 15% in Homo, it is reasonable to conclude that the introgression caused the Pan-Homo split (Fig. 1), and therefore that it occurred at the time of the Pan-Homo split, around 6 million years ago.

Paranthropus, a companion to Australopithecus

With conclusive evidence that introgression from Gorilla caused the Pan-Homo split, it can also be seen that Paranthropus and Australopithecus, as two separate lineages, both speciated as a result of introgression from the Gorilla lineage (Fig. 2).
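Returning to the dating argument in the Ps5 section above, the relationship between the two mutation counts and the two time spans can be written out explicitly. The sketch below simply reproduces the stated ratio (about 3.3x) from the two assumptions quoted in the text: a roughly 10x higher mtDNA mutation rate and about 3x more "mitochondrial" than "pseudogenic" stem mutations. The absolute dates additionally depend on the split times one assumes, which the sketch does not attempt to fix.

```python
# Sketch of the ratio argument: time spent as a pseudogene vs. time spent as mtDNA.
# t_pseudo / t_mito = (n_pseudo / n_mito) * (rate_mito / rate_nuclear)
rate_ratio = 10.0        # assumed mtDNA / nuclear mutation-rate ratio (Brown, 1979)
mut_count_ratio = 3.0    # stated ~3x more "mitochondrial" than "pseudogenic" stem mutations

time_ratio = rate_ratio / mut_count_ratio
print(f"pseudogene phase lasted ~{time_ratio:.1f}x longer than the mitochondrial phase")  # ~3.3x
```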
The lineage sorting seen in Pan and Homo (Scally, 2012) can be predicted for Paranthropus as well, with gorilla-like features, such as strong muscles of mastication, being a result of lineage sorting from the introgression of Gorilla (Fig. 1), conserved because the browsing adaptations that are seen in Gorilla were co-opted for grazing (Cerling, 2017), in convergent evolution with other species in the Afar region, such as Eurygnathohippus (Melcher, 2013) and Theropithecus (Levin, 2015), both grass-eating species descended from browsers. The discovery of 3.2-3.5-million-year-old hominin fossils that show divergent evolution from Au. afarensis from the same time period (Haile-Selassie, 2012), featuring an abductable great toe (Fig. 4), is consistent with the existence of two separate hominin lineages at that time.

Pan-Homo split via Gorilla introgression

The lineage sorting of 30% of the gorilla genome that is seen in humans and chimpanzees (Scally, 2012) is a result of introgression, an event that caused the speciation of Pan and Homo (Fig. 1), and the two lineages diverged through lineage sorting, with 15% of the introgressed genes ending up in Pan and another 15% in Homo.

Paranthropus and Australopithecus were hybrid lineages

Traits within Paranthropus that resemble Gorilla, such as the sagittal crest (Fig. 5), are more parsimoniously explained as a result of the introgression event than by convergent evolution, and by lineage sorting similar to the 30% of the Gorilla genome that displays lineage sorting with Pan and Homo (Scally, 2012), which supports the hypothesis of Paranthropus as a lineage that also speciated from the introgression (Fig. 1).

The taxonomic classification of Paranthropus deyiremeda

The combination of data from genome sequencing with the fossil record provides an insight into how Paranthropus and Australopithecus are related, shows that both lineages speciated as a result of introgression from Gorilla, and provides a foundation for the taxonomic classification of Paranthropus deyiremeda. The foot stiffness in Au. deyiremeda (Haile-Selassie, 2012) is not a preserved character; it is a derived character that is absent in the Au. afarensis lineage as well as in Pan and Gorilla, that exists together with an abducted great toe, and that is contemporary with an adducted (human-like) hallux as a derived feature in Au. afarensis (Haile-Selassie, 2012). These are substantial adaptive differences that had accumulated over significant time spans of divergent evolution, indisputable data that Au. deyiremeda is a separate lineage that had adapted to a separate niche, which is also what justified its original classification as a "close relative" (Haile-Selassie, 2015). The dentognathic features that are similar to Paranthropus (Haile-Selassie, 2015) suggest similar dietary adaptations, and within the hypothesis of introgression as a cause of speciation, the most parsimonious explanation is lineage sorting from the introgression event, with adaptations for browsing, such as large muscles of mastication, that were co-opted for grazing (Cerling, 2017).

Conclusion

The speciation of Hominini was caused by introgression from Gorilla, and Pan, Australopithecus and Paranthropus diverged as a result of lineage sorting (Fig. 1). The indisputable evidence that introgression from Gorilla caused the speciation of Pan and Homo is made possible by the genome revolution, centered around the mitochondrial pseudogene "ps5", and it provides a map, a reference frame, that makes it possible to read the world in ways that were previously out of sight, and can provide an important reference for continued research into hominin evolution. What remains to be understood is what environmental and ecological factors triggered the hybridization.

Materials & Methods

Lineage sorting and the ps5 NUMT

Phylogenetic relationships can be read from genome comparison. Mitochondrial pseudogenes within the nuclear genome, which originate from mtDNA, provide an ideal marker for tracing hybridization events over large evolutionary time scales, and the ps5 NUMT in Gorilla, Homo and Pan has preserved a record of an event in hominin evolution that, when combined with the fossil record as well as genome analyses as a whole, shows a clear trail of introgression from Gorilla at the Pan-Homo split, and that this hybridization was what caused the speciation of hominins. The ps5 homologs in Gorilla, Pan and Homo, when compared to their mitochondrial genomes, show that ps5 formed from mtDNA at a point after the Gorilla/Pan-Homo split and that it originated in the Gorilla lineage, with a rough estimate of insertion into the nuclear genome 1.8 Myr after the Gorilla/Pan-Homo split; it then evolved within the nuclear genome of Gorilla over about 3.3x the time period during which it accumulated "mitochondrial" mutations, to then be transferred to the common ancestor of Pan and Homo during the hybridization event, of which ps5 is a small but important record. With the exponential growth rate of genome sequencing, the amount of sequence data produced doubling approximately every seven months (Stephens, 2015), there are full genome sequences for Homo, Pan and Gorilla (Venter, 2001; Waterson, 2005; Scally, 2012), and the comparison of all three lineages showed, quite unexpectedly, that in 30% of the Gorilla genome, gorilla is closer to human or chimpanzee than the latter are to each other. In other words, there is a genomic record of lineage sorting between Pan and Homo for 30% of the Gorilla genome, with 15% ending up in Pan and another 15% in Homo. Knowing that there was gene transfer at the time of the Pan-Homo split, the lineage sorting is most parsimoniously explained as a result of introgression from Gorilla, in a hybridization event that also transferred the ps5 NUMT from Gorilla to the common ancestor of Pan and Homo, and that led to the Pan-Homo split as the two lineages diverged through lineage sorting of the introgressed genes (Fig. 1).

The fossil record combined with genomics

With ps5, the lineage sorting seen in Pan and Homo (Scally, 2012) can be predicted for Paranthropus as well, with gorilla-like features, such as a sagittal crest from strong muscles of mastication, being a result of lineage sorting from the introgression of Gorilla (Fig. 1), conserved because the browsing adaptations seen in Gorilla were co-opted for grazing (Cerling, 2017). Through the combination of genomics and the fossil record, a foundation for the taxonomic classification of Paranthropus deyiremeda can be constructed, with clear evidence of divergent morphological features in P. deyiremeda and Au. afarensis, which fits perfectly with lineage sorting between the two hybrid lineages.
The taxonomic classification of P. deyiremeda extends the fossil record of the Paranthropus lineage backwards in time to the mid-Pliocene, 3.5 Mya, and shows that by the mid-Pliocene the hybrid lineages Australopithecus and Paranthropus had adapted to separate niches, each lineage conserving its own set of traits from their two parental lineages (Haile-Selassie, 2016; Wood, 2016). It also shows that the evolution of genes that ended up in Australopithecus, and therefore in extant humans, as well as in Paranthropus, can and should be traced along the gorilla lineage as well.

Fig. 2. Morphological traits in Gorilla and the hybrid lineages Paranthropus and Australopithecus. Introgression from Gorilla caused the speciation of both Australopithecus and Paranthropus, and means that traits that have evolved independently in the gorilla lineage were transferred into the hybrid lineages. Paranthropus are often described as "gorilla-like": they have sagittal crests, which suggest strong muscles of mastication, and broad, grinding herbivorous teeth, which led to the name "nutcracker man" for Paranthropus boisei, who lived between 2.4 and 1.4 Ma.

Fig. 3. Phylogenetic tree with hominine mtDNA and ps5 homologs. Joint phylogenetic tree of hominine mtDNA and the ps5 pseudogene of mtDNA. Black and pink lines depict the mitochondrial and the pseudogene lineages respectively, diverging from their mitochondrial common ancestor. The insertion of mtDNA fragments into the nuclear genome of Gorilla can be roughly estimated to 1.8 Myr after the Gorilla/Pan-Homo split, and the transfer to Pan and Homo to the human-chimpanzee split, along with 30% of the Gorilla genome.

Fig. 4. Fossil record for a taxonomic classification of Paranthropus deyiremeda. The Burtele foot, BRT-VP-2/73, found in 2009 in Burtele at Woranso-Mille, Afar, and tentatively assigned to Au. deyiremeda, contemporaneous with Au. afarensis, shows a distinct locomotor adaptation as it retains a grasping hallux, in contrast to the human-like adducted hallux that had developed in Australopithecus afarensis. The conclusive evidence that hominin evolution was caused by introgression from Gorilla suggests that Au. deyiremeda is better classified as Paranthropus deyiremeda. With a revised taxonomic classification, building on a combination of genomic data and fossil records, it can be predicted that Paranthropus and Australopithecus, like Pan and Homo, diverged through lineage sorting as the two lineages co-opted genes from the Gorilla lineage to adapt to separate niches.

Fig. 5. Gorilla-like traits in the Paranthropus lineage as a result of lineage sorting from Gorilla. Paranthropus aethiopicus, 2.8-2.3 Ma, with gorilla-like sagittal cranial crests as an attachment for strong muscles of mastication, a dietary adaptation. The genetic proof of an introgression event at the time of the Pan-Homo split shows that the most parsimonious origin for those features within Paranthropus was lineage sorting from the introgression event, originating in Gorilla, rather than convergent evolution. Image from the public domain (CC BY-SA 3.0).
2018-10-02T22:35:25.000Z
2018-08-30T00:00:00.000
{ "year": 2018, "sha1": "e1a8e2e9270c120d430ec33ab84b82af56015f5f", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=87215", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "2645154e72763b9625c5be4b51061354466f00b7", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
254435734
pes2o/s2orc
v3-fos-license
Exposure to Antibiotics and Neurodevelopmental Disorders: Could Probiotics Modulate the Gut–Brain Axis? In order to develop properly, the brain requires the intricate interconnection of genetic factors and pre-and postnatal environmental events. The gut–brain axis has recently raised considerable interest for its involvement in regulating the development and functioning of the brain. Consequently, alterations in the gut microbiota composition, due to antibiotic administration, could favor the onset of neurodevelopmental disorders. Literature data suggest that the modulation of gut microbiota is often altered in individuals affected by neurodevelopmental disorders. It has been shown in animal studies that metabolites released by an imbalanced gut–brain axis, leads to alterations in brain function and deficits in social behavior. Here, we report the potential effects of antibiotic administration, before and after birth, in relation to the risk of developing neurodevelopmental disorders. We also review the potential role of probiotics in treating gastrointestinal disorders associated with gut dysbiosis after antibiotic administration, and their possible effect in ameliorating neurodevelopmental disorder symptoms. Introduction: Neurodevelopmental Disorders Brain development is a dynamic process resulting from a constant interplay between genetic and environmental factors [1]. Neurodevelopmental disorders (NDDs) are a group of heterogeneous syndromes with functional impairments in the central nervous system (CNS) caused by the disruption of essential processes during neurodevelopment [2]. Brain dysfunctions during development reflect deficits in social communication, verbal, nonverbal, and social interactions and are characterized by repetitive behaviors and activities [3]. NDDs include autism spectrum disorder (ASD), intellectual disability (ID), attention deficit hyperactivity disorder (ADHD), epilepsy [4], and schizophrenia (SCZ). To date, there has been a worldwide increase in the prevalence rates of NDDs: ID, 0.63%, ADHD, 5-11%, ASD, 0.70-3%, and SCZ 0.32% [5][6][7][8][9][10]. Although NDDs tend to run in families, as suggested by twin and family studies, the inheritance patterns are complex and still under investigation [11]. Genetic alterations including de novo mutations, and rare and common copy number variants [12][13][14][15][16][17][18] have been found to be an emerging source of genetic causality [19]. Despite the huge amount of risk in genes found in the human population, there are overlapping functions that affect a limited number of biological pathways [20]. A relevant and recurrent pathway is represented by the synapse function along with CNS development. Risk genes involved in the synaptic pathway are found expressed in dendrites, axons, and pre-and postsynaptic terminals, playing different roles [21,22]. There are several genetic mutations found in synaptic genes identified in individuals, encoding for proteins [60]. Moreover, a reduction of SCFA levels can affect both intestinal barrier and bloodbrain barrier (BBB) integrity, leading to neuroinflammation with direct implications in brain functions [61]. In immunometabolism, gut microbes promote peripheral immune responses associated with CNS disorders through driving, for instance, inflammatory Th17 cell responses [62]. Long-range effects on CNS-resident immune cell function, can be promoted following the release of bacterial metabolites, such as cytokines, into the bloodstream and lymphatic systems [59]. 
On the other hand, the loss of intestinal barrier integrity due to gut inflammation can activate innate and adaptive immune cells to release pro-inflammatory cytokines IL-1β, IL-6, and tumor necrosis factor-α (TNFα) into the circulatory system, leading to systemic inflammation. In the last decade, the metabolism of tryptophan is gaining attention in the microbiota-gut-brain axis because this essential amino acid serves as a precursor to a variety of imperative bioactive molecules generated by both the enterochromaffin cells of the host and gut microbes [63]. Moreover, 5-hydroxytryptamine (5-HT), commonly known as serotonin, is the main product of tryptophan conversion and, incidentally, it has been demonstrated that some bacteria found in gut microbiota are able to synthesize 5-HT, affecting thus the plasma levels of this neurotransmitters [64]. It is true that 5-HT has a pivotal role in the regulation of several functions both at the intestinal and central level, such as modulation of mood, memory, and cognition [65]. It has been found that altered levels of 5-HT, mainly synthesized in the gut, are implicated by different diseases, such as irritable bowel syndrome (IBS) as well as humor disorders (i.e., anxiety and depression-like disorders) [66] and ASD [67]. During inflammation or in response to stress, tryptophan metabolism shifts to kynurenine production, leading to the formation of either kynurenic acid or quinolinic acid. The balance between these compounds is crucial in psychiatric and neurological disorders; indeed, kynurenic acid has a neuroprotective role, and quinolinic acid is involved in neurotoxicity. A variety of health benefits for the neurotransmitter of GABA have been shown, including neurostimulation and gut modulation [68,69]. The GABAergic system has an important implication also in the NDDs and neurodegenerative disorders, in terms of alterations of GABA concentrations and GABA receptor expression [70,71]. A body of evidence has associated alterations in the gut-brain axis with drug and antibiotics therapies and, in turn, altered levels of specific neuroactive molecules in different types of NDDs [60,72]. It is now clear that the gut-brain axis plays an active role in the neurodevelopmental processes, including the establishment of BBB, neurogenesis, maturation of microglia, and myelination [73] with long-term health effects ( Figure 1). NDD-Associated Gastrointestinal Symptoms Recently, gut microbiota has assumed considerable importance for its function in influencing brain functions, including social behaviors [74]. Individuals with NDDs show different compositions of the intestinal microbiome in terms of the number and types of species. A balanced and appropriate composition of the gut microbiota confers health benefits; a disruption of this balance can reflect on brain function and behavior by acting through the microbiome-gut-brain axis [46], our "second brain" [45]. The relative abundance of the single species constituting the microbiota and its metabolites are associated with the onset of neurologic and psychiatric disorders [75]. GI disorders (ranging from severe constipation to diarrhea) are frequently observed in individuals with ASD; however, they are estimated to occur with a wide variability, from 2.2% to 96.8% of the ASD population [76]. 
Despite this considerable heterogeneity, overall, most studies highlight a greater prevalence of GI problems in children with ASD compared with their neurotypical counterparts, suggesting that microbial dysbiosis could contribute to gut symptoms in neurodevelopmental disorder (NDD)-affected individuals [77,78]. Moreover, a compromised gut epithelial barrier (so-called "leaky gut") has been described in association with GI problems in NDDs [79]. This will enable bacteria and metabolites to cross the GI barrier and trigger aberrant immune responses. Accordingly, elevated levels of inflammatory cytokines in children with ASD have been reported in association with symptom severity [80]. Cytokine levels have also been correlated with ASD-associated bacterial populations (e.g., Clostridiales) and GI symptomatology [81,82]. Differences in microbial composition among ADHD Dutch young patients were found by sequencing 16S rRNA extracted from fecal samples [83][84][85]. A significant decrease in the gut microbial diversity in ADHD children has been reported: an unusually higher level of the family Bacteroidaceae, Neisseriaceae that causes a significant decline in the gut microbial diversity in ADHD patients, differing from a higher level of Prevotellaceae, Catabacteriaceae, and Porphiromonadaceae found in the control group [86]. An altered Bifidobacterium population during early childhood has been correlated with a high risk of developing ADHD [87,88]. Among the NDDs, SCZ is highly heterogeneous with a genetic and epigenetic component. SCZ-affected individuals are characterized by gut microbiota dysbiosis with consequential GI problems like gastroenteritis, colitis, and IBS [89][90][91][92]. Altered species diversity identifies the gut microbiota in SCZ patients [93] causing cognitive and functional anomalies [94]. However, higher levels of oropharyngeal microbial species such as Bifidobacterium dentium, Lactobacillus oris, Veillonella atypica, Dialister invisus, Veillonella dispar, and Streptococcus salivarius were found in SCZ individuals with respect to healthy controls [95,96]. In NDD individuals, there is a significant decrease or complete absence of Bifidobacteria, with respect to the control subjects [97]. A deficit of intestinal Bifidobacteria has been correlated with indigestion, vitamin B12 deficiency, dysregulated immune system, gut inflammation, depression, and anxiety-like behaviors [98,99]. The microbiota can be included among the wide range of environmental factors affecting neurodevelopment. Indeed, microbiota disruption due to antibiotics administration can impact neurogenesis and behavioral deficits [100]. Alteration in the gut microbiome composition has been observed in several NDDs conditions, including depression [101], autism [102], and other conditions [103]. Mice models have been extensively used to understand the contribution of the microbiota on behavior and how gut dysbiosis may contribute to development of NDDs. Germ-free mice colonized with fecal microbiota from children with ASD show increased autistic-like behaviors in comparison to controls [104]. Moreover, specific bacteria groups, such as Clostridiaceae, Lactobacillales, Enterobacteriaceae, and Bacteroides, were found to be enriched in the ASD gut microbiota composition in mice in comparison with the control microbiota. Interestingly, mice that were treated with gut microbiota from ASD patients displayed an alternative splicing in the brain of ASD-related genes [105]. 
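Since several of the 16S rRNA studies summarized above are compared in terms of "gut microbial diversity", a short sketch of how such a diversity value is typically derived from taxon counts may help. The abundance table and the choice of the Shannon index are illustrative assumptions, not taken from any of the cited studies.

```python
# Sketch: Shannon diversity from 16S-derived taxon counts (illustrative numbers only).
import math

# hypothetical read counts per bacterial family in one fecal sample
counts = {"Bacteroidaceae": 4200, "Prevotellaceae": 2600, "Neisseriaceae": 300,
          "Porphyromonadaceae": 900, "Bifidobacteriaceae": 1500, "Other": 500}

total = sum(counts.values())
rel_abundance = {taxon: n / total for taxon, n in counts.items()}

shannon = -sum(p * math.log(p) for p in rel_abundance.values() if p > 0)
print(f"Shannon diversity H' = {shannon:.2f}")  # lower values indicate a less diverse community
```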
Several mutations have been reported in risk genes for NDDs; genetic mouse models have been generated by introducing the human mutation into the endogenous murine gene, whereas environmental models of ASD are characterized by exposure to a specific external influence [106]. Both types of mouse models have been used to study how genetic or environmental modification impacts the gut-brain axis [107]. The R451C mutation, mapping in the postsynaptic protein neuroligin3 (NLGN3), was found in two affected brothers [108] who, beyond their behavioral deficits, displayed GI dysfunction including chronic gut pain, diarrhea, and esophageal regurgitation [109]. The mouse model expressing the human R451C NLGN3 mutation displays altered in vivo small intestinal motility and increased numbers of myenteric neurons in the small intestine, suggesting that the mutation alters enteric nervous system (ENS) function and structure [109]. In addition, the R451C NLGN3 mutation alters mucus density and the spatial distribution of bacterial species in the GI tract of mice [110]. Additionally, in the knockout (KO) mouse model for NLGN3, gut dysfunction is characterized by faster colonic motility and an increased colonic diameter, although GI structure and enteric neuron populations were unaltered [111]. Similarly to the alterations observed in mouse models for NLGN3, the deletion of Shank3, a leading ASD candidate gene encoding a neuronal scaffold protein implicated in the organization of the synapse through several protein-protein interactions, including with neuroligin-1, alters gut function and the microbiome [112,113]. SHANK3αβ KO mice show altered GI morphology and display differences in the composition of fecal microbiota [112]. Mutations in chromodomain helicase DNA binding protein 8 (CHD8) are among the most common de novo mutations associated with ASD in the developing brain [114]. Mutations in CHD8 increase susceptibility to GI issues in affected individuals [115,116]. CHD8 mutations associated with ASD caused lower intestinal motility when expressed in zebrafish [115]. The CHD8+/− mouse model has a shorter intestine [117]; the width of the mucus layer is reduced in the small intestine of CHD8+/− mice, and the number of goblet cells is decreased. Moreover, it was found that the CHD8+/− mouse has a higher bacterial load and microbiota diversity in the colon with respect to WT mice [118]. Fragile X syndrome is an NDD caused by a mutation in the X-linked FMR1 gene [119]. In the Fmr1 KO2 mouse, the gut microbiome is altered, with changes in bacterial species belonging to the genera Akkermansia, Sutterella, Allobaculum, Bifidobacterium, Odoribacter, Turicibacter, Flexispira, Bacteroides, and Oscillospira [120]. A nongenetic, idiopathic ASD mouse model is the black and tan brachyury mouse strain (BTBR), which is characterized by repetitive behavior and social deficits [121]. The BTBR mouse model displays a marked intestinal dysbiosis with respect to the WT strain; these mice exhibit an altered gut microbial composition, altered social behavior, increased gut permeability, and proinflammatory biomarkers in the colon [122]. During pregnancy, alterations of the maternal gut can influence the microbial diversity and immunity of the offspring, predisposing them to the onset of NDD conditions [123][124][125]. 
This is the case of the maternal immune activation (MIA) mouse model, in which pregnant mice are administered a potent immune activator and generate pups with ASD-like behaviors [126]. MIA mice show decreased intestinal barrier integrity, dysbiosis of the intestinal microbiota, and neurodevelopmental abnormalities in the offspring [127]. MIA offspring display an altered serum metabolomic profile, increased gut permeability, and abnormal intestinal cytokine profiles, such as IL-6 [128], IL-1β, and TNF-α; in addition, the total number of bacteria is significantly reduced in MIA offspring [129]. The influence of microbiota composition on gut-brain communication is not fully understood and involves multiple mechanisms and factors. Microbiota colonization is a pivotal event; the gut microbiota changes during pregnancy, and maternal antibiotic administration during lactation influences the milk composition, which can affect the infant gut microbial composition [130]. In other words, the human gut microbiota exhibits distinct and singular metabolic traits characterized by a maternal signature [131]. At present, it is not possible to define a single bacterium as a hallmark of NDDs despite the significant increase, decrease, or complete absence of species found in affected individuals and mouse models of these disorders. The increased or decreased abundance of gut microbes can be correlated with a disrupted GI mucosal barrier, pathological intestinal conditions, and decreased immune surveillance, due to altered gut metabolite production. It is important to underline that some bacterial species have positive or negative effects on gut-brain axis outcomes in humans and in mouse models, and the same components of the gut microbiota do not have the same effect on different individuals, as suggested by the evidence obtained in both genetic and environmental NDD mouse models; in fact, single bacterial species may be important either for health or disease conditions. Prenatal Antibiotics Exposure and the Risk of ASD Pregnancy is a crucial period for fetal brain development. Embryonic brain development can be divided into three main stages: the first trimester, which is characterized by the formation of the neural tube and the production of neural progenitor cells and neurons; the second trimester, when neurons migrate to the cortical layer and begin to form synaptic connections; and finally, the third trimester, during which neuronal axons, glia, and oligodendrocytes are integrated into neural circuits [132]. During the prenatal period, the fetal brain is particularly sensitive to the surrounding environmental stimuli impacting CNS development [133]. A possible disturbance of the prenatal environment due to drug exposure, such as antibiotic administration, might cause neurodevelopmental alterations and subsequently lead to the onset of NDDs. Antibiotic treatment during pregnancy is continuously increasing as intrapartum prophylaxis; however, in utero exposure to treatments may affect the newborn [134][135][136]. The most frequently prescribed antibiotics are "macrolides", which include erythromycin, azithromycin, and clarithromycin [137]. A systematic study on the use of macrolides during the first trimester of pregnancy showed consistent evidence of an increased risk of spontaneous abortion and cardiovascular malformation in the fetus [40]. 
A large body of epidemiological work associates maternal infection and the use of antibiotics during pregnancy with an increased risk of developing NDDs [138] and altered brain functions in the offspring. However, it is still unclear how exposure to antibiotics in utero affects the maturation of the gut microbiome from the fetal stage to adulthood and the development of the CNS in children. A long-term follow-up of children whose mothers took part in the "ORACLE II" trial of antibiotics showed an increased prevalence of cerebral palsy associated with antibiotic treatment [138]. Neural tube defects were reported more frequently in children from women exposed, in the 12 weeks before conception, to trimethoprim, an antibiotic that blocks the dihydrofolate reductase enzyme responsible for the production of active folate [139], which is necessary for the closure of the neural tube during development. This indicates a direct correlation between treatment of the mother and effects in the newborn. Longer treatments increased the correlation between antibiotics and ADHD development [140]. Population-based studies showed an association between the maternal use of antibiotics during pregnancy and ASD in children [141,142]. Two main studies involved 96,736 and 780,547 children, respectively. The first study reported a significant increase in the risk of ASD after treatment with antibiotics during pregnancy [142]. In the second, prenatal exposure to antibiotics was strongly associated with the risk of ASD [141]. The associations between antibiotic exposure and later development of NDDs could reflect direct effects on the gut-brain axis. Prenatal exposure was associated with a 32-41% (hazard ratios (HR) = 1.32-1.41) increase in the risk of sleep disorders, whereas the risk estimates for NDDs increased by 12% to 53%. The analysis included antibiotics used against airway, urinary tract, skin, and soft tissue infections [143]. To explore in depth the contribution of the maternal gut microbiota during pregnancy to newborn brain development, researchers have used animal models treated with antibiotics known to alter the maternal GI tract. Tochitani et al. showed that the administration of antibiotics to pregnant dams during the embryonic stage (E9-E16) perturbs the composition of the maternal gut microbiota and the flora of the offspring at postnatal day 24 (P24), and this affects social behavior and locomotor activity with respect to control animals [144]. Similar evidence comes from a recent study in which C57BL/6J mouse dams were exposed to antibiotic treatment dissolved in drinking water from gestational day 12 through postnatal day 14 (P14) of the offspring. Male and female offspring display ASD-like behaviors, including altered ultrasonic vocalization production during maternal separation and altered thermoregulation in comparison to age-matched controls [145]. The role of gut perturbation due to antibiotic administration during pregnancy on brain development and CNS dysfunction has also been studied in rat models. Females were exposed, during the gestational period, to antibiotics that proved to alter the social behavior of the offspring [146]. In the same study, pups exposed to prenatal antibiotic treatment displayed anxiety-like behavior and greatly reduced social interactions [146]. 
Vuong and colleagues identified early mid-gestation as a critical period during which the maternal microbiome promotes fetal thalamocortical axonogenesis in the offspring in order to support developmental processes regulating behaviors in adult mice [147]. Taken together, these findings support the influence of maternal stimuli on fetal development; however, the molecular pathways implicated, and the metabolites involved, still remain unclear. Early Antibiotics Exposure and the Risk of ASD Brain plasticity is the change in neuronal networks in response to various stimuli that can permanently shape the brain [148]. Neuronal plasticity is sensitive to internal and/or external stimuli, and the interaction between genes and the environment is influenced by a variety of factors [149]. Childhood and adolescence are pivotal periods for brain development, which include critical events such as neurogenesis, axonal and dendritic growth, and synaptogenesis [150]. Antibiotics, often essential for treating infections in an early stage of life, can promote long-lasting adverse effects on brain development [151][152][153][154][155]. An association between antibiotic use in children and the later development of NDDs has been shown by several studies [156][157][158]. A higher risk of developing severe mental disorders at an adult age was found in children and adolescents treated with antibiotics, with the most pronounced effects observed for broad- and moderate-spectrum antibiotics [159,160]. Early-life antibiotic exposure has been linked to a lower intelligence quotient and social scores, as well as higher behavioral difficulty scores, suggesting that it may represent a risk factor for ADHD, depression, and anxiety disorders [161]. ADHD is one of the most common NDDs, and several studies have demonstrated that exposure to antibiotics in early life alters the equilibrium of the gut microbiota and contributes to the development of ADHD [162]. The window between 0 and 2 years represents an important period in brain development, wherein changes, modification, and organization of the brain occur more rapidly than at any other time during childhood [163]. The relationship between antibiotic exposure in the first 2 years of life and cognitive deficits was examined in a cohort of 342 children at the age of 11. This study showed that toddlers treated with antibiotics had an increased risk of developing behavioral deficits and depression symptoms during childhood [164]. The association between the use of antibiotics during early life, ADHD, and ASD was also studied in twins from 7 to 12 years old in the Netherlands and in 9-year-old twins from Sweden. In both studies, children who were exposed to antibiotics between 0 and 2 years of age (any pharmaceutical formulation, oral or intravenous, as parent-reported) showed an increased risk of ADHD and ASD; however, the importance of the familial environment and of genetic influences in the etiology of NDDs also has to be considered [165]. Children exposed in the first two years of life to the most prescribed antibiotic classes (penicillins, cephalosporins, and macrolides, which markedly impact microbiota composition (see Table 1)) were more likely to develop asthma, allergic rhinitis, atopic dermatitis, and ADHD [36]. The effect of postnatal exposure to penicillin was tested in a cohort of 677,403 children and was associated with an increased risk of developing ASD in comparison to untreated children [166]. 
Children are more susceptible to bacterial infections than adults, and severe infections involving the nervous system during childhood might also result in the onset of NDDs later in life [167][168][169][170][171]. Infections treated with antibiotic drugs, and infections requiring hospitalizations in particular, were associated with increased risks of SCZ and psychiatric disorders, which may be mediated by effects of infections/inflammation in the brain, alterations of the microbiome, genetics, or other environmental factors. The connection between infections during periods of rapid growth and brain plasticity and the diagnosis of NDDs may be explained by microbial metabolites, produced by the enteric bacteria, interfering with normal brain development. Under normal conditions, Lactobacillus and Bifidobacterium spp. produce the inhibitory neurotransmitter GABA, affecting its activity in the brain. Antibiotic treatment alters the production of metabolites, such as SCFAs or amino acids, which may lead to dysfunction of the epithelial barrier in the intestine and of the BBB in the brain. Individuals with NDDs were more likely to have experienced severe infections of the CNS from age 0-3 years [172]. A study on a Danish population confirms these observations: a wide range of NDDs in relation to previous CNS infections was reported, with a significant association (HR 3.29) with developing ID and ASD [173]. Otitis media is one of the most common infections in childhood, with high incidence in children with ASD [174]. Wimberley and colleagues found an increased incidence of ASD associated with otitis media infection and antibiotic treatment in a study based on a Danish cohort of 780,547 children [141]. The correlation between infections and antibiotic treatment may contribute to a later diagnosis of NDDs [167]. Antibiotics used to treat gastroenteritis caused by Shigella infection, when administered at a young age (5-18 years), increase the risk of developing ADHD relative to children who did not receive them [175], confirming that antibiotic treatment, by affecting the human microbiome, plays an important role in the development of NDDs [176]. The effects of early exposure to antibiotics on gene expression and behaviors have been widely documented in several in vivo studies using the murine model [177][178][179][180]. Leclercq and colleagues exposed mice of both sexes to a low dose of penicillin in late pregnancy and early postnatal life and showed changes in anxiety-like and aggressive behavior and decreased sociability [181]. They showed that antibiotic treatment caused long-lasting changes in the mouse gut microbiota and altered BBB integrity in the hippocampal brain region [181]. At the molecular level, antibiotic administration altered the expression of molecules involved in memory and learning, such as brain-derived neurotrophic factor (BDNF), which is crucial for promoting neuronal survival and synaptic plasticity [182]. In young mice, disruption of the gut microbiota by antibiotics causes deficits in memory retention and leads to a significant reduction of BDNF production in the hippocampus of the adult brain [183]. Levels of BDNF and its receptor, tropomyosin-related kinase B, are downregulated in the hippocampus and are unchanged in the prefrontal cortex of treated animals [182]. 
Dysbiosis in mice can be obtained by exposing young animals to a cocktail of broad-spectrum antibiotics, which causes gut inflammation, depressive-like symptoms, social behavior and cognitive deficits, along with changes in brain neuronal firing and microglial/glial activation [178]. A recent study shows that perinatal exposure to antibiotics affects cortical development with a long-lasting effect on brain functions in young mice [184]. Perinatal penicillin exposure significantly increased sensorimotor gating and decreased the ability to discriminate between textures in adolescent mice. These behavioral alterations were accompanied by increased spontaneous neuronal activities and a delayed maturation of inhibitory neuronal circuits [184]. Excessive use of antibiotics induces neurotoxic effects on the mouse brain [185]. Amoxicillin administration, at clinical doses, has been reported to induce depression in young rats [186]. Significant changes in gene expression have been observed in both the frontal cortex and amygdala after a 10-day exposure of postnatal mice to a low dose of penicillin [187]. Alterations in the microbiota were more extensive in mice that were exposed to antibiotics during the gestation of the dams, confirming the connection and transfer between the maternal microbiota and the embryos, compared to mice treated after birth [187]. These results provide evidence that early-life antibiotic exposure in humans and mice has effects not only on the gut microbiome but also on gene expression within critical brain structures, which are vulnerable to perinatal insults [188]. Early-life antibiotic exposure can have unexpected consequences for childhood health; however, these findings require further validation [189,190]. Probiotics and Use of Probiotics in NDDs In this paragraph, all the names of lactobacilli are cited according to the original species names, preceding the reclassification of the Lactobacillus genus that was published in 2020 [191]; for the current nomenclature please see the following link: http://lactotax.embl.de/wuyts/lactotax/ (last updated on 2 September 2021). According to the current definition, probiotic bacteria, traditionally belonging to Gram-positive taxa (i.e., Lactobacilli and Bifidobacteria), are "live microorganisms that, when administered in adequate amounts, as part of a food or a supplement, confer a health benefit on the host" [192]. In the last several decades, probiotics have been largely used as adjuvants in the treatment of several diseases, mainly for the maintenance of a healthy gut environment through their impact on gut microbiota composition, interactions with the intestinal epithelium, and finally with the immune system [193]. So far, the role of probiotics in treating GI disorders is well known, especially for the restoration of gut dysbiosis associated with antibiotic administration, but probiotics have shown the potential to have a broad spectrum of health benefits, ranging from digestive disorders to neurodevelopmental and neurodegenerative disorders [65]. In this context, due to the crucial role of the gut microbiota in modulating human brain function via the gut-brain axis [194], a particular class of probiotics, defined by the term "psychobiotics", has shown the ability to specifically confer health benefits at the brain level. Like conventional probiotics, psychobiotics can directly modulate gut microbiota composition and functionality. 
During their transient colonization at the intestinal level, they can contribute to the maintenance of a healthy gut microbiota by producing growth factors that favor the growth of beneficial microbes, for example during antibiotic therapy, by competing for nutrients and/or producing inhibitory molecules that protect from pathogen colonization, and by interacting with the intestinal mucosa and modulating the intestinal immune system [70]. The most frequently proposed mechanism of action by which psychobiotics exert their beneficial effects on mental health is the production and/or stimulation of different types of neuroactive molecules, previously described, directly involved in the two-way microbiota-brain communication [195]. Lactobacilli and Bifidobacteria are involved in the production of GABA, with some Lactobacillus spp. also producing acetylcholine, whereas Bacillus species can stimulate the production of dopamine and noradrenaline. Serotonin has been found to be produced by certain Escherichia, Enterococcus, and Streptococcus species [195,196]. Moreover, the production of neurotransmitters has been reported as a species-specific feature in the Lactobacillus genus [197]. Intestinal bacteria can also be involved in modulating neurotransmitter levels by regulating the metabolism of neurotransmitter precursors, as in the case of increased plasma tryptophan levels induced by Bifidobacterium infantis [198] or, for example, by indirectly stimulating the serotonergic system through the production of SCFAs [199]. Interestingly, bacteria of food origin have recently been shown to be able to modulate neurotransmitters [200]. For example, Lactobacillus plantarum DR7 improved stress, anxiety, and cognitive functions by stimulating dopamine and serotonin pathways [201,202], whereas the food-associated L. plantarum C29 showed the ability to improve cognitive functions in adults with mild cognitive impairments [203]. Several studies reported the ability of different probiotic species to restore neurotransmitter levels in diverse neurological diseases. In Table 2, we summarize several pieces of preclinical and clinical evidence of the use of probiotics in NDDs, and we discuss this evidence below. Lactobacillus rhamnosus (JB-1) has been found to reduce stress-induced corticosterone and anxiety- and depression-related behavior in mice by modulating GABA expression in the brain via the vagus nerve [204]. Similar effects have been reported for Lactobacillus helveticus R0052 and B. longum R0175 in rats [205]. Amelioration of depression-like behavior via reduction of 5-HT and dopamine levels in the brain of rats has been found after administration of Bifidobacterium infantis 35624 [206]. Intake of Bacteroides fragilis restores normal 5-HT levels in an ASD animal model [207], whereas L. plantarum PS128 increased the dopamine level in the prefrontal cortex of early-life stress mice [208] and improved many of the behavioral aspects of ASD, such as disruptive and rule-breaking attitudes and hyperactivity/impulsivity in children [208]. A strain of L. plantarum (MTCC1325) was also able to restore acetylcholine in rats affected by neurodegenerative disorders [209]. 
Probiotic Lactobacillus helveticus NS8 also showed the ability to modulate neuroactive molecules, such as BDNF, serotonin, and noradrenaline in the hippocampus of rats, as well as to increase circulating anti-inflammatory cytokines, leading to improvement of both the intestinal barrier and the BBB, and in turn ameliorating the global inflammation status [210]. Although the full mechanism by which probiotics ameliorate diverse NDD symptoms is still unresolved, a healthy gut microbiota, and in turn the maintenance of a proper signaling network from the ENS to the CNS, is recognized to be fundamental for proper brain function; thus, the use of probiotics and related metabolites as an alternative intervention strategy to ameliorate and/or counteract NDDs is emerging (and clinical trials are increasing but still limited). The generation and use of ASD animal models have shown not only that the microbiota is essential for the development of social behavior [211], but also that restoring normal gut microbiota components with probiotics can correct GI permeability defects. In fact, the altered microbial composition and ASD-related abnormalities are linked to reduced intestinal production and toxin absorption [128]. Probiotic administration has been shown to improve ASD symptoms [212], and to prevent somatic symptoms in SCZ [213] and in drug-resistant epilepsy [214]. Intake of fermented food and probiotics, such as Bifidobacterium spp. and Lactobacillus spp., has been shown to ameliorate psychiatric disorder-related behaviors, including anxiety, depression, obsessive-compulsive disorder, and memory skills, as well as to attenuate stress responses [215]. Two randomized controlled studies by Santocchi et al. [216,217] evaluated the effects of a diet supplemented with a mix of probiotics, called De Simone Formula (labeled as Vivomixx® in the EU), on the main symptoms of ASD in preschool children with and without GI symptoms. The treatment, which involved the administration of eight probiotic strains, has shown significant effects not only in the improvement of GI symptoms but also in multisensory processing and adaptive functions [217]. Recently, Kalenik et al. reported some randomized trials in which different probiotic strains were administered to ADHD children to evaluate the effect of probiotic supplementation on the occurrence of ADHD symptoms [218], but 4 out of 5 studies showed no substantial differences in cognitive and neurodevelopmental outcomes. However, in one study evaluating the impact of early administration of Lactobacillus rhamnosus GG on the development of ADHD in infants, probiotic supplementation showed a preventive effect in reducing the risk of developing ADHD [87]. Interestingly, in 2019 a promising multinational clinical trial was started to evaluate the effect of the administration of a synbiotic formula (Synbiotic 2000 Forte 400) containing a mixture of different probiotics (Pediococcus pentosaceus, Lactobacillus paracasei subsp. paracasei, L. plantarum, and Leuconostoc mesenteroides) and four prebiotics (β-glucan, inulin, pectin, and resistant starch) in a cohort of 180 adults with ADHD [219]. To date, clinical studies applying probiotics in ADHD patients are still limited, and future studies are needed to achieve sufficient evidence to recommend probiotic administration as a beneficial treatment in ADHD. Probiotic supplementation has been applied as an alternative treatment in SCZ, even though the literature is still limited, and more clinical studies are needed. 
A systematic review by Ng et al. associates probiotic administration with the amelioration of side-effect symptoms of SCZ, mainly related to antipsychotic therapy, such as perturbation of the microbiota composition that may lead to adverse metabolic effects, weight gain, constipation, and finally to systemic inflammation and neuroinflammation [220]. Lactobacillus rhamnosus GG and Bifidobacterium animalis subsp. lactis strain Bb12 have been shown to have positive effects in improving intestinal barrier integrity and ameliorating bowel difficulties in patients with SCZ via modulation of inflammatory cytokines belonging to the IL-17 family, but no significant impact on Positive and Negative Syndrome Scale (PANSS) psychiatric symptom scores [221]. Improvements in constipation and insulin resistance were found by Nagamine et al. [222] after four weeks of treatment with BIO-THREE®, a mixture of Streptococcus faecalis, Bacillus mesentericus, and Clostridium butyricum, in SCZ patients, whereas similar results with no changes in PANSS were also reported by other clinical studies using the same mixture of probiotics (L. rhamnosus GG and B. animalis subsp. lactis strain Bb12) in patients with psychotic symptoms [213,223]. However, Okubo et al. (2019) reported that the probiotic strain Bifidobacterium breve A-1 was able to improve PANSS scores, depression, and anxiety in SCZ patients after four weeks of treatment. Moreover, the authors correlated the amelioration of symptoms with a modulation of IL-22 and TNF-related activation-induced cytokine (TRANCE), involved in intestinal barrier functions [224]. A combination of a probiotic mixture (Bifidobacterium bifidum, Lactobacillus acidophilus, Lactobacillus fermentum, and Lactobacillus reuteri) and vitamin D has been administered to SCZ patients, showing amelioration of PANSS scores with reduced inflammation and enhanced plasma total antioxidant capacity [225]. Munawar et al. [226] extensively reviewed diverse nonpharmaceutical approaches in the treatment of SCZ, including probiotics and prebiotics, suggesting the use of psychobiotics in SCZ as a promising avenue for clinical research. Preclinical studies and human trials show that probiotics, mainly psychobiotics, can be beneficial for brain health and could thereby be a promising alternative therapy for the treatment of NDDs. However, some aspects need to be considered and more deeply elucidated, such as strain specificity, probiotic dosage, time of treatment, and the precise mechanism of action at the molecular level. Moreover, there are some limitations, including individual differences (i.e., genetic background, environmental factors, diet, gender) and/or the low number of participants, which remains a limit in producing high-quality clinical data [53]. Conclusions Prenatal, natal, and postnatal adverse factors represent the underlying conditions for the onset of NDDs during brain development. NDDs are multifactorial disorders involving both a strong genetic component and environmental contributors. This wide and complex group of factors makes it difficult to find a trigger. The genetic and phenotypic complexity underlying NDDs is still the main obstacle to finding effective therapies. We have focused on the effects caused by an environmental factor, represented by antibiotic administration during different stages of CNS development. 
Antibiotic administration has been proposed as a possible therapy for ASD patients; however, this treatment can have side effects, affecting gut microbiota homeostasis by targeting both pathogens and healthy commensal bacteria. It is now accepted that antibiotics alone, or added to a genetic risk, may perturb the gut-brain axis and have effects on the correct development of the brain. To date, mounting evidence from human and animal studies suggests that gut microbial targeting therapy may be beneficial as a new and safe method for treating individuals affected by NDDs. The microbiota is indeed influenced by different environmental factors before birth, during infancy, and during childhood, and can play a key role in CNS development, influencing neurogenesis and microglial maturation. Alterations in the gut microbiome caused by antibiotic administration in children can lead to inappropriate neuronal maturation during critical phases of brain development. On the other hand, a particular class of probiotics, defined as "psychobiotics", can specifically confer health benefits at the brain level. They can transiently colonize the gut and, in turn, restore the composition of a healthy gut microbiota by producing growth factors for beneficial microbes, for example during antibiotic therapy, by competing for nutrients and/or producing inhibitory molecules that protect from pathogen colonization, and by interacting with the intestinal mucosa and modulating the intestinal immune system, as well as by producing and/or stimulating different types of neuroactive molecules directly involved in the two-way microbiota-brain interplay. In summary, evidence from preclinical and clinical studies provides support for the promising effect of probiotic administration. The future holds the exciting potential of probiotic-based therapies to prevent and cure the onset of NDDs.
2022-12-09T15:41:55.010Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "c473af70475d204dd1caa34f6e6b0599ed2fc6e1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-6382/11/12/1767/pdf?version=1670405797", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c473af70475d204dd1caa34f6e6b0599ed2fc6e1", "s2fieldsofstudy": [ "Biology", "Psychology" ], "extfieldsofstudy": [] }
251516517
pes2o/s2orc
v3-fos-license
Optimal dose of lactoferrin reduces the resilience of in vitro Staphylococcus aureus colonies The rise in antibiotic resistance has stimulated research into adjuvants that can improve the efficacy of broad-spectrum antibiotics. Lactoferrin is a candidate adjuvant; it is a multifunctional iron-binding protein with antimicrobial properties. It is known to show dose-dependent antimicrobial activity against Staphylococcus aureus through iron sequestration and repression of β-lactamase expression. However, S. aureus can extract iron from lactoferrin through siderophores for its growth, which confounds the resolution of lactoferrin's method of action. We measured the minimum inhibitory concentration (MIC) for a range of lactoferrin/β-lactam antibiotic dose combinations and observed that at low doses (< 0.39 μM), lactoferrin contributes to increased S. aureus growth, but at higher doses (> 6.25 μM), iron-depleted native lactoferrin reduced bacterial growth and reduced the MIC of the β-lactam antibiotic cefazolin. This differential behaviour points to a bacterial population response to the lactoferrin/β-lactam dose combination. Here, with the aid of a mathematical model, we show that lactoferrin stratifies the bacterial population, and the resulting population heterogeneity underlies the dose-dependent response seen. Further, lactoferrin blocks β-lactam-induced production of β-lactamase in a sub-population, which, when sufficiently large, reduces the population's ability to recover after being treated by an antibiotic. Our analysis shows that an optimal dose of lactoferrin acts as a suitable adjuvant to eliminate S. aureus colonies using β-lactams, but sub-inhibitory doses of lactoferrin reduce the efficacy of β-lactams. Introduction Staphylococcus aureus is the leading cause of soft tissue, device-related, and surgical wound infections [1]. It adheres to tissue and abiotic surfaces, like prosthetics, and develops metabolically frugal biofilms that are tolerant of antibiotics [2,3]. Subpopulations are known to exhibit a phenotype with reduced growth rate and altered gene expression; these include persisters and small colony variants (SCVs) [4]. Biofilm formation is a key part of establishing chronic infection [5,6], with an increased tolerance to antibiotic therapies [7]. Achieving antibiotic concentrations capable of persister cell eradication in vivo can be difficult or impossible [8]. Increasing the dose and use of antibiotics also poses the risk of developing drug-resistant strains. For instance, β-lactam antibiotics are very effective against fast-growing bacterial cells but are ineffective against bacterial cells that are not actively dividing and synthesizing new cell wall peptidoglycans [9]. β-lactam antibiotics have been a mainstay of clinical therapeutics, especially for methicillin-susceptible S. aureus (MSSA) infections, as they are highly effective against non-resistant pathogens, have a lower risk of side effects, and are less expensive relative to second-line antibiotics [10]. However, the concern that sustained β-lactam use could promote horizontal gene transfer of virulence factors to other microorganisms [11] has increased attention on developing alternative antibiotic treatments. 
Bacteria develop tolerance through two primary mechanisms: 1) through adaptive stress responses that reduce the efficacy of the antibiotic (such as preventing binding or actively degrading the antibiotic) [12,13]; 2) through stratification of the population into tolerance phenotypes, with some subpopulations that are highly tolerant of the antibiotic, and transitions between these phenotypes occur based on the level of stress [14,15]. S. aureus strains are known to produce β-lactamase enzymes to degrade β-lactams. Therefore, we hypothesise that complementing a β-lactam antibiotic with an adjuvant that reduces the impact of the adaptive response (β-lactamase production) will result in improved efficacy of the antibiotic and consequently require a lower β-lactam dose. In this context, we explored whether lactoferrin, a naturally occurring iron-binding protein with many biological functions, including bacteriostatic and immunomodulatory activities, could be used as an adjuvant. Lactoferrin has multiple activities that contribute to antimicrobial activity. Firstly, it can sequester iron that is essential for bacterial growth. Secondly, lactoferrin and its peptide derivatives bind to bacterial components and can disrupt bacterial function and membrane integrity [16]. Studies on Pseudomonas aeruginosa have shown that lactoferrin disrupts biofilm formation through a combination of effects that degrade the biofilm matrix and stimulate dispersal; these include chelating iron, stimulating twitching motility, interfering with cell-to-cell signalling processes, increasing DNase activity, and reducing bacterial glycosidase activity [17,18]. Investigating the combined effect of lactoferrin and β-lactams is not straightforward, as MSSA strains can overcome the inhibitory and lethal actions of lactoferrin and degrade β-lactams. S. aureus cultures typically consist of subpopulations due to heterogeneous availability of nutrients, which necessitates altered metabolic phenotypes. Of particular interest are slow growing or non-growing cells, stationary-phase cells and persisters [9,19]. In this paper, such slow growing and/or non-growing cells are collectively referred to as persister cells. Persister cells that exist within a population enable recovery given time, an essential dynamic that needs to be characterised to quantify the efficacy of a treatment. In the presence of persisters, bacterial growth dynamics such as the rate of growth and time to recovery provide a more realistic insight into the efficacy of the treatment. The complexity associated with experimentally tracking these subpopulations and measuring time course data for all the lactoferrin and β-lactam treatment combinations required us to explore alternative methods to infer this information. Here we used a computational model to infer this information. There are several mathematical models in the published literature that incorporate the dynamics of susceptible and persister populations [20][21][22][23][24]. These models focus on the role played by adaptive responses that prevent antibiotic binding, persister formation and reversion, and the time scale of antibiotic treatments. We did not find any mathematical models in the published literature that investigated the role of lactoferrin or any adjuvant in the efficacy of an antibiotic. 
Our goal is to determine the role played by lactoferrin in altering the adaptive response of bacteria; specifically, whether lactoferrin reduces β-lactamase production and, if it does, the extent to which β-lactamase production must be reduced for a β-lactam dose to be effective. To achieve this, we extended a published model of β-lactam antibiotic activity [24] to predict the growth dynamics from the experimental data and estimate the role of lactoferrin in the β-lactamase production dynamics. Here, we show the response of S. aureus broth cultures to various combinations of β-lactam antibiotic (cefazolin) and lactoferrin doses at different iron saturation levels, model predictions of the temporal dynamics of the bacterial population in response to these doses, and the potential synergistic role of lactoferrin in reducing the tolerance of the population. Materials preparation Staphylococcus aureus Xen36 (PerkinElmer, Part Number 119243), a methicillin-sensitive, beta-lactamase-positive strain [25] that is engineered for bioluminescence to facilitate in vivo infection studies, is used as a representative isolate [26]. Antibiotic assays were performed in BBL Cation-Adjusted Mueller Hinton II Broth (MHB; Fort Richard, Auckland). Cefazolin was purchased from Sigma-Aldrich. Native bovine lactoferrin was supplied by Fonterra, NZ. Iron-loaded lactoferrin (approximately 80% iron saturation) was prepared by incubating 100 mg/ml native lactoferrin with 2× molar equivalents of FeCl3 and NaHCO3 for 24 hours as described in [27], followed by three 12-hour rounds of dialysis against 40 volumes of PBS to remove unbound iron and return to a neutral pH. Protein content was calculated from HPLC data (Fonterra) and Fe3+ content was verified using inductively coupled plasma-mass spectrometry (Agilent 7700 ICP-MS in He mode). Bacterial cultures Bacterial cultures were set up by adding two to three colonies of S. aureus Xen36 to 10 ml of MHB in a 50 ml conical tube and incubating overnight at 37˚C with shaking at 200 RPM. Broths were assessed by absorbance at 600 nm, comparing doubling dilutions against a previously established standard curve, and diluted to the required cell density. Checkerboard assay A checkerboard assay was used to measure synergistic/inhibitory interactions of lactoferrin preparations with the antibiotic cefazolin against S. aureus Xen36. A two-dimensional, two-agent (cefazolin and lactoferrin) doubling microdilution checkerboard was prepared in a 96-well microplate [28] in sterile MilliQ water, with each well containing 50 μl of the reagent combinations. An overnight culture of S. aureus Xen36 grown in MHB was diluted to 2 × 10⁵ CFU/ml in 2× MHB, and 50 μl was added to each well of the microplate to give a final volume of 100 μl in 1× MHB (containing approximately 14 μM Fe) to challenge an inoculum of 1 × 10⁵ CFU/ml with the various cefazolin/lactoferrin combinations. Cefazolin was tested in a range from 0 to 4.0 μg/ml. Lactoferrin preparations (native, iron-loaded, and a 1:1 mixture of native:iron-loaded) were tested in a range from 0 to 100 μM. Optical measurements of absorbance at 600 nm and of bioluminescence (EnSpire Multimode Plate Reader, PerkinElmer, USA) were taken prior to incubation and after 16 ± 2 h incubation at 37˚C with humidity and shaking at 200 RPM. Each checkerboard assay plate was replicated on three separate occasions; data are available in S1 Data. 
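As a rough illustration of how such a two-agent doubling-dilution layout can be generated, the short Julia sketch below builds the grid of tested cefazolin/lactoferrin concentration pairs. Only the tested ranges (0-4.0 μg/ml cefazolin, 0-100 μM lactoferrin) come from the text; the number of doubling steps per agent is an assumption introduced purely for the example.

# Minimal sketch of the two-agent doubling-dilution layout described above.
# The number of doubling steps per agent is an assumption; only the tested
# ranges (0-4.0 μg/ml cefazolin, 0-100 μM lactoferrin) come from the text.
cef_top, lf_top = 4.0, 100.0   # top tested concentrations
nsteps = 7                      # assumed number of doubling steps below the top dose

cef = [0.0; [cef_top / 2^k for k in nsteps-1:-1:0]]   # 0, 0.0625, ..., 4.0 μg/ml
lf  = [0.0; [lf_top  / 2^k for k in nsteps-1:-1:0]]   # 0, 1.5625, ..., 100 μM

# every well of the checkerboard holds one (cefazolin, lactoferrin) pair
grid = [(c, l) for c in cef, l in lf]
println(size(grid))   # (8, 8) combinations with these assumed step counts

In the actual assay each such combination is then challenged with the bacterial inoculum and read by absorbance and bioluminescence before and after incubation, as described above.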
Absorbance values were used to calculate the minimum inhibitory concentrations MIC50 (where growth is inhibited by ≥ 50% of the no-antibiotic control) and MIC90 (where growth is inhibited by ≥ 90% of the no-antibiotic control) for antibiotic alone and antibiotic in the presence of lactoferrin preparations. Estimations of MIC50 and MIC90 were also made using bioluminescence measurements. Kinetic model development We developed a mathematical model to examine the population-based response to lactoferrin/β-lactam antibiotic treatment. The model is based on the observations that, on exposure to a β-lactam antibiotic, β-lactamase-producing bacteria like S. aureus express β-lactamase and degrade the bound antibiotic, which reduces the antibiotic's efficacy. Further, bacterial lysis releases extracellular β-lactamase into the surroundings, which further confers protection to the population by degrading the antibiotic in the surroundings. Our model focuses on a mixed heterogeneous population of slow- and fast-growing cells that constitutively expresses β-lactamase when no lactoferrin is bound to the cells. The model accounts for growth rates of the subpopulations, stress-induced partitioning of the population, and β-lactamase production in response to antibiotic- and lactoferrin-induced stress. Modelling the population density response provides a level of abstraction that captures the contributions of multiple cell-level interactions. A schematic of the interactions characterised by the model is shown in Fig 1. The model characterises the growth and lysis dynamics of four types of bacterial subpopulations: susceptible (n, n_lf) and persister (p, p_lf) cells. The lactoferrin-free subpopulation (n, p) can produce β-lactamase in the presence of β-lactam. The remaining population (n_lf, p_lf), to which lactoferrin is bound, is repressed from expressing β-lactamase. The density of the repressed population depends on the concentration of extracellular lactoferrin. The model accounts for the variability in growth rates (g, g_p), lysis rates (l, l_lf, l_p, l_lfp), and β-lactamase expression between subpopulations. Note that lactoferrin-bound and unbound susceptible cells share the same growth rate, as do lactoferrin-bound and unbound persister cells; however, the lysis rates differ in order to characterise the complex antibiotic and β-lactamase interactions for each cell type. We adopted and extended previously published models [22,24] to account for the dynamics of bacterial densities, extracellular β-lactamase concentration (b_out), membrane-bound β-lactamase concentration (b_in), lactoferrin concentration (Lf), and β-lactam concentration (A). To model the capability of S. aureus to extract iron from lactoferrin and support its growth/maintenance, the modelled growth rates depend on lactoferrin's iron saturation level. The amount of membrane-bound β-lactamase (b_in) produced by the bacterial cells is proportional to the available β-lactam concentration. The amount of extracellular β-lactamase (b_out) is determined by the lysis rate of the β-lactamase-producing cells. The model makes the following assumptions: 1. The cell density for each subpopulation (n, n_lf, p, p_lf) depends on its growth rate (g, g_p) and its corresponding lysis rate (l, l_lf, l_p, l_lfp), 2. The entire initial population can produce β-lactamase in the presence of β-lactam, 3. The growth rates are a function of the maximum growth rate of the subpopulation (σ_1, σ_5), 4. 
There is sufficient nutrient available to the population at all times; the nutrient level was set constant, 5. The population-level β-lactamase concentration (b_in*) was determined by multiplying the number of β-lactamase-producing cells present at a given time, the per-cell β-lactamase concentration (b_in), and the averaged volume of a bacterial cell, β, 6. Lactoferrin bound to the bacteria is not recovered when the bacteria to which it is bound lyse, 7. The lysis rate of bacteria to which lactoferrin is bound is a function of the antibiotic concentration and lysis rates (σ_2, σ_6), and 8. The lysis rate of bacteria free of lactoferrin is a function of the antibiotic concentration, lysis rates (σ_2, σ_6), and the amount of membrane-bound β-lactamase b_in. The model was written as a set of non-dimensionalised ordinary differential equations in these variables; here H(Lf) is the Heaviside function. The physical interpretation of the parameters is given in Table 1. Parameter fitting. The model parameters were estimated using constrained optimization by linear approximations (COBYLA). The set of parameters that predicts the growth values for all data sets was selected. Among the parameters, the parameter [Fe3+], which captures the growth induced by lactoferrin due to its iron saturation level, was matched to the data set. Parameters were further constrained to ensure that the solutions always remained in the positive numerical domain, as there are no negative densities or concentrations. For parameter estimation, the combined lactoferrin/β-lactam dose was introduced when the total bacterial population reached steady state. The bacterial population kinetics from the resulting crash and recovery were used to fit the model to the experimental measurements. The system of ordinary differential equations is stiff and was solved using the Rosenbrock23 ODE solver. All simulations were performed using Julia v 1.6, the DifferentialEquations package v 6.19, and NLopt v 0.6.2. The model parameters are listed in Table 1 and the results are plotted in Fig 2. Parameter sensitivity analysis. The proposed system of equations has 28 non-dimensional parameters. The low RMSD (root mean square deviation) of fit (Fig 2) provides some confidence; however, in models with a large number of parameters it is highly likely that more than one dissimilar set of parameters can generate similar model outputs [29]. Sensitivity analysis allows the identification of the set of parameters that have the greatest influence on the model output. It consequently provides useful insight into which model parameters contribute most to the variability of the model output, and identifies the important and influential parameters that drive model outputs and their magnitudes. Here, in addition to the low RMSD, we constrained the parameter search to find a parameter set where key parameters that affect the bacterial population as a function of cefazolin and lactoferrin had the greatest influence. Following [30], we used Sobol sensitivity analysis to determine the total-order sensitivity index, 0 ≤ ST ≤ 1.0. ST characterises a parameter's contribution to the variation in the model's predictions, alone or through interaction with any number of other parameters. Sobol sensitivity analysis was performed on the dimensionless model, on a parameter space centred around each parameter's fitted value (x_0), spanning x ∈ [x_0/2, 2x_0]. 
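The display equation defining the index did not survive extraction here; for orientation, the standard first-order and total-order Sobol indices, which the quantities V_i and Var(Y) mentioned below enter, take the textbook form given next. This is the conventional definition and not necessarily the exact expression printed in the original paper.

S_i = \frac{V_i}{\mathrm{Var}(Y)}, \qquad
S_{T_i} = \frac{\mathbb{E}_{X_{\sim i}}\!\left[\mathrm{Var}_{X_i}\!\left(Y \mid X_{\sim i}\right)\right]}{\mathrm{Var}(Y)}
        = 1 - \frac{\mathrm{Var}_{X_{\sim i}}\!\left[\mathbb{E}_{X_i}\!\left(Y \mid X_{\sim i}\right)\right]}{\mathrm{Var}(Y)}

The first-order index S_i measures the contribution of parameter i alone, while the total-order index S_{T_i} also includes all interactions involving parameter i, which is why it is bounded between 0 and 1 and is the quantity reported in Table 2.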
Using a quasi-Monte Carlo approach, a design matrix of 5000 samples of the parameter space was evaluated using Julia v 1.6 and the GlobalSensitivity package v 1.3. The parameters with the leading total-order sensitivity index values are listed in Table 2; here V_i is the variance due to parameter i alone and Var(Y) is the variance due to all parameter interactions. ST_lf was calculated as 3 × (ST_g7 + ST_H + ST_σ7), using the variance sum law. The scaling factor of 3 accounts for the triple counting of Var(Y) in the denominator of the sum. The results suggest that the fitted set of parameters enforces the proposed behaviour (Fig 1), suggesting that the observed response arises from the bacterial population behaviour. Experimental validation. We used the model to predict the bacterial densities at 24 h post treatment for a range of cefazolin and native lactoferrin doses to identify the minimum bactericidal concentration (MBC). The model predicted a small range of doses around a cefazolin dose of 0.75 μg/ml and a native lactoferrin dose of 18.0 μM to be optimal. We selected 0.75 μg/ml and 18.0 μM, as this combination was not used in the fitting of model parameters. S. aureus colony response to Cefazolin and lactoferrin treatment Minimum inhibitory concentration (MIC) assays were performed with S. aureus Xen36. This strain can form biofilms, expresses β-lactamase, and is bioluminescent. To investigate whether the inclusion of lactoferrin as an adjuvant would improve the antimicrobial activity of cefazolin, a β-lactam antibiotic commonly used in orthopaedics, against S. aureus populations, we measured bacterial growth after overnight incubation with different combinations of cefazolin and lactoferrin. To investigate the role of lactoferrin in terms of iron restriction, we compared native (15-19% iron saturation) lactoferrin, iron-saturated (80% iron saturation) lactoferrin, and a 1:1 mixture of native and iron-saturated lactoferrin. The results show that the baseline MIC50 and MIC90 for cefazolin alone were 1.0 and 2.0 μg/ml respectively, and that native and mixed-native lactoferrin preparations were antimicrobial at the MICs, but iron-saturated lactoferrin was not obviously bacteriostatic in the test conditions (Fig 3). The MIC for cefazolin reduced to 0.5 μg/ml in the presence of 6.25 μM lactoferrin (Fig 3a and 3b). Notably, there is enhanced bacterial survival when cefazolin concentrations of 1 μg/ml (just below the MIC) are combined with 0.1-1.5 μM lactoferrin (Fig 3). We argue that the observed increase in growth is a consequence of the existence of both susceptible and persister cells. Sub-inhibitory concentrations of antibiotic elicit a stress response from the bacterial population, and some of the population switches to the persister phenotype [31], thereby reducing the bactericidal activity of the β-lactam antibiotic and allowing the emergence of a viable population at sub-lethal doses. The elimination of fast-growing cells by the antibiotic creates a conducive environment for slow-growing cells and their daughter cells (some of which will revert to the fast-growing phenotype) due to reduced competition for nutrients. Our experiments did not constrain nutrients; the medium was rich in both carbon and amino acids. Under these conditions, growth rates and metabolic activity are strongly coupled [32], as observed in our bioluminescence assay results. 
Fig 3d-3f show that bioluminescence is strongly and positively correlated with the optical-density-based estimate of the population for four key combined lactoferrin/β-lactam treatments. Firstly, this confirms that the observed biomass is representative of living cells and not a measurement of lysed cells and protein biomass. Since persisters are metabolically frugal, bioluminescence concomitant with observed biomass indicates the presence of metabolically active fast-growing cells [33]. In the mixed and iron-saturated lactoferrin cases (Fig 3b and 3c), the increase in bacterial growth can be attributed to the loss of lactoferrin's antibacterial activity. Binding of iron to lactoferrin significantly alters the structural configuration of lactoferrin and leads to the loss of lactoferrin's antibacterial activities [34,35]. In addition, S. aureus produces siderophores (small molecule iron chelators), surface proteins like IsdA, and a variety of proteases under sub-inhibitory concentrations of antibiotics [36,37]. Siderophores acquire iron from the extracellular space, and from Fe3+-saturated lactoferrin, to help the bacteria grow [37]. Surface proteins like IsdA could further reduce the bactericidal activity of lactoferrin [38], resulting in reduced antibiotic efficacy and a larger resilient bacterial population. These results suggest that iron restriction is an essential mechanism through which lactoferrin induces bacteriostasis. Although iron-bound lactoferrin improves the antibiotic's efficacy, an increased amount of antibiotic is required. Results from our mathematical modelling, introduced momentarily, suggest that iron-bound lactoferrin continues to play a role in reducing β-lactamase production and hence improving efficacy. These results imply that, in the absence of an antibiotic, the bacterial population grows to the carrying capacity of the medium. When treated with a β-lactam antibiotic, the bacteria exhibit an adaptive response, i.e., they degrade the antibiotic through their secreted β-lactamase [39,40]. At higher concentrations of the antibiotic the bacteria are unable to degrade all the bound antibiotic and eventually lyse. Studies have shown that at intermediate antibiotic concentrations lysed bacteria will release further β-lactamase into the extracellular space, which confers resilience to the wider population by degrading more antibiotic and signalling the population to release more β-lactamase into the extracellular space [41][42][43][44]. In the following section we show that these observations are substantiated by our mathematical model. In summary, our experimental results suggest that 1) S. aureus cultures display both adaptive and phenotype-switching responses, as observed in biofilms, and 2) this response is dose dependent, with sub-optimal doses of lactoferrin leading to increased growth. Temporal dynamics of S. aureus colonies in response to Cefazolin and lactoferrin treatment To investigate the role played by the adaptive response and phenotype switching in conferring tolerance to the bacterial population, we used the kinetic model (Fig 1) to infer the temporal population dynamics and the impact of lactoferrin/β-lactam on the bacterial population. Simulations were initialised with a known initial density of susceptible (0.3) and persister (0.01) cells and simulated until the population density was above 50% of the carrying capacity (B_50) of the medium. The population was then subjected to the treatment. 
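To illustrate how such a simulation can be set up, the Julia sketch below integrates a reduced two-compartment susceptible/persister system with the same stiff solver named earlier. It is only a sketch under stated assumptions: the published model has four subpopulations, β-lactamase and lactoferrin dynamics, and 28 non-dimensional parameters, none of which are reproduced here, and every numerical value below is a placeholder rather than a fitted value from Table 1.

using DifferentialEquations

# Reduced susceptible (n) / persister (p) sketch with logistic growth,
# constant antibiotic-induced lysis rates, and phenotype switching.
# Placeholder values; not the fitted parameters of the full model.
function toy_model!(du, u, prm, t)
    n, p = u
    g, gp, lA, lAp, kf, kr, K = prm
    tot = n + p
    du[1] = g  * n * (1 - tot / K) - lA  * n - kf * n + kr * p   # susceptible
    du[2] = gp * p * (1 - tot / K) - lAp * p + kf * n - kr * p   # persister
end

u0   = [0.3, 0.01]                      # initial susceptible / persister densities
prm  = (1.0, 0.05, 1.2, 0.02, 0.05, 0.01, 1.0)
prob = ODEProblem(toy_model!, u0, (0.0, 48.0), prm)
sol  = solve(prob, Rosenbrock23())      # stiff solver used in the study

In the study itself the treatment terms are switched on only once the growing population passes B_50, and many such runs with varied initial phenotype fractions are averaged, as described next.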
To eliminate any impact of the initial population density on the results, we performed 10000 independent simulations with different bacterial population densities at the time of the treatment. Bacterial population stratification and growth dynamics were determined from the averaged kinetics. To quantify the ability of the bacteria to recover from a lactoferrin/β-lactam dose, we defined, following [24], resilience as the rate of recovery by the population after experiencing the initial crash. An effective lactoferrin/β-lactam treatment will lyse more bacterial cells and occupy the remaining bacteria in degrading the antibiotic. This will result in a prolonged recovery time, whereas a less effective dose will result in faster recovery (Fig 4a). Based on this, resilience can then be defined in terms of the recovery times (Eq 1). Here, T_50 is the time at which the untreated population reaches 50% of its carrying capacity, B_50, and T_A50 is the time at which the treated population reaches B_50 after the population crash from antibiotic treatment. The metric will be larger for a resilient response and lower for a non-resilient population. Lactoferrin modulates adaptive response Disruption of β-lactamase production is a mechanism that can improve the efficacy of β-lactams and can be measured using resilience, i.e., low resilience indicates better efficacy and vice versa. Model results (Fig 4b) show the stratification of the bacterial population into lactoferrin-bound and lactoferrin-free populations of susceptible and persister cells in response to treatment. The density of lactoferrin-bound bacteria increases proportionally to the lactoferrin dose and affects the time taken by the bacterial population to recover. Population resilience was assessed through multiple simulations (N = 10000), each with different initial density fractions of susceptible and persister cells at the time of lactoferrin/β-lactam treatment. The population's growth response for various lactoferrin and β-lactam concentrations was predicted, and resilience (Eq 1) was computed from the growth kinetics. The averaged resilience for each lactoferrin/β-lactam combination is plotted in Fig 5a. The results highlight the key role played by lactoferrin in the bacterial response; while low doses of lactoferrin alone do not significantly degrade the population, larger doses (≥ 1.56 μM) degrade the population to the level reached with a minimal sub-inhibitory cefazolin dose. Lactoferrin leads to the selection of fast-growing cells To investigate the nonlinear effects of lactoferrin dose on the growth rates and bacterial population stratification, we determined from the kinetic model the maximum growth rate of susceptible and persister cells post treatment and computed the difference between these growth rates for different combinations of lactoferrin/β-lactam doses (Fig 5b). This difference characterises the maximum growth divergence that occurs in the population. A large value is indicative of a larger susceptible population and hence greater susceptibility to β-lactams, while a lower value indicates a larger (compared to the untreated cohort) persister-type population and greater tolerance and resilience. In order to eliminate any bias due to the initial density fractions of these cell populations, we performed stochastic simulations (N = 10000) with different bacterial phenotype density fractions for each lactoferrin/β-lactam dose at the time of treatment and plotted the average of the difference in susceptible vs persister maximum growth rates. 
The simulation results show significant growth-rate differences being induced in the bacterial subpopulations as a function of lactoferrin dosage. Similar results have been previously reported in [33], but here as a function of lactoferrin/β-lactam dosage. Note that our model assumes that cells bound by native lactoferrin do not divide, as lactoferrin inhibits growth. The difference in growth rates shows that lactoferrin-treated populations consistently have a larger proportion of susceptible cells compared to persisters, and this proportion increases with lactoferrin dose.

Discussion

Our ability to understand how bacterial populations respond to β-lactams in conjunction with other adjuvants is essential to devise combination therapies that would target different resistance mechanisms of bacteria and manifest in more synergistic efficacy, with reduced potential for the emergence of resistant strains. Our experiments show that sub-optimal concentrations of lactoferrin increase bacterial population heterogeneity, sustain growth and limit the efficacy of β-lactams. However, optimal doses of lactoferrin improve the efficacy of β-lactams by reducing β-lactamase production and selecting for fast-growing cells that are more sensitive to killing by β-lactams. Despite the complexity and sheer number of biological processes involved in the population's response to a treatment, growth rates and the partitioning of the population in response to stress provide a level of abstraction that captures the contributions of multiple cell-level interactions. We modelled the experimentally observable population dynamics using these abstractions and reveal a role of lactoferrin dose on the population. Our experiments suggest that dosing regimens that use lower, yet lethal, concentrations of β-lactam can be as effective as higher concentrations of β-lactam when used with an appropriate lactoferrin dose. This highlights the need for a precise understanding of the various bacterial subpopulations and their growth rates in the design of an appropriate lactoferrin/β-lactam dose. Further work needs to be done to resolve the mechanisms by which lactoferrin affects β-lactamase expression, whether the bound lactoferrin or its peptides are released in an active form when the bound cells lyse, and whether the subpopulations interact differentially with lactoferrin. Recent insights into the human microbiome and its role in general well-being make it essential that the amount of antibiotic the host is exposed to is targeted, minimal and, most importantly, reduces the emergence of more resistant subpopulations of bacterial pathogens [45,46]. Our kinetic model uses the bacterial colony's population-level response to a combination antibiotic treatment and predicts recovery times and effective dose combinations. This information can be used to design dosing regimens based on in-vitro data. It could help in developing effective strategies for treatment, effective use of β-lactam antibiotics and reducing the emergence of antibiotic-resistant pathogens.
Eight-year Operation Status and Data Analysis of the First Human Milk Bank in East China

To analyze the operation status and data over the last 8 years of operation of the first human milk bank (HMB) in East China. Methods Data related to the costs, donors, donation, pasteurization, and recipients were extracted from the web-based electronic monitoring system of the HMB for the period August 1, 2013 to July 31, 2021. Results Over the 8 years of operation, 1,555 qualified donors donated 7,396.5 L of qualified milk at a cost of ¥1.94 million, with the average cost per liter of donor human milk being ¥262.3. The donors were between 25 and 30 years of age, and the majority (80.1%) were primipara. All the donated milk was pasteurized and subjected to bacteriological tests before and after pasteurization: 95.4% passed the pre-pasteurization tests, and 96.3% passed the post-pasteurization tests. A total of 9,207 newborns received 5,775.2 L of pasteurized donor milk. The main reason for the prescription of donor human milk was preterm birth. As a result of continuous quality improvements, January 2016 witnessed a significant increase in the volume of qualified DHM and the number of qualified donors. However, in 2020, as a result of the restrictions related to the COVID-19 pandemic, the volume of qualified DHM and the number of qualified donors decreased. Over its 8 years of operation, our HMB has made steady quality improvements in its screening and information processes. Continuous quality improvement is an ongoing need, along with recruiting more qualified donors and collecting donor human milk for vulnerable newborns.

Introduction

Breast milk is the ideal source of nutrition for newborns within 6 months after birth [1]. Compared to formula, both breastfeeding and human milk have immense advantages, as they are associated with reduced rates of chronic lung disease, necrotizing enterocolitis, feeding intolerance, nosocomial infection, retinopathy of prematurity, and mortality in premature infants [2][3][4]. Therefore, donor human milk (DHM) is preferred to formula when maternal milk is absent or insufficient, as recommended by the World Health Organization [5]. Although the properties of DHM are affected by the processes of collecting, processing, and storing, it is still better than formula in terms of nutritional composition and biological value [6][7]. Human milk banks (HMBs) are essential facilities for the selection, collection, testing, transportation, storage, and distribution of DHM for special medical needs. The first HMB was established in 1909 in Vienna, and since then HMBs have been set up in Europe and in the United States and have been in operation for more than 100 years [8]. The first HMB in China was opened in March 2013 in Guangzhou [9], and as of 2019, China had 19 HMBs [10]. Our HMB is the first and largest HMB in East China. It was founded in August 2013 according to the standards and guidelines of the Human Milk Banking Association of North America, and has been in operation for 8 years. Over its 8 years of operation, a total of 7,396.5 L of qualified DHM has been collected, and 9,207 newborns have received DHM from our bank. We have made continuous quality improvements over these 8 years, and the purpose of this study is to describe these changes and the operation status of our HMB over the 8 years, and to provide some recommendations to other HMBs operating in China.
Data resources and research variables

Data from August 2013 to July 2021 were extracted from the computerized information management system of our HMB. The extracted donor data included age, education, residence address, occupation, number of children, number of donations, etc. Additionally, the results of bacteriological tests done before and after pasteurization were obtained. The extracted recipient data included gestational age, birth weight, mode of childbirth, number of days that DHM was used, etc.

Current donation process

Donors are eligible only if they pass the health screening, serological tests (including HBV, HCV, HIV, syphilis and CMV), and bacteriological tests (both pre- and post-pasteurization) at the time of the first donation. All eligible donors receive on-site guidance by professionals at the time of their first donation. Before breast milk collection, the donors are requested to wash their hands under strict instructions and clean their breasts with antiseptic wipes, especially around the nipple and areola. An electric milk absorption pump is used, and all accessories are cleaned and disinfected before and after use. The collected breast milk is placed in a dedicated storage container and placed in a 4℃ refrigerator, and all the relevant information is recorded. All the donor milk is mixed and pasteurized (by continuous disinfection at 62.5°C for 30 min) within 24 h after collection and then stored in a special -20℃ refrigerator for no more than 3 months (Fig. 1). Due to the restrictions imposed by COVID-19, our HMB now only accepts on-site milk donation, and does not accept milk donations collected at home.

Ethics

Our HMB was approved by our hospital's ethics committee before it was established. Potential donors are provided with the necessary information and sign the consent form before their first donation. Additionally, a medical informed consent form is signed by the parents of the recipients before DHM is provided.

Infrastructure and cost analysis

Our HMB is equipped with an independent milk collection room, a hospital-level milk pump, a pasteurizer (temperature-controlled water bath), human milk storage containers, medical refrigerators (including a custom-made ordinary refrigerator and a special ultra-low temperature refrigerator), computers with a supporting information management system, etc. The guidelines for organizational management; infrastructure; donor screening; and the collection, processing, storage, and provision of donated breast milk are in accordance with those of the Human Milk Banking Association of North America. The total cost for 8 years was ¥1.94 million for 7,396.5 L of qualified DHM. The average cost per liter of DHM was ¥262.3. Before the establishment of the HMB, we received a donation of ¥1 million from a group of good Samaritans as start-up capital. The rest of the expenses were borne by our neonate department, which covered 80% of the cost, and the hospital, which covered 20% of the cost. The annual cost with its breakdown is shown in Table 1.

Characteristics of donors and donations

Between August 1, 2013, and July 31, 2021, a total of 1,805 mothers were enrolled, but 250 did not pass the health screening or serological tests. The remaining 1,555 qualified donors were mostly between 25 and 30 years of age (55.5%): 62.6% had a bachelor's degree, 61.2% had given birth by vaginal delivery, 66.7% had had term births, and 80.1% were primipara (Table 2). Further, 46.2% of donors had donated milk more than 10 times (Table 2).
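As a quick consistency check of the figures quoted in the cost analysis above, the per-litre cost follows directly from the two totals; the snippet below is purely illustrative arithmetic.

```python
# Reproduce the quoted average cost per litre of qualified DHM.
total_cost_cny = 1_940_000        # total 8-year expenditure (¥)
qualified_dhm_litres = 7_396.5    # total qualified DHM collected (L)

cost_per_litre = total_cost_cny / qualified_dhm_litres
print(f"Average cost per litre: ¥{cost_per_litre:.1f}")   # ≈ ¥262.3, as reported
```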
The total number of donations in 8 years was 19,089, and the annual maximum number of donors was 195 (in 2014) (Fig. 2A). In 2013, the number of qualified donors was 91 and the volume of qualified DHM was 102.2 L. As a result of continuous quality improvements, January 2016 [11] witnessed a significant increase in both the volume of qualified DHM and the number of qualified donors. In 2020, the number of patients at our NICU decreased due to the COVID-19 pandemic, and the volume of qualified DHM and the number of qualified donors consequentially decreased (Fig. 2B and 2C).

Processing of DHM

Prior to January 2016, donors who passed the health screening, serological test, and bacteriological tests (both pre- and post-pasteurization) at the time of the first donation were considered to be qualified and allowed to donate in the future without pre- or post-pasteurization bacteriological testing. After the continuous quality improvements, qualified donors' DHM was first pooled and then pasteurized. In the early stage of quality improvement, bacteriological testing was conducted before and after the pasteurization of each batch every day. It was subsequently found that our milk extraction process and pasteurization process were reliable if the batches passed most of the tests, and thereafter, bacteriological testing was conducted every 10 days and batches that failed the pre- and post-pasteurization bacteriological tests were discarded. On average, 95.4% of the batches passed the pre-pasteurization tests, and 96.3% passed the post-pasteurization tests (Fig. 3).

Recipient characteristics

Over the 8-year study period, a total of 5,775.2 L of DHM was supplied to 9,207 infants (Fig. 2C), and the maximum individual volume was 13.7 L. Most of the recipients were preterm (83.3%), and 81.7% were single births. In 64.4% of the recipients, the birth weight was between 1,500 g and 2,500 g, and in 41.7%, the gestational age was between 34 wk and 37 wk. Further, 94.3% of them received DHM for less than 15 days, and the average duration of receiving DHM was 4.5 days (Table 3).

Discussion

This study describes the operations of our HMB and analyzes the data gathered over the last 8 years of its existence. Over the 8-year period, the total expenditure of the HMB was 1.94 million RMB, including employee salaries, materials costs, test costs, etc., and both the donation and provision of DHM were free. The average cost per liter of qualified DHM was ¥262. In comparison, studies from the United States have calculated that 1 L of DHM costs approximately US $150, and studies from Germany have reported a cost of €82.88 per liter of DHM [12][13][14]. The costs of our HMB are probably lower because most donors are not required to repeat the HBV, HCV, HIV and syphilis tests when they donate within 6 months of their serological test, which is done during hospitalization at the time of delivery at our hospital. Despite this, the cost of providing DHM is much higher than the cost of formula and mother's own milk; additionally, breastfeeding could reduce the incidence of diseases, such as necrotizing enterocolitis and late-onset sepsis, and save a lot of future medical costs [15]. Therefore, we also advocate and promote breastfeeding through various means at our HMB. Over the 8-year period, a total of 250 mothers did not meet the criteria for donation because they did not pass the health screening or serological test.
In 2015, 169 mothers did not qualify because donor milk was screened with the CMV-DNA test, which is positive in most Chinese mothers [16]. In fact, after pasteurization, the CMV-DNA test shows negative results, which means that the donor milk is not likely to cause neonatal infection [17]. Thus, after the continuous quality improvements that were initiated in January 2016, we only used the serum TORCH test for screening. During the last 8 years of operation of the HMB, the number of qualified donors and the volume of qualified DHM first showed an increase and then a decrease. In 2016, we carried out quality improvement programs for the HMB that resulted in a considerable increase in the number of qualified donors and the volume of qualified DHM. Later, in 2018, we undertook breastfeeding quality improvement programs to highlight the importance of breastfeeding, and this resulted in an increase in the rate of feeding with mother's own milk and a decline in the number of qualified donors and the volume of qualified DHM. In 2020, the number of donors and the volume of DHM decreased significantly due to the restrictions imposed as a result of the COVID-19 pandemic. Our donors were mainly aged between 25 and 30 years, and this is similar to the donor demographic of HMBs reported in Taiwan and Thailand [18][19]. Further, 66.7% of the donors in the present study delivered full-term infants. This is mainly because the mothers of preterm infants have a reduction in postpartum lactation due to physical reasons [20]. In the present study, 61.2% of donors gave birth by vaginal delivery. This percentage is different from that reported in Thailand [17], but is similar to a previous report in mainland China [7]. Further, 46.2% of the donors in this study donated more than 10 times, which is higher than the percentage reported in mainland China and indicates that we have a higher average number of donations than other domestic HMBs. This is probably attributable to the efforts of our staff in providing correct information and following up with donors. Additionally, 62.6% of the donors had a bachelor's degree, and well-educated mothers tend to have better knowledge about and attitudes towards milk donation [21]. Most of the donors began to donate milk at 1 month postpartum at the earliest, and the number of donors who started to donate decreased rapidly at 3-6 months postpartum. The main reason is probably that most women in China are only given 4-6 months of maternity leave. Once they resume work, their free time and volume of lactation probably decrease dramatically. Our HMB uses the traditional pasteurization method (62.5°C for 30 min) for sterilization, as a result of which some of the immunological components in donor milk, such as sCD14, may be lost. However, this has little impact on the protein, fat, carbohydrates, some trace elements, and the activity of some enzymes [22,5] that are very important and irreplaceable for the development of neonates, especially premature infants. At the initial stage of the establishment of our HMB, donors who passed the pre- and post-pasteurization bacteriological tests at the time of the first donation were considered as qualified and were not tested for future donations. However, there is a risk that milk donated by these donors at later time points could be infectious and cause neonatal infection.
Therefore, from January 2016, as a result of continuous quality improvements in donor screening, donor milk collection, and donor milk sterilization and storage, DHM donated by qualified donors was first mixed and then pasteurized. In the early stage of quality improvement, each batch of DHM was tested before and after pasteurization every day. The milk extraction process and pasteurization process were considered to be reliable if the batches passed most of the tests, and thereafter, bacteriological testing was conducted every 10 days and batches that failed the pre- and post-pasteurization bacteriological tests were excluded. The DHM in our HMB was only supplied to the neonates in our NICU and was not continued after discharge, and it was provided free of cost. This model is different from that described in previous reports from Thailand and the UK, which provide DHM for non-hospitalized infants [23][24]. During the 8-year period, 9,207 newborns received DHM, most of whom were premature infants (80.1%), and the newborns had severe infection, feeding intolerance, and necrotizing enterocolitis. The maximum volume per neonate was 13.7 L, which was used by an extremely premature infant. The duration of use of DHM was mostly less than 15 days, and the average duration was 4.5 days, which is shorter than that previously reported in Scotland [25]. This is probably because most mothers have expressed enough breast milk to feed their own babies after a certain time point. During the COVID-19 pandemic, China implemented strict policies, so we also added COVID-19 nucleic acid testing to the donor screening tests. Visitation was forbidden during newborn hospitalization, and women from other provinces were not admitted to our hospital until recently. From February 4th to March 4th, 2020, breast milk donation and transport to the hospital were not allowed. These policies led to a significant reduction in both the number of donors and the volume of DHM. Although this was accompanied by a significant reduction in the number of newborns as well as the demand for DHM, these conditions also led to the depletion of DHM stored at our HMB. Additionally, the breastfeeding rates decreased significantly during this period. However, it is worth noting that there was no case of COVID-19 infection at our hospital during this period.

Conclusion

The establishment and efficient management of HMBs could support and promote breastfeeding or DHM feeding, and provide better choices and better nutritional treatment options for children who cannot be breastfed by their mothers, critically ill children, or children with certain diseases. Additionally, it is important to raise social awareness about the benefits of breastfeeding or DHM feeding, and to actively publicize and provide breastfeeding guidance. Complete, timely, and standard records of data in the HMB database can guarantee its long-term efficiency. It is also necessary to establish a systematic and standardized database at the national level for storing information, analyzing and interpreting various results, supervision and management, and the clinical application of research. Over the 8 years of operation of our HMB, through continuous quality improvement, our processes have been gradually fine-tuned and made efficient. We will continue to provide DHM to newborns in the future, and to provide some recommendations to other HMBs operating in China.

Declarations

Authorship confirmation statement

All of the authors contributed to the study and qualify for authorship.

[25] Simpson, J.
Who gets donated human milk in Glasgow. In Proceedings of the EMBA International Milk Banking Congress, Glasgow, UK, 5-6 October 2017.

Figure 1. Donation process at our HMB.
Sub-TeV quintuplet minimal dark matter with left-right symmetry

A detailed study of a fermionic quintuplet dark matter in a left-right symmetric scenario is performed in this article. The minimal quintuplet dark matter model is highly constrained by the WMAP dark matter relic density (RD) data. To alleviate this constraint, an extra singlet scalar is introduced. It provides a host of new annihilation and co-annihilation channels for the dark matter, allowing even sub-TeV masses. The phenomenology of this singlet scalar is studied in detail in the context of the Large Hadron Collider (LHC) experiment. The production and decay of this singlet scalar at the LHC give rise to interesting resonant di-Higgs or diphoton final states. We also constrain the RD allowed parameter space of this model in light of the ATLAS bounds on the resonant di-Higgs and diphoton cross-sections.

Introduction and Motivation

Dark matter (DM) and its large abundance compared to baryonic matter has been a long-standing puzzle without any definite answer as of yet. The Standard Model (SM) of particle physics is bereft of any such particles, and hence the need to extend the SM becomes imperative in order to incorporate a DM candidate in the particle spectrum. Numerous approaches have been made to come up with consistent DM models that can explain the experimental observations. A small class of such models is what is referred to as minimal dark matter (MDM) [1][2][3][4][5][6] models. MDM models postulate a new fermionic or bosonic multiplet, an n-tuplet of the SU(2) group. Being color neutral, these new multiplets have no strong interactions, and can only weakly interact with other SM particles, mainly through gauge interactions. The stability of the quintuplet, on the other hand, is either ensured by some discrete symmetry or is accidental. In a scenario where the lightest component of the new multiplet is electrically neutral, it could be a good candidate for the DM. In this work we will study an MDM model where the DM comes from the neutral component of a quintuplet fermion.

A dark matter candidate coming from an SU(2)_L quintuplet has severe limitations. Firstly, only a hypercharge-zero quintuplet can evade the strong direct detection limits, with all other cases being very highly constrained. All the states in the quintuplet fermion are mass degenerate at the tree level. This degeneracy is lifted (only by a few hundred MeV) by radiative corrections. This makes the collider phenomenology for the quintuplet extremely challenging. Quintuplets, being charged under the SM gauge group, are produced in pairs at the collider experiments via the gauge interactions. Subsequently, they decay into the lightest component of the quintuplet in association with very soft SM leptons and jets. The lightest component of the quintuplet, being the candidate for the DM, remains invisible in the detectors, whereas the final-state leptons or jets will be too soft (due to the extremely small mass splitting) to observe at the collider. Attempts have been made to overcome this difficulty by introducing an additional quadruplet scalar [7,8] in order to write a dimension-4 decay term for the quintuplet fermion. However, in this case, the dark matter candidate would then be lost or only survive in an extremely fine-tuned region of the parameter space. We thus take an alternate approach to this problem by choosing a left-right symmetric model where the DM is the neutral component of an SU(2)_R quintuplet fermion.
Left-right symmetric (LRS) models [9,10] are by themselves a very well motivated extension of the SM. They are gauge extensions of the SM with the gauge group being SU(3)_C × SU(2)_L × SU(2)_R × U(1)_B−L. At a fundamental level, LRS models preserve parity (P) symmetry. The spontaneous breaking of the right-handed symmetry at some high scale leads to the observed parity violation at the weak scale. Another important consequence of this fact is that the P-violating terms in the QCD Lagrangian leading to the strong-CP problem [11][12][13][14][15][16][17][18][19] are absent in this class of models, hence naturally solving the strong-CP problem without a global Peccei-Quinn symmetry [20]. The gauge structure here compels us to have a right-handed neutrino in the particle spectrum, thus allowing for a small neutrino mass generation through the seesaw mechanism [21][22][23][24][25][26].

In this work, we have considered the SU(3)_C × SU(2)_L × SU(2)_R × U(1)_B−L gauge symmetry and enlarged the fermion spectrum by introducing a vector-like fermion multiplet which is a quintuplet under SU(2)_R. The neutral component of the SU(2)_R quintuplet could be a good candidate for the dark matter. As the fundamental gauge group in this case does not include U(1)_Y, the hypercharge group only emerges after the right-handed symmetry breaking SU(2)_R × U(1)_B−L → U(1)_Y. The hypercharge quantum number is thus a derived quantity and allows for many different combinations of charge assignment for the quintuplet to obtain zero hypercharge for the DM particle. This model would thus have a much richer phenomenology, with many different possibilities for the DM and the other particles of the quintuplet. The tree-level masses of the quintuplet particles are still degenerate, but the radiative corrections are much larger in this case. The mass splitting among the quintuplet fermions of different charges is now generated at the right-handed symmetry-breaking scale (heavy right-handed gauge bosons are running in the loops), resulting in a much larger splitting among them. Thus the production and subsequent decay of the high charge-multiplicity components of the quintuplet produce interesting signatures at the collider experiments without sacrificing the dark matter aspect of the model.

The dark matter candidate in the present scenario is the neutral component of the vector-like SU(2)_R quintuplet fermion. Therefore, the dark matter has gauge interactions with the SU(2)_R gauge bosons, namely the W_R and Z_R bosons. The self-annihilation and co-annihilation of the dark matter mainly proceed through a Z_R and W_R exchange in the s-channel, respectively. It is important to note that the lower values for the masses of W_R and Z_R are highly constrained from the LHC data as well as other low-energy observables. As a result, the self-annihilation and co-annihilation cross-sections are, in general, small. The dark matter relic density measured by the WMAP [27] and PLANCK [28] collaborations can only be satisfied for some particular values of the DM mass (at around half the W_R and Z_R masses) where the self-annihilation and/or co-annihilation cross-sections are enhanced by the resonant production of Z_R and/or W_R, respectively. It has been shown in Ref.
[3] that the dark matter relic density can only be satisfied at quite large DM masses (around 4 TeV or higher) if one also accounts for the direct detection constraints. A way to circumvent this problem, as discussed later in this paper, is to introduce a singlet scalar which can open up a lot of new annihilation and co-annihilation channels, allowing for a correct DM relic density for almost any DM mass while also satisfying the direct detection bounds. We have studied this in detail in this paper, focusing on the DM and singlet scalar phenomenology in the model.

The scalar, being a singlet under both SU(2)_L and SU(2)_R, has Yukawa couplings only with the vector-like quintuplet fermions. The couplings with the other scalars in the model arise from the scalar potential. However, couplings with the SM gauge bosons, namely W±, Z and the photon, arise at one-loop level via loops involving the quintuplet fermions. The collider phenomenology of the singlet scalar crucially depends on its loop-induced coupling with a pair of photons. In the framework of this model, the singlet scalar-photon-photon coupling is enhanced due to the multi-charged quintuplet fermions in the loop. Therefore, at the LHC experiment, a statistically significant number of singlet scalars could be produced via the photon-photon initial state. Depending on the parameters in the scalar potential, the singlet scalar dominantly decays into a pair of SM Higgses or photons, giving rise to interesting di-Higgs or di-photon resonance signatures at the LHC. We have also studied the collider signatures of the singlet scalar and the bounds on the singlet scalar masses from the di-Higgs and di-photon resonance searches by the ATLAS collaboration with 36 inverse femtobarn of integrated luminosity data of the LHC running at 13 TeV center-of-mass energy.

The rest of the paper is organized as follows. In Sec. 2 we introduce the model. The particle spectrum is listed along with the interactions among the particles. We compute the gauge boson, fermion and scalar masses and mixings in this section. In Sec. 3 we study the dark matter phenomenology of the model along with the bounds from direct detection experiments. In Sec. 4 we study the phenomenology of the singlet scalar. Sec. 5 has the collider bounds from the most recent diphoton and di-Higgs results from the Large Hadron Collider (LHC). Finally we conclude in Sec. 6 with some discussions.
Quintuplet Minimal Dark Matter Model

In this work, we consider a minimal model for dark matter (DM) in the framework of the SU(3)_C × SU(2)_L × SU(2)_R × U(1)_B−L gauge symmetry, where B and L are baryon and lepton numbers respectively. Due to the left-right symmetric nature of the model, the chiral fermions are now doublets for both the left- and right-handed sectors, where the numbers in the bracket correspond to the SU(3)_C, SU(2)_L, SU(2)_R and U(1)_B−L quantum numbers respectively. The electric charge Q for a particle in this model is given as Q = T_3L + T_3R + (B − L)/2, where T_3L/R are the third components of the isospin for SU(2)_L/R. A minimal scalar sector requires a right-handed doublet Higgs boson to break the SU(2)_R symmetry and a bidoublet Higgs field to break the electroweak symmetry and produce the quark and lepton masses along with the CKM mixings. The absence of a triplet scalar in the Higgs sector prevents us from writing a lepton number violating term in the Yukawa Lagrangian, and hence a light neutrino mass generation is not possible in this scenario without introducing unnaturally small Yukawa couplings. We thus introduce a singlet fermion N(1,1,1,0) which will help generate light neutrino masses through the inverse seesaw mechanism.

We also introduce an additional SU(2)_R vector-like fermion quintuplet, Σ ≡ χ(1,1,5,X). The neutral components of the bidoublet acquire the VEVs φ^0_1 = v_1 and φ^0_2 = v_2. For simplicity, we will assume the VEV of the S field to be zero. The gauge bosons of SU(2)_L, SU(2)_R, and U(1)_B−L mix among themselves to give four massive (W_R, Z_R, and the SM W and Z boson) and one massless (the SM photon) gauge bosons. We denote the left-handed (right-handed) triplet gauge state as W^i_L (W^i_R), while the B − L gauge boson is B. The mass-squared matrix for the charged gauge bosons, M²_W, is written in the basis (W_R, W_L) and that of the neutral gauge bosons, M²_Z, in the basis (W_3R, W_3L, B); here v_EW ∼ 174 GeV is the EW VEV and g_R = g_L = 0.653. This gives the masses of the new right-handed heavy gauge bosons in terms of g_R, g_B−L and v_R, while the left-handed W and Z boson masses are the same as in the SM, with the effective hypercharge gauge coupling given by 1/g_Y² = 1/g_R² + 1/g_B−L².

The relevant couplings of the gauge bosons with χ are fixed by the gauge quantum numbers: here the χ_i are the particles in the quintuplet Σ, Q_χi are the corresponding electric charges, and s_W = sin θ_W where θ_W is the Weinberg angle. These couplings are particularly important from the perspective of DM phenomenology as they can lead to self-annihilation (through Z_R) and co-annihilation (through W_R) of the DM particle so as to satisfy the RD bounds at these points. The fermion masses are generated from the Yukawa Lagrangian, where Y and f are the Yukawa couplings and Φ̃ is the conjugate bidoublet; the quark and charged lepton masses in this model then follow (Eq. 2.8). For simplicity, we will choose a large tan β (= v_1/v_2) limit, which requires one of the third-generation quark Yukawa couplings to be ∼ 1 to explain the top quark mass while the other remains < 10^−2. The neutrino mass matrix, on the other hand, is a 3 × 3 matrix in the basis (ν_L, ν_R, N). This is the inverse seesaw mechanism of neutrino mass generation. If we assume that f_R v_R >> m_D, µ_N, the approximate expressions for the neutrino mass eigenvalues (for one generation) are given in Eq. (2.10). So it is easy to get a light neutrino mass by appropriately choosing all the parameters in the neutrino sector.
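For orientation, the generic one-generation inverse-seesaw texture that the limit f_R v_R ≫ m_D, µ_N presupposes can be written as below; this is the standard form used in such constructions (with the heavy scale set by f_R v_R), quoted as an illustration rather than as the exact expression of Eq. (2.10).

```latex
% Generic inverse-seesaw texture in the basis (nu_L, nu_R, N); m_D is the Dirac
% mass from the bidoublet Yukawa and mu_N a small lepton-number-violating mass.
\begin{equation*}
  M_\nu =
  \begin{pmatrix}
    0   & m_D     & 0 \\
    m_D & 0       & f_R v_R \\
    0   & f_R v_R & \mu_N
  \end{pmatrix},
  \qquad
  m_\nu^{\rm light} \simeq \frac{m_D^{2}}{(f_R v_R)^{2}}\,\mu_N ,
  \qquad
  m_{\rm heavy} \simeq \pm f_R v_R
  \qquad (f_R v_R \gg m_D,\ \mu_N).
\end{equation*}
```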
The most general scalar potential involving the bidoublet field, an SU(2)_R doublet field, and a real singlet field is given in Eq. (2.11). The physical Higgs spectrum consists of four CP-even scalars, one CP-odd pseudoscalar, and one charged Higgs boson. Two charged states and two CP-odd states are eaten up by the four massive gauge bosons. Using the Higgs potential given in Eq. (2.11) to eliminate µ²_1, µ²_2 and µ²_R through the minimization conditions, we get the CP-even scalar mass-squared matrix; diagonalizing this mass-squared matrix gives four mass eigenstates. Table 1 gives one benchmark point for a set of parameters which can easily give a light SM-like 125 GeV Higgs boson, denoted by h, consisting almost entirely of the real part of the φ^0_1 field. We also get a 500 GeV scalar, denoted by H_1, consisting almost purely of the singlet S with negligible mixing with the others. This state is the one most important from the dark matter point of view, and it is easy to see here that the mass of this state can be easily increased (decreased) by just decreasing (increasing) the value of µ²_S and very slightly modifying the value of λ_1 accordingly. Such a change does not significantly alter the composition of this H_1 eigenstate till about a mass of 200 GeV. Further decreasing the mass of H_1 (just by increasing µ²_S) results in significant mixing of the singlet with the SM-like state and is ruled out from Higgs data, though one can then alter the other parameters of the model to still keep the mixing low. Two very heavy states, H_2 and H_3, with masses of the order of v_R, consisting of the real parts of the φ^0_2 and H^0_R states, are also present in the spectrum. The heavy states are required to be heavier than 15 TeV in order to suppress flavor changing neutral currents [29][30][31][32][33]. This can be easily satisfied in our model by choosing a high value (> 10 TeV) for the right-handed symmetry breaking scale, v_R. The masses of the pseudo-scalar A_1 and of the charged Higgs boson H± follow from the same potential. The benchmark of Table 1 uses v_1 = 173.9 GeV, v_2 = 5 GeV and v_R = 13 TeV; subscripts R and I stand for the real and imaginary parts of the fields respectively.

Dark Matter Phenomenology

The motivation for introducing the vector-like quintuplet fermion χ(1, 1, 5, X) was to obtain a candidate for DM. Since all the components of Σ get mass from the same term, they are all mass degenerate at tree level, but radiative corrections remove this degeneracy. Radiative corrections to the masses of the quintuplet fermions are introduced by the gauge sector and the singlet scalar, but since the coupling of the singlet scalar to all the quintuplet particles is the same, it does not introduce any splitting between their masses. The mass splitting due to quantum corrections is thus driven by the gauge-boson loops; Fig. 1 gives a plot of the mass differences between the neutral and the various charged states for all three cases with B − L = 0, 2, 4.
For a major portion of the parameter space, the masses of the charged components of the quintuplet get a positive contribution from the radiative corrections and hence χ^0 becomes the lightest among the quintuplet fermions. Thus the lightest component of the quintuplet, χ^0, can be a good candidate for dark matter. The stability of χ^0 is automatically ensured by virtue of its gauge quantum numbers. As χ^0 is part of the quintuplet, it can decay to the SM particles only via interactions with dimension-6 or higher operators, resulting in a decay width suppressed at least by a factor of 1/Λ². Taking the mass of χ^0 to be at the TeV scale, the decay width via a dimension-6 operator is of the order of M³/Λ². This corresponds to a lifetime greater than the age of the universe for Λ ≳ 10^14 GeV.

Relic Density

The dark matter relic density as a function of the DM mass for the B − L = 4 and B − L = 0 cases is given in Fig. 2. We have varied the DM mass from 100 GeV to 9 TeV and plotted the relic density for three values of λ, corresponding to λ = 0.1, 1.0, 2.0, and two fixed values of the scalar mass of 500 GeV and 1.5 TeV respectively, while keeping α_3 µ_3 = v_EW. The other important numbers required to fully understand the plots are M_WR = 6 TeV and M_ZR = 7.14 TeV. Considering first the B − L = 4 case, it is easy to see that there are five dips in the plot, three of them very sharp and the two others shallower. These are the regions where a sudden enhancement in the cross-section of either the annihilation of two DM particles or the co-annihilation of a DM particle with a singly charged χ± gives rise to a sudden decrease in the relic density. The three regions with sharp fall-off correspond to three s-channel processes while the two flatter ones correspond to two t-channel processes. The first dip, at M_DM = M_scalar/2, corresponds to the s-channel process where two DM particles annihilate through an H_1 into SM particles. This process reaches its resonance at a DM mass of half the scalar mass, and the sharp fall is because of the s-channel nature of the process. Careful analysis of the plots will show that the dip in the left plot is actually deeper than in the right one. This is because the total decay width of the 500 GeV scalar is smaller than in the 1.5 TeV case, resulting in a larger resonant cross-section and hence a deeper valley in the left plot.

The second dip is at M_DM = M_H1 and corresponds to the t-channel process of two DM particles annihilating into two H_1 bosons. An interesting thing to note is that at larger values of λ the relic density can easily satisfy the experimentally observed value, while for smaller λ values the relic density is greater than the experimental limits. This is because the annihilation cross-section σ(χ^0 χ^0 → H_1 H_1) ∼ λ^4, and only large values of λ lead to a large enough annihilation for the required decrease in relic density. If we take λ → 0 then this dip will disappear altogether. Another consequence of being a t-channel process is that in the limit t → 0 the cross-section σ ∼ M_H1^−2, and hence the larger (smaller) the scalar mass, the smaller (larger) the cross-section. This is the reason why the plot on the right with M_H1 = 1.5 TeV has a much smaller decrease in the relic density at this point compared to the left plot with M_H1 = 500 GeV.
The third fall-off corresponds to the co-annihilation of the DM particle through a W± resonance (χ^0 χ± → W_R± → SM), while the fourth is DM annihilation through a Z_R resonance (χ^0 χ^0 → Z_R → SM). It is easy to see that the dips are exactly at M_WR±/2 and M_ZR/2 respectively. The sharp fall-off is again an indication that both are s-channel processes. The fifth dip corresponds to the t-channel annihilation process χ^0 χ^0 → Z_R H_1 and is exactly at a DM mass equal to half of the combined masses of the Z_R and H_1 bosons. The annihilation cross-section σ ∼ λ² in this case, and hence this dip will again disappear in the limit λ → 0. Actually, there is another t-channel co-annihilation process, χ^0 χ± → W_R± H_1, that is masked by the dip corresponding to the Z_R-mediated annihilation channel.

The relic density plot for the B − L = 0 case is given in the right panels of Fig. 2. This plot has only four dips; the ones involving Z_R are absent here. This is because there is no χ^0 χ^0 Z_R vertex, as can be obtained from Eq. 2.6 by putting Q_χi = 0 along with Q_B−L = 0. Similar to the previous case, the first dip corresponds to the resonance of the s-channel annihilation process mediated by H_1. The larger (smaller) total decay width in the heavier (lighter) H_1 mass case leads to a shallower (deeper) dip, like before. The second dip corresponds to the t-channel annihilation process χ^0 χ^0 → H_1 H_1. The third dip is the s-channel co-annihilation mediated by a W_R± boson. The fourth dip corresponds to the t-channel co-annihilation χ^0 χ± → W_R± H_1, which was not visible in the previous case. It is clearly visible here as the Z_R boson couplings with the DM particle are absent. This cross-section is again proportional to λ², and hence the dip decreases with λ and eventually vanishes as λ → 0.

The relic density plot for the B − L = 2 case is given in Fig. 3. Here the DM mass is only taken up to around 3 TeV as, after that, the negatively charged component of the quintuplet becomes the lightest and stable and is hence ruled out. The plot is very similar to the previous ones. The three dips visible here are due to the s-channel H_1-mediated annihilation, the t-channel annihilation of two DM particles into two H_1s, and the s-channel W_R±-mediated co-annihilation processes respectively. We have included two plots here for a DM mass of 500 GeV for two different values of α_3 µ_3, being v_EW and 0.1 v_EW respectively. It is easy to see that the only difference between the two plots is in the width of the s-channel annihilation region. The lower value of α_3 µ_3 leads to a much narrower region, with the relic density being satisfied by two points which are very close in DM mass, while for the larger coupling there are two quite distinct values of DM mass possible. This is because at the resonance point a smaller value of α_3 µ_3 will lead to a smaller annihilation cross-section, resulting in a narrower and shallower dip in the relic density plot. Similarly, for the other two cases (B − L = 0, 4), the effect of this trilinear coupling would only be seen in the scalar-mediated annihilation channel, as that is the only relevant process involving this coupling.
The introduction of the scalar singlet S has a huge influence on the allowed dark matter masses which can satisfy the observed relic density. As has been discussed earlier, the majority of the dips in the dark matter relic density plots would disappear in the absence of this singlet scalar. In fact, if S is removed from the spectrum there would be no possible dark matter satisfying the observed relic density for the B − L = 2 case with a W_R mass of 2 TeV [3]. The introduction of a singlet even in this constrained parameter space could provide at least two points with the correct DM relic density for a small enough singlet scalar mass. We would thus like to examine what happens if we keep the singlet boson mass as a free parameter while also varying λ. As has been discussed in Sec. 2, the singlet-like Higgs boson mass can be easily changed by just varying the value of µ²_S; hence it is quite natural that the singlet mass is not a fixed quantity but a variable in this analysis.

The scatter plots in Fig. 4 represent the allowed points which satisfy the experimentally observed relic density as a function of the DM mass, the H_1 mass and λ. We have considered only a relatively low DM mass benchmark region with 0.1 TeV ≤ M_DM ≤ 2.5 TeV. The left plots are for the B − L = 4 case while the right panels are for B − L = 0. The plot for the B − L = 2 case is very similar to the one for B − L = 0 and hence has not been included here. To understand this similarity we need to look at the quintuplet spectrum for each of them. For B − L = 4 there is only one singly charged particle in the quintuplet spectrum, while for both B − L = 0, 2 there are two singly charged quintuplet fermions each. The masses of the charged particles are also quite close, resulting in a very similar behavior for both these cases, at least till the point where the neutral fermion is the lightest.

Let us first understand the B − L = 4 case. Looking at the top left plot in Fig. 4 we clearly see that there are three well-defined distinct regions: a narrow straight line with M_DM ≈ M_H1/2, a triangular region bounded from above by the straight line M_DM = M_H1, and a rectangular region around 1.9 TeV ≲ M_DM ≲ 2.05 TeV. The narrow straight line corresponds to the s-channel annihilation of two DM particles through an H_1 boson. There are actually two lines here, with two points for each H_1 mass. If we look back at Fig. 2, we see that at a DM mass of M_H1/2 there is a sharp dip with two points satisfying the correct relic density, one before and another after the resonance point. Similarly, for any scalar mass there should be two such points and hence two narrow straight lines. The triangular region is the one corresponding to the t-channel annihilation of two DM particles into two H_1 bosons. A close inspection of this region will show that the upper boundary of the plot is lined by points with λ ∼ 2. These are the points where the DM mass is just equal to the scalar mass and the annihilation is only possible for a very large cross-section due to the phase space suppression. All the underlying points are where M_DM > M_H1. In this region there is a monotonic increase in λ as we move from low to high DM mass (for a fixed M_H1). Since this process is t-channel, the annihilation cross-section decreases as the mass difference M_DM − M_H1 increases, and a larger value of λ is needed to compensate for this decrease. The value of λ in this region should also increase as we move downward towards a lower M_H1 for a constant DM mass, for the exact same reason.
The rectangular region around 2 TeV is due to the co-annihilation of the DM with a charged particle through a W_R boson. This region should be independent of λ, but the plot shows something completely different. We see that in the parts of this region overlapping with the other two, only small values of λ are allowed. Actually, in the overlapping regions there is more than one process contributing to the decrease in relic density, and if the W_R co-annihilation process has to dominate, the other two processes, which are both proportional to some power of λ (λ^4 and λ^2 for the t- and s-channel processes respectively), should be small. Hence the low-λ points are the only ones allowed in these regions.

The B − L = 2 plot in Fig. 4 is similar in nature to the previous case, except that there is a new region here, which is a line parallel to the Y-axis around a DM mass of 200 GeV. If we look back at Fig. 3 we see that the relic density is initially increasing and crosses the experimentally observed line at around the 200 GeV DM mass. This point is independent of λ and gives rise to the vertical line here. Another important new observation here is that now a part of the triangular region can never satisfy the relic density constraints, irrespective of the value of α_3 µ_3. As has been discussed earlier, moving from low to high masses in the triangular region requires the annihilation cross-section to progressively increase as well. Thus we require a larger value of λ, but since we only allow λ ≤ 2 there are some parts which simply cannot produce enough annihilation and the relic density is always higher than the observed limit. This situation is remedied as we move closer to the W_R-mediated co-annihilation region, as now both the t-channel annihilation and the s-channel co-annihilation together can contribute to decreasing the relic density to the correct experimental limit. Independently, just increasing the value of λ to include points up to λ < 3 would result in the complete disappearance of this empty patch. The relic density constraints can then be satisfied over the entire parameter region considered here.

The case with a smaller α_3 µ_3 = 0.1 v_EW is plotted in the lower panel of Fig. 4. The only difference compared to the upper panel plots (with α_3 µ_3 = v_EW) is that the narrow straight line here is indeed one line instead of two. This, from our earlier observation, is due to the much narrower dip in the s-channel scalar annihilation region for a smaller value of the trilinear coupling.

Direct Detection

This model can lead to a quite significant DM-nucleon scattering cross-section via the Z_R-boson exchange diagram, resulting in stringent constraints from DM direct detection experiments. This constraint would be most severe for the higher B − L cases, while for B − L = 0 the χ^0 − Z_R interaction itself is absent, resulting in no significant bounds in this case. We thus study the case of maximal B − L (= 4), where the DM-nucleon scattering, mediated by Z_R, would be suppressed by 1/M_ZR^4. The left panel of Fig. 5 gives the scattering cross-sections of χ^0-proton and χ^0-neutron as a function of m_ZR for two different values of g_R/g_L. A smaller value of g_R/g_L leads to a larger cross-section and hence requires a larger Z_R mass to evade the direct detection limits. This fact can be easily seen in the right panel of Fig.
5, where we have plotted the direct detection bound from LUX [34] in the m_DM - m_ZR plane. The shaded region is consistent with the LUX data, and hence a Z_R mass greater than 7 TeV is safe for a DM mass above 100 GeV for g_R/g_L = 1, as has been chosen throughout this paper.

Phenomenology of the Singlet Scalar

Although the singlet scalar was introduced to satisfy the dark matter relic density for almost any DM mass, compared to only a few points in its absence, it gives rise to interesting signatures at the collider experiments. Before going into the details of the production cross-section and collider signatures, it is important to study the decays of the singlet scalar (H_1). At tree level, H_1 couples with a pair of SM Higgs bosons or with a pair of quintuplet fermions. Therefore, if kinematically allowed, H_1 dominantly decays into a pair of Higgses or a pair of quintuplet fermions. H_1 also has loop-induced couplings with a pair of photons, a pair of Z-bosons and a photon-Z pair. The couplings of both the photon and the Z-boson with the quintuplet fermions being proportional to the electric charge of the fermions (see Eq. 2.6), the loop-induced decays can be quite significant, as they involve the multi-charged quintuplet fermions running in the loop. In particular, the diphoton decay could be as significant as other decay modes in certain parts of the parameter space. The loop-induced interactions (in particular the interaction with a pair of photons) play the most crucial role in the production and phenomenology of H_1 in the context of hadron collider experiments. Since the B − L = 4 quintuplet contains the fermions of highest charge multiplicity, we will only consider this case for our analysis in this section, as it will lead to the strongest constraints on the model. The decay width of the singlet scalar into a pair of photons is obtained by summing over the quintuplet fermions χ_i ⊂ {χ^++++, χ^+++, χ^++, χ^+ and χ^0} running in the loop, where M_χi and Q_χi are the mass and charge of the corresponding χ_i respectively, and the standard fermion loop function (quoted for x ≤ 1) enters the amplitude. In Fig. 6 we have plotted the scalar decay branching ratios as a function of the singlet mass for a fixed value of the DM mass (m_χ0 = 200 GeV), λ = 1.0, µ_3 = 174 GeV, and two different values of α_3. The left panel corresponds to α_3 = 1.0 while the right panel is for α_3 = 0.1. For M_H1 < 250 GeV, the di-Higgs decay is kinematically forbidden and hence the only possible decay modes are the loop-induced decays into a pair of SM gauge bosons. The decay into a pair of W±-bosons is highly suppressed by the small W_L-W_R mixing and is hence not shown in Fig. 6. The di-Higgs decay mode becomes kinematically allowed for M_H1 > 250 GeV for a 125 GeV SM Higgs. In this region of the parameter space, the branching ratios depend on two parameters in the scalar potential, namely the Yukawa coupling λ (which determines the strength of the loop-induced diboson-singlet scalar interactions) and α_3 (which determines the di-Higgs decay width). We clearly see that as we decrease the value of α_3, the diphoton branching ratio increases compared to the di-Higgs. A similar phenomenon will also take place if one increases the value of λ keeping α_3 constant. Once the quintuplet decay channel opens up, the decay is almost entirely into the quintuplet fermions, with all other channels completely disappearing. It is important to notice the enhancement of the loop-induced decay branching ratios into two gauge bosons at around 400 GeV, which is the threshold of the quintuplet on-shell contribution in the loop.
The singlet scalar has a loop-induced coupling with a pair of photons. The production of H_1 at the LHC proceeds through the photon-fusion process and is hence suppressed by the small parton density of the photon inside a proton. In fact, the parton density of the photon is so small that most of the older versions of PDFs do not include the photon as a parton. However, photo-production is the only way to produce H_1 at the LHC. Moreover, if we want to include QED corrections to the PDF, inclusion of the photon as a parton with an associated parton distribution function is necessary. And in the era of precision physics at the LHC, when PDFs are determined up to NNLO in QCD, NLO QED corrections are important (since α²_s is of the same order of magnitude as α) for the consistency of calculations. In view of these facts, NNPDF [35,36], MRST [37] and CTEQ [38] have already included a photon PDF in their PDF sets. However, different groups used different approaches for modeling the photon PDF. For example, the MRST group used a parametrization for the photon PDF based on radiation off of primordial up and down quarks, with the photon radiation cut off at low scales by constituent or current quark masses. The CT14QED variant of this approach constrains the effective mass scale using ep → eγ + X data, sensitive to the photon in a limited momentum range through the reaction eγ → eγ. The NNPDF group used a more general photon parametrization, which was then constrained by high-energy W, Z and Drell-Yan data at the LHC. For computing the photon luminosity in pp collisions with 13 TeV center-of-mass energy, we have used the NNPDF23_lo_as_0130 PDF with the factorization scale chosen to be fixed at M_H1. The resonant photo-production cross-section of H_1 at the LHC can be computed from its di-photon decay width and the LHC photon luminosity at √ŝ = M_H1, where τ = M²_H1/s, f_γ(x) is the photon PDF and s is the pp center-of-mass energy. The production cross-section for the heavy singlet is shown in Fig. 7 as a function of the singlet mass for a few different DM masses. The DM mass is important here because the masses of the other quintuplet fermions are determined by the DM mass and the radiative corrections. A larger DM mass implies that the masses of the charged states running in the photon-fusion loop are also larger and hence a smaller cross-section for singlet production. It is important to notice the bump around M_H1 ∼ 2M_χ4+ due to the threshold enhancement of the diphoton decay width.

Collider Bounds

After production, the singlet scalar dominantly decays into a pair of photons or Higgs bosons as long as the decays into a pair of quintuplet fermions are kinematically forbidden. Therefore, the production and decay of H_1 give rise to interesting resonant diphoton and/or di-Higgs signatures at the LHC. The ATLAS and CMS collaborations of the LHC experiment have already searched for new physics signatures in the diphoton [39] and di-Higgs [40] invariant mass distributions. In the absence of any significant deviation from the SM prediction, limits are imposed on the production cross-section times branching ratio of a resonance giving rise to the above-mentioned signatures. These limits could lead to significant bounds on the DM RD allowed scalar mass (for example, see Fig. 4). In our analysis we found that the diphoton bound is a lot more severe, especially for the α_3 = 0.1 case. In Fig.
8 we have plotted the H_1 production cross-section times the diphoton branching ratio, σ(pp → H_1) × BR(H_1 → γγ), and compared it with the ATLAS observed limit [39]; for comparison, the solid black line in Fig. 8 shows the ATLAS bound. The diphoton production cross-section is plotted for two different values of the DM mass, 200 GeV and 600 GeV, for fixed values of λ = 1.0 and α_3 = 0.1. For the 200 GeV DM mass, any scalar mass below 430 GeV is ruled out by the experiments. For the 600 GeV DM mass, a small region from 630 GeV to 680 GeV along with 730 GeV to 1220 GeV scalar masses would be ruled out. It is interesting to note that larger values of the scalar mass are excluded for the higher DM mass while the smaller values of M_H1 remain allowed. This is a consequence of the fact that a larger DM mass corresponds to a smaller H_1 production cross-section, and hence the diphoton signal cross-section in the smaller M_H1 region is smaller than the ATLAS bound. On the other hand, a larger DM mass also corresponds to a threshold enhancement of the diphoton decay width and hence of the diphoton signal rate at larger M_H1. As a result, some part of the M_H1 region around M_H1 ∼ 2M_χ0 is excluded for M_χ0 ∼ 600 GeV.

Experimental bounds on the resonant di-Higgs [40] and diphoton [39] signal cross-sections have a significant impact on the DM allowed regions given in Fig. 4. We have scanned the DM allowed points in Fig. 4 to check the consistency of those points with the ATLAS di-Higgs and diphoton searches, and the results are presented in Fig. 9. The pink points in the plots are ruled out by the diphoton search while the black points are ruled out by the di-Higgs searches. As one would expect, the di-Higgs bounds are only applicable for the case with α_3 = 1.0, as the two-Higgs final state scalar decay branching ratio is quite large in this case. The diphoton bounds are much stronger for the lower α_3, as the diphoton decay branching ratio is much larger there. Even though a part of the parameter space is ruled out, a large part of it still remains which can explain all the observations from both the collider and dark matter experiments.
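Confronting the RD-allowed points with the ATLAS limits amounts to interpolating the published 95% CL upper limit on σ × BR as a function of the resonance mass and flagging the model points that exceed it; a sketch is given below, where the file names and column layouts are placeholders rather than the actual ATLAS or scan output formats.

```python
# Flag RD-allowed model points that exceed an interpolated resonance limit.
# File names and column layouts are placeholders, for illustration only.
import numpy as np

limit_mass, limit_xsec = np.loadtxt("atlas_diphoton_limit.csv",
                                    delimiter=",", unpack=True)    # GeV, fb
model_points = np.loadtxt("rd_allowed_points.csv", delimiter=",")  # columns: m_H1 (GeV), sigma*BR (fb)

def excluded(m_h1, sigma_br):
    return sigma_br > np.interp(m_h1, limit_mass, limit_xsec)

flags = np.array([excluded(m, s) for m, s in model_points])
print(f"{flags.sum()} of {len(model_points)} RD-allowed points excluded by the diphoton search")
```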
Summary and Conclusions To summarize, we have performed the dark matter and collider phenomenology of a leftright symmetric (SU (3) C × SU (2) L × SU (2) R × U (1) B−L gauge symmetry) model with an additional SU (2) R quintuplet fermion and a singlet scalar.The motivation for introducing the quintuplet fermion is to obtain a viable candidate for cold dark matter.The neutral component of the quintuplet fermion, being weakly interacting and stable (if lightest among the other components of the quintuplet), could be a good candidate for dark matter.The dark matter, in this model, can interact with ordinary matter via the exchange of a SU (2) R gauge boson (in particular, Z R ).The bounds on the dark matter-nucleon scattering cross-sections from the direct dark matter detection experiments such as LUX exclude M Z R below few TeV for a sub-TeV dark matter.Moreover, the gauge interactions of the neutral quintuplet fermion with massive (> few TeV) SU (2) R gauge bosons result into small annihilation and co-annihilation cross-sections and thus, predict relic density which is much larger than the observed WMAP/PLANCK results.The observed relic density can only be satisfied for few discrete values of the dark matter mass near W R /Z R resonance region (near M W R /2 and M Z R /2).Therefore, in the framework of left-right symmetry with a quintuplet dark matter candidate, sub-TeV dark matter masses are ruled out from the direct detection experiments and relic density constraints.Moreover, an experimentally consistent dark matter candidate in the range of few TeV is only possible for B − L = 4 case.To resolve these issues we introduce a singlet scalar in the above mentioned framework.The Yukawa coupling of the singlet scalar with the quintuplet fermion gives rise to a host of new annihilation channels for the Dark matter.We perform in detail the dark matter phenomenology in this singlet scalar extended scenario.We show that the WMAP/PLANCK measured dark matter relic density can be satisfied over a large range of dark matter masses including sub-TeV range.Moreover, the neutral component of the quintuplet fermion with B − L = 2 and 0 cases can give rise to an experimentally consistent candidate for dark matter as long as they are the lightest member of the quintuplet. 
We also study the collider signatures of the singlet scalar in detail. Being a singlet, it has no tree-level interactions with the SM gauge bosons or Yukawa interactions with the ordinary leptons and quarks. However, the Yukawa interaction with the quintuplet fermion is allowed by the gauge symmetry. The interactions of the singlet scalar with a pair of EW gauge bosons arise from loop-induced higher-dimensional operators. On the other hand, the scalar potential contains the interactions involving the singlet scalar and a pair of SM Higgs bosons. This enables us to study the loop-induced (γγ, ZZ and Zγ) as well as the tree-level decays (hh and a pair of quintuplet fermions) of the singlet scalar. We find that as long as the decay of the scalar to a pair of quintuplet fermions is kinematically forbidden, it only decays to a pair of Higgs bosons or a pair of photons with significant branching ratios. In view of this, we study the photo-production (photon-photon fusion process) of the singlet scalar at the LHC with 13 TeV center-of-mass energy. The photo-production of the singlet scalar and its subsequent decay give rise to interesting resonant diphoton and di-Higgs signatures at the LHC. New physics contributions to the resonant diphoton and di-Higgs productions have already been studied in detail by the ATLAS and CMS Collaborations. We use the most recent ATLAS bounds on the resonant di-Higgs and diphoton cross-sections to show that some part of the dark matter relic density allowed parameter space in our model could be ruled out. It is worthwhile to mention that a significant part of the parameter space in our model is still consistent with the dark matter direct detection constraints, the WMAP/PLANCK results as well as the LHC bounds. Future LHC data will be able to probe this part of the parameter space which is still allowed. Therefore, the singlet scalar extended quintuplet MDM left-right symmetric model will be able to explain any future LHC excesses either in the resonant di-Higgs or diphoton channels. On the other hand, the absence of any such excesses could lead to more stringent bounds on the parameter space of our model.

Figure 1: Mass difference between various charged states as a function of the neutral state mass for different B-L cases for M_WR = 6 TeV and M_ZR = 7.14 TeV. Note that the y-axis ranges are different in all the panels.

Figure 2: Relic density as a function of the dark matter mass. The left panel is for the B − L = 4 case, whereas the right panel is for the B − L = 0 case. The upper (lower) panels are for the scalar mass 0.5 TeV (1.5 TeV). In all the panels, we take M_WR = 6 TeV, M_ZR = 7.14 TeV, α_3 µ_3 = v_EW.

Figure 3: Relic density as a function of the dark matter mass for the B − L = 2 case with M_WR = 6 TeV, M_ZR = 7.14 TeV.

Let us first understand the B − L = 4 case. Looking at the top left plot in Fig. 4 we clearly see that there are three well-defined distinct regions: a narrow straight line with M_DM ≈ M_H1/2, a triangular region bounded from above by the straight line M_DM = M_H1, and a rectangular region around 1.9 TeV ≲ M_DM ≲ 2.05 TeV. The narrow straight line corresponds to the s-channel annihilation of two DM particles through an H_1 boson. There are actually two lines here, with two points for each H_1 mass. If we look back at Fig. 2 we

Figure 4: Scatter plots in the m_DM–m_scalar plane showing the allowed parameter space satisfying the relic density. We vary λ in the range 0 to 2 in each panel. The left panel is for the B − L = 4 case, whereas the right panel is for the B − L = 0 case. The upper (lower) panels are for α_3 µ_3 = v_EW (0.1 v_EW). In all the panels, we take M_WR = 6 TeV, M_ZR = 7.14 TeV.

Figure 5: χ0-proton and χ0-neutron scattering cross-sections are shown as a function of M_ZR considering two different values of gR/gL = 0.6 and 1.0 in the left panel. The top right panel depicts the colored region in the m_DM–M_ZR plane which satisfies the LUX [34] upper bound on the DM-nucleon scattering cross-section for gR/gL = 0.6. We show the same for gR/gL = 1.0 in the bottom right panel.

Figure 6: The decay branching ratios of the singlet scalar (H1) as a function of its mass. The left (right) panel is for α_3 µ_3 = v_EW (0.1 v_EW). In both panels we consider the B − L = 4 case with λ = 1.0 and m_χ0 = 200 GeV.

Figure 7: The production cross-section of the singlet scalar (H1) as a function of its mass for various choices of the DM mass (m_χ0). The curves are shown for the B − L = 4 case with λ = 0.5.

Figure 8: The diphoton production cross-section is shown as a function of the singlet scalar mass for two different choices of the DM mass. We draw these curves for the B − L = 4 case with α_3 µ_3 = 0.1 v_EW and λ = 1.0. For comparison, the solid black line shows the ATLAS bound [39].

Figure 9: The allowed parameter space in the m_DM − m_scalar plane satisfying the relic density for the B − L = 4 case. The left (right) panel is for α_3 µ_3 = v_EW (0.1 v_EW). In both panels we vary λ in the range 0 to 2. The pink (black) points show the regions excluded by the diphoton [39] (di-Higgs [40]) search at the LHC.

Y. The heavy gauge boson masses are thus naturally generated at this scale. The electroweak (EW) symmetry breaking and the fermion masses and mixings, on the other hand, are generated by the neutral components of the Φ field once they acquire a non-zero VEV
2018-05-21T05:53:19.000Z
2018-03-05T00:00:00.000
{ "year": 2018, "sha1": "7e1c881b372c02e83bbaba2fc183025ce953c927", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP05(2018)123.pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "7e1c881b372c02e83bbaba2fc183025ce953c927", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
11598475
pes2o/s2orc
v3-fos-license
Recurrent takotsubo with prolonged QT and torsade de pointes and left ventricular thrombus Takotsubo cardiomyopathy, also known as “takotsubo syndrome,” refers to transient apical ballooning syndrome, stress cardiomyopathy, or broken heart syndrome and is a recently recognized syndrome typically characterized by transient and reversible left ventricular dysfunction that develops in the setting of acute severe emotional or physical stress. Increased catecholamine levels have been proposed to play a central role in the pathogenesis of the disease, although the specific pathophysiology of this condition remains to be fully determined. At present, there have been very few reports of recurrent takotsubo cardiomyopathy. In this case report, we present a patient with multiple recurrences of takotsubo syndrome triggered by severe emotional stress that presented with recurrent loss of consciousness, QT prolongation, and polymorphic ventricular tachycardia (torsade de pointes) and left ventricular apical thrombus. Introduction T akotsubo cardiomyopathy with a recently recommended nomenclature of Takotsubo syndrome (TTS) is an increasingly recognized entity characterized by transient (reversible), mainly apical and mid left ventricular (LV) dysfunction, which less commonly involves other LV segments (sparing the apex) in the absence of significant coronary artery disease that is potentially triggered by emotional, physical stress (primary TTS), medical illness (acute exacerbations of multiple medical conditions such as asthma, pneumothorax, gastrointestinal bleeding, or hypoglycemia), or surgical intervention (secondary TTS) [1][2][3][4][5]. Increased catecholamine surges have been proposed to play a central role in the pathogenesis of this condition [6]. Typically, the LV function recovers and normalizes in few days to weeks. It may account for up to 2% of suspected acute coronary syndrome (ACS). Takotsubo syndrome is much more common in women than in men, particularly postmenopausal women [2][3][4]7]. In a review of six prospective and four retrospective studies women accounted for 80-100% of cases, with a mean age of 61-76 years [7]. Triggering factors were bereavements of loved one, devastating financial losses, natural disasters, acute physical and critical illness, medical procedures or surgeries, and other catastrophic news. Takotsubo cardiomyopathy (syndrome) is a diagnosis of exclusion [5,8]. Researchers at the Mayo Clinic proposed diagnostic criteria in 2004, that include: (1) transient hypokinesis, akinesis, or dyskinesis in the LV midsegments with or without apical involvement; regional wall motion abnormalities that extend beyond a single epicardial vascular distribution, and frequently, but not always, a stressful trigger; (2) the absence of obstructive coronary disease or angiographic evidence of acute plaque rupture; (3) new electrocardiogram (ECG) abnormalities (STsegment elevation and/or T-wave inversion) or modest elevation in cardiac troponin; and (4) the absence of pheochromocytoma and myocarditis [3]. In a recent position statement of the European Society of Cardiology, a new set of seven diagnostic criteria are proposed incorporating anatomical features, ECG changes, cardiac biomarkers, and reversibility of the myocardial dysfunction [5]. Numerous etiologies have been described, including catecholamine release during stress [3,9,10] and microvascular spasm or ischemia [11,12]. 
Before this recent position statement [5], there were no established treatment algorithms for TTS; however, a new management algorithm is now proposed based on risk stratification pathways [5]. Most patients present with acute chest pain mimicking an ACS and are treated according to ACS guidelines. TTS was generally considered a benign condition; however, in-hospital mortality is 0-8% and death is much more common in the setting of LV-outflow obstruction and from noncardiac causes [13][14][15][16][17][18]. The long-term outcome of TTS is not as benign as previously believed with recently reported 5-year mortality rates of 3-17% [5]. The results of a recent population-based registry showed that mortality rates in patients with TTS were worse than in control individuals without coronary artery disease and comparable to patients with ACS [19]. Recurrent TTS data are limited due to the relative short-term observation and only a few cases have been previously reported [16,17]. Currently no evidence supports prophylactic treatment after the first presentation. We present in this report a rare case of recurrent apical ballooning syndrome in a woman who presented with recurrent chest pain, loss of consciousness (LOC), long QT (LQT), torsade de pointes (TdP), and LV thrombus during severe emotional stress. Case report A 48-year-old woman with a history of postpartum depression (no medications) with no coronary risk factors was first hospitalized 7 years previously (2009) following an episode of LOC at home and chest pain. She was diagnosed with heart failure with abnormal wall motion on transthoracic ECG manifesting as severe hypokinesis involving the apex. She was given guideline-directed medical therapy for heart failure in the form of angiotensin converting enzyme inhibitors, b-blockers, aspirin, and diuretic therapy. Four months later following improvement of her condition, coronary angiogram and echocardiography were performed in a UK center and both were reported to be normal, therefore all medications were discontinued. Over the past 6 years she had infrequent syncope always after emotional stress, in 2010 again after a heightened emotional situation she had recurrent chest pain, LOC, and dyspnea at another institution. Echocardiography reported low LV ejection fraction (EF)-40% with apical wall motion abnormalities-and hypokinesis. Medical therapy with angiotensin converting enzyme inhibitors, b-blockers, aspirin, and diuretics was introduced. One year later (2011), she was seen in our cardiology clinic for follow-up and echocardiography once again revealed normal LV function with an EF of 55%. Telemetry on the coronary care unit showed several episodes of TdP which on five occasions rapidly degenerated to ventricular fibrillation (VF; Fig. 2). Serum electrolytes, inflammatory markers, and pulmonary hypertension were normal and she was not receiving any QT-prolonging drugs. She required electrical defibrillation and received magnesium sulfate and mexilitine. Additionally, b-blockers were held for bradycardia. Repeat coronary angiography again demonstrated normal epicardial vessels and no coronary spasm. On reviewing her history, we noted that before this event, as well as all previous events, she was under the same severe emotionally stressful situations after which she always experienced central chest pain followed by LOC. Internal cardiac defibrillator (ICD) implantation was decided for secondary prevention of sudden cardiac death. 
Prolonged QT Four days later, while in hospital, repeat echocardiography revealed mild improvement of LV contractility with a rise in EF to 40%; however, there was a moderately sized LV apical thrombus (Fig. 4), hence anticoagulation was commenced. This soon resolved on repeat echocardiography 1 week later with marked improvement of LV contractility and once again normalization of LV function with an EF of 55% with resolution of the apical thrombus (Fig. 5). Serial ECGs for 3 consecutive days displayed marked repolarization abnormalities with fluctuating prolonged QT intervals that failed to normalize. After a long discussion among all treating physicians, the electrophysiologist, and the patient, an ICD was implanted; thereafter, she was completely stable. The final diagnosis was as follows: recurrent TTS complicated by LV apical clot, acquired LQT, TdP, and VF cardiac arrest. She was discharged home on guideline-directed medical therapy for heart failure and she continues to have regular follow-ups. Echocardiogram repeated almost 1 year later revealed normal LV size and systolic function (EF, 55%) and there were no ICD therapies, heart block, or recorded arrhythmic events on device interrogation. Emotional and physical stresses often precipitate the clinical presentation. This suggests a relationship between cortical brain activity (a central catecholamine surge) and myocardial stunning [2,3,22,23]. TTS is divided into two types: primary and secondary. In primary TTS, the acute cardiac symptoms are the primary reason for seeking care-such patients may or may not have clearly identifiable stressful triggers (often emotional). In secondary TTS, most cases occur in patients already hospitalized for another medical, surgical, obstetric, or psychiatric condition. In these patients, sudden activation of the sympathetic nervous system or a rise in catecholamines precipitates an acute TTS as a complication of the primary condition or its treatment. Studies have found that patients with TTS have statistically significant higher levels of serum catecholamines (norepinephrine, epinephrine, and dopamine) than patients with myocardial infarctions [23][24][25][26]. Increased beta-2-adrenoceptor activity in the setting of a high catecholaminergic state has been proposed as a possible reproducible model for this entity, inducing cardiac dysfunction and myocyte injury though calcium leakage due to hyperphosphorylation of ryanodine receptor 2 [27]. The apical portions of the LV have the highest concentration of sympathetic innervations found in the heart and increased beta-2 concen- tration gradient from the apex to the base could play an important role in apical myocardial dysfunction and ballooning commonly found in TTS cases [23][24][25][26][27]. Combining the results from multiple studies plasma norepinephrine levels were elevated in 74% of cases [15]. LV Apical Thrombus The pathogenesis of TTS may be multifactorial, similar to catecholamine-induced cardiomyopathy [24], pheochromocytoma [25], and subarachnoid hemorrhage [26]. The catecholamine hypothesis as a cause for reoccurring TTS, as in our case, can be further supported by observation of a similar reversible cardiomyopathy with global or focal dysfunction in patients with pheochromocytoma [25], in the setting of acute brain injury and Guillain-Barré autonomic neuropathy, which have also been postulated to be related to catecholamine excess. 
TTS has been reported as a novel association with catecholaminergic polymorphic VT particularly in young women with congenital LQT [28]. Catecholamine excess has reversible toxic effects on the myocardium, which has been documented in cases of pheochromocytoma [25][26][27][28][29][30][31]. Histological examination of biopsy samples from the affected LV of patients with TTS has shown intracellular accumulation of glycogen, many vacuoles, disorganized cytoskeleton and contractile structure, contraction band necrosis, and increased extracellular matrix proteins, which is associated with clinical states of catecholamine excess [32][33][34]. These alterations resolved nearly completely after functional recovery. TTS is associated with minor release of cardiac enzymes, which suggests some microscopic damage to the myocytes. The absence of causative coronary artery disease on angiography and the diffuse rather than localized wall motion abnormalities point to an insult that is global but microscopic in nature. In our report we present an interesting case with several important parts including repeated stressinduced syncope from acquired LQT and TdP, recurrent TTS, and development of apical thrombosis. Our patient did not have any significant family history of cardiac disease or sudden death and thus we did not test her for ryanodine receptor mutations. LV Apical Clot Resolved Recurrent TTS data is limited due to the relative short-term observation and only a few cases have been previously reported [16,17]. Five-year recurrence rates of 5-22% have been reported, with the second episode occurring from 3 months to 10 years after the index event [5]. Recurrence of a different anatomical variant has been reported. If a patient has a recurrence, long-term clinical follow-up should be considered. Currently no evidence supports prophylactic treatment after the first presentation. There are no controlled data to define the optimal medical regimen to treat TTS, but it has been advocated that it is reasonable to treat these patients with standard medications for LV systolic dysfunction with guideline-directed medical therapy for heart failure. These include angiotensin converting enzyme inhibitors, beta-blockers, and diuretics, particularly for volume overload states [3]. Aspirin and statin therapy are also reasonable [8]. It was reported that thrombosis occurs in TTS cases, which might reflect vasoconstrictor, platelet activation, or prothrombotic effects of extremely high epinephrine levels [33][34][35][36][37]. In one study, 5% of patients with TTS developed LV thrombus, and all patients with LV thrombus were started on anticoagulation and one patient developed stroke [38]. This must be weighed against the hypothesized increased risk of cardiac rupture with apical ballooning and aspirin or heparin therapy [37]. Consequently, the role of anticoagulation is largely regarded on a case by case basis. It is reasonable to continue anticoagulation until the LV function returns and thrombus resolution [39]. In our case, heparin infusion was given for 7 days and was stopped after the disappearance of apical thrombus and oral anticoagulation was planned to start after ICD implantation, but after repeating the echo before the procedure, apical ballooning disappeared and LV contractility returned back to normal, therefore oral anticoagulation was abandoned. Arrhythmia resulting from QT prolongation is commonly observed in patients with TTS. 
The prevalence of QT interval prolongation among TTS patients is high, ranging from 50% to 100% according to different case series [40][41][42]. QT interval prolongation might precipitate TdP, a polymorphic ventricular tachycardia that might lead to ventricular fibrillation and sudden death. This potentially fatal arrhythmia is associated with administration of QT prolonging agents, hypokalemia, hypomagnesemia, and congenital LQT syndrome, which were not applicable in our case. Although QT interval prolongation is prevalent among TTS patients and might precede TdP, the latter has rarely been reported in TTS patients (it featured in 7 of 286 patients from the German registry data) [43]. Hypothesized ventricular arrhythmia mechanisms are not solely attributed to cathetcholamine excess and enhanced sympathetic activity and have also been proposed to occur from myocardial inhomogeneity resulting from myocardial edema in the LV dysfunctional segments and its predisposition to repolarization abnormalities including acquired LQT [43]. Literature reports the administration of magnesium sulfate as an effective therapy for ventricular tachycardia in the acute phase of TTS if the QT interval is prolonged [41,43]. This reflected our acute phase management. However, we opted for ICD implantation given the degeneration to TdP to VF on multiple occasions, although at present, there is no clear cut recommendation for ICD implantation as a prophylaxis and some data have shown that pacemaker implantation is necessary and continues to be required even after resolution of the acute and subacute phases if the index dysrhythmia was heart block or sinus node suppression [44]. Contrarily, resolution of TdP was largely observed after resolution of the acute and subacute phases supporting transient measures like temporary pacing or wearable defibrillators rather than implantable ICDs. Similar to the reported literature, our patient at follow-up device interrogation did not suffer further ventricular arrhythmias. b-Adrenoceptor blockers can prolong the QT interval and leave unopposed the potential adverse effects of high local concentrations of catecholamines at a-adrenoceptors. The use of b-blockers in the acute phase of TTS is still a matter of debate. Given the findings in an animal model, treatment with a combined aand b-blocker seems rational, whereas treatment with a catecholamine as a pressor and cardiotonic seems contraindicated [43][44][45][46][47]. This case of recurrent TTS, LOC with acquired LQT, complicated by serious ventricular arrhythmias (TdP and VF) and acute LV apical thrombus is considered one of the rarest cases of TTS reported in literature. Chronic management of TTS is primarily empirical, and involves treatment of the underlying causes and situations. In patients who are hemodynamically stable, it is advantageous to prevent excessive sympathetic activation by combining aand b-blockade. Administration of magnesium sulfate is the treatment of choice for ventricular tachycardia in the acute phase of TTS with LQT. In TTS complicated by LV apical thrombus, it is reasonable to continue anticoagulation until the LV function returns to normal. We hope that our case with recurrent relapses and complications contributes to the understanding of this interesting rare condition and will further help in the management and follow-up of similar patients. 
We need to understand the pathogenesis and to advise rational treatment and prevention strategies, and to pay more attention not only to the myocardium and coronary arteries but also to the integration of central neural, autonomic, endocrine, and circulatory systems in emotional distress. Lastly, we think more research is needed to find a way to risk-stratify TTS patients and detect those at risk for recurrence.
2018-04-03T03:52:59.158Z
2016-08-06T00:00:00.000
{ "year": 2016, "sha1": "0e372f646366cc57108f79813c1adce170b5be4f", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.jsha.2016.07.004", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3666b88064e4a8dc0910ba34c830221453a6d889", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55802635
pes2o/s2orc
v3-fos-license
Evaluation of the Circulation Patterns in the Black Sea Using Remotely Sensed and in Situ Measurements The objective of the present work is to provide an overview of the general circulation features in the Black Sea basin. In order to achieve this, 18 years (1993-2010) of satellite data coming from the Aviso website were analyzed. A description of the general circulation patterns in the Black Sea is first presented. This is followed by statistical analyses of the satellite data in 20 points covering the entire area of the sea. The reference points were chosen as follows: 12 points along the Rim cyclonic current, 3 points inside the Rim cyclonic current, 4 points on the edge of two of the biggest anticyclonic gyres outside the Rim current and one point in the northwestern shelf area of the basin. Rose graphics were drawn for the reference points for winter and summer time. Finally, 9 years of in situ data obtained from the Gloria drilling platform were analyzed and compared with the satellite data. The present study shows that most of the reference points are sensitive to seasonal changes. The current velocities depend mostly on the points location: the points located on the Rim current and on the nearshore anticyclonic eddies present higher values than the ones located in or outside the general circulation features. Introduction The Black Sea is an enclosed sea situated between Europe, Anatolia and Caucasus, bounded by the 40.56˚N and 46.33˚N latitude and 27.27˚E -41.42˚E longitude.It is the second enclosed sea on Earth after the Caspian Sea, with a surface of 423,000 km 2 .The only connection bounding the Black Sea to the Global Ocean is by the Bosphorus strait, a 0.7 -3.5 narrow channel with 31 km in length and a depth that can vary from 39 to 100 m. The sea contains three vertical water layers that do not mix, the bottom one being the largest anoxic water body on Earth.The surface layer is located on the sea surface, spreading to 50 m depth and is the most active water layer of the sea.It responds strongly to the seasonal temperature variations and wind fields.The second layer is the cold intermediate layer located at depths that vary from 50 to 180 m.Its most significant characteristic feature is the fact that the temperature here is constant, between 6˚C and 8˚C, not being affected by the temperature changes in the surface layer.The cold intermediate layer is formed by the convective processes associated with the winter cooling of the surface waters [1][2][3].Below the intermediate cold layer is the bottom layer where waters are mostly stagnant showing small changes in properties, except near boundaries.In the depths higher than 1700 m, the bottom layer is subjected to geothermal heating from the sea floor, the temperature being about 8.8˚C [4].The maximum depth of the Black Sea is of 2588 m.However, these are isolated points located in the south and southeast of the basin.The average maximum depth of the sea is 2100 m. The Black Sea's salinity is lower than that in the open seas or in the oceans, due to the enclosed state and high river discharges.The average salinity in the Black Sea is 18.2 PSU, but it can be much lower near the river discharges.The bottom layer's salinity, however, has increased values by an average of 21.8 PSU.This difference is maintained due to the fact that the surface and bottom waters do not mix, and the lower layer is receiving more saline waters from the Mediterranean Sea.Moreover, the surface layer is exposed to rain, river discharges and dilution. 
The cyclonic character of the Black Sea circulation, resulting from the cyclonic state of the wind field patterns, was first described by Knipovich [6]. Later on, Filipov [7], Boguslavskiy et al. [8], Blatov et al. [9], Stanev et al. [10], Stanev [11] and Eremeev et al. [12] provided valuable details regarding the sea circulation patterns. However, the models proposed did not contribute a significant change to Knipovich's classical circulation model. The northwestern shelf of the sea consists of a nearly 200 km wide shelf that receives the freshwater input from the Danube, Dniestr and Dniepr rivers. The surface circulation is characterized by a persistent cyclonic coastal current referred to as the Rim current, with a width of over 75 km and an average speed of 0.2 ms−1 at the surface [13]. Between the Rim current and the coast, a number of seasonal anticyclonic eddies are formed. While the Rim current meanders eastward along the Anatolian coast, it forms two anticyclonic coastal eddies that were identified and labeled by Oguz et al. [14] as the Sinop and Kizilirmak eddies. In the eastern area of the basin, the Batumi eddy is formed. The Rim current flows along the Caucasian coast to the narrow continental slope, meandering in the form of backward curling. The jet separates three cyclonic eddies of the eastern basin that constitute the multiple cells of the Eastern Basin Cyclonic Gyre [13]. On the coastal side of the offshore jet, a small anticyclonic eddy is formed, called the Caucasian eddy. The Rim current continues to meander to the south of the Crimean Peninsula between two larger coastal anticyclonic eddies and two cyclonic eddies located in the central part of the basin. The anticyclonic eddies located on the northern side are referred to as the Crimean Eddy and Sevastopol Eddy, respectively [13]. While it proceeds southwest towards the Bosphorus area, the Rim current creates the Bosphorus eddy. A small anticyclonic eddy is formed in the western area of the Black Sea basin, between the Sevastopol and Bosphorus eddies, labeled as Kali-Akra. The basin-wide circulation is closed by the Sakarya eddy, situated in the southwest area. Among the above-mentioned eddies, Batumi and Sevastopol are the most permanent and largest mesoscale structures [15,16]. Figure 1 presents a scheme of the Black Sea surface circulation as discussed above. The solid lines indicate the recurrent features of the general circulation.

Statistical Analysis of the Circulation Patterns Using Satellite Data

In order to achieve a better understanding of the current fields in the Black Sea basin and of their time and space variations, 18 years of satellite data were analyzed, covering the time period 1993-2010. The satellite data were obtained from the Aviso website [17] and contain daily measurements of the U and V components of the currents with a spatial resolution of approximately 10 km on the horizontal and of 13 km on the vertical. 20 reference points were considered in the present analysis, as shown in Figure 2.
The first 12 points (P1, P2, … P12) were considered on the Rim current (shown in red), points P13-P15 were located inside the Rim cyclonic current (purple), points P16 and P17 at the edge of the Batumi eddy, P18 and P19 at the edge of the Sinop eddy (green), and point P20 was located on the northwestern shelf area of the Black Sea basin (orange). In Table 1 the coordinates of the reference points are presented, along with the monthly averaged values of the current velocities. Table 2 shows the statistical analyses for the reference points considering the following parameters: minimum, maximum, mean and median values, standard deviation, skewness and kurtosis. In Table 3, percentile analyses regarding the 50th and 95th percentiles are presented for the reference points considered, grouped into winter and summer time, respectively, where winter time is the six-month period from October to March and summer from April to September.

In statistical analysis, the standard deviation measures the data dispersion from the mean value as in Equation (1):

σ = √( E[(X − μ)²] ),   (1)

with μ = E[X] representing the mean value, where E is the expectation operator. X represents a discrete random variable with the probability mass function p(x). Then the expected value will be:

E[X] = Σ_x x p(x).   (2)

In probability theory and statistics, skewness is a measure of the symmetry of the distribution in a certain data set. The skewness value can be positive, negative or undefined. The skewness of a variable X is defined as the third standardized moment:

γ₁ = μ₃ / σ³,   (3)

where μ₃ is the third moment about the mean and the k-th moment about the mean is defined as:

μ_k = E[(X − μ)^k].   (4)

Kurtosis represents the relative concentration of the data in the centre versus in the tails of a frequency distribution when compared with the normal distribution (which has a kurtosis value of 3). This is equal to the fourth moment around the mean divided by the square of the variance (or the fourth power of the standard deviation) of the distribution, minus 3:

Kurt[X] = μ₄ / σ⁴ − 3.   (5)

Moreover, analyses regarding the 50th and 95th percentiles were performed for all the points, grouped by summer time and winter time. Percentiles are generally used in order to characterize a frequency distribution. In particular, the 50th and the 95th percentiles are often considered to identify the median values and the maximum of the data distributions, being unaffected by outlying values which are distant from the rest of the data. Percentiles (p_i) are computed from the position i inside the ordered dataset that marks the percentile to be calculated, where n is the total number of values in the distribution.

A first conclusion that can be drawn from Table 1 is that the average current velocity values in the Black Sea are in general small. There are usually small variations between summer and winter periods. The most stable points regarding velocity variations appear to be P1, P2, P3, P4, P5 and P6. As expected, the points P1-P12 have higher current velocities than the rest, due to their coordinates located on the Rim current. The points P13, P14 and P15, located inside the curve described by the Rim current, have smaller velocities than the ones situated on the Rim or on the two anticyclonic eddies. The smallest velocity values are the ones recorded for the point P20, situated in the northwestern shelf zone, an area with mostly calm waters where no significant circulation feature was observed.
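As a small illustration of the descriptive statistics defined in Equations (1)-(5) and of the percentile analysis above, the following minimal Python sketch computes the same quantities for one reference point; the sample data and function names are placeholders, not the study's actual velocity series.

```python
# Minimal sketch of the descriptive statistics used for the reference points:
# min, max, mean, median, standard deviation, skewness, kurtosis, 50th/95th percentiles.
import numpy as np
from scipy import stats

def point_statistics(speeds: np.ndarray) -> dict:
    """Summary statistics for a series of current speeds (m/s) at one reference point."""
    return {
        "min": float(np.min(speeds)),
        "max": float(np.max(speeds)),
        "mean": float(np.mean(speeds)),
        "median": float(np.median(speeds)),
        "std": float(np.std(speeds)),
        # Third standardized moment, Eq. (3)
        "skewness": float(stats.skew(speeds)),
        # fisher=False returns mu_4/sigma^4 (i.e. 3 for a normal distribution);
        # subtracting 3 would give the excess form written in Eq. (5)
        "kurtosis": float(stats.kurtosis(speeds, fisher=False)),
        "p50": float(np.percentile(speeds, 50)),
        "p95": float(np.percentile(speeds, 95)),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo_speeds = rng.gamma(shape=2.0, scale=0.05, size=6570)  # placeholder daily series
    print(point_statistics(demo_speeds))
```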
Directional Distributions of the Current Velocity

Rose-type graphics are used to give a more comprehensive picture of how current speeds and directions are distributed at a particular point. Using a polar coordinate system for gridding, the frequency of the currents over the time period is plotted by current direction, with color bands showing current velocity ranges. The direction of the longest spoke shows the current direction with the greatest frequency. Each concentric circle represents a different frequency, starting from zero at the center with increasing frequencies at the outer circles. In Figures 3 and 4 rose graphics were drawn for the 20 reference points. Figure 3 presents rose graphics for the winter time, where winter is considered the time frame from October to March, while Figure 4 presents the rose graphics for the summer time (April to September). By comparing Figure 3 and Figure 4, it can be observed that there are significant changes between winter and summer time in current orientation; however, these changes do not apply to all the points. P1, P5, P6, P11 and P13 present mostly the same structures for both time frames.

Comparisons against in Situ Data

For the Black Sea, some current measurements were available for the time period 2002-2009 and they were compared against the corresponding satellite data provided by Aviso. The measurements were taken at the Gloria drilling platform, located on the western side of the Black Sea near the Romanian coast at 44˚31'N, 29˚34'E, every six hours. The data were then averaged to daily values, to fit the satellite data profile. The comparison between the satellite data and the measurements at the Gloria drilling platform in the Black Sea shows that the in situ measured current velocity values are usually higher than the satellite data, with a bias of 0.077 ms−1. Table 4 presents some statistical parameters such as mean values, bias, RMS error, SI (scatter index) and r (correlation coefficient). With X_i representing the measured values at the Gloria drilling platform, Y_i the corresponding satellite data values and n the number of data points considered, the statistical parameters evaluated are defined by the following relationships:
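The explicit expressions for these parameters did not survive the text extraction. The sketch below uses the conventional definitions of bias, RMS error, scatter index and Pearson correlation; treating these as the forms used in the paper is an assumption, and the placeholder series are not the study's data.

```python
# Sketch of conventional comparison statistics between in situ (X) and satellite (Y) series.
import numpy as np

def comparison_stats(x: np.ndarray, y: np.ndarray) -> dict:
    """x: in situ measurements, y: corresponding satellite values (same length)."""
    diff = x - y
    bias = float(np.mean(diff))                  # positive when in situ values are higher
    rmse = float(np.sqrt(np.mean(diff ** 2)))    # root-mean-square error
    si = float(rmse / np.mean(x))                # scatter index: RMSE normalized by mean(X)
    r = float(np.corrcoef(x, y)[0, 1])           # Pearson correlation coefficient
    return {"mean_x": float(np.mean(x)), "mean_y": float(np.mean(y)),
            "bias": bias, "rmse": rmse, "si": si, "r": r}

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.gamma(2.0, 0.08, size=2500)                    # placeholder in situ speeds (m/s)
    y = 0.8 * x + rng.normal(0.0, 0.02, size=2500)         # placeholder satellite speeds
    print(comparison_stats(x, y))
```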
Discussions According to satellite data the maximum current velocity is of 0.626 ms −1 and it belongs to P17, the point located at the edge of the Batumi eddy, closely followed by P19 with 0.588 ms −1 situated at the edge of the Sevastopol eddy, and P12 with 0.558 ms −1 , situated on the Rim current.Except for the points P4 and P6, the current velocities recorded on the Rim current are higher than the ones recorded inside.The minimum values are close to zero for all the points, while the mean values vary from 0.068 ms −1 for P14 to 0.160 ms −1 for P19.The median values ranges from 0.056 ms −1 for P5 to 0.151 ms −1 for P19.Higher values for the standard deviation suggest that the data is spread out compared to the mean values.A zero value for the skewness suggests that the values are relatively evenly distributed on both sides of the mean value while a positive skew indicates that the tail on the right side of the probability density function is longer than the left side and the bulk of values lie to the left of the mean, this being the case here where the skewness values range from 0.742 (P6) to 0.699 (P17).Kurtosis represents the relative concentration of the data in the center versus the tails of the frequency distribution when is compared to the normal distribution that has a kurtosis value of 3. In the present work the values of the kurtosis vary from 3.420 to 10.306.For the winter time the point P1 is oriented towards south-west, feature that is preserved for the summer time, with a small peak added oriented towards north-east.Point P2 in the winter time shows also a south-west clear orientation, while for the summer this decreases, a peak oriented north-west being also added.Regarding P3, strong differences between winter and summer time can be observed.While in the winter is showing a strong south-east orientation, for the summer time this changes to north-west.P4 is showing a small north-east orientation for the winter time, while for summer is difficult to pinpoint a definite direction, with two small peaks oriented west and east.P5 shows little differences in current orientation between summer and winter, also with no definite direction.The same case applies also for P6 where is also difficult to identify a direction, with the observation that while in the summer time it presents a stable radial structure, small peaks in all directions can be observed in the winter.P7 presents 5 peaks clearly oriented north-west in the winter time, while in the summer there is also no definite direction.As well as for P3, P8 presents major differences between winter and summer time.While in the winter is oriented towards south, this changes drastically in the summer time, when a strong north-east component appear, accompanied by a small peak oriented towards south.For the point P9 is difficult to pinpoint a clear orientation in the winter: it appears to be oriented towards east, but there's no clear direction.For the summer time most of the peaks are oriented south.Major differences can also be observed for P10: in the winter time there is a clear west orientation, fact that changes in the summer when an east orientation appear, with small reminiscences from the winter feature.P11 point seems to preserve most of its features between the seasons, although in the winter there is a small north-east component that in the summer disappears.Regarding the P12 point, major differences can be observed: in the winter there is a strong component oriented north-east, while in the summer the general direction 
is split in two: a south-south-west component and a west one.Due to their position inside the Rim current, it wasn't expected to see important changes between summer and winter time for the points P13, P14 and P15, however there are small differences, especially for P15 located at the edge of the Eastern Gyre.Points P16, P17, P18 and P19 are located at the edge of the Batumi (P16, P17) and Sevostok (P18, P19) eddies.These are two of the biggest nearshore anticyclonic eddies present in the Black Sea, and are characterized by high velocities and strong seasonal differences, fact confirmed by the present analysis.In the winter time P16 is split into multiple directions, mostly oriented east, while in the summer there is a definite west orientation, with high peaks.P17 presents a higher turbulence for the winter time, with no definite direction, but mostly oriented north, east and west.This feature changes for the summer time when a north-east component appears, along with a smaller one towards north-west.For P18 in the winter a strong northwest orientation can be observed, while in the summer this changes towards south-east.Also P19 presents important differences between seasons with a strong component oriented south-west in the winter that changes to a north-east in the summer time.Seasonal variations can also be observed in P20, the point located outside the general features of the Black Sea, in the northwestern shelf area.While in the winter two dominant directions are present: north and south, for the summer there is a general orientation south. Conclusions As expected, most of the points located on the Rim cyclonic current and on the nearshore anticyclonic eddies have higher velocities than the ones located in the central gyres or northwestern shelf area.Also, they are described by a higher instability regarding current speed and direction on the seasonal changes. Higher value for kurtosis as the ones registered at points P5 (10.307),P17 (8.979), P13 (6.570) and P11 (6.219) means that in these cases there is a strong possibility that higher velocities than usual will appear. A similar study with the emphasis on the anticyclonic and cyclonic eddies, was treated in [18].The implementation of a global circulation modeling system for the Black Sea basin was presented by Toderascu and Rusu in [19].Also, the subject of modeling of wave-current interactions at the Danube mouths was treated by Rusu in [20].Another work that needs to be mentioned here is the work of Rusu and Macuta regarding the numerical modeling of long shore currents in marine environment [21], as well as the work of L. Rusu regarding the application of numerical models to evaluate oil spills propagation in the coastal environment of the Black Sea [22]. Figure 1 . Figure 1.Schematic of the Black Sea surface circulation.The solid lines indicate recurrent features of the general circulation. Figure 2 . Figure 2. The bathymetric map of the Black Sea with the location of the 20 reference points as follows: red-points located on the Rim cyclonic current, purple-points located inside the Rim current, green-points located at the edge of the anticyclonic eddies, orange-point located in the northwestern shelf area of the sea.Table 1.Monthly averaged values of the current velocity (ms −1 ) for the reference points (P1, P2, … P20) for the period 1993-2010. Figure 3 . Figure 3. Current velocity roses for the reference points, winter time.
2018-12-12T19:33:01.688Z
2013-08-22T00:00:00.000
{ "year": 2013, "sha1": "2cce11aab756281c6ed53f90f7829b7272365264", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=36756", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "2cce11aab756281c6ed53f90f7829b7272365264", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geology" ] }
252847706
pes2o/s2orc
v3-fos-license
Factors associated with long‐term HIV pre‐exposure prophylaxis engagement and adherence among transgender women in Brazil, Mexico and Peru: results from the ImPrEP study Abstract Introduction The HIV epidemic continues to disproportionately impact Latin‐American transgender women (TGW). We assessed factors associated with long‐term pre‐exposure prophylaxis (PrEP) engagement and adherence among TGW enrolled in the Implementation of PrEP (ImPrEP) study, the largest PrEP demonstration study in Latin America. Methods HIV‐negative TGW aged ≥18 years reporting 1+eligibility criteria in the 6 months prior to enrolment (e.g. sex partner known to be living with HIV, condomless anal sex [CAS], transactional sex or having a sexually transmitted infection [STI]) who could safely take PrEP were enrolled. Follow‐up visits were conducted at 4 weeks and then quarterly. We conducted logistic regression to identify factors associated with long‐term PrEP engagement (3+ follow‐up visits in 52 weeks) and complete self‐reported adherence (no missed pills in the past 30 days) during follow‐up. For both outcomes, we constructed multivariable models controlling for country, socio‐demographics, sexual behaviour, substance use, STIs and self‐reported adherence at 4 weeks (long‐term engagement outcome only). Results From March 2018 to June 2021, ImPrEP screened 519 TGW, enrolled 494 (Brazil: 190, Mexico: 66 and Peru: 238) and followed them for 52 weeks. At baseline, 27.5% of TGW were aged 18–24 years, 67.8% were mixed‐race and 31.6% had >secondary education. Most, 89.9% reported CAS, 61.9% had >10 sex partners and 71.9% reported transactional sex. HIV incidence was 1.82 cases per 100 person‐years (95% confidence interval [CI]: 0.76–4.38). Almost half of TGW (48.6%) had long‐term PrEP engagement, which was positively associated with reporting complete adherence at week 4 (aOR:2.94 [95%CI:1.88–4.63]) and was inversely associated with reporting CAS with unknown‐HIV partner (aOR:0.52 [95%CI:0.34–0.81]), migration (aOR:0.54 [95%CI:0.34–0.84]), and being from Mexico (aOR:0.28 [95%CI:0.14–0.53]). Self‐reported adherence was associated with TGW aged >34 (aOR:1.61 [95%CI:1.10–2.34]) compared to those aged 25–34 and those with >secondary education (aOR:1.55 [95%CI:1.10–2.19]) and was lower among TGW from Peru (aOR:0.29 [95%CI:0.21–0.41]) or reporting PrEP‐related adverse effects (aOR:0.63 [95%CI:0.42–0.92]). Conclusions Although TGW were willing to enrol in ImPrEP, long‐term PrEP engagement and complete self‐reported adherence were limited, and HIV incidence remained relatively high. A successful HIV prevention agenda should include trans‐specific interventions supporting oral PrEP and exploring long‐acting PrEP strategies for TGW. I N T R O D U C T I O N HIV infection disproportionately impacts transgender women (TGW) worldwide, with HIV prevalence being 50 times greater than adults of reproductive age in low-and middle-income countries (LMICs), such as those in Latin America [1][2][3]. The HIV prevalence among TGW in Latin America was estimated at 25.9% [4], 32-49% in Brazil [5], 20-64% in Mexico [6] and 30% in Peru [7]. This increased vulnerability is caused by substantial social marginalization and isolation experienced by TGW, leading to poverty, lower education and exclusion from the formal labour market [8], leading to high rates of sex work [6,[9][10][11]. 
In Brazil, Mexico and Peru, TGW also experience substantial violence [11][12][13][14], internalized stigma and fear of discrimination [15,16] and increased burdens of mental health and substance abuse [17]. These vulnerabilities can also influence their health-seeking behaviour and engagement in HIV prevention services. Moreover, these services often do not have the resources to truly address the needs of this population [18]. Daily oral pre-exposure prophylaxis (PrEP) with tenofovir disoproxil fumarate 300 mg (TDF) combined with emtricitabine 200 mg (FTC) has been demonstrated to prevent HIV infection [19]. Still, it is highly dependent on pill adherence and engagement in prevention services [20,21]. A sub-analysis of TGW included in the iPrEx study yielded no difference in HIV acquisition between study arms (PrEP vs. placebo); however, PrEP was efficacious in preventing HIV among TGW who were adherent to daily oral PrEP as measured by drug levels [22]. Questions remain on the interactions between feminizing hormone therapy (FHT) and PrEP among TGW, with studies showing decreased levels of TDF/FTC among FHT users [23][24][25], or lack of interaction [26]. The vulnerability of TGW to HIV makes their use of PrEP of vital importance [27][28][29]. However, few TGW have been engaged in HIV prevention services [5,30] or PrEP studies [31], hindering the possibility of meaningful analysis [32], despite high willingness to use PrEP [33][34][35]. In addition, PrEP studies have shown low PrEP continuation among TGW [36]. Research has highlighted the need for PrEP programmes to specifically address the needs of trans populations, including TGW [31,[37][38][39]. However, efforts towards this end have been limited [3]. Although daily oral PrEP was recommended in 2014 by the World Health Organization, PrEP availability has been limited in Latin America [29,40]. PrEP has been available within Brazil's Public Health System (SUS) since 2017, Mexico since 2021 [41], but remains accessible only via purchase or through limited demonstration studies in Peru. The Implementation of PrEP (ImPrEP) study is the largest PrEP demonstration study in Latin America and aims to evaluate the feasibility of PrEP implementation among gay, bisexual and other cisgender men who have sex with men (MSM) and TGW in the context of the Public Health Systems of Brazil, Mexico and Peru. This analysis aims to assess the factors associated with longterm PrEP engagement and self-reported adherence among TGW enrolled in the ImPrEP study. M E T H O D S 2.1 Study design and participants ImPrEP was a prospective, single arm, open-label, multicentre study that assessed same-day oral PrEP implementation in Brazil (14 sites in 12 cities), Mexico (4 sites in 3 cities) and Peru (10 sites in 6 cities). Inclusion criteria were HIV-negative MSM and TGW, aged ≥18 years and at least one of the following in the prior 6 months: condomless anal sex (CAS), anal sex with partner(s) known to be living with HIV, sexual transmitted infections (STIs) signs/symptoms or diagnosis, or transactional sex. Participants were enrolled from March 2018 to December 2020. This analysis only includes participants self-identified as women, travestis [12,33,42] Study procedures Participants were recruited through social media advertisements, peer/healthcare provider referrals and through MSM/TGW peer-educators at each site. We also offered enrolment to individuals seeking PrEP or HIV/STI testing. 
Potentially eligible individuals were screened using laboratory, clinical and risk criteria and enrolled to receive sameday oral PrEP [43]. HIV viral load and serum creatinine clearance (CrCl) were evaluated at enrolment. Participants were contacted to discontinue PrEP and return to the site in case of acute HIV infection (detectable HIV viral load) or CrCl<60 ml/minute [44]. Follow-up visits were scheduled at week 4 and quarterly thereafter, for a total of five planned follow-up visits in 52 weeks. Given restrictions due to the COVID-19 pandemic during 2020 and 2021 [45][46][47], the total number of visits and the visit intervals were impacted. At each visit, participants received TDF/FTC refills according to the next scheduled visit interval. Individuals who returned more than 24 weeks after any visit were required to re-enrol in the study. Data on demographics, prior post-exposure prophylaxis (PEP) use (past 12 months), indication for PEP and the main reason for attending the service were assessed at enrolment. Participants also reported information on sexual behaviour and substance use at enrolment and quarterly visits. Self-reported adherence and symptoms related to PrEP use were assessed at follow-up visits. HIV rapid tests were performed every visit; HIV confirmatory testing was conducted as needed. Study definitions Age was described as median and interquartile range (IQR) and in categorical ranges of 18-24, 25-34 and >34 years. We categorized self-reported race/skin colour as White, Black, Indigenous, Asian and Mixed-race (Pardo or Mestizo); however, as these categories are distinct by country, they were dichotomized into white versus any other race. We used the following education categories: primary or less (complete or incomplete), secondary (complete or incomplete) and more than secondary. Individuals born in a state or country different from the implementation site were considered as migrants. Main reason to attend the service was stratified as seeking PrEP and other (seeking an HIV test, other health service or PEP). Sexual behaviour was assessed with the questions: number of cisgender men or/and TGW sexual partners (described with median and IQR, categorized into <5, 5-10 and >10 for analyses), any CAS (yes/no), receptive CAS (yes/no), CAS with partner(s) known to be living with HIV (yes, no or I don't know) and transactional sex (sex in exchange for money, drugs, gifts or favours; yes/no). Binge drinking was assessed with the question: "Did you have five or more drinks within a two-hour period?" (yes/no) [48]. Stimulant use was considered use of any of the following: club drugs (e.g. ecstasy, LSD and GHB), cocaine (powder, crack or base). PrEP-related gastrointestinal symptoms were defined as any of the following: diarrhoea, flatulence, nausea, vomit, abdominal pain or other. At enrolment, questions on sexual behaviour referred to the previous 6 months, while number of sex partners in Brazil and Mexico and substance use referred to the previous 3 months. At quarterly visits, all questions referred to the previous 3 months. At the 4-week visit, any PrEP-related symptom(s) referred to the previous 30 days; at other visits, this information dated back to the period since the last visit. Outcomes We evaluated two main outcomes: long-term PrEP engagement and complete self-reported adherence. Long-term PrEP engagement was defined as attendance at the 4-week visit and two or more quarterly visits within a 52-week period. 
As most participants attending these three visits would have received 210 PrEP pills (30 pills at enrolment and 90 pills at each follow-up visit), this would be enough for achieving highly protective levels of tenofovir diphosphate (4 pills per week for 52 weeks) [20]. Participants' self-reported adherence was assessed at every follow-up visit with the question: "In the previous 30 days, approximately how many pills did you NOT take?" Those who answered zero were considered as having complete self-reported adherence, as a previous analysis estimated "zero" as the self-reported PrEP adherence cut-off equivalent to highly protective levels of tenofovir diphosphate [49,50]. Individuals who re-enrolled in the study completed the initial study assessment, which did not include an adherence question. Re-enrolled individuals were classified as non-adherent as the quantity of pills received in their prior visit (30 or 90) would have been insufficient to cover the period that they were absent from the study. Statistical analysis We described TGW's characteristics at enrolment, long-term PrEP engagement and self-reported adherence overall and according to country. We censored participants at study withdrawal or on 30th June 2021. HIV incidence was calculated based on the number of new HIV cases detected during the follow-up overall and stratified by country and age. We used logistic regression to identify initial enrolment factors associated with long-term PrEP engagement. Potential predictors included baseline socio-demographic and behavioural characteristics, such as country, age group, race, education, main reason to attend the service, migration, number of sex partners, any CAS, receptive CAS, CAS with partner known to be living with HIV, transactional sex, binge drinking, stimulant use and self-reported adherence at week 4. Individuals who did not return to follow-up visits were considered non-adherent. We evaluated PrEP-related gastrointestinal symptoms in bivariate analysis, but not in the multivariable model as this variable is only available for individuals returning to a week 4 visit, which would modify the analytic sample. In the initial model, the effect of each variable was controlled by country and all statistically significant variables at a p-value ≤0.1 were included in the final adjusted model. To account for correlated measures within participants, we used logistic generalized estimating equation models to identify factors associated with complete self-reported adherence at each post-enrolment visit completed by the study participants over the 52 weeks. We used the same potential predictors considered in the long-term PrEP engagement analysis allowing behavioural characteristics and symptoms related to PrEP to be included as time-varying variables. In the initial models, the effect of each variable was controlled by country and study visit. All variables statistically significant at p-value ≤0.1 were included in the final adjusted model. All analyses were conducted in R version 4.1.1 [51]. R E S U LT S A total of 9979 individuals were screened, 559 (5.6%) TGW. Of these, 543 were enrolled and 494 were followed for at least 52 weeks and included in this analysis (Brazil: 190, Mexico: 66 and Peru: 238) ( Figure 1). 
Reasons for ineligibility included HIV infection at screening/enrolment (one acute and 16 chronic HIV infections), referral for PEP, adherence concerns (clinician thought the person would not be adherent to PrEP) and clinical concerns (other clinical condition, such as untreated tuberculosis or diabetes) ( Figure 1). During followup, 32 individuals were re-enrolled, their additional visits were included in our analysis. Among the 494 TGW included in this analysis, median age was 29 years (IQR: 24, 36); 27.5% aged 18-24 years. Most were mixed race (67.8%), had secondary education (58.7%), had not migrated (70.2%), attended the service seeking PrEP (65.4%) and most (71.9%) reported transactional sex. Median number of sex partners was 25 (IQR: 5, 100), and 61.9% reported >10 partners. The majority reported CAS (89.9%) and CAS with partner with unknown HIV status (64.8%), while 4.0% reported CAS with partner known to be living with HIV. Binge drinking and stimulant use were reported by 67.8% and 20.2%, respectively (Table 1). Overall, TGW were followed-up for 274.5 person-years and five HIV seroconversions occurred resulting in an overall HIV incidence rate of 1.82 (95% CI: 0.76-4.38) per 100 personyears. The HIV incidence rate was 3.80 (95% CI: 1.58-9.13) in Peru, while no HIV cases were observed in Brazil or Mexico. Incidence rate among TGW aged 18-24 and 25-34 years was twice as high compared to TGW aged >34 years ( Table 2). D I S C U S S I O N TGW enrolled in the ImPrEP study were able to safely initiate same-day oral PrEP. The ImPrEP study is the first to evaluate PrEP implementation among Latin-American TGW and includes a large cohort of TGW, the largest in LMICs with results reported separately from MSM to our knowledge. Long-term PrEP engagement and self-reported adherence were low and associated with underlying socio-demographic characteristics, such as age and education. Our data corroborate the finding that early adherence as measured by selfreport at week 4 is associated with higher likelihood of longterm PrEP engagement [31]. Although HIV prevalence among TGW is high in Latin America, no HIV incident cases were observed in Brazil and Mexico in a context with PrEP availability at no cost to the user. Conversely, HIV incidence in Peru was high, especially among younger TGW. In our analysis, less than half of TGW (47.6%) remained engaged in PrEP over the year of follow-up, lower than observed for MSM included in the ImPrEP study (p<0.001) [52] and reflecting long-term PrEP engagement among TGW in past studies [31,36]. Long-term PrEP engagement was lower in Mexico, while complete self-reported PrEP adherence was lower in Peru, indicating gaps in PrEP services in these settings. Peru and Mexico have adopted trans-specific guidelines for care [37,53], but the promises of services tailored to the needs of TGW remain a goal rather than a reality. More than half of TGW (49/89, 55%) enrolled in a Peruvian study to provide support for PrEP users were lost to follow-up in a short period (3 months) [36]. In Brazil, high retention (111/130, 85%) was observed in the PrEParadas study, a PrEP demonstration study designed for TGW, including gender-affirming care environment implemented at the study site and TGW peer-educators [39]; nonetheless, PrEP adherence decreased over time, especially among TGW with lower education [39]. 
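As a small arithmetic check of the incidence figures reported above, the overall rate and its confidence interval can be reproduced from the event count and person-time; the sketch below uses a normal approximation on the log scale, which closely matches the reported interval, although the authors do not state which method they used.

```python
import math

events = 5            # HIV seroconversions observed during follow-up
person_years = 274.5  # total person-time

rate = events / person_years * 100        # rate per 100 person-years
se_log = 1.0 / math.sqrt(events)          # approximate SE of log(rate) for a Poisson count
low = rate * math.exp(-1.96 * se_log)
high = rate * math.exp(1.96 * se_log)

print(f"{rate:.2f} per 100 person-years (95% CI {low:.2f}-{high:.2f})")
# -> 1.82 per 100 person-years (95% CI 0.76-4.38), matching the figures reported above.
```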
TGW consistently have more difficulties in engaging in prevention and treatment services, reflecting their underlying vulnerabilities and the poor adaptation of services to their needs. Novel HIV prevention strategies will only succeed if health services are acceptable and accessible to TGW [3]. Although long-term PrEP engagement and self-reported PrEP adherence are related outcomes, the variables associated with each were distinct. Self-reported PrEP adherence was higher among TGW with post-secondary education. Lower education was previously associated with low PrEP adherence among Brazilian TGW [39]. Education level is also an important aspect related to HIV outcomes among people living with HIV [54,55]. Notably, long-term PrEP engagement was lower among TGW who had migrated. Internal and external migration seeking better opportunities is common in LMICs. TGW usually migrate to larger cities probably aiming for less stigma and more life opportunities [56]. In a Brazilian study that enrolled 345 TGW, 40% were internal migrants [30]. Although we have not measured income in this study, these results suggest that additional social and financial support might increase PrEP adherence and engagement among TGW with high socio-economic vulnerability. Interest in PrEP, based on complete adherence at the week 4 visit was associated with long-term PrEP engagement. Additionally, PrEP as the main reason for attending the service was borderline significant. In South Africa, PrEP education emerged as an urgent matter for TGW [57]. Expanding PrEP literacy among TGW communities, including knowledge about PrEP benefits, duration of side effects and importance of adherence, is essential for achieving better PrEP outcomes. Targeted adherence-supporting interventions and peer support activities may be especially important [36] for TGW who are offered PrEP but were not looking for PrEP, those who are younger and with lower education levels, helping to improve PrEP engagement and adherence. TGW remain highly vulnerable to HIV and public health programmes offering PrEP should include tailored support for this population to bolster adherence and engagement to services. The country-level differences observed for long-term PrEP engagement and self-reported adherence likely reflect underlying distinct public health systems and TGW populations included in each setting. Lower long-term PrEP engagement in Mexico may reflect the fact that most study sites had stronger connections with MSM, which might made TGW feel less included and hence less informed. In Peru, TGW reported consistently lower adherence compared to the other countries. Compared to Brazil and Mexico, TGW from Peru had lower educational levels and were the least likely to have enrolled seeking PrEP, suggesting lower PrEP awareness, and ultimately impacting their PrEP adherence. Differences in the characteristics of the enrolled TGW may have contributed to their lower adherence and consequently higher HIV incidence, even though not all evaluated variables were significant on their own. Our findings on long-term PrEP engagement reflect the difficulties that TGW face to remain engaged in services. Efforts should be taken to retain TGW, including support to their existing social networks [36,58,59] and building on the experience of TGW who do return for follow-up visits. TGW remain highly marginalized, as evidenced by the rates of transphobia in Latin America. 
Out of the 375 murders of trans people reported between October 2020 and September 2021 worldwide, the great majority (83%) occurred in Latin America; and Brazil and Mexico are at the top of the list [60]. Intersecting social vulnerabilities must be acknowledged when planning PrEP services for TGW. Our study has limitations. First, ImPrEP was not designed to specifically assess outcomes among TGW, and, therefore, measures of key importance for this population were not evaluated. Data on FHT use are not available for most TGW participants, so we could not include this information in this analysis. Additional qualitative studies to ensure understanding of the factors influencing PrEP adherence and engagement among TGW may be needed. Our results are not informative of PrEP uptake because, given the study design, screening only occurred among TGW who expressed an interest in participating. The study inclusion criteria focused on enrolling individuals who could benefit from PrEP, but not all potentially eligible individuals wanted to be screened. Data on PrEP refusal were not collected. Inclusion of fewer TGW from Mexico and data from various sites within a relatively small sample may limit cross-country comparisons. Although self-reported adherence can be limited by different biases, such as recall, response or social desirability bias, which may overestimate adherence [61], neutral assessment (assessment conducted by non-clinical/non-counselling staff trained to collect adherence information without judgement or negative consequences) [62] can minimize these biases and is recommended to ensure the quality of self-report [49]. Previous analyses from a Brazilian PrEP study have shown that self-reported adherence can discriminate participants with and without protective TDF-FTC levels [49,50]. In a recent study from New York (USA), self-reported PrEP adherence was shown to be accurate and a valid indicator of PrEP uptake [63]. Our analysis of self-reported PrEP adherence used a very stringent definition, requiring individuals to report taking all pills in the previous 30 days to be categorized as adherent. However, the number of missed pills reported by visit (Figure 2a) was still compatible with the number of pills required for protection (i.e. 4 pills per week) [20]. Additionally, the average number of missed pills (Figure 2b) ranged from 2 to 8 within the past 30 days and decreased over time. Importantly, self-reported adherence is based on TGW who attended the service and, therefore, does not include those who missed visits. C O N C L U S I O N S Although TGW were willing to be enrolled in ImPrEP and remained on oral PrEP during short-term follow-up, long-term PrEP engagement and PrEP adherence were limited. HIV incidence remained high in Peru despite the availability of PrEP free of charge throughout the study. A successful HIV prevention agenda among TGW considering country or region particularities will need to address social and financial barriers and include trans-tailored interventions supporting PrEP education, engagement and adherence. Long-acting PrEP may be particularly useful for this population. C O M P E T I N G I N T E R E S T S KAK reports employment at Universidad Peruana Cayetano Heredia and University of California, Los Angeles. All other authors report no potential competing interests. A U T H O R S ' C O N T R I B U T I O N S VGV, CFC, BG and HV-R conceived and designed the ImPrEP study.
BG conceived and supervised the current analysis and manuscript preparation. KAK and TST interpreted the findings and drafted the manuscript. RIM, ICL and MC did the statistical analyses. EMJ, BH, JVG, MB, CP, SB-A and HV helped with data acquisition, interpretation of the findings and drafting the manuscript. GM and AR were involved in revising the manuscript for important intellectual content. All authors read and approved the final manuscript. A C K N O W L E D G E M E N T S We would like to thank all study participants. This project was made possible thanks to Unitaid's funding and support. Unitaid accelerates access to innovative health products and lays the foundations for their scale-up by countries and partners. Unitaid is a hosted partnership of WHO. We would like to thank the support of the Ministries of Health of Brazil, Mexico and Peru. The funding bodies had no role in the design of the study and collection, analysis and interpretation of data and in writing the manuscript. F U N D I N G BG and TST are funded by the National Council of Technological and Scientific Development (CNPq) and Carlos Chagas Filho Foundation for Research Support in the State of Rio de Janeiro (FAPERJ). DATA AVAILABILITY STATEMENT Data will be available upon reasonable request and will be approved by ImPrEP coordination team.
2022-10-13T13:03:48.180Z
2022-10-01T00:00:00.000
{ "year": 2022, "sha1": "27625a9d4bc7d51dab834f15e970c93cb0d73839", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "27625a9d4bc7d51dab834f15e970c93cb0d73839", "s2fieldsofstudy": [ "Political Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
58557639
pes2o/s2orc
v3-fos-license
Transcatheter aortic valve replacement will be standard of treatment for severe aortic stenosis with porcelain aorta Calcification involving thoracic aorta is a known pathology and it is often associated with calcification extending to valvular, coronary and carotid arterial system. It reflects a common atherosclerotic calcific pathology affecting the cardiovascular system. Porcelain aorta (PA) is the term used to refer to the condition characterized by extensive calcification of thoracic aorta. Concomitant severe aortic stenosis (AS) or severe coronary artery disease (CAD), which often compound PA, significantly increase cardiovascular mortality and hence this condition needs to be addressed meticulously. Aortic valve replacement (AVR) in the presence of PA is very complex surgical procedure associated with increased complications and mortality but transcatheter AVR (TAVR) today has eased out this unmet clinical need. Inoperability for AVR in PA is an example where surgery is challenging because of technicalities rather than the high clinical risk from several other comorbid conditions seen in these patients. PA has been defined as a structural disease of the aortic wall where extensive and circumferential calcium deposition occurs in the thoracic aorta that can be detected by computed tomography (CT) or fluoroscopy. Calcium is present circumferentially or partly in the ascending aorta, the arch and the descending thoracic aorta. The calcium deposition may occur in the intima alone, as seen in those with atherosclerotic pathology. People with hypertension, diabetes, dyslipidemia and smoking generally have such intimal calcification and it usually affects the very elderly population. On the other hand, a non-atheromatous calcium deposition in the thoracic aortic media can also occur leading to the genesis of PA. This form of aortic calcification is more commonly seen in younger people with some systemic illnesses such as chronic kidney disease, post mediastinal radiation and systemic inflammatory disorders like Takayasu arteritis, systemic lupus erythematosus and rheumatoid arthritis. Different authors, surgeons, forums or bodies have defined PA differently but the common denominator in all of them is that “aortic calcification” extends in such a manner that it interferes with aortic cannulation, aortic clamping, aortotomy and safe access to ascending aorta for AVR, necessitating modification of the surgical technique to avoid the complications. The valve consortium has defined it as severely atherosclerotic aorta characterized by heavy circumferential calcification or severe atheromatous plaques of the entire ascending aorta extending to the arch such that aortic cross clamping is not feasible. CT scan is the imaging tool of choice to define PA [Fig. 1(A–C)]. A surgical classification given by Amorin et al, to guide treatment, consists of type 1 PA, when ascending aorta alone is involved and type II, if circumferential calcification involves descending thoracic aorta, with or without involving the arch. Type 1 is further classified as type 1 A, if it is impossible to crossclamp the aortic root and as type 1B, if clamping is feasible but at a high surgical risk. 19 PA is an incidental finding, mostly discovered during imaging studies carried out for cardiovascular and pulmonary diseases and the patients are asymptomatic with regards to PA. Prevalence of PA has been seen to be in a wide range of 2.7%-42.9% in people without cardiovascular disease. 
In patients undergoing electron beam CT for known or suspected CAD, the prevalence is reported to be 0.7%. Among patients with AS, PA is known to be present in approximately 7.5% of subjects. In patients undergoing valve surgery or coronary artery bypass surgery, the prevalence of PA is reported to be 1.25-13%, whereas the same in those undergoing TAVR is in the range of 5%-33%. 20 However, PA can often remain unnoticed, only to be detected as a surgical finding after sternotomy. 21 As mentioned above, PA may remain innocuous in itself, without producing any symptoms. However, a very stiff aorta can also lead to congestive heart failure, left ventricular (LV) failure, LV hypertrophy, elevated pulse pressure, coronary ischemia or sometimes, even sudden arrhythmic death. Conventionally, before TAVR emerged as an alternative, surgery was the only option for those with severe AS, but in the presence of PA, the surgical AVR was either declined or the patient had no choice but to undergo an extensive and technically highly complex surgery generally associated with a high rate of complications, particularly cerebrovascular embolic strokes. Two surgical alternatives to classical extensive surgery were attempted in these patients: surgery through axillary artery cannulation, or apico-aortic conduit or ascending aorta replacement. However, these clearly lacked efficacy and were associated with an additional set of complications. [22][23][24][25][26] Sutureless AVR is another surgical advancement but its efficacy and safety in the presence of PA are still not proven. 27 The advent of TAVR has allowed a promising therapeutic option for severe AS patients who also have PA. Currently in the era of TAVR, PA is an important independent factor for consideration during patient selection and is sometimes the sole indication for performing TAVR, irrespective of the risk category of aortic valve surgery. Fifteen percent of the inoperable patients in the randomized PARTNER study had PA. Data on TAVR in PA come from small series and case reports. A multicenter, observational, prospective study from three Spanish hospitals included 35 patients with severe AS and PA, and their data showed no difference in terms of safety and feasibility and similar rates of success and complications as compared to those seen in patients without PA. Axillary arterial access was chosen by some operators for patients who had unsuitable femoral access routes. Self-expanding Core Valve TM was used in the study. [28][29][30][31] Conventionally, TAVR for PA has been mostly done via transfemoral route, but some authors 32 With this background, we share our experience of TAVR in two cases of symptomatic severe calcific AS in whom PA was detected by the surgeons who subsequently declined to perform surgical AVR and the patients were referred for TAVR. Interestingly, between the two patients, one was close to half of the other's age at the time of TAVR. The older patient, an 80-year-old lady, had developed PA secondary to atherosclerotic intimal calcification, whereas the younger patient (age 38 years) had non-atherosclerotic medial calcification as the underlying pathology resulting from radiation therapy of the chest given at age five for Hodgkin's Lymphoma. The first patient (the elderly lady) had previously undergone coronary artery bypass graft (CABG) surgery eight years back for CAD and her moderately stenosed aortic valve was not replaced by the surgeon because of PA and related risks of cross clamping during surgery.
Post CABG, she was asymptomatic with her routine activities for a few years. She then started having recurrence of angina. The grafts were patent but aortic stenosis had become severe. Surgical replacement of the valve was denied again because of the presence of PA, which had been known from the past, and the further risks of redo open heart surgery and frailty at the age of 80 years. Since TAVR had become available in India by that time, she underwent successful AVR by TAVR technique using a Core Valve TM. The patient is doing well at six years of follow-up after TAVR. In contrast, the second patient is the youngest patient undergoing TAVR at our institution. As mentioned above, he was a 34-year-old gentleman who had received radiotherapy for Hodgkin's lymphoma at five years of age. Irradiation of the chest led to early degenerative changes and calcification of his aorta, aortic valve and coronary arteries. By 38 years of age, he had developed severe, symptomatic calcific AS. Breathlessness was the main symptom and he was in New York Heart Association functional class III at the time of presentation. The cardiac surgeons declined to perform a surgical AVR on him due to PA observed on multi-slice CT scan and, therefore, he was advised TAVR. The pre-TAVR work up revealed a heavily calcified tricuspid aortic valve, calcified ascending aorta and sinotubular junction, and small sinuses of Valsalva. The height of the left coronary ostium was 11.7 mm and that of the right was 13.4 mm [Fig. 2(A-D)]. The sinus of Valsalva diameters were 24.8 × 23.9 × 25.4 mm and the sinotubular junction diameter was 17.9 × 18.1 mm [Fig. 2(A-D)]. In view of the lower left coronary ostial height and small sinuses, there was concern about pinching of the left main ostium. A coronary protection strategy was planned. A coronary stent was pre-placed in the left anterior descending artery over an All-star coronary wire [Fig. 3(A-D)]. A 23-mm Evolut R TM aortic valve was then deployed successfully and the coronary stent was pulled out once no coronary impingement was confirmed after deployment of the aortic prosthesis on an aortogram done post implantation. Unfortunately, the stent got trapped between the sinus of Valsalva and the prosthesis frame. Different maneuvers were tried to pull out the trapped stent but to no avail. A new strategy was then applied wherein a chimney between the frame of the device, the sinus and the left main coronary artery was created by deploying the stent in the potential space well away from the ostium of the left main [Fig. 3(A-D)]. Coronary flow to the left system was not compromised and TAVR was completed successfully. The patient was doing well at one-month follow-up after the procedure and the follow-up CT scan revealed a well-deployed Evolut-R valve, a patent (chimney-like) stent and unimpaired left main coronary flow [Fig. 4(A-C)]. Thus, to conclude, given its technical feasibility, TAVR is likely to become the preferred strategy for treating patients with severe AS and concomitant PA. PA is an established impediment to surgical AVR because of the technical inoperability and is associated with increased risk of stroke. The other surgical alternatives for severe AS in the presence of PA are technically very complex and challenging. In comparison, TAVR is a much safer and easier option for treating such patients.
Advancements in TAVR technology have made it possible that whether it is a hell or high water arising from the burden and distribution of calcium in PA, TAVR can navigate and reach the shore in most patients with much lesser risk of complications!
2019-01-22T22:22:00.471Z
2018-05-28T00:00:00.000
{ "year": 2018, "sha1": "605224a28d204379f73f6f2e04ea60883bcf28f7", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ihj.2018.05.017", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "04645891c155b90e510326c10ff3861e363791a4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
210181580
pes2o/s2orc
v3-fos-license
Loneliness Modulates Automatic Attention to Warm and Competent Faces: Preliminary Evidence From an Eye-Tracking Study Social connections are essential for human survival. Loneliness is a motivational factor for building and maintaining social connections. Automatic attention occurs with little cognitive effort and plays a key role in detecting biologically salient events, such as human faces. Although previous studies have investigated the effect of loneliness on social behavior, the effect of loneliness on automatic attention to human faces remains largely unknown. The present study investigated the effects of loneliness on automatic visual attention to warmth and competence facial information, which determines facial attraction. This study included 43 participants who rated warmth and competence facial information. Then, they engaged with the target-distractor paradigm in which they saw two house images at the top and bottom and indicated whether the images were identical. During the task, we presented two faces as distractors and measured visual attention toward the faces as automatic attention because participants did not have to attend to the faces. The results showed an interactive effect between subjective loneliness and facial information on automatic attention. Warm targets automatically captured the attention of people feeling relatively lonely, whereas competent targets automatically captured the attention of those who felt less lonely. These results suggest that loneliness adaptively influences automatic processing of social information. INTRODUCTION Automatic attention is an adaptive tool for detecting and enhancing processing of salient events from an evolutionary perspective. Attention can be conceptualized to have two functions: voluntary (endogenous) attention, and automatic (exogenous) attention (Carretié, 2014). Voluntary attention is goal-driven and consciously directed toward the event or stimulus, while automatic attention is stimulus-driven and triggered by external events in the environment (Carretié, 2014). Automatic attention plays a key role in the efficient monitoring, detecting, and processing of biologically salient events that appear out of the current focus of attention, including fearful expressions (Hsu and Pessoa, 2007), pathogens (Van Hooff et al., 2014), and delicious-looking foods (Motoki et al., 2018). The saliency of an event depends on an individual's state, such as loneliness. Loneliness is the negative experience of a discrepancy between the desired and achieved personal network of relationships (Maner et al., 2007; de Jong Gierveld et al., 2016) and is related to negative emotional experiences (Hawkley and Cacioppo, 2010). Many studies have demonstrated that loneliness increases attention to social information, especially social threats, using various experimental methodologies such as electroencephalography (Cacioppo et al., 2015, 2016) and eye-tracking (Bangee et al., 2014). Additionally, it is known that loneliness causes hypervigilance to visual cues about social information (e.g., Bangee et al., 2014) as well as auditory cues about social information (i.e., voice, Shin and Kim, 2019). Although it seems obvious that loneliness increases attention to social stimuli, the effects of loneliness on automatic visual attention to social information remain unclear.
The abovementioned studies have addressed these effects on selective attention using Stroop (Cacioppo et al., 2015(Cacioppo et al., , 2016Shin and Kim, 2019) and free-viewing (Bangee et al., 2014) tasks. However, other studies investigating the effects of loneliness on attention only measured voluntary attention Lodder et al., 2016) because the participants consciously paid attention to social information. Given the unique role that automatic attention plays in adaptive behaviors (Carretié, 2014), it is important to examine whether loneliness would also influence automatic visual attention to social information, such as faces. The human face presents perhaps the most primary form of social information because it reveals various indicators such as age, gender, race, mood, intentions, and focus (Bruce and Young, 1986). Indeed, people automatically make social judgments when seeing faces even if they do not expect to interact with the individual (Winston et al., 2002). Additionally, the manner in which people see faces can influence behaviors such as preference (Shimojo et al., 2003). Thus, the present study investigated the effects of loneliness on automatic visual attention to faces because faces contain important social information. In the current study, we set up the first hypothesis: H1: Lonely people pay more automatic attention to faces compared to those who are less lonely. Although loneliness promotes attention to faces, this behavior may depend on one's impression of the face, such as warmth and competence. People automatically evaluate faces based on two fundamental dimensions, valence (warmth) and dominance (competence) Saito et al., 2019), and these two facial impressions might influence automatic attention differently based on the level of loneliness. From an evolutionary perspective of a fundamental motivational framework (Griskevicius and Kenrick, 2013;Motoki and Sugiura, 2017), loneliness modulates the saliency of information. However, no studies have investigated whether loneliness also modulates automatic attention to faces based on the two fundamental dimensions of facial evaluation (i.e., warmth and competence). It is possible that warmth information is more salient than competency information to lonely people because affiliation, which is a fundamental motive when forming and maintaining cooperative alliances, is motivating. Indeed, lonely people show greater motivation for reconnecting with others (de Jong Gierveld et al., 2016). For example, lonely people tend to preferentially recall events related to social interactions when they are asked to recall events after reading another person's diary. On the other hand, less lonely people do not show this type of behavior . Moreover, socially excluded people pay more attention to signs of social acceptance (DeWall et al., 2009;Xu et al., 2015) and show a greater ability to distinguish between genuine and deceptive smiles (Bernstein et al., 2008). Therefore, warm faces would capture more automatic attention from lonely people. In contrast, competent faces are more salient to less lonely people. In some cases, the perception of competence is more important than the perception of warmth (Wojciszke, 2005;Wojciszke and Abele, 2008;Cuddy et al., 2011). 
For example, people who seek affective information (e.g., an advertisement focusing on the pleasantness of the product) place more significance on the perception of warmth whereas people who seek cognitive information (e.g., an advertisement focusing on attributes such as ingredients or the manufacturing of the product) place more significance on the perception of competence (Aquino et al., 2016). The need for cognition is indirectly correlated with low levels of loneliness; two traits associated with loneliness (self-esteem and social anxiety) are associated with the need for cognition (Osberg, 1987). Specifically, self-esteem and low anxiety levels are both positively correlated with the need for cognition and correlated with low levels of loneliness (Al Khatib, 2012;Lim et al., 2016). These findings suggest that less lonely people seek competence rather than warmth in others. Previous studies of fundamental human motivation have shown that acquiring higher status in a group (status motive) is a primary concern after fulfilling the need for affiliation (Kenrick et al., 2010) because higher status enables people to access desirable mates, food, and resources more easily (Crosier et al., 2012;O'Connor et al., 2014). If an individual wants to acquire status within a group, then competence rather than warmth in others becomes important because perceived competence is highly correlated with status (Fiske et al., 2007). Considering the status motive (Kenrick et al., 2010), it is adaptive for people feeling less lonely to detect a competent person because the person who potentially acquires a higher status can establish and maintain a network of alliances, which may require initially gaining status or acquiring territory. Moreover, given that the motivation to mate is subordinate to the motivation for status, it is adaptive to identify a competent person who may be a desired mating partner for opposite-sex perceivers and a strong competitor for own-sex perceivers. Indeed, individuals with mating-related but not affiliation-related motivation have been shown to pay more attention to high-status men (DeWall and Maner, 2008). Thus, competent faces would capture more automatic attention from less lonely people. Given the role of automatic attention, which is the detection of biologically salient events, we set up the second hypothesis. H2a: A greater level of loneliness promotes automatic attention to warm faces. H2b: Less loneliness promotes automatic attention to competent faces. This study sought to clarify how the internal state of loneliness influences the rapid and efficient processing of social information by simultaneously presenting human faces and nonsocial images, such as a house, and using an eye-tracking device to examine whether loneliness promotes automatic (task irrelevant) attention to faces and whether loneliness promotes automatic attention only to warm faces. We set up two hypotheses. The first hypothesis is that people feeling more lonely pay more automatic attention to faces, as compared to those who are less lonely. The second hypothesis is that the degree of loneliness modulates the extent of automatic attention; a greater degree of loneliness promotes automatic attention to warm faces whereas a lesser degree of loneliness promotes automatic attention to competent faces. Participants Forty-four Tohoku University undergraduates and graduates (22 men and 22 women) participated. One participant was excluded from the analysis due to deficiencies in the eye-tracking calibration. 
Thus, there were 43 participants (21 men and 22 females, mean age: 21.00, SD = ± 1.90 years). Participants received about $10 for their participation. Design The present study included three independent continuous variables: (1) warmth rating of faces, (2) competency rating of faces, and (3) subjective rating of loneliness. The primary outcome was automatic attention toward faces. Apparatus The present study used the Tobii Pro X2-60 (60 Hz; Tobii Technology, Stockholm, Sweden) to monitor eye movements; this system does not require anything to restrict head movement (e.g., a chin rest) The stimuli were shown on an LCD monitor with a resolution of 1920 × 1080. The distance between the participant and the display was approximately 60 cm. Participants were calibrated using a nine-point calibration in the Tobii studio. During the eye-tracker task, participants were instructed to make their best effort not to move their heads. Loneliness Measure The UCLA Loneliness Scale (version 3) includes 20 items to measure the degree of perceived social isolation (Russell, 1996). Participants rated how often they have experiences that made them feel isolated, such as, "My social relationships are superficial, " and, "I feel alone." We used a Japanese version of the UCLA Loneliness Scale (version 3), which has been confirmed to have high reliability and validity in a Japanese sample (Masuda et al., 2012). Facial Stimuli We created 163 three-dimensional male faces using software (FaceGen Modeller 3.14) after referring to a previous study Saito et al., 2017;Motoki et al., 2019). The faces were rated by participants of the present study with regard to warmth ("How warm is this person?"), competence ("How competent is this person?"), and attractiveness ("How attractive is this person?"). They answered each question on a seven-point Likert scale from 1 to 7 (cold-warm, incompetent-competent, and unattractiveattractive). Each question was blocked and they took a short rest after answering 50 questions. This face-rating task was presented using PsychoPy (Peirce, 2007) and, based on the ratings of each participant, 80 face stimuli in which warmth and competence had minimal correlations were selected for each participant. Thus, a different set of facial stimuli was used for each participant. The correlation between warmth and competence was minimized as much as possible to avoid the multicollinearity problem because the analyses included warmth and competence ratings as independent variables (r = 0.166). The selection procedure for the facial stimuli was performed as follows. First, the faces were categorized as warm and competent, warm but incompetent, cold but competent, and cold and incompetent. Faces that received a score of 4 (scale midpoint) or higher on the warmth scale were categorized as warm faces and those that scored below four-points were categorized as cold faces. Similarly, faces that received a score of 4 or above on the competence scale were categorized as competent faces and those rated as less than 4 were categorized as incompetent faces. Then, 20 faces were randomly selected from each category; the mean ratings for each category are presented in Table 1. To determine any differences for each rating among the four categories, the ratings were assessed with one-way analysis of variance (ANOVA) tests. 
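The per-participant stimulus selection described above can be sketched as follows; the ratings are synthetic, the midpoint split and the 20-faces-per-cell draw follow the description in the text, and the additional step of minimizing the warmth-competence correlation used in the actual selection is not implemented here.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Illustrative 7-point ratings for 163 faces (synthetic values, one participant).
ratings = pd.DataFrame({
    "face_id": np.arange(163),
    "warmth": rng.integers(1, 8, 163),
    "competence": rng.integers(1, 8, 163),
})

# Categorize at the scale midpoint (>= 4 counts as warm / competent).
ratings["warm"] = ratings["warmth"] >= 4
ratings["competent"] = ratings["competence"] >= 4

# Randomly draw 20 faces per cell of the 2 x 2 design (80 faces in total).
selected = (
    ratings.groupby(["warm", "competent"], group_keys=False)
           .apply(lambda g: g.sample(n=20, random_state=2))
)
print(selected.groupby(["warm", "competent"]).size())  # 20 faces in each of the four cells
```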
There were significant main effects of the different categories on each rating (warmth: F(1.47, 162.50) = 316.36, p = 0.000, ηp² = 0.88, competence: F(1.47, 62.21) = 299.59, p = 0.000, ηp² = 0.87, and attractiveness: F(2.26, 94.76) = 79.01, p = 0.000, ηp² = 0.65). Subsequent multiple comparison analyses revealed that stimuli categorized as warm were rated as warmer than those categorized as cold. Additionally, stimuli categorized as competent were rated as more competent than those categorized as incompetent (Appendix Table A1). Thus, it was confirmed that the four categories were successfully divided based on the participant ratings. House Stimuli The present study displayed 10 house images used in a previous study (Motoki et al., 2018). The images contained three elements: a house, a yard, and the sky. Procedure Groups of two to four participants were placed in a room that accommodated a maximum of 10 people. After obtaining informed consent from each participant, all participants completed the UCLA Loneliness Scale (Russell, 1996). Because there were only two laptops available for the ratings task and all participants could not perform the task at the same time, half of the participants did so before and the remainder did so after completing the face-rating task. The loneliness rating (t[41] = 0.549, p = 0.581, n.s.) did not differ between groups; therefore, we concluded that task order did not affect the degree of loneliness. Then, the participants performed a filler task that was unrelated to the current study for 10 min. After that, we asked the participants to perform a face-house task, which employed the concurrent but distinct target-distractor paradigm (Carretié, 2014). First, a fixation cross appeared for 2 s. Then, two house images and two identical facial images appeared on the screen (Figure 1). We presented the house images at the top and bottom of the screen as the targets and the facial images on the left and right sides of the screen as distractors using Tobii software (ver. 3.3.2; Tobii Technology). The participants were required to indicate whether the targets (houses) were identical or different within each trial by either pressing the F key (same) or the J key (different). Additionally, we asked them to respond as soon as they knew the correct answer. A fixation cross appeared for 1 s between trials. The task was divided into two sessions of 40 trials each, and there was a short rest between sessions. There were 20 same house trials and 20 different house trials in each session. There was a total of 80 trials and the participants encountered 20 faces from each of the four categories (warm and competent, warm but incompetent, cold but competent, cold and incompetent; 80 in total). The order of the trials and the sessions was counterbalanced. Statistical Analyses All statistical analyses were conducted using R software (R Core Team, 2017). We used the lme4 package (Bates et al., 2015) for the generalized linear mixed model (GLMM) and p-values for each effect were obtained based on Satterthwaite's approximation using the lmerTest package (Kuznetsova et al., 2017). The statistical power of each analysis was retrospectively assessed using the simr package (Green and MacLeod, 2016). Then, one-tailed p-values for the two hypotheses were calculated because these hypotheses posited the specific directions of the differences.
To assess automatic attention to human faces in each trial, the screen was divided into two areas of interest (AOI); i.e., houses (target) and faces (distractors). These AOI were defined as follows: houses (target), which were the outlines of both top and bottom house images, and faces (distractors), which were the outlines of both sides of face images. FIGURE 1 | An example of the face-house task. Participants indicated whether the target images (house) located top and bottom were identical or not. Two identical faces were located side by side as distracters. Total viewing time of the faces was measured as automatic attention. Total fixation times on the houses (target) and faces (distractors) were measured using eye tracking. Automatic attention was defined as total fixation time on faces (distractors) because participants were not required to attend to the distractors. These statistical procedures were described in a previous study (Motoki et al., 2018). The total viewing time of faces was entered into the GLMM as a dependent variable and subject was entered as a random effect to control for repeated measures. Five fixed effects, including the warmth and competence ratings of the facial stimuli that were rated prior to the face-house task, were also entered into the model. Subjective loneliness, which was also rated prior to the face-house task, and the interactions between warmth and loneliness and between competence and loneliness were also assessed. Six covariates (i.e., attractiveness, brightness of the images, total house viewing time in each trial, trial condition [same/different], age of the participant, and sex of the participant) were entered into the subsequent model to account for potential effects from other factors. Attractiveness was rated by each participant prior to the face-house task and entered into the model because attractiveness captures attention (Sui and Liu, 2009). The brightness of images was calculated using the rgb2gray and mean2 functions of MATLAB 2017a 1 and was entered into the model because brightness captures attention (e.g., Turatto and Galfano, 2000). Total house viewing time and trial condition were entered because trial difficulties that might be reflected in these variables could affect gaze behavior and viewing time for both the targets and distractors. Age and sex were entered to reduce individual differences. All variables were standardized, except for nominal variables, trial condition (same: 0.5, different: −0.5), and sex of the participants (male: 0.5, female: −0.5). Table 2 presents the descriptive statistics of the following variables: mean score for loneliness, reaction time (ms, which is the average time for one trial in the face-house task), viewing time of the houses and faces (ms), accuracy of the task performance (%), and disrupt rate (%), which was the percentage of trials in which participants looked at the distractors. First, a GLMM analysis was conducted to assess the effects of perceived traits (warmth and competence) and subjective loneliness on automatic attention. There was no main effect of loneliness on automatic attention to faces (β = 0.078, z = 1.231, p = 0.112, n.s.). This result indicates that loneliness did not promote automatic attention to faces regardless of the facial information, so the first hypothesis positing that people feeling relatively lonely would pay more automatic attention to faces, as compared to those who are less lonely, was not supported.
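The model described above was fitted as a GLMM with lme4 in R; a rough Python analogue, using a linear mixed model with a random intercept per subject and synthetic stand-in data, is sketched below. The column names, the data-generating step and the linear (rather than generalized) specification are illustrative simplifications, not the published analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_subj, n_trials = 43, 80
# Synthetic trial-level data standing in for the real eye-tracking measures.
trials = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "warmth": rng.standard_normal(n_subj * n_trials),
    "competence": rng.standard_normal(n_subj * n_trials),
    "loneliness": np.repeat(rng.standard_normal(n_subj), n_trials),
    "condition": np.tile([0.5, -0.5], n_subj * n_trials // 2),
    "face_viewing_time": rng.gamma(2.0, 100.0, n_subj * n_trials),
})

# Random intercept per subject controls for repeated measures; the covariate set
# here is abbreviated relative to the six covariates listed in the text.
model = smf.mixedlm(
    "face_viewing_time ~ warmth * loneliness + competence * loneliness + condition",
    data=trials, groups=trials["subject"],
).fit()
print(model.summary())
```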
More importantly, there were significant interactions between perceived warmth and subjective loneliness (β = 0.028, z = 1.689, p = 0.046) and between perceived competence and subjective loneliness (β = −0.039, z = −2.381, p = 0.009). The full results of this analysis are presented in Appendix Table A2. The present study also conducted a simple slope analysis to interpret each interaction (Figure 2). First, the simple slopes for the association between automatic attention and perceived warmth were tested for low loneliness (−1 SD below the mean) and high loneliness (+ 1 SD above the mean). The analysis showed that people feeling relatively lonely automatically paid attention to the warm targets (β = 0.058, z = 2.527, p = 0.006), whereas people feeling less lonely did not (β = 0.004, z = 0.153, p = 0.439, n.s.). A post hoc power analysis revealed high statistical power for detecting the simple effect of warmth when the loneliness rating was high (power = 0.80). On the other hand, when the loneliness rating was low, statistical power was low (power = 0.10). Although the statistical power was low for some of the analyses, it should be noted that post hoc power analyses, such as those performed in our study, may be misleading. According to Zumbo and Hubley (1998), it is nonsensical to make power calculations after a study has been conducted and a statistical decision has been made because there is a one-to-one correspondence between power and the p-value of any statistical test (Ellis, 2010). Thus, although the post hoc power analysis showed that the statistical power of the study was low, the study may not have underpowered. The simple slopes for the association between automatic attention and perceived competence were tested for low and high loneliness. The analysis showed that people feeling less lonely paid significantly more attention to competent targets (β = 0.047, z = 2.001, p = 0.023), whereas those feeling relatively more lonely did not (β = −0.032, z = −1.324, p = 0.093). A post hoc power analysis revealed moderately high statistical power for detecting the simple effect of competence when the loneliness rating was low (power = 0.70). When the loneliness rating was high, power was low (power = 0.50). DISCUSSION The present study investigated how two fundamental impressions of faces, warmth and competence, and subjective loneliness would affect automatic attention. The results showed that warm targets captured the automatic attention of relatively lonely people, whereas competent targets captured the automatic attention of less lonely people. This result supports our hypothesis 2a and 2b that the degree of loneliness modulates the extent of automatic attention to warm and competent faces. Our result that people who felt relatively lonely automatically paid attention to warm faces is consistent with the regulatory model of belonging need . According to the model, individuals monitor their level of social inclusion and maintain it at an acceptable level in the same way as they maintain basic needs (e.g., food, water, and sleep). When the sociometer or other assessment mechanism indicates that one's state of belonging is unsatisfactory, the regulatory system becomes activated to monitor social information that may provide cues to belonging and inclusion. Indeed, people feeling deficient in social connections show a greater tendency to attend to social information, such as in the environment, compared with those who have a satisfactory level of acceptance . 
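Because all continuous predictors were standardized, the simple slopes reported above follow from the fixed-effect estimates through the standard probing formula, slope(moderator) = β_main + β_interaction × moderator, evaluated at ±1 SD. The small check below uses the reported interaction coefficients together with main-effect values that are assumed for illustration (back-solved from the reported slopes rather than taken from the paper).

```python
def simple_slope(beta_main: float, beta_interaction: float, moderator: float) -> float:
    """Slope of a predictor at a given (standardized) moderator value."""
    return beta_main + beta_interaction * moderator

# Interaction coefficients as reported; main effects are illustrative assumptions.
effects = {
    "warmth":     {"main": 0.030, "interaction": 0.028},
    "competence": {"main": 0.008, "interaction": -0.039},
}

for name, b in effects.items():
    low = simple_slope(b["main"], b["interaction"], -1.0)   # less lonely (-1 SD)
    high = simple_slope(b["main"], b["interaction"], +1.0)  # more lonely (+1 SD)
    print(f"{name:10s}: slope at -1 SD = {low:+.3f}, at +1 SD = {high:+.3f}")
# warmth:     -1 SD -> +0.002, +1 SD -> +0.058 (reported: 0.004 and 0.058)
# competence: -1 SD -> +0.047, +1 SD -> -0.031 (reported: 0.047 and -0.032)
```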
The current study indicates that the monitoring system works automatically and that the system can monitor less salient information, such as perceived warmth, rather than salient information, such as a smiling face. The adaptive change in the direction of automatic attention to faces is consistent with a goal systems framework (Kruglanski et al., 2002). According to this framework, activation of a goal, even unconsciously, leads people to present behavior to accomplish the goal. Indeed, people who are primed to activate a prestige goal chose a higher-priced option than do people primed to activate a thrift goal (Chartrand et al., 2008). Furthermore, when people are primed with the goal of reducing physical coldness, they prefer socially warm activities rather than control FIGURE 2 | The interaction between the warmth evaluation and subjective loneliness (left) and the interaction between the competency evaluation and subjective loneliness (right). The red line represents a high degree of loneliness (+ 1 SD). The green line represents average loneliness. The blue line represents a low degree loneliness (-1 SD). Shaded regions indicate 95% confidence intervals. activities (Zhang and Risen, 2014). Therefore, it may be that the goal of the lonelier participants in our study was to connect with people they perceived as warm. Looking at warm faces is a useful strategy for connecting with warm people because a direct gaze gives the receiver a better impression of the sender (Ewing et al., 2010;Khalid et al., 2016). Conversely, the goal of less lonely participants may have been to acquire a higher status (Griskevicius and Kenrick, 2013), given that high status in a group (status motive) is a primary concern after fulfilling the need for affiliation (Kenrick et al., 2010). For these participants, directing their gaze toward competent faces is a useful strategy because building a good relationship with a higher status person may raise one's own status. Our findings are not consistent with the hypothesis that lonely people pay more automatic attention to faces than do less lonely people. Gardner et al. (2005) found that higher levels of loneliness were correlated with attention to social information. However, hypervigilance to social information depends on the emotional valence of the information. DeWall et al. (2009) reported that participants threatened with exclusion paid more attention to social information conveying social acceptance (smiling faces) than to that conveying social disapproval (angry and sad faces). These findings suggest that loneliness increased attention to positive but not negative social information. Our finding that lonely people paid more attention to warm faces than to faces in general is consistent with DeWall et al. (2009). However, previous studies have found that loneliness increased attention to negative social information. For example, Shin and Kim (2019) found that lonely people showed heightened attention to a negative vocal tone that signaled social threat. Thus, it may be important to elaborate when lonely people pay more attention to social acceptance versus social threats to further elucidate the role of loneliness in social circumstances. In our results, we observed the main effect of warmth but not competence on automatic attention. Although perception is composed principally of warmth and competence information, previous studies have suggested that the warmth information is primary (Engell et al., 2007;Todorov et al., 2008). 
Trustworthiness (warmth) judgments after a 100ms exposure to a target face is highly correlated with judgments made without time constraints compared to competent judgments (Willis and Todorov, 2006). These results suggest that people can make these judgments in a short time and that warmth judgments are made faster than other judgments. From an evolutionary perspective, the primacy of warmth makes sense because another person's intent for good or ill will is more important to survival than whether the other person can act on those intentions (Fiske et al., 2007). Given these findings, it is reasonable to infer that warmth was generally more significant for capturing automatic attention than competence. On the other hand, the bias toward information about warmth or competence is influenced by deliberative aspects, such as the social interaction perspective (Wojciszke and Abele, 2008). According to these authors, warmth is more important than competence in cases of individuals who are not close to one another whereas competence is more important than warmth in cases of close friends and the self (Wojciszke and Abele, 2008). In the present study, automatic attention to warmth and competence information was modulated by loneliness. Thus, although warmth information was more important in general, the importance of warmth and competence information might depend on an individual's state, such as loneliness, in some cases. This study had several limitations. First, it was difficult to conclude a causal relationship between loneliness and eye movement behavior. Thus, a direct manipulation of loneliness, such as with the Cyberball paradigm (Williams et al., 2000), will be important to confirm the causal relationship. Second, our sample size may have affected the results. The study included 43 subjects, which was similar to the sample size of a previous study (n = 46) that used eye-tracking methodology to examine the effects of social exclusion on attention to faces (DeWall et al., 2009). Although the observed power of the main findings was not particularly low (the interaction of competence and loneliness was high [80%] and that of warmth and loneliness was moderately low [40%]), it would have been ideal to calculate the sample size based on the effect size of the previous relevant study prior to observing outcomes. Thus, a future study with an appropriate sample size based on the results of the present study will strengthen the reliability of our findings. Finally, the facial stimuli used in the present study were relatively artificial. Although artificial faces are advantageous in that it is possible to control extraneous factors such as hairstyle and lighting, these are not faces people would normally interact with. Because the concept of loneliness was the primary focus of the present study, the use of artificial faces may have decreased the ecological validity. Therefore, future studies using the same procedure should include more naturalistic faces as stimuli. The present results provide new insight into automatic attention. Prior studies showed that salient stimuli (e.g., emotional faces and delicious-looking foods) capture automatic attention (Carretié et al., 2012;Motoki et al., 2018). Our study suggests that saliency of social stimuli is dependent on an individuals' state, such as feeling lonely. Indeed, there is a modulatory effect of individual statetrait characteristics on automatic attention. 
For example, patients with generalized anxiety disorder present greater automatic attention to negative distractors than do healthy people (MacNamara and Hajcak, 2010). Therefore, future studies should consider the level of saliency changes based on individual states. DATA AVAILABILITY STATEMENT The datasets generated for this study are available on request to the corresponding author. ETHICS STATEMENT This study was approved by the Ethical Committee of the School of Medicine at Tohoku University and was conducted in accordance with the Declaration of Helsinki. All participants in this study joined voluntarily and provided written informed consent prior to the participation. AUTHOR CONTRIBUTIONS TS, KM, RN, and MS designed and developed the study protocol. TS and KM conducted the study and collected the data. TS analyzed the data and wrote the manuscript. All authors interpreted the data, and read and approved the final manuscript. KM, RN, RK, and MS provided critical revisions. The study included 3440 observations for 43 participants.
2020-01-17T14:15:10.522Z
2020-01-17T00:00:00.000
{ "year": 2019, "sha1": "254bacdb6c694e54d009ef8de5287390fed79cad", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02967/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "254bacdb6c694e54d009ef8de5287390fed79cad", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
55265639
pes2o/s2orc
v3-fos-license
Evaluating the robustness of top coatings comprising plasma-deposited fluorocarbons in electrowetting systems Thin dielectric stacks comprising a main insulating layer and a hydrophobic top coating are commonly used in low voltage electrowetting systems. However, in most cases, thin dielectrics fail to endure persistent electrowetting testing at high voltages, namely beyond the saturation onset, as electrolysis indicates dielectric failure. Careful sample inspection via optical microscopy revealed possible local delamination of the top coating under high electric fields. Thus, improvement of the adhesion strength of the hydrophobic top coating to the main dielectric is attempted through a plasma-deposited fluorocarbon interlayer. Interestingly enough the proposed dielectric stack exhibited a) resistance to dielectric breakdown, b) higher contact angle modulation range, and c) electrowetting cycle reversibility. Appearance of electrolysis in the saturation regime is inhibited, suggesting the use of this hydrophobic dielectric stack for the design of more efficient electrowetting systems. The possible causes of the improved performance are investigated by nanoscratch characterization. Introduction Electrowetting (EW) deals with the enhancement of the wetting properties of solids by the modification of the electric charge density at a liquid/solid interface. Suitable application of external electric field induces variation of the contact angle of conductive liquids on insulating substrates such as polymers, glass and oxides. EW can provide more than 100° of contact angle modulation reversibly, especially in an oil ambient, with fast response to actuation in the order of milliseconds [1]. As a result, EW has been utilized for a number of technological applications such as lab-on-chip devices [2,3], liquid lenses [4,5], electronic displays [6,7] and "smart" microbatteries [8], to name a few. For all these applications it is desirable to use low voltages to induce contact angle changes, through either reduction of the dielectric thickness or the use of ionic surfactants [9]. However, indication of dielectric failure (most commonly electrolysis) is frequent, especially in cases of thin dielectrics. Consequently, improving the robustness of the dielectric is of great importance since it is related to the robustness of devices. Electrowetting on dielectric (EWOD, usually called EW) can be realized when a conductive sessile drop sits on a hydrophobic dielectric on top of a conductive electrode [10]. The dependence of the contact angle, θ V , on the applied voltage, V, is given by the Lippmann equation [11], cos θ V = cos θ Y + C V²/(2γ) = cos θ Y + ε 0 ε r V²/(2dγ) (1), where θ Y is the Young's contact angle and γ is the liquid surface tension. C is the capacitance per unit area, d is the thickness of the dielectric with dielectric constant ε r , and ε 0 is the permittivity of vacuum. Lippmann equation demonstrates reliable predictions of θ V at low voltages, however at high voltages experiments show that, beyond a critical voltage, V s , the contact angle (CA) reaches a lower limit in contradiction to Eq. (1) which predicts complete wetting, i.e. θ V =0° at sufficiently high applied voltage. This phenomenon is widely known as CA saturation that limits the EW response to the applied voltage. Recent studies attribute the CA saturation to leakage current mechanisms, i.e.
dielectric breakdown [12], dielectric charge trapping [13,14] and air ionization [15] caused by the increased electric field strength in the vicinity of the three-phase contact line (TPL). Material breakdown at the onset of saturation coupled with the charge leakage propagating through the dielectric is of great research importance, as the understanding of the related mechanisms could lead to more efficient EW devices. A usual choice for hydrophobic dielectric is stand-alone amorphous fluoropolymers Materials and Methods Various hydrophobic dielectric stacks were fabricated on phosphorus-doped Si wafers which were also used as ground electrodes (resistivity, 1-10 Ω/cm). The hydrophobic dielectric stacks consist of a main dielectric and a hydrophobic top coating. SiO 2 or TEOS were used as the main dielectrics. Commercial amorphous fluoropolymers (AFs) such as Asahi Cytop ® 809M, Teflon ® AF 1600 and plasma-deposited fluorocarbons (FCs) were used as hydrophobic top coatings. The adhesion strength of Teflon ® to various substrates is most commonly improved with the use of silanes. As a result, fluorosilanes are used as primers for the Teflon ® AF [23] and, in particular, perfluorooctyltriethoxysilane solution is spin coated onto the oxide layer and the coated wafers are heated at 95 °C for 15 min. Teflon ® AF is then spun on top of the fluorosilane layer. Asahi Cytop ® 809M, as a commercial AF alternative, is diluted in perfluorohexene and spun on top of SiO 2 (35 nm thick Cytop ® ). A special process sequence, in an oven, is needed for the Cytop to adhere well to the oxide surface. In this work, on top of TEOS, an alternative hydrophobic top coating was used. A thin plasma FC film (30-100 nm) was deposited as an adhesion promoter layer for the commercial Teflon ® AF [19]. Teflon ® AF (30-60 nm) was diluted at Fluorinert ® Fluid FC-77 solvent; and then spin coated on the plasma FC film. After spinning, the sample was baked in air at 95°C for 5 min. Verification of the thicknesses of the oxide and the top coating layers was performed with a spectroscopic ellipsometer, model M2000 J.A. Woolam Co. (accuracy in the measured thickness ± 0.5 nm). Measurements of the dependence of the CA on the applied voltage were performed in an in-house built EW experimental setup, previously described in Papathanasiou et al. [16]. Real time image processing software, that was developed in-house, was used to analyze the drop shape. The method is described in [16] and the accuracy is of the order of ±1.5°. The surface of the hydrophobic top coatings was inspected in detail with an optical microscope (Zeiss AX10 Imager.A1m). Immediately after the EW experiments the sessile drop was removed from the sample for optical characterization of the drop's footprint. Nanoscratch testing was performed with Hysitron TriboLab ® Nanomechanical Test Instrument, which allows the application of loads from 1 to 10.000 μN and records the displacement dependence on applied load with high load (1 nN) and a high displacement (0.04 nm) resolution. The TriboLab ® employed in this study is equipped with a Scanning Probe Microscope (SPM), in which a sharp probe tip moves in a raster scan pattern across the sample surface using a three-axis piezo positioner. All nanoscratch measurements were performed with the standard three-sided pyramidal Berkovich probe, with an average radius of curvature of about 100 nm, in a clean area environment with 45% humidity and 23°C ambient temperature [24]. 
The scratch tests performed in this work included three main segments. Firstly, a prescratch scan under a very small load (1 μN) was carried out. Then, the indenter scraped the sample under a certain force and a scratch was generated. The normal applied loads (NL) used in this work were 50-300 μN. The length of the scratches was 10 μm. Finally, a postscratch test under the same NL as the pre-scratch test was conducted to obtain the image of the surface after scratch. An estimation of the residual scratch ditch and the extent of immediate recovery can be obtained by comparing the pre-scratch with the post-scratch image profiles. Electrowetting on Composite Hydrophobic Coating In this work we focused on the investigation of adequate coupling in terms of The adhesion of Teflon ® to substrates (e.g. silicon, glass) depends primarily on physical interaction since it has no reactive chemical groups for chemical bonding [19]. Fluorosilanes, that were originally used in our group to promote adhesion between Teflon ® coating and TEOS, proved to be inadequate for investigating the electrowetting CA modulation at voltages higher than V s . Electrolysis was still present during the experiments in the saturation regime (at V s and beyond). Our study showed that the adhesion of Teflon ® AF to TEOS could be improved by the use of a thin plasma-deposited FC layer. Plasma-deposited FC films are known to adhere well to oxide surfaces due to an oxyfluoride interface layer on which a Teflon-like (1 < F/C < 2) layer grows (F/C stands for "fluorocarbon ratio") [25]. It is the chemical affinity of the plasma-deposited FC to Teflon ® that improves the overall bondability of Teflon ® AF to the oxide substrate. The result is a sandwich-like hydrophobic coating, hereafter called "Composite Coating", which consists of a thin plasma-deposited FC layer and a thin spin coated Teflon ® film. In Figure 1, EW experiments on the tested samples are presented. The EW tests were performed in dodecane ambient as follows: The applied voltage was increased from 0 Volts in increments of 2.5 Volts up to the critical voltage, namely V s , where CA saturation sets on. Then the voltage was turned off and the sessile droplet rested back in its initial shape. This will be from now on referred to as an EW cycle. Moreover, robustness verification in terms of dielectric breakdown prevention was performed. For this purpose, composite coated samples were compared to Teflon ® coated ones with respect to the CA dependence on applied voltages up to 2.5V s . Usually, EW experimental data are presented up to the saturation limit, and compared with the predictions of Young-Lippmann equation. In rare cases and for relatively thick hydrophobic dielectrics, experimental data for applied voltages V > V s are presented [26]. In this work three samples were tested at applied voltages apparently beyond the saturation. The samples consist of TEOS as the main dielectric (with thicknesses of 180 nm and 821 nm) and on top of it the following hydrophobic top coatings were fabricated: Two composite coatings (with thicknesses of 58 nm and 174 nm) and one Teflon ® coating (52 nm thick). As expected, the experimental data are in close agreement with the predictions of Young-Lippmann equation (dashed lines in Fig.1a, b) up to the onset of saturation. In Fig.1a In Fig. 1b Possibly charge trapping in the hydrophobic dielectric suppresses the wetting enhancement during the EW cycle sequence [14]. 
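Since the dashed-line predictions in Fig. 1 come from Eq. (1), a short numerical sketch may help reproduce such curves. The stack thickness, effective dielectric constant, interfacial tension and Young angle below are illustrative assumptions, not the measured values of the tested samples, and the clipping of cos θ_V only prevents arccos domain errors; it does not model the saturation discussed above.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def young_lippmann_angle(theta_y_deg, voltage, thickness_m, eps_r, gamma):
    """Contact angle (deg) predicted by Eq. (1) for a single effective dielectric layer."""
    c = EPS0 * eps_r / thickness_m                      # capacitance per unit area (F/m^2)
    cos_v = np.cos(np.radians(theta_y_deg)) + c * voltage ** 2 / (2.0 * gamma)
    return np.degrees(np.arccos(np.clip(cos_v, -1.0, 1.0)))

# Assumed, illustrative parameters: ~240 nm stack, eps_r ~ 3,
# water/dodecane interfacial tension ~ 0.04 N/m, Young angle 160 deg in oil.
for v in (0, 10, 20, 30, 40):
    print(v, "V ->", round(float(young_lippmann_angle(160.0, v, 240e-9, 3.0, 0.04)), 1), "deg")
```

In the experiments above, such a prediction is expected to hold only up to V_s; beyond that voltage the measured CA levels off instead of reaching complete wetting.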
It should be mentioned that CA hysteresis (difference between the advancing and the receding CA) was ~5°. Optical Microscopy Characterization of the Hydrophobic top Coatings The surface of the hydrophobic coating was inspected by optical microscopy, Table 1). The first sample (sample S1) that features only plasma-deposited FC coating on top of TEOS shows resistance to dielectric breakdown. However, upon voltage removal, the sessile drop does not recede to its initial shape and stays at its advanced wetting state. Static CA hysteresis of the sessile drop is 42°, which indicates high EW irreversibility [28]. Since it was not possible to perform reversible EW cycles due to high hysteresis, application of voltage for a long time was decided to test the robustness of the sample at high voltages. Our experiments showed that even if a voltage of the order of 2.5V s was applied for 5 minutes, there was no sign of electrolysis. The microscopy inspection of the sample surface shows noticeable damage (Fig. 2a); however, material breakdown is not evident in the EW test. Clearly in the vicinity of the TPL there is a narrow band (~80 μm) that suggests that this portion of the surface is mostly affected. The stressed area looks like a ring with a narrow band at the edge, formed by the fully advanced wetting state of the drop. Although we observe these random formations, there is no macroscopic indication of material damage (i.e. electrolysis) that usually happens on Teflon ® coating which will be discussed below. The second sample (sample S2), with Teflon ® AF as hydrophobic coating, appears highly affected (Fig. 2b) It should be noted that when electrolysis happens, bubbles are localized in the vicinity of the TPL, confirming the high electric field strength in this region. The plasma-deposited FC interlayer might have a twofold advantage: on the one hand reduced void density between the hydrophobic coating and TEOS through better adhesion and on the other hand inhibition of local charge trapping in the overall hydrophobic top coating through reduced porosity. In the following section we focus on the interlayer mechanical properties of the hydrophobic dielectric to estimate the contribution of this factor to the overall EW system performance. Nanoscratch tests Nanoscratch tests can provide a measure of the scratch resistance of the hydrophobic dielectric. Initially, nanoindentation tests were conducted to determine the hardness and elastic modulus of hydrophobic dielectric layers. The corresponding values for each layer were used to determine the sequence parameters for the following scratch tests, i.e. applied [29]. Moreover, from the structural material viewpoint, the plasma FC interlayer is expected to be more resistant to the NL than the Teflon ® AF. The structure (-C-C-) that these polymers are consisted of is related to material hardness [30]. Chemical characterization of the plasma FC films through composition (XPS) analysis has shown that the plasma-deposited F/C ratio is 1.5 [31], whereas the F/C ratio of Teflon is 2. Hence, plasma-deposited FC is more crosslinked than the Teflon ® AF (more (-C-C-) bonds per volume). As a result, it is not surprising that the plasma-deposited FC appears to be more resistant to the NL. Comparing the surface profiles of Figures 3a,b, [29]. 
After NL ~30μN (regime III) both samples exhibited elastoplastic behaviour, with sample S2 exhibiting almost full plastic behavior (convergence of the initial and the residual scratch profiles) in the last few nanometers of displacement (indicated in Fig. 3b with a dashed circle). In Figure 4 the surface profiles of the two scratched samples are shown. As seen in Figs. 4a and b, there is a buildup of polymer material mostly on one side of the scratch [32]. These buildups are found in all scratches created, which shows that the films were plastically deformed and that the buildup was most likely an accumulation of compressed materials. When a moving scratch tip ploughs through the coating, the material will be either pushed forward or piled-up sideways ahead of the tip; material's pile-up on the sides of the indenter suggests plastic deformation of a film over an undeformable substrate [33,34].This phenomenon is usually observed for relatively ductile polymers, where plastic deformation is evident on applied strains [35]. In Fig. 4c cross-sectional scratch profiles (via SPM imaging at maximum load) of samples S3 and S2 at 50,150 and 300 μΝ of applied NL are presented. The result, shown in Fig.4b, is an indication of coating failure, which resulted in a blister sample damage (indicated by the circle in Fig. 4b). In every Teflon ® coated sample tested by nanoscratching, we observed an abrupt change in NL at a certain scratch depth (ranging from 68 to 76 nm). This abrupt change is attributed to a discontinuity of certain mechanical properties (e.g. elastic modulus) between Teflon ® and TEOS which reflects to the adhesion between the two and consequently an observed abrupt change in applied NL. Since this abrupt change was never observed in the case of the composite coating sample, it is suggested that plasma-deposited FC smooths out the aforementioned discontinuity between TEOS and the hydrophobic top coating (Teflon ® ), resulting in better EW performance. In Figure 5, applied NL dependences on scratch depth and length are presented. The tip scratches the surface under progressively increasing NL and along a predefined path. As denoted by the arrow, an abrupt change in NL is observed only when Teflon ® coating is used, which is indicative of strength weakening due to material heterogeneity [21,29,36]; this occurs at a critical scratch length of ~5.5 μm and NL ~ 150 μN. The scratch depth variation ( see Fig. 5), indicates that the NL abrupt change sets in when the tip penetration is close to the Teflon ® /TEOS interface (sample S2). The corresponding critical load is usually denoted as L c . The existence of an L c is an indication of failure in terms of coating cracking, delamination or brittle fracture caused by scratch testing [37]. High elasticity in combination with low hardness of the Teflon ® top coating mostly favor delamination and not coating cracking or brittle fracture. Moreover, the Teflon ® coating of the tested sample is approximately 60 nm thick which is close to the scratch depth value where the critical load appears. This strain mismatch evident by the abrupt change in the applied NL induces film delamination, and is not observed in the case of the composite coating. We suspect that the interlayer of plasma FC suitably bonds the oxide substrate and the spin coated Teflon ® layer, therefore the corresponding nanoscratch curve in Fig. 5 is smoother for the composite coating. 
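The abrupt change in applied NL that marks L_c can also be picked out automatically from a progressive-load scratch trace. The sketch below uses a simple discrete-difference rule; the jump threshold and the synthetic trace are assumptions chosen only to mimic an event of the kind described above (~150 μN near the Teflon®/TEOS interface), not recorded data.

```python
import numpy as np

def critical_load(load_uN, depth_nm, jump_threshold_uN=10.0):
    """Return (L_c, depth at L_c) at the first abrupt load change, or None.

    Flags the first sample where the load changes by more than
    `jump_threshold_uN` between consecutive points of the trace.
    """
    load = np.asarray(load_uN, dtype=float)
    depth = np.asarray(depth_nm, dtype=float)
    jumps = np.abs(np.diff(load))
    idx = np.argmax(jumps > jump_threshold_uN)   # first index exceeding the threshold
    if jumps[idx] <= jump_threshold_uN:
        return None                              # smooth trace: no critical load detected
    return load[idx + 1], depth[idx + 1]

# Hypothetical trace: smooth ramp to ~148 uN, then a jump at ~70 nm scratch depth.
load = np.concatenate([np.linspace(0, 148, 60), np.linspace(175, 300, 40)])
depth = np.linspace(0, 120, 100)
print(critical_load(load, depth))
```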
Conclusions

In this work the effect of plasma-deposited fluorocarbons, as structural layers of the top coating, on EW performance was investigated. A sandwich-like hydrophobic top coating was fabricated, here called the composite coating, comprising a thin plasma-deposited FC layer and a thin spin-coated Teflon® layer. This sample showed resistance to dielectric breakdown, improved CA modulation and reversibility for at least thirty EW cycles, at applied voltages apparently beyond the saturation. Optical microscopy inspection revealed the absence of dendritic patterns usually observed in Teflon® coatings. Nanoscratch testing was conducted to further investigate the interlayer mechanical properties of the proposed hydrophobic dielectric stack. Nanoscratch measurements showed improved adhesion strength of the composite coating to the oxide substrate compared to the equivalent Teflon®-coated sample, confirming the improved robustness observed in EW tests.
Linear Growth and Nutritional Status of Young Gabonese Sickle Cell Patients , and Associated Factors Background: Stunting and undernutrition mark the nutritional status of the sickle cell patient, but some surveys show trends to overweight in some countries. The primary objective of this study was to compare the growth of children with sickle cell, with non-sickle cell children Introduction Sickle-cell anemia (SCA) is the first monogenic genetic disorder in the world, affecting nearly 2.5% of the Gabonese population, with about 800 births of homozygous children each year [1].Sickle cell life expectancy is currently better in developing countries than it was a few years ago [2][3][4].This improvement is attributed in part to earlier diagnosis, systematic antibiotic prophylaxis, improved sickle cell vaccination status, prevention of vascular accidents, hydroxyurea prescription and transfusion programs [5].SCA is commonly associated with stunted growth or even delayed onset of puberty.Several causes are attributable to these facts, including the chronicity of anemia, the inter-critical management of the disease in our context, but also the socio-demographic factors of the patients.Recent surveys show that sickle cell children are prone to global overweight and obesity, both in developed and developing countries [6,7].This is dissimilar to a large number of other investigations which highlight the negative consequences of sickle cell disease on the weight and growth of affected individuals [8,9]. The primary objective of our study was to compare the nutritional status and growth of sickle cell aged 0 -15 years old in Gabon with those of their non-sickle cell fellows.The secondary objective was to determine the factors associated or correlated to the differences observed. Materials and Methods This case-control study took place from April 1 to June 30, 2016, in the cities of Libreville and Lambarene.The cases were homozygous (SS) sickle cell children, aged from 0 to 15 years, drafted from the registers of two public hospitals of the two cities.Controls were documented hemoglobin-AA subjects, recruited from the same schools and pre-school centers, or from the same immunization centers as those attended by Linear Growth of Sickle Cell Children Int J Clin Pediatr.2018;7(1-2):1-5 the cases. 
Selection of subjects We enlisted sickle cell patients who had been in the pediatric wards of Libreville Teaching Hospital, and the Albert's Schweitzer Hospital in Lambarene during the year 2015, and also sickle-cell diagnosed infants in immunization centers around these hospitals.A random draw was performed to include cases from the constituted list.We listed school and preschools centers were the cases went to.For each SCA children, we randomly selected six children by school-provided lists of classes, or vaccination lists in health centers.We matched by sex and age ± 1 month for children less than 59 months, ± 3 months for school-aged.After the parental agreement, the cases were recalled in the non-critical phase, at a distance from any acute pathological event, to collect the socio-demographic, anthropometric data, the medical information of the subject and the realization of a cell blood count (CBC).For the controls who participated in the survey, the parental agreement was also obligatory, and we collected the same data as in the cases, except the CBC.All the children were given medical consultation if necessary.At the end of the data collection, we made a final draw to eliminate controls, so that the matching was built on the ratio of 1 case/1 control.The minimum number of 93 cases was obtained using the Schwartz formula, n = (t 2 × p(1 -p))/e 2 , where e is significance threshold of 0.05 (5%), p is prevalence and t is 95% confidence interval (CI)), using chronic malnutrition (assessed with the height for age index (HAZ)) in children in Gabon as 3% [10]. Conduct of the survey and anthropometric measurements Socio-demographic data included the subject's civil status, the number of children living at home, the child's rank in the siblings, the level of education, the occupational activity of the mother and the activity of the head of the family.The medical history collected the basic hemoglobin, the number of transfusion episodes and the number of hospitalizations without transfusions.The anthropometric parameters (weight, height or length, head circumference and mid-upper arm circumference) were collected according to the conventional methods of the WHO by the same examiner.The materials used were: an inextensible tape measure; a SECA™ brand electronic baby weighing scales, giving the weight at 10 g; a SECA™ brand electronic weighing scale giving the weight at 100 g; an infantometer for measuring the prone size in infants giving the size within 0.5 cm; a stadiometer for standing measurement graduated to 0.5 cm.The anthropometric data were managed with WHO ANTHRO © and WHO ANTHRO PLUS © software, which provides the z-scores for age and gender according to the WHO 2006 growth standards, which are available at http://www.who.int/childgrowth/software/en/.The WHO ANTHRO © software calculates z-scores for subjects aged 0 -59 months: weight for age (WAZ), HAZ, weight for height (WHZ), body mass index (BMI) for age (BAZ), head circumference for age (HCZ) and mid-upper arm circumference for age (MUACZ).WHO ANTHRO PLUS © calculates the z-scores of patients aged 60 months and older.WHO uses only three indices for those aged 60 -120 months: WAZ; HAZ; BAZ; and for children over 120 months of age, WHO uses only HAZ and BAZ indices. 
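The Schwartz formula quoted above can be evaluated directly, as in the sketch below. With the inputs stated in the text (p = 3% stunting prevalence, e = 5%, 95% confidence, t ≈ 1.96) it gives a minimum of about 45; whether the reported minimum of 93 includes further adjustments (for example a design effect or the matching ratio) is not spelled out, so the numbers here are purely illustrative.

```python
from math import ceil

def schwartz_sample_size(p, e=0.05, t=1.96):
    """Minimum sample size n = t^2 * p * (1 - p) / e^2 (Schwartz formula)."""
    return ceil(t ** 2 * p * (1 - p) / e ** 2)

print(schwartz_sample_size(p=0.03, e=0.05, t=1.96))
```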
The growth retardation is defined by a HAZ < -2, moderate acute malnutrition is defined by a WHZ or BAZ < -2, and severe acute malnutrition is defined by WHZ or BAZ < -3.At the end of the anthropometric measurements, only for SCA children, we proceeded aseptically to the collection of 2 mL of venous blood to realize a complete CBC with a Coulter SK™ hematology automaton. Ethical considerations and parental agreement This survey obtained authorization from the Directorate General of Health, as well as that of the Ministry of National Education before it took place.The inclusion of each case or control was subject to parental approval by signature of the informed consent form. Data management The data obtained were managed on EPI INFO 7. (CDC), and exported for a further or a counter-analysis on SPSS 20; the graphs were obtained using EXCEL 2010.Chi-square test was used to assess differences in categorical data between groups.We used Student's t-test for comparisons of means.The relationship between SCA and growth or nutritional status was established by calculating the odds ratio (OR).Relative risk (RR) was used to compare a risk factor between the genders.The relationship between stunting in SCA children and risk factors was assessed using Pearson "r" coefficient.A P value < 0.05 was considered significant. Results We retained 118 SCA subjects out of the 131 listed and seen; we matched them to 118 controls.Of the 236 children, there were 114 girls (48.3%) and 122 boys (51.7%), with a sex ratio of 1.07.The sample contained 140 children (59%) recruited in Libreville and 96 children (41%) in Lambarene. The mean age (in months) of all the children retained was 85.2 ± 46.8.The mean age of patients with SCA was 83.6 ± 46.7 months, while that of the controls was 87.3 ± 47.1 months (P = NS). The mothers of the children were on average 33.8 ± 7.6 years old.The educational attainment of the children's mothers was university diploma in 26% (n = 61), secondary school in 52% (n = 123) and 22% (n = 52) primary school or illiterate level.These mothers had a lucrative activity in 55% (n = 130), 31% (n = 73) were without income sources, and 14% (n = 33) were students. Among the children, 34% (n = 80) lived in a single-parent family, and 66% (n=156) in a couple.The average number of children at home was 4.02 ± 2, and the average rank of the children in the siblings was 3 ± 2.5. There was no SCA children who received zinc in his usual treatment, neither who had taken hydroxyurea. The means of the anthropometric parameters of the cases and controls, as well as their comparisons, are reported in Tables 1-3. We found no SCA children obese, and 2.5% (n = 3) obese in the population control.For growth, the risk of stunting for SCA children compared to controls was OR = 6.9 (95% CI: 5.4 -8.3).For malnutrition, the risk of acute malnutrition (moderate or severe) in SCA compared with a control subject was OR = 2.9 (95% CI: 2.3 -3.5). The RR of growth retardation in the SCA boys compared to the SCA girls was 2.04 (95% CI: 1.2 -2.7), the RR of moderate acute malnutrition in an SCA boys compared to an SCA girls was 1.04 (95% CI: 0.75 -1.3). The factors positively correlating with the growth retardation of the SCA children included are summarized in Table 4. 
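Odds ratios such as those reported above are computed from the 2×2 case-control counts; a minimal sketch with the standard Woolf (log) confidence interval is given below. The counts used are hypothetical, chosen only to illustrate the calculation, not the study's raw data.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf/log method) for a 2x2 table:
    a = SCA children with the outcome, b = SCA children without it,
    c = controls with the outcome,     d = controls without it."""
    or_ = (a * d) / (b * c)
    se_log = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
    return or_, (exp(log(or_) - z * se_log), exp(log(or_) + z * se_log))

# Hypothetical stunting counts among SCA cases vs. matched controls:
print(odds_ratio_ci(a=40, b=78, c=10, d=108))
```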
Discussion The anthropometric parameters of this survey show that there is a difference in the means of the two groups.The SCA patient is globally smaller and of lower weight than his non-SCA control for the same age and sex.Nevertheless, the differences observed are peculiar to each age group. In children younger than 60 months (Table 1), there was no statistical difference.Only the comparison of MUACZ means indicated a significant difference.MUACZ of SCA was smaller than that of controls.This fact underlines the sensitivity of the MUAC in children of this age group.The sensitivity of the brachial perimeter measurement has been demonstrated for a long time by Kanawati and McLaren, and reissued in the WHO 2006 standards for which the brachial perimeter is the necessary measure in mass surveys and food products crisis situations [11,12].A study carried out before the publication of the 2006 WHO growth standards showed that the brachial perimeter of SCA children was significantly smaller than that of controls [13]. For the head circumference, there was no difference in our results between HCZ of SCA children and HCZ of non-SCA children.This finding was similar to Senbajo et al's study, which led them to conclude that the growth patterns of the cranial perimeter of the general population can be used to monitor the cranial circumference of the sickle cell patient [14]. From 5 years of age (Tables 2 and 3), the Gabonese SCA patient was smaller than the child of the same sex and age without SCA, the average of HAZ scores at that age is lower up to 10 years.With growth, the SCA child becomes smaller and thinner than the rest of the general population.Cox et al had already made the same observation in Tanzania on a cohort of 1,041 children; adolescence is the period when the growth deficit of SCA kids increases [4].The results of Chawla et al from the USA, as well as Esezobor et al from Nigeria, also conclude that growth retardation in sickle cell disease increases with age [6,9].Growth is the physiological phenomenon that characterizes the child; the SCA child grows in a context of chronic energy malnutrition (permanent low oxygenation of tissues) which is characterized by upset growth.Moreover, this chronic malnutrition has a negative effect on the pubertal phase, depriving children from benefiting from the growth acceleration observed during this period, thus resulting in a widening gap at this age [4,6].Growth retardation rather than acute malnutrition was the disorder that has the highest risk of being observed in SCA.Lukusa Kazadi et al arrived at the same conclusion as us, finding growth retardation (HAZ ≤ -2) four times higher in SCA than in controls.The risk was lower when weight was considered, with a relative 1.2 higher risk in nmalnurished children [15].Osei-Yeboah et al had findings similar to ours, observing three times more growth retardation in Ghanaian sickle cell children than in their controls, and a non-significant difference in acute undernutrition [16].This may result from an adaptation of the metabolism of children with sickle cell child to chronic malnutrition.The SCA child could adapt its weight to the needs necessary, but also adequate to its metabolic capacities.Finally, the low sickle cell weight is compensated by his smaller size, thus giving a lower weight for height or BMI. 
Male gender was found in our survey to be associated with a higher risk of stunting.The same risk was also found to be statistically significant in the study of Aminasahun et al from Nigeria, and Lukusa Kazadi in the Democratic Republic of Congo (DRC) [15,17].Cox et al also pointed out that the situation was reversed if girls have more severe episodes, but the reasons why girls have better growth than boys remain uncertain [4]. Risk factors found in the analysis correlating with sickle cell growth retardation (Table 4) can be classified as inherent to the disease (intrinsic), its treatment and progression (number of transfusions and hematocrit) and extrinsic factors (socio-economic factors). For intrinsic factors, the correlation between multi-transfused status and growth retardation in SCA can be explained by the absence of a transfusion program in our context.In SCA child care protocols, transfusion is carried out in an emergency situation, this condition being established either gradually or quite rapidly in case of hemolysis due to an infectious cause.Our results are similar to those of Lukusa Kazadi et al in the DRC, who found an association among more than three transfusions with growth retardation in sickle cell disease [15].Low hematocrit was also found to be correlated with poor nutritional status or stunting in SCA in Emmanuelchide et al's studies in Nigeria and Nishkar et al in India [18,19].These authors concluded that this accessible and evaluable parameter is a good indicator and predictor of sickle cell general health status.Furthermore, administration of zinc supplementation to the sickle cell child showed apparent effects on their growth and matura-tion [20].This supplementation is not available in our regions.The introduction of this supplementation in our country, already in need of zinc for the treatment of diarrhea, would be a simple and effective instrument for the management of the sickle cell in a developing country, such as the use of hydroxyurea [5]. An unfavorable socio-economic context can be correlated with poor nutritional status or growth in SCA.Indeed, our results show that, even within this sickle cell population, unfavorable socio-economic conditions (mother with low educational level, a mother without income-generating activities, a head of family non-executive, large family) are associated with stunting.The authors of the DRC, India and Nigeria come to the same conclusion as we do: the sickle cell child in addition to his pathology also pays tribute to sinister socio-economic factors, like the whole population in which he lives [15,18,19].Only the Animashaun et al's survey in Nigeria in 2011 concluded that low socioeconomic status had an inverse (improving) effect on the nutritional status and hemoglobin level of sickle cell patients [17]. 
Conclusions

The results of this case-control study show that the growth retardation and acute malnutrition of sickle cell children are gradually established. The sickle cell child in his early childhood is similar to his control, but over the years becomes smaller than the non-sickle cell child. Obesity is not observed in SCA children in our country. The risk of stunting is greater than that of acute malnutrition. In sickle cell children, the factors correlating with linear growth retardation were age, male gender and factors reflecting a poorly controlled chronic disease. Unfavorable socio-economic conditions also hamper their anthropometric parameters. The WHO 2006 standards are useful and effective tools for screening the nutritional status of the SCA patient. Our findings lead us to recommend that the nutritional status of the SCA child should be a major concern from the age of 5 years onward. The introduction of measures that have been shown to be effective in other countries, such as zinc administration, dietary advice, easier access to hydroxyurea and transfusion programs, could improve the health status of these young patients and their growth.

Table 1. Means of z-Scores of Anthropometric Indices of Children Aged 0-59 Months Recruited.
Table 2. Means of z-Scores of Anthropometric Indices of Children Aged 60-119 Months Recruited. WAZ: weight for age; HAZ: height for age index; BAZ: body mass index for age; P: value of Student's t-test.
Table 3. Means of z-Scores of Anthropometric Indices of Children Aged ≥ 120 Months Recruited.
Table 4. Factors Positively Correlating With the Growth Retardation of the SCA Children Included.
MiR-101 and miR-144 Regulate the Expression of the CFTR Chloride Channel in the Lung The Cystic Fibrosis Transmembrane conductance Regulator (CFTR) is a chloride channel that plays a critical role in the lung by maintaining fluid homeostasis. Absence or malfunction of CFTR leads to Cystic Fibrosis, a disease characterized by chronic infection and inflammation. We recently reported that air pollutants such as cigarette smoke and cadmium negatively regulate the expression of CFTR by affecting several steps in the biogenesis of CFTR protein. MicroRNAs (miRNAs) have recently received a great deal of attention as both biomarkers and therapeutics due to their ability to regulate multiple genes. Here, we show that cigarette smoke and cadmium up-regulate the expression of two miRNAs (miR-101 and miR-144) that are predicted to target CFTR in human bronchial epithelial cells. When premature miR-101 and miR-144 were transfected in human airway epithelial cells, they directly targeted the CFTR 3′UTR and suppressed the expression of the CFTR protein. Since miR-101 was highly up-regulated by cigarette smoke in vitro, we investigated whether such increase also occurred in vivo. Mice exposed to cigarette smoke for 4 weeks demonstrated an up-regulation of miR-101 and suppression of CFTR protein in their lungs. Finally, we show that miR-101 is highly expressed in lung samples from patients with severe chronic obstructive pulmonary disease (COPD) when compared to control patients. Taken together, these results suggest that chronic cigarette smoking up-regulates miR-101 and that this miRNA could contribute to suppression of CFTR in the lungs of COPD patients. Introduction CFTR is a chloride channel that is primarily expressed at the apical surface of airway epithelial cells and is involved in the control of airway surface fluid homeostasis [1]. Absence of functional CFTR is known to cause Cystic Fibrosis with lungrelated problems being the leading cause of mortality [2]. CFTR expression can be regulated at the transcriptional and posttranscriptional levels. CFTR interacts with many proteins that can affect its stability, degradation, and/or processing [3]. On the other hand, few copies of CFTR mRNA have been found in airway epithelial cells [4] suggesting that translational repression and/or mRNA degradation would strongly impact the amount of CFTR protein. MicroRNAs (miRNAs) are short non-coding RNAs of about 22 nucleotides [5], which mainly function by translational repression and/or mRNA degradation by binding to the 39 Untranslated Region (UTR). Therefore, down-regulation of miRNAs will result in increased protein expression of the targeted gene(s) whereas upregulation of miRNAs will lead to suppression of the targeted protein(s). Deregulation of miRNAs has been found in many diseases including lung cancer and chronic obstructive pulmonary disease (COPD) [6,7]. Up to 30% of human protein coding genes may be regulated by miRNAs [8]. Some pathological conditions lead to the loss of certain miRNAs such as Let-7 members in cancer. A single miRNA can target several mRNAs and multiple miRNAs can target the same gene. It was only recently that CFTR was found to be regulated by miRNAs [9,10]. In this study, we investigated the effect of airway pollutants (cigarette smoke and cadmium) on miRNAs predicted to target CFTR in vitro in human airway epithelial cells as well as in vivo in the lung of smoke exposed mice and COPD patients. We also determined the role of miR-101 and miR-144 in regulating CFTR expression. 
Ethics Statement This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Institutional Laboratory Animal Care and Use Committee (ILACUC) of the Ohio State University (protocol number: 2007A0168-R1). Subjects and Sample Collection Human lung samples were obtained from the Lung Tissue Research Consortium (LTRC, NIH) approved project (Concept Sheet #09-99-0017). The LTRC Patients were classified into two groups based on lung function tests with GOLD 4 having an FEV 1 /FVC ,70%, FEV 1 ,30% predicted or ,50% normal with chronic respiratory failure, and GOLD 0 being asymptomatic with normal lung function. Animals and Lung Tissue Processing C57BL/6 female mice (8-10 weeks old) were used in accordance with the institutional animal welfare guidelines of the Ohio State University. Mice were subjected to smoke from 3 standard cigarettes/day (Camel brand), 5 days/week for 4 weeks using a Teague 10 smoke machine. This is approximately the equivalent of 60-90 minutes of smoke exposure per day. Mice were sacrificed and lungs were inflated for histology. Luciferase Assay The 39UTR (untranslated region) of CFTR was amplified by RT-PCR out of genomic DNA. The amplified products were subcloned into psiCHECK-2 vector (Promega, Madison, WI). In addition, we conducted mutagenesis of the seed sequence present in the 39UTR to prevent binding of the specific miRNAs. The mutations were confirmed by sequencing. HEK-293 cells were transfected with 50 ng of psiCHECK-CFTR or psiCHECK empty vector and either scrambled pre-miR, pre-miR-101, or pre-miR-144. Twenty four hours later, cells were assayed for both firefly and renilla luciferase using the dual luciferase glow assay (Promega, Madison, WI) and Victor TM X3 fluorescent plate reader (PerkinElmer, MA). Quantitative RT-PCR (qRT-PCR) Analysis Real-time quantitative RT-PCR was employed to measure the transcript levels of mature miR-101 and miR-144. RT-PCR was performed using TaqMan microRNA Reverse Transcription kit (Applied Biosystems, CA) following manufacturer's protocol and assayed on the Applied Biosystems 7900HT. The primers for miR-101, miR-144, and miR-145 were purchased from Applied Biosystems and U6 snRNA was used as endogenous control. Data are expressed as relative copy number (RCN), which was calculated using with the following equation: RCN = 2 -DCt x100 where DCt = Ct (target) -Ct (housekeeping gene) [13]. In situ Hybridization Detection of miRNAs in paraffin-embedded tissues was performed as previously described [14]. The locked nucleic acid (LNA) modified cDNA probe complementary to human mature miR-101 was used (Exiqon Inc, MA). The probes were labeled at the 59 end with digoxigenin by the manufacturer. The negative controls include omission of the probe and the use of a scrambled probe. Immunohistochemistry Our immunohistochemistry protocol has been previously published [15]. In brief, the antibodies were optimized by comparing no pretreatment, protease digestion, antigen retrieval, or antigen retrieval plus protease digestion using positive control tissues with the automated Benchmark LT platform (Ventana Medical Systems). The detection systems used were the Ultraview Fast Red and the Ultraview DAB based systems; hematoxylin served as the counterstain with each detection system. 
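As defined in the qRT-PCR section above, the relative copy number is RCN = 2^(−ΔCt) × 100, with ΔCt = Ct(target) − Ct(housekeeping gene, here U6). A one-line sketch with made-up Ct values, for illustration only:

```python
def relative_copy_number(ct_target, ct_reference):
    """RCN = 2**(-dCt) * 100, with dCt = Ct(target) - Ct(housekeeping gene, e.g. U6)."""
    return 2 ** -(ct_target - ct_reference) * 100

# Hypothetical Ct values (not measurements from this study):
print(round(relative_copy_number(ct_target=28.1, ct_reference=22.4), 2))  # ~1.9
```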
The optimal conditions for CFTR detection by immunohistochemistry was a dilution of monoclonal CFTR antibody (R&D Systems) at 1:200 with antigen retrieval for 30 minutes prior to immunohistochemistry. Statistical Analysis Data are expressed as mean 6 standard errors (SE) of at least three independent experiments. Statistically significant differences were assessed using Student's t-test. P values ,0.05 were considered significant. Cigarette Smoke and Cadmium Induce Up-regulation of miR-101 and miR-144 We previously showed that the air pollutants cigarette smoke and cadmium suppress the expression of the CFTR chloride channel in human airway epithelial cells [13,16]. We therefore exposed human bronchial epithelial (HBE) cells to cigarette smoke extract and cadmium for 24 hours. The expression of three miRNAs predicted to target CFTR (miR-101, miR-144, and miR-145) was determined. Exposure of HBE cells to cigarette smoke resulted in <80-and 4-fold increases of miR-101 and miR-144, respectively, while cadmium induced miR-101 and miR-144 by <40 and 6 fold (Fig. 1). Conversely, neither cigarette smoke extract nor cadmium increased the expression of miR-145 (Fig. 1). Expression of miR-144 and miR-101 Suppresses CFTR Protein in HBE Cells Since miR-101 and miR-144 are predicted to target the CFTR gene, we evaluated the effect of these miRNAs on the expression of CFTR protein. We therefore transfected each miRNA as a precursor (premiR) in HBE cells that constitutively express the CFTR protein. Transfection with premiR-101 or premiR-144 resulted in suppression of the CFTR protein as observed in Figure 2A. The expression of mature miR-101 and miR-144 was confirmed by quantitative RT-PCR. Mature miR-101 and miR-144 could be detected six hours post-transfection and were still highly expressed 48 hours after transfection ( Fig. 2B and data not shown). MiR-101 and miR-144 Target CFTR 39UTR In order to confirm that miR-101 and miR-144 directly target CFTR, the CFTR 39UTR was subcloned into the reporter psiCHECK-2 vector. As indicated in Figure 3, expression of miR-101 reduced the reporter activity by <40%. Similarly, overexpression of miR-144 resulted in <30 and 50% decrease in reporter activity when cells were transfected with 30 and 60 nM of pre-miR-144, respectively (Fig. 4). In order to validate the binding specificity, the seed sequence for CFTR 39UTR was mutated (Figs. 3 and 4). Mutations in the CFTR 39UTR (CFTR 39UTR Mut: GU to CA) eliminated the effect of miR-101 and -144 on reporter activity (Figs. 3 and 4). MiR-101 is Overexpressed in the Lung of Mice Subjected to Cigarette Smoke Taken together, our results above show that air pollutants (such as cigarette smoke and cadmium) induce up-regulation of miRNAs that target CFTR resulting in suppression of CFTR protein in airway epithelial cells in vitro. In order to determine whether such phenomenon can be observed in vivo, mice were subjected to cigarette smoke for four weeks. We focused on miR-101 since this miRNA was the most highly up-regulated by cigarette smoke in vitro. MiR-101 (purple staining) was found to be highly expressed in bronchial epithelial cells and in alveolar cells in the lung of mice subjected to cigarette smoke when compared to mice exposed to filtered air (Fig. 5A). Since we previously showed that miR-101 targets CFTR, we next investigated the expression of the CFTR protein. 
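The statistical comparisons described above rely on Student's t-test applied to at least three independent experiments. A minimal sketch of such a comparison is shown below with hypothetical values; in practice one might also log-transform fold changes or apply Welch's correction when variances differ strongly.

```python
from scipy import stats

# Hypothetical miR-101 relative copy numbers in control vs. smoke-exposed
# cultures, chosen only to illustrate the test, not actual measurements.
control = [1.1, 0.9, 1.3, 1.0, 1.2]
exposed = [78.0, 85.0, 92.0, 74.0, 81.0]

t_stat, p_value = stats.ttest_ind(exposed, control)
print(f"t = {t_stat:.1f}, p = {p_value:.1e}")  # p < 0.05 is taken as significant
```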
We found that CFTR protein (brown staining) was greatly reduced in the lung of mice subjected to cigarette smoke as observed by immunohistochemistry and more specifically in bronchial epithelial cells (Fig. 5B). MiR-101 is Highly Expressed in the Lung of COPD Patients We recently reported that CFTR protein was suppressed in the lung of COPD patients with a history of smoking [16]. Therefore, we investigated whether miR-101 was upregulated in the lung of these COPD patients. As observed in Figure 6, miR-101 (purple staining) was strongly expressed in bronchial epithelial cells in patients with severe COPD (GOLD 4) when compared to control patients (GOLD 0). Discussion CFTR is a chloride channel that plays a critical role in maintaining fluid homeostasis in the lung. Thus, mutations in the cftr gene that result in absence or malfunction of the CFTR protein lead to cystic fibrosis, a disease characterized by impaired mucus clearance, chronic infection and inflammation. We recently reported that air pollutants, such as cigarette smoke and cadmium, reduce the expression of the CFTR protein in vitro in airway epithelial cells [13]. Here we show that cigarette smoke and cadmium induce up-regulation of two miRNAs that target CFTR, reducing its expression in human airway epithelial cells. We further show that miR-101 is up-regulated in the lung of mice subjected to cigarette smoke and in COPD patients. There is increasing evidence that airway pollutants such as cigarette smoke suppress the expression of the CFTR protein [17,18]. We and Bodas et al., recently showed that CFTR is suppressed in the lung of COPD patients suggesting that reduced expression of CFTR could contribute to the development of this disease [16,19]. Here we show that cigarette smoke and the toxic metal cadmium induce up-regulation of specific miRNAs that target CFTR. Gillen et al. recently reported that CFTR can be regulated by several miRNAs including miRNA-144 but did not observe any effect of miR-101 on CFTR [10]. The discrepancy in the results could be due to the model used; human colon cancer cells versus human bronchial epithelial cells. It is therefore possible that expression and regulation of miRNA-101 is cell-type specific but also depends on the disease state (normal or cancerous). Interestingly, miR-101 was reported to play a role in inflammation by targeting MAPK phosphatase-1 (MKP-1), a dual specific phosphatase that deactivates MAPKs, which functions as a negative regulator of the innate immune system [20,21]. We can speculate that high expression of miR-101 observed in the lung samples could contribute to the sustained activation of Erk1/2 (phosphoErk1/2) observed in COPD patients [22] due to lack of dephosphorylation by MKP-1. Regarding miR-144, this miRNA has been found to be elevated in cancer [23][24][25], and was recently identified to be among the top three miRNAs up-regulated in the lung of COPD patients [7]. MiR-101 and miR-144 target the same region of CFTR 39UTR and share the same seed sequence indicating that these two miRNAs do not act synergistically or additionally. On the other hand, the fact that both miR-101 and miR-144 target the same region suggests that this 39UTR region is highly regulated by miRNAs. Cigarette smoke and cadmium similarly affected two of the three miRNAs investigated in this study, all predicted to target CFTR. Both pollutants increased miR-101 and miR-144 but had no effect on miR-145. 
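Returning to the reporter constructs described above: a canonical seed match can be located computationally by scanning a 3′UTR for the reverse complement of miRNA nucleotides 2-8, which is also the region altered in the mutated (GU to CA) construct. The sequences below are placeholders, not the annotated mature miR-101/miR-144 or the actual CFTR 3′UTR, and only perfect Watson-Crick 7-mer matches are reported.

```python
def seed_site_positions(utr, mirna, seed_len=7):
    """Find perfect seed matches (miRNA nt 2..(1+seed_len)) in a 3'UTR.

    Both sequences are RNA written 5'->3'; the UTR is scanned for the
    reverse complement of the seed (canonical Watson-Crick match).
    """
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:1 + seed_len]
    site = "".join(comp[b] for b in reversed(seed))
    return [i for i in range(len(utr) - len(site) + 1) if utr[i:i + len(site)] == site]

# Placeholder sequences for illustration only:
mirna = "UACAGUACUGUGAUAACUGAA"
utr   = "AAGCUUGUACUGUAAUCGGAAACUGUACUGUAGCC"
print(seed_site_positions(utr, mirna))   # positions of candidate seed sites
```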
Since cadmium is a contaminant of cigarette smoke, it is possible that cadmium present in cigarette smoke was responsible for the up-regulation of miR-101 and miR-144. Interestingly, the cytokine IL-17A was recently identified to up-regulate miR-101 via activation of the Akt pathway in cardiac fibroblasts [21]. Since both cigarette smoke and cadmium activate the Akt pathway [26][27][28], it is possible that up-regulation of miR-101 occurs via a similar pathway in the lung. Taken together, our results indicate that up-regulation of miR-101 and/or miR144 could contribute to the suppression of CFTR observed in COPD patients. In addition, Clunes et al. recently showed that exposure of primary airway epithelial cells to shortterm cigarette smoke lead to mucus dehydration [17]. Therefore, up-regulation of miR-101 by cigarette smoke or cadmium could affect lung fluid homeostasis and therefore mucus clearance by suppressing CFTR but also immune responses by preventing dephosphorylation of MAPKs due to inhibition of MKP-1. Future studies need to be done to investigate the effect of smoking cessation on CFTR expression and miRNAs regulating its expression. Our study highlights the role of miRNAs as genetic modifiers that may contribute to chronic bronchitis by altering expression of CFTR that regulates lung epithelial surface hydration. Paraffin-embedded, formalin-fixed lung tissues were incubated with an LNA probe anti-miR-101 (purple staining), or scrambled probe as previously described [12]. (B) CFTR protein (brown staining) was detected by immunohistochemistry as described in methods section. Arrows show the bronchial epithelium. The images are representative of 3-4 mice for each condition. doi:10.1371/journal.pone.0050837.g005
Line Laser Scanning Combined with Machine Learning for Fish Head Cutting Position Identification Fish head cutting is one of the most important processes during fish pre-processing. At present, the identification of cutting positions mainly depends on manual experience, which cannot meet the requirements of large-scale production lines. In this paper, a fast and contactless identification method of cutting position was carried out by using a constructed line laser data acquisition system. The fish surface data were collected by a linear laser scanning sensor, and Principal Component Analysis (PCA) was used to reduce the dimensions of the dorsal and abdominal boundary data. Based on the dimension data, Least Squares Support Vector Machines (LS-SVMs), Particle Swarm Optimization-Back Propagation (PSO-BP) networks, and Long and Short Term Memory (LSTM) neural networks were applied for fish head cutting position identification model establishment. According to the results, the LSTM model was considered to be the best prediction model with a determination coefficient (R2) value, root mean square error (RMSE), mean absolute error (MAE), and residual predictive deviation (RPD) of 0.9480, 0.2957, 0.1933, and 3.1426, respectively. This study demonstrated the reliability of combining line laser scanning techniques with machine learning using LSTM to identify the fish head cutting position accurately and quickly. It can provide a theoretical reference for the development of intelligent processing and intelligent cutting equipment for fish. Introduction The main process in fish processing includes scaling, gutting, cleaning, and head/tail cutting, where head removal is an important part of cutting planning and directly affects the processing quality and meat yield [1].The main types of head cutting processes are manual and mechanical [2].The manual method is time-consuming and laborious, with low processing efficiency and high skill requirements for the processors [3], which cannot be adapted to the needs of short-term and high-volume production of bulk fish.Due to the biodiversity of fish, even the size of the heads of the same specification and the same batch of raw material varies greatly, and existing mechanical cuts are processed according to a pre-set cutting position [2], which is unsatisfactory in terms of reducing meat yield and non-compliance with handling.To be specific, if the processing volume is set too large, it will lead to a lower meat yield and waste; if the processing volume is set too small, the cutting tool will be easily damaged by cutting at the gill cover, resulting in cutting failure and even causing failure of the entire equipment operation.Therefore, how to achieve automatic identification of the fish head position so as to control the accurate cutting of the fish head is a problem that needs to be urgently solved for flexible, intelligent, and efficient processing of bulk fish. 
Materials

This study was carried out on crucian carp. The experimental samples were purchased from Dalian Wholesale Fish Market, China. A total of 204 crucian carp were randomly selected and placed in a thermostat with ice for rapid transportation back to the laboratory. The parameters of the fish body shape were defined as shown in Figure 1: the distance between the mouth and the trailing edge of the gill cover was defined as the head length (Figure 1a); the maximum distance from the front of the mouth to the end of the tail fin was defined as the total length (Figure 1b); the maximum distance between the dorsal and ventral parts of the fish was defined as the maximum width (Figure 1c), with the cut line passing through this location taken as the ideal cut line for the head; and the height of the highest point of the fish above the level of the conveyor belt when the fish was placed horizontally was defined as the maximum thickness (Figure 1d). The statistics of the manual measurements of the 204 samples are shown in Table 1.
Data Acquisition System

A fish data information laser scanning system was used to collect 3D contour information of the fish body. As shown in Figure 2b, the system mainly consisted of a line laser scanning sensor (LLT-2600 scanCONTROL 2D/3D, Micro-Epsilon, Ortenburg, Germany), a drive mechanism, a dark box, and a data processing unit. The laser scanning sensor scanned 640 points at a time, with a scanning frequency of 300 Hz, and outputted the measurement results via Ethernet (Modbus TCP protocol). The drive mechanism consisted of a conveyor belt and a drive unit, which can transport the material horizontally and linearly under the drive of a servo motor; the motor speed was set at 6.6 rpm. The dark box was equipped with two groups of strip light sources, which were placed on both sides of the dark box, as shown in Figure 2b; the data processing unit was controlled by a computer, which was used to realize pre-processing such as data segmentation and filtering of the collected raw fish body information (Figure 2c).
Data Acquisition Process The laser sensor was calibrated to send a laser ray vertically downwards, with the line laser perpendicular to the conveyor belt transport direction.The laser scanning schematic is shown in Figure 3a,b; the intersection points between the vertical line of the laser source and the horizontal plane where the conveyor belt surface was located was defined as the O point, the height was in the Z direction, the laser direction was defined as the Y direction, and the conveyor belt conveying the reverse direction was defined as the X direction.During data acquisition, the head of the fish was orientated in the same direction as the movement direction, and when the sample triggered the timing procedure, the laser sensor started scanning the fish to obtain the 3D point cloud information of the surface contour of the fish.Due to the huge amount of real-time data obtained by laser scanning, the sampling frequency was set to 0.42 Hz without affecting the accuracy of the calculation, and the obtained contour information was actually a number of point cloud data containing the contour information of the fish body cross-section (Figure 3c), which was stored in the form of an array. Data Validity Discernment As shown in Figure 3b, since the width of the line laser is larger than the fish it scans, the interference data formed on the surface of the conveyor belt on both sides of the fish body will inevitably be collected simultaneously during the process of acquiring the fish body contour data.In this study, the threshold segmentation method was used to remove the interfering data while retaining useful information on the surface of the fish body for subsequent processing.The calculation process of the threshold segmentation method is as follows. [ Data Validity Discernment As shown in Figure 3b, since the width of the line laser is larger than the fish it scans, the interference data formed on the surface of the conveyor belt on both sides of the fish body will inevitably be collected simultaneously during the process of acquiring the fish body contour data.In this study, the threshold segmentation method was used to remove the interfering data while retaining useful information on the surface of the fish body for subsequent processing.The calculation process of the threshold segmentation method is as follows. [ where [M] represents the array after threshold segmentation.[M 2 ] represents the array of radial sections of the fish body.[M 1 ] and [M 3 ] represent the radial cross-sectional array of the conveyor belt on both sides of the fish body.T represents the segmentation threshold.∆h 1 represents the absolute value of the difference between adjacent elements' height value of the right endpoint of [M 1 ] and height value of the left endpoint of [M 2 ]. ∆h 2 represents the absolute value of the difference between adjacent elements' height value of the right endpoint of [M 2 ] and height value of the left endpoint of [M 3 ]. 
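A minimal sketch of the threshold rule described above is given below, assuming heights in millimetres and a single fish per radial profile; the threshold value T and the synthetic profile are assumptions for illustration, not the study's settings.

```python
import numpy as np

def segment_fish_profile(heights, threshold_mm=2.0):
    """Split one radial profile [M] into belt/fish/belt segments.

    Mirrors the idea behind the segmentation equations: the [M1]/[M2] and
    [M2]/[M3] boundaries are placed where the height difference between
    adjacent points first/last exceeds the segmentation threshold T.
    """
    h = np.asarray(heights, dtype=float)
    dh = np.abs(np.diff(h))
    idx = np.where(dh > threshold_mm)[0]
    if idx.size < 2:
        return None                                  # no fish found in this profile
    left, right = idx[0] + 1, idx[-1] + 1
    return h[:left], h[left:right], h[right:]        # [M1], [M2], [M3]

# Hypothetical profile: flat belt at ~0 mm with a fish cross-section in the middle.
profile = [0.1] * 20 + list(np.linspace(8, 30, 15)) + list(np.linspace(30, 8, 15)) + [0.1] * 20
belt_left, fish, belt_right = segment_fish_profile(profile)
print(len(belt_left), len(fish), len(belt_right))
```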
Data Filtering
The absorption and reflection of light by the fish itself, the influence of external lighting, and the vibration of the conveyance mechanism during movement can cause high-frequency fluctuations in the data and form noise [22]. Since the Kalman filter has a good suppression effect on the random fluctuation noise in the data, and the median filter can reduce the fluctuation range of the data, the Kalman filter and the median filter were adopted to denoise the data. In this study, the covariance of the system noise of the Kalman filter was set to 0.0001, the covariance of the measurement noise was set to 0.1, the covariance of the system noise was set to 1, and the left and right rank of the median filter were set to 2 and −1, respectively.
Fish Head Cut Position Identification
2.3.1. Feature Extraction
As shown in Figure 3d, the fish head cut position is on the contour line composed of the highest points of the radial section data of the fish body, namely the boundary between the abdomen and the back of the fish body, which is defined as the ventral-dorsal demarcation line of the fish body in this study. The sampled data on the ventral-dorsal demarcation line of the fish body were taken as input and the real value of the fish head cut position was taken as output to construct the fish head cut position identification model and to achieve the prediction of the fish head cut position. The volume of data on the ventral-dorsal demarcation line is large, and there is a strong correlation between some data points, making for a large amount of redundant information and affecting the calculation accuracy. Principal Component Analysis (PCA) is an unsupervised machine learning algorithm that can transform multiple variables into a few composite variables, eliminating redundant information and reducing computational effort [23]. In this study, PCA was chosen to reduce the dimensionality of the ventral-dorsal divide, using a few principal components instead of the entire ventral-dorsal divide data. As shown in Equations (4)-(6), the collected ventral-dorsal dividers were transformed by the Z-score method to standardize them, as a way to eliminate the difference in magnitude and the difference in order of magnitude between different indicators. The correlation coefficient matrix between the independent variables was solved by using the standardized data matrix, and the characteristic roots were obtained according to the characteristic equation of the correlation coefficient matrix, as shown in Equation (8), with the cumulative contribution of the variance at 95% used to determine the extracted principal components for subsequent studies. As shown in Equations (9) and (10), the indicator coefficient matrix of each principal component was obtained according to the component matrix, U_i = P_i / λ_i (9), and multiplied with the standardized data matrix to obtain the principal component values, where z_j represents the mean value of the ventral-dorsal divide, s_j represents the standard deviation of the ventral-dorsal divide, z̃_ij represents the standardized data, Z_ij represents the standardized matrix combined by column, D represents the correlation coefficient matrix, λ represents the eigenvalues of this matrix, P_i represents the component matrix, U_i represents the index coefficient matrix, and F_i represents the principal component values of the ventral-dorsal divide of the fish.
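A minimal scikit-learn sketch of the standardization and PCA step just described: X is assumed to hold one resampled ventral-dorsal demarcation line per fish, and keeping components up to 95% cumulative variance follows the text; everything else (names, shapes) is illustrative.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

def extract_principal_components(X, var_target=0.95):
    """X: (n_samples, n_points) matrix, one resampled ventral-dorsal line per fish."""
    Z = StandardScaler().fit_transform(X)   # Z-score standardization of each column
    pca = PCA(n_components=var_target)      # keep enough components for 95% cumulative variance
    F = pca.fit_transform(Z)                # principal component scores F_i used as model inputs
    return F, pca.explained_variance_ratio_
```

On the data described here this should return three components, consistent with the cumulative variance reported in Table 2.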
Establishment of Fish Head Cut Position Identification Models
The principal component values of the 204 fish body samples after PCA dimensionality reduction were used as model inputs and the length of the fish head was the output to construct the LS-SVM, PSO-BP, and LSTM models for the identification of the cutting position of the fish head. Among them, a total of 154 samples were randomly selected as the training set, and the remaining 50 samples were used for the testing set.
LS-SVM Model
The LS-SVM maps linearly inseparable data to a high-dimensional feature space by means of a constructed kernel function that makes the data separable in the feature space [24,25]. Given that the Gaussian radial basis function (RBF) is capable of non-linear mapping and has fewer parameters, the RBF kernel function was chosen to construct the LS-SVM model in this study. The penalty parameter sig2 and the radial basis kernel parameter gam are two important parameters of the RBF kernel function, which are closely related to the accuracy and generalization ability of the model [26]. In this study, the particle swarm algorithm (PSO) was used to find the best values for the above two parameters, with the maximum number of iterations taken as 100, the search range of sig2 being 0.1 to 100, and the search range of gam being 0.01 to 100.
PSO-BP Model
The BP neural network is an error backpropagation algorithm, and its learning process can be summarized as signal forward propagation, error backpropagation, and weight and threshold updates. The network includes input, hidden, and output layers. Using the gradient descent method, the weights and thresholds between different network layers are adjusted inversely by comparing the model output values with the expected values to reduce the error along the gradient direction [19]. The approximate solution that satisfies the error accuracy is sought through several iterations, and its structure is shown in Figure 4. In a traditional BP network, it is easy to fall into local optimal solutions in the training process, which makes the final model accuracy too low [17]. To avoid this problem, this study used the PSO algorithm with global search ability to optimize the network weights of the BP neural network. Figure 5 shows the structure of the PSO-BP neural network built in this study, where the inputs are three independent variables and the number of hidden layers was ten; the parameters of the neural network were set as follows: the number of iterations was 1000, the training objective was 0.00001, and the learning rate was set to 0.09. The PSO parameters were set as follows: learning factor c1 = c2 = 2, inertia weights ωmax = 0.9 and ωmin = 0.3, with a maximum iteration of 200.
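LS-SVM is not part of scikit-learn, so the following is a minimal numpy sketch of the LS-SVM regression described at the start of this subsection, using the standard formulation in which a single linear (KKT) system is solved instead of a quadratic program. In this sketch gam plays the role of the regularization (penalty) term and sig2 the RBF bandwidth, which is the usual LS-SVMlab naming; the text above attaches the two labels the other way around, so only the roles, not the names, matter here. All function names are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, sig2):
    """Gaussian RBF kernel matrix with bandwidth parameter sig2."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / sig2)

def lssvm_fit(X, y, gam, sig2):
    """LS-SVM regression: solve the single KKT linear system instead of a QP."""
    n = len(y)
    K = rbf_kernel(X, X, sig2)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gam           # gam acts as the regularization term
    sol = np.linalg.solve(A, np.concatenate(([0.0], np.asarray(y, float))))
    return sol[1:], sol[0]                     # support values alpha, bias b

def lssvm_predict(X_train, alpha, b, sig2, X_new):
    return rbf_kernel(X_new, X_train, sig2) @ alpha + b
```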
LSTM Model
The LSTM model is a special type of Recurrent Neural Network (RNN), consisting of memory blocks that add input and output channels to the hyperbolic tangent function (tanh), which can correlate the feature data of each fish with each other and analyze their non-linear relationships [27]. The cell structure of the LSTM model is shown in Figure 6, where f_t, i_t, and o_t denote the forgetting layer, input layer, and output layer, respectively; x_t is the input of the current cell; C_(t−1) and h_(t−1) are the outputs of the last network cell; W and V are the weight matrices; b is the bias term; and σ is the sigmoid function layer. The LSTM network used in this study has an input layer with three inputs, a hidden layer activated by the tanh function, one forgetting layer, one fully connected layer, and one output layer. The gradient threshold size used was 1, the initial learning rate was 0.005, and the maximum number of iterations was 200.
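A minimal PyTorch sketch of an LSTM regressor with three inputs and one output, using the hyperparameters stated above where they are given (learning rate 0.005, gradient clipping at 1, and the dropout rate of 0.2 mentioned later in the paper). The choice of framework, the hidden size of 32, and feeding the three principal components as a short sequence are assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn

class HeadCutLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=32, dropout=0.2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.drop = nn.Dropout(dropout)
        self.fc = nn.Linear(hidden, 1)            # fully connected layer -> head-cut length

    def forward(self, x):                         # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.fc(self.drop(out[:, -1, :]))  # regression output from the last time step

model = HeadCutLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)
loss_fn = nn.MSELoss()
# inside the training loop, gradients are clipped at the threshold of 1 quoted above:
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
```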
Model Evaluation Metrics
In order to verify the identification effect of the model, the coefficient of determination (R²), the root mean square error (RMSE), the mean absolute error (MAE), and the relative analysis error (RPD) were used as evaluation indicators of the prediction model. R² is an important parameter in model evaluation: usually R² > 0.82 indicates that the method can be used for practical applications, and R² > 0.9 indicates that the model has excellent fish head cutting position identification. Smaller RMSE values represent better model prediction performance [28]. MAE can better reflect the actual situation of the prediction value error, and a smaller value represents a smaller prediction error [28,29]. RPD can intuitively reflect the prediction ability of the model; when RPD < 1.4, the model reliability is poor, and when RPD > 2.5, the model prediction is accurate and reliable [30,31]. The formulas of the above evaluation indexes are as follows:
Data Segmentation
As shown in Figure 7a, the initial point cloud data obtained after laser scanning included the conveyor belt contour and the fish surface contour point cloud data. The maximum horizontal height of the conveyor belt contour was 250.32 mm, and this part was removed according to the method described in Section 2.3.2 to achieve partial dimensionality reduction of the data and eliminate the interference information irrelevant to the fish contour (Figure 7b).
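Returning to the evaluation indices defined above: the displayed formulas were lost in extraction, so the following sketch simply implements the common definitions. In particular, RPD is computed here as the standard deviation of the reference values divided by the RMSE, which is an assumption about the convention the paper uses.

```python
import numpy as np

def r2(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

def rmse(y, yhat):
    return float(np.sqrt(np.mean((np.asarray(y, float) - np.asarray(yhat, float)) ** 2)))

def mae(y, yhat):
    return float(np.mean(np.abs(np.asarray(y, float) - np.asarray(yhat, float))))

def rpd(y, yhat):
    return float(np.std(np.asarray(y, float), ddof=1) / rmse(y, yhat))
```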
Data Filtering
The results of filtering the original data of the radial cross-section profile of the fish body are shown in Figure 8. As can be seen in Figure 8a, the original data had obvious noise and data fluctuation, and the radial cross-section profile of the fish body was not accurate. After adding the Kalman filter, the large-scale noise generated by system vibration was basically removed, indicating that it has a good suppression effect on the noise generated by random fluctuations in this study (Figure 8b), and the radial profile curve was further refined. However, due to the remaining data fluctuations, the profile curve was still not smooth and complete. After adding the median filter, the range of fluctuations of the original data was reduced (Figure 8c), and the radial profile curve gradually became continuous and complete. When the data were subjected to the Kalman and median filters in turn, the contour curve became smooth and the high-frequency noise was obviously improved (Figure 8d), which is closer to the real contour curve. Therefore, in this study, the Kalman filter and the median filter were used together for the pre-processing of the radial contour of the fish body.
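A minimal 1-D sketch of the two-stage denoising evaluated above: a scalar constant-state Kalman filter followed by a median filter (scipy). The first two covariances quoted in Section 2 (0.0001 and 0.1) are used as the process and measurement noise; treating the third quoted value (1) as the initial error covariance is an assumption, and the median window of 5 is illustrative because the "left and right rank" setting in the text does not map directly onto scipy's medfilt.

```python
import numpy as np
from scipy.signal import medfilt

def kalman_1d(z, q=0.0001, r=0.1, p0=1.0):
    """Scalar constant-state Kalman filter.
    q  : system (process) noise covariance, 0.0001 as quoted in the text
    r  : measurement noise covariance, 0.1 as quoted in the text
    p0 : initial error covariance (assumed interpretation of the value 1)"""
    z = np.asarray(z, dtype=float)
    x, p = z[0], p0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                  # predict
        g = p / (p + r)            # Kalman gain
        x = x + g * (zk - x)       # update with the new height measurement
        p = (1.0 - g) * p
        out[k] = x
    return out

def denoise_profile(heights, kernel_size=5):
    """Kalman filter followed by a median filter, as in the pipeline above."""
    return medfilt(kalman_1d(heights), kernel_size=kernel_size)
```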
Fish Head Cut Position Identification
3.2.1. Extraction of Ventral-Dorsal Demarcation Line
For the fish body, the thickness of the ventral-dorsal part of the fish decreased slowly, and the thickness of the tail part of the fish decreased rapidly. The maximum point of the thickness of the fish was at the ventral-dorsal part, and the position of the head cut was before the maximum point of the thickness. The ventral-dorsal demarcation line of a fish is a cloud of points consisting of the maximum values of all radial cross-section heights of the fish body. The ventral-dorsal demarcation lines of 204 fish bodies are shown in Figure 9. It illustrates that although the size of the samples was not uniform, the corresponding ventral-dorsal demarcation lines had the same trend of changes, increasing rapidly and reaching the maximum value before about one-third of the total length, and then beginning to decrease. As shown in Figure 9, the ventral-dorsal demarcation line could reflect the change law of the radial thickness of the fish body, which can be used as the basis for fish head cutting. Taking the ventral-dorsal demarcation line as input, three supervised machine learning methods (LS-SVM, PSO-BP, and LSTM) were used to train and predict the model, respectively, to achieve the identification of the fish head cutting position.
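Since the demarcation line is defined above as the set of per-section height maxima, a minimal sketch only needs one maximum per filtered radial profile. Resampling each line to a fixed length before standardization and PCA is an assumption: the text does not say how lines of different lengths are made comparable across fish.

```python
import numpy as np

def ventral_dorsal_line(profiles):
    """profiles: list of 1-D arrays, one denoised radial cross-section per laser scan.
    The demarcation line is the maximum height of each section."""
    return np.array([p.max() for p in profiles if len(p) > 0])

def resample_line(line, n_points=100):
    """Resample a demarcation line to a fixed length so that every fish
    yields a feature vector of the same size before standardization and PCA."""
    x_old = np.linspace(0.0, 1.0, num=len(line))
    x_new = np.linspace(0.0, 1.0, num=n_points)
    return np.interp(x_new, x_old, line)
```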
Data Dimensionality Reduction of Abdominal and Dorsal Dividing Lines
In order to reduce the volume of data for the identification of fish head cut features and to achieve rapid identification of fish head cut locations, PCA was used to reduce the dimensionality of the abdominal and dorsal dividing lines. Factor analysis was performed on the measured ventral-dorsal demarcation lines of the 204 samples, and the resulting feature values are shown in Table 2. The cumulative variance contribution of the first three principal components reached 95.080%, indicating that the first three principal components can significantly reflect 95.080% of the information of the original data. In addition, it can also be seen from the scree plot shown in Figure 10 that the trend of the first, second, and third eigenvalues was more obvious, and from the fourth eigenvalue onwards, the trend of the eigenvalues tended to be stable, so the first three principal components were taken for subsequent modeling.
Fish Head Cutting Position Identification Model
Taking the three principal component values obtained as the model input and the actual cut position of the fish head as the model output, all 204 samples were randomly divided into two groups, one group of 154 samples as the training set and the other group of 50 samples as the test set, to construct the LS-SVM, PSO-BP, and LSTM fish head cutting position recognition models.
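A one-line sketch of the 154/50 random split described above, using scikit-learn. F and y are placeholders for the principal-component scores and the measured head-cut positions; the fixed random_state is only for reproducibility of the sketch.

```python
from sklearn.model_selection import train_test_split

# F: (204, 3) matrix of principal component scores; y: measured head-cut positions
F_train, F_test, y_train, y_test = train_test_split(
    F, y, train_size=154, test_size=50, random_state=0)
```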
LS-SVM Model
In order to obtain an LS-SVM model with high prediction accuracy and stability, the two parameters of the RBF kernel function in the model, the penalty coefficient sig2 and the kernel parameter gam, need to be optimized. The results of the parameter search using PSO showed that the LS-SVM model had the best recognition effect when the penalty parameter sig2 = 0.01 and the optimal kernel parameter gam = 15.3284; the fitness curve is shown in Figure 11, from which it can be seen that, after 29 iterations, the fitness curve started to smooth out and the validation parameters reached the optimal solution. The recognition results of the LS-SVM model for the fish head cutting position are shown in Figure 12. The R²c, RMSE, and MAE of the training set were 0.9125, 0.2622, and 0.1857, respectively, and the R²p, RMSE, MAE, and RPD of the test set were 0.9094, 0.8548, 0.6123, and 2.4041, respectively. The results showed that its R²p was more than 0.9, which indicates that the LS-SVM model has good generalization ability and performance, and the RMSE and MAE were low, which represents fewer outliers and errors in the results predicted by the model, but its RPD value was less than 2.5, indicating that the model is not stable enough and has limited identification capability.
PSO-BP Model
The connection weights of each layer of the BP neural network were encoded into particles, and the PSO algorithm was used to search for the optimal network weights within the set number of iterations, in which the population size of the particle swarm was set to 20 to prevent overconsumption of computational resources. In order to avoid the particle speed growing too fast or too slow, the velocity range was set to [−1, 1], and, to ensure that the solution stays within a reasonable range of the particle position and does not overstep the boundaries, the position range was set to [−2, 2]. The training results of the PSO-BP neural network for fish head cutting position identification are shown in Figure 13. After the computation of the PSO-BP model, the model's recognition results for the fish head cutting position are shown in Figure 14. The R²c, RMSE, and MAE of the training set were 0.9168, 0.5203, and 0.3277, respectively, and the R²p, RMSE, MAE, and RPD of the test set were 0.9295, 0.5126, 0.3143, and 2.513, respectively. Compared with the LS-SVM model, its larger R²p value represents a better model performance, its smaller RMSE and MAE represent a further reduction in the prediction error of the PSO-BP model, and its RPD was more than 2.5, which indicates that its recognition has strong reliability and is suitable for the recognition of the ideal cutting position of fish heads.
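Both the LS-SVM parameter search and the PSO-BP weight search rely on a standard particle swarm optimizer. The following is a minimal, self-contained sketch with a linearly decreasing inertia weight, using the settings quoted in the text where they are given (c1 = c2 = 2, inertia from 0.9 down to 0.3, velocity clipped to [−1, 1], swarm size 20); the objective function, the number of iterations, and the example bounds are placeholders the caller must supply.

```python
import numpy as np

def pso_minimize(fitness, bounds, n_particles=20, n_iter=100,
                 c1=2.0, c2=2.0, w_max=0.9, w_min=0.3, v_clip=1.0):
    """Minimal particle swarm optimizer. `bounds` is a list of (low, high) per dimension."""
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for t in range(n_iter):
        w = w_max - (w_max - w_min) * t / n_iter            # linearly decreasing inertia weight
        r1, r2 = rng.random((2, n_particles, dim))
        v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x), -v_clip, v_clip)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, float(pbest_f.min())

# illustrative use for the sig2/gam search (validation_rmse is a placeholder the caller supplies):
# best, _ = pso_minimize(lambda p: validation_rmse(sig2=p[0], gam=p[1]),
#                        bounds=[(0.1, 100.0), (0.01, 100.0)], n_iter=100)
```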
LSTM Model
During the LSTM training, the forgetting, input, and output layers were activated by sigmoid functions, and the entire output data range was transformed into the [0,1] interval to keep the data normalized. In the model-building process, if a neuron parameter produces large volatility, the overall fit of the model will be biased towards that neuron; therefore, during the training process, to reduce the impact of overfitting on the prediction model, dropout regularization was added, and the dropout rate was taken as 0.2. As can be seen from Figure 15, during the 200 iterations there was a brief upward trend in the loss function of the training and test sets, followed by a rapid decline, which slowed down during the subsequent iterations, indicating that the LSTM output values fit the true values better and better. The results of the LSTM model for the identification of fish head cutting positions are shown in Figure 16. The R²c, RMSE, and MAE of the training set calculated by the LSTM model were 0.9705, 0.1964, and 0.1477, respectively, and the R²p, RMSE, MAE, and RPD of the test set were 0.9480, 0.2957, 0.1933, and 3.1426, respectively. Among the analyzed models, the LSTM model's R²p reached up to 0.9480, giving the best generalization ability and performance, and it had a lower RMSE and MAE, which represents a minimal and stable prediction error, and a larger RPD value of more than 2.5, which indicates that its identification has strong reliability and is suitable for the recognition of the ideal cutting position of fish heads.
In the present study, the interference information and random fluctuation noise in the original data were processed by threshold segmentation and Kalman and median filtering, respectively, which accurately restored the radial profile curve of the sample. The principal component value of the ventral-dorsal demarcation lines after PCA dimensionality reduction can be used as the feature information for
identifying the cutting position of the fish head. In the established LS-SVM, PSO-BP, and LSTM fish head cutting position identification models, the overall performance of the LSTM model was the best: the error between the predicted value and the actual value of the model was the smallest, and the reliability was also high. This shows that line laser scanning technology combined with machine learning has potential for fish head cutting position recognition. At the same time, identification based on the information of the ventral-dorsal dividing line proved to be effective. The ventral-dorsal demarcation line is a line composed of the highest points in each radial profile section, which is identified by the morphological characteristics of the fish. It is not limited by posture or individual size, so the accuracy of the identification method based on the ventral-dorsal demarcation line will not be affected by the placement and size of the fish. The spindle-shaped and flat-hammer-shaped fishes with a morphology similar to that of the samples used in this study (crucian carp) all have a ventral-dorsal dividing line, and therefore the method proposed in this study could have strong applicability to bulk fish with similar shapes and head-removing needs, such as grass carp, silver carp, etc., and even high value-added fish such as salmon and tuna. With a further increase in sample size, the accuracy and generalization of the identification model could also be continuously improved. In addition, due to the high precision and efficiency of line laser scanning, the application of this method is more conducive to the on-line detection of bulk fish. However, in actual large-scale fish processing, when two or more fish are stacked up and placed, it is hard to extract the ventral-dorsal demarcation line of an individual sample fish, so the method should be applied to situations where raw materials are scanned one by one.
Figure 2. Fish head cutting position determination flow chart: (a) sample; (b) data acquisition; (c) data pre-processing; (d) feature extraction; the different colored lines represent the ventral-dorsal dividing line extracted from different samples; (e) model building; (f) evaluation; (g) fish head removal.
Figure 7. Upper surface data collection of fish: (a) laser point cloud data collection; (b) fish surface data after segmentation. The different colored lines represent the results of a single line laser scan and consist of multiple points that form a radial profile.
Figure 9. Abdominal and dorsal dividing lines. The different colored lines represent the ventral-dorsal dividing line extracted from different samples.
Figure 10. Scree plot extracted based on the covariance matrix.
Figure 12. Comparison diagram of predicted and measured fish head length based on LS-SVM.
Figure 14. Comparison diagram of predicted and measured fish head length based on PSO-BP.
Figure 15. The loss function of training iterations and the target value of the LSTM model.
Figure 16. Comparison diagram of predicted and measured fish head length based on LSTM.
Table 1. Index statistics of fish sample data.
Table 2. Total variance of interpretation.
2023-12-21T16:45:52.719Z
2023-12-01T00:00:00.000
{ "year": 2023, "sha1": "36d951f8f73c5bd40bc5ecc2bda8d829edfde4dd", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2304-8158/12/24/4518/pdf?version=1702913413", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0408f405196ee5fb8d3494c3be46dc4eea595cb0", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
218920488
pes2o/s2orc
v3-fos-license
Schur indices for noncommutative reality-based algebras with two nonreal basis elements This article discusses the representation theory of noncommutative reality-based algebras with a positive degree map over their field of definition. When the standard basis contains exactly two nonreal elements, the main result expresses the noncommutative simple component as a generalized quaternion algebra over its field of definition. The field of real numbers will always be a splitting field for this algebra, but there are noncommutative table algebras of dimension $6$ with rational field of definition for which it is a division algebra. The approach has other applications, one of which shows that a noncommutative association scheme of rank $7$ must have at least three symmetric relations.
Introduction
A reality-based algebra (abbr. RBA) is a pair (A, B), where
(rba1) A is an associative unital algebra over C with skew-linear involution *;
(rba2) B = {b_0, b_1, . . . , b_{r−1}} is a distinguished *-invariant basis of A;
(rba3) 1_A ∈ B (we fix b_0 = 1_A);
(rba4) all of the structure constants λ_{ijk}, given by b_i b_j = Σ_k λ_{ijk} b_k for 0 ≤ i, j, k ≤ r − 1, are real numbers; and
(rba5) B satisfies the pseudo-inverse condition: if we also use * to denote the permutation of order 1 or 2 on {0, 1, . . . , r − 1} induced by the action of the involution * on B, then for all i, j ∈ {0, 1, . . . , r − 1}, λ_{ij0} ≠ 0 if and only if j = i*.
Axiom (rba4) implies that RB, the R-span of B, is a *-subalgebra of A, and axiom (rba5) tells us each b_i ∈ B has a unique pseudo-inverse in B. The dimension r = |B| is called the rank of the RBA. An element of B is called real if it is *-invariant, and otherwise it is called nonreal. (This terminology reflects the fact that the value of any character of A on a real basis element has to be a real number. Real basis elements have also been referred to as *-symmetric or *-invariant in the literature.) Since (b_i*)* = b_i for every b_i ∈ B, the nonreal basis elements always occur in pairs. An RBA has a positive degree map if there is a complex *-algebra representation δ for which δ(b_i) > 0 for all b_i ∈ B. In this case each distinguished basis can be normalized by positive constants so that the coefficient of b_0 = 1_A in b_i b_{i*} agrees with δ(b_i). This basis is unique relative to δ and we refer to it as the standard basis of the RBA. We will assume from now on that the distinguished basis of an RBA with positive degree map is this standard one. For further background on RBAs, we refer the reader to [2]. The field of definition F of an RBA is the minimal field extension of Q containing Λ, where Λ is the set of structure constants for the standard basis. By (rba4) we know this is always a subfield of R. We will say an RBA is F-rational when its field of definition is F, and rational when its field of definition is Q. As the structure constants relative to the regular matrices of a finite group or the adjacency matrices of an association scheme are nonnegative integers, finite groups and association schemes give familiar examples of rational RBAs with positive degree map. When F is the field of definition for an RBA with positive degree map, FB will be an r-dimensional semisimple F-algebra [5]. A splitting field of FB is a field extension K of F for which KB is a direct sum of full matrix algebras over K. In particular, it is a field K for which every irreducible character of A is afforded by a matrix representation that maps every element of the standard basis B to a matrix with entries in K; i.e., every χ ∈ Irr(A) is realized over K.
Our main result concerns noncommutative RBAs with positive degree map that have exactly one pair of nonreal standard basis elements. The argument makes use of the general observation that irreducible characters of RBAs can always be realized up to similarity by * -representations; i.e. those that satisfy X (a * ) = X (a) ⊤ for all a ∈ A. We apply Linchenko and Montgomery's extension of Frobenius-Schur indicator theory for algebras with involution [11] to obtain some restrictions on irreducible characters of RBAs with two (and four) nonreal basis elements. When the RBA (A, B) has two nonreal basis elements, it shows there is a unique irreducible character χ of degree 2, all other irreducible characters are of degree 1, and every irreducible character is realizable over R. Being the unique irreducible character of degree 2, χ takes values in the field of definition F . If X is a real * -algebra representation affording χ, then X (F B) is a 4-dimensional simple Falgebra, and hence it can be expressed as a generalized quaternion algebra in terms of its symbol α, β F for a pair of parameters α, β ∈ F × . Our main result gives a character-theoretic formula for the symbol of X (F B). Although R is a splitting field, we give examples of some integral RBAs of rank 6 where this component is a division algebra. Along the way we explore some applications of these methods. For instance, the application of the indicator to noncommutative RBAs with 2 or 4 basis elements can be applied to establish a few cases of the "real Schur index one" question for irreducible characters that arise in the realization cone of an abstract regular polytope. In the last section we consider the application of our results to association schemes and noncommutative RBAs of small rank. Although the only finite groups with exactly two nonreal elements are the dihedral groups of order 6 and 8, there are infinite families of table algebras and association schemes with this property. It applies, for example, to all noncommutative RBAs of rank 5 and 6 with positive degree map that were discussed in [8] and [7]. There are noncommutative RBAs of rank 7 with either two, four, or six nonreal basis elements. We will give an example of a noncommutative rational rank 7 RBA with three pairs of nonreal basis elements for which X (RB) is real quaternion algebra. However, integrality restrictions on the character table show such RBAs can never have algebraic integer structure constants, so noncommutative association schemes with six nonreal basis elements cannot exist. Frobenius-Schur indicator theory In this section we show that the field of real numbers is a splitting field for RB when B contains only 2 nonreal elements. Intuitively this is somewhat obvious because any basis of the real quaternion algebra H must have 3 nonreal elements, but by appealing to Frobenius-Schur indicator theory we can actually give a complete characterization of the real realizability of irreducible characters of CB when B has 2 or 4 nonreal elements. Frobenius-Schur indicator theory for group algebras over R was extended to algebras with involution by Linchenko and Montgomery in [11,Theorem 2.7]. We will apply their main theorem in the case where A = CB, where B is a * -invariant basis of A whose structure constants are real -these assumptions are equivalent to axioms (rba1), (rba2), and (rba4). (Here T r(S) denotes the trace of the linear operator S.) Theorem 1. Let A be an r-dimensional algebra over C with skewlinear involution * . 
Suppose further that A = CB, where B is a *invariant basis of A and the structure constants relative to B are real. is a pair of dual bases for A with respect to a symmetric bilinear associative nondegenerate form on A. For all χ ∈ Irr(A), define Note that since B has real structure constants, a representation affording χ ∈ Irr(A) restricts to a representation of RB, and in this case the three cases of ν(χ) in Theorem 1 (ii) distinguish when the simple component of RB is isomorphic to a full matrix ring over C, R, or H, respectively. (Note that the statement of (ii) in [11] has to be made in terms of the existence of symmetric or skew-symmetric nondegenerate A-invariant bilinear forms because they make no assumptions on the field generated by the structure constants of a basis -see [11, pg. 348-349].) Lemma 2. Suppose A is a finite-dimensional algebra with involution over C that has a * -invariant basis B whose structure constants are real. (ii) If A has an irreducible character with ν(χ) = −1, then s ≤ r − 6. (iii) If s = r − 2, then A has a unique irreducible character χ with χ(1 A ) = 2; all other irreducible characters of A have degree 1; and every irreducible character of A is realizable over R. (iv) If s = r − 4, then either (a) A has two irreducible characters of degree 2, all other irreducible characters of A have degree 1, and every irreducible character of A is realizable over R; or (b) A has one irreducible character with χ(1 A ) = 2 that is realizable over R, two complex-valued irreducible characters of degree 1, and all other irreducible characters are real-valued with degree 1. Proof. (i) Since the distinguished basis B is * -closed (and hence Sclosed), T r(S) is the number of * -invariant basis elements. Applying Theorem 1 (iii) proves (i). The gap between ψ ψ(1 A ) 2 and ψ ν(ψ)ψ(1 A ) is minimized when ν(ψ) = 1 for every ψ ∈ Irr(A), and this minimal gap is equal to ). Since this is exactly 2, we can conclude that there is only one irreducible character of degree > 1, that this irreducible character has degree 2, and that ν(ψ) = 1 for every ψ ∈ Irr(A). The latter implies ψ is realizable over R for every ψ ∈ Irr(A). (iv) Since A is noncommutative there is at least one irreducible character with χ(1) ≥ 2. For every irreducible character with ν(χ) = 0 the number of nonreal elements increases by χ(1) 2 . Since the irreducible characters with ν(χ) = 0 come in complex conjugate pairs, we can have such a pair of irreducibles only when there is just one such pair of degree 1 and there is one other irreducible of degree 2 that has ν(χ) = 1. In this case all other irreducible characters must have χ(1) = 1 and ν(χ) = 1. This situation is case (b). Now suppose every irreducible character has ν(χ) = 1. Then This number can be 4 only when there are two irreducibles χ with degree 2 and all others have degree 1. This situation is case (a). Example 3. Lemma 2 can sometimes be applied to answer a question about Schurian association schemes arising naturally in the study of abstract regular polytopes -see [13,Problem 23]. This question asks if ν(χ) = 1 for every irreducible character of the adjacency algebra of the Schurian association scheme corresponding to the H-H-double cosets of a finite string C-group G relative to its first vertex stabilizer subgroup H. Since this is known for all finite Coxeter groups, one must consider finite groups that are homomorphic images of infinite Coxeter groups. 
The question is answered affirmatively by Lemma 2 for the particular group G whenever the set G/ /H of H-H-double cosets in G contains two or four double cosets with g −1 ∈ HgH. We have found some such Schurian association schemes among rank 3 string C-groups, which are characterized by their Schlafi type [m 1 , m 2 ] k,ℓ for positive integers m 1 , m 2 , k, ℓ. These groups G are generated by three involutions a, b, and c subject to the Coxeter group relations (ab) m 1 = (bc) m 2 = (ac) 2 = 1 and additional finiteness-ensuring relations (abc) k = (abcb) ℓ = 1. In all of these cases, the corresponding Schurian association scheme is not commutative, so Lemma 2 applies to show its degree 2 irreducible character has ν(χ) ≥ 0. An explicit formula for the indicator was established for adjacency algebras of noncommutative association schemes (aka. homogeneous coherent configurations) by Higman [ , for all χ ∈ Irr(A). Suppose (A, B) is an RBA whose distinguished basis has exactly two nonreal elements. By Lemma 2 (iii), its unique degree 2 irreducible character χ will be realized by a real representation Φ. We claim that χ is realized by a real * -representation. A proof of this general fact was observed by Hanaki [6]. The proof goes as follows: given a real representation Φ : CB → M n (C) of an RBA with positive degree map that affords χ, (ii) we can write A as A = BB ⊤ for some invertible symmetric n × n matrix B; and then (iii) X = B −1 ΦB is a real * -representation affording χ. The case of two nonreal basis elements Let (A, B) be a noncommutative rank r RBA with positive degree map δ whose standard basis B has exactly two nonreal elements. From Lemma 2(iii), we know that A has r − 3 irreducible characters, one of degree 2 that is realized over R, and the remaining r − 2 of them realvalued and of degree 1. By the remark at the end of the previous section the real representation X affording degree 2 irreducible character χ can be chosen to be a * -representation. If F is the field of definition for the RBA, we can see using Galois conjugacy that χ takes values in F . Since F (χ) = F and we are working over a field of characteristic zero, A χ = X (F B) will be a 4-dimensional central simple F -algebra. If A χ were a division algebra, it will have dimension 4 over its center, so it will be a generalized quaternion algebra over F [12, pg. 236]. A generalized quaternion algebra over F is the 4-dimensional F -algebra F 1 + F x + F y + F xy with defining relations yx = −xy, x 2 = α, y 2 = β for α, β ∈ F × . The algebra is determined by the choice of α and β (in either order), so it is denoted with the symbol α, β F . Up to isomorphism, this F -algebra is unchanged when α or β are multiplied by nonzero squares in F × . α, β F will be isomorphic to M 2 (F ) if either α or β lie in (F × ) 2 (see [12, §1.6 and §1.7]), and if the latter is not the case then the algebra is naturally isomorphic to the cyclic algebra (F ( √ α)/F, σ, β), so it will be isomorphic to M 2 (F ) iff β is a norm in the extension F ( √ α)/F . . . , r − 3 and b * r−2 = b r−1 . As in [7], we set Since X is a * -algebra representation, X (b i ) is a symmetric matrix for i = 0, 1, . . . , r − 3, and . Another general fact we will need is the following: Lemma 4. Suppose F is the field of definition for an RBA (A, B), and let χ ∈ Irr(A). Let X be a representation of A affording χ. Then m χ ∈ F (χ), and for all b i ∈ B, the coefficients of the characteristic polynomial of X (b i ) lie in F (χ). Proof. 
Since we are working over fields of characteristic zero, the extension F (χ)/F is separable, so by [3, (74.2)] the center of X (F B) is isomorphic to F (χ), and the coefficients of the centrally primitive idempotent e χ corresponding to χ in the standard basis B also lie in F (χ). From [2, Proposition 2.14], we know we can express e χ in terms of the standard basis as Since the δ(b i ) ∈ F for all b i ∈ B, it follows that m χ ∈ F (χ). Furthermore, we have that X (F B) ≃ F (χ)Be χ via a map that restricts to a field isomorphism on the centers and takes X (b i ) → b i e χ , for all b i ∈ B. This implies that for all b i ∈ B, X (b i ) maps to a matrix with entries in F (χ) under the regular representation of the algebra X (F B). Therefore, the coefficients of the characteristic polynomial of X (b i ) will have coefficients in F (χ). The next proof makes use of two standard character identities . Let Φ be the set of degree 1 irreducible characters of B. Then n = τ (b 0 ) = 2m χ + φ∈Φ m φ , and the fact that each φ ∈ Φ is a * -representation of A implies that for any We are now ready for our main theorem. Theorem 5. Let (A, B) be a noncommutative RBA with positive degree map that has two nonreal elements b r−2 and b * r−2 . Let F be its field of definition F , and let χ be its unique irreducible character of degree 2. Let X be a real * -algebra representation affording χ. Then there exists a * -invariant b j ∈ B such that X (b j ) ∈ F X (b 0 ), and for any such b j ∈ B, the symbol of A χ ≃ X (F B) can be expressed as Proof. Following the approach in [7], define D = {d 0 , d 1 , . . . , d r−3 , c, d} to be the basis of A given by Then {d 0 , d 1 , . . . , d r−3 , c} is a basis for the subspace of A consisting of * -invariant elements, and d * = −d. It then follows that Recall from Lemma 2(iii) that every irreducible character of A is realvalued. This means every irreducible character of degree 1 will vanish So it follows that m χ (s r−2 − t r−2 ) 2 = nδ r−2 . Therefore, In particular, if we take x = m χ X (d) to be one of the generators of A χ as a quaternion algebra, then x 2 = −nm χ δ(b r−2 )I. We now need to find a suitable choice for the other symbol algebra generator y. Since A χ has dimension 4, there are at least two of our other nontrivial * -invariant elements d ℓ ∈ {d 1 , . . . , d r−3 , c} for which X (d ℓ ) has distinct eigenvalues. So we can choose this d ℓ to be one of the real elements b j of B. After conjugating by a suitable orthogonal matrix (which will not affect the form of X (d) or the result for x 2 above), we can assume X (b j ) = r j 0 0 u j with r j = u j . Since x has the form 0 α −α 0 for some α ∈ F × , it is easy to see that in order for a symmetric matrix y = r s s u to satisfy yx = −xy, we only require u = −r. Setting y = 2X (b j ) − (r j + u j )X (b 0 ) accomplishes this, and for this y we have that y 2 = [(r j −u j ) 2 ]I is a positive multiple of the identity. Furthermore by Lemma 4 the trace r j + u j = χ(b j ) and determinant r j u j both lie in F (χ) = F , and hence y 2 = (r j − u j ℓ) 2 I = [(r j + u j ) 2 − 4r j u j ]I ∈ F I. It remains to give a character-theoretic expression for (r j − u j ) 2 . If a = χ(b j ) 2 , then assuming without loss of generality that r j > u j , we have (r j − u j ) = 2ε for some ε > 0. Furthermore, Multiplying this by m 2 χ produces the desired second parameter in the symbol given for A χ . Corollary 6. Suppose B is the standard basis of a noncommutative rank 5 RBA with positive degree map, and let F be its field of definition. 
Let X be a * -representation affording the degree 2 irreducible character χ of B. Then there will be at least one * -invariant element b j ∈ B for which X (b j ) has distinct eigenvalues, and for any such b j the symbol of Proof. The main results of [8] show these RBAs are determined by their degree map. The values of the degree 2 irreducible character are for all b i ∈ B, and m χ = n−1 2 . Using the fact that Φ = {δ} in Theorem 5 and substituting these values into the formula for the symbol of A χ gives the result. Example 7. Example 11 from [8] gives a noncommutative rational rank 5 table algebra with order 25 whose nontrivial basis elements have degree 6, that has a * -invariant element b 1 with So in this case we will have m χ = 12, and the symbol for A χ in the above corollary has parameters α = −(25)(12)(6) = −2(30) 2 and Using the Legendre symbol as in [12, §1.7] to compute the local Schur indices of this generalized quaternion algebra over Q, we see that the local index will be 2 at the primes p = 2 and 3. (This can also be accomplished with the LocalIndicesOfRationalQuaternionAlgebra command in the GAP package wedderga [1].) So in this case A χ is a 4-dimensional Q-division algebra. Example 8. Table 1 of [7] gives character-theoretic information for many integral rank 6 RBAs with positive degree map having order up to 150. For these the field of definition is Q. In checking this table we find examples for which A χ = α, β Q is a noncommutative division algebra. It is interesting to note that the above examples correspond to a table algebras that do not arise from association schemes. Although we did not find a noncommutative rank 6 association scheme in Table 1 of [7] that produces a nontrivial rational Schur index, the one primitive table algebra of order 81 whose feasibility as an association scheme is open does have this property: Example 9. Let B be the second example of order 81 in Table 1 of [7], which has character table In searching the database of small association schemes, we find several examples noncommutative association schemes of rank 7 whose standard basis has 2 nonreal elements. It is possible to construct noncommutative rank 7 table algebras with 4 nonreal basis elements as wreath products of rank 3 table algebras with 2 nonreal elements with the noncommutative rank 5 table algebras discussed in [8], but we have neither been able to find nor rule out the existence of noncommutative rank 7 association schemes with 4 nonreal elements. Proof. Consider the character table above. The first orthogonality relation implies that the sums of the values of the irreducible characters other than the degree map will be 0. For φ (and ψ) this implies 1 + 2(φ 1 + φ 2 + φ 3 ) = 0, and so φ 1 + φ 2 + φ 3 = − 1 2 . If the RBA is integral, the values φ 1 , φ 2 , and φ 3 must lie in the ring O of algebraic integers of the field of character values Q(φ). But then the sum of these numbers would have to be an algebraic integer, so it could not be − 1 2 . This contradiction implies that any RBA of this type will not have integral structure constants. Non-integral examples satisfying the assumptions of the previous theorem do exist. It is relatively straightforward to produce an admissible character table (with rational character values and multiplicities), and use it to construct a representation affording χ. For example, we have constructed a noncommutative rank 7 RBA with postive degree map that has this (admissible) character table: [6] A. 
Hanaki, A note on complex matrix representations of association schemes, unpublished note.
2019-05-01T18:42:22.000Z
2019-05-01T00:00:00.000
{ "year": 2019, "sha1": "ae9243f408df252e59059c081512b9a5ed73c1ac", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "bd09bfc21fdc177990a819fd0fb4c938bec125c5", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
7779820
pes2o/s2orc
v3-fos-license
Three Holy Men Get Haircuts: The Semiotic Analysis of a Joke This article deals with a typology of 45 techniques of humor that I found when doing research on the mechanisms that generate humor in texts, lists the techniques and applies them to a Jewish joke. It references the work of Vladimir Propp on folktales as analogous in that both are concerned with mechanisms in text that generate meaning. It also deals with four theories about why people find texts humorous, defines the joke as a short narrative with a punch line that is meant to generate mirthful laughter and defines Jewish humor as being about Jewish people and culture as told by Jewish people. It offers a paradigmatic analysis of the joke, and offers some insights into why Jewish people developed their distinctive kind of humor. This article is an enhanced and expanded version of an article which was published in a Chinese semiotics journal (doi:10.1515/css-2015-0022). gruity that we find amusing. Schopenhauer describes what we call the incongruity theory as follows (Piddington, 1963, pp. 171-172) "The cause of laughter in every case is simply the sudden perception of the incongruity between a concept and the real object which have been thought through it in some relation, and laughter itself is just the expression of this incongruity." In jokes, the sudden perception that Schopenhauer mentions is caused by the punch lines which generate this recognition of an incongruity. In a good joke, we don't know what to expect in the way of a punch line. The third "why" theory is the psychoanalytic theory of humor which suggests that humor is primarily a form of masked aggression." As Freud wrote in his book, Jokes and Their Relation to the Unconscious (Freud, 1963, p. 101) "and here at last we can understand what it is that jokes achieve in the service of their purpose. They make possible the satisfaction of an instinct (whether lustful or hostile) in the face of an obstacle that stands in its way." (Freud, 1963, p. 101). "The wonderful thing about humor, from a psychoanalytic perspective, is that when we hear a joke we can participate in the aggression without any sense of guilt. The fourth "why" theory ties humor to communication paradoxes and suggests that humor results from the use of paradox, play and the resolution of logical problems. As William Fry wrote in his book Sweet Madness (Fry, 1963, p. 158) "During the unfolding of humor, one is suddenly confronted by an explicit-implicit reversal when the punch line is delivered…Inescapably, the punch line combines communication with meta-communication." In the final analysis, these theorists argue that what goes on in jokes may be too complicated for us to understand at our present level of development. The problem with these theories is that they don't explain how humor arises in the events that take place in jokes. For example, incongruity theorists deal with surprises in jokes. Since all jokes contain punch lines, which generate unexpected resolutions to jokes, all my 45 techniques can be subsumed under the incongruity theory of humor. But there is a difference between talking about incongruity and about the various techniques I deal with in my typology: insult, facetiousness, exaggeration and so on. Now that I've discussed the four "why we laugh" theories, let me say something about how I developed my list of the 45 techniques of humor. 
In Vladimir Propp's Morphology of the Folktale he offers us thirty-one functions that describe actions by characters who play an important role in folktales. These functions help us understand how narrative texts work. Some typical functions are interdiction, violation, trickery, and the receipt of a magical agent. Propp chose to focus on functions of characters in folktales because other approaches, such as studying themes or kinds of heroes and heroines, didn't work. He defined a function as an act of a character, defined from the point of view of its significance for the course of the action (Propp, 1968, p. 21). A Priest, An Imam and a Rabbi Get a Haircut This joke was told to me by a Jewish friend from Israel. I found it very funny and had a good laugh when I heard it but some people who are not Jewish might not "get" it. A barber is sitting in his shop when a priest enters. "Can I have a haircut?" the priest asks. "Of course," says the barber. The barber then gives the priest a haircut. When the barber has finished, the priest asks "How much do I owe you?" "Nothing," replies the barber. "For you are a holy man." The priest leaves. The next morning, when the barber opens his shop, he finds a bag with one hundred gold coins in it. A short while later, an Imam enters the shop. "Can I have a haircut?" he asks. "Of course," says the barber, who gives the Imam a haircut. When the barber has finished, the Imam asks "How much do I owe you?" "Nothing," replies the barber. "For you are a holy man." The Imam leaves. The next morning, when the barber opens his shop, he finds a bag with a hundred gold coins in it. A bit later, a rabbi walks in the door. "Can I have a haircut?" the rabbi asks. "Of course," says the barber, who gives the rabbi a haircut. When the haircut is finished, the rabbi asks, "How much do I owe you?" "Nothing," replies the barber, "for you are a holy man." The rabbi leaves. The next morning, when the barber opens his shop, he finds a hundred rabbis. The problem with the "why" theories is that they don't deal with the mechanisms that generate the humor in the joke I've just "told" (and it is jokes, short texts of this kind, that we will be dealing with here). I should point out that supporters of the various "why" theories spend a lot of time arguing with supporters of the other "why" theories about which theory is best. But what is important, from my perspective, is that the "why" theories don't deal with the specifics of jokes to explain what it is, in a given joke, that evokes mirthful laughter. Superiority theorists would say we feel superior to the rabbi and the hundred rabbis who are crowded in the barbershop, hoping to get a free haircut. Incongruity theorists would say we are surprised by the punch line, though anyone familiar with Jewish humor might possibly have been able to anticipate the kind of resolution we find in the joke. Psychoanalytic critics would say the joke allows guilt-free aggression against Jews, who are the main protagonists in the joke and the subject of the punch line, and communication theorists would say the resolution is ultimately paradoxical and involves a communication, the punch line, and a meta-communication: laugh, but don't take this story seriously because it is a joke. But these "why" theories don't adequately explain what is going on in the joke. Rather than arguing which "why" theory is best, I chose a different path.
The list is the result of a large research project I conducted, a content analysis of all the books I had in my house with humorous content (joke books, books of folklore, comic strips, cartoons, humorous poems, theatrical comedies, humorous short stories, and so on), with a focus on what it was, in each text I examined, that was funny and that generated laughter. From this analysis I came up with a list of 45 techniques which, I suggest, in various combinations, can be found in jokes and all other forms of humor. These techniques, we might say, are the DNA of humor. We often find two or three, or more, of these techniques in a joke. These techniques, I claim, inform humor from different time periods. I will show how these techniques function in the joke about a priest, an imam and a rabbi who go to a barbershop to get a haircut. But first, we must decide what a joke is and then what makes a joke a Jewish joke. Defining the Joke I will define a joke as "a short narrative, with a punch line, meant to evoke mirthful laughter." The narrative may have a number of events in it but if it is a joke, it will always have a punch line that is meant to generate mirthful laughter. The structure of the typical joke is shown below: Techniques of Humor in the Priest, Imam and Rabbi Joke What follows are my suggestions about which techniques of humor are found in this joke. It is not unusual for a joke to make use of a number of different techniques. Technique 44: Theme and Variation (Logic Humor) The first technique we find in the joke is 44, Theme and Variation. In The Art of Comedy Writing I define theme and variation as follows (Berger, 1997, p. 44): By theme and variation I refer to the technique comedy writers use to take some matter (a belief, an activity) and show how different nationalities, religions, occupations, members of social classes, etc. vary with regard to this belief or activity. Part of the humor here comes from seeing how the theme is varied by the different groups, and by the way this technique plays with stereotypes people have of the different groups. There are three holy men and for most of the joke we find them doing the same thing: getting a haircut, asking to pay for the haircut but being told by the barber the haircut was free, and, for two of them, leaving the barber a hundred gold coins the next day. The three holy men are from different religions and the third holy man, the rabbi, doesn't leave a hundred gold coins but a hundred rabbis. That is the variation. Technique 19: Facetiousness (Linguistic Humor) In An Anatomy of Humor I define facetiousness as follows (Berger, 1993, p. 34): Facetiousness is generally taken to mean a joking, nonserious use of language. There is an element of ambiguity, for the person does not really mean (or take seriously) what he or she says and this must be communicated one way or another…Facetiousness is similar to irony, but is weaker. In both techniques we must "read" or "decode" the message; in irony there is a reversal, in facetiousness there is a discounting. I understand facetious to mean a jesting, frivolous, nonserious use of language. The idea of having a hundred rabbis packed into a small barbershop is far-fetched and the joke's humor is based, in large measure, on the ridiculous nature of the idea. Technique 1: Absurdity (Logic Humor) I deal with absurdity in my An Anatomy of Humor and explain (Berger, 1993, p.
19): Absurdity and its related forms-confusion and nonsense-seems to be relatively simple, but it is not, and its effects may be quite complicated, as Freud pointed out in his discussion of nonsense humor. Absurdity works by making light of the "demands" of logic and rationality. This absurdity doesn't necessarily take the form of silliness (though in many children's jokes it does) but may be an example of a relatively sophisticated philosophical position….We all seem to need to impose our sense of logic and order on the world, and when we come across situations or instances where our logic doesn't work, we react by being puzzled and, in certain cases, amused. I usually reserve the technique of absurdity to deal with the kind of plays one finds in the theater of the absurd, but it is reasonable to suggest that the idea of packing a hundred rabbis into a barbershop is absurd and that this absurdity helps generate the humor. Absurdity, which is based on logic and irrationality, is not the same as facetiousness, which is based on a certain attitude. Technique 43: Stereotypes (Identity Humor) My discussion of stereotypes, found in An Anatomy of Humor, relies on sociological theories about the subject. As I explain in my discussion of this technique (Berger, 1993, pp. 52-53): Jokes involving stereotypes can be described as generalized insults-attacks on races, religions, ethnic groups, etc. but there is more to the humor of stereotypes than that. Stereotypes are useful to writers and comedians because they are instant (pseudo) "explanations" of behavior and they enable people to understand "motivation"….Stereotypes are, from a sociological point of view, group-held notions people have about other groups. Stereotypes can be negative, positive or mixed, but in all cases they are extreme over-simplifications and generalizations. The joke also alludes to the stereotype that Jews are cheap, a stereotype that is widely held but also quite inaccurate. Instead of a hundred gold coins, the barber finds a hundred rabbis. Technique 14: Disappointment (Logic Humor) In my description of the humor of disappointment and defeated expectations in An Anatomy of Humor I write (Berger, 1993, p. 31): The technique of disappointment involves leading people on about something and then denying them the logical consequences they expect. It is very similar to teasing and is funny to the extent that we find minor disappointments amusing. A good deal depends upon the frame or situation in which the disappointment is staged. The structure of the barbershop joke, with the first two holy men leaving a hundred gold coins, sets the listener of the joke up to expect that the rabbi will also leave a hundred gold coins. Instead, he "leaves" a hundred rabbis. This humor is based upon defeated expectations. We can say the "formula" for this joke (that is, the techniques used in it) is: 44-19-43-14. Reducing a joke to a formula is, in itself, humorous. There are, in fact, jokes that use the idea of jokes having numbers to differentiate them from other jokes. For example, consider the following joke: At a conference of comedians, all the comedians know all the jokes so they now tell jokes by referring to them by number. A comedian stands up and says "35-16-9-45" but nobody laughs. A comedian in the audience turns to a friend and says "he never could tell a joke well." Now let us turn to a paradigmatic analysis of the priest, imam and rabbi joke.
There seems to be two distinct types of structural analyses in folklore. One is the type of which Propp's Morphology of the Folktale is the exemplar par excellence. In this type, the structure or formal organization of a folkloristic text is described following the chronological order of the linear sequences of elements in the text….Following Lévi-Strauss (1963, p. 312) this linear sequential structural analysis we might term "syntagmatic" structural analysis….The other type of structural analysis in folklore seeks to describe the pattern (usually based upon an a priori binary principle of opposition) which allegedly underlies the folkloristic text. This pattern is not the same as the sequential structure at all. Rather, the elements are taken out of the "given" order and are regrouped in one or more analytic schema. It was Claude Lévi-Strauss who suggested that the paradigmatic analysis of a text showed what it means in contrast to the syntagmatic analysis of a text, which shows what happens in it. We obtain the paradigmatic analysis of a text by finding the set of bipolar oppositions found (hidden) in the text. I believe the basic opposition in this joke is between paying for a haircut and not paying for a haircut. We see this opposition in the chart below. What is Jewish Humor? Freud said he knew of no people who made so much fun of themselves as the Jews and this joke reflects a common Jewish sensibility: to laugh at human foibles, whether they are in lay people or in religious figures like rabbis. This joke is an example of Jewish humor, which we can define as humor in which Jewish people are the main characters and Jewish character traits and culture play an important role in generating the humor. Avner Ziv, an Israeli humor scholar, defines Jewish humor and explains its origins in Eastern Europe in his book Jewish Humor (Ziv, 1986, p. 11). The fact that this Jewish joke has an Imam in it, instead of the characters we would find in earlier American jokes about holy men, namely a priest, a Protestant minister and a rabbi, reflects important changes that have taken place in American culture and society. America is now a more multi-cultural, multi-ethnic and multi-religious society. Conclusions There are, in this joke, two other holy men: a priest and an imam. But the punch line involves a rabbi and thus I would suggest this is a Jewish joke. Many Jewish jokes involve people from other religions, ethnicities, races, countries, etc. But if the punch line involves Jews, it is generally safe to conclude that we have a Jewish joke. The rabbi in the joke wanted to pay for the haircut, but when the barber told him he didn't charge holy men for haircuts, the rabbi took advantage of his generosity and sent a hundred other rabbis to the barber. One could argue that the technique of literalness, technique 27, is also at play here since the barber told the rabbi he doesn't charge holy men for haircuts and that comment led to the punch line in the joke. The punch line, "he found a hundred rabbis," plays on our expectations that there will be a hundred something in the joke as well as the realization that if the rabbi left a hundred gold coins and nothing else happened there would be no joke. The fact that Jewish people are able to make fun of their rabbis, and often do in their jokes, suggests a different sensibility when it comes to relating to holy men and women (since there are now women rabbis) than you find in many other religions.
2018-04-03T03:42:07.390Z
2016-08-01T00:00:00.000
{ "year": 2016, "sha1": "f05feabbef70f2b868e7c9db367595923acb8191", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5964/ejop.v12i3.1042", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f05feabbef70f2b868e7c9db367595923acb8191", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [ "Sociology", "Medicine" ] }
55566076
pes2o/s2orc
v3-fos-license
Students ’ School Performance in Language and Mathematics : Effects of Hope on Attributions , Emotions and Performance Expectations This study examined (a) students’ (n= 342, both genders, grades 5 and 6) attributions and emotions for their subjectively perceived school performance in language and mathematics as successful or unsuccessful, (b) the role of students’ hope (pathways thinking, agency thinking) in the: perceived performance in the above school subjects as successful or unsuccessful, subsequent attributions and emotions, impact of attributions on emotions, and,in turn,interactive effect on performance expectations. The estimated as successful and unsuccessful school performance was predominately attributed to stable and unstable (external in language) factors, respectively. The students experienced intense positive and moderate negative emotions for the perceived successful and unsuccessful school performance, respectively. Hope (mainly, agency thinking) positively influenced the attributions (particularly, stability) and emotions (mainly, pathway thinking), and the impact of attributions on emotions, mainly in unsuccessful performance in mathematics. Hope, attributions and emotions had unique and complimentarily effect on performance expectations. Recent research on student motivation focuses on socio-cognitive and emotional constructs and their role in academic achievement (Anderman & Wolters, 2006;Boekaerts, Pintrich, & Zeidner, 2000;Schutz & DeCuir, 2002;Schutz, Hong, Cross & Osbon, 2006;Stephanou, 2006Stephanou, , 2008;;Stephanou & Kyridis, in press;Stephanou & Tatsis, 2008;Wosnitza, Karabenick, Efklides &Nenniger, 2009).Weiner's (1992Weiner's ( , 2005) ) attribution model of motivation, on which this study is partly based, incorporates a variety of these constructs, and it has proved helpful in understanding children's academic achievement (see Anderman & Wolters, 2006;Schunk & Zimmerman, 2006).Specifically, Weiner's (2005) attribution model perceives affects and expectations as immediate predecessors of academic achievement.The findings from previous investigations have documented that attributions for past performance influence future performance, since they have psychological consequences relative to expectancy and affects (Pintrich & Schunk, 2002;Stephanou, 2004bStephanou, , 2005;;Weiner, 1992).Whether students perceive their academic performance as successful or unsuccessful, and which explanations or interpretations they make about their performance influence their emotions, motivation and behaviour.For example, if a student attributes his / her successful course performance to internal, controllable and stable factors (e.g., long-term effort), he / she may experience pride and expect future success.In contrast, by attributing failure to internal, uncontrollable and stable factors (e.g., low ability), a student may experience incompetence and shame, have low expectations of success, and decreases the probability of successful performance.Furthermore, a student, who experiences repeated shame and hopelessness in a school course, will be likely to avoid being involved in that course in the future (see Eccles & Wigfield, 2000;Stephanou, 2004a). 
However, as Snyder, Shorey, Cheavens, Pulvers, Adams III, and Wiklund (2002) suggest, although the above constructs have contributed to a better understanding of academic performance, each explains only a part of student motivation. In Snyder's (2000) hope theory, hope, incorporating the motivation to move towards goals and the ways to achieve these goals, is a dynamic cognitive motivational system. In this sense, emotions follow cognitions in the process of goal pursuits. Also, hope is positively related to competence beliefs and outcome expectations (Snyder, Irving & Anderson, 1991). Consequently, hope enables students to deal with problems with a focus on success, and, thus, enhances the chances of attaining their goals (Conti, 2000). For example, previous studies (e.g., McDermott & Snyder, 2000; Stephanou, 2010) have indicated that children's high hope moderately predicts school achievement test scores for grade school children (McDermott & Snyder, 2000). Hope also influences how children interpret and feel about their achievement (Roberts, Brown, Johnson & Reinke, 2005). More accurately, although in hope theory the focus is on reaching desired future goal-related outcomes, hope is related to attributions for past behaviour, since both theories elaborate the pursuit of goals and important outcomes (see Seligman, 1991; Snyder, Rand & Sigmon, 2005; Weiner, 2005). Hope is associated with emotions in a given achievement situation, since goal-pursuit cognitions, such as avoiding or alleviating harm or maximizing benefits in it, cause emotions (Smith & Ellsworth, 1987; Snyder et al., 2005). Besides, emotions arise 'in response to the meaning structures of given situations' (Frijda, 1988, p. 349), and the appraisal of a certain learning situation is influenced by self-beliefs (Frijda, 2009; Weiner, 2001). Accordingly, Snyder's (1994, 2005) hope theory, which is used in this study and incorporates way power and willing power, offers an important construct in understanding how children deal with their academic-related situations (Roberts et al., 2005; Smith & Kirby, 2000; Snyder, Cheavens & Sympson, 1997). Besides, in agreement with Johnson and Roberts (1999), "looking at strengths rather than deficits, opportunities rather than risks, assets rather than liabilities is slowly becoming an increasing presence in the psychotherapy, education, and parenting literature" (p. 50). Studies (e.g., Snyder, Hoza, Pelham, Rapoff, Ware, Danovsky, Highberger, Rubinstein & Stahl, 1997; Snyder, McDermott, Cook & Rapoff, 1997; Stephanou, 2011a) have shown that the majority of children are able to use hopeful, goal-directing thought. In middle childhood and preadolescence, in particular, there is a growth in logical rather than intuitive thinking skills, which contributes to increasing hopeful planning and pursuing pathways towards valued goals, and doing so within a social context, mindful of the wishes of significant others, such as peers and teachers (Carr, 2005; Snyder, 2000). Also, children tend to perceive their future positively, and have high hope (Snyder, Hoza et al., 1997; Snyder, McDermott et al., 1997; Stephanou, 2011a; Stephanou & Balkamou, in press).
However, few investigations have studied how children's hope interacts with attributions and emotions in school achievement, and how they interactively affect performance expectations following a certain performance. Previous literature about the relationship between emotions and cognitive factors in academic achievement suggests that it is relatively domain specific and varies from one academic domain to the other (Ainley, Buckley & Cha, 2009; Dermitzaki & Efklides, 2000; Efklides, 2001; Goetz, Frenzel, Pekrun, Hall & Lüdtke, 2007; Schunk & Zimmerman, 2006; Stephanou, 2006, 2011b; Wolters & Pintrich, 1998). Furthermore, as some researchers (e.g., Anderman, 2004; Wigfield, Guthrie, Tonks & Perencevich, 2004) propose, students' motivation is better understood by contextualizing beliefs within a given domain rather than just by comparing between domains. The present study focused on mathematics and language in grades five and six, so that a more diverse picture of students' motivation is attained. Appraisal theory of emotions (e.g., Ellsworth & Scherer, 2003; Frijda, 1988, 2005) has proved important in examining emotions in academic settings. The attributional appraisal perspective on emotions, especially, focuses on how specific emotions such as sadness and anger are elicited, and on the motivational functions they serve in a certain achievement condition (Weiner, 2002, 2006). The intuitive appraisal of academic performance, which refers to students' perceptions of how good their performance was, and the attributional appraisal of performance, which concerns the perceived causes for performance, are important sources of students' emotions (Weiner, 2002). More precisely, according to Weiner's (2005) attribution theory, there are "outcome-dependent" emotions (e.g., happiness, pleasure, sadness), which are the initial and strongest response to the valence of the performance, and "attribution-dependent" emotions (e.g., encouragement, anger), which are influenced by the causal explanation for the performance. Although all attributional dimensions are related to emotions for performance, their prevalence differs across the various emotions. Specifically, locus of causality, stability and controllability mainly influence the self-esteem-related (pride), expectancy-related (confidence) and social (shame, anger, gratitude) emotions, respectively (Berndsen & Manstead, 2007; Pintrich & Schunk, 2002; Stephanou & Tatsis, 2008; Weiner, 1995, 2001, 2005, 2006). For example, internal attributions for successful school performance produce the feelings of confidence and pride, whereas external attributions lead to positive behaviors such as help seeking, or negative responses, such as helplessness, avoidance and lack of persistence. In contrast, attributing unsuccessful school performance to internal factors predicts incompetence, shame, guilt and resignation, whereas attributing unsuccessful performance to others causes anger, aggression and vindictiveness.
Attributing successful school performance to stable factors enhances performance expectations and facilitates task engagement, while attributing an unsuccessful performance to unstable factors is likely to improve performance and minimizes the feeling of hopelessness. In contrast, attributing failure to stable factors minimizes positive expectations, produces the feeling of hopelessness and can lead to learned helplessness, a sense that no effort can lead to good performance (see Peterson & Steen, 2005; Seligman, 2002; Weiner, 2001, 2005, 2006). Overall, the beliefs that a student has about the causes of his / her school performance have effects on his / her emotions and expectations for future performance. Then, emotions and expectations influence the student's actual future performance. Association of Hope with Attributions, Emotions and Performance Expectations in Academic Achievement According to Snyder's (2000) hope theory, hope is a cognitive set including an individual's beliefs in his / her capacity to create effective routes to achieve goals (way power or pathways thinking) and beliefs in his / her ability to initiate and sustain movement towards those goals (willing power or agency). It is "a positive motivational state that is based on an interactively derived sense of successful agency (goal-directed energy) and pathways (planning to meet goals)" (Snyder, Harris, Anderson, Holleran, Irving, Sigmon, Yoshinobu, Gibb, Langelle & Harney, 1991, p. 287). Agency thinking is the motivational component in hope theory, shares similarities with self-efficacy (Bandura, 1997; Snyder et al., 2005), and it is particularly crucial when individuals encounter impediments (Snyder, 1994). In such situations, agency thoughts enable the individual to direct the motivation to the best pathway among the alternative pathways (Snyder et al., 2005). Similarly, the production of several pathways is important in the case of impediments, and high-hope people are effective at initiating alternative routes. Within this perspective, hope is a critical construct for understanding how children deal with and work towards goals, such as succeeding at school, in an adaptive and effective manner (see Roberts et al., 2005). Measures of children's hope are positively related with self-reported competence and feelings about themselves, and hope is a predictor of self-esteem (Snyder, McDermott, et al., 1997; Snyder, Feldman, Taylor, Schroeder & Adams, 2000). Also, the Lewis and Kliewer (1996) study, focusing on a pediatric population, showed that hope was negatively associated with anxiety, while this association was moderated by coping strategies. Research by Barnum, Snyder, Rapoff, Mani, and Thompson (1998) revealed that high hope had a protective function in children, allowing them to be effective in their lives in spite of obstacles. Also, previous research revealed that hope, state and/or trait, is positively associated with academic achievement. For example, Snyder, Hoza et al. (1997) found that hope is positively related to achievement test scores in grade school. Similarly, higher hope was related to higher overall grade point averages for junior high students (Lopez, Bouwkamp, Edwards & Teramoto Pediotti, 2000), high school students (Snyder et al., 1991), and semester grades for college students (Chang, 1998; Curry, Maniar, Sondag & Sandstedt, 1999). Peterson, Gerhardt and Rode (2006) found a positive effect of trait hope on performance in an anagram task through state hope.
Generally, individuals with high dispositional hope enjoy life, use positive reappraisal for a variety of stressful situations, and do not use avoidance and denial behaviour (Gilham, 2000; Snyder, 2000; Snyder, Cheavens & Michael, 1999; Stephanou, 2011a; Stephanou & Balkamou, in press). Hopeful people, like optimistic people, expect positive outcomes even when they face difficulties, situations in which they persist in pursuing their goals and regulate themselves, using effective coping strategies, so that they enhance the chances of achieving their goals (Carver & Scheier, 2005; Peterson, 2000; Scheier, Carver & Bridges, 2000; Seligman, 1991). Hopeful people, additionally, focus not only on future goals but also on goals they believe they can achieve (see Nolen-Hoeksema & Davis, 2005; Snyder, 2000). That means that hopeful individuals are looking for something positive in a variety of conditions. Also, some studies have suggested that optimism is predominantly related to agency hope and that pathways hope makes a unique contribution over and beyond what is explained by optimism (see Snyder et al., 2002). Accordingly, a high-hope child may use an optimistic attribution pattern in explaining successful and unsuccessful school performance. Probably, a high-hope child, as an optimistic child does, attributes failure to external, unstable and specific factors instead of internal, stable and global factors (see Scheier & Carver, 1985; Snyder et al., 2005; Seligman, 2002). In Snyder's hope theory, emphasizing the thinking processes, 'goal-pursuit cognitions cause emotions' (Snyder et al., 2005, p. 258). Specifically, positive emotions result from the perception of successful goal pursuit, which reflects unimpeded movement toward the goal or effective overcoming of obstacles. In contrast, negative emotions are formulated by the perception of unsuccessful goal pursuit, which may result from insufficient agency thinking and / or pathway thinking or the ineffective ability to overcome the problem. These points were supported by respective studies (e.g., Snyder, Sympson, Ybasco, Borders, Babyak & Higgins, 1996; Stephanou, 2010), and are in agreement with findings that reported lessened well-being stems from perceived difficulties in the pursuit of important goals (Diener, 1984; Ruehlman & Wolchik, 1988). Summarizing, hope has positive effects on thoughts, emotions, expectations and performance in academic achievement situations. Subjective Perception of Success and Failure Usually in academic achievement, the criteria of success and failure have been objectively defined, that is, what grade one gets in a specific school subject. However, performance is also perceived as successful or not, regardless of the exact grade gained. It has long been recognized that successful and unsuccessful performance outcomes are better seen as psychological states, based upon students' own interpretation of performance (Dweck, 1999). For example, students' performance expectations, goals, values, and self-perceptions of ability in a specific school subject influence the perception of how successful performance is (Pintrich, 2003; Pintrich & Schunk, 2002). Perceived performance, as compared to objective performance, has also been found to be related to students' achievement motivation and actual achievement (Weinstein, 1998; Zimmerman, 1995). For these reasons, in the present study, students defined what they considered successful performance for themselves.
Aim and Hypotheses of the Study This study aimed to examine (a) students' attributions and emotions for their subjectively perceived successful and unsuccessful school performance in mathematics and language, (b) the role of students' hope (pathways thinking, agency thinking) in the perception of their school performance as successful or unsuccessful in the above school subjects, in the generation of the subsequent attributions and emotions, and in the impact of attributions on emotions, and (c) the role of students' hope (pathways thinking, agency thinking) in performance expectations, and in the interactive impact of attributions and emotions on performance expectations. The hypotheses of the study were the following: The performance perceived as successful and unsuccessful in each school subject will be attributed to self- and other-related factors, respectively (Hypothesis 1a). Locus of causality, as compared to the rest of the attributional dimensions, will be the most powerful factor in discriminating the two groups of students in mathematics and in language (Hypothesis 1b). The students will experience various emotions for the perceived successful and, particularly, unsuccessful school performance in both school subjects (Hypothesis 2a). The students who perceive their school performance as successful will experience positive emotions, whereas the students who perceive their school performance as unsuccessful will experience negative emotions, mainly outcome-dependent emotions (Hypothesis 2b). The perceived successful performance group, compared to the perceived unsuccessful performance group, will have higher hope (mainly, agency thinking) in each school subject (Hypothesis 3a). Hope, mostly agency thinking, will have positive effects on the generation of the perception of school performance as successful and, mainly, unsuccessful (Hypothesis 3b). Hope will have positive effects on attributions, particularly stability (Hypothesis 4a), on emotions, mainly expectancy-related (Hypothesis 4b), and on the impact of attributions on emotions (Hypothesis 4c) in the successful and, particularly, the unsuccessful performance group in both school subjects. Hope will positively influence performance expectations (Hypothesis 5a) and the interactive effect of attributions and emotions on performance expectations (Hypothesis 5b), mainly in the unsuccessful performance groups. There will be school subject differences, but no specific hypothesis is suggested (Hypothesis 6). Participants A total of 342 students, both genders, of Grades 5 and 6 participated in the study. Their age ranged from 10 to 12 years (M = 11.30 years, SD = 0.55 years). They came from schools in various towns of Greece, representing various parental socioeconomic levels. According to the findings with respect to the perceived school performance as successful or unsuccessful (see measurements below), 191 and 151 students perceived their school performance in language as successful and unsuccessful, respectively. Similarly, 179 and 163 participants estimated their school performance in mathematics as successful and unsuccessful, respectively. Measures A questionnaire with separate versions for mathematics and language was constructed. The wording of the questions for the two school subjects was the same except for the subject name.
Performance expectations. The questionnaire was based on previous research (see Eccles & Wigfield, 2002; Nagy, Trautwein, Baumert, Koller & Garrett, 2006; Pintrich & Schunk, 2002; Stephanou, 2008; Wigfield & Eccles, 2002). It consisted of four questions (e.g., "How well do you think you will do on Language this school year?", "How good will your performance be in Mathematics this school year?"). Responses ranged from 1 = very poorly to 10 = excellent. The 10-point scale was used to match the school marks scale. Cronbach's alphas were .83 and .85 for mathematics and language, respectively. Objective and subjective school performance. Teachers rated students' school performance (from 1 to 10) in both school subjects. These ratings represented objective school performance. Besides school marks, students' perceptions of their school performance as successful or unsuccessful were also estimated. Students were asked to indicate how successful they thought their school performance was. Specifically, the participants indicated the lowest mark (from 1 to 10) over which their performance in each school subject would be considered successful. Students whose school mark was lower than the mark indicated as successful formed the unsuccessful performance group, whereas those whose school mark was equal to or higher than the indicated one formed the successful performance group. Attributions for performance. Attributions for the perceived successful / unsuccessful school performance in both school subjects were assessed by the slightly modified Causal Dimension Scale II (CDSII; McAuley et al., 1992; see Stephanou, 2004b, 2005). The students first indicated the most important cause which, in their opinion, influenced their performance and then classified that cause along the attributional dimensions of locus of causality (causes internal / external to the student), stability (causes stable / unstable over time), personal controllability (factors controllable / uncontrollable by the student) and external controllability (controllable / uncontrollable by others). Each of the causal dimensions consists of three items, ranging from 1 = negative pole (e.g., not at all stable) to 7 = positive pole (e.g., totally stable). In mathematics, Cronbach's alphas were: .76 for locus of causality, .80 for stability, .85 for personal controllability, and .73 for external controllability. In language, Cronbach's alphas were .80, .83, .80 and .76 for locus of causality, stability, personal controllability and external controllability, respectively. Emotions for performance. Children's emotions for their school performance were assessed by having them indicate the extent to which they experienced ten emotions: happiness, pleasure, pride, encouragement, not angry - angry, cheerfulness, confidence, calmness, not anxiety - anxiety, and enthusiasm. The emotions had the form of adjectives with two opposite poles, the positive pole having the high score of 5 and the negative pole having the low score of 1 (e.g., happy 5 4 3 2 1 unhappy). The consistency of the scale was based on previous research (see Stephanou, 2011b; Stephanou & Tatsis, 2008; Weiner, 1992, 2002, 2005, 2006). Cronbach's alphas were .84 and .83 for mathematics and language, respectively.
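As a concrete illustration of two of the operational details above, the rule for forming the successful and unsuccessful performance groups (a student counts as successful when his or her teacher-assigned mark reaches the self-set threshold) and the Cronbach's alpha reliabilities reported for the scales, the following minimal Python sketch may help. It is not the authors' analysis code; the function names, the example marks and thresholds, and the item-score matrix are hypothetical.

import numpy as np

def performance_group(mark, threshold):
    # 'successful' if the teacher-assigned mark (1-10) is equal to or higher
    # than the student's self-set threshold, otherwise 'unsuccessful'.
    return "successful" if mark >= threshold else "unsuccessful"

def cronbach_alpha(item_scores):
    # Standard Cronbach's alpha for a respondents x items matrix of scores.
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                                # number of items in the scale
    item_variances = x.var(axis=0, ddof=1).sum()  # sum of the item variances
    total_variance = x.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical example: four expectation items answered by five students (1-10 scale).
items = np.array([[8, 7, 8, 9],
                  [5, 6, 5, 5],
                  [9, 9, 8, 9],
                  [6, 7, 6, 6],
                  [7, 7, 8, 7]])
print(cronbach_alpha(items))                   # reliability of the four-item scale
print(performance_group(mark=7, threshold=8))  # -> 'unsuccessful'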
Hope. Children's dispositional hope was examined through the Children's Hope Scale for ages 8 to 16 (Snyder, Hoza, et al., 1997), which comprises three agency thinking (e.g., "I think I am doing pretty well") and three pathways thinking (e.g., "I can think of many ways to get the things in life that are most important to me") items. Responses ranged from 1 = None of the time to 6 = All of the time. This scale is a valid and reliable research instrument for examining dispositional hope in the Greek elementary school population (see Stephanou, 2011a; Stephanou & Balkamou, in press). In this study Cronbach's alphas were .90 and .88 for agency thinking and pathways thinking, respectively. The participants' Personal information scale consisted of a set of questions relevant to personal factors, such as age and gender. Research Design and Procedure All the participants completed the questionnaire for each of the two subjects in the middle of the school year. The children individually completed the scales in front of the researcher in quiet classrooms in their schools. The students initially completed the hope scale, while, after one week, they responded to the rest of the scales. In order to ensure that any relationship among the examined variables was not due to the procedure used, the participants completed, first, the performance expectation scale, then the emotions scale and, finally, the attributions scale. To match the questionnaires that were responded to by the same student, students were asked to choose a code name and use it on the response sheets. To match the students with the marks given by their teachers, first, the participants and their classmates were given the school-reported grades in both school subjects, and, then, the participating children were asked to spot and rewrite their grades on a separate sheet, and to use their code name on it. Students were assured of anonymity and confidentiality. The findings from subsequent repeated measures ANOVAs, examining differences between attributions within each group of school performance (perceived successful / unsuccessful) in each school subject, post hoc pairwise comparisons and examination of the mean scores (Table 1), showed that the children mainly attributed their perceived successful school performance to stable and internal factors in both school subjects. In contrast, they predominately attributed their perceived unsuccessful performance in mathematics and language to unstable factors and to both external and unstable factors, respectively.
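The repeated measures ANOVAs summarized above compare the four attributional dimensions within one performance group. A rough sketch of how such an analysis can be run with statsmodels follows; it is illustrative only, the data frame and its values are hypothetical, and it is not the authors' code.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per student x attributional dimension,
# with scores on the 1-7 causal dimension scale, for one performance group.
df = pd.DataFrame({
    "student":   [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "dimension": ["locus", "stability", "personal_control", "external_control"] * 3,
    "score":     [6, 6, 5, 3, 5, 6, 5, 2, 6, 5, 6, 3],
})

# Within-subjects (repeated measures) ANOVA: do mean scores differ across the
# four attributional dimensions for this group of students?
result = AnovaRM(data=df, depvar="score", subject="student",
                 within=["dimension"]).fit()
print(result)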
To further specify the source of these differences, ANOVAs with the perceived school performance (successful / unsuccessful) as between-subjects factor were conducted in mathematics and language separately. These analyses revealed that the students who estimated their school performance as successful, as compared to the students who estimated their school performance as unsuccessful, made more internal, stable, personal controllable and external uncontrollable (no difference in mathematics) attributions. The results from discriminant analyses (Table 1), with stepwise method, confirmed the univariate effects and, in addition, showed that stability (discriminating power = .91) was the most powerful dimension in discriminating the successful performance group from the unsuccessful performance group in mathematics, and that locus of causality (discriminating power = .84) was the most powerful attributional dimension in separating the one group from the other in language. Also, external controllability made no significant contribution to separating the two groups of students in language and in mathematics. Similarly, locus of causality was not a significant discriminator of the two performance groups in mathematics. Hypotheses 1a and 1b were partly confirmed by the above results. The repeated measures ANOVAs, examining differences between the emotions within each performance group (perceived successful / unsuccessful) and school subject, showed that the students experienced various emotions, and a variety of intensities of emotions, in mathematics: successful performance group, F(9, 170) = 49.00, p < .01, η² = .72, and unsuccessful performance group, F(1, 154) = 32.85, p < .01, η² = .68; and in language: successful performance group, F(9, 182) = 37.16, p < .01, η² = .64, and unsuccessful performance group, F(9, 142) = 22.80, p < .01, η² = .59. Inspection of the scores (Table 2) and the post hoc pairwise comparisons indicated that the students experienced intense positive emotions for the perceived successful school performance, mainly not angry, confidence, enthusiasm, happiness and encouragement. In contrast, they felt moderate negative emotions for the perceived unsuccessful school performance, particularly angry, sadness, lack of calmness, shame (only in mathematics) and discouragement (only in language).
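The stepwise discriminant analyses reported here for the attributional dimensions, and below for the emotions, can be approximated with a linear discriminant analysis on standardized predictors, where the standardized coefficients play a role loosely analogous to the reported discriminating powers. The sketch below is only an approximation under that assumption (scikit-learn's LDA does not perform stepwise variable selection), and the arrays X and y are hypothetical.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical data: rows are students, columns are the four causal dimensions;
# y marks group membership (1 = successful, 0 = unsuccessful performance group).
X = np.array([[6, 6, 5, 3], [5, 6, 6, 2], [6, 5, 6, 3], [7, 6, 5, 2],
              [3, 2, 3, 5], [2, 3, 2, 4], [3, 2, 2, 5], [2, 3, 3, 4]], dtype=float)
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

X_std = StandardScaler().fit_transform(X)        # standardize so coefficients are comparable
lda = LinearDiscriminantAnalysis().fit(X_std, y)

# Larger absolute coefficients indicate variables that contribute more to
# separating the two groups along the single discriminant function.
names = ["locus", "stability", "personal_control", "external_control"]
for name, coef in zip(names, lda.coef_[0]):
    print(name, round(float(coef), 2))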
Discriminant analysis, with stepwise method, was conducted to determine the set of emotions that best discriminated the two groups of children in each school subject. These analyses (Table 2) confirmed the univariate findings and, in addition, revealed that (a) in both school subjects, the students who estimated their school performance as successful, compared to students who estimated their school performance as unsuccessful, felt better, except for pleasure, anxiety - not anxiety and enthusiasm, (b) the emotion of not angry - angry (discriminating power = .72), followed by the emotion of confidence (discriminating power = .41), was the most powerful factor in discriminating the group of students with the successful school performance from the group of students with the unsuccessful school performance in mathematics, (c) the emotion of not angry - angry (discriminating power = .68), followed by the emotion of encouragement (discriminating power = .60), was the most powerful discriminator in separating the successful from the unsuccessful performance group in language, and (d) the emotions of cheerfulness and calmness in language, and the emotions of cheerfulness, happiness and encouragement, were found not to further differentiate the one group from the other group of students. The above findings partly confirmed Hypotheses 2a and 2b. The Role of Hope in School Performance The results from the four ANOVAs, with the perceived (successful / unsuccessful) school performance as between-subjects factor, and examination of the mean scores showed that the students who perceived their school performance as successful, in comparison to students who perceived their school performance as unsuccessful, had higher agency thinking and higher pathway thinking in both school subjects. The results from discriminant function analyses (Table 3), with stepwise method, confirmed these findings and, in addition, showed that agency thinking, as compared to pathway thinking, was a more powerful factor in discriminating the one group from the other in mathematics (discriminating power = .95) and in language (discriminating power = .91). Because we were also interested in the role of hope within perceived (successful / unsuccessful) performance and school subject, correlation coefficients and regression analyses within each group of performance were conducted. The results from the analyses showed that higher levels of hope were related to less perceived unsuccessful performance, and more perceived successful performance, in language and, particularly, in mathematics. Thus, Hypotheses 3a and 3b were partly confirmed. The Role of Hope on Attributions and Emotions for School Performance The results (Table 4) from a series of regression analyses, with agency thinking and pathway thinking as predictive variables and each of the attributional dimensions as predicted variable, within each group of perceived (successful / unsuccessful) performance and school subject (mathematics / language), revealed the significant role of hope.
More accurately, hope (mostly, agency thinking) influenced the formation of the attributional dimensions, mainly in the perceived unsuccessful performance group, and particularly in mathematics. Furthermore, agency thinking and pathway thinking, in combination, influenced the formation of stability more than that of any other attributional dimension in the successful, R² = .24, F(2, 178) = 28.50, p < .01, and unsuccessful, R² = .31, F(2, 166) = 36.00, p < .01, performance groups in mathematics, and in the successful performance group in language, R² = .19, F(2, 188) = 22.85, p < .01. In the unsuccessful performance group in language, agency thinking and pathway thinking, as a group, were a better predictor of locus of causality, R² = .29, F(2, 148) = 30.10, p < .01, than of the other attributional dimensions. In addition, in mathematics, higher-hope students, as compared with lower-hope students, made more internal (pathway thoughts had no effect), personal controllable, stable and external uncontrollable (pathway thoughts had no effect) attributions for their perceived successful performance, and more external (pathway thoughts had no effect), personal uncontrollable (agency thinking had no effect), unstable and external controllable attributions for their perceived unsuccessful performance. In language, higher-agency hope students, as compared with lower-agency hope students, made more internal, stable and external uncontrollable attributions for their perceived successful performance, and more personal uncontrollable and unstable attributions for their perceived unsuccessful performance. High pathways-hope had a positive impact only on personal controllable and external attributions for successful and unsuccessful performance, respectively. Thus, Hypothesis 4a was in the main confirmed. The results (Table 5) from a series of regression analyses, with agency thinking and pathway thinking as predictive variables and each of the emotions as predicted variable, within each group of perceived (successful / unsuccessful) performance and school subject (mathematics / language), showed that (a) hope positively influenced the formation of the emotions for the successful school performance and, mainly, the unsuccessful school performance in language and in mathematics, (b) higher-hope students, as compared with lower-hope students, felt better about their performance in both school subjects, (c) hope was a more determining factor in formulating the emotions of enthusiasm, encouragement and optimism than the rest of the emotions, (d) the relative power of pathway thinking and agency thinking in formulating emotional experience varied across emotions, school subjects and between the two groups of performance in each school subject, and (e) pathway thinking, in comparison to agency thinking, was a better predictor of most of the emotions in the successful and unsuccessful performance groups in both school subjects, while the pattern was reversed in angry and anxiety for unsuccessful performance in mathematics, in enthusiasm and cheerfulness for successful performance in mathematics, in sadness for unsuccessful performance in language, and in enthusiasm, pleasure and cheerfulness for successful performance in language. The above findings partly confirmed Hypothesis 4b.
Table 4. Findings from regression analyses for the effects of hope (agency thinking, pathway thinking) on the attributional dimensions for the perceived successful and unsuccessful school performance in mathematics and language (columns: Mathematics / Language, Successful / Unsuccessful performance). Note: the nature of the emotions is positive and negative in the perceived successful and unsuccessful performance, respectively; A. Th = Agency thinking, P. Th = Pathway thinking. Effects of Hope on the Impact of Attributions on Emotions for School Performance Because we were also interested in the mediating role of hope in the impact of the attributions on the emotions for the perceived successful and unsuccessful school performance in language and in mathematics, a series of hierarchical regression analyses were conducted. Each of the emotions was the predicted variable, the attributional dimensions were entered at the first step, and agency thoughts and pathway thoughts were entered at the second step of the analysis. The results from these analyses (Table 6) revealed that (a) hope and attributions, in combination, accounted for a significant variance in the emotions for the perceived successful school performance in mathematics, R² ranged from .13 (pride) to .53 (enthusiasm), and in language, R² ranged from .25 (happiness) to .49 (pleasure), and in the emotions for the perceived unsuccessful school performance in mathematics, R² ranged from .35 (discouragement) to .56 (anxiety, non-enthusiasm), and in language, R² ranged from .41 (angry) to .75 (non-enthusiasm), (b) hope (agency thinking and pathways thinking, together) enhanced the effects of attributions on some of the emotions in the successful performance groups in mathematics, R²ch ranged from .046 to .21, and language, R²ch ranged from .056 to .12, and in the unsuccessful performance groups in mathematics, R²ch ranged from .043 to .28, and language, R²ch ranged from .06 to .23; that means that the students with higher hope were more likely to use the specific attributional pattern and feel better about their school performance than the children with lower hope. Also, (c) both agency thinking and pathway thinking had unique effects on most of the emotions, with the exceptions being the emotion of not angry - angry in both school subjects, pride for mathematics success (only for agency thinking) and pleasure for success in language (only pathway thinking). Finally, (d) locus of causality and personal controllability, as compared to the other attributional dimensions, were better predictors of most of the emotions for the unsuccessful math performance and the successful language performance, respectively. Hypothesis 4c was partly confirmed. Effects of Hope on Performance Expectations The results from a series of regression analyses, with agency thoughts and pathways thoughts as predictors and performance expectations as predicted variable, showed that in the perceived successful performance groups in both school subjects, high-pathway thinking children expected higher school performance. In contrast, in the perceived unsuccessful performance group in language, low-agency children had low expectations of future performance. Hypothesis 5a was partly confirmed by the above findings.
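The hierarchical regressions reported in this and the following section amount to comparing the R² of a model containing only the earlier-step predictors with the R² of a model that adds the later step, the difference being the R²ch values quoted above. A minimal sketch with statsmodels follows; it is not the authors' code, and the simulated arrays (attr for the attributional dimensions, hope for agency and pathway thinking, emotion for one emotion score) are hypothetical.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60
attr = rng.normal(size=(n, 4))   # step 1 predictors: four attributional dimensions
hope = rng.normal(size=(n, 2))   # step 2 predictors: agency thinking, pathway thinking
emotion = attr @ [0.4, 0.3, 0.2, 0.1] + hope @ [0.5, 0.4] + rng.normal(scale=0.8, size=n)

m1 = sm.OLS(emotion, sm.add_constant(attr)).fit()                           # step 1
m2 = sm.OLS(emotion, sm.add_constant(np.column_stack([attr, hope]))).fit()  # step 2

r2_change = m2.rsquared - m1.rsquared             # the R2ch reported in the text
f_stat, p_value, df_diff = m2.compare_f_test(m1)  # F test for the R2 change
print(m1.rsquared, m2.rsquared, r2_change, p_value)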
Effects of Hope on the Impact of Attributions and Emotions on Performance Expectations The main results from the four hierarchical regression analyses (Table 7), with performance expectations as the predicted factor, and emotions for performance entered at the first step, attributions for performance entered at the second step and hope entered at the third step of the analysis, are the following: (a) the three sets of predictors, as a group, had a significant and positive effect on the formation of performance expectations in the perceived successful, R² = .80, and unsuccessful, R² = .72, performance groups in mathematics, and in the perceived successful, R² = .74, and unsuccessful, R² = .87, performance groups in language, (b) hope, attributions and emotions had unique and complementary effects on performance expectations, (c) pathway thoughts uniquely contributed to performance expectations, except in the perceived unsuccessful performance group in mathematics, whereas agency thinking explained a significant amount of the variability in performance expectations only in the perceived unsuccessful performance group in language, and (d) hope (pathways thinking and agency thinking, in combination) moderately influenced the interactive impact of attributions and emotions on performance expectations. Hypothesis 5b was partly confirmed by these findings. Discussion The main aim of this study was to investigate (a) possible differences between the students who perceive their school performance in language and mathematics either as successful or unsuccessful with respect to the subsequent attributions and emotions, and to hope (pathways thinking, agency thinking), and (b) the role of hope in the generation of attributions, emotions and performance expectations, and in their inter-effects. Attributions and Emotions for School Performance The attributional pattern for the perceived successful and unsuccessful school performance in language and mathematics was in the main as expected.
Specifically, the children attributed their performance to various attributional dimensions, reflecting the high importance of both school subjects for their personal identity, since under such conditions individuals search for explanations (Weiner, 1992, 2005). Also, by attributing the perceived successful school performance to internal, stable and personal controllable causes, the participants enhanced themselves and multiplied the probability of future success (see Mullen & Riordan, 1988; Peterson & Steen, 2005; Weiner, 1995, 2005). By attributing the perceived unsuccessful school performance to external and unstable (not in mathematics) factors, the children protected themselves and minimized the chances of future failure (Peterson et al., 1993; Stephanou, 2005, 2007b; Weiner, 2002). The high importance of the tasks for the participants and the desirability of good performance contributed to these results. These findings may also be associated with the students' high motivation to succeed, and with the nature of the tasks (see Anderman & Wolters, 2006; Pintrich & Schunk, 2002). Language and Mathematics are major subjects of the school curriculum, and successful school performance requires constant effort and aptitudes. Consequently, the perceived success was ascribed to adequate ability and high effort, whereas failure was not attributed to lack of these factors. The age of the participants may be another explanatory factor of these results. More precisely, the children, being at the specific age, might have expected positive performance, and confirmation or non-confirmation of their expectations produced the specific attributional pattern (see Bless, 2003; Trope & Gaunt, 2005). Further research is needed to examine the personal and psychological processes that seem to generate attributions in various school subjects during primary school. However, it should be mentioned that attributing the perceived unsuccessful performance in mathematics to external uncontrollable and internal negative factors minimizes the chances for future success, and underestimates the teachers' beneficial role for the students (Schunk & Zimmerman, 2006; Weiner, 1995, 2006). This latter finding may be partly explained by subjective or objective task difficulty and self-efficacy (Bandura, 1997; Wigfield & Eccles, 1992). Perhaps, as previous studies have shown (e.g., De Corte, Op't Eynde, & Verschaffel, 2002; Efklides, 2001; Stephanou, 2004a, 2004b, 2008), students doubted their abilities in this subject and considered mathematics as difficult. Besides, by 10 years of age, students begin to develop distinct views of their competence in different domains (Marsh, Craven & Debus, 1998). However, this needs to be further investigated.
The findings regarding the students' emotions for their school performance were in the main consistent with our hypotheses and previous research evidence. Perhaps the high importance of good performance in mathematics and language for the children produced various emotions (Frijda, 1993, 2009; Goetz et al., 2003; Parrott, 2003), and contributed to discriminating the group of children with the perceived unsuccessful performance from the group of children with the perceived successful performance by the attribution-dependent affects rather than the outcome-dependent affects. This specific finding is in the main against Weiner's (1992, 2002, 2005) theory but in line with the notion that students search for explanations of their performance in high ego-involvement tasks (see Mullen & Riordan, 1988; Peterson, 1990). Also, motivation in association with the age of the participants might have contributed to the observed intensity of the emotions in the perceived successful and unsuccessful school performance groups. Specifically, since the two school subjects were important for the students and they desired to succeed, they might have been motivated to see the task positively, and be optimistic about their performance. Thus, confirmation of high success expectations produced intense positive emotions (see Bless, 2003; Trope & Gaunt, 2005), while the unexpected unsuccessful performance produced moderate negative emotions (see Carver & Scheier, 2000; Frijda, 2007, 2009; Parrott, 2003). Furthermore, the elementary school children seemed to have had a mastery goal orientation, which is related to positive emotions (Meece, Blumenfeld & Hoyle, 1988; Smith, Sinclair, & Chapman, 2002). Research needs to clarify such issues. The two groups of children in both school subjects were discriminated by the other-related emotions (angry), followed by the expectancy-related affects (confidence, encouragement), and the self-esteem related affects (pride). This specific finding is consistent with Weiner's (2001, 2002, 2005) model, research evidence and the notion that emotions are "socially constructed, personally enacted" (Lazarus, 1991; Schutz et al., 2006; Stephanou, 2007a, 2011b; Stephanou et al., 2011). However, the relative power of the emotions in discriminating the perceived successful from the unsuccessful performance groups varied between the two school subjects. Similarly, the prevalence of the emotions varied within the perceived successful and unsuccessful performance groups. It should also be mentioned that the experience of certain negative emotions does not facilitate future good school performance, especially after an unsuccessful experience. For example, previous research evidence suggests that anger is positively related to attributing malicious intentions to others, and sadness shapes malicious attributions for low achievement (Pekrun, 2009; Schutz & Lenehart, 2002). The students felt intense anxiety in mathematics in the successful and, mainly, in the unsuccessful performance group. Probably, as already mentioned, the students considered mathematics as difficult. This latter finding is particularly important because anxiety leads students to focus on the self rather than on the content of the course and the strategies for working through it (Frijda, 2005).
Children also experienced discrete emotions by cognitively appraising their school performance along the attributional dimensions. More precisely, the attributional dimensions had a unique effect on emotions when hope entered the analysis; the role of hope is discussed below. The fact that attributions were a more powerful contributor to the generation of the emotions in the perceived unsuccessful school performance groups than in the perceived successful school performance groups is consistent with the notion that individuals search for explanations of their negative rather than their positive experiences (Weiner, 1992, 2002, 2005). The pattern of correlations between attributions and emotions partly supports the view that each attributional dimension is related to a specific kind of emotion (Weiner, 2005, 2006). It seems that the students appraised the status of self-factors in pursuing their goals, which included performing well in the specific classroom activities and being good in the respective school subject, since such emotions are experienced in relation to goals (Carver & Scheier, 2000; Frijda, 2005, 2009; Linnenbrink & Pintrich, 2002; Stephanou & Kyridis, in press). However, this needs to be further investigated.
More precisely, locus of causality, as compared to the other attributional dimensions, was a better predictor of most of the emotions for the perceived unsuccessful performance in both school subjects. Personal controllability and stability, in comparison to the other attributional dimensions, proved the most powerful formulators of most of the emotions for the perceived successful performance in mathematics and in language, respectively. Unexpectedly, external controllable attributions played a significant role in students' emotions for performance in language, underlining the children's sensitivity to significant others, such as teachers and classmates, in the formation of emotional experience and motivation in particular academic tasks (Eccles & Wigfield, 2000; Goetz et al., 2007; Hidi & Harackiewicz, 2002; Stephanou, 2005, 2007b, 2011b; Weiner, 2002). The stability dimension partly confirmed Weiner's (1992, 2005) prediction that stable attributions influence the magnitude of self-related and, in particular, expectancy-related affects.
The Role of Hope in Attributions, Emotions, School Performance and Performance Expectation
To summarize, the findings regarding hope were mainly consistent with our expectations. More precisely, in accordance with previous studies (see Roberts, 2005; Snyder et al., 2005) and Snyder's (2000) hope theory, the children with high hope achieved high performance, enjoyed their successful performance, and used positive appraisal for their perceived successful performance in language and in mathematics. In a similar way, the high-hope children, as compared to the low-hope children, performed better, suffered less, and used effective appraisal for their perceived unsuccessful school performance in language and in mathematics. These findings indicate that the high-hope children, not the low-hope children, searched for something positive, a finding consistent with previous empirical evidence (see Carver & Scheier, 2005). Hope was also a more powerful contributor to the generation of emotions and appraisals of the perceived unsuccessful school performance than of the perceived successful school performance, complementing previous research evidence which suggests that high-hope people use positive reappraisal in a variety of stressor situations (see Gilham, 2000; Snyder et al., 1999).
On the other hand, the significant role of hope in the successful school performance experience is in line with Siegel (1992), who mentioned that "individual differences factors can influence both a child's responses to stress and his or her use of coping strategies" (p. 4). Further, as was indicated by Siegel and supported by respective research, children tend to respond to daily life stimuli by using the same mechanisms they use in responding to stress (see for a review Roberts et al., 2005). Additionally, there is an increasing recognition that a comprehensive conceptualization of coping mechanisms views them as normal developmental components (Carr, 2005; Dryfoos, 1998; Jaycox, Reivich, Gilhan & Seligman, 1994).
The differential contribution of pathway thinking and agency thinking to school performance, emotional experience, and cognitive appraisals of school performance in language and mathematics is an indication that hope is interactively constructed by these two elements (see Snyder et al., 1991; Snyder et al., 2005). Contrary to our hypothesis and previous literature, pathway thinking played a minor role in some of the emotions, in evaluating school performance, and in attributing its causes. This may reflect the notion that agency thinking shares similarity with self-efficacy (Bandura, 1997) and, being the motivational component of hope, proved crucial in the case of difficulties, such as unsuccessful performance (see Snyder, 1994; Stephanou, 2011b).
With reference to attributions, in addition, hope predominantly influenced stability more than the other attributional dimensions for the successful and unsuccessful performance in mathematics and for the successful performance in language, while, unexpectedly, it mainly influenced locus of causality and personal controllability for the unsuccessful performance in language. These findings probably reflect the children's desire for, and assurance of, successful school performance. These findings may also support other findings which reported that high-hope, as compared with low-hope, individuals tend to present themselves more positively and as socially desirable (Snyder, Hoza, et al., 1997; Taylor, 1989). In addition, hope proved a more powerful predictor of the attributional appraisal of school performance (particularly unsuccessful performance) in mathematics than in language. This result is in line with the high importance of mathematics for students' personal identity and academic development (Martin & Debus, 1998; Mason, 2003; Stephanou, 2005, 2008). However, research needs to examine this speculation.
The pattern of the effects of hope on emotions is consistent with empirical evidence (see Roberts et al., 2005; Seligman, 2005) showing the important role of hope in expectancy-related (encouragement/discouragement, confidence/non-confidence, enthusiasm/non-enthusiasm), goal pursuit-related (pleasure/displeasure, anxiety/non-anxiety), self-related (pride), and other-related (non-anger/anger) affects. Furthermore, hope had direct and indirect, through attributions, effects on the emotions for the perceived successful school performance and, mainly, for the perceived unsuccessful school performance.
The results from the present study also, confirming in the main our hypotheses, revealed that hope, attributions, and emotions had unique and complementary effects on performance expectations. Specifically, the three sets of concepts, in combination, proved a more powerful predictor of performance expectations for the unsuccessful than for the successful performance in language, lending further support to earlier findings (see Forgas & Smith, 2005; Greitemeyer & Weiner, 2003; Stephanou, 2007a, 2011a, 2011b; Weiner, 2005). However, unexpectedly, the three predictors, as a group, better predicted performance expectations in the successful than in the unsuccessful performance group in mathematics. Perhaps, as previous studies have shown (e.g., De Corte et al., 2002; Efklides, 2001; Stephanou, 2004a, 2004b, 2008), students doubted their abilities in this school subject and considered mathematics difficult. Also, in agreement with Weiner's (1992, 2005) model, the future-related (encouragement/discouragement, confidence/non-confidence), desirable-high-performance-related (pleasure/displeasure, cheerfulness), goal-related (anxiety), and other-related (non-anger/anger) emotions contributed to performance expectations in both school subjects. Stability, as expected, was a significant factor in the formation of performance expectations in all performance groups. However, unexpectedly, stability, compared to the other attributional dimensions, had less effect on the generation of performance expectations in the unsuccessful performance group in language, probably reflecting children's beliefs that such a performance can become successful by controlling the situation. However, these issues need to be further investigated.
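The hierarchical regression logic described above (the attributional dimensions entered in a first step, hope added in a second step, with the change in explained variance indicating hope's unique contribution to performance expectations) can be sketched roughly as follows. This is a minimal, hypothetical illustration: the variable names, the simulated data, and the two-step specification are assumptions made for the example and do not reproduce the study's actual models or results.

# Hypothetical sketch of a two-step hierarchical regression: step 1 enters the
# attributional dimensions, step 2 adds hope (agency and pathway thinking);
# the increase in R^2 reflects hope's unique contribution. Illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 342  # roughly comparable to the sample size implied by the reported F(1, 340) tests
df = pd.DataFrame({
    "locus": rng.normal(size=n),
    "stability": rng.normal(size=n),
    "personal_control": rng.normal(size=n),
    "agency": rng.normal(size=n),
    "pathways": rng.normal(size=n),
})
# Simulated outcome: expectation driven mainly by stability and hope.
df["expectation"] = (0.4 * df["stability"] + 0.3 * df["agency"]
                     + 0.2 * df["pathways"] + rng.normal(size=n))

step1 = ["locus", "stability", "personal_control"]
step2 = step1 + ["agency", "pathways"]

m1 = sm.OLS(df["expectation"], sm.add_constant(df[step1])).fit()
m2 = sm.OLS(df["expectation"], sm.add_constant(df[step2])).fit()
print(f"Step 1 R^2 = {m1.rsquared:.3f}")
print(f"Step 2 R^2 = {m2.rsquared:.3f} (Delta R^2 = {m2.rsquared - m1.rsquared:.3f})")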
Similarly, in line with Snyder's (2000) theory and previous research evidence (e.g., Peterson, 2000; Scheier et al., 2000; Stephanou, 2011a), hope had direct and indirect (via the interaction of attributions and emotions) effects on performance expectations in all groups of children, except for unsuccessful performance in mathematics. That means that the children with higher pathway thinking were more likely to use the specific attributional pattern, to enjoy their performance more, and to have higher expectations of good school performance than the children with lower pathway thinking. In contrast, in the unsuccessful group in language, the children with higher agency thinking and higher pathway thinking, as compared with the children with lower respective thinking, were more likely to apply the specific attributional pattern, to suffer less over failing to achieve their goal, and to expect future good school performance. Research needs to verify the relative roles of pathway thinking and agency thinking in children's school performance across various school subjects.
Implications of the Findings for Educational Practice and Future Research
The present findings address the significant role of children's hope (pathways thinking, agency thinking) in the formation of their perceptions of school performance in language and mathematics as successful or unsuccessful, the subsequent attributions and emotions, and their performance expectations. Hence, children need to be helped to maximize hopeful thinking. This is inculcated through interactions with their caretakers and teachers (McDermott & Hastings, 2000; Snyder et al., 1997). Children should be encouraged to formulate clear goals, produce many and various pathways to them, pursue the goals, and reframe obstacles as challenges to be overcome (Snyder, 2000).
The findings of this study, in addition, stress the high importance of students' cognitive and emotional involvement in school performance, and the significant effects of these processes on performance expectations. Attributional retraining (Seligman, 2002) helps children change maladaptive attributional patterns of school achievement. Also, students' recognition and regulation of their emotional experience is an essential part of successful learning and subjective well-being (Boekaerts et al., 2000; Efklides & Volet, 2005; Frijda, 2005; Pekrun, 2009).
In conclusion, the findings from this study stress the importance of examining children's school performance along with the role of hope in evaluating it, attributing its causes, experiencing emotions, and forming expectations. Research is needed to examine the role of children's past experience and self- and task-beliefs in the observed associations, and the consequences of the present emotional and cognitive pattern for academic development. Finally, research needs to investigate the role of parents' and teachers' support in children's hope thinking.
Table 1. Descriptive statistics and findings from discriminant analysis for students' attributions for their perceived school performance in mathematics and language as successful or unsuccessful. Table 2.
Descriptive statistics and findings from discriminant analyses for students' emotions for their perceived school performance in mathematics and language as successful or unsuccessful. Note. F(1, 340) > 19.36, p < .01; F(1, 340) < 19.36, p < .05; F(1, 340) < 2.50, p > .05. The nature of the emotions is positive and negative in the perceived successful and unsuccessful performance groups, respectively; *: nonsignificant contribution in discriminating the two groups. Table 3. Descriptive statistics and findings from discriminant analyses for the effects of students' hope (agency thinking, pathway thinking) on their perceived school performance in mathematics and language as successful or unsuccessful. Note. All F(1, 340) values are significant at the .01 level of significance. Table 6. Results from hierarchical regression analyses for the impact of hope (agency thinking, pathway thinking) on the effects of attributions on emotions for the perceived successful/unsuccessful school performance in mathematics and language. Table 7. Results from hierarchical regression analyses for the role of hope in the effect of attributions and emotions for the perceived successful/unsuccessful performance on performance expectations in language and mathematics.
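The discriminant analyses summarized in the tables above can be illustrated, very roughly, with the following sketch, which fits a linear discriminant function separating a perceived-successful from a perceived-unsuccessful group on a handful of emotion ratings. The emotion names, the simulated Likert-type data, and the coefficient standardization are assumptions made for the example; the sketch does not reproduce the study's actual variables or results.

# Hypothetical sketch: discriminating perceived successful vs. unsuccessful
# performance groups from emotion ratings (variable names are illustrative,
# not the study's dataset).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 342  # roughly matches the reported F(1, 340) degrees of freedom
emotions = ["anger", "confidence", "encouragement", "pride", "anxiety"]

# Simulate 1-5 Likert-type ratings; the "successful" group rates
# positive emotions higher and negative emotions lower.
group = rng.integers(0, 2, size=n)             # 0 = unsuccessful, 1 = successful
shift = np.array([-0.8, 0.9, 0.8, 0.7, -0.5])  # hypothetical group differences
X = rng.normal(3.0, 1.0, size=(n, len(emotions))) + np.outer(group, shift)

lda = LinearDiscriminantAnalysis()
lda.fit(X, group)

# Roughly standardized discriminant coefficients indicate each emotion's
# relative contribution to separating the two groups.
std_coef = lda.coef_[0] * X.std(axis=0)
for name, w in sorted(zip(emotions, std_coef), key=lambda t: -abs(t[1])):
    print(f"{name:>14}: {w:+.2f}")
print(f"classification accuracy: {lda.score(X, group):.2f}")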
2018-12-11T02:45:23.027Z
2012-05-28T00:00:00.000
{ "year": 2012, "sha1": "92f6ea02593f99c3cbe60635cc8ad253c7a0d300", "oa_license": "CCBY", "oa_url": "https://ccsenet.org/journal/index.php/ijps/article/download/15496/11728", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "92f6ea02593f99c3cbe60635cc8ad253c7a0d300", "s2fieldsofstudy": [ "Education", "Psychology", "Mathematics" ], "extfieldsofstudy": [ "Psychology" ] }
271380211
pes2o/s2orc
v3-fos-license
Cefepime-Induced Encephalopathy in Patients Treated for Urinary Tract Infection Cefepime is a fourth-generation cephalosporin antibiotic administered intravenously used to treat various bacterial infections, including urinary tract infections. Administering cefepime to patients should be done with caution, understanding both potential risks and side effects. A 74-year-old female presented to the family medicine clinic with abdominal pain and a history of urinary tract infections. The workup included a CT scan that showed bowel obstruction and bladder wall thickening. Due to a history of urinary tract infections, three days following the presentation, the patient underwent an explorative laparotomy. Following the laparotomy, the patient was started on cefepime, a fourth-generation cephalosporin antibiotic. Five days following the initial presentation, the patient became confused and was nonverbal. An encephalopathy workup showed a negative MRI, but an EEG was consistent with encephalopathy. Cefepime was discontinued. Forty-eight hours after cefepime was discontinued, the patient returned to baseline with normal cognitive function. It is crucial that clinicians understand the different classifications of antibiotics, as well as the drugs and potential side effects of prescriptions. Cefepime can be used in gram-negative infections with resistance to more generic antibiotics. It has the ability to cross the blood-brain barrier, making it effective in treating meningitis. It has also been shown to cause encephalopathy as a side effect. It is important that clinicians understand the different generations of cephalosporins, as well as the cross-reactions and potential side effects of prescriptions. These factors must be considered when prescribing broad-spectrum antibiotics, such as cefepime. Introduction Encephalopathy can be caused by many agents and is defined as a disease that affects the function or structure of the brain [1].This can be temporary or long-lasting and varies greatly in presentation [2].Common symptoms of encephalopathy include confusion, memory loss, behavior changes, and loss of consciousness [3].We present a patient with encephalopathy, likely induced by cefepime, a fourthgeneration cephalosporin antibiotic.There are five generations of cephalosporins [4].As a fourthgeneration cephalosporin, cefepime has the ability to penetrate the blood-brain barrier and attack bacteria in the CSF [4].Due to its invasive nature and ability to attack diverse bacteria, cefepime is reserved for multi-drug-resistant bacterial infections and systemic infections [4].While effective, encephalopathy has been reported in approximately 3% of patients taking cefepime, making it a high-risk antibiotic [2,5].Additional risks associated with cefepime include allergic reactions, difficulty breathing, renal dysfunction, and diarrhea [6].While antibiotics are generally safe, it is important to understand the risks and benefits associated with each medication prescribed. 
Case Presentation A 74-year-old female presented to the family medicine clinic with abdominal pain.The patient had a history of diabetes, asthma, hypertension, and recurrent urinary tract infections.A CT of the abdomen and pelvis was ordered.Findings from the CT included small bowel obstruction and chronic bladder wall thickening.The patient was alert and oriented on the day of admission with no symptoms of fever or hypertension.She was prescribed cefepime to eliminate the urinary tract infection and began taking it on day 1, the same day as the presentation.These measures were attempted but were not successful in eliminating her pain.Three days following the initial presentation, she underwent an exploratory laparotomy to eliminate the bowel obstruction.On day 5, she started to get confused.She could nod her head, but she was nonverbal.A detailed encephalopathy workup was performed.An MRI of the brain did not show signs of encephalopathy, such as abnormal signal intensity or edema.At this point, the differential diagnosis for the change in mentation included neurotoxicity from cefepime, infectious disease, and the pain management process.An EEG showed generalized slowing consistent with encephalopathy.Cefepime was discontinued, and a different antibiotic was initiated.She returned to baseline mentation 48 hours later, on day 7, from the initial presentation.A timeline of this case can be seen in Figure 1. Discussion Key findings from this case report include the occurrence and resolution of probable cefepime-induced encephalopathy.Cefepime is frequently used to treat severe and complex urinary tract and abdominal infections, similar to the case presented here.Common side effects of cefepime in adults include diarrhea and rash; however, encephalopathy is a rare documented adverse effect [7][8][9].In this patient, encephalopathy was drug-induced and manifested as confusion and a nonverbal state.Risk factors for developing encephalopathy with cefepime use include renal dysfunction, excessive dosing, and preexisting brain injury [10].Notably, no additional treatment was required, highlighting the importance of drug cessation in resolving symptoms.Although specific testing to confirm cefepime as the cause was not conducted, the correlation between stopping the drug and symptom resolution within two days strongly supports this diagnosis. Cefepime-induced encephalopathy has been reported at an increased rate over the last 10 years [11].The mean age for this type of encephalopathy is 67, and 87% of patients report renal dysfunction [12].Encephalopathy typically occurs five to 10 days following the initial administration of cefepime [9].This case presented similarly to reported cases in the literature, but with the absence of renal failure.This underscores the need for vigilance regarding neurotoxic side effects in patients receiving cefepime, even though such occurrences are uncommon. 
Cefepime and most cephalosporins are primarily excreted via the renal system; thus, patients with renal dysfunction must receive dosage adjustments [4,7].Renal failure prolongs the half-life of cephalosporins in the blood and can induce neurotoxicity [7].Crossing the blood-brain barrier is a relatively unique ability of cefepime, which makes it a good choice for certain meningitis infections [13].The neurotoxicity of cefepime, along with other penicillins and cephalosporins, is not fully understood, but the predominant theory is that it inhibits the gamma-aminobutyric acid (GABA) system [14,15].Inhibition of the GABA system leads to increased excitatory activity and subsequent cell injury, leading to encephalopathy [16].The reduced threshold of excitation due to the downregulation of the GABA system is likely related to the structure of cephalosporins, although more research is being conducted [14,15]. The differential diagnosis for acute encephalopathy is broad but includes subdural hematomas, infectious agents (both systemic and neurologic), inflammatory conditions, and drug-related toxicity [1].The history and physical examination in this case revealed a negative MRI for any hematoma, while the EEG showed signs consistent with encephalopathy.The patient presented here has a history of recurrent urinary tract infections and was taking cefepime to eliminate the infection.Infectious agents could contribute to the neurotoxic state; however, due to the quick resolution of symptoms following cessation of the drug, we concluded that CNS symptoms were due to cefepime toxicity.Thus, this case report describes an example of an acute drug-related neurotoxic form of encephalopathy. The limitations of this study include the diagnosis of encephalopathy with no signs on the MRI.While little research has been done in identifying the sensitivity and specificity of drug-induced encephalopathy, the accuracy of using MRI to identify Wernicke's encephalopathy, encephalitis, and autoimmune encephalopathy is well below 100% [17,18].This indicates that an MRI without symptoms of encephalopathy does not rule out a diagnosis.An additional limitation is that no specific diagnostic tests were performed to confirm the drug's causative role.Although this was the case, the concurrence of drug discontinuation with the subsequent abatement of symptomatology and EEG findings supports the diagnosis of cefepime-induced encephalopathy.Lastly, the lack of long-term follow-up data restricts our understanding of the potential recurrence or long-term consequences of the encephalopathy.Despite these limitations, this case highlights the importance of recognizing the neurotoxic side effects of cefepime.Further reporting of cefepimeinduced encephalopathy and neurotoxicities is needed to further understand the potential adverse side effects of cefepime and potential contraindications. FIGURE 1 : FIGURE 1: Timeline of the presented case
2024-07-24T15:18:38.626Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "3bb1ddb868755924e7e9bcc381a63a5a3855de32", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "7b2f3cc8c684067ff1cf2505da63277ed3ee94f2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
248394256
pes2o/s2orc
v3-fos-license
GENDER-BASED DIFFERENCES IN PERFORMING REFUSAL STRATEGIES AMONG INDONESIAN INTERMEDIATE EFL LEARNERS
In this study, 30 male and 30 female Indonesian intermediate EFL learners were investigated to find out the gender differences in performing refusal strategies and the effect of interlocutor status on the refusal strategies performed by the selected participants. A qualitative approach was employed in this study, using a Discourse Completion Test (DCT) with 12 scenarios. The data were analyzed using the refusal taxonomy proposed by Beebe et al. (1990). The results showed that the indirect strategy was the most frequently used strategy for both male and female students. However, the female students used a slightly greater variety of indirect strategies than the male students did. Moreover, the interlocutor's status had a similar effect on the refusal strategies performed by the two groups of participants. Therefore, there is no significant difference in refusals between male and female students, and the results reveal that EFL students prefer to refuse respectfully to avoid offending others. The researchers propose that future studies use not only the DCT but also other instruments for collecting data, in order to acquire complete data for exploring pragmatic competence.
INTRODUCTION
The high demand for English in today's world requires language users to be pragmatically competent in order to adapt to the requirements of a variety of contexts. Some variables, such as the target language's culture, the speech act utilized in the interaction, the interlocutors' status, and gender, are considered crucial components within particular situations (Tuncer & Turhan, 2019). As a result, English should be researched from more than just a language standpoint. In other words, because it is the world's lingua franca, it is not sufficient to evaluate only its syntactic, morphological, and phonetic features. This means looking at English from the standpoint of how it should be used in various situations that demand distinct grammatical forms or lexical elements. This is where the term "pragmatics" comes in (Hikmahwati et al., 2021). According to Yule (1996), pragmatics is the process through which speakers modify their language use depending on who they are speaking with, when, where, and under what circumstances. Pragmatics, according to Crystal (1997), is the study of language that takes into account its users' choices and the impact of language usage on the participants in the communication act. When it comes to language ability, there are two basic areas to consider: communicative competence and grammatical competence. These two competences should not be considered separate because, according to Hymes (1972), grammatical competence, as well as the capacity to use grammatical skill in a range of communicative circumstances, is included in communicative competence, highlighting the relevance of the sociolinguistic perspective. Grammatical competence includes knowledge of vocabulary, morphology, syntax, and phonology, but communicative competence also includes sociolinguistic, discourse, and strategic competence. Pragmatic competence is classified under sociolinguistic competence (Niezgoda & Röver, 2001). Because pragmatic competence is described as the ability to use language successfully in relation to users and circumstances, pragmatics may be seen as a relationship between society and language, and this is something that should be linked to sociolinguistics.
This study attempts to explore pragmatic competence within this paradigm, specifically the refusal strategies of Indonesian intermediate EFL learners, by looking at gender differences among these students in performing refusal acts. There have been several studies conducted on the realization of the speech act of refusal. First, Jiang (2015) looked at pragmatic transfer in refusal speech acts performed by Chinese high school EFL students and by Americans. The findings showed that Americans preferred direct rejection techniques and positive sentiments more than the Chinese participants did, and pragmatic transfer was clearly shown in both groups. Both pragmatic transfer and L2 verbal ability exhibited a tendency toward a negative relationship in Chinese English learners 1 and 3, and both pragmatic transfer and L2 linguistic ability showed a tendency toward a negative relationship in the content of the rejection technique of excuse. Second, Demirkol (2016) conducted research on how we say "no" in English; the participants in that study did not show any significant differences in their preferences for politeness methods. Third, Seyyed Hatam and Zohre (2014) conducted a comparison study of the refusal strategies of Iranian university English as a Foreign Language learners and non-English learners in their native language. The findings showed that non-English learners used refusal strategies more frequently, whereas EFL learners used adjuncts more frequently. The researchers mentioned above investigated speech acts of refusal only in relation to age, level of education, and power differences. Gender as a variable has not been paid much attention. Thus, this study investigates gender and its contribution to how males and females perform refusals.
Refusal
The negative answer to someone's invitation, recommendation, offer, or request is called a refusal. Beebe, Takahashi, and Uliss-Weltz (1990) stated that the response to the four distinct speech acts of invitation, offer, request, and suggestion is known as the speech act of refusal. Refusal occurs when the addressee refuses to do what the speaker requests. A refusal is a speech act in which a speaker refuses to engage in an activity that the converser suggests (Cheng et al., 1995). Moreover, refusals do not always take the form of a rejection. According to Gass and Houck (1999), when deciding not to accept an initiated act, one can normally choose one of three refusal strategies: rejection, delay, or an alternative proposal. Furthermore, refusals take into account a variety of social factors such as gender, social power, and position. Refusals are important, according to Felix-Brasdefer (2006), since they are sensitive to social characteristics including gender, age, level of education, power, and social distance. In addition, Beebe, Takahashi, and Uliss-Weltz (1990) believe that refusals are an especially difficult task in a second language, because learners may lack the relevant linguistic and pragmatic skills. Because of this complexity, second/foreign language learners must use their pragmatic ability to avoid offense by adopting strategies when performing a refusal. Semantic formulae (expressions that may be used to execute a refusal) and adjuncts (which cannot be employed by themselves but accompany refusal strategies) are the two types of refusal responses. According to Beebe et al.'s (1990) taxonomy, there are two types of strategy: direct and indirect.
In addition, there are four sorts of adjuncts in this categorization; however, they cannot be employed alone and must be used in conjunction with refusal strategies.
Direct Strategy
A direct strategy occurs when the illocutionary force is in line with the linguistic form. This strategy consists of two classifications: (1) performative and (2) non-performative statements.
Adjunct
This strategy cannot be used to conduct a refusal on its own. Adjuncts may occur before or after the semantic formulations (Felix-Brasdefer, 2004). This strategy is classified into four categories, namely: (1) statement of positive opinion/feeling or agreement, (2) statement of empathy, (3) pause fillers, and (4) gratitude/appreciation.
METHOD
This study was carried out in the English Education Department of one of Indonesia's public universities in Banten province. The purpose of this study was to determine gender differences in the execution of refusal strategies and the influence of interlocutor status on the refusal strategies performed by 60 intermediate students (30 males and 30 females). To achieve this goal, a Discourse Completion Test (DCT) was used, because the researchers expected the participants to give their own answers to the situations given. The situations for the male and female groups were the same. Therefore, the most frequently used strategy and the relationship of the strategy used to the status of the interlocutors could be directly compared. According to Billmyer and Varghese (2000), this test requires participants to write down plausible replies depending on the circumstances provided. The data were then evaluated using the three techniques described by Miles, Huberman, and Saldaña (2014), namely data condensation, data presentation, and drawing conclusions or verification. The refusal situations in the DCT are described below.
DISCUSSION
The study aimed to analyze the types of refusal strategies that are most frequently used and the effect of interlocutor status on the strategies employed by the participants. To identify the refusal strategies used, the researchers applied the refusal taxonomy proposed by Beebe et al. (1990) to the findings. After conducting the study, the researchers found 569 strategies used by the female participants and 529 strategies used by the male participants. These findings were classified into twenty-two to twenty-three strategies of the refusal taxonomy. Further discussion of each strategy used is shown in Table 2. To answer the first research question about the most frequently used strategy performed by males and females, the researchers compared the averages of the total direct and indirect strategies. The findings showed that both groups performed the indirect strategy more frequently than the direct strategy: the female group used 67.66% indirect strategies and the male group 68.62%. However, the female group employed a slightly greater variety of indirect strategies than the male group. Additionally, the researchers found that both groups used adjuncts; fewer adjuncts were elicited in the refusals performed by the male group (12.47%) than by the female group (13.70%). These findings indicate that the strategy most frequently performed by both groups is the indirect strategy, and that the female group used a greater variety of indirect strategies and adjuncts than the male group. The second research question concerns the effect of interlocutor status on the strategies employed by the participants.
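Before turning to that question, the kind of frequency tabulation underlying the comparison above (classifying each DCT response with a Beebe et al. taxonomy label and computing per-group percentages) can be sketched as follows. The sketch is purely illustrative: the category labels and counts are hypothetical and do not reproduce the study's coded data.

# Hypothetical sketch: tallying refusal strategies per gender group after each
# DCT response has been coded with a Beebe et al. (1990) taxonomy label.
from collections import Counter

# (gender, strategy_type) pairs for coded responses; "direct", "indirect",
# and "adjunct" would normally be derived from the fine-grained taxonomy codes.
coded_responses = [
    ("female", "indirect"), ("female", "adjunct"), ("female", "indirect"),
    ("male", "direct"), ("male", "indirect"), ("male", "indirect"),
]

def percentages(group):
    counts = Counter(s for g, s in coded_responses if g == group)
    total = sum(counts.values())
    return {s: round(100 * c / total, 2) for s, c in counts.items()}

for group in ("female", "male"):
    print(group, percentages(group))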
In investigating the participants' refusal strategies, three different interlocutor statuses were specified for the refusals, as follows: refusals to an interlocutor of lower status, refusals to an interlocutor of equal status, and refusals to an interlocutor of higher status. Table 3 presents the refusal strategies performed by males and females. As shown in Table 3, the researchers compared the average number of strategies per item performed toward the different types of interlocutor status. The researchers found that the strategies performed varied among the three types of interlocutor status. Nevertheless, there were similarities across the types of interlocutor status in the strategies performed by both groups. Firstly, both the male and female groups used the direct strategy more with interlocutors of lower status than with those of equal or higher status. In contrast, the indirect strategy was employed by both groups less toward the interlocutor of lower status than toward those of equal or higher status. In addition, the use of adjuncts was in line with the direct strategy: both groups performed fewer adjuncts toward the interlocutor of higher status than toward those of equal or lower status. Therefore, the interlocutor's status has an effect on the refusal strategies performed by both groups. However, these findings showed more similarities than differences between the male and female groups in the strategies performed toward the different types of interlocutor status.
Comparing the present findings on the most frequently used refusal strategies and the effect of interlocutor status with the results obtained in previous studies reveals a considerable level of consistency in the refusal strategies performed by the participants. The indirect strategy was the most frequently used strategy for both the male and female groups, ahead of the direct strategy. This result supports the study conducted by Jiang (2015), who investigated pragmatic transfer in refusal speech acts made by Chinese high school EFL (English as a Foreign Language) learners and found that Chinese speakers used more indirect strategies than American English speakers. The indirect strategies of giving a reason, excuse, or explanation were among the top refusals produced in this study. Along with the reason/excuse/explanation strategy, expressing regret was found to be the second most preferred strategy. These findings also run parallel to Demirkol's (2016) study, which found that giving a reason and negotiation were the most frequently used strategies, followed by expressing regret as the second most frequently used strategy among Turkish EFL learners. The female group utilized a slightly greater diversity of indirect strategies than the male group in this study. Liao and Bresnahan (1996) support this, claiming that women employed more strategies than men to decline someone of higher status. The frequent use of the indirect strategy showed that the participants tend to avoid offending the interlocutor, especially an interlocutor of higher status. According to Campillo, Jorda, et al. (2009), indirect realizations are used to soften the negative effects of face-threatening conduct, and they are performed through the use of excuses, explanations, alternatives, and so on. Furthermore, the majority of the participants in this study used adjuncts to express a favorable opinion, empathy, pause fillers, and gratitude.
This result is similar to the study conducted by Seyyed Hatam and Zohre (2014), in which the participants most frequently used statements of positive opinion, feeling, or agreement, as well as expressions of thanks and appreciation. One might presume that the employment of adjuncts was designed to fulfill the aims of politeness (Morkus, 2009).
CONCLUSION
The results of this study showed that there are more parallels than differences in the refusal strategies used by the two groups. To decline the interlocutor's request, offer, invitation, or suggestion, both the female and male groups regularly adopted the indirect strategy. Female participants, however, used a slightly wider range of indirect strategies than male participants. The status of the interlocutor had a comparable influence on the refusal strategies used by the male and female groups. Refusals to a higher-status interlocutor elicited fewer direct strategies than refusals to equal- and lower-status interlocutors, whereas the indirect strategy was used more with higher-status interlocutors than with equal- and lower-status interlocutors; likewise, fewer adjuncts were elicited in refusals to higher-status interlocutors than to equal- and lower-status interlocutors. Thus, there are no significant variations between the male and female groups in the refusal strategies used with respect to the interlocutor's status, and the results demonstrate that EFL learners prefer to decline respectfully to avoid offending anybody.
In this section, the researchers would like to make some recommendations for further study on speech acts, particularly refusal. It is important to acquire not only elicited data but also natural conversation data, in order to develop a full picture of pragmatic competence. Accordingly, researchers should not only utilize the DCT to obtain elicited data, but also additional instruments, such as role-play, to capture natural discourse data. In order to conduct a more interesting study, the researchers suggest taking into account additional refusal factors, such as the influence of age on refusals in the EFL environment, variations in level of education and their effect on refusal strategies in the EFL context, and so on. Moreover, to improve students' pragmatic competence, teachers should construct tasks that expose students to a variety of pragmatic situations, allowing them to perform acceptable speech acts in a given context with persons of various social statuses and social distances, in order to prevent pragmatic failure.
2022-04-27T15:08:12.924Z
2022-04-15T00:00:00.000
{ "year": 2022, "sha1": "f8771415eb9bd8d5fba28e0acf1a046369785b08", "oa_license": "CCBYSA", "oa_url": "https://jurnal.ustjogja.ac.id/index.php/JELLT/article/download/12051/4989", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4710fd320a1e3421dc4d01e4c529d4a54cf99eb4", "s2fieldsofstudy": [ "Linguistics", "Education" ], "extfieldsofstudy": [] }
265645417
pes2o/s2orc
v3-fos-license
Preterm Birth and Its Association with Maternal Diet, and Placental and Neonatal Telomere Length Preterm birth (PTB), a multi-causal syndrome, is one of the global epidemics. Maternal nutrition, but also neonatal and placental telomere length (TL), are among the factors affecting PTB risk. However, the exact relationship between these factors and the PTB outcome, remains obscure. The aim of this review was to investigate the association between PTB, maternal nutrition, and placental-infant TL. Observational studies were sought with the keywords: maternal nutrition, placental TL, newborn, TL, and PTB. No studies were found that included all of the keywords simultaneously, and thus, the keywords were searched in dyads, to reach assumptive conclusions. The findings show that maternal nutrition affects PTB risk, through its influence on maternal TL. On the other hand, maternal TL independently affects PTB risk, and at the same time PTB is a major determinant of offspring TL regulation. The strength of the associations, and the extent of the influence from covariates, remains to be elucidated in future research. Furthermore, the question of whether maternal TL is simply a biomarker of maternal nutritional status and PTB risk, or a causative factor of PTB, to date, remains to be answered. Introduction Preterm birth (PTB), is defined as the birth whose onset occurs before the 37th week of pregnancy [1].It commences automatically with the presence of one or more of the following events: uterine contractions without rupture of membranes, premature rupture of membranes, induction of labor or the need of a caesarean section.Induction of preterm labor occurs in the presence of pregnancy-related pathology, such as preeclampsia and intrauterine growth restriction (IUGR), where the fetuses' delivery is necessary and/or urgent [2]. PTB is truly a global problem.An estimated 13.4 million infants were born prematurely in 2020, which translates to more than 1 in 10 babies, with the highest percentages reported in southern Asia and sub-Saharan Africa [1].Adding to this gloomy statistic, most countries show an increased rate of PTBs over the last 20 years [3].Prematurity is considered a multi-causal syndrome and one of the global epidemics [3].Goldenberg et al. [2] identified prematurity as a syndrome, due to the contribution of multiple possible causes for its spontaneous onset, including infection or inflammation, vascular diseases, uterine hypertension (e.g., twin pregnancy, polyhydramnios), etc. The multiple risk factors implicated in premature labor include lifestyle, heredity, anthropometric characteristics, multiple pregnancy, and maternal age [4].Other main causes of PTB receiving increased scientific interest recently include placental premature This mechanism speculates that the TL is associated with the premature placenta's aging pace, as well as the adverse outcome of premature birth [35]. 
In recent years, TL and telomerase (i.e., the enzyme responsible for maintenance of the length of telomeres by the addition of guanine-rich repetitive sequences), have been investigated as possible biomarkers of premature placental aging [15,16].Physiologically, cell division is accompanied by TL shortening that leads towards the aging trajectory.On the other hand, the enzyme telomerase is active in ensuring the preservation of placental TL and subsequently its normal function until the end of pregnancy.This balanced process can be affected by various aggravating factors mentioned above (e.g., oxidative stress, inflammation, and other genetic/epigenetic, immunological, physiological, lifestyle, and environmental factors including nutrition), leading to an advanced aging pace and the consecutive dysfunction of the placenta [15,36].As a result, complications during pregnancy, primarily gestational diabetes mellitus (GDM), preeclampsia, intrauterine growth restriction (IUGR), PTB, and intrauterine death, arise [35,37].Indeed, the placentas of complicated pregnancies with preeclampsia, intrauterine growth retardation, are characterized by decreased activity of telomerase and generalized dysfunctionality of TL regulation.In complicated pregnancies, a short placental TL is recorded, but no similar findings are evident in the TLs of umbilical cords.On the contrary, molar pregnancies have shown an increased telomerase activity, similar to the telomerase activity recorded in malignant cells and tissues [38]. The effect of the above influences is depicted in the offspring, and although the majority of prior telomere research has focused on adults, TL attrition in early life may be particularly important for lifelong health status, as evident from studies of childhood adversities and their effect on adult health trajectory [39].The importance of early life TL programming is further evident if one considers that TL in adulthood is determined by both TL at birth (which serves as an "initial setting") and by its subsequent attrition during development [40]. Since TL shortening is a biological phenomenon associated with many adverse health conditions, including various complications associated with pregnancy [41] and the subsequent health trajectory of offspring, it may serve as a reflection of the cumulative impact of stressors and/or as part of a fetal programming mechanism [42].Indeed, newborn TL, seen as the primary setting of TL regulation, although highly variable among individuals [42], carries important lifetime consequences for telomere dynamics and longevity [43,44].Recent evidence regarding newborn TL suggests that plasticity exists in both the programming of telomere biology as well as the initial setting of TL after birth [45].The "in utero" environment appears to be a significant contributor to this effect.Indeed, cord blood TL association studies with specific fetal exposures to maternal nutrient status, e.g., folate [46] and vitamin D [47], smoking status [48], maternal educational profile [49], or maternal metabolic status [50][51][52], and parental age [52][53][54][55][56], have provided insights into the mechanism of TL regulation and its trajectory following birth.Maternal stress [57], stress hormones [58], adverse early life events [39], and other related factors have been correlated with TL and point to the importance of in utero and early developmental exposures in the regulation of offspring TL in adult life.As shown by Gotlib et al. 
[59], chronic stress is a major determinant of telomere maintenance, both by direct exposure and through pathways of mother-to-fetus/infant transmission in early life. Previous studies have shown that premature infants exhibit longer telomeres at birth [60,61], but shorter telomeres in young adulthood compared with term-born infants [62,63], suggesting that a more rapid attrition occurs in early life [64,65].From this perspective, telomere regulation might be a key programming and compensatory mechanism, in premature infants.Furthermore, postnatal exposure to adversities associated with preterm birth are a cause for enhanced TL attrition.In particular, preterm infants are characterized by an immature neurobehavioral profile at birth, even in the absence of severe brain injuries and associated perinatal complications.Therefore, "by definition" they require long-lasting hospitalization in the Neonatal Intensive Care Unit (NICU).This increases the newborns' chances of survival but, at the same time, entails a number of stressors, such as the physical, pain related, and socioemotional, thus representing an early adverse experience, linked to detrimental consequences for neurological, neuro-endocrinal, sensorial, behavioral, emotional, and socioemotional development, as well as to increased chronic disease risk, later in life [60,66].Indeed, elevated oxidative stress and inflammation, both of which contribute to telomere shortening, has been recorded in premature hospitalized preterm infants, while a measurable decline in TL during hospitalization, may convey information about NICU exposures that carry both immediate and long-term health risks [67][68][69][70].It is therefore not surprising that researchers assume preterm infants display an "aged" phenotype [61], further increasing the importance of TL regulation and monitoring.Nevertheless, whether infant and placental TL are biomarkers of PTB, or the actual cause of this adverse outcome, remains to be discovered. The aim of this literature review is to investigate the possible associations between maternal nutrition, placental/newborn TL and PTB. Description of Search Strategy This is a review study of the international scientific literature.Articles published in scientific journals were sought from the online databases PubMed and EBSCO host, from 2013 to 2023, in an effort to reflect current knowledge and well-established research trends.The search took place from March to September 2023 and the terms used, were: The searches were conducted independently by two reviewers, namely NL and IT.Initially, the abstracts were read.In this way, research selection was directly related to the subject.Citation chaining also took place in order to avoid omitting relevant studies.With the completion of the article searches, the studies were classified into thematic categories based on the exposure and the outcome (PTB, maternal nutrition, placental, and neonatal TL). Study inclusion criteria: Inclusion of studies with a positive, negative, or neutral effect on the association of maternal nutrition with placental/newborn TL and PTB; • Description of the search strategy and eligibility of studies according to the four-phase flow chart of the PRISMA guidelines [71]. Results The search returned several studies for each keyword separately.However, when all keywords were combined together, no studies were found associating maternal nutrition with placental and/or newborn TL and PTB. 
For this reason, the keywords A: maternal nutrition, B: PTB, and C: placental/newborn TL were searched in dyads as follows: concept "A" in relation to concept "B", "C" to "A", and "C" to "B". It is impossible, however, to establish a chronological sequence of the exposure and the outcome, but evidence of associations between these factors would imply a potential relationship (either causal or correlational) [72].
Studies Eligibility
As previously mentioned, from the thorough preliminary search of the relevant literature in the PubMed and EBSCOhost electronic databases, no studies with the combination of all keywords were found. The process of selecting and excluding the studies was carried out according to the PRISMA guidelines, as shown in Figure 1 [71]. Thus, the literature search was performed linking concepts A to B, B to C, and A to C. In total, 40 observational studies containing the keywords in the combinations described above were included. The selected studies were evaluated for their methodological quality on the basis of the checklist of the 22 thematic modules examined by the STROBE scale, and were all considered to be of high quality.
Maternal Nutrition and PTB
Of the above 40 included studies, 21 were found to associate maternal nutrition with premature birth. The type of study, population characteristics, and outcome of each publication included in this section may be seen in Table 1. Their publication years range from 2014 to 2023. Five of the studies took place in China, one in Indonesia, one in Norway, two in Singapore, three in the United States of America, one in Canada, one in Malawi, one in Pakistan, one in Mexico, one in Brazil, one in Switzerland, one in Portugal, one in Bangladesh, and one in Italy. Selected outcome entries from Table 1 include the following:
• No overall association between prenatal yogurt consumption and PTB; in non-overweight women, higher prenatal yogurt consumption was associated with reduced PTB risk.
• The VFR pattern was associated with a lower incidence of PTB, as well as with a higher birth weight, higher ponderal index, and increased risk of LGA deliveries.
• The lower the maternal vitamin D level, the higher the GA at birth.
• The likelihood of adverse outcomes was higher in non-white (p < 0.05) obese women with high protein consumption; the anthropometric classification of obesity had a greater impact on PE and GDM than on PTB and SGA.
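Returning briefly to the search strategy, the pairwise ("dyad") combination of the three concepts can be written out as in the short sketch below. The query strings are purely illustrative and do not reproduce the exact PubMed or EBSCOhost syntax used for the searches.

# Illustrative sketch of the dyad search: since no study combined all three
# concepts, each query pairs two of them (A-B, A-C, B-C).
from itertools import combinations

concepts = {
    "A": "maternal nutrition",
    "B": "preterm birth",
    "C": "placental/newborn telomere length",
}

for x, y in combinations(concepts, 2):
    print(f'"{concepts[x]}" AND "{concepts[y]}"')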
Regarding the methodology, 12 of the studies concerned cohort studies, four studies were cross-sectional and five studies were case-control.Overall, the studies' populations were pregnant women with a singleton pregnancy and the absence of any pathology or pregnancy complications. Dietary intake was assessed with validated questionnaires, and/or measurements of anthropometric characteristics of the pregnant woman (BMI, arm circumference, etc.), as well as the newborn (sex, weight, length).Also, delivery characteristics, such as gestational age (GA) at delivery, were documented. The results showed both positive and negative associations between maternal nutrition (either as a dietary pattern of adherence or individual food items, or supplement/s intake) and PTB. Adherence to a healthy diet, as recorded by a maternal nutritional score and GA, at birth, were positively correlated.In particular, a significant increase in the risk of PTB was associated with low nutritional scores from a questionnaire developed and based on the International Federation of Gynecology and Obstetrics (FIGO) recommendations on adolescent, preconception, and maternal nutrition.Furthermore, a significant increase in preterm deliveries, with a relative risk of 1.44, was recorded for women with a first trimester nutritional score lower than five.However, the single food items of the score calculation were not associated with either early placental markers or complex pregnancy outcomes [73]. The Norwegian Fit for Delivery diet score, either in pre-pregnancy or early pregnancy, was protectively associated with excessive GWG and risk of PTB.The protective association with high birthweight was confined to pre-pregnancy diet and with preeclampsia to early pregnancy diet.The study recorded no association between the pre-pregnancy diet score and preeclampsia [74]. Dietary patterns related to reduced food intake or fasting in the 2nd trimester of pregnancy (the Ramadan period of the Islamic religion), were positively associated with increased risk of very PTB between 28 and 31 weeks of gestation.At 15-21 weeks, the risk increased by 1.33 times and at 22-27 weeks by 1.53 times [75]. Yogurt consumption during pregnancy appeared to have a negative (protective) association with PTB, especially when combined with a normal BMI [76].However, in the study by Lu et al. [77] higher dairy consumption compared to vegetable consumption, showed a strong correlation with PTB. In very PTB, the maternal dietary consumption during pregnancy of Portuguese women giving birth very prematurely was assessed.Consumption of certain foods did not comply with the recommendations for pregnant women by the Portuguese Directorate General of Health.In particular, consumption was below the recommended levels for dairy products (one vs. three portions), vegetables (two vs. three portions), and fruits (one vs. four portions).The study also recorded a very low cereal intake (average of one portion per day ingested).Furthermore, PTB-associated pregnancy-induced hypertension was associated with increased consumption of pastry products, fast food, bread, pasta, rice, and potatoes.However, only bread consumption had a weak but statistically significant association with pregnancy-induced hypertension in a multivariate analysis model [78]. 
A fruit/vegetable/rice-based dietary pattern during pregnancy appeared protective against PTB and small for gestational age (SGA) infants, compared with two other dietary patterns; namely, one based on consumption of seafood and noodles and the second on consumption of processed meat, cheese, and pasta.Only the fruit/vegetable/rice pattern contributed positively to the development of long for gestational age (LGA) neonate [79]. In the prospective cohort of Martin et al. [80], diet quality during pregnancy was associated with PTB.Specifically, greater adherence to a healthy dietary pattern, such as the DASH diet, reduced the chance of PTB, especially during the second trimester.Conversely, a greater adherence to a dietary pattern of poorer quality, such as pre-processed or fast foods, high-fat foods, and confectionary, increased the chance of PTB. The use of folic acid supplements of 400 µg or more showed a negative correlation with PTB.Folic acid use appeared to reduce PTB by 14%.Strong association with spontaneous PTB and non-significant for iatrogenic PTB or prolonged spontaneous rupture of membranes was also recorded [81].Furthermore, the large cohort study of Wu et al. [82] showed similar results with folic acid periconceptional supplementation associated with a lower risk of PTB.In particular, the earlier women started taking their folic acid supplements prior to pregnancy (i.e., at least 3 months before their last menstrual period), the more likely to reduce the risk of PTB, compared with women who started taking folic acid later (i.e., 1-2 months before their last menstrual period). Multivitamin use in the third trimester of pregnancy, had a negative correlation with preterm and very PTB, especially in pregnant women of African, non-Hispanic, descent.However, the dose-response effect was not investigated [83].Similar results by Olapeju et al. [84] support that multivitamin supplement intake of at least three times/week throughout pregnancy is significantly associated with a reduction in the likelihood of PTB.Use during the third trimester was especially associated with a greater reduction in PTB risk than use in the first trimester, but there was no significant association between preconception supplement intake and PTB.Also, higher plasma folate levels were associated with lower risk of PTB. In a case-control study, Ren et al. [85] examined hair levels of trace elements, including endocrine disrupting metal(loid)s (EDMs), such as lead, mercury, arsenic, and cadmium, and nutritional trace metal(loid)s (NTMs) such as zinc, iron, copper, and selenium, in 415 control pregnant women with birth at term and 82 pregnant women with PTB.Negative (protective) correlation between increased levels of NTMs, especially Fe and Zn and PTB occurrence, was recorded.Also, the potentially protective effect of mercury was seen for PTB, while for EDMs, only maternal hair mercury was negatively associated with PTB risk. 
PTB was associated with higher maternal serum concentrations of heavy metals (such as mercury and lead), lower maternal serum concentrations of AtRA and all micronutrients, lower placental concentrations of manganese, iron, copper, zinc, selenium, AtRA, and 25(OH)D, and higher placental concentrations of mercury and lead. Compared with the PTB group, the term birth group had higher concentrations of copper and AtRA in cord blood [86]. Maternal serum copper concentrations were recorded above the upper normal limit in both the term and PTB groups of a nested case-control study in Malawi. At the same time, PTB was associated with higher maternal serum concentrations of copper and zinc [87].

In the cohort study of Perveen and Soomro [88], iron deficiency anemia (Hb < 11 g/dL) appeared positively associated with PTB, low birth weight, fetal mortality, and a low Apgar score in the 1st and 5th minutes of birth. Additionally, increased levels of folic acid in the third trimester were associated with a reduced risk of PTB and a longer duration of gestation. Little or no correlation was found between increased levels of B6 and B12 and PTB or SGA [89].

In a cross-sectional study by Christoph et al. [90], the lower the maternal serum vitamin D level, the higher the GA at birth, but no association was observed between vitamin D levels and PTB. In addition, in a nested case-control study in Bangladesh, vitamin D deficiency, common in Bangladeshi pregnant women, was associated with an increased risk of PTB [91].

An inverse relationship between maternal total protein levels (via diet) and PTB during the third trimester of pregnancy, especially in female fetuses, was recorded in a large Chinese cohort study [92]. The likelihood of adverse outcomes was higher in nonwhite obese women with high protein consumption in the nested case-control study by Miele et al. [93]. The anthropometric classification of obesity had a greater impact on preeclampsia (PE) and gestational diabetes mellitus (GDM), in contrast to PTB and SGA. In total, obesity had a small effect on PTB.

Maternal Nutrition and Placental and Newborn Telomeres

In total, 13 studies investigating the relationship between maternal nutrition and placental and newborn telomeres were retrieved and included in this review. The studies' type, population characteristics, and outcomes may be seen in Table 2.

In terms of methodology, 12 were cohort studies and one was cross-sectional. The study populations primarily consisted of pregnant women, while others were mother-newborn dyads. The basic characteristics of the majority of the studies' populations concerned pregnant women in the first trimester of pregnancy with a singleton pregnancy and the absence of pathology.

Blood samples of pregnant women, questionnaires monitoring their eating habits and supplement intake, and/or measurements of anthropometric characteristics of the pregnant woman (BMI, arm circumference), of the newborn (sex, weight, length), and delivery characteristics (GA at delivery) were recorded. In addition, observations were made with regard to the characteristics of the placentas, measurements of their TLs, as well as the leucocyte TL (LTL) of the newborns from samples of umbilical cord blood at delivery.

The results showed both positive and negative associations between maternal diet and placental and neonatal TL.
In terms of dietary lipids, one study examined the effect of n3 intake on umbilical cord, placenta, and infant TL. Maintaining higher levels of maternal n3 polyunsaturated fatty acids (PUFAs) during pregnancy may help maintain TL in the offspring, which is beneficial to long-term offspring health [94]. No clear associations were recorded for prenatal or postnatal PUFA status and methylmercury exposure with offspring TL. A higher prenatal n6:n3 PUFA ratio was, however, associated with longer maternal TL [95]. On the contrary, a maternal high-fat dietary consumption pattern during pregnancy was associated with shortened TL among fetuses, after accounting for the effects of potential covariates [96].

One study correlated placental and umbilical cord blood TL with the maternal nutritional profile. Specifically, a positive association between plasma vitamin D (25(OH)D3) and placental TL was observed, while an inverse association was observed between BMI, body fat percentage, vitamin B12, and placental TL. Furthermore, a negative correlation was found between the above and the length of umbilical cord blood telomeres [97].

Furthermore, vitamin D levels during pregnancy, especially in the first trimester, were positively correlated with neonatal leukocyte TL at delivery [47,98]. A positive correlation was also seen between neonatal leukocyte TL and maternal leukocyte TL, maternal vitamin D concentration, maternal energy intake, and newborn weight [47].

A positive correlation was also seen between folic acid and neonatal TL. Each 10 ng/dL increase in pregnant women's folic acid levels was associated with a 5.8% increase in mean TL. The average TL of newborns of mothers with low serum folate concentrations was shorter compared with those whose mothers belonged to the group with increased levels [46]. Furthermore, maternal folic acid supplementation after the first trimester and throughout pregnancy was associated with longer newborn TL [99]. However, in the same cohort, no significant association was found between maternal folic acid supplementation in the first trimester and newborn TL. One more study of the effect of folate on newborn TL showed a positive association between umbilical cord RBC folate and fetal TL at birth [100].

TL in female, but not male, newborns was more susceptible to variation from maternal vitamin B12 levels, as well as maternal TL and mental health. In total, maternal TL was strongly associated with antenatal factors, especially metabolic health and nutrient status [65].

The increased exposure of pregnant women to toxic metals (antimony, lithium, arsenic) was positively correlated with placental TL and newborn sex. Antioxidants (zinc, selenium, folate, vitamin D3) did not contribute to the modification of the above process. The TL of the placenta decreases as the age of the pregnant woman increases, by approximately 1% per year. Lithium appears to increase the mother's TL. Lead (in umbilical cord blood) showed an inversely proportional correlation with infant TL, particularly in male neonates [101].

Magnesium deficiency was negatively associated with maternal relative TL (RTL) after adjusting for covariates. A positive association was found between maternal intake of magnesium and the TL of cell-free DNA (cfDNA) from amniotic fluid, while results on other micronutrients (i.e., vitamin B1 and iron) were marginally significant [102]. Also, Myers et al. [103] examined the relationship between vitamin C intake and cord blood TL. The sample, similarly to the Louis-Jacques et al.
[100] study, was drawn as part of an ongoing prospective cohort study conducted by the University of South Florida (USF), Morsani College of Medicine. A positive association between maternal vitamin C intake and fetal TL was observed.

PTB, Placental, and Newborn Telomeres

As for the association of PTB with both the placental and the newborn telomeres, six studies, published from 2015 to 2023, were found. The studies' type, population characteristics, and outcomes may be seen in Table 3. The study countries were Brazil (n = 1), United Kingdom (n = 1), Indonesia (n = 1), Israel (n = 1), India (n = 1), and France (n = 1). In terms of methodology, two were cohort, one was case-control, and three were cross-sectional studies. The population of the studies was heterogeneous and included pregnant women in normal labor, mother-infant dyads, or 2- and 7-9-year-old children, who were born either full term or prematurely. Data analysis included anthropometric measurements, dietary intake questionnaires, observations regarding the pregnancy course, DNA methylation of umbilical cord blood and placental tissue, and umbilical cord blood and placental tissue LTL.

A study indicating no effect of PTB on TL showed similar placental TL in PTB and term labor placentas. In this cross-sectional study, early telomere shortening in PTB was observed, mimicking the term placenta. The markers 8-OHdG and HMGB1 did not correlate with the placental telomere ratio, while HMGB1 from the placentas of both PTB and term labor showed no significant difference. An equal relative amount of telomeric DNA (T) to the beta-globin single-copy gene (S), calibrated to a plate reference genomic DNA sample (the T/S ratio; a brief illustrative calculation is sketched at the end of this subsection), was recorded in placentas from PTB and term labor [104].

In contrast, telomere shortening in fetal membranes was suggestive of senescence associated with the triggering of labor at term. As such, fetal membranes from the term labor group showed TL reduction compared with those from the other groups. However, telomerase activity did not change in fetal membranes, irrespective of pregnancy outcome [105].

In the case-control study by Vasu et al. [61], preterm infants showed longer TL than full-term infants. A positive correlation was observed between maternal age and the telomere/shelterin (protein) ratio, while the higher the age of the mother, the longer were the telomeres of the newborn (p = 0.011). Accordingly, maternal blood and placental samples from spontaneous PTB presented shorter telomeres and increased Gal-3 expression compared with the spontaneous term pregnancies group [106].

Another longitudinal cohort study tracked premature infants into adulthood by studying TL in saliva, as well as lung function. A positive correlation was recorded between TL and abnormal lung airflow in the adult population who were born prematurely. Nevertheless, no apparent association between TL and perinatal causes of PTB or other perinatal events was noted [70].

In a cohort study in India, increased LTL attrition was observed in those born before 37 weeks of GA, as well as in those who gained weight as adults. GA was positively associated with offspring RTL, although there was no significant association of offspring RTL with body size at birth, including birthweight, birth length, and birth BMI. Conditional BMI gain at 2 and 11 years was not associated with RTL. BMI gain at 29 years was negatively associated with RTL. Being born SGA was not associated with RTL in adulthood [107].
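Several of the telomere studies summarized in this review report TL as a qPCR-based T/S ratio. As a rough illustration of how such a ratio is derived, the minimal Python sketch below computes a relative T/S value from single cycle-threshold (Ct) readings using the 2^-ddCt approximation; the Ct numbers are hypothetical, and real assays use replicate wells, standard curves, and efficiency corrections rather than this simplified calculation.

# Minimal sketch of a qPCR-based relative telomere length (T/S) calculation.
# Assumes single Ct values per target and ~100% PCR efficiency; real assays
# use replicate wells, standard curves, and efficiency corrections.

def ts_ratio(ct_telomere: float, ct_single_copy: float,
             ref_ct_telomere: float, ref_ct_single_copy: float) -> float:
    """Relative T/S ratio of a sample versus a plate reference DNA,
    using the 2^-ddCt approximation."""
    d_ct_sample = ct_telomere - ct_single_copy              # delta Ct for the sample
    d_ct_reference = ref_ct_telomere - ref_ct_single_copy   # delta Ct for the reference DNA
    return 2 ** -(d_ct_sample - d_ct_reference)

# Hypothetical Ct values for a placental sample and the plate reference DNA.
sample_ts = ts_ratio(ct_telomere=14.2, ct_single_copy=22.8,
                     ref_ct_telomere=15.0, ref_ct_single_copy=23.1)
print(f"Relative T/S ratio: {sample_ts:.2f}")  # values > 1 indicate longer telomeres than the reference

A ratio calculated this way is only meaningful relative to the reference DNA on the same plate, which is one reason why absolute comparisons across studies and laboratory methods are difficult.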
Discussion

In the current study, a traditional systematic review methodology was rejected since the combination of the search keywords "Maternal nutrition", "Preterm birth" and "Placental and/or newborn telomeres" returned zero results. For this reason, the keywords A: maternal nutrition, B: PTB, and C: placental and/or newborn TL were searched in dyads as follows: is concept "A" related to concept "B", "C" to "A", and "C" to "B"; thus leading to the assumptive inference of A being related to B and C. It is impossible, however, to establish the chronological sequence of the exposure and the outcome, but evidence of associations between these factors would imply either a causal or a correlational relationship [72].

The data from the relevant literature search led to the final selection of 40 observational studies. The overall results indicate a relationship between maternal nutrition and PTB, as well as between maternal nutrition and newborn and placental TL. However, the evidence for the relationship between PTB and TL was inconclusive. The schematic representation of the aggregated key findings can be seen in Figure 2.

Maternal Nutrition and PTB

In terms of maternal nutrition and PTB, all 21 studies included in this review indicated that there are nutrients (either through dietary intake or supplementation) and eating patterns associated with an increased risk of PTB, while others play a beneficial and/or protective role against PTB.

Nutritional status, as assessed by selected biomarkers, is associated with the risk of PTB and GA at birth. In particular, protein level in maternal blood plasma is inversely associated with the risk of PTB and positively associated with gestational duration in the third trimester of pregnancy, particularly in the female fetus [92]. Dietary protein is a macronutrient well known for its relationship with fetal health over the past decades [108-113]. However, few studies have investigated the role of protein in PTB risk, and the results are inconclusive, especially when examining protein supplementation and PTB incidence [114]. By using selected valid and reproducible biomarkers of maternal plasma protein levels in addition to dietary intake assessments (which present numerous challenges to obtaining an accurate nutritional status), significant confounders are avoided (i.e., quantification of intake, self-report bias, pathological causes of reduced absorption, etc.) [92].

Another nutritional marker examined for its role in PTB is folate. A higher maternal serum folate concentration at approximately the start of the third trimester was associated with a longer duration of gestation and a lower risk of PTB, while little or no correlation was seen between serum B6 and B12 and PTB [89]. Supportive of the above findings regarding folate, one more study showed that higher plasma levels of the nutrient are associated with a lower risk of PTB [84]. The relationship between folate levels and PTB was further supported in a recent meta-analysis of folic acid and the risk of PTB. The meta-analysis evidence indicates that high early pre-conception maternal folate levels are significantly associated with a lower risk of PTB. Moreover, protective action against spontaneous PTB was seen following daily use of 400 µg folate periconceptionally [115]. However, folate supplementation does not appear to protect against iatrogenic PTB or prolonged spontaneous rupture of fetal membranes [81]. The large cohort study by Wu et al.
[82] showed similar results, with periconceptional folic acid supplementation being associated with a lower risk of PTB. Specifically, the earlier women started taking their folic acid supplements prior to pregnancy (i.e., at least 3 months before their last menstrual period), the more likely this was to reduce the risk of PTB, compared with women who started taking folate later (i.e., 1-2 months before their last menstrual period) [82]. An earlier systematic review suggested that supplementation is associated with a significant reduction in the risk of PTB, but only when initiated immediately after conception [23]. Although further RCTs are warranted to establish the exact relationship between folate and PTB, its beneficial effect on the reduction of birth defects supports the notion that it is necessary in adequate amounts during the periconceptional period.

Vitamin D is another micronutrient studied for its effects on PTB risk in three studies included in this review. Firstly, the cross-sectional study by Christoph et al. [90] showed that the lower the maternal vitamin D level, the higher the GA at birth. A second cross-sectional study in Indonesia, comparing a PTB group with a term birth group, found that the latter had a higher placental concentration of 25(OH)D [86]. In addition, in a third nested case-control study in Bangladesh, vitamin D deficiency in pregnant women was associated with an increased risk of PTB [91]. An earlier meta-analysis had already shown evidence of an association between maternal circulating 25-OHD deficiency (rather than insufficiency) and an increased risk of PTB. To enhance the argument, the meta-analysis concluded that vitamin D supplementation alone during pregnancy may reduce PTB risk [116]. However, RCTs to date have consistently failed to ascertain a prophylactic effect of vitamin D against PTB, with the emergence of conflicting evidence [117]. It is suggested that low levels of vitamin D may reflect a poor general maternal health status. Thus, priority should be given to the attainment of general health, rather than vitamin D supplementation per se [118].
Maternal iron deficiency anemia was positively associated with PTB [88]. In particular, hemoglobin (Hb) levels of less than 11 g/dL were directly associated with PTB, low birth weight, fetal mortality, and a low Apgar score in the 1st and 5th minutes of birth [88]. In a recent systematic review and meta-analysis, iron deficiency anemia was indeed found to be a contributing factor towards PTB during the first trimester, but not in the second and third trimesters [119]. When accounting for iron during pregnancy, it is imperative to consider its critical role in embryonic development and fetal growth when transported through the placenta from the mother to the fetus. Furthermore, since iron cannot be synthesized by the body, sufficient iron absorption from dietary sources is very important for both mother and fetus [120]. Still, little is known about the iron states in the mother, the placenta, and the fetus, and about which of the mechanisms responsible for iron transport contribute towards PTB. A recent study attempting to characterize maternal and fetal iron metabolism in pregnant women with PTB found a dysregulated iron homeostasis on both sides and a disordered placental iron equilibrium, which were presumed to account for the compromised fetal iron supply [121]. Prevention or treatment with either intravenous iron supplementation or oral medication showed no significant differences in maternal and neonatal outcomes, thus emphasizing the need for nutritional correction of iron-deficient states during pregnancy [122].

The effect of the availability of other micronutrients on PTB risk is far more complex than the previously discussed folate, vitamin D, and iron. In the Irwinda et al.
[86] cross-sectional study, maternal serum concentrations of all-trans retinoic acid (AtRA) were inversely associated with PTB (with lower concentrations in the PTB group), while PTB was also positively associated with higher serum concentrations of heavy metals such as mercury and lead. Also, compared with the PTB group, the term birth group had higher maternal serum concentrations and higher placental concentrations of manganese, iron, copper, zinc, selenium, and AtRA, lower placental concentrations of mercury and lead, and, in cord blood, higher concentrations of copper and AtRA [86]. A recent scoping review also documented a higher incidence of PTB with lead and cadmium exposures. The findings for mercury and arsenic exposures were, however, inconclusive. The most common pathways through which heavy metals and metalloids lead to an increased risk of PTB are placental oxidative stress, epigenetic modifications, inflammation, and endocrine disruptions [123].

In hair samples, nutritional trace metal(loid) concentrations, especially Fe and Zn, are negatively associated with PTB and, surprisingly, the endocrine disrupting metal(loid) Hg was also negatively associated with PTB [85]. Conversely, PTB was associated with higher maternal serum concentrations of copper and zinc in a nested case-control study in Malawi, perhaps indicating differences between the types of sampling sources of the biomarker (e.g., blood serum and plasma, cellular fractions, tissue, etc.) [87]. Due to the conflicting results and the low-to-moderate certainty evidence, it is suggested that zinc supplementation may reduce PTB risk in women with low zinc intake, low levels, or poor nutrition [114].

Any type of multivitamin supplementation during the third trimester of pregnancy (but not earlier) was associated with a significant reduction in PTB among women of African, non-Hispanic descent in a large USA cohort study [83]. In agreement is the case-control study by Olapeju et al. [84], where multivitamin supplement intake of at least three times/week throughout pregnancy was significantly associated with a reduction in the risk of PTB. In both studies described above, no significant association was recorded between preconceptional multivitamin intake and the risk of PTB. Conversely, in a systematic review and meta-analysis of the use of multivitamins and adverse birth outcomes in high-income countries, multivitamin use did not change the risk of PTB [124]. However, the authors stressed the need for additional data on multivitamin intake in pregnancy, from randomized controlled trials or large cohort studies, controlling for multiple confounders.

In terms of dietary patterns, religious fasting (such as during the Ramadan period of the Islamic religion) in the second trimester of pregnancy increased the risk of very PTB [75]. This cohort had a large population of 78,109 births, making the evidence robust. In contrast, a recent review and meta-analysis that examined the effect of Ramadan fasting during pregnancy on the perinatal outcomes of 5600 births showed no effect on PTB [125]. Studies have indeed shown conflicting results on the effect of fasting on PTB, perhaps due to differences in the timing of exposure [125]. More well-designed observational studies with large samples investigating all types of fasting during all trimesters of pregnancy are required to shed light on its impact on maternal and fetal health [126].
A dietary pattern during pregnancy based on fruits, vegetables, and rice protected women against PTB and SGA infants, compared with dietary patterns based on seafood and noodles or on processed meat, cheese, and pasta. Nevertheless, the fruit/vegetable/rice pattern contributed towards the development of LGA infants [79]. Supportive of this evidence is the study by Teixeira et al. [78], in which the group of women giving birth prematurely had a consumption of certain foods (i.e., dairy products, cereals, vegetables, and fruits) below the recommendations for pregnant women by the Portuguese Directorate General of Health. In another study examining the effect of dietary patterns on PTB severity levels, a pattern rich in rice and nuts lowered the risk of very/moderate PTB compared with late PTB or term birth, while a high dietary consumption of starchy foods was associated with the most severe level of PTB incidence [127]. Dairy products, on the other hand, seem to have a controversial effect. It is argued that an increased consumption of dairy products has a protective effect against PTB when combined with a normal maternal BMI [76]. Otherwise, the increased consumption of dairy products, especially milk, as well as of cereals, eggs, and Cantonese soups, and the 'Fruits, nuts, and Cantonese desserts' groups, compared with vegetable consumption, had a strong correlation with PTB [77].

On the same note, in the prospective cohort of Martin et al. [80], maternal diet quality was associated with PTB, where a greater adherence to a healthy dietary pattern, such as the DASH diet, reduced the risk of PTB, especially during the second trimester. Conversely, a greater adherence to a dietary pattern of poorer quality, such as pre-processed or fast foods, high-saturated-fat foods, and confectionary, increased the risk of PTB. Comparing five different food patterns, namely "Obesogenic", "Traditional", "Intermediate", "Vegetarian", and "Protein", Miele et al. [93] performed a nested case-control study in Brazil and reported that a diet rich in protein increases the probability of developing PE and PTB. However, the anthropometric classification of obesity had a greater impact on PE and GDM, in contrast to PTB and SGA outcomes, suggesting that the effect of the dietary pattern on PTB is dependent upon other anthropometric characteristics. In a recent review, which included 40 observational studies, the dietary patterns during pregnancy associated with a lower risk of PTB were also characterized by a high consumption of vegetables and fruits, as well as whole grains, fish, and dairy products [11].

In general, the evidence indicates that a Western-type diet, high in meat and fats and low in fruits and vegetables, is associated with an increased risk of induced PTB [128]. In contrast, adherence to a Mediterranean and/or healthy diet pattern during pregnancy appears to be associated with a reduced risk of PTB [129,130]. It may also be that the protective effect of increasing the intake of foods associated with a Mediterranean or healthy dietary pattern is more important than totally excluding highly processed food, fast food, confectionary, and snacks [131].

Similar results were noted with nutritional scoring systems. In the cohort of Parisi et al.
[73], a positive association between a healthy maternal nutritional score and GA at birth was reported, while a significant increase in the risk of PTB was associated with low nutritional scores. However, the single-food-item score calculations were not associated with either early placental markers or complex pregnancy outcomes. Also, a high Norwegian Fit for Delivery diet score, either pre-pregnancy or in early pregnancy, was inversely associated with GWG and PTB risk [74]. Examining dietary patterns as a whole, independent of the manner of assessment, has emerged as a holistic approach for capturing the complex interactions between nutrients and foods [132]. In summary, the evidence shows that maternal nutrition during pregnancy, as assessed through dietary patterns, is a major determinant of birth outcomes and, consequently, of offspring health outcomes in later life [133].

Surprisingly, no recent studies were found that exclusively examine maternal BMI and the risk of PTB. Previous evidence showed that either a high or a low BMI is associated with an increased risk of PTB. However, when limited to developing countries, low BMI was not significantly associated with PTB [118]. BMI is an obscure measure of nutritional status and health, especially since it does not take into account body composition (e.g., muscle mass, bone density, etc.) and racial and sex differences [134], and thus is not considered a valid factor accounting for PTB risk. In contrast, recent findings from the ongoing prospective "Mamma & Bambino" study (Catania, Italy) suggest that gestational weight gain, rather than BMI per se, affects maternal TL. Specifically, women with adequate gestational weight gain showed longer TL than those who gained inadequate weight. Additionally, the TL of cfDNA exhibited a U-shaped relationship with weight gain during pregnancy, suggesting that increased weight gain also heralds negative effects [135]. Mechanisms by which inadequate or excessive weight gain affect TL are yet to be elucidated but could be related to chronic inflammation and oxidative stress in utero.

The above evidence indicates that dietary patterns linked to enhanced anti-inflammatory and antioxidant properties, containing a variety of nutrient-dense, unprocessed or minimally processed foods that reduce the risk of developing nutritional deficiencies, are the most appropriate for ascertaining a healthy period of pregnancy and fetal development, as well as the avoidance of adverse birth outcomes.

Maternal Nutrition and Placental and Newborn Telomeres

The findings from exploring the association between maternal nutrition and placental and infant TL suggest that diet has an important role in whether or not placental TL will be maintained. Specifically, the level of maternal plasma vitamin D seems to be positively correlated with placental TL, thus contributing to TL maintenance or a reduction of TL attrition. Conversely, an inverse effect was observed for BMI, body fat percentage, vitamin B12, and placental TL, but no associations were apparent for cord blood TL [97].
Further studies support the protective role of maternal plasma vitamin D in maintaining the newborn's TL. In two studies included in this review, the concentration of serum vitamin D during pregnancy and maternal energy intake were positively correlated with the neonate's LTL at birth [47,98]. The pleiotropic effects of vitamin D in the organism, and especially its immunomodulatory effects, may be one mechanism by which it is protective against telomere attrition [136]. More recently, this evidence was strengthened by a cohort in Hong Kong where 25(OH)D, D3, and the D3 epimer, both in utero and at birth, impacted childhood LTLs [137]. Furthermore, insufficient maternal vitamin D (25(OH)D) has been associated with an increased offspring risk for many diseases and later life adverse outcomes [138]. In contrast, Herlin et al. found no association between maternal serum levels of the antioxidants zinc, selenium, folate, and vitamin D3 and maternal or newborn TL [101]. The contradictory results regarding maternal vitamin D levels and their effect on offspring TL may be due to differences in the timing of vitamin D level assessments during pregnancy. For example, Daneels et al. showed the importance of maternal nutrition early in pregnancy, and in particular in the first trimester, for its association with TL at birth [98]. In support of the above, another study in Switzerland showed that the effects of vitamin D are more pronounced during the earlier gestational period [139].

Three studies included in this review investigated the effect of folate on fetal and placental TL [46,99,100]. A positive association was observed between maternal folic acid levels early in pregnancy and newborn cord blood TL [46]. This evidence suggests that fetal telomeres exhibit developmental plasticity and shows that maternal nutrition can affect or even "program" this system. The Louis-Jacques et al. study [100] also showed a positive association between umbilical cord red blood cell folate levels and fetal TL at birth, while a possible association between maternal folic acid supplementation during pregnancy and longer newborn TL was suggested in the cohort study by Fan et al. [99]. The roles of folate in DNA methylation and oxidative stress are proposed mechanisms through which it influences TL regulation in the offspring [140]. In contrast, in the study by Kim et al., dietary intake of dietary folate equivalents, assessed by 24 h recalls, was not associated with fetal TL [47]. A recent systematic review of the effect of maternal diet on offspring TL also concluded that higher circulating maternal folate and 25-hydroxyvitamin D3 concentrations were associated with longer offspring TL, adding to the equation a protective effect for higher maternal dietary caffeine intakes [141]. To date, the data regarding higher dietary intake of folate in relation to offspring TL regulation remain contradictory [103].

Another study examining maternal micronutrient status and TL in maternal serum, cord blood, amniotic fluid, and placenta showed that only magnesium deficiency is negatively associated with maternal RTL. Furthermore, a positive association between maternal intake of magnesium and the TL of cfDNA from amniotic fluid was seen, while results on other micronutrients (i.e., vitamin B1 and iron) were marginally significant [142]. Moreover, Nsereko et al.
concluded that lower ferritin, soluble transferrin receptor, and retinol-binding protein levels are associated with longer maternal TL [143]. The effect of various micronutrients (both intake and status) on TL in PTB remains obscure and demands further investigation.

The effects of dietary fat and its biomarkers have also been investigated. Shortened TL among fetuses exposed to high maternal fat consumption during pregnancy, after accounting for the effects of potential covariates, was recorded in the Salihu et al. study [96]. However, according to the Liu et al. study, concentrations of total n3 PUFAs and docosahexaenoic acid (DHA) in maternal erythrocytes are closely correlated with infant TL and telomerase reverse transcriptase (TERT) promoter methylation [94]. Contradictory evidence for maternal PUFA status and its relation to offspring TL was recorded in the cohort study by Yeates et al. [95]. There were no clear associations recorded for either prenatal or postnatal PUFA status with offspring TL. However, a higher prenatal n6:n3 PUFA ratio was associated with longer TL in mothers. Maintaining higher levels of maternal n3 PUFAs during pregnancy may help to conserve the offspring TL, accompanied by potential benefits for offspring long-term health.

In terms of maternal dietary patterns and offspring TL regulation later in life, the recent (first of its kind) systematic review examining the impact of dietary intake factors on TL in childhood and adolescence suggests that a higher consumption of fish, nuts and seeds, fruits and vegetables, leafy and cruciferous vegetables, olives, legumes, polyunsaturated fatty acids, and an antioxidant-rich diet might positively affect TL. Conversely, a high intake of dairy products, simple sugars, sugar-sweetened beverages, and cereals, especially white bread, and a diet high in glycemic load were associated with enhanced TL shortening in the offspring during childhood and adulthood [144].

Offspring sex also appears to affect associations between maternal nutritional intake and/or status and TL. Indeed, TL in female newborns was shown to be more susceptible to variation from maternal TL and vitamin B12 levels, while TL in male newborns was more susceptible to variation from parental age, maternal education, fasting plasma glucose, DGLA%, and IGFBP3 levels [65]. Thus, demographic factors such as offspring sex and maternal ethnicity seem to also affect the relationship between maternal nutrition and offspring TL, and should therefore be treated as confounding factors in all relevant investigations [103].
It is relatively well established in both animal and human studies that nutrition plays a profound role in DNA integrity, epigenetic mechanisms, and TL regulation [32,145-147]. Indeed, various nutrients influence TL through mechanisms reflecting their role in cellular functions including inflammation, oxidative stress, DNA integrity, and methylation, as well as telomerase activity [145]. However, evidence from human studies examining the impact of maternal nutrition on newborn TL remains scarce, not allowing definite conclusions to be reached. Since TL at birth represents an individual's initial setting of TL and predicts later life TL [148], the effects of maternal diet on offspring TL should be thoroughly investigated. Longitudinal studies assessing nutritional intake and/or status (with various indices, including nutrient biomarkers) and exploring possible associations with offspring and placental TL regulation are warranted. Upon ascertaining the potential nutritional determinants of offspring TL regulation, TL status and attrition rates could become valuable as markers of future chronic disease risk.

PTB and Placental and Newborn Telomeres

As for the association of PTB with both placental and newborn TL, only six studies were found. TL appears to be highly variable in newborn infants. In particular, PTB infants were found to have longer TL than full-term infants, while TL was significantly negatively correlated with GA and birth weight. A positive correlation was also observed between maternal age and the telomere/beta-globin single-copy gene (T/S) ratio, meaning that the older the mother, the longer the newborn's telomeres, possibly indicating a compensatory mechanism. In the same study, longitudinal assessment of preterm infants who had TL measurements available at the age of 5 years suggested that the TL attrition rate is negatively correlated with increasing GA [61]. The findings of the systematic review and meta-analysis by Niu et al. [62] are in agreement. In this study, intrauterine growth restriction (IUGR) was associated with shorter placental TL, while PTB was associated with longer birth TL, but only as measured by qPCR and not by the method of telomere restriction fragment (TRF) analysis. This may reflect the lack of correlation of TL measurements when assessed by different laboratory methods, which may account for controversies in study results. The TRF method has been proposed as a method of high accuracy, as it results in lower variation than the PCR method [149]. However, possibly due to its time-consuming, labor-intensive, costly, and expertise-demanding characteristics, TRF is used in very few studies compared with PCR.

Furthermore, in the Niu et al. [62] review and meta-analysis, IUGR was associated with shorter birth TL only when birth TL was measured in the placenta, but not in newborn blood. TL is sensitive to the type of tissue used for measurement, as previous studies have reported placental TL as being relatively longer than cord blood TL [150]. Nevertheless, irrespective of the studied biological matrix (i.e., cord leukocytes or placenta), newborn TL measurements remain predictive of TL in leukocytes at the age of 4 years [43]. Although the choice of sample type is a complex matter, involving numerous issues such as attainability, availability, and ethics, it must be made clear that comparing studies using different biological samples could lead to discordant conclusions.
In the study by Colatto et al., membranes from term labors also showed TL reduction compared with those from the PTB group. Telomerase activity did not change in fetal membranes irrespective of pregnancy outcome. The authors suggested that telomere shortening in fetal membranes is indicative of senescence associated with the triggering of labor at term [105]. The longer TL recorded in PTB offspring is a surprising finding, since PTB has been associated with early aging phenotypes [151], but it may be explained by the fewer cellular replications and reduced DNA turnover in the final few weeks of gestation, otherwise missed when born preterm [152]. Indeed, in a twin study, placental TL gradually decreased with GA, indicating that enhanced TL attrition is more prominent with advancing GA [153]. Adjustments for GA should be made when measuring birth TL in all studies comparing groups of different maternal nutritional exposures.

On the other hand, the study by Saroyo et al. showed that the telomere T/S ratio of the placenta did not differ between PTB and term labor despite the difference in GA, suggesting perhaps similar TLs at the time of sampling between the two groups, due to early telomere shortening in PTB that mimics the term placenta [104]. The timing of sampling is indeed of extreme importance when assessed in PTB infants, especially if one considers the effects of trauma in the neonatal intensive care unit (NICU) and the previously reported enhanced TL shortening of preterm infants experiencing these adversities following birth [68].

In contrast, GA has also been positively associated with offspring RTL. Longitudinal data indicate an increased LTL attrition in those born before 37 weeks of GA, as well as in those who gained weight as adults (29 years), but conditional BMI gain at 2 and 11 years was not associated with RTL [107]. Previous studies have shown, however, that birth TL differs according to the body size and GA of the newborn [154,155]. Maternal anthropometric characteristics may perhaps explain, to an extent, why studies reach inconclusive outcomes when comparing TLs from spontaneous PTB offspring with those of term-born infants [106].

Another longitudinal cohort followed premature newborns into their adulthood by studying the TL in their saliva and assessing their lung function. A positive correlation between TL and abnormal lung airflow in an adult population born prematurely was observed, but there was no apparent association with perinatal causes. It is speculated that there is continuous oxidative stress of the airways, which leads to persistent inflammation, lung function alteration, and increased susceptibility to chronic obstructive pulmonary disease [70]. Furthermore, TL attrition rates are associated with diseases characterized by increased inflammation and oxidative stress [156], thus affecting TL regulation in the offspring in an independent manner.
Evidence to date has confirmed that newborn TL strongly predicts child and adulthood TL [43,149], while it is the strongest predictor of TL change over time [157,158]. Thus, TL measured at any point in life is jointly determined by TL at birth and subsequent TL attrition in later life, with a rapid attrition rate before the age of 5 years, after which TL remains relatively stable for the remainder of the life course [159]. TL at birth has indeed been reported to vary by as much as 3000-5000 bp inter-individually, a range that is well above the overall TL attrition occurring over the whole lifespan [160]. Therefore, birth TL is extremely important for the lifetime health trajectory and aging pace, and determines much of the interindividual variation of adult TL, which in turn has been proposed as a biomarker and potential contributor to the development of aging-related chronic diseases [161,162].

Maternal Nutrition, Placental and Newborn Telomeres, and PTB

From the above evidence, it appears that maternal nutrition affects PTB risk, partly through its influence on maternal TL (Figure 3). Indeed, the telomere-regulated-clock mechanism determines the length of gestation, leading to the onset of labor (parturition), and at the same time PTB is a major determinant of offspring TL regulation. The strength of the associations and the extent of the influence from covariates remain to be elucidated in future research. Furthermore, the question of whether maternal TL is simply a biomarker of maternal nutritional status, and in effect of PTB risk, or a causative factor of PTB through nutritional habits and status, remains unanswered to date. Even so, studies have supported the idea that TL may be used as a novel biomarker, especially in combination with traditional indicators, for the prediction of PTB and, thereafter, of preterm health, such as weight development in preterm neonates [163]. Reliable biomarkers with high prognostic value are highly sought-after for efficient decision-making in clinical settings.
Following telomere expansion at the beginning of pregnancy, TL in the placenta and the fetal membranes gradually shortens throughout the remainder of gestation. The rate of telomere shortening can be affected by nutritional practices, either directly, for example through their inflammatory/anti-inflammatory and oxidant/antioxidant properties [41,164], or indirectly by influencing other aspects of health also associated with TL, such as mental health (MH). Indeed, nutrition is associated with MH outcomes, such as a higher risk for developing depression and anxiety [165,166], while, on the other hand, MH may affect nutritional status (e.g., through appetite regulation) [167]. Additionally, evidence shows a link between various types of MH disorders and TL, including stress and anxiety [168-170], bipolar disorder [171,172], and psychotic disorders [173,174]. Perhaps, therefore, MH, through its effect on maternal dietary habits, independently affects maternal and offspring TL. To our knowledge, however, no studies to date have examined the bi-directional relationship between maternal nutrition and MH, assessing simultaneously TL regulation in the premature offspring.

As previously mentioned, due to the critical role of birth TL in determining later life TL [155,175-177], it is intriguing to explore the potential mediating role of TL biology underlying the relationship between intrauterine exposures and aging-related diseases by primarily comparing TL at birth between PTB and term birth outcomes. The current findings thus emphasize the necessity for prospective, longitudinal studies that investigate the relationship between placental/newborn TL and maternal nutrition, particularly during conception and pregnancy, in PTB risk. With the use of machine learning, algorithms, and big data analysis, insights into the exact relationship of the aforementioned factors may be provided, also considering personal characteristics, genetic profiles, MH, way of life, and general wellbeing.

Limitations of the Study

This study has several limitations. First, the studies included in this review used different research methodologies and assessment methods, variable sets of outcomes and exposures, and geographical, demographic, and anthropometric population characteristics. Population sizes are furthermore extremely variable between studies.
Secondly, many studies did not adjust for sex and race/ethnicity. Thirdly, exposures, especially those with key roles in the outcomes of studies, for example MH status (i.e., subclinical depression, traumatic experiences, eating disorders, etc.), remained unexplored or unadjusted for. Fourthly, the well-documented difficulty leading to compromised validity and reliability of nutritional intake and status assessments was not adequately addressed in the majority of the studies. Also, we searched for relevant publications in only two electronic databases and did not include any unpublished studies or articles, such as meeting abstracts and dissertations, which might have introduced publication biases. Finally, we included solely articles written in English, which might also have caused the omission of articles published in other languages.

Future studies using individual participant data with adjustments for sex, race/ethnicity, anthropometry, and other potential confounders are imperative for valid comparisons. In addition, cross-sectional studies do not provide information on causality, because of the unclear temporality between the exposure and the outcome, and this should be considered in review and meta-analysis studies.

Conclusions

There is a relationship between maternal nutrition, placental-newborn TL, and PTB. Specifically, maternal nutrition influences PTB risk both positively and negatively, to an extent through its influence on maternal TL. Furthermore, maternal TL independently affects PTB risk, while at the same time PTB appears as a major determinant of offspring TL regulation. The strength of the associations and the extent of the influence from various covariates remain largely unexplored. The question also remains of whether maternal TL is simply a biomarker of maternal nutritional status, and in effect of PTB risk, or a causative factor of PTB through nutritional practices and status.

From a nutritional point of view, a diet high in carbohydrates and low in protein in the third trimester and high in fat in the first trimester of pregnancy was associated with PTB and SGA infants. Micronutrients and their adequate intake may also play an important role in PTB risk. In particular, iron, zinc, and B vitamins (folic acid, B6, B12) seem to affect PTB prevention, and folic acid, vitamin D, B12, and n3 PUFAs are factors that aid in the maintenance of both the placental and infant TL. However, the strength of the evidence is not adequate to reach definite conclusions and to formulate clinical guidelines.

In addition, placental telomeres play an important role in PTB and act as a biomarker of both the mother's and the newborn's nutritional status. Indeed, placental and maternal TL are reported as being shorter in PTBs compared with full-term pregnancies, and thus TL may be considered a biomarker of prognostic value for the adverse outcome of premature labor. Maternal leukocyte TL at the beginning of pregnancy, on the other hand, constitutes a prognostic indicator of maternal biological aging and, further, indirectly, a prognostic indicator of the offspring's health from fetal life to the early post-natal period. Since premature infants exhibit a more rapid TL attrition in early life, while a measurable decline in TL during NICU hospitalization has been reported, the rate of TL change in early life may convey information about prematurity and NICU exposures that carry both immediate and long-term health risks.
The findings of this review signify the urgent need for further research that will assess the relationships between the abovementioned factors simultaneously, especially in longitudinal observational studies following populations from pregnancy to the offspring's adulthood. Preventative medicine and future research may use this information regarding TL (maternal and placental), as well as maternal nutritional status, for the screening of pregnancies at high risk of preterm delivery, the quantification of premature offspring adversity burden in early life, and the recording of their long-term health consequences in offspring adulthood.

Author Contributions: It should be noted that each author has made substantial contributions to the completion of the current article. N.L., D.L., A.B.-T., E.H. and I.P.T. have approved the submitted version. Furthermore, N.L., D.L., A.B.-T., E.H. and I.P.T. agree to be personally accountable for their own contributions and for ensuring that questions related to the accuracy or integrity of any part of the work, even ones in which they were not personally involved, are appropriately investigated, resolved, and documented in the literature. N.L. and I.P.T. contributed to aspects of the manuscript's conceptualization, data curation, methodology design, investigation, resources, project administration, and visualization. Supervision: I.P.T. The writing, both of the original draft and of the review and editing, was conducted by: N.L., D.L., A.B.-T., E.H. and I.P.T. All authors have read and agreed to the published version of the manuscript.

Funding: The authors received no financial support for the research and authorship of this article. The publication was funded by European University Cyprus.

Institutional Review Board Statement: The studies used in this literature review comply with a series of conditions such as the validity of the content from accredited databases, acceptance by the National Bioethics Commission, and meeting the rules of ethics and research, without the existence of conflicts of interest.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the studies included in this review article.

Figure 2. Schematic representation of aggregated key findings regarding the associations between PTB, maternal nutrition, and placental or fetal TL. PTB: preterm birth, TL: telomere length, RTL: relative telomere length.

Figure 3. Representation of the associations between PTB, maternal nutrition, and TL. Maternal nutrition affects PTB risk, partly through its influence on maternal TL (and in effect placental TL). On the other hand, maternal TL independently affects PTB risk and, at the same time, PTB is a major determinant of offspring TL regulation. PTB: preterm birth, TL: telomere length.

Table 1. Studies associating maternal nutrition and preterm birth.

Table 2. Studies associating maternal nutrition with placental and newborn telomeres.

Table 3. Studies associating preterm birth with placental and newborn telomeres.
2023-11-10T14:06:42.058Z
2023-11-30T00:00:00.000
{ "year": 2023, "sha1": "0e33444e00900541065a56e18cc213a749884525", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "68003c79db20099b7b433a1cebf5537679eec725", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
109678888
pes2o/s2orc
v3-fos-license
Knowledge and awareness of cervical cancer in Southwestern Ethiopia is lacking: A descriptive analysis

Purpose

Cervical cancer remains the second most common cancer and cause of cancer-related death among women in Ethiopia. This is the first study, to our knowledge, describing the demographic and clinicopathologic characteristics of cervical cancer cases in a mainly rural, Southwestern Ethiopian population with a low literacy rate, to provide data on the cervical cancer burden and help guide future prevention and intervention efforts.

Methods

A descriptive analysis of 154 cervical cancer cases at the Jimma University Teaching Hospital in Southwestern Ethiopia from January 2008 to December 2010 was performed. Demographic and clinical characteristics were obtained from patient questionnaires, and cervical punch biopsies were histologically examined.

Results

Of the 154 participants with a histopathologic diagnosis of cervical cancer, 95.36% had not heard of cervical cancer and 89.6% were locally advanced at the time of diagnosis. Moreover, 86.4% of participants were illiterate, and 62% lived in a rural area.

Conclusion

A majority of the 154 women with cervical cancer studied at the Jimma University Teaching Hospital in Southwestern Ethiopia were illiterate, had not heard of cervical cancer, and had advanced disease at the time of diagnosis. Given the low rates of literacy and knowledge regarding cervical cancer in this population, which have been shown to correlate with decreased odds of undergoing screening, future interventions to address the cervical cancer burden here must include an effective educational component.

Introduction

Cervical cancer pathology and demographic data are lacking from Southwestern Ethiopia. The Jimma University Teaching Hospital (JUTH) is located in the city of Jimma, which is 352 km southwest of Ethiopia's capital city Addis Ababa, and is unique in that it acts as the only teaching and referral hospital in the region, serving a population of 15 million people [1]. Moreover, Jimma is part of the Oromia state, which has one of the highest poverty rates (74.9% of the population) and lowest literacy rates in the country (36% of all residents, and 17% among female residents living in rural settings) [2,3].
Contributory data from this hospital is vital since every year, an estimated 7,095 women are diagnosed with cervical cancer and 4,732 deaths are due to the disease in Ethiopia-it is currently the second most common cause of female cancer deaths in Ethiopia, after breast cancer. Infection with high-risk human papillomavirus (HPV) is the necessary cause of >99% of cervical cancer [4]. Other contributing factors include smoking, total fertility rate, and human immunodeficiency virus (HIV) infection [5]. The knowledge about cervical cancer in Ethiopia has been reported to range from 21.2% to 53.7%, with screening rates that ranged from 9.9% to 23.5%. Three of these four studies, however, took place in Northern Ethiopia [6-9]. Though there is not yet an organized cervical cancer education or screening program in Ethiopia, the ongoing dilemma remains how much the absence of such programs compared to a general lack of education or negative attitude towards cervical cancer contribute to the disease burden. Aweke et al. described that 34.8% of 583 survey respondents in Southern Ethiopia had a negative attitude pertaining to cervical cancer [7]. Place of study The study took place at the Jimma University Teaching Hospital Departments of Obstetrics and Gynecology and Medical Laboratory Sciences and Pathology in Southwestern Ethiopia. This study, including the verbal/oral consent procedure, was approved by the Touro University California Institutional Review Board in the United States of America, by the Research and Publication Committee of the Faculty of Medical Sciences at Jimma University, by the Jimma University Ethics Review Committee and by the Jimma University Teaching Hospital Departments of Obstetrics and Gynecology and Medical Laboratory Sciences and Pathology in Ethiopia. Verbal/oral consent was only able to be obtained as opposed to written consent given that a significant proportion of the study population was not literate. The verbal/oral consent was recorded by the residents who were interviewing the subjects/performing the procedure onto individual survey sheets, which were then transcribed into a central document. Study population The study population included non-pregnant women voluntarily attending the Jimma University Teaching Hospital Department of Obstetrics and Gynecology outpatient clinic from January 2008 -December 2010 who had evidence of cervical lesions on initial pelvic examination. All of the participants voluntarily presented to the clinic and were willing to be screened; data was collected only after full informed oral consent for participating in the study was obtained. Screening procedure Data was collected by residents in the Department of Obstetrics and Gynecology who were informed regarding the study parameters and were in charge of the outpatient service on a rotation basis. All non-pregnant women with cervical lesions were invited to participate during the study time period. The patients were informed about the indications, contraindications, and alternative options of undergoing a cervical punch biopsy to recognize any cervical pathology. Oral consent was obtained from each case before the interview, punch biopsy procedure and data collection for participation in the study. Then each patient was interviewed using a standardized questionnaire (S1 File) to extract information regarding additional clinical features, sociodemographic characteristics, maternity history, and knowledge about cervical carcinoma, amongst others. 
Questionnaires were collected weekly and checked for adequacy-those with inadequate data (missing data or unrecognizable responses) were excluded. Pelvic examination was conducted to characterize the cervical lesion(s) and determine the clinical stage. Thorough speculum examination of the cervix was performed to describe any lesion(s) and subsequently a four quadrant punch biopsy of the cervix was taken. The biopsy material was preserved in 10% formaldehyde and submitted to the Department of Medical Laboratory Sciences and Pathology. In the Department of Pathology the formalin fixed tissue was embedded in paraffin, sections were cut and subsequently stained as described. From each case, four microscopic slides were prepared-one remained in the Department of Pathology for clinical management and three were used for the current study. The slide used for clinical management was stained with hematoxylin and eosin (H&E) and diagnosed by a pathologist in the Department of Pathology according to the World Health Organization histological classification of tumors of the uterine cervix and this pathologic report was recorded and relayed to the physician specific to the case for clinical care. The H&E study slides were identified by the biopsy and code number assigned by the initial physician on the biopsy request sheet and questionnaire and were submitted for diagnosis to a pathologist from Touro University California who was blinded regarding the case for quality control. If there was disagreement in the reports between the slide used for clinical management and the second observer report, the slide was given to a third pathologist and the agreement of the two pathologists was taken as the gold standard report to be recorded. Data analysis Data was initially entered into Microsoft Excel after which it was coded and analyzed using STATA 15.0 software. Data cleaning was performed only in the form of eliminating missing data so as to improve accuracy, and descriptive statistics were subsequently used to summarize all variables. Results A total of 240 women presented with various gynecological complaints to the outpatient clinic from January 2008 -December 2010. Eighty six women were excluded: 30 of these women had a diagnosis other than cervical cancer such as cervicitis or a cervical polyp but their remaining data was insufficient to analyze; the remaining 56 women were excluded due to an uninterpretable or equivocal biopsy. This left 154 cases to be analyzed and their subjective and objective clinical data is summarized in Tables 1-3. Demographic and clinical features Cervical cancer is a unique cancer in that effective screening methods are known to prevent disease and associated mortality. Knowledge about the disease and preventive options are vital to effectively control the disease; however, we highlight in the current study that there is a considerable lack of knowledge and awareness regarding cervical cancer which is the second most common cause of cancer deaths in Ethiopia. Knowledge about cervical cancer in Ethiopia has been reported to range from 21.2% to 53.7% [6-9], and Aweke et. Al described that 34.8% (n = 583) of survey respondents in Southern Ethiopia had a negative attitude pertaining to cervical cancer [7]. In our study a majority 144 women (95.36%) had not heard of cervical cancer compared to 138 out of 633 women (21.8%) who had not heard of it in a study done in Gondar town, northwest Ethiopia in 2010 [6]. 
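The Data analysis step described above (entry in Microsoft Excel, elimination of missing records, and descriptive summaries in STATA 15.0) can be sketched in outline as follows. This is an illustrative Python sketch only: the file name, column names, and the stage grouping used for "locally advanced" are hypothetical, not taken from the study's actual dataset or code.

import pandas as pd

# Illustrative only: hypothetical file and column names standing in for the
# questionnaire fields described above (literacy, residence, awareness, stage).
df = pd.read_csv("jimma_cervical_cases.csv")

# "Data cleaning ... in the form of eliminating missing data"
df = df.dropna(subset=["literate", "residence", "heard_of_cervical_cancer", "figo_stage"])

summary = {
    "n": len(df),
    "illiterate_%": 100 * (df["literate"] == "no").mean(),
    "rural_%": 100 * (df["residence"] == "rural").mean(),
    "never_heard_of_cervical_cancer_%": 100 * (df["heard_of_cervical_cancer"] == "no").mean(),
    # One possible (assumed) grouping for "locally advanced" disease.
    "locally_advanced_%": 100 * df["figo_stage"].isin(["IIB", "IIIA", "IIIB", "IVA"]).mean(),
}
print(pd.Series(summary).round(2))

# Frequency tables of the kind summarized in Tables 1-3.
print(df["figo_stage"].value_counts())
print(df["histologic_type"].value_counts())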
In that cross-sectional survey, the literacy rate was 18.8%, whereas the rate was 86.4% in our current study. Moreover, a majority of our study participants lived in rural areas (62%) where access to television/radio and health professionals is limited-these were noted as the two most common sources for hearing about cervical cancer in the aforementioned study. The lack of knowledge regarding cervical cancer is of note since preventative efforts such as screening have been shown to reduce the risk of cervical cancer compared to no screening [10]; furthermore, a single-visit approach for cervical cancer screening in Ethiopia was described by Addis Tesfa in 2010 where visual inspection of the cervix with acetic acid wash (VIA) with subsequent cryotherapy of premalignant lesions was performed. One VIA at age 35 can reduce a woman's lifetime risk of cervical cancer by 25% and if screened again at age 40 by 65% [11]. Cervical cancer educational strategies have been shown to improve screening in studies which targeted rural populations of sub-Saharan Africa [12][13][14]. Erku et al. describe that the odds of undergoing cervical cancer screening among women who had a comprehensive knowledge on cervical cancer and screening were 2.02 times higher than those who did not in a northwest Ethiopian population. In this study, a majority (87.7%) of the respondents had heard of cervical cancer. This is likely an overestimate since this study included a population of women living with HIV/acquired immunodeficiency syndrome (AIDS) which may have an increased level of awareness with more frequent healthcare visits [8]. In Ethiopia, currently there are approximately 25 cervical cancer screening centers that are providing visual inspection with acetic acid (VIA), however there is low participation in the community which is partly attributed to the lack of awareness regarding this disease [15]. Geremew et al describe that college and above educational status, knowing someone with cervical cancer, and having knowledge of cervical cancer were positively associated with favorable attitudes towards cervical cancer screening [16]; in the current study, a majority of the patients were illiterate and had decreased knowledge regarding cervical cancer which may explain the lack of screening in our specific population. The National Cancer Control Plan of Ethiopia headed by the Federal Ministry of Health Ethiopia plans a nation-wide scale up of the screening and treatment of cervical pre-cancerous lesions into over 800 health facilities [17]. The mean age at diagnosis of cervical cancer in the United States has been shown to be 48 years and in our study from Ethiopia it was 45 years [18]. Our study differs in that there is no data on prior screening which may have decreased the age at diagnosis and if so, could be attributed to a possible faster progression from HPV to cervical cancer secondary to HIV co-infection or other synergistic risk factors, particularly in the absence of a cervical cancer screening program. Established risk factors for most cervical cancer include: early onset of sexual activity, multiple sexual partners, immunosuppression, increasing parity, low socioeconomic status and oral contraceptive use [5]. A qualitative study of 198 patients with cervical cancer from Tikur Anbessa Hospital in Addis Ababa, Ethiopia in 2013 [19] is compared to our study at JUTH in Table 4. 
The mean age at first sexual intercourse in southwestern Ethiopia has previously been shown to be 17.07 years (+/-2.12) in a group of 405 young women where cervical lesions were not studied [20]. Our data of cervical cancer cases shows the mean age at first sexual intercourse to be 15.83 years (+/-2.08) and the mean age from the Tikbur Anbessa study is 16.5 years which may be explained by the cultural practice of marriage at a younger age in these selected populations. Variable Jimma Prior studies found that the mean number of sexual partners in Ethiopia for women is approximately 1.5 (cervical lesions not specified) compared to our study which is 2.9 [21][22] and an increased number of sexual partners raises the probability of becoming infected with HPV. The total fertility rate is estimated to be 4.8 children per woman in Ethiopia (cervical lesions not specified) compared to our study which is 6.27 per woman. The proposed mechanism for higher parity as a risk factor for cervical cancer include increased estrogen exposure during pregnancy, persistence of the transformation zone on the ectocervix in multiparous women, and cervical tissue damage during vaginal deliveries [22]. Hormonal steroids (such as those in oral contraceptive pills) have been shown to activate enhancer elements in the upstream regulatory region of the HPV type 16 viral genome which is one proposed mechanism for the increased risk of cervical cancer [23]. Out of the 35 women (23.33%) in our study used contraception, none practiced barrier contraception. The majority of these 35 women (80%) used oral contraceptive pills which have been shown to increase the cumulative incidence of invasive cervical cancer by age 50 from 7. . This increase may, however, be attributed to increased awareness, screening and subsequent diagnosis. In our study, a majority of women presented at stage IIB followed by stage IIIA at the time of diagnosis and the general trends in Ethiopia at that time remained at presenting at stage IIIB being the most frequent, and secondly stage IIB (Table 5). Histopathologic classification The majority of cervical cancers in the United States are squamous cell carcinoma (69%) followed by adenocarcinoma (25%) [27]. Histopathologic subtype classification in a study of 598 cervical cancer cases in Nigeria and 2,930 cervical cancer cases in South Africa demonstrated squamous cell carcinoma as the most common type as was shown in 92.3% and greater than 80% of cases, respectively [28][29]. In the United States, other non-squamous cervical cancers have been observed in the following frequencies: adenosquamous carcinomas represent 20%-30% of all adenocarcinomas of the cervix and small cell carcinomas represent 0.5%-5% of all invasive cervical cancers. In our study, approximately 91% of the cervical cancer cases were squamous cell carcinomas (including keratinizing, non-keratinizing and basaloid subtypes), 5.84% were small cell carcinomas, 2.59% were adenocarcinomas, and 0.64% were adenosquamous carcinomas. The squamous cell carcinoma frequency was similar to that observed in prior studies; however, an increased frequency of small cell carcinomas over adenocarcinomas was also noted in our study. It has been shown that the keratinizing squamous cell carcinoma subtype is associated with a higher likelihood of advanced stage disease and a lower overall 5-year survival [30] and in our study we observed a 51.29% frequency of this subtype. 
The HPV-18 genotype is more commonly associated with adenocarcinomas and small cell carcinomas of the cervix; however, the cases in this study were not subtyped. Few studies describing the high-risk HPV genotypes have been performed in Ethiopia out of which one study of 98 women with cervical dysplasia in Jimma showed that HPV-18 was detected in 8.2% of the 67.1% of HPV DNA positive samples [31]. Based on other studies, HPV type 18 is detected in 18.2% of cervical cancer cases in Ethiopia [32]. A population based study from 1988-2004 of 6,853 women with squamous cell carcinoma found that keratinizing squamous cell carcinoma of the cervix may be less radiosensitive and associated with shorter overall survival than non-keratinizing squamous cell carcinoma [30]. In our study, a majority of women presented with locally advanced cervical cancer (89.6%, Table 5), whereas approximately 54.9-58.8% of patients were diagnosed at a late stage in a California database from the United States [33], as a means of comparison to a high-income country with an established screening program in place. We believe the majority of women in our study presented with locally advanced lesions not entirely due to an intrinsic pathogenetic difference, but because of lack of a cervical screening program in Ethiopia, decreased knowledge about cervical cancer, inability to attend health clinics due to cost and travel expenditure, and increased exposure to risk factors. Limitations, future directions and recommendations Our study did not perform laboratory confirmation of HPV or HIV infection, or test for coinfections with other sexually transmitted infections. Recall bias may have affected the demographic data since it was procured by a survey. Future directions include measuring survival outcomes after intervention for cervical cancer and studying the effectiveness of cervical cancer screening after education. Based on our data, in this specific population of Ethiopian women we recommend promoting an educational initiative about cervical cancer among Ethiopian women given that improved knowledge regarding the disease has been shown to increase screening and decrease cervical cancer rates. Conclusions Most of the 154 women with cervical cancer studied at the JUTH in southwestern Ethiopia were illiterate, had not heard of cervical cancer, had advanced disease at the time of diagnosis and had microscopically confirmed squamous cell carcinomas. The low rates of literacy and knowledge regarding cervical cancer in this population were also associated with lower screening rates. Future interventions to address the cervical cancer burden in Ethiopia should include an effective educational component which has been shown to increase screening rates and ultimately decrease the cervical cancer incidence. Supporting information S1 File. Annex II questionnaire. Each patient was orally interviewed by residents using this standardized questionnaire who then input the information accordingly. The histopathology data (Section IV) was completed by a pathologist. 
(DOCX) Acknowledgments We would like to acknowledge the Jimma University Teaching Hospital and the Global Physicians Corps for their technical support in this study, and to the Touro University California Institutional Review Board in the United States of America, the Research and Publication Committee of the Faculty of Medical Sciences at Jimma University, the Jimma University Ethics Review Committee and the Jimma University Teaching Hospital Departments of Obstetrics and Gynecology and Medical Laboratory Sciences and Pathology in Ethiopia for their approval and permission to perform this study. We would also like to acknowledge all of the physicians/ trainees/staff who assisted in data collection and to all of the study participants who provided this vital data in an overall effort to study and reduce the morbidity/mortality attributed to cervical cancer.
2019-04-12T13:41:38.617Z
2019-03-28T00:00:00.000
{ "year": 2019, "sha1": "519190e64fa72ead61c84a4b2d4e8e3936bcf483", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0215117", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "61e023cdf9e493d082343577d5ebfb63f781cfdc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14474512
pes2o/s2orc
v3-fos-license
Endogenous benzodiazepine-like compounds and diazepam binding inhibitor in serum of patients with liver cirrhosis with and without overt encephalopathy Background/Aim—Despite some controversy, it has been suggested that endogenous benzodiazepine plays a role in the pathogenesis of hepatic encephalopathy. The aim of the present study was to evaluate the concentrations of endogenous benzodiazepines and the peptide, diazepam binding inhibitor, in the blood of patients with liver cirrhosis with and without overt encephalopathy, and to compare these levels with those of consumers of commercial benzodiazepines. Subjects—Normal subjects (90), benzodiazepine consumers (14), and cirrhotic patients (113) were studied. Methods—Endogenous benzodiazepines were measured by the radioligand binding technique after high performance liquid chromatography (HPLC) purification. The presence of diazepam andN-desmethyldiazepam was assayed by HPLC-electrospray tandem mass spectrometry. Diazepam binding inhibitor was studied in serum by radioimmunoassay. Results—Endogenous benzodiazepines were below the limit of detection in 7% of patients with encephalopathy. When detectable, their levels were at least comparable with those of benzodiazepine consumers and correlated with the liver dysfunction but not the stage of encephalopathy. Serum levels of diazepam binding inhibitor tended to decrease when endogenous benzodiazepines levels increased. Conclusions—Endogenous benzodiazepines may accumulate in patients with liver cirrhosis during the course of the disease, and the phenomenon appears to be independent of the presence or absence of encephalopathy. Hepatic encephalopathy is one of the major complications of liver cirrhosis, and it is a component of fulminant hepatic failure characterised by impairment of the central nervous system, which is believed to develop from increased tone of the inhibitory -aminobutyric acid (GABA A ) receptor system (for reviews, 1-3 ). The involvement of this receptor system in overt hepatic encephalopathy (OHE), discovered in the 1980s during studies on GABA A receptors in the brain of animals with OHE, was considered likely when specific benzodiazepine receptor antagonists were shown to revert the symptoms of encephalopathy in animal models 4 and in patients. 5 6 Later, the observation of an increased presence of endogenous benzodiazepine receptor ligands (BZDs) in animals and patients with OHE [7][8][9][10][11][12][13] suggested that this phenomenon may contribute to the enhancement of GABAergic neurotransmission. 14 We cannot exclude, however, the possibility that compounds such as ammonia 1 3 15 or neurosteroids 16 contribute to the above mentioned increased functional activity of the GABA A receptor system. BZDlike compounds and ammonia may potentiate inhibitory GABAergic neurotransmission by acting synergistically. 17 The endogenous receptor ligands found in blood and brain during OHE 7 9 were called BZD-like substances since they are a mixture of the halogenated 1,4-benzodiazepines (such as diazepam) and non-halogenated BZDs (called "endozepines"). Although the chemical structure of the endozepines is not yet fully characterised, it is fair to surmise that they contribute to OHE. Halogenated BZDs are naturally present in several plants and vegetables, 18 in brain tissues of diVerent animal species and in man. 
19 Their sources have not yet been clarified, but the observation that they are present in human brain samples stored since 1940 20 indicates that they do not derive from environmental pollution with synthetic BZDs, which have been produced commercially since 1959. These compounds and their precursors are components of our diet. 18 An exogenous biosynthetic pathway for the production of such compounds cannot, however, be excluded since we recently showed that a reduction in the intestinal bacterial flora caused by a non-absorbable antibiotic partially decreases the levels of these compounds in the blood. 21 Other endogenous BZDs such as the neuropeptide called diazepam binding inhibitor (DBI) and its metabolite, the octadecaneuropeptide, which decreases GABA A neurotransmission, 22 have been found to be increased in the cerebrospinal fluid of patients with OHE 23 and in brain regions of rats with portacaval anastomosis. 24 Since few studies have been performed on endogenous circulating BZDs in patients with OHE due to fulminant hepatic failure 13 or liver cirrhosis, 7 9 10 the aim of the present study was to (a) evaluate the concentrations and nature of BZD-like compounds in the plasma of patients with liver cirrhosis with and without OHE, (b) compare the levels found in liver cirrhotic patients with those present in the plasma of consumers of commercial BZDs in order to estimate their pharmacological relevance, and (c) study the levels of DBI in both the patients and BZD consumers, bearing in mind that little is known about the mutual interaction of BZD compounds and DBI at the periphery. SUBJECTS (TABLES 1 AND 2) We studied 113 patients with liver cirrhosis and 90 normal subjects, who appeared to be free of commercial BZD medication for at least three months as verified by patient, family, and medication records. Moreover 14 normal subjects who were habitual consumers of commercial BZDs were included in the study. The diagnosis of liver cirrhosis was based on biochemical tests and liver biopsy. Fifty nine patients showed no evidence of OHE while the other 54 showed diVerent stages of impaired mental status. The stage of OHE was evaluated on the basis of electroencephalographic pattern. 25 This test allowed the classification of the cirrhotic patients into the following categories: 59 with stage 0, 22 with stage I, 19 with stage II, eight with stage III, and five with stage IV. The functional status of the liver was clinically classified according to the Child-Pugh classification. 26 Table 1 gives the characteristics of the patients included in the study, and table 2 contains laboratory data. The 14 regular consumers of BZDs, who used diazepam 2 mg per day or lorazepam 2.5 mg per day as sedatives, had normal liver and kidney function. The serum obtained from all patients was stored at −80°C until used and individually processed for the assay of BZDs and DBI. The study was carried out with the approval of the local ethical committee. QUANTIFICATION OF ENDOGENOUS BZD-LIKE COMPOUNDS As previously described, 9 aliquots of all the serum samples (1 ml) were acidified with acetic acid (1 M), and centrifuged at 3000 g for ten minutes. The supernatant was passed through previously washed Sep-Pak C 18 cartridges (Millipore, Medford, MA, USA). The material was eluted from Sep-Pak with acetonitrile/ 0.1% trifluoroacetic acid (TFA) and then lyophilised. 
The lyophilised samples were reconstituted with 1 ml water, and aliquots (200 µl) were chromatographed in duplicate at 0.8 ml/min on a LiChrospher 100 RP-18 column (250 × 4.0 mm; 5 µm) equilibrated with 80% water/0.1% TFA and 20% acetonitrile. Absorbance was monitored at 230 nm. Samples were chromatographed using a water/0.1% TFA and acetonitrile gradient at 0.5% per minute from 20 to 58% acetonitrile. Seventy five fractions (one per minute) from each sample were collected, lyophilised, and reconstituted with water before radioreceptor assay. Known concentrations of diazepam, N-desmethyldiazepam, oxazepam, lorazepam, delorazepam, and 2'-chlordiazepam were run in parallel with the plasma samples. Unless otherwise indicated, all reagents were obtained from Sigma Chemical Co. and were all high performance liquid chromatography (HPLC) grade. All the fractions were then tested for their ability to inhibit [ 3 H]flunitrazepam (1 nM; specific activity 87 Ci/mmol; NEN, Boston, MA, USA) binding to rat cerebellar membrane preparations, which are a source of BZD receptors, 9 and containing 180-200 µg protein/100 µl measured by Bradford's method. 27 Data were expressed as diazepam equivalents (DE) based on extrapolation from standard displacement curves generated using diazepam. The total concentration of BZD-like compounds present in each serum was calculated by determining the DE derived from the displacement activity of any single peak and then summing the values of all peaks. Since the chemical identities of all the components of the BZD-like material are not known, their extraction eYciencies could not be determined. The limit of detection of diazepam by [ 3 H]flunitrazepam binding was 2 nmol DE/l with a coeYcient of variation of 0.52. Assays were performed in triplicate and variations from the mean were less than 15%. 28 The analysis was performed using a triple stage quadrupole TSQ 7000 LC-MS-MS system with electrospray interface (Finnigan MAT, Bremen, Germany). Data acquisition and mass spectrometric evaluation were conducted on a Personal precipitate plasma proteins, 1 ml plasma was diluted with 1 ml saline and 2 ml 2 M acetic acid, heated at 90°C for 10 min followed by the addition of 2 M NaOH (1 ml). After centrifugation at 20 000 g for 20 minutes, aliquots of supernatants were lyophilised in triplicate and used for DBI radioimmunoassay (DBI-RIA). The characterisation of DBI immunoreactivity detected in serum extracts by reverse phase HPLC and the DBI-RIA, performed using antiserum produced in rabbits against human recombinant DBI, were performed as previously described. 29 30 The specificity of the immunoreactive material detected was determined by incubation of diVerent aliquots of tissue extract that paralleled the standard curve and by use of reverse phase HPLC. 29 31 STATISTICAL ANALYSIS The Kruskal-Wallis test was used to determine whether a given variable diVered significantly between groups. Comparisons between single groups were performed by means of the Mann-Whitney U test corrected as described by Bonferroni. BZD-LIKE COMPOUNDS The extraction and purification of plasma samples from normal subjects and from patients with liver cirrhosis with and without OHE showed the presence of at least 12 diVerent peaks, with a retention time ranging from 17 to 70 minutes under our chromatographic conditions. The number of peaks found in each patient ranged from one to four. 
The number of fractions containing BZD ligands was consistently lower in controls and in liver cirrhosis without OHE (one or two peaks) than in patients with OHE (three or four peaks). Endogenous benzodiazepines and liver cirrhosis The active fractions found in BZD consumers were represented by three or four peaks. The most commonly occurring peak in liver cirrhosis with OHE showed a retention time of 37 minutes, which did not correspond to the peaks of N-desmethyldiazepam, diazepam, lorazepam, delorazepam, 2'-chlordiazepam, or oxazepam used as standards, which respectively had retention times of 40, 49, 53, 57, 67, and 69 minutes, under our conditions. In a few cases, N-desmethyldiazepam or diazepam or both were found, but the amounts normally represented less than 20% of the total BZDlike material. The concentrations of BZDs in extracted and purified serum from normal subjects were below the limit of detection (less than 2 nmol DE/l) in 46 cases (51%). In the remaining 44, the total amount of BZD substances that displaced [ 3 H]flunitrazepam binding ranged between 6 and 20 nmol DE/l (mean value 9.5 nmol DE/l). In patients with liver cirrhosis, BZD-like compounds were below the limit of detection in 10 cases without OHE (16%; eight in Pugh-Child class A and two in Pugh-Child class B) and in five cases with OHE (7%; four in Pugh-Child class B and one in Pugh-Child class C). As shown in fig 1, the concentrations in the liver cirrhosis patients without OHE (stage 0) with measurable amounts of BZDs was extremely variable ranging from 4 to 1240 nmol DE/l (mean value 180 nmol DE/l). BZD concentrations in patients with stage I OHE were below the detection limit in four patients and, when measurable, ranged from 54 to 3750 nmol DE/l (mean value 924 nmol DE/l). In patients with stage II, these compounds were below the detection limit in one patient and the others ranged from 88 to 3890 nmol DE/l (mean value 1626 nmol DE/l). In patients with stage III, the values ranged from 98 to 3980 nmol DE/l (mean value 1348 nmol DE/l). In patients with stage IV, two patients had respectively 200 and 240 nmol DE/l and three had 2850, 4850 and 5890 nmol DE/l (mean value 2806 nmol DE/l). Kruskal-Wallis one way analysis of variance shows a significant diVerence between groups (p<0.0001). The Mann-Whitney U test adjusted using the Bonferroni correction shows that patients without OHE (stage 0) did not diVer from controls, whereas all those with OHE, irrespective of the stage, had significantly higher BZD levels than controls and patients with stage 0 OHE (p<0.001). As shown in fig 1, the values found in patients with stages I, II, III, and IV of OHE were not diVerent from each other and not different from the values found in BZD consumers, which ranged between 1400 and 5600 nmol DE/l (mean value 2511 nmol DE/l). When the population of patients with liver cirrhosis was classified according to the Child-Pugh system (fig 2), the plasma concentrations of BZDs ranged between 4 and 50 nmol DE/l (mean value 22 nmol DE/l) in Child-Pugh class A, between 54 and 1900 nmol DE/l (mean value 555nmol DE/l) in Child-Pugh class B, and between 82 and 5890 nmol DE/l (mean value 1739 nmol DE/l) in Child-Pugh class C. Kruskal-Wallis one way analysis of variance shows a significant diVerence between groups (p<0.0001). 
The Mann-Whitney U test adjusted using the Bonferroni correction shows that the BZD concentrations found in Pugh-Child class A patients did not diVer from controls, those found in Pugh-Child class B patients diVered from Pugh-Child class A (p<0.001), and those found in Pugh-Child class C diVered from Pugh-Child class A (p<0.001) and from Pugh-Child class B (p<0.05), indicating a correlation between serum BZD concentrations and the severity of the liver disease. HPLC-ESI-MS-MS (FIG 3) Mass spectrometric studies utilising HPLC-ESI-MS-MS on the active fractions were performed on 12 controls, 37 liver cirrhosis cases without OHE, and 16 liver cirrhosis cases with stages I-IV of OHE. Diazepam and N-desmethyldiazepam were below the detection limit in normal subjects and in 34 of 37 of the patients without OHE. In the remaining three patients, trace amounts of both compounds were found in two, and in one there was only N-desmethydiazepam. In liver cirrhosis with OHE the above two compounds were below the detection limit in two patients with stage I OHE and were represented only in trace amounts of N-desmethyldiazepam in two patients with stage III and IV OHE Figure 3 Mass chromatograms of serum samples from two patients (A and B) with stage IV hepatic encephalopathy with increased benzodiazepine-like compounds measured by the radioligand binding technique after HPLC purification. The upper panel for patient A shows the presence of molecules undergoing fragmentation at 271/140 m/z, which is characteristic of N-desmethyldiazepam, and the central panel shows the presence of molecules undergoing fragmentation at 285/257 m/z, which is characteristic of diazepam, obtained by selected reaction monitoring (SRM) mass scan. The bottom panel for patient A represents the reconstructed ion chromatograms (RIC) which confirms the presence of both benzodiazepines. Scanning the sample of patient B for molecules with the same fragmentation patterns as above showed the absence of both benzodiazepines, and the RIC confirms this result. respectively. In the remaining 12 patients the values for N-desmethyldiazepam ranged between 94 and 835 nmol/l and those for diazepam ranged between 45 and 112 nmol/l, and there was no correlation with the stage of OHE. Figure 3 shows the reconstructed ion chromatograms of plasma samples from two patients with liver cirrhosis and stage IV OHE. DIAZEPAM BINDING INHIBITOR (FIG 4) The DBI-LI levels ranged between 0.31 and 2.37 nmol/l (mean value 1.04 nmol/l) in control subjects, between 0.28 and 1.01 nmol/l (mean value 0.55 nmol/l) in liver cirrhosis without OHE, and between 0.13 and 0.57 nmol/l (mean value 0.34 nmol/l) in liver cirrhosis with OHE. Interestingly, the levels of DBI-LI in BZD consumers ranged between 0.15 and 0.57 nmol/l (mean value 0.33 nmol/l). Kruskal-Wallis one way analysis of variance shows a significant diVerence between groups (p<0.0001). The Mann-Whitney U test adjusted using the Bonferroni correction shows that the levels found in liver cirrhotic patients with or without OHE were statistically diVerent from controls (p<0.05 and p<0.001 respectively). The DBI-LI in BZD consumers was diVerent from controls (p<0.005) and similar to those found in cirrhotic patients with or without OHE. 
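The group comparisons reported above follow the scheme given under Statistical analysis: a Kruskal-Wallis test across all groups, followed by pairwise Mann-Whitney U tests with a Bonferroni correction. A minimal sketch of that scheme is shown below; the concentration values are invented placeholders (the per-patient data are not given in the text), and the exact correction procedure used in the original analysis may differ in detail.

import numpy as np
from itertools import combinations
from scipy import stats

# Placeholder serum concentrations (nmol DE/l), for illustration only.
groups = {
    "controls":     np.array([6, 8, 9, 12, 20]),
    "Child-Pugh A": np.array([4, 10, 22, 35, 50]),
    "Child-Pugh B": np.array([54, 200, 555, 900, 1900]),
    "Child-Pugh C": np.array([82, 400, 1739, 3200, 5890]),
}

# Overall comparison across groups (Kruskal-Wallis one-way analysis of variance).
h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

# Pairwise Mann-Whitney U tests with a Bonferroni-adjusted significance level.
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)
for a, b in pairs:
    u, p = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    print(f"{a} vs {b}: U = {u:.1f}, p = {p:.4f} (Bonferroni alpha = {alpha:.4f})")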
Discussion We have shown in this study, which includes a large number of fully characterised liver cirrhotic patients, that: (1) endogenous BZDlike compounds are, under our experimental conditions, below the detection limit (2 nmol DE/l) in 51% of normal subjects, in 16% of liver cirrhotic patients without OHE, and in 7% of those with OHE; (2) when detectable, BZD levels rise in the serum of cirrhotic patients in correlation with worsening liver function, but not with the degree of OHE; (3) the measurable BZD-like compounds comprise both known BZDs such as diazepam and N-desmethyldiazepam and unknown BZD-like compounds, and these so called "endozepines" seem to represent most of the displacing ligands in plasma; (4) when detectable, the BZD levels found in patients with OHE were comparable with those present in BZD consumers with normal states of consciousness; (5) DBI-LI levels were found to be decreased in cirrhotic patients independently of the presence or absence of OHE. In BZD consumers, in whom BZD levels were constantly elevated, we found a significant reduction of DBI-LI. These data indicate an inverse correlation with the levels of circulating BZDs. The finding that encephalopathy may occur in liver cirrhotic patients with very low levels of circulating BZD-like compounds, if not below the detection limit, is in line with the results of previous studies on patients with OHE due to fulminant hepatic failure. In these studies, only 60% of patients showed increased levels of BZDs in serum 1 and only 55% had increased concentrations in the brain. 12 These findings are in line with the concept that BZDs in serum diVuse passively into the brain and are in equilibrium with BZDs in the brain. 32 33 The finding that, when detectable, circulating BZD-like compounds reach concentrations comparable with those found in BZD consumers raises the question of what causes the diVerence in the response to BZDs by the brains of cirrhotic patients with OHE and those of BZD consumers. Chronic exposure to commercial BZD produces tolerance represented by reduced GABA-BZD receptor function; this means that administration of increased doses of the drug is required to maintain the pharmacological eVect. 33 In contrast, in patients with liver cirrhosis, rather than tolerance, there is increased cerebral sensitivity to BZD administration. It has been shown that the reduction in BZD dose requirements in these patients is due to changes in the cerebral sensitivity more than to changes in drug disposition. 33 34 Hence it seems fair to surmise that the enhanced GABAergic tone cannot be attributed to increased endogenous BZD-like compounds per se, but more to the presence of pre-existing brain dysfunction related, for example, to ammonia toxicity. 1 3 16 In this situation, a concentration of circulating BZD-like compounds that would have no eVect in a normal subject may facilitate sedation and worsen an episode of encephalopathy in a liver cirrhotic patient. Finally, as regards the nature of the BZD-like compounds in serum, we found the presence of both diazepam and N-desmethyldiazepam by HPLC-ESI-MS-MS analysis. These halogenated compounds, however, represented less than 20% of the total BZD receptor ligands. This observation, which confirms the results of previous studies, [8][9][10] indicates that most of these compounds are substances of unknown origin and nature called "endozepines". 9 Both Kruskal-Wallis one way analysis of variance shows a significant diVerence between groups (p<0.0001). 
The Mann-Whitney U test adjusted using the Bonferroni correction shows that the levels found in liver cirrhotic patients with or without OHE were statistically diVerent from controls (p<0.05 and p<0.001 respectively). The DBI-LI in BZD consumers was diVerent from controls (p<0.005) and practically equal to those found in cirrhotic patients with or without OHE. Endogenous benzodiazepines and liver cirrhosis halogenated and non-halogenated BZDs were found inconsistently in patients with OHE and were sometimes not raised at all. It remains, however, unclear why BZDs accumulate in the blood of some liver cirrhotic patients and not in others with the same pathological condition. Retrospective control of the diet and therapy used in our patients as well as establishment of the aetiology of the liver cirrhosis did not show any substantial diVerence to explain this phenomenon. As regards DBI, we found that the levels of this peptide are significantly decreased in those patients with liver cirrhosis and increased levels of BZDs independently of the presence or absence of OHE. The levels of DBI do not correlate with neuronal dysfunction or the severity of the liver disease. This finding would appear to exclude any direct eVect of the liver dysfunction and the encephalopathy on the metabolism of this circulating peptide and suggests the presence in the periphery of a negative regulatory feedback mechanism exerted by BZDs on DBI. Accordingly, the same decrease is present in BZD consumers. The relation between DBI levels in plasma and those in the central nervous system is still poorly understood, as is also the regulation of its synthesis and metabolism in peripheral tissues. From these data we can surmise that the ratio between DBI and BZDs in the periphery is probably regulated by diVerent mechanisms from those operating in the central nervous system. In liver cirrhotic patients with OHE, in fact, DBI was shown to be increased in cerebrospinal fluid in the presence of increased levels of BZDs, and the phenomenon was interpreted as an episode of compensatory reaction by DBI to an increased presence of BZDs. 23 Whatever the regulatory mechanism in the periphery may be, the described decrease in DBI in the blood of the liver cirrhotic patients may be of relevance from the metabolic point of view, since this peptide, through stimulation of peripheral BZD receptors, regulates the intermediate metabolism and steroid biosynthesis. 22 35 In conclusion, endogenous compounds with sedative action may accumulate in patients with liver cirrhosis during the course of the disease, and the phenomenon appears to be independent of the presence or absence of encephalopathy. The observation that circulating BZD-like compounds reach levels comparable with those found in BZD consumers with a normal state of consciousness reinforces the concept that these compounds may be more eVective in those patients with pre-existing altered brain function. 33 34 This work was supported by a grant from MIRAAF (no 7240, 1993. Rome) and a grant from Modena University. We thank Professor H Alho, University of Tampere, Finland who kindly provided antiserum raised in rabbits against human recombinant DBI. 
Preliminary data on the assay performed with the radioligand binding technique without previous HPLC purification were given as an oral presentation to the International Association for the Study of the Liver, Cancun, May 1994, and published in abstract form: Zeneroli ML, Venturini I, Avallone R, Ardizzone G, Demartini M, Portella G, Baraldi M. Levels of endogenous benzodizepine-like compounds in serum of liver cirrhosis patients with and without encephalopathy and in fulminant hepatic failure. Hepatology 1994;19:1431
2017-04-20T04:14:56.388Z
1998-06-01T00:00:00.000
{ "year": 1998, "sha1": "8983886000e255b2e938fc8092ae0277684cd062", "oa_license": "CCBY", "oa_url": "https://gut.bmj.com/content/42/6/861.full.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "0626e64b9891b7841fda7f25fae7a56ac956a698", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
17491877
pes2o/s2orc
v3-fos-license
Dependence of Quantized Hall Effect Breakdown Voltage on Magnetic Field and Current When large currents are passed through a high-quality quantized Hall resistance device the voltage drop along the device is observed to assume discrete, quantized states if the voltage is plotted versus the magnetic field. These quantized dissipative voltage states are interpreted as occurring when electrons are excited to higher Landau levels and then return to the original Landau level. The quantization is found to be, in general, both a function of magnetic field and current. Consequently, it can be more difficult to verify and determine dissipative voltage quantization than previously suspected. Introduction The integer quantum Hall effect [1] occurs when current is passed through a two-dimensional electron gas (2DEG) formed in a semiconductor device which is cooled to very low temperatures in the presence of a large magnetic field. The Hall resistance RH of the ith plateau of a fully quantized 2DEG assumes the values RH(i) = h/(e²i), where h is the Planck constant, e is the elementary charge, and i is an integer. In high-quality devices the current flow within the 2DEG is nearly dissipationless in the plateau regions for currents around 25 μA. At high currents, however, energy dissipation can suddenly appear in these devices [2,3]. This is called breakdown of the quantum Hall effect. The dissipative breakdown voltage Vx can be detected by measuring voltage differences between potential probes placed on either side of the device in the direction of current flow. Cage et al. [3] found that there is a distinct set of dissipative Vx states in wide samples, with transient switching observed on microsecond time scales among those states. Bliek et al. [4] proposed the existence of a new quantum effect to explain the breakdown structures in their curves of Vx versus magnetic field for samples with narrow constrictions. Their phenomenological model presumed that the structures were quantized in resistance, rather than voltage. Cage et al. [5] then found that, in wide samples, the distinct states are quantized in voltage. Hein et al. [6] have now observed dissipative voltages during breakdown of the quantum Hall effect in wide samples, but did not confirm that these voltage states are quantized. We show in this paper that the voltage is indeed quantized, but that the quantization is more complicated than previously suspected because, in general, it is a function of both the magnetic field and the current. Some of the data presented here were described with less detail in an earlier paper [7]. Figure 1 shows sweeps of Vx(2,4) versus the magnetic field B for the i = 2 (12,906.4 Ω) quantized Hall resistance plateau at a temperature of 1.3 K and a current, I, of +210 μA, where positive current corresponds to electrons entering the source and exiting the drain. This current is approaching the 230 μA critical current value for this plateau at which Vx never reaches zero for these particular potential probes. One of two distinct paths always occurred for positive current when magnetic field sweeps were made in the direction of increasing B. Those distinct paths are labeled 1 and 2 in the figure. This path "bifurcation" is unusual. It occurred only for the Vx(2,4) probe pair at positive current, and only for the i = 2 plateau. A pronounced hysteresis was observed when magnetic field sweeps were made in the opposite direction; this path is indicated by the dashed line, labeled 3. 
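The i = 2 plateau value quoted above (12,906.4 Ω) follows directly from RH(i) = h/(e²i). A quick numerical check, using present-day values of the constants (which agree with the conventional value in use at the time to the precision quoted):

# Quick check of the quantized Hall resistance plateau values, R_H(i) = h/(e^2 i).
h = 6.62607015e-34    # Planck constant, J s
e = 1.602176634e-19   # elementary charge, C

for i in (1, 2, 4):
    print(f"i = {i}: R_H = {h / (e**2 * i):10.1f} ohm")
# i = 2 gives about 12906.4 ohm, the plateau value quoted above.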
The dashed-line curve was repeatable for all sweeps with decreasing B, varying only slightly for the value of 5 at which Vx again rose to path 1. The fact that Vx is zero over such a large magnetic field region for path 3 indicates the existence of a dissipationless state between 11.2-12.2 T. Figure 2 shows eight consecutive sweeps of J^(2,4) versus an increasing B over a magnified region on the low magnetic field side of Vx minimum at + 210 jiA. Four of the sweeps happened to be Figure 3 shows eight consecutive sweeps of J^ (2,4) for increasing B at -210 nA. No bifurcation was observed for such sweeps. The family of curves is labeled path 4 in the figure. The curves lie between those of paths 1 arid 2 at +210 |xA. Curves for decreasing B always followed the dashed line of path 3. The data of Figs. 2 and 3 are combined in Fig. 4 to show the 16 consecutive sweeps for increasing B at ±210 (lA and the two identical sweeps for decreasing B. Nothing is unique about these sweeps. Additional sweeps could have been displayed, but at the expense of reducing the overall clarity. We next demonstrate that the discrete voltage states of Fig. 4 are quantized, and that this quantization is a function of magnetic field. This is done by drawing a family of 20 shaded curves through the data in Fig. 4. The curves have equal (quan-tized) voltage separations at each value of magnetic field. The quantized voltage separations are, however, allowed to vary with B in order to obtain the best fit to the data. The family of curves was generated by first drawing a set of 20 equallyspaced vertical points at a particular value of B. The lowest point of the vertical set was constrained to be at 0.0 mV because Vx is always zero in the dashed-line sweep of path 3, which indicates that a dissipationless state exists over the magnetic field region of this figure. The spacing between the 19 other vertical points was then varied to obtain the best fit with uniform (equal) voltage intervals. This procedure was repeated for approximately 30 other values of 5. Finally, a family of 20 smooth shaded curves was drawn through the corresponding points of every vertical set. The 20 shaded curves, which correspond to a P^=0.0 mV ground state and 19 excited states, are labeled in brackets as [0] through [19]. The voltage separation (quantization) varies between 5.22 and 7.85 mV over the magnetic field range of this figure. The breakdown activity shown in Fig. 4 is confined to the region between, but not including, the Hall probe pairs 1,2 and 3,4 of Fig. 1. This was demonstrated by measuring the quantum Hall voltages of both Hall probe pairs at this current. The resulting curves of both probe pairs had struc- tures with deviations of only about ±0.1 mV from the expected ±2,710.3 mV quantum Hall voltage over the plateau region, and therefore were horizontal, straight lines when plotted to the same resolution as in Fig. 4. In addition, the Vx signals were the same on both sides of the sample for probe pairs 1,3 and 2,4. The higher-lying excited states are difficult to see in the multiple sweeps of Fig. 4 because of switching between states. Figure 5, therefore, shows one of those sweeps along path 4 at -210 jji,A. It is remarkable that the higher-lying states are just as well-quantized (i.e., well-fitted by the shaded curves) as the lower-lying states. The quantization is by no means perfect. Deviations from the shaded curves do occur, but the overall trend is clear. Histograms Cage et al. [8] and Hein et al. 
[6] have seen that the Vx signal can sometimes be time-averages of two or more discrete dc voltage levels in which only one level is occupied at a time, but where switching occurs between the levels. Therefore, histograms were made to ensure that the signals in Fig. 4 were not time-averages of several levels. Each histogram consisted of 16,000 measurements of the Vx signal in a 2.4 s sampling period. Figure 6(a) shows the time-dependence of one such sampling period for a path 4 sweep at 11.77 T; Fig. 6(b) shows the associated histogram. It is referred to as a histogram, rather than a spectrum, because the areas under the peaks do not correspond to the excitation probabilities. One would have to accumulate many histograms to ascertain the excitation probabilities. For example, peaks corresponding to quantum states 7 through 10 appear in Fig. 6, while other histograms at 11.77 T had missing peaks or additional peaks. These histograms never yielded any voltage states other than the ones which appear in Other Currents We next investigate the effect of changing the sample current. The smallest current for which breakdown structures could be observed was at -203 μA; no structures were observed, however, at +203 μA. Figure 8 shows data for three successive path 4 sweeps at -203 μA, plus a path 3 sweep. The individual data points displayed near 11.84 T were generated by slowly increasing the magnetic field and selecting data points when the voltage switched to new states. Switching to new states was sometimes induced by momentarily increasing the sample current and then reducing it back to -203 μA. This procedure allowed additional data to be included without sacrificing clarity. Figure 8 also shows 17 shaded curves from the same family used to fit the data displayed in Figs. 4 and 5 at ±210 μA. The excellent fit would suggest that the voltage quantization was a function of magnetic field, but not a function of current. However, it will be seen in Sec. 3.3 that, in general, the voltage quantization is a function of current. We chose 225 μA as the highest current because the ground state was still occupied. This current approached the 230 μA critical current value at which Vx was still quantized, but never zero. Figure 9 shows five successive sweeps along path 1 and four successive sweeps for path 2 at +225 μA. Note that there is a gradual deviation from zero voltage on the high magnetic field side of the sweeps. Also, interesting features occur on the high field side of the curves at this current. Figure 10 shows four successive path 4 sweeps for increasing magnetic field at -225 μA, as well as a sweep for decreasing magnetic field. That sweep is also labeled path 4 since it follows much of that path; however, it has hysteresis like that of path 3 sweeps where Vx is zero. Many individual path 4 data points, obtained with increasing magnetic field, are also included in Fig. 10 using the procedure described above. Figure 11 combines the data for the two current directions and displays a family of 17 shaded curves which provide the best fit to the data. The ground state begins deviating from zero at 11.97 T, so the lowest point of each vertical set of 17 points used to generate the 17 shaded curves was no longer constrained to be zero on the right hand side of the figure. This deviation from zero presumably arises from some other dissipative mechanism. It will be shown in Sec. 
3.3 that this family of shaded curves for 225 μA is different from that for 203 and 210 μA. Microscopic Models The dissipative voltage states displayed in Figs. 4, 5, 8, and 11 are clearly quantized. We next try to interpret this quantization. Many explanations of breakdown have been proposed. Some mechanisms, such as electron heating instabilities [9] and inhomogeneous resistive channels [10], are inapplicable here since they are classical effects which do not provide quantization. Quantization exists in the quantum Hall effect because the quantized Hall resistance occurs when the conducting electrons in the 2DEG occupy all the allowed states of the lowest Landau levels. It is therefore natural to assume that the quantized dissipation arises from transitions between Landau levels. There are several mechanisms to excite electrons into higher Landau levels that can be considered: (a) the emission of acoustic phonons to conserve energy and momentum, as employed by Heinonen, Taylor, and Girvin [11] and later used in the quasielastic inter-Landau level scattering (QUILLS) model of Eaves and Sheard [12], with refinements and extensions by Cage et al. [8]; (b) Zener tunneling [13]; (c) impurity-assisted resonant tunneling [14]; and (d) transitions between edge states [15,16]. To complicate matters, both bulk and edge states exist at high currents [17]. For bulk transitions, a large electric field (of order 10^6 V/m) is required somewhere across the width of the sample [8]; sample impurities and inhomogeneities might provide this high local field. The confining potential provides a high electric field for edge states, but if breakdown is due to edge states then it is difficult to understand why breakdown does not always occur at very low currents since there is probably an insignificant change in the slope of the confining potential with current. In addition to the above considerations, one must also take into account the return of the electrons to the ground state via emission of either photons or optical phonons. Furthermore, the dissipative Vx signals are quite large. Most of this dissipation must occur outside the breakdown region, otherwise heating effects would depopulate the electron states within the Landau levels and thereby wash out the quantization. Simple Model To avoid controversy about which of those microscopic models [8,[11][12][13][14][15][16] satisfy the above considerations and are appropriate, we use a simple model based on energy conservation arguments, and treat the breakdown region between the Hall probe pairs 1,2 and 3,4 as a black box. We assume that the dissipation arises from transitions in which electrons from the originally full Landau levels are excited to states in higher Landau levels and then return to the lower Landau levels. Vx is then related to the difference in potential between the initial and final states by Vx = f M ħB/[(i/2) m*] (1), where m* is the electron effective mass, ħ is the reduced Planck constant, and f is the ratio of the transition rate r within the breakdown region to the rate I/e at which electrons transit the device; f can also be interpreted as the fraction of conducting electrons that undergo transitions. Equation (1) is appropriate for even values of i. For odd values of i, the factor i/2 should be replaced by the factor i. We associate the quantized values of M with the numbers in brackets for the shaded curves in Figs. 4, 5, 8, and 11. I, Vx, and B are measured quantities, and i, m*, and ħ are constants. Therefore, f and r can be determined from the Vx versus B plots and Eq. (1) if M is known. 
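A rough numerical check of Eq. (1) for the i = 2 plateau: taking the effective mass of a GaAs 2DEG (an assumed value, since the text specifies only a semiconductor device), the voltage step per unit M near 12 T is a few millivolts for fractions f of roughly 0.25 to 0.4 (the range found in the Analysis section below), consistent with the 5.22-7.85 mV separations quoted earlier for the shaded curves.

# Order-of-magnitude check of Eq. (1), V_x = f * M * hbar * B / ((i/2) * m_eff).
# The GaAs effective mass is an assumption; the text only says "semiconductor device".
hbar = 1.054571817e-34    # reduced Planck constant, J s
m_e = 9.1093837015e-31    # electron mass, kg
m_eff = 0.068 * m_e       # assumed GaAs 2DEG effective mass

B = 11.8    # T, middle of the field range of Figs. 4 and 5
i = 2       # plateau index
for f in (0.26, 0.39):    # range of fractions found in the Analysis section
    step = f * hbar * B / ((i / 2) * m_eff)   # voltage per unit M
    print(f"f = {f:.2f}: step = {step * 1e3:.2f} mV")
# Roughly 5-8 mV per excited state, matching the quantized separations quoted earlier.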
Analysis If f and r were constant, then Vx ∝ B in Eq. (1), but it is clear from Fig. 4 that this is not the case for these data because the slope of Vx versus B has the opposite sign. Therefore, both f and r must vary with magnetic field. The fractions f (expressed as a percentage) of electrons that make the transitions in the shaded curves of Fig. 4 were calculated using Eq. (1), and are shown in Fig. 12 at 0.05 T intervals; f varies between 25.7% and 38.8%, corresponding to transition rates between 3.4×10^14/s and 5.1×10^14/s. Histograms obtained in a previous experiment [5] yielded 26.5% for the value of f in the vicinity of 11.75 T, whereas Fig. 12 indicates that f is 29.3% at 11.75 T. The apparent discrepancy arises because the position of the Vx minimum varies slightly with B on each cool-down. The minimum position was about 0.06 T higher for the present cool-down, giving 27.8% for f at 11.81 T, which is in reasonable agreement with the previous result. Shifted peaks were observed in the previous histograms [5], and were attributed to changes in the Vx zero. There is no evidence for ground state shifts in the present experiment. Instead, the shifted states result from the data deviating from the shaded curves of Fig. 4. This is consistent with having to use peaks from many histograms to obtain the ±0.6% quantization accuracy of the previous experiment [5]. The family of shaded curves in Fig. 8 is the same as that in Fig. 4. Therefore the values of f obtained at 203 μA are the same as those shown in Fig. 12 at 210 μA. An independent family of curves was also fitted to the data of Fig. 8. The resulting values of f for the independent family are displayed in Fig. 13. They differ from those in Fig. 12 by as much as 0.9%, indicating that f can be determined to a precision of about 0.1% and an accuracy of about 1% for these particular data. Figure 14 shows values of f for the data of Fig. 11 at 225 μA. Fig. 14. The fractions f for the 17 shaded curves shown in Fig. 11 at ±225 μA. The results of f versus B from Figs. 12-14 are combined in Fig. 15 for the three currents investigated. The difference between the f versus B curves for -203 μA and ±210 μA in Fig. 15 illustrates the 1% accuracy at which the values of f can be determined for these data since they both yielded good fits to the Vx versus B curves at -203 μA. The minimum value of f at 225 μA is essentially the same as at 203 and 210 μA; however, at lower magnetic field values, f is larger at this higher current. Discussion The fraction f of conducting electrons that make the transitions can be quite large. This suggests either that all the current enters the breakdown region (in which case f is the probability for single transitions), or that some of the current bypasses the breakdown region (in which case f would correspond to the fraction of current passing through the breakdown region if the transition probability was always 100%). The fraction f is not necessarily 100%, and, in general, is a function of B and I. These facts can greatly complicate the identification of voltage quantization for most breakdown data because the voltage separations will not be constant if f and r are not constant across the magnetic field range, so the voltages will appear not to be quantized even when they actually are. One can always obtain the product fM from the data by using Eq. (1), but the value of f can only be determined if M can be unambiguously deduced. 
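Since f is defined above as the ratio of the transition rate r to the rate I/e at which electrons transit the device, the transition rates quoted for Fig. 12 follow from r = f I/e. A short consistency check with the reported values:

# Consistency check of the transition rates quoted above, using r = f * I / e.
e = 1.602176634e-19   # elementary charge, C
I = 210e-6            # A, sample current for the Fig. 4 data

for f in (0.257, 0.388):
    r = f * I / e
    print(f"f = {f:.3f}: r = {r:.2e} transitions per second")
# About 3.4e14/s and 5.1e14/s, in line with the rates given for Fig. 12.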
The data presented here are particularly striking and clear, with sharp vertical transitions, switching between states, and sufficient variations between sweeps to generate the families of shaded curves. Although time-consuming, it was thus relatively easy to determine the quantization. We can therefore be reasonably assured that the values of M, and thereby the values of f, have been properly determined. Most breakdown data, however, require very careful measurements to deduce the quantization, and in many cases there may be insufficient structure, switching, and variation to definitively determine M.

Conclusions

Quantized dissipative voltage states exist in the breakdown regime of the quantum Hall effect. This quantization has been interpreted using a simple model in which electrons make transitions consisting of excitations from a lower Landau level to a higher level and then a return to the lower level. Voltage quantization suggests that individual electrons either make a single transition or make a fixed number of multiple transitions, because varying numbers of transitions would result in a continuum of Vx values rather than voltage quantization. We have demonstrated that the dissipative voltage states are quantized, and that, in general, the quantization is a function of magnetic field and current. The actual transition mechanisms are no doubt very complicated, so the breakdown region has been treated as a black box, and we used a simple model to interpret the data. One normally expects quantization phenomena to be predictable, whereas the values of Vx and f are not predictable in the present experiment unless the transition probability is actually always 100% and f is thus the fraction of current passing through the breakdown region. The quantization is not perfect, but it is surprising just how well quantized the dissipative voltage states are, up to at least the nineteenth excited state.
A Comparison of Neuroimaging Findings in Childhood Onset Schizophrenia and Autism Spectrum Disorder: A Review of the Literature Background: Autism spectrum disorder (ASD) and childhood onset schizophrenia (COS) are pediatric neurodevelopmental disorders associated with significant morbidity. Both conditions are thought to share an underlying genetic architecture. A comparison of neuroimaging findings across ASD and COS with a focus on altered neurodevelopmental trajectories can shed light on potential clinical biomarkers and may highlight an underlying etiopathogenesis. Methods: A comprehensive review of the medical literature was conducted to summarize neuroimaging data with respect to both conditions in terms of structural imaging (including volumetric analysis, cortical thickness and morphology, and region of interest studies), white matter analysis (include volumetric analysis and diffusion tensor imaging) and functional connectivity. Results: In ASD, a pattern of early brain overgrowth in the first few years of life is followed by dysmaturation in adolescence. Functional analyses have suggested impaired long-range connectivity as well as increased local and/or subcortical connectivity in this condition. In COS, deficits in cerebral volume, cortical thickness, and white matter maturation seem most pronounced in childhood and adolescence, and may level off in adulthood. Deficits in local connectivity, with increased long-range connectivity have been proposed, in keeping with exaggerated cortical thinning. Conclusion: The neuroimaging literature supports a neurodevelopmental origin of both ASD and COS and provides evidence for dynamic changes in both conditions that vary across space and time in the developing brain. Looking forward, imaging studies which capture the early post natal period, which are longitudinal and prospective, and which maximize the signal to noise ratio across heterogeneous conditions will be required to translate research findings into a clinical environment. INTRODUCTION individual with ASD subsequently develop prominent delusions or hallucinations (3). In the current review, a comparison between ASD and COS was chosen for several reasons. Firstly, children with co-occurring and overlapping symptoms complicate a diagnosis (2,4). At times, a period of medication washout and inpatient observation is required to achieve a diagnostic consensus (7), further supporting a need for brain based biomarkers of disease state and treatment response. Indeed, over one quarter of patients diagnosed with COS display prodromal neurodevelopmental disturbances, meeting criteria for pervasive developmental disorder, or ASD (8,9). Children diagnosed with ASD are more likely to report psychotic symptoms in adolescence and adulthood (10,11), although the exact incidence of a subsequent diagnosis of schizophrenia varies by study, ranging from 0 to 7% (12)(13)(14). From a neuroimaging perspective, analysis of atypical brain "growth curves" may afford an opportunity for early identification and risk stratification; consistent with the present goal of moving toward biologically based diagnostic categories in neuropsychiatric disease. Secondly, a growing body of literature supports a neurodevelopmental origin of both schizophrenia and autism, with a shared genetic architecture contributing to, or precipitating, the development of both conditions (15,16). Some have hypothesized that ASD and schizophrenia are diametrically opposed with respect to underlying pathology (17). 
While adult onset schizophrenia and ASD have been compared in previous reviews [see Ref. (18)], a focus on COS specifically permits a more indepth analysis of aberrant neurodevelopmental trajectories across comparable age ranges, which may provide insight into disease pathogenesis. This review intends to translate several decades of neuroimaging research for a clinical audience, to highlight our current understanding of similarities and differences in the clinicopathogenesis of ASD and COS from a neuroimaging perspective. To our knowledge, this is the first focused review of neuroimaging findings in ASD and COS. STRUCTURAL MRI STUDIES (VOLUMETRIC ANALYSIS, CORTICAL THICKNESS AND MORPHOLOGY, AND REGION OF INTEREST STUDIES) VOLUMETRIC ANALYSIS Structural magnetic resonance imaging (MRI) analysis for neuropsychiatric diseases began to emerge in the 1990s. Early trials employed manual delineation of gray and white matter to investigate specific regions of interest. With advancement in high resolution MRI technology and automated analysis, voxel-based morphometry (VBM) made it possible to quantify the specific gray matter content of each voxel (a volumetric pixel) in an image, allowing large data sets to be processed more efficiently (19). For statistical comparisons between case and control populations, images are "warped" onto a common template, and the degree of transposition of each voxel can be quantified. Inferences must be heeded with the consideration that the relative volumetric differences by region can vary by age, gender, whole brain volume, and by IQ, thus the degree to which these factors have been controlled for must be kept in mind. Volumetric analysis in COS Initial trials conducted by the National Institute of Mental Health (NIMH) on a cohort of children with COS, identified a pattern of reduced cerebral volumes and larger ventricles, consistent with findings in the adult onset schizophrenia population (20). With expansion and longitudinal analysis of this patient sample, investigators were able to localize and describe patterns of change in brain structure and volume over time. While typically developing children were found to have a small decrease in cortical gray matter (~2%) in the frontal and parietal regions throughout adolescence, children with COS displayed exaggerated gray matter losses (~8%), involving the frontal, parietal, and temporal lobes. Of note, baseline IQ varied significantly between case and control groups in this data set (70 vs. 124) (21). Subsequent analysis on the same NIMH sample (n = 60 patients), suggested that this pattern took on a "back to front" trajectory, with losses originating in the parietal lobes and spreading anteriorly over time (22). This pattern persisted after controlling for IQ and medication administration (23). Despite significant differences at an early age, the rate of gray matter loss was shown to level off in early adulthood, implicating adolescent neurodevelopment as a key window in disease pathogenesis (22,24). This data is consistent with hypotheses pertaining to exaggerated synaptic pruning as a feature of schizophrenia (25). Later work by the same group demonstrated that the abovedescribed pattern was specific for COS. Using VBM, 23 COS patients were compared to 38 age and gender matched healthy control subjects and 19 patients with other psychotic symptoms but not meeting criteria for COS, defined as "multidimensionally impaired" (MDI). MRI scans were conducted at study intake, and at 2.5 years follow up. 
The MDI group had equal exposure to neuroleptics at study intake, and had a similar degree of cognitive impairment. Total gray matter loss between the two time points demonstrated 5.1% loss for COS patients, 0.5% loss for MDI patients, and 1.5% loss for healthy control subjects. Thus, exaggerated gray matter loss during adolescence was considered to be a potential biomarker of COS (26). There is very little literature looking at infants or toddlers who subsequently develop schizophrenia, given the methodological complexities of such a study. That being said, offspring of mothers with schizophrenia were found on average to have larger intracranial volumes, greater volumes of CSF, and greater gray matter volume on structural MRI in male neonates, compared to controls, although controlling for total intracranial volume resulted in all differences being non-significant (27). Volumetric analysis in ASD In ASD, earlier studies suggested a pattern of increased total brain volume, as well as increased ventricle size (28)(29)(30). Analyses across age ranges helped to further elucidate the chronology of this brain overgrowth picture. Indeed, exaggerated gray and white matter volumes seemed most pronounced in younger children, while older children with ASD had more typically appearing brains when compared to their peers (31, 32) (see Figure 1). The brain overgrowth hypothesis was also consistent with the measurable increase in the rate of head circumference growth during the first few years of life in this population (33,34). In 2005, a meta-analysis of published data on brain volume, head circumference, and post-mortem brain weight in ASD further described the effect of age, with the most marked differences occurring in the first few years of life. In adulthood, however, brain sizes did not vary from controls (35). Subsequent longitudinal and cross-sectional data from hundreds of children and adults with ASD documented volume enlargement during preschool years, most prominently in the anterior regions, followed by possible growth arrest or exaggerated losses later in childhood (36)(37)(38). Using cross-sectional age-adjusted data, Schumann et al. (36), for example, showed that children with ASD had 10% greater white matter volume, 6% greater frontal gray matter volume, and 9% greater temporal gray matter volume at 2 years of age. Longitudinal data showed altered growth trajectories at follow-up scans (36). Volumetric differences did not hold true in all ASD studies, however; for example, when structural MRI from children with ASD were compared to children with other developmental delays (39,40). Similarly, a recent systematic review of published data on head circumference overgrowth in children with ASD suggests differences may be much more subtle than previously thought. The authors attribute exaggerated differences to biased normative data in the CDC head circumference growth curves, to the selection of control groups from non-local communities, as well as to a failure to control for head circumference confounders such as weight and ethnicity (41). Recently, a small study looked at whether volumetric MRI might be predictive of a subsequent diagnosis of ASD, prior to the development of clinical symptoms. A group of 55 infants (33 of which were considered high risk given that they had a sibling with ASD) were scanned prospectively at three time points prior to 24 months of age.
At 24 and 36 months, they underwent detailed developmental assessments, at which point 10 infants were identified as having a diagnosis of ASD, and 11 were noted to have other developmental delays. The authors found increased extraaxial fluid volume in infants who developed ASD, and quantified the difference through manual delineation of CSF compartments. They were able to show that a ratio of fluid:brain volume of >0.14 yielded 79% specificity and 78% sensitivity in 12-15 month old infants regarding a subsequent diagnosis of ASD (42) (see Figure 2). The finding remains to be replicated.

[FIGURE 2 | Shen et al. (42) showed how an elevated ratio of fluid:brain volume (above 0.14) at 12-15 months of age was predictive of a subsequent diagnosis of ASD, with 78% sensitivity and 79% specificity in their sample. Reproduced with permission from Shen et al. (42).]

Summary and comparison. In summary, volumetric analyses in ASD describe early brain overgrowth in the first few years of life, a finding that is difficult to contrast to COS, given the methodological complexity of acquiring neuroimaging data in very young children or neonates who subsequently develop this condition. During childhood and adolescence, volumetric data suggest that individuals with ASD may have attenuated brain growth or exaggerated volume loss, since adults with ASD have comparable brain volumes to their typically developing peers. Some similarities emerge with the COS population, given findings of exaggerated gray matter loss during adolescent years. CORTICAL THICKNESS AND MORPHOLOGY With advancements in computational statistics, it became possible to extract a more detailed analysis of the cortical gray matter with respect to surface morphology. Specifically, the transposition of cortical imaging data onto a common surface template allowed cortical gray matter volume to be further quantified in terms of cortical thickness, surface area, and gyrification. More recently, complex statistical approaches employing mathematical algorithms and machine-learning models have manipulated neuroimaging data collected from both volumetric and cortical thickness measurements, in efforts to generate diagnostic classifiers of ASD/COS. Cortical measurements are of interest for neurodevelopmental disorders as they are thought to represent distinct embryological processes under tight regulatory control (43). Cortical surface area, for example, reflects the process of neural stem cell proliferation and migration early in embryologic development (44). Cortical thickness, on the other hand, reflects axon and dendrite remodeling, myelination, and synaptic pruning, in a dynamic process lasting from birth into adulthood (45). Cortical thickness and morphology in COS In the NIMH-COS sample (46), a combination of cross-sectional and longitudinal data from 70 patients compared to controls revealed diffuse decreases in mean cortical thickness in childhood (~7.5% smaller), which became localized specifically to the frontal and temporal lobes with increasing age. Statistical significance survived correction for covariates such as sex, socioeconomic status, and IQ. Accordingly, while individuals with COS displayed global gray matter and cortical thickness losses in childhood, with age these losses became similar to those observed in adult onset schizophrenia, with deficits localizing more anteriorly (see Figure 3).
Interestingly, in two separate samples, non-affected siblings of COS probands also demonstrated a pattern of decreased cortical thickness in the frontal, temporal and parietal lobes during childhood and adolescence, which then normalized in early adulthood, implicating some sort of compensatory mechanism despite underlying genetic risk (47, 48). With hospitalization and medication management, symptom remission correlated with localized increases in cortical thickness measurable in specific subregions of the cortex (49), irrespective of choice of antipsychotic (50). Children who had other psychiatric conditions with comorbid psychotic symptoms but not meeting full criteria for COS demonstrated cortical deficits in a prefrontal/temporal pattern as well, but deficits were smaller and less striking than in COS patients (51). As mentioned in the introduction to this section, complex algorithms and mathematical protocols have been designed to identify and combine measurements that may be predictive of disease state. A multivariate machine-learning algorithm applied to cortical thickness data from the NIMH cohort was able to correctly classify 73.7% of patients with COS and controls. Through this method, 74 "important" regions were identified. Areas with the most predictive power clustered in frontal regions (primarily the superior and middle frontal gyri), and the left temporoparietal region (52). Given the rarity of COS in the general population, and the case-control study design, these results were not validated in a separate study population, precluding any calculation of positive or negative predictive value, and thus limiting any inferences regarding clinical utility. Cortical thickness and morphology in ASD There is significant heterogeneity in the literature with respect to cortical thickness and morphology in ASD, with at times seemingly contradictory results depending on the age, IQ, and clinical severity of the study population. In a very young group of patients with ASD, cortical volume and surface area (but not thickness) were found to be increased compared to controls at the age of 2 years. The rate of cortical growth between ages 2 and 5 years did not differ between groups, further implicating the prenatal and early postnatal periods as central to disease pathogenesis (53). In slightly older age groups, many authors have observed evidence of exaggerated cortical thinning in ASD. For example, Hardan et al. (54) demonstrated that children with ASD ages 8-13 years had increased cortical thickness, particularly in the temporal lobe, as compared to age-matched controls. The small sample size (n = 17 cases), however, precluded co-variation for IQ, or analysis of age-related interactions (54). Longitudinal imaging 2 years later on seemingly the same cohort showed that those with a diagnosis of ASD underwent exaggerated cortical thinning compared to controls, and that the degree of thinning correlated with the severity of symptoms. Differences, however, were mostly non-significant after controlling for multiple comparisons and variation in IQ (55). In a comparable age group (6-15 years), Mak-Fan et al. (56) showed a similar pattern of increased cortical thickness, surface area, and gray matter volume in children with ASD at earlier ages (6-10 years), which then underwent exaggerated losses compared to controls, such that by 12-13 years of age, controls surpassed patients on all three measures (56). Wallace et al.
(57), on the other hand, found baseline deficits in cortical thickness for adolescents with ASD, but also observed exaggerated rates of cortical thinning during adolescence and early adulthood (57). In the same study population, no differences in overall surface area were noted, but more overall gyrification in the ASD group, particularly in the occipital and parietal regions, was observed. Both groups showed a decline in gyrification over time (58). On the other hand, several authors have noted deficits in cortical thinning in ASD. Looking over a wide age range, Raznahan et al. (59) used cross-sectional MRI data from 76 patients with ASD (primarily Asperger's syndrome) and 51 controls from ages 10 to 60 years to study the effects of age on cortical thickness and surface area. While surface area was relatively stable and comparable between both groups, they found significant differences with respect to cortical thickness. Typically developing individuals had greater cortical thickness in adolescence, which thinned steadily over time. Individuals with ASD had reduced cortical thickness early in life, which underwent relatively little cortical thinning over time, such that by middle age, they had surpassed their typically developing peers (59). ASD associated deficits in expected age-related cortical thinning during adolescence and adulthood have been shown in several other studies as well, both diffusely and in specific subregions (60,61). Recently, Ecker et al. (62) sought to tease apart the relative contributions of cortical thickness and cortical surface area to overall differences in cortical volume in a group of adult males (mean age of 26 years) with ASD compared to controls. While total brain volume and mean cortical thickness measurements were not significantly different between the two groups, several regional clusters emerged with both increased and decreased cortical volumes. The authors found that these relative differences were accounted for primarily by variability in cortical surface area, and less so by cortical thickness. As well, differences in cortical thickness/surface area were largely non-overlapping, and were deemed to be spatially independent from each other (62). As in COS, several groups have aimed to combine the predictive power of multiple measurements by applying mathematical algorithms to neuroimaging data. Ecker et al. (63), for example, included five parameters (cortical convexity, curvature, folding, thickness and surface area) in their support vector machine analytic approach. These combined measurements were able to correctly classify patients with ASD (n = 20) and controls (n = 20) with 80-90% specificity and sensitivity, with cortical thickness being the most predictive measurement. This approach also demonstrated proof of principle in separating patients with ASD from patients with ADHD, despite the small sample size and the lack of replication in a sample separate from the one used to generate the algorithm (63). Similarly, Jiao et al. (64) incorporated cortical thickness and volume data from children with ASD and controls (ages 7-13) into a machine-learning model with the aim of predicting the presence or absence of ASD. One algorithm was able to predict diagnostic stratification with 87% accuracy based on cortical thickness measurements.
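A hedged sketch of this type of multi-feature classification approach (the feature matrix, labels, and injected group difference below are invented toy data; the published studies used carefully curated surface-based measures and their own validation schemes):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Sketch: combine several cortical features (e.g., regional thickness, surface area,
# curvature) into a support-vector classifier, in the spirit of the studies cited above.
rng = np.random.default_rng(0)
n_subjects, n_features = 40, 12                  # toy dimensions
X = rng.normal(size=(n_subjects, n_features))    # invented cortical feature matrix
y = np.repeat([0, 1], n_subjects // 2)           # 0 = control, 1 = ASD (toy labels)
X[y == 1, :3] += 0.8                             # inject a weak group difference

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.0%}")
```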
The most predictive regions included both areas of decreased cortical thickness (in the left pars triangularis, orbital frontal gyrus, parahippocampal gyrus, and left frontal pole) and increased cortical thickness (left anterior cingulate and left precuneus) (64). Again, the case control design was not representative of true population prevalence, precluding calculation of positive predictive values. Summary and comparison. In ASD, a small number of studies support a pattern of very early overgrowth in cortical surface area and volume (<2 years of age), which is immediately followed by cortical dysmaturation throughout childhood and adolescence, with evidence suggesting both exaggerated and impaired cortical thinning, depending on the study. Changes in cortical thickness and surface area seem to occur in non-overlapping regions. In COS on the other hand, cortical thickness is reduced diffusely in childhood, although data from very young patients (<8 years) are lacking. During adolescence, reductions in cortical thickness become more localized to frontal regions, although less has been written about the specific rates of cortical thinning in this patient group. REGIONS OF INTEREST Studies seeking out and investigating specific regions of interest in both COS and ASD have employed several different approaches. On the one hand, a general approach simultaneously comparing dozens of regions of interest or thousands of specific points in the absence of an a priori defined hypothesis has been used to survey for areas associated with the greatest differences between patient and control samples, and can help guide future areas of research. On the other hand, a predefined hypothesis regarding volumetric differences in a particular region allows optimization of statistical power, to more precisely elucidate candidate regions. Regions of interest in COS A meta-analysis of studies conducted in adult onset schizophrenia patients describes global deficits in volume, most consistently in the left superior temporal gyrus and the left medial temporal lobe (65). Looking specifically at COS, in the NIMH cohort, an automated and longitudinal analysis of over 40,000 points across the cortical surface found that the superior and middle frontal gyri showed the greatest overall reduction in cortical thickness compared to controls (46). In a different COS sample, from UCLA, specific analysis of the right posterior superior temporal gyrus (Wernicke's area, involved in verbal comprehension) found volume to be increased in this region (66). Investigations conducted by the same group on the anterior cingulate gyrus, a central and highly connected structure in the prefrontal cortex involved in many functions including error monitoring, yielded volume reductions (67). Hypothesis driven approaches in the NIMH-COS cohort have been able to identify specific regional volume deficits as well. The insular cortex, for example, has been implicated in schizophrenia, given its role in distinguishing self from non-self, in visceral somatosensory interpretation, in processing of emotional experiences, and in salience. Patients with COS were found to have smaller insular volumes, whereas COS-siblings and controls were not statistically different, suggesting reduced insular size as an indicator of disease state. Additionally, level of functioning and severity of symptoms correlated with insular volume (68).
The cerebellum, classically understood to be involved in motor coordination and planning, has been implicated in schizophrenia given its association with learning and cognition. In longitudinal data from the NIMH cohort, smaller overall and regional cerebellar volumes were detected in affected individuals, with siblings falling between patients and controls on various measures (69). Regarding subcortical structures, enlargement of the caudate (70) has been shown. In the limbic system, increased amygdala volume (71), but volume loss in the hippocampus and fornix (72,73) has also been found in COS. Regions of interest in ASD Brain regions proposed to play a role in social cognition, communication, and "theory of mind" have been a focus of investigation in ASD. The region of the temporoparietal junction in particular, is thought to be central to the integration of social information and empathy, as well as selective attention to salient stimuli (74). Thinning of several areas in the temporoparietal region, particularly on the left side, has been shown in children, adolescents, and adults with ASD (38,57,59,61,75). The orbital frontal cortex, in the ventromedial prefrontal region, is thought to play a role in sensory processing, goal directed behavior, adaptive learning, and attachment formation (76). Patients with autism, despite increased overall cortical thickness in the frontal region, have been shown to have specific deficits in cortical thickness (38), volume, and surface area (62) in the orbital frontal cortex, which correlated with symptoms severity (62). Other frontal lobe structures showing reduced cortical thickness in ASD include the inferior and middle frontal gyri, and the prefrontal cortex, depending on the study (38,64,77). The anterior cingulate is a highly connected part of the social brain network situated along the medial aspect of the frontal cortex. Its role in self-perception, social processing, error monitoring, and reward based learning has been described (78). Relative increases (60,64) and decreases (62,75,77) in volume and thickness of the anterior cingulate have been shown in ASD. Given that different regions may grow at different rates in individuals with ASD vs. controls (60,61), variation in the age and distribution of study populations may account for some inconsistencies. Volume deficits in the insular cortex have been demonstrated in young adults with pervasive developmental disorders (79). In adults with ASD, those who had a history of psychotic symptoms also demonstrated reduced insular volumes, particularly on the right side, as well as reduced cerebellar volumes (80). Looking at subcortical structures, the caudate has been shown to be enlarged in ASD, across whole brain volumetric metaanalyses (81)(82)(83), and in targeted ROI analysis, even after controlling for confounding medication administration (84). Volume loss in the putamen has been shown across whole brain meta-analyses in adults with ASD (81,83,85), but enlargement of the putamen has also been observed in younger populations (86). In the amygdala, volume losses emerge across whole brain meta-analytic approaches (83,85,87), but volume gains are noted in younger patient groups as well (88). From a functional perspective, enlargement of the caudate may be associated with repetitive or self-injurious behavior (89)(90)(91)(92), while volume loss in the amygdala may pertain to impaired emotional perception and regulation (93). Summary and comparison. 
Volume losses have been noted in some overlapping prefrontal regions in both ASD and COS, particularly along the middle frontal gyrus. The anterior cingulate is also implicated in both conditions, although bidirectional changes in volume have been noted in ASD, depending on age of study participants. The area of the temporal-parietal junction shows volume loss in ASD, and was an area strongly predictive of diagnosis in a group of individuals with COS (discussed in Cortical Thickness and Morphology in COS). The insula is implicated in patients with COS, and in those with ASD who have comorbid psychotic symptoms. Looking at deep structures, both conditions are associated with volume gains in the caudate, which may pertain to repetitive behaviors, or concomitant neuroleptic treatment. STRUCTURAL WHITE MATTER ANALYSIS (VOLUMETRIC ANALYSIS AND DIFFUSION TENSOR IMAGING) Magnetic resonance imaging analyses that incorporate diffusion measurements allow for further sub-characterization of white matter microstructure, above volumetric differences. The diffusion of water molecules is measurable with MRI technology, and the magnitude and direction of diffusion within each individual voxel can be modeled mathematically with vector algebra. Axial diffusivity (AD) is the measurement of diffusion occurring parallel to white matter fibers; increased AD occurs in diseases involving axonal degeneration, and is thought to reflect both the integrity and density of axon structures. Radial diffusivity (RD), on the other hand, is a measurement of diffusion occurring perpendicular to the white matter fibers; it is used as a measure of myelination, and is increased in demyelinating diseases. Mean diffusivity (MD) (also known as the apparent diffusion coefficient, ADC) is a measure of average diffusion in the absence of a directional gradient (94). A summary measure describing how strongly the overall diffusion ellipsoid departs from a sphere is termed "fractional anisotropy" (FA). A perfectly "isotropic" solution (FA = 0), such as free water, contains molecules that diffuse freely in all directions, whereas an anisotropic solution (i.e., a white matter fiber bundle) would restrict diffusion in one direction, resulting in an elongated ellipsoid and FA values closer to 1. In white matter tract analysis, increased FA is thought to be a sensitive but not specific measure of fiber myelination, the integrity of cell membranes as well as the diameter of the fibers (95). Typically developing individuals show age-related increases in FA and decreases in MD throughout development, in keeping with increasing white matter maturation (96). As in gray matter analyses, DTI can be applied to the whole brain in a voxel-based approach, or alternatively, specific regions of interest can be investigated with this method. Along these lines, specific anatomic white matter tracts can be reconstructed and analyzed from DTI data, in a method known as tractography. DTI data can also be transposed onto a common FA template, in tract-based spatial statistics (TBSS) (97). Magnetic resonance imaging data collected in the absence of diffusion measurements can still be utilized in studying white matter integrity and growth. Similar to gray matter analysis, simple volumetric studies on white matter structures have been employed.
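The scalar measures just described follow directly from the three eigenvalues of the fitted diffusion tensor; a minimal sketch (the eigenvalues below are invented, in units of 10^-3 mm^2/s):

```python
import numpy as np

def dti_scalars(l1, l2, l3):
    """AD, RD, MD and FA from the sorted diffusion-tensor eigenvalues (l1 >= l2 >= l3)."""
    ad = l1                          # axial diffusivity: along the principal fiber direction
    rd = (l2 + l3) / 2.0             # radial diffusivity: perpendicular to the fibers
    md = (l1 + l2 + l3) / 3.0        # mean diffusivity
    fa = np.sqrt(0.5 * ((l1 - l2)**2 + (l2 - l3)**2 + (l1 - l3)**2)
                 / (l1**2 + l2**2 + l3**2))      # fractional anisotropy, 0..1
    return ad, rd, md, fa

# Invented eigenvalues for a coherent white-matter voxel (10^-3 mm^2/s):
print(dti_scalars(1.7, 0.4, 0.3))    # high FA (~0.76)
# ...and for nearly isotropic tissue:
print(dti_scalars(0.9, 0.85, 0.8))   # FA close to 0
```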
As an alternative to simple volumetric comparisons, 3D mapping of volumetric changes in white matter tracts via tensor-based morphometry (TBM) has been validated as a method of studying white matter development over time. In brief, TBM applies initial and follow-up scans to a standardized brain template to ensure precise anatomical alignment. Next, an elastic-deformation algorithm is used to calculate the specific degree of volume expansion in a set area, represented by an expansion factor called the "Jacobian determinant." Growth rates are calculated by comparing the Jacobian determinant measures across patient and control samples. WHITE MATTER ANALYSIS IN COS The corpus callosum is the largest white matter structure in the human brain, and is central for connectivity and relay of information between hemispheres. Deficits in the corpus callosum have been inconsistently demonstrated in adult onset schizophrenia populations (95). In a longitudinal analysis of children and young adults with COS, differences in the midsagittal area of the splenium of the corpus callosum emerged around age 22, with patients having significantly smaller structures (98). Later analysis looking at volumetric differences in subsections of the corpus callosum revealed no differences between NIMH-COS patients, their siblings and controls with respect to overall volume, and/or volume change over time (99). Comparison of whole brain TBM data between 12 patients with COS and 12 age matched controls followed over a 5-year interval revealed aberrant white matter development between ages 13 and 19 years. Specifically, at baseline MRI, patients had a 15% deficit in white matter volume in the frontal regions. At follow up, control subjects showed an average of 2.6% growth in white matter per year, while COS patients had only 0.4% white matter growth per year. The white matter deficits in the COS sample seemed to progress in a front to back pattern, opposite to previous findings regarding gray-matter deficits, but consistent with expected growth patterns in healthy adolescent brains (100). Unaffected siblings of children with COS showed delayed white matter growth at younger ages (<14 years) but not at older ages (14-18 years) as measured by TBM. Delayed white matter growth was most significant in the parietal regions for siblings, but normalized by age 18 (101). There are relatively few DTI studies in specific COS populations. Clark et al. (102) found no significant differences in FA diffusely between 18 children and adolescents with COS, and 25 controls. Of note, five COS patients had a comorbid diagnosis of ASD, of which four were tested as having a linguistic impairment. Increased RD and AD were noted for patient vs. control groups in several white matter tracts (see Table 1). Increases in RD and AD in these regions were explained primarily by the presence of a linguistic impairment, and not the diagnosis of COS, however (102). There is a growing body of literature, however, on diffusion tensor imaging in adult onset schizophrenia and early-onset schizophrenia (EOS: defined as symptom onset prior to age 18 years). Findings investigating these patient groups are summarized in several reviews (103,104). Given the paucity of literature applying DTI in COS, some conclusions may be extrapolated from the early-onset schizophrenia literature; therefore they will be discussed briefly.
In general, while results have varied, the corpus callosum, superior and inferior longitudinal fasciculus, cingulum, and the uncinate fasciculus have been suggested as areas most affected with respect to white matter integrity as measured by decreases in FA (103,104). Some studies have attempted to correlate DTI findings with symptomatology. Ashtari et al. (105), for example, found that decreased FA in the left inferior longitudinal fasciculus was more pronounced for EOS patients with a history of visual hallucinations (105). As in volumetric imaging, studies that incorporate analyses for age effects provide evidence of dynamic white matter abnormalities as well, in EOS. For example, FA in the anterior cingulate region increased with age in the healthy control population, but decreased with age in the early onset psychosis population (106). Similarly, patients with EOS showed decreased FA in parietal regions, while patients with adult onset schizophrenia had findings localizing to the frontal, temporal, and cerebellar regions (107). WHITE MATTER ANALYSIS IN ASD Earlier volumetric analyses suggested a pattern of accelerated white matter volume growth in younger children, particularly in the frontal regions, but that adolescents with ASD had similar or reduced white matter volume compared to controls (108). Meta-analysis of 13 VBM studies on white matter volume found no differences globally in white matter volume, and no differences between child/adolescent groups and adult groups, although no studies included very young children (<6 years). Some regional differences emerged, however (109) (see Table 1). With respect to diffusion tensor imaging, a recent systematic review and meta-analysis, combining DTI data from 14 studies, including both children and adults with ASD, summarized some areas of consensus and heterogeneity in the literature. In summary, decreased FA was most consistently demonstrated in the corpus callosum, left uncinate fasciculus, and left superior longitudinal fasciculus of individuals with ASD. Mean diffusivity was increased in the corpus callosum, and bilaterally in the superior longitudinal fasciculus (110). This meta-analysis included data from ROI and tractography studies only, however, excluding whole brain TBSS and voxel-based analyses. A recent literature review on DTI in ASD by Travers et al. (97) identified decreased FA, increased MD, and RD as the most common findings across methods, with the corpus callosum, cingulum, arcuate fasciculus, superior longitudinal, and uncinate fasciculus showing the greatest differences (97). Most imaging studies in autism to date, as well as those included in the above-described meta-analyses, have been conducted in older children, adolescents, or adults. In these age groups, decreased FA and increased MD have been repeatedly documented in many white matter regions. The specific rate of change in white matter markers, as well as the effect of age on white matter maturation, seems to vary by study, however. For example, Mak-Fan et al. (56) showed that RD and MD measurements stayed stable between the ages of 6 and 14 years in subjects with ASD, while control subjects showed expected decreases with age (111). Ameis et al. (112) found between-group differences in RD, AD, and MD, but not FA, which were more pronounced in childhood than in adolescence (112). Few studies have been conducted in very young children, however, and less consistency emerges in the data from this age range.
Contrary to literature in older populations, Weinstein et al. (113) reported that FA was greater for children ages 1.5-6 years with ASD compared to controls in the areas of the corpus callosum, superior longitudinal fasciculus, and cingulum. Differences in FA were attributable to decreased RD, while AD was the same between cases and controls (113). Similarly, Ben Bashat et al. (114) found evidence of accelerated white matter maturation marked by increased FA and reduced displacement values in a small sample of children with ASD ages 1.8-3.3 years, most prominently in frontal regions (114). Abdel Razek and colleagues (115) found ADC scores to be greater for preschool children with ASD in several regions, which correlated with severity of autistic symptoms as measured by the childhood autism rating scale (115). Walker et al. (116), on the other hand, found that 39 children between ages 2 and 8 years with ASD had decreased MD and FA compared to controls, accompanied by an attenuated rate of increase in FA, as well as an accelerated rate of decrease in MD compared to controls (116). Longitudinal data looking at high risk infants found evidence of higher FA at 6 months in children who were subsequently diagnosed with ASD, but that they then had a slower rate of change such that by 24 months typically developing children had surpassed them in this measure (117). For most studies, although differences have been statistically significant for certain regions, the magnitude of these differences has been quite small, in the range of 1-2%, thus limiting the predictive ability of any individual measurement. Lange et al. (118) generated a discriminant function that was able to distinguish between individuals with and without ASD with 94% sensitivity, 90% specificity, and 92% accuracy, by combining the predictive ability of DTI data points centered primarily around the superior temporal gyrus and the temporal stem. The sensitivity and specificity were reproduced in a replicate sample as well; however, the case-control design was not reflective of true population prevalence, again precluding inferences regarding predictive ability in a real life clinical setting (118). Emerging efforts have tried to correlate neuroimaging findings to functional and behavioral outcomes. For example, increased MD in the superior longitudinal fasciculus correlated with degree of language impairment in children and adolescents (119). Increased FA and decreased RD in the arcuate fasciculus correlated with greater language abilities in another group of children with ASD (120). Similarly, lower FA in the dorsal lateral prefrontal region was associated with increased social impairment in a group of children with ASD in Japan (121). Attempts to identify structural deficits in areas involved in socio-emotional processing have yielded mixed results as well. Further focus on understanding the functional connectivity between distant regions is described in the next section. Summary and comparison. White matter development in COS patients compared to controls appears marked by global deficits in white matter volume and decreased rates of white matter growth/integrity in adolescence, although the specific chronology, most affected regions and the relation to symptoms continue to be explored. In ASD, meta-analyses suggest no differences overall in white matter volume in adults, although early white matter volumetric overgrowth may occur in younger patient samples.
Looking at specific white matter regions, volume losses have been noted in both ASD and COS in the corpus callosum and cingulum. In both conditions, decreased white matter integrity as measured through DTI has been observed in the superior longitudinal fasciculus, which may pertain to comorbid language impairments. FUNCTIONAL CONNECTIVITY While imaging of white matter tracts through techniques like DTI permits the quantification of structural connectivity between regions, functional connectivity requires in vivo analysis of brain activation. Functional magnetic resonance imaging (fMRI) measures regional changes in blood oxygen level dependent (BOLD) signaling, given the subtle differences in magnetic field strength between oxygenated and deoxygenated blood. Brain activation patterns may be analyzed in subjects at rest (termed resting state) or during a specific cognitive or behavioral task performed in an MRI scanner. Data can be analyzed with respect to a specific region of interest (seed technique), where connections to and from an a priori defined region are studied. Alternatively, independent component analysis (ICA), or similar techniques, look at overall activation patterns across all regions, and can comment on patterns in functional networks (e.g., default mode network, salience network). Data from functional neuroimaging studies are often analyzed using graph theory. In this approach, the relationship between certain areas of central activation (termed "nodes") and the vectors of connectivity between nodes (termed "edges") are described using discrete mathematics (122). Short-range connectivity (i.e., within a specific lobe, or to a neighboring lobe) and long-range connectivity between remote regions can be quantified in this manner. FUNCTIONAL CONNECTIVITY IN COS Two separate analyses in the NIMH cohort of COS have suggested exaggerated long-range connectivity, and impaired short-range connectivity, in keeping with a hypothesis of exaggerated synaptic pruning. Resting state fMRI data was used to graph the connectivity between 100 regional nodes for 13 patients and 19 controls. Data showed that patients with COS had signals that were less clustered with more disrupted modularity, marked by fewer edges between nodes of the same module. On the other hand, they showed greater global connectedness and greater global efficiency (123). Subsequent analyses with a slightly larger sample again found reduced connectivity at short distances and increased connectivity at long distances for patients with COS compared to controls on resting state fMRI. Relative to healthy controls, patients with COS had several regions in the frontal and parietal lobes that were "nodes" of over-connectedness with respect to long-range associations (124). White et al. (125), on the other hand, interpreted an opposite pattern from a study using a visual stimulus to analyze connectivity in the occipital lobe of children and adolescents with early onset schizophrenia (125). Similarly, structural connectivity analysis in neonates at high risk for schizophrenia found decreased global efficiency, increased local efficiency, and fewer nodes and edges overall compared to control infants (126). FUNCTIONAL CONNECTIVITY IN ASD In ASD on the other hand, there is an abundance of recent literature on functional connectivity.
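The graph measures referred to above (clustering, modularity, efficiency) can be computed from a thresholded connectivity matrix; a hedged sketch on invented data (the node count, time-series length, and the 0.1 correlation threshold are arbitrary choices for illustration):

```python
import numpy as np
import networkx as nx

# Sketch: build a binary graph from a toy functional-connectivity matrix and compute
# the kinds of summary measures (clustering, global efficiency) used in the studies above.
rng = np.random.default_rng(1)
n_nodes = 20
timeseries = rng.normal(size=(n_nodes, 200))      # invented regional BOLD time series
corr = np.corrcoef(timeseries)                    # node-by-node correlation matrix
np.fill_diagonal(corr, 0)

adjacency = (np.abs(corr) > 0.1).astype(int)      # arbitrary threshold defining edges
G = nx.from_numpy_array(adjacency)

print("edges:", G.number_of_edges())
print("average clustering (short-range):", round(nx.average_clustering(G), 3))
print("global efficiency (long-range):", round(nx.global_efficiency(G), 3))
```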
An emerging hypothesis suggests that frontoparietal under connectivity in ASD results in reduced "bandwidth" in long-range circuits [reviewed by Just et al. (127)]. Some propose that this coincides with local increases in connectivity within a specific lobe, resulting in a failure to integrate and regulate multiple sources of information (128). This hypothesis is consistent with structural white matter deficits in long-range association fibers, as well as structural patterns in gray matter showing increased local, but deficits in global modularity (129). With respect to functional analyses, impaired synchronization, and under connectivity between large-scale networks has been shown in fMRI studies incorporating various task-based assessments, including those pertaining to language comprehension and auditory stimuli (130)(131)(132), executive functioning (133), visual spatial processing (134), and response to emotional cues (135,136). Under connectivity has not been the only finding however, with many functional MRI studies showing evidence of increased connectivity or altered developmental trajectories with respect to integrated neural networks (137)(138)(139). For example, a recent meta-analysis of fMRI studies found greater activation in children with ASD in response to a social task in certain specific regions (i.e., in the left-precentral gyrus) but relative under activation compared to controls in other areas (superior temporal gyrus, parahippocampal gyrus, amygdala, and fusiform gyrus). In adults with ASD, activation was greater in the superior temporal gyrus, but less in the anterior cingulate during social processing (140). The literature is also divided with respect to functional neuroimaging in resting state MRI, in the absence of any particular stimulus or task. Some have proposed that methodological issues may be contributing to observed inconsistencies (141). While hypoconnectivity seems most prevalent in the literature, [Ref. (142,143); reviewed by Uddin et al. (144)], Uddin et al. (144) observed long-range hyperconnectivity via ICA across remote regions in 20 children ages 7-12 years with autism compared to controls. Hyperconnectivity was noted to involve the default mode network, frontotemporal, motor, visual, and salience networks. Hyperconnectivity of the salience network (which involves the anterior cingulate and insula) was most predictive of the diagnosis of ASD and was able to discriminate between cases and controls with 83% accuracy, a finding that was reproduced in a separate image dataset (145). Other resting state fMRI studies have also observed mixed patterns, which vary by region, network, and by age of the sample (146,147). The literature in very young patients with ASD is relatively sparse but seems to suggest altered developmental trajectories for affected children beginning at very young ages. A recent publication observed increased functional connectivity at 3 months, which disappeared by 12 months in high risk infants (148). Alternatively, Redcay and Courchesne (139) found increased connectivity between hemispheres in 2-3 year old children with ASD compared to chronological age matched controls, however the opposite pattern emerged when they were compared to mental age matched controls (139). Dinstein et al. (132) observed hypoconnectivity between hemispheres and in language regions in toddlers with ASD in response to auditory stimuli (132). A recent review article by Uddin et al. 
(144) summarizes the literature to date with respect to resting state functional connectivity analyses. While intrinsic connectivity and seed-based analyses across 17 published studies suggest both hyper- and hypo-connectivity, Uddin and colleagues propose that the developmental age of the sample may be one explanatory factor with respect to variability in results. They describe a hypothesis in which increased functional connectivity in prepubescent children with ASD as compared to their peers is then met with altered maturational trajectories such that adults with ASD seem to have reduced connectivity compared to controls (144). A recent publication put forth by a data sharing initiative entitled "autism brain imaging data exchange" (ABIDE) proposes to remedy disagreement in the literature through a large-scale international collaboration combining 1112 resting state fMRI scans. Analysis of 360 male subjects with ASD compared to controls found hypoconnectivity in cortical networks but hyperconnectivity in subcortical networks. They also identified localized differences in connectivity in certain regions, including the insula, cingulate, and thalamus. They did not perform specific analyses looking for age-associated differences, however, given that the majority of included participants were adolescents or adults (146). Summary and comparison. There are only a handful of studies looking at functional connectivity in COS, but data from fMRI suggest a pattern of increased long-range connectivity, with disrupted short-range connectivity, in keeping with pathology of exaggerated synaptic pruning. In comparison, data from fMRI in ASD suggest to some extent an opposite pattern, with increased local but decreased global connectivity. fMRI data sharing between research centers reveals hyperconnectivity in subcortical networks, and hypoconnectivity in cortical networks in adult males with ASD. Smaller studies in younger age groups suggest important age effects regarding the connectivity hypothesis as well, with younger children with ASD seemingly showing more "over-connectedness" than adults. DISCUSSION This review compares and contrasts neuroimaging findings in ASD and COS. Overall, across volumetric, structural, and functional neuroimaging data, there is evidence for dynamic changes in both conditions. In ASD, a pattern of early brain overgrowth is seemingly met with dysmaturation in adolescence, although the literature in this regard is far from certain. Functional analyses have suggested impaired long-range connectivity as well as increased local and/or subcortical connectivity, which may also progress with age. In COS, global deficits in cerebral volume, cortical thickness, and white matter maturation seem most pronounced in childhood and adolescence, and may level off in early adulthood. Deficits in local connectivity, with increased long-range connectivity, have been proposed, in keeping with exaggerated cortical pruning; however, the opposite has also been shown. Symptom and neuroimaging overlap across conditions was illustrated via a meta-analysis of fMRI data in both schizophrenia and ASD, which identified shared deficits in regions involved in social cognition (149). The significance of these findings is tempered, however, by heterogeneity in results across other pediatric onset neurodevelopmental disorders.
In ADHD, for example, longitudinal MRI analyses in children suggest overall reduced cortical thickness prior to the onset of puberty (158), with peak cortical thickness and onset of cortical thinning occurring at later ages (159). In the future, clinical neuroimaging must be able to identify not only the presence of aberrant neurodevelopment, but also to distinguish between overlapping conditions. While there is heterogeneity in the literature in both conditions, findings regarding COS at times appear more consistent. It is important to note that, given the rarity of this condition, these findings emerge from relatively few research samples, and are derived primarily from data collected from the same population of individuals. In ASD on the other hand, there has been an international explosion of investigation at numerous institutions, across ages, IQ ranges, and diagnostic severity, which has resulted in at times seemingly contradictory results. A call for collaboration (150) has been met with a first international compilation of neuroimaging datasets, which has helped to clarify some discrepancies in the literature with respect to fMRI (146). Going forward, ongoing collaboration to facilitate large scale, prospective, longitudinal neuroimaging studies will be necessary to separate signals from noise in these complex and heterogeneous diseases. A focus on genetic subtypes may help to unite synapse pathology with neuroimaging findings and network dysfunction, to permit some degree of hypothesis generation with respect to molecular pathogenesis. In ASD, for example, a loss of inhibitory control leading to exaggerated growth, premature cortical thinning, and then early stabilization of cortical structures has led some to suggest that overall the developmental curve has been "shifted to the left" along the time axis in this condition, with respect to brain maturation (75,151). Current genetic investigations suggest alterations in structural scaffolding at the excitatory synapse could be contributory in ASD (152). Single gene disorders associated with autism may shed light on underlying final common pathways (153). Fragile X syndrome (FXS), for example, is a genetic condition comorbid with ASD in 20-30% of cases (154). Individuals afflicted with this condition have dysfunction or absence of the fragile X mental retardation protein (FMRP). FMRP is now understood to play a critical role in regulation of protein synthesis at the excitatory synapse, and without it, exaggerated receptor cycling and dysfunctional neuroplasticity can result (153). A similar mechanism in idiopathic ASD would hypothetically result in a loss of inhibitory control on expected maturational changes, uncoupling the structural and temporal timeline of synaptic neurodevelopment. In schizophrenia, exaggerated synaptic pruning has been a long-held hypothesis with respect to etiology (25), which is consistent with aspects of the neuroimaging literature in COS. On the other hand, a small study in high risk infants suggests enlarged cerebral volumes may exist early in life, implying that some type of early dysregulated growth may be at play in this condition as well, similar to the process occurring in ASD (27). Investigations in 22q11.2 deletion syndrome (DS), a genetic disorder associated with schizophrenia in 20-25% of cases (155), permit longitudinal and prospective analysis of children at high risk for schizophrenia.
Interestingly, MRI data collected in children as young as 6 years old with 22q11.2 DS found early increases in cortical thickness and deficits in cortical thinning in preadolescence, which are then met with exaggerated cortical thinning during adolescent years. Patients who subsequently developed schizophrenia indeed had more exaggerated deficits in cortical thickness (156). In studies recruiting adolescents, it is difficult to tease out the possible influence of confounders such as substance abuse on both clinical and radiologic findings. While comorbid substance abuse is common in adult onset schizophrenia populations (occurring in 50-80% of cases), the rate of substance abuse in COS, while presumed lower, has not been described (157). Ongoing study of clinical, environmental, and cultural confounding factors in both ASD and COS is needed. Many investigators have sought to use neuroimaging protocols as predictors of diagnosis in case-control studies. The accuracy, sensitivity, and specificity of these analyses have on average ranged between 60 and 90%, and some groups have been able to reproduce high levels of diagnostic accuracy in separate patient samples. The clinical utility of these algorithms, however, remains uncertain in the absence of their application to populations reflecting realistic disease prevalence (i.e., positive predictive values are low or not reported); a brief numerical illustration of this point is given below. The development of clinically useful, cost-effective, wide scale diagnostic tests for neurodevelopmental conditions remains a common goal, and several groups have initiated prospective trials on high risk patient populations which may perhaps yield some hopeful results in the next decade. AUTHOR CONTRIBUTIONS Danielle A. Baribeau authored the manuscript. Evdokia Anagnostou developed the research topic, provided guidance, editing, and supervision. ACKNOWLEDGMENTS Funding for this research was provided by the Province of Ontario Neurodevelopmental Disorders Network (POND), supported by the Ontario Brain Institute.
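The positive-predictive-value caveat above can be made concrete with Bayes' rule. The short sketch below uses illustrative numbers only, not figures from any of the cited studies, to show why a classifier that looks strong in a balanced case-control sample yields mostly false-positive calls at realistic population prevalence.

```python
# Illustrative only: how sensitivity and specificity translate into positive
# predictive value (PPV) at a given disease prevalence (Bayes' rule).
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical classifier with 80% sensitivity and 80% specificity,
# evaluated at a balanced case-control ratio and at lower prevalences.
for prevalence in (0.50, 0.10, 0.015):
    print(f"prevalence {prevalence:6.1%} -> PPV {ppv(0.80, 0.80, prevalence):.2f}")
# prevalence  50.0% -> PPV 0.80
# prevalence  10.0% -> PPV 0.31
# prevalence   1.5% -> PPV 0.06
```

At a 50/50 case-control split the same classifier appears clinically useful, but at a prevalence on the order of 1-2% most positive calls are false positives, which is exactly the unreported-PPV concern raised above.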
2016-05-31T19:58:12.500Z
2013-12-20T00:00:00.000
{ "year": 2013, "sha1": "cb57501258bd7203b22da13dfac459a39e9549b9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3389/fpsyt.2013.00175", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cb57501258bd7203b22da13dfac459a39e9549b9", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
222146386
pes2o/s2orc
v3-fos-license
β2‐AR activation promotes cleavage and nuclear translocation of Her2 and metastatic potential of cancer cells Abstract Prolonged hypersecretion of catecholamine induced by chronic stress may correlate with malignant progression of cancer. β2‐adrenergic receptor (β2‐AR) overexpressed in certain cancer cells may translate the signals from neuroendocrine system to malignant signals by interacting with oncoproteins, such as Her2. In the present study, we demonstrate that catecholamine stimulation activates the expression and proteolytic activity of ADAM10 by modulating the expression of miR‐199a‐5p and SIRT1 and also confirm that catecholamine induction triggers the activities of γ‐secretase, leading to shedding of Her2 extracellular domain (ECD) by ADAM10 and subsequent intramembranous cleavage of Her2 intracellular domain (ICD) by presenilin‐dependent γ‐secretase, nuclear translocation of Her2 ICD, and enhanced transcription of tumor metastasis‐associated gene COX‐2. Chronic stimulation of catecholamine strongly promotes the invasive activities of cancer cells in vitro and spontaneous tumor lung metastasis in mice. Furthermore, nuclear localization of Her2 was significantly correlated with overexpression of β2‐AR in human breast cancer tissues, indicating that catecholamine‐induced β2‐AR activation plays decisive roles in tumor metastasis. Our data also reveal that an unknown mechanism by which the regulated intramembrane proteolysis (RIP) initiated by β2‐AR‐mediated signaling controls a novel Her2‐mediated signaling transduction. leading to the elevation of β2-AR levels. 8 Several lines of evidence have indicated that the prolonged hypersecretion of catecholamine induced by chronic stress may correlate with higher occurrence of malignancies in various organs [9][10][11][12][13] ; β2-AR overexpressed in certain cancer cells may translate the signals from the neuroendocrine system to malignant signals by interacting with oncoproteins such as Her2. 8,14,15 However, the molecular mechanisms underlying cross-communication between β2-AR and Her2-mediated signaling pathways are largely unexplored. Receptor localization plays a crucial role in gathering paracrine signals from adjacent cells. 16,17 Unlike EGFR, which constantly shuttles and recycles through the cell, Her2 as a member of EGFR family is a highly internalization-resistant receptor and primarily resides on the plasma membrane of epithelial cells, although its nuclear localization has been documented. [18][19][20] Nuclear translocation of Her2 was proposed to be related to endocytic internalization in a full-length form and mediated by a conventional nuclear importing system associated with the nuclear pore complex. 21 However, there has been contradictory evidence regarding the nuclear localization of Her2. 16,17 It was also reported that soluble Her2 CTF synthesized by alternative initiation of translation was located in the nucleus. 22 In accordance with this study, we observed that the Her2 intracellular domain (ICD), but not full-length Her2, was exclusively distributed in the nucleus. 23 Correspondingly, a nuclear localization signal was identified within the sequences of Her2 ICD. It is known that Her2 extracellular domain (ECD) can be separated proteolytically from full-length Her2 and detected in cultural medium of human breast cancer cell line SKBR3, as well as in sera from patients with breast cancer. 
24 The occurrence of Her2 ECD is a marker for production of the N-terminally truncated and membrane-associated Her2 fragment with a molecular weight of 95 kDa (p95Her2), which possesses constitutive ligand-independent activity and enhanced transforming efficiency. 25,26 The elevation of soluble Her2 ECD levels in sera from patients has been correlated with recurrence, nodal metastasis, worse prognosis, and poor response to hormone therapy, chemotherapy, and targeted therapy in clinical studies. 24,27 However, the molecular mechanisms whereby Her2 is cleaved are poorly understood. | In vitro γ-secretase assay In vitro γ-secretase activities were measured as described previously. 29 Cells were resuspended in lysis buffer and lysed by passing through a 30-gage needle attached to a 1-mL syringe. Membrane pellets were incubated at 37°C for 2 h in 50 μL of assay reaction buffer (pH 6.5) containing 12 μmol/L specific fluorogenic substrate (Calbiochem). Fluorescence was measured using a SpectraMax M5 spectrometer (Molecular Devices). The experiments were performed in duplicate. | In vivo tumor model All animal experiments were carried out in accordance with the approval of the Animal Research Committee of Xuzhou Medical University. Five-to 6-wk-old athymic female BALB/c nude mice were purchased | Clinical samples All clinical tissue samples were obtained from the Affiliated Hospital of Xuzhou Medical University with the informed consent of patients and with approval for experiments from the Hospital. | Statistical analysis Data were expressed as mean ± SD. For comparisons among the groups in the experiments, an ANOVA test was performed. For evaluation of consistency between β2-AR expression and Her2 nuclear localization in the tumor tissues, Kappa coefficients were calculated. A P-value < .05 was considered statistically significant. | Catecholamine stimulation induces Her2 CTF production and phosphorylation When we treated the human breast cancer cell line BT474 with β-AR agonist isoproterenol (ISO), an extra ~80 kDa molecular mass accumulated in a time-dependent manner, as detected by western blot with the antibody against the C-terminus of Her2 ( Figure 1A). A similar result was also obtained in the human ovarian cancer cell line SKOV3 ( Figure 1B), suggesting that the 80 kDa fragment may represent Her2 CTF and appearance of the 80 kDa Her2 fragment (p80Her2) was associated with ISO stimulation. To determine whether the generation of p80Her2 was mediated by activation of β-AR, we used β-AR activators including ISO (5 μmol/L) and naturally occurring catecholamines, epinephrine (10 μmol/L) and norepinephrine (10 μmol/L), to treat the human breast cancer cell line MCF-7. As shown in Figure 1C, levels of both full-length Her2 and p80Her2 were markedly increased. Pretreatment with the specific inhibitor of β2-AR ICI 118551 (1 μmol/L) strikingly impaired the effect of ISO on the formation of p80Her2, whereas the specific inhibitor of β1-AR ATEN (1 μmol/L) had only a marginal effect ( Figure 1D), indicating that activation of β2-AR is a prerequisite for the generation of p80Her2. p95Her2 was assumed to contain the transmembrane and cytoplasmic domains. 24 To clarify whether p80Her2 had derived from Her2 CTF, we constructed MCF-7 cells expressing Her2-GFP fusion protein (MCF-7/Her2-GFP). 
Western blot analysis with the antibody against GFP showed that a new product, whose size perfectly fitted the molecular weight of p80Her2-GFP fusion protein (~100 kDa), appeared after exposure of the cells to ISO ( Figure 1E), testifying that the p80Her2 fragment was a product of Her2 CTF. We exam- ing ISO stimulation ( Figure 1G). | Catecholamine modulates the cleavage of Her2 ECD by promoting ADAM10 expression through downregulation of miR-199a-5p and upregulation of SIRT1 Ectodomain cleavage of the transmembrane proteins is generally mediated by membrane-associated metalloproteases under the regulation of multiple signaling pathways such as the activation of PKC. 30,31 Earlier studies have shown that Her2 ECD shedding could be suppressed by the broad-spectrum metalloprotease inhibitors TNF Protease Inhibitor (TAPI), batimastat, and the tissue inhibitor of metalloproteases-1. 24 A previous study utilizing siRNAs selectively inhibiting ADAM10 expression suggested that ADAM10 may be one of the proteases responsible for Her2 cleavage. 32 However, shedding of Her2 was inefficient in contrast with the majority of shedding events. In addition, how sheddase is controlled under physiological conditions is unclear. To test whether ADAM10 was engaged in catecholamine-induced Her2 ECD cleavage, we examined the expression of ADAM10 in MCF-7 cells stably transfected with an Her2 expression plasmid (MCF-7/Her2). We found that ISO stimulation induced a significant upregulation of ADAM10 expression, which was obviously coherent with the accumulation of p80Her2 ( Figure 2A). A recent study indicated that the NAD-dependent deacetylase SIRT1 regulates the transcription of the gene encoding ADAM10 by direct interaction with the ADAM10 promoter. 33 Data in Figure 2B show that epinephrine stimulation dramatically promoted the expression of SIRT1 in a time-dependent manner in MDA453 and SKOV3 cells. SIRT1 was recently identified as a direct target of miR-199a-5p. 34 Interestingly, expression of miR-199a-5p was strikingly repressed in SKOV3 and MDA453 cells treated with epinephrine, as determined by real-time RT-PCR analysis ( Figure 2C). An experimental study demonstrated that β2-AR can activate an antiapoptotic signal through Gi-dependent coupling to phosphatidylinositol 3′-kinase/Akt pathway and that activated Akt is sufficient for inducing downregulation of miR-199a-5p in cardiac myocytes. 35 We noticed that phosphorylation of Akt was significantly enhanced by epinephrine stimulation, accompanied by reduction in miR-199a-5p levels in MDA453 and SKOV3 cells ( Figure 2B,C). Furthermore, knocking down ADAM10 expression by the siRNA, whose specificity and efficacy were verified in breast cancer cells by the previous study, 32 greatly inhibited the epinephrine-induced shedding of Her2 ( Figure 2D), implying that catecholamine modulates cleavage of Her2 ECD by promoting ADAM10 expression through downregulation of miR-199a-5p and thus upregulation of SIRT1. It is known that stimulation of β2-AR with the agonists leads to shedding of heparin-binding EGF-like growth factor by ADAM17, which is also the major sheddase for Her3 and Her4, 36,37 and subsequent activation of EGFR in an autocrine/paracrine manner. 4,5 The findings in this study suggested that ADAM10 activities induced by catecholamine stimulation mediate cleavage of Her2 tyrosine kinase. 
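The repression of miR-199a-5p described above was measured by real-time RT-PCR. The text does not spell out the quantification procedure, but relative expression in this kind of experiment is very commonly summarized with the comparative Ct (2^-ΔΔCt) method; the snippet below is a generic, illustrative calculation with invented Ct values, not data or analysis code from this study.

```python
# Generic 2^(-ΔΔCt) relative-quantification sketch with made-up Ct values;
# not data from this study. The target is first normalized to a reference
# gene, then to the untreated control condition.
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # ΔCt in treated cells
    d_ct_control = ct_target_control - ct_ref_control   # ΔCt in control cells
    dd_ct = d_ct_treated - d_ct_control                 # ΔΔCt
    return 2.0 ** (-dd_ct)

# Hypothetical example: the target amplifies later (higher Ct) after treatment,
# so the relative expression drops below 1, i.e. the transcript is repressed.
print(fold_change(ct_target_treated=27.5, ct_ref_treated=18.0,
                  ct_target_control=25.0, ct_ref_control=18.1))  # ≈ 0.16
```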
| γ-Secretase activity induced by catecholamine stimulation is responsible for the generation of p80Her2 ICD Generation of CTFs from a transmembrane receptor involves cleavage within the transmembrane domain by γ-secretase-catalyzed proteolytic processing, whereas the activity of γ-secretase is pro- | Catecholamine stimulation mediates nuclear translocation of Her2 ICD efficiently Nuclear translocation of Her2 as a full-length molecule was investigated in certain cell lines. 18 In contrast with other Her family members, the nuclear entry of Her2 was much less efficient. The regulatory mechanisms of nuclear translocation of full-length Her2 receptor under physiological conditions remain elusive. To illustrate the subcellular distribution of Her2, we traced intracellular trafficking of GFP-tagged Her2 after treatment of MCF-7/ Her2-GFP cells with ISO. In unstimulated cells, ectopic overex- pressed Her2-GFP was defined at the cytoplasmic membrane ( Figure 4A). No evidence for Her2-GFP in the nucleus was found. However, in the presence of ISO, nuclear Her2 was readily visualized ( Figure 4A). By immunofluorescence using the monoclonal antibodies against both C-and N-terminus of Her2, we observed that Her2 molecules were predominantly located at the cytoplasmic membrane and no substantial nuclear Her2 could be detected with either antibody in unstimulated SKBR3 breast cancer cells ( Figure 4B,C). The data are consistent with several previous observations that Her2 did not localize to the nucleus of several breast cancer cell lines spontaneously. 16 However, after treatment with ISO for 60-90 min, Her2 hugely migrated into the nuclei ( Figure 4B). Notably, nuclear Her2 could be easily detected by an antibody against the C-terminus of Her2 ( Figure 4B) but not by an antibody against the N-terminus of Her2 ( Figure 4C). Consistent with these data, p80Her2 was detected in the nuclei | Her2 ICD physically binds to the promoter of COX2 gene and drives transactivation of COX2 gene Sequential cleavage of transmembrane receptors can rapidly transform membrane-associated proteins into soluble effectors, which enter the nucleus and regulate the transcription of their target genes. 31 To determine the functional significance of p80Her2 in the nucleus, we isolated the nuclear proteins from SKOV3 cells and performed oligonucleotide pull-down assay. A previous study identified the Her2-associated sequence (HAS), which was located at 1750 nucleotides upstream from the transcriptional initiation site in COX-2, a known target gene of Her2. 18 We utilized the sequence as an oligonucleotide probe. By oligonucleotide pull-down assay we could reproducibly detect the association of p80Her2 with oligonucleotide probes containing the HAS sequence in ISO- The nuclear extracts were prepared using a Nuclear-Cytosol Extraction Kit. The expression of Her2 was analyzed by the antibody against the C-terminus of Her2. Detection of histone H3 and β-tubulin was used as the indicators of nuclear and cytoplasmic proteins | Catecholamine stimulation strongly promotes the invasive activities of cancer cells in vitro and spontaneous tumor lung metastasis in mice In an effort to determine the effects of catecholamine stimula- Figures 6B and S3). Surprisingly, nuclear staining of Her2 was mainly observed in metastatic tumor tissues by immunohistochemical labeling with an antibody against the C-terminus of Her2 ( Figure 6C), whereas nuclear Her2 was rarely seen in primary tumors ( Figure S4). 
Moreover, using an antibody against the N-terminus of Her2, nuclear Her2 could not be detected, but only membrane-anchored Her2 was signaled ( Figure 6C). To explore the role of γ-secretase and COX-2 in adrenergic signaling-triggered tumor metastasis, SKOV3 cells were injected intravenously into NCG mice via the tail vein. Then mice were treated with ISO, celecoxib, or LY411,575. Figure 6D shows that ISO treat- After tumor cell injection for 4 d, the mice were treated daily ip with PBS or ISO (10 mg/kg). Eight mice were used in each group. At 60 d following tumor implantation, mice were sacrificed. The lungs of the mice were autopsied, fixed, and photographed. C, Lung metastatic tumors were dissected. Paraffin-embedded tissue sections were stained with rabbit monoclonal antibodies against N-terminus and C-terminus of Her2. D, SKOV3 cells (8 × 10 5 /mouse) were injected intravenously into NCG mice via the tail vein, and then the mice were treated daily with ISO (10 mg/kg, ip), celecoxib (COX-2 inhibitor, 5 mg/kg, ip) or LY411,575 (γ-secretase inhibitor, 1 mg/kg, po). Each group contained 5 mice. At 2 wk later, the mice were sacrificed and H&E staining was performed in the dissected lung tissue. E, Expression of β2-AR and Her2 in breast cancer tissues was analyzed by immunohistochemistry with the antibodies against β2-AR and C-terminus of Her2. Bar = 50 μm staining for nuclear Her2 (Table S1). The difference between the 2 groups was highly significant (P < .0002). However, the kappa coefficient for β2-AR expression and Her2 nuclear localization was moderate (0.46), suggesting that other molecular mechanisms may be involved in nuclear translocation of Her2. Simultaneous staining of β2-AR and nuclear Her2 was also observed in human ovarian cancer tissues ( Figure S5). These data demonstrated that nuclear localization of Her2 is intimately associated with overexpression of β2-AR. | D ISCUSS I ON The present study demonstrated that catecholamine-induced Interestingly, a recent phase II clinical trial showed that β-blockade with propranolol reduced biomarkers of metastasis in breast cancer. 44 Our study showed that catecholamine stimulation strongly promoted the invasive activities of cancer cells in vitro and spontaneous tumor lung metastasis in mice. We noticed that the nuclear localization of Her2 was conspicuous in metastatic lung tissues. Furthermore, overexpression of β2-AR significantly correlated with Her2 nuclear localization in human breast cancer tissues, implying the clinical significance of the crosstalk between β2-AR and Her2. It is conceivable that ADAM10 and γ-secretase
2020-10-06T13:37:19.765Z
2020-10-05T00:00:00.000
{ "year": 2020, "sha1": "e48b5a29ab9dda448f00b95a2c5d2a997112cdfb", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/cas.14676", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7609469ad976aec1abfbd10e8283875879fc8e1d", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
207893651
pes2o/s2orc
v3-fos-license
Helicases FANCJ, RTEL1 and BLM Act on Guanine Quadruplex DNA in Vivo Guanine quadruplex (G4) structures are among the most stable secondary DNA structures that can form in vitro, and evidence for their existence in vivo has been steadily accumulating. Originally described mainly for their deleterious effects on genome stability, more recent research has focused on (potential) functions of G4 structures in telomere maintenance, gene expression, and other cellular processes. The combined research on G4 structures has revealed that properly regulating G4 DNA structures in cells is important to prevent genome instability and disruption of normal cell function. In this short review we provide some background and historical context of our work resulting in the identification of FANCJ, RTEL1 and BLM as helicases that act on G4 structures in vivo. Taken together these studies highlight important roles of different G4 DNA structures and specific G4 helicases at selected genomic locations and telomeres in regulating gene expression and maintaining genome stability. G-Quadruplex Structures and G-Quadruplex Helicases DNA molecules are capable of adopting a wide range of secondary structures besides the canonical B-DNA duplex form. Most secondary structures form when B-DNA is unwound during transcription or replication. Depending on the sequence context, single stranded DNA (ssDNA) can interact with itself or other DNA strands to form non B-DNA structures ranging from "simple" hairpins and cruciforms to more complicated structures such as guanine-quadruplex (G4) DNA. While many secondary structures have functions in, e.g., regulating transcriptional activity, (nearly) all of them can form a barrier for progression of replication forks and must be resolved during DNA replication. To deal with the wide range of secondary structures that can form, it appears that all branches of life have evolved divergent repertoires of DNA helicases precisely for this role. The human genome encodes hundreds of helicases, many of which appear to have non-redundant functions. Most helicases are able to unwind different forms of DNA structures, at least in vitro, but display higher affinity for some structures than others. It seems safe to assume that many of such helicases have evolved specifically to unwind one or more different secondary DNA structures. G4 DNA can arise in strands of guanine-rich DNA when guanine residues form Hoogsteen base pairings to form a planar structure consisting of four guanines, otherwise known as a G-quartet ( Figure 1A). These G-quartets can stack into G4 structures ( Figure 1B). Canonical G4 structures form at the highly specific DNA motif G3+N1-7G3+N1-7G3+N1-7G3+, consisting of four runs of three or more guanines interspersed with variable length spacers containing any nucleotide. While canonical G4 DNA folds from a single DNA strand, G4 DNA can also form between two or even four separate strands of DNA [1]. More recently, the definition of G4-forming motifs has expanded to include those containing runs of two guanines, as well as spacers containing (many) more than seven nucleotides [2,3]. Given the high stability and wide range of potential G4 structures, it seems probable that mammalian cells have evolved different helicases with affinity for binding and unwinding of different G4 DNA structures.
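The canonical motif just described, four runs of three or more guanines separated by loops of one to seven bases, lends itself to a simple pattern search. The sketch below is a minimal, illustrative scanner for canonical G4 motifs on one strand; it is not any published G4 prediction tool, and the broader motif definitions cited above would need a more permissive pattern.

```python
import re

# Canonical G4 motif: G{3,} followed by three repeats of (1-7 nt loop + G{3,}).
# Minimal illustration; scans one strand only and ignores the broader motif
# definitions (two-G runs, long loops) mentioned in the text.
G4_PATTERN = re.compile(r"G{3,}(?:[ACGT]{1,7}G{3,}){3}", re.IGNORECASE)

def find_g4_motifs(sequence: str):
    """Return (start, end, matched subsequence) for non-overlapping canonical G4 motifs."""
    return [(m.start(), m.end(), m.group()) for m in G4_PATTERN.finditer(sequence)]

# Example: six human telomeric repeats contain a canonical G4 motif.
print(find_g4_motifs("TTAGGG" * 6))
```

Scanning the reverse complement (or, equivalently, searching for the C-run version of the pattern) covers motifs on the other strand.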
Interestingly, such helicases appear to have non-redundant functions in maintaining (epi-) genetic stability, as shown by the variable phenotypes caused by mutations in the associated genes. Here, we will discuss how we encountered three helicases for which we found evidence that one of their functions is to act upon G4 DNA structures in vivo. The strikingly different functions of these different helicase proteins illustrate the importance of proper G4 metabolism in maintaining genome stability.
From Self-Renewal of Stem Cells to Telomeres One prevalent idea in the late 1980s was that blood forming or hematopoietic stem cells (HSC) must be endowed with self-renewal properties in order to ensure blood cell formation over a lifetime. Transplantation studies in the mouse supported this notion in that single marked cells were shown to be capable of reconstituting blood cell formation in lethally irradiated recipients [4][5][6]. The situation in humans was less clear, but the assumption was that functional properties of human and murine HSCs would be comparable. However, we found that self-renewal properties of purified human HSC are developmentally controlled [7] and coincide with loss of telomere repeats with each division in vitro and in vivo [8]. Based on these observations, our studies shifted towards the role of telomeres in human biology [9] and we developed novel techniques using peptide nucleic acid probes to measure the length of telomere repeats in individual chromosomes [10] and single cells [11]. These techniques were used to show that the rate of telomere loss in human cells varies markedly between cell types and between individuals ( Figure 2A,B). Later studies showed that telomerase RNA as well as the telomerase reverse transcriptase protein levels are both limiting stem cell function as a modest drop in those levels, resulting from haplo-insufficiency for either of these two telomerase genes, was found to have a dramatic effect on telomere length and stem cell function, often resulting in bone marrow failure or pulmonary fibrosis ( Figure 2C,D from [12]). So in contrast to the mouse, where a single blood forming stem cell without telomerase can restore blood cell production in irradiated recipients [13], telomerase levels in human stem cells are very tightly controlled. Most likely, progressive telomere loss limits human stem cell proliferation to act as a tumor suppressor mechanism that does not exist in short-lived mice [14].
Figure 2. (A,B) Telomeres in human lymphocytes and granulocytes from peripheral blood shorten with age [15]. Fluorescence in situ hybridization followed by flow cytometry (Flow FISH) results from over 800 healthy individuals were used to calculate the distribution of telomere length at any given age (percentiles in population: solid green 50th; red 1st, blue 99th). Note the marked variation on average, cell specific telomere length at any given age. (C,D) The critical role of telomerase is illustrated by the telomere length in cells from patients that are haplo-insufficient for telomerase genes (red dots in C and D) compared to their unaffected siblings (black dots in C and D). The blue zone at the bottom of the graphs represents the area where telomeres are expected to be fully "uncapped" with less than 1 kb of TTAGGG repeats per chromosome end. Reproduced with permission from [12]. Hunting for Genes that Regulate Telomere Length In view of the telomere loss in human cells, we became interested in factors, other than telomerase levels that could possibly explain the difference in the average telomere length in cells from individuals of the same age ( Figure 2A,B). To study genetic control of telomere length we collaborated with Richard Hodes and others looking for genes that regulate telomere length in the mouse [16]. For this work, laboratory mice (Mus musculus) with very long tracks of telomere repeats (>30 kb) were crossed with a different murine species, Mus spretus, which has an average telomere length of around 10 kb. The F1 offspring of such crosses showed clear elongation of telomeres on M.spretus derived chromosomes. The F1 animals were backcrossed with M. spretus to map the M.musculus derived genetic loci required for this telomere elongation. Several loci were identified and we focused on a region of around 10 Mb at the tip of mouse chromosome 2 [16]. Because the mouse genome sequence had not been reported at the time, we studied the syntenic region on human chromosome 20 q. Several candidate genes, including "Novel Helicase Like (NHL)", a gene with the seven conserved structural motifs of helicases and homology to yeast Rad3 were identified. Rather than trying to knock-out this "candidate" telomere length regulating gene in the mouse, we decided to study a homologous gene in C.elegans, a more suitable model organism for genetic studies [17]. Discovery of dog-1 A BLAST search of the human "NHL" gene against the C.elegans genome yielded several hits including F25H2.13 (now known as rtel-1) and F33H2.1, the second best hit. In collaboration with Ann Rose at UBC we studied F33H2.1 using a mutant strain (gk10) that lacked expression of the F33H2.1 protein [18].
The crucial finding was that animals with a characteristic "Variable abnormal" or Vab phenotype [19] were observed more frequently than predicted by chance in gk10 ( Figure 3B). Gk10 animals with a Vab phenotype were crossed with known Vab mutants. Such complementation studies revealed that the phenotype in our helicase mutants resulted from loss of the vab-1 gene. Strikingly, all gk10 animals with a Vab phenotype had deletions in the vab-1 gene initiating in front of exon 5 ( Figure 3C-E). Similar deletions were found in many other G-rich genomic regions in gk10 and we decided to call the gene Deletion of guanine-rich DNA or dog-1, assuming that more than one gene would be required to prevent the characteristic deletions in gk10. Marcel Tijsterman and colleagues set up a mutagenesis screen to look for such additional genes [20]. Using an elegant reporter strain, multiple independent mutants were indeed identified. Surprisingly, all these mutants were found to map to dog-1, supporting that DOG-1, now also known as FANCJ, is the lone helicase required for prevention of G-rich DNA deletions in nematodes. Dog would have been a better name! DOG-1/FANCJ Promotes Replication through G4 DNA While dog-1 mutant C. elegans displayed characteristic deletions of G-rich DNA, the animals did not appear to have telomere defects [18] even though our original goal was to identify factors involved in telomere length regulation. While we clearly did not find the NHL homolog in worms, DOG-1 must somehow be related to this mysterious factor. What could be the link between deletions in G-rich DNA and the setting of telomere length? In hindsight, it seems obvious that the answer is related to G4 DNA! Mammalian telomeres are made up of tandem TTAGGG repeats (TTAGGC in C. elegans) and readily form G4 structures in vitro [21]. Further investigation of the deletions in dog-1 deficient worms showed that they coincide with the presence of G4 motifs at much higher rates than anywhere in the genome [20]. Deletions in dog-1 mutant animals initiate at the 3' end of G4 motifs and typically measure 100-200 nucleotides in length, indicating that G4 structures might form a replication barrier in the absence of dog-1 [18,20]. Later it was shown that the human homolog of DOG-1, FANCJ (also called BACH1 or BRIP1), can bind and unwind G4 structures in vitro, and that FANCJ depletion sensitizes cells to G4 stabilizing agents [22]. Mutations in FANCJ are associated with Fanconi anemia, a genetic cancer-susceptibility disorder [23][24][25]. As in C. elegans, loss of function of FANCJ leads to genomic deletions in the vicinity of G4 motifs [26]. Finally, it was shown that FANCJ promotes replication fork progression through sites of G4 sequences [27,28], indicating that deletions occur due to replication fork stalling at G4's, although the exact mechanism is still unknown. It should be noted that FANCJ plays other roles than maintaining genomic stability during DNA replication.
FANCJ also promotes repair of DNA double strand breaks (DSBs) via the homologous recombination (HR) pathway through its interaction with BRCA1 [23,29]. For an excellent review of the many roles of FANCJ in maintaining genome stability the reader is referred elsewhere [30]. FANCJ Maintains Epigenetic Stability at G4 Motifs Despite the fact that G4 motifs can lead to deletions throughout the genome, these tracts are conserved throughout evolution. In fact, sequences with quadruplex forming potential are present at much higher than expected frequencies in most genomes, including in worms [31] and humans [32]. Why would evolution maintain these potentially pathogenic sequences if they did not serve a function? Although there is much speculation about what these functions are, it seems clear that G4 motifs in DNA can act as a switch to affect the transcription of nearby genes depending on whether a G4 structure is formed or not. Indeed, loss of FANCJ can cause epigenetic instability at genes containing G4 motifs, leading to both increases and decreases in gene expression in the absence of deletions [33]. While the presence or absence of a G4 can directly affect transcriptional activity, FANCJ deficiency has also been linked to disruption of chromatin structure due to defects in restoring chromatin state behind replication forks [27]. FANCJ appears to maintain stability at G4 motifs in DNA in collaboration with two other G4 helicases, WRN and BLM [33,34]. Differences in the timing of sister chromatid replication caused by G4 DNA structures could trigger epigenetic differences between daughter cells after cell division, where one daughter inherits the "correct" chromatin state, while the other differs from the mother cell in gene expression at this locus ( Figure 4). While such events would be more common in absence of G4 helicases or upon replication stress [35], local DNA replication timing differences could occur at low frequency in all dividing cells and impact gene expression on paired daughter cells as predicted by the "silent sister" hypothesis [36]. (B) G4 DNA structures in normal cells, resolved by helicases such as FANCJ, RTEL, BLM and others, could drive epigenetic differences between sister chromatids as parental nucleosomes are unlikely to be still around for deposition onto nascent DNA by the time replication resumes.
The "silent sister" hypothesis predicts that differences in gene expression between daughter cells can result at G4 locations from differences in the replication timing of G-rich DNA. Discovery of RTEL1 Encouraged by the finding of DOG-1, a G4 DNA helicase in C.elegans and the known folding of telomeric DNA into G4 DNA [37], we returned to our search for the elusive NHL gene in the mammalian system. We cloned the gene and with help from Andras Nagy and Hao Ding we knocked out the "NHL" gene in the mouse. Animals without NHL died in utero at approximately E11, while developmental defects could already be detected at E8.5 [38]. However, we could obtain embryonic stem (ES) cells with and without NHL, the latter of which showed a clear telomere defect (Figure 5A-C). When NHL +/− animals were crossed with M.spretus, we could show that NHL is indeed the gene required for elongation of M.spretus telomeres in crosses with M.musculus. We named the gene Regulator of telomere length or Rtel. This name was changed to Rtel1 by the mouse nomenclature committee, no doubt assuming that more Rtel genes were to be discovered, a mistake we are familiar with. (C) High-resolution TRF analysis of ES cells with (left 4 lanes) and without Rtel (remaining lanes) shows a smear in wildtype cells and progressive shortening of discrete bands in cells lacking Rtel (for details see [39]). (D) To explore differences between Rtel in M.musculus and M.spretus that could explain the marked difference in telomere length between the two species we engineered knock-in animals using recombineering. For the first animal around 1 kb of the M.spretus promoter sequence flanking the untranslated exon 1 was
Unlike dog-1 mutants, rtel-1 deficient worms do not display a deletion phenotype at G4 motifs, indicating that both proteins play different roles in maintaining genome stability. Deletion of rtel-1 leads to increased sensitivity to DNA damaging agents and promotes DNA repair [40]. In mammalian cells, RTEL1 is required for normal DNA replication genome-wide, as well as telomere extension during DNA replication [39][40][41][42]. The latter function was linked to RTEL1 unwinding both T-loops and G4 structures in telomeres [41]. Specifically, RTEL1 deficiency leads to loss of terminal single stranded G-overhangs [43], known to preferentially form G4 structures [44]. Whether or not genome-wide DNA damage in absence of RTEL1 mainly occurs at G4 motifs as well is currently unknown. Mutations in the human RTEL1 gene are now known to cause a particularly serious form of dyskeratosis congenita, the so-called Hoyeraal-Hreidarsson syndrome (HHS) [45]. Consistent with Rtel1 −/− ES cells, cells from HHS patient show general genome instability with unusually short telomeres as well as increased telomere shortening compared to control cells. RTEL1 and Telomerase The RTEL1 requirement for elongation of short telomeres by telomerase was also documented using high-resolution TRF analysis [39]. Whereas wildtype embryonic stem (ES) cells show telomeres of variable length indicated by a smear in the TRF analysis, RTEL1 deficient ES cells show progressive shortening of telomeres in multiple discrete bands ( Figure 5C). We interpreted these finding as support for the notion that RTEL1 is required for elongation of terminal telomeric DNA by telomerase. This raises many questions about telomeres in M.spretus. Does telomerase in M.spretus only act on very short telomeres [46]? Does single stranded G-rich DNA at the 3' end of M.spretus chromosomes fold into a G4 structure that does not exist in M.musculus? Do G4 structures at the very 3' end of chromosomes exist in cells from other species? How are such structures hidden from DNA damage response pathways? Do terminal G4 DNA structures compete with T-loops [47] or do T-loops themselves contain G4 structures? Does terminal G4 DNA explain why G4 motifs are common in telomere DNA from almost all organisms with linear chromosomes? Clearly, much more work needs to be done to elucidate the roles of G4 DNA, RTEL1 and telomerase in relation to telomere function in different cells from different species. A recent piece to the puzzle was the finding that reversed replication forks are a pathological substrate for telomerase and a source of telomere catastrophe in Rtel1 −/− cells [48]. Since telomeres harbor the highest density of G4 motifs in the entire genome, compromised telomere integrity in the absence of the G4 DNA helicase RTEL1 is not unexpected. What is Wrong with Rtel1 in M.spretus? Once Rtel1 had been identified as a gene required for telomere elongation, our studies focused on the difference between Rtel1 in M.musculus and M.spretus. Given the highly conserved predicted amino acid sequence between the Mus species, we focused on differences in the promotor region, where one G4 DNA motif is lost in M.spretus, and the 3 end of the gene, in view of the noticeable differences in the splicing of 3 exons between the species [38]. For these studies the promotor region as well as the 3' end of the Rtel1 gene in M.musculus were replaced with sequences from M.spretus ( Figure 5D). 
This was a herculean effort by Evert-Jan Uringa in the lab using recombineering [49] at a time when CRISPR-Cas9 mediated gene editing had not yet been invented. Evert-Jan successfully made two knock-in strains, replacing the promotor as well as the 3 end of Rtel1 in M.musculus with sequences from M.spretus as shown in Figure 5D. Unfortunately, neither of the knock-in animals showed a telomere phenotype. Even the double knock-in animals, obtained upon crossing the knock-in animals, had telomeres that were indistinguishable in length from wild type animals (results not shown). On hindsight we should probably have focused on the methionine at position 482 in RTEL1 from M.musculus with the lysine observed at that position in M.spretus [38]. More recent studies have shown that telomere dysfunction in cells from HHS patients can result from a M482I mutation in RTEL1 [45]. The methionine at position 482 in RTEL was furthermore recently reported to be within the Arch domain required for DNA binding and translocation [50]. Whether the M482K amino acid substitution in RTEL1 indeed explains the telomere length difference between M.musculus and M.spretus remains to be shown. BLM is A Multifunctional Caretaker of Genome Stability DOG-1 and FANCJ protect against deletions at G4 motifs, and FANCJ maintains epigenetic stability in collaboration with the BLM helicase. BLM is one of the best studied G4 helicases, although its role in maintaining (epi-) genetic stability is not fully understood. BLM was first identified as the causative factor in Bloom syndrome, a genetic disorder characterized by growth retardation, genetic instability, and cancer predisposition [51,52]. The main phenotype of BLM deficient cells includes sensitivity to a range of DNA damaging agents, elevated spontaneous mutation rates, and a nearly 10-fold increase is sister chromatid exchange (SCE) events [53]. Like FANCJ, BLM plays multiple roles in maintaining genome stability, and it is, therefore, challenging to separate BLM's function in G4 biology from its other functions. The BLM helicase can unwind a wide range of DNA structures, including B-DNA [54], but it shows much higher affinity for double-Holliday junctions [55] and G4 DNA [56,57]. It was shown that BLM is an anti-recombinase that prevents exchanges of genetic material between sister chromatids and homologs during homologous recombination [58]. Further roles where discovered in maintaining genome stability at replication forks [59][60][61], resolving chromosome bridges during mitosis [62,63], and telomere maintenance [64]. We will not attempt to summarize the extensive literature on the different functions of BLM here and readers are referred to the primary literature and the many excellent reviews written on the subject. Instead, we focus on BLM's role in processing of G4 DNA. BLM Promotes Telomere Replication G4 DNA structures readily form in telomeric DNA and these need to be processed for proper telomere replication. BLM localizes to telomeres [64,65] and interacts with several components of the shelterin complex [66]. BLM activity was found to be enhanced by two of these components, TRF2 [67] and POT1 [68], perhaps allowing BLM to unwind or assist in the unwinding of telomeric G4 DNA during replication. Indeed, BLM deficiency leads to a decrease in the speed of replication at telomeres, but only in the G-rich strand which is capable of folding into G-quadruplexes [69]. This suggests that BLM is specifically required to unfold G4 structures during telomere replication. 
Indeed, telomere replication can also be retarded by treating cells with G4 stabilizers [69], and the absence of BLM leads to increased telomere fragility and telomere shortening [64]. BLM Prevents Replication Fork Stalling and Recombination at G-Quadruplexes Since BLM is required to replicate through G4 structures in telomeres, is the same true for other locations in the genome? We became curious if SCE locations in BLM deficient cells overlapped with G4 motifs. There is evidence that the presence of a stabilized G4 structure delays BLM in unwinding duplex DNA [70], but classical cytogenetic SCE identification methods do not allow for high-resolution mapping of these events. To improve on this, we used Strand-seq, a sequencing-based method to map SCEs at kilobase resolution in single cells [71,72]. We previously showed that it was possible to distinguish between parental DNA strands and newly synthesized DNA in newly replicated cells by using unidirectional fluorescence in situ hybridization (FISH) probes after one round of incorporation with bromodeoxyuridine (BrdU) [73]. The approach we took, shown in Figure 6, is based on the nicking of nascent DNA with BrdU following exposure of the DNA to UV light in the presence of the DNA dye Hoechst 33258 [74]. Normally, the Hoechst dye binds with high affinity to double stranded DNA and shows fluorescence upon excitation with UV light. Such fluorescence is not observed when Hoechst is bound to BrdU substituted DNA. In this case, the absorbed light energy is not emitted but initiates a photochemical reaction resulting in nicks exclusively in the BrdU substituted DNA strand ( Figure 6). This principle [75] was exploited to show that major satellite sequences in murine chromosomes are always oriented in the same direction relative to the 3' end of G-rich telomeric DNA ( Figure 6B) [73]. We adapted this method into a sequencing-based approach, allowing identification and mapping of SCE events at kilobase resolution [71,72]. Using the Strand-seq method we confirmed that BLM deficient cells have elevated levels of SCEs [76,77]. Furthermore, we could show that SCEs in BLM deficient cells are not randomly distributed over the genome but are enriched at genes in general and genes with G4 motifs in particular [77]. Interestingly, this effect was strongest for G4 motifs in the strands of transcribed genes, highlighting the interplay between transcription and G4 formation. If chromosomes are arrested at metaphase and spread onto a slide, it is possible to prepare completely single stranded chromosome spreads by nicking the DNA using Hoechst and UV followed by digestion with exonuclease.
Single stranded chromosomes can be used to identify the 5 and 3 ends of chromosomes and study the orientation of genomic segments relative to such ends with unidirectional FISH probes. (C) If cells are allowed to divide after one round of BrdU, the two daughter cells will inherit one template strand from each parent. Such parental template strands can be identified using single cell Strand-seq as illustrated in (D) Reads derived from each parental chromosome map either the 5 to 3 "Crick" template strand or to the 3 to 5 "Watson" template strand. If during the synthesis of parental DNA template strands were switched reads will map to opposite sides of the reference genome at the site of such an exchange (arrows). Note the single chromatid exchange event in the paired normal cells (bottom of chr 1) and the many template strand exchanges in cells from a patient with Bloom's syndrome (E) The latter are not randomly distributed over the genome but enriched at G4 motifs of active genes [77]. BLM Deficiency Affects Transcription of Genes Containing G4 Motifs As discussed above, BLM appears to cooperate with FANCJ to impact epigenetic stability, at least in part by assuring proper recycling of parental histones behind the replication forks [33,34]. There have been no reports of epigenetic instability in the absence of BLM alone, although changes in transcriptional profiles were reported [78,79]. These changes in gene expression occur in both directions, and correlate with the presence of G4 motifs in both promoters and gene bodies. There is also evidence that BLM prevents transcription-induced DNA damage, specifically by unwinding R-loops that occur during transcription [80,81]. While R-loops are not necessarily associated with G4s, there are indications that R-loops form more readily and are more stable when a G4 motif is present on the non-transcribed strand, perhaps by stabilizing the displaced DNA strand [82]. BLM has some affinity for R-loops, similar to D-loops, but it is also possible that BLM destabilizes R-loops by unwinding the G4 on the displaced DNA strand. Molecular Phenotypes That Inform in vivo Helicase Function In the case of DOG-1 or FANCJ we were fortunate to stumble upon a very specific, molecular "signature" phenotype observed in mutant cells ( Figure 3C-E). The ability to deduce functional properties of proteins acting on DNA by analysis of mutant cells is also illustrated for RTEL1 in Figure 5C and for BLM in Figure 6E. This type of information is vital to complement structural information about proteins which is proceeding at an unprecedented pace [83]. In case of the iron-sulfur cluster (Fe-S) helicases DOG-1, FANCJ, and RTEL1, the generation of structural information has been relatively slow. Specific proteins and mitochondria are required for biosynthesis and incorporation of the FeS cluster into such proteins [84][85][86] and such conditions are not easily reproduced in vitro. The functional role of the FeS cluster is furthermore far from clear. It has been suggested that next to a structural role in protein folding and a functional role in 5 to 3 translocation on DNA substrates [50,87], the FeS cluster could also increase the binding affinity for non-duplex DNA structures [88]. G4 DNA has furthermore been reported to have peroxidase activity upon interaction with heme inside the nucleus [89] and perhaps G4 structures and G4 helicases communicate in a language that we still need to learn. 
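Figure 6 above illustrates how a sister chromatid exchange shows up in Strand-seq as a point along the chromosome where reads switch from mapping to one parental template strand ("Watson") to the other ("Crick"). The toy sketch below is not the published Strand-seq analysis software; it only demonstrates that core idea with simulated reads, using a brute-force search for the split point that best separates the two strand states.

```python
# Toy illustration of the Strand-seq readout: a sister chromatid exchange (SCE)
# appears where reads switch template strands. Reads are (position, strand)
# pairs with strand +1 = "Crick" and -1 = "Watson". Simulated data only.
def find_strand_switch(reads):
    """Return the position that best splits reads into two strand-consistent segments."""
    reads = sorted(reads)
    strands = [s for _, s in reads]
    best_i, best_score = 0, -1.0
    for i in range(1, len(reads)):
        left, right = strands[:i], strands[i:]
        # How uniformly each side maps to a single template strand (0..2).
        score = abs(sum(left)) / len(left) + abs(sum(right)) / len(right)
        if score > best_score:
            best_i, best_score = i, score
    return reads[best_i][0]

# Simulated chromosome: Watson template up to ~60 kb, Crick template afterwards.
reads = [(pos, -1) for pos in range(1_000, 60_000, 2_000)] + \
        [(pos, +1) for pos in range(61_000, 120_000, 2_000)]
print(find_strand_switch(reads))  # 61000, the inferred exchange point
```

Real analyses must of course handle background reads, uneven coverage, and multiple exchanges per chromosome, but the underlying signal is exactly this strand-state switch.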
The answers to the many questions about G4 DNA and G4 DNA helicases will no doubt require the development of novel tools and novel insight. Sydney Brenner once quipped, "Progress in science depends on new techniques, new discoveries and new ideas, probably in that order" [90]. New techniques, and the discoveries enabled by such techniques, are needed to help solve some of the current riddles related to G4 DNA and G4 helicases.

Conclusions

DNA with a G4 motif can function as a molecular switch by conversion of G4 DNA into duplex DNA and vice versa. Where such switches are useful to cells and organisms remains to be fully understood. However, switches enabled by G4 motifs are a double-edged sword: during replication and recombination, G4 structures are obstacles that need to be resolved. As such, the presence of G4 structures at any given genomic site needs to be closely controlled. We have discussed three different helicases which, while all capable of acting upon G4 DNA, play highly distinct roles in protecting against G4-associated damage. Many other G4 DNA helicases have been identified, with both unique and overlapping functions. For example, both yeast and human PIF1 proteins are potent G4 helicases [91,92]. Yeast Pif1 promotes replication through G4 motifs [93,94], and its absence destabilizes such motifs [93,95]. These results are reminiscent of DOG-1/FANCJ, and it is unclear how much functional overlap there is between these different G4 helicases. Other known G4 helicases include ATRX [96], WRN [97,98], Sgs1 (the BLM and WRN homolog in S. cerevisiae) [99], XPB and XPD [100], and many others. What is the basis for the strikingly different cellular functions and phenotypes of loss of these different G4 helicases? Are these differences caused 'simply' by different spatiotemporal recruitment to G4 DNA, or do these helicases only bind and unwind certain classes of G4 structures in vivo [101]? With current genome editing and some of the tools described in this paper, the answer to these and related questions should be forthcoming in the not so distant future.

Author Contributions: The paper was written by P.L. and N.v.W.

Funding: This work was supported by grant 20R77807 from the Canadian Institutes of Health Research.
2019-11-07T14:09:33.757Z
2019-10-31T00:00:00.000
{ "year": 2019, "sha1": "fe4bdf3f75481aa5f59b6581d05a40cd3f0553ec", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4425/10/11/870/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a79caee91bea29360d714b9ec8459eca6807ce21", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
258501638
pes2o/s2orc
v3-fos-license
Analysis of Escherichia coli Microbial Contamination and Total Coliform Bacteria in Refill Drinking Water in Pondok Cabe Ilir Village, South Tangerang City

Water is an essential nutrient for human health. It is important to maintain adequate drinking water intake to prevent dehydration, which can cause hypothermia, dizziness, constipation, and kidney stones. Currently, water-filling stations are an alternative source of drinking water because of limited access to clean water at affordable prices. The purpose of this study was to determine bacterial contamination at a drinking water station in Pondok Cabe Ilir, South Tangerang, Banten, based on Permenkes RI No. 492/Menkes/Per/IV/2010. The study was conducted using the Most Probable Number (MPN) method and the IMViC test, Triple Sugar Iron test, H2S production test, and motility test to identify Escherichia coli. The results showed that one out of five refilled drinking water samples contained Coliform bacteria above the threshold according to Minister of Health Regulation No. 492/Menkes/Per/IV/2010.

INTRODUCTION

Water is a primary requirement for humans; therefore, it is needed in sufficient quantities and with guaranteed safety for consumption by all residents. Unfortunately, the community's high demand for safe drinking water is not matched by the ease of access to safe drinking water. This encourages the presence of bottled drinking water industries (AMDK) and refilled drinking water depots (DAMIU). Many people prefer to use refilled drinking water because it is less expensive than bottled drinking water. There were 283 refilled drinking water depots in South Tangerang City in 2012. However, the presence of these depots is not matched by permits, guidance, supervision, and control of circulation, resulting in low guarantees of drinking water quality (Radji et al., 2008). Diarrhea plays a role in 31% of deaths in children aged

RESEARCH METHOD

The research design used was descriptive analytic with a cross-sectional approach, referring to the method used by Bambang et al. (2013) with modifications.

Sampling and Observation

The process of taking refilled drinking water samples began by surveying the number, location, and sampling permits of DAMIU in the Pondok Cabe Ilir sub-district. The samples were then brought to the Microbiology Laboratory of the Pharmacy Department at the Syarif Hidayatullah State Islamic University Jakarta to carry out a predictive test, followed by a confirmation test and an identification test.

Predictive Test

In Durham tubes containing Lactose Broth (LB) media, each sample from the 5 depots (D1, D2, D3, D4, D5) was diluted to a concentration of 10^-3. The samples were then incubated at 36 ± 1 °C for 24 and 48 hours. Durham tubes forming gas were recorded as positive samples.

Confirmation Test

From a positive Durham tube in the predictive test, 1 ose (loopful) was taken and inoculated in 2% Brilliant Green Lactose Bile Broth (BGLB 2%) media; the samples were then incubated at 36 ± 1 °C for 24-48 hours. Durham tubes that produced gas were recorded as positive samples.

Identification Test of E. coli Bacteria

Identification tests were conducted on selected colonies that had been rejuvenated previously using media (Eosin Methylene Blue Agar (EMBA) and Nutrient Agar (NA)). Identification of E. coli bacteria was carried out by Gram staining, the Indole, Methyl Red, Voges Proskauer, and Citrate (IMViC) tests, and the Triple Sugar Iron test. The testing technique refers to the method used by Bambang et al. (2013).

Predictive Test Results

Table 1 shows the results of the predictive test. In the second tube replicate, only sample D2 at 10^-1 dilution showed positive results, whereas samples D1, D3, D4, and D5 only showed media turbidity.
Meanwhile, in the negative control, there was no turbidity in the media or gas bubbles in the Durham tube. The presence of gas bubbles in the Durham tube and turbidity in the LB media indicated that the sample contained Enterobacteriaceae bacteria. This is because these bacteria can ferment sugar through mixed-acid and butanediol fermentation (Müller, 2001).

Affirmation Test Results

The results of the affirmation test are shown in Figure 1. Contamination can be caused by buildings and equipment that are not kept clean, or by depot conditions that do not meet requirements (Walangitan et al., 2016). Besides that, contamination by Coliform bacteria in sample D2 can also be caused by the long shelf life of the raw water in the holding tank (Rahayu et al., 2013; Violita et al., 2010), considering that raw water in D2 is delivered only every 2-3 weeks.

E. coli Identification Test Results

The E. coli identification test on sample D2 was carried out after the presumptive test and confirmation test. Identification of E. coli needs to be conducted because this bacterium is considered the best indicator of fecal contamination in drinking water samples (Edberg, 2000).

Note: A: Culture media before incubation; B: The culture medium becomes cloudy and there are gas bubbles in the Durham tube after 48 hours of incubation.

Figure 2. E. coli identification test results.

The presence of bubbles and the occurrence of turbidity in the culture media, as shown in Figure 2, indicated that sample D2 was positive for E. coli according to the American Public Health Association (APHA). The bacterial culture from the D2 sample inoculated on EMBA media showed single, shiny metallic-green colonies, an indicator that lactose and/or sucrose had been fermented by faecal Coliforms, namely E. coli bacteria (Figure 3) (Leboffe et al., 2010).

Note: Green metallic luster colonies were formed.

Figure 3. Identification test results on EMBA media.

The results of the Gram stain observed under a microscope showed that the bacteria in sample D2 had the character of E. coli, that is, short bacilli that stain red (Figure 4). Voges Proskauer test: negative for acetyl methylcarbinol. Citrate test: negative in forming Na2CO3. In the Indole test, a red ring was formed when KOVAC reagent was dropped onto sample D2, indicating that the bacteria contained in it produced indole (Figure 5).

Note: A: SIM media without bacteria; B: The media turned black after being inoculated with bacteria; C: A red ring was formed on the media that had been dripped with KOVAC reagent.

Figure 5. Indole test results.

The results of the methyl red test on sample D2 showed a color change in the medium from light yellow to red after adding methyl red (Figure 6). This color change occurs due to the acidic conditions formed in the media (Hemraj et al., 2013) as a result of glucose fermentation through glycolysis into a mixture of acids, namely acetic, lactic, and formic acids, together with CO2 and ethanol. The presence of this acid formation can be detected with methyl red, which changes the color of the medium to red (Bambang et al., 2014; Müller, 2011).

Note: A: Media containing bacteria before the incubation period; B: Media containing bacteria after incubation, dripped with methyl red, which turns red.

Figure 6. Methyl Red Test Results.
The results of the Voges Proskauer test on sample D2 using Barritt A and Barritt B reagents showed a change in the color of the media to dark brown (Figure 7). In the Voges Proskauer (VP) test, E. coli ferments glucose into a mixture of acids, ethanol and carbon dioxide (Müller, 2001). In the presence of peptones in the MR-VP media, the acetoin formed from glucose fermentation undergoes oxidation upon addition of KOH and produces a red color in the media. In this reaction, E. coli gives a negative response, shown by a brown-yellow color (Hemraj et al., 2013). The citrate test on sample D2 was negative: there was no color change in the Simmons citrate medium (Figure 8), which indicates that the bacteria inoculated in the media are enteric bacteria such as E. coli (Müller, 2001).

Note: A: Citrate media containing bacteria before the incubation period; B: Simmons Citrate media containing bacteria after an incubation period of 96 hours (no change in media color).

Figure 8. Citrate Test Results.

The Triple Sugar Iron test on sample D2 showed a change in the color of the medium to yellow on the surface of the agar and to black at the base of the agar, indicating the character of E. coli (Figure 9).

Note: A: TSIA media inoculated with bacteria before incubation; B: TSIA media inoculated with bacteria after an incubation period of 48 hours (a change in the color of the media to yellow on the agar surface, black at the agar base, and slightly raised agar).

Figure 9. Triple Sugar Iron Test Results.

The yellow color change on the surface of the agar in the Triple Sugar Iron test is due to the acidic conditions that arise in the media as a result of the fermentation of sugars (lactose, sucrose, and glucose). In the H2S production test on sample D2, there was a change in the color of the TSIA media and SIM media to black (Figure 10). This is due to the production of H2S by bacteria in the D2 sample. Several strains of E. coli have been reported to produce H2S, although E. coli itself does not typically produce H2S (Park et al., 2015). The motility test on the D2 sample showed changes in the SIM media after 24 hours of incubation, where turbidity occurred in the media. This indicated growth in the puncture area, which shows that the bacteria in sample D2 are motile (Leboffe et al., 2011). The overall test results of the D2 sample, referring to SNI, are summarized in the accompanying table. Supervision of the quality of refilled drinking water has been regulated by the government through Minister of Health Regulation No. 492/Menkes/Per/IV/2010 concerning Drinking Water Quality Requirements. Among the mandatory parameters contained in the Minister of Health Regulation are microbiological requirements, where the levels of E. coli and total Coliform bacteria allowed in refilled drinking water are 0/100 mL of sample (Ministry of Health, 2010). E. coli is a common bacterial flora found in the intestines of humans and animals. Pathogenic serotypes, on the other hand, can cause diarrhea via a variety of mechanisms, including Enterotoxigenic E. coli (ETEC), which can cause traveler's diarrhea, and Enteropathogenic E. coli (EPEC), which can cause diarrhea in infants (Edberg, 2000).
Figure 1. Formation of gas bubbles in the assertion test.

The data in Table 2 show that in samples D1, D3, D4, and D5 the APM/mL value was <3/mL. This indicated that no Coliform bacteria were detected in these samples (Suprihatin, 2008), while the APM/mL value in the D2 sample was 4/mL. The quality of drinking water is determined by the level of contamination by microbes: the greater the number of Coliform bacteria contained in drinking water, the worse the quality of the drinking water. Drinking water quality requirements state that the E. coli content in drinking water must be 0/100 mL (Mirza, 2014).

Figure 4. Gram staining results of sample D2.

Note: A: Methyl Red-Voges Proskauer (MR-VP) media containing bacteria before incubation (light yellow color); B: MR-VP media that has been inoculated and dripped with Barritt A and Barritt B reagents (the color of the culture medium changes to brown).

Table 1. The results of the presumptive test for incubation periods of 24 and 48 hours.

Table 4. The test of the entire set of samples D1-D5.
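As a compact summary of the quantitative and biochemical readouts described above, here is a small, illustrative Python sketch. The Thomas approximation for the MPN index and the textbook IMViC pattern for E. coli (indole +, methyl red +, Voges-Proskauer -, citrate -) are standard; the tube volumes and helper names are hypothetical and are not taken from this study, which read its APM values from the standard MPN tables.

import math

def mpn_thomas(positive_tubes, ml_in_negative_tubes, ml_in_all_tubes):
    """Thomas approximation of the Most Probable Number per 100 mL."""
    return positive_tubes * 100.0 / math.sqrt(ml_in_negative_tubes * ml_in_all_tubes)

# Hypothetical example: a 3-tube series at 10, 1 and 0.1 mL per tube (33.3 mL total),
# with a single positive 10 mL tube, so the negative tubes hold 23.3 mL of sample.
print(round(mpn_thomas(1, 23.3, 33.3), 1))   # ~3.6 per 100 mL, close to the table value for 1-0-0

# Typical IMViC pattern expected for E. coli.
ECOLI_IMVIC = {"indole": True, "methyl_red": True, "voges_proskauer": False, "citrate": False}

def consistent_with_e_coli(results):
    """True if an IMViC result dictionary matches the typical E. coli pattern."""
    return all(results.get(test) == expected for test, expected in ECOLI_IMVIC.items())

# Results reported for sample D2 in the text above.
d2 = {"indole": True, "methyl_red": True, "voges_proskauer": False, "citrate": False}
print(consistent_with_e_coli(d2))            # True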
2023-05-05T15:04:03.927Z
2022-06-27T00:00:00.000
{ "year": 2022, "sha1": "b4db716d7ba499fcd3f9d782cd9003662a4f50a9", "oa_license": "CCBYSA", "oa_url": "https://journal.uinjkt.ac.id/index.php/pbsj/article/download/24699/11911", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "0fd5df97c2311f53aa94ef084de69c8915d62bb1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
213787866
pes2o/s2orc
v3-fos-license
Formation, Kinetics and Control Strategies of NOx Emission in Hydrogen Fueled IC Engine

The increasing demand for environmental safety has led to the development of unconventional fuels. Though the use of unconventional fuels such as hydrogen leads to low emissions, it is associated with shortcomings such as backfire, knocking, and NOx emission. This paper gives a comprehensive overview of the phenomena of NOx formation, its effect on the atmosphere, a comparison between the NOx emission of hydrogen-based IC engines and conventional-fuel IC engines, its control techniques and their effectiveness. The future research direction for hydrogen-fueled IC engines is also presented. Keywords: Hydrogen; NOx; IC engine.

INTRODUCTION

With the increase in the demand for automobiles there is a parallel growth of the emission problem and of the scarcity of conventional fuels like petrol and diesel. Unconventional fuels like biogas, biodiesel and hydrogen have given hope to overcome this, but for a long time the development of these unconventional fuels was confined to theoretical and laboratory research. In recent years, they have been demonstrated to have a clear future ahead with slight modifications and the use of various control techniques. Use of hydrogen as the main fuel is not entirely a new concept. In the 1930s about a thousand engines were converted to hydrogen operation [1] in England and Germany, but after that hydrogen was only used in laboratory research owing to a lack of interest from companies. In the late 20th and early 21st centuries the application of hydrogen again came into the picture owing to environmental effects and the scarcity of gasoline and diesel fuel. The application and practical significance of the development of hydrogen energy are dealt with in [2], [3]. With the increase in practical applications of hydrogen as a fuel, problems such as knocking, misfire, loud noise and backfire were noticed.

A. Hydrogen as fuel in IC engines

Hydrogen has properties such as a wide flammability range and low ignition energy (as stated in Table 1) that make it useful in SI engines [4], [5]. But due to its high self-ignition temperature it cannot be ignited by compression alone. However, by means of an external ignition source such as a glow plug [6], [7], hydrogen can also be used as a fuel for CI engines. Hydrogen can be used as the fuel in small-power diesel engines in the agricultural sector [8]. Drawbacks like misfire, backfire and knocking significantly diminish the engine performance. Use of a timed manifold injection system [39], port injection, a flame trap, or a cold spark plug [7] are possible solutions to these problems. Hydrogen IC engines emit very low hydrocarbon (HC) emissions, except for the HC emission caused by the engine lubrication oil and by the diesel used in the C.I. engine. But they have the drawback of a high rate of NOx emission compared to gasoline [9].

II. NOX EMISSION

The compounds NO (nitric oxide), NO2 (nitrogen dioxide) and N2O (nitrous oxide) are collectively known as NOx. These are among the primary pollutants produced by vehicles. Since these oxides are rapidly transformed once released into the air, they are collectively termed NOx. Formation of NOx normally takes place at high temperatures of about 2800 K. Thus, the main production of NOx takes place in IC engines, as they develop a large amount of heat during the power stroke. Nitrogen oxides are mainly produced by the reaction taking place inside the cylinder between nitrogen and oxygen.
In gasoline- or diesel-fueled engines, the main source of nitrogen is the carbon compounds that contain nitrogen in their structures, such as amines, pyridines and quinoline (0.5-2%), with chemical structures as given in Fig. 1. These compounds, if present in the fuel, produce nitrogen which reacts with oxygen to produce oxides of nitrogen. In a hydrogen-fueled engine, however, although there are no hydrocarbon fuels containing nitrogen (except the engine oil), the major amount of nitrogen comes from the air itself, which at high temperature is likely to give rise to nitrogen oxides.

A. Effect of NOx emission

Pollutants like NOx affect the atmosphere mainly in the form of acid rain, ozone formation, and smog formation. When NOx combines with rain, it is converted into dangerous acid rain. Formation of ozone molecules from reactions with volatile organic compounds leads to lung diseases and other health-related issues. In the stratosphere, the depletion of the ozone layer that absorbs ultraviolet rays is promoted by oxides of nitrogen. Different toxic compounds like nitrosamines and nitroarenes are formed by reactions between NOx and organic compounds; these lead to biological mutation. According to a report [65], India ranks 4th among 128 countries in NOx emission, with 300,680 thousand metric tons of CO2 equivalent, compared to 456,210 thousand metric tons of CO2 equivalent for the United States, which is 52% higher than that of India. Fig. 2 provides a graphical representation of the amount of NOx emission [66]. Chaichan and Abass [9] have studied the NOx emission for different fuels like CNG, gasoline and hydrogen at a given compression ratio (CR). They found that for different CRs the NOx concentration is the highest for the hydrogen-fueled SI engine, followed by CNG, gasoline and natural gas. The results are shown in Figs. 3 and 4. Due to the shorter time for flame propagation, hydrogen burns readily, which helps in producing more NOx.

III. FORMATION OF NOX

The formation of NOx is a very complex mechanism, and many researchers have studied the kinetics of NOx formation [10], [11]. NOx formation normally takes place through five processes [12]: the thermal (Zeldovich) mechanism, the Fenimore prompt mechanism, the N2O intermediate mechanism, the NNH mechanism, and the fuel NOx mechanism. Thermal NOx is observed to be predominant compared to the other mechanisms and is produced over the high range of equivalence ratios. The Fenimore mechanism produces NOx in rich mixtures, whereas the use of lean fuel at low temperature leads to the N2O intermediate mechanism. Equations (1) and (2) are the Zeldovich mechanism for the formation of NOx, and (3) is the extended Zeldovich mechanism proposed by Lavoie et al. [13]. As all three reactions take place in the presence of the O, OH and N radicals, the fuel-burning reactions contribute significantly to them. But with the assumption made by Zeldovich that NOx formation takes place after complete burning of the fuel, the two sets of reactions can be uncoupled [11], [12]. It is assumed that, after a long time, the radical concentrations remain at their respective equilibrium and steady-state values. At that time, the approximate NO concentration can be calculated. An important justification of this assumption, as provided in [13], is that the combustion taking place inside the cylinder occurs at high pressure, leading to a very small residence time in the flame reaction zone, and that the highest temperature suitable for NOx formation is achieved when the burned gas is compressed, that is, just after the ignition.
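For reference, the standard extended Zeldovich reactions referred to as (1)-(3) are reproduced below, together with the widely quoted textbook estimate (Heywood) for the initial rate of thermal NO formation (concentrations in mol/cm^3, T in K). The numerical prefactor and activation temperature are the textbook values and may differ slightly from the constants tabulated in the original paper's Table 2.

% Extended Zeldovich mechanism and textbook initial-rate estimate for thermal NO.
\begin{align}
  \mathrm{O} + \mathrm{N_2} &\rightleftharpoons \mathrm{NO} + \mathrm{N}, \\
  \mathrm{N} + \mathrm{O_2} &\rightleftharpoons \mathrm{NO} + \mathrm{O}, \\
  \mathrm{N} + \mathrm{OH}  &\rightleftharpoons \mathrm{NO} + \mathrm{H}, \\
  \frac{d[\mathrm{NO}]}{dt} &\simeq \frac{6\times 10^{16}}{\sqrt{T}}
      \exp\!\left(-\frac{69090}{T}\right)\,[\mathrm{O_2}]_e^{1/2}\,[\mathrm{N_2}]_e .
\end{align}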
So, it is justified to decouple the reactions in order to calculate the approximate NO concentration. According to [13], the simple rate equation for the formation of NO can be written as in (4). With the stated assumption that the N-atom concentration is in steady state and all other concentrations are at their equilibrium values, equation (4) can be reduced to (5), where the respective equilibrium constants are used in (7), (8) and (9). The numerical values of these constants over the corresponding temperature range are listed in Table 2; the equilibrium constants were studied experimentally by Miller et al.

B. Prompt NOx

It was shown in [14] that nitric oxide forms rapidly from nitrogen within the flame zones of laminar premixed flames [15]. Within the flame zone the assumption of equilibrium is not always valid. A high concentration of oxygen during the reaction in the flame zones increases the NO concentration. This formation of NO due to the participation of excess O is the basis of the prompt reaction phenomenon [12]. The reactions in [16]-[25] suggest that hydrocarbon radicals react with molecular nitrogen to form amine and cyano compounds (e.g. HCN), which are then converted into their respective intermediate compounds, leaving behind NO, as described by reactions (10)-(15) [11], [12]. The steps in (10) to (15) apply for equivalence ratios less than 1.2. If the equivalence ratio exceeds 1.2, the formation of NO becomes much more complex, as different paths for the formation of NOx open up [12].

C. N2O intermediate mechanism

In the region of low temperature and low equivalence ratio, the formation of NO proceeds mainly through the N2O intermediate mechanism.

D. Fuel NOx

Formation of NO from the fuel takes place through a chain reaction in which the bound nitrogen is first transformed into HCN or NH3, followed by the reaction steps of prompt NOx [11].

E. NNH Reaction

As explained for the N2O intermediate mechanism, there are four possible reactions between H and N2O [26], as given in (19) to (22). Bozzelli suggested the formation of the NNH radical from the reaction of N2 and H, as in (23). This NNH intermediate oxidizes and gives rise to NO, as in (24). Harrington et al. [27] found that NO is formed through this mechanism in cool, fuel-rich, low-pressure, premixed hydrogen/air flames. Konnov [28], [29] also observed that the NNH mechanism is significant at all temperatures for very small residence times. At a temperature of 2100 K, thermal NOx becomes dominant after one millisecond. At a low temperature of 1500 K and a moderately high temperature of 1900 K, the NNH mechanism dominates for all residence times [28]-[30]. Miller et al. [31] further studied the chemical kinetic models and the relationship of the NNH mechanism with thermal NOx, and evaluated the heat of formation and lifetime of NNH.

F. NO2 Formation

The main cause of acid rain is the nitrogen dioxide (NO2) that forms in the atmosphere from NO. But in the engine itself, both the formation (reaction (27)) and the destruction of NO2 take place; HO2 is formed by reaction (28). HO2 formation is greater in the low-temperature region and hence increases the rate of formation of NO2 in that region. NO2 destruction is active in the high-temperature region. It has been observed that quenching at low load increases the ratio [NO2]/[NOx] compared to quenching at high load. A detailed explanation of the [NO2]/[NOx] ratio is given in [32].
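The prompt-NO, NNH and NO2 reactions referenced by number above are likewise standard. Their textbook forms, which may differ in detail from the exact set (10)-(28) of the original paper, are:

% Standard forms of the prompt-NO initiation, NNH route and NO2 formation/destruction reactions.
\begin{align}
  \mathrm{CH} + \mathrm{N_2} &\rightleftharpoons \mathrm{HCN} + \mathrm{N}
      && \text{(prompt-NO initiation)} \\
  \mathrm{N_2} + \mathrm{H} \rightleftharpoons \mathrm{NNH},\quad
  \mathrm{NNH} + \mathrm{O} &\rightleftharpoons \mathrm{NH} + \mathrm{NO}
      && \text{(NNH route)} \\
  \mathrm{NO} + \mathrm{HO_2} &\rightarrow \mathrm{NO_2} + \mathrm{OH}
      && \text{(NO$_2$ formation)} \\
  \mathrm{NO_2} + \mathrm{H} &\rightarrow \mathrm{NO} + \mathrm{OH}
      && \text{(NO$_2$ destruction)} \\
  \mathrm{H} + \mathrm{O_2} + \mathrm{M} &\rightarrow \mathrm{HO_2} + \mathrm{M}
      && \text{(HO$_2$ formation)}
\end{align}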
For a fuel containing no carbon compounds, such as hydrogen, the formation of NOx is based mainly on the kinetics of thermal NOx, i.e. the extended Zeldovich mechanism. The other mechanisms, such as the prompt, N2O intermediate and NNH mechanisms, can have a significant effect on the concentration, but they can be neglected under the appropriate equivalence ratio, temperature and pressure.

IV. FACTORS AFFECTING NOX CONCENTRATION IN HYDROGEN IC ENGINE

It is evident from the literature that the NOx amount depends mainly on factors such as temperature, residence time, compression ratio and equivalence ratio. As discussed earlier, the chemical kinetics of NOx formation is most favorable when the temperature is high, that is, when the temperature of the burned gas is high. Residence time can be defined as the time taken for the reaction to complete. The NOx concentration increases with a decrease in residence time due to an increase in flame propagation speed and reaction speed.

A. Spark timing

It has been found from the literature and experimental values that spark timing has a significant effect on the NOx emission. Advancing the timing such that combustion takes place earlier increases the peak cylinder pressure and moves the peak pressure closer to top dead centre. This increases the burned gas temperature and hence initiates the formation of more NOx. The technique of retarding the spark timing to decrease the formation of NOx has been reported in [5], [33] and [34]. Subramanian et al. [33] found that at fuel flow rates (FFR) of 0.68 kg/h and 0.76 kg/h, retardation of the spark timing helps in decreasing the NOx concentration (Fig. 5). They demonstrated that the NOx concentration can be minimized significantly for a fuel mixture of low equivalence ratio by retarding the spark timing, at the expense of brake thermal efficiency.

B. Equivalence ratio

It is observed from [13] that the NOx concentration decreases as the fuel becomes lean. This happens mainly due to the low amount of fuel compared to the oxygen, decreasing the burned gas temperature and thereby inhibiting the formation of NOx. It has also been noticed that increasing the equivalence ratio above a limit decreases the NOx concentration, as there is insufficient oxygen for the combustion, which decreases the effective burned gas temperature. Referring to Fig. 6, it is observed [33] that for an equivalence ratio of 0.55 the NOx emission is negligible, and that the concentration attains a peak of about 7800 ppm at an equivalence ratio of 0.8. Increasing the equivalence ratio further, the plot shows a decreasing trend of NOx concentration, which is similar to the observation reported in [13]. Mathur and Khajuria [37] studied the relationship between the NO emission of a hydrogen-fueled SI engine and speed at four different compression ratios. Das et al. [38], [39] also studied the effect of different CRs on the NOx concentration at a constant speed of 1600 rpm in an SI engine. From Fig. 7 it is observed that with an increase in compression ratio the NOx concentration increases, due to the high-temperature conditions and high flame speed.

C. Injection timing

Different studies [40]-[49] on the effect of injection timing on the NOx concentration indicate a relatively complex behavior. Homan et al. [40], in their research work on an ASTM CFR engine with a LIRIAM scheme (late injection, rapid ignition and mixing), studied the effect of different injection timings for a constant speed of 1200 rpm and a spark timing of 5° BTC.
They found an effect of stratification [44], [45] in the mixture of hydrogen and oxygen that increases the NOx concentration by reducing the homogeneity of the mixture. The results shown in Fig. 8 provide three distinct observations: (i) early injection (before an injection timing of 40° BTC) gives additional mixing time, reducing the amount of stratification and thus the NOx emission; (ii) for late injection (later than 30° BTC), most of the hydrogen burns at local equivalence ratios greater than 0.8, thus reducing the NOx; and (iii) for the scheme designed for late injection at 6° BTC, the NOx emission for a premixed fuel mixture is still higher.

Fig. 8. Relationship between NOx emission and injection start timing (°BTC).

White et al. [43] discussed the effect of retarding the SOI (start of injection) timing on the heterogeneity of the mixture and its different consequences for two ranges of equivalence ratio, one below the NOx-limited equivalence ratio and one above it. In the region of equivalence ratio below the limited value, the NOx concentration increases with increasing heterogeneity of the mixture, while the opposite trend is seen for fuel mixtures with equivalence ratios above the limited value. It is observed in [41] that for lean operation (λ > 2) late injection timing increases the NOx concentration in the exhaust. These results are similar to those reported by White et al. [43]. The burned gas fraction acts as a diluent and reduces the peak temperature of the reaction, thus inhibiting the production of NOx. Humidity [46] also leads to a reduction of NOx emission, as an increase in humidity reduces the overall temperature inside the chamber. Ghazzi et al. [47] studied the effect of combustion duration on NOx emission and found that for prolonged combustion durations there is a reduction in NOx emission. They compared combustion duration against NOx emission for different fuels such as neat hydrogen, 70% CH4 + 30% H2, pure CH4 and pure CO. The results are shown in Fig. 9. They concluded that, for low NOx emission with high power-production efficiency, a lean mixture with high flame propagation speed is suitable. The effect of compression ratio on NOx emission in a hydrogen-fueled SI engine is studied in [48]. Fig. 10 shows the relationship between compression ratio, equivalence ratio and NOx amount. For an equivalence ratio less than 0.8, it is observed that an increase in compression ratio leads to an increase in the NOx emission due to the high combustion temperature and the abundance of oxygen. But the behavior of NOx emission with compression ratio for equivalence ratios greater than 0.8 is different, as the amount of NOx decreases with an increase in compression ratio due to the decrease in the amount of available oxygen. A few researchers have investigated the effect of changes in load and speed on NOx emission. Hua et al. [64] studied the effect of changes in speed and load on thermal efficiency and NOx emission. They observed (Fig. 11) that an increase in speed leads to a decrease in NOx emission, as it decreases the temperature inside the combustion chamber.

V. CONTROL TECHNIQUES

During the progress of the automobile industry, importance has also been given to reducing the detrimental effects of automobiles. The development of biodiesel, biogas and hydrogen fuel has offered a solution to the ever increasing carbon pollutants, though these are yet to be completely removed.
In recent years researchers have been working to decrease pollutants like NOx using techniques such as EGR, spark timing adjustment and catalytic converters. In this section, the development and application of the different control techniques reported in the literature are outlined.

A. Lean Operation

One of the most effective and simple techniques used for the reduction of NOx emission is the use of a lean fuel mixture. As discussed before, there is evidence that with the use of lean hydrogen fuel a substantial reduction of NOx can be achieved. When hydrogen fuel is used with an equivalence ratio of less than 0.5, the peak temperature developed inside the cylinder decreases due to the reduced combustion, which in turn reduces the overall NOx emission. Thus, the use of lean fuel with wide-open throttle is found to be an effective technique for NOx control, but as the load on the engine increases, richer fuel mixtures (though still lean of stoichiometric) are required, due to which the peak combustion temperature increases and causes a sharp rise in the NOx amount once the equivalence ratio crosses its threshold value. At that point the NOx emission is greater than for a gasoline-operated engine. The effectiveness of a three-way catalytic converter is lower when used with the lean-burn strategy compared to other developed techniques such as EGR and water injection.

B. Effects of Diluents

Use of diluents with the intake charge has been found to be an effective method for the reduction of NOx in IC engines. Das et al. [49] compared the effect of different diluents on the emission characteristics of a hydrogen-fueled diesel engine. For this purpose, they used a single-cylinder, four-stroke, water-cooled 4 kW diesel engine that was modified to use hydrogen fuel. Neat hydrogen, hydrogen with helium, hydrogen with nitrogen, and hydrogen with water (10%, 20%, and 30% diluent, respectively) were used as fuels, and the emission characteristics were measured and compared. When helium was employed as the diluent, they observed that with an increase in helium concentration a reduction in NOx emission takes place, but at the expense of thermal efficiency and power output [50]. For nitrogen as the diluent, the NOx emission was found to be higher when the added nitrogen diluent was small in percentage (10%), but a reduction of NOx was observed for higher concentrations of nitrogen diluent (20%, 30%). The performance and power output were found to be better than when helium was used as the diluent. Thirdly, they found that the addition of water to the charge is the most suitable for the reduction of NOx compared to the other diluents, and has better performance characteristics than the other diluents. Researchers [51], [52] and [53] have used water injection as an effective technique for the reduction of NOx. Subramanian et al. [53] studied the effect of water injection on a hydrogen-fuelled SI engine and its advantages and disadvantages. They observed that it is more practical to use the water injection technique than spark timing adjustment, as the latter leads to backfiring. A reduction of 70% to 80% of NO was noticed for a maximum water flow rate of 6 kg/h. The water injection technique leads to smooth combustion and constant IMEP with no fluctuation, and is hence beneficial. The water injection technique also has higher efficiency compared to gaseous diluents. Nande et al.
[54] found that, using a single-cylinder, 4-valve, pent-roof combustion chamber SI engine, the water injection technique is more effective than the spark-retarding technique for the control of NOx emission. In their experiment water was injected at a constant pressure of 50 psi and at 360° CA after the SOI timing. Gadallah et al. [55] showed that the reduction of NOx emission depends largely on the injected water pressure and injection timing. They concluded that water injection during the later stage of the compression stroke is more effective than water injected during the expansion stroke, as it improves the thermal efficiency and reduces the NOx emission. Adnan et al. [51] studied the effect of water injection timing on NOx reduction in a hydrogen-fueled YANMAR C.I. engine that utilizes a mechanically actuated fuel injection system. Water was injected at a constant pressure of 2 bar with a start of injection in the range of 20° BTDC to 20° ATDC and an injection duration of 20° or 40° CA. They noticed that the maximum NOx reduction took place at a water injection timing of 0° CA and a duration of 40° CA.

C. Exhaust Gas Recirculation (EGR) Technique

One of the widely used techniques for the reduction of NOx without affecting the engine performance is the exhaust gas recirculation process. Normally exhaust gas contains little oxygen and consists of burned fuel gas that has both dilution and thermal effects; when added to the inlet charge, it acts as a heat sink, lowering the combustion temperature and thus decreasing the NOx concentration. EGR can lead to engine wear, clogging of the combustion chamber and a fouled air intake system. To circumvent these, necessary steps such as cooling, filtering and proper control of the exhaust gas should be carried out. The amount of EGR is expressed as a percentage,

%EGR = (mass of recirculated exhaust gas / total mass of intake charge) x 100    (29)

Safari et al. [56] used experimental data to calibrate a kinetic model using 3 different zones: unburned, flame and burned zones. They compared the reduction in NOx emission using lean burn (φ = 0.88, EGR = 8%), hot EGR (110 °C) and cooled EGR. Water is present in the hot EGR, whereas in the cooled dry strategy the EGR is cooled and the water discharged using a condensate discharger. Their findings are plotted in Fig. 12, which shows that for less than 26% EGR and φ greater than 0.65 there is a significant difference between the EGR strategies and the lean-burn strategy. The reduction in NOx emission is greater with the hot EGR strategy compared to cooled EGR. This is mainly due to the presence of water vapor, which increases the specific heat capacity, causing a reduction in the mean and maximum combustion temperature, which drives NOx emission. They also made a comparative performance study of the power output and specific NOx emission, and the results are presented in Fig. 13. They found that the engine with cooled EGR has more power output than the engine with hot EGR, as the volumetric efficiency with hot EGR is lower. Thermal efficiency is also observed to be higher for the cooled EGR technique compared to hot EGR. Their model predicted 33% and 28% of cooled EGR and hot EGR, respectively, for maximum indicated thermal efficiency. These results are also comparable to the ones reported in [57], [58] and [15]. They concluded from the kinetic model that the reduction in NOx can be achieved with a lower amount of hot EGR compared to cooled EGR, though the loss in power output is higher for hot EGR.
Heffel [59], [60] compared the effect of the lean-burn strategy to the EGR strategy using a four-cylinder, 2-liter Ford ZETEC engine fueled with hydrogen, with a compression ratio of 12:1. He used lean-burn hydrogen (14% H2, 86% air) and EGR-diluted hydrogen (14% H2, 34% air, 52% EGR) as fuels and compared the NOx emission and power output. He noticed that with an increase in EGR percentage there is a fall in power output and thermodynamic efficiency. He also noticed that the maximum torque produced with lean burn, 94 Nm, is higher than the torque produced with EGR-diluted fuel, i.e. 87 Nm. However, the reduction in NOx emission at that point was observed to be greater for the EGR-diluted fuel than for the lean-burn fuel. An important conclusion is therefore that, with low NOx emission as a constraint, the torque output of the engine running on lean-burn fuel is significantly reduced, whereas for the EGR-diluted fuel the NOx output is always low, so higher torque can be produced for all EGR conditions. It is evident from the experiment that for near-zero NOx emission the torque output with EGR-diluted fuel (87 Nm) is higher than with the lean-burn fuel (effectively 68 Nm). The comparative results are shown in Fig. 14. Verhelst [61] devised a technique to compensate the power output using supercharging in a single-cylinder engine in which EGR methods were used to limit the NOx emission. A model similar to that in [56] was developed by Kosmadakis [57] to compare the effect of EGR rate with that of lean-burn fuel.

Fig. 12. Comparison of reduction in NOx (ppm) emission with different inlet supply, such as lean burn, hot EGR and cooled EGR.

Yao et al. [4], [62] studied the effect of EGR rate on NOx emission for different hydrogen flow rates ranging from 1.75 kg/h to 2.79 kg/h. The results are shown in Fig. 15. Vudumu et al. [63] developed a computational model using a PID (proportional-integral-derivative) controller and a 15 mm throttle valve to control the rate of EGR flow, studied the variation of NOx emission with EGR rate, and compared the simulated values with experimental data. They observed (Fig. 16) an approximately linear dependency between the EGR rate and NOx emission. They also found a decrease of NOx emission from 7300 ppm to 800 ppm when the EGR rate is increased from 0 to 16.

VI. CONCLUSION

This paper has presented a comprehensive review of both the theoretical work and the experimental investigations made by various researchers on various aspects of NOx emission in hydrogen-fueled IC engines. The summary of the present study is outlined as follows. 1. The extended Zeldovich mechanism predominates in the formation of NOx in a hydrogen-operated IC engine, though other mechanisms like prompt NOx, the N2O intermediate and the NNH intermediate also cause the formation of NOx in IC engines. 2. An increasing trend in the NOx emission is observed up to an equivalence ratio of 1.1 in the H2 IC engine. 3. Adjustment of the spark timing is reported to be a possible solution to the NOx emission; it mainly shortens the period of combustion and thus reduces the peak temperature in the cylinder. 4. Injection of water and of different gases (hydrogen, helium, nitrogen) has been demonstrated to be an effective control of NOx emission. Water is found to be more effective due to its high volumetric efficiency. 5. Use of EGR is noticed to be more effective than the conventional lean-burn strategy for controlling the NOx emission.
Fuel diluted with EGR gives more power output compared to lean fuel for a limited amount of NOx emission.

VII. SCOPE OF FUTURE WORK

Use of hydrogen as the fuel for the IC engine has the main limitation of NOx emission, besides the problems of backfire, misfire and knocking. The main objective of developing an alternative fuel, that is, a reduction in emissions without any significant reduction in power output and efficiency, is yet to be achieved completely. There is still a long way to go to achieve this prime objective of reducing emissions without hampering the efficiency of the engine. Reduction of the overall NOx emission to almost zero by using a three-way catalytic converter will be a focus of future research. Different catalysts have been tested to determine their effect on the reduction of NOx emission. Recent research activities concentrate on emission control without reducing the power of hydrogen- and diesel-fueled IC engines, with SCR and turbochargers used as accessories to control the emission and power output. Simulation models and CFD models are also being developed by researchers to predict and control the emission and performance of hydrogen-based IC engines.
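To illustrate the closed-loop EGR control idea discussed in Section V (Vudumu et al. used a PID-actuated throttle valve), here is a minimal Python sketch of a proportional-integral controller driving a toy, roughly linear NOx-versus-EGR model patterned on the reported trend of about 7300 ppm at 0% EGR falling to about 800 ppm at 16% EGR. The gains, the setpoint and the plant model are illustrative assumptions, not values from that study.

# Minimal PI control loop driving EGR rate toward a NOx setpoint (illustrative only).
def pi_step(error, integral, kp=1.0e-3, ki=5.0e-4):
    """One update of a simple proportional-integral controller."""
    integral += error
    return kp * error + ki * integral, integral

def nox_ppm(egr_rate):
    """Toy plant: NOx falls roughly linearly from ~7300 ppm (0% EGR) to ~800 ppm (16% EGR)."""
    return max(800.0, 7300.0 - (7300.0 - 800.0) / 16.0 * egr_rate)

target_nox = 1500.0                            # illustrative NOx setpoint in ppm
egr, integral = 0.0, 0.0
for _ in range(2000):
    error = nox_ppm(egr) - target_nox          # positive error -> open the EGR valve further
    u, integral = pi_step(error, integral)
    egr = min(16.0, max(0.0, egr + u))         # clamp EGR rate to the modelled 0-16 range
print(round(egr, 2), round(nox_ppm(egr), 1))   # settles near ~14.3 % EGR and ~1500 ppm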
2020-03-19T19:56:55.036Z
2020-01-17T00:00:00.000
{ "year": 2020, "sha1": "d4ce9fe0eae76c2aaab598cb7df33b7a12857c7f", "oa_license": "CCBY", "oa_url": "https://www.ijert.org/research/formation-kinetics-and-control-strategies-of-nox-emission-in-hydrogen-fueled-ic-engine-IJERTV9IS010081.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "04497b8efa4170d07e7b1e84ba112d1bd9bcf50f", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Chemistry" ] }
5038214
pes2o/s2orc
v3-fos-license
Charming new physics in rare B-decays and mixing?

We conduct a systematic study of the impact of new physics in quark-level $b \to c \bar{c} s$ transitions on $B$-physics, in particular rare $B$-decays and $B$-meson lifetime observables. We find viable scenarios where a sizable effect in rare semileptonic $B$-decays can be generated, compatible with experimental indications and with a possible dependence on the dilepton invariant mass, while being consistent with constraints from radiative $B$-decay and the measured $B_s$ width difference. We show how, if the effect is generated at the weak scale or beyond, strong renormalisation-group effects can enhance the impact on semileptonic decays while leaving radiative $B$-decay largely unaffected. A good complementarity of the different $B$-physics observables implies that precise measurements of lifetime observables at LHCb may be able to confirm, refine, or rule out this scenario.

I. INTRODUCTION

Rare B decays are excellent probes of new physics at the electroweak scale and beyond, due to their strong suppression in the Standard Model (SM). Interestingly, experimental data on rare branching ratios [1,2] and angular distributions for B → K^(*) µ+µ− decay [2,3] may hint at a beyond-SM (BSM) contact interaction of the form (s̄_L γ^µ b_L)(µ̄ γ_µ µ), which would destructively interfere with the corresponding SM (effective) coupling C_9 [4][5][6], although the significance of the effect is somewhat uncertain because of form-factor uncertainties as well as uncertain long-distance virtual charm contributions [7]. However, if the BSM interpretation is correct, it requires reducing C_9 by O(20%) in magnitude. Such an effect might arise from new particles (see e.g. [8]), which might in turn be part of a more comprehensive new dynamics. Noting that in the SM about half of C_9 comes from (short-distance) virtual-charm contributions, in this article we ask whether new physics affecting the quark-level b → cc̄s transitions could cause the anomalies, affecting rare B decays through a loop. The bulk of these effects would also be captured through an effective shift ∆C_9(q²), with a possible dependence on the dilepton mass q². At the same time, such a scenario offers the exciting prospect of confirming the rare B-decay anomalies through correlated effects in hadronic B decays into charm, with "mixing" observables such as the B_s-meson width difference standing out as precisely measured [9] and under reasonable theoretical control. This is in contrast with the Z′ and leptoquark models usually considered, where correlated effects are typically restricted to other rare processes and are highly model dependent. Specific scenarios of hadronic new physics in the B widths have been considered previously [10], while the possibility of virtual-charm BSM physics in rare semileptonic decay has been raised in [11] (see also [12]). As we will show, viable scenarios exist, which can mimic a shift ∆C_9 = −O(1) while being consistent with all other observables. In particular, very strong renormalization-group effects can generate large shifts in the (low-energy) effective C_9 coupling from small b → cc̄s couplings at a high scale without conflicting with the measured B → X_s γ decay rate [13].

II. CHARMING NEW PHYSICS SCENARIO

We consider a scenario where new physics affects the b → cc̄s transitions. This could be the case in models containing new scalars or new gauge bosons, or strongly coupled new physics.
Such models will typically affect other observables, but in a model-dependent manner. For this paper, we restrict ourselves to studying the new effects induced by modified b → cc̄s couplings, leaving construction and phenomenology of concrete models for future work. We refer to this as the "charming BSM" (CBSM) scenario. As long as the mass scale M of new physics satisfies M ≫ m_B, the modifications to the b → cc̄s transitions can be accounted for through a local effective Hamiltonian, given in (1). We choose our operator basis and renormalization scheme to agree with [14] upon the substitutions d → b, s → c, u → s. The Q̃_i^c are obtained by changing all the quark chiralities. We leave a discussion of such "right-handed current" effects for future work [15] and discard the Q̃_i^c below. We split the Wilson coefficients into SM and BSM parts as in (3), where C_i^{c,SM} = 0 except for i = 1, 2, and µ is the renormalization scale.

III. RARE B DECAYS

The leading-order (LO), one-loop CBSM effects in radiative and rare semileptonic decays may be expressed through "effective" Wilson coefficient contributions ∆C_9^eff(q²) and ∆C_7^eff(q²) in an effective local Hamiltonian, where q² is the dilepton mass squared. For q² small (in particular, well below the charm resonances), ∆C_9^eff(q²) and ∆C_7^eff(q²) govern the theoretical predictions for both exclusive (B → K^(*) ℓ+ℓ−, B_s → φ ℓ+ℓ−, etc.) and inclusive B → X_s ℓ+ℓ− decay, up to O(α_s) QCD corrections and power corrections to the heavy-quark limit that we neglect in our leading-order analysis. Similarly, ∆C_7^eff(0) determines radiative B-decay rates. We will neglect the small CKM combination V*_us V_ub, implying V*_cs V_cb = −V*_ts V_tb, and focus on real (CP-conserving) values for the C_i^c. From the diagram shown in Fig. 1 (left) we then obtain the expression (5) for ∆C_9^eff(q²), with C^c_{x,y} = 3∆C_x + ∆C_y and the corresponding loop functions.

FIG. 1. Leading CBSM contributions to rare decays (left), and to the width difference ∆Γ_s and lifetime ratio τ(B_s)/τ(B_d) (right).

We note that only the four Wilson coefficients ∆C_{1...4} enter ∆C_9^eff(q²). Conversely, ∆C_7^eff(q²) is given in terms of the other six Wilson coefficients ∆C_{5...10}. The appearance of a one-loop, q²-dependent contribution to C_7^eff is a novel feature in the CBSM scenario. Numerically, the loop function a(z) equals one at q² = 0 and vanishes at q² = (2m_c)². The constant terms and the logarithm accompanying y(q², m_c) partially cancel the contribution from a(z), and they introduce a sizable dependence on the renormalization scale µ and the charm quark mass. Since a shift of ∆C_7^eff(q²) is strongly constrained by the measured B → X_s γ decay rate, we do not consider the coefficients ∆C_{5...10} in the remainder and focus on the four coefficients ∆C_{1...4}, which do not contribute to B → X_s γ at one-loop order. Higher-order contributions can be important if new physics generates ∆C_i at the weak scale or beyond, as is typically expected. In this case large logarithms ln M/m_B occur, requiring resummation. To leading-logarithmic accuracy, we find the results given in (9) and (10), where the ∆C_i are understood to be renormalized at µ = M_W and ∆C_{7,9}^eff at µ = 4.2 GeV. It is clear that ∆C_1 and ∆C_3 contribute (strongly) to rare semileptonic decay but only weakly to B → X_s γ.

IV. MIXING AND LIFETIME OBSERVABLES

A distinctive feature of the CBSM scenario is that nonzero ∆C_i affect not only radiative and rare semileptonic decays, but also tree-level hadronic b → cc̄s transitions.
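Before turning to those hadronic observables, it is useful to record the normalisation of the CBSM effective Hamiltonian referred to in Section II, whose explicit equation is not reproduced in the text above. The form below is the standard convention for such four-quark operators and is consistent with the few-TeV scale estimate quoted in Section VI C, but it should be treated as an assumption when comparing with the paper's exact equation (1).

% Conventional normalisation assumed for the CBSM effective Hamiltonian
% (operators Q_i^c in the basis of [14] with the substitutions quoted above).
\begin{equation}
  \mathcal{H}_{\mathrm{eff}}^{c\bar c s} \;=\;
  \frac{4 G_F}{\sqrt{2}}\, V_{cb} V_{cs}^{*} \sum_i C_i^{c}\, Q_i^{c}
  \;+\; \mathrm{h.c.}
\end{equation}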
While the theoretical control over exclusive b → cc̄s modes is very limited at present, the decay width difference ∆Γ_s and the lifetime ratio τ(B_s)/τ(B_d) stand out as being calculable in a heavy-quark expansion [16]; see Fig. 1 (right). For both observables, the heavy-quark expansion gives rise to an operator product expansion in terms of local ∆B = 2 (for the width difference) or ∆B = 0 (for the lifetime ratio) operators. The formalism is reviewed in [17] and applies to both SM and CBSM contributions. For the B_s width difference, we have [18] ∆Γ_s = 2 |Γ_12^{s,SM} + Γ_12^{cc}| cos φ_12^s, where the phase φ_12^s is small. Neglecting the strange-quark mass, we find the LO expression (11), with input values taken from [19]. For our numerical evaluation of Γ_12^{cc}, we split the Wilson coefficients according to (3), subtract from the LO expression (11) the pure SM contribution, and add the NLO SM expressions from [20]. In general, a modification of Γ_12^{cc} also affects the semileptonic CP asymmetries. However, since we consider CP-conserving new physics in this paper and since the corresponding experimental uncertainties are still large, the semileptonic asymmetries will not lead to an additional constraint. In a similar manner, for the lifetime ratio we find an analogous expression, where the SM contribution is taken from [21] and the CBSM part is obtained by subtracting the SM part and defining the bag parameter B_1, with values taken from [22]. We interpret the quark masses as MS-bar parameters at µ = 4.2 GeV.

FIG. 2. In each case, all Wilson coefficients are renormalized at µ = 4.2 GeV and those not corresponding to either axis are set to zero. The black dot corresponds to the SM, i.e. ∆C_i = 0. The measured central value for the width difference is shown as a brown (solid) line together with the 1σ allowed region. The lifetime ratio measurement is depicted as a green (dashed) line and band. Overlaid are contours of ∆C_9^eff(5 GeV²) = −1, −2 (black, dashed) and ∆C_9^eff(2 GeV²) = −1, −2 (red, dotted), as computed from (5), and of ∆C_9^eff = 0 (black, solid).

V. RARE DECAYS VERSUS LIFETIMES - LOW-SCALE SCENARIO

We are now in a position to confront the CBSM scenario with rare decay and mixing observables, as long as we consider renormalization scales µ ∼ m_B. Then the logarithms inside the h function entering (5) are small and our leading-order calculation should be accurate. Such a scenario is directly applicable if the mass scale M of the physics generating the ∆C_i is not too far above m_B, such that ln(M/m_B) is small. Fig. 2 (left) shows the experimental 1σ allowed regions for the width difference and lifetime ratio (from the web update of [23]) in the (∆C_1, ∆C_2) plane. The central values are attained on the brown (solid) and green (dashed) curves, respectively. The measured lifetime ratio and the width difference measurement can be simultaneously accommodated for different values of the Wilson coefficients: in the ∆C_1-∆C_2 plane, we find the SM solution, as well as a solution around ∆C_1 = −0.5 and ∆C_2 ≈ 0. In the ∆C_3-∆C_4 plane, we have a relatively broad allowed range, roughly covering the interval [−0.9, +0.7] for ∆C_3 and [−0.6, +1.1] for ∆C_4. For further conclusions, a considerably higher precision in experiment and theory is required for ∆Γ_s and τ_{Bs}/τ_{Bd}. Also shown in the plot are contour lines for the contribution to the effective semileptonic coefficient ∆C_9^eff(q²), both for q² = 2 GeV² and q² = 5 GeV². We see that sizable negative shifts are possible while respecting the measured width difference and the lifetime ratio.
For example, a shift ∆C_9^eff ∼ −1, as the data may suggest, could be achieved through ∆C_1 ∼ −0.5 alone. Such a value for ∆C_1 may well be consistent with CP-conserving exclusive b → cc̄s decay data, where no accurate theoretical predictions exist. On the other hand, ∆C_9^eff only exhibits a mild q²-dependence. Distinguishing this from possible long-distance contributions would require substantial progress on the theoretical understanding of the latter. We can also consider other Wilson coefficients, such as the pair (∆C_3, ∆C_4) (right panel in Fig. 2). A shift ∆C_9^eff ∼ −1 is equally possible and consistent with the width difference, requiring only ∆C_3 ∼ 0.5.

VI. HIGH-SCALE SCENARIO AND RGE

A. RG enhancement of ∆C_9^eff

If the CBSM operators are generated at a high scale, then large logarithms ln M/m_B appear. Their resummation is achieved by evolving the initial (matching) conditions C_i(µ_0 ∼ M) to a scale µ ∼ m_B according to the coupled renormalization-group equations (RGE), where γ_ij is the anomalous-dimension matrix. As is well known, the operators Q_i^c mix not only with Q_7 and Q_9, but also with the 4 QCD penguin operators P_{3...6} and the chromodipole operator Q_8g (defined as in [24]), which in turn mix into Q_7. Hence the index j runs over 11 operators with ∆B = −∆S = 1 flavor quantum numbers in order to account for all contributions to C_7(µ) that are proportional to ∆C_i(µ_0). Most entries of γ_ij are known at LO [14,[24][25][26][27][28][29][30]; our novel results are the entries for i = 3, 4 that determine the contributions to ∆C_7^eff and ∆C_9^eff in (9), (10). A striking feature is the large coefficients in the ∆C_9^eff case, which are O(1/α_s) in the logarithmic counting. The largest coefficients appear for ∆C_1 and ∆C_3, which at the same time practically do not mix into C_7^eff. This means that small values ∆C_1 ∼ −0.1 or ∆C_3 ∼ 0.2 can generate ∆C_9^eff(µ) ∼ −1 while having essentially no impact on the B → X_s γ decay rate. Conversely, values for ∆C_2 or ∆C_4 that lead to ∆C_9^eff ∼ −1 lead to large effects in C_7^eff and B → X_s γ.

B. Phenomenology for high NP scale

The situation in various two-parameter planes is depicted in Fig. 3, where the 1σ constraint from B → X_s γ is shown as blue, straight bands. (We implement it by splitting BR(B → X_s γ) into SM and BSM parts and employ the numerical result and theory error from [31] for the former. The experimental result is taken from the web update of [23].) The top row corresponds to Fig. 2, but contours of given ∆C_9 lie much closer to the origin. All six panels testify to the fact that the SM is consistent with all data when leaving aside the question of rare semileptonic B decays; the largest pull stems from the fact that the experimental value for τ_{Bs}/τ_{Bd} is just under 1.5 standard deviations below the SM expectation, such that the black (SM) point is less than 0.5σ outside the green area. Our main question is now: can we have a new contribution ∆C_9^eff ∼ −1 to rare semileptonic decays, while being consistent with the bounds stemming from b → sγ, ∆Γ_s and τ_{Bs}/τ_{Bd}? This is clearly possible (indicated by the yellow star in the plots) if we have a new contribution ∆C_3 ≈ 0.2, see the three plots of the ∆C_i-∆C_3 planes in Fig. 3 (right on the top row, left on the middle row and left on the lower row). In these cases, the ∆C_9^eff ∼ −1 solution is even favored compared to the SM solution.
A joint effect in ∆C 2 ≈ −0.1 and ∆C 4 ≈ 0.3 can also accommodate our desired scenario, see the right plot on the lower row, while new BSM effects in the pairs ∆C 1 , ∆C 2 and ∆C 1 , ∆C 4 alone are less favored. One could also consider three or all four ∆C i simultaneously. C. Implications for UV physics Our model-independent results are well suited to study the rare B-decay and lifetime phenomenology of ultraviolet (UV) completions of the Standard Model. Any such completion may include extra UV contributions to C 7 (M ) and C 9 (M ), correlations with other flavor observables, collider phenomenology, etc.; the details are highly model-dependent and beyond the scope of our model-independent analysis. Here we restrict ourselves to some basic sanity checks. Taking the case of ∆C 1 (M ) ∼ −0.1 corresponds to a naive ultraviolet scale This effective scale could arise in a weakly-coupled scenario from tree-level exchange of new scalar or vector mediators, or at loop level in addition from fermions; or the effective operator could arise from strongly-coupled new physics. For a tree-level exchange, Λ ∼ M/g * , where g * = √(g 1 g 2 ) is the geometric mean of the relevant couplings. For weak coupling g * ∼ 1, this then gives M ∼ 3 TeV. Particles of such mass are certainly allowed by collider searches if they do not couple (or only sufficiently weakly) to leptons and first-generation quarks. Multi-TeV weakly coupled particles are also generically not in violation of electroweak precision tests of the SM. Loop-level mediation would require mediators close to the weak scale, which may be problematic and would require a specific investigation; this is of course unsurprising given that b → ccs transitions are mediated at tree level in the SM. The same would be true in a BSM scenario that mimics the flavor suppressions in the SM (such as MFV models). Conversely, in a strongly-coupled scenario we would have M ∼ g * Λ ∼ 4πΛ ∼ 30 TeV. This is again safe from generic collider and precision constraints, and a model-specific analysis would be required to say more. Finally, as all CBSM effects are lepton-flavor-universal, they cannot on their own account for departures of the lepton flavor universality parameters R K(*) [32] from the SM values as suggested by current experimental measurements [33]. However, even if those departures are real, they may still be caused by direct UV contributions to ∆C 9 . For example, as shown in [5], a scenario with a muon-specific contribution ∆C µ 9 = −∆C µ 10 ∼ −0.6 and in addition a lepton-universal contribution ∆C 9 ∼ −0.6, which may have a CBSM origin, is perfectly consistent with all rare-B-decay data, and in fact marginally preferred. VII. PROSPECTS AND SUMMARY The preceding discussion suggests that a precise knowledge of the width difference and lifetime ratio, as well as BR(B → X s γ), has the potential to identify and discriminate between different CBSM scenarios, or rule them out altogether. This is illustrated in Fig. 4, showing contour values for future precision both in mixing and lifetime observables. In each panel, the solid (brown and green) contours correspond to the SM central values of the width difference and lifetime ratio (respectively). The spacing of the accompanying contours is such that the area between any two neighboring contours corresponds to a prospective 1σ-region, assuming a combined (theoretical and experimental) error on the lifetime ratio of 0.001 and a combined error on ∆Γ s of 5%.
The assumed future errors are ambitious but seem feasible with expected experimental and theoretical progress. Overlaid is the (current) B → X s γ constraint (blue). The figure indicates that a discrimination between the SM and the scenario where ∆C 9 ≈ −1, while BR(B → X s γ) is SM-like, is clearly possible. A crucial role is played by the lifetime ratio τ Bs /τ B d : in e.g. the ∆C 3 − ∆C 4 case a 1σ deviation of the lifetime ratio almost coincides with the ∆C 9 = −1 contour line; a further precise determination of ∆Γ s could then identify the point on this line chosen by nature. Further progress on B → X s γ in the Belle II era would provide complementary information. In summary, we have given a comprehensive, model-independent analysis of BSM effects in partonic b → ccs transitions (CBSM scenario) in the CP-conserving case, focusing on those observables that can be computed in a heavy-quark expansion. An effect in rare semileptonic B decays compatible with hints from current LHCb and B-factory data can be generated, while satisfying the B → X s γ constraint. It can originate from different combinations of b → ccs operators. The required Wilson coefficients are so small that constraints from B decays into charm are not effective, particularly if new physics enters at a high scale; then large renormalization-group enhancements are present. Likewise, there are no obvious model-independent conflicts with collider searches or electroweak precision observables. A more precise measurement of mixing observables and lifetime ratios, at a level achievable at LHCb, may be able to confirm (or rule out) the CBSM scenario, and to discriminate between different BSM couplings. Finally, all CBSM effects are lepton-flavor-universal; the current R K and R K * anomalies would either have to be mismeasurements or require additional lepton-flavor-specific UV contributions to C 9 ; such a combined scenario has been shown elsewhere to be consistent with all rare B-decay data and also presents the most generic way for UV physics to affect rare decays. With the stated caveats, our conclusions are rather model independent. It would be interesting to construct concrete UV realizations of the CBSM scenario, which almost certainly will affect other observables in a correlated, but model-dependent manner. VIII. ACKNOWLEDGMENTS We would like to thank C. Bobeth, P. Gambino, M. Gorbahn, and especially M. Misiak for discussions. This work was supported by an IPPP Associateship. S.J. and K.L. acknowledge support by STFC Consolidated Grant No. ST/L000504/1, an STFC studentship, and a Weizmann Institute "Weizmann-UK Making Connections" grant. A.L. and M.K. are supported by the STFC IPPP grant. IX. APPENDIX: TECHNICAL ASPECTS OF THE ANOMALOUS-DIMENSION CALCULATION Here we provide additional technical information regarding our results on anomalous dimensions entering in the RGE (20). Many of the elements of γ eff(0) are known [14,[25][26][27][28], except for those for i = 3, 4. The latter can be read off from the logarithmic terms in (5), and the mixing into P i follows from substituting gauge coupling and color factors in the diagram of Fig. 1 (left). This gives for i = 1, 2, 3, 4, with the mixing into C P3,5,6 vanishing. The leading mixing into C eff 7 arises at two loops [29] and is the technically most challenging aspect of this work.
Our calculation employs the 1PI (off-shell) formalism and the method of [30] for computing UV divergences, which involves an infrared-regulator mass and the appearance of a set of gauge-non-invariant counterterms. The result is Our stated results for i = 1, 2 agree with the results in [24,26], which constitutes a cross-check of our calculation. We have not obtained the 2-loop mixing of C c 3,4 into C 8g and set these anomalous dimension elements to zero. For the case of C c 1,2 where this mixing is known, the impact of neglecting γ eff(0) i8 on ∆C eff 7 (µ) is small [the only change being −0.19∆C 2 → −0.18∆C 2 in (9)]. We expect a similarly small error in the case of ∆C 3,4 .
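To make the role of the anomalous-dimension matrix in the RGE of Sec. VI A more concrete, the sketch below evolves a pair of Wilson coefficients from a high matching scale down to the b-quark mass scale at leading order. The 2x2 matrix, the initial conditions and the scales are placeholder values chosen purely for illustration; the paper's actual 11x11 mixing matrix and its entries are not reproduced in this text.

```python
# Schematic leading-order RG evolution dC_i/d ln(mu) = (alpha_s/(4*pi)) * gamma0_ji * C_j,
# with a HYPOTHETICAL 2x2 anomalous-dimension matrix (the full 11x11 matrix is not shown here).
import numpy as np
from scipy.integrate import solve_ivp

def alpha_s(mu, alpha_mz=0.1181, mz=91.19, nf=5):
    # one-loop running of the strong coupling (illustrative)
    b0 = 11.0 - 2.0 * nf / 3.0
    return alpha_mz / (1.0 + alpha_mz * b0 / (2.0 * np.pi) * np.log(mu / mz))

gamma0 = np.array([[-2.0, 6.0],
                   [ 6.0, -2.0]])      # placeholder LO anomalous dimensions

def rge(log_mu, C):
    mu = np.exp(log_mu)
    return alpha_s(mu) / (4.0 * np.pi) * (gamma0.T @ C)

C_matching = np.array([0.0, -0.1])     # hypothetical matching conditions at mu0 = M
sol = solve_ivp(rge, [np.log(3000.0), np.log(4.8)], C_matching)  # M = 3 TeV down to mu ~ m_b
print("C(m_b) ~", sol.y[:, -1])
```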
Household food security in an urban slum: Determinants and trends Introduction: As we are moving from millennium development goals to sustainable development goals, food insecurity is imposing a formidable challenge to the policymakers, especially in developing countries such as India. A survey conducted in the urban slum areas of Vellore district, 6 years back, had reported food insecurity as high as 75%. The current study was a resurvey to assess the food security status in the aforementioned area. Materials and Methods: A community‐based survey was conducted in which data were collected using a self‐administered questionnaire from 150 households, selected through multistaged cluster sampling, who had given oral consent to be a part of the survey. The prevalence of food security calculated from this study was compared with the results from a previous survey to look for any significant improvement. Results: Nearly 42.7% of the households were food secure, while 26.7% were food insecure without hunger and 30.6% were food insecure with some degree of hunger. Low socioeconomic status (odds ratio [OR]: 3.25, 95% confidence interval [CI]: 1.29–8.16; P < 0.012) and presence of debt (OR: 3.84, 95% CI: 1.90–7.73; P < 0.001) were the major risk factors for food insecurity. A comparison with the findings from the previous study has shown a statistically significant improvement in food security from 25.4% to 42.7% (Chi‐square: 27.072, df: 2, P < 0.0001). Conclusion: Although food security levels have shown marked improvement over the years, much needs to be done for India to be free from the shackles of hunger. Introduction According to United Nations, >750 million are estimated to be undernourished and almost 90 million children are undernourished and underweight. [1] Multiple studies have shown that the children in families with poor household food security are more at risk of undernourishment and stunting, when compared to children from families who have adequate levels of food security. [2,3] The aim of the current study was to reassess the burden of food insecurity in urban slums of Vellore city in the state of Tamil Nadu in South India, where very high levels of household food insecurity (75%) and hunger (61%) were reported earlier. [4] Materials and Methods The study was conducted during August-September 2014 in five urban slums of Vellore city. The slum areas are densely populated and majority of the tenements are made on encroached government land, without a proper title deed in the name of the people living in them. The predominant occupation is rolling of beedis, which is a handmade cigarette made of locally produced tobacco. Young adult men also work as unskilled laborers in the local vegetable market and also in the nearby construction sites. The unorganized nature of work makes these occupations vulnerable to exploitative practices followed by the business owners and, as a result, the wage levels are generally much below the minimum wages prescribed by the government. [5] A structured questionnaire was administered to 150 households who had given oral consent for being part of the survey and were randomly selected from five urban clusters, using multistage sampling technique. The house surgeons posted in the Department of Community Medicine collected the data. Sociodemographic and occupational characteristics of the family members were obtained. 
The socioeconomic status (SES) of the families was assessed using the modified Kuppuswamy scale 2012, which classifies households into "Upper," "Upper middle," "Lower middle," "Upper lower," and "Lower" socioeconomic strata. [6,7] Food security status of the households was assessed using a Household Food Security Survey (HFSS) questionnaire, which was developed by the United States Department of Agriculture. The validity of this survey instrument has been demonstrated worldwide, and it classifies households as "Food secure," "Food insecure without hunger," "Food insecure with hunger-moderate," or "Food insecure with hunger-severe." [8] Data entry and analysis were done using Epi-Info 7.0, free software developed by the Centers for Disease Control and Prevention, Atlanta, USA. The prevalence of various categories of food security was calculated. To measure the association of food insecurity with factors such as socioeconomic class, utilization of the public distribution system (PDS), family size, and family type, the prevalence odds ratio (OR) with 95% confidence interval (CI) was also calculated. The prevalence of food security was compared with the data obtained from the previous survey conducted and published [4] by the same department a few years back. This was done to measure any increase in prevalence, and the significance of the observed difference was analyzed using the Chi-square test for independence of two attributes. Results A total of 150 households were contacted for the survey and all the households were willing to participate. The majority of the households (64%) were nuclear families and most (93.3%) of the participants were Hindus. Even though the survey was conducted in urban slums, the proportion of huts was relatively low (18%). The mean family size was 4.64, with the maximum size being 9 and the minimum being 1; 54.7% of the respondents had a household size of 5 or more. Seventy-eight percent of the households had heads of families who were unemployed or employed as unskilled/semiskilled workers, and only 6% of the heads of families had been to college. The majority (73.3%) of the households belonged to the upper lower socioeconomic class. Almost 80% of the households had a valid ration card and 81.3% received some form of ration through the PDS. Out of the 150 households, 45% reported having household debts. The households were assessed for food security status using the HFSS questionnaire. Of the 150 surveyed, 64 (42.7%) were food secure households, while 26.7% were food insecure without hunger. A total of 30.6% of the households reported food insecurity with some degree of hunger [Table 1]. Families having household debts were at significantly higher risk of being food insecure when compared to families without debt (OR: 3.84, 95% CI: 1.90-7.73; P < 0.001). When the food security status of households was compared against SES, it was observed that food insecurity levels increased with lowering of SES. The proportion of households which had food security in upper middle, lower middle, upper lower, and lower classes was 75%, 65%, 39%, and 30%, respectively [Figure 1]. This difference between upper and lower SES was found to be statistically significant (OR: 3.25, 95% CI: 1.29-8.16; P < 0.012). No statistically significant association was observed between food insecurity and factors such as family type, family size, and PDS coverage [Table 2].
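For readers who want to reproduce the style of analysis described above, the following sketch computes a prevalence odds ratio with a Woolf (log-normal) 95% confidence interval from a 2x2 table. The cell counts are hypothetical values chosen only to be consistent with the reported totals (150 households, roughly 45% with debt, 57.3% food insecure); the actual cell counts belong to the original dataset and are not given in this text.

```python
# Sketch of the prevalence odds-ratio calculation with a Woolf 95% CI,
# using HYPOTHETICAL 2x2 counts for household debt vs. food insecurity.
import math

# rows: debt yes/no, columns: food-insecure / food-secure (hypothetical counts, n = 150)
a, b = 50, 17   # debt & insecure, debt & secure
c, d = 36, 47   # no debt & insecure, no debt & secure

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```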
The study showed a statistically significant increase in the food security status when compared with the data from the previous study conducted in the same area (Chi-square: 27.07, df: 2, P < 0.0001) [Table 3]. Prevalence of food security The prevalence of food security was found to be 42.7% (95% CI: 34.6-50.7). This is lower than the percentage of food secure households found in studies conducted in other developing nations such as Iran (59.1%) and the Philippines (65%). [9,10] The prevalence is better than the estimates from countries such as Bangladesh, Burkina Faso (27%), and Bolivia (30%), [10,11] but the tools used in the studies for measuring food security in each of these countries were different. A high degree of correlation has been found between the gross domestic product (GDP) per capita of a country and the level of food security. The Global Food Security Index, 2015, supports these findings and points out that the food security situation across regions is improving with the increase in GDP and other indices of economic productivity. [12] India enjoys a relatively higher degree of food security among the low-middle income nations and this phenomenon can be explained partially through the rapid economic growth witnessed in the late 1990s and 2000s. Even though the focus of economic reforms undertaken in the 1990s was on the service sector, the primary and secondary sectors benefitted collaterally due to a very rapid growth in the service sector. Furthermore, the impact of the "Green Revolution" on agricultural productivity lasted for a very long time, propelling the food security status of the nation. The Food and Agriculture Organization also states that more progress could have been made if agricultural reforms and restructuring of the PDS were undertaken on time. [13] Comparison with the previous study On comparison with a similar survey done in the same area 6 years back, the findings on household food security show significant improvement. The prevalence of any form of household food insecurity decreased from 74.6% to 57.3%. A more dramatic change was observed in households having food insecurity with hunger; the prevalence reduced from 61.5% to 30.6%. The PDS coverage in Tamil Nadu has remained relatively unaltered for many years now, at approximately 80% of the total population. [4] Despite this, the significant improvement in figures may be attributed to the increase in the quality of the Tamil Nadu PDS, in terms of the number of items available and the measures taken to bring down pilferage. [14] The household food security figures obtained in the present study are similar to the findings of studies done elsewhere in the state. [15] This shows that the PDS has performed consistently throughout the state. Influence of socioeconomic status and debt As found in previous studies, household food insecurity has an inverse correlation with SES assessed using objective scoring systems. [4] More than 50% of the lower SES households reported hunger associated with food insecurity while no household in the upper middle class reported any form of hunger. Another important finding was the significant association between household debt and food insecurity. Those households with debts were at higher risk of household food insecurity (OR: 3.84, 95% CI: 1.90-7.73) when compared to households with no debts. The rise of a class of moneylenders who provide faster short-term loans to those outside the banking system at interest rates as high as 100% is a major concern.
Even though the government has passed several pieces of legislation against their operations, there is a vast population still dependent on them for emergency finance. [16] Strength and limitations The Department of Community Medicine has a strong rapport with the community where the survey was conducted. This gave us good physical access to the area to collect data, which increased the robustness of the data collected. The study was intended to find out only the prevalence of food security. It was not powered enough to find out any potential risk factors. Conclusions The household food security situation in urban slums of Vellore is still precarious, but rapid strides have been made in recent times. The level of hunger associated with food insecurity has been halved over the last 6 years, but it is still at unacceptably high levels. More research is needed to find out the reasons behind the high levels of food insecurity and hunger in spite of a relatively high coverage of the universal PDS.
Neonatal Mice Spinal Cord Interneurons Sending Axons in Dorsal Roots Background: Spinal cord interneurons send their axons in the dorsal root. Their antidromic firing could modulate peripheral receptors. Thus, it could control pain, other sensory modalities, or muscle spindle activity. In this study, we assessed a staining technique to analyze whether interneurons send axons in the neonate mouse's dorsal roots. We conducted experiments in 10 Swiss-Webster mice, which ranged in age from 2 to 13 postnatal days. We dissected the spinal cord and studied it in vitro. Results: We observed interneurons in the spinal cord dorsal horn sending axons through dorsal roots. A mix of fluorochromes applied in dorsal roots marked these interneurons. They have a different morphology than motoneurons. Primary afferent depolarization in afferent terminals produces antidromic action potentials (dorsal root reflex; DRR). These reflexes appeared upon stimulation of adjacent dorsal roots. We found that in the presence of bicuculline, the DRR recorded in the L4 dorsal root evoked by L5 dorsal root stimulation was reduced. Simultaneously, the monosynaptic reflex (MR) in the L5 ventral root was not affected; nevertheless, a long-lasting after discharge appeared. The addition of 2-amino-5-phosphonovaleric acid (AP5), an antagonist of NMDA receptors, abolished the MR without changing the after discharge. Action potentials persisted in dorsal roots even in low Ca2+ concentration. Conclusions: Thus, firing interneurons could send their axons by dorsal roots. Antidromic potentials may be characteristic of the neonatal mouse, probably disappearing in adulthood. Introduction Spinal cord interneurons send their axons in the dorsal root. Their antidromic firing could modulate peripheral receptors. Spontaneous firing and occasional bursting in dorsal roots (DR) occurred after elevating the extracellular potassium concentration in isolated spinal cords of neonatal rats (1). The increase in potassium concentrations is also associated with seizure episodes. They occurred with primary afferent depolarizations and antidromic discharges of nerve impulses in DR fibers (1,2). Antidromic activity occurred in the dorsal root ganglia in chronically axotomized rats (3). They can block or affect orthodromic impulses colliding with incoming afferent volleys (1,4). This mechanism would require high firing frequencies in the antidromic discharge (5). Interneurons sending axons via DR in the spinal cord produce antidromic action potentials regulating different types of peripheral receptors (6). Ventral funiculus stimulation also evoked antidromic discharges in dorsal roots in a Petri dish brain stem-spinal cord preparation of neonatal (0-5-day old) rats. These discharges occurred by the underlying afferent terminal depolarization reaching firing threshold (7). Spontaneous interneuron activities play a critical role in the development of neuronal networks. Their discharges were conducted antidromically along the DR preceding those in the ventral root (VR) lumbar motoneurons. The action potential propagates centrally and triggers EPSPs in motoneurons. An indication of axons in dorsal roots coming from spinal cord interneurons is staining neurons with fluorochromes. Neuron Labeling We analyzed the fluorescent marker patterns in all spinal cords (n = 10) used for this study. The RDA application in the L4 dorsal root produced the red fluorescent staining in afferent fibers. It also stained first-order interneurons by RDA leakage due to lower MW.
They were marked close to terminal branches (Fig. 2A). Application of FDA (green) in dorsal root L5 only marked afferent fibers. Some interneurons were marked in red only when they were close to RDA afferent fibers (Fig. 2B, the afferent fibers are indicated by arrows). Application of a mixture of RDA and FDA in DR produced yellow staining. We only considered interneurons sending their axon by the dorsal roots when stained in yellow. We found some interneurons marked in yellow (Fig. 2C-D, indicated by arrows) at the dorsal horn or in the intermediate nucleus. To determine the interneuron localization, we marked some spinal cord DR's exclusively with FDA, and most of the afferent fibers end in the dorsal horn (Fig. 3A). In other cases, we retrogradely marked some interneurons with FDA in L5 and RDA in L4 dorsal root and localized interneurons in the dorsal horn (Fig. 3B). FDA-marked interneurons seem to indicate that they project their axons through the dorsal roots. In contrast, the RDA application in L5 and FDA in L4 dorsal root did not stain interneurons located in this region. With an FDA and RDA mixture applied in L4 and FDA mainly in L5 dorsal roots, we found marked yellow interneurons close to the intermediate nuclei region (Fig. 3C). Their morphology is different from motoneurons stained by the mixture of fluorochromes applied in the L5 ventral root (Fig. 4B). The afferent fibers arriving in the motor nucleus exhibited a bulb-like terminal when we applied the fluorochrome mixture in dorsal roots (Fig. 4A). With FDA, we marked some fibers green (Fig. 4B). We did not see any RDA leakage. The mixture applied in L5 VR stained neurons, revealing the motoneuron morphology. However, no interneurons showed marks in this ventral motor region. In addition, some marked cells resembled neurons traveling in rafts on the spinal cord dorsal surface when the fluorochrome mixture penetrated the dorsal roots (Fig. 4C). Then, they began to penetrate the deep layers of the spinal cord (Fig. 4D). In Fig. 5A-B, the spinal cord cut exhibits one and two nuclei at different stages. We found only one nucleus with small-sized neurons at P2, and we observed two nuclei at P13. We counted the number of neurons at P2 and P13 and measured the cell size (Fig. 5C-F). The graphs in Figs. 5C and D illustrate the size of all neurons stained in P2 and P13. We performed a linear regression to determine the mean value. In P2, most neurons were less than 2000 µm. In P13, the size and number of neurons in both nuclei increased; even the smallest neurons were larger than in P2 (Fig. 5E). The difference in the mean values of the two groups (P2 and P13 < average value) is greater than would be expected by chance; there is a statistically significant difference between the two groups (P < 0.05). In P13, the mean value of the size of the neurons in both nuclei was approximately 8000 µm (Fig. 4F). Discharges in the dorsal and ventral roots In 10-day-old mice (n = 4), we stimulated the L5 dorsal root to produce a monosynaptic reflex. It was recorded in the L5 ventral root, and the DRR in the L4 dorsal root. We took control recordings of the monosynaptic reflex and the dorsal and ventral reflex activity in normal aCSF (Fig. 6A). Recordings were obtained and were similar in all animals (n = 4) in these experiments. Bathing with bicuculline (10-20 µM) eliminated the DRR but not the monosynaptic reflex (Fig. 6B). Interestingly, a long latency reflex occurred after the bicuculline application.
Bicuculline has already been described as inducing locomotion episodes after rhythmic activity recorded in the ventral roots (not illustrated) (Duenas SH & Eidelberg, 1979). Similar activity has been observed in spinal cord motor neurons in the turtle in the presence of bicuculline (9). AP5 and bicuculline application decreased the MR and DRR (Fig. 6C-D); after a few minutes, they were eliminated, but not the after discharge. We recorded sporadic action potentials in DR (Fig. 6C-D). After washing out bicuculline and AP5, the normal MR and DRR were reestablished (Fig. 6E). A low Ca2+ solution was then applied; MR, DRR, and after discharge disappeared; interestingly, action potentials were observed in DR (Fig. 6F). Discussion In our experiments, spinal interneurons send axons through dorsal roots. We localized most of these interneurons close to the intermediate nucleus. They have several shapes that differ from motoneurons. In our study, we did not study dendritic arborizations nor their changes with age, as assessed in previous studies. In previous studies, Westerga & Gramsbergen observed a considerable increase in motoneuron soma size in rats, but with different distribution and arborization patterns in a developing stage, which are longer and more extensive at first in the cervical than in the lumbar region (10). These temporal and spatial differences may influence motor development in a rostrocaudal manner (11). Dendrite bundles appeared relatively late in the soleus motoneurons compared to the tibialis anterior; this is related to the fine-tuning of neuronal activity, rather than patterning of motor activity (10). These observations will be studied in neonatal mice. Developing serotoninergic motoneuron innervation is related to the postnatal development of motor function already recognized in the second postnatal week (11). In our study, we found a significant neuronal soma size increase at a similar postnatal age. Marked neurons are not of the same type or from a specific neuron group. That could be related to a different organization of the activation pattern. We found some cells traveling on the spinal cord dorsal surface. We did not know if these cells are neurons or glia. In a developmental study of kittens, the volume of the lateral cervical nucleus and the glial cells increased sixfold during a 120-day observation period, as did the volumes of myelinated axons (12). As we noticed cells traveling in rafts on the dorsal horn surface of the mouse spinal cord, further immunohistological studies could reveal the type of cells and clarify if some of them are progenitor neurons (13)(14)(15). We cannot confirm whether the recorded interneurons produce activity (action potentials) traveling antidromically in dorsal roots. However, we found antidromic activity in dorsal roots, even in bicuculline, AP5, and low calcium. In another study, 2-4 postnatal day mice presented depression curves unexplained by presynaptic activation failure (suppressed by AP5). Low calcium concentration reduced average amplitude and depression, and a higher calcium concentration increased average amplitude and depression. Increasing the bath temperature from 24 to 32 °C produced little change in amplitudes, but the depression was noticeably reduced at most frequencies (16). Therefore, these AP could be generated by these interneurons when their axons are sufficiently depolarized.
5HT, DA, and NA produced no change in the compound antidromic potentials evoked by intraspinal microstimulation, indicating that DRP depression is unrelated to direct changes in the excitability of intraspinal afferent fibers (17). Thus, antidromic activity could have an origin other than PAD, and consequently, other functions. Ephaptic interaction in afferent fibers could also produce antidromic firing (18). Antidromic spike function in dorsal roots could participate in regulating activity in the afferent inflow of information related to inflammation and pain. DRR in afferent fibers raise the hypothesis that mediated antidromic activity contributes to neurogenic inflammation (19). Sectioning the sciatic nerve of neonatal rats triggers growth of afferent fibers in the VR, and stimulation in the L5 spinal cord evoked long-latency antidromic potentials in the L5 ventral root. However, in normal rats, such potentials rarely appeared (20). Several experimental conditions, such as axotomy of sensory afferents, produced ectopic antidromic activity in their respective DRG, due to branched sensory afferent fibers (3). In our experiments, the antidromic activity in DR, even in low calcium concentration, is indicative of axons in dorsal roots. We cannot assert their functional significance or action in the neonatal mouse. It would be essential to find out whether these antidromic potentials in dorsal afferent fibers are favoring some spinal circuit formation which remains in adulthood or are only part of a developmental process. Sympathetic preganglionic neurons (PGNs) in the neonatal rat's isolated spinal cord could be synaptically activated either by dorsal root or spinal pathway stimulation. Dorsal root projections already appeared mature in the neonatal rat, and primary afferents did not appear to project directly to PGNs (21). Conclusions In our experiments, spinal interneurons send axons by dorsal roots. Thus, the AP could come from the interneurons sending axons in dorsal roots. Some spikes also occurred in ventral roots. In neonatal mice, spinal cord bipolar neurons could exist, sending axons through ventral and dorsal roots. Thus, AP could be produced by neurons with axons in ventral and dorsal roots. The presence of these interneurons at maturity and their functional role in neonatal mice should be analyzed. We used the double labeling technique, which, to our knowledge, is the first time that it has been employed to identify interneurons with axons in dorsal roots. The final location of these interneurons in adult mice spinal cords and their function will be investigated to elucidate the functional connections in adulthood. Materials And Methods The first purpose in this study was to assess the presence of spinal cord interneurons sending axons in dorsal roots. The second aim was to evaluate whether there are antidromic potentials in the neonatal mouse spinal cord dorsal roots. For studying dorsal root functionality, we also analyzed DRR in the L5 dorsal root. Likewise, we studied MR modulation produced by electrical stimulation on the L5 DR and recorded this reflex in the L5 VR. We also added bicuculline, a GABA antagonist, and the glutamate antagonist AP5 for analysing the neural transmission involved in these reflexes. Subjects We did experiments on isolated spinal cord in vitro preparations from 10 Swiss-Webster mice at 2 to 13 postnatal days. They were housed one mouse per cage at room temperature.
Experimental protocols and animal care were under the NIH guidelines (USA) and approved by the Institutional Ethics Committee. Animals were anesthetized by inhalation with methoxyflurane. When fully anesthetized, they were decapitated. After ventral laminectomy, we used a tungsten needle to perform a longitudinal hemisection and kept ventral and dorsal roots between the T6 and sacral spinal cord segments. Other researchers followed this procedure in previous studies (22)(23)(24). One hemicord was placed in a Sylgard silicone elastomer tube at the bottom of a recording chamber. The hemicord was perfused with oxygenated ACSF flowing at 10-14 ml/min. The bath solution (aCSF) inflowed through a servo-controlled heater (TC-324B, Warner Instruments) for temperature monitoring. The bath solution recirculated at all times, even during washout. In most cases, we used RDA and FDA in 50%. By mixing the markers we assured that the interneurons were marked correctly, thereby avoiding an RDA transsynaptic flow leak or insufficient FDA antidromic traveling distally to afferent fiber terminals. The lower RDA molecular weight could lead to leakage, whereas the higher molecular weight could not even travel deep enough. In some experiments, we labeled DR afferent fibers by applying FDA, RDA, or the mixture of both fluorochromes to the cut L4 or L5 or both DR's for marking the afferent fiber endings in the motor nuclei (Fig. 1B). We also retrogradely labeled motoneurons by applying RDA and FDA to the L4-L5 ventral root (n = 7). We used negative pressure to introduce the roots into the tubes, producing a tight seal, avoiding any fluorescent marker leakage. We used the markers diluted in a ten mmol/L aCSF solution, with 0.2% Triton X-100 (Sigma Chemical Co.). We employed fine suction electrodes pulled from polyethylene tubing (PE-190, Clay Adams, Parsippany, NJ). After 18-24 hours, the spinal cord was fixed by immersion in 4% PFA in a 0.1% phosphate buffer (pH 7.4) overnight. After ascending sucrose cryoprotectant concentrations, we cut the spinal cords in coronal slices on a freezing microtome. Tissue sections were placed on slides, dehydrated in ascending alcohol concentrations, cleared with xylene and covered with an antifade mounting medium (Vectashield, Vector Laboratories Inc., Burlingame, CA). We examined tissue sections with an inverted Zeiss microscope and a laser scanning confocal imaging system (LSM 510). We analyzed images containing several optical sections in the Z plane and saved them for evaluating the morphology and synaptology of interneurons, motoneurons, and afferent fibers. We reconstructed three-dimensional arrangements with Zeiss LSM 510 software. Stimulation and recording We placed the dorsal and ventral roots of segments L4 and L5 into the polyethylene suction electrodes for either stimulation or recordings. We produced the MR and DRR by stimulating the dorsal root filament at the L5 segment in the afferent fibers. We then applied ten pulse trains (0.5 ms pulse duration with 2-min intervals) ranging from 16 Hz to 0.125 Hz. We recorded the MR at the L5 ventral root segment, and the DRR at the L4 dorsal root (Fig. 1A). The Ca2+ concentration was zero in some experiments. We labeled these experiments as low calcium concentration experiments. Data Acquisition The signals obtained from the recording suction electrodes on DR and VR were amplified with CyberAmp 380 amplifiers (Axon Instruments; band 10-10 kHz) and digitized at 10 kHz with a 16-bit resolution A/D converter (National Instruments NBIO-16) and then stored in the computer.
We did data analysis off-line using NIH institute software packages. Statistical analysis. In some experiments, we measured the ventral horn neuron soma size. We studied them at 2 and 13 postnatal days (P2 and P13). We carried out a linear regression analysis to establish the average soma size value at the respective age, using the Sigma-Plot software v11. We applied normality tests (Shapiro-Wilk) to the three groups (P2, P13 < average value, and P13). We performed a t-test to compare the soma size among different groups. Declarations Ethical Approval We carried out experiments in full compliance with ethical standards approved by the NIH guidelines (USA) and approved by the Institutional Ethics Committee, according to the Mexican Official Norm (NOM-062-ZOO-1999). Consent for publication I, Judith Marcela Duenas Jimenez, hereby declare that I participated in the study and in the development of the manuscript titled. I have read the final version and give my consent for the article to be published in BMC Neuroscience. Availability of data and material The datasets in this study are available on request from the corresponding author. Competing interests I declare that I have no significant competing financial, professional, or personal interests that might have influenced the performance or presentation of the work described in this manuscript. Funding Not applicable. Author's contribution All authors contributed to the study design and performed experiments. Sergio Horacio Dueñas Jiménez developed the concept and performed the material preparation, data collection, and analysis. Luis Castillo Hernandez wrote the first draft of the manuscript. All authors commented on previous versions of the paper and approved the final manuscript. Figure 1 Mouse spinal cord drawing illustrating ventral and dorsal roots in thoracic and hemisected spinal cord lumbar segments. A) The stimulation suction electrode (SSE) was applied at the L5 dorsal segment; the recording electrodes were applied in the L4 dorsal segment for the dorsal root reflex (DRR), and the monosynaptic reflex in the L5 ventral root (MR-VR). B) For neurons with axons in the DR, fluorescent dextran amines, and a mixture of both, were added in suction electrodes in the dorsal roots L4 and/or L5 for orthograde labeling (OL). For motoneuron retrograde labeling (RL), ventral roots L4 or L5 were filled with fluorochromes. DRR and MR Recordings. DRR and MR control recordings (indicated by arrows, upper and lower traces in A). They were recorded in L4 dorsal and L5 ventral roots, respectively. A: aCSF control, B: aCSF with bicuculline (10-20 µM). C and D illustrate the MR and DRR in the presence of bicuculline plus AP5 (100 µM). Bicuculline eliminated the DRR, and long-latency reflexes were observed in the VR. MR depression began 2-4 min after applying AP5. Note that most of the MR were almost fully eliminated, but DR action potentials still appeared. E) Dorsal and ventral root reflexes recovered after drug washout. F: VR and DR recording under a low Ca2+ environment; the ventral and dorsal reflexes were eliminated, but spiking persisted in both ventral and dorsal roots (indicated by arrows). Supplementary Files This is a list of supplementary files associated with this preprint. AuthorChecklistedited.pdf
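As an illustration of the statistical analysis described in the methods (Shapiro-Wilk normality checks followed by a t-test on soma sizes at P2 and P13), the following Python sketch performs the same steps on synthetic data. The values, group sizes and the interpretation of the size units are assumptions made only for the example, not the study's measurements.

```python
# Sketch of the soma-size comparison: Shapiro-Wilk normality check, then a t-test.
# The data are SYNTHETIC and merely mimic the reported trend (P2 mostly < 2000, P13 around 8000).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p2_areas = rng.normal(loc=1500, scale=400, size=40)    # hypothetical P2 soma sizes
p13_areas = rng.normal(loc=8000, scale=1500, size=40)  # hypothetical P13 soma sizes

for name, data in [("P2", p2_areas), ("P13", p13_areas)]:
    w, p_norm = stats.shapiro(data)
    print(f"{name}: Shapiro-Wilk W = {w:.3f}, p = {p_norm:.3f}")

t, p_val = stats.ttest_ind(p2_areas, p13_areas)
print(f"t-test: t = {t:.2f}, p = {p_val:.3g}")
```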
A Cyber-Physical-Human System for One-to-Many UAS Operations: Cognitive Load Analysis The continuing development of avionics for Unmanned Aircraft Systems (UASs) is introducing higher levels of intelligence and autonomy both in the flight vehicle and in the ground mission control, allowing new promising operational concepts to emerge. One-to-Many (OTM) UAS operations is one such concept and its implementation will require significant advances in several areas, particularly in the field of Human–Machine Interfaces and Interactions (HMI2). Measuring cognitive load during OTM operations, in particular Mental Workload (MWL), is desirable as it can relieve some of the negative effects of increased automation by providing the ability to dynamically optimize avionics HMI2 to achieve an optimal sharing of tasks between the autonomous flight vehicles and the human operator. The novel Cognitive Human Machine System (CHMS) proposed in this paper is a Cyber-Physical Human (CPH) system that exploits the recent technological developments of affordable physiological sensors. This system focuses on physiological sensing and Artificial Intelligence (AI) techniques that can support a dynamic adaptation of the HMI2 in response to the operators’ cognitive state (including MWL), external/environmental conditions and mission success criteria. However, significant research gaps still exist, one of which relates to a universally valid method for determining MWL that can be applied to UAS operational scenarios. As such, in this paper we present results from a study on measuring MWL on five participants in an OTM UAS wildfire detection scenario, using Electroencephalogram (EEG) and eye tracking measurements. These physiological data are compared with a subjective measure and a task index collected from mission-specific data, which serves as an objective task performance measure. The results show statistically significant differences for all measures including the subjective, performance and physiological measures performed on the various mission phases. Additionally, a good correlation is found between the two physiological measurements and the task index. Fusing the physiological data and correlating with the task index gave the highest correlation coefficient (CC = 0.726 ± 0.14) across all participants. This demonstrates how fusing different physiological measurements can provide a more accurate representation of the operators’ MWL, whilst also allowing for increased integrity and reliability of the system. Introduction Advancements in technologies such as Artificial Intelligence (AI), sensor networks and agent-based systems are rapidly changing the operations of Unmanned Aircraft Systems (UASs) and are introducing systems with higher levels of intelligence and autonomy [1]. Particularly, system automation is becoming increasingly complex with heterogeneous sensor networks and algorithms that incorporate increasing amount of input data and with multiple objectives. A negative effect of this complexity is the human operators' loss of Situational Awareness (SA) and the increase in Mental Workload (MWL) in certain scenarios, where it is paradoxically meant to alleviate MWL [2]. A Cyber-Physical-Human (CPH) system is a particular class of Cyber-Physical Systems (CPS), which fundamentally addresses these issues. 
The implementation of a CPH system is vital as it ensures that the human maintains a central role in the operation of the system as the Human-Machine Interfaces and Interactions (HMI 2 ), intelligence and autonomy advance. The measurement of cognitive load, particularly MWL, in real-time gives CPS the ability to sense and adapt to the human operator. The proposed Cognitive Human Machine System (CHMS) is a CPH system concept that incorporates system automation support, which modulates as a function of the measured cognitive state of the human operator [3][4][5]. Among other functions, the system allows dynamic adaptation of the system Automation Level (AL) and actual command/control interfaces, while maintaining desired MWL and the highest possible level of situational awareness. This new adaptive form of HMI 2 is central to supporting the airworthiness certification and widespread operational deployment of One-to-Many (OTM) systems in the civil aviation context [6][7][8]. An important consideration for a CHMS is implementing a sensor network with different physiological measurements, such as those originating from electrical and metabolic brain activity, eye movement activity and cardiorespiratory activity. This is important as each physiological parameter observes different biological processes, and their corresponding sensors are thus sensitive to signal contamination originating from distinctly different disturbances. For instance, Electroencephalogram (EEG) electrodes are prone to internal and external artifacts such as eye blinks, movement, heartbeat artifacts and other electromagnetic interference [9], whereas blink rate and pupillometry are, among others, sensitive to ambient light stimuli [10,11]. Hence, in a CHMS the monitoring of multiple parameters in a sensor network ensures the integrity of the system [3]. Such a sensor network is also natively suited to exploit data fusion of the physiological measurements to increase the overall accuracy and reliability of the human operators' estimated MWL. The disturbances mentioned above additionally mean that it is challenging to identify the true signal of interest from the noise. As such, the comparison with other MWL measures, such as subjective questionnaires and objective task performance measures, is important for cross referencing with the physiological measures, in order to verify that they are correctly and accurately measuring MWL. Moreover, additional MWL measures are needed for potentially implementing them as labels for inference methods such as supervised Machine Learning (ML) techniques in the training/calibration phase [12]. The measurement of the physiological response and inferring cognitive states, with and without system adaptation, has been demonstrated in previous studies [12][13][14][15][16][17][18]. However, there are still considerable challenges with the implementation of such methods, where some extensive reviews have identified that measures of MWL are not universally valid for all task scenarios [19,20]. A reason for this is that the physiological responses for MWL can be scenario dependent and are thus influenced by a range of individual differences and task characteristics [20]. In this paper we present a study with two physiological sensors, including an EEG and eye tracker, as well as a secondary task performance index and a subjective questionnaire as measurements of MWL in an OTM UAS wildfire detection mission.
Here, the participants assume the role of a UAS pilot controlling multiple Unmanned Aerial Vehicles (UAVs), where the task scenario is designed to incrementally increase in difficulty throughout a 30-min mission. This study capitalizes on existing approaches for measuring MWL and proposes a multi-sensory approach, with the data fusion of the eye tracking and EEG measures. This extends the research on the CHMS concept and demonstrates the ability to measure MWL in a complex OTM UAS task scenario. As such, the contribution of this study is the relationship between the physiological and objective measures in the context of CHMS for OTM UAS operations. The contribution towards the development of a real-time measure of a human operator's MWL will support the implementation of more adaptive and intelligent forms of automation in OTM UAS operation. Background on Mental Workload (MWL) and MWL Measurements Among the various forms of cognitive load, MWL is of central importance as it influences the operators' performance and thus the system performance [21]. MWL is a complex construct and is challenging to define accurately [22]; however, MWL is assumed to be a reflection of the level of cognitive engagement and effort as an operator performs one or more tasks. Hence, a general definition of MWL is "the relationship between the function relating the mental resources demanded by a task and those resources available to be supplied by the human operator" [23]. Mental workload can thus be determined by exogenous task demands and endogenous supply of processing resources (i.e., attention and working memory). A notable distinction to make is between MWL and task load, where MWL reflects the operators' subjective experience while undergoing particular tasks under certain environments and time constraints. However, task load is the amount of work or external duties that the operator has to perform [24]. The operators' resulting MWL can thus be an outcome of the task demand and also endogenous factors such as experience, effort, stress and fatigue [25]. A significant human factor concern for complex, safety-critical aerospace systems is the prevention of suboptimal MWL such as mental underload and overload. Both are discriminated by referring to the source of error during operation, where the former relates to reduced alertness and lowered attention, while the latter refers to information overload, diverted attention and/or insufficient time required for information processing [2,21]. This relationship between MWL and operator performance can be modeled with the inverted U function, which indicates when an operator enters suboptimal workload that can lead to errors and accidents [26]. The present methods of measuring MWL generally rely on either subjective measures, performance measures or physiological measures. Subjective measures include having the operator fill out questionnaires and self-confrontation reports such as the NASA Task Load Index (NASA-TLX, [27]) and Instantaneous Self-Assessment (ISA, [28]). These measures are not in real-time and are generally collected following the completion of a task or at infrequent intervals during the experiment. Overcoming this challenge would mean interrupting the participant/operator more frequently, which would take away attention and mental resources from the primary task. Moreover, as questionnaires are self-reported, the answers are prone to bias and a peak end effect [29].
Task performance measures can be further categorized into primary task performance measures and secondary task performance measures. The task performance measures generally evaluate speed or accuracy including tracking performance, reaction time or number of errors, and can be seen as reflecting the overall effectiveness of the human-machine interaction [30]. As compared to subjective questionnaires, task performance measures can be collected at much more frequent intervals. When additional tasks are added to the demand, secondary measures or the dual task technique can be used. This could include the operator performing a primary task varying in cognitive demand, while having to fulfill a relatively low-demand secondary task, such as pressing a button immediately upon hearing a tone. Here it is assumed that as more cognitive capacity is taken up by the primary task, there is less capacity available for the secondary task [31]. Although a more widely accepted measure for MWL, secondary task performance can be disturbing as it interferes with the primary task and may not be operationally relevant [32]. Controller inputs have also been used as a potential task-based measure of the operators' cognitive load [33]. Among others, this measurement includes measuring the speed at which the operator responds to a task or the accuracy of clicking a button. However, since reaction speed and accuracy measures are usually difficult to implement for more complex tasks, a more straightforward implementation involves the rate of control inputs, or the count of control inputs within a given time. Lastly, physiological measures are derived from the operators' physiology, and include measures from two anatomically distinct structures, namely the Central Nervous System (CNS) and Peripheral Nervous System (PNS) [34]. From these categories the physiological responses of interest for passive control of the system are the involuntary reactive responses of the human operator [30]. These physiological measures have in recent years gained traction with the new technological developments and affordable prices and can allow for objective, unobtrusive and real-time measurement of MWL. Although there are numerous techniques for performing physiological response measures, the current notable ones include eye tracking measures, EEG, Functional Near Infrared Spectroscopy (fNIR), Electromyogram (EMG) and Electrocardiogram (ECG). In previous studies, the measurement of MWL has been demonstrated to modulate task load based on mental overload cases including the use of EEG measures in an Air Traffic Management (ATM) scenario [14]. Another study has contrarily modulated task load based on a mental underload case, where the difficulty presented to a pianist increased when an fNIR sensor detected that the presented material became too easy for the participant [18]. More commonly however, studies have mainly measured cognitive states in response to task load without dynamic task adaptation [13,15,17,35]. Nonetheless, the inference of cognitive states based on physiological data is still an active area of research, with the most promising avenue being the use of AI techniques including supervised Machine Learning (ML) to generate models of the users' cognitive states based on labeled data [12,13,[15][16][17]. For a more detailed review on the various physiological sensors, and the corresponding methods implemented for processing MWL measurements, see the following reference [3].
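As a small illustration of the control-input-rate measure mentioned above, the sketch below counts operator inputs in fixed time windows; the timestamps, window length and mission duration are hypothetical values used only for the example.

```python
# Minimal sketch: compute the control-input rate in fixed time windows
# from timestamped operator inputs (button presses, waypoint edits, etc.).
import numpy as np

input_times = np.array([1.2, 3.5, 3.9, 8.0, 8.1, 8.4, 15.2, 29.9])  # seconds (hypothetical)
mission_length, window = 30.0, 10.0                                 # seconds (assumed)

edges = np.arange(0.0, mission_length + window, window)
counts, _ = np.histogram(input_times, bins=edges)
rates = counts / window                                             # inputs per second
for start, rate in zip(edges[:-1], rates):
    print(f"{start:4.0f}-{start + window:4.0f} s: {rate:.2f} inputs/s")
```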
For this study, EEG and eye tracking measures were used; as such, the remaining section outlines the EEG and eye tracking methods as needed for this study. When applied in clinical use, EEG frequency bands are generally categorized into five different ranges. These include delta (δ, <4 Hz), theta (θ, 4-7 Hz), alpha (α, 8-12 Hz), beta (β, 12-30 Hz) and gamma (γ, >30 Hz) [36]. The layout of the electrode placement is standardized and follows the international 10-20 system. Previous studies have indicated that the changes in workload are observed with variations in the theta and alpha bands [35,[37][38][39][40][41][42]. More specifically, with higher workload the power in the theta band has been observed to increase at the frontal and central regions [35,37,41], while in the alpha band a decrease in power has been observed at the left and right occipital regions [41]. Additionally, previous studies have indicated that 4-6 electrodes are sufficient to achieve accurate EEG recordings of cognitive states [43]. Eye tracking features can be deduced from gaze features or pupillometry, and eye tracking is performed with either wearable or remote sensors. Gaze features further include fixation, saccade, dwell, transition and scan path, while pupillometry includes eye closure, blink rate and pupil radius [44]. In regard to gaze features, the scan path can allow for more complex features to be extracted such as visual entropy [45]. The eye tracking features correlated with the cognitive state include fixation, blink rate, saccades, pupil diameter, dwell time and visual entropy [3]. Visual entropy provides a particularly useful measure, where studies have shown that visual entropy was able to discriminate between control modes and flight phases associated with different levels of MWL [46]. This measure uses the randomness of the users' gaze patterns and, once Areas of Interest (AOI) have been defined on the Human Machine Interface (HMI), visual entropy can be simply calculated from gaze data as a single, easily interpretable value. Cognitive Human Machine System (CHMS) and Design Considerations The proposed CHMS is based on an advanced CPH architecture incorporating both adaptive interfaces and automation support, which are modified dynamically as a function of the human operators' cognitive states as well as other relevant operational/environmental observables. The counterpart of a CPH system is an Autonomous Cyber-Physical (ACP) system, which operates without the need for human intervention or control. Many of the CPS implemented today are a part of the subclass of Semi-Autonomous Cyber-Physical (S-ACP) systems that perform autonomous tasks in certain predefined conditions but require a human operator otherwise. However, the S-ACP systems are unable to dynamically adapt in response to external stimuli. Hence a CPH system addresses this as the interaction between the dynamics of the system and the cyber elements of its operation can be influenced by the human operator, and the interaction between these three elements is continuously modulated to meet specific objectives. A key feature of the CHMS, initially described in [4,5], is the real-time physiological sensing of the human operator to infer cognitive states that drive system adaptation. In its fundamental form, the CHMS framework can be depicted as a negative feedback loop as seen in Figure 1 below.
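Before turning to the feedback loop of Figure 1, a brief methodological aside: the theta- and alpha-band powers discussed above are commonly estimated by computing a power spectral density (for example with Welch's method) and integrating it over each band. The sketch below does this on a synthetic signal; the sampling rate and signal are assumptions made for illustration, and this is not necessarily the exact pipeline used in the study.

```python
# Generic band-power extraction: Welch PSD followed by integration over each band.
# The synthetic signal and the 256 Hz sampling rate are assumptions for illustration only.
import numpy as np
from scipy.signal import welch

fs = 256                                      # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1.0 / fs)                # 60 s of synthetic single-channel EEG
rng = np.random.default_rng(1)
eeg = (20e-6 * np.sin(2 * np.pi * 6 * t)      # theta-range component
       + 10e-6 * np.sin(2 * np.pi * 10 * t)   # alpha-range component
       + 5e-6 * rng.standard_normal(t.size))  # broadband noise

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])   # rectangle-rule integration

bands = {"delta": (0.5, 4), "theta": (4, 7), "alpha": (8, 12), "beta": (12, 30)}
for name, (lo, hi) in bands.items():
    print(f"{name:5s} power: {band_power(freqs, psd, lo, hi):.3e} V^2")
```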
Here, MWL is used as the reference for modulating the automation support and interface, where the resulting MWL for the human operator is a function of the task load (i.e., the number of tasks and/or task complexity) and the operators' endogenous factors (i.e., expertise, time pressure, etc.). Hence, when the workload is measured to increase or decrease beyond the specified thresholds, the adaptation module is activated to modulate the operators' task load, which can be done by changing the automation level, task scheduling and/or changing the interface. The operation of CHMS is expected to provide benefits for several aerospace areas apart from OTM UAS operations including ATM [47], Urban Traffic Management (UTM) [48] and Single Pilot Operation (SPO) [5,7]. The operation of the CHMS in all these applications will support the systems to operate at higher levels of autonomy while ensuring that the human operator maintains a central role of the system and the degree of trust with the system is maintained. The CHMS has parallels to a passive Brain Computer Interface (pBCI) [49], however CHMS further expands on pBCI by implementing other physiological parameters apart from brain signal processing and additionally incorporates external environmental/operational factors for estimating the cognitive states. The more detailed CHMS concept is depicted in Figure 2 and requires the adoption of three fundamental modules: sensing, estimation and adaptation. The sensing module includes two sensor networks including the sensors for measuring physiological and external conditions. The physiological sensors include various advanced wearable and remote sensors, such as the EEG and eye tracker. The other network includes for example sensors for measuring weather and measurements about the flight phase. The collected data is then passed to the estimation module, where the data from the networks are passed to respective inference models. This is then combined to make a final estimation of the different levels of the cognitive states. 
The estimated cognitive states are then compared with the reference cognitive states, and the deviation from these predefined references is what drives the adaptation module, which includes changing the AL, task scheduling, the interface and/or the alerting mode. These alterations thus modify what information and tasks are presented to the human operator, which again alters the cognitive states of the human operator, and the cycle then continues. 
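As an illustration of this closed loop, the following minimal sketch (in Python, with entirely hypothetical thresholds and automation levels) shows how an estimated MWL value could drive the adaptation module; it is a conceptual sketch, not the implementation used in the CHMS.

```python
# Hypothetical reference thresholds and automation levels (AL).
LOW_MWL, HIGH_MWL = 0.3, 0.7
AL_MIN, AL_MAX = 0, 3

def adapt_automation(estimated_mwl, current_al):
    """Return the new automation level given the estimated MWL."""
    if estimated_mwl > HIGH_MWL and current_al < AL_MAX:
        return current_al + 1          # workload too high: raise automation support
    if estimated_mwl < LOW_MWL and current_al > AL_MIN:
        return current_al - 1          # workload too low: hand tasks back to the operator
    return current_al                  # within the reference band: no adaptation

al = 1
for mwl_estimate in (0.2, 0.5, 0.8, 0.9):  # example MWL estimates over successive cycles
    al = adapt_automation(mwl_estimate, al)
    print(f"MWL={mwl_estimate:.1f} -> AL={al}")
```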
Before full implementation of a CHMS in future operational use, an initial training/calibration phase would need to be performed to calibrate the estimation module by generating and validating a cognitive state model of the human operator. Such a calibration phase will define the baseline and thresholds of the cognitive states, which will serve as the reference cognitive state conditions for comparison with the operationally collected and estimated data. The inference method adopted for the CHMS estimation module can include various AI methods, where supervised ML models are among the most promising approaches [12]. With such a method, however, the calibration phase should be conducted using additional objective measures, such as secondary task performance, task complexity (determined analytically prior to the experiment) and/or controller inputs, which will serve as data labels for model training/calibration. As mentioned above, the various physiological sensors and their biological processes are prone to distinctly different disturbances. Although multiple sensors are needed to improve reliability, some challenges arise with this, including different measurement performance (e.g., accuracy, resolution, etc.) and sampling frequencies of each sensor. As such, a sensor network optimisation scheme is key when designing a reliable CHMS [3]. The adoption of sensor networks is both a natural and necessary evolution to effectively exchange, synchronise and process measurement data within a customisable operational network architecture. In addition, a sensor network is natively suited to exploit data fusion of the physiological measurements to increase the overall inference accuracy and reliability of the estimation module. The remaining sections of this paper outline the materials and methods, results, discussion and conclusion. In Section 2, the materials and methods for this study are described, including details on the task scenario as well as the methods implemented for the post-processing analysis. The following section presents the results, which comprise two parts. The first part presents a statistical comparison between the mission phases (Phase 1, 2 and 3) for all the MWL measures, including subjective, performance and physiological measures. The second part of the results section provides a correlation analysis of the continuous physiological measures (EEG and eye tracking) and the continuous performance measures. Furthermore, a method for fusing the physiological measures is implemented and analyzed. Lastly, the results are discussed, before a conclusion is drawn in Section 5. Participants There were five participants that took part in the experiment, comprising four males and one female. The participants were aerospace students at Royal Melbourne Institute of Technology (RMIT) University and were selected based on their prior experience in aviation and aerospace engineering. 
None of the participants had prior experience with this OTM scenario, and as such two different familiarization sessions were conducted lasting around an hour each. All participants volunteered for the experiment and were not paid. Informed verbal consent was given prior to the experiment. The corresponding ethics approval code for this research is ASEHAPP 72-16. Experimental Procedure The experimental procedure consisted of a briefing, sensor fitting and a rest period, followed by the mission. After the mission was completed, there was a second rest period before a final debrief. The whole procedure took approximately one hour. The refresher briefing was conducted to ensure that participants were familiar with the scenario and the interface. Following that, participants were fitted with the EEG device and the EEG electrodes impedances were checked to ensure they were within acceptable levels, this was then followed by a calibration of the desk-mounted eye tracker. Once both sensors were set-up, physiological data recording started, and data was logged for 5-min while the participant rested. After the resting phase the OTM UAS wildfire scenario commenced, which consisted of three back-to-back 10-min phases designed to provide increasing levels of difficulty. At the end of the scenario, physiological data was logged for another 5-min during a post-mission resting phase. Subsequently, participants provided subjective ratings for their workload and situational awareness in each of the three phases. Mission Concept For this scenario the test subjects assume the role of a UAS ground operator tasked with coordinating the actions of multiple UAVs in a wildfire surveillance mission. The primary objective of the mission is to find and localize any wildfires within the Area of Responsibility (AOR). The secondary objectives are to firstly maximize the search area coverage, and secondly to ensure that the UAV fuel levels, as well as navigation and communication (comm) performance are within a serviceable range. Further details about the mission objectives are provided in Table 1. The sensor payload of the UAV comprises of an active sensor (lidar) and a passive sensor (Infrared (IR) camera). UAVs can be equipped with either one of the two sensors or both sensors. The lidar provides an excellent range but a narrow field of view. To operate the lidar, it must be fired towards a ground receiver to measure the CO 2 concentration of the surrounding atmosphere (i.e., the mean column concentration of CO 2 ), and areas with excessive CO 2 concentration are likely to contain wildfires. There are a limited number of ground receivers within the AOR, which constrains the search area of the lidar. On the other hand, the infrared camera possesses a smaller range but has a larger field of view. Unlike the lidar, the camera does not require the use of a ground receiver and can be used anywhere within the AOR. The AOR is divided into smaller regions called Team Areas, which can then be assigned to UAV Teams. The division of the AOR into smaller regions allows UAVs to bound from area to area, initially conducting the search in the area closest to the base before searching further out. The concept for this is illustrated in Figure 3 with the AOR denoted in white borders while the Team Areas are depicted as convex polygons of different colors. In Phase 1 of the scenario, 3 UAVs are made available to the human operator to search the area closest to the base (Team Area 1). 
After the Area has been searched, or when the mission transits to Phase 2 (whichever occurs first), the operator will direct the initial UAVs, originally in Team 1, to move to Area 2 in order to allow the new UAVs to take over the coverage of Area 1. After Area 2 has been searched, the human operator repeats the same strategy with Area 3, moving the UAVs originally in Area 2 into Area 3 and the UAVs originally in Area 1 into Area 2. UAVs assigned to search an area should be assigned to the team associated with that area (i.e., Team 1 for Area 1, Team 2 for Area 2, etc.), as the team structure allows operators to exploit some built-in automation support such as search area designation, path planning and platform allocation. For further detail on the concept of operations and task analysis see the following references [50,51]. Depending on how the scenario evolves, the MWL profile during this mission can differ from one participant to another. Nonetheless, although a simpler scenario can generate a more repeatable MWL profile, a more realistic scenario was used to evaluate the feasibility of the OTM concept and to allow for known physiological measures to be tested on a realistic application. Repeatability was maximized by carefully controlling independent variables such as the number of UAVs being controlled and the geographic extent of the AOR over each phase of the mission. Secondary Task Index A task index was used to provide an additional objective and continuous measure of MWL during the scenario. The main purpose of the task index was to assess the secondary task performance of the participant by providing a weighted count of the number of pending secondary tasks (i.e., system maintenance tasks). The number of pending tasks was calculated from the UAV flight logs as detailed in Table 2 below. Each UAV can thus have up to 6 points at any given time, indicating a high level of unsatisfactory secondary task performance. Table 2. Task index calculation for each UAV. 
Pending Secondary Tasks and Penalty:
- Poor navigation performance (accuracy above 25 m): +1
- Adequate navigation performance (accuracy between 10 and 25 m): +0.5
- Excellent navigation performance (accuracy below 10 m): +0
- Poor communication performance (comm strength below 50%): +1
- Adequate communication performance (comm strength between 50% and 70%): +0.5
- Excellent communication performance (comm strength above 70%): +0
- Critically low fuel (fuel needed to return to base less than 1.5× of fuel on board): +1
- Low fuel (fuel needed to return to base between 1.5× and 2× of fuel on board): +0.5
- Adequate fuel (fuel needed to return to base more than 2× of fuel on board): +0
- Autopilot mode in hold: +1
- Autopilot mode off: +0
- UAV not assigned into a team: +1
- UAV is assigned into a team: +0
- UAV does not have any sensors active: +1
- UAV does have sensors active: +0

Eye Tracker Equipment and Data Processing The eye tracking data was collected using the Gazepoint GP3, which is a remote sensor positioned at the base of the monitor about 65 cm away from the participant. The raw eye tracking data comprises the x and y coordinates of the gaze point and the blink rate. The system is set up to take the average x and y coordinates from the left and right pupil. If one pupil is not detected, the system takes the x and y coordinates of the remaining pupil. If both are not available, an invalid data point is recorded, which will not be included in the data analysis. To allow for real-time processing of the scenario parameters and the processing of the eye tracking measurements, all eye-tracking data was routed to a central server. Besides eye tracking data, the server also collects and processes the flight logs of each UAV, each including the position, attitude, task type, autopilot mode, automation mode and performance of the different subsystems. During the scenario, the raw eye tracking data was processed by the server to derive other real-time metrics, including dwell time on UAVs and UAV teams, attention on UAVs and UAV teams, along with UAV and team visual entropies, calculated from separate transition matrices of UAVs and UAV teams. However, visual entropy for UAVs gave the best indication of workload and was thus solely used for further analysis. The visual entropy (H) is determined from gaze transitions between different Regions of Interest (ROIs), which are typically represented in a matrix. The cells represent the number (or probability) of transitions between two interfaces. The visual entropy measures the randomness of the scanning patterns and is given by [45]:

H = -∑_{i=1}^{n} p(X_i) ∑_{j=1}^{m} p(Y_ij | X_i) log₂ p(Y_ij | X_i),     (1)

where n and m are the rows and columns of the transition matrix respectively, p(Y_ij | X_i) is the probability of fixation of the present state (i.e., fixation at region Y_ij given previous fixation at region X_i) and p(X_i) is the probability of fixation of the prior state (i.e., probability of the previous fixation). A high value of H implies high randomness in the scan path while a low value of H implies an orderly scan pattern; therefore, higher values of H indicate periods of higher workload where the operator is unable to maintain a regular scan pattern.
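A minimal sketch (in Python, with hypothetical AOI labels) of how the visual entropy above can be computed from a recorded sequence of AOI fixations; it follows the conditional-entropy form of the formula and is illustrative only.

```python
import numpy as np
from collections import Counter

def visual_entropy(fixations):
    """Gaze-transition entropy (in bits) of a sequence of AOI fixations."""
    transitions = Counter(zip(fixations[:-1], fixations[1:]))  # transition-matrix counts
    total = sum(transitions.values())
    prior_counts = Counter(fixations[:-1])                     # counts of the prior AOI X_i
    h = 0.0
    for (src, _dst), n in transitions.items():
        p_joint = n / total                                    # p(X_i) * p(Y_ij | X_i)
        p_cond = n / prior_counts[src]                         # p(Y_ij | X_i)
        h -= p_joint * np.log2(p_cond)
    return h

# Example: a short, hypothetical scan path over UAV-related AOIs.
print(visual_entropy(["UAV1", "UAV2", "UAV1", "UAV3", "UAV2", "UAV1", "UAV2"]))
```

Higher values indicate a more random scan pattern over the defined AOIs, consistent with the interpretation above.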
EEG Equipment and Data Processing For performing the EEG recordings during the experiment, the actiCAP Xpress from Brain Products GmbH was used. The EEG device utilizes low-impedance gold-plated electrodes, which are meant to optimize the connectivity, thus reducing the need for electrode gel. However, from observation it was found that electrode gel was needed to obtain a clear signal. Moreover, the cap is combined with the V-Amp amplifier and the software Brain Vision Recorder, which is used for visualizing and storing the raw EEG data. The layout of the cap follows the international 10-20 system, where 16 data electrodes were collecting data at the locations F4, Fz, F3, FC1, FC2, C3, C4, CP1, CP2, T7, T8, P3, Pz, P4, O1 and O2. The active reference electrode and passive ground electrode are placed on the earlobes of the participant. While being fitted with the EEG, electrode impedances were required to be below 5 kΩ; to achieve this, the unsatisfactory electrode was either jiggled or alcohol and/or gel was applied to the area. The resulting EEG index is as described in the equation below, where it is calculated at 5 s intervals:

EEG index = θ_{F4+C4} / α_{O1+O2},     (2)

here θ_{F4+C4} refers to the average theta power for electrode positions F4 and C4, while α_{O1+O2} refers to the average alpha power for positions O1 and O2. This was achieved by initially processing the individual channels with a bandpass filter between 0.5 and 30 Hz. A five-second sample window was then applied for each channel to obtain fixed-length signal samples, which are then preprocessed by applying linear detrending. The Power Spectral Density (PSD) of the filtered sample window is then obtained, and the respective bands are integrated to determine the band power. Once all channels have been processed, the band powers of the respective channels are averaged and then divided to derive the EEG index. After the EEG index was calculated for all the 5 s intervals, additional smoothing was performed prior to the data analysis. This was done using a lowpass filter and highlighted the predominant trends in the data. For the EEG data processing, an additional data rejection criterion was included, where data identified as outliers were removed. Here the isoutlier function in MATLAB was used, which returns true for all elements that are more than 3 standard deviations from the mean. The function was applied following the calculation of the EEG index. Among the data for all the participants, outliers were identified for one participant; here 5 outliers were detected, which were then replaced with mean values.
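The processing chain just described (band-pass filtering, 5 s windows, linear detrending, PSD estimation and the theta/alpha band-power ratio) could be sketched as follows; the sampling rate and the synthetic data are assumptions made purely for illustration, not the study's actual configuration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch, detrend

FS = 250                                    # assumed sampling rate (Hz); not stated in the text
b, a = butter(4, [0.5, 30], btype="bandpass", fs=FS)

def band_power(x, lo, hi):
    """Integrate the Welch PSD of x over the [lo, hi] Hz band."""
    f, psd = welch(x, fs=FS, nperseg=len(x))
    mask = (f >= lo) & (f <= hi)
    return np.sum(psd[mask]) * (f[1] - f[0])

def eeg_index(window):
    """window: dict of 5 s channel arrays for F4, C4, O1 and O2."""
    def power(ch, lo, hi):
        return band_power(detrend(filtfilt(b, a, window[ch])), lo, hi)
    theta = np.mean([power(ch, 4, 7) for ch in ("F4", "C4")])    # frontal/central theta
    alpha = np.mean([power(ch, 8, 12) for ch in ("O1", "O2")])   # occipital alpha
    return theta / alpha

rng = np.random.default_rng(0)
window = {ch: rng.normal(size=5 * FS) for ch in ("F4", "C4", "O1", "O2")}
print(eeg_index(window))
```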
Controller Input Processing During the scenario, the subject controlled and navigated the application by clicking on the screen with the left and right mouse buttons. The mouse clicks were logged by the central server, and the total number of controller inputs (number of left and right clicks) was counted during 2-min intervals. Additional processing was performed to discriminate between command inputs and panning/zooming inputs; however, these results are not presented here. Data Analysis For data analysis, multiple one-way Analyses of Variance (ANOVA) and the Pearson correlation coefficient were carried out on the processed data. A 5% significance level was used for all the statistical tests. ANOVA Analysis Multiple one-way Analyses of Variance were carried out to determine the statistical significance of the dependent measures in the different phases of the test scenario. The dependent measures comprised a subjective questionnaire, physiological measures and performance measures. Physiological features and task performance measures were post-processed to obtain the normalized mean values for each participant in each phase of the test, comprising five phases: Pre-rest, Phase 1, Phase 2, Phase 3 and Post-rest. Values were normalized using the data collected from all five phases of a participant's dataset and were centered to have a mean of 0 and scaled to provide a standard deviation of 1. Additionally, Tukey's test was further implemented to identify which groups are significantly different from one another. Correlation between Features To investigate the linear relationship between the features, the Pearson Correlation Coefficient (CC) was calculated from all combinations of the different measurements. Equation (3) outlines the correlation coefficient:

CC = (n ∑ x_i y_i − ∑ x_i ∑ y_i) / √( (n ∑ x_i² − (∑ x_i)²) (n ∑ y_i² − (∑ y_i)²) ),     (3)

here n is the number of data points while x and y are the two respective features being analyzed. For each participant, pairwise correlations between six features were calculated. These six features include the EEG index, visual entropy, task index, control inputs, fused physiological measure and fused objective measure. The fused physiological measure was made up of a weighted sum of the visual entropy and EEG index, while the fused objective measure was made up of a weighted sum of the task index and control inputs. Three different sets of weights were explored, including 50/50, 70/30 and 30/70. As each participant had an individual correlation coefficient value for each feature-pair, a single value was obtained by determining the mean and standard deviation of that feature-pair across all participants.
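A small illustration (in Python, using synthetic series) of the pairwise correlation and the weighted 50/50 fusion of the two physiological measures described above; the data and the min-max normalisation step are assumptions made only for the example.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
task_index = rng.normal(size=600)                       # placeholder continuous series
eeg_index = 0.6 * task_index + rng.normal(size=600)
visual_entropy = 0.5 * task_index + rng.normal(size=600)

def normalise(x):
    return (x - x.min()) / (x.max() - x.min())           # scale to [0, 1]

fused = 0.5 * normalise(eeg_index) + 0.5 * normalise(visual_entropy)   # 50/50 weighting
for name, series in (("EEG index", eeg_index),
                     ("visual entropy", visual_entropy),
                     ("fused 50/50", fused)):
    r, _ = pearsonr(task_index, series)
    print(f"CC(task index, {name}) = {r:.2f}")
```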
ANOVA Analysis The ANOVA analysis was conducted to determine if there were significant differences in the dependent measures across the different mission phases, giving an insight into the experimental design of the scenario and whether the results are in fact suitable for further analysis and implementation. The dependent measures included in the ANOVA analysis comprised subjective ratings, physiological features and performance measures. The subjective ratings included the mental workload rating and the situational awareness rating for each mission phase. The performance measures included the average task index value and controller input count across each phase, while the physiological measures included the average value of the EEG index and visual entropy in each phase. The EEG index measurement was performed during the pre- and post-mission resting stages and was thus analyzed for all five phases as well as just the three mission phases. The results of the ANOVA analysis are summarized in Table 3 below. Performing the ANOVA test for the subjective situational awareness rating demonstrated that the SA rating was significant, F(2,12) = 25.82, p = 4.49 × 10−5, see Table 3 and Figure 4b. Post hoc comparison using the Tukey HSD test showed that all 3 groups were significantly different from one another, Phase 1 (M = 9.6, SD = 0.555), Phase 2 (M = 6.2, SD = 0.555) and Phase 3 (M = 4, SD = 0.555). These results indicate that the experimental design for the mission scenario was successful in increasing the task load and mission complexity across the three mission phases, as indicated by an increasing MWL and decreasing SA. Although the subjective measures are prone to bias and the measures are infrequent, these results could serve as an additional reference for the physiological measure and are useful for comparison between the different ANOVA analyses. Task Index and Controller Input The ANOVA test performed on the task index showed it to be significant, with F(2,12) = 88.47, p = 6.56 × 10−6, see Table 3 and Figure 5a; this also supported the comparison with the subjective MWL measures, which similarly increase between the phases. ANOVA showed the effect of controller input to be significant, with F(2,12) = 22.1, p = 9.47 × 10−5, see Table 3 and Figure 5b. Post hoc comparison using the Tukey HSD test showed that Phase 1 (M = −0.594, SD = 0.110) was significantly different from both Phases 2 and 3, while controller input in Phases 2 and 3 was not significantly different from each other. The lack of significant difference between Phases 2 and 3 implies that the control input count might not be a suitable proxy for MWL at medium to high workload levels, since it tended to saturate at these stages, leading to decreased sensitivity. Physiological Measures The ANOVA test showed visual entropy to be significant, F(2,12) = 34.54, p = 1.05 × 10−5, see Table 3 and Figure 6a. Further post hoc comparison using the Tukey HSD test showed that Phase 1 (M = −0.903, SD = 0.136) was significantly different from Phases 2 and 3, while the visual entropy in Phases 2 and 3 was not significantly different from each other. Although the means were increasing in line with the subjective MWL measure and the task index measure, these were not statistically significant. One reason for the lack of statistical difference is that the visual entropy measure loses sensitivity between the medium and high workload. For the EEG index, the ANOVA analysis was performed on both the mission Phases 1, 2 and 3, as well as on all five phases, where the Pre- and Post-rest Phases were included. For the mission Phases 1-3 only, the ANOVA showed that the EEG index was significant, F(2,12) = 19.57, p = 0.0002, see Table 3 and Figure 6b. Further post hoc comparison using the Tukey HSD test showed that all three groups were significantly different from one another, Phase 1 (M = −0.340, SD = 0.108), Phase 2 (M = 0.198, SD = 0.108) and Phase 3 (M = 0.612, SD = 0.108). For this analysis, these results were the best for the physiological measures, as all three groups were significantly different from one another. 
Moreover, this is comparable with the analysis on the subjective MWL measure and the task index measure. The ANOVA test performed on the full experiment length showed that the EEG index was significant, F(4,20) = 16.44, p = 4.11 × 10−6, see Table 3 and Figure 7. Furthermore, the Tukey HSD test showed that the Pre-rest Phase (M = −1.01, SD = 0.153) was significantly different from the four other groups. Phase 1 (M = −0.34, SD = 0.153) was significantly different from Phase 3 and the Pre-rest Phase, while Phase 2 (M = 0.19, SD = 0.153) and the Post-rest Phase (M = 0.15, SD = 0.153) were only significantly different to the Pre-rest Phase. Lastly, Phase 3 (M = 0.61, SD = 0.153) was significantly different to Phase 1 and the Pre-rest Phase. For these results, the expected response would be that the Pre- and Post-resting Phases are similar (or not statistically different), while Phases 1, 2 and 3 are different. Nonetheless, with the exception of the Post-resting Phase, the means of the phases were statistically different. This can be seen when performing the ANOVA and Tukey test while excluding the Post-rest Phase. This could be a consequence of the protocol for post-mission resting not being well enough enforced. This analysis indicates that the EEG index could further discriminate between a Pre-resting Phase and the mission Phases 1-3. The ANOVA results show that controller input and visual entropy analysis can both discriminate Phase 1 from Phases 2 and 3 but failed to be statistically significant between Phases 2 and 3. 
On the other hand, the subjective MWL measure, task index measure and the EEG index measure show greater statistical significance. These three measures show a similar effect of an increasing mean across the three mission phases, strongly corroborating the three different MWL measures (subjective, task-based and physiological) and showing that the experimental results were in line with expectations. Correlation Between Features Further results include the correlation between the time series of the different features. Figure 8 plots the results for one participant and shows the comparison between the task index (blue) and the two physiological measures, visual entropy (yellow) and EEG index (red). The x axis is time in seconds, while the values are normalized between 0 and 1 for visual comparison and statistical analysis. Table 4 summarizes the correlation coefficient values of the most notable features for each participant. These include the correlation between the task index and (1) the EEG index, (2) visual entropy and (3) the fused weighted sum of the two physiological measurements (weighted 50% each). Additionally, the correlation between the two physiological measurements, EEG index and visual entropy, was compared for all participants. In line with the data rejection criteria, a section of the eye tracking data for participant 2 was excluded from the analysis, and excluding this invalid data did improve the pairwise correlation. Table 5 presents the pairwise correlation coefficient values in matrix form. The values were combined across all participants by taking the mean and standard deviation. The results indicate that there was no correlation between the control input and the other features. 
However, the mean for all the participants shows that the correlation between the task index and fused physiological feature (a 50-50 weighted sum of the EEG index and visual entropy) was highest at CC = 0.726 ± 0.14. The second highest correlation was between the task index and the visual entropy with a CC = 0.648 ± 0.19. The mean correlation between EEG index and the task index gave CC = 0.628 ± 0.17, while the mean correlation between the EEG index and visual entropy was CC = 0.561 ± 0.11. Further analysis to explore the effects of differently weighted ratios showed that when weighting the visual entropy measurement 30% and the EEG index 70% the correlation with the task index and the fused sensors gave CC = 0.710 ± 0.16. When weighting the visual entropy measurement 70% and the EEG index 30% the correlation coefficient was CC = 0.710 ± 0.14. The correlation of the time series for the different features show that no or poor correlation between the control input and other features was found. However, a good correlation was found between the task index and the fused sensor measurements as well as between the task index and the EEG index/Visual entropy. In addition, the correlation between the two physiological measurements was shown to be good. Weighting the physiological features 70/30 or 30/70 did not have much effect on the result as they remained strong. Discussion This study provided insight into the relationship between physiological and objective measures in a OTM UAS operation. However, in addition to this a number of useful insights were provided into the role of automation support in a multi-UAS context. The ground operators' main responsibilities included routine monitoring of UAV system health, analysing sensor data and strategically ensuring that resources were appropriately allocated within the AOR when planning UAV sorties or retasking individual UAVs. While the scenario was relatively manageable when participants were controlling three UAVs, they found it more challenging in the later phases when controlling more than six UAVs. Mission complexity was generally observed to scale exponentially with the number of UAVs, primarily due to the exponentially increasing number of interactions between different platforms in addition to the linearly increasing number of system monitoring and sensor analysis tasks. In this context, the automation support provided was aimed to reduce scenario complexity by taking over some of the tasks associated with managing the interactions between platforms. This was achieved by the UAV Team concept where UAVs were grouped into teams, allowing participants to stay 'on-the-loop' by managing the behaviour of UAV teams instead of remaining 'in-the-loop' by individually micromanaging each UAV. This behaviour was evident during the experiment, as participants tended to maintain better situational awareness when managing UAVs in teams. It was also observed that participants preferred to micromanage a small number of UAVs in the initial phase of the scenario but switched to team management in the latter two phases. Participants who did not make the switch to team management provided feedback that they did not trust the automation support as it was not sufficiently transparent or reliable. 
Another important observation was that even under team management mode, participants were still required to allocate significant attentional resources to micromanaging individual UAVs at specific instances in the mission (e.g., when troubleshooting system health, retasking the UAV or manually controlling the sensor to localize a fire), effectively transitioning from 'on-the-loop' command to 'in-the-loop' control. It was however observed that participants sometimes failed to assume direct control of UAVs when appropriate (e.g., when user input was required to resolve an issue with the system health), either because they were focused on another task, were overwhelmed by the amount of information/pending tasks that they overlooked the particular UAV, or because they assumed that automation support was capable of resolving the issue. As such the development towards adaptive interfaces are expected to support better transitions between 'on-the-loop' and 'in-the-loop' command, as it is able to infer the users' workload, intention and allocation of attentional resources and subsequently vary the amount of on-screen information to ensure smoother transitions. As for the statistical analysis in this study, the ANOVA and correlation coefficient were both used to highlight two different factors. The ANOVA test was performed in the initial analysis and served to determine the validity of the experimental design as well as to get an idea of the average values of each measure across the different scenario phases. Following the ANOVA, a more detailed comparison of the time series data was carried out by evaluating the pairwise correlation coefficients between the various performance and physiological measures. The results for the ANOVA analysis showed that all the measures were statistically significant, however a further Tukey test demonstrated that measures with all three scenario phases statistically differentiating from one another included the subjective responses for MWL and SA, as well as the task index and EEG index. As for the control input count, the results indicated that implementing this as an objective measure of MWL may not be a viable option. However, it can be noted that as the scenario is designed to push MWL to the limit it could be observed that the control input count saturated with high load and lost sensitivity between Phases 2 and 3. This means that further work remains to determine whether there are correlations between physiological measures and control input count to a system. As for the visual entropy, although not statistically different for all phases, the data was invalid for one participant during an extended period of the experiment. This occurred when the participant moved out of range of the camera, causing the eye tracker to lose track of the participants' pupil. Further investigating the CC results of that eye tracking measure showed that when excluding the section where the data was lost (at the start of Phase 3) and then correlating with the other measures the correlation improved. This highlights the importance of having at least two physiological sensors implemented in a CHMS system, since physiological observables can be particularly affected by noise, motion artifacts or susceptible to interference due to participant movement. Multiple sensors can additionally increase the consistency of measurements and reliability of the system. 
Performing the ANOVA test and corresponding Tukey test demonstrated firstly that the subjective workload and situational awareness ratings, which serves as the best approximation to the ground truth, was consistent with the results for the task index and the EEG index during Phases 1, 2 and 3. Although the task index and EEG index values were averaged across each of the three 10-min phases, it provided an initial assessment to determine what measures are suitable for further analysis. The correlation between the time series measures using the CC demonstrated how the various performance and physiological measures compared in a complex OTM UAS task scenario. While subjective ratings are currently the best approximation to ground truth, these can only be taken after extended periods of time (e.g., at the end of each phase). However, the actual workload and situational awareness of the participant can fluctuate significantly throughout each of the 10-min phases. For example, sudden spikes in the task index were observed at the start of each phase for most participants since this was a period where new UAVs were released. The task index was thereafter observed to decrease or stabilize and only peak when the participant experienced increased load such as when localizing fires or retasking UAVs. These fluctuations in mission difficulty within each phase cannot be captured by subjective questionnaires. When comparing the task index with the EEG index and visual entropy, a relatively high correlation was expected, as they are supposed to fundamentally measure the similar variation in MWL. The difference being that the EEG index and visual entropy are physiological measures, while the task index is a task-based performance measure. Looking at Figure 8, the graphical comparison illustrated that visual entropy correlated with the task index in certain regions where the EEG index does not, and vice versa. Showing that the two physiological measures, although both gradually increasing, respond differently to the task demand of the scenario in short timeframes. The weighted sum of the two physiological measures (a 50-50 weighted sum of the EEG index and visual entropy) demonstrated a higher correlation with the task index (CC = 0.726 ± 0.14) than each individual physiological measure. This further demonstrates the importance of having more physiological sensor measurements and fusing methods when performing measurements and estimations on MWL in a fully operational CHMS system. Different weighted sums were also explored including weighting the visual entropy measurement and the EEG index 30-70% and 70-30% respectively. However, changing the weights did not show much effect. This can potentially be improved with an optimal weighting strategy that is unique for each individual subject. The concluding results thus show that a moderate level of correlation was found across all participants between the task index and EEG index CC = 0.628 ± 0.19, as well as task index and visual entropy CC = 0.648 ± 0.17. Additionally, a fusing method demonstrated that fusing the physiological measures produced an improved and a high-level correlation CC = 0.726 ± 0.14. These results indicate that the physiological response of MWL for EEG and eye tracking are consistent with previous studies. This includes the EEGs observation of fluctuation in theta power in frontal and central regions and alpha power fluctuation in parential and occipital regions during increased mental task demand [35,37,41]. 
Similarly, visual entropy has been shown to correlate with higher mental demand [45,46]. Nonetheless, the measures of the physiological response of MWL were conducted on a new type of mission scenario. The mission specific task index was also introduced to provide an additional baseline for comparing the EEG and visual entropy measures. Henceforth, the significance of this study is the verification of established physiological response measures of MWL, including EEG and eye tracking, as well as the relationship between the physiological and objective measures in a complex OTM UAS wildfire detection scenario. The verification of a multi-sensor fusion method additionally demonstrates that the approach can improve the reliability of cognitive state measurements. Moreover, the demonstration of a highly correlated objective measure can provide useful for potential use as labels for the physiological data when implementing AI techniques such as supervised ML models. Future research includes exploring different data fusion techniques including further testing an optimal weighting strategy that is calibrated for each individual subject. Additional future research includes testing the objective performance measures (i.e., a secondary task performance measure) as labels for AI techniques such as supervised ML models. Conclusions Recent developments in avionics hardware and software for Unmanned Aircraft Systems (UASs) are introducing higher levels of intelligence and autonomy, which in turn facilitate the introduction of new advanced mission concepts such as One-to-Many (OTM) UAS operations. However, the effective implementation of OTM operations in current and likely future UAS missions will have to rely on substantial advances in the field of Human-Machine Interfaces and Interactions (HMI2). Particularly as negative effects arise with the increasingly more complex system automation, such as the human operators' loss of situational awareness and the increase/decrease in Mental Workload (MWL). The Cognitive Human Machine System (CHMS) systems presented in this paper implements an innovative Cyber-Physical-Human (CPH) system architecture that incorporates real-time adaptation in response to the mission complexity and the cognitive load (in particular MWL) of the human operator. This includes dynamic adaptation of the Automation Level (AL) and actual command/control interfaces, while maintaining stable MWL and the highest possible level of situational awareness of the human operator. Nonetheless, with physiological measurements the different methods are prone to various internal and external signal disturbances, which means that it is challenging to identify the true signal of interest from the noise. The comparison with other MWL measures, such as subjective questionnaires and objective task performance measures, are important for cross referencing with the physiological measures, in order to verify that they are correctly and accurately measuring MWL. In addition, the monitoring of multiple parameters in a sensor network is required, as well as data fusion methods, to ensure the accuracy and reliability of the MWL estimation. The additional measures are also promising for use as labels in Artificial Intelligence (AI) techniques such as supervised Machine Learning (ML). 
Although the measurement of the physiological response and inferring cognitive states (with and without system adaptation) was demonstrated in previous studies, there are still significant research gaps, one of which relates to a universally valid method for determining MWL that can be applied to any operational scenario. Henceforth, in this study we tested and analyzed physiological measures of MWL, including EEG and eye tracking, in a complex OTM UAS wildfire detection mission. Additionally, objective measures were explored, including a secondary task performance and controller inputs, in an analytical comparison with the physiological measures. Although subjective measures are the closest to a ground truth, at the moment they only provide a response at infrequent intervals during the mission and cannot capture the detailed MWL variations during the tasks without being disruptive. Lastly, a fusion approach with the physiological measures was performed and correlated with the task index. The results show that the correlation with the physiological measures and the task index were good for both physiological measures, with the strongest result when fusing the two measures. These results demonstrate the ability of measuring MWL in a complex UAS mission and will be used in further developments of the CHMS.
2020-09-27T13:05:33.243Z
2020-09-23T00:00:00.000
{ "year": 2020, "sha1": "ebb674f524fe34809d1618b81618468cd588c69d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/20/19/5467/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2fb5f45fe041eee4baec57d322167f00d2472370", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
56449004
pes2o/s2orc
v3-fos-license
Quantization of diffeomorphism invariant theories of connections with local degrees of freedom

Quantization of diffeomorphism invariant theories of connections is studied. A solution of the diffeomorphism constraints is found. The space of solutions is equipped with an inner product that is shown to satisfy the physical reality conditions. This provides, in particular, a quantization of the Husain-Kuchař model. The main results also pave the way to quantization of other diffeomorphism invariant theories such as general relativity. In the Riemannian case (i.e., signature ++++), the approach appears to contain all the necessary ingredients already. In the Lorentzian case, it will have to be combined in an appropriate fashion with a coherent state transform to incorporate complex connections. I. INTRODUCTION Keeping with the theme of the special issue, this paper will address the problem of quantization of a class of diffeomorphism invariant field theories. The class can be specified as follows. We will assume that the theory can be cast in a Hamiltonian form. The configuration variable will be a connection 1-form A^i_a on a d-dimensional ("spatial") manifold and takes values in the Lie algebra of a compact, connected Lie group. The canonically conjugate momentum, Ẽ^a_i, will be a vector field with density weight one (or, equivalently, a (d−1)-form) which takes values in the dual of the Lie algebra. The phase space Γ will thus consist of pairs (A^i_a, Ẽ^a_i) satisfying suitable regularity conditions. Finally, the gauge invariance will be ensured by the Gauss constraint and the (d-dimensional) diffeomorphism invariance, by a vector constraint, such that the entire system is of first class in Dirac's terminology. Individual theories in this class may have additional features such as specific Hamiltonians or additional constraints. In the main discussion, however, we will ignore such structures and focus only on the features listed above, which will be common to all theories in the class. To make this general setting more concrete, let us list a few illustrative examples of theories which are included in this class. The first is the Husain-Kuchař model [1], which can be thought of as general relativity without the Hamiltonian constraint. Thus, in this model, we only have the Gauss and the ("spatial") diffeomorphism constraints and the Hamiltonian is a linear combination of them. In this case, we will be able to obtain a complete quantum theory. A second example is provided by Riemannian (i.e., ++++) general relativity, cast in a Hamiltonian framework using self-dual connections. In this case, in addition to the Gauss and the diffeomorphism constraint, there is also the Hamiltonian constraint which dictates "time evolution." 
The results of this paper provide only a partial solution to the problem of quantization of this model since the Hamiltonian constraint will not be incorporated. However, as we will indicate in the last section, the general methods employed appear to be applicable also to the Hamiltonian constraint and the issue is currently being investigated. Next, one can also consider Lorentzian general relativity in terms of a spin connection and its conjugate momentum. Our results will again provide a complete solution to the Gauss and the diffeomorphism constraints. (The Hamiltonian constraint is, however, more difficult to address now. One possible approach is to pass to self-dual connection variables [2] using the coherent state transform of Ref. [3].) Finally, our class allows for Chern-Simons theories whose group is the inhomogeneous version [4] IG of a compact, connected Lie group G. This class includes Riemannian general relativity in 3 space-time dimensions. From a mathematical physics perspective, one faces two types of problems while quantizing such models. First, the underlying diffeomorphism invariance poses a non-trivial challenge: We have to face the usual field theoretic difficulties that are associated with the presence of an infinite number of degrees of freedom but now without recourse to a background space-time geometry. In particular, one must introduce new techniques to single out the quantum configuration space, construct suitable measures on it to obtain Hilbert spaces of states and regulate operators of physical interest. The second set of problems arises because of the presence of constraints. In particular, even after one has constructed a Hilbert space and regularized the constraint operators, one is left with the non-trivial task of solving the constraints to isolate the physical states and of introducing an appropriate inner product on them. This is a significant problem even for systems with only a finite number of degrees of freedom since, typically, solutions to constraints fail to lie in the initial Hilbert space. Thus, physical states do not even have a natural "home" to begin with! In theories now under consideration, these difficulties become particularly severe: Diffeomorphism invariance introduces an intrinsic non-locality and forces one to go beyond the standard techniques of local quantum field theory. Our approach to solving these problems is based on two recent developments. The first is the introduction of a new functional calculus on the space of connections modulo gauge transformations which respects the underlying diffeomorphism invariance (see Ref. [5][6][7][8][9][10][11][12]). The second is a new strategy for solving quantum constraints which naturally leads to an appropriate inner product on the physical states (see Ref. [6][7][8][9][10][11][12][13]). Together, the two developments will enable us to complete the general algebraic quantization program [17,18] for the class of systems under consideration. Thus, we will be able to solve the quantum constraints and introduce the appropriate Hilbert space structure on the resulting space of solutions. The main ideas underlying these developments can be summarized as follows. Recall first that, in gauge theories, it is natural to use the space A/G of connections modulo local gauge transformations as the classical configuration space. 
In quantum field theories, due to the presence of an infinite number of degrees of freedom, the quantum configuration space is typically an enlargement of its classical counterpart. The enlargement is non-trivial because the measures which define the scalar product tend to be concentrated on "distributional" fields which lie outside the classical configuration space. In gauge theories, if we require that the Wilson loop variables -i.e., the traces of holonomiesshould be well-defined also in the quantum theory, a canonical enlargement A/G of A/G becomes available [5]. This space can be thought of as a limit of the configuration spaces of lattice gauge theories for all possible "floating" (i.e., not necessarily rectangular) lattices. Geometric structures on configuration spaces of lattice gauge theories can therefore be used to induce geometric structures on A/G. [6,7,9,11]This enables one to introduce integral and differential calculus on A/G without reference to any background geometry. The calculus can, in turn, be used to introduce measures, Hilbert spaces of square-integrable functions and regulated operators on them. The strategy of solving quantum constraints, on the other hand, is quite general and not tied to the theories of connections. [15,19] For simplicity, consider the case when there is just one constraint, C = 0, on the classical phase space. To quantize the system, as in the standard Dirac procedure, one first ignores the constraint and constructs an auxiliary Hilbert space H aux , ensuring that the set of "elementary" real functions on the full phase space is represented by self-adjoint operators on H aux . Thus, H aux incorporates the "kinematic reality conditions". Since the classical constraint is a real function on the phase space, one represents it by a self-adjoint operatorĈ on H aux . The solutions are to be states which are annihilated bŷ C, or, alternatively, which are left invariant by the 1-parameter group U (λ) := exp iλĈ generated byĈ. A natural strategy [13,14] to obtain solutions, then, is to begin with a suitable state φ in H aux and average it over the group; formally, φ := dλU (λ) • |φ > is group invariant. The problem is that, typically,φ does not belong to H aux ; it is not normalizable. However, it often has a well-defined action on a dense subset Φ of H aux in the sense thatφ · ψ := dλ < φ|U (λ) • |ψ > is well-defined for all ψ >∈ Φ. That is,φ can be often thought of as an element of the topological dual of Φ (if Φ is equipped with a suitable topology which is finer than the one induced by H aux ). To summarize, group averaging can lead to solutions of the quantum constraint but they lie in a space Φ ′ which is larger than H aux (if, as is typically the case, zero lies in the continuous part of the spectrum ofĈ). Finally, one can introduce an Hermitian inner product on the space of the solutions simply by setting <φ 1 |φ 2 >=φ 1 · φ 2 . Thus, if one can find a dense subspace Φ in H aux (and equip it with a suitable topology) such that the group averaging procedure maps every element of Φ to a well-defined element of Φ ′ , one can extract the Hilbert space of physical states. One can show that the resulting physical Hilbert space automatically incorporates the "reality conditions" on physical observables [15,16] even when they are not known explicitly. The purpose of this paper is to use these two developments to obtain the following results for the class of models under consideration: 1. 
We will construct the quantum configuration space A/G and select the measure µ 0 on it for which L 2 (A/G, dµ 0 ) can serve as the auxiliary Hilbert space H aux , i.e., can be used to incorporate the kinematical reality conditions of the classical phase space. 2. Introduce the diffeomorphism constraints as well-defined operators on H aux and demonstrate that there are no anomalies in the quantum theory. 3. Construct a dense subspace Φ of H aux with the required properties and obtain a complete set of solutions of the diffeomorphism constraints in its topological dual Φ ′ . We will also characterize the solutions in terms of generalized knots (i.e., diffeomorphism invariance classes of certain graphs) and obtain the Hilbert spaces of physical states by introducing the inner products which ensure that real physical observables are represented by self-adjoint operators. While the main emphasis of the paper is on presenting a rigorous solution to the diffeomorphism constraint, along the way, we will summarize a number of additional results which are likely to be useful more generally. First, we will exhibit an orthonormal basis in H aux , introduced by Baez [20] drawing on spin networks considered by Rovelli and Smolin [21] (see also [22]). Second, we will present a rigorous transform that maps the states in the connection representation (i.e., in H aux ) to functions on the loop space. Furthermore, using the orthonormal basis, we will also present the inverse transform [22] from the loop representation [23][24][25] to the connection representation. Finally, in the case when d = 3 and the gauge group is SU (2), using differential calculus on A/G we will indicate how one can introduce, on H aux , regulated self-adjoint operators corresponding to areas of 2-surfaces. The spectra of these operators are discrete and provide a glimpse into the nature of quantum geometry that underlies Riemannian quantum general relativity. The plan of the paper is as follows. Sec.II contains an outline of the general quantization program. Sec.III specifies the precise class of theories considered and presents in greater detail models, mentioned above, that are encompassed by our discussion. Sec.IV recalls the structure of the quantum configuration space A/G. In Sec.V, we construct the auxiliary Hilbert space H aux and show that a complete set of real-valued functions on the classical phase space is indeed promoted to self-adjoint operators on H aux . We also present the Baez orthonormal basis and discuss the loop transform and its inverse. The diffeomorphism constraints are implemented in Sec.VI using a series of steps that handle various technical difficulties. Sec.VII summarizes the main results and puts them in a broader perspective. A number of results which clarify and supplement the main discussion are presented in appendices. Appendix A illustrates some subtleties associated with the group integration procedure in the case when the Poisson algebra of constraints is Abelian. Appendix B summarizes the projective techniques that lie at the heart of the diffeomorphism invariant functional calculus on A/G. Appendix C points out that the requirement of diffeomorphism invariance has certain technical consequences that might not have been anticipated easily. Finally, Appendix D illustrates how one can use the projective techniques to introduce welldefined operators on A/G which capture geometric notions such as areas of surfaces and volumes of regions. 
The operators can be made self-adjoint on L 2 (A/G, dµ 0 ) and have discrete spectra. These results provide a glimpse into the nature of quantum geometry. II. QUANTIZATION OUTLINE In Ref. [17][18], the Dirac quantization program for constrained systems was extended to incorporate certain peculiarities of diffeomorphism invariant theories such as general relativity. In this section, we will further refine that program using the "group averaging" techniques mentioned in Sec. I. These techniques provide a concrete method for constructing solutions to the quantum constraints and for introducing an appropriate scalar product on the space of these solutions. In the first part of this section, we will spell out the refined version of the program, and in the second, illustrate the various steps involved by applying them to three simple examples. A. Strategy Consider a classical system with first class constraints C i = 0 for which the phase space Γ is a real symplectic manifold. The proposal is to quantize this system in a series of steps. (The steps which have been modified from Ref. [17,18] are identified with a prime.) Step 1. Select a subspace S of the vector space of all smooth, complex-valued functions on Γ subject to the following conditions: a) S should be large enough so that any sufficiently regular function on the phase space can be obtained as (possibly a suitable limit of) a sum of products of elements in S. Each function in S is to be regarded as an elementary classical variable which is to have an unambiguous quantum analog. Step 2. Associate with each element F in S an abstract operator F . Construct the free associative algebra generated by these elementary quantum operators. Impose on it the canonical commutation relations, [ F , G] = ih {F, G}, and, if necessary, also a set of (anticommutation) relations that captures the algebraic identities satisfied by the elementary classical variables. Denote the resulting algebra by B aux . Step 3. On this algebra, introduce an involution operation ⋆ by requiring that if two elementary classical variables F and G are related by aux . (Recall that an involution on B aux is an anti-linear map ⋆ from B aux to itself satisfying the following three conditions for all A and B in B aux : i) These steps are the same as in Ref. [17,18]. The main idea in the remaining steps was to use the "reality conditions" -i.e., the requirement that a suitable class of classical observables be represented by self-adjoint operators-to determine the inner product on physical states. This strategy has been successful in a number of examples [18], including a model field theory that mimics several features of general relativity [26]. For the class of theories now under consideration, however, we will refine the remaining steps along the lines of Ref. [13][14][15][16]. While we will retain the idea that the classical reality conditions should determine the inner product, we will not need to explicitly display a complete set of classical observables (i.e., functions which Poisson commute with the constraints). Instead, we will work with the complete set of functions (S) on the unconstrained phase space, noting that the reality properties of such functions will determine the reality properties of the observables. The idea is then to implement the reality conditions of operators in B (⋆) aux on an auxiliary Hilbert space H aux from which the physical phase space H phys will be finally constructed. Step 4 ′ . 
Construct a linear ⋆-representation R of the abstract algebra B (⋆) aux via linear operators on an auxiliary Hilbert space H aux , i.e. such that where † denotes Hermitian conjugation with respect to the inner product in H aux . We now wish to construct the physical Hilbert space H phys , which will in general not be a subspace of H aux . We proceed as follows. Step 5 ′ a. Represent the constraints C i as self-adjoint operators C i (or, their exponentiated action, representing the finite gauge transformations, as unitary operators U i ) on H aux . This step provides a quantum form of the constraints that we will use to define observables and physical states. We will look for solutions of the constraints in terms of generalized eigenvectors of C i which will lie in the topological dual Φ ′ of some dense subspace Φ ⊂ H aux (see also Ref. [19,27]. Since Φ and Φ ′ will be used to build the physical Hilbert space, we will consider only physical operators that are well behaved with respect to Φ. Step 5 ′ b. Choose a suitable dense subspace Φ ⊂ H aux which is left invariant by the constraintsĈ i and let B (⋆) phys be the ⋆-algebra of operators on H aux which commute with the constraints C i and such that, for A ∈ B (⋆) phys , both A and A † are defined on Φ and map Φ to itself. Note that the choice of Φ is subject to two conditions: on the one hand it should be large enough so that B (⋆) phys contains a "sufficient number" of physically interesting operators, and, on the other, it should be small enough so that its topological dual Φ ′ is "sufficiently large" to serve as a home for physical states. The key idea now is to find an appropriate map η : Φ → Φ ′ such that η(φ) is a solution to the constraint for all φ ∈ Φ. (Note that the natural class of maps from Φ to Φ ′ is anti-linear (c.f., the adjoint map)). Step 5 ′ c. Find an anti-linear map η from Φ to the topological dual Φ ′ that satisfies: (i) For every φ 1 ∈ Φ, η(φ 1 ) is a solution of the constraints; i.e., for any φ 2 ∈ Φ. Here, the square brackets denote the natural action of Φ ′ on Φ. (ii) η is real and positive in the sense that, for all φ 1 , φ 2 ∈ Φ, (iii) η commutes with the action of any A ∈ B (⋆) phys in the sense that (The appearance of the adjoint on the r.h.s. of (II.4) corresponds to the anti-linearity of η.) Step 5 ′ d. The vectors ηφ span a space V phys of solutions of the constraints. We introduce an inner product on V phys through The requirement (II.3) guarantees that this inner product is well defined and that it is Hermitian and positive definite. Thus, the completion of V phys with respect to (II.5) is a 'physical' Hilbert space H phys . (Note that the positions of φ 1 and φ 2 must be opposite on the two sides of (II.5) due to the antilinear nature of η.) At this point, the reader may fear that this list of conditions on η will never be met in practice. That the new step 5 ′ may actually simplify the quantization program follows from the observation of [13,14] (and [15,16] for the case when the Poisson algebra of constraints is Abelian) that a natural candidate for such a map exists. Let us indicate, heuristically, how this can come about. Assume that the exponentiated form of all constraints C i defines the unitary action ( U ) of a group (of gauge transformations) K on H aux . Then, a natural candidate for the map η is provided by the "group averaging procedure". 
Set where dk denotes a bi-invariant measure on K (or, rather, on the orbit through |φ ), and ignore, for the moment, the issue convergence of the integral in (II.6). Then, it is easy to check that η satisfies properties (i)-(iii) in 5 ′ c. Finally, the expression (II.5) of the scalar product reduces to: Thus, it is intuitively clear that the requirements of step 5 can be met in a large class of examples. Let us return to the general program. The last step is to represent physical operators on V phys . This is straightforward because the framework provided by step 5 guarantees that H phys carries an (anti) ⋆-representation (see below) of B (⋆) phys as follows: Step 6 ′ . Operators in A ∈ B (⋆) phys have a natural action (induced by duality) on Φ ′ that leaves V phys invariant. Use this fact to induce densely defined operators A phys on H phys through A phys (ηφ) = η(Aφ). phys on H aux descend to the physical Hilbert space. We conclude this subsection with two remarks. Suppose, first, that for some A ∈ B (⋆) phys we have A = A † on H aux . If the operators (A ± i) −1 are both defined on Φ and preserve Φ, then the range of A phys ± i contains V phys and is dense in H phys . It then follows that A phys is essentially self-adjoint [28] on H phys . The second remark has to do with our restriction to strong observables, i.e., observables which commute with constraints. On physical grounds, on the other hand, one should deal with more general, weak observables. It is often the case that every weak observable of the system is weakly equivalent to a strong observable. In these cases, our restriction does not lead to a loss of generality. In more general cases, on the other hand, an extension of this procedure to encompass weak observables is needed. B. Examples We will now present three examples to illustrate how the group averaging procedure can be carried out in practice. (Parameterized Newtonian particles and some other examples are treated in Ref. [16] and appendix A contains general comments on the case of Abelian constraints.) The non-trivial application of this procedure to diffeomorphism invariant theories will be given in Sec. VI. Example A As a first test case, let us consider a nonrelativistic particle in three dimensions subject to the classical constraint p z = 0, so that the associated gauge transformations are just translations in the z-direction. Since the interesting classical functions can be built from x, y, z, p z , p y , p z , we let these six functions span the classical subspace S of step 1 and construct the algebra B (⋆) aux of step 3. We choose the auxiliary Hilbert space to be H aux = L 2 (R 3 , dxdydz) and let x, y, z act by multiplication and p x , p y , p z act by (−i times) differentiation so that all six operators are self-adjoint. Clearly, our physical states will be associated with generalized eigenstates of p z . We wish to view such states as distributions that act on some dense subspace Φ ⊂ H aux . With our choice of operators, it is natural to take Φ to be the space of smooth functions with compact support. Note that the Fourier transformf 0 of any such function f 0 is smooth. Hence, for any g 0 ∈ Φ, the distribution η(g 0 ) :=g * 0 δ(p z ) has well defined action on any f 0 ∈ Φ: dp x dp y dp z , (II. 9) where, as before, * denotes complex conjugation. Note that this action may be constructed by averaging over the translation group through (II.10) We now let V phys be the linear space spanned by such η(g 0 ). 
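As a small numerical illustration of the averaging map just defined (a minimal sketch of our own, with hypothetical Gaussian test functions standing in for elements of Φ; they are not compactly supported, but they decay fast enough for the discretized integrals to converge), one can check the identity that underlies this example: integrating the auxiliary inner product ⟨f 0 , U (s)g 0 ⟩ over the group of z-translations, with (U (s)g 0 )(x, y, z) = g 0 (x, y, z − s), gives the same number as the L 2 inner product of the z-integrated functions.

```python
import numpy as np

# Uniform grid on [-L, L]^3; Riemann sums approximate all integrals.
L, n = 8.0, 81
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# Hypothetical smooth, rapidly decaying test functions playing the role of
# elements of Phi (in the text these would be smooth of compact support).
f0 = np.exp(-(X**2 + Y**2 + (Z - 1.0) ** 2))
g0 = (X + 0.5j * Y) * np.exp(-(X**2 + 2 * Y**2 + Z**2))

# Inner product of the z-integrated functions on the (x, y) plane.
f_red = f0.sum(axis=2) * dx
g_red = g0.sum(axis=2) * dx
ip_reduced = np.sum(np.conj(f_red) * g_red) * dx * dx

# Group averaging: integrate <f0, U(s) g0>_aux over translations s,
# where (U(s) g0)(x, y, z) = g0(x, y, z - s).
shifts = np.linspace(-L, L, n)
vals = []
for s in shifts:
    g_shift = (X + 0.5j * Y) * np.exp(-(X**2 + 2 * Y**2 + (Z - s) ** 2))
    vals.append(np.sum(np.conj(f0) * g_shift) * dx**3)
ip_averaged = np.sum(np.array(vals)) * dx

print(ip_reduced, ip_averaged)   # agree up to discretization error
```

The agreement is just Fubini's theorem; it is this elementary mechanism that the map η exploits.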
This space is annihilated by p z (under the dual action of p z on Φ ′ ) and will become a dense subspace of the physical Hilbert space H phys . For f, g in V phys , let f 0 be an element of Φ that maps to f under η. Then, our prescription (II.9)) yields the following physical inner product: f, g phys = g[f 0 ], where f 0 may be any smooth function f 0 (x, y, z) of compact support for which f (x, y) = dzf * 0 (x, y, z); i.e., η(f 0 ) = f . Thus, the physical inner product is just f * (x, y)g(x, y)dxdy. It is Hermitian, positive definite, and independent of the choice of f 0 . The resulting H phys is just what one would expect on intuitive grounds and, since the observables x, y, p x , p y act on V phys by multiplication and (−i times) differentiation, they define self-adjoint operators on H phys and the reality conditions are satisfied in the usual way. Finally, note that there is a freedom to scale the map η by a constant: for real positive a, the use of η a = aη would simply re-scale the physical inner product by an overall factor and lead to an equivalent physical Hilbert space. This freedom can be traced back to the fact that the Haar measure (dz ′ in II.10) on a non-compact group is unique only up to a multiplicative factor. Example B Our second example (also treated in Ref. [14,16]) will be the massive free relativistic par-ticle in four-dimensional Minkowski space. Recall that this system may be classically described by a phase space R 8 with coordinates x µ , p ν for µ, ν ∈ {0, 1, 2, 3}. It is subject to the constraint p 2 + m 2 = 0 and has an associated set of gauge transformations which may be loosely interpreted as 'time reparametrizations.' Again, these classical functions define the space S of step 1 and the algebra B (⋆) aux of step 3. Thus, we represent them by self-adjoint quantum operators x µ , p ν which act on the auxiliary Hilbert space L 2 (R 4 , d 4 x) by multiplication and (−i times) differentiation. We will concentrate on the dense space Φ of smooth functions with compact support, so that elements f 0 of Φ have smooth Fourier transformsf 0 . Let us attempt to apply the group averaging technique and define η(f 0 ) (for g 0 ∈ Φ) such that, for any g 0 ∈ Φ, where C = p 2 + m 2 . By spectral analysis, we know that the Fourier transform The span of such f defines the linear space V phys . Now, for any f, g ∈ V phys , choose some f 0 such that f = η(f 0 ) and define f, g phys = g[f 0 ]. Note that the inner product (II.11) is is just the integral of f * 0 (p)g 0 (p) over the mass shell. This inner product is manifestly positive definite, Hermitian, and independent of the choice of f 0 and g 0 . Thus, the resulting H phys is the usual Hilbert space associated with the free relativistic particle, except that it contains both the 'positive and negative frequency parts' as orthogonal subspaces. While none of the operators x µ , p ν are observables, they can be used to construct observables on H aux for which the induced operators on H phys are the familiar Newton-Wigner operators (see Ref. [16,29]). Again, any of the maps η a = aη may be used in this construction. Example C Finally, we consider what we will call the massive free relativistic particle on a globally hyperbolic, curved four dimensional space-time M with metric g µν . We will allow an arbitrary spacetime for which the wave operator ∇ µ ∇ µ is essentially self-adjoint when acting on the Hilbert space L 2 (M, dv), where dv is the space-time volume element. 
We take the classical phase space to be Γ = T * M , but subject our system to the constraint g µν (x)p µ p ν + m 2 = 0. Here, p µ is the fourmomentum and this constraint generates an associated group of gauge symmetries. We choose smooth functions on M and V µ p µ for complete vector fields V µ on M to generate the subspace S and the algebra B aux . It is then natural to choose H aux to be L 2 (M, dv) and to represent real functions on M by self-adjoint operators that act by multiplication. Similarly, real complete vector fields V µ are represented by the self-adjoint differential operators (−i)V µ ∂ µ − i 2 div(V ), where div(V ) denotes the divergence of v with respect to the space-time metric; L V dv = div(V )dv. The constraint is promoted to the unique self-adjoint extension C of the wave operator on L 2 (M, dv). (The freedom to add a multiple of the scalar curvature of g µν does not affect the discussion that follows.) It is again natural to take Φ to be the space of smooth functions on M with compact support. We then define the map η : Φ → Φ ′ by and take V phys to be its image. Here we appeal to Gel'fand spectral theory [27] to show that the resulting generalized eigenstates lie in the topological dual Φ ′ of Φ. As before, the natural concept is in fact the family of maps η a = aη for a ∈ R + . The physical Hilbert space H phys is the completion of V phys in the inner product f, g phys = g[f 0 ] where f 0 satisfies f = η 1 (f 0 ). This inner product is independent of the particular choice of f 0 , is Hermitian and positive definite, and self adjoint operators A on H aux which preserve Φ and commute with C induce symmetric, densely defined operators A phys on H phys . The construction of H phys may come as a surprise to some readers as it seems to violate the accepted idea that there is no well-defined notion of a single relativistic quantum particle in a nonstationary space-time. The 'resolution' is that the quantum theory defined above does not exhibit the properties that one would require for it to describe a 'physical' free particle. In particular, it contains no notion of a conserved probability associated with Cauchy surfaces, as our particle appears to 'scatter backwards in time' when it encounters a lump of space-time curvature. (Re-collapsing cosmologies [16] illustrate a similar effect). In addition, this framework cannot be used as the oneparticle Hilbert space to build a relativistic field theory. Recall that an essential element in the construction of a quantum field from a one particle Hilbert space is that the inner product on the Hilbert space be compatible with the symplectic structure on the space of classical solutions (which is given by the Klein-Gordon inner product). That this is not the case for our inner product may be seen from the fact that it contains no notion of a conservation law associated with Cauchy surfaces. III. THE CLASS OF THEORIES In this section, we spell out in some detail the class of theories to be considered and discuss various features which will be used in subsequent sections. The section is divided into three parts. We present the general framework in the first, some illustrative examples of theories satisfying our assumptions in the second, and in the third, a set of functions on the phase spaces of these theories which will serve as elementary variables in the quantization program. A. 
General framework Let suppose that the underlying "space-time" M is a d + 1 dimensional manifold with topology M = R × Σ where Σ is an orientable, real analytic, d dimensional manifold. We wish to consider field theories on M which admit a Hamiltonian formulation with following features: a) The phase space consists of canonical pairs (A i a ,Ẽ a i ) where A i a is a connection 1-form on Σ taking values in the Lie algebra of a compact, connected gauge group G, andẼ a i , its conjugate momentum, is a vector density of weight one on Σ which takes values in the dual of the Lie algebra of G. The fundamental Poisson brackets will be: The theory is a constrained dynamical system subject to (at least) the following two constraints: where F is the curvature of A. The first of these will be referred to as the Gauss constraint and the second as the vector or the diffeomorphism constraint. A given theory in the class may well have other constraints. It is easy to check that the canonical transformations generated by the Gauss constraint correspond to local gauge transformations associated with G while those associated with (a suitable combination of the Gauss and) the vector constraint correspond to diffeomorphisms of Σ. The constraint algebra formed by these two constraints is of first class. The action of these theories will have the general form: where c is a coupling constant, Λ i , N a are associated Lagrange multipliers and "other terms" could contain additional constraints. (For simplicity, we have left out possible boundary terms.) We will assume that the full system of constraints is of first class and that the Hamiltonian is (weakly) invariant under the canonical transformations generated by all constraints. In the following sections, for most part, we will focus just on the Gauss and the vector constraints. B. Example theories In this section, we will provide several examples to illustrate the type of theories that are encompassed by our analysis. A) The Husain-Kuchař model This is perhaps the simplest non-trivial example. Here, the gauge group G is SU (2) and the manifold Σ is 3-dimensional. As mentioned in the Introduction, it has no further constraints and the Hamiltonian is a linear combination of the two constraints. Somewhat surprisingly, the model does arise from a manifestly covariant, 4-dimensional action [1]. Although it is not of direct physical significance, this model is interesting from a mathematical physics perspective because it has all the features of general relativity except the Hamiltonian constraint. B) Riemannian general relativity A second model is provided by 4-dimensional general relativity with metrics of signature (++++). Again, at least at first sight, this model is not of direct physical interest. However, since it contains all the conceptual non-trivialities of Lorentzian general relativity, it provides an excellent arena to test various quantization strategies. Furthermore, there are some indications that, if one were to solve this model completely, one may be able to pass to the quantum theory of Lorentzian general relativity by a "generalized Wick rotation" which would map suitably regular functions of the Euclidean self-dual connections to holomorphic functions of the Lorentzian self-dual connections. Since this model is not discussed in the literature, we will write down the basic equations governing it. We will, however be brief since the Lorentzian counterpart of this case has been analyzed in detail in Ref. [2,17]. 
The key idea here is to use a Palatini-type of action, however with self-dual connections. Thus, we begin with: where the a, b, c are the four-dimensional tensor indices, I, J, K = 1, .., 4 are the "internal" SO(4) indices, e a I is a tetrad (for a positive definite metric), e its determinant, 4 A IJ a , a self-dual connection and 4 F IJ ab , its curvature. Although we are using selfdual connections, the variation of this action provides precisely the vacuum Einstein's equations. For simplicity, let us assume that the 3-manifold Σ is compact. (The asymptotically flat case requires more care but can be treated in an analogous fashion [2,17].) Then, if we perform a 3+1 decomposition, let t a be the "time-evolution" vector field, and use a suitable basis in the 3-dimensional self-dual sub-algebra of the SO(4) Lie-algebra, we can cast the action in the form: Here indices a, b, ... refer to the tangent space of Σ and i, j, ... to the self-dual (SU (2)) Lie algebra; A t := t a 4 A a , N a and N ∼ are Lagrange multipliers; and, (A i a ,Ẽ a i ) are the canonical variables. Thus, symplectic structure is given by The variation of the action with respect to the Lagrange multipliers yields, as usual, the first class constraints of Riemannian general relativity: These are, respectively, the Gauss, the vector and the scalar constraint. Thus, in the Hamiltonian form, the theory is similar to the Husain-Kuchař model except for the presence of the additional scalar constraint. How do we make contact with the more familiar Hamiltonian form of the theory in terms of metrics and extrinsic curvatures? The two are related simply by a canonical transformation. RegardẼ a i as a triad on Σ with density weight one and denote by Γ i a the spin-connection defined by it. Define K i a via: where q is the determinant of q ab . Note, however, that, while the constraints (III B) are all low order polynomials in terms of the connection variables, they become non-polynomial in terms of the metric variables. Hence, if one uses the metric formulation, it is much more difficult to promote them to well-defined operators on an auxiliary Hilbert space. C) Lorentzian general relativity in the spin connection formulation In the Lorentzian signature, self-dual connections are complex. Therefore, the formulation of the Lorentzian theory in terms of self-dual connections [2,17] falls outside the scope of this paper. However, as in the Euclidean case, one can consider the real fields (Ẽ a i , K i a ) as a canonical pair. By a contact transformation, one can replace the triadẼ a i by the spin-connection Γ i a and K i a by the momen-tumP a i conjugate to Γ i a . In the new canonical pair, the configuration variable is a SU (2) connection whence the framework falls in the class of theories considered here. One can show that the Gauss and the vector constraints retain their form; A i a andẼ a i in (III.7) are simply replaced by Γ i a andP a i respectively. Therefore, in this formulation, the theory belongs to the class under consideration. Unfortunately, however, the remaining, scalar constraint seems unmanageable in terms of Γ i a and P a i . Hence, this formulation is not directly useful beyond the Gauss and the vector constraints [30]. As mentioned in the Introduction, to handle the Hamiltonian constraint, one would have to use a different strategy, e.g., the one involving a coherent state transform [3] and pass to the (Lorentzian) self-dual representation. D) Chern-Simons theories Let G may be any compact, connected Lie group. 
Then, one can construct a natural "inhomogeneous version" IG of G. As a manifold, IG is isomorphic to the cotangent bundle over G and, as a group, it is a semi-direct product of G with an Abelian group which has the same dimension as G. If G is chosen to be the rotation group, SO(3), then IG is the Euclidean group in three dimensions. (For details, see Ref. [4,31].) Let us now set the dimension d of Σ to be 2 and consider the Chern-Simons theory based on IG. (If G is chosen to be SU (2), this theory is equivalent to 3-dimensional Riemannian general relativity.) It is straightforward to check that all our assumptions from Sec. III A are satisfied. We can also consider a more sophisticated enlargement I Λ G of G which is parametrized by a real number Λ (see Ref. [31]). In the case when G is SU (2), the Chern-Simons theory based on I Λ G is the same as Riemannian general relativity with a cosmological constant. (Curiously, the theory that results from G = SU (2) and Λ negative is also isomorphic, in an appropriate sense, with the Lorentzian, 3-dimensional gravity with a positive cosmological constant.) All these theories also fall in the class under consideration. Note however that, in general, the Chern-Simons theories based on compact gauge groups G -rather than IG or I Λ G-fall outside this class since these theories do not have canonical variables of the required type. C. An (over)complete set of gauge invariant functions In this section, for simplicity of presentation, we will focus on the case d = 3 and G = SU (2). Generalizations to higher dimensions and other compact, connected groups is, however, straightforward. For simplicity, we will solve the Gauss constraint classically. (See, however, the first part of Sec. VII.) Therefore, it is natural to regard the space A/G of (sufficiently well-behaved) connections on Σ modulo (sufficiently regular) gauge transformations as the effective configuration space. Phase space is then the cotangent bundle on A/G. Our aim is to single out a convenient set of functions on this phase space which can be used as "elementary classical variables" in the quantization program of Sec. II. Wilson loop functions are the obvious candidates for configuration variables. These will be associated with piecewise analytic loops on Σ, i.e., with piecewise analytic maps α : S 1 → Σ. (Thus, the loops do not have a preferred parameterization, although in the intermediate stages of calculations, it is often convenient to choose one.) The Wilson loop variables T α (A) are given by: where the trace is taken in the fundamental repre-sentation. As defined, these are functions on the space of connections. However, being gauge invariant, they project down naturally to A/G. The momentum observables, T S are associated with piecewise analytic strips S, i.e., ribbons which are foliated by a 1-parameter family of loops. For technical reasons, it is convenient to begin with piecewise analytic embeddings S : (1, 1) × S 1 → Σ and use them to generate more general strips. Set σ, τ are coordinates on S (with τ labeling the loops within S and σ running along each loop α τ ), η abc denotes the Levi-Civita tensor density on Σ, and, as before h ατ denotes the holonomy along the loop α τ . Again, the functions T S are gauge invariant and hence well-defined on the cotangent bundle over A/G. They are called "momentum variables" because they are linear inẼ a i . Properties of these variables are discussed in some detail in Ref. [17]. Here we recall only the main features. 
First they constitute a complete set in the sense that their gradients span the cotangent space almost everywhere on the phase space over A/G. However, they are not all independent. Properties of the trace operation in the fundamental representation of SU (2) induce relations between them. These algebraic relations have to be incorporated in the quantum theory. It is interesting that the Poisson brackets can be expressed in terms of simple geometric operations between loops and strips. We will ilustarate this by writing out one of these Poisson brackets which will be needed in the subsequent sections: where the sum is over transverse intersections i between the the loop α and the strip S, sgn i (S, α) takes values 0, ±1 depending on the orientation of the tangent vector of α and the tangent plane of S at the i-th intersection point and S • i α is a loop obtained by composing the loop in the strip S passing through the intersection point with the loop α. (Note that the same geometric point in Σ may feature in more than one intersection i.) Thus, in particular, the Poisson bracket vanishes unless the loop α intersects the strip S. The Poisson bracket between two strip functionals also vanishes unless the two strips intersect. If they do, the bracket is given by a sum of slightly generalized strip functionals. The generalization consists only of admitting certain strip maps S : (1, 1) × S 1 → Σ which is not necessarily embeddings, and integrating in (III.10) over a suitable sub-manifold I without boundary, I ⊂ (0, 1) × S 1 , such that for every loop α τ , α τ ∩ S(I) is a closed loop. The Poisson bracket between these more general strips closes. We did not simply begin with these more general strips because, in quantum theory, it is easier to begin with the embedded strips and let them generate more general ones. In Sec. V, we will use these loop and strip functionals as elementary classical variables to construct the auxiliary Hilbert space. IV. QUANTUM CONFIGURATION SPACE To complete the first four steps in the quantization program, it is convenient to proceed in two stages. First, one focuses on just the configuration variables T α and constructs representations of the corresponding "holonomy algebra." This naturally leads to the notion of a quantum configuration space. By introducing suitable geometric structures on this space, one can then represent the momentum operators corresponding to T S . We will begin, in this section, by isolating the quantum configuration space. In the second part, we will present three convenient characterizations of this space. A number of constructions used in the subsequent sections depend on these characterizations. In the third part, we introduce elements of calculus on this space which will lead to the definition of the momentum operators in Sec. V. A. A/G a completion of A/G In the classical theory, A/G serves as the gauge invariant configuration space for the class of theories under consideration. We will now show that, in the passage to quantum theory, one is led to enlarge this space [5]. Recall that an enlargement also occurs in, for example, scalar quantum field theory [32,33]. Let us begin by constructing the Abelian algebra of configuration operators. This algebra is, of course, generated by finite linear combinations of functions T α on A/G with complex coefficients. By construction, it is closed under the operation of taking complex conjugation. Thus, it is a ⋆subalgebra of the algebra of complex-valued, continuous bounded functions on A/G. 
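Before proceeding, it may help to see the two properties of the configuration variables used here, gauge invariance and boundedness, in the simplest possible setting. The following toy sketch (our own illustration, not taken from the paper) models a loop as a closed chain of N edges, each carrying an SU(2) parallel transporter; it checks numerically that T α = (1/2) tr h α is unchanged when an independent group element is applied at every site, and that |T α | ≤ 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    """Haar-random SU(2) element built from a random unit quaternion."""
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)
    a, b, c, d = v
    return np.array([[a + 1j*b,  c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

# A closed chain of N edges (a "floating lattice" loop), one transporter per edge.
N = 6
edges = [random_su2() for _ in range(N)]

def wilson(edge_matrices):
    h = np.eye(2, dtype=complex)
    for U in edge_matrices:
        h = U @ h                      # compose the holonomy around the loop
    return 0.5 * np.trace(h).real      # T_alpha = (1/2) tr h_alpha

T_before = wilson(edges)

# A local gauge transformation assigns g_i in SU(2) to each site i; the edge
# from site i to site i+1 transforms as U -> g_{i+1}^{-1} U g_i.
g = [random_su2() for _ in range(N)]
edges_gauged = [np.conj(g[(i + 1) % N]).T @ edges[i] @ g[i] for i in range(N)]

print(T_before, wilson(edges_gauged))  # equal up to round-off: gauge invariance
print(abs(T_before) <= 1.0)            # Wilson loop functions are bounded
```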
It separates the points of A/G in the sense that, if [A 1 ] ≠ [A 2 ] (i.e., if the gauge equivalence classes of A 1 and A 2 in A do not coincide), there exists a loop α such that T α (A 1 ) ≠ T α (A 2 ). Thus, as indicated in Sec. III C, the set of configuration variables is sufficiently large. This algebra is called the holonomy algebra and denoted by HA. To obtain a greater degree of control, it is convenient to introduce on it a norm and convert it into a C ⋆ -algebra. Let us therefore set ||f|| := sup_{[A] ∈ A/G} |f([A])| and complete HA with respect to this norm; we obtain a commutative C ⋆ -algebra \overline{HA}. (This algebra is equipped with identity, given by T ∅ , where ∅ is the trivial, i.e., point loop.) We are now in a position to apply the powerful representation theory of C ⋆ -algebras. The first key result we will use is the Gel'fand-Naimark theorem, which ensures that every commutative C ⋆ -algebra with identity is isomorphic to the C ⋆ -algebra of all continuous bounded functions on a compact Hausdorff space called the spectrum of the algebra. The spectrum can be constructed directly from the algebra: it is the set of all ⋆-homomorphisms from the given C ⋆ -algebra to the ⋆-algebra of complex numbers. We will denote the spectrum of \overline{HA} by \overline{A/G}. It is easy to show that A/G is densely embedded in \overline{A/G}; thus, \overline{A/G} can be regarded as a completion of A/G. Recall that, since \overline{HA} is the C ⋆ -algebra of configuration variables, our primary objective here is to construct its representations. Now, a key simplification occurs because one has a great deal of control on the representation theory. Let ρ : \overline{HA} → B(H) denote a cyclic representation of \overline{HA} by bounded operators on some Hilbert space H, and let Γ be the "vacuum expectation value functional", Γ(f) := ⟨Ω, ρ(f) Ω⟩, where Ω is a cyclic vector and f any element of \overline{HA}. Clearly, Γ is a positive linear functional on \overline{HA}. Since \overline{HA} is isomorphic with the C ⋆ -algebra of continuous functions on \overline{A/G}, Γ can be regarded as a positive linear functional also on C 0 (\overline{A/G}). Now, since \overline{A/G} is compact, the Riesz representation theorem ensures that there is a unique regular Borel measure µ on \overline{A/G} such that Γ(f) = ∫ f̃ dµ, where f̃ ∈ C 0 (\overline{A/G}) corresponds to f in \overline{HA}. This immediately implies that any cyclic representation of \overline{HA} is unitarily equivalent to a "connection representation" on L 2 (\overline{A/G}, dµ), in which ρ(f) acts by multiplication by f̃ and the measure µ is defined through (IV.3). Therefore the set of regular measures on \overline{A/G} is in one-to-one correspondence with the set of cyclic representations of \overline{HA}. To summarize, in any cyclic representation of \overline{HA}, quantum states can be thought of as (square-integrable) functions on \overline{A/G} (for some choice of measure; recall that cyclic representations are the basic "building blocks" of general representations). Hence, \overline{A/G} can be identified with the quantum configuration space. The enlargement from A/G to \overline{A/G} is non-trivial because, typically, A/G is contained in a set of zero measure [9]. We will conclude this subsection with a general remark. In the construction of the quantum configuration space, we have avoided the use of the non-gauge-invariant affine structure of the space A of connections and worked instead directly on A/G. (For earlier works in the same spirit, see [34].) This is in contrast with the gauge fixing strategy that is sometimes adopted in constructive quantum field theory [32,33], which then faces global problems associated with Gribov ambiguities. B. Characterizations of \overline{A/G} Since \overline{A/G} is the domain space of quantum states, it is important to understand its structure.
In this subsection, therefore, we will present three characterizations of this space, each illuminating its structure from a different perspective. Denote by L x0 Σ the space of continuous piecewise analytic loops on Σ based at an arbitrarily chosen but fixed point x 0 . Two loops α, β are said to be holonomically equivalent if for every A ∈ A we have The corresponding equivalence classes are called hoops. For notational simplicity we will use lower case greek letters to denote these classes as well. The set of all hoops forms a group called the hoop group which is denoted by HG x0 . A smooth connection A ∈ A defines a homomorphism from HG x0 to SU (2), which is smooth in a certain sense [35] H(., A) : We can now present the first characterization: A/G is naturally isomorphic to the set of all homomorphisms from HG x0 to SU (2) modulo conjugation [6]. (The conjugation serves only to eliminate the freedom to perform gauge transformations at the base point. Note that the homomorphism here need not even be continuous.) This result makes it possible to show further that A/G is a limit of configuration spaces of gauge theories living in arbitrary lattices for which the space of connections modulo gauge transformations coincides with finite products of copies of SU (2) modulo conjugation [9][10][11][12]. The second characterization is in terms of these limits. To introduce it, let us begin with the notion of independent hoops [6]. Hoops {β 1 , ..., β n } will be said to be independent if loop representatives exist such that each contains an open segment that is traced exactly once and which intersects other representatives at most at a finite number of points. Let now S n (β 1 , ..., β n ) denote the subgroup of HG x0 generated by a set of independent hoops {β 1 , ..., β n }. The space H(S n ) of all homomorphisms (modulo conjugation) from S n to SU (2) is homeomorphic to SU (2) n /Ad, which in turn can be thought of as the configuration space of the "floating" (i.e., non-rectangular) lattice formed by {β 1 , ..., β n }. Now, if we consider a larger subgroup S m ⊃ S n of the hoop group, we have a natural projection map p SnSm , where In the lattice picture, the projection is obtained simply by restricting the configurations on the larger lattice to the smaller lattice. The family (H(S n ), p SnSm ) is called a projective family labeled by the subgroups S n of the hoop group (see appendix B). Since the theory for a larger lattice contains more information, it is desirable to consider larger and larger lattices, i.e., bigger and bigger subgroups of the hoop group. Unfortunately the projective family itself does not have a "largest element" from which one can project to any other. However, such an element can in fact be obtained by a standard procedure called the "projective limit.". Now, given the space A/G, we have a surjective projection p Sn to H(S n ) for any subgroup S n of the hoop group: where the brackets [, ] on the right hand side denote conjugacy classes. This suggests A/G may be the projective limit of the family (H(S n ), p SnSm ). Detailed considerations show that this is indeed the case [9]. This characterization of A/G as a limit of finite dimensional spaces allows the introduction of integral calculus [6][7][8]10,12] on A/G using integration theory on finite dimensional spaces. Roughly, measures on lattice configuration spaces H(S n ) which are compatible with the projections P SnSm from larger lattices to the smaller ones induce measures on the projective limit A/G. 
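The compatibility involved here can be made concrete in the simplest situation (a Monte Carlo sketch of our own, not from the paper): let a "finer" graph consist of two edges that compose into the single edge of a "coarser" graph. The projection sends the pair (g 1 , g 2 ) of edge group elements to the product g 2 g 1 , and the push-forward of the product Haar measure under this map reproduces Haar expectation values on the coarser, single-edge configuration space. It is precisely this kind of consistency that lets the family of Haar measures define a measure on the projective limit.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_su2(n):
    """n Haar-random SU(2) matrices, built from random unit quaternions."""
    q = rng.normal(size=(n, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    a, b, c, d = q.T
    U = np.empty((n, 2, 2), dtype=complex)
    U[:, 0, 0] = a + 1j * b
    U[:, 0, 1] = c + 1j * d
    U[:, 1, 0] = -c + 1j * d
    U[:, 1, 1] = a - 1j * b
    return U

n = 200_000
g1, g2 = haar_su2(n), haar_su2(n)   # samples for the two edges of the finer graph
g = haar_su2(n)                     # samples for the single edge of the coarser graph

def moments(h):
    tr = np.trace(h, axis1=1, axis2=2)
    return np.mean(tr.real), np.mean(np.abs(tr) ** 2)

print(moments(g2 @ g1))   # push-forward under (g1, g2) -> g2 g1 ...
print(moments(g))         # ... matches the Haar moments (~0 and ~1) on one edge
```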
In particular, this strategy was first used in [6] to construct a natural, faithful, diffeomorphism invariant measure µ 0 on A/G from the induced Haar measures on the configuration spaces H(S n ) of lattice theories. More precisely, µ 0 is defined by: where, µ H denotes the Haar measure on SU (2), p Ad denotes the quotient map and f ⋆ µ denotes the push-forward of the measure µ with respect to the map f . This description uses hoops as the set of "probes" for the generalized connections. A related approach, developed by Baez, [7] relies on the (gauge dependent) probes defined by analytic edges. This strategy provides a third characterization of A/G, again as a projective limit, but of a projective family labeled by graphs rather than hoops. It is this characterization that is best suited for developing differential calculus [8,11]. Since it is used in the subsequent sections, we will discuss it in greater detail. Let us begin with the set E of all oriented, unparametrized, embedded, analytic intervals (edges) in Σ. We introduce the space A of (generalized) connections on Σ as the space of all maps A : E → SU (2), such that whenever two edges e 2 , e 1 ∈ E meet to form an edge. Here, e 2 • e 1 denotes the standard path product and e −1 denotes e with opposite orientation. The group G of (generalized) gauge transformations acting on A is the space of all maps g : Σ → SU (2) or equivalently the Cartesian product group G := × x∈Σ SU (2) . (IV.12) A gauge transformation g ∈ G acts on A ∈ A through [g(A)](e p2,p1 ) = (g p2 ) −1 A(e p2,p1 )g p1 , (IV. 13) where e p2,p1 is an edge from p 1 ∈ Σ to p 2 ∈ Σ and g pi is the group element assigned to p i by g. The group G equipped with the product topology is a compact topological group. Note also that A is a closed subset of the Cartesian product of all A e , where the space A e of all maps from the one point set {e} to SU (2) is homeomorphic to SU (2). A is then compact in the topology induced from this product. The space A (and also G) can also be regarded as the projective limit of a family labeled by graphs in Σ in which each member is homeomorphic to a finite product of copies of SU (2). [9,12] Let us now briefly recall this construction as it underlies the introduction of calculus on A/G. The set of all graphs in Σ will be denoted by Gra(Σ). In Gra(Σ) there is a natural relation of partial order ≥, whenever every edge of γ is a path product of edges associated with γ ′ . Furthermore, for any two graphs γ 1 and γ 2 , there exists a γ such that γ ≥ γ 1 and γ ≥ γ 2 , so that (Gra(Σ), ≥) is a directed set. Given a graph γ, let A γ be the associated space of assignments (A γ = {A γ |A γ : γ → SU (2)}) of group elements to edges of γ, satisfying A γ (e −1 ) = A γ (e) −1 and A γ (e 1 • e 2 ) = A γ (e 1 )A γ (e 2 ), and let p γ : A → A γ be the projection which restricts A ∈ A to γ. Notice that p γ is a surjective map. For every ordered pair of graphs, γ ′ ≥ γ, there is a naturally defined map (IV. 16) With the same graph γ, we also associate a group G γ defined by where V γ is the set of vertices of γ; that is, the set V γ of points of Σ lying at the ends of edges of γ. There is a natural projection G → G γ which will also be denoted by p γ and is again given by restriction (from Σ to V γ ). As before, for γ ′ ≥ γ, p γ factors into p γ = p γγ ′ • p γ ′ to define Note that the group G γ acts naturally on A γ and that this action is equivariant with respect to the action of G on A and the projection p γ . 
Hence, each of the maps p γγ ′ projects to new maps also denoted by We collect the spaces and projections defined above into a (triple) projective family (A γ , G γ , A γ /G γ , p γγ ′ ). It is not hard to see that A and G as introduced above are just the projective limits of the first two families. Finally, the quotient of compact projective limits is the projective limit of the compact quotients, [12] A/G = A/G . Using again the normalized Haar measure on SU (2), the construction (IV.9,IV.10) may be repeated for this projective family [7]. This leads to a natural ("Haar") measure µ ′ 0 defined on A via Under the natural projection map to A/G, the push forward of this measure yields µ 0 of (IV.9). C. Differential calculus on A/G We now recall from Ref. [11] some elements of calculus on A/G defined using calculus on finite dimensional spaces and the representation of A/G as a projective limit. This framework will allow us, in the next section, to represent T S as operators on L 2 (A/G, dµ 0 ). Although our primary interest is A/G, it will be convenient to introduce geometric structures on A. Vector fields and other operators that are invariant under the action of G on A will descend to A/G = A/G and provide us with differential geometry on the quotient. Let us begin by introducing the space of C n cylindrical functions on A (for details, see appendix B): Let us now consider vector fields. These can be regarded as derivations of the algebra Cyl ∞ (A), i.e. (IV.24) A natural way to construct these vector fields is via consistent families of vector fields (X γ ) on A γ . This correspondence is given by the natural measure µ ′ 0 on A and for all f γ , g γ ∈ C 1 (A γ ), where f = p ⋆ γ f γ and g = p ⋆ γ g γ . The family (X γ ) is (µ ′ 0 -) consistent in the sense that for all γ ′ ≥ γ, and for all f γ , g γ ∈ C 1 (A γ ), (IV.26) The cylindrical vector fields take a particularly simple form if there exists a γ 0 such that for all γ ′ ≥ γ ≥ γ 0 . These vector fields were introduced and studied in detail in Ref. [11]. They will play an important role in the next section for the representation of T S as operators. More general cylindrical operators can be associated with families (B γ ) of operators acting on C ∞ (A γ ) and satisfying the same consistency conditions as vector fields in (IV.26) Examples of such operators are Laplacians [3,11] on A and the geometric operators discussed in Appendix D. V. QUANTUM KINEMATICS We are now ready to apply the algebraic quantization of program of Sec. II to the class of theories under consideration. In this section, we will complete the first four steps in the program. We begin by introducing the auxiliary Hilbert space H aux which incorporates the reality conditions on the loop-strip functions and then analyze some of its structure. for all ψ ∈ L 2 (A/G, dµ 0 ). By construction, these operators are (bounded and) self-adjoint; the reality conditions on the configuration variables are thus incorporated. Note that this would have been the case for any choice of measure; it is not essential to choose µ 0 at this stage. The condition thatT S be represented by selfadjoint operators, on the other hand, does restrict the measure significantly. Since T S is linear in momentum, one would expect it to be represented by the Lie derivative along a vector field on A/G. This expectation is essentially correct. The detailed definition ofT S is, however, somewhat complicated. Let us begin by introducing a simpler operator from whichT S will be constructed. 
Consider an analytic loop α. we can think of it as a graph with just one edge. Fix a point p on α and a d − 2dimensional subspace W of the tangent space at p. (Recall that the underlying manifold Σ is d dimensional.) Then, given a graph γ ≥ α, and a function F γ on A γ , we wish to define the action of a vector field X α,W on F γ . The key idea is to exploit the fact that, if γ has n edges, (e 1 , ..., e n ), then A γ is isomorphic with (SU (2)) n and can be coordinatized by n group valued coordinates (g 1 , ..., g n ). Using this fact, we set: where k ± (e) := 0 if e ± = p 1 4 [sgn(ė ± ,α + , W ) + sgn(ė ± ,α − , W )] ife ± = p Here, h α is the (generalized) holonomy function on A γ associated with the loop α, τ i are the Pauli matrices, k ij , the metric in the Lie algebra of SU (2), X R e,i and X L e,i are the right and the left invariant vector fields on the copy of the group associated with the edge e which point in the i-th direction at the identity of the group, e ± refers to the two ends of the edges, sgn(ė ± ,α ± , W ) is 0, ±1 depending on the relative orientation of the vectors involved and the subspace W , and α + (respectively, α − ) is the outgoing (incoming) segment of α at p. While the definition of this vector field seems complicated at first, it is in fact straightforward to calculate its action on functions on A γ . In particular, what counts is only the dependence of the function F γ on the group elements corresponding to the edges which pass through p for which the orientation factor is non-zero. For each γ ≥ α, we now have a vector field on A γ . One can check that these vector fields satisfy the compatibility conditions (IV.27) and thus provides a vector field (X γ ) on A which we will again denote by X α,W . The definition then immediately implies that this vector field is invariant under G. Hence it has a well-defined action on the space Cyl 1 (A/G) on A/G of differential cylindrical functions on A/G and a well defined divergence with respect to µ 0 . [11] A direct calculation shows that We are now ready to define the strip operators. Given a strip S which is analytically embedded in Σ, let us setT where W x is any (d − 2) plane through x which is transversal to the loop α x in the strip passing through x and tangent to the strip. Although there is an uncountably infinite number of loops involved in this definition (V.4), the action ofT S is nonetheless well-defined on cylindrical functions since, in this action, only a finite number of terms give nonzero contributions. The simplest cylindrical functions are the traces of holonomies. On these, the action ofT S reduces simply to: where we have used the same notation as in (III.11). This is action that one would have expected on the basis of the Poisson bracket III.11, so that the commutators betweenT β andT S are the required ones. Finally, using the fact that each vector field X α,W is divergence-free, one can show thatT S is essentially self-adjoint. Thus, the representation of these elementary operators does incorporate all the reality conditions. We will conclude with two remarks. 1. Our strip operators have been directly defined only for analytically embedded strips. Since more general strip functionals were generated by Poisson brackets of the analytically embedded ones, the corresponding operators are obtained by taking commutators between the "basic" strip operators. 2. 
In the above discussion, we first set H aux = L 2 (A/G, dµ 0 ), introduced loop and strip operators on it, and argued that the resulting representation of B aux satisfies the reality conditions. There is in fact a stronger result. One can begin with cylindrical functions on A/G and define the operators T̂ α and T̂ S as above. Then, µ 0 is the only non-trivial measure on A/G for which the reality conditions can be satisfied. (The qualification "non-trivial" is necessary because, as was pointed out in Sec. III, the loop-strip variables are complete everywhere except at the flat connections with trivial holonomies, and one can introduce another measure which is concentrated just at that point of A/G which will also incorporate the reality conditions.) Thus, the overall situation is similar to that in ordinary quantum mechanics, where the Lebesgue measure is uniquely picked out by the reality conditions once we specify the standard representation, −iℏ∇, of the momentum operator. B. Spin networks and the (inverse) loop transform In this subsection, we recall [20] that H aux admits a convenient basis and point out the relation between the connection and the loop [23,24] representations. The geometrical object called a spin-network is a triple (γ, π, c) consisting of (i) a graph γ, (ii) a labeling π := (π 1 , ..., π n ) of the edges e 1 , ..., e n of that graph γ with irreducible representations π i of G, and (iii) a labeling c := (c 1 , ..., c m ) of the vertices v 1 , ..., v m of γ with "contractors" c j . Each contractor c j is an intertwining operator from the tensor product of the representations corresponding to the incoming edges at a vertex v j to the tensor product of the representations labeling the outgoing edges. Because the group G is compact, the vector space of all possible contractors c j associated with a given vector π and vertex v j is finite dimensional. To (i)-(iii) we add a fourth 'non-degeneracy' condition: (iv) for every edge e the representation π e is non-trivial and γ is a 'minimal' graph in the sense that if another graph γ ′ occupies the same set of points in Σ, then each edge of γ ′ is contained in an edge of γ. (Equivalently, γ ′ can always be built by subdividing the edges of γ, but γ cannot be so built from γ ′ .) A spin-network state is simply a C ∞ cylindrical function on A/G (a G-invariant function on A) constructed from a spin-network, T γ, π, c (A) := [π 1 (h e1 (A)) ⊗ · · · ⊗ π n (h en (A))] · c for all A ∈ A, where, as before, h ei (A) = A(e i ) is an element of G associated with an edge e i and '·' stands for contracting, at each vertex v j of γ, the upper indices of the matrices corresponding to all the incoming edges and the lower indices of the matrices assigned to all the outgoing edges with all the indices of c j . Using the spin-network states it is easy to construct an orthonormal basis in H aux . To begin, given a pair γ, π, consider the vector space H γ, π spanned by the spin-network states T γ, π, c given by all the possible contractors c associated with γ, π as above. Note that H aux = ⊕ γ, π H γ, π , where (γ, π) ranges over all the pairs of minimal graphs and labelings by irreducible, non-trivial representations; the sum is orthogonal and the spaces H γ, π are finite dimensional. Thus, we need only choose an orthonormal basis in each H γ, π . An explicit construction is given in Ref. [20,22]. We now turn to loop transforms. This discussion will be brief because it is not used in the rest of the paper. Given any measure µ on A/G we can perform the integrals χ(α 1 , ..., α n ) := ∫ dµ T α1 · · · T αn to obtain a function χ of multi-loops.
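For the simplest spin-networks, a single loop whose only edge carries the spin-j representation of SU(2), the state is just the character χ j of the holonomy, and orthonormality with respect to µ 0 reduces to the Schur orthogonality of SU(2) characters under the Haar measure. A quick Monte Carlo check (our own sketch; the characters are evaluated through the rotation half-angle t, with tr h = 2 cos t):

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_half_angles(n):
    """Half-angles t of Haar-random SU(2) elements, i.e. tr(h) = 2 cos(t)."""
    q = rng.normal(size=(n, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    return np.arccos(np.clip(q[:, 0], -1.0, 1.0))

def chi(j, t):
    """SU(2) character of the spin-j representation at half-angle t."""
    return np.sin((2 * j + 1) * t) / np.sin(t)

t = haar_half_angles(400_000)

# <chi_j, chi_k> under the Haar measure ~ delta_{jk}: single-loop spin-network
# states with different edge labels are orthonormal with respect to mu_0.
for j in (0.5, 1.0, 1.5):
    print([round(np.mean(chi(j, t) * chi(k, t)), 3) for k in (0.5, 1.0, 1.5)])
```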
In the case when G = SU (n), Mandelstam identities enable us to express finite products of traces of holonomies in terms of sums of products involving r or less traces where r is the rank of the group. Hence, in the loop representation, we have to deal only with functions of r or less loops. On the other hand, by the Riesz-Markov theorem, any positive linear functional on C 0 (A/G) that satisfies the conditions induced by the Mandelstam identities is the loop transform χ of a regular measure supported on A/G. Thus, there is a one to one correspondence between between regular measures µ and their characteristic functions χ. This result is analogous to the Bochner theorem that is used in the framework of constructive quantum field theory [32]. In fact, the loop transform can be thought of as a precise analog of the Fourier transform for a quantum field theory with a linear quantum configuration space. We will now indicate how one can explicitly recover the finite joint distributions of the measure µ from its characteristic functional. (Details will appear elsewhere [22].) This reconstruction of the measure can be regarded as the inverse loop transform. Given a measure µ, choose an orthonormal basis of spin-network states T γ, π, cI and define the associated spin-network characteristic function to be the analog of (V.8), namely χ(γ, π, c I ) :=< T γ, π, cI > . (V.9) We will say that the characteristic functional is absolutely summable if and only if, for any finitely generated graph γ, the series π cI = cI ( π) |χ(γ, π, c I )| < ∞ (V.10) is absolutely convergent. We can now state the theorem [22] in question Theorem V.1 Let the loop transform of a measure be such that the characteristic functional is absolutely summable. Then the associated family of compatible measures on A γ is given by: This is a precise analogue of the inverse Fourier transform in the linear case. VI. THE HILBERT SPACE OF DIFFEOMORPHISM INVARIANT STATES Our discussion in sections 4 and 5 has served to introduce and study the auxiliary Hilbert space H aux = L 2 (A/G, dµ 0 ). As this space carries a ⋆representation of the algebra (III.10) defined by the loop and strip operators (T α andT S ), we have implemented steps 1-4 of the refined algebraic quantization program (see section II A). In the present section, we will complete the remaining steps (5 and 6) and construct the Hilbert space of diffeomorphism invariant states. For simplicity, we assume throughout this section that the underlying manifold Σ is R 3 (although the results on R n are identical). A key step in our construction will involve an appropriate averaging of spin-network states over the diffeomorphism group. This averaging procedure was considered, independently, by John Baez [36] as a tool for constructing a rich variety of diffeomorphism invariant measures on A/G. A. Formulation of the diffeomorphism constraint Recall that the diffeomorphism constraint is given by: Let us considered the smeared version of this constraint, where N a are complete analytic vector fields on Σ. (We require analyticity because the edges of our graphs are assumed to be analytic. See Sec. IV and V.) Denote by ϕ t the 1-parameter family of diffeomorphisms generated by N a on Σ. Now, as shown in Appendix C, V N has a natural action on the space of smooth functions on A/G which can be used to define a 1-parameter family U (t) of unitary operators on H aux , providing us a faithful, unitary representation of the group ϕ t . 
On spin network states, the action of the operator U ϕ corresponding to ϕ is given by: where ϕα is the image of the graph α under the analytic diffeomorphism and ϕ π and ϕ c are the corresponding vector of representations and contraction associated with the new graph ϕα. Thus, as needed in the group averaging procedure, each constraint V N is promoted to a 1parameter family of unitary operators. Varying N a , we obtain, on H aux , a unitary representation of the group of diffeomorphisms on Σ generated by complete analytic vector fields. Thus, there are no anomalies. Note that this is not a formal argument; the operators U (t) corresponding to V N are rigorously defined on a proper Hilbert space, and they are unitary because the measure µ 0 is diffeomorphism invariant. Note that U ϕ preserves the space Cyl ∞ (A/G) of smooth cylindrical functions. Since Cyl ∞ (A/G) is also preserved by our algebra of elementary quantum operators (generated byT α andT S ), it is natural to take Cyl ∞ (A/G) to be the dense subspace Φ ⊂ H aux of step 5 ′ b of the refined algebraic quantization program. Finally, we need to specify a topology on Φ. Finite dimensional examples suggest that we let one of the standard nuclear topologies of the C ∞ (A γ ) ∼ = C ∞ (SU n (2)) induce the required topology on Cyl ∞ (A/G). We will seek 'solutions of the constraints' in the topological dual Φ ′ , the space of cylindrical distributions. Diffeomorphisms have a natural action on φ ∈ Φ ′ by duality and we will say that φ ∈ Φ ′ is a solution of the diffeomorphism constraints if for all ϕ ∈ Diff(Σ) and φ ∈ Φ . B. The issue of independent sectors Having identified a suitable dense subspace Φ ⊂ H aux and having seen that its topological dual Φ ′ is large enough to contain diffeomorphism invariant distributions, we now wish to construct a map η : Φ → Φ ′ that completes step 5 ′ c in our program. This will, however, be more complicated than for the examples in Sec.II due to the fact that each state |φ ∈ Φ has an infinite 'isotropy group' of diffeomorphisms that leave |φ invariant. Thus, the sum in (VI.5) was not over the entire diffeomorphism group, but only over the orbit of the state |α, π, c ∈ Φ. Because the sum in (VI.5) itself depends on the state |α, π, c , our definition of the inner product on V dif f will have to take into account the fact that the orbit size is state-dependent. While the infinite size of the orbits would appear to make this difficult, a simplification will occur as the presence of 'infinitely different' isotropy groups will imply that L 2 (A/G, dµ 0 ) carries a reducible representation of the algebra of observables. In fact, we show below that H aux can be written as a direct sum of subspaces such that, on each subspace, the sizes of orbits are 'comparable'. This will allow us to give a well defined averaging procedure by treating each such subspace separately in section VI C. A similar situation is discussed in appendix A. In order to classify these isotropy groups, let us consider for each spin-network state |α, π, c the collection E α of analytic edges of the graph α. For technical reasons, we shall focus on graphs for which, given any edge e ∈ E α , there is an analytic real function f which vanishes on the maximal analytic curveẽ that extends e, but nowhere else. We shall call such graphs (and their associated curves) 'type I', while all others are 'type II.' 
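Since the spin-network states form an orthonormal basis and an analytic diffeomorphism simply carries one labelled graph into another, U ϕ acts by permuting basis labels and is therefore automatically unitary. The toy sketch below is purely illustrative: a finite set of points stands in for Σ, a permutation of those points stands in for an analytic diffeomorphism, and contractors are suppressed. It makes the relabelling action and its unitarity explicit.

```python
def edge(a, b):
    # an (unoriented) edge is just a sorted pair of point labels
    return tuple(sorted((a, b)))

def relabel(network, phi):
    """Action of a 'diffeomorphism' phi (a dict: point -> point) on a labelled
    graph: carry every edge along, keeping its spin label."""
    return frozenset((edge(phi[a], phi[b]), j) for (a, b), j in network)

# two 'spin-network' labels: frozensets of (edge, spin) pairs
s1 = frozenset({(edge(1, 2), 0.5), (edge(2, 3), 1.0)})
s2 = frozenset({(edge(1, 3), 0.5)})

def inner(x, y):
    # spin-network states are orthonormal: <x, y> = delta_{x, y}
    return 1.0 if x == y else 0.0

phi = {1: 2, 2: 3, 3: 4, 4: 1}   # a relabelling of the point set {1, 2, 3, 4}

# U_phi permutes basis labels, so all inner products are preserved (unitarity)
for x in (s1, s2):
    for y in (s1, s2):
        assert inner(relabel(x, phi), relabel(y, phi)) == inner(x, y)

# A diffeomorphism-invariant 'distribution' can only see the orbit of a state;
# e.g. the number of edges is unchanged under relabelling.
assert len(relabel(s1, phi)) == len(s1)
print("relabelling acts unitarily; orbit invariants are diffeomorphism invariant")
```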
Note that the collection ofẽ defined by the type I graph α intersect at most a countable number of times and so define a graphα with countably many edges. Now, given any n type I maximal analytic curves (i.e., curves which cannot be analytically extended) in R 3 and any distinct maximal analytic curveẽ (not necessarily of type I), there is a multiparameter family of analytic diffeomorphisms that preserves the n type I curves but does not preservẽ e. To see this, begin with any constant vector field X 0 on R 3 which is not everywhere tangent toẽ oñ e. Let f i be the real analytic function that vanishes exactly on the ith maximal type I curve. Then the product f of the the f i is a real analytic function that vanishes exactly on the union of these curves. Thus, the complete analytic vector field X = f e −f 2 X 0 exponentiates to a one parameter family of analytic diffeomorphisms that preserves the n maximal type I curves, but does not preservẽ e. As in section II, we consider the algebra B (⋆) phys of operators A on H aux that i) are defined on Φ and map Φ into itself, ii) have adjoints A † defined on Φ which map Φ into itself, and iii) commute with the action of all diffeomorphisms ϕ. (Note that the last condition implies that A † also commute with constraints.) Let |φ 1 = |α 1 , π 1 , c 1 and |φ 2 = |α 2 , π 2 , c 2 be the spin-network states above, so that there are infinitely many diffeomorphisms ϕ which moveα 2 but move no edge ofα 1 . Then, for such a ϕ, the matrix elements Thus, either φ 1 |A|φ 2 aux = 0 or the vector A † |φ 1 has an infinite number of equal components. However, |φ 1 ∈ Φ lies in the domain of A † so that A † |φ 1 is normalizable, whence φ 1 |A|φ 2 aux must vanish. Since the adjoint of A is also in B aux are truly independent in the sense that they are not mixed by any physical operators A ∈ B (⋆) phys or any diffeomorphism ϕ. Thus, from now on, we will treat each H [α] aux individually. C. A Family of Maps We now wish to implement step 5 ′ of the program separately within each 'independent sector' H . We will identify a vector space V and will contain only diffeomorphism invariant distributions. (Here, a = a [α] ∈ R + . For simplicity of notation, we will not make the dependence of a on [α] explicit.) To construct the map η a , let us first give its action on functions |f ∈ π H γ, π aux associated with some fixed graph γ withγ ∈ [α]. The action of η a on general states |φ ∈ Φ then follows by (finite) anti-linearity. To construct this map, we will need to consider the 'isotropy' group Iso(γ) of diffeomorphisms that mapγ to itself and the 'trivial action' subgroup T A(γ) of Iso(γ) which preserves each edge ofγ separately. We will also need the quotient group GS(γ) = Iso(γ)/T A(γ) of 'graph symmetries' ofγ and some set S(γ) of analytic diffeomorphisms which has the property that, for all β ∈ [γ] there is exactly one ϕ ∈ S(γ) that mapsγ toβ. The appropriate maps are then given by where, in the second sum, ϕ 2 is any diffeomorphism in the equivalence class [ϕ 2 ]. For the reader who feels that this definition has been 'pulled out of a hat,' we will provide a heuristic 'derivation' below in section VI D by 'renormalizing' the map given by naive group averaging. In order to show that η a |f does in fact define an element of Φ ′ , note that its action on any state |g ∈ π H β, π aux is given by where ϕ 0 is any diffeomorphism that mapsγ toβ. Becauseγ may have an infinite number of edges, GS(γ) may be infinite as well. 
Nonetheless, we will now show that the above sum contains only a finite number of nonzero terms. First, note that if there are any nonzero terms at all, we may take ϕ 0 γ = β without loss of generality. In this case, a term in (VI.11) is nonzero only if the associated ϕ 2 preserves that graph γ. The key point is to note that, sinceγ may be constructed by analytically extending the edges of the graph γ, the action of any analytic diffeomorphism on the edges of γ determines the action of this diffeomorphism on every edge in the extended graphγ. Thus, the diffeomorphisms ϕ 2 ∈ S(γ) that preserveγ must rearrange the edges of γ in distinct ways. Since γ contains only a finite number of edges, it follows that there can be at most a finite set of diffeomorphisms ϕ 2 in GS(γ) that preserve γ. There are thus only finitely many terms in (VI.11). The fact that (VI.10) defines an element of Φ ′ then follows by (finite) linearity. The space V [α] dif f is then defined to be the image of η a . It is clear from the form of the sum (VI.11) that η [α] a is real and positive so that the inner product (VI.11) is well-defined, Hermitian, and positive definite. We may therefore complete each V (VI.12) (Here, without loss of generality, we take φ 1 , φ 2 cylindrical overα.) It follows that the map A → A phys (where A phys (ηφ) = η(Aφ) ) defines an anti dif f . Thus, the "reality conditions" on physical observables have been incorporated. D. Some final Heuristics For those who are interested, we now present a short heuristic 'derivation' of (VI.11) in which we first average over the entire group of diffeomorphisms (in analogy with [13,14] and section II B) and then 'renormalize' the resulting distribution by canceling (infinite) volumes of isotropy groups. Because a sum of the form ϕ∈Diff(Σ) ϕ|φ diverges (even as an element of Φ ′ ), we attempt to remove this divergence by comparing the inner product of two distributions ψ and ψ in V dif f with the norm of some reference distribution ρ which lies in the same vector space V [α] dif f . Let us suppose that these 'heuristic distributions' are obtained by averaging |φ , |ψ , and |ρ ∈ Φ [α] over the diffeomorphism group. For convenience, we will also fix some particular extended analytic graphα and assume that |φ , |ψ , and |ρ lie in Hα aux . Then, the ratio of the inner product of φ and ψ to the norm of ρ is so that a given diffeomorphism ϕ contributes to this sum only if it preservesα. That is, we need only sum over the isotropy group Iso(α). Note that we may rewrite the sums over diffeomorphisms in (VI.13) as sums over the cosets GS(α) (which give a finite result by the above discussion) multiplied by the (infinite) size of the trivial action subgroup T A(α). Formally canceling these infinite factors in the numerator and denominator, we arrive at where the sum is over the equivalence classes in GS(α) and ϕ is an arbitrary representative of [ϕ]. This motivates the definition (VI.10) of the maps η [α] a and the inner product (VI.11). E. Subtleties We have seen that the Hilbert space H dif f that results from solving the (Gauss and the) diffeomorphism constraints can be decomposed as a direct sum of Hilbert spaces H phys -observables which strongly commute with constraints-do not mix states from distinct Hilbert spaces that feature in the direct sum. Recall, however, that the physical observables have to commute with constraints only weakly and there may well exist weak observables which connect distinct Hilbert spaces. 
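The structure of the maps η a and of the induced inner product can be previewed in a finite-dimensional caricature in which the gauge group is a finite permutation group acting on an orthonormal basis; there the infinite volumes that had to be cancelled in the heuristic derivation above are absent and η is literally a sum over the group. The sketch below is a toy model under these assumptions, not the construction of Sec. VI C itself; it checks that η|f> is group invariant and that <η f1, f2> defines a Hermitian, positive semi-definite product.

```python
import numpy as np
from itertools import permutations

n = 4                                  # toy 'basis of spin-network states'
group = list(permutations(range(n)))   # toy gauge group: S_4, permuting basis labels

def U(g):
    """Unitary (permutation) matrix representing the group element g."""
    m = np.zeros((n, n))
    for i, gi in enumerate(g):
        m[gi, i] = 1.0
    return m

def eta(f):
    """Group averaging map: eta|f> = sum_g U(g)|f>, regarded as a dual vector."""
    return sum(U(g) @ f for g in group)

rng = np.random.default_rng(1)
f1, f2 = rng.normal(size=n), rng.normal(size=n)
ef1 = eta(f1)

# 1) eta(f1) is invariant under every group element
assert all(np.allclose(U(g) @ ef1, ef1) for g in group)

# 2) <eta f1 | f2> := eta(f1) . f2 is symmetric (real Hermitian) ...
assert np.isclose(ef1 @ f2, eta(f2) @ f1)

# 3) ... and positive semi-definite: <eta f | f> = |G| * |P f|^2 >= 0,
#    with P the projector onto the invariant subspace
assert ef1 @ f1 >= -1e-12
print("eta is invariant and induces a Hermitian, positive semi-definite product")
```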
From a physical viewpoint, therefore, we need to focus on the irreducible representations of the algebra of weak observables. If there are no further constraints, (as in the Husain-Kuchař model), these irreducible sectors are properly thought of as separate and, in the standard jargon, superselected. Since by assumption the Hamiltonian operator commutes with all constraints, dynamics will leave each sector invariant. Indeed, no physical observable can map one out of a superselected sector. Thus, a physical realization of the system will involve only one such sector and just which sector arises must be determined by experiment. Unfortunately, as the matter stands, we do not have a manageable characterization of these sectors because we focussed only on strong observables. (In general, weak observables do not satisfy (II.4).) If the diffeomorphism group represents only a sub-group of the full gauge group (as in the case of general relativity), then there can be a further complication and the situation becomes quite subtle. On the one hand, because we have more constraints, one expects there to be fewer observables. On the other hand, commutator of the an operator with the diffeomorphism constraints may be equal to one of the new constraints. Then, while the operator would not be an observable of the partial theory that ignores the additional constraints, it would be an observable of the full theory. Curiously, this is precisely what happens in the case of 3-dimensional, Riemannian general relativity (i.e., the ISU (2) Chern-Simons theory). The Wilson loop operatorsT α fail to be weak observables if we consider only the diffeomorphism constraint but they are weak observables of the full theory. Furthermore, they mix the independent sectors which are super-selected with respect to diffeomorphisms. We expect that the situation will be similar in 4dimensional general relativity. Thus, we expect that the physical states of this theory will not be confined to lie in just one H [ α] dif f ; as far as general relativity is concerned, one should not think of these sectors as being physically super-selected. Finally, note that we have asked that the physical states be invariant only under diffeomorphisms generated by vector fields. Large diffeomorphisms are unitarily implemented in the physical Hilbert space; they are symmetries of the theory but not gauge. One may wish to treat them as gauge and ask that the "true" physical states be invariant under them as well. If so, one can again apply the group averaging procedure, now treating the modular group as the gauge group. In the case of 3-dimensional Riemannian general relativity on a torus, for example, this procedure is successful and yields a Hilbert space of states that are invariant under all diffeomorphisms. VII. DISCUSSION In this paper, we have presented solutions to the Gauss and the diffeomorphism constraints for a large class of theories of connections. The reader may be concerned that we did not apply the quantization program of Sec. II to the Gauss constraint but instead solved it classically. However, we chose this avenue only for brevity; it is straightforward to first use the program to solve the Gauss constraint and then face the diffeomorphism constraint. In this alternate approach, one begins with the space A of generalized connections (see section 4.2) as the classical configuration space and lets the auxiliary Hilbert space be L 2 (A, dµ ′ 0 ), where µ ′ 0 is the induced Haar measure on A (see Ref. [7,12]). 
Next, one introduces the Gauss constraints as operators on the new auxiliary Hilbert space. The resulting unitary operators just implement the action of the group G of generalized gauge transformations on the Hilbert space. Since G is compact, the resulting group averaging procedure is straightforward and leads us to L 2 (A/G, dµ 0 ) as the space of physical states with respect to the Gauss constraints. One is now ready to use Sec. VI to implement the diffeomorphism constraints. The final picture that emerges from our results can be summarized as follows. To begin with, we have the auxiliary Hilbert space H aux . While it does not appear in the final solution, it does serve three important purposes. First, it ensures that real, elementary functions on the classical phase space are represented by self-adjoint operators, so that the "kinematical reality conditions" on the full phase space are incorporated in the quantum theory. Second, it enables us to promote constraints to well-defined operators thereby making the analysis of potential anomalies mathematically sound. Finally the space Φ, whose topological dual Φ ′ is the "home" of physical quantum states, is extracted as a dense sub-space of H aux . The physical statesφ ∈ Φ ′ are obtained by "averaging" states φ ∈ Φ over the orbits of the diffeomorphism group appropriately. Care is needed because the orbits themselves have an infinite volume and because, in general, different orbits have different isotropy groups. These features lead to diff-superselected sectors. Each sector is labeled by the diffeomorphism class [α] of "maximally extended" (type I) graphsα. Operators on H aux which leave Φ invariant have an induced action on the topological dual, Φ ′ of Φ. If they commute with the diffeomorphism operators on H aux , they descend to the space V dif f of (diff-)physical states. The sectors are diff-superselected in the sense that each of them is left invariant by operators on Φ ′ which descend from observables -i.e., self-adjoint operators which commute with the diffeomorphism operators-on H aux . The induced scalar product on V diff is unique up to an overall multiplicative constant on each diff-superselected sector. It automatically incorporates the physical reality conditions. (The ambiguity of multiplicative constants would be reduced if there exist weak observables which mix these sectors which are superselected by strong observables.) How does this situation compare to the one in the general algebraic quantization program of Ref. [17,18]? In the final picture, the inner product is determined by the reality conditions. However, the group averaging strategy enables one to find this inner product without having to find the physical observables explicitly; the inner product on H aux which incorporates the kinematical reality conditions on the full phase space descends to vp. This is an enormous technical simplification. On the conceptual side, there are now four inputs into the program: choice of a set of elementary functions (labeled by loops and strips in our case), of a representation of the corresponding algebra (on L 2 (A/G, µ 0 ) in our case), of expressions of the regularized constraint operators (which, in our case, implement the natural action of the diffeomorphism group on H aux ), and of the subspace Φ (Cyl ∞ (A/G) in our case). We have shown that the choices we made are viable and quantization can be completed. There may of course be other, inequivalent quantum theories, which correspond to different choices. 
Indeed, even in Minkowski space, a classical field theory can be quantized in inequivalent ways. We expect, however, that there exists an appropriate uniqueness theorem which singles out our solution, analogous to the theorem that singles out the Fock representation for free field theories. What are the implications of these results to the specific models discussed in Sec. III? For the Husain-Kuchař model, we have complete solutions. For Riemannian general relativity, on the other hand, we have only a partial result since the Hamiltonian constraint is yet to be incorporated. However, our analysis does provide a natural strategy to complete the quantization. For, we already have indications that the projective methods can be used also to regulate the Hamiltonian constraint operator on diffeomorphism invariant states. If this step can be completed, one would check for anomalies. If there are none, one would again apply the group averaging procedure to find solutions. This task may even be simpler now because, given the structure of the classical constraint algebra, one would expect the Hamiltonian constraints to commute on diffeomorphism invariant states. The procedure outlined in Appendix A would then lead to the physical Hilbert space for the full theory. As indicated in Sec. VI E, however, subtleties will arise because of the observables which commute with the constraints only weakly and the final Hilbert space is likely to contain elements from different diff-superselected sectors. Furthermore, to extract "physical" predictions, one would almost certainly have to develop suitable approximation schemes. However, this task would be simplified considerably if we already know that a consistent quantum theory exists. Indeed, in this respect, the situation would be comparable to the one currently encountered in atomic and molecular physics where approximations schemes are essential in practice but the knowledge that the exact Hamiltonian exists as a well-defined self-adjoint operator goes a long way in providing both confidence in and guidelines for these approximations. For Lorentzian general relativity, one can begin with the formulation in which the spin connection is the configuration variable. For this case, the results of this paper again lead to a complete solution to the Gauss and the diffeomorphism constraints. Unfortunately, as mentioned in the Introduction, the Hamiltonian constraint is unmanageable in these variables and the best strategy is to perform a transformation and work with self-dual connections [2]. Classically, the required canonical transformation is well-understood. Its quantum analog is an appropriate "coherent state transform" which would map complex-valued functions of spin connections to holomorphic functions of the self-dual connections. Such a transform is already available [3] and it seems fairly straightforward to carry over our treatment of the diffeomorphism constraint to the holomorphic representation. However, it is far from being obvious that the Hamiltonian constraint can be treated so easily in the holomorphic representation. Another strategy is to begin with the Riemannian model, obtain physical states and then pass to the holomorphic representation via an appropriate generalization of the Wick rotation procedure. Thus, whereas in the Riemannian case, results of this paper provide a clear avenue, in the Lorentzian case, new inputs are needed. Work is in progress along the two lines indicated above. 
The canonical approach to quantum gravity is quite old; foundations of the geometrodynamic framework were laid by Dirac and Bergmann already in the late fifties. The precise mathematical structure of the classical configuration and phase spaces became clear in the seventies. However, these analyses dealt only with smooth fields while, as is well-known, in quantum field theory one has to go beyond such configurations. The required extensions are non-trivial and are, in fact, yet to be carried out in the metric representation. Consequently, in the traditional geometrodynamical approach, the formulation and imposition of quantum constraints have remained at a formal level even for the diffeomorphism constraint. We have seen that the situation changes dramatically if one shifts the emphasis and works with connections. (Note that these can be SU (2) spin connections; they don't have be self-dual. Since the spin connection is completely determined by the triads, the corresponding representation provides an alternative framework to solve the quantum Gauss and diffeomorphism constraints of the triad geometrodynamics.) Now, problems of quantum field theory can be faced directly and the general level of mathematical precision is comparable to that encountered in rigorous quantum field theory. Finally, note that this became possible only because of the availability of a calculus on the quantum configuration space which does not refer to a background field such as a metric. Thus, the projective techniques summarized in Sec. IV are not a luxury; they are essential if one wants to ensure that inner products and operators are well-defined in the quantum theory. Most of theoretical physics, however, does not require such a high degree of precision. Why, then, is so much care necessary here? The main reason is that we have very little experience with nonperturbative techniques. We have already seen that the perturbative strategy, which is so successful in theories of other forces of Nature, fails in the case of gravity. Hence, if one wishes to pursue a new approach, it is important to have an assurance that the quantum theory we are dealing with is internally consistent and that the problems that arise in perturbative treatments are not just swept under a rug. An obvious way to achieve certainty is to work at a high level of mathematical precision. The mathematical framework could, however, be improved in two directions. First, the functional calculus we used is based, in an essential way, on the assumption that all edges of our graphs are analytic. If we weaken this assumption and allow edges which are only C ∞ , a number of technical problems can arise since, for example, two C ∞ curves can have an infinite number of intersections in a finite interval. On physical grounds, on the other hand, smoothness seems more appropriate than analyticity and it would be desirable to extend this framework accordingly. Furthermore, if we could work with smooth loops, the discussion of the "independent sectors" in Sec. VI would simplify considerably; it would not be necessary to divide the spin networks into types. The second improvement would be more substantial. The present mathematical framework is based on the assumption that traces of holonomies should become welldefined operators on the auxiliary Hilbert space. Once this assumption is made, one is naturally led to regard A/G as the quantum configuration space and use on it the calculus that is induced by the projective techniques. 
The assumption is not unreasonable for a diffeomorphism invariant theory and has led to a rich structure which, as we saw, is directly useful in a number of models. (The framework has also been used to find new results in 2dimensional Yang-Mills theory [37] which happens to be invariant under all volume preserving diffeo-morphisms.) However, it is quite possible that, ultimately, the assumption will have to be weakened. To do so, we may need to feed more information about the underlying manifold into the quantum configuration space. Our present construction does capture a part of the manifold structure through its use of analytic graphs and also has some topological information, e.g., of the first homotopy group of the manifold. However, it does not use the notion of convergence of a sequence of graphs which knows much more about the topology of the underlying manifold. In the language of projective techniques (see Appendix B), it would be desirable to use the underlying manifold to introduce a topology on the label set and see how it influences the rest of the construction. These issues are currently being investigated. To illustrate the quantization program, we discussed a number of simple examples in section II B. To bring out some subtleties associated with the group averaging procedure, in this Appendix, we will consider a somewhat more general situation which, however, is simpler than the one considered in Sec. VI. In section II B, the group generated by the quantum constraints was Abelian and was represented by unitary operators U (g) in a Hilbert space H aux . The definition of the physical inner product involved a map η from a space Φ of test functions to its topological dual Φ ′ which was defined by integrating over the volume of the gauge group, η|f = ( dgU (g)|f ) † . As such, it is clearly important that no infinite subgroup should leave |f invariant so that the integral does not diverge. Thus, it is natural to ask if this method can be suitably modified to incorporate the case when some U (g) have eigenstates in H aux with eigenvalue 1. In this Appendix, we will analyze this issue in the general setting of Abelian constraints and show that the answer is always 'yes,' though the procedure is somewhat more subtle. Recall that our intent is to construct an irreducible representation of a ⋆-algebra of physical operators and that we suppose this algebra to be represented on H aux . At least when this algebra is generated by bounded operators, we will see that the representation on H aux is reducible whenever 1 is a part of the discrete as well as the continuous spectrum of some U (g). Suppose that the representation of the gauge group is generated by some set U i of unitary operators for i in some label set I. Denote by S (d) i the subspace of H aux which is left invariant by U i , i.e., the space of eigenvectors with discrete eigenvalue 1. Since {1} is a set of zero measure in R, any state in H aux which is orthogonal to S 1 i can be built from spectral subspaces of U i with eigenvalue = 1. Now, solutions to the constraints in Φ ′ are of two types. First, each element of S d i , regarded as an element of Φ ′ , is a solution. Second, there is a subspace S c i obtained by group-averaging elements of Φ which are orthogonal to S d i . These two subspaces of physical states are orthogonal to each other. Consider now a bounded operator A which commutes with each U i . 
It is straightforward to check that the action of A preserves each of the two orthogonal subspaces; the action of A on Φ ′ does not mix the discrete and continuous eigenvalue 1 distributions of U i in Φ ′ . We now refine our group-averaging procedure as follows. First, decompose H aux as a direct sum of the subspaces H λ aux , where λ is a map λ : I → {d, c}. Thus, H λ aux is the subspace on which U i has continuous spectrum for λ(i) = c but has discrete spectrum for λ(i) = d. Since these subspaces are superselected, it is only meaningful to define a physical Hilbert space H λ phys for each H λ aux separately. This is done by projecting H λ aux to the zero spectrum of each U i with λ(i) = d and averaging as in section II B over the Abelian group generated by the U i with λ(i) = c. It then follows that operators induced by physical operators on H aux have the required ⋆-relations on each H λ phys . We would like to emphasize that, when the U i 's generate the entire gauge group, these superselection rules are not just an artifact of the mathematics but are important for a physical understanding of the system. They imply that the representation of the physical algebra on H aux is reducible, so that each H λ phys contains a separate representation of the algebra of physical operators. Which H λ phys is realized in a given situation must be determined experimentally. Furthermore, the super-selection rules described above have a close classical analogue due to Liouville's theorem. Consider a classical constraint function C i and a strong observable A that is a smooth function on the phase space. (The use of strong observables is not essential but simplifies the discussion.) The Hamiltonian vector field h A of any such A has the property that it maps any orbit of C i in the unconstrained phase space onto another orbit of C i . Heuristically, regions of the phase space that contain compact orbits correspond to the discrete spectrum of a corresponding U i and regions that contain non-compact orbits correspond to the continuous spectrum. Now, consider any set of compact orbits with non-zero but finite phase space volume. By Liouville's theorem, the exponentiated action of any Hamiltonian vector field preserves the finite volume of this set. As a result, h A cannot map this set of compact orbits to a bundle of non-compact orbits. Note that this is a direct analogy with the super-selection laws described above. Of course, if these orbits are not the full gauge orbits, but only those of a gauge subgroup, then such arguments are inconclusive when applied to weakly physical operators. This is because, under the action of the full gauge group, many of the above compact orbits may combine to form a single non-compact orbit, which could then be mapped onto non-compact orbits in a volume preserving way. APPENDIX B: PROJECTIVE LIMITS A general setting for functional integration over an infinite dimensional, locally convex, topological space V is provided by the notion of "projective families" [38,39]. This framework can be naturally extended to theories of connections where the relevant space A/G is non-linear [6,[8][9][10][11][12]. In the present appendix we will summarize the basic ideas which are implicitly used in the main text. Let L be a partially ordered directed set; i.e. a set equipped with a relation '≥' such that, for all S, S ′ and S ′′ in L we have: L will serve as the label set. 
A projective family (X S , p SS ′ ) S,S ′ ∈L consists of sets X S indexed by elements of L, together with a family of surjective projections, assigned uniquely to pairs (S ′ , S) whenever S ′ ≥ S such that We will assume that X S are all topological, compact, Hausdorff spaces and that the projections p SS ′ are continuous. In the application of this framework to theories of connections, carried out in Sec. IV, the labels S can be thought of as general lattices (which are not necessarily rectangular) and the members X S of the projective family, as the spaces of configurations associated with these lattices. The continuum theory will be recovered in the limit as one considers lattices with increasing number of loops of arbitrary complexity. Note that, in the projective family there will, in general, be no set X which can be regarded as the largest, from which we can project to any of the X S . However, such a set does emerge in an appropriate limit, which we now define. The projective limit X of a projective family (X S , p SS ′ ) SS ′ ∈L is the subset of the Cartesian product × S∈L X S that satisfies certain consistency conditions: (This is the limit that gave us in Sec. IV the quantum configuration A/G for theories of connections.) We provide X with the product topology that descends from × S∈L X S . This is the Tychonov topology. In the Tychonov topology the product space is known to be compact and Hausdorff. Furthermore, as noted in [9], X is closed in × S∈L X S , whence X is also compact (and Hausdorff). Note that the limit X is naturally equipped with a family of projections: Next, we introduce certain function spaces. For each S consider the space C 0 (X S ) of the complex valued, continuous functions on X S . In the union we define the following equivalence relation. Given f Si ∈ C 0 (X Si ), i = 1, 2, let us say: for every S 3 ≥ S 1 , S 2 , where p * S1,S3 denotes the pull-back map from the space of functions on X S1 to the space of functions on X S3 . Using the equivalence relation we can now introduce the set of cylindrical functions associated with the projective family (X S , p SS ′ ) S,S ′ ∈L , The quotient just gets rid of a redundancy: pullbacks of functions from a smaller set to a larger set are now identified with the functions on the smaller set. Note that in spite of the notation, as defined, an element of Cyl(X ) is not a function on X ; it is simply an equivalence class of continuous functions on some of the members X S of the projective family. The notation is, however, justified because, one can identify elements of Cyl(X ) with continuous functions on X . This identification was implicitly used in (IV.22). If the X S are differentiable manifolds then one can define spaces Cyl n (X ) of differentiable cylindrical functions in a a completely analogous way. These spaces play a crucial role in defining measures and regulated operators. APPENDIX C: UNEXPECTED CONSEQUENCES OF DIFFEOMORPHISM INVARIANCE In section VI, we used a group averaging procedure to solve the quantum diffeomorphism constraint. It was therefore natural to use the finite -rather than the infinitesimal-form of constraints. It turns out, however, that there is really no choice: it is not possible to define the infinitesimal form of the diffeomorphism constraints on any H aux = L 2 (A/G, dµ) which carries a faithful representation of the holonomy algebra when dµ is diffeomorphism invariant. In this appendix, we will discuss this somewhat surprising technical point. 
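Before turning to that point, the projective-family bookkeeping of Appendix B can be made concrete in a drastically simplified example: a sketch under the assumptions that the label set consists of finite sets of 'edges' ordered by inclusion, that the structure group is the finite group Z_2, and that the projections simply forget the extra edges. The consistency condition on the projections and the identification of cylindrical functions with their pull-backs can then be verified directly.

```python
from itertools import product

G = (0, 1)   # toy structure group: Z_2

def X(S):
    """Configuration space attached to a label S (a set of edge names):
    all assignments of a group element to each edge in S."""
    edges = sorted(S)
    return [dict(zip(edges, vals)) for vals in product(G, repeat=len(edges))]

def proj(x, S):
    """Projection p_{S'S}: forget the edges not contained in the smaller label S."""
    return {e: x[e] for e in S}

S, S1, S2 = frozenset({"e1"}), frozenset({"e1", "e2"}), frozenset({"e1", "e2", "e3"})

# consistency condition: p_{S2 S} = p_{S1 S} o p_{S2 S1} whenever S2 >= S1 >= S
for x in X(S2):
    assert proj(x, S) == proj(proj(x, S1), S)

# a cylindrical function defined on the small label S ...
f_S = lambda x: (-1) ** x["e1"]
# ... identified with its pull-back to the larger label S2 (read through the projection)
f_S2 = lambda x: f_S(proj(x, S))
# the pull-back depends only on the edges in S, as the equivalence relation requires
for x in X(S2):
    for y in X(S2):
        if x["e1"] == y["e1"]:
            assert f_S2(x) == f_S2(y)
print("projective consistency and the cylindrical identification hold in the toy family")
```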
Let N a denote a complete analytic vector field on Σ and ϕ t the corresponding flow of analytic diffeomorphisms. Then, from (III.2), we see that the smeared version of the diffeomorphism constraint is given by: (C.1) Let us equip A/G with one of the standard Sobolev topologies [40] and denote by N and ϕ t the vector field and the flow on A/G induced by N a . Given a smooth function ψ on A/G, it is then easy to write out the action of the desired operatorV N on ψ: Hence, the exponentiated version of the constraint is given simply by: Sinceφ extends naturally to A/G, it is straightforward to extend the action of U N to our H aux , which we will denote again by U N . If the measure on A/G is diffeomorphism invariant, U N are unitary operators, hence defined on all of H aux . It is now obvious that the algebra of these operators is closed; there are no anomalies. The result is, however, non-trivial because our constraint operators U N (t) are rigorously defined on the auxiliary Hilbert space. It is known, for example, that if one uses a lattice regularization to give meaning to the formally defined constraint operators, anomalies do result. What would happen if we try to extend to H aux the action of the infinitesimal constraintsV N instead? Since Wilson loop variables are smooth functions on A/G, let us begin by setting ψ(A) = T α (A) on A/G. Then, we have: where α t = φ t α and the point A indicates that the limit is taken pointwise in A/G. The limit is of course a well-behaved smooth function on A/G. However, it fails to be a cylindrical function. (Note that U N (t) • T α = T φ(t)·α , on the other hand, is cylindrical.) Hence, one might suspect that there may be a difficulty in extending the operatorV N to H aux . We will see that this is the case. More precisely, we now show that for a diffeomorphism invariant measure µ on A/G to be compatible with a well defined infinitesimal generator of the diffeomorphism constraint, µ must have a very special support. The resulting representation of theT α algebra would then be so unfaithful as to be physically irrelevant. Indeed, let µ denote a diffeomorphism invariant measure and α t = ϕ t α as above. For the diffeomorphism constraint to be well defined we must have (at least for "most" of the loops α in Σ) lim t→0 T αt − T α 2 L 2 = lim t→0 A/G (T αt − T α ) 2 dµ = 0 . (C.4) From diffeomorphism invariance of the measure it is clear that and that there exists t 0 > 0 such that A/G T α T αt dµ = k = const , f or t : 0 < t < t 0 . (C.5) (To see this we can consider a flow ϕ ′ s of analytic diffeomorphisms that leave α invariant and such that ϕ ′ s α t = α t ′ (t,s) .) For the limit in (C.4) to be equal to zero we must have k = A/G T 2 α dµ, which from (C.4) implies that in L 2 (A/G, µ) T αt = T α , ∀t : 0 < t < t 0 . (C.6) Now, (C.6) implies that the representation ρ (see (IV.4)) of the holonomy algebra on L 2 (A/G, µ) is not faithful since T αt − T α = 0 as elements of HA, while ρ(T αt − T α ) = 0 as operators on L 2 (A/G, µ). Thus, the support of the measure µ is so special that it is not suitable as a kinematical measure in quantum theory. Put differently, in any interesting representation of the holonomy algebra, and therefore the infinitesimal generators of the diffeomorphism constraints can not be well defined. APPENDIX D: GEOMETRICAL OPERATORS On the phase space of Riemannian general relativity, the momentum variableẼ a i has the interpretation of a density weighted triad. 
Hence, one can use it to construct functions on the phase space that carry geometrical information. For example, the volume of a region R within Σ is be given by: where η abc is the Levi-Civita tensor density on Σ. Similarly, the area of a 2-surface S within Σ defined by, say, x 3 = const is given by: The question then arises: are there well-defined geometric operatorsV R and S on H aux ? In absence of matter fields, V R and A S fail to be observables since they are not diffeomorphism invariant. Hence, the corresponding operators will not represent physical observables. However, if we bring in matter sources and define the regions R and surfaces S using these fields, thenV R and S would be observables with respect to the diffeomorphism constraints [41]. Therefore, it is of considerable interest to try to construct these operators in the kinematical setting of Sec.IV and explore their properties. At first sight, it seems difficult to make sense out of these operators. To begin with,Ê a i itself is not a well-defined operator on H aux . Second, the desired operators would require products ofÊ a i evaluated at the same point, and, furthermore, a square-root! Nonetheless, it turns out that these formal expressions can be regulated satisfactorily to yield well-defined operators on H aux . The regularization procedure involves point-splitting and it is necessary to fix a gauge and a background metric (or coordinate system) in the intermediate stage. However, when the regulator is removed, the final expression is not only well-defined but independent of the background structures used in the procedure. The overall procedure is similar to the one used in rigorous quantum field theories. Furthermore, somewhat surprisingly, for suitable operators such asV R and S , the situation is better than what one might have expected: there is no need to renormalize, whence the final answers have no free parameters. Finally, the operators are essentially self-adjoint on H aux and their spectra are often discrete. Thus, the "quantum geometry" that emerges from our framework has certain essentially discrete elements which suggest that the use of a continuum picture at the Planck scale is flawed. These results are analogous to the ones obtained by Rovelli and Smolin [21] in the loop representation. However, the precise relation is not known. Here, we will illustrate these results with the area operator. For simplicity, let us suppose that we can choose coordinates on Σ in a neighborhood of S such that S is given by x 3 = const and x 1 , x 2 coordinatize S. Then, we can write A S as: where f ǫ (x, y) (is a density of weight 1 in x and function in y and that) tends to δ 3 (x, y) in the limit. For concreteness, we will construct it from Θ density/functions: (There is thus an implicit background density of weight one in x in the expression of Θ.) Now, let us begin by considering a cylindrical function F γ on the space A of smooth connections. By using a group-valued chart on A γ , F γ can be expressed as F γ (A) = f (g 1 , ..., g N ) where N is the number of edges in γ and g I = P exp eI A. A simple calculation yields: The right side is a well-defined function of smooth connections A. However, it is no more a cylindrical function because of the form of the terms involving integrals over edges. 
We thus have two problems: the action of O ǫ (x) is not well-defined on functions of generalized connections, and, even while operating on functions of smooth connections, the operator sends cylindrical functions to more general ones. We will see that the two problems go away once the regulator is removed. Let us consider the first term in detail; an analogous treatment of the second term shows that it does not contribute to the final result. Ultimately, we want to integrateÔ(x) over S. Hence, we want x to lie in S. Then, for sufficiently small ǫ, because of the f ǫ terms, only the edges that intersect S contribute to the sum. (Furthermore, since only the third component of the tangent vectors count in O I ǫ , edges which lie within S do not contribute.) Without loss of generality, we can assume that intersections occur only at vertices of γ (since we can always add vertices in the beginning of the calculation to ensure this). Now, if we write out the functions f ǫ explicitly and Taylor expand, around each vertex at which γ intersects S, the group elements that appear in the integrals we can express O I ǫ • f as a sum: ..., g n ) . (D. 6) Here v α are the vertices of γ that lie in S, e Iα , e Jα are the edges passing through the vertex v α , X Iα is the right (left) invariant vector field on the copy of the group corresponding to the edge e Iα which points at the identity of the group in the i-th direction, if the edge is oriented to be outgoing (incoming) at the vertex, and the constant K(I α , J α ) equals +1 if the two edges lie on the opposite side of S, −1 if they lie on the same side and vanishes if the tangent vector of either edge is tangential to S. Let us try to take the limit ofÔ I ǫ (x) • f as ǫ tends to zero. In this limit, each Θ ǫ tends to a 1dimensional Dirac δ-distribution, and the expression then diverges as 1ǫ 2 . As is usual in field theory, we can first renormalize the expression by ǫ 2 and then take the limit. Now, the limit clearly exists. However, it depends on the background density implicit in the expression of Θ and hence the resulting operator carries the memory of the background structure used in the regularization. That is, the ambiguity in the final answer is not of a multiplicative constant, but of a background density of weight one. (This is to be expected since the left hand side is a density of weight 2 (in x and y) while the 2-dimensional Dirac δ-distribution is only a density of weight 1.) Because of the background dependence, the resulting operator is not useful for our purposes. However, if we take the square-root of the regulated operator and then take the limit, we obtain a well-defined result: Note that, now, no renormalization is necessary. In the final result, both sides are densities of weight one and there is neither background dependence, nor any free parameters. With proper specification of domains, the operator under the square-root can be shown to be a non-negative self-adjoint operator on L 2 ((SU (2)) n , dµ H ). (For example, if there are just two edges at a vertex v α , one on each side of S, then the operator is just the (negative of the) Laplacian.) Hence, the square-root is well-defined. We can therefore construct an area operator: Clearly, this operator maps cylindrical functions to cylindrical functions. It is straightforward to show it satisfies the compatibility conditions discussed in Sec. IV and thus leads to a well-defined operator on H aux = L 2 (A/G, dµ o ). This operator is selfadjoint and has a discrete spectrum. 
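In the simplest case mentioned above, a single vertex on S with one edge on each side, the operator under the square root is (minus) the Laplacian on SU(2), whose eigenvalue on the spin-j subspace is the Casimir value j(j+1); each such intersection therefore contributes a term proportional to sqrt(j(j+1)) to the area. The following sketch is a check of standard SU(2) representation theory rather than a derivation from the regularization above, and all overall prefactors are omitted; it constructs the spin-j generators, verifies that the Casimir equals j(j+1) times the identity, and lists the resulting discrete contributions.

```python
import numpy as np

def su2_generators(j):
    """Spin-j angular momentum matrices J_x, J_y, J_z (hbar = 1)."""
    dim = int(round(2 * j + 1))
    m = j - np.arange(dim)                       # weights m = j, j-1, ..., -j
    jz = np.diag(m).astype(complex)
    jp = np.zeros((dim, dim), dtype=complex)     # raising operator J_+
    for k in range(1, dim):                      # <m[k] + 1 | J_+ | m[k]>
        jp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    jm = jp.conj().T
    return (jp + jm) / 2, (jp - jm) / (2 * 1j), jz

print(" j    Casimir eigenvalue   sqrt(j(j+1))")
for j in (0.5, 1.0, 1.5, 2.0):
    jx, jy, jz = su2_generators(j)
    casimir = jx @ jx + jy @ jy + jz @ jz
    # the Casimir is j(j+1) times the identity on the spin-j subspace
    assert np.allclose(np.linalg.eigvalsh(casimir), j * (j + 1))
    print(f"{j:4.1f}   {j * (j + 1):10.3f}          {np.sqrt(j * (j + 1)):8.3f}")
```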
The volume operator can be treated in a similar manner. To conclude, note that there is a striking qualitative resemblance between this analysis of properties of geometry and that of physical properties of polymers in condensed matter physics [42]. In both cases, the basic excitations are "loopy" rather than "wavy"; they reside along 1-dimensional graphs rather than on 3-dimensional volumes. However, under suitably complex conditions, they resemble genuinely 3-dimensional systems [42,43].
Analyzing annual changes in network structures of a social media application-based information-sharing system in a Japanese community Background Understanding the evolution of social network services (SNSs) can provide insights into the functions of interprofessional information-sharing systems. Using social network analysis, we aimed to analyze annual changes in the network structure of SNS-based information sharing among healthcare professionals over a 3-year period in Japan. Methods We analyzed data on SNS-based information sharing networks with online message boards for healthcare professionals for 2018, 2019, and 2020 in a Japanese community. These networks were created for each patient so that healthcare professionals could post and view messages on the web platform. In the social network analysis (SNA), healthcare professionals registered with a patient group were represented as nodes, and message posting and viewing relationships were represented as links. We investigated the structural characteristics of the networks using several measures for SNA, including reciprocity, assortativity and betweenness centrality, which reflect interrelational links, the prevalence of similar nodes with neighbors, and the mediating roles of other nodes, respectively. Next, to compare year-to-year trends in networks of patients overall, and between receiving nursing care levels 1–3 (lighter care requirement) and levels 4–5 (heavier care requirement), we described the annual structural differences and analyzed each measure for SNA using the Steel–Dwass test. Results Among 844, 940, and 1063 groups in each year, groups for analysis in care levels 1–3/4–5 were identified as 106/135, 79/89, and 57/57, respectively. The overall annual assessment showed a trend toward increased diameter and decreased density, but the differences were not significant. For those requiring care levels 1–3, assortativity decreased significantly, while for those requiring care levels 4–5, reciprocity decreased and betweenness centrality increased significantly. No significant differences were found in the other items. Discussion This study revealed that the network of patients with a lighter care requirement had more connections consisting of nodes with different links, whereas the network of patients with a heavier care requirement had more fixed intermediary roles and weaker interrelationships among healthcare professionals. Clarifying interprofessional collaborative mechanisms underlying development patterns among healthcare professionals can contribute to future clinical quality improvement. Introduction With the rapid aging of the population across the world, complex health problems are arising from challenges associated with multimorbidity, aging and inactivity [1]. Japan has the world's highest proportion of elderly people, at 29.1% of the total population in 2018 [2]. The Japanese government has proposed the establishment by 2025 of a "community-based integrated care system," consisting of a medical insurance system and a long-term care insurance system, with the aim of ensuring provision of comprehensive healthcare, long-term care, preventive healthcare, housing, and lifestyle support. In this system, care services are provided in the local community, based on patient requirements. Each patient's requirements are determined by the extent of care services needed. In the current Japanese long-term care insurance system, care services are categorized into seven levels based on the condition of each patient or user. 
Level 5 represents the highest level of long-term care, level 1 represents the lowest, with levels 1 and 2 including plans to forestall long-term care [3]. To achieve this, healthcare professionals need to fully understand the physical and mental characteristics of the elderly, and collaborate with other professionals in a comprehensive community-based care system [4]. An interprofessional information-sharing network to solve these complex problems is needed, not only in Japan but also elsewhere in Asia and other developed countries further afield. Social networking services for healthcare professionals Systems for sharing patient information are making increasing use of social networking services (SNSs), as in the example of an online message board focusing on coronary heart disease [5]. SNS-based information sharing networks can serve as an effective training tool for interprofessional collaboration [6]. SNSs have enabled the sharing of knowledge among professionals through web-based networking and the sharing of personal information with few constraints of time, space, or geography [7]. Because of expected further growth in the use of SNS-based information sharing networks as a tool for healthcare professionals in community-based integrated systems, there is a need to clarify the mechanisms and annual changes in information sharing. However, little is known about how information sharing networks develop in such SNSs for healthcare professionals [8]. Review of social network analysis (SNA) For analyzing network development and dynamics, social network analysis (SNA) is a useful tool [9]. SNA is a distinctive set of procedures for mapping, measuring, and analyzing social relationships among people, teams, and organizations [10]. It provides a visual representation of nodes (individuals, groups, organizations, etc.), and facilitates exploration of patterns and types of relationships among nodes. Additionally, by analyzing relationships (links) between nodes, the roles and influences of particular nodes in the network can be examined. Such SNAs can provide a theoretical approach for exploring interactions of nodes in the network based on social interaction frameworks [11] SNA has been used in a variety of ways in systematic reviews in healthcare settings [12][13][14]. However, in some studies SNA has been used as a descriptive tool rather than an evaluative one, and few annual analyses of information-sharing networks of healthcare professionals exist. One such study used a questionnaire design that was influenced by recall bias and other factors [15]. Revealing the development of SNS-based information sharing networks for healthcare professionals over time can help identify strengths and gaps in the network and improve the effectiveness of interprofessional collaboration [13]. Here, therefore, we aimed to reveal the annual changes in the network structures of an SNS-based information sharing system among healthcare professionals in a Japanese community over a 3-year period. Design We conducted repeated cross-sectional surveys [16]. Repeated cross-sectional data are created when a survey is administered to a new sample at successive time points. For annual surveys, this means that participants in one year may include people other than those who participated in a previous year. In the data analysis, we used SNA to analyze information sharing patterns of SNS-based networks in one community. SNA is based on the principles of graph theory [17]. 
It examines the existence of nodes (e.g., individual medical professionals) and internodal links, focusing on the relationships between nodes and link structures (e.g., information sharing networks). In this study, the information sharing network for each patient was constructed using an SNS-based platform that allowed different healthcare professionals to post and view messages. Based on data from 2018, we published the results of a study of the network structure and some measures of SNA, including the centrality characteristics of nodes [18]. Those data were included in the present study for comparative purposes. Setting For purposive sampling, a community within City X was selected because the healthcare professionals in the city used SNS-based platforms to share information about patients. City X is a core city in the local community, with a population of 220,000 and seven general hospitals in 2018. Since the population distribution of City X in 2018-2020 was close to the national average for Japan, we considered it to be a representative setting. SNS-based information sharing tools An SNS-based information sharing tool among healthcare professionals in City X was used in this study. The SNS was created by a private company for use by healthcare professionals in a variety of roles including doctors, nurses, care workers, care managers, physical therapists, and registered dietitians. For example, home care services in the patient's home may be provided by home care staff, visiting nurses, pharmacists, doctors, and physical or occupational therapists. Home care workers, who are not state-certified, visit the patient's home to help with meals, shopping, and other tasks, assisting with activities of daily living (ADLs) and instrumental activities of daily living (IADLs). Care workers are mainly responsible for physical care, such as toileting and bathing, and assist with patients' ADLs and IADLs at home or in external facilities. Clinic clerks work with other members of a small team, handling appointment requests, prescription requests, and inquiries at reception. Medical consultants share patient information among facilities; for example, when patients have symptoms such as fever or a rash, care workers first consult nurses, and nurses report to doctors with comments and pictures via the SNS to arrange appropriate treatment or to seek advice about care plans. This SNS-based platform allows multiple registered healthcare professionals to share patient-related information. The platform is also used by the staff of several medical and nursing facilities when making social welfare arrangements, such as transportation, for patients' admission and discharge. Although patients and healthcare professionals gave written consent for their data to be used in research when they registered on the SNS, they were also guaranteed the right to refuse to participate in this study by opting out at the beginning of the study. Data collection Data were obtained from patients and healthcare professionals who agreed to participate anonymously. Patients who used the online message board and their healthcare professionals linked to the board were registered as nodes. A network was created for each patient.
Healthcare professionals registered in this network were allowed to post messages and mark them for viewing on the online message board of their patients. This study involved annual surveys of patient demographics and healthcare professionals. We collected the log data for message postings from users about each patient and the user's viewing marks, and applied SNA measures such as index of network structure to organize and analyze information over the period from January to December of each year 2018, 2019, and 2020. The data included gender, age, and care levels of patients registered on the message board. It included posting and viewing of messages marked as "viewed" by the posting users, who were healthcare professionals participating in the message board. Constructing networks Each group was networked from the log data of message postings from users about each patient and the user's viewing marks. Individual healthcare professionals (excluding patients) registered in a patient group were considered as nodes. Message posting/viewing relationships were considered as links. More specifically, for each patient group, an unweighted directed graph G = (V, E) was created. Node u represents a user (i.e., an individual medical professional), and a directed link (u, v) represents a user v marked as "viewed" in a message posted by user u. Thus, a link was considered created when a user made a node-to-node communication through a particular thread. Because very small networks were not useful for SNA, only those with more than 10 nodes were selected for analysis, based on previous studies [19,20]. Some of the patterns of social networks in this study may be visualized as shown in Fig. 1. Analysis Several SNA metrics were used to investigate the structural properties of the target networks. For each network, we focused mainly on the number of nodes, density [21], diameter [22], path length [23], clustering coefficient [24], assortativity [25], and degree, closeness and betweenness centrality [26]. Density is determined by dividing the observed number of links in the network by the maximum possible number, which provides a more comprehensive description of the level of connectivity in the network [25]. The greater the density, the more nodes are connected in the network [21]. Diameter is the longest path between two nodes in the network [22]. The average path length of a network is defined as the average distance between all node pairs, including between a node and itself [21]. In other words, a network with a small diameter and a short average distance can be considered "compact", while a network with few nodes and a long distance will have a larger diameter [21]. The clustering coefficient quantifies the number of connected triangles present in a network and helps to reveal the characteristics of individual nodes [24]. A high clustering coefficient means that a person connected to two other people is likely to be connected to others, in a triangle. Assortativity is defined as the degree to which a node in a graph is linked to other nodes with a similar number of connections, for example in a "post/view" relationship [27]. In other words, a high degree of assortativity means that people with many connections are linked to other people with many connections, or conversely, people with few connections are linked to people with few connections. By contrast, low assortativity means that connected individuals are connected to many other, less connected individuals [25]. 
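To make these network measures concrete, the following is a minimal Python sketch, using the networkx library, of how metrics of this kind (including the reciprocity and centrality measures described in the next paragraph) could be computed for one patient's directed "post/view" graph. The graph and node labels are invented for illustration; this is not the pipeline actually used in the study.

```python
import networkx as nx

# Hypothetical "post/view" links for one patient group: (poster, viewer)
edges = [
    ("nurse_A", "doctor_B"), ("doctor_B", "nurse_A"),
    ("care_manager_C", "nurse_A"), ("nurse_A", "care_manager_C"),
    ("pharmacist_D", "doctor_B"), ("care_worker_E", "nurse_A"),
]
G = nx.DiGraph(edges)          # unweighted directed graph, one per patient
U = G.to_undirected()          # distance-based measures need a connected graph

metrics = {
    "nodes": G.number_of_nodes(),
    "density": nx.density(G),
    "diameter": nx.diameter(U),
    "avg_path_length": nx.average_shortest_path_length(U),
    "clustering": nx.average_clustering(U),
    "assortativity": nx.degree_assortativity_coefficient(U),
    "reciprocity": nx.reciprocity(G),
    "mean_degree_centrality": sum(nx.degree_centrality(G).values()) / G.number_of_nodes(),
    "mean_closeness": sum(nx.closeness_centrality(G).values()) / G.number_of_nodes(),
    "mean_betweenness": sum(nx.betweenness_centrality(G).values()) / G.number_of_nodes(),
}
for name, value in metrics.items():
    print(f"{name}: {value:.3f}" if isinstance(value, float) else f"{name}: {value}")
```

In the study itself, values of this kind were computed per patient network and then averaged within care-level groups, as described below.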
Reciprocity is defined as the degree of mutual interaction between individuals. It is a measure of the proportion of two-way links in a network. Degree centrality is the number of links a node has. Closeness centrality is based on the inverse of the sum of the distances from a node to all other nodes in the network. Closeness centrality indicates the potential independence of nodes in the flow of an information-sharing network, so if there are information exchange relationships among nodes throughout the network, closeness centrality will be high. Betweenness centrality is a measure of centrality based on the degree to which a node mediates the relationships of other nodes. As betweenness centrality is based on the extent to which nodes mediate relationships between other nodes, a network with few mediators has low betweenness centrality. These measures are widely used in SNA [19,20].
Fig. 1 Examples of networks. An isolated node indicates that it belongs to the network but has neither incoming nor outgoing links.
All measures except the number of nodes, diameter and path length were normalized to a (0,1) scale and expressed as values ranging from 0 to 1. The mean and standard deviation (SD) for each network were calculated. Based on the severity of their disease and the need for care, patients were divided into groups of care levels 1-3 (light care requirement) or 4-5 (heavy care requirement) [3]. For each group, annual comparisons of each score were analyzed using the Steel-Dwass test [28]. Results The number of groups (patients) with at least one post in years 2018, 2019, and 2020 was 844, 940, and 1063, respectively. In the SNA, 106/135, 79/89, and 57/57 analyzable networks (i.e., those with more than 10 nodes) involving care levels 1-3/4-5 were identified, respectively. Table 1 shows the number of networks analyzed along with gender and average age of patients who participated in the networks. Table 2 shows the number and types of healthcare professionals who participated in each of years 2018, 2019, and 2020. The average SNA measurements in care levels 1-3 for 2018, 2019, and 2020 are shown in Table 3. Across 2018, 2019, and 2020, the average diameter tended to increase. In contrast, the assortativity, reciprocity and degree centrality tended to become smaller with each year. Other measures indicated no consistent trends. The average SNA measurements for care levels 4-5 for 2018, 2019, and 2020 are shown in Table 4. Across 2018, 2019, and 2020, the average diameter and path lengths, degree centrality, closeness centrality, and betweenness centrality tended to increase. In contrast, the average density, assortativity, clustering coefficient, and reciprocity tended to decrease. Concerning year-to-year differences as assessed by the Steel-Dwass test, there was an overall annual trend toward increasing diameter, but it was not significant. For patients requiring care levels 1-3, assortativity decreased. For those requiring care levels 4-5, reciprocity decreased and betweenness centrality increased significantly. No significant differences were found in the other items. Discussion This study demonstrates how health professionals in Japan have developed SNS-based information sharing networks in the local community and how the networks vary with the level of care requirement. These annual changes in information sharing networks revealed that networks of patients with a lighter care requirement tended to have reduced assortativity, meaning that professionals with many links and professionals with few links became increasingly connected to one another.
By contrast, the networks of patients with heavier care requirements tended to include fixed healthcare professionals who coordinate services in intermediary roles, with reduced interrelationships with other professionals. Network diameters tended to increase regardless of the level of care, but these differences were not statistically significant. The differences and changes in these network indicators are significant for drawing inferences about the potential information sharing mechanisms of healthcare professionals. Assortativity measures degree of similarity between a node and its neighbors. In a U.S. study, annual changes in assortativity of patient referrals between states were reported as 0.1084 to -0.1217, -0.1104 to -0.1245, 0.0775 to 0.0549, and 0.0800 ~ 0.0569 for in-in, out-out, in-out, and out-in, respectively [29]. The tendency in that study diverges from what we found in the present study, which was that the network tended to be connected through nodes of different degrees. This reflects the fact that about half of the respondents who needed nursing care 1 and 2 were more likely to have increasing care needs than those needing care 4 and 5 [30]. Conceivably, there may be more connections between professionals who were nodes of different levels of care because of changes in care management and services. Networks of patients with heavier care requirements may not show reduced similarity among neighboring nodes compared to those of light-care patients because of the limited number of healthcare professionals who can provide care. In other words, the mechanism appeared to be that the healthcare professionals associated with patients with lighter care requirements are more likely to be connected to a variety of professionals, while those connected to patients with heavier care requirements are more likely to be connected to the same types of professionals. Reciprocity is useful for understanding two-way relationships because it indicates whether the two nodes contributed equally to a relationship. The reciprocity of the work-related problem-solving network, medication advice-seeking network, and interaction network in an emergency department was reported to be 0.43, 0.26, and 0.24, respectively [31]. Another study reported high reciprocity (0.76) concerning public and private sector networks of midwives [32]. In this study, the reciprocal relationship between health care providers in the network of patients with heavier care requirements was between 0.4 and 0.3, in the mid-range of scores reported in the studies above, but lower over time. This is because patients requiring care levels 4 and 5 were almost always in bed with ADLs, and previous research showed that 70-80% of patients in this group died or experienced deterioration of nursing care level owing to further aging and inactivity after one year [30]. In addition, the network of patients with heavier care requirements tends to include fixed nodes for healthcare professionals that coordinate services in intermediary roles. Intermediary roles in healthcare fields are often practiced as "jobs within jobs" which means that the intermediary operates within the limited rules or regulations decided by the service providers [33]. In these networks, with the intermediary roles fixed, mechanisms to promote smoother interaction may be beneficial. 
Density is determined by dividing the actual number of links in a particular network by the total number of nodes, and gives a more comprehensive indication of the level of connectivity in a network [24]. Density is an indicator of shared leadership, [34] reflected in a team's internal network structure [35]. Additionally, it is positively related to team performance and member satisfaction [35]. We found that as the diameter increased, density tended to decrease. This is a result of the indirect addition of new members and the creation of sufficient ties among members to facilitate the flow of information without over-reliance on any one member [36]. Although the density of networks on the internet-such as social networking sites-is often low, team activities have been described as requiring a denser network within a more sparse social structure [37]. In other words, our finding of increasing diameter and decreasing density may be interpreted as the core and periphery network structures beginning to form as new members join the increasingly better-known SNS. A network in which the core and periphery structures are functionally formed has been reported by surgical teams [38]. It is conceivable that similar mechanisms were operating in the year-toyear developments in the SNS-based information sharing network. Patterns of annual changes in SNS-based information sharing networks can be used not only as an analytical tool but also as a means to improve interprofessional collaboration in the context of information technology. Using SNA, future research should aim to clarify the mechanisms underlying development patterns among healthcare professionals, which may contribute to clinical quality improvement. To further investigate networks with lighter care requirements, it would be important to identify patients whose actual level of care has changed and examine network trends for those patients. Networks involving higher care requirements may be more likely to share information while the intermediary role is more likely to be fixed. This may become apparent in the future research involving comparisons among local communities. These inferred differences in health professionals' information sharing mechanisms by care level requirements provide meaningful evidence for facilitating interprofessional collaboration. Strengths and limitations This study has some limitations. First, the number of nodes was relatively small and may not be representative of Japan, or other local communities, in view of variations in local circumstances including professional healthcare arrangements. Therefore, further validation in other local communities is desirable. In addition, some information sharing might not be reflected in the SNSbased network, as doctors may use tools other than SNS to provide advice to nurses (for example, word of mouth or paper memos). The Japanese healthcare system, in which patients and their families call the visiting nurse first rather than the doctor, may have influenced the findings. However, given the lack of evidence on SNA based information sharing networks among healthcare professionals, this rare example of a longitudinal network survey may contribute to improving the quality of information sharing. Along with analysis of such networks, further research is needed to determine their relationship to patient outcomes, and cost-effectiveness of care-providing systems. 
Conclusion The findings of this study highlight differences in annual changes in information networks among patients requiring different levels of care. The networks of patients with lighter care requirements had more connections with nodes of various sizes, while those of patients with heavier requirements tended to become characterized by fixed intermediary roles and weaker interrelationships. Clarifying interprofessional collaborative mechanisms underlying development patterns among healthcare professionals can contribute to future clinical quality improvement. Abbreviations SNS: Social networking service; SNA: Social network analysis.
PDE-Based Multidimensional Extrapolation of Scalar Fields over Interfaces with Kinks and High Curvatures We present a PDE-based approach for the multidimensional extrapolation of smooth scalar quantities across interfaces with kinks and regions of high curvature. Unlike the commonly used method of [2] in which normal derivatives are extrapolated, the proposed approach is based on the extrapolation and weighting of Cartesian derivatives. As a result, second- and third-order accurate extensions in the $L^\infty$ norm are obtained with linear and quadratic extrapolations, respectively, even in the presence of sharp geometric features. The accuracy of the method is demonstrated on a number of examples in two and three spatial dimensions and compared to the approach of [2]. The importance of accurate extrapolation near sharp geometric features is highlighted on an example of solving the diffusion equation on evolving domains. Introduction Extrapolation procedures are ubiquitous in scientific computing and generally allow one to estimate a valid value of a quantity at points where data is not given; either in space or in time.In the context of level-set methods [30], extrapolation procedures in space have been frequently used since the advent of the ghost-fluid method [11], where constant extrapolations were originally used.Generalized ghost-fluid methods were then designed, in part based on higher-order extrapolations for which Aslam introduced a partial differential equation (PDE) approach to perform linear and quadratic extrapolation [2] and Gibou and Fedkiw introduced a cubic extrapolation in the same PDE framework [13].It is natural in the level-set context to perform such extrapolations using PDE formulations for their solutions are based on Hamilton-Jacobi solvers that have been designed for other standard level-set equations, see e.g.[39].A typical situation that needs extrapolation is that of an implicit treatment of a field in a free boundary problem.In this case, a valid value of the field at time t n needs to be known when assembling the right-hand side of the linear system of equations at time t n+1 .Since the interface at the new time step has swept grid points that are outside the domain at the previous time step, valid values of the field at time t n are needed in the domain at time t n+1 , which requires an extrapolation procedure. However, those methods behave poorly in the case where the free boundary presents high-curvature features or kinks.Typical examples of such situations are multimaterial flows with triple junction points, motion of sharp-edged bodies in fluids, contact line dynamics in wetting phenomena, phase-change front propagation in the presence of confining walls, etc.We introduce a method that solves that problem.We present the method in section 2 and numerical examples in sections 3 and 4 that illustrate its benefits and comment on its efficiency.Section 5 considers an example of solving a diffusion equation on evolving domains that demonstrates the importance of accurate extrapolation near sharp geometric features.Section 6 draws some conclusions. 
Level-set Representation The level set representation [30] defines the interface of a domain by {x : φ(x) = 0}, its interior and exterior by φ(x) < 0 and φ(x) > 0, respectively, where φ(x) is a Lipschitz continuous function called the level-set function. In this paper, the only geometrical quantity that is needed is the outward normal to the interface, n, which can be computed as
$$\mathbf{n} = \frac{\nabla \phi}{|\nabla \phi|},$$
using central differencing for φ_x and φ_y. In typical level-set simulations, the level-set function is reinitialized as a signed distance function [39]. We refer the interested reader to [29,36] for a thorough presentation of the level-set method and [16] for a recent review.
Normal-derivative based multidimensional PDE extrapolation of [2] High order extrapolations in the normal direction are traditionally performed in a series of steps, as proposed by Aslam in [2] and referred to in the present manuscript as the normal-derivative based partial differential equation (ND-PDE) extrapolation. For example, suppose that we seek to extrapolate a scalar field q from the region where φ ≤ 0 to the region where φ > 0. In the case of a quadratic extrapolation, we first compute $q_{nn} = \nabla(\nabla q \cdot \mathbf{n}) \cdot \mathbf{n}$ in the region φ ≤ 0 and extrapolate it across the interface in a constant fashion, that is, such that its normal derivative is zero in the region φ > 0, by solving the following partial differential equation:
$$\frac{\partial q_{nn}}{\partial \tau} + H(\phi)\, \mathbf{n} \cdot \nabla q_{nn} = 0,$$
where H is the Heaviside function. Then, the value of q across the interface is found by solving the following two partial differential equations:
$$\frac{\partial q_{n}}{\partial \tau} + H(\phi)\left( \mathbf{n} \cdot \nabla q_{n} - q_{nn} \right) = 0,$$
$$\frac{\partial q}{\partial \tau} + H(\phi)\left( \mathbf{n} \cdot \nabla q - q_{n} \right) = 0,$$
defining q_n in such a way that its normal derivative is equal to the previously extrapolated q_nn and then defining q in such a way that its normal derivative is equal to the previously extrapolated q_n. These PDEs are solved in fictitious time τ for a few iterations (typically 15) since we only seek to extrapolate the values of q in a narrow band of a few grid cells around the interface.
This extrapolation procedure produces accurate results in the case where the interface is smooth, but generates large errors in the case where sharp geometric features occur, e.g. thin elongated shapes or interfaces with kinks, as illustrated in sections 3 and 4.
Weighted-Cartesian-derivative based multidimensional PDE extrapolation Instead of calculating the normal derivatives in the negative region before extrapolating them, we instead compute the derivatives in the Cartesian directions, extrapolate them and then construct the normal derivatives. Specifically, consider the following quantities, computed in the negative level-set region: the gradient vector
$$q_{\nabla} = \left( q_x, \; q_y, \; q_z \right)^{\top}$$
and the symmetric matrix
$$Q_{\nabla\nabla} = \begin{pmatrix} q_{xx} & q_{xy} & q_{xz} \\ q_{xy} & q_{yy} & q_{yz} \\ q_{xz} & q_{yz} & q_{zz} \end{pmatrix}.$$
Similar to the method described in the previous section, we extrapolate the elements of Q_∇∇ in a constant fashion:
$$\frac{\partial Q_{\nabla\nabla}}{\partial \tau} + H(\phi)\, (\mathbf{n} \cdot \nabla)\, Q_{\nabla\nabla} = 0,$$
before successively solving the following equations:
$$\frac{\partial q_{\nabla}}{\partial \tau} + H(\phi)\left( (\mathbf{n} \cdot \nabla)\, q_{\nabla} - Q_{\nabla\nabla}\, \mathbf{n} \right) = 0,$$
$$\frac{\partial q}{\partial \tau} + H(\phi)\left( \mathbf{n} \cdot \nabla q - q_{\nabla} \cdot \mathbf{n} \right) = 0.$$
Note that now, the normal vector field n enters the equations merely as some sort of weighting factor. Thus, as long as the field q is sufficiently smooth, this approach to multidimensional extrapolation is expected to produce accurate results even when the normal vector field n is not smooth (as is the case of domains with sharp features). To distinguish the proposed approach from the one in [2], we refer to it as the weighted-Cartesian-derivative based partial differential equation (WCD-PDE) extrapolation.
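To make the structure of these pseudo-time equations concrete, the following is a minimal NumPy sketch of the building block shared by both methods: extending a field f across the interface so that n · ∇f matches a prescribed right-hand side, using first-order upwind differences. It is deliberately simplified (periodic wrap-around at the box edges via np.roll, a crude Heaviside, a fixed iteration count, no narrow-band logic) and is not the authors' implementation.

```python
import numpy as np

def normal_field(phi, dx, dy):
    """Outward normal n = grad(phi) / |grad(phi)|, by central differences."""
    px = (np.roll(phi, -1, axis=0) - np.roll(phi, 1, axis=0)) / (2.0 * dx)
    py = (np.roll(phi, -1, axis=1) - np.roll(phi, 1, axis=1)) / (2.0 * dy)
    mag = np.sqrt(px**2 + py**2) + 1.0e-12          # guard against division by zero
    return px / mag, py / mag

def extend(f, src, phi, nx, ny, dx, dy, n_iter=50):
    """Iterate  df/dtau + H(phi) * (n . grad f - src) = 0  towards steady state.

    With src = 0 this is the constant extension used for q_nn (or Q_grad_grad);
    with src equal to a previously extended quantity it reproduces the successive
    steps that define q_n (or q_grad) and finally q itself.
    """
    H = (phi > 0.0).astype(float)                   # extend only into the positive region
    dtau = 0.5 * min(dx, dy)                        # CFL-type pseudo-time step
    f = f.copy()
    for _ in range(n_iter):
        # first-order one-sided differences, upwinded against the normal direction
        fxm = (f - np.roll(f, 1, axis=0)) / dx
        fxp = (np.roll(f, -1, axis=0) - f) / dx
        fym = (f - np.roll(f, 1, axis=1)) / dy
        fyp = (np.roll(f, -1, axis=1) - f) / dy
        fx = np.where(nx > 0.0, fxm, fxp)
        fy = np.where(ny > 0.0, fym, fyp)
        f -= dtau * H * (nx * fx + ny * fy - src)
    return f
```

A quadratic ND-PDE or WCD-PDE extension then amounts to calling this routine several times, first with src = 0 for the second-derivative quantities and then with src set to the already extended fields.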
Remark: It is possible to construct cubic and even higher-order extrapolations following this approach as well; however, one needs to keep in mind the rapidly growing computational cost, because an m-th order method requires solving advection equations for tensor variables of order up to m (3 × 3 × 3 for cubic, 3 × 3 × 3 × 3 for quartic, etc.).
Implementation details In this work we demonstrate the proposed method on uniform Cartesian grids and our implementation follows very closely the one from [2] with just a few differences. Consider a two-dimensional computational grid with nodes r_{i,j} = (x_min + i Δx, y_min + j Δy), where [x_min; x_max] × [y_min; y_max] denotes the computational domain and N_x and N_y are the numbers of grid nodes in the Cartesian directions. Standard second-order accurate central difference formulas are used for calculating the normal vector field n(r) (in the entire domain) and derivatives (first and second) of q in the negative region. Normal derivatives of q are computed as q_n = q_∇ · n and q_nn = (Q_∇∇ n) · n. Since the first and second order derivatives of q are not well-defined at all grid points where φ < 0, we replace the Heaviside function H(φ) in equations (3), (6) and in equations (2), (5) with discrete fields H_{φ,∇} and H_{φ,∇∇}, respectively. Applying an explicit, first-order accurate (in fictitious time) discretization to equations (2)-(7), one obtains the updating formulas (8) and (9). When extrapolating first- and second-order derivatives (i.e., q_n, q_nn, q_∇ and Q_∇∇), first-order spatial derivatives in the equations above are computed using first-order accurate upwind discretizations: for example, the x-derivative of a field f (standing for q_n, q_nn, q_∇ or Q_∇∇) is approximated by the one-sided difference on the side selected by the sign of n_x. This is sufficient to achieve second-order accuracy in the extended fields q_n and q_∇. For extrapolation of the field q itself (last equations in (8) and (9)), however, second-order accurate upwind discretizations are used, in which the one-sided differences are corrected with minmod-limited second-derivative terms (the minmod corrections in (11)); derivatives in the y-direction are approximated in a similar fashion. We note that approximation of derivatives is done in the same way for both the ND-PDE and WCD-PDE extrapolation methods. The difference between the approaches lies in which quantities are extended over interfaces. Since in the new method the approximations of second-order derivatives in all Cartesian directions are already available when solving the PDE for q, the minmod corrections in (11) can be computed once during the first iteration and reused in subsequent iterations, roughly halving the cost of each iteration. Specifically, the total count of arithmetic operations to compute [q]^{k+1}_{i,j} using the ND-PDE method is approximately 22 in two spatial dimensions and 32 in three spatial dimensions, while for the WCD-PDE method the total count is 10 and 14, respectively. Thus, if we denote by T the cost of solving a single advection equation using first-order accurate approximations of derivatives, then the total cost of performing quadratic extrapolation using the ND-PDE method is approximately (1 + 1 + 2)T = 4T in two and three spatial dimensions, while the total cost of performing quadratic extrapolation using the WCD-PDE method is (3 + 2 + 1)T = 5T in two spatial dimensions and (6 + 3 + 1)T = 10T in three spatial dimensions.
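The exact upwind formulas (10) and (11) did not survive text extraction; the sketch below shows one standard way such minmod-corrected one-sided x-derivatives are assembled in level-set codes, which may differ in detail (indexing, boundary and band handling) from the authors' discretization.

```python
import numpy as np

def minmod(a, b):
    """Return the smaller-magnitude argument when signs agree, zero otherwise."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def dqdx_upwind2(q, nx, dx):
    """Second-order one-sided x-derivative of q, upwinded by the sign of n_x.

    The plain one-sided differences are corrected with half a cell of the
    minmod-limited second derivative, a common way of reaching second-order
    accuracy without oscillations near kinks in the data.
    """
    qm, qp = np.roll(q, 1, axis=0), np.roll(q, -1, axis=0)     # q_{i-1}, q_{i+1}
    d2 = (qp - 2.0 * q + qm) / dx**2                           # D_xx q at node i
    d2m, d2p = np.roll(d2, 1, axis=0), np.roll(d2, -1, axis=0) # at i-1 and i+1

    dq_minus = (q - qm) / dx + 0.5 * dx * minmod(d2, d2m)      # left-biased difference
    dq_plus  = (qp - q) / dx - 0.5 * dx * minmod(d2, d2p)      # right-biased difference
    return np.where(nx > 0.0, dq_minus, dq_plus)
```

Dropping the minmod correction recovers the first-order formulas used for the derivative fields, which is also why the corrections can be precomputed once and reused across iterations, as noted above.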
Equations (8) and (9) are iterated in the fictitious time τ until steady state. The time step Δτ is chosen to satisfy the CFL condition as Δτ = min(Δx, Δy)/2 and Δτ = min(Δx, Δy, Δz)/3 in two and three spatial dimensions, respectively. Iterations are terminated when the maximum difference between two successive steps, max_{i,j} |[f]^{k+1}_{i,j} - [f]^{k}_{i,j}|, within the band of interest (that is, among all grid nodes within a distance of 2√(Δx² + Δy²), or 2√(Δx² + Δy² + Δz²) in three spatial dimensions, around the domain boundary) is less than a specified tolerance tol = 10^{-12}.
Remark. Since in the proposed approach there is no need to recalculate second derivatives and apply the nonlinear minmod operator at every iteration, it is possible to obtain the steady-state solution of the advection equations in an implicit fashion. This could be very beneficial in cases when a good guess for the extended field is available (for example, solutions from preceding time instants in time-dependent problems). Such an approach will be explored in future works.
Remark. In the case of linear extrapolation, the first equations in (8) and (9) are not solved, in the second equations [q_nn]_{i,j} and [Q_∇∇]_{i,j} are set to zero, and the first-order accurate formulas (10) are used during the extrapolation of both the field q and its derivatives for more efficient computations.
Extension to adaptive Quad-/Oc-tree grids The methodology introduced in this paper can be trivially extended to Quad-/Oc-tree data structures. Specifically, we sample data fields at nodes of Quad-/Oc-tree grids and use the second-order accurate discretizations of [22] for regions where grids are non-uniform. A band of uniform grid cells is usually imposed near the interface in practical free boundary applications (see Fig. 1). In this case the extrapolation within some neighborhood around the interface (where it is primarily required) is as accurate as on uniform grids; however, the extrapolation procedure is much faster on adaptive grids owing to the significant reduction in the total number of grid points.
Numerical Results in Two Spatial Dimensions We consider four physical domains: a disk, a star shape, a union of two disks and an intersection of two disks (see figure 2). The disk is a smooth interface for which the approach of [2] performs well. The star-shape domain is an example where regions of high curvature are present (crest and trough of the wavy shape). The union and intersection of two disks are examples where kinks occur and illustrate the behavior of both methods near such features. In each case we consider a computational domain Ω = (−1, 1) × (−1, 1). We define the function q = sin(πx) cos(πy) inside every domain and extrapolate it into the outside region. Then the maximum difference between the exact values of q and the extrapolated ones, that is, the L^∞ norm of the error, is computed within a band of thickness 2√(Δx² + Δy²) in the outside region. Figures 3 and 4 summarize the convergence behavior of the ND-PDE approach of [2] and the proposed WCD-PDE approach. Figure 5 demonstrates the error distribution for both methods in the case of the quadratic extrapolation on a 128² grid.
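For reference, measuring the error in the narrow band described above amounts to something like the following sketch, assuming φ is (approximately) a signed distance function so that its value can be used as the distance to the interface.

```python
import numpy as np

def linf_error_in_band(q_ext, q_exact, phi, dx, dy, width_factor=2.0):
    """Max-norm error of the extended field over the outside band of
    thickness width_factor * sqrt(dx^2 + dy^2)."""
    band = width_factor * np.sqrt(dx**2 + dy**2)
    mask = (phi > 0.0) & (phi < band)      # phi assumed to be a signed distance
    return np.abs(q_ext[mask] - q_exact[mask]).max()
```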
In the case of the smooth domain Ω_0 (disk), both approaches produce almost indistinguishable results, attaining second- and third-order rates of convergence for the linear and quadratic extrapolation, respectively. For the high-curvature domain Ω_1 (star), both methods still reach optimal orders of convergence; however, the ND-PDE approach demonstrates the optimal order of convergence only at relatively high grid resolutions, when all geometric features are well-resolved.
Figure 3: Accuracy of the linear extrapolation (in the L^∞ norm) in two spatial dimensions measured in a narrow band of thickness 2√(Δx² + Δy²) around an interface using the approach of [2] and the proposed approach.
Figure 4: Accuracy of the quadratic extrapolation (in the L^∞ norm) in two spatial dimensions measured in a narrow band of thickness 2√(Δx² + Δy²) around an interface using the approach of [2] and the proposed approach.
Moreover, for a given grid resolution the WCD-PDE approach produces results that are more than one order of magnitude more accurate in the case of the linear extrapolation and almost three orders of magnitude more accurate in the case of the quadratic extrapolation compared to the ND-PDE approach. Figure 5b shows that the ND-PDE approach produces very large errors near regions with the highest curvature, while the error in the case of the WCD-PDE approach is much smaller and exhibits very little variation throughout all regions around the interface.
The results are even more significantly improved with the proposed approach in the case of interfaces with kinks, Ω_2 (union) and Ω_3 (intersection). Figures 5c and 5d show that the ND-PDE method of [2] produces large errors near kinks; those errors are significantly reduced with the WCD-PDE approach. In particular, Figures 3c-d and 4c-d demonstrate that the second-order (third-order) accuracy of the linear (quadratic) extrapolations is recovered with the proposed approach; the rates of convergence for the approach of [2] are close to first order, which corresponds to constant extrapolation, because errors near kinks do not decrease despite grid refinement, and the apparent first order of convergence arises only because the neighborhood in which errors are computed shrinks closer to the domain.
Numerical Results in Three Spatial Dimensions We consider three different domains, Ω_1, Ω_2 and Ω_3, that present high-curvature features or kinks in three spatial dimensions. In addition, we consider a smooth spherical domain Ω_0 with center (0, 0, 0). Similar to the two-dimensional examples, we consider a computational domain Ω = (−1, 1)³. We extrapolate the function q = sin(πx) cos(πy) exp(z) from the inside to the outside for every domain and compute the difference between the exact values of q and the extrapolated ones, that is, the L^∞ norm of the error, within a band of thickness 2√(Δx² + Δy² + Δz²) in the outside region.
Figure 7: Accuracy of the linear extrapolation (in the L^∞ norm) in three spatial dimensions measured in a narrow band of thickness 2√(Δx² + Δy² + Δz²) around an interface using the approach of [2] and the proposed approach.
Figure 8: Accuracy of the quadratic extrapolation (in the L^∞ norm) in three spatial dimensions measured in a narrow band of thickness 2√(Δx² + Δy² + Δz²) around an interface using the approach of [2] and the proposed approach.
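The convergence rates quoted in these studies can be estimated from the band errors on two successive grids; a small helper of the following form (not from the paper) makes the "optimal order" statements quantitative.

```python
import numpy as np

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed order p from errors on grids whose spacing differs by `refinement`."""
    return np.log(err_coarse / err_fine) / np.log(refinement)

# e.g. for quadratic WCD-PDE extrapolation, halving the grid spacing should reduce
# the band error roughly 8-fold, giving observed_order(e, e / 8.0) close to 3.
```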
Conclusions similar to the two-dimensional case can be drawn from the results in Figures 7 and 8. Specifically, for a smooth and well-resolved domain (sphere) both approaches produce almost indistinguishable results with optimal order of convergence (second and third for the linear and quadratic extrapolations, respectively). When the interface curvature is high (Ω_1, star) the WCD-PDE approach produces extrapolated fields that are several orders of magnitude more accurate than for the ND-PDE approach. For geometries with sharp features, Ω_2 (union) and Ω_3 (intersection), only the WCD-PDE approach demonstrates optimal orders of convergence, while for the ND-PDE approach the rate of convergence remains stuck at first order.
Application to Solving the Diffusion Equation in Time-Dependent Domains In order to illustrate the importance of accurate extrapolation near sharp corners in moving interface problems, we present a simple example of solving the diffusion equation around a moving object that may have a non-smooth boundary. Specifically, we consider a two-dimensional rectangular region [−1; 1] × [−1; 1] and an object that moves diagonally from its starting position at (x_s, y_s) = (−0.51, 0.52) at time t = 0 to the final position (x_f, y_f) = (0.49, −0.48) at time t = 1 while making half a turn around its center, as demonstrated in Figure 9. A diffusion equation subject to Neumann boundary conditions is solved in the rectangular box excluding the region occupied by the moving object. While the problem at hand does not correspond to any specific practical application, it represents a prototypical situation arising in the simulation of more relevant (and more complex) processes and at the same time allows a precise analysis of numerical errors. Note that a non-deformable shape is considered for the sake of simplicity; we expect the main conclusions to hold true in more general cases, e.g. multiphase flow with triple junction points.
The Eulerian framework is employed; more precisely, the region [−1; 1] × [−1; 1] is discretized into a static uniform rectangular grid with N nodes in each Cartesian direction while the object is implicitly described by a time-dependent level-set function. Suppose the shape of the moving object in its local system of coordinates ξ is described by a level-set function φ_0(ξ) (such that φ_0(ξ) > 0 inside the object). Then the object's motion in the global system of coordinates r can be expressed by a time-dependent level-set function φ(t, r) = φ_0(ξ(t, r)), where the global-to-local coordinate transformation ξ(t, r) is given by
$$\xi(t, \mathbf{r}) = \begin{pmatrix} \cos(\pi t) & \sin(\pi t) \\ -\sin(\pi t) & \cos(\pi t) \end{pmatrix} \left( \mathbf{r} - \mathbf{r}_c(t) \right),$$
with r_c(t) denoting the instantaneous position of the object's center. The solution domain Ω(t) can then be defined as the part of the computational box where φ(t, r) < 0. The boundary of the computational box is denoted as ∂Ω and the boundary of the moving object is denoted as Γ(t).
In order to investigate the influence of a non-smooth interface, we consider two choices of moving object, one having a smooth boundary, a disk of radius 0.25, and another one having a non-smooth boundary, a union of two disks with radii 0.25 and 0.175, motivated by multimaterial compound bubbles. In the first case the level-set function of the object (in the local system of coordinates) is that of a single disk of radius r_0, while in the latter case it is that of a union of two overlapping disks, with r_0 = 0.25, ξ_0 = (1/2) r_0 (1 + q²) and q = 0.7.
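For concreteness, the moving geometry can be encoded as in the sketch below. The rotation matrix follows the transformation above; the linear motion of the centre and the offset between the two disk centres are assumptions made purely for illustration, since those details were not fully recoverable from the extracted text.

```python
import numpy as np

def local_coords(t, x, y, xs=(-0.51, 0.52), xf=(0.49, -0.48)):
    """Global-to-local map: subtract the (assumed linear) centre motion, rotate by pi*t."""
    cx = (1.0 - t) * xs[0] + t * xf[0]
    cy = (1.0 - t) * xs[1] + t * xf[1]
    c, s = np.cos(np.pi * t), np.sin(np.pi * t)
    xr, yr = x - cx, y - cy
    return c * xr + s * yr, -s * xr + c * yr

def phi0_disk(u, v, r0=0.25):
    """Disk of radius r0, positive inside (the paper's sign convention)."""
    return r0 - np.sqrt(u**2 + v**2)

def phi0_two_disks(u, v, r0=0.25, q=0.7, offset=0.22):
    """Union of two disks of radii r0 and q*r0; the centre offset is a guess."""
    d1 = r0 - np.sqrt(u**2 + v**2)
    d2 = q * r0 - np.sqrt((u - offset)**2 + v**2)
    return np.maximum(d1, d2)          # union under the "positive inside" convention

def phi(t, x, y, shape=phi0_two_disks):
    """Time-dependent level set phi(t, r) = phi0(xi(t, r))."""
    return shape(*local_coords(t, x, y))
```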
The time range [0; 1] is discretized into time layers t_n, n = 0, 1, 2, ..., where the time step between adjacent time layers Δt_{n+1} = t_{n+1} − t_n is determined such that the maximum displacement of the moving object boundary during the given time step is expected not to exceed a user-defined fraction f of the grid cell diagonal √(Δx² + Δy²), that is, Δt_{n+1} = f √(Δx² + Δy²) / max |v_n|, where v_n denotes the normal velocity of the object's boundary. Specifically, in this example f = 0.8 is taken.
We use the second-order variable-step backward differentiation formula (BDF2) for discretizing the diffusion equation (12) in time, and use the superconvergent second-order accurate method of [4] (which is specifically designed to handle irregular domains with non-smooth boundaries) for solving the resulting Poisson-type equation (13). Solution of equation (13) produces values of u_n at all grid nodes that belong to the current solution domain Ω(t_n). However, as the object moves in time, some of the grid nodes outside of Ω(t_n) may become part of Ω(t_{n+1}) or Ω(t_{n+2}), and solving at subsequent time layers t_{n+1} and t_{n+2} would require valid values of u_n at such grid nodes. This is typically addressed in free boundary value problems by smoothly extrapolating u_n into some neighborhood of Ω(t_n). In this example we quadratically extrapolate solutions using both the WCD-PDE approach proposed in this work and the ND-PDE approach of [2]. Also, since solving Poisson-type equations on irregular domains produces additional errors, we generate a reference solution where, instead of performing extrapolation of numerical values, we fill the grid nodes outside of the current solution domain Ω(t_n) with exact values given by the analytical solution.
We investigate the influence of extrapolation procedures on the accuracy of the numerical solution and its gradient. In order not to measure the error of the extrapolation procedure itself but rather only its influence on solving the diffusion equation, the solution error at time layer t_n is calculated only for grid nodes in Ω(t_n) and the gradient is calculated only using values from Ω(t_n) as well.
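The two pieces of bookkeeping in this paragraph, the displacement-limited time step and the variable-step BDF2 weights, can be written out as follows. The BDF2 coefficients are given in a standard textbook form and should be read as an assumption, since the paper's own expression was not recoverable from the extracted text.

```python
import numpy as np

def next_time_step(v_max, dx, dy, f=0.8):
    """Largest step for which the interface moves at most f of a grid-cell diagonal."""
    return f * np.sqrt(dx**2 + dy**2) / max(v_max, 1.0e-14)

def bdf2_weights(dt_new, dt_old):
    """Weights (a, b, c) such that du/dt at t^{n+1} is approximated by
    (a*u^{n+1} + b*u^n + c*u^{n-1}) / dt_new (standard variable-step BDF2)."""
    w = dt_new / dt_old
    a = (1.0 + 2.0 * w) / (1.0 + w)
    b = -(1.0 + w)
    c = w * w / (1.0 + w)
    return a, b, c

# For equal steps (w = 1) this returns (1.5, -2.0, 0.5), i.e. the familiar
# (3*u^{n+1} - 4*u^n + u^{n-1}) / (2*dt) formula.
```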
Obtained results are summarized in Figures 10, 11, 12 and 13. Figures 10 and 11 show error distributions at the final time t = 1 on a 128² grid. As one can see, for a smooth moving object (Fig. 10) using either the ND-PDE extension or the WCD-PDE one results in errors that are very close to the ones of the reference solution, while for a non-smooth moving object (Fig. 11) the WCD-PDE extension produces a much more accurate solution that is also very close to the reference one, whereas using the ND-PDE extension results in a significant accumulation of errors in the wake of the moving object. More quantitative conclusions can be drawn from the convergence studies shown in Figures 12 and 13, in which the dependence of the error in the L^∞ norm on the grid resolution is presented for both the numerical solution and its gradient. In the case of a smooth moving object (Fig. 12) both extension methods lead to second-order convergence in both the numerical solution and its gradient (as expected from the superconvergent method of [4]), with the magnitude of errors being very close to the ones of the reference solution. In the case of a non-smooth moving object (Fig. 13), the accuracy of numerical solutions obtained using the ND-PDE extension degrades severely, showing only first-order convergence in the solution itself and non-convergence in the gradient. At the same time, the accuracy of computations based on the proposed WCD-PDE extension is affected only very slightly by the presence of sharp corners, retaining second-order convergence in the numerical solution and its gradient with errors very close to those of the reference solution.
Remark: Note that the extrapolation approach presented in this work is designed for extending smooth scalar fields (as in the present example). However, in general, solutions to partial differential equations in domains with sharp features may contain singularities. In such cases, for best results the proposed extrapolation procedure should only be applied to the regular part of the solution; the singular part must be dealt with separately using special methods, for example, as in [41].
Conclusion We have presented a numerical method for extrapolating scalar quantities across the boundaries of irregular domains that may present high-curvature features or kinks. Linear and quadratic extrapolation procedures produce second- and third-order accurate results in the L^∞ norm, respectively, and do so regardless of the irregularity of the boundaries, i.e. boundaries with kinks can readily be considered. These procedures are effective in both two and three spatial dimensions and can be implemented on quadtree and octree Cartesian grids. We have shown through numerical examples that errors associated with extrapolations can be reduced by several orders of magnitude in some cases, compared with the approach of [2] commonly used in level-set methods. We have also presented an example of solving a diffusion equation on evolving domains in order to highlight the importance of accurate extrapolation near sharp geometric features for practical applications. The numerical method we introduced is based on solving PDEs in pseudo-time, but we note that static solutions based on an implicit approach like Fast Marching or Fast Sweeping could be obtained, and we expect the results to follow the same general behavior as that presented in the current manuscript.
Figure 5: Comparison of error distributions in the case of the quadratic extrapolation on a 128² grid. Top row: the approach of [2]. Bottom row: the present approach. In each case the error is multiplied by a factor of 30 for visualization purposes.
Figure 6: Irregular domains considered in section 4 along with the octree grids refined near their boundaries.
Figure 9: Problem geometry in the diffusion equation example from Section 5. (a) Smooth moving object. (b) Moving object with corners.
Figure 10: Error distribution at the final time moment (t = 1) in the case of a smooth moving object using different extrapolation approaches. (a) Using exact values. (b) Using the method of [2]. (c) Using the proposed method.
Figure 11: Error distribution at the final time moment (t = 1) in the case of a non-smooth moving object using different extrapolation approaches. (a) Using exact values. (b) Using the method of [2]. (c) Using the proposed method.
Figure 12: Accuracy of solving the diffusion equation (in the L^∞ norm) in the case of a smooth moving object using different extrapolation approaches.
Figure 13: Accuracy of solving the diffusion equation (in the L^∞ norm) in the case of a non-smooth moving object using different extrapolation approaches.
Open Vesicocalicostomy for the Management of Transplant Ureteral Stricture Abstract A 59-year-old male developed a proximal stricture of his transplant ureter ten years after a living donor renal transplant. Endoscopic management was unsuccessful, and the patient was temporized with percutaneous nephrostomy tubes for months. Eventually, it became clear he would require surgical revision. Intraoperatively, complete fibrosis of the renal hilum and intrarenal location of the pelvis precluded the planned pyelovesicostomy. A successful open vesicocalicostomy was performed, anastomosing a bladder flap to a lower pole calix. The patient remains recurrence free after 6 months of follow-up. Introduction Urological complications following renal transplant represent a significant area of morbidity for recipients. Ureteric stricture is the most common of these, accounting for up to 50% of urologic complications. 1 The overall incidence of stricture among transplant recipients is estimated between 0.6% and 12.5%. 2 Multiple techniques for repairing strictures have been described. The initial approach is usually endoscopic, including retrograde ureteral stent placement, balloon dilation, or percutaneous nephrostomy with stenting. 2 Commonly described open repairs consist of ureteroneocystostomy or pyelovesicostomy with or without the use of a bladder flap, or ureteroureterostomy with anastomosis of the ipsilateral native ureter to the transplant ureter or pelvis. 3 When the traditional avenues of open repair are nonviable, few remaining options have been described in the literature. In the following case report, we describe a successful vesicocalicostomy performed in lieu of the planned pyelovesicostomy. Case presentation Our patient is a 59-year-old male with a history of end-stage renal disease secondary to bilateral renal artery stenosis. In 1995, the patient underwent a living donor renal transplant. He did well for many years. At the end of 2015, the patient was admitted from a transplant clinic follow-up with an elevated creatinine of 3.84 mg/dL from his baseline of 1.9 mg/dL. VCUG confirmed reflux into only the distal portion of the transplant ureter. A renal scan was consistent with obstruction. A nephrostomy tube was placed in the transplant kidney and an antegrade nephrostogram was performed, suggestive of a proximal ureteral stricture. Subsequently, the patient underwent endoscopic evaluation of his transplant ureter, which confirmed a 2 cm proximal ureteral stricture. UroMax™ balloon dilation was performed and an 8.5Fr × 22 cm double J stent was left in place and his nephrostomy tube capped. Five weeks after dilation, the patient was readmitted with severe AKI, malaise and decreased urine output. His nephrostomy tube was uncapped with resultant high volume output consistent with postobstructive diuresis. His serum creatinine eventually leveled to approximately 2.0 mg/dL. Having now failed endoscopic management of his stricture, the stent was removed and his nephrostomy tube left to gravity drain. Almost 2 months later, a follow-up antegrade nephrostogram was performed in clinic, which failed to opacify the bladder or transplant ureter (Fig. 1). A combined case involving transplant surgery and urology was planned and the patient was taken to the operating room for a planned open pyelovesicostomy. Cystoscopy revealed a normal bladder, but a completely obliterated ureteral opening. The team was unable to cannulate the fibrosed transplant ureter. An infraumbilical midline incision was made.
The transplant ureter was identified and found to have a significant fibrotic rind. Ureterolysis was performed until we had circumferential control of several centimeters of the transplant ureter from just distal to the iliac vessels to the renal pelvis. The ureter itself was partially transected, but no mucosal lumen could be identified. The transplant kidney renal pelvis was found to be predominantly intrarenal. The renal hilum was encased in severe fibrosis, precluding further dissection without placing the vasculature at risk. Given these findings of an inaccessible renal pelvis, consideration was given to vesicocalicostomy. Intraoperative ultrasound was performed to identify the largest, most dependent renal calyx of the transplant kidney. The bladder was fully mobilized. A bladder elongation flap was created on the anterior bladder wall. A 1 cm circular portion of the lower pole renal cortex was excised. The collecting system was entered, with its location confirmed with methylene blue injected through the nephrostomy tube. The vesicocalicostomy was then performed by first completing the posterior anastomosis between the lower pole calix mucosa and the mucosa of the bladder elongation flap using interrupted 5-0 polyglactin 910 sutures. Prior to completion of the anterior anastomosis, a ureteral stent was placed. The anterior anastomosis was then completed. The remaining bladder flap and cystotomy were then tubularized in a two-layer running fashion with 4-0 polyglactin 910. A peritoneal flap was harvested and sutured over our anastomosis for additional reinforcement. The immediate postoperative course was uncomplicated. Cystogram at 4-weeks postop confirmed a freely refluxing anastomosis and a well healed cystorrhaphy. His ureteral stent was removed after 6 weeks and an antegrade nephrostogram at 8 weeks postop demonstrated a patent vesicocalicostomy without recurrent stricture or significant hydronephrosis (Fig. 2). At his 6 month follow-up, the patient was voiding well and pleased with his urinary quality of life. His serum creatinine was 2.2 mg/dL. Renal ultrasound was normal, showing stable pelvocaliectasis. Discussion Ureteral strictures represent a significant urological complication in renal transplant patients. When stenting fails to treat these ureteral strictures, open or laparoscopic techniques are employed. The most commonly described repairs are ureteroneovesicostomy directly to bladder or with a flap, ureteroureterostomy with allograft or native ureter, or vesicopyelostomy. Repair of native ureteral strictures has even been described using appendicovesicostomy, but never in a transplant kidney. 5 There is a paucity of literature on using vesicocalicostomy to treat ureteral strictures in transplant recipients. One case report from 1986 describes a successful vesicocalicostomy in a young woman who had severe hydronephrosis due to ureteral stricture early after transplant. 4 After two unsuccessful attempts at surgical repair, an anastomosis between the bladder and a lower pole calix was created. At 23-months postop, the patient remained infection free and with only minimal dilation of the pyelocaliceal system. A second case report from 2013 described a successful laparoscopic-assisted vesicocalicostomy in a native kidney for severe ureteral stricture disease. 5 Intraoperative ultrasound was employed to identify the lower pole calyx. At 2-year follow-up, the patient was asymptomatic. Our case study demonstrates a viable option for treatment of transplant ureteral stricture.
It highlights the intraoperative flexibility that is necessary for urological reconstruction. Though he is only 6-months postop at this time, the patient is doing well and currently shows no sign of complication. Conclusion Vesicocalicostomy is a rarely described option for transplant kidney ureteral stricture. It is appropriate when other standard repairs are not feasible. This should be included in the armamentarium of transplant surgeons and urologists managing kidney transplant complications. Consent Verbal consent was obtained from the patient for this case study. Conflicts of interest The authors have no conflicts of interest to disclose.
Evidence to support the early introduction of laparoscopic suturing skills into the surgical training curriculum Background The objectives of this study were to investigate the relationship between the acquisition of laparoscopic suturing skills and other operative laparoscopic skills and to provide evidence to determine ideal time and duration to introduce laparoscopic suturing training. Methods The first part of the study explored the relationship between the acquisition of laparoscopic suturing skills and proficiency of other operative laparoscopic skills. The second part of the study consisted of an opinion survey from senior and junior trainees on aspects of training in laparoscopic suturing. Results One hundred twenty-eight surgical trainees participated in this study. The total scores of task performance of 57 senior surgical trainees in laparoscopic suturing skills consisting of needle manipulation and intracoporeal knot tying were improved significantly after the training course (46.9 ± 5.3 vs 29.5 ± 9.4, P < .001), the improvement rate was 59%. No statistically significant correlations were observed between intracorporeal laparoscopic suturing skills and proficiency in the basic laparoscopic manipulative skills assessed before (r = 0.193; P = 0.149) and after (r = 0.024; P = 0.857) the training course. 91% of senior trainees and 94% junior trainees expressed that intracorporeal suturing should be introduced at an early stage of the training curriculum. Conclusions There was no statistically significant correlation between the performance on basic operative laparoscopic skills (non-suturing skills) and laparoscopic suturing skills observed in this study. The acquisition of basic laparoscopic skills is not a prerequisite for training in intracorporeal suturing and it may be beneficial for the surgical trainees to learn this skill early in the surgical training curriculum. Surgical trainees want to learn and practice laparoscopic suturing earlier than later in their training. the common objective of delivering standardized, quality training through a strong accreditation process [4]. Laparoscopic suturing skill was the area most in need of improvement in a national survey conducted by Nepomnayshy et al. in USA [5]. In the survey, the trainees found laparoscopic suturing skills to be the most deficient skill at the conclusion of residency training, as well as considering them to be the most important to master before completion of fellowship training [5]. Laparoscopic suturing skills are best acquired by training on both inanimate and animate models in the skills laboratories before being attempted in operative clinical practice [6][7][8][9][10]. There is good evidence that suturing skills acquired by simulator training can be translated to operative clinical laparoscopic surgery [11,12]. Both training and learning of intracorporeal suturing and knot tying can be assessed objectively and this enables assessment of progress in skill acquisition [12][13][14][15][16]. A survey conducted amongst urologists confirmed that hands-on laparoscopy training courses contributed to expansion of laparoscopic practice. In these studies, experience gained from these laparoscopic training courses enabled 61% participants to improve their clinical practice by including intracorporeal suturing in laparoscopic urological operations [17,18]. In a similar study, Sleiman et al. 
demonstrated that a short well-guided training course, using the European Academy laparoscopic "Suturing Training and Testing (SUTT) model, significantly improved surgeon's laparoscopic suturing ability, regardless of their level of experience in laparoscopic surgery [19]. Because of the visual and mechanical constraints, laparoscopic suturing is regarded as a demanding laparoscopic task and is reserved for the more advanced trainees who have mastered the other less taxing laparoscopic component skills in some training curriculums and countries [2,4,20]. For instance, the surgical trainees start learning laparoscopic suturing skills formally at year 3 or year 4 of their surgical training curriculum when they need to master this skills for a surgical procedure of laparoscopic fundoplication in general surgery in the United Kingdom [21]. Laparoscopic suturing skills have been an integrated session of the Fundamentals of Laparoscopic Surgery (FLS) curriculum and examination in the USA [22]. Mattar et al. conducted a national survey to assess readiness of general surgery graduate trainees entering accredited surgical subspecialty fellowships in North America. One of the major findings was that 56% of the surgical trainees could not do laparoscopic suturing [4]. Kurashima et al. also showed that only 55% of the teaching hospital had a skills lab and assessment tool to assess the laparoscopic skills including laparoscopic suturing skills of their trainees in Japan [23]. Thus, there has been no objective evidence that it is educationally sound on either early or delayed introduction of laparoscopic suturing skills training despite the significant improvement of surgeon's laparoscopic suturing skills obtained by attending surgical skills training courses [7,19]. Hence, there is no available objective data which addresses the issue of the optimal time for the introduction of laparoscopic intracorporeal suturing and knot tying in the surgical curriculum. In our training centre which is the biggest surgical training centre in the United Kingdom, laparoscopic suturing is restricted to intermediate and advanced laparoscopic skills courses and is usually excluded from the basic ones. The present study was set up to explore this issue by studying the relationship between the acquisition of laparoscopic suturing skills and proficiency in the more basic components of laparoscopic skills to gather objective evidence for early introduction of laparoscopic suturing skills into surgical training program at an early stage. It was also designed to obtain the views of both senior and junior surgical trainees on early versus delayed introduction of laparoscopic intracorporeal suturing. Overall study design The study was conducted from April 2016 to September 2017 in the Surgical Skills Centre, Ninewells Hospital and Medical School, University of Dundee, UK. Ninety-two senior and 36 junior surgical trainees were recruited in the study (Table 1). Fifty-seven senior surgical trainees were selected from the 92 by the dates they attended the course to participate the first part of the study. The senior trainees were either specialist registrars year 4-6 with clinical laparoscopic surgical experience in the UK or overseas surgery trainees with equivalent experience. Senior participants were selected from the UK, Europe, Africa, and Asia. 
Considering the differences of training systems, eligibility of the participants' level of experience was assessed by the experts at the training centre based on the information provided in their CVs and recommendation letters from the heads of their departments. (Table 1, excerpt: proficiency in laparoscopic suturing, 29% of senior trainees vs. none of the junior trainees; region, 58% UK and 42% overseas for senior trainees vs. 93% UK and 7% overseas for junior trainees.) The study consisted of two parts. The first part was designed to investigate the relationship between the acquisition of laparoscopic suturing skills and proficiency in other, more basic laparoscopic component skills and cognitive knowledge. Six common laparoscopic tasks were selected: port insertion, electrosurgical knife dissection, clipping, scissors cutting and applying an endoloop were regarded as basic operative laparoscopic skills, while laparoscopic suturing was considered a skill one level up. These tasks were well defined skills for assessment of laparoscopic skills in previous publications [24,25]. These tasks provided more information on the performance of operative laparoscopic skills than simple peg transfer and similar exercises. Scissors dissection, clipping, applying an endoloop and laparoscopic suturing were tested on synthetic models (Fig. 1). In the scissors dissection exercise, a double-layered latex membrane was attached with tension to a plastic cylinder using an elastic band. The participant was required to carefully dissect between the black lines and separate the triangular shape from its attachments. Any deviation over the lines or damage to the underlying layer of latex was considered an error. For the clipping skills test, the participants were asked to select one of the vessels of a synthetic vascular bed and apply two clips, leaving an appropriate distance between the clips to allow safe cutting of the vessel. To accomplish this, the participant must supinate or pronate both wrists to ensure that both jaws of each clip could be seen prior to applying the clip transversely. Participants were also tested on their technique to apply an endoloop onto a simulated appendix. The skills required were to use both hands to position the loop at a marked black line and to tighten it with proper tension. Electrosurgical hook dissection was carried out on turkey wings (Fig. 2). This involved keeping the hook in endoscopic view and, using controlled movement, carrying out dissection in the right tissue plane, with no deviation from the marked lines at the edges of the triangle (accuracy of the dissection) that was drawn on the turkey wing. The laparoscopic suturing task was performed with a 20-cm length of suture with a 3/8 needle (3/0 Polysorb 22 mm taper 3/8 needle, Code GL-303, Medtronic) on a sponge foam with marked lines (Fig. 1). Assessors' training and reliability in using the objective structured clinical examination (OSCE) and observational clinical human reliability analysis (OCHRA) assessment methods The OSCE approach has been in use for the assessment of laparoscopic operative and cognitive skills in the centre since 1990 [26]. This assessment covers knowledge related to the safe practice of laparoscopic surgery as well as operative skills. Scores were obtained from each operative task and knowledge station by an independent assessor who had adequate training in using this technique. Human Reliability Assessment (HRA) techniques have been in use for several decades in high-risk industries to improve performance and safety.
This HRA methodology was modified for use in laparoscopic clinical surgery and was the basis for the validated system of Observational Clinical Human Reliability Assessment (OCHRA) [27]. The assessors had adequate training in using the OSCE and OCHRA techniques. Performance of trainees was assessed during the tests before and after the training courses. Scores were obtained from each operative and knowledge station by independent assessors. Six experienced consultant surgeons assessed trainees' performance at the stations. The inter-rater consistency of the OCHRA system assessed was found to be 0.85. The expert panel provided consultation throughout the study and checked the accuracy of the analysis. It was also the aim to investigate if the proficiency of other basic laparoscopic skills would be a prerequisite for learning the laparoscopic suturing skills by analysing the correlation between the task performance on basic laparoscopic skills and laparoscopic suturing skills. It involved assessment of the proficiency (task performance) of the trainees in six common laparoscopic skills including component steps of laparoscopic intracorporeal suturing. The ethical committee advised that the consent from the participants was sufficient for ethical approval because of the nature of the study. There were no patients or other conflicting materials involved in this study. The second part of the study consisted of an opinion survey from 35 senior trainees and 36 junior trainees. These 35 trainees were selected from the 92 trainees who were at the same level of skills as the 57 trainees. The 36 junior trainees were recruited from a basic laparoscopic skills training course. These senior trainees had almost finished their surgical training and had a better understanding of the need for laparoscopic skills training for the surgical trainees [4,5]. Five expert laparoscopic surgeons who were well recognized and reputed in their field (had experience of more than 200 laparoscopic procedures, taught and trained on a minimum of one laparoscopic course) identified 5 key questions associated with training of laparoscopic suturing skills. They were selected based on aspects of training in laparoscopic suturing: timing, duration and identifying the most technically demanding skills of intracorporeal suturing. Subsequently, these 5 key questions were used to design the questionnaire ( Table 2). Pre course assessment This was based on performance of 6 tasks by 57 trainees who attended advanced upper/ lower gastrointestinal laparoscopic surgery training courses. The 6 tasks included the testing of operative laparoscopic skills on electrosurgical hook knife dissection (Fig. 2), clipping, scissor cutting, port insertion, applying an endoloop, and suturing skills (Fig. 1). The nature and purpose of this assessment was explained to all of the participants and formal consents to participate were obtained. The precourse assessment was conducted in the morning prior to the start of the course. The standard laparoscopic video stack equipment and instruments were used for carrying out all of the tasks in trainer boxes using synthetic and animal tissue models. Scores of the assessment for the non-suturing laparoscopic operative skills were obtained by use of checklist based on Objective Structured Clinical Examination (OSCE) and Observational Clinical Human Reliability Analysis (OCHRA) [25]. OCHRA and OSCE were used as the more objective means of assessment of laparoscopic operative and cognitive skills in present study. 
These two assessment methods have been validated previously already [25][26][27]. The scores for task performance of laparoscopic suturing were assessed using the 29point checklist method [7]. Both assessment methods had been previously validated [7,[25][26][27][28]. Assessments were carried out live by the experienced senior lecturers and consultant surgeons who were experienced in using the assessment methods. Details of the course program The courses comprised didactic session, live surgery demonstration, anatomy session on a cadaver, expert discussion session and practical hands-on sessions using both synthetic and restructured animal tissue models. The proportion of the time distribution between the didactic and practical sessions was 30 to 70%, emphasizing the predominant hands-on training nature of the course. During the course, the participants undertook an exercise on their ability to overcome the visual constraints of laparoscopic surgery and their efficient equipment and instrument positioning/ manipulation using ergonomic principles to execute dissection and clipping tasks. Laparoscopic suturing training consisted of skills in ideal ergonomic set up for suturing, handling needle holder, needle manipulation technique, bite placement, and knot tying. The training of laparoscopic suturing skills was at an advanced level, for example, laparoscopic suturing on structures under tension, i.e., repairing hiatal defect during laparoscopic fundoplication. Participants received laparoscopic suturing for two consecutive half days totalling 8 h. Suturing skills included interrupted suturing, continuous suturing, and tumbled square knotting for suturing under tension. Thereafter, the trainees were given the opportunity to apply all of the acquired skills to various surgical procedures on restructured animal tissue models. The simulated procedures consisted of laparoscopic fundoplication, laparoscopic extraction of ductal stones, bowel anastomoses, repair of perforated duodenal ulcer, and gastric bypass etc. Post course assessment The post-course assessment based on the same six tasks was performed at the end of the course on the 57 participants using the same assessment and scoring systems. Opinion survey from trainees The second part of the study consisted of an opinion survey on aspects of training in laparoscopic suturing: timing, duration, and the most technically demanding skills of the intracorporeal suturing. These 35 senior trainees were selected from the 92 trainees who were at the same level of skills as the 57 who participated the first part of the study. One group of 36 junior trainees were used as a control group. The junior trainees attended a 2-day basic laparoscopic training course. They were in the specialist registrar year 1-2 of their training. The questionnaires were handed in at the end of each course. Statistic analysis Statistical analysis was carried out using the Statistical Package for Social Science version 23 (SPSS, Chicago, Illinois, USA). Data analysis showed that the sample data were not normally distributed. Therefore, the Mann-Whitney test was used to analyse the difference in task performance of all the assessed laparoscopic operative skills before and after the course. Quantitative score data is expressed as mean ± stand deviation (s.d.). 
The correlations between performance on the laparoscopic suturing tasks (needle manipulation, bite placement, and intracorporeal knot tying) and the other laparoscopic operative skills were analysed by Pearson's correlation and linear regression analysis, with statistical significance set at 0.01.
Demographics of participants
Ninety-two senior and 36 junior surgical trainees were recruited in this study (Table 1). Fifty-seven senior trainees participated in the first part of the study while 35 senior trainees and 36 junior trainees took part in the second part of the study. The senior trainees were all at a similar level in laparoscopic surgery including laparoscopic suturing skills. They were in years 4-6 of surgical training in the UK or were overseas delegates with equivalent experience, and were aged between 28 and 32 years. They had 3-4 years' experience in laparoscopic surgery and had performed laparoscopic cholecystectomy and laparoscopic appendicectomy independently, with numbers varying between 30 and 100 cases and averaging around 50 cases. Fifty-eight percent of the participants were from the UK while 42% were from overseas. More than 70% of them were not proficient in laparoscopic suturing. The junior trainees were in years 1-3 of surgical training and were aged between 26 and 30 years. They attended a 2-day basic laparoscopic skills training course.
Correlation between the basic operative laparoscopic skills and laparoscopic suturing performance
The 57 senior trainees were randomly selected by the dates they attended the courses to perform the assessment of basic laparoscopic and suturing tasks before and after completing the advanced training course. In this group, no correlations were observed between the task performance relating to laparoscopic suturing skills and the other, more basic laparoscopic operative skills (electrosurgical hook knife dissection, clipping, scissor cutting, port insertion, applying an endoloop) either before (r = 0.193; P = 0.149) (Fig. 3a) or after (r = 0.133; P = 0.323) the course (Fig. 3b). The correlation between the post-course basic laparoscopic operative skills (electrosurgical hook knife dissection, clipping, scissor cutting, port insertion, applying an endoloop) and post-course laparoscopic suturing skills was also not significant (r = 0.024; P = 0.857) (Fig. 4). There was also no significant correlation between intracorporeal knot tying skills and skills in needle manipulation (r = 0.168; P = 0.211) or bite placement (r = 0.298; P = 0.024) in the post-course assessment. (Fig. 3 caption: a, The pre-course basic laparoscopic operative skills of participants did not correlate with pre-course performance in laparoscopic suturing (r = 0.193; P = 0.149). b, The score for the pre-course assessment of task performance of other laparoscopic operative skills did not correlate with the task performance of laparoscopic suturing skills assessed post-course (r = 0.133; P = 0.323).)
Comparison between the pre- and post-course task performance (Fig. 5)
The total scores of task performance in laparoscopic suturing skills improved significantly after the training course (46.9 ± 5.3 vs 29.5 ± 9.4, P < 0.001); the improvement rate was 59%.
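As a rough illustration of the analyses described above, the minimal Python sketch below runs a Mann-Whitney comparison of pre- versus post-course scores and a Pearson correlation between basic-skills and suturing scores. It is not the authors' analysis (the study used SPSS), and the score arrays are hypothetical placeholders rather than study data.

```python
# Minimal sketch of the statistical comparisons described above; all numbers are
# hypothetical placeholders, not the study data (the authors used SPSS v23).
import numpy as np
from scipy import stats

pre_suturing = np.array([28, 31, 25, 34, 30, 22, 36])          # hypothetical pre-course suturing scores
post_suturing = np.array([45, 48, 44, 51, 46, 43, 50])         # hypothetical post-course suturing scores
basic_skills = np.array([120, 131, 118, 140, 126, 115, 133])   # hypothetical basic-skills totals

# Non-parametric comparison of pre- vs post-course performance (data not normally distributed).
u_stat, p_value = stats.mannwhitneyu(pre_suturing, post_suturing, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, P = {p_value:.4f}")

# Correlation between basic operative skills and post-course suturing performance.
r, p_corr = stats.pearsonr(basic_skills, post_suturing)
print(f"Pearson r = {r:.3f}, P = {p_corr:.3f} (significance threshold 0.01)")
```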
The total scores of the other operative laparoscopic skills including port insertion, tissue dissection, using of diathermy hook, clipping, and application of endoloop (127.5 ± 10.1 vs 95.5 ± 16.8, P < 0.001) and cognitive knowledge (66.5 ± 16.2 vs 58.8 ± 10.3, P < 0.001) also improved significantly in trainees with previous laparoscopic experience, they were improved at a rate of 34 and 14% respectively (Fig. 5). Opinions survey on the acquisition of laparoscopic suturing skills Details of the questionnaires given to the senior (n = 35) and junior (n = 36) trainees were shown in Table 2. All returned the completed questionnaires (100% compliance). The majority of both groups, 91% of senior trainees and 94% of junior trainees, expressed the view that laparoscopic suturing should be learned at an early stage of their surgical training and should form part of the basic laparoscopic training course. Opinion on the ideal duration of laparoscopic suturing practical sessions differed between the senior trainees who had actually practiced laparoscopic suturing and the junior trainees who had not: 27 (77%) senior trainees opted for two half days (74%) or one full day (26%), whereas 21 (58%) junior trainees considered one half day as being sufficient. There was a difference between these two groups in their views on the level of difficulty/ complexity of the steps of the intracorporeal suturing task (Table 2). Twenty-three (66%) senior trainees who had practiced the laparoscopic suturing indicated that laparoscopic needle manipulation was the most difficult skill to master, and 9 (26%) considered intracorporeal knot tying was difficult after having attended the training course. In contrast, 28 (78%) junior trainees who had not practiced the laparoscopic suturing predicted that the intra-corporeal knot tying would be the most difficult component of the laparoscopic suturing task. Discussion This is the first study in the surgical literature to investigate the relationship between basic operative laparoscopic skills and laparoscopic suturing skills. The study has demonstrated that there was no statistically significant correlation between the performance on basic operative laparoscopic skills (non-suturing skills) and laparoscopic suturing skills, both before and after attending the laparoscopic training courses. The acquisition of basic operative laparoscopic skills may not be a prerequisite for the acquisition of laparoscopic suturing skills. It has provided scientific basis to explain why training junior operative residents in laparoscopic suturing skills is feasible on a short well-guided training course [7,19]. Along with the objective evidence, this study has also provided subjective opinions from senior surgical trainees. Surgical trainees indicated their preference for earlier exposure to laparoscopic suturing in their training. The assumption that mastery of basic laparoscopic operative skills is necessary for trainees to benefit from training in laparoscopic suturing is disproved by the findings of the present study. Skills acquisition from basic to complex skills in laparoscopic and robotic surgery is a profound area to study. The basic operative laparoscopic skills did not correlate significantly to advanced skills such as the laparoscopic suturing skills in our data. This may be similar in acquisition of other complex laparoscopic skills such as operative skills in robotic-assisted surgery. Kowalewski et al. 
have demonstrated that robotic-assisted surgery requires skills distinct from conventional laparoscopy or open surgery [29]. Laparoscopic suturing involves several tasks, which include handling the needle holders, loading a needle onto the needle holder, holding the needle at the correct angle and direction, making a bite into the tissue, and finally safely tying a knot. Therefore, laparoscopic suturing skill may be distinct enough that surgical trainees would benefit from direct experiential training. Interestingly, we also found that previous experience in laparoscopic suturing did not correlate with the level of laparoscopic suturing performance in the post-course assessment. All of the operative laparoscopic skills, including suturing skills, were improved significantly by the intensive hands-on training, which was in line with the findings published in the literature [6, 7, 9-11, 13, 19]. To date, in some surgical training centres, skills courses have been designed and developed mainly on the opinions of a panel of expert educators/tutors without any input from the surgical trainees. This is perhaps the main reason why laparoscopic suturing is excluded from the basic laparoscopic skills courses [7,20]. There are, however, other contributing factors, which include: (i) there are few active and well-established surgical training centres which run these courses with the necessary in-house expert tutors in laparoscopic suturing; (ii) there is an implicit belief that laparoscopic suturing may be too difficult for junior trainees and thus counterproductive to their progress if introduced too early in the curriculum; (iii) besides the complexity and difficulty of acquiring proficiency in laparoscopic suturing, the other main concern or argument against its earlier introduction is that trainees will not have the opportunity to apply the skills in their clinical practice and, for this reason, would deskill very quickly. In the absence of such data, we need to take on board the opinion expressed by the surgical trainees documented by the present study. In practice, trainees have insufficient access to tutored laparoscopic suturing training sessions on physical models. This is important as the current generations of VR surgical simulators, while able to impart the basic component clinical skills, are a long way off providing effective simulation for laparoscopic suturing [8]. (Fig. 5 caption: Comparison of task performance in basic operative laparoscopic skills (127.5 ± 10.1 vs 95.5 ± 16.8, P < 0.001) and laparoscopic suturing skills (46.9 ± 5.3 vs 29.5 ± 9.4, P < 0.001) before and after the training course. All were improved by the training provided during the course.) The reported study also confirms that laparoscopic suturing skills broaden the clinical applicability of laparoscopy and increase the laparoscopic caseload in both general surgery and urology [16,17]. For this reason, laparoscopic suturing should be introduced earlier in the surgical curriculum and should certainly be included in basic laparoscopic training courses to prepare trainees for the opportunities to come [4,5]. This study also demonstrated that there are different opinions on the ideal duration of laparoscopic suturing exercise sessions between the senior trainees (with experience of laparoscopic suturing) and the junior trainees, who had no previous exposure and could only guess at the optimal duration.
We consider that one half day to practice suturing skills is not sufficient and recommend two consecutive half days for optimal skill acquisition. This provides adequate exposure for the trainees to practice their laparoscopic suturing to reach proficiency which they can then translate to their practice in the operating room [12,24,28]. The survey showed a significant difference in the identification of the difficult steps of laparoscopic suturing. The majority (66%) of the senior trainees indicated that laparoscopic needle manipulation was the most difficult component step, whereas the majority (78%) of junior trainees predicted that the intra-corporeal knot tying would be the most difficult component of the suturing task. The view expressed by junior trainees should not be overlooked as it indicates the need for a precise clear description of the sequential component steps of the intracorporeal knot tying to junior trainees. The senior trainees had obviously advanced beyond this perceived difficulty with knot tying and thus did not identify it as a particularly difficult problem. Despite the evidence to support the feasibility and efficiency of early introduction of laparoscopic suturing skills into the surgical training curriculum, studies have shown that there was a modest decrement in performance of laparoscopic suturing skills after 6 months of training [30][31][32]. Therefore, it is fundamentally important to understand that the surgical trainees may become deskilled if their laparoscopic suturing skills are not used in the operating room or maintained with repeated practice in a simulated setting [30]. Mashaud et al. and Scerbo et al. demonstrated that an ongoing structured training programme helped to maintain proficiency of laparoscopic suturing skills [31,32]. Therefore, a retention interval and refresher session should be provided for the junior trainees who do not have adequate exposure in the operating room to reinforce and maintain laparoscopic suturing skills [9,15,30,32]. The portable laparoscopic simulator and virtual reality simulator have been proven to be valid and effective for this purpose [12,15,24]. Limitations This study was not conducted in a randomized controlled trial. However, we were aware that this was not achievable during the time when this study was conducted, as we did not have control of choosing the candidates though we assessed their eligibility to attend the course. The study was also not designed to compare the outcome of an early versus late introduction of laparoscopic suturing skills in the surgical training curriculum, thus, there was no data to demonstrate whether laparoscopic suturing skill benefits from prior training in basic operative laparoscopic skills (non-suturing skills) or not. There was no assessment on the actual performance of the senior trainees in the operating room after the course because the study was not designed to assess the transferability of laparoscopic suturing skills from a skills lab into operating room, as many studies have demonstrated this already [10-14, 17, 18, 25]. Senior surgical trainees considered that needle manipulation was the most difficult component for laparoscopic suturing skills whereas the junior group thought the knot tying was the difficult task. We did not conduct a study to investigate this different opinion further. 
This may be of importance for laparoscopic suturing skills training when teaching surgical trainees at different levels of experience and allocating time to each task during the exercise. This will be an area for future study. We were also fully aware that this study was mainly based on our own institutional data to provide evidence to support the early introduction of laparoscopic suturing skills into the surgical training curriculum. Thus, adoption of this practice should be tailored to meet the requirements of individual training systems so that it benefits the surgical trainees and improves their surgical performance.
Conclusions
There was no statistically significant correlation between performance on basic operative laparoscopic skills (non-suturing skills) and laparoscopic suturing skills observed in this study. The acquisition of basic laparoscopic skills is not a prerequisite for training in intracorporeal suturing, and it may be beneficial for surgical trainees to learn this skill early in the surgical training curriculum. Surgical trainees want to learn and practice laparoscopic suturing earlier rather than later in their training.
Remote electrocardiograph monitoring using a novel adhesive strip sensor: A pilot study
Abstract
The increase in health care costs is not sustainable and has heightened the need for innovative, low-cost, effective strategies for delivering patient care. Remote monitoring holds great promise for preventing or shortening the duration of hospitalization even while improving quality of care. We therefore conducted a proof-of-concept study to examine the quality of electrocardiograph (ECG) recordings obtained remotely and to test their potential utility in detecting harmful rhythms such as atrial fibrillation. We tested a novel adhesive strip ECG monitor and assessed the ECG quality in ambulatory individuals. 2630 ECG strips were analyzed and classified as: sinus, atrial fibrillation (AF), indeterminate, or other. Four readers independently rated ECG quality: 0: noise; 1: QRS complexes seen, but P-wave indeterminate; 2: QRS complexes seen, P-waves seen but of poor quality; and 3: clean QRS complexes and P-waves. The combined average rating was: noise 12%; R-R, no P-wave 10%; R-R, no PR interval 18%; and R-R with PR interval 60% (if sinus). If minimum diagnostic quality was a score of 1, 88% of strips were diagnostic. There was moderate to high agreement regarding quality (weighted Kappa statistic values 0.58 to 0.76) and a high level of agreement regarding ECG diagnosis (ICC = 0.93). A highly variable RR interval (HRV ≥ 7) predicted AF (AUC = 0.87). The monitor acquires and transmits diagnostic high-quality ECG data and permits characterization of AF.
THE STUDY
Due to increased longevity, people are facing an increasing prevalence of chronic disease that threatens their ability to live independently and has led to rapidly escalating healthcare costs. It is imperative that new, effective, economical and efficient methods to prevent and manage chronic disease are developed. Cardiovascular disease accounts for a significant burden of chronic illness, often manifesting as heart failure, and arrhythmias such as atrial fibrillation (AF) are commonly observed [14].
These arrhythmias may be difficult to detect, often initially presenting as decompensation of heart failure or stroke. Remote monitoring of physiologic measures such as the ECG and heart rate may provide an important option for early detection of cardiovascular compromise and arrhythmias [5] . Limitations of current monitoring systems include a large body burden and inconvenience in use, latency in transmission of physiolo gic information, enormous volumes of data for analysis consuming human resources, and significant false alarms generated by artifact, requiring human oversight [68] . We have developed a personal monitoring system capable of interfacing with additional low profile, unobtrusive, onbody and offbody sensors to provide realtime and cumulative data to a health care pro vider at any internet or cellular network enabled location. The system records ECG, respiration (via bio impedance measurement), and physical activity using a 3axis accelerometer. The system also has embedded algorithms that provide a selfdiagnostic reliability index to qualify the value of the data, permitting reviewers to discard noisy signals, thus facilitating generation of alerts with greater specificity. In this pilot study, we sought to test the monitoring system in healthy volunteers residing in an independent living center, to determine whether the system satisfactorily acquires, stores, and displays ECG information of diagnostic quality in ambulatory, freeliving individuals. LITERATURE AND RESEARCH We prospectively enrolled 10 healthy volunteers from residents of the Mayo Clinic Charter House, an assisted living center near the Mayo Clinic Rochester downtown campus. To be eligible, participants had to live in apartments with appropriate cellular network coverage. Subjects with implanted cardiac defibrillators or pacemakers were excluded. After enrollment, a study coordinator provided each participant with a data hub that consisted of a SmartPhone preloaded with custom monitoring software (Google Nexus, HTC Corporation, Taipei, Taiwan), a charger for the SmartPhone, as well as two fully charged monitoring units and adhesive snap strips ( Figure 1, BodyGuardian, Preventice Inc., Minneapolis, MN, described further below). A study coordinator instructed the subject on applying the adhesive strip sensor to the chest, methods for ensuring good signal quality, and how to ask for assistance if required. Each subject was asked to use the system for 3 consecutive days. Supervised maneuvers, such as lying supine, sitting, standing, and walking were performed once per day, each day for 3 d, at which time the various signals were recorded. At the end of each 24-h period, the Study Coordinator exchanged the unit for a newly charged unit. The study was approved by the Institutional Review Board. Since the system was not FDA approved at the time of the study, no clinical decisions or management changes were made based on data obtained during the trial. REMOTE MONITORING SYSTEM The remote health management system connects personal health sensors with secure mobile communi cation devices. The monitor frontend is composed of an electronic unit; an adhesive patch with attached electrodes and snaps for a rechargeable module. The rechargeable module measures 59 mm × 50 mm and houses the sensors, battery and wireless transmitter ( Figure 2). It is detachable from the electrode snap strips to permit showering. The module is able to measure heart rate (HR), ECG, respiratory rate, and activity level. 
The ECG is recorded via the two inner electrodes (the distance between the inner electrodes is 70 mm and the distance between the outer electrodes is 104 mm). The electrode pads measure 10 mm diameter and have a signal sampling rate of 256 Hz with 12 bit resolution. Respirations are measured by injection of a low voltage charge from one pair of electrode contacts and measuring the change in voltage over a fixed distance on the other pair of electrode contacts (current amplitude: 100 µA, current frequency: 50 kHz, sampling frequency: 32 Hz). A three dimensional accelerometer acquires samples at 50 Hz and the signal is algorithmically processed to determine physical activity. Physiologic information is communicated to a remote server using a mobile phone as the communication hub. The mobile phone displays data acquisition, battery level and data transmission to the subject. During normal operation, the system collects phy siologic data and stores it in its onboard memory. The data are transmitted to the smart phone data hub at programmable intervals (nominally 60 min). In the absence of proximity to the data hub, data are stored on the rechargeable module attached to the adhesive strip until the next communication attempt. Data are automatically transmitted from the smart phone hub to a secure, HIPAA compliant server database. Utilizing clinical algorithms, the system is capable of automated decision making based upon integration of data and can provide immediate feedback to the subject. The solution is a multitiered mobile health platform ( Figure 3). The stored data are presented for review via a webbased interface, or using custom software on an iPad (Apple Computer, Cupertino, CA). SELECTION OF ECG STRIPS FOR ANALYSIS Each hour, a randomly selected twominute ECG strip was automatically recorded and transmitted for the purposes of this study. Users could also manually activate a recording using the smart phone data hub interface. ANALYSIS OF ECG QUALITY Each of the ECG strips was read by 4 independent, experienced readers for ECG signal quality and rhythm interpretability. The readers were ECG technicians working in a 24-h continuous telemetry unit, and were blinded to clinical information and other readers' inter pretations. Each reader independently rated the ECG quality using an ordinal scoring system: 0 Noise, cannot reliably determine QRS complexes 1 QRS complexes reliably seen and RR intervals determined, but atrial activity indeterminate due to baseline noise 2, QRS intervals reliably recorded, and atrial activity seen but of poor quality, and PR interval not reliably seen 3, clean signal, with reliable assessment of QRS intervals, and PR intervals (when present). Quality scores were compared between each pair of readers and a weighted Kappa statistic was calculated assuming an ordinal outcome. In addition, in order to compare quality scores from all 4 readers, an intra-class correlation coefficient was calculated as a measure of agreement across all 4 raters. ANALYSIS OF HEART RATE VARIABILITY The system reports an average HR. The HR is derived by detecting the R wave component of the QRS complex for both normal and premature ventricular complexes (PVCs). The system calculates the interval between R waves (R-R interval) and processes this information to derive an average HR value every 10 s. The system also calculates heart rate variability (HRV). HRV is a value derived from the variance of the ECG RtoR intervals based on a 10s time interval. 
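To make the quality-score agreement statistics described above concrete, the following minimal Python sketch computes pairwise weighted Kappa values and a simple one-way intraclass correlation coefficient for a set of ordinal 0-3 ratings; the rating matrix is a small hypothetical example, not the study data.

```python
# Minimal sketch of the agreement measures described above (weighted kappa between
# reader pairs and a one-way ICC across all readers); the ratings are hypothetical.
import numpy as np
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

ratings = np.array([            # rows = ECG strips, columns = readers 1-4 (hypothetical)
    [3, 3, 2, 3],
    [1, 1, 1, 2],
    [0, 0, 1, 0],
    [2, 3, 2, 2],
    [3, 3, 3, 3],
])

# Pairwise weighted kappa on the ordinal 0-3 scale (linear weights).
for i, j in combinations(range(ratings.shape[1]), 2):
    kappa = cohen_kappa_score(ratings[:, i], ratings[:, j], weights="linear")
    print(f"readers {i + 1} vs {j + 1}: weighted kappa = {kappa:.2f}")

# One-way ICC(1,1) across all readers, from the standard ANOVA decomposition.
n, k = ratings.shape
grand_mean = ratings.mean()
ms_between = k * ((ratings.mean(axis=1) - grand_mean) ** 2).sum() / (n - 1)
ms_within = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC = {icc:.2f}")
```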
The HRV measure is sensitive to both normal beats and PVCs. An event is triggered when the number of heart beats per minute varies by more than the HRV threshold. For example, if the threshold is set to 30 bpm, a HR that varies from the average by more than 5 beats in a 10-s interval triggers an HRV event. Use of the HRV threshold to trigger an event helps to identify ECG tracings that may require physician review, as they are more likely to indicate arrhythmia based on dropped beats, irregular rhythm or increased heart rate. Logistic regression analysis was used to examine the association of HRV with the outcome of AF. A Receiver Operator Characteristic curve and concordance statistic (AUC) were used to illustrate the sensitivity and specificity of HRV.
RESULTS OF STUDY
Ten healthy volunteers were recruited to the study (4 men; average age 79.5 years, range 74 to 92 years). All 10 subjects wore the device for 72 h. Data from all 10 subjects were stored and were available for analysis for the 72-h duration the device was used.
Assessment of ECG quality
Data for 2630 2-min ECG strips were available for analysis. Rhythm was classified by each of the 4 readers as sinus, AF, indeterminate or other (Table 1). There was moderate agreement in rhythm classification between pairs of readers (median Kappa = 0.65). In particular, variability was noted in the percentages of strips rated by each reader as sinus (48%-70%), while the percentages of those rated as AF were comparable across readers (11%-15%). Quality scores were compared between each pair of readers. There was a moderate to high level of concordance between readers (weighted Kappa statistic values ranged from 0.58 to 0.76). There was also a very high level of agreement across the 4 readers (ICC = 0.93). The combined average rating of ECG quality based on the 4 independent raters was: no RR interval (noise) 12%, RR but no P-wave 10%, RR but no PR interval 18%, and PR interval 60% (if in sinus rhythm). Thus, if a minimum diagnostic quality was determination of an RR interval, 88% of strips were sufficiently diagnostic to provide a determination of HR, and a minority of strips (12%) was considered noise related to artifact. Examples of ECG strips and the combined and individual assessments of ECG quality are presented in Figures 3 and 4. (Figure 4 caption: Although the raters' quality score ranged from 1 through 3, the irregularly irregular RR interval and the absence of discernible P waves present in this electrocardiograph signal are diagnostic for atrial fibrillation.)
One of the 10 subjects had persistent AF. In order to preliminarily assess the utility of HRV for identifying AF, and because of the variability in ECG classification, analysis was performed on those strips that were found to be in agreement across all 4 readers as either sinus rhythm (n = 889) or AF (n = 252). HRV scores were found to be significantly different between those classified as sinus rhythm (mean = 10.0, SD = 2.4) and those classified as AF (mean = 4.7, SD = 5.9), P < 0.001. Based on this finding, we defined images with a variability score of 7 or greater as highly variable. Ninety-seven percent of the strips with HRV ≥ 7 were classified as AF. This variable was also entered into a logistic regression model for predicting AF. The univariate area under the curve (AUC) for a highly variable RR interval (HRV ≥ 7) in predicting AF was 0.87 (Figure 5). Using HRV ≥ 7, sensitivity was calculated to be 97% (95%CI: 94-99) while specificity was 77% (74-80), positive predictive value was 54% (49-59) and negative predictive value was 99% (98-99).
DISCUSSION
The findings of this pilot study demonstrate for the first time the ability of this low-body-burden, unobtrusive, wireless remote monitoring system to acquire and transmit high diagnostic quality ECG data when worn by elderly subjects leading active independent lives, outside of a hospital environment. Artifact in ambulatory 24/7 ECG recordings results in erroneous arrhythmia classification that may significantly and adversely affect diagnostic accuracy and hence quality of care. These artifacts result from myopotentials (most commonly from the pectoralis muscles), galvanic skin currents, and less commonly electromagnetic interference. These issues are particularly prevalent in ambulatory settings, and Band-Aid style sensors with only two electrodes are particularly at risk. Thus, it is reassuring that most of the ECG recordings using this system provided clinically diagnostic information, free from artifact. Furthermore, although the study was not designed to assess arrhythmia detection, serendipitously, one subject had persistent atrial fibrillation. Analysis of segments using the HRV algorithm permitted differentiation of ECG strips with AF from SR. Determining reliable high quality ECG recordings is important in ambulatory monitoring systems to ensure appropriate diagnosis. It is also important to be able to characterize poor quality ECG data or noise (artifact) so that these data can be ignored. This is particularly important when large amounts of data are being recorded over prolonged periods, when frequent false alarms generate both user and healthcare provider "alarm" fatigue, rendering the system cumbersome and consequently adversely affecting effectiveness, adherence and prescription. The monitor system is capable of acquiring high quality ECG recordings using an unobtrusive adhesive electrode sensor in an ambulatory setting. HRV as defined by this system may be useful for detection of arrhythmias such as atrial fibrillation. Indeed, in this study, one subject had AF. When excessive HRV was noted, ECG data strips from the patient could be reliably determined. This observation could be potentially useful in detecting AF, particularly if new AF develops in an individual who was previously in sinus rhythm (when HRV would be low). High HRV may be seen with arrhythmias other than AF, such as frequent PVCs.
LIMITATIONS
This study has limitations that may constrain broad generalization of our findings. The subjects enrolled in this study were elderly residents of an assisted living facility ranging in age from 72 years to 92 years. They are thus not representative of other population groups who may be younger, more active or less healthy. Furthermore, although there were large amounts of data for analysis, the subject sample size was small. The study design requirement for visual confirmation of rhythm and ECG quality, rather than relying on automated algorithms, made it necessary to limit the number of subjects studied. In mitigation, more than 2600 rhythm strips from the 10 subjects were visually inspected by study investigators to ascertain cardiac rhythm, which was labor- and time-intensive. To prove the clinical utility of this approach in the future will require studies with larger numbers of subjects, which will only be practical with systems capable of automated rhythm identification in order to enable scalability. Additionally, very few patients experienced an arrhythmia (atrial fibrillation), and patients with other arrhythmias were not included. However, this was a pilot study directed toward evaluating the ergonomics, tolerability, and effectiveness of continuous EKG monitoring, and toward determining whether the quality of the EKG recording could be preserved over extended periods.
CONCLUSION
The findings of this pilot study confirm that a remote monitoring system using a novel adhesive strip ECG sensor can acquire and transmit diagnostic high quality ECG data over a period of 3 d when worn by elderly subjects leading active independent lives. Automated determination of heart rate variability permitted reliable characterization of ECG strips with AF.
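To illustrate the HRV-based detection evaluated in this study, the minimal Python sketch below flags strips with HRV ≥ 7 as highly variable, summarizes sensitivity, specificity and predictive values against a consensus rhythm label, and computes the AUC of a univariate logistic model; all arrays are hypothetical placeholders, not the recorded strips.

```python
# Minimal sketch of the HRV >= 7 rule and its evaluation; all data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

hrv = np.array([2, 3, 9, 8, 1, 12, 7, 4, 10, 6])     # hypothetical HRV value per strip
is_af = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 1])     # consensus label: 1 = AF, 0 = sinus

flag = (hrv >= 7).astype(int)                         # "highly variable" flag
tp = np.sum((flag == 1) & (is_af == 1))
tn = np.sum((flag == 0) & (is_af == 0))
fp = np.sum((flag == 1) & (is_af == 0))
fn = np.sum((flag == 0) & (is_af == 1))
print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))
print("PPV", tp / (tp + fp), "NPV", tn / (tn + fn))

# Univariate logistic model for AF and its concordance statistic (AUC).
model = LogisticRegression().fit(flag.reshape(-1, 1), is_af)
auc = roc_auc_score(is_af, model.predict_proba(flag.reshape(-1, 1))[:, 1])
print("AUC", round(auc, 2))
```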
Delineation of Crystalline Extended Defects on Multicrystalline SiliconWafers We have selected Secco and Yang etch solutions for the crystalline defect delineation on multicrystalline silicon (mc-Si) wafers. Following experimentations and optimization of Yang and Secco etching process parameters, we have successfully revealed crystalline extended defects on mc-Si surfaces. A specific delineation process with successive application of Yang and Secco agent on the same sample has proved the increased sensitivity of Secco etch to crystalline extended defects in mc-Si materials. The exploration of delineated mc-Si surfaces indicated that strong dislocation densities are localized mainly close to the grain boundaries and on the level of small grains in size (below 1 mm). Locally, we have observed the formation of several parallel dislocation lines, perpendicular to the grain boundaries. The overlapping of several dislocations lines has revealed particular forms for etched pits of dislocations. INTRODUCTION The presence of crystalline defects and impurities in multicrystalline silicon (mc-Si) wafers leads to loss of energetic efficiency of the photovoltaic cells [1,2].This is due to the degradation of the electrical properties which are correlated with the crystalline defect density in the bulk material [3,4].Indeed, crystalline defects create recombination centers, and consequently reduce lifetime of minority charge carriers.The aim of this paper is the setting in of a specific delineation process for crystalline extended defects in the case of mc-Si bulk and the study of their propagation.This developed technique will permit us to qualify mc-Si ingots grown at our laboratory (UDTS) and compare their characteristics to others produced by major mc-Si producers.Observation with a scanning electron microscope (SEM) of the crystalline extended defects (dislocations, stacking faults, twins, precipitates, etc.) requires a chemical etching operation known as delineation process [5].Delineation consists of the action of a selective etching agent which is able to etch more quickly the zones of the crystalline defects than the perfect crystalline zones.This is due to the fact that in region of defects, the disturbance of the crystal lattice causes weak atomic bonds.The delineation of the defects takes place only on the crystal grain level; the zones of grain boundaries are uniformly and more quickly etched because atomic bonds are weaker there [2].Chemical delineation is followed by SEM observation.Then, identification and counting of the defects per unit area are carried out. 
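As a minimal illustration of this counting step (not part of the original paper), the Python sketch below converts an etch-pit count in one SEM field of view into a dislocation density; the pit count and field dimensions are hypothetical example values.

```python
# Minimal sketch of converting an etch-pit count into a dislocation density (cm^-2).
# The pit count and field-of-view size are hypothetical example values.
def dislocation_density(pit_count, field_width_um, field_height_um):
    """Return etch-pit (dislocation) density in cm^-2 for one SEM field of view."""
    area_cm2 = (field_width_um * 1e-4) * (field_height_um * 1e-4)  # 1 um = 1e-4 cm
    return pit_count / area_cm2

# Example: 120 pits counted in a 50 um x 50 um field -> 4.8e6 cm^-2.
print(f"{dislocation_density(120, 50, 50):.2e} cm^-2")
```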
Several chemicals for defect delineation such as Dash, Sirtl, Secco, Yang, Wright have been validated and are used for monocrystalline silicon defect analysis.The issue of defects delineation process depends on silicon surface crystallographic orientation and surface topography [6].In order to perform the delineation process for mc-Si material where there are multiple crystallographic orientations, it will be necessary to select the more sensitive chemical solution to defects and adjust its application conditions (time, agitation, temperature, etc.).Historically, Dash etch was the first used.It reveals dislocations in all crystallographic orientations but necessitates very long etching times [7].Sirtl reveals dislocations only on (111) surfaces [8].Secco etches defects in all orientations and exposes circular defect pits [9].Yang solution gives good defect delineation in all orientations and its etch-pits shapes (triangular, quadratic, etc.) are functions of surface orientation [10,11].Wright is considered to be the finest chemical defect delineation solution, specially for detection of induced defects by hot processing.It is acting in all orientations but its composition is more complex than Secco and Yang.Furthermore, Wright etch is less sensitive to [11,12].Because of our interest in studying the dislocations induced during HEM mc-Si growth, and due to the different crystallographic grain orientations on this material, we have chosen to develop our defect analysis process with Secco and Yang etching solutions.Effectively, these two solutions are sensitive to all kinds of defects and also to all crystallographic orientations.However, the specificity of mc-Si material will necessitate a special adjusting of Yang and Secco etching process parameters in order to have the sharpest etch pits and the cleanest surfaces for SEM analysis. EXPERIMENTAL We have used P-type (Boron-doped) mc-Si wafers with electrical resistivity about 1 Ωcm.These wafers of 10 × 10 cm 2 area were cut from ingots grown at our laboratory by the heat exchanger method (HEM) [13].The first step was thinning and polishing the as-cut mc-Si wafers in order to remove the saw damage and slurry residue on the surface.During this step, we used a polishing solution made by mixing HNO 3 /CH 3 COOH/HF with (5 : 3 : 2) concentrations.After 2 minutes of etching time, we rinsed thoroughly the mc-Si material with deionized water and dried it under a nitrogen gun.To test the Secco and Yang solutions, we cut 14 samples from an mc-Si polished wafer.These samples were referenced as shown in Table 1.We have varied mainly the etching time and agitation mode.Note that just a few minutes before each delineation process, the native SiO 2 formed at the surface was etched by immersing the samples in diluted HF (10%) solution for 30 seconds followed by rinsing with deionized water and drying in nitrogen.The mc-Si samples were then ready for the defect delineation trials and analysis. 
RESULTS AND DISCUSSION Table 1 summarizes the etching parameters that were varied during Yang and Secco delineation studies.Generally, a dipping time between 1 and 2 minutes gives good defect delineation and we clearly distinguish dislocations, grain boundaries, twins, and slip lines.Our experiments confirm the revelation of crystalline defects with the Yang solution starting from 30 seconds of dipping time.However, for 30 seconds, the dislocation pits are too small (0.5 µm) to allow any fast exploration with SEM at low magnifications.Experiments with the Yang etch showed that a time of immersion from 1 to 2 minutes without ultrasonic agitation gives the best results.The shapes of Yang-etched pits depend on the crystallographic orientation of the mc-Si grains and are mainly triangular or quadratic.After immersion in Yang solution for 2 minutes with ultrasonic agitation, etching becomes too active, leading to defect-etched pits which are too broad and irregular in dimension.To conclude, the study of the Yang solution indicates that a time of immersion from 1 to 2 minutes without ultrasonic agitation provides the best results.The same study conducted with Secco etching process showed that the sharpest defect delineation was obtained with a dipping time between 1 and 2 minutes.However, in Secco process, the ultrasonic agitation is very useful for the elimination of gas bubble artifacts and results in circular dislocation pits.Thus, the recommended process with Secco etch is from 1 to 2 minutes of dipping time with ultrasonic agitation. Figures 1 and 2 are good illustrations of Secco and Yang defect delineation processes under the optimized conditions given above.We have clearly delineated dislocations, twins, grain boundaries, and dislocation lines.We have noticed that zones close to the grain boundaries are most favorable for the revelation of strong populations of defects.The exploration of the surface indicates that zones with high dislocation densities are localized mainly close to the grain boundary zone on the level of the small grain sizes (below 1 mm).This phenomenon is illustrated by Figure 3 and it is foreseeable, since on the level of the grain boundary there is a change of the crystallographic orientation leading to a zone of stress which finds its thermodynamic equilibrium by the emission of dislocations [14].Locally, we have noted the formation of several parallel dislocation lines which are perpendicular to grain boundary (see Figure 4).This observed arrangement of parallel dislocation lines is indicator of dislocation loops created during mc-Si ingot growing process and is principally depending on cooling conditions. 
In order to compare the action of dislocation localization between Secco and Yang solutions, we carried out a Secco revelation on a sample revealed before with Yang and vice versa.For this, we initially took the Y2 sample (revealed by Yang 2 minutes) and then we added to it a Secco revelation for 1 minute with ultrasonic agitation.We observed that all initially triangular defects are transformed into circles (see Figure 5), which implies that Secco marks all the places already marked with Yang.But we remark some new zones again (nonexistent before) as small circles (inferior to 1 µm) which means that the Secco solution decorates defect zones which were not previously localized with Yang.In the same way, we carried out the opposite operation by taking the S2 sample (revealed by Secco for 2 minutes) and we applied to it the Yang etch for 1 minute.All the circular zones were truncated by the Yang action and we obtained the formation of right pit edges.However, we did not find newly revealed zones (not more than 1 µm) with the Yang solution, which makes it possible to state that Yang etch did not delineate new defected areas in opposite to Secco etch.The two preceding tests show that the sensitivity of Secco to crystalline defects is higher than that of Yang etch.This consolidates our choice of Secco for the calculation dislocation density on mc-Si materials. It is of great importance to study the distribution of crystalline defects throughout the cross-section of mc-Si bulk and the influence of the process parameters.Then, we have observed a section of mc-Si revealed with the Secco solution during 2 minutes.We choose a sample with a section corresponding to a natural cleaving plane, in order to have a very smooth surface for SEM observation.We explored the entire cross-section of this sample and we noticed some particular defect shapes due to the overlapping of several dislocations and according to various angles to the same spot (see Figure 6).However, nothing seems to indicate the presence of stacking fault defects.According to the literature [11], it will be more probable to obtain stacking faults shapes after a high-temperature treatment such as oxidation or diffusion process that the mc-Si substrate will undergo during cell fabrication.Finally, by using Secco delineation process, we have compared the dislocation density of our mc-Si to other mc-Si materials produced by major photovoltaic companies.These parameters are summarized in Table 2 and indicate that in terms of dislocation densities, our samples are of comparable quality with respect to the mc-Si materials fabricated by other companies. 
CONCLUSION
We have successfully optimized Secco and Yang etching parameters for crystalline defect delineation on mc-Si material. We have clearly distinguished dislocations, grain boundaries, twins, and dislocation lines. The shapes of Yang etch pits are triangular or quadratic according to the crystallographic orientation at the grain level. We have noticed that in the Secco etching process the dislocation pits are circular. A comparative study of these two etches proved that Secco etch is more sensitive to crystalline defects than Yang etch in the case of mc-Si surfaces. We have observed some important phenomena specific to the mc-Si HEM material, like the concentration of dislocations in the vicinity of grain boundaries and the irregular repartition of these defects from grain to grain. Cross-sectioning experiments using the Secco solution clearly revealed crystalline defects, which gives us a good tool for the analysis of crystalline defects through the bulk material. This technique will be very helpful for studying the impact of process conditions on defect propagation in mc-Si bulk material and also for performance and quality enhancement.
Figure 3: Defect-free surface nearby zones with high dislocation densities.
Figure 4: Formation of dislocation lines perpendicular to the grain boundaries.
Figure 5: Secco solution detects new defected zones of an mc-Si sample initially Yang-etched.
Figure 6: Various pit shapes due to the overlapping of several dislocation lines.
Table 1: Yang and Secco parameter effects on etched dislocation pits of mc-Si samples.
The Secco formulation is HF/K2Cr2O7/H2O, obtained by mixing 2 parts of HF with 1 part of K2Cr2O7/H2O at 0.15 M (44 g K2Cr2O7 in 1 litre of H2O). The Yang [6, 10] formulation is HF/CrO3/H2O, obtained by mixing 1 part of HF with 1 part of CrO3/H2O at 1.5 M (150 g CrO3 in 1 litre of H2O). After the Secco or Yang delineation, the samples were immediately rinsed in water and dried in nitrogen. Then, SEM observations of the treated samples were carried out.
Table 2: Comparative study of dislocation density values from other mc-Si producers.
Antimicrobial Activity of Pantothenol against Staphylococci Possessing a Prokaryotic Type II Pantothenate Kinase Pantothenol is a provitamin of pantothenic acid (vitamin B5) that is widely used in healthcare and cosmetic products. This analog of pantothenate has been shown to markedly inhibit the phosphorylation activity of the prokaryotic type II pantothenate kinase of Staphylococcus aureus, which catalyzes the first step of the coenzyme A biosynthetic pathway. Since type II enzymes are found exclusively in staphylococci, pantothenol suppresses the growth of S. aureus, S. epidermidis, and S. saprophyticus, which inhabit the skin of humans. Therefore, the addition of this provitamin to ointment and skincare products may be highly effective in preventing infections by opportunistic pathogens. Coenzyme A (CoA) functions as an acyl carrier and is an indispensable cofactor for all living cells.CoA is synthesized from pantothenate (vitamin B 5 ), cysteine, and ATP through five enzymatic steps: pantothenate kinase (CoaA in prokaryotes and PanK in eukaryotes), phosphopantothenoylcysteine synthetase (CoaB), phosphopantothenoylcysteine decarboxylase (CoaC), phosphopantetheine adenyltransferase (CoaD), and dephospho-CoA kinase (CoaE) (8).The pantothenate kinases that catalyze the phosphorylation of pantothenate are key enzymes in the CoA biosynthetic pathway, and have been divided into four groups based on their amino acid sequences, i.e., prokaryotic type I, II, and III CoaAs and eukaryotic PanK (12).Prokaryotic type I CoaA and PanK are known to be sensitive to CoASH (nonesterified CoA) and acyl-CoAs (12,19), whereas type II and III CoaAs are resistant to the end-products of the pathway (2,11).In addition, the type III CoaA requires monovalent cations, i.e., K + or NH 4 + , for its enzymatic activity (7).Thus, diversity exists amongst the structures and properties of pantothenate kinases, and these essential enzymes have become attractive drug targets for the development of novel antimicrobial agents (5,13,17).N-substituted pantothenamides (N-pentylpantothenamide and N-heptylpantothenamide) and CJ-15,801 produced by Seimatosporium sp., which have the ability to inhibit the prokaryotic type II CoaA, have been shown to effectively interfere with the growth of Staphylococcus aureus, which uses the type II CoaA in its CoA biosynthetic pathway (4,11,18,20,21). The antimicrobial activity of pantothenol, a provitamin of pantothenic acid, against lactic acid bacteria, which require pantothenic acid for their growth, was identified in the 1940s (Fig. 1A) (16).Pantothenol has recently been reported to suppress the phosphorylation activity of the prokaryotic type I CoaA from Mycobacterium tuberculosis as well as the pro-liferation of malaria by inhibiting parasite eukaryotic PanK(s) (10,15).In the present study, the potential effectiveness of pantothenol as an antimicrobial agent was investigated. 
The distribution of the homologous genes encoding the three types of CoaAs in bacteria was as follows: the type I CoaA, the genera Corynebacterium, Lactobacillus, Lactococcus, Streptococcus, Rhizobium, Escherichia, Klebsiella, Salmonella, Serratia, Shigella, Yersinia, Coxiella, Shewanella, Haemophilus, and Vibrio; the type II CoaA, the genus Staphylococcus; and the type III CoaA, the genera Clostridium, Thermotoga, Thermus, Burkholderia, Neisseria, Campylobacter, Helicobacter, Francisella, Legionella, Pseudomonas, and Xanthomonas. Thus, type I and III CoaAs are widely distributed while the type II CoaA is limited to staphylococci. Although the putative type II CoaA gene, together with the type III enzyme, is also conserved in some Bacillus species, the type II kinase of B. anthracis does not function in vivo (14). Therefore, the prokaryotic type I CoaA from Escherichia coli K-12 (EcCoaA), the type II CoaA from S. aureus MW2 (SaCoaA), and the type III CoaA from Pseudomonas putida JCM 20089 (PpCoaA) were examined in this study. The expression plasmid for EcCoaA, pET15b/bPanK, was obtained from Dr. Jackowski, St. Jude Children's Research Hospital, USA (3). The coaA genes coding for SaCoaA (MW2054) and PpCoaA (accession number AB829254) were amplified by PCR, and the resulting DNA fragments were cloned into pET-28a(+) to generate pET-Sa-coaA and pET-Pp-coaA. E. coli BL21 (DE3) cells transformed with pET15b/bPanK, pET-Sa-coaA, or pET-Pp-coaA were grown in LB broth containing IPTG, and the recombinant enzymes were prepared using a nickel-chelating resin (Table S1 and Fig. S1). The inhibitory effect of pantothenol on pantothenate kinase activity was determined using d-[14C]pantothenate and ATP as substrates in the presence of d-pantothenol at concentrations of 0.5 to 10 mM (Fig. 1B). Pantothenol reduced the activities of the type I and II CoaAs, with a more potent effect being observed on the type II CoaA than on the type I CoaA, with an IC50 of 1.68 mM for the type II SaCoaA. In the presence of 10 mM pantothenol, the activities of SaCoaA and EcCoaA were decreased to 14.5% and 42.1% of their maximal activities, respectively. Conversely, the type III CoaA (PpCoaA) was not inhibited by pantothenol. The pantothenol treatment significantly inhibited the phosphorylation activity of the prokaryotic type II CoaA. Although this effect was observed at a high IC50 in the millimolar range, humans have the ability to convert this provitamin to vitamin B5, pantothenic acid (1), and no apparent toxicity has been reported for pantothenol, even at oral dosage levels of 8 to 10 g daily (6). Hence, it is possible for pantothenol to act as an antimicrobial agent. The MIC values of pantothenol against bacteria possessing the prokaryotic type I, II, or III CoaA were determined by a broth-microdilution method. Bacto-tryptone (Becton, Dickinson, and Company) was employed as a medium for the estimation of bacterial growth, as its low pantothenic acid content (typically ca. 5.3 µg g⁻¹) was unlikely to compete with pantothenol. The bacterial strains were grown to the mid-log phase in 1% (w/v) bacto-tryptone at 30°C, diluted in the same medium, and then added to medium containing 0 to 32 mM pantothenol at a density of 5 × 10⁵ colony-forming units mL⁻¹ in each well of a microplate. After 24 h of cultivation at 30°C, the turbidity at 600 nm was measured. There are many isolates, including hospital- and community-acquired methicillin-resistant Staphylococcus aureus among S.
aureus, and the pantothenate kinase (SaCoaA) from the strain MW2 shares almost the same amino acid sequence with the other strains of S. aureus (>99% identity). Hence, S. aureus subsp. aureus type strain NBRC 100910 was used here instead of the MW2 strain. Furthermore, the strains S. epidermidis NBRC 12993, S. epidermidis type strain NBRC 100911, and S. saprophyticus subsp. saprophyticus type strain NBRC 102446 were also examined. Although the amino acid sequence of the CoaA from S. epidermidis type strain NBRC 100911 was not available, the sequences of the CoaAs from S. epidermidis NBRC 12993 and S. saprophyticus type strain NBRC 102446 showed 75% and 68% identity to that from the MW2 strain. The growth of staphylococci possessing the prokaryotic type II CoaA was effectively suppressed by the addition of 32 mM pantothenol, showing 71.9% inhibition in S. aureus subsp. aureus NBRC 100910, 85.7% in S. epidermidis NBRC 12993, 98.0% in S. epidermidis NBRC 100911, and 99.7% in S. saprophyticus subsp. saprophyticus NBRC 102446 (Table 1). This result was consistent with the in vitro experiment using recombinant SaCoaA (Fig. 1B). As shown in Table 1 and Fig. 2, the MIC values for S. epidermidis NBRC 100911 and S. saprophyticus subsp. saprophyticus NBRC 102446 were estimated to be 4 and 2 mM, respectively, although the growth of S. aureus subsp. aureus NBRC 100910 and S. epidermidis NBRC 12993 was not completely suppressed even in the presence of 32 mM. The inhibitory effect of pantothenol was abrogated by the addition of 1 µM pantothenate, and the IC50 values for S. epidermidis NBRC 100911 and S. saprophyticus NBRC 102446 shifted from 0.776 and 0.641 mM to 4.82 and 5.20 mM, respectively (Fig. 2C and D). This result clearly indicated that pantothenol also competed with pantothenate for the type II enzyme activities in vivo. On the other hand, pantothenol had little effect on the growth of E. coli and P. putida (Table 1), although the IC50 of the E. coli enzyme was calculated to be 6.06 mM (Fig. 1B). Thus, the inhibitory effect of pantothenol on bacterial growth depended on the type of CoaA that the bacteria employed in their CoA biosynthetic pathways. The expression of the prokaryotic type II CoaA is known to be specific to the genus Staphylococcus (4), and pantothenol was found in this study to be effective as a growth inhibitor of staphylococci using type II enzymes. Since the inhibitory effect was reduced in the presence of a small amount of pantothenate (Fig. 2C and D), the oral administration of pantothenol may not have a direct effect on bacterial growth because pantothenate, which is derived from food and produced by enterobacteria, is abundant in the body. However, since staphylococci, including opportunistic pathogens such as S. aureus, S. epidermidis, and S. saprophyticus, which lead to impetigo, hospital-associated bloodstream infections, and urinary tract infections, populate the skin and mucous membranes of humans and other mammals, especially in moist areas such as the anterior nares, axillae, and perineal areas (9,22), the use of ointment and skincare products containing pantothenol may be highly effective at preventing infections by staphylococci.
Fig. 1. Inhibitory effect of pantothenol on three types of CoaAs. (A) Chemical structures of pantothenic acid and pantothenol. (B) Inhibition of EcCoaA (circle), SaCoaA (triangle), and PpCoaA (square) activities by pantothenol. The assays were performed three times independently, and the results are indicated as the mean ± SD.
Table 1.
Antimicrobial activity of pantothenol. a The strains listed in the table were obtained from the NBRC (NITE Biological Resource Center, Japan) and the JCM (Japan Collection of Microorganisms). b The values indicate the inhibition of growth in the presence of 32 mM pantothenol relative to growth without pantothenol (n=3, mean ± SD).
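IC50 values such as those quoted above are typically obtained by fitting a dose-response curve to the measured residual kinase activities. The following is a minimal sketch of such a fit, not the authors' actual analysis: the four-parameter log-logistic (Hill) model and the activity values used below are illustrative assumptions; only the concentration range (0.5-10 mM) follows the text.

import numpy as np
from scipy.optimize import curve_fit

def log_logistic(log_conc, top, bottom, log_ic50, slope):
    """Four-parameter log-logistic (Hill) dose-response curve in log10 concentration."""
    return bottom + (top - bottom) / (1.0 + 10 ** (slope * (log_conc - log_ic50)))

# Hypothetical residual kinase activities (% of control) at the assay concentrations.
conc_mM = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
activity = np.array([85.0, 62.0, 45.0, 25.0, 14.5])

# Fit the curve and report the estimated IC50 (initial guesses: 100% top, 0% bottom, 2 mM, slope 1).
params, _ = curve_fit(log_logistic, np.log10(conc_mM), activity,
                      p0=[100.0, 0.0, np.log10(2.0), 1.0])
top, bottom, log_ic50, slope = params
print(f"Estimated IC50 = {10 ** log_ic50:.2f} mM (Hill slope = {slope:.2f})")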
2018-04-03T00:11:02.949Z
2014-04-22T00:00:00.000
{ "year": 2014, "sha1": "2a17efc68c4ad1d8482ca99694d44aec132d7528", "oa_license": "CCBY", "oa_url": "https://www.jstage.jst.go.jp/article/jsme2/29/2/29_ME13178/_pdf", "oa_status": "BRONZE", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2a17efc68c4ad1d8482ca99694d44aec132d7528", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
268327055
pes2o/s2orc
v3-fos-license
Patron Client in Political Corruption in Circles South Sulawesi BUMD Purpose – The aim of this research is to understand the forms and processes of political corruption within BUMDs based on patron-client relationships in placing people in strategic positions in the company. Method - This research uses secondary data, namely the results of the anti-corruption assessment carried out by TI Indonesia, followed by a literature study to draw and verify conclusions. Result - The political corruption nested within BUMDs in South Sulawesi has had an impact on unsatisfactory performance and achievements, with continuous losses being found; this is the impact of a recruitment system for BUMD directors that is based only on the politics of retaliation after the regional elections. Implication -

Introduction

The basic question that the public always asks regarding the image of Regional Owned Enterprises (BUMD) (Muryanto, 2016) is 'why can't companies managed by the government, or those with red plates, be better in terms of service than companies under the auspices of the private sector?'. In fact, as a company that always receives subsidies from the regional government (Pemda), the owner holding the highest authority, a BUMD should be able to grow and support regional finances as a source of local original income (PAD).

Not only that: if a BUMD is managed well, it may even be possible for it to control strategic sectors of the economy, including supporting the success of regional development, because subsidy support and government capital participation are disbursed every budget year. Business and political relations in government circles are the reason why the quality of public services and the management of regional potential through BUMDs do not run as expected (Saleh & Perdana: 2009).

For example, in 2022 the South Sulawesi Provincial Government (Pemprov Sulsel) disbursed a budget for capital investment of IDR 52 billion for 4 BUMDs under its auspices. The 4 BUMDs include PT Sulawesi Citra Indonesia (SCI) or Perseroda Sulsel, Bank Sulselbar, Jamkrida and Perusda Agribisnis.

The tens of billions in funds disbursed by the South Sulawesi Provincial Government in the hope of increasing regional income have turned out to be far from expectations. At the beginning of the administration of the Acting Governor of South Sulawesi, Bahtiar Baharuddin (Sul-Sel, 2023), who replaced the previous Governor Andi Sudirman Sulaiman, irregularities were sensed in BUMDs in South Sulawesi, and an audit of the BUMDs belonging to the South Sulawesi Provincial Government was ordered. This is because the revenue targets for a number of BUMDs are very low, one of which is PT SCI (Aprita & Qosim, 2022) or Perseroda; the large number of non-performing assets is one of the reasons. According to Acting Governor Bahtiar, the audit was carried out to get a correct diagnosis of the disease currently nesting in the bodies of South Sulawesi's regional enterprises.

According to data from the Economic and Development Administration Bureau (Ekbang) of the Regional Secretariat (Setda) of South Sulawesi Province, out of a revenue target of more than IDR 15 billion in 2023, Perseroda had gained only IDR 3 billion in revenue by the end of that year. The three Perseroda targeted are PT Bank Sulselbar, PT Jamkrida and PT SCI. None of the three reached the target.
A number of problems faced by BUMDs are still being discovered, such as unsatisfactory performance in their contribution to increasing PAD, and even continuous losses, so that they still have to be subsidized by the government. This also includes poor governance and the information presented via company websites that can be accessed by the public (Mahpudin, 2021), which opens up greater opportunities for corrupt practices to occur in BUMDs (Kriyantono et al., 2019).

Based on the condition of South Sulawesi BUMDs, which have so many diseases nested within them, the author was prompted to research the forms of political corruption within South Sulawesi BUMDs by conducting a literature study of Patron-Client Theory; it is hoped that this article can become a reference and evaluation material for improving the management of BUMDs in South Sulawesi so that they become healthier (RAMADHAN & NPM, 2017).

Method

This research uses a secondary research method, in which the analysis refers to data in the form of existing documents and materials that serve as a basis for conducting the research. Such documents can be obtained through open data provided in public libraries, websites or closed government data (Nunung: 2020). The secondary data used in this research refer to the results of assessments held by Transparency International (TI) to assess whether BUMD anti-corruption policies and programs in South Sulawesi have fulfilled the elements of transparency to the public or not. This research begins by identifying topics available from the TI research results, and then continues with identifying sources of relevant data and information used in this research, including looking for additional information through reports in the mass media, books and other research results. The data are then combined and analyzed to answer the questions or hypotheses in this research.

Discussion of the First Problem Formulation

Transparency International (TI) Indonesia, in the period December 2022 to January 2023, completed the Transparency in Corporate Reporting assessment of 47 regionally owned companies (BUMD) in 5 provinces. South Sulawesi (Sulsel) is one of the areas targeted by TI Indonesia, in collaboration with YASMIB.

Transparency in Corporate Reporting (TRAC) itself is an international anti-corruption assessment instrument developed by TI to assess the transparency of company anti-corruption policies and programs. TI Indonesia has carried out TRAC assessments since 2017, assessing the 100 largest companies in Indonesia, BUMN, electricity, palm oil, PLTU and BUMD. In this assessment, TI Indonesia collected and analyzed data and information regarding the implementation of anti-corruption programs published by companies on the internet and in other information sources that can be accessed by the public. Meanwhile, the BUMDs that are the subject of the research conducted by TI Indonesia are business entities whose capital is wholly or largely owned by the region (Law No. 23 of 2015 and PP 54 of 2017).

Regionally Owned Enterprises (BUMD) are business entities established by regional governments whose capital is largely or entirely owned by the regional government. Based on their target category, BUMDs consist of 2 (two) groups, namely regional companies that serve the public interest and regional companies for the purpose of increasing regional revenues.
Still concerning Law No. 23 of 2015 and PP 54 of 2017, the aim of establishing BUMDs is stated to be to carry out regional development through services to the community, providing public benefits and increasing regional income. The characteristics of BUMDs as regulated in the Law are:
(1) The government holds the rights to all wealth and business.
(2) The government acts as a shareholder in the company's capital.
(3) The government has the authority and power to determine company policies.
(4) Supervision is carried out by complementary state authorities.
(5) They serve the interests of the general public, apart from seeking profit.
(6) They act as an economic stabilizer in order to realize the people's welfare.
(7) They are a source of state and regional income (original regional income).
(8) All or most of the capital belongs to the regional government and is separate wealth.
(9) The capital can be in the form of shares or bonds for companies that go public.
(10) They can collect funds from other parties, both banks and non-banks.
(11) The Board of Directors is fully responsible for the BUMD and represents the BUMD in court.

Understanding Corruption

Corruption is a very dangerous disease if it attacks and is allowed to nest within government structures or business entities managed by local governments. Symptoms of corruption can be found anywhere in society, and history shows that acts of corruption occur in almost every country. So it is not an exaggeration that, over time, the definition of corruption has developed and changed according to changing times.

The word corruption itself comes from the Latin corruptio or corruptus. There are various definitions of corruption, which can be interpreted as something that can damage and destroy. Apart from that, corruption is also defined as rottenness, something bad, depraved and dishonest behavior, being bribable and deviant. In English it is called corruption, while via Dutch it entered the Indonesian vocabulary as korupsi. Several definitions of corruption can be categorized as follows:
a. Corruption according to the Big Indonesian Dictionary (KBBI) is defined as a form of misappropriation and misuse of state money, whether within companies, organizations, foundations or various other forms of public organizations, carried out for personal or other people's gain.
b. The World Bank in 2000 provided another definition of corruption, used as an international standard in formulating corruption, namely "corruption is the abuse of public power for private gain".
c. Corruption according to Robert Klitgaard is the abuse of position for personal gain. This position can be a public position, or any position of power, including in the private sector, non-profit organizations, and even lecturers on campus. According to Klitgaard, corruption takes the form of bribery, blackmail and all kinds of fraud.
d. Corruption according to Transparency International (TI) is defined as the actions of public officials, both politicians and civil servants, who illegally and unfairly enrich themselves by abusing the power that society has entrusted to them. TI states that any act of corruption, whatever form and type it takes, can hurt the poor, because corruption, which leads to the misuse of resources and power, not only harms the private sector but can also hinder development and is very detrimental. Corrupt behavior among administrators thus becomes the government's biggest challenge, as something that fundamentally deviates from public policy.
e. From a legal perspective, according to Law Number 20 of 2001, corruption is the act of a person or group of people who intentionally and unlawfully enrich themselves or other people or companies in a way that can harm state finances or the national economy.
f. Meanwhile, George Junus Aditjodro believes that corrupt practices cannot be carried out alone; according to him, this crime requires the help or role of other people so that budget or policy misuse can take place. Quoted in a journal written by Budi Wahyu Nugroho entitled Sociology of Corruption, there are three main principles of social capital, namely trust, networking or work networks, and reciprocity ('mutual benefit'). According to George, these three principles of social capital can build social relations that give rise to corrupt activities in an institution or organization. Corruption actors or perpetrators give trust to other actors (peers/colleagues) as a network of people who have the same relationships and interests. Both parties will gain their respective benefits based on the reciprocal relationship that exists.
The Concept of the Formation of Corrupt Behavior

Corrupt practices in many cases found in government institutions cannot be separated from networks that bring together corrupt actors who have the three forms of social capital mentioned above. Pierre Bourdieu and Robert Putnam pointed out the negative side of such social capital. Both describe exclusive social networks. According to them, not everyone can enter such a social network, and what the network does will only benefit its own group, while other parties outside the group will be harmed. The formation of this network can be explained through Pierre Bourdieu's notion of habitus, which is expressed through the formula below:

(Habitus x Capital) + Domain = Practice

Habitus can be interpreted as the cognitive disposition of the perpetrator of corruption, especially the main perpetrator, while capital, which consists of four forms, namely social capital, economic capital, cultural capital and symbolic capital, is the factor that opens gaps or opportunities for someone to do corrupt things. Social capital can be positioned as the network of corrupt actors, such as other actors who agree to be involved in corrupt activities. Economic capital is the financial ability of corrupt actors to bribe other parties to carry out their actions. Cultural capital is a cultural basis that considers acts of corruption to be normal, while symbolic capital is the social status of the perpetrator of corruption, which is used to legitimize his actions.

The domain is the place where the game (of capital and interests) meets or takes place, while practice is whether the corruption is successful or not. A habitus of corruption that has already been 'infected' by fraud can only be overcome by creating a new habitus that is clean and far from potential corrupt behavior.

Corrupt Practices in South Sulawesi BUMD Circles

Corruption cases involving BUMN have occurred several times. Statistically, there are around 53 cases of corruption recorded within BUMN (Nibraska Alam). Many corruption cases in BUMN are dominated by bribery. Bribery that occurs in BUMN is essentially motivated by the weak implementation of good corporate governance. Iwan Nuryan in his research concluded that the implementation of Good Corporate Governance in BUMN is still low. This shows that the implementation of GCG has not actually become a company culture, thereby opening up opportunities for fraud.
Apart from bribery, the type of corruption that often involves BUMDs is political corruption. The term political corruption in the classical concept, as quoted in the journal written by Fransiska Adelia entitled 'Forms of Political Corruption', is explained as follows: political corruption is interpreted as a problematic relationship between sources of power and the moral rights of those in power; in other words, political corruption is a result of the inevitable struggle for power. Gibson himself derived a definition of political corruption from the results of research on 279 students at various university levels in Montreal, Canada, using a behavioral approach. In the research, there were 9 types of dishonesty that were found to differ in practice. The indicators include patronage, vote buying, pork barreling, bribery, kickbacks, conflicts of interest, nepotism, influence peddling, and campaign financing. Gibson found that 8 of the 9 practices evaluated were recognized and qualified as corrupt by respondents. However, in the case of campaign financing, respondents made an exception as a form of political corruption.

From the results of this research, Gibson draws a conclusion regarding the definition of corruption as a specific state-society relationship and an individual characteristic which can take the form of crime. Every person with the status of civil servant, functionary, bureaucrat or politician, who represents the state and occupies a position in government, has the authority to control existing resources. Therefore, it can be said that political corruption is a deviation that is no longer in accordance with legal and rational moral values and is out of step with the principles adhered to by modern states. This condition cannot be separated from the problem of weak accountability between the government and those it commands, in this case the people or society.

a. Patron-Client Relations

There are various patterns or forms of relationships in society. The birth of patron-client relations in society, according to a number of thinkers, is a consequence of the contradiction between two classes or groups of society, each of which has interests. These relationships occur and are intertwined in society on the basis of interests. The relationship continues and will not stop as long as the interests are still well accommodated by the related parties.

The conflict theories that stand out in social science, as explained by a number of theorists, include C. Geertz's conflict theory about primordialism, second Karl Marx's conflict theory about class conflict, and third James Scott's conflict theory about the patron-client, one of the relationship patterns commonly known as "patronage".
The term patron is explained by Kausar and Komar Zaman (2011) in a journal entitled Analysis of Patron-Client Relationships: the term patron comes from a Spanish expression which etymologically means "someone who has power, status, authority and influence", while client means "subordinate", or the person who is ordered and directed. Furthermore, the patron-client relationship pattern is an alliance of two community groups or individuals who are not equal, in terms of status, power or income, thus placing the client in a lower position (inferior) and the patron in a higher position (superior). A patron can also be defined as a person who is in a position to support, in other words to provide assistance to, his clients. James Scott explains that patron-client interaction is a special relationship between two people who are bound to each other, and that it is dichotomous and hierarchical, between the "higher" (patron) and the "lower" (client). James Scott (1981) states that patron-client interactions involve instrumental friendships in which an individual with a higher socio-economic status (the patron) uses his influence and resources to provide protection and/or benefits for someone with a lower status (the client).

Scott, as cited in the journal Patron-Client Relations of Cat Rice Traders in Yogyakarta City written by Sri Emy Yuli Suprihatin, states that a person or group with a higher socio-economic status acts as a patron, with the influence they have being able to provide protection and various other benefits to a person or group with a lower socio-economic status and income. The latter group acts as a client, where, as a financially protected party, they are willing to return the favor in the form of comprehensive support which includes personal service to the patron.

One of the dimensions of the TRAC assessment of South Sulawesi BUMD circles that is highlighted concerns the appointment of leaders, political donations and CSR (Corporate Social Responsibility) program policies of BUMDs in South Sulawesi. As a result, the five South Sulawesi BUMDs assessed stated that they did not have rules and policies regarding political donations or prohibiting politicians from serving as commissioners/directors. In South Sulawesi, the majority of BUMDs were found to involve Politically Exposed Persons (PEPs), and there were even 3 individuals holding concurrent positions in other agencies.
In a report written by TI Indonesia in 2017 entitled 'Corruption, Patronage and the Anti-Corruption Movement', it was explained that the problem of political corruption is related to political funding during the five-yearly democratic contest to elect regional head or leader candidates. A number of corruption cases involving political elites and government officials are not solely aimed at enriching themselves and their relatives (nuclear family); more than that, the corruption is carried out to support campaign financing and political activities, which cost quite a lot. Kuskridho Ambardi (2012) calls this political cartelization, carried out by political parties to ensure the survival of the group. Their survival is determined by the common interest of maintaining various existing financial sources, especially those that come from the government: not official government money allocated to political parties, but government money obtained by parties through rent-seeking. This is in line with what James Scott (1981) explained, who mentions a number of characteristics of patron-client relationships which can also be found in political relations in South Sulawesi BUMDs, namely:
1. There is a relationship of reciprocity, namely a relationship that is mutually beneficial, of giving and receiving, even though the levels are not equal for each party.
2. Personal relationships, which are direct and intensive relationships between patrons and clients. Their relationship includes feelings that are usually found in private relationships, so that the relationship that occurs is not solely motivated by profit.
The placement of officials on the boards of directors of South Sulawesi BUMDs cannot be separated from the patron-client relationship with regional heads as the owners of regional companies. An example is the massive reshuffling of BUMD management chairs after the inauguration of regional heads. Many people who are considered to have sweated during the Pilkada contestation and helped the elected regional head candidates win are allocated positions on the BUMD boards of directors, reflecting loyalty relationships (loyalty or obedience). The politics of retribution has also undermined many South Sulawesi BUMD boards of directors. Many members of the successful campaign team during the Pilkada were accommodated in Regional Companies (Perusda) as a form of political retribution. The appointment of officials to seats on the boards of directors of South Sulawesi BUMDs is not based on their competence, but is a form of remuneration in the form of sharing seats.
Discussion of the Second Problem Formulation

The Transparency in Corporate Reporting (TRAC) assessment of 5 BUMDs in South Sulawesi carried out by TI Indonesia addressed various problems that occurred within BUMDs, such as poor due diligence of BUMD leaders, ongoing losses, and the discovery of corrupt practices and concurrent positions at BUMDs. Based on the TRAC BUMD score results for the 5 South Sulawesi BUMDs as depicted in the table above, 6 dimensions of the South Sulawesi TRAC BUMD assessment were studied. The first is anti-corruption commitment, namely the company's seriousness in implementing anti-corruption programs. Second, the scope of the company's anti-corruption policy, including, when a company has an anti-corruption policy, the extent to which the policy regulates matters only within the company or also for parties related to the company such as intermediaries or agents in the procurement of goods and services. Third, disclosure of internal policies, namely ensuring whether the company has anti-corruption policies or not, such as rules regarding gratuities, and whether there are practices of nepotism, patronage, influence trading, and so on. The fourth issue is the appointment of leaders, the giving of political donations, and CSR, which are aspects that are closely related to the company's political involvement, because BUMDs are often held hostage by political interests, especially in filling the positions of directors or commissioners.

Therefore, in the research carried out by TI Indonesia, it was also checked whether there were regulations regarding due diligence for directors and commissioners and regarding a revolving door or cooling-off period, to overcome multiple positions and being held hostage by political interests. The CSR issue is also prone to fraud, so it is important to check the transparency of its distribution. Fifth, the violation reporting system (WBS): it is important to check whether the BUMD has a WBS or not. Sixth, anti-corruption training and monitoring programs, and whether the BUMDs have them or not.

Across these six dimensions, the average TRAC score of the five BUMDs in South Sulawesi is only around 1.58: PT Gowa Makassar Tourism Development (GMTD) with a score of 3.13, PT BPD Bank Sulselbar with a score of 2.29, PT Kawasan Industri Makassar (KIMA) with a score of 2.50, PT SCI Perseroda with a score of 0.00, and PT Jamkrida Sulsel with a score of 0.00.
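The reported average can be reproduced directly from the five individual TRAC scores; the short calculation below, with the scores taken from the text, confirms the figure of roughly 1.58.

# Average TRAC score of the five assessed South Sulawesi BUMDs (scores as stated in the text).
trac_scores = {
    "PT GMTD": 3.13,
    "PT BPD Bank Sulselbar": 2.29,
    "PT KIMA": 2.50,
    "PT SCI Perseroda": 0.00,
    "PT Jamkrida Sulsel": 0.00,
}
average = sum(trac_scores.values()) / len(trac_scores)
print(f"Average TRAC score: {average:.2f}")  # -> 1.58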
The TRAC assessment of five BUMDs in South Sulawesi illustrates the lack of anti-corruption commitment of the leaders of South Sulawesi BUMDs. It is known that only two BUMDs in South Sulawesi have a code of conduct for their directors and employees, but there is no code of conduct that regulates external parties. It was also found that BUMDs in South Sulawesi do not have regulations regarding influence trading, revolving doors, cooling-off periods or transparency in the procurement of goods and services (PBJ), and the absence of these regulations leaves room for the practices of nepotism, favoritism, clientelism and patronage. In fact, of the five BUMDs owned by the South Sulawesi Provincial Government, only 1 has received ISO 37001:2016 certification, namely PT BPD Bank Sulselbar. Anti-Bribery Management System (SMAP) certification (ISO 37001) itself is a guide to assist companies in building, implementing and continuously improving compliance programs, or SMAP, with the aim of identifying, preventing and detecting bribery attempts. ISO 37001:2016 certification is important for every BUMD, considering the high number of criminal acts of corruption (TPK) handled by the Corruption Eradication Commission during the 2004-2019 period, a trend that has even spread to BUMD agencies in various regions. According to the Indonesian Corruption Eradication Commission report for the 2004-2019 period, corruption crimes by agency occurred mostly in ministries and government agencies with 365 cases (44.2%), followed by district or city regional governments with 155 cases (18.8%), thirdly provincial-level regional governments with 139 cases (16.8%), and in fourth place BUMN/D with 73 cases (8.8%).

Meanwhile, of the cases handled by the Indonesian Corruption Eradication Commission from 2004 to March 2021, 93 of the 1,140 recorded cases, or 8.2%, involved suspects from the ranks of BUMD directors. It is not surprising to see that the health condition of BUMDs is in line with the cases being handled by the Indonesian Corruption Eradication Commission. Therefore, there is a need to increase the competence to manage BUMDs in a more professional manner, including more professional recruitment when selecting and placing commissioners, directors and the SPI.

Bribery itself can be active, with various actions such as offering, promising and giving something, while passive bribery is when someone receives or accepts something in return. J. Noonan, as cited in Fransiska (2018), says that bribery is a secretive and irresponsible exchange. Bribery is always carried out through various strategies depending on where the exchange is carried out; therefore, the differences that occur between countries regarding bribery are more quantitative than structural. It is therefore very important for a company or BUMD to have an anti-bribery management system through ISO 37001 SMAP 2016 to prevent, detect and handle the risk of bribery. It includes a range of measures and controls that represent good global anti-bribery practice.
CLOSING

Based on the results of the Transparency in Corporate Reporting (TRAC) assessment of 5 BUMDs in South Sulawesi carried out by TI Indonesia, this research concludes that:
1. A number of problems exist within BUMDs, both in terms of finances, which cannot contribute optimally to regional PAD, and in terms of services, which have not provided maximum service to the community; many of these problems are influenced by the recruitment and placement of officials on the boards of directors of Perusda, which show many indications of political corruption, or the politics of retribution toward those who helped the elected governor win the regional elections.
2. The politics of retribution is a result of the patron-client relationship that exists between the regional head (Governor) and the campaign success team, who are deemed to have contributed to winning the South Sulawesi gubernatorial election (Pilgub).
3. It was found that political corruption in BUMD circles in South Sulawesi can take the form of nepotism, or what is also commonly referred to as patronage, to help relatives and family and people who helped finance political activities during the gubernatorial election.

The advice that can be given through this research is to improve the bodies of BUMDs in South Sulawesi by using this research as a comparative or alternative reference in improving the existence of BUMDs so that there is clean and good governance.

THANK-YOU NOTE

The report on the results of research conducted by Swadaya Mitra Bangsa (YASMIB) Sulawesi in collaboration with Transparency International Indonesia (TI Indonesia), regarding the implementation of anti-corruption policies in Regional Owned Enterprises (BUMD) in South Sulawesi, served as a reference and as secondary data in this research. Regarding the open data that the author obtained, the author expresses his gratitude that this research could proceed as it should.

The author also expresses the same gratitude to Mr. La Ode Ismail Djabharu, a good colleague, for helping to correct this journal article very well in the midst of his busy schedule. There are many shortcomings in this article which perhaps in the future could be developed much better by other authors with regard to the topic raised in relation to the Sociology of Corruption. We hope that at least some of the findings and benefits from the studies in this article can provide benefits for the future development of science.
As an example, when Nurdin Abdullah was elected Governor of South Sulawesi in the 2018 gubernatorial election contest, the placement of Taufik Fachruddin as Main Director of the South Sulawesi Regional Company (Perusda) attracted the spotlight. At the hearing of the South Sulawesi DPRD's Questionnaire Rights Committee on Monday (29/7) 2019, Taufik himself admitted that he was appointed directly by Governor Nurdin Abdullah, who is also his brother-in-law, and that he was part of the winning team of Nurdin Abdullah-Andi Sudirman Sulaiman during the 2018 gubernatorial election. The chairman of the questionnaire committee at the time, Kadir Halid, questioned the appointment of Taufik Fachruddin as President Director of Perusda, because based on the rules contained in Presidential Decree 200 this position cannot be occupied by someone who has a family relationship with the regional head. (IDN Times: 2019)

Table 1. South Sulawesi Provincial Government Shares and BUMD TRAC Scores. Source: TI Indonesia
2024-03-12T16:10:15.260Z
2024-02-28T00:00:00.000
{ "year": 2024, "sha1": "784993a65a21f67f1408501f6c66d6a5e96872ec", "oa_license": "CCBYSA", "oa_url": "https://ejournal.iainpalopo.ac.id/index.php/alamwal/article/download/4897/2757", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3056b27f3b660a7c614f8a5b05650ecd8b4906f7", "s2fieldsofstudy": [ "Political Science", "Business" ], "extfieldsofstudy": [] }
211133559
pes2o/s2orc
v3-fos-license
Near Real-Time Monitoring of Clinical Events Detected in Swine Herds in Northeastern Spain Novel techniques of data mining and time series analyses allow the development of new methods to analyze information relating to the health status of the swine population in near real-time. A swine health monitoring system based on the reporting of clinical events detected at farm level has been in operation in Northeastern Spain since 2012. This initiative was supported by swine stakeholders and veterinary practitioners of the Catalonia, Aragon, and Navarra regions. The system aims to evidence the occurrence of endemic diseases in near real-time by gathering data from practitioners that visited swine farms in these regions. Practitioners volunteered to report data on clinical events detected during their visits using a web application. The system allowed collection, transfer and storage of data on different clinical signs, analysis, and modeling of the diverse clinical events detected, and provision of reproducible reports with updated results. The information enables the industry to quantify the occurrence of endemic diseases on swine farms, better recognize their spatiotemporal distribution, determine factors that influence their presence and take more efficient prevention and control measures at region, county, and farm level. This study assesses the functionality of this monitoring tool by evaluating the target population coverage, the spatiotemporal patterns of clinical signs and presumptive diagnoses reported by practitioners over more than 6 years, and describes the information provided by this system in near real-time. Between January 2012 and March 2018, the system achieved a coverage of 33 of the 62 existing counties in the three study regions. Twenty-five percent of the target swine population farms reported one or more clinical events to the system. During the study period 10,654 clinical events comprising 14,971 clinical signs from 1,693 farms were reported. The most frequent clinical signs detected in these farms were respiratory, followed by digestive, neurological, locomotor, reproductive, and dermatological signs. Respiratory disorders were mainly associated with microorganisms of the porcine respiratory disease complex. Digestive signs were mainly related to colibacilosis and clostridiosis, neurological signs to Glässer's disease and streptococcosis, reproductive signs to PRRS, locomotor to streptococcosis and Glässer's disease, and dermatological signs to exudative epidermitis. INTRODUCTION The prevention and control of diseases are essential to ensure efficient and sustainable swine production. Getting updated information on the health status of the target swine population in near real-time can facilitate the implementation of efficient measures by swine stakeholders, veterinary practitioners, and government. Innovative surveillance methods based on the analyses of various types of data, which may serve as indirect health indicators, are under development (1)(2)(3). The ability to collect data in a cost-effective and timely manner from a wide range of sources, the use of data mining techniques and time series analyses, and the possibility of generating dynamic reproducible reports, has led to the development of new ways of conducting surveillance in near real-time (4,5). In recent years, the Spanish swine sector has grown significantly, with over 50% of the pig herds concentrated in Catalonia and Aragon (regions located in the North East of the country). 
In those areas, the number of sows in large-scale operations has increased, and an important proportion of facilities are part of integrated industries with highly specialized farrowing, post-weaning, and finishing sites (6). In this context of swine production, it is essential to maintain a good sanitary status. The Porcine Sanitation Group of Lleida, Spain (GSP) is a non-profit association that brings together pig owners, independent breeders, and companies associated with the swine sector in Northeastern Spain. The GSP aims to improve swine health on farms and collaborates closely with the official animal health authorities, carrying out actions related to disease surveillance, prevention, and control. In 2012, the GSP decided to set up a near real-time monitoring system in Aragon, Catalonia and Navarra to gather data on clinical events detected by practitioners. The GSP hypothesized that, by monitoring, targeting, and reporting clinical signs and presumptive diagnoses, it would be possible to reveal in near real-time the occurrence of endemic diseases that are not notifiable. This information might help assess the spatiotemporal distribution of such diseases in these populations, and identify subpopulations at high risk and factors that influence disease presence. Practitioners and swine stakeholders would benefit from this information to plan and take more efficient control measures. It is important to highlight that the initial intention of this monitoring tool was not associated with a pre-defined control plan against a specific disease. The main aim of the tool was to gather data from swine herds in near real-time and provide accessible and regularly updated information to veterinary practitioners and swine stakeholders. The system aimed to visually track the spatiotemporal distribution and spread of endemic diseases and support the decision of where and when actions were necessary. Moreover, the system aimed to enhance communication and cooperation within the swine sector in Northeastern Spain. This work aims to evaluate the functionality of this system developed to monitor the frequency of endemic diseases in the swine population at region and county level, and discusses the advantages and limitations related to its implementation. Abbreviations: GSP, Porcine Sanitation Group; app, web application; REGA, national official farm identifier; APP, Actinobacillus pleuropneumoniae infection or porcine pleuropneumonia; PRRS, porcine reproductive and respiratory syndrome.

MATERIALS AND METHODS

To illustrate how the GSP monitoring system operates, we analyzed the data of clinical events reported voluntarily by veterinary clinicians from swine farms of the Catalonia, Aragon, and Navarra regions (Northeastern Spain) between January 2012 and March 2018.

Development of a Web Application to Report Clinical Events Detected in Swine Farms

The researchers and technicians of GSP, in collaboration with many swine stakeholders and veterinary practitioners, developed a web application (app) to collect and store data on clinical events detected by veterinarians during their visits to farms. Before launching the system, all the practitioners and representatives of the swine industry of this zone were convened to a recruitment meeting. Afterwards, the participants were convened to a meeting twice a year to promote their continuous participation. The veterinarians that participated worked for large integrated companies as well as small individual farms.
Several meetings with representatives of the swine sector and veterinary practitioners took place to define and agree on what data fields to include in the app, which could be used from a desktop computer, smartphone, tablet, or other mobile device. Data from farms were supplemented with diagnostic test results if samples had been submitted to the official laboratory. A program was created to analyze the data automatically and report the health status of the swine population to the veterinary practitioners that participated. This app is currently accessible using a user code and password through the link: http://www.gsplleida.net/es/content/app-del-gsp. Figure 1 shows the app interface with the fields to be filled out by a user detecting a clinical outbreak on a pig farm.

Data Source, Types and Preparation

Data were mainly sourced from veterinarians who routinely visited the pig farms. If a veterinarian detected pigs with clinical signs during a visit to a farm, he/she registered the following variables in the app for each clinical event at farm level: severity of the clinical event, date of the visit, official identification of the farm, company to which the farm belonged, location of the farm, type of animal (i.e., sows or pigs), age category affected, body system affected, lesions observed during necropsy (if applicable), vaccines applied, and presumptive diagnosis of the disease. The veterinarian identified the affected body system according to the clinical signs observed in the swine, distinguishing between the respiratory, digestive, neurological, locomotor, dermatological, and reproductive systems. In the event of detecting multiple disorders on the same farm (e.g., respiratory and digestive), each sign could be recorded individually. Moreover, the veterinarian indicated the most plausible presumptive diagnosis based on his/her clinical experience. The presumptive diagnoses comprised a closed list of endemic diseases that included: porcine pleuropneumonia (APP, due to Actinobacillus pleuropneumoniae), porcine circovirus associated disease (due to Porcine Circovirus type 2), clostridiosis (due to Clostridium spp.), unspecific diarrhea (when the microorganism involved was unknown), swine dysentery (due to Brachyspira hyodysenteriae), colibacillosis (due to Escherichia coli), exudative epidermitis (due to Staphylococcus hyicus), streptococcosis (due to Streptococcus suis), Glässer's disease (due to Haemophilus parasuis), swine influenza (due to Swine Influenza virus), ileitis (due to Lawsonia intracellularis), leptospirosis (due to Leptospira spp.), mycoplasmosis (due to Mycoplasma hyopneumoniae), any swine parasitosis, pasteurellosis (due to Pasteurella multocida), rectal prolapse, matrix prolapse, porcine reproductive and respiratory syndrome (due to PRRSV), atrophic rhinitis (due to Bordetella bronchiseptica and/or Pasteurella multocida), salmonellosis (due to Salmonella spp.), and gastric ulcers. The app allowed reporting of several presumptive diagnoses during a single visit. However, this situation was very unusual, since the clinician usually indicated a unique presumptive diagnosis. Finally, the veterinarian also categorized the severity of a clinical event as mild, moderate or severe, taking into account his/her own clinical experience and considering the rates of mortality and morbidity and the negative impact of the event on productive performance; a sketch of how such a report could be represented as a structured record is given below. The second source of data was the GSP official laboratory for swine diseases.
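The following is a minimal sketch of how a single clinical-event report with the fields just listed could be represented as a structured record. The class and field names are illustrative assumptions, not the actual schema of the GSP application, and the example values are invented.

from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Illustrative record for one clinical event reported at farm level
# (field names are hypothetical; they mirror the variables described in the text).
@dataclass
class ClinicalEventReport:
    visit_date: date
    farm_id: str                      # official farm identification (REGA)
    company: str
    county: str
    animal_type: str                  # "sows" or "pigs"
    age_category: str
    body_systems: List[str]           # e.g. ["respiratory", "digestive"]
    severity: str                     # "mild", "moderate" or "severe"
    presumptive_diagnosis: str        # one entry from the closed list of endemic diseases
    necropsy_lesions: Optional[str] = None
    vaccines_applied: List[str] = field(default_factory=list)
    lab_confirmed: Optional[bool] = None   # filled in later if samples were submitted

example = ClinicalEventReport(
    visit_date=date(2015, 1, 12),
    farm_id="ES-XXXXXXXX",            # placeholder identifier
    company="Example Integrator",
    county="26",
    animal_type="pigs",
    age_category="fattening",
    body_systems=["respiratory"],
    severity="moderate",
    presumptive_diagnosis="porcine pleuropneumonia",
)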
If practitioners submitted clinical samples from a reported affected herd, the laboratory carried out diagnostic testing to confirm or rule out a suspected endemic disease. The type of test used and the results obtained at farm level were then recorded in the app. Data on clinical cases and laboratory confirmation testing were integrated at farm level using a relational database built by the IT services of GSP. The elapsed time between the report of a clinical event and its laboratory confirmation ranged between 24 h and a week depending on whether the diagnosis was performed by PCR, serology or microbiology. The third data source was the official census of the active swine farms in the regions of study (i.e., Aragon, Catalonia, and Navarra) (7). This census contained the following fields: a unique identifier of the farm, the company to which the farm belonged, the municipality, the county, the province, the number of adult sows/boars, the number of fattening pigs, the number of piglets in nursery, the type of production, and the UTM coordinates (x, y). The data registered by the veterinarians during their visits, and by the official laboratory, were pre-processed and integrated with the census data in order to get a final data set that could be analyzed.

Coverage Assessment

The initial aim of the GSP monitoring system was to gather data on clinical events occurring in any swine farm of Catalonia, Aragon and Navarra. The swine population of these three regions totals 6,741 active pig farms, 79% of which were located in Catalonia, 18% in Aragon and 3% in Navarra. Around 90 swine practitioners routinely visited these farms, the majority of which were located in 12 of the 62 counties of the regions. Half of the farms belonged to 20 integrated swine companies. Over the study period, the implementation of the monitoring system was partial, as not all the veterinarians used the app to report clinical events when visiting the swine farms. Initially, to evaluate the coverage achieved by the GSP system, it was assessed from which counties the swine practitioners reported clinical events. This set of counties corresponded to the accessible population. Then, the coverage was also analyzed by type of production. The comparison allowed identification of those swine farms not participating in the monitoring system and inference of results solely to the participating population.

Spatiotemporal Analyses and Modeling of Clinical Signs and Presumptive Diagnoses

Several descriptive analyses were conducted to summarize the frequencies of clinical events with different clinical signs and presumptive diagnoses, and to visualize whether any spatiotemporal pattern emerged from the data. An initial exploration was carried out to describe the trend of clinical events monitored by week. The clinical signs and the respective presumptive diagnoses were grouped and summarized in tables, maps, and bar plots. The severity of clinical signs, the types of production and the age categories of the affected animals were also analyzed to characterize the subpopulations affected. The analysis was complemented by the diagnostic testing results from the laboratory. Next, to evidence possible patterns over time and space, the number of events with different clinical signs and presumptive diagnoses were explored at low spatiotemporal granularity (i.e., by county and week). The counts of clinical signs and presumptive diagnoses reported weekly were represented with multiple surveillance time series.
These series showed the pattern of each clinical sign for each of the 33 counties included in the population of study between January 2012 and March 2018. Moreover, the cumulative counts of clinical events were mapped monthly and yearly at county level.

Spatiotemporal Modeling Illustrated by Clinical Events of Porcine Pleuropneumonia as Presumptive Diagnosis

Counts of some clinical events grouped by clinical sign or presumptive diagnosis evidenced an overall trend and/or annual seasonality over time (e.g., clinical events with porcine pleuropneumonia as presumptive diagnosis). To get a better understanding of the observed patterns for the different groups, endemic-epidemic multivariate time series models for infectious disease counts were used (8-13). This approach considers that the incidence reported over time can be additively decomposed into two components: an endemic component (or baseline rate of cases with a stable temporal trend) and an epidemic (or autoregressive) component. The endemic component includes several terms to represent the reference number of cases, such as the intercept, the trend and the possible seasonal variation over time. In addition to these parameters, these endemic-epidemic multivariate time series models also allow the inclusion of a neighbor-driven component and random effects to explain their influence on the clinical events. A basic formulation of the endemic-epidemic multivariate time series models can be expressed as:

µ_it = e_i·ν_t + λ·Y_i,t−1 + ϕ·Σ_{j≠i} ω_ji·Y_j,t−1

where the mean incidence of clinical events in each county i at week t (µ_it) depends on two components. (1) An endemic component (ν_t) multiplied by an offset that corresponds to the accessible population fraction located in each county (e_i). Here, ν_t is incorporated as a log-linear predictor that includes an overall trend β·t and sine-cosine terms to represent an annual seasonal variation with a wave frequency ω = 2π/52. (2) An epidemic component split into two parts: an autoregressive part that reproduces the incidence within county i (λ·Y_i,t−1), and neighborhood effects that represent the transmission from other adjacent counties j (ϕ·Σ_{j≠i} ω_ji·Y_j,t−1). These epidemic parameters λ = exp(α^(λ)) and ϕ = exp(α^(ϕ)) are assumed homogeneous across geographical units and constant over time. The multivariate count time series defined at different spatiotemporal units can be fitted with a Poisson model, or a negative binomial model if we need to account for overdispersion. In the case of a negative binomial model, the conditional mean (µ_it) remains the same, but the conditional variance increases to µ_it·(1 + µ_it·ψ_i), with an additional unknown overdispersion parameter ψ_i > 0. These models are very flexible and allow the inclusion of covariates, estimated transmission weights, and random effects to account for unobserved heterogeneity of the units. In this study, to model the spatiotemporal patterns of clinical events with porcine pleuropneumonia as presumptive diagnosis, different models were evaluated by adding diverse sequential extensions. Initially, a basic model was evaluated accounting for endemic and epidemic parameters with annual seasonal variation and an overall trend. Then, other covariates, such as the county neighborhood effect or the population fraction of each county, were tested on the endemic or epidemic parameters, and finally random effects were tested to account for unobserved heterogeneity of counties.
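As a concrete illustration of this formulation, the sketch below fits a simplified version of the model (endemic seasonality and trend plus a within-county autoregressive term, without neighborhood effects or random effects) to simulated weekly county counts by maximum likelihood. It is not the analysis pipeline used by the GSP; all data dimensions and parameter values are invented for the example, and the reported AIC corresponds to the selection criterion discussed immediately after this sketch.

import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)
n_counties, n_weeks = 10, 6 * 52          # toy dimensions (invented)
e = rng.uniform(0.02, 0.2, n_counties)    # population fraction per county (offset)
omega = 2 * np.pi / 52                    # annual wave frequency

# Simulate counts from mu_it = e_i * nu_t + lambda * Y_{i,t-1}  (illustrative parameters).
true = dict(alpha=1.0, beta=0.0, gamma=0.8, delta=0.3, lam=0.35)
Y = np.zeros((n_counties, n_weeks), dtype=int)
for t in range(1, n_weeks):
    nu_t = np.exp(true["alpha"] + true["beta"] * t
                  + true["gamma"] * np.sin(omega * t) + true["delta"] * np.cos(omega * t))
    mu = e * nu_t + true["lam"] * Y[:, t - 1]
    Y[:, t] = rng.poisson(mu)

# Negative Poisson log-likelihood of the same simplified model.
def neg_loglik(params):
    alpha, beta, gamma, delta, log_lam = params
    lam = np.exp(log_lam)                 # keep the autoregressive weight positive
    t = np.arange(1, n_weeks)
    nu = np.exp(alpha + beta * t + gamma * np.sin(omega * t) + delta * np.cos(omega * t))
    mu = e[:, None] * nu[None, :] + lam * Y[:, :-1]
    y = Y[:, 1:]
    return -np.sum(y * np.log(mu) - mu - gammaln(y + 1))

fit = minimize(neg_loglik, x0=np.zeros(5), method="BFGS")
k = len(fit.x)
aic = 2 * k + 2 * fit.fun                 # AIC = 2k - 2*logL, used for model comparison
print("lambda estimate:", np.exp(fit.x[-1]).round(2), " AIC:", round(aic, 1))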
The most appropriate model was selected by comparing the values obtained from the Akaike Information Criterion and choosing the lowest one (14,15).
Reporting Information in Near Real Time
The GSP application allowed not only recording and integration of data on clinical events, but also immediate feedback to veterinary practitioners on trends and spatiotemporal evolution of events at county and regional level. In addition, reproducible documents were created in PDF format to report the updated information extracted from the analyses of data. All the stakeholders and veterinarians who collaborated in the network received these reports with detailed results related to reported clinical events and the spatial and temporal evolution of patterns of clinical signs and presumptive diagnoses. It is important to notice that these reports did not show raw information. In order to protect the privacy of the participating stakeholders, the information was summarized by region or county without giving exact details on individual farms.
Software Used for the Development of the Web Application and the Implementation of Analyses
The application of GSP was developed using the following software: HTML5 (16), JavaScript (17), and Angular 2 (18), and the working environment of IONIC 4 (https://ionicframework.com). The reproducible reports were built in LaTeX format (33) and compiled with RStudio (20).
Coverage Assessment
Between January 2012 and March 2018, a total of 55 practitioners out of 90 volunteered to report clinical events. Over this period, 10,654 clinical events were reported from 1,693 farms (i.e., 25 and 40% of the target and accessible swine population, respectively). Figure 2 shows the location of the target swine population and the coverage achieved by the GSP system by region and county. The Appendix lists the numerical county codes with their corresponding names. Most of the clinical events were reported in fattening farms (88%), followed by sow farms and farrow-to-finish farms with 8 and 4%, respectively. The composition of the swine target population was slightly different and comprised 11 production types in which the fattening farms were the most abundant (69%), followed by sow farms (10%), and farrow-to-finish farms (10%). Moreover, it is interesting to remark that the median size of farms that reported clinical events was larger than in the target population, mainly in fattening, continuous flow finisher, and sow farms. This demonstrates that farms from integrated large operations with highly specialized facilities were more likely to report problems to the system. Table 1 summarizes the coverage and the number of swine farms and clinical events by production type reported by the GSP monitoring system. During the first semester of 2012 the number of reports was relatively low, but in the second semester of that year the level of reporting increased substantially and was subsequently sustained throughout the whole study period (see Figure 3). The severity of the clinical events reported in growing pig farms (nursery and fattening pigs) was milder than in sow farms (sows and/or nursery pigs). In both growing pig and sow farms, respiratory clinical signs were the most frequent, followed by digestive, neurological, locomotor, reproductive, and finally dermatological signs. A combination of several clinical signs was observed in 30% of these events (n = 3,182).
The most frequent combination was digestive and neurological signs (7%), followed by respiratory and neurological (5%), respiratory and locomotor (4%), and respiratory and reproductive (4%). Figure 3 summarizes the frequency of all clinical signs reported with their respective spatiotemporal distribution by county and week.
Spatiotemporal Descriptive Analyses of Clinical Signs and Presumptive Diagnoses
The main presumptive diagnoses associated with respiratory clinical signs were diseases that belong to the porcine respiratory complex (such as swine influenza, PRRS, and mycoplasmosis), followed by Glässer's disease, pasteurellosis, and porcine pleuropneumonia. The most frequent presumptive diagnosis reported in clinical events with digestive signs was colibacillosis, while the most frequent suspicion for neurological signs was Glässer's disease, PRRS for reproductive signs, streptococcosis for locomotor signs, and exudative epidermitis for dermatological signs. Less than 1% of those presumptive diagnoses were confirmed by laboratory diagnosis. Table 2 summarizes the clinical signs by type of affected animals (i.e., growing pigs or sows), the number of affected farms, the degree of severity and the associated presumptive and confirmed diagnoses. Next, using porcine pleuropneumonia as an example of presumptive diagnosis, we illustrate how the clinical events grouped by each presumptive diagnosis were represented spatiotemporally. It is important to note that the clinical signs of porcine pleuropneumonia are quite pathognomonic, and thus clinical suspicions were a useful measure of the pattern of disease. The disease was suspected by the veterinarians on 758 occasions, most often in pigs on fattening farms (88%) (see Figure 4). Moreover, using multivariate surveillance time series, the trend at county level can be visually assessed and compared among counties (see Figure 5). Figures 4, 5 show that, except for 2012, throughout the whole period of study the overall trend of reporting was quite stable, with a seasonal increase each winter. From the accessible population, 19 out of the 33 counties reported at least one suspicion of porcine pleuropneumonia. Although the weekly counts by county were relatively low (i.e., a maximum of 3), those counties of Western Catalonia and Aragon that had more swine farms also reported suspicions of porcine pleuropneumonia more consistently (e.g., the county labeled as 26).
Spatiotemporal Modeling for Porcine Pleuropneumonia as Presumptive Diagnosis
The data on porcine pleuropneumonia as presumptive diagnosis were fitted using a negative binomial model. This model included an endemic component with a marked seasonality that increased between January and March, and an epidemic component. In this case, the influence of random effects structures could not be assessed due to the lack of convergence. The coefficients of the fitted model and the resulting multiplicative effect of seasonality on the endemic component for the suspected clinical events of swine pleuropneumonia are shown in Figure 6.
Reporting Information in Near Real Time
Finally, with the aim of providing continuous feedback to stakeholders and veterinarians, and communicating information on the health status of the population in near real time, the system produced different reports. Directly from the web application the user could get the area where the clinical events were reported during the last 3 months and the trend (see Figure 7).
Moreover, every month the stakeholders received a brief report summarizing the information of the clinical events reported by age, type of farms, and counties. Each year they also received a very detailed report of the monitoring conducted and the results of the models that evaluate the pattern of some clinical signs and presumptive diagnoses. DISCUSSION Traditionally, in Spain, the reporting of many endemic swine diseases through passive surveillance has been very disperse and scarce, and the information required for making proper decisions in health management at population level has often been poor (4). The development of new user-friendly and standardized digital methods to report and analyze data on clinical events can help to determine the frequency and the evolution of diseases at population level (5,34). Recent initiatives have been carried out to monitor some endemic diseases in swine populations using these methods. Although many authors have pointed out the potential for near real-time monitoring, system development faces several technical, social and communication challenges in order to define data standards and data-sharing agreements (1)(2)(3)5). Implementation involves a multidisciplinary approach and the participation of the swine sector. This was the case with the GSP system, built in collaboration with researchers in swine diseases, lab technicians, computer scientists, practitioners, epidemiologists and with the support of producers of Northeastern Spain, all of whom were fully committed to the initiative. The coverage assessment of the GSP system allowed identification of the accessible population under monitoring. This study, based on data collected over more than 6 years, shows that implementation was gradual during the first year (2012), while in subsequent years the practitioners registered clinical events on a regular basis. The spatial coverage of the target swine population was partial and varied between counties (see Figure 2) and types of production (see Table 1). Despite this limitation, the monitoring system achieved the collection of data from an important proportion of fattening farms from integrated large operations in 33 counties, mainly concentrated in Western Catalonia and Aragon. In total 55 out of 90 swine practitioners in Northeastern Spain volunteered to participate. Veterinarians usually worked in a specific area and this could lead to spatiotemporal clustering of reporting of clinical events. In each county, the clinical events were reported by veterinarians from different companies so, to ensure standardized reporting, the same training was provided to all the system participants. To improve the coverage in areas where the app was not used, our suggestions are to hold more meetings explaining the benefits of the information provided by this kind of monitoring and try to sort out the problems that prevent practitioners from participating. On the other hand, since this system did not allow differentiating if a specific farm was not visited or the veterinarian did not detect any clinical event, we recommended adding a field in the application to record all visits of the practitioner, even when no disease was observed. We believe that the recording of all the clinical inspections carried out would help improve the assessment of the coverage of the system and serve to demonstrate the absence of endemic diseases. 
The spatiotemporal descriptive analyses from clinical events reported at farm level allowed the identification in near realtime of the most frequent clinical signs and presumptive suspicions at county and regional level. This easily accessible and current information could be useful to veterinary clinicians and stakeholders for decision making. For example, if a practitioner knew that the incidence of an endemic disease had increased in neighboring farms, he/she could decide to implement or modify preventive measures (e.g., vaccination) or take samples in other swine farms to confirm the presence or absence of infection. In addition, the spatiotemporal descriptive analyses of retrospective data from clinical events allowed assessment of the evolution of different clinical signs and endemic diseases, comparison of different subpopulations and identification of groups of farms, areas, or periods with higher incidence of specific problems. This long-term monitoring could help to determine the baseline frequency of clinical signs or endemic diseases, assess the influence of different factors on disease presence, and predict clinical events. In our study, the results of these analyses showed that the most frequent clinical signs reported from the accessible population were respiratory, followed by digestive and neurological. Moreover, as example of a more detailed analysis, by combining data of clinical signs and presumptive diagnoses, we observed that the practitioners mainly associated these signs with diseases of the respiratory complex (such as swine influenza, mycoplasmosis, or PRRS), followed by pasteurellosis, porcine pleuropneumonia, and Glässer's disease. However, it is important to note that due to the small number of samples (<1% of events), most of these presumptive diagnoses were not confirmed by the laboratory. The main reason why swine practitioners did not take samples was that they believed that the laboratory confirmation would not change the medical interventions to undertake at farm level; and thus, they preferred to avoid extra-costs and logistical difficulties. The monitoring of presumptive diagnoses without laboratory confirmation could result in false alerts being raised. To minimize this limitation, we suggest identifying subpopulations frequently affected by clinical signs or endemic disease suspicions, communicating the information to practitioners and recommending submission of samples for laboratory confirmation. Furthermore, the reporting of presumptive diagnoses was defined as a closed list of possible endemic diseases and the option to report other endemic diseases (not included in the list) or exotic diseases was not considered. To improve this reporting, we suggest including the option of "other suspicion" within an open field, where the veterinarian could record other diagnoses or findings. The reproducible reports created by the GSP system provided updated and continuous information to the practitioners and swine stakeholders who participated. These reports showed visually the frequency of health problems at county and regional level, allowed the identification of their spatial distribution and progress, and helped the decision-making of where and when actions were necessary. Practitioners and swine stakeholders benefited from sharing information of clinical events occurring in the neighboring areas to plan control measures against these infections at farm level. 
Moreover, this system facilitated communication within the swine sector in Northeastern Spain and promoted co-operation. However, at this initial stage, the interventions to undertake in the event of alert at population level had not been agreed upon by different practitioners and private stakeholders, so the system was not ready to be used to plan specific actions. A future potential use of this system would be as surveillance system in order to detect outbreaks or aberrations. Nevertheless, for directing effective control actions, we still need to gradually build more trust in the current monitoring and achieve a better consensus and commitment from the whole swine sector. CONCLUSIONS Overall, we believe that the kind of monitoring system described in this study provides very useful information to detect and monitor the trend of the most frequent endemic diseases, identify specific health problems and to enhance communication within the swine sector. A consensual and broad implementation of the system on the whole target population could shorten the response time to prevent and control certain diseases, decreasing productive, and sanitary losses. Further research could be directed at identifying disease characteristics and modeling other covariates of interest at company or county level to further benefit endemic disease control within the swine industry. DATA AVAILABILITY STATEMENT The datasets generated for this study are available on request to the corresponding author. ETHICS STATEMENT The approval of our study was not required as per the local legislation. This research did not involve any specific clinical study using animal experimentation. This work has been written in agreement with the veterinary clinicians and stakeholders who provided data. All data have been analyzed in aggregate form guaranteeing their privacy and security. AUTHOR CONTRIBUTIONS AA-C conceived the research, performed the statistical analyses, developed the reproducible reports, and wrote the article. EA conceived the research, designed the app, obtained the data, and provided the layout to show the results in the app. VT and JB conceived the research, built, coordinated, and motivated the network of veterinary clinicians that participated in the GSP system. EN conceived the research, performed the laboratory tests, and interpreted the results. SN interpreted the laboratory test results and wrote the article. LF conceived the research, reviewed the statistical analyses and reproducible reports, guided the study, and wrote the article.
2020-02-18T14:13:12.290Z
2020-02-18T00:00:00.000
{ "year": 2020, "sha1": "bb2d1a8026c4eedd7fb47c64b12eaf9f7cf3e648", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fvets.2020.00068/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bb2d1a8026c4eedd7fb47c64b12eaf9f7cf3e648", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237542450
pes2o/s2orc
v3-fos-license
Blue LED Light in Burns: A New Treatment's Modality
signal transducers of numerous cellular pathways in tissue repair; an increased production of ROS induces a controlled increase of inflammatory functions sufficient for stimulating tissue response [5][6][7]. Although the mechanism underlying photobiomodulation is still not completely understood, evidence has accumulated of a positive action on all phases of wound repair, from inflammation to remodeling. These beneficial effects include acceleration of wound healing, cellular and extracellular matrix proliferation, collagen production and granulation tissue formation [8]. It also shows anti-microbial activity [9], leading to synergic, beneficial effects on the healing process. Important scientific literature supports the evidence that light stimulates tissue regeneration and skin repair owing to its ability to interact with tissue, inducing photobiomodulation [10]. Previous studies show how PBMT is able to promote the metabolism of all cellular processes by ATP synthesis and, in turn, to stimulate angiogenesis [11] and tissue repair, with reduction of scar tissue and keloid formation [12]. To date, the use of blue light in the 400-450 nm range has proved effective for the treatment of different dermatologic pathologies such as acne [13][14][15], psoriasis [16] and eczema [17] as well as skin wounds [18][19]. For all these advantages, the use of PBMT is now increasing in clinical practice to improve the wound bed as well as the healing time. In this study, we evaluated for the first time the effectiveness of this treatment both in burned patients unfit for surgery because of their comorbidities and in those in whom skin engraftment failed, in order to promote the healing process. Our results lead us to consider PBMT an effective new approach available in the clinical practice of burn care.
Material and Methods
The observations were made on 11 patients (4 females and 7 males) with burns resistant to standard treatment, subsequently treated with PBMT in order to promote the healing of the skin lesion. In particular, 9 patients showed deep burns of different etiologies not responding to surgery with skin grafts, while the other 2 patients showed skin scarring treated with dermo-epidermal substitutes with incomplete re-epithelialization.
Introduction
Burns are a type of acute trauma affecting millions of people worldwide, frequently caused by accidental or intentional exposure to fire as well as by improper use of flammable liquids [1]. The therapeutic strategy used to manage burned patients is related to the severity, depth and anatomic area of the burn and the clinical condition of the patient. In fact, a spontaneous re-epithelialization of the wound area is expected up to a superficial/medium second-degree burn, while the removal of necrotic tissue and the replacement of the skin defect with autologous skin is performed on burns of deep second/third degree [2][3][4].
However, the engraftment of autologous skin on the damaged area is not always guaranteed and its partial/ total detachment from lesion leads to a delayed healing process by second intention further affected by patient's comorbidities. Among the different approaches developed to stimulate wound healing, the use of PBMT appears to be very promising. The underlying mechanism of this new therapy is a photothermal effect, due to the selective absorption of the blue light by the hemoglobin -in particular the heme group -in the bleeding wound able to improve the healing process, with an apparent modulation of the fibroblast activity and better recovery of the collagen content in the wound area. The direct light energy transfer from the device to the patient permits the interaction and the stimulation of some chromophores of blood and skin, in particular cytochrome C and protoporphryn IX. Once activated by the blue light, cytochrome C interacts with the last two mitochondrial transport chain complexes and contributes to strengthening the cellular respiratory process, increasing the production of adenosine triphosphate, energy currency of the cell, which can intensify its metabolic activity. A further important effect is an increased production of reactive oxygen species (ROS), Keywords: Wounds; Healing process; Photobiomodulation therapy; Hard to heal burns Abstract The management of full-thickness severe burns is an important issue from the medical point of view, especially when the autologous skin is not able to engraft on the wound area and, in turn, to induce its re-epithelialization. As a consequence, the wound healing process occurs slowly, by second intention leading to an increased formation of scar tissue. The delay in wound healing leads to a longer management of the patient, increasing inconveniences and increasing the costs for the health system. In addition, comorbidity factors of the patients such as diabetes, infections or its elderly age can also affect the healing process. With the aim to improve the wound bed as well as the healing time, we decided to test photobiomodulation therapy (PBMT) as a new practice for the burn-care treatment. The results here reported show its effectiveness in the promotion of wound healing leading to faster and better final aesthetic results. Clinical case 1 The first patient we treated with blue LED light was a 67 year old man with cirrhosis and psoriasis. He presented deep burns of the abdomen ( Figure 1A) treated with surgery with the application of autologous skin grafts then failed ( Figure 1B). Thus, we decided to use PBMT to promote the healing process. In this case, five applications twice a week were required to obtain the wound healing, according to the protocol described in material and methods ( Figure 1C, detail Figure 1 C-1). Surprisingly, we observed a marked improvement of the wound bed after only three treatments as well as a reduction of lesion's depth and inflammation, exudation and re-epithelialization ( Figure 1D). The wound bed at the end of the treatments showed a complete reepithelialization with focal atrophic areas, without an inflammatory response (Figure 1 E) and the clinical follow-up after 3 and 14 months showed a scarcely erythematous and hypertrophic scarring, not retracting and soft to the touch. The atrophic areas were also significantly reduced (Figure1 F-G). 
Clinical case 2
The second patient was a 61 year old man with a deep burn of the shoulder (Figure 2A) treated with surgery with the application of autologous skin grafts, which then failed (Figure 2B). Although a new surgery was recommended, the patient refused it. Thus, we decided to use PBMT to promote the healing process. In this case, eight applications twice a week were required to obtain wound healing. We quickly observed a revitalization of the wound bed, with reduction of slough, a clean base, normal exudate and proliferative edges already after 3 treatments (Figure 2C), with a flat scar, soft to the touch, pink in color and with the presence of fine superficial telangiectasias at the follow-up after 1 month (Figure 2D).
Clinical case 3
The third patient was a 59 year old woman suffering from hypertension. She presented deep burns of the right upper and lower limb (Figure 3A) treated with surgery with the application of autologous skin grafts, which failed only on the arm (Figure 3B). We initially treated this area with advanced dressings (Figure 3C) but, given the slow response, we decided to use PBMT to promote the healing.
To perform PBMT, we used a portable medical device emitting blue light with a wavelength between 400 and 430 nm; it was applied for one minute per application at a distance of 4 cm from the wound bed, providing a LED radiation of 120 mW/cm² power density, which corresponds to an energy density dose of 7.2 J/cm²; the light covered a circular area 5 cm in diameter, so that the number of applications was correlated to wound size. The treatment schedule adopted was the application of blue LED light for 60 seconds twice a week, for a maximum period of ten weeks, at the time of dressing change and after wound cleaning with distilled water. Following the blue LED light treatment, a dressing appropriate for the type of lesion was applied according to the required standard. This treatment does not interfere with other systemic therapies that may be in place and does not involve any additional risk compared to standard treatment; the comparative evaluation of aesthetic outcomes was performed using the modified Yeong scale (Table 2).
Results
In our study, we selected patients with burns resistant to standard treatment, candidates for alternative, non-invasive approaches such as PBMT. In particular, we mainly evaluated the data of the treated patients reported in Table 1. The treatment was well tolerated, there were no reports of side effects or other adverse events, and compliance was excellent. The patients recorded a significant reduction in pain at the end of the treatment period. We witnessed the reactivation of the reparative process, with complete re-epithelialization of the burns or of the burned area where the autologous graft was not successful, in 8 cases out of 11. In 2 cases we observed a reduction of the lesion area of at least 80% compared to the initial one. In only 1 case there was a poor response to the treatment, probably due to the extreme severity of the initial picture and the consequent severe physical deterioration of the patient. We describe below three cases particularly interesting from a clinical point of view, for the results obtained given the initial lesion conditions. In most clinical cases, we also noticed that the use of blue LED light induced a modulation of fibroblast activity, reducing the possibility of the appearance of keloids or hypertrophic scars and of keloid recurrence.
Thus, in some cases we have also taken into account the aesthetic results, identified as the scar surface appearance, scar height and color mismatch of the wounds at 6 and 12-14 months after hospitalization/healing, according to the parameters of the modified Yeong scale (Table 2). In particular, we could compare at 9 months the aesthetic outcomes obtained in clinical case 3, treated with blue LED light on the right upper limb in which the skin graft failed (Figure 3 F-G), and those of the lower limb (Figure 3H) in which the engraftment of skin after surgery was evident (Table 3). Based on the clinical evaluation and the score obtained using the modified Yeong scale, we can conclude that the area treated with blue LED light has had a more favorable evolution in terms of scar outcome (thickness, discoloration and consistency of the tissue) than the area treated in a standard way.
Table 2: Scar score of the modified Yeong scale. Each category is assigned a score from 1-4, for a total possible score of 3-12.
Discussion
The surgical procedure of autologous grafting is performed as the standard of care (SOC) for burns; the coverage of a deep burn with the graft may be immediate, delayed for a few days after the excision, or late, after a first phase of direct healing. However, the autologous graft has a probability of failure with total or partial detachment of the grafted skin. When this happens, the healing process of the lesion occurs by secondary intention, but it is not always easy to manage, as this process can be delayed or stopped by many factors such as diabetes, infections, metabolic deficiencies and the advanced age of the subject. Although SOC appears to be effective in most clinical cases, it is invasive, not selective for necrotic tissue and not always easily practicable because of clinical and organizational problems; moreover, the critical conditions of the patients or the presence of comorbidities can be real contraindications to a surgical approach. For these reasons, minimally invasive or non-invasive approaches have been considered as additional therapies able to reduce the possible side effects related to surgery. In fact, proper wound management and dressing of the burn, instead of or after surgery, is an important part of the healing process, in order to prevent the onset of infections or other complications, and also to accelerate wound healing with as little scarring as possible. Among the non-invasive approaches used for burn management, photobiomodulation is a new therapy for treating hard to heal wounds. To the best of our knowledge there are no trials or studies that have analyzed the effects of photobiomodulation on burns, so this is the first case series. The benefits identified in our clinical cases are attributable not only to a reactivation of the tissue repair process but also to a reduced healing time and an improvement in scarring, with consequent indirect benefits, namely a reduction of public health expenditure and an improvement of the quality of life of patients with burn scars, which is often significantly impacted. According to our small experience, the blue LED light used for PBMT contributed significantly to a faster healing process, a reduction of inflammatory response and pain, as well as better recovered skin morphology.
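As a quick arithmetic check of the irradiation parameters reported in the Methods above (120 mW/cm² applied for 60 seconds per application), the stated energy density follows directly from power density times exposure time; the short Python sketch below only reproduces that multiplication and is purely illustrative.

# Energy density delivered per application, from the parameters quoted in the Methods.
power_density_mw = 120                                # mW/cm^2
exposure_s = 60                                       # seconds per application
energy_density = power_density_mw * exposure_s / 1000 # J/cm^2
print(energy_density)                                 # 7.2, matching the reported dose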
Conclusions
Based on our observations, we can conclude that PBMT can be proposed as a promising therapy to be used in the management of cutaneous fibrosis, likely in combination with pre-existing treatments.
Patient consent statement: informed consent was obtained from all patients involved in the study.
2021-09-17T08:11:34.981Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "5c654879b23f9e96137c613e863f112ac8d25f6a", "oa_license": "CCBY", "oa_url": "https://www.avensonline.org/wp-content/uploads/JCID-2373-1044-09-0072.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5c654879b23f9e96137c613e863f112ac8d25f6a", "s2fieldsofstudy": [ "Biology", "Engineering" ], "extfieldsofstudy": [] }
53806244
pes2o/s2orc
v3-fos-license
Continuous Trade-off Optimization between Fast and Accurate Deep Face Detectors Although deep neural networks offer better face detection results than shallow or handcrafted models, their complex architectures come with higher computational requirements and slower inference speeds than shallow neural networks. In this context, we study five straightforward approaches to achieve an optimal trade-off between accuracy and speed in face detection. All the approaches are based on separating the test images in two batches, an easy batch that is fed to a faster face detector and a difficult batch that is fed to a more accurate yet slower detector. We conduct experiments on the AFW and the FDDB data sets, using MobileNet-SSD as the fast face detector and S3FD (Single Shot Scale-invariant Face Detector) as the accurate face detector, both models being pre-trained on the WIDER FACE data set. Our experiments show that the proposed difficulty metrics compare favorably to a random split of the images. Introduction Face detection, the task of predicting where faces are located in an image, is one of the most well-studied problems in computer vision, since it represents a prerequisite for many other tasks such as face recognition [22], facial expression recognition [9,16], age estimation, gender classification and so on. Inspired by the recent advances in deep object detection [12,24], researchers have proposed very deep neural networks [4,18,23,28,30] as a solution to the face detection task, providing significant accuracy improvements. Although deep models [11,12] generally offer better results than shallow [13,19] or handcrafted models [8,27], their complex architectures come with more computational requirements and slower inference speeds. As an alternative for environments with limited resources, e.g. mobile devices, researchers have proposed shallower neural networks [13] that provide fast but less accurate results. In this context, we believe it is relevant to propose and evaluate an approach that allows to set the trade-off between accuracy and speed in face detection on a continuous scale. Based on the same principles described in [26], we hypothesize that using more complex and accurate face detectors for difficult images and less complex and fast face detectors for easy images will provide an optimal trade-off between accuracy and speed, without ever having to change anything about the face detectors. The only problem that prevents us from testing our hypothesis in practice is finding an approach to classify the images into easy or hard. In order to be useful in practice, the approach also has to work fast enough, e.g. at least as fast as the faster face detector. To this end, we propose and evaluate five simple and straightforward approaches to achieve an optimal trade-off between accuracy and speed in face detection. All the approaches are based on separating the test images in two batches, an easy batch that is fed to the faster face detector and a hard (or difficult) batch that is fed to the more accurate face detector. The difference between the five approaches is the criterion used for splitting the images in two batches. The first approach assigns a test image to the easy or the hard batch based on the class-agnostic image difficulty score, which is estimated using a recent approach for image difficulty prediction introduced by Ionescu et al. [15]. 
The image difficulty predictor is obtained by training a deep neural network to regress on the difficulty scores produced by human annotators. The second approach is based on a person-aware image difficulty predictor, which is trained only on images containing the class person. The other three approaches used for splitting the test images (into easy or hard) employ a faster single-shot face detector, namely MobileNet-SSD [13], in order estimate the number of faces and each of their sizes. The third and the fourth approaches independently consider the number of detected faces (images with less faces go into the easy batch) and the average size of the faces (images with bigger faces go into the easy batch), while the fifth approach is based on the number of faces divided by their average size (images with less and bigger faces go into the easy batch). If one of the latter three approaches classifies an image as easy, there is nothing left to do (we can directly return the detections provided by MobileNet-SSD). Our experiments on the AFW [31] and the FDDB [17] data sets show that using the class-agnostic or person-aware image difficulty as a primary cue for splitting the test images compares favorably to a random split of the images. However, the other three approaches, which are frustratingly easy to implement, can also produce good results. Among the five proposed methods, the best results are obtained by the class-agnostic image difficulty predictor. This approach shortens the processing time nearly by half, while reducing the Average Precision of the Single Shot Scale-invariant Face Detector (S 3 FD) [30] from 0.9967 to no less than 0.9818 on AFW. Moreover, all our methods are simple and have the advantage that they allow us to choose the desired trade-off on a continuous scale. The rest of this paper is organized as follows. Recent related works on face detection are presented in Section 2. Our methodology is described in Section 3. The face detection experiments are presented in Section 4. Finally, we draw our conclusions in Section 5. Related Work To our knowledge, there are no previous works that study the trade-off between accuracy and speed for deep face detection. However, there are works [14,26] that study the trade-off between accuracy and speed for the more general task of object detection. Huang et al. [14] have tested different configurations of deep object detection frameworks by changing various components and parameters in order to find optimal configurations for specific scenarios, e.g. deployment on mobile devices. Different from their approach, Soviany et al. [26] treat the various object detection frameworks as black boxes. Instead of looking for certain configurations, they propose a framework that allows to set the trade-off between accuracy and speed on a continuous scale, by specifying the point of splitting the test images into easy versus hard, as desired. We build our work on top of the work of Soviany et al. [26], by considering various splitting strategies for a slightly different task: face detection. In the rest of this section, we provide a brief description of some of the most recent deep face detectors, in chronological order. CascadeCNN [20] is one of the first models to successfully use convolutional neural networks (CNN) for face detection. Its cascading architecture is made of three CNNs for face versus non-face classification and another three CNNs for bounding box calibration. 
At each step in the cascade setup, a number of detections are dropped, while the others are passed to the next CNN. By changing the thresholds required in this process, a certain trade-off between accuracy and speed can be obtained. Jian et al. [18] use Faster R-CNN to detect faces. Faster R-CNN [24] is a very accurate region-based deep detection model which improves Fast R-CNN [10] by introducing the Region Proposal Networks (RPN). It uses a fully convolutional network that can predict object bounds at every location in order to solve the challenge of selecting the right regions. In the second stage, the regions proposed by the RPN are used as an input for the Fast R-CNN model, which will provide the final object detection results. S 3 FD [30] is a highly accurate real-time face detector, based on the anchor model used initially for object detection [21,24]. In order to address the limitations of these methods on small objects (faces), S 3 FD introduces a scale compensation anchor matching strategy to improve recall, and a max-out background label to reduce the false positive detections. In order to handle different scales of faces, it uses a scale-equitable face detection framework, tiling anchors on a wide range of layers, while also designing many different anchor scales. MobileNets [13] are a set of lightweight models that can be used for classification, detection and segmentation tasks. They are built on depth-wise separable convolutions with a total of 28 layers and can be further parameterized, making them very suitable for mobile devices. The fast speeds and the low computational requirements of MobileNets make up for the fact that they do not achieve the accuracy of the very-deep models. The experimental results show they can also be successfully used for face detection.
Methodology
Humans learn much better when the examples are not randomly presented, but organized in a meaningful order which gradually illustrates more complex concepts. Bengio et al. [1] have explored easy-to-hard strategies to train machine learning models, showing that machines can also benefit from learning by gradually adding more difficult examples. They introduced a general formulation of the easy-to-hard training strategies known as curriculum learning. However, we can hypothesize that an easy-versus-hard strategy can also be applied at test time in order to obtain an optimal trade-off between accuracy and processing speed. For example, if we have two types of machines (one that is simple and fast but less accurate, and one that is complex and slow but more accurate), we can devise a strategy in which the fast machine is fed with the easy test samples and the complex machine is fed with the difficult test samples. This kind of strategy will work as desired especially when the fast machine can reach an accuracy level that is close to the accuracy level of the complex machine for the easy test samples. Thus, the complex and slow machine will be used only when it really matters, i.e. when the examples are too difficult for the fast machine. The only question that remains is how to determine if an example is easy or hard in the first place.
Algorithm 1: Easy-versus-Hard Face Detection
Input: I - an input test image; D_fast - a fast but less accurate face detector; D_slow - a slow but more accurate face detector; C - a criterion function used for dividing the images; t - a threshold for dividing images into easy or hard.
Computation: if the criterion C classifies I as easy with respect to the threshold t, then B ← D_fast(I); otherwise B ← D_slow(I).
Output: B - the detected face bounding boxes.
If we focus our interest on image data, the answer to this question is provided by the recent work of Ionescu et al. [15], which shows that the difficulty level of an image (with respect to a visual search task) can be automatically predicted. With an image difficulty predictor at our disposal, we have a first way to test our hypothesis in the context of face detection from images. However, if we further focus our interest on the specific task of face detection in images, we can devise additional criteria for splitting the images into easy or hard. One criterion is to consider an image difficulty predictor that is specifically trained on images with people, i.e. a person-aware image difficulty predictor. Other criteria can be developed by considering the output of a very fast single-shot face detector, e.g. MobileNet-SSD [13]. These criteria are the number of detected faces in the image, the average size of the detected faces, and the number of detected faces divided by their average size. To obtain an optimal trade-off between accuracy and speed in face detection, we propose to employ a more complex face detector, e.g. S 3 FD [30], for difficult test images and a less complex face detector, e.g. MobileNet-SSD [13], for easy test images. Our simple easy-versus-hard strategy is formally described in Algorithm 1. Since we apply this strategy at test time, the face detectors as well as the image difficulty predictors can be independently trained beforehand. This allows us to directly apply state-of-the-art pre-trained face detectors [13,30], essentially as black boxes. It is important to note that we use one of the following five options as the criterion function C in Algorithm 1:
1. a class-agnostic image difficulty predictor that estimates the difficulty of the input image;
2. a person-aware image difficulty predictor that estimates the difficulty of the input image;
3. a fast face detector that returns the number of faces detected in the input image (less faces is easier);
4. a fast face detector that returns the average size of the faces detected in the input image (bigger faces is easier);
5. a fast face detector that returns the number of detected faces divided by their average size (less and bigger faces is easier).
We note that if either one of the last three criteria is employed in Algorithm 1, and if the fast face detector used in the criterion function C is the same as D_fast, we can slightly optimize Algorithm 1 by applying the fast face detector only once, when the input image I turns out to be easy. Another important note is that, for the last three criteria, we consider an image to be difficult if the fast detector does not detect any face. Our algorithm has only one parameter, namely the threshold t used for dividing images into easy or hard. This parameter depends on the criterion function and it needs to be tuned on a validation set in order to achieve a desired trade-off between accuracy and time. While the last three splitting criteria are frustratingly easy to implement when a fast pre-trained face detector is available, we have to train our own image difficulty predictors as described below. Image difficulty predictors. We build our image difficulty prediction models based on CNN features and linear regression with ν-Support Vector Regression (ν-SVR) [2]. For a faster processing time, we consider a rather shallow pretrained CNN architecture, namely VGG-f [3]. The CNN model is trained on the ILSVRC benchmark [25].
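As an illustration of how the dispatch in Algorithm 1 and one of the last three criteria could be wired together, here is a minimal Python sketch. The detector objects, their detect() interface, the box format and the threshold value are placeholders standing in for MobileNet-SSD and S 3 FD, not the authors' actual implementation.

# Sketch of the easy-versus-hard dispatch (Algorithm 1) with the
# "number of faces divided by average size" criterion (n/avg).
# fast_detector and slow_detector are assumed to expose a detect(image)
# method returning a list of (x, y, w, h) boxes; they are placeholders.

def n_over_avg_size(boxes):
    # Criterion C: fewer and bigger detected faces means an easier image.
    if not boxes:
        return float("inf")          # no detection: treat the image as hard
    avg_area = sum(w * h for (_, _, w, h) in boxes) / len(boxes)
    return len(boxes) / avg_area

def detect_faces(image, fast_detector, slow_detector, threshold):
    # Route the image to the fast or the slow detector (Algorithm 1).
    boxes = fast_detector.detect(image)      # one pass of the fast detector
    if n_over_avg_size(boxes) <= threshold:  # easy image: keep the fast result
        return boxes
    return slow_detector.detect(image)       # hard image: run the slow detector

The threshold plays the role of t in Algorithm 1 and, as stated above, would have to be tuned on a validation set to reach the desired accuracy-versus-speed operating point; note that the fast detector is run only once, matching the optimization mentioned for the last three criteria.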
We remove the last layer of the CNN model and use it to extract deep features from the fully-connected layer known as fc7. The 4096 CNN features extracted from each image are normalized using the L 2 -norm. The normalized feature vectors are then used to train a ν-SVR model to regress to the ground-truth difficulty scores provided by Ionescu et al. [15] for the PASCAL VOC 2012 data set [5]. We use the learned model as a continuous measure to automatically predict image difficulty. We note that Ionescu et al. [15] showed that the resulted image difficulty predictor is class-agnostic. Since our focus is on face detection, it is perhaps more useful to consider a class-specific image difficulty predictor. For this reason we train a different image difficulty predictor by selecting only the PASCAL VOC 2012 images that contain the class person. As these images are likely to contain faces, the person-aware difficulty predictor could be more appropriate for the task at hand. Both image difficulty predictors are based on the VGG-f architecture, which is faster than the considered face detectors, including MobileNet-SSD [13], and it reduces the computational overhead at test time. Data Sets We perform face detection experiments on the AFW [31] and the FDDB [17] data sets. The AFW data set consists of 205 images with 473 labeled faces, while the FDDB data set consists of 2845 images that contain 5171 face instances. Evaluation Details Evaluation Measures. On the FDDB data set, the performance of the face detectors is commonly evaluated using the area under the discrete ROC curve (DiscROC) or the area under the continuous ROC curve (ContROC), as defined by Jain et al. [17]. On the other hand, the performance of the face detectors on the AFW data set is typically evaluated using the Average Precision (AP) metric, which is based on the ranking of detection scores [7]. The Average Precision is given by the area under the precision-recall (PR) curve for the detected faces. The PR curve is constructed by mapping each detected bounding box to the most-overlapping ground-truth bounding box, according to the Intersection over Union (IoU) measure, but only if the IoU is higher than 50% [6]. Models and Baselines. We use S 3 FD [30] as our accurate model for predicting bounding boxes, and experiment with the pre-trained version available at https://github.com/sfzhang15/SFD. As our fast detector, we choose the pretrained version of MobileNet-SSD from https://github.com/yeephycho/tensorflowface-detection, slightly modified. For both models, which are pre-trained on the WIDER FACE data set [29], we set the confidence threshold to 0.5. The main goal of the experiments is to compare our five different strategies for splitting the images between the fast detector (MobileNet-SSD) and the accurate detector (S 3 FD) with a baseline strategy that splits the images randomly. To reduce the accuracy variation introduced by the random selection of the baseline strategy, we repeat the experiments for 5 times and average the resulted scores. We note that all standard deviations are lower than 0.5%. We consider various splitting points starting with a 100% − 0% split (equivalent with applying the fast MobileNet-SSD only), going through three intermediate splits (75% − 25%, 50% − 50%, 25% − 75%) and ending with a 0% − 100% split (equivalent with applying the accurate S 3 FD only). 
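Referring back to the image difficulty predictors described in the methodology (fc7 features from a pre-trained VGG-f, L2 normalization, then ν-SVR regression on the human difficulty scores of Ionescu et al.), a minimal sketch follows. The feature-extraction step and the training data are placeholders, and scikit-learn's NuSVR is used here merely as a convenient stand-in for whichever ν-SVR solver the authors actually used.

# Sketch of the difficulty predictor: CNN fc7 features + L2 norm + nu-SVR.
# extract_fc7(images) is assumed to be a forward pass through a pre-trained
# CNN truncated at the fc7 layer; it is a placeholder, not shown here.
import numpy as np
from sklearn.svm import NuSVR

def l2_normalize(feats):
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    return feats / np.maximum(norms, 1e-12)

def train_difficulty_predictor(train_feats, difficulty_scores, nu=0.5, C=1.0):
    # Fit a nu-SVR regressor on L2-normalized deep features.
    model = NuSVR(nu=nu, C=C, kernel="linear")
    model.fit(l2_normalize(train_feats), difficulty_scores)
    return model

def predict_difficulty(model, feats):
    return model.predict(l2_normalize(feats))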
Results and Discussion
Table 1 presents the AP scores and the processing times of MobileNet-SSD [13], S 3 FD [30] and several combinations of the two face detectors, on the AFW data set. Different model combinations are obtained by varying the percentage of images processed by each detector. The table includes results starting with a 100% − 0% split (equivalent with MobileNet-SSD [13] only), going through three intermediate splits (75% − 25%, 50% − 50%, 25% − 75%) and ending with a 0% − 100% split (equivalent with S 3 FD [30] only). In the same manner, Table 2 shows the results for the same combinations of face detectors on the FDDB data set. While the results of various model combinations are listed on different columns in Table 1 and Table 2, the results of various splitting strategies are listed on separate rows.
Table 1. Average Precision (AP) and time comparison between MobileNet-SSD [13], S 3 FD [30] and various combinations of the two face detectors on AFW. The test data is partitioned based on a random split (baseline) or five easy-versus-hard splits given by: the class-agnostic image difficulty score, the person-aware image difficulty score, the number of faces (n), the average size of the faces (avg), and the number of faces divided by their average size (n/avg). For the random split, we report the AP over 5 runs to reduce bias. The reported times are measured on a computer with Intel Core i7 2.5 GHz CPU and 16 GB of RAM.
We first analyze the detection accuracy and the processing time of the two individual face detectors, namely MobileNet-SSD [13] and S 3 FD [30]. On AFW, S 3 FD reaches an AP score of 0.9967 in about 1.89 seconds per image, while on FDDB, it reaches a DiscROC score of 0.9750 in about 1.17 seconds per image. MobileNet-SSD is more than four times faster, attaining an AP score of 0.8910 on AFW and a DiscROC score of 0.8487 on FDDB, in just 0.28 seconds per image. We next analyze the average face detection times per image of the various model combinations on AFW. As expected, the time improves by about 19% when running MobileNet-SSD on 25% of the test set and S 3 FD on the remaining 75%. On the 50% − 50% split, the processing time is nearly 40% shorter than the time required for processing the entire test set with S 3 FD only (0% − 100% split). On the 75% − 25% split, the processing time further improves by 63%. As the average time per image of S 3 FD is shorter on FDDB, the time improvements are close, but not as high. The improvements in terms of time are 15% for the 25% − 75% split, 34% for the 50% − 50% split, and 55% for the 75% − 25% split. We note that unlike the random splitting strategy, the easy-versus-hard splitting strategies require additional processing time, either for computing the difficulty scores or for estimating the number of faces and their average size. The image difficulty predictors run in about 0.05 seconds per image, while the MobileNet-SSD detector (used for estimating the number of faces and their average size) runs in about 0.28 seconds per image.
Table 2. Area under the discrete and the continuous ROC curves and time comparison between MobileNet-SSD [13], S 3 FD [30] and various combinations of the two face detectors on FDDB. The test data is partitioned based on a random split (baseline) or five easy-versus-hard splits given by: the class-agnostic image difficulty score, the person-aware image difficulty score, the number of faces (n), the average size of the faces (avg), and the number of faces divided by their average size (n/avg). For the random split, we report the scores over 5 runs to reduce bias. The reported times are measured on a computer with Intel Core i7 2.5 GHz CPU and 16 GB of RAM.
Hence, the extra time required by the two splitting strategies based on image difficulty is almost insignificant with respect to the total time required by the various combinations of face detectors. For instance, in the 50% − 50% split with MobileNet-SSD and S 3 FD, the difficulty predictors account for roughly 4% of the total processing time (0.05 out of 1.13 seconds per image) for an image taken from AFW. Regarding our five easy-versus-hard strategies for combining face detectors, the empirical results indicate that the proposed splitting strategies give better performance than the random splitting strategy, on both data sets. Although using the number of faces or the average size of the faces as splitting criteria is better than using the random splitting strategy, it seems that combining the two measures into a single strategy (n/avg) gives better and more stable results. On the AFW data set, the n/avg strategy gives the best results for the 75% − 25% split (0.9571), while the class-agnostic image difficulty provides the best results for the 50% − 50% split (0.9818) and the 25% − 75% split (0.9923). The highest improvements over the random strategy can be observed for the 50% − 50% split. Indeed, the results for the 50% − 50% split shown in Table 1 indicate that our strategy based on the class-agnostic image difficulty gives a performance boost of 4.63% (from 0.9355 to 0.9818) over the random splitting strategy. Remarkably, the AP of the MobileNet-SSD and S 3 FD 50% − 50% combination is just 1.49% under the AP of the standalone S 3 FD, while the processing time is reduced by almost half. On the FDDB data set, the n/avg strategy gives the best DiscROC score for the 75% − 25% split (0.9214), while the class-agnostic image difficulty provides the best DiscROC score for the 25% − 75% split (0.9673). The n/avg strategy and the class-agnostic image difficulty provide equally good DiscROC scores on the 50% − 50% split (0.9493). As indicated in Table 2, the best DiscROC score for the 50% − 50% split (0.9493) on FDDB is 3.83% above the DiscROC score (0.9110) of the baseline strategy. The ContROC scores are generally lower for all models, but the same patterns occur in the results presented in Table 2, i.e. the best results are provided either by the n/avg strategy when MobileNet-SSD has a higher contribution in the combination or by the class-agnostic image difficulty when S 3 FD has to process more images. To understand why our splitting strategy based on the class-agnostic image difficulty scores gives better results than the random splitting strategy, we randomly select a few easy examples (with less and bigger faces) and a few difficult examples from the AFW data set, and we display them in Figure 1 along with the bounding boxes predicted by the MobileNet-SSD and the S 3 FD models. On the easy images, both detectors are able to detect the faces without any false positive detections, and the bounding boxes of the two detectors are almost identical.
Nevertheless, we can perceive a lot more differences between MobileNet-SSD and S 3 FD on the hard images. In the left-most hard image, MobileNet-SSD is not able to detect the profile face of the man in the near right side of the image. In the second image, MobileNet-SSD wrongly detects the dog's face as a human face and it fails to detect the face of the boy sitting in the right. In the third image, MobileNet-SSD wrongly detects the small snowman's face sitting in the background and it fails to detect the face of the baby. In the right-most hard image, MobileNet-SSD fails to detect the face of the person looking down, which is difficult to detect because of the head pose. In the same image, MobileNet-SSD also fails to detect the profile face of the man in the far right side of the image. Remarkably, the S 3 FD detector is able to correctly detect all faces in the hard images illustrated in Figure 1, without any false positive detections. We thus conclude that the difference between MobileNet-SSD and S 3 FD is only noticeable on the hard images. This could explain why our splitting strategy based on the class-agnostic image difficulty scores is effective in choosing an optimal trade-off between accuracy and speed. Conclusion In this paper, we have presented five easy-versus-hard strategies to obtain an optimal trade-off between accuracy and speed in face detection from images. Our strategies are based on dispatching each test image either to a fast and less accurate face detector or to a slow and more accurate face detector, according to the class-agnostic image difficulty score, the person-aware image difficulty score, the number of faces contained in the image, the average size of the faces, or the number of faces divided by their average size. We have conducted experiments using state-of-the-art face detectors such as S 3 FD [30] or MobileNet-SSD [13] on the AFW [31] and the FDDB [17] data sets. The empirical results indicate that using either one of the image difficulty predictors for splitting the test images compares favorably to a random split of the images. However, our other easy-versus-hard strategies also outperform the random split baseline. Since all the proposed splitting strategies are simple and easy to implement, they can be immediately adopted by anyone that needs a continuous accuracy versus speed trade-off optimization strategy in face detection.
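To make the timing bookkeeping behind Tables 1 and 2 easy to reproduce, the expected per-image cost of a split can be computed directly from the per-image times quoted in the text. The small sketch below uses the AFW numbers (0.28 s for MobileNet-SSD, 1.89 s for S 3 FD, 0.05 s for the difficulty predictor used as the splitting criterion) and is only an illustration of the arithmetic, not the authors' measurement code.

# Expected per-image time for an easy-versus-hard split, using the
# per-image times quoted in the text for the AFW data set.
def expected_time(easy_fraction, t_fast=0.28, t_slow=1.89, t_criterion=0.05):
    # easy_fraction: share of images sent to the fast detector
    return t_criterion + easy_fraction * t_fast + (1.0 - easy_fraction) * t_slow

for easy in (0.25, 0.50, 0.75):
    print(easy, expected_time(easy))
# For the 50%-50% split this gives roughly 1.13 s per image, in line with the
# total reported in the text, of which the difficulty predictor is about 4%.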
2018-11-27T12:16:22.000Z
2018-11-27T00:00:00.000
{ "year": 2018, "sha1": "44f433c5db087664cd5fae273763a2d2baf03bfd", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1811.11582", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a83bbe9f1806b67f344a00f1d6dbaca90f1a815d", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
6069754
pes2o/s2orc
v3-fos-license
Beating the bookies with their own numbers - and how the online sports betting market is rigged The online sports gambling industry employs teams of data analysts to build forecast models that turn the odds at sports games in their favour. While several betting strategies have been proposed to beat bookmakers, from expert prediction models and arbitrage strategies to odds bias exploitation, their returns have been inconsistent and it remains to be shown that a betting strategy can outperform the online sports betting market. We designed a strategy to beat football bookmakers with their own numbers. Instead of building a forecasting model to compete with bookmakers predictions, we exploited the probability information implicit in the odds publicly available in the marketplace to find bets with mispriced odds. Our strategy proved profitable in a 10-year historical simulation using closing odds, a 6-month historical simulation using minute to minute odds, and a 5-month period during which we staked real money with the bookmakers. Our results demonstrate that the football betting market is inefficient - bookmakers can be consistently beaten across thousands of games in both simulated environments and real-life betting. We provide a detailed description of our betting experience to illustrate how the sports gambling industry compensates these market inefficiencies with discriminatory practices against successful clients. Introduction "In the midst of chaos, there is also opportunity." -Sun Tzu , The Art of War In recent years, the emergence of web technologies, product platforms and TV broadcast rights transformed the online gambling industry into a worldwide $452 billion business . Clients of the 2 online sports betting industry dream of "beating the bookies" and, most often, find in the adrenaline and excitement of their risky gambling activities an escape from the boredom of everyday life (Blaszczynski, McConaghy, & Frankova, 1990;Lee, Chae, Lee, & Kim, 2007;Loroz, 2004;Platz & Millar, 2001) . To maximize profit, bookmakers employ teams of data scientists to analyze decades of sports data and develop highly accurate models for predicting the outcome of sports events (Cantinotti, Ladouceur, & Jacques, 2004;García, Pérez, & Rodríguez, 2016) . Although several strategies have been proposed to compete with bookmakers' models, from expert predictions (Forrest, Goddard, & Simmons, 2005) , probability models based on Power scores, Elo ratings and/or Maher-Poisson approaches (Dixon & Coles, 1997;Maher, 1982;Vlastakis, Dotsis, & Markellos, 2008) and prediction markets (Spann & Skiera, 2009) to arbitrage strategies and odds bias exploitation (Ashiya, 2015;A. C. Constantinou, Fenton, & Neil, 2013;Franck, Verbeek, & Nüesch, 2009) , to our knowledge there is no precedent in the scientific literature that they consistently outperform the market and show sustained profit over years and across football leagues around the world (A. Deschamps & Gergaud, 2012;Kain & Logan, 2014;Spann & Skiera, 2009;Vlastakis et al., 2008;Vlastakis, Dotsis, & Markellos, 2009) . 2 www.statista.com Can a betting strategy outperform the sports betting market? Although bookmakers' profitable business (along with their modelling advantage and control of odds pricing) seems to suggest the opposite, we aimed to demonstrate that bookmakers can be beaten with their own numbers. 
We developed a betting strategy for the football market that exploited the implicit information contained in the bookmakers' aggregate odds (Kuypers 2000;Cortis 2016;Cortis, Hales, and Bezzina 2013) to systematically take advantage of mispriced events. Our betting system differed from previous betting strategies in that, instead of trying to build a model to compete with bookmakers' forecasting expertise, we used their publicly available odds as a proxy of the true probability of a game outcome. With these proxies we searched for mispricing opportunities, i.e., games with odds offered above the estimated fair value (see glossary in Box 1). Our strategy returned sustained profits over years of simulated betting with historical data, and months of paper trading and betting with actual money. These results suggest that the football betting market is inefficient. Bookmakers, however, deploy a special set of practical rules to compensate for these inefficiencies. A few weeks after we started trading with actual money some bookmakers began to severely limit our accounts, forcing us to stop our betting strategy. We thus demonstrate that (i) bookmakers can be beaten consistently over months/years of betting with a single strategy, in both simulated environments and in real-life betting situations and (ii) the online sports betting system is rigged against successful bettors through discriminatory practices. Methods "Who wishes to fight must first count the cost." -Sun Tzu , The Art of War Betting strategy For a bet to be "fair", i.e., for the expected value of a bet to be zero, the odds paid by the bookmaker must be the inverse of the underlying probability of the result. Once bookmakers build an accurate model that estimates the underlying probability of the result of a game, they offer odds that are below the fair value. The mechanism operates similarly to the roulette at the casino. For example, when a customer places a bet on red in an American roulette, there is a 18/38 chance of doubling the wager (18 green numbers, 18 red numbers, plus 0 and 00, which are green). Under these conditions, the fair value for the bet is 2.111 but the house pays only 2 and, therefore, the house pays below fair value. This is the 'tax' or commission charged by the bookmaker, in this case, for every dollar bet at the roulette, the house expects to earn $ (2/38), or 5.3c. In order to calculate the odds that, statistically, will allow bookmakers to earn a desired percentage of the total money bet at sport games, they need accurate models to estimate the probability of each event. There are many different factors that can be incorporated into a model to predict the probability of the outcome of a football game, for instance: the results of the last n games for the two teams, the record of successful games at home or away for those teams, the number of goals scored and conceded by each team during the previous games, player injuries before the game and even the expected weather conditions on the day of the match (Dixon & Coles, 1997;Langseth, 2013;Maher, 1982) . If we consider the scope of these variables the task of developing accurate models to predict the outcome of thousands of games across football leagues around the world becomes an extremely complex challenge. In recent years, however, teams of professional analysts have improved the outcomes of their prediction models with increasingly sophisticated statistical analysis and large amounts of data in variety of forms (Gandar, Zuber, & Lamb, 2001) . 
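As a small aside, the notions of fair odds and the bookmaker's commission used above can be made concrete with a few lines of code. The snippet below only reproduces the roulette arithmetic from the text and is purely illustrative; the function names are not taken from any betting API.

# Fair odds and the bookmaker's margin, using the American roulette example.
def fair_odds(p):
    """Fair (European) odds for an outcome of probability p: E[payoff] = 0."""
    return 1.0 / p

def expected_payoff(p, odds, stake=1.0):
    """Expected payoff of staking `stake` at the given decimal odds."""
    return p * (odds - 1.0) * stake + (1.0 - p) * (-stake)

p_red = 18.0 / 38.0                               # chance of red on an American roulette
print(fair_odds(p_red))                           # ~2.111: the fair price for the bet
print(expected_payoff(p_red, 2.0))                # ~-0.0526: the house keeps ~5.3c per $1
print(expected_payoff(p_red, fair_odds(p_red)))   # 0.0: fair odds by construction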
To measure the accuracy of bookmakers in estimating a game's final result probabilities we calculated the consensus probability as the inverse of the mean odds across bookmakers, p_cons = |Ω| / Σ_{ω∈Ω} ω, where Ω is a set containing the odds across bookmakers for a given event and a given game result (home team win, draw, away team win), whenever there were more than 3 odds available for that result (in some games only a subset of bookmakers offered odds; the number of odds we employed for analysis varied from a minimum of 3 to a maximum of 32). In this way, we calculated the consensus probability of a home team win, a draw, or an away team win for each of the 479,440 games (in total 3 x 479,440 consensus probabilities). Then, we binned the data according to the consensus probability from 0 to 1 in steps of 0.0125 (i.e., 80 bins). Within each bin we calculated: 1) the mean consensus probabilities across games at closing time (the final odds provided by bookmakers before the start of the match); and 2) the mean accuracy in the prediction of the football game result (i.e., the proportion of games ending in home team victory, draw or away team victory for that bin; see Figure 1). We used a minimum of 100 games for each bin. With these data we ran a preliminary analysis and observed that the consensus probability is a good predictor of the underlying probability of an outcome (see Results section). Based on these results, we decided to build our betting strategy on the evidence that bookmakers already possess highly accurate models to predict the results of football games. A strategy intended to beat the bookmakers at predicting the outcome of sports games requires a more accurate model than the ones bookmakers have developed over many years of data collection and analysis. Instead of trying to create such a model, we decided to use the bookmakers' own probability estimates of the outcomes to find mispricing opportunities. More specifically, we searched for opportunities where some odds offered were above their estimated fair value. Sometimes bookmakers offer odds above fair value either to compete to attract clients or to maintain a balanced book to avoid getting overly exposed to risk. For example, when too many clients bet on an outcome (e.g. home team victory) bookmakers can increase the odds for the corresponding counterpart (e.g. away team victory), in order to attract more gamblers to bet on it and decrease their exposure to the overbooked outcome. This means that bookmakers might offer odds with a lower implied probability than the actual probability of a result. This is the key factor that we exploited in our strategy. We based our strategy on the estimated payoff of each bet. The expected payoff of betting $1 is E[Π] = p_real (ω − 1) + (1 − p_real)(−1) = p_real ω − 1, where Π is the payoff of the bet (a random variable), p_real is the actual underlying probability that the outcome materializes, and ω are the odds paid by the bookmaker in case that the outcome comes about. We performed a preliminary data analysis and found that p_real ≈ p_cons − α (Eq. 4), where p_cons is the consensus probability as calculated above and α is an adjustment term that allows us to include the intercept we estimated in a regression analysis on outcomes of games for "Home", "Draw" and "Away". The estimated α was 0.034, 0.057 and 0.037 for home victory, draw and away victory, respectively (see Results section). Then E[Π] ≈ (p_cons − α) ω − 1. Under these conditions, we should place a bet when the expected payoff is greater than 0, i.e., when:
ω > 1 / (p_cons − α) (Eq. 6). We followed this line of reasoning to define our betting strategy, and decided to place a bet whenever the maximum odds offered for a given result fulfilled the following inequality: max(Ω) > 1 / (p_cons − α) (Eq. 7). The expected value of each bet increases with the α parameter, while the number of games available for betting decreases. This occurs because the condition becomes more stringent and fewer bookmakers offer odds with such high margins. To select an appropriate value for the α parameter we analyzed the performance of the simulation strategy by varying the value of α from 0.01 to 0.1. We found that an α of 0.05 produced the optimal payoff with the largest amount of games (an α of 0.06, for example, was equally profitable but we decided to use an α of 0.05 because it provided twice as many games to bet in, which might be useful in a strategy that increases the amount staked as a function of the earnings). In summary, we based our betting strategy on the assumption that odds published by bookmakers allow us to obtain a highly accurate estimate of the actual probability of the outcome of an event (by taking the inverse of the mean odds across bookmakers minus a constant that allows for the bookmaker's commission). Thus, our betting strategy consisted of placing bets whenever the odds offered by some bookmaker deviated from the average and were above fair value, i.e., when the expected payoff of placing the bet was positive. Importantly, the task of identifying the odds that satisfied the threshold in (Eq. 7) did not require a model with higher accuracy than the bookmakers' models. Strategy implementation Our betting strategy was implemented as a real time system, and deployed on a virtual machine hosted on the cloud. The system continuously collected data from online sports betting portals and provided the web service that made a dashboard available, where the recommended bets and the betting history were shown (Supplementary Figure 1). The virtual machine was used to run a program that searched for the odds of every football game from 5 hours before the onset of the game. For each game, the program continuously collected odds across 32 bookmakers and calculated whether the maximum offered odds complied with our strategy's condition for placing a bet, i.e., maximum odds fulfilling Eq. (7). Whenever the program found a situation in which this happened, it displayed the information about the game, bookmaker and odds on the dashboard (Supplementary Figure 1), so that the users (including us) could see the list of bets recommended by the system and place a bet of fixed amount with the bookmaker. To keep the amount of money placed on each independent game constant, once a bet was placed for a game at some bookmaker, that game was not considered for further analysis. "Victorious warriors win first and then go to war …. The greatest victory is that which requires no battle." -Sun Tzu, The Art of War Analysis to define the betting strategy To select the appropriate strategy we first performed a descriptive statistical analysis of the relationship between the bookmakers' predictions and the actual probability of the outcome of football games. A linear regression analysis showed a strong correlation between the bookmakers' consensus probability and the results of the game for home victory (R^2 = 0.999), draw (R^2 = 0.995) and away victory (R^2 = 0.998).
The slopes and intercepts of the regression line were 1.003 and -0.034 for a home victory, 1.081 and -0.057 for draw, and 1.012 and -0.037 for an away victory, respectively. These results suggest that the consensus probability is an extremely accurate proxy (up to a constant intercept) of the actual probability of occurrence of each event (home victory, dray, away victory; note that the slopes of the three regression lines are very close to 1). Based on these results, we decided to build our betting strategy on the evidence that bookmakers already possess highly accurate models to predict the results of football games. Strategy Outcome We tested our betting strategy by analyzing the odds and results of 479,440 football games played in 818 leagues during a ten-year period, from 2005 to 2015. We began our analysis by applying our betting strategy to the closing odds of each game (i.e., the odds values offered by bookmakers at the start of the game ). We simulated placing bets when the closing odds of a at the maximum odds offered across bookmakers. In each run of the simulation we A) randomly sampled 56,435 games (the same amount of games that were selected by our betting strategy) from the complete set of games in the historical series, B) selected a random outcome on which to bet with a probability of 0.595 for home victory, 0.021 for draw and 0.384 for away victory (these are the proportions of home, draw and away games that were selected by our strategy) and C) calculated the return of placing the bet. We repeated the procedure 2000 times (sampling with replacement) to obtain a distribution of returns ( Figure 2A). The random strategy yielded an accuracy of 38.9%, an average return of -3.32% and an average loss of $93,563 (STD=$17,778), further confirming that Eq. 7 successfully selects bets with a positive expected payoff above chance level. The return of our strategy was 10.82 standard deviations above the mean return of the random bet strategy. The probability of obtaining a return greater than or equal to $98,865 in 56,435 bets using a random bet strategy is less than 1 in a billion. We observed that the final accuracy was higher for our strategy (44.4%) than for the random bet strategy (38.9%). Correspondingly, our strategy selected odds with a mean value of 2.30 (STD=0.99) and the random bet strategy selected odds with a mean value of 3.10 (STD= 2.42). The discrepancy in the accuracy between strategies originated from the selection of events: our strategy picked up games with lower odds values and higher probability of occurrence than the games selected by the random bet strategy. We confirmed this finding with an analysis of the mean closing odds across bookmakers for each strategy. As shown above, the mean closing odds across bookmakers is a precise estimate of the true probability of occurrence of an event ( Figure 1). The expected accuracy (as predicted by the inverse of the mean closing odds across bookmakers) precisely estimates the final accuracy in each strategy. We calculated the expected accuracy of the strategy using Eq. 4. For each bet of the strategy we calculate where is equal to 0.034, 0.057 and 0.037 for home win, draw and away bets respectively (and α where the intercept comes from the regression analysis performed in the first paragraph of α the Results section), and indexes the bet. 
We then calculated the average estimated probability: i confirms that the the probability information implicit in the mean closing odds across bookmakers represents a powerful predictor for the true outcome of football games (as shown in our historical analysis). Following the success of our initial analysis, and considering that in real life individuals cannot place bets at the closing time of odds, we decided to conduct a more realistic simulation in which we placed bets at odds available from 1 to 5 hours before the beginning of each game. To this end, we wrote scripts to continuously collect odds from multiple sources on the Internet. While the historical closing odds for football games can be easily retrieved online, we could not find any source of data containing the time series of odds movements before the beginning of each game. To obtain these times series we wrote a new set of scripts to gather information in real time for upcoming games as they became available online. In total, we were able to obtain data from 31,074 games, from the 1st of September 2015 to the 29th of February 2016. Using these times series data, we placed bets according to our betting strategy at any time starting 5 hours before the beginning of a game until 1 hour before the start of the game. Under these simulated conditions, our strategy selected odds with a mean value of 2.32 (STD=0.99.), had an accuracy of 47.6% and yielded a 9.9% return; i.e., if every bet placed was $50 our strategy would have generated $34,932 in profit across 6,994 bets (Table 1, Figure 2B). In contrast, the distribution of returns of the random bet strategy selected odds with a mean value of 3.29 (STD=2.96), had and accuracy of 38.4% and would have generated, for bets of $50, a return of 0.2% and an average profit of $825 (STD=$7,106). Similarly as shown above, the expected accuracy wa 46.5% for our strategy and 37.7% for the random bet strategy, which closely matched the actual accuracies of both strategies. The return of our strategy was 4.80 standard deviations above the mean of the random bet strategy. The probability of obtaining a profit greater than or equal to $34,932 in 6,994 bets with a random bet strategy is less than 1 in a million. Once we determined that our betting strategy was successful with the historical closing odds and with the analysis of odds series movements from 5 hours to 1 hour before the game start, we decided to test our betting strategy under more realistic betting conditions. To this end we employed a technique called "paper trading", a simulated trading process in which bettors can "practice" placing bets without committing real money. We used the information displayed on the dashboard to check the bookmakers' accounts, verify that the possibility to lay a bet at the advantageous odds was available, and subsequently mark the bet as laid on the dashboard. Paper trading allowed us to empirically check whether the odds were available at the bookmakers at the time of placing a bet. We had to test the discrepancy between the information that bookmakers showed on their websites and the information that was displayed on our dashboard. Often, there was a time delay between the moment when bookmakers made their odds available online and the time it took for our scripts to show that information on the dashboard. We observed that around 30% of the odds that were displayed on the dashboard had already been changed at the bookmakers' sites. 
The delay in the update of the odds created a sample bias in the games we were betting on: in contrast to previous analysis in which every game was used for the simulation, now a subset of these games was not included at the time of placing bets. To test how this delay could affect our results, we ran again our strategy simulation, now randomly discarding 30% of the games. We observed that, despite the missing bets, the strategy remained profitable. We decided to continue with our betting strategy, and after three months of paper trading our strategy obtained an accuracy of 44.4% and a return of 5.5%, earning $1,128.50 across 407 bets for the case of $50 bets (Table 1, Figure 3). At this point we decided to place bets with real money. All the procedures were identical to the paper trading exercise, with the exception that the human operator actually placed $50 bets at the bookmakers' online platforms after checking the odds on the dashboard. Our final results show the profit we obtained in 5 months of betting money for real. During that period we obtained an accuracy of 47.% and a profit of $957.50 across 265 bets, equivalent to a 8.5% return (Table 1, Figure3). Combined, paper trading and real betting had an accuracy of 45.5% and yielded a profit of $2,086 in 672 bets, equivalent to a return of 6.2%. We compared the results of our strategy with the results of a random bet strategy, identical to that employed for the time series odds (figure 2B) but this time considering games from April 2015 to July 2015 (the period used for paper trading and real betting). The random strategy yielded an accuracy of 38.7%, an average return of -0.7% and an average loss of $670 (STD=$2047). The return of our strategy after 672 games was 1.34 standard deviations above the mean of the random bet strategy and the probability of obtaining a profit of $2,086 or higher in 672 bets with a random betting strategy is 1 in 11. This probability corresponds to a p value of 0.089, under the null hypothesis that the return of our strategy comes from a distribution of final returns obtained with a random bet strategy. A p-value of 0.05 is often considered as the standard threshold for statistical significance. The p-value we obtained from the analysis of the return of our strategy was expected given the evolution of the returns obtained in our historical simulations: with an increase in the number of games our strategy increases its return and separation from the distribution of returns of the random bet strategy (as seen with the historical analysis of closing odds and odds movements series). The reader might notice that during a similar time period the simulated strategy bet on approximately ten times more games. The reason for this is that we did not have a dedicated operator betting on all available opportunities 24 hours a day and as a result we missed many of the bets that appeared on the dashboard. Nevertheless, our paper trading and actual betting activity confirmed the profitability of the strategy. Although we played according to the sports betting industry rules, a few months after we began to place bets with actual money bookmakers started to severely limit our accounts. We had some of our bets limited in the stake amount we could lay and bookmakers sometimes required "manual inspection" of our wagers before accepting them. In most cases, bookmakers denied us the opportunity to bet or suggested a value lower than our fixed bet of $50 (Figure 4). 
Under these circumstances we could not continue with our betting strategy. The limits imposed by bookmakers not only shrunk our potential profit but also created a sampling bias in the choice of games which was not taken into account in our previous analysis. In our simulations, when we analyzed the effects of randomly discarding a proportion of the games, the returns were not affected. However, the selection of games where bookmakers limited our stakes was unlikely to be purely random, which could negatively impact the strategy's performance. For these reasons, and because bookmakers' restrictions turned the betting experience increasingly difficult, we decided to end our betting experiment . 4 Discussion "One may know how to conquer without being able to do it." -Sun Tzu , The Art of War We developed a betting strategy for the online betting football market. In contrast to strategies that build prediction models to compete with the forecasts of bookmakers' models, our strategy was developed under the assumption that the average of the odds across bookmakers reflects an accurate estimate of the probability of the outcome of a game. Instead of competing against bookmakers' forecasting models, we used the prediction information contained in the aggregate odds to bet on mispriced events. Our strategy proved successful and returned profit with historical data, paper trading and real betting over months and across football leagues around the world. Betting strategies based on expert or tipster analysis attempt to beat bookmakers by constructing more accurate forecasting models than those of bookmakers (Boulier, Stekler, & Amundson, 2006;Daunhawer, Schoch, & Kosub, 2017;Deschamps & Gergaud, 2012;Vlastakis et al., 2009) . Our analysis shows, however, that the implicit information contained in the average odds across bookmakers provides a highly accurate model to predict the outcomes of football games (Boulier et al., 2006;Forrest et al., 2005;Spann & Skiera, 2003) . 4 As of the date of writing this paper (August 2017), one of the bookmakers we had accounts with, "Doxxbet", closed its website to clients. We are not able to withdraw the money (90 euro) from them. Their support teams do not respond to our emails. There are many cases where the aggregate predictions of a group of individuals produce more accurate predictions than those of each individual separately, a phenomenon often referred to as the wisdom of crowds (Navajas, Niella, Garbulsky, Bahrami, & Sigman, 2017) . This idea is often applied in practice, for example in applications such as ensemble learning in machine learning algorithms (Géron, 2017) . Similarly, in the football market, each bookmaker can be considered a predictor, and the average odds as the aggregate information across predictors. These predictions also include the preferences and opinions of the punters regarding the probability of the outcome, because they exert pressure on the price of the odds through their collective betting (bookmakers often alter odds based on demand level to keep a balanced book, e.g. when they increase the odds for a favourite when a disproportionate amount of punters place money on the underdog). As bookmakers already posses excellent predictive models to estimate the outcomes of football games, competing with them at forecasting game outcomes becomes a challenging task. 
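The averaging effect behind this accuracy is easy to reproduce in a toy simulation. The snippet below is purely illustrative; its noise model (independent errors around the true probability for each bookmaker) is an assumption, not something estimated from the betting data.

# Toy "wisdom of crowds" simulation: the mean of several noisy probability
# estimates is, on average, closer to the true probability than a single estimate.
import numpy as np

rng = np.random.default_rng(0)
n_games, n_bookmakers, noise = 100_000, 10, 0.05

p_true = rng.uniform(0.05, 0.95, size=n_games)
estimates = np.clip(p_true[:, None] + rng.normal(0.0, noise, (n_games, n_bookmakers)),
                    0.01, 0.99)

err_single = np.mean(np.abs(estimates[:, 0] - p_true))          # one "bookmaker" alone
err_consensus = np.mean(np.abs(estimates.mean(axis=1) - p_true))  # aggregate of all ten
print(err_single, err_consensus)   # the consensus error is roughly sqrt(10) times smaller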
Not surprisingly, previous attempts to beat the football market with expert strategies showed inconsistent returns (Boulier et al., 2006;Daunhawer et al., 2017;Deschamps & Gergaud, 2012;Forrest et al., 2005;Kain & Logan, 2014;Vlastakis et al., 2008Vlastakis et al., , 2009 . In comparison, our strategy showed positive and sustained returns over years of betting with historical data and over months of betting actual money across leagues in the football market. Through our experiments we demonstrated the existence of a betting strategy that consistently generates profit. Some scholars consider that the existence of one such strategy is inconsistent with the putative "efficiency" of the betting market (A. Deschamps & Gergaud, 2012;Vlastakis et al., 2009) . If, on the contrary, a strategy like ours generates profit consistently either by outperforming bookmakers' predictions or by exploiting market flaws then the betting market is necessarily "inefficient". Our results suggest that the online football betting market is inefficient because our strategy was able to obtain sustained profits over years with historical data and over months of paper trading and actual betting. In practice, however, the inefficiency of the football betting market was compensated by the bookmakers' restrictive practices. A few months after we began placing bets with real money bookmakers limited our accounts, which forced us to stop our betting completely. Although our betting activities were legal and were conducted according to the bookmakers' rules, our bet stakes were nevertheless restricted. Our case illustrates some of the discriminatory practices of the online sports betting marketthe sports betting industry has the freedom to publicize and offer odds to their clients, but those clients are expected to lose and, if they are successful, they can be restricted from betting. In comparison, the limits to the accounts imposed in the online gambling industry constitute illegal practices in other industries, or are even unlawful in general. For example, advertising goods or services with intent not to sell them as advertised, or advertising goods or services with no intent to supply reasonably expectable demand but with the intention to lure the client to buy another product (a practice, often called "bait" or "bait and switch" advertising) is considered false advertising and carries pecuniary penalties in the UK, Australia and the United States of America . Most countries have laws regulating advertising in 5 the gambling industry, but some of these laws have been relaxed in recent years (e.g. the Gambling Act 2005 in the UK allowed the sports gambling industry to start advertising online and on TV) and they vary from country to country. Our study sets a precedent of the discriminatory practices against successful bettors in the online sports gambling industry: the 5 Consumer Protection from Unfair Trading Regulations (2008) Guidance, Interim: Guidance on the UK Implementation of the Unfair Commercial Practices Directive ( https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/284442/oft1008.pdf ) Australia: COMPETITION AND CONSUMER ACT 2010 -SCHEDULE 2 ( http://www.austlii.edu.au/au/legis/cth/consol_act/caca2010265/sch2.html ) US: FTC Guides Against Bait Advertising, Section 238. online football market is rigged because bookmakers discriminate against successful clients. 
We advocate for governments to take action into further regulating the sports betting industry, either by forcing bookmakers to publicly admit that successful clients will be banned from betting or by denying bookmakers the chance to discriminate against them. Tables Table 1. Results obtained with historical data, paper trading conditions and real betting. Dates Profit (U$D) Figure 1 . Bookmakers prediction power. A historical analysis of 10 years of football games shows the tight relationship between the bookmakers' predictions and the actual outcome of football games. The probability estimated by bookmakers (as reflected by the inverse of the mean closing odds across bookmakers) is highly correlated with the true probability of the outcomes of football games for home team victory (black dots), draw (magenta dots) and away team victory (blue dots). We analyzed 479,440 games from 818 leagues and divisions across the world during the period 2005-2015. Data was binned according to the estimated probabilities, from 0 to 1 in steps of 0.0125, and with a minimum of 100 observations per bin. Number of bets Figures 2 . Two analysis with historical data demonstrate the effectiveness of our betting strategy. A-We applied our strategy to the closing odds of 479,440 games and obtained a return of 3.5% in 56,435 bets. To assess the probability of obtaining a return greater than or equal to 3.5% by chance we performed a bootstrap analysis to estimate the distribution of returns for a "Random Bet Strategy". By placing bets at the highest offered odds at random games the "Random bet Strategy" yielded, on average, a return of -3.32%. In comparison, the return of our strategy was 10.82 standard deviations above the mean of the distribution of returns of the random bet strategy. The probability of obtaining a return greater than or equal to ours with a random bet strategy across 56,435 games is less than 1 in a billion. Data in this panel comes from a 10-year database (2005)(2006)(2007)(2008)(2009)(2010)(2011)(2012)(2013)(2014)(2015) of football games. The figure shows the potential total return assuming a constant $50 stake per bet. B) We applied the same bootstrap analysis as in A), but now to the time series of odds movements during the period [-5 -1] hours before the start of the games. The random bet strategy yielded an average return of 0.2%. In comparison, the return of our strategy was 9.9%, 4.80 standard deviations above the mean of the distribution of returns of the random bet strategy. The probability of obtaining a return greater than or equal to ours with a random strategy that bets on the maximum odds across 31,074 games is less than 1 in a million. Data in this panel comes from a 6-month database of football games (September 2015 -March 2016). Figure 3. Our betting strategy generated profit with paper trading and in a real-life betting (placing real stakes with bookmakers). We obtained a return of 5.5% for "paper trading" (blue line) and a return of 8.5% for real betting (see Table 1 for a detailed analysis) over a 5-month period of betting. Considering both paper trading and real betting we made a profit of $2,086 in 672 bets, a return of 6.2%. This was achieved by placing $50 on each bet. A-William Hill betting slip showing ( www.sports.williamhill.com/bet/en-gb ) a 2,428.33 yen limit on our bet (at the time this bet took place 5000 yen were equivalent to $50). 
B-Interwetten ( www.interwetten.com ) imposing a maximum bet of $11.11 C-Sportingbet ( http://www.sportingbet.com/ ) setting a maximum limit of $1.25 and D-Betway ( www.betway.com ) limiting our stakes to $10.45. Supplementary Materials Box 1. Glossary Bookmaker The bookmaker (or "bookie"), refers to the company that provides an odds market for betting and offers to pay a price for each possible outcome of a sporting event. Event This term denotes a specific match between two teams or individuals. For example: Ath. Bilbao vs Barcelona, Thursday, January 5, 21:15 GMT. Market A betting market is a type of betting proposition with two or more possible outcomes. The result of the match (home win, away win, or draw), the number of goals scored (two or less goals, three or more), or the time of the first goal are a few examples of different markets for a single sporting event. Stake The amount of money wagered in a single bet. Odds The odds of a result refer to the payoff to be received if the chosen result materializes. In this paper we use the European notation, where the odds are equal to the currency units to be received for each currency unit wagered. For example, if an outcome offers odds of 2 means that for each dollar wagered the house will pay 2 back, giving a profit of 1 dollar per dollar invested. Fair odds Fair odds for an outcome are the ones that result in a zero expected payoff. For example, if the probability of the outcome is ½ , the fair odds would be 2, because E(payoff) = ½ x (2 -1) + ½ x (-1) = 0. In general for the odds of an outcome to be fair, they should be the inverse of the probability of the outcome. Result/Outcome The actual outcome of an event. E.g. in a 1x2 soccer bet, local win, draw, and away win are the three possible results. If the result that comes about coincides with the chosen result of a bet, the gambler wins the odds times the stake, otherwise he loses the whole stake. Profit The amount of additional money the bettor receives on top of his stake if he chooses the result that actually happens. For example, if the odds are 2 and the stake is 10 dollars, the gambler receives 20 dollars in total from the bookie, and the profit is 10 dollars. Yield A measure of the profitability of a series of bets, it is calculated as the sum of the profits made from all the bets placed divided by the sum of the money staked in all bets, usually expressed as a percentage. For example, if after 10 bets of $1 each there is a net profit of $1.50, the yield is (1.5/10) = 0.15=15%. A B Supplementary Figure 1. A) Screenshot of the online dashboard displaying the games that were selected for betting. The dashboard displayed the names of both teams, league name, bet value at which to place the bet for the strategy to work and the time remaining until the start of the game. B) A second tab in the Dashboard was used to keep track of the bets list. There the dashboard displayed the names of the participating teams, football league, the result that was backed by the bet (1: home team to win, 2: away team to win), final result of the game, odds value for the bet, the bookmaker that was used for each bet, the amount of money placed on each bet (we employed U$D 50 throughout), the result of the bet (1: bet won; -1: bet lost) and the profit obtained from each bet. Some of the games used for paper trading are displayed in this figure.
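To make the selection rule of the Methods section concrete, the following is a minimal sketch of how a single game could be screened for mispriced odds. It is not the real-time system described above (which ran continuously against 32 bookmakers); the data layout, the function names and the example odds are all illustrative, with α = 0.05 as selected in the Methods.

# Sketch of the bet-selection rule: estimate the consensus probability from the
# mean odds across bookmakers and flag a result as a betting opportunity when
# the best available odds exceed 1 / (p_cons - alpha), cf. Eq. (7).
def consensus_probability(odds_list):
    """Inverse of the mean decimal odds across bookmakers for one result."""
    return len(odds_list) / sum(odds_list)

def find_opportunities(game_odds, alpha=0.05, min_bookmakers=3):
    """game_odds: {'home': [...], 'draw': [...], 'away': [...]} decimal odds."""
    picks = []
    for result, odds_list in game_odds.items():
        if len(odds_list) < min_bookmakers:
            continue                        # not enough quotes for a reliable consensus
        p_cons = consensus_probability(odds_list)
        best = max(odds_list)
        if p_cons - alpha > 0 and best > 1.0 / (p_cons - alpha):
            picks.append((result, best))    # positive expected payoff under the rule
    return picks

# Example: odds quoted by several bookmakers for one fictitious game; the outlying
# home quote of 2.60 is flagged as mispriced relative to the consensus.
example = {"home": [2.05, 2.10, 2.08, 2.60], "draw": [3.3, 3.4, 3.35], "away": [3.6, 3.5, 3.7]}
print(find_opportunities(example))          # [('home', 2.6)]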
2017-10-08T11:44:35.000Z
2017-10-08T00:00:00.000
{ "year": 2017, "sha1": "d65ec3c62643efdd91003f0710bc87e5cdcf6455", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d65ec3c62643efdd91003f0710bc87e5cdcf6455", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
116730912
pes2o/s2orc
v3-fos-license
Study on the Method for Analyzing Electric Network Resonance Stability : With the increasing utilization of power electronic equipment in power systems, there has been an increase in the occurrence of oscillatory behavior from unknown sources in recent years. This paper puts forward the concept of electric network resonance stability (ENRS) analysis and tries to classify the above-mentioned oscillations into the category of ENRS. With this method, many complex power system oscillations can be analyzed with the linear network theory, which is mathematically mature. The objective of this paper is to establish a systematic approach to analyze ENRS. By introducing the s-domain nodal admittance matrix (NAM) of the electric network, this paper transforms the judgment of ENRS into the zero-point solution of the determinant of the s-domain NAM. First, the zero-points of the determinant of the s-domain NAM are proved to correspond to the eigenvalues of the system. Then, a systematic approach is proposed to analyze ENRS, which includes the identification of the dominant resonance region and the determination of the key components related to resonance modes. The effectiveness of the proposed approach for analyzing ENRS is illustrated through case studies. Introduction Oscillations are common phenomena in power system operations [1]. Generally, power system oscillations can be divided into three categories: (1) oscillations of the generator shaft (torsional interaction); (2) oscillations among the generator rotors; and (3) inherent resonance in the electric network. With the increasing utilization of power electronic equipment in power systems, the third category-namely the inherent resonance in the electric network-becomes more severe and complex. For some electronic equipment, a negative resistance effect appears within a particular frequency range, which may lead to resonance instability. In the Hebei Province of China, since 2011 the interaction of doubly-fed induction generator (DFIG) wind turbines with nearby series capacitors has often caused subsynchronous oscillations with a typical frequency of 3-10 Hz. This leads to abnormal vibrations of transformers and wind generator tripping [2,3]. Moreover, unattenuated subsynchronous oscillations once occurred in the dc side of the Xiamen VSC-HVDC project in the Fujian Province, with typical frequency of about 25 Hz [4]. Furthermore, torsional oscillations once appeared in the Xinjiang Region, which is regarded as an interaction between generator shafts and the local power grid upon the integration of permanent magnetic synchronous generator (PMSG) wind farms [5]. Oscillations in the scenario of wind farm integration through a modular multilevel converter-based HVDC (MMC-HVDC) have also been reported [6,7]. Therefore, the inherent resonance structure and its relevant stability in power systems should be fully researched. The inherent resonance structure of an electric network contains information on four aspects of the network: (1) the number of resonance modes within a target frequency range, and for each of the Definition of System Eigenvalue The dynamic behavior of an electric network is governed by the Kirchhoff current law (KCL), the Kirchhoff voltage law (KVL), and the dynamical equations of any energy-storage elements (e.g., capacitors, inductors). The former two are algebraic equations, while the latter one is a first-order differential equation. 
When an electric network consists of components of lumped and time-invariant parameters, a single-input single-output (SISO) state-space model can be established [15]. Based on the above-mentioned three equations, the model can be written as T dx/dt = Ax + b u_j, y_i = c x (1), where {A, T} together make up the system matrix of the state space; x is the vector of state variables, including the states of the energy-storage components (for example, capacitor voltages and inductor currents); y_i is the voltage of the i-th node; u_j is the injected current at the j-th node; and both column vector b and row vector c are one-dimensional constant vectors. Applying the Laplace Transform to Equation (1), one can obtain sT x(s) = A x(s) + b u_j(s), y_i(s) = c x(s) (2). Then the transfer impedance in the s-domain is Z_ij(s) = y_i(s)/u_j(s) = c (sT − A)^(−1) b = c (sT − A)* b / det(sT − A) (3), where det(sT − A) and (sT − A)* represent the determinant and adjoint matrix of sT − A, respectively. According to the theory of linear algebra, det(sT − A) is the system's characteristic polynomial, and its zero-points are the system's eigenvalues. The system is stable if all the eigenvalues are located on the left half-plane of the complex plane. Note that as the network components are described by lumped and time-invariant parameters, the state-space model has a limited number of resonance modes. Eigenvalue Acquisition Based on the s-Domain Nodal Admittance Matrix Besides the state-space model, the network eigenvalues can also be acquired by the s-domain nodal admittance matrix. In the s-domain, the nodal voltage equation of the electric network is Y(s) V_node(s) = I_node(s) (4), where Y(s) is the s-domain nodal admittance matrix, V_node(s) and I_node(s) are the vectors of node voltages and node injection currents, respectively. Considering the same SISO system as in the previous subsection, suppose that the j-th node injection current is i_j(s) and the i-th node voltage is v_i(s). Hence, Equation (4) can be rewritten as Y(s) V_node(s) = b_j i_j(s), v_i(s) = c_i V_node(s) (5), where the column vector b_j has zero elements except for its j-th element which equals 1 (one); and similarly, row vector c_i contains 1 only in the i-th element. The transfer impedance in the s-domain can be calculated as Z_ij(s) = v_i(s)/i_j(s) = c_i Y(s)^(−1) b_j = c_i Y(s)* b_j / det[Y(s)] (6), where det[Y(s)] and Y(s)* represent the determinant and adjoint matrix of Y(s), respectively. Note that not all the elements of Y(s) are polynomials of s, for example, the operational admittance of an inductor has s in the denominator (1/sL). However, there will exist a proper integer m to make s^m det[Y(s)] a polynomial of s. Further compared with Equation (3), it can be seen that s^m det[Y(s)] is also the system's eigenpolynomial. Therefore, det(sT − A) = 0 and det[Y(s)] = 0 will have identical roots (except for s = 0). The nonzero roots of det[Y(s)] = 0 are actually the system's eigenvalues, which means that the state-space model and the s-domain nodal admittance matrix are consistent and both solve for the system's eigenvalues. It should also be noted that conditions for network components with lumped and time-invariant parameters are not required when establishing Y(s). Hence, compared to the state-space methods, the Y(s) method is more applicable in analyzing ENRS, especially for the general electric network with components of distributed or frequency-dependent parameters. It can be proven that an electric network with components of distributed parameters will have infinite resonance modes. Case Verification In this subsection, the consistency of using the state-space model and Y(s) in solving for a system's eigenvalues will be verified through a simple case.
For the RLC series circuit shown in Figure 1 (R < 2√(L/C)), the state-space model can be expressed in the form of Equation (7), and the system's eigenpolynomial is hence given by Equation (8). In the s-domain, the nodal admittance matrix of the circuit is given by Equation (9), which has the corresponding determinant of Equation (10). It is clear that Equations (8) and (10) have identical zero-points if s ≠ 0, which suggests that an identical result will be acquired when analyzing ENRS by both methods. Eigenvalue Solution Based on Y(s) Based on the analysis above, the solution for the eigenvalues of the electric network can be transformed into the solution of the following equation: det[Y(s)] = 0 (11). The solutions of Equation (11) can be found by the Newton-Raphson method, which is mathematically mature and its rapid convergence (in five or six iterations) has been widely recognized. The Newton-Raphson iteration expression in this case is given by [12]: s_(n+1) = s_n − [c_i Y(s_n)^(−1) b_j] / [c_i Y(s_n)^(−1) (dY(s_n)/ds) Y(s_n)^(−1) b_j] (12). The meaning of vectors c_i and b_j is the same as in Equation (5), and the indexes i and j can be chosen to be equal for efficiency. Note that the CPU time for building Y(s) and dY(s)/ds are practically the same, as their building rules are similar.
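A minimal numerical sketch of this eigenvalue search is given below. It applies the iteration written above to a small two-node R-L-C network chosen purely for illustration (it is not necessarily the circuit of Figure 1), with arbitrary parameter values; for these values the damped resonance sits near 1.6 kHz.

# Newton-Raphson search for a zero of det[Y(s)], cf. Equation (12), on an
# illustrative two-node network: R in shunt at node 1, L between nodes 1 and 2,
# C in shunt at node 2. Parameter values are arbitrary.
import numpy as np

R, L, C = 1.0, 1e-3, 1e-5

def Y(s):
    """s-domain nodal admittance matrix of the illustrative network."""
    return np.array([[1/R + 1/(s*L), -1/(s*L)],
                     [-1/(s*L),      1/(s*L) + s*C]], dtype=complex)

def dY(s):
    """Derivative of Y with respect to s."""
    return np.array([[-1/(s**2*L),  1/(s**2*L)],
                     [ 1/(s**2*L), -1/(s**2*L) + C]], dtype=complex)

def newton_eigenvalue(s0, node=1, tol=1e-9, max_iter=30):
    """Solve det[Y(s)] = 0 with i = j = node (0-based index)."""
    b = np.zeros(2, dtype=complex)
    b[node] = 1.0
    s = s0
    for _ in range(max_iter):
        Yinv_b = np.linalg.solve(Y(s), b)        # Y(s)^-1 b
        num = Yinv_b[node]                       # c Y^-1 b
        den = Yinv_b @ dY(s) @ Yinv_b            # c Y^-1 (dY/ds) Y^-1 b (Y is symmetric)
        step = num / den
        s = s - step
        if abs(step) < tol * max(1.0, abs(s)):
            break
    return s

# Starting near 1.5 kHz, the iteration converges in a few steps to the damped
# resonance mode of this network, approximately -500 + 9987j rad/s (about 1589 Hz).
s_k = newton_eigenvalue(1j * 2 * np.pi * 1500.0)
print(s_k, abs(s_k.imag) / (2 * np.pi), "Hz")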
, λ n ) is the diagonal matrix of the system's eigenvalues; Substituting Equation (13) into (4) and letting s = s k , one obtains Defining the mode voltage and mode injection current as respectively, then Equation (14) can be rewritten as Namely  Given . Hence Equation (15) can be approximately expressed as Hence, R 1 can be defined as the nodal voltage mode shape of the k-th resonance mode, which represents the relative magnitude and phase of each node voltage under the k-th resonance mode. The nodal voltage mode shape can be used to locate the dominant resonance region and resonance type. In R 1 , some of the elements may have significantly larger magnitude than others, which means that the corresponding nodes are much more affected and can hence be considered as the dominant resonance region of the k-th resonance mode. Additionally, elements of R 1 may have close phases (see Section 4.1 as an example) or inverse phases (see Section 4.2 as an example). The presence of close phases usually corresponds to parallel resonance, for example, resonance caused by a reactive load and a nearby distributed capacitance. The opposite phases, on the contrary, usually mean series resonance, for example, resonance caused by a series capacitor and nearby transmission lines. Equation (19) can be further written as where, P k = R 1 R T 1 is defined as the participation factor matrix of the k-th resonance mode; its element p k(ij) represents the contribution of j-th node injection current to the i-th node voltage when the k-th resonance mode is excited. Determination of the Key Components Among all the resonance modes in an electric network, the divergent modes are of the most concern, hence the key components that greatly affect these divergent modes should be determined. The determination of the key components is based on mode sensitivity analysis with respect to the parameters of these components. Theoretically speaking, mode sensitivity analysis can be directly executed if the target resonance mode is identified. However, in a large-scale electric network, it is quite difficult to analyze the mode sensitivity with respect to each of the system parameters. Hence, the identification of the dominant resonance region in the previous subsection, which contributes to reducing the range of mode analysis, is necessary. There is reason to believe that components in the dominant resonance region have a relatively large influence on the target resonance mode. Suppose that a component parameter p is assumed to vary, the s-domain nodal admittance matrix can be further considered as a function of p, namely Y(s, p). In reference [13], the sensitivity of the k-th mode with respect to p is given as where s k is the eigenvalue of the target resonance mode and p 0 is the initial parameter of the concerned component. The vectors c i and b j are defined in the same manner as in Equation (6). Theoretically, c i and b j can be chosen arbitrarily and they will not influence the calculation result. However, in an actual calculation, if the i and j do not correspond to the nodes in the dominant resonance region, the accuracy of the final result may be deteriorated. Hence, for better accuracy of numerical calculations, in the proposed method the i and j will be chosen to be the same as the most dominant node: the node having the largest magnitude in the nodal voltage mode shape (suppose it is the m-th node). 
Another advantage is that c_m Y(s, p)^(−1) and Y(s, p)^(−1) b_m are transposes of each other, which helps accelerate the calculation in reality. With this in mind, Equation (21) can be rewritten with i = j = m. The resulting sensitivity index is in actual units. Under certain situations, it is more desirable to have a normalized sensitivity index with relative units instead of actual ones, so that indexes calculated from different components are comparable. The acquired normalized mode sensitivities can be ranked to determine the key components related to the target resonance mode. For a certain mode sensitivity, its real and imaginary parts can also be separately considered. A relatively larger real part means the component mainly affects the damping factor of the resonance mode, while a relatively larger imaginary part means that the resonance frequency is more affected. After determining the key components related to concerned resonance modes, if needed, the relevant parameters can be adjusted, so that the system will have better resonance structures. This parameter adjustment method will be demonstrated in future work. Case Studies In this section, three cases will be analyzed to demonstrate the effectiveness of the proposed method in analyzing ENRS. The three cases are the original IEEE 39-bus system, the modified IEEE 39-bus system with an integrated wind farm, and a wind farm integration system with an MMC. Original IEEE 39-Bus System The diagram of the IEEE 39-bus system is shown in Figure 2. Other system data can be found in reference [17]. The results of ENRS analysis on this system with a frequency range of 0~1500 Hz are listed in Table 1, marked as original system. Note that all the transmission lines are treated with a distributed parameter model during processing. Table 1 shows that the system has nineteen resonance modes within the target frequency range, and the damping factor of each resonance mode is sufficiently positive, hence the system is resonantly stable.
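As a brief aside on implementation, the two indexes of the previous section (the nodal voltage mode shape and the participation factor matrix) can be extracted from Y(s_k) with a standard eigendecomposition. The sketch below is illustrative only: it reuses the small two-node network and the (rounded) eigenvalue s_k from the earlier Newton-Raphson sketch rather than the 39-bus system.

# Mode shape R1 and participation factor matrix P_k = R1 R1^T at a known
# resonance eigenvalue s_k, for the illustrative two-node network.
import numpy as np

R, L, C = 1.0, 1e-3, 1e-5
s_k = -500.0 + 9987.0j            # eigenvalue from the Newton-Raphson sketch (rounded)
Y_sk = np.array([[1/R + 1/(s_k*L), -1/(s_k*L)],
                 [-1/(s_k*L),      1/(s_k*L) + s_k*C]], dtype=complex)

lam, Rvec = np.linalg.eig(Y_sk)
idx = np.argmin(np.abs(lam))                         # eigenvalue numerically closest to zero
R1 = Rvec[:, idx] / np.linalg.norm(Rvec[:, idx])     # nodal voltage mode shape
P_k = np.outer(R1, R1)                               # participation factor matrix (no conjugation)

# |R1| ranks how strongly each node is excited (the dominant resonance region);
# the phases of R1 distinguish parallel-type from series-type resonance.
print(np.abs(R1), np.angle(R1, deg=True))
print(np.abs(P_k))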
Take the 131.6 Hz resonance mode as an example. Figure 3 shows the nodal voltage mode shape, and considering the layout restriction, the participation factor matrix is not given. The magnitude analysis of the nodal voltage mode shape helps to locate the dominant resonance region of the 131.6 Hz mode, which has been marked by a dashed grey rectangle in Figure 2. The phase analysis reveals that the voltage phases of all the buses are nearly the same, indicating that it is a parallel resonance. Further data inspection demonstrates that the resonance mode is caused by reactive loads and the nearby distributed capacitance in the dominant resonance region. Modified IEEE 39-Bus System In this subsection, a DFIG-based wind farm is connected to the original IEEE 39-bus system through a long-distance transmission line with 50% series compensation (Figure 4). This modified system aims at conceptually verifying the first case mentioned in Section 1, i.e., that subsynchronous resonance (SSR) can be triggered by a DFIG-based wind farm with a series-compensation line. The wind farm is aggregated by DFIG-based wind generators with a total rated capacity of 1500 MW. The detailed impedance model of a DFIG wind generator is given in reference [18]. The impedance-frequency characteristic of the aggregated wind farm for the frequency range 0-100 Hz is shown in Figure 5. It indicates that a negative resistance effect (∠Z_DFIG > 90°) appears in the frequency range of 25-45 Hz, which may lead to SSR if the series resonance frequency also lies in this frequency range. Strictly speaking, the impedance model of a DFIG wind generator cannot be represented in the form of Y(s), because the outer loop controls (OLCs) of the d and q axes in the voltage source converter are not symmetric. However, the response time of the OLCs is usually larger than 200 ms, which means that the OLCs can be ignored for the frequency range studied. The impedance model in reference [18] is deduced based on the simplification above. The results of ENRS analysis on the modified system for the frequency range 0-1500 Hz are also listed in Table 1, marked as modified system. Compared to the results from the original system, a new resonance mode with a negative damping factor (divergent) appears at a frequency of 29.8 Hz. In Figure 6, the nodal voltage mode shape shows that bus-A1 and bus-A2 are the dominant participation buses related to the unstable resonance mode.
Modified IEEE 39-Bus System
In this subsection, a DFIG-based wind farm is connected to the original IEEE 39-bus system through a long-distance transmission line with 50% series compensation (Figure 4). This modified system aims at conceptually verifying the first case mentioned in Section 1, i.e., that subsynchronous resonance (SSR) can be triggered by a DFIG-based wind farm with a series-compensation line.
The wind farm is aggregated from DFIG-based wind generators with a total rated capacity of 1500 MW. The detailed impedance model of a DFIG wind generator is given in reference [18]. The impedance-frequency characteristic of the aggregated wind farm for the frequency range 0-100 Hz is shown in Figure 5. It indicates that a negative resistance effect (∠Z_DFIG > 90°) appears in the frequency range of 25-45 Hz, which may lead to SSR if the series resonance frequency also lies in this range. Strictly speaking, the impedance model of a DFIG wind generator cannot be represented in the form of Y(s), because the outer loop controls (OLCs) of the d and q axes in the voltage source converter are not symmetric. However, the response time of the OLCs is usually larger than 200 ms, which means that the OLCs can be ignored for the frequency range studied here. The impedance model in reference [18] is deduced based on this simplification.
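A simple way to locate the negative-resistance region noted above is to scan the impedance phase and collect the frequency intervals where its magnitude exceeds 90°, since |∠Z| > 90° is equivalent to Re{Z} < 0. The impedance samples below are synthetic and only stand in for the DFIG characteristic of Figure 5.

```python
import numpy as np

def negative_resistance_bands(freq_hz, z, phase_limit_deg=90.0):
    """Find frequency intervals where an impedance behaves as a negative resistance.

    freq_hz : array of frequencies at which the impedance was evaluated
    z       : complex impedance samples at those frequencies
    A phase magnitude above 90 degrees means Re{Z} < 0, i.e., the device injects
    energy at that frequency and can destabilize a nearby series resonance.
    """
    phase = np.degrees(np.angle(np.asarray(z, dtype=complex)))
    mask = np.abs(phase) > phase_limit_deg
    bands, start = [], None
    for f, bad in zip(freq_hz, mask):
        if bad and start is None:
            start = f
        elif not bad and start is not None:
            bands.append((start, f))
            start = None
    if start is not None:
        bands.append((start, freq_hz[-1]))
    return bands

# Synthetic impedance sweep for illustration only (not the model of reference [18]).
f = np.linspace(1, 100, 200)
z = (0.05 - 0.4 * np.exp(-((f - 35) / 8) ** 2)) + 1j * 0.02 * (f - 50)
print(negative_resistance_bands(f, z))   # expect one band roughly in the 25-45 Hz region
```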
The results of ENRS analysis on the modified system for the frequency range 0-1500 Hz are also listed in Table 1, marked as modified system. Compared to the results from the original system, a new resonance mode with a negative damping factor (divergent) appears at a frequency of 29.8 Hz. In Figure 6, the nodal voltage mode shape shows that bus-A1 and bus-A2 are the dominant participation buses related to the unstable resonance mode. As bus-A1 and bus-A2 are the connecting points of the wind farm and the series capacitor, respectively, the previous expectation that a DFIG-based wind farm with a series-compensation line may cause SSR can be verified.
Further sensitivity analysis of the newly-appeared unstable resonance mode is listed in Table 2. It shows that the stability (damping factor) is most influenced by the line resistance. Considering that adjusting R_L is not applicable in real projects, applying a virtual impedance control at a certain frequency may be an optional choice to increase the system's stability.
Table 2. Mode sensitivity analysis of the modified system.
Apart from the new resonance mode, the existing resonance modes may be changed after the connection of the DFIG-based wind farm. However, the changes are negligible, hence they do not influence the system's stability.
Wind Farm Integration System with an MMC
In this subsection, the scenario of wind power integration through a modular multilevel converter (MMC) will be studied. The relevant case is shown in Figure 7, which is part of a newly planned renewable power exploitation project in China. In this case, four clustered wind power plants, with rated capacities of 400 MW, 100 MW, 300 MW and 400 MW, respectively, form an islanded sending system and are connected to the MMC. The wind power plants are assumed to have a DFIG structure (the impedance characteristic is shown in Figure 5). Line parameters and MMC parameters are listed in Tables 3 and 4, respectively.
The impedance-frequency characteristic is acquired by using the test signal method [19]. Considering that the result of the test signal method is discrete, numerical fitting is executed so that the result can be used in the proposed method. Figure 8 shows the results of the test signal method and the fitting curves.
The results of ENRS analysis on this system are listed in Table 5. As the system topology is relatively simple, only two resonance modes are found. The 126.6 Hz mode is sufficiently stable, while the 42.7 Hz mode is slightly divergent. Figure 9 shows the nodal voltage mode shape of the 42.7 Hz mode. Table 6 further shows its per-unit participation matrix (only the upper triangular components are listed, considering the symmetry). Both Figure 9 and Table 6 indicate that this resonance mode appears significantly among all six buses, with similar phases. That is, each connection point of the DFIG or MMC participates in, and is influenced by, this resonance mode. The mechanism of this kind of resonance is quite complex, and the relevant suppression method will be demonstrated in future work that combines sensitivity analysis.
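The numerical fitting step mentioned above for the discrete test-signal results might, in its simplest form, look like the sketch below, which fits the real and imaginary parts separately with polynomials so that Z(f) can be evaluated at any frequency. Production tools typically prefer rational (vector-fitting) models instead, and the measurement values here are synthetic placeholders for the data of Figure 8.

```python
import numpy as np

def fit_impedance(freq_hz, z_meas, deg=6):
    """Fit smooth curves through discrete impedance measurements from a test-signal scan.

    A simple per-part polynomial fit in frequency is used only to obtain a continuous
    Z(f) that the nodal-analysis code can evaluate at arbitrary s = j*2*pi*f.
    """
    f = np.asarray(freq_hz, dtype=float)
    z = np.asarray(z_meas, dtype=complex)
    p_re = np.polyfit(f, z.real, deg)
    p_im = np.polyfit(f, z.imag, deg)
    return lambda fq: np.polyval(p_re, fq) + 1j * np.polyval(p_im, fq)

# Synthetic "measurements" standing in for the discrete test-signal results.
f_pts = np.linspace(5, 300, 30)
z_pts = (0.4 + 1e-5 * f_pts**2) + 1j * (0.06 * f_pts - 50 * np.exp(-f_pts / 100))
z_fit = fit_impedance(f_pts, z_pts)
print(z_fit(42.7))   # the fitted characteristic can now be sampled at any frequency
```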
Multiple K‐ras Mutations in Hyperplasia and Carcinoma in Cases of Human Pancreatic Carcinoma Mucous cell hyperplasia (MCH) has been considered an important precursor of pancreatic ductal carcinoma based on histological and molecular research, although various K‐ras mutations rates are seen among cases with pancreatic carcinoma, chronic pancreatitis and normal pancreas, with a wide range of histological characters. To investigate the premalignant potential of MCH and the multicentricity of pancreatic carcinoma, we analyzed K‐ras mutation at codon 12 in carcinoma foci of 82 cases of surgically‐resected pancreatic carcinoma [67 solid‐type carcinomas (SCs) and 15 ductectatic‐type carcinomas (DCs)], as well as in both MCH and carcinoma foci in 42 cases (30 SCs and 12 DCs), using an enriched polymerase chain reaction (PCR)‐enzyme linked mini‐sequence assay (ELMA). K‐ras mutation was recognized in 85% (57/67) of SCs and 73% (11/15) of DCs, and multiple K‐ras mutations in 12% (8/67) of SCs and in 20% (3/15) of DCs. Multiple K‐ras mutations were also recognized in MCHs in 47% (14/30) of SCs and in 42% (5/12) of DCs. Moreover, the same sequence at K‐ras codon 12 in MCH and carcinoma was identified in 76% (32/42) of carcinoma cases and it was more frequently recognized in hyperplasias with histological atypia (51%, 37 of 72 foci) than those without atypia (24%, 16 of 68 foci) (P < 0.0007). These results further support the idea of multicentric carcinogenesis and premalignant potential of atypical hyperplasia in the human pancreas, although about half of the hyperplasias around carcinomas were not thought to be direct precursors. Histological and molecular analytical aspects of carcinogenesis are of clinical importance to detect early cancers. Mucous cell hyperplasia (MCH), especially atypical hyperplasia, of the pancreatic duct has been thought to be a significant precursor of pancreatic ductal carcinoma, because it is frequently observed as physically continuous with cancer tissue [1][2][3] and the mucus characteristics of MCH are similar to those of carcinomas. 3) It has also been reported that a high incidence of K-ras mutation at codon 12 was detected in MCH, [4][5][6][7] as well as in pancreatic ductal carcinoma, [7][8][9][10][11][12][13][14] and the same sequence of K-ras mutations in MCH and carcinoma was frequently (8 of 10 cases) recognized in a single pancreas. 6) Therefore MCH with K-ras mutation might represent a high-risk precursor of invasive carcinoma. However, it is still unclear if all K-ras mutant hyperplasias surrounding carcinoma are precancerous. Multicentricity of pancreatic ductal carcinoma has been reported in 16-38% of patients by histological examination. [15][16][17][18][19] It is also supported by previous reports showing multiple K-ras mutations in 3-17% of pancreatic can-cers. [11][12][13][14] However, few studies have analyzed sequences of K-ras codon 12 in both MCH and carcinoma in the same pancreas with large numbers of cases and foci. This time, to investigate pancreatic multicentric carcinogenesis, we analyzed K-ras mutation in large numbers of MCH and carcinoma foci in cases with pancreatic cancer, using a highly sensitive and non-isotopic assay for detecting Kras codon 12 mutation. 
MATERIALS AND METHODS
Human pancreatic specimens
Specimens were selected from consecutively recorded cases in our archives at the First Department of Pathology, Niigata University School of Medicine: 67 surgical cases of solid-type carcinoma (SC)7, 20, 21) (equivalent to "ductal adenocarcinoma" in the WHO classification22) and the IPCSG classification23)) (34 men and 33 women, mean age ± SD: 66.1 ± 8.1, range 44-79 years old) and 15 surgical cases of ductectatic-type carcinoma (DC)7,20) [equivalent to "intraductal (papillary-mucinous) carcinoma" and "mucinous cystadenocarcinoma"] (12 men and 3 women, mean age ± SD: 68.8 ± 8.1, range 48-77 years old). All specimens were fixed in 10% formalin, serially sliced into 3 to 4 mm sections, and embedded in paraffin. The clearest sections (1-4 sections per case) for analyzing target foci were serially sliced into specimens for hematoxylin-eosin (HE) staining and DNA extraction.
Classification of pancreatic ductal epithelia
In HE specimens, pancreatic ductal epithelia were divided into the following five groups: ordinary epithelium, MCH of the non-papillary type [including "adenomatoid ductal hyperplasia," "non-papillary epithelial hypertrophy" and/or "mucinous cell hypertrophy"]22,23) and of the papillary type (Fig. 1, C-G), and carcinoma, including intraductal (Fig. 1H) and invasive carcinoma (Fig. 1B). The presence of atypia in MCH (Fig. 1D) was examined, referring to previous reports,[1-3,22-24] with the following histological criteria: high columnar epithelia, forming flat or papillary structures with fibrous cores (sometimes not seen in diagonally sliced specimens), sometimes crowded, with (partial or) no loss of polarity, basically basally located and round or irregular-shaped enlarged nuclei, with either a vesicular or a dense hyperchromatic but not large dot-type chromatin pattern, and with sporadically or focally aggregated basophilic nucleoli. Extracted DNA was dissolved in 50 µl of distilled water (Kanto Chemical, Tokyo) and kept at −80°C.
Analysis of K-ras mutation at codon 12
Point mutation of K-ras codon 12 was analyzed by an enriched polymerase chain reaction (PCR)-enzyme linked mini-sequence assay (ELMA) (Sumitomo Metal Industry, Inc., Tokyo) (Fig. 2). This method made it possible to identify the sequences at K-ras codon 12 with high sensitivity and without isotopes within one working day. The oligonucleotide primers were as follows: upstream for the first and second PCR, 5′-TAAACTTGTGGTAGTTGGAACT-3′; downstream for the first PCR, 5′-GTTGGATCATATTCGTACAC-3′; and a separate downstream primer for the second PCR. The second PCR was run for 40 cycles under the same conditions and with the same initial denaturation as the first PCR. The second PCR product was mixed with 50 µl of denaturing solution, and 10 µl of the denatured PCR product was hybridized with probes in a 96-microwell plate, on which oligonucleotide probes for detecting the K-ras codon 12 wild-type (GGT) and six mutant (GAT, GCT, GTT, AGT, CGT and TGT) DNA sequences were immobilized, at 55°C for 30 min. After washing of the hybridized product, 100 µl of biotinated A and 0.01 U of Taq DNA polymerase were added and incubation was continued at 55°C for 30 min. To develop color, 100 µl of avidin-horseradish peroxidase conjugate was added and the mixture was kept at room temperature for 30 min. After a washing step, 100 µl of tetramethylbenzidine (TMB) substrate was added and the plates were kept in the dark at room temperature for 20 min.
Finally, 100 µl of stop solution was added and the light absorbance of each sample was measured by spectrophotometry (Multiskan Multisoft, Labsystems, Tokyo) with a 450 nm filter wavelength.
Sampling of nonneoplastic and neoplastic pancreatic ductal epithelia analyzed for K-ras mutation
In preliminary studies, the reliability of this method was ascertained by repeated analysis using DNAs extracted from cell lines, the K-ras sequences of which were already known. In addition, using various mixtures of several kinds of plasmid DNAs, the sensitivity of this method was determined to be a 0.2-2% mutant DNA concentration.
Statistical analysis
Differences in age were evaluated by use of the Mann-Whitney U-test. Differences in gender, frequency of K-ras mutation, and MCH with atypia were evaluated with the χ2 test (two-sided). A probability value of less than 0.05 was considered statistically significant.
Relation of K-ras mutation in pancreatic ductal hyperplasia and carcinoma
MCH having the same mutation sequence at K-ras codon 12 as the carcinoma was recognized in 32 of 42 pancreatic carcinoma cases [19 of 27 (70%) K-ras mutant SCs, all of 3 K-ras wild-type SCs, 7 of 9 (78%) K-ras mutant DCs and all of 3 K-ras wild-type DCs]. Furthermore, the same K-ras sequence as in the carcinoma was identified in 72 of 140 (51%) MCH foci, increasing from 40% (35/87) of MCH foci without atypia to 70% (37/53) of those with atypia (P = 0.0007) (Table III).
DISCUSSION
Our previous study,7) which did not analyze the mutational type, showed an increasing incidence of K-ras mutation at codon 12 from non-papillary hyperplasias to papillary ones, which supports the hypothesis of a sequence from non-papillary through papillary hyperplasias to carcinoma as the main route of pancreatic carcinogenesis. In the current study, to further test the hypothesis, we analyzed the sequence of K-ras codon 12 in pancreatic epithelial foci with large numbers of samples. As a result, the same type of K-ras mutation as in the carcinoma was frequently recognized in MCHs, with increasing incidence from foci without atypia to those with atypia, which is compatible with the report by Moskaluk et al.6) Yanagisawa et al.4) have reported frequent K-ras mutation in hyperplasia without nuclear atypia (10 of 16 foci, 63%) in cases of chronic pancreatitis. Hence, K-ras activation is thought to occur at a very early stage of pancreatic carcinogenesis and may not directly indicate high premalignant potential. From our data, atypical hyperplasias (not all K-ras mutant hyperplasias) are thought to be direct precursors of pancreatic carcinomas. In our current study, we also detected multiple K-ras mutations in hyperplasias (42-47% of cases) as well as in carcinomas (12-20% of cases). Previous papers have reported multiple K-ras mutations in 3-17% of pancreatic carcinomas[11-14] and histological multicentricity in 16-38%.[15-19] In our data, about half (68/140) of the MCHs surrounding carcinoma differed from the carcinoma in terms of K-ras sequence. Of these 68 MCHs, 49 (72%) foci were K-ras mutants. In previous reports, K-ras mutation rates of hyperplastic foci in the normal pancreas ranged from 18% (9/51)7) to 24% (19/79).5) Therefore, it is possible that K-ras mutant hyperplasia is more likely to develop in pancreata having carcinoma than in the normal pancreas. Klöppel et al.
24) reported that a high incidence of papillary hyperplasias was recognized in mild to moderate obstructive pancreatitis caused by carcinoma invasion in the head of the pancreas. Brentnall et al. 25) reported that patients with pancreatic cancer or pancreatitis, in whom the pancreatic juice was positive for K-ras mutation, also showed microsatellite instability. Furthermore, the amount of reactive oxygen species, which are thought to induce DNA damage, is high in pancreatitis, 26,27) and such mutagenic stimuli may increase more in a pancreas with cancer. From the results of Furuya et al., 28) it is not certain if carcinoma develops from K-ras mutant hyperplasia, although that was the conclusion drawn by Ochi et al. 29) based on the results of follow-up of a case of chronic pancreatitis in which the pancreatic juice harbored K-ras mutations. Similarly, it is unknown if a second cancer develops from K-ras mutant hyperplasia surrounding an existing cancer within the patient's life span, though the presence of multiple K-ras mutations in carcinoma and hyperplasia foci suggests that this may be possible. The proportions of mutant sequences at K-ras codon 12 in carcinoma lesion were compatible with previous reports and most were GTT, GAT or CGT (Table I). Interestingly, the K-ras mutation type in MCH foci was almost the same as in carcinoma foci (Table II), which also supports the hyperplasia-carcinoma sequence in human pancreas. Tada et al. 5) reported that K-ras mutation types in carcinoma foci were GAT (53%), GTT (33%) and CGT (14%), in contrast to the dominance of TGT (37%) and AGT (16%) type mutation in MCHs in normal pancreas. Although we did not analyze MCHs in normal pancreata in the current study, at least hyperplasias having K-ras codon 12 mutation of GTT, GAT or CGT type are thought to have some degree of premalignant potential. Our results support the hypothesis of pancreatic carcinogenesis via hyperplasia, though further investigations on mutagenic stimuli and molecular alterations in atypical hyperplasias and carcinomas are needed to detect early cancers.
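As a side note, the focus-level comparison reported in the results section (37 of 53 atypical versus 35 of 87 non-atypical MCH foci sharing the carcinoma's codon 12 sequence) can be checked with a standard 2x2 chi-square test. The sketch below is only a recomputation of the published counts, not part of the original analysis.

```python
from scipy.stats import chi2_contingency

# Rows: MCH foci with / without atypia.
# Columns: same K-ras codon 12 sequence as the carcinoma / different sequence.
# Counts are taken from the results section (37 of 53 vs. 35 of 87 foci).
table = [[37, 53 - 37],
         [35, 87 - 35]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# Without continuity correction this gives p of about 0.0007, consistent with the reported value.
```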
Therapeutic Action of the Mitochondria-Targeted Antioxidant SkQ1 on Retinopathy in OXYS Rats Linked with Improvement of VEGF and PEDF Gene Expression The incidence of age-related macular degeneration (AMD), the main cause of blindness in older patients in the developed countries, is increasing with the ageing population. At present there is no effective treatment for the prevailing geographic atrophy, dry AMD, whereas antiangiogenic therapies successful used in managing the wet form of AMD. Recently we showed that mitochondria-targeted antioxidant plastoquinonyl-decyl-triphenylphosphonium (SkQ1) is able to prevent the development and moreover caused regression of pre-existing signs of the retinopathy in OXYS rats, an animal model of AMD. Here we examine the effects of SkQ1 on expression of key regulators of angiogenesis vascular endothelial growth factor A (VEGF) and its antagonist pigment epithelium-derived factor (PEDF) genes in the retina of OXYS rats as evidenced by real-time PCR and an ELISA test for VEGF using Wistar rats as control. Ophthalmoscopic examinations confirmed that SkQ1 supplementation (from 1.5 to 3 months of age, 250 nmol/kg) prevented development while eye drops SkQ1 (250 nM, from 9 to 12 months) caused some reduction of retinopathy signs in OXYS rats and did not reveal any negative effects on the control Wistar rat's retina. Prevention of premature retinopathy by SkQ1 was connected with an increase of VEGF mRNA and protein in OXYS rat's retina up to the levels corresponding to the Wistar rats, and did not involve changes in PEDF expression. In contrast the treatment with SkQ1 drops caused a decrease of VEGF mRNA and protein levels and an increase in the PEDF mRNA level in the middle-aged OXYS rats, but in Wistar rats the changes of gene expression were the opposite. Conclusions: The beneficial effects of SkQ1 on retinopathy connected with normalization of expression of VEGF and PEDF in the retina of OXYS rats and depended on age of the animals and the stage of retinopathy. Introduction Age-related macular degeneration is the most important cause of impaired vision and blindness in the aging population. But its pathogenesis remains poorly understood. The prevalence of agerelated diseases is increasing dramatically as the proportion of the elderly in the population of developed countries continues to rise. Therefore, the development of effective therapeutic and prophylactic agents for AMD is urgently needed. Animal research as well as observational studies suggests that antioxidant supplementation can slow down aging of the eye and possibly provide some protection against AMD [1,2]. Indeed, oxidative stress is implicated in the aging process and pathogenesis of a wide range of age-related disorders, including AMD. Nonetheless, randomized controlled trials show that antioxidant supplements do not prevent early AMD [3]. There is no evidence that antioxidant supplementation can reduce pre-existing signs of AMD. Moreover, recent human studies reported an increased risk of adverse effects associated with increased intakes of beta-carotene and vitamin E [4]. At present there are tens of thousands of natural and synthetic compounds that possess an antioxidant activity and the number is growing. 
Correct assessment of antioxidant effects in humans is difficult because of many factors, e.g., the long life span, high cost of randomized controlled trials, ethical issues, and difficult-tocontrol variables such as quality of life and individual differences in nutrition and age-related health problems. Fortunately, animal models can be used successfully to study biological effects of antioxidants as well as molecular mechanisms and pathways underlying those effects. We showed previously that the senescence-accelerated OXYS rat strain is a suitable model for studying the pathogenesis of AMD and for identifying the relevant therapeutic targets [5][6][7][8][9]. The OXYS rat strain was developed at the Institute of Cytology and Genetics, Siberian Branch of the Russian Academy of Sciences, from Wistar stock by selection for susceptibility to cataractogenic effects of galactose as described previously [10,11]. Progressive mitochondrial dysfunction is considered a possible cause of accelerated senescence in OXYS rats [12,13] and a possible source of enhanced reactive oxygen species (ROS) production in the tissues of these animals [14]. And, indeed, dietary supplementation with antioxidants can prevent the premature deterioration of mitochondrial function typical for OXYS rats. Recently, it was shown that the mitochondria-targeted antioxidant SkQ1 (plastoquinonyl-decyl-triphenylphosphonium), a conjugate of a lipophilic decyltriphenylphosphonium cation with an antioxidant moiety of a plastoquinone [15], at nanomolar concentrations is capable of preventing some consequences of accelerated senescence in OXYS rats [7]. According to our data, addition of the above-mentioned SkQ1 amounts to food or treatment with SkQ1 eye drops not only completely prevents development of retinopathy but also reduces the severity of preexisting pathological changes in the retina in OXYS rats [7]. This implies improvement of the animals' vision. It was reported that the effects of SkQ1 include improvement of retinal pigment epithelium functions and reduction of lipofuscin accumulation in the OXYS rat retina [8]. The molecular mechanisms underlying the effects of SkQ1 have yet to be investigated. It is known that in the course of aging, and especially during the development of AMD, an impairment of retinal angiogenesis occurs [16]. It is also known that the key regulators of angiogenesis, vascular endothelial growth factor A (VEGF-A, hereinafter referred to as VEGF) and pigment epithelium-derived factor (PEDF), are involved in the pathogenesis of AMD [17][18][19][20]. Recently, we reported accelerated reduction of VEGF and PEDF gene expression in the retina of OXYS rats with age in comparison with control (Wistar) rats [9], due to early alterations in RPE cells and choroid vessels in OXYS rats. We supposed that such changes are prerequisite for the development of retinopathy. The purpose of this study was to assess the effect of SkQ1 on expression of VEGF and PEDF genes in the retina of OXYS rats and to compare it with therapeutic effects of this antioxidant on the signs of retinopathy in these animals. We investigated the prophylactic effect of SkQ1 when added to food between the ages of 1.5 to 3 months, which is the period of manifestation of early signs of retinopathy [9]. In addition, we tested SkQ1 eye drops as a possible therapeutic intervention from age 9 to 12 months, when early alterations in the retina of OXYS rats actively progress to the second stage of retinopathy [9]. 
Ophthalmoscopic examination
Preliminary examination of rats at the age of 1.5 months showed that the same percentage of eyes in the experimental and control groups of OXYS rats had signs of the 1st stage (1 a.u.) of retinopathy (21 and 22%, respectively). SkQ1 supplementation of food (from 1.5 to 3 months of age, 250 nmol/kg) prevented development of retinopathy in OXYS rats (Fig. 1). Therefore, at the age of 3 months only 18% of OXYS rats' eyes had signs of the 1st stage of retinopathy, whereas in 82% of eyes the signs of the disease were not detected. In contrast, in untreated OXYS rats retinopathy developed actively, and at the age of 3 months, 83% and 17% of eyes had the signs of the 1st and 2nd stage of retinopathy, respectively.
When assessing the therapeutic effect of SkQ1 eye drops during the first examination of 9-month-old OXYS rats, all animals had signs of retinopathy in at least one of the eyes. Sixty percent of eyes in the control group (Fig. 2) presented with changes corresponding to the AMD predisciform stage (1 a.u.), 5% corresponding to the disciform stage (2 a.u.), and 35% did not have the signs of retinopathy. In the experimental group, 90% of eyes had changes corresponding to the predisciform stage (1 a.u.), 7% corresponding to the disciform stage (2 a.u.), and 3% of rats did not have the signs of retinopathy. Statistical analysis showed that SkQ1 reduced the severity of pathological changes in the eye ground of OXYS rats (p < 0.000), while in the control animals retinopathy continued to progress (p < 0.0005) (Fig. 2). By the time of the second eye inspection at the age of 12 months, in the control group the percentage of eyes with changes in the retina corresponding to the 2nd stage of retinopathy (2 a.u.) had increased to 50%, and all eyes had signs of retinopathy. At the time of re-examination of the SkQ1-treated OXYS rats, we did not observe pathological changes in the eye ground of 17% of eyes, and the percentage of eyes with 1st stage retinopathy decreased to 77% (Fig. 2).
According to our recent research, there is no evidence of retinopathy in the retina of Wistar rats up to 24 months [9]. The results of this study are consistent with these data: we did not detect any signs of retinopathy in the Wistar rats' retinas at either the first or the repeated inspection of the eyes in both experiments (SkQ1 supplementation and SkQ1 eye drops).
Expression of VEGF and PEDF genes
Effects of SkQ1 supplementation of food on gene expression. Two-way ANOVA showed that VEGF mRNA expression (Fig. 3a) was affected neither by genotype (F(1,23) = 0.27, p = 0.60) nor by SkQ1 supplementation (F(1,23) = 0.58, p = 0.45), but those factors were interacting (F(1,23) = 5.35, p < 0.029). One-way ANOVA showed a decreased mRNA level of the VEGF gene in the retina of control OXYS rats (F(1,12) = 7.87, p < 0.015) compared to the Wistar strain. SkQ1 supplementation did not change VEGF mRNA expression in Wistar rats (p = 0.29), but increased it in OXYS rats (p < 0.05) to the normal level of Wistar rats (p = 0.89). The VEGF protein level (Fig. 3b), according to two-way ANOVA, was affected by genotype (F(1,12) = 7.43, p < 0.018) and was not influenced by SkQ1 supplementation (F(1,12) = 0.69, p = 0.42), although there was a significant interaction between the latter two factors (F(1,12) = 8.64, p = 0.012). One-way ANOVA revealed a difference between the level of VEGF protein in the retina of untreated Wistar and OXYS rats (Fig. 3b).
SkQ1 supplementation increased VEGF protein expression in the retina of OXYS rats (p < 0.020) to the level of Wistar rats.
Expression of the VEGF gene (Fig. 5a) was affected by genotype (F(1,33) = 7.48, p < 0.001) and was not influenced by SkQ1 (F(1,33) = 1.32, p = 0.25). A significant interaction between these two factors (F(1,33) = 7.54, p = 0.001) indicates that the SkQ1 eye drops affected gene expression differently in the retinas of OXYS and Wistar rats. One-way ANOVA revealed no difference between the level of VEGF mRNA in the retinas of untreated Wistar and OXYS rats (Fig. 5a). SkQ1 significantly decreased VEGF mRNA expression in the retina of OXYS rats (F(1,18) = 5.21, p = 0.034), and there was a tendency for increased expression in the Wistar strain (F(1,15) = 3.72, p = 0.072). As a result, the level of VEGF mRNA in SkQ1-drop-treated OXYS rats matched the level of untreated Wistar rats (F(1,17) = 2.50, p = 0.13) and became significantly lower than the VEGF mRNA level of SkQ1-drop-treated Wistar rats (Fig. 5a; F(1,16) = 9.74, p = 0.006). A somewhat different situation was observed with the VEGF protein level (Fig. 5b), which, according to two-way ANOVA, depended only on the genotype and was lower in the OXYS retinas (F(1,12) = 35.16, p = 0.000). One-way ANOVA showed a lower level of VEGF protein in untreated OXYS rats than in untreated Wistar rats. Interstrain comparison showed that SkQ1 eye drops had no effect on the VEGF protein content in Wistar rats but significantly reduced it in the retina of OXYS rats (p < 0.017).
Expression of the PEDF gene (Fig. 4b) was affected by genotype (F(1,31) = 6.93, p < 0.013) and was not influenced by SkQ1 eye drops (F(1,31) = 0.32, p = 0.57). A significant interaction between the latter two factors (F(1,31) = 14.76, p = 0.000) indicates that the SkQ1 eye drops affected gene expression differently in the retinas of OXYS and Wistar rats. One-way ANOVA revealed no difference between the level of PEDF mRNA in the retinas of untreated Wistar and OXYS rats (Fig. 4b). SkQ1 significantly increased PEDF mRNA expression in the OXYS retinas (F(1,18) = 5.26, p = 0.033) and decreased it in Wistar rats (F(1,13) = 11.55, p = 0.004). As a result, the level of PEDF mRNA in SkQ1-drop-treated OXYS rats matched the level of untreated Wistar rats (F(1,16) = 1.94, p = 0.18) and became significantly higher than the PEDF mRNA level in SkQ1-drop-treated Wistar rats (Fig. 4b; F(1,15) = 18.42, p = 0.0006).
Discussion
In our previous study [9] we identified two critical periods for the development of clinical signs of retinopathy in OXYS rats. The manifestation of early disease signs occurs between ages 1.5 and 3 months. Late stage retinopathy develops between 9 and 12 months of age [9]. Additionally, we showed that already at age 20 days the average area of open choroid vessels is decreased while the area of thrombosed vessels is increased in OXYS rats compared to (control) Wistar rats [9]. The developing atrophy of blood vessels leads to hypoxic conditions in the outer retina. Oxidative stress is the main mechanism of tissue damage under hypoxia. In addition, oxidative stress may be both a cause and a consequence of pathological changes in the retina. We found that the mitochondria-targeted antioxidant SkQ1 [15,21] completely prevents the development of early retinopathy signs in OXYS rats when it is added to food starting from age 1.5 months. Importantly, the antioxidant did not cause any changes in the retina of Wistar rats.
The fact that the eyes of the both control and threaded with antioxidant groups of Wistar rats remained healthy indicates that taking of SkQ1 does not have any negative effects. As we showed recently doses of SkQ1 250 nmol/kg for supplementation with food and 250 nM for eye drop are optimal for retinopathy treatment, while 200-fold higher dose (drops contain 25 mM SkQ1) can cause development of new vessels [7]. Same as previously [9], we now show that at age 3 months expression of VEGF and PEDF genes in OXYS rats significantly decreases in comparison with Wistar rats (controls) and it may predispose retina to the development of early retinopathy. It should be pointed out that the differences in mRNA expression of both genes between OXYS and Wistar rats were much greater in the previous [9] than in the present experiment. This difference between the two experiments may be due to differences in the severity of retinopathy. In the previous experiment [9] there were 25% of eyes with 2 nd stage and 75% with the 1 st stage disease in OXYS rats, whereas now the figures are 17% and 83% respectively. These data support the view that early retinopathy (AMD) is associated with decreased expression of VEGF in the retina. There are many reports in literature that support the existence of a VEGF-mediated physiological mechanism that ensures fine-tuning of choriocapillaris to the necessary nutrient supply to the outer retina [15,22]. Thus, the premature decrease of VEGF mRNA and protein levels in OXYS rats may cause a disruption of this mechanism and thereby serve as both a cause and a consequence of retinopathy. We found that, at the molecular level, SkQ1 supplementation prevents a decline of VEGF mRNA expression in OXYS rats and increases protein expression to the normal level. This demonstrates a link between VEGF expression and early retinopathy in OXYS rats. Mitochondrial dysfunction of RPE cells is typical for AMD and accompanies AMD pathogenesis in humans and, one can assume, in rats as well as [23]. Mechanisms of therapeutic action of SkQ1 on retinopathy involve renovation of mitochondrial structure and function in OXYS rats that lead to the reversal of functional insufficiency of retinal pigment epithelium, which produces VEGF and PEDF [7,21]. These therapeutic mechanisms may also involve the ability of SkQ1 to act as a mitochondriatargeted protonophore [24], which decreases reactive oxygen species (ROS) generation and reduces oxidative damage in ocular tissues. It is likely that SkQ1, acting on mitochondria, blocks mitochondria-mediated apoptosis of RPE cells and photoreceptors and thus prevents the atrophy of this tissue. Beneficial effects of SkQ1 on the endothelium of choriocapillaris cannot be ruled out either. On the other hand, we did not observe any effects of SkQ1 on PEDF expression at age 3 months-PEDF expression remains low in SkQ1-treated and untreated OXYS rats compared to Wistar rats. It is known that PEDF has anti-angiogenic properties, and therefore we propose that SkQ1 either facilitates the creation of blood vessels or somehow maintains the normal tissue metabolic rate at age 3 months. It should be noted that this is the first study to demonstrate a significant prophylactic effect of an antioxidant compound on retinopathy. Our study also shows the influence of an antioxidant on the expression of genes that are key for retinopathy pathogenesis. 
Additionally, we found that SkQ1 can work as a treatment for pre-existing retinopathy in OXYS rats, when it is administered in eye drops starting at age 9 months. In this case, the beneficial effect of the antioxidant also affects the VEGF and PEDF gene expression. The development of late stage AMD is associated with increased VEGF gene expression, especially in development of neovascularization [25]. We did not observe an increased mRNA level of either VEGF or PEDF at age 12 months-the age of onset of late stage retinopathy-in OXYS rats compared to controls, just like in our previous study [9]. Moreover, the protein level of VEGF is decreased in OXYS rats compared to Wistar. In the present study, we found almost no OXYS rats with detectable neovascularization and this may be explained by the decreased level of VEGF. In addition, we can hypothesize that the VEGF level is locally increased because the amount of RPE cells, which produced VEGF, is decreased according to morphologic analysis [9]. Immunohistochemical analysis would be needed to clarify this question. Bhrutto and coworkers [20] have shown that the cause of the increased VEGF level in late stage AMD is the immune cells that migrate to the retina. It is possible that the development of late stage retinopathy in OXYS rats occurs by the same scenario. After treatment of OXYS rats with SkQ1 eye drops we also observed changes in gene expression. In OXYS rats, the antioxidant decreased VEGF mRNA and protein levels and increased PEDF mRNA expression. At the same time, VEGF expression (mRNA and protein) increased and PEDF mRNA decreased in Wistar rats. We attribute the interstrain differences in the response to SkQ1 eye drops to the different status of retina in these rat strains. In OXYS rats the development of late stage retinopathy is taking place, and therefore it is biologically necessary to reduce VEGF activity in order to prevent edema and neovascularization. In all experiments determination of different splice isoforms of VEGF-A (for example VEGF XXX a or VEGFA -XXX b) did not perform and is an aim of further investigation of retinopathy in OXYS rats. It is likely that SkQ1, which has anti-inflammatory properties [21], suppresses the function of local immune cells (macrophages and others), which are known to release VEGF [17]. In addition, there are reports that SkQ1 prevents macrophage transformation of RPE cells ex vivo [7]. Wistar rats do not exhibit clinical signs of retinopathy, however conform to aging alterations of choroid vessels and RPE occur in the retina of these rats. SkQ1, by increasing VEGF level and decreasing PEDF expression, creates the conditions for survival of choroid vessels in the retina of Wistar rats and prevents the development of retinopathy, as is the case in OXYS rats at age 3 months. This data allow us to suggest that SkQ1 may inhibit development of the retinopathy as one of the possible manifestations of the aging. One more mechanism that mediates the effects of SkQ1 on VEGF gene expression is the regulation of stability and accumulation of the transcriptional factor hypoxia-inducible factor-1 (HIF-1), which is the main regulator of VEGF expression [17]. Recently, the crucial role of mitochondria-derived ROS for activation HIF-1 was demonstrated and it was shown that the suppression of mitochondrial generation of ROS by SkQ1 effectively blocked HIF-1 accumulation [26]. 
Additionally, we would not rule out the anti-apoptotic action of this mitochondrial antioxidant, which is mediated by its ability to block the opening of non-specific pores in the mitochondrial membrane.
To date, there is no treatment for early AMD. Currently used anti-VEGF agents typically stop the development of only the wet form of AMD [27]. In the present study, we show that SkQ1 not only prevents the development of early and late stage retinopathy but also causes regression of pre-existing signs of the disease. At the molecular level, we demonstrate normalization of VEGF mRNA and protein levels by means of SkQ1 supplementation. It appears that the effect of SkQ1 on gene expression depends on the age of the animals and the stage of retinopathy.
Animals
Male senescence-accelerated OXYS and age-matched male Wistar rats (as controls) were obtained from the Breeding Experimental Animal Laboratory of the Institute of Cytology and Genetics, Siberian Branch of the Russian Academy of Sciences (Novosibirsk, Russia). At the age of 4 weeks, the pups were weaned, housed in groups of five animals per cage (57 × 36 × 20 cm), and kept under standard laboratory conditions (22 ± 2 °C, 60% relative humidity, and natural light), provided with a standard rodent feed, PK-120-1, Ltd. (Laboratorsnab, Russia), and given water ad libitum.
Studies of SkQ1 effects
SkQ1 was synthesized at the Institute of Mitoengineering of Moscow State University (Moscow, Russia). Rats were distributed among the control and experimental groups by simple random sampling. To study the effects of SkQ1 on the development of retinopathy in OXYS rats we conducted two experiments. In the first experiment, 250 nmol SkQ1 per kg of body weight per day were added to the feed of OXYS and Wistar rats between ages 1.5 and 3 months; the control groups of rats did not receive SkQ1. In the second experiment, 0.9% NaCl solution (control group) or 250 nM SkQ1 in 0.9% NaCl (experimental group) was instilled into the eyes of OXYS and Wistar rats as eye drops, one drop per day, from 9 to 12 months of age. In total, 60 animals were used in each experiment, 15 animals in each group (OXYS control, OXYS experiment, Wistar control, and Wistar experiment).
Ophthalmoscopic examination
All animals were examined by an ophthalmologist twice, before and after SkQ1 supplementation or eye drop treatment, at the age of 1.5 and 3 months or at the age of 9 and 12 months, respectively. All rats underwent fundoscopy with a "Heine BETA 200 TL" Direct Ophthalmoscope (Heine, Germany) after dilatation with 1% tropicamide. Assessment of stages of retinopathy was carried out according to the Age-Related Eye Disease Study (AREDS) grade protocol (http://eyephoto.ophth.wisc.edu). The degree of retinopathy was estimated as follows: 0 arbitrary units (a.u.) corresponds to a healthy retina (Fig. 6a); 1 a.u. corresponds to the appearance of drusen and other pathological changes in the retinal pigmented epithelium (RPE) and partial atrophy of the choroid capillary layer (Fig. 6b); 2 a.u. means exudative detachment of the RPE and of the retinal neuroepithelium, with further choroid capillary layer atrophy (Fig. 6c). Five days after the last eye examination, the rats were decapitated and the studies of the effects of the mitochondria-targeted antioxidant SkQ1 on VEGF and PEDF gene expression were performed. A Kowa Genesis-D fundus camera (Japan) was used as a hand-held digital fundus camera to take digital fundus photographs of the retina.
Studies of VEGF and PEDF gene expression
Steady-state mRNA expression of VEGF and PEDF was studied in the retinas of the control and SkQ1-treated OXYS and Wistar rats using real-time PCR. There were 7-10 animals in each group. Rats were decapitated and the eyes removed. The retina for gene expression analysis was separated from the other tissues, placed in microcentrifuge tubes for RNA isolation and frozen in liquid nitrogen. All specimens were stored at −70 °C prior to the analysis. Total cell RNA was isolated from rat retina using TRI-Reagent (Ambion). The amount of isolated RNA was assessed by means of electrophoresis of 1 µl of each RNA sample in 1.5% agarose gel with ethidium bromide staining. The RNA concentration in each sample was determined using spectrophotometry at 260 nm and also by the absorbance ratios 260/280 nm and 260/320 nm. RNA was stored at −70 °C. Contaminating genomic DNA was removed by treatment with DNase I (Promega, USA) according to the vendor's manual and then by repeated RNA extraction with the phenol-chloroform mixture and pure chloroform, followed by precipitation with propanol. Reverse transcription was performed using M-MLV Reverse Transcriptase (Promega, USA). For subsequent PCR, we used 0.25-1.0 µl of the resulting cDNA solution. Aliquots (4 µl) from all cDNA samples were mixed and the "average" solution was used for preparation of calibration curves, which were used for determination of a relative cDNA level for genes of interest and a reference gene in experimental samples. Age-related changes of vegfa and pedf gene expression were studied using an iCycler iQ4 real-time PCR detection system (Bio-Rad Laboratories, USA) and SYBR Green I dye (Molecular Probes, USA). The housekeeping gene Rpl30 (encoding large ribosomal subunit protein 30) was used as a reference gene. The following primers were used: 5′-ATGGTGGCTGCAAAGAAGAC-3′ and 5′-CAAAGCTGGACAGTTGTTGG-3′ for Rpl30; 5′-CTGGCTTTACTGCTGTACCTCCACC-3′ and 5′-GGCACACAGGACGGCTTGAA-3′ for vegfa; and 5′-GATTGCCCAGCTGCCTTTGACA-3′ and 5′-GGGACAGTCAGCACAGCTTGGATAG-3′ for pedf. The reaction mixture (final volume 25 µl) contained the standard PCR buffer (67 mM Tris-HCl pH 8.9, 16 mM (NH4)2SO4, 0.01% Tween 20, and 10 mM β-mercaptoethanol), MgCl2 (3 mM for vegf, 1.5 mM for Rpl30, and 3 mM for pedf), 0.2 mM dNTPs, SYBR Green I (1:20,000 dilution), 150 nM of each primer, and 0.8 U of Taq polymerase (Institute of Cytology and Genetics, Russia). The reaction was carried out under the following conditions: heating at 95 °C for 3 min (initial denaturation), and then 40 cycles of denaturation at 95 °C for 20 sec, annealing at 60 °C for 20 sec, and elongation at 72 °C for 30 sec. Data collection was based on fluorescence for Rpl30 at 84 °C for 30 sec and for the vegf and pedf genes at 87 °C for 10 sec. After the completion of PCR, the melting curves were recorded for specificity control. In each experiment, samples of cDNA under study were mixed with primers for a gene of interest (four repeats per cDNA sample) in one microtube plate; similar samples were mixed with primers for the reference gene (also four repeats). The "standard" cDNA was diluted from 1:2 to 1:64 with the same primers (2-3 repeats). For each cDNA sample, PCR was repeated at least twice. To confirm amplicon size and reaction specificity, PAGE electrophoresis was performed with DNA molecular weight markers.
The initial quantitation of cDNA concentration in the samples was carried out using standard calibration curves (versus the "standard" cDNA), and the gene expression value was obtained for each gene of interest; this value was then normalized according to the amount of the reference gene cDNA [28].
Enzyme-Linked Immunosorbent Assay
Protein was isolated from rat retina using TRI-Reagent (Ambion). Enzyme-linked immunosorbent assay (ELISA) for VEGF (RayBiotech, USA) was performed according to the manufacturer's instructions, except that equal protein concentrations were loaded into each well. Quantitation was carried out according to the optical density measurement obtained using a microtiter plate reader and recalculated as pg of VEGF protein per mg of retinal tissue.
Statistical analysis
The data were analyzed using repeated measures ANOVA with the statistical package Statistica 6.0. Two-way ANOVA was used to evaluate effects of treatment. The independent variables were genotype (Wistar, OXYS) and treatment (control, SkQ1). A Newman-Keuls post-hoc test was applied to significant main effects and interactions in order to estimate the differences between particular sets of means. One-way ANOVA was used for individual group comparisons. To assess the therapeutic effectiveness, we performed dependent pairwise comparison of the eye states before and after treatment (t-test for dependent samples). Data are represented as mean ± S.E.M. Results were considered statistically significant if the p value was less than 0.05.
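The standard-curve quantitation and reference-gene normalization described above might be reproduced along the lines of the sketch below. The Ct values, dilution series and sample numbers are hypothetical, and a simple log-linear fit stands in for whatever curve the iCycler software actually applies.

```python
import numpy as np

def standard_curve(log2_dilution, ct):
    """Fit Ct = slope * log2(relative input) + intercept for a dilution series."""
    slope, intercept = np.polyfit(log2_dilution, ct, 1)
    return slope, intercept

def relative_amount(ct, slope, intercept):
    """Convert a sample Ct back to a relative input amount via the standard curve."""
    return 2.0 ** ((ct - intercept) / slope)

# Hypothetical dilution series of the pooled "standard" cDNA (1:2 ... 1:64).
dil = np.log2([1/2, 1/4, 1/8, 1/16, 1/32, 1/64])
ct_vegf_std  = np.array([24.1, 25.2, 26.1, 27.2, 28.3, 29.1])
ct_rpl30_std = np.array([18.0, 19.1, 20.0, 21.1, 22.0, 23.1])

s_v, i_v = standard_curve(dil, ct_vegf_std)
s_r, i_r = standard_curve(dil, ct_rpl30_std)

# One hypothetical retina sample: target expression normalized to the reference gene.
vegf_rel  = relative_amount(26.5, s_v, i_v)
rpl30_rel = relative_amount(20.4, s_r, i_r)
print(f"normalized VEGF expression = {vegf_rel / rpl30_rel:.3f}")
```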
Disseminated lung tuberculosis and tuberculosis meningoencephalitis after kidney transplantation
B.I. Yaremin, I.L. Tsygankov, L.A. Baryshnikova, A.A. Starostina, V.E. Aleksandrova, U.V. Maslikova
Samara State Medical University, 89 Chapayevskaya St., Samara 443099 Russia; Togliatti Tuberculosis Dispensary, 34 Telegraph St., Togliatti 445013, Samara Region, Komsomolsky District, Russia; Postnikov Samara Regional Clinical Tuberculosis Dispensary, 154 Novo-Sadovaya St., Samara 443068 Russia
Abbreviations: HIV, human immunodeficiency virus; IST, immunosuppressive therapy; MBT, mycobacterium tuberculosis; MSCT, multislice spiral computed tomography
Despite the measures taken to combat the tuberculosis epidemic, the epidemiological situation with this, one of the most socially significant diseases, remains extremely tense. In Russia, in 2015, the number of newly reported cases of mycobacterium tuberculosis (MBT) infection, including HIV-infected patients, was 80 per 100,000 population, and the mortality still remains high (11 per 100,000 population). This creates an increased risk for patients after organ transplantation, and they constitute an obvious risk group. According to some data, the risk is 20-50 times higher than in the general population, due to the effect of immunosuppressive therapy (IST) on the occurrence or reactivation of the tuberculosis process after surgery [1,2]. In this regard, the issues of timely, correct, and accurate diagnosis, prevention, and treatment remain open [3,4]. Latent tuberculosis is also increasing; its detection in patients with chronic kidney disease (CKD) is difficult both before kidney transplantation, because of the anergy typical for patients on hemodialysis, and after transplantation.
Considering the blurred and atypical pattern of tuberculosis clinical manifestations and the specific features of the immune response in persons receiving immunosuppressive drugs, the use of new diagnostic methods and treatment schemes for this disease is of great importance [5]. It is also worthwhile to note that the current situation becomes specific because of an abruptly increased prevalence of primary MBT resistance to drugs [6,7]. Emphasizing that the diagnosis and treatment of tuberculosis is difficult in persons who have undergone transplantation, we want to present a clinical case report of the course of tuberculosis infection in a patient after cadaveric kidney allotransplantation.
Clinical Case Report
Patient G., born in 1983, was hospitalized for kidney transplantation at the Samara Center for Transplantation of Organs and Tissues on July 20, 2014. From the medical history: the patient had suffered from renal pathology since 2 years of age and had been treated for a diagnosis of chronic glomerulonephritis without morphological confirmation. It is known that he had a family history of the disease (the patient's brother suffered from the same pathology with an outcome of end-stage CKD). By 2008, the patient had progressed to end-stage CKD. On 25.10.2008, cadaveric kidney allotransplantation on the left was performed before starting dialysis. The developed kidney allograft dysfunction resulted in transplantectomy performed on 14.07.2011. From June 2011, chronic program hemodialysis was resumed. On 20.07.2014 the patient underwent a second surgery, cadaveric kidney allotransplantation on the right from a deceased donor. After surgery the patient received a three-component IST: extended-release tacrolimus (concentration 10 ng/mL), mycophenolic acid 1440 mg/day, and oral methylprednisolone according to the scheme. The postoperative course was complicated by acute steroid-resistant rejection of the renal allograft, which occurred on August 7, 2014. The rejection was controlled by administering therapy with antithymocyte immune globulin (Atgam): a total of 6750 mg. While on that therapy, the patient developed pharyngitis from August 18, 2014, which was qualified as of fungal origin. Nevertheless, by 01.09.2014 the patient had experienced short-term episodes of increased body temperature up to 39.2 °C. Multislice spiral computed tomography (MSCT) of the lungs showed multiple polymorphic peribronchial foci of pulmonary tissue infiltration with a diameter of 6-8 mm and pulmonary tissue infiltration of the "frosted glass" type in the core zone (Fig. 1). The lymph nodes of the bifurcation and paratracheal groups were enlarged up to 13 mm. Fibrobronchoscopy showed the bronchi to be evenly shaped on both sides and slightly deformed; the vascular pattern was more prominent and deformed, with single enlarged capillaries and moderately pronounced contact bleeding. A microscopic study of sputum for M. tuberculosis gave a negative result. Diaskintest was negative. Given the clinical presentation, physical examination, laboratory investigations and instrumental test results, antibiotic therapy with doripenem 1,500 mg/day and levofloxacin 250 mg/day, intravenously, was administered, in addition to antifungal therapy. With the conducted therapy, the foci resolved over 10 days, but the elevated body temperature persisted. The Epstein-Barr virus was found in the CSF by the polymerase chain reaction. On the same day, the patient was examined by a psychiatrist for the persisting meningitis signs, emerging delirium associated with hallucinosis (shaking off non-existent objects), increased anxiety, and rapid exhaustion. Olanzapine was prescribed in a dose of 2.5 mg. As far as the given antibiotic therapy appeared inefficient (i.e., the patient's condition worsened, the temperature rose to 39.2 °C, and consciousness became confused), ganciclovir 250 mg/day and levofloxacin 250 mg/day were added to the treatment.
The patient was repeatedly examined by a consilium of specialists (transplantologists, pulmonologists, infectious disease specialists, phthisiatricians, neurologists, resuscitators). There was a debate about the etiology of the infectious disease. The consultant phthisiatrician strongly denied the possibility of a tuberculous etiology of the condition, as the diaskintest results were negative and there were no positive cultures for MBT. The neurologist insisted on an EBV etiology of the meningitis, despite the literature data on the extremely rare (2 cases in the world) development of Epstein-Barr virus meningoencephalitis in adults. The therapy for the infectious process was performed ex juvantibus, without a clear idea of its etiology.
While on levofloxacin therapy at a dose of 250 mg/day, a decrease in cytosis to 13 cells/μL was noted. Repeated thoracic MSCT showed positive X-ray changes in the form of partial resolution of the small-focal dissemination in the upper parts of the lungs seen in the earlier series of MSCT images, while infiltration sites appeared in the lower-basal parts of the lungs, mainly on the right, with a triangular-shaped site with its base facing the pleura (Fig. 2). Thus, the patient showed positive dynamics while receiving fluoroquinolone therapy. That again caused a debate on a probable tuberculosis etiology of the lung and brain damage. The absence of abnormalities in the diaskintest was explained by the IST administered to the patient; the absence of positive MBT cultures was explained by the absence of destructive changes. A presumptive anti-tuberculosis therapy was administered according to the following scheme: levofloxacin 500 mg/day, rifampicin 300 mg 3 times/day, pyrazinamide 500 mg 2 times a day, ethambutol 400 mg 2 times a day, isoniazid 10%, 5 ml, and amphotericin B 50 mg/day, with ganciclovir 250 mg/day, 2 times. The patient received that treatment from 10.10.2014 to 30.10.2014. On the 2nd day of the prescribed therapy, a marked clinical effect was noted: the psychoneurological symptoms stabilized and the body temperature lowered to subfebrile values. No complications of the anti-tuberculosis therapy were noted. The drug interaction that developed between rifampicin and tacrolimus (rifampicin being a cytochrome P450 inducer) required correction of the tacrolimus concentration. With the ongoing anti-tuberculosis therapy in combination with ganciclovir and sulperazone, a complete resolution of clinical and radiologic symptoms was achieved within a month; the body temperature and clinical laboratory parameters returned to normal values. The neurologic status completely recovered, but the patient had a residual abnormality of neurosensory hearing loss (Fig. 3). At 3 months from the onset of the clinical manifestations of the disease, growth of MBT resistant to rifampicin and isoniazid was noted in the bronchoalveolar lavage culture on solid culture media. That finding verified the diagnosis of tuberculosis and confirmed that the chosen treatment tactics were correct.
Discussion
The case has demonstrated the difficulty of diagnosing tuberculosis infection in patients on IST after transplantation, as the clinical signs seem atypical. The diagnosis of tuberculosis in a patient after transplantation is complicated, unobvious, and controversial. The IST performed in these patients sometimes creates a morphological phenomenon of "tuberculosis without tuberculosis," where tissue mycobacterial infiltration appears present but the classical small foci do not have enough time to be formed [8,9]. This leads to the absence of classic radiographic and laboratory signs of the disease, which can be misleading even for an experienced phthisiatrician who, however, is not familiar with the problem of post-transplantation tuberculosis. This requires a special approach from the TB services, one that combines high vigilance, a preventive approach, and deep knowledge of the pharmacodynamics and pharmacokinetics of immunosuppressive drugs and their drug interactions with anti-tuberculosis medications. The complex anti-tuberculosis, antimycotic and antiviral therapy administered ex juvantibus, with a high degree of suspicion and taking into account the pharmacokinetics and pharmacodynamics of the immunosuppressive drugs, provided positive dynamics in our case and allowed the patient to be saved before the pathogen could be unambiguously identified.
Conclusion
The presented clinical case report suggests the necessity of a special approach to the diagnosis of post-transplantation tuberculosis. There is no sense in waiting for the typical clinical signs of the disease in such patients. We should rather speak about the specific pathogenesis of post-transplantation tuberculosis, which causes atypical manifestations and demands other approaches to diagnosis and treatment. Doctors involved in the identification of post-transplant tuberculosis should have additional knowledge in this matter. A bolder approach to the diagnosis and a broader administration of ex juvantibus therapy may be justified in treating this patient population.
Fig. 1. Multislice spiral CT of the lungs of 01.09.2014 in Patient G.: multiple polymorphic foci of pulmonary tissue infiltration.
Fig. 2. Multislice spiral CT of the lungs of 09.10.2014 in Patient G.: some of the foci became more dense, some others resorbed; the emerging pneumonia can be visualized in the lower-posterior segments of the lungs.
Fig. 3. Multislice spiral CT of the lungs of 29.10.2014 in Patient G.: the foci have completely resolved; there is no pneumonic infiltration.
Collaborative protection of marine environment in the Guangdong-Hong Kong-Macao Greater Bay Area . The quality of Marine ecological environment is one of the necessary conditions for building a world-class bay area. The ecological environment in the coastal waters of Guangdong-Hong Kong-Macao Greater Bay Area is seriously polluted, it is urgently needed for three places to strengthen coordinated Marine ecological and environmental management. Under the guidance of outline development plan for the Greater Bay Area and with the goal of building a Marine ecological civilization in the Greater Bay Area, the coordination mechanism of administrative management and the standard system for Marine ecological environmental protection should be improved , land and sea overall planning should be used for Marine ecological environment protection. In view of the fact that the water quality of Pearl River Estuary is extremely poor, Guangdong and Hong Kong has set up 'Guangdong and Hong Kong Liaison Group on Environmental Protection' for strengthening area water quality management, established several joint action plans, jointly tackled transboundary water pollution from 1990. In 2000, it was upgraded to the Hong Kong and Guangdong Cooperation Group on Sustainable Development and Environmental Protection to further enhance cooperation on cross-boundary environmental issues. In 2016, under this framework, they established 'Marine Environment Management Task Group' and 'Cross-Border Maritime Drift-Garbage Incident Reporting Mechanism'. However, prior to the formulation of Guangdong-Hong Kong-Macao Greater Bay Area strategy, such cooperation was mainly voluntary and initiative bilateral cooperation, with the main purpose of strengthening communication and exchanges on Marine environmental protection issues. Differentiated management mechanism The first is the key sea area control system and the universal marine environment control. Guangdong implements the General Marine Environmental Protection Planning and the Marine Environmental Protection Planning in Key Sea Areas. That is to say, the Provincial Marine Administrative Department should, in accordance with the Marine Function Zoning, the National Marine Environmental Protection Plan and the Regional Marine Environmental Protection Plan of Key Sea Areas, formulate the Provincial Marine Environmental Protection Plan of Key Sea Areas. There is no such distinction in Hong Kong. Instead, Hong Kong Waters are generally divided into ten water quality control zones and four supplementary zones. Each control zone applies specific water quality target restrictions in accordance with the actual situation of the region and implements water quality management in accordance with the water quality target requirements of the region. The establishment of key sea areas can meet the current plight of the mainland Marine environment supervision and administration departments with scarce supervision resource and insufficient administrative power. However, it also exposes the blank of supervision over the non-key sea areas. The second is the significant difference in Marine environmental information disclosure system. Every year, Hong Kong releases a report on the water quality of about 1,700 square kilometres of Hong Kong waters, and makes the public to know the achievement of key water quality targets in each region. 
Although Guangdong also releases a bulletin on Marine Environment State every year, the information on Marine environment is relatively simple and not detailed to specific sea areas. For example, every year, Hong Kong publishes the Annual Marine Water Quality Report, which reports on the state of Marine water quality in ten water control zones in Hong Kong. The Hong Kong Environment Protection Department has also set up a Dedicated Marine Water Quality Website to publish various types of Marine water quality information. There are 76 Marine monitoring stations in Hong Kong. Each month, the Environment Protection Department monitors the seawater quality of 76 water sampling stations, collects and surveys phytoplankton samples from 25 stations. The water quality of 17 typhoon shelters, yacht clubs and marinas in Hong Kong is monitored every other month. The sediment samples are collected and analysed twice a year at 60 seabed sediment sampling stations, covering all Hong Kong Waters. On the other hand, 'Guangdong Province Ocean Status Bulletin in 2017'released the water quality status of Pearl River Delta coastal waters. The water quality that meets the quality standards for class I and II seawaters is mainly distributed in the waters of Daya Bay, Dapeng Bay and Chuanshan Islands. The water quality inferior to the class IV seawater quality standards is mainly distributed in the Pearl River Estuary. The Third is the difference of pollution accident reporting system. The law of Hong Kong requires ships to report immediately and in as much detail as possible to the Director of Marine any accident involving the actual or probable discharge of dangerous solid bulk goods from the ship, if the ship is outside the Hong Kong Waters, the captain should report the accident to the nearest coastal state. The report must be based on the guidelines and general principles adopted in relevant International Maritime Organization resolutions. Moreover, in order to enhance the effectiveness of the report, Hong Kong also requires that if the report issued by the ship is incomplete or not available, the owner must make or complete the aforementioned reports as far as practicable. It can be seen that such a report system is highly operational and has clear operational basis. In comparison, Guangdong proclaims in principle when the ship has accident or may cause marine pollution accident, the parties should report to the maritime sector or the fishery administration departments, but there is no clear stipulation on the method and content of the report, and also there is not suitable penalties for failure to perform reporting obligations, so such report system is short of operability and enforceability. Different marine environmental standard system The first is the difference of seawater quality index. The seawater quality standard applicable to Guangdong is the Seawater Quality Standard of People's Republic of China (GB 3097-1997), while Hong Kong has set different water quality indexes for the ten water control zones. There are two main differences in the water quality indicators between Guangdong and Hong Kong. In terms of rigor degree of the indicators, Hong Kong's seawater quality indicator is more stringent than Guangdong's. In terms of dissolved oxygen index, for example, Guangdong's water quality standards is respectively set up four categories of indicators, including the value of 6, 5, 4, and 3, in accordance with the sea water quality standard of class I, II, III and IV. 
Hong Kong applicable water quality standard, on the other hand, is divided into four types: seabed dissolved oxygen, water depth average dissolved oxygen, remaining water column dissolved oxygen, and all depths of dissolved oxygen, with the value of 2 and 4 respectively. Even within a water quality control zone, there are different water quality indicators for different sea areas. In addition, the types of indicators are set differently. For example, Hong Kong has chlorophyll and salinity indicators, while Guangdong does not have the two types of indicators. The second is the difference of standard partition setting. Guangdong has classified the first, second, third and fourth categories of marine water quality standards. Although the Marine Function Zoning System has been implemented at the beginning, it is not in-depth enough, leading to the cross-functional area transfer of Guangdong marine environment pollution, so Guangdong has not yet set specific pollution discharge standards in strict accordance with detailed Marine Function Zoning. Hong Kong, on the other hand, has adopted different effluent standards for each water control area and refined them to different effluent limits for different functional areas within the same water control area. The third is the difference in marine environment monitoring methods. The standards methods of the American Society for Material and Testing, the British Standards Institution and the American Public Health Association are generally adopted in Hong Kong. Guangdong adopts the Chinese national standard methods, including the Technical Regulations for the Evaluation of Seawater Quality, the Technical Regulations for the Environmental Monitoring and Evaluation of Seawater Bathing Beach, the Quality of Marine Sediments, and the Quality of Marine Life. For example, chlorophyll-A and sediment in Hong Kong's seawater quality indicators should refer to the US national coastal zone status assessment index and rating method. [5] Hong Kong's marine environmental standards can focus on regional characteristics, and pay attention to the fact that the standard critical value can be adjusted with the change of time and space, so they have strong flexibility and applicability. Although Guangdong has the unified standard for seawater environment, it lacks due attention and reference to regional characteristics, scientific research achievements. The Outline Law on Macao Sea Area Administration only stipulates in principle that the standards for Marine environmental quality management in Macao should be formulated in accordance with the national standards for marine environmental quality and in conformity with the conditions of Macao, but have not been published so far. Improve coordination management mechanism The common and shared marine environment determines the inevitability of regional cooperation in Marine environmental governance. Guangdong, Hong Kong and Macao have established an inter-governmental agreement cooperation model, for instance, the Environment Bureau of Hong Kong had a cooperation agreement with the Maritime Safety Administration of the Ministry of Transport on air pollution control from ships. [6] Guangdong and Macao also have cooperation experience in the protection of Hengqin Coastal wetland. 
[7] Now that the Greater Bay Area has been set up, the coordination management needs to be improved, which means that the improvement should be aimed at the specific objects of pollution prevention and control and the corresponding specific cooperation mechanisms, based on the strengthening of "one country, two systems". In other words, pollution prevention and control cooperation should be carried out under the coordination of the central government with respect to a certain prominent problem of marine environmental pollution, and the cooperation scope should be extended to the Greater Bay Area of Guangdong, Hong Kong and Macao instead of just bilateral cooperation. For instance, the prevention and control of sea drift-garbage can be led and coordinated by the Ministry of Ecology and Environment, and a leading group for sea drift-garbage prevention and control can be set up. Its members can be composed of the leaders of the "9+2" city authorities in the Greater Bay Area, and the existing cooperation mechanisms for marine environmental protection between Guangdong and Hong Kong, and between Guangdong and Macao, should be integrated into a tripartite participation mechanism.

Coordinate environmental standards

There are still great conflicts in the setting of specific standards for marine environmental protection among the three places, especially in Guangdong and Hong Kong. The coordination of marine environmental standards is the basis of promoting integrated marine environmental protection in the Greater Bay Area. As the Greater Bay Area shares the adjacent sea areas, it is necessary to comprehensively and systematically review the existing marine environmental standard systems of the three regions under the leadership coordination mechanism of marine environmental protection. The institutional advantages of "one country, two systems" should be made full use of. Guangdong should learn from Hong Kong's advanced marine environmental standards setting, marine environmental monitoring methods, and marine environmental monitoring network point location setting. In view of the low level of marine environmental standards, incomplete index setting, and lagging marine environmental monitoring methods, Guangdong should improve the level of its standards and its methods of marine environmental monitoring. Macao should promptly establish and perfect the marine environmental standard system in its own sea area. Under the premise that the three parties have reached consensus through consultation, unified standards for marine environmental protection in the Greater Bay Area may be considered to facilitate the exchange and sharing of marine environmental monitoring information.

Land and sea overall planning

The overall planning of land and sea is the fundamental way to prevent and control marine ecological environment pollution. With the continuous progress of ecological civilization construction in the Greater Bay Area, Guangdong, Hong Kong and Macao should increase the frequency and intensity of joint law enforcement on marine environmental protection. Joint administrative law enforcement may be conducted on the issues of common maritime drift-garbage disposal, pollution prevention and control from ships, marine ecological protection zones and other aspects, so as to explore and summarize the experience and rules of joint administrative law enforcement of marine environmental protection among the three parties.
Under the coordination mechanism of marine environmental protection in the Greater Bay Area, the three parties should determine the procedures and means of integrated marine environmental administrative management through consultation, giving consideration to the values of environmental justice, administrative efficiency and procedural justice. Guangdong should learn from Hong Kong's procedural regulations on marine environmental administrative management, enhance the procedural and normative nature of environmental administrative management and attract the public to participate in marine environmental administrative management. Hong Kong should learn from Guangdong's practice of improving the efficiency of environmental administrative management and strengthen the enforcement of environmental administrative actions. Macao should further give full play to the advantages of executive administration mode, ensure the legitimacy of environmental administrative regulations and orders, so as to prevent and avoid unnecessary environmental administrative disputes. Conclusion The Development Plan Outline of the Greater Bay Area clearly requires that the marine resources and environment protection should be strengthened, the determination of land by sea should be attached greater importance to, the establishment of a total amount control system for pollutants entering the sea and a realtime online monitoring system for the marine environment should be speeded up. At the same time, it also calls for "strengthening cooperation in ecological and environmental protection between Guangdong, Hong Kong and Macao, to jointly improve the ecological environmental system." Therefore, the construction of marine ecological civilization in the Greater Bay Area has become a challenging task to be solved urgently. Guangdong, Hong Kong and Macao should improve the coordination mechanism of marine environmental protection in the Greater Bay Area under the guidance of the Development Plan Outline, coordinate with each other to determine the marine environmental standards system for the Greater Bay Area, carry out the overall planning of the land and sea areas to coordinate the administrative management of marine ecological and environmental protection.
Meteorological OSSEs for New Zenith Total Delay Observations: Impact Assessment for the Hydroterra Geosynchronous Satellite on the October 2019 Genoa Event : Along the Mediterranean coastlines, intense and localized rainfall events are responsible for numerous casualties and several million euros of damage every year. Numerical forecasts of such events are rarely skillful, because they lack information in their initial and boundary conditions at the relevant spatio-temporal scales, namely O (km) and O (h). In this context, the tropospheric delay observations (strongly related to the vertically integrated water vapor content) of the future geosynchronous Hydroterra satellite could provide valuable information at a high spatio-temporal resolution. In this work, Observing System Simulation Experiments (OSSEs) are performed to assess the impact of assimilating this new observation in a cloud-resolving meteorological model, at different grid spacing and temporal frequencies, and with respect to other existent observations. It is found that assimilating the Hydroterra observations at 2.5 km spacing every 3 or 6 h has the largest positive impact on the forecast of the event under study. In particular, a better spatial localization and extent of the heavy rainfall area is achieved and a realistic surface wind structure, which is a crucial element in the forecast of such heavy rainfall events, is modeled. Introduction The Mediterranean region is frequently struck by severe rainfall events causing numerous casualties and several million euros of damage every year [1]. In particular, the unusually complex terrain of the western Mediterranean areas, characterized by high mountains close to the coastlines (Alps, Apennines, Massif Central, Pyrenees), can enhance or trigger the deep convective processes often originating over the warm sea in the fall season [2][3][4]. Among the heaviest rainfall phenomena of this region, there are Mesoscale Convective Systems (MCSs). On short time-scales, their relevance is due to their high probability of triggering floods and flash-floods, with significant societal impacts, often combined with numerous shortcomings in their forecast [5][6][7]. Being characterized by very high accumulated rainfall depths, they are also responsible for a large proportion of rainfall on annual time-scales. Climate projections suggest that their importance, in terms of frequency and intensity, is likely to increase in a warming climate. Recent studies demonstrate a strong sensitivity of the predicted climate impacts to the numerical representation of MCSs, with current climate models not generally capturing MCSs well enough [8]. Thus, improving the forecast accuracy of MCSs is a fundamental step towards managing their social and economic damage on both the short and the long term. The advance of Numerical Weather Prediction (NWP) models to increasingly higher grid spacing (km and sub-km) is paving the way to potential new synergies with space-borne systems. On the one hand, to drive high resolution NWP models, high resolution input data and boundary conditions are needed. On the other hand, the present state-of-the-art high resolution NWP models coincide with the increasing availability of space-borne observational data sources characterized either by high spatial resolution (e.g., the Sentinel missions developed in the Copernicus program framework) or by a high temporal resolution (Global Navigation Satellite System, GNSS). 
In this context, the Synthetic Aperture Radar (SAR) Interferometry (InSAR) technique [9][10][11][12] applied to Sentinel-1 data enables the retrieval of information on a wide range of spatial scales of the potentially highly turbulent atmospheric water vapor field [13][14][15][16][17][18]. Many studies demonstrate the positive impact of assimilating Integrated Water Vapor (IWV) (measured in kg m −2 ) or, equivalently, Zenith Total Delay (ZTD) [m] observations in the forecast of heavy rain, both from InSAR [19][20][21][22][23], and from GNSS [22,[24][25][26][27]. Hence, it is expected that feeding NWP models with EO (Earth Observation) data-derived ZTD maps combining high spatial resolution and short revisit time can represent a breakthrough in the ability to forecast extreme weather events. However, nowadays, such space-borne observations with concurrently high spatial and temporal resolutions are not available yet. On the one hand, Sentinel-1 ZTD maps have a very high spatial resolution [13,16] but a too low temporal one, of the order of some days. On the other hand, GNSS ZTD timeseries are point measurements characterized by a coarser resolution (on the order of 30 km at best, much less in some regions) but they reach a temporal resolution of 30 s [22]. In the future, InSAR data at high temporal resolution (daily, or sub-daily) could be provided by geosynchronous satellites. The geosynchronous C-band SAR mission called Hydroterra is currently a phase 0 candidate mission for the 10th Earth Explorer Programme of the European Space Agency (ESA). Hydroterra aims to observe the key processes of the daily water cycle by supplying frequent images (e.g., 1-12 h repeat time) at 1-3 km resolution. The geosynchronous orbit is expected to cover Europe and Africa. One of its main scientific objectives is to improve the physical insight and, therefore, the predictive capability of heavy rainfall and its possible consequences (floods, landslides) by providing estimates of ZTD, as well as of soil moisture, flood extent, and the presence of melting snow [28]. Concerning soil moisture, the added value of Hydroterra-derived estimates has been discussed in Cenci et al. [29]. To the best of our knowledge, a similar kind of analysis has never been carried out for ZTD estimates from Hydroterra observations and their impacts on the predictive capability of severe hydro-meteorological events. In this work, to assess the added value of high resolution/high frequency ZTD estimates using future Hydroterra observations, a set of Observing System Simulation Experiments (OSSEs) is built. An OSSE is a numerical experiment conducted with a numerical prediction model (in this case a NWP model) and a data assimilation system that ingest simulated rather than real observations. Thus, a simulated scenario is used as reference instead of real-world observations, as explained in Section 3. The OSSE approach is widely used to estimate the impacts of the proposed designs of new satellites or new kinds of observations [30,31]. However, this is the first time that an OSSE is used to evaluate the potential of the Hydroterra data for NWP applications. In particular, the OSSEs are used both to understand the best way to assimilate this new kind of observation with the state-of-the-art data assimilation systems and to assess the most useful spatio-temporal resolution for NWP applications [32][33][34][35][36]. The aim of this work is twofold. 
Firstly, the sensitivity to different spatio-temporal resolutions of this new kind of ZTD observation is assessed to identify the best-performing setup in the simulation of a heavy rainfall event. Secondly, the added value of assimilating the Hydroterra-like ZTD field is compared to the forecasting skills of some experiments, where already existing ZTD observations are assimilated, namely mimicking the GNSS Italian network coverage. Beyond a traditional and an object-based validation of the rainfall forecasts, the OSSEs results are also investigated using some physical criteria that are relevant for operational activities. Despite the OSSEs not being performed in fully operational configurations, this assures the relevance of the assimilation of the Hydroterra product to operational activities. The work is organized as follows. In Section 2, the use case is presented. Section 3 introduces the OSSE setup, a comparison between the reference run (to be used to produce the synthetic observations) and the experiment with no data assimilation, the observations to be assimilated, the assimilation techniques, the experiments, and the validation method. Results are presented in Section 4. Section 5 is devoted to the discussion and the interpretation of the results, while the conclusions are given in Section 6. Study Area The study area, corresponding to the territory of the Italian region called Liguria, is located along the north-western coast of Italy (see Figure 1). From a morphological point of view, the region is characterized by high mountain ranges, with a maximum height between 1000 and 2000 m a.s.l. (above sea level) that run parallel to the coast and reach their maximum height a few kilometers from the coast. The particular morphology leads to the formation of meteorological patterns specific to the region, capable of producing rainfall of relatively short duration and extremely high intensity (up to an average of 200 mm in one hour and 500-600 mm in 12 h) (see e.g., [37]). The particular meteorological situation, combined with the morphology, characterized by small basins with a high average slope, makes the region particularly exposed to flash flood risk. This type of morphology is very similar to that of several areas of the Mediterranean (e.g., Spanish, Greek, Algerian, French, and Turkish coasts) as well as the hydro-meteorological events that cause economic damage and deaths [38][39][40]. The region provides an excellent study area representative of the entire Mediterranean belt subject to flash floods. Case Study Description The OSSEs are performed for a high impact weather event characterized by low predictability that occurred in Italy over the Liguria region between the 14th and the 15th of October 2019. The selected case study corresponds to a back-building MCS; these are among the most important flash-flood producing storms in the Liguria region area [2,4,41] and other Mediterranean coastal regions, such as southern France [3,42] and eastern Spain [43,44]. MCSs are known to have been common in these areas also in the past [45] and there is evidence that climate change could increase their frequency [46]. It is also known that their dynamics generally develop over the sea [42,47], which can control the rainfall intensity by modifying the atmospheric stability according to the average value of sea surface temperature [48][49][50], and can influence the low-level wind field by means of the differential thermal forcing due to sea surface temperature gradients [51,52]. 
The low predictability of this kind of event [4,53,54] is due to the fact that small-scale meteorological processes drive their dynamical evolution. Fiori et al. [4], for example, highlight the role of the convergence line that forms over the sea when a cold and dry continental air mass coming from inland meets a warm and wet maritime air mass. The cold air mass acts as a virtual orographic barrier that lifts the unstable warm air and triggers convection. In addition to the mesoscale lifting, the other known ingredients for the development of a back-building MCS are a relatively high level of moisture, the presence of a conditionally unstable air mass, and slowly-evolving synoptic conditions [42]. On the 14th of October 2019, a surface low pressure system located off the south-western coast of Ireland was associated with an upper-level trough extending as far south as the north African coasts, as shown in Figure 2A. At that time, a cold front was approaching the Spanish coasts and a southerly low-level flow was developing off the Ligurian coasts (not shown). Similar conditions characterized the 15th of October, see Figure 2B, where the upper level divergence of the synoptic trough was placed over the Ligurian coasts and the moist and unstable flow kept blowing from the Mediterranean Sea. Such conditions are typical of the heavy rainfall events that are known to hit northern Italy in the Autumn [55][56][57]. As outlined before, these slow-evolving synoptic conditions are necessary for the MCS development but need to be accompanied by other local forcing factors (conditional instability, low-level moisture and mesoscale lifting), which significantly challenge the predictive capabilities of current NWP modeling tools. Methods and Experiments The underlying hypothesis of this study is that by assimilating high resolution ZTD maps, the NWP model can improve its spatial representation of the low-level moisture and the conditional instability. For the event under consideration, this can affect the local dynamics, possibly helping the development of a convergence line, which can act as a lifting factor for the triggering of the back-building MCS. OSSE Setup The OSSEs setup is built following key points from Hoffman and Atlas [32] to guarantee its validity. The state-of-the-art Weather Research and Forecasting Model [59] (WRF, v3.8.1) is used to produce both the truth run (TR hereafter) and the forecast runs (FC hereafter), characterized by the following features: • the TR and FC simulations are performed at different grid spacing using three two-way nested domains: 13.5, 4.5, and 1.5 km for TR ( Figure 3A) and 22.5, 7.5, and 2.5 km for FC ( Figure 3B). The choice to use a higher resolution for the TR is mainly dictated by three considerations. Firstly, we needed to represent the phenomena under study with a sufficiently high resolution in the TR. Secondly, we wanted to have a TR ZTD field at a resolution which was as close as possible to the maximum resolution planned for the Hydroterra observations (on the order of 1 km) [62]. Thirdly, we aimed to evaluate the impact of the assimilation in a model with a setup currently used for operational forecasting activities. The remaining parameterizations (listed below) are the same for the TR and the FC experiments and follow the setup adopted in recent research [22,63,64]. 
They are also used in the setup implemented for an operational forecast at CIMA Research Foundation (www.cimafoundation.org/foundations/research-development/wrf.html) and include the Yonsei University scheme [65] for the planetary boundary layer turbulence closure; the RRTMG shortwave and longwave schemes [66][67][68] for radiation; the Rapid Update Cycle (RUC) scheme for the land surface model [69,70]. No cumulus scheme is activated in the two innermost domains (of both TR and FC runs), because the grid spacing is fine enough to explicitly resolve convection. An appropriate convective scheme, consistent with the boundary condition product, is activated in the outermost domain of both configurations: the Tiedke scheme [71,72] in the TR, and the new simplified Arakawa-Schubert scheme [73] in the FC experiments. Comparison between TR and FC Open Loop To assess the impact of ZTD assimilation at different spatial and temporal resolutions, it is necessary that the TR differs significantly from the FC_OL (the FC Open Loop simulation, i.e., with no data assimilation) and, conversely, that it represents the rainfall field well enough. In the TR, a back-building MCS is simulated, producing accumulated rainfall depths higher than 300 mm in 12 h ( Figure 4B). The simulation is very close to the back-building MCS accumulated rainfall observed by the merged radar and rain-gauges product ( Figure 4A). As introduced in the previous subsection, MCSs are generally triggered by a strong and persistent (in time) convergence line over the sea, which fixes the generation of convective cells at the same position for a few hours, so that very high values of accumulated rainfall are produced [4,63,74]. Such a convergence line is visible during the main phase of the event (00, 01, 02 UTC) in the TR, as shown in Figure 5A-C. Conversely, the FC_OL is not able to capture the correct dynamics of this event: Figure 5D-F shows that the convergence line is completely absent in the FC_OL simulation between 00 and 02 UTC. Consequently, the peak accumulated rainfall in 12 h is less than 100 mm and the precipitation is more spatially distributed ( Figure 4C). The dynamics of the TR and the FC_OL seem to significantly diverge in the afternoon of the 14th of October. In fact, in the morning of the 14th both configurations model a convergence line over the sea. Later during the day, in the FC_OL, this line moves towards France and gets weaker, while in the TR, the convergence line intensifies (not shown). This is likely due to either a wrong description of the thermodynamical state of the continental air mass in the FC_OL, which prevents it from overcoming the orographic barrier and flow over the sea, or a too strong south-easterly flow from the sea, or a combination of both. A correct representation of the convergence line in the NWP model has both dynamical and thermodynamical consequences. In fact, other than possibly producing vertical motion, the surface convergence line is also characterized by an anomalous water vapor content. This happens because the relatively dry continental air mass acts as a barrier for the moister maritime air mass [4], resulting in an accumulation of water vapor, which affects the air column stability. This is visible in Figure 5, where the 252 mm isoline of ZTD is shown in magenta. 
In fact, it is possible to see that, corresponding to the convergence line over the sea, a well-defined finger-like structure of high water vapor content is modeled perpendicular to the Ligurian coast in the TR (Panels A, B, C). This area of relatively high humidity, in the first place, acts as a source of water for the intense heavy rain, which is one of the necessary ingredients for the development of such phenomena [42]. Secondly, the higher humidity content in the TR decreases the atmospheric stability. In fact, over the Ligurian Sea, the maximum Convective Available Potential Energy (mCAPE) is significantly higher in the TR, O(2000 J kg-1), than in the FC_OL, O(1500 J kg-1), as discussed in Section 5. Since in the FC_OL the convergence line is not produced, the area of higher humidity is also completely absent, with the consequences for the accumulated rainfall field discussed above (Panels D, E, F).

Synthetic Observations Description and Retrieval from the TR

All the observations used in this work, namely the Hydroterra-like and the GNSS ZTD, are synthetic observations retrieved from the TR fields. ZTD can be modeled as the difference between the distance in the zenith direction covered by an electromagnetic signal assumed to be in vacuum, i.e., moving with constant velocity c, and the actual distance, i.e., that covered at the actual velocity v ≤ c. In particular, it can be expressed as the vertical integral of the atmospheric refractivity N [75], namely

ZTD = 10^-6 ∫ N dz,

where N is a function of the pressure of dry air p_d, the partial pressure of water vapour e, and the temperature T along the zenith profile:

N = k_1 (p_d / T) + k_2 (e / T) + k_3 (e / T^2).

The k_i, i = 1, 2, 3 constants are experimentally determined and, in this work, their values are taken from Smith and Weintraub [76] and Bevis et al. [77], in agreement with the WRF implementation. ZTD is related to IWV through

IWV = Π (ZTD - ZHD) = Π ZWD,

where ZHD is the Zenith Hydrostatic Delay, which is substantially controlled by the surface pressure [78], ZWD is the Zenith Wet Delay, which is controlled by the highly variable water vapor content, and Π is a conversion factor. It depends on the vertical mean value of the inverse of the temperature weighted by the water vapor density and is approximately equal to 0.15 [75,77]. To go from ZTD to IWV, thus, it is clear that additional information on surface pressure and temperature is needed. As these observations are sometimes hard to retrieve and they add processing steps that can be avoided by directly assimilating ZTD in the model, in all the experiments of this work the assimilated variable is ZTD. The Hydroterra-like ZTD is assimilated only over land, since Hydroterra will not retrieve ZTD over the sea. This is mainly because the ZTD InSAR maps (such as the Hydroterra ones) are derived by taking phase differences for each pixel using multi-temporal observations. The phase combines the optical path delay with the target's own signature, which should be stable in the time between the two SAR observations in order to provide a reliable measure of the differential path delay. This does not occur when observing water, where the kinematic instability of the surface changes its radar reflectivity within milliseconds [79,80]. In SAR interferometry, water surfaces have random phase, even when observed with a very short revisit. To obtain the GNSS-like ZTD, the TR ZTD field is interpolated on the positions of the receivers of the Italian GNSS network, with a nearest-neighbor approach.
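To make the quantities above concrete, the following minimal sketch (not part of the original processing chain) computes ZTD, its wet part, and the corresponding IWV from a discretized zenith profile. The constants are indicative values of the Smith and Weintraub / Bevis et al. type cited above rather than the exact ones of the WRF implementation, and the function names, the trapezoidal integration, and the dry/wet split are illustrative assumptions.

import numpy as np

# Indicative refractivity constants (the exact values used in WRF may differ).
K1 = 77.6      # K hPa^-1
K2 = 70.4      # K hPa^-1
K3 = 3.739e5   # K^2 hPa^-1

def zenith_delays(z, p_dry, e, temp):
    """Zenith delays [m] from a discretized vertical profile.

    z     : heights along the zenith profile [m]
    p_dry : dry-air partial pressure [hPa]
    e     : water-vapour partial pressure [hPa]
    temp  : temperature [K]
    Returns (ztd, zwd); the (approximately hydrostatic) remainder is ztd - zwd.
    """
    n_total = K1 * p_dry / temp + K2 * e / temp + K3 * e / temp**2  # refractivity N
    n_wet = K2 * e / temp + K3 * e / temp**2                        # wet part only
    ztd = 1.0e-6 * np.trapz(n_total, z)   # ZTD = 1e-6 * integral of N dz
    zwd = 1.0e-6 * np.trapz(n_wet, z)
    return ztd, zwd

def iwv_from_zwd(zwd, pi_factor=0.15):
    """IWV [kg m^-2] from ZWD [m], using IWV = Pi * ZWD with ZWD expressed in mm."""
    return pi_factor * zwd * 1.0e3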
The distance between the GNSS receivers of the Italian network is between 30 and 50 km, and for a map of the receivers the reader is referred to Figure 4 of Lagasio et al. [22]. As with many heavy rainfall events, this case study was completely missed by Sentinel-1: the first observation was at 5.35 UTC of the 14th of October, too early to give some information for such very localized event, and the second one was at 5.25 UTC of the 15th of October, when the event was already over. The difficulty in finding a case study in which to assimilate Sentinel-1 ZTD map with a timely passage [22,64] is due to its very low temporal resolution with respect to the dynamics of this kind of explosive high impact weather events. Data Assimilation Setup and Experiments Configuration The data assimilation procedure is performed with the state-of-the-art 3DVAR WRFDA package, V3.9.1 [81]. The 3DVAR finds the optimal estimate of the atmospheric state, called 'analysis', by minimising an appropriate cost function that weights the background atmospheric state(coming from a NWP model run) and the observations, by their uncertainties. A technical description of the assimilation procedures used in this study is given in Appendix A. It has been shown that when high resolution radar observations are assimilated, if the cost function is not properly constrained, such a large number of inputs can dominate the analysis result by adding large unbalanced wind increments, especially when convective systems are present [82,83]. Furthermore, the high resolution ZTD Hydroterra-like observations can lead to unrealistic dynamics, by changing the atmospheric stability and producing very vigorous vertical motion throughout the domain (not shown). This is why an additional constraint in the assimilation procedure is needed. The additional constraint used is sensitive to the large-scale features. It is well known that one of the challenges in convective-scale data assimilation is to extract as much information as possible from the observations while maintaining the background large-scale balance. In other words, the problem is to find a way to add high resolution observational data to the initial conditions through a data assimilation system without damaging the large-scale pattern or causing spurious convection [82]. A possible solution to improve the data assimilation procedure is to use a method to minimize the imbalance problem in the 3DVAR system by adding a constraint in the cost function using information at larger scales. This is defined in terms of the departure of a high resolution 3DVAR analysis from a coarser-resolution large-scale analysis, as explained more in detail in Appendix A [82]. In this work, the version of large-scale constraint (LSC) used in Tang et al. [83] is adopted. Firstly, the GFS forecast fields (instead of analysis fields) are interpolated into the same regular grids as the outer domain via the WRF pre-processing system. Secondly, they are assimilated as bogus observations in the inner domain during the regular DA cycles. Note that, as discussed in Appendix A, not all the grid points of the large domain are considered. In particular, in the present work, the LSC sampling step is set to 45 km, corresponding to retaining every second point of the d01 grid. The assimilation experiments are performed sampling the observation at different spatial (2.5 km, 5 km, GNSS network location) and temporal (3 h, 6 h) resolutions in all the possible combinations. 
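As an illustration of the variational machinery just described, the toy sketch below builds a 3DVAR-like cost function with background, observation, and large-scale constraint terms and minimizes it with a general-purpose optimizer. It is purely schematic: the operators are linear, the error covariances are identity matrices, and none of the names or sizes correspond to the actual WRFDA configuration.

import numpy as np
from scipy.optimize import minimize

def make_3dvar_cost(xb, b_inv, y, obs_op, r_inv, x_ls=None, ls_op=None, c_inv=None):
    """Schematic 3DVAR cost function with an optional large-scale constraint (LSC).

    xb   : background state vector
    y    : observation vector (e.g. ZTD values); obs_op maps state to obs space
    x_ls : coarse large-scale field assimilated as 'bogus observations';
           ls_op samples the state on the coarse grid, c_inv weights the term
    """
    def cost(x):
        d_b = x - xb                      # departure from the background
        d_o = y - obs_op @ x              # innovation in observation space
        j = 0.5 * d_b @ b_inv @ d_b + 0.5 * d_o @ r_inv @ d_o
        if x_ls is not None:              # large-scale constraint term
            d_l = x_ls - ls_op @ x
            j += 0.5 * d_l @ c_inv @ d_l
        return j
    return cost

# Tiny synthetic illustration; every size and value below is made up.
rng = np.random.default_rng(0)
n, m, k = 8, 4, 2
x_true = rng.normal(size=n)
xb = x_true + 0.5 * rng.normal(size=n)              # imperfect background
obs_op = rng.normal(size=(m, n))
y = obs_op @ x_true + 0.1 * rng.normal(size=m)      # noisy high-resolution obs
ls_op = np.eye(n)[::n // k]                         # crude coarse-grid sampling
x_ls = ls_op @ x_true
j_fun = make_3dvar_cost(xb, np.eye(n), y, obs_op, 10.0 * np.eye(m), x_ls, ls_op, np.eye(k))
analysis = minimize(j_fun, xb).x                    # the 3DVAR 'analysis'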
Table 1 introduces the experiments and Figure 6 shows a schematic of the OSSEs data assimilation timing. Note that in the first 6 hours, the OSSEs have no assimilation that is due to the TR spin-up. The lower spatial resolution is set to 2.5 km (the FC resolution) because higher resolution violates the assumption of spatially independent observation errors for the R matrix [19,21,22]. Validation Method The evaluation of the assimilation performances is done using the MODE tool [84,85], by comparing the TR accumulated rainfall field with the forecast fields of the other runs. The main advantage of such a validation is that the forecast is not only evaluated point-wise but also at feature level, thus overcoming the so-called "double-penalty" issue [86]. MODE identifies precipitation structures above given thresholds in both the forecast and the observed fields and performs a spatial evaluation of the model capability of reproducing the identified objects [22]. Especially for high resolution observations and cloud-resolving meteorological forecasts during deep convective events, it is preferable to use feature-based verification techniques, such as MODE, because traditional methods cannot provide a measure of spatial and temporal match between observed and forecast fields. In this work, to evaluate the ZTD assimilation performances, 10 different indices are considered above 48 mm threshold. They include both pairs of object attributes and classical statistical scores, namely, for the geometrical indices we consider: centroid distance (CENTROID_DIST), angle difference Results Looking at the 10 m wind field in the first hours of the event (Figure 7), it is possible to see that the presence or the absence of the convergence line over the sea is one of the most evident differences between the forecasts. As previously discussed, the convergence line is strong and persistent in the TR (Figure 7 Panels A, I, Q). It is interesting to underline that from a strictly forecasting view point, Poletti et al. [87] identify the presence of a convergence line over the sea as one of the most important factors that leads to the issue of a hydro-meteorological alert, as argued in what follows. As discussed in Section 3.2, the convergence line is completely absent in the FC_OL simulation (Figure 7 Panels B, J, R). It is found that, the higher the spatio-temporal resolution of the assimilated ZTD field, the better the impact on the convergence line dynamics. In fact, assimilating the Hydroterra-like ZTD at 2.5 km grid spacing, in simulations FC_DA_2.5 km_3 h (Panels C, K, S) and FC_DA_2.5 km_6 h (Panels F, N, V), produces the most realistic convergence line. In particular, the convergence line is better defined by assimilating every 3 h, although in both cases, it is still different from the TR one. Assimilating the Hydroterra-like ZTD at 5 km grid spacing, as in the FC_DA_5 km_3 h (Panels D, L, T) and FC_DA_5 km_6 h (Panels F, N, V) runs, introduces smaller improvements in the modeling of the convergence line with respect to the previous experiments, while assimilating the ZTD at the GNSS locations in simulations FC_DA_gnss_3 h (Panels E, M, U) and FC_DA_gnss_6 h (Panels H, P, X) seems not to influence the surface wind dynamics at all. A better representation of the surface wind field in FC_DA_2.5 km_3 h (Panels C, K, S) and FC_DA_2.5 km_6 h (Panels F, N, V) is also accompanied by an increase of water vapor along the convergence line, more similar to the TR, as highlighted by the 252 mm isoline in Figure 7. 
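The object-based and categorical scores used in this validation can be illustrated with the simple sketch below, which thresholds two rainfall fields at 48 mm, labels the resulting objects, and computes centroid distances together with the classical contingency-table scores. It is a strongly simplified stand-in for MODE (no convolution or fuzzy matching), the grid spacing and threshold are placeholders, and degenerate cases with no points above the threshold are not handled.

import numpy as np
from scipy import ndimage

def rain_objects(field, threshold=48.0):
    """Label contiguous rainfall objects above a threshold [mm] and return their
    centroids (a crude stand-in for MODE's convolution-thresholding step)."""
    mask = field >= threshold
    labels, n_obj = ndimage.label(mask)
    centroids = ndimage.center_of_mass(mask, labels, range(1, n_obj + 1))
    return labels, centroids

def centroid_distance(c_fcst, c_obs, dx_km=2.5):
    """Distance between a forecast and an observed object centroid [km], given the grid spacing."""
    return dx_km * float(np.hypot(c_fcst[0] - c_obs[0], c_fcst[1] - c_obs[1]))

def categorical_scores(fcst, obs, threshold=48.0):
    """Classical contingency-table scores for rainfall above a threshold [mm]."""
    f, o = fcst >= threshold, obs >= threshold
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    return {
        "POD": hits / (hits + misses),                     # probability of detection
        "FAR": false_alarms / (hits + false_alarms),       # false alarm ratio
        "CSI": hits / (hits + misses + false_alarms),      # critical success index
        "FBIAS": (hits + false_alarms) / (hits + misses),  # frequency bias
    }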
Lagasio et al. [22] showed that, for a similar back-building MCS that caused the severe Livorno 2017 flood, the ZTD assimilation from GNSS provided significant improvements in the heavy rainfall forecast. In particular, it was found that the GNSS ZTD assimilation was more effective when the wind field was simultaneously assimilated. This, together with the present findings, suggests that the coarse spatial resolution of the GNSS receivers helps in the correct modeling of the total amount of water vapor, which acts as a source for the heavy rainfall, but struggles in reproducing the fine-scale water vapor spatial distribution, that modifies the surface dynamics. This is especially true when, as in this case, the FC_OL dynamic is very far from the TR one. Thus, only by assimilating the Hydroterra-like ZTD observations at high spatial resolution, does the FC dynamic move towards the TR one, showing a convincing intense convergence line. Thus, the effects of the ZTD assimilation on the surface wind dynamics have direct impacts on the forecast of the rainfall pattern ( Figure 8). In particular, the presence of the well-defined surface convergence line when assimilating the ZTD at 2.5 km grid spacing, in experiments FC_DA_2.5 km_3 h and FC_DA_2.5 km_6 h, results in a more localized rainfall pattern (Panels B and F, respectively). Although being weaker, this is very consistent with the TR rainfall field, which shows the typical V-shape pattern of the Ligurian MCSs [4]. Assimilating a coarser ZTD product, namely the Hydroterra-like ZTD at 5 km, in the FC_DA_5 km_3 h (Panel C) and FC_DA_5 km_6 h (Panel G) runs, results in a rainfall pattern that is more localized than the OL one, but less than in the above mentioned 2.5 km experiments. With respect to the FC_DA_2.5 km experiments, the rainfall peak appears to be shifted westward. Concerning the simulation of the surface convergence field, the assimilation of ZTD at the GNSS locations, in the experiments FC_DA_gnss_3 h (Panel D) and FC_DA_gnss_6 h (Panel H), instead, maintains a more widespread rainfall pattern very similar to the FC_OL one. Note that the time intervals of the rainfall accumulation are different. In the TR, the 12 h accumulation interval is between 21 UTC of the 14th and 09 UTC of 15th of October. In the FC experiments, instead, it is between 00 and 12 UTC of the 15th of October. The reason for this is because in the FC runs, despite the assimilation procedure, a temporal shift of roughly three hours of the intense rainfall remained. None of the FC simulations is able to reach the TR accumulated rainfall peak values. However, the assimilation of Hydroterra-like observations at 2.5 km (FC_DA_2.5 km_3 h and FC_DA_2.5 km_6 h) allows a big improvement with respect to the OL run as quantitatively highlighted by the Method for Object-Based Evaluation (MODE) rainfall validation. Figure 8. It is possible to see that the 48 mm threshold (Figure 9) reveals that when assimilating the Hydroterra-like ZTD observation at 2.5 km, the accumulated rainfall structure is better captured by the model (higher POD, CSI and better FBIAS and FAR), with respect to assimilating the same observation at 5 km grid spacing. In particular, assimilating at 2.5 km every 6 hours provides the lowest FAR, due to a correct spatial distribution of the rainfall field. In fact, with respect to the simulation assimilating at 2.5 km every 3 h, no rainfall overestimation is produced inland (north of 45°N, as visible in Panels B and F of Figure 8). 
This is probably due to the eastward displacement of the convergence line at 1UTC ( Figure 7K), that is strongly reduced in the FC_DA_2.5 km_6 h ( Figure 8F) forecast. In fact, the FC_DA_2.5 km_6 h has a weaker convergence line ( Figure 7C,K,S) with respect to the FC_DA_2.5 km_3 h ( Figure 7F,N,V), that is, however, more persistent in terms of location. The validation in terms of the MODE geometrical indices is restricted to the core rainfall object, and not to the entire WRF innermost domain, d03. This procedure cannot be completely automated because it is specific for each event. It is also necessary to focus the validation on the area of interest, instead of the full WRF grid, in order to avoid mixing the multiple rainfall objects that appear in the simulation results, which could affect the validation results. Looking at these geometrical indices ( Figure 10) it is possible to see that the angle difference (ANGLE_DIFF) of the FC_OL and the FC_DA_gnss runs are the worst ones, remarking a more widespread rainfall pattern with respect to the TR one. The CENTROID_DIST and the SYMMETRIC_DIFF highlight how the simulations assimilating Hydroterra-like observations at 2.5 km resolution (FC_DA_2.5 km_3 h and FC_DA_2.5 km_6 h) produce a better localized intense rainfall object, with a shape closer to the TR one. Furthermore, the INTERSECTION_AREA shows that the FC_DA_2.5 km_6 h has a better pattern extent. Summarizing, it is possible to say that assimilating the ZTD Hydroterra-like observations produces the best improvement in a very challenging forecast, where the dynamical and thermodynamical differences between FC_OL and TR are large. In particular, the higher spatial resolution (2.5 km) seems to be the most effective in changing the wind dynamics and, consequently, the rainfall pattern. Both temporal resolutions of the assimilation (3 and 6 h) produce this improvement. However, the simulation assimilating every 3 h (FC_DA_2.5 km_3 h) still maintains a high FAR, due to the shifting of the simulated convergence line. Instead, a more persistent convergence line in the simulation with data assimilation performed every 6 h (FC_DA_2.5 km_6 h) gives a lower FAR (Figure 9). Discussion Only the high resolution Hydroterra-like observation experiments are capable of changing the OL dynamics enough to provide some of the main ingredients that are important to forecast this kind of back-building MCS. As previously outlined, the MODE analysis indicates that the 6-hour assimilation experiment has better performance than the 3-hour one. This suggests that a 3-hourly DA cycle may not leave enough time for a proper dynamical adjustment to the new humidity information, which can be reached with a 6-hourly cycle. Thus, it appears that the assimilation of the Hydroterra-like ZTD modifies the dynamics at the mesoscale, so that the environment is properly set for the development of the convective V-shape storm. Due to the characteristic low predictability of this kind of event, Liguria region's meteorological forecaster developed a check-list tool [87] to consider various ingredients indicating the possible occurrence of severe, organized, and stationary storms, like the back-building MCSs, during the operational forecasting activities. To assess the impact of assimilating Hydroterra-like observations, the TR, OL and FC_DA_2.5 km_6 h runs are compared following Table 2 of the checklist by Poletti et al. [87]. 
In the first part (a) of this table, an analysis of some thermodynamic parameters such as the K-Index (KI), the Total totals (TT), the CAPE, and the Precipitable Water (PW) allows to evaluate the probability of severe thunderstorms (see Poletti et al. [87] for their definitions). If some of these parameters exceed the identified thresholds, the second part of the table (b) is used to evaluate whether the event under consideration is likely to be organized and persistent. Some of the parameters that are considered in this second part are the presence of a wind convergence line over the sea for more than 3 h, and the strength of the 950 hPa temperature (and humidity) gradient between the Po Valley and the Ligurian Sea. In this work, the use of both parts of Table 2 of Poletti et al. [87] (the checklist) allows to evaluate the impacts that assimilating the high resolution Hydroterra-like ZTD maps has on some physical quantities that are relevant for operational applications. It is worth noticing that even if specific thresholds are identified in the Poletti et al. [87] checklist, their values need to be interpreted. For example, the CAPE parameter threshold should be modulated on the annual cycle, as summer events are usually characterized by higher CAPE values than autumn ones. Furthermore, the K-index is mentioned as a good indicator of severe and organized thunderstorms, but not for persisting ones, such as this kind of back-building MCSs. Furthermore, the TT index and the CAPE do not show a relevant predictive ability for persistent events because, for almost the whole data sets, their values fall within the respective low ranges. Thus, these indices are here used to evaluate if the simulations produce scenarios leading to severe events with respect to some metrics that are currently used for operational activities. The presence of the persistent convergence line and the surface humidity gradients are evaluated to analyse if the event can be both organized and stationary (meaning that it is more prone to generate flash floods). A representative point within the moist and conditionally unstable air mass in the Ligurian sea is chosen to produce the Skew-T diagram and to calculate the relevant indices of the Poletti et al. [87] checklist. The virtual vertical soundings are shown in Panels A-C of Figure 11, while the corresponding surface water vapor mixing ratio maps are shown in Panels D-F. The soundings are taken in the early phase of the event, which are a few hours apart depending on the configuration, as discussed above. In particular, the virtual sounding is taken at 4 UTC in the TR experiment and at 7 UTC of the 15th of October in the FC_OL and FC_DA_2.5 km_6 h experiments. While the TR and the FC_DA_2.5 km_6 h runs are characterized by thermodynamic indices that fall in the moderate to high ranges, the FC_OL has generally weaker values. For example, the CAPE over the Ligurian Sea in the TR and FC_DA_2.5 km_6 h runs is of the order of 2000 J kg −1 and it is only around 1500 J kg −1 in the FC_OL. The KI is moderate for the TR and FC_DA_2.5 km_6 h runs, with values around 30°C, and is weak for the FC_OL, roughly 25°C. The TT and the PW indices, instead, do not highlight significant differences, as they all fall in the same range (weak for the TT, between 45 and 50°C, and moderate for the PW, between 30 and 35 mm). Thus, the first part of the checklist evaluation suggests that severe events can occur in all forecasts, with the FC_OL generally having weaker indices. 
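For reference, the sketch below spells out the standard textbook definitions of the K-Index, the Total Totals index, and the precipitable water discussed above (the paper itself defers to Poletti et al. [87] for the exact definitions). The numerical values in the final comment are illustrative, chosen to fall in the ranges quoted in the text, and are not taken from the simulations.

import numpy as np

def k_index(t850, td850, t700, td700, t500):
    """K-Index [deg C]: (T850 - T500) + Td850 - (T700 - Td700), temperatures in deg C."""
    return (t850 - t500) + td850 - (t700 - td700)

def total_totals(t850, td850, t500):
    """Total Totals index [deg C]: T850 + Td850 - 2*T500, temperatures in deg C."""
    return t850 + td850 - 2.0 * t500

def precipitable_water(q, p):
    """Precipitable water [mm] from specific humidity q [kg/kg] on pressure levels
    p [Pa]: PW = (1/g) * |integral of q dp|, and 1 kg m^-2 corresponds to 1 mm."""
    return abs(np.trapz(q, p)) / 9.81

# Illustrative inputs only (not taken from the simulations): t850=12, td850=7,
# t700=4, td700=1, t500=-14 give KI = 30 deg C and TT = 47 deg C.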
Moving to the organization and persistence evaluation, Poletti et al. [87] highlight the importance of the presence of the convergence line over the sea for more than three hours. In fact, this persistent dynamic is responsible for the development of convective cells over the same location, producing very high values of accumulated rainfall. The fact that in the TR the convergence line lasts for at least three hours is visible in Panels A, I, Q of Figure 7, showing the surface wind field between 0 and 2 UTC, and in Panel D of Figure 11, showing the surface water vapor mixing ratio field (at 2 m, Q2m) at 4 UTC. In particular, the surface convergence is highlighted by the 0.013 kg/kg isoline shown in red, which marks the dividing line between the drier continental air mass and the moist maritime one. The FC_OL simulation does not show any sign of a convergence line, either at the beginning of the event (Figure 7B,J,R) or during its main phase, as indicated by the more homogeneous Q2m distribution over the sea at 7 UTC (Figure 11E), with the 0.013 kg/kg isoline closely following the coastlines. The FC_DA_2.5 km_6 h simulation shows the presence of the convergence line (Figure 7F,N,V) from the beginning of the event. Even if weaker and slightly shifted with respect to the TR, the convergence line is clearly visible for at least three hours, and it strengthens at 7 UTC, as revealed by the Q2m distribution shown in Figure 11F. Thus, this important ingredient, together with the presence of a temperature gradient (not shown) and a Q2m gradient between the Po Valley and the Ligurian Sea (Figure 11D-F), allows us to conclude that the TR and the FC_DA_2.5 km_6 h simulate a severe, organized, and persistent event (consistent with the back-building MCS dynamics), while the FC_OL simulates a weaker and non-organized event. This analysis, based on physical criteria that are relevant for operational activities, shows that the assimilation of Hydroterra-like observations is able to change the model dynamics and thermodynamics so that, starting from a run that simulates a relatively weak, widespread, and non-organized rainfall event, a realistic back-building MCS is produced.
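Since the persistence of the sea convergence line is such a central checklist ingredient, a simple diagnostic of it can be automated from the model surface wind fields. The following sketch is only one possible approach, assuming hourly 10 m wind fields on a regular 2.5 km grid; the divergence threshold and the three-hour persistence criterion are assumptions inspired by the checklist, not the operational implementation.

```python
import numpy as np

def surface_divergence(u10, v10, dx_m):
    """Horizontal divergence of the 10 m wind on a regular grid (1/s)."""
    dudx = np.gradient(u10, dx_m, axis=1)
    dvdy = np.gradient(v10, dx_m, axis=0)
    return dudx + dvdy

def persistent_sea_convergence(u_series, v_series, sea_mask, dx_m=2500.0,
                               div_thresh=-5e-4, min_hours=3):
    """True if strong near-surface convergence persists over the sea for at
    least `min_hours` consecutive hourly fields.

    u_series, v_series : sequences of 2-D 10 m wind fields, one per hour
    sea_mask           : boolean array selecting sea grid points
    div_thresh         : divergence threshold (1/s) marking 'strong' convergence
                         (an assumed value, not an operational criterion)
    """
    consecutive = 0
    for u10, v10 in zip(u_series, v_series):
        div = surface_divergence(u10, v10, dx_m)
        consecutive = consecutive + 1 if (div[sea_mask] < div_thresh).any() else 0
        if consecutive >= min_hours:
            return True
    return False
```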
Note that the FC_DA experiments are not fully operational configurations, as the Hydroterra-like ZTD is assimilated during the event. Future works will be devoted to studying the impact of assimilating the Hydroterra-like ZTD product in fully operational configurations, taking into account, for example, the availability of the forecasts and of the Hydroterra products. In this way, a more precise quantification of the lead time of the improved forecast under different meteorological conditions could be performed.

The proven relevance of the Hydroterra-like observations, albeit structurally retrievable only over land, can be further interpreted in light of the results of Chu and Lin [88] and Chen and Lin [89]. These authors identified four moist flow regimes for a (two-dimensional) conditionally unstable flow over a mesoscale mountain ridge and proposed an unsaturated moist Froude number F_w = U/(h N_w) as the control parameter for these flow regimes, where U is the wind speed, h the mountain height, and N_w the moist Brunt-Väisälä frequency. In the regime with low F_w, quasi-continuous and heavy rainfall is produced over the upslope side of the terrain as individual convective cells develop upstream at the head of the density current, thus resembling the typical back-building MCS scenario over the Mediterranean area. Propagating precipitation is caused by convection triggered ahead of the hydraulic jump over the lee slope, in this case coincident with the seaward side of the coastal mountain range, and is advected by the basic large-scale flow. Thus, the aforementioned hydraulic jump is controlled by downstream conditions over the land, supporting the relevance of continental Hydroterra-like observations. This means that the assimilation of ZTD observations over land modifies the thermodynamical state of the upstream flow, which significantly impacts the surface wind dynamics over the Ligurian Sea, as shown in Figure 7 and discussed previously.
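As a purely illustrative aid to the regime argument above, the moist Froude number can be evaluated directly from the upstream flow parameters. The values in the example below are assumptions chosen only to show the order of magnitude, not diagnostics taken from the simulations.

```python
def moist_froude(U, h, N_w):
    """Unsaturated moist Froude number F_w = U / (h * N_w)."""
    return U / (h * N_w)

# Illustrative (assumed) values: U = 10 m/s upstream flow, h = 1000 m coastal
# ridge, N_w = 0.01 1/s moist Brunt-Vaisala frequency.
print(moist_froude(U=10.0, h=1000.0, N_w=0.01))  # F_w = 1.0 (dimensionless)
```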
To explicitly show the link between the mesoscale dynamics and the convective dynamics in this region characterized by complex terrain, Figure 12 shows the surface wind speed and the 1 m/s isosurfaces of the updraft velocity (in cyan) in the TR, FC_OL and FC_DA_2.5 km_6 h experiments. As visible in the figure, the FC_OL run is the only one that does not produce ascending motion with a narrow and well-organized structure along the surface wind convergence line.

Conclusions

The main goal of the present work is to evaluate the possible added value of directly assimilating in an NWP model the high resolution ZTD estimates that will be provided by the SAR sensor flying on board the Hydroterra geosynchronous satellite, an ESA 10th Earth Explorer mission candidate. Firstly, a set of OSSEs is built to identify the spatio-temporal resolution of the new ZTD observations that has the largest positive impact on the forecast of a heavy rainfall event. Secondly, a comparison with the improvements induced by the assimilation of ZTD from the currently available Italian GNSS network is performed for the same case study. All validations are done both in a qualitative way, by looking at appropriate maps, and in a quantitative way, using an object-based diagnostic tool (MODE) applied to the accumulated rainfall field [84,85].

The case study is an MCS that occurred over the Liguria region between the 14th and the 15th of October 2019, characterized by a very low predictability. As in the present case, MCSs are often triggered by the encounter of a cold and dry continental air mass with an unstable, moist, and warmer maritime air mass [4], resulting in a persistent and well-defined surface wind convergence line. The reference TR is performed using an initialization and a setup allowing a good representation of the real extreme event to be obtained, with very intense accumulated rainfall values over a relatively small area. Conversely, the FC_OL is not able to model this event at all and its dynamics differ significantly from the TR, with the convergence line completely missing in the FC_OL.

The OSSEs highlight that, even if the starting point (the FC_OL) completely lacks some of the fundamental ingredients for a skilful forecast of a back-building MCS, the assimilation of high resolution (2.5 km) Hydroterra-like observations is able to deeply improve the forecast. In fact, this is the only observation, among the ones used in this work, that modifies the wind dynamics so that a persistent and well-defined convergence line is modeled. This is particularly relevant because, although the Hydroterra-like ZTD observation is assimilated only over land, it is able to produce more realistic dynamics over the sea, which is crucial for a correct forecast of MCSs.

A better surface wind representation is accompanied by a more localized and more intense accumulated rainfall simulation that resembles the reference run more closely. The comparison with the skill of the simulation assimilating the currently available GNSS receivers' ZTD observations (with a spacing of roughly 30-50 km) shows that it is indeed the fine spatial resolution that adds information to the model, so that the surface wind and the accumulated precipitation are simulated more accurately. It is worth noticing that none of the simulations reach the TR rainfall peak. However, it is well known that this kind of event is characterized by an intrinsically low predictability [3,4,41]. For this reason, in an operational framework, some regions particularly prone to this kind of event have developed tools (in the form of a checklist) to account for all the relevant dynamical and thermodynamical processes that could help to forecast this kind of extreme event [87]. From the evaluation of the most important parameters highlighted in the Liguria region checklist, it appears that FC_OL and FC_DA_2.5 km_6 h both indicate the likely occurrence of a severe event (with the FC_OL having a weaker signal), but only the FC_DA_2.5 km_6 h is able to suggest the probable occurrence of a severe, organized, and persistent event, as in the TR. In fact, one of the most important dynamical ingredients is the presence of a convergence line over the sea for more than three hours, and only by assimilating the Hydroterra-like observations at 2.5 km is the model able to reproduce it.

Summarizing, the Hydroterra-like observations are found to have great potential for use in a meteorological framework. In particular, the assimilation of such high spatio-temporal resolution information on water vapor (in the form of ZTD) seems to be able to correct the model dynamics so that the heavy rainfall event is better reproduced. Such an influence on the model simulation can be important not only in the operational framework but can also lead to deeper physical insights into the evolution of such events. In this work, the temporal resolutions used for the Hydroterra-like observations are 3 and 6 h, because a conservative approach, based on the state-of-the-art assimilation procedure, was selected. However, having hourly ZTD observations from Hydroterra could pave the way for various new applications, such as the implementation of ensemble NWP nowcasting chains with hourly initialization, the use of different kinds of data assimilation techniques to exploit the ZTD temporal evolution (e.g., 4DVAR), and the development of storm detection and prediction algorithms based on the spatial distribution of the water vapor field [90][91][92]. Furthermore, in this case the impact evaluation is performed on an explosive rainfall event, but it has been demonstrated that assimilating ZTD at high resolution is also useful for improving forecasts of slowly evolving rainfall cases [22].

Another important future development of this work would be to evaluate the added value of assimilating Hydroterra-like ZTD in other regions covered by the Hydroterra geosynchronous observations, e.g., Africa. West Africa, including the Sahel, is a good example because MCSs are frequent and can cause significant damage.
Due to the lack of observations in that area, the Hydroterra ZTD observations could be very valuable for improving the forecast capabilities, especially when coupled with the Hydroterra soil moisture observations, because soil moisture plays a fundamental role in the dynamics of MCSs in this region [93]. In fact, the MCSs which form over land (e.g., in the Sahel, where they are responsible for the majority of annual rainfall [94]) are known to be controlled by the surface properties [95]. The added value of the Hydroterra soil moisture observation in the hydrological framework has been discussed in [29]. Future works are needed to assess the impact of these new observations (ZTD and soil moisture) in a complete hydro-meteorological framework, which is very important for forecasting high impact weather events over areas with complex terrain, such as the Mediterranean region. Furthermore, the differences and interactions of these new data with other traditional sensors (e.g., radar and ground stations) will be investigated in future works.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A. Data Assimilation Procedures

The standard 3DVAR data assimilation technique implemented in the WRFDA package [81] looks for the minimum of the following cost function [96]:

J(x) = (1/2) (x − x_b)^T B^{−1} (x − x_b) + (1/2) (y − y_0)^T R^{−1} (y − y_0),  (A1)

in which x is the analysis, x_b is the first guess coming from an NWP model, y_0 is the observation vector to be assimilated, and y is the model-derived observation vector, obtained by applying the observation operator H to the analysis x, namely y = H(x). The solution of Equation (A1) represents an a posteriori minimum variance estimate of the true state of the atmosphere given two sources of data: the numerical first guess x_b and the available observations y_0. Their relative importance is weighted by the estimates of their errors contained in the background error covariance matrix, B, and the observation error covariance matrix, R. The R matrix is actually the sum of two distinct error covariance matrices: the observation (instrumental) matrix and the representativity error matrix (which contains the approximations introduced by geometrical transformations, interpolations, etc.). This matrix is assumed to be diagonal, as done in most models [97], implying that the correlations between different instruments, and between different observations made by the same instrument, are equal to zero.
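To make the structure of Equation (A1) concrete, the following toy sketch evaluates the cost function and its gradient for a purely linear observation operator represented as a matrix. This is only an illustration of the minimization problem; the operational WRFDA implementation uses control-variable transforms, a nonlinear H, and iterative minimizers, and all names below are assumptions for illustration.

```python
import numpy as np

def threedvar_cost_and_grad(x, x_b, y_0, H, B_inv, R_inv):
    """Cost J(x) of Equation (A1) and its gradient, for a linear operator H.

    x, x_b : analysis and background state vectors
    y_0    : observation vector
    H      : observation operator as a matrix, so that y = H @ x
    B_inv, R_inv : inverses of the background and observation error covariances
    """
    dx = x - x_b                      # departure from the first guess
    dy = H @ x - y_0                  # departure from the observations
    J = 0.5 * dx @ B_inv @ dx + 0.5 * dy @ R_inv @ dy
    grad = B_inv @ dx + H.T @ R_inv @ dy
    return J, grad
```

In this linear setting the minimum can be found with any gradient-based minimizer (for example, scipy.optimize.minimize with the L-BFGS-B method), which loosely mimics the inner loop of a variational scheme.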
In this work, the Control Variable option 7 (CV7) of the WRFDA package is used for the B matrix calculation with the National Meteorological Center (NMC) method [98]. In previous works, where ZTD from Sentinel and GNSS was assimilated [22,64], the CV5 option was used instead. The CV5 option exploits the velocity potential and the streamfunction (ψ, χ) as momentum control variables. This has been shown to improve the representation of the large-scale features, thanks to the balance between the mass and wind fields, but the small-scale features are missed [99]. The CV7 option, instead, uses the wind components (U, V) as momentum control variables. In CV7, since no balance constraints are applied, the use of (U, V) as control variables can provide a closer fit to dense observations in limited-area, convective-scale data assimilation experiments [99]. The NMC method is applied over the entire month of October 2018, with a 24-h lead time for the forecasts starting at 00:00 UTC and a 12-h lead time for the ones initialised at 12:00 UTC of the same day. The differences between the two forecasts (t + 24 and t + 12) valid for the same reference time are used to calculate the domain-specific error statistics.

Concerning the Large-Scale Constraint (LSC), it is mathematically implemented in the WRFDA 3DVAR by adding a new term J_c to Equation (A1), which, using the incremental formulation, reads

J_c = (1/2) (d_c − H U v)^T R_c^{−1} (d_c − H U v),  (A2)

where d_c = y_c − H(x_b) is the innovation vector that measures the departure of the LSC y_c from its counterpart computed from the background x_b; v = U^{−1}(x − x_b) is the control variable vector, with U being the decomposition of the background error covariance B via B = U U^T; and H in Equation (A2) is the linearization of the nonlinear observation operator H. The y_c variable includes the meridional and zonal wind components, the temperature, and the water vapour mixing ratio from the large-scale analysis, which are assimilated as bogus observations. The errors for wind, temperature, and water vapour mixing ratio are 2.5 m s−1, 2 °C, and 3 g kg−1, respectively, and are determined by the diagnostics of the GFS product [82,83]. They form the R_c matrix, which weights the importance of the LSC term in the cost function minimization.

Starting from the results obtained by Tang et al. [83], some sensitivity experiments are performed to understand the effect of the LSC scheme on different scales of the analysis fields and on the precipitation forecast (not shown). In [83], the sensitivity of the assimilation scheme is explored using the LSC every 1, 5, and 10 grid points of the outer WRF domain (d01), at 15 km resolution, and starting from different vertical levels. By skipping the first few levels in the LSC scheme, they allow the lower atmosphere to develop the small-scale dynamics that can be important for the development of convection, due to, for example, the horizontal gradients of the surface fluxes and the interactions with the orography. Their best results are achieved by sampling every 5 grid points (thus at 75 km grid spacing) and starting from the fourth vertical level. However, in all the experiments performed, the forecast is found to improve with respect to the open loop reference run. In this work, the WRF d01 domain at 22.5 km resolution is used for the LSC sensitivity, retaining a value every 1, 2, and 3 grid points. Further experiments are performed by skipping the first few vertical model levels, to minimise the possible impact of the large-scale constraint on the small-scale features and to obtain a more effective assimilation of surface observations. In this particular case, reproducing the same sensitivity analysis as [83], no significant differences are found when skipping the lowest three vertical levels (not shown). The final setup chosen for this work is the LSC sampled every 2 grid points of d01, without skipping any vertical level.
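As a minimal sketch of how the LSC bogus observations could be selected and how the J_c term of Equation (A2) enters the incremental cost function, the snippet below thins a large-scale analysis field on the d01 grid and evaluates J_c for a linear(ized) operator. The function and variable names are assumptions for illustration and do not reproduce the WRFDA internals.

```python
import numpy as np

def thin_large_scale(field2d, every=2):
    """Subsample a large-scale analysis field every N grid points of d01,
    mimicking the selection of LSC bogus observations used in this work."""
    return field2d[::every, ::every]

def lsc_term(v, U_mat, H_c, d_c, Rc_inv):
    """Large-Scale Constraint term J_c of Equation (A2), for a linear H_c."""
    r = d_c - H_c @ (U_mat @ v)   # misfit of the increment to the LSC innovations
    return 0.5 * r @ Rc_inv @ r
```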