Mind Your Outcomes: The ∆QSD Paradigm for Quality-Centric Systems Development and Its Application to a Blockchain Case Study

Abstract: This paper directly addresses a long-standing issue that affects the development of many complex distributed software systems: how to establish quickly, cheaply, and reliably whether they can deliver their intended performance before expending significant time, effort, and money on detailed design and implementation. We describe ∆QSD, a novel metrics-based and quality-centric paradigm that uses formalised outcome diagrams to explore the performance consequences of design decisions, as a performance blueprint of the system. The distinctive feature of outcome diagrams is that they capture the essential observational properties of the system, independent of the details of system structure and behaviour. The ∆QSD paradigm derives bounds on performance expressed as probability distributions encompassing all possible executions of the system. The ∆QSD paradigm is both effective and generic: it allows values from various sources to be combined in a rigorous way so that approximate results can be obtained quickly and subsequently refined. ∆QSD has been successfully used by a small team in Predictable Network Solutions for consultancy on large-scale applications in a number of industries, including telecommunications, avionics, and space and defence, resulting in cumulative savings worth billions of US dollars. The paper outlines the ∆QSD paradigm, describes its formal underpinnings, and illustrates its use via a topical real-world example taken from the blockchain/cryptocurrency domain. ∆QSD has supported the development of an industry-leading proof-of-stake blockchain implementation that reliably and consistently delivers blocks of up to 80 kB every 20 s on average across a globally distributed network of collaborating block-producing nodes operating on the public internet.
Introduction

In order to avoid expensive design and implementation failures, it is critical to establish sufficiently early in the design cycle that software systems will meet both their functional requirements and their non-functional requirements. This paper describes ∆QSD, a novel metrics-based and quality-centric paradigm that uses formalised outcome diagrams to explore the performance consequences of design decisions, and so to determine system viability ahead of expensive implementation work. The paradigm has been successfully used in a number of commercial settings, including telecommunications, avionics, and space and defence. The paper introduces the concepts underlying ∆QSD and formalises them. Developing such systems is hard for a number of reasons:

1. System requirements are often vague and/or contradictory, and they can change both during and after development;
2. Complexity forces hierarchical decomposition of the problem, creating boundaries, including commercial boundaries with third-party suppliers, that may hinder optimal development and hide risks;
3. Time pressure forces parallel development that may be at odds with that hierarchical decomposition, and it encourages leaving 'tricky' issues for later, when they tend to cause re-work and overruns and leave tail-risks;
4. Cost and resource constraints force resources to be shared both within the system and with other systems (e.g., when network infrastructure or computing resources are shared); they may also require re-use of existing assets (own or third-party), introducing a degree of variability in the delivered performance;
5. The performance of particular components or subsystems may be incompletely quantified;
6. System performance and resource consumption may not scale linearly (which may not become apparent until moving from a lab/pilot phase to a wider deployment);
7.
At scale, exceptional events (transient communications and/or hardware issues) can no longer be treated as negligibly rare, and their effects and mitigation need to be considered along with the associated performance impacts.

Thus, what is needed is (1) a way of capturing performance and resource requirements that accommodates all the various sources of uncertainty; and (2) a process for decomposing a top-level requirement into subsystem requirements that provides confidence that satisfying all the lower-level requirements will also satisfy the top-level one. For functional aspects of system behaviour, there are various ways of dealing with this [7]. However, while established software engineering approaches do exist for dealing with performance [8], these all have significant limitations.

The ∆QSD Systems Development Paradigm

This paper directly addresses those issues by defining the ∆QSD systems development paradigm and providing a high-level formalism that can be used throughout the system development process. ∆QSD is a quality-centric paradigm, focusing on meeting timeliness constraints and an acceptable failure rate of the top-level outcomes with acceptable resource consumption. The paradigm has been used successfully by a small team in Predictable Network Solutions in a variety of large industrial projects, collectively saving billions of dollars and person-centuries of development effort. It informs high-level management and system design decisions by showing where conflicts exist (or may exist) between system designs and required outcomes. It is able to compute the predicted performance at any stage of the design process, where performance is seen broadly as comprising timeliness, behaviour under load, resource consumption, and other key system metrics. Central to ∆QSD is the concept of an outcome, which is defined as a specific system behaviour with specified start and end points, localised in space and time.
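As a minimal illustration of this central concept (our own names and simplifications; the paper's formal treatment, introduced in Section 3, uses sets of starting and terminating events), an outcome pairs a start and an end observation, each localised in time and space:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Observable:
    """An event of interest: a point in time at a location."""
    time: float
    location: str

@dataclass(frozen=True)
class Outcome:
    """A system behaviour with specified start and end observations,
    localised in space and time (illustrative sketch)."""
    start: Observable
    end: Observable

    def duration(self) -> float:
        return self.end.time - self.start.time

# e.g., a block leaving node A and arriving, verified, at node Z
diffusion = Outcome(Observable(0.0, "node A"), Observable(1.5, "node Z"))
```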
In ∆QSD, the system engineer models the system as an outcome diagram, which is a graph that captures the causal relationships between system outcomes. ∆QSD defines a system design as a sequence of outcome diagrams that capture the essential observational properties of the system, independent of the details of system structure and behaviour. This sequence starts with a fully unspecified system and ends with either (i) a fully specified (or a convincingly specified-enough) system (deemed constructible), or (ii) the conclusion that the system goals are infeasible. The formalism allows exploration of the design space by assessing the consequences of the decisions that are taken (and possibly retracted) at each refinement step, giving rise to threaded decision trees. For each partially specified system, we compute the predicted timeliness, behaviour, and resource consumption of the system under load, obtaining one of three possible conclusions: (1) infeasibility, in which case further development ceases and earlier design decisions are revised; (2) slack, in which case further optimisation ceases because the system is good enough; or (3) indecisiveness, which requires additional scrutiny until one of the alternative conclusions can be drawn. The paper gives one large example, blockchain diffusion, that illustrates how ∆QSD can be used in practice, explaining how the formalism can be used to drive the design process and associated decision making. This example is a real-world application that is in continuous use as a core part of the Cardano blockchain technology (https://cardano.org/ (accessed on 14 December 2021)). Most performance analysis approaches require the system to be fully specified, or even implemented; this is a serious disadvantage, since it does not allow the properties of subsystems to be encapsulated and hierarchically (de)composed.
By contrast, ∆QSD satisfies compositionality, the principle that the meaning of a complex expression is determined by the meanings of its constituent expressions and the rules that are used to combine them. For compositional properties, what is "true" about subsystems (e.g., their timeliness, their resource consumption) is also "true" about their (appropriate) combination: there exists an invariant (e.g., timeliness, aspects of functional correctness) that must hold over the reified components of the system. This is key to managing complexity within the systems development life-cycle. In the broader software development space, functional programming techniques are improving the compositionality of functional aspects of software systems, and they can deliver high assurance of functional correctness when combined with appropriate formal methods [9]. The ∆QSD paradigm represents a similar step change in handling the "non-functional" aspects of performance and resource consumption. By treating delay and failure as a single object, called 'quality attenuation', our paradigm can be thought of as a combination of passage time analysis and failure mode effects analysis (FMEA).

Main Contributions of this Paper

The main contributions of this paper are as follows:

1.
Introduce ∆QSD, a formalism (Section 5) that focuses on rapidly exploring the performance consequences of design and implementation choices, where:
(a) Performance is a first-class citizen, ensuring that we can focus on details relevant to performance behaviour;
(b) The whole software development process is supported, from checking the feasibility of initial requirements to making decisions about subtle implementation choices and potential optimisations;
(c) We can measure our choices against desired outcomes for individual users (customer experience);
(d) Analysis of saturated systems is supported (where a "saturated system" is one with resources that have reached their limits, e.g., systems with high load or high congestion);
(e) Analysis of failure is supported.
We use term-rewriting for formalising refinements (Definition 3 in Section 5) and denotational semantics for formalising timeliness analysis (Section 5.3) as well as load analysis (Section 5.4).

2. Describe key decisions made in the development process of a real system, i.e., the Cardano blockchain, which is presented as a running example, and show how ∆QSD is able to quickly rule out infeasible decisions, predict behaviour, and indicate design headroom (slack) to decision makers, architects, and developers (Section 4).

While the ∆Q concept has been described in earlier papers [10,11] and used to inform a number of large-scale system designs, these previous contributions have only used it in an informal manner. By providing a formal definition of ∆QSD, and showing how it can be used in practice, we are taking an important step towards a general evidence-based engineering methodology for developing real-time distributed systems.

Structure of the Paper

This paper has the following structure:

• Section 2 introduces the running example that we will use throughout the paper: block diffusion in the Cardano blockchain.
• Section 3 defines the basic concepts that underlie the ∆QSD formalism: outcomes, outcome diagrams, and quality attenuation (∆Q). We also compare outcome diagrams with more traditional diagrams such as block diagrams.
• Section 4 gives a realistic example of the ∆QSD paradigm, showing a step-by-step design of block diffusion (introduced in Section 2) based on quality analysis. This example introduces the basic operations of ∆QSD in a tutorial fashion. The example uses realistic system parameters that allow us to compute predicted system behaviour.
• Section 5 gives the formal mathematical definition of ∆QSD and its main properties. With this formal definition, it is possible to validate the computations that are used by ∆QSD as well as to build tools based on ∆QSD.
• Section 6 gives a comprehensive discussion about related work from three different viewpoints: theoretical approaches for performance analysis (Section 6.1), performance design practices in distributed systems (Section 6.3), and programming languages and software engineering (Section 6.4).
• Section 7 summarises our conclusions, discusses some limitations of the paradigm, and describes our plans to further validate ∆QSD and to build a dedicated toolset for real-time distributed systems design that builds on the ∆QSD paradigm.

Running Example: Block Diffusion in the Cardano Blockchain

A blockchain is a form of distributed ledger. It comprises a number of blocks of data, each of which provides a cryptographic witness to the correctness of the preceding blocks, back to some original 'genesis' block (a 'chain' of blocks, hence 'blockchain') [12]. Nodes in the system use some specified protocol to arrive at a distributed consensus as to the correct sequence of blocks, even in the presence of one or more 'adversaries' that aim to convince other nodes that a different sequence is correct.
One such consensus protocol is Ouroboros Praos [13], which underpins Cardano (https://www.cardano.org (accessed on 14 December 2021)), one of the world's leading cryptocurrencies. Ouroboros Praos uses the distribution of 'stake' in the system (i.e., the value of the cryptocurrency tokens that are controlled by each node) to randomly determine which node (if any) is authorised to produce a new block in the chain during a specific time interval (a 'slot'); the more stake a node controls, the more likely it is to be authorised to produce a block. For this to be effective, it is important that the block-producing node has a copy of the most recently produced block, so that the new block can correctly extend the existing chain. Since the block producer is selected at random, this means that the previous block needs to have been copied to all block-producing nodes; we call this process 'block diffusion'. Since blocks are produced on a predetermined schedule and each block depends on its predecessor, block diffusion is a real-time problem; each block must be diffused before the next block can be produced. In order to be robust, the consensus algorithm is designed to withstand some imperfections in block diffusion; hence, the effective requirement is that blocks should be well-diffused "sufficiently often". Put another way, the probability that a block fails to arrive in time for the production of the next block must be suitably bounded. The engineering challenge is to quantify this probability as a function of the design and of the parameter choices of the implementation. The scale of the challenge is illustrated by Cardano. Cardano is a global-scale distributed system that eschews centralised management. At the time of writing, 2948 globally-distributed nodes cooperate to produce and distribute blocks for $45.77B of cryptocurrency that is associated with 956,092 distinct user addresses. The stake distribution at the time of writing is shown in Figure 1.
In Cardano, slots are one second long and blocks are produced every 20 s on average. An initial implementation of Cardano (code-named 'Byron') was functionally correct but proved incapable of effective block diffusion without rigid control of the nodes and their topology; a re-implementation (called 'Jormungandr') targeted higher performance by using a different programming language (Rust instead of Haskell), but this also missed the block diffusion target by a wide margin. A further, and ultimately successful, reimplementation (called 'Shelley' [14]) used Haskell to retain strong correctness assurances but applied the principles that are discussed in this paper to also ensure adequate performance in a fully decentralised deployment. (Figure 1 caption: the y-axis represents the number of blocks produced by each "stake pool"; the x-axis represents the stake that is held by the pool, in Ada.)

Key Design Decisions

In the design of Shelley, a number of inter-related decisions had to be made. These included the following:

1. How frequently should blocks be produced? Proof-of-Work systems are limited in their throughput by the time taken to 'crack' the cryptographic puzzle; proof-of-stake systems do not have this limitation and so have the potential for much higher performance, both in terms of the volume of transactions embedded into blocks and the time taken for a transaction to be fully incorporated in the immutable part of the chain. Thus, the interval between blocks is a key parameter.

2. How are nodes connected? It might seem that connecting every node to every other would minimise block diffusion time; however, the lack of any control over the number and capabilities of nodes makes this infeasible. Nodes can only be connected to a limited number of peer nodes; then, the number of connected peers and how they are chosen become important.

3. How much data should be in a block?
Increasing the amount of data in a block improves the overall throughput of the system but makes block diffusion slower.

4. How should blocks be forwarded? Simply forwarding a new block to all connected nodes would seem to minimise delay, but this wastes resources, since a node may receive the same block from multiple peers. In the extreme case, this represents a potential denial-of-service attack. Splitting a block into a small header portion (sufficient for a node to decide whether it is new) and a larger body that a node can choose to download if it wishes mitigates this problem but adds an additional step into the forwarding process.

5. How much time can be spent processing a block? Validating the contents of a block before forwarding it mitigates adversarial behaviour but can be computationally intensive, since the contents may be programs that need to be executed (called 'smart contracts'); allowing more time for such processing permits more, and more complex, programs but makes block diffusion slower.

The remainder of this paper shows how such design decisions can be quantified using the ∆QSD paradigm.

Formulating the Problem

We assume that a collection of blockchain nodes is assembled into a random graph (randomness is important in a blockchain setting for mitigating certain adversarial behaviours). In each time slot, a randomly chosen node may generate a block, and we are interested in the probability that the next randomly chosen node has received that block before it generates the next block.

Problem Statement

Starting from blockchain node A, what is the probability distribution of the time taken for a block to reach a different node Z when A and Z are picked at random from the graph? Since the graph is random with some limited node degree N, there is a strong chance that A is not directly connected to Z, and so, the block will have to pass through a sequence of intermediate nodes B, C, . . .
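The length of that chain of intermediate nodes can be estimated with a back-of-envelope calculation. The sketch below is our own illustration (the function name and parameter values are assumptions, not taken from the paper): in a sparse random graph, typical path lengths grow roughly logarithmically in the number of nodes, with the node degree as the base of the logarithm.

```python
import math

def typical_hops(n_nodes: int, degree: int) -> float:
    """Rough estimate of the typical number of hops between two randomly
    chosen nodes in a sparse random graph: the d-ary neighbourhood of a
    node grows as degree**k, so path lengths scale as log(n)/log(degree)."""
    return math.log(n_nodes) / math.log(degree)

# e.g., a network of ~2500 nodes with degree 10 needs only a few hops
hops = typical_hops(2500, 10)   # roughly 3.4
```

This is why, even at global scale, a block only needs a handful of node-to-node forwarding steps, each contributing its own delay.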
The length of this sequence is a function of the size and node degree of the graph [15]. The (distribution of) time to forward a block directly from one node to another is known (e.g., by measurement).

Foundations

In the remainder of this paper, we will take the system of discourse to be fixed for the design engineer. We assume that this system has a number of tasks that must be performed. In order to perform a task that is not considered to be atomic by the design engineer, the system might need to perform several other subtasks. The process of clarifying the details of the system by breaking higher-level tasks into such subtasks is what we call refinement (Definition 3 in Section 5). By refining a system, one goes from a coarser granularity of the design to a finer one (see Sections 4.1-4.3 for examples). Sometimes, the design engineer will return to a coarser-grained design, as discussed in Section 7.1.2, in order to take a different direction of refinement. Reasons why they might want to do so include: to investigate other aspects of their system; to compare two alternative design choices; or because a refinement fails to meet the necessary performance or other requirements. Thus, ∆QSD is design exploration in the world of refinements. This section sets the stage for presenting design exploration in action (Section 4) by introducing the fundamental concepts: outcomes (Section 3.1), outcome diagrams (Section 3.2), and quality attenuation (Section 3.3). Then, it gives a simple example of how to approach problems à la ∆QSD (Section 3.4). This section ends with a discussion of why ∆QSD introduces a new kind of diagram alongside the many that already exist in software engineering (Section 3.5).

Outcomes

An outcome is what the system obtains by performing one of its tasks. Each task has precisely one corresponding outcome, and each outcome has precisely one corresponding task.
We say that an outcome is 'performed' to mean that the corresponding task of that outcome is performed. Likewise, we may apply task adjectives to outcomes too, even though outcomes and tasks are inherently different. For example, by an atomic outcome, we mean an outcome whose corresponding task is itself atomic. We take an event-based perspective, in which each outcome has two distinct sets of events: the starting set of events (any one of which must happen before the task can commence) and the terminating set of events (at least one of which must happen before the task can be considered complete). Each of those sets consists of events that are of particular interest (as opposed to just any event). We call such events of interest the observables. For example, an observable in the starting set, S_o, of an outcome o is of interest because it signifies the point in time and 3D location at which o begins. Likewise, an observable from the terminating set, T_o, of o is an event that contains information regarding the location where o finishes. While it may seem unusual to refer explicitly to location in a computer science context, when considering distributed systems, the outcomes of interest are precisely those that begin at one location and finish at another. Of course, once an observable from S_o occurs, there is no guarantee that one from T_o will occur within o's duration limit, d(o) (i.e., the relative time by which o is required to complete). However, when an observable from T_o does occur within the duration limit after one from S_o, o is said to be done. Diagrammatically, we show an outcome using an orange circle. As shown in Figure 2, we depict the starting set and the terminating set of an outcome using small boxes to the left and right of the outcome's circle, respectively. The starting set is connected to the outcome from the left, and the terminating set is connected to the outcome from the right.
When they are unimportant for an outcome, we do not include the starting set and the terminating set of that particular outcome in the diagram. We consider one special kind of outcome. Consider the situation where a design engineer is aware that an outcome is not atomic. They will eventually need to break the outcome further into its suboutcomes. Nevertheless, the current level of granularity is sufficient to carry out a particular analysis (see Sections 5.3 and 5.4 for two example analyses). In ∆QSD, a black box can be used for that particular outcome. Black boxes are those outcomes that satisfy one of the following:

1. They can be easily quantified without even a need for them to be named;
2. They are beyond the design engineer's control (and so may need to be quantified by external specification or measurement); or,
3. They are ones for which the design engineer has intentionally left the details for later.

Outcome variables are the variables that we use to refer to a given outcome.

Outcome Diagrams and Outcome Expressions

The description of a system in terms of its outcomes requires the causal relationships between the outcomes to be captured. In ∆QSD, these relationships are captured in outcome diagrams. In addition to its graphical presentation, each outcome diagram can be presented algebraically, using its corresponding outcome expression. As shown in Figure 3, outcome diagrams offer four different ways to describe the relationships between outcomes.

Quality Attenuation (∆Q)

From the perspective of a user, a perfect system would deliver the desired outcome without error, failure, or delay, whereas real systems always fall short of this; we can say that the quality of their response is attenuated relative to the ideal. We denote this 'quality attenuation' by the symbol ∆Q and reformulate the problem of managing performance as one of maintaining suitable bounds on ∆Q [16].
This is an important conceptual shift because 'performance' may appear to be something that can be increased arbitrarily, whereas ∆Q (similar to noise) is something that may be minimised but that can never be completely eliminated. Indeed, some aspects of ∆Q, such as the time for signals to propagate between components of a distributed system, cannot be reduced below a certain point. Since the response of the system in any particular instance can depend on a wide range of factors, including the availability of shared resources, we model ∆Q as a random variable. This allows various sources of uncertainty to be captured and modelled, ranging from as-yet-undecided aspects of the design, to resource use by other processes, to behavioural dependence on data values. In capturing the deviation from ideal behaviour, ∆Q incorporates both delay (a continuous random variable) and exceptions/failures (discrete variables). This can be modelled mathematically using Improper Random Variables (IRVs), whose total probability is less than one [17]. If we write ∆Q(x) for the probability that an outcome occurs in a time t ≤ x, then we can define the 'intangible mass' of such an IRV as 1 − lim_{x→∞} ∆Q(x). In ∆Q, this encodes the probability of exception or failure. This is illustrated in Figure 4, which shows the cumulative distribution function (CDF) of an IRV (with arbitrary time units). We can define a partial order on such variables, in which the 'smaller' attenuation is the one that delivers a higher probability of completing the outcome in any given time:

∆Q_1 ≤ ∆Q_2 ⟺ ∀x: ∆Q_1(x) ≥ ∆Q_2(x).    (1)

This partial order has a 'top' element, which is simply perfect performance: ⊤ ≡ (∀x: ∆Q(x) = 1), and a 'bottom' element, which is total failure (an outcome that never occurs): ⊥ ≡ (∀x: ∆Q(x) = 0). We can write specifications for system performance using this partial order by requiring the overall ∆Q to be less than or equal to a predefined bounding case.
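These definitions can be made computational with a small sketch (our own illustrative code; the class name, the discretisation into equal time steps, and the sample values are assumptions, not part of the paper): ∆Q as a discretised improper CDF, with the intangible mass and the partial order defined above.

```python
class DeltaQ:
    """Quality attenuation as an improper random variable, represented by
    a discretised CDF: cdf[i] is the probability that the outcome has
    completed by the i-th time step (illustrative sketch)."""

    def __init__(self, cdf):
        self.cdf = list(cdf)  # non-decreasing values in [0, 1]

    def intangible_mass(self):
        # 1 - lim_{x -> inf} DeltaQ(x): the probability of exception/failure,
        # here taken at the last time step of the grid
        return 1.0 - self.cdf[-1]

    def no_worse_than(self, other):
        # the 'smaller' attenuation delivers at least as high a probability
        # of completing the outcome at every time step
        return all(a >= b for a, b in zip(self.cdf, other.cdf))

perfect = DeltaQ([1.0, 1.0, 1.0])   # completes immediately, never fails
never = DeltaQ([0.0, 0.0, 0.0])     # total failure
typical = DeltaQ([0.2, 0.7, 0.9])   # 10% chance of never completing
```

Because this is only a partial order, two ∆Qs whose CDFs cross are incomparable; this is one source of the 'indecisive' assessments discussed below.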
Where the ∆Q is strictly less than the requirement, we say there is performance slack; when it is strictly greater than the requirement, we say there is a performance hazard (cf. Definitions 5 and 8). Assessments might also find the current level of information about a system to be indecisive: neither slack nor hazard. The simplest reason for indecisiveness is the partiality of ≤ in Equation (1). Another reason for indecisiveness might be conflict between different analyses. For example, timeliness analysis (Section 5.3) might show slack whilst load analysis (Section 5.4) shows hazard. A third reason might be that, even though the formulations end up indicating slack or hazard, the system is specified in so little detail that the result of the analysis should not be relied upon. The relationships between outcomes that are shown in Figure 3 then induce corresponding relationships between the ∆Qs of those outcomes, as explained in Section 5.3. The key to the compositionality of the paradigm is that the partial order is preserved by the operations that combine ∆Qs. Thus, for example, considering the sequential composition (denoted ∗) of either of two alternative outcomes with a third:

∆Q_1 ≤ ∆Q_2 ⟹ ∆Q_1 ∗ ∆Q_3 ≤ ∆Q_2 ∗ ∆Q_3.    (2)

This enables an overall timeliness requirement to be broken into 'budgets' for sub-outcomes. More details of this approach are given in [11].

Simple Example

Consider the simple distributed system of a web browser interacting with a set of servers that collectively provide a web page. The outcome that is of interest to the user starts with the event of clicking on a URL, and it ends with the event of the page being fully rendered. This corresponds to the first row of Figure 5. The second row shows the distinction between the user and the browser, and the third row exposes the back-end servers. A typical web page will contain a variety of elements that are served by servers from different host domains.
So, for each element, the browser (and its supporting O/S) must first resolve the corresponding domain name, then establish a connection to the given server, and finally download and then render the provided content. Thus, for each element that needs to be displayed, the ∆Q is the sequential composition of the ∆Qs of the component steps described above; and the ∆Q of rendering the whole page is an all-to-finish combination of the ∆Qs of all the elements. Note that this formulation automatically deals with the possibility that any of the steps may fail, and it provides the resultant failure probability for the whole process in addition to the distribution of expected completion times. This simple model can be further refined as needed to meet real-world requirements. For example, DNS resolution might provide alternative server addresses for load-balancing purposes, and each of these servers might have different ∆Qs when providing the same content to the user (perhaps because they are located in different geographical locations or are provisioned using systems with different CPU or storage capabilities). We can represent this as a probabilistic choice between these outcomes, weighted by the probability that a specific server is used; each weight scales the corresponding ∆Q. In addition, we might also consider the effect of load and contention for shared resources, for example network interface bandwidth or rendering capacity, or the impact of different DNS caching architectures on performance. These aspects of system performance design are formalised in Section 5.

Alternatives to Outcome Diagrams: Why a New Diagram?

The ∆QSD paradigm introduces the concept of outcome diagrams. It is perfectly reasonable to ask at this point: "Why another diagram? What is it that outcome diagrams capture that UML diagrams, for example, cannot?" Let us answer these questions by comparing outcome diagrams with UML.
We first recall the two main properties of outcome diagrams in the ∆QSD paradigm:

• An outcome diagram specifies the causal relations between outcomes. An outcome is a specific system behaviour defined by its possible starting events and its possible terminating events. For example, sending a message to a server is an outcome defined by the beginning of the send operation and the end of the send operation. The action of sending a message and receiving a reply is observed as an outcome, which is defined by the beginning of the send operation and the end of the receive operation. Outcomes can be decomposed into smaller outcomes, and outcomes can be causally related. For example, the send-receive outcome can be seen as a causal sequence of a send outcome and a receive outcome.

• An outcome diagram can be defined for a partially specified system. Such an outcome diagram can contain undefined outcomes, which are called black boxes. A black box does not correspond to any defined part of the system, but it still has timeliness and resource constraints. Refining an outcome diagram can consist of replacing one of its black boxes with a subgraph of outcomes.

A crucial property of an outcome diagram is that it is an observational concept. That is, it says something about what can be observed of a system from the outside, but it does not say anything about how the system is constructed internally.

UML Diagrams

UML is a rich language defined to model many different aspects of software, including its structure, behaviour, and the processes it is part of. The UML 2 standard defines 14 kinds of diagrams, which are classified into structural diagrams and behavioural diagrams. We first note two general properties of outcome diagrams that UML diagrams do not share:

• Observational property: All UML diagrams, structural and behavioural, define what happens inside the system being modelled, whereas outcome diagrams define observations from outside the system.
The outcome diagram makes no assumptions about the system's components or internal states.

• Wide coverage property: It is possible for both UML diagrams and outcome diagrams to give partial information about a system, so that they correspond to many possible systems. As long as the systems are consistent with the information in the diagram, they will have the same diagram. However, an outcome diagram corresponds to a much larger set of possible systems than a UML diagram. For an outcome diagram, a system corresponds if it has the same outcomes, independent of its internal structure or behaviour. For a UML diagram, a system corresponds if its internal structure or behaviour is consistent with the information in the diagram. This means that a UML diagram is already making decisions w.r.t. the possible system structures quite early in the design process. The outcome diagram does not make such decisions.

In the rest of this section, we compare outcome diagrams to two UML diagrams, namely the state machine diagram and the block diagram.

State Machine Diagram

A state machine diagram is a finite state automaton. It defines the internal states of a system and the transitions between them. The state diagram captures the causality between the actions taken when the system changes states, but this does not map directly to the outcomes observed by an external user. However, there is a relationship between a state diagram and an outcome diagram. An outcome can map to a sequence of state transitions; conversely, by examining the actions of a state diagram, it is possible to deduce the outcomes to expect from taking those actions.

Block Diagram

A block diagram specifies a system as a set of elements with their interconnections. We illustrate the difference between block diagrams and outcome diagrams using a simple example system: a user querying a front end that is connected to a database (Figure 5).
The figure shows the refinement process: a system with an initially unknown structure is refined stepwise into a system that has a completely known structure. For the outcome diagram, the system performance can be obtained directly by composing the ∆Qs of the outcomes, using the rules described in Section 5. For the block diagram, it is harder to obtain system performance. This is because the block diagram does not define the expected outcomes of a system or their causality. The block diagram by itself does not have sufficient information to allow system performance to be calculated: we also need to know the expected outcome and the sequence of messages sent between blocks needed to achieve that outcome. As a final remark, the block diagram constrains the system structure to always have a front end and a database, whereas the outcome diagram is consistent with many alternative system structures.

Design Exploration Using Outcome Diagrams

This section simulates how a design engineer could explore the blockchain diffusion example that was described in Section 2, using outcome diagrams. Figure 6 depicts that design exploration in the form of a threaded decision tree in the search space. Each node in the tree is an outcome diagram. Every node is labelled with a description plus the section in this paper where it is discussed. There are two types of edges: solid edges represent refinement steps (Definition 7), whilst dashed edges represent backtracks to take alternative directions of refinement. The formalism used in this section is presented in Section 5.

Starting Off

Initially, the design engineer knows almost nothing about the system. Perhaps all they know is that there will be the following two observation locations:

• A − : Block is ready to be transmitted by A.
• Z + : Block is received and verified by Z.

The corresponding outcome diagram consists of a single black box.
As will be detailed in Section 5, the outcome expression describing that outcome diagram is a single black box.

Early Analysis

Given that the design engineer is not content with the current level of granularity, they wish to further detail the diagram by giving the black box a name, such as o A Z . In ∆QSD, we call adding that further detail a refinement. That refinement step is depicted below: the outcome diagram that is above the dashed line is refined into the one below the dashed line. As will be discussed in Section 5, the (rewrite) rule that authorises this refinement is the one we call (UNBX), for unboxing (a black box). The rule states that in a context C, a black box can be rewritten to any other outcome expression (but not to a black box). In this case, we choose the black box to be rewritten to an outcome variable called o A Z . This indicates the outcome of hopping directly from A to Z. Before producing more of our block diffusion algorithm's outcome diagram, we would like to take the time to apply some analysis. Refinements aside, suppose for a moment that there are two hops to make from A to Z: first from A to an intermediate node B, and, then, from B to Z. The corresponding outcome diagram for the two-hop journey from A to Z is then a sequence of two outcomes. Here, o A B and o B Z are the outcomes of hopping from A to B and from B to Z, respectively. Note also that the observation location between the above two outcomes is labelled B + /B − . That is because the observations B + and B − take place at the same location. For that reason, we will simply write B to refer to that observation location. The same convention is used for similar intermediate locations. It is then easy to obtain the outcome diagram for three hops. While outcome diagrams are visually more attractive, outcome expressions are algebraically more attractive.
For example, the corresponding expression for two hops is the sequential composition of o A B and o B Z . The sequential composition is needed because the latter causally depends on the former. Likewise, the outcome expression for three hops is a sequence of three such outcomes, and generalising that to n hops is easy. Parameterisation by n hops is useful because it helps the design engineer determine the right n for their blockchain. For example, a relevant question is: What is the optimal n for block diffusion to be timely and for its load to be bearable? The formalisation in Section 5 instructs the design engineer as to how to achieve that and other goals. Before detailing the how, we take a moment to analyse a smaller example. Consider the two-hop scenario. Provided that the design engineer has ∆Qs for both o A B and o B Z , they can use Definition 4 to work out the ∆Q of the whole journey, which is the convolution of the two constituent ∆Qs. In a similar vein, the design engineer can work out the n-hop scenario's ∆Q for n > 1. Then, using the formulation given in Definition 5, the design engineer can determine the constraints on n that are needed in order for block diffusion to meet the overall timeliness requirements. In practice, the time that is needed to transfer a block of data one hop depends on four main factors:

1. The size of the block;
2. The speed of the network interface;
3. The geographical distance of the hop (as measured by the time to deliver a single packet);
4. Congestion along the network path.

When we consider blockchain nodes that are located in data centres (which most block producers tend to be), the interface speed will typically be 1 Gb/s or more. This is not a significant limiting factor for the systems of interest (see Section 5.4 for an analysis that explains this). In the setting that we are considering, congestion is generally minimal, and so this can also be ignored in the first instance.
This leaves (i) block size, which we will take as a design parameter to be investigated later; and (ii) distance, which we will consider now. For simplicity, we will consider three cases of geographical distance:

1. Short: The two nodes are located in the same data centre;
2. Medium: The two nodes are located in the same continent;
3. Long: The two nodes are located in different continents.

For pragmatic reasons, Cardano relies on the standard TCP protocol for data transfers. TCP transforms loss into additional delay, so the residual loss is negligible. At this point, we could descend into a detailed refinement of the TCP protocol, but equally we could simply take measurements; the compositionality of ∆QSD means that it makes no difference where the underlying values come from. Table 1 shows measurements of the transit time of packets and the corresponding transfer time of blocks of various sizes, using hosts running on AWS data centre servers in Oregon, Virginia, London, Ireland, and Sydney. Since we know that congestion is minimal in this setting, the spread of values will be negligible, and so in this case, the CDFs for the ∆Qs will be step functions. The transfer time for each block size is given both in seconds and in multiples of the basic round-trip time (RTT) between the hosts in question. Since the TCP protocol relies on the arrival of acknowledgements to permit the transmission of more data, it is unsurprising to see a broadly linear relationship, which could be confirmed by a more detailed refinement of the details of the protocol. Given the randomness in the network structure and the selection of block-producing nodes, there remains some uncertainty on the length of an individual hop. At this point, we will assume that short, medium, and long hops are equally likely, which we can think of as an equally-weighted probabilistic choice. In numerical terms, this becomes a weighted sum of the corresponding ∆Qs, as given in Table 1.
This gives the distribution of transfer times per block size shown in Figure 7.

Refinement and Probabilistic Choice

Recall that A and Z are names for randomly chosen nodes, so the number of hops between A and Z is unknown. ∆QSD tackles that uncertainty by offering an outcome diagram that involves probabilistic choice between the different number of hops that might be needed. Strictly speaking, a probabilistic choice is a binary operation. Hence, when there are more than two choices, the outcome diagram will cascade probabilistic choices. In the general formulation, there are at most n hops. In order to produce that, the design engineer exercises a step-by-step refinement of the single-hop outcome diagram. The first refinement introduces the choice between one or two or more hops, as shown in Figure 8. There are two outcome diagrams in Figure 8: the one above the dashed line and the one below. The underlying green area is not a part of the two outcome diagrams itself, but it is there to indicate which part of the diagram above the dashed line is being refined into which part of the diagram below. In the absence of the left-side arrow, the direction of refinement can also be determined using the colour of the underlying green area. The pale side of an underlying green area is for what is being refined, whereas the dark side is for the result of the refinement. The equivalent outcome expression of the lower diagram in Figure 8 is a probabilistic choice between one or two hops, with respective weights m 1 and m 2 . The corresponding (rewrite) rule of the figure is the one we call (PROB) (for probabilistic choice). In applying (PROB) to arrive from the single hop at the probabilistic choice between one hop and two hops, the context C is empty. Next, the design engineer further refines the two+-hop part to the probabilistic choice between two or three hops, as shown in Figure 9.
Again, in that figure, the underlying green area is not a part of either diagram. It only serves as a visual indicator, showing which part of the upper diagram is being refined into which part of the lower one. For the equivalent term rewriting of Figure 9, we use (PROB) again. However, instead of an empty context, here the context selects the two-or-more-hop branch of the expression. The design engineer can continue refinement until a predetermined number of hops is reached. Alternatively, they can keep the number of hops as a parameter and analyse the corresponding parameterised outcome expression for timeliness, behaviour under load, etc. Figure 10 shows the result of applying Equation (2) to the sequence of outcome expressions corresponding to one, two, . . . , five sequential hops using the transfer delay distribution shown in Figure 7, for a 64 kB block size. It can be seen that there is a 95% probability of the block arriving within 2 s. In contrast, Figure 11 shows the corresponding sequence of delay distributions for a 1024 kB block size, where the 95th percentile of transfer time is more than 5 s. If we know the distribution of expected path lengths, we can combine the ∆Qs for different hop counts using (PROB). Table 2 shows the distribution of path lengths in simulated random graphs having 2500 nodes and a variety of node degrees [18]. Using the path length distribution for nodes of degree 10, for example, then gives the transfer delay distribution shown in Figure 12.

Alternative Refinements

Suppose that instead of investigating the number of hops, the design engineer is now interested in studying the steps within a single hop. There are various ways to do this. In Sections 4.4-4.7, we will consider four different ways that can be used when A and Z are neighbours, each of which refines o A Z . These refinements are all instances of the (ELAB) (rewrite) rule (for elaboration). The following sections are also important for another reason.
So far, we have traversed the threaded tree of refinement in a depth-first way; the upcoming subsections traverse that tree in a breadth-first way. ∆QSD allows the design engineer to choose between depth-first and breadth-first refinement at any point in their design exploration.

Breaking Down Transmissions into Smaller Units

Network transmissions are typically broken down into the transmission of smaller units. Depending on the layering of the network protocols, that might, for example, mean dividing a high-level message into several smaller packets. In a similar vein, the design engineer might decide to study block diffusion in terms of smaller units of transmission. For example, they might want to study the division of o A Z into n smaller unit operations o u 1 A Z , . . . , o u n A Z . The resulting outcome diagram is shown in Figure 13. The corresponding outcome expression is then the sequential composition of those unit operations.

Header-Body Split

In Cardano Shelley, an individual block transmission involves a dialogue between a sender node, A, and a recipient node, Z. We represent the overall transmission as o A Z . This can be refined into the following sequence:

1. Permission for Header Transmission (o ph Z A ): Node Z grants the permission to node A to send it a header.
2. Transmission of the Header (o th A Z ): Node A sends a header to node Z.
3. Permission for Body Transmission (o pb Z A ): Node Z analyses the header that was previously sent to it by A. Once the suitability of the block is determined via the header, node Z grants permission to A to send it the respective body of the previously sent header.
4. Transmission of the Body (o tb A Z ): Finally, A sends the block body to Z.

The motivation for the header/body split and the consequential dialogue is optimisation of transmission costs. Headers are designed to be affordably cheap to transmit. In addition, they carry enough information about the body to enable the recipient to verify its suitability.
The body is only sent once the recipient has done this. This prevents the unnecessary transmission of block bodies when they are not required. Since bodies are typically several orders of magnitude larger than headers, considerable network bandwidth can be saved in this way. Moreover, the upstream node is not permitted to send another header until given permission to do so by the downstream node in order to prevent a denial-of-service attack in which a node is bombarded with fake headers, so this approach also reduces latency when bodies are rejected. In practice, the first permission is sent when the connection between peers is established and the permission renewed immediately after the header is received, so that the upstream peer does not have to wait unnecessarily. Therefore, the design engineer can refine o A Z into the finer-grained outcomes shown in Figure 14; the corresponding outcome expression is the sequential composition of the four outcomes listed above.

Figure 14. Splitting a block transmission into its constituent parts: header (ph/th) and body (pb/tb).

Note that the protocol described here is between directly connected neighbours: these requests are not forwarded to other nodes. Thus, this is a refinement of the one-hop block transfer process. The significance of this refinement is that it shows that an individual outcome that, at a given level of granularity, is unidirectional (i.e., only from one entity in the system to another) might, at a lower level of granularity, very well be a multidirectional conversation.

Obtaining One Block from each Neighbour when Rejoining the Blockchain

Consider the situation where a node Z rejoins the blockchain after being disconnected for some period of time. Z will be out-of-date w.r.t. the recently generated blocks and will need to update itself. Let us consider the lucky situation where Z can acquire all the blocks that it is missing from its neighbours; that is, it can acquire the blocks with only one hop but from different neighbours.
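Acquiring one block from each of several neighbours only completes when the slowest of them has finished; in ∆QSD terms, this is an all-to-finish combination. Assuming (as a simplification) that the neighbours' deliveries are independent, its ∆Q can be sketched numerically, modelling a ∆Q as a CDF sampled on a common time grid whose missing right-hand mass is the failure probability. The CDF values below are illustrative, not measured:

```python
# All-to-finish ("everything must complete"): with independent outcomes,
# P(all done by time t) is the pointwise product of the individual CDFs.
# Mass missing at the right-hand end of a CDF is its failure probability;
# the product automatically propagates it to the combined outcome.

def all_to_finish(cdfs):
    """Pointwise product of CDFs sampled on a common time grid."""
    combined = []
    for values in zip(*cdfs):
        p = 1.0
        for v in values:
            p *= v
        combined.append(p)
    return combined

# Three neighbours each delivering one block; grid t = 1..5 s (illustrative).
near   = [0.9, 1.0, 1.0, 1.0, 1.0]     # fast, always succeeds
medium = [0.2, 0.7, 0.95, 0.99, 0.99]  # slower, 1% residual failure
far    = [0.0, 0.3, 0.6, 0.9, 0.95]    # slowest, 5% residual failure

combined = all_to_finish([near, medium, far])
failure_probability = 1 - combined[-1]  # combined residual failure
```

Note how the combined failure probability (here about 6%) exceeds that of any single neighbour: every transfer must succeed for the combined outcome to succeed.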
For demonstration purposes, we now make a number of simplifying assumptions:

• Upon its return to the blockchain, Z is m blocks behind, where m is less than or equal to the number of Z's neighbours.

With those simplifications in place, the outcome diagram will be as shown in Figure 15. This shows that Z will be up-to-date when all its m (selected) neighbours are granted permission and have finished sending their blocks to Z. Note that the outcome diagram has, in fact, m starting observation locations and m terminating observation locations. This is the reason for the 1. . .m notation immediately below each of those observation locations. The corresponding outcome expression is the all-to-finish combination of the m per-neighbour outcomes.

Load Analysis

One reason why this refinement is particularly interesting is that it allows an easy demonstration of our load analysis from Section 5.4. Fix a resource ρ such as network capacity. Pick a time t between the first observation made at an A − i and the last observation made at a Z + i . According to Definition 10, the static amount of work S at time t that is required for performing the outcome is given by Equation (3), which describes an approach to aggregating offered load on a resource. Considering an ephemeral resource, such as a communications network interface, a design interest might be to understand the intensity of use of this interface. We say a resource is ephemeral if it is lost when unused. For example, for a design requirement to be (at this level of detail) feasible, the average use of the interface has to be less than its capacity. This is the basic precondition for the demand on the resource to possess a feasible schedule. The RHS of Equation (3) captures this process as a piece-wise summation of the load intensities. Building on the time to transfer blocks (Table 1), and noting (from Section 2.1) that the body of a block is forwarded in response to a request (which takes one round-trip time), the total block volume is delivered in the total time minus the round trip time.
For the 'Near' peers shipping a 64 kB block, this means an intensity of 42.7 Mb/s (8 × 64,000/(0.024 − 0.012)) before incorporating any other network-related overheads (such as layered headers). Table 3 captures that load intensity approximation. This provides an insight into the likely capacity constraints for differing degrees of connectivity and, by inference, an insight into the system-level design trade-offs. From Tables 1 and 3, it can be seen that smaller geographic distribution can lead to lower forwarding times, assuming that (for a fixed communications capacity) the number of associating peers is suitably reduced. Assessments such as this give a measure of the likely "slack" in the design; those portions of the design that have less "slack" represent design elements that might need more detailed refinement and/or other strategies to ensure their feasibility. Note that a dedicated support tool for ∆QSD would easily be able to manipulate these complex outcome diagrams, giving a formally correct analysis, with very little mental burden for the design engineer.

Obtaining a Block from the Fastest Neighbour

Section 4.5 discussed splitting the header and body for optimisation reasons. One assumption in that design is that the header and the body will be taken from the same neighbour. It turns out that this assumption will not necessarily lead to the fastest solution. In fact, when Z determines that it is interested in a block that it has received the header of, it may obtain it from any of its neighbours that have signalled that they have it. In particular, Cardano nodes keep a record of the ∆Qs of their neighbours' block delivery. This allows them to obtain bodies from their fastest neighbour(s). In other words, once a node determines the desirability of a block (via its header), it is free to choose to take the body from any of its neighbours that have provided the corresponding header.
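Taking the body from whichever neighbour delivers it first is, in ∆QSD terms, a first-to-finish combination. Under the simplifying assumption that the candidate transfers are independent, the probability that at least one has completed by time t is one minus the product of the individual "not yet done" probabilities. A sketch with illustrative (not measured) CDFs:

```python
# First-to-finish ("the fastest wins"): with independent outcomes,
# P(at least one done by t) = 1 - prod(1 - F_i(t)), computed pointwise on a
# common time grid. Failure mass combines in the opposite way to the
# all-to-finish case: the race fails only if every candidate fails.

def first_to_finish(cdfs):
    """Pointwise 1 - prod(1 - F_i) over CDFs on a common time grid."""
    combined = []
    for values in zip(*cdfs):
        not_done = 1.0
        for v in values:
            not_done *= (1.0 - v)
        combined.append(1.0 - not_done)
    return combined

# Three neighbours offering the same block body; grid t = 0.1..0.5 s.
f1 = [0.0, 0.5, 0.9, 0.95, 0.95]
f2 = [0.1, 0.6, 0.8, 0.9, 0.9]
f3 = [0.0, 0.2, 0.5, 0.8, 0.85]

race = first_to_finish([f1, f2, f3])
```

At every instant, the race is at least as likely to have completed as the best single candidate, which is precisely why racing improves timeliness at the cost of extra resource consumption.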
As long as only timeliness is a concern (and not when resource consumption is also of interest), a race can occur between all neighbours, with the fastest neighbour winning the race. The diagrams in this section assume such a race. Now, as in Section 4.6, consider the situation where Z reconnects to the blockchain after being disconnected for some time. Our design in Section 4.6 assumes that there is no causality between the m blocks that Z needs to obtain. In reality, that is not correct: there is a causal order between those blocks, and that order can be rather tricky to define; it might take a couple of reads before the matter is fully digested. There are two separate total orders between blocks:

CO1. For each block, the header must be transmitted before the body (so that the recipient node can determine the suitability of the block before the body transmission);
CO2. Headers of the older blocks need to be transmitted before those of the younger blocks (note, however, that there is no causal relationship between the body transmissions).

This section considers the situation when the design engineer investigates the above race as well as CO1 and CO2. Suppose that once Z reconnects to the blockchain, it is exactly m = 3 blocks behind the current block. Suppose also that Z has k neighbours. The corresponding outcome diagram is shown in Figure 16. The fork that is causally dependent on o th 3 A Z is done when any of its prongs is done, that is, as soon as any neighbour of Z has finished transmitting the third block to Z. The other "∃" forks are similar. We invite the reader to take their time pairing the corresponding outcome expression with the explanations above. We understand that the diagram, and to a greater degree the expression, can look impenetrable. The compositionality of our formalism (inherited from that of ∆QSD) comes to the rescue! Indeed, we can observe that the race pattern is rather repetitive.
Thus, we can wrap the entire race into three new outcomes o b 1 . Z , o b 2 . Z , and o b 3 . Z . We take o b 1 . Z , for example, to be the outcome of obtaining the first body transmitted to Z by any one of its k neighbours (that is, we are using "." in the subscript of o b 1 . Z as a wildcard). This makes the outcome diagram considerably simpler. The new diagrams make it easy to spot the lack of causal relationship between the o b i . Z s. Hence, there is no causal order between the body transmissions despite the existence of CO1 and CO2. The corresponding outcome expression also becomes considerably simpler, and the latter outcome diagrams and outcome expressions are now relatively easy to follow.

Summary

The refinements and analysis that are described in this section capture an important part of the design journey for the Shelley implementation of Cardano. In Section 4.1, we defined a 'top level' outcome of interest: that of diffusing a block from an arbitrary source node to an arbitrary destination in a bounded time and with bounded resource consumption. In Section 4.2, we refined this to examine the implications of forwarding the block through a sequence of intermediate nodes, and in Section 4.3, we factored in the expected distribution of path lengths. This allows an exploration of the trade-offs between graph size, node degree, block size, and diffusion time. In Section 4.4, we showed how ∆QSD can be used to explore orthogonal aspects of the design, in this case how blocks of data are in fact transmitted as a sequence of packets. This could be extended into a full analysis of some transmission protocol such as TCP or QUIC. In Section 4.5, we analysed the effects of splitting blocks into a header and a body in order to reduce resource consumption, and in Section 4.6, we analysed the potential for speeding up block downloading by using multiple peers in parallel.
This analysis informed critical design decisions in the Cardano Shelley implementation, in particular the block header/body split, which was shown to significantly improve the resource consumption while increasing the diffusion time only slightly. An analysis of the network resource consumption in this case gave a flavour of how the ∆QSD paradigm encompasses resource as well as timeliness constraints. Finally, in Section 4.7, we discussed how ∆Q is used in the Shelley implementation of Cardano in operation as well as in design, to optimise the choice of peer from which to obtain a block. All of this, together with further optimisations such as controlling the formation of the node graph to achieve a balance between fast block diffusion and resilience to partitioning, has produced an industry-leading blockchain implementation that reliably and consistently delivers blocks of up to 72 kB every 20 s on average across a globally distributed network of collaborating block-producing nodes. Figure 17 gives a snapshot of the 95th percentile of block diffusion times over a period of nearly 48 h. This clearly shows highly consistent timing behaviour regardless of block size, with the vast majority of blocks diffused across the global network within 1-2 s. Such measurements, based on the ∆QSD paradigm, are used on an ongoing basis to avoid performance regressions as new features such as smart contracts are added to the Cardano blockchain.

Comparison with Simulation

It is informative to consider how the insights delivered by using ∆QSD could have been obtained otherwise, using, e.g., discrete-event simulations. This would require implementing the design to a sufficient level of detail for the timing to be considered accurate and then running many instances of the simulation to explore the variability of the context.
For instance, obtaining the results of Figure 12 would require the following:

• Generating a random graph with 2500 nodes having degree 10;
• Randomly choosing whether each link is 'short', 'medium', or 'long', and applying the corresponding delay from Table 1.

Let us estimate how many simulation runs might be required. As a rule of thumb, we could consider that having any confidence in a 99th percentile result requires at least 1000 samples, so we would need to measure the diffusion time of at least 1000 blocks of the selected size; following Table 2, this would typically require each block to traverse four hops, hence needing 4000 simulation steps. So far, this seems quite tractable. However, let us consider how many graphs would need to be considered to have confidence in the results. According to McKay [19], if k ≤ 2n/9 and nk is even, then the number of labelled k-regular graphs (i.e., having degree k) on n vertices, M(n, k), is given by a known asymptotic formula. Taking logarithms, using Stirling's approximation for factorials, ln(n!) ∼ n(ln(n) − 1), and substituting k = 10 and n = 2500, we get ln(M(n, k)) ∼ 12500 × 7.21 − 99/4 ≈ 90,158, which means M(n, k) ∼ 10^39,155. So, obtaining a reasonable coverage of the set of possible random graphs with 2500 nodes of degree 10 is clearly infeasible. Using ∆QSD, we only process enough information to establish the performance hazard instead of constructing a lot of detail that is then discarded; combining probability distributions is a highly computationally efficient way to derive the distribution of interest (all the figures in this paper were produced on an ordinary laptop in a matter of seconds). This is not to say that ∆QSD replaces simulation, far from it: simulations can produce precise results whereas ∆QSD delivers probabilistic estimates. The limitations of ∆QSD are discussed further in Section 7.2.
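The order-of-magnitude argument above can be checked numerically. The sketch below applies Stirling's approximation uniformly to every factorial in McKay's asymptotic count; the exponent it produces therefore differs somewhat from the figure quoted in the text (the result is sensitive to how the approximation is bookkept), but either way the count has tens of thousands of decimal digits, which is what makes exhaustive coverage infeasible:

```python
import math

def ln_factorial(n):
    # Stirling's approximation as used in the text: ln(n!) ~ n(ln n - 1)
    return n * (math.log(n) - 1)

def log10_regular_graph_count(n, k):
    """Approximate number of decimal digits in McKay's asymptotic count of
    labelled k-regular graphs on n vertices, with every factorial
    replaced by its Stirling approximation."""
    nk = n * k
    ln_count = (ln_factorial(nk)
                - ln_factorial(nk / 2)
                - (nk / 2) * math.log(2)
                - n * ln_factorial(k)
                + (1 - k * k) / 4)
    return ln_count / math.log(10)

digits = log10_regular_graph_count(2500, 10)
# 'digits' comes out in the tens of thousands: even sampling a vanishing
# fraction of these graphs is beyond any conceivable simulation campaign.
```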
A Formalisation of ∆QSD

The examples that were presented in Section 4 all build on the formalisms that we will present in this section. We start by describing the notational conventions that we will use here (Section 5.1). Then, we provide the syntax (Definition 1) for outcome expressions and formalise the rewrite rules that define the valid transitions between possible outcomes (Definition 3). In Sections 5.3 and 5.4, we provide corresponding denotational semantics for both timeliness and load. These provide the bases for constructing formal timeliness and load analyses that can be used as part of ∆QSD. The analyses have so far been deployed manually to inform design decisions for a number of complex real-world systems. Our longer-term intention is that they should be implemented as part of a design exploration toolset that will support ∆QSD. Additional semantics and analyses are also possible, of course, and could be used to support alternative design explorations or to provide further details about timeliness, load, etc. As a notational convention, for a set A, we write A ∋ a to indicate that a, a′, a″, . . . , a 1 , a 2 , . . . all range over A. For predicates, we write pred(x). Let B denote the set of black boxes and O v the set of outcome variables (with o v ranging over the latter). We refer to black boxes and outcome variables together as base variables. Definition 1. The abstract syntax of outcome expressions comprises the base variables together with sequential composition, probabilistic choice, and the parallel all-to-finish and any-to-finish combinations; we take the parallel combinations to be commutative. In Section 4, we used all of these syntax elements; the black box, for example, appeared in Section 4.1. Definition 2. The evaluation contexts C of an outcome expression are defined in the usual way, where "[]" is the empty context. Evaluation contexts are useful in the definition of outcome transitions, which we define next. Formally speaking, a refinement step is an instance of an outcome transition. The formal description of the system is refined when one or more refinement steps are taken. The restriction on (UNBX) is because it makes no sense to replace a black box with another black box. (See the trailing discussion of Section 3.1 on the intention behind black boxes.)
The restriction on (ELAB) is because it makes no sense for an outcome variable to be replaced by another outcome variable or a black box. Considering Definition 3 to be part of the syntax is unusual. After all, evaluation contexts are a formalism for the semantics of programming languages. However, for ∆QSD, it turns out that the rewrites only cause syntactic changes to the outcome expressions (and the corresponding diagrams). Note that a refinement is not a system evolution, but rather, an update in the system description. It is only at analysis time that one tries to understand the meaning of an outcome diagram/expression.

Timeliness Analysis

We are now ready to describe the process of ∆Q analysis. The idea is that the design engineer provides a basic ∆Q assignment to the formulation in Definition 4. Then, our formulation enables them to determine the ∆Q analysis of the larger parts of their system or even all of it. This formulation is both compositional and simple. We call the ∆Q assignment that is provided by the design engineer the basic (∆Q) assignment (Definition 4). In the basic assignment, the design engineer only maps B expressions. They map those expressions to either CDFs or ∆Q variables. In return, they receive more complex ∆Q expressions. This is shown in Figure 18. The process is similar for load analysis except that there, the values exchanged between the design engineer and the respective formulation refer instead to static amounts of work. The reason for including the CDFs in the input type of basic assignments is rather obvious. The choice to allow ∆Q variables here might be less so. The assignment of those B expressions that are mapped to ∆Q variables is considered to be left by the design engineer for later. As such, the formulation in Definition 4 takes the ∆Q value of those expressions to be the perfect ∆Q (instantaneous, lossless completion), which lets the design engineer investigate feasibility even when those particular expressions are disregarded for the moment.
Definition 4. Given a basic assignment ∆•, the ∆Q of an outcome expression is determined compositionally:

∆Q(β) = ∆•(β) when ∆•(β) is a CDF, and perfection when ∆•(β) is a ∆Q variable;
∆Q(o •→• o′) = ∆Q(o) * ∆Q(o′);
∆Q(o m⇄n o′) = (m × ∆Q(o) + n × ∆Q(o′)) / (m + n);
∆Q(∀(o ∥ o′)) = ∆Q(o) × ∆Q(o′), the pointwise product of the CDFs;
∆Q(∃(o ∥ o′)) = ∆Q(o) + ∆Q(o′) − ∆Q(o) × ∆Q(o′), pointwise;

where * denotes the convolution of two ∆Qs. We denote the set of all basic assignments by ∆. We demonstrated the use of this definition in Section 4.2. In programming language theory, Definition 4 is said to give a denotational semantics for O. This is because the formulation works by compositionally denoting the O syntax into a familiar domain, which is deemed to be simpler (in our case, it is Γ).

Definition 4 gives the design engineer the possibility of determining the ∆Q behaviour of a snapshot of their system. Armed with that information, the design engineer needs to figure out whether such ∆Q behaviour is affordable. In other words, they need to make sure the actual ∆Q is within the acceptable bounds. In order to do that, we assume that the design engineer's customer will provide them with a demand CDF: one that defines the acceptable bounds. Definition 5 below is a recipe for comparing the actual behaviour against a demand CDF.

Definition 5. Given a demand CDF γ and a partial order < on Γ, say that a basic assignment ∆• is a witness that an outcome o is a hazard w.r.t. γ when ∆Q(o) < γ. Likewise, say ∆• is a witness that an outcome o has slack once compared with γ when γ < ∆Q(o).

The formulation of Definition 5 enables the design engineer to perform the ∆Q analysis of a single snapshot of their system. In some cases, that is enough because it can, for example, reveal the absolute infeasibility of a design. However, for the majority of cases, it is not enough. After all, a snapshot ∆Q analysis might not be conclusive for a variety of reasons. For example, one might not see any indication of a hazard by employing just Definition 5 because more detail is required. That takes us to Definition 8. When a design engineer works out the ∆Q analysis of a snapshot, the results might be favourable at the given level of refinement but still inaccurate.
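Before turning to refinement, the snapshot analysis can be made concrete with a small executable sketch. This is our own illustration rather than part of any ∆QSD toolset: a ∆Q is represented as a CDF sampled on a uniform time grid (a list of cumulative probabilities, where a final value below 1.0 models a probability of failure), the operator definitions follow Definition 4 with independence assumed for the synchronisation cases, and all function names are our own.

```python
# Sketch (ours): ∆Q values as CDFs sampled on a uniform time grid.
# An improper CDF (last value < 1.0) encodes a probability of failure.

def to_pdf(cdf):
    """Per-bin probability masses of a discretised CDF."""
    prev, pdf = 0.0, []
    for c in cdf:
        pdf.append(c - prev)
        prev = c
    return pdf

def to_cdf(pdf):
    """Running totals turn probability masses back into a CDF."""
    total, cdf = 0.0, []
    for p in pdf:
        total += p
        cdf.append(total)
    return cdf

def pad(a, b):
    """Extend the shorter CDF with its final value so lengths match."""
    n = max(len(a), len(b))
    return a + [a[-1]] * (n - len(a)), b + [b[-1]] * (n - len(b))

def seq(a, b):
    """Sequential composition (o •→• o′): convolution of the delays."""
    pa, pb = to_pdf(a), to_pdf(b)
    pdf = [0.0] * (len(pa) + len(pb) - 1)
    for i, x in enumerate(pa):
        for j, y in enumerate(pb):
            pdf[i + j] += x * y
    return to_cdf(pdf)

def choice(a, b, m, n):
    """Probabilistic choice with weights m:n -> pointwise mixture of CDFs."""
    a, b = pad(a, b)
    w = m / (m + n)
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def all_to_finish(a, b):
    """Both must finish: pointwise product of CDFs (independence assumed)."""
    a, b = pad(a, b)
    return [x * y for x, y in zip(a, b)]

def first_to_finish(a, b):
    """Either may finish: F + G - F*G pointwise (independence assumed)."""
    a, b = pad(a, b)
    return [x + y - x * y for x, y in zip(a, b)]

def is_hazard(actual, demand):
    """Definition 5: the outcome is a hazard when its CDF falls below γ."""
    a, d = pad(actual, demand)
    return any(x < y for x, y in zip(a, d))
```

For example, seq([0.0, 1.0], [0.0, 1.0]) yields [0.0, 0.0, 1.0]: two outcomes that each always take one time step, composed in sequence, always complete after exactly two steps; is_hazard then reports whether the computed CDF ever falls below the demand CDF γ.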
In such a case, a design engineer may wish to refine the system and perform the snapshot ∆Q analysis again to check whether the refinement confirms the initial ∆Q analysis. Definition 8 examines that overall confirmation. Definitions 6 and 7 set the stage.

Definition 6. Let ∆• be a basic assignment. Write cdfs(∆•) for those base variables in the domain of ∆• that ∆• maps to CDFs.

Definition 7. A basic assignment ∆•′ refines ∆•, written ∆• → ∆•′, when cdfs(∆•) ⊆ cdfs(∆•′) and ∆• and ∆•′ agree on cdfs(∆•). In other words, a basic assignment refines another one when it keeps all the CDFs in place and possibly adds more.

We are now ready for Definition 8.

Definition 8. Fix an outcome transition o → o′ and a ∆Q refinement ∆• → ∆•′. Given a partial order < on Γ and a demand CDF γ, we say that ∆• → ∆•′ witnesses that o → o′ arms a hazard when ∆• is not a witness that o is a hazard w.r.t. γ, yet ∆•′ is a witness that o′ is a hazard w.r.t. γ.

As can be seen from Definitions 5 and 8, all the decisions for the timeliness analysis are made by scrutinising the CDFs (which represent ∆Q values). This is a consequence of the simple denotational semantics of Definition 4. The fact that the latter formalism is denotational implies that comparisons can be made in the domain of CDFs. Moreover, these comparisons are affordable because the denotational semantics is simple (as well as being effective).

Load Analysis

This section describes how the same approach can be used to analyse the load on given resources. Resources can be of different types; in particular, we distinguish ephemeral resources that are available at a certain rate and fixed resources that are available in a fixed number or amount. Examples of ephemeral resources are CPU cycles, network interface capacity, and disk IO operations. Fixed resources include CPU cores, memory capacity, and disk capacity. In this paper, we consider only ephemeral resources. The analysis that we want is an answer to the following question: will the resource manage the amount of work assigned to it in the available time frame? We first need to set up some terminology for specifying the available time frame as well as the amount of work that is assigned to a given resource.
Write t•(o) for the time at which an observable from the starting set of an outcome o occurs; the duration limit d(o) of o then bounds how long after t•(o) the outcome is required to complete. Fix a set of resources H ∋ ρ. Note that the amount of work that is assigned to a resource ρ is not a bare scalar: it is necessary to provide the unit of measurement. For example, when ρ represents CPU resources, a sensible unit of measurement is the number of CPU cycles. When ρ represents network resources, a sensible unit of measurement is the message size. However, at the current level of formalisation, we wish to set ourselves free from thinking about units of measurement. Therefore, given a resource ρ, we write W_ρ for the set of values, in the right unit of measurement, for an amount of work that has been assigned to ρ.

The design engineer utilises our load analysis in the same way that they utilise our ∆Q analysis. That is, they must provide some basic load analysis (Definition 9). Then, exactly as shown in Figure 18, they use the formulation in Definition 10 to determine the load analysis for larger parts of their system or possibly all of it. We now formalise what we mean by a basic load analysis.

Definition 9. For a given ρ, a basic "static (amount of) work assignment for ρ" is a function S• from base variables to W_ρ.

Definition 10. Given a basic static work assignment S• for ρ, the static work assignment S (i.e., the amount of work to perform a single outcome per unit of size) extends S• compositionally over outcome expressions, in the same manner that Definition 4 extends a basic ∆Q assignment.

Whether or not a given resource ρ is overloaded when performing an outcome o is determined by whether ρ can bear the offered load in the required duration, d(o). The smaller that d(o) is, the faster (i.e., the more intensely) o must be performed. However, that can only be done up to a certain threshold that is determined by the system's configuration. In other words, whether the intensity brought to ρ passes a given threshold is what determines whether ρ is overloaded. As with W_ρ, at our current level of abstraction, we wish to disregard the units of measurement for intensity.
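As an illustration of how the static load analysis fits together (using the intensity threshold θ_I introduced just below), here is a small sketch of ours, not part of the paper's toolset. The rules for combining work across the operators are assumptions made for this illustration: sequence and both synchronisations add the work of their branches (since both branches are started), while probabilistic choice takes the weighted average; all names are our own.

```python
# Sketch (ours): static work assignment for one resource ρ and its overload
# check. Work and intensity stay abstract, matching W_ρ and I_ρ in the text.
# An outcome expression is a nested tuple:
#   ('base', name) | ('seq', a, b) | ('choice', m, n, a, b)
#   | ('all', a, b) | ('first', a, b)

def static_work(expr, S):
    """Fold a basic work assignment S (a dict over base variables) over expr."""
    tag = expr[0]
    if tag == 'base':
        return S.get(expr[1], 0.0)   # unassigned base outcomes contribute no work
    if tag == 'choice':
        m, n, a, b = expr[1:]
        return (m * static_work(a, S) + n * static_work(b, S)) / (m + n)
    if tag in ('seq', 'all', 'first'):
        return static_work(expr[1], S) + static_work(expr[2], S)
    raise ValueError(f"unknown operator: {tag}")

def static_hazard(expr, S, duration_limit, theta_I):
    """ρ is statically overloaded iff the implied intensity exceeds θ_I(ρ)."""
    return static_work(expr, S) / duration_limit > theta_I
```

For instance, for an outcome that always verifies a request (2 units of work) and then hits a cache 9 times out of 10 (1 unit) or misses (11 units), static_work gives 2 + (9·1 + 1·11)/10 = 4 units; with a duration limit of 2 and θ_I = 1.5, static_hazard flags an overload.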
That is, we write I_ρ for the set of values, in the right unit of measurement, for the intensity of the load that is imposed on ρ. We single out θ_I(ρ) ∈ I_ρ for the threshold of intensity that ρ can bear. When ρ is clear from the context, we write θ_I for θ_I(ρ).

Definition 11. For a fixed ρ, given a threshold of intensity θ_I(ρ) and a basic static work assignment S• for ρ, an outcome o has static slack in ρ-consumption when S(o)/d(o) ≤ θ_I(ρ). Define the static hazard of an outcome o in ρ-consumption as the case where S(o)/d(o) > θ_I(ρ).

Our emphasis on considering the analyses of Definitions 9–11 "static" is intentional. Firstly, they all assume that a base outcome's work is spread uniformly over its duration limit. That is obviously not always correct. The work assignment typically varies over the duration limit. However, if to every base outcome β, the design engineer chooses to assign the highest amount of work that β needs to do during its duration limit, the analyses given in Definition 11 would lead to a safe upper bound, which is useful as a first estimate. Secondly, Definitions 9–11 assume that an outcome's amount of work is always the same throughout its execution. Again, that is not realistic. Various reasons might cause the amount of work assigned to a base outcome to change over time. Examples are congestion, nonlinear correlations between outcomes, and cascading effects. This suggests more advanced load analyses that are "dynamic" rather than the "static" ones we have described here. We leave the development of such analyses to future work.

Related Work

Several theoretical or practical approaches have previously been proposed that address parts of the problem that has been identified above, but none of these addresses the whole problem in a comprehensive way.

6.1. Alternative Theoretical Approaches

6.1.1. Queuing Theory

Steady-state performance has been widely studied as an aid to analysis, for example in queuing theory.
Such approaches tend to take a resource-centric view of the system components, focusing on their individual utilisation/idleness. Where job/customer performance is considered, such as in mean-value analysis [20] or Jackson/BCMP networks [21], it is also in the context of steady-state averages. However, these traditional approaches cannot deliver metrics such as the time distribution of the system's response to an individual stimulus or even the probability that such a response will occur within a given time bound. These metrics are key for any time-critical and/or customer-experience-centric service.

Extending Existing Modelling Approaches

With the exception of hard real-time systems, it is rare to see performance treated as a "first-class citizen" in a system design process. At best, performance is considered as a property that will emerge during the system development life-cycle and thus something that can only be retrospectively validated. Thus, in contrast with ∆QSD, performance is unverifiable when using such an approach. A common approach has been to extend existing approaches to modelling distributed systems such as Petri nets or process calculi with the goal of integrating performance modelling. Examples include stochastic Petri nets [22], timed and probabilistic process calculi [23,24], and performance evaluation process algebra (PEPA) [25]. These systems consider passage-time [26], which is the time taken for the system to follow a particular path to a state, that path being characteristic of an outcome of interest [27–30]. As mentioned above, these are all retrospective validation tools, requiring fully specified systems, that will give probabilistic measures of outcomes under steady-state assumptions. These systems are susceptible to state-space explosion as a model grows in complexity, which limits their usage to less complex systems.
Furthermore, as with queuing models, they do not model failure, nor do they model typical real-world responses to failure such as timeouts and retries.

Real-Time Systems and Worst-Case Execution Time

In real-time systems, actions must be completed by strict deadlines. Missed deadlines can be catastrophic (hard real-time systems) or lead to significant delay and loss caused by roll-backs or recovery (soft real-time systems). Performance analysis has focused on giving guarantees that deadlines can be met by studying worst-case execution time [31]. These approaches generally aim to analyse the behaviour of specific implementations, providing information about specific interactions. Thus, this approach is complementary to design-time approaches such as ∆QSD.

Block Propagation

Bitcoin's block propagation has been measured by Decker and Wattenhofer [32] and later by Croman et al. [33], who proposed guidelines on block size and interval to ensure adequate throughput for 90% of nodes. Shahsavari et al. [34] propose a random graph model for modelling the performance of block propagation. The recent survey article of Dotan et al. [35] covers block propagation (Section 3) and the mapping of blockchain networks.

Distributed System Design

Designing large distributed systems is costly and error-prone. This might seem paradoxical given the proliferation of modern Internet-based companies whose core business is based on large distributed systems, such as Google, Facebook, Amazon, Twitter, Netflix, and many others. Given the existence of these successful companies, it might seem that building large distributed systems is a solved problem. It is not: successful companies have built their systems over many years, using vast amounts of effort and ingenuity to find usable solutions to difficult problems. Unsuccessful companies are forgotten.
Iterative Design

There does not exist a standard approach for designing large distributed systems that allows prediction of high-load performance early on during the design process. We explain the problem by giving an overview of the current design approach for distributed systems. The approach is iterative. It starts with a specification of the system's desired performance and scale. Then, the system architecture is designed by determining the system components according to the system's scale and estimating the performance they must have to give the required overall performance. The next step is performance validation to verify that the design satisfies the performance requirements. Performance validation is performed either as part of unit, subsystem, and/or system testing or via discrete-event simulation. Testing the performance of a component or subsystem is inconclusive without a reliable means to relate it to the resulting system performance, and testing of the whole system only reveals issues very late in the system development life-cycle. It is good practice to perform integration testing at this late stage. However, this is a poor and expensive substitute for performance analysis throughout the development process. Simulation can be performed earlier in the development process, and it may be less costly than testing, but it is limited in its ability to expose rare cases and hence cannot test tight bounds on the performance. In the final analysis, obtaining reliable performance numbers at high load requires actually building a large part of the final system and subjecting it to a realistic load. If the system does not satisfy the requirements, then it is back to the drawing board. The system architecture is redesigned to remove observed and predicted bottlenecks and rebuilt. Several iterations of the design may be necessary until the system behaves satisfactorily.
It often happens that the system only behaves satisfactorily at a fraction of the required load, but because of market constraints, this is considered acceptable, and the system is deployed. In parallel with the deployment, the design engineers continue to work on a system that will accept the larger load, under the assumption that the deployment will be successful so that the load will increase. This methodology is workable, but it is highly risky due to its high cost and development time. To have a good chance of success, it requires experienced developers. The development budget may be exhausted before achieving a satisfactory system; it may even be determined that the requirements are impossible to satisfy (infeasibility). If this is discovered early on, then the company may be able to retarget itself to become viable. Otherwise, the company simply folds.

Role of the ∆QSD Paradigm in Distributed System Design

The ∆QSD paradigm is designed specifically to reduce cost and development time. The system is designed as a sequence of increasingly refined outcome diagrams. At each stage, performance is computed using the ∆Q parameters. If the system is infeasible, this is detected early on, and it is immediately possible to change the design. If the design has sufficient slack, then the design process continues. The ∆QSD paradigm is effective insofar as the ∆Q computations provide realistic results. This depends on (i) having correct ∆Q distributions for the basic components and (ii) correctly specifying causality and resource constraints. Experience with ∆QSD in past industrial designs gives us confidence in the validity of the results. The additional rigour that is provided by the ∆QSD formalism that has been introduced in this paper gives us confidence that the paradigm is being applied correctly and allows the paradigm to be integrated into new design tools.

Programming Paradigms

Programming paradigms each focus on a particular discipline that brings more opportunities for code reuse.
The most familiar examples are perhaps Object-Oriented Programming, Functional Programming, and Genericity by Type, which promote code reuse through, respectively, inheritance between base and derived classes, refactoring into functions, and type parameterisation. Gibbons [36] has an excellent survey on different flavours of Generic Programming with the different opportunities for code reuse that each provides. Some programming paradigms have widely accepted formalisms, and some do not. Regardless of the underlying programming paradigm, ∆QSD is a paradigm for systems development rather than simply for programming, and it comes with its own formalism.

Software Development Paradigms

Three paradigms focus on the process of software development and hence are closer to ∆QSD:

1. Design-by-Contract [37]. Similarly to ∆QSD, in this paradigm, the programmer begins, before coding, by describing the pre-conditions and the post-conditions. Over the years, the concept of refining initial designs from specification to code has gained increasing weight [38]. However, unlike ∆QSD, the focus is on functional correctness rather than performance.

2. Software Product Lines [39]. This paradigm targets families of software systems that are closely related and that clearly share a standalone base. The aim is to reuse the development effort of and the code for the base across all the different variations in the family. The similarity with ∆QSD is that this approach also allows variation in the implementation so long as the required quality constraints are met. In other words, variations can share a given expected outcome and its quality bounds.

3. Component-Based Software Engineering [40]. Components, in this paradigm, are identified by their so-called 'requires' and 'provides' interfaces. That is, so long as two components have the same 'requires' and 'provides' interfaces, they are deemed equivalent in this paradigm, and they can be used interchangeably.
In ∆QSD, subsystems can also have quality contracts that involve quantitative 'demand' and 'supply' specifications. Such contracts impose quality restrictions (say, timeliness or pattern of consumption) on the respective outcomes of those subsystems. However, we have not shown examples of quality contracts in this paper, because their formalisation is not yet complete.

Algebraic Specification and Refinement

Algebraic specification languages such as CLEAR [41], Extended ML [42], Institutions [43], and CASL [44] work on the basis of specifying requirements using algebraic signatures and equations that are then refined progressively until one reaches the level of actual code. Refinement in such languages is managed using various mechanisms, for example by module systems with rigorously defined formal semantics. Whilst the focus of such languages is almost exclusively on functional correctness, studying possibilities for enhancing algebraic specifications so that they also accommodate the quality of outcomes would be an interesting avenue for future work.

Amortised Analysis

Amortised resource analysis is an approach for promoting resource analysis to a first-class citizen of programming language specification. Various operational semantics, type systems, and category-theoretic approaches have been employed. See [45–47], for example, where memory consumption for functional languages such as Haskell and ML is automatically calculated for programs written in those languages. ∆QSD advises on specification at the much higher level of outcomes and outcome diagrams, leaving the actual implementation and its host language completely unconstrained. As a result, ∆QSD is much more flexible and permits rapid performance estimation throughout the system development life-cycle.

Conclusions

This paper has presented the ∆QSD systems development process that is driven by performance predictability concerns and is supported by a rigorous formalism (Section 5).
Our formalism builds on the simple concept of quality attenuation (∆Q, Section 3.3) that captures the notion of performance hazard. This helps early detection of infeasibility, thus preventing the waste of resources (financial, people, time, and systems). ∆QSD has been successfully used in a wide range of industries, including telecommunications, avionics, space and defence, and cryptocurrency. It complements other approaches that are focused primarily on functional concerns, such as functional programming or model checking. Our formalisation of ∆QSD is a part of a wider initiative both within Predictable Network Solutions and IO Global [9]. In particular, it has been applied to the development of the current iteration of the Cardano blockchain, which uses a proof-of-stake (PoS) consensus algorithm rather than the proof-of-work (PoW) approach used by most other blockchains, including Bitcoin. PoS algorithms have significant advantages over PoW, such as vastly better energy efficiency and the potential to deliver much higher performance, both in terms of processing transactions and embedding them more rapidly in the immutable chain. However, for this to work, blocks must be diffused within a predictably short time-frame across a globally distributed system with no central control so that the chain can be most efficiently extended. Only by using ∆QSD was the Cardano engineering team able to untangle this knot to deliver a secure and efficient system. ∆QSD is based on taking the observable outcomes of a system as the central point of focus (Section 3.1), capturing the causal dependencies between outcomes in the form of outcome diagrams (Section 3.2). The formalism also describes the process of refining outcome diagrams (Definition 3) as part of a system design process. The formal specification of a system serves as a basis for different analyses such as timeliness (Section 5.3) and behaviour under load (Section 5.4).
Although we have illustrated the ∆QSD paradigm in the context of design refinement, the aim is that these aspects should permeate throughout the complete system development life-cycle.

Takeaways for System Designers

Let us summarise the main insights of the ∆QSD paradigm for the system designer. The main new concept is a focus on performance as determined by observations, which are captured using outcome diagrams. Designing with outcome diagrams allows problems to be discovered early on in the design process, which saves time and reduces cost. We are working on tools and documentation to disseminate the ∆QSD paradigm in the system design community.

Outcome Diagrams

The outcome diagram defines a system in terms of what is observable from the outside (of the (sub)system under consideration), whereas traditional approaches such as UML (discussed in Section 3.5.1) all describe what is inside the system. A major advantage of this approach is that it avoids making decisions prematurely on how the system should be built. Outcome diagrams allow infeasibility to be discovered early on, avoiding costly dead ends and reducing time-to-market. On the other hand, all these advantages do not come for free. The main difficulty of using ∆QSD is psychological: some decisions on the actual system structure have to be "kept in the air" for long periods as the designer works with outcome diagrams. This can conflict with the natural urge to make decisions at the earliest opportunity and the often-imposed requirement to demonstrate 'progress'. Quantifying design risks is rarely understood as progress, although this is often the most valuable part of the entire design process. Outcome diagrams provide a framework for 'rigidly defined areas of doubt and uncertainty' [48], enabling such value to be evidenced. Figure 19 compares ∆QSD with a traditional approach. The figure shows a design tree. Each nonleaf node corresponds to one design decision.
The design starts at the root and continues down the tree until it reaches a leaf node, which corresponds to a completely designed system. The subtree outlined on the left contains all designs where decision D_x took the leftmost branch. In our case, all these designs are infeasible. In ∆QSD, this fact would be detected immediately after the D_x decision is made by observing that the quality attenuation required from any subsequent refinement is infeasibly small, for instance less than the time taken for signals to move between components of the distributed system. Using an approach based on refining the system's structure, such as a UML-based approach, would require specifying much more of the system before this fact became evident. In many cases, it can only be seen by actually building the system and discovering that it cannot satisfy the requirements. With ∆QSD, the cost of designing and building all these infeasible systems is saved. This example summarises the actual experience of Predictable Network Solutions (PNSol) in many industrial projects.

Recommendations

We recommend that you think about how the two main concepts of ∆QSD, outcomes and quality attenuation, can apply to your own work. Try to express one of your own designs in terms of the outcomes that a user sees without making any decisions about how the system is built. Instead of describing the system structure, as UML does, try to think only of externally visible outcomes. The blockchain example of Section 4 gives a realistic example of how this is done. Note that in practice, we expect that a software tool would do all the tedious bookkeeping needed to keep track of the outcome diagrams. To design a system, start from the outcomes that the user expects, and work your way in from there. A primary outcome, such as a request-reply, can be divided into smaller outcomes.
Bigger outcomes decompose into smaller ones, either by sequencing small outcomes, by creating a choice between small outcomes, or by synchronising on small outcomes. Eventually, you get to primitive outcomes that can be directly provided by components, such as networks, servers, or databases. At any time, you can combine the quality attenuation of small outcomes to get the quality attenuation of a bigger outcome. This means that you can start answering questions immediately, even if the system is only partially designed. The main question is, is the system feasible? In other words, is there a probability close to 1 that the reply returns with an acceptable delay? For cutting-edge systems, the answer to this question might be 'no'. In that case, you need to step back and build an alternative outcome decomposition.

Limitations of the ∆QSD Paradigm

There are two main limitations of the work that has been described here.

1. Contextuality vs. Compositionality: As a performance modelling tool, ∆QSD deliberately trades detail in exchange for compositionality. The highest level of detail is provided by timed traces of a real system or a discrete event simulation thereof. A level of abstraction is provided by the use of generator functions [49], which obscure some details such as data-dependency but retain the local temporal context. Representing behaviour using random variables removes the temporal context, treating aspects of the system as Markovian. Thus, the ∆QSD paradigm is most applicable to systems that execute many independent instances of the same action, such as diffusing blocks, streaming video frames, or responding to web requests. For systems that engage in long sequences of highly dependent actions, it may only deliver bounding estimates.

2. Non-linearity: In many systems, resource sharing may introduce a relationship between load and ∆Q, which can be incorporated into the analysis.
An obvious example is a simple queue (which is ubiquitous in networks), where the delay/loss is a function of the applied load. However, where system behaviour introduces a further relationship between ∆Q and load, for example due to timeouts and retries, the coupling becomes non-linear. In this case, a satisfactory performance analysis requires iterating to a fixed point, which may not be forthcoming. Failure to find a fixed point can be considered a warning that the performance of the system may be unstable.

Future Work

The ∆QSD paradigm has been developed for over 30 years by a small group of people in and around PNSol, and it has shown its value in large-scale industrial projects. It has matured enough that it should be more widely known. Unfortunately, applying it today requires a high level of commitment and effort, because there is no tool support and little documentation. The ultimate goal of our work is to make it usable with much less effort; this paper takes the first step by defining a formal framework for outcome diagrams. Ideally, the system designer will mostly need domain expertise to apply ∆QSD and very little expertise in the paradigm itself. To achieve this goal, we are working towards building tools to handle most of the details of creating outcome diagrams and computing quality attenuation. The immediate next step after this paper is a tutorial on ∆QSD given at the HiPEAC conference in June 2022 [50]. This tutorial will give a broad introduction to the use of ∆QSD through a variety of practical examples that come from PNSol's experience. That will help the adoption of ∆QSD by other practitioners and therefore will help us with further tuning of the ∆QSD tool we are currently developing. Future work will also include the development of new analyses for non-ephemeral resources and for dynamic loads, as well as an extension to non-linear systems in which the load and timeliness are coupled.
In parallel, we plan to use our formalism as an intermediate step towards better teaching and dissemination of ∆QSD. We will build additional tools that will enable us to track the key observables/outcomes from the design into the implementation so that they can support ongoing system design and development throughout the system development life-cycle. Given appropriate tools, it would become feasible to systematically articulate the benefits of the paradigm, for instance by comparing various metrics between design projects that do or do not use it, such as the time/budget to complete the project, the number of major design changes, etc. This line of research would require new collaborators with expertise in social science disciplines. The wider ∆Q framework is also under active development within the International Broadband Forum [51] as a means of characterising quality attenuation associated with networks.
Uptake Prediction of Eight Potentially Toxic Elements by Pistia stratiotes L. Grown in the Al-Sero Drain (South Nile Delta, Egypt): A Biomonitoring Approach: The potential to utilise the free-floating macrophyte Pistia stratiotes L. to survey contamination of the Al-Sero Drain in the South Nile Delta, Egypt, by eight potentially toxic elements (PTEs) was investigated in this study. This study considered the absorption of eight PTEs (Cd, Co, Cu, Fe, Mn, Ni, Pb, and Zn), and the evaluated P. stratiotes were located at three sampling locations along the Al-Sero Drain, with sampling conducted in monospecific, homogeneous stands of P. stratiotes. Samples of both P. stratiotes and water were collected on a monthly basis between May 2013 and April 2014 at each location, utilising three randomly chosen 0.5 × 0.5 m quadrats. Regression models were designed to predict the concentration of the PTEs within the plant's shoot and root systems. Elevated water Fe levels were correlated with a rise in shoot system Fe concentration, whereas higher Ni concentrations in the water led to a higher Ni concentration within the root system. The latter was also true for Pb. Water Cu levels had a negative association with the Cu concentration within the P. stratiotes shoot system. Raised Fe levels were also correlated with a diminished Fe level within the roots. For all PTEs, P. stratiotes was characterised by a bioconcentration factor of more than 1.0, and for the majority by a translocation factor of less than 1.0. The goodness of fit for most of the designed models, as indicated by high R² values and low mean absolute errors, demonstrated the associations between actual and predicted PTE concentrations. Any disparity between measured and predicted parameters failed to reach significance with Student t-tests, reinforcing the predictive abilities of the designed models. Thus, these novel models have potential value for the prediction of PTE uptake by P.
stratiotes macrophytes inhabiting the Al-Sero Drain. Furthermore, the macrophyte’s constituents indicate the long-term impact of water contamination; this supports the potential future use of P. stratiotes for biomonitoring the majority of the PTEs evaluated in this study. Introduction From the time of the industrial revolution, environmental pollution from potentially toxic elements (PTEs) has been increasing, with grave ecological consequences [1]. Globally, the effects of this pollution on the environment have generated a perilous situation due to the continued accelerated advancement of industrial endeavours [2]. Aquatic ecosystems are particularly at risk of contamination by PTEs. Their pollution is a major issue, since PTEs are persistent in the environment and become biomagnified as they pass through the and pH. Another goal was to discover how capable P. stratiotes could be as a biomonitor of eight PTE concentrations in the Al-Sero Drain, a site considered typical of the South Nile Delta drainage channels. Our hypothesis was that the PTE accumulation capabilities of P. stratiotes and its potential to serve as a biomonitor for PTE contamination could differ among populations grown under natural conditions and those grown under experimental conditions. This work will additionally be of value for the future utilisation of this form of vegetation in Egyptian phytoremediation research. Study Area The research location was in Giza Province, within the Egyptian South Nile Delta region ( Figure 1). This territory is classified as hyperarid [38]. The yearly average climate parameters include precipitation in the region of 87 mm, maximum temperature of 30.0 • C and minimum of 14.8 • C, evaporation rate of 6.9 mm/day (Piche), relative humidity of 45.5%, and wind speed of 3.9 m/s [39]. 
Field and Laboratory Three sampling locations were selected along the Al-Sero Drain, comprising monospecific and homogeneous stands of P. stratiotes (Figure 1). P. stratiotes biomass was sampled on a monthly basis between May 2013 and April 2014 at each site, utilising three randomly chosen 0.5 × 0.5 m quadrats. The entire population of P. stratiotes from each quadrat was harvested, stored in plastic bags, and then transported to the laboratory. The total biomass ranged between 29.9 g DM/m 2 in May and 341.6 g DM/m 2 in August. Detailed data on the biomass were presented in our previous paper [40]. The samples were divided into shoot and root systems and washed with tap water, and then cleaned with deionised water over a 4 mm mesh sieve to eliminate PTEs adsorbed on the tissue surface and to minimise material loss. In this way, only PTEs absorbed by the plant were determined, and then the bioaccumulation was assessed. The plant material was then reduced to a uniform mass by oven-drying at a temperature of 85 °C. A metal-free plastic mill (Philips HR2221/01, Philips, Shanghai, China) was used to pulverise the dried plant systems, which were then transferred and stored in a desiccator in sterile Ziploc bags. One composite sample from each quadrat from the P. stratiotes shoot and root systems at each of the three sampling sites per month was then utilised to assay cadmium (Cd), cobalt (Co), copper (Cu), iron (Fe), manganese (Mn), nickel (Ni), lead (Pb), and zinc (Zn) levels. In total, 108 plant samples per P. stratiotes shoot and root system (3 quadrats × 3 sampling locations × 12 sampling times (months)) were used to determine the uptake of the eight PTEs. Water Sampling Although the water PTE concentrations have not varied significantly in recent years [41], throughout this study, monthly samples were taken over a period of 12 months (May 2013-April 2014), which should have captured the variations in concentration in different months. Three water samples were gathered each month from the same sampling quadrats at each location. The samples were collected, utilising plastic bottles rinsed with deionised water, as coalesced composite samples from the water surface to a depth of 50 cm. At the laboratory, filtration was performed with Whatman membrane nylon filters (pore size 0.45 µm, diameter 47 mm), and then the samples were frozen at −20 °C, pending subsequent PTE analysis of Cd, Co, Cu, Fe, Mn, Ni, Pb, and Zn. This process has been detailed by the American Public Health Association [42]. Chemical Analysis The eight PTEs under examination were extracted from 0.5-1 g of the macrophyte's shoot and root tissues by deploying a mixed-acid digestion technique, using HNO3/HClO4/HF, 1:1:2, v/v/v, in a microwave sample preparation system (PerkinElmer Titan MPS, PerkinElmer Inc., Waltham, MA, USA). The process was continued until the mixture lost its opacity. The plant digests were then filtered, and double deionised water was used to dilute the samples to 25 mL. Inductively coupled plasma optical emission spectrometry (ICP-OES) (Thermo Scientific iCAP 7000 Plus Series; Thermo Fisher Scientific, Waltham, MA, USA) was utilised for both P. stratiotes and the water samples in order to measure the PTE concentrations. Concentrations were given on the basis of dried matter, and deionised water was utilised at all times. Washed glassware and analytical grade reagents were employed appropriately. Instrument readouts were rectified utilising blank reagents. Standard solutions with established concentrations of Cd, Co, Cu, Fe, Mn, Ni, Pb, and Zn were used to calibrate the system.
The instrument parameters and operating conditions were set in keeping with the vendor's operational guidelines. The PTE detection limits were Fe, Pb and Zn, 5.0 µg/L; Ni, 3.0 µg/L; Co and Cu, 0.5 µg/L; Mn, 0.3 µg/L; and Cd, 0.1 µg/L. Quality Assurance and Quality Control With the use of a certified reference material, SRM 1573a (tomato leaves), we confirmed the accuracy of the PTE test system. The reference material was digested and underwent the same analytical process as the shoot and root systems from the P. stratiotes samples in three replicates. The assayed concentrations were compared with the certified values, and then the percentage was calculated as an expression of accuracy. Recovery rates ranged from 96.5% to 104.3%. Data Analysis Student's t-tests were used to analyse any variations in the PTE data between the shoot and root samples. The bioconcentration factor (BCF) was computed in order to establish the efficacy of PTE uptake from the water by P. stratiotes, where [43] BCF = (PTE concentration (mg/kg) in the root system)/(PTE concentration (mg/L) in the water from the same site) In order to assess the capacity of P. stratiotes to transport a particular PTE from its root to shoot system, we calculated the translocation factor (TF) [43]: TF = (PTE concentration (mg/kg) in the shoot system)/(PTE concentration (mg/kg) in the root system) Prior to conducting a one-way analysis of variance (ANOVA-1), we evaluated the BCF and TF data by using the Shapiro-Wilk W and Levene tests for the presence of a normal distribution and variance homogeneity. The data were then log-transformed if necessary. An ANOVA-1 was performed on the BCF and TF results in order to identify any variation between the eight PTEs. Any significant variations between the means were established using Tukey's HSD test at p < 0.05. Water pH and the water PTE concentration are the principal variables governing the PTE concentration in P. stratiotes [10].
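The BCF and TF formulas above translate directly into code. The following is a minimal sketch; the concentration values are hypothetical placeholders drawn from the ranges reported in the text, not actual per-site measurements:

```python
# Illustrative sketch of the BCF and TF calculations defined in the text.
# Concentrations are placeholder values, not the study's measured data.

def bioconcentration_factor(root_mg_per_kg: float, water_mg_per_l: float) -> float:
    """BCF = PTE concentration in the root system (mg/kg) /
             PTE concentration in the water (mg/L) at the same site."""
    return root_mg_per_kg / water_mg_per_l

def translocation_factor(shoot_mg_per_kg: float, root_mg_per_kg: float) -> float:
    """TF = PTE concentration in the shoot system (mg/kg) /
            PTE concentration in the root system (mg/kg)."""
    return shoot_mg_per_kg / root_mg_per_kg

# Example: hypothetical Fe values (water in mg/L, tissues in mg/kg),
# taken from the ranges reported in the paper for illustration only.
water_fe = 0.5236                 # 523.6 ug/L expressed in mg/L
root_fe, shoot_fe = 2511.0, 974.1

bcf = bioconcentration_factor(root_fe, water_fe)
tf = translocation_factor(shoot_fe, root_fe)

print(f"BCF = {bcf:.1f}")   # BCF > 1.0 indicates accumulation in the roots
print(f"TF  = {tf:.2f}")    # TF < 1.0 indicates limited root-to-shoot transfer
```

Note that the units must match the definitions exactly: the water concentration enters in mg/L, so the µg/L values reported for the drain water have to be divided by 1000 first.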
The model's general equation, following [10], relates C_plant to C_water and pH, where C_plant and C_water represent a given PTE's concentration in P. stratiotes tissue and water, respectively, and a, b, and c are the regression coefficients. There was little variation within the results from the three selected sampling areas (data not presented). In view of this, monthly gathered data from two of the sites (n = 72) were employed to establish the regression equations for the prediction of the PTE concentrations within P. stratiotes root and shoot tissues on the basis of the water indices of pH and the respective PTE concentrations as independent variables. The results from the remaining sampling location (n = 36) were kept as a validation dataset. The determination coefficient, R 2; model efficiency, ME; and model strength were used to appraise the quality of the model. Model strength was based on the mean normalised average error, MNAE. These parameters were computed according to the equations given in [44], where C_model, C_measured, and C_mean represent the model-predicted, measured, and mean of the measured concentrations of a given PTE, respectively, and n is the observation number. The resulting regression equations were used to estimate the PTE concentrations of the validation dataset. The deviations between the estimated and measured PTE concentrations relating to the same tissue were analysed utilising a Student's t-test. The correlation between the PTE levels in the water and the BCF of the PTEs in the P. stratiotes root system was measured using non-linear regression. Statistica software, version 7.0 [45], was utilised for all data analysis. Results Chemical analysis of the water samples taken from the three locations along the Al-Sero Drain revealed modestly alkaline water, with a mean pH of 7.5 (Table 1). The PTE concentrations ranged from 3.5 µg/L (Cd) to 523.6 µg/L (Fe).
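The model-quality metrics named in the Data Analysis section above (R 2, ME, and MNAE) can be sketched in code. Since the source equations are not reproduced in this text, the exact formulations below are assumptions: R 2 as the squared Pearson correlation, ME as the Nash-Sutcliffe model efficiency, and MNAE as the mean absolute error normalised by the mean measured concentration; the data arrays are placeholders, not the study's measurements:

```python
# Hedged sketch of common formulations for the metrics named in the text.
# The study's own equations were not reproduced here, so treat these exact
# forms as assumptions.
from math import sqrt

def r_squared(pred, meas):
    """Assumed form: squared Pearson correlation between predicted and measured."""
    n = len(meas)
    mp, mm = sum(pred) / n, sum(meas) / n
    cov = sum((p - mp) * (m - mm) for p, m in zip(pred, meas))
    sp = sqrt(sum((p - mp) ** 2 for p in pred))
    sm = sqrt(sum((m - mm) ** 2 for m in meas))
    return (cov / (sp * sm)) ** 2

def model_efficiency(pred, meas):
    """Assumed form: Nash-Sutcliffe efficiency, 1 - SS_res / SS_tot."""
    mm = sum(meas) / len(meas)
    ss_res = sum((m - p) ** 2 for p, m in zip(pred, meas))
    ss_tot = sum((m - mm) ** 2 for m in meas)
    return 1.0 - ss_res / ss_tot

def mnae(pred, meas):
    """Assumed form: mean absolute error normalised by the mean measured value."""
    mm = sum(meas) / len(meas)
    return sum(abs(m - p) for p, m in zip(pred, meas)) / (len(meas) * mm)

# Placeholder validation-style data (not the study's measurements).
measured = [10.2, 12.5, 9.8, 14.1, 11.3]
predicted = [10.8, 12.0, 10.1, 13.5, 11.9]

print(r_squared(predicted, measured),
      model_efficiency(predicted, measured),
      mnae(predicted, measured))
```

A perfect model gives R 2 = 1, ME = 1, and MNAE = 0, which matches the paper's reading of high R 2 / ME and low MNAE as indicators of a good fit.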
The PTE concentrations, in decreasing order, were Fe > Pb > Mn > Ni > Zn > Co > Cu > Cd. Differences in the concentrations of six of the PTEs (all except Cd and Pb) between the P. stratiotes shoot and root systems were significant (Table 2). Furthermore, the majority of the PTEs were found in higher concentrations in the root system than in the shoots. Within P. stratiotes, the mean PTE concentration ranges were as follows: Cd, 0.9-1.0 mg/kg; Co, 5.2-17.6 mg/kg; Cu, 10.0-55.5 mg/kg; Fe, 974.1-2511.0 mg/kg; Mn, 331.5-1160.7 mg/kg; Ni, 6.8-20.4 mg/kg; Pb, 39.8-42.0 mg/kg; and Zn, 37.1-48.2 mg/kg. The decreasing orders of PTE concentrations within the shoot and root systems were Fe > Mn > Cu > Pb > Zn > Ni > Co > Cd and Fe > Mn > Zn > Pb > Ni > Co > Cu > Cd, respectively. A higher water Fe concentration was correlated with the Fe concentration of the shoots (r = 0.335, p < 0.001) (Figure 2, Table S1). An elevated concentration of Ni in the water was related to the root system's Ni concentration (r = 0.212, p < 0.05). Elevated water and root system Pb quantities were also associated with each other (r = 0.294, p < 0.01). The water Cu concentration was negatively related to the Cu concentration within the shoot system (r = −0.589, p < 0.001). An elevated Fe concentration in the water was associated with reduced Fe in the roots (r = −0.287, p < 0.01). A BCF > 1.0 was calculated for P. stratiotes for all the PTEs (Table 3). The values of the parameter were diverse, being generally higher for Mn, and then in descending order: Fe > Cu > Zn > Co > Cd > Ni > Pb. In this study, the TF values also differed according to the PTE under study (Table 3). The TF for the majority of the PTEs for P. stratiotes was computed to be <1.0. The TF ranking from root system to shoot system was as follows: Cu > Cd > Pb > Zn > Fe > Co > Ni > Mn. Figure 3 depicts the non-linear regression analysis conducted between the water concentration and the P.
stratiotes BCF for these PTEs. The BCFs were noted to be maximal at lower water PTE concentrations; they demonstrated an exponential fall with rising PTE concentrations in the water. The R 2 of these exponential equations varied from Pb 0.037 to Mn 0.974. Table 3. Mean ± standard error (n = 108) of bioconcentration factors (BCFs), from the water to root system, and translocation factors (TFs), from the root to shoot system, of potentially toxic elements (PTEs) in Pistia stratiotes populations grown in the Al-Sero Drain (South Nile Delta, Egypt) over one year (May 2013-April 2014). Regression models were designed to predict P. stratiotes root and shoot PTE concentrations on the basis of the corresponding water PTE concentration, utilising the water pH as a cofactor. Table 4 illustrates the results from these models, as well as their predictive accuracy. Associations between measured and predicted PTE concentrations, together with high R 2 values and low mean averaged errors, provided an indication of the acceptability of most of the models. In addition, t-test values, which were utilised to analyse any difference between measured and predicted concentrations for the eight PTEs in P. stratiotes root and shoot systems, were nonsignificant, highlighting the accuracy of the models. For all the models tested, R 2 varied from 0.147 for Cu within the root system to 0.592 for Mn within the shoot system. ME parameters ranged between 0.367 for Cu within the root system and 0.811 for Mn within the shoot system. Furthermore, a low MNAE for the majority of the PTEs was observed in relation to the regression models, ranging from 0.179 for Mn within the shoot system to 0.628 for Cu within the root system. With respect to the shoot system, the model for Mn had the greatest R 2 value (0.592) and was related to a high ME of 0.811 but a small MNAE of 0.179. In relation to the root system, the model for Pb demonstrated the highest R 2 (0.405), with a high ME of 0.742 and the smallest MNAE of 0.248. Table 4. Models of regression between potentially toxic elements in Pistia stratiotes (mg/kg) and potentially toxic elements in water (µg/L) and pH. Discussion This study demonstrated that the majority of PTE concentrations were notably elevated in P. stratiotes root systems rather than in the shoot system. Numerous studies have reported similar findings [10,28,30,34,36,47,48]. This large PTE accumulation within the roots is likely to be a consequence of the PTEs forming complexes with sulphydryl residues, resulting in a lower concentration of free PTEs to be transported into the shoots [49]. A number of publications have also described phytochelatin production; these compounds have the ability to sequester PTEs, again contributing to the retention of PTEs inside the roots [50]. Another reason for the higher root concentration is that the root system is the initial point of contact with the PTEs contained within the water [51]. The mean Cu and Pb concentrations recorded for the P. stratiotes shoot system in this study were within the phytotoxic ranges; the mean Cd, Co, Fe, Mn, Ni, and Zn concentrations were lower than the phytotoxic range [46]. The mean Co and Pb concentrations recorded for the P. stratiotes root system were within the phytotoxic ranges; the mean Cd, Cu, Ni, and Zn concentrations were lower than the phytotoxic range; and the mean Fe and Mn concentrations were higher than the phytotoxic range [46].
It has been shown that aquatic macrophytes are key actors in the extraction of PTEs from wastewater [52]. P. stratiotes functions in water pollution removal [28,30-34,36,37]; it is a relatively low-cost method, and in itself is environmentally sound [28]. P. stratiotes is typically utilised in constructed wetlands in order to improve the quality of water in water treatment systems [35]. Its advantages include its ability to propagate [53], as well as its PTE assimilation capabilities [28]. Within the root and shoot systems of P. stratiotes, Fe, and then Mn, Zn, and Cu, were found in the highest concentrations, reflecting the straightforward underlying mechanisms for their uptake, as they are intrinsically necessary for the proliferation of most vegetation [54]. Similar findings were noted by Kumar et al. [10] for the same species grown on paper mill effluent in a laboratory-scale phytoremediation experiment, and by Eid et al. [55] for E. crassipes grown in irrigation canals in the North Nile Delta in Egypt. Fe is a critical minor nutrient for both vegetative and animal organisms. In the former, it is essential for chlorophyll synthesis; over 50% of a leaf's Fe content is within the chloroplasts. This element additionally influences photosynthesis and biomass [56]. Fe and Mn are integrated within the complex of the enzyme nitrogenase, which is necessary for nitrogen fixation through symbiotic and non-symbiotic mechanisms [57]. Zn is also mandatory for both plants and animals, as it is related to numerous enzymes and specific proteins [58]. Both Mn and Zn act as part of the link between an enzyme and its substrate; Mn plays a role in nitrogen transformations in many plants and microorganisms. Plants and animals also require Cu, which is again associated with enzyme function, especially those which trigger oxidative processes utilising molecular oxygen [59]. Cu is also a constituent of the photosynthesis pathway [60].
Despite the presence of high Pb concentrations within P. stratiotes samples, Pb per se is not necessary for plant survival but is carried into plants along with other elements. Pb is toxic and is not associated with any notable biological function [61]. In contrast, there was a relatively low uptake of Cd into P. stratiotes, a result which reflected that of earlier publications [26,28,43]. Cd is extremely poisonous and is effectively a surplus waste substance discarded from metal refining and electroplating industries that contaminates the environment [58]. It impacts vegetative propagation, metabolism, and water status [62]. Furthermore, Cd acts as an inhibitor of enzymes within the chlorophyll biosynthesis pathway and thus decreases plant chlorophyll content [63]. Monitoring systems for evaluating the accumulation and effect of PTE contamination within aquatic ecosystems are often reliant on live organisms [64]. In this study, there were significant associations between the water concentration of several PTEs and the concentrations of these elements within P. stratiotes tissues, thus offering a measure of the accumulated consequences of PTE pollution in drain water and a means by which to quantify the quality of the environment. This implies that P. stratiotes can act as an effective biomonitor of the presence of PTEs. Furthermore, vegetation containing notable concentrations of PTEs is now being viewed as a possible measure of the availability of such elements [43]. It was also noted that some of the positive associations of water and P. stratiotes PTE concentrations failed to reach significance, implying that the macrophyte's uptake of all the PTEs present was inconsistent. PTE absorption into P. stratiotes was therefore not dependent on the water concentration of the PTEs in every instance [65]. Similar data related to the association between the PTE concentration of the water and P. stratiotes have been published in previous studies [10,26,28].
PTE distributions within vegetative tissues are not generally uniform in plants from either aquatic or terrestrial ecosystems [26,66]. Their accumulation in various species occurs in accordance with multiple factors, including chemical speciation, water transport, plant species and accompanying phenology, physiology, vigour, propagation and age, climatic parameters, salinity, pH, and interactions among the PTEs [43,51,67,68]. Calculating the BCF is a straightforward technique to measure the translocation of accessible PTEs from either the soil or water into a plant's root system [69], whereas transport from the root to shoot system can be appraised utilising the TF. Yanqun et al. [70] published data indicating that plants that accumulate PTEs have a BCF > 1.0, whereas plants that exclude PTEs have a BCF < 1.0. The current research demonstrated a BCF > 1.0 for P. stratiotes in relation to all the PTEs tested, indicating the ability of this macrophyte to absorb PTEs within its root system, as well as its appropriateness for phytoremediation or rhizofiltration tasks. These data essentially mirrored work published by Galal et al. [28] and Kumar et al. [10]. The fact that P. stratiotes is recognised as being a possible candidate for phytoremediation reflects the view of Weis and Weis [71], who have also reported that PTEs can be accumulated by aquatic plant species through their root systems. Overall, Mn had the largest BCF, with lower values in descending order for Fe, Cu, Zn, Co, Cd, Ni, and Pb. Mn, Fe, Cu, and Zn exhibited a higher BCF as they are essential micronutrients for the macrophyte. In the present study, non-linear regression was used to relate PTEs in the P. stratiotes root system to the PTE concentration in the water. The data demonstrated an exponential drop in BCF values for all the PTEs with rising water concentrations of these elements. In other words, the bioaccumulation of PTEs in the root system decreased with an increase in PTE concentration in the water.
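The exponential decline of BCF with rising water PTE concentration described above can be illustrated with a small fitting sketch. The functional form BCF = a·exp(−b·C_water) and the data points below are illustrative assumptions, since the study reports only the fitted R 2 values, not the raw curves:

```python
# Sketch of fitting an exponential BCF decline, BCF = a * exp(-b * c_water),
# via log-linear least squares. The model form and data are illustrative
# assumptions, not the study's measurements.
from math import exp, log

def fit_exponential(c_water, bcf):
    """Least-squares fit of ln(BCF) = ln(a) - b * c_water; returns (a, b)."""
    x = list(c_water)
    y = [log(v) for v in bcf]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    return exp(intercept), -slope

# Synthetic data following the qualitative pattern reported in the text:
# high BCF at low concentration, exponential fall as concentration rises.
conc = [50, 100, 200, 400, 800]                  # water PTE concentration, ug/L
bcf = [4000 * exp(-0.003 * c) for c in conc]     # generated from the model

a, b = fit_exponential(conc, bcf)
print(f"a = {a:.0f}, b = {b:.4f}")
```

Because the synthetic points lie exactly on the curve, the fit recovers the generating parameters (a ≈ 4000, b ≈ 0.003); with noisy field data, the recovered R 2 would vary by element, as the study found (from 0.037 for Pb to 0.974 for Mn).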
A similar finding was noted by Prasad and Maiti [72] for E. crassipes growing in ponds from mining and non-mining areas in India, and by Eid et al. [55] for E. crassipes grown in irrigation canals in the North Nile Delta in Egypt. A similar inverse relationship was recorded in another investigation in the terrestrial environment by Wang et al. [73] in four common vegetables (Chinese cabbage, spinach, celery, cole) grown on PTE-contaminated soils under field conditions in China. A potential mechanism to explain this is that the plants have a crucial ability to self-regulate PTE uptake into their root systems [74,75]. Additionally, the macrophytes tend to thrive less well in polluted water. This is particularly the case where the water is heavily contaminated; plants become stressed and may fail to survive, owing to the poisonous consequences of the water toxins [72]. In this situation, the poor quality of the habitat diminishes the ability of the macrophytes to absorb PTEs, and thus the concentration of these PTEs within the root system is reduced [73]. The results therefore indicate that the water concentration of a PTE is an important determinant of its availability for uptake. The TF is a measure of the effectiveness of PTE transfer from the macrophytes' roots to their shoot systems. Calculation of this parameter for P. stratiotes revealed some differences between the varying PTEs; the value was <1.0 for the majority of PTEs evaluated. P. stratiotes therefore has the capability to prevent some PTEs from reaching its physiologically active components, e.g., the leaves. The differences seen in the TF values could be associated with the interactions between the PTEs, which can originate from antagonistic and synergistic processes [76,77].
Further factors to explain the differences in TF include physiological parameters relating to the plant, PTE solubility and availability factors, and governance pathways within the root and shoot systems which limit translocation to the latter [74,77]. Regression models can be used as mathematical strategies to facilitate the prediction of plant PTE concentrations utilising water parameters, e.g., the PTE concentration and pH [10,11]. Key related factors influencing plant absorption include PTE solubility and bioavailability [44]. pH is one of the most significant factors determining the net metal ion availability in aqueous solutions, as well as their subsequent absorption by plants [78]. Thus, the water pH is often involved in such models, as it impacts the bioavailability of the PTEs [10,11]. In the current study, the pH in the Al-Sero Drain ranged between 7.0 and 8.9. In a recently published study, the pH influence on the effectiveness of PTE absorption by the plant was reported as acidic > neutral > basic [29]. A study by Awuah et al. [79] showed that P. stratiotes was capable of growing at a minimum pH of 4.4 when grown in ponds for wastewater treatment. Therefore, lowering the pH value of the Al-Sero Drain could enhance the plant's uptake efficiency for all the selected PTEs. The results from this study demonstrated the ability of the models to estimate the quantity of PTEs within P. stratiotes root and shoot systems, according to parameters of model performance, i.e., R 2, ME, MNAE, and t-values. In the designed models, satisfactory R 2 values were calculated in some instances, ranging from 0.147 (Cu, root system) to 0.592 (Mn, shoot system). The diversity observed indicates that P. stratiotes may exhibit some metal-specific uptake properties [10].
The data presented in this study are new, with respect to the generation of regression models, in terms of their use as predictive tools for PTE absorption in P. stratiotes grown in a natural environment. To the authors' knowledge, no studies focused on this scenario have been published to date. Thus, the presented data have been contrasted with research conducted within a laboratory setting. Kumar et al. [10] described a range for R 2 for Cd in P. stratiotes of 95.0-99.0% when the macrophyte was cultured in paper effluent within a laboratory-scale phytoremediation model. This compares with an R 2 for Cd of 29.4-29.9% measured in this study in a natural habitat. The R 2 for Pb attained by Kumar et al. [10] was between 79.0% and 91.0%; in this study, the range was 18.6-40.5%. The higher values in the former work suggested minimal intersample diversity; the data were collected from macrophytes cultured in a uniform laboratory setting. In the current study, the lower R 2 values may reflect the fact that the samples were collected over a year, from May 2013 to April 2014, so that variations in water conditions and PTE concentrations over the seasons were merged in the dataset. Additionally, the smaller R 2 values in this research may reflect a lack of model sophistication and its restricted ability to capture complex natural PTE phenomena [80]. Conclusions The current research was carried out in order to design new regression models for the prediction of eight PTE concentrations within the root and shoot systems of P. stratiotes, from the equivalent water elemental concentrations, utilising the water pH as a cofactor. P. stratiotes was characterised by a BCF > 1.0 for all eight PTEs evaluated in the study, and the TF of Cd, Cu, and Pb were > 1.0. This indicates that P. stratiotes is suitable for Cd, Cu, and Pb phytoextraction, as well as the exclusion of the remaining PTEs. Moreover, the high BCF and low TF of most investigated PTEs indicate the potential of P.
stratiotes for phytostabilisation of these PTEs. The majority of the designed models for the prediction of PTE concentrations within the shoot and root systems of this plant were robust, offering a good fit, with high efficacy and minimal error. They could therefore be of use as predictors of PTE accumulation within the tissues of P. stratiotes inhabiting drainage canals, with the exception of the models with a low R 2. These models represent new possibilities for environmental risk assessments and the creation of standards for PTE water quality. An extended field study may be needed for irrigation canals.
2021-08-02T00:06:03.115Z
2021-05-08T00:00:00.000
{ "year": 2021, "sha1": "dffdebd4cdb70c08f5ae9ed07ca9f2a1452203a5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/su13095276", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "1dcf6bd8fe0814f04929ec33b5d0edb0820877f3", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
260632470
pes2o/s2orc
v3-fos-license
The GRAVITY Young Stellar Object survey VIII. Gas and dust faint inner rings in the hybrid disk of HD141569 The formation and evolution of planetary systems impact the primordial accretion disk. HD141569 is the only known pre-main sequence star characterized by a hybrid disk. Observations have probed the outer-disk structure, showing a complex system of rings, and interferometric observations attempted to characterize its inner 5 au region but derived limited constraints. The goal of this work was to explore, with new high-resolution interferometric observations, the properties of the dust and gas in the internal regions of HD141569. We observed HD141569 on mas scales with GRAVITY/VLTI in the near-infrared at low and high spectral resolution. We interpreted the visibilities and spectral energy distribution with geometrical models and radiative transfer techniques to constrain the dust emission. We analyzed the high spectral resolution quantities to investigate the properties of the Br-Gamma line emitting region. Thanks to the combination of three different epochs, GRAVITY resolves the inner dusty disk in the K band. Data modeling shows that an IR excess of about 6% is spatially resolved and that the origin of this emission is confined in a ring of material located at a radius of 1 au from the star with a width smaller than 0.3 au. The MCMax modeling suggests that this emission could originate from a small amount of QHPs, while large silicate grain models cannot reproduce at the same time the observational constraints on the properties of near-IR and mid-IR fluxes. The differential phases in the Br-Gamma line clearly show an S-shape that can be best reproduced with a gas disk in Keplerian rotation, confined within 0.09 au. This is also hinted at by the double-peaked Br-Gamma emission line shape. 
The modeling of the continuum and gas emission shows that the inclination and position angle of these two components are consistent with a system showing relatively coplanar rings on all scales.

Introduction

The formation and evolution of protoplanetary disks are directly linked to planet formation. The outer disk features of young stellar objects (YSOs) have been thoroughly studied in the past through scattered light imaging (e.g., with SPHERE/VLT, Beuzit et al. 2019) and with ALMA in the (sub-)millimeter range (ALMA Partnership et al. 2015). With both techniques, disk observations have shown rings, gaps, and asymmetric structures up to a few hundred au (e.g., Lodato et al. 2019; Benisty et al. 2017). (GRAVITY is developed in a collaboration by the Max Planck Institute for Extraterrestrial Physics, LESIA of Paris Observatory and IPAG of Université Grenoble Alpes/CNRS, the Max Planck Institute for Astronomy, the University of Cologne, the Centro de Astrofísica e Gravitação, and the European Southern Observatory.) The inner regions (at ∼au scale) of such disks are also of prime interest since key processes like gas accretion flows, winds, outflows, and dust sublimation take place there. All these processes affect the dynamics and evolution of the first few au, the region where terrestrial planets may form and/or migrate over a few million years. Constraints on these processes can be derived indirectly through spectroscopic studies, but at typical distances of a few hundred parsecs only observations with milliarcsecond (mas) resolution, which are required to probe sub-au scales, can discriminate between competing models. Numerous interferometric studies have been conducted in the past in the near- and the mid-infrared (IR), for instance with IOTA (Millan-Gabet et al. 2001), the Palomar Testbed Interferometer (PTI; Eisner et al. 2004), the Keck Interferometer (Monnier et al. 2005; Eisner et al. 2014), and the VLTI (Menu et al. 2015; Lazareff et al. 2017; Gravity Collaboration et al.
2019). To date, the new generation of four-telescope instruments including GRAVITY, operating in the K band (Gravity Collaboration et al. 2017), and MATISSE, operating in the L to N bands (Lopez et al. 2014), is pushing further the achievable spectral coverage, sensitivity, and precision of interferometric measurements. Even though statistical studies of large YSO samples are of high relevance (Lazareff et al. 2017; Gravity Collaboration et al. 2019), some of these objects require a more in-depth study enabled by the improved data quality of recent interferometric observations. Some objects, like HD 141569, are unusual in their evolutionary sequence and require dedicated studies. For such systems the exact nature and properties of the very inner regions are still a matter of debate and can be solved in part by modeling the distribution of the warm dust traced in the K band. Furthermore, it is still unclear how to characterize the star-disk mechanisms traced by emission lines like the hydrogen Brackett-γ (Brγ, T ∼ 8000-10000 K) and the CO (T ∼ 2000-3000 K) bandheads, and where this emission occurs. The high-quality spectroscopic capabilities of the GRAVITY instrument allow us to study in detail the gas phase and its spatial morphology thanks to the interferometric visibilities and differential phase signals (Gravity Collaboration et al. 2020a,b, 2021). HD 141569 is a Herbig star classified as a B9-A0 spectral type (Augereau & Papaloizou 2004), with an effective temperature of 9750 ± 250 K, an estimated age of 7.2 ± 0.02 Myr, a luminosity between 16.60 ± 1.07 L⊙ (Vioque et al. 2018) and 27.0 ± 3.6 L⊙ (Di Folco et al. 2020), a mass of 2.14 ± 0.01 M⊙, and a Gaia distance of 110 ± 1 pc (Arun et al. 2019). It is a non-flaring disk system with little mid-IR excess classified as a group-II source (Meeus et al. 2001). It is the only known pre-main sequence star characterized by a hybrid disk (Wyatt et al. 2015; Péricaud et al. 2017; Di Folco et al.
2020), an evolutionary disk state between the protoplanetary and debris-disk regimes. Near-IR imaging spatially resolved an optically thin disk consisting of two rings located at about ∼280 and ∼455 au from the star (Augereau et al. 1999; Weinberger et al. 1999; Biller et al. 2015). A more complex system is seen in the visible, consisting of multiple rings and outer spirals that could be explained through perturbations by two nearby (∼7.5 arcsec) M dwarfs, or by planetary perturbations (Augereau & Papaloizou 2004; Wyatt 2005; Reche et al. 2009). Fisher et al. (2000) found a warm disk component extending up to 110 au at 10.8 and 18.2 µm, later confirmed by Marsh et al. (2002). The short-wavelength counterpart of this component was detected by Mawet et al. (2017) through L′ imaging, ranging between 20 and 85 au. Emission at 8.6 µm was detected by Thi et al. (2014), and was interpreted as emission from polycyclic aromatic hydrocarbons (PAHs). NOEMA and ALMA observations in the millimeter range showed continuum emission equally shared between a compact (≲50 au) and a smooth extended dust component (∼350 au), with large millimeter grains dominating the inner regions and smaller grains the outer ones (Di Folco et al. 2020). Finally, inner disk features were detected by SPHERE in the Y, J, H, and K bands (Perrot et al. 2016) and by Keck/NIRC2 in the L' band (Currie et al. 2016) at physical separations of 45, 61, and 88 au. These results highlight the high morphological complexity of the outer disk in the HD 141569 system.

Little is known about the central astronomical units of the system. The spectral energy distribution (SED) of HD 141569 alone does not help us in this sense since the IR excess is very small (see Fig. 9 of Thi et al. 2014). The majority of the K-band measurements listed in Table G.1 reflect a featureless SED in the near-IR. Moreover, pure SED fits obtained by different authors (Li & Lunine 2003; Merín et al. 2004; Thi et al.
2014) may suggest at first that the near-IR emission is exclusively photospheric in nature and that the disk contributes only at longer wavelengths. The object was observed at milliarcsecond resolution in the K band with the PTI and the Keck interferometer, but it was spatially unresolved (Eisner et al. 2004, 2009). Monnier et al. (2005) derived a 10 mas upper limit in radius for the spatial extension of the K-band emission. Therefore, trustworthy information on the first 5 au of the system is scarce, and the question arises of whether the inner region of the disk could already be in a debris disk stage, where the SED fits in the near-IR are not accurate enough to detect such a faint excess. The circumstellar gas has been observed in both atomic and molecular form, which suggests the system has not yet reached the gas-depleted stage characteristic of a debris disk system. Mendigutía et al. (2017) set an upper limit of ∼0.11 au for the gas region responsible for the spatially unresolved double-peaked Hα emission. A comparable upper limit of ∼0.13 au for the Brγ line emitting region is suggested by Eisner et al. (2009). Both lines are observed not to be variable over timescales of days and years (Eisner et al. 2015; Mendigutía et al. 2011b). In addition to hydrogen, CO ro-vibrational emission (v ≥ 1, ∆v = 1) was observed by many authors, extending from 10 to 275 au (Dent et al. 2005; Goto et al. 2006; Brittain et al. 2007; Flaherty et al. 2016; White et al. 2016; Miley et al. 2018; Di Folco et al.
2020). We present here the first GRAVITY interferometric observations of this disk, with the goal of revealing the geometry and dynamics of the internal structure of HD 141569, and of gaining insights about the dust and gas properties. Section 2 describes the observations; Section 3 and Section 4 present the observational data and the adopted methodology; Section 5 describes the results of the possible scenarios along with the corresponding modeling; a discussion is developed in Section 6.

Observations

HD 141569 was observed with GRAVITY/VLTI (Fig. 1, left panel) with a maximum angular resolution of λ/2B, about 1.7 mas for the longest baseline (B) of 130 m, which corresponds to about 0.19 au at a distance of 110 pc. The data consist of high spectral resolution (R ∼ 4000) observables recorded by the science channel (SC) detector over the whole K band with individual integration times of 30 s, and of low spectral resolution (R ∼ 20) observables recorded by the fringe tracker (FT) detector (five spectral channels over the K band at 1.908, 2.058, 2.153, 2.256, 2.336 µm) at frame rates of ≈300 and ≈900 Hz (Lacour et al. 2019). Each observation block corresponds to 5 minutes on the object. In total, three files were acquired in March 2019, one in May 2019, and eight in July 2019. HD 141569 observations were preceded by the observation of a point-source calibration star, close to our object.

Data

All the data were reduced and calibrated using the GRAVITY data reduction software (Lapeyrere et al. 2014). For the low-resolution FT data we discarded the first spectral channel, which can typically be affected by the metrology laser operating at 1.908 µm. Figure 1 shows the U-V plane coverage and the FT calibrated squared visibilities and closure phases (right, left, and center panels, respectively). Following Gravity Collaboration et al.
(2019), we applied a floor value on the error bars of 2% for the squared visibilities and 1° on the closure phases, as the error bars computed by the pipeline might be underestimated or correlated. We observe that GRAVITY partially resolved the near-IR emission in HD 141569, with squared visibilities between 0.8 and 1.0. Therefore, the data can be used to estimate the characteristic size of the dust environment (see Section 5.1). Moreover, with the inclusion of the May 2019 large configuration data, we observe that the visibility reaches a plateau at almost all spatial frequencies, allowing us to constrain the near-IR flux contributions of the star and environment.

Since the closure phases are consistent with 0° at all baselines and for all the epochs, we can confidently consider the emission to be centro-symmetric on the spatial scale of our observations, and we therefore discard the hypothesis of a close companion as the origin of the resolved emission, at least within the 250 mas (∼28 au) field of view of GRAVITY with the ATs. For the high-resolution SC data, we concentrated on the July 2019 dataset only since this is the epoch where we gathered the highest number of files. In order to optimally exploit the SC dataset, the eight files from July 2019 were merged in order to increase the signal-to-noise ratio per spectral channel. Considering that the maximum span in position angle between the two extreme positions of the UV coverage is only ∼4°, we do not expect any visibility smearing of the data due to differences in hour angles. The error bars were computed as the standard deviation between the eight files of the corresponding differential quantity (visibility or differential phase) in each spectral channel. For instance, we derived the absolute error on the visibility to be about 1.8%. Figure 2 shows the visibility amplitude (left panels) and the differential phase (right panels) for the six baselines in the region of interest of the Brγ line, between 2.15 and 2.18
µm. The top panels show the object spectrum normalized to the continuum, corrected for telluric lines (left plot), and corrected for both telluric lines and photospheric absorption (right plot). The visibilities appear to be spectrally flat with no clear signature at 2.16612 µm. They are measured to vary between 0.9 and 0.96 as a function of the baseline, which is indicative of a compact region well inside the dusty disk. Interestingly, the differential phase signal is more marked at the position of the Brγ line. We observe a clear S-shaped signal through baseline J3-D0 (100.1 m, 220°).

In order to study the Brγ line gas region using the SC high spectral resolution data, we need precise measurements of the line-to-continuum flux ratio. It is essential to perform a proper wavelength calibration of the spectrum and to take out the contribution of the telluric lines. We describe the whole procedure in Appendix C. Errors are from the original data, reduced and calibrated through the GRAVITY data reduction software. The high-resolution GRAVITY spectrum of HD 141569 (see top panels of Fig. 2) shows a double-peaked Brγ emission line. Since the error associated with the GRAVITY wavelength calibration is ∼3 Å, the two peak positions can be considered to be symmetric with respect to the Brγ rest wavelength. Both the double-peaked emission line and the S-shaped feature in the differential phase suggest a scenario where the gas emitting in the Brγ line could be in Keplerian rotation. We explore this hypothesis further in the following sections.

Methodology

The properties of the spatially resolved continuum emission were first investigated with the help of low spectral resolution FT data (see Sects. 4.
1 and 5.1). The squared visibility curve was modeled through chromatic geometrical models accounting for a point-like central star and simple geometrical shapes (Gaussian disk, geometrically thin ring, Gaussian-convolved infinitesimally thin ring) representing the circumstellar environment. Useful information was obtained, such as the star-to-dust flux ratio, the dust spectral index, and the spatial distribution of the dust. To gain further information on the dust emission properties, we investigated through radiative transfer (RT) modeling the impact of such a component on the SED (see Section 5.2). We used the RT code MCMax (Min et al. 2009), which solves 2D RT (e.g., Bjorkman & Wood 2001) to calculate the dust density and temperature structure of a given disk setup. In Sects. 4.2 and 5.3 we discuss the high spectral resolution SC data in the Brγ region, used to constrain the spatial scale of the hot gas emitting component through modeling of the visibility curves, and through the analysis of the Brγ spectrum under the assumption of a gas disk in Keplerian rotation. Further information on the gas region size and its dynamical properties was derived through the analysis of the differential phases and the resulting photocenter shifts. Finally, an analytical axisymmetric Keplerian disk model is compared to our observations.

Dust continuum: low spectral resolution data

Following the work of Lazareff et al. (2017) and Gravity Collaboration et al.
(2019), we used geometric models that consist of a point-like central star, assumed to be unresolved at all observed baselines, and a circumstellar environment in order to fit the observed visibilities. The complex visibility of the system at spatial frequencies (u, v) and at wavelength λ is therefore described by a linear combination of the two components as

V(u, v) = F_s V_s + F_c V_c(u, v),   (1)

where V_c is the visibility of the circumstellar environment, and F_s and F_c the specific fractional flux contributions of the star and of the circumstellar environment, respectively (F_s + F_c = 1). The visibility of the star V_s is equal to 1 since we assume it to be unresolved.

Since our GRAVITY FT data contain six visibility measurements and four closure phases for each of the four spectral channels and for each file, we can derive the spectral dependence of the circumstellar environment by modeling it as a power law, defined by its spectral index k_c, where k = d log F_λ / d log λ, and by describing the complex visibility of the system as

V(u, v, λ) = [F_s (λ/λ_0)^{k_s} + F_c (λ/λ_0)^{k_c} V_c(u, v)] / [F_s (λ/λ_0)^{k_s} + F_c (λ/λ_0)^{k_c}],   (2)

where λ_0 = 2.15 µm is the wavelength of the central spectral channel of the FT, and k_s the spectral index of the star, derived assuming that it radiates as a black body at the star's effective temperature T_eff = 9750 K, which translates into a spectral index k_s = −3.62 at λ_0 for the central star.

We chose to fit our visibility data with three different geometric models that differ only by the V_c term in Eq. 2. Since the closure phases are basically zero for every baseline, as shown in the central panel of Fig. 1, we do not consider any azimuthal modulation in our models. Therefore, the resulting brightness distributions are centro-symmetric.
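The quoted stellar spectral index follows directly from evaluating d log F_λ / d log λ for a 9750 K black body at 2.15 µm. A short Python check (the finite-difference step `dlam` is an arbitrary choice for the numerical derivative):

```python
import math

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI Planck constants

def planck(lam, temp):
    """Planck spectral radiance B_lambda(T) in SI units."""
    x = H * C / (lam * KB * temp)
    return (2.0 * H * C**2 / lam**5) / math.expm1(x)

def spectral_index(lam, temp, dlam=1e-9):
    """k = d log F_lambda / d log lambda, via a centered finite difference."""
    f1, f2 = planck(lam - dlam, temp), planck(lam + dlam, temp)
    return (math.log(f2) - math.log(f1)) / (math.log(lam + dlam) - math.log(lam - dlam))

# spectral index of a 9750 K photosphere at the FT central channel (2.15 um)
k_s = spectral_index(2.15e-6, 9750.0)   # ~ -3.62, as quoted in the text
```

At these wavelengths the star is close to, but not yet in, the Rayleigh-Jeans regime (which would give k = −4), hence the value −3.62.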
The first model consists of a Gaussian disk whose visibility is described, from Berger & Segransan (2007), as

V_gauss(u, v) = exp[−(π Θ r)² / (4 ln 2)],   (3)

where Θ is the Gaussian full width at half maximum (FWHM) and r = √(u² + v²) = B/λ, with u and v the spatial frequency coordinates, and B the projected baseline. The second model consists of a geometrically thin ring whose visibility is described by subtracting an inner smaller uniform disk from a larger one:

V_ring(u, v) = (F_outer V_outer − F_inner V_inner) / (F_outer − F_inner).   (4)

Here the subscript outer refers to the larger disk and the subscript inner refers to the inner hole. After normalization, and knowing that F_disk = π D² / 4, where D is the diameter of the disk, we can express Eq. 4 as

V_ring(u, v) = [D_outer² · 2 J₁(π D_outer r)/(π D_outer r) − D_inner² · 2 J₁(π D_inner r)/(π D_inner r)] / (D_outer² − D_inner²),   (5)

where J₁ is the first-order Bessel function. Finally, our last model consists of an infinitesimally thin ring convolved by a Gaussian, whose visibility is described as the product between V_gauss(u, v) and

V_iring(u, v) = J₀(2π a r),   (6)

where J₀ is the zero-order Bessel function and a is the radius of the infinitesimally thin ring (Berger & Segransan 2007). Since in the Fourier space the convolution of two functions is simply their multiplication, we have

V(u, v) = V_gauss(u, v) J₀(2π a r).   (7)

The inner hole radius and ring width are defined as r_i = D_inner/2 and w = (D_outer − D_inner)/2, respectively, for the geometrically thin ring model, while they are defined as r_i = a − Θ/2 and w = Θ, respectively, for the Gaussian-convolved infinitesimally thin ring. Inclination and position angle of the circumstellar environment are taken into account through the parameter r = √(u² + v²), following Berger & Segransan (2007). The model fitting is based on a Markov chain Monte Carlo (MCMC, Foreman-Mackey et al. 2013) numerical approach and was implemented on the combined dataset of all three epochs in order to maximize the number of experimental points against the number of free parameters. This assumes that the near-IR emission and the disk structures are not variable over a five-month period, which is strengthened by the fact that the star is not variable, either spectroscopically in the optical (Mendigutía et al.
2011a) or photometrically in the mid-IR (Kóspál et al. 2012). Once a global solution was identified, we further checked how well the parameters are indeed constrained by the data. For this purpose, we performed a series of squared visibility fits to the model by fixing the tested parameter to different values and leaving the other parameters free in the subsequent minimization. In this way we obtain a χ² curve as a function of the tested parameter value.

Finally, the error on the χ² was estimated by treating the quantity T_i as a stochastic variable, with

T_i = (y_i − y_model)² / σ_i²,

where N is the number of points in the dataset, y_i the individual measurement, y_model the value of the model, and σ_i the error associated with the measurement. The χ² value is given by the mean of T_i, and the χ² error is given by the error on the mean of T_i,

σ_χ² = σ(T_i) / √N.

This applies to the reduced χ² as well.

Gas: high spectral resolution data

To estimate the gas region size from the SC visibility we extrapolated the pure-line contribution from the total visibility (line+continuum) displayed in Fig. 2. To do this we modeled the total visibility with a three-component model that accounts for the contributions from the star, the circumstellar dust, and the line emitting gas. The total visibility is therefore given by

V_tot(λ) = [α(λ) F_s + F_c V_c + F_L(λ) V_L] / [α(λ) F_s + F_c + F_L(λ)],   (11)

where α(λ) is the science star continuum-normalized photospheric absorption (see Appendix C for more details), the subscript c refers to the dust component, and the subscript L refers to the Brγ line gas. From Eq. 11 it can be proven (see Appendix D) that the pure-line visibility is given by

V_L = { V_tot [(α(λ) + β(λ))/(1 + β(λ)) + F_L/C] − [α(λ) + β(λ) V_c]/(1 + β(λ)) } / F_L/C,   (12)

where F_L/C is the line-to-continuum flux ratio and β(λ) is the disk-to-star flux ratio outside the line,

F_L/C(λ) = F̃(λ) − [α(λ) + β(λ)] / [1 + β(λ)],   (13)

with F̃(λ) the continuum-normalized total spectrum. For clarity, we note that, outside the line emitting region, α(λ) and F_L(λ) tend toward 1 and 0, respectively. Equation 13 corresponds to the line-to-continuum ratio including the photospheric absorption. Finally, we estimated the gas region size by modeling it with an infinitesimal ring model given by Eq. 6.
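The geometric visibility models of Eqs. 3-7 are straightforward to evaluate numerically. Below is a minimal pure-Python sketch for the face-on, single-channel case (inclination and position angle would enter only through the deprojection of r); the Bessel functions are computed from their integral representation so that no external libraries are needed, and the flux fractions and ring radius in the example are of the order of the best-fit values reported later in this paper:

```python
import math

def bessel_j(n, x, steps=2000):
    # J_n(x) = (1/pi) * integral_0^pi cos(n*tau - x*sin(tau)) dtau  (trapezoidal rule)
    h = math.pi / steps
    s = 0.5 * (1.0 + math.cos(n * math.pi))          # endpoint terms (tau = 0 and pi)
    for k in range(1, steps):
        tau = k * h
        s += math.cos(n * tau - x * math.sin(tau))
    return s * h / math.pi

MAS = math.radians(1.0 / 3.6e6)                      # one milliarcsecond in radians

def v_gauss(theta_mas, r):
    # Gaussian disk of FWHM Theta (Eq. 3): V = exp(-(pi*Theta*r)^2 / (4 ln 2))
    return math.exp(-(math.pi * theta_mas * MAS * r) ** 2 / (4.0 * math.log(2.0)))

def v_ud(d_mas, r):
    # uniform disk of diameter D: V = 2 J1(pi*D*r) / (pi*D*r)
    x = math.pi * d_mas * MAS * r
    return 1.0 if x == 0.0 else 2.0 * bessel_j(1, x) / x

def v_thin_ring(d_in_mas, d_out_mas, r):
    # geometrically thin ring (Eq. 5): normalized difference of two uniform disks
    num = d_out_mas**2 * v_ud(d_out_mas, r) - d_in_mas**2 * v_ud(d_in_mas, r)
    return num / (d_out_mas**2 - d_in_mas**2)

def v_conv_ring(a_mas, theta_mas, r):
    # infinitesimally thin ring of radius a convolved with a Gaussian (Eqs. 6-7)
    return bessel_j(0, 2.0 * math.pi * a_mas * MAS * r) * v_gauss(theta_mas, r)

# star + environment (Eq. 1, single spectral channel), illustrative values:
f_s, f_c = 0.938, 0.062            # stellar / circumstellar flux fractions
r = 130.0 / 2.15e-6                # longest baseline at 2.15 um -> spatial frequency
v2 = (f_s + f_c * v_conv_ring(7.4, 0.3, r)) ** 2   # squared visibility of the system
```

Because the star contributes ∼94% of the unresolved flux, the squared visibility stays close to unity even when the ring itself is fully resolved, which is the plateau behavior exploited in the fits.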
In the same way, we needed to take out the continuum contribution from the total differential phases (Fig. 2). Following Weigelt et al. (2011), the pure-line differential phase is given by

φ_L = (F_tot / F_L) φ,   (14)

valid for small phases, where φ_L is the pure-line differential phase, φ the total differential phase, F_tot the total flux (star, dust, and gas), and we write the ratio F_tot/F_L through Eq. D.5. Following Le Bouquin et al. (2009), we can derive wavelength-dependent photocenter displacements along each baseline from the pure-line differential phases by

p = −(φ_L / 2π) (λ / B),   (15)

where p is the projection on the baseline B of the 2D photocenter vector with origin on the central star. The error bars on the pure-line differential quantities are computed through error propagation in Eq. 12 and Eq. 15.

Disk component inside 2 au

Of the three models discussed in Sect. 4.1, the fit of the squared visibilities to the Gaussian disk model (Eq. 3) did not converge to a solution (i.e., the marginal posterior distributions for the Gaussian width, inclination, and position angle are flat). Therefore, we discarded this model in the rest of the work. Solutions were found for the geometrically thin ring model (Eq. 5) and the Gaussian-convolved ring model (Eq. 7). The two models converge basically toward the same solution, with a reduced χ²_r = 4.7 for both models. We performed a wide scan of the fitted parameter ranges to find convergence toward a global solution. The results of the minimization are presented in Fig. B.1 and Table 1 for the ring model, along with the parameter scan ranges and the 1σ uncertainties. The resulting MCMC posterior distribution is presented in Fig.
B.1 (top plots) and allows us to identify an optimal global solution for the six parameters. The fitting process leads to a photospheric near-IR flux contribution of ∼93.8%, and therefore to a dust ring flux contribution of ∼6.2%, for both models. Interestingly, the degeneracy typically found between the disk's flux and the characteristic size in V² is broken here because the constant plateau as a function of spatial frequencies unambiguously determines the level of the disk's flux contribution. Both models predict a spectral index k_c for the dust ring with a value of −0.35 ± 0.2. To better illustrate the visibility plateau and the expected modulation due to the modeled thin ring, we show in the bottom plots of Fig. B.1 three visibility curves corresponding to the best model for three selected baseline orientations and for a fixed wavelength value of 2.15 µm. Regarding the geometrical shape of the circumstellar environment, both models lead to a ring inclined from face-on by 58.5°, with a northeast position angle PA ∼ 0°. The inner hole radius is estimated to be r_i ≈ 7.4 mas (0.8 au) from both models, while the ring width is estimated to be ∼0.24-0.35 mas (0.03-0.04 au, for the Gaussian-convolved ring model and the geometrically thin ring model, respectively).

(Notes to Table 1: F_s is the stellar flux contribution, F_c the dusty circumstellar environment flux contribution, k_c the dust spectral index at 2.15 µm, r_i the ring inner hole radius, w the ring width (r_i and w defined in different ways for the two models, see Section 4.1), i the ring inclination from face-on, PA the northeast position angle, and χ²_r the reduced chi-square. The uncertainties on the fitted parameters correspond to the 1σ error. Scan ranges refer to both models.)

Importantly, Fig.
B.2 shows the χ²_r curves of each parameter for the geometrically thin ring model, which helps us to evaluate how well each parameter is constrained by our data. The stellar near-IR flux contribution is very well constrained, as we expected from the plateau seen in the squared visibility curve. The ring spectral index is more loosely constrained, since the best value is consistent with values ranging between −2 and 2. The ring inclination is constrained between ∼45° and ∼75°, while the position angle is less well constrained, with two possible minima at ∼0° and 120°, the former considered to be the absolute minimum. The inner hole radius is constrained to be inside the first 2 au of the system, with a global minimum found around 0.8 au (7.4 mas) and a second (almost equally possible) solution at ∼1.7 au (15.4 mas). Taking into account the upper limit of 10 mas for the radius of the K-band emission found by Monnier et al. (2005), we decided to adopt r_i = 0.8 au. According to Fig. B.2, the ring width w tends toward small values, not larger than ∼0.3 au. This is discussed further in Sect. 6.1.

Dust properties through radiative transfer modeling

To strengthen the obtained results and to assess the scenario of an inner ring as close as 0.8 au, gaining further information on the emission properties, we investigated through RT modeling the impact of such a component on the SED. We note here that our aim was not to perform a detailed mineralogy study of the system, but rather to understand how the detected inner dust is consistent with both the near-IR flux and the overall SED of the system. We modeled the multiple and complex outer rings with only three rings, based on the results from Thi et al.
(2014). In their model, the lower-limit particle size was set at 0.5 µm and the upper one at 0.5-1 cm for the two outermost rings and the innermost one, respectively. The grain size follows a distribution ∝ a^−3.5, the surface density profile is a modified version of Li & Lunine (2003), and the flaring index is γ = 1. The three rings peak at ∼15, 185, and 300 au, with the first two rings separated by a 75 au gap. Our initial disk setup consists of a disk structure similar to Thi et al. (2014), but with updated stellar parameters (see Sect. 1) and a grain population based on DIANA standard dust grains (Woitke et al. 2016) containing 75% amorphous silicates (e.g., Mg0.7Fe0.3SiO3), 25% porosity, and no amorphous carbon. Our modified grain size distribution and surface density are described in the next paragraph. The computed SEDs account for both thermal emission and scattered light contributions.

A silicate dust ring: First, we attempted to reproduce the near-IR excess detected by GRAVITY by including dust grains close to the star, but also taking into account the fact that HD 141569 does not show a silicate emission feature at 10 µm (Seok & Li 2017), which is in part connected to the grain size distribution. We tested several models of the inner ring with the following properties: a varying lower-limit dust grain size of 0.6, 1.2, 2.5, 5.0, 10, 20, 40, 80, 158, and 316 µm; an equal upper-limit grain size of 1 cm, with a size distribution ∝ a^−3.5; a surface density ∝ r^−1; and an inner ring radius fixed at 0.8 au with a width of 0.04 au, according to our best-fit model. All models with grains smaller than ∼20 µm in the inner ∼1 au region can be tuned to reproduce the ∼6% near-IR excess, but at the same time they still exhibit a clear 10 µm silicate feature, which is not consistent with the observations. Models accounting only for grains larger than 40 µm result in an almost complete quenching of the 10 µm silicate feature. When testing the mass at ∼1 au required
not to exceed the mid-IR flux, we find that 10⁻¹⁰ M⊙ (or 3.3×10⁻⁵ M⊕) would be compliant with this condition, but only a ∼1% near-IR excess is generated. On the other hand, the dust mass required to reach a ∼6% near-IR excess is well beyond 8.5×10⁻¹⁰ M⊙ (or 2.8×10⁻⁴ M⊕), but this then produces too much mid-IR emission, inconsistent with the known SED. Finally, decreasing the percentage of silicates and increasing the carbon percentage in the dust grains up to 25% did not improve the SED fit. Obtaining a more precise fit to the global SED would require further analysis and tuning of the outer ring contribution in the mid-IR, which is beyond the immediate goal of the paper. However, our modeling seems to point out that it is difficult to reconcile the reported level of near- and mid-IR excess with a model of solely silicate dust in the disk ring at ∼1 au. Three representative cases of our modeling are presented.

(Notes to Table 2: The stellar luminosity has been revisited in this work as follows. We determined a lower and upper limit of that value by matching in the K band the photospheric flux plus the near-IR excess with the 2MASS photometry within its 5% uncertainty (see Table G.1). This provided a stellar luminosity between 18 and 19 L⊙, in agreement with the revised value by Vioque et al. (2018).)

A ring of quantum heated particles: Another way to produce near-IR emission consistent with the absence of the prominent 10 µm silicate feature and with the presence of the mid-IR PAH bands is to consider quantum heated particles (QHPs, Purcell 1976; Draine & Li 2001). In the context of interferometric observations, this scenario was invoked for HD 100453, where QHPs were detected in the disk gap (Klarmann et al. 2017), and for HD 179218, with the presence of hot QHPs inside the disk cavity (Kluska et al. 2018).
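The revised luminosity, combined with the adopted effective temperature, fixes the stellar radius through the Stefan-Boltzmann law, which is convenient for converting the angular scales in this paper into stellar radii. A quick sketch, assuming the midpoint of the revised range, L = 18.5 L⊙ (this midpoint is our choice, not a value from the paper):

```python
import math

T_SUN = 5772.0      # K, IAU nominal solar effective temperature
L = 18.5            # L_sun, midpoint of the revised 18-19 L_sun range (assumed here)
T_EFF = 9750.0      # K, effective temperature of HD 141569

# Stefan-Boltzmann: L = 4*pi*R^2*sigma*T^4  ->  R/R_sun = sqrt(L/L_sun)*(T_sun/T_eff)^2
r_star = math.sqrt(L) * (T_SUN / T_EFF) ** 2    # stellar radius in solar radii, ~1.5

# one stellar radius expressed in au (R_sun = 0.00465 au)
r_star_au = r_star * 0.00465047
```

With these numbers the 0.09 au gas region radius derived from the Brγ analysis corresponds to about 13 stellar radii, consistent with the conversion quoted in the text.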
The highest masses (∼10⁻¹³ M⊙) produce an inner rim emission which results in a near-IR excess that is too large and inconsistent with the GRAVITY measurement. The smallest value (∼10⁻¹⁵ M⊙) corresponds instead to an optically thin disk with negligible excess at 2 µm. For a ring geometry in agreement with our best fit of the GRAVITY data, a sweet spot is found for a mass of 4.3×10⁻¹⁴ M⊙ (or 1.4×10⁻⁸ M⊕), for which the resulting disk produces a near-IR excess of ∼7%. Smaller particle sizes (e.g., 10² carbon atoms) would require a larger mass reservoir of QHPs to reach a near-IR excess of ∼6%. As a consequence, a higher mass would result in stronger mid-IR PAH features, overestimating the HD 141569 SED observed in the Spitzer IRS spectrum (Sloan et al. 2005). The order of magnitude of ∼10⁵ carbon atoms per particle appears consistent with the SED profile estimated by GRAVITY and IRS/Spitzer. The resulting disk SED shows a spectral index (d log F_λ / d log λ) of −1.4 at 2.15 µm, which is consistent with the minimization curve of the spectral index in Fig. B.2.

We find that the strongest constraint on the ring width is set by our interferometric measurement. When considering widths from 0.04 to 0.3 au in our RT modeling, the observed tendency remains the same: the population of silicate dust grains does not satisfactorily reproduce the near- and mid-IR excess, contrary to a population of stochastically heated small grains.
Besides the ring's width, the important result of this analysis suggests that a tenuous, QHP-dominated, optically thin inner ring may provide a suitable description of the close circumstellar environment of HD 141569, in agreement with existing detailed modeling of the outer disks. On the contrary, a silicate-dominated dusty inner ring, heavier by about four orders of magnitude and composed of large grains, fails to provide a satisfying description of the system, in particular in terms of flux contribution in the mid-IR spectral range. Figure 3 presents the final result of our MCMax modeling with the parameters of Table 2. We note that the strong PAH feature at 7.8 µm in HD 141569 is not seen in this model because the corresponding opacity has not been added to our models of the outer rings, unlike the models of Thi et al. (2014).

Spatial scale of the Brγ-line emitting region

We exploited the high spectral resolution data of GRAVITY in the Brγ region to constrain the spatial scale of the hot gas emitting component, following the formalism in Sect. 4.2. Conservatively, the resulting pure-line visibilities plotted in red in Fig.
2 are very close to 1 for all the baselines. Considering the error bars, we can say that the gas region is at the limit of spatially unresolved emission. We therefore propose to constrain the size of the gas emitting region by considering the gas emitting at the Brγ line wavelength peaks (i.e., 2.1654 µm and 2.1672 µm), using an infinitesimally thin ring model and estimating the upper-limit size that would exceed the error bar of ∼2% on the pure-line visibilities. In this way, we estimated a maximum radius of ∼0.35 mas (0.0385 au) for the gas emitting region. While the analysis of the visibility amplitudes only provides us with an estimate of the size scale of the gas emitting region, further information on the spatial and kinematic properties of the gas component is found in the differential phase signal. Typically, differential phases provide information on photocenter displacements along the baselines on angular scales that can surpass the nominal resolution of the interferometer. Figure 4 shows the GRAVITY pure-line differential phases. After removal of the continuum contribution (cf. Sect. 4.2), the S-shape becomes clearly visible for the baselines J3-D0 (100 m, 220°) and J3-K0 (54 m, 151°) around the Brγ line, while a weaker trend is seen for the baselines J3-G2 (62 m, 223°) and D0-G2 (38 m, 36°). The strongest signature shows an amplitude in the differential phase exceeding ∼20° for J3-K0. Since the differential phase signals for the baselines D0-K0 (95 m, 72°) and G2-K0 (68 m, 92°) are consistent with zero at all wavelengths, we fixed their pure-line differential phases to 0°. The typical uncertainties after correcting for the continuum subtraction are ∼4°. The resulting deprojected photocenter shifts per spectral channel, with the reference frame fixed to the star location, are shown in Fig.
5.We clearly observe that all points are aligned along the same direction with an angle of -10 • ± 7 • .The redshifted points are located along northwest, while the blueshifted points are located toward the southeast.Based on the redshifted maximum extent of the photocenter shifts, we estimated the radius of the gas region from the differential phases to be 0.333 ± 0.039 mas or 0.037 ± 0.004 au.This appears consistent with the less precise upper-limit size set through the analysis of the visibility amplitudes.We recall however that the size estimate derived through the photocenter shift does not correspond to the physical outer radius of the gas region, but to the size where the gas emission is more intense for a given wavelength. The distribution of the 2D photocenter solution can be interpreted to the first order as being caused by a gas disk in Keplerian rotation orbiting HD 141569.Under this assumption the analysis of the Brγ emission line's shape provides further clues to the gas kinematics.We can indeed derive an estimate of the gaseous disk's radius from the separation between the two peaks of the line and the rest position.From Beckwith & Sargent (1993) we use where R g is the radius, G is the gravitational constant, M the mass of the star, v obs the projected velocity at the line peaks, and i the disk inclination.The 128 ± 42 km/s average peaks shift of the Brγ line with respect to the line rest position leads to an outer limit for the gas region of 0.766 ± 0.554 mas in radius, or 0.084 ± 0.061 au.The resulting error accounts for the uncertainty on the stellar mass (0.01 M ), on the distance (1 pc), on the disk inclination (15 • ), and on the peak position (3 Å), the last being the dominant one. 
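As a cross-check of this order-of-magnitude estimate, the Keplerian relation above can be evaluated numerically. The following is a minimal sketch, not the paper's code: the stellar mass of ∼2 M⊙ used below is an assumed, illustrative value (the adopted value is listed in Table 2 of the paper).

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def keplerian_radius(v_obs_kms, m_star_msun, incl_deg):
    """Radius (in au) where the projected Keplerian velocity equals v_obs:
    v_obs = sqrt(G M / R) * sin(i)  =>  R = G M sin^2(i) / v_obs^2."""
    v = v_obs_kms * 1e3
    i = math.radians(incl_deg)
    return G * m_star_msun * M_SUN * math.sin(i) ** 2 / v ** 2 / AU

# v_obs = 128 km/s (average peak shift), i = 58.5 deg;
# stellar mass of 2 M_sun assumed here for illustration.
r_au = keplerian_radius(128.0, 2.0, 58.5)
```

With these inputs the function returns ≈0.08 au, consistent within the quoted 0.084 ± 0.061 au once the uncertainties on mass, distance, inclination, and peak position are taken into account.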
The three approaches presented above, based on the analysis of the visibility amplitudes, the differential phases, and the spectrum, all seem consistent with a gas component in Keplerian rotation confined within ∼0.8 mas (∼0.09 au, ∼12.9 R⋆) in radius. We compared our differential phase signals, our strongest measurable quantity, to a simple geometrical model of an axisymmetric disk in Keplerian rotation built with two thin layers that account for the top and bottom sides of the disk, parameterized by an inner radius r_in (varying from 0.008 to 0.03 au), an outer radius r_out (varying from 0.033 to 0.8 au), and a power-law exponent α (varying from 0 to 4.0) for the disk's radial intensity profile, following I(r) ∝ r^−α. The inclination of the disk was fixed to 58.5°, based on the results of the FT data analysis and under the assumption of coplanar dust-gas rings, and its position angle was fixed to −10°, based on the photocenter shift analysis. The hypothesis of an optically thin disk is made, which implies that the disk intensity does not strongly depend on the disk scale height (then fixed to H/R = 0.1 throughout the ring), but only on the surface area of the disk: I ∝ dS, where dS is an area element (for a complete description of the model, see de Valon et al. in preparation). The model delivers a spectral profile that is eventually normalized and fitted to our experimental double-peaked spectrum in Fig. 6. A grid of 2700 models has been explored, and a best-fit model is found for r_in = 0.011 au, r_out = 0.09 au, and α = 0.5. From this best-fit model, we produced 2D velocity maps (Fig. 7) and retrieved the theoretical pure-line differential phase signal (see Fig. 4). Our Keplerian disk model reproduces qualitatively well the observed pure-line differential phases, both in the orientation of the sine wave relative to the wavelength and in amplitude, reinforcing the Keplerian gas disk scenario. We also observe that the weakest signals indeed correspond to the baselines D0-K0 and G2-K0, for which the peak amplitudes are ∼3-5°. On a final note, for the best-fit solution model we tested the optically thick hypothesis by accounting for projection effects on the emissivity law, I ∝ dS · (n · e_y), where e_y is the line-of-sight unit vector and n the normal unit vector of the area element considered (see de Valon et al. in preparation). The resulting model leads to a spectrum that is similar to that derived from the optically thin model (a small difference is seen at the lowest velocities), and to pure-line differential phases and photocenter shifts consistent with those of the optically thin model.

A newly detected inner dust ring

Thanks to GRAVITY, an additional inner ring component at ∼1 au from the star, narrower than ∼0.3 au, has been discovered, which adds to the multi-ring picture identified for HD 141569. A schematic visualization of the dust and gas distribution obtained from multi-epoch and multi-instrument observations is depicted in Fig. 8. The low level of near-IR excess (∼6%) is comparable to the typical 5% accuracy of K-band photometric data (e.g., 2MASS), which led to considering the near-IR flux of HD 141569 as essentially photospheric in earlier studies. With this result, the presence of dust as close as 1 au from the star, but well beyond the sublimation radius R_sub ∼ 0.25 au, is a more robust piece of evidence. The disk inclination and position angle estimated through the GRAVITY FT data analysis are consistent within the error bars with the values found for the outer rings that make up the circumstellar environment of the system (see Table H.1), suggesting an almost coplanar system of circumstellar rings. Considering how many constraints can effectively be set on the position angle (see Fig. B.2), it does not appear that any clear misalignment between the inner and outer disks could be claimed. The inclination and position angle could be the reason why PTI and the Keck Interferometer were not able to spatially resolve the near-IR emission of HD 141569 in earlier measurements, since the alignment of their single baseline was around 42° from north to east, hence not far from the semi-minor axis of the disk. However, the upper limit of 10 mas in radius for the location of the dust proposed by Monnier et al. (2005) is in agreement with our results. They also report a fractional excess of ∼5%, compatible within the error bars with pure photospheric flux in the K band.

Our results are substantially different from those obtained by Lazareff et al. (2017) with PIONIER in the H band. The reported flux excess is ∼50% larger than in our case, and their half-flux radius for the dust emission is only 0.03 au, well inside the sublimation radius of the system, expected to be at ∼0.21-0.26 au (for L⋆ = 16.6-27.0 L⊙, T_subl ∼ 1470 K, and a cooling efficiency of ∼0.5). Lazareff et al. (2017) modeled the H-band excess emission using an ellipsoid distribution and not a ring (see their Tables B.2 and B.3). The choice of the model and the low-quality PIONIER data could potentially explain this unexpected result. The values derived for the radius and width of this new innermost dusty component allow us to compare the system with other YSOs. The analysis made by Gravity Collaboration et al. (2019) for a sample of 27 Herbig Ae/Be stars revealed dusty circumstellar environments with half-flux radii that range between 0.1 and 6 au depending on the stellar luminosity, with a median of 0.6 au, and a width-to-radius ratio w ranging from 0.1 to 1 with a median of 0.83, which is interpreted as smooth and wide rings, even though with large error bars. Our radius estimate for HD 141569 (∼0.8 au) is within the range found by these authors, but its peculiarity is also reflected in the fact that its position in the size-luminosity diagram does not coincide with the bulk of the Herbig stars, reinforcing the idea that HD 141569 is a unique system in terms of evolution. From Table B.2, the width-to-size ratio is estimated to be 0.05 ± 0.05 for the best-fit model, but it is also noticeable from Fig. B.2 that the ring's width is difficult to clearly constrain in the region below 0.3 au. This implies that the width-to-size ratio could be seven times larger. Therefore, our estimate of the width-to-size ratio lies at the lower end of the range found in Gravity Collaboration et al. (2019), but remains comparable to systems such as HD 114981 (0.10 ± 0.03) or HD 190073 (0.14 ± 0.03). In addition to the disk's well-known flux-size degeneracy, which in our case is broken thanks to the constant plateau in the squared-visibility curve as a function of spatial frequency, Lazareff et al.
(2017) found a negative correlation between the ring width and its half-flux radius, meaning that it is more difficult to detect with the VLTI baselines small-radius ring-like structures than small-radius ellipsoid structures for angular sizes of the K-band emission of ∼1 mas or smaller. In our case the size is well constrained, being beyond the suggested limit, which is one argument favoring the robustness of our modeling, as opposed to the model of a Gaussian brightness distribution that did not converge to a solution.

To further test our model findings, we first tried to add the contribution of a fully resolved emission (also known as a halo) to our geometrically thin ring model, following Eq. 4 of Lazareff et al. (2017). Our best-fit solution (χ²_red = 4.69 ± 0.32) led to results similar to our geometrically thin ring model, with a halo contribution that actually converges toward zero (0.103 (+0.144/−0.075) %). Interestingly, this is in full agreement with Lazareff et al. (2017), who found a null halo flux contribution as well. Second, we also tried a Lorentzian-convolved infinitesimally thin ring (see Table 5 of Lazareff et al. 2017), which leads to practically the same solution (χ²_red = 4.68 ± 0.32). We conclude from this detailed analysis that the narrow ring-like shape with a width smaller than 0.3 au is a good description of our observations.

Nature and origin of the detected ring

We advance in Section 5.2 the scenario of a population of stochastically heated particles (e.g., PAH-like very small grains) as the cause for the near-IR excess. A number of arguments can be discussed in this context. Our best-fit chromatic model shows a spectral index k_c = −0.35 ± 0.21 for the circumstellar emission. Following Lazareff et al. (2017) and Gravity Collaboration et al. (2019), we estimate from the parameter k_c a temperature of the radiating dust under the gray-body hypothesis (i.e., wavelength-independent emissivity) and find T_c = 1460 ± 70 K. In the case of a silicate dust ring in thermal equilibrium at ∼0.8 au, we would expect a cooler temperature of T_c ∼ 650-850 K, using the stellar parameters of Table 2 and a cooling efficiency between 0.3 and 1. Therefore, we argue that the near-IR emission is not dominated by emission of dust in thermal equilibrium, which can be explained by the presence of these small particles that are quantum heated by the stellar UV radiation. Even though the spectral index is found to be not very well constrained (see Fig. B.2), the spectral index corresponding to dust in thermal equilibrium at 800 K would be around +3.4 at λ_0, relatively far from our best-fit model. Maaskant et al. (2014) interpret the high intensity ratio of the PAH band at 6.2 µm to the band at 11.3 µm as a tracer of predominantly ionized PAH species located in a disk's gap and exposed to the intense ionizing UV radiation field of the central star. For instance, with an I_6.2/I_11.3 feature peak ratio of ∼3-4 (Seok & Li 2017), the PAH sources IRS 48 and HD 179218 present emission from such predominantly ionized PAHs located in part inside the gap or disk cavity (Maaskant et al. 2014; Klarmann et al. 2017; Kluska et al. 2018; Taha et al. 2018). Interestingly, the I_6.2/I_11.3 peak ratio of HD 141569 derived from Seok & Li (2017) is high as well, estimated to ∼5-6. This may indicate the presence of PAH species close to the star, with a predominantly ionized state due to the direct irradiation by the UV stellar flux, bringing further support to our QHP-dominated inner ring model. Comparing the IRS spectrum to our model, we find that our model accounts for 28%, 25%, and 4% of the observed PAH peak emission for the features at 6 µm, 8 µm, and 11 µm, respectively. The remaining emission would come from PAHs located in the outer rings. Comparing the outer-ring PAH mass reservoir estimated by Thi et al. (2014) to that of our innermost ring model, we find that our estimate (4.3 × 10⁻¹⁴ M⊙) is smaller than their ∼15 au and ∼300 au rings by three orders of magnitude (2.0 × 10⁻¹¹ M⊙ and 2.1 × 10⁻¹¹ M⊙, respectively), and by four orders of magnitude with respect to their ∼185 au ring (1.2 × 10⁻¹⁰ M⊙) and the entire outer disk environment (1.6 × 10⁻¹⁰ M⊙). We recall however that in their model Thi et al. (2014) do not account for any dust located at ∼1 au and suggest a 5 au dust-free inner gap, which could result in overestimated values on their side.

Regarding the origin of the ring structure, the case of HD 141569 is particularly interesting under the aspect of our proposed QHP-dominated inner component: one could question the presence of QHPs in an inner narrow ring, since this kind of particle is expected to be coupled to the gas component and, to date, has mostly been invoked in more extended emission (e.g., Klarmann et al. 2017; Kluska et al. 2018). Several authors have detected CO emission beyond ∼10 au (see Introduction), whereas little information is available on the presence of CO within the first 10 au. It is likely however that this inner region is not gas depleted. The [O I] λ6300 emission detected by Acke et al. (2005), and suggested by these authors to be a dissociation product of OH in the circumstellar disk, would originate between ∼0.05 and 0.8 au under the assumption of a gas disk in Keplerian motion (Brittain et al. 2007). Hydrogen recombination and sodium lines were also detected (van der Plas et al. 2015). Both Mendigutía et al. (2017) and our results (see Section 6.3) find excited atomic hydrogen in the dust-free cavity, which implies the existence of a replenishment mechanism from the outer regions. Moreover, Brittain et al. (2007) set an upper limit on the column density of CO inside 6 au of N(CO) < 10¹⁵ cm⁻², which translates into a gas mass < 5.9 × 10⁻¹³ M⊙. Considering our QHP mass (4.3 × 10⁻¹⁴ M⊙), a gas-to-dust ratio of 100, and a [CO/H₂] ratio of 10⁻⁴, this would translate into a CO mass of 4.3 × 10⁻¹⁶ M⊙, which is well below the upper limit and therefore not yet detected. These arguments suggest that a gaseous component may exist and coincide with the proposed QHP component, to which it would be coupled. Furthermore, even though QHPs have been invoked in more extended emission in past works, we cannot exclude the possibility of more compact or narrow components. For example, Khalafinejad et al. (2016) modeled the inner circumstellar region of HD 100453 through an optically thin spherical halo extending from 0.1 to 1.7 au in order to explain its near-IR flux and to fit simultaneously its Q-band flux. The choice of the spherical halo was based on the fact that their data poorly constrained the structure of the inner disk (and so is the halo extension estimate), and the optically thin hypothesis was set in order not to affect the Q-band flux modeling. This component was also suggested by Klarmann et al.
(2017). Their QHP model for HD 100453 underestimates the observed flux in the 1-5 µm wavelength range by up to 30%, and slightly overestimates the long-baseline visibility data, indicating that the missing flux is emitted on short spatial scales. Results closer to those we obtained for HD 141569 were found by Maaskant et al. (2013). These authors suggest a compact, optically thin spherical halo for HD 169142 (0.1-0.2 au), HD 135344 B, and Oph IRS 48 (0.1-0.3 au) to reproduce the observed near-IR flux. Several scenarios have been proposed to explain the different structures that protoplanetary disks exhibit, such as gaps, spirals, or rings. Fragmentation of wide rings into narrow ones by secular gravitational instability (e.g., Tominaga et al. 2020), self-induced pileup of particles by aerodynamical feedback (e.g., Gonzalez et al. 2017), and dust traps at local maxima in the gas density due to a reversal of the pressure gradient by dynamical clearing from a companion (e.g., Pinilla et al. 2012) could explain a structured nature of the disk. Our observations leave this matter as an open question, since GRAVITY informs us solely on the spatial properties of the detected K-band continuum emission.

HD 141569 Brγ-line emitting gas region

Our analysis of the kinematic and spatial distribution (via the differential phase) of the hot hydrogen gas is in line with a scenario of a Keplerian disk inside the dust-free cavity. The distribution of the photocenter shifts shown in Fig. 5 agrees well with the behavior expected from a Keplerian disk (Mendigutía et al. 2015). The position angle of the photocenter shift distribution (−10° ± 7°, north to east) is also found to be in overall agreement with the position angle of the inner ring responsible for the near-IR excess, and of the outer rings. Moreover, the photocenters of the Brγ line emitting gas region are located as the photocenters of the outer CO regions, the blueshifted ones toward the southeast and the redshifted ones toward the northwest (White et al. 2016). The profiles of the pure-line differential phase signals depart a bit from the perfect S-shaped signal expected for a pure Keplerian disk. We believe that this is also ultimately limited by our spectral calibration. In this sense, more accurate measurements of the differential phases in HD 141569 using GRAVITY with the 8 m Unit Telescopes could certainly improve the accuracy of this analysis. In order to evaluate the quality of our spectral data, we chose to compare the GRAVITY profile measured with the ATs to other high-quality spectra obtained with the ISAAC spectrograph at the VLT (Garcia Lopez et al. 2006), the NIRSPEC echelle spectrograph at the Keck Observatory (Brittain et al. 2007), and with SINFONI/VLT from archival data. The comparison is shown in Fig. 9, and we observe that the GRAVITY spectrum exhibits a mild asymmetry between the blue and red peaks. This would suggest that our spectrum could still be affected by some calibration effects, either telluric or instrumental. We then further explored how far our resulting differential phases might be impacted by the slight spectrum asymmetry and tested the derivation of the pure-line differential phases using the SINFONI spectrum, which has a very similar spectral resolution, instead of the GRAVITY spectrum. We found that the 2D distribution of the photocenter shifts remains unchanged within the error bars reported in Fig. 5. The star is known to be a fast rotator (222.0 ± 7.0 km/s, Folsom et al. 2012), which results in a small co-rotation radius of around 2.38 ± 0.53 R⊙ (0.011 ± 0.002 au), assuming R⋆ = 1.5 ± 0.5 R⊙ (Fairlamb et al. 2015). We cannot exclude that part of the Brγ line emission comes from magnetospheric accretion flows, but the small co-rotation radius compared to the size of the Brγ line emitting region estimated from the SC data analysis (∼0.09 au) would not favor this scenario, as opposed to what has recently been found for TW Hya (Gravity Collaboration et al. 2020b). Interestingly, the scenario of magnetospheric accretion was also tested by Mendigutía et al. (2017) to explain the Hα double-peaked emission line, but they were not able to reproduce the observed profile with any set of input parameters. Comparing the extent of the Brγ emission to the continuum emission (R_Brγ/R_cont ≈ 0.1), we find that the case of HD 141569 is in contrast with the findings of Kraus et al. (2008). These authors found, for a small sample (5 objects) of Herbig Ae/Be stars, that those showing a P Cygni Hα line profile and a high mass-accretion rate (> 10⁻⁷ M⊙ yr⁻¹) seem to show compact Brγ-emitting regions (R_Brγ/R_cont < 0.2), for which the emission stems from magnetospheric accretion or recombination-line emission from ionized hydrogen, while stars showing a double-peaked or single-peaked Hα line profile show a more extended Brγ-emitting region (0.6 ≤ R_Brγ/R_cont ≤ 1.4), which would trace a stellar or disk wind. Our system shows mixed features: a Brγ and Hα double-peaked emission line that originates from a compact disk in Keplerian rotation, where magnetospheric accretion is not the most likely main emission mechanism. Therefore, recombination-line emission from ionized hydrogen in an inner gaseous accretion disk, as hinted at by GRAVITY, is a better-supported scenario. If that is the case, considering the age of the system and the reported accretion rates between 10⁻⁷ and 10⁻¹¹ M⊙ yr⁻¹ (Merín et al. 2004; Garcia Lopez et al. 2006; Mendigutía et al. 2011a; Thi et al. 2014; Fairlamb et al. 2015), the inner gaseous disk requires some sort of replenishment mechanism to explain its presence and for it to survive. Replenishment flows, planet-boosted or not (Mendigutía et al. 2017), connecting the inner and outer disk, as already observed in other Herbig stars like HD 142527 (Casassus et al. 2013), could be investigated in the future.

Hybrid or debris disk?

As mentioned in the introduction, HD 141569 is the only known pre-main sequence star characterized by a hybrid disk. The main characteristic of a hybrid disk is the weak fractional excess of IR emission (8.4 × 10⁻³ for HD 141569, Sylvester et al. 1996), which stems from the optically thin second-generation dust component, coupled with the presence of a significant gaseous component, believed to be primordial. Unfortunately, the other known best hybrid disk system candidates, which are 49 Cet, HD 21997 (Moór et al. 2011), HD 131835 (Moór et al. 2015), HD 121617, HD 131488 (Moór et al. 2017), and HD 32297 (Moór et al. 2019), have not been studied in their hot dust content. As HD 141569 does, they show a featureless SED in the near-IR, suggesting systems depleted of material inside the first 5 au. However, as we saw in this work, interferometric observations can reveal as-yet-undetected dust, so this scenario should not be excluded for the other mentioned objects. The potential of long-baseline interferometry to detect small levels of hot circumstellar dust emission in supposedly dust-free systems has indeed been exploited for older main-sequence stars (Ertel et al. 2014).

GRAVITY Collaboration: V. Ganci et al.: The GRAVITY Young Stellar Object survey

In this context it is interesting to compare our results for HD 141569 with Vega, the most iconic debris disk system, with a similar spectral type (A0V) but significantly older than our source (400-700 Myr). Near-IR excess from Vega was detected and constrained to ∼1.29 ± 0.19% by interferometric observations with CHARA/FLUOR in the K band (Absil et al. 2006). These authors suggest from SED modeling that the excess comes from hot small grains starting at ∼0.2-0.3 au, with a total dust mass of 8 × 10⁻⁸ M⊕. Knowing the age of Vega, it is clear that its circumstellar dust is of second generation, a characteristic of debris disks. In the scenario of a silicate-dominated inner ring for HD 141569, the total mass required to induce a larger near-IR excess is, as expected, significantly larger (> 10⁻⁴ M⊕). Considering the younger age of HD 141569 (∼7.2 Myr), a more massive inner disk is compatible with a system at an earlier stage of disk evolution. Since the timescale for disk dissipation is known to be about 5-10 Myr (Wyatt 2008), it is plausible that part of the dust in the inner region of HD 141569 is of first generation and a remnant of the primordial circumstellar environment. This would confirm that HD 141569 is closer to a system in the final stage of the protoplanetary disk phase than to a debris disk system.

The fact that gas, both H and CO, is detected in HD 141569 is a factor in favor of a system in the (late) protoplanetary disk stage rather than in the debris disk stage. The system, similarly to other hybrid disks, does not follow the correlation between the CO flux density and the millimeter continuum emission followed by T Tauri, Herbig Ae, and debris disks, but lies systematically above the correlation line (Péricaud et al. 2017). The authors suggest that the dust and gas evolution are decoupled, with the dust evolving faster than the gas, leading to an unusually high gas-to-dust ratio (between 135 and 2370 for HD 141569, Di Folco et al.
2020). Other than the primordial-origin scenario, a secondary origin of CO was proposed (Kral et al. 2019), in which the gas is self-shielded and shielded by accumulated neutral carbon produced through photodissociation of molecular gas released by planetesimals. This model is able to explain the estimated CO masses in all the hybrid disk candidates (Kral et al. 2019; Moór et al. 2019) except HD 141569, for which it was not tested. Since CO molecule photodissociation occurs at UV wavelengths, we note that C⁰ could shield these molecules, and so could QHPs (Woitke et al. 2016), which are known to absorb UV photons and cool down very quickly by emitting photons in the near-IR. The newly detected innermost ring, which we propose in this work to be dominated by a small amount of QHPs, could contribute to the shielding process.

Summary

We presented the first GRAVITY interferometric observations of HD 141569. Here we summarize the main conclusions of our work:
- The system was resolved by GRAVITY, with squared visibilities down to V² ∼ 0.8. If before these observations the near-IR flux contribution of a dust disk was considered absent because no feature was seen in the SED of the object, now, thanks to interferometry, the presence of dust in the first au of the system is a more robust piece of evidence, and the flux excess is clearly detected and constrained to ∼6% of the total flux. Large silicate grain models, with and without carbon, can reproduce the 6% flux excess at 2.15 µm, but at the same time they show a significant emission in the mid-IR that is not consistent with the SED of the system.
- The SC data analysis confirms the significant amount of Brγ line emitting gas already observed in the past. The gas region is spectrally resolved, but spatially unresolved.
- The pure-line differential phases constrain the gas to be in a Keplerian disk-like structure, as hinted by the double-peaked line shape, confined within ∼0.09 au (∼12.9 R⋆) and oriented in the same way as the outer rings (PA_NE ∼ −10°).
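The first summary point (a faint circumstellar ring slightly lowering the squared visibilities of an otherwise unresolved star) can be illustrated with a toy star-plus-thin-ring visibility model. The following is a minimal sketch, not the paper's fitting code: the ring angular radius of 3 mas and the exact baseline used below are assumed, illustrative values, and only the ∼6% flux excess is taken from the text.

```python
import math

def bessel_j0(x, n=2000):
    """J0 via its integral representation, J0(x) = (1/pi) * int_0^pi cos(x sin t) dt,
    evaluated with the trapezoidal rule (pure stdlib, no SciPy needed)."""
    h = math.pi / n
    s = 0.5 * (math.cos(0.0) + math.cos(x * math.sin(math.pi)))
    for k in range(1, n):
        s += math.cos(x * math.sin(k * h))
    return s * h / math.pi

MAS = math.pi / 180 / 3600 / 1000  # 1 milliarcsecond in radians

def v2_star_plus_ring(baseline_m, wavelength_m, ring_radius_mas, excess):
    """Squared visibility of an unresolved star plus an infinitesimally thin
    ring (visibility J0) carrying a fraction `excess` of the total flux."""
    x = 2 * math.pi * ring_radius_mas * MAS * baseline_m / wavelength_m
    v = (1 - excess) + excess * bessel_j0(x)
    return v * v

# Illustrative values: 100 m baseline, K band (2.2 um), 6% excess,
# ring radius of 3 mas (assumed for the sketch).
v2 = v2_star_plus_ring(100.0, 2.2e-6, 3.0, 0.06)
```

Even a few-percent excess carried by a resolved ring pulls V² visibly below unity at long baselines, while V² returns to ∼1 at short baselines, which is the qualitative signature behind the plateau-based size constraint discussed above.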
These results confirm the complexity of the HD 141569 circumstellar environment also at the milliarcsecond scale, making the system a unique astronomical laboratory to investigate the missing steps of disk evolution and planet formation theories.

The total-visibility three-component model used to fit the SC visibilities accounts for the contributions from the star, the circumstellar dust, and the line emitting gas. The total visibility as a function of wavelength is given by

V_Tot(u, v, λ) = [α(λ) F_s(λ) V_s(u, v, λ) + F_c(λ) V_c(u, v, λ) + F_L(λ) V_L(u, v, λ)] / [α(λ) F_s(λ) + F_c(λ) + F_L(λ)]. (D.1)

The parameter α(λ) is the star's continuum-normalized photospheric absorption, which implies that α(λ) = 1 outside the Brγ line and α(λ) < 1 inside the line. This parameter is estimated from the Vienna New Model Grid of Stellar Atmospheres, fitting the stellar parameters of HD 141569, as described in Section C. F_s(λ), F_c(λ), and F_L(λ) are the wavelength-dependent fluxes of, respectively, the stellar continuum (i.e., outside the line and temperature-dependent), the dust ring continuum, and the Brγ line emitting gas. F_L(λ) varies across the line and vanishes to zero outside the line. V_s(λ), V_c(λ), and V_L(λ) are the intrinsic visibility functions of each of the three components taken individually. From now on, the star is considered unresolved; therefore V_s is equal to 1, and we drop the explicit parameter dependencies for convenience. Outside the Brγ line (i.e., in the continuum region), the total visibility is given by

V_Tot^Cont = (F_s + F_c V_c) / (F_s + F_c). (D.2)

Using Eq. D.2 to replace V_c, we can rewrite Eq. D.1 as

V_Tot = [α F_s + (F_s + F_c) V_Tot^Cont − F_s + F_L V_L] / (α F_s + F_c + F_L), (D.3)

and using the definition of the line-to-continuum flux ratio (Eq. 13), Eq. D.3 becomes Eq. D.4, where F_Tot = α F_s + F_c + F_L. We note once again that F_L/C is the raw line-to-continuum ratio including the photospheric absorption. This quantity corresponds to the top left spectrum in Fig. 2. We also note that outside the line (i.e., α = 1 and F_L = 0), Eq. D.4 simplifies to Eq. D.2. Now, making use of the parameter β of Eq. 14, we can rewrite the continuum fluxes in terms of β (Eq. D.5), and Eq. D.4 finally becomes Eq. D.6. Solving Eq. D.6 for V_L, and noting that from our data V_Tot = V_Tot^Cont since the SC visibilities are spectrally flat for all wavelengths and baselines, we obtain Eq. 12. This equation tells us that the pure-line visibility can be estimated from the total visibility in the line (which in our case is comparable to the total visibility in the continuum) if the photospheric absorption profile can be estimated, the continuum disk-to-star flux ratio β is known, and the continuum-normalized spectrum of the line is accessible. We note, in the case where V_Tot = V_Tot^Cont, that the absence of photospheric absorption (i.e., α = 1) leads simply to V_L = V_Tot^Cont = V_Tot. Finally, we see that the pure-line visibility V_L is lower than 1 only when F_L/C is greater than 1, which is equivalent to detecting the line above the continuum.

Notes. Data without references are from Merín et al. (2004).

Fig. 1. HD 141569 FT data: squared visibilities (left panel), closure phases (central panel), and U-V plane coverage (right panel), from all the observation epochs. Colors refer to the different GRAVITY spectral channels.

Fig. 2. Science channel data of July 2019. Top left: wavelength-calibrated continuum-normalized spectrum, corrected for telluric lines. Top right: same as top left, but corrected for the Brγ photospheric absorption. Differential phases (left column) and visibilities (right column) along the six GRAVITY baselines. The red lines in the visibility plots show the pure-line visibilities.

The details of the QHP model parameters are shown in Table 2. In Fig.
In Fig. F.1 we show the density and temperature structure of the full-disk model. For the silicate dust in the outer second (∼15 au), third (∼185 au), and fourth ring (∼300 au), the equilibrium temperature varies between 125 K at ∼5 au and 20 K in the outermost disk regions. The QHPs used to model the innermost optically thin ring are not in thermal equilibrium. Their temperature distribution depends on their size (the smaller, the hotter) and on the strength of the local ultraviolet (UV) radiation field: they absorb UV photons and quickly re-emit in the near-IR, changing their temperature drastically and very fast. The dark red color in Fig. F.1 shows the location of the QHPs, and according to our RT simulations their temperatures are in the range between 1865 K and 95 K.

Fig. 3. MCMax SED (black line) of the model described in Table 2. The yellow circles are HD 141569 photometric data listed in Table G.1. The blue line represents the star SED modeled as a black body. The cyan line represents the QHPs SED and the red line the silicates SED. The magenta line is the total disk SED accounting for both QHP and silicate emission. The top right plot focuses on the first 10 µm wavelength range of the SED.

Fig. 4. HD 141569 spectrum (top), total (black circles and line), and pure-line differential phases (colored signs) along the different baselines. D0-K0 and G2-K0 baseline pure-line differential phases are set to zero. The colors refer to the different spectral channels. Dashed magenta lines represent the pure-line differential phases of the analytical Keplerian disk model described in Section 5.3.

Fig. 5. Deprojected photocenter shifts. The colors refer to the different spectral channels and velocities, as shown in Fig. 4. The dashed black line, derived through a linear fit of the photocenter shifts, represents the gas region position angle.

Fig. 6. HD 141569 GRAVITY spectrum (black line), and the spectra of the Keplerian disk models described in Section 5.3. The blue line represents the model in the optically thin scenario with residuals given by the red dashed line, while the orange line represents the model in the optically thick scenario with residuals given by the green dashed line.

Fig. 7. Monochromatic images (from 2.16452 to 2.16852 µm, i.e., from 221 to 332 km/s) of the Keplerian ring model described in Section 5.3. The colored dashed lines refer to the different GRAVITY baselines. Circles represent the 2D photocenter shift for each baseline.

Fig. 8. Visualization of dust and gas distribution in HD 141569 (adapted from Di Folco et al. 2020). Shown in light orange are the optical-IR dust rings detected in scattered light by HST (e.g., Augereau et al. 1999; Clampin et al. 2003; Konishi et al. 2016). Overplotted in green are the three near-IR dust ringlets detected by VLT/SPHERE (Perrot et al. 2016). The extended millimeter continuum emission detected by ALMA (Miley et al. 2018) and NOEMA (Di Folco et al. 2020) is shown in orange. The large brown ring represents the mid-IR continuum emission detected by VLT-VISIR and modeled by Thi et al. (2014). In dark red is shown the near-IR dust emission detected by GRAVITY and studied in this work. In the same color is the Brγ line emitting gas region detected by GRAVITY, also analyzed in this work. In blue is depicted the Hα line region based on the upper-limit size estimated by Mendigutía et al. (2017). Finally, in purple is shown the CO gas region whose emissions were detected by ALMA (e.g., White et al. 2016; Miley et al. 2018) and NOEMA (Di Folco et al. 2020).

Fig. 9. HD 141569 continuum-normalized spectrum taken at different epochs and with different instruments. In blue is depicted the data taken on July 12, 2019, by GRAVITY; in red data taken in June 2019 by SINFONI; in green data taken in 2002 by the Keck NIRSPEC (Brittain et al. 2007); and in cyan data taken in 2004 by the VLT/ISAAC (Garcia Lopez et al. 2006).

Fig. C.1. Atmospheric transmission functions derived from calibrator spectra: HD 137006 (top), HD 149789 (center), and HD 157029 (bottom). The last plot is the average transmission function of the three spectra.

Table 1. FT squared visibility best-fit solution for the geometrically thin ring model and the Gaussian-convolved ring model.

Table 2. List of parameters relevant to the RT modeling of the HD 141569 disk. This model only describes the case of a QHP-dominated inner ring.

- ... 2% of the total flux.
- Data modeling suggests that the dust is located in a thin ring (≲0.3 au in width) at a radius of ∼1 au from the star. The ring shares, within the errors of Fig. B.2, the same inclination (∼58°) and position angle (∼0°) as the outer rings observed in the past.
- MCMax SED modeling suggests that this innermost ring could be made of a small amount (1.4 × 10⁻⁸ M⊕) of QHPs.

Table H.1. Inclination and position angle of the HD 141569 outer rings.
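The quoted grain temperatures can be loosely sanity-checked against the emission bands with Wien's displacement law. This is only an order-of-magnitude illustration, and not part of the paper's RT modeling, since as noted above the stochastically heated QHPs are not blackbodies in thermal equilibrium:

```python
# Wien's displacement law: peak wavelength of a blackbody at temperature T.
# Illustrative only -- the stochastically heated QHPs are not true blackbodies.
WIEN_B_UM_K = 2897.8  # Wien displacement constant, in micron * kelvin

def peak_wavelength_um(t_kelvin):
    """Blackbody peak emission wavelength in microns."""
    return WIEN_B_UM_K / t_kelvin

# Hottest QHPs (~1865 K) would peak near 1.6 um, i.e., in the near-IR,
# consistent with a K-band (GRAVITY) detection; the coolest (~95 K)
# would peak beyond 30 um, in the far-IR.
print(round(peak_wavelength_um(1865), 2))
```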
An Induced Successful Performance Enhances Student Self-Efficacy and Boosts Academic Achievement

A growing body of correlational research finds a relationship between self-efficacy (beliefs in one's capabilities) and academic success. But few studies have investigated whether self-efficacy is causally related to academic success. We hypothesized that an experience of success would promote self-efficacy in junior high school students and would lead to academic improvements. To induce an experience of success, we secretly presented easy anagrams to target students (41 males and 43 females; 12-13 years old) who then outperformed their classmates (116 males and 115 females). We assessed students' self-efficacy and academic achievement scores before and after the anagram tasks. We found that the success-induced students raised their self-efficacy, and this elevated self-efficacy persisted for as long as one year. Moreover, success-induced males eventually showed significant improvement in their academic achievement. These results provide a real-world experimental enactment of Bandura's self-efficacy theory and have implications for the practices of educational practitioners.

Introduction

Believing in our ability to succeed matters. Students who more strongly endorse these beliefs of self-efficacy are better able to monitor their activities, adopt proximal goals, select well-tuned strategies, and motivate themselves (Bandura, 1977, 1986, 1997). Bandura (1997) defined self-efficacy as "beliefs in one's capabilities to organize and execute the courses of action required to produce given attainments" (p. 3). It is little wonder, then, that many educational psychologists have investigated the role of self-efficacy in learning (see Pajares & Schunk, 2001; and van Dinther, Dochy, & Segers, 2011, for a review). One overarching question drives much of this research: To what extent does the strengthening of self-efficacy improve academic achievement?
However, the findings from these studies are limited due to their correlational nature. Though valuable, these findings do not demonstrate a causal relationship, wherein stronger self-efficacy produces greater academic achievement. According to the research standards set by the What Works Clearinghouse (Institute for Education Sciences, 2014), only experimental research using a randomized controlled trial (RCT) procedure provides evidence that an educational treatment causes academic improvement. This potential causal relationship has been described as a "chicken-and-egg" problem, yet also one that is not contentious because of the reciprocal influence between motivation and behavior (Pajares & Schunk, 2001; see also Bandura, 1986). We disagree. Understanding causality is crucial, in part because it has implications for educational practitioners' practices. The first author of this article, for example, has been a schoolteacher for nearly 30 years and notes that many of his colleagues believe intuitively that high student self-efficacy evokes desirable learning behaviors and boosts academic achievement. These teachers might be motivated to encourage students to boost their self-efficacy. But the teachers' efforts are justified only if their intuition is correct. What is the experimental evidence, then, for a causal relationship between self-efficacy and academic achievement? Surprisingly, the answer at present seems to be: little to none. We searched the literature and identified only a handful of studies that have used various manipulations to successfully alter self-efficacy. But in these studies, changes to self-efficacy do not appear to reliably affect academic achievement (Bouffard-Bouchard, 1990; Jacobs, Prentice-Dunn, & Rogers, 1984; Litt, 1988; Prussia & Kinicki, 1996; Weinberg, Gould, & Jackson, 1979; see also Bandura, 1997, pp. 58-59).
In one of the most cited of these experiments, for example, college students were given positive or negative feedback about their performance on a verbal concept formation task, irrespective of their actual performance. As expected, this feedback changed the students' self-efficacy: They believed more strongly in their ability to succeed on an upcoming task. But the results also showed that this belief change was unwarranted: The students performed no better than each other on later tasks (Bouffard-Bouchard, 1990). Other studies have shown that a variety of manipulations (rewards, goal setting, modeling, feedback, task strategies, self-monitoring, self-evaluation, and assessment) enhance students' self-efficacy (Schunk, 1982; Schunk, Hanson, & Cox, 1987; Schunk & Swartz, 1993; van Dinther et al., 2011). In a recent example, researchers examined how the coursework for pre-service primary teachers influenced their classroom management self-efficacy. Undergraduates in a four-year teacher education program for primary education learned various teaching skills, including classroom management strategies. This coursework elevated their self-efficacy for classroom management (O'Neill, 2016). Unfortunately, this study illustrates the "chicken-and-egg" problem in investigations of self-efficacy as a cause of learning: Behavioral changes (learning of effective task-specific strategies) were first necessary to induce improved self-efficacy. The conclusion that improved self-efficacy is a result rather than a cause is unlikely to be attractive to most teachers, who already build student self-efficacy when teaching new knowledge and skills. If these efforts do not cause improvements in academic achievement, the teachers' efforts may be better spent elsewhere. Mori and Uchida (2009) created a unique procedure to test the extent to which improved self-efficacy promotes academic achievement.
In their procedure, they used equipment consisting of two distinct images projected on a single screen. Each projected image is viewable only with an appropriate pair of polarizing glasses. They used this equipment to secretly present two different series of anagram tasks to students, such that one group saw easier anagram tasks than their classmates. This "easy" group solved more anagrams and as a consequence reported greater self-efficacy, measured as how well they believed they could perform on the anagram task. This procedure has a number of experimental and practical strengths. First, students can be randomly assigned to conditions. Second, it boosts self-efficacy directly, without relying on prior training of a separate skill. Third, it can be used easily in classroom settings. Unfortunately, the Mori and Uchida (2009) experiment included only 24 target participants. This small sample size made it difficult to determine the key effect of interest (changes in academic achievement) with any degree of precision. The present research solves this problem by replicating the study with a larger sample. We conducted an experiment using three annual cohorts (comprising six classes each year) from the seventh grade of a junior high school, for a total of 315 participants. We hypothesized that an induced successful performance would promote students' self-efficacy and, ultimately, their academic performance. We registered this study on the Open Science Framework's (OSF) website (registration ID: 10.17605/OSF.IO/54WM7) as a replication study with a larger sample: https://osf.io/54wm7/.

Participants

We recruited seventh-grade junior high school students from six classes each year for three years from a municipal school in Japan, giving us an initial pool of 656 students (approximately 220 students each year).
Twenty-five of these students were absent from the pre-assessment of self-efficacy and were therefore not part of the study, reducing the initial sample to 631 (335 males and 296 females). The socioeconomic status of the students' families varied within a narrow middle-class range. All students were Japanese natives. The students ranged in age from 12 to 13 years old. A small number of students were absent on the day of the anagram task (n = 9). Because the anagram task was crucial, we excluded these students. In addition, some students were absent for one or more of the repeated assessment periods during the study (n = 92). We excluded these students too. In an effort to avoid floor and ceiling effects, only those students who were within the 26-75 percentile range of scholastic achievement were assigned to our experimental conditions (n = 315; n = 267 after the exclusions listed above). The remaining 307 non-experimental students participated in the anagram task, but only to ensure consistency of classroom activity; they were not considered part of the experiment proper. For transparency and clarity, we have prepared an anonymized raw data file. This file is available on the OSF at https://osf.io/cp8uh/.

Experimental Design

We used a factorial design with two between-subjects factors: treatment group (success, control) and gender (male, female). We included gender in the design because previous literature sometimes finds gender differences in self-efficacy. One study, for example, found that females reported higher self-efficacy in languages and arts than males, while males reported higher self-efficacy than females in mathematics and sciences (Huang, 2013). Another study found that females reported lower self-efficacy than males for a computerized science education task (Nietfeld, Shores, & Hoffmann, 2014).
However, other studies have found no gender differences (Caprara et al., 2008; Caprara, Vecchione, Alessandri, Gerbino, & Barbaranelli, 2011; Jacob, Lanza, Osgood, Eccles, & Wigfield, 2002; Murayama et al., 2013).

Success and Control Students. We randomly selected four to six students in each of the six seventh-grade classes in each of the three year-cohorts as targets to experience success. The remaining students formed the control group. This sampling procedure produced a total of 84 success students (41 males, 43 females) and 231 control students (116 males, 115 females). We chose this sample size to achieve a statistical power of .8 for detecting a small to medium difference (d = .4) between the academic performance of the two groups (Cohen, 1988). The experiment required that only a small number of students in each class experience "success" in order to seem impressive and promote self-efficacy. We therefore limited the number of successful students in each class to between four and six students. Accordingly, there were fewer students in the experimental condition than in the control condition. Because of the nested nature of the sampling procedure, we ran an ANOVA on pre-experimental achievement scores across the 18 classes to examine the influence of class cohorts. We found no meaningful differences, F(17, 638) = .78. We also ran an ANOVA on anagram task scores and found no meaningful differences, F(17, 614) = .56.

Dependent Variables

We repeatedly assessed two dependent variables: academic achievement and self-efficacy. We operationalized academic achievement as the scores from officially administered school examinations. We operationalized self-efficacy as students' self-reports of their ability to complete the anagram task. Details of these assessment procedures are as follows.

Academic Achievement. The junior high school provided us with Z-scores of students' scholastic achievement.
These Z-scores are commonly used in Japanese junior high schools. The scores are standardized and converted such that the mean of the distribution becomes 50 and the standard deviation 10 (Mori & Uchida, 2012). The Z-scores were calculated from the combined scores of term examinations in five major school subjects: Japanese language, social studies, mathematics, natural sciences, and English language. We obtained these Z-scores at six of the school's assessment periods: prior to the experiment, and then two, five, 10, 14, and 17 months afterward.

Self-Efficacy. We defined self-efficacy procedurally in this study as a student's rating in response to this specific question: "How well can you perform in the letter rearrangement game?" Students indicated their answer on a five-point scale, ranging from 1 (very badly) to 5 (very well). We assessed self-efficacy eight times (pre-test, post-test, and at six follow-ups). The self-efficacy question was printed on a sheet mixed with other filler questions to mask the experiment's purpose. As a cover story for administering the questionnaire repeatedly, we told students we were regularly assessing their study habits. The same self-efficacy questionnaire was used in each assessment.

Experimental Procedure

Anagram Tasks. The anagram task was a one-time experience for each student. We ran student participants in class groups. Homeroom teachers led their class (approximately 35-40 students) to a room specially set up for the experiment at the junior high school. We arranged the seats in the room in front of a rear projection screen (80 cm × 80 cm). Students sat in the same configuration as they would in their ordinary classroom (see Figure 1). We prepared two types of polarizing sunglasses beforehand: four to six pairs of one type for the success students and the remaining pairs of the other type for the rest.
We placed a pair of polarizing sunglasses on each seat, but only the success students wore the special polarizing sunglasses that let only them view the easier anagram tasks. To the students, all the sunglasses looked identical. As a cover story, we told the students that the sunglasses were to eliminate glare from the rear projection apparatus. After the students sat down and put on the sunglasses, the experimenter gave general instructions. Then, he handed an answer sheet to each participant. Next, he projected 30 anagram tasks one by one using a PowerPoint slide show on an Apple iBook. Each of the 30 anagram tasks consisted of five Japanese hiragana characters. We arranged 10 of these tasks to have two levels of difficulty in accord with the student's condition (e.g., students in the success condition saw the relatively easy "DRAEM," while subjects in the control condition saw the relatively difficult "MAEDR," both of which can be rearranged to "DREAM"). The remaining 20 anagram tasks had a single problem and solution. We projected the anagram tasks using dual overlapping projections onto a single screen, as depicted in Figure 1. For the 10 tasks with two levels of difficulty, students saw only one version through the polarizing sunglasses (for details of this presentation trick, see Mori, 2007). We presented each anagram task for 10 seconds. During this time, the students tried to solve each anagram and write the answer on their answer sheet. We also included a five-second interval between each anagram task. The experimenter asked the students to stop writing at the end of the anagram task. Next, the experimenter announced the correct answers so that students could mark their answers. Then, the experimenter asked students with more than 22 correct answers to raise their hands. These students were frequently met with spontaneous applause from the class.
Because no students were aware of the presentation trick, we assume these naturally occurring appraisals were genuine. We did not anticipate nor control for applause, and therefore did not collect data concerning any potential effects of applause in this study.

Debriefing. Approximately one month after the anagram task, we disclosed the experimental purpose and the sunglasses trick to the students. But we did not specify which students, specifically, had observed the easier versions of the anagrams.

Manipulation Check

We first examined whether students who viewed easier anagrams solved more anagrams correctly. As expected, the success students answered more anagram tasks correctly (M = 24.90, SD = 3.90, range = 3-29) than the control students (M = 20.04, SD = 3.49, range = 5-29). We also found that males answered fewer anagram tasks correctly (M = 21.31, SD = 4.49, range = 3-29) than females (M = 22.14, SD = 3.68, range = 11-29). A 2 (treatment group: success, control) × 2 (gender: male, female) ANOVA revealed a statistically significant effect of treatment group, F(1, 309) = 114.09, p < .001, Cohen's η² = .26, and a statistically significant effect of gender, F(1, 309) = 8.54, p = .004, η² = .02. The interaction was not statistically significant, F(1, 309) = .92, p > .250. Upon closer examination of the data, we noted that 10 of the success students scored fewer than 22 correct answers on their easier version of the anagram task, while 80 of the control students scored 22 or more correct answers on their harder version. These scores are incongruent with the experimental manipulation. However, because we found that the pattern of results remained virtually unchanged when these subjects were excluded, we elected to include these students in our analyses. Some students were absent from one or more occasions of the self-efficacy assessments and the academic achievement tests.
We followed the same process as in a previous study, deleting these missing data case-wise (Mori & Uchida, 2009). Case-wise deletion procedures have at least two strengths: (a) for education RCTs that focus on test score outcomes, case deletion performs reasonably well relative to other missing data adjustment methods; and (b) case deletion is simple to apply and understand (Schochet, 2016, pp. 53-54). Ultimately, there were 267 students with complete assessment data for the following analyses (72 in the experimental condition and 195 in the control condition). For transparency, the raw data file with all data from students who participated in the study is available on the OSF site: https://osf.io/cp8uh/.

Self-Efficacy

We assessed students' self-efficacy at eight periods; these data appear in Figure 2. As the figure shows, students' self-reports of their ability to perform well on the anagram task rose sharply after the anagram task and remained high for one year, but only for those students in the success condition. The control students' self-efficacy, on the other hand, remained virtually unchanged. A 2 (treatment group: success, control) × 2 (gender: male, female) × 8 (assessment period) mixed ANOVA revealed an interaction between treatment group and assessment period, F(7, 1841) = 10.95, p < .001, η² = .03. Follow-up comparisons using the Ryan procedure (Ryan-Einot-Gabriel-Welsch Q [REGWQ] procedure) showed that success students reported greater self-efficacy than control students at all assessment periods, except before the anagram tasks: Fs > 12.60, ps < .0004. We found no statistically significant main effects nor interactions with gender: Fs < 1.30, ps > .255.

Academic Achievement

We obtained students' average Z-scores at each of six assessment periods, including before the experiment, and then two, five, 10, 14, and 17 months afterward; these data appear in Figure 3, split by gender.
As the figure shows, the Z-scores of males in the success condition increased from pre-test at the two-month assessment period, and remained elevated. In contrast, the Z-scores of males in the control condition showed a declining tendency. For females, we found no clear differences between the two experimental conditions. A 2 (treatment group: success, control) × 2 (gender: male, female) × 6 (assessment period) mixed ANOVA revealed a statistically significant treatment group × gender × assessment period interaction, F(5, 1315) = 2.69, p = .020, η² = .01, and a gender × assessment period interaction, F(5, 1315) = 3.14, p = .008, η² = .01. We found no other statistically significant main effects or interactions (Fs < 3.23, ps > .071). To unpack the three-way interaction, we performed a 2 (treatment group: success, control) × 6 (assessment period) mixed ANOVA for the males, and another for the females. For the males, we found a significant interaction, F(5, 630) = 4.41, p < .001, η² = .03, revealing that the differences between Z-scores of success and control males changed over the assessment periods; in general, the Z-scores of success males increased after the anagram task (F(5, 630) = 4.26, MS = 30.60, MSe = 7.18, p < .001) while those of the control males declined (F(5, 630) = 3.24, MS = 23.20, MSe = 7.18, p = .007). Multiple comparisons by the Ryan procedure (REGWQ) showed statistically significantly greater Z-scores for success males at five months after the self-efficacy manipulation, compared to their pre-experiment scores. Meanwhile, the Z-scores of the control males declined gradually and reached statistically significant differences from pre-experimental scores at 10, 14, and 17 months after the experiment. For the female students, however, we found no statistically significant effects (Fs < 2.06, ps > .069).

Figure 2. Self-efficacy scores before and after the anagram task. The vertical bars indicate the 95% confidence intervals.
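For readers who want to reproduce the effect sizes above from the reported statistics, an eta-squared value can be recovered from an F ratio and its degrees of freedom. The sketch below computes the partial form, which for some designs differs slightly from the classical η² the authors report, so it is an illustrative check rather than the authors' own computation:

```python
def partial_eta_squared(f_stat, df_effect, df_error):
    """Partial eta-squared recovered from an F ratio and its degrees of freedom."""
    ss_ratio = f_stat * df_effect  # proportional to the effect sum of squares
    return ss_ratio / (ss_ratio + df_error)

# Reported three-way interaction: F(5, 1315) = 2.69, eta-squared = .01
print(round(partial_eta_squared(2.69, 5, 1315), 2))
```

Applied to the manipulation check, F(1, 309) = 114.09 yields roughly .27, close to the reported Cohen's η² of .26; the small discrepancy is consistent with the partial-versus-classical distinction noted above.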
A cautious reader may wonder about potential problems that arise due to the nested nature of the data, such as deteriorating statistical power (Usami, 2013, 2014). But note that our study used multisite randomization trials (MRT). That is, students were randomly assigned to experimental and control conditions in each class. Simulations show that MRT procedures produce relatively stable intra-class correlations when compared with clustered randomization trials (CRT) (see Tables 1a and 2a in Usami, 2011). Nonetheless, we tested the effect of year-cohort differences by including year-cohort as a variable in a three-way ANOVA for the males (3 year-cohorts × 2 treatment groups × 6 assessment periods). This analysis revealed a statistically significant interaction for treatment group × assessment period (F(5, 610) = 3.93, p < .01, η² = .006) and a significant main effect for assessment period (F(5, 610) = 2.67, p < .05, η² = .004). All other effects failed to reach statistical significance (F(2, 122) = 0.24 for the main effect of year-cohort; Fs = 0.84 and 1.15 for the interactions). Consistent with this analysis, we also found a pattern similar to that displayed in Figure 3 when we looked separately at each of the three cohorts (see supplemental figures on the OSF site: https://osf.io/kuerw/).

Experimental Enactment of Self-Regulated Learning

Across three annual cohorts comprising a total of 267 students, we found in an RCT experiment that a brief experience of success in an anagram task raised students' self-efficacy immediately and eventually improved the male students' overall academic performance. Moreover, these broad benefits remained more than one year after the brief experimental manipulation. How does an induced successful experience in a simple task lead to overall academic improvement?
One possible answer comes from self-regulated learning theory (Zimmerman, 1990), which hypothesizes a "virtuous causal cycle": Students first experience success, which raises their self-efficacy. Improved self-efficacy increases motivation and the use of effective learning strategies. These covert and overt changes then lead to improved academic achievement, and the cycle begins anew. Previous researchers have examined and found support for this self-regulated learning hypothesis using correlational methods (Caprara et al., 2011; Chen & Usher, 2013; Murayama et al., 2013; Usher & Pajares, 2009; Zuffianò et al., 2013). In our study, we provide novel experimental evidence in support of the theory. The "virtuous cycle" began with contrived success on a specific task, which strengthened self-efficacy for that task. But intriguingly, the benefits extended beyond the task itself to students' broad academic ability. These results may be particularly encouraging to teachers looking to break their students out of a cycle of poor scholastic performance and low self-efficacy. Here, we present some evidence that, to break that cycle, teachers might give these students "easy" tasks so that they experience success. We enclose the word easy in quotation marks because, according to what we have demonstrated here, the tasks should be covertly easy only for target students. We hope schoolteachers will come up with creative ways to accomplish this requirement.

Gender Differences

We found that boosting self-efficacy improved academic scores, but only among males and not females. Why? As briefly described earlier, the literature is mixed with respect to gender differences in academic self-efficacy. Some studies find differences (Huang, 2013; Nietfeld et al., 2014), but others do not (Caprara et al., 2008, 2011; Jacob et al., 2002; Murayama et al., 2013). We were unable to find a good explanation within these studies that could account for our pattern of results.
Instead, one possible explanation relates to the different attributions males and females make about their ability to succeed. Males, for example, will more readily ascribe the cause of their success to ability than females (Lloyd, Walsh, & Yailagh, 2005). Perhaps, then, the males in our study were more likely than females to attribute success on the anagram task to their ability, while females were more likely than males to attribute success to luck. If true, then an ability-based attribution may be necessary to see general academic improvements. We state this potential explanation cautiously, however, because we found that males and females both reported increased self-efficacy, and we did not measure students' attributions of success. It may be worthwhile to ask participants about these attributions in future research.

Feedback Effects

The students in the success condition might have raised their self-efficacy and achievement scores simply because they received positive feedback. However, a review of the feedback literature, and other studies collected through major educational databases, concluded that there were inconsistent findings with respect to feedback (Shute, 2008). Some findings reported no feedback effects (Sleeman, Kelly, Martinak, Ward, & Moore, 1989) or even negative effects on learning (Kulhavy, White, Topp, Chan, & Adams, 1985). The review ultimately concluded that feedback could improve learning processes and outcomes, but only under certain conditions (Shute, 2008). Feedback effects have been inconsistent because there are a variety of intervening variables (Krenn, Wuerth, & Hergovich, 2013). Moreover, self-efficacy is one of these moderating variables. Managers with high self-efficacy, for example, benefit more from feedback than those with low self-efficacy (Heslin & Latham, 2004). Feedback might also have effects on learners' attitudes and beliefs.
For example, feedback attributed to competence promoted self-efficacy more in third-grade children on a subtraction skill test than feedback attributed to effort (Schunk, 1983). Considered as a whole, these findings bring us back to the "chicken-and-egg" problem of causal relations. We believe, therefore, that there is insufficient evidence to attribute the promotion of self-efficacy of students in our study merely to positive feedback.

Limitations and Directions for Future Research

The most crucial limitation of our study is that it is unclear how the initial experience of success produced greater academic achievement. Although our hypothesis was theoretically motivated (drawn from the literature on self-regulated learning), the proximal and distal mechanisms were not well specified (Zimmerman, 1990). There are, potentially, a number of intervening variables, including attributions of success, gradual transformations of task-specific to general self-efficacy, and increased motivation. A more complete explanation of the effects we report here will likely require future assessment of these variables, to untangle their contribution. Such work will illuminate the processes intervening between initial success and later achievement. We have demonstrated a brief intervention for students, producing remarkable results. Teachers who want to motivate students, especially those with low confidence or learning difficulties, may wish to capitalize on this intervention. But there are limitations in applying this intervention to actual school settings. First, only a small number of students can experience success, or it would cease to be remarkable. Educators are unlikely to want to adopt the practice if it can be used only for a fraction of students. Second, and relatedly, our study used minor deception, revealing the "trick" to students only at debriefing. This necessary deception is also likely to make it difficult for educators to adopt the practice.
We also note that our students' experiences of success included appraisal from classmates in the form of applause. This unintentional social appraisal occurred naturally and was thus outside our control. It is therefore more appropriate to regard the induced success we used here as induced success with social appraisal. At present, we do not know how this social appraisal affects student behavior. A follow-up experiment controlling for the presence or absence of appraisal could usefully tease apart the influence of an induced experience of success from the influence of appraisal. Finally, the gender differences in academic achievement pose an intriguing research question ripe for future investigation. Although our explanation above for this difference is plausible, we currently have no direct or even indirect evidence to suggest it is true. We plan to address this issue in a future experiment that probes students' attributions of success.

Conclusions

Our study provides a real-world experimental enactment of Bandura's self-regulatory efficacy theory in junior high school students. We hypothesized that a single experience of success would promote students' self-efficacy. Using a presentation trick, we secretly presented easier anagram tasks to target students, which led to an experience of success. These success-induced students reported improved self-efficacy and maintained this improved self-efficacy over an entire year. Most importantly, the success-induced males showed significant improvement in their academic achievement. It is unclear, at present, why improved self-efficacy produced higher achievement only in males. Nonetheless, our findings may give hope to teachers seeking a means to encourage students who suffer from low self-efficacy.
The intergenerational transmission of suicidal behavior: an offspring of siblings study We examined the extent to which genetic factors shared across generations, measured covariates, and environmental factors associated with parental suicidal behavior (suicide attempt or suicide) account for the association between parental and offspring suicidal behavior. We used a Swedish cohort of 2,762,883 offspring born 1973–2001. We conducted two sets of analyses with offspring of half- and full-siblings: (1) quantitative behavior genetic models analyzing maternal suicidal behavior and (2) fixed-effects Cox proportional hazard models analyzing maternal and paternal suicidal behavior. The analyses also adjusted for numerous measured covariates (e.g., parental severe mental illness). Quantitative behavior genetic analyses found that 29.2% (95% confidence interval [CI], 5.29, 53.12%) of the intergenerational association was due to environmental factors associated with exposure to maternal suicidal behavior, with the remainder due to genetic factors. Statistical adjustment for parental behavioral health problems partially attenuated the environmental association; however, the results were no longer statistically significant. Cox hazard models similarly found that offspring were at a 2.74-fold increased risk (95% CI, 2.67, 2.83) of suicidal behavior if their mothers attempted/died by suicide. After adjustment for familial factors and measured covariates, associations attenuated but remained elevated for offspring of discordant half-siblings (HR, 1.57 [95% CI, 1.45, 1.71]) and full-siblings (HR, 1.62 [95% CI, 1.57, 1.67]). Cox hazard models demonstrated a similar pattern between paternal and offspring suicidal behavior. This study found that the intergenerational transmission of suicidal behavior is largely due to shared genetic factors, as well as factors associated with parental behavioral health problems and environmental factors associated with parental suicidal behavior.
Introduction Research has consistently suggested that offspring of suicidal parents are at greater risk for suicidal behavior themselves 1,2 . A recent meta-analysis concluded that family history of self-injurious behaviors was moderately associated with offspring suicide attempt (odds ratio [OR], 1.57) 3 . However, it is unclear how the risk of family history of suicidal behavior is transmitted [2][3][4] . Researchers have proposed potential causal mechanisms including contagion [5][6][7][8][9] and exposure to adverse environments [10][11][12][13][14][15][16][17][18][19] . Parents also share genetic makeup with their offspring; consequently, the association between parental and offspring suicidal behavior may be confounded by genetic factors (i.e., passive gene-environment correlation) 20 . Twin, family, and adoption studies have consistently indicated that suicidal behavior is heritable 7,[21][22][23][24] . The comorbidity between psychopathology and suicidal behavior 25 also suggests that parental behavioral health problems may confound the association. Stated differently, the transmission of suicidal behavior between parent and offspring may not be specific to the exposure of parental suicidal behavior, but explained by behavioral health problems (e.g., being raised by a parent with psychopathology may result in a chaotic home environment), which is a common risk factor for suicidality 26 . Although previous studies have statistically adjusted for measured covariates (e.g., parental psychiatric disorder), attempts to draw causal inferences about the intergenerational association have been limited due to the inability to rigorously adjust for unmeasured genetic and environmental factors [26][27][28] . To date, adoption studies have been the primary research design used to account for unmeasured factors. 
While the results from these studies support the role of genetic influences on suicidal behavior 9,29,30, adoption studies have several limitations (e.g., matching adoptees to families of higher socioeconomic status) 31 and have not formally examined the intergenerational transmission of suicidal behavior. Therefore, more genetically informed research is needed to assess the processes through which suicidal behavior is transmitted from parents to offspring. The primary aim of this study was to examine the processes accounting for the intergenerational transmission of suicidal behavior through systematically ruling out non-causal processes. To do so, we first estimated the extent to which genetic and environmental factors account for the intergenerational transmission of suicidal behavior using quantitative behavior genetic modeling of offspring of half- and full-siblings. Given that half- and full-cousins share approximately 6.25% or 12.5% of their segregating alleles, respectively, quantitative behavior genetic modeling can estimate the degree to which common genetic and environmental factors specific to the exposure of parental suicidal behavior account for the association 32. Second, we used fixed-effects Cox regression models to further compare differentially exposed cousins (i.e., pairs in which one cousin experienced parental suicidal behavior and the other did not), which account for unmeasured familial factors when examining a specific risk (i.e., parental suicidal behavior). Through this comparison and the inclusion of measured covariates, we sought to differentiate among several processes that co-occur in traditional epidemiological studies 33. Data The Institutional Review Board at Indiana University and the Regional Ethical Review Board in Stockholm, Sweden, approved this study. We obtained data for the current study from eight national Swedish registers. The Medical Birth Register records nearly all pregnancies in Sweden beginning in 1973 34.
We linked all cohort members to their parents and grandparents using the Multi-Generation Register, which includes familial relations among individuals born after 1932 or living in Sweden since 1961 35. We identified parental twin pairs from the Swedish Twin Register, which includes nearly all twin pairs born in Sweden from 1886 through 2000 36. Exposure and outcome We derived all information about parental and offspring suicide attempt and death by suicide using data from the National Patient Register and Cause of Death Register, respectively. We included both intentional and undetermined-intent self-injurious behaviors to define suicidal behavior, consistent with previous research 22. Information about International Classification of Diseases codes used for identification can be found in Supplementary Table 1. We defined suicidal behavior for both generations as the first recorded suicide attempt requiring inpatient hospitalization or death by suicide after age 12, as the reliability of suicidal behavior records before this age is unclear 41. Prior research using Swedish registers has documented that childhood and adolescence are high-risk periods for suicidal behavior after exposure to parental suicidal behavior 11,42; therefore, we restricted exposure to parental suicidal behavior prior to age 18, including parents whose first suicidal behavior occurred before offspring birth. Covariates We considered offspring parity (first, second, third, or fourth or higher) and maternal and paternal age at offspring birth (in seven groups) as offspring-specific covariates. Maternal- and paternal-specific covariates were highest level of educational attainment (in six groups and a missing category), being born in Sweden, severe mental illness (i.e., lifetime history of either schizophrenia spectrum disorder or bipolar disorder after the age of 12 as recorded in the National Patient Register), and criminal conviction after the age of 15.
While certain variables may be more theoretically intuitive as covariates (e.g., parental mental illness), the decision to include these variables was based on prior research 11,41,43,44, their associations with parental and offspring suicidal behavior (Supplementary Table 2), and the likelihood that these variables temporally preceded the exposure and outcome 43,44. Of note, offspring parity, parental age at childbearing, parental country of origin, and parental educational attainment served as demographic factors and/or as proxies for socioeconomic status, which may be related to parental suicidal behavior through processes such as chaotic home environment, lack of financial resources, and poor decision-making. We included maternal and paternal covariates to help account for environmental factors that differed within the cousin pair and potential confounding due to assortative mating 45. Identification of cousin pairs We identified all sibling pairs within the parent generation and then subsequently determined offspring of siblings (i.e., cousin pairs). Among offspring of sister-sister, brother-brother, or sister-brother parents, we identified all cousin pairs based on those with the same maternal grandmother identifiers, paternal grandmother identifiers, or maternal grandmother and paternal grandmother, respectively. For offspring-of-full-sibling analyses, we excluded offspring of dizygotic (DZ) and monozygotic (MZ) twins identified either from the Swedish Twin Register 46 or as opposite-sex individuals born on the same day (n = 9861 unique cousin pairs). However, we included these individuals in an offspring-of-twins sensitivity analysis. If offspring were missing grandmother and grandfather identifiers, they did not contribute to the analyses.
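As an illustration of the linkage step described above, the grandmother-based pairing can be sketched as a self-join. All column names and identifiers below are hypothetical, not the actual Swedish register schema, and the real pipeline additionally handles paternal and mixed-sex sibling pairs and the twin exclusions.

```python
import pandas as pd

# Hypothetical register extract (illustrative columns and toy identifiers):
# one row per offspring, with links to mother and maternal grandmother.
offspring = pd.DataFrame({
    "child_id": [1, 2, 3, 4],
    "mother_id": [10, 11, 12, 13],
    "maternal_grandma_id": [100, 100, 101, 101],
})

# Offspring of sisters: self-join on the shared maternal grandmother
# identifier, then keep ordered pairs whose mothers differ (cousins, not
# siblings); the ordering condition also drops mirror duplicates.
joined = offspring.merge(offspring, on="maternal_grandma_id", suffixes=("_a", "_b"))
cousin_pairs = joined[
    (joined["child_id_a"] < joined["child_id_b"])
    & (joined["mother_id_a"] != joined["mother_id_b"])
][["maternal_grandma_id", "child_id_a", "child_id_b"]]
```

In this toy extract the join yields two maternal cousin pairs, one per grandmother; offspring with missing grandparent identifiers simply never match and so drop out, mirroring the exclusion described in the text.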
Analyses To help specify the processes underlying the intergenerational transmission, we conducted two sets of analyses in which we fit a series of: (1) quantitative behavior genetic models and (2) Cox proportional hazard models. Both approaches estimated the association between parental and offspring suicidal behavior, and each addressed limitations inherent to the other. Access to code is available upon request. Quantitative behavior genetic models First, in order to formally estimate the extent to which the intergenerational association was due to genetic and environmental factors, we fit structural equation models that decomposed the variance of parental and offspring suicidal behavior into additive genetic (A), shared environmental (C; environmental factors that make individuals similar), and nonshared environmental (E; environmental factors that make individuals dissimilar and measurement error) factors. We derived A, C, and E factors for the parents and offspring separately, which we were able to estimate through the comparison of half- and full-siblings in both generations (see Fig. 1 for a simplified representation of the quantitative behavior genetic models). Of note, A, C, and E are modeled additively in explaining the variance of the observed parental and offspring suicidal behavior. We constrained the correlations among these latent factors across individuals based on genetic relatedness. As such, we assumed the correlation between genetic factors across a parent and offspring to be approximately 50%. The avuncular genetic correlation (between offspring and aunt/uncle) was half of the parent-sibling genetic correlation. The models also included a direct phenotypic path from parental to offspring suicidal behavior in order to capture the intergenerational association that was not explained by the genetic correlation between parental and offspring suicidal behavior. The implemented models were an extension of methods used by Kuja-Halkola et al.
32, in which the liability towards suicidal behavior was assumed to follow a normal distribution. In this liability-threshold model, we estimated the associations between liabilities using the dichotomous observations of parental and offspring suicidal behavior. For a mathematical description of the models and an analytic solution to the quantitative behavior genetic models, see Supplementary Appendices 1 and 2. We included up to two offspring of each parent who were either half- or full-siblings. We then only included same-sex parent siblings and randomly removed repeated extended families in order to eliminate the dependency between families (rows). See Supplementary Fig. 1 for examples of the types of extended families included in the quantitative behavior genetic analyses. We restricted the quantitative behavior genetic analyses to estimate the processes associated with maternal suicidal behavior for two reasons. First, maternal half-siblings are more likely to live with their mothers and thus be exposed to their suicidal behavior, compared to paternal half-siblings. Second, the modeling assumes that the differences in the half- and full-sibling correlations in the offspring generation are due to genetic differences and shared effects of parental suicidal behavior. Paternal half-sibling correlations were not in line with this assumption (Supplementary Table 3). We fit three models, which included sequential covariate adjustment. First, we fit the quantitative behavior genetic models while only adjusting for parent-sibling and offspring-sibling type (i.e., half- or full-siblings) and differences in the expected prevalence of parental and offspring suicidal behavior. Second, in order to account for the role of comorbid maternal behavioral health problems and offspring characteristics, we adjusted for propensity scores associated with both.
We calculated propensity scores from the covariates for mothers and offspring, indicating the probability of suicidal behavior, using logistic regression models. For the creation of maternal propensity scores, we included educational attainment, country of origin, severe mental illness, substance use, and criminal convictions. For the creation of offspring propensity scores, we included offspring year of birth, parity, and maternal age at childbearing. Third, to capture potential bias due to assortative mating, we included the paternal propensity scores in addition to the maternal and offspring propensity scores. We fit all models in a structural equation framework using OpenMx 47.
Fig. 1 legend: A11 and A21 represent the parental additive genetic sources of variance, and g represents the genetic similarity between the two (i.e., 0.50 for full-siblings and 0.25 for maternal half-siblings); E11 and E21 represent the unique environmental contributions to the variance in the parental phenotype. A12 and A22 represent the offspring additive genetic sources of variance, and 0.25g represents the genetic similarity between the two; E12 and E22 represent the unique environmental contributions to the variance in the offspring phenotype; rg is the genetic correlation between the parental and offspring phenotypes, thus 0.50rg is the correlation between the parental and offspring phenotypes due to shared genetics; similarly, 0.50grg is the correlation between uncle/aunt and niece/nephew due to shared genetics. Parents and offspring may have different proportions of variance explained by A and E, as reflected in their different path coefficients (e.g., ap and ao). Finally, the direct, phenotypic intergenerational association is modeled by β, whereby the variance in the parental phenotype, regardless of source, may directly influence the variance in the offspring phenotype. A description of the model can be found in Supplementary Appendix 1 and in Kuja-Halkola et al. 32.
Cox proportional regression models In order to relax some of the assumptions of the behavior genetic models and increase sample size, we used Cox proportional hazard models to estimate the within-pair (i.e., fixed-effects) estimate among offspring of half- and full-siblings. We also examined associations with both maternal and paternal suicidal behavior, as the sibling correlations in the offspring generation (i.e., the comparison of half- versus full-siblings) did not influence our estimates of the intergenerational association. We first compared individuals to unrelated individuals in the general population and in the subsets of children of half- and full-siblings. We then assigned a unique identifier to each cousin pair in the sample and stratified on this identifier to obtain fixed-effects estimates, which adjusted for all factors shared within cousin pairs. We accounted for offspring represented in more than one cousin pair by using clustered standard errors 48. Cousin pairs that contributed to the estimate were those who were discordant on both exposure and outcome (Supplementary Table 4). The models accounted for right censoring of offspring follow-up time; if offspring did not have suicidal behavior within the follow-up period, they contributed person-time at risk until death, emigration, or the end of the study (December 31, 2013), whichever occurred first. For both the population and fixed-effects models, we also included a set of offspring and parental covariates. We conducted the general population analyses in SAS 9.4 and the fixed-effects analyses in Stata 13.1 49.
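The within-pair logic can be illustrated with a stdlib-only toy example (all numbers invented): when each informative stratum is a cousin pair with one event and comparable follow-up, the stratified partial-likelihood estimate reduces to the ratio of the two kinds of doubly-discordant pairs, which is why only pairs discordant on both exposure and outcome move the fixed-effects estimate.

```python
# Minimal sketch, not the actual analysis code. Each tuple is
# (exposure_a, event_a, exposure_b, event_b) for cousins a and b,
# with invented data.
pairs = [
    (1, 1, 0, 0),  # exposed cousin had the event     -> informative
    (1, 1, 0, 0),
    (1, 1, 0, 0),
    (1, 0, 0, 1),  # unexposed cousin had the event   -> informative
    (1, 0, 0, 0),  # concordant on outcome            -> uninformative
    (0, 0, 0, 1),  # concordant on exposure           -> uninformative
]

# Count the two kinds of doubly-discordant pairs.
exposed_case = sum(1 for ea, ya, eb, yb in pairs
                   if {(ea, ya), (eb, yb)} == {(1, 1), (0, 0)})
unexposed_case = sum(1 for ea, ya, eb, yb in pairs
                     if {(ea, ya), (eb, yb)} == {(1, 0), (0, 1)})

# Matched-pair (conditional) estimate of the within-pair hazard ratio.
hr_within = exposed_case / unexposed_case
```

In the real analyses this limiting case is generalized by stratified Cox models with right censoring and clustered standard errors; the toy ratio only shows why the doubly-discordant pair counts (Supplementary Table 4) govern the precision of the within-pair estimates.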
Sensitivity analyses We performed several sensitivity analyses to address potential bias in our results due to methodological decisions or our dataset. In the main quantitative behavior genetic models, the genetic correlation between mother and offspring was freely estimated, and the heritabilities of maternal and offspring suicidal behavior were not constrained to be equal. In order to test these assumptions and compare model fit, we examined the estimates when modifying model constraints (i.e., constraining the genetic correlation between mothers and offspring to be either 0 or 1, and constraining the heritability between parents and offspring to be equivalent). As mentioned previously, we conducted the Cox hazard models among offspring of DZ and MZ twins to examine whether the pattern of results held in a sample who shared more environmental (e.g., in utero) and/or genetic factors (e.g., parental twin pairs share either 100% or, on average, 50% of their segregating alleles). Results Table 1 summarizes the cohort demographics, including details for both offspring- and parent-specific variables. Table 2 summarizes Kaplan-Meier estimates of offspring suicidal behavior at age 30. Maternal and offspring suicidal behavior were correlated (tetrachoric correlation = 0.15; 95% confidence interval [CI], 0.13, 0.17). The quantitative behavior genetic analysis found that 29.2% (95% CI, 5.29, 53.12%) of the association was due to environmental factors specific to exposure to maternal suicidal behavior, whereas the remainder of the association was due to genetic factors shared across the generations (Table 3). When adjusting for offspring and maternal and then adding paternal propensity scores, the association due to specific environmental factors attenuated to 20.7% (95% CI, −19.29, 60.68%) and 15.7% (95% CI, −20.19, 51.57%), respectively, and was no longer statistically significant.
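The path-model arithmetic behind these estimates can be made concrete: under the Fig. 1 model, the expected parent-offspring correlation splits into a shared-genetics route (0.50rg·ap·ao) plus the direct route (β), and the avuncular genetic route is attenuated again by g. A stdlib-only sketch with invented path coefficients (not the fitted estimates, and ignoring the re-standardization of offspring variance that β induces):

```python
# Illustrative (invented) standardized path coefficients for an AE liability
# model with direct transmission; the fitted values in the paper differ.
a_p = 0.7   # parental additive genetic path
a_o = 0.6   # offspring additive genetic path
r_g = 0.8   # genetic correlation across generations
beta = 0.1  # direct phenotypic transmission

# Parent-offspring correlation = shared-genetics route + direct route.
genetic_route = 0.50 * r_g * a_p * a_o
r_parent_offspring = genetic_route + beta

# Aunt/uncle-niece/nephew genetic route is attenuated again by the
# parent-sibling genetic correlation g (0.50 full, 0.25 half-siblings).
g_full, g_half = 0.50, 0.25
r_avuncular_full = 0.50 * g_full * r_g * a_p * a_o
r_avuncular_half = 0.50 * g_half * r_g * a_p * a_o
```

The contrast between r_avuncular_full and r_avuncular_half is what lets the offspring-of-siblings comparison separate the genetic route from the direct route: only the genetic route shrinks as relatedness halves.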
In addition to attenuating the association between parental and offspring suicidal behavior due to environmental factors, the inclusion of propensity scores attenuated the heritability and elevated nonshared environmental influences on suicidal behavior (Table 3). The Cox proportional hazard estimates for the different comparison groups are presented in Table 4.
Table 3 legend: Structural equation model estimates of the processes underlying the association between maternal and offspring suicidal behavior. Additionally includes adjustment for offspring propensity score (derived from year of birth, parity, and parental age at childbearing) and maternal propensity score (derived from educational attainment, country of origin, severe mental illness, substance use, and criminal convictions). c Additionally includes adjustment for paternal propensity score (derived from educational attainment, country of origin, severe mental illness, substance use, and criminal convictions).
Sensitivity analyses When testing different assumptions in the quantitative behavior genetic modeling via model constraints, the model included in the main analyses conferred the best model fit and most reasonable interpretation (Supplementary Table 5a, b). Thus, our results supported that parent and offspring phenotypes were different, that heritability in the parent and offspring generations differed, and that the genetic correlation between suicidal behavior in the two generations was less than unity. Analyses of children of DZ and MZ twins also yielded results complementary to the main analyses, though the confidence intervals around the estimates were quite large (Supplementary Table 6). Specifically, in the general population, offspring exposed to maternal suicidal behavior were at a two-fold increased risk for suicidal behavior (HR, 2.05 [95% CI, 1.50-2.79]), which then attenuated when adjusting for covariates (HR, 1.40 [95% CI, 0.99-1.99]).
When comparing cousins exposed to maternal suicidal behavior, offspring were at a 50% increased risk without covariate adjustment (HR, 1.51 [95% CI, 1.11-2.13]), which attenuated slightly when further adjusting for covariates (HR, 1.41 [95% CI, 0.96-2.06]).
Table 4 legend: Hazard rate of suicidal behavior in the offspring generation among offspring exposed to parental suicidal behavior in different comparison groups.
Discussion When accounting for genetic factors and comorbid parental behavioral health problems, the intergenerational association between parental and offspring suicidal behavior persisted, albeit attenuated from the association identified in the general population. Taken together, the results suggest that: (1) genetic factors cannot completely explain the intergenerational association, although they account for roughly 70% of the association; (2) measured covariates account for a portion of the association, above and beyond shared genetic factors; and (3) the remaining (approximately 15%) association is due to environmental factors specifically associated with parental suicidal behavior, potentially suggesting a non-genetic, independent intergenerational association. The heritability of suicidal behavior has been well replicated by adoption, twin, and family studies 7,22,29,50, which is consistent with our quantitative behavior genetic findings that genetic factors largely account for the intergenerational association. Suicidal ideation and behavior are highly comorbid with psychiatric problems, and, as such, comorbid parental behavioral health problems may confer increased risk for offspring suicidal behavior through both genetic and environmental processes 22. We found that when we included parental propensity scores, the heritability of parental suicidal behavior attenuated due to the shared genetic overlap with other behavioral health problems. When adjusting for parental propensity scores, the transmission of psychopathology did not entirely explain the transmission of suicidal behavior, which is consistent with both our Cox proportional hazard results and previous literature examining the intergenerational transmission of anxiety, neuroticism, and depression 21,[51][52][53][54][55][56]. The remaining environmental mediation suggests that having a parent who displayed suicidal behavior may confer an increased risk for offspring suicidal behavior through mechanisms such as contagion [5][6][7][8][9], bereavement after parental loss [10][11][12], negative parenting style (e.g., hostility) 22,57, or chaotic home environment [13][14][15]. In addition to the interpretation of a direct environmental effect, there may be two alternative explanations: first, there may be cohort-specific genetic effects for suicidal behavior, and second, there may be differing genetic effects on adult versus adolescent suicidal behavior. While we did not stratify our quantitative behavior genetic analyses by birth cohort within the parental and offspring generations, our results did suggest that the additive genetic component of suicidal behavior was not perfectly correlated across generations. This may be due to differing genetic factors influencing the generations. The latter explanation of differing heritability estimates by developmental period has been supported by prior studies that have found that heritability estimates increase over the lifespan for various phenotypes (e.g., alcohol use, smoking, depression, and anxiety) 58,59 and that the genetic influences on behavior differ by age of onset 60,61, but it is unclear whether these findings apply to suicidal behavior. While outside the scope of the current paper, future research will need to explore these possibilities. When examining both maternal and paternal suicidal behavior in the Cox proportional hazard models, the magnitude of risk for offspring exposed to maternal suicidal behavior was slightly higher compared to exposure to paternal suicidal behavior.
This finding is consistent with other research, which has hypothesized that because mothers are often the primary caregivers, their suicidal behavior has a greater impact on offspring suicidal behavior compared to fathers' 14,42. It is important to note, however, that while maternal suicidal behavior may be a greater risk factor for offspring suicidal behavior, the clinical implications may be similar for paternal and maternal suicidal behavior. Children exposed to parental suicidal behavior continue to be at an elevated risk and require additional clinical attention. We also had insufficient statistical power to examine the interaction between parental and offspring gender and risk for suicidal behavior, although previous research suggests that differences among genders may depend on the developmental period of exposure 62. This study advances the field of suicidal behavior in two important ways. First, to the best of our knowledge, this is the only application of the offspring-of-siblings design to the study of the intergenerational transmission of suicidal behavior. This design allowed us to adjust for within-extended-family unmeasured confounding, providing a stronger test of causal inference than prior studies comparing unrelated individuals. Second, we used both quantitative behavior genetic analyses and Cox proportional hazard models with fixed effects to examine the intergenerational association, which have different strengths and limitations. The quantitative behavior genetic analyses estimated the extent to which the maternal-offspring association was due to maternal exposure while simultaneously adjusting for common genetic factors. However, these models included a restricted sample of families (i.e., with up to two offspring of each parent, and same-sex parental siblings) and did not adjust for right censoring.
In contrast, the use of proportional hazard modeling allowed us to adjust for right-censored data, include all possible cousin pairs (e.g., born to parents of opposite sex), and examine paternal suicidal behavior. The ability to both quantify the intergenerational transmission and replicate the pattern of findings in a much larger sample is a significant contribution to the field. This study also has several limitations. First, an assumption of the offspring-of-siblings design is that offspring of full-siblings are directly comparable to offspring of half-siblings 63. Given that the unadjusted population estimates from the Cox proportional hazard models in offspring of half-siblings were lower than the population estimates in the subset of children of full-siblings, this assumption may be violated 50. However, the quantitative behavior genetic analyses used a different scale of association (i.e., tetrachoric correlations), on which the estimates were similar. Second, the offspring-of-siblings design is unable to adjust for environmental factors unique to each nuclear family 55 or to address assumptions related to assortative mating 64. We included both maternal and paternal measured covariates to limit this bias, but we cannot make a definitive causal inference. Third, we had limited precision in our estimates for the quantitative behavior genetic models. When including propensity scores as covariates, the estimated direct transmission from parent to offspring was no longer statistically significant, hindering the interpretation of the extent to which the intergenerational transmission was consistent with a causal association. However, the converging results from the Cox proportional hazard models strengthened these findings.
Fourth, our quantitative genetic models were linear models; we did not explore gene-environment interactions, as our primary research aim was to examine the main effect of the intergenerational transmission while rigorously adjusting for unmeasured and measured confounding factors. Gene-environment interactions within the context of the intergenerational transmission of suicidal behavior are an important future research direction. Fifth, we did not adjust for parental depression, as we only had inpatient ICD codes for depressive disorders, which is likely to be highly correlated with inpatient suicide attempt. However, previous research suggests the intergenerational association persists after accounting for parental depression 65. Sixth, all records of severe mental illness and suicidal behavior are derived from health care data using ICD codes, which likely limits our definition to severe events. Specific to suicide attempt, we included self-injurious behavior of undetermined intent to account for potential misclassification of suicide attempt; however, we were unable to capture suicide attempts that did not present in a hospital setting. Additionally, the use of lifetime occurrences of suicidal behavior does not allow the risk of suicidal behavior to vary over time 3,25. Our models did not account for repeated events of parental suicidal behavior, which may confer increased risk for offspring. Future research should examine how offspring risk for suicidal behavior develops after exposure to numerous parental suicide attempts and death by suicide. Future research should also examine suicide attempt and suicide separately, as we were unable to stratify by outcome in the current study. Finally, we examined exposure to parental suicidal behavior at any point prior to age 18, but did not further investigate narrower age ranges that may be particularly sensitive periods in childhood and adolescence.
Previous research suggests that the magnitude of the association between parental and offspring suicidal behavior is greater for childhood exposure compared to adolescence and young adulthood 11,62,66 . Continued genetically informed research is needed to further develop our understanding of developmental periods sensitive to parental suicidal behavior exposure. Conclusions This study found that the intergenerational transmission of suicidal behavior is due to genetic factors shared across the generations and factors associated with comorbid behavioral health problems. A remaining association, however, was due to environmental factors specifically associated with exposure to parental suicidal behavior, consistent with a causal interpretation. Research examining the intergenerational transmission of various disorders should consider using multiple analytic approaches. Future suicidality research that can further specify genetic and environmental processes as well as specific mechanisms underlying the intergenerational transmission will help inform clinical interventions. Importantly, however, future research that examines environmental mediators needs to do so in a genetically informed context, as genetic factors appear to explain a large portion of the intergenerational association between parents and offspring. Without accounting for unmeasured confounding factors, researchers may overestimate the impact of a possible mediator, resulting in potentially weak or ineffective behavioral interventions. As continued genetically informative research is needed to elucidate mechanisms that can help inform interventions among offspring who are bereaved and/or offspring who are experiencing suicidality themselves, we reiterate the call outlined by prior research and organizations for continued systematic screening of suicidality and thorough assessments of family history 67 .
Health-related quality of life trajectory of treatment-naive patients with Merkel cell carcinoma receiving avelumab

Aim: To evaluate changes in health-related quality of life (HRQoL) in a Phase II trial (NCT02155647) of treatment-naive patients with metastatic Merkel cell carcinoma treated with avelumab (15-month follow-up). Materials & methods: Mixed-effect Models for Repeated Measures were applied to HRQoL data (FACT-M; EQ-5D-5L) to assess changes over time. Clinically derived progression-free survival was compared with HRQoL deterioration-free survival. Results: Overall, we saw relative stability in HRQoL among 116 included patients, with nonprogression associated with statistically and clinically meaningful better HRQoL compared with progressive disease. Deterioration-free survival rates (49–72% at 6 months, 40–58% at 12 months) were consistently higher/better compared with progression-free survival rates (41/31% at 6/12 months). Conclusion: These findings show unique longitudinal HRQoL data for treatment-naive metastatic Merkel cell carcinoma patients treated with avelumab. Clinical trial registration: NCT02155647 (ClinicalTrials.gov).

Avelumab, a fully human monoclonal antibody of the immunoglobulin G1 isotype, has been shown to improve treatment options for metastatic MCC, both in the first-line setting and in chemotherapy-refractory patients [6,7]. As an immunotherapy, avelumab is also expected to have a more favorable safety profile than cytotoxic chemotherapy regimens. Based on results from the Phase II single-arm, open-label, multicenter, international JAVELIN Merkel 200 trial (NCT02155647), avelumab became the first treatment for metastatic MCC patients to receive approval in the USA [8], the EU [9], Japan [10] and many other countries. The JAVELIN Merkel 200 trial consists of two parts.
In the first part, chemo-refractory patients were enrolled, in other words, patients had already received and failed one or more lines of chemotherapy treatment for metastatic MCC before joining the study, while treatment-naive metastatic MCC patients were included in the second part of the trial. Results from chemo-refractory patients of the trial have already been published widely, including patients' health-related quality of life (HRQoL) data while receiving avelumab treatment [7,[11][12][13]. In the present study, we report on the treatment-naive metastatic MCC patients of the JAVELIN Merkel 200 trial [6,14]. Specifically, we report the trajectory of self-reported HRQoL scores as well as HRQoL deterioration-free survival (QFS) based on 15-month follow-up data obtained from treatment-naive metastatic MCC patients.

Study design

As described above, the JAVELIN Merkel 200 trial consists of two parts, including chemo-refractory metastatic MCC patients in part 1 and patients who were naive to systemic therapy in part 2. In both parts of the trial, patients received treatment with avelumab 10 mg/kg every 2 weeks. As part of the trial, patient-reported outcomes (PRO) questionnaires were completed at baseline, week 7 and every 6 weeks thereafter until disease progression and/or treatment discontinuation. This paper is the first to report HRQoL data obtained from metastatic MCC patients receiving avelumab as first-line MCC treatment. We focus on the 15-month follow-up dataset (cutoff date: 2 May 2019).

Study population

The intention-to-treat trial population consisted of n = 116 treatment-naive metastatic MCC patients [6,14].

Patient-reported outcome assessments

The JAVELIN Merkel 200 trial included the melanoma-specific Functional Assessment of Cancer Therapy – Melanoma (FACT-M) [15] and the EQ-5D-5L [16] PRO instruments. PRO data were collected electronically at baseline, week 7 and then every 6 weeks while on treatment.
For those patients who had stopped treatment, an end-of-treatment visit was assessed. The FACT-M questionnaire is a self-administered, melanoma-specific HRQoL instrument. Although specific to melanoma, it was considered suitable for application in MCC, especially in light of the dearth of MCC-specific PRO instruments. The FACT-M includes 51 items grouped into six subscale and three summary scores. In addition, three MCC-specific FACT-M scores, developed and validated previously [12,17], are also included in this study along with the established FACT-M subscale and summary scores, leading to a total of 12 FACT-M scores presented here. For the analysis of FACT-M data, a FACT-M PRO analysis set was defined. The EQ-5D-5L is a self-administered, generic, preference-based HRQoL instrument developed by the EuroQol group [16,18]. It includes five single-item dimensions, each assessed using five levels, and a vertical visual analog scale (VAS) for patients to rate their current health state. The EQ VAS was used in this study but not the EQ-5D-5L index score, as the latter is typically used in other types of analyses such as economic modeling. The EQ VAS ranges from 0 (worst imaginable health state) to 100 (best imaginable health state). For the analysis of EQ VAS data, an EQ-5D-5L PRO analysis set was defined. Both the FACT-M and the EQ-5D-5L have been validated for use in MCC [12,17,19].

Statistical analysis

The statistical analyses of HRQoL data consisted of two parts. First, change in patients' HRQoL over time was explored. As included MCC patients had metastatic disease, one major challenge is the occurrence of missing data due to patient dropout. As a result, Mixed-effect Models for Repeated Measures (MMRM) were used, which can handle missing data without losing cases [20]. Two MMRMs were fitted to model change from baseline (CFB) across all 12 FACT-M subscale, summary and MCC-specific scores and the EQ VAS.
Both sets of MMRM included baseline value as a fixed effect covariate and a random intercept. The random intercept implies a compound symmetry structure, assuming constant intrasubject correlations. In the first set of MMRMs, we estimated the overall effect of the treatment (avelumab) on HRQoL over the course of the study, in other words, the average CFB across assessment time points. In the second set of MMRMs, we explored potential differences in HRQoL due to disease progression across all assessment time points. To assess group differences, we added 'response status' as a binary ('nonprogression' vs 'progression') fixed effect covariate to the model. The regression coefficient obtained from this analysis then describes the average change in HRQoL scores associated with nonprogression; in other words, a positive regression coefficient indicates that 'nonprogression' is associated with better/higher HRQoL compared with 'progression' and vice versa. Second, we examined HRQoL QFS, defined as the time up to definitive HRQoL deterioration. Definitive deterioration is a change from baseline reaching or exceeding a predefined minimal important difference (MID) at least once during the study, with no subsequent improvement from baseline reaching or exceeding the MID for HRQoL improvement. QFS is reported in two ways: median QFS (in months), in other words, the time until 50% of patients showed definitive HRQoL deterioration; and QFS rates at different landmarks, in other words, the percentage of patients without definitive HRQoL deterioration at 6 and 12 months, respectively. Median QFS and QFS rates were estimated using the Kaplan-Meier method; obtained estimates were then qualitatively compared with median progression-free survival (PFS) and PFS rates, respectively, for context. For the definition of HRQoL deterioration as applied in the statistical analyses, specific FACT-M 'minimum' and 'maximum' MID thresholds had been derived for application in MCC [12,17].
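To make the QFS estimation concrete, the Kaplan-Meier computation can be sketched as follows. This is an illustrative hand-rolled implementation on made-up data, not the trial's analysis code; the variable names and example times are assumptions for demonstration only.

```python
# Hand-rolled Kaplan-Meier sketch for deterioration-free survival (QFS).
# times: months until definitive HRQoL deterioration or last assessment;
# events: 1 = definitive deterioration observed, 0 = censored.
# The data below are illustrative, not JAVELIN Merkel 200 data.

def kaplan_meier(times, events):
    """Return the survival curve as a list of (time, survival) steps."""
    pts = sorted(zip(times, events))
    surv, curve = 1.0, []
    for t in sorted(set(t for t, _ in pts)):
        n = sum(1 for tt, _ in pts if tt >= t)   # at risk just before t
        d = sum(e for tt, e in pts if tt == t)   # deteriorations at t
        if d:
            surv *= 1 - d / n
            curve.append((t, surv))
    return curve

def rate_at(curve, landmark):
    """Deterioration-free rate at a landmark time (e.g., 6 or 12 months)."""
    s = 1.0
    for t, surv in curve:
        if t <= landmark:
            s = surv
    return s

times  = [2, 3, 4, 4, 5, 6, 7, 8, 9, 12, 13, 14]
events = [1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
curve = kaplan_meier(times, events)
print(round(rate_at(curve, 6), 3), round(rate_at(curve, 12), 3))  # 0.629 0.503
```

Note that censored observations enter only through the at-risk counts, which is why the high censoring rates noted later in the paper widen the uncertainty around the estimates rather than biasing the step function itself.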
For the purpose of the present analyses, we only focus on the more conservative minimum MID. Further details on the FACT-M MID thresholds are reported elsewhere [17].

Results

Study population, sociodemographic/clinical characteristics of patients

A total of n = 116 treatment-naive MCC patients participated in the JAVELIN Merkel 200 trial [6,14]. Of these, n = 98 patients provided valid FACT-M data at baseline, while n = 100 patients provided valid EQ-5D-5L data at baseline. Table 1 shows the sociodemographic and clinical characteristics of patients who provided PRO data (n = 100). Over two thirds of MCC patients were male; average age was 73 years. The majority of patients were recruited in Western Europe (61.0%) and 27.0% in North America. Patients had received their diagnosis of metastatic MCC within about two and a half months of being included in this study. Excision of the primary MCC tumor had taken place between 1 and 3 years earlier. Compliance rates for the FACT-M ranged from 75.6 to 87.9% between baseline and week 61, in other words, the assessment time point closest to the 15-month data cutoff date.

HRQoL over time using MMRM across all metastatic MCC patients

As shown in Table 2, the results of the MMRM analysis of mean change from baseline over time across all assessment time points suggest relative stability in HRQoL in patients still part of the study. That is, half (6/12) of the FACT-M subscale, summary and MCC-specific scores and the EQ VAS score show a p-value > 0.05 for CFB, indicating overall stability of HRQoL across scales. Of the remaining six FACT-M scores showing a p-value < 0.05, four scores suggest some deterioration. These scores were mostly related to physical and functional subscales, with two scores reaching the respective MID threshold, suggesting an important deterioration over time (i.e., functional well-being, Trial Outcome Index).
In contrast, two FACT-M scores (emotional well-being, psychological impact) suggest a small overall improvement, with the former subscale exceeding the MID threshold, suggesting an important improvement in emotional well-being over time in those patients still part of the study.

Change in HRQoL associated with 'progression' versus 'nonprogression'

As shown in Figure 1, 'nonprogression' was associated with substantially higher/better HRQoL scores compared with 'progression' across all FACT-M subscale, summary and MCC-specific scores and the EQ VAS score. The only exception was the FACT-M melanoma surgery subscale, where the 95% CI for the group difference included zero (see Figure 1; the 95% CI starts slightly below the x-axis for this subscale). All remaining group differences were generally of a large magnitude, in other words, reaching the respective MID threshold in all but two FACT-M subscales (i.e., melanoma surgery subscale, physical function; see Figure 1; the horizontal MID line is at or above the respective bar of the two subscales). These findings suggest that PRO data clearly differentiated between 'progression' and 'nonprogression', with the latter showing statistically significant and clinically meaningful better HRQoL over time compared with the former in EQ VAS and almost all FACT-M scales. Figure 2 presents the time until definitive deterioration based on FACT-M and EQ VAS scores, respectively. That is, we estimated the length of time (in months) until half of the patients indicated a deterioration in their HRQoL scores. Compared with median PFS, which was 4.1 months (95% CI: 1.4-6.1 months) for treatment-naive MCC patients [21], median QFS was longer across all HRQoL scores, with some being substantially longer than the PFS (indicated by the generally higher QFS bars in Figure 2 compared with the PFS bar).
For the FACT-M scores, the shortest median QFS was observed for physical well-being, functional well-being, the melanoma subscale and the FACT-M Trial Outcome Index, respectively, with half of the patients presenting a definitive deterioration in terms of HRQoL around 4.5-5.8 months after their first treatment dose. Censoring rates ranged from 45% (physical well-being) up to 66% (psychological impact). Due to the large amount of censoring, median QFS could not be estimated for emotional well-being and psychological impact.

QFS rates

QFS rates show the percentage of patients without definitive HRQoL deterioration at 6 and 12 months, respectively. As shown in Figure 3, for the FACT-M subscale, summary and MCC-specific scores, QFS rates ranged from 49% (melanoma subscale, physical well-being) up to 72% (emotional well-being) at 6 months. At 12 months, QFS rates ranged from 40% (physical well-being) up to 58% (psychological impact, melanoma surgery subscale). For the EQ VAS, QFS rates were 62% at 6 months and 52% at 12 months, respectively. In comparison, all QFS rates were higher than PFS rates, which were 41% and 31% at 6 and 12 months, respectively [21].

Discussion

The aim of this study was to assess HRQoL of patients who took part in the JAVELIN Merkel 200 trial, a single-arm, open-label, multicenter, international Phase II study exploring the efficacy of avelumab in treatment-naive metastatic MCC patients. Specifically, we aimed to explore the trajectory of self-reported HRQoL scores based on MMRM analysis and to explore HRQoL QFS based on FACT-M and EQ VAS data. This study is the first to present PRO data obtained from treatment-naive metastatic MCC patients of the JAVELIN Merkel 200 trial. The results of the general MMRM analysis show overall stability of HRQoL among all patients, in other words, a favorable finding, as HRQoL deterioration might be expected in patients with metastatic disease.
When exploring the specific FACT-M domains, there was evidence for some improvement in emotional well-being, while there was a trend toward poorer functional and physical well-being over time. When exploring progression status, clear group differences were seen, with 'nonprogression' resulting in statistically and clinically meaningful better HRQoL compared with disease progression; in other words, differences were as expected and consistent with clinical response status. The median duration of HRQoL QFS ranged from 4.5 months to 1 year and beyond across the EQ VAS and the different FACT-M scales. Of note, the QFS of the two emotional domains, emotional well-being and psychological impact, was not estimable, because 'definitive deterioration' had not yet occurred for at least half of the patients at the time of data cutoff. This is in line with the results of the MMRM analyses, suggesting that patients remaining in the study seemed to be emotionally stable or even to improve over time. Compared with median PFS, median QFS was consistently longer, which was also reflected in respective QFS rates at 6 and 12 months being consistently higher than respective PFS rates at these time points. This study is the first to present PRO data based on treatment-naive MCC patients of the JAVELIN Merkel 200 trial, while substantive quantitative and qualitative work around PRO data from chemotherapy-refractory MCC patients of the same trial has already been published extensively. Quantitative findings were largely consistent with the findings of the present study. That is, the analyses of PRO data obtained from chemo-refractory patients also showed that QFS rates were generally higher than respective PFS rates [22]. Also, the comparison of 'nonprogression' with progressive disease suggested clinically better HRQoL in nonprogressed disease in the chemo-refractory MCC cohort, as in the present study [11].
Similarly, qualitative findings based on treatment-naive MCC patients were again largely in line with the findings from chemo-refractory MCC patients in that all patients indicated similar experiences regarding perceived benefits and clinical changes during the JAVELIN Merkel 200 trial [23,24]. This study has limitations. A major challenge is the high attrition rate, with substantial patient dropout throughout the trial, especially within the first 6 months of the study. While high patient dropout can be expected in metastatic disease, diminishing sample sizes result in large 95% CIs in the QFS analyses, which are indicative of uncertainty around the estimates. A further limitation is that MMRM analyses only provide unbiased estimates if data are missing at random; however, this assumption cannot be tested empirically, so it cannot be ruled out that the missing data pattern was nonrandom. Also, PRO data were only collected until the end of treatment, which was often triggered by disease progression, with two consequences: first, an underestimation of deterioration, since for some patients HRQoL may have deteriorated shortly after disease progression, in other words, shortly after leaving the study; and second, relatively high censoring rates in the QFS analyses. Finally, we used a melanoma-specific PRO instrument in a cohort of MCC patients. MCC patients tend to be about 10 years older on average than melanoma patients, which is true for our sample with an observed mean age of 73 years. Therefore, it cannot be ruled out that especially some of the oldest participants (up to 93 years of age in our sample) found it challenging to fill out the instrument correctly. Also, some of the melanoma-specific items may not be applicable to MCC patients, as supported by our psychometric analyses, where both melanoma subscales showed worse performance than the other scales [17].
However, in light of the dearth of MCC-specific PRO instruments, the exceptionally strong performance of the MCC-specific FACT-M scales physical function and psychological impact [17] gives us confidence that the use of the FACT-M in this study was appropriate and delivered high quality HRQoL data.

Conclusion

These findings show unique longitudinal HRQoL data for treatment-naive metastatic MCC patients treated with avelumab in a relatively large sample for a rare disease. Relatively stable HRQoL scores were observed over time. When differentiating patients by progression status, 'nonprogression' was associated with statistically and clinically meaningful better HRQoL compared with progressive disease. Finally, time-to-event analyses suggest longer HRQoL QFS compared with PFS. In conclusion, these data provide important insight into self-reported HRQoL of treatment-naive metastatic MCC patients over time while receiving avelumab treatment, validating previous findings of the positive impact of avelumab treatment on patients' HRQoL in metastatic MCC.

Summary points

• This study presents unique longitudinal health-related quality of life (HRQoL) data for treatment-naive metastatic Merkel cell carcinoma (MCC) patients treated with avelumab in a relatively large sample for a rare disease.
• Compliance rates for the HRQoL data ranged from 75.6 to 87.9% between baseline and the 15-month data cutoff date.
• In treatment-naive metastatic MCC patients treated with avelumab, nonprogression was associated with statistically and clinically meaningful better HRQoL compared with progressive disease.

received reimbursement for travel and accommodation expenses from Adaptimmune, EMD Serono and Nektar.
The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed. No writing assistance was utilized in the production of this manuscript.

Ethical conduct of research

This study was performed in compliance with the ethical principles arising from the Declaration of Helsinki and all current local regulations. The study protocol was approved by an Independent Ethics Committee or Institutional Review Board prior to the study launch at each site. All patients gave written informed consent.

Data sharing statement

For all new products or new indications approved in both the European Union and the USA after 1 January 2014, Merck KGaA, Darmstadt, Germany will share patient-level and study-level data after de-identification, as well as redacted study protocols and clinical study reports from clinical trials in patients. These data will be shared with qualified scientific and medical researchers, upon researcher's request, as necessary for conducting legitimate research. Such requests must be submitted in writing to the company's data sharing portal. More information can be found at https://www.merckgroup.com/en/research/our-approach-to-research-and-development/healthcare/clinical-trials/commitment-responsible-data-sharing.html. Where Merck KGaA has a coresearch, codevelopment or comarketing/copromotion agreement or where the product has been out-licensed, it is recognized that the responsibility for disclosure may be dependent on the agreement between parties. Under these circumstances, Merck KGaA will endeavor to gain agreement to share data in response to requests.

Open access

This work is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/
Nutrition Trends: Implications for Diabetes Health Care Professionals

Since 1991, the American Dietetic Association has conducted nationwide consumer nutrition trend surveys (1991, 1995, 1997, 2000, 2002, and 2008). Results from the 2008 survey were presented at the association's 2008 annual meeting, the Food & Nutrition Conference and Expo, in October 2008 in Chicago. Each of the six surveys conducted to date was designed to "measure people's attitudes, knowledge, beliefs, and behaviors regarding food and nutrition; and to identify trends and understand how consumers' attitudes and behavior have evolved over time." 1 From 25 February through 7 March 2008, the telephone survey, ~ 18 minutes in length, was carried out in a representative sample of the U.S. adult population (n = 783). 1 The assumed margin of error for each question is ± 3%, and results were projected to the 90% confidence interval. 1 Public information releases and a PowerPoint presentation with details regarding the 2008 survey can be found at the association's website (www.eatright.org/trends2008).
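The quoted ± 3% figure at the 90% confidence interval can be sanity-checked with the standard margin-of-error formula for a survey proportion. A quick sketch (z ≈ 1.645 is the 90% two-sided normal quantile; p = 0.5 is the most conservative assumption):

```python
import math

# Margin of error for a survey proportion at the 90% confidence level.
# p = 0.5 maximizes p*(1-p), so this is the worst-case ("most conservative")
# sampling error for any yes/no question in the survey.
def margin_of_error(n, p=0.5, z=1.645):
    return z * math.sqrt(p * (1 - p) / n)

# With the survey's sample of n = 783 adults:
moe = margin_of_error(783)
print(round(100 * moe, 1))  # 2.9 percentage points, i.e., roughly the quoted +/- 3%
```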
Importance of Diet, Nutrition, and Physical Activity

When asked about the importance of diet, nutrition, and physical activity, approximately three out of five consumer respondents answered that diet, nutrition, and physical activity are "very important" to them personally; women and people with college or post-graduate degrees were more likely to say that nutrition and diet are "very important." Less variation was observed in the importance of exercise and physical activity among sex, age-groups, and education levels. The majority of respondents in all groups considered exercise and physical activity "very important." In each survey since 1991, the American Dietetic Association has segmented participants into three consumer groups representative of their overall attitudes toward maintaining a healthy diet and getting regular exercise. The groups are labeled "I'm already doing it" (describing consumers who are concerned about nutrition and overall fitness and feel they are doing all they can to eat a healthy diet), "I know I should" (consumers who indicate importance but may not have taken significant action to eat a healthy diet), and "Don't bother me" (consumers who do not feel that diet and exercise are important to them and are the least concerned with nutrition and their overall fitness). The association reports that the "I'm already doing it" group has increased steadily in each survey, representing movement away from the "Don't bother me" group.

Top Reasons for Not Eating Better

The survey explored reasons why people are not doing more to improve their eating habits. It found that > 70% of adults do not do more to achieve balanced nutrition and a healthy diet because they report that they are satisfied with the way they eat, and they do not want to give up their favorite foods.
The association states that "People like what they eat . . . and eat what they like." The major or minor reasons given for not doing more to change eating habits included:

• "I'm satisfied with the way I eat." (79%)
• "I don't want to give up the foods I like." (73%)
• "It takes too much time to keep track of my diet." (54%)
• "I need more practical tips to help me eat right." (52%)
• "I don't know or understand guidelines for diet and nutrition." (41%)

Food Consumption, Knowledge, and Beliefs

Respondents were also asked about their intake of certain foods and nutrients and whether their consumption had increased, decreased, or stayed the same during the past 5 years. Overall, the top five foods that survey participants reported they had increased and the percentage of respondents reporting an increase were: When broken down by age-group, adults aged 18-34 years were the most likely to have increased their consumption of fruits, vegetables, and whole grains. Those ≥ 65 years of age were the least likely to have increased consumption of these foods. More than half of the respondents reported no change in their consumption of dairy foods, pork, low-carbohydrate foods, omega-3 fatty acids, low-sodium foods, and alternative sweeteners. By a large margin, foods containing trans fats were the most likely to have reduced consumption during the past 5 years. An average of 56% of all participants reported cutting back on these foods. Forty-one percent reported a decreased intake of beef, 33% cut back on pork, and 23% reduced dairy intake. Less than 20% of respondents had decreased their consumption of alternative sweeteners and low-carbohydrate products. More than three in four respondents had heard "a lot" about health-related effects of low-fat foods, and 72% had heard about foods containing trans fats. Additionally, 94% of those surveyed said that they believe whole-grain bread is healthier than white bread.
More than half of the respondents believed that organically grown fruits and vegetables are healthier than conventionally grown, 38% believed there was no difference, and 8% believed conventionally grown produce was healthier.

Sources of Food and Nutrition Information

The nutrition trends survey indicates that Americans get most of their food and nutrition information from television and magazines (Figure 1). However, the most popular sources are not necessarily considered the most credible sources of nutrition information by survey participants. Although television remains the top source of nutrition and food information at 63%, it is down from 72% in 2002. Magazines have also declined from 58% in 2002 to 45% in the current survey. In 2008, 24% of participants named the Internet as a source of nutrition and food information. This figure surpassed that for newspapers and is nearly double the number of people who identified the Internet as a source in 2002. Reports of using the Internet as a source for nutrition information vary widely among age-groups: 42% of those aged 25-34 years use the Internet, while only 5% of those ≥ 65 years of age use the Internet to find nutrition information. In addition to identifying where they get their nutrition information, participants were asked to read a list of sources and rate the credibility of each. At 78%, registered dietitians (RDs) and nutritionists topped the "very credible" list. Eighty-six percent of participants have heard of RDs. By a three-to-one margin, respondents indicated that there is a difference between an RD and a nutritionist. The top 10 "very credible" sources of nutrition information and the percentages of respondents reporting this were: RD

Implications for Diabetes Health Care Professionals

According to the survey, an increasing number of adults in the United States are conscious of nutrition and exercise and are taking steps to eat more healthfully and engage in regular physical activity.
This is good news for diabetes health care professionals. Positive lifestyle changes, including meal planning and physical activity, aid in improving diabetes control and promoting healthy weight. 3 More than 24 million Americans have diabetes, and many more are at risk. 4 The challenge for diabetes health care professionals is to assist people with diabetes in moving from the "I know I should" category to the "I am already doing it" group. To facilitate this behavior change, it is important to encourage regular physical activity and healthful food choices at each encounter with patients or clients. At times, it can be a daunting task to share all the information needed to successfully manage diabetes in a meaningful way. As a result, the message of physical activity may be minimized or left out entirely. Potential benefits of regular exercise are many, including improved blood glucose control, reduction of cardiovascular risk factors, promotion of weight loss, and an enhanced sense of well-being. 5 It is generally accepted to recommend ~ 150 minutes of moderate-intensity exercise per week for people with diabetes. 5 People with diabetes, like those in the general population, are encouraged to choose a variety of fiber-containing foods, including whole grains, legumes, fruits, and vegetables. 3

Figure 1. Top five sources of nutrition information in 2008. 2

More than half of the respondents had increased their intake of whole-grain foods, and approximately half had increased their fruit and vegetable intake. Continuing to promote the favorable message of increasing consumption of these foods is beneficial for people with diabetes. The nutrition trends survey also reported that respondents had decreased their intake of foods containing trans fats and increased their consumption of low-fat foods, the same nutrition messages that they reported they had heard "a lot" about.
This indicates that the messages are being heard, and consumers are responding. For people with diabetes, the primary goal with respect to dietary fat remains limiting intake of saturated fatty acids, trans fatty acids, and cholesterol to reduce the risk for cardiovascular disease. 3 Diabetes health care professionals can assist people with diabetes by translating the science of healthful eating and regular exercise into practical action steps so that the information may be used in a beneficial way. Traditional methods of communicating these messages are face-to-face with clients in groups or individual sessions. Combining the public's most popular nutrition information sources with the credibility of diabetes health care professionals can extend the reach of the message beyond traditional settings. Although being interviewed for the media or writing for consumer print articles may seem overwhelming, the messages from these activities can make a difference. Television and magazines remain the most common sources of nutrition information, with the Internet gaining in popularity. Diabetes health care professionals are encouraged to promote positive, credible nutrition and physical activity messages through these popular information sources to reach more people with diabetes.
Traceability and Reuse Mechanisms, the most important Properties of Model Transformation Languages

Dedicated model transformation languages are claimed to provide many benefits over the use of general purpose languages for developing model transformations. However, the actual advantages associated with the use of MTLs are poorly understood empirically. There is little knowledge and empirical assessment about which advantages and disadvantages hold and where they originate from. In a prior interview study, we elicited expert opinions on which advantages result from which factors, and on a number of factors that moderate these influences. We aim to quantitatively assess the interview results to confirm or reject the effects posed by different factors. We intend to gain insights into how valuable different factors are, so that future studies can draw on these data for designing targeted and relevant studies. We gathered data on the factors and quality attributes using an online survey. To analyse the data, we used universal structure modelling based on a structure model. We used the significance values and path coefficients produced by USM for each hypothesised interdependence to confirm or reject correlations and to weigh the strength of the influences present. We analysed 113 responses. The results show that Tracing and Reuse Mechanisms are the most important factors overall, though the observed effects were generally 10 times lower than anticipated. Additionally, we found that a more nuanced view of moderation effects is warranted. Their moderating influence differed significantly between the different influences, with the strongest effects being 1000 times higher than the weakest. The empirical assessment of MTLs is a complex topic that cannot be solved by looking at a single stand-alone factor. Our results provide a clear indication that evaluations should consider transformations of different sizes and use-cases.
Language development should focus on providing transformation-specific reuse mechanisms.

Introduction

Model driven engineering (MDE) envisions the use of model transformations as a main activity during development (Sendall and Kozaczynski ). When practising MDE, model transformations are used for a wide array of tasks, such as manipulating and evolving models (Metzger ), deriving artefacts like source code or documentation, simulating system behaviour, or analysing system aspects (Schmidt ). Numerous dedicated model transformation languages (MTLs) of different form, aim and syntax (Kahani et al. ) have been developed to aid with model transformations. Using MTLs is associated with many benefits compared to using general purpose languages (GPLs), though little evidence for this has been brought forth (Götz, Tichy, and Groner ). The number of claimed benefits is enormous and includes, but is not limited to, better Comprehensibility, Productivity and Maintainability, as well as easier development in general (Götz, Tichy, and Groner ). The existence of such claims can partially be attributed to the advantages that are ascribed to domain specific languages (DSLs) (Hermans, Pinzger, and Deursen ; Johannes et al. ). In a prior systematic literature review, we have shown that it is still uncertain whether these advantages exist and where they arise from (Götz, Tichy, and Groner ). Due to this uncertainty, it is hard to convincingly argue for the use of MTLs over GPLs for transformation development. This problem is exacerbated when considering recent GPL advancements, like Java Streams, LINQ in C# or advanced pattern matching syntax, that help reduce boilerplate code (Höppner, Kehrer, and Tichy ) and have put GPLs back into the discussion for transformation development. Even a community discussion held at the th edition of the International Conference on Model Transformations (ICMT' ) acknowledges GPLs as suitable contenders (Cabot and Gérard ).
Moreover, the few existing empirical studies on this topic provide mixed and limited results. Hebig et al. found no direct advantage for the development of transformations, but did find an advantage for the comprehensibility of transformation code in their limited setup (Hebig et al. ). A study conducted by us found that certain use cases favour the use of MTLs, while in others the versatility of GPLs prevails (Höppner, Kehrer, and Tichy ). Overall, there exists a gap in knowledge about what the exact benefits of MTLs are, how strong their impact really is, and what parts of the language they originate from. To bridge this gap, we conducted an interview study with experts from research and industry to discuss the topic of advantages and disadvantages of model transformation languages (Höppner et al. ). Participants were queried about their views on the advantages and disadvantages of model transformation languages and the origins thereof. The results point towards three main areas that are relevant to the discussion, namely General Purpose Language Capabilities, Model Transformation Language Capabilities and Tooling. From the responses of the interviewees, we identified which claimed MTL properties are influenced by which sub-areas and why. The interviewees also provided us with insights on moderation effects on these interdependencies caused by different Use-Cases, the Skill & Experience levels of users, and the Choice of Transformation Language. All results of the interview study are qualitative and therefore limited in their informative value, as they do not provide an indication of the strength of influence between the involved variables. It is also not clear whether the influence model is complete and whether the views presented by the interview participants withstand community scrutiny. Therefore, they only represent an initial data set that requires a quantitative and detailed analysis.
In this paper, we report on the results of a study to confirm or reject the interdependencies hypothesised from our interview results. We provide a quantification of the influence strengths and moderation effects. To ensure a more complete theory of interactions, we also present the results of exploring interdependencies between factors and quality properties not hypothesised in the interviews. Due to limited resources, this study focuses on the effects of MTL capabilities (namely Bidirectionality, Incrementality, Mappings, Model Management, Model Navigation, Model Traversal, Pattern Matching, Reuse Mechanisms and Traceability) on MTL properties (namely Comprehensibility, Ease of Writing, Expressiveness, Productivity, Maintainability, Reusability and Tool Support) in the context of their use-case (namely bidirectional or unidirectional, incremental or non-incremental, meta-model sanity, meta-model, model and transformation size, and the semantic gap between input and output), the skills & experience of users, and language choice. Further studies can follow the same approach and focus on different areas. Descriptions of all MTL capabilities and MTL properties can be found in Section , and thorough explanations can be found in our previous works (Götz, Tichy, and Groner ; Höppner et al. ). The goal of our study is to provide quantitative results on the influence strengths of interdependences between model transformation language Capabilities and claimed Quality Properties as perceived by users. Additionally, we provide data on the strength of moderation expressed by contextual properties. The study is structured around the hypothesised interdependencies between these variables, and their more detailed breakdown, extracted from our previous interview study. Each presumed influence of an MTL capability on an MTL property forms one hypothesis which is to be examined in this study. All hypotheses are extended with an assumption of moderation by the context variables.
The system of hypotheses that arises from these deliberations is visualised in a structure model, which forms the basis for our study. The structure model is depicted in Figure . The model shows exogenous variables on the left and right and endogenous variables at the centre. Exogenous variables depicted in an ellipse with a dashed outline constitute the hypothesised moderating variables. All hypotheses investigated in our study are of the form: "<MTL Property> is (positively or negatively) influenced by <MTL Capability>". They are represented by arrows from the exogenous variables on the left of Figure to the endogenous variables at the centre. A moderation of the hypothesised influence is assumed from all exogenous variables on the right of the figure connected to the considered endogenous variable. In total, we investigate hypothesised influences, i.e. the number of outgoing arrows from the exogenous variables on the left of Figure . Our study is guided by the following research questions:

RQ Which of the hypothesised interdependencies withstand a test of significance?
RQ How strong are the influences of model transformation language capabilities on the properties thereof?
RQ How strong are the moderation effects expressed by the contextual factors use-case, skills & experience and MTL choice?
RQ What additional interdependencies arise from the analysis that were not initially hypothesised?

As the first study on this subject, it contains confirmatory and exploratory elements. We intend to confirm which of the interdependencies between MTL capabilities, MTL properties and contextual properties withstand quantitative scrutiny (RQ ). We explore how strong the influence and moderation effects between variables are (RQ & RQ ) to gain new insights and to confirm their significance and relevance (minor influence strengths might suggest irrelevance even if goodness-of-fit tests confirm a correlation that is not purely accidental).
Lastly, we utilise the exploratory elements of USM to identify interdependencies not hypothesised by the experts in our interviews (RQ ). We use an online survey to gather data on the language use and perceived quality of researchers and practitioners. The responses are analysed using universal structure modelling (USM) (Buckler and Hennig-Thurau ) based on the structure model developed from the interview responses. This results in a quantified structure model with influence weights, significance values and effect strengths. Based on the responses from participants, the key contributions of this paper are:

• An adjusted structure model with newly discovered interdependencies;
• Quantitative data on the influence weight and effect strength of all factors, as well as significance values for the influences;
• Quantitative data on the moderation strength of context factors;
• An analysis of the implications of the results for further empirical studies and language development;
• Reflections on the use of USM for investigating large hypothesis systems in software engineering research.

The method used in the reported study has been reviewed and published as part of the Registered Reports track at ESEM' (Höppner and Tichy ). The structure of this paper is as follows: Section provides an extensive overview of model-driven engineering, domain-specific languages, model transformation languages and structural equation modelling as well as universal structure modelling. Afterwards, in Section the methodology is outlined. Demographic data of the responses is reported in Section and the results of the analysis are presented in Section . In Section we discuss the implications of the results and report our reflections on the use of USM. Section discusses threats to the validity of our study and how we met them. Lastly, in Section we present related work before giving concluding remarks on our study in Section .

Background

In this section we provide the necessary background for our study.
Since it is a follow-up study to our interview study (Höppner et al. ), much of the background is the same and is therefore taken from those descriptions. To stay self-contained, we still provide these descriptions. This concerns Sections . to . . Sections . and . contain an extension of our descriptions from the registered report (Höppner and Tichy ).

Model-driven engineering

The Model-Driven Architecture (MDA) paradigm was first introduced by the Object Management Group in (OMG ). It forms the basis for an approach commonly referred to as Model-driven development (MDD) (Brown, Conallen, and Tropeano ), introduced as a means to cope with the ever-growing complexity associated with software development. At its core lies the notion of using models as the central artefact for development. In essence, this means that models are used both to describe and reason about the problem domain and to develop solutions (Brown, Conallen, and Tropeano ). An advantage ascribed to this approach, arising from the use of models in this way, is that they can be expressed with concepts closer to the related domain than when using regular programming languages (Selic ). When fully utilised, MDD envisions the automatic generation of executable solutions specialised from abstract models (Selic ; Schmidt ). To be able to achieve this, the structure of models needs to be known. This is achieved through so-called meta-models, which define the structure of models. The structure of meta-models themselves is then defined through meta-models of their own. For this setup, the OMG developed a modelling standard called Meta-Object Facility (MOF) (OMG ), on the basis of which a number of modelling frameworks such as the Eclipse Modelling Framework (EMF) (Steinberg et al. ) and the .NET Modelling Framework (Hinkel ) have been developed.
Domain-specific languages

Domain-specific languages (DSLs) are languages designed with a notation that is tailored for a specific domain by focusing on relevant features of the domain (Van Deursen and Klint ). In doing so, DSLs aim to provide domain-specific language constructs that let developers feel as if they are working directly with domain concepts, thus increasing the speed and ease of development (Sprinkle et al. ). Because of these potential advantages, a well-defined DSL can provide a promising alternative to using general purpose tools for solving problems in a specific domain. Examples of this include languages such as shell scripts in Unix operating systems (Kernighan and Pike ), HTML (Raggett, Le Hors, Jacobs, et al. ) for designing web pages, or AADL, an architecture design language (SAEMobilus ).

External and Internal transformation languages

Domain specific languages, and MTLs by extension, can be distinguished by whether they are embedded into another language, the so-called host language, or whether they are fully independent languages that come with their own compiler or virtual machine. Languages . Examples of transformation rules are the rules that make up transformation modules in ATL, but also functions, methods or procedures that implement a transformation from input elements to output elements. The fundamental difference between model transformation languages and general-purpose languages that originates in this definition lies in dedicated constructs that represent rules. The difference between a transformation rule and any other function, method or procedure is not clear-cut when looking at GPLs. It can only be made based on the contents thereof. An example of this can be seen in Listing , which contains exemplary Java methods. Without detailed inspection of the two methods it is not apparent which method does some form of transformation and which does not.
In an MTL, on the other hand, transformation rules tend to be dedicated constructs within the language that allow a definition of a mapping between input and output (elements). The example rules written in the model transformation language ATL in Listing make this apparent. They define mappings between model elements of type Member and model elements of type Male, as well as between Member and Female, using rules.

Rule Application Control: Location Determination

Location determination describes the strategy that is applied for determining the elements within a model onto which a transformation rule should be applied. We differentiate two forms of location determination, based on the kind of matching that takes place during traversal. There is the basic automatic traversal in languages such as ATL or QVT, where single elements are matched and transformation rules are applied to them. The other form of location determination, used in languages like Henshin, is based on pattern matching, meaning a model or graph pattern is matched to which rules are applied. This allows developers to define sub-graphs consisting of several model elements and references between them, which are then manipulated by a rule. The automatic traversal of ATL applied to the example from Listing will result in the transformation engine automatically executing the Member2Male rule on all model elements of type Member where the function isFemale() returns false, and the Member2Female rule on all other model elements of type Member. The pattern matching of Henshin can be demonstrated using Figure . It describes a transformation that creates a couple connection between two actors that play in two films together. When the transformation is executed, the transformation engine will try to find instances of the defined graph pattern and apply the changes to the found matches.
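The idea of automatic traversal can be sketched in plain Python (an illustrative toy, not a real transformation engine; the Member, Male and Female types follow the running example, while the guard functions and the RULES table are invented for this sketch):

```python
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    female: bool

@dataclass
class Male:
    full_name: str

@dataclass
class Female:
    full_name: str

# Each rule is a (guard, transform) pair; which elements a rule fires on
# is decided by the "engine" during traversal, not by the rule's caller.
RULES = [
    (lambda m: not m.female, lambda m: Male(full_name=m.name)),
    (lambda m: m.female,     lambda m: Female(full_name=m.name)),
]

def transform(model):
    """Automatic traversal: visit every element and fire matching rules."""
    out = []
    for element in model:
        for guard, rule in RULES:
            if guard(element):
                out.append(rule(element))
    return out
```

Calling `transform([Member("Ada", True), Member("Bob", False)])` yields a Female and a Male element. A pattern-matching engine in the style of Henshin would instead match a sub-graph of several connected elements rather than single elements.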
This highlights the main difference between automatic traversal and pattern matching: the engine will search for a sub-graph within the model instead of applying a rule to single elements within the model. The directionality of a model transformation describes whether it can be executed in one direction, called a unidirectional transformation, or in multiple directions, called a multidirectional transformation (Czarnecki and Helsen ). For the purpose of our study, the distinction between unidirectional and bidirectional transformations is relevant. Some languages provide dedicated support for executing a transformation both ways based on only one transformation definition, while others require users to define transformation rules for both directions. General-purpose languages cannot provide such bidirectional support and require both directions to be implemented explicitly. The ATL transformation from Listing defines a unidirectional transformation. Input and output are defined, and the transformation can only be executed in that direction. The QVT-R relation defined in Listing is an example of a bidirectional transformation definition (for simplicity, the transformation omits the condition that males are only created from members that are not female). Instead of a declaration of input and output, it defines how two elements from different domains relate to one another. As a result, given a Member element, its corresponding Male element can be inferred, and vice versa.

Incrementality

Incrementality of a transformation describes whether existing models can be updated based on changes in the source models without rerunning the complete transformation (Czarnecki and Helsen ). This feature is sometimes also called model synchronisation. Providing incrementality for transformations requires active monitoring of input and/or output models, as well as information about which rules affect which parts of the models.
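The "declare once, run both ways" idea behind a QVT-R-style relation can be approximated with a minimal Python sketch (the binding list and helper names are hypothetical; real bidirectional engines additionally perform consistency checking and model synchronisation):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Member:
    name: str

@dataclass(frozen=True)
class Male:
    full_name: str

# One declarative binding list relates attributes of the two domains.
# Both execution directions are derived from this single specification,
# instead of writing two separate unidirectional transformations.
BINDINGS = [("name", "full_name")]  # (Member attribute, Male attribute)

def forward(member: Member) -> Male:
    return Male(**{tgt: getattr(member, src) for src, tgt in BINDINGS})

def backward(male: Male) -> Member:
    return Member(**{src: getattr(male, tgt) for src, tgt in BINDINGS})
```

Running `forward(Member("Bob"))` yields `Male(full_name="Bob")`, and `backward` inverts it; a GPL without such a declarative relation would require both directions to be written out by hand.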
When a change is detected, the corresponding rules can then be executed. It can also require additional management tasks to be executed to keep models valid and consistent.

Tracing

According to Czarnecki and Helsen ( ), tracing "is concerned with the mechanisms for recording different aspects of transformation execution, such as creating and maintaining trace links between source and target model elements". Several model transformation languages, such as ATL and QVT, have automated mechanisms for trace management. This means that traces are automatically created during runtime. Some of the trace information can be accessed through special syntax constructs, while some of it is automatically resolved to provide seamless access to the target elements based on their sources. An example of tracing in action can be seen in line 16 of Listing . Here, the partner attribute of a Female element that is being created is assigned the value of s.companion. The s.companion reference points towards an element of type Member within the input model. When creating a Female or Male element from a Member element, the ATL engine will resolve this reference into the corresponding element that was created from the referenced Member element via either the Member2Male or Member2Female rule. ATL achieves this by automatically tracing which target model elements are created from which source model elements.

Dedicated Model Navigation Syntax

Languages or syntax constructs for navigating models are not part of any feature classification for model transformation languages. However, this was often discussed in our interviews and thus requires an explanation of what interviewees refer to. Languages such as OCL (OMG ), which is used in transformation languages like ATL, provide dedicated syntax for querying and navigating models. As such, they provide syntactical constructs that aid users in navigation tasks. Different model transformation languages provide different syntax for this purpose.
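The automatic trace resolution described above can be approximated with a two-pass sketch in Python (a toy model of the mechanism, not ATL's implementation; Person stands in for the Male/Female target types):

```python
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    companion: "Member | None" = None

@dataclass
class Person:  # stands in for the Male/Female target types
    full_name: str
    partner: "Person | None" = None

def transform(members):
    trace = {}                       # source element -> created target
    for m in members:                # pass 1: create targets, record trace
        trace[id(m)] = Person(full_name=m.name)
    for m in members:                # pass 2: resolve cross-references
        if m.companion is not None:  # companion resolves via the trace map
            trace[id(m)].partner = trace[id(m.companion)]
    return [trace[id(m)] for m in members]
```

The trace map plays the role that ATL's built-in trace links play: a source-model reference (companion) is resolved into the target element that was created from the referenced source element (partner).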
The aim is to provide specific syntax so that users do not have to manually implement queries using loops or other general-purpose constructs. OCL provides a functional approach for accumulating and querying data based on collections, while Henshin uses graph patterns for expressing the relationship of sought-after model elements.

Structural equation modelling and universal structure modelling

Structural equation modelling (SEM) is an approach used for confirmatory factor analysis (Graziotin et al. ). It defines a set of methods used to "investigate complex relationship structures between variables and allows for quantitative estimates of interdependencies thereof. Its goal is to map the a-priori formulated cause-effect relationships into a linear system of equations and to estimate the model parameters in such a way that the initial data, collected for the variables, are reproduced as well as possible" (Weiber and Mühlhaus ). Structural equation modelling distinguishes between two sets of variables: manifest and latent. Manifest variables are variables that are empirically measured, and latent variables describe theoretical constructs that are hypothesised to interact with each other. Latent variables are further divided into exogenous (independent) and endogenous (dependent) variables. So-called structural equation models, a sample of which can be seen in Figure , comprised of manifest and latent variables, form the heart of the analysis. They are made up of three connected sub-models: the structure model, the measurement model of the exogenous latent variables, and the measurement model of the endogenous latent variables. The structure model defines all hypothesised interactions between exogenous (ξ) and endogenous (η) latent variables. Each exogenous variable is linked, by arrow, to all endogenous variables that are presumed to be influenced by it. Each of these connections is given a variable (γ) that measures the influence strength.
If an exogenous variable moderates the influences on an endogenous variable, the exogenous variable is depicted with a dashed outline and connected to the endogenous variable, and a moderation variable is assigned. In addition, a residual (or error) variable is appended to each endogenous latent variable to represent the influence of variables not represented in the model. Figure shows an example structural equation model for the hypothesis that "Mappings help with the comprehensibility of transformations, depending on the developer's experience." The structure model, seen at the centre of the figure, is comprised of the exogenous latent variable ξ1 (Mappings), the moderating exogenous variable ξ2 (Experience), the endogenous latent variable η1 (Comprehensibility), a presumed influence of Mappings on Comprehensibility via γ11, and the error variable ζ1. Lastly, the model also contains a moderation of Experience on all influences on Comprehensibility. As described earlier, this moderation effect is assigned the variable γ11_2. The moderation variables are not depicted in our graphical representation of the structure model because of their high number and the associated visibility issues. The measurement model of the exogenous latent variables reflects the relationships between all exogenous latent variables and their associated manifest variables. Each manifest variable is linked, by arrow, to all exogenous latent variables that are measured through it. To illustrate moderation, arrows are usually shown from the moderating exogenous variable to the arrow representing the moderated influence, i.e., an arrow between an exogenous variable and an endogenous variable. However, our illustration deviates from this due to the size and makeup of our hypothesis system. Standard representations can be found in the basic literature such as Weiber and Mühlhaus ( ). Each of these connections is given a variable that measures the indication strength of the manifest variable for the latent variable.
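As a numeric illustration, the hypothesised influence of Mappings on Comprehensibility, moderated by Experience, can be written as a linear equation with an interaction term and an error term (the path weights below are invented for illustration, not estimates from the study):

```python
# Hypothetical path weights; estimating such values is the study's goal.
GAMMA_MAPPINGS = 0.4      # direct effect of Mappings usage
GAMMA_MODERATION = 0.1    # additional effect per unit of Experience

def comprehensibility(mappings, experience, error=0.0):
    """Structure equation: eta = gamma*xi1 + gamma_mod*(xi1*xi2) + zeta."""
    return (GAMMA_MAPPINGS * mappings
            + GAMMA_MODERATION * mappings * experience
            + error)

# With a positive moderation weight, experienced developers benefit
# more from Mappings than novices do:
novice = comprehensibility(mappings=1.0, experience=0.0)
expert = comprehensibility(mappings=1.0, experience=2.0)
```

Here the moderation is simply an interaction term: the effect of Mappings on Comprehensibility grows with Experience, which is exactly the kind of effect the moderation variables in the structure model are meant to capture.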
Additionally, an error variable for each manifest variable is introduced that represents measurement errors. In Figure , the measurement model for the exogenous latent variables, seen at the left of the figure, is comprised of the exogenous latent variables ξ1 (Mappings) and ξ2 (Experience), the manifest variables x1 (% of code using Mappings), x2 (number of years a person has been a programmer) and x3 (number of hours per month spent developing transformations), their measurement accuracy for Mappings usage λ11, their measurement accuracies for Experience λ22 and λ32, and the associated measurement errors δ1, δ2 and δ3. The measurement model of the endogenous latent variables reflects the relationships between all endogenous latent variables and their associated manifest variables. It is structured the same way as the measurement model of the exogenous latent variables. In Figure , it is shown on the right of the figure. Given a structural equation model and measurements for the manifest variables, the SEM approach calls for estimating the influence weights and latent variables within the models. This is done in alternation for the measurement models and the structure model until a predefined quality criterion is reached. Traditional methods (covariance-based structural equation modelling & partial least squares) use different mathematical approaches, such as maximum-likelihood estimation or least squares (Weiber and Mühlhaus ), to estimate influence weights. Universal Structure Modeling (USM) is an exploratory approach that complements the traditional confirmatory SEM methods (Buckler and Hennig-Thurau ). It combines the iterative methodology of partial least squares with a Bayesian neural network approach using a multilayer perceptron architecture. USM derives a starting value for the latent variables in the model via principal component analysis and then applies the Bayesian neural network to discover an optimal system of linear, nonlinear and interactive paths between the variables.
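The principal-component starting values that USM derives for latent variables can be illustrated with a small pure-Python power-iteration sketch (a simplified stand-in for NEUSREL's actual procedure; the indicator data below are fabricated):

```python
def first_pc_scores(data):
    """Score each case on the first principal component of its indicators."""
    n, dims = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(dims)]
    x = [[row[j] - means[j] for j in range(dims)] for row in data]
    # sample covariance matrix of the centred manifest indicators
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(dims)] for a in range(dims)]
    # power iteration converges to the dominant eigenvector
    v = [1.0] * dims
    for _ in range(200):
        w = [sum(cov[a][b] * v[b] for b in range(dims)) for a in range(dims)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    # latent score = projection of each case onto that eigenvector
    return [sum(x[i][j] * v[j] for j in range(dims)) for i in range(n)]

# Three respondents measured on two correlated indicators (fabricated data):
scores = first_pc_scores([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
```

The resulting scores serve only as starting values; USM's neural network then refines the relationships between the latent variables.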
This enables USM to identify complex relationships that may not be detected using traditional SEM approaches, including hidden structures within the data, and highlights unproposed model paths, nonlinear relations among model variables, and moderation effects. The primary measures calculated in USM are the 'Average Simulated Effect' (ASE), the 'Overall Explained Absolute Deviation' (OEAD), the 'interaction effect' (IE) and 'parameter significance'. ASE measures the average change in the endogenous variable resulting from a one-unit change in the exogenous variable across all simulations. OEAD assesses the degree of fit between the observed and simulated values of the endogenous variable, capturing the overall explanatory power of the model. IE evaluates the extent to which the effect of one exogenous variable on the endogenous variable depends on the level of another variable. Parameter significance determines whether the estimated coefficients for each exogenous variable in the model are statistically significant at a predetermined level of confidence, which indicates whether the exogenous variable has a meaningful impact on the endogenous variable; it is calculated through a bootstrapping routine (Mooney et al. ). These metrics together provide a comprehensive assessment of the performance and explanatory power of a USM model. USM is recommended for use in situations where traditional SEM approaches may not be sufficient to fully explore the relationships between variables. Using USM instead of traditional structural equation modelling approaches is suggested for studies where there are still uncertainties about the completeness of the underlying hypothesis system and for exploring nonlinearity in the influences (Weiber and Mühlhaus ; Buckler and Hennig-Thurau ). Moreover, its use of a neural network also reduces the requirements for the scale levels of the data, thus allowing the introduction of categorical variables in addition to metric variables (Weiber and Mühlhaus ).
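Two of these measures, ASE and bootstrapped parameter significance, can be sketched as follows (a simplified illustration with a made-up fitted model and a plain linear-regression slope; NEUSREL's actual computations are more involved):

```python
import random

def fitted_model(x):
    # Stand-in for a learned (possibly nonlinear) USM relationship.
    return 0.3 * x + 0.05 * x * x

def average_simulated_effect(model, xs, delta=1.0):
    """Mean change in the endogenous variable for a one-unit increase
    of the exogenous variable, averaged over the observed cases."""
    return sum(model(x + delta) - model(x) for x in xs) / len(xs)

def bootstrap_p_value(xs, ys, n_boot=200, seed=0):
    """Two-sided significance of a slope via a crude bootstrap."""
    rng = random.Random(seed)

    def slope(pairs):
        n = len(pairs)
        mx = sum(x for x, _ in pairs) / n
        my = sum(y for _, y in pairs) / n
        den = sum((x - mx) ** 2 for x, _ in pairs)
        if den == 0:          # degenerate resample; treat slope as zero
            return 0.0
        return sum((x - mx) * (y - my) for x, y in pairs) / den

    data = list(zip(xs, ys))
    slopes = [slope([rng.choice(data) for _ in data]) for _ in range(n_boot)]
    neg = sum(1 for s in slopes if s <= 0)
    pos = sum(1 for s in slopes if s >= 0)
    return 2 * min(neg, pos) / n_boot
```

For the quadratic stand-in model, the one-unit effect grows with x, so the ASE over a sample of cases summarises an inherently nonlinear influence in a single number; the bootstrap p-value reports how often resampled effect estimates cross zero.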
At present, the tool NEUSREL (https://www.neusrel.com) is the only tool available for conducting USM.

MTL Quality Properties

There exists a large body of quality properties that are associated with model transformation languages. In the literature, many claims are made about advantages or disadvantages of MTLs with respect to these different properties. We categorised these properties in a previous work of ours (Götz, Tichy, and Groner ). This study focuses on a subset of all the identified quality properties of MTLs, which therefore need to be properly explained. In this section, we give a brief description of our definitions of each of the quality properties of MTLs relevant to the study.

Comprehensibility describes the ease of understanding the purpose and functionality of a transformation based on reading code.
Ease of Writing describes the ease with which a developer can produce a transformation for a specific purpose.
Expressiveness describes the amount of useful dedicated transformation concepts in a language.
Productivity describes the degree of effectiveness and efficiency with which transformations can be developed and used.
Maintainability describes the degree of effectiveness and efficiency with which a transformation can be modified.
Reusability describes the ease of reusing transformations or parts of transformations to create new transformations (with different purposes).
Tool Support describes the amount of quality tools that exist to support developers in their efforts.

Methodology

The methodology used in this study has been reviewed and published as part of the Registered Reports track at ESEM' (Höppner and Tichy ). In the following, we provide a more detailed description and highlight all deviations from the reported method, as well as justifications for the changes. The study itself is comprised of the following steps, which were executed sequentially and are reported on in this section. The steps executed differ in two ways from those reported in the registered report.
First, we did not contact potential participants a second time after two weeks. This was deemed unnecessary based on the number of participants at that point in time. Moreover, we did not want to bother those who had already participated, and we had no way of knowing their identities. Second, we kept the survey open weeks longer than intended due to receiving several requests to do so.

Survey Design

In this section we detail the design of the questionnaire and the methodology used to develop and distribute it.

Questionnaire

The questions in the questionnaire are designed to query data for measuring the latent variables from the structure model in Figure . The complete questionnaire can be found in Appendix B. In the following, we describe each latent variable and explain how we measure it through questions in the questionnaire. There are latent variables relevant to our study. Variables ξ1..ξ19 describe exogenous variables and η1..η7 describe endogenous variables. Each latent variable is measured through one or more manifest variables. Extending the structure model from Figure with the manifest variables produces the complete structural equation model evaluated in this study. Note that USM reduces the requirements for the scale levels of the data, thus allowing the use of categorical variables in addition to metric variables (Weiber and Mühlhaus ). All latent variables related to MTL capabilities (ξ1..ξ9) are associated with a single manifest variable x1..x9, which measures how frequently the participants utilised the MTL capabilities in their transformations. This measurement is represented as a ratio ranging from 0% to 100%. The higher the value of x1..x9, the more frequently the participants used the MTL capabilities in their transformations.
Similarly, latent variables related to MTL properties ( 1..7 ) are associated with a single manifest variable 1..7 , which measures the perceived quality of the property on a -point Likert scale (e.g., very good, good, neither good nor bad, bad, very bad). The use of single-item scales is a debated topic. We justify their usage for the described latent variables on multiple grounds. First, the latent variables are of high complexity due to the abstract concepts they represent. Second, our study aims to produce first results that need to be investigated in more detail in follow-up studies that focus more narrowly on single aspects of the model. And third, due to the size of our structural equation model, multi-item scales for all latent variables would increase the size of the survey, potentially putting off many subjects. The validity of these deliberations for using single-item scales is supported by Fuchs and Diamantopoulos ( ). The latent variable language choice ( 10 ) is measured by querying participants to list their most recently used transformation languages. In our registered report we planned to also request participants to give an estimate on the percentage of their respective use % ( 10 ). This was discarded during pilot testing as it was seen as unnecessarily prolonging the questionnaire. Pilot testers had difficulties providing accurate data and questioned whether this data would actually be used in the analysis. Language skills ( 11 ) is measured through 11 and 12 , for which participants are asked to give the number of years they have been using each language ( 11 ) and the number of hours they use the language per month ( 12 ). Meta-model size ). To formulate the semantic gap between input and output ( 16 ), we elicit the similarity of the structure ( 18 ) and data types ( 19 ) on a -point Likert scale (very similar, similar, neither similar nor dissimilar, dissimilar, very dissimilar).
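The mapping from questionnaire answers to manifest-variable values described above can be sketched as follows. The function names and the numeric coding are our own illustrative assumptions; they are not the exact coding used internally by NEUSREL:

```python
# Illustrative encoding of manifest variables for the structural equation
# model: capability usage is a ratio in [0, 1], and a perceived MTL-property
# quality is a 5-option Likert answer mapped onto an ordinal value 1..5.
# This coding scheme is an assumption for illustration only.

LIKERT_5 = {
    "very bad": 1,
    "bad": 2,
    "neither good nor bad": 3,
    "good": 4,
    "very good": 5,
}

def encode_capability_usage(percent: float) -> float:
    """Convert a usage percentage (0-100) into a ratio in [0, 1]."""
    if not 0.0 <= percent <= 100.0:
        raise ValueError("usage must be between 0 and 100 percent")
    return percent / 100.0

def encode_property_rating(answer: str) -> int:
    """Map a Likert answer onto an ordinal value 1..5."""
    return LIKERT_5[answer.lower()]

# Example: a participant who used tracing in 75% of their transformations
# and rates comprehensibility as "good".
row = {
    "x_traceability": encode_capability_usage(75.0),
    "y_comprehensibility": encode_property_rating("Good"),
}
```

Ratio-scaled and ordinal answers can then both enter the analysis, since USM relaxes the scale-level requirements as noted above.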
Participants are asked to give the percentage of all their meta-models that fall within each of the five assessments. The meta-model sanity ( 17 ) is measured by how well participants perceive their structure ( 20 ) and their documentation ( 21 ) to be on a -point scale (very well, well, neither well nor bad, bad, very bad). Participants are again asked to give the percentage of all their meta-models that fall within each of the five assessments. Lastly, for both bidirectional uses ( 18 ) and incremental uses ( 19 ), we query participants on the ratio of bidirectional ( 22 ) and incremental ( 23 ) transformations compared to simple uni-directional transformations they have written. . . Pilot Study We pilot tested the study with three researchers from the institute. All pilot testers are researchers in the field of model-driven engineering with more than years of experience. Based on their feedback, we reworded some questions, removed the usage-percentage part of the question for language choice, and added more precise descriptions of the queried concepts. We then made the questionnaire publicly available and distributed a link to it via emails. . . Target Subjects & Distribution The target subjects are both researchers and professionals from industry who have used dedicated model transformation languages to develop model transformations in the last five years. We used voluntary and convenience sampling to select our study participants. Both authors reached out to researchers and professionals they knew personally via mail and requested them to fill out the online survey. We further reached out, via mail, to all authors of publications listed in ACM Digital Library, IEEE Xplore, Springer Link and Web of Science that contain the keyword model transformation from the last five years. A third source of subjects was drawn from social media.
The authors used their available social media channels to recruit further subjects by posting about the online survey on the platforms. The social media platform used for distribution was MDE-Net , a community platform dedicated to model-driven engineering. The sampling method differs from the intended method by not including snowball sampling as a secondary sampling method. We decided on this to have more control over the subjects receiving a link to the study, as we believe secondary and tertiary contacts might be too far removed from our target subjects. Participation was voluntary and we did not incentivise participation by offering rewards. This decision is rooted in our experience from previous studies: one other survey with subjects (Groner et al. ) and the interview study we are basing this study on with subjects (Höppner et al. ). It is suggested in the literature to have between and times as many participants as the largest number of parameters to be estimated in each structural equation (i.e., the largest number of incoming paths for a latent model variable) (Buckler and Hennig-Thurau ). Thus, the minimal number of subjects for our study to achieve stable results is . To gain any meaningful results, a sample size of must not be undercut (Buckler and Hennig-Thurau ). In total, we contacted potential participants and got responses, exceeding the minimum requirement for stable results. . Data Analysis We use USM to examine the hypothesis system modelled by the structure model shown in Figure . USM is chosen over its structural equation modelling alternatives because it can better handle uncertainty about the completeness of the hypothesis system under investigation, it has more capabilities to analyse moderation effects, and it is able to investigate non-linear correlations (Weiber and Mühlhaus ). USM requires a declaration of an initial likelihood of an interdependence between two variables.
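The sample-size rule of thumb cited above (a multiple of the largest number of parameters estimated in any single structural equation) can be written down as a small helper. The exact multipliers are elided in the text, so they appear here as parameters; all function names and example numbers are our own illustrations:

```python
def minimum_sample_size(max_incoming_paths: int, multiplier: int) -> int:
    """Rule-of-thumb minimum number of participants for stable USM results:
    a multiple of the largest number of incoming paths of any latent
    variable in the structure model. The multiplier comes from the
    literature and is passed in here rather than hard-coded."""
    if max_incoming_paths < 1 or multiplier < 1:
        raise ValueError("arguments must be positive")
    return max_incoming_paths * multiplier

def response_rate(responses: int, contacted: int) -> float:
    """Share of contacted subjects who responded, as a percentage."""
    return 100.0 * responses / contacted

# Hypothetical numbers: a latent variable with 7 incoming paths and a
# multiplier of 5 would require at least 35 participants; 48 responses
# from 1000 contacted subjects would be a 4.8% response rate.
needed = minimum_sample_size(7, 5)
rate = response_rate(48, 1000)
```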
This is used as a starting point for calculating influence weights but can change over the course of the calculation. For this, Buckler and Hennig-Thurau ( ) suggest to only assign a value of to those relationships that are known to be wrong. We use the results of our interview study (Höppner et al. ), shown in the structure model, to assign these values. For each path that is present in the model, we assume a likelihood of %. To check for interdependencies that might have been missed by interview participants, we also use a likelihood of % for all missing paths between 1..19 and 1..7 . (The MDE-Net platform is available at https://mde-network.com/. The responses we received constitute a response rate of 4.8%; we do, however, not know how many responses were a result of our social media posting.) Our plan was to use a likelihood of % for these interdependencies, but the tool available to us only allowed either % or % as input. The tool NEUSREL is used on the extracted empirical data and the described additional input to estimate path weights and moderation weights within the extended structure model, i.e., the structure model where each exogenous latent variable is connected to all endogenous latent variables. It also runs significance tests via a bootstrapping routine (Buckler and Hennig-Thurau ; Mooney et al. ) and produces the significance value estimates for each influence. The following procedures are then followed to answer the research questions from Section . RQ . We reject all hypothesised influences, i.e., those present in our structure model in Figure , that do not pass the statistical significance test. The threshold we set for this is 0.01. Moreover, we discard hypothesised influences with minimal effect strengths that are several magnitudes lower than the median influence of all coefficients. If, for example, the median of all path coefficients is . , all influences with a coefficient lower than or equal to . are discarded. We do so because such low influences suggest that the influence is negligible.
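The two-step rejection procedure for RQ1, drop influences that fail the significance test at the 0.01 threshold and then drop influences whose coefficient is negligibly small relative to the median, can be sketched like this. The data layout, the function name, and the factor encoding "several magnitudes lower" are our own illustrative choices:

```python
import statistics

def filter_influences(influences, p_threshold=0.01, magnitude_gap=10.0):
    """Apply the RQ1 rejection criteria to a list of estimated influences.

    influences: list of dicts with keys 'path', 'coefficient', 'p_value'.
    An influence is kept if it passes the significance test
    (p < p_threshold) and its absolute coefficient is not far below the
    median absolute coefficient of the significant influences.
    """
    significant = [i for i in influences if i["p_value"] < p_threshold]
    if not significant:
        return []
    median = statistics.median(abs(i["coefficient"]) for i in significant)
    return [i for i in significant
            if abs(i["coefficient"]) > median / magnitude_gap]

# Invented example values: the insignificant path (p = 0.2) and the
# negligibly small coefficient (0.0009) are both rejected.
influences = [
    {"path": "Traceability->Comprehensibility",
     "coefficient": 0.29, "p_value": 0.001},
    {"path": "Bidirectionality->Comprehensibility",
     "coefficient": 0.0009, "p_value": 0.005},
    {"path": "ModelManagement->ToolSupport",
     "coefficient": 0.12, "p_value": 0.2},
]
kept = filter_influences(influences)
```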
RQ & RQ . All path coefficients that were not rejected in RQ then provide direct values for the influence and moderation strengths to answer RQ . The same significance criteria we applied to all hypothesised influences for RQ we also apply to the extended influences, i.e., those not present in the structure model from Figure . Those influences that pass the significance test are added to the initial structural model as newly discovered influences. . Privacy and Ethical Concerns All participants were informed of the data collection procedure, the handling of the data and their rights prior to filling out the questionnaire. Participation was completely voluntary and not incentivised through rewards. During the selection of potential participants the following data was collected and processed. The questionnaire did not collect any sensitive or identifiable data. All data collected during the study was not shared with any person outside of the group of authors. The complete information and consent form can be found in Appendix D. The study design was not presented to an ethics board. The basis for this decision are the rules of the German Research Foundation (DFG) on when to involve an ethics board in the humanities and social sciences. We refer to these guidelines because there are none specifically for software engineering research, and the humanities and social sciences are the closest related branch of science for our research. Demographics We detail the background and experience of the participants in our study in the following sections. . Experience in developing model transformations ( 12 ) Our survey captured model transformation developers with a wide range of experience. The experience span ( 13 ) ranges from the least experienced participant with half a year of experience up to the most experienced with years of experience. Figure shows a histogram of the experience stated by participants.
Over half of all participants have between and ten years of experience in writing model transformations. Three stated that they have more than years in total. On average, our participants have years of experience. How much time participants spend developing transformations each month ( 14 ) also varies greatly. Some participants have not developed transformations in recent times, whereas others stated that they spend or more hours each month on transformation development. Figure shows an overview of the hours participants spend each month on developing transformations. The vast majority spends around to hours each month on transformation development. Nine stated that they did not develop any transformations in recent times. On average, our participants spend about hours per month developing model transformations. . Languages used for developing model transformations ( 10 ) and experience therein ( 11 ) To develop their transformations, participants use a wide array of languages. In total, languages ( 10 ) have been named, of which are unique languages used by only a single participant. Surprisingly, the language that has been used by the most participants is Java, a general-purpose language. Java has been used by of the participants. The most used MTL is ATL with users, closely followed by another GPL, namely Xtend with users. Table shows how many participants use one of the ten most used languages for developing transformations. Overall, the prevalence of general-purpose programming languages is higher than expected. This might be explained by the large number of existing MTLs, which reduces the number of users per language, while only four different GPLs are used. . Sizes ( 12 , 14 ) The size distribution of meta-models ( 15 ) transformed by participants is shown in Figure . On the x-axis the given intervals of meta-model sizes are shown and on the y-axis the distribution for each participant is shown.
For example, the first ridge line at the bottom of Figure shows the answers of a participant who stated that % of their transformations revolve around meta-models with or fewer meta-model elements. The figure illustrates that most transformations involve meta-models with to meta-model elements. Moreover, most participants have some experience with small meta-models, while only a handful of them have experience with transformations involving large meta-models of more than . elements. The size distribution of model transformations ( 17 ) written by participants is shown in Figure . Similarly to the meta-model sizes, the figure illustrates that most participants have some experience with small transformations of sizes up to lines of code. Most also have experience with large transformations of up to . lines of code. More than % of all participants also have experience with large and very large transformations ranging from . up to more than . lines of transformation code. Overall, the experience of our participants includes many moderately large to large transformations. This strengthens our assumption that their answers are meaningful for our study. ( 17 ) Participants agreed that the vast majority of meta-models they transform are well structured ( 20 ). This means there is little to no additional burden put onto development solely due to unfavourably structured meta-models. The distribution of structure assessments per participant is shown in Figure . The situation is different with documentation ( 21 ). Most participants stated that they have experience with badly or even very badly documented meta-models (Figure ). For many participants, this constitutes the majority of the meta-models they work with. Results In this section, we present the results of our analysis of the questionnaire responses using universal structure modelling, structured around the research questions RQ –.
The quantitative results for all influences between MTL capabilities and MTL properties are shown in Table . The rest of this section presents our results in the context of the four research questions. We focus on the most salient influences that we deem interesting for the respective research question. Detailed interpretation and discussion of the implications of the presented results are done in Section . . RQ : Which of the hypothesised interdependencies withstand a test of significance? & RQ : What additional interdependencies arise from the analysis that were not initially hypothesised? Our first research question is aimed at evaluating the accuracy of the structure model developed in the previous study (Höppner et al. ). We do so by subjecting all hypothesised influences to a significance test during analysis. The significance test can also be used to directly gain insights into interdependencies missed in the initial model. Thus, we discuss both the rejection of previously hypothesised influences and the extension of the model through newly discovered significant interdependencies in this section. Most initially hypothesised influences withstand the test of significance, but there are several exceptions. Most notably, all but one (Maintainability) of the hypothesised Regarding the moderating effects, our findings suggest that a nuanced view is warranted. The hypothesis that context moderates all influences on an MTL Property still holds, but the strength of the moderation effects varies greatly. As hypothesised, we are able to observe that Comprehensibility and Ease of Writing are the two properties moderated by the most context variables. But the moderation is only significant for a handful of influences on these properties. This can be seen, e.g., in the moderation effects of Meta-Model Size on influences on Comprehensibility depicted in Table in Appendix A.
Changes in the meta-model sizes participants worked with had next to no effect on how their usage of Bidirectionality functionality affected their view on the Comprehensibility of transformations. The impact on the influence of Model Management on Comprehensibility is orders of magnitude higher. Another observation that stands out is the impact of Language Choice and Language Experience. The moderation effects of both variables are negligible or even for all influences. We believe this is due to the large number of languages considered in this study, which makes analysing the effects of choosing one of the languages difficult. Overall, the results for research questions RQ & suggest that our initial structure model contains many relevant interdependencies but that several more have to be considered as well. We do have to reject several direct influences due to low significance, and moderation effects have to be considered on a per-influence basis instead of being generalised for each MTL Property. . RQ : How strong are the influences of model transformation language capabilities on the properties thereof? Our second research question is intended to provide numbers that can help to identify the most important factors to consider when evaluating the advantages and disadvantages of model transformation languages empirically. We do this by considering both the average simulated effect of influences calculated by NEUSREL as well as the overall explained absolute deviation of influences compared to each other. As explained earlier in this section, all numbers can be found in Table . Overall, the effects identified in our analysis are lower than anticipated. They range from . down to . e- . We expected some effects to be low, mainly those from non-significant interdependencies, but the fact that even significant effects are in the order of . is surprising.
We assume this stems from the large number of variables that are involved and the overall complexity of the matter under investigation. Nonetheless, we believe there are meaningful insights that can be drawn when comparing the influences for each MTL Property with each other. Of the influences hypothesised from our previous interview study, Traceability is the most impactful MTL Capability. Its usage exerts the highest influence on perceived Comprehensibility with 0.29. Similarly, it has the highest influence for Ease of Writing, though with a value of 0.0021 the effect is small. We were, however, already able to show empirical evidence that MTLs utilising automatic trace handling provide clear advantages for writing transformations compared to GPLs (Höppner, Kehrer, and Tichy ). Please note that the significance values obtained through the NEUSREL tool may exhibit reduced accuracy compared to standard approaches due to the bootstrapping method used for their estimation. For the properties Tool Support, Maintainability, Productivity and Reusability, the availability of Reuse Mechanisms seems to be the strongest driving factor, with average simulated effects of 0.1, 0.1, 0.1 and 0.2, respectively. No other factor has an ASE or effect strength as high as Reuse Mechanisms for these properties. This result is surprising, as these influences were not raised even once during our interview study. Overall, automatic tracing and reuse mechanisms appear to be the most influential factors for MTL properties. This suggests to us two main pathways for further research. First, to improve model transformation languages, more research should be devoted to developing effective ways to reuse transformations or parts of transformations. From our experience, current mechanisms are hard to use and are especially unsuited for differing use-cases.
Secondly, the first area to address for improved adoption of model transformation concepts in general-purpose languages should be the development of mechanisms for automatic trace handling. . RQ : How strong are moderation effects expressed by the contextual factors use-case, skills & experience and MTL choice? As expressed in Section . , the results of our analysis suggest that a more nuanced view of moderation effects is warranted. In this section we go into detail on these nuances. As hypothesised, the size of meta-models moderates the influences on Comprehensibility. The moderation strength differs greatly between the different causing factors, though. For example, Meta-model size exerts the strongest moderation on the influence of Model Management on Comprehensibility with 0.14. All other moderation effects are far lower. The second highest moderation effect, the moderation of Meta-model size on the influence of Traceability on Comprehensibility, is about half as strong (0.0778), and the lowest, the moderation of Meta-model size on the influence of Bidirectionality functionality on Comprehensibility, is only 0.0009. The moderations make sense intuitively, as larger meta-models would make implementing these tasks manually more labour-intensive and thus clutter the code unnecessarily. Model size exerts similar moderation effects as meta-model size. Its strongest moderation effect is also on the influence of Model Management on Comprehensibility (0.36). Moreover, Model size also strongly moderates the influence of Traceability functionality on the Ease of Writing transformations (0.17). Most other moderation effects of Model size are far lower than 0.1. Transformation size seems to be the most relevant moderating factor across the board. It has many noteworthy moderation effects on all influences of MTL Capabilities on Tool Support, none being less than 0.16, and on Productivity, most being above 0.12.
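A moderation effect of the kind analysed here can be illustrated with a simple linear model in which a context variable shifts the strength of an influence. USM/NEUSREL estimates such effects with a non-linear neural approach, so the linear form and all coefficient values below are a simplified sketch of the concept, not our estimated values:

```python
def moderated_influence(x, z, base_coefficient, moderation_weight):
    """Contribution of an exogenous variable x to an endogenous variable,
    where the path coefficient is shifted by a moderator z (e.g. model
    size). All values are assumed to be standardised:
        effect = (base_coefficient + moderation_weight * z) * x
    """
    return (base_coefficient + moderation_weight * z) * x

# With a hypothetical base coefficient of 0.2 and moderation weight 0.36,
# the same capability usage x = 1.0 contributes more for larger models
# (z = +1) than for smaller ones (z = -1):
small = moderated_influence(1.0, z=-1.0,
                            base_coefficient=0.2, moderation_weight=0.36)
large = moderated_influence(1.0, z=+1.0,
                            base_coefficient=0.2, moderation_weight=0.36)
# small is about -0.16, large about 0.56: the influence grows with the
# moderator, which is what a strong positive moderation weight expresses.
```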
We assume this is because the larger transformations get, the more reliant developers are on tooling and abstractions that reduce the development effort. Overall, the size of transformations is, in our opinion, the most relevant moderating variable. The assumption on the relevance of language choice could, however, not be confirmed. This is most likely due to the large number of languages each participant has had experience with, which weakens the ability to elicit the effect of differences in language choice between participants. Discussion The results of our analysis provide useful insights for research on model transformation languages. In this section, we discuss the implications of our results for the evaluation and development of MTLs. Additionally, we provide a critical evaluation of our methodology with regard to the goals of this study. . Implications of results The topic of influences on the quality properties of model transformation languages is vastly complex, as reflected in the already large structure model which we set out to analyse. While we were able to reject some of the hypothesised influences, our analysis also identified several new influences. As a result, the structure model depicting the influences grew in complexity, further highlighting the need for comprehensive studies of the factors that influence MTL quality properties. The updated structure model can be seen in Figure . It contains more interdependencies than the one we started our analysis with. Our analysis produced a number of interesting observations that have important implications for further research. In particular, we now discuss the implications for empirical evaluations. Additionally, we highlight the implications of our results for the further development of MTLs and domain-specific features thereof. . .
Suggestions for further empirical evaluation studies Traceability is one of the most important factors to consider when it comes to the development of model transformations. This is because it has the strongest influence on the perceived quality of both the ease of writing and the comprehensibility of the resulting code. It is crucial to consider scenarios where tracing is involved in order to properly evaluate the value of MTL abstractions for writing and comprehending transformations. Additionally, it is important to evaluate scenarios where tracing is not necessary to understand the difference that MTL abstractions can make. To truly understand the relevance of this feature, it is also important to assess how many real-world use cases require it. By taking all of these factors into account, it is possible to gain a comprehensive understanding of the value of MTL abstractions for writing and comprehending transformations. For the evaluation of Maintainability, Reuse Mechanisms as well as Model Traversal functionality are important capabilities to consider. We therefore believe that researchers focusing on such an evaluation must make sure to use transformations that utilise these capabilities. Moreover, the most important context to consider is the semantic gap between input and output meta-models. Empirical evaluations focusing on maintainability should therefore make sure to evaluate transformation cases with varying degrees of differences between input and output meta-models. These studies should then analyse how much the effectiveness of MTLs and GPLs changes in light of the semantic gap between input and output. When selecting transformations for evaluation, it is essential to consider their size. Our results have shown that size has the most significant impact on the influence of other factors on properties. Put differently, the larger the transformation, the more noticeable the effect of all capabilities will be.
As such, it is imperative to focus on large transformation use-cases when designing a study to evaluate MTLs. . . Suggestions on language development For us, the most surprising finding of this study is the importance of reuse functionality. The quality attributes tool support, maintainability, productivity and reusability are all most influenced by it. This is especially surprising because there was no indication of this in our interviews (Höppner et al. ). We suppose this influence stems from the fact that reuse mechanisms allow for more abstraction and thus less code, which can be developed and maintained more efficiently. As a result, we believe that more focus should be put on developing transformation-specific reuse mechanisms. We are aware that some languages, e.g. ATL, already provide general reuse mechanisms through concepts like inheritance. However, these concepts are limited by the fact that they rely on the object-oriented nature of the involved models. This means that they can only be used to define reusable code within transformations of a single meta-model. Defining transformation behaviour that can be reused between different meta-models is not possible, but this would be important to further reduce redundancy in transformation development. As a result, we believe that the development of reuse mechanisms tailored to MTs is important to focus on. In order to stand out compared to the reuse mechanisms of GPLs, it may be valuable to explore ways to define and reuse common transformation patterns independently of meta-models. Higher-order transformations are sometimes used to allow reuse too (Kusel et al. ), but from our experience current implementations are too cumbersome to be used productively. Chechik et al. ( ) provide a number of suggestions for transformation-specific reuse mechanisms, but to the best of our knowledge there exist no implementations of their concepts. .
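The idea of defining a transformation pattern once, independently of any concrete meta-model, and reusing it across transformations can be sketched in general-purpose code. This is our own illustration of the concept using plain dictionaries as model elements; it is not a feature of ATL or any existing MTL:

```python
def make_rename_rule(attribute: str, rename):
    """Build a reusable transformation rule that rewrites one attribute of
    any element that has it, regardless of which meta-model the element
    conforms to. 'rename' is a function from the old value to the new one.
    """
    def rule(element: dict) -> dict:
        out = dict(element)  # leave the source element untouched
        if attribute in out:
            out[attribute] = rename(out[attribute])
        return out
    return rule

# The same pattern is reused for elements of two unrelated meta-models,
# which inheritance-based reuse within a single meta-model cannot express.
prefix_rule = make_rename_rule("name", lambda n: "legacy_" + n)

uml_class = {"kind": "Class", "name": "Customer"}
er_entity = {"kind": "Entity", "name": "Order", "keys": ["id"]}

migrated_class = prefix_rule(uml_class)   # name becomes "legacy_Customer"
migrated_entity = prefix_rule(er_entity)  # name becomes "legacy_Order"
```

A dedicated MTL mechanism along these lines would additionally need to handle tracing, typing against meta-models, and rule composition, which is precisely where the design space for transformation-specific reuse lies.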
Interesting observations outside of USM When discussing model transformation languages, it is often stated that they are only demonstrated on 'toy examples' that have little to no real-world value. This argumentation has, for example, been raised several times in our previous interview study (Höppner et al. ). However, the demographic data collected in our study disputes this. There are several participants that stated to have worked solely on small transformations with small meta- and input models. But this group is opposed by a similarly large group of participants that have worked with huge transformations, dissimilar and large meta-models as well as large inputs. From this we conclude that there are large use-cases where model transformations and MTLs are applied, but they rarely get described in publications. It seems likely that such examples are not used for highlighting important aspects authors want to discuss due to the space that describing such cases would take up. However, we argue that it is paramount that such case studies are published to diminish the cynicism that MTLs are only useful for small examples. Another noteworthy observation based on the demographic data of our participants is that documentation pertaining to meta-models is predominantly perceived as inadequate. We believe that this is primarily due to the fact that many of the meta-models stem from research projects that prioritise expeditious prototyping over the long-term viability of the artefacts. Nonetheless, we are convinced that there is an urgent need to enhance the documentation surrounding model transformations. This issue is not limited solely to the meta-models, but also extends to the languages, which are known for their challenging learning curve because of a lack of tutorials (Höppner et al. ). .
Critical Assessment of the used methodology The appeal of using structural equation modelling for analysing the responses to our survey was to have a method of analysis that can be used to investigate a complex hypothesis system in its entirety. Moreover, analysis is straightforward after an initial setup due to the sophisticated tooling for this methodology. Instead of presenting participants with a case that they should assess, we opted for querying them on their overall assessment of MTL quality attributes. These design decisions have implications and ramifications that we discuss in this section. First, the effects observed in our study are small. We assume this stems from the intricate and large structure model and the comparatively small sample size. As explained in Section , it is suggested to have between and times as many participants as the largest number of parameters to be estimated in each structural equation. In light of the newly discovered paths in our structure model, the total participants are close to the minimum sample size required. Moreover, because of the large number of influences, we expect the influence of a single factor to be much smaller than in structure models where only a few factors are relevant. The results therefore reinforce our assessment that this is a very complex topic. We also ran into some difficulties when using NEUSREL to analyse our data. The structure model was so large that the tool sometimes crashed during calculations. The online tooling to set everything up was also painfully inefficient, leading to more problems during setup like browser crashes. It took us some trial and error to find a way to get everything set up and run the analysis without crashes. We chose to execute a study based on our study design in hopes of producing a complete theory independent of the use case under consideration. The results exhibit smaller effect strengths, but we believe them to be more externally valid.
Nonetheless, we think that several additional studies need to be conducted to confirm our results for different use-cases. Threats to validity Our study is carefully designed and follows standard procedures for this type of study. There are, however, still threats to validity that stem from design decisions and limitations. In this section we discuss these threats. . Internal Validity Internal validity is threatened by manual errors and biases of the involved researchers throughout the process. The two activities where such errors and biases can be introduced are the subject selection and the question creation. The selection criteria for study subjects are designed in such a way that no ambiguities exist during selection. This prevents researcher bias. The survey questions and the answers to the questions pose another threat to internal validity. We used neutral questions to prevent subconsciously influencing the opinions of research subjects. We also provide explanations for ambiguous terms used in the survey. However, there are several instances where we cannot fully ensure that each participant interprets terms the same way. The questions on quality properties of model transformation languages leave room for interpretation in that we do not provide a clear metric for what terms such as 'Very Comprehensible' or 'Very Hard to write' mean. Similarly, the questions on meta-model quality leave room for interpretation on the side of participants. We opted for this limitation because there are no universal ways to quantify such estimates and because the subjective assessment is what we want to collect. The reason for this is that subjective experiences are the main driving factor for all discussions on development when people are the main subject. To ensure overall understandability and prevent errors in the setup of the survey, we used a pilot study. .
External Validity

External validity is threatened by our subject sampling strategy and by the limitations on the survey questions imposed by the complexity of the subject matter. We utilise convenience sampling. Convenience sampling can limit how representative the final group of interviewees is. Since we do not know the target population's makeup, it is difficult to assess the extent of this problem. Using research articles as a starting point introduces a bias towards researchers. There is little potential to mitigate this problem during the study design, because there exists no systematic way to find industry users. Due to the complexity and abstractness of the concepts under investigation, a measurement via reflective or formative indicators is not possible. Instead we use single-item questions. We further assume that positive and negative effects of a feature are more prominent if the feature is used more frequently. This can have a negative effect on the external validity of our results. However, we consciously accepted these limitations to be able to create a study that concerns itself with all factors and influences at once.

Construct Validity

Construct validity is threatened by inappropriate methods used for the study. Using the results of online surveys as input for structural equation modelling techniques is common practice in market research (Weiber and Mühlhaus ). It is less common in computer science. However, we argue that it is an appropriate methodology for the purpose of our study. This is because the goal of extracting influence strengths and moderation effects of factors on different properties aligns with the goals of market research studies that employ structural equation modelling.

Conclusion Validity

Conclusion validity is mainly threatened by biases of our survey participants. It is possible that people who do research on model transformation languages or use them for a long time are more likely to see them in a positive light.
As such, there is a risk that too few critical experiences will be reported in our survey. However, this problem did not present itself in a previous study of ours on the subject matter (Höppner et al. ). In fact, researchers were far more critical in dealing with the subject. As a result, there might be a slight positive bias in the survey responses, but we believe this to be negligible.

Related Work

There are numerous works that explore the possibilities gained through the usage of MTLs, such as automatic parallelisation (Sanchez Cuadrado et al. ; Biermann, Ermel, and Taentzer ; Benelallam et al. ), verification (Lano, Clark, and Kolahdouz-Rahimi ; Ko, Chung, and Han ), or simply the application of difficult transformations (Anastasakis et al. ). There is, however, only a small number of works trying to evaluate the languages to gain insights into where the specific advantages or disadvantages associated with the use of MTLs originate. Several other works that can be related to our study also exist. One such work is a community survey on model transformation languages. The goal of the survey was to identify reasons why developers decided to use or dismiss MTLs for writing transformations. The authors also tried to gauge the community's sentiment on the future of model transformation languages. At ICMT, where the results of the survey were presented, they then held an open discussion on this topic and collected the responses of participants. Their results show that MTLs have fallen in popularity. They attribute this to three types of issues, namely technical issues, tooling issues, and social issues, as well as to the fact that GPLs have assimilated many ideas from MTLs. The results of their study are a major driver in the motivation of our work. While they identified issues and potential avenues for future research, their results are qualitative and broad, which we try to improve upon with our study. In a prior study of ours (Götz, Tichy, and Groner ), we conducted a structured literature review, which forms the basis of much of our work since then.
The literature review aimed at extracting and categorising claims about the advantages and disadvantages of model transformation languages as well as the state of empirical evaluation thereof. We searched a large number of publications for this purpose and extracted statements that directly claim properties of MTLs. The claims found were categorised into quality properties of model transformation languages. The results of the study show that few, if any, empirical studies evaluating MTLs exist and that there is a severe lack of context and background information that further hinders their evaluation. Lastly, there is our interview study (Höppner et al. ), the data of which forms the basis for the study reported here. We interviewed people on what they believe the most relevant factors are that facilitate or hamper the advantages of MTLs for the different quality properties identified in the prior literature review. The interviews brought forth insights into the factors from which the advantages and disadvantages of MTLs originate, as well as suggesting a number of moderation effects on the effects of these factors. These results form the data basis for this study.

Empirical Studies on Model Transformation Languages

Hebig et al. ( ) report on a controlled experiment to evaluate how the use of different languages, namely ATL, QVT-O, and Xtend, affects the outcome of students solving several transformation tasks. During the study, student participants had to complete a series of three model transformation tasks. One task was focused on comprehension, one task focused on modifying an existing transformation, and one task required participants to develop a transformation from scratch. The authors compared how the use of ATL, QVT-O, and Xtend affected the outcome of each of the tasks. Unfortunately, their results show no clear evidence of an advantage of using a model transformation language compared to Xtend. However, they concede that the conditions under which the observations were made were narrow.
We published a study on how much complexity stems from which parts of ATL transformations (Götz and Tichy ) and compared these results with data for transformations written in Java (Höppner, Kehrer, and Tichy ) to elicit advantageous features of ATL and to explore what use cases justify the use of a general-purpose language over a model transformation language. In the study, the complexity of transformations written in ATL was compared to that of the same transformations written in two versions of Java SE, allowing for a comparison and a historical perspective. The Java transformations were translated from the ATL transformations using a predefined translation schema. The results show that while new language features in Java, like the Streams API, allow for significant improvement over older Java code, the relative amount of complexity that ATL can hide stays the same between the two versions. Gerpheide, Schiffelers, and Serebrenik ( ) use a mixed-method study, consisting of expert interviews, a literature review, and introspection, to formalise a quality model for the QVT-O model transformation standard. The quality model is validated using a survey and used to identify the necessity of quality tool support for developers. We know of two study templates for evaluating model transformation languages that have been proposed but not yet used. Kramer et al. ( ) propose a template for a controlled experiment to evaluate the comprehensibility of MTLs. The template envisages using a questionnaire to evaluate the ability of participants to understand what presented transformation code does. The influence of the language used for the transformation would then be measured by comparing the average number of correct answers and the average time spent filling out the questionnaire. Strüber and Anjorin ( ) also propose a template for a controlled experiment. The aim of the study is to evaluate the benefits and drawbacks of rule refinement and variability-based rules for reuse.
The quality of reusability is measured via comprehensibility as well as changeability, collected in bug-fixing and modification tasks.

Conclusion

Our study provides the first quantification of the importance of model transformation language capabilities for the perception of quality attributes by developers. It once again highlights the complexity of the subject matter, as the effect sizes of the influences are small and the final structure model grew in size. As demonstrated by the number of influences contained in the structure model, many language capabilities need to be considered when designing empirical studies on MTLs. The results, however, point towards Traceability and Reuse Mechanisms as the two most important MTL capabilities. Moreover, the size of the transformations provides the strongest moderation effects on many of the influences and is thus the most important context factor to consider. Apart from implications for further empirical studies, our results also paint a clear picture for further language development. Transformation-specific reuse mechanisms should be the main focus, as shown by their relevance for many development-lifecycle-focused quality attributes such as Maintainability and Productivity.

Conflict of Interests

The authors have no competing interests to declare that are relevant to the content of this article.

Appendix: Survey

We now aim to quantitatively assess the interview results to confirm or reject the influences and moderation effects posed by different factors and to gain insights into how valuable different factors are to the discussion. As an expert in the field of model2model transformations, your opinion is of high value for us because your answers can provide meaningful insights. Participating in the survey will take about 25 minutes. There are 3 pages in this survey. There are 35 questions in this survey.
Quality properties of Model Transformation Languages

In the following you will assess quality attributes of model transformations and the languages used for writing them. Each question presents a description of the quality attribute that is being assessed. Comprehensibility describes the degree of effectiveness and efficiency with which the purpose and functionality of a transformation can be understood. Tool Support describes the degree of effectiveness and efficiency with which tools support developers in their effort.

Capability utilisation of Model Transformation Languages

In the following you will be asked to estimate how often you use certain capabilities of model transformation languages.

How many elements do the meta-models involved in your transformations have? Please estimate the percentage of your use cases that fall in the following ranges.
- Each answer must be between 0 and 100
- The sum must be at most 100
- Only integer values may be entered in these fields
Please write your answer(s) here. E.g. if half of the meta-models in your transformations have 25 elements and the other half are meta-models with 4 elements, you would put 50 for #elements ≤ 10 and 50 for 20 < #elements ≤ 50.
- #elements ≤ 10
- 10 < #elements ≤ 20
- 20 < #elements ≤ 50
- 50 < #elements ≤ 100
- 100 < #elements ≤ 1.000
- #elements > 1.000

How large are the models you transform, measured in number of model elements? Please estimate the percentage of your use cases that fall in the following ranges.
- Each answer must be between 0 and 100
- The sum must be at most 100
- Only integer values may be entered in these fields
Please write your answer(s) here. E.g. if 1/3 of all models you transform contain 200 elements and the rest are larger than 100.000 elements, you would put 33 for 100 < #elements ≤ 1.000 and 66 for #elements > 100.000.
- Each answer must be between 0 and 100
- The sum must be at most 100
- Only integer values may be entered in these fields
- Each answer must be between 0 and 100
- The sum must be at most 100
- Only integer values may be entered in these fields
Please write your answer(s) here. The structure of a meta-model is defined by the number of elements and their associations with each other.

How Dissimilar or Similar are the attribute types of input and output elements that are related to each other in your transformations? Please estimate the percentage of your use cases that fall in the following ranges.
- Each answer must be between 0 and 100
- The sum must be at most 100
- Only integer values may be entered in these fields
Please write your answer(s) here. E.g. when mapping a Class to a …

How Bad or Well structured are the meta-models in your transformations? Please estimate the percentage of your use cases that fall in the following ranges.
- Each answer must be between 0 and 100
- The sum must be at most 100
- Only integer values may be entered in these fields
Please write your answer(s) here. A well structured meta-model does, for example, not split related data over a large number of meta-model elements if it can be avoided.

How Bad or Well documented are the meta-models in your transformations? Please estimate the percentage of your use cases that fall in the following ranges. Only consider documentation of the meta-model itself, not documentation in your code.
- Each answer must be between 0 and 100
- The sum must be at most 100
- Only integer values may be entered in these fields
Please write your answer(s) here. Documentation means description of the meta-model elements, their attributes and associations, as well as any invariants on them.

What percentage of your use cases require Synchronization between "input" and "output"?
- Only numbers may be entered in this field
- Your answer must be between 0 and 100
Please write your answer here.
(Answer scale: very bad / bad / neither well nor bad / well / very well)
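The integer and percentage constraints that the survey places on these range questions can be captured in a small validator. This is an illustrative sketch; the function and field names are ours, not the survey platform's.

```python
# Minimal validator for the survey's percentage-range questions: every answer
# must be an integer between 0 and 100, and all answers together must sum to
# at most 100. Field names below are illustrative.

def validate_percentage_answers(answers: dict[str, int]) -> list[str]:
    """Return a list of violated rules (an empty list means the answers are valid)."""
    errors = []
    for field, value in answers.items():
        if not isinstance(value, int):
            errors.append(f"{field}: only integer values may be entered")
        elif not 0 <= value <= 100:
            errors.append(f"{field}: must be between 0 and 100")
    if sum(v for v in answers.values() if isinstance(v, int)) > 100:
        errors.append("sum of all answers must be at most 100")
    return errors

# The example from the survey text: half of the meta-models have 25 elements,
# the other half have 4 elements.
example = {"#elements <= 10": 50, "20 < #elements <= 50": 50}
print(validate_percentage_answers(example))  # [] -> valid
```

Returning a list of violations rather than a single boolean mirrors how the survey form reports each broken rule to the participant separately.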
Patients with Incompetent Valves in Chronic Venous Insufficiency Show Increased Systematic Lipid Peroxidation and Cellular Oxidative Stress Markers

Chronic venous insufficiency (CVI) is a disease that impacts cellular homeostasis. CVI may occur with a valvular destruction process known as venous reflux or valvular incompetence. One of the cellular processes that may be triggered as a consequence of these events is the production of reactive oxygen species (ROS), which may trigger the production of different cellular markers and cell damage processes, such as lipid peroxidation. Therefore, the present study performed an observational, analytical, and prospective cohort study by reviewing 110 patients with CVI, and the activities and plasma levels of iNOS, eNOS, NOX1, and NOX2 were determined using immunohistochemistry and RT-qPCR. Lipid peroxidation (MDA) was also measured. Patients were distributed according to the presence or absence of valvular incompetence-venous reflux, which was diagnosed clinically as the absence of venous reflux (NR = 29) or presence of venous reflux (R = 81). Each group was divided according to age, with a cutoff point of fifty years (NR < 50 = 13, NR ≥ 50 = 16, R < 50 = 32, and R ≥ 50 = 49). The results showed that R patients exhibited significantly increased plasma MDA levels, and R < 50 patients exhibited the highest statistically significant increase. iNOS, NOX1, and NOX2 exhibited the highest gene and protein expression in R patients. The increased expression was maintained in the R < 50 patients. Our data suggest that young patients with valvular incompetence (venous reflux) show higher levels of lipid peroxidation and oxidative stress, which reflects the characteristics of an aged patient.

Introduction

Chronic venous insufficiency (CVI) is a disorder of the venous system that prevents the return of blood to the heart [1]. In general, CVI is not a serious pathology, but it occurs with a high incidence in the population [2,3].
Currently available pharmacological treatments are not effective, and surgery is the treatment of choice when the disease progresses. In fact, these patients represent one of the most common consultations to vascular surgeons [4]. Different epidemiological studies performed worldwide reveal that CVI is a chronic pathology that occurs with high incidence and prevalence in the population [5,6]. One of the main risk factors for developing CVI is age, because of the progressive deterioration of the venous wall and increased pressure at the level of the superficial venous system. Other factors that influence the development of CVI are gender, family history, ethnicity, number of pregnancies, obesity, and risk professions [7][8][9][10][11]. CVI is associated with a wide variety of signs and symptoms, but it seems likely that all of the symptoms are related to venous hypertension. Venous hypertension often occurs due to reflux caused by incompetent venous valves [12]. These valves decrease the venous pressure, which favors the return of blood to the heart, and tolerate high pressures for limited periods of time. Therefore, events that modify the structure of these valves will trigger valvular incompetence and generate a blood reflux that progressively increases the venous pressure in the leg [13]. Reactive oxygen species (ROS) are physiologically produced in a regulated manner from the incomplete reduction of oxygen in the vascular wall. An imbalance between the production of ROS and the antioxidant defense mechanisms creates an oxidative stress that produces lipid peroxidation, oxidation of DNA, RNA, and proteins, and the inactivation of some enzymes [14][15][16]. Numerous authors have demonstrated that nitric oxide (NO) and nitric oxide synthase (NOS) play prominent roles in vascular diseases through ROS activity [1,17,18].
The present study examined the process of valvular incompetence (venous reflux) and measured the differential expression of cellular oxidative stress markers (iNOS, eNOS, NOX1, and NOX2) according to patient age, and how these conditions change the profile of lipid peroxidation as quantified using malondialdehyde (MDA). The aim of this study is to demonstrate how the oxidative stress that occurs at the tissue level has systemic consequences in correlation with age.

Study Population. This study was an observational, analytical, and prospective cohort study that reviewed patients with chronic venous insufficiency. Patients were divided according to age (cutoff point at 50 years of age) and the presence (R) or absence (NR) of incompetent valves (venous reflux). There were a total of 110 patients [NR = 29, 51.51 ± 14.04 years (NR < 50 = 13, 38.53 ± 6.21 years; NR ≥ 50 = 16, 62.06 ± 8.54 years); R = 81, 50.09 ± 15.91 years (R < 50 = 32, 62.06 ± 8.54 years; R ≥ 50 = 49, 59.98 ± 11.81 years)]. The study cohort was selected according to the following criteria. Inclusion criteria: women and men diagnosed with CVI, with or without venous reflux in the great saphenous vein; BMI ≤ 25; signed informed consent; and commitment to follow-ups during the pre- and postoperative periods plus tissue sample collection. Exclusion criteria: patients with venous malformations or arterial insufficiency; patients who did not provide their clinical history; patients with pathology affecting the cardiovascular system (e.g., infectious diseases, diabetes, dyslipidemia, hypertension); patients with toxic habits; and patients who doubted that they could complete the full follow-up. Each patient underwent an exploratory examination using an M-Turbo Eco-Doppler (SonoSite) transducer of 7.5 MHz. The examination of the lower limbs was performed in a standing position with the explored leg in external rotation and supported on the contralateral leg.
The examination included the greater saphenous axis from the inguinal region to the ankle and the femoral vein. A distal compression maneuver was performed. Valsalva maneuvers were also performed in the present study. Reflux was considered pathological when it lasted longer than 0.5 s. NR patients had a compressive syndrome as the indication for surgery. Patients were classified according to the CEAP international criteria [18]. Saphenectomy was performed, and the whole arch of the greater saphenous vein was collected. These fragments were introduced into two different sterile tubes: one tube contained minimum essential medium (MEM) with 1% antibiotic/antimycotic (both from Thermo Fisher Scientific, Waltham, MA, USA), and the other tube contained RNAlater® solution (Ambion, Austin, TX, USA). Blood samples were taken from the study population via puncture of the superficial vein of the elbow fold after placement of a tourniquet on the arm. One tube (Vacutest® Kima, Piove di Sacco, Italy) of blood was collected from each study subject. The tube contained heparin to obtain blood serum. The present study was performed in accordance with the basic ethical principles of autonomy, beneficence, nonmaleficence, and distributive justice, and its development followed Good Clinical Practice standards and the principles enunciated in the most recent Declaration of Helsinki (2013) and the Convention of Oviedo (1997). Patients were duly informed, and each was asked to provide written informed consent.

2.2. RT-qPCR. RNA was extracted from the samples collected in RNAlater® using the guanidinium thiocyanate-phenol-chloroform method of Chomczynski and Sacchi (1987). RNA samples (50 ng/μl) were used to synthesize complementary DNA (cDNA) via reverse transcription.
Each sample (4 μl) was mixed with 4 μl of an oligo-dT(15) solution at 0.25 μg/μl (Thermo Fisher Scientific) and incubated at 65°C for 10 minutes in a dry bath (AccuBlock™, Labnet International, Inc., Edison, NJ, USA) to denature the RNA, following the protocol of Ortega et al. [3]. The amount of cDNA of each of the following genes of interest was quantified in each sample using qPCR. Specific primers were designed de novo for all of the genes studied (Table 1) using the Primer-BLAST online application [19] and AutoDimer [20]. The constitutively expressed gene glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was used to normalise the results; that is, gene expression was normalised using GAPDH as the reference gene. The qPCR was performed in a StepOnePlus™ System (Thermo Fisher Scientific), and the relative standard curve method was used. For this, 5 μl of each sample, diluted 1/20, was mixed with 10 μl of iQ™ SYBR® Green Supermix (Bio-Rad Laboratories), 1 μl of forward primer, 1 μl of reverse primer, and 3 μl of DNase- and RNase-free water in a MicroAmp® 96-well plate (Thermo Fisher Scientific), for a total reaction volume of 20 μl. Fluorescence detection was performed at the end of each amplification cycle and at each step of the dissociation curve. The data obtained for each gene were interpolated using a standard curve created from serial dilutions of a mixture of the study samples that was included in each plate. Results are expressed as arbitrary units. All tests were performed in duplicate.

Immunohistochemistry. Samples destined for immunohistochemical studies were processed using standardized protocols [3,21]. Samples were embedded in paraffin and sectioned using a microtome into 5 μm thick sections. Sections were deparaffinized and hydrated. The different study molecules were detected using commercial primary and secondary antibodies (Table 2).
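The relative standard curve quantification described for the qPCR above can be sketched as follows, with idealised, invented dilution data: Ct values are fitted against log10(quantity) for a serial dilution, unknowns are interpolated from the curve, and target-gene quantities are normalised to the GAPDH reference gene, as in the study. None of the numbers below are measured values.

```python
import math

# Sketch of the relative standard curve method for qPCR quantification, with
# idealised, invented data. Quantities are in arbitrary units, as in the study.

def fit_standard_curve(quantities, cts):
    """Least-squares fit of Ct = slope * log10(quantity) + intercept."""
    xs = [math.log10(q) for q in quantities]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(cts) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, cts))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def interpolate_quantity(ct, slope, intercept):
    """Invert the curve: recover a quantity (arbitrary units) from a Ct value."""
    return 10 ** ((ct - intercept) / slope)

# Idealised 10-fold dilution series (~100% efficiency gives slope ≈ -3.32).
slope, intercept = fit_standard_curve([1000, 100, 10, 1],
                                      [20.00, 23.32, 26.64, 29.96])

# Normalise a hypothetical target gene (e.g. iNOS) to GAPDH: a one-cycle
# difference corresponds to roughly a two-fold difference in quantity.
relative = (interpolate_quantity(25.0, slope, intercept)
            / interpolate_quantity(24.0, slope, intercept))
print(round(relative, 2))  # 0.5
```

Interpolating every sample against a dilution series of pooled study samples, as the protocol describes, sidesteps the need for absolute copy numbers: all quantities stay on a shared arbitrary scale, which is all the GAPDH-normalised ratio requires.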
Sections of the same tissue were used as negative controls in all immunohistochemical studies, in which the primary antibody was replaced with blocking solution. Detection of the antigen-antibody reaction was performed using the avidin-biotin complex (ABC) method (DAB Kit, SK-4100, Vector, Burlingame, CA, USA), with the avidin-peroxidase reagent ExtrAvidin®-Peroxidase (Sigma-Aldrich, St. Louis, MO, USA) at a 1:200 dilution in PBS. Histological samples of the patients were stratified as negative (0) or positive (1). For each of the patients of the established groups, 5 sections and 5 random fields per section were examined. Patients were described as positive when the average of the test sample marked for each study subject was greater than or equal to 5% of the total [22].

Oxidative Stress Determination. MDA production is proportional to the polyunsaturated fatty acid degradation of lipid peroxidation. Therefore, the MDA concentration was measured to determine the oxidative stress in patient plasma. The lipid peroxidation assay kit (ab118970) is a suitable method for the sensitive detection of malondialdehyde in the sample. The MDA present in the sample reacts with thiobarbituric acid (TBA) to generate an MDA-TBA adduct, which is easily quantified using colorimetry. The sensitivity of this method was 0.1 nmol MDA/well.

Table 1: The primers used in RT-qPCR, the sequence, and the binding temperature (Temp).

Gene | Sequence fwd (5′→3′) | Sequence rev (5′→3′) | Temp
GAPDH | GGA AGG TGA AGG TCG GAG TCA | GTC ATT GAT GGC AAC AAT ATC CAC T | 60°C
eNOS | AAG AGG AAG GAG TCC AGT AAC ACA GA | ACG AGC AAA GGC GCA GAA | 60°C
iNOS | CCT TAC GAG GCG AAG AAG GAC AG | CAG TTT GAG AGA GGA GGC TCC G | 61°C
NOX1 | GTT TTA CCG CTC CCA GCA GAA | GGA TGC CAT TCC AGG AGA GAG | 55°C
NOX2 | TCC GCA TCG TTG GGG ACT GGA | CCA AAG GGC CCA TCA ACC GCT | 60°C

(Figure 1 caption: MDA levels in the plasma of patients without reflux less than fifty years of age (NR < 50), without reflux greater than or equal to fifty years of age (NR ≥ 50), with reflux less than fifty years of age (R < 50), and with reflux greater than or equal to fifty years of age (R ≥ 50). **p < 0.005.)

Study of Lipid Peroxidation Levels: Malondialdehyde. Lipid peroxidation levels were determined using malondialdehyde levels in the plasma of the study cohort. Patients with venous reflux (R) exhibited a significant increase compared to the NR subjects (p < 0.05) (Figure 1(a)). The mean malondialdehyde levels were 1.306 ± 0.116 μM in non-reflux patients and 1.745 ± 0.142 μM in patients with reflux. A clear differential distribution was found in relation to the age factor, which significantly increased the levels of malondialdehyde in R < 50 patients compared to NR < 50 patients (0.952 ± 0.067 μM, NR < 50 versus 1.966 ± 0.142 μM, R < 50; p < 0.005) (Figure 1(b)). No significant differences were observed between the groups greater than or equal to fifty years of age (1.508 ± 0.124 μM, NR ≥ 50 versus 1.303 ± 0.175 μM, R ≥ 50) (Figure 2(a)). The study patients exhibited differential protein expression of iNOS and eNOS (Figure 2(b)). These markers represented 34.48% and 44.83% in NR patients, respectively. These values were 48.15% and 61.73%, respectively, in R patients. There was a marked increase in the number of R patients who exhibited positive protein expression.
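The colorimetric MDA-TBA readout described above is typically converted to a concentration via a linear standard curve. Here is a minimal sketch of that conversion, with invented calibration values that are not the actual ab118970 kit data.

```python
# Sketch of converting colorimetric (MDA-TBA) absorbance readings into MDA
# concentrations via a linear standard curve. All calibration values below
# are invented for illustration.

def fit_line(xs, ys):
    """Least-squares fit of y = m * x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

def mda_concentration(absorbance, m, b):
    """Invert the standard curve: nmol MDA/well from a background-corrected OD."""
    return (absorbance - b) / m

# Illustrative standards: 0-20 nmol MDA/well vs background-corrected OD at 532 nm.
standards_nmol = [0.0, 4.0, 8.0, 12.0, 16.0, 20.0]
standards_od = [0.00, 0.10, 0.20, 0.30, 0.40, 0.50]
m, b = fit_line(standards_nmol, standards_od)
print(round(mda_concentration(0.25, m, b), 2))  # 10.0 (nmol/well)
```

The stated assay sensitivity of 0.1 nmol MDA/well is, in these terms, the smallest per-well quantity whose absorbance is reliably distinguishable from the zero standard.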
When the age factor was considered, the values of iNOS were 15.38% in NR < 50 and 50.00% in NR ≥ 50 patients. These values were 84.37% for R < 50 and 24.49% for R ≥ 50 patients. The expression of eNOS was 15.38% in NR < 50 and 68.75% in NR ≥ 50. With reflux, eNOS was 90.62% in R < 50 compared to 42.86% in R ≥ 50. These results show that NR ≥ 50 and R < 50 patients exhibited the highest percentages of positive expression for iNOS and eNOS. The study of iNOS expression showed that differences in this marker were established in the different layers of the human vein according to patient age (Figure 2(c)). iNOS protein was clustered in the three tunicas of NR patients. However, NR ≥ 50 patients exhibited a greater intensity of protein expression, located more intensely in the adventitial tunica (Figure 2(c), B and C). R < 50 patients exhibited large accumulations along the entire length of the vein wall, which were very intense in the middle tunica (Figure 2(c), D and C). The expression of eNOS was differentially maintained in the endothelium of NR < 50 patients, and it was especially intense in the adventitial tunica of R ≥ 50 patients (Figure 2(d), A-C). The study of the distribution of expression in the different layers of the human vein revealed important histological information. NR ≥ 50 and R < 50 patients exhibited higher NOX1 protein expression in the intima, media, and adventitia layers of the human vein, and these differences were statistically significant (Figure 3(c), A-F). NOX2 protein expression was increased in R patients compared to NR patients in the intima, media, and adventitia layers of the vein. R < 50 patients showed a greater intensity of expression in the three tunicas of the venous wall (Figure 3(d), A-F).

Discussion

The multitude of mechanisms involved in the progression of CVI has made it difficult for the scientific community to identify the factors that trigger this disease.
Some studies related reflux to a weakening of the venous walls [23], which may be due to an imbalance in the content of collagen and elastin in the vein [24]. Other studies focused on chronic inflammation as the main factor in the onset of the pathology [25]. Krzysciak and Kózka [26] showed that oxidative stress increased the risk of damage to the vascular endothelial wall and DNA and caused a remodeling of the tissue and the consequent progression of the pathology. Therefore, one of the events involved in valvular incompetence is oxidative stress. Krzysciak and Kózka [26] mentioned that ROS promote reflux, which generates a hypoxic environment in endothelial cells. These events favor the adhesion of leukocytes and other inflammatory mediators that release angiotensin II, which exerts a vasoconstrictive action directly on the smooth muscle and is capable of increasing the expression of growth factors, matrix metalloproteinases (MMPs), and collagen [1,27]. Overexpression of MMPs was also observed in fibroblasts, endothelial cells, and smooth muscle cells in patients with CVI [28]. Therefore, an alteration in cell balance may cause degenerative damage that compromises cell structure, the content of collagen and elastin, and the contraction and relaxation properties of the smooth muscle of the venous wall [29]. ROS thus play a decisive role in the progression of chronic venous insufficiency. Our results showed that R < 50 patients exhibited the highest concentrations of MDA in plasma. Krzysciak and Kózka [26] measured MDA concentrations in samples of saphenous veins of patients with CVI before and after the development of the disease. These results showed a relationship between oxidative stress and chronic venous insufficiency at the tissue level and the systemic level beginning in the first years of the disease. Mikuła-Pietrasik et al.
[30] showed that the sera of varicose patients increased cell proliferation, expression of the senescence marker SA-β-Gal, and ROS production in human umbilical vein endothelial cells (HUVECs) compared to the sera of healthy individuals. This result suggests that the presence of oxidative stress at a systemic level is the main factor triggering the progression of the pathology. Angiotensin II also activates nicotinamide adenine dinucleotide phosphate (NADPH) oxidase and enhances the production of the superoxide anion (O2−) through wall stress-dependent stimulation of the endothelium [26]. In addition to being a vasoconstrictive substance, it promotes inflammation, hypertrophy, and fibrosis, and it is implicated in vascular damage and remodeling in cardiovascular diseases [31]. A recent study by Zhang et al. [31] showed that an increase in the expression of NOX1 and NOX2 occurred after stimulation with angiotensin II in HUVECs. Our results showed this oxidative stress event in relation to NOX1 and NOX2 and the existence of a differential expression based on the age of the patients. These results should make us consider the implication of an accelerated aging process that leads to greater oxidative and inflammatory stress in valvular incompetence (venous reflux). In fact, numerous authors have noted the correlation of oxidative stress with age, but an accelerated aging process in young patients has not been mentioned [22,32]. On the other hand, we wanted to further develop the implications of iNOS and eNOS in chronic venous insufficiency, because many authors have mentioned the role that these molecules play in vascular diseases [33]. eNOS is expressed primarily in endothelial cells. Therefore, the low expression of eNOS in our immunohistochemistry images of the tunica intima of the veins of patients with reflux stands out compared to patients without reflux.
Because eNOS provides a baseline level of NO in the vein and neutralizes ROS, it makes sense that patients with low eNOS expression are more susceptible to endothelial deterioration and to developing valvular incompetence (venous reflux). Low eNOS expression may be related to CVI and to any disease whose mechanism involves endothelial dysfunction, as indicated by Mikuła-Pietrasik et al. [30]. However, the expression of eNOS in the tunica adventitia suggests that it is reactive and remains functionally active. Our studies found differences in the iNOS isoform in the tunica adventitia and tunica media of the vein. NR ≥ 50 patients tended to exhibit an increase in iNOS expression in the tunica adventitia, likely in response to age-induced stress. Notably, the expression of iNOS in patients with reflux never reached the level detected in NR ≥ 50 patients, despite the oxidative stress generated in these patients. Low expression of eNOS and iNOS decreases the bioavailability of NO in the vein, which makes it more susceptible to oxidative stress; by contrast, increased iNOS expression has been related to other cardiovascular pathologies [34]. The decrease in the expression of iNOS and eNOS suggests the existence of a suppressive mechanism, perhaps at the level of transcription, because the two proteins are encoded by different genes but share 50-60% homology in amino acid sequence [35]. Our results support a role for oxidative stress as a mechanism involved in the development of valvular incompetence (venous reflux) in CVI. The present study showed the existence of an oxidative environment in human veins with chronic venous insufficiency and how the different molecular components that participate in CVI were differentially expressed in correlation with the age of the patients. Our study has some limitations: to observe the tissue response directly, it would be necessary to perform in vitro experiments on the endothelial and muscle cells of the saphenous vein.
Along the same line, another limitation of our study is that we could not determine whether this profile of protein and gene expression is the same in other venous territories of the lower limb. Nevertheless, our study is the first to show that valvular incompetence has important consequences and that a different profile exists depending on age. The importance of this study lies in demonstrating how venous disease produces a tissue change with systemic consequences. Venous disease is a common pathology in the general population that produces great disability; knowing its pathophysiology and its systemic consequences will help the development of specific therapies. Future studies should aim to discover possible therapeutic targets at the tissue level that prevent systemic change and its consequences. Data Availability The data used to support the findings of the present study are available from the corresponding author upon request.
After 'completion': the changing face of human chromosomes 21 and 22 In the four years since the publication of the first two 'complete' human chromosome sequences the type of research being done on each has shifted subtly, reflecting the impact of genomic data on biological science in general. More than four years have now passed since Dunham et al. [1] published 'The DNA sequence of human chromosome 22', in December 1999. This was the first 'essentially' complete human chromosome sequence to be finished. A few months later, in May 2000, Hattori et al. [2] published 'The DNA sequence of human chromosome 21'. At that time it seemed as though a rapid succession of completed chromosomes and their publications were to follow (perhaps in reverse numerical order, reflecting chromosomal size), but it wasn't until almost two years later, in December 2001, that the completion of chromosome 20 was announced [3]. Since then, a few more of the remaining chromosomes (successively 14, Y, 7, 6, 13, 19, 9 and 10) have been published, but we are still waiting on the rest, hopefully all of which will appear by the end of this year. With the announcement of the 'completion' of the entire human genome in April 2003, it's just a matter of time. As the first two chromosome sequences have been complete for a relatively long time (in comparison to the rest of the chromosomes), now seems an appropriate time to take a look at how research on these chromosomes, and how genomic research in general, has been affected. How can we measure the impact of the completion and publication of the first two finished chromosomes? By counting the number of times each chromosome paper has been cited? By detecting an increase in the number of publications related to each chromosome? By noticing a shift in the types of research being carried out on each chromosome? By seeing an increase in the gene count, or a decrease in the number of unidentified disease genes? 
This article takes a brief look at these measures and more, concluding that the overall number of genes on chromosomes 21 and 22 has not changed much since the initial annotation of these chromosomes, but experimental verifications have increased the number of confirmed genes. Furthermore, the availability of the entire chromosome sequences seems to have facilitated the localization of some disease loci on chromosomes 21 and 22. Citing articles on topics such as single-nucleotide polymorphisms (SNPs), linkage disequilibrium, microarray analysis of gene expression, and transposable elements were themselves cited 100 times or more. The types of articles that cited the first two chromosome publications covered a range of research areas, with the majority being comparative genomics, comparative mapping, gene discovery, haplotype analysis, genomic organization, and chromosome-wide gene expression analysis. Clearly, the availability of whole, 'completely' finished chromosomes made possible some of these new broad-scale types of research. For example, when doing comparative genomics to try to identify conserved regions that may contain regulatory elements, it is essential that both of the sequences being compared be as complete as possible, in order to minimize the false-negative rate. While the syntenic regions of these two chromosomes in other species, for example mouse, rat and chicken, are not necessarily finished to the same high quality, they are available at various levels of draft from whole-genome shotgun assemblies. Fortunately, in the case of human chromosome 21, the equivalent chromosome in chimpanzee, chromosome 22, is now available in high-quality finished form [7], and the same is being done for regions similar to human chromosome 22.
The number of chromosome-related publications If we look at the number of publications in PubMed [8] using the search criteria 'human chromosome 21 OR human chromosome 22', the average number of articles per year for both chromosomes begins to level off in 1990 (106 for chromosome 21 and 83 for chromosome 22), several years before the sequence publications (Figure 1a). On the basis of this information, the publications of the first two chromosome sequences had no effect on the number of chromosome-related papers published per year. If the number of publications per chromosome is weighted by chromosome size (Figure 1b), chromosomes 21 and 22 (as well as chromosomes 17 and 19) appear to be very 'high impact' chromosomes. In the case of chromosome 21, this effect could be due to the special interest in Down syndrome (trisomy 21). If the number of publications per chromosome is weighted by the number of genes on the chromosome (Figure 1c), chromosome 21 appears to be very significant, followed closely by chromosomes 13, 18 and 22. This observation may be due to the relatively small size of these chromosomes and their low numbers of genes in comparison with the other chromosomes. It might have been expected that the number of chromosome-related papers would increase after the original publication of the first chromosome sequences, but instead we see a shift in the type of research that is being conducted. Whereas before their publication the research emphasis was on mapping and novel gene discovery, after their publication the emphasis turned to comparative analysis (for example, between mouse and human, as by Pletcher et al. [9]), haplotype analysis (for example, by Dawson et al. [10]) and whole-chromosome transcription analysis (for example, by Rinn et al. [11]). Hence, the availability of essentially complete, high-quality sequence is ushering in a whole new era of genomic research.
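The weighting described above, dividing each chromosome's raw publication count by its size or by its gene number, can be sketched in a few lines. The publication counts, sizes, and gene counts below are hypothetical placeholders chosen only to illustrate the calculation, not the article's actual data:

```python
# Normalise per-chromosome publication counts by chromosome size (as in
# Figure 1b) and by gene count (as in Figure 1c). All numbers are
# hypothetical placeholders for illustration.
chromosomes = {
    # name: (publications, size_mb, gene_count) -- hypothetical values
    "21": (106, 47, 240),
    "22": (83, 50, 550),
}

def weighted_counts(data):
    """Return per-chromosome publications per Mb and per gene."""
    out = {}
    for name, (pubs, size_mb, genes) in data.items():
        out[name] = {
            "per_mb": pubs / size_mb,
            "per_gene": pubs / genes,
        }
    return out

for name, w in weighted_counts(chromosomes).items():
    print(f"chr{name}: {w['per_mb']:.2f} pubs/Mb, {w['per_gene']:.3f} pubs/gene")
```

Under either normalisation, a small, gene-poor chromosome with many publications (such as chromosome 21 with its Down-syndrome literature) stands out, which is exactly the effect the figure panels describe.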
Individual scientists generally no longer have to worry about the tedious tasks of mapping and sequencing. Other reasons for the leveling off in publication numbers could be that the number of researchers interested in these two chromosomes, and the amount of funding available for studying them, has not changed in recent years. And, because of the International Human Genome Sequencing Consortium's adherence to the 'Bermuda rules' [12], researchers around the world were able to access the sequence as it was being produced: they didn't have to wait until the chromosomes (or worse yet, the whole genome!) were published to utilize it. If this policy had not been implemented, we might have seen a spike in the number of chromosome-related publications upon publication and release of the sequence, assuming that researchers were eager to make use of it. The number of genes Another measure of the significance of the publications of the first full chromosome sequences might be the number of genes that have been identified since the original publications. When the sequences of chromosomes 21 and 22 were first published, it is safe to assume that the papers' authors did not believe that they had identified all of the genes on these chromosomes. They (we) knew that, upon release of the data, other scientists would identify more genes, and that new information would become available to help verify and append the initial annotations, and this is exactly what has taken place over the past four years. If we look at the number of genes (total non-pseudogenes) for each chromosome at the time of publication and compare it to the most recently available counts (Table 1), we can see that overall the gene numbers have not risen that dramatically, an indication that the initial gene identification was done very well.
In the case of chromosome 21 there is quite a jump in the number of genes, but this is mainly due to the annotation of two keratin-associated protein gene clusters, one of which was only counted as a single gene in the original analysis. We can also see that for both chromosomes the number of genes in the 'known' category has dramatically increased, while the number of 'novel' and 'putative' genes has generally decreased (Table 1). This re-categorization is due in part to the number of experimental verifications that have since been carried out on the predicted genes, and in part to the significant increase in the number of full-length cDNAs and expressed sequence tags (ESTs) that have recently been deposited in the public databases. Many more human genes are now covered by at least one of these valuable mRNA resources than when the chromosomes were first annotated; four years ago mRNA data were much scarcer, and many gene models were based on partial EST evidence or solely on in silico gene-prediction analysis. At that time, for each chromosome only one representative model was annotated per gene; because of all the new mRNA data, however, roughly 30-40% of genes now have multiple transcripts annotated. And, also because of the new mRNA data, most annotators now agree that, in order to keep the number of false-positive gene models to a minimum, computer-only gene predictions should not become part of the annotation set until they are experimentally verified. Another noticeable change that can be seen in Table 1 is the near doubling in the number of pseudogenes for both chromosomes. This jump is due to several factors, including the increase in mRNA data, the completion of the rest of the human genome and the subsequent improvement of annotation elsewhere within the genome, and the development of standards on how to define pseudogenes.
The goals of the workshop were to establish communication between the groups involved in annotation, to standardize the way annotation is done across the human genome, and to exchange information, all with the aim of producing the highest standards of manual curation for the human genome. It should be noted that the HGNC has the daunting task of assigning unique identifiers, or gene symbols, to each gene in the human genome, thus reducing the amount of confusion often associated with multiple and non-unique gene names. The number of disorders characterized If we look at the number of human diseases and disorders (26 and 62, respectively) that have been mapped to chromosomes 21 and 22 (see Tables 2, 3 and 4), we find that 3 (12%) and 12 (19%), respectively, were not mapped to the chromosomes until after January 2000. Thus, it appears that the availability of the entire chromosome sequences was necessary for locating some disease loci. Even now that all of these disorders have been mapped to their respective chromosomes, determining the exact location of the disease locus, the full-length cDNA product, and the mutation(s) that correlate phenotype and genotype remains a challenge. In the case of chromosome 21, 6 (23%, including Down syndrome) out of 26 disorders do not have any conclusive mutation identified, and 4 disorders (15%) do not yet have any specific sequence location. And, for chromosome 22, an amazing 30 (48%) out of 62 disorders do not have any conclusive mutation identified, and 14 disorders (23%) do not yet have any specific sequence location; but several of the disease loci on chromosome 22 are involved in chromosomal rearrangement disorders, which are difficult to pinpoint, such as chronic myeloid leukemia.
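As a quick consistency check, every percentage quoted above follows directly from the stated counts; a minimal sketch:

```python
# Verify the percentages quoted in the text: disorders mapped only after
# January 2000, and disorders still lacking a conclusive mutation or a
# specific sequence location.
def pct(part, whole):
    """Percentage rounded to the nearest integer, as quoted in the text."""
    return round(100 * part / whole)

assert pct(3, 26) == 12    # chr21 disorders mapped only after Jan 2000
assert pct(12, 62) == 19   # chr22 disorders mapped only after Jan 2000
assert pct(6, 26) == 23    # chr21 disorders with no conclusive mutation
assert pct(4, 26) == 15    # chr21 disorders with no sequence location
assert pct(30, 62) == 48   # chr22 disorders with no conclusive mutation
assert pct(14, 62) == 23   # chr22 disorders with no sequence location
```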
Two of the biggest barriers to identifying disease-gene locations and mutations are the lack of patient (and family) samples and the complexity of the disease, particularly in multi-gene disorders such as Down syndrome or heterogeneous disorders such as schizophrenia and Alzheimer's disease. By having the full human genome sequence available, investigators need only concentrate on matching disease phenotypes with genes from the current annotation, rather than having to identify the genes themselves. In Tables 3 and 4, the total number of relevant papers and the number of relevant papers since the chromosome 21 publication [2] are listed. A bold number indicates that there were ten or more post-chromosome locus-related publications; an italic percentage indicates that 25% or more of all locus-related publications appeared after the chromosome sequence was published. Curly brackets indicate examples of mutations that lead to universal susceptibility to a specific infection (diphtheria or polio), to frequent resistance to a specific infection (vivax malaria), to protection from nicotine addiction, or to other susceptibilities. Data were obtained from the NCBI resources OMIM [17] and PubMed [8].
Research on the design of travel aids for the elderly with partial disability of lower limbs As the population continues to age, the daily lives of those with disabled lower limbs have gradually become an unavoidable social welfare problem. Based on an investigation and analysis of the daily routines of the elderly with partial disability of the lower limbs, a product design experiment is proposed to address their mobility difficulties and provide practical help that can ensure their independence and social contact. Based on literature research and market comparison, and according to the assessment criteria of the Instrumental Activities of Daily Living (IADL) scale, the physiological functions and user behaviors of this elderly population were investigated, and their actual needs for outdoor walking were analyzed. The idea of focusing on auxiliary tools for the elderly with lower limb disability was proposed, and a tool to aid short-distance travel was designed for this target group to improve their IADL travel index and optimize their mobility. Introduction The aging of the population, involving an increasingly prominent aging tendency and a proportion of the elderly that increases year by year, is a common challenge of social development in today's world. Projections indicate that the number of disabled and semi-disabled elderly in China may surge to 42.5 million in 2020, accounting for about 18.3% of the total elderly population. Therefore, the daily life of the disabled and semi-disabled elderly has become a key concern of national social security [1]. Currently, the degree of physical disability of the elderly can be evaluated according to the Instrumental Activities of Daily Living (IADL) scale, which includes 8 items, covering transportation, shopping, housework, laundry, cooking, making phone calls, managing finances and taking medication, all of which should be completed independently.
It can be used to assess the quality of the elderly's daily life from such perspectives as the ability to travel, live independently, and think. Physiologically, long-term partial disability of the lower limbs leads to muscle atrophy, the spread of physical disability, a decrease in immunity caused by lack of exercise, and delays in movement, thinking and perception; psychologically, staying at home for long periods because of mobility inconvenience and lack of basic communication with the outside world may easily lead to mental illnesses such as senile depression, senile anxiety disorder, empty nest syndrome and senile dementia, endangering physical and mental health [2]. However, using a means of travel can meet the basic travel needs of the elderly with partial disability of the lower limbs, facilitate daily social interaction, and improve quality of life, which is beneficial for their physical and mental health. According to research, sports rehabilitation treatment and mental intervention can help the partially disabled elderly relieve the degree of limb lesion, improve their mental status and ultimately contribute to an enhancement of their physiological function [3]. Compared with the completely disabled elderly, the partially disabled elderly have a more robust ability to travel, and their physical functions can support simple, short-distance travel activities. At present, the common travel aids for the elderly on the market can be divided into open and closed types according to their appearance and structure [4]. Representative products of the open type include multifunctional wheelchairs, tricycles, and electric mobility scooters. These products are flexible and convenient to park and store, but they are easily affected by the natural environment and offer weak protection for users. Products of the closed type are mostly automobile-shaped, which makes them convenient to store and more comfortable.
However, products of the closed type are difficult for elderly users to operate, and driving and parking them is difficult to manage [5]. The existing travel aids for the elderly are polarized, targeting mainly the two groups of elderly users who can take care of themselves completely or who have lost their mobility, but not the elderly who are partially incapacitated [6]. Physiological Characteristics of the Elderly with Partial Disability in the Lower Limbs The Product was designed based on the travel demands of the elderly with partial disability of the lower limbs. First of all, the etiologies and pathologies of this group were analyzed, so that the Product could be designed around their actual physical conditions based on ergonomic principles. From the anatomical point of view, the muscles around the knee joint are weak, and the lever formed by the upper and lower bones bears 86% of the weight of the human body [7]. The elderly, however, have osteoporosis to different degrees: bone resorption thins the internal surface of the long and flat bones, while the formation of new bone on the external surface is slow, a problem of "insufficient bone inside, sufficient bone outside". This leads to higher bone brittleness, which results in a higher fracture risk during exercise [8]. The degenerative changes in the leg joints of the elderly with partial disability of the lower limbs are more obvious, involving the articular cartilage of the knee and ankle joints, the sclerotin, the synovium, and the synovial fluid as age increases, causing degenerative arthritis (Figure 1). The common pathologies of the elderly with partial disability of the lower limbs include articular cartilage injury, loss and increased viscosity of synovial fluid, decreased muscle mass, muscle strength and muscle tension, and lower muscle cell viability caused by disuse muscular atrophy.
All these symptoms increase the walking resistance of the elderly with partial disability of the lower limbs, which leads to greater wear of the legs and joints and a lack of physical strength during exercise, and finally restricts their activities. Behavioral Analysis of the Elderly with Partial Disability in the Lower Limbs The activities of daily living of the elderly are affected by physiological degenerative diseases. For the elderly with partial disability of the lower limbs, most daily outings are for shopping, leisure and physical exercise, and their activities are mostly distributed among supermarkets and parks around the community [9]. According to the survey, the travel distance distribution of the elderly with such disability is shown in Figure 1. It can be seen that the daily travel distance of the elderly with partial disability of the lower limbs is about 3 km, mainly short-distance travel. As for travel frequency, the data show that about 90% of the elderly with partial disability of the lower limbs choose to go out daily, and only about 10% go out less because of physical illness, psychological state or other reasons [10]. It can be seen that the elderly with partial disability of the lower limbs have a strong willingness to travel and to communicate with the outside world, which requires the help of travel aids to complete their daily social activities. The design principles of the mobility aids for the elderly with partial disability of the lower limbs were summarized by analyzing the demands of the target customers. The Product is targeted at the elderly with partial disability of the lower limbs as assessed by the Instrumental Activities of Daily Living (IADL) scale, in order to help solve their travel and surrounding-service problems. The IADL scale mainly assesses the quality of life of the elderly based on their daily behaviors and cognitive abilities.
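The item-based assessment described above can be sketched as a simple scoring routine. The article does not give the exact rubric, so the item names, the 0-3 point scale per item, and the "score of 2 or more flags a need for assistance" rule below are all assumptions made purely for illustration:

```python
# Hypothetical sketch of an IADL-style item assessment. The item list, the
# 0-3 scale per item, and the >= 2 threshold are illustrative assumptions,
# not the article's actual rubric. Lower scores mean greater independence.
ITEMS = ["transport", "shopping", "housework", "laundry",
         "cooking", "phone", "finances", "medication"]

def assess(scores):
    """Given a dict of 0-3 point scores per item, return the total score
    and the list of items scoring 2 or more (suggesting an aid is needed)."""
    total = sum(scores[item] for item in ITEMS)
    needs_aid = [item for item in ITEMS if scores[item] >= 2]
    return total, needs_aid

# Example: a user who is independent everywhere except transport.
scores = dict.fromkeys(ITEMS, 0)
scores["transport"] = 3
scores["shopping"] = 1
total, needs_aid = assess(scores)
print(total, needs_aid)
```

A travel aid that moves such a user's transport item from 2-3 points down to 0-1 points, as the article later proposes, would empty the `needs_aid` list in this sketch.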
After a survey based on the original assessment mode, the assessment was adapted to the target customers' demands. The items were regrouped around the actual functions of the Product so as to better reflect its use and form a new experimental assessment table (Table 2). The Product is intended to help the elderly with partial disability of the lower limbs who score 2 to 3 points when using other similar products reach the assessment standard of 0 to 1 points and ultimately improve their quality of life. Travel products for the elderly with partial disability of the lower limbs currently follow two main design orientations. The first is the battery-powered electric wheelchair, which is relatively slow and classed as a medical device. The second is the fully enclosed mobility car, whose capacity is higher than that of a wheelchair; being less affected by environmental factors, it is popular among the elderly, but it is hard to control [11]. By comparison, the Product is designed to integrate the strengths of both, satisfying the requirements of low learning cost and high usage rate and complying with ergonomic principles and customers' physical demands, in particular the actual demands of the elderly with partial disability of the lower limbs. Firstly, from the perspective of product function, the product should meet the needs of short-distance travel, shopping and entertainment for the elderly with partial disability of the lower limbs, accommodating one person and providing some storage. Considering that the ability of the elderly to learn and accept new things is reduced, complex functions should be simplified and the difficulty of operation reduced.
Secondly, from the perspective of product appearance, considering the psychological factor that some of the elderly refuse to be labeled as "old people", the design should be optimized to reduce rigid inflection points and acute angles through smooth, rounded lines and harmonious, comfortable proportions, giving the product a light and energetic appearance in line with the aesthetic preferences of the elderly [12]. As for color matching, based on the suggestive effects of different colors described in color psychology, colors that soothe emotions and relieve fatigue should be selected [13]. Finally, as far as the product structure is concerned, the body dimensions of the mobility scooter should be adapted to the average height and bone structure of the elderly in Asia, and the overall structure should adopt a foldable design that is convenient to carry and store. Because of injuries to the knee and ankle joints and decreased muscle strength, the elderly with partial disability of the lower limbs have a certain difficulty in the transition from sitting to standing [14]. Therefore, an auxiliary standing function should be added to the design of the handlebar of the mobility aid. With increasing age, the hand muscle strength of the elderly decreases by 16%-40%, and arm strength decreases by about 50%; standing assistance at the handlebar can not only reduce the burden of standing for the elderly but also slow down muscle injury of the upper limbs [15]. Typical User Personas In order to further understand users' needs, a target user was selected for an in-depth interview, and a user persona was constructed based on her physiological characteristics, psychological characteristics, behavioral habits and actual needs. The interviewee is Zhang Lan, 76 years old, who lives in an urban community with her husband. When she was young, she loved sports and was in good health.
She was a volleyball player at school and often took physical exercise such as swimming and running. After retirement, her physical function gradually declined, and her physical strength dropped rapidly. Originally, she could ride a tricycle to take her granddaughter to and from school. Since she developed rheumatoid arthritis and hyperosteogeny, her legs have become obviously bent. Now she feels sore and weak after walking for 1 km, which makes long-distance travel difficult to sustain. She is highly educated, willing to accept new things, in a good mental state, and still has a high willingness to participate in community activities. At present, her means of transport is a human-powered tricycle, which can meet her daily travel needs but still has deficiencies such as a bulky body and excessive physical exertion. From the combined interview results and literature analysis, the basic information of the typical user persona is obtained, as shown in Figure 2. Design Scheme Effectiveness By analyzing the persona of the target user and summarizing the design principles, the design schemes are as follows. Scheme 1, shown in Figure 3, plan A: The overall shape of the Product resembles a wheelchair. The backrest chair conforms to ergonomics, and the backrest and seat surface fit the curves of the back, waist and buttocks of the human body, improving sedentary comfort. The parts of the chair in contact with the human body are made of soft, comfortable leather that is easy to clean and highly durable. The overall effect highlights superior taste and texture as well as economy and practicality. The wheels are made of natural rubber (NR), which helps buffer external shocks and guarantees the vehicle's driving performance. The tire tread is a straight groove pattern, which ensures operational stability, reduces rolling resistance and noise, facilitates drainage and avoids sideslip.
On the back of the backrest seat there is a net woven from 9 durable parachute cords, which is hard-wearing, easy to stretch and able to store some objects. Weakness of the design: direction is controlled using a front/back/left/right operating lever, giving low steering flexibility and awkward operation. Scheme 2, shown in Figure 3, plan B: The body of the Product is made of ABS polymeric structural material, which enhances body strength, weakens the influence of impacts and resists both high heat and low temperature. The traditional handlebar design improves steering flexibility, and the hand-operated brake simplifies braking to ensure driving safety. The seatback adopts a vinyl coating to enhance UV filtering and carries a foldable dual-purpose canopy. A closeable storage box is located under the seat to protect stored articles. The tire tread adopts a transverse groove pattern to increase driving force, traction and abrasion resistance, and the front and rear of the vehicle are equipped with two kinds of lights to light up the road ahead and warn passers-by. Weakness of the design: the overall frame, with its foldable part, is large, which makes it hard to carry and store. Scheme 3, shown in Figure 3, plan C: The overall frame of this scheme is similar to that of Scheme 1 but optimized in terms of steering control. With a 360° omnidirectional operating lever, steering is more intelligent and smoother. The seatback and seat, designed with an anti-slip pattern on the leather, are safer. Damping wheels are adopted, in which the damping springs are enclosed inside the frame, giving excellent dust-proofing and anti-winding properties. The damping wheels use double tapered roller bearings, which effectively prevent the scooter body from shaking during high-speed traction.
The tires are made of styrene-butadiene rubber (SBR), which is superior to natural rubber in abrasion resistance, heat resistance and ageing resistance. Weakness of the design: the overall shape resembles a medical wheelchair, which overlooks the elderly's psychological resistance to such associations as "wheelchair", "unhealthy" and "agedness". The final scheme sketch in Figure 6 was produced on the basis of the three schemes above by analyzing and comparing their strengths and weaknesses. The Product should meet the short-distance travel demands of customers with partial disability of the lower limbs and assist them to travel in the residential area and nearby, and even the surrounding fields and streets, as specified for travel in the Activities of Daily Living (ADL) scale. The portable electric scooter can help the elderly save energy and reduce joint wear while travelling, and extend their travel distance while maintaining a favorable mental state. The body of the Product is made of ABS polymeric structural material, which enhances body strength, weakens the influence of impacts and resists both high heat and low temperature. The backrest chair conforms to ergonomics, and the backrest and seat surface fit the curves of the back, waist and buttocks of the human body, improving sedentary comfort. The traditional handlebar design improves steering flexibility, and the hand-operated brake simplifies braking to ensure driving safety. The seatback adopts a vinyl coating to enhance UV filtering and carries a foldable dual-purpose canopy. A closeable storage box is located under the seat to protect stored articles. Damping wheels are adopted, in which the damping springs are enclosed inside the frame, giving excellent dust-proofing and anti-winding properties.
The damping wheels use double tapered roller bearings, which effectively prevent the scooter body from shaking at high speed. The tires use a transverse groove pattern to increase driving force, traction, and abrasion resistance, and the front and rear wheels are fitted with two kinds of lights to illuminate the road ahead and warn passers-by. The product rendering in Figure 7 was produced from the preliminary sketch by analyzing and comparing its strengths and weaknesses. Compared with the scheme sketch, the final scheme adjusts some details while retaining the body-curve design of the seat to ensure comfort. An annular grip is designed for the handlebar, making steering more flexible and smoother; the grip fits the contour of the human hand and supports both a straight and a side hold. A hand-operated electromagnetic brake replaces the grip brake and foot brake, making braking more convenient for the elderly. The annular grip, which is rigidly linked to the car frame, can also support the elderly when getting out of the vehicle. The seatback is filled with silicone memory foam to ensure sedentary comfort. The storage bag below the seat is made of synthetic spider-silk polymer textile with high fibre strength, so it resists tearing; a fabric plate inside the bag prevents deformation and folds easily. The damping wheels use PU foamed tires, eliminating the risk of flats. The tires adopt a combined vertical and horizontal groove pattern, merging the advantages of straight and transverse patterns: high stability, low rolling resistance, strong driving, braking, and traction forces, and good drainage, slip resistance, and abrasion resistance. The car body frame is made of ABS polymeric structural material, and the whole vehicle can be folded for easy carrying and storage.
Based on the psychological implications of different colours, the appearance is offered in combinations such as red and yellow, green and blue, and green and yellow, which can calm emotions and relieve fatigue. The car body is equipped with auxiliary devices such as LED headlights and warning taillights. A GPS positioning device, connected to a mobile-phone map, can locate the vehicle at any time, preventing the elderly from getting lost or losing the vehicle.

Product Performance Test

To test whether the Product has a practical effect for the elderly with partial disability of the lower limbs, we selected a typical user and conducted a comparison test between the Product and traditional means of transport such as a multi-function wheelchair. The test covered appearance design, ergonomics, operating procedure, and use effect. Based on the travel-distribution diagram of the elderly with partial disability of the lower limbs, the respondents' feedback and experience with the Product and the multi-function wheelchair were tested using the product experiment scale based on the IADL assessment standard, taking as destinations the communities, surrounding parks, squares, and supermarkets within 3 km of the respondents' residence; see Figure 3 for the results. Firstly, the elderly with partial disability of the lower limbs are more likely to accept a product whose appearance design caters to their psychological needs. Secondly, the means of transport should satisfy ergonomic principles and human comfort needs and compensate for the physiological limitations of the elderly with partial disability of the lower limbs. Thirdly, the product should be as simple and convenient to operate as possible, so that the elderly can use it independently.
The product should also be easy to carry and store. Fourthly, in terms of use effect, the product can effectively increase the travel distance of the elderly with partial disability of the lower limbs, shorten travel time, and improve the score on the product experiment scale based on the IADL assessment standard to within 2 to 3 points; ideally it should not hinder their normal travel, reaching the ideal travel effect with a score of 0 to 1 point.

Conclusion

Against the background of global population aging, this research targets the marginalized group of the elderly with partial disability of the lower limbs and conducts an innovative design practice for their short-distance travel. During the research we investigated and analyzed the basic definition of the elderly with partial disability worldwide, studied the social status of the elderly with partial disability of the lower limbs in China, communicated in depth with typical target users, determined user needs, and drew portraits of the target groups. By analyzing ergonomics and product-design principles and comparing the efficacy of different materials, the final design and renderings were completed. Owing to the lack of industry research on the elderly with partial disability, the data collection, collation, and induction in this research are relatively weak, and the analysis of product structure and functionality still lacks depth. As a preliminary exploration for the elderly with partial disability of the lower limbs, this research hopes to provide a new design direction for China's increasingly aging population and to help these elderly groups transform their travel experience and enjoy their later years.
Attribute Selection Hybrid Network Model for risk factor analysis of postpartum depression using social media

Background and objective: Postpartum Depression (PPD) is a frequently overlooked birth-related consequence. Social network analysis can help address this issue, because social media platforms let users communicate with friends and share opinions, photos, and videos that reflect their moods, feelings, and sentiments. In this work, depression in recently delivered mothers is identified using the PPD score, and the mothers are segregated into control and depressed groups. Deep learning methods have recently played a vital role in detecting depression; however, they still do not explain why particular people are identified as depressed. Methods: We developed the Attribute Selection Hybrid Network (ASHN) as a framework for diagnosing postpartum depression, followed by an analysis of posts by mothers whose condition was confirmed by a score that field experts calculated from a physiological questionnaire. The model analyzes the attributes of negative Facebook posts for depressed-user diagnosis, Facebook being a large general forum. The framework explains the process of analyzing posts containing sentiment, depressive symptoms, and reflective thinking, and identifies psycho-linguistic and stylistic attributes of depression in posts. Results: The experimental results show that ASHN works well and is easy to interpret. Four attribute networks grounded in psychological studies were used to analyze the different parts of posts by depressed users. The experiments cover the extraction of attributes based on psycho-linguistic markers, the reporting of assessment metrics including precision, recall, and F1 score, and the visualization of those attributes title-wise and word-wise, compared across daily life, depression, and postpartum depression using word clouds.
Furthermore, the ASHN model was compared against baseline models. Conclusions: The Attribute Selection Hybrid Network (ASHN) captures the importance of attributes in social media posts to predict depressed mothers, where the mothers were identified as depressed via a questionnaire designed by domain experts with prior knowledge of depression. This work will help researchers examine social media posts for useful evidence of other depressive symptoms.

Introduction

Postpartum depression (PPD) is a type of depression that affects women after giving birth. According to the World Health Organization's 10th revision of the International Statistical Classification of Diseases and Related Health Problems (2009), PPD is a "behavioural and psychological problem" that occurs within the first six weeks after childbirth. PPD is more common in women than men [1], and it can involve a range of emotional symptoms, such as crying, worry, sadness, sleep problems, confusion, and irritability. PPD is associated with suicidal thoughts and usually requires specialized treatment. A more severe form, postpartum psychosis, occurs in a small percentage of women (0.1-0.2%) and is characterized by symptoms such as restlessness, sleep disturbances, paranoia, disordered thinking, impulsivity, hallucinations, anxiety, and delusions. Postpartum psychosis is a severe condition that requires immediate treatment and is especially common in mothers aged 35 or older. It typically peaks in the first two weeks after delivery.
There is growing recognition among professionals that postpartum depression (PPD) significantly affects a mother's relationships with her family, spouse, and baby, as well as the mother-infant connection and the long-term emotional and cognitive development of the child [2][3][4][5]. PPD is associated with a poor quality of life and affects the language used in social media activity [6]. Many studies have attempted to identify depressed individuals by analyzing language use on social media, focusing on differences in word usage between depressed and non-depressed groups [7][8][9][10][11][12]. Some studies have tried to predict depression by comparing subjects with depression to control groups [13,14]; others have used sentiment analysis techniques, based on the idea that people with depression are more likely to express negative emotions. However, previous research has often relied on small datasets and has not effectively explained its detection results in terms of key domain concepts. A few studies have used neural network approaches [15][16][17], but deeper research is still needed on subsequent steps such as the diagnosis and prevention of PPD. According to the findings of Hoyun et al. [18] and Eichstaedt and colleagues [19], individuals may post about their depression and therapy on social media, and the language used on Facebook can accurately predict depression recorded in medical records. De Choudhury and colleagues [8,9] developed a statistical model to predict extreme postpartum behavioural changes in new mothers based on linguistic and emotional correlates. Reece and colleagues [20] built computational models to predict the likelihood of post-traumatic stress disorder in Twitter users. These studies demonstrate the potential of social media as a source of signals for predicting present or future episodes of depression.
This research aims to investigate the attributes associated with the worsening of PPD, in order to facilitate new methods for identifying at-risk mothers and to guide effective therapies. It provides a digital safety-net framework to support new mothers through a significant life transition. It builds on previous research that found links between depression and specific linguistic characteristics, and it aims to expand the scope of social-media-based mental health measures by creating a framework that recognizes text-based signs of postpartum depression [21]. This work can be beneficial in identifying at-risk mothers early and offering them timely support. The novel contributions are as follows:

Highlights

1. A novel hybrid attribute selection model is proposed for the prediction of postpartum depression.
2. The Attribute Hybrid Networks are tested on a unique dataset that includes both the PDSS questionnaire and the social media posts of the recruited individuals.
3. The model applies a theory of depression to select each attribute, using interconnected neural networks and a post-level attention layer.
4. The experimental results demonstrate that the Attribute Selection Hybrid Network outperforms other baseline models.
5. The study employs title-wise and word-wise word-cloud visualization to compare daily life, depression, and postpartum depression.
6. The proposed Attribute Selection Hybrid Network model is also capable of predicting various mood disorders.
Related work

Various studies are being conducted to gain new insight into diagnosing PPD by analyzing the association between mental health and language usage [22]. Studies of depression and other mental health illnesses have become richer as social media and the Internet have evolved. Online platforms such as Facebook, Twitter, and Reddit open new research opportunities by offering vast amounts of text data and social information that can be used to understand women's behavioural tendencies. Machine learning (ML) and deep learning (DL) techniques have been used to analyze textual data and investigate the impact of social networks on users' mental health. Existing research is analyzed here from multiple perspectives, including the text and framework levels.

New mothers' depression detection in social media

According to polls conducted by Nielsen Wire in 2012, 72% of mothers use Facebook rather than other social media platforms to express their feelings [23]. Over the past several years, several studies have explored the social media usage of women who have recently given birth, focusing on blogging [24], pregnancy and motherhood forums [24,25], and Facebook. McDaniel and colleagues [24] found that the posting frequency of new mothers was related to their feelings of interpersonal connectedness to extended family and friends, and to their expressions of social support and maternal welfare. Gibson and Hanson [26] found, in ethnographic studies, that new mothers saw Facebook as a valuable platform for creating a new identity, maintaining social connections after giving birth, and finding information and comfort about their decisions and worries in raising a baby. According to Schoenebeck [27], posts on the anonymous message board YouBeMom.com define new social norms and expectations that shape the culture of online mothers. All those previous
studies suggest that online social technologies give new mothers opportunities to use their social networks and to find a liberating outlet for conversing, venting, and exchanging parenting information with other new mothers. This study continues to explore streams of online social activity to better understand the role played by online social groups in supporting PPD sufferers, and the consequences when such support is absent. In related research, De Choudhury et al. analyzed tweets from new mothers to discover [28] and forecast [29] significant postpartum behavioural changes. Rather than accessing actual data on PPD outcomes, those investigations relied on identifying substantial changes on Twitter. To the best of our knowledge, this report is the first study to predict postpartum depression from new mothers' Facebook usage in conjunction with PDSS scores.

Various frameworks for processing social media data

Numerous studies have been undertaken to explain classification findings from neural networks, to examine the essential attributes contributing to performance, and to strengthen it further [30,31]. Various vision-related investigations have employed neural visualization of representations learned in successive layers to provide human-interpretable data [32]. Several researchers have applied interpretable approaches to natural language processing, concentrating primarily on interpreting vector-based models for various applications. Instead of analyzing input patterns to study activated internal neurons, Palangi et al. [33] interpreted lexical-semantic meanings and grammatical functions for each word based on internal representations. However, the inability to provide a detailed explanation is a drawback of interpreting a result by studying attention or neurons. Kshirsagar et al.
[34] attempted to generate explanations of detected results for suicidal posts by using representation learning. However, they applied the attention mechanism only to the words within a post, which is limiting. Applying the attention mechanism to a mother's posts proved difficult, since the proportion of posts containing depression indicators was small: most of a new mother's posts did not carry information sufficiently useful for depression detection. We therefore interpret the attribute representations related to various depression factors, learned by the hybrid attribute networks, to understand which attributes are strongly activated during depression detection; the concepts discussed above are used to this end.

Summary of research gaps

Predicting PPD has not been widely researched, likely because of the challenges of collecting longitudinal data on mothers' behaviour over a long period. Traditional methods, such as observation and in-person interviews, can be expensive and intrusive, making it hard to collect enough data to draw meaningful conclusions. Such approaches mostly rely on labour-intensive manual attribute gathering and show limited classification performance. Domain knowledge about the attributes plays a vital role in predicting PPD from posts shared on social media. The advent of online social platforms like Facebook has opened new opportunities for research in this area. Existing techniques, however, still do not clearly explain why certain newly delivered mothers are labelled as depressed.

Materials

This section introduces the dataset used for this study's experiments, describing its primary characteristics, the corresponding task, and the evaluation criteria.
Ethical clearance

Data collection for this study was approved by the Institutional Ethical Committee (IEC) at SRM Medical College and Research Center (SRMC&RC) in Chennai, India. Data was collected in 2022 from mid-April to mid-July. Each participant signed a consent form indicating that she had read and understood the terms and conditions of the study. All data collection and analysis were conducted under the applicable ethical guidelines and regulations.

Participant selection

Clinicians identified a potential participant pool for the study, and data collection was done efficiently. Using a sequential participant-selection method, mothers who had given birth at SRMC&RC in Chennai, India and came for post-checkups within six weeks of delivery were included in the study. This allowed data to be collected from mothers at a critical postpartum time, increasing the chances of identifying postpartum depression.

Inclusion criteria

Participants were informed about the objective of the study and voluntarily agreed to participate, without any pressure or reservation, based on the following criteria:

• Mothers between the ages of 19 and 35 who had given birth.
• Participants able to read and comprehend the study's details and mentally competent to complete the consent form.
• Any type of delivery (spontaneous or induced); mothers could be primigravida or multigravida.

These criteria ensure that the study sample is representative of the population of mothers who have given birth within a specific age range. Additionally, including mothers with different types of delivery and parity increases the generalizability of the findings.

Exclusion criteria

Individuals were not eligible to participate in the study based on the following criteria:

• Mothers with multiple fetal pregnancies.
• Mothers who conceived through IVF treatments.
• Mothers with a complicated obstetric history.
• Mothers with high-risk pregnancies, such as gestational diabetes mellitus, preeclampsia, chronic disease, or fetal anomalies.

These criteria exclude groups of mothers who are at higher risk of postpartum depression and whose experiences may not represent the overall population of mothers who have given birth. Additionally, these groups of mothers may have distinct medical needs and be unable to participate fully in the study.

Dataset collection

3.3.1 Postpartum Depression Screening Scale (PDSS) survey

Data were collected from volunteers who had given birth at SRM Medical College and Research Center and participated in psychological research. Participants completed the Postpartum Depression Screening Scale (PDSS), a questionnaire that produces a depression score on a scale of 0-63 [35]. In addition to the questionnaire, data related to the child and the childbirth experience were collected, such as the child's birth date and whether the child was first-born [36]. Demographic information such as the mother's age, family income, and occupation was also gathered. The survey also asked how the mothers use social media platforms, such as Facebook, to share their thoughts and status.

The PDSS is an online questionnaire with well-documented psychometrics for the English version. It has seven components: problems with sleeping and eating, anxiety and insecurity, emotional lability, mental confusion, loss of one's sense of self, guilt and shame, and thoughts of ending one's life. Each dimension comprises five items, each describing an emotion a woman may experience after the birth of her child. The evaluation was based on the score and confirmed by clinical experts. Participants who scored above a certain threshold were considered affected by postpartum depression and were used for further analysis.
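The PDSS scoring and grouping logic described in this study can be sketched as follows. This is a minimal illustration, not the instrument itself: the item structure is hypothetical, and only the total-score range (0-63) and the paper's later grouping thresholds (control below 11, depressed above 29) come from the text.

```python
# Minimal sketch of PDSS-style scoring and grouping. Item scoring is
# illustrative; the 0-63 range and the thresholds (control < 11,
# depressed > 29) follow the paper's description.

PDSS_DIMENSIONS = [
    "sleeping/eating disturbances", "anxiety/insecurity",
    "emotional lability", "mental confusion", "loss of self",
    "guilt/shame", "suicidal thoughts",
]

def pdss_total(item_scores):
    """Sum per-item scores into a single PDSS total."""
    return sum(item_scores)

def pdss_group(total, control_max=10, depressed_min=30):
    """Map a PDSS total to the study's groups; scores in between
    are excluded from the analysis."""
    if total <= control_max:
        return "control"
    if total >= depressed_min:
        return "depressed"
    return "excluded"

print(pdss_group(7))   # control
print(pdss_group(42))  # depressed
print(pdss_group(20))  # excluded
```

A real deployment would score each of the seven dimensions' five items per the published instrument; the point here is only the total-then-threshold flow used to form the two study groups.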
Facebook data

Under strict privacy safeguards, participants were asked to grant access to their public Facebook pages before answering questions. All information accessible through public personal profile pages was gathered via an API from users who scored above the threshold on the completed questionnaire. Each participant's posts in groups, as well as individual status updates, were collected and analyzed to predict the presence of PPD.

Survey responses

Participants who met the inclusion criteria signed an informed consent form, and their stress levels after delivery were assessed through questionnaires and social media content analysis. The data used for this analysis was collected from 496 profiles. No personally identifying information that could lead to the identification of individuals was collected. The analysis focused on textual messages, specifically post-delivery posts written in English. As a result, the text messages written on personal profiles by mothers with postpartum depression primarily concern the birth and the feelings mothers face after delivery. It is important to note that data collected from social media platforms contains significant background noise, and the amount of text produced by each user varies greatly.

Data cleaning and pre-processing

Data cleaning and pre-processing steps were applied to the dataset to remove irrelevant or duplicate data and to format the data so the model could analyze it efficiently. This section describes the procedures for cleaning and processing the dataset prior to the stress-detection task. First, limits on the minimum required text volume and the total number of posts were imposed to filter the dataset. This ensured that only relevant data entered the analysis and that the sample size was large enough to be statistically meaningful.
Description of the cleaning procedures

This paper analyzes the PDSS results and classifies mothers into two groups: a control group, with scores lower than 11, and a depression group, with scores higher than 29. The posts of these two groups of mothers were then retrieved. The data before cleaning is referred to as the "initial data" and the data after cleaning as the "cleaned data". Figure 1 shows the detailed criteria taken into account for the cleaning process.

Fig. 1: Data cleaning.

The original data is quite noisy, as illustrated in Table 1: the standard deviations of the post, sentence, and word counts are double their mean values, and 318 participants lacked sufficient textual volume. A superficial examination of the post data showed that it needed adjustment. The data was next adjusted using regular expressions, removing all stray characters that were neither alphabetic nor punctuation marks. Posts that were too long (over 3000 characters) or too short (fewer than two words) were excluded, as were all users with fewer than ten posts. Applying these procedures to the raw data yielded 631 cleaned user profiles.

The psychologists involved in the study divided the data into two groups based on the Postpartum Depression Screening Scale (PDSS) and predefined values established by the medical society. Within the cleaned data, all mothers scoring less than 11 were categorised as the non-risk (control) group, and those scoring greater than 29 as the depression group. Individuals scoring between these values were excluded from observation; the most thorough alternative would be regression analysis on the raw depression ratings. Following this method, the data population decreased to 314 users, as shown in Table 2.
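The filtering rules described above (strip non-alphabetic, non-punctuation characters; drop posts over 3000 characters or under two words; drop users with fewer than ten posts) can be sketched as a short pipeline. The exact regular expression and punctuation set used in the study are not given, so the character class here is an assumption.

```python
import re

# Sketch of the stated cleaning rules, assuming posts are plain strings
# grouped per user. The allowed-character set is illustrative; the paper
# only says non-alphabetic, non-punctuation characters were removed.

KEEP = re.compile(r"[^A-Za-z\s.,!?;:'()\-]")

def clean_post(text):
    """Remove characters that are neither letters nor punctuation."""
    return KEEP.sub("", text).strip()

def clean_users(users):
    """users: {user_id: [post, ...]} -> cleaned subset per the rules."""
    cleaned = {}
    for uid, posts in users.items():
        kept = []
        for p in posts:
            p = clean_post(p)
            if len(p) <= 3000 and len(p.split()) >= 2:  # length limits
                kept.append(p)
        if len(kept) >= 10:  # users need at least ten surviving posts
            cleaned[uid] = kept
    return cleaned

demo = {"u1": ["short ok post!"] * 12, "u2": ["hi"] * 3}
print(sorted(clean_users(demo)))  # ['u1'] - only u1 meets all rules
```

The group split (control below 11, depressed above 29) is then applied to the surviving profiles, as described in the text.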
Of these, 99 were classified as the control group (those who did not exhibit signs of depression) and 215 mothers as the depression group.

Methods

This section details the depression findings and the hyperparameters of the model used. Fig. 2 depicts the entire network architecture of the Attribute Selection Hybrid Network model. It consists of recursive attribute networks that evaluate the posts, together with interconnected networks; each attribute network follows a pre-existing theory of depression, and a post-level attention layer sits on top of the networks. The following parts explain how each network and the post-level attention operate, the reasoning behind their design, and how they are implemented.

Experimental setup

Tables 3 and 4 present the hyperparameters used with the selected models, similar to [37]. The computing infrastructure was a GeForce GTX 1080 GPU, with a uniform sampling strategy and a training duration of 1234 s. All models were trained with the Adam optimizer, with stochastic gradient descent as the training method [38]. Across the previous models, the convolution size, number of convolutional filters, pooling type, pooling length, and number of dense layers were all comparable.

Attribute Selection Hybrid Network models (ASHN)

Domain experts with expertise in depression use this approach to identify depression in mothers, whose social media posts contain both positive and negative mood expressions. The positive posts by the depressed mothers were filtered out, and only the negative posts (296/358) were retained for further analysis. These served as the model's input to the attribute networks, in order to identify which attributes of the depressed mothers' negative posts contribute to predicting depression. Fig. 2 shows the Attribute Selection Hybrid Network models; each coloured circle in the attribute networks represents a different attribute.

Domain expertise is utilized to identify pertinent signs of impending depression. For this purpose, four neural networks are created, one for each of the four categories of strong depression symptoms drawn from psychological studies. In the following, a plain symbol (e.g., x) denotes a scalar, a bold lower-case symbol (e.g., x) a vector, and a bold upper-case symbol (e.g., X) a collection of vectors or a matrix.

• Psycholinguistic style (morphological order) (A1)

Some research suggests that people who suffer from depression display distinct linguistic styles, such as differences in the distribution of nouns, verbs, and adverbs and in the complexity of their sentences; these styles are formed unconsciously [39]. That work served as the foundation for the first type of attribute network, which aims to recognize different writing styles. Beyond the styles themselves, attention is paid to word order and the distribution of part-of-speech tags. Consequently, a post is fed to the network as a sequence of part-of-speech tags. The network converts the sequence into one-hot vectors whose dimension equals the number of part-of-speech tags, and uses an RNN to encode them into an attribute vector a1, as shown in Fig. 3.
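The front end of the A1 network, a part-of-speech sequence turned into one-hot vectors and folded by a recurrent encoder, can be sketched with a toy Elman-style recurrence. The tag set, hidden size, and random weights below are illustrative only; the paper's network is trained end-to-end and its dimensions are not specified here.

```python
import numpy as np

# Toy sketch of the A1 pipeline: POS tag sequence -> one-hot vectors ->
# simple recurrent encoding into an attribute vector a1. Tag set,
# dimensions, and weights are illustrative, not the trained model.

TAGS = ["NOUN", "VERB", "ADJ", "ADV", "PRON", "OTHER"]
TAG_IDX = {t: i for i, t in enumerate(TAGS)}

def one_hot(tag):
    v = np.zeros(len(TAGS))
    v[TAG_IDX.get(tag, TAG_IDX["OTHER"])] = 1.0
    return v

def rnn_encode(tag_seq, hidden=8, seed=0):
    """Fold a tag sequence into one vector: h_t = tanh(W_in x_t + W_h h_{t-1})."""
    rng = np.random.default_rng(seed)
    W_in = rng.normal(0, 0.1, (hidden, len(TAGS)))
    W_h = rng.normal(0, 0.1, (hidden, hidden))
    h = np.zeros(hidden)
    for tag in tag_seq:
        h = np.tanh(W_in @ one_hot(tag) + W_h @ h)
    return h  # plays the role of the attribute vector a1

a1 = rnn_encode(["PRON", "VERB", "ADV", "ADJ"])
print(a1.shape)  # (8,)
```

The same one-hot-then-RNN shape is reused by the sentiment network (A2), with sentiment categories in place of POS tags.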
• Sentimental words (A2) The cognitive theory proposes that those who suffer from depression are more likely to exhibit negative thought patterns and negative feelings.As a result, there exists a hypothesis that anyone who is depressed has a greater propensity to express a negative polarity on their postings more frequently than other users on social media.The attribute extraction network is proposed on the above belief that it will identify such behaviour by considering the sentiments expressed in posts (1) a 1 = RNN x pos as shown in Fig. 4. Towards this end, SentiWordNet made use of computing sentiment scores for each word.By converting all of the words in a post into one of three categories-positive, neutral, and negative-SentiWordNet's, and then we use a Recurrent Neural Network (RNN) to encode the one-hot vectors into an attribute vector ( a 2 ). • Depressive symptom words (A3) It appears that the most distinguishable behavioural pattern of mothers who undergo PPD, posts the comments that are specifically associated with a particular depression symptom.The attribute network shown in Fig. 
5 is proposed to find words that are related to depression symptoms in posts, which is based on this discovery.In order to determine which symptom is associated with depression, a dictionary was compiled with evidence keywords using terms taken from the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-V) [40].This has helped to determine to find out the symptoms that are associated with PPD.The lexicon includes 76 keywords pertaining to nine categories of symptoms described by DSM-V.To compute the similarity between one given post and tokens of the dictionary in order to capture each mothers' piece of evidence found in posts relating to one of nine symptoms.In the first step of this process, element-wise multiplication was used to combine the word vectors corresponding to each symptom category into a single vector.As a result of this, a symptom matrix was generated that consists of representative vectors for each category.The matrix displays the degree of similarity between an encoding vector of posts and the matrix.In the final step, the Multi-Layer Perceptron(MLP) was used to project the matrix onto the attribute vector ( a 3 ). (2) a 2 = RNN (x sent ) Fig. 3 A schema of attribute network to analyze the Psycholinguistic style of the posts by delivered mothers on social media Fig. 4 A schema of attribute network for analyzing the sentiments of the posts by delivered mothers on social media such as positive, negative, and neutral Fig. 
5 A schema of the attribute network that predicts depressive symptoms in posts by delivered mothers on social media

• Ruminative response style (A4) The ruminative response style manifests itself in the form of repetitious behaviours and thoughts. People who suffer from depression tend to continuously express their sentiments or dwell on unfavourable situations, which can lead to sentences on the same topics repeatedly appearing in their online posts. On the basis of this theory, a network that identifies the frequency with which particular stories concerning pertinent issues are repeated is implemented, as shown in Fig. 6. The dot product of two vectors is computed to determine the degree of relevance between a specific post and the others, and the degree of significance of each post is derived from this information. An MLP then converts this degree into an attribute vector designated a4.

Each post demonstrates a unique level of depressive traits, so it is vital to take the weights of the attributes into consideration before integrating the attribute networks. To classify a user based on the analysis of their posts, a weight vector is produced that indicates which attribute is the most representative, the attribute vectors are multiplied by these weights, and a post vector that considers all of the attributes is built by element-wise summation of the weighted attribute vectors:

p' = Σ_{i=1}^{4} w_i a_i    (3)

The weights indicate the contribution of each attribute in classifying the post, which helps to explain how and why depression develops; this interpretation of the behaviour is obtained by examining how the weights change.
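The attribute-weighting step just described (one weight per attribute network, then an element-wise weighted combination into a post vector) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the use of a softmax to normalise raw attribute scores, and all numeric values, are assumptions.

```python
import math

def softmax(xs):
    """Normalise raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def combine_attributes(attribute_vectors, raw_scores):
    """Weight each attribute vector a1..a4 by its normalised weight and
    merge them by element-wise summation into one post vector p'."""
    weights = softmax(raw_scores)
    dim = len(attribute_vectors[0])
    post_vector = [0.0] * dim
    for w, a in zip(weights, attribute_vectors):
        for i in range(dim):
            post_vector[i] += w * a[i]
    return weights, post_vector

# Toy attribute vectors for A1..A4; in the paper these come from the
# attribute networks, here the values and raw scores are invented.
a1, a2, a3, a4 = [1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.2, 0.8]
weights, p = combine_attributes([a1, a2, a3, a4], [0.1, 0.9, 0.3, 0.5])
print(weights, p)
```

Because the weights sum to one, they can be read directly as each attribute's contribution to the classification, which is what the interpretation step relies on.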
Post-level attention

Even someone who struggles with depression may not always convey depressed emotions through the postings they make on social media. For this reason, the preliminary phase consists of a questionnaire analysis, and its results were utilised in conjunction with the assessments of the medical experts. Moreover, not all posts of the individuals covered by the questionnaire will reflect depressive traits. As a result, it is essential to carefully choose and weight such postings in accordance with the importance of their respective roles. Similar to the hierarchical attention method [41], an attention mechanism is applied to the posts. To calculate the importance of the postings, a context vector (v) is introduced and scored against each post vector (p'):

α_i = exp(p'_i · v) / Σ_{j=1}^{M} exp(p'_j · v)    (4)

o = MLP( Σ_{i=1}^{M} α_i p'_i )    (5)

where M is the number of posts and o is the output vector for classifying depression using an MLP.

Metrics

In this study, two neural network-based embedding models were compared with regard to positive measures (F-measure, recall, true positives, accuracy, precision) and negative measures (true negatives, false positives, false negatives). The precision and recall of a model, in addition to its F1 score, are the metrics used to evaluate its accuracy. The outcomes of a model's class predictions, correct or incorrect, can be broken down into the following four categories:

Fig. 6 A schema of the attribute network for the ruminative response style

• False negatives occur when the model incorrectly predicts the negative class (absence of depression symptoms).

Recall measures how accurately the model identifies true positives, while precision is the ratio between true positives and all predicted positives. AUC stands for "Area under the ROC Curve."
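The post-level attention just described (a context vector scored against each post vector, with the scores normalised over the M posts) can be sketched as follows. This is a generic dot-product attention reconstruction under the stated definitions, not the authors' implementation, and all vectors are invented.

```python
import math

def attention_pool(post_vectors, context):
    """Score each post vector against a context vector, normalise the
    scores with a softmax, and return the attention weights plus the
    weighted sum of the posts (the input to the classifying MLP)."""
    scores = [sum(pi * vi for pi, vi in zip(p, context)) for p in post_vectors]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]
    dim = len(post_vectors[0])
    pooled = [sum(alphas[i] * post_vectors[i][d] for i in range(len(post_vectors)))
              for d in range(dim)]
    return alphas, pooled

# Toy post vectors (M = 3 posts); the context vector v is learned in the
# paper, but fixed here for illustration.
posts = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
v = [0.0, 1.0]
alphas, summary = attention_pool(posts, v)
print(alphas, summary)
```

Posts most aligned with the context vector receive the largest weights, which is how only a few depression-relevant posts can dominate the user-level decision.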
AUC provides an aggregate measure of performance across all possible classification thresholds. ROC stands for Receiver Operating Characteristic; the ROC curve measures the performance of a classification model by plotting the true-positive rate against the false-positive rate.

GloVe [42] is used to embed word vectors, and a GRU [43], an RNN variant, is used to encode the sequence. Dropout and L2 regularisation are used to improve generalisation. We chose 0.001 for the learning rate and 0.0001 for the L2 regularisation rate, and set the dropout rate for each model separately: 0.3 for the baseline and 0.2 for our model. Words that appeared more than five times are kept in the vocabulary, and the rest are replaced with UNK tokens.

Results

In order to choose the posts for analysis, various criteria were considered, such as the titles and the content of the posts. The titles and contents of the posts were visualised using word clouds to determine which posts were related to PPD, similar to [44]. To identify PPD-related posts and anticipate the most regularly used terms, the most frequently used words in the titles of posts in each category were visualised.

The word clouds plotted for the title and content of each post category are depicted in Figs. 7 and 8. Despite the variations in the keywords used to retrieve the posts, the daily-life category (Fig. 7a, b) has a significantly distinct set of terms from the other two categories; it can be seen that Figs.
7a, 8a and 7b, 8b share numerous term occurrences. There is an evident variation in frequency of usage, even among terms that appear in both the PPD and depression categories. Furthermore, certain frequently occurring words, such as baby, PPD, and birth, were found only in the PPD group, which makes sense considering that PPD relates to parents and parenting journeys. Overlap in word usage between title and content can be seen by comparing the corresponding word clouds in Figs. 7 and 8, which share several terms; these were used to cross-check the content vector calculations in the post-level attention framework.

Each post was split into a sequence of tokens, and part-of-speech tagging was performed using Stanford CoreNLP [45]. Posts whose number of tokens was either smaller than 5 or bigger than 100 were discarded. About 245 posts were then randomly selected from the whole set of posts for each user and used for training. The neural models encode posts to vectors using a convolutional neural network (CNN) and then merge the post vectors into a single vector, which distinguishes the conventional network from the attribute network based on human-defined attributes.

To complete this task, we used the scikit-learn machine learning library [46]. The examination of the data is carried out with the assistance of Multinomial Naive Bayes (MNB) and Support Vector Machine (SVM) models. Normalisation and scaling were applied to every attribute set. Grid-search iterations are utilised to fine-tune the hyperparameters of the classification algorithms. The results of diagnosing depressed users on the collected test set are presented in Table 5, which reveals two important characteristics of the ASHN, based on the post-level attention weights and the change in the effect of posts with high attention.
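The precision, recall, and F1 values reported in Table 5 follow directly from the four outcome counts defined in the Metrics section. A minimal computation, with invented labels (1 = depressed):

```python
def classification_metrics(y_true, y_pred):
    """Count TP/TN/FP/FN for binary depressed/non-depressed labels
    and derive precision, recall, and F1 from the counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"tp": tp, "tn": tn, "fp": fp, "fn": fn,
            "precision": precision, "recall": recall, "f1": f1}

# Invented ground truth and predictions for illustration only.
m = classification_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(m)
```

The same counts feed the AUC computation, which simply sweeps the classification threshold and re-derives the true- and false-positive rates at each point.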
• ASHN and our baseline model have similar F1 scores, but their performance balance differs: ASHN has higher precision than recall, whereas the trend in the baseline is the opposite. To figure out why, we analysed the post-level attention weights from both models to find important factors and interpret the ways in which depressed users are grouped. From each of the nearly 215 users labelled as depressed by both models, we chose the top 100 posts (20%), i.e. those with the highest attention weights under each model. We found that, on average, only 46 of the baseline's and ASHN's selected posts are the same. This means that when the two models produce different attention weights, it usually leads to different results and performance when detecting posts.
• In addition, to analyse the change in the effect of posts with high attention produced by the two models, we examined the attention weights of the top 100 posts, averaged over the nearly 215 users. Interestingly, the baseline's highest attention weight is marginally more prominent than ASHN's. Based on this, the baseline classifies users based on a small number of posts with high attention weights, while ASHN classifies users more evenly, based on a larger number of posts. This means that if only a small number of posts are corrupted, there is a higher chance of baseline inaccuracy; conversely, because the ASHN classification is based on a more significant number of posts, its results are more trustworthy. This explains why the two models have different precision and recall scores.
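The top-100 overlap analysis described in the first bullet reduces to selecting the k highest-attention posts under each model and intersecting the two index sets. A small sketch with invented attention weights:

```python
def top_k_overlap(weights_a, weights_b, k):
    """Select the k posts with the highest attention weight under each
    model and count how many posts the two selections share."""
    top_a = set(sorted(range(len(weights_a)), key=lambda i: weights_a[i],
                       reverse=True)[:k])
    top_b = set(sorted(range(len(weights_b)), key=lambda i: weights_b[i],
                       reverse=True)[:k])
    return len(top_a & top_b)

# Toy attention weights over 6 posts from two models (values invented,
# not the paper's data); in the paper k = 100 over each user's posts.
baseline = [0.9, 0.8, 0.1, 0.2, 0.05, 0.3]
ashn = [0.1, 0.7, 0.6, 0.2, 0.9, 0.05]
print(top_k_overlap(baseline, ashn, 3))
```

Averaging this count over all commonly-labelled users yields the reported figure of 46 shared posts out of 100.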
Discussion

By inspecting the learned representations, we use ASHN to interpret what the detection results mean. We chose a group of almost 290 depressed mothers whose depression symptoms were confirmed, and for each mother we took a sample of the top 100 posts with the most attention and the bottom 100 posts with the least attention. We also chose a group of almost 120 depressed mothers who were found to be false negatives, and for each of these users we picked their top and bottom 100 posts in the same way. The average attribute weights for each of the four resulting classes, i.e. the typical amount of attention paid to each attribute, are shown in Table 6. We examine instances from each class in Table 6 to ensure that ASHN provides results sufficient to meet the objectives.

• A1: Psycholinguistic style The morphological writing style of a post (A1) has a relatively minor impact on the ability to detect depression compared to the other attribute networks. Every word in a post is assigned a tag that corresponds to its part of speech, explaining the purpose of the word; the tags are determined from the connections between the individual words that make up the phrase, using machine-learning models. The Penn Treebank corpus offers the most widely used tag notations, defining a total of 48 part-of-speech (POS) tags in accordance with their respective applications. Our analysis shows that an increase in the A1 weight is accompanied by an increase in the number of verb phrases: Table 7 demonstrates that, compared to the various forms of nouns, an increase in the frequency of verbs results in a proportional rise in attention. This seems to imply that mothers who have issues with their mental health display a distinct level
of sentence complexity when it comes to their language [39].

• A2: Sentimental words Regarding the second attribute weight (A2), it is discovered that posts with a higher A2 weight (0.63) display higher attention scores, which implies that sentiment information is important in detecting depressed users. Table 8 displays the most common words and their polarity in the group of posts with high A2 weights. The word 'hopeless', which has a negative polarity, for example, does not appear in the group of low-A2-weighted posts from users in the TP-High and TP-Low classes, whereas it appears 978 times in the high-A2-weighted group. Many posts containing these phrases are tied to the practice of so-called "self-attention", in which users regularly discuss their feelings or experiences; the frequency of the word "I" in all posts from the two classes (TP-High and TP-Low) was also analysed.

Conclusions

This research has developed deep learning methods to improve PPD detection beyond the traditional labour-intensive approaches that use manual intervention for attribute collection. The proposed Attribute Selection Hybrid Network (ASHN) mimics the process of detecting depressed mothers through their social media posts. The input to the models consists of posts gathered from PPD-depressed women, who were identified as depressed through a questionnaire designed by domain experts. The attributes chosen for predicting PPD, namely psycholinguistic style, sentimental words, depressive symptom words, and ruminative response style, are based on the advice of the domain experts. Furthermore, the model focuses more on the attributes of depression-related sentences, which matches well the real-world situation in which only a few posts are relevant to depression, even for depressed
mothers. Thus, ASHN uses a post-attention mechanism to carefully choose and weight posts according to the importance of their respective roles, based on context vectors. It also enables interpretation of why a particular post is connected to depression in terms of psychological-study aspects, by analysing the keywords used in PPD as well as general depression, as depicted in the word-cloud visualisations; this is helpful for subsequent clinical investigation of depressive symptoms.

Fig. 2 Framework of the Attribute Selection Hybrid Network deep learning models

• True positives are results in which the model successfully predicts the presence of depression symptoms.
• True negatives are results in which the model successfully predicts the absence of depression symptoms.
• If a model incorrectly predicts the presence of depression symptoms (the positive class), the result is known as a false positive.

Fig. 7 Word clouds for the titles of posts in the PPD, depression, and daily-life categories
Fig. 8 Word clouds for the contents of posts in the PPD, depression, and daily-life categories

Table 1 Statistical information regarding the various data preparation phases; values are presented as mean ± standard deviation
Table 2 Statistics of participants based on depression annotation scores
Table 3 Hyperparameter search spaces
Table 4 The hyperparameters used in the proposed model
Table 5 Results of evaluation on the test set; a and b indicate the MNB and SVM classifiers, respectively, and bold values indicate better results than the other attribute selection methods
Table 6 True-positive and true-negative values of each attribute based on high and low attention
Table 7 Polarity for various part-of-speech tags
Table 8 Collection of frequent words in PPD mothers' posts and their polarity, determined using SentiWordNet

The proportion of posts with a high A4 weight (A4 > 0.14) is 14.8% in the TP-High class, compared to 0.4% in the TP-Low class. This demonstrates that individuals with mental health issues have a high level of self-awareness [47]. However, due to limited computational capacity, as mentioned in Sect. 4.1, our model uses less training data as input, resulting in inferior performance compared to the leading model, which uses three times as much data as ours and is trained in a less interpretable manner. It is believed that as computational power increases, our model has the ability to outperform the state-of-the-art model. In this paper, only binary classification (depressed or non-depressed) is considered: psychological scores of less than 11 and greater than 29 are included. ASHN currently consists of only four attributes based on depressive psychiatric studies. It is clear that the number of samples gathered for analysis, the number of attributes taken for analysis, and the computing power of the model play a predominant role in enhancing performance. Given the widespread adoption of these technologies, there is now a window of opportunity to observe and analyse long-term user behaviour patterns among mothers after giving birth. This can provide valuable insight into the attributes contributing to PPD and help develop better early identification and treatment methods. Because our model employs high-dimensional neural-network representations, it allows the incorporation of other high-level attributes, and adding
other valuable attributes to the model will enable us to generate more reasonable and diverse explanations for many elements of depression. The diagnosis process can be recreated similarly if adequate attribute networks can be constructed for other mental disorders (such as dementia, schizophrenia, and bipolar disorder). Moreover, the task can be extended to multi-class classification, with individuals scoring between 11 and 29 labelled as mild, moderate, severe, etc., cases.

6.1 Limitation and future work

Table 9 Phrases from posts with high A3 and A4 weights as examples

Acknowledgements I (Abinaya Gopalakrishnan) would like to acknowledge the University of Southern Queensland, SRM Institute of Science and Technology, and SRM Institute of Science and Technology Medical College and Research Centre (SRMC & RC) for providing me a scholarship and a study population to collect the data to carry out this research work. Finally, I would like to thank Mrs. Arthi, former Assistant Professor, Department of English, SRM Institute of Science and Technology, Ramapuram, Chennai, for extensive English revision of our work.
Klatskin Tumor in the Light of ICD-O-3: A Population-Based Clinical Outcome Study Involving 1,144 Patients from the Surveillance, Epidemiology, and End Result (SEER) Database (2001-2012) Introduction Klatskin tumors (KTs) occur at the confluence of the right and left extrahepatic ducts and are classified based on their anatomical and histological codes in the International Classification of Diseases for Oncology (ICD-O). The second edition of the ICD-O (ICD-O-2) allocated a distinctive histological code to KT, which also included intrahepatic cholangiocarcinoma (CC). This unclear coding may result in ambiguous reporting of the demographic and clinical features of KT. The current study aimed to investigate the demographic, clinical, and pathological factors affecting the prognosis and survival of KT in the light of the updated third edition of the ICD-O (ICD-O-3). Methods Data of 1,144 patients with KT from the Surveillance, Epidemiology, and End Result (SEER) database (2001-2012) were extracted. Patients with KT were analyzed for age, sex, race, stage, treatment, and long-term survival. The data were analyzed using chi-square tests, t-tests, and univariate and multivariate analyses. The Kaplan-Meier analysis was used to compare long-term survival between KT and subgroups of all biliary CCs. Results Of all biliary CCs, KT comprised 9.35%, with a mean age of diagnosis of 73±13 years, and was more common in men (54.8%) and Caucasian patients (69.5%). Histologically, moderately differentiated tumors were the most common (38.9%), followed by poorly differentiated (35.7%), well-differentiated (23.3%), and undifferentiated tumors (2.2%) (p<0.001). Most tumors in the KT group were 2-4 cm in size (41.5%), while fewer were >4 cm (29.7%) and <2 cm (28.8%) (p<0.001). ICD-O-3 defined most KTs in an extrahepatic location (53.5%), while the remainder were in other biliary locations (46.5%) (p<0.001).
Most KT patients received no treatment (73%), and for those who were treated, the most frequent modality was radiation (52.7%), followed by surgery (28.1%), and both surgery and radiation (19.2%) (p<0.001). Mean survival time for KT patients treated with surgery was inferior to all CCs of the biliary tree (1.72±2.61 vs. 1.87±2.18 years) (p=0.047). Multivariate analysis identified regional metastasis (OR=2.8; 95% CI=2.6-3.0), distant metastasis (OR=2.1; 95% CI=1.9-2.4), lymph node positivity (OR=1.6; 95% CI=1.4-1.8), Caucasian race (OR=2.0; 95% CI=1.8-2.2), and male sex (OR=1.2; 95% CI=1.1-1.3) as independently associated with increased mortality for KT (p<0.001). Conclusion The ICD-O-3 has permitted a greater understanding of KT. KT is a rare and lethal biliary malignancy that presents most often in Caucasian men in their seventh decade of life with moderately differentiated histology. Surgical resection does not provide any survival advantage compared to similarly treated biliary CCs. In addition, the combination of surgery and radiation appeared to provide no added survival benefits compared to other treatment modalities for KT. Introduction Klatskin tumors (KTs), also known as hilar cholangiocarcinomas (CCs) or Altemeier-Klatskin tumors, first described in late 1965 and named after Dr. Gerald Klatskin, are a rare entity of extrahepatic CCs arising at the confluence of the right and left hepatic ducts [1,2]. It is the most common type of CC, accounting for approximately 60-80% of all CCs reported each year in the United States [3]. KT also accounts for approximately 2% of all cancer diagnoses, with an overall incidence of 2-4 cases/100,000 population/year, and is seen slightly more frequently in males (male:female ratio of 1.3:1) [4]. Almost two-thirds of KT cases occur in patients over the age of 65 years, with a near 10% increase in patients over 80 years of age [5].
KTs are classified by the International Classification of Diseases for Oncology (ICD-O) based on their unique anatomical code (topographical code) and histological code (morphological code) in the Surveillance, Epidemiology, and End Result (SEER) database [6]. In version 1 of the ICD-O (ICD-O-1), KTs were not assigned a unique histological and topographical code, and they were reported as either intrahepatic or extrahepatic CC [6]. In version 2 of the ICD-O (ICD-O-2), the unique histological code of KT is included in the topographical code of intrahepatic CC, resulting in a considerable error in reporting KT [6]. To examine the impact of this misclassification on site-specific CCs, Welzel et al. calculated the annual percentage changes from the SEER database using the ICD-O-2 classification. They identified 269 KTs between 1992 and 2000 in the SEER database using ICD-O-2; 91% (246 of 269) of the KTs were incorrectly coded as intrahepatic CCs, resulting in an overestimation of intrahepatic CC incidence by 13% and an underestimation of extrahepatic CC incidence by 15% [6]. This coding error also partly explains the rise in the incidence of intrahepatic CC in the United States over the last decade and the decrease in the incidence of extrahepatic CC [6]. Additionally, this reporting error of KT in the SEER cancer registries makes it impossible to define KT incidence precisely on a population-based level. The current study examined a large cohort of KT patients from the SEER database in an effort to precisely identify the demographic, clinical, and treatment strategies in the light of the updated version 3 of the ICD-O (ICD-O-3), which may impact the clinical outcomes in the current KT cohort. Materials And Methods Data for the current study were extracted from the SEER database provided by the National Cancer Institute between 2001 and 2012.
SEER Stat software version 8.3.4 (National Cancer Institute, Bethesda, MD, USA) was used to extract data from 18 SEER registries (Alaska Native Tumor Registry, Arizona Indians, Cherokee Nation, Connecticut, Detroit, Georgia Center for Cancer Statistics, Greater Bay Area Cancer Registry, Greater California, Hawaii, Iowa, Kentucky, Los Angeles, Louisiana, New Jersey, New Mexico, Seattle-Puget Sound, and Utah). A total of 1,144 patients with histologically confirmed KT and a primary diagnosis under the SEER ICD-O-3 histology code 8162/3 were identified to form the final study cohort, and their data were exported to IBM SPSS Version 20.2 (IBM Corp., Armonk, NY, USA). Demographic and clinical data extracted included age, sex, race, tumor stage, tumor size, primary tumor site, and type of treatment received (surgery, radiation, both, or unknown/no therapy). The term "no treatment" refers to the lack of reported treatment. Patients with in situ cancers, those with a nonspecific site of tumor origin, and those in whom histologic confirmation of their cancer was not available were excluded from the final study cohort. The endpoints examined included overall survival and cancer-specific mortality. Categorical variables were compared using the chi-square test, and continuous variables were compared using Student's t-test and analysis of variance. Multivariable analysis using the "backward Wald" method was performed to calculate odds ratios (ORs) and determine the independent factors affecting survival. Missing and unknown data were excluded from multivariate analysis. The Kaplan-Meier analysis was used to compare the long-term actuarial survival between the groups. Statistical significance was set at p<0.05.
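The Kaplan-Meier comparison mentioned above rests on the product-limit estimator: at each observed death time, survival is multiplied by the fraction of at-risk patients who survive that time, with censored patients leaving the risk set without contributing a drop. As a rough, self-contained illustration (not the SEER*Stat or SPSS workflow used by the authors, and with invented follow-up data):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate. `times` are follow-up times in
    years; `events` flags death (1) vs. censoring (0). Returns a list of
    (time, survival probability) pairs at each death time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        n_at_t = 0
        # Group all subjects tied at time t (deaths counted before censorings).
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            n_at_t += 1
            i += 1
        if deaths:
            survival *= 1.0 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= n_at_t
    return curve

# Invented follow-up data (years, event flag); not SEER values.
curve = kaplan_meier([0.5, 1.0, 1.0, 2.0, 3.0], [1, 1, 0, 1, 0])
print(curve)
```

Reading cumulative survival at 1, 2, and 5 years off such a curve is how figures like the study's 24.4%, 11%, and 2.8% are obtained.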
Outcomes The longest survival was seen among KT patients receiving both surgery and radiation (2.0±2.3 years), followed by surgery alone (1.7±2.6 years), those receiving neither surgery nor radiation (1.7±2.6 years), and finally those receiving radiation alone (1.0±1.3 years), but the differences were not statistically significant. Additionally, the mean survival time for those KT patients treated with surgery was inferior to all CCs of the biliary tree (1.72±2.61 vs. 1.87±2.18 years; p=0.047). The overall cumulative survival for KT at one year was 24.4% (N=279), at two years was 11% (N=126), and at five years was 2.8% (N=32) (Figure 1, Table 3). Discussion KT is an uncommon malignant tumor originating from the epithelium of the common hepatic duct or its first and second bifurcations [2]. Previous studies on CC have reported an increasing incidence of intrahepatic CC and a decreasing incidence of extrahepatic CC in the United States [6,7]. KTs are anatomically defined as extrahepatic CCs and could have affected these trends. In version 1 of the ICD-O (1973-1991), KTs were not assigned a unique ICD-O code and were coded as either intrahepatic CC or extrahepatic CC [6]. In ICD-O version 2, KTs were assigned a unique histology code (8162/3), which also included intrahepatic CC. Thus, KTs may have been misclassified as intrahepatic CCs under these versions of the ICD-O [6]. Based on the current study, KT is most prevalent among male Caucasians in the seventh decade of life, which is consistent with previous studies [8]. Also, KTs are most commonly located in the extrahepatic ducts, exhibit a locoregional spread, and have a tumor size of 2-4 cm. Additionally, the current study also found that distant metastasis occurred in 15.3% of KT patients.
A careful review of the literature demonstrates that tumor extension with lymph node metastasis and neural invasion is a characteristic feature of KT, with the incidence of nodal involvement in resectable tumors ranging from 30% to 50% [9][10][11][12][13]. To assess the status of the regional and para-aortic lymph nodes in KT, Kitagawa et al., in an institutional study of 110 patients, found that the pericholedochal nodes in the hepatoduodenal ligament were the most common sites of metastasis for KT [9]. Histologically, KTs are mostly moderately to well-differentiated biliary-type adenocarcinomas, which is consistent with the findings of this study [14]. These tumors are characterized by abundant tubules and glands in a typical desmoplastic stroma along with a variable inflammatory response [14]. Furthermore, advances in radiological imaging have permitted better delineation and improved sensitivity in detecting KT lesions [15]. Traditionally, the initial radiographic assessment for KT has always been a transabdominal ultrasound, which is cost-effective and easily accessible but cannot determine the type of obstruction and extent of tumor involvement [15]. Computed tomography (CT) is the most frequently used imaging modality and demonstrates an acceptable accuracy (>80%) in assessing ductal, portal vein, and hepatic artery involvement; however, it cannot accurately determine lymph node involvement and underestimates peritoneal involvement [15,16]. Recently, magnetic resonance cholangiopancreatography (MRCP) has gained tremendous popularity among surgeons owing to its ability to precisely predict the resectability of KT (>80%) [16][17][18]. KT appears as a hypointense signal on T1-weighted images and with high signal intensity on T2 imaging [15,19]. In addition, the role of positron emission tomography (PET)/CT in evaluating the local resectability of KT remains unclear [20].
Currently, it may be useful when assessing metastatic disease but has no clear role in helping to evaluate issues of local resectability [15,19,20]. Traditionally, KT is treated with surgical resection alone. Although surgical resection with negative margins is the only hope for a cure, only a small subset of patients is amenable to surgery at the time of diagnosis [21,22]. Complete surgical resection is the most critical prognostic factor for survival; however, total or near-total resection is challenging because of the close anatomical relationship of the bile duct bifurcation with the portal vein bifurcation and hepatic arteries [5,22]. In the current study, most of the KT patients who were offered treatment received radiation alone as a primary treatment, followed by surgery, and the survival was poor (1.08±1.26 years). The role of adjuvant or neoadjuvant radiation therapy in the management of KT has always been controversial owing to the lack of prospective randomized controlled trials [23]. Some retrospective studies have demonstrated the beneficial effects of radiation therapy in augmenting survival rates in patients with KT. In a recent study from Japan, Todoroki et al. examined 63 patients with stage IVa KT, of which 21 patients underwent resection only and 42 patients received either intraoperative radiation therapy (IORT), postoperative radiation therapy (PORT), or both [24]. The locoregional control rate was significantly greater in the adjuvant therapy group than in the resection-alone group (79.2% vs. 31%). The actual five-year survival was also significantly better in patients treated with resection + IORT + PORT (39.2%) than in those who received resection alone (13.5%) (p=0.0141) [24]. Furthermore, the role of chemotherapy in the management of KT is not well established. To the best of our knowledge, the most extensively investigated chemotherapeutic agent for KT management is 5-fluorouracil (5-FU) [25].
Several small studies have used single-agent systemic chemotherapy drugs including 5-FU, cisplatin, rifampicin, mitomycin C, paclitaxel, and gemcitabine [25]. These studies failed to establish an acceptable response rate and efficacy of the single-agent chemotherapeutic regimen for the management of KT [25]. Because of the poor response rates with single-agent chemotherapy, several authors have used combination chemotherapy in an attempt to achieve better response rates and longer survival. A prospective randomized trial by the Eastern Cooperative Oncology Group (ECOG) led by Falkson et al. compared 34 patients with unresectable KT treated with either oral 5-FU or cyclohexyl-chloroethyl-nitrosourea (CCNU), demonstrating a partial response rate of only 9% [26]. Limitations There are several limitations to this study that should be considered. First, the SEER database does not accurately code for all critical clinical factors such as socioeconomic status, geography, tumor depth, and method of diagnostic confirmation, which may have influenced survival. Second, information on diagnostic imaging and follow-up is lacking. Data on surgical and radiation therapy were available in the SEER database; however, data on chemotherapy received were not, which limited the ability of this study to evaluate the impact of adjuvant or neoadjuvant therapy. There may also be an element of selection bias since SEER registries are more likely to sample from urban areas than from rural areas. Despite these limitations, the SEER database has data obtained from 14% of the U.S. population, and these findings can be generalized to the overall population. Conclusions KT is a rare and highly malignant tumor of the biliary tract that is associated with poor survival. A considerable error in reporting KT was observed in the SEER ICD-O-2 classification system using a histological code for KT, which also included intrahepatic CC.
To our knowledge, the current study represents the largest KT cohort according to the updated ICD-O-3 classification in the SEER database, establishing more precise reporting of the demographics, management, and clinical outcomes. KT is more common among Caucasian males in the seventh decade of life and tends to occur in the extrahepatic ducts with locoregional presentation and a size of 2-4 cm, with up to 15.3% of patients developing distant metastasis. Although surgery remains the primary method of treatment for KT, radiation therapy has emerged in some studies as a promising adjunct to treatment, increasing overall survival. Future studies optimizing the dosage of radiation regimens are needed to establish the relationship between the multimodal approach to treatment and its impact on survival. All KT patients should be enrolled in clinical trials or registries to allow for more defined multimodality management to optimize clinical outcomes for these patients.

Additional Information

Disclosures

Human subjects: All authors have confirmed that this study did not involve human participants or tissue. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
The Effect of Automatization of the Phonological Component on the Reading Comprehension of ESP Students

Prompted by the recent shift of attention from just focusing on top-down processing in L2 reading towards considering the basic component, bottom-up processing, the role of the phonological component has also enjoyed popularity among a selected circle of SLA investigators (Koda, 2005). This study investigated the effect of the automatization of the phonological component on the reading comprehension of ESP students. After administering the reading section of a TOEFL test, sixty participants out of one hundred and thirty were selected from among ESP students volunteering to participate in this study. These sixty participants were randomly assigned to two groups, namely a control group and an experimental group. The result of the pre-test revealed that there was no significant difference between the two groups prior to the treatment. Then, over the period of one semester, the experimental group received reading instruction through the automatization of the phonological component (i.e., pronunciation practice) and the control group received reading instruction based on the traditional approach, which relied on literal translation of English words and sentences into their Persian equivalents. After the treatment, all the participants took the post-test. The result of an independent-samples t-test indicated that teaching reading through the automatization of the phonological component was more effective than the traditional approach to reading instruction. The result of this study is considered to be useful for methodological issues related to reading instruction and also teacher education programs. Moreover, the findings of this study have theoretical implications for SLA researchers.
Introduction

The reading process has been studied and continues to be studied through the eyes of diverse schools of thought. The information processing approach has been an active contributor for the past 20 years, with computers playing a strong role since 1980. Although information processing theorists have contributed to the understanding of reading, a great amount of investigation remains for understanding the reading comprehension process, and an even greater need for understanding second language reading (Koda, 2005). Since the 1970s, the focus of most studies has been on the investigation of top-down processing, while little attention has been paid to lower-level processing, e.g., phonological processing. Recently, upon the emergence of Interactive Approaches to SL reading, and the unavoidable contribution of bottom-up components, the pendulum of L2 reading research has swung back towards the investigation of lower-level, component processing (Koda, 2005). Reading, as one of the most attention-grabbing language skills, is a complex cognitive skill which uses various interactive processes. One of the alluring issues in this skill is the relationship between the phonological component and semantic representation. From this perspective, it is known that information of several sorts (phonetic, lexical, syntactic, and pragmatic) is processed during the comprehension process as the meaning is constructed (Bialystok et al., 2003; Blaiklock, 2004; Byrne, 1991; Carroll & Snowling, 2004; Coltheart et al., 1988; Harris and Coltheart, 1986). That said, comprehension in reading, which is affected by both lower-level components and higher-level components, is a multifaceted set of processes, not an all-or-none operation. The line of evidence in favor of the effect of the phonological component comes from several sources.
In this regard, most of the recent body of research holds the view that there is an interaction between the processing of the physical stimuli (bottom-up processing) and the context provided by expectation and previous knowledge (top-down processing) (Carrell et al., 1998). In line with this perspective, Koda (1992) indicates that lower-level verbal processing skills (e.g., phonological processing) constitute one of the four major reader-related skills. Koda further adds that little attention has been paid to the relationship between lower-level verbal processing skills and reading comprehension, but a number of theorists in cognitive psychology claim that deficiency in lower-level processing operations strains the limited capacity of short-term memory and inhibits text integration into a meaningful sequence (e.g., Leong et al., 2005; Lesaux & Siegel, 2003; Nation & Snowling, 2004; Perfetti, 1986). Based on these findings, Koda holds that efficient lower-level verbal processing operations are essential for successful performance in FL reading comprehension tasks. More specifically, one of the lower-level verbal processing mechanisms is phonological processing, which is the task of linking printed letters to phonemes. This reading-specific processing is especially difficult for foreign language readers because of the lack of one-to-one correspondence between phonemes and graphemes.
In one study, Deacon and Kirby (2004) took into account the roles of morphological and phonological awareness in reading development. It was a longitudinal study which took 4 years. They compared two factors, namely phonological and morphological awareness, in three aspects of reading development: pseudoword reading, reading comprehension, and single word reading. The results of their study revealed that morphological awareness contributed significantly to pseudoword reading and reading comprehension, after controlling for prior measures of reading ability, verbal and nonverbal intelligence, and phonological awareness. This contribution was comparable to that of phonological awareness and remained 3 years after morphological awareness was assessed. In contrast, morphological awareness rarely contributed significantly to single word reading. They argued that these results provided evidence that morphological awareness has a wide-ranging role in reading development, one that extends beyond phonological awareness.
Another study, by Nassaji and Geva (1999), investigated the role of phonological and orthographic processing skills in adult second language reading. The subjects were 60 ESL graduate students; all were native speakers of Farsi. Three types of ESL reading measures were used as criterion variables: reading comprehension, silent reading rate, and the ability to recognize individual words. Data were analyzed using correlational and hierarchical multiple regression. The analysis of the collected data revealed that efficiency in phonological and orthographic processing contributed significantly to individual differences on the reading measures. In particular, efficiency in orthographic processing contributed to the reading measures independently of syntactic and semantic measures. The study suggested that it is useful to consider individual differences in ESL reading with respect to individual differences in lower-level processes, particularly the efficiency with which readers process phonological and orthographic information. This research (Nassaji and Geva, 1999) indicated that information about individual differences in the efficiency with which L2 readers process phonological and orthographic information helps us to understand individual differences in ESL reading. It suggested that the role of lower-level graphophonic processing should not be overlooked in L2 reading, even when readers are proficient adult L2 readers. Droop and Verhoeven (2003) gave much importance to the role of oral language proficiency in reading comprehension, because L2 reading comprehension skills are more dependent upon lexical knowledge than L2 decoding skills. Bilingual Turkish-Dutch children, although comparable in word recognition, performed more poorly in reading comprehension than their monolingual Dutch-speaking peers. The authors attributed this lower level of comprehension to lower performance in syntactic ability and oral fluency. Measures of Dutch oral language proficiency
included both expressive and receptive vocabulary tasks and an expressive syntactic task. However, both for native speakers and for L2 speakers, decoding skills played only a minor role in the development of reading comprehension, and according to the authors, decoding and reading comprehension appear to develop as independent skills from third grade on (Droop & Verhoeven, 2003). In agreement with these findings, studies have demonstrated a significant effect of oral language proficiency on L2 reading comprehension, although measures of L2 decoding predicting L2 reading comprehension were not analyzed. Geva and Ryan (1993) conducted a cross-sectional study with 73 students in Grades 5 to 7, who were learning to read in English (L1) and Hebrew (L2) concurrently. Regression analysis showed that Hebrew oral proficiency, as measured by teachers' global ratings, accounted for 29.8% of the variance in Hebrew reading comprehension scores. Corresponding with these results, Lindsey et al. (2003) reported that receptive vocabulary was one of the best predictors of English reading comprehension, but did not account for variance in decoding. Torgesen (2000), having devoted considerable investigation to the issue, summarized the importance of phonological awareness in acquiring accurate word reading skills. According to Torgesen (2000): first, phonological awareness helps children understand the alphabetic principle. Second, it helps children realize the regular ways that letters represent sounds in words. Lastly, it makes it possible to generate possibilities for words in context that are only partially sounded out.
Moreover, as Koda (2005) states, poor readers are uniformly handicapped in a wide variety of phonological tasks. Furthermore, Metsala and Ehri (1998) state that comprehension is a meaning-construction process, which involves integral interaction between text and reader. Extracting phonological information from individual words constitutes one of the first and most important steps in this endeavor. Phonological skills also have a direct, and seemingly causal, relationship with reading ability: knowledge of letter patterns and their linkages to sounds facilitates rapid automatic word recognition; such knowledge evolves gradually through cumulative print-processing experience; and limited word-recognition skills tend to induce overreliance on context (p. 254). The central tenet of the mentioned studies is that links between phonological form and meaning can then produce meaning activation that is indirectly 'mediated' through phonology. One such model is the Interactive Constituency Theory (ICT) (Perfetti & Tan, 1998, 1999, cited in Perfetti et al., 2003). The ICT assumes that a phonological form is routinely activated as part of word identification because it is a constituent of the identified word. This phonological activation is rapid and may precede the direct activation of specific word meaning in many situations. However, the ICT further assumes that phonological activation is diffuse across characters sharing the same pronunciation. William and Lovatt (2003) have considered phonological awareness not only a fundamental factor determining a learner's reading ability but also an important element helping vocabulary learning in both normal and language-impaired adults and children in L2 acquisition. Snowling et al. (1991) have suggested that phonological awareness training should perhaps be incorporated into classroom activities to help young FL learners enhance word recall and pronunciation-learning ability or to ameliorate word-learning problems in FL. Segalowitz et al.
(1991) have also argued that the modification and specification of the word-referent relationship cannot proceed if the phonological pattern is obscure and incomplete. Thus, even though FL word learning is not a simple phonological issue, the establishment of a complete and solid phonological representation for a word still appears to be the first and most important springboard to success in early FL vocabulary acquisition for a young FL learner. Central to the relationship between the phonological component and the semantic component is the concept of automaticity, which receives considerable attention in 'bottom-up' approaches to reading (Eskey, 1988; Torgesen, Wagner, Rashotte, Burgess, & Hecht, 1997; William & Lovatt, 2003). Theoretically speaking, learning is viewed as a complex cognitive skill from the viewpoint of Cognitive Theory and Information Processing Models. Learning a skill, in McLaughlin's terms (1987), requires the automatization of component subskills. McLaughlin (1987) notes that one aspect of second language performance where the automatic/controlled processing distinction is especially relevant is reading. On the basis of research reported by Shiffrin and Schneider (1977), Cziko (1980), and Segalowitz (1986, 2003), he adds that in learning to read, children utilize controlled processing as they move to more and more difficult levels of learning; the transition from controlled to automatic processing at each stage results in reduced discrimination time, more attention to higher-order features, and the ignoring of irrelevant information. Also, Smith (1981, cited in Bar-Shalom) refers to the same point when he claims that through practice the subcomponents, like the phonological component, can be automatized, and controlled processing would be freed for other functions. More specifically, Perfetti et al. (1988, p.
59) suggest that automatic activation of the phonetic properties of a word during word identification routinely occurs in reading, while others believe that recoding of graphemic input into phonetic information does not occur. Their consensus answer seems to be that adult readers often use unmediated routes (a visual route to semantic representation) (Byrne, 1991; Bar-Shalom et al., 1993). From a pedagogical point of view, some researchers indicate that emphasizing perfect pronunciation can reduce comprehension (e.g., Rigg in Carrell et al., 1988, p. 215). Instead, others note that an awareness of the linguistic structure of words (both phonological and morphological) is vitally important to successful reading and spelling (e.g., Bar-Shalom et al., 1993, p. 197; Carlisle et al., 1993, pp. 177-179; Cupples et al., 1992, p. 272). Therefore, to sum up, within the framework of cognitive theory, learning is a cognitive process which requires the integration of a number of different skills, each of which has been practiced and made routine (McLeod and McLaughlin, 1986; Segalowitz, 2003). According to McLaughlin (1987, p. 134), cognitive theory stresses the limited information processing capacities of human learners, the use of various techniques to overcome these limitations, and the role of practice in stretching resources so that component skills that require more mental work become routinized, thereby freeing controlled processing for other functions. He further continues that as automaticity develops, controlled search is bypassed and attentional limitations are overcome. The acquisition of a complex cognitive skill, such as learning a second language, is thought to involve the gradual accumulation of automatized subskills and a constant restructuring of internalized representations as the learner achieves increasing degrees of mastery.
In summary, consistent with many L1 studies and some recent L2 studies (e.g., Haynes & Carr, 1990; Koda, 1992), the present research provides evidence for the utility of a multivariate information processing model in ESL reading. It suggests that L2 reading theories should take into account the role played by different component processes in L2 reading, including efficient phonological processing.

Participants

The participants of the present study were 60 undergraduate ESP students selected out of 130 ESP students volunteering to participate in this study. In fact, these 60 students were screened based on their scores on the reading section of a TOEFL test and were regarded as being of nearly the same proficiency level. The participants included both male and female students. Their ages ranged from 18 to 25.

2.2.1 The Pre-test: A test consisting of the reading comprehension parts of a TOEFL test, which included 50 multiple-choice items. This TOEFL test had, in fact, two purposes: a) to homogenize the students and b) to specify the learners' ability in comprehending texts before going through the procedures of this study.

The Post-test: To see whether the automatization of the phonological component would have any significant effect on ESP students' reading comprehension improvement, another actual TOEFL test (reading section only, 50 items) was administered.
Procedure

The participants were randomly assigned to two groups, control and experimental. The approach employed in the control group was the traditional approach to teaching reading, which was based on literal translation of the texts and answering some reading comprehension questions with no emphasis on pronunciation, while in the experimental group the researcher used different techniques to improve pronunciation. The treatment phase, which lasted for a period of one semester, involved practice on pronunciation. More specifically, it involved: a) awareness of the phonological form of letters, clusters of letters, and words in hierarchical stages of identification, repetition, discrimination, and production; b) transcription of words, phrases, and sentences into the phonetic alphabet and transcription of the phonetic forms of words, phrases, and sentences into conventional alphabets; c) practice on reading phrase-by-phrase, clause-by-clause, and sentence-by-sentence, and practice on oral timed reading without reference to meaning.

Statistical Analysis

In order to answer the research question, the mean scores of the control and experimental groups were compared using an independent-samples t-test. This was done to see if there was any significant difference between the performance of the control and experimental groups on the post-test.

Results

In order to analyze the gathered data, first the mean scores of the experimental and control groups on the pre-test were compared with each other; second, the mean scores of the experimental and control groups on the post-test were compared with each other.
With regard to the statistical data presented in Table 1, a mean score of 34.73 with a standard deviation of 9.74 was obtained for the control group, while a mean score of 34.93 with a standard deviation of 10.31 was obtained for the experimental group on the pre-test. Therefore, it can be concluded that the two groups were homogeneous in terms of their reading comprehension. Also, as shown in Table 1, the t-critical value is higher than the t-observed value of 0.077 at the 0.05 level of significance, i.e., t(58) = 0.077. The Sig. (2-tailed) value of 0.93 is higher than the assumed significance level of 0.05, which indicates that there was no statistically significant difference between the control and experimental groups prior to the initiation of the treatment. Table 1 also indicates that there was a significant gain in the mean score of the experimental group after the treatment. A mean score of 35.7 with a standard deviation of 10.7 was obtained for the control group, while a mean score of 42.3 with a standard deviation of 13.3 was obtained for the experimental group on the post-test. Also, as far as the results of the independent-samples t-test are concerned, with 58 degrees of freedom the t-observed value at the 0.05 level of significance, i.e., t(58) = 2.11, exceeds the t-critical value, which means that the observed difference between groups is meaningful. The Sig. (2-tailed) value of 0.039 is also smaller than the assumed significance level of 0.05; therefore, it can be concluded that the automatization of the phonological component as a new approach to reading instruction is significantly better than the traditional approach to reading instruction. The treatment thus enhanced the reading comprehension ability of the experimental group on the post-test.
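The post-test comparison above can be checked directly from the reported summary statistics. The sketch below recomputes the pooled-variance independent-samples t statistic from the group means and standard deviations; the group size of 30 per group is an assumption implied by the 60 participants split into two groups, not stated explicitly in the text.

```python
import math

def independent_t(mean1, sd1, n1, mean2, sd2, n2):
    """Pooled-variance independent-samples t statistic and degrees of freedom."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (mean2 - mean1) / se, df

# Post-test summary statistics from Table 1 (n = 30 per group assumed)
t, df = independent_t(35.7, 10.7, 30, 42.3, 13.3, 30)
print(f"t({df}) = {t:.2f}")  # close to the reported t(58) = 2.11
```

Running the same function on the pre-test statistics (34.73, 9.74 vs. 34.93, 10.31) reproduces the reported t(58) = 0.077, which supports the reading that n = 30 per group.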
Conclusion and Discussion

The statistical analyses revealed that teaching reading comprehension through pronunciation practice (i.e., automatization of the phonological component) did have a significant effect on the ESP students' reading comprehension. Therefore, it can be said that ESP students benefited from pronunciation practice and phonological awareness in reading instruction more than from the traditional approach to reading instruction. Since the very nature of the reading processing mechanism is not easily investigated, the reading research domain does not seem a promising or inviting one; nevertheless, it is a field that has attracted many researchers, partly due to its very intricacy, and partly because any finding or discovery concerning the reading process would have immediate applications and implications for L2 language learning. The present study was designed to shed some light on the role of phonological processing, as one of the bottom-up processing mechanisms, in the greater efficiency of top-down processing such as comprehension. In agreement with this line of research in L2 reading and with the empirical results obtained in recent studies, it can be argued that abstract models of the reading process in which the role of bottom-up, component processes is systematically neglected have little contribution to make. Such models of reading are, in general, models of the ideal fluent reader with completely developed knowledge systems and skills, whereas the foreign language reader is, almost by definition, a developing reader with gaps and limitations. The results of recent studies, including this one, in which lower-level processes and their routinization and automaticity have been shown to enhance reading comprehension, considerably weaken the common assumption in L2 that the availability of higher-level processes in reading comprehension significantly reduces the contribution of lower-level processes (Nassaji & Geva, 1999). Such studies also
challenge the idea articulated by researchers such as Coady (1979) that, as L2 readers become more proficient (i.e., as they increase their command of L2 vocabulary, syntax, and discourse markers), they move away from using lower-level skills and instead rely on higher-level semantic and syntactic skills (Nassaji & Geva, 1999). As far as cognitive theory is concerned, the present study tried to show that the automatization of the phonological component in a foreign language environment is a useful information-processing technique for overcoming the mind's limited capacity, which can free controlled processing for other functions such as comprehension. In this way, this research contributes to foreign language learning on the basis of empirical findings in the foreign language classroom to determine cognitive theory's worth. This study may also have implications for language teaching and syllabus design. From a practical point of view, a fuller appreciation of the central process of automatization has important implications for foreign language teaching. On the basis of the findings, it is suggested that some time must be devoted in reading classes to the development of relatively bottom-up concerns such as practice on pronunciation. Even students who have developed top-down skills in their native languages may not be able to transfer higher-level skills to a second language context until they have developed a stronger bottom-up foundation of basic identification skills through translation of letters to sounds.

Table 1. Comparing differences between the two groups
Structure of Importin-α from a Filamentous Fungus in Complex with a Classical Nuclear Localization Signal

Neurospora crassa is a filamentous fungus that has been extensively studied as a model organism for eukaryotic biology, providing fundamental insights into cellular processes such as cell signaling, growth, and differentiation. To advance the study of this multicellular organism, an understanding of the specific mechanisms for protein transport into the cell nucleus is essential. Importin-α (Impα) is the receptor for cargo proteins that contain specific nuclear localization signals (NLSs), which play a key role in the classical nuclear import pathway. Structures of Impα from different organisms (yeast, rice, mouse, and human) have been determined, revealing that this receptor possesses a conserved structural scaffold. However, recent studies have demonstrated that the Impα mechanism of action may vary significantly for different organisms or for different isoforms from the same organism. Therefore, structural, functional, and biophysical characterization of different Impα proteins is necessary to understand the selectivity of nuclear transport. Here, we determined the first crystal structure of an Impα from a filamentous fungus, which is also the highest-resolution Impα structure solved to date (1.75 Å). In addition, we performed calorimetric analysis to determine the affinity and thermodynamic parameters of the interaction between Impα and the classical SV40 NLS peptide. The comparison of these data with previous studies on Impα proteins led us to demonstrate that N. crassa Impα possesses specific features that are distinct from mammalian Impα but exhibits important similarities to rice Impα, particularly at the minor NLS binding site.
Introduction

The filamentous fungus Neurospora crassa has been studied by classical and molecular genetics, providing several insights into cellular processes, which include cell signaling, growth and differentiation, secondary metabolism, circadian rhythm, and genome defense [1]. Together by dialysis. NcImpα was stored under cryogenic temperatures in a buffer composed of 20 mM Tris-HCl, pH 8, and 100 mM NaCl.

Synthesis of NLS peptides

The peptide corresponding to the SV40 NLS (residues 125-132, PPKKKRKV) was synthesized by Proteimax (Brazil) with a purity higher than 99%. The peptides contained additional residues at the N- and C-termini compared with the minimally identified NLS [18].

Isothermal titration calorimetry

ITC measurements were performed using a MicroCal iTC200 microcalorimeter (GE Healthcare) calibrated according to the manufacturer's instructions. NcImpα and the SV40 NLS peptide were prepared and dialyzed in buffer (20 mM Tris-HCl, pH 8.0, and 100 mM NaCl). The sample cell was loaded with 50 μM NcImpα that was titrated with the SV40 NLS peptide at a concentration of 1 mM (protein:peptide molar ratio of 1:20). Titrations were conducted at 10°C and consisted of 20 injections of 2.0 μL at an interval of 240 s with a 1000 rpm homogenization speed. The heat of dilution was determined by titration of the peptide sample into the protein sample buffer (20 mM Tris-HCl, pH 8.0, and 100 mM NaCl) in separate control assays and was subtracted from the corresponding titrations. The assay temperature was chosen to avoid the protein aggregation displayed at higher temperatures and to permit direct comparison to previous Impα/SV40 NLS ITC studies performed under the same conditions [19]. The data were processed using MicroCal Origin software to obtain values for stoichiometry (N), dissociation constants (Kd), and enthalpy (ΔH); the binding-type input parameters were adjusted to obtain the best-fitting model.
The values of Kd and ΔH were used to calculate the free energy (ΔG) and entropy (ΔS) values.

Crystallization, X-ray data collection and structure determination

NcImpα was concentrated to 12 mg/ml using an Amicon 30 kDa cutoff filter unit (Millipore) and stored at -20°C. Crystals of NcImpα in complex with the SV40 NLS were obtained at a 1:8 protein:peptide molar ratio in 20 mM Bicine, pH 8.5, and 20% (w/v) polyethylene glycol 6000 at 4°C [20] using MRC2 Well Crystallization Plates and an Orix4 system (Douglas Instruments). X-ray diffraction data were collected from a single crystal of NcImpα/SV40 NLS at a wavelength of 1.0 Å using a synchrotron radiation source (X25 beamline, National Synchrotron Light Source, NSLS, Upton, NY, USA) and a PILATUS detector. The crystal was mounted in a nylon loop and flash-cooled in a stream of nitrogen at -173.15°C using 20% (v/v) glycerol as the cryoprotectant. The crystal-to-detector distance was 270 mm with an oscillation range of 0.5°, resulting in the collection of a total of 720 images. The data were processed using the HKL2000 suite [21]. The crystal belonged to the space group P2₁2₁2₁ (Table 1) and was isomorphous to previously obtained crystals [20]. The NcImpα/SV40 NLS crystal structure was determined by molecular replacement using the program Phaser [22] and the coordinates of Impα from Mus musculus in complex with the nucleoplasmin NLS (PDB ID: 3UL1, chain B; [15]) as the search model. Rounds of manual modeling were performed using the program Coot [23], and the crystallographic refinement (positional and restrained isotropic individual B factors with an overall anisotropic temperature factor and bulk-solvent correction) was performed using the program phenix.refine [24], considering free R factors. Structure quality was evaluated using the program MolProbity [25], and interactions were analyzed using the program LIGPLOT [26].
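Deriving ΔG and ΔS from the ITC-fitted Kd and ΔH, as described above, uses the standard relations ΔG = RT ln(Kd) and ΔG = ΔH − TΔS. A minimal sketch at the 10 °C assay temperature; the Kd and ΔH values below are purely illustrative, since the fitted numbers are not quoted in this excerpt:

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 283.15   # assay temperature, K (10 degrees C, as in the ITC protocol)

def binding_thermodynamics(kd_molar, dH):
    """Derive free energy and entropy from a dissociation constant and enthalpy.

    kd_molar: dissociation constant in M; dH: binding enthalpy in J/mol.
    """
    dG = R * T * math.log(kd_molar)   # dG = RT ln(Kd); negative for Kd < 1 M
    dS = (dH - dG) / T                # rearranged from dG = dH - T*dS
    return dG, dS

# Illustrative values only (not the paper's fitted results)
dG, dS = binding_thermodynamics(kd_molar=5e-8, dH=-30e3)
print(f"dG = {dG/1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol*K)")
```

A 50 nM Kd at this temperature corresponds to roughly -40 kJ/mol of binding free energy; the sign of the entropy term then follows from how exothermic the fitted ΔH is.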
Superposition of Impα structures was performed using the LSQ algorithm [28] available in the software package CCP4 [29].

PDB accession code

Coordinates and structure factors for NcImpα/SV40NLS have been deposited in the PDB under accession code 4RXH.

Structure of NcImpα in complex with SV40NLS

NcImpα was expressed as an N-terminal truncation lacking residues 1-74 (which correspond to the IBB domain [30]) that are responsible for autoinhibition. Furthermore, crystallization was performed in the presence of an NLS ligand to stabilize the truncated protein, as previously reported [17,20], and to investigate the protein-NLS binding features. X-ray diffraction data collection and refinement statistics for the NcImpα/SV40 NLS complex are summarized in Table 1.

[Table 1 footnotes: |Fobs| and |Fcalc| are the observed and calculated structure-factor amplitudes, respectively. Rfree is equivalent to Rcryst but was calculated with reflections (5%) omitted from the refinement process. Coordinate error was calculated based on the Luzzati plot with the program SFCHECK [31]; stereochemistry was calculated with the program PROCHECK [31]. Values in parentheses are for the highest-resolution shell.]

The crystals are not isomorphous to other Impα/SV40NLS complexes [10] and diffracted to high resolution (1.75 Å), which is the highest resolution obtained for an Impα to date. The final model of the NcImpα/SV40NLS complex consists of 428 residues of NcImpα (79-507), the peptide ligands bound to the major (seven residues) and minor (four residues) sites, and 349 water molecules (Table 1). The structure exhibits an elongated and curved shape composed of ten tandem Arm repeats, each containing three α-helices (H1, H2 and H3), as observed in other Impα structures [9,10,16,30,32] (Fig 1). The loop containing residues 461-465 could not be modeled due to an absence of electron density.
The concave surface of the protein maintains the conserved array of Trp and Asn residues and the negatively charged residues that interact with positively charged residues of the NLS ligands.

Binding of the SV40 NLS peptide at the NcImpα major binding site

The SV40 NLS peptide binds the major binding site of NcImpα in an extended conformation with an orientation that is antiparallel to the Arm repeats. The electron density for the peptide in the major binding site is well defined, allowing the unambiguous modeling of the eight peptide residues (125PPKKKRKV132, Fig 2a). The SV40 NLS peptide exhibits a conserved binding mode at the major binding site, analogous to that observed for the mouse (MmImpα; [10]), yeast (ScImpα; [9]) and rice (OsImpα; [16]) Impα proteins. The N-terminal residue (P125) also exhibits a conformation similar to that observed in the structures of OsImpα/SV40 NLS and of MmImpα in complex with an extended SV40 NLS peptide (G110-G132, referred to as CN-SV40NLS) [37]. ScImpα/SV40 NLS and MmImpα/SV40 NLS were crystallized using a truncated version of the SV40 NLS peptide (P126-V132). The average B-factor of the peptide at the major binding site (33.2 Å2) is lower than the average B-factor for the protein (37.7 Å2), indicating a stable interaction of the peptide at this site. The conserved asparagines N150, N192 and N235 of NcImpα stabilize the backbone of the SV40 NLS peptide via hydrogen bonds, whereas the residues W146, W188 and W231 form pockets for the side chains of K129 and K131 at the P3 and P5 positions of the SV40NLS peptide, respectively. K127 of SV40NLS participates in hydrophobic interactions with G195 of NcImpα at the P1 binding site. K128 of SV40 NLS forms hydrogen bonds with G154, A152 and T159 and a salt bridge with D196 of NcImpα, as observed for lysine residues occupying the P2 position in previous structures [9,10].
In the P4 site, the side chain of R130 interacts with L109 and K111 of NcImpα via hydrogen bonds, and with P115 and S153 via hydrophobic interactions. Finally, K131 (position P5) forms hydrogen bonds with Q185 and hydrophobic interactions with F142, and V132 (position P6) forms a hydrogen bond with S110 of NcImpα (Fig 3a, S1 Fig). Interestingly, the N-terminal residue P125 of the SV40NLS peptide, which is not in the P1-P6 positions, forms hydrogen bonds and hydrophobic interactions with R238 and D270 of NcImpα. P125 aids in the stabilization of interactions between the protein and an NLS peptide at the major binding site, as observed previously for the MmImpα/CN-SV40NLS and OsImpα/SV40NLS structures [16,37].

Binding of the SV40 NLS peptide at the NcImpα minor binding site

The electron density for the SV40 NLS peptide at the minor binding site is also well defined and allows the unambiguous modeling of four residues of the peptide (129KRKV132, Fig 2b). As observed for the major binding site, asparagine residues (N319 and N361 in this case) of NcImpα define and guide the backbone of the peptide. The average B-factor for the SV40 NLS peptide bound at the minor binding site of NcImpα is higher than the average B-factor for the protein (39.3 and 37.7 Å2, respectively), indicating a lower stability for the peptide at this site compared with the major site. The interactions between the SV40NLS peptide and NcImpα at the minor site exhibit greater similarity to the interactions between this peptide and OsImpα [16] (Table 2) than to those with MmImpα or ScImpα [9,10]. In the NcImpα/SV40NLS structure, K129 at the P1' position forms hydrogen bonds with G323, V321 and T328 of NcImpα. R130, which occupies the P2' position, is accommodated between the hydrophobic side chains of W357 and W399 and interacts with E396 via salt bridges and with S360 via hydrogen bonds, which results in the lowest B-factor value (36.2 Å2) among the residues of the peptide.
K131 at the P3' position is stabilized by helix dipoles, negatively charged residues and hydrogen bonds with N283, G281, and T322. Finally, V132 at P4' interacts with D280, R315 and N319 of NcImpα via hydrophobic interactions (Fig 3b, S2 Fig).

Affinity of the SV40 NLS peptide for NcImpα

The affinity and other thermodynamic parameters for the association of NcImpα and the SV40 NLS peptide were evaluated using ITC. A protein:peptide molar ratio of 1:20 was sufficient to yield a sigmoidal titration curve representing an exothermic process during complex formation (Fig 4). A two-site, non-symmetrical binding model was selected based on the determined structure, which indicates the presence of two NLS binding sites in the Impα protein with different binding modes. The equilibrium dissociation constants of Kd = 1.23±0.22 μM and Kd = 1.69±0.46 μM (Table 3) indicate that the SV40NLS peptide may bind to both sites but with higher affinity for one site. These results are consistent with previous structural and functional results that indicate the presence of the major and minor binding sites in Impα in complex with the SV40NLS peptide [9,10] and are comparable with ITC experiments performed with the MmImpα/SV40NLS complex [19]. Furthermore, the negative contribution of the enthalpy (ΔH) and the positive value of the entropy (ΔS) suggest that both hydrogen bonds and hydrophobic interactions play a role in this interaction, whereas conformational changes are unfavorable. These data corroborate the structural information obtained in the present study.

Comparison of NcImpα with other Impα structures

The crystal structure of NcImpα resembles other Impα structures [9,10,16] but exhibits a conformation that is more concave compared with MmImpα (Fig 5a). Its superposition (residues 83-504) with other Impα proteins reveals a higher r.m.s. deviation (Fig 5b). The primary sequence identities between
Table 2. Binding to specific pockets of Impα/NLS structures from different organisms.

Several studies have shown that Impα proteins from different families exhibit preferences for specific NLSs [45-48]. Thus, examining the binding mode of the SV40NLS peptide, which has been crystallized with various Impα proteins, may provide insights into the binding specificities of these proteins. Crystal structures of Impα/SV40NLS complexes have been determined for ScImpα [9], MmImpα (in complex with SV40NLS and with an extended SV40NLS peptide, referred to as CN-SV40NLS [10,37]), OsImpα [16] and NcImpα (the present study). In all of these structures, the SV40NLS peptide binds strongly to the major site via several interactions, resulting in well-defined electron density and B-factor values that are similar to or lower than the average B-factor value for the protein. These results are consistent with the affinities determined for the MmImpα/SV40NLS [19] and NcImpα/SV40NLS complexes using ITC, in which both studies demonstrated the presence of two binding sites for the peptide with different affinities. In all of these structures, SV40NLS binding to the major site is essentially identical, with all six positions (P1-P6) of the peptide (127KKKRKV132, Table 2) occupying similar regions of the protein. The main differences are related to the number of peptide residues bound before the P1 position (OsImpα exhibits 4 residues before P1, NcImpα exhibits 2 residues, MmImpα exhibits 1 residue and MmImpα/CN-SV40NLS exhibits 4 residues). However, these differences result from the length of the SV40NLS peptide used in these structural studies rather than from differences between Impα proteins from different organisms.
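The micromolar dissociation constants discussed here can be converted to standard binding free energies at the 10°C assay temperature via ΔG = RT ln Kd, and the entropy then follows from the Gibbs relation. A minimal sketch; the ΔH value below is illustrative only, since the fitted enthalpies are reported in Table 3 rather than quoted in the text.

```python
import math

R = 8.314      # gas constant, J/(mol*K)
T = 283.15     # assay temperature (10 C), in K

def delta_G(Kd):
    """Standard binding free energy (J/mol) from a dissociation constant in M."""
    return R * T * math.log(Kd)

def delta_S(dH, dG):
    """Entropy (J/(mol*K)) from the Gibbs relation dG = dH - T*dS."""
    return (dH - dG) / T

dG_major = delta_G(1.23e-6)   # higher-affinity site, Kd = 1.23 uM -> ~ -32 kJ/mol
dG_minor = delta_G(1.69e-6)   # lower-affinity site, Kd = 1.69 uM

# Illustrative exothermic enthalpy (NOT the fitted value, which the text does
# not quote): shows how a negative dH can combine with a positive dS.
dH_example = -20e3            # J/mol, hypothetical
dS_example = delta_S(dH_example, dG_major)   # positive for this choice
```

Because ΔG depends only logarithmically on Kd, the two sites differ by well under 1 kJ/mol despite the different constants.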
The role of the minor binding site in NcImpα specificity

The major site is usually associated with the binding of monopartite NLSs, whereas the minor site is associated with a secondary binding site for bipartite NLSs [7,10,40]. Recently, studies have demonstrated that some monopartite NLSs may bind only the minor site [16,38], indicating the importance of this binding site for Impα specificity toward cargo proteins. In contrast to the binding of SV40NLS to the major site (Fig 6a), the binding mode of the peptide at the minor binding site differs among Impα proteins. As shown in Table 2 and Fig 6b, SV40NLS binds to ScImpα and MmImpα via Lys residues at both the P1' and P2' positions, whereas the peptide binds OsImpα and NcImpα via Lys and Arg residues at the P1' and P2' positions, i.e., the binding of the peptide is shifted one position in the OsImpα and NcImpα structures. Interestingly, Lys and Arg residues at the P1' and P2' positions, respectively, are observed for the majority of monopartite (hPLSCR1, hPLSCR4, c-Myc, and TPX2 [11,38,39,49]) and bipartite NLS complexes (nucleoplasmin, N1N2, RB and FEN1 [10,37,40]), and this is also the predominant binding mode for MmImpα/CN-SV40NLS [37]. Furthermore, the authors of the MmImpα/SV40NLS structure [10] also observed a staggering of the peptide by one position N-terminally, which may lead to an alternative binding mode with Lys and Arg residues at the P1' and P2' positions. The binding of KK residues at the P1' and P2' positions for SV40NLS in MmImpα and ScImpα thus seems to be an exception in these particular cases. An additional interesting characteristic of SV40NLS binding to Impα is the presence of additional interactions between both the N- and the C-termini of the peptide and MmImpα at the minor site: two positions before P1' (P126, N-terminus) and at the P5' position (V132).
These interactions are observed only in the mammalian receptor, even though all SV40NLS peptides used for crystallization with Impα proteins contain these same residues. Structural studies of OsImpα have indicated that a plant-specific NLS peptide preferentially binds at the minor site of OsImpα and at the major site of MmImpα [16]. The similarity of SV40NLS binding at the minor site between OsImpα and NcImpα raises questions concerning the importance of this NLS binding site for the import mechanism of N. crassa. Chang and colleagues [16] observed that non-conserved residues between OsImpα and MmImpα (S394, R427, E434, E480, and K484 in OsImpα), located in the region near the minor site (Arm repeats 8 and 9) and toward the C-terminus, may be responsible for the specificity of NLS binding by OsImpα. Multiple sequence alignment of Impα proteins (Fig 7a) indicates that some of the residues present in OsImpα (S394, E480 and K484) are also present in NcImpα (S402, E493, and K497) and in other Impα proteins from the α1 family (S408, R443 and K500 in ScImpα, and E491 and K495 in HsImpα). Structural comparison between NcImpα and MmImpα (Fig 7b) shows that these three residues (S402, E493, and K497 in NcImpα) are substituted by T402, S483 and A487 in MmImpα, indicating that favorable interactions with particular peptides may occur. In particular, T402 in MmImpα (S402 in NcImpα) has been reported to lie in a position identical to that of S394 in the OsImpα structure, sterically preventing the binding of the plant-specific NLS peptide to MmImpα [16]. However, these substitutions may also prevent the binding of K127 of SV40NLS to NcImpα and OsImpα, explaining the presence of this interaction in MmImpα, which belongs to the α2 family, and its absence in NcImpα, which belongs to the α1 family.
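The one-position register shift discussed above can be made concrete with a small sketch. The pocket-to-residue mapping is assumed to run over consecutive residues; the Mm/Sc assignment follows the text's statement that KK occupies P1'/P2' in those structures.

```python
# SV40 NLS residues 125-PPKKKRKV-132, indexed by residue number.
SV40_NLS = {125: "P", 126: "P", 127: "K", 128: "K",
            129: "K", 130: "R", 131: "K", 132: "V"}

def minor_site_register(first_residue):
    """Assign minor-site pockets P1'-P4' to four consecutive NLS residues,
    starting at `first_residue` (consecutive occupancy is an assumption)."""
    return {"P%d'" % i: SV40_NLS[first_residue + i - 1] for i in range(1, 5)}

# Os/Nc register, as modeled in the NcImpa structure (K129 at P1', R130 at P2'):
nc_register = minor_site_register(129)   # K, R, K, V
# Mm/Sc register, one position toward the N-terminus (KK at P1'/P2'):
mm_register = minor_site_register(128)   # K, K, R, K
```

Shifting the start residue by one is enough to reproduce the KR-versus-KK difference at the P1'/P2' pockets described in the text.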
The majority of monopartite (hPLSCR4, c-Myc, and TPX2 [11,38,39]) and bipartite NLS complexes (nucleoplasmin, N1N2, RB and FEN1 [7,10,40]) present KR residues at the P1' and P2' positions, which seem to be the most favorable residues to occupy these positions. The binding of KK residues at the P1' and P2' positions for SV40NLS in MmImpα seems to be an exception, which probably occurs because it is also favorable [10] for some Impα proteins to bind additional residues at the N- and C-terminal regions of the peptide.

Fig 6. Comparison of SV40NLS peptides binding to the major (a) and minor (b) binding sites of Impα structures from different organisms. SV40NLS peptides in complex with NcImpα (magenta), ScImpα (green), OsImpα (blue) and MmImpα (yellow) were superimposed using the Cα atoms of the peptides. Binding positions corresponding to the major (P1-P5) and minor (P1'-P4') sites are identified along the chains (binding pocket labels are based on peptide binding to NcImpα). doi:10.1371/journal.pone.0128687.g006

Fig 7. Non-conserved residues between Impα proteins near the minor NLS binding region. (a) Partial alignment of amino acid sequences of Impα proteins from different organisms. Conserved residues are shown in black. The residues S402, K497 and E493 of NcImpα and their equivalent residues in other Impα proteins are shown in red. (b) Superposition of the Cα atoms of the NcImpα and MmImpα structures (performed as described in Fig 5), highlighting the residues S402, K497 and E493 of NcImpα (magenta) and the residues T402, S483 and A487 of MmImpα (green). These particular residues may be associated with NLS binding specificity at the minor NLS binding site. The E493 side chain is shown in a hypothetical conformation in this figure because this residue lacks electron density in this structure due to its high flexibility. doi:10.1371/journal.pone.0128687.g007
For the OsImpα/SV40NLS and NcImpα/SV40NLS complexes, KR binding at the P1' and P2' positions is more favorable because fewer interactions occur with the N- and C-terminal regions of the peptide. This observation accounts for the shift in the binding of SV40NLS to OsImpα and NcImpα compared with MmImpα. Affinity assays between NLS peptides and Impα proteins have been performed based on indirect affinity measurements [16,40,49-54], Surface Plasmon Resonance (SPR) [32], and ITC [19,38,55]. Indirect affinity measurements between NLSs and Impα proteins reported dissociation constants in the nM range and permitted comparisons among these molecules. SPR experiments were only able to estimate the peptide-protein affinity in the μM range, which was later confirmed by ITC experiments. The comparison between the ITC analyses of MmImpα/SV40NLS [19] and NcImpα/SV40NLS is consistent with the features of the minor binding site observed in the structure. The higher affinity of the peptide at the minor binding site of MmImpα (Kd = 0.98±0.08 μM) compared with NcImpα (Kd = 1.69±0.46 μM) may be associated with the presence of additional interactions at the N- and C-termini in the MmImpα/SV40NLS complex. The crystal structure of this complex [10] revealed that the residues K131 and V132 of the SV40 NLS peptide form salt bridges with the conserved E354 and R315 of MmImpα, respectively. Additionally, a hydrogen bond between V132 and R135 aids in the stabilization of the backbone of the SV40 NLS C-terminus. The importance of additional N- and C-terminal interactions has also been observed in ITC assays of a phosphorylated NLS peptide, which showed a 10-fold enhanced affinity for MmImpα compared with the unphosphorylated version [54]. Furthermore, it has been observed that the full-length nucleoplasmin protein binds to the Impα/Impβ complex with a 2-fold increase in affinity compared with the nucleoplasmin NLS peptide alone [53].
Interestingly, ITC studies with the hPLSCR4 NLS peptide (273GSIIRKWN280), which binds only the minor site of MmImpα [38], showed lower affinity (Kd = 48.7±6.5 μM) compared with that of the SV40NLS peptide. This may be attributed mainly to the absence of a positively charged residue in the hPLSCR4 peptide at the P1' position. In conclusion, despite the structural similarities among Impα proteins, this study and other recent studies of this receptor from different organisms, or of different isoforms from the same organism, clearly demonstrate differences in the binding specificities for cargo proteins. The differences between NcImpα and MmImpα may result from the phylogenetic distance between the proteins and from the functions of each protein family in organism development, which result in differences in affinities for NLSs. The elucidation of NcImpα in complex with specific NLS peptides from fungi may provide an explanation for the differences between these proteins.
Increasing Onshore Oil Production: An Unexpected Explosion in Trauma Patients.

Introduction Few data currently exist on the type and severity of onshore oil extraction-related injuries. The purpose of this study was to evaluate injury patterns in onshore oil field operations. Methods A retrospective review was conducted of all trauma patients aged 18 and older with an onshore oil field-related injury admitted to an American College of Surgeons-verified level 1 trauma center between January 1, 2003 and June 30, 2012. Data collected included demographics, injury severity and details, hospital outcomes, and disposition. Results A total of 66 patients met inclusion criteria. All patients were male, and the majority were Caucasian (81.8%, n = 54), with an average age of 36.5 ± 11.8 years, injury severity score of 9.4 ± 8.9, and Glasgow Coma Scale score of 13.8 ± 3.4. Extremity injuries were the most common (43.9%, n = 29), and most were the result of being struck by an object (40.9%, n = 27). Approximately one-third of patients (34.8%, n = 23) were admitted to the intensive care unit. Nine patients (13.6%) required mechanical ventilation, while 27 (40.9%) underwent operative treatment. The average hospital length of stay was 5.8 ± 16.6 days, and most patients (78.8%, n = 52) were discharged home. Four patients suffered permanent disabilities, and there were two deaths. Conclusion Increased domestic onshore oil production inevitably will result in higher numbers of oil field-related traumas. By focusing on employees who are at the greatest risk for injuries and by targeting the main causes of injuries, training programs can lead to a decrease in injury incidence.

INTRODUCTION

In the United States (U.S.) between 2003 and 2013, the oil and gas extraction industry experienced a 71% increase in the number of active oil rigs.
1 Onshore-based operations involving horizontal drilling and fracturing experienced the greatest growth, seeing an increase in employment rates of 40% to 92%. 1-4 One place in particular that saw an increase in the number of onshore rigs, due to the success rate of horizontal drilling and hydraulic fracturing operations, was Kansas. 5 Although this increase was not as high as the rates seen in Texas and Oklahoma, Kansas saw the addition of 1,000 active wells during this time. In 2011, 1,400 workers directly involved in operating and developing oil and gas field properties and 8,500 workers involved in support activities were injured on the job. 6 Most of these injuries, regardless of whether the workers were employed at an onshore or offshore facility, were related to highway motor vehicle crashes or extreme impact/crush. 6 However, explosions and flash fires on onshore rigs have become common due to the increased use of fracturing. 4 The median days-away-from-work for those injured while working at or near an oil rig has been reported as three times longer (24 days) than for all other industries (8 days). 6 The occupational fatality rate for this industry is four to seven times higher than among U.S. workers in general. 1-3,7,8 The majority of oil and gas extraction-related fatalities are due to transportation incidents and contact with objects or equipment. 1,3,7,8 Factors that may increase the rate of injuries and the frequency of fatalities include working on aging rigs or for smaller companies, length of time on the job, being subcontracted, or participating in rig maintenance, repairs, or drilling operations. 2,3,8 Human error, equipment failure, and weak operating systems were also contributing factors. 9,10 The majority of the literature on the oil and gas extraction industry addresses the rate of offshore occupational-related injuries.
9-17,19 A closer examination of injury patterns and outcomes among onshore drilling workers could prove beneficial for triage and treatment of the patient in the field and hospital settings, as well as illustrate the need for safety procedures to prevent injury in this industry. The purpose of this study was to evaluate injury patterns in onshore oil field operations.

METHODS

A retrospective review of all adult patients admitted with injuries sustained during the operation or maintenance of onshore oil field machinery between January 1, 2003 and June 30, 2012 was conducted at a single American College of Surgeons-verified level 1 trauma center. Data were retrieved from the trauma registry, as well as from each patient's medical records. Patient data included age, sex, race, injury severity score (ISS), Abbreviated Injury Scale (AIS) score, Glasgow Coma Scale (GCS) score, and injury details. Hospitalization data included intensive care unit (ICU) admission and length of stay, mechanical ventilation requirements, and need for operative management. Outcomes data included hospital length of stay, discharge disposition (home, rehabilitation, skilled nursing facility), and mortality. Descriptive analyses were presented as frequencies with percentages for categorical variables and means with standard deviations for continuous variables. All statistical analyses were conducted using SPSS release 19.0 (IBM Corp., Armonk, New York). This study was approved for implementation by the Institutional Review Board of Via Christi Hospitals Wichita, Inc. and the University of Kansas School of Medicine-Wichita's Human Subjects Committee.

RESULTS

A total of 66 patients met the inclusion criteria for the study. All patients were male, and the majority were Caucasian (81.8%, n = 54), with an average age of 36.5 ± 11.8 years, ISS of 9.4 ± 8.9, and GCS of 13.8 ± 3.4 (Table 1).
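The descriptive summaries above (frequencies with percentages for categorical variables, mean ± SD for continuous ones) and the ISS itself can be sketched as follows. The counts are those reported in the paper; the age list and the example patient are hypothetical, since patient-level data are not public, and the ISS formula shown is the standard construction rather than anything specific to this study.

```python
import statistics

N_TOTAL = 66  # cohort size reported in the paper

def pct(n, total=N_TOTAL):
    """Frequency expressed as a percentage, rounded as in the paper."""
    return round(100.0 * n / total, 1)

# Reported counts reproduce the reported percentages:
caucasian_pct = pct(54)   # 81.8
struck_pct = pct(27)      # 40.9
icu_pct = pct(23)         # 34.8

ages = [25, 31, 36, 40, 52, 29, 44]            # hypothetical sample
age_summary = (statistics.mean(ages), statistics.stdev(ages))

def injury_severity_score(ais_by_region):
    """Standard ISS construction: sum of squares of the three highest AIS
    scores, each from a different body region; an AIS of 6 anywhere
    conventionally sets the ISS to 75."""
    scores = sorted(ais_by_region.values(), reverse=True)
    if scores and scores[0] == 6:
        return 75
    return sum(s * s for s in scores[:3])

# Hypothetical patient with abdomen/extremity AIS near the cohort averages:
example_iss = injury_severity_score({"abdomen": 3, "extremity": 3, "head": 1})
```

Because the ISS squares each region score, a cohort mean ISS of 9.4 is consistent with mostly moderate (AIS 2-3) single-region injuries.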
Based on the AIS, the most severely injured body regions were the abdomen (2.7 ± 0.8) and the extremities (2.7 ± 0.7). All injuries were the result of blunt force trauma, and most were the result of being struck by an object (40.9%, n = 27). Falls (19.7%, n = 13) accounted for the second most common cause of injury, followed by being caught in machinery (12.1%, n = 8) and explosions (10.6%, n = 7). Most injuries were to the lower extremities (25.8%, n = 17; Table 2). Injuries to the head and face were also common, with most involving a facial fracture (22.7%, n = 15) or loss of consciousness (16.7%, n = 11). Among patients who sustained a vertebral spinal fracture, lumbar fractures (12.1%, n = 8) were the most common. Injuries to the thoracic and abdominal regions were not as common. Slightly over one-third (34.8%, n = 23) of patients were admitted to the ICU, with an average length of stay of 1.7 ± 2.5 days (Table 3). Mechanical ventilation was required for 13.6% (n = 9) of patients, and 40.9% (n = 27) required surgery. The majority of surgical interventions involved debridement and open reduction of extremity fractures. In addition, four patients required completion of an amputation, and one patient required multiple orthopedic and abdominal surgeries. The average hospital length of stay was 5.8 ± 16.6 days, and most patients (78.8%, n = 52) were discharged home. Four patients suffered a permanent disability, and two patients (3.0%) died due to explosion-related injuries.

DISCUSSION

With a marked increase in the number of active onshore oil rigs in the United States, there is a correlated increase in injury and fatality rates among oil and gas extraction workers. 1,8 Although there is previous research on offshore oil rigs, there is no study that specifically focuses on onshore oil rig injury characteristics based on hospital data. 1-3 In the current study, extremity fractures and head/facial injuries were the most common.
In addition, the majority of injuries were due to the patient being struck by an object or the result of a fall. The number of fatalities in the current study was low, and both were explosion-related. Our results support several offshore drilling injury studies. 12,13,16 For example, a study conducted among Venezuelan drillers indicated that most injuries were to the upper (48%) and lower (24%) extremities, with the majority resulting from the worker being struck by an object (37%). 12 Our study demonstrated lower rates of lower and upper extremity injuries, 25.8% and 18.2%, respectively; however, the type and cause of these injuries were similar, as was the fact that they were the most common. Another study, of Iranian gas refinery workers, demonstrated that most injuries were caused by being struck by an object (48%). 13 We reported a 40.9% rate of injury associated with being struck. In addition, Mehrdad 13 and Thibodaux 16 reported that most injuries caused by an offshore drilling accident were to the extremities. Fatality statistics from the Bureau of Labor Statistics (BLS) Census of Fatal Occupational Injuries (CFOI) were used for comparisons regarding patient fatality rates. 1,3,6,8 Of note, it has been well documented that injuries are under-reported in this database. 17,18 The BLS studies demonstrated that most fatal injuries were caused by transportation-related accidents (40%), followed by contact with objects and equipment (26%), fires and explosions (14%), and finally falls, slips, and trips (8%). 1,3,6,8 In the current study, there were no transportation-related fatalities; the two reported deaths were explosion-related. Possible fall prevention measures for our study population might include the use of a full body harness, impact-protective clothing, or a personal fall arrest system (PFAS).
4,19,20 To protect workers from dangerous machinery and prevent accidental contact with objects, the use of suitable covers or casings and of barrier rails or screens is needed. 20,21 However, it has been documented that many onshore oil rigs are routinely disassembled and moved quickly, resulting in design modifications that may involve removing handrails. 21 Prevention of injuries from being struck by an object may include strongly enforcing Occupational Safety and Health Administration (OSHA) personal protective equipment regulations and implementing penalties for workers caught not following these regulations. Recommendations for future research include combining hospital data with occupational reports to produce an accurate picture of which types of workers sustain the most severe injuries or are at the highest risk of death. For instance, Blakeley et al. 2 reported that improved engineering controls and safety programs would benefit floor men at a higher rate than other job types, due to the fact that they experience three times the rate of injuries compared with other positions. In addition, due to the small sample size of the current study, expanding beyond a single institution by including multiple hospitals would be beneficial for establishing injury patterns for onshore oil rigs. This study had several limitations. First, the findings are limited by all known biases associated with retrospective studies. These include a lack of granularity that would allow for the determination of demographic and environmental factors contributing to the injury, such as job type, tenure, training and experience, or lost time away from work. Second, there is a possibility that many patients injured in a rural location were missed due to being admitted to another hospital in the area. Also, it is possible that these rural patients sustained less severe injuries and were treated locally.
Likewise, workers killed at the site and not transported to the hospital were not represented in the analysis. Finally, the small sample size of the study population from a single institution limits the generalizability of the results.

CONCLUSION

There is a growing need for enhanced surveillance of the onshore oil and gas extraction industry to understand risk factors for fatal and non-fatal injuries. 1 To our knowledge, this is one of the first studies focusing solely on onshore oil rig injuries. Study results showed that extremity and head/facial injuries were the most common. In addition, most injuries were the result of patients being struck by an object or of a fall. By targeting the main causes of injuries, training and prevention programs can be created to decrease the incidence of on-the-job injuries in this rapidly growing employment sector.
The Structural Organization of the Liver in the Chinese Fire-bellied Newt (Cynops orientalis)

The liver of the Chinese fire-bellied newt consists of 5 lobes (apart from a few individual differences), which are composed of a number of hepatic lobules. A central vein passes through the center of each lobule, from which the hepatic cords radiate in orderly rows of one to several cell layers. The intervals between the hepatic cords or masses form irregular and variable sinusoids. The hepatic sinusoidal wall consists of a single layer of endothelial cells and Macrophagocytus stellatus (Kupffer cells), which have protrusions and elongations. Between the sinusoidal wall and the hepatic cells lies the perisinusoidal space (space of Disse). The hepatic cells are polygonal in shape with uniform, round or oval nuclei, 12.4-17.8 μm in diameter (mean 14.2 μm), containing 2-6 nucleoli; the nuclear:cytoplasmic volume ratio was 0.24:1. There is abundant pigmentation in the hepatic parenchyma.

INTRODUCTION

The Chinese fire-bellied newt (Cynops orientalis, David, 1873), belonging to the Amphibia, Caudata, Salamandridae, is a species native to China. It is widely distributed in the lower reaches of the Yangtze River and adjacent areas, in the hilly plains of central and southeastern China at 30 to 1,500 m altitude (the provinces of Henan, southern Anhui, Hubei and Hunan) (Zhao & Hu, 1988; Fei et al., 2006). Its habitat consists of all suitable water bodies at various altitudes: mountain ponds, seepages and paddy fields in hilly areas, small brooks, and flooded fields in mountain valleys, in forests and degraded areas.
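The nuclear dimensions and nuclear:cytoplasmic volume ratio reported in the abstract above imply approximate cell volumes; a back-of-envelope sketch, assuming a spherical nucleus (a simplification of the "round or oval" shape reported):

```python
import math

# Mean nuclear diameter and nuclear:cytoplasmic volume ratio from the text.
d_nucleus = 14.2e-6    # m (14.2 um)
nc_ratio = 0.24        # nuclear : cytoplasmic volume ratio

# Volume of a sphere of diameter d: (4/3) * pi * (d/2)^3
v_nucleus = (4.0 / 3.0) * math.pi * (d_nucleus / 2) ** 3   # ~1.5e-15 m^3
v_cytoplasm = v_nucleus / nc_ratio                          # implied cytoplasm volume
v_cell = v_nucleus + v_cytoplasm                            # implied total cell volume
```

Under these assumptions the cytoplasm occupies roughly four times the nuclear volume, which is what the 0.24:1 ratio expresses.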
Because of the growing interest in this species in conservation biology, evolutionary biology, and developmental biology, it has attracted increasing attention from experts. An extensive literature search found no articles describing the Chinese fire-bellied newt liver. Several studies on amphibian livers, which are regarded as good environmental indicators (Haar & Hightower, 1976; Barni et al., 1999; Fenoglio et al., 2005; Rohr et al., 2008), were found and are used here for comparative purposes. Our aim is to contribute to the knowledge of the distinctive morphology of the Chinese fire-bellied newt liver and to explore its adaptability to the environment. Department of Bioengineering, Henan University of Urban Construction, Henan, China. MATERIAL AND METHOD The animals used in this experiment were 14 females and 5 males captured in their natural environment in the countryside near Guangshui, Hubei province, China (total length 40.1 ± 2.0 mm, weight 1.90 ± 0.33 g). The body cavity was opened and the liver was immediately fixed in 10% neutral buffered formaldehyde and Bouin's solution without acetic acid (3:1 mixture of a saturated aqueous solution of picric acid and formalin) for 24 h. The materials were then dehydrated in a graded series of ethanol and embedded in paraffin. Sections were cut at 7 µm, processed for staining with haematoxylin and eosin, and then examined under a Nikon TE2000-U microscope. 
RESULTS Anatomical observation. The liver of the Chinese fire-bellied newt is located in the right anterior portion of the abdominal cavity and presents a dull red colour, with speckled, tigroid melanin on its surface. The abdominal plane was swollen, and the dorsal plane flat. The surface of the liver was covered by a thin serous membrane in which melanin was scattered as needle points. Irregular small vessels were distributed on the serous membrane in a branching or tree-like pattern. There were five lobes of the liver, the first to the fifth running from the gastric side to the right lateral side of the body (Fig. 1). The first lobe was about 7.8 mm in length and 4.2 mm in width. The second lobe, whose inferior border attached anteriorly to the gastric cardia, was the longest, about 17.7 mm in length and 5.4 mm in width. The third lobe was the largest, about 14.6 mm in length and 7.7 mm in width. The fourth lobe was as long as the first, about 8.5 mm in length and 3.8 mm in width. The fifth lobe, on the internal plane, was about 9.7 mm in length and 4.6 mm in width. The thickness of these lobes from the basilar part to the marginal part was about 0.3–3 mm. The gallbladder was translucent and attached to the third lobe in its ventromedial part. The morphology of the Chinese fire-bellied newt liver presented individual differences. Histological observation. The surface of the liver of the Chinese fire-bellied newt was covered with a connective tissue capsule, 16.3 ± 5.9 µm in thickness, that branched and extended throughout the substance of the liver as septa (Fig. 2). This connective tissue tree provided a scaffold of support and the course along which afferent blood vessels and bile ducts traversed the liver. Additionally, the sheets of connective tissue divided the parenchyma of the liver into lobules. Capillaries and connective fibers were obvious intramurally. The interlobular connective tissue was underdeveloped, so the boundaries between lobules were not obvious. A central vein passed through the center of each lobule, around which the hepatic cords or plates were radially arranged. The hepatocytes around the central vein are arranged in orderly rows of one to several layers. The intervals between hepatic cords or masses are irregular and variable sinusoids. The wall of the hepatic sinusoid consisted of a single layer of endothelial cells or macrophagocytus stellatus, which have protrusions and elongations. Endothelial cells and macrophagocytus stellatus are irregular, thin, flat ribbons. The interval between hepatic cells is the perisinusoidal space. The central vein was thin, 30.2–61.6 µm in diameter (mean 42 µm) (Fig. 3). The hepatocytes were polygonal (Fig. 4), 12.4–17.8 µm in diameter (mean 14.2 µm), with irregular unstained areas of cytoplasm. Hepatocyte nuclei were round and stained blue-violet. The hepatocyte nuclei were located in the central cytoplasm or slightly to one side, with 2–6 nucleoli; the nuclear-cytoplasmic volume ratio was 0.24:1. The sides of the hepatocytes contacted either sinusoids (sinusoidal faces) or neighboring hepatocytes (lateral faces). Sinusoids were lined with endothelial cells, flanked by plates of hepatocytes, and populated by numerous macrophagocytus stellatus; red blood cells could be seen among them. A large number of melanin granules gathered into clusters and were distributed unevenly (Fig. 3). 
In the hepatic portal area, an interlobular artery, vein and bile duct were observed penetrating from the surface of the liver into the parenchyma. The lumen of the interlobular artery was smaller than that of the vein, and was regular with a thick wall. The interlobular bile duct was composed of simple cuboidal epithelium, 19.2 ± 2.5 µm in diameter. The epithelial cell nuclei were located in the central cytoplasm and were 6.8 ± 1.6 µm in diameter (Fig. 5). DISCUSSION The liver is a vital organ present in vertebrates, with a wide range of functions including detoxification, protein synthesis, and production of biochemicals necessary for digestion. In most amphibian species, it is divided into right and left lobes (Grafflin, 1966). However, the Taiwanese frog (Hoplobatrachus rugulosus) has three lobes (Chen et al., 2003). We observed that the Chinese fire-bellied newt liver has five lobes, differing from previous studies that found only right and left lobes in this species (Li et al., 2005). Regional polymorphism in the Chinese fire-bellied newt liver therefore presumably exists. The hepatocytes are polyhedral, with 5 or more surfaces. The nuclei are large and round, commonly located laterally in the cytoplasm. The irregular unstained areas of cytoplasm correspond to cytoplasmic glycogen and lipid stores removed during histological preparation. Owing to such variation under different nutritional conditions, it is assumed that the liver cells represent an important energy store. 
Haar & Hightower described fine structural characteristics of hepatocytes in the newt Notophthalmus viridescens, including abundant lipid and glycogen inclusions. Melanophores with developing melanosomes are situated throughout the hepatic parenchyma. Those results are similar to our observations in the Chinese fire-bellied newt. The melanins of the liver pigment cells are considered to belong to the reticulohistiocytic system (also defined as the mononuclear phagocytic system) and to derive from macrophagocytus stellatus, based on their localization and phagocytic capacity (Rund et al., 1998; Barni et al.). They seem to play an important role as scavengers of cytotoxic substances such as ions and free radicals (Barni et al.; Frangioni et al., 2005). The newt Triturus carnifex holds melanin and hemosiderin in its macrophagocytus stellatus. Synthesis of the mixed polymer is possible through the well-known capacity of ferrous iron to activate tyrosinase (the enzyme responsible for melanogenesis) even in the absence of DOPA (Frangioni et al., 2005). The genic expression of tyrosinase in hypoxia appears to be a physiological response aimed at prolonging survival time in anaerobiosis by lowering the metabolic level; melanin would be an inert by-product of this function (Frangioni et al., 2000). Given the protective adaptability of the Chinese fire-bellied newt liver, characterized by melanin aggregation in an anaerobic environment, the molecular mechanism of melanin should be the subject of further study.
Longitudinal study of the relationship between coffee consumption and type 2 diabetes in Chinese adult residents: Data from China Health and Nutrition Survey Background Increasing coffee intake has been inversely associated with the risk of type 2 diabetes in Western countries. However, in China, where both coffee consumption and the diabetic population have grown rapidly in recent years, studies on the impact of coffee intake on the onset of type 2 diabetes are lacking. This study attempts to determine the associations between coffee consumption and type 2 diabetes in Chinese adults. Methods This longitudinal study analyzed 10,447 adults who had participated in at least two rounds of the China Health and Nutrition Survey (CHNS), a multistage, random-cluster survey conducted during 1993–2011. Coffee consumption and type 2 diabetes incidence were measured in the survey. Body mass index (BMI), age, sex, place of residence, survey waves, education level, smoking, alcohol drinking and tea-drinking frequency were adjusted for as covariates. We used longitudinal fixed effects regression models to assess changes within persons. Results After adjusting for confounding factors, a lower risk of diabetes was observed among Chinese adults who drink coffee occasionally (Adjusted Odds Ratio (AOR) = 0.13, 95% CI = 0.05, 0.34) and those who drink almost every day (AOR = 0.61, 95% CI = 0.45, 0.83), compared with those who do not or hardly drink. In the subgroup analysis, protective effects were found among women aged 45–59 who drink coffee one to three times a week (AOR = 0.21, 95% CI = 0.08, 0.52) and men over 60 who drink coffee almost every day (AOR = 0.19, 95% CI = 0.07, 0.53). For young men aged 19–29, drinking coffee almost every day showed a risk effect (AOR = 20.21, 95% CI = 5.96–68.57). Conclusions A coffee-drinking habit is an independent protective factor against type 2 diabetes in Chinese adults, and its effect varies among people of different ages and genders. 
The rapid growth of coffee consumption in China in recent years may help reduce the risk of type 2 diabetes, but at the same time, the risk of type 2 diabetes in adolescents needs attention. Introduction Type 2 diabetes (T2D) is the most common form of diabetes. It is a complex endocrine metabolic disease associated with eating habits and is considered a major health care problem in present-day China [1], imposing a heavy financial burden [2]. T2D is caused by multiple factors, while coffee consumption is well known to be inversely associated with T2D [3]. In China, a country with a long history of drinking green tea, the demand for coffee has also grown rapidly in recent years [4]. However, research on the incidence of T2D in relation to coffee consumption is lacking in China. Compared to people who do not or hardly drink coffee, health benefits and a lower risk of chronic disease have been observed in people consuming 3–4 cups per day [5]. Caffeine in coffee has been shown to improve insulin sensitivity [6]. Whether various types of coffee can contribute to diabetes prevention has been investigated [7–10], but only recently has coffee also been linked to the prevention of T2D [3, 11–14]. Some systematic analyses [15, 16] support the hypothesis that habitual coffee consumption is associated with a lower risk of T2D. However, most previous findings on the relationship between coffee consumption and health come from Western countries and populations of European descent [17]. This longitudinal study aims to investigate the relationship between coffee consumption and T2D in China. Study population The research data come from the CHNS [18], a public database which does not involve human subject research, animal research or field research. The data have been anonymized to ensure that no personal private information is contained; instead, a unique sequential id was used in this study. 
All participants provided informed consent, and the study was approved by institutional review boards of the University of North Carolina at Chapel Hill, the National Institute for Nutrition and Food Safety, and the China Center for Disease Control and Prevention. The CHNS is an ongoing, large-scale, longitudinal, household-based survey investigating the health and nutritional status of the general population in China. The prospective household-based study covered nine provinces: Heilongjiang, Liaoning, Shandong, Jiangsu, Henan, Hubei, Hunan, Guangxi, and Guizhou. Provinces with diverse demography, geography, economic development, and public resource characteristics were surveyed by a multistage, random cluster process. The CHNS and its survey procedure have been described in detail elsewhere [18, 19]. Ten rounds of surveys were completed between 1989 and 2015. This study selected Chinese residents aged between 18 and 80 years who participated in the surveys from 1993 to 2011 as the longitudinal tracking population, excluding pregnant women, lactating mothers, and subjects with incomplete records of key analysis variables. Ultimately, 54,645 observations from 10,447 adults who participated in at least 2 rounds of the survey were chosen as study subjects. Data quality assurance According to the documentation on the CHNS official website http://www.cpc.unc.edu/projects/china/data/data.html, several data quality assurance methods were used. Training and field guides were provided to data collectors and supervisors. Questionnaires were checked for consistency and completeness by supervisors at the end of every day. Using each person's unique id, the required data collections from different rounds were merged with scripts in the R language (version 4.0.3). Subsequent data de-duplication and cleaning were mainly carried out using the R package data.table (version 1.14.0). 
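The merge-and-deduplicate step described above was performed in R with data.table; purely as an illustration, the same idea can be sketched in Python using only the standard library (the field names `id` and `wave` here are hypothetical, not CHNS variable names):

```python
from collections import Counter

# Illustrative sketch (not the authors' R code): merge survey rounds by a
# unique person id, drop duplicate (id, wave) records, and keep only people
# observed in at least two waves, as the study design requires.
def merge_rounds(rounds):
    """rounds: list of lists of dicts, each dict a record with 'id' and 'wave'."""
    seen = set()
    merged = []
    for round_records in rounds:
        for rec in round_records:
            key = (rec["id"], rec["wave"])   # one record per person per wave
            if key not in seen:
                seen.add(key)
                merged.append(rec)
    waves_per_person = Counter(r["id"] for r in merged)
    return [r for r in merged if waves_per_person[r["id"]] >= 2]

rounds = [
    [{"id": 1, "wave": 1993}, {"id": 2, "wave": 1993}],
    [{"id": 1, "wave": 1997}, {"id": 1, "wave": 1997}],  # duplicate record
]
panel = merge_rounds(rounds)
# person 2 appears in only one wave and is excluded; the duplicate is dropped
```

In the real workflow the same logic runs over nine survey files keyed by the CHNS id, but the dedup-then-filter order shown here is the essential point.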
The whole process of data combination, cleaning, and analysis was performed in a unified R environment, so that every step of the data processing is fully recorded and reproducible. Operational definition Type 2 diabetes status was identified by the following question during the follow-up survey: "Has a doctor ever told you that you have diabetes?" If yes: "How old were you when you were told about the situation (diabetes)?" In the 2009 interview, blood samples of participants were collected in addition to the self-reported question in the survey, from which fasting plasma glucose and glycated hemoglobin (HbA1c) were measured. According to the 2010 American Diabetes Association criteria [20], participants with a fasting plasma glucose ≥ 7.0 mmol/L or HbA1c ≥ 6.5% were diagnosed as diabetes patients. Both self-reported and plasma glucose/HbA1c-determined T2D cases were included [21]. In each questionnaire, participants were asked how often, on average, they had consumed coffee. The participants could choose from 9 responses, ranging from drinking every day to 30 days without drinking. Coffee-drinking behavior was divided into three groups: no or hardly drink, one to three times a week, and drink almost every day. Statistical analysis R was used for all data collation and basic statistical analysis. The R package lme4 (version 1.1-23) was used to build longitudinal fixed effects regression models exploring the crude and adjusted effects of coffee-drinking frequency on T2D. The individual and the sequence of multiple observations within each individual formed a two-level model. Three models were evaluated: 1) unadjusted crude associations were examined first; 2) these associations were then adjusted for tea-drinking frequency, BMI, age, waves, marital status, education level and place of residence; 3) Model 3 further added smoking and alcohol drinking. 
Odds ratios (ORs) with 95% confidence intervals (CIs) are presented to show the strength and direction of the associations. The criterion for statistical significance was set at p ≤ 0.05. The variable assignments are listed in Table 1. Place of residence was divided into urban and rural. There were two types of marital status: unmarried or married. Education level was divided into four categories: primary school and below, junior high school, senior high school, and college and above. BMI and age are both treated as continuous variables. Smoking and drinking are classified by whether the participant has the habit. Table 2 presents the demographic characteristics of the adult participants in the CHNS during 1993-2011 with complete data available. The first round of the CHNS, including individual, household, community, and health/family planning facility data, was collected in 1989. Eight additional panels were collected in 1991, 1993, 1997, 2000, 2004, 2006, 2009 and 2011 [22]. Since the 1993 survey, all new households formed from sample households have been added. There was gradual growth in coffee-drinking frequency during the seven waves, and the prevalence of T2D also showed an upward trend, from 7.9% in 1993 to 10.0% in 2011. Regarding obesity and aging, which are most closely related to the onset of T2D, mean BMI increased from 22.5 in 1993 to 24.6 in 2011, almost reaching the 24.9 upper bound of the normal range recommended by the WHO [23]. The proportion of people over 60 years old increased from 10.3% in 1993 to 23.6% in 2011, a more than two-fold increase. Association between coffee and type 2 diabetes The crude analysis showed that, compared with participants who do not or hardly drink a cup of coffee per month, the odds of T2D were 87% and 50% lower for those who drink one to three times a week and those who drink more frequently, respectively (Crude Odds Ratio (COR) = 0.13, 95% CI = 0.05-0.31 and COR = 0.5, 95% CI = 0.4-0.64). 
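The odds ratios and 95% confidence intervals reported throughout follow the standard Wald construction on the log-odds scale. As a minimal illustration (with made-up cell counts, not the CHNS data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 20 cases among 400 frequent coffee drinkers,
# 500 cases among 5000 non-drinkers.
or_, lo, hi = odds_ratio_ci(20, 380, 500, 4500)
# OR ≈ 0.47 with CI ≈ (0.30, 0.75): odds roughly halved, interval excludes 1
```

The adjusted ORs in Tables 3 and 4 come from the regression models rather than raw 2×2 tables, but the interpretation of the interval (protective if entirely below 1) is the same.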
After adjusting for confounding factors including tea-drinking frequency, BMI, age, waves, marital status, education level and place of residence, drinking one to three times a week still exerted a statistically significant effect on T2D (Adjusted Odds Ratio (AOR) = 0.13, 95% CI = 0.05-0.34). Meanwhile, the inverse relationship for drinking almost every day (AOR = 0.6, 95% CI = 0.47-0.81) was slightly weaker than the former. The specific results are presented in Table 3, which indicates that aging (over 60 years old, AOR = 13.86, 95% CI = 11.32-16.98) and obesity (BMI, AOR = 1.13, 95% CI = 1.12-1.14) were risk factors for T2D. For those who have received college education or above, education level is a protective factor (AOR = 0.88, 95% CI = 0.8-0.97) against T2D. Compared with women, men are at greater risk (AOR = 1.07, 95% CI = 1-1.14). Drinking tea almost every day has a protective effect similar to that of coffee (AOR = 0.91, 95% CI = 0.84-0.98); however, the habit of drinking tea one to three times a week is not statistically significant. After further adding other common addictive factors, namely the habits of smoking and drinking alcohol, the statistical significance of coffee still stands and its coefficient remains almost unchanged, while the statistical significance of tea-drinking frequency disappears. This suggests that the influence of tea-drinking behavior on T2D might be related to other lifestyle factors. As expected, smoking is a risk factor for T2D (AOR = 1.09, 95% CI = 1.01-1.17). However, contrary to popular belief, drinking alcohol is a protective factor for T2D (AOR = 0.79, 95% CI = 0.73-0.86). The analysis by gender and age group in Table 4 implies that the protective effect varies among people. Among middle-aged (45-59 years old) women with a habit of drinking coffee almost every day, a protective effect against T2D was observed (AOR = 0.21, 95% CI = 0.08-0.52). 
The same protective effect (AOR = 0.19, 95% CI = 0.07-0.53) also appears in elderly (≥ 60 years old) men who drink one to three times a week. But among young men (18-29 years old) with the habit of drinking coffee regularly, a negative effect was observed (AOR = 20.21, 95% CI = 5.96-68.57). Discussion This longitudinal study explored the association between coffee consumption and the prevalence of T2D in Chinese adults. The single-factor Model 1 shows that coffee consumption has a crude inverse correlation with T2D. The multifactor analysis includes tea drinking, BMI, sex, age, place of residence, waves, marital status and education level as covariates. Model 2, covering all adults, suggested that drinking coffee is a protective factor against T2D in China. We did not observe a stronger protective effect in people who drink it almost every day than in people who drink one to three times a week. The protective effect of drinking tea on T2D disappeared after smoking and drinking were added in Model 3. This may be related to more complicated lifestyle habits, which remain to be further studied and clarified. The protective effect of drinking alcohol on diabetes is consistent with some evidence [27, 28] that a small amount of alcohol has positive health effects. Since China has a long-standing drinking culture, adult drinking is quite common. The questionnaire item asking merely whether one drinks alcohol may not accurately measure drinking habits; people who answered yes may drink only very little in their daily lives. The models for the subdivided participants revealed that the effect of coffee on T2D varies across genders and ages. For middle-aged women and elderly men, the habit of drinking coffee has a protective effect against T2D. However, among young men, those who drink coffee regularly are more likely to have T2D than those who hardly drink coffee. 
The impact of coffee on T2D in Chinese young people, especially young men, still requires further research. The reported association between coffee and T2D has not been entirely consistent across countries and ethnicities. Studies performed among Dutch [24], Swedish [25], Spanish [14], Netherlands [26], Japanese [27], and United States [14] populations found that coffee consumption reduces the risk of T2D, which is consistent with the results of our study. However, some studies have shown that coffee consumption in adults and adolescents has a U-shaped relationship with T2D, meaning that excessive coffee drinking brings potential health risks [28-30]. In the present study we did not observe a U-shaped relationship between coffee and the prevalence of T2D; moreover, drinking coffee almost every day did not further reduce the risk of T2D in China. Possible causes for these discrepancies include differences in ethnicity, sample size, average age of the population, correction factors, and differences in the grouping of coffee consumption. Many studies have shown that acute caffeine ingestion reduces insulin sensitivity [31]. This study found a habit of drinking coffee to be an independent protective factor against T2D in China, which might be due to the following mechanisms: 1. Coffee consumption is associated with widespread metabolic changes, among which lipid metabolites may be critical for the anti-diabetes benefit of coffee; coffee-related metabolites might help improve the prediction of diabetes [32]. 2. Coffee interferes with glucose homeostasis: long-term consumption of both coffee species reduced weight gain and liver steatosis and improved insulin sensitivity in a model of T2D [33]. 3. Chlorogenic acids may affect glucose absorption and subsequent utilization, the latter through metabolites derived from endogenous pathways or the action of the gut microbiota [34, 35]. 
A strength of this study is its large sample, which makes the results of the multivariate longitudinal fixed effects regression analyses stable. However, compared with related research performed in other countries, this study is limited by the contents of the CHNS questionnaire: it does not consider other relevant information about coffee consumption, such as the type of coffee. Some studies have considered the interaction between coffee consumption and coffee type and found that the protective effect of coffee on T2D differs markedly by kind of coffee [36]. In addition, the conclusions of this paper come mainly from the subjective questionnaire reports and a single biometric measurement, which may miss part of the potential diabetic population over the long term. In summary, a coffee-drinking habit is an independent protective factor against adult T2D, and its effect varies among people of different ages and genders. The rapid growth of coffee consumption in China in recent years may help reduce the risk of type 2 diabetes, but at the same time, the risk of type 2 diabetes in adolescents requires attention.
Visualisation of a physical model of interacting hard objects in a computer game The relevance of this research stems from the need to develop a physical model of the interaction of hard objects, since such physical modelling is often used in computer games. The purpose of this research is the visualisation of a physical model of the interaction of hard objects in a computer game. The methodology of this research consists of analysing the scientific work of domestic and foreign authors: the method of system-information analysis, methods of mathematical and informational modelling, and the method of computer experiment. As a result of the research, existing methods for handling collisions of objects in computer games are analysed and a model of a game engine is developed. The algorithms in this model were visualised and tested. In conclusion, the findings confirm that the dynamics of absolutely hard and deformable agents and the modelling of gases and liquids are basic parts of any physical process. Introduction Today, the value system of many people is focused on consuming nature and subordinating it to themselves. The computational power of computers grows every day, giving an opportunity to implement different physical principles. As a result, the modelling of physical phenomena enjoys rising popularity in films and computer games. A large share of users compare physical phenomena in real life with events in films and computer games. In this regard, realistic modelling of physical phenomena is a topical branch of visual practice [1]. There is special software, a physics engine, for creating physical phenomena in computer programs. A physics engine is a program that creates a model of real physical phenomena [2]. The physics engine is an independent module and is either part of a game engine or used in the imitation modelling of physical phenomena. 
We need to find a compromise between the accuracy of the calculations and execution speed when creating a physics engine for a given task. Creating a physics engine is a hard task, even with narrow functionality. Although physics engines simulate many physical phenomena (dynamics of absolutely hard and deformable agents, modelling of liquids and gases, behaviour of cloth, etc.), the dynamics of an absolutely hard agent is the basic component of any physics engine. When modelling the phenomena studied here, the main task is to detect and resolve collisions of interacting objects. Goals and objectives of the research. Thus, the purpose of the research is determined by the need to solve the problem of visualising a physical model of the interaction of hard agents. As the application area of the model, a computer game is chosen as one of the most promising areas of technical and scientific innovation [3]. To achieve the goal, the following tasks were set:  analyse existing methods of implementing collisions in computer games;  develop an informational model of the game physics engine;  visualise the interaction models and implement algorithms corresponding to the selected methods of describing collisions;  test and debug the developed program. Materials and Methods Theoretical methods: analysis of psycho-pedagogical, scientific and technical literature, and software development for the implementation of collision detection and of algorithms implementing the interaction of agents in computer games. The method of system-information analysis was used to analyse algorithms for narrow-phase collision detection and to select collision resolution methods and methods for integrating the equations of motion. 
The method of mathematical and information modelling and the method of computer experiment, as a kind of computational experiment, were used to describe the game application and to develop the hard-agent model, at the stages of modelling hard-agent physics and visualising the model of the physics engine. Collision detection is the computational problem of analysing or detecting the intersection of objects [4]. The detection problem is most often solved in the course of computer game development [5]. However, the research area touches many other scientific fields. This computational problem arises in computer-aided design, in synthesising cutting-tool trajectories in numerical control systems, in programming the movement of robots in environments with obstacles, in virtual prototyping systems, in computer modelling of physical processes, etc. [6]. In the computer-aided design of assembly processes, this problem is called geometric access or geometric solvability [4]. There are two main approaches to detecting collisions: a priori and a posteriori. In the a posteriori approach, the scene is analysed for collisions at short intervals of time. In the a priori approach, it is necessary to calculate the trajectories of objects and predict collisions with static elements of the scene in advance (a priori), taking into account friction forces, the elasticity of the collision, and changes in the internal state of deformable objects [7]. In general, this problem is described by a system of differential equations, may have no exact analytical solution, and its numerical solution requires significant computational resources. Thus, in the a priori approach, a collision is detected before it actually occurs. 
In this research, the a posteriori approach to collision detection is chosen, as it is widely used in practice and makes it possible to detect collisions of objects in real time, which the a priori approach cannot do. The main problem in using the a posteriori approach is the resource-intensive algorithms that handle collisions [8]. Obviously, the computational complexity increases with the number of processed objects. Hence there is a problem of optimising applications that use such algorithms. One solution to this problem is given by Browne C and Maire F [7]. Its essence is the separation of the modelling of object interactions into three phases: broad phase, narrow phase, and collision resolution [9]. The modelling of hard-agent physics is divided into several parts: detection of intersections of the bounding parallelepipeds of objects; detection of intersections of the polygons of objects; collision resolution; calculation of the forces acting on each object; and integration of the equations of motion. Based on the literature analysis, the following algorithms were identified for modelling the interaction of agents:  For the broad-phase stage and the detection of intersections of the objects' AABBs, the brute-force algorithm, the Spatial Hashing algorithm, and the Sweep-and-Prune algorithm were studied. In this research, the choice was made in favour of the Sweep-and-Prune algorithm, as it has a better time bound of O(n log n), achieved by sorting objects along the axis with the highest spread of objects, than the brute-force algorithm based on checking all pairs of objects.  For the narrow-phase stage and the detection of intersections of the polygonal meshes of objects, the following algorithms were studied: an algorithm based on the separating axis theorem, the Gilbert-Johnson-Keerthi algorithm plus the Expanding Polytope Algorithm, the Lin-Canny algorithm, and the V-Clip algorithm. 
After considering narrow-phase detection algorithms, the choice was made in favor of an algorithm based on the separating axes theorem.
- For collision resolution it is possible to use projection algorithms, impulse calculation, or the calculation of elastic forces. The narrow phase detects a collision when the objects have already intersected, but interpenetration of objects must not occur in the scene. Therefore, the positions of the objects are first corrected by the projection method, and then the impulse calculation method is applied.
- For the integration of the equations of motion, the following algorithms were considered: the Explicit Euler Integrator, the Implicit Euler Integrator, the Improved Euler Integrator, the 4th-order Runge-Kutta Integrator, and the Time-Adjusted Verlet Integrator. Of these, the improved Euler method is stable and most optimal.

Results

The game application is aimed at cross-platform use. Java was chosen as the programming language thanks to the existing cross-platform library libgdx; Android Studio was chosen as the development environment [10]. Description of the game application. To test the model of the physics game engine, a game application in the 2D platformer genre was developed [3]. A platformer is a genre of computer games whose main gameplay feature is jumping on platforms, climbing stairs, and picking up items that are usually necessary to complete the level. Description of the solid model. Objects in the physics engine can be represented by a convex polygon or a circle. For simplicity, we will use the term geometry in cases where the shape of the object does not matter. An object represented as a convex polygon is defined by a set of vertices, where each vertex has its own weight and is defined by coordinates on the plane. An object represented by a circle is defined by the radius and center of the circle [1]. All objects are considered to be absolutely rigid.
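The selected broad-phase algorithm can be sketched as follows. This is an illustrative Java fragment, not the engine's actual code; the Aabb record and the pair representation are our own.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Minimal sketch of the broad-phase Sweep-and-Prune pass:
// sort bounding boxes along one axis, sweep, and report pairs
// whose projections overlap on both axes.
public class SweepAndPrune {
    public record Aabb(int id, float minX, float maxX, float minY, float maxY) {
        boolean overlapsY(Aabb o) { return minY <= o.maxY && o.minY <= maxY; }
    }

    // Returns id pairs of AABBs that overlap, sweeping along x.
    public static List<int[]> findPairs(List<Aabb> boxes) {
        List<Aabb> sorted = new ArrayList<>(boxes);
        sorted.sort(Comparator.comparingDouble(Aabb::minX)); // O(n log n)
        List<int[]> pairs = new ArrayList<>();
        List<Aabb> active = new ArrayList<>();
        for (Aabb b : sorted) {
            // Prune boxes whose x-interval ended before b starts.
            active.removeIf(a -> a.maxX() < b.minX());
            for (Aabb a : active) {
                if (a.overlapsY(b)) pairs.add(new int[]{a.id(), b.id()});
            }
            active.add(b);
        }
        return pairs;
    }

    public static void main(String[] args) {
        List<Aabb> boxes = List.of(
            new Aabb(0, 0, 2, 0, 2),
            new Aabb(1, 1, 3, 1, 3),   // overlaps box 0
            new Aabb(2, 5, 6, 0, 1));  // disjoint
        System.out.println(findPairs(boxes).size()); // prints 1
    }
}
```

The sort dominates at O(n log n); the inner loop only visits boxes whose x-intervals overlap, which is the point of the method.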
A rigid agent is an agent that does not change its shape. Each object has physical properties: density p, mass m, rotation angle, velocity u, acceleration a, angular velocity, angular acceleration, moment of inertia, torque, a set of forces F, coefficient of elasticity e, and coefficient of friction k. Each object is affected by the force of gravity. In a collision, objects exchange elastic impulses, taking rotation and friction into account [2]. Description of the stages of the physics simulation of a rigid agent. At the first stage, the broad phase is realized. The selected Sweep-and-Prune method significantly optimizes the collision detection algorithms for solids. Sweep-and-Prune is a method of sorting bounding parallelepipeds by coordinates. Using this algorithm requires an axis-aligned bounding box (AABB) for every object, so bounding parallelepipeds must be constructed for all objects. The detection of intersections between the AABBs of objects is realized by means of the Sweep-and-Prune algorithm. The algorithm receives as input a list of all objects in the game world. Next, the sample variance is calculated for each axis, by the formula s^2 = (1/n) sum_i (x_i - x_mean)^2, where the x_i are the object coordinates along the axis and x_mean is their mean; the sweep is performed along the axis with the largest variance. At the second stage, the narrow phase is realized. The analysis showed that the most effective algorithm for a 2D physics engine is one based on the Separating Axis Theorem (SAT), using the optimization proposed by Dirk Gregorius in 2013, which consists in finding the reference points [11]. Implementation of the integration method: the initial position and velocity are stored, and the acceleration is calculated from the position and velocity. Then the position p2 and velocity v2 are calculated, and the acceleration is recalculated from the new position and velocity. The third stage is the calculation of all forces acting on the object based on the detected contact points.
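The integration scheme whose implementation steps are described above (store the initial position and velocity, take a predictor step, re-evaluate the acceleration, average the slopes) corresponds to the improved Euler (Heun) method. A minimal sketch with illustrative names follows; the free-fall example and the step signature are ours, not the paper's API.

```java
import java.util.function.DoubleBinaryOperator;

// Sketch of one improved Euler (Heun) step for x' = v, v' = a(x, v).
public class ImprovedEuler {
    public static double[] step(double x, double v, double h, DoubleBinaryOperator accel) {
        double a1 = accel.applyAsDouble(x, v);      // slope at the start
        double xp = x + h * v;                      // predictor (explicit Euler)
        double vp = v + h * a1;
        double a2 = accel.applyAsDouble(xp, vp);    // slope at the predicted end
        double xn = x + h * 0.5 * (v + vp);         // average the two slopes
        double vn = v + h * 0.5 * (a1 + a2);
        return new double[]{xn, vn};
    }

    public static void main(String[] args) {
        // Free fall under gravity: a = -g, independent of x and v.
        double g = 9.81, x = 100.0, v = 0.0, h = 0.01;
        for (int i = 0; i < 100; i++) {             // simulate 1 second
            double[] s = step(x, v, h, (px, pv) -> -g);
            x = s[0]; v = s[1];
        }
        // Exact answers after 1 s: x = 100 - g/2 = 95.095, v = -9.81.
        System.out.println(Math.round(x * 1000.0) / 1000.0 + " "
                         + Math.round(v * 1000.0) / 1000.0);
    }
}
```

For constant acceleration the method is exact, and in general the accumulated error is second order, consistent with the claim in the text.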
At this stage of the research, the impulse calculation method is used. At the fourth stage, the positions of objects change according to the calculated acting forces and impulses. The analysis showed that the stable and most optimal choice is the Improved Euler Integrator, a second-order integrator, i.e. its accumulated error is of the order of the second derivative. Thus, the need arises for a module describing the geometric shape of an object, as well as a module that determines the physical properties of an object. The following collision-detection modules must also be developed: for two convex polygons, for two circles, and for a convex polygon and a circle. After collisions are detected, a module is needed to exchange impulses between a pair of objects. Obviously, the visualized physical model requires a controller that manages all the stages of the simulation. When developing a physics engine, the overall structure of the application should be thought through carefully. During the development of the physics-engine model, the need for the following classes was identified:
Shape. An abstract object-shape class that unites the Circle and Polygon classes. Contains fields and methods common to the inheriting classes.
Circle. This class is created as a separate independent module that defines the geometry of an object. Inherited from the Shape class. Used to represent a circle. Receives the radius and center of the circle as input.
Polygon. This class is created as a separate independent module that gives geometry to an object. Inherited from the Shape class. Used to represent the convex polygon of an object. Receives as input an array of points that form a convex polygon.
PhysicsObject. The class is created as a separate independent physical module that defines the physics of an object.
GameActor.
A game object that contains the geometry of the Shape class and the physics of the PhysicsObject class.
Contact. Contains links to the two intersecting objects, the contact point, the normal vector, and the minimum penetration depth.
SATCollisionCallback. The interface contains two overloads of the intersection-check method.
SATPolygonToPolygon. The class implements the SATCollisionCallback interface methods for two convex polygons on the basis of SAT.
SATCircleToCircle. The class implements the SATCollisionCallback interface methods for two circles on the basis of SAT.
SATPolygonToCircle. The class reverses the invocation order of the objects relative to the SATCircleToPolygon class.
ContactSolver. The class takes data from the Contact class and resolves collisions by applying impulses at the contact points, calculating the linear and angular velocity and the friction force.
PhysicsController. The class performs the role of updating the physics. Object pairs are checked for intersection using the broad-phase algorithm. Then, for all potentially colliding pairs, their intersection is determined, depending on the geometric shapes of the objects, using the appropriate narrow-phase algorithms. If objects intersect, the narrow-phase algorithms return a list of contacts between pairs of objects. Next, the ContactSolver class methods are run to calculate the impulses at the contact points, and the PhysicsObject class methods change the velocities and positions of objects in accordance with the calculated impulses and the forces acting on the objects.
We present the results of each phase of the collision of two objects. For one of the tests, the collision of two square objects was realized. One object fell from above under the influence of gravity onto another object standing on a platform. The elasticity parameter was set equal to one (absolutely elastic collision).
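The narrow-phase check based on the separating axis theorem can be sketched as follows. This is illustrative Java; contact-point generation and the Gregorius reference-point optimization are omitted, and the vertex-array representation is ours.

```java
// SAT overlap test for two convex polygons: if some edge normal of either
// polygon separates the projections of the two vertex sets, there is no
// collision; otherwise the polygons intersect.
public class SatTest {
    // polygon: float[][] of (x, y) vertices in order.
    public static boolean overlap(float[][] a, float[][] b) {
        return !hasSeparatingAxis(a, b) && !hasSeparatingAxis(b, a);
    }

    private static boolean hasSeparatingAxis(float[][] poly, float[][] other) {
        int n = poly.length;
        for (int i = 0; i < n; i++) {
            float[] p = poly[i], q = poly[(i + 1) % n];
            float nx = -(q[1] - p[1]), ny = q[0] - p[0]; // edge normal
            float[] ra = project(poly, nx, ny);
            float[] rb = project(other, nx, ny);
            if (ra[1] < rb[0] || rb[1] < ra[0]) return true; // gap found
        }
        return false;
    }

    private static float[] project(float[][] poly, float nx, float ny) {
        float min = Float.POSITIVE_INFINITY, max = Float.NEGATIVE_INFINITY;
        for (float[] v : poly) {
            float d = v[0] * nx + v[1] * ny;
            min = Math.min(min, d);
            max = Math.max(max, d);
        }
        return new float[]{min, max};
    }

    public static void main(String[] args) {
        float[][] sq1 = {{0,0},{2,0},{2,2},{0,2}};
        float[][] sq2 = {{1,1},{3,1},{3,3},{1,3}};  // overlaps sq1
        float[][] sq3 = {{5,5},{6,5},{6,6},{5,6}};  // disjoint
        System.out.println(overlap(sq1, sq2) + " " + overlap(sq1, sq3));
    }
}
```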
After the intersection of objects was detected in the broad phase, the objects were checked for intersection in the narrow phase by the Separating Axis Theorem algorithm. The last stage was collision resolution: after a collision, the objects exchange elastic impulses, and the linear and angular velocities are calculated. The impulses applied at the contact points cause the object standing motionless on the platform to move. Repeated and complete testing of the developed physics engine for modeling the interaction of solids showed that it is fully functional.

Conclusion

The aim of the study was to visualize a physical model of the interaction of solids using the example of a computer game. To achieve this goal, an analysis of collision detection and resolution algorithms, as well as of methods for integrating the equations of motion, was carried out. The object-oriented language Java was used as the programming tool, and Android Studio as the development environment. The cross-platform library libgdx was used to visualize the application. The paper describes a model of a game object that includes rigid-body physics and a geometric representation: a circle or a convex polygon. It also describes the module for the interaction of solids, which is divided into several stages: the broad phase, the narrow phase, collision resolution, and integration of the equations of motion. During the design, the need to create the following classes was identified: a class that determines the geometric shape of an object, a rigid-body physics class, a game-object class, a class that controls all stages of the physics, collision-detection classes for convex polygons and circles, a class containing information about a detected collision, and a class for the transmission of impulses between a pair of colliding agents. The formulated algorithm for collision detection and contact-point detection was fully visualized.
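The impulse exchange observed in the test can be sketched for the purely linear case; rotation and friction are omitted for brevity, and the Body class is illustrative rather than the engine's actual ContactSolver.

```java
// Normal-impulse resolution between two rigid bodies:
// j = -(1 + e) * (v_rel . n) / (1/m1 + 1/m2), applied along the contact normal.
public class ImpulseSolver {
    public static class Body {
        double mass, vx, vy;
        Body(double mass, double vx, double vy) { this.mass = mass; this.vx = vx; this.vy = vy; }
    }

    // (nx, ny): unit contact normal from a to b; e: coefficient of restitution.
    public static void resolve(Body a, Body b, double nx, double ny, double e) {
        double rvx = b.vx - a.vx, rvy = b.vy - a.vy;       // relative velocity
        double velAlongNormal = rvx * nx + rvy * ny;
        if (velAlongNormal > 0) return;                    // already separating
        double invMassSum = 1.0 / a.mass + 1.0 / b.mass;
        double j = -(1 + e) * velAlongNormal / invMassSum; // impulse magnitude
        a.vx -= j * nx / a.mass; a.vy -= j * ny / a.mass;
        b.vx += j * nx / b.mass; b.vy += j * ny / b.mass;
    }

    public static void main(String[] args) {
        // Equal masses, head-on, perfectly elastic (e = 1): velocities swap.
        Body a = new Body(1, 1, 0), b = new Body(1, -1, 0);
        resolve(a, b, 1, 0, 1.0);
        System.out.println(a.vx + " " + b.vx); // prints -1.0 1.0
    }
}
```

With e = 1 this reproduces the absolutely elastic exchange used in the test above; e < 1 dissipates kinetic energy along the normal.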
During full repeated testing, it was verified that the contact points between a pair of colliding objects are determined correctly and are used for the exchange of elastic impulses. A number of optimizations that significantly reduce the computational load of the application were presented. Thus, the efficiency of the developed physics engine is confirmed, and the effectiveness of the proposed optimizations is demonstrated.
Synchronous unilateral basal cell adenoma of the parotid gland: A case report The current study reports the case of a 68-year-old male who presented with a 4-month history of a painless slow-growing mass in the left parotid region. Magnetic resonance imaging revealed two independent, round lesions in the superficial and deep lobes of the parotid gland on the left side, respectively. A total parotidectomy was performed and basal cell adenomas (BCAs) were identified by histopathological examination. At the 6-month follow-up examination, no sign of recurrence was found. This study describes the clinical features of a rare case of synchronous unilateral BCA in the parotid gland and also provides a review of the literature. Introduction Basal cell adenoma (BCA) was first described in the salivary gland in a study by Kleinsasser and Klein in 1967 (1). This tumor represents 1-2% of all salivary gland tumors, and the majority of these are found in the parotid gland. In 1991, BCA was recognized as an independent entity in the second edition of the Salivary Gland Tumors Classification by the World Health Organization (2). Histologically, this tumor is composed of basaloid cells delineated from the stroma by the basement membrane. There are four characteristic patterns of BCA: solid, trabecular, tubular and membranous. The membranous subtype exhibits high recurrence rates. Histological and immunohistochemical staining are used for diagnosis, while surgical resection with a cuff of normal salivary tissue is the main treatment. The current study reports a rare case of synchronous BCA of the left parotid gland in a 68-year-old male. In addition, the clinical features of the condition are described and a review of the literature is presented. This study was approved by the Ethics Committee of Wuhan Central Hospital (Wuhan, China) and was performed according to the Declaration of Helsinki. The patient provided written informed consent.
Case report In November 2012, a 68-year-old male presented to Wuhan Union Hospital (Wuhan, China) with a mass in the left infra-auricular area, and was referred to the Department of Oral and Maxillofacial Surgery at Wuhan Central Hospital (Wuhan, China) in March 2013. Four months prior to admittance, an ultrasound examination at Wuhan Union Hospital identified a homogeneous tumor in the left parotid region. A fine-needle aspiration biopsy extracted brown liquid indicative of a cyst of the parotid gland. Upon physical examination, a round, 1.5x1.5-cm, movable, tender and painless mass was palpable on the superior portion of the parotid gland of the left side. The tumor was not attached to the skin, and no facial palsy or regional lymphadenopathy was observed. Magnetic resonance imaging (MRI) was performed and showed two independent masses in the superficial and deep lobes of the parotid gland on the left side, respectively (Figs. 1 and 2). The tumors were well-marginated, with peripheral solid and central cystic components. The superficial tumor measured 12 mm in diameter, whereas the deeper tumor measured 15 mm in diameter. From these results, the initial diagnosis was of synchronous unilateral tumors, similar to Warthin's tumors. The MRI features on the T1-weighted images revealed differences in the composition of the tumors. The solid component of the superior tumor returned a hypointense signal, higher than that of muscle, but lower than the surrounding parotid tissue. For the deep mass, however, the solid component exhibited slight hyperintensity compared with the superior tumor, and isointensity compared with the surrounding parotid tissue. Compared with the central component of the two masses, the superior tumor exhibited moderate enhancement and the deep tumor was slightly hypointense. On T2-weighted images, moderate enhancement was observed in the peripheral component and hypointensity in the central component.
A total parotidectomy was performed, which included resection of the two tumors and preservation of the facial nerve. Histopathological examination and immunohistochemical study demonstrated that the tumors were BCAs (Figs. 3 and 4). After 6 months of follow-up, no sign of recurrence was found and the facial nerve function had recovered well. Discussion Synchronous unilateral or bilateral multifocal tumors of the salivary glands rarely occur, representing <1% of major salivary gland tumors (3). Adenolymphoma is the most common type of multifocal tumor (4,5). BCA is an uncommon benign neoplasm, accounting for ~2% of tumors in the salivary glands, and with the majority found in the parotid gland. The occurrence of synchronous bilateral BCAs of the parotid gland is also rare, with only four previously reported cases (6)(7)(8)(9). Synchronous unilateral BCA in the parotid gland is extremely rare, and has only been reported once by Kuratomi et al (10) in 2006. That study described the case of an elderly female with two simultaneous BCAs as recurrent tumors of pleomorphic adenoma (PA) of the left parotid gland. Clinical palpation is poor at detecting multifocal ipsilateral tumors, particularly for those tumors that occur in the deep portion, so the use of imaging techniques is necessary pre-operatively. Studies on MRI and computed tomography (CT) for the assessment of BCA are few in number. Kiyosue et al (11) first reported the MRI findings of BCA of the parotid gland. In that study, BCA was well circumscribed with a rounded shape, and the solid section of the tumor exhibited a lower intensity signal than that of the surrounding parotid tissue on T1- and T2-weighted images.
Ethunandan et al (12), however, found that imaging investigations were able to diagnose only 23% of ipsilateral multiple tumors, while another 56% of tumors were noted by palpation during surgery, and therefore suggested the use of intra-operative palpation to evaluate the presence and location of multiple tumors. Differential diagnoses for BCA of the parotid gland include PA and Warthin's tumors. A mass with lobulated contours favors the diagnosis of a PA, while cyst formation is more common in Warthin's tumors (13,14). Kuratomi et al (10) found that epithelial tumor cells of PA may form BCA through certain differentiation mechanisms. This conclusion followed from the authors' observation that basal cells of the PA epithelium possess reserve-cell functions and, through epithelial-mesenchymal transdifferentiation, form the predominant basaloid cell population of BCA. Chawla et al (15) described the CT appearance of 14 cases of BCA of the parotid gland and found that the presence of linear bands or stellate-shaped non-enhanced areas may be a specific imaging feature of the tumor. Histologically, BCA is composed of basaloid cells that are sharply delineated from the stroma by the basement membrane. The absence of a chondromyxoid stroma may be used to distinguish the tumors from PA (16). There are four characteristic patterns of BCA: solid, trabecular, tubular and membranous. The membranous subtype forms 10% of BCAs, and is often non-encapsulated, multicentric and multilobular, with a post-resection recurrence rate of up to 25% (16). The other subtypes, however, have low recurrence rates due to the absence of pseudopodia (17). The present study encountered a rare case of synchronous BCA of the left parotid gland. Local excision or extracapsular dissection is not suitable for multifocal ipsilateral or non-encapsulated tumors; therefore, the present case underwent a total parotidectomy.
The partition function of the linear Poisson-sigma model on arbitrary surfaces We perform the calculation of the partition function of the Poisson-sigma model on the world sheet with the topology of a two-dimensional disc. Considering the special case of a linear Poisson structure, we recover the partition function of the Yang-Mills theory. Using a glueing procedure we are able to calculate the partition function for arbitrary base manifolds. Introduction The Poisson-sigma model has attracted increasing interest in recent years. Originally investigated by Schaller and Strobl as a generalization of 2d gravity-Yang-Mills systems [1], and independently by Ikeda as a non-linear gauge theory [2], it turned out to be more than a unified treatment of these specific models. Actually, the Poisson-sigma model associates to various Poisson structures on finite dimensional manifolds two-dimensional field theories which include gravity models [3,4,5] and non-Abelian gauge theories, in particular Yang-Mills theory, and the gauged Wess-Zumino-Witten model [6]. It was already noted in [1,7] that the quantization of the Poisson-sigma model as a field theory has implications for more general questions concerning the quantization of various spaces, in particular the quantization of Poisson manifolds. By use of the canonical quantization procedure it was shown that the symplectic leaves of the target manifold must satisfy an integrality condition. In the meantime Cattaneo and Felder [8] have shown that the perturbation expansion of the path integral in the covariant gauge reproduces Kontsevich's formula for the deformation quantization of the algebra of functions on a Poisson manifold [9]. The connection to gravity models was used by Kummer, Liebl and Vassilevich to investigate the special case of 2d dilaton gravity in the temporal gauge, and they have calculated the generating functional using BRST methods [10]. In further work they have studied the coupling to matter fields [11].
In [12] we have used path integral techniques to derive a general expression for the partition function of the Poisson-sigma model on closed manifolds for an arbitrary gauge. In this calculation we reproduce the quantization condition for the symplectic leaves to be integral, now for arbitrary closed world sheets. Further, we have shown that for a linear Poisson structure the partition function is fully computable and the partition function for the Yang-Mills theory may be recovered from that of the linear Poisson-sigma model. Klimcik [13] has introduced a more general model where the target spaces are given by certain so-called Drinfeld doubles [14], such that the Poisson-sigma model with a Poisson-Lie group as the target space is included. He calculates the partition function for this model, which turns out to be a q-deformation of the ordinary Yang-Mills partition function. In a special case his expression coincides with the Verlinde formula of conformal quantum field theory. Work on the generalization of the Poisson-sigma model to manifolds with boundary, which was already initiated by Cattaneo and Felder for the case where the world sheet has the topology of a two-dimensional disc, is still in progress. Recently, Falceto and Gawedzki have clarified the relation of the boundary version of the gauged WZW model with a Poisson-Lie group G as the target space to the topological Poisson-sigma model with the dual Poisson-Lie group G * as the target [15]. The purpose of the present article is to show that a calculation generalizing that in [12] leads to an almost closed expression for the partition function of the Poisson-sigma model on a disc. Further, by introducing a procedure for glueing manifolds together by identifying certain boundary components, we are able to determine the partition function of the linear Poisson-sigma model on arbitrary oriented two-dimensional manifolds. The paper is structured as follows. Sec.
2 starts with a brief review of the Poisson-sigma model, including the gauge-fixed extended action of the Batalin-Vilkovisky quantization scheme. We then perform the calculation of the partition function on the disc. We show that for a linear Poisson structure on R^3 the partition function of the SU(2) Yang-Mills theory on a disc is recovered. In Sec. 3 we introduce a glueing prescription and evaluate the partition function for the linear Poisson-sigma model on arbitrary base manifolds. Finally, Sec. 4 contains the conclusions and an outlook for further research.

The partition function on the disc

The Poisson-sigma model is a semi-topological field theory on a two dimensional world sheet Σ_g, where g denotes the genus of the manifold. The theory involves a set X^i of bosonic scalar fields which can be interpreted as a set of maps X^i : Σ_g → N, where N is a Poisson manifold. In this article the Poisson manifolds considered are isomorphic to R^n. In addition one has a one-form A_i on the world sheet taking values in T*N. Due to the splitting theorem of Weinstein [16] there exist so-called Casimir-Darboux coordinates X^i → (X^I, X^α) for the target manifold, with the properties that the X^I are a complete set of Casimir functions, and the X^α are Darboux coordinates on the corresponding leaves. In these coordinates the action takes the form given in [1], with the Casimir term C̃(X) = µC(X), where µ is the volume form of the world sheet and C(X) is a Casimir function of the Poisson bivector P. The world sheet can have a boundary; we take it to have the topology of a two dimensional disc, so we must specify the boundary conditions. Denoting by u the coordinates of the world sheet, the fields A_i (i = (I, α)) are restricted to obey A_i(u) · v = 0 for u ∈ ∂D^2 and v a vector tangent to the boundary [8]. Due to the fact that the model possesses gauge invariances which have to be taken into account, one must modify the action in order to perform the path integral quantization.
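Since the displayed action did not survive extraction, it may help to recall the standard form of the Poisson-sigma action with Casimir term (following the conventions of [1]; the transcription is ours, not the paper's displayed equation):

```latex
S[X, A] \;=\; \int_{\Sigma_g} \Big( A_i \wedge \mathrm{d}X^i
      \;+\; \tfrac{1}{2}\, P^{ij}(X)\, A_i \wedge A_j \Big)
      \;+\; \int_{\Sigma_g} \tilde{C}(X),
\qquad \tilde{C}(X) \;=\; \mu\, C(X).
```

In Casimir-Darboux coordinates (X^I, X^α) the bivector has non-vanishing components only along the leaf directions, so the A_I enter only through the kinetic term.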
We used for this purpose the antifield formalism of Batalin and Vilkovisky [17]. The resulting gauge-fixed extended action S_gf[A_I, A_α, X^I, X^α] on D^2 is given in [12]. The underlying geometry of the antifield formalism was described in the paper of Alexandrov et al. [18]. In [19] this approach was extended to the case of world sheets with boundary and applied to the Poisson-sigma model to calculate the extended action. They recovered in this approach the boundary conditions which they used in [8]. In that paper it was pointed out that the Hodge dual antifields have the same boundary condition as the fields. It then follows for u ∈ ∂D^2 that C_i(u) = 0 and A_i(u) · v = 0 for v tangent to the boundary, as well as C*(u) = 0 and A*(u) · w = 0 for w normal to the boundary. The boundary condition for the maps X^i is as follows: one is to include in the path integral only maps which map all points on the boundary to a single point in the target manifold.

Calculation of the partition function on a disc

The partition function for the Poisson-sigma model on a disc, Eq. (2.3), is a path integral over the chosen Lagrangian submanifold Σ_Ψ associated to the gauge fermion Ψ [12]. Here ⟨δ_x, φ(X)⟩ denotes the pairing with the Dirac measure δ_x, a distribution of order zero. φ(X(u_∂)) is an arbitrary function with support only on the boundary of the disc, where u_∂ denotes an arbitrary point on the boundary, and x is a point in the target manifold. This distribution ensures the boundary condition for the fields X, and reflects the freedom of the fields on the boundary of the disc. In general, functions of the form X(u_∂) are observables for the Poisson-sigma model, because of the boundary condition C_i(u_∂) = 0, (S, X)|_{∂D^2} = P^{ij} C_i|_{∂D^2} = 0, as noted in [8]. The complete list of functional integrations which must be performed in Eq. (2.3) is given in Ref. [12].
If one is interested in submanifolds S of R^n, one has to reduce the Dirac measure to these submanifolds, with ω the Leray form, which can be chosen to be proportional to the volume form induced on the submanifold by the Euclidean measure on R^n [20]. Note that the function φ is restricted to the submanifold S, and the dependence on the point x passes over to the choice of the specific submanifold. If one applies this restriction to the foliation of the Poisson manifold, such that the symplectic leaves L are the considered submanifolds, the Dirac measure picks the symplectic leaf L given by C(X^I) = const. The form of the partition function in Casimir-Darboux coordinates then follows, and all the integrations over the fields may be performed; the calculation is the same as in [12]. If one performs the gauge fixing for the X^α, the integration over X^α goes over to the sum over the homotopy classes of the maps. This has the consequence that the function φ_L(X^α) does not depend on a specific point of the target anymore, but just on the homotopy class [X^α] of the associated map X^α : D^2 → L. In the resulting expression, Eq. (2.6), the subscript Ω^k(M) indicates that a determinant results from an integration over k-forms, A_{D^2} denotes the surface area of the disc, and Ω_{αβ} is the symplectic form on the leaf. All the functional integrations have been performed and one has arrived at an almost closed expression for the partition function. The boundary condition is now restricted to a function on the symplectic leaves which reflects the freedom of the fields X on the boundary. This means that the boundary condition for the fields X is now reduced to each single symplectic leaf characterized by the corresponding constant mode X^I_0. One can now interpret the boundary condition as follows: the boundary of the disc is mapped to a point in the target space and one associates to this point the Leray form of the leaf in which it lies.
The linear Poisson structure on R^3

In this section we show that the choice of a linear Poisson structure on the target manifold R^3 leads to the partition function for SU(2) Yang-Mills theory on the disc. The symplectic leaves are then 2-spheres S^2, and the Leray form is proportional to the symplectic form on the sphere induced by the Poisson structure on R^3. The mappings X^α : D^2 → S^2 are characterized by their degree n, each degree-n map pulling back n times the symplectic volume ω(X^I_0) of the leaf associated to the constant mode X^I_0. The sum over the degrees defines a periodic delta function, which shows that the symplectic leaves must be integral; more precisely, they are half-integer valued. This connection to the SU(2) Yang-Mills theory was worked out in the canonical formalism by Schaller and Strobl, see [1]. If we choose the unitary gauge for the fields X^α, then both determinants in the partition function of Eq. (2.6) have the same form and it is possible to combine them. The number of linearly independent forms on a bordered manifold with vanishing tangent components, like the gauge fields and the ghosts, is equal to the relative Betti number. It then follows that the combined determinant has as exponent the sum of the Betti numbers, which is equal to the Euler characteristic of the world sheet, where the boundary components are now included. For more details see [21]. It follows that the exponent for the disc is just 1. The determinants yield the symplectic volume Vol(L(X^I_0)) of the leaf L(X^I_0) [12]. The linear Poisson structure gives rise to a Lie algebra structure on the dual space. Weinstein [16] has shown that the symplectic leaves are exactly the orbits Ω of the coadjoint representation of the compact connected Lie group G corresponding to the Lie algebra. The integrality condition (2.8) on the orbits, respectively the symplectic leaves, reduces them to a countable set O(Ω(X^I_0)).
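The integrality argument can be sketched as follows. This is our reconstruction, assuming that each degree-n sector contributes a phase proportional to n times the symplectic volume of the leaf:

```latex
\sum_{n \in \mathbb{Z}} e^{\, i\, n\, \omega(X^I_0)}
  \;=\; 2\pi \sum_{k \in \mathbb{Z}} \delta\big( \omega(X^I_0) - 2\pi k \big),
```

so only leaves whose symplectic volume ω(X^I_0) is an integer multiple of 2π contribute. With the normalization of [1], these are the half-integer leaves, in agreement with the half-integer spins labelling the SU(2) representations.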
The final result for the partition function of the linear Poisson-sigma model on the disc is then Eq. (2.9), where we have introduced the notation χ_Ω(φ_Ω) = ⟨δ_Ω, φ_Ω(X^α)⟩ = ∫_Ω ω φ_Ω(X^α). Further, the function φ is still dependent on the coordinates of the leaf, but due to the fact that one integrates over the coadjoint orbit Ω(X^I_0) with respect to the symplectic form ω, the choice of this function now depends only on the coadjoint orbits. As in the case of closed manifolds [12], it is possible to identify the partition function of the linear Poisson-sigma model with that of the Yang-Mills theory. This is essentially based on the duality of the linear Poisson manifold and the Lie algebra. In this sense we consider the partition function of Eq. (2.9) dual to that of the Yang-Mills theory. To see this one may choose the particular function exp(2πi⟨X^α, X̄⟩), where X̄ is a point of the dual space, the Lie algebra, and ⟨·, ·⟩ denotes the duality pairing. This distribution is nothing else than the Fourier transformation of the measure on the orbits, which is the symplectic structure of the orbit. This in turn is related to the character formula of Kirillov [22]. The key ingredient of the orbit method [22] is the generalized Fourier transform from the space of functions on G to the space of functions on G*, which is the composition of two maps: 1. The map from functions on the group G to functions on the Lie algebra, involving the Jacobian j(X) = d(exp X)/dX of the exponential map; 2. The usual Fourier transform, which sends functions on the Lie algebra to functions on G*. Performing the Fourier transformation explicitly in the present case of G = SU(2), one finds that the Fourier transformation of the Dirac measure restricted to the 2-spheres is proportional to sin(4π X^I_0 X̄)/X̄ [20], where X^I_0 stands now for the quadratic radius, such that the argument of the sine function is scaled by the volume of the 2-spheres, which is, by the special case X̄ = 0 of Kirillov's character formula, the dimension of the corresponding representation.
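The character formula referred to here has the following standard form. This is our transcription, with the normalization of |X| chosen to match the SU(2) expressions in the text:

```latex
\chi_\lambda(\exp X) \;=\; \frac{1}{\sqrt{j(X)}}
  \int_{\Omega_\lambda} e^{\, i \langle x, X \rangle}\, \mathrm{d}\mu(x),
\qquad
\sqrt{j(X)} \;=\; \frac{\sin |X|}{|X|} \quad \text{for } SU(2),
```

which for the spin-j representation yields χ_λ(exp X) = sin((2j+1)|X|)/sin(|X|); its |X| → 0 limit gives dim λ = 2j + 1, the statement used above.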
All irreducible representations of a compact, connected and simply connected Lie group G correspond to integral coadjoint orbits of maximal dimension; in the present case these are the orbits of dimension two, the spheres. Evaluating the determinant of the exponential map j(X) for the case of SU(2) leads to exactly the expression for the character of SU(2). The representations are characterized by their dimensions dim(λ) = Vol(Ω) = 4πX^I_0. Taking into account the symmetrization map [22], which maps the quadratic Casimir C(Ω) characterizing the coadjoint orbit into the Casimir C(λ) of the corresponding representation, one gets the partition function of Eq. (2.15), where χ_λ(exp X̄) denotes the character of the irreducible representation λ of the SU(2) group. Equation (2.15) is the partition function of the two-dimensional Yang-Mills theory on a disc [23]. We see that it is a special case of the linear Poisson-sigma model, with exp(2πi⟨X^α, X̄⟩) as the specific function on the boundary, which corresponds to exp X̄ by the identification of the Poisson manifold with its dual, the Lie algebra.

The linear Poisson-sigma model on arbitrary surfaces

The two-dimensional oriented manifolds are fully classified. Starting with a few standard manifolds it is possible to obtain an arbitrary manifold with the help of a glueing prescription [24]. We are interested in a glueing prescription for the partition function of the Poisson-sigma model, which allows the partition function for the glued manifold to be deduced from the partition functions for the components. We consider the various cases in turn.

3.1 g = 0, n ≥ 1

First we want to create manifolds with more than one boundary component. Geometrically this means that one starts with a boundary component, a circle, and deforms it into a rectangle. After that one identifies two opposite edges such that an additional boundary component is created. A formula which allows one to perform such calculations is provided in Ref. [25].
For functions φ_1, φ_2 ∈ C(G), G a Lie group, define the convolution in the usual way. There exists a well-known equation for the generalized character [26], in which λ denotes an irreducible representation of the group. From this equation, together with the fact that the characters form an orthogonal basis for the central functions, one obtains the behaviour of characters under convolution. One can shift the group convolution to the Lie algebra with the so-called wrapping map, as shown in [27]. Let ψ_1, ψ_2 be G-invariant, smooth functions on G. We denote by ψ^∧ the Fourier transform to the dual space of G. Then the convolution passes to the coadjoint orbits, since ∫_{Ω_λ} dµ = dim(λ), where dµ stands for the measure corresponding to the symplectic form of the coadjoint orbit Ω. Translating this into the notation of the previous section gives formula (3.5). Using formula (3.5) in Eq. (2.9), the result is a partition function containing two functions, one with support on each boundary. Geometrically this process can be interpreted as follows. First one deforms the boundary, the circle, into a rectangle such that each edge of the rectangle has its own degree of freedom, respectively its own function, on the edge. The freedom on the boundary turns into χ_Ω(φ) = χ_Ω(φ_1 φ_2 φ_3 φ_4), where the φ_i denote the corresponding parts of the function φ with support on the edge i of the rectangle. Then one identifies two opposite edges. The resulting surface is of course a cylinder. This result can be compared with the results achieved in the Dirac quantization scheme by Schaller and Strobl in [1]. In that paper they performed the canonical quantization and solved the operator constraint equation for the linear Poisson structure in Casimir-Darboux coordinates. Their result was that the wave functions are restricted to the symplectic leaves, as are the functions φ in our calculation, and hence the distributions χ_Ω(φ). Further, they showed that in the general case each integral orbit corresponds to one quantum state. This can be seen in our calculation in Eq.
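The "well-known equation for the generalized character" invoked above is presumably the multiplicativity of characters under convolution; a hedged LaTeX statement consistent with the dim(λ) factor in the text is:

```latex
% Convolution of class functions and its action on characters
% (standard result; reconstruction consistent with the surrounding text):
\[
  (\phi_1 * \phi_2)(g) \;=\; \int_G \phi_1(h)\,\phi_2(h^{-1}g)\,\mathrm{d}h,
  \qquad
  \chi_\lambda(\phi_1 * \phi_2)
    \;=\; \frac{1}{\dim(\lambda)}\,\chi_\lambda(\phi_1)\,\chi_\lambda(\phi_2).
\]
```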
(2.8), the integral condition for the orbits. Note that by choosing both functions to be exp(2πi⟨X^α, X̄⟩) one gets the correct result for the partition function of the two-dimensional Yang-Mills theory on the cylinder [23]. The manifold with three boundary components, the next step in our construction, is called the pants manifold, and its partition function is obtained in the same way. In this way we can get any manifold with an arbitrary number n ≥ 1 of boundaries; for each boundary component there is an additional factor χ_Ω(φ) with the new boundary function φ, as well as an additional factor Vol(Ω)^{-1}.

3.2 g = 0, n = 0

We now want to calculate the surface with genus g = 0 and no boundary component, which is the 2-sphere. The difference is that now we do not just deform the manifold as in the previous section; here we glue the manifolds together to get the sphere. For this glueing we define a product on the boundary functions. This definition of the glueing product is thus quite natural. We are now in a position to calculate the partition function for the sphere by glueing two discs together, Z(S^2) = Σ_Ω Vol(Ω)^2 exp(A_{S^2} C(Ω)), (3.11) which is exactly the partition function for the linear Poisson-sigma model on the sphere calculated in [12]. Another check for the new product is performed by deforming two discs to rectangles, and then glueing two edges together. The result should again be a rectangle, i.e. one should obtain the partition function for the disc. The partition function takes the corresponding form with φ = φ_1 φ_2 φ_3 φ_4 φ_5 φ_6. This calculation shows that the glueing condition (3.9) is self-consistent.

3.3 g = 1, n ≥ 0

If one changes the genus of the surface one has to use the glueing product (3.9). The manifold with genus g = 1 and no boundary is the torus. One can get it by glueing together two cylinders: Z(T) = Σ_Ω exp(A_T C(Ω)). (3.14) The torus is again a manifold without boundary, and one can compare it with the solution in [12].
The Euler character for the torus is zero, so in the partition function the symplectic volume of the coadjoint orbit does not appear, and we have the same result as in [12]. The next manifold we consider is the handle Σ_{1,1}, with genus g = 1 and one boundary component n = 1. To get this surface one has to take the pants manifold and glue two of the boundary components together. Due to the fact that one changes the genus one has to use the glueing product (3.9), which yields Z(Σ_{1,1}, φ) = Σ_Ω Vol(Ω)^{-1} χ_Ω(φ) exp(A_{Σ_{1,1}} C(Ω)). (3.15) This result enables us to calculate the partition function for the torus in yet a third way: starting from the handle, we glue a disc onto its boundary, which gives the same result as (3.14). By glueing two pants together at two boundaries one gets the manifold Σ_{1,2}, with the resulting partition function given in (3.17). Due to the fact that we do not change the genus we can proceed as in the previous section. Starting with the partition function (3.15), χ_Ω(φ) exp(A_{Σ_{1,1}} C(Ω)), and applying (3.5) yields the same result as in Eq. (3.17). In this way one gets the partition function of any surface with genus g = 1 and an arbitrary number n of boundary components: Σ_{1,n}.

3.4 Arbitrary g and n

With the considerations of the previous sections we are in a position to calculate the partition function for the linear Poisson-sigma model on an arbitrary two-dimensional (oriented) manifold. The fundamental manifold we start with is the pants manifold Σ_{0,3}. The question is how one can calculate the partition function on a manifold Σ_{g,n} with arbitrary n and g. Starting with the pants manifold it should be possible to increase g and n in an arbitrary way. On the other hand one must have the possibility to decrease the number of boundary components to zero. Hence, one has three requirements:

• The adding of a disc, i.e. decreasing the number of boundary components n by one, results in multiplying the partition function by a factor Vol(Ω).
• The glueing of the pants manifold, i.e. increasing the number of boundary components by one, results in a factor Vol(Ω)^{-1}. This is similar to the application of (3.5).

• The glueing of Σ_{1,2} increases the genus by one, while in the partition function an additional factor of Vol(Ω)^{-2} appears.

These considerations lead to the final expression for the partition function of the linear Poisson-sigma model on an arbitrary surface Σ_{g,n}, Z(Σ_{g,n}, φ_1, . . . , φ_n). One sees that the exponent of the volume of the symplectic leaf is exactly the Euler characteristic for a two-dimensional manifold with genus g and n boundary components. This is the result which would be expected by consideration of the determinants in the partition function. If one now chooses for each function the specific one which leads to the Fourier transformation for the symplectic measure of the orbit, one reproduces the result for the two-dimensional Yang-Mills theory on arbitrary oriented manifolds [23].

In this article we have shown that it is possible to calculate the partition function of the linear Poisson-sigma model on an arbitrary oriented two-dimensional manifold. To achieve this result we started with the partition function on the disc and then defined the glueing product (3.9) to go over to manifolds with arbitrary genus and number of boundary components. The result includes the case of closed manifolds, which was calculated in [12] in another way. An interesting further step towards the general quantization of the Poisson-sigma model would be the calculation of the partition function for more general Poisson structures.
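The general partition function described in words above (volume factor with Euler-characteristic exponent, one boundary function per boundary component) can plausibly be written as:

```latex
% Partition function on an arbitrary oriented surface \Sigma_{g,n}
% (reconstruction from the verbal description in the text):
\[
  Z(\Sigma_{g,n}, \phi_1, \ldots, \phi_n)
    \;=\; \sum_{\Omega \in \mathcal{O}}
      \mathrm{Vol}(\Omega)^{\,2-2g-n}
      \prod_{i=1}^{n} \chi_\Omega(\phi_i)\,
      \exp\!\bigl(A_{\Sigma_{g,n}} C(\Omega)\bigr),
\]
% where 2-2g-n is the Euler characteristic of \Sigma_{g,n} and the sum
% runs over the countable set of integral coadjoint orbits.
```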
Polydatin inhibits ZEB1‐invoked epithelial‐mesenchymal transition in fructose‐induced liver fibrosis Abstract High fructose intake is a risk factor for liver fibrosis. Polydatin is a main constituent of the rhizome of Polygonum cuspidatum, which has been used in traditional Chinese medicine to treat liver fibrosis. However, the underlying mechanisms of fructose‐driven liver fibrosis as well as the actions of polydatin are not fully understood. In this study, fructose was found to promote zinc finger E‐box binding homeobox 1 (ZEB1) nuclear translocation, decrease microRNA‐203 (miR‐203) expression, increase survivin, activate transforming growth factor β1 (TGF‐β1)/Smad signalling, down‐regulate E‐cadherin, and up‐regulate fibroblast specific protein 1 (FSP1), vimentin, N‐cadherin and collagen I (COL1A1) in rat livers and BRL‐3A cells, in parallel with fructose‐induced liver fibrosis. Furthermore, ZEB1 nuclear translocation‐mediated miR‐203 low‐expression was found to target survivin to activate TGF‐β1/Smad signalling, causing the EMT in fructose‐exposed BRL‐3A cells. Polydatin antagonized ZEB1 nuclear translocation to up‐regulate miR‐203, subsequently blocked survivin‐activated TGF‐β1/Smad signalling, which were consistent with its protection against fructose‐induced EMT and liver fibrosis. These results suggest that ZEB1 nuclear translocation may play an essential role in fructose‐induced EMT in liver fibrosis by targeting survivin to activate TGF‐β1/Smad signalling. The suppression of ZEB1 nuclear translocation by polydatin may be a novel strategy for attenuating the EMT in liver fibrosis associated with high fructose diet. subsequently recruit their co-mediator Smad4 to nuclear translocation, and then regulate hepatocytic phenotype and function. 6,7 Survivin as a crucial inhibitor of apoptosis promotes bile duct ligation-induced rat liver fibrosis, 8 positively regulates TGF-β1 expression in adenoid cystic carcinoma cases 9 and provokes the EMT in glioblastoma. 
10 Recently, excess fructose consumption has been reported to increase the collagen content of liver parenchyma in rodents. 3 In this regard, survivin may activate TGF-β1/Smad signalling to promote the fructose-caused EMT process in liver fibrosis. Of note, survivin/baculoviral IAP repeat containing 5 (BIRC5) is identified as a target gene of microRNA-203 (miR-203). 11 miR-203 inhibits the EMT of an ovarian cancer cell line by targeting BIRC5 and blocking the TGF-β signal pathway. 12 Moreover, miR-203 expression is decreased in a hepatitis C virus core protein-stimulated human hepatocyte cell line. 13 However, it remains unknown how fructose alters miR-203 expression and whether this event affects survivin-activated TGF-β1/Smad signalling in the process of the EMT in liver fibrosis. Zinc finger E-box binding homeobox 1 (ZEB1) as a transcription factor suppresses the transcription of miR-203 in human cancer cells. 14,15 It up-regulates survivin gene expression and inhibits E-cadherin nuclear re-expression in a thyroid papillary carcinoma cell line. 16 In addition, ZEB1 induces TGF-β1 expression in dimethylnitrosamine-induced liver fibrosis of rats. 17 However, it remains unclear whether ZEB1 nuclear translocation mediates miR-203 deregulation targeting survivin, which is required for TGF-β1/Smad signalling activation in fructose-driven hepatocyte EMT. Polydatin is a major active ingredient derived from the rhizome of Polygonum cuspidatum Siebold & Zucc., which alleviates liver fibrosis in patients and experimental animals. [18][19][20] Previous studies have shown that polydatin down-regulates TGF-β1, collagen and p-Smad3 in diet-induced fibrotic liver of mice, 21 and up-regulates E-cadherin and represses radiation-induced EMT in lung tissues of mice. 22 However, whether polydatin inhibits ZEB1 nuclear translocation to augment miR-203 and block survivin-activated TGF-β1/Smad signalling in the alleviation of fructose-induced EMT and liver fibrosis remains largely unexplored.
In this study, we found that ZEB1 nuclear translocation was sufficient to decrease miR-203 expression; this new action is a suitable alternative for targeting survivin-activated TGF-β1/Smad signalling in fructose-driven EMT and liver fibrosis. Additionally, we found that polydatin suppressed ZEB1 nuclear translocation to increase miR-203 expression, and then down-regulated survivin to block TGF-β1/Smad signalling activation, resulting in the alleviation of fructose-caused EMT and liver fibrosis. Invitrogen™ TRIzol reagent was obtained from Thermo Fisher Scientific. The reverse transcription system kit and ChamQ SYBR qPCR master mix were obtained from Vazyme Biotechnology Co., Ltd. The dual-luciferase reporter assay system kit was obtained from Promega Corporation. MiR-203 mimic, ZEB1 siRNA, survivin siRNA, TGF-β1 siRNA, the respective negative controls and the GP-miRGL0 reporter vector listed in Table 1 were provided by GenePharma Co., Ltd. The following antibodies were purchased from commercial sources: anti-survivin (sc-10811), anti-Histone H3 (sc-517576), anti-TGF-β1 (sc-146), anti-p-Smad3 (Ser208, sc-130218), anti-Smad3. Male Sprague-Dawley rats (180-220 g) were obtained from the Experimental Animal Center of Zhejiang Province (Hangzhou, China; SCXK 2014-0001). Each rat was given drinking water or 100 mL drinking water containing 10% fructose (wt/vol) for 6 weeks. 23,24 Then, rats were randomly divided into six groups (n = 8/group): (a) normal control rats and (b) fructose control rats, which received saline; (c-e) 7.5, 15 and 30 mg/kg polydatin- and fructose-treated rats; (f) 4 mg/kg pioglitazone (positive drug)- and fructose-treated rats; treatments were given orally for the following 11 weeks. Doses of polydatin and pioglitazone used in these animal experiments were selected based on our previous studies 23,24 and other reports. 21,22 A schematic representation of the experiments performed and the timeline with rats is provided in Figure 1A.
| Oral glucose tolerance test (OGTT) and insulin tolerance test (ITT) OGTT and ITT were performed as previously described. 25 Briefly, for OGTT, rats received 1.5 g/kg glucose orally; for ITT, rats were given 0.3 IU/kg insulin intraperitoneally. Then, blood samples were collected from the rat tail veins to test glucose levels with a blood glucose metre at 0, 30, 60, 90 and 120 minutes after the treatment with glucose or insulin, respectively. | Serum and tissue collection At the end of the animal experiments, rats were anaesthetized with 50 mg/kg sodium pentobarbital to collect blood samples from the rat carotid artery as well as liver tissues. Blood samples were kept at room temperature for 1 hour and then centrifuged (3000 × g, 10 min) to obtain the serum samples for biochemical assays. Some liver samples were fixed with paraformaldehyde for histological study, while the others were stored at −80°C for protein or RNA extraction and biochemical assay, respectively. | Histological study Liver tissues fixed with paraformaldehyde were embedded in paraffin. Then, liver specimens (4 μm thick) were cut and stained with Masson trichrome and Sirius red solution, respectively. These sections were observed and photographed under an optical microscope (Nikon Eclipse Ti-SR, Nikon). | Gene expression analysis Total RNA was extracted from rat liver tissues and the cultured BRL-3A cells using TRIzol reagent for analysis of survivin mRNA levels. The primers listed in Table 1 were provided by Generay Biotechnology Co., Ltd. | Western blot analysis The whole, nuclear or cytoplasm proteins from rat liver tissues or the cultured BRL-3A cells were extracted using lysis buffer. | Statistical analysis Data are presented as mean ± standard error of the mean (SEM). Comparisons between two groups were performed by Student's t test. One-way ANOVA followed by post hoc Dunnett's test was used for comparisons between more than two groups. P < .05 was considered to be significant.
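The two statistical procedures described above can be sketched in plain Python; this is an illustrative pure-Python implementation (the function names are mine, not from the paper), computing the Student's t statistic for two groups and the one-way ANOVA F statistic that precedes a post hoc test such as Dunnett's:

```python
import math

def student_t(a, b):
    # Two-sample Student's t statistic (equal-variance pooled form),
    # as used for pairwise comparisons between two groups.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

def anova_f(*groups):
    # One-way ANOVA F statistic across k groups: between-group mean
    # square divided by within-group mean square.
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

For example, `student_t([1, 2, 3], [2, 3, 4])` gives t ≈ -1.22, and `anova_f([1, 2, 3], [2, 3, 4], [3, 4, 5])` gives F = 3.0; in practice one would compare these statistics against the appropriate t and F distributions to obtain p-values.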
| Polydatin ameliorates liver fibrosis in fructose-fed rats with metabolic syndrome First, we examined whether polydatin ameliorated liver fibrosis in fructose-induced metabolic syndrome of rats. The data from biochemical analysis showed that polydatin significantly decreased serum concentrations of insulin ( Figure 1B), TG, IL-1β and TNF-α (Table 2), and alleviated insulin resistance in OGTT and ITT ( Figure 1C,D) in fructose-fed rats. Simultaneously, polydatin remarkably alleviated liver histologic changes, including slight thickening of the central venous wall and perisinusoidal or portal/periportal fibrosis, and reduced the fibrotic area in fructose-fed rats ( Figure 1E). In parallel, it significantly reduced serum levels of hyaluronic acid, laminin and type III procollagen, as well as liver levels of hyaluronic acid and hydroxyproline in this animal model (Table 2). Accordingly, polydatin remarkably decreased serum activities of ALT and AST in fructose-fed rats ( Table 2). Pioglitazone exerted similar effects in this animal model ( Table 2 and Figure 1). These data suggest that polydatin and pioglitazone alleviate liver fibrosis to recover liver function in fructose-fed rats with metabolic syndrome. | Polydatin attenuates fructose-induced EMT in rat liver fibrosis and BRL-3A cells EMT plays an important role in liver fibrosis. 5 Next, we investigated whether polydatin attenuated fructose-induced EMT in liver fibrosis. As noted previously, polydatin effectively de- | Polydatin augments miR-203 targeting survivin to inhibit TGF-β1/Smad signalling activation Survivin is reported to positively regulate TGF-β1 expression in adenoid cystic carcinoma cases 9 and provoke the EMT in glioblastoma. 10 It is worth noticing that miR-203 inhibits the EMT of an ovarian cancer cell line by targeting survivin/BIRC5 and blocking the TGF-β signal pathway. 12 miR-203 is expressed at low levels in carbon tetrachloride-induced rat liver fibrosis.
26 Therefore, we validated the change in miR-203 expression in fructose-caused liver fibrosis. In this study, we found that fructose significantly decreased miR-203 expression levels in rat livers ( Figure 4D) and BRL-3A cells ( Figure 4E). To investigate whether miR-203 changed survivin expression, we carried out the luciferase assay. The results showed that the lu- Furthermore, we observed that polydatin significantly increased miR-203 expression ( Figure 4D,E), and down-regulated survivin mRNA and protein levels ( Figure 4A,B) in fructose-fed rat livers and fructose-exposed BRL-3A cells. Polydatin reversed the effect of survivin siRNA to significantly increase miR-203 expression in fructose-exposed BRL-3A cells ( Figure 4G). Of note, polydatin markedly decreased survivin mRNA and protein levels (24 h) in miR-203 mimic-transfected BRL-3A cells under fructose exposure ( Figure 4H,I). In addition, polydatin down-regulated TGF-β1 protein levels (24 h) in fructose-exposed BRL-3A cells transfected with survivin siRNA ( Figure 4C). Pioglitazone had similar effects in these animal and cell models (Figure 4). These results demonstrate that polydatin and pioglitazone augment miR-203 to down-regulate survivin, and then inhibit TGF-β1/Smad signalling activation. Figure 5C). In fact, fructose intake increased the nuclear ZEB1 protein levels and decreased the cytoplasm ZEB1 protein levels in rat livers ( Figure 5D,E). To investigate the role of ZEB1 in the change of miR-203 expression, we transfected ZEB1 siRNA into BRL-3A cells and found that ZEB1 siRNA significantly induced high expression of miR-203 (12 h) in fructose-exposed BRL-3A cells ( Figure 5H). In contrast, the miR-203 mimic was unable to affect the nuclear ZEB1 protein levels (1 h) in fructose-exposed BRL-3A cells ( Figure 5I). These data indicate that fructose may cause ZEB1 nuclear translocation to reduce miR-203 expression, leading to survivin-mediated activation of TGF-β1/Smad signalling in EMT and liver fibrosis.
| Polydatin inhibits ZEB1 nuclear translocation to enhance miR-203 expression More importantly, we found that polydatin was able to decrease the nuclear ZEB1 protein levels and increase cytoplasm ZEB1 protein levels in fructose-fed rat livers ( Figure 5D,E) and fructose-stimulated BRL-3A cells ( Figure 5F,G). Polydatin increased miR-203 expression (12 h) in BRL-3A cells transfected with ZEB1 siRNA under fructose exposure conditions ( Figure 5H). It was able to significantly decrease nuclear ZEB1 protein levels (1 h) in miR-203 mimic-transfected BRL-3A cells co-cultured with fructose ( Figure 5I). Pioglitazone had similar effects in these animal and cell models ( Figure 5). These results suggest that polydatin and pioglitazone may inhibit ZEB1 nuclear translocation to enhance miR-203 expression and then block survivin-activated TGF-β1/Smad signalling in fructose-induced EMT and liver fibrosis. | DISCUSSION Clinically, excess fructose consumption is associated with the development of liver fibrosis. 2 Polydatin is a main constituent of P. cuspidatum, which has potential utility in the treatment of liver fibrosis in patients. 20 To the best of our knowledge, we are the first to find that ZEB1 nuclear translocation plays an essential role in fructose-induced EMT in liver fibrosis by targeting survivin to activate TGF-β1/Smad signalling. Moreover, polydatin represses ZEB1 nuclear translocation to increase miR-203 expression and subsequently block survivin-activated TGF-β1/Smad signalling, attenuating fructose-induced EMT in liver fibrosis ( Figure 5J). Generally, activated hepatic stellate cells are considered to be the main source of the extracellular matrix. 27 However, hepatocytes stimulated by TGF-β1 not only lose the epithelial phenotype, with a decrease of E-cadherin and an increase of N-cadherin, vimentin and FSP1, but also produce collagen, 4,6,7 indicating that hepatocytes undergoing TGF-β1-induced EMT could be another source of the extracellular matrix.
Of note, long-term fructose intake causes massive collagen deposition in liver parenchyma in cynomolgus monkeys. 3 In this study, we showed that fructose triggered EMT in hepatocytes and, consistently, caused rat liver fibrosis. These observations further demonstrated that fructose-induced hepatocyte EMT, at least to some extent, promoted the liver fibrosis process. In this study, we observed the decrease of E-cadherin with a relocation of E-cadherin from the membrane to the cytoplasm in fructose-exposed BRL-3A cells. β-catenin is reported to guide E-cadherin localization to the cell membrane in Madin-Darby canine kidney cells. 28 Meanwhile, excessive fructose intake decreases β-catenin protein levels in fibrotic livers of mice. 29 Thus, we speculated that fructose-induced β-catenin reduction may obstruct the localization process of E-cadherin from the cytoplasm to the cell membrane, which may cause a relocation of E-cadherin from the membrane to the cytoplasm in fructose-exposed BRL-3A cells. Of note, survivin positively regulates TGF-β1 in a human adenoid cystic carcinoma cell line 9 and promotes the EMT occurrence with E-cadherin low-expression in glioblastoma. 10 In addition, the activation of TGF-β1/Smad signalling is detected in cirrhotic liver of patients 30 and carbon monoxide-induced liver fibrosis of mice. 5 In this study, fructose-induced survivin over-expression and TGF-β1/Smad signalling activation were also observed in rat livers and BRL-3A cells. Furthermore, we found that fructose-induced survivin over-expression provoked the activation of TGF-β1/Smad signalling to develop the EMT, causing liver fibrosis. Therefore, we focused on the regulation of survivin in fructose-induced EMT in liver fibrosis. In relation to this, it is worth noting that miR-203 low-expression decreases E-cadherin in an ovarian cancer cell line in a survivin-dependent manner.
12 We observed miR-203 low-expression in the animal and cell models, consistent with reports in carbon tetrachloride-induced fibrotic liver of rodents, 26 arecoline-induced fibrotic oral submucosa of patients 31 and cirrhotic livers of patients. 32 We also showed that survivin was a target gene of miR-203 in BRL-3A cells. Importantly, miR-203 up-regulation nearly abrogated survivin over-expression in fructose-exposed BRL-3A cells. These results suggest that fructose-induced miR-203 low-expression may target survivin to activate TGF-β1/Smad signalling, causing the EMT. Transcription factor ZEB1 suppresses miR-203 expression and then activates cancer cell epithelial differentiation. 14 High ZEB1 expression is observed in hepatocellular carcinoma patients with or without cirrhosis. 33 In this study, we found that fructose increased nuclear ZEB1 protein levels in the animal and cell models. This fruc- 34 Our previous study found that fructose up-regulated p-STAT3 in rat liver fibrosis. 35 Therefore, we speculated that fructose-induced up-regulation of p-STAT3 may transfer into the nucleus and be recruited to the ZEB1 promoter, causing ZEB1 nuclear translocation in rat liver fibrosis. However, the precise molecular mechanism by which fructose induces ZEB1 nuclear translocation needs further study. Polydatin is reported to down-regulate TGF-β1 in unilateral ureter obstruction-induced fibrotic kidney of rats 36 and radiation-induced fibrotic lung of mice. 22 It also inhibits TGF-β1 and collagen, reduces TGF-β1-induced EMT in human alveolar epithelium A549 cells 37 and protects against methionine-choline deficient diet-induced mouse liver fibrosis. 21 Pioglitazone decreases hepatic TGF-β1 and COL1A1 in non-alcoholic steatohepatitis of mice. 38,39 In this study, polydatin Figure 5J). FIGURE 5 Polydatin inhibits ZEB1 nuclear translocation to enhance miR-203 expression in fructose-exposed BRL-3A cells.
(A) Images of fructose-exposed BRL-3A cells labelled with ZEB1 (red) at the indicated time points. (B) Western blot analysis of the nuclear ZEB1 protein levels at the indicated time points. (C) qRT-PCR analysis of miR-203 expression levels at the indicated time points. Western blot analysis of the nuclear and cytoplasm ZEB1 protein levels in rat livers (D and E) and BRL-3A cells (2 h) (F and G). (H) qRT-PCR analysis of miR-203 expression (12 h) in BRL-3A cells transfected with 50 nM ZEB1 siRNA or NC and treated with fructose in the presence or absence of 40 μM polydatin or 10 μM pioglitazone. (I) Nuclear ZEB1 protein levels in BRL-3A cells transfected with 50 nM miR-203 mimic or NC and exposed to fructose in the presence or absence of 40 μM polydatin or 10 μM pioglitazone. (J) The mechanisms by which polydatin prevents fructose-induced hepatocyte EMT in liver fibrosis. Histone H3 or Lamin A served as internal control for nuclear ZEB1. β-actin or GAPDH served as internal control for cytoplasm ZEB1. U6 served as internal control for miR-203. Each value is shown as mean ± SEM (n = 4-6). # P < .05, ## P < .01, ### P < .001 compared with the normal control; *P < .05, **P < .01, ***P < .001 compared with the fructose control; $ P < .05, $$ P < .01 compared with the fructose-negative control

Therefore, the ability of polydatin and pioglitazone to inhibit ZEB1 nuclear translocation and increase miR-203 expression is key to attenuating the EMT in the protection against liver fibrosis associated with high fructose intake, through the blockade of survivin-mediated TGF-β1/Smad signalling activation. In conclusion, this study demonstrates that fructose causes ZEB1 nuclear translocation to decrease miR-203 expression, which then targets survivin to activate TGF-β1/Smad signalling, developing the EMT in liver fibrosis. ZEB1 nuclear translocation inhibition with high miR-203 expression may be a predictor of good prognosis in patients with liver fibrosis.
Polydatin protects against fructose-induced hepatocyte EMT by suppressing ZEB1 nuclear translocation to up-regulate miR-203 expression and block survivin-activated TGF-β1/Smad signalling, exhibiting potential anti-liver fibrosis activity. The present study also supports that the blockade of ZEB1 nuclear translocation by polydatin is a novel strategy for attenuating EMT in liver fibrosis associated with high fructose consumption. This work was sponsored by grants from the National Key R&D Program of China (2019YFC1711000) and the National Natural Science Foundation of China (No. 81573667). CONFLICT OF INTEREST The authors confirm that there are no conflicts of interest. DATA AVAILABILITY STATEMENT The data that support the findings of this study are available from the corresponding author upon reasonable request.
Observed and simulated global distribution and budget of atmospheric C2-C5 alkanes

The primary sources and atmospheric chemistry of C2-C5 alkanes were incorporated into the atmospheric chemistry general circulation model EMAC. Model output is compared with new observations from the NOAA/ESRL GMD Cooperative Air Sampling Network. Based on the global coverage of the data, two different anthropogenic emission datasets for C4-C5 alkanes, widely used in the modelling community, are evaluated. We show that the model reproduces the main atmospheric features of the C2-C5 alkanes (e.g., seasonality). While the simulated values for ethane and propane are within a 20% range of the measurements, larger deviations are found for the other tracers. According to the analysis, an oceanic source of butanes and pentanes larger than the current estimates would be necessary to match the observations at some coastal stations. Finally, the effects of C2-C5 alkanes on the concentrations of acetone and acetaldehyde are assessed. Their chemical sources are largely controlled by the reaction with OH, while the reactions with NO3 and Cl contribute only to a small extent. The total amount of acetone produced by propane, i-butane and i-pentane oxidation is 11.2 Tg/yr, 4.3 Tg/yr and 5.8 Tg/yr, respectively. Moreover, 18.1, 3.1, 3.4, 1.4 and 4.8 Tg/yr of acetaldehyde are formed by the oxidation of ethane, propane, n-butane, n-pentane and i-pentane, respectively. Correspondence to: A. Pozzer (pozzer@cyi.ac.cy) Three-dimensional (3-D) global models, which represent both transport and chemical processes, make it possible to study and predict the spatial distribution and the temporal development of these species (Gupta et al., 1998; Roelofs and Lelieveld, 2000; Poisson et al., 2000; von Kuhlmann et al., 2003b; Folberth et al., 2006). Here we compare results of the EMAC (ECHAM5/MESSy1 Atmospheric Chemistry) model with data based on flask measurements (see Sect.
3) collected at stations of the NOAA/ESRL GMD Cooperative Air Sampling Network. Published by Copernicus Publications on behalf of the European Geosciences Union.

Model description and setup

EMAC is a combination of the general circulation model ECHAM5 (Roeckner et al., 2006) (version 5.3.01) and the Modular Earth Submodel System (MESSy, version 1.1; Jöckel et al., 2005). Descriptions of the model system were published by Jöckel et al. (2006) and Pozzer et al. (2007). Details about the model system can be found at http://www.messy-interface.org. The setup is based on that of the evaluation simulation S1, described by Jöckel et al. (2006). It was modified by adding the emissions of butane and pentane isomers, and their corresponding oxidation pathways (see Sect. 2.1 and Sect. 2.2). The simulation period covers the years 2005-2008, plus two additional months of spin-up time. The initial conditions are taken from the evaluation simulation S1 of the model. Dry and wet deposition processes are described by Kerkweg et al. (2006a) and Tost et al. (2006), respectively; the tracer emissions are described by Kerkweg et al.
(2006b). As in the simulation S1, the applied spectral truncation of the ECHAM5 base model is T42, corresponding to a horizontal resolution of ≈2.8° × 2.8° of the quadratic Gaussian grid. The applied vertical resolution is 90 layers, with about 25 levels in the troposphere. The model setup includes feedbacks between chemistry and dynamics via radiation calculations. The model dynamics was weakly nudged (Jeuken et al., 1996; Jöckel et al., 2006; Lelieveld et al., 2007) towards the analysis data of the ECMWF (European Centre for Medium-Range Weather Forecasts) operational model (up to 100 hPa) to realistically represent the tropospheric meteorology of the selected period. This implies that the general circulation model follows the meteorology (at the synoptic scale) as assimilated by the ECMWF analysis, which takes advantage of more than 75 million observations in a 12 h period (98% of them from satellites). We refer to the ECMWF (http://www.ecmwf.int) for further information. The uncertainties connected with the weak nudging (or rather, due to the internal variability of the model which remains despite the nudging) can be estimated from the differences of meteorological parameters between the two simulations. This has already been discussed in a previous study (Pozzer et al., 2009), where differences of ∼15% were found between two different simulations for temperature and relative humidity. It must be stressed, however, that the use of monthly averages drastically decreases the uncertainties arising from the differences in the meteorology. As estimated by Pozzer et al.
(2009), the differences in the temperature and relative humidity are below 5% if monthly averages are considered. The model is hence expected to reproduce the meteorology assimilated by the ECMWF analysis, allowing a direct comparison of model results with observations. Nevertheless, for the overall representation of the real meteorology, we rely on the ECMWF data assimilation, which has been evaluated previously (see for example Bozzano et al. (2004) or Salstein et al. (2008) for comparisons with surface observations). Following Bozzano et al. (2004), for temperature, pressure and humidity, a low relative difference between ECMWF analysis data and measurements is observed, while for the wind speed a relative difference of up to 100% was determined from the comparison of model output and observations. These uncertainties do not translate directly into uncertainties in the simulated alkane mixing ratios, due to the non-linearity of the system. In addition, errors in estimating different meteorological parameters have different direct and indirect effects on the simulated chemistry of alkanes. Nevertheless, Bozzano et al. (2004) also showed that when long time averages are considered (as in this study), the differences are much lower. Therefore an upper threshold of ∼100% is estimated for the uncertainty in the simulated alkane concentrations.

Chemistry

The chemical kinetics within each grid box was calculated with the submodel MECCA (Sander et al., 2005). The set of chemical equations solved by the Kinetic PreProcessor (KPP; Damian et al., 2002; Sandu et al., 2003; Daescu et al., 2003; see http://people.cs.vt.edu/~asandu/Software/Kpp/) in this study was essentially the same as in Jöckel et al. (2006). However, the propane oxidation mechanism (which was already included in the original chemical mechanism) was slightly changed, and new reactions for the butane and pentane isomers were added. The complete list of differences from the original chemical mechanism used in Jöckel et al.
(2006) is presented in the electronic supplement (see http://www.atmos-chem-phys.net/10/4403/2010/acp-10-4403-2010-supplement.pdf). The new reactions are a reduction of the corresponding detailed Master Chemical Mechanism (MCM; Saunders et al., 2003). In order to keep the number of reactions as low as possible for 3-D global simulations, the first generation products of the reactions of butanes and pentanes with OH, NO3 and Cl were directly substituted with their final degradation products formaldehyde, acetaldehyde and acetone.

Atmos. Chem. Phys., 10, 4403-4422, 2010; www.atmos-chem-phys.net/10/4403/2010/

This substitution includes the production of corresponding amounts of a model peroxy radical (RO2), which has generic properties representing the total number of RO2 produced during the "instantaneous oxidation". With this approach we take into account the NO → NO2 conversions and the HO2 → OH interconversion. It is assumed that the reactions with OH and NO3 have the same product distribution. The Cl distribution was nudged with monthly average mixing ratios taken from Kerkweg et al. (2008a,b, and references therein). Thus, both alkanes and Cl are simulated without the need for a computationally expensive chemical mechanism. Small uncertainties in the model simulation have to be attributed to the reaction rates used in this study. Nevertheless, according to the IUPAC (International Union of Pure and Applied Chemistry) recommendations for butanes and pentanes, the uncertainties in the reaction rates are on the order of 7%. Moreover, the chemical mechanism itself does not increase the uncertainties in the simulation of C4-C5 alkanes, but only those of their products, due to the simplified degradation reactions.

Finally, the OH concentration is very important for a correct simulation of NMHC. Jöckel et al.
(2006) performed a detailed evaluation of the simulated OH abundance. In summary, OH compared very well with that of other models of similar complexity. Compared to Spivakovsky et al. (2000), the EMAC simulation of OH indicated slightly higher values in the lower troposphere and lower values in the upper troposphere. We refer to Jöckel et al. (2006) for further details.

Anthropogenic emissions

As pointed out by Jobson et al. (1994) and Poisson et al. (2000), the seasonal changes in the anthropogenic emissions of NMHC are thought to be small, due to their relatively constant release from fossil fuel combustion and leakage from oil and natural gas production (Middleton et al., 1990; Blake and Rowland, 1995; Friedrich and Obermeier, 1999). The most detailed global emission inventory available is EDGAR (Emission Database for Global Atmospheric Research; Olivier et al., 1996, 1999; van Aardenne et al., 2001), which was applied for the evaluation simulations of EMAC (Jöckel et al., 2006).

In the evaluation simulation "S1" of the model (Jöckel et al., 2006), the anthropogenic emissions were taken from the EDGAR database (version 3.2 "fast-track", van Aardenne et al., 2005) for the year 2000. In order to keep the simulations as consistent as possible with the evaluation simulation S1, the ethane and propane emissions were not changed, and annual global emissions of 9.2 and 10.5 Tg/yr, respectively, as reported by Pozzer et al. (2007), were applied. Furthermore, the total butane and pentane emissions from EDGARv2.0 were used, i.e. 14.1 Tg/yr and 12.3 Tg/yr, respectively. The simulation with these emissions for butanes and pentanes is further denoted as "E1". Based on the speciation factors described below, the total emissions are 9.9 Tg/yr for n-butane (70% of all butanes), 4.2 Tg/yr for i-butane (30% of all butanes), 4.3 Tg/yr for n-pentane (35% of all pentanes) and 8.0 Tg/yr for i-pentane (65% of all pentanes).
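The isomer totals above follow directly from the EDGARv2.0 totals and the speciation fractions. As an illustrative check of this arithmetic (a sketch, not part of the model code; the `speciate` helper is ours):

```python
# Illustrative speciation arithmetic (not model code): split the EDGARv2.0
# butane and pentane totals used in simulation E1 by the isomer fractions
# quoted in the text.
def speciate(total_tg_per_yr, fractions):
    """Split a total emission (Tg/yr) into isomer contributions."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9  # fractions must sum to 1
    return {isomer: total_tg_per_yr * f for isomer, f in fractions.items()}

butanes = speciate(14.1, {"n-butane": 0.70, "i-butane": 0.30})
pentanes = speciate(12.3, {"n-pentane": 0.35, "i-pentane": 0.65})
# butanes  -> n-butane ~9.9, i-butane ~4.2 Tg/yr
# pentanes -> n-pentane ~4.3, i-pentane ~8.0 Tg/yr
```

Rounded to one decimal, these reproduce the 9.9, 4.2, 4.3 and 8.0 Tg/yr figures quoted for simulation E1.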
It must be stressed that the EDGAR database has been criticized for inaccuracies in the C4-C5 alkane emissions. As pointed out by Jacob et al. (2002), ". . . the EDGAR inventory underestimates considerably the observed atmospheric concentration of propane and i-butane over Europe, over the United States and downwind Asia". Based on these considerations, Jacob et al. (2002) suggested a different emission inventory distribution, as described by Bey et al. (2001). From this distribution, Jacob et al. (2002) estimated total emissions of 10.15, 4.35, 3.2 and 6.0 Tg/yr of n-butane, i-butane, n-pentane and i-pentane, respectively, with the same isomer speciation factors used before. To evaluate which emission database describes butanes and pentanes most realistically, an additional simulation (denoted "E2") was performed, using the butane and pentane emission distributions suggested by Bey et al. (2001). The total butane emission used in simulation E2 is the one estimated by Jacob et al. (2002). For pentanes, in contrast, the total emission estimated by Jacob et al. (2002) significantly underestimates the observed mixing ratios of these tracers in a sensitivity simulation (not shown). Hence, the total amount of pentanes used in simulation E2 was scaled to 12.3 Tg/yr, the same total amount as in the EDGARv2.0 database. In conclusion, the total amounts emitted in simulation E2 are 10.35, 4.35, 4.3 and 8.0 Tg/yr for n-butane, i-butane, n-pentane and i-pentane, respectively. The emissions used in simulations S1, E1 and E2 are summarized in Table 1. The two emission sets, although with very similar total emissions of butanes and pentanes, present very different spatial distributions. The differences in a single grid box can be up to a factor of 4, depending on the location.

The speciation fractions used for i-butane (30%) and n-butane (70%), and for i-pentane (65%) and n-pentane (35%), are from the calculations of Saito et al. (2000) and Goldan et al. (2000), respectively. These fractions have been confirmed by McLaren et al. (1996), who showed that the ratio of n-pentane to i-pentane is 0.5 (i.e.
a fraction of ∼66% for i-pentane and ∼34% for n-pentane of the total pentanes). The long-term measurements from the NOAA flask data set also confirm these speciation factors. Measurements from the database are shown in Fig. 1, with the exception of data with very high uncertainties, i.e. observations of mixing ratios lower than 1 pmol/mol or larger than 1000 pmol/mol. As shown in Fig. 1, the fraction of i-butane in the total butanes is ∼0.33, while the fraction of i-pentane in the total pentanes is ∼0.65. These values are in close agreement with the speciation factors reported in the literature.

Biomass burning

Biomass burning is a large source of ethane and propane, and a negligible source of butane and pentane isomers (Andreae and Merlet, 2001; Guenther et al., 2000). Blake et al. (1993) extrapolated total biomass burning emissions of 1.5 Tg/yr for ethane and 0.6 Tg/yr for propane. Rudolph (1995) suggested instead 6.4 Tg/yr for ethane. The biomass burning contribution was added using the Global Fire Emissions Database (GFED version 1; Van der Werf et al., 2004) for the year 2000 (neglecting interannual variability), scaled with different emission factors (Andreae and Merlet, 2001; von Kuhlmann et al., 2003a). The total amounts calculated are 2.76 Tg/yr and 0.86 Tg/yr for ethane and propane, respectively. No biomass burning emissions were included for the C4-C5 alkanes, due to their small contribution to the global budget of these tracers.

Other sources

Etiope and Ciccioli (2009) proposed a geophysical (volcanic) source of ethane and propane. Based on observations of gas emissions from volcanoes, they estimated emissions of 2 to 4 Tg/yr for ethane and of 1 to 2.4 Tg/yr for propane. However, since the emission distribution is unknown, it is not yet feasible to include this source in the model. In addition, results from our simulations do not support a further increase in the emissions of these species (see below, Sects. 4.1-4.2).
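Tallying the surface sources quoted above for ethane and propane (an illustrative sketch; the numbers are those given in the text, and the excluded volcanic ranges appear only as comments):

```python
# Rough tally (Tg/yr) of the global sources quoted in the text for ethane and
# propane; illustrative only. The volcanic source of Etiope and Ciccioli (2009)
# is not included in the model, so it is noted here only as a commented range.
sources_tg_yr = {
    "ethane":  {"anthropogenic": 9.2,  "biomass_burning": 2.76},  # volcanic 2-4, excluded
    "propane": {"anthropogenic": 10.5, "biomass_burning": 0.86},  # volcanic 1-2.4, excluded
}
totals = {species: sum(parts.values()) for species, parts in sources_tg_yr.items()}
# totals -> ethane ~12.0 Tg/yr, propane ~11.4 Tg/yr
```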
Observations

A detailed description of the flask instrument and a full evaluation of the analytical technique was published by Pollmann et al. (2008). An intercomparison with the WMO GAW (World Meteorological Organization, Global Atmosphere Watch) station in Hohenpeissenberg, Germany, showed that the flask measurements meet the WMO data quality objectives (World Meteorological Organization, 2007). These findings were confirmed during a recent audit by the World Calibration Center for Volatile Organic Compounds (WCC-VOC, http://imk-ifu.fzk.de/wcc-voc/).

Comparison of the model results with observations

In this section only time series from a selected number of sites are presented. The complete set of figures can be found in the electronic supplement of this paper (see http://www.atmos-chem-phys.net/10/4403/2010/acp-10-4403-2010-supplement.pdf).

The seasonal cycle of NMHC exhibits a maximum corresponding to the local winter and a minimum corresponding to the local summer, confirming previous studies by Gautrois et al. (2003), Lee et al. (2006) and Swanson et al. (2003). In fact, Hagerman et al. (1997) and Sharma et al.
(2000) showed that the seasonal cycle of C2-C5 alkanes is anti-correlated with the production rate of the main atmospheric oxidant (OH; see Spivakovsky et al., 2000; Jöckel et al., 2006). The flask measurements used in this study confirm this, and the model is able to reproduce the observed seasonal signal, with high mixing ratios during winter and low mixing ratios during summer. In addition, due to the small contribution of C4-C5 alkanes to the total OH sink (less than ∼10%), both simulations E1 and E2 reproduce the OH mixing ratios simulated in the reference simulation S1, with local instantaneous differences below ∼15%. When monthly averages are considered for OH, the maximum differences between simulation S1 and simulation E1 are below ∼5%, while the differences between simulation E1 and simulation E2 are below ∼2%. For ethane and propane, the same sources (emissions), sinks (OH) and transport (thanks to the nudging) are thus applied in both simulations E1 and E2. Although the results for these two tracers are not bitwise identical in the two simulations, they show only negligible differences, which are not statistically significant. Hence, for ethane and propane, only results from simulation E1 are shown. In contrast, results from both simulations are presented for butanes and pentanes.
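The model-observation comparisons that follow are summarized with Taylor-diagram statistics. As a sketch of the three quantities involved (correlation, normalised standard deviation and centered pattern RMS difference), using synthetic seasonal-cycle data rather than the actual station series and omitting the variability weighting of Jöckel et al. (2006, Appendix D):

```python
import numpy as np

def taylor_stats(model, obs):
    """Correlation, normalised standard deviation and centered pattern RMS
    difference (normalised by the observed standard deviation): the three
    quantities visualised in a Taylor diagram."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    r = np.corrcoef(model, obs)[0, 1]
    s_m, s_o = model.std(), obs.std()
    # centered pattern RMS difference (means removed from both series)
    e = np.sqrt(np.mean(((model - model.mean()) - (obs - obs.mean())) ** 2))
    return r, s_m / s_o, e / s_o

# synthetic example: a 'model' seasonal cycle with a slightly larger
# amplitude and a small phase shift relative to the 'observations'
t = np.arange(24)  # 24 months
obs = 100 + 50 * np.cos(2 * np.pi * t / 12)
mod = 100 + 60 * np.cos(2 * np.pi * t / 12 + 0.2)
r, sigma_norm, rms_norm = taylor_stats(mod, obs)
```

By construction the three quantities satisfy Taylor's relation E'^2 = s_m^2 + s_o^2 - 2 s_m s_o R, which is what allows all three to be read off a single diagram.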
In order to quantify the differences between simulations E1 and E2 with respect to the observations, we calculated the main statistical quantities and summarize them in Taylor diagrams (Taylor, 2001). The diagrams (one for each C4-C5 species) show at a glance the location (latitude) of the stations (colour code) and the different simulations (symbol). Moreover, the correlations and biases between the simulations and observations have been weighted by the geometric mean of the model variability (standard deviation from the averaged output values) and the measurement variability (monthly standard deviation of the measurements from their average). For further details of this approach, we refer to Jöckel et al. (2006, Appendix D). This weighting preserves the relationship between the three statistical quantities visualised in the Taylor diagram. However, locations with a high variability, i.e., where absolute differences are less significant since single measurements are less representative, have less weight. Values which are more representative for the average conditions are weighted more strongly, thus suppressing specific episodes that cannot be expected to be reproduced by the model. Generally, there is a much better agreement between the model simulations and the observations in the Northern Hemisphere (NH) extratropics than in the Southern Hemisphere (SH) extratropics, and the deviation from the observations is largest in the tropics.

Ethane, C2H6

In Fig. 2 a comparison of the observations (see http://www.esrl.noaa.gov/gmd/ccgg/flask.html) and the model simulation is shown for a number of locations. Notice that the seasonal cycle is correctly reproduced, although the model simulates a too low mixing ratio of ethane during the NH winter (e.g., Alert, Canada, ALT, and Barrow, Alaska, BRW). On the other hand, the NH summer mixing ratios are reproduced correctly within the model/observation monthly variability (calculated as the monthly standard deviation of the observations). In the SH the results are more difficult to interpret. Although the southern extratropics seem to be well simulated (see CRZ, Crozet Island, France), for polar sites (for example HBA, Halley Station, Antarctica) the model tends to simulate higher mixing ratios than observed. Fig. 3 shows the latitudinal gradients and the seasonal cycle from observations and as calculated by the model; the colour code denotes the zonally averaged observed mixing ratios in pmol/mol from the NOAA/ESRL GMD dataset, while the superimposed contour lines denote the zonal averages of the model results. The model is able to reproduce the latitudinal mixing ratio changes, including the strong north-south gradient during all seasons.

Propane, C3H8

As also shown in a previous analysis (Pozzer et al., 2007), the model simulation reproduces the main features observed for propane. The amplitude and phase of the simulated seasonal cycle also agree well with this new observational data set. As shown in Fig. 4, the seasonal cycle is well reproduced at the NH background sites (ALT and BRW). Moreover, Fig. 5 shows that not only the seasonal cycle is correctly reproduced, but also the latitudinal gradient. Generally, the model simulations agree well with the observations in the NH (where most of the emissions are located). However, at some locations (for example MHD, Mace Head, Ireland, and LEF, Park Falls, USA) the model slightly overestimates the observed mixing ratios of propane. In addition, in the SH the simulated mixing ratios seem to be somewhat higher than the observations, especially during the SH winter (June, July and August) in remote regions, and during summer (January and February) in the SH extratropics. Clearly, these findings do not support a further increase of the emissions compared to the data used here.

n-butane, n-C4H10

As mentioned in Sect. 4, E1 and E2 reproduce the observed phase of the seasonal cycle of n-butane (Fig. 6 and Fig. 7). As observed by Blake et al. (2003) during the TOPSE campaign, and also shown by the model, n-butane is removed quite rapidly at the onset of summer in all regions, and it is reduced to low (single digit pmol/mol) levels by late spring, except at the highest latitudes. Examples are (Fig. 7) ALT and BRW, where the simulated mixing ratios (both in simulation E1 and E2) decrease from ∼300-400 pmol/mol in April to ∼1-2 pmol/mol in June and remain at this level during the NH summer (July and August). The ability of the model to reproduce the observed seasonal cycle is also confirmed in Fig. 6, where a high correlation is found between the simulations and the observations for stations located between 40° N and 90° N. In general, simulation E1 (based on anthropogenic emissions taken from the EDGARv2.0 database) produces higher mixing ratios at almost all locations in the NH compared to simulation E2, as shown in Fig.
6, where the normalised standard deviations of the model results from simulation E1 show values larger than 1. The opposite is the case in the SH, with lower mixing ratios in E1 than in E2. Simulation E1 seems to systematically overestimate the winter maximum in the NH (see Fig. 7, ALT and CBA, Cold Bay, USA, and many others), while simulation E2 is closer to the observed mixing ratios.

Overall, for many stations, simulation E2 better represents the observed mixing ratios than E1 (see Fig. 6). Although a reasonable agreement of simulation E2 with the observations is achieved at Midway Island (MID) and Cape Kumukahi (KUM), two typical marine boundary layer (MBL) background stations, the model underestimates the observed mixing ratios in the NH summer at these locations. This indicates that a nearby source of n-butane may be present, and hence that oceanic emissions potentially play a significant role. In the SH, both model simulations seem to underestimate n-butane mixing ratios, with an almost total depletion during SH summer at remote locations, which is not observed in the flask data. While both model setups simulate values below 1 pmol/mol (∼0.5-0.6 pmol/mol) during SH summer (December, January and February), the observations indicate ∼10 pmol/mol. This difference suggests localized n-butane emissions from the ocean. Additional high precision measurements of this tracer are needed to assess the role of the ocean in these remote areas.

i-butane, i-C4H10

A different picture arises for i-butane, for which it is difficult to clearly establish which simulation reproduces the observed mixing ratios better, due to the different performance of the model simulations at different locations. Generally (Fig.
8), the simulated mixing ratios from E1 are at the high end of the observed range for stations in the NH (normalised standard deviation systematically larger than 1), while the simulated mixing ratios from E2 are at the low end of the observed range for the same locations (normalised standard deviation systematically lower than 1). This can also be clearly seen in the time series plots in Fig. 9 (see, for example, ALT and CBA). As for n-butane, in the SH both model simulations underestimate the observed mixing ratios (see Fig. 9, HBA). Please note that these measurements are close to the NMHC instrumental detection limit, causing an increase of the analytical uncertainty in these data. Simulation E1 does not underestimate i-butane in the USA and Europe, in contrast to the results obtained by Jacob et al. (2002). On the contrary, for the USA stations (see Fig. 9, LEF) E1 shows a slight overestimation, or (see Fig. 9, UTA) a good agreement with the observations, whereas simulation E2 is too high. For Europe, both simulations E1 and E2 overestimate the observed mixing ratios (see Fig. 9, Ochsenkopf station, OXK, Germany), where the discrepancy is largest for E2. It must be stressed that both simulations predict a large variability at the Ochsenkopf station. The coarse grid resolution hence prevents us from deciding which emission database best reproduces European or USA emissions. It is actually expected that simulation E2 reproduces observations in the USA better than simulation E1, because the Bey et al.
(2001) emissions database was calculated based on USA data (see Wang et al., 1998). However, this is not always the case; in particular, at Park Falls (LEF) simulation E2 is better than simulation E1. In contrast, at Wendover (UTA) simulation E1 is better than E2. For the SH, due to the low mixing ratios of i-butane (close to the instrumental detection limit) and the high variability of the observations, it is difficult to draw a firm conclusion. However, at Halley Bay Station (HBA, Antarctica) simulation E2 reproduces the first year of observations (2005) better than E1.

n-pentane, n-C5H12

As for i-butane, also for this tracer it is difficult to establish clearly which simulation better represents the observations, as both agree well with the observed values at the remote locations in the NH. The comparison between simulation results and observations (Fig. 10) shows a poor agreement at stations located in the tropics and in the SH. In the NH, at locations north of 60° N, the centered pattern root mean square (RMS) difference is similar for both simulations, whereas at locations between 20 and 30° N simulation E1 is slightly better than simulation E2. This can also be seen (see Fig. 11) at BRW, where simulation E1 reproduces very well the observed mixing ratios, while in contrast at Storhofdi, Iceland (ICE), the results from simulation E2 are in better agreement with the measurements. The simulated mixing ratios are lower than observed throughout all seasons in the tropics and in the SH (Fig.
11, BKT, Bukit Kototabang, Indonesia, and HBA, Antarctica) in both simulations E1 and E2. However, as mentioned earlier, in SH remote regions the mixing ratios are close to the instrumental detection limits and the instrumental error is relatively large. Nevertheless, a bias between the model results and the observations is evident; the short lifetime of n-C5H12 (shorter than the interhemispheric exchange time) indicates that the emissions are generally underestimated in the SH. This is corroborated by similar results for i-pentane (see also Sect. 4.6).

i-pentane, i-C5H12

In contrast to n-pentane, north of 60° N the mixing ratios from simulation E2 present a better agreement (i.e. a lower centered pattern RMS difference) with the observations than simulation E1 (see Fig. 12). At these latitudes, the amplitude of the seasonal cycle is overestimated by at least 60% in simulation E1 (visible from the normalised standard deviations), whereas it is within 40% in simulation E2. The overestimation of the simulated mixing ratios from simulation E1 with respect to the observations in the NH remote regions can also be seen in the time series plots (see Fig. 13, ALT), where it is highest during the NH winter, with a difference of a factor of 2. On the other hand, the model (in both simulations E1 and E2) tends to underestimate the mixing ratios of i-C5H12 in the NH subtropics and in the SH (see Fig. 13, MID and KUM). This systematic underestimation of the observed mixing ratios for the SH stations is again confirmed in Fig. 13. As mentioned in Sect. 4.5, this points to a partially wrong distribution of the emissions in the model, which are located almost exclusively in the NH, notably in the industrialised regions.

Global C2-C5 alkane budgets

Following the analyses performed in Sects. 4.1-4.6, a global inventory of C2-C5 alkane emissions is shown in Table 2. Anthropogenic emissions are the most important sources in the budget of these tracers, ranging from ∼75% (for ethane) to ∼98% (for butanes and pentanes) of the total emissions. For butanes and pentanes, the dataset presented by Bey et al.
(2001) (with increased total emissions for pentanes) gives the best results with the EMAC model, and is recommended for future studies of these tracers. For ethane and propane, the model simulation with the EDGARv3.2 fast-track database gives satisfactory results. Biomass burning is the second most important source for ethane and propane, i.e. ∼22% and ∼7% of the total sources, respectively. As shown by Helmig et al. (2008), the effect of biomass burning on C3-C5 alkanes is generally sporadic. Hence, the monthly average values of the observational dataset used here generally mask the biomass burning signal that could otherwise be observed. In addition, the coarse model resolution and the low estimated value limit the possibilities to further evaluate this type of emission. These values could hence not be confirmed by our study and are reported as suggested in the literature.

Oceanic emissions play a small role in the budget for ethane and propane. The theoretical magnitude of the oceanic emission of C4-C5 alkanes is comparable to that of biomass burning, and hence too weak to be clearly distinguished in the observational dataset. Nevertheless, our analysis suggests that oceanic emissions can play a more significant role also for butanes and pentanes, at least at some coastal locations.
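The station-by-station evaluation above relies on the standard Taylor-diagram statistics: the correlation, the normalised standard deviation (values larger than 1 indicate an overestimated variability, e.g. of the seasonal-cycle amplitude), and the centered pattern RMS difference. As a hedged illustration of how these quantities relate (the function name and the toy monthly series are ours, not taken from the paper), they can be computed from paired model/observation time series as:

```python
import numpy as np

def taylor_stats(model, obs):
    """Taylor-diagram statistics for paired model/observation series.

    Returns (R, sigma_norm, crms_norm):
      R          - Pearson correlation of model vs. observations
      sigma_norm - model std. dev. normalised by the observed std. dev.
                   (>1: the model overestimates the variability)
      crms_norm  - centered pattern RMS difference, normalised by the
                   observed std. dev. (0 would be a perfect pattern match)
    """
    m = np.asarray(model, dtype=float) - np.mean(model)
    o = np.asarray(obs, dtype=float) - np.mean(obs)
    sigma_m, sigma_o = m.std(), o.std()
    R = np.sum(m * o) / (len(m) * sigma_m * sigma_o)
    # The centered RMS difference obeys E'^2 = s_m^2 + s_o^2 - 2 s_m s_o R,
    # the law-of-cosines relation underlying the Taylor diagram.
    crms = np.sqrt(sigma_m**2 + sigma_o**2 - 2.0 * sigma_m * sigma_o * R)
    return R, sigma_m / sigma_o, crms / sigma_o

# Toy monthly cycle: a "model" whose seasonal amplitude is 50% too large.
months = np.arange(12)
obs = 200.0 + 100.0 * np.cos(2 * np.pi * months / 12)
model = 200.0 + 150.0 * np.cos(2 * np.pi * months / 12)
R, sig, crms = taylor_stats(model, obs)
print(R, sig, crms)  # perfectly correlated, but normalised sigma of 1.5
```

A model that tracks the observed phase perfectly but overestimates the amplitude (as simulation E1 does for i-pentane north of 60° N) thus shows up on the Taylor diagram with R near 1 but a normalised standard deviation well above 1.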
Acetone formation

Acetone (CH3COCH3), through its photolysis, plays an important role in the upper tropospheric HOx budget (Singh et al., 1995; McKeen et al., 1997; Müller and Brasseur, 1995; Wennberg et al., 1998; Jaeglé et al., 2001), although recent studies have considerably reduced this estimate (Blitz et al., 2004). Moreover, this trace gas is essential to correctly describe the ozone enhancement in flight corridors (Brühl et al., 2000; Folkins and Chatfield, 2000). Oxidation of propane and C4-C5 isoalkanes (Singh et al., 1994) has been estimated to contribute ∼20-30% of the total sources of acetone (Jacob et al., 2002; Singh et al., 2004). It must however be stressed that there is still no agreement on the acetone budget, as recent studies have significantly modified the estimated sources and sinks. It is now widely thought that acetone has a net sink in the ocean (Singh et al., 2004; Marandino et al., 2005; Taddei et al., 2009). The lack of emissions from the ocean in the budget, however, is partially compensated by two other terms in the acetone budget:

- reduced photolysis, following the studies of Blitz et al. (2004) and Arnold et al. (2005);

- increased biogenic emissions. As pointed out by Singh et al. (2004, see also references therein), new measurements suggest higher biogenic emissions than those proposed by Jacob et al. (2002) (33 Tg/yr). Based on the modeling study of Potter et al. (2003), the biogenic emissions should be in the range of 50-170 Tg/yr.
The transport and chemical production of acetone were explicitly calculated with EMAC. Since E2 better reproduces the observations, we used the results of this simulation to quantify the acetone production. Globally, the total production of acetone from C3-C5 alkanes is 21.3 Tg/yr in E2. The propane decomposition, with a yield of 0.73, produces ∼11.2 Tg/yr of acetone, which is higher than the total production of acetone from C4-C5 isoalkane oxidation, namely 10.1 Tg/yr. In fact, i-butane oxidation produces 4.3 Tg/yr of acetone, while 5.8 Tg/yr are produced by i-pentane oxidation. This is the same for both simulations, because the total emissions are equal. Despite the fact that both simulations produce very similar amounts of acetone, the production is distributed quite differently in the two simulations. As shown in Fig. 14, simulation E1 indicates a pronounced acetone production over the Middle East and Persian Gulf, northern Europe and the western USA, compared to simulation E2. On the other hand, simulation E2 indicates a stronger production of acetone in the eastern USA, China, and in the SH. In both model simulations, CH3COCH3 is produced almost solely by the reaction of the iso-alkanes with OH; the contributions of the reactions with Cl and NO3 are negligible, being less than 0.5% of the total. Our result partially confirms the conclusion of Jacob et al.
(2002), who calculated an acetone production of 14 Tg/yr, 4.0 Tg/yr and 2.6 Tg/yr from propane, i-butane and i-pentane, respectively. The different acetone production compared to the study of Jacob et al. (2002) (present for propane and i-pentane decomposition) arises from the different emissions and/or the acetone yield. For instance, Jacob et al. (2002) used an acetone yield of 0.53 from i-pentane (from the reaction with OH). In our study an acetone yield of ∼0.90 from i-pentane was obtained. In addition, the i-pentane emissions are substantially different, being 6.0 and 8.0 Tg/yr in the study of Jacob et al. (2002) and our study, respectively. For propane, the acetone yield of Jacob et al. (2002) (0.72) is very similar to the one obtained here (0.73), but a difference in the emissions (13.5 vs. 11.7 Tg/yr) causes a slight difference in the acetone production.

Because the E2 results reproduce i-butane and i-pentane better, we use this model simulation for the comparison with the evaluation simulation S1 (see Sect. 2). The S1 analysis did not account for C4-C5 alkanes and their subsequent atmospheric reactions. This allows us to evaluate the effect of higher alkanes on acetone.

The resulting increase of the acetone mixing ratios is evident, especially in the NH. As shown in Fig. 15, the acetone mixing ratio increased at the surface by between 100 and 300 pmol/mol in NH remote areas, with the highest values reached in locations downwind of polluted regions (for example over the Pacific and Atlantic Oceans). The relative effect in polluted regions is smaller (maximum increase ∼30%) due to the strong anthropogenic emission of acetone. However, the contributions from the alkane oxidation are significant (up to 1 nmol/mol). The strongest production regions are located over polluted regions such as the eastern USA, the Mediterranean area and the China-Japan region. Here the maximum effect of C4-C5 alkanes on acetone is achieved, with an increase of ∼1 nmol/mol. The mixing ratio of acetone in the SH is practically not affected by chemical formation from iso-alkanes, with the exception of a few locations in South America, simply because they are mainly emitted in the NH. This, combined with their short lifetime (shorter than the interhemispheric exchange time), confines the iso-alkanes to decompose and produce acetone only in the NH.

To confirm the improvements in the acetone budget obtained by including the C4-C5 alkanes, the model simulation was compared with field data reported by Emmons et al. (2000). In Fig. 16, we show only campaigns performed in the NH where the differences between simulations E2 and S1 are largest. We refer to Pozzer et al. (2007) and the electronic supplement for the complete comparison (see http://www.atmos-chem-phys.net/10/4403/2010/acp-10-4403-2010-supplement.pdf). The inclusion of the C4-C5 alkane chemistry substantially increases the mixing ratios of acetone in the North Pacific region (PEM-Tropics-B and PEM-West-B). In these cases, the increase is ∼50% compared to a simulation without C4-C5 alkanes. The simulated mixing ratios thus agree much better with the measurements. Especially below 5 km altitude, the simulated vertical profiles are closer to the observations, and improved compared to simulation S1. In a polluted region (TRACE-P, Fig. 16) downwind of China, the inclusion of C4-C5 compounds results in a remarkable improvement of the acetone simulation. The underestimation of the free-troposphere mixing ratios seems to support the revision of the acetone quantum yield, as proposed by Blitz et al. (2004); Arnold et al. (2005), in fact, calculated an average increase of ∼60-80% of acetone in the upper troposphere. It must be stressed, however, that in two cases the comparison between the model results from simulation E2 and the field campaigns deteriorates compared to the evaluation simulation S1. These are presented in Fig. 16 (bottom). Both cases are located in Japan, where the model, after the inclusion of C4-C5 oxidation pathways in the chemistry scheme, simulates mixing ratios that are higher than the observations. This could be due to a too strong source of C4-C5 alkanes in the region in simulation E2, or alternatively, an overestimation of direct acetone emissions or an underestimation of its deposition.
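The acetone source terms quoted above follow from simple mass-yield bookkeeping: the precursor emission times the molar yield, scaled by the ratio of molar masses. A short sketch (the function name is ours; the emissions and yields are the values quoted in the text, and the molar masses are standard values):

```python
# Acetone production from alkane oxidation:
#   emission (Tg/yr) x molar yield x M(acetone)/M(precursor)
# Emissions and yields as quoted in the text; molar masses in g/mol.
M_ACETONE = 58.08

def acetone_production(emission_tg, molar_yield, m_precursor):
    """Tg/yr of acetone produced from a given precursor."""
    return emission_tg * molar_yield * M_ACETONE / m_precursor

propane = acetone_production(11.7, 0.73, 44.10)    # ~11.2 Tg/yr
i_pentane = acetone_production(8.0, 0.90, 72.15)   # ~5.8 Tg/yr
print(round(propane, 1), round(i_pentane, 1))
```

The same bookkeeping reproduces the sensitivity to the input numbers discussed above: Jacob et al. (2002) obtain a different propane-derived production mainly because of the emission difference (13.5 vs. 11.7 Tg/yr), and a different i-pentane-derived production because of both the yield (0.53 vs. ∼0.90) and the emissions (6.0 vs. 8.0 Tg/yr).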
Acetaldehyde formation

Acetaldehyde (CH3CHO) is also formed during the chemical degradation of C3-C5 alkanes. This tracer is a short-lived compound, with an average lifetime of several hours (Tyndall et al., 1995, 2002). It is an important precursor of PAN (peroxyacetyl nitrate), a reservoir species of NOx (see Singh et al., 1985; Moxim et al., 1996). Oxidation of alkanes is responsible for ∼15-19% of the total acetaldehyde sources (Singh et al., 2004), or ∼20-27% based on a more recent estimate (Millet et al., 2010). In this study, using the EMAC model, the calculated global production

A. Pozzer et al.: Atmospheric C2-C5 alkanes

Fig. 16. Vertical profiles of CH3COCH3 (in pmol/mol) for some selected campaigns from Emmons et al. (2000). Asterisks and boxes represent the average and the variability (with respect to space and time) of the measurements in the region, respectively. The simulated average is indicated by the solid line and the corresponding simulated variability (calculated as the standard deviation with respect to time and space) by the dashed lines. The numbers of measurements are listed near the right axes. The red lines represent simulation S1, the blue lines E2. The PEM-Tropics-B, PEM-West-B, SONEX and TRACE-P campaigns took place in March-April (1999), February-March (1994), October-November (1997) and February-April (2001), respectively.

Conclusions

We compared the EMAC model results of C2-C5 alkanes with new observational data obtained from flask measurements from the NOAA/ESRL flask sampling network. Two emission distribution databases for butanes and pentanes (and the associated isomers) were evaluated, new emissions of C2-C5 alkanes estimated, and the effect of C3-C5 alkanes on the concentrations of acetone and acetaldehyde calculated.

Overall, the model reproduces the observations of ethane and propane mixing ratios well using the EDGARv3.2 emission database (van Aardenne et al., 2005). The seasonal cycle is correctly reproduced, and the simulated mixing ratios are generally within 20% of the observations for ethane and propane. The simulation of ethane (C2H6) shows a good agreement with the observations, both with respect to the spatial and the temporal distribution, although with some underestimation in the NH during winter. In the SH a general overestimation is found, especially during the SH summer. Propane (C3H8) is reproduced well in the NH, while in the SH an overestimation occurs during the SH winter.

To compare two different emission databases, two sensitivity simulations were performed. In simulation E1 the EDGARv2 (Olivier et al., 1999) emissions for butanes and pentanes, and in simulation E2 the emission distributions suggested by Bey et al.
(2001) were used. Generally, the simulated seasonal cycles of the butanes and pentanes agree well with the observations in both simulations. However, simulation E2 reproduces both n-butane and i-pentane more realistically, while for i-butane and n-pentane it is not evident which simulation is better, one being at the higher end of the observations (E1) and the other at the lower end (E2). In conclusion, we recommend the emission database suggested by Bey et al. (2001) (with additionally increased pentane emissions) for future studies of these tracers. In addition, our analysis suggests a larger source from the ocean than what is currently assumed. A simulation with higher spatial resolution would give additional information on the global impact of biomass burning on these tracers, which, due to the small emitted amount compared to anthropogenic emissions, is difficult to analyse and quantify with this low-resolution model.

The inclusion of C4-C5 alkanes in the model improves the representation of acetone (CH3COCH3). Based on simulation E2, i-butane and i-pentane degradation produces ∼4.3 and ∼5.8 Tg/yr of acetone, respectively. At the same time, the formation of acetaldehyde was also calculated, resulting in production rates of 3.4 Tg/yr, 1.4 Tg/yr and 4.8 Tg/yr from the oxidation of n-butane, n-pentane and i-pentane, respectively. The role of NO3 and Cl radicals in the degradation of C3-C5 isoalkanes and the formation of acetone and acetaldehyde is negligible, contributing less than 1% to the total chemical production.

Fig. 1. i-butane versus butanes (upper figure) and i-pentane versus pentanes (lower figure) measurements in pmol/mol. The black line represents the 1-to-1 line, while the red line represents the linear regression of the data. In the upper left corner the regression parameters are presented. Note the logarithmic scale of the axes.

Fig. 2. Comparison of simulated and observed C2H6 mixing ratios in pmol/mol for some selected locations (ordered by latitude). The red lines and the bars represent the monthly averages and variability (calculated as the monthly standard deviations) of the measurements. The simulated monthly averages are indicated by the black lines and the corresponding simulated monthly variability (calculated as the monthly standard deviations of the simulated mixing ratios) by the dashed lines. The three letters at the center of each plot denote the station code (see http://www.esrl.noaa.gov/gmd/ccgg/flask.html). Note the different scales of the vertical axes.

Fig. 3. Seasonal cycle and latitudinal distribution of ethane (C2H6). The colour code denotes the mixing ratios in pmol/mol, calculated as a zonal average of the measurements available in the NOAA/ESRL GMD dataset. The superimposed contour lines denote the zonal averages of the model results.

Fig. 5. Seasonal cycle and latitudinal distribution of propane (C3H8). The colour code denotes the mixing ratios in pmol/mol, calculated as a zonal average of the measurements available in the NOAA/ESRL GMD dataset. The superimposed contour lines denote the zonal averages of the model results.

Fig. 6. Taylor diagram comparing monthly averages of n-C4H10 from the model simulations with the surface observations from the NOAA ESRL GMD network. The colour code denotes the geographic latitude. The symbols denote the model results: circles from simulation E1, squares from simulation E2.

Fig. 7. Comparison of simulated and observed n-C4H10 mixing ratios in pmol/mol for some selected locations (ordered by latitude). The red line and the bars represent the monthly average and the variability (calculated as the monthly standard deviations) of the measurements. The simulated monthly average is indicated by the solid line and the corresponding simulated monthly variability (calculated as the monthly standard deviations of the simulated mixing ratios) by the dashed line. The black and blue colours denote results from simulations E1 and E2, respectively. The three letters at the center of each plot denote the station code (see http://www.esrl.noaa.gov/gmd/ccgg/flask.html). Note the different scales of the vertical mixing ratio axes.

Table 1. Summary of the emissions used in simulations S1, E1 and E2 in Tg(species)/yr.

Table 2. Global source estimates of C2-C5 alkanes based on the present EMAC simulations (in Tg(species)/yr).
Phenomenology of the 3-3-1-1 model

We discuss a new SU(3)_C x SU(3)_L x U(1)_X x U(1)_N (3-3-1-1) gauge model that overhauls the theoretical and phenomenological aspects of the known 3-3-1 models. Additionally, we sift the outcome of the 3-3-1-1 model from precise electroweak bounds to dark matter observables. The mass spectra of the scalar and gauge sectors are diagonalized when the scale of the 3-3-1-1 breaking is compatible with that of the ordinary 3-3-1 breaking. All the interactions of the gauge bosons with the fermions and scalars are obtained. The 3-3-1-1 model provides two dark matter candidates which are stabilized by the W-parity conservation: one fermion, which may be either a Majorana or Dirac fermion, and one complex scalar. We conclude that in the fermion dark matter setup the Z_2 gauge boson resonance sets the dark matter observables, whereas in the scalar one the Higgs portal dictates them. The standard model GIM mechanism works in the model because of the W-parity conservation. Hence, the dangerous flavor changing neutral currents due to the ordinary and exotic quark mixing are suppressed, while those coming from the non-universal couplings of the Z_2 and Z_N gauge bosons are easily evaded. Indeed, the K^0-\bar{K}^0 and B^0_s-\bar{B}^0_s mixings limit m_{Z_{2,N}}>2.037 TeV and m_{Z_{2,N}}>2.291 TeV, respectively, while the LEPII searches provide a quite close bound m_{Z_{2,N}}>2.737 TeV. The violation of the CKM unitarity due to the loop effects of the Z_2 and Z_N gauge bosons is negligible. [Full abstract is given in the text.]

I. INTRODUCTION

The standard model [1] has been extremely successful. However, it describes only about 5% of the mass-energy density of our universe. There remain around 25% dark matter and 70% dark energy, which are referred to as the physics beyond the standard model.
In addition, the standard model cannot explain the nonzero small masses and mixing of the neutrinos, the matter-antimatter asymmetry of the universe, and the inflationary expansion of the early universe. On the theoretical side, the standard model cannot show how the Higgs mass is stabilized against radiative corrections, what makes the electric charges exist in discrete amounts, and why there are only three generations of fermions observed in nature.

Among the standard model's extensions addressing these issues, the recently-proposed SU(3)_C x SU(3)_L x U(1)_X x U(1)_N (3-3-1-1) gauge model has interesting features [2]. (i) The theory arises as a necessary consequence of the 3-3-1 models [3-5] that respect the conservation of lepton and baryon numbers. (ii) The B-L number is naturally gauged because it is a combination of the SU(3)_L and U(1)_N charges, and the resulting theory yields a unification of the electroweak and B-L interactions, apart from the strong interaction. (iii) The right-handed neutrinos emerge as fundamental fermion constituents, and consequently the small masses of the active neutrinos are generated by the type I seesaw mechanism. (iv) The W-parity, which has a form similar to the R-parity in supersymmetry, results naturally as a conserved remnant subgroup of the broken 3-3-1-1 gauge symmetry. (v) Dark matter automatically exists in the model and is stabilized by the W-parity. It is the lightest particle among the new particles that characteristically carry wrong lepton numbers and transform as odd fields under the W-parity (so-called W-particles). The dark matter candidate may be a neutral fermion (N) or a neutral complex scalar (H).

The 3-3-1-1 model includes all the good features of the 3-3-1 models. Namely, the number of fermion families is just three as a consequence of the anomaly cancelation and QCD asymptotic freedom conditions [6].
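Point (iv) says the W-parity has a form similar to the R-parity of supersymmetry, which is conventionally written P = (-1)^(3(B-L)+2s) for a field of spin s. The sketch below (our own illustration based on that analogy; the field names and B-L assignments are assumptions for illustration, not taken from the model's particle tables) shows how such a parity tags ordinary fields as even while fields with a wrong lepton number come out odd:

```python
def w_parity(b_minus_l, spin):
    """Parity (-1)^(3(B-L) + 2s), the R-parity-like form referenced in
    the text; 3(B-L) + 2s is an integer for all fields considered here."""
    exponent = round(3 * b_minus_l + 2 * spin)
    return -1 if exponent % 2 else +1

# Illustrative assignments (assumptions, for illustration only): ordinary
# fields carry the usual B-L, while the "wrong lepton number" W-fields
# are represented by a spin-1/2 singlet with B-L = 0 and a spin-0 field
# with B-L = -1.
fields = {
    "electron (s=1/2, B-L=-1)":  w_parity(-1, 0.5),
    "up quark (s=1/2, B-L=1/3)": w_parity(1 / 3, 0.5),
    "SM Higgs (s=0,  B-L=0)":    w_parity(0, 0.0),
    "N fermion (s=1/2, B-L=0)":  w_parity(0, 0.5),
    "H scalar (s=0,  B-L=-1)":   w_parity(-1, 0.0),
}
for name, p in fields.items():
    print(f"{name}: P = {p:+d}")
```

With this form, all standard model fields are parity-even, so the lightest parity-odd particle cannot decay into them, which is the stabilization mechanism the text attributes to the W-parity.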
The third quark generation transforms under SU(3)_L differently from the first two, which explains why the top quark is uncharacteristically heavy [7]. The strong CP problem is solved by the particle content itself with an appropriate Peccei-Quinn symmetry [8]. The electric charge quantization is due to a special structure of the gauge symmetry and fermion content [9]. Additionally, the model provides the mentioned dark matter candidates similarly to [10,11]. The 3-3-1-1 model can solve the potential issues of the 3-3-1 models because the unwanted interactions and vacua that lead to the dangerous tree-level flavor-changing neutral currents (FCNCs) [12] as well as the CPT violation [13] are all suppressed by the W-parity conservation [2]. In the previous work [2], the 3-3-1-1 model was proposed together with its direct consequence, the dark matter. In the current work, we deliver a detailed study of this new model. In particular, we consider the new physics consequences, besides the dark matter, that are implied by the new sectors extending those of the 3-3-1 model. These sectors include the new neutral gauge boson (C) associated with U(1)_N and the new scalar (φ) required for the complete U(1)_N breaking with the necessary mass generations. The complete U(1)_N breaking, which consequently breaks the B − L symmetry (where B − L is a residual charge related to the N charge and an SU(3)_L generator), can happen close to the 3-3-1 breaking scale of TeV order. This leads to a finite mixing and an interesting interplay between the new neutral gauge bosons, such as the Z' of the 3-3-1 model and the C of U(1)_N. Notice that our previous work considered only the special case in which the B − L breaking scale is very high, like the GUT scale [14], so that the new physics beyond the ordinary 3-3-1 symmetry decouples; its imprint at low energy was therefore neglected [2].
Indeed, since the stability of the proton is already ensured by the 3-3-1-1 gauge symmetry, there is no reason why that scale cannot be as low as the 3-3-1 scale. Similarly to the new neutral gauge bosons, there is an interesting mixing among the new neutral scalars that are used to break the above kinds of symmetry, the 3-3-1 and the B − L. It is interesting to note that the new scalars and new gauge bosons, as well as the new fermions, can give significant contributions to the production and decay of the standard model Higgs boson. They might also modify the well-measured standard model couplings, such as those of the photon, W and Z bosons with the fermions. There exist hadronic FCNCs due to the contribution of the new neutral gauge bosons. These gauge bosons can also take part in electron-positron collisions, such as at LEPII and the ILC, as well as in the dark matter observables. The presence of the new neutral gauge bosons also induces an apparent violation of the CKM unitarity. In some cases, the new scalar responsible for the U(1)_N breaking may act as an inflaton. The decays of some new particles can solve the matter-antimatter asymmetry via leptogenesis mechanisms. The scope of this work is as follows. The 3-3-1-1 model will be worked out in detail. Namely, the scalar potential and the gauge boson sector are diagonalized in the general case. All the interactions of the gauge bosons with the fermions as well as with the scalars are derived. The new physics processes through the FCNCs, the LEPII collider, the violation of the CKM unitarity, as well as the dark matter observables are analyzed. In particular, we perform a phenomenological study of the dark matter taking into account the current data as well as the new contributions of the physics at Λ ∼ ω that were omitted in [2]. The constraints on the new gauge boson and dark matter masses are also obtained. The rest of this work is organized as follows. In Sec. II, we give a review of the model. Secs.
III and IV are respectively devoted to the scalar and gauge sectors. In Sec. V we obtain all the gauge interactions of the fermions and scalars. Sec. VI is aimed at studying the new physics processes and constraints. Finally, we summarize our results and make concluding remarks in Sec. VII.

II. A REVIEW OF THE MODEL

The non-closed algebras can be deduced from the fact that, in order for B − L to be some generator of SU(3)_L, we would need a linear combination B − L = x_i T_i (i = 1, 2, ..., 8) and thus Tr(B − L) = 0, which is invalid for the lepton triplet, Tr(B − L) = −2 ≠ 0, as well as for other particle multiplets. In other words, B − L and the T_i by themselves do not form a symmetry under which our theory is manifest. Therefore, to have a closed algebra, we must introduce at least one new Abelian charge N so that B − L is a residual symmetry of the closed group SU(3)_L ⊗ U(1)_N, i.e. B − L = x_i T_i + yN, where the embedding coefficients x_i, y ≠ 0 are given below (the existence of N can also be understood by a current algebra approach for T_i and B − L, similarly to the case of the hypercharge Y when SU(2)_L is combined with U(1)_Q to form the SU(2)_L ⊗ U(1)_Y electroweak symmetry). Note that N cannot be identified with X (which defines the electric charge operator) because they generally differ for the particle multiplets (see below); thus they are independent charges. As a fact, the normal Lagrangian of the 3-3-1 models (including the gauge interactions, minimal Yukawa Lagrangian and minimal scalar potential) always preserves a U(1)_N Abelian symmetry that, along with SU(3)_L, realizes B − L as a conserved (non-commuting) residual charge; this has actually been investigated in the literature, with B = 𝓑 and L = b T_8 + 𝓛, where b is 3-3-1 model-class dependent, and N = 𝓑 − 𝓛 [2,15]. Note also that a violation of N due to some unwanted interaction would, by contrast, lead to the corresponding violation of B − L, and vice versa.
Because the T_i are gauged charges, B − L and N must be gauged charges as well (by contrast, taking T_i ∼ (B − L) − yN as global charges would be incorrect). The gauging of B − L is a consequence of its non-commutation with SU(3)_L (unlike the standard model case). The theory is only consistent if it includes U(1)_N as a gauge symmetry, which also necessarily makes the resulting theory free from all the nontrivial leptonic and baryonic anomalies [2]. Otherwise, the 3-3-1 models must contain (abnormal) interactions that explicitly violate B − L (or N). Equivalently, the 3-3-1 models can only survive if B − L is not a symmetry of such theories, actually recognized as an approximate symmetry, as explicitly shown in [16]. To conclude, assuming that the B − L charge is conserved (as respected by the experiments, the standard model, and even the typical 3-3-1 models [1,3-5]), the Abelian factor U(1)_N must be included so that the algebras close, which is needed for a self-consistent theory. Apart from the strong interaction with the SU(3)_C group, the SU(3)_L ⊗ U(1)_X ⊗ U(1)_N framework thus presents a unification of the electroweak and B − L interactions, in the same manner as the standard model electroweak theory unifies the weak and electromagnetic ones. The two Abelian factors of the 3-3-1-1 symmetry, together with the SU(3)_L group, correspondingly determine the electric charge Q and the B − L operator as residual symmetries, given by

Q = T_3 − (1/√3) T_8 + X,    B − L = −(2/√3) T_8 + N,

where T_i (i = 1, 2, ..., 8), X and N are the charges of SU(3)_L, U(1)_X and U(1)_N, respectively (the SU(3)_C charges will be denoted by t_i). Note that the above Q and B − L definitions embed the 3-3-1 model with neutral fermions [5] in the theory under consideration. However, the coefficients of T_8 might be different depending on which class of the 3-3-1 models is embedded [15].
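As a quick consistency check of the charge embedding above, one can verify numerically that the residual charges act correctly on the lepton triplet. This sketch assumes the embedding of the 3-3-1 model with neutral fermions, Q = T_3 − T_8/√3 + X and B − L = −2T_8/√3 + N, with the lepton triplet carrying X = −1/3 and N = −2/3 (quantum numbers taken as assumptions from the conventions of [2]):

```python
import numpy as np

s3 = np.sqrt(3.0)
T3 = np.diag([0.5, -0.5, 0.0])           # diagonal SU(3)_L generators on a triplet
T8 = np.diag([1.0, 1.0, -2.0]) / (2*s3)
X, N = -1/3, -2/3                        # assumed charges of the lepton triplet psi_aL

Q   = T3 - T8/s3 + X*np.eye(3)           # electric charge operator
BmL = -2*T8/s3 + N*np.eye(3)             # B-L operator

print(np.diag(Q))    # (nu, e, N) electric charges: (0, -1, 0)
print(np.diag(BmL))  # (nu, e, N) B-L charges: (-1, -1, 0), trace -2 as in the text
```

The nonzero trace Tr(B − L) = −2 on this triplet is exactly the obstruction, cited above, to realizing B − L inside SU(3)_L alone.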
The Q charge is conserved and responsible for the electromagnetic interaction, whereas B − L must be broken so that the U(1)_N gauge boson gets a large enough mass to escape detection. Indeed, the B − L symmetry is broken down to a parity (i.e., a Z_2 symmetry), which consequently makes the "wrong B − L particles" stable, providing dark matter candidates [2]. We see that this parity has its origin as a residual symmetry of the broken SU(3)_L ⊗ U(1)_N gauge symmetry, which is unlike the R-parity in supersymmetry [17]. That being said, the parity P automatically exists and, due to its nature, plays an important role in the model besides stabilizing the dark matter candidates, as shown throughout the text. The fermion content of the 3-3-1-1 model that is anomaly free is given in [2], where the quantum numbers in the parentheses are defined with respect to the gauge symmetries (SU(3)_C, SU(3)_L, U(1)_X, U(1)_N), respectively (see Table I for more detail) [2]. The new particles are generally called wrong-lepton particles (or W-particles for short), and the parity P is thus named the W-parity. To break the gauge symmetry and generate the masses for the particles in a correct way, the 3-3-1-1 model needs the scalar multiplets η, ρ, χ and φ [2], with VEVs that conserve Q and P. The VEVs of η, ρ, χ break only the 3-3-1 symmetry, which leaves B − L invariant. The φ breaks U(1)_N as well as the B − L symmetry that defines the W-parity, U(1)_{B−L} → P, in the form as given in [2]. It also provides the mass for the U(1)_N gauge boson as well as the Majorana masses for ν_aR. Note that ρ_3, η_3 and χ_{1,2} are W-particles, while the others, including φ, are not (i.e., they are ordinary particles). The electrically-neutral fields η_3 and χ_1 cannot develop a VEV due to the W-parity conservation. To keep consistency with the standard model, we suppose u, v ≪ ω, Λ.
Up to the gauge fixing and ghost terms, the Lagrangian of the 3-3-1-1 model consists of the fermion and scalar kinetic terms, the Yang-Mills terms, the Yukawa Lagrangian and the scalar potential, with the covariant derivative

D_µ = ∂_µ + i g_s t_i G_{iµ} + i g T_i A_{iµ} + i g_X X B_µ + i g_N N C_µ,

and the field strength tensors defined as usual for the respective gauge groups. The Ψ denotes fermion multiplets such as ψ_aL, Q_3L, u_aR and so on, whereas Φ stands for the scalar multiplets φ, η, ρ and χ. The coupling constants (g_s, g, g_X, g_N) and the gauge bosons (G_{iµ}, A_{iµ}, B_µ, C_µ) are coupled to the generators (t_i, T_i, X, N), respectively. It is noted that in the mass basis the W^± bosons are associated with T_{1,2}, the photon γ with Q, and the Z, Z' with generators orthogonal to Q. All these fields, including the C and the gluons G, are even under the W-parity. However, the new non-Hermitian gauge bosons, X^{0,0*} coupled to T_{4,5} and Y^± coupled to T_{6,7}, are W-particles. The scalar potential and Yukawa Lagrangian as mentioned above are obtained in [2]. Because of the 3-3-1-1 gauge symmetry, they take the standard forms, containing no lepton-number violating interactions. If such violating interactions, as well as nonzero VEVs of η_3 and χ_1, were present as in the 3-3-1 model, they would be sources of hadronic FCNCs at tree level [12]. The FCNC problem is partially solved by the 3-3-1-1 symmetry and W-parity conservation. Also, the presence of the η_3 and χ_1 VEVs would imply a mass hierarchy between the real and imaginary components of the X^0 gauge boson due to their different mixings with the neutral gauge bosons. This leads to a CPT violation that is experimentally unacceptable [13]. The CPT violation encountered in the 3-3-1 model is thus solved by the 3-3-1-1 symmetry and W-parity conservation too. The quarks carry B = 1/3, whereas the other particles have B = 0. The lepton number L and W-parity P assignments of the particles are collected in Table I:

Particle:  ν   e   u   d   G   γ   W   Z   Z'  C   η_{1,2}  ρ_{1,2}  χ_3  φ    N   U   D   X   Y   η_3  ρ_3  χ_{1,2}
L:         1   1   0   0   0   0   0   0   0   0   0        0        0    −2   0   −1  1   1   1   −1   −1   1
P:         +   +   +   +   +   +   +   +   +   +   +        +        +    +    −   −   −   −   −   −    −    −

As shown in [2], the X^0 gauge boson cannot be a dark matter.
However, the neutral fermion (a combination of the N_a fields) or the neutral complex scalar (a combination of the η^0_3 and χ^0_1 fields) can be dark matter, whichever of them is the lightest wrong-lepton particle (LWP), in agreement with [11]. The fermion masses obtained from the Yukawa Lagrangian after the gauge symmetry breaking have been presented in detail in [2]. Below, we calculate the masses and physical states of the scalar and gauge boson sectors when the Λ scale of the U(1)_N breaking is comparable to the ω scale of the 3-3-1 breaking, which was neglected in [2]. Also, all the gauge interactions of fermions and scalars as well as the constraints on the new physics are derived. We stress again that in the regime Λ ≫ ω the B − L and 3-3-1 symmetries decouple; whereas, when those scales become comparable, the new physics associated with B − L and that of the 3-3-1 model are correlated, possibly happening at the TeV scale, to be probed by the LHC or the ILC project.

III. SCALAR SECTOR

Since the W-parity is conserved, only the neutral scalar fields that are even under this parity symmetry can develop the VEVs as given in (10). We expand the fields around these VEVs as

η^0_1 = (u + S_1 + iA_1)/√2,   ρ^0_2 = (v + S_2 + iA_2)/√2,   χ^0_3 = (ω + S_3 + iA_3)/√2,   φ = (Λ + S_4 + iA_4)/√2,
η^0_3 = (S'_1 + iA'_1)/√2,     χ^0_1 = (S'_3 + iA'_3)/√2,

where in each expansion the first term is the VEV and the remaining terms are the physical fields. Note that S_{1,2,3,4} and A_{1,2,3,4} are W-even, while the primed fields, S'_{1,3} and A'_{1,3}, are W-odd. There is no mixing between the W-even and W-odd fields due to the W-parity conservation. On the other hand, the f parameter in the scalar potential can be complex (the remaining parameters such as the µ^2's and λ's are all real). However, its phase can be removed by redefining the fields η, ρ, χ appropriately. Consequently, the scalar potential conserves the CP symmetry. Assuming that the CP symmetry is also conserved by the vacuum, the VEVs and f can simultaneously be taken as real parameters in this work.
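A quick bookkeeping check of the scalar degrees of freedom, using the multiplet content η, ρ, χ, φ stated above and the physical spectrum summarized later in the text (eleven Higgs bosons plus nine Goldstone bosons):

```python
# Degree-of-freedom bookkeeping for the scalar sector (sketch):
# three complex SU(3)_L triplets (eta, rho, chi) and one complex singlet (phi)
real_dof = 3 * (3 * 2) + 2          # = 20 real scalar degrees of freedom

# physical spectrum claimed in the text: 11 Higgs bosons + 9 Goldstones
higgs = 1 + 1 + 3 + 4 + 2           # H, A, H_{1,2,3}, H4^+-, H5^+-, H' (complex)
goldstones = 9                      # eaten by W^+-, Y^+-, X^0, X^0*, Z, Z', C
print(real_dof, higgs + goldstones) # both 20: the counting is consistent
```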
There is no mixing between the scalars (CP-even) and pseudoscalars (CP-odd) due to the CP conservation. To find the mass spectra of the scalar fields, we expand all the terms of the potential up to second order in the fields. The scalar potential can then be rearranged as V = V_min + V_linear + V_mass + V_interaction, where the interactions stored in V_interaction need not be explicitly obtained. V_min contains the terms that are independent of the scalar fields, which contribute only to the vacuum energy; it does not affect the physical processes. V_linear includes all the terms that depend linearly on the scalar fields. Because of the gauge invariance, the corresponding coefficients vanish, which are also the conditions of potential minimization. The 3-3-1-1 gauge symmetry is broken in the correct way and the potential is bounded from below by imposing µ^2 < 0, µ^2_{1,2,3} < 0, λ > 0, λ_{1,2,3} > 0, and other necessary conditions on λ_{4,5,6,...,12}. In this case, the potential minimization equations give a unique, nonzero solution for the VEVs (u, v, ω, Λ). V_mass consists of all the terms in the potential that depend quadratically on the scalar fields. It can be decomposed into a charged-scalar part and neutral-scalar parts, with each neutral part corresponding to a distinct group of fields characterized by the two values of the W- and CP-parities, as mentioned before. The mass spectrum of the charged scalars is obtained from the corresponding mass matrix: from the potential minimization conditions, we extract µ^2_1, µ^2_2, µ^2_3 and substitute them into that matrix. The fields H^±_4 and H^±_5 are by themselves physical charged scalars with their respective masses. The field that is orthogonal to H_5, G^±_W = (u η^±_2 − v ρ^±_1)/√(u^2 + v^2), has a zero mass and can be identified as the Goldstone boson of the W^± gauge boson.
Similarly, the field orthogonal to H_4, G^±_Y = (v ρ^±_3 − ω χ^±_2)/√(v^2 + ω^2), is massless and can be identified as the Goldstone boson of the new Y^± gauge boson. For the neutral scalar fields, we start with the A group, applying the potential minimization conditions. We obtain one physical pseudoscalar field A with its corresponding mass; if u, v, ω > 0 we have f < 0, so that the squared mass is always positive. The field A_4 is massless and can be identified as the Goldstone boson of the new neutral gauge boson C of U(1)_N. The remaining massless fields, orthogonal to A, are the Goldstone bosons of the neutral gauge bosons Z and Z', respectively (where the Z is standard-model-like while the Z' is 3-3-1-model-like). For the A' group, using the minimization conditions, a physical W-odd pseudoscalar A' and its mass follow. Similarly, for the S' group, we obtain a physical W-odd scalar S' with its corresponding mass. The following remarks are in order:

1. The scalar S' and pseudoscalar A' have the same mass. They can be identified as the real and imaginary components of a physical neutral complex field, H' = (S' + iA')/√2.

2. The field orthogonal to H' is massless and can be identified as the Goldstone boson of the new neutral non-Hermitian gauge boson X^0.

Finally, there remains the S group of the W-even, real scalar fields. Using the potential minimization conditions, we obtain the corresponding 4 × 4 mass matrix. In [2], the physical states were derived when the B − L breaking scale is sufficiently large, e.g. like the GUT scale, so that the S_4 is completely decoupled from the remaining three scalars of the 3-3-1 model. In this work we consider the possibility that the B − L interactions happen at a TeV scale like those of the 3-3-1 model, characterized by the ω, f scales.
Therefore, let us assume that Λ is of the same order as f and ω, and that all are sufficiently large in comparison to the weak scales u, v so that the new physics is safe [2], i.e. u, v ≪ ω, Λ, −f. Notice that all the physical scalar fields found so far are new particles with masses at the ω or √|fω| scales. The mass matrix (38) will provide one small eigenvalue, the mass of the standard model Higgs boson, whereas the remaining eigenvalues will be large and identified as the masses of the new neutral scalars. To see this explicitly, it is appropriate to consider the leading-order contributions to the mass matrix (38). Imposing (39) and keeping only the terms proportional to (ω, Λ, f)^2, the matrix becomes block diagonal. The 2 × 2 matrix in the first diagonal box gives a zero eigenvalue with the corresponding eigenstate H = (u S_1 + v S_2)/√(u^2 + v^2). This state is identified as the standard model Higgs boson. The remaining eigenvalue is large and corresponds to a new, heavy neutral scalar H_1. The 2 × 2 matrix in the second diagonal box provides two heavy eigenstates H_{2,3}, with masses at the ω scale, mixed by an angle ϕ. We have adopted the notations s_x = sin x, c_x = cos x, t_x = tan x, and so forth, for any angle x such as ϕ and others throughout this text. We see that at leading order, the standard-model-like Higgs boson has a vanishing mass. Hence, when considering the next-to-leading-order contribution, its mass is generated small due to the perturbative expansion. In fact, we can write the general mass matrix M^2_S in a new basis of the states (H, H_1, H_2, H_3). Since the mass of the standard-model-like Higgs boson is much smaller than those of the new particles, the resulting mass matrix will have a seesaw-like form [18] that can transparently be diagonalized.
Indeed, putting the mass matrix (38) in the new basis (H, H_1, H_2, H_3) results in a seesaw-structured matrix with a weak-scale entry A for H, an off-diagonal row vector B of order (weak scale) × (new-physics scale), and a 3 × 3 block C with entries of order of the new-physics scales squared. Because B is much smaller than C, the standard-model-like Higgs boson obtains a mass given by the seesaw formula [18],

δm^2_H ≃ A − B C^{-1} B^T,

which is realized at the weak scale in spite of the large scales ω, Λ and f (see below). The standard-model-like Higgs boson is H up to small admixtures, and the physical heavy scalars are orthogonal to this light state, with their masses negligibly changed in comparison to the leading-order values. The mass of the standard-model-like Higgs boson can be approximated in terms of mass parameters m_0, m_1, m_2. Because the quantity f/ω is finite, the Higgs mass δm^2_H depends only on the weak scales u^2, v^2, as stated. We will evaluate the Higgs mass and assign δm^2_H = (125 GeV)^2 as measured at the LHC [19,20]. For this purpose, let us assume u = v and ω = −f, which leads to δm^2_H = λ̄ u^2, where λ̄ is a function of only the λ couplings, which can easily be arranged with the help of (52), (53) and (54) for the respective m^2_{0,1,2}. In addition, we have u^2 + v^2 = (246 GeV)^2, i.e. u = 246/√2 GeV, as given from the mass of the W boson shown below. Hence, we identify λ̄ ≃ 0.5, an expected value for the effective self-interacting scalar coupling. In summary, we have the eleven Higgs bosons (H^0, A^0, H^0_{1,2,3}, H^±_{4,5}, H'^{0,0*}) as well as the nine Goldstone bosons corresponding to the nine massive gauge bosons (W^±, Y^±, X^0, X^{0*}, Z, Z', C). When the mixing is relevant, the mixing parameters as determined by B C^{-1} have to be taken into account. However, it is also noted that even for proportionality coefficients of order unity, such as a scalar self-coupling in the large-strength regime, the modifications to the standard model Higgs couplings are around |∆κ| ≡ u/ω ∼ 0.1, which easily satisfies the κ bounds as presented in [1].
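The seesaw-like block diagonalization used above can be illustrated numerically. The sketch below uses an illustrative (not the model's actual) squared-mass matrix with a weak-scale block A, an off-diagonal block B and a heavy block C, and compares the exact lightest eigenvalue with the seesaw formula A − B C^{-1} B^T:

```python
import numpy as np

# Toy seesaw-form squared-mass matrix in a (light, heavy, heavy, heavy) basis:
# all values illustrative, with a weak scale u much below a new-physics scale w.
u, w = 0.25, 5.0
A = np.array([[0.5 * u**2]])                       # weak-scale 1x1 block
B = np.array([[0.3*u*w, 0.1*u*w, 0.2*u*w]])        # off-diagonal, order u*w
C = np.array([[4*w**2, 0.5*w**2, 0.2*w**2],        # heavy 3x3 block, order w^2
              [0.5*w**2, 6*w**2, 0.3*w**2],
              [0.2*w**2, 0.3*w**2, 9*w**2]])

M2 = np.block([[A, B], [B.T, C]])
exact_light = np.linalg.eigvalsh(M2)[0]                 # smallest eigenvalue
seesaw_light = (A - B @ np.linalg.inv(C) @ B.T)[0, 0]   # seesaw approximation

print(exact_light, seesaw_light)  # agree up to O(u^2/w^2) corrections
```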
Let us remind the reader that, apart from the H' that will be identified as a viable dark matter candidate, the remaining scalars in this model should be sufficiently heavy in order to obey the bounds coming from the muon anomalous magnetic moment [21].

IV. GAUGE SECTOR

The gauge bosons obtain masses when the scalar fields develop VEVs. Therefore, their mass Lagrangian is given by L_mass = Σ_Φ (D^µ⟨Φ⟩)†(D_µ⟨Φ⟩). Substituting the scalar multiplets η, ρ, χ and φ with their covariant derivatives, gauge charges and VEVs as given before, and defining t_X ≡ g_X/g and t_N ≡ g_N/g, the mass Lagrangian can be rewritten in terms of the charged and neutral gauge fields (the Lorentz indices have been omitted and should be understood), yielding the squared-mass matrix M^2 of the neutral gauge bosons (A_3, A_8, B, C). The non-Hermitian gauge bosons W^±, X^{0,0*} and Y^± are by themselves physical fields with the corresponding masses

m^2_W = (g^2/4)(u^2 + v^2),   m^2_X = (g^2/4)(ω^2 + u^2),   m^2_Y = (g^2/4)(ω^2 + v^2).

Because of the constraints u, v ≪ ω, we have m_W ≪ m_X ≃ m_Y. The W is identified as the standard model W boson, which implies u^2 + v^2 = (246 GeV)^2. The X and Y fields are new gauge bosons with large masses at the ω scale. The neutral gauge bosons (A_3, A_8, B, C) mix via the mass matrix M^2. It is easily checked that M^2 has a zero eigenvalue with a corresponding eigenstate that is independent of the VEVs and identified as the photon (notice that all the other eigenvalues of M^2 are nonzero). The independence of the photon field and its mass from the VEVs is a consequence of the electric charge conservation [22]. With this at hand, the electromagnetic vertices can be calculated, resulting in the form −eQ(f) f̄ γ^µ f A_µ, where the electromagnetic coupling constant is identified as e = g s_W, in which the sine of the Weinberg angle is given by s_W = √3 t_X/√(3 + 4t^2_X) [22]. The photon field can be rewritten in a form identical to the electric charge operator expression in (2) if one replaces the generators by the corresponding gauge bosons over couplings (namely, Q is replaced by A_µ/e, T_i by A_{iµ}/g, and X by B_µ/g_X).
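The relation between the Weinberg angle and the U(1)_X coupling quoted in the text, t_X = √3 s_W/√(3 − 4s_W^2), can be inverted as s_W = √3 t_X/√(3 + 4t_X^2); a quick round-trip check with the value s_W^2 = 0.231 used later:

```python
import math

def tX_from_sW(sW):
    # t_X = sqrt(3) s_W / sqrt(3 - 4 s_W^2), as given in the text
    return math.sqrt(3)*sW / math.sqrt(3 - 4*sW**2)

def sW_from_tX(tX):
    # inverted relation: s_W = sqrt(3) t_X / sqrt(3 + 4 t_X^2)
    return math.sqrt(3)*tX / math.sqrt(3 + 4*tX**2)

sW = math.sqrt(0.231)
tX = tX_from_sW(sW)
print(round(tX, 4), abs(sW_from_tX(tX) - sW) < 1e-12)
```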
Hence, A_µ can be obtained from Q without referring to M^2. The mass eigenstate A_µ depends only on A_{3µ}, A_{8µ} and B_µ, whereas the new gauge boson C_µ does not contribute, which also results from the electric charge conservation [22]. To identify the physical gauge bosons, we first rewrite the photon field in the form

A = s_W A_3 + c_W [−(t_W/√3) A_8 + √(1 − t^2_W/3) B],

with the aid of t_X = √3 s_W/√(3 − 4s^2_W). In this expression, the combination in the brackets is just the field associated with the weak hypercharge Y = −T_8/√3 + X. The standard model Z boson is the field orthogonal to A, as usual. The 3-3-1 model Z' boson, a new neutral one, is obtained to be orthogonal to the field coupled to the hypercharge Y as mentioned (thus it is orthogonal to both the A and Z bosons). Hence, we can work in a new basis of the form (A, Z, Z', C), where the photon is a physical particle and decoupled, while the other fields Z, Z' and C mix among themselves. The mass matrix M^2 can be diagonalized in several steps. In the first step, we change to the basis (A, Z, Z', C): the 11 component of M^2 becomes the zero mass of the photon, which is decoupled, while M^2_s is the 3 × 3 mass sub-matrix of Z, Z' and C. Hence, in the second step, the mass matrix M^2 (or M^2_s) can be diagonalized by using the seesaw formula [18] to separate the light state (Z) from the heavy states (Z', C). We denote the new basis as (A, Z_1, Z', C), so that A and Z_1 are physical fields and decoupled while the rest mix, where M'^2_s is a 2 × 2 mass sub-matrix of the Z', C heavy states and m^2_{Z_1} is the mass of the light state Z_1. By virtue of the seesaw approximation, the Z_1 mass and the mixing are controlled by a two-component vector E whose entries are suppressed at the leading order u, v ≪ ω, Λ. The Z_1, Z' and C fields are the standard-model-like, 3-3-1-model-like and U(1)_N-like gauge bosons, respectively. To be concrete, Z_1 is the Z shifted only by the small mixing terms E_1 and E_2 with Z' and C, respectively.
We realize that the first term in E_1 is just the Z-Z' mixing angle of the 3-3-1 model, while the second terms in the brackets are negligible since Λ ≳ ω. Therefore, the E_1 bound as well as the E_2 parameter are numerically small, provided that s^2_W ≃ 0.231, t_N ∼ 1, Λ ∼ ω and ω > 3.198 TeV as obtained from the ρ-parameter below. With such small values of the E_{1,2} mixing parameters, their corrections to the couplings of the Z boson, such as the well-measured Zf̄f ones (due to the mixing with the new Z', C gauge bosons), can be neglected [1]. [But notice that they can be changed due to the one-loop effects of Z', C as well as of the non-Hermitian X, Y gauge bosons accompanied by the corresponding new fermions, which subsequently give constraints on their masses and the g_N coupling. A detailed study of this matter is beyond the scope of this work and should be published elsewhere.] Even the modifications of the Z interactions (due to the mixings) to the new physics processes via the Z', C bosons are negligible, which will be explicitly shown when some of those processes are discussed at the end of this work. Therefore, except for an evaluation of the mentioned ρ-parameter, we will use only the leading-order terms below. In other words, the mixing of the Z with the Z', C bosons is neglected. In the final step, it is easy to diagonalize M'^2_s to obtain the remaining two physical states, denoted by Z_2 and Z_N. The mixing angle ξ and the new masses follow from the standard diagonalization of this 2 × 2 matrix. It is noteworthy that the mixing of the 3-3-1 model Z' boson and the U(1)_N C boson is finite and may be large, since ω ∼ Λ. The Z_2 and Z_N are heavy particles with masses at the ω scale. In summary, the physical fields are related to the gauge states by a unitary rotation; at the leading order {u^2, v^2}/{ω^2, Λ^2} ≪ 1, the standard model Z boson by itself is a physical field Z ≃ Z_1 that does not mix with the new neutral gauge bosons Z_2 and Z_N.
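The final 2 × 2 diagonalization of the heavy (Z', C) block can be sketched numerically; the matrix entries below are illustrative, and the rotation angle is chosen by the standard tan 2ξ prescription to zero the off-diagonal element:

```python
import numpy as np

# Toy 2x2 symmetric squared-mass matrix for a heavy (Z', C)-like sector
m2_Zp, m2_C, m2_mix = 10.0, 14.0, 3.0
M2 = np.array([[m2_Zp, m2_mix], [m2_mix, m2_C]])

# rotation angle chosen to cancel the off-diagonal entry (tan 2xi convention)
xi = 0.5 * np.arctan2(2*m2_mix, m2_Zp - m2_C)
R = np.array([[np.cos(xi), np.sin(xi)], [-np.sin(xi), np.cos(xi)]])
D = R @ M2 @ R.T
print(np.round(D, 10))   # off-diagonal entries vanish after the rotation
```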
The next-to-leading-order term (E) gives a contribution to the ρ-parameter; here, notice that m_W = c_W m_Z and m^2_{Z'} ∼ m^2_{Z'C} ∼ m^2_C. To obtain a numerical value, let us put u = v = (246/√2) GeV and ω = Λ. Hence, we get the deviation ∆ρ as a function of u/ω, with the aid of s^2_W = 0.231 and α = 1/128 [1]. From the experimental data ∆ρ < 0.0007 [1], we have u/ω < 0.0544, or ω > 3.198 TeV (given u = 246/√2 GeV as mentioned). Therefore, the value of ω is at the TeV scale, as expected.

V. INTERACTIONS

A. Fermion-gauge boson interaction

The interactions of fermions with gauge bosons are derived from the Lagrangian L_F = Σ_Ψ Ψ̄ i γ^µ D_µ Ψ, where Ψ runs over all the fermion multiplets of the model. The covariant derivative as defined in (12) can be rewritten in terms of the physical gauge bosons (note that t_X = g_X/g, t_N = g_N/g). Expanding the Lagrangian, the first term is kinematic, whereas the remaining terms give rise to the strong, electroweak and B − L interactions of the fermions. Notice that the SU(3)_C generators t_i equal 0 for leptons and λ_i/2 for quarks q, where q indicates all the quarks of the model, q = u, d, c, s, t, b, D_{1,2}, U. Hence, the interactions of gluons with fermions take the usual form (only the colored particles have strong interactions). Let us separate P = P_CC + P_NC, where the first part provides the interactions of the non-Hermitian gauge bosons W^∓, X^{0,0*} and Y^± with the fermions, while the second leads to the interactions of the neutral gauge bosons A, Z_1, Z_2 and Z_N with the fermions. Substituting the gauge states from (59) into P_CC, the raising and lowering operators T^±, U^± and V^± appear; notice that they vanish for the right-handed fermion singlets.
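The numbers above can be checked in a few lines: the ρ-parameter ratio bound u/ω < 0.0544 fixes the minimum ω, and the same inputs (α = 1/128, s_W^2 = 0.231) roughly reproduce the W mass through e = g s_W, assuming the SM-like tree-level relation m_W = g√(u^2 + v^2)/2:

```python
import math

alpha, sW2 = 1/128, 0.231            # inputs quoted in the text
u = 246.0 / math.sqrt(2)             # GeV, with u = v

# rho-parameter: u/omega < 0.0544  =>  lower bound on omega
omega_min = u / 0.0544               # GeV
print(round(omega_min))              # ~3198 GeV, i.e. omega > 3.198 TeV

# cross-check the W mass via e = g s_W (SM-like tree-level relation assumed)
g = math.sqrt(4*math.pi*alpha) / math.sqrt(sW2)
mW = g * math.sqrt(2) * u / 2        # = g * 246/2, close to the measured value
print(round(mW, 1))
```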
Therefore, the interactions of the non-Hermitian gauge bosons with fermions are obtained in terms of the currents associated with the corresponding gauge bosons. The interactions of the W boson are similar to those of the standard model, while the new interactions with the X and Y bosons are like those of the ordinary 3-3-1 model. Substituting the gauge states given by (81) into P_NC, the interactions of the neutral gauge bosons with fermions are obtained. Three remarks are in order:

1. With the help of e = g s_W, the interactions of the photon with fermions take the normal form −eQ(f) f̄ γ^µ f A_µ, where f indicates any fermion of the model.

2. The interactions of Z with fermions likewise take the normal form. For convenience, the couplings of Z with fermions are given in Table II.

3. It is noteworthy that the interactions of Z_2 with fermions are identical to those of Z_N if one makes the replacement c_ξ → −s_ξ, s_ξ → c_ξ in the Z_2 interactions, and vice versa. Thus, we need only obtain the interactions of either Z_2 or Z_N; the remainder is straightforward. The interactions of Z_2 and Z_N with fermions can respectively be rewritten in a common form like that of Z, and are listed in Tables III and IV, respectively.

B. Scalar-gauge boson interaction

The interactions of gauge bosons with scalars arise from L_S = Σ_Φ (D^µ Φ)†(D_µ Φ), where Φ runs over all the scalar multiplets of the model. From Eqs. (16) and (17), Φ possesses the common form Φ = ⟨Φ⟩ + Φ', where ⟨Φ⟩ is the VEV part and Φ' contains the physical fields. Expanding this Lagrangian, the kinematic, scalar-gauge mixing and mass terms are not relevant to this analysis; the remaining terms include all the interactions of three and four fields among the scalars and gauge bosons that we are interested in investigating.
To calculate the interactions, we need to present Φ and P_µ in terms of the physical fields. Indeed, the gauge part takes the form P_µ = P^CC_µ + P^NC_µ, where the two terms have already been obtained in (90) and (94), respectively. On the other hand, the physical scalars are related to the gauge states by (56). Let us work in a basis in which all the Goldstone bosons are gauged away. In this unitary gauge, the scalar multiplets are written with the first term identified as ⟨Φ⟩ and the second as Φ' with the physical fields explicitly displayed; the notations for the scalar multiplets and gauge bosons in this unitary gauge are conveniently retained unchanged, as should be understood. The interactions of one gauge boson with two scalars are obtained by substituting all the known multiplets, with the results collected in Table V. The interactions of one scalar with two gauge bosons are given in the subsequent tables, and the interactions of two scalars with two gauge bosons result in Tables X, XI, XII and XIII, respectively.

VI. NEW PHYSICS PROCESSES AND CONSTRAINTS

A. Dark matter: complex scalar H'

The spectrum of scalar particles in the model contains an electrically-neutral particle H' that is odd under the W-parity. Because the W-parity symmetry is exact and unbroken by the VEVs, the H' is stabilized and cannot decay if it is the lightest particle among the W-particles. In this regime we obtain the present-day relic density of the H' and derive some constraints on its mass. Such a scalar falls within the context of the so-called Higgs portal, which has been intensively exploited in the literature [23,24] due to its interaction with the standard model Higgs boson via the scalar potential. We will show that the H' can be a viable dark matter candidate that yields the right abundance (Ωh^2 = 0.11 − 0.12) while obeying the direct detection bounds [37].
In the early universe, the H was in thermal equilibrium with the standard model particles. As the universe expanded and cooled down, it reached a point where the temperature was roughly equal to the H mass, preventing the H particles from being produced by the annihilation of the standard model particles, so that only the annihilations of the H particles took place. However, as the universe kept expanding, there came a point where the H particles could no longer annihilate into the standard model particles, the so-called freeze-out. The H leftovers from the freeze-out episode then populate the universe today. In order to find the relic density of a dark matter particle accurately, one would need to solve the Boltzmann equation [25], as we will do for the fermion dark matter case. However, since the H is a scalar dark matter candidate, there are only s-wave contributions to the annihilation cross-section, and thus the abundance can be approximated as Here, the σv rel is the thermal average of the cross-section for the annihilation of two H particles into the standard model particles, multiplied by the relative velocity between the two H particles. For dark matter masses below m H /2 the Higgs portal is quite constrained, as discussed in Refs. [23,24]. For dark matter masses larger than the Higgs mass, the annihilation channel H H → HH plays a major role in determining the abundance. Therefore, we will focus on the Higgs portal below in order to estimate the abundance and derive a bound on the scalar dark matter candidate. That being said, the interaction of H with H is obtained as follows We have the scattering amplitude for H H → HH, It is also noted that there may be other contributions to λ, as mediated by the Higgs H, the new scalars and the new gauge bosons. However, such corrections are subleading under the assumption that the λ coupling is of order unity and the H is heavy enough.
Therefore, the differential cross-section in the center-of-mass frame is given by where the H has energy and momentum (E, p) and thus the H * has (E, − p). Also, the two out-going Higgs bosons possess (E, k) and (E, − k). The factor 1/2 is due to the creation of two identical particles. We have √ s = 2E. From the experimental side, the dark matter is non-relativistic (v ∼ 10 −3 c). We approximate where v is the velocity of the dark matter given in natural units, v ≪ 1. We also have The Einstein relation implies Therefore, the differential cross-section takes the form dσ/dΩ It is clear that the r.h.s. is independent of the solid angle, where dΩ = dϕ sin θ dθ. Hence, integrating over the full solid angle simply multiplies by 4π, σ = (dσ/dΩ) dΩ = 4π dσ/dΩ. Turning to the fermion candidate N, in this section we will not dwell on unnecessary details regarding the abundance and direct detection computation. We show in Fig. 1 the diagrams that contribute to the abundance and direct detection signals of the fermion candidate N. The diagram that contributes to the direct detection signal is the t-channel diagram in the right panel of Fig. 1. As explicitly shown at the end of Subsection VI E, the modifications to the couplings of the Z and Z 2,N gauge bosons with fermions due to the mixing effects (Z with Z 2,N ) are so small that they can be neglected in this analysis. Similarly, the modifications to the Z 2,N ZH couplings due to those mixings, as well as to the neutral scalar mixings (H with H 1,2,3 ), are negligible. In addition, it is well known that the interactions of Z 2 and Z N are interchangeable, differing only by the replacement (c ξ → −s ξ ; s ξ → c ξ ). Therefore, given that these massive gauge bosons (Z 2,N ) are active particles (i.e. their scales and couplings are equivalent), they play much the same role in new physical processes (some of which can also be seen in the subsequent subsections).
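The non-relativistic kinematics invoked above (√ s = 2E with E ≈ m(1 + v 2 /2) for v ≪ 1, and v ∼ 10 −3 for the local dark matter) can be checked numerically. This is a generic expansion check, independent of the model's couplings:

```python
import math

m = 1.0      # dark matter mass (arbitrary units; the expansion is mass-independent)
v = 1.0e-3   # dark matter velocity in natural units, as quoted in the text (v << 1)

E_exact = m / math.sqrt(1.0 - v * v)   # exact relativistic energy
E_approx = m * (1.0 + 0.5 * v * v)     # leading non-relativistic expansion
rel_err = abs(E_exact - E_approx) / E_exact

sqrt_s = 2.0 * E_exact                 # CM energy for two colliding H particles
print(f"relative error of E ~ m(1 + v^2/2): {rel_err:.1e}")
```

The neglected term is of order v 4 , so for v ∼ 10 −3 the approximation is accurate to about one part in 10 12 , justifying the use of √ s ≈ 2m H in the cross-section.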
Hence, for simplicity we might consider one particle (Z 2 ) to be active, dominantly setting the dark matter observables, while the other one (Z N ) almost decouples (giving negligible contributions). For this aim, we first assume Λ > ω, but not so much larger than ω that our postulate of a Λ scale comparable to ω is broken (they remain correlated). Hence, we choose Λ = 10 TeV and vary ω below this value so that 0.1 < ω/Λ < 1 (shown in detail in the cases below). Besides the ω and Λ as determined, the Z 2,N masses as well as their mixing angle (ξ) still depend on their respective gauge couplings. The g, g X were fixed via the electromagnetic coupling e and the Weinberg angle, whereas the g N is unknown. But we can demand α N ≡ g 2 N /(4π) < 1, or |g N | < 2 √ π, so that this interaction is perturbative. Without loss of generality, we set 0 < t N < 2 √ π/g = s W / √ α ≈ 5.43. When t N is large, t N ≲ 5.43, we have m Z N ≫ m Z 2 and the mixing is very small, |t 2ξ | ≪ 1, as given by (78). This is the case considered for the relic density of the fermion candidate as a function of its mass (m f ), where t N = 5.43 is taken into account. Notice that the dark matter annihilation proceeds via s-channels mediated by Z 2,N . The contribution of Z 2 is like g 2 Therefore, the Z N gives a smaller contribution, of order ω 2 /Λ 2 , which almost vanishes, whereas the relic density is sensitive to the Z 2 . To ensure that the relic density of the dark matter gets the right value, we consider the contributions of both Z 2,N . This is done by varying 0 < t N < 5.43, and correspondingly −π/2 < ξ < 0 as derived from (78). When t N ≲ 5.43, the Z 2 dominates the annihilation as given above. But when t N approaches the pole of t 2ξ as obtained from (78), the m Z N becomes comparable to m Z 2 and the Z 2 and Z N possess equivalent gauge couplings due to the large mixing.
In this case, the Z 2 and Z N bosons simultaneously give dominant contributions to the dark matter annihilation despite the fact that ω < Λ. Finally, when t N approximates zero, t N ≈ 0, the Z N boson governs the annihilation cross-section, while the contribution of Z 2 is negligible. The regime in which the Z N dominantly contributes to the dark matter annihilation is very narrow, since it is bounded by the maximal mixing value at t N ≈ 0.219 ω/Λ, which is close to zero due to ω < Λ. On the other hand, the regime in which the Z 2 dominates the dark matter annihilation covers most of the t N range. This is the reason why the Z 2 was predicted to govern the dark matter observables while the Z N is almost neglected, provided that ω < Λ. It is also clear from the above analysis that the Z 2 and Z N can mix strongly in spite of small ω/Λ, given that t N ≈ 0.219 ω/Λ. Conversely, the large regime t N ≲ 5.43 implies that those gauge bosons mix only slightly, t 2ξ ≈ −0.146 (ω 2 /Λ 2 )/t N ≪ 1, even if ω/Λ is close to one. Below, we display the detailed computations for all the cases mentioned. In case the candidate N is a Dirac fermion, it has both vector and axial-vector couplings with the neutral gauge bosons. The abundance is shown in Fig. 2. [In this figure and the following ones, ω is sometimes denoted as w instead, which should not cause confusion.] It is clear from Fig. 2 that the gauge boson Z 2 overwhelms the remaining annihilation channels, in agreement with Ref. [10], and the resonance at m Z 2 /2 is crucial in determining the abundance. Moreover, we see that the mass region 100 − 200 GeV for ω = 3 TeV, 100 − 500 GeV for ω = 5 TeV, or 100 − 1000 GeV for ω = 7 TeV provides the right abundance. Additionally, we exhibit in the left panel of Fig. 3 the region of the parameter space cos(ξ) × the neutral fermion mass that yields the right abundance, where ξ is the Z 2 -Z N mixing angle.
When this angle goes to zero, the Z 2 -quark coupling decreases, and for this reason the scattering cross-section rapidly decreases, as shown in the right panel of Fig. 3. There, and throughout this work, we let the cosine of this mixing angle float freely from zero to unity. [Correspondingly, ξ (t N ) runs from −π/2 (0) to 0 (5.43).] As for the Majorana case, the overall abundance is enhanced, and hence we find a larger region of the parameter space that yields the right abundance, as can be seen in Fig. 4. As for the direct detection signal, a Dirac fermion dark matter candidate gives rise to spin-independent (vector) and spin-dependent (axial-vector) scattering cross-sections. But, due to the A 2 enhancement that is typical of the heavy targets used in direct detection experiments, the spin-independent bounds are the most stringent ones. This can be seen in Fig. 3. On the other hand, the Majorana fermions have zero vector current. This is because the current of a fermion is equal to the current of an anti-fermion, but if one applies the Majorana condition (ψ = ψ c ) one finds that the vector current must vanish (which has also been used for the abundance computation aforementioned). Therefore, only the spin-dependent bounds apply. In Fig. 5 we show those bounds. The LUX collaboration has not reported its spin-dependent bounds yet, so the strongest constraints come from XENON100 [26]. One should conclude from Fig. 5 that the XENON100 bounds are quite loose for the Majorana fermion. The discontinuity in the plots has to do with the Z 2 resonance that pushes down the overall abundance. The right panel of Fig. 3 shows the spin-independent scattering cross-section in terms of the Dirac fermion mass for different values of the symmetry breaking scale; one can easily conclude that the current LUX bounds require ω ≳ 5 TeV. We have let the mixing angle ξ float freely in our analyses. As the mixing angle goes to zero (cos ξ → 1) the Z 2 -quark coupling decreases, as seen from Table IV.
Dijet bounds, namely M Z ∼ 1.7 TeV, have been found under the assumption that the Z boson couples similarly to the standard model Z boson and for dark matter masses smaller than 500 GeV. One might notice that the Z 2 gauge boson indeed couples similarly to the Z boson. Therefore, the bounds found in Ref. [27] apply here to some extent, since the couplings are not precisely identical. That being said, the result shown in the leftmost panel of Fig. 2 might be in tension with the existing dijet bounds. The remaining plots obey the dijet bounds, since they are obtained for Z 2 masses greater than 1.7 TeV. It is important to keep in mind that the collider bounds derived from simplified models are more comprehensive than the ones using an effective operator approach, because the production cross-sections computed with effective operators either over-estimate or under-estimate the collider bounds, as discussed in Refs. [28,29]. Concerning the monojet bounds, it has been shown that the current direct detection limits coming from LUX are typically more stringent. Therefore, we will not refer to the monojet bounds hereafter. D. FCNCs The fermions get masses from the Yukawa interactions when the scalar fields develop VEVs, as presented in [2]. Due to the W -parity conservation, the up quarks (u a ) do not mix with U and the down quarks (d a ) do not mix with D α (recall that the exotic quarks are W -odd while the ordinary quarks are W -even). The exotic quarks gain large masses at the ω scale and are decoupled, whereas the ordinary quarks mix among themselves via a mass Lagrangian of the form, where The mass matrices m u = {m u ab } and m d = {m d ab } can be diagonalized to yield the physical states and masses, where u = {u a } and d = {d a }. The CKM matrix [30] is defined as V CKM = V † uL V dL . All the mixing matrices V uL , V dL , V uR , V dR , including V CKM , are unitary.
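The closure property used above (V CKM = V † uL V dL is automatically unitary whenever V uL and V dL are) can be illustrated with a toy 2 × 2 example. The matrices below are generic unitary rotations with phases, not the model's actual mixing matrices:

```python
import cmath, math

def dagger(M):
    """Conjugate transpose of a square matrix stored as nested lists."""
    n = len(M)
    return [[M[j][i].conjugate() for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def unitary_2x2(theta, phase):
    """A generic 2x2 unitary matrix (rotation dressed with a complex phase)."""
    c, s = math.cos(theta), math.sin(theta)
    e = cmath.exp(1j * phase)
    return [[c, s * e], [-s * e.conjugate(), c]]

V_uL = unitary_2x2(0.3, 0.7)   # toy stand-ins for the left-handed mixing matrices
V_dL = unitary_2x2(1.1, -0.4)
V_ckm = matmul(dagger(V_uL), V_dL)

# Unitarity check: V_ckm V_ckm^dagger should be the identity matrix
check = matmul(V_ckm, dagger(V_ckm))
```

Since a product of unitary matrices is unitary, the physical CKM matrix inherits unitarity from the separate diagonalizations of the up and down sectors, which is exactly what fails in the 3-3-1 model with right-handed neutrinos once ordinary and exotic quarks mix.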
The GIM mechanism [31] of the standard model works in this model, which is a consequence of the W -parity conservation. Let us note that in the 3-3-1 model with right-handed neutrinos, by contrast, the ordinary quarks and exotic quarks, which have different T 3 weak isospins, do mix (which results from the unwanted nonzero VEVs of η 0 3 and χ 0 1 as well as the lepton-number violating interactions Q 3L χu aR , Q 3L ηU R , Q 3L ρD αR , Q αL χ * d aR , Q αL η * D βR , Q αL ρ * U R and their Hermitian conjugates, which directly couple ordinary quarks to exotic quarks via mass terms [35]). Hence, in that model, dangerous tree-level FCNCs of the Z boson occur due to the non-unitarity of the mixing matrices listed above (V uL , V dL , V uR , V dR ). Dangerous FCNCs even come from one-loop contributions of the W boson, due to the non-unitarity of the CKM matrix (V CKM ). Therefore, the standard model GIM mechanism does not work there. This will be analyzed in particular at the end of this subsection. In this model, tree-level FCNCs happen only with the new gauge bosons Z 2 and Z N (there is a negligible contribution coming from the Z boson due to its mixing with Z 2,N , as explicitly shown below). This is due to the non-universal property of the quark representations under SU (3) L : the third quark generation differs from the first two generations. Indeed, from (95) for the interactions of Z 2,N , the right-handed flavors (Ψ R ) are conserved since T 8 = 0, X = Q and N = B − L, which are universal for the ordinary up- and down-quarks. But the left-handed flavors (Ψ L ) are flavor-changing, due to the fact that T 8 differs for quark triplets and antitriplets [note that X and N are related to T 8 by (2); the source of the FCNCs is the T 8 only, since T 3 is also universal for the ordinary up-quarks and down-quarks, for the same reason as the flavor-conserving Z current].
The interactions that lead to flavor changing can be derived from (95) as where Ψ L denotes all the ordinary left-handed quarks. We can rewrite where u = (u, c, t), d = (d, s, b) and T u = T d = 1/(2 √ 3) diag(−1, −1, 1). Hence, the tree-level FCNCs are described by the Lagrangian, where q denotes either u or d. The FCNCs lead to neutral-meson mixings such as the K 0 −K 0 and B 0 s −B 0 s systems. These mixings are described by the effective interactions obtained from the above Lagrangian via Z 2,N exchanges as The strongest constraint comes from the K 0 −K 0 mixing [1], Assuming that the u a are flavor-diagonal, the CKM matrix is just V dL (i.e. V CKM = V dL ). Therefore, This gives constraints on the mass and coupling of the new neutral gauge bosons, namely There is another bound, coming from the B 0 s −B 0 s mixing, given by [1] In this case, the CKM factor is |(V * dL ) 32 (V dL ) 33 | ≈ 3.9 × 10 −2 [1]. Therefore, we have which implies To be concrete, suppose that Z 2 and Z N have approximately equal masses and t N = g N /g = 1, so that the B − L interaction strength is equivalent to that of the weak interaction. From (129) we get while the relation (132) yields Here, we have used g 2 = 4πα/s 2 W with s 2 W = 0.231 and α = 1/128. This is in good agreement with the recent bound [32]. Notice, though, that in the dark matter subsections we used m Z N ≫ m Z 2 , which translates to m Z 2 ≳ 1 TeV. Finally, let us give some remarks on the FCNCs due to the mixing effect of the neutral gauge bosons. In this case, the Lagrangian (124) is changed by the replacement where Correspondingly, the effective interactions for the FCNCs given by (127) are also changed by the replacement Let us compare the new contribution with the existing one, It is sufficient to consider two cases, Λ ≫ ω and Λ ∼ ω. For the first case, the R reduces to that of the 3-3-1 model with right-handed neutrinos, which is very small.
Above, we have used the approximated m 2 Z 1 , v 2 w = u 2 + v 2 = (246 GeV) 2 , and ω > 3.198 TeV as derived from the ρ parameter. For the second case, the contributions of Z 2 and Z N are equivalent. So, the first remark is that R starts at the (u/ω) 2 order and must be small too. Indeed, let us show this explicitly, provided that t N = 1, ξ = −π/4 (s 2ξ is finite due to the large mixing of Z 2 and Z N , so such a value can be chosen), and Λ = ω = 3.198 TeV. Above, we have also used m Z 2 m Z N = 2g 2 c W t N ωΛ/(3 − 4s 2 W ), which can be derived from (79) and (80), the expression (78) for the ξ mixing angle, and the m 2 Z 1 as approximated before. In summary, the mixing effects with the Z boson do not affect the FCNCs. For the sake of completeness, let us point out the dangerous FCNCs of the Z boson due to the mixing of the ordinary quarks and exotic quarks that happens in the 3-3-1 model with right-handed neutrinos, which should be suppressed. The mixing matrices are redefined so that the 4 × 4 mass matrix of the up-quarks (u a , U ) and the 5 × 5 mass matrix of the down-quarks (d a , D α ) are diagonalized, respectively [35]. The Lagrangian that describes the FCNCs of the Z boson is given by where I = 4 for V u and the plus sign is applied, but I = 4, 5 for V d and the minus sign is taken (note, however, that the right chiral currents of Z µ do not change flavor since T 3 = 0 for any right-handed fermion). All these lead to the effective interactions for the hadronic mixings due to the exchange of the Z boson, where we have used m 2 Z = g 2 (u 2 + v 2 )/(4c 2 W ) and notice that v 2 w ≡ u 2 + v 2 = (246 GeV) 2 . In the 3-3-1 model with right-handed neutrinos, the Lagrangian for the FCNCs of the Z boson is easily obtained in a similar way. Hence, the effective interactions for the hadronic mixings due to the Z contribution are given by where we have adopted m 2 Z ≈ g 2 c 2 W ω 2 /(3 − 4s 2 W ) [22].
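As a sanity check on the numerical inputs used throughout these bounds (s 2 W = 0.231, α = 1/128, v w = 246 GeV, all quoted in the text), one can verify the coupling g = e/s W , the Z mass formula m 2 Z = g 2 v 2 w /(4c 2 W ), and the perturbativity ceiling t N < 2 √ π/g ≈ 5.43 quoted earlier:

```python
import math

alpha = 1.0 / 128.0   # fine-structure constant at the relevant scale (from the text)
s2W = 0.231           # sin^2(theta_W) (from the text)
vw = 246.0            # electroweak VEV in GeV, v_w^2 = u^2 + v^2

e = math.sqrt(4.0 * math.pi * alpha)   # electromagnetic coupling
g = e / math.sqrt(s2W)                 # from e = g s_W
g2 = g * g                             # equals 4*pi*alpha/s_W^2 ~ 0.425
cW = math.sqrt(1.0 - s2W)

mZ = g * vw / (2.0 * cW)               # from m_Z^2 = g^2 v_w^2 / (4 c_W^2), in GeV
tN_max = 2.0 * math.sqrt(math.pi) / g  # perturbativity ceiling on t_N = g_N / g

print(f"g^2 = {g2:.3f}, m_Z = {mZ:.1f} GeV, t_N < {tN_max:.2f}")
```

The recovered m Z ≈ 91 GeV and t N ceiling of 5.44 agree with the values used in the text, confirming the internal consistency of the quoted inputs.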
Since the weak scale v w in (142) is too low in comparison with the 3-3-1 scale ω in (143), it is clear that if the mixing of the ordinary quarks and exotic quarks is similar in size to that of the ordinary quarks, (V * qL ) Ii (V qL ) Ij ∼ (V * qL ) 3i (V qL ) 3j , the FCNCs due to the Z boson (142) are too large (∼ ω 2 /v 2 w ∼ 10 2 times the one coming from Z or the bound for the K 0 −K 0 mixing), such that the theory is invalid. Hence, the FCNCs due to the ordinary and exotic quark mixing are more dangerous than those coming from the non-universal interactions of the Z boson. To avoid the large FCNCs, one must assume |(V * qL ) Ii (V qL ) Ij | ≪ |(V * qL ) 3i (V qL ) 3j | (and the FCNCs of Z are dominated by the ordinary quark mixing, [V † qL V qL ] ij ≈ (V * qL ) 3i (V qL ) 3j ). Indeed, the K 0 −K 0 mixing constrains (142) to be, This mixing of the exotic and ordinary quarks is much smaller than the smallest mixing element (about 5 × 10 −3 ) among the ordinary quark flavors in the CKM matrix [1]. Therefore, the 3-3-1-1 gauge symmetry, as well as the resulting W -parity, provides a more natural framework that not only solves those problems (including the large FCNCs, the unitarity of the CKM matrix, and the lepton and baryon number symmetries and the CPT theorem that have been strictly confirmed by the experiments [1]), but also gives the neutrinos small masses and provides the dark matter candidates. E. LEPII searches for Z 2 and Z N LEPII searches for new neutral gauge bosons via the channel e + e − → ff , where f is any ordinary fermion [33]. In this model, the new physics effect in such a process is due to the dominant contribution of the Z 2 and Z N gauge bosons, via s-channel exchanges for f ≠ e. The effective interaction for these contributions can be derived with the help of (99) as where the chiral couplings are given by Let us study the particular process with f = µ, e + e − → µ + µ − .
The chiral couplings can be obtained from Tables III and IV as The effective interaction can be rewritten as where the last three terms differ from the first one only in their chiral structures. Notice that LEPII searched for such chiral interactions and gives several constraints on the respective couplings, which are commonly of the order of a few TeV [33]. Therefore, let us choose a typical value of 6 TeV; it is noted that this value is also a bound derived for the case of a U (1) B−L gauge boson [34]. Similarly to the previous subsection, we suppose that Z 2 and Z N have approximately equal masses (m Z 2 ≈ m Z N ) and t N = 1. The above constraint then leads to This bound is in good agreement with the limit obtained in the previous subsection via the FCNCs and with the ones given in the literature [32]. As we previously emphasized, in the dark matter subsections we adopted m Z N ≫ m Z 2 , and therefore in this regime a bound m Z 2 ∼ TeV arises. Finally, let us discuss the contribution of the mixing effects of the neutral gauge bosons to the above process. When the mixing is turned on, the interaction Lagrangian of the neutral gauge bosons takes the form, where i = 1, 2, N and the (chiral) couplings of the neutral gauge bosons are correspondingly changed as follows We see that all the second terms are the E 1,2 corrections to the existing couplings due to the mixing, which can be neglected because the E 1,2 values given in (76) are so small. Indeed, for the process concerned, e + e − → µ + µ − , let us consider the ratios of the corrections to the respective existing couplings for f = e a (the charged leptons). For the Z 1 couplings, we have which are easily obtained with the help of (76), s 2 W = 0.231 and Λ ∼ ω > 3.198 TeV. Similarly, for the Z 2 couplings, we have where we note that the mixing angle of the Z , C gauge bosons is bounded by −π/4 < ξ < 0 if t N > 0, or 0 < ξ < π/4 if t N < 0. The corrections to the Z N couplings are very small too.
Therefore, the mixing effects of the neutral gauge bosons affect neither the standard model e + e − → µ + µ − process nor our results given above with the Z 2,N exchanges in the absence of the mixing. F. Radiative β decays involving Z 2,N and the violation of CKM unitarity The CKM unitarity implies where u (= u, c, t) and d (= d, s, b) are defined as before. The standard model calculations have provided a very good agreement with the above relations [1]. However, if there is a deviation, it is a sign of the violation of CKM unitarity. Taking the first row, the experimental bound yields [1] This violation can constrain the new neutral Z 2,N gauge bosons as a result of their loop effects that contribute to ∆ CKM . Indeed, the ∆ CKM deviation is derived from the one-loop radiative corrections via the new Z 2,N and W bosons to the quark β decay amplitudes from which the V ud , V us and V ub elements are extracted, including muon decay, which normalizes the quark β decay amplitudes. These have previously been studied in other theories [36], with the respective diagrams for quark and muon β decays displayed therein. Generalizing the results in [36], the deviation is obtained as where the lepton and quark couplings are given in the physical basis of the left chiral fields. Notice that the mixing effect of the neutral gauge bosons (Z with Z 2,N ) does not affect the processes considered, as explicitly pointed out in the previous subsection. Therefore, we have We consider two typical cases, Λ ≫ ω and Λ ∼ ω. In the first case, the Z N does not contribute, i.e. the second term above vanishes, and ξ = 0. Therefore, this is the case of the 3-3-1 model with right-handed neutrinos. We have Using the bound (156), the first case easily evades the experimental limit. For the second case, using the bound (156) we have m Z 2 ≈ m Z N > 600 GeV. The model in this case easily evades the experimental bound too.
To conclude, the new neutral gauge bosons Z 2,N give a negligible contribution to the violation of CKM unitarity. VII. DISCUSSION AND CONCLUSION In the standard model, the fermions come in generations, each subsequent generation being a replication of the former. The gauge anomaly is cancelled out within every generation. Thus, on this theoretical ground the number of generations can be left arbitrary. This may be due to the fact that the SU (2) L anomaly trivially vanishes for any chiral fermion representation. If the SU (2) L is minimally extended to SU (3) L with a corresponding enlargement of the lepton and quark representations (the doublets enlarged to triplets/antitriplets while the singlets are retained, though in some cases the lepton singlets are put into the corresponding triplets/antitriplets as well), the new SU (3) L anomaly generally does not vanish for each nontrivial representation. Subsequently, this constrains the generation number to be an integer multiple of three (the fundamental color number) in order to cancel that anomaly over the total fermion content, which provides a partial solution to the generation-number puzzle. Besides this feature, some very fundamental aspects of the standard model can also be understood in the presence of the SU (3) L , which causes the electric charge quantization [9], the Peccei-Quinn-like symmetry for the strong CP problem [8], and the oddly-heavy top quark [7]. On the other hand, the B − L number and Q electric charge operators do not commute and do not close algebraically with the SU (3) L charges. Firstly, the breakdown of the 3-3-1-1 gauge symmetry produces a conserved Z 2 subgroup (as a remnant), named the W -parity, similar to the R-parity in supersymmetry, which plays an important role and yields insights in the present model. The lightest wrong-lepton particle is stabilized due to the W -parity conservation, which is responsible for dark matter.
Two dark matter candidates have been identified: a neutral complex scalar H and a neutral fermion N of either Dirac or Majorana nature. The GIM mechanism for the standard model currents works in this model due to the W -parity conservation, while the new FCNCs are strictly suppressed. In fact, the experimental bounds can be easily evaded with the expected masses for the new neutral gauge bosons Z 2,N at a few TeV. Because of the W -parity conservation, the new neutral non-Hermitian gauge boson X does not mix with the neutral Z 1,2,N gauge bosons. Hence, there is no mass splitting between the real and imaginary components of the X, which ensures the conservation of CPT symmetry. Those problems of the 3-3-1 model with right-handed neutrinos have thus been solved. We have shown that the B − L interactions can coexist with the new 3-3-1 interactions at the TeV scale. To realize this, the scales of the 3-3-1-1 and 3-3-1 breakings are taken to lie at the same energy scale, Λ ∼ ω. In this regime, the scalar potential has been diagonalized. The number of Goldstone bosons matches the number of massive gauge bosons. There are eleven physical scalar fields, one of which is identified as the standard model Higgs boson. The new physical scalar fields H 0 1,2,3 , A 0 , H ± 4,5 , and H 0,0 * are heavy, with masses at the ω, Λ or |ωf | scales. There is a finite mixing between the Higgs scalars (the S 4 for the U (1) N breaking and the S 3 for the 3-3-1 breaking) that results in the two physical fields H 2,3 . The standard model Higgs boson is light, with a mass at the weak scale due to the seesaw-type mechanism associated with the little hierarchy u, v ≪ ω, Λ, −f . The Higgs mass gets the right value of 125 GeV provided that the effective coupling λ ≈ 0.5, with the assumption u = v, ω = −f . All the physical scalar fields are W -even except for the H and H 4 , which are W -odd, known as the W -particles.
In the proposed regime Λ ∼ ω, the gauge sector has been diagonalized, recovering the standard model gauge bosons W ± , A and Z. Moreover, we have six new gauge bosons: X 0,0 * , Y ± , Z 2,N . Although the Z boson mixes with the new neutral gauge bosons, it remains light due to a seesaw-type mechanism in the gauge sector. In order to reproduce the standard model W boson mass, we have constrained u 2 + v 2 = (246 GeV) 2 . From the experimental bound on the ρ parameter, we get ω > 3.198 TeV provided that Λ ≫ ω and u v. There is a finite mixing between the U (1) N gauge boson and the Z of the 3-3-1 model that, upon diagonalization, produces two physical states: the 3-3-1-like gauge boson Z 2 and the U (1) N -like gauge boson Z N . All the gauge bosons are W -even except for the X, Y , which are the W -particles. The new neutral complex gauge boson X cannot be dark matter because it entirely annihilates into the standard model particles before the thermal equilibrium process ends [2]. All the interactions of the gauge bosons with the fermions and scalars have been obtained. Every interaction is found to conserve the W -parity. The corresponding standard model interactions are recovered. The new interactions, as well as their implications for new physics processes, are rich and deserve further study. In this work, some of them have been used for analyzing the new FCNCs, the LEPII collider constraints, the violation of CKM unitarity, and the fermionic dark matter observables. Because of the seesaw-type mixing suppression between the light and heavy states, namely between the Z and the new Z 2,N gauge bosons as well as between the H and the new H 1,2,3 Higgs bosons, the mixing effects are radically small. The new physics effects via those mixings in the gauge sector have explicitly been shown to be safely negligible.
For the scalar sector, the new physics effects via those mixings are also negligible, and are disregarded in most cases with small scalar self-couplings (see the text for more detail). Only if the scalar self-couplings are stronger may they give considerable contributions, which still lie within the current bounds. The accuracy of the standard model Higgs mechanism, if that is the case, could give some constraints on those mixing effects. Supposing that the scalar dark matter H dominantly annihilates into the standard model Higgs boson H via the Higgs portal, the relic density of H has been calculated. It gets the right value, compatible with the experimental data, if m H = 1.328 TeV, assuming that the H * H → HH coupling equals unity, λ = 1. As for the neutral fermion candidate as a Dirac particle, we conclude that an ω symmetry-breaking scale greater than ∼ 5 TeV is required in order to obey the LUX2013 bounds. When the neutral fermion is instead a Majorana particle, the direct detection bounds are quite loose, and a larger region of the parameter space has been found that yields the right abundance. The fermion dark matter observables are governed by the Z 2 gauge boson provided that Λ > ω. Only if g N ≪ g with Λ ∼ ω, or if Λ is somewhat smaller than ω with g N ∼ g, does the Z N contribution become comparable to that of the Z 2 boson. We have shown that the CKM matrix is unitary and that the ordinary GIM mechanism of the standard model works in this model, due to the W -parity conservation. We have also discussed that this mechanism does not work in the 3-3-1 model with right-handed neutrinos, and in that case the tree-level FCNCs due to the ordinary and exotic quark mixing are more dangerous than those coming from the non-universal couplings of the Z 2,N gauge bosons. All the FCNCs associated with the Z boson due to the above fermion mixing are prevented because of the W -parity conservation. The new FCNCs coupled to the Z 2,N are highly suppressed too.
In fact, FCNCs due to the Z 2,N can be present, but they are easily evaded by new physics in the TeV range. Using the current bound on the K 0 −K 0 system, we have shown m Z 2,N > 2.037 TeV under the assumption that the Z 2 and Z N have approximately equal masses and t N = 1 (the B − L interaction strength equals that of the weak interaction). For the B 0 s −B 0 s system, the bound is m Z 2,N > 2.291 TeV under the same assumptions. For hierarchical masses of Z 2 and Z N , the smaller mass takes a smaller bound, e.g. m Z 2 > g 2 × 2 TeV for the K 0 −K 0 system, where g 2 is the reduced gauge coupling, which has a natural value smaller than unity. The new neutral currents in the model are now within reach of experimental detection. We have calculated the contributions of Z 2 and Z N , which dominate the new physics corrections, to the process e + e − → µ + µ − at the LEPII collider. From the experimental bounds, we have shown that m Z 2,N > 2.737 TeV, provided that these gauge bosons have approximately equal masses and t N = 1. Similarly, for hierarchical Z 2 and Z N masses, the smaller mass will possess a smaller bound than the above result. Moreover, we have also indicated that the violation of CKM unitarity due to the one-loop effects of the new neutral gauge bosons Z 2,N is negligible if the Z 2,N masses lie in the TeV range as expected. Finally, the 3-3-1-1 model, which unifies the electroweak and B − L interactions along with the strong interaction, is a self-consistent extension of the standard model that solves the potential problems of the 3-3-1 model in consistency with the B, L, and CPT symmetries, as well as curing the large FCNCs. The new physics of the 3-3-1-1 model is interesting, with outcomes in the TeV region. For all the reasons aforementioned, we believe that the 3-3-1-1 model is a compelling theory which calls for much experimental attention.
Use of an electronic administrative database to identify older community dwelling adults at high-risk for hospitalization or emergency department visits: The elders risk assessment index Background The prevention of recurrent hospitalizations in the frail elderly requires the implementation of high-intensity interventions such as case management. In order to be practically and financially sustainable, these programs require a method of identifying those patients most at risk for hospitalization, and therefore most likely to benefit from an intervention. The goal of this study is to demonstrate the use of an electronic medical record to create an administrative index which is able to risk-stratify this heterogeneous population. Methods We conducted a retrospective cohort study at a single tertiary care facility in Rochester, Minnesota. Patients included all 12,650 community-dwelling adults age 60 and older assigned to a primary care internal medicine provider on January 1, 2005. Patient risk factors over the previous two years, including demographic characteristics, comorbid diseases, and hospitalizations, were evaluated for significance in a logistic regression model. The primary outcome was the total number of emergency room visits and hospitalizations in the subsequent two years. Risk factors were assigned a score based on their regression coefficient estimate and a total risk score created. This score was evaluated for sensitivity and specificity. Results The final model had an AUC of 0.678 for the primary outcome. Patients in the highest 10% of the risk group had a relative risk of 9.5 for either hospitalization or emergency room visits, and a relative risk of 13.3 for hospitalization in the subsequent two year period. Conclusions It is possible to create a screening tool which identifies an elderly population at high risk for hospital and emergency room admission using clinical and administrative data readily available within an electronic medical record. 
Background The aging of the United States population represents a demographic imperative for innovation in the provision of healthcare to older Americans. Those aged 65 and older represented 12.4% of the total U.S. population in 2005, but this number is projected to double in the next twenty-five years [1]. Accordingly, the population of older adults at high risk for hospitalization, nursing home placement or functional decline is also increasing, creating an enormous financial and capacity burden on the health care system. Multiple interventions, such as case management and transition management programs, have targeted the prevention of recurrent hospitalizations among community dwelling older adults, and are under great scrutiny in the arenas of research and policy development [2][3][4][5]. The complexity and cost of many of these interventions, combined with the demographic challenges, require that the investment of these resources be made in the patient population that is most likely to benefit. In order to identify those patients, health care providers require some form of risk assessment to focus their efforts, recognizing that the elderly population is very heterogeneous in function and disease burden. These challenges have led to the need for a predictive instrument that is accurate, easy to calculate, inexpensive, and does not require patient completion. Our group hypothesized that we could identify older adults at high risk for hospitalization or emergency department visits using only information readily available from a centralized electronic health record, without taking time away from staff and patients. This model is becoming increasingly feasible as national policy continues to strongly encourage the creation and use of electronic medical records. Hospitalization and emergency department encounters were chosen as independent outcomes, as both events are associated with premature institutionalization and high resource utilization [6,7].
The primary aim of this study was to demonstrate that readily accessible information available in a provider's electronic medical record could be used to identify a population of community dwelling older adults at high-risk for hospitalization or emergency room utilization. Methods The study was a retrospective cohort of all patients age 60 and greater who were impaneled on January 1, 2005, in the Division of Primary Care Internal Medicine (PCIM) at Mayo Clinic in Rochester, MN. This division of the Department of Medicine serves local residents, Mayo Clinic employees, and their dependents. Rochester is a city of approximately 100,000 and is surrounded by small rural communities. There are only two other major alternative primary care providers for older adults in this community; the Department of Family Medicine at Mayo Clinic and the Olmsted Medical Group. Study Subjects All adults age 60 and older, assigned to a PCIM primary care provider on January 1, 2005, were included in the analysis. All subjects were community dwelling or lived in an assisted living facility within Olmsted County, MN. Patients who were residing within a skilled nursing facility on January 1, 2005, were excluded from the study. Patients who did not give consent for their medical chart review were also excluded from analysis, in accordance with Minnesota state law. Data Collection Information was electronically abstracted from the electronic medical record and administrative databases within Mayo Clinic's health records system. Mayo Clinic maintains all electronic medical record information within one system, including hospital, emergency room, nursing home, and clinic-visit information. No individual chart abstraction was performed. The demographic predictor variables collected included: date of birth, gender, marital status, race, and the number of hospital admission days in the prior two years (January 1, 2003 to December 31, 2004). 
Hospital days were stratified into two risk groups: one to five and six or more. Age was stratified into categories of 60 to 69, 70 to 79, 80 to 89, and greater than 90. Comorbid medical illnesses included the presence or history of diabetes mellitus, coronary artery disease (CAD), congestive heart failure (CHF), stroke, chronic obstructive pulmonary disease (COPD), history of cancer, history of hip fracture, and dementia. History of cancer excluded non-melanomatous skin cancers. Diagnoses were identified using ICD-9 billing codes entered by physicians during both inpatient and outpatient encounters. These comorbidities were chosen via consensus discussion based on their known risk for recurrent hospitalizations and greater complexity of care. The primary outcome variable was the total number of hospitalizations or emergency room visits measured from the date of January 1, 2005, through December 31, 2006. Emergency room visits resulting in a direct hospital admission were recorded as a single outcome event. The total number of hospital admissions and admission days during the same two-year period were collected as secondary outcome measures. Data Analysis Predictor variables for the primary outcome of the total number of hospitalizations or emergency room visits were screened for further analysis using univariate regression models and 1-way ANOVA. The variables with a p-value greater than 0.05 were discarded. A final multivariable regression model using stepwise elimination was then constructed with only those significant predictors identified by the univariate stage. The category of "unknown" race was a significant univariate predictor, but was not included in the final model as the category was not large enough (5%) to statistically influence the final multivariable model and it proved difficult to act upon prospectively in identifying new, at-risk patients. 
A total risk score for each individual was calculated based on the significant risk factors using regression estimates multiplied by ten in order to generate manageable scores. The scores were divided into quartiles and the top quartile further divided into the top 10% and then the next 15% (75% to 90%). This split was chosen in an attempt to create categories in the highest risk groups with small enough populations to enable focused future interventions. To estimate the precision of the score assignment, bootstrapping was used to draw 450 random samples from the original 12,650 patients with replacement. This method provides robust estimates of the standard error of a population parameter such as a regression coefficient [8]. For every sample, a regression model was run using the same predictive variables. The estimate of each predictor in the validation model was the mean of the regression coefficients of each predictor from 450 runs. The standard error was obtained from the standard error of the mean estimates. 1-way ANOVA for mean, Wilcoxon rank sum tests for median and Pearson chi-square test for frequency were used to compare variables across the 5 score categories. Hospitalizations and emergency visits within 2 years were compared across score categories using logistic regression analysis to provide odds ratios. Receiver operating characteristic (ROC) curves were developed to show sensitivity and specificity of hospitalization or emergency visits in 2 years stratified by the risk score. All information was directly entered via electronic abstraction into a Microsoft Excel (version 2003, Microsoft, Redmond, WA) spreadsheet for data entry, data retrieval, and analysis. The investigators analyzed the final information using SAS 9.1 (Cary, NC). The Mayo Clinic Institutional Review Board (IRB) reviewed and approved the protocol. All aspects of the research on this project were conducted in accordance with the principles of the Declaration of Helsinki.
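The analysis steps described above — integer points from regression coefficients multiplied by ten, quartile-based risk categories with the top quartile split at the 90th percentile, bootstrap resampling for precision, and rank-based ROC analysis — can be sketched in Python. The coefficient values and predictor names below are hypothetical placeholders for illustration, not the published ERA weights; the category cut-points default to those reported in the Results.

```python
import random

# Hypothetical coefficients for illustration only -- NOT the published ERA weights.
coefficients = {"age_80_89": 0.71, "chf": 0.52, "dementia": 0.48, "hosp_days_6plus": 1.10}

# Points per risk factor: regression coefficient x 10, rounded to an integer.
points = {name: round(10 * beta) for name, beta in coefficients.items()}

def total_score(patient):
    """Sum the points for every risk factor the patient has."""
    return sum(points[name] for name, present in patient.items() if present)

def risk_category(score, q1=-1, q2=3, q3=8, p90=15):
    """Quartile categories with the top quartile split at the 90th percentile.
    Default cut-points are the ones reported in the Results section."""
    if score <= q1:
        return "Q1"
    if score <= q2:
        return "Q2"
    if score <= q3:
        return "Q3"
    if score <= p90:
        return "75-90%"
    return "top 10%"

def bootstrap_se(values, statistic, n_samples=450, seed=0):
    """Bootstrap standard error of `statistic`: draw resamples with replacement
    (450 samples, as in the paper) and take the SD of the resampled estimates.
    The paper refits the full regression on each resample; any statistic works here."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_samples):
        sample = [rng.choice(values) for _ in values]
        estimates.append(statistic(sample))
    mean = sum(estimates) / len(estimates)
    var = sum((e - mean) ** 2 for e in estimates) / (len(estimates) - 1)
    return var ** 0.5

def roc_auc(scores_pos, scores_neg):
    """Rank-based AUC: probability that a patient with the outcome scores
    higher than one without it, counting ties as one half."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))
```

For example, a hypothetical patient aged 80–89 with CHF scores round(7.1) + round(5.2) = 7 + 5 = 12 points, which falls in the 75–90% band under the published cut-points; `roc_auc` returning 1.0 would mean the score separates the two outcome groups perfectly.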
The investigators also adhered to Minnesota state statutes regarding medical record use and privacy. Results The only variables excluded after the univariate stage were gender and history of hip fracture, which were not found to be statistically significant, and race, which was excluded for the reasons described above. The final estimates from the multivariable model are described in Table 1, along with their associated scores. The estimate, standard deviation and score for the validation model are also presented in Table 1. There were a total of 13,457 patients in the age range 60 and over in the PCIM panel on January 1, 2005. Ninety-four percent of patients provided consent for medical record review for a total study population of 12,650 patients. The scores based on the instrument ranged from -7 to 32. The patients were placed in five groups based on total score, with the lowest quartile scores ranging from -7 to -1, the 2nd quartile 0 to 3, 3rd quartile 4 to 8, 75% to 90% group 9 to 15, and the top 10% had scores of 16 and greater. The average age in the top 10% by score was 80.7 years, compared to 65.0 years in the bottom quartile (P < 0.001). All comorbid conditions had significantly higher proportions in the highest 10%, compared to the lowest quartile as described in Table 2. The primary outcome was the number of emergency room and hospital visits in the subsequent two years, January 1, 2005, to December 31, 2006. The number of total visits/admissions increased consistently with an increasing risk score as described in Table 3. This was significant with a P-value < 0.01. The relative risk of the primary outcome of total ER visits and hospital stays also increased significantly between each of the risk categories. The receiver operating characteristic (ROC) curves associated with the main combined outcome, and the ER and hospital visits individually, are described in Figure 1. The area under the curve (AUC) for the primary outcome of combined hospitalizations and emergency room visits was 0.678.
For hospital visits only, the AUC was 0.705. For emergency room visits only, the AUC was 0.640. The results of secondary outcomes evaluated using the risk score included two-year (2005 and 2006) total number of hospital admissions and number of days hospitalized. Each of these outcomes increased significantly with increasing risk score as described in Table 4. Discussion In this study, a prognostic index was developed and validated, based on a scoring system that derived information from community-dwelling elderly patients' electronic medical records. The Elders Risk Assessment (ERA) index accurately identified older adults at high risk of emergency department encounters and hospitalization; two outcomes that can lead to significant morbidity, functional decline, and institutionalization [7]. Previous authors have developed screening instruments aimed at identifying high-risk populations of older adults. The ERA was developed to address and overcome a number of barriers that are typically associated with these instruments. One of the primary barriers is the requirement for patient self-reporting of information. The best-validated self-administered prognostic index is the Probability of Repeated Admissions (PRA) [9,10]. This eight-item tool has been widely used by managed care organizations to prospectively identify enrollees at risk for repeated hospital admissions and health care resource utilization. This instrument has been shown to have good discriminating ability for one-year risk of hospitalization, with reported areas under the ROC curves ranging from 0.620-0.696, depending on the validation population and setting [11][12][13]. Similarly, the Community Assessment Risk Screen (CARS) index identifies those older adults at increased risk of hospitalization or emergency department visits with self-reported information about medical conditions, medication use, and health service utilization.
Utilizing this risk classification, Shelton and colleagues found the area under the ROC curve to be 0.74 for hospitalization or emergency department visits [6]. Mazzaglia and colleagues utilized self-reported data (functional status, sensory impairment, unintentional weight loss, and use of home care services) from community dwelling older adults in Florence, Italy, to create a risk score that was also found to be predictive of hospitalization (in the subsequent 15 years) with AUC of 0.68 [14]. Unfortunately, low response rates [13], recall bias [15], literacy requirements [16], time, and cost [17] have proven to be significant barriers to widespread use of self-reported instruments. Response rates for the PRA have ranged from 50-60% in the managed care setting [13,17]. A major advantage of the ERA index is that it uses administrative data, which is unaffected by the aforementioned limitations which are intrinsic to self-reported data. The ERA also performed favorably when compared with the administrative or "proxy" PRA. The administrative PRA model derives information from a health plan's multiple databases including a pharmacy database, chronic disease registries, billing data, and utilization data registries to calculate a risk score which performs similarly to the original self-reported PRA (AUC 0.694 vs. 0.696) in predicting hospitalization [11]. While undoubtedly useful in the managed care setting, this proxy model is challenging to adopt in traditional fee-for-service medical practices, like ours, which serve patients who utilize a multitude of pharmacies and supplemental insurance carriers, thus limiting access to those database sources. Combined hospitalization and emergency room visits were chosen as the primary outcome because they are early precursors to the functional decline and institutionalization which it is our goal to prevent.
They also often result from acute changes in chronic conditions such as COPD, where early intervention by an outpatient provider may prevent recurrent admissions. In an effort to improve the primary care physician's awareness of these risks, we have subsequently developed the index for real-time use among our primary care providers in our electronic environment with a software system called Generic Disease Management Systems (GDMS), a web-based application developed by Mayo Clinic and the Netherlands-based Noaber Foundation. The ERA score is now calculated in real time based on the scoring system described in this article and displayed on the GDMS printout that we include in the rooming packet for all our patient visits. This allows our providers to easily identify at-risk elders and to pay special attention to the patient if clinically needed. This ability to measure ERA scores in real time is now being further developed into a registry which allows us to identify these high-risk patients as a unique population, similar to the population-based systems used to manage diabetics. Currently, this real-time registry is allowing the implementation and measurement of interventions such as transitions programs, discussions regarding goals of care, appointment access prioritization, and accelerated triage aimed at preventing recurrent admissions and secondary functional decline. This study is not without methodological limitations. First, the patient information obtained from administrative databases was recorded prior to the outcome of interest for purposes other than investigation of our hypothesis. Coding data were utilized to identify whether individuals had been diagnosed with any of the six predictor comorbid conditions. Coding data may under-estimate secondary diagnoses; however, other authors have found that administrative data such as ICD-9 codes typically correlate well with patient chart diagnoses [18].
Second, this study was a retrospective cohort analysis. This creates the possibility of under-reported risk factors, as well as outcomes. Although most patients receive both their acute and chronic care from Mayo Clinic, as their primary provider, it is certainly possible that they could have hospitalizations or chronic diagnoses which are identified elsewhere and of which our electronic medical record is therefore unaware. Although the outcome data require further prospective validation, the retrospective collection of risk factor variables is an essential component of the model design and one of the factors this hypothesis was designed to examine. Third, we did not include functional-status measures in our initial predictive modeling. Functional-status measures are known to be independently associated with hospitalization and emergency department visits; however, functional-status data are dependent on patient-provided history or clinician-administered performance testing and are neither routinely collected, nor easily extractable from administrative data [19][20][21]. Additionally, self-reported information such as functional status and medications fluctuates throughout an individual's life, further challenging the accurate collection and maintenance of these data. Despite the fact that functional status was not utilized in our final model, the ERA index compared favorably with the aforementioned indices in which it was included. Conclusions Despite these limitations, results from this study suggest that the ERA index is an effective, inexpensive, electronic risk identification model able to identify populations of older, community-dwelling adults who are at increased risk for hospitalization and emergency department encounters.
Administrative and clinical data modeling may afford busy primary care practices or payor organizations the opportunity to identify high-risk populations so that they may effectively allocate resources and evidence-based preventive interventions to those individuals with the greatest need and greatest potential to benefit.
Rehabilitation after cervical and lumbar spine surgery The total number of spine surgeries is increasing, with a variable percentage of patients remaining symptomatic and functionally impaired after surgery. Rehabilitation has been widely recommended, although its effects remain unclear due to lack of research on this matter. The aim of this comprehensive review is to summarize the most recent evidence regarding postoperative rehabilitation after spine surgery and make recommendations. The effectiveness of cervical spine surgery on the outcomes is moderate to good, so most physiatrists and surgeons agree that patients benefit from a structured postoperative rehabilitation protocol; although the best timing to start rehabilitation is still unknown, most programs start 4–6 weeks after surgery. Lumbar disc surgery has shown success rates between 78% and 95% after 2 years of follow-up. Postoperative rehabilitation is widely recommended, although its absolute indication has not yet been proven. Patients should be educated to start their own postoperative rehabilitation immediately after surgery until they enroll in a rehabilitation program, usually 4–6 weeks post-intervention. The rate of lumbar interbody fusion surgery is increasing, particularly in patients over 60 years, although studies report that 25–45% of patients remain symptomatic. Although no standardized rehabilitation program has been defined, patients benefit from cognitive-behavioral physical therapy starting immediately after surgery with psychological intervention, patient education and gradual mobilization. Formal spine rehabilitation should begin at 2–3 months postoperatively. Rehabilitation has benefits on the recovery of patients after spine surgery, but further investigation is needed to achieve a standardized rehabilitation approach. Introduction Overall, the indications for operating on spinal disorders are increasing, as reflected by the total number of spine surgeries.
Spine surgery usually involves decompression and/or fusion of one or more spine levels (1,2,3). However, regardless of the pathology and surgical technique used, there is a variable percentage of patients who remain symptomatic and with functional disability (2,4,5). Following spine surgery, postoperative rehabilitation is considered important and is largely recommended by surgeons to help patients improve their functional status and achieve their recovery goals, aiming to extend activities of daily living, from personal care to housekeeping tasks in the short term, as well as returning to work, sports and leisure activities in the long term (1,3,6,7,8). Rehabilitation in the context of spine surgery may be proposed to improve physical and psychosocial functioning, prevent and treat complications, accelerate recovery, alleviate residual symptoms and treat accompanying diseases (3,5,7,9,10,11). These programs can include physiotherapy (exercise therapy with stretching and strength training), cognitive-behavioral therapy and multidisciplinary protocols, which may include motor control modification and resumption of activities of daily living, work and physical activity and enhancement of pain-coping strategies. Rehabilitation programs may consist of supervised individual sessions, group training, home exercises, education or a combination of these (10,11,12). The mechanisms explaining the positive effects of exercise therapy remain largely unclear, but local biomechanical changes and more central mechanisms, like distorted body schema or altered cortical representation of the back, as well as modification of motor control patterns, may play a role (4,12,13,14,15,16). Furthermore, the therapist-patient relationship, changes in fear-avoidance beliefs, catastrophizing and self-efficacy regarding pain control should also be considered as modifying factors (4,5,7,12,15).
The focus of the available research is mainly on technique validation and surgery results, while the postoperative management of this population has received relatively little attention (4,6,9,15,17,18). Furthermore, there are no clear and standardized recommendations regarding postoperative rehabilitation treatment after spine surgery, for instance, whether all patients have an indication for further postoperative rehabilitation treatment and whether its type and duration have an impact on the clinical and functional outcome after spine surgery (4,7,10,19,20). The aim of this review is to summarize the most recent evidence regarding postoperative treatment after cervical and lumbar spine surgery and make recommendations regarding postoperative mobilization and rehabilitation. Methods A comprehensive literature review was performed on the most recent evidence regarding rehabilitation modalities used after cervical and lumbar spine surgery. The search was performed on PubMed and EMBASE databases for articles published from October 2013 to December 2022. The search strategy was conducted using Boolean operators (AND, OR) to combine the following keywords: 'lumbar spine surgery, cervical spine surgery, postoperative rehabilitation, physical therapy, postsurgery, pain management, physiotherapy'. One author (T. B.) screened the titles and abstracts of all database records and retrieved the full text of relevant studies for further analysis according to the inclusion and exclusion criteria. Any doubts were discussed with another author (A. R.). Both authors (T. B. and A. R.) screened the full text for inclusion in this review. Only articles written in English were included. All articles including any type of postoperative intervention were included for review (bracing period, massage therapy, muscular exercises and cognitive and coping therapies). Small case series (<15 patients) and case reports were excluded.
The identification process of the articles collected is depicted in Fig. 1. Cervical spine surgery Indication for surgical treatment is increasing in patients with neck pain and radiculopathy not responding to conservative measures (7,11,21). Anterior cervical discectomy and fusion is currently the most common surgical procedure on the cervical spine, followed by cervical disc arthroplasty and posterior and anterior cervical foraminotomy (11,16,22). The effectiveness of cervical spine surgery on radicular pain is moderate to good, but the effects on neck function are less clear (4,11). During the immediate postoperative period, there could be reduced neck motion (due to fusion), pain and postoperative immobilization, which can lead to decreased neck muscle function and, therefore, to the persistence of symptoms in many patients after surgery (7,16,22,23). The atrophy and deconditioning of the neck muscle function may not spontaneously resolve and can persist over time (16,23,24). Postoperative rehabilitation is largely recommended by the majority of surgeons although the scientific basis for this recommendation has not yet been well-established (7,21), as there are few studies assessing the best practices of postoperative rehabilitation (7,11,21). The most recent studies are listed in Table 1. The best timing to start postoperative rehabilitation is unknown; however, most protocols start at 4-6 weeks after surgery (4,7,22). In the meantime, the use of bracing can be advised depending on the surgical technique given that the use of a rigid cervical collar for 3 weeks can decrease pain and disability after non-plated discectomy and fusion (11,25). Despite the most recent surgical techniques and instrumentation, the lack of decisive large case series on bracing leads most surgeons to still prescribe bracing after cervical spine surgery (26). 
Despite the insufficient data, the most recent evidence argues that patients benefit from an active structured postoperative rehabilitation approach featuring endurance exercises, isometric strengthening, stretching and neck and shoulder-specific functioning and aerobic activity, in line with patient tolerance, while placing a lower emphasis on passive modalities. These active treatment interventions are targeted at restoring function, and neck-specific exercises are usually well-tolerated (4,7,11,27,28). However, the implementation of a structured program of therapeutic exercises combined with a cognitive-behavioral protocol has shown slightly better results in neck disability, pain intensity, catastrophizing or satisfaction, as compared with a standard treatment after surgery (4,11,28). More investigation is needed with a focus on improving patient education approaches based on patient fears and expectations, starting immediately after surgery, in order to improve patient anxiety management, patient empowerment, gratitude and satisfaction (4,11,22,28). A recent pilot study shows that early home exercises may be safe and can improve short-term outcomes, although long-term outcomes have not changed between groups (22). Further investigation is needed to confirm the effects and safety of the intervention (11,22). Some patients can experience dysphagia after cervical spine surgery, mainly with anterior cervical approaches and, even though the majority will experience improvement of symptoms over 2 months, in some patients, significant pharyngeal impairments persist, and for these, specific rehabilitation is needed (29,30,31,32). Considering the high degree of limitation and deterioration in quality of life that dysphagia can cause, studies are needed to explore the best measures and the most effective rehabilitation approach to managing dysphagia. Hermansen et al. 
advocate that high initial neck-related pain intensity, nonsmoking status at the time of surgery and male sex are preoperative predictors of good surgical outcomes after anterior cervical discectomy and fusion (33). Therefore, these factors should be considered when choosing the best rehabilitation program, as they can be related to greater improvement in pain, disability and psychological impairment. Additional investigation is needed to set predictive outcome criteria to select those that may benefit the most from rehabilitation after cervical spine surgery. Rehabilitation management after cervical spine surgery still lacks adequately powered randomized controlled trials addressing the effects of rehabilitation on muscular strength, neck-specific functioning, pain, physical activity, psychological impairment, dysphagia and quality of life (4,11,21); this may explain the absence of statistically significant results in the existing studies. More studies are also needed to assess whether different pathologies and surgical techniques require distinct rehabilitation approaches, as in the case of lumbar spine surgery. Lumbar disc surgery (discectomy/microdiscectomy) Lumbar disc surgery has shown success rates between 78% and 95% after the first and second postoperative years (6,12,20,34,35). Therefore, there is still a percentage of patients who do not have the desired outcome, maintaining symptoms such as pain or inability to return to work and perform tasks (6,8,18,34,36,37). Patients with lumbar disc herniation are usually between 30 and 50 years old and are productive members of society, making surgery results particularly important in order to allow patients to return to their previous activity (19,20,34,38). Discectomy is the most common surgical spine procedure performed in Europe for patients with lumbar disc herniation who experience low back pain, most often accompanied by leg pain (8,12,18).
Although its absolute indication has not been proven, exercise or physical therapy protocols are widely recommended in the postoperative period of lumbar disc surgery, aiming to accelerate recovery and improve long-term performance as well as general health benefits (8,18,38,39). So far, it has not been possible to establish guidelines for rehabilitation treatment in the postoperative period of lumbar disc surgery due to the great variability of results among the various studies performed (Table 2) and also because they have been classified with a low degree of evidence by the most recent systematic reviews (12,20,36). It is considered that trunk muscle atrophy, muscle weakness, and impaired neuromuscular activation and coordination due to disc disease and surgery may all contribute to pain recurrence and impaired physical function after lumbar disc surgery (8,12,18). The majority of studies advocate that starting a rehabilitation program 4-6 weeks after surgery contributes to an improvement in disability, pain and physical function when compared to no treatment, and that high-intensity exercise protocols lead to faster improvement of these factors when compared with low-intensity exercise programs (6,12,18,40), an effect related to improved function of the pelvic, hip and trunk muscles (18,40). Comprehensive physiotherapy interventions are effective in improving muscle function, pain and disability after lumbar disc surgery. These multimodal interventions consist of a wide variety of active rehabilitation techniques, including a combination of education on the performance of daily functional tasks, functional weight-bearing, cardiovascular endurance exercises, lower limb strengthening and lumbar stabilization exercises, including stretching and strengthening (8,18,20,36,41).
Also, when comparing supervised exercise programs with home exercises, neither was superior to the other, and both proved to be effective in reducing pain and improving functional capacity when compared to no treatment (12,34,38). So far, rehabilitation programs based on a biopsychosocial intervention model have shown no difference compared to standard rehabilitation programs (12). Still, the choice of a rehabilitation protocol considering the preferences and expectations of the patient can have a synergistic effect on recovery, mainly concerning compliance enhancement (3,20,34,39). There is great variability related to the time when a rehabilitation program should start, and there is no consensus on the duration or even the need to restrict activity after surgery (12,18,19,34,36,40). Studies have shown that exercise programs starting immediately after surgery are not accompanied by higher rates of recurrence and are well tolerated, but they are neither significantly superior to those initiated 4-6 weeks after surgery (6,12,18,36,40) nor did they prove to be more cost-effective (6). Although there is still a lack of consensus, it is believed that the use of orthotic treatment after surgery does not bring benefits and may even delay rehabilitation (35). Therefore, patients who underwent lumbar discectomy should start their postoperative rehabilitation immediately after surgery, with patient education for good posture and gradual mobilization, and at 4-6 weeks after surgery start the therapeutic exercise program (19,34). Studies are needed to establish criteria for selecting patients who need rehabilitation, essentially those who maintain symptoms for long periods of time after surgery, while patients with complete resolution of symptoms in the postoperative period may not need rehabilitation (6,12,18,39,41).
Some research has already been carried out in this regard, with studies acknowledging that the duration of preoperative leg pain and working ability, presence of comorbidities and some demographic factors (age and sex) are significantly associated with the duration of postoperative sick leave and the return-to-work period (18,19,42). The inclusion of all operated patients in the studies without the application of selection criteria may constitute a way of diluting the results and a source of bias (19). The implementation of a rehabilitation program after lumbar discectomy appears to improve functional status in the short term; nevertheless, there is no consensus on long-term effects (12,34). Despite this, there are studies showing maintenance of results after 2 years (39), which can last for more than a decade (8,40). With the increasing use of minimally invasive techniques and their proven effectiveness, pilot studies have shown that the implementation of earlier rehabilitation programs after microdiscectomy has the potential to effectively improve outcomes (pain, disability and quality of life) and is also associated with better return-to-work outcomes compared to more invasive techniques (42). Still, more studies with larger study groups and cost-effectiveness analyses are needed (43,44).
Lumbar interbody fusion surgery
Lumbar interbody fusion is commonly performed in spondylolisthesis, degenerative disc disease and spinal stenosis and is generally accompanied by decompressive surgery (2,15,45,46). Lately, the rate of lumbar interbody fusion is increasing, particularly in patients over 60 years of age (2,5,10,45). Studies report that 25-45% of patients remain symptomatic, with functional disability, and maintain a poor quality of life (9,45,46), which could contribute to high reoperation rates (9,46).
There is great variability in the recommendations for postoperative patient management after lumbar fusion surgeries, and no standardized rehabilitation program has been defined for patients after lumbar fusion surgery (2,13,46), much due to a lack of studies with moderate- to high-quality evidence but also because of the sparse research on this subject (Table 3) (2,45,46). Two of the trials summarized in Table 3 illustrate the available evidence. The first determined the efficacy of a targeted cognitive-behavioral physical therapy (CBPT) program for improving outcomes in patients following laminectomy with or without arthrodesis for a lumbar degenerative condition; the intervention started 6 weeks after surgery (delivered mostly by telephone in both groups) and was assessed with the Tampa scale for kinesiophobia, the pain self-efficacy questionnaire, the brief pain inventory, ODI, general health and performance-based tests. Screening patients for fear of movement and using a targeted CBPT program resulted in significant improvement in pain, disability, general health and physical function after spine surgery for degenerative conditions. In the second trial, the intervention group was trained with systematic lower-limb rehabilitation procedures over 3 months, while the control group had no intervention; outcomes included lower-extremity muscle force, VAS, the lumbar Japanese orthopedic association score, ODI, incidence of deep venous thrombosis and patient satisfaction. Lower-extremity rehabilitation exercise effectively promoted patient health recovery after surgery, improved pain relief and functional outcomes, and decreased deep venous thrombosis events of the lower limbs. The use and effectiveness of bracing after lumbar spine fusion remain controversial (47,48,49,50). Some surgeons prescribe mostly rigid lumbosacral orthoses based on their personal experience and beliefs that they can improve lumbar stabilization and pain in the first 3 months postoperatively (13,47,48,50).
In this regard, recent studies claim that postoperative bracing is not useful and has no effect on postoperative outcomes compared with no bracing (48,49,50,51,52), because solid internal immobilization can be ensured with modern instrumentation; thereby, patients can begin gradual mobilization as symptoms allow (2,47,48). There is no consensus about the best time to start rehabilitation, nor even about its intensity or duration (10,13). Early exercise programs starting at 6 weeks after surgery did not prove to be superior to starting at 12 weeks after surgery (2). Some advocate that starting rehabilitation at 2-3 months postoperatively aligns better with bony tissue healing and yields better outcomes in pain and disability than early rehabilitation (2). Patients who undergo lumbar spinal fusion show a more severe muscular deterioration with muscle denervation because of a background of long-standing and disruptive back pain, muscle damage related to the surgical approach (especially in posterior lumbar interbody fusion (PLIF)) and usually a longer period of postoperative disability than patients who undergo simple lumbar discectomy or decompression (2,13,14,15). Therefore, the implementation of a program of soft-tissue mobilization, neural mobilization, endurance exercises, back stretching exercises, neutral spine control exercises, lumbar muscle strengthening exercises and balancing of the core musculature seems to be more effective than no rehabilitation in significantly improving back muscle strength, pain and disability (2,13,53). Patients with these degenerative pathologies develop high levels of functional limitation, fear of movement and pain catastrophizing (5,15,45).
Accordingly, studies have shown the effectiveness and importance of using exercise rehabilitation protocols combined with cognitive-behavioral therapy and patient goal attainment-based therapy, showing significant improvements in disability, back and leg pain, fear avoidance behavior, mental health and quality of life (1,2,5,13,15,45,54). Although the scientific evidence is insufficient to recommend specific rehabilitation protocols, cognitive-behavioral physical therapy programs should start immediately after surgery, with psychological intervention with personal goal attainment, patient education and gradual mobilization. Formal spine exercise rehabilitation should then begin at 2-3 months postoperatively, with soft-tissue mobilization, neural mobilization, joint mobilization and, with more evidence support, back endurance, stretching, motor control and strengthening exercises (2,54). These rehabilitation programs seem to be well tolerated and safe for patients (13). The duration of rehabilitation is also a point of controversy because it is difficult to generalize a specific period given the great variability among patients in age, other orthopedic problems, psychological barriers (greater fear avoidance and/or depression), disability and preoperative deconditioning, and ability to exercise safely and independently; this means these patients may need closer rehabilitation monitoring and a more personalized rehabilitation program adjusted to their evolution, without a specific duration (2,5,55). Further investigation is necessary to better study the influence that each of these variables has on patient recovery and to evaluate their long-term effects on outcomes (5).
The more consensual recommendations (Table 4) can be summarized as follows:
• Cervical spine surgery: no intervention proved to be superior to another; cervico-scapulothoracic and upper-extremity strengthening, endurance and stretching exercises, as well as cognitive-behavioral therapy, starting 4-6 weeks after surgery.
• Lumbar disc surgery: orthosis not recommended; comprehensive physiotherapy interventions (patient education, endurance, stretching, motor control and strengthening exercises), starting immediately after surgery with patient education, with the exercise rehabilitation program starting 4-6 weeks after surgery.
• Lumbar interbody fusion surgery: orthosis not recommended; cognitive-behavioral physical therapy starting immediately after surgery, with psychological intervention, patient education and gradual mobilization; most recommended: psychosocial intervention, patient education, endurance, stretching, motor control and strengthening exercises, with the exercise rehabilitation program starting 2-3 months postoperatively; insufficient evidence: soft-tissue mobilization, neural mobilization and joint mobilization.
Conclusions
• Although rehabilitation is largely recommended after both cervical and lumbar spine surgery (Table 4 summarizes the more consensual information), there is still a lack of powerful evidence, with most of the research focusing on improving and validating surgical techniques.
• A better understanding of the mechanisms by which the disease, the surgery and the therapeutic exercises affect the spine is needed in order to develop an effective rehabilitation program.
• Lumbar discectomy is the most performed procedure and is also the one with the most research regarding postoperative rehabilitation. Despite that, there is not yet any strong evidence from which to build guidelines. Therefore, more research is needed, specifically regarding rehabilitation after lumbar fusion surgery.
• It is consensual that, for all spine surgeries, more investigation is needed to guarantee durability of the effect and to evaluate cost-effectiveness, intervention quality, safety and tolerance, and predictors of outcomes for postoperative rehabilitation.
• We understand that rehabilitation benefits patient recovery after spine surgery, although further investigation, with larger prospective multicentric studies, is needed to achieve a standardized postoperative rehabilitation approach.
ICMJE conflict of interest statement
We declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.
Funding statement
This research did not receive any specific grant from any funding agency in the public, commercial or not-for-profit sector.
Peak Age of Information Distribution in Tandem Queue Systems
Age of Information is a critical metric for several Internet of Things (IoT) applications, where sensors keep track of the environment by sending updates that need to be as fresh as possible. Knowing the full distribution of the Peak Age of Information (PAoI) allows system designers to tune the protocols and dimension the network to provide reliability using the tail of the distribution as well as the average. In this letter we consider the most common relaying scenario in satellite communications, which consists of two subsequent links, model it as a tandem queue with two consecutive M/M/1 systems, and derive the complete peak age distribution.
I. INTRODUCTION
Traditional communication networks consider packet delay as the one and only performance metric to capture the latency requirements of a transmission. However, numerous Internet of Things (IoT) applications require the transmission of real-time status updates of a process from a generating point to a remote destination. Sensor networks, vehicular networks and other tracking systems, and industrial control are examples of this kind of update process. For these cases, the Age of Information (AoI) is a novel concept that better represents the timeliness requirements by quantifying the freshness of the information at the receiver [1]. AoI measures the time elapsed since the latest received update was generated. Another age-related metric is the Peak Age of Information (PAoI), which is the maximum value of AoI for each update. As in other performance metrics of communication systems, the PAoI is more informative than the average age when the interest is in the worst case, e.g., when the system requirement is on the tail of the distribution. AoI is a relatively new metric in networking, but it has gained widespread recognition thanks to its relevance to IoT applications.
It is generally applied to queuing systems with a single node and a First Come First Serve (FCFS) policy. However, a general result was proven for general queuing networks in [2], which shows that a preemptive Last Come First Serve (LCFS) policy minimizes the AoI: since queued packets increase the system delay, and updates are interchangeable, it is better to transmit the latest packet directly to send the freshest possible information. Preemption means that even the packet currently in service is blocked and queued after the new one. Similar results are shown for M/M/k queues in [3]. The decision over whether to preempt or skip the subsequent update under different service time distributions is modeled as a Markov Decision Process (MDP) in [4]. A more realistic model considering a wireless channel with retransmissions was used to compute the PAoI distribution over a single-hop link in [5], and a recent live AoI measurement study on public networks generally confirmed that the theoretical models are realistic [6]. Multiple sources can also be considered, in which case the scheduling problem to maintain freshness for all sources becomes interesting [7]. A tandem queue models a system where the service is delivered in several successive stages. In communications, a relay network involves one or more intermediate nodes between transmitter and receiver to, e.g., overcome the physical distance between the two end points. The case of a single relay corresponds to a 2-node tandem queue where the relay could be, for instance, a satellite. Another example is in some IoT scenarios, where the read from the sensor is first preprocessed and then transmitted to the server [8]. This kind of model can capture the queueing dynamics of multi-hop links, which are much more complex than single-node models, as the combined effects of different service rates can be hard to gauge intuitively.
Given the wide range of relevant applications, we focus our attention on the study of the age in tandem queues. A recent work [8] models the AoI in tandem queues and derives a rate control algorithm for multiple sources with different priorities. In this case, each queue follows the FCFS discipline, but the authors derive only the average PAoI. An analysis of the effect of preemption in this kind of model on the average AoI is presented in [9], and [10] derives the average AoI for two queues in tandem with preemption and different arrival processes. Another recent work [11] uses the Chernoff bound to derive an upper bound of the quantile function of the AoI for two queues in tandem with deterministic arrivals. Finally, a general transport protocol to control the generation rate of status updates to minimize the AoI over the Internet is presented in [12]. In this letter, we analyze the distribution of the PAoI in a tandem queue with two systems and a single source, where each infinite queue follows the FCFS policy. This result will allow system designers to define reliability requirements using PAoI thresholds and derive the network specifications needed to meet those requirements. The structure of the letter is as follows. In Section II the system model is detailed, as well as the procedure to calculate the AoI. Section III and Section IV present the calculations when the first system is busy and free, respectively. Numerical results are plotted in Section V, and the paper is concluded in Section VI.
II. SYSTEM MODEL
We consider a tandem of two M/M/1 queues. Packets are generated by a Poisson process with rate λ and enter the first system, whose service time is exponentially distributed with rate µ_1. When the packet exits the first system, it enters the second one, whose service time is an exponential random variable with rate µ_2. In the following, we use the compact notation P_{X|Y}(x|y) for the conditional probability P[X = x | Y = y].
Probability Density Functions (PDFs) are denoted by a lower-case p. Fig. 1 shows the evolution of the AoI over time: packet i is generated at time t_i and departs the system at time t'_i. We define the overall system time for the packet T_i = t'_i − t_i as the difference between the departure and arrival times, and the interarrival time between two packets is denoted as Y_i. The PAoI ∆_i is the AoI right before a packet reception, which corresponds to the maximum AoI in the cycle. It is given by T_i + Y_i, as it is the difference between the departure time of a packet and the origin time of the last update received at the destination, as shown in Fig. 1. Since the arrival process is independent from the rest of the system, while the system time depends on it, we can compute the complete PDF of the PAoI by conditioning on Y_i and using the law of total probability:

p_∆(δ) = ∫_0^∞ p_{T|Y}(δ − y | y) p_Y(y) dy.    (1)

We now need to compute the conditioned system time probability p_{T_i|Y_i}(t_i|y_i). For each system j in the tandem, we can define the system time T_{i,j}, which is the sum of the waiting time W_{i,j} and the service time S_{i,j}. We know from basic queuing theory that the system time for any of the two systems is exponentially distributed with rate α_j = µ_j − λ, as long as the system is stable. We can also define the interarrival time for packet i in system j, denoted as Y_{i,j}, knowing that Y_{i,1} = Y_i. At each system, a packet can be queued for a certain time W_{i,j}, or find that the system is free and go directly into service. We define the extended waiting time Ω_{i,j} as the difference between the previous packet's system time and the interarrival time at the system, i.e.,

Ω_{i,j} = T_{i−1,j} − Y_{i,j}.    (2)

Knowing the PDF of the system time, we can derive the PDF of the extended waiting time, which will be useful in the next steps; for the first system, where Y_{i,1} is exponential with rate λ,

p_{Ω_1}(ω) = (α_1 λ)/(α_1 + λ) [e^{−α_1 ω} u(ω) + e^{λω} (1 − u(ω))],    (3)

where u(·) is the step function.
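As a sanity check on these definitions, the FCFS recursions implicit in the model (departure of packet i from system 1 at max(t_i, previous departure) plus a service time, and likewise for system 2), together with the identity ∆_i = T_i + Y_i, can be simulated directly. The sketch below is illustrative, not part of the letter; it uses the λ = 0.5, µ_1 = 1, µ_2 = 1.2 configuration from Section V, and the closed-form comparison in the comments relies on Burke's theorem (the departure process of a stable M/M/1 queue is Poisson), which the letter itself does not invoke:

```python
import random

def simulate_tandem_paoi(lam, mu1, mu2, n_packets, warmup=1000, seed=1):
    """Simulate a tandem of two FCFS M/M/1 queues and return
    (mean system time, mean PAoI) over the post-warmup packets."""
    rng = random.Random(seed)
    t_arr = dep1 = dep2 = 0.0
    sum_t = sum_paoi = 0.0
    count = 0
    for i in range(n_packets):
        y = rng.expovariate(lam)           # interarrival time Y_i
        t_arr += y                         # generation time t_i
        # System 1: wait for the previous departure if busy, then serve.
        dep1 = max(t_arr, dep1) + rng.expovariate(mu1)
        # System 2: same recursion, fed by system 1's departures.
        dep2 = max(dep1, dep2) + rng.expovariate(mu2)
        t_sys = dep2 - t_arr               # overall system time T_i
        if i >= warmup:                    # discard the transient
            sum_t += t_sys
            sum_paoi += t_sys + y          # PAoI: Delta_i = T_i + Y_i
            count += 1
    return sum_t / count, sum_paoi / count

mean_T, mean_paoi = simulate_tandem_paoi(0.5, 1.0, 1.2, 200_000)
# By Burke's theorem each queue behaves as an independent M/M/1, so
# E[T] = 1/(mu1-lam) + 1/(mu2-lam) = 1/0.5 + 1/0.7 and
# E[PAoI] = E[T] + E[Y] = E[T] + 1/lam.
print(round(mean_T, 2), round(mean_paoi, 2))
```

The simulated means should land close to these closed-form values, which makes the sketch a quick consistency check before attempting the full distributional derivation.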
The interarrival time at the first relay Y_{i,1} is exponentially distributed with rate λ, while in subsequent systems it is given by

Y_{i,j+1} = S_{i,j} + [−Ω_{i,j}]^+,

where [x]^+ = max(x, 0). We can combine (2) with the definition of Y_{i,j+1} to get:

Y_{i,j+1} = S_{i,j} + [Y_{i,j} − T_{i−1,j}]^+.

To compute the exact PDF of the PAoI in the 2-system case (j ∈ {1, 2}), we distinguish between free and busy systems at each node, and calculate the probabilities separately for the four possible combinations. The overall PDF of the PAoI is the sum of the values in the four cases:

p_∆(δ) = p_{∆,1A}(δ) + p_{∆,1B}(δ) + p_{∆,2A}(δ) + p_{∆,2B}(δ).    (4)

In case 1, the first system is busy, and the packet experiences queuing: if the first system is the bottleneck, i.e., µ_1 < µ_2, packets will have a higher system time. The case with the lowest system time is case 2B, in which both systems are free. However, these intuitive relations do not necessarily hold for the PAoI, as the interarrival time between update packets can play a major role. In the analysis of the four cases, we will omit the packet index i for the sake of brevity.
III. CASE 1: THE FIRST SYSTEM IS BUSY
We now consider the case in which the first system is busy, i.e., the i-th packet arrives before the departure of the (i−1)-th packet, and Ω_1 > 0. In this case, we start from the distribution of the system time conditioned on Ω_1, Ω_2, and S_1, so that S_2 is the only remaining random variable:

p_{T|Ω_1,Ω_2,S_1}(t|ω_1, ω_2, s_1) = µ_2 e^{−µ_2(t−ω_1−s_1−[ω_2]^+)} u(t − ω_1 − s_1 − [ω_2]^+).    (5)

We distinguish two sub-cases: one in which the second system is busy as well (case 1A), and one in which the second system is free (case 1B). In each sub-case, we first uncondition on S_1,

p_{T|Ω_1}(t|ω_1) = ∫_0^∞ p_{T|Ω_1,S_1}(t|ω_1, s_1) p_{S_1}(s_1) ds_1,

then condition on Y_1 and uncondition on Ω_1; from this result, we derive the PDF of the system time T and, finally, the PDF of the PAoI.
IV. CASE 2: THE FIRST SYSTEM IS FREE
When the first system is free, the packet goes directly into service, i.e., Ω_1 ≤ 0, and we again distinguish between a busy (case 2A) and a free (case 2B) second system. The derivation follows the same steps as in case 1: we uncondition on S_1, then condition on Y_1 and uncondition on Ω_1, obtaining first the PDF of the system time and then the PDF of the PAoI.
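The four-case decomposition lends itself to an empirical check: each simulated packet can be tagged with the busy/free state of the two systems upon its arrival at each of them, and the PAoI can then be averaged per case. The following sketch is illustrative (the function name and structure are ours, not from the letter):

```python
import random
from collections import defaultdict

def paoi_by_case(lam, mu1, mu2, n_packets=200_000, warmup=1000, seed=3):
    """Mean PAoI conditioned on the four busy/free combinations
    ('1' = first system busy, '2' = first free; 'A' = second busy, 'B' = second free)."""
    rng = random.Random(seed)
    t_arr = dep1 = dep2 = 0.0
    sums = defaultdict(float)
    counts = defaultdict(int)
    for i in range(n_packets):
        y = rng.expovariate(lam)
        t_arr += y
        busy1 = t_arr < dep1          # packet finds system 1 busy on arrival
        dep1 = max(t_arr, dep1) + rng.expovariate(mu1)
        busy2 = dep1 < dep2           # packet finds system 2 busy when it gets there
        dep2 = max(dep1, dep2) + rng.expovariate(mu2)
        if i >= warmup:
            case = ('1' if busy1 else '2') + ('A' if busy2 else 'B')
            sums[case] += dep2 - t_arr + y   # Delta_i = T_i + Y_i
            counts[case] += 1
    return {c: sums[c] / counts[c] for c in sums}

means = paoi_by_case(0.5, 1.0, 1.2)
```

Under the λ = 0.5, µ_1 = 1, µ_2 = 1.2 configuration of Section V, the per-case averages should mirror the ordering observed in Fig. 2, with the lowest PAoI in case 2A.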
V. SIMULATION RESULTS
We compared the results of our analysis with a Monte Carlo simulation, transmitting 10 million packets and computing the system delay and PAoI for each. The initial stages of each simulation were discarded, removing enough packets to ensure that the system had reached a steady state. Fig. 2 shows the PAoI CDF in the four subcases for λ = 0.5, µ_1 = 1, and µ_2 = 1.2. It is easy to note that the PAoI does not have the same behavior as the system time, which is the highest when the first system is busy, i.e., when there is queuing at the bottleneck, and lowest in case 2B, in which both systems are free. The PAoI is the lowest in case 2A, and almost identical in cases 1B and 2B. This difference is due to the effect of the interarrival times on the PAoI, as case 2B usually means that the instantaneous load of the system is low and packets are far apart, increasing the PAoI. In all cases, the simulation results fit the analytically derived curve with minimal error, for both the system time and the PAoI. In case 2A, the faster system is busy and the bottleneck is empty. Intuitively, this can reduce age, as the second system will probably be able to serve packets fast enough, but at the same time the instantaneous load will be high enough to avoid having a strong impact on the age. We can now examine the PAoI CDFs for different values of λ: the system time is always higher for higher values of λ, as it depends on the traffic. The same is not true for the PAoI, as Fig. 3 shows: the PAoI is lowest for λ = 0.5, as the high interarrival time becomes the dominant factor for λ = 0.25. As for the subcase analysis, the system time and PAoI from the Monte Carlo simulation follow the analytical curve perfectly. On the other hand, the values of µ_1 and µ_2 also have an important effect, as Fig. 4 shows: while the bottleneck always has a service rate of 1, changing the service rate of the other link and even switching the two can have an impact on the PAoI.
Naturally, increasing the rate of the other link from 1.2 to 1.6 slightly reduces the PAoI, but we note that for both values, having the first system as the bottleneck reduces performance, particularly when the systems are similar. Finally, Fig. 5 shows how the worst-case PAoI, measured using the 95th, 99th and 99.9th percentiles, changes as a function of λ: if the traffic is very high, the queuing time is the dominant factor, causing the worst-case PAoI to diverge. The same happens if the traffic is too low, as the interarrival times can be very large: in this case, the system will almost always be empty, but updates will be very rare. The best PAoI performance is thus achieved at intermediate loads.
VI. CONCLUSIONS AND FUTURE WORK
In this letter, we derived the PDF of the PAoI for a tandem of two M/M/1 queues. This result can give more flexibility in the design of bounded AoI systems, both for IoT and other relay applications. The results are derived for two nodes, but the procedure is generic for K nodes. A first possible avenue of future work is the introduction of multiple independent sources in the system, possibly with different priorities. The extension of the system to longer line networks is also a possibility, but the complexity of the derivation might make the results unwieldy. Other potential directions are the inclusion of error probabilities in the links and preemption-based policies. Finally, the extension to tandem M/D/1 or D/M/1 systems might be very interesting, as these systems are often used to model real update applications.
Telomere attrition, kidney function, and prevalent chronic kidney disease in the United States
Background Telomere length is an emerging novel biomarker of biologic age, cardiovascular risk and chronic medical conditions. Few studies have focused on the association between telomere length (TL) and kidney function. Objective We investigated the association between TL and kidney function/prevalent chronic kidney disease (CKD) in US adults. Methods The National Health and Nutrition Examination Survey (NHANES) participants with measured data on kidney function and TL from 1999 to 2002 were included. Estimated glomerular filtration rate (eGFR) was based on the CKD Epidemiology Collaboration (CKD-EPI) equation. Urinary albumin excretion was assessed using the urinary albumin-creatinine ratio (ACR). We used multivariable adjusted linear and logistic regression models, accounting for the survey design and sample weights. Results Of the 10568 eligible participants, 48.0% (n=5020) were men. Their mean age was 44.1 years. eGFR significantly decreased and ACR significantly increased across increasing quarters of TL (all p<0.001). The association between TL and kidney function remained robust even after adjusting for potential confounding factors, but the association between TL and ACR was only borderline significant (β-coefficient= -0.012, p=0.056). Conclusion The association of kidney function with a marker of cellular senescence suggests an underlying mechanism influencing the progression of nephropathy.
INTRODUCTION
Telomeres are repeats of specific short sequences of nucleotides found at the ends of chromosomes, and telomere shortening has been correlated with various pathological conditions. There is great interest in studying telomere length in relation to kidney health [12,13], but clinical and epidemiological studies remain scanty.
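The CKD-EPI equation referenced in the Methods can be illustrated concretely. The sketch below implements the 2009 CKD-EPI creatinine equation (serum creatinine in mg/dL); the assumption that this specific version, including its sex and race coefficients, is the one applied to the 1999-2002 NHANES data is ours, as the text does not specify:

```python
def ckd_epi_egfr(scr_mg_dl, age_years, female, black=False):
    """eGFR in mL/min/1.73 m^2 from the 2009 CKD-EPI creatinine equation:

    eGFR = 141 * min(Scr/k, 1)^a * max(Scr/k, 1)^-1.209 * 0.993^Age
           * 1.018 [if female] * 1.159 [if black]
    with k = 0.7 (women) or 0.9 (men) and a = -0.329 (women) or -0.411 (men).
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# The eGFR < 60 mL/min/1.73 m^2 threshold used in the Results marks CKD stages 3-5.
def reduced_kidney_function(scr_mg_dl, age_years, female, black=False):
    return ckd_epi_egfr(scr_mg_dl, age_years, female, black) < 60.0
```

For instance, a 50-year-old non-black man with a serum creatinine of 1.0 mg/dL gets an eGFR in the high 80s, well above the stage 3 threshold.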
The limited existing evidence suggests that patients with end-stage renal disease (ESRD) may have shorter telomeres and accelerated telomere shortening compared with the general population [14,15]. Data for subjects with chronic kidney disease (CKD), derived mainly from two studies of severe heart failure patients, also suggest a strong correlation between reduced kidney function and shorter telomere length (TL), even after adjustment for age [16,17]. The mechanism through which creatinine is regulated by the kidney, and the relationship between kidney function and TL, are not fully understood, and existing data have been controversial [18]. However, clarifying the association of kidney function with TL is imperative, to confirm whether this pathway holds promise for CKD risk evaluation and/or reduction. In this study we investigated the association of TL, a marker of biological age, with kidney function and prevalent CKD using data from the National Health and Nutrition Examination Survey (NHANES).
RESULTS
Of the 10568 eligible participants, 48.0% (n = 5020) were men. The mean age was 44.1 years overall, 43.5 years in men and 44.8 years in women (p = 0.063). With regard to education, 52.5% (n = 4022) of the participants had completed more than high school, 25.6% (n = 2178) had completed high school, while 21.7% (n = 3248) had completed less than high school. Whites (non-Hispanic) represented 70.4% (n = 4864) of the participants, African-Americans 10.9% (n = 2103) and Mexican-Americans 6.9% (n = 2690). Overall, 20.8% were current smokers (24.8% of the men and 16.6% of the women). The mean and standard error of the mean (SEM) for the TL in the overall sample was 1.08±0.015 (1.07±0.014 in men and 1.08±0.016 in women). The distribution of participants by stage of CKD based on eGFR and ACR levels was the following: CKD stage 1, 65.3%; CKD stage 2, 26.5%; CKD stage 3, 7.1%; CKD stage 4, 0.6%; and CKD stage 5, 0.4%.
Overall, 8.1% of the participants had an eGFR of less than 60 ml/min/1.73m². Table 1 shows the characteristics of the participants according to their CKD status. The lipid profile, including triglycerides, total cholesterol, high-density lipoprotein cholesterol, and low-density lipoprotein cholesterol, was better in participants without CKD than in those with CKD (all p < 0.001). The age- and sex-adjusted associations of cardiometabolic factors and kidney function tests across quarters of TL are summarized in Table 2. Mean body mass index, fat-free mass, fat mass, triglycerides, total cholesterol and C-reactive protein significantly decreased across increasing TL quarters (all p < 0.001), while high-density lipoprotein cholesterol significantly increased across increasing TL quarters (p < 0.001, Table 2). eGFR significantly decreased and ACR significantly increased across increasing quarters of TL (both p < 0.001). Further univariable and multivariable (adjusted for age, sex, race, smoking, fasting blood glucose, systolic and diastolic blood pressure, body mass index, and C-reactive protein) regression analyses were performed to examine the association of TL with kidney function (Table 3). Univariable models revealed that TL was negatively associated with urinary albumin and ACR (both p < 0.001), and positively associated with serum creatinine and eGFR (both p < 0.001). In multivariable adjusted models, the association remained significant between TL and eGFR, and borderline significant between TL and urinary albumin (β-coefficient = -0.012, p = 0.056). Logistic regression was used to determine the association between TL quartile and odds of CKD; however, we failed to find any significant association between TL quartile and odds of CKD in either crude or adjusted (age, sex, race, smoking, fasting blood glucose, systolic and diastolic blood pressure, body mass index, and C-reactive protein) models.
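The quartile-based analyses above (TL quarters in Tables 2-3 and TL quartiles in the logistic models) rest on a simple construction: compute the 25th/50th/75th percentile cut-points of TL and assign each participant a quartile index. A generic pure-Python sketch (illustrative only; it ignores the NHANES survey design and sample weights that the authors accounted for):

```python
def quartile_cutpoints(values):
    """Return the 25th/50th/75th percentile cut-points
    using linear interpolation between order statistics."""
    s = sorted(values)

    def pct(p):
        idx = p * (len(s) - 1)
        lo = int(idx)
        frac = idx - lo
        if lo + 1 == len(s):
            return s[lo]
        return s[lo] * (1 - frac) + s[lo + 1] * frac

    return [pct(0.25), pct(0.5), pct(0.75)]

def assign_quartile(x, cuts):
    """Quartile index 1-4 for value x, given ascending cut-points."""
    return 1 + sum(x > c for c in cuts)
```

Each participant's TL would be mapped through `assign_quartile` and the resulting 1-4 index used as the exposure in the regression models.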
DISCUSSION In this large representative sample of American adults, eGFR decreased while urinary albumin excretion increased across decreasing TL quarters. These patterns were robust to adjustment for potential confounding factors. Although these findings did not translate into a significant association between TL and prevalent CKD, they suggest that telomere shortening could be an independent predictor of deteriorating kidney function. In accordance with our findings, a recent systematic review proposed that TL shortening might be related to CKD prevalence/occurrence or declining kidney function, although this relationship is probably balanced by the cellular telomere reparative process in those surviving longer with CKD; the review also noted that short TL was independently related to an increased risk of prevalent microalbuminuria in diabetic men with CKD [19]. Furthermore, a recent Japanese investigation among persons with increased cardiovascular risk found that longer telomere length was related to better renal function [20]. However, Pykhtina et al. found no associations between telomere length and inflammatory markers or the levels of glomerular filtration rate, urea, and creatinine, except for albuminuria, which was associated with telomere length [21]. In line with our study, previous reports have suggested that TL is shorter in end-stage renal disease patients on dialysis than in the general population. For example, in a study of 15 patients on haemodialysis and 15 age-matched controls, the authors found accelerated telomere shortening in patients on dialysis [22]. Another study of 18 diabetic patients on dialysis and 20 controls found an inverse correlation between TL and time on dialysis [14]. A study of 42 haemodialysis patients found reduced telomerase activity compared with non-haemodialysis patients [15]. 
Interestingly, a recent investigation conducted by Luttropp et al. concluded that telomere attrition after 12 months was significantly greater in patients on renal replacement therapy than in dialysis patients; in addition, non-CKD patients had significantly longer telomeres than CKD patients [23]. Regarding evidence from pre-ESRD and CKD patients, earlier studies in patients with heart failure and kidney function in the normal range have reported a strong correlation between TL shortening and declining kidney function, even after adjustment for age [16,17]. In this regard, van der Harst et al. evaluated the association of TL with renal function in 610 patients with heart failure (aged 40 to 80 years), and found that age- and sex-adjusted TL decreased steadily across decreasing quarters of eGFR [16]. Another study, by Wong et al., explored the association between TL and renal function in patients with chronic heart failure (n = 866; median age 74) [17]. They reported that TL was associated with renal function even after adjustment for age, gender, age at chronic heart failure onset, and severity of chronic heart failure [17]. In contrast, the Heart and Soul study, a longitudinal study of patients with stable coronary heart disease, found that kidney function was not independently associated with shortened TL or with telomere shortening over 5 years [24]. However, their findings should be considered with caution, given the focus on relatively old subjects (mean age 66.7 years) with stable coronary heart disease [24]. This study has several strengths. It is one of the largest studies of the association of TL with kidney impairment. Kidney impairment was assessed by both eGFR and proteinuria. The selection of participants was based on random sampling of the general population, and the results can therefore be extrapolated to the general population. 
As data collection in NHANES was performed on all days of the week throughout the year, the potential for selection bias is very low [25,26]. The findings from our study should be considered in the context of some limitations. The cross-sectional nature does not allow inference about causality. Repeated measures of TL by quantitative polymerase chain reaction in the same subjects after several years of follow-up, which could elucidate the temporality of these findings, were not available. This study has important clinical and public health implications. Understanding the interplay between TL and kidney function is a necessary and important step toward any application of the resulting knowledge for public health policy and action. Moreover, CKD may increase the risk of cardiovascular disease, a leading cause of death, which might partly be explained by the role of TL [27-29]. Our study provides a comprehensive snapshot of the relationship of kidney function with TL at the national level in the US. In conclusion, our findings provide further evidence on the association between TL and kidney function. The association of kidney function with a marker of cellular senescence suggests an underlying mechanism influencing the progression of nephropathy. Population The NHANES is an ongoing series of repeated cross-sectional surveys conducted by the US National Center for Health Statistics (NCHS). The NCHS Research Ethics Review Board approved the NHANES protocol, and consent was obtained from all participants [30,31]. Data on demographics, diet, and behaviours were collected through questionnaires administered during home visits, while anthropometric and biomarker data were collected by trained staff using mobile examination units [30,32]. The interview consisted of questions on socio-demographic characteristics (age, gender, education, race/Hispanic origin, and health insurance) and questions on previously diagnosed medical conditions. 
More detailed information on the NHANES protocol is available elsewhere [30,33,34]. This study was based on analysis of data from the 1999-2002 NHANES cycles. Analyses were restricted to participants aged 18 years and older. Fasting blood glucose (FBG), total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), and triglyceride (TG) levels, as well as telomere length, were assayed using methods described in the NHANES Laboratory/Medical Technologists Procedures Manual [30,35,36]. Complete laboratory procedures for collection, storage, calibration, and quality control of blood samples for determination of hsCRP and other inflammatory markers are available elsewhere [37]. Creatinine was measured by the Jaffe reaction and standardized by previously described methods [38]. A random urine specimen was collected from participants; urinary creatinine was measured by the Jaffe rate reaction, and urinary albumin was measured by solid-phase fluorescent immunoassay [39]. Albuminuria was assessed as the urinary albumin-creatinine ratio (ACR) [39]. Glomerular filtration rate (eGFR, ml/min/1.73 m²) was estimated using the CKD Epidemiology Collaboration (CKD-EPI) equation. CKD was defined as eGFR less than 60 ml/min/1.73 m² [39]. Telomere measurements Aliquots of purified DNA, isolated from whole blood using the Puregene (D-50 K) kit protocol (Gentra Systems, Inc., Minneapolis, MN, USA), were obtained from participants. The TL assay was performed using the quantitative polymerase chain reaction method, measuring TL relative to standard reference DNA (also known as the telomere-to-single-copy-gene (T/S) ratio) [35,40]. Each sample was assayed 3 times on 3 different days, in duplicate wells, resulting in 6 data points. Control DNA values were used to normalize between-run variability [40,41]. 
Runs with more than 4 control DNA values falling outside 2.5 standard deviations from the mean of all assay runs were excluded from further analysis (6% of runs). For each sample, potential outliers were identified and excluded from the calculations (2% of samples). The inter-assay coefficient of variation was 6.5%. The Centers for Disease Control and Prevention (CDC) conducted a quality control review before linking the TL data to the NHANES data files. Statistical analysis We conducted the analyses according to the CDC guidelines for analysis of complex NHANES data, accounting for the masked variance and using the proposed weighting methodology [42]. To investigate the association between TL and kidney function, univariable and multivariable (adjusted for age, sex, race, smoking, fasting blood glucose, systolic and diastolic blood pressure, body mass index, C-reactive protein, diabetes, and hypertension) regressions were applied. Age- and sex-adjusted means of cardiometabolic factors and kidney function measures were compared across quarters of TL using analysis of covariance (ANCOVA) with Bonferroni correction. Other variables were compared using analysis of variance (ANOVA) and Chi-square tests. All tests were two-sided, and p < 0.05 was used to define statistical significance. Data were analysed using the SPSS® complex samples module, version 22.0 (IBM Corp, Armonk, NY, USA). ACKNOWLEDGMENTS MM was supported by The World Academy of Sciences studentship of the Chinese Academy of Sciences. CONFLICTS OF INTEREST None.
Decreased Risk of Anxiety in Diabetic Patients Receiving Glucagon-like Peptide-1 Receptor Agonist: A Nationwide, Population-Based Cohort Study Background: Previous findings on using Glucagon-like peptide-1 receptor agonist (GLP1-RA) as an antidepressant were conflicting and lacked large-scale studies. We used population-based data to investigate depression and anxiety risk in diabetic patients receiving the medication. Methods: From claims records of the National Health Insurance Research Database (NHIRD) of Taiwan, we identified cohorts of 10,690 GLP1-RA users and 42,766 propensity score-matched patients without GLP1-RA use from patients with diabetes mellitus (DM) diagnosed in 2011–2017, matched by age, gender, index year, occupation, urbanization, comorbidities, and medications. Incidence, hazard ratios (HR), and 95% confidence intervals (CI) of depression and/or anxiety were estimated by the end of 2017. Results: The overall combined incidence of anxiety and/or depression was lower in GLP1-RA users than in non-users (6.80 versus 9.36 per 1,000 person-years), with an adjusted hazard ratio (aHR) of 0.80 (95% CI: 0.67–0.95) after controlling for covariates. The absolute incidence reduction was greater for anxiety (2.13 per 1,000 person-years) than for depression (0.41 per 1,000 person-years). The treatment effectiveness was significant in women. Patients taking GLP1-RA for longer than 180 days had the incidence of anxiety reduced to 2.93 per 1,000 person-years, with an aHR of 0.41 (95% CI: 0.27–0.61), compared to non-users. Dulaglutide could significantly decrease risks of both anxiety and depression. Conclusion: Patients with DM receiving GLP1-RA therapy have a greater reduction in the risk of anxiety than in that of depression. Our findings strengthen previous research advocating possible antidepressant or anxiolytic effects of GLP1-RA and may lead to improved treatment adherence among patients with DM. 
INTRODUCTION Diabetes mellitus (DM) is a group of endocrinological and metabolic disorders affecting approximately 422 million patients worldwide (Saeedi et al., 2019). Patients with DM are at an elevated risk of developing acute and serious long-term complications (Fowler, 2008; Gregg et al., 2016; Harding et al., 2019). Considerable evidence has also associated DM with increased risk of incident or recurrent depression (Gavard et al., 1993; Lustman et al., 2000; Anderson et al., 2001; Snoek et al., 2015; Chima et al., 2017; Khaledi et al., 2019). An earlier meta-analysis found that the risk of depression was 2-fold higher in patients with DM than in individuals without DM (Anderson et al., 2001). Another meta-analysis reported that poor glycemic control is associated with the occurrence of depression in patients with both type 1 and type 2 DM (Lustman et al., 2000). Metformin, thiazolidinediones (TZDs), glucagon-like peptide-1 receptor agonists (GLP1-RA), and dipeptidyl peptidase-4 (DPP-4) inhibitors are medications commonly prescribed for glycemic and weight control (Pozzi et al., 2019). A meta-analysis of clinical trials found that the use of pioglitazone, a TZD, was associated with significant improvements in depressive symptoms; the effect was more marked in women. The treatment effectiveness of metformin for diabetic patients was not consistent among studies (Moulton et al., 2018). However, a recent study in Saudi Arabia found that metformin could lower the probability of major depressive symptoms by 70% among patients with polycystic ovary syndrome (PCOS), but did not influence the occurrence of anxiety (AlHussain et al., 2020). GLP1-RAs can cross the blood-brain barrier, exerting effects on both the peripheral and central systems (Pozzi et al., 2019). A meta-analysis suggested that GLP1-RAs could exert antidepressant or anxiolytic effects, reducing the depression rating score by −2.09 (95% CI: −2.28 to −1.91, p < 0.001) in diabetic patients (Pozzi et al., 2019). 
A United Kingdom study evaluating changes in quality of life for patients on GLP1-RA demonstrated that the therapy significantly reduced Hospital Anxiety and Depression Scale (HADS) scores compared with insulin-treated patients (Grant et al., 2011). A placebo-controlled single-blind study investigating the effect of GLP1-RA on excessive daytime sleepiness in 16 male DM patients demonstrated a significant reduction in depression scores after treatment compared to baseline; nonetheless, the reduction was not significant compared to placebo (Idris et al., 2013). A 26-week randomized controlled trial in the Netherlands found that GLP1-RA treatment in 26 patients significantly improved quality of life measured by the Problem Areas in Diabetes Scale (PAID) questionnaire, compared to 24 patients using standard treatment (de Wit et al., 2014). A further follow-up study revealed that the effectiveness was sustained to 52 weeks, but the improvement in Beck Depression Index (BDI) scores was not significant (de Wit et al., 2016). Another study of 6-month treatment with liraglutide in 19 women with polycystic ovary syndrome without DM revealed no significant association with reduced depression, compared to 17 age-matched controls (Kahal et al., 2019). A pooled analysis of 5,325 individuals with higher BMI and without diabetes, evaluating trials using GLP1-RA for 32-160 weeks, also found no significant differences in depression (2.1 versus 2.1 events/100 person-years) or anxiety (1.9 versus 1.7 events/100 person-years) (O'Neil et al., 2017). Most previous studies on GLP1-RA were undertaken with small sample sizes and reported inconsistent findings on treatment effectiveness related to risks of depression and anxiety. These studies ascertained depression or anxiety mainly with screening instruments rather than clinical diagnoses. 
A large long-term cohort study with sufficient sample size focusing on DM patients is lacking. Therefore, we used the insurance claims data of Taiwan to evaluate risks of anxiety and/or depression in DM patients with and without GLP1-RA use, with sufficiently large population-based cohorts and follow-up time. Data Source The National Health Insurance (NHI) of Taiwan is a compulsory health insurance system, launched in 1995 for all residents. For this study, we used the National Health Insurance Research Database (NHIRD) issued by the NHI, containing claims data of all insured individuals. The claims data provided records of birth date, gender, place of residence, and enrollee category, as well as diagnoses, drug prescriptions, and treatments for emergency and outpatient visits and hospitalizations from 2000 to 2017. Study Cohorts GLP1-RA became one of the available treatment options for DM patients with NHI approval in 2011. We therefore first identified 4,079,299 patients with a diagnosis of DM during the period 2011-2017 (Figure 1). Patients who were less than 20 years old or had been diagnosed with depression and/or anxiety at baseline were excluded. Patients newly diagnosed with DM in 2011-2017 (ICD-10-CM codes E08, E09, E10, E11, and E13; ICD-9-CM code 250) at least twice were considered the potential study population. Patients were followed until a diagnosis of depression (ICD-9-CM codes 296.21-296.26, 296.30-296.36, 311, 300.4, 309.0, and 309.1) and/or anxiety (ICD-10-CM code F41; ICD-9-CM code 300) was identified, the individual withdrew from the insurance, or the end of 2017. Depression and anxiety were defined as having the diagnoses in at least two outpatient visits or one hospitalization to ensure validity (Chien et al., 2007). The incidence rate of depression or anxiety was estimated per 1,000 person-years. 
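The per-1,000-person-years incidence estimate mentioned above is a simple ratio of events to accumulated follow-up time. A minimal sketch with hypothetical toy data (not NHIRD data):

```python
def incidence_per_1000_py(n_events, total_person_years):
    """Incidence rate per 1,000 person-years: events / follow-up time x 1,000."""
    return 1000.0 * n_events / total_person_years

# Hypothetical toy cohort: (years of follow-up, outcome observed?)
cohort = [(2.0, False), (3.5, True), (1.0, False), (4.5, True), (7.0, False)]
n_events = sum(1 for _, event in cohort if event)      # 2 events
person_years = sum(years for years, _ in cohort)       # 18.0 person-years
rate = incidence_per_1000_py(n_events, person_years)   # ~111.1 per 1,000 PY
```

Each subject contributes follow-up time only until the event, censoring, or the end of the study, which is how rates such as 6.80 versus 9.36 per 1,000 person-years remain comparable between cohorts of different sizes.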
Statistical Analysis Demographic characteristics and the prevalence of comorbidities in the study and comparison groups were compared using Chi-square tests for categorical variables and t-tests for continuous variables. The Kaplan-Meier method was used to graphically describe the cumulative incidence of depression and anxiety during the 7-year follow-up period. Cox proportional hazards regression analysis was used to estimate the hazard ratio (HR) and 95% confidence interval (CI) of depression or anxiety for the GLP1-RA group relative to the comparison group. Demographic variables, comorbidities, and medications were included in multivariable models to estimate the adjusted hazard ratio (aHR): Model 1 adjusted for demographic factors, Model 2 for Model 1 variables plus comorbidities, and Model 3 for Model 2 variables plus medications. We further evaluated treatment effectiveness by length of treatment, stratifying the medication courses into 3 periods: 30-90 days, 91-180 days, and >180 days. All statistical analyses were performed using STATA version 14.0 (StataCorp), with p values less than 0.05 considered significant. RESULTS A total of 10,690 DM patients prescribed GLP1-RA and 42,766 matched non-users were identified from the NHIRD. The two groups had similar distributions of age, gender, occupation, urbanization, and comorbidities, with a mean age of 53.33 years (SD 13.04) and 45.06% women (Table 1). However, the percentage of metformin use was higher in non-GLP1-RA users. The cumulative incidence of anxiety was 2.13% lower in GLP1-RA users than in non-users (log-rank test p < 0.001), whereas that of depression was not significantly different between the 2 groups (Figure 2). 
Frontiers in Pharmacology | www.frontiersin.org | February 2022 | Volume 13 | Article 765446
The overall incidence of depression and/or anxiety was lower in GLP1-RA users than non-users (6.80 versus 9.36 per 1,000 person-years), with an aHR of 0.80 (95% CI: 0.67-0.95) for users after controlling for demographic factors, comorbidities, and medications (Table 2). The difference in incidence rates between the two groups was greater for anxiety than for depression. The aHRs of developing anxiety and depression for the GLP1-RA group, compared to non-users, were 0.78 (95% CI: 0.64-0.95) and 0.94 (95% CI: 0.72-1.23), respectively. The beneficial effect on anxiety (Table 3) was specific to patients between 40 and 60 years old, with an aHR of 0.73 (95% CI: 0.55-0.96). Compared with men, women in both groups exhibited greater incidences of depression and anxiety (Table 4). However, for women, risks of anxiety were significantly lower in GLP1-RA users than non-users. GLP1-RA users with either kind of comorbidity or who used other hypoglycemic agents showed no significant difference in the risk of depression (Table 4). On the other hand, GLP1-RA users without comorbidities, except for hypertension, showed significant reductions of anxiety (Table 3). GLP1-RA users with hypertension tended to have a lower risk of anxiety, with an aHR of 0.73 (95% CI: 0.54-0.97). (Table 1 note: Data shown as n (%) or mean ± SD. SMD: standardized mean difference; an SMD of 0.1 or less indicates a negligible difference. GLP1-RA: glucagon-like peptide-1 receptor agonist; COPD: chronic obstructive pulmonary disease; TZD: thiazolidinedione; SGLT2: sodium-glucose co-transporter-2; DPP-4 inhibitor: dipeptidyl peptidase-4 inhibitor. The urbanization level was categorized by the population density of the residential area into four levels, with level 1 the most urbanized and level 4 the least urbanized. 1:4 propensity score matching.) GLP1-RA users who took metformin, sulfonylurea, and oral medication combinations had a lower risk of anxiety, as did those who did not use TZD, acarbose, SGLT2 inhibitors, DPP-4 inhibitors, or insulin (Table 3). The incidence of depression or anxiety decreased with increasing duration of treatment after the initiation of GLP1-RA medication (Table 5). The reduction trends were significant after controlling for all covariates. After taking the medication for 180 days or longer, incidence rates of depression and anxiety fell to 2.19 and 2.93 per 1,000 person-years, respectively. The risk of having any depression or anxiety fell to an aHR of 0.5 (95% CI: 0.36-0.69) in GLP1-RA users, compared to non-users. We also evaluated the effect of different GLP1-RAs on anxiety or depression (Tables 6 and 7). Our results revealed that dulaglutide could significantly reduce risks of anxiety and depression, while liraglutide and exenatide showed no significant reductions in risks of either anxiety or depression. DISCUSSION To our knowledge, this study represents the largest population-based analysis investigating whether GLP1-RA medication is associated with reduced risks of depression or anxiety in DM patients. GLP1-RA users exhibited a significant risk reduction for anxiety, and a moderate reduction for depression, compared with non-users. This treatment effectiveness on anxiety was observed in female users but not in male users. The effectiveness increased with the duration of medication, and significant risk reduction was observed after 6 months or longer of therapy. Further age-specific stratified analyses revealed a significant reduction of anxiety in patients between 40 and 60 years. 
GLP1-RA users taking metformin or sulfonylurea at the same time had a decreased risk of anxiety, which was also noted in patients with hypertension. As for the specific effectiveness of each GLP1-RA, dulaglutide use could significantly decrease risks of both anxiety and depression, while liraglutide and exenatide showed no significant effect on reduction of either anxiety or depression. Our findings on reduced risks of anxiety or depression associated with GLP1-RA use were consistent with some previous findings (Bode et al., 2010; Grant et al., 2011; Idris et al., 2013; de Wit et al., 2014), but contrasted with others that showed no significant association between GLP1-RA medication and depression (de Wit et al., 2016; O'Neil et al., 2017; Kahal et al., 2019). However, studies with non-significant findings were either based on small sample sizes or not focused on DM patients. Although Eren-Yazicioglu et al. (2021) reported that exenatide was associated with higher depressive scores indirectly through its effect on perceived stress, their study participants already had a lifetime or current psychiatric diagnosis at baseline, unlike our study (we excluded those with a prior anxiety or depression diagnosis at baseline). Neuroinflammation might be a critical factor for the onset, deterioration, relapse, and maintenance of depression or anxiety (Kopschina Feltes et al., 2017; Paudel et al., 2018; Woelfer et al., 2019). GLP-1 has been reported to promote production of anti-inflammatory cytokines in various organs, including adipose tissue, the pancreas, and the brain (Dobrian et al., 2011; Lee et al., 2012; Darsalia et al., 2014; Augestad et al., 2020; Reed et al., 2020). Kim et al. (2020) summarized that GLP1-RA may ameliorate depression by reducing neuroinflammation, balancing neurotransmitter homeostasis, promoting neuronal differentiation and neural stem cell proliferation, and improving synaptic function. 
Although we failed to find significant reductions in the overall incidence of depression in GLP1-RA users, our subgroup analysis showed that patients using dulaglutide were at a lower risk of depression than non-users. The GLP1-RA treatment-associated decrease in the risk of depression or anxiety in DM patients may be driven by a combination of anti-inflammation, better glycemic control, greater weight loss, and reduced concern about weight gain (Bode et al., 2010). Women are generally more likely than men to develop an anxiety disorder (Leach et al., 2008; McLean and Anderson, 2009). The present study demonstrated that women had a higher incidence of anxiety than men even after using GLP1-RA. However, female users benefited more from GLP1-RA therapy than male users. The treatment effectiveness of GLP1-RA might be due to interactions between GLP-1R, the central and peripheral nervous systems, estrogen, and anorexic actions. GLP-1 is secreted from gut enteroendocrine cells and brain preproglucagon (PPG) neurons, known as the peripheral and central GLP-1 systems, respectively (Brierley et al., 2021). GLP-1 secreted by the intestine is released into the hepatoportal vein. This activates the vagus nerve to generate a neural signal towards the brain stem, including the nucleus of the solitary tract (NTS) and the area postrema (AP), which send axons to the hypothalamus to release GLP-1 and activate its receptors. A new signal is then sent towards peripheral tissues through the autonomic nervous system (ANS) to regulate numerous functions (Cabou and Burcelin, 2011). However, research indicated that the intake-inhibitory effects of the GLP1-RAs exendin-4 and liraglutide were mediated by activation of GLP-1R expressed on sub-diaphragmatic vagal afferents as well as in the brain (Kanoski et al., 2011). That is, central and peripheral GLP-1 systems suppress eating via independent gut-brain circuits (Brierley et al., 2021). 
Central injections of GLP1-RA strongly decrease food intake, owing to GLP-1R expression in the hypothalamus and brainstem inducing anorexic action (Larsen et al., 1997; McMahon and Wellman, 1998; Hayes et al., 2008; Hayes et al., 2010; Kanoski et al., 2011). The same CNS regions are implicated in the anorexic action of estrogens (Palmer and Gray, 1986; Osterlund et al., 1998; Merchenthaler et al., 2004; Musatov et al., 2007), and may provide neuroanatomical grounds for the interaction between GLP-1 receptor activation and gender (Richard et al., 2016) observed in our results. Such a mechanism is also supported by Richard et al. (2016), who reported that women are more sensitive than men to the food-reward impact of central GLP-1 receptor activation. In addition, individuals with obesity or overweight are more likely to have anxiety than non-obese persons (Amiri and Behnezhad, 2019). Since GLP1-RA is well known for its effect on weight loss, this might contribute to the reduction of anxiety in women. Furthermore, younger DM patients with fewer comorbidities may be more concerned about body weight. They were probably more willing to adhere to the prescription of GLP1-RA, and thus the risk of anxiety might be decreased. Our study demonstrated that the combination of GLP1-RA with metformin exerted an anxiolytic effect, consistent with previous preclinical studies (Fan et al., 2019; Ji et al., 2019; Zemdegs et al., 2019; Turan et al., 2021). We also found that GLP1-RA users taking sulfonylurea had a lower risk of anxiety, which may be because a lower dose of sulfonylurea is needed after starting GLP1-RA, decreasing the hypoglycemic risk of sulfonylurea. In our study, GLP1-RA users not using TZD also had a reduced risk of anxiety. This might be owing to avoiding the weight-gain side effect of TZD. The finding that GLP1-RA users not using DPP-4 inhibitors had a better reduction of anxiety was somewhat surprising. 
Numerous studies have reported that patients with mood and anxiety-related disorders are characterized by increased circulating inflammatory cytokines, including interleukin (IL)-1, IL-6, tumor necrosis factor (TNF), their soluble receptors, and acute phase reactants such as C-reactive protein (CRP) (Maes et al., 1992; Maes, 1999; Sluzewska, 1999; Michopoulos et al., 2017). DPP-4 is a novel adipokine secreted from adipose tissue (Lamers et al., 2011) with a pro-inflammatory role (Cordero et al., 1997). Obese patients may have higher DPP-4 levels from adipose tissue, and DPP-4 might induce depression by activating the immune system and promoting pro-inflammatory cytokine secretion. Previous studies revealed that, although very low levels of DPP-4 inhibitors were found in the brain (Fuchs et al., 2009; Fura et al., 2009), their neuroprotective effects might be attributed more to peripheral functions than to direct actions in the central nervous system (Darsalia et al., 2013; Lin and Huang, 2016). On the other hand, GLP1-RAs cross the blood-brain barrier successfully (Hunter and Hölscher, 2012; Athauda and Foltynie, 2016). In Taiwan, combined use of GLP1-RA and a DPP-4 inhibitor is not reimbursed by the health insurance; that is, one of the two medications has to be paid for by the patient out of pocket. This may be another reason why GLP1-RA users not using DPP-4 inhibitors had a better reduction of anxiety: when a patient needs to pay for the other hypoglycemic agent, it may indicate poorer glucose control, and he or she may have more anxiety regarding health conditions than those who did not combine a DPP-4 inhibitor with GLP1-RA. Our study found different effects of each subtype of GLP1-RA on anxiety or depression, and dulaglutide was the only subtype associated with a significant reduction in these risks. The distinct results may be due to differences in molecular structure that affect the potency, duration, and frequency of use (Monami et al., 2009; Buse et al., 2011). 
Dulaglutide may achieve better reductions of anxiety or depression owing to its weekly injection formulation, compared with liraglutide (daily injection). Exenatide and liraglutide have also shown neuroprotective effects in a few clinical studies among patients with Parkinson's disease, Alzheimer's disease, ischemia, traumatic brain injury, neuropathies, neurogenesis, or epilepsy, but not in patients with anxiety disorder as in our study. Erbil et al. (2019) suggested that not all GLP1-RAs have the same neuroprotective effect and that the effect of GLP1-RAs on reversing neurodegenerative or neuro-destructive processes might be time- and dose-dependent. Behavioral assessments in these preclinical studies are limited to the lesion model and may not cover all aspects of cortical function and all cortical layers. Hence, more studies investigating cortical effects of different GLP-1 subtypes on depression or anxiety at the whole-brain level, with longer assessment periods and complete neuropsychological evaluations or behavioral observations, are still needed in both preclinical and clinical settings. The strength of this study lies in the very large number of patients, the largest to date, focused on anxiety or depression associated with using GLP1-RA for the treatment of DM. We were able to conduct a longer follow-up than previous studies. The stratified data analysis allowed us to evaluate factors associated with treatment effectiveness. The identification of anxiety and depression was based on at least two clinical diagnoses rather than on screening instruments. However, several limitations should be acknowledged. First, the study was observational rather than a randomized controlled design, and was based on the population of Taiwan; the generalizability of the findings to other populations may be restricted. Second, our efforts to match and adjust for possible confounding factors might be biased by unmeasured or unknown confounders not available in the NHIRD. 
Information on some potential confounders, including body weight, smoking, diet, exercise, stress, and family history, was not available in the NHIRD; the bias associated with these unmeasured factors is unclear despite our efforts to match the study groups and control for available variables. Third, because information on changes in body weight was unavailable in the NHIRD, we were unable to evaluate risks of anxiety or depression related to body-weight changes.

Table note: PY: person-years; IR: incidence rate per 1,000 person-years; cHR: crude hazard ratio; aHR: adjusted hazard ratio. 1:4 propensity score matching. #: adjusted for sex, age, urbanization level, enrollee category, comorbidities, and medication; *p < 0.05, **p < 0.01, ***p < 0.001.
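As a minimal illustration of the quantities defined in the table note (the counts below are hypothetical, not the study's data), an incidence rate per 1,000 person-years and a crude rate ratio can be computed as:

```python
def incidence_rate(events, person_years, per=1000.0):
    """Incidence rate expressed per `per` person-years."""
    return per * events / person_years

def crude_rate_ratio(events_exp, py_exp, events_ctrl, py_ctrl):
    """Ratio of incidence rates, exposed vs. control (a crude, unadjusted measure)."""
    return incidence_rate(events_exp, py_exp) / incidence_rate(events_ctrl, py_ctrl)

# Hypothetical example: 40 anxiety diagnoses over 20,000 PY in GLP1-RA users
# vs. 300 diagnoses over 80,000 PY in matched controls.
ir_exposed = incidence_rate(40, 20_000)    # 2.0 per 1,000 PY
ir_control = incidence_rate(300, 80_000)   # 3.75 per 1,000 PY
rr = crude_rate_ratio(40, 20_000, 300, 80_000)
```

The adjusted hazard ratios in the study come from a Cox model with covariates, which this crude ratio does not replicate; the sketch only fixes the meaning of PY, IR, and cHR.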
The Impingement-free, Prosthesis-specific, and Anatomy-adjusted Combined Target Zone for Component Positioning in THA Depends on Design and Implantation Parameters of both Components

Abstract

Background: Lewinnek's recommendation for orienting the cup in THA is criticized because it involves a static assessment of the safe zone and because it does not consider stem geometry. A revised concept of the safe zone should consider those factors, but to our knowledge, this has not been assessed.

Questions/purposes: (1) To determine the shape, size, and location of target zones for combined cup and stem orientation for a straight-stem/hemispheric-cup THA that maximize the impingement-free ROM; and (2) to determine whether and how these implant positions change as stem anteversion, neck-shaft angle, prosthetic head size, and target range of movements are varied.

Methods: A three-dimensional computer-assisted design model of a straight-stem/hemispheric-cup hip prosthesis, with its design geometry expressed in terms of parameters, was created; its design parameters were modified systematically, and each prosthesis model was implanted virtually at predefined component orientations. Functional component orientation referencing body planes was used: cups were abducted from 20° to 70° and anteverted from -10° to 40°. Stems were rotated from -10° to 40° of anteversion, neck-shaft angles varied from 115° to 143°, and head sizes varied from 28 to 40 mm. Hip movements up to the point of prosthetic impingement were tested, including simple flexion/extension, internal/external rotation, ab/adduction, combinations of these, and activities of daily living known to trigger dislocation. For each combination of parameters, the impingement-free combined target zone was determined. Maximizing the size of the combined target zone was the optimization criterion.

Results: The combined target zones for impingement-free cup orientation had polygonal boundaries.
Their size and position in the diagram changed with stem anteversion, neck-shaft angle, head size, and target ROM. The largest target zones were at neck-shaft angles from 125° to 127°, at stem anteversions from 10° to 20°, and at radiographic cup anteversions between 17° and 25°. Cup anteversion and stem anteversion were inverse-linearly correlated, supporting the combined-anteversion concept. The range of impingement-free cup inclinations depended on head size, stem anteversion, and neck-shaft angle. For a 127° neck-shaft angle, the lowest cup inclinations that fell within the target zone were 42° for the 28-mm head and 35° for the 40-mm head. Cup anteversion and combined version depended on neck-shaft angle. For a 32-mm head, cup anteversion was 6° for a 115° neck-shaft angle and 25° for a 135° neck-shaft angle, and combined version was 15° and 34°, respectively.

Conclusions: The shape, size, and location of the combined target zones depended on design and implantation parameters of both components. Changing the prosthesis design or the implantation parameters also changed the combined target zone. A maximized combined target zone was found. It is mandatory to consider both components to determine the accurate impingement-free prosthetic ROM in THA.

Clinical Relevance: This study accurately defines the hypothetical impingement-free, design-specific component orientation in THA. Translating it into clinical precision may be a use case for navigation and/or robotics, but this is speculative and, as of now, unproven.

Introduction

Correct cup and stem positioning is essential in THA; however, a consensus about the correct position of these components has not been reached and remains a subject of debate [10,16,24,56,72]. Lewinnek's safe zone for cup positioning [50], which is based on radiographic and empiric data about dislocations, was accepted for a long time but has been criticized more recently [1,30,73,85]. Its critics raise at least two concerns.
First, the original study showed that even THAs with cups positioned in the safe zone sometimes dislocated [1], and second, controlling the cup's orientation alone appears to be insufficient [31], especially when solely referencing the anterior pelvic plane. Beyond that, more individualized positioning that accounts for femoral anatomy, pelvic tilt, the spino-pelvic relationship, and the interplay between cup and stem seems important to consider in a surgeon's positioning strategy [1,30,72]. McKibbin [53] introduced the term combined version, combining the acetabular and femoral neck versions (McKibbin's index), for assessing the growing dysplastic hip. Typically, high femoral anteversion and low acetabular anteversion or even retroversion develop in the hips of infants [93]. Later, the combined-version concept was introduced into THA [70] and quantified for a specific hip prosthesis [95]. Further investigations of the functional positioning of both components, applying three-dimensional (3-D) geometry and comprehensive kinematic analyses, confirmed that the interplay between cup and stem determines the functional prosthetic ROM [6,26,33,39,88,99]. Although theoretical, these studies gave prosthesis-based directions on how to functionally position the cup and stem, not only to prevent dislocation or impingement but also to reduce the risk of complications such as wear, squeaking, and edge and peak loading [4,27,28,52,73,91]. Functional referencing (relying on functional guidelines such as the functional pelvic plane or body planes), although less common, may be superior to morphologic referencing with respect to joint stability [89]. Nevertheless, morphologic referencing (referring to anatomic landmarks such as the anterior pelvic plane, transverse acetabular ligament [2,10], iliac bone, and posterior femoral condyles) is in wide clinical use [5,7,23,24,74,77,90,97,100].
Changing the cup's functional orientation without changing its morphologic orientation led to a higher dislocation rate in one study [8]. Identifying the body planes is necessary for functional referencing. This is a challenging task, although it is less difficult in patients in the supine position and more difficult in the lateral decubitus position [76,78]. Finding an accurate functional component orientation for an individual patient and a specific prosthesis requires a multifactorial approach. Our approach looks for design parameters and implantation orientations that enable impingement-free prosthetic joint motion, as determined by activities of daily living [21,46,55,66,67], while considering anatomic constraints. Ideally, a prosthesis system should come with recommended prosthesis-specific (technical) targets provided by the distributor, because the prosthesis design determines the kinematic performance of the system. It would then be at the surgeon's discretion to adapt these targets to patient-specific factors, such as changes in pelvic tilt [3,20,26,45,48,54,61,69,74,101], changes because of limitations in the lower spine [8,13,30,32,34,38,47,62,71,79,84], incidental pelvic reorientation after THA [11,47,51,61,68], and other changes after surgery [65,83,87]. However, to our knowledge, no prosthesis-specific targets have been published. We wished to demonstrate the technical feasibility of developing such targets for a hypothetical implant system, but one that uses implant geometry employed in common practice: a straight stem and a fully hemispherical acetabular shell.
We therefore sought (1) to determine the shape, size, and location of target zones for combined cup and stem orientation for a straight-stem/hemispheric-cup THA that maximize the impingement-free ROM; and (2) to determine whether and how these implant positions change as stem anteversion, neck-shaft angle, prosthetic head size, and target range of movements are varied.

Materials and Methods

A kinematic analysis was performed using a 3-D geometric model of a total hip prosthesis consisting of a fully hemispherical acetabular shell and a standard straight stem with a round, conically shaped neck. All relevant design parameters, such as the inner and outer diameter of the cup, head diameter, trunnion design, neck diameter and neck cross-sectional profile, head-to-neck ratio, orientation of the neck expressed by the neck-shaft angle and stem anteversion, cup radiographic anteversion, and cup radiographic inclination, were used as parameters in the model. Thus, different straight-stem designs were modeled and tested (Table 1). The model was created in Maple R16 software (Maplesoft, Waterloo, Canada) for batch computation. Two algorithms were established: one was a collision-detection algorithm that analyzed joint motion until primary impingement occurred, and the other was a more analytic algorithm that calculated compatible cup positions for predefined hip movements. The second algorithm was the computational representation of the target-ROM concept, meaning that the target motion of the femur was preset, and cup orientations allowing this motion were calculated (Fig. 1A-B). The analysis generated iso-lines that revealed zones of cup orientations compatible with these predefined stem target motions (Fig. 2). For example, the Flx125° line divides the diagram into a lower-left and an upper-right region.
The region to the upper right of the Flx125° line contains all impingement-free cup positions at 125° of hip flexion, while all impinging cup positions lie to the lower left. The region below the Ext30° line contains all impingement-free cup positions at 30° of hip extension, and so on. The intersection of all these regions, as in set theory, designated the combined target zone together with its polygonal boundary (Fig. 2). All cup orientations within this zone fulfilled all criteria to reach the predefined target ROMs (Table 2). Stem anteversion, head size, and neck-shaft angle were the parameters of the diagram. We used Excel (Office 2016, Microsoft Corp, Redmond, WA, USA) to generate charts, means and SDs, correlations, and regression analyses. Target zone sizes were calculated by applying the shoelace formula for the area of irregular polygons (x_i, y_i are the coordinates of the polygon's vertices, n is the number of vertices, and indices are taken modulo n):

A = (1/2) |sum over i = 1..n of (x_i * y_{i+1} - x_{i+1} * y_i)|

Simple movements within the anatomic planes and also combined movements were analyzed. There was a focus on combined movements that are known to cause dislocation and are commonly used to test hip stability intraoperatively, such as combined adduction + flexion + internal rotation and extension + external rotation [12,58]. We also considered the ceiling effect due to bone-on-bone impingement in flexion for femoral heads of 32 mm or greater [18]. Nevertheless, these predefined hip movements were somewhat arbitrary and hence arbitrarily affected the shape and size of the combined target zone. The radiographic definition was used for identifying the orientation of the acetabular components [57]. All orientations referred to the body planes: the coronal, sagittal, and axial planes that define the body's coordinate system. In particular, the movements and the orientations of the cup and the stem were referenced to this body coordinate system [37].
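The polygon-area formula used for the target-zone sizes is the standard shoelace formula; a direct transcription:

```python
def polygon_area(vertices):
    """Shoelace formula: area of a simple polygon from its vertices in order.
    `vertices` is a list of (x, y) pairs; orientation may be CW or CCW."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x_i, y_i = vertices[i]
        x_j, y_j = vertices[(i + 1) % n]  # wrap around to the first vertex
        s += x_i * y_j - x_j * y_i
    return abs(s) / 2.0

# A made-up pentagonal target zone in (inclination, anteversion) coordinates:
example_zone = [(40, 10), (50, 12), (50, 25), (44, 30), (38, 20)]
zone_size = polygon_area(example_zone)
```

The example vertices are illustrative only; the paper's zones come from the iso-line intersections described above.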
A non-orthogonal coordinate system was applied to the hip according to the recommendations of the International Society of Biomechanics [36,98]. All movements started in the neutral position. The mediolateral axis (the flexion/extension axis) was affixed to the pelvis, while the AP axis (the abduction/adduction axis) was the floating axis; this AP axis rotated around the mediolateral axis during flexion and extension. The longitudinal axis of the leg was also floating and rotated with flexion/extension and abduction/adduction. The sitting position was defined as 80° of femoroacetabular flexion and 10° of posterior pelvic tilt, resulting in a 90° angle between thigh and trunk [30]. A standard straight femoral stem was implanted keeping its shaft colinear with the intramedullary axis of the proximal femur. Hence, the shaft axis was flexed 5° and adducted 5° with respect to the body planes. This axis served as the rotational axis for stem version. Tests were performed from -10° of retroversion to 40° of stem anteversion in 5° increments (the minus sign [-] denotes retroversion). The following cup orientations were tested: radiographic inclination from 20° to 70° and radiographic anteversion from -20° to 50° (the minus sign [-] denotes retroversion). The neck-shaft angle varied from 115° to 143°, in 4° increments over the entire range and 2° increments between 119° and 139°. Head sizes were 28, 32, 36, and 40 mm. The slightly conical round neck yielded head-to-neck ratios of 2.3 for the 28-mm, 2.67 for the 32-mm, 3.0 for the 36-mm, and 3.33 for the 40-mm head (Table 1). To facilitate comparison, the same chart layout as Lewinnek's was used to visualize the combined target zone (Fig. 2). A total of 572 diagrams, each containing 19 test movements and 121 tested component orientations, were produced and analyzed.
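The head-to-neck ratios above determine the theoretical impingement-free arc of a hemispherical cup. As a rough intuition pump (this is the textbook idealization for a cylindrical neck, not the paper's full collision-detection model, and the 12-mm neck diameter is assumed for illustration), the oscillation angle can be estimated as:

```python
import math

def oscillation_angle_deg(head_diameter_mm, neck_diameter_mm):
    """Idealized oscillation angle for a hemispherical cup with a cylindrical
    neck: theta = 180 - 2*arcsin(neck/head). A simplification only; the study
    uses a parameterized 3-D model with a conical neck."""
    return 180.0 - 2.0 * math.degrees(math.asin(neck_diameter_mm / head_diameter_mm))

# Head sizes tested in the study, with an assumed 12-mm neck (the paper's
# conical neck gives head-to-neck ratios of 2.3 to 3.33):
angles = {head: oscillation_angle_deg(head, 12.0) for head in (28, 32, 36, 40)}
```

The monotone growth of the oscillation angle with head size matches the paper's finding that larger heads enlarge the combined target zone.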
The optimization process searched for the largest combined target zone as a function of radiographic cup anteversion, radiographic cup inclination, stem anteversion, neck-shaft angle, head size, and head-to-neck ratio. We chose this optimization criterion because it included the highest number of valid positioning combinations and offered the surgeon the highest flexibility for adjusting both components while still offering the patient the intended impingement-free ROM.

Fig. 2 The red shaded area shows the combined target zone for a straight stem with a neck-shaft angle of 127°, stem anteversion of 15°, and head diameter of 32 mm. Cups oriented within this zone allowed all listed movements without prosthetic impingement. Cup inclination is limited to 50° for biomechanical reasons.

Additionally, the optimization process searched for the lowest cup inclination, aiming at improved tribology and increased jumping distance for enhanced joint stability [63,75]. Furthermore, combined version was calculated for neck-shaft angles from 115° to 143° using linear regression analysis.

Results

The shapes of the combined target zones for each stem anteversion were polygonal (Fig. 2). Rotating the stem to -5° of anteversion (meaning 5° of retroversion) reduced the size of the combined target zone, changed its contour, and moved the zone toward the top of the diagram (Fig. 3), whereas increasing stem anteversion to 40° also reduced its size and changed its contour but moved it in the opposite direction (Fig. 4). Taking the diagram for each stem anteversion and stacking them sequentially provided the 3-D target space (Fig. 5). The largest combined target zones were found for neck-shaft angles ranging from 122° to 130°, with peaks at 125° to 127° (Fig. 6). Neck-shaft angles below 121° or above 131° provided smaller combined target zones. Larger prosthetic heads led to larger combined target zones, but the peak position remained in the 125° to 127° neck-shaft-angle corridor.
Functional stem anteversions from 5° to 25° provided the largest combined target zones; every stem version within this wide range fulfills the optimization criterion of maximizing the size of the combined target zone (Fig. 7). Radiographic cup anteversions from 15° to 25° showed the largest combined target zones, with a relatively sharp decline in the size of the combined target zone when cup anteversion increased above 31° (Fig. 8). Cup anteversion also depended on the neck-shaft angle: changing the neck-shaft angle required the cup anteversion to be adjusted to keep the hip in the target zone (Fig. 9). For example, a neck-shaft angle of 115° required 5° of cup anteversion, while a 135° neck-shaft angle required 25° of cup anteversion. Likewise, when substituting a lateralizing (more varus, 123°) stem for a 135° neck-shaft-angle stem, the cup had to be reoriented from 18° to 25° of anteversion to achieve the largest combined target zone. Cup inclination was not dependent on target zone size. Instead, cup inclination was sensitive to cup anteversion, neck-shaft angle, and head size. The lowest radiographic cup inclination occurred at stem anteversions between 15° and 25° (Fig. 10) and at neck-shaft angles from 125° to 130° (Fig. 11). The lowest cup inclinations were 42° for a 28-mm head, 40° for a 32-mm head, 37° for a 36-mm head, and 35° for a 40-mm head. The upper limit for cup inclination was intentionally set to 50° because cup positions more vertical than that are associated with other problems, in particular accelerated polyethylene wear, edge loading, and reduced jumping distance [19,86,92]. Changing stem anteversion required cup anteversion to be adjusted in the opposite direction; that is, increasing stem anteversion called for a reduction in cup anteversion and vice versa. The linear regression analysis for a 127° neck-shaft-angle stem yielded the equation Cup Anteversion + 0.68 × Stem Anteversion = 31.3°, with a coefficient of determination R² of 0.9969 (Fig. 12).
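The reported regression can be read directly as a planning rule of thumb; a minimal sketch (the equation and its 0.68 coefficient are the paper's, for a 127° neck-shaft angle only; the helper name is ours):

```python
def cup_anteversion_target(stem_anteversion_deg):
    """Cup anteversion satisfying the reported relation for a 127° neck-shaft
    angle: cup_AV + 0.68 * stem_AV = 31.3 (all values in degrees)."""
    return 31.3 - 0.68 * stem_anteversion_deg

# At the study's optimal 15° stem anteversion, the regression suggests about
# 21° of cup anteversion; at 25° stem anteversion, noticeably less.
cup_av_at_15 = cup_anteversion_target(15.0)
cup_av_at_25 = cup_anteversion_target(25.0)
```

Because the stem contributes only 0.68 per degree versus 1.0 for the cup, adjusting the cup is the more effective way to stay on this line, as the paper notes.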
Fig. 3 Putting the stem into -5° of retroversion yielded other iso-lines and a smaller shaded combined target zone of a different shape, located in the upper part of the diagram.

Fig. 4 Putting the stem into 40° of anteversion yielded other iso-lines and a smaller shaded combined target zone of a different shape, located in the lower part of the diagram.

Volume 478, Number 8 Individual, Combined Target Zone

This correlation is called combined version. The contribution of stem anteversion to the combined-version value was only 68% compared with 100% for cup anteversion. Therefore, it is more effective to adjust the cup than the stem to satisfy this equation. Changing the neck-shaft angle also changed combined version. For example, for a straight stem with a neck-shaft angle of 121°, the combined version was 24°, while it was 33° for a stem with a 135° neck-shaft angle (Fig. 13). Therefore, combined version also depended on prosthesis design and was not the same for all prosthesis designs. The design that provided the largest combined target zones using a 32-mm head was a straight stem with a 127° neck-shaft angle implanted at 15° of stem anteversion, combined with a cup implanted at 40° of inclination and 20° of radiographic anteversion, resulting in a combined version of 31°. When the head size was increased, the corresponding lowest possible cup inclination decreased: cup inclination was 42° for a 28-mm head, 40° for a 32-mm head, 38° for a 36-mm head, and 36° for a 40-mm head.

Discussion

Impingement-free component orientation is a cornerstone for enhanced joint stability in THA.

Fig. 6 The size of the combined target zone was dependent on the neck-shaft angle. The largest zones were found for neck-shaft angles from 122° to 130° (head sizes from 28 mm to 40 mm, stem anteversion 15°).

This study identified a multifactorial, dynamic interplay between cup and stem that must be considered when aiming to maximize the impingement-free ROM.
This interplay involves both design and implantation parameters, such as head size, head-to-neck ratio, neck-shaft angle, stem anteversion, cup inclination, and cup anteversion. Hence, considering both components is of utmost importance [31]. An accurate best-fit combination of component orientations providing the largest combined target zone was identified (Fig. 5). Changing one or more component orientations or design parameters results in the need to adjust the other parameters. The combined target zone is a guide for how to perform these adjustments to achieve the widest impingement-free ROM possible.

Fig. 7 The largest combined target zones were found for stem anteversions from 5° to 25° for all head sizes from 28 mm to 40 mm.

Fig. 8 The largest combined target zones were found for radiographic cup anteversions from 15° to 29°. The size of the combined target zone was substantially smaller for cup anteversions below 11° and above 31°.

Limitations

The study has several limitations. First, it is theoretical in nature, and second, it addresses prosthetic impingement only. On one hand, being theoretical is a big advantage because any design, component orientation, or prosthetic joint movement can be tested virtually in the computer-aided design system. On the other hand, the scope of all test parameters had to be limited to the range actually used in clinical practice; the parameters should include realistic numbers. For example, for stem anteversion the lowest value was -10° and the highest was 40°; it would not make sense to test 90° of stem anteversion even though the algorithm would process it. The hip movements selected for testing were predefined based on available evidence [46,55,67].
Although this set of movements included various combined flexing/extending + internally/externally rotating + ab/adducting movements known to trigger dislocation, the combined target zones are still limited to the parameters tested, and movements that diminish the target zones even further are possible. Obviously, by addressing prosthetic impingement only, bone-on-bone, implant-on-bone, and soft-tissue impingement were not covered. Furthermore, patient-specific parameters such as sex, age, BMI, and height were not included in the modeling. This was done intentionally, since the goal was to test only the impingement-free prosthetic ROM of a straight-stem/hemispheric-cup prosthesis. As is known, a successful THA depends on more factors, including meticulous surgical technique, consideration of relevant biomechanical aspects during preoperative planning [25,42], attention to systemic risk factors, and adequate physiotherapy. Finally, we note that orienting the components is only one step in prosthesis implantation; containment, that is, achieving as much bone coverage as possible [94], and component fixation [96] are additional important aspects.

Target Zones for Cup and Stem Orientation

We developed a computer model to determine the target zones for cup and stem orientation and implant design in a hypothetical THA implant of parameterized geometry (that is, expressing implant geometry in terms of changeable design parameters) so as to maximize the impingement-free ROM. First, we found that for a 127° neck-shaft angle, 15° of stem anteversion, a 32-mm femoral head, a 2.67 head-to-neck ratio, and a hemispherical shell, the impingement-free targets were 40° for cup inclination, 20° for cup anteversion, and 31° for combined version. We also found tolerance ranges for each component orientation. Lewinnek's recommendations for the cup are close to these results, provided that a 127° neck-shaft-angle stem is used and the functional stem anteversion is adjusted to 20°.
Therefore, when putting a 127° straight stem into 20° of anteversion, the surgeon should continue to apply Lewinnek's recommendation for cup placement [50]. But there are important points to consider: our target zone was smaller than Lewinnek's safe zone and smaller than one estimated by another recent modeling study [27]. In particular, putting the cup into lower inclination and lower anteversion is not recommended, because a cup at 30° of inclination and 5° of anteversion lies outside the polygonal combined target zone that we found (Fig. 2). A rectangular safe zone, such as the one posited by Lewinnek et al. [50], would therefore leave a surgeon (and his or her patient) vulnerable to an alignment error that could result in impingement or dislocation. Indeed, adjusting cup and stem orientation in tandem is an important detail in THA, and the Lewinnek safe zone does not adequately make this clear. Furthermore, we confirmed the important inverse-linear correlation between stem anteversion and cup anteversion (Fig. 12). This inverse-linear regression is related to the concept of combined version; it has been demonstrated for one specific prosthesis design [95] and confirmed by other investigators [6,39,41,99]. Its clinical impact, improving THA stability, has been demonstrated in clinical studies for primary and also for revision surgery [22,52,60,64]. We found that the combined target zone is located at a higher (Fig. 3) or lower (Fig. 4) position in the diagram depending on stem anteversion.

How Implant Positions Varied with Changes in Component Design

By increasing the head size and the head-to-neck ratio, the target zone size also increased, meaning that the range for orienting the hemispheric cup was wider. Hence, prosthetic impingement was less likely to occur when we modeled a THA using a larger head with a larger head-to-neck ratio, and consequently the risk of dislocation might be reduced when using implants with those design features [15].
Clinical experience supports the stabilizing impact of larger heads [40]. However, there are potential concerns with larger femoral heads, such as an increased corrosion risk at the head-neck trunnion [9] or wear [19], so the surgeon needs to consider the potential trade-offs of these implant-selection choices.

Fig. 13 Combined version was dependent on neck-shaft angle. Greater neck-shaft angles required greater combined versions. There was a ceiling effect showing an asymptotic leveling at 34° of combined version. The marker size represents the relative size of the combined target zone, showing the largest zones from 123° to 130° neck-shaft angles, corresponding to 25° to 32° of combined radiographic version (head size is 32 mm, stem anteversion is 15°).

The combined target zone expanded to higher and lower cup anteversion but also to lower cup inclination. The clinical conclusion of the latter observation is that the cup should be implanted at the lowest inclination possible, which will increase the jumping distance [63,75] and improve tribology [4,52]. In addition, we observed that when the head size changed, the neck-shaft angle, stem anteversion, and cup anteversion did not have to be changed, and the combined version remained the same. Neck-shaft angles between 122° and 130° yielded the largest target zones. Stems with neck-shaft angles of 115° or less, or greater than 135°, provided substantially smaller target zones. Neck-shaft angle also determined cup inclination; the lowest cup inclination was possible for 127° to 131° neck-shaft angles. Neck-shaft angle also influenced cup anteversion (Fig. 6) and combined version (Fig. 10), demonstrating that combined version is not a constant value for all prostheses but depends on design and implantation parameters [41,64]. In other words, combined version is also a prosthesis-specific parameter.

Clinical Relevance

Preventing dislocation in THA is an important goal.
Although hip instability is multifactorial, it seems important to reduce the risk of prosthetic impingement. If a THA joint is unstable despite the components being adjusted to the orientations presented here, it is very likely that one or more other causes are triggering instability, and these might need correction. It should also be noted that individual changes in pelvic tilt intra- or postoperatively do affect functional cup orientation [5,8] and should be considered during surgery [35,47,76,78,101].

Conclusions

We determined the multidimensional interrelationship of impingement-free component orientation using a hypothetical hip prosthesis representing a straight-stem type prosthesis. We determined design-specific recommendations for cup and stem orientation to maximize an impingement-free target zone. We also calculated adjustments to those orientations when the position of one or both components was changed (Fig. 5). The analysis provided recommendations on how to reach the largest ROM for a prosthesis with design parameters familiar to many surgeons (straight stem, hemispherical shell). Transformation of these results into precise component implantation during surgery may benefit from additional tools such as navigation and/or robotics [14,17,29,43,44,49,59,80,81,82]; however, this is speculative and, as of now, unproven. Future clinical studies might show whether such prosthesis- and patient-adjusted component orientations will help to enhance THA stability in our patients.

This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND), which permits downloading and sharing the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.
Electric field strength sensor of cylindrical form

An electric-induction cylindrical electric field strength sensor is considered. The aim of this work is to investigate the interaction of a cylindrical sensor with the inhomogeneous electric field of a linear charge, in order to determine the sensor parameters that affect its error in an inhomogeneous field. Optimizing these parameters allows the creation of sensors with known, guaranteed metrological characteristics whose additional error due to field inhomogeneity is no more than 3% over the spatial range from 0 to 3R from the field source. As a result of the study, a formula for estimating the error of a cylindrical sensor caused by the inhomogeneity of the electric field, as a function of both its angular and linear dimensions and the spatial measurement range, was obtained for the first time. The range of application of such high-precision electric field sensors is wide; they can be used both in production processes and in various areas of society.

Introduction

The varied conditions in which electric field strength sensors are used [1] make it necessary to constantly improve the designs of both the sensor body and the sensing electrodes. The most common physical phenomenon underlying the construction of electric field strength sensors is electrical induction. The essence of this phenomenon is that a redistribution of electric charges occurs in a conducting body placed in an electric field. As a result of this displacement, the electric field intensity inside the body and the tangential component of the field on its surface vanish. These two conditions allow us to assume that the potential of the body is the same at all its points and equal to the potential of the external electric field at some point of the conducting body. The point of the body whose potential equals the potential of the external electric field is called the reference point.
The reference point coincides with the center of the electric charges of the body, whose position in the body can be determined according to expression, where r is the distance from the center of symmetry to the center of the electric charges of the body and r_i is the distance from charge q_i to the center of the electric charges of the body. For symmetric conducting bodies (a sphere, a cube, a cylinder) located in a uniform electric field, the reference point coincides with the center of symmetry of the body; in an inhomogeneous field it does not. This feature must be taken into account when constructing and using sensors to measure electric field strength. It should be noted that the sphere, the cube, and the cylinder are the most common shapes of conducting bodies used to construct electric field strength sensors [2]. The effect of electric field non-uniformity on the stability of the measurement result is considered in [3] for various types of sensors; in [4–7], electric field strength sensors of spherical shape, as well as sensors of cubic and planar shape, are examined and analyzed, but there are practically no works on sensors of cylindrical shape. Therefore, the authors of this paper set out to consider and analyze the features of the construction and behavior of a cylindrical electric field strength sensor in fields of differing non-uniformity.

Task set

The complex mechanism of the action of electric fields on biological objects has not yet been sufficiently studied. This complexity requires improving the metrological characteristics of both the electric field sensors and the means of processing their signals. The main contribution to the total error of the sensors and their signal-processing means is the error of the sensor itself. The sensor error can be reduced by taking into account the factors governing the interaction of the sensor with the electric field when developing and optimizing the structural elements of the sensor.
The existing need to improve the constructive and, as a consequence, the metrological characteristics of sensors obliges us to create new electric field strength sensors. One such new sensor is the electric-induction cylindrical electric field strength sensor. The sensor is required to have output signals that are independent (within the minimum possible error) of the inhomogeneity of the electric field over a wide spatial range of measurements.

Theory

The theory of constructing the electric field strength sensor is based on analyzing the interaction of a conducting cylinder of height h and radius R placed in electric fields of differing non-uniformity with intensity E₀ = E·sin ωt (further in the text, simply E₀). As fields of differing non-uniformity, we choose two limiting cases: a uniform electric field and the strongly non-uniform field of a linear charge. The uniform field acts as an exemplary, reference field; relative to it, the error of the sensor operating in the non-uniform field of a linear charge is estimated. We assume that the error of the sensor in other non-uniform fields is smaller than in the field of a linear charge. By a linear charge we mean an infinitely long, uniformly charged filament. The linear-charge field is chosen as the field of greatest non-uniformity that can be modeled when analyzing the behavior of the sensor in a non-uniform field.

Description of a single-coordinate cylindrical sensor

The sensor is based on a dielectric cylinder 1 of radius R and height h. On its external surface, along one coordinate axis, two conducting sensitive elements 2 and 3 of semi-cylindrical shape with angular size θ₀, their concave sides facing the cylinder axis, are placed diametrically opposite each other and are isolated from each other.
Sensitive elements 2 and 3 are thin conducting layers, 10–100 μm thick, deposited by nanotechnology methods on the surface of the dielectric cylinder; their radius and length coincide with the dimensions of the dielectric cylinder, as shown in FIG. 1. If the radius R of the dielectric cylinder 1 is much larger than the thickness of the sensitive elements 2 and 3, then we can assume that the radius of curvature of the sensing elements is also equal to R. Making the gaps between the sensing elements 2b << R, we can assume that the sensing elements 2 and 3 have equal potentials (special measures are taken to ensure this), so that the sensor acts as a single conducting cylindrical surface.

Single-coordinate cylindrical sensor in a uniform field

In considering this question, we use FIG. 2. The solution of this problem reduces to finding the surface density of the induced charge on the conducting cylinder via the potential of an arbitrary point A lying outside the cylinder, the intensity vector of the resultant field via the potential gradient, and the components of the intensity vector of the resultant electric field (3), where ε is the relative permittivity of the medium surrounding the conducting cylinder and ε₀ is the electric constant. Analysis of expression (4) shows that the surface density of the charges on the conducting cylinder in the uniform field is not constant but varies according to a cosine law with the polar angle θ. The charge is therefore non-uniformly distributed over the surface of the cylinder. For example, at θ = 0 and θ = π, on the generators of the cylinder, maximal but opposite charges of density σ = ±2εε₀E₀ arise, while at θ = π/2 and θ = 3π/2 the density is zero. Thus, the lower half of the cylinder (FIG. 2) acquires a positive charge and the upper half a negative one. The plane dividing the cylinder into two oppositely charged parts may be referred to as the plane of electric neutrality.
In the uniform field this plane coincides with the plane of geometric symmetry, i.e. the plane dividing the cylinder into two equal parts. Consider the operation of the above-described single-coordinate sensor in a uniform field. Let some volume V of space be filled with a dielectric of relative permittivity ε (in particular, air) and contain a time-varying uniform electric field (EF) of intensity E₀ generated by external sources. To measure the field, the cylindrical sensor is introduced into it. It is necessary to establish a mathematical relationship between the electric charges induced on the sensitive electrodes of the sensor and the field intensity E₀. All geometric relationships, the dimensions of the electrodes forming the field of the source, and the location of the sensor in this system of electrodes are considered known. Let us select one of the sensitive elements, for example element 2 on the x-axis, on the surface of the sensor and determine the induced electric charge (FIG. 1). It follows from expression (3) that when the conducting insulated cylinder is introduced into the EF, only the normal component of the field strength E_ρ remains on its surface, determined by the parameters of the field, the size of the cylinder, and the parameters of the cylinder material. The electric charge acquired by the insulated conducting cylinder is defined by (5) [8–10], where σ is the surface charge density defined by (4); dS = R dθ dz is the element of the cylindrical surface expressed in the polar coordinate system; R is the radius of the cylindrical electrode; θ is the angle of the polar coordinate system; dz is the element of the z-axis, which coincides with the axis of symmetry of the cylinder and varies from 0 to h; and h is the height of the cylinder. The area of the cylindrical sensitive electrode is determined from expression (6) after substituting the corresponding integration limits (FIG. 1). Taking into account expressions (4), (5), and (6), we find the charges induced by a uniform electric field on the surfaces of cylindrical sensitive electrodes 2 and 3 (FIG. 1), expressions (8) and (9). As can be seen from expressions (8) and (9), the charges induced on the conducting surfaces of the sensitive electrodes 2 and 3 are proportional to the electric field strength. Therefore, they can act as a measure of field strength. In this case, the sensor sensitivity G of electrodes 2 and 3 will be equal to (10), where "+" corresponds to sensitive element 2 and "−" to sensitive element 3. The presence of two sensitive electrodes oppositely located on one coordinate axis allows us to speak of a dual sensor. With differential connection, the total charge of the sensor, and consequently its sensitivity, is doubled (12). It follows from expressions (10) and (12) that the sensitivity of the sensor depends on the geometric dimensions of the cylindrical sensor body, namely the radius R and height h, as well as on the angular θ₀ and linear h dimensions of the sensitive electrodes. At the same time, the sensitivity of the sensor does not depend on the distance to the field source, since the field source is assumed to be at infinity. If the geometric dimensions of the cylindrical body and of the sensitive electrodes do not change during measurement, the sensor sensitivity remains constant. This condition is satisfied when the sensor is placed in a uniform electric field.

Single-coordinate cylindrical sensor in the non-uniform field of a linear charge

We place the cylindrical sensor in the electric field of a linear charge. As the linear charge, we consider a uniformly charged rectilinear filament with electric charge density τ per unit length (Figure 3). We shall find the charges induced by the electric field of the linear charge on the surfaces of the cylindrical sensitive electrodes 2 and 3 oriented along the direction of the field.
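The integration described above can be sketched numerically. This is a hedged illustration, not the paper's own expression (8): it assumes the cosine charge-density law σ = 2εε₀E₀ cos θ implied by the quoted extrema (±2εε₀E₀ at θ = 0, π; zero at θ = π/2), and assumes the electrode is centered on the θ = 0 axis. Under those assumptions, integrating σ over dS = R dθ dz gives the closed form q = 4εε₀E₀Rh sin(θ₀/2), which the midpoint-rule integration reproduces; the net charge over the whole cylinder vanishes, as it must for an isolated conductor.

```python
import math

EPS0 = 8.8541878128e-12  # electric constant (vacuum permittivity), F/m

def sigma(theta, E0, eps_r=1.0):
    """Surface charge density on the cylinder in a uniform field, assuming
    the cosine law sigma = 2*eps*eps0*E0*cos(theta) implied by the text."""
    return 2.0 * eps_r * EPS0 * E0 * math.cos(theta)

def electrode_charge(E0, R, h, theta0, eps_r=1.0, n=20000):
    """Charge induced on a cylindrical electrode of angular size theta0
    centered on theta = 0 (an assumption of this sketch), found by
    midpoint-rule integration of sigma over dS = R dtheta dz."""
    dtheta = theta0 / n
    q = 0.0
    for i in range(n):
        th = -theta0 / 2 + (i + 0.5) * dtheta
        q += sigma(th, E0, eps_r) * R * dtheta * h  # dz integral gives h
    return q

# Illustrative values (hypothetical): E0 = 1 kV/m, R = 2 cm, h = 10 cm.
E0, R, h, theta0 = 1000.0, 0.02, 0.1, math.pi / 2
q_num = electrode_charge(E0, R, h, theta0)
q_closed = 4.0 * EPS0 * E0 * R * h * math.sin(theta0 / 2)  # analytic integral
print(q_num, q_closed)

# Consistency check: the net charge over the whole cylinder must vanish.
q_total = electrode_charge(E0, R, h, 2.0 * math.pi)
```

The charge being proportional to E₀ with a constant geometric factor is exactly the "constant sensitivity in a uniform field" property the text describes.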
To do this, we use expressions (5) and (6), as well as the expression for the electric charge density on a conducting cylindrical surface located near a linear charge, where E₀ is the intensity of the initial EF produced by the uniformly charged rectilinear filament with linear charge density τ at the point with coordinates ρ = 0, θ = 0, z = 0 in the absence of the conducting cylinder, and a = R/d is the relative distance from the cylinder center to the field source (it characterizes the degree of field non-uniformity). Using tables of integrals [11] and taking this into account, the electric charges on the sensing electrodes 2 and 3 (see FIG. 1) are defined by expressions (14) and (15). It follows from expressions (14) and (15) that the charges induced by the inhomogeneous field on the conducting surfaces of the sensitive electrodes 2 and 3 are proportional to the strength of the initial electric field. Therefore, as in the uniform field, they may serve as a measure of the strength E₀. However, in the inhomogeneous field the sensitivities G of the sensor with respect to electrodes 2 and 3 will differ and are determined according to expressions (16) and (17). With differential connection of the sensor, the total charge from the sensor electrodes and its sensitivity are correspondingly given by (18) and (19). It follows from expressions (16), (17), and (19) that the sensitivity of the sensor placed in the non-uniform field of a linear charge is determined not only by the geometric dimensions of the cylindrical body and the sensitive electrodes, but also by the relative distance a = R/d to the field source (R is the radius of the cylindrical sensor housing, d is the distance from the axis of symmetry of the sensor to the linear charge). Consequently, the sensitivity of the sensor in the non-uniform field does not remain constant but depends on the distance to the field source. This dependence leads to an additional sensor error caused by the inhomogeneity of the electric field.
Let us estimate this error. For this, we use expressions (11) and (18) and the well-known formula for the relative error. The error expression contains the parameter a characterizing the proximity of the sensor to the field source. Using the mathematical editor MathCAD 14, we plot the error of the sensor due to field inhomogeneity as a function of the parameter a. The plot of this error is shown in FIG. 4.

Test results

The investigations made it possible to establish the relationship between the parameters of the cylindrical electric field strength sensor and the spatial measurement range under field non-uniformity. This relationship is reflected in the expression, obtained here for the first time, for the sensor error caused by field non-uniformity as a function of the angular dimensions θ₀ of the sensor's sensitive electrodes and the spatial measurement range a. Analysis of this error (FIG. 4) shows that a sensor with sensitive-electrode angular dimensions θ₀ = π/2 has a negative error over the entire spatial measurement range, and at a > 0.3 this error exceeds 3%.

Conclusions

The mathematical dependence (20) of the error of a cylindrical sensor caused by the non-uniformity of the electric field on the angular and linear dimensions of the sensor and on the spatial measurement range a = R/d, which limits the range of use of the sensor, was obtained for the first time. From the analysis of the graphical dependence of this error (FIG. 4) it follows that the sensor has a negative error due to field non-uniformity of up to −3% in the spatial range from 0 to 3R from the field source, where R is the radius of the cylindrical body of the sensor. The sensor gives underestimated charge values in a non-uniform field, which can lead to a biased evaluation of the effect of the electric field on technical and biological objects.
The next stage of this study will address the problem of optimizing the dimensions of the sensitive elements in order to minimize the error caused by the non-uniformity of the electric field.
Comparison of complications between transcatheter and surgical ventricular septal defect closure: a single-center cohort study

BACKGROUND Some ventricular septal defects (VSDs) require an interventional procedure for closure. Transcatheter and surgical closures of VSD have similar effectiveness, but transcatheter VSD closure is considered to be associated with fewer complications than surgical closure. This study aimed to compare mid-term and long-term complications of transcatheter and surgical VSD closures.

METHODS This retrospective cohort study compared the complication rates of transcatheter and surgical VSD closures performed in Cipto Mangunkusumo Hospital from January 1, 2010, to April 30, 2017, with 34 subjects in each group. The inclusion criteria were as follows: single-lesion outlet perimembranous or doubly committed subarterial VSD, age 2–18 years, body weight >8 kg, and no arrhythmia. Electrocardiography and echocardiography were done to collect primary data; other data were collected from medical records. Mid-term complications occurred 1–24 months after interventional closure; long-term complications occurred more than 24 months after interventional closure. Complications were arrhythmia, valve regurgitation, and residual shunt. Data were analyzed by chi-square test.

RESULTS The rate of worsening valve regurgitation was higher in the transcatheter group than in the surgical group (16 versus 11, p = 0.322). The numbers of patients with residual shunts were similar between the transcatheter and surgical groups (5 versus 5; p = 1.000). Both complications were found at mid- and long-term follow-up. Arrhythmia as a long-term complication occurred in five and seven patients in the transcatheter and surgical groups, respectively (p = 0.752).

CONCLUSIONS Transcatheter and surgical VSD closures have similar mid- and long-term complications.
Ventricular septal defect (VSD), the most common congenital heart disease (CHD), is characterized by a defect or hole in the septum separating the left and right ventricles.¹ VSD is estimated to occur in 20–30% of all CHD cases, and perimembranous VSD (PM VSD) accounts for approximately 70% of all VSDs.¹ VSD is classified according to the location of the defect: perimembranous outlet VSD (PMO VSD), doubly committed subarterial VSD (DCSA VSD), perimembranous inlet VSD, and muscular VSD.¹ The disease course of VSD varies widely, ranging from spontaneous closure to death in early infancy. Spontaneous closure usually occurs at age 2 years and is uncommon after age 4 years, though it has even been reported in adults.² Spontaneous closure most commonly occurs in muscular defects (80%), followed by perimembranous defects (35–40%) and small defects. PMO VSDs have a low rate of spontaneous closure between ages 2 and 4 years (7.3%), while perimembranous inlet VSDs almost never close spontaneously.³ Before the interventional cardiology era, VSD closure was done by surgery alone, which is still the current gold standard. Complications often occur during and after surgical VSD closure due to the use of cardiopulmonary bypass (CPB); these complications can lead to developmental problems in pediatric patients.⁴ Transcatheter closure of the VSD is considered more efficient, especially for hospitals with limited human resources and facilities, because it does not require special monitoring in the intensive care unit after the procedure; thus, it may reduce maintenance costs. It is also considered less life-threatening because CPB is not needed. However, not all VSDs can be closed via the transcatheter approach, including large VSDs, which are usually accompanied by heart failure, and VSDs in complex CHD. Moreover, controversy remains over whether the device used can compress the conduction pathway and cause heart rhythm disorders.⁵ The most commonly observed
complications after transcatheter VSD closure are heart rhythm disorders, aortic and tricuspid valve regurgitation, residual shunt, thrombosis, hemolysis, and embolization.⁵ One of the serious complications of transcatheter VSD closure is total atrioventricular (AV) block, which occurred in 2% of cases during nearly 2 years of follow-up; hence, long-term monitoring is necessary. Aortic and tricuspid regurgitations can be mid-term or long-term complications of transcatheter or surgical VSD closure,⁶ and residual shunt is another complication.⁷ Complications of transcatheter and surgical VSD closures are often missed during monitoring, resulting in late intervention. Therefore, this study aimed to monitor mid- and long-term complications through echocardiography and electrocardiography (ECG) and to compare the complications of both procedures, including heart rhythm disorders, valve regurgitation, and residual shunts, in our center.

Study design

This retrospective cohort study was conducted at Cipto Mangunkusumo Hospital from March to May 2017. Data were taken from the electronic health records of all subjects with PMO VSDs and DCSA VSDs attending the Integrated Heart Service between January 1, 2010, and April 30, 2017. Parents of children who met the inclusion criteria were then contacted by telephone for participation, examination, and appointment for ECG and echocardiography between March 1 and May 30, 2017. ECG and echocardiography were performed by a pediatric cardiology trainee and confirmed by a consultant pediatric cardiologist.
Sampling was done consecutively. The inclusion criteria were as follows: (1) subjects with a single-lesion PMO or DCSA VSD who had undergone transcatheter or surgical VSD closure and (2) age 2–18 years and/or body weight ≥8 kg at the time of VSD closure. The exclusion criteria were as follows: (1) VSD subjects with complex CHD, (2) subjects with inlet or muscular VSD, (3) incomplete medical record data, (4) unwillingness to participate in the mid-term or long-term monitoring, and (5) history of arrhythmias prior to VSD closure. Every subject was classified as having mid-term or long-term complications according to the results of the last examination after the intervention. Mid-term complications were measured 1–24 months after transcatheter and surgical VSD closures, while long-term complications were measured more than 24 months after VSD closure.⁸ Arrhythmia was defined as an abnormal heart rhythm other than sinus rhythm on the ECG recorded at the time of monitoring. Residual shunt was defined as the presence of flow from the left ventricle to the right ventricle detected by transthoracic color Doppler echocardiography, classified as (a) trivial (<1 mm), (b) small (1–2 mm), (c) moderate (>2–4 mm), and (d) large (>4 mm).⁹ Valve regurgitation was assessed by jet areas on color Doppler transthoracic echocardiography and classified as mild, moderate, or severe.
VSD size before closure was based on transthoracic echocardiography data: (1) small VSD, defect size less than ½ the diameter of the aortic annulus; (2) moderate VSD, defect size about ½ the diameter of the aortic annulus; and (3) large VSD, defect size equal to or greater than the aortic annulus diameter. The aortic annulus diameters of infants and young children and of older children and adults are 10 and 20 mm, respectively.⁹ The right atrial approach is a technique for VSD closure in which the tricuspid valve leaflets are retracted to expose the location of the VSD. The transpulmonary arterial approach is a technique for VSD closure in which the defect is approached through a vertical incision in the pulmonary trunk.⁹

Statistical analyses

The number of samples was calculated based on a minimum sample size for each procedure using a formula for the difference in proportions of transcatheter and surgical complications. We assumed an alpha of 5% and beta of 20%, with p1 (proportion of complications of surgical VSD closure according to the literature)¹⁰ of 0.323 and p2 (proportion of complications of transcatheter VSD closure according to the literature)¹⁰ of 0.07. Differences in the proportions of arrhythmia events, changes in the severity of valve regurgitation, and residual shunts between the transcatheter and surgical groups were analyzed by chi-square test with 95% confidence interval and significance at p < 0.05.

Ethical approval

This study received ethical approval from the Ethics Committee of the Faculty of Medicine, Universitas Indonesia (No: 171/UN2.F1/ETIK/2017).

RESULTS

In total, 115 and 72 subjects with PMO VSDs and DCSA VSDs underwent transcatheter and surgical closures, respectively, but only 34 subjects in each group were eligible to participate in the monitoring. The flowchart of subject recruitment is shown in Figure 1. The subjects' characteristics are shown in Table 1.
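The two-proportion sample-size calculation described in the Statistical analyses section can be sketched as follows. This uses a common textbook version of the formula (pooled-variance normal approximation, two-sided test); the study does not state which exact variant it used, so the result here (38 per group) need not match the 34 subjects per group actually enrolled.

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Minimum sample size per group to detect a difference between two
    proportions, using the pooled-variance normal approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided alpha
    z_b = NormalDist().inv_cdf(power)           # power = 1 - beta
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Proportions of complications from the literature cited in the study:
# alpha = 5%, beta = 20% (power = 80%), p1 = 0.323, p2 = 0.07.
n = n_per_group(p1=0.323, p2=0.07)
print(n)
```

A different convention (for example, an unpooled variance throughout, or a one-sided test) shifts the answer by a few subjects, which may explain the study's figure of 34.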
PMO VSD was more prevalent in the transcatheter group than in the surgical group, whereas DCSA VSD was more common in the surgical group. In this study, 32 subjects had small VSDs, 20 had moderate VSDs, and 16 had large VSDs. Small VSDs accounted for the largest proportion in the transcatheter group, while large VSDs did so in the surgical group. The mean procedure duration was longer for surgical than for transcatheter VSD closure (157.2 versus 108.2 min, p < 0.001). The mean duration of CPB was 54.7 (range, 38–78) min, and the mean cross-clamp duration was 30.8 (range, 17–68) min. The surgical approaches used for VSD closure were the right atrial approach and the transpulmonary approach, performed in 22 and 12 subjects, respectively, and all defect closures were performed using patches. For transcatheter VSD closure, the antegrade approach was employed, and ADO II was used in the majority of cases. ECG features, valve regurgitation, and residual shunt in subjects before and after VSD closure are shown in Table 2. Prior to VSD closure, all subjects had sinus rhythm as well as normal PR interval and QRS duration. Left ventricular hypertrophy was found in 61 subjects, with a larger proportion in the surgical group than in the transcatheter group (33 versus 28).
After VSD closure, arrhythmias were detected in 12 subjects: 5 in the transcatheter group and 7 in the surgical group (p = 0.752). Four subjects experienced arrhythmia during mid-term monitoring. The most common type of arrhythmia after VSD closure was incomplete right bundle branch block (RBBB), which occurred in three and four subjects in the transcatheter and surgical groups, respectively. The largest proportion of patients with arrhythmias was found among subjects with the Amplatzer perimembranous VSD occluder. The number of patients who developed arrhythmias following surgical VSD closure was higher among those treated with the right atrial approach (n = 5) than among those treated with the transpulmonary approach (n = 2). Valve regurgitation was observed in 34 subjects prior to VSD closure and was more common in the surgical group (n = 31) than in the transcatheter group (n = 3). Aortic valve regurgitation occurred in 16 subjects after VSD closure, all trivial/mild; no moderate or severe aortic valve regurgitation was found after either procedure. Tricuspid valve regurgitation occurred in 19 subjects after closure. Increasing severity of valve regurgitation (aortic and tricuspid) before and after the procedures was found in 27 subjects, more commonly in the transcatheter group than in the surgical group (16 versus 11; p = 0.322). Overall, five subjects (14.7%) in each group had a residual shunt after VSD closure (p = 1.000).
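The 2×2 comparisons above can be reproduced with a continuity-corrected (Yates) chi-square test; for one degree of freedom the p-value is erfc(√(χ²/2)). This is a sketch using the counts quoted in the text, not the authors' actual analysis; it yields p ≈ 0.75 for arrhythmia (5/34 versus 7/34) and p ≈ 0.32 for worsening regurgitation (16/34 versus 11/34), in line with the reported values.

```python
import math

def yates_chi2_p(a, b, c, d):
    """Chi-square test with Yates continuity correction for a 2x2 table
    [[a, b], [c, d]]; returns (chi2, p) for df = 1."""
    n = a + b + c + d
    num = n * max(abs(a * d - b * c) - n / 2, 0.0) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    chi2 = num / den
    p = math.erfc(math.sqrt(chi2 / 2))  # df = 1 survival function
    return chi2, p

# Arrhythmia: 5 of 34 (transcatheter) versus 7 of 34 (surgical).
chi2_arr, p_arr = yates_chi2_p(5, 29, 7, 27)
# Worsening regurgitation: 16 of 34 versus 11 of 34.
chi2_reg, p_reg = yates_chi2_p(16, 18, 11, 23)
print(round(p_arr, 3), round(p_reg, 3))
```

An identical 5-versus-5 table gives χ² = 0 and p = 1.000, matching the residual-shunt result.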
DISCUSSION

This study showed an increase in the severity of regurgitation after transcatheter (n = 16) and surgical (n = 11) closure. A residual shunt was found in 5.9% of cases after transcatheter VSD closure and in 8.8% of cases after surgical closure. These results were similar to the findings of a previous study, in which residual shunt occurred in 5.4% of transcatheter cases and in 8.8% of surgical cases.⁸ In the present study, residual shunts were found in those using APMVO (50%) and ADO II. A previous study revealed that the safety and efficacy of the device used for VSD closure depends on device suitability and the type of defect.¹² Arrhythmias were found in five and seven subjects who underwent transcatheter and surgical closure of VSDs, respectively. Moreover, in this study, no total AV blocks were found following either VSD closure procedure. These results were similar to the findings of a study in China in which no total AV blocks were found following either procedure.¹⁰ Another study found that total AV block occurred in 3.5% of subjects who underwent surgical VSD closure,¹¹ while a study in Egypt reported 14 AV blocks in 400 subjects who underwent surgical VSD closure.¹³ Differences in total AV block events between the present study and other studies could be caused by many factors, especially the patient's condition before the procedure and the surgical technique used.⁸ Incomplete RBBB was the most common arrhythmia in both procedures; however, the rates were much lower than those in another study that reported 8.8% and 2.9% for transcatheter and surgical closure, respectively. In the present study, the incidence of incomplete RBBB was lower than that in another study, which reported 19.4% following transcatheter closure and 82.3% following surgical closure.¹¹ The difference in incidence rates could be due to the difference in monitoring time. In general, total AV block events do not occur immediately and require a more extended monitoring period.¹⁴ Arrhythmia was the most common complication
of VSD closure. It can be caused by compression of the conduction pathway by the device after transcatheter closure, or by injury to the His bundle, located in the ventricular septum, during surgery.¹⁴ Monitoring should be done periodically, encompassing mid-term and long-term monitoring, because complications can occur at different periods. The mean (standard deviation [SD]) age of subjects who underwent transcatheter closure was higher than that of subjects who underwent surgical closure in our study (6.1 [4.4] versus 5.6 [3.4] years). Other studies reported a median age of 7.5 versus 4.4 years⁷ and a mean age of 9 versus 1.8 years.⁹ Subjects who underwent surgical closure were older than those in other studies, so the complications after surgical closure were fewer than those in other studies. The number of DCSA cases in the surgical group was higher than that in the transcatheter group (13 versus 6). This is because DCSA VSDs are often accompanied by prolapse and aortic valve regurgitation; thus, surgical closure would be the best choice.¹⁵ In this study, the mean (SD) procedure duration of transcatheter closure was 108.2 (37.8) min, not much different from a study in Canada in which the mean (SD) procedure duration of transcatheter closure was 123 (26.9) min.⁹ In the present study, the mean (SD) fluoroscopy time in the transcatheter group was 27.5 (12.8) min, versus 29.89 (12) min⁹ and 29 (14) min¹⁶ in other studies. The mean (SD) procedure duration of surgical closure was 157.2 (23.0) min, shorter than in a study in China with a mean (SD) of 180.5 (66.1) min.¹⁴ The procedure duration of VSD closure did not affect mid- and long-term complications.
This study had some limitations. First, this was a retrospective cohort study using medical records, and some data were unavailable. Second, some children who underwent transcatheter or surgical closure were lost to follow-up. Thus, the results may not represent mid- and long-term complications of VSD closure in Indonesia. A prospective cohort study with a larger number of subjects is needed to assess the duration of arrhythmia, valve regurgitation, and residual shunts. Providing appropriate counseling to the patient and family is necessary, especially regarding the possibility of arrhythmia after transcatheter and surgical closure of VSD.

In conclusion, the incidence of arrhythmia, increased severity of aortic and tricuspid regurgitation, and the incidence of residual shunt after transcatheter VSD closure were not higher than after surgical VSD closure.

Table 2. ECG and valve regurgitation features before and after VSD closure and residual shunt features after VSD closure
Current Trends and New Challenges in Marine Phycotoxins

Marine phycotoxins are a multiplicity of bioactive compounds which are produced by microalgae and bioaccumulate in the marine food web. Phycotoxins affect the ecosystem, pose a threat to human health, and have important economic effects on aquaculture and tourism worldwide. However, human health and food safety have been the primary concerns when considering the impacts of phycotoxins. Phycotoxin toxicity information, often used to set regulatory limits for these toxins in shellfish, lacks traceability of toxicity values, highlighting the need for predefined toxicological criteria. Toxicity data, together with adequate detection methods for monitoring procedures, are crucial to protect human health. However, despite technological advances, there are still methodological uncertainties and a high demand for universal phycotoxin detectors. This review focuses on these topics, including the uncertainties of climate change, providing an overview of the current information as well as future perspectives.

Introduction

Harmful algal blooms (HABs) are a significant problem in coastal waters, particularly when they produce phycotoxins that accumulate in shellfish or fish, leading to the poisoning of humans and animals. The species that cause HABs are diverse, as are the habitats in which they occur. Climate change could affect the prevalence of HABs and the impact of phycotoxins on human and ecosystem health [1]. Phycotoxins cause human intoxications with clinical symptoms ranging from intestinal to neurological effects, but they may also provoke respiratory distress, dermatological problems, or even death [2]. There are currently six main phycotoxin poisoning syndromes that can occur after ingestion of contaminated shellfish, fish, or fishery products: paralytic, neurotoxic, amnesic, diarrhetic, and azaspiracid shellfish poisoning (PSP, NSP, ASP, DSP, and AZP) and ciguatera fish poisoning (CFP).
However, potential seafood contamination with tetrodotoxin or palytoxin is also of concern. Additionally, phycotoxins such as pectenotoxins, yessotoxins, and the cyclic imines are also considered. Paralytic shellfish poisoning (PSP) has been widely reported in many parts of the world [3]. Paralytic shellfish toxins (PSTs) include saxitoxin (Figure 1A) and its analogues, neurotoxic alkaloids produced mainly by dinoflagellates of the genus Alexandrium, by the species Pyrodinium bahamense and Gymnodinium catenatum, and by benthic and planktonic marine cyanobacteria such as Anabaena, Cylindrospermopsis, Aphanizomenon, Planktothrix, and Lyngbya. PSTs act through the reversible blockade of the voltage-gated sodium channels in excitable membranes, compromising the propagation of neural impulses in peripheral nerves and skeletal muscles (Figure 1) [4][5][6]. PSTs share a similar molecular weight, toxicity, and mechanism of action with tetrodotoxin (TTX) (Figure 1B) [7]. The main exposure to TTX for humans comes from fish of the Tetraodontidae family (pufferfish), which is forbidden in the European market, but this toxin also occurs in marine gastropods, oysters, mussels, and fish other than pufferfish (see Katikou et al. 2022 for an updated review of TTX) [8]. Amnesic shellfish poisoning (ASP) is caused by domoic acid (DA) (Figure 1C). This potent natural toxin is produced by the diatoms Nitzschia, Pseudo-nitzschia, and Amphora and is found worldwide [9]. Filter-feeding marine life, such as clams, oysters, mussels, and crabs, can accumulate DA and pass the toxin to humans and wildlife [10]. Azaspiracid shellfish poisoning (AZP) (Figure 1F) is a toxicity syndrome in humans due to the ingestion of azaspiracid-bearing shellfish and causes mainly gastrointestinal symptoms [16,17]. Therefore, azaspiracids (AZAs) were initially included in the DSP group.
The characterization of their chemical structure, along with a different mechanism of action, led to their classification as a stand-alone group [18,19]. AZAs are produced by planktonic species of the genera Azadinium and Amphidoma and can be further biotransformed by accumulating shellfish [20,21]. They have been reported in mollusk and crustacean species from numerous European countries [22][23][24] as well as in shellfish from Africa, Asia, and America [25]. Ciguatera fish poisoning, or Ciguatera, is the most common non-bacterial human illness associated with seafood consumption across the globe, affecting between 50,000 and 500,000 people annually, including rare lethal cases [26][27][28]. Ciguatoxins (CTXs) (Figure 1H) are lipid-soluble, thermally stable toxins responsible for Ciguatera and produced by microalgae of the genera Gambierdiscus and Fukuyoa. Gambierdiscus species also synthesize other toxins, including gambieric acids, gambierol, gambierone, gambieroxides, and maitotoxins [29]. However, until now there has been no evidence of their involvement in human Ciguatera cases [30]. Ciguatoxins are chemically similar to brevetoxins (Figure 1J,K), and both act on voltage-gated sodium channels by binding to receptor site 5, causing depolarization of neuronal and muscle cell membranes and triggering neurological symptoms in humans and animals (Figure 2) [31,32]. Brevetoxins (BTXs) are a group of polyether toxins that cause neurotoxic shellfish poisoning (NSP), named after the more or less pronounced neurological symptoms that co-occur with gastrointestinal signs. Humans are exposed to brevetoxins through the ingestion of shellfish, in which the toxins seem not to have any adverse effects. Aerosols containing the toxin also induce non-fatal effects on human health, including skin irritation, non-productive cough, shortness of breath, and tearing. There has been only a small number of sporadic cases of NSP in humans, with hospitalization but no fatalities.
However, brevetoxins have been implicated in the death of large numbers of fish and in the morbidity and mortality of marine mammals. To date, no cases of NSP have been reported in Europe [33]; however, the presence of these toxins in shellfish from the Mediterranean Sea [34] raises the question of the potential emergence of this group of toxins in areas preserved until now (see Hort et al. 2021 for an updated review of BTX) [35]. Palytoxin (PLTX) (Figure 1I) is a very potent natural toxin responsible for seafood poisoning and produced by soft corals of the genera Palythoa, Zoanthus, and Parazoanthus, by planktonic and benthic dinoflagellates of the genus Ostreopsis, and by cyanobacteria of the genus Trichodesmium. In recent decades, species of the genus Ostreopsis have been proliferating at temperate latitudes, including the Mediterranean coast, where recurrent blooms have occurred [36,37]. Even though PLTXs are not regulated in seafood in Europe, these toxins have been shown to be harmful and occur on European coasts. This review is focused on the toxins mentioned above and is organized in four major parts: the first part addresses the lack of traceability of toxicity values; the second discusses mechanisms of action and toxicity, including the need to establish objective toxicity parameters; the third addresses marine toxins as a source of drugs; and the fourth includes an update on toxin detection methods and the need for universal detectors, ending with climate change uncertainties.

Lack of Traceability of Toxicity Values

During the last decades, the occurrence and intensity of marine biotoxin intoxications have increased, paralleling a worldwide increase in harmful algal bloom (HAB) events due to international trade expansion and anthropogenic eutrophication [38].
In this situation, the need arises to obtain data about the toxicity of the different marine biotoxins in order to set safety regulatory limits for these compounds in seafood and protect the health of consumers. Traditionally, the main assays employed to evaluate the toxicity of marine biotoxins were the in vivo mouse bioassay (MBA) and in vitro cell viability assays [11]. However, the lack of certified reference materials and of standard operating procedures has hampered the acquisition of reliable toxicity data. The problem arises when the assay method is not suitable for evaluating the real toxicity of the toxins. A representative example of this problem is that the reference used to estimate the relative potency of some toxins that are not considered lethal is the MBA. This assay considers the death of mice after intraperitoneal administration of the compounds, though the main consequence of most of these intoxications is not death but the reported acute symptoms after oral or epidermal exposure mentioned above [39][40][41] and, for some toxins such as CTXs, long-term sequelae [42]. There is scarce information about the impacts of long-term low-level exposure [41,[43][44][45]. These types of studies are necessary to re-evaluate the real risk of human exposure to these toxins, since the main health risk for humans is chronic exposure to low levels of these toxins. In other cases, the use of cell-based assays may be controversial, since not all cell lines have the same characteristics. In vitro data are particularly useful when the mechanism of action of the toxin is known; however, for toxins whose mechanism of action remains unknown, it is difficult to select a proper cell line for these studies. A representative example of this issue is that the reference cell line used to detect and determine ciguatoxin toxicity is a neuroblastoma cell line.
The main cellular targets of CTXs are voltage-gated sodium channels (VGSCs), and undifferentiated neuroblastoma cells express only the Nav1.7 (most prominent) and Nav1.3 sodium channels, while the other sodium channels are expressed at low levels in cells cultured for up to 20 passages [46]. However, current assays for the evaluation of ciguatoxin effects in neuroblastoma cell lines use cells cultured for up to 383 to 810 passages [47]. Another source of controversy is the purity of the compounds used in the toxicity assays. Certified reference materials are not available for most marine biotoxins; thus, it is difficult to really know the purity and quantity of the toxin used if they were not determined using analytical methods [48]. The route of administration of the compounds is also a source of discrepancy, since many toxicological data are based on administration of the toxins through the intraperitoneal route, which is less relevant for the evaluation of human exposure than the oral route [48]. The limits for marine biotoxins in international trade were set over a decade ago by the Codex Committee on Fish and Fishery Products (CCFFP), which developed the Standard for Live and Raw Bivalve Mollusks (CODEX STAN 292-2008), adopted in 2008, amended in 2013, and revised in 2014 and 2015 [49]. This standard identifies maximum permissible levels in mollusk flesh for five groups of marine toxins. Specifically, the maximum levels for toxins with neurotoxic activity were established as follows: 0.8 mg saxitoxin (STX) equivalents/kg, 20 mg/kg for domoic acid (DA), 200 mouse units or eq/kg for brevetoxins (BTX), and 0.16 mg/kg for azaspiracids (AZA). It is noteworthy that each group of toxins comprises many analogues, and the limits are expressed in terms of the total toxicity of the analogues, which is not determined in a standardized way.
Traditionally, regulatory limits were established using the mouse bioassay, which involves the intraperitoneal administration of seafood extracts to mice [50], thus providing information about the total toxicity of the sample. Advances in technology, together with changes in legislation and ethical concerns, have allowed the number of alternative methods to the MBA for toxin monitoring purposes to increase. Any alternative method should provide a level of protection to consumers equivalent to that of the reference techniques and should be interlaboratory-validated through international systems such as the Association of Official Analytical Chemists (AOAC) or the European Committee for Standardization (CEN) [51]. Single-laboratory validation is only acceptable for implementing the method in house, not for considering a method validated and therefore official. However, the analytical quantification of a toxin and its analogues in a sample is not sufficient for monitoring purposes and requires the toxicity equivalency factor (TEF), since different analogues may have different toxic potencies [48]. This factor compares the toxicity of an analogue to that of the reference compound in the toxin group, so that the concentration of the analogue determined by analytical methods, in conjunction with the TEF, allows the toxicity contribution to be calculated and expressed as toxin equivalents. For adequate TEF estimation, it is necessary to appropriately define the toxicity of each compound. For instance, a minimum lethal dose of TTX of 2 mg has historically been assumed in humans [52][53][54], and this value is still in the literature even though a much lower LD50 of 232 µg/kg and a NOAEL of 75 µg/kg have been demonstrated after oral administration of a single TTX dose [39]. In addition, potential nephrotoxic and cardiotoxic effects have been observed at a TTX dose of 125 µg/kg after repeated oral administration [45].
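The TEF-weighted summation described above can be sketched in code. The analogue names, concentrations, and TEF values below are hypothetical placeholders, not regulatory figures; only the arithmetic (concentration × TEF, summed over analogues, expressed as equivalents of the reference compound) follows the procedure described in the text.

```python
# Illustrative sketch of converting analytically determined analogue
# concentrations into total toxin equivalents via TEFs. All numeric
# values below are hypothetical, not regulatory data.

def total_toxin_equivalents(concentrations_ug_kg, tefs):
    """Sum of concentration x TEF over all analogues, giving the toxic
    load expressed as equivalents of the reference compound."""
    return sum(concentrations_ug_kg[a] * tefs[a] for a in concentrations_ug_kg)

# Hypothetical sample: measured analogue concentrations (ug/kg shellfish flesh)
sample = {"reference": 100.0, "analogue_A": 300.0, "analogue_B": 50.0}
# Hypothetical TEFs relative to the reference compound (TEF = 1 by definition)
tefs = {"reference": 1.0, "analogue_A": 0.5, "analogue_B": 2.0}

print(total_toxin_equivalents(sample, tefs))  # 100 + 150 + 100 = 350.0 ug eq/kg
```

The result would then be compared against the regulatory limit for the toxin group, which is why the accuracy of each TEF directly affects consumer protection.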
Currently, the European Food Safety Authority (EFSA) has recommended a safe concentration of TTX below 44 µg of TTX equivalents/kg of shellfish meat in fishery products [55]. However, EFSA has also highlighted the need to re-evaluate the seafood safety risk to consumers, since recently reported results point to potential harmful effects of chronic low oral doses of TTX. This requires further and detailed studies, especially considering the possible synergies between different marine biotoxins with the same mechanism of action [43,45,56]. The same applies to ciguatoxins, which, in addition to the lack of certified standards, are not yet regulated in Europe in spite of the fact that the dinoflagellate producers of ciguatoxins are found on European coasts today: the presence of CTXs has been confirmed in locally sourced fish in the Canary Islands and Madeira [57]. As we and others have demonstrated, ciguatoxins elicit negative shifts in the activation of voltage-gated sodium channels [58,59]. Similarly, for palytoxins and related compounds, the dinoflagellate producers are continuously expanding through mildly temperate waters [60,61]. Thus, the toxicological data available for marine biotoxins should be re-evaluated considering all these aspects, since new information about the mechanisms of action of the toxins is known and new methodologies and purified compounds are available. This is especially relevant when the toxicity equivalency factor (TEF) is linked to a regulatory toxin level, and the establishment of TEFs depends on these data [48]. One of the priorities for using analytical methods efficiently for monitoring purposes is to establish the TEFs that will allow the toxicity of a sample to be known [48]. The replacement of the bioassay by analytical methods has several drawbacks [62]. The unknown toxicity of many toxin analogues as compared with the reference compound is the first problem for setting TEFs.
A second problem is the need for one standard for each analogue to be quantified and the lack of sufficient commercial standards, which has led most analysts to quantify several analogues against a single compound, leading to very large quantification errors of up to 200% [63]. The third problem is that human toxicological information on these toxins is extremely scarce. Therefore, most of the TEFs currently used by regulatory agencies are derived from in vivo toxicity by intraperitoneal administration of the compounds to mice, due to the lack of sufficient certified toxins [48]. Thus, accurate TEFs are essential for the control and establishment of regulatory limits for related compounds. It is noteworthy that the scientific opinions of the European Food Safety Authority (EFSA) on marine toxins are always generated by an expert group that reviews the existing literature for each toxin group. For example, the revision of TEFs for tetrodotoxin performed by EFSA [55] was based on acute oral toxicity data that were obtained just in time for the opinion [39]; without this study, it would not have been possible to set any risk value. This shows the imperative need for toxicological studies to properly regulate the presence of phycotoxins. However, for many years, EFSA has been highlighting a series of shortcomings that affect the legal limits and the monitoring of regulated marine phycotoxins in Europe [64]. Although these toxin limits are still used today, more studies are needed to improve consumer safety. Related to this, PTXs have been removed from the list of marine biotoxins to be analyzed in live bivalve mollusks in the EU [65]. This legislative change is a consequence of the absence of reports of adverse effects in humans associated with PTXs [66]. In other words, a toxin should not be regulated based solely on lethality in the mouse bioassay.
Caffeine is a clear example of this overstated policy; if caffeine were present in shellfish, it would be regulated as a toxin, since it is lethal in the mouse bioassay. The following needs were identified by EFSA: 1. Establishment of TEFs based on acute oral toxicity data, including analogues with toxic relevance at the levels at which they are present in mollusks; 2. Information on genotoxicity, oral toxicity, and toxicity mechanisms for some groups of toxins; 3. Information on the combined toxicity of different groups of toxins that are usually present together in mollusks. During the last decade, different authors have highlighted the limitations associated with the use of the current TEFs implemented in European legislation to quantify the toxicity of marine phycotoxins present in fishery products [38,[67][68][69]. Among the problems associated with the use of current TEFs for marine toxin monitoring are the following: 1. Most of the current TEFs are based on the acute effects of intraperitoneal (i.p.) injection of the toxins into mice, but these values do not reflect oral absorption, which is the relevant route for the effects of marine phycotoxins on human health; 2. The majority of TEFs used today have been estimated using toxins of unknown origin and purity and therefore generate discrepancies, as recognized by EFSA [64]. In fact, the need to use certified reference materials (CRMs) for the different toxin analogues in TEF determination is highlighted by several extensive reviews on marine toxin TEFs [48,70]. This is still a difficult problem to solve due to the absence of toxin CRMs for many analogues in the previously described toxin groups. In recent years, the commercialization of ISO 17034 certified reference materials for some marine toxins has been guaranteed in Europe through commercial channels (www.cifga.com; accessed on 11 January 2022); 3.
Current TEFs for marine neurotoxins proposed by EFSA have been obtained using differently purified toxins and quantifying the amount of toxin according to a different criterion in each laboratory, which increases the diversity and disparity of the data collected by EFSA. This should now be amended. In the case of working with toxins purified from mollusk samples, the Standard Operating Procedures (SOPs) of the European marine biotoxin reference laboratory harmonize the extraction of toxins from mollusk samples and the performance of the corresponding analytical or biological determinations. In summary, despite the fact that, during the last 10 years, several reports have reviewed the limitations of the current TEFs for monitoring marine neurotoxins with analytical methods [39,40,48,70], TEFs not obtained through oral administration of the toxin are still being used to determine the toxic load of samples obtained from fishery products. The limitations of current TEFs in providing an adequate assessment of marine phycotoxin-related toxicity have recently been collected in a technical report jointly prepared by FAO/WHO [70]. The following points summarize the drawbacks and recommendations reflected in this report for the use of TEFs as indicators of the toxic load of samples contaminated with marine phycotoxins: 1. First, the absence of correlation between the toxicity obtained by MBA and the acute oral toxicity is highlighted [71]. In general, compounds administered i.p. are absorbed quickly and completely from the peritoneal cavity, while oral administration can decrease the absorption of many substances; therefore, the i.p. route can indicate a much higher toxicity than the real one.
The opposite occurs when the toxin is metabolized to a more toxic analogue after oral ingestion: the MBA would give a lower toxicity, as in the case of neosaxitoxin, which is more toxic than saxitoxin [72], and other toxins of the paralytic group [73]. In fact, these two studies with paralytic toxins and other studies with diarrheic toxins [39,40] emphasize the need to revise the current TEFs using toxin CRMs and the oral route of toxin administration, in order to determine reliable TEFs that make analytical methods useful for neurotoxin monitoring [48]. 2. In certain cases, TEFs have been established after measuring in vitro toxicity or cellular effects of the toxin [74][75][76]. Although these studies take into account neither the absorption nor the metabolism or elimination of the toxins in vivo, these data have also been taken into consideration by EFSA to establish the current TEFs [64], even when there is no approved in vitro model to evaluate the toxicity of paralytic toxins. In fact, the PSP TEFs reported by FAO/WHO are a combination of in vitro effects on human sodium channels with oral toxicity in mice [70]. 3. Although oral toxicity is the relevant parameter for establishing TEFs for marine toxins, special caution is necessary: although the Organization for Economic Cooperation and Development (OECD) guidelines for determining acute toxicity (OECD 420, Acute Oral Toxicity-Fixed Dose Procedure, and OECD 425, Acute Oral Toxicity-Up-and-Down Procedure) establish administration of the chemical compound by gastric tube, the semisolid content of the rodent stomach can cause the toxin to be absorbed quickly in the duodenum instead of mixing with the stomach contents. In the case of marine toxins, it seems more appropriate to administer the toxins in food, to facilitate rapid toxin ingestion together with food. In fact, TEFs obtained by forced feeding (gavage-gastric tube) and by voluntary consumption of food [73] may show differences.
Mechanism of Action and Toxicity: The Need for Predefined Toxicological Criteria

Marine phycotoxins are bioactive compounds streamlined to act fast at very low concentrations. Most of them are highly specific for key physiological targets such as ion channels, enzymes, pumps, or cellular membrane receptors (Figure 2). Toxicity, and consequently human poisonings, are generally related to the specific interaction of these toxins with their targets. In addition, these characteristics make them ideal candidates for basic research as well as biotechnological applications. It must be emphasized that research in this field is advancing quickly, even though there are still many unknowns. The mechanism of action of some marine phycotoxins remains elusive. Furthermore, a precise relationship between some historically assumed toxin actions and the associated toxicity is lacking. Consequently, this review explores the available knowledge on these topics, identifying the gaps and highlighting future challenges and research priorities.

Marine Phycotoxins Acting on Voltage-Gated Sodium Channels

Voltage-gated sodium channels (VGSCs) are essential for the generation and transmission of action potentials in excitable cells such as neurons and muscle cells [77]. VGSCs are formed by a core α-subunit with four repeated domains (I-IV) coupled to β regulatory subunits. In those domains, two highly conserved regions (P1 and P2) partially re-enter the cell membrane from the outer side. In these segments, a ring of four amino acids, Asp-Glu-Lys-Ala (one from each domain), forms the DEKA motif, which comprises the "selectivity filter" allowing Na+ influx [78]. There are eight binding sites targeted by many toxins, including several phycotoxin groups [77,79]. Moreover, ten isoforms of VGSCs have been identified in humans (Nav1.1-Nav1.9 and Nax), nine of them functional [77]. The toxins acting on VGSCs are PSTs, TTX, BTXs, and CTXs.
BTXs and CTXs activate site 5 of VGSCs, while STX and TTX bind to site 1 and block ion conduction (Figure 2). The PST group of toxins includes more than 50 analogues. Several subgroups can be defined based on their chemical structure; of greatest importance are the carbamate, N-sulfocarbamoyl, and decarbamoyl derivatives, though other derivatives such as deoxydecarbamoyl have been identified (Figure 3) [5,6]. STX binds to a region located on the outer side of the channel known as site 1, blocking Na+ entry into the cell (Figure 2) [7,80,81]. Even though the STX-VGSC interaction has been known for decades, binding data originated from mutational cycles, electrophysiological recordings, and prokaryotic or chimeric VGSCs in combination with computational analysis [74,80,82,83]. Only recently has the structure of VGSC Nav1.4 been fully elucidated [77,83], followed one year later by the high-resolution structure of the STX-human Nav1.7 interaction [81]. This study determined the amino acids directly binding the neurotoxin. STX and TTX bind to VGSCs but with a different range of affinities for each isoform [32]. Thus, the sodium channel alpha subunits (Nav) Nav1.1 (EC50 6 nM), Nav1.2 (EC50 18 nM), Nav1.3 (EC50 4 nM), Nav1.4 (EC50 25 nM), Nav1.6 (EC50 6 nM), and Nav1.7 (EC50 24.5 nM) are highly TTX-sensitive, while Nav1.5 (EC50 5.7 µM), Nav1.8 (EC50 60 µM), and Nav1.9 (EC50 40 µM) are considered TTX-resistant [84]. Given the considerable number of STX and TTX analogues and the fact that new compounds continue to be added to the list [85], gathering data on binding affinities to VGSCs along with their oral toxicity in vivo entails a major challenge [86]. This further increases the resources needed to evaluate the potential toxicity of these toxins, which may be reduced by knowledge of the VGSC structure in combination with bioinformatic tools, though experimental data are essential to confirm computational predictions.
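The sensitive/resistant split quoted above can be reproduced from the EC50 values with a simple threshold. The EC50 numbers below are those cited in the text from [84]; the 1 µM cutoff used to separate the two classes is an illustrative convention, not a value taken from the source.

```python
# EC50 values (nM) for TTX block of human Nav isoforms, as quoted in the
# text (ref. [84]). The 1 uM (1000 nM) cutoff applied here to split
# TTX-sensitive from TTX-resistant isoforms is an illustrative choice.
ec50_nM = {
    "Nav1.1": 6, "Nav1.2": 18, "Nav1.3": 4, "Nav1.4": 25,
    "Nav1.6": 6, "Nav1.7": 24.5,
    "Nav1.5": 5700, "Nav1.8": 60000, "Nav1.9": 40000,
}

sensitive = sorted(n for n, ec in ec50_nM.items() if ec < 1000)
resistant = sorted(n for n, ec in ec50_nM.items() if ec >= 1000)
print("TTX-sensitive:", sensitive)
print("TTX-resistant:", resistant)
```

The three-order-of-magnitude gap between the two groups is what makes the sensitive/resistant classification robust to the exact cutoff chosen.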
Aside from analogue affinity and toxicity data, a growing concern arises from the additive effects of co-occurring toxins [87]. The symptomatology observed in vivo is in accordance with the action of these VGSC-targeted toxins. As a consequence of Na+ influx inhibition, neuromuscular complications are features of PSP, which can compromise breathing due to muscle unresponsiveness, leading to paralysis of the diaphragm and resulting in lethality in extremely severe cases [88]. TTX poisoning presents with similar symptoms, consistent with the shared mode of action [8]. Toxicity variability among PST analogues may result from their differing affinities for VGSC isoforms [89,90]. Among the main groups of marine toxins capable of causing effects in humans, PSTs are likely the most dangerous, given the severity of symptoms reported in seafood consumers. Bivalve mollusks are the traditional vectors, although PSTs have also been detected in some gastropods, crustaceans, and, less frequently, fish [91,92]. Shellfish feeding on PST-producing phytoplankton species can accumulate the toxins, in most cases without exhibiting adverse effects themselves [93]. Additionally, PSTs are heat-stable; thus, cooking does not destroy the toxins. After ingestion of bivalves containing STX, absorption occurs mainly in the gut, followed by distribution to the remaining organs and tissues over time [94]. PST analogues have different toxic potencies [95,96]. This is important since bioconversions of PSTs may occur in phytoplankton, bivalves, and humans [97]. One of the ways to establish the toxicity relationship among analogues is through TEFs. In the PSP group, this factor compares the toxicity of the analogue to that of STX. The concentration of the analogue determined by analytical methods, in conjunction with the TEF, allows the toxicity contribution to be calculated and expressed as STX equivalents [96].
TEF values were re-evaluated by oral administration (gavage or feeding), and, as a result, the TEFs for dcSTX and dcNeoSTX were lower when determined by oral toxicity than by MBA. However, the oral TEF for NeoSTX was higher than the value obtained by the MBA [72]. It is interesting to note that there is a better match between the TEFs obtained with in vitro methods based on toxin potency in blocking the Nav1.2 channel subtype [74] and those obtained by oral toxicity in mice [72] than with those from the MBA (Table 1). Related to this, conversion of dcGTX1&4 within the digestive tract to more toxic congeners may explain their high relative toxicity by feeding compared to that determined intraperitoneally [98,99]. There are also some gaps; for instance, TEFs have not yet been established for the most recently discovered toxins, namely those of the M-series, belonging to both the carbamate group (M2, M4, M6, M8, M10, and M12) and the N-sulfocarbamoyl group (M1, M3, M5, M7, M9, and M11), although the few data collected so far seem to suggest a low toxicity among them. Upon ingestion of STX-bearing shellfish, the severity of PSP symptoms depends on the analogues and doses ingested [94]. The poisoning effects occur quickly; the primary site of STX action in humans is the peripheral nervous system, causing a rapid onset of symptoms: numbness or a tingling sensation around the lips and tongue, which appear in less than 1 h and are due to local absorption of the PSP toxins through the buccal mucous membranes. Frequent symptoms also include a stinging sensation in the toes and fingertips, nausea, vomiting, diarrhea, dizziness, and headaches [100]. In severe poisoning, death may occur within 24 h of ingestion [3]. PSTs can also be fatal for marine wildlife. PSTs can enter the food web when toxin-producing dinoflagellates or cyanobacteria are ingested by shellfish, copepods, or other invertebrates and these, in turn, are consumed by larger organisms.
Ingestion of PSTs by mammal and bird species can result in muscular weakness, motor incoordination, respiratory paralysis, and death [101,102]. In order to protect human health and to promote the trade of safe seafood, maximum permitted levels of STX in seafood have been established by regulatory authorities in many countries, with the recommended regulatory level in CODEX being 800 µg STX equivalents/kg shellfish flesh [49]. The current regulatory limit for PSTs is based on the acute reference dose (ARfD) of 0.5 µg STX eq/kg body weight (bw) proposed by the European Food Safety Authority [65,103]. This limit seems appropriate in accordance with studies performed in mice to mimic human feeding behavior and diets containing STX [104]. However, oral toxicity assessment of natural toxin mixtures would reinforce consumer safety. The importance of toxicological knowledge on PSTs should be highlighted, also considering potential chronic human exposure [105]. A recent study demonstrated that daily exposure for 3 months to low levels of STX could cause significant cognitive deficits and neuronal cell loss. Alterations of hippocampal sphingolipid metabolism and Hippo-signaling-pathway-related proteins may be involved in STX-induced nerve damage [106]. It has been verified that STX can cross the placental barrier and reach the fetal brain. This contributes to the understanding of the toxic effects of these neurotoxins on the development of animal neuronal cells. Currently, there are no antidotes or therapies for PSP. Mortality induced by PSTs depends on the prompt recognition of PSP symptoms, which prevents complications and patient deaths. Clinical measures are taken to try to speed up detoxification; the use of activated charcoal to remove unabsorbed toxins or the cleaning of gastric contents may be considered. Rapid intervention includes fluid therapy, assisted ventilation, and hemodialysis. TTXs are also extremely potent toxins, with 25 analogues.
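As a back-of-the-envelope illustration of how the ARfD and the regulatory limit quoted above relate, one can compute the portion of shellfish flesh contaminated exactly at the 800 µg STX eq/kg limit that would deliver the full ARfD of 0.5 µg STX eq/kg bw. The 60 kg body weight used in the example is an arbitrary assumption; the two regulatory figures are those cited in the text.

```python
# Illustrative arithmetic only: grams of shellfish flesh at the Codex
# regulatory limit that deliver the full EFSA acute reference dose (ARfD)
# for a consumer of a given body weight.
ARFD_UG_PER_KG_BW = 0.5       # ug STX eq per kg body weight (EFSA ARfD)
LIMIT_UG_PER_KG_FLESH = 800   # ug STX eq per kg shellfish flesh (Codex limit)

def portion_reaching_arfd_g(body_weight_kg):
    """Grams of flesh at the regulatory limit that reach the ARfD."""
    max_dose_ug = ARFD_UG_PER_KG_BW * body_weight_kg
    return max_dose_ug / (LIMIT_UG_PER_KG_FLESH / 1000.0)  # limit in ug/g

print(portion_reaching_arfd_g(60))  # 37.5 g for a 60 kg consumer
```

Calculations of this kind are how risk assessors relate a contamination limit in food to an acute reference dose for the consumer; actual exposure assessments use standardized portion sizes rather than a single body-weight example.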
They induce paralysis of muscles and even death through cardiorespiratory failure. TTX and its analogues have recently been detected in marine bivalves and gastropods from European waters [107,108], posing a serious threat to human health. It should be considered that the toxicity of the analogues is lower than that of TTX, as reported based on intraperitoneal toxicity to mice [109]. However, even though an acute oral LD50 for TTX of 232 µg/kg has already been reported, to date no studies evaluating the oral toxicity of TTX analogues have been released. In the EU, TTXs are not monitored; the only relevant requirement in the current legislative framework is that fishery products derived from poisonous fish of the family Tetraodontidae must not be placed on the market [38,110]. EFSA has proposed a safe concentration lower than 44 µg TTX eq/kg of shellfish meat [55]. However, TTX levels detected in shellfish in the EU are often higher than this value, which indicates that seafood is in danger of being contaminated with this hazardous toxin and that appropriate measures are possibly required to protect human health [111]. Therefore, TTXs could be a future concern in Europe, as well as a new global health risk due to their spread and prevalence in new geographical regions. In contrast to PSTs and TTXs, CTXs and BTXs maintain VGSCs in an active form [32]. CTXs and BTXs bind to site 5 of VGSCs, inducing an open state and subsequently allowing Na+ passage into cells. VGSC site 5 comprises the transmembrane segments S6 and S5 of the α-subunit domains I and IV, respectively [32,112]. CTXs bind to this site from the intracellular side of VGSCs (Figure 2) [77]. Even though the region binding CTXs is known, the detailed interaction and conformation of the CTX-VGSC binding structure have not been elucidated [77]. As with other toxins, the activation of VGSCs by CTXs and BTXs induces repeated firing of action potentials and ion imbalance.
It would be of interest to unveil the CTX binding structure to VGSCs and its variation with different analogue structures. In addition to their activation of sodium channels, CTXs have been shown to partially inhibit voltage-gated potassium channels (Kv), thus further increasing membrane excitability [113]. Therefore, and secondary to sodium channel activation, CTXs trigger several cellular effects, including swelling, neurosecretion, an increase in intracellular calcium levels, and the modulation of gene expression [114]. Some CTXs are produced by Gambierdiscus and Fukuyoa dinoflagellates [115], but the biotransformation of ciguatoxins in invertebrates and fish has contributed to the more than 30 analogues reported to date [116,117]. The molecular structures of ciguatoxins found within fish vary with location, and historically a prefix is added to the name to distinguish them: P for compounds from the Pacific (e.g., P-CTX-1B) and C for compounds from the Caribbean (e.g., C-CTX-1) (Figure 4) [30]. These structural differences result in Pacific ciguatoxin-1 (P-CTX-1) being more potent than Caribbean ciguatoxin-1 (C-CTX-1) [118]. It is worth mentioning that other groups of toxins can co-occur, since they are produced by the same dinoflagellate species; these are maitotoxins and compounds such as gambierol [30]. Their mode of action is different from that of CTXs, though new reports shed some light on their effects at the molecular level. Briefly, gambierol is a potent blocker of voltage-gated potassium channels (Kv), both in human T lymphocytes and in mouse fibroblasts, at nanomolar concentrations [119,120]. It also shows almost full inhibition (>97%) of the potassium channel subtypes Kv1.2, 1.3, and 1.4 at concentrations between 1 and 1.5 µM [121].
On the other hand, the activation of voltage-gated Ca2+ channels (Cav) and the consequent entry of external Ca2+ induced by MTX make it an important tool for studies of the cellular and physiological processes in which these channels are involved [122,123]. In a recent study, the effects of these toxins on human VGSCs were analyzed, and gambierol, gambierone, and maitotoxin 3 (MTX3) had no effect on the size of the sodium currents. However, gambierone shifted the activation of VGSCs in the negative direction. The negative shift in activation also allowed the quantification of the low potency of MTX3 [59]. Therefore, gaining knowledge of the molecular targets of these toxins and their relationship with in vivo toxicity should be addressed. The relative potencies of CTX analogues have so far been determined by the mouse bioassay (MBA). However, even using a bioassay, there are many variables that can affect the data from different laboratories, since it is a non-standardized method. In addition, these toxicity experiments were carried out with toxins that were not standards. Until better information is available, the Panel on Contaminants in the Food Chain adopted the TEFs that appear in the first column of Table 2 [124]. The best option is to set TEFs based on human data, but epidemiological information is scarce, and other options should be considered (Figure 5). The lack of biomarkers to confirm Ciguatera diagnosis in humans and the failure by health professionals to achieve a proper differential diagnosis in patients clearly explain the scarcity of epidemiological data. Since human exposure is in most cases through toxin ingestion, toxicity data from oral administration to animals are relevant. Recently, the Expert Group on Ciguatera concluded that, due to limited data from oral in vivo studies, it has not been possible to derive TEFs [30].
As mentioned above, the in vitro toxic potency of these compounds on human VGSCs could be a good indicator of their toxic effect, and these in vitro data should also be considered for the determination of TEFs (Figure 5), although these bioassays have not yet been sufficiently validated for use in risk assessment [30]. CFP is the most prevalent biotoxin-related seafood poisoning. The transfer of CTXs within and among food webs is due to their lipid-soluble, bio-accumulative properties [28]. Toxic dinoflagellates adhere to algae, coral, and seaweed, where they are eaten by herbivorous fish. Ciguatoxins are transferred through the food web from herbivorous reef fish to larger carnivorous finfish and bioaccumulate as they move up the food chain until they reach humans. The highest levels of toxins are observed in long-lived fish-eating predators [125]. The slightly higher toxicity in upper-trophic-level fish thus suggests biomagnification up the food chain [126]. Many fish species are regarded as potential vectors of CTXs, including but not limited to barracuda, grouper, snapper, amberjack, trevally, wrasse, mackerel, tang, moray eels, and parrotfish [28]. Ciguatoxin is tasteless, odorless, and heat-resistant, so boiling, cooking, frying, freezing, or baking cannot detoxify ciguatoxin-laden fish [127]. Additionally, it is not possible to distinguish a toxic fish from a non-toxic one by appearance, texture, smell, or taste [28]. CTXs are concentrated in the fish head, liver, intestines, roe, and other viscera. Toxicokinetic data indicate that CTXs are readily absorbed and largely distributed to the body tissues, including the muscles, liver, and brain, likely due to their lipophilic nature [30]. It has been suggested that the quasi-irreversible binding of CTXs to VGSCs, together with a potential release from binding sites (tissue or plasma proteins or lipoproteins), may contribute to the persistence and recurrence of Ciguatera sensory disorders.
When humans consume fish containing CTXs in sufficient amounts, the expected gastrointestinal, cardiovascular, and neurological symptoms of Ciguatera classically present within 1-6 h of fish ingestion [26,32]. The neurological symptoms include paresthesia, dysesthesia, vertigo, and sensory abnormalities such as metallic taste, pruritus, arthralgia, myalgia, dental pain, and cold allodynia, a pathognomonic Ciguatera symptom that is characterized by burning pain in response to a cold stimulus [128]. All CTX analogues contribute to the neuronal firing, but under some conditions, external sensory stimuli might trigger a Ciguatera crisis, and a negative shift in the activation voltage of the sodium channels could underlie them [59]. Therefore, even though it is well known that CTXs bind to site 5 of VGSCs, the action underlying the main clinical symptoms of Ciguatera in humans is not well defined. It remains a challenge to determine whether it is related to sensitization of the sodium channel or to a lowering of the trigger threshold to a nociceptive stimulus.

Figure 5. Criteria on which toxicity equivalency factors (TEFs) are established, ranked by importance. First, epidemiological data and the clinical course reported from poisoning outbreaks, along with the identified responsible toxin. Data obtained from in vivo assessments are then considered; approaches such as the route of toxin administration or toxin quality are taken into account to evaluate relative potencies. Moreover, the toxicological measurement allowing comparison between analogues should be selected based on the symptoms observed in humans: when the toxin causes death, a median lethal dose is recommended, whereas the main symptom should be evaluated if no fatalities have been reported. Finally, in vitro experimental reports allow study of the molecular target; the biological system should correspond to the tissues or organs observed to be affected/targeted in vivo.
Even though this preferential order has been described, our proposal is to consider in vitro and in vivo information as complementary for the establishment of TEFs. In vitro assays can also provide essential data that help in understanding the mechanism of action, including the relative potency of analogues or the clinical course expected to be observed in vivo. Similarly, in vivo studies are fundamental in determining TEFs even when human poisoning data are available. Modified from FAO/WHO (2016) [70].

The predominance of sensory disorders suggests that CTXs particularly target somatosensory nerves/neurons. The time course of sensory disturbances (i.e., perioral paresthesia, abdominal pain, and then pruritus and pain affecting the whole body including the face) suggests an initial impact on trigeminal and enteric sensory afferents, followed by the dorsal root ganglion (DRG) and trigeminal sensory nerves [114]. This is compatible with absorption through the mouth and intestinal mucosa followed by distribution to the DRG and trigeminal neurons. Cell bodies in the peripheral nervous system are located in ganglia and are not protected by the blood-brain or blood-spinal cord barriers; these toxicokinetic factors could contribute to the preferential sensory toxicity of CTXs [129]. Additionally, a variety of gastrointestinal symptoms, including abdominal pain, nausea, and vomiting, and cardiovascular symptoms, such as heart rhythm disturbances, may also affect poisoned patients. In addition, breastfeeding mothers have reported diarrhea and facial rashes in their infants, which supports the theory that Ciguatera toxins are secreted into breast milk. The toxicity of Ciguatera is generally self-limiting, with gastrointestinal and cardiovascular manifestations lasting only a few days. Some symptoms, mainly neurological, can last days to weeks or even months to years, and, in extremely severe cases, Ciguatera may cause the death of patients [28,130].
Chronic low-dose exposure to CTXs in humans over time may represent a potential long-term human health risk, as CTXs can bioaccumulate, cause DNA damage, and cross the blood-brain barrier [131,132]. The effects of CTXs on marine fauna are less well documented. Ciguatoxins found in the brain, liver, and muscles of marine mammals suggest that they may also suffer from CTX exposure and that these compounds persist within complex marine food webs [133]. The fish resistance mechanism to CTXs is still unknown. Ciguatera fish poisoning, or Ciguatera, is mainly encountered in tropical and subtropical areas. Ciguatera can also represent a major source of concern for the tourism industry in endemic regions [28]. However, with the increase in fish imports and tourism, clinical Ciguatera can be found in non-endemic areas [134]. In the recent past, a geographical expansion of CTXs to more temperate areas has been driven by factors such as climate change, some anthropogenic activities, and the migration patterns of ciguateric fish [135]. Ciguatera is an emerging hazard in European waters (the Canary and Madeira islands and the Mediterranean Sea), thus necessitating the adoption of official policies to manage the potential risks [136]. Until now, in the European Union's fisheries and aquaculture products market, the sale of fish containing CTX-group toxins has been forbidden [124]. In the United States, the current Food and Drug Administration (FDA) guidance level for Ciguatera is 0.01 ng/g for Pacific ciguatoxin and 0.1 ng/g for Caribbean ciguatoxin [30]. Therefore, to adhere to the guidance, the CTX content of fish should not exceed these recommended levels. Overall, the effective management of Ciguatera patients is significantly hampered by the lack of a specific antidote, and the medical management of acute and chronic Ciguatera in affected patients relies mainly on symptomatic support and dietary recommendations [26].
The autonomic dysfunction-based disorders, including digestive and cardiovascular symptoms, resolve spontaneously or are treated effectively. Treatment to relieve the persistent sensory disturbances is lacking. The benefit of mannitol in Ciguatera poisoning is controversial; some clinical trials found no difference between mannitol and normal saline, while other trials have demonstrated improvement of neurologic symptoms after administering mannitol. The most frequently given advice is not to consume fish weighing more than 2 kg and to avoid eating fish parts such as the viscera, brain, and gonads, where ciguatoxins mostly accumulate [137]. Regarding MTXs, their implication in Ciguatera is unlikely [30]. Six MTX analogues have been identified: maitotoxin-1 (MTX1), maitotoxin-2 (MTX2), maitotoxin-3 (44-methyl gambierone), maitotoxin-4 (MTX4), desulfo-MTX1, and didehydro-demethyl-desulfo-MTX1. MTXs have historically been considered among the most toxic marine biotoxins, inducing activation of voltage-gated Ca2+ channels (Cav) and the consequent entry of external Ca2+ [138]. However, recent reports indicated that MTXs have almost no activity on VGSCs [59]. All MTXs are characterized structurally by having at least one sulfate group, giving them increased polarity compared to CTXs. Their higher polarity limits their absorption when ingested [139]. Therefore, although intraperitoneal administration in mice appears toxic, the oral toxicity of MTXs is almost undetectable [140], leading to them not being considered compounds responsible for Ciguatera. In addition, their accumulation along the food web is low, and they have not been found in the tissue of fish involved in Ciguatera cases [141].
Future challenges related to marine phycotoxins acting on voltage-gated sodium channels (summarised in Figure 9):
• Epidemiology studies;
• Studies of the structure-activity relationships of toxins;
• Common criteria for the naming of toxins;
• Reevaluation of pre-established toxicity concepts based on false premises;
• Review of the mechanisms of action responsible for the toxicity of compounds, including mechanisms involved in the disturbances that can persist or recur many months or even years afterwards;
• Harmonization of criteria to set toxicity parameters to establish accurate TEF values, especially for those toxin analogues commonly found in seafood or at relatively high levels;
• Research to better understand the toxins produced by bioconversion in organisms and their toxicity;
• Information about the pharmacokinetics of toxins;
• Toxicity studies with a special focus on oral toxicity and on toxin mixtures;
• Studies related to chronic exposure to toxins;
• Information on the occurrence of, and factors conducive to, the accumulation of toxins in marine organisms;
• Common legislative criteria: toxin regulation, implementation of effective toxin monitoring, and management programs for toxins;
• Climate change and its consequences;
• Evaluation of the therapeutic potential of these toxins based on their reversible interaction with sodium channels.

Marine Phycotoxins Acting on Glutamate Receptors: Domoic Acid and Analogues

Glutamate is the major excitatory neurotransmitter in the central nervous system (CNS), though it is also expressed in peripheral tissues [142]. It binds to ionotropic glutamate receptors (iGluRs) [143], which are also the target of a variety of naturally occurring toxins, such as domoic acid (DA) [144]. DA causes amnesic shellfish poisoning (ASP), named after the memory loss observed [145]. DA is a hydrophilic amino acid with several isomers, isodomoic acids A-H and epi-domoic acid [146], but only a few have been detected in seafood products [147].
Domoic acid is a known agonist of iGluRs, which it maintains in an open state, triggering neuronal excitability (Figure 2) [144]. Three functional classes of iGluRs are currently identified: kainate receptors, AMPA receptors, and NMDA receptors [148]. They are formed by two pairs of dimers constituting a tetrameric structure arranged in a circular manner [149]. Each subunit has three transmembrane regions (M1, M3, and M4) with a partial re-entrant loop from the cytoplasm into the membrane. Two large extracellular domains are also defined, i.e., the N-terminal domain and the ligand-binding domain (LBD) [149,150]. The structure of DA bound to the rat kainate receptor GluK1 (formerly GluR5) LBD and GluK2 (formerly GluR6) LBD has been determined at high resolution [148,151,152]. The LBD has a clamshell form, and DA binds between the two "shells" (lobes); the clamshell then partially closes, leading to channel opening and calcium entry [149,150]. However, these structures are based on the soluble form of the LBD; thus, these conformational modifications remain to be confirmed by elucidating the structure of DA bound to full-length receptors. Additionally, DA is an agonist not only of kainate receptors but also of AMPA receptors [144,146]. High-resolution conformational changes induced by DA in AMPA receptors, whether in the LBD or in the full-length receptor, have not yet been revealed. Kainate and AMPA receptor activation implies Ca2+ and Na+ influx into neurons, which, in turn, activates NMDA receptors and glutamatergic signaling [144,147]. This is the mechanism through which DA causes ASP symptomatology [146]. Current challenges regarding the DA mode of action concern structure-activity knowledge and detailed toxicological research into peripheral sites. iGluR activity is complex due to different subunit combinations, desensitization, and auxiliary subunit assembly [143].
Structural modifications induced by DA, not only in the LBD but also in the whole receptor, will help in understanding the structure-activity relationship. In this regard, data about DA isomers are also missing. Not all bivalve species have the same capability to accumulate DA; the differences observed can be related to the depuration rate. Most bivalves depurate DA very fast, except the king scallop Pecten maximus and the razor clam Siliqua patula, which accumulate high concentrations of DA [153]. ASP symptoms usually appear in humans 24-48 h after the consumption of DA-bearing bivalve mollusks. The clinical course is consistent with glutamate being the major excitatory neurotransmitter in the CNS, its role in the autonomic nervous system, and the broad distribution of iGluRs in the body [142]. Gastrointestinal signs are usually the earliest-onset and most frequent symptoms, comprising nausea, vomiting, and diarrhea, among others [154-156]. In more severe cases, neurological complications, such as disorientation, confusion, headache, seizures, and memory loss, develop. Other peripheral alterations can also manifest, such as cardiac arrhythmias, blood pressure instability, or bronchial secretion [154,156]. In addition, difficulty in breathing, coma, and even certain cases of death have also been reported. DA is a water-soluble small molecule with low transcellular permeability [157]. Most isomers seem to be less toxic than DA [158]. Toxicokinetic data on DA are scarce, but due to its physicochemical characteristics, it is not expected to distribute widely in the body. In laboratory animals, DA following an oral dose was absorbed slowly in the gut, limiting its oral bioavailability, and was mainly eliminated unchanged in the urine through glomerular filtration [159]. Neurotoxicity is the critical toxicological effect identified in experimental animals as well as in humans.
The toxicity and effects of DA and its isomers in the CNS have been evaluated in depth. Following acute DA exposure, laboratory models exhibit progressive symptoms, with effects that include activity level changes, gastrointestinal distress, stereotypic behaviors, seizures, and death [160]. Due to its prominent role in ASP, memory has been the focus of most DA research. Both DA doses that trigger most ASP symptoms and asymptomatic DA doses cause adverse learning and memory outcomes, which were reversible in asymptomatic rodents [161]. High doses of DA damage neurons by over-activating kainate receptors, leading to uncontrolled calcium influx, and induce cell degeneration in certain regions of the brain, most recognizably in the hippocampus (the memory center of the brain) [162]. In mammals, prenatal and neonatal DA exposure has been linked to abnormalities in electrophysiology and a reduced threshold for chemically induced seizures [163]. It should be noted that research has focused on the nervous system, leaving DA effects on peripheral tissues, such as cardiovascular, gastrointestinal, and renal impairment, understudied [86]. EFSA established an acute reference dose (ARfD) based on human data of acute toxicity from an outbreak of DA poisoning in Canada in 1987, comprising 107 cases [155]. The CONTAM Panel used the lowest observed adverse effect level (LOAEL) of 0.9 mg/kg bw, applying an uncertainty factor of 30 to derive an acute reference dose (ARfD) of 30 µg/kg bw. Because only DA and its diastereoisomer epi-DA have toxicological relevance, the ARfD applies to the sum of DA and epi-DA [146]. Consequently, a TEF of 1 is applicable. Genotoxicity data on DA were inconclusive, but chronic exposure to this toxin seems to have health consequences, making its close monitoring even more important [161].
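The ARfD derivation described above, and the way the TEF of 1 makes exposure a simple sum of DA and epi-DA, can be sketched as follows; the shellfish concentrations in the second part are invented example values:

```python
# Sketch of the ARfD derivation described in the text (EFSA CONTAM Panel):
# an acute reference dose is obtained by dividing the point of departure
# (here a human-data LOAEL) by an uncertainty factor.

LOAEL_MG_PER_KG_BW = 0.9   # lowest observed adverse effect level, mg/kg bw
UNCERTAINTY_FACTOR = 30

arfd_ug_per_kg_bw = LOAEL_MG_PER_KG_BW / UNCERTAINTY_FACTOR * 1000.0
print(f"ARfD = {arfd_ug_per_kg_bw:.0f} µg/kg bw")  # 30 µg/kg bw, as in the text

# Because only DA and epi-DA are toxicologically relevant, and both carry a
# TEF of 1, exposure is assessed against their sum. Concentrations below are
# assumed example values in mg/kg shellfish meat.
da, epi_da = 12.0, 3.0
total_da_eq = da * 1.0 + epi_da * 1.0
print(f"DA + epi-DA = {total_da_eq} mg/kg")
```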
Regulations developed in the late 1980s and effective seafood monitoring programs for the detection of DA in shellfish, implemented by many regulatory agencies worldwide, have prevented acute human DA poisonings [18]. The EU legislation sets the regulatory limit for DA in shellfish at 20 mg DA/kg of meat [49]. Shellfish harvesting is closed when monitoring programs indicate DA concentrations over the regulatory limit, leading to direct and indirect economic problems for fisheries and aquaculture. There is no antidote available for ASP, and treatment is supportive. Severe complications from DA intoxication have been reported especially for elderly patients. Humans have been protected from acute ASP by DA regulatory limits in shellfish, but multiple DA toxicity events have occurred in naturally exposed marine mammals over the past three decades and have caused substantial mortality events [164]. In fact, Pseudo-nitzschia blooms, and hence the impact of DA on marine animals, could be increasing due to climate change [165,166]. Marine mammal exposures are similar to the human oral exposure route, and the symptoms of acute sea lion toxicosis syndrome are analogous to ASP; therefore, sea lions have been an invaluable sentinel species in DA research [167]. Future research efforts should aim to further explore the following challenging topics (included in Figure 9):
• Studies on the oral toxicity of the DA isomers present in seafood;
• The health impacts associated with chronic, low-dose exposure to this prevalent neurotoxin; results from these studies will also help reveal the human subpopulations with pre-existing conditions who may be more vulnerable to the toxic effects of this compound;
• Studies to further elucidate the toxicokinetics of DA and the role of drug transporters;
• Research into DA effects other than neurotoxic (cardiac, renal, and gastrointestinal), especially considering chronic exposure;
• Research in humans and animal models should include studies during pregnancy and in exposed offspring to characterize the relationship between the increasing body burden of DA and related neurodevelopmental effects.

The OA group comprises cyclic polyether fatty acids, including DTX1 and DTX2 (Figure 6), as well as their esterification products, referred to as DTX3. They are inhibitors of serine/threonine protein phosphatases (PPs) such as PP2A and PP1 [168], among others (Figure 2), resulting in the hyperphosphorylation of many cell proteins, which, in turn, leads to effects on several pathways [169,170]. The structural conformation of OA-PP1 binding has been resolved by crystallography at high resolution [171]. These phycotoxins inhibit PP2A preferentially over PP1; hence, the conformation of binding to PP2A was later elucidated [172]. The binding conformation was also studied for DTX1 and DTX2 [173]. These toxins bind to a hydrophobic groove close to the active site of the PP2A catalytic subunit [171-173]. A two-amino-acid variation in this region, leading to a looser pocket in PP1, would explain the increased affinity for PP2A over PP1 [172]. As mentioned above, OA and DTXs interfere with other PPs, such as PP5 or PP6; however, the binding conformation of OA with either PP5 or PP6 has not been reported. Although the dephosphorylation of proteins by PPs is essential in modulating the activity of a wide variety of enzymes and tissues [174], their relationship with gastrointestinal dysfunction is not clear. Other PP inhibitors do not elicit effects similar to those of OA toxins in vivo [99]. As the most prevalent symptom, diarrhea can be the result of complex mechanisms, and different activation pathways can be implicated [175,176]. Results from in vitro studies had suggested that the potent pro-absorptive peptide neuropeptide Y (NPY) was altered after OA treatment in a neuroblastoma cell line [177].
However, NPY administration prior to OA did not modify OA-induced poisoning in mice, whereas in the same study serotonin was directly involved in OA-induced diarrhea [178]. The secretory role of serotonin in the pathophysiology of diarrhea has been widely reported [176,179]. On the other hand, an in vitro study in a model of the intestinal barrier showed a protective role of enteric glial cells with regard to OA-altered permeability [180]. Enteric glial cells have been related to secretory outcomes in the intestine [179], and whether their activation plays a role in OA diarrheagenicity remains to be studied in vivo. Therefore, OA could act by modifying the crosstalk between the enteric nervous system and the intestinal epithelial cells for the regulation of homeostasis, gut functions, and intestinal barrier permeability through changes in the release of various mediators. Based on these recent reports, the need to review the mechanisms of DST toxicity should be emphasized. Diarrhetic shellfish poisoning (DSP) is a gastrointestinal disease associated with the ingestion of filter-feeding shellfish that have ingested OA-producing dinoflagellates [13], although other shellfish, such as crabs, can also become toxic. The structural integrity of DSTs remains intact after cooking [181], and their presence in shellfish flesh does not appear to alter the organoleptic profile [182]. DSP includes incapacitating diarrhea, nausea, vomiting, abdominal pain, and, in some cases, chills and fever, lasting 3 days on average, but it is not lethal [13,14]. However, the impact DSTs have across marine food webs, including commercial finfish and shellfish, is poorly defined [183]. Despite the efforts and deep research into DST toxicity at different levels, the molecular mechanism of action responsible for poisoning has not been well defined [184]. Omics techniques may be a valuable tool in understanding OA-altered pathways [184].
After the ingestion of DST-bearing bivalves, the toxins are localized mainly in the gastrointestinal tissues [178]. They cause diarrhea by stimulating Na+ secretion by intestinal cells, leading to intraluminal gastrointestinal (GI) fluid accumulation and abdominal cramping [14]. OA has been shown to induce intestinal toxicity in mice, with cell detachment, fluid accumulation, villous atrophy, inflammation, and dilatation of the intestinal tract [185]. The cytotoxicity of OA is mainly manifested as changes in cell morphology, destruction of the cytoskeleton, variations in the cell cycle, and induction of apoptosis [186-188]. These alterations have traditionally been related to OA inhibition of PP activity [189,190]. However, the broad symptomatology of DSP cannot be attributed only to the inhibition of PPs [72,178,180]. Given the complex structure and tightly regulated physiology of the intestine, understanding the interplay between its different components (e.g., enterocytes and the enteric nervous system) in the response to OA might shed some light on the OA mode of action. Beyond the GI tract, other organs can be affected by okadaic acid [72]. Exposure to OA leads to reorganization of the cytoskeletal architecture, loss of intercellular communication, and apoptosis in the liver [191,192], while a study in rats showed no acute cardiotoxic effects of OA and DTX1 [193]. A variety of OA toxic effects, including genotoxicity, angiotoxicity, immunotoxicity, and embryotoxicity, have also been reported [14,188,194]. Interestingly, the nervous system is sensitive to OA, although it is not classified as a neurotoxin. It has been reported that OA can cause neuronal cell death by inducing hyperphosphorylation of a variety of microtubule-binding proteins, especially Tau protein, and neurofibrillary tangle formation, resulting in changes in the neuronal cytoskeleton in vitro and in vivo [195].
It has also been demonstrated that OA can produce spatial memory impairment and neurodegeneration and cause hippocampal cell loss in rats [196]. Not only is the OA molecular target of interest, but chronic exposure assessment is also demanded [184]. The chronic toxic potential of DSTs is less well understood, although long-term exposure to OA is linked to an increased risk of cancer [197]. There are studies that correlate shellfish consumption with the incidence of gastrointestinal cancer in the Spanish and French coastal populations [198]. This possible association agrees with the alterations in the pattern of expression of 10 genes related to carcinogenic processes in SH-SY5Y neuronal cells exposed to OA [182]. Poisoning with toxins from the OA group at sub-regulatory levels could have long-term adverse effects on the digestive tract in people, leading to an increased risk of bacteriosis, likely from an existing resident gut symbiont or pathobiont [199,200]. The disruption of epithelial integrity by OA may affect the colonic microbiota, which, in turn, can lead to various diseases such as colorectal cancer [201]. In addition, further research is needed into the toxicological evaluation of toxin mixtures in the interest of consumer safety [87,184]. DSP treatment is supportive, as victims recover without any special therapy after several days. Until now, no sequelae have been reported. In general, OA, DTX1, and DTX2 are produced by the microalgae, while DTX3 is present only in shellfish [202]. The metabolism of OA/DTXs in shellfish leads to extensive conversion to the DTX3 derivatives (7-O-acyl fatty acid esters and okadaates), which, although appearing to be of somewhat reduced toxicity, are believed to be largely converted back to the free toxins during digestion. This DST conversion, leading to a multitude of compounds potentially present in shellfish, complicates the determination of overall toxicity.
It should be noted that the main DST analogues differ in toxicity, as indicated in several studies performed in vitro with a range of cell lines [203] as well as in vivo in rodents [40,204]. TEFs are also used to determine the concentration of DSTs in shellfish, converting the amounts of individual toxins determined by analytical methods into OA equivalents [62,65]. The TEFs proposed by EFSA are derived from the PP inhibition potency and lethal intraperitoneal doses in mice (Table 3) [103]. However, PP binding affinity is not the only factor important for determining the relative toxicity of OA analogues to human consumers; all OA actions contribute to the final toxicity observed in vivo. Recent studies indicate that the oral toxicity of the analogues follows the order DTX1 > OA > DTX2, with TEF values based on oral lethal toxicity of OA = 1, DTX1 = 1.5, and DTX2 = 0.3 [40]. Therefore, because human exposure to OA occurs by ingestion, the current TEFs should be reevaluated for regulatory purposes to properly estimate OA equivalents in edible shellfish [70]. To minimize the potential health risk for consumers, several measures have been implemented in many countries, including the regular monitoring of shellfish, the establishment of regulatory limits for some lipophilic marine phycotoxins in seafood, and temporary bans on shellfish harvesting whenever toxins exceed the safety limits [207]. The European Union has defined a regulatory threshold that allows a maximum contamination of 160 µg OA eq/kg shellfish flesh. DTX3 esters are considered in the regulatory framework by including a base-hydrolysis step during sample preparation for toxin detection [170]. The risk of acute intoxication under the current legislation and monitoring system is very low, since they protect the population from the acute effects of DSTs [207].
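The TEF conversion described above is a weighted sum of analogue concentrations. A minimal sketch, using invented analytical concentrations and comparing the commonly cited EFSA i.p.-based TEF set (assumed here as OA = 1, DTX1 = 1, DTX2 = 0.6; see Table 3 in the text) with the oral-lethality-based proposal quoted above (OA = 1, DTX1 = 1.5, DTX2 = 0.3):

```python
# Hypothetical sketch: converting analogue concentrations (µg/kg shellfish
# flesh, assumed example values) into OA equivalents via TEFs, and checking
# the result against the EU regulatory threshold of 160 µg OA eq/kg.

EFSA_TEF = {"OA": 1.0, "DTX1": 1.0, "DTX2": 0.6}  # i.p.-based set (assumed)
ORAL_TEF = {"OA": 1.0, "DTX1": 1.5, "DTX2": 0.3}  # oral-toxicity proposal [40]
EU_LIMIT_UG_PER_KG = 160.0

def oa_equivalents(conc_ug_per_kg: dict, tef: dict) -> float:
    """Weighted sum of analogue concentrations in µg OA eq/kg."""
    return sum(conc_ug_per_kg[toxin] * tef[toxin] for toxin in conc_ug_per_kg)

sample = {"OA": 60.0, "DTX1": 80.0, "DTX2": 20.0}  # assumed analytical result
for name, tef in [("EFSA", EFSA_TEF), ("oral", ORAL_TEF)]:
    eq = oa_equivalents(sample, tef)
    verdict = "above" if eq > EU_LIMIT_UG_PER_KG else "below"
    print(f"{name}: {eq:.0f} µg OA eq/kg -> {verdict} the 160 µg/kg limit")
```

With this (DTX1-rich) example sample, the two TEF sets fall on opposite sides of the regulatory threshold, which illustrates why the choice of TEFs matters for regulatory purposes.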
However, the high persistence of the phytoplankton populations that produce this kind of toxin in many geographic areas [208] indicates that many shellfish consumers may be regularly exposed to low levels of DSTs. This highlights the importance of understanding the health impacts associated with chronic exposure to sub-regulatory levels of DSTs. Azaspiracids (AZAs) are a group of polyether lipophilic biotoxins comprising more than 40 analogues, of which AZA1 is the reference compound [19,209]. To date, efforts have focused on the analogues AZA1, AZA2, and AZA3, with limited data on the remaining derivatives (Figure 7) [86]. Although the structure of some analogues has been elucidated, the mechanism of action remains elusive (Figure 2). A variety of studies have reported alterations in several pathways under AZA treatment, both in vitro and in vivo. Features determined in vitro include cytoskeletal reorganization, apoptosis induction, and mitochondrial and nuclear impairment [16,210,211], depending on dose, exposure time, and cell line. Interestingly, effects of AZAs on ion channels have also been described. In hepatocytes, mitochondrial dehydrogenase activity is enhanced by AZA1-3. Research has shown that these phycotoxins at micromolar concentrations decrease potassium currents, acting as open-state blockers of the ether-à-go-go potassium channel (hERG) [212]. Along the same lines, three VGSC isoforms are partially blocked in vitro by AZA1, AZA2, and AZA3 [211]. A recent publication reported the interaction of azaspiracids with volume-regulated anion channels (VRAC): the amplitude of chloride currents is not only increased under AZA treatment but also significantly diminished when cells are exposed to a selective VRAC inhibitor [211]. Several of these reports might provide insight into in vivo toxicology.
For instance, AZAs modifying hERG activity in vitro could explain, at least in part, the cardiotoxicity observed in rats after intraperitoneal treatment [213][214][215]. Another example is the modification of hepatocyte K+ and Cl− channels in vitro, which may relate to the liver damage seen in mice following oral exposure to these biotoxins [215,216]. Cytoskeletal rearrangement in intestinal cell models could also be related to intestinal fluid accumulation in mice [16,215]. For AZAs, these promising in vitro effects should be confirmed in vivo in order to unveil their molecular target. Despite all these data, translation to human symptomatology remains a major challenge. Human poisonings ascribed to AZAs are currently limited to the ingestion of azaspiracid-laden mussels. AZP typically comprises gastrointestinal alterations: nausea, vomiting, diarrhea, stomach cramps, and even headache, but no deaths have been reported [217]. The lipophilic properties of AZAs give them a broad capability to cross cell membranes and interact with many biological structures. Several studies have underlined the complexity of AZA effects, since they induce different responses depending on the experimental model. In vitro toxic effects of AZAs at the organ, cellular, and molecular levels revealed the inhibition of neuronal bioelectric activity, alteration in cell-cell adhesion, generation of autophagosomes, ATP depletion, upregulation of proteins involved in energy metabolism, and Golgi apparatus disruption [16,218,219]. It should also be considered that AZAs accumulate in shellfish tissues and have the potential to be metabolized similarly to other lipophilic toxins [220]. However, AZA analogues have different toxicities, probably due to their specific molecular structures [221]. To characterize in vivo toxicity, research has been performed with AZAs after oral, intravenous (i.v.), and i.p. administration.
Most studies are limited to AZA1, with lethality at doses ranging from 250 to 775 µg/kg [222,223]. Recently, a comparative acute oral toxicity study of AZA1, -2, and -3 was performed in mice [215]. The lethal potency was AZA1 > AZA2 > AZA3, and the TEFs derived from the LD50 were 1.0, 0.7, and 0.5 for AZA1, -2, and -3, respectively. These data differ from those proposed by EFSA and suggest the need for a reassessment for regulatory purposes (Table 4) [19]. Oral administration of AZAs showed that the main targets at the histological level were the liver, gastrointestinal tract, and spleen [215]. Toxicokinetic evaluation in mice after acute oral administration of sub-lethal doses of AZA1 indicated that AZAs were readily absorbed, with the highest amount of toxin detected in the liver, followed by the kidneys, lungs, spleen, and heart, even though significant tissue damage was only observed at the intestinal level [222]. AZAs induce extensive damage to the gut, including dilation and fluid accumulation in the small intestine, exfoliation of duodenal villi, and infiltration of leukocytes [223,224]. However, of the AZP symptoms, diarrhea has not been reported in mice, a hindrance to translating in vivo results to human poisoning [215]. Intraperitoneal injection of AZAs in mice induced swelling of the liver, and histopathological analysis showed fat droplets in the hepatocyte cytoplasm and vacuoles in the centro-lobular and sub-capsular regions of the liver [225]. Liver, gastrointestinal, and lung damage was also reported by other studies after repeated oral exposure to sub-lethal doses of AZA1 in mice, even though the long-term effects are inconclusive [226]. Neurotoxic symptoms (spasms, slow progressive paralysis) were also observed in mice treated with AZAs [227]. Intravenous or intraperitoneal injection of AZAs caused cardiotoxicity (arrhythmias, functional and structural heart damage) and cardiovascular problems (altered arterial blood pressure) [214].
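The derivation of TEFs from acute lethality data, as described above, is a simple ratio: each analogue's TEF is the reference compound's LD50 divided by that analogue's LD50, so a less lethal analogue (higher LD50) receives a TEF below 1. The LD50 values in the sketch below are hypothetical placeholders chosen only so the resulting TEFs match those reported in the text (AZA1 = 1.0, AZA2 = 0.7, AZA3 = 0.5); they are not the experimental figures from [215].

```python
# Sketch of deriving TEFs from acute oral LD50 values: TEF_i = LD50(ref) / LD50(i).
# LD50 inputs are hypothetical placeholders, not the measured values from the
# cited study; they merely reproduce the TEF ordering AZA1 > AZA2 > AZA3.

def tefs_from_ld50(ld50_ug_per_kg, reference="AZA1"):
    """Return TEFs (rounded to one decimal) relative to the reference toxin."""
    ref = ld50_ug_per_kg[reference]
    return {tox: round(ref / ld50, 1) for tox, ld50 in ld50_ug_per_kg.items()}

ld50 = {"AZA1": 500.0, "AZA2": 714.0, "AZA3": 1000.0}  # hypothetical, µg/kg
print(tefs_from_ld50(ld50))  # → {'AZA1': 1.0, 'AZA2': 0.7, 'AZA3': 0.5}
```

The same ratio logic underlies TEF proposals for the other toxin groups discussed here, which is why route of exposure matters: oral and intraperitoneal LD50 ratios need not agree, so TEF sets derived from them can differ, as the text notes for both DSTs and AZAs.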
In addition, evaluation of toxin combinations [87], along with their effects under chronic oral exposure [86], would support current public policies on consumer safety. Marine animals are the main sources of AZA contamination, but only a few in vivo studies have been performed with Azadinium, revealing a negative effect on the feeding behavior of mussels [228]. The toxic effects detected in mussels could be used as early indicators of contamination associated with the ingestion of seafood [229]. Additionally, a potential adverse outcome of AZAs on fish development has been suggested, with consequent ecological impacts [230]. Only the levels of AZA1, AZA2, and AZA3 are regulated in shellfish at the international level as a food safety measure based on occurrence and toxicity [49]. The regulatory limit set by European Union (EU) legislation for azaspiracids is 160 µg AZA eq/kg shellfish flesh, and the reference method for toxin monitoring in bivalve mollusks for human consumption is analysis by LC-MS/MS [19,103]. Pectenotoxins (PTXs) take their name from the organism in which they were first discovered: the digestive gland of the Japanese scallop, Patinopecten yessoensis. These toxins are heat-stable polyether macrolide compounds, of which PTX2 is believed to be the main precursor, giving rise to other analogues during metabolic processes in bivalves [231]. They have been shown to cause cytoskeleton disruption by binding actin in vitro [2,70,180,232]. Until 2021, in the European Union, PTXs were considered in the same group as OA toxins for regulatory purposes, with a limit of 160 µg of toxin equivalent/kg of shellfish meat (EU Regulation 853/2004) [103]. However, EFSA concluded, in its Opinion on Marine Biotoxins in Shellfish-Pectenotoxin Group, that PTXs in shellfish are always accompanied by toxins from the okadaic acid group, and that there are no reports of adverse effects in humans associated with PTXs [66].
Therefore, at present and based on the EFSA opinion, PTXs have been removed from the list of marine biotoxins to be analyzed in live bivalve mollusks in Commission Delegated Regulation (EU) 2021/1374 [65]. Looking at the historical process that led to the regulation of PTXs, they were initially legislated because they might have been responsible for outbreaks of human illness involving nausea, vomiting, and diarrhea in Australia in 1997 and 2000 [233]; however, the symptoms were later attributed to OA esters [18]. In fact, the presence of these toxins in shellfish was discovered through their acute toxicity in the mouse bioassay after i.p. injection of lipophilic shellfish extracts [66]. Therefore, even though there are no reports of human illness causally associated with exposure to PTX-group toxins, PTXs remained regulated for decades. Yessotoxins (YTXs) are disulfated polycyclic ether compounds usually detected together with OA toxins, so they too were initially included in the OA group [12]. They were later shown not to share molecular targets with OA, and their toxicological effects were not comparable, which resulted in yessotoxins being considered a group of toxins separate from OA [2]. YTXs are produced by the dinoflagellates Protoceratium reticulatum, Lingulodinium polyedrum, and Gonyaulax spinifera [2]. They enter the food chain by accumulating in the edible tissues of filter-feeding shellfish [234]. The mechanism of action of YTXs is not fully understood. It has been suggested to involve cross-talk between cAMP, calcium, phosphodiesterases, protein kinase C, and A-kinase anchor proteins, as well as mitochondria, where the role of each signaling alteration and the final effect depend on the cellular model [235]. Additionally, modifications in second messenger levels, protein levels, immune cells, and the cytoskeleton have been reported as consequences of YTX exposure [235].
In addition, YTX appears to induce cell death in some types of tumor cells [236][237][238], to exert a cytotoxic effect on cortical neurons [239], and to display apoptotic activity in the cortex and medulla of mice after intraperitoneal administration [240]. Lethality in mice after i.p. injection has been reported, but when YTX was given orally, no poisoning symptoms developed, and there are no records of human intoxication events [12,70,235,241]. YTXs were included in the list of regulated marine toxins due to their coexistence with diarrheic toxins and their lethality in mice after i.p. injection [241]. The regulatory limit for YTXs was 1 mg yessotoxin equivalent/kg (EU Regulation 853/2004) [103]. However, in 2013, in light of the EFSA Opinion and the conclusions of the 32nd Session of the CODEX Committee on Fish and Fishery Products, the European Union increased the limit for yessotoxins to 3.75 mg/kg of shellfish meat [242]. This appears to be a preventive measure, taken to avoid possible poisoning from ingestion at very high doses, since, as mentioned above, low oral toxicity has been reported and there are no records of human intoxications [235]. Cyclic imines (CIs) are a large group of more than 40 analogues, including pinnatoxins (PnTXs), spirolides (SPXs), gymnodimines (GYMs), pteriatoxins (PtTXs), prorocentrolides (PcTXs), spiro-prorocentrimines, and portimine (Figure 8) [15,243]. They have been grouped together because of their common imine group as part of a cyclic ring, which confers their pharmacological and toxicological activity, and because of their similar acute fast-acting toxicity in mice [244]. They are produced by several microalgal species, such as Vulcanodinium rugosum and Alexandrium ostenfeldii, and are distributed globally. CIs have been reported in algal samples from Scottish waters and in shellfish from Norway and the French Atlantic coast [243,245,246].
Their mechanism of action is well understood and relies on the inhibition of nicotinic acetylcholine receptors (nAChRs) [247]. Five subunits assemble into a circular homopentameric or heteropentameric structure, constituting the ligand-gated ion channel [15,248]. Molecular simulations, along with the crystallographic structure of CIs bound to the acetylcholine binding protein and the nicotinic receptor, have provided great insight into their interaction [15]. CIs settle in the ligand-binding pocket located between two subunits. They directly interact with the loop C [(+) face] aromatic residues of the first subunit and, to a lesser extent, with loop F [(-) face] of the adjacent subunit [249]. They reversibly block these channels, impeding the transmission of neuronally evoked, ACh-mediated muscle contraction [248]. Variability in the affinities for the different nAChR subtypes found in nerves and muscle may account for differences in potency. Indeed, the symptomatology described in vivo is consistent with antagonism of these receptors: following oral exposure, animals develop tremors, reduced mobility, hind leg paralysis, and jumping and breathing difficulties, which can result in death [15,248,250]. Supporting this, CIs have been shown to impair neuron-induced muscle contraction directly at the neuromuscular junction [251]. Regarding structural data, high-resolution structures of CIs bound to nAChRs would complete the understanding of their interaction and of the conformational changes induced by these toxins. Although cyclic imines have been found to be highly toxic to mice, there is no evidence of intoxication in humans [15,243,252]. In 2010, the EFSA panel estimated that exposure to spirolides did not present a health risk to shellfish consumers but that exposure risks from other cyclic imines could not be assessed [244]. Therefore, CIs are not yet regulated in Europe due to a lack of the toxicological and epidemiological data needed to establish health safety thresholds.
Even though no information has yet linked CI toxins to neurotoxic events in humans, the potent interaction of PnTXs with central and peripheral nAChRs raises concerns about the harmful downstream effects of PnTX exposure. In recent years, there has been increasing evidence of the occurrence of emerging PnTXs in various wild and commercial shellfish species, collected at different periods of the year and in different marine waters. Notably, the levels of PnTXs in shellfish from the Mediterranean Ingril lagoon in France were much higher than those reported in contaminated shellfish from other locations, such as Norway, Spain, or Canada [253,254]. Given the potent antagonism of PnTXs against muscle and neuronal nicotinic acetylcholine receptors, a risk for human consumers may exist when PnTX accumulation in shellfish reaches high levels [248,255,256]. Therefore, in 2019, the French Agency for Food, Environmental, and Occupational Health and Safety established a limit value of 23 µg PnTX-G/kg of total shellfish meat [255,257]. In conclusion, YTXs are legally regulated in Europe even though there is a lack of demonstrated toxic effects in humans; PTXs remained regulated until 2021, although they are nontoxic to humans; while CIs, which have been reported to be neurotoxic and are frequently detected in shellfish, are not regulated. This highlights the urgent need to weigh updated evidence against older, widespread assumptions about marine phycotoxins (Table 5) and to harmonize the regulatory criteria for all these toxins. Future challenges related to lipophilic phycotoxins are compiled in Figure 9.

Toxins Acting on Ion Pumps: Palytoxins, Ostreocins, and Ovatoxins

PLTXs are a group of more than 25 compounds sharing a complex polyketide structure [2,86,258]. Among the analogues are ostreocins, ovatoxins, and mascarenotoxins, along with homoPLTX, 42-hydroxyPLTX, bishomoPLTX, neoPLTX, and deoxyPLTX [86,258].
PLTX binds to the Na+/K+ ATPase pump, stabilizing an open conformation that allows the flux of cations down their concentration gradients; thus, the pump is converted into a non-selective ion channel (Figure 2) [2,259]. The Na+/K+ ATPase actively transports Na+ and K+ ions against their concentration gradients, maintaining ion homeostasis between the extracellular and intracellular media, so modulation of its physiological activity can lead to severe cell and tissue impairment. The structural basis of toxin binding to the Na+/K+ ATPase pump is currently unknown; it is hypothesized that PLTX binds to the extracellular region of the protein, since its effects are only observed when it is applied extracellularly [259]. The resulting ion imbalance can trigger depolarization of neurons and muscle cells [2], which is in accordance with the symptomatology of PLTX poisoning. Structural data on the binding of PLTX and its analogues to the Na+/K+ ATPase pump are therefore of great interest for understanding the PLTX mode of action. Palytoxin intoxication may occur after physical contact with contaminated water (bathing activities) or inhalation of marine aerosol containing PLTX, as well as after the consumption of contaminated seafood [260]. PLTX compounds threaten human health and marine life and can have an impact on tourism (beach closures), commercial fisheries, and aquaculture [261]. PLTX is heat-stable and is not eliminated by normal cooking or boiling. Therefore, PLTX poisoning is mostly related to the ingestion of PLTX-contaminated seafood and involves mainly respiratory, skeletomuscular, cardiovascular, gastrointestinal, and nervous symptoms [262].
Symptoms associated with PLTX ingestion depend on the toxin concentration and comprise a bitter and metallic taste, paresthesia, myalgia, hypertension, nausea, abdominal cramps, vomiting, diarrhea, cardiac dysrhythmias, hemolysis, respiratory distress, renal failure, and coma, which can lead to death in the most severe cases [2,86,263]. However, reliable quantitative data on acute toxicity in humans are unavailable [111]. Several cases of respiratory poisoning, skin injuries, or ocular exposure have been reported in beachgoers due to aerosols released during massive blooms of Ostreopsis [260]. Poisoning has also been reported in aquarium hobbyists after incidental contact with PLTX-producing Palythoa [264]. The most common signs after inhalational and cutaneous exposure are respiratory distress, bronchoconstriction, mild dyspnea, rhinorrhea, cough, fever, and a small incidence of dermatitis and conjunctivitis [265]. Toxicity studies performed on a few PLTX congeners showed that, despite small differences in structure or even in stereo-structure, their relative toxic potencies can be quite different, both in vivo and in vitro [262]. Palytoxin is highly neurotoxic and increases the cytosolic calcium concentration while decreasing intracellular pH in neurons [266]. The resulting membrane depolarization and massive increase of Ca2+ in the cytosol interfere with vital functions [263]. Palytoxin triggers a series of toxic responses; it inhibits cell proliferation and induces cell rounding, detachment from the substratum, and F-actin disruption [267]. A recent in vivo evaluation of PLTX indicates high chronic toxicity, with a lower NOAEL than previously determined for acute toxicity, pointing to the need to consider chronic toxicity in risk assessments. PLTX has been shown to be harmful and to occur in the EU, yet it is not regulated [36,37]. An ARfD of 0.2 µg/kg (sum of palytoxin and ostreocin-D) has been established from experimental toxicity data [268].
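The ARfD mentioned above can be turned into a concrete acute exposure check: multiplying the ARfD by body weight gives the maximum tolerable intake in a single eating occasion, and dividing that by the contamination level gives the largest portion that stays within it. The sketch below uses the ARfD of 0.2 µg/kg body weight (sum of palytoxin and ostreocin-D) cited in the text; the body weight and contamination level are hypothetical example values.

```python
# Illustrative acute exposure check based on the ARfD of 0.2 µg/kg body weight
# (sum of palytoxin and ostreocin-D) cited in the text. The body weight and
# contamination level used below are hypothetical examples.

ARFD_UG_PER_KG_BW = 0.2

def max_portion_kg(body_weight_kg, contamination_ug_per_kg):
    """Largest single portion (kg) keeping intake at or below the ARfD."""
    max_intake_ug = ARFD_UG_PER_KG_BW * body_weight_kg
    return max_intake_ug / contamination_ug_per_kg

# 60 kg adult, hypothetical contamination of 30 µg PLTX eq/kg seafood flesh
print(f"{max_portion_kg(60, 30):.2f} kg")  # → 0.40 kg
```

This kind of back-calculation is what links a health-based guidance value such as an ARfD to a regulatory concentration limit; in the absence of an EU maximum permissible limit for PLTX, no such limit is currently anchored to this ARfD.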
PLTX is extremely potent via intravenous, intraperitoneal, and intratracheal exposure. In mice, however, the oral toxicity of palytoxin is found to be about three times lower than the i.p. toxicity, with an LD50 of 651-767 ng/kg, because palytoxin absorption is less efficient through the gastrointestinal tract than through the peritoneum owing to the toxin's molecular weight and hydrophilicity [269]. Biochemical changes were observed following oral administration of palytoxin in mice, and histological changes included inflammation of the forestomach [270,271]. Cardiac damage is supported by the in vitro effect of the toxin on cardiomyocytes [272]. Palytoxin applied topically to the skin or eyes causes skin irritation and erythema in mice [273]. Treatment is symptomatic and supportive; in cases of ingestion of PLTX-bearing seafood, treatments such as gastric lavage, fluid administration, forced diuresis therapy, and artificial respiration are applied [274]. Victims recover after a few hours to days. To prevent dermal or inhalational exposure, persons handling zoanthids should wear gloves and a breathing mask. Today, PLTX is considered one of the most toxic non-protein natural compounds. Additionally, there are increasing records of PLTX presence in many edible marine organisms from European coasts [275]. Efficient risk assessment of PLTX relies on the evaluation of this phycotoxin in fish and shellfish, as well as on adequate research, which is all the more relevant for a toxin posing a potentially higher health risk. However, the EU has still not adopted a maximum permissible limit to confront the risk of PLTX poisoning.
Future challenges related to lipophilic phycotoxins (compiled in Figure 9):
• Studies to assess the real hazard they present to humans;
• Detailed epidemiological studies to better evaluate safety levels and to promote regulations that will protect human health and reduce economic losses;
• Characterization of the pharmacological and toxicological effects of each PLTX analogue to establish reliable structure-activity relationships;
• Evaluation of the oral toxicity of PLTX and its analogues;
• Studies exploring treatments for PLTX poisoning, including the search for effective antidotes;
• Common legislative criteria: toxin regulation, and implementation of effective toxin monitoring and management programs.

Marine Toxins as a Source of Drugs

Phycotoxins not only cause damage but also have therapeutic applications based on their specific interactions with their natural targets, making these compounds natural sources for the development of new drugs. The diversity of their mechanisms of action points to these substances as lead compounds for drug discovery; however, research in this field is not yet fully developed [276]. Despite the great potential of these compounds as therapeutic agents, the development of new drugs from marine toxins is complex due to, among other reasons, the low availability of these biological products and the difficulty of obtaining them in sufficient quantities [277]. In the following paragraphs, some of the marine phycotoxins with potential clinical applications are summarized. Tetrodotoxin (TTX) is a neurotoxin with one of the greatest potentials for therapeutic use, owing to its potent blockade of voltage-gated sodium channels, which prevents nerve and muscle function and, consequently, the transmission of pain signals [84]. Since the distribution of Nav isoforms in the nervous system differs [278], TTX is not effective in controlling all types of pain.
Many studies have used TTX as an agent to relieve different types of pain, administering TTX at different doses and by different routes [279][280][281][282]. Among these uses, the best results have been obtained in preclinical and clinical studies for the alleviation of neuropathic pain [279,283]. Inflammatory pain and acute pain have also been evaluated in preclinical studies; although the data are still unclear, they suggest that the administration of TTX may have little impact on these types of pain [282,284,285]. More studies are needed to evaluate the benefit of administering this toxin to control different types of pain. The efficacy of TTX in relieving cancer-associated pain has also been evaluated, with inconclusive results. However, the data published to date indicate that TTX is a good research tool for therapeutic purposes against opioid-resistant pain in cancer patients, with reported mild to moderate adverse effects that are generally transient [286] and well tolerated at therapeutic doses, even when TTX is administered over a long period [287,288]. In fact, a promising drug containing TTX as the main active compound for the management of cancer-related pain is Halneuron® (Wex Pharmaceuticals Inc., Vancouver, Canada), currently in Phase III clinical development for the treatment of cancer-related pain and in Phase II clinical trials for chemotherapy-induced neuropathic pain (https://wexpharma.com/technology/about-halneuron/, accessed on 11 January 2022), with a considerable number of patients obtaining good and promising results [283]. The VGSC-blocking effect of TTX has also been investigated for its potential as a local anesthetic, with the advantage that TTX is extremely potent and causes minimal local toxicity [289].
It has been demonstrated that TTX prolongs the duration of local anesthesia without significant systemic or local toxicity when combined with an eluent, thus maintaining a sustained release at a sub-therapeutic level and enhancing the effects of the anesthetics [289]. Additionally, significant improvement in the efficacy of the analgesic effect induced by TTX has been reported when it is co-administered with vasoconstrictors or well-known local anesthetics [290,291]. The best results were obtained when the anesthetic was administered encapsulated in microparticles in combination with TTX [292]. Accordingly, commercial formulations containing TTX are under development for local and topical anesthesia [293]. Despite all these promising potential therapeutic applications, it must be taken into account that TTX has a very narrow therapeutic window, which means that TTX release must be controlled to avoid systemic toxicity [289]. Similarly to TTX, STX and its analogues have therapeutic potential as anesthetic agents, since they are highly selective sodium channel blockers [99]. Indeed, several research reports support the therapeutic use of STX and its analogues for pain management [294], even in combination with other known pain modulators, increasing their efficacy and potency without exacerbating their toxicity [290,[294][295][296][297]. OA and its analogues could be research tools for future investigations and useful probes for the discovery of new neurodegeneration and cancer drugs, since the activity of serine/threonine protein phosphatases is a potential target for novel therapeutics in many diseases, including cancer, inflammatory diseases, and neurodegeneration [298,299]. Furthermore, OA has been shown to possess fungicidal and antimicrobial activity, as it inhibits the growth of Candida albicans, Aspergillus niger, and Penicillium funiculosum [300].
The potential therapeutic use of yessotoxin and its derivatives has been postulated for Alzheimer's disease using in vitro models [301] and for treating and/or preventing metabolic diseases [302], a controversial proposal given the neurotoxicity of this substance, as well as for use as an antiallergic compound [238,301]. Moreover, the ability of yessotoxin to induce apoptosis in certain types of tumor cells has been reported in several studies, suggesting that this toxin could have great potential for the development of future antitumor therapies [235,236,238,[302][303][304]. Although maitotoxin has historically been considered one of the most toxic natural compounds, the range of its in vivo toxicity described in the literature varies from 50 to 200,000 µg/kg [140,[305][306][307][308][309]. This discrepancy needs to be clarified, since MTX has several effects on cellular regulatory mechanisms that could make it a molecule with potential uses for different purposes, for instance, neurotransmitter secretion [310], programmed cell death activation [311], fertilization [312], and insulinotropic activity, since MTX has been reported to activate non-selective cationic currents (NSCCs) [313,314]; it is thus a promising tool for the development of multiple new therapies. 13-Desmethylspirolide C has been reported to have beneficial in vitro and in vivo effects against neurodegenerative diseases, decreasing the amyloid beta load and Tau hyperphosphorylation in in vitro experiments with primary cortical neurons and in an in vivo murine Alzheimer's disease model [315,316]. Another recent in vitro study confirmed the neuroprotective effect of 13-desmethylspirolide C on human neuronal differentiation using a human neuronal stem cell line [317]. All these findings make SPXs and related compounds attractive molecules for the development of new therapies against neurodegenerative disorders [316,317].
The blocking of hERG potassium channels by azaspiracids suggests that they could be used as antiarrhythmic drugs [212], since the hERG channel has been shown to be the target of class III antiarrhythmic drugs, which can reduce the risk of re-entrant arrhythmias by prolonging the action potential duration and refractory period without slowing conduction velocity in the myocardium [213]. Moreover, since there are no selective extracellular activators of VRACs other than the intracellular application of GTPγ-S (guanosine 5′-O-[gamma-thio]-triphosphate), this group of natural compounds could provide a way to analyze the role of these channels in cellular homeostasis. This is important because VRAC inhibitors have proven useful for modulating cancer progression, preventing the transition of tumoral cells to the S phase of the cell cycle [318]; these marine toxins could therefore be useful in vitro for testing the anticarcinogenic effect of compounds. Gambierol and its analogues are compounds with great potential for drug development owing to their main target, voltage-gated potassium channels (Kv). The different Kv subunits play critical roles in the regulation of cellular homeostasis; thus, each Kv subunit is involved in a certain pathology. In this sense, Kv1.1 inhibition has been implicated in pain sensation, making it an important target for developing anesthetics [319]. In addition, Kv1.2 blockers are important in the treatment of multiple sclerosis [320]. Moreover, inhibitors of Kv1.3 and KCa3.1 channels have potential uses in immune responses [119,321], multiple sclerosis [322], rheumatoid arthritis, and type I diabetes mellitus [323], making them good tools for developing immunosuppressive therapies [324] and treatments for neurodegenerative diseases [325].
Kv1.4 is important in the management of diabetes, preventing some biochemical abnormalities [326], and Kv1.5 is important for cardiac excitability [327]. Therefore, gambierol could be a very promising compound from which to develop drugs for the treatment of multiple pathologies; however, the potential of this marine compound has not yet been developed, and more studies are needed.

Detection Methods

Detection methods for marine toxins that threaten human health are needed to ensure food safety. Different approaches to detecting these compounds have been pursued by the scientific community, each with advantages and disadvantages that make them suitable for specific purposes. In general, marine toxin detection techniques can be classified into two groups: analytical and non-analytical methods. Non-analytical methods can be further subclassified into molecular interaction/activity-based assays, cell-based assays, and animal bioassays. Depending on the characteristics of the technique, some are adequate as confirmatory methods to identify and accurately quantify toxic molecules, others are useful for rapid sample screening or even on-site toxin detection, and still others may serve as sentinels for yet unknown or new toxic activity.

Analytical Detection Methods

Analytical methods allow the identification and quantification of toxic molecules, as long as an analytical standard is available for reference. Most analytical methods for marine toxins are based on liquid chromatography (LC) separation followed by a detection technique. With regard to LC, reverse-phase high-pressure liquid chromatography (HPLC) and ultra-high-pressure liquid chromatography (UPLC) have been used extensively. LC has been coupled to fluorescence, light absorbance, or mass spectrometry detection, depending on the targeted toxic compounds.
Among mass spectrometry techniques, tandem mass spectrometry (MS/MS) in multiple reaction monitoring (MRM) mode is commonly used for routine detection and has gained remarkable relevance, especially for lipophilic toxins, in recent decades. MS/MS working in MRM mode is a targeted method, meaning that it searches only for specific toxins in the sample. There have been some attempts to develop non-targeted screening of toxins using high-resolution mass spectrometry; however, at the moment, the proposed methods provide only an extended targeted method or an impractical alternative that requires fully matched matrix controls [328]. Because of the characteristics of analytical methods, their validation is relatively easy and, therefore, several of them have become official or reference methods for marine toxin detection. Although there are multiple analytical alternatives, the most commonly used for routine detection are those that have been officially validated. The first analytical method validated for marine toxins was domoic acid detection by HPLC coupled to ultraviolet light absorbance (HPLC-UV), adopted by the AOAC [329]. Not long after, HPLC coupled to fluorescence detection (HPLC-FLD) was published as an AOAC official method for the detection of PSP toxins in 2005 [330]. However, this PSP-detection method required precolumn derivatization of some toxins for adequate identification of the PSP toxin profile of a sample, which is laborious and, therefore, requires significant hands-on time of laboratory personnel. Later developments significantly improved the HPLC-FLD detection of these toxins using a similar approach with postcolumn derivatization, which was also validated [331]. More recently, simultaneous detection of TTX and PSTs has been achieved with LC-FLD [332].
LC-MS/MS provides efficient simultaneous detection of lipophilic toxins in the same sample, including OA and DTXs, AZAs, pectenotoxins, and yessotoxin, and was fully validated for mollusks by the European Union Reference Laboratory [333] and the European Committee for Standardization [334]. Because, in the case of lipophilic toxins, LC-MS/MS detection offered a great improvement in performance for seafood monitoring purposes compared to previously used methods, it displaced them in routine testing laboratories. The flexibility of MS/MS in detecting multiple compounds, once the technique was available in many laboratories, prompted the use of LC-MS/MS to detect other toxin groups such as domoic acid and PSP toxins. PSP detection by LC coupled to tandem mass spectrometry (LC-MS/MS) has been optimized and offers the advantage over LC-FLD that no sample derivatization is needed to identify the different analogues of this toxin class [335–337], including simultaneous detection of TTX in some protocols [338]. UPLC-MS/MS detection has also been validated for domoic acid [339]. Integrated analysis of regulated lipophilic and hydrophilic toxins in a single LC-MS/MS protocol can be achieved by optimization of LC conditions [340]. However, simultaneous detection of these toxins in one single sample would require a unique sample preparation protocol for PSP toxins, DA, and lipophilic toxins (OA, DTXs, AZAs, PTX, and YTX) that is not yet available. Minimum performance requirements of these methods, to allow adequate detection and quantification below current maximum regulated limits, have been internationally established to ensure human safety [49]. Alternatives to LC for toxin separation prior to coupling to mass spectrometry have also been tested, such as capillary electrophoresis (CE-MS/MS) for hydrophilic toxins [341], but they are far from being routine monitoring techniques.
Adequate evaluation of sample toxicity with analytical methods requires considering analogue toxic potency within the toxin class, which is achieved by application of toxicity equivalency factors (TEFs). TEF values for several analogues of the PSP, DSP, and AZA groups have been estimated and published by the FAO [70]. However, reliable estimation of TEF values is still missing for many analogues of most marine toxin groups. In LC-MS/MS procedures for lipophilic toxin detection, in addition to OA, DTXs, and AZAs, cyclic imines and yessotoxins are often analyzed [333,342,343]. Detection of palytoxins and ciguatoxins by LC coupled to high-resolution mass spectrometry has also been achieved in research laboratories and some monitoring laboratories [344–346], but evaluation of the results is more difficult due to the lack of commercial certified reference standards of these toxins and the scarcity of toxicological information to determine TEFs; therefore, these have not yet become widespread routine testing methods for these toxin classes. In addition, high-resolution mass spectrometry has lower sensitivity than MS/MS, which hampers identification of these toxins [347]. Sample preparation is usually a critical step in marine toxin monitoring, mainly in food samples, in which complex matrices are an important source of interference. Recent advances in sample clean-up protocols incorporate the QuEChERS procedure, a combination of extraction with solvents and dispersive solid-phase extraction (dSPE) that improves preanalysis clean-up and, as a result, also the general performance and sensitivity of LC-MS/MS [348,349]. Analytical methods require expensive instrumentation and qualified personnel and, therefore, are not suitable for on-site detection by food-handling end users. On the other hand, although they are not considered particularly fast, recent work has demonstrated that they are amenable to automated sample analysis [350,351].
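The TEF-based evaluation of sample toxicity described above amounts to a weighted sum: each analogue's measured concentration is multiplied by its TEF and the products are summed, expressing the result in equivalents of the reference toxin of the class. The following is a minimal sketch of that calculation; the analogue names and TEF values are illustrative placeholders, not the FAO-published figures.

```python
# Sketch of TEF-weighted sample toxicity (toxin equivalents).
# TEF values below are illustrative placeholders, not regulatory figures.

def sample_toxicity(concentrations_ug_kg, tefs):
    """Sum analogue concentrations weighted by their TEFs.

    concentrations_ug_kg: {analogue: concentration in ug/kg of tissue}
    tefs: {analogue: toxicity equivalency factor vs. the reference toxin}
    Analogues lacking a TEF raise an error rather than being silently
    ignored, mirroring the text's point that missing TEFs block a fair
    toxicity estimate.
    """
    total = 0.0
    for analogue, conc in concentrations_ug_kg.items():
        if analogue not in tefs:
            raise KeyError(f"No TEF available for {analogue}")
        total += conc * tefs[analogue]
    return total

# Hypothetical LC-MS/MS results for a shellfish extract (ug/kg):
measured = {"STX": 120.0, "GTX-like": 300.0, "dcSTX-like": 80.0}
# Placeholder TEFs relative to saxitoxin (STX = 1 by definition):
tefs = {"STX": 1.0, "GTX-like": 0.6, "dcSTX-like": 0.5}

print(sample_toxicity(measured, tefs))  # ug STX-equivalents per kg
```

The result can then be compared against the regulated maximum limit for the toxin class, expressed in reference-toxin equivalents.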
One of the main disadvantages of analytical methods is that the lack of reliable TEF values for every analogue of each marine toxin group makes them unsuitable for quantification of overall sample toxicity. In addition, adequate identification and quantification of every toxic molecule require certified reference standards, which are not commercially available for many analogues of the marine toxin groups, with notable problems in this respect for ciguatoxins and palytoxins. Finally, they are targeted techniques that search only for the programmed molecules and would miss any other toxin. Despite the widespread use of analytical techniques for marine toxin detection in routine seafood monitoring, owing to the characteristics discussed above these methods do not provide optimal protection of seafood consumers.

Molecular Interaction/Function-Based Assays

Many methods developed for marine toxin detection are based on the specific interaction of the toxin with a molecule, often a macromolecule. These assays do not usually allow the identification of the toxic molecule, because they target a toxin class and cannot detect the different analogues of the same group independently. Owing to this important characteristic, they are not suitable for accurate quantification of individual toxins, and they should be considered semiquantitative methods useful for estimation of sample toxicity. Only a few of them have been officially validated for toxin detection. In most cases, the assay measures the interaction of the toxin with a specific binding protein, but modification of the protein's function has also been used as a reporter of toxin presence. Although receptors could be used, the most common binding proteins are antibodies. In spite of the difficulty of producing antibodies specific for small toxins, many different immunoassays have been developed for the detection of these molecules.
Immunoassays are easy-to-use, robust techniques that can be performed in the laboratory or adapted to on-site screening. ELISAs for several marine toxins, including OA and DTXs, AZAs, DA and PSTs, and even palytoxin and ciguatoxins, have been developed for detection in the laboratory, usually through indirect or competitive designs [352–357]. In fact, an ELISA for DA detection is acknowledged as an official method for screening purposes in many countries [330]. The excellent performance of antibodies as binding partners has led to their use in different technological developments that can make toxin screening more efficient and even adapted to on-site detection, such as several immunosensor devices [358–360]. Lateral flow immunoassays (LFIAs) are suitable for inexperienced end users and have been used for multiple applications. LFIAs have also been adapted to marine toxin detection, including PSP toxins and OA and DTXs, and have been commercially available for some time, even in dipstick form [361–364]. In addition, immunoassays are easily adapted to high-throughput screening [365,366]. Multiplexing, by combining several immunoassays for different analytes performed simultaneously on the same sample, can be achieved by microsphere-based flow cytometry-like detection, high-density spotting microarrays, or parallel or in-line processing in biosensors, as demonstrated for PSP toxins, the OA group, and DA [359,367,368]. One of the main disadvantages of immunoassays is that antibodies are not specific for a single toxin but rather can interact with several analogues of the toxin class. Consequently, toxic molecules cannot be independently identified. In addition, cross-reactivity among analogues rarely matches toxic potency. Therefore, the final result will be affected by the toxin profile of the sample, and it should be considered an estimate of sample toxicity, not an accurate quantification.
There are, though, some exceptions, such as a DSP toxin antibody whose cross-reactivity for OA and DTXs fairly matches toxic potency [369]. A few techniques employ other binding molecules. Receptors, usually the macromolecular natural targets of the toxins, have also been used in marine toxin detection assays, although they are not as robust as immunoassays owing to protein stability issues. Because receptor-based assays utilize the toxins' targets for detection, they bind several analogues of the toxin class and do not allow identification of individual toxin molecules. However, the affinity of the toxin analogues for their natural target is likely to be a better indicator of toxic potency than antibody affinity. Several receptor-binding assays have been published for marine toxins, one of them being an official method for the detection of PSP toxins [370–373]. An important drawback of this official assay is its use of radioactivity. Similarly, a receptor-based assay has been developed for ciguatoxins based on competition with radioactively labeled brevetoxin; in this case, an alternative non-radioactive approach using fluorescently labeled brevetoxin has subsequently been produced and is commercially available [347,372]. It is, however, not specific for ciguatoxins, because the result is affected by the presence of other sodium channel-binding toxins such as brevetoxins. A nicotinic receptor-based assay was also developed for the detection of cyclic imines and adapted to different readout techniques [373–377]. An interesting variation of receptor-based assays is those that measure modification of target activity by the toxin. The protein phosphatase assay for OA and DTXs is an excellent example of this assay type. The initial detection strategy used the conversion by PP2A of a substrate to a colored or fluorescent product in a microplate detection design [378,379].
Innovative approaches have adapted this enzyme inhibition assay to electrochemical sensors as biosensing devices [380,381]; however, their adequate performance in food samples has not been demonstrated. Overall, immunoassays and receptor/target-based assays are fast, sensitive, and cost-effective, and some of them have been adapted to portable or on-site assays and to high-throughput techniques. These assays should be considered semiquantitative methods useful for rapid screening of food samples, and further development and validation should provide the testing assays needed to ensure reliable on-site detection of all regulated toxins. Considering progress in recent decades, LFIA is probably the most promising detection technology to achieve efficient on-site detection of marine toxins in the near future, although the antibody cross-reactivity profiles for most groups are still an obstacle to adequate screening and must be improved. Currently, for some specific toxins, such as ciguatoxins, receptor-based assays offer great possibilities. One of their main advantages is a better relationship between toxin detection ability and toxicity, and future technical developments will probably allow the integration of receptor-based methods into on-site detection procedures. Besides receptors, enzymes, and antibodies, aptamers, a relatively new class of recognition biomolecules, have also been explored for marine toxin detection. Aptamers are synthetic single-stranded oligonucleotides, either DNA or RNA, with a stable three-dimensional conformation that confers specific binding to analytes with high affinity and selectivity [382]. One of the advantages of aptamers is that they can bind molecules for which it is difficult to produce antibodies, such as highly toxic or small compounds that do not trigger an immune response, as is the case for most marine toxins.
Specific aptamers for STX, GTX1/4, DA, OA, tetrodotoxin, and palytoxin, among others, have been published and integrated into sensors for toxin detection using different transducer technologies (Table 6). Some of them have shown remarkable sensitivity and compatibility with shellfish extracts (Table 6); however, the aptamers used in these assays do not bind other analogues of the toxin class [383–387]. Detection of only one analogue of a toxin group does not provide adequate protection, and extended use of aptamers for marine toxin monitoring in seafood will have to cover, at least, the more frequent toxic analogues. Interestingly, this approach would allow identification of the different analogues of the group. Considering that simultaneous detection of two toxins has already been demonstrated [384], it may be extended to more molecules in the future. Although much work is still needed in this field for efficient on-site evaluation of overall sample toxicity, recent developments make aptasensors a promising technology for marine toxin detection, because they are easy-to-use, portable, sensitive methods that use synthetic, low-cost, stable sensing molecules. Although great effort has been made in this field in recent years, it remains to be demonstrated whether aptasensors can provide efficient on-site detection of marine toxins. For this purpose, evaluation of overall sample toxicity with aptasensors based on specific aptamers for each toxin would depend on the availability of adequate TEFs, just as analytical methods do; therefore, a fair estimation of toxicity would not be possible at the moment, even if the other technical issues were solved.

Cell-Based Assays

Cell-based assays are widely used to study human and animal diseases and to design therapeutic strategies [397]. However, they can also be used to determine whether marine toxins are present in a seafood or water sample.
There are many types of cell-based assays, since a toxic agent may cause cytotoxicity via different mechanisms. Hence, when designing these methods, it is important to choose adequate cell types (primary cells; native or engineered cell lines; species of origin, etc.), reporters, numbers of repetitions, and data analysis strategies for reliable results [398]. Cell-based assays usually offer high sensitivity and are amenable to automation [398]. In addition, in vitro cell assays can be performed with human cell types, providing information relevant to the human species, as opposed to in vivo bioassays in animals. Nevertheless, other factors such as cost, speed, and the requirements for trained personnel and instrumentation do not favor routine use for toxin detection. Today, the most-used cell-based assay to detect marine toxins is the viability assay, which consists of determining the number of healthy cells after exposure to a sample for an incubation period. Several viability tests can be employed for this purpose, such as dye exclusion, colorimetric, fluorescent, or luminescent assays. One of the most popular tests, as thousands of published articles evidence, is the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay. This assay allows determination of mitochondrial function through the metabolization at 37 °C of this tetrazolium salt to a formazan dye by dehydrogenase enzymes present in viable cells [399]. Viability assays have been used to detect OA, DTXs, AZAs, PSTs, palytoxin, and ciguatoxins (Table 7) [400–404]. Yet, some toxins require the addition of other compounds to reveal their cytotoxic effect, as is the case of ouabain and veratridine for saxitoxin [403]. A variation of this kind of assay is the hemolysis assay for palytoxin detection in microplates [405].
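The viability readout described above is normally expressed relative to untreated control wells. A common (though not the only) formulation, sketched here under the assumption of a blank-corrected absorbance readout from an MTT-type colorimetric assay, is the percentage of the control signal; the plate readings below are hypothetical values for illustration.

```python
def viability_percent(a_sample, a_control, a_blank=0.0):
    """Percent viability from colorimetric absorbance readings.

    a_sample: absorbance of toxin-exposed wells
    a_control: absorbance of untreated (vehicle) control wells
    a_blank: absorbance of cell-free wells (background)

    Viability = 100 * (sample - blank) / (control - blank).
    """
    corrected_control = a_control - a_blank
    if corrected_control <= 0:
        raise ValueError("Control signal must exceed blank")
    return 100.0 * (a_sample - a_blank) / corrected_control

# Hypothetical plate readings (arbitrary absorbance units):
print(viability_percent(a_sample=0.45, a_control=0.85, a_blank=0.05))
```

Dose-response curves for a toxin are then built by applying this normalization across a dilution series of the sample extract.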
Remarkably, a cell-based cytotoxicity assay is being used to screen for the presence of ciguatoxin in fish samples using the neuroblastoma cell line Neuro-2a, co-treatment with veratridine and ouabain, and an MTT readout [347,406,407]. Other cell-based assays, such as electrophysiological assays [408,409] or measurement of membrane potential changes by fluorimetry [410,411], have been described for the detection of paralytic toxins with high sensitivity. Recently, a new approach was proposed to detect saxitoxin or lipophilic toxins by identifying the changes in gene expression they produce in neuroblastoma or Caco-2 cell lines using qRT-PCR [412,413]. Nonetheless, even though these methods offer adequate detection, they are complicated to perform and not practical for routine food testing.

Animal Bioassays

Mouse and rat bioassays were the official methods for the detection of marine toxins in many countries for many years. These bioassays consist of the administration of a seafood extract to an animal, either by intraperitoneal injection or orally, and the observation of sickness symptoms or time to death. The mouse bioassay for the detection of PSP toxins has been validated by the AOAC and is still an official method for PSP monitoring [50]. However, due to ethical and technical issues, animal bioassays have been replaced in recent decades by analytical or other non-analytical methods for routine detection of lipophilic toxins [62] and as the reference method for PSP toxins [418,419]. Despite the reduced use of these techniques, they are still useful as sentinels for new, unknown toxins that may threaten human consumers.
Future challenges related to marine toxin detection (Figure 9):
• On-site, easy-to-use, efficient methods for the detection of multiple toxin groups are not yet available;
• Certified analytical standards of some toxin classes are urgently needed;
• Improvement of sample preparation procedures for further testing or extended automation of routine monitoring;
• Reliable TEF estimation for many analogues of these toxin groups is still missing;
• Improvement of the performance of analytical methods, especially for ciguatoxins and palytoxins.

Climate Change Uncertainties

Climate change is expected to have significant influences on both water quantity and water quality by shifting precipitation patterns, melting snow and glaciers, raising temperatures, and increasing the frequency of extreme events. Phytoplankton responses to ongoing and future environmental change will significantly affect earth system processes at many scales. These primary producers are the photosynthetic base of marine food webs and are responsible for approximately half of global oxygen production [420]. Rising water temperature is likely to promote the spread and growth of microalgae in the sea [421]. Besides the beneficial impact of microbial eukaryotes, some of them can form HABs [9]. Global warming causes a decline in sea ice cover and a rise in sea level, changing the physicochemical characteristics of the affected regions, impacting the ecology, the environment, and aquatic organisms at all trophic levels, and potentially increasing the risk of future HABs even further [421–423]. The spread and increase of HABs worldwide is widely reported. Despite extensive efforts to characterize the effects of climate-related environmental variables on different harmful dinoflagellates, it remains a challenge to predict their response to climate change and to assess the potential consequences related to their toxicity [424].
Climate change could be related to new toxins appearing in areas or products where they had not previously occurred, and new guidelines are needed on how to manage them. In agreement with this, the presence of tropical species such as Gambierdiscus, Fukuyoa, and Ostreopsis in temperate regions has been recorded. This constitutes a serious future threat to human health from CTX and PLTX intoxications. Species of the genus Ostreopsis associated with PLTXs were first reported in Hawaii and Japan but are currently distributed worldwide, and blooms have been detected on the Mediterranean coasts of countries such as France, Greece, Italy, and Spain [425]. Different PLTX-like compounds have been identified in the Mediterranean strains of O. cf. ovata and O. fattorussoi [60,426,427]. Currently, aerosols containing ostreocin are a problem on Mediterranean beaches. Blooms of Ostreopsis have also been found on the coast of Portugal and in the north of Spain, indicating that species capable of producing PLTX analogues may be spreading from the Mediterranean to the north Atlantic [428]. Temperature seems to be an important factor determining both the growth potential and the toxin production of the genus Ostreopsis: temperatures of 26–30 °C stimulated O. ovata cell growth and biomass accumulation with low toxicity, while temperatures of 20–22 °C induced higher toxicity per cell and lower cell numbers [111]. Ciguatera is endemic in certain tropical regions of the world but is now an emergent risk in fish of the Canary Islands and Madeira, with a persistent incidence and impact on public health [429]. This northern expansion has been attributed to changes in the distribution of toxin-producing microalgae. In support of this, the primary causative species of the genera Gambierdiscus and Fukuyoa have been recorded in the Canary Islands but also in the Mediterranean Sea.
In fact, Gambierdiscus has been detected in the Balearic Islands, the northernmost point of this microalga's distribution worldwide. All these findings suggest a possible future concern about ciguatera in finfish originating from Europe [430,431]. TTXs have usually been associated with contamination of pufferfish in Japan; however, a recent emergence of TTX in different pufferfish, marine gastropods, and bivalve mollusks collected from Mediterranean countries has been reported [54,108,432,433]. Significant TTX levels have also been found in bivalves from northern latitudes such as England and the Netherlands, and in shellfish harvested on the Atlantic coasts of Spain and Portugal [107,434–436]. These findings can be linked to the increased presence of TTX vectors, such as pufferfish species of the family Tetraodontidae, in these waters due to the rise in global temperature related to climate change. Studies relating water temperature increases to increases in the TTX content of its vectors seem rather inconsistent, although certain studies support an increased TTX incidence associated with higher water temperatures in the case of bivalve mollusks [111]. Climate change effects will have implications for food production, food security, and food safety. In particular, the products arising from marine production systems are expected to be affected by an increased occurrence of phycotoxins. This highlights the need to implement shellfish monitoring programs, especially for emerging toxins, to strengthen risk management capability and to enhance consumer protection, because ensuring safe and secure seafood is important, and climate change is one of the challenges in achieving this goal (Figure 9).

Concluding Remarks

The current regulation of marine toxins can be considered a success, given the extreme difficulty of making research progress in this field due to the scarcity of research material.
This has provided a rather good level of consumer protection through the advances and the work of many groups worldwide. However, in this field, the technological progress of analytical equipment has not been accompanied by an equivalent advance in toxicology and mode-of-action studies. The main cause of this bias is that most of the research performed in the field is undertaken by ecologists, organic chemists, and analytical chemists, and only by a few toxicologists. This review emphasizes the need to perform further and deeper research into the toxicology and mode of action of the toxin groups. This would allow better use of marine toxins as drug leads but, above all, better use of toxicological research as a necessary underpinning of the current analytical regulations. This is rather important, since LC-MS has been established as a monopoly for phycotoxin monitoring, in contrast to the regulatory philosophy for mycotoxins, which are legislated on the basis of any method that meets minimum performance requirements.
Cantú syndrome with coexisting familial pituitary adenoma

Context

Pseudoacromegaly describes conditions with an acromegaly-related physical appearance without abnormalities in the growth hormone (GH) axis. Acromegaloid facies, together with hypertrichosis, are typical manifestations of Cantú syndrome.

Case description

We present a three-generation family with 5 affected members, with marked acromegaloid facies and prominent hypertrichosis, due to a novel missense variant in the ABCC9 gene. The proband, a 2-year-old girl, was referred due to marked hypertrichosis, noticed soon after birth, associated with coarsening of her facial appearance. Her endocrine assessment, including of the GH axis, was normal. The proband's father, paternal aunt, and half-sibling were referred to the Endocrine department for exclusion of acromegaly. Although the GH axis was normal in all, two subjects had clinically non-functioning pituitary macroadenomas, a feature which has not previously been associated with Cantú syndrome.

Conclusions

Activating mutations in the ABCC9 and, less commonly, KCNJ8 genes, representing the two subunits of the ATP-sensitive potassium channel, have been linked with Cantú syndrome. Interestingly, minoxidil, a well-known ATP-sensitive potassium channel agonist, can cause a similar phenotype. There is no clear explanation why activating this channel would lead to acromegaloid features or hypertrichosis. This report raises awareness of this complex condition, especially for adult or pediatric endocrinologists who might see these patients referred for evaluation of acromegaloid features or hirsutism. The link between Cantú syndrome and pituitary adenomas is currently unclear.

Introduction

The term pseudoacromegaly is used to describe cases where an acromegaly-related physical appearance is observed without any abnormality in the growth hormone (GH) axis. A coarse facial appearance with hypertrichosis is a typical manifestation of Cantú syndrome [1–3].
Cantú syndrome, also known as hypertrichotic osteochondrodysplasia, is a heterogeneous condition that usually includes acromegaloid facial features, hypertrichosis, and skeletal and cardiac abnormalities (Table 1) [1,4,5]. Earlier reports have used different terms, such as acromegaloid facial appearance (AFA) syndrome [6] or hypertrichosis with acromegaloid facial features (HAFF) syndrome, following the report of a family with 4 members affected with AFA and congenital generalized hypertrichosis [2]. These conditions phenotypically overlap with Cantú syndrome and in fact represent a spectrum of the same condition. Following the description of activating ABCC9 mutations in Cantú syndrome [1,5], we have analyzed a family published 20 years ago by Irvine [2] and identified a novel missense ABCC9 variant carried by the affected members. We aim to raise awareness of this complex condition, which has prominent features resembling endocrine conditions and significant cardiological complications. Moreover, we highlight a potential link between familial pituitary adenomas and Cantú syndrome.

Case description

The proband (III.3) was referred at the age of 2 years to the Dermatology department due to prominent generalized hypertrichosis, noticed soon after birth, and a coarsening facial appearance, with broadening of her nose and thickening of her lower lip (Fig. 1a–d). Her height and weight were just below the 97th centile, with her bone age matching her chronological age. Baseline pituitary function assessment was normal, including the GH axis. Over the following 20 years, her acromegaloid features and hypertrichosis progressed (Fig. 1b, d). The patient manages her hypertrichosis cosmetically and with clothing. Her final adult height is 171 cm (above the 90th centile). At the age of 14 years she was diagnosed with a 12 mm non-functioning pituitary adenoma (Fig. 2), which has been stable in size over the last 8 years. The proband's father (II.8) was referred to the Endocrinology department due to a clinical suspicion of acromegaly, particularly because of acromegaloid facies (Fig. 1e). His GH axis and pituitary MRI scan were normal. Over the last 20 years, his acromegaloid features have been stable (Fig. 1f, g). At the age of 24 years he presented with nonspecific chest pain and shortness of breath and was found to have a pericardial effusion for which no cause was identified. He later had repeated pericardiocentesis for recurrent effusions and subsequently underwent pericardial fenestration at the age of 30 years. The proband's paternal aunt (II.3) was first seen at the Endocrinology department for exclusion of acromegaly. In addition to her acromegaloid facial appearance (Fig. 1h), she had terminal hypertrichosis. Her GH axis assessment was normal, with a normal serum IGF-1. Twenty years later, progression of her coarse facial features is noticeable (Fig. 1i, j), while the hypertrichosis has remained stable, requiring no specific treatment. At the age of 44 years she was diagnosed with a 13 mm non-functioning pituitary adenoma (Fig. 2), unchanged in size over the last 14 years. She was noted to have mild hyperprolactinemia, likely due to a stalk effect (1030 mU/l [NR < 500]), and secondary adrenal insufficiency was also documented (a suboptimal cortisol peak of 461 nmol/l on an insulin tolerance test, and 300 nmol/l on a short Synacthen test), for which she was commenced on hydrocortisone replacement therapy. Moderate thickening of the posterior calvarium was identified on a skull X-ray and also noted on the MRI images (Fig. 2c). She was noted to have cardiomegaly, although she does not have hypertension or valve abnormalities. She was recently diagnosed with a grade III infiltrating ductal breast carcinoma; one of her sisters, aged 53 years, had the same condition. BRCA1 and BRCA2 genetic testing did not reveal any abnormality.
The proband's half-sister (III.1) was referred to the Endocrinology department due to coarse facial features (a prominent forehead, thickened lips, a long philtrum, and an enlarged nose) and hypertrichosis. Her endocrine assessment was normal, including a normal serum IGF-1 and a normal pituitary CT scan. At the age of 25 years she had an episode of chest pain associated with a mild troponin elevation, with a 15% rise on a second sample, attributed to myocarditis. The proband's grandfather (I.2), described as "hairy", was never assessed by the genetic or medical departments.

Genetic testing

The ABCC9 gene was linked with Cantú syndrome in 2012 [1,5], and some of the patients previously described as suffering from AFA and HAFF syndromes were also identified as carrying mutations in ABCC9 [4]. ABCC9 encodes a member of the superfamily of adenosine triphosphate (ATP)-binding cassette transporters, subfamily C, commonly referred to as the SUR2 (sulfonylurea receptor 2) protein. This transmembrane protein functions as a subunit of ATP-sensitive potassium channels in cardiac, skeletal, vascular, and non-vascular smooth muscle, and other tissues. Coexpression of SUR2 with the pore-forming inward rectifier proteins Kir6.1 (encoded by KCNJ8) or Kir6.2 (KCNJ11) generates functional ATP-sensitive potassium channels [3]. All pathogenic variants in ABCC9 reported to date in Cantú syndrome are gain-of-function missense mutations [1,3,5]. Activation of ABCC9 reduces ATP-mediated potassium channel inhibition, thereby opening the channel [1,5]. More rarely, Cantú syndrome can be caused by mutations in the KCNJ8 gene [7]. We sequenced the ABCC9 gene and identified a novel missense variant in the affected subjects: c.4039C>T (p.Arg1347Cys) (Fig. 3). This missense variant, not reported in the literature and not present in the gnomAD database, causes the substitution of a highly conserved arginine residue by a cysteine at codon 1347, in the second nucleotide-binding domain of ABCC9.
In silico bioinformatics analysis (SIFT and PolyPhen) supports the pathogenicity of this variant.
Fig. 1 Facial appearance and generalized terminal hypertrichosis of the proband at the ages of 2 (a, c) and 22 years (b, d); the proband's father at the ages of 28 (e) and 48 years (f, g); and the proband's paternal aunt at the ages of 36 (h) and 57 years (i, j)
Discussion
The prevalence of Cantú syndrome is unknown. Males and females are equally affected and there is no established phenotype-genotype correlation. The condition is inherited in an autosomal dominant manner, and penetrance thus far appears to be complete [3,8]. It is currently unclear how activating ABCC9 mutations lead to hypertrichosis, acromegaloid facial features, osteochondrodysplasia, and cardiovascular anomalies, although these features remarkably overlap with the side-effects of minoxidil, which binds to SUR2, resulting in ATP-sensitive potassium channel opening and activation [3]. Minoxidil promotes keratinocyte proliferation and glycosaminoglycan and elastin production from skin fibroblasts, thereby changing connective tissue composition [9]. Regarding hypertrichosis, potassium channel opening, with consequent vasodilatation, may increase the supply of blood, oxygen, and nutrients to the hair follicles, leading to hair growth. Cardiovascular effects have been attributed to reduced vascular tone, which may explain the pericardial effusions seen in Cantú syndrome patients [3,10] and minoxidil-treated patients [10]. ATP-sensitive potassium channels are expressed in chondrocytes and osteoblasts, but their role in bone maturation as the explanation for skeletal abnormalities in ABCC9-related disorders is unknown [3]. No major endocrinopathies have been reported in Cantú syndrome [11]. The GH axis, often investigated due to possible acromegaly (the main differential diagnostic entity), has been shown to be normal [1,[3][4][5].
There is, however, a single case of a boy with Cantú syndrome due to a KCNJ8 gene mutation who was found to have GH deficiency [7]. No pituitary adenomas have been reported in Cantú syndrome, despite the fact that these patients commonly undergo brain imaging as part of investigations for neurological symptoms or as a routine procedure to exclude cerebrovascular abnormalities (Table 1) [4]. No pituitary adenomas were reported in a series of ten patients with genetically confirmed Cantú syndrome who had neuroimaging studies [12]. Scurr et al. reported one patient with a mild pituitary fossa enlargement and a moderate enlargement of the pituitary gland (10 × 11 mm) extending into the suprasellar cistern, but no pituitary adenoma was visible in this case [11]. In our kindred, we have two cases with non-functioning pituitary adenoma. Although pituitary adenomas are not rare in the general population, most are small, incidentally found lesions [13]. Here we report pituitary macroadenomas in two family members, one found at the age of 14 years. These may represent a Cantú syndrome-related feature or the independent disease of familial isolated pituitary adenoma [14]. The differential diagnosis for Cantú syndrome includes acromegaly, hypothyroidism, hirsutism-related endocrinopathies such as polycystic ovary syndrome, minoxidil use, or other rare pseudoacromegaly conditions such as pachydermatoperiostosis and Berardinelli-Seip, Sotos, or Weaver syndromes; therefore, these patients are likely to be referred to adult or pediatric endocrine clinics [3,4]. In summary, we present a five-member, three-generation family with Cantú syndrome due to a novel missense variant in the ABCC9 gene showing full penetrance, and two family members with non-functioning pituitary adenomas. We show their acromegaloid facial phenotype over a 20-year period combined with marked generalized hypertrichosis, and draw attention to their cardiac complications.
This family also shows familial pituitary adenoma and, as this has not been described in other patients with Cantú syndrome, it is unclear whether this feature is part of Cantú syndrome or a coincidental finding. Familial pituitary adenomas have a heterogeneous genetic background [14], and further studies are needed to determine whether there is indeed a link with ABCC9.
Application of deep learning technique in next generation sequence experiments
In recent years, the widespread utilization of biological data processing technology has been driven by its cost-effectiveness. Consequently, next-generation sequencing (NGS) has become an integral component of biological research. NGS technologies enable the sequencing of billions of nucleotides in the entire genome, transcriptome, or specific target regions. This sequencing generates vast data matrices. Consequently, there is a growing demand for deep learning (DL) approaches, which employ multilayer artificial neural networks and systems capable of extracting meaningful information from these extensive data structures. In this study, the aim was to obtain optimized parameters and assess the prediction performance of deep learning and machine learning (ML) algorithms for binary classification in real and simulated whole genome data using a cloud-based system. The ART-simulated data and paired-end NGS (whole genome) data of chromosome 22 (Chr 22), which includes ethnicity information, were evaluated using XGBoost, LightGBM, and DL algorithms. When the learning rate was set to 0.01 and 0.001, and the epoch values were updated to 500, 1000, and 2000 in the deep learning model for the ART-simulated dataset, the median accuracy values of the ART models were as follows: 0.6320, 0.6800, and 0.7340 at a learning rate of 0.01; and 0.6920, 0.7220, and 0.8020 at a learning rate of 0.001, respectively. In comparison, the median accuracy values of the XGBoost and LightGBM models were 0.6990 and 0.6250, respectively. When the same process was repeated for Chr 22, the results were as follows: the median accuracy values of the DL models were 0.5290, 0.5420, and 0.5820 at a learning rate of 0.01; and 0.5510, 0.5830, and 0.6040 at a learning rate of 0.001, respectively. Additionally, the median accuracy values of the XGBoost and LightGBM models were 0.5760 and 0.5250, respectively.
While the best classification estimates were obtained at 2000 epochs and a learning rate (LR) of 0.001 for both real and simulated data, the XGBoost algorithm showed higher performance when the epoch value was 500 and the LR was 0.01. When dealing with class imbalance, the DL algorithm yielded similar and high Recall and Precision values. In conclusion, this study serves as a timely resource for genomic scientists, providing guidance on why, when, and how to effectively utilize deep learning/machine learning methods for the analysis of human genomic data.
Introduction
With the widespread use of biological data processing technology and the rapid advancement in high-throughput sequencing (HTS) technologies, especially Illumina systems, next-generation sequencing (NGS) technology has become an indispensable part of biological research in many areas [1]. Next-generation sequencing technologies enable the sequencing of billions of nucleotides in the entire genome, transcriptome, or smaller target regions. The resulting growth in data volume gives rise to extremely large data matrices. Systems that detect meaningful information in very large data structures have increased the need for the deep learning (DL) approach, which uses multilayer artificial neural networks (ANNs). This situation has led researchers to utilize advanced statistical methods instead of classical statistical approaches in their studies.
The quality of machine learning (ML) approaches depends on selecting the appropriate features [2]. Various preprocessing, dimensionality reduction, and feature selection techniques are employed to uncover these features. To reduce computation time and increase accuracy, it is essential to reduce dependence on specific features at this stage. Deep learning algorithms aim to classify and describe data by extracting features that can provide more information from individually less informative variables. Unlike traditional machine learning methods, DL methods provide a significant advantage in solving problems in high-dimensional data matrices and analyzing such data [3]. Performing hyperparameter optimization is crucial for creating an effective model and determining the optimal architecture and parameters [4][5][6]. Next-generation sequencing (NGS) methods have been at the center of numerous biological and medical research efforts and, together with deep learning algorithms, have become very popular topics in recent years [1]. Especially since manual feature extraction is not feasible in genetic data analysis, the application of deep learning techniques in this field is important for researchers to obtain more accurate results. In the existing literature, no studies have been identified that specifically investigate the optimized parameter evaluation of algorithms on NGS data. Therefore, it is necessary to obtain optimized values of hyperparameters, such as the number of epochs, the number of layers, the learning rate, and the batch size, in NGS data analysis. In the literature, there are a limited number of studies that have performed diagnosis or classification using machine learning or deep learning techniques on various types of genetic data, including exome, metagenomic, and omics data. The convolutional neural network (CNN) method was utilized for the identification of clathrin proteins, the deficiency of which in the human body leads to significant neurodegenerative diseases such as Alzheimer's [6]. Deep Neural Network
(DNN) and XGBoost algorithms were used to classify variants into two classes, somatic and germline, for given whole exome sequencing (WES) data [7]. Performance comparisons were conducted between ML and DL algorithms to predict the effects of non-coding mutations on gene expression and DNA [8]. Using TCGA data as input, a deep learning algorithm was used to model the association between genes and their corresponding proteins in relation to survival prognosis [9]. Deep learning techniques have recently emerged as powerful tools for various biomedical applications, notably in the realm of next-generation sequencing. The exponential growth in genomic data produced by NGS platforms has presented both challenges and opportunities. Traditional bioinformatics methods often struggle to efficiently process and interpret the vast quantities of data generated. In contrast, deep neural networks (DNNs) have shown significant promise in detecting complex patterns, predicting phenotypes, and classifying genomic variants, among other tasks (Fig. 1) [10].
The Deep Neural Network (DNN) is a class of machine learning algorithms that models the workings of the biological nervous system. In a DNN model, there are multiple layers, including input and output layers, as well as more than two hidden layers, each containing neurons (processing nodes). These hidden layers are crucial components of the DNN model and actively participate in the learning process. While using more hidden layers during training can enhance the model's performance, it can also introduce significant challenges such as model complexity, computational cost, and overfitting. One of the remarkable capabilities of the DNN model is its ability to automatically extract relevant features from unlabeled or unstructured datasets using standard learning procedures. Several researchers have reported that DNN models outperform traditional learning methods in various complex classification problems. Therefore, in various domains, DNN models can achieve highly accurate prediction performance, especially in classification problems involving intricate relationships [11]. Recurrent Neural Networks (RNNs) are a type of artificial neural network designed to process sequential or time series data. Unlike conventional neural networks, which assume independence between inputs and outputs, RNNs operate on sequences, performing a similar task for each element in the sequence while taking into account previous outputs. However, the widespread utilization of RNNs in DNA sequencing data, where the order of bases holds crucial significance, has been limited [10]. Maraziotis et al.
pioneered the implementation of RNNs in genomics, utilizing microarray experimental data and employing a recurrent neuro-fuzzy protocol to infer complex causal relationships among genes by predicting time series gene expression patterns. Most RNN applications in genomics are combined with other algorithms such as Convolutional Neural Networks (CNNs): CNNs excel at capturing local DNA sequence patterns, whereas RNN derivatives are more adept at capturing long-range dependencies within sequence datasets [12]. A convolutional neural network (CNN) is a deep learning algorithm characterized by a deep feedforward architecture comprising various building blocks, including convolution layers, pooling layers, and fully connected layers. It can be visualized as a fully connected network, where each node in a single layer is connected to every node in the next layer. In CNN layers, convolution units process input data from units in the previous layer, collectively contributing to making predictions. The fundamental principle behind this deep architecture is to enable extensive processing and connection features, allowing the network to capture complex nonlinear associations between inputs and outputs. Owing to these features, which effectively model such relationships, CNNs have found applications in a wide range of fields, including medicine, genetics, engineering, and economics [13]. Deep reinforcement learning (DRL) is a machine learning technique in which a computer agent learns to perform a task through repeated trial-and-error interactions with a dynamic environment. This learning approach empowers the agent to make a series of decisions aimed at maximizing a reward metric for the task, all without human intervention and without being explicitly programmed to achieve the task. Studies of RL in the field of genetics are quite limited, and the first applications appear to be aimed at solving DNA sequence alignment using the Markov decision process (MDP) [14].
As the scale of genetic data expands, there will be an increase in the costs and time associated with data processing. This situation leads to increasing demands for data analysis and the fast delivery of findings at low cost. Furthermore, it is important to present the optimized parameters obtained from methods such as ML and DL applied to different types of genetic data to practitioners in the field. This allows for performance evaluations and ensures that the maximum information can be obtained from the data. In this study we used GPU-based model training, but there are several computational environment options for deep learning applications, for instance SPARK, high-performance computing (HPC), and field-programmable gate arrays (FPGAs). Khan S. et al. and Xueqi L. et al. reported that there are limitations concerning high I/O latency, distributed compute memory maximization, optimization of configurable parameters, and maintenance of the clusters [15,16]. The goal of this study was to obtain optimized parameters and evaluate the prediction performance of deep learning and machine learning (ML) algorithms for binary classification in both real and simulated whole-genome data using a cloud-based system. In this study we explored the following question: "Does a GPU-infrastructure-based algorithm (DL) perform better than CPU-based ML algorithms in terms of accuracy, time, and repeatability?"
Next-generation sequencing
The human genome (deoxyribonucleic acid, DNA) consists of around 3 billion nucleotides. There are four nucleotides in DNA: adenine (A), cytosine (C), guanine (G), and thymine (T). Only about 2% of DNA encodes proteins. The DNA fragments that encode proteins are called exons, and the combination of all exons within the genome is known as the exome. The remaining parts of DNA comprise non-coding regions, such as introns and intergenic regions, that do not encode protein.
Damage to DNA can result in various consequences such as malformations, cancer, aging, genetic alterations, and cell death [17]. Therefore, the early detection of DNA damage plays an increasingly important role in diagnosis, treatment, and the quality of life of patients. It has been determined that most of the mutations that lead to the formation of diseases occur in the exon regions of DNA [18]. Next-generation sequencing (NGS) is a method based on the simultaneous, parallel sequencing of a DNA molecule obtained from a single sample after it has been divided into millions of fragments. In other words, NGS is the process of determining the order of nucleotide bases in an individual's DNA molecule. NGS technology can detect genetic variants in an individual's DNA that may be associated with a disease. However, technical limitations may cause false negative results, as they affect the diagnostic process of diseases [19]. In addition, a lack of sequencing depth also changes the reliability of the detected variants. Although whole exome sequencing is a powerful method for diagnosis, it should not be considered the best approach for all clinical indications. It is, however, the most important step in establishing the necessary associations between clinical findings and the resulting phenotype variants [20].
ART: a next-generation sequencing read simulator
The ART simulator is a group of methods that can generate data mimicking Illumina technology, including the erroneous reads that may occur in real genomes. ART software was primarily developed for simulation studies helping to design data collection modalities for the 1000 Genomes Project. ART simulates sequencing reads by mimicking real sequencing processes with empirical error models or quality profiles summarized from large recalibrated sequencing datasets. Moreover, ART can simulate reads using the user's own read error model or quality profiles [21,22].
Whole genome data of human chromosome 22
In this research, the second dataset used was real (whole genome) data of chromosome 22, which includes ethnicity information. This dataset was prepared by the Microsoft Genomics team and made publicly available for use.
Cloud computing
Studies based on large sequencing datasets are growing rapidly, and public archives for raw sequencing data periodically double in size. Researchers need large-scale computational resources to use these data. Cloud computing, a model where users rent computers and storage from large data centers, is an attractive solution for genome research. Particularly in genetic research, conducting analyses directly on the stored data not only saves time but also reduces the costs associated with data transfer across platforms [24]. We implemented our pipeline on Microsoft Azure cloud VMs and Jupyter notebooks.
Methods
In this section, we introduce the proposed best-practice pipeline for the classification of next-generation sequencing data. Firstly, we constructed a dataset to simulate the entire human genome. Secondly, we obtained the VCF data by aligning the real Chr 22 whole genome FASTQ data shared by the Microsoft Genomics team with the reference genome using the BWA-GATK tool chain, which the Broad Institute defines as best practice.
ART simulation data set
The distributions of the number of variants for two different continental groups (European and Others), as reported by the 1000 Genomes Project, were used to generate the variant types in the simulation. This approach ensured that the simulated data closely resembled real individuals.
In this study, NGS reads based on synthetic human genomes were derived using one of the most commonly used methods in genetic data simulation: a next-generation sequencing read simulator (ART) [21]. As a result, this study produced 500 datasets for group 0 and 500 datasets for group 1. The distinction between the groups was achieved by changing the -f and -m parameters. The average simulation time for generating one whole genome FASTQ paired-end dataset was approximately 4 h and 12 min using the virtual machine configurations employed in this study. The simulations were run in batches of 100 on 10 different virtual machines [25]. The following commands were used to generate the simulated data:
• art_illumina.exe -ss HS25 -i ./testSeq.fa -o ./paired_end_com -l 150 -f 5 -p -m 250 -s 10 (for group 0)
• art_illumina.exe -ss HS25 -i ./testSeq.fa -o ./paired_end_com -l 150 -f 10 -p -m 500 -s 10 (for group 1)
Chromosome 22 WGS data set
In the study, NC24, one of the NC-series virtual machines equipped with an NVIDIA Tesla K80 card and an Intel Xeon E5-2690 v3 processor, was used. The analyses were conducted using the Python programming language. The paired-end next-generation sequencing (NGS) data of chromosome 22 (Chr 22) in FASTQ format was obtained using an Illumina NextSeq 500. After performing quality control, the data was aligned to the reference genome (GRCh38) using the Burrows-Wheeler Aligner (BWA), which is part of the Broad Institute's best-practices analysis pipeline. By applying the Genome Analysis Toolkit (GATK), the most frequently used pipeline for variant calling, to the aligned data, Variant Call Format (VCF) data describing the variants was obtained (Fig. 2, Additional file 1: Table S1) [26].
Secondary analysis
This stage involves aligning the produced reads of the individual's exome or genome to the reference genome and generating variant calls. The first limitation at this stage is the limited availability of human reference genomes and the lack of consensus on which reference genome is optimal to use. Several software tools have been developed for this alignment process. Platforms such as BWA, Novoalign, Stampy, SOAP2, LifeScope, and Bowtie are frequently used. As a result of this process, a BAM file is created as output (Fig. 3).
FASTQ
FASTQ is a text-based file format that contains nucleotide sequence reads and quality scores for each nucleotide read [27]. A typical FASTQ record contains 4 lines: the first line starts with the '@' character and specifies the identity of the sequence. The second line contains the raw sequence data, represented by single-letter nucleotide codes. The third line starts with the plus symbol "+" and may optionally be blank or followed by the same sequence identifier given in the first line. In the fourth line, the quality value of each base is encoded in ASCII format. The quality value reflects the probability that the base was misread during sequencing; higher quality scores indicate a smaller probability of error (p_error). The phred-scaled quality score (Q) is converted to a probability with the formula Q = -10 log10(p_error).
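The four-line record structure and the phred-to-probability conversion described above can be sketched in a few lines of Python (the record values here are purely illustrative, not taken from the study's data):

```python
def parse_fastq_record(lines):
    """Parse one 4-line FASTQ record into (identifier, sequence, per-base error probabilities)."""
    header, seq, plus, qual = lines
    assert header.startswith("@") and plus.startswith("+")
    # Phred+33 ASCII encoding: Q = ord(char) - 33, and p_error = 10 ** (-Q / 10).
    probs = [10 ** (-(ord(c) - 33) / 10) for c in qual]
    return header[1:], seq, probs

# Hypothetical record; the quality character 'I' encodes Q = 40.
record = ["@read1", "ACGT", "+", "IIII"]
rid, seq, probs = parse_fastq_record(record)
# Q = 40 corresponds to p_error = 1e-4 per base.
```

Inverting the same formula (p_error = 10^(-Q/10)) recovers the error probability from any phred score, which is how quality-control tools decide whether to trim low-quality bases.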
Variant calling & GATK
The variant calling stage entails identifying single nucleotide polymorphisms (SNPs), multiple nucleotide polymorphisms (MNPs), and small insertions and deletions (InDels, usually less than 50 bp) from next-generation sequencing data [29]. In this process, between 20,000 and 100,000 variants are discovered per exome, and approximately 3-4 million variants for whole genome sequencing. The Variant Call Format (VCF) is a text file that contains information about the variants found between the reference genome and the sample genome. The VCF format was developed for the 1000 Genomes Project. A VCF file consists of 8 fixed, mandatory columns, which are as follows: chromosome (CHROM), the 1-based position of the start of the variant (POS), unique identifiers of the variant (ID), the reference allele (REF), a comma-separated list of alternate non-reference alleles (ALT), a phred-scaled quality score (QUAL), site filtering information (FILTER), and a semicolon-separated list of additional, user-extensible annotations (INFO) [30]. Experience has shown that software developed on the basis of Bayesian statistical methods, such as SAMtools and the Genome Analysis Toolkit (GATK) (https://gatk.broadinstitute.org/hc/en-us), is frequently preferred for its ability to reduce sequencing errors [31]. In this study, GATK, which was developed by the Broad Institute, was used for variant discovery following alignment with BWA.
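The 8 fixed VCF columns listed above map directly onto a simple parser; the sample line below is a hypothetical chromosome-22 record made up for illustration, not a record from the study's dataset:

```python
# The 8 fixed, mandatory VCF columns, in order.
VCF_COLUMNS = ["CHROM", "POS", "ID", "REF", "ALT", "QUAL", "FILTER", "INFO"]

def parse_vcf_line(line):
    """Split one tab-delimited VCF data line into a dict keyed by the fixed columns."""
    fields = line.rstrip("\n").split("\t")
    rec = dict(zip(VCF_COLUMNS, fields[:8]))
    rec["POS"] = int(rec["POS"])                 # 1-based start position
    rec["ALT"] = rec["ALT"].split(",")           # comma-separated alternate alleles
    # INFO is a semicolon-separated list of key=value annotations (or bare flags).
    rec["INFO"] = dict(kv.split("=", 1) if "=" in kv else (kv, True)
                       for kv in rec["INFO"].split(";"))
    return rec

# Illustrative record with made-up values:
line = "22\t16050075\trs587697622\tA\tG\t100\tPASS\tAC=1;AN=5096"
rec = parse_vcf_line(line)
```

Applied over a merged (joint) VCF, such parsing is the step that turns the raw text into the individuals-by-variants matrix used in the tertiary analysis.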
Burrows-Wheeler Aligner (BWA)
The Burrows-Wheeler Aligner (BWA) is a software package for mapping low-divergence sequences against a large reference genome, such as the human genome. It consists of three algorithms: BWA-backtrack, BWA-SW, and BWA-MEM. The BWA-backtrack algorithm is designed for Illumina sequence reads up to 100 bp, while BWA-MEM and BWA-SW handle longer sequences ranging from 70 bp to 1 Mbp. BWA-MEM and BWA-SW share characteristics such as long-read support and split alignment. However, the BWA-MEM algorithm is faster and provides more accurate results for high-quality queries. BWA-MEM also outperforms BWA-backtrack for 70-100 bp Illumina reads. In this study, the BWA-MEM algorithm was used for alignment [32].
Tertiary analysis
This is the third and final step of the NGS analysis workflow. After merging the VCF data of individuals (joint VCF), a matrix is created with individuals in rows and variants in columns. At the final stage, techniques such as machine learning, deep learning, and clustering are applied to this VCF matrix. Generally, this step includes the annotation of genes, mutations, and transcripts. In this study, however, the focus is on obtaining the prediction performance and optimized parameters of deep learning and machine learning algorithms for binary classification in real and simulated whole genome data using a cloud-based system, because the most important problems in genetic data are the storage, organization, and modeling of these data. Therefore, the workflow does not include an "annotation" step (Fig. 3).
XGBoost
The XGBoost (eXtreme Gradient Boosting) algorithm is a high-performance version of the gradient boosting algorithm optimized with various refinements. It was introduced by Tianqi Chen and Carlos Guestrin in the article "XGBoost: A Scalable Tree Boosting System", published in 2016. The most important characteristics of the algorithm are its high predictive power, prevention of overfitting, handling of missing data, and, at the same time, its speed in performing these operations. According to Chen, XGBoost runs 10 times faster than other popular algorithms. It is regarded as the best of the decision-tree-based algorithms (Table 1) [33].
LightGBM
LightGBM is a high-performance gradient boosting algorithm using a tree-based learning algorithm, designed by Microsoft Research Asia in the Distributed Machine Learning Toolkit (DMTK) project in 2017 (https://lightgbm.readthedocs.io/en/latest). This algorithm has some advantages over other boosting algorithms: it solves prediction problems on big data more effectively, uses fewer resources (RAM), delivers high prediction performance, and supports parallel learning [33]. It is very fast, hence the name "Light". In the original article ("LightGBM: A Highly Efficient Gradient Boosting Decision Tree"), LightGBM was found to be up to 20 times faster than other algorithms [34]. In the LightGBM algorithm, optimizing the learning rate, max_depth, num_leaves, and min_data_in_leaf parameters to prevent overfitting, and the feature_fraction, bagging_fraction, and num_iterations parameters to accelerate training, increases the performance of the model (Table 1, Fig. 4).
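Both libraries build on the same core idea: an ensemble of small trees, each fitted to the residual (gradient) of the current model. A minimal from-scratch sketch with depth-1 trees (stumps) illustrates the principle; the function names and toy data are illustrative, and real XGBoost/LightGBM add regularization, histogram binning, and many other refinements:

```python
import numpy as np

def fit_stump(x, residual):
    # Exhaustively search one feature/threshold split minimizing squared residual error.
    best, best_err = None, np.inf
    for j in range(x.shape[1]):
        for t in np.unique(x[:, j])[:-1]:       # drop max value so both sides are non-empty
            left = x[:, j] <= t
            lv, rv = residual[left].mean(), residual[~left].mean()
            err = ((residual[left] - lv) ** 2).sum() + ((residual[~left] - rv) ** 2).sum()
            if err < best_err:
                best, best_err = (j, t, lv, rv), err
    return best

def gradient_boost(x, y, rounds=30, lr=0.3):
    # Boosting for 0/1 labels: each stump fits the residual y - p of the logistic model.
    score = np.zeros(len(y))
    model = []
    for _ in range(rounds):
        p = 1.0 / (1.0 + np.exp(-score))
        j, t, lv, rv = fit_stump(x, y - p)
        score += lr * np.where(x[:, j] <= t, lv, rv)   # learning rate scales each tree
        model.append((j, t, lv, rv))
    return model, score

# Toy binary-classification data: the label depends only on feature 0.
rng = np.random.default_rng(0)
x = rng.uniform(size=(200, 5))
y = (x[:, 0] > 0.5).astype(float)
model, score = gradient_boost(x, y)
accuracy = ((score > 0) == y).mean()
```

The exhaustive split search above is exactly the part XGBoost speeds up level-wise and LightGBM speeds up leaf-wise, which is the difference discussed next.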
XGBoost utilizes a level-wise tree construction strategy, building the tree in a level-by-level manner. In contrast, LightGBM adopts a leaf-wise tree construction strategy, where the tree is grown by continuously splitting the leaf with the highest gain. This leaf-wise strategy in LightGBM often results in faster training times. It is noteworthy that although XGBoost and LightGBM share similar concepts and objectives as gradient boosting frameworks, the variations in their implementations contribute to differences in performance, speed, and memory efficiency between the two algorithms [35].
Deep learning
Deep learning is a subset of artificial intelligence and machine learning that uses multilayer artificial neural networks to make predictions with high sensitivity and accuracy in areas such as image processing, object detection, and natural language processing. With the widespread use of biological data processing technology, NGS technology has become an indispensable part of biological research in many fields. It has been estimated that there will be 100 million NGS datasets by 2025 (Fig. 5) [36]. It is not possible to extract features from these structures with classical approaches. For this reason, systems that evaluate many layers at the same time and detect meaningful information in large data structures have increased the need for a deep learning approach using multilayer artificial neural networks. Deep learning requires the use of many hidden neurons and layers with new training models. The use of large numbers of neurons allows for a comprehensive representation of the raw data. Adding more hidden layers to the neural network allows the hidden layers to capture nonlinear relationships. Thus, when the neural network is optimally weighted, high-level representations of the raw data or images are obtained [37].
In the tertiary analysis phase, Convolutional Neural Networks (CNNs), one of the deep learning architectures, were used. CNNs are applied in various fields such as image recognition, video recognition, natural language processing, and computational biology. The CNN is a variant of the multi-layer perceptron (MLP) (Fig. 6).
Deep learning in next-generation sequencing
Genomics is advancing towards a data-driven scientific approach. With the emergence of high-throughput data generation technologies in human genomics, we are confronted with vast amounts of genomic data. Multiple genomic disciplines, such as variant calling and annotation, disease variant prediction, gene expression and regulation, epigenomics, and pharmacogenomics, benefit from the generation of high-throughput data and the utilization of deep learning algorithms to enable sophisticated predictions. Deep learning utilizes a wide range of parameters, which can be optimized through training on labeled data, particularly in the context of genetic datasets. Deep learning has the advantage of effectively modeling a large number of differentially expressed genes. There are still a limited number of studies in the literature evaluating NGS data with deep learning methods [38]. Using TCGA data as input, Wong et al. utilized deep learning to model the relationship between genes and their corresponding proteins in relation to survival prognosis [9]. They presented a model which identifies different genes associated with glioblastoma survival, glioblastoma cancer cell migration, or glioblastoma stem cells. In another study, Young et al. used a deep learning algorithm to classify glioblastomas into six subtypes related to patient survival [39].
Batch size
Processing big datasets all at once takes a long time and leads to memory problems. To avoid wasted time and memory problems, the dataset is divided into small samples and the learning process is performed on these small pieces. The batch size defines the number of samples that will be propagated through the network [40].
Learning rate
The learning rate (LR), or step size, is defined as the amount by which the weights are updated during training. The LR parameter can be a fixed value or a scheduled value; for example, it can be set to 0.001 until a certain learning step of the algorithm and to 0.01 after this step. If this parameter is too small, learning will be slow; the larger its value, the greater the impact of each update on the model. For this reason, it is recommended to keep this value high at the beginning of the process and to decrease it after a certain number of epochs [41].
Epoch
Deep learning is used to make predictions on big data structures. Due to the large size of the matrices, the data is divided into smaller parts and processed in parts rather than training on the entire dataset at once. The number of epochs is a hyperparameter that determines the number of times the learning algorithm will iterate over the entire training dataset (with a training dataset of 1000 examples and a batch size of 10, it takes 100 iterations to complete one epoch). Each iteration comprises one forward pass and one backpropagation pass. As the number of epochs increases, the network's accuracy tends to increase, but performance improvements diminish or plateau after a certain number of epochs. When the training reaches the desired level (the point where the error and accuracy values are optimal), it can be terminated [42].
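The three hyperparameters defined above (batch size, learning rate, and number of epochs) interact exactly as in the following minimal mini-batch SGD loop. This is a from-scratch logistic-regression sketch on made-up data, not the study's actual deep learning model:

```python
import numpy as np

def train_logistic(x, y, epochs=50, batch_size=10, lr=0.1, seed=0):
    """Mini-batch SGD: one epoch is one full pass over the data. With 1000 examples
    and batch_size=10, each epoch takes 100 weight-update iterations."""
    rng = np.random.default_rng(seed)
    w = np.zeros(x.shape[1])
    for _ in range(epochs):
        order = rng.permutation(len(y))              # reshuffle each epoch
        for start in range(0, len(y), batch_size):
            idx = order[start:start + batch_size]
            p = 1.0 / (1.0 + np.exp(-x[idx] @ w))    # forward pass on one batch
            grad = x[idx].T @ (p - y[idx]) / len(idx)  # gradient of logistic loss
            w -= lr * grad                           # update scaled by the learning rate
    return w

# Illustrative linearly separable data: 1000 examples, 3 features.
rng = np.random.default_rng(1)
x = rng.normal(size=(1000, 3))
y = (x @ np.array([2.0, -1.0, 0.5]) > 0).astype(float)
w = train_logistic(x, y)
accuracy = (((x @ w) > 0) == y).mean()
```

Raising `epochs` or lowering `lr` in this loop reproduces in miniature the trade-off the study explores: more epochs at a smaller learning rate take longer but typically reach higher accuracy.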
Number of layers

One of the most important features that distinguishes deep learning from other artificial neural network algorithms is the number of layers, which enables it to handle complex problems successfully (Fig. 7). Increasing the number of layers can improve the learning performance of the model; however, as depth grows, weight updates propagated through backpropagation have a diminishing effect on the earliest layers [43]. The parameters used in the algorithms are presented in Table 1.

Performance evaluation

The evaluation criteria used to measure the predictive performance of the models were recall, accuracy, precision, AUC-ROC, and the F measure [37].

Results

The study presents the findings for the ART-simulated data, which reflect the distributions of variant numbers for two continental groups (European and non-European) as reported by the 1000 Genomes Project. These findings are summarized below. When the learning rate was set to 0.01 and the epoch value was set to 500, 1000, and 2000 in the deep learning model for the ART-simulated data, the mean accuracy values of the models were 0.6319 ± 0.0065, 0.6804 ± 0.0090, and 0.7333 ± 0.0167, respectively. The median accuracy values of the models were 0.6320 [0.6210-0.6430], 0.6800 [0.6650-0.6960], and 0.7340 [0.7060-0.7630], respectively. As the epoch value increased, the average accuracy of the model also increased (Table 2, Additional file 1: Table S2, and Fig. S1).

Secondly, the learning rate was decreased to 0.001, and the effect of the epoch value on the model was investigated. When the epoch values were set to 500, 1000, and 2000 in the deep learning models, the mean accuracy values were 0.6922 ± 0.0168, 0.7214 ± 0.0182, and 0.8014 ± 0.0386, respectively. The median accuracy values were 0.6920 [0.6640-0.7210], 0.7220 [0.6910-0.7530], and 0.8020 [0.7360-0.8690], respectively. As the epoch value increased, so did the accuracy of the model. The accuracy of the deep learning model increased as the learning rate decreased and the epoch value increased (Table 2, Additional file 1: Table S2 and Fig. S1).

The table also presents the performance of the XGBoost and LightGBM models. The average accuracy of the XGBoost model was 0.6987 ± 0.0081, with a median of 0.6990 [0.6850-0.7120], while the average accuracy of the LightGBM model was 0.6258 ± 0.0096, with a median of 0.6250 [0.6100-0.6430]. XGBoost thus achieved higher accuracy than LightGBM (Table 2, Additional file 1: Table S2 and Fig. 7).

Increasing the epoch values at low LR yielded higher accuracy for the DL algorithms than for the machine learning algorithms. The performance of the DL algorithms improved with high LR and epoch values, whereas DL models trained for few epochs showed lower accuracy than the machine learning algorithms. In particular, the XGBoost algorithm performed close to the DL algorithm at high epoch and LR values (Table 3, Additional file 1: Table S3 and Fig. S2).
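The evaluation criteria listed in the Performance evaluation section can be computed directly from the four cells of a binary confusion matrix. The counts below are illustrative only, not values from the study:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall (sensitivity) and F1 from confusion counts."""
    accuracy  = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1        = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical confusion counts for a two-class (e.g. European/non-European) model
m = classification_metrics(tp=40, fp=10, tn=35, fn=15)
print({k: round(v, 3) for k, v in m.items()})
```

AUC-ROC, the remaining criterion, is threshold-free and is computed from the continuous model scores rather than from a single confusion matrix.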
When evaluating the performance of the machine learning algorithms on the Chr 22 WGS data, the mean and median accuracy values of the XGBoost model were 0.5760 ± 0.0081 and 0.5760 [0.5730-0.5790], respectively, while those of the LightGBM algorithm were 0.5250 ± 0.0029 and 0.5250 [0.5200-0.5300], respectively. The performance of the XGBoost algorithm was higher than that of the LightGBM algorithm (Table 3, Additional file 1: Table S3 and Fig. S2).

Discussion

In this study, the prediction performance of deep learning and machine learning algorithms was demonstrated for ART simulation data and Chr 22 whole genome data, specifically for binary classification. The experiments were conducted on a cloud-based system, and optimized parameters were obtained. The storage, organization, and modeling of genetic data are among the most critical problems in the field, and the use of cloud systems accelerates these stages for researchers. The research demonstrated the impact of hyperparameter changes on deep learning models, and the performance of those models was compared with popular machine learning algorithms such as XGBoost and LightGBM. In addition, this study represents an innovative approach in terms of parameter optimization and performance evaluation on whole genome data using a cloud-based system.

Le et al.
[6] utilized deep learning to identify clathrin proteins, the deficiency of which in the human body leads to significant neurodegenerative diseases such as Alzheimer's. They employed the convolutional neural network (CNN) method with the following hyperparameters: epoch = 80, LR = 0.001, batch size = 10, dropout = 0.2. The model's performance was evaluated using both machine learning and deep learning methods, yielding a sensitivity of 92.2%, specificity of 91.2%, accuracy of 91.8%, and Matthews correlation coefficient of 0.83 on the independent dataset. While our study yielded similar findings to Le et al., we additionally presented model performance at different epoch values and LR values (0.01 and 0.001). Consequently, we demonstrated that deep learning can achieve significantly higher performance than machine learning algorithms, particularly at higher epoch values [6].

Akker et al. [42] developed a machine learning model that determines the accuracy of variant calls in capture-based next-generation sequencing. The model was tuned to eliminate false positives, i.e., variants identified by NGS but not confirmed by Sanger sequencing, and achieved an exceptionally high accuracy rate of 99.4%. Their study showed that NGS data has relevant properties for distinguishing low- and high-confidence variants using a machine learning-based model, although the researchers did not focus on hyperparameter optimization. Moreover, achieving high discrimination on low-coverage NGS data, which is smaller than whole genome sequencing data, with a machine learning algorithm is aligned with the findings of our study [42].

Marceddu et al.
used a dataset of 7976 NGS calls validated as true or false positives by Sanger sequencing to train and test different ML approaches. While gradient boosting classifier (GBC), random forest (RF), and decision tree (DT) algorithms were less affected by the imbalance in the dataset, the prediction performance of the linear support vector machine (LSVM), nearest neighbor (NN), and linear regression (LR) models was significantly more affected. It was also shown that, for medium-to-small datasets, the best-performing ML methods were DT, GBC, and RF. This demonstrates the potential of integrating machine learning with NGS data to reduce diagnosis time and costs. The high performance of boosting algorithms, which have been popular in recent years, even in the presence of data imbalance, is consistent with our results [43].

Sun et al. proposed the genome deep learning (GDL) method to examine the relationship between genomic variations and traits based on deep neural networks. They analyzed WES mutation data from 6083 samples across 12 cancer types from The Cancer Genome Atlas (TCGA) and WES data from 1991 healthy individuals from the 1000 Genomes Project. Based on GDL, they created 12 models to distinguish specific cancer types from healthy tissue, a general model to identify healthy versus cancerous tissue, and a mixed model to differentiate all 12 cancer types. The accuracy of the specific, mixed, and total models was 97.47%, 70.08%, and 94.70%, respectively, for cancer diagnosis. They thus reported an effective genomics-based method for the diagnosis of cancer. While the accuracy of their mixed model was at the performance level of the models in our study, no information about the model parameters was presented for the high-performing models. Although very high performance values were obtained in the study, parameter optimization was not mentioned
[44].

Maruf FA et al. [7] designed a novel ensemble model using a Deep Neural Network (DNN) and XGBoost to classify variants into two classes, somatic and germline, for given Whole Exome Sequencing (WES) data. The XGBoost algorithm was used to extract features from the results of variant callers, and these features were then fed into the DNN model as input. They noted that the DNN-Boost classification model outperformed the benchmark method in classifying somatic mutations from paired tumor-normal exome data and tumor-only exome data. Although very high performance values were obtained in the study, parameter optimization was not mentioned [7].

Miotto et al. [8] reported that deep learning outperforms machine learning methods in predicting the effects of non-coding mutations on gene expression and DNA, similar to our study [8].

The performance of the machine learning models in these studies was similar to the deep and machine learning performance on the real dataset in our study, and higher performance was achieved on our simulated data than in the summarized studies. These findings show that when genetic data are evaluated with appropriate models, the outputs are valuable in terms of saving time and supporting clinicians.
The recall and precision results for the predicted classes ("0 and 1", i.e., European/non-European ethnicity) in our study were found to be close to each other. This means that both deep and machine learning achieved balanced prediction performance across the two sample classes (labels). This result is important for obtaining acceptable models, especially in population-based or rare disease studies. Furthermore, studies in the literature have shown that deep learning also performs well on imbalanced classification. From this perspective, it can be concluded that the performance of deep learning in diagnosis-specific models yields reliable results both in detecting patients and in distinguishing healthy individuals. The analysis systems (cloud-based or local) on which secondary and tertiary analyses are performed, and the machine specifications used, directly affect the performance of DL models. Especially when modeling big data matrices such as the whole genome, the availability of such infrastructure allows for iterative processing and enables maximum performance to be obtained from the model through hyperparameter optimization.

Conclusion

Within the scope of this study, the problem of data storage in big data settings was eliminated by using a cloud system, making it easier to focus on modeling the data.

Fig. 1 Timeline of implementing deep learning algorithms in Genomics [10]

The individuals in this dataset consist of five different populations: British from England and Scotland (91 individuals), Finnish from Finland (99 individuals), Colombian from Medellin (94 individuals), Chinese (103 individuals), and individuals with African ancestry from the Southwest USA (61 individuals). The data were categorized by expert geneticists into 190 individuals of European ancestry and 258 individuals of non-European ancestry. The dataset consists of 448 FASTQ files, each containing an individual's variants on chromosome 22 of the human genome. VCF data were generated from the raw FASTQ data against the Human Genome 38 (GRCh38) reference genome [23].

Table 1 Parameters used in ML and DL for the ART simulation data and Chr 22 WGS data sets

Table 3 Performance comparison of deep learning and machine learning algorithms on the Chr 22 WGS data set
Increased iron deposition in nucleus accumbens associated with disease progression and chronicity in migraine

Background

Migraine is one of the world's most prevalent and disabling diseases. Despite huge advances in neuroimaging research, more valuable neuroimaging markers are still urgently needed to provide important insights into the brain mechanisms that underlie migraine symptoms. We therefore aim to investigate regional iron deposition in the subcortical nuclei of migraineurs, as compared to controls, and its association with migraine-related pathophysiological assessments.

Methods

A total of 200 migraineurs (56 with chronic migraine [CM], 144 with episodic migraine [EM]) and 41 matched controls were recruited. All subjects underwent MRI, and clinical variables including frequency/duration of migraine, intensity of migraine, the 6-item Headache Impact Test (HIT-6), the Migraine Disability Assessment (MIDAS), and the Pittsburgh Sleep Quality Index (PSQI) were recorded. Quantitative susceptibility mapping was employed to quantify the regional iron content in subcortical regions, and associations between clinical variables and regional iron deposition were studied as well.

Results

Increased iron deposition in the putamen, caudate, and nucleus accumbens (NAC) was observed in migraineurs compared with controls. Meanwhile, patients with CM had a significantly higher volume of iron deposits than those with EM in multiple subcortical nuclei, especially the NAC. The volume of iron in the NAC can be used to distinguish patients with CM from those with EM with a sensitivity of 85.45% and a specificity of 71.53%. As the most valuable neuroimaging marker among all the subcortical nuclei, higher iron deposition in the NAC was significantly associated with disease progression and with higher HIT-6, MIDAS, and PSQI scores.
Conclusions

These findings provide evidence that iron deposition in the NAC may be a biomarker for migraine chronicity and migraine-related dysfunction, and may thus help in understanding the underlying vascular and neural mechanisms of migraine.

Trial registration: ClinicalTrials.gov, number NCT04939922.

Background

Migraine is a highly prevalent disorder that imposes an enormous socioeconomic burden. While patients with chronic migraine (CM) account for only 1.4-2.2% of the general population globally [1], they usually have lower health-related quality of life and higher levels of disability [2] than patients with episodic migraine (EM). Annually, around 3% of EM patients evolve to CM [3]; however, the precise neural mechanism behind the chronicity of migraine remains incompletely understood.

The pathophysiology of migraine involves both vascular and neural mechanisms [4]. Although it is less clear what drives the activation of neuronal pain pathways in a susceptible patient, there is increasing evidence that the pathophysiology of migraine may, in part, be rooted in the dysfunction of subcortical structures [5-7]. During the migraine triggering process, neurons located in the trigeminal subnucleus caudalis (TNC) transmit glutamatergic signals to the thalamus. Thalamic neurons then project primarily to the somatosensory cortex, the insula, and the association cortex [5]. TNC neurons also connect to affective/motivational circuits via the nucleus tractus solitarius and the parabrachial nucleus, which have diffuse projections to the hypothalamus, thalamic nuclei, amygdala, insula, and frontal cortex. Finally, TNC neurons project directly to output structures involved in pain modulation, such as the hypothalamus and the periaqueductal gray (PAG) [5]. Consequently, subcortical regions play an important role in the neuronal pain pathways of migraine.
Meanwhile, during migraine attacks, inflammatory vasoactive peptides promote dilatation of the meningeal vessels [8,9], and the inflammatory response further contributes to disruption of the blood-brain barrier (BBB) [10]. The alteration of BBB integrity leads to increased iron permeability and deregulation of iron homeostasis [11]. As an electron facilitator, iron serves many brain functions, including myelin production and neurotransmitter synthesis [12], and has received increasing attention in recent years. Iron dysregulation, such as increased iron accumulation, may lead to the continual generation of radical species and toxic free radicals [13], impair dopamine synthesis [14], and, eventually, damage the nervous system. Hence, investigating iron deposition in the subcortical regions of migraineurs could help advance our understanding of the underlying mechanisms of the disorder and lead to the development of new and more effective treatments.

Using non-invasive techniques such as T2-weighted and T2*-weighted MR imaging, the signal reduction caused by iron provides an indirect way to visualize iron content. In migraineurs, increased iron deposition has been found in the PAG [15-17], putamen, and globus pallidus [16], and an inverse relationship was established between recurrent attacks and iron accumulation. However, previous studies have focused on only a limited set of subcortical brain regions, while other studies [18-20] have supported distinct roles for the amygdala, nucleus accumbens, and thalamus in migraine pathophysiology, suggesting that these regions deserve equal attention. Furthermore, quantitative analysis of iron deposition has not been performed in a larger population. In this sense, a more comprehensive investigation may help clarify the potentially modifiable role of iron accumulation in migraine-related functional disability.
Quantitative susceptibility mapping (QSM) is a recently developed post-processing technique for quantitatively assessing the magnetic susceptibility of tissue, and may thus provide improved image quality for visualization of the subcortical nuclei [21]. Compared to conventional T2* relaxometry, QSM derives values sensitive to iron levels and is therefore more selective for iron. Previous studies [22,23] using QSM showed increased iron deposition in total cerebral gray matter and in cortical regions such as the precuneus, insula, supramarginal gyrus, and postcentral gyrus in CM. However, cortical susceptibility estimates are more prone to surface effects, and streaking artifacts appear in the vicinity of large susceptibility gradients [24]. Although these artifacts can be partly suppressed by post-processing methods [25,26], this makes subcortical assessments more feasible and readily available in daily practice. Furthermore, regional iron deposition in the subcortical nuclei has not been fully investigated thus far. Therefore, in the current study, a susceptibility analysis would provide more valuable information for understanding the neural mechanism of CM.

This study aims to use QSM to comprehensively investigate the iron concentration of subcortical brain nuclei in patients with CM and EM as compared to healthy controls. The relationships between iron deposition and disease course, as well as functional disabilities, were also investigated.

Participants

This study was approved by the local Institutional Review Board, and written informed consent was obtained from all participants. From September 2021 to January 2023, individuals diagnosed with EM or CM according to the International Classification of Headache Disorders, 3rd edition, criteria were selected. Patients were recruited based on the following inclusion criteria: (1) age 18-70 years; (2) a confirmed diagnosis of EM or CM; (3) a history of migraine of more than 1 year.
Subjects were excluded if they had (1) high blood pressure; (2) coronary disease; (3) diabetes mellitus; (4) hypercholesterolemia; (5) infectious diseases; (6) chronic inflammatory conditions or other autoimmune conditions; (7) severe systemic diseases; (8) pregnancy or lactation; (9) obesity (body mass index > 30 kg/m²); (10) a smoking habit; or (11) recent consumption of antiplatelet or vasoactive drugs (> 4 times the median half-life of the active substance). Age- and sex-matched healthy controls were recruited from the community if they fulfilled all inclusion and exclusion criteria and were free of any headache or psychiatric disorder. Eventually, a total of 200 migraineurs (56 CM, 144 EM) and 41 matched controls were recruited.

Clinical assessment

All subjects underwent a medical interview covering demographic data (age, sex) and personal and family histories. For migraineurs, disease duration (measured in years from first symptoms), frequency of migraine attacks per month, migraine days per month, and peak headache pain intensity (measured by visual analog scale [VAS]) were registered. The 6-item Headache Impact Test (HIT-6) and Migraine Disability Assessment (MIDAS) were performed to measure the degree of migraine-related functional disability, and the Pittsburgh Sleep Quality Index (PSQI) was used to assess the sleep quality of migraineurs over the past month.

Image processing

The QSM images were reconstructed from GRE data using the SEPIA (SuscEptibility mapping PIpeline tool for phAse images) toolbox [27] in MATLAB (The MathWorks Inc., Natick, MA). Brain extraction was performed on whole-brain magnitude data using the BET tool of the FMRIB Software Library v6.0 (FSL; Oxford University, UK; https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FSL) from the MEDI toolbox. The phase images were unwrapped with SEGUE [28].
After unwrapping, the background field was removed with the regularisation-enabled SHARP (RESHARP) [29] filtering method. Lastly, magnetic susceptibility was quantitatively calculated using MEDI [29-31], and QSM images were generated. For MEDI, the mean susceptibility value of the cerebrospinal fluid (CSF) within a manually drawn ROI in the posterior lateral ventricles of each subject was used as the susceptibility reference [31,32]. While there is no consensus on the choice of susceptibility reference, most studies use the mean susceptibility value of white matter areas (frontal, occipital, etc.), the CSF, or the whole brain [33]. Nevertheless, a recent study demonstrated a high degree of agreement between regional susceptibility values obtained with different references (whole brain vs. CSF), and the findings remain repeatable regardless of the reference chosen [34].

Statistical analysis

Sex was recorded as a binary variable. Age, disease duration, migraine attacks per month, migraine days per month, VAS, HIT-6, MIDAS, and PSQI were recorded as continuous variables, and the one-sample Kolmogorov-Smirnov test was used to check the normality of all continuous variables. Demographics and clinical variables were compared between controls and migraineurs, and between CM and EM, using the independent-samples t-test and Mann-Whitney test for continuous variables and the chi-squared test for proportions. Analysis of variance (ANOVA) was performed to evaluate the regional differences in the iron-related metric among the three groups. Subsequently, Bonferroni post hoc analysis was applied to analyze the differences between each pair of groups. Partial correlation analysis was conducted to detect potential relationships between the regional iron-related metric and clinical variables in migraine patients overall, and in patients with CM and EM separately. All analyses were adjusted for age and sex.
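The partial correlation analysis mentioned above can be sketched, under simplifying assumptions, as correlating the residuals that remain after regressing each variable on a covariate. For brevity this hypothetical sketch adjusts for a single covariate (e.g., age), whereas the study adjusted for both age and sex:

```python
def mean(xs):
    return sum(xs) / len(xs)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def residualize(y, cov):
    """Residuals of y after simple linear regression on one covariate."""
    mc, my = mean(cov), mean(y)
    beta = (sum((c - mc) * (v - my) for c, v in zip(cov, y))
            / sum((c - mc) ** 2 for c in cov))
    return [v - (my + beta * (c - mc)) for v, c in zip(y, cov)]

def partial_corr(x, y, cov):
    """Correlation of x and y with the covariate's linear effect removed."""
    return pearson(residualize(x, cov), residualize(y, cov))

# Hypothetical values: a QSM metric, a clinical score, and ages as covariate
print(partial_corr([2, 4, 6, 9], [1, 2, 3, 5], [1, 2, 3, 4]))
```

In practice this is done with a statistics package (e.g., SPSS, as in the study); the residual formulation above is only meant to show what "adjusted for age and sex" computes.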
Bonferroni correction was applied to address the problem of multiple comparisons at the multiple-region level and to further control the type I error. A significance level of p < 0.05 was set for all statistical tests. A receiver operating characteristic (ROC) curve was used to evaluate the diagnostic efficacy of the QSM value, with an area under the curve (AUC) > 0.7 regarded as indicating reasonable diagnostic value. SPSS 22.0 (SPSS, Chicago, IL) was used for all the statistical analyses mentioned above.

Fig. 1 Summarized steps of the pipeline for image preprocessing. The phase images were unwrapped, and the background field was removed with the regularisation-enabled SHARP filtering method. Magnetic susceptibility was quantitatively calculated using MEDI, and quantitative susceptibility map (QSM) images were generated. T1, T2, and magnitude images were skull-stripped. Using the Advanced Normalization Tools (ANTs), SyN multimodal warping was performed with a joint T1 and T2 cost function to transform the high-resolution probabilistic subcortical brain nuclei atlas in CIT168 space to individual space. Eventually, the QSM value of each subcortical nucleus was extracted from each manually refined region of interest (ROI) based on the subcortical nuclei atlas

Demographics

A total of 200 patients with migraine and 41 normal controls (age- and sex-matched) were recruited, and all of them underwent the MRI scan. Of the patients with migraine, 144 were episodic, while the remaining 56 were chronic. Patient demographics and the statistical significance of group comparisons are summarized in Table 1. There was no significant difference in age or gender between migraineurs and normal controls. The mean age of patients with CM was 37.9 ± 11.9 years, and 75.4% were female; the mean age of patients with EM was 47.5 ± 15.4 years, and 77.8% were female. There was a statistically significant difference in age between patients with CM and those with EM (p < 0.001).
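As a minimal illustration of the ROC analysis described in the statistical methods (not the SPSS procedure actually used), the AUC can be computed with the rank-based Mann-Whitney formulation: the probability that a randomly chosen patient scores higher than a randomly chosen control. All values below are hypothetical:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC = P(random positive score > random negative score), ties count 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical QSM values (ppb) for patients vs. controls
patients = [28.1, 24.5, 31.0, 22.9]
controls = [18.2, 25.0, 16.5]
print(roc_auc(patients, controls))  # 10 of 12 pairs ordered correctly
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is why the study treats AUC > 0.7 as reasonable diagnostic value.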
Patients with CM showed significantly longer disease duration (p < 0.01), a higher frequency of attacks (p < 0.01), and more migraine days per month (p < 0.01) than those with EM. Higher VAS (p < 0.01), HIT-6 (p < 0.01), MIDAS (p < 0.01), and PSQI (p < 0.01) scores were also observed in patients with CM. Moreover, higher PSQI was associated with higher VAS (r = 0.225, p = 0.004), HIT-6 (r = 0.741, p < 0.001), and MIDAS (r = 0.764, p < 0.001).

Regional comparisons of the iron-related metric between groups

Significantly higher QSM values were observed in the Pu (p = 0.001), Ca (p = 0.002), and NAC (p < 0.001) in patients with CM compared to controls (Fig. 2). Patients with EM showed significantly higher QSM values in the NAC (p < 0.001) compared to controls. When compared to patients with EM, patients with CM had significantly higher QSM values in the Pu (p = 0.002), NAC (p < 0.001), SNc (p = 0.018), PBP (p = 0.003), and HN (p = 0.017). The differences between migraineurs and controls were not significant in the other subcortical brain nuclei.

ROC analysis of the QSM value

After calculation of receiver operating characteristic curves (Fig. 3), the area under the curve (AUC) for the QSM value of the NAC in distinguishing migraineurs from controls was 0.883 (95% CI 0.826-0.939). The optimal threshold was 22.91 ppb, which identified 72.9% of patients with migraine (sensitivity) and 86.8% of subjects without migraine (specificity). Similarly, the AUC for the QSM value of the NAC in distinguishing CM from EM was 0.797 (95% CI 0.734-0.860), with the optimal threshold yielding a sensitivity of 85.45% and a specificity of 71.53%.

Relationship between the iron-related metric and clinical variables

In migraineurs, the QSM values of the NAC were significantly associated with longer disease duration (r = 0.160, p = 0.045) and a higher frequency of attacks.

Discussion

Our study demonstrated that migraineurs had increased iron deposition in the Pu, Ca, and NAC compared with healthy controls. Meanwhile, patients with CM had a significantly higher volume of iron deposits in multiple subcortical brain nuclei, including the Pu, Ca, NAC, SNc, PBP, and HN, compared to those with EM.
The volume of iron in the NAC can be used to distinguish patients with migraine from controls with a sensitivity of 72.9% and a specificity of 86.8%, and CM from EM with a sensitivity of 85.45% and a specificity of 71.53%. Moreover, greater iron deposition in the NAC was significantly associated with greater migraine burden, as measured by longer disease duration, higher frequency of attacks, more migraine days per month, and higher scores on the HIT-6, MIDAS, and PSQI.

Although increased iron deposition in subcortical nuclei has been reported in migraine patients, a comprehensive and systematic comparison across subcortical nuclei has been lacking. Welch et al. [36] found increased iron accumulation in the PAG in patients with chronic daily headache, suggesting selectively impaired iron homeostasis in migraineurs, possibly caused by repeated migraine attacks. Another study [37] later confirmed these findings and showed increased iron concentrations in the Pu, RN, and GP in younger migraineurs compared to controls. Moulton et al. [38] reported altered functional connectivity in the basal ganglia, notably the Pu and Ca, compared to normal controls, accentuated by the frequency of migraine attacks. Our study showed an increased iron-related metric, as measured by increased QSM values, in the Pu, Ca, and NAC in migraineurs compared with healthy controls. We also found that patients with CM had higher iron accumulation in the Pu, Ca, NAC, SNc, PBP, and HN than those with EM. The increased iron levels in the brain, especially in subcortical regions around the basal ganglia, might be related to abnormal metabolic activity in specific regions and a potentially higher vulnerability to iron-induced oxidative stress [39].
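Reported "optimal thresholds" like the cut-offs above are conventionally chosen by maximising Youden's J (sensitivity + specificity - 1) along the ROC curve. A toy sketch with hypothetical marker values (not the study's data or its exact procedure):

```python
def sens_spec(threshold, pos, neg):
    """Sensitivity and specificity when values >= threshold are called positive."""
    sens = sum(1 for x in pos if x >= threshold) / len(pos)
    spec = sum(1 for x in neg if x < threshold) / len(neg)
    return sens, spec

def best_threshold(pos, neg):
    """Candidate cut-off maximising Youden's J = sensitivity + specificity - 1."""
    candidates = sorted(set(pos + neg))
    return max(candidates, key=lambda t: sum(sens_spec(t, pos, neg)) - 1)

pos = [28.1, 24.5, 31.0, 22.9]   # hypothetical marker values, patients
neg = [18.2, 25.0, 16.5]         # hypothetical marker values, controls
t = best_threshold(pos, neg)
print(t, sens_spec(t, pos, neg))
```

Other criteria (e.g., fixing a minimum specificity) can be substituted for Youden's J depending on the clinical cost of false positives versus false negatives.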
Repetitive episodes of neuroinflammation and hyperoxia lead to iron redistribution and iron imbalance in migraine patients [40,41], which can increase BBB permeability [42] and allow the release of inflammatory mediators, free radicals, vascular endothelial growth factor, matrix metalloproteinases, and micro-RNAs [43]. Increased iron loads and iron-mediated free radical production further cause degeneration of endothelial cells and opening of the BBB [44], creating a vicious circle. Eventually, an excessive amount of iron deposits renders the brain more vulnerable to oxidative stress, and may thus cause basal ganglia dysfunction by damaging synapses or modulating protein synthesis, leading to altered local levels of neurotransmitters [45]. Considering the significant role the basal ganglia play in the pathophysiology of pain in migraine [46-48], our study provides further evidence that structural, metabolic, and functional alterations in subcortical nuclei might be associated with increased migraine burden and disease-related disability during repeated episodes of migraine. Given that we found larger iron deposits in patients with CM than in those with EM, subcortical nuclei such as the Pu, Ca, NAC, and SN could be related to migraine chronicity.

Our correlation analysis showed that greater iron deposition in the NAC was significantly associated with greater migraine burden, as measured by longer disease duration, higher frequency of attacks, and more migraine days per month, suggesting a relationship between recurring attacks and the accumulation of iron [15,16,36]. A higher concentration of transferrin receptors in the NAC, high iron content in glial cells, and impaired iron homeostasis are possibly associated with neuronal dysfunction or neuronal damage in repeatedly activated networks involved in nociception.
A previous study [17] hypothesized that repeated migraine attacks could increase free-radical cell damage, and that the resulting increase in iron deposition could contribute to migraine chronicity. Stronger functional connectivity of the NAC to the medial prefrontal cortex (mPFC-NAC) has also been found in patients with chronic pain and was positively correlated with pain chronicity [49,50]. Liu et al. [20] observed significantly decreased regional CBF values in the left NAC in CM compared to controls, which might reflect a compensatory mechanism involving activation of the mPFC-NAC pathway. Considering that decreased CBF is linked with BBB compromise, and that a compensatory increase in CBV may lead to reperfusion injury of the BBB [51], the increased iron accumulation in the NAC might result from BBB leakage caused by focal hypoperfusion. In the current study, the QSM value of the NAC was able to distinguish patients with CM from those with EM with a sensitivity of 85.45% and a specificity of 71.53% (AUC = 0.797). Our study thus provides further evidence for the application of QSM in daily clinical practice to discriminate CM patients.

If increased iron concentrations in the NAC play a role in migraine chronicity, this might theoretically reflect a defective central pain-processing system related to dysfunction in several domains. The NAC is located at the junction of the basal nuclei and the limbic system, bordered laterally by the septum and medially and inferiorly by the caudate nucleus; it is connected to the preolfactory nucleus, with the ventral pallidum and olfactory tubercle on its ventral side. The NAC plays an important role in reward and punishment mechanisms, but studies on the NAC and migraine are limited. The current study found that increased iron deposits in the NAC were also associated with a higher level of migraine-related functional disability, as measured by the HIT-6 and MIDAS, both widely accepted measures of headache-related disability and its impact on quality of life.
As a key node of neural circuits projecting to multiple pain structures and mediating motivated behaviors [52,53], NAC is associated with pain medication and, when functionally disrupted, plays a role in migraine and hyperalgesia comorbidities. One study shows that NAC and pain sensitization are closely related to chronic pain, and that neurogenesis of medium spiny neurons in the NAC continues into adulthood and is enhanced by pathological pain [54]. Other studies [55] provide evidence that lower NAC volume confers risk for developing chronic pain, and that altered NAC activity is a signature of the chronic pain state. This evidence emphasizes the potential role of NAC as a target brain region to track patient disability and aid in the monitoring of treatment regimens. In addition, increased regional iron deposits in NAC were associated with worse sleep quality in migraineurs. Migraineurs usually have worse sleep quality than nonmigraineurs [56][57][58], and our study showed that this condition is even worse in patients with CM than in those with EM. The association between migraine and sleep disorders is underlined by the intimate relationship in their clinical presentation [57,59,60] and by the presence of shared anatomical pathways [61]. During sleep, the BBB and fluid systems play essential roles in the removal of metabolic overload. While sleep can promote clearance of toxic metabolites [62], sleep disruption may result in the accumulation of neurotoxic waste products. In patients with primary insomnia [63], significantly increased iron deposition in multiple subcortical nuclei was observed, indicating the important role of iron concentration as a biomarker for sleep disorders. Meanwhile, NAC is a newly recognized sleep-regulating area that acts through the integration of motivational stimuli [64]. This might explain why NAC is particularly prone to focal iron deposition in migraineurs.
Limitations
Despite its novelty, the current prospective study is prone to several limitations. One important limitation is that our results are based on cross-sectional observation; longitudinal data are needed to justify such a conclusion. Second, the current study included patients across a wide age range. While this approach allowed us to observe a diverse sample of migraineurs regardless of age, it also limited our ability to draw conclusions about age-specific effects and their associated comorbidities. For instance, elderly patients are usually more prone to depression [65]. Considering that patients with significant clinical depression might influence the results, we excluded subjects referred to our headache clinic who scored 11 or higher on the HADS. Future studies focusing on specific hypotheses about different age groups and their clinical characteristics for migraine and associated psychological problems might help us to understand the problem. Third, the patients with CM were significantly older than the patients with EM. In the current study, age was positively associated with widespread iron deposition in subcortical nuclei, which is consistent with the known age-related iron deposition in both cortical and subcortical regions [66][67][68][69] despite the high spatial variation in iron distribution. Gender has also been associated with iron deposition. Our study showed a lower level of iron concentration in multiple subcortical nuclei of women, in agreement with previous studies [70,71]. To control for the effects of age and gender on iron concentrations, we regressed out age and sex as covariates of no interest. Model adjustment alone cannot completely eliminate the effects of age and sex, so future studies are warranted to better address this issue. Finally, while structural T1 images were collected, the analysis of subcortical structural changes was not included in the current study.
Future studies exploring the structural changes of the brain and their relationship with iron deposition in patients with migraine would further our understanding of the spatiotemporal patterns of iron-related neurodegeneration.
Conclusions
In conclusion, we have demonstrated that there is increased iron deposition in multiple subcortical nuclei, especially NAC, in patients with CM, and that the regional iron accumulation level in NAC can be used to distinguish CM patients from EM patients. More importantly, the increased iron deposition in NAC was associated with higher disease burden, higher migraine-related disability, and worse sleep quality, suggesting a potential role as a neuroimaging marker to track patient disability and aid in the monitoring of treatment regimens. These results provide further evidence for future research efforts to understand the underlying vascular and neural mechanisms behind the pathophysiology of migraine.
Forecasting indicators of thermal oxidative stability of lubricating oils
The results of testing partially synthetic Total Quartz 10W-40 SL/CF motor oil for thermo-oxidative stability in terms of optical density, volatility and the coefficient of thermo-oxidative stability are presented, taking into account the thermal energy absorbed by the products of oxidation and evaporation. Linear dependences of the decimal logarithm of the thermal energy absorbed by the products of oxidation and evaporation, and of the total thermal energy absorbed by these products, on the decimal logarithm of the test time are established in the temperature range from 160 to 190 °C. A graphical-analytical model is proposed for predicting the indicators of thermo-oxidative stability, determined by the dependence of the decimal logarithm of the thermal energy absorbed by the products of oxidation and evaporation on the thermostating temperature.
Introduction
As a result of studying the mechanism of oxidation of lubricating oils [1][2][3], it was found that the formation of water, resins and ester acids increases the acidity of the oil, which many authors have taken as a criterion for assessing the intensity of oxidation processes [4,5]. In [6], a formula was proposed for calculating the time to reach a set value of the acid number at a temperature T_x based on data measured at other temperatures. In [7], an alternative method for predicting the indices of thermo-oxidative stability using photometry and an analytical model was proposed, in which formula (2) interpolates the time t_x to reach set values of thermo-oxidative stability at a temperature T_x from the times t_1 and t_2 measured at temperatures T_1 and T_2. Using formula (2), it is possible to determine the established values of the optical density, volatility and coefficient of thermo-oxidative stability, taking into account the processes of oxidation and evaporation, for a selected temperature T_x based on the temperatures T_1 and T_2. It should be noted that lubricating oil cannot absorb thermal energy indefinitely.
Therefore, its excess is discharged in the form of oxidation and evaporation products. The amount of thermal energy Q absorbed by the lubricating oil is determined by the product Q = T·t, where T is the test temperature, °C, and t is the test time during which excess thermal energy was released, h. The amount of thermal energy absorbed by the oxidation products is determined by the product Q_ox = D·T·t, where D is the optical density of the lubricating oil during the test time t. The amount of thermal energy absorbed by the evaporation products is Q_ev = K_G·T·t, where K_G is the volatility coefficient. The total amount of thermal energy absorbed by the products of oxidation and evaporation is determined by the formula Q_Σ = K_tos·T·t, where K_tos is the value of the coefficient of thermo-oxidative stability during the test time t. The purpose of this research is to substantiate a graphical-analytical model for predicting thermo-oxidative stability indicators based on data obtained at three test temperatures or with a reduced temperature-control time.
Materials and methods
The universal, all-weather, partially synthetic Total Quartz 10W-40 SL/CF engine oil was chosen for the study. A thermostat, a photometric device and electronic scales were used as the means of control and testing. An oil sample of constant weight (100 ± 0.1 g) was thermostated sequentially at temperatures of 160 °C, 170 °C, 180 °C and 190 °C while being stirred with a mechanical stirrer at 300 rpm. At regular intervals (10 hours), the sample of oxidized oil was weighed and the mass of evaporated oil was determined; a part of the sample (2 g) was taken for direct photometry and determination of the optical density D = lg(300/R), where 300 is the photometer reading in the absence of oil in the cuvette, μA, and R is the photometer reading with the oil-filled cuvette, μA. The coefficient of thermo-oxidative stability was calculated from the optical density and evaporation data as K_tos = D + K_G, where K_G = m/M, m is the mass of oil evaporated during the test time t, g, and M is the mass of the oil sample before testing, g.
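The quantities above can be combined in a short sketch. The formula shapes (Q = T·t, Q_ox = D·T·t, D = lg(300/R), K_tos = D + m/M) are reconstructed from the definitions given in the extracted text and should be treated as assumptions; the input numbers are illustrative:

```python
import math

def optical_density(r_no_oil, r_oil):
    """Beer-Lambert optical density from photometer currents (muA)."""
    return math.log10(r_no_oil / r_oil)

def thermo_oxidative_indicators(T, t, r_oil, m_evap, m_sample, r_no_oil=300.0):
    """Indicators implied by the text; the formula shapes are assumptions."""
    D = optical_density(r_no_oil, r_oil)   # optical density
    K_G = m_evap / m_sample                # volatility coefficient m/M
    K_tos = D + K_G                        # coefficient of thermo-oxidative stability
    Q = T * t                              # thermal energy absorbed by the oil
    return {
        "D": D,
        "K_tos": K_tos,
        "Q_ox": D * Q,          # absorbed by oxidation products
        "Q_evap": K_G * Q,      # absorbed by evaporation products
        "Q_total": K_tos * Q,   # absorbed by both product groups
    }

# Illustrative inputs: 160 degC, 50 h, photometer reading 261 muA with oil,
# 2.27 g evaporated from a 100 g sample.
ind = thermo_oxidative_indicators(T=160.0, t=50.0, r_oil=261.0,
                                  m_evap=2.27, m_sample=100.0)
print(round(ind["D"], 3), round(ind["K_tos"], 3))
```

With these inputs the optical density comes out near the 0.06 value the paper quotes for 160 °C at 50 h, and Q_total equals Q_ox + Q_evap by construction.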
Study of the optical density of lubricating oils
The investigated engine oil was tested at temperatures of 190 and 180 °C until the optical density reached a value of 0.5...0.6, while at temperatures of 170 and 160 °C the tests were carried out for 30 hours, with samples of oxidized oil taken after every 10 hours of testing for determination of the optical density, volatility and coefficient of thermo-oxidative stability. According to the data obtained, the indicators of thermo-oxidative stability were then predicted in 10-hour steps up to 90 hours of testing for a temperature of 170 °C and up to 140 hours for a temperature of 160 °C. At the same time, the engine oil tests were continued to verify the convergence of the experimental and calculation methods for determining the indicators of thermo-oxidative stability. Figure 1a shows the dependences of the decimal logarithm of the thermal energy absorbed by the oxidation products on the decimal logarithm of the time and on the temperature of the engine oil test. At a temperature of 190 °C (line 1) the oil was tested for 30 hours, and at a temperature of 180 °C for 60 hours (line 2). These dependences are described by linear equations for the respective temperatures. For temperatures of 170 °C (line 3) and 160 °C (line 4), the test oil was initially tested for 30 hours, with analyses carried out every 10 hours. Based on the resulting three values of the decimal logarithm of the thermal energy absorbed by the oxidation products, graphical dependences on the decimal logarithm of time were constructed, which are described by linear equations for the respective temperatures, with the test time extended in 10-hour steps and oil samples taken for analysis. For example, at a temperature of 160 °C it is necessary to determine the optical density at a test time of 50 hours (lg t ≈ 1.7) using formula (13) or Figure 1.
First, the decimal logarithm of the thermal energy absorbed by the oxidation products is determined (Table 1); at a test time of 50 hours, the optical density was 0.06, and the relative error was 1.66%. Further tests of the oil at temperatures of 170 and 160 °C showed that the experimental data fell on lines 3 and 4. In order to predict the values of the thermal energy absorbed by the oxidation products and to determine the optical density at other test temperatures, a graphical-analytical model is presented, represented by the dependence of the decimal logarithm of the thermal energy absorbed by the oxidation products on the test temperature (Figure 1b). This dependence is described by a linear equation. Comparing with the experimental data (Table 1), at a test time of 50 hours the volatility was 0.0227 g, and the relative error was 0.44%. To predict the values of the thermal energy absorbed by the evaporation products and to determine the volatility at other temperatures, we use a graphical-analytical model represented by the dependence of the decimal logarithm of the thermal energy absorbed by the evaporation products on the test temperature of the engine oil (Figure 2b). This dependence is described by a linear equation. Comparing the calculated value of the coefficient of thermo-oxidative stability, K_tos = 0.089, with the experimental value (Table 1), K_tos = 0.09, the relative error was 4.5%. To predict the values of the total thermal energy absorbed by the products of oxidation and evaporation, and to determine the coefficient of thermo-oxidative stability at other temperatures, we use a graphical-analytical model represented by the dependence of the decimal logarithm of the total thermal energy absorbed by the products of oxidation and evaporation on the test temperature of the engine oil (Figure 3b). This dependence is described by a linear equation.
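The prediction procedure (fit lg Q_ox against lg t from early samplings, extrapolate, then back out the optical density from the assumed relation Q_ox = D·T·t) can be sketched as follows; the sample data are synthetic, chosen so that the extrapolation reproduces the D = 0.06 value quoted for 160 °C at 50 h:

```python
import math

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

T = 160.0
# Synthetic early samplings (t in h, optical density D):
samples = [(10.0, 0.012), (20.0, 0.024), (30.0, 0.036)]

lg_t = [math.log10(t) for t, _ in samples]
lg_Q = [math.log10(D * T * t) for t, D in samples]  # assumes Q_ox = D*T*t

a, b = fit_line(lg_t, lg_Q)

# Extrapolate lg Q_ox to t = 50 h, then back out the predicted density:
t_pred = 50.0
Q_pred = 10.0 ** (a * math.log10(t_pred) + b)
D_pred = Q_pred / (T * t_pred)
print(round(D_pred, 4))
```

The same fit-and-extrapolate step, applied to lg Q against the thermostating temperature instead of lg t, gives the temperature-based graphical-analytical model of Figures 1b-3b.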
Characterization of a major QTL for manganese accumulation in rice grain
Some diets lack sufficient manganese (Mn), an essential mineral. Increasing Mn in grain by biofortification could prevent Mn deficiency, but may increase levels of the toxic element cadmium (Cd). Here, we investigated Mn in rice (Oryza sativa) grains in recombinant inbred lines (RILs) from the cross of 93-11 (low grain Mn) with PA64s (high grain Mn). Quantitative trait locus (QTL) analysis to identify loci controlling grain Mn identified a major QTL, qGMN7.1, on the short arm of chromosome 7; qGMN7.1 explained 15.6% and 22.8% of the phenotypic variation in the RIL populations grown in two distinct environments. We validated the QTL with a chromosome segment substitution line (CSSL), CSSL-qGMN7.1, in the 93-11 background harboring qGMN7.1 from PA64s. Compared to 93-11, CSSL-qGMN7.1 grain had increased Mn and decreased Cd concentrations; CSSL-qGMN7.1 roots also showed enhanced Mn uptake. Fine mapping delimited qGMN7.1 to a 49.3-kb region containing OsNRAMP5, a gene responsible for Mn and Cd uptake. Sequence variations in the OsNRAMP5 promoter caused changes in its transcript level, and in grain Mn levels. Our study thus cloned a major QTL for grain Mn concentration in rice, and identified materials for breeding rice for high Mn and low Cd concentrations in the grain.
Results
The qGMN7.1 allele from PA64s significantly increased grain Mn concentration. The RIL population from the rice super hybrid LYPJ and the hybrid parents were grown in two different environments, Hainan (110.0 E, 18.5 N) and Hangzhou (120.0 E, 30.1 N), China. Mature seeds were harvested for determining the grain Mn concentration. The concentration of Mn in the grains was significantly different between the parents in both Hainan and Hangzhou, with concentrations in PA64s approximately 2-fold higher than in 93-11 (Fig. S1 and Table S1).
The RIL population showed a wide range of phenotypic variation, in a continuous distribution (Fig. S1). Using the high-resolution SNP map, we detected 12 QTLs for grain Mn concentration distributed on all chromosomes except for chromosomes 10, 11, and 12 (Fig. S2 and Table S2). Among those QTLs, 5 were identified in the RIL populations grown in both Hainan and Hangzhou, and 8 had additive effects coming from PA64s. One major QTL with the highest LOD value, qGMN7.1, was mapped between markers SNP7-53 and SNP7-64 on the short arm of chromosome 7 and explained 15.6% and 22.8% of the phenotypic variation in the RIL populations grown in Hangzhou and Hainan, respectively (Fig. S2 and Table S2). Physiological characteristics of CSSL-qGMN7.1. We performed a series of physiological experiments to determine the physiological mechanism underlying the increased grain Mn concentration conferred by qGMN7.1. In a time-course experiment of grain Mn accumulation, no significant difference was found between 93-11 and CSSL-qGMN7.1 at the early grain-filling stage, although both lines showed decreasing accumulation with time (Fig. 2a). At the 18th day after heading, the grain Mn concentration stabilized, and from the 24th day after heading it was significantly higher in CSSL-qGMN7.1 than in 93-11. The difference remained significant through maturity (Fig. 2a). Overall, CSSL-qGMN7.1 and 93-11 had a similar pattern of Mn accumulation, and qGMN7.1 functioned during the late grain-filling stage. We also measured Mn concentrations in flag leaves at different times after heading and found that Mn increased in both CSSL-qGMN7.1 and 93-11, with higher levels in CSSL-qGMN7.1, from the late filling stage to maturity (Fig. 2b). To determine whether Mn is transferred from other organs, such as the flag leaf, into the grain at the late filling stage, we removed flag leaves and measured the effect on Mn in the grains.
Removal of flag leaves at the heading and filling stages did not affect Mn accumulation in the grains (Fig. 2c, d). These results suggested that Mn is not transferred from flag leaves to the grains from the grain-filling stage through maturity. As the distribution of Mn in the aboveground parts has been reported to play an important role in grain Mn accumulation, we analyzed the Mn concentration in different organs at maturity. The highest Mn concentration was observed in flag leaf blades, with approximately 3,500 mg·kg−1 dry weight (DW), and the lowest in the grains, with only about 50 mg·kg−1 DW (Fig. 3a). Compared to 93-11, CSSL-qGMN7.1 accumulated higher concentrations of Mn in the grains, lemma, panicle branches, and flag leaf blades. However, Mn concentrations in other leaves and stems were almost the same in both lines (Fig. 3a). An analysis of the proportion of Mn content in different organs relative to the whole plant showed that the ratio of grain Mn content to whole-plant Mn content was similar between CSSL-qGMN7.1 and 93-11 (Fig. 3b). These results indicate that the higher Mn concentration in the grains of CSSL-qGMN7.1 did not result from greater distribution from other organs, but from higher assimilation in the roots. To confirm this, we performed a short-term (30 min) uptake experiment using intact roots of 93-11 and CSSL-qGMN7.1. The Mn uptake at 4 °C was much lower than that at 25 °C in both lines, but CSSL-qGMN7.1 exhibited higher Mn uptake than 93-11 at both temperatures (Fig. 4a). The net Mn uptake, calculated by subtracting the Mn uptake at 4 °C from that at 25 °C, was significantly greater in CSSL-qGMN7.1 than in 93-11 (Fig. 4b). Although CSSL-qGMN7.1 and 93-11 had a similar affinity for Mn, the value of V_max for CSSL-qGMN7.1 (118.6 mg·kg−1 root DW·h−1) was significantly higher than that for 93-11 (103.4 mg·kg−1 root DW·h−1) (Fig. 4). From the F2 mapping population, 12 recombinants within the qGMN7.1 interval were identified (Fig. 5).
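The V_max comparison above presumes Michaelis-Menten uptake kinetics. A minimal sketch of estimating the two constants from concentration/rate data via a Lineweaver-Burk fit; the rates below are synthetic, generated from V_max = 118.6 and an assumed K_m = 5.0, so only the V_max value echoes the paper's CSSL figure:

```python
def michaelis_menten_fit(S, v):
    """Fit v = Vmax*S/(Km + S) by linear regression of 1/v on 1/S
    (Lineweaver-Burk): slope = Km/Vmax, intercept = 1/Vmax."""
    xs = [1.0 / s for s in S]
    ys = [1.0 / r for r in v]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    Vmax = 1.0 / intercept
    Km = slope * Vmax
    return Vmax, Km

# Hypothetical Mn concentrations and net uptake rates, generated from
# Vmax = 118.6 and an assumed Km = 5.0 so the fit recovers them exactly:
S = [0.5, 1.0, 5.0, 10.0, 50.0, 100.0]
v = [118.6 * s / (5.0 + s) for s in S]

Vmax, Km = michaelis_menten_fit(S, v)
print(round(Vmax, 1), round(Km, 2))
```

With real, noisy uptake data a direct nonlinear least-squares fit is usually preferred over the double-reciprocal transform, which over-weights low-concentration points.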
Nine newly developed insertion/deletion (InDel) markers, well distributed within the interval, were used to further genotype the 12 recombinants (Fig. 5b and Table S3). We tested the F2:3 progeny of the recombinants in Hangzhou in 2016. Grain Mn concentrations showed no significant difference among the three genotypes of recombinants Line 1, Line 2, and Line 8 (Table 1). For the other recombinant lines, qGMN7.1 segregated and significant differences in phenotype were found among the three genotypes of qGMN7.1 (Table 1). Based on the phenotypes and genotypes of these recombinants, we delimited qGMN7.1 to a region of approximately 49.3 kb between markers L8857 and L8906 (Fig. 5c, d). [Figure 2 caption: (a, b) Grain and flag-leaf Mn concentrations of 93-11 (red) and CSSL-qGMN7.1 (blue) at 0, 3, 6, 9, 12, 18, 24, 30, and 36 days after heading; (c, d) grains tagged and harvested at maturity, and flag leaves sampled, at the heading, grain-filling, and maturity stages. Vertical bars represent the standard deviation (n = 4). * and ** indicate 5% and 1% significance levels, respectively, by t test.] We also measured the Cd concentration in the grains of the F2:3 progeny of the recombinants. The three genotypes of recombinants Line 1 to Line 4 and Line 6 showed no significant difference in grain Cd concentration, whereas recombinants Line 8 to Line 10 and Line 12 exhibited segregating phenotypes (Table 1). Combined with the genetic recombination sites of these recombinants (Fig. 5c), we concluded that qGMN7.1 had little influence on Cd accumulation in the grain.
Furthermore, the Cd concentration in the grain of recombinant Line 6 was about 300 µg·kg−1 DW, much lower than that of recombinants Line 1 to Line 4 (500 to 600 µg·kg−1 DW), indicating that the allele from PA64s could greatly decrease grain Cd concentration. OsNRAMP5 is the candidate gene for the grain Mn accumulation trait. The Rice Genome Annotation Project (http://rice.plantbiology.msu.edu/) predicted five genes in the 49.3-kb target region of qGMN7.1 (Fig. 5d): LOC_Os07g15350 encoding a transposon, LOC_Os07g15360 and LOC_Os07g15390 encoding retrotransposons, LOC_Os07g15400 encoding an expressed protein, and LOC_Os07g15370 encoding a metal transporter. Because LOC_Os07g15370 has been reported previously as OsNRAMP5, encoding a major transporter for Mn and Cd 12, it was considered the most likely candidate gene for the grain Mn accumulation trait of qGMN7.1. Sequence alignment of OsNRAMP5 between the two parents, 93-11 and PA64s, revealed no polymorphisms in the coding sequence, but 12 variations in the promoter region (Figs 5e and 6a). These sequence variations might alter the transcript levels and be responsible for the different phenotypes. Therefore, we measured the expression levels of OsNRAMP5 in 93-11 and CSSL-qGMN7.1 at different developmental stages. At the seedling stage, CSSL-qGMN7.1 had significantly higher transcript levels of OsNRAMP5 than 93-11, with the largest difference, 3.7-fold, found in the roots. Higher transcript levels were also observed in CSSL-qGMN7.1 at the booting stage, particularly in the roots (Fig. 6b, c). We then compared the promoter activity of OsNRAMP5 between 93-11 and PA64s by transient expression in rice protoplasts. The green fluorescence signals of GFP driven by the PA64s promoter were stronger than those driven by the 93-11 promoter (Figs 6d and S4), and the GUS transcript levels driven by the PA64s promoter were also higher than those driven by the 93-11 promoter (Fig. 6e).
These results suggested that the OsNRAMP5 promoter from PA64s is stronger than that from 93-11. To validate that OsNRAMP5 is responsible for Mn accumulation in the grain, we overexpressed it in 93-11. A significantly larger abundance of OsNRAMP5 transcript was found in the roots (2-fold) and shoots (2- to 3-fold) of the overexpression lines than in 93-11 (Fig. 6f). The overexpression lines accumulated more Mn in the grains than 93-11 when grown in pots (Fig. 6g). However, the Cd concentration in the grains was nearly equal in 93-11 and the overexpression lines (Fig. 6h). Therefore, we concluded that OsNRAMP5 is responsible for the grain Mn accumulation trait of qGMN7.1. To gain further insight into the variations of the OsNRAMP5 promoters, we isolated and compared the 2-kb 5′-flanking regions of OsNRAMP5 from 30 different rice varieties. Based on the promoter sequences, three haplotypes were identified (Fig. 7a and Table S4). Among the 30 rice varieties, 13 had the same haplotype as PA64s (designated type I) and 14 coincided with that of 93-11 (designated type II). The promoters from varieties TN1, NJ6, and No.565 were consistent with type II, with the exception of nucleotide variations at positions −1,866 (A → T) and −1,550 (G → T) from the start codon ATG (these were designated type III) (Fig. 7a). Compared to type I, the rice varieties containing type II and III promoters exhibited lower expression of OsNRAMP5 and lower accumulation of Mn in the grains (Fig. 7b, c).
Discussion
To date, hundreds of QTLs related to grain mineral elements have been identified in rice, but few have been fine mapped or cloned. In this study, we analyzed the Mn concentration in the grains of the RILs derived from the rice super hybrid LYPJ and found 12 putative QTLs in two testing environments. Among them, qGMN7.1, detected on the short arm of chromosome 7 in both environments, accounted for the largest proportion of phenotypic variation (Fig. S2 and Table S2).
In the same chromosomal region, a major QTL was also detected by Ishikawa et al. Previous studies showed that Mn content was significantly correlated with the contents of other mineral elements in grains, such as Mg, Fe, Zn, or Cu, suggesting that the contents of these elements might be controlled by common genes 23,24. However, the concentrations of Mg, Fe, Zn, and Cu in the grains were nearly equal between CSSL-qGMN7.1 and 93-11 (Fig. 1d-g). Though the Cd concentration in the grains of CSSL-qGMN7.1 was much lower than that of 93-11, fine mapping of qGMN7.1 demonstrated that it had little influence on Cd accumulation in the grains (Table 1). These results implied that qGMN7.1 might be specialized for controlling Mn accumulation in the grains. Considering that agronomic traits could also affect the accumulation of elements in grains 27,28, we investigated 9 traits of 93-11 and CSSL-qGMN7.1 and found no significant differences between them (Table S5). Therefore, CSSL-qGMN7.1 is an ideal material for rice breeding due to its improved Mn concentration and decreased Cd concentration in the grains without an accompanying loss of yield. The role of OsNRAMP5 in controlling Mn uptake and transport has been reported in rice [12][13][14][15]29. OsNRAMP5 is constitutively expressed in the roots and encodes a plasma membrane-localized protein that belongs to the natural resistance-associated macrophage protein (NRAMP) family, whose members function as proton-coupled metal ion transporters that can transport Mn2+, Zn2+, Cu2+, Fe2+, Cd2+, Ni2+, Co2+, and Al3+ 12,30. OsNRAMP5 encodes a major transporter responsible for Mn uptake in rice; knockout of OsNRAMP5 resulted in a significant decline in grain Mn concentrations compared with the wild type 12. Ishimaru et al. also suggested that OsNRAMP5 could play a role in Mn transport during flowering and seed development 15.
Based on the RILs and the backcross population, we fine mapped qGMN7.1 to a 49.3-kb region containing OsNRAMP5 (Fig. 5d and Table 1). Although we did not find any alterations in the coding sequence of OsNRAMP5 between 93-11 and PA64s, we did find nucleotide differences in the promoter region (Figs 5e and 6a). These sequence variations lead to differences in the expression level of OsNRAMP5 and in Mn accumulation in the grains between CSSL-qGMN7.1 and 93-11 (Fig. 6b, c). Overexpression of OsNRAMP5 in the 93-11 variety increased the grain Mn concentration (Fig. 6f, g). Therefore, we inferred that the expression level of OsNRAMP5 contributed to Mn accumulation in the grains. Variations in promoter sequences commonly lead to phenotypic variation in rice [31][32][33][34][35]. The expression of OsNRAMP1 in roots was higher in high Cd-accumulating varieties (Habataki, Anjana Dhan, Jarjan) than in low Cd-accumulating varieties (Sasanishiki, Nipponbare) due to a 400-bp deletion in the promoter region of OsNRAMP1 in the high Cd-accumulating varieties 31. [Table 1: Progeny test of nine recombinants for confirmation of the fine-mapped region of qGMN7.1. Note: Type 1, type 2, and type 3 in each panel represent the segregated genotypes of the recombinants. '9', 'P', and 'H' represent the 93-11 homozygote, the PA64s homozygote, and the heterozygote of the parents, respectively. * and ** indicate 5% and 1% significance levels compared to type 1, respectively, by t test (n = 6).] Consistent with previous reports, we also found that some low Mn-accumulating varieties had OsNRAMP5 promoter sequences similar to 93-11, which contains low concentrations of Mn in the grains, whereas high Mn-accumulating varieties, including Nipponbare, exhibited promoter sequences similar to PA64s, which is known for high grain Mn concentrations (Fig. 7).
Four major transport processes are involved in the accumulation of mineral elements: (1) root uptake, (2) root-to-shoot translocation by xylem flow, (3) distribution in aboveground tissues, and (4) remobilization from leaves 9. In our study, Mn content in the flag leaves increased from the heading stage to maturity in the time-course experiment (Fig. 2b). Removal of the flag leaves at the heading and grain-filling stages did not affect Mn accumulation in the grains (Fig. 2c, d). Therefore, we speculated that the elevation of Mn concentrations in the grain did not occur through remobilization from the leaves. In addition, CSSL-qGMN7.1 and 93-11 showed little difference in Mn distribution (Fig. 3). However, higher Mn uptake activity was found in CSSL-qGMN7.1 compared to 93-11 (Fig. 4). Considering that OsNRAMP5 was constitutively expressed in roots (Fig. 6b, c) and that its expression was higher in CSSL-qGMN7.1 than in 93-11, we concluded that OsNRAMP5 is responsible for the increased Mn concentrations in the grain by enhancing Mn uptake in the roots. OsNRAMP5 has also been reported to function as a Cd/Fe transporter [12][13][14]36,37. OsNRAMP5-knockdown rice lines accumulated more Cd in the shoots, but their total Cd content was lower than in wild-type plants 36. The OsNRAMP5 knockout line lost the ability to take up Mn and Cd concurrently 12, and the osnramp5 mutant exhibited decreased Cd (as well as Mn) concentrations in the straw and grain 13. These studies demonstrated that the entry of Cd into rice root cells occurs via this Mn transporter, OsNRAMP5. However, in our study, CSSL-qGMN7.1 accumulated lower amounts of Cd than 93-11, in contrast to its higher amounts of Mn compared to 93-11 (Fig. 1b, c). The recombinants that showed segregation at the qGMN7.1 region exhibited no significant differences in grain Cd concentrations (Table 1), and the overexpression lines of OsNRAMP5 did not accumulate more Cd in the grains compared with 93-11 (Fig.
6h). A possible explanation is that when plants are grown on relatively high-Mn and low-Cd soils (Hangzhou: pH = 6.04 ± 0.02; 480.35 ± 51.02 mg/kg Mn; 0.64 ± 0.12 mg/kg Cd), Cd does not readily accumulate in the grains. Additionally, an antagonistic effect may exist between Mn and Cd uptake; that is, OsNRAMP5 is mainly responsible for the transport of Mn, but not Cd, when Mn is abundant. Alternatively, another locus affecting grain Cd concentration may exist in the substituted segments of CSSL-qGMN7.1 (Table 1), which displayed lower Cd accumulation levels when compared with 93-11.
Materials and Methods
The chromosomal segment substitution line, CSSL-qGMN7.1, was selected from the advanced backcross population (BC4F2) derived from a cross of the recurrent parent 93-11 and the donor parent PA64s (Table S3). In 2015, 93-11 and CSSL-qGMN7.1 were grown in the paddy fields of Hangzhou and Hainan. Both lines were also grown in pots inside a net enclosure in Hangzhou. Each pot was filled with 4 kg of sterilized paddy soil amended with 2 mg·kg−1 CdCl2. The soil was maintained in a flooded state before heading, then kept moist until maturity. In the paddy field of Hangzhou (2016), flag leaf blades from 93-11 and CSSL-qGMN7.1 were sampled at 0, 3, 6, 9, 12, 18, 24, 30, and 36 days after heading, and the grains were harvested at 6, 9, 12, 18, 24, 30, and 36 days after heading. At maturity, the aboveground parts of 93-11 and CSSL-qGMN7.1 were harvested and separated into grain, lemma, panicle branches, flag leaf blade, flag leaf sheath, lower leaf blade, lower leaf sheath, node, and basal stem. In the pot experiment, the flag leaf blades were harvested at the heading, filling, and maturity stages, while the grains were labeled and retained to maturity. All samples were dried at 65 °C for 3 d and then weighed before determination of Mn concentration. Statistical analysis and QTL mapping. Statistical analysis was conducted with SAS (version 9.0).
Broad-sense heritability was estimated as described by Singh and Chaudhary 38 . QTL analysis was performed with the MultiQTL package (www.mutiqtl.com) using the maximum likelihood interval mapping approach for the RILs. For major-effect QTLs, the LOD threshold was obtained based on a permutation test (1,000 permutations, P = 0.05) for each dataset. We followed the suggestions by McCouch for the QTL nomenclature 39 .

Fine mapping of qGMN7.1. To fine map qGMN7.1, CSSL-qGMN7.1 was crossed with 93-11. A total of 4,943 segregating F2 individuals were grown in Hainan in 2015. Twelve recombinants were genotyped with previously developed insertion/deletion (InDel) markers supplied in Table S3. The progeny of these recombinants were grown and genotyped in Hangzhou (2016). Mature seeds were collected and prepared for mineral determination as described below.

Short-term Mn uptake experiment. To compare the transport activity for Mn between 93-11 and CSSL-qGMN7.1, we performed a short-term (30 min) uptake experiment according to a previous method 12 . The seedlings (28 d old) of these two lines were exposed to the nutrient solution without Mn for 1 week and then subjected to the uptake solution containing various concentrations of Mn (0.0, 0.1, 0.5, 5, 10, 50, or 100 mM) at 25 °C and 4 °C, with four replicates per treatment. After 30 min, the roots were washed three times with 5 mM CaCl2 and separated from the shoots. The roots were dried at 70 °C for 3 d and used for mineral determination as described below.

Determination of metals in plant tissues. The dried samples were digested with a mixture of HNO3 (85%) and HClO4 (15%) at a gradient temperature (60 °C for 1 h, 120 °C for 1 h, 150 °C for 1 h, and up to 190 °C). The concentration of the metals in the digest solution was determined by atomic absorption spectrometry (Z-2000; Hitachi) and an inductively coupled plasma-mass spectrometer (7700X, Agilent Technologies) after dilution.
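The permutation procedure used above to set the LOD significance threshold can be sketched in a few lines. This is a generic single-marker regression illustration on synthetic data, not the MultiQTL implementation; the marker/phenotype sizes and effect size are invented for the example:

```python
import numpy as np

def lod_score(genotype, phenotype):
    """LOD score of a single marker from simple linear regression:
    LOD = (n/2) * log10(RSS_null / RSS_marker)."""
    n = len(phenotype)
    rss0 = np.sum((phenotype - phenotype.mean()) ** 2)
    X = np.column_stack([np.ones(n), genotype])
    coef, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
    rss1 = np.sum((phenotype - X @ coef) ** 2)
    return (n / 2.0) * np.log10(rss0 / rss1)

def permutation_threshold(markers, phenotype, n_perm=200, alpha=0.05, seed=0):
    """Genome-wide LOD threshold: shuffle the phenotypes, record the maximum
    LOD over all markers per permutation, take the (1 - alpha) quantile."""
    rng = np.random.default_rng(seed)
    max_lods = []
    for _ in range(n_perm):
        shuffled = rng.permutation(phenotype)
        max_lods.append(max(lod_score(m, shuffled) for m in markers))
    return float(np.quantile(max_lods, 1.0 - alpha))

# synthetic RIL-style data: 50 lines, 20 biallelic markers, marker 0 truly linked
rng = np.random.default_rng(0)
markers = rng.integers(0, 2, size=(20, 50)).astype(float)
pheno = 3.0 * markers[0] + rng.normal(0.0, 0.5, size=50)
thr = permutation_threshold(markers, pheno)
print(thr, lod_score(markers[0], pheno))
```

A marker is declared significant when its LOD exceeds the permutation threshold; the linked marker in this toy data does so by a wide margin.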
Quantitative reverse transcription PCR (qRT-PCR) analysis. Seedlings of 93-11 and CSSL-qGMN7.1 were grown in 1/2 Kimura B solution 12 for 2 weeks, then the roots and shoots were harvested separately. OsNRAMP5 expression was investigated in different tissues from plants grown in the paddy field at the booting stage, including root, basal stem, node, lower leaf sheath, lower leaf blade, flag leaf blade, flag leaf sheath, and panicle. Flag leaves of rice varieties were sampled at the heading stage. RNA was extracted with the Micro RNA Extraction kit (Axygen) and reverse transcribed into cDNA using a ReverTra Ace qPCR-RT kit (TOYOBO, Japan). Primers for qRT-PCR were as described in a previous study 12 (Table S3). Quantitative PCR was conducted on an ABI PRISM 7900HT Sequence Detector (Applied Biosystems) according to the manufacturer's instructions. The relative expression of each transcript was obtained by comparison with the expression of rice actin1 (Table S3).

Relative promoter activity assays. The promoter fragments (2 kb) of OsNRAMP5 were amplified by PCR from the 93-11 and PA64s lines using the forward primer 5′-accatgattacgccaagcttGCGCATGTATCATTTGTTGT-3′ and the reverse primer 5′-aacgacggccagtgaattcCTCACTGCTCTCTCTCTCAA-3′, and were then cloned into the pCambia1391Z vector. The constructed pCAMBIA1391Z::93-11p and pCAMBIA1391Z::PA64sp plasmids were co-transformed with eGFP into rice protoplasts and transiently expressed 40 . After 12 h of incubation at 25 °C, protoplasts were collected for RNA extraction. The GUS expression level was detected by qRT-PCR with eGFP expression as the endogenous control.

Plasmid construction and rice transformation. The cDNA of OsNRAMP5 was amplified by PCR with the forward primer 5′-AAGGTACCATGGAGATTGAGAGAGAGAGC-3′ and the reverse primer 5′-AATCTAGACTACCTTGGGAGCGGGATGTC-3′, which include the KpnI and XbaI restriction sites, respectively. The amplified fragment was cloned into the pCAMBIA1300S vector for overexpression.
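Normalization against rice actin1, as described in the qRT-PCR section above, is commonly implemented with the 2^-dCt / 2^-ddCt method. The minimal sketch below assumes roughly equal amplification efficiencies for target and reference primers (an assumption of this illustration, not stated in the text); the Ct values in the usage line are invented:

```python
def relative_expression(ct_target, ct_reference):
    """2^-dCt: abundance of the target transcript relative to the reference
    gene (e.g., rice actin1), assuming ~100% efficiency for both primer pairs."""
    return 2.0 ** -(ct_target - ct_reference)

def fold_change(ct_target_a, ct_ref_a, ct_target_b, ct_ref_b):
    """2^-ddCt: expression in sample A relative to sample B, each first
    normalized to its own reference-gene Ct."""
    ddct = (ct_target_a - ct_ref_a) - (ct_target_b - ct_ref_b)
    return 2.0 ** -ddct

# hypothetical Ct values: target 24 vs actin 20 in one line, 26 vs 20 in another
print(relative_expression(24.0, 20.0), fold_change(24.0, 20.0, 26.0, 20.0))
```

A two-cycle difference in normalized Ct corresponds to a four-fold expression difference, which is the scale on which the comparisons between CSSL-qGMN7.1 and 93-11 are read.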
The constructed vector was sequenced and introduced into 93-11 by Agrobacterium tumefaciens (strain EHA105)-mediated transformation. Thirteen independent transgenic plants carrying pCAMBIA1300S::OsNRAMP5 in the 93-11 background were obtained. Seedlings of these transgenic plants (T2, selected on hygromycin) and 93-11 were grown in 1/2 Kimura solution and transferred to pots at the four-leaf stage.
A Proximal Diffusion Strategy for Multi-Agent Optimization with Sparse Affine Constraints

This work develops a proximal primal-dual decentralized strategy for multi-agent optimization problems that involve multiple coupled affine constraints, where each constraint may involve only a subset of the agents. The constraints are generally sparse, meaning that only a small subset of the agents are involved in them. This scenario arises in many applications including decentralized control formulations, resource allocation problems, and smart grids. Traditional decentralized solutions tend to ignore the structure of the constraints and lead to degraded performance. We instead develop a decentralized solution that exploits the sparsity structure. Under constant step-size learning, the asymptotic convergence of the proposed algorithm is established in the presence of non-smooth terms, and it occurs at a linear rate in the smooth case. We also examine how the performance of the algorithm is influenced by the sparsity of the constraints. Simulations illustrate the superior performance of the proposed strategy.

I. INTRODUCTION

In many applications such as network utility maximization [2], smart grids [3], basis pursuit [4], and resource allocation in wireless networks [5], a collection of K interconnected agents are coupled through an optimization problem of the following form:

minimize_{w_1, ..., w_K} Σ_{k=1}^K J_k(w_k), subject to Σ_{k=1}^K B_k w_k = b,   (1)

where J_k(.): R^{Q_k} → R is a cost function associated with agent k and w_k ∈ R^{Q_k} is the variable for the same agent. The matrix B_k ∈ R^{S×Q_k} is known locally by agent k only, and the vector b ∈ R^S is known by at least one agent in the network. In this formulation, each agent wants to find its own minimizer, denoted by w_k⋆, through interactions with neighboring agents, while satisfying the global coupling constraint. In many other applications, the constraint is sparse in the sense that some rows of B_k are zero.
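The zero-row structure can be made concrete. The toy sketch below (sizes and ring neighborhoods are illustrative, not from the paper) stacks per-constraint blocks B_{s,k} into the tall matrices B_k of formulation (1), inserting zero blocks whenever agent k does not participate in constraint s:

```python
import numpy as np

K, S_s, Q = 4, 2, 3      # agents, rows per local constraint, variable size (toy)
# ring neighborhoods: constraint s couples agent s and its two ring neighbors
N = {s: {(s - 1) % K, s, (s + 1) % K} for s in range(K)}
rng = np.random.default_rng(1)

# Build B_k for formulation (1) by stacking the blocks B_{s,k} over all
# constraints s, inserting zero blocks whenever k is not involved in s
B = {k: np.vstack([rng.standard_normal((S_s, Q)) if k in N[s]
                   else np.zeros((S_s, Q)) for s in range(K)])
     for k in range(K)}

full_dual_len = K * S_s                      # dual length if sparsity is ignored
active = {k: S_s * sum(k in N[s] for s in range(K)) for k in range(K)}
print(full_dual_len, active)
```

Ignoring the structure forces every agent to carry (and reach consensus on) the full-length dual vector, even though each agent's B_k has non-zero rows only for the constraints counted in `active`.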
For example, in network flow optimization [6], multitask problems [7], distributed model predictive control [8], and optimal power flow [9], [10], the constraint has a special sparse structure. Specifically, each agent s is coupled with its neighboring nodes through an individual affine constraint of the form:

Σ_{k∈N_s} B_{s,k} w_k = b_s, ∀ s = 1, ..., K,   (2)

where B_{s,k} ∈ R^{S_s×Q_k}, b_s ∈ R^{S_s}, and N_s denotes the neighborhood of agent s, including agent s itself. Note that we can rewrite the constraints (2) into a single constraint of the form given in (1) by choosing B_k to be a block column matrix with blocks {B_{1,k}, ..., B_{K,k}} and by setting B_{s,k} = 0 if s ∉ N_k. However, under decentralized settings, applying an algorithm that solves (1) directly and ignores the sparsity structure scales badly for large networks, and its performance deteriorates, as shown in this work. In some other applications (see Example 1 in Section II), unlike (2), the number of constraints is arbitrary and independent of the number of agents K. Moreover, each constraint may include any subset of agents, not only the agents in the neighborhood of some agent. Therefore, a general scalable algorithm that can exploit the sparsity in the constraint set is necessary for large-scale networks.

A short preliminary conference version appears in [1]. No convergence proofs were included in [1]. Besides proofs and derivations, this extended version also deals with the case of non-differentiable regularizers. * S. A. Alghunaim

A. Related Works

Many distributed/decentralized algorithms have been developed for constraints of the form (2), but for special cases and/or under different settings from what is considered in this work [9]–[13]. For example, the algorithms developed in [9]–[13] require the sharing of primal variables among neighboring agents and, moreover, the s-th constraint is of the form (2), which is limited to agents in the neighborhood of agent s.
An augmented Lagrangian solution is pursued in [14], which further requires two-hop communications. None of these methods is directly applicable to the case where the s-th constraint involves agents beyond the neighborhood of agent s. Direct extension of these methods to this case would require multi-hop communication, which is costly. Moreover, the settings in these works differ from the present one: there, the parameters of the s-th constraint {B_{s,k}, b_s}_{k∈N_s} are known by agent s, whereas in this work each agent s is only aware of the constraint matrices multiplying its own vector w_s. We also consider a broader setting with an arbitrary number of constraints, where each constraint may involve any subset of agents -- see Section II. The setting in this work is closer to the one considered in [15]–[19]. However, these works focused on problems with a single coupling constraint of type (1), which ignores any sparsity structure. Problem (1) is solved in these references by using dual decomposition methods, which require each agent to maintain a dual variable associated with the constraint. Ignoring any sparsity structure means that each agent will be involved in the entire constraint. By doing so, each agent will maintain a long dual vector to reflect the whole constraint, and all agents in the network will have to reach consensus on this longer dual vector. The work [20] studied problem (1) for smooth functions with resource constraints (i.e., bound constraints of the form w̲_k ≤ w_k ≤ w̄_k) and focused on handling the useful case of dynamic and directed graphs. Note that the matrix B_k in [20] has a specific structure; but the solution employed also shares the whole dual variable and neglects any sparsity structure. In other resource allocation problems [21]–[23], all agents are involved in a single constraint of the form (1) with B_k = I.
Different from the previously mentioned works, we consider a broader class of coupled affine constraints, where there exist multiple affine constraints and each constraint may involve any connected subset of agents. Our solution requires sharing dual variables only and does not directly share any sensitive primal information, e.g., it does not share the local variables {w_k}. Unlike the works [15]–[20], which solve problem (1) and do not consider the sparsity structure in the constraint, this work exploits the constraint structure. In this way, each agent only needs to maintain the dual variables corresponding to its part of the constraints and not the whole constraint. Thus, only the agents involved in one particular part need to agree on the associated dual variables. An algorithm that ignores the sparsity structure scales badly (in terms of communication and memory) as the number of constraints or agents increases. Moreover, it is theoretically shown in this work that the sparsity in the constraint set influences the performance of the algorithm in terms of convergence rate. Therefore, for large-scale networks, it is important to design a scalable algorithm that exploits any sparsity in the constraint. In [7], a multi-agent optimization problem is considered with stochastic quadratic costs and an arbitrary number of coupled affine constraints, under the assumption that the agents involved in one constraint form a fully connected sub-network. This strong assumption was removed in [24] to handle constraints similar to what is considered in this work, albeit with substantially different settings. First, the work [24] considers quadratic costs only, does not handle non-differentiable terms, and its solution solves an approximate penalized problem instead of the original problem. Second, it is assumed that every agent knows all the matrices multiplying the vectors of all other agents involved in the same constraint.
For example, for the constraint (2), agent s knows {B_{s,k}} for all k ∈ N_s. Lastly, the solution method requires every agent to maintain and receive delayed estimates of the primal variables w_k from all agents involved in the same constraint through a multi-hop relay protocol. This solution method suffers from a high memory and communication burden; thus, it is impractical for large-scale networks. In network utility maximization problems, a similar formulation appears, albeit with a different distributed framework; it is assumed that the agents (called sources) involved in a constraint are connected through a centralized unit (called link) that handles the constraint coupling these agents -- see [2] and references therein. Finally, in [25], [26] a different "consensus" formulation is considered where the agents are interested in minimizing an aggregate cost function in which two agents k and s share similar block vectors {w_k, w_s} if, and only if, they are neighbors; here the notation w̄_k stands for the block variable shared by the neighbors of agent k, so that each w̄_k = col{w_s}_{s∈N_k}. A more general "consensus" formulation appears in [27], [28] where the sharing of block entries is not limited to neighboring agents.

B. Main Contributions

Given the above, we now state the main contributions of this work. A novel decentralized algorithm with low computational cost is developed that exploits the sparsity in the constraints. The developed algorithm handles non-differentiable terms and is shown to converge to the optimal solution for constant step-sizes. Furthermore, linear convergence is shown in the absence of non-differentiable terms, and an explicit upper bound on the rate of convergence is given. This bound shows the importance of exploiting any constraint sparsity and why not doing so degrades the performance of the designed algorithm.

Notation. All vectors are column vectors unless otherwise stated. All norms are 2-norms unless otherwise stated.
The notation ||x||_D^2 denotes the weighted quantity x^T D x for a positive definite matrix D (or scalar). The symbol I_S denotes the identity matrix of size S, while the symbol 1_N denotes the N × 1 vector with all of its entries equal to one. We write col{x_j}_{j=1}^N to denote a column vector formed by stacking x_1, ..., x_N on top of each other and blkdiag{X_j}_{j=1}^N to denote a block diagonal matrix consisting of diagonal blocks {X_j}. We let blkrow{X_j}_{j=1}^N = [X_1 ··· X_N]. For the integer set X = {m_1, m_2, ···, m_N}, we let U = [g_{mn}]_{m,n∈X} denote the N × N matrix with (i, j)-th entry equal to g_{m_i,m_j}. The subdifferential ∂_x f(x) of a function f(.): R^M → R ∪ {∞} at some x ∈ R^M is the set of all subgradients:

∂_x f(x) = { g ∈ R^M | f(y) ≥ f(x) + g^T (y − x), ∀ y ∈ R^M }.   (3)

The proximal operator relative to a function R(x) with step-size μ is defined by [29]:

prox_{μR}(x) = argmin_{u∈R^M} ( R(u) + (1/2μ) ||u − x||^2 ).   (4)

Symbol -- Description
C_e -- Sub-network of nodes involved in constraint e.
N_e -- The cardinality of the set C_e.
E_k -- The set of equality constraint indices involving agent k.
J(W) -- The sum of all smooth functions, J(W) = Σ_{k=1}^K J_k(w_k).

II. PROBLEM FORMULATION

Consider a network of K agents and assume that the agents are coupled through E affine equality constraint sets. For each constraint set e, we let C_e denote the sub-network of agents involved in this particular constraint(s). We then formulate the following optimization problem:

minimize_{w_1, ..., w_K} Σ_{k=1}^K ( J_k(w_k) + R_k(w_k) ), subject to Σ_{k∈C_e} ( B_{e,k} w_k − b_{e,k} ) = 0, ∀ e = 1, ..., E,   (5)

where B_{e,k} ∈ R^{S_e×Q_k} and b_{e,k} ∈ R^{S_e}. The function R_k(.): R^{Q_k} → R ∪ {+∞} is a convex function, possibly non-smooth. For example, R_k(.) could be an indicator function of some local constraints (e.g., w_k ≥ 0). These functions are assumed to satisfy the conditions in Assumption 1 further ahead. It is also assumed that agent k ∈ C_e is only aware of B_{e,k} and b_{e,k}. Note that for the special case E = 1 and C_1 = {1, ···, K}, problem (5) reduces to (1). Assumption 1.
(Cost function): It is assumed that the aggregate function J(W) = Σ_{k=1}^K J_k(w_k) is a convex differentiable function with Lipschitz continuous gradient:

||∇J(x) − ∇J(y)|| ≤ δ ||x − y||, ∀ x, y.   (6)

Moreover, J(W) is also strongly convex, namely, it satisfies:

(x − y)^T ( ∇J(x) − ∇J(y) ) ≥ ν ||x − y||^2, ∀ x, y,   (7)

where {δ, ν} are strictly positive scalars with δ ≥ ν. The regularization functions {R_k(.)} are assumed to be proper and closed convex functions. □

These assumptions are widely employed in the distributed optimization literature and are encountered in practical applications such as distributed model predictive control [8], power systems [9], and data regression problems [15].

Assumption 2. (Sub-networks): The network of K agents is undirected (i.e., agents can interact in both directions over the edges linking them) and each sub-network C_e is connected, i.e., there exists an undirected path between any two agents in each sub-network. □

This condition is automatically satisfied in various applications due to the physical nature of the problem, because coupling between agents often occurs for agents that are located close to each other. Applications where this assumption holds include network flow optimization [6], optimal power flow [9], [10], and distributed model predictive control [8] problems. As explained in the introduction, in these problems the constraints have the form given in equation (2). In this case, each constraint involves only the neighborhood of an agent, so that C_e = N_s (for s = e), and neighborhoods are naturally connected. Now, more generally, even if some chosen sub-network happens to be disconnected, we can always construct a larger connected sub-network as long as the entire network is connected -- an explanation of this construction procedure can be found in [30]. The problem of finding this construction is the well-known Steiner tree problem [31], and many decentralized algorithms and heuristics exist to solve it [32], [33].
We now provide one motivating physical application that also satisfies the two previous assumptions.

Example 1. (General exchange in smart grids) For simplicity, we describe the resource management (or economic dispatch) problem in smart grids [34] with minimal notation. To begin with, let P_k^G and P_k^L be the power generation supply and power load demand at node k. Moreover, let P_k = col{P_k^G, P_k^L} be a 2 × 1 vector formed by stacking P_k^G and P_k^L. Then, the resource management problem over a power network consisting of K nodes is [35]:

minimize_{P_1, ..., P_K} Σ_{k=1}^K ( J_k(P_k) + R_k(P_k) ), subject to Σ_{k=1}^K ( P_k^G − P_k^L ) = 0,   (8)

where the non-differentiable term R_k(P_k) is the indicator function of some capacity constraints, such as positive powers and the maximum power generation. This problem fits into (1) and couples all nodes in a single constraint. The cost function typically used by power engineers is quadratic and satisfies Assumption 1 -- see [9], [35]. In this formulation, it is assumed that each node is associated with one generator or load, with P_k denoting the power generation or demand at that node. Assume now that each node k has multiple generators and/or loads. For example, each generator (or load) can be divided into sub-generators (or sub-loads). Moreover, assume that the power network is divided into K nodes that provide power to E sub-areas. Let P_k^{e,G} and P_k^{e,L} denote the power supply and power load at node k in area e -- see Figure 1. In this figure, there are six nodes (agents) and three sub-areas (sub-networks). Each node associates different generators or loads with different sub-areas. Let C_e denote the nodes that are involved in area e and let P_k be the augmented vector P_k = col{P_k^{e,G}, P_k^{e,L}}_{e:k∈C_e}, which collects all local variables {P_k^{e,G}, P_k^{e,L}} over all areas that agent k belongs to. Then, we formulate the following more general problem:

minimize_{P_1, ..., P_K} Σ_{k=1}^K ( J_k(P_k) + R_k(P_k) ), subject to Σ_{k∈C_e} ( P_k^{e,G} − P_k^{e,L} ) = 0, ∀ e = 1, ..., E.   (9)

This formulation fits the problem of dynamic energy exchange in smart grid applications [36], where each area satisfies Assumption 2.
It can also be motivated as follows. Assume each sub-area represents some city. Then, problem (9) is useful when transmission losses are costly in some parts of an area, which may require power generation from neighboring power networks. It is also useful when there is maintenance on some generators or lines causing high demand in some areas, which requires extra generators from adjacent power networks. □

III. ALGORITHM DEVELOPMENT

In this section, we will derive our algorithm and introduce some important symbols that are necessary for the algorithm description and later analysis. To do so, we start by introducing the Lagrangian function of (5):

L(W, v^1, ..., v^E) = Σ_{k=1}^K ( J_k(w_k) + R_k(w_k) ) + Σ_{e=1}^E (v^e)^T Σ_{k∈C_e} ( B_{e,k} w_k − b_{e,k} ),   (10)

where W = col{w_k}_{k=1}^K and v^e ∈ R^{S_e} denotes the dual variable associated with the e-th constraint. To facilitate the development of the algorithm we rewrite (10) as a sum of local Lagrangian terms. To do so, we need to introduce the set E_k, which denotes the set of equality constraints that agent k is involved in (e.g., if agent k is involved in equality constraints one and three, then E_k = {1, 3}). From the definition of E_k and C_e, we have k ∈ C_e if, and only if, e ∈ E_k. Using this notation, the second term on the right-hand side of (10) can be rewritten as a sum over all agents as follows: let B̄_{e,k} = B_{e,k} if k ∈ C_e (or e ∈ E_k) and B̄_{e,k} = 0 otherwise and, likewise, for b̄_{e,k}. Then it holds that:

Σ_{e=1}^E (v^e)^T Σ_{k∈C_e} ( B_{e,k} w_k − b_{e,k} ) = Σ_{k=1}^K Σ_{e∈E_k} (v^e)^T ( B_{e,k} w_k − b_{e,k} ),   (11)

where in the last step we switched the order of summation and used the fact that k ∈ C_e if, and only if, e ∈ E_k. Therefore, if we let {v^e}_{e∈E_k} denote the collection of dual variables related to agent k, then using the previous equation we can rewrite (10) as a sum of local terms as follows:

L(W, v^1, ..., v^E) = Σ_{k=1}^K L_k(w_k, {v^e}_{e∈E_k}),   (12)

where

L_k(w_k, {v^e}_{e∈E_k}) = J_k(w_k) + R_k(w_k) + Σ_{e∈E_k} (v^e)^T ( B_{e,k} w_k − b_{e,k} )   (13)

is the local term for agent k. We are therefore interested in finding the minimizer of (5) through the equivalent solution of the saddle point problem:

min_W max_{v^1,...,v^E} Σ_{k=1}^K L_k(w_k, {v^e}_{e∈E_k}).   (14)

Assumption 3. (Strong duality) A solution exists for problem (14) and strong duality holds.
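The bookkeeping behind the sets C_e and E_k, and the summation reordering that localizes the Lagrangian, can be verified numerically. The toy sketch below (random data and illustrative sub-networks, not from the paper) checks that the constraint term summed over constraints equals the same term summed over agents' local contributions:

```python
import numpy as np

rng = np.random.default_rng(3)
K, E = 5, 3
C = [{0, 1, 2}, {1, 3}, {2, 3, 4}]                           # sub-networks C_e
Ek = [{e for e in range(E) if k in C[e]} for k in range(K)]  # sets E_k per agent
Q, S = 2, 2                                                  # block sizes (toy)

w = [rng.standard_normal(Q) for _ in range(K)]
v = [rng.standard_normal(S) for _ in range(E)]
B = {(e, k): rng.standard_normal((S, Q)) for e in range(E) for k in C[e]}
b = {(e, k): rng.standard_normal(S) for e in range(E) for k in C[e]}

# left-hand side: sum over constraints, as in the Lagrangian's coupling term
lhs = sum(v[e] @ sum(B[e, k] @ w[k] - b[e, k] for k in C[e]) for e in range(E))
# right-hand side: sum over agents of each agent's local contribution
rhs = sum(sum(v[e] @ (B[e, k] @ w[k] - b[e, k]) for e in Ek[k]) for k in range(K))
print(abs(lhs - rhs))
```

The two sides agree up to floating-point round-off, which is exactly the identity that lets each agent carry only the dual variables {v^e} for e ∈ E_k.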
Since our problem (5) is convex with affine constraints only, Slater's condition is satisfied and strong duality holds [37, Section 5.2.3], which ensures that the solution of (14) coincides with the solution of (5). We denote an optimal solution pair of (14) by W⋆ = col{w_k⋆}_{k=1}^K and {v^{e,⋆}}. From Assumption 1, W⋆ is unique, but the {v^{e,⋆}} are not necessarily unique. To derive our algorithm, which solves the saddle point problem (14), we will now relate the dual problem to the one considered in our previous work [30] and explain how the dual variables are partially shared across the agents, which is important for our derivation.

A. Dual Problem

Note that the Lagrangian (12) is separable in the variables {w_k}. Thus, the dual problem is (we are reversing the min and max operations by negating the function) [37]:

min_{v^1,...,v^E} Σ_{k=1}^K f_k({v^e}_{e∈E_k}),   (15)

where

f_k({v^e}_{e∈E_k}) = − min_{w_k} L_k(w_k, {v^e}_{e∈E_k}).   (16)

Figure 2 illustrates how the dual variables {v^e}_{e=1}^E are shared across agents participating in the same constraint. For example, agent k = 4 in Figure 2 is part of two sub-networks, C_1 and C_2; it is therefore part of two equality constraints and will be influenced by their respective dual variables, denoted by v^1 and v^2. Similarly for the other agents in the network. Problem (15) is of the form considered in [30]: it involves minimizing the aggregate sum of cost functions f_k({v^e}_{e∈E_k}), where the arguments {v^e}_{e∈E_k} of different agents can share block entries as illustrated in Fig. 2. The main difference here, however, is that the costs f_k({v^e}_{e∈E_k}) do not admit a closed form expression in general and are instead defined by (16), i.e., in this work we are actually dealing with the more challenging decentralized saddle point problem and not with a decentralized minimization problem as was the case in [30]. Thus, more is needed to arrive at the solution of (14), as we explain later.

B.
Combination Coefficients

To proceed from here, and for the algorithm description, we introduce combination coefficients for the edges in C_e, denoted by {a_{e,sk}}_{s,k∈C_e}; a_{e,sk} refers to the coefficient used to scale data moving from agent s to agent k in sub-network C_e, with a_{e,sk} = 0 if s ∉ N_k ∩ C_e. We collect these coefficients into the combination matrix

A_e = [a_{e,sk}]_{s,k∈C_e} ∈ R^{N_e×N_e},   (17)

where N_e denotes the number of agents involved in equality constraint e. The matrix A_e is assumed to be symmetric and doubly stochastic. We also require A_e to be primitive, meaning that there exists an integer j such that the entries of the matrix A_e^j are all positive. One way to meet these conditions is to choose weights satisfying

a_{e,sk} = a_{e,ks} ≥ 0 and Σ_{s∈N_k∩C_e} a_{e,sk} = 1,   (18)

with a_{e,sk} = 0 if s ∉ N_k ∩ C_e. Under Assumption 2 many rules exist to choose such weights in a decentralized way -- see [38, Ch. 14]. We are now ready to derive our algorithm.

C. Dual Coupled Diffusion

Using the combination matrix A_e, it was shown in [30] that problem (15) can be solved by using the following coupled diffusion algorithm. Set v^e_{k,−1} = ψ^e_{k,−1} to arbitrary values. For each k and e ∈ E_k repeat for i ≥ 1:

ψ^e_{k,i} = v^e_{k,i−1} − μ_v ∇_{v^e} f_k({v^e_{k,i−1}}_{e∈E_k}),   (19a)
φ^e_{k,i} = ψ^e_{k,i} + v^e_{k,i−1} − ψ^e_{k,i−1},   (19b)
v^e_{k,i} = Σ_{s∈N_k∩C_e} ā_{e,sk} φ^e_{s,i},   (19c)

where v^e_{k,i} is the estimate for v^e at agent k, μ_v > 0 is a step-size parameter, and {ψ^e_{k,i}, φ^e_{k,i}} are auxiliary vectors used to find v^e_{k,i}. The coefficients {ā_{e,sk}} are the entries of the matrix Ā_e defined as follows:

Ā_e = (I_{N_e} + A_e)/2.   (20)

If the functions {f_k(.)} were known and differentiable, then each agent could run (19a)–(19c) to converge to its corresponding optimal dual variable, which in turn could be used to find the local minimizer w_k⋆ by solving min_{w_k} L_k(w_k, {v^{e,⋆}}_{e∈E_k}). However, this approach is not always possible because the local dual function f_k({v^e}_{e∈E_k}) does not generally admit a closed form expression. Moreover, this method involves two time scales: one for finding the dual and the other for finding the primal.
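A common decentralized choice satisfying the conditions on A_e is the Metropolis rule. The sketch below (toy topologies, not from the paper) builds such weights and also reports the second largest eigenvalue of Ā = (I + A)/2, the per-sub-network quantity that governs the convergence rate in Section V:

```python
import numpy as np

def metropolis_weights(adj):
    """Symmetric doubly stochastic combination matrix A from an adjacency
    matrix via the Metropolis rule: a_sk = 1/(1 + max(deg_s, deg_k)) on each
    edge, zero off the graph, and self-weights filling the remainder."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    A = np.zeros((n, n))
    for s in range(n):
        for k in range(n):
            if s != k and adj[s, k]:
                A[s, k] = 1.0 / (1.0 + max(deg[s], deg[k]))
    np.fill_diagonal(A, 1.0 - A.sum(axis=0))
    return A

def lam2_bar(adj):
    """Second largest eigenvalue of A_bar = (I + A)/2, cf. (20)."""
    A = metropolis_weights(adj)
    A_bar = 0.5 * (np.eye(len(A)) + A)
    return float(np.sort(np.linalg.eigvalsh(A_bar))[-2])

# a 20-agent ring (whole network) versus a 3-agent path (small sub-network)
K = 20
ring = np.zeros((K, K), dtype=int)
for k in range(K):
    ring[k, (k + 1) % K] = ring[(k + 1) % K, k] = 1
path3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])

A3 = metropolis_weights(path3)
print(lam2_bar(ring), lam2_bar(path3))
```

The resulting matrices are symmetric and doubly stochastic with positive diagonals (hence primitive on a connected graph), and the small sub-network has a visibly smaller second eigenvalue, i.e., it mixes faster than the full ring.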
Therefore, to solve (14) we propose to employ a decentralized version of the centralized dual-ascent construction [40] combined with a proximal gradient descent step. Specifically, recall first that the dual-ascent method updates the primal variable w_k at each iteration i as follows:

w_{k,i} = argmin_{w_k} L_k(w_k, {v^e_{k,i−1}}_{e∈E_k}).   (21)

Note that this minimization step, which needs to be solved at each iteration, can be costly in terms of computation unless a closed form solution exists, which is not the case in general. Therefore, we approximate (21) by a proximal gradient descent step to arrive at what we shall refer to as the dual coupled diffusion algorithm (22), listed below. At each time instant i, each agent k first performs a proximal gradient descent step (22a) for the primal variable with step-size μ_w > 0. Then, for each dual-ascent step, the coupled diffusion steps (22b)–(22d) are applied, where step (22b) is obtained by using ∇_{v^e} L_k(w_{k,i}, {v^e_{k,i−1}}_{e∈E_k}) to approximate the gradient at the minimum value in (16). Note that only step (22d) requires sharing dual variables with the neighbors that are involved in similar constraints. We remark that Algorithm (22) can potentially be used for directed networks if combined with the push-sum technique from [41], such that the dual iterates are corrected by dividing them by a scalar as in [41]. The push-sum technique has been utilized before for distributed optimization algorithms -- see for example [42].

Algorithm (DUAL COUPLED DIFFUSION)
Setting: Choose step-sizes μ_w > 0 and μ_v > 0. Let v^e_{k,−1} = ψ^e_{k,−1} and w_{k,−1} be arbitrary. For every agent k, repeat for i ≥ 0:

w_{k,i} = prox_{μ_w R_k} ( w_{k,i−1} − μ_w ( ∇J_k(w_{k,i−1}) + Σ_{e∈E_k} B_{e,k}^T v^e_{k,i−1} ) ).   (22a)

For all e ∈ E_k:

ψ^e_{k,i} = v^e_{k,i−1} + μ_v ( B_{e,k} w_{k,i} − b_{e,k} ),   (22b)
φ^e_{k,i} = ψ^e_{k,i} + v^e_{k,i−1} − ψ^e_{k,i−1},   (22c)
v^e_{k,i} = Σ_{s∈N_k∩C_e} ā_{e,sk} φ^e_{s,i}.   (22d)

To analyze algorithm (22) and show that it converges to an optimal solution of (14), we will rewrite it in a compact network form, which facilitates its analysis.

IV. NETWORK RECURSION

We start by stacking the dual estimates within each cluster and then stacking over all the clusters.
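The iteration can be exercised on a toy problem. The sketch below follows the verbal description of steps (22a)–(22d) in the text (prox step on the primal, local dual ascent, correction, neighborhood combination); the problem data, step-sizes, and the soft-thresholding prox for R_k = η|·| (the closed form of (4) for the ℓ1 penalty) are illustrative choices, not taken from the paper:

```python
import numpy as np

def prox_l1(x, t):
    """Proximal operator of t*|.|: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Two agents, scalar w_k, J_k(w) = 0.5*(w - d_k)^2, R_k(w) = eta*|w|,
# single constraint w_1 + w_2 = 3 shared by both agents (B_{e,k} = 1)
d = np.array([1.0, 3.0])
eta, b_loc = 0.2, np.array([1.5, 1.5])   # local split of b = 3
A_bar = np.array([[0.75, 0.25],          # A_bar = (I + A)/2 for A = [[.5,.5],[.5,.5]]
                  [0.25, 0.75]])
mu_w, mu_v = 0.05, 0.05

w = np.zeros(2)           # primal iterates w_{k,i}
v = np.zeros(2)           # each agent's local copy of the dual variable
psi_prev = v.copy()       # initialization psi_{k,-1} = v_{k,-1}

for _ in range(20000):
    w = prox_l1(w - mu_w * ((w - d) + v), mu_w * eta)   # (22a) prox-gradient step
    psi = v + mu_v * (w - b_loc)                        # (22b) local dual ascent
    phi = psi + v - psi_prev                            # (22c) correction term
    v = A_bar @ phi                                     # (22d) neighborhood combine
    psi_prev = psi

print(w, v)   # analytic KKT point of this toy: w = (0.5, 2.5), v = 0.3
```

Note that only the `A_bar @ phi` step exchanges information, and only dual quantities are exchanged, matching the remark that (22d) is the sole communication step and that no primal variables are shared.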
This will allow us to rewrite the dual steps (22b)–(22d) in a form that enables us to see the effect of each sub-network in our analysis. Thus, we introduce the sub-network vector that collects the dual estimates v^e_{k,i} over the agents in C_e:

Y^e_i = col{v^e_{k,i}}_{k∈C_e},   (23)

and the global network vector that collects Y^e_i over all e:

Y_i = col{Y^e_i}_{e=1}^E.   (24)

We repeat a similar construction for the quantities {ψ^e_{k,i}} and {φ^e_{k,i}} and for the combination matrices {Ā_e}, where Ā_e = (I_{N_e} + A_e)/2 was introduced in (20). For the networked representation of the primal update (22a), we introduce the network quantities:

W_i = col{w_{k,i}}_{k=1}^K,  ∇J(W_i) = col{∇J_k(w_{k,i})}_{k=1}^K,  R(W) = Σ_{k=1}^K R_k(w_k).

We also need to represent the term Σ_{e∈E_k} B_{e,k}^T v^e_{k,i−1} in terms of the network quantity Y_{i−1} defined in (24). To do that, we first rewrite each term B_{e,k}^T v^e_{k,i−1} in terms of the sub-network vector Y^e_{i−1}. This can simply be done by introducing the 1 × N_e block row matrix B_{ek}^T, with a block structure similar to that of Y^e_{i−1}, such that B_{ek}^T Y^e_{i−1} = B_{e,k}^T v^e_{k,i−1} if k ∈ C_e and zero otherwise -- Figure 3 illustrates this construction. This construction can be represented by:

Σ_{e∈E_k} B_{e,k}^T v^e_{k,i−1} = Σ_{e=1}^E B_{ek}^T Y^e_{i−1}.

If we let B and b denote the corresponding network-level quantities in (29)–(31), then algorithm (22) can be rewritten compactly as recursions (32a)–(32b). Notice that step (32b) depends on the two previous estimates; thus it is tedious to analyze directly. Therefore, to facilitate our analysis we will rewrite it in an equivalent form. To do that, we let

A = blkdiag{A_e ⊗ I_{S_e}}_{e=1}^E,   (34)

and introduce the singular value (or, for symmetric matrices, eigenvalue) decomposition [43]:

0.5(I − A) = [U_1 U_2] blkdiag{Σ, 0} [U_1 U_2]^T,   (35)

where N = Σ_{e=1}^E N_e S_e, U_1 ∈ R^{N×r}, U_2 ∈ R^{N×(N−r)}, and Σ = diag{λ_j}_{j=1}^r with λ_r ≤ ··· ≤ λ_1 denoting the non-zero eigenvalues of the matrix 0.5(I − A). Using an approach similar to the one used in [44], we can rewrite (32b) equivalently as recursions (36a)–(36b) for i ≥ 1 -- see Appendix A -- where we introduced a new sequence X_i with X_0 = 0. Note that since each A_e is primitive, symmetric, and doubly stochastic, the eigenvalues of the matrix A_e lie in (−1, 1] -- see [38, Lemma F.4]. Thus, from the block structure of A in (34), the eigenvalues of the matrix 0.5(I − A) lie in [0, 1).
Therefore, the non-zero eigenvalues are positive and satisfy:

0 < λ_r ≤ ··· ≤ λ_1 < 1.   (37)

This property is useful for our convergence analysis.

V. CONVERGENCE RESULTS

In this section, we give the lemmas leading to the main convergence results. The following auxiliary result is proven in [45].

Lemma 1. For any N × N symmetric and doubly stochastic matrix A, it holds that I_N − A is symmetric and positive semi-definite. If in addition A is primitive and we let A = A ⊗ I_M, then, for any block vector Z = col{z_1, ..., z_N} in the nullspace of I − A with entries z_n ∈ R^M, it holds that:

z_1 = z_2 = ··· = z_N.   (38)

□

Lemma 1 will be used in the proof of the next lemma to show that consensus is reached at the optimality conditions.

Lemma 2. (Optimality condition) If there exists a point (W⋆, Y⋆, X⋆) and a subgradient g⋆ ∈ ∂_W R(W⋆) satisfying conditions (39a)–(39c), then the block entries of Y⋆ satisfy v^{e,⋆}_k = v^{e,⋆} ∀ k ∈ C_e, where (W⋆, v^{1,⋆}, ···, v^{E,⋆}) is a saddle point for the Lagrangian (10).

Proof: A similar argument appears in the conference version [1, Lemma 2], except for the addition of sub-gradient terms into the argument. Using the block structure of ∇J(.) and B in (29) and (30)–(31), we can expand (39a) into its components to get (40), where g_k⋆ ∈ ∂_{w_k} R_k(w_k⋆). From the facts U_1^T U_1 = I and Σ > 0, condition (39b) is equivalent to (41). Therefore, from (38) and the block structure of A in (34), condition (39b) gives (42) for some v^{e,⋆}. Hence, condition (40) satisfies the first optimality condition for problem (5) -- see [37]. Now, let Z = blkdiag{1_{N_e} ⊗ I_{S_e}}_{e=1}^E. Multiplying equation (39c) on the left by Z^T gives (43), where step (a) holds because, from (38), Z is in the nullspace of I − A and thus also in the nullspace of U_1^T [1, Equation (51)]. Using the block structure of B and b in (30)–(31) and (25), we can also expand (43) into its components to get (44) for all e. Equation (44) is the second optimality condition for problem (5) and, thus, (W⋆, v^{1,⋆}, ···, v^{E,⋆}) is an optimal point for (14) [37].
□

Remark 2 (EXISTENCE AND UNIQUENESS). Note that there exists a point (W⋆, Y⋆, X⋆) that satisfies the optimality conditions (39). Indeed, if (W⋆, {v^{e,⋆}}) is an optimal solution of the saddle point problem (14), then it can be easily verified that conditions (39a)–(39b) are satisfied. Now, by following an argument similar to the one used in [46, Lemma 3], it can be shown that there exists an X⋆ such that (39c) holds; moreover, there exists a unique X⋆ in the range space of U_1^T. Now, we know from strong convexity that W⋆ is unique. Thus, from (39a), the dual point Y⋆ is unique if the matrix B has full row rank. Under this condition, and in the absence of non-smooth terms, we will show that our algorithm converges linearly to this unique point -- see Theorem 2. □

Remark 3. The analysis technique used in this work is not related to the techniques used in [30], [46]. Note that this work deals with a non-smooth saddle-point problem where the dual variables are shared across agents, while the works [30], [46] deal with smooth minimization problems with a shared primal variable and twice-differentiable functions. □

We will now show that the equivalent network recursions (32a) and (36a)–(36b) of the proposed algorithm converge to a point that satisfies the optimality conditions given in Lemma 2. To state the convergence results, we introduce the error vectors W̃_i = W⋆ − W_i, Ỹ_i = Y⋆ − Y_i, and X̃_i = X⋆ − X_i, and a diagonal matrix D constructed from Σ, which was introduced in (35). Note that D is positive definite because of (37).

Proof: See Appendices B and C. □

The previous lemma is used to establish the following theorem.

Theorem 1. (Convergence): Suppose Assumptions 1–3 hold. Then, for positive constant step-sizes satisfying condition (50), recursions (32a) and (36a)–(36b) converge, and W_i converges to the optimal solution of (5).

Proof: See Appendix D. □

At this point we have shown that the dual coupled diffusion strategy, which handles non-smooth terms, converges to the optimal point.
However, it is still unclear how the sparsity of the constraints affects the convergence behavior. Apart from saving communication and memory, the next result reveals the advantage of exploiting the constraint structure.

Theorem 2. (Linear convergence): Suppose Assumptions 1-3 hold and, furthermore, assume that each R_k(w_k) = 0 and each matrix blkcol{B_{e,k}}_{e∈E_k} has full row rank. If the step sizes satisfy (50), then it holds that: with λ_r denoting the smallest non-zero eigenvalue of 0.5(I − A).

Proof: See Appendix E.

The above result shows why solving (5) directly is important, for at least two reasons. First, by using model (5), we are able to prove linear convergence under the assumption that each blkcol{B_{e,k}}_{e∈E_k} has full row rank. If, instead, we were to rewrite problem (5) into the form (1) by embedding zeros into the matrices B_k, then our analysis would require each B_k to have full row rank for linear convergence. This will not be satisfied if some agent is not involved in some constraint, since in that case B_k will have zero rows and, thus, B_k is row-rank deficient even if blkcol{B_{e,k}}_{e∈E_k} has full row rank. The second, more important reason is that the convergence rate depends on the connectivity of the sub-networks C_e and not on the connectivity of the entire network, as we illustrate now. Note from the block structure of (34) that the smallest non-zero eigenvalue of 0.5(I − A) has the form λ_r = min_e σ_e, where σ_e denotes the smallest non-zero eigenvalue of the matrix 0.5(I − A_e). Since I − 0.5(I − A_e) = 0.5(I + A_e) = Ā_e, it holds that 1 − σ_e = λ̄_{2,e}, where λ̄_{2,e} denotes the second largest eigenvalue of Ā_e (the largest eigenvalue is equal to one). Therefore, 1 − λ_r = max_e λ̄_{2,e}. Thus, assuming 1 − λ_r dominates the convergence rate, the smaller max_e λ̄_{2,e} is, the faster the algorithm converges. We see that this depends on the second largest eigenvalue of the matrices {Ā_e}, which depends on the connectivity of the sub-networks and not on that of the whole network.
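The dependence of the rate factor 1 − λ_r = max_e λ̄_{2,e} on sub-network rather than whole-network connectivity can be illustrated numerically. The sketch below uses hypothetical topologies of our own choosing (a sparse 12-agent ring standing in for the whole network and a complete 4-agent graph standing in for a well-connected sub-network C_e), with Metropolis combination weights:

```python
import numpy as np

def metropolis(adj):
    """Metropolis combination matrix (symmetric, doubly stochastic)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j] and i != j:
                A[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        A[i, i] = 1.0 - A[i].sum()
    return A

def lam2(adj):
    """Second-largest eigenvalue of A_bar = 0.5 * (I + A)."""
    Abar = 0.5 * (np.eye(adj.shape[0]) + metropolis(adj))
    return np.sort(np.linalg.eigvalsh(Abar))[-2]

# Sparse "whole network": a ring of 12 agents.
n = 12
ring = np.zeros((n, n), dtype=int)
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1

# Well-connected "sub-network": a complete graph on 4 agents.
m = 4
complete = np.ones((m, m), dtype=int) - np.eye(m, dtype=int)

lam2_ring, lam2_sub = lam2(ring), lam2(complete)
assert lam2_sub < lam2_ring   # smaller second eigenvalue => faster linear rate
```

The complete sub-network has a much smaller second eigenvalue than the sparse ring, which is exactly the situation in which the rate governed by max_e λ̄_{2,e} beats a rate governed by the whole-network spectrum.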
This observation reveals the importance of the algorithm for sparse networks under sparsely coupled constraints, since in that case the small sub-networks are much better connected than the whole network. This observation will be illustrated in the simulation section next.

Remark 4 (CONDITION NUMBER). By using the upper bound (50), we conclude from Theorem 2 that the number of iterations needed to reach an ε-accuracy is on the order of: where κ_J = δ/ν and κ_B = λ_max(BB^T)/λ_min(BB^T) are the condition numbers of the cost J(·) and the matrix BB^T, respectively.

VI. NUMERICAL SIMULATION

In this section, we test the performance of the proposed algorithm with two numerical experiments.

• Distributed Linear Regression: The first set-up considers a linear regression problem with costs: and R_k(w_k) = η_1 ‖w_k‖_1, where u_{k,t} ∈ R^{Q_k} is the regressor vector for data sample t, p_k(t) ∈ R, and T_k denotes the amount of data for agent k.

• Distributed Logistic Regression: The second set-up considers a logistic regression problem with costs: and R_k(w_k) = η_1 ‖w_k‖_1. The vector h_{k,t} ∈ R^{Q_k} is the regressor vector for data sample t, and x_k(t) is the label for that data sample, which is either +1 or −1.

In both experiments, the network used is shown in Fig. 4a with K = 20 agents. The positions (x-axis and y-axis) of the agents are randomly generated in ([0, 1], [0, 1]), and two agents are connected if the distance between them is less than or equal to d = 0.3. As for the constraints, we assume E = K = 20, and each constraint e (or k) (where e ∈ {1, ···, 20}) is associated with a sub-network involving agent e (or k) and all its neighbors, as described in equation (2). Each element in B_{e,k} is generated according to the standard Gaussian distribution N(0, 1). Each b_{e,k} is also randomly generated, and we guarantee that there exists a feasible solution to (5). All the combination matrices are generated according to the Metropolis rule.
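In both set-ups, the non-smooth term R_k(w_k) = η_1‖w_k‖_1 is handled through its proximal operator, which has the closed-form soft-thresholding solution. A minimal sketch (the function names, the proximal-gradient form of the local step, and the step size μ_w are our own illustrative choices, not notation taken from the paper's recursions):

```python
import numpy as np

def prox_l1(x, tau):
    """Proximal operator of tau*||.||_1 (soft-thresholding):
    argmin_w tau*||w||_1 + 0.5*||w - x||^2."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def local_step(w, U, p, eta1, mu_w):
    """One proximal-gradient step on a local least-squares cost
    J_k(w) = sum_t (u_t^T w - p(t))^2 with regularizer eta1*||w||_1."""
    grad = 2.0 * U.T @ (U @ w - p)
    return prox_l1(w - mu_w * grad, mu_w * eta1)

# Soft-thresholding zeroes out small entries and shrinks large ones.
out = prox_l1(np.array([2.0, -0.5, 0.1]), 1.0)
assert np.allclose(out, [1.0, 0.0, 0.0])
```

This shrinkage is what drives small coordinates of the iterates to exactly zero, matching the sparse ground-truth models used in the experiments below.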
In the first simulation, we set T_k = 1000 for all k, and each regressor u_{k,t} is generated according to the Gaussian distribution N(0, 1). To generate the associated p_k(t), we first generate a vector w_{k,0} ∈ R^{Q_k} randomly from N(0, 1). We set 20% of the entries of w_{k,0} to 0. With such a sparse w_{k,0}, we generate p_k(t) as p_k(t) = u_{k,t}^T w_{k,0} + n_k, where n_k ∼ N(0, 0.1) is Gaussian noise. In this experiment, we set Q_k = 10 for k = 1, ···, K. We also set η_1 = 0.3 and B_{e,k} ∈ R^{3×10} to be an under-determined coefficient matrix. In the second set-up, each T_k = 1000. Among all local data samples, half of them are generated by the Gaussian distribution N(1, 1).

[Fig. 4(a): Network topology used in simulations.]

To illustrate the effect of the constraint structure, we consider two approaches to solve problem (5). The first approach is to use the dual coupled diffusion (22) while considering the structure of problem (5), i.e., to run (22) with E = K, C_e = N_e. The second approach is to ignore the special structure of the problem, reformulate it into the form of problem (1), and also run the dual coupled diffusion (22) with E = 1, C_1 = {1, ···, K}, which we call dual diffusion. To compare with other related methods that only share dual variables, we simulate the inexact distributed consensus ADMM (IDC-ADMM) from [15] and a modified proximal version of the one in [47] in which the dual iterates are updated similarly to the DIGing algorithm in [42], which we call "Dual DIGing". Both of these algorithms are designed for problem (1) and ignore any structure. The step-sizes are chosen manually to get the best possible performance for each algorithm. In the first linear regression set-up, the parameters used are (µ_w = 0.28, µ_v = 0.28) for the dual coupled diffusion, (µ_w = 0.28, µ_v = 0.28) for the dual diffusion, (c = 0.25, µ_w = 0.05) for the IDC-ADMM [15], and the step-sizes are set to 0.45 for the dual DIGing method.
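The data generation for the first experiment can be sketched as follows. This is a hypothetical reconstruction: the paper does not specify a random seed, and we read N(0, 0.1) as noise with variance 0.1; both are labeled assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)       # assumed seed, not from the paper
Q, T = 10, 1000                      # Q_k = 10 features, T_k = 1000 samples

# Ground-truth model with 20% of its entries set to zero.
w0 = rng.standard_normal(Q)
zero_idx = rng.choice(Q, size=Q // 5, replace=False)
w0[zero_idx] = 0.0

# Regressors u_{k,t} ~ N(0, 1) and outputs p_k(t) = u^T w0 + noise.
U = rng.standard_normal((T, Q))
noise = np.sqrt(0.1) * rng.standard_normal(T)   # variance 0.1 (assumed)
p = U @ w0 + noise

assert np.count_nonzero(w0 == 0.0) == Q // 5 and p.shape == (T,)
```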
In the second logistic regression set-up, they are set to (µ_w = 0.2, µ_v = 0.2) for the dual coupled diffusion, (µ_w = 0.2, µ_v = 0.2) for the dual diffusion, (c = 0.45, µ_w = 0.2) for the IDC-ADMM [15], and the step-sizes are set to 0.18 for the dual DIGing method. Figure 4 shows the relative error (1/K) Σ_{k=1}^{K} ‖w_{k,i} − w_k^⋆‖² / ‖w_k^⋆‖² for each of the previous algorithms for both set-ups. Note that the dual DIGing algorithm requires communicating two vectors in each round of communication. It is observed that the dual diffusion, IDC-ADMM, and dual DIGing algorithms have close performance (all of them ignore the structure), while the dual coupled diffusion clearly outperforms them. This means that, apart from requiring a smaller amount of data to be exchanged per round of communication, our algorithm is also able to reach an accuracy of ε (where ε is arbitrarily small) in much less time than these other algorithms. As explained before, this superiority is due to the sub-networks being better connected than the whole network, and the dual coupled diffusion takes advantage of that. In this simulation, we have 1 − λ_r = 0.911 for the dual coupled diffusion and 1 − λ = 0.973 for the dual diffusion (we drop the sub-index since there is a single network combination matrix in this case), which backs up our theoretical findings. To further illustrate the effect of the sub-network connectivity on the convergence rate, we simulate the dual coupled diffusion (which exploits sparsity) and the dual diffusion (which does not) with the same logistic regression set-up from before, but for the three different networks shown in the top half of Fig. 5. The step sizes used in this simulation are adjusted to get the best possible results, which are shown on the bottom of Figure 5. Note that the network on the left has fewer connections than the network on the right, and thus the sub-networks on the left are sparser than those on the right.
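The per-agent relative-error metric used above can be computed directly. A minimal sketch (the function name and the toy vectors are our own; W_star stands in for the optimal per-agent solutions w_k^⋆):

```python
import numpy as np

def avg_relative_error(W_i, W_star):
    """(1/K) * sum_k ||w_{k,i} - w_k_star||^2 / ||w_k_star||^2
    over lists of per-agent weight vectors."""
    K = len(W_star)
    return sum(np.linalg.norm(w - ws) ** 2 / np.linalg.norm(ws) ** 2
               for w, ws in zip(W_i, W_star)) / K

# Tiny made-up example with K = 2 agents; the second agent is off by 1.
W_star = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
W_i = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
err = avg_relative_error(W_i, W_star)
assert abs(err - 0.125) < 1e-12   # (0 + 1/4) / 2
```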
Note further that, for the constraint settings used in (2), the more connections the network has, the closer the sub-networks are to the entire network. It is seen that dual coupled diffusion performs significantly better under sparser networks, since in that case the sub-networks are much better connected than the whole network. On the other hand, when we add more connections, the sub-network connectivity becomes closer to the network connectivity and, thus, the performance of the two algorithms becomes closer and closer. The performance becomes identical when all agents are involved in all the constraints.

VII. CONCLUDING REMARKS

This work developed a proximal diffusion strategy with guaranteed exact convergence for a multi-agent optimization problem with multiple coupled constraints. We established analytically, and by means of simulations, the superior convergence properties of an algorithm that considers the sparsity structure in the constraints compared to others that ignore this structure.

APPENDIX A
EQUIVALENT REPRESENTATION

In this appendix, we show that (36a)-(36b) is equivalent to (32b). Multiplying equation (36a) by U_1 Σ and then collecting the term U_1 Σ X_{i−1}, we get: Let X̄_i ≜ U_1 Σ X_i. Using (35) and collecting the term X̄_{i−1} on the right-hand side of the last equation, we get: Multiplying (36b) by Ā on the left and using this definition, we obtain (56). Now, subtracting (56) from (36b), we get: Using (55), we can remove the term µ_v(X̄_i − Ā X̄_{i−1}) from the previous expression to get: Rearranging the last expression gives (32b).

APPENDIX B
PRIMAL ERROR BOUND (48)

From the optimality condition of (32a), we have: for some g_i ∈ ∂_W R(W_i). Rearranging the last equation and using the optimality condition (39a), we get: Multiplying both sides of the previous equation by (W^⋆ − W_i)^T, we get: From the conditions on R_k(w_k) in Assumption 1, there exists at least one subgradient at every point, and from the subgradient property (3) we have g^T(y − x) ≤ R(y) − R(x) for any g ∈ ∂R(x).
Summing the two inequalities with y = W^⋆ and x = W_i, we get (W^⋆ − W_i)^T(g_i − g^⋆) ≤ 0. Using this bound in (60), we get: Note that: Substituting the last equation into (61) and rearranging terms gives: Using Assumption 1, we can bound the inner product: We again use (7) in the last expression to get: From (7) it holds that: Therefore, the last inner product in (63) can be bounded as follows: where the last step holds because (W − z)^T(∇J(W) − ∇J(z)) ≤ δ‖W − z‖² by the Cauchy-Schwarz inequality and (6). Substituting (67) into (63) gives (48).

APPENDIX C
DUAL ERROR BOUND (49)

It holds that: Rearranging the last equality, we have: Note that: where in step (b) we took U_1 inside the first bracket and used U_1^T Y^⋆ = 0 from (39b). From step (a) and the last step, we get: Furthermore, note that: Substituting (71) into (72), we have: where in step (a) we used (36b) and the optimality condition (39c). Rearranging the last equation (73), we get: Substituting the previous equation into (69), we get: The last term of (74) can be rewritten as: where in the last step we used (36b), (39c), and U_1^T U_1 = I. Substituting the last equality into (74), we get (49).

APPENDIX D
PROOF OF THEOREM 1

Let us introduce the quantity: Using (48)-(49) and µ_v‖B W̃_i‖² ≤ µ_v λ_max(B^T B)‖W̃_i‖², it holds that: where the last inequality holds under (50). Since V(W̃_i, Ỹ_i, X̃_i) is non-negative, we conclude that the norm of the error is non-increasing and bounded. Iterating the above inequality, we have: and thus: Since the sum of infinitely many positive terms is upper bounded by a constant, each of the terms (W_i − W_{i−1}), W̃_{i−1}, W̃_i, (X_i − X_{i−1}), and Σ X̃_i must converge to zero.

APPENDIX E
PROOF OF THEOREM 2

From the structure of B in (31), it can be confirmed that B having full row rank is equivalent to assuming that each matrix blkcol{B_{e,k}}_{e∈E_k} has full row rank. This is illustrated in Fig. 6.
Because two different agents belonging to the same cluster are located differently in Y_e, the block rows of B are zero except at one location. Recall that B_{e,k} ∈ R^{S_e × Q_k}. Therefore, an equivalent statement is to say that blkcol{B_{e,k}}_{e∈E_k} has full row rank. The last term in (63) can be rewritten as: and this term can be upper bounded by: where in step (a) we used (32a) and (39a) with R(W) = 0. The last inequality holds from [48, Theorem 2.1.5], since J(W) has δ-Lipschitz gradients. Combining the last two equations, we have: where the last step holds from the strong-convexity condition (7) and (2 − δµ_w) > 0 for µ_w < 2/δ. Substituting into (63), we get:
Caries Management with the International Caries Detection and Assessment System II: Established Pit and Fissure Lesions

Introduction: Operative dentistry addresses the surgical management of caries, a significant portion of dental practice. Dental students, who typically develop their skill sets in this important discipline by creating idealized preparations in plastic teeth, are often confused by the wide variety of tooth anatomy and caries presentation they see when they subsequently treat patients. To address this significant clinical transition issue, we developed this resource on preparing the moderate carious lesion using a stepwise, structured technique.

Methods: This resource consists of a flipped-classroom learning module and associated laboratory activity with an algorithm worksheet to practice critical thinking skills. Prior to the exercise, an interactive tutorial introduces the didactic background. The 4-hour class session starts with a short quiz and review; then, learners use the worksheet to prepare and restore their tooth specimens.

Results: Learner response has been very positive. Moreover, faculty note that learners' skills in treating patients in clinic are noticeably higher and require less faculty intervention than was previously the case.

Discussion: Since new national curriculum standards for caries are currently being introduced, in addition to providing instruction to dental students, this resource presents an excellent opportunity to calibrate faculty members, who are a secondary learner group in this exercise, on a standard clinical protocol.

Introduction

Developing skills with a dental handpiece is often accomplished in the first 2 years of dental school by having learners create standardized "ideal" cavity preparations on simulated plastic teeth. However, clinical care presents a very different challenge, since actual teeth and caries rarely mimic the ideal.
Even plastic teeth with simulated carious lesions show an unacceptably low level of variation. The development of critical decision-making skills regarding how to manage a carious lesion based upon presentation and depth is often postponed until learners are actually treating patients. This approach, which requires significant additional faculty oversight in the clinic, can still result in learner confusion. It also increases the risk of inadvertent patient harm. Traditionally, dentists were taught to diagnose pit and fissure caries by probing with a sharp explorer, with the presence of a "stick" held to demonstrate the presence of caries. This technique has been disproved as a diagnostic technique and shown to be unnecessarily damaging to tooth structure. An international collaboration of caries researchers has more recently developed a method of diagnosing pit and fissure caries using visual criteria. After clinical confirmation, this diagnostic approach was published as the International Caries Detection and Assessment System (ICDAS). It was subsequently modified and renamed ICDAS II. While originally created and validated as a classification standard for epidemiology, it has been further developed by an international group of faculty and researchers as a routine diagnostic tool. Diagnosis using ICDAS II has now been accepted in many countries and is gaining acceptance in the US. Subsequently, the American Dental Association convened an expert panel and published the Caries Classification System, using the same criteria as ICDAS II but simplifying the classification by combining six categories into three.
At a 2015 national consensus meeting associated with the American Dental Education Association (ADEA), a working group representing 35 of North America's 69 dental schools voted to adopt evidence-based diagnostic criteria (i.e., visual, tactile, and radiographic methods) as part of a proposed national curriculum framework in teaching caries management. Their consensus plan has been endorsed by the cariology and operative dentistry sections of the ADEA and has been published. Teaching learners how to identify, diagnose, and treat caries is done in the context of Caries Management By Risk Assessment (CAMBRA), a methodology that evaluates the caries disease process based upon individual factors such as diet, saliva, and hygiene. Each patient receives a risk assessment that is subsequently used to determine nonsurgical and surgical treatment plans. Recent articles have addressed the question of how best to teach students to remove caries. A survey of dental school practices noted that there is wide variation in the "criteria used for assessment and removal of carious tissue, management of deep carious lesions, and definition of 'caries remaining at cavity preparations.'" As a method for students to learn this skill, de Peralta et al. encouraged self-reflection. However, no article published to date has focused on teaching students this skill in detail. In 2013, concerns were raised via feedback about learners' performance in the treatment of caries on the state board examination. These concerns led to the development of a series of four laboratory experiences for learners called the Caries Continuum, conducted immediately before their first clinical operative experience, which in our school is the second semester of the second year.
The first module of these experiences, titled "Caries Management with the International Caries Detection and Assessment System: Early Pit and Fissure Lesions," focuses on diagnosing and treating early pit and fissure caries on extracted teeth. The present module concerns the next step: diagnosing and managing removal of moderate or established caries. Subsequent modules not yet published on MedEdPORTAL focus on smooth surface caries and vital pulp therapy in severe caries. Another impetus for the development of the Caries Continuum was a significant increase in class size at this institution that made it imperative to teach clinical skills more efficiently than one-on-one in the clinic. The Caries Continuum modules provide a venue to teach these clinical skills to the whole class in a few lab sessions instead of in multiple one-on-one, faculty- and time-intensive clinical encounters. The "need for clinician/faculty training and calibration" has been recognized as an important factor affecting the implementation of new paradigms around caries diagnosis and management, such as ICDAS II and CAMBRA. Integration of preclinical teaching with teaching conducted by clinical faculty, both full- and part-time, requires consistent instruction. This module offers an excellent opportunity to calibrate clinical faculty while they participate in the laboratory activities since, unlike in an operative preclinical laboratory, caries is present in the specimen teeth. The algorithm worksheet provides clear guidance to the faculty about how to guide student thinking in a familiar framework. To develop the learning module, a group of faculty with expertise in operative dentistry and cariology was formed in 2013 to review the standard teaching references and current literature on pit and fissure caries. Additionally, evidence on treatment recommendations for caries based on age and severity of lesion in the context of risk assessment was evaluated and incorporated.
Each year, the authors formally met and evaluated both learner feedback and faculty experience in their clinical interactions and revised the exercises. Members of this group are the authors and others acknowledged in the tutorial. Because of the authors' familiarity with, and confidence in, the flipped-classroom technique of instruction based on current evidence, it was chosen as the educational technique in this course. In the flipped classroom, and in the more structured team-based learning technique, the background didactic and conceptual material is presented before class. In this case, the material is a self-paced, interactive tutorial wherein learners master the content prior to class time. In the class session, the key cognitive concepts are tested with a short quiz and reinforced by quiz review. Learners go to the lab and work in pairs, then in groups of 10, to verbalize the situation and their thought processes, guided by a faculty member. The flipped classroom approach and team-based learning have both proven to be effective teaching methodologies in dentistry and the health professions.

Methods

The target audience for this resource is preclinical dental learners with the following prerequisites:

• Operative dentistry course: basic knowledge of dental instrumentation and procedures relating to bases and liners, bonding procedures for both enamel and dentin, and composite restoration.
• Basic understanding of cariology, including the key mechanisms of the caries process of decalcification of enamel and dentin, as well as an understanding of CAMBRA.
• Working knowledge of dental operatory procedures and instruments, including personal protection in accordance with Occupational Safety and Health Administration (OSHA) guidelines.

Logistics

This session is held for the entire class in the preclinical lab in a single afternoon. The overall plan for the exercise is outlined in the accompanying Figure.
First, dental learners are asked to collect teeth from community dentists and store them in accordance with CDC guidance. At least 2 weeks prior to the activity, learners are sent instructions to search their specimen teeth to find three to four teeth in ICDAS II codes 3-4, then store them in damp paper towels in a sealed plastic bag. Learners are advised that part of their module grade will be dependent upon finding the correct teeth as well as correctly following the instructions for preparation and storage. At least 1 week prior to the lab activity, the learners are either emailed or given learning management system access to the tutorial (Appendix A) and worksheet (Appendix B). Learners are advised that there will be a quiz on the information in the tutorial at the start of the lab session.

Each station in the lab is set with an operative cassette and handpieces. A copy of the worksheet (Appendix B) is set out at each place, printed front and back on a single sheet. Personal protective equipment (i.e., mask, gloves, and eye protection) is provided and required to be worn in accordance with OSHA protocols to simulate clinical care. Learners retrieve their selected teeth from storage. Each group of 10 learners is assigned an operative faculty member who is familiar with the tutorial and comfortable with small-group interactive teaching. Each pair of learners then evaluates their selected teeth and chooses the two teeth that best meet the criteria for ICDAS II codes 3-4, presenting them to the faculty for initial confirmation and assessment. Learners adjust the roots with a handpiece until they fit loosely into the corresponding dentoform location. The roots are shallowly notched in three to five locations for additional retention and painted (excluding the corresponding dentoform socket) with polyvinyl siloxane adhesive. This is allowed to dry during the quiz and review.
Next, administer the included quiz (Appendix C) or construct your own six-question quiz on the knowledge base contained in the PowerPoint tutorial. One suggested grading technique is to count off 10 points for each wrong answer, only giving a zero for an unexcused absence. Learners must correctly answer four of the six questions to pass. Finally, review the quiz. Using the same slide set, go over the quiz and highlight important material to ensure all learners have mastered key knowledge points. Inclusion of slides from the tutorial helps link concepts and reinforce learning, as well as reduce arguments.

Laboratory Experience

To begin, set the teeth into a typodont. The faculty member dispenses polyvinyl siloxane impression material into the dentoform "socket" to hold each tooth in place. Each learner mounts the typodont into their manikin, and a rubber dam is placed to more closely simulate clinical care. Learners are encouraged to use correct ergonomic positioning throughout the exercise. Working first alone, then comparing results in pairs, learners determine their planned outline form and mark it in pencil on the tooth. Following the worksheet, a series of guided choices leads the learner to select the appropriate bur and handpiece. The instructor signs off the treatment plan for each learner, asking questions as necessary about their rationale while doing so. Continuing to follow the worksheet, learners prepare their teeth and answer a series of questions based on the caries presentation. Finally, they restore the tooth (usually with composite) if time allows. It is emphasized to them that this is the process they will follow in clinic on a patient. The group is then brought together to share particularly interesting caries presentation or treatment decision cases. As students proceed through their worksheet, the faculty member for each section gives students immediate verbal feedback.
Faculty teachers also guide small-group learning by having students share their teeth and preparations. This allows the faculty to identify interesting features in the variety of carious presentations. If faculty note minor errors in student performance, that student may be quietly asked to share the tooth and error at the next group session as a lesson learned. Next, each learner writes a few comments on the experience, focusing on how well prepared they were and noting any areas where they need more practice or preparation. Self-reflection is an important adjunct to teaching these clinical skills. This exercise is primarily formative and diagnostic in that it determines whether students are ready to treat carious lesions in clinic. To create incentive and soothe concerns over a "lost" clinic period, students are offered two clinical relative value units (RVUs, where each RVU equals a single surface restoration): one for the written exam and one for the laboratory exercise. If a student does not achieve the required passing score on the quiz, they do not receive the first RVU credit. The laboratory experience is graded as a formative assessment, with only pass/fail recorded. Minor errors are identified within the group, and teaching points are clarified. Critical errors, as identified on the worksheet (Appendix B), result in a failure, which requires a program of self-study and another opportunity to show understanding and skill with another carious tooth. Learners who commit a critical error, in addition to not achieving the RVU, are not allowed to treat patients in the operative clinic until they have successfully completed this exercise. The faculty member's overall assessment is based upon whether or not the student follows the correct procedure within the loose parameters bounded by critical errors.
However, significant inability to articulate rationales for their choices can also be a factor in borderline performance.

Results

This exercise has been evaluated over the past 5 years in our institutional review board-exempt study. Internal routine quality processes and metrics include anonymous written feedback from learners, as well as observations of focus groups of students after they are experienced in clinical care. Learner reaction to this exercise has been positive overall. Nearly all learners expressed a positive reaction to the exercise, as noted in the 2015 feedback (Table). Evidence of learning often comes from students who note the usefulness of the exercise once they are in clinic: "I couldn't have faced treating patients without that training" is a common theme. Objectively measuring learning is difficult because of the complexity and variability of the task. Faculty observations of student performance have, however, been revealing. The faculty for the operative clinic routinely meet to evaluate learner clinical performance in the second and third years and uniformly observe a significant improvement in student skill level, particularly in the novice learner. They note that the ability of learners to incorporate theoretical knowledge of caries into correct clinical practice is much improved. Introducing ICDAS II to the exercises has improved the learners' ability to anticipate the extent of caries in pits and fissures, increasing their accuracy in planning appropriate treatment. Faculty also note important behavioral improvements in students, such as less confusion and more confidence in clinic. Even the simple mechanics of when to ask for faculty intervention have improved, with faculty reporting that this knowledge has reduced friction and stress in clinic.
Discussion

This module is the result of a 5-year journey to fill a considerable void in dental education between the hand-skill training of preparing plastic teeth and the individualized, critical thinking skills involved in the clinical preparation of carious teeth in patients. Along the way, we have encountered challenges, gained insights, made modifications, identified limitations, and planned future modules. One limitation of this module is the lack of objective data. Determining exact impacts on student learning and performance has been challenging, particularly since the introduction of significant improvements in the second-year clinical program and major changes in the clinical evaluation of caries removal assessment were both accomplished within the same time frame as the introduction and refinement of the Caries Continuum exercises. One issue that initially reduced the effectiveness of the exercise was learners selecting the wrong category of carious teeth. This was solved by having a dedicated assessment component focusing on the quality of their tooth selection. While sorting through jars of extracted teeth is a tedious, unpleasant job, it has significant learning value and mimics clinical diagnosis as learners compare their teeth against the ICDAS II criteria to find the established (ICDAS II codes 3-4) lesions. A limitation of the lab experience itself is that the sterilization processes recommended by the CDC for extracted human teeth, which include storage in dilute bleach and autoclaving, somewhat change the appearance of carious teeth if extended for more than a few months. Another limitation is that learners will see only the teeth that they and their partner have selected unless there is skillful collaborative teaching by the faculty. When the Caries Continuum was started, there were only two modules, and these were held at the beginning of the third year.
However, learners who start their clinical experience in the second half of the second year gave strong feedback that these exercises should be completed before they started actual clinical care. These two lab exercises were accordingly moved to the second half of the second year, and subsequent learner feedback has confirmed that this was an appropriate move. Faculty have validated learner input by observing less learner anxiety and fewer errors since this change was made. Learners expressed that they would appreciate more time to focus on the stepwise approach to preparing carious lesions in natural teeth. The difficulties of transferring preparation skills learned on a plastic tooth to natural teeth, as well as the variations in preparation design, were quite apparent to them. Learners were additionally troubled by preparation differences based upon which material is selected. Another concern, cited by a number of third-year learners after they had broader experience in clinic, related to how best to manage the deep carious lesion. These learner concerns were addressed by expanding the Caries Continuum from two to four modules by adding this current (second) module and a fourth module on vital pulp therapy. The lab worksheet has been refined over several years of observing areas of misunderstanding in the lab and in clinic. It now serves as a checklist in the second-year clinic to ensure that appropriate clinical procedures are understood and followed. Anecdotally, faculty have noted fewer instances of adverse outcomes, such as mechanical exposures from following stained dentin, occurring in clinical care. Learners are now better able to estimate the size of planned restorations. Over the past 5 years, our class size has grown from approximately 75 to 95 learners. This increase in class size has led to a push to do more with less, and this module represents a crucial improvement in clinical teaching efficiency.
Instead of each learner being laboriously taught one-on-one chairside in clinic, the entire class can, as a group, expeditiously master basic concepts around how to approach a carious lesion in collaboration with relatively few faculty members. Still, to keep the ratio at one faculty member per 10 learners, the number of faculty involved in the Caries Curriculum has been increased annually. These faculty are primarily in the operative department, but faculty from the general dentistry department, who teach the fourth-year students, have been included to standardize and calibrate teaching around caries throughout the curriculum.
THE TRAINING APPLICATION BASED ON VR INTERACTION SCENARIOS – WITH EXAMPLES FOR LOGISTICS. The main goal of this article is to present an experimental view of VR training applications that allow the training scenario to be adapted to the user being trained. The resulting application allows the three scenarios to be selected in any combination, as well as the tasks to be modified through input files before a test starts. The program was developed in the Unity engine using the modern UnityXR framework, which provides extensive support for leading virtual reality hardware. The resulting solution demonstrates the possibility of efficient training in Virtual Reality while the course of training is modified without any need to recompile the program, and it also shows the positive value of using VR technology as a didactic solution. Introduction Along with the development of augmented and virtual reality technologies, the scope of such applications is increasing significantly. Over the last two decades, scientists have repeatedly considered the use of this technology, for example in the fields of medicine (neurosurgery and anatomy) [7] and architecture [10]. However, only with the popularization and wider availability of VR equipment have these solutions begun to penetrate many industries and to be used in a wider spectrum of fields. We have reached the point where such VR solutions are sometimes even cheaper and far more attractive than classic methods based on natural interaction with physical objects, especially in dangerous environments. Virtual Reality is no longer a solution for a narrow group of recipients; it has become a competitive tool for teaching, entertainment, and commercial purposes.
The authors of this article have observed that many existing VR training applications present a single, predetermined course scenario. They formulate the hypothesis that it is both justified and possible to develop multi-variant applications in which the interaction scenario is adjusted individually to the needs of a given user. The assumption is that, using the indicated tools, a multi-variant training application can be developed in a VR environment. The selected application area is a training course for an employee applying to work in a logistics warehouse, with three flexible scenarios implemented as examples. Evolution of VR technology The concept of Virtual Reality is not a new idea: the first commercial applications of this technology date back to the 1990s [4], when the first commercial devices of this type appeared. In the beginning, those devices were used only for entertainment, but even then the potential of this medium in other areas was noticed. Virtual Reality offers something that no other medium can guarantee: a transfer directly into the middle of a displayed environment, together with semi-natural interaction with the displayed objects using controllers. This makes for a highly immersive experience for the user's senses. However, after the first enthusiastic pilots, VR did not initially achieve commercial success, because the surrounding environment was uncomfortable and unrealistic, the hardware was very expensive, and it required computing power that was unavailable at the time; it was simply too early.
The situation has changed in the last 10 years: many of the initial problems of this technology have been resolved as information technology has matured. The current market of such devices (e.g., Oculus Quest & Rift [4], HTC Vive, Valve Index, Google Cardboard [14]) offers an open architecture for developers, and as a result VR has become available to a wide audience of consumers. The open architecture of the equipment makes the production of VR applications easier and allows this type of solution to be popularized quickly. VR in education Knowledge transfer is a condition for the progress of civilization; therefore, any innovation used in education acts as a catalyst for human development in general. Development of the educational domain has a positive impact on the range and methods of providing information [8]. In the digitization era, Virtual Reality has become a medium that can be used in many ways; in particular, it is another logical step in the use of computers for educational purposes, and the spectrum of possible applications is wide [1,16]. No other medium achieves such a level of immersion and contact with the virtual world as VR does. One of the biggest problems in education based on the traditional lecture is keeping students involved over a long teaching process. Lack of student commitment leads to: problems with knowledge acquisition, poor interest in the topic, and lower efficiency of the teaching process, and in consequence it results in negative experiences and even a lack of any tangible learning outcomes.
Each didactic tool that allows more extensive contact with the topic being taught makes learners more involved in the process and more willing to analyse an issue, which generally has a positive effect on the growth of their own knowledge. In this area, Virtual Reality enables practical and extended contact with an issue: interaction with the virtual objects displayed in VR applications is intuitive, and students are willing to engage with a virtual representation of a teaching challenge, thanks to its modern and attractive appearance and the relatively short period of its presence on the market. Therefore, VR teaching applications strengthen students' involvement, allowing education to reach the next level of sensory interaction, just as in video games [2,3].
[p-ISSN 2083-0157, e-ISSN 2391-6761, IAPGOŚ 1/2021, p. 59]
Distance learning, visualization of complex environments The traditional form of teaching, based on direct contact between students and the master, in many situations turns out to be insufficient or even impossible to implement. A striking example is the recent months, when, due to the epidemiological situation around the world, almost all teaching activities had to be transferred to online platforms; even before the pandemic there were many cases where the availability of individual specialists' knowledge pushed students to leave their homes and cities, or even their countries. Virtual Reality allows us to move wherever we want, as well as to interact with other users, provided the application was designed that way. This adds a variety of possibilities to the distance learning experience, which can be realized even with the help of a simple smartphone for each student.
The weaknesses of the traditional didactic form also manifest themselves when the environment in which trainees must be trained turns out to be dangerous, or would even endanger the students' lives, and sometimes also when the training costs bring no positive economic return. Organizing educational courses whose subject would be, for example, rescuing miners from a collapsed mine or disposing of life-threatening materials is practically impossible under real conditions; of course, when such situations do happen, the emergency services must draw on their own experience, and in doing so they also increase it. But what about the first job or the first mission, when any work in such a difficult environment is practically impossible without prior exercise, or in the simplest case is very expensive and risky for the trainee himself? For Virtual Reality these limitations do not exist, and the form and customisation of a given environment depend only on the creators of the virtual space. Currently, an extensive field in which VR is widely used is medicine: operations and procedures in which not the slightest error is acceptable can be practised and repeated many times in Virtual Reality, thereby preparing future doctors for real operations [6].
Examples of mono-scenario VR teaching applications In 2019, the Central Mining Institute implemented the project titled "Specialized competences of the graduates as a chance for employment in the construction industry on the cross-border labour market". This project included the preparation and delivery of a virtual reality workshop whose aim was to train users in the removal of hazardous materials by following scenarios presented to them via a network connection. The application made it possible to gain practical experience in the removal of life-threatening materials (asbestos) without physical contact with them. An advantage of the application was the possibility of conducting training with two users at the same time through a network connection, which allows more people to be trained simultaneously and also prepares trainees for cooperation. The trainees were supervised by an administrator who could run scenarios, watch users' actions, and indicate errors or give commands from a separate computer. The workshop prepared in that project was mobile and could be assembled and set up anywhere. One of the authors of the current article had the pleasure of participating in the development of the above solution. The limitation of that solution was the lack of any possibility of external interference in the course of the scenario. The administrator could not change the characteristics of the items and tasks used by the trainees, which limits the functionality of the application. Nevertheless, this implementation is still actively used for user training [11,12].
Another implementation that allows only one scenario of interaction is the "Mission: ISS" application available from the Oculus Store [13]. It is an entertainment application that allows a user to play the role of a crew member of the International Space Station and explore its interior. However, the authors of "Mission: ISS" went a step further than mere entertainment: they faithfully reproduced each element of the station and give the user a virtual documentary in which former station members, during a virtual tour, talk about everyday life at the space station, describe its important elements, and draw the user's attention to what their life in space looks like. The application is free and available to anyone with the appropriate equipment. Thanks to this solution, the player is able to visit a place practically inaccessible to him, directly from his own room, and thanks to the appropriate preparation of the application by its creators, he can gain knowledge from an experience that might otherwise seem dull. A virtual journey to such an exotic place, practically inaccessible to the ordinary student, makes the experience extremely attractive, sharpens the senses, and engages the participant so that he "absorbs knowledge" along the way. The disadvantage of the application is its linear structure and relatively small amount of content. After exploring it once, the user may not be prompted to return to the application. The game may also cause discomfort for some users, because no mechanics were used to increase the quality of the experience [5].
These two applications show that even a very realistic and attractive visualisation, or even a very important training course to be passed, does not make a user return to VR if it offers only one possible scenario of interaction. At best, the user will be bored, or will know the scenario so well that he or she will pass the exam without engaging with the current course. The authors therefore assume that it is necessary to implement multi-scenario interaction in a VR teaching application, and such an experiment is described in the following chapters. Multi-scenario VR application for logistics Logistics warehouses are spaces where many people start their first job. At first glance, a young person does not need high qualifications to work there physically, but this environment poses many dangers, both for the workers and for the products being stored. This chapter describes three typical situations that should be tested for applicants for warehouse work: products confection (order assembly); use of personal protective equipment; and keeping to the safe path. VR mechanic of products confection test The first task for the player is to stack boxes properly onto the appropriate pallets within a specified time and according to an order. Mistakes here could result in wrongly assembled customer orders and future complaints, or in product damage if the boxes are not fitted correctly. The task itself does not have to consist of a fixed number of pallets; in the example, 16 different boxes and 4 pallets are used (see Fig. 1). The application is designed in a way that allows the developer to adapt the number of variables to a given variant of the task. This requires recompilation of the program, but it is enough to use the prepared development tools (described in subchapter 3.5); it does not require interference with the source code of the application itself.
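The pallet-correctness check behind the products confection mechanic (every expected box present, nothing protruding beyond the pallet perimeter, every box stable) could be sketched roughly as follows. This is a minimal, language-agnostic sketch: the actual application implements this logic in C# inside Unity, and the field names and tolerance below are illustrative assumptions.

```python
# Illustrative sketch of the pallet-correctness check; field names and
# the tolerance are assumptions (the real application is written in C#).

def box_within_pallet(box, pallet, tolerance=0.05):
    """True when the box footprint does not protrude beyond the pallet
    perimeter by more than `tolerance` (axis-aligned, metres)."""
    bx0, by0, bx1, by1 = box["footprint"]
    px0, py0, px1, py1 = pallet["perimeter"]
    return (bx0 >= px0 - tolerance and by0 >= py0 - tolerance and
            bx1 <= px1 + tolerance and by1 <= py1 + tolerance)

def pallet_status(pallet, boxes):
    """Return 'green' when every expected item is present, correctly
    placed within the perimeter, and stable; otherwise 'red'."""
    placed = [b for b in boxes if b["on_pallet"] == pallet["id"]]
    if sorted(b["item"] for b in placed) != sorted(pallet["expected_items"]):
        return "red"   # wrong or missing items on this pallet
    if not all(box_within_pallet(b, pallet) and b["stable"] for b in placed):
        return "red"   # a box protrudes or is still moving
    return "green"
```

The green/red result maps directly onto the pallet highlighting described in the text (Fig. 2): the pallet stays red until all three conditions hold at once.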
After the task starts, the student "physically" grabs the boxes and moves them to the appropriate pallet using the two controllers (see Fig. 7). When a box is on the appropriate pallet and is stable (i.e., standing still), he can start loading the next one. After the boxes are arranged properly on a given pallet, the VR mechanic signals the correct completion of the task by highlighting the pallet in green (see Fig. 2). Fig. 2. The proper result of a task [source: own] If, after some boxes have been placed on a pallet, it still shows red, the student has made a mistake and must correct it. Typical problems include: objects protruding too far beyond the perimeter of the pallet, inappropriate items put on the pallet, or careless positioning of an item itself. In each case, the student has to correct the mistake by approaching the pallet and moving the boxes to the appropriate places. VR mechanic of protective gloves test The second type of mechanic that can be added to the scenario is a task in which a student has to choose the appropriate gloves for cleaning the workplace. There are sharp objects there (broken glass), so before starting work he has to choose the right protective gloves for this task. Of the 3 types of gloves, two are correct, but only one type is actually provided in the workplace (e.g., the leather gloves are correct and available; the rubber ones are available too but not correct; the firefighting gloves are safe and available, but they are specialized equipment for special use and the worker should not use them for casual work). This mechanic tests the user's prior preparation for the training, and he passes the test after making the correct choice. Gloves are represented by the appropriate material put onto the student's hands on contact with the prepared objects (see Fig. 3).
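The pass condition of the glove test reduces to two predicates: the chosen gloves must protect against the hazard, and they must not be specialized equipment reserved for special use. A minimal sketch, where the glove properties are assumptions taken from the example in the text (the real application implements this in C#):

```python
# Illustrative glove-selection check; the properties are assumptions
# based on the example in the text (the real application is in C#).

GLOVES = {
    "leather":      {"protects_against_glass": True,  "specialized": False},
    "rubber":       {"protects_against_glass": False, "specialized": False},
    "firefighting": {"protects_against_glass": True,  "specialized": True},
}

def glove_choice_passes(choice):
    """A choice passes only if the gloves protect against broken glass
    and are not specialized equipment for special use."""
    g = GLOVES[choice]
    return g["protects_against_glass"] and not g["specialized"]
```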
VR mechanic of safe path test The third type of mechanic that can be added to the scenario is a passive one, in which a student has to remember his own safety. This mechanic runs from the beginning of the test to its end, and the user has to stay careful and remember the necessity of the safety path, because in the same warehouse many forklifts operate and workers cannot walk everywhere; it is too dangerous. If this mechanic is added to the test, the task environment contains the paths and communication routes that are used by forklifts to move loads. The safety paths are marked with thin white stripes in the places where the trainee may safely enter, and with wide ones in the places where entering counts as an error (see Fig. 4). Such crossings of communication routes are usually marked, and forklift operators are required to be more vigilant in these places, so it is safer to walk there than elsewhere, where the drivers move faster. If a student enters a wrong area, the violated place immediately turns red to inform him of the error. Similarly, when he enters a correct area, it is highlighted in green to confirm the correct place of passage (see Fig. 5). Such visual anchors help the user get used to similar markings in a physical warehouse: after the training, a student will remember that the densely placed stripes are for his safety, and the other stripes are for the vehicles in the warehouse. Multi-selection panel The user is the entity that has the closest contact with the application: he puts the equipment on his head and manipulates virtual objects by performing specific activities.
The tasks above have been designed so that they can be put into scenarios either as a full competence test or as individually selected parts for a dedicated user. All the prepared mechanics can be switched on or off on the main panel of the application (see Fig. 6). The mechanics are prepared to simulate real situations in which the employee is supposed to demonstrate perceptiveness, orientation in the environment in order to maintain safety, or selection of the appropriate tool for a given task. The administrator/examiner decides which of them should be passed by the tested user. Software and technological background The equipment for which the application has been designed is the modern and up-to-date Oculus Quest [14] Virtual Reality goggles. They are equipped with two controllers, a headset, and a cable connecting the equipment to the computer (see Fig. 7).
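The examiner's scenario selection, made on the multi-selection panel or through input files before a test starts, could be modelled by a small configuration loader along these lines. The file format and field names below are illustrative assumptions, not the application's actual input format (which is defined by the authors' C# development tools):

```python
# Illustrative scenario configuration loader; the JSON format and field
# names are assumptions, not the application's actual input format.
import json

DEFAULTS = {"confection": False, "protective_gloves": False, "safe_path": False}

def load_scenario(text):
    """Parse a scenario file and return which mechanics are enabled,
    ignoring unknown keys so older files keep working."""
    data = json.loads(text)
    return {name: bool(data.get(name, default))
            for name, default in DEFAULTS.items()}

def enabled_mechanics(config):
    """Names of the mechanics the examiner switched on, in a fixed order."""
    return [name for name in DEFAULTS if config[name]]
```

Loading such a file at startup is what lets the course of training change without recompiling the program: only the data varies, not the code.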
The native operating environment of this model of goggles is the internal peripherals of the device itself (built-in components that allow an application to run without any external hardware), but thanks to the Oculus Link technology [15], when a cable connection to a computer is used, the goggles can draw on its computing power for more advanced applications. The prepared solution assumes the use of this option. The solution was made using the modern UnityXR framework, which allows the program to be compiled for any VR device without the need to adapt it to hardware differences between platforms. It is a relatively new solution that significantly unifies the way similar applications are created across platforms (the way actions are performed on controllers, motion recognition, etc.). As a result, the process of creating applications for various VR platforms has been significantly simplified, because it is no longer necessary to spend a lot of time adjusting the solution to the equipment of different manufacturers. The engine that was used to build and compile the program is Unity, a complete solution, supported for years, for creating games, commercial and educational programs, as well as animations. The engine version used to build the solution is 2019.4.1f1, but thanks to the use of UnityXR, which is a built-in tool of the engine, it is possible to update the compiler version. The code editor used to build the solution is Visual Studio Community 2019, a common and proven solution that can be natively connected to Unity, allowing for efficient work and code debugging. The mechanics and the entire software logic have been written in C#, which is the standard solution when working with the Unity engine.
Conclusions The 3D scenes prepared in the presented VR training application are not as effective as, e.g., those presented in the Mission: ISS application, but that was not the main goal of the authors. Using simple graphics, we attempt to prove the hypothesis that multi-variant interaction of a user with the virtual world is useful. In the article we describe a sample VR training application for a logistics warehouse worker. Creating a dedicated training scenario in the Unity environment does not rely only on manipulating windows in that environment. In order to be able to navigate through the previously developed multi-variant scenarios, it is necessary to prepare a script that activates individual scenarios. Such scripts, properly prepared in C#, can be added to any element visible on the stage, regardless of whether it is a graphic object, a UI part, or even an empty GameObject, and give it the characteristics that have been defined through the code. Thus, it has been shown that the user's interaction with the virtual world can be personalized to the needs of a given training scenario, and this is more useful than repeating the same activities over and over again. The assumptions of the authors' work were achieved and the hypothesis has been proved. Fig. 3. The appropriate gloves used in the cleaning test [source: own]
Accuracy of intraoral scanning in completely and partially edentulous maxillary and mandibular jaws: an in vitro analysis Objectives New generation intraoral scanners are promoted to be suitable for digital scans of long-span edentulous spaces and completely edentulous arches; however, the evidence is lacking. The current study evaluated the accuracy of intraoral scanning (IOS) in partially and completely edentulous arch models and analyzed the influence of operator experience on accuracy. Materials and methods Four different resin models (completely and partially edentulous maxilla and mandible) were scanned, using a new generation IOS device (n = 20 each). Ten scans of each model were performed by an IOS-experienced and an inexperienced operator. An industrial high-precision scanner was employed to obtain reference scans. IOS files of each model-operator combination, their respective reference scan files (n = 10 each; total = 80), as well as the IOS files from each model generated by the same operator, were superimposed (n = 45; total = 360) to calculate trueness and precision. An ANOVA for mixed models and post hoc t tests for mixed models were used to assess group-wise differences (α = 0.05). Results The median overall trueness and precision were 24.2 μm (IQR 20.7–27.4 μm) and 18.3 μm (IQR 14.4–22.1 μm), respectively. The scans of the inexperienced operator had significantly higher trueness in the edentulous mandibular model (p = 0.0001) and higher precision in the edentulous maxillary model (p = 0.0004). Conclusion The accuracy of IOS for partially and completely edentulous arches in in vitro settings was high. Experience with IOS had small influence on the accuracy of the scans. Clinical relevance IOS with the tested new generation intraoral scanner may be suitable for the fabrication of removable dentures regardless of clinician’s experience in IOS. 
Introduction Digital technologies are increasingly used in daily life, a trend that can also be found in dentists' clinical routine [1]. In dentistry, the introduction of the terms computer-aided design (CAD) and computer-aided manufacturing (CAM) marked the start of an unprecedented digitalization process. CAD-CAM procedures represent only one part of the digital processes, which further comprise radiography, intraoral scanning (IOS), practice management, and patient recording, just to mention a few [2]. Since their first inception in dentistry in the 1980s, IOS devices have evolved considerably and are currently available from a plethora of manufacturers [3]. Although with the technological advances the IOS devices now have higher accuracy and shorter scan times, and provide increased patient/clinician comfort, the basic principles of IOS still remain quite similar [4]. Consequently, digital scans for the fabrication of single- or short-span fixed partial dentures are a proven option today, with similar or even better outcomes regarding accuracy and scan time, compared to conventional impression taking [5][6][7][8][9]. From a patient's perspective, IOS appears to be preferable to conventional impression taking in those scenarios, as it causes less discomfort [10]. Complete-arch scans in dentate sites have also been improving, and IOS can be successfully applied in those scenarios [11,12]. However, in terms of accuracy, complete-arch scans still seem to remain inferior compared to conventional impressions [12,13]. Furthermore, scan time may differ in different clinical complete-arch scenarios [14]. In a partially dentate scenario, the accuracy of IOS seems directly related to the size of the edentulous area, with higher inaccuracies when scanning extended edentulous areas [12,15]. When it comes to removable partial dentures (RPDs) or removable complete dentures (RCDs), it remains unclear whether IOS is a suitable option with regard to scan accuracy and scan time [16].
Nevertheless, complete digital workflows for the fabrication of RPDs and RCDs based on IOS data are available in the current literature [17][18][19]. The major challenge for taking intraoral scans in edentulous arches is the recording of the non-attached mucosa in the sense of a functional impression, as done in conventional workflows [20]. Due to the image-based nature, taking a functional impression with an IOS device is practically impossible, and the digital scans are taken under passive muco-static conditions [21]. However, clinical reports on IOS for the fabrication of RCDs and RPDs have reported clinically acceptable outcomes [17][18][19]. Recently introduced new generation intraoral scanners are promoted as being suitable for scanning of extended or even completely edentulous ridges, even without reference markings, as suggested by some authors [22,23]. The present study aimed to analyze the accuracy (trueness and precision) of IOS in completely and partially edentulous maxillary and mandibular models. The study further evaluated the influence of the operators' experience with this new generation IOS device on the scan accuracy and scan time. The alternative hypothesis (H1) was that an IOS-experienced clinician would generate more accurate and faster scans compared to an inexperienced clinician. Study setting Four different types of resin models, namely edentulous (B-3CSP; frasaco GmbH, Tettnang, Germany) and partially edentulous (ANKA-4; frasaco GmbH, Tettnang, Germany) mandibular and maxillary models (Fig. 1), were mounted on a phantom head (P-6/3; frasaco GmbH, Tettnang, Germany) with a face mask (P-6 GMN, frasaco GmbH, Tettnang, Germany) to simulate clinical conditions. The teeth in the partially edentulous models were prepared to receive a combined clasp- and attachment-retained RPD (mandibular model, Kennedy Class II) or a clasp-retained RPD (maxillary model, Kennedy Class III).
Digital scans were performed using a new generation IOS device (Primescan; Sirona, Bensheim, Germany) with the software version 5.0.2 by two specialist prosthodontists, one experienced and one inexperienced in IOS. Neither of the clinicians had ever used the tested IOS device before. Therefore, the manufacturer provided a theoretical instruction on how to use the device, explaining the technique and the recommended scan strategy. The two operators had no practical training before taking the simulated intraoral scans. All scans were made on the phantom head under dry conditions with ambient light. No information on the measuring uncertainty of the Primescan is provided by the manufacturer. The decision on which type of model to start with was made by a coin flip, which was used to prevent the effect of "operator preference for scan order." Both clinicians started with scanning the edentulous, followed by the partially dentate models (always the maxillary first, then the mandibular model). Each operator took ten digital scans of each model (n = 10), resulting in a total of 20 scans per model and a total of 80 scans. The scan time of each scan was recorded separately, which included only the time for scanning, but not for subsequent software calculations. Afterwards, the scan data were exported in the standard tessellation language (STL) file format. For the reference data, all models were digitized using an industrial high-precision scanner (ATOS Capsule 200MV120; GOM GmbH, Braunschweig, Germany). Before the reference data were obtained, the calibration of the system was done by an independent calibration service (German Calibration Service, DKD), revealing a measuring uncertainty of 1 μm. The reference scan data were also exported in the STL format.
Before starting the superimposition of the STL files, a region of interest (ROI), which represented the future extension of an RCD or an RPD was defined based on the reference STL files and was digitally transferred to the STL files obtained by IOS (Fig. 2). The prospective denture borders were marked to be approximately 2 mm away from the mucobuccal fold, resulting in denture border positions outside the area of the alveolar mucosa. Subsequently, the superimpositions were done with a software (GOM Inspect Professional; GOM GmbH, Braunschweig, Germany) applying a local best-fit alignment according to the respective ROI, using all surface points of the IOS data within this region. The number of those surface points was recorded for each scan. For trueness, the STL file of each model and operator was superimposed to the respective reference scan STL file (n = 10, N = 80). Afterwards, the average 3-D deviation using the absolute amount of the distances between all surface points of the IOS and the reference scan within the ROI was calculated [12]. For precision, all IOS data of the same model and operator were superimposed to each other (intragroup comparisons; n = 45, N = 360) and 3-D deviations were calculated the same way. Statistical analysis For descriptive analyses, median values, interquartile ranges (IQRs), and minimum and maximum values were calculated. Trueness and precision were assessed in terms of the logarithm of absolute deviations (LAD), and the effect of the type of model and the operator were analyzed. For trueness, the impact of the factors "scan time" and "selected points" were additionally analyzed. The scan time was assessed in terms of the logarithm of scan time in minutes (LSTm). Linear mixed models were used to model the LAD and LSTm. Thereby, the repeated scans were modeled as random values. 
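Once the superimposition has produced per-point distances, the deviation metrics above reduce to simple aggregates: trueness is the mean absolute 3-D deviation of one scan against its reference, and precision pairs every scan of a model-operator group with every other scan of the same group (10 scans give C(10, 2) = 45 pairs, and 8 model-operator groups give 360 superimpositions). A minimal sketch of this bookkeeping, with hypothetical inputs standing in for the GOM Inspect output:

```python
# Minimal sketch of the trueness/precision bookkeeping described above.
# Per-point distances are assumed to come from the best-fit superimposition
# (performed in GOM Inspect in the study); the inputs are stand-ins.
from itertools import combinations

def mean_absolute_deviation(distances):
    """Trueness of one scan: average absolute 3-D deviation (micrometres)
    over all surface points within the region of interest."""
    return sum(abs(d) for d in distances) / len(distances)

def precision_pairs(scan_ids):
    """Intragroup comparisons for precision: every scan of one
    model-operator group paired with every other scan of that group."""
    return list(combinations(scan_ids, 2))

# Ten scans per group give C(10, 2) = 45 pairs; 4 models x 2 operators
# give 8 groups, hence 45 * 8 = 360 superimpositions in total.
pairs_per_group = len(precision_pairs(range(10)))
total_pairs = pairs_per_group * 4 * 2
```

The pair counts reproduce the study's n = 45 and N = 360 figures, which is a quick sanity check that the intragroup comparison scheme is understood correctly.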
An ANOVA for mixed models was used as an omnibus test to assess global differences, and a t test for mixed models was used to assess group-wise differences post hoc (for both types of tests, the Satterthwaite approximation was used). The impact of scan time and the number of surface points on LAD was assessed while correcting for the effects of model and operator (covariance analysis). Model accuracy was tested with the help of goodness-of-fit tests (Shapiro-Wilk) on residuals and random effects. p values less than 0.05 were considered statistically significant. No corrections for p values were applied due to the explorative nature of this study. All statistical analyses were performed using R software (version 3.5.0; R Development Core Team, https://www.r-project.org/, 2018). Results The overall median trueness comprising all digital scans by the two operators was 24.2 μm (IQR 20.7 μm-27.4 μm). The statistical omnibus test yielded a significant influence of the type of model (p < 0.0001), the operator (p < 0.0001), and of the interaction of the operator and the type of model (p < 0.0001) on trueness. Significantly higher trueness was found in the scans of the edentulous mandibular model by the inexperienced operator (p = 0.0001). No differences were detected among the other scans (Table 1, Fig. 3). For the scans of the partially edentulous models, the largest deviations were found in the edentulous sites of the anterior maxilla and the right posterior mandible (Fig. 4). The overall median number of surface points was higher in the scans of the inexperienced operator (140,760; IQR 119,753-153,929 vs. 140,544; IQR 124,548-163,047), however, without influence on trueness values (p = 0.23). The overall median precision was 18.3 μm (IQR 14.4-22.1 μm). The statistical omnibus test yielded a significant influence of the type of model (p < 0.0001), the operator (p = 0.02), and of the interaction of the operator and the type of model (p = 0.03) on precision.
A significantly higher precision was found for the scans of the edentulous maxillary model by the inexperienced operator (p = 0.0004). No differences were detected among the other scans (Table 2, Fig. 5). The overall median scan time was 100.5 s (IQR 72.0-139.2 s). The statistical omnibus test yielded a significant influence of the type of model (p < 0.0001) and the operator (p < 0.0001) on the scan time. Scans of the experienced operator were faster than those of the inexperienced operator (Table 3, Fig. 6). Longer scan times could be associated with a higher level of trueness (p = 0.04).

Discussion

IOS of completely and partially edentulous maxillary and mandibular models resulted in high trueness and precision. The accuracy of the digital scans obtained by the experienced operator was not higher than that of the scans of the inexperienced operator. As a matter of fact, higher trueness was found for the edentulous mandibular and higher precision for the edentulous maxillary model scans of the inexperienced operator. Therefore, in terms of accuracy, the alternative hypothesis had to be rejected. However, the scan time of the experienced operator was shorter, confirming the second part of the alternative hypothesis. Although no sample size calculation was done, the number of ten scans per operator-model combination, resulting in 20 scans per model, was deemed sufficient for analyzing the accuracy, considering that studies of a similar nature analyzed equal or even smaller numbers [12,24]. In addition, statistical differences were found for trueness, precision, and scan time. However, including only a single IOS-experienced and a single IOS-inexperienced operator is a limiting factor. All digital scans were performed in a phantom head to simulate the limited space to move the camera intraorally.
Table 1 Median trueness values, interquartile ranges, and minimum and maximum deviations in μm for every cast, and comparison between experienced and inexperienced operator (post hoc pairwise t tests)

Other factors, such as patient movement, the presence of saliva, or varied light reflection due to different kinds of intraoral tissues, which are said to influence the accuracy, were not simulated. However, some recent studies have shown only minor differences between in vivo and in vitro complete-arch scans with IOS devices in terms of accuracy and precision [25,26]. Regarding the digital scans of the non-attached mucosa, which is the major challenge when scanning edentulous sites, a distance of 2 mm away from the mucobuccal fold was chosen, simulating the future extension of the denture. As recent studies have shown an improved fit of digitally fabricated RCDs, it might not be necessary to extend the denture borders into the alveolar mucosa to achieve adequate stability of an RCD, as is done in conventionally fabricated RCDs [27,28]. However, this hypothesis must be confirmed by future studies, as there is as yet no evidence for it. Keeping the scan borders 2 mm away from the mucobuccal fold decreased the scanned edentulous area. This decrease might be a factor in the high accuracy found in the current study, as an increase in the scanned edentulous area has been reported to influence the accuracy of intraoral scans negatively [15]. Many different techniques for analyzing the accuracy of IOS have been reported; however, using reference scan data from an industrial high-precision scanner is still regarded as the gold standard for measuring trueness [4,29]. Comparing scan data through a best-fit alignment is also a well-accepted methodology, although it has some limitations that have to be taken into account when interpreting the results of the present study.
This algorithm attempts to find the superimposition of two surface scans with the minimum difference between all surface points, which can lead to an underestimation of the distance between the two scans [29]. In the present study, a local best-fit alignment was applied, focusing only on the surface points of the ROI and simulating the future extension of an RPD or RCD, respectively. As the ROI had to be defined only once for each type of model based on the reference scan data, this technique resulted in a more repeatable superimposition compared to post-processing of every single scan in terms of manual trimming of the STL files and subsequent superimposition. Furthermore, different approaches have been used to describe deviations between digital scan data, including root-mean-square (RMS) deviations, average deviations, mean deviations, and absolute deviations [12]. The technique applied here, taking the absolute amount of every deviation between two corresponding surface points and subsequently calculating the average, is mathematically similar to the RMS deviation. This similarity enables the comparison of the current results with studies that used RMS deviations. The application of only one intraoral scanner limits the interpretation of the results of the present study. A test group with a conventional impression technique was not included, as the accuracy of conventional impressions with polyvinylsiloxane or polyether materials under in vitro conditions has been demonstrated before in both dentate and edentulous scenarios [12,30,31]. In those studies, the median deviations of conventional impressions ranged from 7.4 to 39 μm. The accuracy of IOS in all types of models in the present study was very high.
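The relation between the average absolute deviation used here and the RMS deviation reported by other studies can be illustrated with hypothetical point-wise deviations:

```python
import math

def mad(deviations):
    """Average of absolute deviations (the metric used in this study)."""
    return sum(abs(d) for d in deviations) / len(deviations)

def rms(deviations):
    """Root-mean-square deviation, common in other IOS accuracy studies."""
    return math.sqrt(sum(d * d for d in deviations) / len(deviations))

# Hypothetical point-wise deviations in μm within an ROI
devs = [18.0, -22.5, 25.1, -19.7, 21.3, 24.0]
print(f"MAD = {mad(devs):.1f} μm, RMS = {rms(devs):.1f} μm")
# By the quadratic-mean / arithmetic-mean inequality, RMS >= MAD, with
# near-equality when the absolute deviations are similar in magnitude,
# which is why the two measures are broadly comparable across studies.
```

Because scan deviations within an ROI tend to cluster around a common magnitude, the two measures usually differ by only a few percent, supporting the comparison with RMS-based studies made above.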
Table 2 Median precision values, interquartile ranges, and minimum and maximum deviations in μm for every cast, and comparison between experienced and inexperienced operator (post hoc pairwise t tests)

In the current literature, there is only a single study that reports on the trueness and precision of the IOS device used in this study (Primescan, Sirona), comparing it to different IOS devices [12]. In that study, neither the trueness nor the precision was as high as in the present study, which used the same software with a best-fit algorithm for the analyses. Interestingly, they scanned a completely dentate model, in which trueness and precision can be expected to be higher than in an edentulous or partially dentate model. Nevertheless, the Primescan also performed best of all the scanners applied in that study, but trueness and precision were significantly higher with conventional polyvinylsiloxane impressions. An explanation for the higher accuracy in the present study could be the newer software version (version 5.0.2), which was not available when the former study was conducted. Compared to other studies reporting in vitro deviations of polyvinylsiloxane impressions in partially or completely edentulous arches, the median deviations in the present study using IOS were smaller [32,33]. The small influence of IOS experience on the accuracy, and the even higher trueness and precision found in the edentulous mandibular and maxillary model scans of the IOS-inexperienced operator, were not expected, as the available literature suggests higher accuracy in digital scans by IOS-experienced clinicians [34]. However, it is questionable whether the small but statistically significant difference in trueness and precision between the operators is of any clinical relevance.
Considering the results of a recently published study on maxillary complete-arch scans, which reported maximum deviations of 0.3 mm to be clinically relevant, this has at least to be critically scrutinized [35]. The shorter scan times of the IOS-experienced operator were to be expected, as the positive effect of IOS experience on scan time was demonstrated in previous studies [36]. The main reason for the equal trueness and precision values of most of the digital scans by the two operators might be the technological evolution in this new-generation IOS device. However, this hypothesis has to be proven by further clinical studies. The longer scan time of the inexperienced operator could be another reason for the higher trueness in the edentulous mandible and the higher precision in the edentulous maxilla scans, as the statistical analysis showed a direct correlation between longer scan times and higher trueness. For future research, increasing the number of experienced and inexperienced operators would help confirm the results of the current study. Clinical studies evaluating the suitability of IOS for RCD or RPD fabrication under in vivo conditions should also be performed. Controlled trials comparing clinical and patient-reported outcomes with dentures fabricated from digital scans or conventional impressions would be of particular interest. Furthermore, it would be interesting to investigate whether denture borders must be extended into the functional zone or whether, due to the improved fit, staying in the keratinized attached mucosa might result in adequate stability of a complete denture.

Conclusion

Within the limitations of this in vitro study, it was concluded that the accuracy of IOS in edentulous and partially edentulous models using the tested new-generation IOS device (Primescan) was high.
The operator's experience with IOS had only a small influence on the scan accuracy; however, the experienced operator's scan times were shorter. The intraoral scans obtained with the tested new-generation intraoral scanner may be suitable for the fabrication of removable prostheses regardless of the clinician's experience with IOS.

Funding Information Open access funding provided by University of Bern.

Compliance with ethical standards

Conflict of interest The authors declare that they have no conflict of interest.

Ethical approval This article does not contain any studies with human participants or animals performed by any of the authors.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
The making of the 'useless and pathological' uterus in Taiwan, 1960s to 1990s

During the 1960s and 1970s, the notion that the uterus is a useless and pathological organ after a woman has had 'enough' children emerged alongside news reports of excessive hysterectomy in Taiwan. This notion and hysterectomy became two sides of the same coin, the former pointing to the burden of birth control and cancer risk, and the latter to sterilization and removing cancer risk. I explore how, in post-war Taiwan, the notion became commonplace through the intersection of three historical formations: the medical tradition of employing surgery to manage risk (such as appendectomy for appendicitis), American-dominated family planning projects that intensified the surgical approach and promoted reproductive rationality, and cancer prevention campaigns that helped cultivate a sense of cancer risk. The gender politics operating in the family planning and cancer prevention projects were apparent. The burden of birth control fell mainly on women, and the cancer prevention campaign, centring almost exclusively on early detection of cervical cancer, made cancer into a woman's disease. I argue that the discourses of reproductive rationality and disease risk were parallel and, in several key ways, intersecting logics that rendered the uterus useless and pathological and then informed surgeons' practice of hysterectomy. Exploring the ways in which the uterus was envisioned and targeted in the history of medicine in Taiwan, this paper shows overlapping bio-politics in three strands of research in an East Asian context, namely women's health, family planning and cancer prevention, and offers a case for global comparison.

Some physicians, writing for a popular audience, went so far as to draw an analogy between the uterus and the appendix, arguing that both were useless and pathological.
The notion of the useless and pathological uterus (wuyong qie hui shengbing de zigong; hereafter, wuyong) began to inform doctors' practice in the 1960s, and it continued well into the 1990s, as indicated in a 1997 study on non-cancer hysterectomy patients in Taipei, which found that 75% of the women were told of the notion of wuyong by their physicians. 3 Addressed specifically to women who had had 'enough' children and were purportedly at risk of cancer, the notion of the useless and pathological uterus, a particular kind of embodied risk, has had profound implications for medical practice, as well as women's bodies. In the popular literature and women's personal accounts, the phrase yilao yongyi (one effort, once and for all) rationalised certain actions, most typically surgery. When the veteran uterus was framed as a source of unwanted pregnancy and a health threat, hysterectomy became a convenient, seemingly objective, medico-technological answer to that threat. Removal of 'useless uteruses' was not a Taiwanese invention; it was a global phenomenon with local variations. The American gynaecologist Ralph C. Wright advocated routine hysterectomy in the 1960s, as it 'demonstrated the value of prophylactic removal of a normal, but useless and potentially cancer-bearing organ'. 4 J.B. Skelton in 1973 urged the American College of Obstetricians and Gynaecologists to recommend prophylactic elective total hysterectomy and bilateral salpingo-oophorectomy after completion of childbearing as proper preventive medicine. 5 In the United States, rising significantly during the 1960s and 1970s, the hysterectomy rate reached its peak in 1975, at 725 000; it was the most frequently performed surgery by the 1980s.
6,7 In 1970s Japan, ob/gyns did something very similar; for instance, more than one thousand women underwent hysterectomy at Fujimi Hospital; when they went to the hospital for regular cancer check-ups, abdominal pain or light bleeding, they were warned that the surgery was urgently needed to avoid cancer. 8 In early twenty-first-century Mexico, hysterectomy was often used as treatment for cervical abnormalities, such as low-grade abnormal cell growth on the cervix. 9 Recently, Korea's hysterectomy rate has been at the top among OECD nations, raising an alarm for the nation. 10 As a former colony of Japan, and dominated by the US after the Second World War, Taiwan appropriated knowledge and practices from both empires, including prophylactic hysterectomy. Following medical trends in the United States and Japan, surgery stood as the way to free women's bodies from unwanted pregnancy and cancer risk in the name of medical progress. Drawing on popular and professional medical writings, women's published illness stories, and oral interviews with fifteen women who had had a hysterectomy, one midwife and three medical practitioners, this article examines the history of the making of the useless and pathological uterus in Taiwan during the second half of the twentieth century. I explore how, through the cultivation of reproductive rationality and the implementation of cancer prevention measures at the interface of government health administration, public health projects and doctor-patient clinical contexts, the uterus became an organ that carried the risks of unwanted pregnancy and cancer. While the medical view of the female body as pathological has a long and well-researched history in the West, 11 how the wuyong uterus emerged in Taiwan has received little scholarly attention. 12 I argue that the useless and pathological uterus discourses in Taiwan flourished at the nexus of three interconnected historical formations.
First, an emphasis on a surgical approach within the medical profession, 13 including among ob/gyn practitioners, which had developed in the colonial period, was reinforced after the war by American influence on medical education as it expanded, surgical techniques matured and more surgeons became available. The second was the implementation of a national family planning project, also supported by American funding and American-trained professionals, which officially began in 1964 and continued into the late 1980s. It promoted IUDs and sterilization as the best choices for birth control while the uterus increasingly became the main target of control. 14 The third was large-scale cervical cancer prevention measures in the early 1970s, which included the aggressive transfer of knowledge from the United States (e.g., public health education and Pap smear screening), and the uterus again was the organ of concern. Through these programs and the dissemination of popular health manuals, women were exposed in multiple ways to a rational, scientific approach to birth control and disease prevention and sensitised to a whole range of health risks surrounding unwanted pregnancy and gynaecological diseases. In short, by focusing on the ways in which the uterus was envisioned and targeted, this paper brings together three strands of research: surgery and women's health, family planning and cancer prevention. 15 On the history of surgery as a method of cancer prevention, Ilana Löwy's work shows that, in the early twentieth century in the United States, Britain and France, surgeons came to believe that removing precancerous lesions averted the danger of malignancy. This prophylactic surgical practice later led to surgical interventions performed on women with a hereditary predisposition to cancer (breast, ovarian and cervical) even though they were healthy. 16 A similar risk perception arose in Taiwan in the notion of the useless and pathological uterus.
My analysis shows that, as an instance of the history of the globalised medical practices surrounding cancer prevention, the embodied risk of uterine cancer was set at an earlier point than the stage of pre-cancer, and it did not require any genetic testing. Furthermore, women's bodies were portrayed as inherently more complex and pathological than men's bodies, as seen, for instance, in the coverage of gynaecological diseases and birth control in the popular health literature in Taiwan. The health manuals for a popular audience reveal that the popularization of cancer risks went hand in hand with the ob/gyns' emphasis on surgery. 17 The American influence on medicine in Taiwan during the Cold War was extensive, ranging from health policy, nursing and medical education, medical administration and medical practice to family planning campaigns. American recommendations played a vital role in shaping the long-term health planning of Taiwan, as neither a national health policy nor a central health organization existed before 1971. 18 Medical education in Taiwan was going through a transformation at the time, from Japanese colonial medicine to American standard medicine, 19 as was nursing education and the nursing profession. 20 In medicine, not only did a large number of Taiwan's new medical graduates go to the United States for career advancement, 21 but, with USAID (United States Agency for International Development) support, other medical professionals, including ob/gyns with practices in Taiwan, also went to the United States for training. 22 Modern obstetrics and gynaecology in Taiwan established itself based on a surgical orientation, and hysterectomy, abortion, tubal ligation and caesarean section are major skills of the trade. During the 1950s, ob/gyns learned from their mentors, trained in the colonial era, the valued technique of radical hysterectomy for the treatment of cervical cancer.
23 Abortion (dilation and curettage) was also a common procedure in ob/gyn clinics from the 1950s, even though abortion was illegal until 1984. In the history of caesarean sections in Taiwan, by the 1970s VBAC (vaginal birth after caesarean section) had been abandoned in favour of serial C-sections. 24 Articles on tubal ligation and surgical treatments for cervical cancer featured prominently in the official journal of the Association of Obstetrics and Gynecology of the Republic of China (now Taiwan Association of Obstetrics and Gynaecology), Journal of Obstetrics and Gynecology of the Republic of China (now Taiwan Journal of Obstetrics and Gynecology), launched in 1962. Looking back on their careers, ob/gyns often took pride in the large number of births, surgeries and Pap smears they had performed. 25 Obstetrician/gynaecologists' surgical skills were instrumental in both family planning campaigns and cervical cancer prevention; the former included abortion and installations of thousands of Loops. When the cancer prevention campaign (focused mainly on cervical cancer) began in the 1970s, they again took part in cancer screening. 26 By offering reliable birth control and disease prevention, their surgical orientation grew alongside both the promotion of family planning and the cervical cancer prevention campaign. The history of wuyong in Taiwan resides in the intersected history of family planning and cancer prevention. Recently there has been a growing body of literature on family planning in East Asian contexts, and women's health is a critical issue. Yu-Ling Huang notes how population control, particularly data produced by fertility studies, helped shift the focus of population control to the reproductive behaviour of women. But the ways in which competing methods were rationalized and might have affected how women's reproductive body, particularly the uterus, was perceived remain unexplored.
27 In the case of South Korea, John DiMoia points out that population control comprised control of both the number and quality of the population and that both types of control had consequences for women's bodies; in a later chapter he also shows how Korean masculinities were reshaped by the state in order to convince men to accept vasectomy. 28 This paper builds on this work on the history of family planning in East Asia to understand its ramifications for women's health, an aspect that has not yet been thoroughly explored. 29 In the case of how the uterus became useless and pathological, birth control was not the only issue at work; the emergence of cancer prevention, particularly cervical cancer prevention, was also critical. These two subject areas have been treated separately in the scholarly literature, and, moreover, much of the work on the history of cancer has been centred on the West. 30 By showing the joint influence of population control and cancer prevention on the ways in which the uterus became a bio-political object significantly mediated by surgery, this article contributes new insights into the historiography of both fields.

Hsiu-Yun Wang

In particular, I hope to join the projects exploring East Asian bio-politics, or, as Francesca Bray puts it, the theme of the 'complex meshing of biology, body, and citizen that underpins projects of biological nation building and molds the forms of modern subjectivity'. 31 In what follows, I will discuss these three aspects of the history, that is, surgery, family planning and cancer prevention, to elucidate the history of the making of a useless and pathological uterus. First, I trace the early use of a surgical method (tubal ligation) before family planning, and then I describe how this surgical orientation developed along with a rational mindset of reproduction in the context of family planning.
This rational mindset, expressed through the ideas of 'useless' and yilao yongyi, was a dramatic departure from a rural past that had valued fertility. Then, I describe the history of the cervical cancer prevention campaign to show the context in which the notion of a pathological uterus took root. The analogy made between the uterus and the appendix was particularly revealing of this sense of disease risk.

A surgical approach to birth control: before and after family planning

The desire to limit births had been in existence since at least the early twentieth century in Taiwan. 32 Before the arrival of family planning in the mid-1960s, upper- and middle-class women (teachers, writers, professionals and so forth) were early adopters of birth control, as indicated in women's own accounts. 33 Several methods were available to women, including the Ota ring (introduced from Japan in the 1930s), 34 condoms, spermicide, and rhythm and withdrawal methods, but these provided no guarantees. Condoms and spermicides were available mostly at pharmacies, while ob/gyn clinics provided knowledge of the rhythm method and performed tubal ligation (a method in which the fallopian tubes are severed and sealed or 'pinched shut') and abortion procedures. 35 Even though tubal ligation was probably used mainly by elite women, it stood out as medical progress to liberate women from the ongoing vigilance that many of the other methods required. As early as the 1930s, a small group of upper- and middle-class women had already been using tubal ligation as a contraceptive method before the official family planning project began. 36 Shuang-Sui Lin (1901-1968), the wife of Tsung-Ming Tu (Cong-Ming Du) (1893-1986), one of the most prominent physicians in modern Taiwan, used tubal ligation as a method of birth control after she had given birth to five children.
As Tu's daughter, Shuchun Tu, recalled in an oral history interview: 'At that time this [tubal ligation] showed progressive thinking, as very few people would get this done'. 37 Similarly, midwives, a group also at the forefront of modernity since the colonial period, 38 were likely to support and have tubal ligation. Cai Shuang Dai (b. 1911), a midwife who began her practice in 1934, had a tubal ligation after she gave birth to her fourth child in 1949. She later suggested to her younger brothers' daughters-in-law that they do the same. 39 When his wife had a tubal ligation at the time of an abortion in 1950, the prominent physician Xin-Rong Wu (1907-1967) wrote in his diary, 'We have decided to utilise the highest science to regulate we humans' natural life'. 40 Even though sterilization by tubal ligation was used, the uterus was not yet necessarily the target, in part because the technological and material means for performing hysterectomy and C-section were not yet available. It was not until the early 1970s that major surgery such as C-section became relatively safe; for instance, the mortality rate for C-section at National Taiwan University Hospital had gone down from 1.2% in 1951 to 0.3% in 1971. 41 Likewise, before the 1970s, hysterectomy for birth control was considered excessive. In 1951, a worried man wrote to the Reader's Service Column of Lianhe Bao: 'I live in the countryside... Now my wife is pregnant again, and we both agree that…we should not have any more, as we already have four children... I would like to ask...if we should do "shou-shu [surgery]," such as cutting the uterus or tubal ligation?' To answer this man's worry, the expert recommended tubal ligation over hysterectomy: 'If you want surgery to be done, tubal ligation is enough; no need to cut out the uterus'. 42 The family planning project officially began in 1964, and the surgical approach and reproductive rationality would expand to include women in the rural areas.
It first deployed field workers to educate married women on limiting births, used mass media to disseminate propaganda, coordinated with elementary schools, asking children to bring home informational cards and pamphlets for their mothers, and conducted several well-known studies on effective means of carrying out these various efforts. 43 It reached deep into the rural population. After the 1970s, the family planning literature also reached beyond married women to target high school students. In 1973, as part of the expanded family planning that was incorporated into the Six-Year Economic Construction Planning, the Department of Health distributed 1 400 000 copies of Weiyu Choumou [Saving for a rainy day], 44 a pamphlet containing information about family planning, to junior, senior and vocational high school students, as well as junior college students. In 1975, the senior high school curriculum began to incorporate family planning, and local census offices handed out Xinhun Jiating Jihua Shouce [Family Planning Manual for the Newly Married] 45 to couples at marriage registration.

41 According to the prominent ob/gyn doctor Chen Fu-Min, the 1950s was 'a time of poverty and there was a lack of resources', including anaesthesia, blood transfusion, drugs, instruments and manpower, and there was not enough knowledge and training. Therefore, even the 'simplest' procedures like C-section and hysterectomy were seen as daunting.

44 To ensure that every junior high graduate had a copy of the booklet, the number of copies matched the number of graduating junior high students each year, approximately 400 000. The first edition was published in 1971, the year the first group of students completed the 9-year national compulsory education that had started in 1968.
One of the campaign's long-lasting achievements was the popularization of reproductive rationality, the idea that women can be rational individuals who see the advantage to family welfare of having fewer children than average, which promised less burden for parents and more education for children. The recommended number of children went down from the 1960s to the 1970s. In 1967, Family Planning promoted 'Wusan' [five threes, 33333]: having one's first child after 3 years of marriage, having the second one after another 3 years, having no more than three children and completing all births before the age of 33. In 1969, it became 'Zinu shao Xingfu duo' (fewer children, more happiness), and, by 1971, '3321' (one's first child after 3 years of marriage, the second after another 3 years, two are perfect and one is not too few). A downward trend can also be seen in the number of children women perceived to be ideal. In 1965, it was, on average, 3.96, including 2.30 sons; by 1980, among 34% of women surveyed, the ideal number of children was 2 (sex of children was not indicated). The birth rate was 4.825 in 1965, and it dropped to 1.885 in 1985. 46 The family planning campaign made various contraceptive techniques available to married couples, and initially surgical methods did not dominate, despite the fact that Tz-Chiu Hsu (Zi-Qiu Xu), then head of the National Health Bureau, observed that the most commonly used contraceptive methods were abortion and tubal ligation. 47 In the period between 1964 and 1976, IUDs, rings and Lippes Loops were the number one method (64%). 48 Judging by the fact that Loops were cheaper than rings (80 NT for a ring, 30 NT for a Loop), or even free if installed at a public facility such as a public hospital or health station, Loops were likely more common.
The Lippes Loop, a type of IUD that had just been invented in the US, was introduced in 1962 after a study conducted in that year (the Taichung Study) had found that the Loop had a high acceptance rate in Taiwan. Why did the Loop become the major method? As mentioned earlier, the U.S.-led family planning in Taiwan heavily promoted IUDs. 49 The family planning campaign framed the Pill as a method mainly suitable for newly married couples not yet ready to raise children, which means it was seen as a temporary solution. In addition, women had many reservations about taking the Pill. Women's letters of inquiry to the popular magazine Fengnian (Harvest) pointed to a number of problems, including 'I have been taking the pill and feeling nausea, is it normal?' 'I forgot to take it, what should I do?' 'If I were to take it long term, would it harm my health?' 'Would it cause fetal abnormalities in the future?' 'Does it cause cancer?' 'Can I take the pill and still have my ginseng chicken?' 50 Even though the answers sought to reassure the readers of the Pill's safety, these questions nonetheless reflected women's reservations and suspicions. Indeed, a family planning worker's analysis of why the Pill was not well received included side effects, lack of knowledge, newspapers' scepticism, and having to take it every day. 51 Unlike South Korea, where vasectomy was promoted heavily by the state, 52 the vasectomy rate was low in Taiwan, at only 0.24% among all contraceptive methods in 1964, indicating an uneven distribution of birth control burdens by gender. In the 1970s, tubal ligations outnumbered vasectomies by 10 to 0.8. 53 There are several reasons why this was the case. Certain popular perceptions that the family planning campaign was eager to dispel might explain the low acceptance rate.
Popular articles frequently emphasized that vasectomy was not castration and, therefore, would not compromise a man's masculinity and would not be harmful to the body, but these were nonetheless palpable concerns. 54 A man who underwent vasectomy might also be considered unfilial, so much so that men would do it clandestinely before the 1960s. Bin-Yu Huang, writing in the 1970s about her husband's vasectomy two decades earlier, praised his determination and action to undergo the surgery. He had to take a three-day trip to Taipei to do it, and he told their neighbours that he had had an appendectomy. 55 In fact, the family planning campaign soon promoted the Loop exclusively. 56 This was in part because the rural population, where the birth rate was still relatively high, was thought by health planners to be ignorant and, therefore, not amenable to methods that required knowledge, training or self-discipline; for instance, it was thought that they could not be relied on to recognise the menstrual 'safe' period, take the Pill or use condoms. 57 Even though Taiwanese medical authorities claimed that the Loop was especially suitable for Taiwanese women, 58 it caused a significant percentage of adverse reactions, including spotting, bleeding, dysmenorrhea, lumbago, lower abdominal pain and perforations of the uterus. 59 This had an unintended consequence: wanting to avoid such complications and desiring a more guaranteed solution, many women found a surgical approach to birth control appealing. 60 Installing the Loop was only a step away from sterilization. For those women who had bad reactions to the Lippes Loop, the government encouraged voluntary sterilisation (tubal ligation) by providing financial compensation, and beginning in 1979 it was free in Taipei. As a result, the percentage of sterilisations rose precipitously from 1977, when it was 14.52% (109 722/755 465), to 1990, by which time it had climbed to 40.33% (656 680/1 628 254).
61 The ob/gyn profession also aggressively promoted tubal ligation and the idea of yilao yongyi (one effort, once and for all). The population expert, Dong-Ming Li, wrote that, 'When a couple have had the ideal number of children and have determined not to have any more children, in order to avoid getting pregnant again, surgical method is the simplest, safest, and most effective method'. 62 Hsiu-Yun Wang. When Dr Shih-Chu Ho (Shi-Zhu He) (b. 1946) was a resident at Taipei Veterans General Hospital, she recalled, if a woman was over 28 and had already given birth to two children, the chief resident would ask her during rounds: 'Did you ask her to sign the consent form for tubal ligation yet'? If answered negatively, 'it would seem that I had not fulfilled my responsibility as a resident'. 63 Writing for the popular magazine, Jiankang Shijie, Dr Zi-Yao Li (1927-2015), a prominent ob/gyn of National Taiwan University, declared that tubal ligation was the best contraceptive method because it was a yilao yongyi method. 64 Similarly, a study carried out by Chien-Dai Chiang (Qian-Dai Jiang), Chang-E Xu and Pei-Hua Wu in the early 1980s concluded that tubal ligation was the most appropriate method for family planning and should be promoted more. 65 Yilao yongyi was a phrase frequently used by both physicians and women, and it even appeared as a key term in a survey on women's motivations for sterilisation. Yilao yongyi simultaneously spoke to women's determination to not have more children and to their perception of convenience, since their ob/gyns promised that, unlike other methods, it was a one-time solution. 66 This strong sense of rationality in the decision-making around tubal ligation is also seen in its timing; according to the aforementioned study by Chiang, the majority (89.23%) of tubal ligations accompanied other medical procedures, such as immediately after giving birth (whether vaginal or caesarean), abortion or other gynaecological surgeries.
67 The appendix was similarly often removed during open abdomen surgery, such as C-section or hysterectomy, and, to promote tubal ligation (via abdomen), the family planning literature highlighted appendectomy as an additional benefit of tubal ligation. The free pamphlets distributed widely by the family planning project promoted it as '[a surgery that] only requires poking a small hole. It is simple and safe, and one can do appendectomy at the same time'. 68 Promoting tubal ligation in this way, the family planning project literature asserted that the uterus and the appendix were similarly dispensable organs. Indeed, appendectomy had direct associations with hysterectomy since the former was often a 'bonus' procedure during surgery. 69 National Taiwan University ob/gyn Dr Zi-Yao Li (1927-2015) published an article in 1976 suggesting that ob/gyns cut more appendixes than general surgeons since the former would cut the appendix 'incidentally' whenever they opened the abdomen. 70 Hai-Tao Zhao, in an account of her career as a nurse, told the story of a surgeon who, upon adding an appendectomy to a patient's emergency hysterectomy, said: 'To carry the good deed through, I will cut off her appendix too, so that she will not have any trouble in the future'. 71 How the appendix came to be seen as a useless and risky body part is a subject beyond the scope of this paper. Briefly, it emerged in a social and historical process tying disease to occupation. Appendectomy had been a top surgical procedure since at least the late 1950s. 72 By the 1960s, it had become a way of managing the risk of appendicitis, and many, most commonly sailors, took the precaution of removing it before long periods at sea. Dong-Hui Gao (1932-2012), a surgeon whose practice flourished during the 1960s-1980s in Hengchun, a small town near the ocean, remembered that when he first started his practice, ocean fishing was prosperous and many farmers went into sailing.
He would regularly perform prophylactic appendectomy on sailors, sometimes more than ten cases in a day. 73 Therefore, when the ob/gyn Xin-Xing Zhang compared the uterus to the appendix in the 1970s, portraying both as useless and potentially life-threatening, it was a familiar idea: 'If a woman has followed her own family planning and does not want to become pregnant again, it is not necessary to keep the uterus in the body. Its existence is like Mangchang (appendix), a kind of appendage'. 74 Indeed, 'wanting to be sterilised' had become one of the indications for hysterectomy, as listed in Dr Zhong-Xiu Ou's essay, 'Talking about Hysterectomy'. 75 And this notion was popularized in health manuals such as Xiandai Funu Baojian [Keeping Health for Modern Women], a collection of short essays written by 'famous physicians from all over the country' and with the prominent ob/gyn pioneer, An-Chiun Chen (An-Jun Chen, 1931-2009), as the editor, which contains several other essays that promote prophylactic hysterectomy. 76 In the translated popular health book, Ni Xiang Zhidao de Aizheng Zhishi [The Knowledge You Need to Know about Cancer], the analogy was carried further to frame the uterus as the same as the appendix in its irrelevance to femininity: 'Removing the uterus is the same as removing the appendix; neither will change women's characteristics'. 77 Hysterectomy as a method of sterilization was being practiced at least by the 1960s, before it appeared in health manuals, as my oral interviews with women indicated. Women in physicians' families were most notable. In my interview with Ms Xu (b. 1951), who worked at a women's hospital as a nurse in the mid-1960s, she recalled that both the wife and mother-in-law of the head physician underwent hysterectomy when 'they decided not to have any more children'. The mother-in-law was the younger sister of the city mayor, who was also a physician. The wife's hysterectomy was done by the head of the hospital, her husband.
Interestingly, during about the same time in the United States, it was noted that 'doctors' wives have proportionally more hysterectomies than any other group'. 78 To Ms Xu, much like the new material things that she first encountered at the ob/gyn hospital, such as sanitary pads, hysterectomy was a sign of progress: women could now rid themselves of menstruation, unwanted pregnancies and potential disease in one fell swoop. Not many women could afford hysterectomy, but the fact that women in physicians' families were doing it from the 1960s added a sense of modernity to hysterectomy, a convincing exemplar of the rational utility of these modern invasive procedures. 80

Women's big enemy

If the government's family planning project defined the uterus as 'the place for the fetus to grow' and rendered it useless after enough children were born (in an era when pregnancy was still a tangible physical risk), how did the uterus come to assume the second characteristic of putting a woman at risk of cervical cancer (therefore, a threat to life), similar to the risk the appendix carried (appendicitis could be lethal)? The cultivation of such a risk emerged when several uterine conditions, including abnormal bleeding, myoma and endometriosis, were made into indicators of cervical cancer risk to be eliminated by hysterectomy. This hybridization of the discourses of reproductive rationality and disease risk turned the uterus into a useless and pathological object. Cervical cancer had been the most common of women's cancers in Taiwan since the second half of the twentieth century. 81 According to a study based on 1,869 surgical and autopsy specimens by Shu Yeh (or Shu Ye, 1908-2004) and E.V. Cowdry (Yeh being a prominent pathologist at National Taiwan University) in 1954, over half the tumours among females were 'carcinoma of the cervix uteri' (55.97%).
82 From the 1950s to 1970s, it was on top of the list for women's cancers, 83 and it commonly appeared in the diaries of the gentry class. 84 In the period between 1973 and 1974, the largest number of cancer deaths for women was from cervical cancer (686). 85 When compared internationally, the mortality rate was high. In the late 1960s, the mortality rate for cervical cancer in Taiwan was 14.24 per 100 000, as opposed to 11.85 in Japan and 9.67 in the United States. 86 It had been called the 'public enemy of all women' since the 1950s. 87 As one of the popular health manuals states, 'for those who are slightly advanced in age, none have not heard of it', and it is 'the most troublesome and horrifying disease'. 88 Until 1997, the gynaecologist, Dr Shih-Chu Ho (Shi-Zhu He), still called it the 'number-one enemy of Taiwanese women'. 89 Pap smear screening began in the early 1970s, substantially lagging behind other countries' efforts. 90 The US began in the 1950s, and Japan adopted it in 1961. 91 Explaining why Pap smear screening began so late in Taiwan, Daiwie Fu suggests that, in addition to a lack of government policy and funding, and women's hesitancy to see male gynaecologists, the ob/gyn community was heavily invested in a surgical approach focused on radical hysterectomy as the pinnacle of the trade. Thus, it was difficult for ob/gyns to make the transition from a surgery- and hospital-based practice to a decentralised and community-based practice of screening. 92 However, the family planning and cancer prevention project helped to bring the care of women closer to such a style of practice.
Before cervical cancer prevention measures became common, most of the cases of cervical cancer occurred at an advanced stage, and radical hysterectomy and radiotherapy were the main treatments. Physicians regretted that little could be done, and the quantity of available radium was often limited. 93 As the popular saying had it, 'wenai sebian' (One pales with fear upon hearing cancer); the disease meant death. Women dreaded the disease and its treatment. Man-Qing Xiao recalled her experience in her memoir: cervical cancer was a common occurrence in Xiao's social network, and many had died from the disease or the surgery. She feared death and leaving her young children behind. Xiao's account was a typical story from the time; during the 1950s-1970s, cervical cancer patients in newspaper accounts were often poor middle-aged women with many children. 95 Some of these accounts discussed poor women committing suicide as a result of the incurable disease and life's hardships. 96 Nevertheless, fear of cancer was not limited to the poor. Hua Yan (Ting-Yun Yan, b. 1926), a prominent writer, conveys her experience of myomectomy and fear of cancer: 'To be honest, I am afraid of exams. Once being examined, there is not a single cell of a healthy person that will not possibly get the most terminal diseases (Zuijue de Zheng) in the world'. 97 Husbands wrote about their loss and heartbreak. 'Cervical cancer took away my wife' was a story about how Chun-Chan's (Spring Silkworm, a pen name) wife died from cervical cancer in 1967 as a result of misdiagnosis and delayed treatments. Jiang Gui-qin's True Story is a father's memoir about his diseased daughter, which also contained a chapter about his wife, who had died from cervical cancer at the age of 44 after three surgeries, leaving their daughter motherless. 98 Physicians also felt compelled to write about the tragedy of human suffering.
Dr Tian-You Lin's memoir, 'Lies', describes two memorable patients in the 1960s, his elementary school classmate and the classmate's wife. The husband was diagnosed with stomach cancer and the wife was later diagnosed with cervical cancer. The couple separately told their physicians not to disclose the other's disease. The wife's cancer was stage III and not operable. 99 Physicians' stories about women's paralysing fear of the disease are also common in the popular literature of the era. 100 Efforts at disease prevention were limited before the 1970s; Zhong-Quan Zhang, an ob/gyn at Taipei Municipal Zhong-Xing Hospital (currently Taipei City Hospital Zhongxing Branch), reflected that in the 1960s most of his colleagues busied themselves in treating patients and paid little attention to prevention. 101 He did not begin conducting cervical cancer screening until the early 1970s. 102 In fact, prior to the 1970s, the only other efforts made were by National Taiwan University Hospital (hereafter NTU Hospital), which began a small-scale, subsidised examination programme in the early 1960s, after establishing a special clinic for treating cervical cancer in the 1950s. 103 As the two hospitals were in Taipei, these efforts mainly reached women in Taiwan's largest urban area.

Cervical Cancer Prevention and Embodied Risk

In the process of the promotion of cancer prevention and under the shadow of cancer risk, the notion of yilao yongyi was gradually applied to hysterectomy as it had been applied to tubal ligation. According to Xiang-Da Wu (1938-2018) and his colleagues, prominent ob/gyns at Taipei Veterans' Hospital, prophylactic hysterectomy became popular in the mid-1970s. The first large-scale cancer prevention campaign began at this time, led by the Cancer Society of the Republic of China and the public health expert, Pesus Bise Chou (Bi-Se Zhou).
Cervical cancer figured prominently in the gendered cancer prevention discourses and measures, 104 and it was also a model cancer for the idea of early detection and early treatment; the campaign popularised the idea that, if it was caught early, it could be 100% cured. 105 The campaign's main tasks were to educate women and to make individual ob/gyn practitioners the campaign's partners. Like the family planning campaign, it identified rural women, perceived to have no knowledge of prevention, as their main target. Public health experts and physicians alike saw rural women's ignorance as an obstacle. 106 The campaign sought to educate women via films, lectures, cancer survivors' experience-sharing and small-group meetings at local farmers' associations. Women were told to accept Pap smears and seek regular follow-up surveillance of their body; the message was to be constantly on guard and undergo gynaecological examination regularly. 107 I will return to the problem of gynaecological examinations later. To convince women to take preventive action, such as accepting an unpleasant Pap smear, the prevention campaign employed scare tactics and emphasized the deadly consequences of undetected and untreated disease. 108 For example, the film, Shengsi zhijian (Between Life and Death), purportedly the first of its kind, was meant to 'flash a warning light' at the intersection of life and death 'under the horrible shadow of cancer'. 109 The film featured well-known actors, such as Chang Feng and Mei Fang, and, in a time when not much entertainment was available, the showing of such films was often considered the highlight of a cancer prevention event, attracting many from afar, especially those living in the remote rural areas. The campaign reported that, after watching the film, women sought free Pap smears at their local ob/gyn clinics and husbands brought their wives to the ob/gyns. As a result, clinics were busier than before. 110 The spread of fear was double-edged. 
111 Women were motivated to action by fear; some would rather 'overreact' than accept ambiguous results, which would mean living with the risk. The campaign created two main groups of women: cooperative patients and non-participants (who refused to accept a Pap smear and follow-up exams). Those who were willing to cooperate 'erroneously took the attitude that they were not far from death, which greatly troubled their minds'. 112 If a test indicated something suspicious, a woman was encouraged to do further tests. As research has pointed out, women's experience of cervical screening may have reinforced a sense of risk, particularly for those who received suspicious results. 113 The leader of the campaign reported a case of 'overreaction': a woman who would rather undergo hysterectomy than live with the risk: '[There was] a patient whose Pap smear result was class III, but the tissue biopsy indicated merely cervix erosion. She was worried and went to three different hospitals to get tissue biopsies. The results were all the same: no cancer. Normally, she should have felt happy.... She complained, 'I'd rather have cancer and cut off my uterus so I can have quick relief'. 114 Ambiguous test results prompted many women to accept hysterectomy as a preventive measure after being encouraged by their doctor to err on the side of caution. In the late 1970s, Ms Huang's mother had a Pap smear with ambiguous results. It read, 'X is suspected'. Her mother was so terrified that she became bedridden for weeks. As she greatly feared surgery, she did not have the courage to do more than worry, but several of her friends who had received similar reports 'bravely' accepted hysterectomy. 115 The campaign also recruited individual ob/gyns into their network of prevention; when the campaign conducted the first two cervical cancer mass screenings (between 1974 and 1978 and between 1979 and 1984), 661 and 569 ob/gyn clinics were involved, respectively.
116 By 1977, the campaign had carried out over 60 000 Pap smears. 117 In addition to sending those who were diagnosed with cervical cancer to ob/gyns who would provide the treatment of hysterectomy, the campaign invited ob/gyns to hold funded free clinics and offered updated cancer knowledge from the United States, such as educational films sent by the American Cancer Society. Ob/gyns and their practice, along with popular literature (mostly written by ob/gyns), such as the aforementioned Xiandai Funu Baojian, became part of the infrastructure for the cultivation of risk awareness. In an article meant to encourage women to receive regular exams, Tao-Sun Wang, MD, who also participated in Pap smear screening in the early 1970s, wrote about hysterectomy as one of the treatments for cervical cancer: 'The uterus, except for menstruation and nurturing the fetus, is not very important. If a woman has had enough children and has reached middle age, the uterus is not very useful for her. Hysterectomy would not damage her femininity'. 118 In trying to convince women to accept hysterectomy, Dr Wang repeated the notion of wuyong and argued against the popular notion that a woman without a uterus was not a true woman. As part of their persuasive tactics, physicians might also bring up the additional benefit of insurance compensation to their women patients. For women of reproductive age, having no uterus (and therefore being infertile) was officially compensated as a form of canfei (handicap, disability), according to several major government insurance programmes, including Labor Insurance, Government Employees' Benefits and Insurance, and Farmer's Health Insurance (1987-1995). Therefore, women under 45 who underwent hysterectomy were qualified to receive handicap (later disability) compensation, regardless of whether or not the hysterectomy was prophylactic, because they had fulfilled their maternal duty.
An additional aspect potentially reinforcing the notion of the useless and pathological uterus was women's concern for modesty. Pap smears and regular check-ups were the two main weapons against cancer. However, both were resisted because the ways in which they were carried out made women uncomfortable. Women's concern for modesty in ob/gyn exams has a long history, noted by Western-trained ob/gyns struggling to establish their practice in the colonial period. 119 This problem continued after the Second World War and into the post-war period. Reports about the embarrassing ob/gyn visit were common. Family planning workers reported conservative women's refusal to let male doctors install the Loop. Sometimes they asked the nurse to talk to the patient behind a curtain while the doctor worked silently, also behind the curtain, obscuring the identity of the Loop installer. 120 Women asked about clinics that had women physicians installing the Loop, and the names of such clinics were listed in the popular magazine, Fengnian. In addition, the exam space itself contributed to women's resistance. In 1977, a group of physicians acknowledged that exam rooms were inadequately set up: the space was poorly regulated, people other than doctors and nurses were allowed to walk in and out of the room freely and women frequently felt embarrassed. 121 Hysterectomy became a way to avoid this kind of risk surveillance; violation of modesty or privacy was obviated by the hysterectomy as yilao yongyi, ending the need for routine gynaecological examinations. When Ms Miao Young was being discharged from the hospital after her hysterectomy (due to myoma), she was given a business card by her gynaecologist, who commented: 'You have biye (graduated) from here [ob/gyn department]', implying that she was done with ob/gyn clinics and could pass the business card to other women who might want to 'graduate' from the ob/gyn department.
Ms Young's story suggests that having a uterus meant a woman would never be free from uncomfortable ob/gyn visits. Similarly, Mrs Hong explained, '[after hysterectomy] one will not have to take off one's pants and expose oneself to others'. Ms Xu mentioned that many of her friends in the Buddhist community, especially the nuns, did not think it was respectable to expose themselves to an ob/gyn regularly, and therefore they underwent hysterectomy. In addition to the cancer prevention campaign, another important place to observe how medical knowledge portrayed women's bodies as a site of cancer risk was the popular health literature for women, a genre that emerged in the late 1960s and early 1970s and included health manuals and newspaper columns. Popular health manuals tended to pathologise the female body, and, in this light, cervical cancer and its early symptoms were featured in most of the health manuals. 122 In the 1950s, the discourse of cancer risk had already penetrated the existing genres: both popular health literature and traditional Chinese medicine. Ob/gyns had been warning women in the news media that baidai ('white strips', whites, leucorrhoea) were a sign of uterine cancer. 123 The popular literature of traditional Chinese medicine manufacturers also appropriated Western medical knowledge on cancer and emphasized the importance of menstrual regulation, offering remedies for menstrual problems and whites. 124 However, at this point, neither advocated hysterectomy as a form of prevention. The 1970s saw a marked increase in the number of news reports, magazine articles and health manuals for women on cervical cancer, which, together with the cancer prevention campaign, contributed to the cultivation of risk.
125 Specifically, women were told to watch out for two warning signs of cervical cancer, bu zhengchang chuxie (abnormal, irregular bleeding) as well as whites, in addition to conditions, such as myoma and cervical erosion, which were also linked to cancer. Abnormal bleeding was a frequent key term in the popular writing, and women were educated to seek a doctor's help upon observing it; signs of bleeding now pointed to the problem of the uterus. Sumama (b. 1933) visited the ob/gyn clinic in her neighbourhood for 'abnormal' bleeding around 1980. Without doing any exams, the doctor simply told her that something bad was growing inside her uterus and a hysterectomy should be done. She decided to get a second opinion, however, and it turned out that the bleeding was a sign of menopause. 126 Another salient example of how a symptom was seen differently once the risk discourse spread was myoma, which has been the number one indication for hysterectomy since the 1970s. 127 The Chinese term for myoma, jiliu, is a term that can be easily confused with cancer, zhongliu, as they share the word liu (tumour). During the 1960s, ob/gyns' recommendations regarding myoma were conditioned by women's marital status and age. In 1967, an article, 'Why conduct hysterectomy', published in the popular health magazine, Dazhong Yixue, listed four conditions that would require hysterectomy: tumour (both malignant and benign), infection, uterine dysfunction and haemorrhage, and uterine prolapse due to endometriosis or tubo-ovarian abscesses. Compared to the case of malignant tumours, in which the removal of the uterus, ovaries and fallopian tubes should be carried out hao-wubao-liu (without reservation, or leaving nothing), in the case of benign tumours, the author writes that it depends on the patient's age; 'If the patient is over 40, has many children, and has had severe anemia', she should have a total hysterectomy.
If the patient is young, has not given birth yet, and the uterus is still healthy, she might consider partial hysterectomy to keep her married (sex) life unaffected. 128 In other words, risk was evaluated differently depending on a woman's marital status, reproductive status and age. However, health experts gradually began to write about the potential for myoma to become something malignant. Obstetrician/gynaecologists in their practice expressed the same concern. Women often heard and worried that 'something bad might be growing inside the uterus'. 129 For example, Mrs Lin (b. 1953), a dressmaker, had tubal ligation in the early 1970s. Almost 20 years later, in 1991, Mrs Lin went to her local ob/gyn for an ultrasound check-up, as many of her customers had recently been diagnosed with cancer. The doctor suggested that the tubes be removed, but he asked permission from her family to do a hysterectomy during the operation because he thought the uterus was not something to keep. 130 Other uterine diseases also contributed to the problematizing of the uterus. One instance was endometriosis, a disease whose numbers increased dramatically after the introduction of laparoscopy technology. Impossible to cure but rarely fatal, it acquired the nickname of 'benign cancer', an oxymoron that still indicated cause for concern. Again, the treatment options were highly dependent on the woman's marital and reproductive status. If she was young and not married, the doctor would recommend marriage, under the assumption that she should have children as soon as possible. If she was married without children, the doctor would treat her (assumed) infertility. If she was married with enough children, a hysterectomy would be recommended. 131 Some physicians went so far as to advocate the notion that having no symptoms of cervical cancer was in itself a symptom.
Citing a Japanese authority, Dr Zhang wrote, 'We should accept Kushima's suggestion that gynecology textbooks should list no symptom as one of the first, early symptoms of cervical cancer to increase women's awareness. Since early cervical cancer is not symptomatic, regular examinations are a necessary prevention measure'. 132 Dr Zhang was not alone in this view; the idea of 'no symptom as the symptom' can also be found in the popular literature. 133 Almost every woman interviewed in this study had heard from her doctor that after giving birth the uterus was 'useless', and if you leave it alone, it might develop into something evil. 134 Their physicians' common refrain was along the lines of: 'You do not know what might be growing inside [the uterus]', which were the specific, ominous words of Ms Zheng's doctor. The uterus's interior was a dark mystery, a source of fear stoked by their physicians' statements. Other interviewees reported hearing or using a related expression: 'The uterus is not something you want to keep' (Lin, Xu, Mrs Wang, Ms Miao Young's doctors and Ms Miao Young). Hysterectomy was considered a logical step in response to cancer fears. Not surprisingly, some women demanded hysterectomies from their physicians out of fear. For example, Ms Miao (b. 1952), who worked at a textile factory, feared getting pregnant again after giving birth to three children. She had had two abortions by the age of 30. If it had been fear of unwanted pregnancy alone, a tubal ligation would have sufficed. But she was also troubled by the volume of her leucorrhoea, which made her worry about cervical cancer. In 1982, she demanded the local gynaecologist remove her uterus, but the doctor was hesitant to do the surgery, saying 'you are too young'. She nevertheless persisted and eventually convinced him to do it. She had heard from her fellow women workers that having too much whites was a sign of cancer.
While she did not see working in a factory as a hardship, she certainly did not want to have any more children or get cancer. Dr Yi-Hung Zhan (Yi-Hong Zhan) writes about a 23-year-old woman who similarly requested hysterectomy to avoid 'houhuan' (future troubles), and another 39-year-old woman who, troubled by leucorrhoea and fearing that it might be cancer, also made the request. He cautioned that, even though the surgery was relatively safe, there were potential side-effects, such as infection and damage to the bladder and urethra. 135 More often than not, newspaper advice columns promoted the risk posed by the uterus and by not having a hysterectomy, and the sources of information were often from the US and, to a lesser degree, Japan, both of which carried some degree of imperial/social authority. The following, attributed to the American Cancer Society and appearing in translation, is just one example, in which every organ in a woman's reproductive system was depicted as potentially lethal. 'If a woman has had total hysterectomy (including uterus and cervix), of course, she will be free from the danger, but if the hysterectomy is not total, she will still have the risk of cervical cancer. Likewise, if the ovaries are not removed, they might become the prime source of disaster'. 136 Another example was also a translation into Chinese from an American source. It is in this context of a heightened sense of cancer risk that the notion of a pathological uterus emerged. 138 The pathologisation of the uterus was a critical element in the transformation of hysterectomy from a treatment to a prophylaxis. It is difficult to get an exact picture of the extent to which the practice of hysterectomy increased, since comprehensive data for medical procedures were not available before the implementation of National Health Insurance in 1995.
However, we may extrapolate from statistics on particular groups of the population, mainly those who were enrolled in Government Employees' Insurance (since 1958), since it paid cash benefits for loss of fertility. 139 According to Statistical Data for Government Employees' Insurance (GEI), the number of claims by women for 'loss of fertility function' steadily increased from 3 in 1960 to 514 in 1990, the cumulative number being 6,233. 140 Moreover, since the mid-1960s, removal of the uterus was at the top of the list of 'disability' cases, and by 1990 it accounted for 71.8% (514 cases) of all cases in the Loss of Fertility category. 141 In contrast, the number for men under the same category remained constant over the years, the cumulative number reaching a mere 65 (Figure 1). 142 Since its implementation in 1950, Labour Insurance has also provided cash benefits if one becomes canfei (handicapped, disabled), including for loss of fertility. 143 There are, however, no data specifically on the loss of fertility in the category of 'Disability Benefit'. Even though the population of public employees was relatively small, its status as a stable middle-class group arguably made it representative of a large portion of the population of a rapidly developing economy. Although dissenting voices within Taiwan's ob/gyn profession were rare, public controversies over hysterectomy and other surgeries resulted in broader critiques of the medical profession. A few popular essays, appearing in translation, warned women not to accept hysterectomy too easily. For example, a translated essay from the American women's magazine McCall's, written by a Dr William A. Kolem, 144 suggested that seeing the uterus as a dispensable organ and cutting it out without much consideration was an extremely arbitrary decision.
One of the few ob/gyns who voiced a moderately different opinion in Taiwan was the prominent ob/gyn Xiang-Da Wu, who was also one of the translators of the feminist classic Our Bodies, Ourselves. Wu was wary of his fellow ob/gyns, who were acting like the surgeons who had made cutting open the stomach a 'trend'; hysterectomy had become 'known by all ages', he lamented. He listed 10 indications for hysterectomy, and he particularly focused on myoma, as it was the most common indication for hysterectomies. He explained that one might consider hysterectomy only if the myoma was growing on the muscle of the uterus and extending into the inside of the uterus, was as big as a 12-week foetus, caused aches, and bled after menopause. 145 In professional writings, Wu's position against unnecessary hysterectomy was more explicit. In a co-authored article on hysterectomy, noting its popularity, Wu and his colleagues concluded that 'it is not reasonable to do hysterectomy in order to sterilise, prevent cancer, or avoid symptoms of menopause'. 146 The fact that hysterectomy, along with other surgeries, had become so common no doubt also raised suspicions of surgical abuse. A 1977 newspaper article about unnecessary surgeries indicated that the stomach, thyroid and uterus were the most frequently removed organs. 147 An ob/gyn, Dr Zu-Miao Zhao, went so far as to publish two books entitled Choulou de Yisheng (The Ugly Doctors), which exposed numerous forms of medical misconduct. 148 Bo Yang (1920-2008), a well-known writer and cultural critic, sarcastically suggested that a Kangzai Weiyuanhui (Resisting Butchery Council) be established in order to curb such a trend. 149 Yet, the notion of the veteran uterus as useless and pathological persisted up to the end of the twentieth century.
Nuquan Hui's (Taiwan Association for the Promotion of Women's Rights) Women's Health Support Service Phone Line reported receiving 1,021 phone calls in the period between October 1998 and July 1999, and found that nearly 90% of the women who had gone to the doctor because of myoma had been encouraged to have a hysterectomy. The reasons given by the doctors were: 'You are not going to have any more children anyway', 'It will save you a lot of trouble in the future', and, 'It's cancer prevention'. Some of the doctors even recommended the removal of the ovaries altogether as a prevention measure. 150

Conclusion

As a justification for preventive measures to combat unwanted situations, the notion of yilao yongyi (one effort, once and for all) was first attached to appendectomy, then to tubal ligation, and, eventually, to hysterectomy in a historical process of making the uterus useless and pathological. Even though limiting births and dreading cancer were not new for post-colonial Taiwan, the response had its roots in a surgical prowess that dated back to the Japanese colonial period and flourished under American-dominated family planning and cancer prevention. The three forces formed a powerful push towards a more rational technological control of reproduction and disease. Yilao yongyi was a history of various competing yet connected birth control methods, as in the case of the Lippes Loop and tubal ligation. Even though the Loop was heavily promoted by the United States, the latter, a surgery that was already mature locally, came to be the yilao yongyi method. Women would be rid of any further work after tubal ligation. Yet, tubal ligation was not the right method if one also wanted to be yilao yongyi with cervical cancer; hysterectomy served the dual purposes of birth control and cancer prevention. The uterus, an organ that can breed lives and grow cancer, became the target to be removed.
Hysterectomy was carried out as a prophylactic strike for women who were thought to be in need of birth control and cancer prevention. The discourse of the wuyong ('useless and pathological') uterus and the various associated practices were a form of bio-power at work, involving women, surgeons, public health nurses and the state. However, in a society where women took on the sole burden of birth control, the different actors did not share the stakes equally. If women wanted to retire from maternal social duty and could find no satisfactory birth control method, the idea that they might be better off without a uterus was appealing. One wonders what would have happened if women had not had to take on the main burden of birth control, if the project of family planning had devoted more attention to male methods, and if the medical professions had been less surgery-oriented. Would the uterus still have been at the centre of family planning or cancer risk discourse? Without the cultivation of a rational reproductive mindset, as seen in the notion of technocratic 'planning', the uterus as useless would have been incomprehensible. Moreover, without the cultivation of a visceral sense of cancer risk, many women would not have gone to local ob/gyn clinics, and the uterus would not have become a prime suspect for cancer (as opposed to being an interconnected part of the body). All of the behaviours and conditions of the uterus, including bleeding, excreting, eroding and developing myoma, were made into components of the pathologisation of the uterus. In the cancer prevention discourse, early detection and treatment necessitated regular surveillance of women's bodies, and women's aversion to gynaecological exams, combined with the fear of cancer, further rendered the uterus a problem.
In Taiwan, the history of reproductive technologies and the history of cervical cancer crossed at the nexus of hysterectomy, a practice that went from being a treatment for cervical cancer and obstetrical emergencies to a birth control method and routine prophylactic measure. The timing of action was moved to a very early point in life (that is, when the woman had had 'enough' children), as early detection came to be deemed inadequate in the face of heightened risk perceptions or unacceptable surveillance measures. 151 The uterus itself was the risk, giving but also taking life. In short, women were advised to be on guard for their lives as soon as they had brought 'enough' lives into the world, and to seek convenience by seemingly rational, techno-scientific means. Thus, the history of the uterus being wuyong is also the history of how yilao yongyi became desirable and hysterectomy became a reasonable solution. Yilao yongyi (one effort, once and for all) gradually adhered to hysterectomy as a convenient and rational response to having a supposedly useless and pathological organ, authorised by biomedical authority, popular health discourses and women's testimonials. This paper not only adds to the current literature on bio-politics in the East Asian context, but, by bringing the history of the useless and pathological uterus into the story, it also answers Warwick Anderson's call to pay more attention to the 'more intimate and private parts of public health'. 152 The uterus was a woman's private body part, but it was also at the nexus of colonial surgical legacy, American dominance and local medical culture. Scholars have questioned the common assumption that biomedicine has been an unproblematic force for women's liberation. 153 Indeed, the construction of the useless and pathological uterus was armed with the rhetoric of progress and liberation; medicine's progress was meant to bring women liberation from unwanted pregnancies and cancer.
Nevertheless, the history of wuyong is a case of the bio-politics of population and disease control shaping anatomo-politics by singling out the uterus as an organ that could be removed to manage risk. The isolated uterus is very different from the view of Chinese medicine, in which the uterus is intimately connected with other internal organs (and its removal would disrupt the flow of qi). It is striking that the ways in which Taiwanese obstetrician-gynaecologists portrayed the uterus were very similar to their American and Japanese counterparts' depictions of a useless, dangerous and troublesome organ. No substantive research is yet available on how such a notion or practice travelled globally. However, from the fact that ob/gyns in the second half of the twentieth century in Taiwan actively promoted the 'useless and pathological uterus' in socio-culturally significant ways while engaging in international medical networks, 154 we can glimpse how medical practice and knowledge circulated globally and, at the same time, developed local variations.
A New Approach to Electricity Market Clearing With Uniform Purchase Price and Curtailable Block Orders

The European market clearing problem is characterized by a set of heterogeneous orders and rules that force the implementation of heuristic and iterative solving methods. In particular, curtailable block orders and the uniform purchase price (UPP) pose serious difficulties. A block is an order that spans over multiple hours, and can be either fully accepted or fully rejected. The UPP prescribes that all consumers pay a common price, i.e., the UPP, in all the zones, while producers receive zonal prices, which can differ from one zone to another. The market clearing problem in the presence of both the UPP and block orders is a major open issue in the European context. The UPP scheme leads to a non-linear optimization problem involving both primal and dual variables, whereas block orders introduce multi-temporal constraints and binary variables into the problem. As a consequence, the market clearing problem in the presence of both blocks and the UPP can be regarded as a non-linear integer programming problem involving both primal and dual variables with complementary and multi-temporal constraints. The aim of this paper is to present a non-iterative and heuristic-free approach for solving the market clearing problem in the presence of both curtailable block orders and the UPP. The solution is exact, with no approximation up to the level of resolution of current market data. By resorting to an equivalent UPP formulation, the proposed approach results in a mixed-integer linear program, which is built starting from a non-linear integer bilevel programming problem. Numerical results using real market data are reported to show the effectiveness of the proposed approach. The model has been implemented in Python, and the code is freely available on a public repository.

A. Sets and Indices

i : Index of market zones, i ∈ Z.
K π ti : Set of consumers paying the UPP π t in zone i ∈ Z, with t ∈ T.
K π t : Set of all consumers paying the UPP, i.e., K π t = ∪ i K π ti , with t ∈ T.
K ζ ti : Set of consumers paying the zonal price ζ ti in zone i ∈ Z, with t ∈ T.
K ζ t : Set of all consumers paying zonal prices, i.e., K ζ t = ∪ i K ζ ti , with t ∈ T.
K ti : Set of all consumers in zone i ∈ Z, i.e., K ti = K π ti ∪ K ζ ti , with t ∈ T.
K t : Set of all consumers, i.e., K t = ∪ i K ti , with t ∈ T.
P ti : Set of producers submitting simple stepwise orders in zone i ∈ Z, with t ∈ T.
P t : Set of all producers submitting simple stepwise orders, i.e., P t = ∪ i P ti , with t ∈ T.
P B i : Set of producers submitting curtailable profile block orders in zone i ∈ Z.
P B : Set of all producers submitting curtailable profile block orders, i.e., P B = ∪ i P B i .
T : Set of the 24 daily hours.
T p : Timespan of block order p, with p ∈ P B and T p ⊆ T.
Z π : Set of zones enforcing the UPP π t .
Z ζ : Set of zones without the UPP, where all consumers pay zonal prices ζ ti .
Z : Set of all zones, Z = Z π ∪ Z ζ .

B. Constants

D max tk : Maximum hourly quantity demanded by consumer k ∈ K t at time t ∈ T.
O m tk : Merit order for consumer k ∈ K π t ; lower values mean higher priority.
P d tk : Hourly order price submitted by consumer k ∈ K t , with t ∈ T.
P s tp : Hourly order price submitted by producer p ∈ P t , with t ∈ T.
P B p : Block order price submitted by producer p ∈ P B .
R min p : Minimum acceptance ratio for curtailable block order p ∈ P B .
S max tp : Maximum hourly quantity offered by producer p ∈ P t at time t ∈ T.
S B,max tp : Maximum hourly quantity offered in the profile block order of producer p ∈ P B , with t ∈ T p .

C. Variables

b tji : Binary variable used in the binary expansion to convert a positive integer number into binary form, where i ∈ Z π , t ∈ T , and j ∈ {0, . . .}.
d ζ tk : Executed demand quantity for consumer k ∈ K ζ t , with t ∈ T.
d w tk : Executed demand quantity for consumer k ∈ K π t if u w tk = 1, with t ∈ T.
d d tk : Executed demand quantity for consumer k ∈ K π t if u d tk = 1, with t ∈ T.
d π tk : Executed demand quantity for consumer k ∈ K π t , where d π tk = u g tk D max tk + d w tk + d d tk , with k ∈ K π t and t ∈ T.
f tij : Flow from zone i to zone j, with t ∈ T.
r p : Block order acceptance ratio, with p ∈ P B .
s tp : Executed supply quantity for producer p ∈ P t , with t ∈ T.
u f tij : Binary variable with i, j ∈ Z π and t ∈ T , where u f tij = 1 if and only if the transmission line from i to j is congested, i.e., f tij = F max tij .
u B p : Binary variable representing the block order acceptance status, with p ∈ P B ; u B p = 1 means accepted, u B p = 0 rejected.
u g tk : Binary variable with k ∈ K π t and t ∈ T , where u g tk = 1 ⇐⇒ P d tk > π t , and zero otherwise.
u e tk : Binary variable with k ∈ K π t and t ∈ T , where if u e tk = 1 then P d tk = π t .
u w tk : Binary variable with k ∈ K π t and t ∈ T , where if u w tk = 1 then the demand order is at-the-money and is partially cleared according to a social welfare approach.
u d tk : Binary variable with k ∈ K π t and t ∈ T , where if u d tk = 1 then the demand order is at-the-money and is partially cleared according to an economic dispatch approach.
δ max tij : Dual variable of constraint f tij ≤ F max tij .
ζ ti : Zonal price in zone i ∈ Z, with t ∈ T.
η tij : Dual variable of constraint f tij + f tji = 0.
κ t : Error tolerance in the uniform purchase price definition, currently κ t ∈ [−1; 5].
π t : Uniform purchase price at time t ∈ T.

D. Auxiliary Variables

y gπ tk : Auxiliary variable; it replaces the product u g tk π t .
y gζ tki : Auxiliary variable; it replaces the product u g tk ζ ti .
y eπ tk : Auxiliary variable; it replaces the product u e tk π t .
y wϕ tk : Auxiliary variable; it replaces the product u w tk ϕ w tk .

Introduction

Electricity markets are experiencing significant changes due to different factors, such as the modification of the generation mix [1], the increasing presence of demand response [2] and energy storage systems [3], the growth of renewable energy [4], the request for both flexibility [5,6] and security of supply [7], and the associated adjustment in power networks [8]. These changes have also affected the European markets. In particular, the current day-ahead European electricity market is the result of a merging process that took place during the last three decades and involved all the main European countries [9], and it should lead to significant social welfare improvements [10]. However, the complete integration involves several difficulties, both in terms of design [11] and of interaction between different markets [12]. In particular, the lack of an original common design leads to a European day-ahead electricity market that is characterized by heterogeneous orders (e.g., stepwise orders, piecewise linear orders, simple and linked block orders [13]) and rules (e.g., minimum income condition [14], uniform purchase price [15]), which cannot be easily harmonized. As a consequence, the European market clearing algorithm [13] deals with a wide variety of issues, due to, for example, the complexity of both the clearing rules and the orders involved, their heterogeneous nature, and the increasing number of orders currently submitted to the market, which forced the implementation of heuristics and iterative solving methods. One of the most challenging problems is the simultaneous presence of block orders and the uniform purchase pricing scheme. Block orders are present in the central and northern European countries [16,17], whereas the uniform purchase price (UPP) is implemented in the Italian market [15] under the name of Prezzo Unico Nazionale (PUN).

The UPP scheme

The UPP scheme requires that all consumers pay a unique price, termed the UPP, in all the zones, while producers receive zonal prices, which can differ from one zone to another.
The UPP π t at time t is defined as the average of the zonal prices ζ ti , weighted by the consumers' cleared quantities d π tk . Formally:

π t = ( ∑ i∈Z π ∑ k∈K π ti ζ ti d π tk ) / ( ∑ i∈Z π ∑ k∈K π ti d π tk ) (1)

where Z π is the set of zones enforcing the UPP, K π ti is the set of consumers paying the UPP in zone i at time t ∈ T , and K π t = ∪ i K π ti . Given the UPP definition (1), it is possible to specify the following UPP clearing rule:
• demand orders with a submitted price P d tk strictly greater than π t , that is, in-the-money (ITM) demand orders, must be fully executed, i.e., d π tk = D max tk ;
• demand orders with a submitted price P d tk exactly equal to π t , that is, at-the-money (ATM) demand orders, may be partially cleared, i.e., 0 ≤ d π tk ≤ D max tk ;
• demand orders with a submitted price P d tk strictly lower than π t , that is, out-of-the-money (OTM) demand orders, must be fully rejected, i.e., d π tk = 0.
In addition, demand orders subject to the UPP scheme are ranked by a parameter termed merit order, which determines a strict total ordering among the UPP orders. The merit order is assigned by the market operator before the day-ahead auction. A lower merit order O m tk implies a higher priority in execution. In particular, this ranking coincides with the price ranking for orders with different submitted prices, i.e., if P d tk > P d tk′ then O m tk < O m tk′ . For orders with the same price, the merit order is assigned according to a set of non-discriminatory rules, such as the time stamp of submission. Traditionally, pumping units belonging to hydroelectric production plants are excluded from the UPP rule. These units buy electricity to refill their reservoirs, usually during the night, whereas they generate energy during the remaining hours. To harmonize the buying and selling prices, demand orders from these units pay zonal prices, and not the UPP. As a consequence, the set K ζ t of consumers paying zonal prices is usually non-empty in the set of UPP zones Z π , because it is populated by the pumping units.
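The weighted-average UPP definition and the ITM/ATM/OTM clearing rule above can be illustrated with a short Python sketch (an illustrative sketch, not the paper's published code; the two-zone data are assumptions):

```python
def uniform_purchase_price(cleared):
    """cleared: list of (zonal_price, cleared_quantity) pairs over all UPP
    consumers in all UPP zones. Returns the demand-weighted average of the
    zonal prices (the UPP definition)."""
    total_qty = sum(q for _, q in cleared)
    if total_qty == 0:
        raise ValueError("no cleared UPP demand")
    return sum(p * q for p, q in cleared) / total_qty

def moneyness(order_price, upp):
    """UPP clearing rule for a demand order: ITM orders must be fully
    executed, ATM orders may be partially cleared, OTM orders are rejected."""
    if order_price > upp:
        return "ITM"
    if order_price == upp:
        return "ATM"
    return "OTM"

# Illustrative example: two zones with zonal prices 40 and 60 Euro/MWh.
cleared = [(40.0, 100.0), (60.0, 300.0)]   # (zeta_ti, d_pi_tk) pairs
upp = uniform_purchase_price(cleared)      # (40*100 + 60*300) / 400 = 55.0
```

With these assumed quantities the UPP is 55 Euro/MWh, so a demand order submitted at 55 Euro/MWh is at-the-money, while one at 60 Euro/MWh is in-the-money and must be fully executed.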
By contrast, the set K π t of consumers paying the UPP is always empty in the zones Z ζ that do not enforce the UPP. Currently, the implementation of the UPP pricing scheme on the Italian market allows an error tolerance κ t ∈ [−1; 5]. Therefore, the Italian PUN is actually defined as:

π t = ( ∑ i∈Z π ∑ k∈K π ti ζ ti d π tk ) / ( ∑ i∈Z π ∑ k∈K π ti d π tk ) + κ t , with κ t ∈ [−1; 5] (2)

Block orders

A block order p submitted by a producer is an order that spans over multiple hours, and can be either fully accepted or fully rejected [18]. Moreover, the block order submitted price P B p must be the same over the whole timespan. The most general form of a single block order is the profile block order, which allows different quantities to be submitted for each hour. Furthermore, an additional feature called minimum acceptance ratio (MAR) has been introduced in the Nordic countries, which allows a block order to be partially executed [17]. In this case, the hourly quantities involved can be partially cleared, and the profile block order is uniformly scaled over the whole timespan, as depicted in Figure 1. The fraction of the quantity executed for each hour is termed the acceptance ratio r p . The acceptance ratio is independent of t, and satisfies the constraint R min p ≤ r p ≤ 1, where R min p is the MAR for the block order p. For this reason, block orders with a MAR are called curtailable. Block orders can be classified according to their degree of moneyness; that is, a block order submitted by a producer is termed:
• in-the-money (ITM), if the submitted price P B p is smaller than the average of the zonal prices ζ ti weighted by the hourly offered quantities S B,max tp , or equivalently, if the block order has a strictly positive surplus, i.e., ∑ t∈T p (ζ ti − P B p ) S B,max tp > 0;
• out-of-the-money (OTM), if the submitted price P B p is greater than the average of the zonal prices ζ ti weighted by the hourly offered quantities S B,max tp , or equivalently, if the block order has a strictly negative surplus, i.e., ∑ t∈T p (ζ ti − P B p ) S B,max tp < 0;
where T p ⊆ T is the timespan of block order p.
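The surplus-based moneyness test and the uniform scaling of a curtailable block order can be sketched in Python (illustrative only; the three-hour profile, prices, and helper names are assumptions, not the paper's code):

```python
def block_surplus(price_pb, quantities, zonal_prices):
    """Surplus of a block order over its timespan T_p. Since the acceptance
    ratio r_p is time-independent, only the maximum hourly quantities
    S_B_max enter the sign test: sum_t (zeta_t - P_B) * S_B_max_t."""
    return sum((z - price_pb) * s for s, z in zip(quantities, zonal_prices))

def classify_block(price_pb, quantities, zonal_prices):
    """ITM blocks should be fully executed (but may still end up as PRBs);
    OTM blocks must be rejected (PABs are not allowed); ATM may be partial."""
    s = block_surplus(price_pb, quantities, zonal_prices)
    if s > 0:
        return "ITM"
    if s < 0:
        return "OTM"
    return "ATM"

def scaled_profile(quantities, r, r_min):
    """Uniform curtailment: every hourly quantity is scaled by the same
    acceptance ratio r, with R_min <= r <= 1."""
    assert r_min <= r <= 1.0
    return [r * q for q in quantities]

# Example: a 3-hour block offered at 50 Euro/MWh, zonal prices 45, 55, 60.
q = [10.0, 20.0, 10.0]
status = classify_block(50.0, q, [45.0, 55.0, 60.0])  # surplus = 150 > 0
```

For the assumed data the surplus is (45-50)*10 + (55-50)*20 + (60-50)*10 = 150, so the block is in-the-money; if it is nevertheless rejected by the clearing, it becomes a PRB.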
We recall that the acceptance ratio r p does not depend on time. For this reason, in the definitions of the block order surplus only S B,max tp is considered instead of r p S B,max tp . ITM block orders should be fully executed. ATM block orders can be partially cleared. OTM block orders must always be rejected. Notice that the indivisible nature of regular block orders may prevent the existence of an optimal market equilibrium, i.e., the clearing problem may be infeasible [19,20]. For this reason, current market rules allow ITM block orders to be rejected [13]. Rejected ITM block orders are termed paradoxically rejected block orders (PRBs). By contrast, OTM block orders that are accepted are termed paradoxically accepted block orders (PABs). PABs are not allowed in the European markets. The reasoning behind this different treatment is straightforward. Under perfect competition, the prices submitted by producers are the marginal costs [21,22]. Therefore, a PAB would cause a monetary loss to the producer, whereas a PRB leads to a missed trading opportunity. Notice that in some US markets [23] and in Turkey [24] it is possible to compensate producers with side payments [25], but this is not allowed in the European markets.

Literature review

In the literature, algorithms to solve the market clearing problem in the presence of block orders are based on different techniques. Reference [26] formulates a mixed-integer linear program (MILP) to clear the market, with an additional iterative process to handle PRBs. Reference [27] proposes a primal-dual formulation of the market clearing problem, where an improved Benders-like decomposition method is further introduced to strengthen the classical Benders cuts, which is extended in [28]. In [19] a clearing method that minimizes the impact of PRBs on the final solution is proposed. Reference [29] introduces a bilevel approach to handle regular block orders in a single-zone market where block order surpluses are explicitly considered.
The official European algorithm for market coupling, termed EUPHEMIA [13], is based on a mixed-integer quadratic programming formulation with additional sequential subproblems and modules. It is partially derived from the COSMOS [30] model, originally employed in the central-western European electricity markets. Both algorithms implement a branch-and-bound method for solving a European social welfare maximization problem, where appropriate cuts are introduced until an optimal solution fulfilling all the market requirements is achieved. Reference [31] reports an interesting scenario analysis, which investigates the effects of different sizes, numbers and types of block orders on the computation time required to solve the market clearing problem. Reference [32] formulates a mixed-integer quadratic program to mimic the complete European day-ahead market, with iterative processes to handle PRBs, complex orders, and the UPP. With respect to the UPP scheme, reference [33] proposes a complementarity approach to solve a clearing problem under mixed pricing rules, which is further extended to reserves in [34], and to block orders in [35]. In the latter case, an iterative process is implemented which involves an initial MILP problem to handle block orders, followed by a mixed-complementarity problem to deal with the UPP. Reference [36] uses a complementarity approach to clear a market with block and complex orders, which is extended to the UPP in [37]; in both cases a heuristic process is used to handle paradoxical orders. Reference [38] proposes a bilevel model to clear a UPP market with simple stepwise orders, where the objective function maximizes the surplus of the consumers. Reference [39] proposes an income-based approach to overcome the non-linearities of the UPP; however, this model cannot be extended to curtailable block orders. Originally, the method to clear the Italian market was based on [40], whereas the current approach implemented by EUPHEMIA is described in [13].
In the first case, the UPP is sequentially selected among each possible price in the aggregate market demand curve until the whole curve is explored. Then, the optimal solution is chosen among the feasible candidates that clear the market and yield the greatest social welfare. Similarly, EUPHEMIA explores the aggregate market demand curve until a feasible solution is found that clears the market while satisfying the UPP definition (2) within the error tolerance κ t . Notice that the UPP scheme differs substantially both from the consumers' payment minimization scheme [41] (where the objective is to minimize the total consumers' payments) and from the clearing rule used in some US markets, such as [42], where the common price paid by consumers is computed by using an ex-post iteration. Finally, reference [43] reports an interesting analysis of the impact of European market coupling on the UPP, where the positive effect of the increased market liquidity is assessed.

Market clearing issues in the presence of block orders and the UPP

Block orders and the UPP rule pose a considerable burden on the European market clearing problem. In particular, block orders introduce at least two kinds of relevant issues. Firstly, the indivisible nature of these orders forces the introduction of binary variables. Secondly, block orders span over multiple trading hours, and impose multi-temporal constraints. On the other hand, under the UPP scheme it is not possible to directly use the traditional social welfare maximization method to clear the market, due to the possible difference between the price paid by consumers (the UPP) and the price received by producers (the zonal price) within the same zone. Moreover, the UPP scheme requires consumers and producers to be cleared simultaneously, because both have price-elastic curves, i.e., the demanded and offered quantities depend on the actual market prices.
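The price-scanning procedure described above (the pre-existing iterative approach, not this paper's MILP) can be sketched as follows; `clear_at` is a hypothetical stub standing in for the market-clearing step, and the tolerance interval follows the κ t ∈ [−1; 5] stated in the text:

```python
def candidate_upps(demand_prices, clear_at, kappa=(-1.0, 5.0)):
    """Scan candidate UPPs taken from the aggregate demand curve and keep
    those satisfying the UPP definition within the tolerance interval.
    demand_prices: submitted prices on the aggregate demand curve.
    clear_at(pi): caller-supplied stub returning the (zonal_price, quantity)
    pairs cleared when the UPP is fixed at pi."""
    feasible = []
    for pi in sorted(set(demand_prices), reverse=True):
        cleared = clear_at(pi)
        total = sum(q for _, q in cleared)
        if total == 0:
            continue
        avg = sum(z * q for z, q in cleared) / total
        # candidate is feasible when pi deviates from the weighted average
        # of zonal prices by no more than the allowed tolerance
        if kappa[0] <= pi - avg <= kappa[1]:
            feasible.append(pi)
    return feasible
```

Among the feasible candidates returned, the iterative approach would then pick the one yielding the greatest social welfare; the sketch deliberately omits that welfare comparison.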
Furthermore, under the marginal pricing framework [44,45], market prices are defined as the dual variables of the power balance constraints. This means that the problem formulation must involve the dual variables. Finally, the UPP definition (1) implies the presence of bilinear terms involving both quantities and prices, i.e., primal and dual variables, which make the problem non-linear and non-convex. As a consequence, the European market clearing problem with both block orders and the UPP can be regarded as a non-linear integer program, involving both primal and dual variables with complementary and multi-temporal constraints.

© 2018. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/

Paper contribution

The problem of finding a computationally tractable and exact social welfare maximization formulation for solving the market clearing problem in the presence of both block orders and the UPP scheme is an important open issue in the European context. The purpose of this paper is to present a non-iterative solution to this problem, which results in a MILP model that can be solved with off-the-shelf solvers. This model is obtained starting from a non-linear integer bilevel problem, which is transformed into an equivalent single-level model by using primal-dual relations and properties. Then, all the non-linearities are removed by using both standard integer algebra and an equivalent reformulation of the UPP definition. We remark that this approach is homogeneous in spite of the different traded instruments and market rules. That is, the proposed framework deals with both block orders and the UPP by using the same comprehensive model under the exact European social welfare maximization objective, with no iterative processes or subproblems.
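The bilinear terms in question are products of a binary variable and a bounded price, e.g. u g tk π t . A standard way to remove such a product (one plausible reading of what auxiliary variables like y gπ tk accomplish; the price bounds below are assumptions) is to introduce an auxiliary variable y together with four linear constraints, sketched here as a plain feasibility check rather than a solver model:

```python
PI_MIN, PI_MAX = 0.0, 3000.0  # assumed lower/upper price bounds (Euro/MWh)

def product_constraints_hold(y, u, pi):
    """True iff (y, u, pi) satisfies the four linear constraints
        PI_MIN*u <= y <= PI_MAX*u
        pi - PI_MAX*(1-u) <= y <= pi - PI_MIN*(1-u)
    which, for binary u, force y == u * pi:
    with u = 1 the second pair pins y to pi; with u = 0 the first pair
    pins y to 0 while the second pair is slack."""
    return (PI_MIN * u <= y <= PI_MAX * u and
            pi - PI_MAX * (1 - u) <= y <= pi - PI_MIN * (1 - u))
```

In a MILP these four inequalities replace each bilinear term with the linear variable y, at the cost of one auxiliary continuous variable per product.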
Furthermore, by construction, market prices are guaranteed to fulfill the marginal pricing scheme [44] as required by the European regulatory framework [46] and coherently with standard market practices [13,14]. The solution is exact, with no approximation at least up to the level of resolution of current market data. Finally, the MILP formulation allows one to prove the optimality of the solution. To summarize, the main novelties presented in this work are:
1. the exact formulation of the market clearing problem in the presence of both curtailable profile block orders and the UPP as a non-linear integer bilevel problem, which is then transformed into an equivalent MILP model;
2. the use of complementary relations and integer methods to linearize the UPP definition;
3. the non-iterative approach maximizing the exact social welfare in the presence of both the UPP and curtailable block orders.
The aim of this work is to show that the UPP scheme (i.e., a non-linear program) and block orders (which involve binary variables), which are currently handled heuristically, can be recast as a single, all-encompassing MILP problem fulfilling all the European regulatory requirements. This flexible approach allows one to gain knowledge of the overall clearing problem. By providing insights into the problem structure, the model can be used by transmission system operators, policy makers and stakeholders to evaluate the physical and economic impacts of both grid expansion plans and modifications to market policies and rules, by carrying out what-if analyses on specific elements reflected in the problem objective and constraints. In this respect, we freely provide the open-source Python code of the proposed model, in order to bridge the gap between modeling and implementation, and to offer a ready-to-use tool to the interested user. Finally, we recall that the MILP formulation allows one to certify the optimality of the obtained solution.
The remaining part of this paper is organized as follows. Section 2 highlights some of the clearing differences between the UPP scheme and a traditional market. Section 3 presents a formulation of the non-linear integer bilevel model, and shows how the final MILP is built. Section 4 illustrates some optional modeling features to detect market splits. Section 5 describes the tests performed, and reports the numerical results. Finally, Section 6 outlines some conclusions. The complete MILP model is reported in Appendix A.

[Figure 2: Market clearing without the UPP rule. The intersection of the demand and supply curves determines both the zonal price and the cleared quantities. The demand order labeled A is an at-the-money order, and it is partially cleared. By contrast, the demand order labeled B is an out-of-the-money order, and must be rejected.]

[Figure 3 caption fragment: The UPP is assumed to be 5 Euro/MWh. The demand order labeled A has a price of 15 Euro/MWh, whereas the demand order labeled B has a price of 5 Euro/MWh. According to the UPP rule, order A is in-the-money and must be fully executed. By contrast, order B is at-the-money and can be partially cleared.]

With respect to [38], curtailable profile block orders are now considered (both in UPP and non-UPP zones), a novel and equivalent UPP formulation is proposed, and the objective function represents the exact social welfare.

Market clearing differences between the UPP scheme and a traditional European market

This section provides a few examples to highlight some of the clearing differences between the UPP scheme and a traditional European market. For ease of reading, this section considers only stepwise orders. In a traditional European market, i.e., a market cleared according to a social welfare approach with no UPP involved, the intersection of the supply and the demand curves determines both the quantity executed and the zonal price, as depicted in Figure 2.
In this case, the demand and supply orders are cleared at the same price, i.e., the zonal price. In particular, a demand order is in-the-money if its price is strictly greater than the zonal price, it is at-the-money if its price is exactly equal to the zonal price, and it is out-of-the-money if its price is strictly lower than the zonal price [21,22]. In Figure 2, the demand order labeled A is intersected by the supply curve, the intersection determines the quantity partially executed, and the price of order A sets the zonal price in the zone. Order A is an at-the-money order. By contrast, the demand order labeled B is out-of-the-money, because its price is strictly lower than the zonal price, and it must be fully rejected. This is not necessarily true for a UPP demand order, because it is cleared at the UPP and not at the zonal price.

[Figure 3: The UPP is assumed to be 5 Euro/MWh. The total demanded quantity includes the quantity $d^d_{tk}$ partially cleared. The zonal price $\zeta_{ti}$, collected by the producers, is determined as the price required to dispatch the demanded quantity.]

Figure 3 shows the same demand and supply curves as in Figure 2. However, in this example the UPP rule is enforced, and the UPP is assumed equal to 5 Euro/MWh. Here, all the demand orders are cleared at the UPP and not at the zonal price. Therefore, the demand order labeled A is in-the-money and must be fully executed, whereas the demand order labeled B is at-the-money and can be partially cleared. Notice that order B can be partially executed regardless of the zonal price in the zone. This is a fundamental difference with respect to a traditional market. In the case depicted in Figure 3, the in-the-money demand orders and the quantity partially cleared of order B must be executed. Therefore, the problem boils down to finding the exact executed quantity and the zonal price, given the UPP. We recall that producers collect zonal prices.
Therefore, the problem to solve is to find the price required by producers to match the demanded quantity. This problem is an economic dispatch of a potentially variable demand (which in turn depends on the UPP). In a dispatch problem, the demand is a constant, and the demand curve is considered as if it were inelastic, i.e., a vertical line, and the intersection with the supply curve determines the price required by producers, as depicted in Figure 4. In this case, the demand includes the quantity partially cleared $d^d_{tk}$. The zonal price $\zeta_{ti}$ is the price required by the producers to match the demanded quantity. An important consequence of the UPP pricing scheme is that it is possible to have, in the same zone, an at-the-money UPP demand order and a supply order with two different prices, both partially cleared, as in Figure 4. This is not possible in a traditional market such as the one depicted in Figure 2, and it is an additional issue of the UPP pricing scheme. In particular cases, the market equilibrium given by the intersection of the demand and supply curves also satisfies the UPP rule, as shown in Figure 5. In this figure, the UPP is assumed to be 15 Euro/MWh. The demand order labeled A has a price of 15 Euro/MWh, it is therefore at-the-money, and can be partially executed. This order is intersected by the supply curve. The quantity partially cleared and the zonal price determined by the intersection, as in a traditional market, also fulfill the UPP rule, because all the in-the-money UPP orders are fully executed, whereas the partially cleared order A is at-the-money. As a consequence, in this particular case, the zonal price, the UPP and the price of demand order A coincide, i.e., $P^d_{tk} = \pi_t = \zeta_{ti}$. Both the cases depicted in Figure 4 and Figure 5 will be considered. The first case allows a UPP demand order to be partially executed regardless of the zonal price, by relying on a dispatch approach.
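To make the dispatch step of Figure 4 concrete, the following sketch (a hypothetical helper of ours, not part of the paper's released code) stacks stepwise supply orders in merit order against an inelastic demand; the price of the marginal accepted order plays the role of the zonal price $\zeta_{ti}$ required by producers:

```python
def dispatch(supply_orders, demand):
    """Clear an inelastic demand against stepwise supply orders.

    supply_orders: list of (price, max_quantity) tuples.
    Returns (zonal_price, accepted), where accepted maps the order index
    to its executed quantity. Assumes total supply covers the demand.
    """
    accepted = {}
    remaining = demand
    zonal_price = None
    # Stack orders from cheapest to most expensive (merit order).
    for idx, (price, qty) in sorted(enumerate(supply_orders),
                                    key=lambda t: t[1][0]):
        if remaining <= 0:
            break
        take = min(qty, remaining)
        accepted[idx] = take
        remaining -= take
        zonal_price = price  # price of the marginal accepted order
    return zonal_price, accepted
```

For instance, with supply orders (10 Euro/MWh, 50 MWh), (20 Euro/MWh, 30 MWh), (30 Euro/MWh, 40 MWh) and a fixed demand of 70 MWh, the first order is fully cleared, the second is partially cleared, and the price required by producers is 20 Euro/MWh.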
The second one is a special case, where we will exploit the elasticity of the demand curve and the traditional social welfare approach to deal efficiently with these at-the-money UPP orders, as shown in Section 3.4.

The Model

This section presents a formalization of the market clearing problem in the presence of both the UPP and curtailable profile block orders as a non-linear integer bilevel model. Then, it shows how the bilevel model can be transformed into an equivalent MILP problem.

Bilevel programming

A bilevel model can be regarded as two nested optimization problems, termed the upper and lower level problems [47]. Formally, a bilevel model is defined as:

$$\min_{u,\,x^*} \; F(u, x^*) \quad \text{subject to} \quad G(u, x^*) \le 0, \qquad (3)$$
$$x^* \in \arg\min_x \; \{\, f(u, x) : g(u, x) \le 0 \,\}, \qquad (4)$$

where F and f are the upper and lower level objective functions, respectively. The main feature of a bilevel program is that the upper level decision variables, labeled u in (3)-(4), enter the lower level as fixed parameters. The variables x* represent the optimal solution of the lower level problem, which depends on the upper level variables u, i.e., x* = x*(u). However, for ease of reading this dependence is usually not expressed formally. Historically, the bilevel approach was used in the field of game theory to describe non-cooperative Stackelberg games [47]. In a Stackelberg game, the upper level problem represents a leader that acts before a follower, which is represented by the lower level problem. However, in power system economics, the bilevel method is typically used to access the dual variables, i.e., the market prices, and not to actually build a game. Therefore, the upper and the lower level objective functions, i.e., F and f, are usually equivalent. The interested reader is referred to [47,48,49] for additional information on bilevel programs and their applications in power system economics.
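To illustrate the nested structure of (3)-(4) (a toy example of ours, unrelated to the market model), the sketch below enumerates a leader's discrete choices and, for each choice, solves a one-dimensional follower problem in closed form before evaluating the leader's objective:

```python
def solve_follower(u):
    # Lower level: min_x f(u, x) = (x - u)**2 over x in [0, 3],
    # solved in closed form (projection of u onto [0, 3]).
    return min(max(u, 0.0), 3.0)

def solve_bilevel(leader_choices):
    # Upper level: max_u F(u, x*) = 2u - x***2 over a discrete set,
    # where x* is the follower's best response to u.
    best = None
    for u in leader_choices:
        x_star = solve_follower(u)
        value = 2 * u - x_star ** 2
        if best is None or value > best[0]:
            best = (value, u, x_star)
    return best
```

For leader choices {0, 1, 2} the follower responds with x* = u, so F takes the values 0, 1 and 0 respectively, and the leader selects u = 1. In the paper's setting the lower level is not solved by enumeration but replaced by its optimality conditions, as described in Section 3.3.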
The non-linear integer bilevel model

In the proposed approach, the upper level problem handles the UPP and verifies the degree of moneyness of block orders, whereas the lower level actually clears the market by using a social welfare maximization approach while properly dispatching the UPP orders, as outlined in Figure 6.

The upper level problem

This section describes the upper level problem, which consists of the objective function (5) subject to the constraints (6)-(17), with $d^\pi_{tk} \ge 0$, $d^d_{tk} \ge 0$, $\kappa_t \in [-1, 5]$, and $\pi_t \in \mathbb{R}$. The term $\epsilon$ is a sufficiently small positive parameter, whereas $M^\pi$ and $M^B_p$ are appropriately large constants; a discussion on the selection of these parameters is given in Appendix A. Notice that the starred variables $d^{\zeta\,*}_{tk}$, $d^{w\,*}_{tk}$, $s^*_{tp}$, $\zeta^*_{ti}$, and $r^*_p$ are the optimal values of the lower level variables, as sketched in (3)-(4). The upper level is a social welfare maximization problem where the first two terms in the objective function (5) represent the demand orders, the third term represents producers submitting simple stepwise orders, and the last term represents producers submitting curtailable profile block orders. Constraint (6) is the UPP definition stated in (2). Constraints (7)-(8) imply that the binary variable $u^g_{tk}$ is equal to one if and only if the submitted price $P^d_{tk}$ is strictly greater than the UPP $\pi_t$. Constraint (9) implies that the binary variable $u^e_{tk}$ can be equal to one only if the submitted price $P^d_{tk}$ is exactly equal to the UPP $\pi_t$. Notice that $u^g_{tk}$ and $u^e_{tk}$ cannot be equal to one at the same time. Constraint (10) enforces the priority due to the merit orders, i.e., it determines the sequential execution of the in-the-money UPP demand orders within the UPP zones, with a significant reduction in the search space of the binary variables. In this constraint, $h$ and $k$ are indices representing all the consumers paying the UPP, that is, $h, k \in K^\pi_t$.
If the order of consumer $h$ has a smaller merit order than the order of consumer $k$ (i.e., $O^m_{th} < O^m_{tk}$), then the order of consumer $h$ must be executed before the order of consumer $k$. Notice that merit orders are inputs, therefore constraint (10) is linear. Equation (11) defines the auxiliary variables $d^\pi_{tk}$, which are used to collect the executed quantities of UPP demand orders into single variables. Constraint (12) verifies the degree of moneyness for block orders, and implies that the binary variable $u^B_p$ can be equal to one only if the block order has a non-negative surplus. That is, if the block order is accepted, i.e., $u^B_p = 1$, then the block order must be either ITM or ATM. By contrast, a block order can be rejected, i.e., $u^B_p = 0$, regardless of the surplus. Therefore, this formulation excludes any PAB, i.e., an OTM block order which is accepted, but it allows PRBs, i.e., ITM block orders which are rejected, consistently with the European market requirements, as described in Section 1.2. Constraint (13) defines the binary variables $u^w_{tk}$ and $u^d_{tk}$. These variables can differ from zero only if $u^e_{tk} = 1$, i.e., if the UPP demand order is at-the-money, which is the requirement for having a UPP order partially executed. The variables $u^w_{tk}$ handle the case of partial execution according to a traditional social welfare approach, as depicted in Figure 5, whereas the variables $u^d_{tk}$ handle the case of partial execution according to an economic dispatch approach, as depicted in Figure 4. Constraint (13) prevents the double execution of the same order. Finally, constraint (14) sets the limit on the maximum dispatchable quantity $d^d_{tk}$. Given the upper level decision variables $u^g_{tk}$, $u^w_{tk}$, $d^d_{tk}$, and $u^B_p$, the market clearing is actually performed by the lower level problem.

The lower level problem

The lower level problem consists of the objective function (18) subject to the constraints (19)-(27), with $d^\zeta_{tk} \ge 0$, $d^w_{tk} \in \mathbb{R}$, $s_{tp} \ge 0$, $r_p \in \mathbb{R}$, and $f_{tij} \in \mathbb{R}$. Dual variables are enclosed in square brackets.
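The acceptance logic that constraint (12) imposes on block orders can be stated compactly as a feasibility predicate. The sketch below is our own logical restatement, not the paper's big-M constraint itself: acceptance requires a non-negative surplus (no PABs), while rejection is always admissible (PRBs are allowed).

```python
def admissible_acceptance(surplus, u_B):
    """Logical form of constraint (12) for a block order.

    surplus: the block order's surplus at the clearing prices.
    u_B: acceptance indicator, 1 if the block order is accepted.
    Returns True iff the (surplus, u_B) pair is admissible:
    an accepted order must be ITM or ATM; rejection is always allowed.
    """
    return u_B == 0 or surplus >= 0
```

Note that an ITM order with u_B = 0 passes the check: paradoxically rejected blocks remain feasible, consistently with the European market requirements described in Section 1.2.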
Given the upper level variables $u^g_{tk}$, $u^w_{tk}$, $d^d_{tk}$, and $u^B_p$, the lower level problem actually clears the market while dispatching the UPP orders according to their degree of moneyness. We recall that the upper level variables enter the lower level as parameters, therefore the lower level problem is a linear program. Notice that the lower level objective function (18) is equivalent to the upper level objective function (5). Indeed, if we substitute (11) into (5), and consider that the terms $P^d_{tk} u^g_{tk} D^{max}_{tk}$ and $P^d_{tk} d^d_{tk}$ are constants in the lower level problem, and that any constant term can be removed from an objective function without altering the optimal solution, we obtain the equivalent lower level objective function (18). Therefore, given the upper level variables, the lower level clears the market according to an exact social welfare maximization problem. Constraints (19)-(22) impose bounds on the demanded and offered quantities. Notice that constraint (20) explicitly sets the lower bound for the demanded quantities $d^w_{tk}$. This formulation will be exploited in Section 3.4. Constraints (23)-(24) impose bounds on the inter-zonal flows. Constraints (25)-(26) set the MAR conditions for block orders by enforcing the relation $R^{min}_p \le r_p \le 1$. The binary variables $u^B_p$ are used to exclude any out-of-the-money block orders, as determined by (12). We recall that the acceptance ratio $r_p$ must be the same during all the hours $t \in T_p$. This means that the day-ahead clearing problem in the presence of block orders cannot be split into independent hourly subproblems. Finally, equation (27) defines the power balance constraint for each zone $i \in Z = Z^\pi \cup Z^\zeta$. The right-hand side of (27) specifies the quantities that must be dispatched. In particular, the terms $u^g_{tk} D^{max}_{tk}$ represent the in-the-money orders that must be fully executed and dispatched according to the UPP clearing rule (see Section 1.1).
By contrast, the terms $d^d_{tk}$ represent the at-the-money UPP orders partially executed and to be dispatched (as in the case depicted in Figure 4), where the quantity $d^d_{tk}$ is determined by the upper level. Furthermore, notice the presence of $d^w_{tk}$ on the left-hand side of (27). The variable $d^w_{tk}$ is a lower level decision variable, which determines the quantity partially cleared for an at-the-money UPP order by using a social welfare approach, as in the special cases depicted in Figure 5. Constraint (13) prevents the double clearing of the same order. Notice further that the set $K^\pi_{ti}$, i.e., the consumers paying the UPP, is empty in the zones not enforcing the UPP, that is, $K^\pi_{ti} = \emptyset$ for all $i \in Z^\zeta$. The starred variables $d^{\zeta\,*}_{tk}$, $d^{w\,*}_{tk}$, $s^*_{tp}$, $r^*_p$, $f^*_{tij}$, and $\zeta^*_{ti}$ represent the optimal values of the lower level variables. The zonal prices $\zeta_{ti}$ are defined as the dual variables of the power balance constraints (27), as required by the marginal pricing framework [44,45]. The zonal prices are used within the upper level problem to compute the UPP in equation (6) and to verify the degree of moneyness of block orders in constraint (12). In the following section, the bilevel program is reduced to a single level optimization problem.

The equivalent single level problem

In order to access the dual variables $\zeta_{ti}$, i.e., the zonal prices, the bilevel model formalized in Section 3.2 is reformulated as an equivalent single level optimization problem. The objective function (28) of the single level problem is the same as the objective function of the upper level (5). The single level problem is a unique optimization program, and there is no distinction between upper and lower parts. Hence, all the decision variables of both problems are present in (28). Furthermore, we recall that the lower level problem is a linear program, because all the upper level variables enter the lower level as parameters.
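Before introducing the KKT-based reformulation, it is worth recalling the strong duality property of linear programs on which it rests. The toy example below is our own illustration, a hand-solved one-variable LP unrelated to the market model: the primal and dual optimal objective values coincide.

```python
def primal_optimum(c, a, b):
    # Primal: min c*x  s.t.  a*x >= b,  x >= 0  (with c, a, b > 0).
    # The constraint is tight at the optimum, so x* = b / a.
    x_star = b / a
    return c * x_star

def dual_optimum(c, a, b):
    # Dual: max b*y  s.t.  a*y <= c,  y >= 0.
    # The dual variable is pushed to its upper bound, so y* = c / a.
    y_star = c / a
    return b * y_star
```

For c = 3, a = 2, b = 6 the primal optimum is 3 * 3 = 9 and the dual optimum is 6 * 1.5 = 9, matching as strong duality requires. This equality is exactly what condition (29) imposes on the lower level problem.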
As a linear program, the lower level is equivalent to its necessary and sufficient Karush-Kuhn-Tucker (KKT) conditions. Moreover, in a linear program, the KKT complementary slackness is equivalent to the strong duality property [48,50,51]. As a consequence, the lower level problem can be introduced into the single level problem by adding the following constraints to the single level:

$$\varphi^w_{tk} - \varphi^{w,lo}_{tk} + \zeta_{ti} = P^d_{tk} \qquad \forall t \in T,\ \forall i \in Z,\ \forall k \in K^\pi_{ti} \qquad (30)$$
$$\varphi^\zeta_{tk} + \zeta_{ti} \ge P^d_{tk} \qquad \forall t \in T,\ \forall i \in Z,\ \forall k \in K^\zeta_{ti} \qquad (31)$$
$$\varphi^s_{tp} - \zeta_{ti} \ge -P^s_{tp} \qquad \forall t \in T,\ \forall i \in Z,\ \forall p \in P_{ti} \qquad (32)$$

where (29) is the strong duality property, which requires the equivalence between the objective function values of the primal problem and the dual problem. Conditions (30)-(34) are the constraints of the dual problem, i.e., the dual feasibility, whereas (35) refers to the original constraints of the lower level problem, i.e., the primal feasibility. To summarize, the single level optimization problem, equivalent to the bilevel model presented in Section 3.2, is composed of the following three parts:

1. the objective function (28);
2. the constraints of the upper level (6)-(17);
3. the conditions representing the lower level problem (29)-(35).

The final MILP model

The single level optimization problem presented in Section 3.3 is a non-linear integer program. To obtain the final equivalent MILP model, all the non-linearities must be removed. There are three kinds of non-linearities in the single level problem:

1. the products of a binary variable and a continuous bounded variable, such as $u^e_{tk} \pi_t$ in (9);
2. the product $\pi_t d^\pi_{tk}$ in the UPP definition (6);
3. the product $\zeta_{ti} \sum_{k \in K^\pi_{ti}} d^d_{tk}$ in the strong duality (29), and in (6) due to (11).

The non-linearities due to the product of a binary and a continuous bounded variable can be removed by using standard integer algebra.
As an example, the product $ux$ of a binary variable $u$ and a continuous variable $x$ with bounds $\pm M$ can be replaced by an auxiliary continuous bounded variable $y$ defined as:

$$-M u \le y \le M u \qquad (36)$$
$$x - M(1 - u) \le y \le x + M(1 - u) \qquad (37)$$

Appendix A reports all the auxiliary variables actually used, with a discussion on the selection of the big-M values. To handle the UPP definition (6), we propose a novel and equivalent formulation. Firstly, by using (11), the UPP definition (6) can be written as (38). In (38), the terms $\pi_t u^g_{tk} D^{max}_{tk}$ and $\zeta_{ti} u^g_{tk} D^{max}_{tk}$ involve the products of a binary variable and a continuous variable, and can be handled as shown in (36)-(37). Furthermore, due to (9) and (13), the terms $d^w_{tk}$ and $d^d_{tk}$ refer to at-the-money UPP orders, where $\pi_t = P^d_{tk}$ by definition. As a consequence, the terms $\pi_t d^w_{tk}$ and $\pi_t d^d_{tk}$ in (38) can be replaced by $P^d_{tk} d^w_{tk}$ and $P^d_{tk} d^d_{tk}$, respectively. The term $\zeta_{ti} d^w_{tk}$ in (38) is handled as follows. Firstly, by using (30) the zonal price is recast as (39); therefore, $\zeta_{ti} d^w_{tk}$ becomes (40). Then, we recall that to obtain the single level problem in Section 3.3, the lower level has been recast by using a set of equivalent necessary and sufficient conditions, and to avoid the KKT complementary slackness, the strong duality property has been used. This is desirable because the KKT complementary slackness would introduce further non-linearities. However, strong duality guarantees that all the KKT complementary slackness conditions hold [52,53]. Therefore, we can use any subset of them for our purpose. In particular, the complementary slackness conditions associated with constraints (19)-(20) are given by (41)-(42). Hence, by using (40)-(42) the relation (43) can be obtained, which involves only the product of a binary and a continuous variable, and can be handled as shown in (36)-(37). Finally, the only remaining non-linearity to handle is the term $\zeta_{ti} \sum_{k \in K^\pi_{ti}} d^d_{tk}$, which is present not only in (38) but also in (29).
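The replacement (36)-(37) can be checked directly: for any fixed $u \in \{0, 1\}$ and bounded $x$, the two constraint pairs collapse the feasible interval for $y$ to the single point $u \cdot x$. The sketch below is our own illustration of this check:

```python
def feasible_y(u, x, M):
    """Feasible interval for y under the big-M constraints that
    replace the product u*x, with |x| <= M:
        -M*u <= y <= M*u
        x - M*(1-u) <= y <= x + M*(1-u)
    Returns the pair (lo, hi) of the interval bounds.
    """
    lo = max(-M * u, x - M * (1 - u))
    hi = min(M * u, x + M * (1 - u))
    return (lo, hi)
```

For u = 1 the interval collapses to [x, x]; for u = 0 it collapses to [0, 0]. In both cases the only feasible value is y = u * x, so replacing the bilinear product by y preserves the model exactly.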
To deal with this term we use a binary expansion. The basic idea of a binary expansion is to convert an integer number into binary form by using binary variables [54,55,56]. In particular, the binary expansion is utilized to convert the quantity $\sum_{k \in K^\pi_{ti}} d^d_{tk}$ into binary form, as in (44): the right-hand side scales the quantity by the resolution parameter $c$ in order to obtain an integer number, and the left-hand side actually performs the conversion into binary form. Given the value of $c$, which depends on the market specifications, the discretization performed in (44) is exact. As a consequence, the relation (45) holds. Therefore, substituting the terms described above, and simplifying the common terms, the UPP definition (38) can be equivalently recast as (46), which involves only products of a binary and a continuous variable that can be handled as shown in (36)-(37). The definition (46) is exact, with no approximation, provided that the parameter $c$ introduced in (44) is selected properly. Starting from the single level model described in Section 3.3, by using (46) in place of (6), substituting (45) into (29), and removing all the non-linearities due to the products of a binary and a continuous variable as outlined in (36)-(37), we obtain the final MILP model reported in Appendix A. The MILP model solves the market clearing problem in the presence of both the UPP and curtailable profile block orders by using an exact social welfare maximization approach without any heuristic or iterative methods.

Implementation details

Under the UPP pricing scheme, all the UPP orders have a merit order, i.e., a parameter that determines a strict total ordering among the orders, as described in Section 1.1. All the UPP orders must be executed sequentially, according to the priority established by the merit order. However, the current implementation of the UPP pricing scheme in the European market strictly enforces the merit order only for the in-the-money UPP orders, see (10).
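The mechanism behind the binary expansion (44) can be sketched as follows. This is a self-contained illustration of ours (the function and variable names are not the paper's): a quantity q is scaled by the resolution c, rounded to an integer, and written as a sum of powers of two weighted by binary variables, so that the encoding is exact whenever q is a multiple of c.

```python
def binary_expansion(q, c, n_bits):
    """Encode q as c * sum_j 2**j * b_j with b_j in {0, 1}.

    Exact whenever q is an integer multiple of the resolution c
    and q / c fits in n_bits bits.
    """
    n = round(q / c)
    assert 0 <= n < 2 ** n_bits, "quantity out of range for n_bits"
    return [(n >> j) & 1 for j in range(n_bits)]

def reconstruct(bits, c):
    # Inverse map: recover the quantity from its binary variables.
    return c * sum(b << j for j, b in enumerate(bits))
```

With a resolution of, say, c = 0.001 MWh, a quantity such as 12.345 MWh is encoded and recovered without approximation, which mirrors the paper's claim that (44) is exact once c matches the market's data resolution.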
By contrast, the merit order for the ATM UPP orders is enforced only as long as there is enough transmission capacity between the zones involved, i.e., if there is no market split. For this reason, the merit order for ATM orders is currently enforced ex-post, given the market solution. In this section, we propose a set of constraints to detect whether the connecting lines are congested or not. In particular, we introduce a set of conditions to detect market splits, and to enforce the merit order for the ATM UPP orders directly within the optimization problem. These constraints are upper level constraints. However, they are not strictly required by the current European market rules. For this reason, they are described in this section and not in Section 3.2. Notice that these constraints allow one to fix a significant part of the ATM quantities actually executed, with a significant reduction of the search space of the binary expansion (44). Specifically, for all $t \in T$, $i \in Z^\pi$, and $h, k \in K^\pi_{ti}$ such that $O^m_{th} < O^m_{tk}$ and $P^d_{th} = P^d_{tk}$, the constraint (47) is added. Furthermore, for all $t \in T$, $i, j \in Z^\pi$, $h \in K^\pi_{ti}$, and $k \in K^\pi_{tj}$, with $i \ne j$, such that $O^m_{th} < O^m_{tk}$, $P^d_{th} = P^d_{tk}$, and $F^{max}_{tij} > 0$, the constraints (48)-(51) are enforced. Notice that merit orders, prices and maximum flow capacities are inputs, therefore the above constraints are linear. The term $\epsilon^f$ is a sufficiently small parameter, whereas $M^F_{tij}$ and $M^D_{th}$ are appropriately large constants, defined in Appendix A. Constraint (47) enforces the merit order for ATM orders within the same zone. Furthermore, from (23) and (48)-(50), $u^f_{tij} = 1$ if and only if $f_{tij} = F^{max}_{tij}$, i.e., the line is congested. Constraint (51) enforces the merit order for ATM orders in zones directly connected. If the line is not saturated, then $u^f_{tij} = 0$ and the constraint is enforced; otherwise there is a market split and the constraint is deactivated.
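The logical effect of the congestion indicator implied by (48)-(50) can be summarized in a few lines. The sketch below is our own restatement of that logic, not the paper's big-M constraints themselves; the tolerance argument is a hypothetical stand-in for the parameter $\epsilon^f$:

```python
def congested(flow, f_max, tol=1e-6):
    """Congestion indicator mirroring the effect of (48)-(50):
    u_f = 1 iff the flow sits at the line's capacity, up to a small
    numerical tolerance playing the role of epsilon_f.
    """
    return 1 if flow >= f_max - tol else 0
```

When the indicator is 0 the line is not saturated and the cross-zonal merit-order constraint (51) stays active; when it is 1 there is a market split and the constraint is deactivated.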
Constraint (51) can be further generalized to zones connected through a path involving multiple lines. In this case, to enforce the constraint, all the variables $u^f_{tij}$ must be zero, i.e., the lines connecting zones $i$ and $j$ must not be saturated; otherwise there is a market split along the path, and the merit order must not be enforced. As a remark, notice that a loose formulation of $u^f_{tij}$ can be implemented by using a relaxed pair of inequalities in place of (48)-(49). In this case, if $f_{tij} < F^{max}_{tij}$ then $u^f_{tij} = 0$, but the converse is not true. This approach appears to be preferable when dealing with solvers or modeling languages that do not allow one to prioritize the binary variables. Indeed, the merit order for ATM orders can be regarded as a secondary requirement, and it should be enforced by the solver only at the end of the branch-and-bound process. This can be performed efficiently by setting the lowest priority on the binary variables $u^f_{tij}$.

Numerical Results

This section describes the numerical results obtained by testing the MILP model introduced in Section 3.4 and reported in Appendix A.

Experiment setup

The data used to test the proposed MILP model was downloaded from the website of the Italian market operator [15]. It refers to the day-ahead market, and covers 31 days, ranging from January 1st to January 31st, 2018. Each day involves on average 20,307 demand orders and 37,810 offer orders. These orders are distributed over 22 zones. Six zones are the Italian physical zones, which enforce the UPP scheme, whereas the remaining zones do not enforce the UPP. We recall that the Italian UPP is termed PUN. Artificial orders were randomly added to test curtailable profile block orders. Specifically, for a block order $p$, the maximum hourly quantity offered $S^{B,max}_{tp}$ was sampled from a uniform distribution ranging from 1 MWh to 75 MWh, whereas the block order price $P$ … [59], and the documentation of the main functions is available at [60].
The documentation also describes how to obtain the Italian data from [15]. Orders without a price limit, listed in the Italian market with a price $P^d_{tk} = 3000$ Euro/MWh, are assumed to be fully executed. The value of the parameter $\epsilon$ in (8) can be set arbitrarily small. Since the Italian market limits the resolution of the PUN to six digits, any value not greater than $10^{-6}$ is acceptable. The tuning of the big-M parameters in (7)-(8) and (12) is discussed in Appendix A.

Test with real Italian data

The first test takes into account only the real data of the Italian day-ahead market, which ranges from January 1st to January 31st, 2018, and includes 1,801,652 market orders. The purpose is to verify the effectiveness of the proposed approach in solving the UPP clearing scheme. The test involves 744 hourly PUN problems. On average, each instance of the problem contains 985 binary variables, and is solved to optimality in 8.74 seconds. In the performed tests, all the cleared quantities match the real quantities executed on the Italian market. The binary expansion (44) is actually utilized only in 9 cases out of 744 (1.21% of the cases). That is, in these cases the solution contains at least one order with $u^d_{tk} = 1$. Figure 7 reports, for each hourly PUN problem, the number of binary variables involved and the time to reach the optimal solution. The largest spikes in the computation time correspond to the instances where the binary expansion actually takes place. The maximum time is 74.60 seconds, which corresponds to the 20th hour of January 24th.

Test with 50 curtailable profile block orders over 12 hours

The second test involves the data of the Italian market operator referring to January 1st, with the addition of 50 curtailable profile block orders randomly generated as described in Section 5.1. Each block order spans from the 9th to the 20th hour.
To test the effectiveness of the proposed MILP model, the block orders are evenly distributed between a PUN zone (Sicily) and a non-PUN zone (Switzerland). The presence of block orders requires solving a single MILP problem spanning the whole considered day. The clearing problem involves 19,246 PUN demand orders, 414 non-UPP demand orders, 34,927 supply orders, and 50 curtailable profile block orders, and is solved in 848.79 seconds. Table 1 reports the real Italian PUN (second column) and the PUN obtained from the proposed model (third column). The shaded rows correspond to the hours where the block orders have been added. As can be observed, the difference between the real and the modeled PUN is zero up to the fourth decimal place in the hours where the block orders are not present. This discrepancy is due to the tolerance parameter $\kappa_t$ in (6), which can lead to slight differences in the PUN despite the same matched quantities. By contrast, from the 9th to the 20th hour, the presence of block orders leads to a decrease in the PUN, which is caused by the additional quantity supplied by the block orders. Table 2 reports the surplus of each block order and the acceptance ratio $r_p$. All the block orders with a positive surplus, i.e., the ITM block orders, are fully cleared ($r_p = 1$). The block orders with a negative surplus, i.e., the OTM block orders, are correctly rejected ($r_p = 0$). Furthermore, notice that block order 15 in the PUN zone has zero surplus (see the corresponding row in the left part of Table 2). That is, it is an ATM block order, and it is partially cleared with $r_p = 0.86$.

Conclusions

The coupling of all the European electricity markets is an ongoing process which faces several difficulties. In particular, the presence of heterogeneous orders and rules, such as block orders and the uniform purchase price, raises several issues.
The proposed mixed-integer linear program allows one to solve the market clearing problem in the presence of both curtailable profile block orders and the uniform purchase price scheme. In particular, it harmonizes within a unique optimization program two classes of heterogeneous orders and rules. An exact social welfare maximization problem is formulated and solved, as required by European guidelines. The proposed approach is non-iterative, heuristic-free, and the solution is exact, with no approximation up to the level of resolution of current market data. In addition, the solution is obtained coherently with the marginal pricing framework. Finally, the mixed-integer linear program formulation allows one to prove the optimality of the obtained solution. Ongoing work aims at introducing linked block orders, piecewise linear orders, and complex orders (such as the Iberian minimum income condition) in the proposed framework. It is expected that the income condition could be modeled in a similar way as the surplus of the block order. By contrast, piecewise linear orders will pose additional issues, due to the presence of quadratic terms in the objective function.

… with $p \in P^B$. In (8) the value of $\epsilon$ is $10^{-8}$, whereas in (48) the value of $\epsilon^f$ is $10^{-6}$. In the Italian market, the maximum price $M^\pi$, used in (7)-(8) and (A.9)-(A.18), is 3000 Euro/MWh. Moreover, considering that block orders span multiple hours, the value of $M^B_p$ in (12) … In (51) the parameter $M^D_{th}$ is defined as $M^D_{th} = D^{max}_{th}$. Furthermore, in order to significantly reduce the search space of the upper level binary variables, the following constraint can be implemented:

$$u^e_{tk} \le u^g_{th} - u^g_{tk} \qquad \forall t \in T,\ \forall h, k \in K^\pi_t, \qquad (A.23)$$

such that $P^d_{th} > P^d_{tk}$.
Fisher4Cast Users' Manual

This is the Users' Manual for the Fisher Matrix software Fisher4Cast and covers installation, GUI help, command line basics, code flow and data structure, as well as cosmological applications and extensions. Finally, we discuss the extensive tests performed on the software.

Introduction

The Fisher Matrix translates errors on observable quantities measured in a survey into constraints on parameters of interest in the underlying model. As such, it is the elegant way of extending propagation of errors to the case of multiple measurements and many parameters [2]. In contemporary cosmology, Fisher matrices are used to forecast parameter constraints from a proposed survey, and can be used to optimise future surveys (see [1] for a detailed discussion of the Fisher Matrix formalism). Fisher4Cast was developed with the aim of providing the community with a free, standard and tested tool for Fisher Matrix analysis, that is both easy to use through the Graphical User Interface, and yet also a robust general base-code for research. The underlying modular code of Fisher4Cast is completely general and is not specific to cosmology, although the default setup for the GUI is intended for cosmology. It provides parameter error forecasts for cosmological surveys providing distance, Hubble expansion and growth measurements in a general, curved, FLRW background. The input and output appear in tables, matrices and figures. This means that both input and output data and results are easily portable from Fisher4Cast into a research publication. The simple start-up procedure and ease of use of the Fisher4Cast suite make it well-suited to both teaching and research. The input to the Fisher4Cast code can easily be changed and adapted, hence it can be run in large loops to explore parameter spaces, and for visualisation of the Fisher Matrix. We now describe the Fisher Matrix framework, outline the start-up procedure of Fisher4Cast, and describe the various functions and routines.
A shortened version of the start-up procedure of Fisher4Cast is found in the Quickstart.pdf guide, which is included both as an appendix in [1] and in the bundle of Fisher4Cast software.

Getting Started

Currently the code is available for download at one of the following websites [4,5]. Save this '.zip' file into the directory you want to run the Fisher4Cast suite from.

The Graphical User Interface

The GUI can be started from the Matlab editor. The file FM_GUI.m must be opened from the directory (click on the file icon from within the command-line interface to open it with an editor); once the file is open, press 'F5' to run the code. This will open the GUI screen. You can also launch the GUI from the command line by typing:

>>FM_GUI

The output data will not be saved into the workspace, but the 'Saving Features' button allows one to save the input and output from any particular run as text or LaTeX code.

The Basic Layout Explained

We describe the basic layout of the GUI and illustrate the various actions with screenshots taken of a working GUI. The GUI has three main sections. The section on the top left controls the input to the GUI. The bottom left panel controls the observables one might like to use in the analysis and the parameters you are interested in plotting. In Figure (1.1) we show the initial GUI screen, highlighting the observable about to be used (here the growth function G(z)) and the cosmological parameters relevant to the analysis (the w_0 and w_a coefficients in the Chevallier-Polarski-Linder parameterisation of dark energy [12,13], see Eq. (1.3)). The specific cosmological example is described in detail in [1], which contains the set of analytical derivatives used in Fisher4Cast. The right-hand side of the GUI controls the plotting commands for the ellipse. The various actions used to control the output are described below.
Changing the Input Structure

In order to compute Fisher ellipses for different input structures, one can either choose from a drop-down list of default example structures contained within the distribution (as shown in Figure (1.2)) or generate a unique input structure. This file, which must be given as a '.m' file, can then be loaded into the GUI. You can also simply edit the input parameters in the GUI after the default input has been loaded, or alternatively edit the input file (e.g. Cooray_et_al_2004.m).

Floating Help

Floating help is provided with the Fisher4Cast GUI for most commands. The floating help is activated by moving the mouse pointer over the button or parameter on the GUI and leaving it there for a few seconds. This generates a screen prompt, which pops up and gives information about the function of the button or parameter in question. Figure (1.3) shows this help prompt for the 'Run' button on the GUI.

Running Fisher4Cast

Once satisfied with the observables considered and the parameters of interest, pressing the 'Run' button will execute the code. A box will pop up stating that the code is running, and an error ellipse will appear when the code has finished running. This is shown in Figure (1.4).

Errors in the Input

When the 'Run' button is pushed, the GUI first calls the FM_errorchecker.m function with the input supplied. This checks for the input files, checks that the data vector (e.g. redshifts at which one has measurements of the Hubble parameter) and error vectors (e.g. the fractional errors on the Hubble parameter, σ_H/H, at the redshifts above) are the same length, and performs other consistency checks. Should any of these tests fail, an error box will appear explaining which errors to fix before calling the GUI again. A log file of these errors, called 'log.mat', is created in the same directory the GUI is being run in. Loading and reading this log file is described in Section 1.2.3.
Figure (1.5) shows the error dialogue box indicating that a single error has been found.

The Fisher Ellipse

Once the code is running smoothly, the resulting Fisher error ellipse is plotted. This is shown in Figure (1.6).

Plotting more than one Ellipse

Should one want to superimpose more than one ellipse, click the 'Hold on' button. This works both for the line and the area (although the same line and area fill properties will be used for both ellipses; see the item below for a discussion of changing the colour of the area fill). Figure (1.7) shows the resulting ellipses for two observables, G(z) and d_A(z).

Area fill

Clicking on this button yields a filled error ellipse. Once it is clicked, a colour must be selected from the menu in the pop-up box. Note that should more than one error ellipse be plotted later, this area fill box must be ticked and un-ticked again to change the colour, otherwise the same colour will be used for all filled ellipses. This box is shown in Figure (1.8).

Importing Data

The input data can also be imported from a file, either as the redshift vector (the data), the error vector or the matrix of prior information on the cosmological parameters. This can be done by clicking the relevant 'Browse' buttons on the GUI. This brings up a screen in which one can either load the data from file or from the clipboard, in which case the data is cut and pasted into the GUI fields. Figure (1.9) shows the screens for the loading of data from a file in the directory. In addition there is a check-box which specifies whether or not to use the prior matrix.

Multiple σ

It is possible to plot the ellipses for multiple confidence levels (i.e. 68%, 95% and 99.7%, specified by 1-, 2- and 3-σ respectively). This is done via a drop-down menu on the right-hand side of the GUI, and is illustrated in Figure (1.10). It is worth noting that 'Hold on' was used to generate this plot.
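Numerically, the 1-, 2- and 3-σ contours for two jointly estimated Gaussian parameters correspond to Δχ² values of 2.30, 6.17 and 11.8, and the semi-axes of the n-σ ellipse are √(Δχ²·λ) with λ the eigenvalues of the 2x2 parameter covariance matrix. A pure-Python sketch of this relationship (the toolkit itself is Matlab; the covariance values are illustrative only):

```python
import math

# For two jointly estimated parameters, the 1-, 2- and 3-sigma contours of a
# Gaussian likelihood correspond to delta-chi-squared of 2.30, 6.17 and 11.8.
DELTA_CHI2 = {1: 2.30, 2: 6.17, 3: 11.8}

def ellipse_semi_axes(C, nsigma):
    """Semi-axes of the n-sigma ellipse for a symmetric 2x2 covariance C."""
    tr  = C[0][0] + C[1][1]
    det = C[0][0]*C[1][1] - C[0][1]*C[1][0]
    disc = math.sqrt(tr*tr - 4.0*det)
    lam1, lam2 = (tr + disc)/2.0, (tr - disc)/2.0   # eigenvalues of C
    d = DELTA_CHI2[nsigma]
    return math.sqrt(d*lam1), math.sqrt(d*lam2)

C = [[0.04, 0.0], [0.0, 0.01]]      # uncorrelated toy covariance
a1, b1 = ellipse_semi_axes(C, 1)    # 1-sigma semi-axes
a2, b2 = ellipse_semi_axes(C, 2)    # 2-sigma semi-axes (larger)
```

This is why the 'Hold on' button is needed for the multiple-σ plot: each confidence level is just the same ellipse scaled by a different √Δχ² factor, drawn on top of the previous one.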
Controlling Output

The buttons on the right-hand side of the GUI all control the output specifications of the ellipse, such as the limits of the x and y axes, the line style and colour of the ellipse, and whether or not to have a grid on (over or under) the data. This is designed for maximum flexibility in representing the ellipses in a unique and distinguishable way. The axis labels can also be modified using the 'Edit Axis' button.

Saving the Plot

Skins

The GUI is available in a variety of skins and backgrounds. These can be chosen from a drop-down list (consisting of both colour schemes and background images [6,7,8]); additional background images can be loaded by the user. This is shown in Figure (1.13).

Fisher4Cast Menu

A Fisher4Cast menu is defined in the top left-hand corner of the Fisher4Cast GUI. From this drop-down menu one can access the Readme file of the code suite, the Users' Manual and Quickstart Guide for easy reference, and the version history of the code. The BSD licence [3] for the Fisher4Cast suite is also available from the drop-down list. This list is illustrated in Figure (1.15). In addition to the Fisher4Cast menu, there is a menu with information on the extensions available for Fisher4Cast. In future releases of the code this menu will select the extensions themselves; at present it provides the Readme for the modules.

Interactive Plotting

Interactive 'point-and-click' plotting is available in Fisher4Cast Version 2.0, activated by selecting the 'Activate Interactive Plotting' option from the 'Fisher4Cast Extension' menu. Once selected, the user interactively sets the values for the parameters being plotted by clicking on the plotting area of the GUI. The arrow is activated by the first click on the plot area; the next click will run the code and produce the appropriate ellipse. The values selected will be displayed in the parameter input section of the GUI, the same as if they had been entered manually.
Care should be taken not to step to very unphysical values of the parameters, e.g. very positive values of w_0.

Running the Code

Open your version of Matlab and change the working directory to the one in which you saved Fisher4Cast. To run the code from the command line with one of the standard test input structures supplied, type:

>>output = FM_run(Cooray_et_al_2004)

This will call the code using the pre-supplied test input data (Cooray_et_al_2004) and then generate an error-ellipse plot for the parameters and observables supplied in the chosen input. All the relevant generated output is written to the output structure. You can see the range of outputs to access by typing:

>>output

and then examine each output individually by specifying it exactly. For example:

>>output.marginalised_matrix

will access the marginalised-matrix field in the output structure. It is worth noting that each '.' denotes another sub-level in the structure. Example input files are supplied as a template for generating new input files with your own customised parameters and values. All fields specified in the example inputs must also be specified in any user-defined input. These are outlined in Section 1.4.2. The code can also be run from the Matlab editor. Once the code is opened (open it from inside the Matlab window), pressing 'F5' will run the code. Note that if the code is run from the editor it will use the default input structure, which is the Cooray_et_al_2004.m file. This is an example file containing input data from the paper by Cooray et al. [9]. This output can be directly compared to that of Figure 1 of that paper. If your output compares correctly, you have a working installation of the code. Another input available is Seo_Eisenstein_2003.m [10].

FM_errorchecker

The error-checker function acts 'behind the scenes' to check that the input structure and all the required variables are correct before executing the code.
It can be run directly by using the command:

>>FM_errorchecker(FM_initialise)

where FM_initialise is the specific function that initialises the input structure. The error checker validates, among other things, that all the derivative functions (whether analytical or numerical derivatives are going to be implemented) do in fact exist, and that the data and corresponding variance vectors are the same length. This error checker is continually being updated to facilitate ease of use of the code. All error and checking messages are displayed to the screen and are also saved in the Matlab file 'log.mat', which can be loaded and examined at a later stage by invoking the following command:

>>load('log.mat')

Flowchart

Throughout the code, structures are used to allow different sub-parts of the general code access to the data. These structures are defined as global variables. The structures containing information on the input for the code are either defined at the beginning or loaded from file. All output structures can be saved for later use. We now outline the general framework of the code and describe the cosmological example specifically coded in Fisher4Cast. The symbols used in the flowchart are, from left to right: the begin and terminate indicator; a simple processing function which would generally return an output; an if statement or for loop; an input process function designed to be edited and changed as per the user's specifications; and lastly a stored structure for either input or output, passed globally for use throughout the code.

Components of the Code

We now discuss the various components of the code in detail. With each section we give a subsection of the flowchart to highlight the position in the flow of information through the code structure.

FM_run

FM_run.m is the general wrapper of the code. In order to make the code clear and easily editable, all main processes are called from this general function, and it is where all data storage occurs.
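The kind of consistency checking the error checker performs can be sketched in a few lines of Python (a hypothetical helper that mirrors the manual's field names, not the Matlab code itself): compare the lengths of the data and error vectors and verify that named derivative functions actually exist.

```python
# Hypothetical sketch of the consistency checks described for FM_errorchecker.
# The dictionary keys mirror the manual's input-structure fields.

def check_input(inp):
    """Return a list of error messages; an empty list means the input passed."""
    errors = []
    data  = inp.get("data", [])
    error = inp.get("error", [])
    if len(data) != len(error):
        errors.append("number of data vectors != number of error vectors")
    for i, (d, e) in enumerate(zip(data, error)):
        if len(d) != len(e):
            errors.append(f"observable {i}: data and error vectors differ in length")
    for name in inp.get("function_names", []):
        if not callable(inp.get("functions", {}).get(name)):
            errors.append(f"derivative function '{name}' does not exist")
    return errors

good = {"data": [[0.1, 0.2]], "error": [[0.01, 0.01]], "function_names": []}
bad  = {"data": [[0.1, 0.2]], "error": [[0.01]], "function_names": []}
```

Collecting all messages in a list before reporting, rather than stopping at the first failure, matches the manual's behaviour of showing the user every error to fix before re-running.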
Links to separate functions for specific calculations are documented in the code. From the command line this code is called with one argument, namely the specific function that initialises the input structure. This is unique to each example. As outlined in the section on implementing the code, if no argument is given the code is run with a pre-defined function, Cooray_et_al_2004.m, which gives the parameters for a redshift survey as outlined by Cooray et al. [9].

FM_initialise

This function initialises the input used throughout the processing of the Fisher4Cast code. The values, names and areas of interest are specified here. It is called by FM_run.m to set the initial values for the input structure, which is then passed globally to all other parts of the code. Examples of the default initialising functions provided are Cooray_et_al_2004.m and Seo_Eisenstein_2003.m [9,10]. It is important to note that the format of the initialising function must be kept constant; in other words, the same fields must be specified in each initialising function, since the code expects values for certain entries in the input structure. The entries are as follows:

input.function_names - A cell of strings containing the specific filenames of the analytical derivatives. Note that in the coded cosmological example no analytical derivative function is specified for the growth function; its derivatives are only taken numerically.

input.observable_names - A cell of strings with the names of the observables.

input.observable_index - A vector of the indices corresponding to the observable names you are interested in; e.g. [2 3] would imply that you are considering the second and third observables listed in observable_names.

input.data{i} - The row vectors of the data for each of the respective observables (indexed again from beginning to end by i). In the cosmological example input.data{1} would be, for example, the redshifts at which you have measurements of the Hubble parameter.
input.parameter_names - A cell of strings containing the names of the parameters you can include for consideration to generate Fisher ellipses.

input.base_parameters - A row vector of the parameter values (they must be specified in the same order as the parameter_names vector). This is the model assumed to be true in the analysis; the Fisher Matrix is taken around this fiducial model.

input.prior_matrix - The prior matrix for the parameters, taken from previous surveys etc. The order of the matrix columns and rows corresponds to the respective parameters listed in parameter_names.

input.parameters_to_plot - A row vector of the indices of the specific parameters you want to plot. If one index is given then a likelihood function for that parameter is plotted, and if two are specified then an error ellipse is plotted. Selecting more than two parameters will produce an error message, as Fisher4Cast is only coded for up to 2-dimensional error contours.

input.num_parameters - This is a derived value, given by the number of parameters you are considering in parameters_to_plot.

input.num_observables - This is a derived value, generated from the number of observables under consideration in observable_index.

input.error{i} - The fractional error on the data for each observable (σ_X/X). It is key that there are as many error entries as there are observables you are considering (i.e. input.error{1} gives the error on your measurements of the Hubble parameter, measured at input.data{1}). The entry can either be a row vector, in the case of uncorrelated observables (this vector is converted to a diagonal matrix in the code), or a covariance matrix.

input.numderiv.flag - A logical entry is expected here for each observable, should you wish to use numerical derivatives.

input.numderiv.f - Single string entries which are combined into a struct later.
These entries are only required if you have specified that you would like numerical derivatives for your observables. They give the name of the function (say g.m) of which you are taking derivatives.

The Derivative Loop

The code now runs various operations in a loop over the specified observables. For each observable it checks whether numerical or analytical derivatives are to be used by checking the numerical flag specified in the input structure (see the discussion of numderiv.flag above). Both the analytical and numerical derivative routines return a matrix of derivatives for all the parameters and observables, as well as a vector of the function evaluated at the data points specified. The specific details of the numerical and analytical derivative codes are outlined in the following discussion. Once the selected derivative process is completed, the relevant output is stored in the output structure.

Numerical Derivatives

The numerical derivative code will calculate the numerical derivatives of any function (say g.m), provided that the function is specified as a function of the input parameters (i.e. g = g(d, θ_A, θ_B, ...)), by calling FM_process_numeric.m, which in turn calls FM_num_deriv.m and passes it the name of the function you wish to take derivatives of. The standard numerical derivative algorithm used is known as the complex-step method [11]:

∂g/∂θ ≈ Im[g(θ + ih)]/h,

where Im represents the imaginary part of the argument, i² = −1 as usual, and h is the step size. This method is a second-order accurate formula and is not subject to subtractive cancellation. Unlike the finite-difference method, an arbitrarily small step size can be chosen, and therefore the complex-step method can achieve near-analytical accuracy. In addition, the simple double-sided central derivative is coded in the FM_num_deriv.m function. In order to use this algorithm the user must change the method field inside the derivative function from 'complex' to 'central'.
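The complex-step method is easy to demonstrate in any language with complex arithmetic. A pure-Python sketch (the test function g and the step size are illustrative; the toolkit's own implementation is in Matlab):

```python
# The complex-step formula f'(x) ~ Im[f(x + ih)]/h.
# Because no subtraction of nearly equal numbers occurs, h can be made
# extremely small without loss of precision.

def complex_step(f, x, h=1e-30):
    return (f(x + 1j*h)).imag / h

def g(x):
    return x**3 + 2.0*x          # g'(x) = 3x^2 + 2

d = complex_step(g, 2.0)         # exact derivative is 14
```

With h = 1e-30 a central difference would return pure rounding noise, while the complex step recovers the derivative to machine precision, which is exactly the "near analytical accuracy" the manual refers to.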
In this case the gradient is calculated as

∂g/∂θ ≈ [g(θ + h) − g(θ − h)]/(2h).

This is then iterated until the gradient converges for the parameter. Note that the convergence criterion is quite stringent, and an error message will result if there are possible convergence issues. However, this criterion can be relaxed by changing the settings in the FM_num_deriv.m code. Once the derivatives are saved, the Fisher Matrix must be calculated for this observable. This is done by first calculating the data covariance matrix for the observable in FM_covariance.m, which is passed the function value and the index of the observable. The code checks whether the error entry is a covariance matrix (in the case of correlated observables) or a vector in the uncorrelated case. It then calculates the covariance matrix by multiplying the variance with the function value at the data points considered. FM_matrix.m then produces a Fisher Matrix (F) from the covariance matrix (C) and the derivative matrix (V) using matrix multiplication as F = V^T C^{−1} V.

Analytical Derivatives

The analytical derivatives are specific to each user and example. If one knows the analytical form of both the function and of the Fisher derivatives, one can include these functions explicitly. The only conditions on these functions are that they must be of the form g = g(d, θ) and must return as output a matrix of Fisher derivatives ∂g/∂θ and a vector of the function itself evaluated at the data points d given in input.data{i}. These derivative functions are supplied for the Hubble parameter and angular diameter distance as FM_analytic_deriv_1.m and FM_analytic_deriv_2.m respectively. The Fisher derivatives of the angular diameter distance with respect to the cosmological parameter Ω_k must be taken as a Taylor series expansion when Ω_k → 0 (see [1] for the full set of derivatives in Fisher4Cast). As in the numerical derivative case, once the derivatives are saved the Fisher Matrix must be calculated for this observable.
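The covariance and F = V^T C^{-1} V steps can be sketched in Python for the uncorrelated case (a toy analogue of the FM_covariance.m and FM_matrix.m steps described above; the function name and all numbers are illustrative, not the toolkit's code):

```python
# Sketch of the F = V^T C^{-1} V step: V is the (n_data x n_param) matrix of
# derivatives and C the data covariance. For uncorrelated data C is diagonal,
# built from the fractional errors times the function values. Toy values only.

def fisher_from_derivs(V, frac_err, f_val):
    n_par = len(V[0])
    # Diagonal data covariance: C_ii = (frac_err_i * f_i)^2
    Cinv = [1.0 / (fe * fv)**2 for fe, fv in zip(frac_err, f_val)]
    F = [[0.0]*n_par for _ in range(n_par)]
    for i, row in enumerate(V):                 # sum over data points
        for j in range(n_par):
            for k in range(n_par):
                F[j][k] += row[j] * Cinv[i] * row[k]
    return F

V        = [[1.0, 0.5], [1.0, 1.5]]   # dg/dtheta at two data points
frac_err = [0.1, 0.1]                 # 10% fractional errors
f_val    = [2.0, 4.0]                 # g evaluated at the data points
F = fisher_from_derivs(V, frac_err, f_val)
```

Because C is diagonal here, the triple matrix product collapses to a weighted sum over data points, which is why the loop form above is equivalent to the matrix expression F = V^T C^{-1} V.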
This is done by calculating the data covariance matrix for the observable in FM_covariance.m, which is passed the function value and the index α of the observable. The code checks whether the errors are specified as a covariance matrix (in the case of correlated observables) or a vector in the uncorrelated case. It then calculates the covariance matrix by multiplying the variance with the function value at the data points considered. FM_matrix.m then produces a Fisher Matrix (F) from the covariance matrix (C) and the derivative matrix (V) using matrix multiplication as F = V^T C^{−1} V.

Final processing

FM_sum.m collates all the derivative matrices from the previous steps and sums them together to form a full Fisher Matrix. The individual Fisher matrices for each observable are added to the prior matrix (as specified in the input structure). This complete Fisher Matrix is general and is assigned to the output structure for future reference. FM_marginalise.m produces a marginalised Fisher Matrix (say F̃). It takes the parameters you are interested in (specified as parameters_to_plot in the input structure) and shuffles the Fisher Matrix into block form. It then performs matrix multiplication on the blocks to produce the marginalised Fisher Matrix, which is also assigned to the output structure. FM_output_fom then produces the appropriate error for the likelihood case, and a range of Figures of Merit (FoMs), listed below, for the case of an ellipse (e.g. if the length of parameters_to_plot in the input structure is two then an ellipse will be plotted).

DETF FoM - The Dark Energy Task Force FoM [14] is defined to be the reciprocal of the area of the 2-σ error ellipse in the w_0-w_a plane of the CPL dark energy parameterisation [12,13]. This is equal to det(F̃)^{1/2}/(π√6.17). Unfortunately the DETF report does not appear to use this definition, and instead quotes det(F̃)^{1/2}, which is the inverse of the 1-σ ellipse in units of the area of the unit circle.
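The block operation performed in the marginalisation step is the Schur complement F̃ = A − B D⁻¹ Bᵀ, where A is the block of the Fisher matrix for the kept parameters and D the block for the parameters being integrated out. A minimal Python sketch for a toy 3-parameter matrix (the helper function is hypothetical, not the Matlab routine):

```python
# Sketch of the marginalisation block operation: shuffle the kept parameters
# into the top-left block A, with B the cross terms and D the block for the
# parameters being marginalised over; then F_marg = A - B D^{-1} B^T.
# Here we keep parameters 0 and 1 of a toy 3-parameter Fisher matrix, so D is
# a scalar.

def marginalise_3_to_2(F, keep=(0, 1)):
    (i, j), k = keep, ({0, 1, 2} - set(keep)).pop()
    A = [[F[i][i], F[i][j]], [F[j][i], F[j][j]]]
    B = [F[i][k], F[j][k]]
    Dinv = 1.0 / F[k][k]
    return [[A[r][c] - B[r]*Dinv*B[c] for c in range(2)] for r in range(2)]

F = [[4.0, 1.0, 2.0],
     [1.0, 3.0, 0.5],
     [2.0, 0.5, 2.0]]
Fm = marginalise_3_to_2(F)
```

Note that marginalising always weakens the constraints: the diagonal entries of F̃ are smaller than the corresponding entries of the full Fisher matrix, reflecting the extra uncertainty from the parameters integrated out.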
Because of the benefits of the geometric interpretation, Fisher4Cast returns the true inverse area of the 2-σ ellipse. To convert from one DETF FoM to the other, one should multiply the Fisher4Cast DETF output by π√6.17 ≈ 7.8.

Area - Unlike the previous definition, this FoM is sensitive to the off-diagonal components of the covariance matrix as well as the diagonal components.

FM_output

The data from all parts of the code are saved in the output structure. The structure formalism in Matlab means that each '.' indicates a further sub-level in the structure. Entries in the structure are of mixed type (i.e. output.function_value is a cell of vectors, one for each observable, while output.function_derivative is a cell of matrices of derivatives, again with one matrix of derivatives for each observable). By the end of the execution of FM_run.m the output structure should have the following entries:

output.function_value - A cell which contains a set of vectors for each of the observables considered. If in the specific run of the code you have only calculated an error ellipse for, say, one out of three observables, then the rest of the entries are empty vectors.

output.function_derivative - This cell contains matrices of the Fisher derivatives for each observable. Again, the entries for observables you are not considering will be empty matrices.

output.data_covariance - This cell contains the calculated data covariance matrix corresponding to each of the observables considered.

output.matrix - This cell contains a separate Fisher Matrix for each of the observables considered.

output.summed_matrix - This cell contains the sum of the Fisher matrices for each observable, plus the prior information matrix, if included.

output.marginalised_matrix - The marginalised Fisher Matrix given here depends on which parameters are of interest in each run of the code. The marginalisation via matrix multiplication is outlined in Section 1.4.4.
output.fom - This vector contains either a single entry (the 1-σ error), in the case where a one-dimensional likelihood function of a parameter θ_A (for example) is being considered, or an array of different FoMs when an ellipse of two parameters is being plotted. These are each explained above in Section 1.4.4.

Generating plots

FM_generate_plot calls either FM_plot_ellipse.m or FM_plot_likelihood.m depending on whether a 1-D likelihood or an ellipse is required (i.e. whether one or two parameters are specified in the parameters-of-interest field in the input structure). The style of the plot is controlled by the FM_plot_specifications.m file, which controls variables such as the line style, the colour of the lines, the resolution of the grid and the contour level (for example 1-σ, 2-σ). Similarly, the file FM_axis_specifications.m controls the x and y labels and the range of the plot that will be generated. Lastly, FM_save_struct is called to save the input and output structures with a user-specified filename. One can invoke this function from the command line:

>>FM_save_struct('saved_filename',input,output)

where input and output correspond to the structures being saved as saved_filename-01-Nov-2008.mat. The date on which the structure is saved is appended to the end of the filename. If a filename is not specified then the default name FM_saved_data is used. It is important to note that the function overwrites existing files with the same name and date, and no warning is given. Care should thus be taken to ensure different names are specified when saving important data on the same day. The structures can be loaded once again by issuing the following command:

>>load('saved_filename-01-Nov-2008.mat')

This will make the previously used input and output structures available in the current session. To end, we provide a global view of the structure of the code in Figure (1.18).
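The date-stamped naming convention ('saved_filename-01-Nov-2008.mat') can be mimicked in a few lines of Python; the helper below is hypothetical and only illustrates the Day-Month-Year format, it is not part of the toolkit:

```python
# Hypothetical sketch of the date-stamped filename convention described above.
from datetime import date

def stamped_name(base, d):
    """Append the date in Day-Month-Year form, as in saved_filename-01-Nov-2008.mat."""
    return f"{base}-{d.strftime('%d-%b-%Y')}.mat"

name = stamped_name("saved_filename", date(2008, 11, 1))
```

Because the stamp only resolves to the day, two saves with the same base name on the same day collide, which is exactly why the manual warns that files are overwritten without warning.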
The Background Observables

- Hubble parameter

The expansion history of the Universe is described by the Hubble parameter, which is defined as

H(z) = H_0 E(z),   E²(z) = Ω_m (1+z)³ + Ω_k (1+z)² + Ω_DE f(z),   (1.1)

with the evolution of the dark energy density, f(z), given by

f(z) = exp[ 3 ∫_0^z (1 + w(z'))/(1 + z') dz' ].

Assuming the Chevallier-Polarski-Linder (CPL) expansion of the dark energy equation of state [12,13],

w(z) = w_0 + w_a z/(1+z),   (1.3)

f(z) becomes

f(z) = (1+z)^{3(1+w_0+w_a)} exp( −3 w_a z/(1+z) ).

- Angular diameter distance

The measurement of 'standard rulers' of known intrinsic length is widely used as a probe of the cosmology of the Universe. The angular diameter distance relates the angular size of an object to its known length to obtain a measure of the distance to the object, and is given by

d_A(z) = c/[H_0 (1+z) √|Ω_k|] · sin( √|Ω_k| ∫_0^z dz'/E(z') ),

where a_0 = c/(H_0 √|Ω_k|) is the curvature radius of the cosmos, the sin becomes sinh for an open universe (Ω_k > 0) and reduces to its argument in the flat limit, and E(z) is as defined in Eq. (1.1).

- The Growth of Structure

The growth of structure is a potentially powerful probe of dark energy [14,15,16,17,18,19,20]. Consider the differential equation for the evolution of perturbations in the matter density δ (assuming the pressure and pressure perturbations of the matter are zero, p = δp = 0) [21,22,23]:

δ̈ + 2H δ̇ − 4πG ρ_m δ = 0.   (1.7)

The growth function provides the temporal evolution of these density perturbations, i.e. δ(x, z) ∝ G(z). In Fisher4Cast this is solved in a general FLRW universe, and hence there is in general no analytical solution to Eq. (1.7). Under the assumption of a flat universe and a cosmological constant (or pure curvature), however, the growing mode satisfies the following integral form [24,25]:

G(z) = (5 Ω_m / 2) E(z) ∫_z^∞ (1 + z')/E(z')³ dz',

where the 5/2 coefficient is chosen to ensure that G(z) → 1/(1+z) as z → ∞. This expression should not be used, however, to compute the Fisher derivatives ∂G/∂Ω_k, ∂G/∂w_0 or ∂G/∂w_a, since all of these derivatives violate the validity of the equation. Instead, the growth derivatives should be computed numerically from the solution of the full differential equation for δ(x).
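The expansion-history formulas above are straightforward to evaluate numerically. A pure-Python sketch with illustrative parameter values (not the Fisher4Cast Matlab code):

```python
import math

# The CPL expansion history: w(z) = w0 + wa*z/(1+z) gives
#   f(z) = (1+z)^{3(1+w0+wa)} * exp(-3*wa*z/(1+z)),
#   E^2(z) = Om*(1+z)^3 + Ok*(1+z)^2 + Ode*f(z).
# Parameter values below are toy choices.

def f_cpl(z, w0, wa):
    return (1.0 + z)**(3.0*(1.0 + w0 + wa)) * math.exp(-3.0*wa*z/(1.0 + z))

def E(z, Om=0.3, Ok=0.0, w0=-1.0, wa=0.0):
    Ode = 1.0 - Om - Ok
    return math.sqrt(Om*(1+z)**3 + Ok*(1+z)**2 + Ode*f_cpl(z, w0, wa))

# For (w0, wa) = (-1, 0) the dark energy density is constant, f(z) = 1,
# and the model reduces to LambdaCDM.
```

A quick sanity check is that E(0) = 1 by construction (the density fractions sum to one) and that setting (w0, wa) = (−1, 0) makes f(z) identically 1, recovering a cosmological constant.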
Rewriting the Raychaudhuri equation in terms of the Friedmann equation and the curvature density allows one to find an equation explicitly showing the curvature and dynamical dark energy contributions to the friction term:

d²G/dx² + [3/x + (1/E) dE/dx] dG/dx − [3 Ω_m / (2 x⁵ E²(x))] G = 0,

where the new independent variable is x ≡ a/a_0 = 1/(1+z), a_0 is the radius of curvature, and Ω_k and Ω_DE are the fractions of the critical density in curvature and dark energy respectively, entering through E(x). Alternatively, this can be written as a differential equation in terms of ln(x):

d²G/d(ln x)² + [2 + d ln E/d ln x] dG/d ln x − [3 Ω_m / (2 x³ E²(x))] G = 0,

which is the equation actually solved in Fisher4Cast, since it is typically more stable numerically. Appropriate initial conditions for this differential equation are set deep in the matter-dominated era: G(z_i) = 1, dG/d ln x(z_i) = G(z_i) for z_i ≥ 100. Note that as a result, the growth solutions will be unreliable if w(z → ∞) = w_0 + w_a ≥ 0 (or even if it is close to zero from below), since then there will be significant or even dominant early dark energy. Fisher4Cast allows the user to choose the redshift where the growth is normalised to unity. The Fisher derivatives all satisfy ∂G/∂θ_i = 0 at the normalisation redshift.

Alternative Dark Energy Parametrisations

The Fisher4Cast GUI is hard-coded for three cosmological observables (H, d_A and G), assuming the Chevallier-Polarski-Linder (CPL) parameterisation [12,13] with parameters (w_0, w_a); see Eq. (1.3). This is true both of the functions themselves and of the analytical derivatives included in the Fisher4Cast suite. The general framework of Fisher4Cast, however, means that one is not restricted to this parametrisation. As can be seen from Eq. (1.1), the observables depend on dark energy only through w(z), so the functions for a different parametrisation can simply be supplied in the input structure. The same is true for the derivatives: either they can be coded analytically for the particular parametrisation of dark energy, or they will be evaluated numerically from the functions specified in the input structure. As a caveat, the GUI can only be used if the new parametrisation of dark energy still contains only two coefficients.
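To see the ln(x) form in action, one can integrate it directly. Below is a pure-Python RK4 sketch for the Einstein-de Sitter special case (Ω_m = 1), where the friction and source coefficients become constant, the equation reduces to G'' + G'/2 − 3G/2 = 0 (primes are d/d ln x), and the growing mode is G ∝ a. Starting from the manual's initial conditions at z_i = 100, the solution at z = 0 should therefore grow by a factor 1 + z_i = 101. This is a toy consistency check, not the Fisher4Cast solver.

```python
import math

# RK4 integration of the growth equation in u = ln(x) for Einstein-de Sitter:
#   G'' + (1/2) G' - (3/2) G = 0,
# with initial conditions G = 1, dG/du = G deep in the matter era.

def rhs(u, y):
    G, Gp = y
    return [Gp, 1.5*G - 0.5*Gp]

def rk4(y0, u0, u1, n=2000):
    h, y, u = (u1 - u0)/n, list(y0), u0
    for _ in range(n):
        k1 = rhs(u, y)
        k2 = rhs(u + h/2, [y[i] + h/2*k1[i] for i in range(2)])
        k3 = rhs(u + h/2, [y[i] + h/2*k2[i] for i in range(2)])
        k4 = rhs(u + h,   [y[i] + h*k3[i] for i in range(2)])
        y = [y[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
        u += h
    return y

zi = 100.0
G0, _ = rk4([1.0, 1.0], math.log(1.0/(1.0 + zi)), 0.0)
# G0 should be close to 1 + zi = 101 for the pure growing mode.
```

The initial condition dG/d ln x = G selects the pure growing mode (δ ∝ a in the matter era), which is why the integration reproduces the factor 101 rather than mixing in the decaying solution.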
If this is not the case, Fisher4Cast must be run from the command-line version.

Extensions

The general philosophy of Fisher4Cast was to make it as easy as possible to mould and extend to the needs of a general user. In line with this philosophy we have introduced extensions as a means to add functionality and customisation to the existing Fisher4Cast suite. As a design philosophy for future extensions, we envisage that extensions do not alter the core functions of the code but rather access the input, output or other modular core functions. This will enable a large community of contributors to add and make available their own specific extensions while ensuring that the robust design features of the core code remain intact. An important element in ensuring the success of shared extensions is that all contributors have a good appreciation of the structures used in Fisher4Cast, while also documenting and commenting the details of their code thoroughly, including the purpose of the extension, the required input and the output produced, which files or structures the extension interacts with from Fisher4Cast, and whether it is run from the command line or the GUI. The latest extensions included in this release are listed below.

Obtaining Baryon Acoustic Oscillation Errors from Survey Parameters

Two modules are included that calculate errors on the Hubble parameter and angular diameter distance in BAO surveys characterised by input structures of survey parameters. The provided codes are extensions to Fisher4Cast, and should be placed in the same folder as the main code suite so that they can access the required elements of the Fisher4Cast suite. The first of these extensions, EXT_FF_Blake_etal2005, uses the fitting formulae of Blake et al. [26] to calculate the errors on the Hubble parameter and angular diameter distance given certain survey specifications, such as the survey area and the redshifts used for the measurements of H and d_A.
These are given either as central redshift bins or as the edges of redshift bins. The galaxy number density is also required, and is expected in units of 10⁻³ Mpc⁻³ h³. All files associated with this module have the same prefix to identify them as external modules for the fitting formulae of Blake et al. [26]. The fitting formulae contain coefficients specific to either photometric or spectroscopic surveys, and hence it must be specified which type of survey one is considering. The input parameters to this module are supplied in an input structure, which is explained in the EXT_FF_Blake_etal2005_Readme.txt file. They include:

sigH: A vector of the fractional errors on the Hubble parameter, σ_H/H, and similarly for the angular diameter distance, σ_dA/d_A. For percentage errors, multiply these by 100.

The module EXT_FF_SeoEisenstein2007 also computes the errors on the Hubble parameter and angular diameter distance, using the prescription set out in [27] and the sound horizon scale as given in [28]. This module contains a wrapper to call the Matlab version of the C code of Seo & Eisenstein [29]. In this module the code does not need to be in the same directory as the Fisher4Cast suite, and runs completely independently of Fisher4Cast. The module also takes an input structure (a default input structure with all fields specified is given in EXT_FF_SeoEisenstein2007_Input.m) with the following parameters defined:

Input_survey.Sigma_z: The line-of-sight root-mean-square comoving distance error due to redshift uncertainties.

Input_survey.beta: The redshift distortion parameter.

Input_survey.volume: The survey volume in units of h⁻³ Gpc³.

Sigma_perp (the transverse rms Lagrangian displacement) and Sigma_par (the radial displacement) are calculated and saved to the input structure. The module is comprised of smaller functions; the flowchart of the module is shown in Figure (1.20). As in the case of the Blake et al.
module, the Seo and Eisenstein module can be called from the command line using: >>[Drms,Hrms,r,Rrms] = EXT_FF_SeoEisenstein2007_Main(Input_survey) The outputs of the code are the root-mean-square errors on D/s and Hs, where s is the oscillation scale. These are both given as fractional errors; for percentage errors, multiply by 100. In addition, the correlation coefficient r between D and H is given, as well as the diagonal entry of the covariance matrix between D and H. To ensure that the extensions have access to the functions contained in Fisher4Cast, the extensions must either be placed directly in the same folder, or the path must be specified to both Fisher4Cast and the extensions. One can use the path command to do this: >>path(path,'/path-to-folder/Fisher4Cast-v2.0') >>path(path,'/path-to-folder/EXT_FF_Blake_etal2005') where 'path-to-folder' is the path specifying the directory where the extension or Fisher4Cast code is kept on your local computer. Similarly, to run the Seo and Eisenstein [27] module, you will need to ensure that the respective extension is either in the same directory or that the path is specified: >>path(path,'/path-to-folder/Fisher4Cast-v2.0') >>path(path,'/path-to-folder/EXT_FF_SeoEisenstein2007')

Reporting Features for the Fisher4Cast Suite

Two extension modules have been included to provide reports of the input and output structures during a run of Fisher4Cast. These reports can generate either an ASCII text file (.txt) or a LaTeX file (.tex) detailing all the input and output produced by Fisher4Cast. In the case of the LaTeX reporting function, the resulting .tex file can be compiled with LaTeX to produce a Postscript file (.ps) or Portable Document Format file (.pdf). This allows for a more polished presentation of the results generated from Fisher4Cast. It also includes a figure of the ellipse or likelihood plot, which is embedded in the document.
The additional benefit of generating a document in .tex format is that one can cut-and-paste the LaTeX-formatted syntax of the figure or any of the tabulated data, for easy inclusion in an article or document containing the results from a run of Fisher4Cast. These reporting features are accessible through the Graphical User Interface, by clicking the drop-down menu bar labelled 'Saving Features' (see Figure (1.12) for a screenshot of the drop-down menu). This opens a dialog box which, in the case of the text report, prompts the user for the .txt filename it should be saved as. Upon choosing a LaTeX report, two dialogue boxes are opened and the user is prompted for the names of both the .tex file and the .eps file. These two extensions can just as easily be called from the command line. To generate a text report one uses the function FM_report_text.m. The user is required to supply at least an input structure to generate a report. This input structure can either be a default input structure, e.g. Cooray_et_al_2004.m, or a user-customised input. The function FM_report_text.m then calls FM_run(input) with the same supplied input, which produces the relevant output structure. Both the input and output used and generated by Fisher4Cast are then recorded in the report. The user can also specify a filename to save the report as (if no .txt extension is supplied, one will be added automatically). A default name of 'Fisher4Cast Report-Day-Month-Year.txt' will be used should no name for the report be specified, where Day-Month-Year is the date on which the report was generated. For example, the command >>FM_report_text(input,'report_name') will generate a report with the name 'report_name.txt', as described above. If the same report name is used, the previous report will be overwritten without warning. Please specify a unique report name to ensure the report is correctly saved. Finally, there is the option of including a specific output structure in the report function.
This is useful when generating the report from the GUI, but care should be taken when using this option on the command line, as one runs the risk of generating a report where the input and output are not appropriately related. In other words, >>FM_report_text(input,'report_name',output) generates a report as before with the name 'report_name.txt', using the input supplied and assuming that the given output is associated with the respective input. Much the same as the text report, the LaTeX report is called using FM_report_latex.m and requires at least an input structure. The filename the report is to be saved as can also be specified (either with or without the .tex extension). The LaTeX report includes the EPS figure generated from Fisher4Cast: as a default the figure will be saved with the same name as the .tex file, except with an .eps extension. The default name of 'Fisher4Cast Report-Day-Month-Year.tex' is used should no report name be specified. The commands >>FM_report_latex(input,'report_name') generate a report with the name 'report_name.tex' and a figure with the name 'report_name.eps', where the names overwrite any existing files of the same name. Additionally, one can use a specific figure in the report with the command: >>FM_report_latex(input,'report_name','use_figure') In this case there is of course no guarantee that the figure and the output from Fisher4Cast agree. As in the case of the .txt report, one can specify the output structure directly with: >>FM_report_latex(input,'report_name','use_figure',output) which generates a report as before with the name 'report_name.tex', using a figure called 'use_figure.eps', where the output and figure are assumed to be associated with the respective input supplied.

How to produce small, good quality postscript images for inclusion in LaTeX documents

Much of the output from Fisher4Cast is expected to be used in research publications produced using LaTeX, in which case Fisher4Cast figures need to be saved in '.eps' or '.ps' format.
Unfortunately the default Matlab '.eps' and '.ps' files produced tend to be large, often several MB in size ‡ . Apart from making printing slow, this is a problem when submitting papers to online archives, like the arXiv § , which has a strict file size limit. Many of the figures in this paper exceeded 1 MB after saving from Matlab, and several exceeded 2 MB. To solve this, one can instead save the files as bitmaps and then use a utility such as jpeg2ps to convert the '.jpeg' file to postscript with a much smaller file size ¶ . Nevertheless, achieving good compression and good quality with Matlab figures can be tricky, so we share here the steps that have worked well for us:

1. Save the file in '.eps' format in Matlab.
2. Load the eps file into Photoshop (or an equivalent such as GIMP), which will immediately request to rasterize the file and hence ask for an image size (in cm or inches) and a resolution (in dots per inch, dpi). Choose the size at which you want the figure to appear in your LaTeX document (e.g. 10 cm). As with all bitmaps, choose the physical size you will use in the end, since rescaling bitmaps causes blurring and poor quality; 300 dpi gives good resolution. The resulting file will be huge in Photoshop, but do not worry, since it is only an intermediate step.
3. Save the file as a '.jpeg' with a high quality factor, 80% or better.
4. Using jpeg2ps or a similar utility, convert the '.jpeg' file to '.eps'. This will add approximately 10% to the jpeg file size due to the postscript wrapper.

You should be able to achieve a 10-20 fold reduction in eps-to-eps file size with only marginal reduction in quality.

Tests of the Code

Various tests were performed to check the correctness and accuracy of all components of Fisher4Cast, namely the integration routines in Fisher4Cast, the derivatives and their validity (especially relevant for the growth function), and the matrix manipulation and generation of Fisher error ellipses.
Integration Tests

The integration routines in Fisher4Cast are tested by comparing them to standard results and fitting formulae for the angular diameter distance and growth function respectively.

Angular Diameter Distance

The angular diameter distance, d_A(z), defined in Eq. (1.5), is the ratio of an object's physical transverse size to its angular size (in radians). Characteristically it does not increase indefinitely as z → ∞; rather, it turns over at z ∼ 1, and thereafter more distant objects actually appear larger in angular size. In the case of Ω_Λ = 0, one can write Eq. (1.5) as an analytical function of redshift and cosmic parameters as [30,21]:

d_A(z) = (2c/H_0) [Ω_0 z + (Ω_0 − 2)(√(1 + Ω_0 z) − 1)] / [Ω_0^2 (1 + z)^2]. (1.11)

The angular diameter distances for three particular cosmologies (the same cosmologies as found in Figure (2) of [31]) are shown in Figure (1.22). The same axis lengths and line styles are used for easy comparison of the two plots, which show agreement between the Fisher4Cast algorithms and the known results.

Growth function

In order to test the numerical growth function, the solution is compared to other numerical solutions in the literature (code is available for comparison at http://gyudon.as.utexas.edu/~komatsu/CRL/, which implements the growth equation given as Eq. (76) in [32]), to analytical approximations for particular cosmologies, such as Eq. (1.8), or to fitting formulae, such as that originally suggested by Carroll, Press & Turner [33].

‡ A brief word of caution: if you plot many ellipses in the GUI, or with multiple colour variations, and then save the result, Matlab will save the entire history, making the resulting file large. It is best to decide on what combinations you want and then save only that figure.
§ http://xxx.lanl.gov
¶ This process is discussed at http://aps.arxiv.org/help/bitmap/index and elsewhere.
This fitting formula is given by G(z) = g(z)/(1 + z) [34], where

g(z) = (5/2) Ω_m(z) [Ω_m(z)^{4/7} − Ω_Λ(z) + (1 + Ω_m(z)/2)(1 + Ω_Λ(z)/70)]^{−1},

and Ω_m(z) and Ω_Λ(z) are the matter and dark-energy density parameters evaluated at redshift z. The comparison between the growth function from Fisher4Cast and the fitting formula is shown in the left-hand panel of Figure 1.23, together with the relative difference, (G − G_fit)/G, between the two methods, in the right-hand panel. The normalised differences between the growth function used in Fisher4Cast and the fitting formula for the various models are of order 10^−3, as shown in the residuals of the right-hand panel.

Degeneracy Tests

The degeneracy direction of a Fisher ellipse (we consider w_0, w_a) is a useful diagnostic of whether or not the Fisher ellipses are being calculated correctly. The direction of degeneracy can be computed analytically for a given redshift, by assuming that the specific observable X_α is constant at a particular redshift and then solving for w_a as a function of w_0, or by computing the likelihood over a grid of w_0−w_a and then taking contours of the likelihood that correspond to the same cosmology as that assumed in the run of Fisher4Cast. This is particularly important when considering numerical derivative routines, such as those needed for the growth function.

Growth function

As a test of the correctness of the solution (and Fisher derivatives) of Eq. (1.10), the Fisher ellipse from a survey consisting of a single measurement of the growth function at z = 3 (as computed with Fisher4Cast) is shown in Figure (1.24), overlaid with contours of the likelihood corresponding to the growth value of the fiducial cosmology. The agreement of the degeneracy directions indicates that the numerical computation of both the function and its Fisher derivatives is sound.

Hubble parameter

In the case of the Hubble parameter one can compute the degeneracy direction analytically: consider a single perfect measurement of H(z) at some particular redshift z. Solving for w_a in terms of w_0 by substituting Eq.
(Figure caption: The growth function is evaluated at a single redshift z = 1 for a range of models on a grid of −1.5 < w_0 < 0.5 and −1 < w_a < 1. The w_0−w_a degeneracy direction in an assumed ΛCDM cosmology is then the iso-growth contour corresponding to ΛCDM (w_0 = −1, w_a = 0), shown here as the orange dashed line. The Fisher4Cast degeneracy direction (computed assuming an arbitrary value of 1.5% error on growth) is shown as brown solid lines.)

hence

w_a = [C(z) − 3(1 + w_0) ln(1 + z)] / [3(ln(1 + z) − z/(1 + z))]. (1.14)
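The two integration tests above can be mimicked outside Matlab. The following is a minimal Python sketch (illustrative only, not the Fisher4Cast code) of the angular diameter distance integral for a flat cosmology and of the Carroll, Press & Turner growth suppression factor; the function names and default parameter values are assumptions of this sketch.

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def angular_diameter_distance(z, H0=70.0, om=0.3, ol=0.7, n=20000):
    """d_A(z) in Mpc for a flat cosmology: comoving distance over (1 + z)."""
    zs = np.linspace(0.0, z, n)
    f = 1.0 / np.sqrt(om * (1.0 + zs) ** 3 + ol)  # 1/E(z)
    # trapezoidal integration of dz'/E(z') from 0 to z
    comoving = (C_KM_S / H0) * np.sum(0.5 * (f[1:] + f[:-1])) * (zs[1] - zs[0])
    return comoving / (1.0 + z)

def cpt_growth_factor(z, om=0.3, ol=0.7):
    """Carroll, Press & Turner fitting formula for the growth suppression
    factor g(z); the normalised growth is then G(z) = g(z)/(1 + z)."""
    e2 = om * (1.0 + z) ** 3 + ol      # E^2(z), flat cosmology assumed
    om_z = om * (1.0 + z) ** 3 / e2    # Omega_m(z)
    ol_z = ol / e2                     # Omega_Lambda(z)
    return 2.5 * om_z / (om_z ** (4.0 / 7.0) - ol_z
                         + (1.0 + 0.5 * om_z) * (1.0 + ol_z / 70.0))
```

For concordance parameters d_A turns over near z ≈ 1.6, so d_A(3) < d_A(1.5), matching the behaviour described above; for an Einstein-de Sitter model (om = 1, ol = 0) the suppression factor is exactly 1.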
UWHVF: A Real-World, Open Source Dataset of Perimetry Tests From the Humphrey Field Analyzer at the University of Washington

Purpose: This article describes the Humphrey field analyzer (HFA) dataset from the Department of Ophthalmology at the University of Washington.

Methods: Pointwise sensitivities were extracted from HFA 24-2, stimulus III visual fields (VF). Total deviation (TD), mean TD (MTD), pattern deviation, and pattern standard deviation (PSD) were calculated. Progression analysis was performed with simple linear regression on global, regional, and pointwise values for VF series with greater than four tests spanning at least four months. VF data were extracted independently of clinical information except for patient age, gender, and laterality.

Results: This dataset includes 28,943 VFs from 7248 eyes of 3871 patients. Progression was calculated for 2985 eyes from 1579 patients. Median [interquartile range] age was 64 years [54, 73], and follow-up was 2.49 years [1.11, 5.03]. Baseline MTD was −4.51 dB [−8.01, −2.65], and baseline PSD was 2.41 dB [1.7, 5.34].

Conclusion: MTD was found to decrease by −0.10 dB/yr [−0.40, 0.11] in eyes for which progression analysis was able to be performed. VFs with deep localized defects, PSD > 12 dB and MTD −15 dB to −25 dB, were plotted, visually inspected, and found to be consistent with neurologic or glaucomatous VFs from patients. For a small number of tests, extracted sensitivity values were compared to corresponding printouts and confirmed to match.

Translational Relevance: This open access pointwise VF dataset serves as a source of raw data for investigation such as VF behavior, clinical comparisons to trials, and development of new machine learning algorithms.

Introduction

Glaucoma is an optic neuropathy defined by characteristic change of the optic nerves with corresponding visual field (VF) deficits.
VF testing with standard automated perimetry plays an integral role in the assessment and management of patients with glaucoma by allowing providers to track patients' visual function and estimate future decline. More recently, there has been a growing interest in applying artificial intelligence (AI) to the arena of VF analysis to forecast future fields, 1 identify common glaucomatous field defects, 2 or detect the presence of glaucomatous progression, 3 as a few examples. As with other applications of AI, meaningful data of sufficient scale is required to adequately train the AI for its intended purpose. Significant work is required to prepare these datasets for analysis, and the limited access to this data presents a barrier to researchers interested in studying VFs. The ability to have access to an open dataset could significantly accelerate VF research. 4 This article describes the open-source VF dataset from the University of Washington and the steps involved in processing and annotating the raw data. This repository is published and available at https://github.com/uw-biomedical-ml/uwhvf.

Data Extraction

Standard automated perimetry tests from all patients performed on the Humphrey field analyzer (HFA) II (Carl Zeiss Meditec, Inc., Dublin, CA, USA) at the University of Washington were extracted under an Institutional Review Board-approved protocol, and then all protected health information was destroyed to create a deidentified dataset for public release. All VF testing dates were converted to floating point years of age by calculating the days of life from birth to the date of the VF testing and then dividing by 365.25. All ages above 90 were changed to be 90 to be in accordance with HIPAA Safe Harbor guidelines for deidentification. Floating point estimations of VF sensitivities at each testing location were extracted from the binary header data for the VF file by decoding the hexadecimal values as little endian integers.
Duplicated VFs were identified by finding VF instances with the exact same sensitivities at each location with the same age and same eye. Duplicated series were merged together and the data was formatted into JSON for public release. All tests were 24-2 white-on-white Goldmann stimulus size III examinations, performed with either a Swedish Interactive Thresholding Algorithm (Standard or Fast) or a full-threshold strategy.

Derived Metrics

Total deviation (TD) values were calculated using normative values from an HFA. These values were obtained by running mock tests on the device where no response was provided and inputting different ages, by decade. The TD maps in these mock tests therefore reported −(normative value) for each location at each age. TD values were calculated for all 52 locations in the 24-2 HVF, excluding the two blind spots at (X = 15; Y = ±3) degrees from fixation (for a right eye). The pattern deviation (PD) was calculated by subtracting the seventh highest (less negative) TD value from all the other TD values. The seventh highest value is used as a robust estimate of the general height (GH) of the field, which is then used to account for generalized depression of the sensitivity because of, for example, optical media opacity. Our calculation reflects the definition provided by the Imaging and Perimetry Society. 5,6 Global indexes were also calculated. The usual HFA mean deviation (MD) makes use of a weighting system based on location-specific variability estimates, 5 which cannot be extracted from the HFA device. Therefore we calculated the mean total deviation (MTD) instead, which is simply the arithmetic average of the TD values. Its interpretation is essentially equivalent to MD. 7,8 Of note, this is identical to the definition of mean defect on Octopus perimeters. 5 Similarly, pattern standard deviation (PSD) was also calculated as the standard deviation of the TD values, without any correction factors.
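To make the derived indices concrete, here is a minimal Python sketch of MTD, PSD, and PD (via the seventh-highest TD value), together with the simple linear regression used for the progression analysis; the function names and the sample-standard-deviation choice (ddof=1) are assumptions of this sketch, not of the released dataset code.

```python
import numpy as np

def derived_metrics(td):
    """Global indices from the 52 total-deviation (TD) values of a 24-2 VF.

    MTD: arithmetic average of TD (surrogate for the HFA mean deviation).
    PSD: standard deviation of TD, without correction factors (ddof=1 assumed).
    PD:  TD minus the seventh-highest TD value (the general-height estimate).
    """
    td = np.asarray(td, dtype=float)
    general_height = np.sort(td)[-7]  # seventh highest (less negative) TD value
    return {"MTD": td.mean(), "PSD": td.std(ddof=1), "PD": td - general_height}

def progression_slope(ages, mtds):
    """MTD slope in dB/year via simple linear regression over a VF series."""
    slope, _intercept = np.polyfit(ages, mtds, 1)
    return slope
```

For example, a series of four annual tests whose MTD falls steadily by 0.1 dB between visits yields a slope of −0.1 dB/year.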
The mean sensitivity (MS) was calculated as the average of the 52 sensitivity values (excluding the two blind-spot locations) in the 24-2 VF. Average localized perimetric defect was also quantified for each VF cluster by calculating clusterwise MTD and MS, as described by Garway-Heath et al. 9

Quantification of Progression

Global, clusterwise, and pointwise progression of VF defect over time was quantified using simple linear regression on TD values or their cluster or global average. Progression was only calculated on a subset of eyes with a minimum of four tests spanning at least four months. This selection was made to reduce large fluctuations in the progression slopes, particularly when only a few tests were concentrated over a short period of time.

Data Summary

The database includes 28,943 VFs from 7248 eyes of 3871 patients. Descriptive statistics for the whole sample are reported in Table 1 (clusters are defined as in Garway-Heath et al., 9 and the average intertest interval was only computed for eyes with more than one test, N = 7398). Progression was calculated for 2985 eyes from 1579 patients. Figure 1 reports additional descriptive statistics for the eyes that progressed and represents the distribution of MTD slopes with respect to their baseline value. This dataset is open sourced under the three-clause Berkeley Software Distribution (BSD) license. In addition, as recommended by Gebru et al., 10 we have provided a structured datasheet in Supplementary Materials.

Raw Data

The raw dataset is provided in two alternative formats: 1) Structured JSON file: this file contains sensitivity values, TD values, age, laterality (left or right eye), and gender when specified. Sensitivity and TD values are stored both in long format (as a vector) and provided as an 8 × 9 matrix. The latter is meant to preserve the original spatial organization of the data, which is particularly useful in spatial-aware processing often used in machine learning.
All VF data are stored as a right eye, in that the left-eye VFs are flipped to have the same layout as the right eye. Empty matrix cells are filled with a fixed value (100). A validated JSON Schema is provided in the repository for a full description of the data. 2) Long format table: this is a comma-separated value (CSV) file, where each row contains data.

Progression Data

Additional progression and baseline data are reported separately for each eye. Global progression is reported in a CSV file. Clusterwise and pointwise intercepts, progression slopes, and P values are reported in separate tables, where each row corresponds to an individual eye and each column to an individual location/cluster. Locations are ordered as previously described. Clusters follow those defined by Garway-Heath et al. 9 In short, the clusters are labeled as cluster 1 (superior peripheral), cluster 2 (superior paracentral), cluster 3 (central nasal), cluster 4 (inferior paracentral), cluster 5 (inferior peripheral), and cluster 6 (temporal). For consistency, eyes for which the calculation of progression was not possible (see selection criteria described above) are reported in the table, but the corresponding cells for progression metrics are left empty.

Technical Validation

Additional descriptive statistics for progression slopes and intercepts (global and by cluster) are reported in Table 2. Regional differences in baseline VF defect and the progression rate are shown in Figure 2. The average difference in rate of progression between MS and MTD was −0.06 dB/year, which is in excellent agreement with the average normal VF ageing reported by Spry and Johnson (−0.064 dB/year). 11 A selection of 315 VFs in the dataset were plotted and visually inspected by two experts. We targeted VFs with a high likelihood of localized deep defects by selecting examples with a PSD > 12 dB and a MTD between −15 dB and −25 dB.
The two experts evaluated the plausibility of the plotted examples, looking for typical glaucomatous or neurological patterns. All plots were found to be consistent with bona fide real VF test results. Because our data were deidentified, at the time of extraction we were not able to link them back to the original printouts. However, for two examples, the original HFA printouts were extracted manually without deidentification for validation purposes. These are reported in Figure 3. Discussion We present an open-sourced, observational VF dataset curated from a single academic institution with progression analysis performed on series with at least four VF tests over at least four months. The raw sensitivities and progression of pointwise, clusterwise and global values are included in the raw data files. To our knowledge, this is the first open access VF dataset of this magnitude to be made available for research. Rates of change in this dataset are in line with those presented previously. Spry and Johnson 11 reported a progression rate of 0.64 dB per decade in normal eyes, which is in line with the difference we found between the MS and MTD progression rate. They also reported that the rate may be affected by age, eccentricity of the test location, and the hemifield. 11 In this dataset, the most damage at baseline occurred in the superior hemifield (Fig. 2), in agreement with previous reports. 12,13 However, in our dataset, the inferior field was the fastest progressing region. Other studies in glaucoma patients report a wide range for mean deviation rate of progression from −0.05 to −0.57 dB/year. [14][15][16][17][18][19][20] The variability of MD rates could be influenced by factors such as different baseline glaucoma severity, surgical interventions during the follow-up period, and different follow-up times. 
Although MD is not available in this dataset because of proprietary location-specific variability estimates, the surrogate mean total deviation has been shown to be equivalent in interpretation. 7,8 Pointwise sensitivities were extracted from the binary header data in the VF file for this dataset. Pointwise sensitivities can also be obtained by exporting the data from Zeiss Forum or, more recently, by extracting values from the DICOM file or images of the VF report with third-party software. 21,22 Zeiss Forum was not available at our institution at the time of data extraction. Duplicate tests from the extraction process used for this dataset had to be identified and removed. These duplicate tests likely occurred due to a combination of correction of user input errors, relocations of devices across different sites, or transitions between data servers across the years. This open access VF dataset is the first of its size to be published and aims to lower the barrier to entry for the scientific community. 4 We provide both summary statistics and technical validation of our dataset. An open-access VF dataset would have a number of applications. It could be used to explore localized rates of change and interactions of neighboring VF locations. It can serve as a clinically-derived point of comparison to other VF datasets from other studies or for future clinical trials. It could also be used to evaluate the effects of different criteria for progression analysis. The size of this dataset opens avenues for possible machine learning applications. Limitations to this dataset exist. First, the VF data were extracted independently of clinic information other than patient age, gender, and laterality. Some clinical information was not included, such as initiation and timing of treatment and surgical interventions.
The dataset represents all patients undergoing VF testing at an academic institution with or without glaucoma diagnosis, and may not reflect the general patient population. Further work will have to be performed to identify, categorize, and rectify relevant health records. Second, reliability indices were not extracted, but the effects of less-reliable tests would be somewhat mitigated by the number of eyes and tests in this dataset. Third, proprietary information in the HFA limited the information that could be extracted. Probability deviation maps were not available, along with the Glaucoma Hemifield Test classification and Statpac analysis such as the Guided Progression Analysis, which are used clinically to help determine glaucoma progression and diagnosis. Likewise, our TD and PD values were not directly extracted from the test but rather calculated from the sensitivity values. However, as explained in the Methods section, the TD maps were derived as deviations from the normative values extracted from the machine and are therefore likely to be accurate. We confirmed this by showing that the difference between the average rate of progression for MS and MTD matched the expected sensitivity decline because of aging alone (−0.06 dB/year). 11 PD maps were derived according to the Imaging and Perimetry Society standards from TD maps. Therefore, provided that the TD values are correct, they should be reflective of what would be obtained from the actual HFA printouts. It is also worth mentioning that other freely available software, such as the visualFields package for R, 23 provide independent normative datasets and tools to calculate all these metrics from any dataset, including ours. It should be finally noted that we used simple linear regression to quantify progression. Better and more precise methods exist. 
[24][25][26][27][28] However, the scope of our progression analysis was mainly to provide descriptive statistics of the sample for researchers making use of our database, for example to select specific patients based on their rate of progression. As such, we chose simple linear regression as a straightforward method, easily replicated by other researchers for validation of their results. On the other hand, because we are making our database fully available, researchers in different fields would be able to apply their preferred method for detection of progression for their specific applications. The transition of health data to an electronic health record format opens the possibility of access to large datasets for research. Large, high-quality datasets have provided the foundation for innovative statistical and computational models but also present a barrier of entry. This VF dataset aims to help establish an open access repository so that the scientific community may use it to accelerate discoveries in this field. * GM and AC contributed equally and should be considered co-first authors.
Treatment of extraoral submental sinus tract associated with large periapical lesion of traumatized lower central incisors teeth by periapical surgery and demineralized freeze-dried bone allograft

The purpose of the present case report was to observe the effect of demineralized freeze-dried bone allograft (DFDBA) when combined with periapical surgery for the treatment of an extraoral submental sinus tract associated with a large periapical lesion of traumatized lower central incisor teeth. A patient complained of an extraoral submental draining sinus tract of 6 months' duration, following trauma to the lower central incisor teeth 1 year earlier. Radiographic investigation showed a large periapical lesion associated with the lower central incisor teeth. The patient was planned for treatment by periapical surgery and DFDBA. The treatment process included elevation of a full-thickness flap, debridement of the periapical lesion, root canal treatment, defect fill with DFDBA, and suturing of the full-thickness flap at its original position. Complete resolution of the extraoral submental sinus tract was achieved after 1 week, and the periapical lesion was repaired after 1 year. Thus, DFDBA was effective for the treatment of an extraoral submental sinus tract associated with a large periapical lesion of traumatized lower central incisor teeth.

INTRODUCTION

A sinus tract is an opening or communication of an enclosed area of inflammation/infection or abscess to an epithelial body surface or body cavity. A sinus tract of dental origin, usually the result of dental caries or traumatic injuries, is formed by pulpal necrosis followed by the invasion of microorganisms into the periapical region, causing an inflammatory periapical lesion of the affected tooth. The infection then progresses slowly, resorbing cancellous bone and spreading toward the cortical plate along the path of least resistance. Once the infection from the offending tooth has perforated the periosteum, the tooth may become asymptomatic.
After perforation of the cortical plate, the infection may spread into a facial space, may develop into a cellulitis, may localize into an abscess, or may open either intraorally or extraorally. If the infection tracks out of the jaw above the buccinator muscle attachment in the maxilla, or below the mentalis, mylohyoid, or buccinator attachments in the mandible, the sinus tract drains extraorally. If the perforation of the cortical plate is below the muscle attachments in the maxilla and above the muscle attachments in the mandible, the sinus tract is more likely to drain intraorally. The point of drainage depends in part on the length of the root and the position of the apex relative to the muscular attachments. Extraoral sinus tracts are more common in children and adolescents, because the teeth are not yet fully erupted and the alveolar process is not fully developed, so the roots are more deeply seated. [1] An intraoral opening of a sinus tract is usually visible either on the facial attached gingiva or in the vestibule in the case of the mandible; in the case of the maxilla, the opening is either on the facial attached gingiva, the palatal mucosa, or in the vestibule. An extraoral sinus tract may open anywhere on the face or neck; the usual locations are the angle of the mandible, the chin, and the cheek. Mandibular teeth are implicated over maxillary teeth in a ratio of 4:1, with 50% of mandibular sinus tracts emanating from the lower incisors or canines. It is not surprising, therefore, that the most common cutaneous sinus tract is seen in the chin or submental region. [2] Premolars will commonly point to the submandibular region, while lower molars can point to the submandibular skin or to the cheek.
Maxillary incisors may point to the floor of the nose, [3] while canine teeth will commonly point to below the inner canthus of the eye. Maxillary premolars and molars may point to the cheek. [4] Demineralized freeze-dried bone allograft (DFDBA) is the most widely used allograft material in periodontics, in part due to its availability, safety, and osteoinductive and osteoconductive properties. The osteoinductive property is due to the presence of bone morphogenetic proteins, which stimulate local cell cycles to produce new bone; the osteoconductive property is due to the freeze-drying process, which destroys cells while maintaining cellular morphology and chemical integrity. These two properties of DFDBA enhance periodontal regeneration and/or bone fill. Human histologic studies have shown that DFDBA can promote the formation of a new attachment apparatus on previously diseased root surfaces, including new cementum, bone, and periodontal ligament. [5] The aim of this case report was to observe the effect of DFDBA when combined with periapical surgery for the treatment of an extraoral submental sinus tract associated with a large periapical lesion of the offending traumatized lower central incisor teeth. CASE REPORT A 45-year-old male patient complained of pus discharge from the undersurface of the chin region for the past 6 months. The patient gave a history of trauma to the lower two teeth 1 year earlier. Extraoral examination revealed a fixed, nontender, erythematous nodulocystic lesion on the skin, below the lower border of the chin. Digital palpation of this area revealed a "cord"-like tissue connecting the painless skin lesion to the involved lower central incisor teeth. During palpation, an attempt was made to "milk" the sinus tract, which produced a purulent bloody discharge, confirming the presence of a sinus tract [Figure 1]. On inspection, the nodule and perilesional skin appeared slightly retracted below the level of the surrounding skin surface.
There was no swelling or pain, because the sinus tract provides drainage from the periapical lesion of the involved teeth and thereby prevents pressure buildup. On intraoral examination, the mandibular central incisors appeared blackish discolored [Figure 2]. Normal probing depth was present in the mandibular anterior teeth region. On percussion, both mandibular central incisor teeth were nontender. Vitality tests of both teeth were done with an electric pulp tester (Foshan COXO Medical Instrument Co. Ltd., 21 Wufeng Si Road Foshan, Guangdong, China), a cold test (ice cube), and a hot test (hot end of a ball burnisher); the teeth showed no response, confirming that they were nonvital. Radiographic examination revealed a large periapical radiolucent lesion associated with the roots of the mandibular central incisor teeth [Figure 3]. A radiograph with a lacrimal probe, gutta-percha cone, or sharp-tipped wire was not taken because the offending teeth were easily identified. Since the periapical lesion was large and associated with two traumatized teeth, periapical surgery was planned for their treatment. Before treatment, verbal and written consent was obtained from the patient. This case report was approved by the Institutional Ethical Committee for human subjects and was conducted in accordance with the Declaration of Helsinki of 1975, as revised in 2000. The patient underwent basic periodontal treatment of phase I therapy, including scaling, root planing, and instructions for proper oral hygiene measures. Endodontic treatment, including root canal treatment and restoration, was planned for the time of surgery. The patient was instructed to perform a presurgical rinse with 0.2% chlorhexidine solution (REXIDIN® plus, INDOCO REMEDIES Ltd., Aurangabad, India). The facial skin around the mouth was cleaned with spirit (isopropyl alcohol, 70%) and scrubbed with 7.5% povidone-iodine solution (Betadine®, Win-Medicare, Pvt. Ltd., New Delhi, India).
The intraoral surgical site was painted with 5% povidone-iodine solution (Povishield™, Microwin Labs Pvt. Ltd., Janakpuri, New Delhi, India). [6] After proper part preparation, 2% lignocaine hydrochloride with 1:200,000 adrenaline bitartrate (LOX*, Neon Laboratories Limited, Andheri East, Mumbai, India) was administered to anesthetize the left and right mental nerves. After the local anesthesia took effect, a sulcular incision and two vertical incisions, distal to the lower left and right lateral incisors, were given. A full-thickness flap was elevated to access the root apices and the periapical lesion [Figure 4]. After elevation of the flap, the granulation tissue over and apical to the involved roots was removed, and the area was irrigated with normal saline solution (NS, ALBERT DAVID LIMITED, Meerut Road Ind. Area, Ghaziabad, India) [Figure 5]. After this, root canal treatment was completed, and apical root-end resection was done with a round diamond bur at high speed, with sterile water coolant, removing approximately 3 mm of the root apices to completely clean their undersurface. A 3-mm-deep root-end cavity was prepared and filled with light-cure glass ionomer cement. Irrigation was done with 100 mg/mL doxycycline solution for 5 min to remove the smear layer, to expose the collagen matrix, and to prevent the degradation of collagen by the collagenase enzyme. The periapical defect was filled with DFDBA (Tata Memorial Hospital, Tissue Bank, Mumbai, India) [Figure 6]. The full-thickness flap was sutured in its original position with 3-0 black silk suture (Mersilk, Nonabsorbable Surgical Suture, Ethicon, Johnson and Johnson, Ltd., Aurangabad, India) [Figure 7]. Finger pressure was then applied for 5 min on the operated area to ensure close adaptation of the tissue. A periodontal dressing (COE-PAK, Regular Set, GC America Inc., Alsip, IL, USA) was applied to protect the surgical area [Figure 8]. An immediate postoperative intraoral periapical radiograph was taken [Figure 9].
DISCUSSION Sinus tracts more frequently involve maxillary teeth (65%) than mandibular teeth (35%), and the majority of sinus tracts have labial openings (94%). This is because mandibular teeth are embedded within thicker cortical bone compared to the maxilla, and the lingual bone is more compact than the labial/buccal bone in both jaws. These characteristics of the upper jaw bone may explain the higher incidence of sinus tracts with labial openings in the maxilla. The majority of sinus tracts were associated with posterior teeth (43%). The first molar teeth are the first permanent teeth to erupt in the mouth and are more susceptible to dental caries; therefore, they are the most common teeth to undergo endodontic treatment or extraction. Consequently, a higher incidence of sinus tracts for posterior teeth may be naturally expected. [7] Postoperative medications (… for 3 days) were prescribed. The patient was instructed to be extremely cautious during mastication at meals, and to avoid tooth brushing or chewing on the operated area for 3 weeks. After this period, the patient was advised to mechanically clean the operated area using an extra-soft toothbrush with the coronally directed "roll" technique. Plaque control was obtained with 0.2% chlorhexidine rinse (REXIDIN® plus, INDOCO REMEDIES Ltd., Aurangabad, India), twice daily during the first 2 weeks, and then application of 0.2% chlorhexidine gel (REXIDIN®-M Fort Gel, INDOCO REMEDIES Ltd., Mumbai, India) onto the operated area for another 2 weeks. Sutures were removed 1 week after surgery [Figure 10]. Clinical and radiological follow-up was performed at 3 months, 6 months, and 1 year after surgery. The sinus tract healed completely after 1 week [Figure 11]. The radiographic evaluation showed complete bone healing [Figure 12], and the tooth was asymptomatic. The pattern of breakdown and repair of periradicular lesions was demonstrated by Fish.
He described four reactive zones to the bacteria: the zone of infection, the zone of contamination, the zone of irritation, and the zone of stimulation. The central zone of infection consists of microorganisms and neutrophils. The second zone, of contamination, contains a round cell infiltrate. The zone of irritation contains osteoclasts and macrophages. The outer zone of stimulation contains fibroblasts and osteoblasts, which form collagen and bone, respectively. Egress of microorganisms into the periradicular region causes tissue destruction in the central zone of infection. As the toxicity of irritants is reduced in the central zone of infection, the number of reparative cells increases at the periphery. Removal of irritants, proper debridement, and obturation permit the reparative zone to move inward. [8] The treatment of a draining sinus tract includes nonsurgical and surgical methods. The nonsurgical methods include the decompression technique, the aspiration-irrigation technique, intracanal medicaments, conventional root canal therapy, and apical perforation of the root during root canal treatment. The decompression technique [9] and the aspiration-irrigation technique [10] aid in decreasing the hydrostatic pressure, resulting in shrinkage of the lesion. At the same time, the more conservative nonsurgical approach of treatment with intracanal medicaments cannot be ignored. Calcium hydroxide is recommended as an intracanal medicament because of its antibacterial properties, tissue-dissolving ability, inhibition of tooth resorption, and induction of tissue repair by hard tissue formation. [11] Root canal therapy involves the removal of etiological factors by proper bio- and chemo-mechanical preparation and three-dimensional obturation. Apically perforating the root of the tooth during root canal treatment drains the pus through the orthograde approach, creating a drainage pathway and providing rapid relief to the patient in the case of a large sinus.
Surgical methods include the shoelace technique, surgical endodontic therapy, periapical surgery, and extraction, for speedy disappearance of the draining sinus tract in a very short period. The shoelace technique is one such method, in which the sinus tract is managed extraorally by inserting a gauze piece soaked in povidone-iodine to make a path for pus drainage. [12] The decompression technique involves the placement of tubing to maintain drainage. [9] However, several disadvantages, such as inflammation of the alveolar mucosa, persistence of a surgical defect at the site, development of acute or chronic infection of the lesion, submergence of the tube, and the need for patient cooperation, limit the use of this technique. [13] The aspiration-irrigation technique involves aspirating the fluid using a wide-gauge needle attached to a syringe. The needle penetrates the lesion through the buccal mucosa, creating a buccal wound, and exits through the palatal mucosa, creating a palatal wound; these wounds later act as a pathway for the escape of the irrigant. A disadvantage of this technique is the creation of the buccal and palatal wounds, which result in inflammation of the alveolar mucosa and cause discomfort. [10] The healing of periradicular tissues after root canal treatment is often associated with the formation and organization of a fibrin clot, granulation tissue formation, maturation, subsidence of inflammation, and, finally, the restoration of the normal architecture of the periodontal ligament. Hence, the treatment must be focused on the elimination of the source of the infection. [14] Radiographic signs such as density change within the lesion, trabecular reformation, and lamina dura formation confirmed healing, particularly when associated with the clinical finding that the tooth was asymptomatic and the soft tissue was healthy. [15] Some authors have stated that a period of more than 2 years is needed to determine the final treatment result of these lesions.
[16] In the present case, the recession of the periapical lesion after periapical surgery with DFDBA was evident after 1 year. The sinus tract in the present case report healed 1 week after the periapical surgery, and there was no esthetic need for surgical intervention, possibly due to its position below the border of the chin and because of its recent development. Nevertheless, fibrosis of the sinus tract trajectory is not uncommon, mainly in older sinus tracts. In these cases, fibrosis develops peripherally, spreading along the whole trajectory, and its surgical removal is necessary. [17] Microbiological culturing of sinus tracts has shown a mixed assortment of both obligate and facultative anaerobic bacteria. The bacterial species identified were typical representatives of both endodontic abscesses and skin infections. [18] Cutaneous sinus tracts may be lined with either granulomatous tissue or epithelium. [19] Spontaneous closure of the tract should be expected within 5-14 days after root canal therapy or extraction. [20] Slight dimpling or cutaneous retraction and hyperpigmentation of the area are not uncommon and usually diminish with time. Surgical revision of the scar may occasionally be indicated to provide better cosmetic results. Failure of a cutaneous sinus tract to heal after adequate root canal therapy or extraction requires further evaluation, microbiological sampling, and biopsy. In this case, however, the sinus tract lesion healed after 1 week with a minimal scar, unnoticeable by the patient and his social environment. It is now believed that activated macrophages in the periapical lesion are the reason for delayed healing of lesions in the absence of bacterial antigens. A futuristic view of treating periapical lesions includes the placement of biodegradable local sustained drug delivery points into the periapical lesion before obturating the tooth, to deactivate the macrophages and enhance faster healing of the lesions.
[21] CONCLUSION The observations of the present case suggest that when DFDBA is combined with periapical surgery for the treatment of an extraoral submental sinus tract associated with a large periapical lesion of traumatized lower central incisor teeth, it promotes a favorable environment for periapical repair as well as soft-tissue healing, that is, resolution of the sinus tract. Declaration of patient consent The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient has given his consent for his images and other clinical information to be reported in the journal. The patient understands that his name and initials will not be published and due efforts will be made to conceal his identity, but anonymity cannot be guaranteed. Financial support and sponsorship Nil.
A pluralist account of the basis of moral status Standard liberal theories of justice rest on the assumption that only those beings that hold the capacity for moral personality (CMP) have moral status and therefore are right-holders. As many have pointed out, this has the disturbing implication of excluding a wide range of entities from the scope of justice. Call this the under-inclusiveness objection. This paper provides a response to the under-inclusiveness objection and illustrates its implications for liberal theories of justice. In particular, the paper defends two claims: first, it argues that both the CMP and the potential capacity for moral personality (PCMP) are bases of moral status. This pluralist account of the basis of moral status can broaden the scope of justice and provide a solid philosophical justification for the common-sense intuition that almost all human beings have a moral status that is different from and superior to that of nonhuman animals. Second, contra what is commonly suggested, it contends that potential and actual moral persons have different and unequal rights, other things being equal. Introduction The conception of the person as an autonomous agent is one of the most fundamental commitments of liberalism. Traditionally, liberals hold that the possession of the capacity for moral personality (CMP) grounds persons' moral status qua rational and reasonable agents, imposing a duty of respect not to interfere with how persons exercise this capacity, so long as they do so consistently with others' rights. Accordingly, much of the debate in liberal political philosophy has focused on working out the most plausible theory of justice that is entailed by a commitment to moral personhood and the principle of respect as non-interference, whereas relatively little attention has been paid to the objections that have been pressed against these underlying assumptions of liberalism.
This paper aims to fill this gap by considering the under-inclusiveness objection against the liberal commitment to moral personhood. On the one hand, many have observed that if moral status is grounded in the possession of the CMP, then it is unclear how liberal theories can justify the existence of moral obligations that are owed to infants and children, as well as to those human beings who are severely cognitively disabled (Jaworska 1999; Kittay 2005; Nussbaum 2006). On the other hand, it has also been forcefully argued that the liberal emphasis on the CMP obscures the value of nonhuman animals' lives, and thus is unable to condemn those practices that cause the death of, or an enormous amount of suffering to, nonhuman animals (Regan 1983; Singer 2011). In response to the under-inclusiveness objection, this paper argues that liberals can, and indeed should, broaden the scope of moral status by maintaining that the potential capacity for moral personality (PCMP) is also a basis of moral status. This paper, then, defends a pluralist account of the basis of moral status which has the theoretical resources to make moral status more inclusive while, at the same time, providing a solid philosophical justification for the common-sense intuition that almost all human beings have a moral status that is different from and superior to that of nonhuman animals. To be sure, the belief that the PCMP is a basis of moral status is quite widespread among liberal political philosophers. This, for example, clearly emerges in Rawls when he claims that ''one should observe that moral personality is here defined as a potentiality that is ordinarily realized in due course. It is this potentiality which brings the claims of justice into play'' (Rawls 1971, p. 505; emphasis added).
Nevertheless, it is fair to maintain that most liberal political philosophers have more or less implicitly relied on the notion of potentiality as the basis of moral status, while neither explicitly addressing some pressing objections that have been raised against this concept, nor examining the implications that a pluralist account of the basis of moral status entails for theories of justice. The second aim of this paper is to shed some light on these issues: in particular, this paper shows that actual and potential moral persons have different and unequal rights, other things being equal. The paper is structured as follows: Section 2 presents a liberal conception of moral status and introduces the Intrinsic Value condition, which states that a property must have intrinsic value to be a suitable candidate for the basis of moral status. Section 3 discusses Rawls's liberal account of moral status. It argues that while Rawls's view is a plausible account of moral status, because the CMP meets the Intrinsic Value condition, the CMP-account runs up against the under-inclusiveness objection. Section 4 argues that an appeal to those properties that are often considered to be the basis of moral status, such as consciousness and sentience, can only avoid the under-inclusiveness objection at high moral costs. This is because these status-conferring properties entail that some nonhuman animals have a moral status that is superior, or equal, to that of a wide range of human beings. And this, I submit, is a conclusion hard to accept. Call this the nonhuman animal superiority objection (NASO). Section 5, then, contends that the PCMP is a plausible candidate for the basis of moral status because it satisfies the Intrinsic Value condition. Being a status-conferring property that almost all human beings possess, but nonhuman animals lack, the PCMP allows us to broaden the scope of moral status without falling prey to the NASO.
Section 6 concludes by discussing two objections so as to sharpen the argument and illustrate its implications. Moral status and the intrinsic value condition Moral philosophers have long debated the question of what it means to have moral status. Frances Kamm, for example, distinguishes between two different senses of moral status. The ''broad sense'' of moral status refers to ''what is morally permissible or impermissible to do to some entity'' (Kamm 2007, p. 227). Thus, for instance, a rock may have moral status in this broad sense, as an entity to which it is morally permissible to do anything. The ''narrow sense'' of moral status, instead, indicates entities that ''count morally in their own right'' (Kamm 2007, p. 227). There are, however, different ways to count morally: first, an entity may count in its own right in the sense that it may give us reasons to constrain our actions toward it. For example, we may think that we should preserve a piece of art, independently of the pleasure that it can bring to people. Second, an entity can count in its own right and for its own sake. So, saving a bird, as opposed to preserving a piece of art, is something that can be done for the bird's own sake, because saving the bird would be good for the bird (Kamm 2007, p. 228). Finally, an entity counts morally when it is the object of a ''directed duty'', that is, a duty that is owed to that entity, in particular, in virtue of the entity that she is. And, as Kamm notes, ''a directed duty is typically correlative to a right held by the entity to which the duty is owed against the person who owes it'' (Kamm 2007, p. 230). To appreciate this, imagine that Tom slaps Jack in the face. According to this narrow sense of moral status, Tom has violated his directed duty owed to Jack, or, alternatively, Tom has failed to respect Jack's right not to be harmed.
Furthermore, in doing so, Tom has not simply done something wrong, but he has wronged Jack, in particular, because he has violated a duty owed to him. It is this last sense of the narrow conception of moral status, the moral status of a right-holder, or the object of directed duties, that is most salient for the question at hand. Indeed, liberals commonly hold that having moral status consists in being a right-holder, or being the object of directed duties, in virtue of the entity that one is (Carter 2011; Liao 2015; Sangiovanni 2017). And the aim of this paper is to examine what account of the basis of moral status liberal theories of justice should rest on. Now, if having moral status entails being a right-holder, then it seems reasonable to hold that the possession of moral status presupposes one's intrinsic value. This is because it is unclear why A has directed duties that are owed to B, in particular and for its own sake, if B is not valuable in and of itself. In short, being valued for its own sake presupposes being intrinsically valuable. To see this more clearly, consider the speciesist account of the basis of moral status, which maintains that human beings have equal moral status in virtue of their membership in the species Homo sapiens. As many have observed, the problem with the speciesist account is that a purely biological feature has no intrinsic value and, as such, cannot generate rights. Why should the possession of a specific DNA in and of itself confer any moral value on its holder, and thus ground her moral status as the object of directed duties? Speciesism, then, should be rejected because it is unable to meet the Intrinsic Value condition (McMahan 1996, p. 34; Singer 2011, pp. 48-53). At this point, two comments are in order. First, one may note that there are some cases in which it seems reasonable to maintain that entities with intrinsic value do not have rights.
Here is an example: life, one may plausibly maintain, has value in and of itself. It would be odd, however, to hold that any form of living being has rights; for instance, few would maintain that bacteria are right-holders. As we have seen above, there are different ways of counting morally, or different senses of moral status. Accordingly, it seems plausible to hold that while intrinsic value necessarily entails a certain kind of moral status, only some intrinsic values generate the moral status of a right-holder. Put differently, any entity that has intrinsic value ''counts morally in its own right''; however, only some entities that have intrinsic value are the object of directed duties. Second, one may observe that there are cases of entities that have rights, despite not being intrinsically valuable, like corporations and legitimate states. However, even assuming that these entities do have rights, they do not have the moral status of a right-holder in the sense at issue here. The reason for this is that the rights of corporate entities are grounded in, and conditional on, how they serve the interests of individuals (Valentini 2017, p. 878). Hence, it is precisely because they do not have intrinsic value that corporate agents are not ultimate units of moral concern, for they are not the object of directed duties in virtue of the entity that they are. Therefore, they do not have the kind of moral status that is relevant for this article, which consists in being a right-holder, or the object of directed duties, in virtue of the entity that one is.
To conclude, then, according to a liberal conception of moral status, having moral status means being a right-holder, or being the object of directed duties, in virtue of the entity that one is; and, since being intrinsically valuable is necessary to be a right-holder in virtue of the entity that one is, a suitable candidate for the basis of moral status must meet the Intrinsic Value condition, namely, a status-conferring property must have intrinsic value. The capacity for moral personality (CMP) as the basis of moral status and the under-inclusiveness objection In this section, I introduce one of the most prominent liberal accounts of the basis of moral status: Rawls's view. Rawls argues that justice is owed to those beings that have the capacity for moral personality (CMP), that is, those beings that ''are capable of having (and are assumed to have) a conception of the good (as expressed by a rational plan of life); and … are capable of having (and are assumed to acquire) a sense of justice'' (Rawls 1971, p. 505). In brief, Rawls maintains that moral persons, i.e., those beings that are capable of moral personality, have moral status qua rational and reasonable beings. The possession of the CMP is widely considered to be intrinsically valuable. Some have suggested that this is because autonomous agents have the capacity to prescribe ''general principles to themselves rationally, free from causal determinism, and not motivated by serious desires'' (Hill Jr. 1991, p. 44). Others have stressed that the CMP is valuable for its own sake because it entails the moral power of rationally choosing one's ends and thus becoming responsible for one's choices (Korsgaard 1996, ch. 7). It might be objected that these justifications for the intrinsic value of the CMP simply describe what the CMP is rather than providing independent reasons to account for its intrinsic value.
The question of how the intrinsic value of a thing can be justified has long been debated in value theory (Zimmerman 2015). However, two points are worth noting. First, it should be observed that this is not a problem for the CMP alone, but for any plausible status-conferring property. As Liao puts it, ''suppose one holds the view that if X has actual sentience, then X is a right-holder. It may be asked, why is this so? Asserting that ''pain is bad'' does not seem to provide an independent argument for this account'' (Liao 2015, pp. 24-5). (I am grateful to an anonymous reviewer for prompting me to clarify these points. For other liberal accounts of the basis of moral status, see Gauthier (1986), Locke (2016), and Mill (2015). While this paper focuses on liberal accounts of moral status, it is important to acknowledge that the belief that human beings have rights and are owed directed duties has also been strongly defended by several thinkers of different philosophical traditions; for instance, Douglas B. Rasmussen and Douglas J. Den Uyl develop a neo-Aristotelian account of moral status. See Rasmussen and Den Uyl (1991).) Second, it seems reasonable to maintain that since it is very difficult to argue for the intrinsic value of X, insofar as any attempt to do so will have to refer to properties that are outside X, the best we can do to justify X's intrinsic value is to explain what X is and, by doing so, pump the intuition that X is non-instrumentally valuable. Inevitably, however, this discussion will reach an end; and, at that point, we will have to seek a reflective equilibrium by testing the normative principles entailed by a view which holds that X has intrinsic value against particular cases, and vice versa, so as to achieve a mutual fit between our considered judgments and our theory. Now, since the CMP meets the Intrinsic Value condition, Rawls's view is a plausible account of the basis of moral status.
But is Rawls's view the most plausible account of the basis of moral status? Many have raised scepticism in this regard. In particular, it has been argued that Rawls's view should be rejected because it implies the disturbing conclusion that some human beings, those who do not yet hold the CMP and those who had the CMP but no longer hold it, and all nonhuman animals do not have moral status, and therefore are not right-holders (Nussbaum 2006; Regan 1983; Singer 2011). Call this the under-inclusiveness objection. To summarise, standard liberal theories of justice have traditionally relied on the assumption that moral status is grounded in the possession of the CMP. And, while the CMP does indeed provide a plausible basis of moral status, maintaining that only those beings that are capable of moral personality have moral status has the disturbing implication of excluding a wide range of entities from the scope of justice. A pluralist account of the basis of moral status and the nonhuman animal superiority objection (NASO) A promising solution to overcome the under-inclusiveness objection is to argue in favour of a pluralist account of the basis of moral status, according to which the possession of the CMP is not the only status-conferring property. And here different possibilities open up: some have affirmed that the basis of moral status is to be found in one's sense of one's own consciousness (Sangiovanni 2017). Others have contended that sentience is a plausible candidate for the basis of moral status (Bentham 2000; Singer 2011). Finally, it has also been argued that moral status is grounded in the property of ''being-subject-of-a-life'', which consists in the capacity to have desires and beliefs, memory and a sense of the future, and to act intentionally (Regan 1983, p. 243).
A pluralist account that includes the CMP and any of the above properties as the bases of moral status has the theoretical resources to broaden the scope of moral status so as to include almost all human beings and nonhuman animals within the realm of justice. Nonetheless, a pluralist account of the basis of moral status which rests on sentience, being-subject-of-a-life, or consciousness can only avoid the under-inclusiveness objection at high moral costs. To see this, it will be necessary to introduce the distinction between the question of moral status and the question of moral equality, or equal moral status. The former is essentially non-comparative because it concerns what is owed to a being in and of itself, independently of what is owed to others. The latter, instead, is comparative in that it addresses what is owed to moral status-holders in relation to one another. So, for example, if A's and B's moral status is grounded in the same status-conferring property X, then this means that both A and B are right-holders. However, this does not entail that A and B have equally stringent rights: since X confers moral worth on A and B, if A holds X to a higher degree than B, then it follows that A is more morally worthy than B, or, equivalently, A has a moral status that is superior to that of B. Hence, A's rights are more stringent than B's rights, other things being equal (Carter 2011). We are now in a position to see that an appeal to sentience, being-subject-of-a-life, or consciousness has disturbing implications: if the moral status of some human beings and nonhuman animals is grounded in the same status-conferring property, then there will be cases in which the latter hold this property to a higher, or equal, extent than the former. Therefore, nonhuman animals will have a moral status that is superior, or equal, to that of a wide range of human beings.
To illustrate this, consider some nonhuman animals, like dolphins, chimpanzees, and whales, which are deemed to display fairly high degrees of reasoning capacities, and some human beings, such as young children, cognitively disabled human beings and infants. It seems reasonable to believe that there will be cases in which the former hold the status-conferring properties of consciousness, sentience, and being-subject-of-a-life to a higher, or an equal, degree than the latter. Therefore, according to a pluralist account that includes any of these status-conferring properties, some nonhuman animals have a moral status that is superior, or equal, to that of a wide range of human beings. This, I submit, is a conclusion hard to accept. Call this the nonhuman animal superiority objection (NASO). 10 At this point, two comments are in order: first, one may doubt that it is necessary to point to a property which some human beings hold, but nonhuman animals lack, in order to conclude that the former have more stringent rights than the latter. The reason for this is that the stringency of rights does not only depend on the status-conferring property on which rights are grounded but also on the underlying interests at stake.

10 A critic may protest that the NASO is question begging. It is true that I am here relying on an intuition about the prima facie implausibility of conferring superior, or equal, moral status to some nonhuman animals over a wide range of human beings. However, if I can show that there is indeed a status-conferring property that nonhuman animals lack, but almost all human beings possess, which marks a moral discontinuity between the two, then the intuition that underpins the NASO will be vindicated. As observed above, the aim is to achieve a mutual fit between our considered judgment and our theory.
Accordingly, so the argument goes, even if human beings and nonhuman animals hold the same status-conferring property to an equal degree, this does not entail that they have equal rights, provided that the interests at stake are different. This line of argument, for example, is adopted by Tom Regan in his discussion of the lifeboat case, where there are five survivors-four human beings and a dog-only four of which can be saved. Regan argues that even if human beings and dogs have equal moral status, we should abandon the dog and rescue the humans because the former would suffer less harm than the latter. This is because ''the harm that death is, is a function of the opportunities for satisfaction that it forecloses''; hence, the death of a human being is a greater loss than the death of a dog (Regan 1983, p. 324). Even if this argument is correct, I contend that Regan's solution does not provide us with a persuasive response to the NASO, for the following two reasons: first, it seems plausible to maintain that a convincing account of the basis of moral status should hold that human beings have more stringent rights than dogs not merely when and because the interests of the former are more fundamental than the interests of the latter, but also and mainly because the former are morally superior to the latter, insofar as, for example, at least some human beings are capable of moral personality, whereas dogs do not have this capacity. Second, there will be at least some cases in which human beings and nonhuman animals have the same interests at stake-or, more precisely, cases in which the interests at stake are equally fundamental. For instance, Singer observes that ''there must be some kind of blow […] that would cause the horse as much pain as we cause a baby by a simple slap'' (Singer 2011, p. 51). And, in these cases, Regan's solution does not have the theoretical resources to justify prioritising the rights of human beings over the rights of nonhuman animals. 
Indeed, if their moral status is equal, and if the interests at stake are equally fundamental, then it follows that the decision about which rights should be prioritised will depend on a coin flip, other things being equal. 11 For this reason, I conclude that it is necessary to identify a status-conferring property that human beings have, and nonhuman animals lack, to avoid the NASO. Second, one may think that a plausible way to overcome the NASO can be found in relational accounts of the basis of moral status which hold that human beings' moral status is grounded in their relational nature of being in relation to one another (Kittay 2005). 12 To start with, as many have observed, there are strong reasons to suspect that relational properties cannot justify a being's intrinsic value (Zimmerman 2015): indeed, there seems to be something odd in affirming that A's intrinsic value is grounded in the value that A has in relation to B, rather than in the value that A has whatever her relation to B. But even leaving this metaphysical worry aside, one may observe that relational accounts do not single out a property that only human beings have and nonhuman animals lack, for some nonhuman animals-especially domesticated nonhuman animals-also possess the relational property of being in relation to other human beings (Valentini 2014). In short, given the ubiquity of relations in the animal kingdom, it is unclear how relational views of moral status can avoid the NASO. The standard properties that are considered to ground moral status are thus unable to avoid the NASO. To overcome this difficulty, in what follows, I argue that the potential capacity for moral personality (PCMP) is a status-conferring property. And, since almost all human beings have this property, but nonhuman animals do not, the PCMP provides us with a justification to affirm that the former have a moral status that is different and superior to that of the latter.
13 Hence, a pluralist account of the basis of moral status that maintains that both the CMP and the PCMP are bases of moral status allows us to overcome the under-inclusiveness objection without falling prey to the NASO.

The potential capacity for moral personality (PCMP) as a basis of moral status

Few would deny that potentiality is valuable; indeed, potentiality is clearly valuable for the sake of its actualisation, or fulfilment. But is it plausible to hold that potentiality also has intrinsic value? As we saw earlier, arguing in favour of the intrinsic value of a property is a very difficult task, for it is unclear what needs to be shown to account for the intrinsic value of something. In what follows, however, I first refute some of the reasons that are usually given against the intrinsic value of potentiality: this will help us to elucidate further the meaning of potentiality and, moreover, to clarify on what basis it cannot be argued that potentiality is not valuable in and of itself. Second, on a more positive note, I conclude by discussing two reasons why we should think that the PCMP is also valuable in and of itself and, therefore, satisfies the Intrinsic Value condition. To begin with, it will be instructive to examine the distinction between capacity and potentiality. As Thomas K. Johansen observes, Aristotle, for example, uses these two terms almost interchangeably. More precisely, we can distinguish two conceptions of capacity, or potentiality, in Aristotle: first, the notion of a capacity ''in the sense of a power to bring about or undergo change'' (Johansen 2012, p. 209). Second, the modal notion of capacity which ''underlies our talk of things being in capacity, in contrast to their being in activity'' (Johansen 2012, p. 209). Accordingly, saying that ''A has the capacity for X'' means (1) that A has the power to undergo, or bring about, a change-i.e.
to pass from a state in which A is not doing X to a state in which A is actually doing X, and (2) that A is in the modal state of being able to do X, rather than in the modal state of actually doing X. But if this is true, then there is a plausible sense in which ''capacity'' and ''potentiality'' are not two different metaphysical notions; rather, they are two different kinds of the same metaphysical concept. To appreciate this, consider the statements ''A has the capacity to read'' and ''B has the potentiality for reading''. As we have just seen, these two statements indicate the same modal state in that neither A nor B is in activity-i.e. neither is reading. Furthermore, just as the first statement refers to A's power to cause a change-A can pass from a state in which she does not read to a state in which she actually reads-so, analogously, the second statement indicates B's power to bring about, or undergo, a change-B has the power to pass from a state in which she does not have the ability to read to one in which she actually possesses this ability. Therefore, it seems reasonable to maintain that ''capacity'' and ''potentiality'' are not different concepts, but they are two notions which describe two kinds of the same concept. Put simply, ''capacity'' and ''potentiality'' are used to distinguish two different kinds of ''potentialities''-that is, two different powers to undergo or bring about a change. This discussion can help us to rule out some of the reasons that are commonly advanced against the intrinsic value of potentiality in the literature. First, it is often suggested that since what is morally relevant is ''what is here and now'', rather than ''what may be'', potentiality cannot be morally significant in and of itself, for it has to do with what may be in a hypothetical future. 
As Warren observes: Merely potential people […] are just things that might have existed, that is, that at some time were empirically possible, but which in fact do not, never did, and never will exist. And what does not exist and never will cannot be harmed or wronged or have its rights violated. (Warren 1977, p. 280; emphasis added). Even if we grant the assumption that what may be is not morally relevant in and of itself, this does not entail that the potential capacity for X cannot have intrinsic value because the latter describes an ability, or a power, that certain entities possess here and now. In other words, while potential people refers to some entities that may be in the future-e.g. future generations-but are not here and now, potential capacity denotes a specific ability that a range of actual beings hold here and now. Therefore, the value of the potential capacity is grounded in what is here and now, rather than in what may be. Second, and relatedly, it is sometimes observed that only ''what A can do here and now''-as opposed to ''what A may be able to do in the future''-is morally relevant. Here again, even if we accept that what A may do in the future is not morally relevant in and of itself, this does not imply that the potential capacity for X is not intrinsically valuable. The reason for this is that, as has been noted above, capacity and potentiality denote two different abilities-i.e. powers to do something-that are held here and now. Accordingly, maintaining that A's potential capacity for X is morally relevant when assessing A's moral status amounts to saying that A holds an ability to do something here and now that confers moral worth on her. Therefore, potential capacity for X does not ground A's value in what A may be able to do in the future; rather, it justifies A's value on the basis of what A is able to do here and now. 
The analysis of the metaphysical distinction between the capacity for X and the potential capacity for X allowed us to refute some of the objections that are usually raised against the intrinsic value of potentiality. Admittedly, however, this does not say anything about whether the PCMP can indeed meet the Intrinsic Value condition. In the final part of this section, then, I discuss two reasons as to why the PCMP should be considered to also have intrinsic value. A standard line of argument to justify the moral significance of potentiality is to contend that those beings that have the ability to acquire some goods in the future have a right to be helped to obtain those goods. More precisely, the possession of the ability to acquire a range of goods generates an interest in the acquisition of those goods which, in turn, grounds a right to be helped to obtain those goods (Stone 1985). To begin with, it should be pointed out that, as we saw in Sect. 1, the mere existence of an interest is not sufficient to ground a right to the satisfaction of that interest: if A has an interest in X, this alone does not entail that A has a right against B to be helped to satisfy her interest in X, for it is not clear why B has a directed duty owed to A, in particular and for its own sake, unless A has intrinsic value. Thus, for instance, someone who denies that nonhuman animals have a right to life need not deny that nonhuman animals have a fundamental interest in living. It is conceptually coherent to affirm that although nonhuman animals have an interest in living, they do not hold any intrinsically valuable property which confers moral worth on them; therefore, they do not have the moral standing to have a right to life. It follows from this that a justification for the possession of the moral status of a right-holder on the grounds of potentiality must show that potentiality is intrinsically valuable.
And, this seems to rest on the claim that the possession of the ability to acquire a range of goods is itself intrinsically valuable. Put differently, the possession of this kind of ability confers moral worth upon its holder and thus accounts for her moral standing to have rights against others to be helped to obtain the goods she has the potential to acquire. Now, it is important to observe that, for our purposes, we do not need to justify the intrinsic value of any type of potentiality to acquire some goods. Rather, we only need to justify the intrinsic value of a specific kind of potentiality, namely, the PCMP. Hence, following a similar line of argument, I argue that while the possession of the PCMP is not intrinsically valuable simply because it represents the ability to obtain some goods, it has intrinsic value because it consists in the ability to obtain a range of non-instrumentally valuable moral powers. Put simply, the possession of the ability to acquire a range of non-instrumentally valuable moral powers is itself intrinsically valuable and thus confers moral worth on its holder; hence, the PCMP is a basis of moral status. This, however, is not the only line of argument that can be put forward in favour of the intrinsic value of PCMP. A different justification for PCMP's intrinsic value, I argue, can be mounted by appealing to a constitutive argument, whereby the intrinsic value of X lies in being a constitutive part of an intrinsic value, Y, which makes the former share in the value of the latter while being less valuable. To see this, we need to address the following question: what is a constitutive value? What kind of relationship must there be between the part and the whole for the former to be a constitutive part of the latter? According to some proponents of constitutive values, ''things are constituent goods if they are elements of what is good in itself which contribute to its value, i.e. 
elements but for which a situation which is good in itself would be less valuable'' (Raz 1986, p. 200). This kind of constitutive argument is often invoked by Aristotelians to account for the intrinsic value of those goods that are part of human flourishing (MacIntyre 2007, p. 149). For instance, relationships of love and companionship are usually considered to be elements of a flourishing life. This, however, Aristotelians argue, does not entail that these relationships are only valuable for the sake of the promotion of human flourishing. On the contrary, they are also non-instrumentally valuable because they are a constitutive component of human flourishing that contributes to the value of human flourishing itself. As some have observed, however, a part X can be a constitutive part of Y even if X does not contribute to the value of Y-or, at least, even if X does not contribute to Y's value in the same way that relationships of love contribute to the value of a flourishing life. To illustrate this, consider Cruft's (2010) analysis of the value of the duties of friendship. The duties that A owes to B, Cruft observes, are not only instrumentally valuable to the extent that they motivate A to behave in a friendly manner-indeed, a good friend should not care for her friend out of a sense of duty. The reason for this is that ''the duties themselves […] are a conceptually necessary constituent of friendship. Without such duties, the relationship would lack the directed normative character necessary for it to be friendship'' (Cruft 2010, p. 452; emphasis added). Accordingly, duties of friendship are not merely valuable to the extent that they promote the friendship relation, but they are also valuable in and of themselves, insofar as they are a conceptually necessary constituent of what friendship-which, ex hypothesi, has intrinsic value-is.
Crucially, then, duties of friendship do not contribute to the value of friendship in the same way that relationships of love contribute to the value of a flourishing life: a friendship without duties of friendship would not be less valuable, but it would not be friendship at all, for the former is a conceptually necessary constituent of the latter. We are now in a position to see that the metaphysical relation that holds between potentiality and actuality reveals that there is a plausible sense in which the PCMP is a constitutive part of the CMP in the same way in which duties of friendship are a constitutive part of friendship. In brief, this is because actuality retains its potentiality: hence, having the PCMP is a constitutive part of having the CMP. To appreciate this, consider Michael Frede's example: If we have an actually healthy person, what underlies the health -the person independently of being healthy -remains potentially healthy even after having been cured by a doctor, namely in so far as he continues to be in a state such that, if he were to be ill, he could still be cured by a doctor. (Frede 1994, p. 192). Analogously, then, it can be argued that part of what it means to be a being capable of moral personality is to be a being that retains the PCMP. Put differently, part of what it means to be a moral person is to be that kind of being that has the potentiality to reacquire the CMP if and when this has been lost; hence, the PCMP is a conceptually necessary constituent of the CMP. It follows from this that the PCMP is not merely instrumentally valuable but it is also intrinsically valuable because it is a constitutive part of something that is valuable for its own sake. A critic may object to this constitutive argument by noting that, for example, some individuals late in life lose their PCMP and never recover it.
14 While the validity of this claim seems to ultimately depend on the specific account of potentiality that one endorses, it is worth noting that affirming that part of what it means to be a moral person is to retain the PCMP does not entail that the PCMP will be actualised in all circumstances. Indeed, this claim is consistent with maintaining that there are several ways of losing the CMP which imply a loss of the PCMP, too. Thus, for example, a middle-aged adult human being capable of moral personality retains the potentiality to reacquire the CMP if and when this is lost. Nonetheless, this is compatible with acknowledging that there are cases in which such potentiality will not be realised, such as if this person dies or if she becomes severely cognitively disabled. Accordingly, it seems plausible to maintain that individuals late in life, who are still capable of moral personality, do not necessarily lose their PCMP completely. Rather, their PCMP is diminished because for them losing the CMP implies losing the PCMP in a wider range of cases than it does for middle-aged adult human beings who have the CMP, other things being equal. For this reason, I argue that these cases do not undermine the validity of the metaphysical relation between the CMP and the PCMP. To conclude, in this section, I showed that there are fewer reasons to be suspicious about the intrinsic value of potentiality and more reasons to maintain that potentiality is also valuable in and of itself than is commonly thought. In particular, I argued that the PCMP has intrinsic value because (1) it consists in the ability to acquire a range of non-instrumentally valuable moral powers, and (2) it is a constitutive value of the CMP, for the former is a conceptually necessary constituent of the latter. As we saw earlier, an argument in favour of the intrinsic value of the PCMP must explain what the PCMP is and, by doing so, pump the intuition that it indeed has value in and of itself. 
This discussion, however, will inevitably reach an end; at that point, we will have to seek a reflective equilibrium by testing the implications of a theory that maintains that the PCMP is a basis of moral status against particular cases, and vice versa. Therefore, it seems reasonable to hold that we should regard the bar of justification of the intrinsic value of the PCMP as lower the more intuitively convinced we are about the strength of the NASO. Far from begging the question, this allows us to reach a ''mutual fit'' between our considered judgment and our theory. With this point in mind, then, I conclude that we have enough reasons to affirm that the PCMP satisfies the Intrinsic Value condition.

Objections

In the final section of this paper, I discuss two objections so as to strengthen the argument made so far as well as illustrate its implications.

The intrinsic/extrinsic distinction objection

In Abortion and Infanticide, Michael Tooley presented what has become a standard example against the moral significance of potentiality. Here is the example: imagine a future in which a chemical has been discovered that can transform kittens into adult humans. Since kittens, Tooley argues, now have the potentiality to become adult humans, exactly like infants, a potentiality account entails that both kittens and infants are entitled to have their potentiality actualised qua potential adult humans. This, however, seems a very disturbing conclusion (Tooley 1972, pp. 61-2). In response to this example, it is usually suggested that, pace Tooley, there is a crucial distinction in the kind of potentialities that kittens and infants hold: while kittens have the extrinsic potentiality to become adult humans, infants have the intrinsic potentiality to do so. And, since, as I argued in Sect.
1, a being has moral status if its value supervenes upon its intrinsically valuable properties, it follows that only infants have a right to have their potentiality actualised qua potential adult humans (Harman 2003). A critic, however, may protest that this response hinges on an untenable distinction between intrinsic and extrinsic potentiality. Both the potentiality of kittens and that of infants, so the objection goes, need some external inputs-chemical and nurture, respectively-to be actualised. Therefore, it is difficult to see on what grounds it can be argued that the former is extrinsic, whereas the latter is intrinsic. Call this the intrinsic/extrinsic distinction objection. Proponents of potentiality views have often tried to reject the intrinsic/extrinsic objection by appealing to the metaphysics of essence (Reichlin 1997). Following this line of thought, the difference between the potentiality of kittens and that of infants lies in the fact that only the latter have the inherent telos to become adult human beings. This line of argument is notoriously problematic. In particular, it is unclear what determines a being's telos-that is, what defines a being's essence: what reasons do we have to maintain that becoming adult humans is not part of kittens' nature in Tooley's world where kittens can in fact become adult humans if provided with some external inputs? Metaphysical accounts of essence are very hard to defend. A more persuasive answer to the intrinsic/extrinsic objection, I contend, consists in showing that it does not provide us with sufficient reasons to reject the notion of potentiality. Rather, to the extent that the intrinsic/extrinsic objection is true, it points to a bullet that potentiality views have to bite, but one that is not too hard to bite, at most.
First of all, it is important to note that the intrinsic/extrinsic objection does not call into question the possibility of distinguishing intrinsic and extrinsic potentiality in a non-arbitrary way in all circumstances. So, for instance, it seems reasonable to hold that Tooley's chemical does not do all the work in transforming kittens into adult humans-that is, kittens' potentiality is not wholly extrinsic-like in the case of a god who has the ability to transform some creatures into beings of a different kind. (Indeed, if that were the case, we would have a principled reason to affirm that kittens' potentiality is extrinsic and thus that they do not have moral status qua potential adult humans.) Tooley's chemical, instead, triggers a reaction from a range of intrinsic physical properties that kittens hold which generate their transformation into adult humans. In other words, in Tooley's world, kittens possess some intrinsic properties that, if provided with the appropriate external inputs, actualise their potentiality to become adult humans. Hence, there is a relevant sense in which kittens do have the intrinsic potentiality to become adult humans, at least to some extent. If this is true, then the intrinsic/extrinsic objection does not reveal an inherent arbitrariness in the distinction between intrinsic and extrinsic potentiality which would cast doubt on the notion of intrinsic potentiality itself. Accordingly, the intrinsic/extrinsic objection does not undermine the validity of the concept of intrinsic potentiality as the kind of potentiality that, being intrinsically valuable, is a basis of moral status. Of course, this does not entail that in Tooley's world infants, but not kittens, have the intrinsic potentiality to become adult humans. On the contrary, it seems reasonable to concede that both infants and pre-injected kittens have the right to become adult humans qua potential adult humans.
On reflection, however, accepting that in a world in which kittens and infants have the same intrinsic potentiality to become adult humans, both have the right to have their potentiality actualised does not seem such an implausible conclusion. Nor is it a conclusion to which only potentiality views are committed. To appreciate this, consider a world-possibly less far-fetched than Tooley's-in which some AI systems display a high degree of rationality and reasonableness so that they can be deemed to be capable of moral personality. In such a scenario, according to the same line of argument that underpins Tooley's example, standard CMP-accounts of moral status seem to entail that such AI systems have intrinsic value and therefore have moral status qua actual moral persons, other things being equal. Proponents of potentiality views have attempted to show that Tooley's pre-injected kittens do not have the intrinsic potentiality to become adult humans; this claim, however, rests on shaky grounds. In this section, I argued that advocates of potentiality views have fewer reasons to worry about Tooley's example than they have usually thought. Tooley's thought experiment shows that in a world in which some nonhuman animals and some human beings have the same potentiality to become adult humans, potentiality views are committed to the conclusion that both the former and the latter have the right to have their potentiality actualised qua potential adult humans. This, however, is either precisely the kind of conclusion that a non-speciesist account of the basis of moral status is meant to entail, at best; or, it is a bullet that is neither too hard to bite, nor one that only potentiality views are committed to biting, at worst. The important point is that Tooley's objection does not show that intrinsic potentiality cannot be distinguished from extrinsic potentiality in a non-arbitrary way in all circumstances.
Hence, Tooley's example is insufficient to reject the notion of intrinsic potentiality as the kind of potentiality that is intrinsically valuable and thus grounds moral status.

Logical point(s) about potentiality

Even if the notion of intrinsic potentiality rests on solid metaphysical grounds, it has been argued that potentiality accounts of the basis of moral status should be rejected because they do not entail the implications that their advocates argue they have. This concern, for example, is raised by Feinberg's ''logical point about potentiality'': ''it is a logical error, some have charged, to deduce actual rights from merely potential (but not yet actual) qualification for those rights'' (Feinberg 1992, p. 48; emphasis in the original). As Benn puts it, ''a potential president of the United States is not on that account Commander-in-Chief [of the U.S. Army and Navy]'' (Benn 1974, cit. in Feinberg 1992). Feinberg's logical point is not a very pressing objection against potentiality accounts because it rests on the unwarranted assumption that these accounts are committed to holding that potential qualification for rights justifies actual qualification for rights. This need not be so: as I argued in Sect. 4, a potentiality account of the basis of moral status maintains that the actual qualification of rights is grounded in the actual possession of a potential capacity. There are, however, two versions of Feinberg's logical point which are worth noting in order to bring to light two significant implications of the difference between the CMP and the PCMP that have sometimes been neglected in the literature. The first version is the following: ''it is a logical error to deduce the same rights from the possession of potential and actual capacity''.
The reason why it would be a mistake to deduce the same rights from the possession of potential and actual capacity is that it seems reasonable to maintain that what is owed to a moral status-holder depends, at least to some extent, on the reason why she has moral status. If having moral status means being the object of directed duties in virtue of the sort of entity that one is, then this implies that the nature of the entity-i.e. the basis of moral status-informs the content of the directed duties that are owed to her. As Raz puts it, ''the ground of an entitlement determines its nature'' (Raz 1986, p. 223). Accordingly, since the potential capacity for X and the actual capacity for X are two different properties-the former is the ability to acquire the capacity for X, whereas the latter is the ability to exercise X-it follows from this that they generate different sets of rights. To be clear: this is not to say that the same right cannot be grounded in different properties. Rather, what I am claiming is that since the moral statuses of potential and actual persons are grounded in different properties, they have distinct-but, at least to some extent, overlapping-sets of rights. The second version of Feinberg's logical point, instead, reads as follows: ''it is a logical error to deduce equal rights from the possession of potential and actual capacity''. Indeed, it would be a mistake to maintain-as some, more or less explicitly, seem to suggest 15 -that potentiality and actuality ground equal rights, insofar as the latter has priority over the former. While the priority of actuality is a very complex topic, 16 for our purposes, it is sufficient to note the fairly uncontroversial point that the actual possession of X is more morally relevant than the potential possession of X, other things being equal (Burgess 2010, p. 143). Hence, the possession of the former grounds a moral status superior to that grounded by the possession of the latter.
Therefore, potential moral persons and actual moral persons do not have equal moral status, and thus they do not have equally stringent rights, other things being equal. It is worth observing that the inequality of moral status between actual and potential moral persons has important implications for the debate on the permissibility of abortion. Indeed, we can now see that even if the PCMP justifies a foetus' right to life, this does not entail that abortion is morally impermissible, all things considered. The reason for this is that the right to life of a foetus might conflict with another fundamental right of a being whose moral status is superior, e.g. the right to bodily integrity of the actual moral person who carries the foetus. Hence, a commitment to the PCMP as a basis of moral status does not necessarily entail the impermissibility of abortion. To conclude, potentiality accounts of moral status do not rest on a logical fallacy, for they maintain that the actual possession of a potential capacity grounds actual qualification for a range of rights. However, in this section, I argued that the difference between potentiality and actuality has significant implications for the moral status of potential and actual moral persons. In particular, potentiality views cannot provide stand-alone accounts of moral status; rather, they need to be part of a pluralist account of the basis of moral status which holds that potential and actual moral persons have different and unequal rights because the PCMP and the CMP are two different and unequally valuable status-conferring properties.

Conclusion

Standard liberal theories of justice have often relied on the assumption that the CMP is the basis of moral status. As many have noted, however, this has the disturbing implication of excluding a wide range of entities from the scope of justice.
In this paper, I argued that liberals have in their arsenal the theoretical resources to offer a powerful response to this challenge: liberals should embrace a pluralist account of the basis of moral status which maintains that the CMP and the PCMP are two bases of moral status. This pluralist account has significant implications for liberal theories of justice: on the one hand, it identifies a status-conferring property that almost all human beings have, but nonhuman animals lack. Therefore, it shows that liberals can broaden the scope of justice while, at the same time, maintaining that the rights of almost all human beings should have priority over the rights of nonhuman animals. On the other hand, it reveals that, contra what is commonly believed, actual and potential moral persons have different and unequal rights, other things being equal.

15 As we noted above, this seems entailed by Rawls's observation that ''moral personality is here defined as a potentiality that is ordinarily realized in due course. It is potentiality which brings the claims into play'' (Rawls 1971, p. 505; emphasis added). The equality of moral status between potential and actual moral persons is also explicitly defended in more classical neo-Thomist views. See Patrick Lee and Robert P. George (2008).
16 For illuminating discussions on this issue, see Witt (1994).
A Screening of Fungi from Oil Palm Rhizosphere in Peat Soils and the Potential as Biological Agents against Ganoderma boninense

One of the diseases that attack oil palm plants is stem rot disease. One possible control effort is to use rhizosphere fungi from oil palm plants grown in peat soils. This study aimed to select fungi from the rhizosphere of oil palm plants in peat soil based on morphological characteristics and to test their potential as biological agents against Ganoderma boninense. The research was conducted by exploration, observation and experiment using a completely randomized design (CRD). The parameters observed were the macroscopic characteristics of the fungi from the oil palm rhizosphere, the disease severity index, the inhibitory power of the rhizosphere fungi against G. boninense, the colony diameter and growth rate of the highly antagonistic rhizosphere fungi, the type of hyperparasitism between the rhizosphere fungi and G. boninense, and the macroscopic and microscopic morphological characteristics of the highly antagonistic rhizosphere fungi. The results yielded 12 rhizosphere fungal isolates, 4 of which were antagonistic to G. boninense. Isolate J5 has a high antagonistic power of 70.26% and belongs to the genus Trichoderma; isolate J7 belongs to the genus Trichoderma, isolate J10 to the genus Aspergillus and isolate J12 to the genus Mucor.

Introduction

Oil palm (Elaeis guineensis Jacq.) is one of the plantation crops in Indonesia with high economic value. The area of oil palm plantations in Indonesia in 2016 reached 12 million ha [1]. Oil palm plantations have spread across several provinces in Indonesia, including Riau Province [2]. Oil palm in Riau Province was planted in the 1990s, and most plantings are in the final stages of the production cycle, so replanting activities need to be planned. Replanting is done by clearing the palm trees that are no longer productive in the area. Oil palm replanting must be done carefully, so that no problems arise in the future.
Problems arise when oil palm is replanted without sanitation: roots or boles of the old plants are left behind and can become a source of inoculum for plant diseases. One of the important diseases in oil palm plantations is stem rot, which comprises two diseases: basal stem rot (BSR) and upper stem rot (USR). According to data from [2], G. boninense has attacked oil palm plantations covering 533.8 ha, the largest affected area being 211 ha in Kampar District. The attack of G. boninense on oil palm plantations makes control measures necessary. Control has so far been carried out only against BSR, in part through biological control; [3] stated that the highest percentage of inhibition in vitro against G. boninense, 100%, was achieved by Trichoderma sp. Since the same pathogen causes both BSR and USR, biological control can also be used against USR. One control effort is to utilize fungi originating from the oil palm rhizosphere in peat soil. Such fungi are expected to be effective in controlling USR, since many of the oil palms that host G. boninense are planted in peat soils. Rhizosphere fungi live around plant roots because growing roots release exudates in the form of water-soluble compounds, such as sugars and organic acids, which serve as nutrients for the fungi. Fungi from the oil palm rhizosphere in peat soil can suppress the development of plant pathogens [4]. Antagonistic fungi control pathogenic fungi both directly and indirectly; indirect mechanisms include induced systemic resistance and Plant Growth Promoting Fungi (PGPF). PGPF are found around the roots of healthy cultivated and wild plants [5]. Biological control agents for plant pathogens such as Ganoderma need to be explored, isolated, identified and tested for their potential. Fungi from the oil palm rhizosphere are promising candidates for controlling G. boninense because they associate, grow and develop in the same environment as the pathogen. This study aimed to select fungi from the rhizosphere of oil palm plants in peat soils based on morphological characteristics and to test their potential as biological agents against G. boninense.

Methods

This research was conducted at the Laboratory of Plant Pathology, Faculty of Agriculture, Universitas Riau, over three months from December 2017 to February 2018. The materials used were rhizosphere soil from an oil palm plantation in Rimbo Panjang, Riau; a G. boninense isolate from the collection of the Biofertilizer and Biofungisida industrial business unit, derived from palm trees showing USR symptoms in oil palm plantations; cucumber seeds; potato dextrose agar (PDA); 2% water agar; spiritus; amoxicillin; tissue; 70% alcohol; sterile distilled water; aluminum foil; plastic wrap; and graph paper. The research comprised exploration, observation and experimentation. The antagonism test of the fungi from the oil palm rhizosphere used a completely randomized design (CRD) with 9 treatments and 3 replications, giving 27 experimental units. The growth test of the antagonistic fungi used a CRD with 4 treatments and 5 replications, giving 20 experimental units. Means were further compared by the honestly significant difference (HSD) test at the 5% level. The study consisted of isolation of fungi from the oil palm plantation rhizosphere, purification of the isolates, rejuvenation of the G. boninense isolate, a hypovirulence test, an antagonism test of the rhizosphere fungi against G. boninense, growth measurement of the rhizosphere fungi with high antagonistic power against G. boninense, a hyperparasitism test of the 4 rhizosphere isolates with high antagonistic power against G. boninense, identification of the highly antagonistic rhizosphere fungi based on morphological characteristics, and observation.

Disease Severity Index (DSI) in the Hypovirulence Test

The hypovirulence test was scored using the disease severity index (DSI) following [6]; the DSI is calculated with the formula given there. Inhibition of G. boninense by the fungi from the peat rhizosphere was observed from day 3 after incubation until the mycelium of one of the fungi covered a petri dish containing PDA, by measuring with millimeter paper the radius of the pathogen G. boninense growing away from and toward the antagonistic fungus. The percentage of inhibition was calculated from these radii; if the inhibition is >60%, the antagonistic fungus has the potential to become a biological agent [7].

Fungi Macroscopic Characteristics in the Rhizosphere of Oil Palm

The exploration yielded 12 fungal isolates from the oil palm rhizosphere. Their macroscopic morphological characteristics are presented in Table 1 and Figure 1, which show that the rhizosphere fungal isolates from oil palm plants in peat soils differ in color, colony surface and spread of growth. This is presumably because they represent different genera and species, which is supported by [5], who stated that fungi fall into the groups Zygomycetes, Ascomycetes, Basidiomycetes and Deuteromycetes.

Disease Severity Index (DSI) in the Hypovirulence Test

The disease severity indices of the 12 rhizosphere fungal isolates are shown in Table 2. After analysis of variance, the eight hypovirulent rhizosphere isolates showed different percentages of inhibition against G. boninense; the inhibitory power of the rhizosphere fungi can be seen in Table 3.
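The DSI and inhibition formulas referenced above did not survive extraction. As an illustration, the following sketch uses the dual-culture inhibition formula commonly applied in such assays and a common seedling-based DSI formulation; the function names, radii and severity scores are hypothetical, not taken from the paper:

```python
def percent_inhibition(r_away_mm, r_toward_mm):
    """Dual-culture inhibition: PI = (R1 - R2) / R1 * 100, where R1 is the
    pathogen's radial growth away from the antagonist and R2 its growth
    toward the antagonist (the standard formula; the paper's own was lost)."""
    return (r_away_mm - r_toward_mm) / r_away_mm * 100.0

def disease_severity_index(scores, max_score):
    """A common DSI formulation: DSI = (sum of severity scores) /
    (number of seedlings * maximum score) * 100."""
    return 100.0 * sum(scores) / (len(scores) * max_score)

# Hypothetical radii (mm): a pathogen growing 40 mm away from the
# antagonist but only 12 mm toward it is inhibited by 70%.
pi = percent_inhibition(40.0, 12.0)
print(round(pi, 2))  # 70.0 -- above the 60% biological-agent threshold of [7]

# Hypothetical severity scores for 5 seedlings on a 0-4 scale.
dsi = disease_severity_index([0, 1, 2, 1, 0], 4)
print(round(dsi, 1))  # 20.0
```

On this standard formula, isolate J5's reported 70.26% inhibition means G. boninense grew toward J5 at less than a third of its unimpeded radial rate.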
Table 3 shows that isolate J5 had the highest inhibition, 70.26%, not significantly different from isolates J7, J10 and J12, but significantly different from isolates J2, J6, J11, J8 and the untreated control. This is presumably because isolate J5 grows faster, as seen in the colony diameter and growth rate measurements, so that nutrients that would otherwise be used by G. boninense were used by the rhizosphere fungus instead. [8] and [7] stated that a fast-growing fungus is able to outcompete its opponent for growing space and, in the end, suppress its growth. The antagonism test results of the 4 rhizosphere isolates with high antagonistic power against G. boninense can be seen in Figure 2. The inhibition presumably occurs because the rhizosphere fungi are capable of competing for growing space, so that the antagonist exploits the growth medium as a food source at the expense of the pathogen, since both need nutrients to grow. [9] stated that a high growth rate of an antagonistic fungus determines its activity in suppressing the pathogen through competition for space and nutrients. Isolates J5 and J7 also showed inhibition zones, seen as a color change of the G. boninense hyphae where they meet the hyphae of the antagonist. This is presumably because the antagonistic fungi secrete antibiotic substances capable of inhibiting the growth and development of G. boninense; according to [10], biological agents produce secondary metabolites that function as antibiotics, namely dermadin and gliotoxin. Isolate J10 also showed an inhibition zone in the antagonism test, marked by a clear zone at the meeting point of the antagonist's mycelium and that of G. boninense. This is presumably because the isolate produces antibiotic substances, resulting in the formation of a clear zone.
This is supported by the results of [11], who stated that the antibiosis mechanism is shown by the formation of a clear zone.

Colony Diameter and Growth Rate of the Highly Antagonistic Rhizosphere Fungi (mm/day)

The rhizosphere isolates with high antagonistic power showed different colony diameters and growth rates after analysis of variance. The results of the further least significant difference test at the 5% level for diameter and growth rate can be seen in Table 4. Table 4 shows that isolate J5 had the largest diameter and the highest growth rate, reaching 92.30 mm and 19.60 mm/day, and differed significantly from the 3 other isolates: J7 (84.00 mm and 12.70 mm/day), J12 (60.40 mm and 11.50 mm/day) and J10 (44.40 mm and 6.10 mm/day). Isolate J5 grew very quickly, filling the growing space by the fourth day of observation. This relates to the antagonism results: the fungus is able to compete with G. boninense for space and nutrients. This is supported by the results of [11], who stated that the fungi Mucor sp. and T. harzianum were able to fill the growing space by the third day.

Hyperparasitism of the Rhizosphere Fungi toward G. boninense

The hyperparasitic interactions of the rhizosphere fungi with high antagonistic power were diverse in the hyperparasitism tests; the observations can be seen in Figure 3. The interaction of isolate J5 with G. boninense (Figure 3.a) appears aimed at degrading the cell wall of G. boninense. [13] stated that the fungus T. harzianum is able to produce the enzymes chitinase, β-1,3-glucanase, β-1,4-glucanase and lipase, compounds that can break down the chitin, glucans and lipids of the pathogen's cell wall, and [14] stated that these enzymes play an important role in degrading the cell membrane, forming holes in the pathogen's hyphae. The interaction between isolate J7 and G. boninense (Figure 3.b) consisted of interruption of hyphal formation and damage to the G. boninense hyphae.
This is presumably because the antagonistic fungus produces a wide variety of chemical compounds that are toxic to G. boninense. [15] reported that the group of fungi known as biological agents is able to produce toxic compounds that serve as antimicrobials, and [16] stated that the genus Trichoderma can inhibit the growth of pathogen hyphae by producing the antibiotics gliotoxin and viridin. Isolate J10 interacted with G. boninense (Figure 3.c) in the form of thinning of the hyphae, after which the pathogen's hyphae broke. This is consistent with the results of [17], who stated that the lysis mechanism is characterized by the pathogen's hyphae changing color, becoming clear and empty, then breaking and eventually being destroyed. Figure 3.d shows that the interaction of isolate J12 with the pathogen took the form of fungal hyphae growing curled (deformation/malformation). This is consistent with the results of [18], who stated that the symptoms caused by infection by a microbe can include discoloration and deformation.

Macroscopic and Microscopic Identification

The 4 rhizosphere fungal isolates with high antagonistic power were identified based on macroscopic and microscopic characteristics, with reference to the book "Pictorial Atlas of Soil and Seed Fungi" [19]; the results can be seen in Table 5. Table 5 and Figure 4 show that isolate J5 has the following macroscopic characteristics: a greenish-white colony, mycelium spreading in all directions and a rough mycelium form. Its microscopic characteristics are round conidia, upright and branched conidiophores, short and thick phialides, and septate, hyaline hyphae. Isolate J5 belongs to the genus Trichoderma based on the literature [19]. Trichoderma has a well-developed conidiophore form (Figure 6).
Isolate J10 belongs to the genus Aspergillus based on identification using the book "Pictorial Atlas of Soil and Seed Fungi" [19]. Aspergillus has the microscopic characteristics of unbranched conidiophores arising from special foot cells and enlarging at the tip to form swollen vesicles. The texture is velvety or cotton-like, and the reverse color of the colony is usually white, golden or brown. Table 5 shows that isolate J12 has the following macroscopic characteristics: colonies on PDA medium light brown with white edges (Figure 7), mycelium spreading in all directions and a smooth mycelium form. Microscopically, isolate J12 has erect, pale yellow sporangiophores, oval spores of various sizes, and round, hyaline sporangia. Isolate J12 belongs to the genus Mucor based on identification using the literature [19]. Mucor has white to light brown colonies; microscopically, the hyphae are septate and long, and the spores are round and dark. The genus Mucor generally has long sporangiophores (diameter 50-300 µm) and does not form rhizoids. The sporangiophores have rather hard walls and are branched; the columella is round or roundish and the sporangiophores are hyaline.

Conclusions

Based on the results, 12 fungal isolates were obtained from the rhizosphere of oil palm plants in peat soils, with distinct morphological characters based on color and shape. The hypovirulence test on cucumber seedlings yielded eight isolates: J2, J5, J6,
Revealing the Importance of Aging, Environment, Size and Stabilization Mechanisms on the Stability of Metal Nanoparticles: A Case Study for Silver Nanoparticles in a Minimally Defined and Complex Undefined Bacterial Growth Medium

Although the production and stabilization of metal nanoparticles (MNPs) is well understood, the behavior of these MNPs (possible aggregation or disaggregation) when they are intentionally or unintentionally exposed to different environments continues to be underrated or overlooked. A case study is performed to analyze the stability of silver nanoparticles (AgNPs), among the most frequently used MNPs owing to their excellent antibacterial properties, within two bacterial growth media: a minimally defined medium (IDL) and an undefined complex medium (LB). Moreover, the effects of aging, size and stabilization mechanism are considered. The results clearly indicate strong aggregation when AgNPs are dispersed in IDL. Regarding LB, the 100 nm electrosterically stabilized AgNPs remain stable while all others aggregate. Moreover, a pronounced aging effect is observed for the 10 nm electrostatically stabilized AgNPs when added to LB: after aggregation, a restabilization effect occurs over time. Overall, this study demonstrates that aging, medium composition (environment), size and stabilization mechanism, factors rarely acknowledged as important in nanotoxicity studies, have a profound impact on AgNP stabilization and should receive more attention in scientific research.

Introduction

Lately, the world has seen an exponential rise in the applications of metal nanoparticles (MNPs), leading to an increasing interest of researchers from different scientific disciplines [1,2]. During the synthesis of MNPs, stabilizing agents are added to prevent the interaction of MNPs with one another. These stabilizing or capping agents typically stabilize the MNPs through the adsorption or covalent attachment of organic compounds.
The stabilization can be achieved either sterically, electrostatically or with both types combined: electrosterically. Due to Brownian motion and Van der Waals attractive forces, particles can move toward each other and aggregate. Macromolecules, like the polyvinylpyrrolidone (PVP) polymer, can sterically stabilize MNPs by attaching to the surface and forming a 'brush-like' layer. This layer reduces the degrees of freedom when the particles approach each other, leading to an energetically unfavorable state whereby the particles repel each other and remain stabilized. Electrostatic repulsion is obtained through charged groups at the particle surface, which cause like-charged particles to repel one another.

pH and EC Measurement

The pH and electrical conductivity (EC) of both media were measured by the inoLab® WTW pH Level 1 pH-meter (Xylem, New York City, NY, USA) with a SenTix® 81 WTW electrode (Xylem) and the K611 EC-meter (Consort, Turnhout, Belgium) with a SK20T electrode (Consort), respectively.

Evaluation of the Stability

The different types of AgNPs were diluted in a 1:1 ratio. This dilution was made in their solvent (2 mM NaC solution for the NaC stabilized AgNPs and ultra-pure water (Milli Q®, Merck) for the PVP and BPEI stabilized AgNPs), in an IDL medium or in an LB medium. The influence of these different media on the stability of the different AgNPs was investigated by means of three complementary analysis techniques: (1) UV-VIS absorption spectra, as an addition to visual observations; (2) TEM; and (3) DLS.

UV-VIS Spectroscopy

UV-VIS spectroscopy was performed with a Genesys UV-VIS spectrophotometer (Thermo Scientific, Waltham, MA, USA). The absorption spectrum was recorded from 340 nm to 700 nm. The samples were measured at different time intervals between 0 and 6 h and after 24 h of incubation. The mixtures were stored at room temperature in quartz cuvettes under day-night light conditions in between measurements.
Blanks (without AgNPs) of an IDL and an LB medium, mixed in a 1:1 ratio with the solvent specific to the stabilization type of the AgNPs, were also measured. Moreover, a visual analysis of the samples was performed in parallel.

Transmission Electron Microscopy (TEM)

TEM was performed at 60 kV with the TEM JEM1010 (Jeol, Tokyo, Japan). Pictures were digitalized using a Ditabis (Pforzheim, Germany) system. As previously described, the AgNPs were diluted in a 1:1 ratio with the medium prior to TEM analysis. Two microlitres of sample were placed onto a carbon coated 200 mesh TEM grid (Electron Microscopy Sciences, Hatfield, PA, USA) and air dried. TEM analysis was performed immediately after mixing the AgNPs and medium, or after 24 h of incubation of this mixture in a 1.5 mL Eppendorf tube under day-night light conditions.

Dynamic Light Scattering (DLS)

DLS was measured with the Zetasizer Nano ZS (Malvern Instruments, Malvern, United Kingdom) with Zetasizer software 7.11. After mixing the AgNPs with the medium, as previously described, the samples were analyzed by DLS at different time intervals between 0 and 6 h and after 24 h of incubation. Analogous to the UV-VIS spectrophotometry and TEM samples, the mixtures were stored at room temperature in PMMA cuvettes under day-night light conditions in between measurements. In the DLS software, silver was selected as the dispersed material, with a refractive index (RI) of 0.150 and an absorption of 0.001. Water was selected as the dispersant for all samples, with a viscosity of 0.8872 cP and an RI of 1.330 at a temperature of 25 °C. Three measurements, each of three runs, were performed on each sample per time point.

pH and EC

The pH and EC of both media were measured immediately after preparation. A pH of 7.06 ± 0.06 and 7.02 ± 0.04 was measured for the IDL and LB medium, respectively. Consequently, the pH of both media can be considered equal and neutral.
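The DLS sizing described above converts a measured diffusion coefficient into a hydrodynamic diameter via the standard Stokes-Einstein relation. A minimal sketch using the dispersant parameters stated in the methods (water, 0.8872 cP, 25 °C); the function name and the diffusion coefficient value are illustrative:

```python
import math

def hydrodynamic_diameter_nm(diff_coeff_m2_s,
                             temp_k=298.15,
                             viscosity_pa_s=0.8872e-3):
    """Stokes-Einstein: d_H = k_B * T / (3 * pi * eta * D),
    with the dispersant viscosity of water at 25 degrees C."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    d_m = k_b * temp_k / (3.0 * math.pi * viscosity_pa_s * diff_coeff_m2_s)
    return d_m * 1e9  # metres -> nanometres

# An illustrative diffusion coefficient of ~4.9e-11 m^2/s corresponds to
# roughly 10 nm particles in water at 25 degrees C.
print(round(hydrodynamic_diameter_nm(4.92e-11), 1))
```

This is why the dispersant viscosity and temperature entered in the instrument software matter: the reported size scales inversely with both the viscosity and the diffusion coefficient.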
The LB medium had an EC of 19.69 ± 0.35 mS cm−1, while the IDL medium had a lower EC of 9.53 ± 0.28 mS cm−1. The high amount of NaCl in the LB medium, compared with the minimal medium, resulted in an increase in EC [20].

Evaluation of the Stability by UV-VIS Spectroscopy

AgNP suspensions show unique optical UV-VIS absorption spectra and have typically vibrant colors because they have free electrons in the conduction band. Specific wavelengths of light can drive the conduction electrons in the metal to oscillate collectively, a phenomenon known as surface plasmon resonance. These oscillations depend on the size and shape of the AgNPs. Therefore, UV-VIS spectroscopy can be used as a characterization technique that provides information about the size and shape of the AgNPs [35][36][37][38]. Small Ag nanospheres (10-50 nm) typically have a small absorbance peak near 400 nm, while larger spheres (100 nm) give a broader peak with a maximum that shifts toward longer wavelengths, near 500 nm. Moreover, the spectra of larger spheres have a secondary peak at shorter wavelengths, which is a result of quadrupole resonance in addition to the primary dipole resonance [39][40][41][42]. Destabilization and the formation of aggregates can lead to peak broadening or the formation of a secondary peak at longer wavelengths. Ever-increasing aggregation will finally lead to the disappearance of the typical UV-VIS absorption spectrum. Therefore, absorbance spectra indicate whether an AgNP suspension has destabilized over time; this change in the absorbance spectrum is often visible as a color change [37,39,43]. The original color of the LB medium is pale yellow and that of IDL is colorless. Figure 1 shows the UV-VIS absorption spectra of the blanks (without AgNPs). For all 4 mixtures, the spectra and color remained constant through time. Concerning the spectra of the mixtures with LB, a typical increase in absorbance at shorter wavelengths was observed.
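The spectral reading described above (locating the plasmon peak and checking for a red shift on aggregation) can be sketched as a simple maximum search; the function names and the synthetic Gaussian spectrum below are illustrative, not data from the paper:

```python
import math

def spr_peak(wavelengths_nm, absorbances):
    """Return the wavelength of maximum absorbance (the SPR peak)."""
    return max(zip(wavelengths_nm, absorbances), key=lambda p: p[1])[0]

def red_shift_nm(peak_before_nm, peak_after_nm):
    """A positive red shift of the SPR peak suggests aggregation
    or growth; a vanishing peak suggests heavy aggregation."""
    return peak_after_nm - peak_before_nm

# Synthetic spectrum over the measured range (340-700 nm):
# a single Gaussian peak near 400 nm, as for 10 nm Ag spheres.
wl = list(range(340, 701, 10))
ab = [math.exp(-((w - 400) / 40.0) ** 2) for w in wl]
print(spr_peak(wl, ab))        # 400
print(red_shift_nm(400, 480))  # 80 -- a shift toward 480 nm, as seen
                               # for larger or aggregated particles
```

Real spectra would of course need baseline (blank) subtraction against the medium, as done with the blanks in Figure 1.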
The spectra of mixtures with IDL gave a negligible signal for all tested wavelengths. Considering all tested capping agents, the AgNPs of 10 nm or 50 nm had a bright yellow color, while the large AgNPs of 100 nm were cloudier and had a white-gray color. This original, less vibrant color led to an indistinct color change when aggregation occurred; color change will, therefore, only be shown for the AgNPs of 10 and 50 nm. Figures 2-4 show the spectra of the NaC, PVP and BPEI stabilized AgNPs, respectively. When the particles were mixed solely with their solvent, the spectra remained constant through time, which indicates a stable suspension. The Ag nanospheres of 10 and 50 nm had a bright yellow color and a small absorbance peak with a maximum near 400 nm for the 10 nm AgNPs and at 420 nm for the 50 nm AgNPs. The larger spheres of 100 nm gave a broader peak with a maximum shifted toward 480 nm; moreover, a secondary peak was observed in this spectrum at 400 nm. These typical AgNP absorbance spectra are consistent with the literature [39][40][41][42]. For some samples, a small decrease in absorbance as a function of time was observed when suspended in their solvent: the nanoparticles may settle during storage, leading to a subtle change to a lighter color and a small decrease in absorbance across the whole spectrum. Adding the 10 nm NaC AgNPs (Figure 2) to the IDL and LB media resulted in a sudden change in color: bright pink for the IDL medium and bright orange for LB. The color intensity decreased over time and, after 24 h, the color returned to colorless for IDL and to a light yellow for LB. Regarding the minimal IDL medium, the original 10 nm absorbance peak was still noticeable, but another peak at a longer wavelength formed instantly after mixing. After a slight shift to the right, a decrease in absorbance occurred across the whole spectrum as a function of time, indicating an aggregation process. Concerning the LB medium, the original absorbance peak of the 10 nm AgNPs had vanished almost completely by the onset of the measurements, but the peak reappeared with time, which indicates a disaggregation process. The 100 nm NaC stabilized particles showed a slight decrease in cloudiness as time advanced and, after 24 h, the mixtures returned to the original IDL and LB colors. The original 100 nm spectrum was still slightly noticeable in both IDL and LB at the start but disappeared quickly in time. Concerning the 10 nm PVP AgNPs (Figure 3), a pink color was observed in the IDL medium and a bright yellow color in the LB medium.
Through time, the color intensity decreased to less intense pink for the IDL medium and a more grayish yellow for the LB medium. Considering the minimal medium, the original 10 nm AgNPs peak still was noticeable at first, but a plateau at a longer wavelength was formed instantly. When time advanced, a slight shift to the right occurred and the absorbance of the whole spectrum decreased, thus aggregation occurred. Regarding the LB medium, a similar result was observed: the original 10 nm AgNPs peak still was clearly noticeable at first but disappeared with time. As mentioned before, the color change of 100 nm was obscure: the intensity of the cloudiness decreased as time advanced and, finally, the AgNPs color disappeared. Concerning both the IDL and LB media, the original 100 nm spectrum was clearly visible during the first timepoints but, afterward, the whole spectrum decreased quickly in absorbance. Adding the 10 nm NaC AgNPs ( Figure 2) to the IDL and LB media resulted in a sudden change in color: bright pink for the IDL medium and bright orange for LB. The color intensity decreased over time and, after 24 h, the color returned to colorless for IDL and to a light yellow for LB. Regarding the minimal IDL medium, the original 10 nm absorbance peak was noticeable, but another peak at a longer wavelength was formed instantly after mixing. After a slight shift to the right, a decrease in absorbance occurred across the whole spectrum as a function of time, indicating an aggregation process. Concerning the LB medium, the original absorbance peak of the 10 nm AgNPs was vanished almost completely by onset of the measurements, but the peak reappeared with time, which indicates a disaggregation process. The 100 nm NaC stabilized particles gave a slight decrease in cloudiness as time advanced and, after 24 h, the mixtures returned to the original IDL and LB colors. 
The original yellow color of the 50 nm BPEI AgNPs (Figure 4) changed immediately after mixing with IDL or LB: first a grayish color was observed in both media, followed by a return to the original medium color. For both media, the first measured spectrum showed the original 50 nm AgNP peak still apparent at 420 nm, followed by a plateau at a longer wavelength. As time advanced, the absorbance of the whole spectrum decreased and, thus, aggregation occurred. The larger BPEI-stabilized AgNPs of 100 nm showed a slight decrease in cloudiness and, finally, a complete disappearance of the color in the IDL medium. Considering the LB medium, the cloudiness and color of the 100 nm AgNPs were still observed after 24 h of incubation. Regarding the absorbance spectra, the original 100 nm peak was still visible at the beginning of the measurements after mixing with the IDL, but decreased sharply as time advanced. After mixing the 100 nm BPEI AgNPs with the LB medium, the 100 nm AgNP peak was visible and the spectrum stayed constant through time.

Evaluation of the Stability by TEM

After mixing the NaC, PVP and BPEI stabilized AgNPs with their solvent, TEM analysis was performed immediately and after 24 h (Figures 5-7). No aggregation was observed at either time interval. Furthermore, the AgNPs showed a spherical shape and their size was in accordance with expectations. Additionally, TEM analysis showed that the LB medium gave some more background-like crystal structures compared with Milli Q, 2 mM NaC or IDL.
Images of 10 nm NaC AgNPs (Figure 5) in IDL show that aggregation occurred immediately after mixing, and an increase in the size of these aggregates was observed as time advanced. Similar observations were made for the 100 nm NaC AgNPs mixed with IDL and LB. Conversely, 10 nm NaC AgNPs in LB initially formed aggregates but, after 24 h, the aggregates became smaller, indicating a disaggregation process. Increasing aggregation was also observed for 10 nm and 100 nm PVP AgNPs (Figure 6) when suspended in IDL. Regarding the LB medium, the results were different. For both sizes of the sterically stabilized AgNPs, single particles were observed instantly after addition to the medium, and aggregated particles were visible after 24 h. Moreover, it was remarkable that the aggregates of the 10 nm PVP AgNPs had a more spherical shape compared to the other observed aggregates. Aggregates formed when the 50 nm BPEI AgNPs (Figure 7) were mixed with IDL and LB. Similar to the previous observations, the size of these aggregates became larger as time passed.
Concerning the larger BPEI AgNPs, increasing aggregation was seen in IDL, but single particles were observed when the AgNPs were suspended in LB. Even after 24 h, the 100 nm BPEI AgNPs were not aggregated in this complex medium, indicating a stable nanosuspension.

Evaluation of the Stability by DLS

During the final stage, the UV-VIS spectroscopy and TEM measurements were substantiated through DLS analysis. Size distribution graphs and the polydispersity index (PdI) of each measurement are reported in this paper. The PdI is a value that ranges from 0 to 1. It is used to describe the width of the particle size distribution and gives information about the polydispersity of the sample. A PdI value higher than 0.400 indicates a polydisperse system. This means the sample may not be suitable for a DLS measurement and that the provided data may be unreliable [44,45]. Reporting the hydrodynamic diameter of an aggregate alone seems incorrect to us due to the complexity of aggregation processes, which can lead to more polydispersity and to the formation of non-spherical aggregates whose size cannot be defined by a single value. By combining PdIs and size distribution graphs, DLS can be a good addition to our previous results. Figures 8-10 show the size distribution graphs, which represent the number percent as a function of size (nm) of NaC, PVP and BPEI AgNPs, respectively. The PdI values of each measurement are listed in Table 1. For all tested AgNPs, no remarkable shift in the size distribution graph was observed when the particles were suspended in their solvent. Moreover, the PdI values of these samples were smaller than 0.400, apart from one single measurement. Similar to the UV-VIS spectrophotometer and the TEM observations, the DLS data confirmed that the used AgNPs remained stable during 24 h in their solvent.
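The PdI screening rule described above can be expressed as a small helper function. This is an illustrative sketch of ours, not part of the study's workflow; the function name and the example time series are invented, with only the 0.400 cut-off taken from the text [44,45]:

```python
def dls_reliable(pdi, threshold=0.400):
    """True if a DLS measurement's polydispersity index (PdI) is at or
    below the 0.400 cut-off; higher values indicate a polydisperse
    sample whose DLS size data should be treated as unreliable."""
    if not 0.0 <= pdi <= 1.0:
        raise ValueError("PdI is defined on the range 0 to 1")
    return pdi <= threshold

# Flag which time points of a hypothetical 24 h series are reliable:
pdi_series = [0.21, 0.35, 0.47, 0.62]
reliable = [dls_reliable(p) for p in pdi_series]
print(reliable)  # [True, True, False, False]
```

Applied to the measurements in Table 1, such a filter marks exactly the aggregating samples (rising PdI) as unreliable for DLS sizing.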
Regarding the electrostatically stabilized AgNPs, the PdI (Table 1) increased sharply for both tested sizes when suspended in the minimal IDL medium. When the 100 nm particles were mixed with LB, a similar pattern was observed. This indicates an increasing polydispersity, due to aggregation, leading to less reliable DLS results [44,45]. The size distribution graphs (Figure 8) shifted back and forth for these samples, at least partly due to the higher polydispersity. The first measurement, for which the PdI was still below 0.400, showed a distribution around 1000 nm for the 10 nm NaC AgNPs in IDL, while the size distribution of the 100 nm NaC particles initially stayed situated around 100 nm in both IDL and LB. The PdIs of 10 nm NaC AgNPs mixed with LB were within the limit of 0.400, and the size distribution graph shifted from a larger to a smaller size, indicating a disaggregation process.

Nanomaterials 2019, 9, x FOR PEER REVIEW 14 of 21

Analogous to the NaC AgNPs, the PdIs of 10 nm and 100 nm PVP (Table 1) in IDL and of 100 nm PVP in LB rose above 0.400. Only the first measurements showed a PdI value below 0.400. The size distribution (Figure 9) of the 100 nm particles was still situated around 100 nm immediately after mixing with both media. Concerning the 10 nm AgNPs in IDL, a larger size was measured at the first timepoint. For these three samples, the profile shifted to the right, indicating an aggregation process. The 10 nm PVP AgNPs in LB gave PdIs that stayed below 0.400 as time advanced, apart from the last measurement. The distribution curves showed that the size became larger. Lastly, the BPEI-stabilized particles were analyzed by DLS. The PdIs (Table 1) increased above 0.400 for the mixtures of 50 nm and 100 nm AgNPs in IDL and of the 50 nm in LB, with the exception of the first timepoint. Regarding the size distribution graphs (Figure 10), an increasing size was measured, starting from the first measurement.
No change in size distribution was observed when the 100 nm BPEI AgNPs were suspended in LB. Furthermore, the PdIs of this sample stayed below 0.400 for the performed measurements, which indicates that the 100 nm BPEI AgNPs remained stable in LB.

Table 1. PdI (Polydispersity Index) of NaC (sodium citrate) (10 and 100 nm), PVP (polyvinylpyrrolidone) (10 and 100 nm) and BPEI (branched polyethyleneimine) (50 and 100 nm) stabilized AgNPs (silver nanoparticles) in their solvent, IDL (minimally defined medium) and LB (Luria-Bertani, complex undefined medium). PdI was measured at different time points between 0 h and 24 h. PdI values > 0.400 are indicated by grayscale.

Figure 10. Number distribution of 50 nm (left) and 100 nm (right) BPEI (branched polyethyleneimine) stabilized AgNPs (silver nanoparticles) in Milli Q, IDL (minimally defined medium) and LB (Luria-Bertani, complex undefined medium). Overlays of different time points between 0 h and 24 h are shown. The first (black) and last (red) measurement is represented with a solid line; the arrows indicate the shift.

Discussion

All three analysis techniques corroborated that the tested AgNPs remained stable for 24 h in their solvent. No change in absorbance spectrum or color was observed. Microscopy images showed no aggregates, and DLS gave acceptable PdIs and narrow size distribution curves that did not shift in time. This control confirms that the storage conditions (light and temperature) had no effect on the aggregation state of the AgNPs. When the electrostatically (NaC), sterically (PVP) and electrosterically (BPEI) stabilized AgNPs were mixed with the minimally defined IDL medium, aggregation occurred almost immediately. A remarkable color change occurred for the small AgNPs, and all spectra returned to the absorbance spectrum of the blank medium.
A nanomaterial is defined as a material where 50% or more of the particles, in an unbound state, as an aggregate or as an agglomerate, have one or more external dimensions in the size range 1-100 nm [46,47]. TEM showed aggregates with sizes that far exceed the limits of this definition. Moreover, the DLS results corroborated the instability: high polydispersity was measured, and the size distribution graphs shifted and finally ended up on the right side of the size axis. The IDL medium had a neutral pH and contained only salts and glucose. No complex and undefined components were added to this minimal medium. The salts dissociate into ions, and this ionic strength leads to compression of the EDL of the electrostatically and electrosterically stabilized AgNPs. The reduction in thickness of this layer causes a decrease in the repulsive electrostatic force, so aggregation can occur more easily [30,31,[48][49][50]. Moreover, both negatively and positively charged ions present in the medium interacted with charged groups of the stabilizing agents. Citrate groups are negatively charged, for example, and the electrosteric stabilizing agent BPEI has a positive charge. Via this interaction, the surface of the AgNPs is neutralized, the repulsive forces are weaker and, thus, aggregation can occur [48,51,52]. Moreover, previously reported studies revealed that the behavior of the PVP polymer depends on the ionic composition, and ions like H2PO4−, SO42− and HPO42−, highly present in the IDL medium, can have a negative influence on the PVP polymer [53][54][55]. This explains, at least partly, why the PVP AgNPs aggregated in the IDL medium. In contrast, previous research [27,[56][57][58] indicates that sterically stabilized AgNPs are more stable than AgNPs with other stabilization mechanisms, even at a higher ionic strength. However, this was not confirmed in the IDL, possibly for two reasons.
First, the above-mentioned ions that can have a negative influence on the PVP polymer were absent in the cited studies. Moreover, and presumably most importantly, the IDL medium had a much higher ionic strength than the media used in those studies. Although the IDL medium can be used as a standard medium for determining Ag+ toxicity [20], the NP instability should be taken into account when performing AgNP toxicity studies within this medium. Except for two of the tested AgNPs, mixing with LB resulted in aggregation during the 24 h studied. The first exception was observed for the 10 nm NaC AgNPs. When 10 nm NaC AgNPs were suspended in the LB, aggregation occurred immediately but, over time, a disaggregation process was observed with all three techniques. Secondly, the 100 nm BPEI-stabilized AgNPs remained stable in the LB medium: no color change or change in spectrum was observed as time advanced, and the TEM and DLS data showed no aggregation. The NaC-stabilized AgNPs showed different stability depending on their size. Aggregation of the 100 nm NaC AgNPs possibly occurred due to the disturbance of the EDL [30,31,[48][49][50], as previously described. The 10 nm NaC AgNPs behaved differently and disaggregated as time advanced. Compared with IDL, the LB has a similar, neutral pH but a higher EC value, and it contains organic matter including hydrolyzed casein and yeast extract, both high in protein [59][60][61]. It is assumed that the originally electrostatically NaC-stabilized particles first aggregate due to EDL compression, followed by the adsorption of organic molecules onto the surface of the AgNPs, resulting in a steric restabilization. A similar result was reported by Albanese et al. for gold NPs [50]. We assume that the higher surface area of the 10 nm AgNPs compared with the 100 nm AgNPs leads to more interaction with the organic matter and, thus, a size-dependent aggregation response.
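The surface-area argument in the last sentence follows from sphere geometry alone. The back-of-the-envelope sketch below is ours (the helper function is an invention for illustration, not part of the study):

```python
def surface_to_volume(d_nm):
    """Surface-area-to-volume ratio of a sphere of diameter d (in nm):
    SA/V = (pi * d**2) / (pi * d**3 / 6) = 6 / d."""
    return 6.0 / d_nm

# At equal total silver mass, 10 nm spheres expose ten times more
# surface area than 100 nm spheres, consistent with the stronger
# interaction of the smaller AgNPs with organic matter:
print(surface_to_volume(10))   # 0.6 (per nm)
print(surface_to_volume(100))  # 0.06 (per nm)
```

The factor of ten in exposed surface is one plausible reason why only the 10 nm NaC particles picked up enough organic matter in LB to restabilize sterically.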
Analogous to IDL, the ionic composition (in this case NaCl) and ionic strength of the LB medium led to instability of both sizes of the PVP AgNPs [53][54][55]. However, it was remarkable that both the 10 and 100 nm PVP AgNPs showed no aggregation immediately after mixing with LB but, as the incubation time increased, aggregation occurred. A better stability was thus initially achieved when organic matter was present, even at a high ionic strength. Enhanced NP stability due to additional steric hindrance from the presence of organic compounds has already been reported by other researchers [62][63][64][65][66]. Similar to the NaC AgNPs, the BPEI-stabilized AgNPs showed a size-dependent stability. Different sizes lead to a different curvature of spherically shaped AgNPs and to a difference in the physical packing of the electrosterically stabilizing agent. Smaller AgNPs have a higher curvature compared to larger AgNPs. This higher curvature leads to a reduced layer thickness of the polyelectrolytes (like BPEI) and fewer interaction sites between stabilizer and NP surface [67][68][69]. Hence, lower stability can be observed for smaller AgNPs. This phenomenon was confirmed by the BPEI AgNPs: the LB medium led to the disturbance of the BPEI stabilizing mechanism for the 50 nm AgNPs, while the 100 nm AgNPs remained stable through time.

Conclusions

We provided a case study here, in which the stability of AgNPs with different stabilization mechanisms and sizes was analyzed in bacterial growth media during 24 h. It has become clear that (1) aging, (2) medium composition (the environment), (3) the NP size and (4) the NP stabilization mechanism all have a profound influence on the stability. Our results showed that the addition of complex organic matter to the environment led to a better stability for some of the tested AgNPs, even at a high ionic strength, possibly due to extra steric hindrance of the NPs [62][63][64][65][66].
Moreover, we proved that the stability was size-dependent, attributed to the difference in curvature or surface area of small versus larger AgNPs [2,18,50,[67][68][69][70]. Finally, the stabilization mechanism of the AgNPs was important. Differently stabilized AgNPs behave differently, and the aggregation was dependent on the incubation time. To the best of our knowledge, this is the first report to demonstrate the influence of all four parameters on the stability of AgNPs. The impressive difference in toxicity of MNPs in different environments has already been noted by other researchers [31,50,71,72]. We believe that the difference in MNP stability within these different environments is at least partly responsible for the observed toxicity differences. NPs are characterized by a large surface-to-volume ratio due to their smaller size compared to the bulk material [2,18,70]. When NPs aggregate, the external and reactive surface area decreases and, therefore, the reactivity, bioavailability and toxicity change [4,8,31,50]. MNP stability, therefore, should be considered within MNP research. On the one hand, the MNP stability should be analyzed within their application during storage. On the other hand, and at least as important, is analyzing the MNP stability in the environment that is reached when the MNPs move from their application to their intentional or unintentional 'target'. The complexity of these 'target' environments, with a certain pH, ionic strength and often a presence of organic matter, is extremely diverse and can range, for example, from ground water to active sludge [3,29,73,74] or from human gastrointestinal fluids to blood [12,15]. We believe that our approach was much needed because the influence of (1) aging, (2) medium composition (the environment), (3) the NP size and (4) the NP stabilization mechanism on MNP stability is currently underrepresented in the literature [1,4,15,[26][27][28].
Since we proved that all four of the aforementioned parameters are highly relevant to MNP stability, we strongly recommend their inclusion in future projects. Authors should monitor the stability of the MNP (with a certain size and stabilization mechanism) within the specific application domain or under simulated circumstances during the relevant time period, and take these results into account before drawing conclusions.
Analysis of the Multimedia Use of Primary Schools

Keywords: Teachers; Multimedia; Elementary School

Multimedia in learning can make the learning atmosphere more attractive, drawing student attention and encouraging student interaction. The objectives of this research were to: (1) describe the use of multimedia learning by elementary school teachers in Kendari City; (2) analyze the use of multimedia learning by elementary school teachers in Kendari City in terms of school accreditation; (3) analyze the obstacles to the use of multimedia learning by elementary school teachers in Kendari City. This research used mixed methods with a sequential exploratory strategy, carried out in elementary schools of Kendari City. The school sample was determined using the cluster random sampling technique, selecting 3 schools from each of 11 sub-districts. From each sub-district, 2 public schools (one accredited A and one not accredited A) and 1 accredited private school were taken, so that the total number of respondents was 66 people. Research data were collected through observation forms and interviews. The results of this research showed that: (1) the use of multimedia learning by primary school teachers in Kendari City was still lacking; (2) there was a meaningful difference in the use of multimedia learning by elementary school teachers in Kendari City in terms of school accreditation; (3) the barriers to the use of multimedia learning by primary school teachers in Kendari City include: (a) teachers' lack of competence in IT, (b) teachers' misconceptions about multimedia learning, (c) the lack of supporting facilities and infrastructure, (d) the lack of training on multimedia learning, and (e) the lack of educational software used by teachers to support learning activities.

INTRODUCTION

The advances in technology and information have become a part of life that plays a significant role in education.
Technology and information refer to tools and media that can solve more complex and more dynamic learning problems. Myori et al. (2019) showed that the integration of technology and information into learning gives students decisions and responsibilities in the learning process so that they can participate in learning activities. In order to increase the role and activity of students in learning, it is necessary to select the appropriate learning media. One usable learning medium is multimedia. Praheto, Andayani, Rohmadi & Wardani (2017) explained that multimedia, when applied in learning, can be interpreted as a multimedia application used to deliver messages in the form of knowledge, skills, attitudes and decisions, and that it can stimulate students' feelings, attention and willingness so that the learning process is intentional, purposeful and controlled. Multimedia combines text, graphics, animation, video, music, sound/narration and sound effects used to convey messages or data (Meifiani & Prastyo, 2015). Multimedia combines different media such as text, photos, graphics, sound, animation and interactive videos, packaged as digital (computerized) files and used to convey messages to the public (Arsyad & Fatmawati, 2018). According to Firdaus, Damiri & Tresnawati (2012), multimedia is a medium that is easily understood by any group compared to brochure media, considering that multimedia contains a combination of sound, text, animation, photos and videos. In the field of learning, multimedia means using computers to design text, graphics, audio, video and animation in the form of multimedia learning so that there is a relationship between individuals and the media (Rasyid, Azis & Saleh, 2016).
Multimedia consists of two types: (1) linear multimedia, namely multimedia that does not contain a user-operated controller; (2) interactive multimedia, namely multimedia that contains a user-operated controller so that the user can freely select the next program content (Daryanto, 2013). Multimedia in learning needs to be interactive to engage students, with learning interactions occurring between students and the multimedia. According to Leow & Neo (2014), students can master concepts well when they use interactive multimedia. Armansyah, Sulton & Sulthoni (2019) presented interactive multimedia as an alternative that helps students understand concepts. Interactive multimedia is a solution to students' limitations in gaining learning experiences, understanding material concepts well, and increasing students' interest in learning (Maharani, Suryani & Ardiyanto, 2018). The 2013 curriculum includes the ability to select multimedia as a skill that learning actors should have. Multimedia has made learning more dynamic, so teachers need to be able to use it so that more creative and effective ideas are applied in learning. Interactive learning makes the learning process between the learning media and the students go in two directions (Ramansyah, 2016). Through the use of multimedia, the hope is to create a learning environment that can raise students' attention toward better learning. Preliminary studies in the elementary schools of Kendari City showed that there were still many elementary school teachers who were not fully able to select and use the right multimedia content in the classroom. Although there were adequate facilities in schools, teachers could not use multimedia properly. In addition, the media actually used fell into only one conventional category, namely books, whiteboards and power sockets.
This was because the teachers already felt comfortable with the existing media and did not want to innovate with multimedia learning. However, in line with the development of technology and information, the development of multimedia learning must be able to offer experiences that can renew the understanding of the lesson content. The weakness of primary school teachers in Kendari City in understanding and implementing multimedia learning methods becomes increasingly apparent when compared to several research results on the use of multimedia in primary schools. The study of Pravitasa & Yulianto (2017) concluded that mastery of concepts and improved understanding of students are affected by the use of interactive multimedia. Bakhtiar (2018) concluded that multimedia had been used effectively in the learning process in elementary schools. In addition, the students also gave good-category answers in thematic learning activities with multimedia teaching materials. Purba, Hernawati & Suryadi (2018) explained that interactive media maps of Indonesian cultures were developed to help fourth-grade elementary school students increase their interest in learning. This diversity of research showed that the use of multimedia in elementary schools can increase student interest, increase student motivation, improve mastery of concepts, and improve student learning outcomes. The empirical facts of the research above show the urgency of multimedia use in elementary schools. The analysis of the use of multimedia in elementary schools contributes new information that can be used as a reference for conducting multimedia training courses for teachers and for obtaining multimedia teaching material in elementary schools. This research was urgent as it provides an overview of the skills of primary school teachers in understanding multimedia concepts and in designing and creating multimedia teaching materials.
Based on the description above, a specific empirical study of multimedia learning in elementary schools in Kendari City is required to provide complete information on the multimedia used by educators in learning. The objectives of this study were: (1) to describe the use of multimedia learning by primary school teachers in Kendari City; (2) to analyze the differences in the use of multimedia learning by primary school teachers in Kendari City with respect to school accreditation; (3) to analyze the limiting factors in the use of multimedia learning by primary school teachers in Kendari City.

METHOD

This study used mixed methods, combining qualitative and quantitative research. According to Creswell (2010), mixed research is a combination of qualitative and quantitative research approaches. The mixed strategy of this research was a sequential exploratory strategy. The first phase of this research was to analyze quantitative data on the use of multimedia learning by elementary school teachers in Kendari City and the differences in that use. The second phase was to analyze qualitative data on the factors that cause the low development of multimedia learning among primary school teachers in Kendari City. This study was conducted in elementary schools in Kendari City from March to June 2019. The school sample was determined using the cluster random sampling technique by selecting 3 schools from each of 11 sub-districts; from each sub-district, 2 public schools (one accredited A and one not accredited A) and 1 accredited private school were taken. After the schools had been established, respondents were identified by selecting a teacher and the headmaster in each school, so the total number of respondents was 66 people. The research data were collected using observation sheets and interviews.
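The sampling design described above (3 schools per sub-district across 11 sub-districts, with one teacher plus the headmaster per school) can be sketched as follows. The school registry and all identifiers below are invented for illustration; only the counts come from the text:

```python
import random

random.seed(7)  # for reproducibility of the illustration only

# Hypothetical school registry: per sub-district, pools of A-accredited
# public, non-A-accredited public, and accredited private schools
registry = {
    f"sub-district-{i}": {
        "public_A": [f"public-A-{i}-{j}" for j in range(4)],
        "public_nonA": [f"public-nonA-{i}-{j}" for j in range(4)],
        "private_A": [f"private-A-{i}-{j}" for j in range(3)],
    }
    for i in range(1, 12)  # 11 sub-districts
}

sampled_schools = []
for pools in registry.values():
    # 2 public schools (one accredited A, one not) + 1 accredited private
    sampled_schools.append(random.choice(pools["public_A"]))
    sampled_schools.append(random.choice(pools["public_nonA"]))
    sampled_schools.append(random.choice(pools["private_A"]))

respondents = 2 * len(sampled_schools)  # a teacher + the headmaster each
print(len(sampled_schools), respondents)  # 33 66
```

The stratification per sub-district is what makes this cluster sampling rather than a simple random draw over all schools in the city.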
The data in the study were analyzed quantitatively and qualitatively. The quantitative analysis was done descriptively, through categorization, and inferentially, to test the hypotheses. Table 1 shows the categorization of the use of multimedia learning by primary school teachers in Kendari City. The interval and categorization criteria were adopted from the categorization by Salim et al. (2020). Inferential quantitative analysis was used to test the difference in multimedia learning use by primary school teachers in Kendari City with respect to school accreditation. The analysis used an independent sample t-test. The data were analyzed using the SPSS 22 application with the following decision rule: if the significance value was less than α = 0.05, then Ho was rejected, meaning that there was a significant difference in the use of multimedia learning by primary school teachers in Kendari City when seen from school accreditation. The qualitative data analysis used process stages including editing, classifying, verifying, analyzing, concluding, and recommendation.

FINDINGS AND DISCUSSION

The use of multimedia learning by primary school teachers in Kendari City was measured along 4 dimensions, namely: (1) the availability of multimedia learning devices, (2) the use of learning media, (3) the design of learning media, and (4) student understanding of multimedia learning. The respondents' responses to these dimensions are presented in Table 2. Table 2 shows that elementary schools in Kendari City have sufficient availability of multimedia learning support devices. However, the primary school teachers encountered problems in using them, as shown by the data in Table 3: the use of multimedia learning and multimedia learning design were weak. In general, the multimedia learning of primary school teachers in Kendari City was still weak.
In general, this was influenced by the teachers' inability to understand multimedia concepts, develop multimedia, and operate computers. Salehudin & Sada (2020) explained that multimedia development in practice requires computer expertise to support the multimedia preparation process, as well as the ability to match the material, student characteristics, and needs with the characteristics of the multimedia being developed, so that it arouses interest and influences students' motivation to learn. The interviews revealed the following reasons for the limited use of multimedia learning by teachers: the schools still have limited resources and sources of knowledge about the provision of multimedia devices for the creation of technology-based learning media; teachers did not fully understand the concept of multimedia learning and the types of applications used to create educational multimedia; teachers had no technical skills, so they had difficulty developing multimedia learning for classroom purposes; most teachers regarded multimedia tools (projector/LCD, computer/laptop, internet, sound system) as multimedia learning programs; none of the teachers had interactive multimedia, only linear multimedia such as PowerPoint slides and textbooks; and the linear multimedia used by teachers did not fully help students understand the material. These results are in line with Hadijah (2018), who explained that the problems related to the use of multimedia in learning are the readiness of the schools, in terms of both the facilities and infrastructure that support the use of multimedia, and the readiness of teachers to apply multimedia in the learning process. Setiawan, Asrowi & Suryani (2017) explained that factors influencing the use of multimedia include teachers' poor mastery of the technology, the availability of facilities and infrastructure not being well prepared, and the internet network not being evenly distributed.
The use of multimedia learning by primary school teachers in Kendari City with respect to school accreditation, for schools with accreditation A and schools without accreditation A, is shown in Tables 3 and 4 below. Table 3 shows that the use of multimedia learning by primary school teachers in Kendari City for schools with accreditation A falls in the "adequate" category; all indicator aspects were assessed as fairly good. Table 4 shows that the use of multimedia learning by primary school teachers in Kendari City for schools without accreditation A falls in the weak category. Several indicator aspects were in the poor category; only the availability of multimedia learning devices was in the fair category. This shows that the availability of multimedia learning devices was quite good but not evenly distributed across schools without accreditation A, and that the main problem lay with the human resources, i.e. the teachers: the respondents did not fully understand the concept of multimedia learning and lacked understanding of applications for creating multimedia learning programs. The test of the differences in multimedia learning use by primary school teachers in Kendari City with respect to school accreditation was carried out with an independent sample t-test, after the prerequisite normality and homogeneity tests had been performed. Statistically, the data were normally distributed and the data groups were homogeneous. The results of the analysis are shown in Table 5.

Table 5. Results of the independent sample t-test: Tcount = 2.475; Sig (2-tailed) = 0.025; Result: Reject Ho.

The analysis in Table 5 shows that the value of Sig. (2-tailed) = 0.025.
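The decision rule applied in Table 5 can be reproduced with a pooled independent two-sample t-test. The sketch below uses only the Python standard library and invented scores, since the paper's raw data are not given; the critical value is the standard two-tailed t-table entry for the sketch's degrees of freedom:

```python
import math
import statistics as st

def pooled_t(sample_a, sample_b):
    """Independent two-sample t statistic with pooled variance
    (equal variances assumed, as the homogeneity test allowed)."""
    na, nb = len(sample_a), len(sample_b)
    sp2 = ((na - 1) * st.variance(sample_a) +
           (nb - 1) * st.variance(sample_b)) / (na + nb - 2)
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    return (st.mean(sample_a) - st.mean(sample_b)) / se

# Invented multimedia-use scores for teachers at A-accredited
# versus non-A-accredited schools:
accredited_a = [75, 68, 80, 72, 77, 70, 74]
non_a = [60, 65, 58, 62, 66, 59, 61]

t = pooled_t(accredited_a, non_a)
t_crit = 2.179  # two-tailed critical value for alpha = 0.05, df = 12
reject_h0 = abs(t) > t_crit  # same decision rule as the SPSS analysis
print(round(t, 3), reject_h0)
```

Comparing the t statistic against the critical value is equivalent to the paper's comparison of the significance value against α = 0.05.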
This value is smaller than α = 0.05, so the analysis indicates a significant difference in the use of multimedia learning by primary school teachers in Kendari City when viewed from school accreditation. These results confirm that the use of multimedia learning by elementary school teachers in Kendari City was classified as poor, and that one cause of this was the status of schools without accreditation A. This result is supported by Annisa, Tanjung & Ridwan (2016), who found that schools with accreditation A have better infrastructure than schools with accreditation B or C, and by Zulnika (2017), who showed that schools with good accreditation could increase the quality of students' learning. Based on the respondents' interviews, several factors were found to limit the use of multimedia learning by primary school teachers in Kendari City, as shown in Table 6.

Table 6. Inhibiting Factors in the Use of Multimedia Learning

Assessed aspect: Availability of multimedia learning. Interview result: the limited availability of supporting facilities and infrastructure for applying multimedia learning devices was an inhibiting factor, as were the limited human resources/experts available as sources of information on multimedia learning.

Assessed aspect: Use of multimedia learning. Interview result: teachers' competence in technology was still very low, especially in multimedia learning; there was a lack of training in technology-based learning media, especially multimedia learning; and teachers lacked references and understanding of the concept of multimedia learning.

Assessed aspect: Design of multimedia learning. Interview result: knowledge of multimedia learning was limited, so most respondents understood projectors/LCDs, computers/laptops, and the internet as multimedia themselves, rather than as tools that support the use of multimedia learning.
Assessed aspect: Students' understanding with multimedia learning. Interview result: teachers made little explicit use of learning software to support learning activities; classroom learning was still conventional, and learning strategies that used technology as a learning resource were still lacking.

The research results on the inhibiting factors in teachers' use of multimedia learning in Kendari City show that many teachers experienced these factors in schools without accreditation A. In schools with accreditation A, these limiting factors were not predominant in teachers' use of multimedia, as the schools' human resources and quality supported it. These results match the study by Setyaningsih (2017), who explained the connection between school accreditation status and school quality, which increased after completion of the accreditation program. Afriani (2017) also found in her research that accreditation achievement had a significant correlation with educators' productivity. Irawan, Tagela & Windrawanto (2020) showed in their study that schools with excellent accreditation had better quality than schools with merely good accreditation. The results of this study provide information for teachers, schools, principals, educational institutions, and the local government of Kendari City that there are issues related to the use of multimedia learning by primary school teachers in Kendari City that must be addressed by providing attention to supporting facilities, infrastructure, and multimedia classrooms, and by implementing policies so that multimedia is actively used by teachers in the learning process.
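The difference test applied above (normality and homogeneity checks followed by an independent-samples t-test at α = 0.05) can be sketched as follows. The score lists are hypothetical illustrations, not the study's data.

```python
# Independent-samples t-test with its prerequisite checks, as described above.
# The two score lists are hypothetical illustrations, not the study's data.
from scipy import stats

accredited_a = [68, 69, 70, 71, 72, 73, 74, 75]      # schools with accreditation A
non_accredited_a = [58, 59, 60, 61, 62, 63, 64, 66]  # schools without accreditation A

# Prerequisite tests: normality (Shapiro-Wilk) and homogeneity of variances (Levene).
shapiro_a = stats.shapiro(accredited_a)
shapiro_b = stats.shapiro(non_accredited_a)
levene = stats.levene(accredited_a, non_accredited_a)

# For these data both groups are normally distributed and the variances are
# homogeneous (p > 0.05 in all three tests), so the t-test applies.
t_stat, p_two_tailed = stats.ttest_ind(accredited_a, non_accredited_a)
decision = "Reject H0" if p_two_tailed < 0.05 else "Fail to reject H0"
print(f"t = {t_stat:.3f}, Sig. (2-tailed) = {p_two_tailed:.3f}: {decision}")
```

The null hypothesis (no difference between the two accreditation groups) is rejected when the two-tailed p-value falls below α = 0.05, matching the decision rule used for Table 5.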
CONCLUSION Based on the results of this study, several conclusions can be drawn regarding the use of multimedia learning by elementary school teachers in Kendari City: (1) in general, the use of multimedia learning by primary school teachers in Kendari City was still not good, owing to the low availability of multimedia facilities and infrastructure and the teachers' limited ability to design materials in multimedia form; (2) there were significant differences in the use of multimedia learning by primary school teachers in Kendari City when viewed from school accreditation; and (3) there were still several factors hindering the use of multimedia learning by primary school teachers in Kendari City that must be considered, with attention given to increasing the use of multimedia in primary school learning. This study recommends establishing a multimedia house in Kendari City to serve as a provider of multimedia learning resources for primary school teachers. Further research could regularly evaluate the use of multimedia learning, considering that multimedia has become one of the learning media needed in 21st-century learning.
Tetralogy of Fallot Surgical Repair: Shunt Configurations, Ductus Arteriosus and the Circle of Willis In this study, the hemodynamic performance of three novel shunt configurations considered for the surgical repair of tetralogy of Fallot (TOF) is investigated in detail. Clinical experience suggests that the shunt location, connecting angle, and diameter can influence the post-operative physiology and the neurodevelopment of the neonatal patient. An experimentally validated second-order computational fluid dynamics (CFD) solver and a parametric neonatal diseased great-artery model that incorporates the ductus arteriosus (DA) and the full patient-specific circle of Willis (CoW) are employed. Standard truncated resistance CFD boundary conditions are compared with the full cerebral arterial system, which resulted in 21, −13, and 37% differences in flow rate at the brachiocephalic, left carotid, and subclavian arteries, respectively. Flow splits at the aortic arch and cerebral arteries are calculated and found to change significantly with shunt configuration for TOF. The central direct shunt (direct shunt) has 5% higher pulmonary flow than the central oblique shunt (oblique shunt) and 23% higher than the modified Blalock Taussig shunt (RPA shunt) while the DA is closed. Maximum wall shear stress (WSS) in the direct shunt configuration is 9 and 60% higher than that of the oblique and RPA shunts, respectively. A patent DA significantly eliminated the pulmonary flow-control function of the shunt repair. These results suggest that, due to the higher flow rates at the pulmonary arteries, the direct shunt, rather than the central oblique or right pulmonary artery shunts, could be preferred by the surgeon. This extended model introduces new hemodynamic performance indices for the cerebral circulation that may correlate with the post-operative neurodevelopment quality of the patient.
Electronic supplementary material The online version of this article (doi:10.1007/s13239-017-0302-5) contains supplementary material, which is available to authorized users. INTRODUCTION The primary surgical repair of common congenital heart defects (CHD), particularly the tetralogy of Fallot (TOF), pulmonary artery atresia (PAA), and hypoplastic left heart syndrome (HLHS), involves the reconstruction of palliative vascular shunts that are anastomosed between the aorta and the pulmonary arteries (1st stage shunt surgery). Studies demonstrated poor functional outcome with reduced exercise capacity, diminished cardiac output, and risks of heart failure after the surgical repair. 10,53,54 Furthermore, the post-operative shunt hemodynamics is not stable due to vascular growth, collateral vessels, and the post-operative management strategy. 13,38 Although there is no consensus on the hemodynamic evaluation criteria that can correlate with the post-operative performance of 1st stage shunt surgeries, 50 computational fluid dynamics (CFD) simulations have proven useful in improving the hemodynamics of the 3rd stage shunt surgeries. 9 Similar studies that investigate the hemodynamics of the 2nd stage shunt surgeries 7,40 as well as the 1st stage surgical palliation are relatively rare. [3][4][5]23,24,30,44 Furthermore, the existing studies focused entirely on the surgical repair of HLHS, 48 in which the Norwood (innominate artery or aorta to right pulmonary artery) and Sano (right ventricle to pulmonary artery) shunt variations were analyzed. 30 However, the 3D arterial geometry of CHD constitutes a spectrum of anatomical templates, 39 in which HLHS represents only one of the extreme anatomical configurations. The other, opposite anatomical extreme is the TOF disease anatomy. 1,19 Unlike HLHS, the TOF disease template has a large aorta and underdeveloped pulmonary arteries.
It is composed of a ventricular septal defect and right ventricular hypertrophy 2,16 and occurs in 3 out of 10,000 live births. 2 Thus, the lack of comprehensive 1st stage shunt surgery planning investigations on the TOF disease anatomy motivates the present manuscript. Another focus of the current study is the Circle of Willis (CoW) region of the anatomy, since the computational studies available in the literature that investigate first-stage shunt surgeries have, to our knowledge, overlooked the full detail of the cerebral great vessel circulation system, particularly the CoW section. 32 In this manuscript we hypothesize that the neonatal cerebral arterial hemodynamics is critical for 1st stage patient-specific pre-surgical shunt design. Unlike the adult cerebral circulation, for the newborn baby a large percentage of the total cardiac output may be delivered to the brain: 50% in neonates versus 15% in adults. 41 Optimal intra- and post-operative cerebral arterial perfusion is also critical for the normal neurodevelopment of the newborn CHD patient. Among fetuses with single ventricle anomalies, lower cerebrovascular resistance was associated with higher neurodevelopmental (ND) scores. 55 Indeed, ND dysfunction has become the most common and potentially the most disabling outcome of CHD repair, 34 including a high prevalence of low-severity developmental problems in the areas of language, motor skills, attention, and executive function. 33 Neurodevelopment-associated impairment may occur in up to 70% of survivors as they grow through childhood. 26 These recent clinical facts 18 prompted the present investigation, in which potential cerebral perfusion changes due to different shunt configurations are studied in detail. Patent ductus arteriosus (PDA) accounts for approximately 10% of all CHDs, with an incidence of at least 2-4 per 1000 term births.
14 DA plays an important role in the 1st stage palliation of congenital heart diseases and can be left open clinically at both pre- and post-operative stages as a hybrid therapy. 14 Although a moderate-size patent DA should be closed by the time the patient is 1-2 years old, the decision on DA closure in the neonatal period remains uncertain. 6 In the present manuscript, we studied both the patent DA and closed DA states in order to provide insight to surgeons on the role of the DA in shunt hemodynamics. Thus, in this manuscript we study the pre-surgical planning of 1st stage shunt operations through a 3D model of the cardiovascular system including the neck and cerebral arteries (CoW region). Resistance boundary conditions are assigned both to the artery outlets and inlets to represent the flow competition between the outflow trunks and the downstream organs. Three different shunt configurations designed by pediatric cardiovascular surgeons are implemented into our model. Furthermore, these shunt configurations are studied in two different PDA states: when the DA is open and functioning, and when the DA is totally closed and not functioning. The present manuscript is organized as follows: in the ''Materials and Methods'' section, together with the details of the CFD solver, the 3D reconstruction of the realistic cardiovascular system geometry including the CoW region, and the shunt configurations designed by surgeons as the post-surgery anatomy for surgical planning, are described. In the ''Results'' section, flow splits and wall shear stress (WSS) distributions are presented for the three shunt configurations using the TOF disease template including the CoW region. The closed and patent DA states are also examined for all three shunt configurations. In the ''Discussion'' section, the post-surgery results of the shunt configurations and DA states are compared and analyzed in detail. The limitations and assumptions of our approach and their adequacy are provided in the separate ''Limitations'' section.
Finally, the surgical interpretations of the key findings are stated in the ''Conclusions'' section. MATERIALS AND METHODS 3D Geometry (Aortic Arch, Neck and Cerebral Arteries) A realistic 3D aortic arch anatomy of TOF is established based on our previous anatomical reconstructions, 8 where the left ventricle aortic outflow diameter is significantly less than normal, representing a symmetric diffuse stenosis 39 (Fig. 1). Vessel dimensions of this idealized model have been validated rigorously and employed in our earlier hemodynamic investigations involving the neonatal stage. 8,27,28,37 This anatomical template has a long publication history, and its evolution is available in the Supplementary Material (Appendix A). For the present study, the anatomical TOF template was further improved by adding the cerebral arteries, including the CoW, using a magnetic resonance imaging (MRI) scan of a healthy young adult under an approved institutional review board (IRB) protocol. The original cerebral geometry is scaled down 1.7 times, so that the connecting head-neck arteries are consistent with the neonatal aortic arch dimensions (Fig. 1). The cerebral anatomical dimensions are further validated here against clinical neonatal cerebral measurements, as summarized in the Supplementary Material (Appendix A). Nomenclatures of all the vessels involved are presented in Table 1. The coupled anatomy was created using Geomagic (Geomagic Inc., NC, USA). Arteries are listed in three groups: great arteries proximal to the aortic arch, arteries associated with the neck region, and cerebral arteries. The boundary condition (BC) types specified are also provided. Shunt Configurations and Simulated Cases The surgical shunts were created by 3D sketching on the computer using an in-house anatomical design toolkit. 12 Several candidate shunt configurations were produced in collaboration with two independent pediatric cardiovascular surgeons, and three of those configurations were retained for the present study (Fig. 2).
Two of these surgical configurations are central shunts constructed between the Aao and MPA. The central direct shunt (direct shunt) corresponds to a more horizontally configured case, as opposed to the central oblique shunt (oblique shunt) case. The third configuration is a modified Blalock Taussig (mBT) shunt (RPA shunt) that connects the aortic arch and RPA and is retained as a baseline. The implemented grafts are 2.5 mm inner-diameter polytetrafluoroethylene (PTFE) conduits. All anatomical configurations and associated CFD simulation cases are summarized in Table 2. They include DA closure and TOF disease cases. Inclusion of the full cerebral system improved the predictive capability of our simulations compared to the prior isolated aortic arch models, in which the CoW flow characteristics were missing. These additional boundary condition verification simulations and the resulting performance improvements are summarized in the Supplementary Material (Appendix B) for reference. Boundary Conditions Standard resistance outlet boundary conditions are employed at truncated arterial boundaries using the following formulation:

P_o = P_a + R_o Q_o, (1)

where P_o is the assigned outlet pressure boundary condition, Q_o the flow rate at the outlet (integrated over the outlet area A_o), R_o the resistance value, and P_a the atrium pressure. The resistance values for the neonatal aorta have already been calculated in our previous studies by matching the physiological flow distributions for neonates. 8 For the cerebral outlets, these values are slightly adjusted as reported in Ref. 46 in order to match the physiological pulmonary to systemic flow rate ratio (Q_p/Q_s). The resistance value for each of the cerebral arteries is assumed to be the same: 8 MPa s m⁻³ for RACA, LACA, RMCA, LMCA, RPCA and LPCA. In Eq. (1), the outlet pressure is calculated based on the flow rate and the resistance value of the downstream vasculature and organs.
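As a minimal illustration of this resistance outlet condition, assuming the relation P_o = P_a + R_o Q_o and hypothetical values for the flow rate and atrial pressure:

```python
# Minimal sketch of a resistance outlet boundary condition: the outlet
# pressure is the atrial reference pressure plus the resistive pressure
# drop of the truncated downstream vasculature, P_o = P_a + R_o * Q_o.
# All numeric values below are hypothetical, for illustration only.

MMHG_PER_PA = 1.0 / 133.322  # unit conversion for readable output

def outlet_pressure(q_o_m3s: float, r_o_pa_s_m3: float, p_a_pa: float) -> float:
    """P_o = P_a + R_o * Q_o (pressures in Pa, flow in m^3/s)."""
    return p_a_pa + r_o_pa_s_m3 * q_o_m3s

# Hypothetical cerebral outlet: R_o = 8 MPa s m^-3, Q_o = 0.05 L/min, P_a = 400 Pa.
q_o = 0.05 / 1000.0 / 60.0  # 0.05 L/min converted to m^3/s
p_o = outlet_pressure(q_o, 8.0e6, 400.0)
print(f"P_o = {p_o:.1f} Pa ({p_o * MMHG_PER_PA:.2f} mmHg)")
```

In a CFD solver this relation is evaluated at every truncated outlet on each iteration, with Q_o taken from the current flow solution, so the lumped downstream vasculature and the 3D domain are coupled implicitly.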
Based on the typical flow rates observed in the clinic, a new inlet resistance velocity boundary condition is developed at the Aao and MPA. In this formulation, the flow to either the systemic or pulmonary outflow tract is determined by the corresponding predefined inlet resistance value, which includes the right/left ventricle pathway and the corresponding outflow trunk valve resistances. Thus, the flow rate is calculated based on the resistance value of the upstream ventricle and the pressure at the inlet:

Q = P_i / R_i, (2)

where Q is the flow rate at the inlet, P_i the calculated inlet pressure boundary condition, R_i the resistance value at the inlet, and A_i the area of the inlet over which the plug-flow velocity Q/A_i is applied, consistent with the standard practice of aortic simulations. We compared this boundary condition with the standard constant inlet flow boundary condition. The results did not change for inlet pathway resistance values that are proportional to the inlet cross-sectional areas. However, if the resistance values prior to the outflow tracts are different, as in various single-ventricle disease states, deviations are recorded. For example, in the case of a ventricular septal defect, this new boundary condition allows flow exchange between the right and left ventricles, so the flow splits between the aorta and MPA start to change. The sensitivity of this new inlet boundary scheme is validated by systematically assigning different resistance values at the inlet of the aorta. An increase in the aorta inlet pathway resistance decreased the aortic flow and increased the MPA flow rate, respectively. CFD Solver A commercial CFD solver, FLUENT 15.0 (Ansys, Inc., PA, USA), was adopted for this study. The CFD code was configured to implement a multi-grid artificial compressibility solver for incompressible Newtonian flows, and employs a second-order accurate numerical discretization scheme in space.
A steady-state simulation is performed, since the average flow rates at the outlets are found to be sufficient to compare shunt hemodynamics. Also, all Reynolds numbers are below 1500, justifying the use of a laminar flow solver. A diligent mesh density sensitivity analysis followed Ref. 47, based on achieving a relative difference of less than 5% in velocity at the Dao region just after the DA (Supplementary Material, Figure B). Grid sensitivity analysis was conducted using grids of decreasing mesh size (starting with 1.3 mm nodes, down to 0.5 mm). For a typical high-density spatial grid with a total of ~1 M fluid nodes and a grid spacing of 0.7 mm, a simulation time step size of 10⁻⁵ s in physical time is required to achieve convergence. Simulations were continued until convergence to a 10⁻⁶ residual. Conservation of mass was ensured, with all cases having at most a 10⁻⁸ L/min difference between inlet and outlet. See the Supplementary Material (Appendix C) for the detailed mesh verification study conducted for both the aorta and the new CoW segments. RESULTS Figure 3 illustrates the flow streamlines and wall shear stress (WSS) distributions for the pre- and post-operative direct shunt configurations. The flow structure and head-neck flow split are altered significantly by the introduction of the shunt. Before the shunt, the entire aortic arch flow was laminar, with an average Reynolds number of 300. However, the high velocity gradient around the shunt, due to its small diameter connecting to the large arterial reservoir, altered this condition. The vorticity content is increased, particularly at the pulmonary trunk. Likewise, the placement of the shunt increased the WSS levels and broadened their distribution around the aortic arch but decreased them at the pulmonary arteries. There are significant changes in flow rates for all the major vessels, as quantified in Table 3, particularly the cerebral arteries. Cerebral flows increase almost twofold after the shunt is implemented.
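The laminar-flow criterion cited above (Reynolds numbers below 1500) can be checked with a quick estimate. The vessel diameter, flow rate, and blood properties below are typical illustrative values, not measurements from the model.

```python
# Rough Reynolds number check for the laminar-solver assumption.
# Diameter, flow rate, and blood properties are illustrative values only.
import math

def reynolds(q_l_min: float, d_m: float, rho: float = 1060.0, mu: float = 3.5e-3) -> float:
    """Re = rho * V * D / mu for a circular vessel carrying flow Q."""
    q = q_l_min / 1000.0 / 60.0        # L/min converted to m^3/s
    area = math.pi * d_m ** 2 / 4.0    # circular cross-section
    v = q / area                       # mean velocity
    return rho * v * d_m / mu

# Hypothetical neonatal ascending aorta: ~6 mm diameter, ~0.7 L/min flow.
re = reynolds(0.7, 6e-3)
print(f"Re = {re:.0f}")  # well below the ~1500 laminar threshold cited above
```

Repeating the estimate for each major vessel is a cheap sanity check that the laminar-solver assumption holds across the whole domain, not just at the aortic inlet.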
Significant increases are observed for all three shunt configurations with respect to the pre-surgical configuration before the shunt anastomosis. Comparison of Shunt Configurations In Fig. 4, flow streamlines are compared for the three shunt configurations. For the direct and RPA shunts, the corresponding flow streamlines spiral in the ascending aorta, influencing the head-neck flow split and WSS distribution. For the oblique shunt, the flow in the ascending aorta is relatively laminar. According to Fig. 4, for the RPA shunt the flow produces high vorticity in both pulmonary arteries. Vorticity in the pulmonary arteries is lower for the oblique shunt configuration. The direct shunt produces almost laminar flow in both pulmonary arteries. The average flow splits at the aortic arch and cerebral arteries for all three models are summarized in Table 3. Flow rates at the pulmonary arteries change 5% between the central direct shunt and central oblique shunt configurations, and 23% between the central direct shunt and RPA shunt configurations. The right and left pulmonary artery flow rates are not altered significantly for the different shunt configurations, as the symmetric flow condition is maintained. At the cerebral arteries, the shunt type caused about 2 and 5% differences in flow rate for the central oblique shunt and RPA shunt configurations, respectively, whereas the flow splits between the right and left cerebral arteries are not symmetric for the oblique shunt configuration (see Table 3). Likewise, the trans-shunt flow changes up to 10% between the direct (0.588 LPM) and the RPA shunt (0.643 LPM). Q_p/Q_s for the shunt configurations is also calculated and found to be 5 and 26% different for the oblique and RPA shunts, respectively. These results suggest that the acute post-operative hemodynamic condition depends on the shunt configuration, and that shunt configurations change the flow rates substantially.
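The flow-split comparisons above reduce to simple indices. In the sketch below, the flow-rate values are hypothetical placeholders chosen to mirror the reported 5 and 23% pulmonary-flow differences, not the actual Table 3 data.

```python
# Sketch of the flow-split indices compared above: Q_p/Q_s and the relative
# difference of each configuration against the direct shunt. The flow-rate
# values (L/min) are hypothetical placeholders, not the Table 3 results.

def qp_qs(q_rpa: float, q_lpa: float, q_systemic: float) -> float:
    """Pulmonary-to-systemic flow ratio."""
    return (q_rpa + q_lpa) / q_systemic

def pct_diff(value: float, reference: float) -> float:
    """Percent difference relative to the direct-shunt reference."""
    return 100.0 * (value - reference) / reference

flows = {  # (Q_RPA, Q_LPA, Q_systemic), hypothetical placeholders
    "direct":  (0.300, 0.300, 1.00),
    "oblique": (0.285, 0.285, 1.00),
    "RPA":     (0.230, 0.230, 1.00),
}

ref = qp_qs(*flows["direct"])
for name, f in flows.items():
    print(f"{name}: Qp/Qs = {qp_qs(*f):.3f} ({pct_diff(qp_qs(*f), ref):+.1f}% vs direct)")
```

With fixed systemic flow, the percent difference in Q_p/Q_s equals the percent difference in total pulmonary flow, which is why the two comparisons in the text track each other for the closed-DA cases.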
Results presented in Table 3 correspond to a patient with a very high pulmonary vascular resistance value of 8 MPa s m⁻³ (for details see Supplementary Material, Appendix B). This resistance results in a low Q_p/Q_s. To illustrate the performance at a higher Q_p/Q_s, and thus lower pulmonary vascular resistance, we performed simulations with a resistance value of 3 MPa s m⁻³ for both RPA and LPA outlet boundary conditions. Results for these cases are presented in Table 4. In these tables, the last two columns summarize the differences in flow rates for the shunt configurations with respect to the direct (central) shunt; only the vessels with significant differences between the surgical configurations are included; negative flow rate values represent outlet flows; Q_p/Q_s is the ratio of pulmonary to systemic flow, and Q_c/Q_CO the ratio of total cerebral flow to cardiac output. Figure 5 illustrates the WSS levels for the direct, oblique and RPA shunts at the neck and head arteries. Likewise, Fig. 6 provides the WSS changes specifically at the CoW region. The WSS is about 20% higher for the RPA shunt configuration compared to the direct shunt configuration, especially at the aortic arch region. The shunt anastomosis features a higher WSS distribution compared to its periphery. Maximum WSS values are 118, 107 and 48 Pa for the direct, oblique and RPA shunt configurations, respectively. The direct shunt configuration produces the highest WSS at the shunt region, although it does not have the highest shunt flow rate. The geometry of the shunt and its anastomosis, in addition to the flow magnitude, are the determinants of the WSS distribution, as demonstrated earlier. 12 Finally, the flow rate through the internal arteries of the CoW region is computed and compared for all three shunt configurations. The internal arteries that are particularly important for intra-operative diagnosis include RPCoA, LPCoA, and the connecting arteries of RMCA-RPCA and LMCA-LPCA.
These internal arteries are important as they regulate the blood flow to the cerebral arteries through the afferent arteries. Figure 6 presents the relative flow differences in the internal CoW arteries for the oblique and RPA shunt configurations compared to the direct shunt configuration. The shunt configurations affect the flow distribution in the head and neck arteries and can challenge a balanced regional brain perfusion, which can be taken into account in pre-surgical planning. The differences in flow splits start at the connecting cerebral arterial level, influencing the downstream cerebral vascular perfusion. Effect of Ductus Arteriosus Constriction on Shunt Hemodynamics Although the results of the present study represent neonatal patients without a DA, some patients present with a patent DA; in these cases the arterial flow distributions change by only about 5%, while the shunt flow increases sixfold with the DA. The flow distributions at all artery outlets and the Q_p/Q_s stay almost constant for all three shunt configurations while the DA is active. This is due to the high peripheral resistance values of the neonatal patient, which determine the flow rates, consistent with our previous study. 8 Interestingly, the flow passing through the shunt and the PDA depends on the shunt configuration. The trans-shunt flow rate is 21% higher for the central oblique shunt compared to the central direct shunt when the DA is patent. Likewise, the flow rate at the RPA shunt is 53% higher than that of the central direct shunt, resulting in improved shunt performance for TOF (fully-open MPA model). The flow rate at the PDA decreases 3 and 8% (0.02 LPM and 0.03 LPM) compared to the central direct shunt configuration for the central oblique and RPA shunt configurations, respectively. Those decreases in flow rate balance the shunt flows, so that the flow convected to the pulmonary arteries stays relatively constant. Finally, the flow rates at the major interior cerebral branches depend on the shunt configuration.
For example, the artery connecting the LMCA and the anterior arteries has a 3% higher flow rate for the RPA shunt configuration compared to the central direct shunt configuration for the TOF disease model. DISCUSSION During the last decade, neonatal surgical repair of TOF resulted in minimal mortality, and the present focus has shifted towards achieving the best late-functional outcome. 11 It is hypothesized that optimal shunt hemodynamics is critical for improved quality of life. As our morphometric study demonstrated, shunt placement will balance the post-operative Q_p/Q_s and minimize pulmonary hypertension 1,21,49,51,52 in a shunt-configuration-dependent manner. Local shunt flow performance, particularly WSS, is known to influence the shunt patency, and can vary as much as 33% between the different shunt configurations. Higher WSS distributions will decrease the graft life due to high friction at the material surface. Likewise, analyzing the WSS distribution at the head-neck, neck and cerebral arteries for the direct and RPA shunts indicates higher WSS at the distal LCA and vertebral artery anastomosis for the RPA shunt compared to the direct-shunt configuration (Fig. 5). We have simulated two different pulmonary vascular resistance states: with high and low values (Tables 3 and 4). For the high pulmonary vascular resistance state: shunt configuration can also cause substantial changes in the total PA flow, leading to 26% changes in Q_p/Q_s for the same shunt diameter. However, once the peripheral pulmonary vascular resistances at the RPA and LPA are fixed, there is no significant preferential flow direction between the RPA and LPA for all shunt configurations studied in this work. The DA patency (natural or after a stenting operation) and disease severity do not alter these flow regimes. Therefore, realistic measurements of pulmonary vascular resistances are critical in predicting the pulmonary flow preference.
In our model, we intentionally kept the RPA and LPA geometries at the same diameter in a symmetric shape, which resulted in the same great artery resistance at uniform flow conditions. Any change in geometry, peripheral or branch resistance (e.g., due to complex flow patterns in one branch, see Figs. 3 and 4) could cause a difference in the LPA/RPA ratio. Also, a larger shunt or pulmonary artery diameter could increase the Q_p/Q_s. For the low pulmonary vascular resistance state: Q_p/Q_s can change up to 61% for the same shunt diameter, while the RPA and LPA flow rates also differ significantly within the same shunt configuration (up to 63%). Thus, the pulmonary vascular resistance, besides the shunt configuration, has a substantial effect on the arterial flow splits. Considering the complex recirculation regions (vorticity), the direct shunt produces the most laminar flow among the shunt configurations studied. Vorticity is important in terms of energy loss and blood damage and should be avoided. 15,35,56 Since the shunt position affects the formation of vortices, it is also expected to affect the flow split at the artery outlets and the WSS at the root of the head-neck arteries. Therefore, the existence of vorticity should also be taken into consideration in terms of surgical performance. The present results illustrate an important function of the DA as a balancing vessel, as there are substantial differences between cases with and without the DA. For an active and functioning DA, the influence of shunt configuration on hemodynamic balance is found to be minimal. For an open DA, the shunt diameter and configuration cannot control the Q_p/Q_s and achieve hemodynamic stability. Regardless of the shunt type, all arterial flow splits will remain the same. The shunt size does not allow enough blood flow to maintain the same flow distribution.
In contrast, when the DA is closed, the head-neck flow distributions and Q_p/Q_s become highly dependent on the shunt configurations, in addition to the resistances of the peripheral arterial beds. While the open-DA configuration can be employed in hybrid repair, the closed-DA case is more common in pediatric patients and achieves better circulatory control, since the DA is ligated by the surgeon or tends to vanish naturally after birth. In an earlier study, through an idealized parametric computational model of hypoplastic left heart syndrome, Migliavacca et al. 29 calculated pressure drops for straight and blunt shunt configurations, which resemble the direct and oblique shunts of the present study, to be ~30 and ~26 mmHg, respectively. Even though the type of disease is considerably different, these pressure drop values are in agreement with the present computations. As such, the pressure drops for all flow rates are higher for straight shunts compared to blunt shunts. In terms of higher pulmonary perfusion and lower pressure drops, the surgeon may prefer the direct shunt during surgery. Neurodevelopmental delays in CHD patients are common and highly variable. 17,34 The present results demonstrate that the placement of the surgical shunt alters the head-neck flow split and the acute hemodynamic balance of the cerebral circulation system. The differences in flow rates in the cerebral arteries indicate different perfusion rates at vital brain sections. Whether this finding might have major physiological consequences or be associated with the poor post-operative neurodevelopment outcome of 1st stage surgeries should prompt further investigations. Still, it would be wise to consider cerebellum blood perfusion as a new performance parameter that can easily be calculated in 1st stage computational pre-surgical planning. This enables an estimate of the flow changes in the brain after shunt surgery.
In particular, the role of the CoW in redistributing the cerebral flow after acute shunt placement is an important clinical factor that, to our knowledge, has not been emphasized in the literature. Knowledge of the detailed post-operative 3D cerebral perfusion map could lead to optimal neurodevelopment. Patient-specific computational fluid dynamics has evolved into a standard tool for simulating the hemodynamic performance of pediatric cardiovascular shunts, reducing the need for in vitro tests as well as complex animal experiments. 42 While most steps of the patient-specific analysis methodology, including MRI scanning, segmentation, volume generation, mesh discretization, and visualization, have matured, 42 our study indicates that the predictive capability of realistic boundary conditions representing the peripheral circulation needs further emphasis. In particular, the inclusion of the major cerebral vessels undertaken in the present work is a step towards this objective. We showed that the standard resistance boundary conditions attached to the truncated head-neck vessel outlets representing the cerebral circulation are not adequate for predicting flow splits or local flow properties such as streamlines, WSS, and pressure distribution (see Supplementary Material Appendix B for details). According to our results, the standard CFD model without the head-neck and cerebral arteries overestimates the flow passing through the shunt and underestimates the DA flow. Likewise, inclusion of the full cerebral system substantially changes the flow distribution and shifts the flow balance at the head-neck arteries. This finding is more critical for the smaller neonatal aortic arch system than for a mature aortic arch, since for the latter the peripheral vascular resistance values are significantly lower. As our results demonstrate, the hemodynamic shunt analysis cannot be localized to the shunt region alone.
The entire cardiovascular circulation system, including the natural shunt of the DA, if it exists, must be considered for precise surgical decision-making. 42 Finally, the addition of 3D cerebral arteries to the CFD domain will not eliminate the utility of lumped-parameter model boundary conditions, as they will still be needed for the rest of the vasculature.

LIMITATIONS

The present study is a pilot investigation that focused on disease type and shunt configurations; it will be expanded through larger shunt sizes, surgeon-specific shunt configurations, and parametric pulmonary arterial diameters, 45 including patient-specific anatomical cases as they become available. 44 Computational results correspond to time-averaged hemodynamics and exclude the deformation of the artery as well as non-Newtonian effects. The latter parameter is potentially important, but its effect is shadowed by the high variability of pediatric blood and so would not influence our comparative results. As in most arterial hemodynamic applications, for aortic flows the use of compliant models (for the deformation of the artery compared to aortic root rotation) alone does not bring much improvement in the accuracy of results over simpler and computationally more efficient rigid models. 20,25,31,36,41 We utilized a patient-specific cerebral arterial anatomy but developed a realistic arch reconstruction through diligent input from several experienced clinicians on this integrated model (model development is summarized in Supplementary Material, Appendices A and B). The idealized aortic arch and neck arteries can cause some deviations from patient-specific results, but this effect is limited since, in the present study, the shunt configurations are compared to each other using the same baseline geometry. Our results clearly illustrate that predictive surgical planning simulations require the use of an accurate downstream patient-specific cerebral geometry.
The complexity of the cerebral arterial system is a major challenge for the present study and needs to be revised and improved in future models. For example, an incomplete CoW is common in neonates and congenital heart patients, which would influence the reported flow splits. Still, the comparative values of the present results should be valid to a certain degree. Physiologically realistic geometry and boundary conditions are critical for replicating physiological results, even though it is challenging to obtain the accurate measurements and data needed for modeling purposes in infants and small babies. 22,42,43 Likewise, the effects of disease-specific shunt configurations and anastomosis location on cerebral and coronary flow are all important considerations for the surgical decision-making process. Our downstream boundary conditions are not fully multi-scale; still, the present boundary conditions are indeed the "lumped" versions of more detailed multi-scale boundary conditions, and thus both simulate the same physical behavior. Our manuscript demonstrated an important weakness of these schemes, applicable to both lumped and multi-scale versions: the 3D cerebral system geometry in simulations is critical for accurate estimation of changes especially in WSS and in 3D flow characteristics, namely secondary flow and vorticity, as highlighted in the original Appendix material (Page 2, Section B).

CONCLUSIONS

The present manuscript explored alternative shunt configurations that have potential for improved peripheral blood flow split and local hemodynamics. Quantitative information on cerebral hemodynamics and perfusion is provided, which is critical for CHD patients. Our study showed that the RPA shunt has slightly better cerebral perfusion for TOF. Furthermore, a persistent ductal communication between the systemic and pulmonary arteries suppresses the influence of the surgical shunt and results in poor flow split control.
When the ductus arteriosus is fully ligated, all three clinical shunt configurations result in significant differences in flow distributions and local hemodynamics. Most importantly, the major differences observed in cerebral blood flows prompt the requirement for detailed future studies on neonatal cerebral perfusion in CHDs. The shunt configuration has a very limited, almost negligible, effect on flow splits while the DA is open, and is critical for flow control when the DA is closed. In addition to the shunt configuration, our computations indicated that neonatal arterial hemodynamics is also influenced by the severity of the pulmonary vascular resistance, which should be taken into consideration during 1st-stage shunt planning (for example, in the high pulmonary resistance case the direct shunt has 26% higher pulmonary perfusion with respect to the RPA shunt, while in the low pulmonary resistance case the RPA shunt has 61% higher pulmonary perfusion with respect to the direct shunt). Surgeons may prefer the direct shunt in terms of higher pulmonary perfusion (23% with respect to the RPA shunt perfusion) and lower pressure drop, even though it has 5% lower cerebral perfusion in the case of TOF with low pulmonary vascular resistance. Current practice in hemodynamic modeling, including lumped-parameter system models, is to consider the aortic arch manifold vessel as central and to treat the neck and cerebral arteries as a lumped network or as a truncated constant-pressure boundary condition. As the present study illustrates, if such truncated boundary conditions are utilized, the results might be misleading.

ELECTRONIC SUPPLEMENTARY MATERIAL

The online version of this article (doi: 10.1007/s13239-017-0302-5) contains supplementary material, which is available to authorized users.
Electricity markets regarding the operational flexibility of power plants

Electricity market mechanisms designed to steer sustainable generation of electricity play an important role in the energy transition intended to mitigate climate change. One of the major problems is to complement volatile renewable energy sources with operationally flexible capacity reserves. In this paper a proposal is made to determine prices on electricity markets taking into account the operational flexibility of power plants, such that the costs of long-term capacity reserves can be paid by short-term electricity spot markets. For this purpose, a measure of operational flexibility is introduced that enables the computation of an inflexibility fee charged to each individual power plant on a wholesale electricity spot market. The total sum of the inflexibility fees accumulated on the spot markets can then be used to finance a capacity market keeping the necessary reserves to warrant grid reliability. Here each reserve power plant receives a reliability payment depending on its operational flexibility. The proposal is applied to a small exemplary grid, illustrating its main idea and also revealing the caveat that too-high fees could paradoxically create incentives to employ highly flexible power plants on the spot market rather than to run them as backup capacity.

I. INTRODUCTION

To mitigate global climate change it is commonly agreed that greenhouse gas emissions, and in particular emissions of CO2, have to be reduced substantially [11, 17-19, 23, 24]. Since 85% of the current primary energy driving global economies is due to the combustion of fossil fuels, and since the consumption of fossil fuels accounts for 56.6% of all anthropogenic greenhouse gas emissions, introducing renewable energy sources to support all areas of human life plays an essential role in fighting global warming [8].
In particular, the generation of electricity by renewables will be an important step towards this goal, requiring substantial changes to current grid structures and power plant systems. If the generation and distribution of electricity is to be organized by market principles, a preeminent challenge for a future electricity market mechanism design is to set effective price signals that reward the introduction and use of renewable energy sources for the generation of electricity, and simultaneously penalize fossil fuel power plants. However, the physical requirements of electricity grids and the necessities of public life in present societies impose special restrictions on electricity markets. In particular, a necessary condition for grid stability is the reliability of electricity generation and the immediate equality of supply and demand at any instant of time. It is expected that the biggest contribution of renewable energy sources to electricity grids will come from wind turbines and photovoltaic cells [1], both producing electricity only with high volatility. Their widespread installation therefore would challenge the reliability of electricity supply and thus the stability of the grids. Lacking sufficiently large storage for electricity, grids with volatile energy sources require power plants with high operational flexibility as a power reserve, standing in during cases of sudden scarcity of electricity supply or of blackouts, to warrant reliability. Cramton and Ockenfels [4] proved the "missing money" theorem stating that, in a competitive electricity market, prices are always too low to pay for adequate capacity. In fact, present electricity markets are not perfectly efficient markets, since both supply and demand are price-inelastic; see Figure 1.
A future increase of demand elasticity, for instance through smart grids, would relax the difficulties to a certain degree, but inelasticity on the supply side could only be removed by capacity reserves or huge electricity storage. The first option, however, requires long-term planning on a timescale of decades, whereas the second option is technologically not realizable to date. For more details see [5]. Besides these theoretical arguments there also exist empirical clues to doubt that current electricity markets encourage investments in operationally flexible power plants or in the provision of power reserves for cases of emergency or maintenance [2,7,9,27]. Several solutions to this problem have been proposed recently to complement the present "energy-only" markets, ranging from separate capacity markets which trade backup capacity, to strategic capacity reserves usually settled by long-term contracts with national agencies [2,4,9,13,15,21,26,27]. The main goal of this paper is to propose a solution to the economic problem of financing, by market principles, the capacity reserves necessary to guarantee grid stability. One necessary property of a power plant for being part of a capacity reserve is a fast guaranteed operational flexibility. In our opinion the main problem of current market mechanism designs is the fact that market prices do not regard operational flexibility, being determined solely by the marginal costs of electricity generation.

Figure 1: On an electricity spot market, a blackout is a market failure due to inelastic demand and supply, with the supply curve given by the merit order of the power plant system (right hand). Here a ("rolling") blackout occurs if the demand is higher than the total maximum power P_max of all power plants [5]. Increasing the demand-side elasticity, e.g., by smart grids, could remove the problem in the long run, but in the short run electricity markets require capacity reserves which are not demanded most of the time.
Thus the costs of operational inflexibility are market externalities [3, §14], [20, p. 125] and reduce welfare. By contrast, a sustainable electricity market mechanism design should induce market prices which take into account both the direct variable production costs and the external ecological costs of electricity production, but also the costs caused by the operational inflexibility of each individual power plant. Due to emissions trading [28, §15.4], the first two cost factors are already priced in as marginal costs on present electricity spot markets, but operational flexibility does not play a role in the determination of spot market prices to date. To internalize it into the price calculation, we first define a measure of the operational flexibility of a given power plant. This measure can then be used to compute an inflexibility fee for each power plant. The total of these inflexibility fees can then serve to pay for power reserves provided by some given capacity mechanism. This paper is organized as follows. In Section II a general class of functions measuring the operational flexibility of a power plant in dependence on its guaranteed start-up time is defined. In Section III the effect of the inflexibility fee on the offer price of a power plant on an electricity spot market is calculated and demonstrated with a prototypical exemplary "toy" grid in Example 1. How the accumulated inflexibility fees can then be used to finance a capacity mechanism is described in Section IV, before a short discussion concludes the paper.

II. A MEASURE OF THE OPERATIONAL FLEXIBILITY OF A POWER PLANT

We stipulate that the operational flexibility of a power plant depends on its guaranteed start-up time t_s ∈ [0, ∞), which is defined as the time that the power plant requires to supply a guaranteed power of electricity.
Moreover, we claim that the measure should be a pure number expressing a degree of flexibility ranging from 0 to 1, with the property that the longer the guaranteed start-up time, the smaller the value of flexibility. Consequently, we define a general measure of operational flexibility to be a strictly monotonically decreasing function ϕ : [0, ∞) → [0, 1] of a single variable satisfying the limit behavior ϕ(0) = 1 and lim_{x→∞} ϕ(x) = 0 (condition (1)). Here the variable x represents the start-up time of the power plant, measured in hours [h]. A simple example of such a measure is the differentiable function given by Equation (2), which in the sequel we will use to measure the operational flexibility of a given power plant. In Table I a wind turbine is assigned a vanishing operational flexibility, since due to the volatility of winds a predetermined amount of energy from a wind turbine cannot be guaranteed at a given future instant. The highest operational flexibilities are exhibited by hydroelectric power stations and modern gas turbines.

III. FEES ON OPERATIONAL INFLEXIBILITY

On a wholesale electricity market, each participating power plant operator offers electric power with a sell bid for each of its power plants. The market maker collects all these sell bids and determines the market-clearing price in accordance with the buy bids and the merit order [6,10,25,30]; for a theoretical introduction see also [12, §6.5, §7.4.5]. Our main idea now is to raise a fee for operational inflexibility on each power plant, its amount being calculated from the operational flexibility ϕ as part of a factor applied to a given market-wide reference level. In consequence, the offer price of each power plant must take into account its operational flexibility.
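The defining properties of such a measure can be sketched in a few lines of Python; the exponential form used here is only an assumed stand-in, since the concrete function of Equation (2) is not reproduced in this text.

```python
import math

def flexibility(t_s: float) -> float:
    """Operational flexibility as a function of the guaranteed start-up
    time t_s (in hours): strictly monotonically decreasing on [0, inf),
    with flexibility(0) = 1 and flexibility(t_s) -> 0 as t_s grows.
    The exponential form is an assumption, not the paper's Equation (2)."""
    return math.exp(-t_s)

print(flexibility(0.0))                      # 1.0: instantaneous start-up
print(flexibility(1.0) > flexibility(2.0))   # True: strictly decreasing
```

Any other strictly monotonically decreasing function from [0, ∞) to [0, 1] with the same limit behavior would serve equally well as a measure.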
To be more precise, let p_i^mc denote the marginal offer price per energy quantity of power plant i regarding only the marginal costs, including the variable costs of production and emissions trade certificates; this is the price which would be offered for the power plant on a current wholesale spot market [29]. Assume moreover that all power plants participating in the spot market are uniquely numbered by the indices i = 1, 2, . . . , n. The spot market offer price p_i of plant i, taking into account its operational flexibility ϕ_i, is then calculated by the formula

p_i = p_i^mc + (1 − ϕ_i) · p_0.     (3)

Here p_0 denotes a market-wide constant reference level price, set by the market authority. It therefore is a political or regulatory quantity, not a market-inherent value, nor immediately economically deducible. It is arbitrary in principle, but the higher its amount, the heavier the effect of operational flexibility on the final spot market price. It should be high enough to signal effective incentives to introduce and use operationally flexible power plants for scarcity situations and blackouts, but it must be low enough to avoid too radical a change of the merit order, such that too many flexible power plants would be operational on the spot market and thus unavailable for a capacity reserve (see Figure 2).

Example 1. Consider a small exemplary grid (called the "toy grid" in the sequel) consisting of the eight power plants listed in Table I. The prices resulting from the respective inflexibility fees for different reference level prices p_0 are listed in Table II. If the reference level price is low (here p_0 = 10 €/MWh), the modified offer prices do not change the merit order of the power plant system, whereas a sufficiently high reference level price (e.g., p_0 = 70 €/MWh) changes it, as is depicted in Figure 2.
In our toy grid we can recognize that, if the amount of p_0 is too high, the effect may even be counterproductive, since the flexible gas turbine is then in the money and thus operating at normal quantity demand, leaving no power plant as a capacity reserve. In case of a sudden scarcity or of a blackout, the grid would then perform worse than with the original merit order. Moreover, we observe that the higher the reference level price p_0, the higher the spot market price. The amounts, however, are not related to each other in a linear manner, but depend discontinuously on the changes of the merit order. The total amount of inflexibility fees, finally, is directly calculated to be either 48.4 €/MWh in case of p_0 = 10 €/MWh, or 339 €/MWh in case of p_0 = 70 €/MWh. We finally note that for the demanded quantity q* depicted in the scenarios in Figure 2, only five power plants are operational. Depending on the reference level price, the realized profit is then given by the following tables. Assume for simplicity that the demand remains constant at q* during a certain hour, that all power plants yield the same power of 5 MW, say, and let q* = 25 MW be the demanded electrical power for the hour considered (such that the consumed electrical energy during this period is E = 25 MWh). Then, with Table II, the total of the inflexibility fees for the five operational power plants in the money takes the amounts given in Equations (4) and (5), at the reference level prices p_0 = 10 €/MWh and p_0 = 70 €/MWh, respectively. The total fee can then be distributed to the power plants participating in a capacity mechanism, paying for their time of reliability. The toy grid in Example 1 demonstrates the possible direct consequences of the inflexibility fee for the wholesale electricity market. In essence, by Equation (3) a power plant with a low operational flexibility is penalized more than one with a high operational flexibility.
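The effect of the fee on the merit order described in Example 1 can be reproduced with a minimal sketch; the plant names, marginal prices, and flexibility values below are hypothetical stand-ins, since Tables I and II are not reproduced here, and the fee term (1 − ϕ)·p_0 is inferred from the description of Equation (3).

```python
def offer_price(p_mc: float, phi: float, p0: float) -> float:
    """Modified spot-market offer price: the marginal price plus an
    inflexibility fee (1 - phi) * p0, so that less flexible plants
    (small phi) pay a larger surcharge."""
    return p_mc + (1.0 - phi) * p0

# Hypothetical plants: (name, marginal price in EUR/MWh, flexibility phi).
plants = [("nuclear", 20.0, 0.1), ("coal", 35.0, 0.3), ("gas", 50.0, 0.9)]

def merit_order(p0: float) -> list[str]:
    """Plant names sorted by modified offer price (cheapest dispatched first)."""
    ranked = sorted(plants, key=lambda t: offer_price(t[1], t[2], p0))
    return [name for name, _, _ in ranked]

print(merit_order(10.0))  # low reference price: original merit order kept
print(merit_order(70.0))  # high reference price: the flexible gas plant moves first
```

With these stand-in numbers, p_0 = 10 leaves the order unchanged, while p_0 = 70 pushes the flexible gas plant to the front of the merit order, illustrating the counterproductive regime discussed in the example.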
In the limit case that all power plants participating on the spot market are equally operationally flexible, i.e., ϕ_i = const, all sell bids are raised by the same amount and the merit order cannot change. On the other hand, if the power plants have different operational flexibilities and the reference price level p_0 is chosen too high, the merit order changes such that all flexible power plants are operational on the spot market, so that no power plant is left for the capacity reserve necessary to warrant grid reliability. The total amount of inflexibility fees paid by the power plants participating in the spot market is now available for a capacity mechanism, as described in the following section.

Figure 2: Merit order of the toy grid of Table II, neglecting operational flexibility (left) and regarding it (right). The reference level prices are assumed as p_0 = 10 €/MWh and p_0 = 70 €/MWh, respectively. For a given demand q* of electric power, the market-clearing spot price increases more or less slightly, depending on p_0. For a high operational inflexibility fee, as in the second case, the merit order is changed.

IV. ACCUMULATED INFLEXIBILITY FEES PAYING CAPACITY RESERVES

A power plant serving as a power reserve for periods of scarcity or blackouts should have fast and guaranteed start-up times, i.e., should be operationally flexible to a high degree. There exist several proposed capacity mechanisms, for instance capacity markets or a strategic reserve determined by a grid agency. In either of these approaches, we therefore require a power plant offering capacity reserves to have a high operational flexibility ϕ, say ϕ > ϕ(1 h) (inequality (6)). This value means that the guaranteed start-up time of a power plant participating in the capacity mechanism must be less than one hour. A further natural requirement is that a power plant offering its reliability on the capacity market cannot participate on the spot market.
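The payment rule developed next, which distributes the accumulated fee C_f over the reserve plants in proportion to their flexibility-weighted capacities ϕ_i·P_i, can be sketched as follows; the plant names, flexibilities, capacities, and the fee total are all hypothetical.

```python
def reliability_payments(c_f, plants):
    """Split the accumulated inflexibility fee c_f over the reserve plants,
    weighting each plant by flexibility * capacity; by construction the
    payments sum exactly to c_f.
    plants: list of (name, flexibility phi, capacity in MW) tuples."""
    p_flex = sum(phi * cap for _, phi, cap in plants)  # weighted total capacity
    return {name: c_f * phi * cap / p_flex for name, phi, cap in plants}

# Hypothetical reserve pool with a hypothetical accumulated fee of 242 EUR.
reserve = [("hydro", 0.95, 5.0), ("CHP", 0.80, 5.0), ("gas turbine", 0.90, 5.0)]
payments = reliability_payments(242.0, reserve)
print(payments)
print(abs(sum(payments.values()) - 242.0) < 1e-9)  # True: fee fully paid out
```

Equal capacities make the ordering of the payments mirror the ordering of the flexibilities, so the most flexible reserve plant receives the largest reliability payment.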
Assume then that there are k power plants participating on the capacity market, each one labeled with a unique index i = 1, . . . , k. Let ϕ_i and P_i denote the operational flexibility and the capacity (measured in MW) of power plant i, respectively, and let C_f be the total of the inflexibility fees accumulated on the spot market in a certain past period, say, the day before. It has the dimension of currency per time, for instance €/h. The reliability payment ρ_i for power plant i in that period is then defined as

ρ_i = C_f · ϕ_i P_i / P_flex.     (7)

Note that by construction ∑_{i=1}^{k} ρ_i = C_f, i.e., the sum over all reliability payments equals the total amount of the inflexibility fees. The quantity P_flex = ∑_{j=1}^{k} ϕ_j P_j is the weighted sum of all available capacities, where the weights are precisely the respective operational flexibilities.

Example 2. Assume the toy grid from Example 1. Then by requirement (6) only three power plants can participate in the capacity market, namely the hydroelectric power station, the CHP plant, and the gas turbine. In Table III they are listed with their capacities and the resulting reliability payments according to Equation (7), depending on the amount of the total inflexibility fee coming from the spot market. For calculational details refer to the Excel file http://math-it.org/climate/operational-flexibilities.xls.

V. DISCUSSION

In this paper a proposal has been worked out to integrate operational flexibility into the sell bids of power plants participating in wholesale electricity spot markets. The main idea is to calculate a fee for each power plant depending on its operational flexibility. For this purpose, the concept of a general measure of the operational flexibility of a power plant is introduced here as a strictly monotonically decreasing function ϕ of the guaranteed start-up time, normed by condition (1). With such a measure, the inflexibility is priced in by Equation (3) into the marginal price determining the sell bid of each power plant at the spot market.
The amount depends on a market-wide reference level price p_0 which is set by the market authority or the state. The total operational inflexibility fee C_f accumulated at the spot markets is then spread over the power plants participating in a given capacity mechanism, depending on their operational flexibilities according to Equation (7). Here the power plants forming a capacity reserve should have a very high operational flexibility, to guarantee the reliability and stability of the grid. A reasonable value is proposed by inequality (6). A simple example of a measure for operational flexibility is given by Equation (2). Using this measure, the spot market and the corresponding payments to power plants participating in a capacity mechanism are applied to a simple but prototypical toy grid in Examples 1 and 2. The most important consequence of our proposal, viewed from an economic perspective, is the internalization of the negative externality of the operational inflexibility of power plants. With the inflexibility fees determined as above, the currently external costs would thus be paid by the spot markets and could be used to pay for capacity reserves, be it on a separate capacity market or in another capacity mechanism such as a pool of power plants forming a strategic reserve. The inflexibility fee therefore increases welfare without necessarily decreasing dispatch efficiency. A critical point of our approach, however, is the determination of the reference level price p_0. It is crucial since it can even change the merit order of electricity markets if it is set very high. Although a change of the merit order in itself does not necessarily imply severe problems, it could nonetheless lead to the paradox that operationally flexible power plants participate in a short-term spot market and therefore cannot serve as a capacity reserve. An amount of p_0 that is too high would thus be adverse to the intention to pay for a capacity mechanism and would even diminish welfare.
We therefore are faced with the conflicting objectives of providing enough means to fund the reserves of a capacity mechanism, and of keeping suitable power plants with high operational flexibility as a capacity reserve. Although this risk is calculable when the amount for a given grid is chosen cautiously, such that experience can be gained over time, a comprehensive theoretical framework illuminating the effects and limits of inflexibility fees on electricity markets should be developed. Hints for tackling this problem may be found in the theory of optimal taxation due to Ramsey [22], or in regulation theory [28, §13]. Further research in this direction appears worthwhile.
Deciphering Molecular Factors That Affect Electron Transfer at the Cell Surface of Electroactive Bacteria: The Case of OmcA from Shewanella oneidensis MR-1

Multiheme cytochromes play a central role in extracellular electron transfer, a process that allows microorganisms to sustain their metabolism with external electron acceptors or donors. In Shewanella oneidensis MR-1, the decaheme cytochromes OmcA and MtrC show functional specificity for interaction with soluble and insoluble redox partners. In this work, the capacity for extracellular electron transfer by mutant variants of S. oneidensis MR-1 OmcA was investigated. The results show that amino acid mutations can affect protein stability and alter the redox properties of the protein without affecting the ability to perform extracellular electron transfer to methyl orange dye or a poised electrode. The results also show that there is a good correlation between the reduction of the dye and the current generated at the electrode for most, but not all, mutants. This observation opens the door for investigations of the molecular mechanisms of interaction with different electron acceptors, to tailor these surface-exposed cytochromes towards specific bio-based applications.

Introduction

Electroactive organisms possess extracellular electron transfer (EET) capabilities that enable them to be used in microbial electrochemical technologies (METs), including microbial fuel cells (MFC) and microbial electrosynthesis (MES), to produce bioenergy and added-value compounds, respectively. Given the interest in these technologies as promising sustainable processes for wastewater treatment, biosensing, bioremediation, and the production of biofuels [1][2][3][4], there has been a growing interest in understanding the EET processes performed by electroactive bacteria.
This information has already been shown to be crucial for the optimization of bioreactors and electrode materials, as well as for engineering or tailoring electroactive organisms [5][6][7]. The use of genetic engineering strategies to develop electroactive bacteria with enhanced EET properties and increased current generation in MFC has grown in the last decade. Genetic engineering has been mainly used to improve indirect electron transfer [8], increase substrate oxidation [9][10][11], enhance biofilm formation [12][13][14][15], and increase the expression of multiheme cytochromes, the key components of EET processes [16][17][18]. Shewanella oneidensis MR-1 (SOMR1), a mesophilic Gram-negative facultative anaerobic bacterium renowned for its respiratory versatility, is one of the most extensively studied electroactive organisms [19]. In this bacterium, EET processes are performed by a repertoire of multiheme cytochromes, including the inner-membrane tetraheme cytochrome CymA, the periplasmic cytochromes STC and FccA, and the decaheme cytochromes MtrA and MtrC that are part of the outer-membrane porin complex MtrCAB [20]. These cytochromes form a conductive pathway that enables electrons derived from the oxidation of electron donors to be transferred across the cell envelope to extracellular acceptors.

Construction of Plasmids Harboring OmcA Mutants

The plasmid used to produce OmcA mutant protein for the in vitro studies was pLS147, provided by Dr. Liang Shi. This plasmid, based on the commercially available plasmid pBAD202/D-TOPO, contains a modified version of the omcA gene, in which the signal peptide was replaced by the signal peptide of MtrB from SOMR1 to produce a soluble version of the protein [24]. A histidine tag was available at the C-terminus to facilitate the purification process [24]. The distal histidine ligand of hemes 1, 3, 4, 6, and 8 was mutated to a methionine using site-directed mutagenesis, as previously described [29]. The pBAD202/D-TOPO plasmids containing the different OmcA variants were transformed into wild-type SOMR1.
For the in vivo studies, native OmcA was first cloned into the pBBR1MCS-2 vector using NEBuilder® HiFi DNA Assembly and the primers pBBR_OmcA_Forw and pBBR_OmcA_Rev (Table 1). The plasmid pBBR1MCS-2 was a gift from Kenneth Peterson [32]. For the production of OmcA mutants in this plasmid, two different approaches were used: site-directed mutagenesis using the pBBR1MCS-2 vector containing native OmcA (for mutants H1, H3, H6, H7, and H8), and NEBuilder® HiFi DNA Assembly using mutated OmcA genes previously cloned into the pBAD202/D-TOPO vector (for mutants H2, H4, H5, H9, and H10). In the latter case, the mutated genes were amplified using the primers pBBR_OmcA_Forw and pBBR_OmcA_Rev from the plasmid harboring each mutated gene. For the mutant H9, the primer pBBR_OmcA_Rev was replaced by the primer pBBR_OmcA_H9_Rev to guarantee the insertion of the mutation. The pBBR1MCS-2 plasmids containing the different versions of OmcA were transformed into SOMR1 ∆OmcA ∆MtrC. This double knock-out strain was used because of the overlapping roles of OmcA and MtrC [33]. To guarantee that the data obtained are due to the mutation and not to any other interacting factor, SOMR1 ∆OmcA ∆MtrC was used for all the in vivo experiments, including the negative (with empty pBBR1MCS-2 plasmid) and positive (with pBBR1MCS-2 plasmid containing native OmcA) controls. All the primers used in this study are listed in Table 1 and in [29], while all the plasmids used are presented in Table 2. All the constructs were confirmed by DNA sequencing (Eurofins, Germany). The transformation was achieved using electroporation [34,35]. The pBAD202/D-TOPO plasmid harboring the native omcA gene and the SOMR1 ∆OmcA ∆MtrC strain were kindly provided by Professor Johannes Gescher from Hamburg University of Technology, Germany.
Purification of OmcA Mutants

The mutant proteins OmcA_H1, OmcA_H3, OmcA_H4, OmcA_H6, OmcA_H8, OmcA_H9, and OmcA_H10, in which the distal histidine ligand of the respective heme was replaced by a methionine, were produced and purified as previously described [29]. The purity of the proteins was verified by a single band on SDS-PAGE and by an A408/A280 ratio above 5 measured by UV-visible spectroscopy. All the proteins were washed with 20 mM phosphate buffer, 100 mM KCl, at pH 7.6. This buffer was used for all experiments. The concentration of the proteins was determined by UV-visible spectroscopy using an ε408 nm of 125,000 M⁻¹ cm⁻¹ per heme for the oxidized state of the cytochrome. 1H 1D-NMR spectra were collected for the different mutants on a Bruker Avance II+ 500 MHz NMR spectrometer equipped with a 5 mm TCI C/N Prodigy cryoprobe. These experiments were performed at 25 °C.

Cyclic Voltammetry of Native OmcA and OmcA Mutants

Cyclic voltammetry (CV) was performed using a three-electrode cell configuration consisting of a pyrolytic graphite edge (PGE) working electrode (IJ Cambria Scientific, Llanelli, UK), an Ag/AgCl (3 M KCl) reference electrode, and a graphite rod counter electrode (IJ Cambria Scientific, Llanelli, UK). The experiments were performed inside an anaerobic chamber (Coy Laboratory Products) at 25 °C, controlled by an external bath. Before use, the PGE electrode was polished with aqueous Al₂O₃ slurry (1.0 µm), rinsed with water, and dried with a tissue before being exposed to the protein. For the experiments, 2 µL of the protein (concentration between 24 and 220 µM) was deposited onto the PGE electrode and left to dry. CV experiments were performed at a scan rate of 100 mV/s using CHI software. All potentials are reported with respect to the Standard Hydrogen Electrode (SHE) by the addition of 210 mV [36] to those measured. QSOAS (version 1.0) [37] was used to subtract the capacitive current from the raw electrochemical data.
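The concentration determination described above is a direct Beer–Lambert calculation; for a decaheme cytochrome such as OmcA, the per-heme extinction coefficient is multiplied by the number of hemes. A minimal sketch (the function name and the example absorbance are illustrative, not from the paper):

```python
def protein_conc_uM(a_408, eps_per_heme=125_000.0, n_hemes=10, path_cm=1.0):
    """Beer-Lambert law c = A / (eps * l), with the molar extinction
    coefficient scaled by the number of hemes in the protein."""
    eps_protein = eps_per_heme * n_hemes   # M^-1 cm^-1 for the whole protein
    conc_M = a_408 / (eps_protein * path_cm)
    return conc_M * 1e6                    # convert M -> uM

# An oxidized sample with A408 = 0.625 in a 1 cm cuvette:
print(round(protein_conc_uM(0.625), 3))   # 0.5 (uM)
```

The same arithmetic, with the appropriate extinction coefficient, applies to the electron shuttle stock solutions quantified later in the methods.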
Kinetic Experiments with Electron Shuttles

To explore the ability of the OmcA mutants to perform indirect electron transfer to different electron shuttles, kinetic experiments were performed using a stopped-flow apparatus (HI-TECH Scientific SF-61 DX2) installed inside an anaerobic chamber (M-Braun 150) containing less than 5 ppm of oxygen [27,29]. Four electron shuttles were tested: AQDS, flavin mononucleotide (FMN), riboflavin (RF), and phenazine methosulfate (PMS). These experiments were performed at 25 °C, and all the solutions were prepared inside the anaerobic chamber using degassed buffer (20 mM phosphate buffer, 100 mM KCl, at pH 7.6). The concentrations of the proteins were determined by UV-visible spectroscopy using an ε552 nm of 30,000 M⁻¹ cm⁻¹ per heme for the reduced state of the cytochrome [38]. The concentration of the electron shuttles was determined by UV-visible spectroscopy using an ε445 nm of 12,500 M⁻¹ cm⁻¹ for RF [39], ε445 nm of 12,200 M⁻¹ cm⁻¹ for FMN [40], ε326 nm of 5200 M⁻¹ cm⁻¹ for AQDS [41], and ε387 nm of 26,300 M⁻¹ cm⁻¹ for PMS [42]. To perform the kinetic experiments, reduced mutants OmcA_H4, OmcA_H6, and OmcA_H8, prepared with the addition of small volumes of a concentrated solution of sodium dithionite, were mixed with the different electron shuttles. Data were collected by measuring the light absorption changes at 552 nm as previously described [27]. For each mutant, the fully oxidized and fully reduced states of the protein were obtained by calibration with potassium ferricyanide and sodium dithionite, respectively. Data analysis was performed as previously explained [27].

Interaction Studies with FMN Using NMR

Interaction studies between FMN and OmcA mutants were performed as previously described for OmcA and mutants OmcA_H4, OmcA_H5, OmcA_H6, OmcA_H8, OmcA_H9, and OmcA_H10 [27,29].
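The ferricyanide/dithionite calibration used in the stopped-flow kinetics above brackets the 552 nm signal between its fully oxidized and fully reduced endpoints, so each absorbance reading can be converted to a reduced fraction by linear interpolation. A minimal sketch (the absorbance values are illustrative, not measured data):

```python
def reduced_fraction(a_552, a_ox, a_red):
    """Map an absorbance at 552 nm onto [0, 1] using the fully oxidized
    (ferricyanide) and fully reduced (dithionite) calibration endpoints."""
    return (a_552 - a_ox) / (a_red - a_ox)

# A trace relaxing from fully reduced toward fully oxidized:
trace = [0.90, 0.50, 0.18, 0.10]
fractions = [round(reduced_fraction(a, a_ox=0.10, a_red=0.90), 3) for a in trace]
print(fractions)   # [1.0, 0.5, 0.1, 0.0]
```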
Briefly, 100 µM FMN samples were titrated with increasing amounts of the target mutant protein, and 31P 1D-NMR spectra were recorded after each addition [27]. The NMR experiments, performed at 25 °C, were acquired on a Bruker Avance II 500 MHz NMR spectrometer equipped with a SEX probe. 31P 1D-NMR experiments were collected with proton decoupling and calibrated using phosphate buffer as an internal reference. Data analysis and determination of binding affinities were performed as previously described [29].

Reduction of Methyl Orange by S. oneidensis

The ability of native and mutant OmcA to perform EET was evaluated through the decolorization of methyl orange using living cells as previously described [43]. The experiments were performed in triplicate for each strain (e.g., SOMR1 ∆OmcA ∆MtrC strains carrying the pBBR1MCS-2 plasmid with mutated OmcA), using a 96-well plate with a flat bottom. Briefly, bacterial cells grown overnight in LB medium at 30 °C and 150 rpm were inoculated in SBM minimal medium supplemented with lactate (20 mM) and methyl orange (50 µM), previously de-aerated with N₂ for 15 min [43]. Decolorization of methyl orange was followed over time at 465 nm at 30 °C using a microplate spectrophotometer (Multiskan Sky Microplate Spectrophotometer, Thermo Scientific, Waltham, MA, USA). The preparation of the plate was conducted inside an anaerobic chamber (Coy Laboratory Products), and to maintain anaerobic conditions during the experiment, de-aerated Johnson oil was added to each well prior to sealing the plate with a disposable seal and lid. SOMR1 ∆OmcA ∆MtrC carrying pBBR1MCS-2 (SOMR1 ∆OmcA ∆MtrC/pBBR_empty) and the wild-type omcA gene cloned in this plasmid (SOMR1 ∆OmcA ∆MtrC/pBBR_OmcA) were used as controls. These experiments were repeated at least two times independently, and the results were reproducible between the strains.

Reduction of Electrodes by S. oneidensis

The electroactivity of SOMR1 ∆OmcA ∆MtrC containing pBBR1MCS-2 carrying the gene for the different mutant variants of OmcA (SOMR1 ∆OmcA ∆MtrC/pBBR_OmcA H1-H10) was evaluated by the ability of these strains to reduce screen-printed electrodes (SPEs), using a strategy similar to that described previously for Geobacter sulfurreducens [44]. Toward this, cells from the different strains, grown overnight in LB medium at 30 °C and 150 rpm, were harvested at 12,000 rpm for 1 min and resuspended in SBM minimal medium supplemented with lactate (20 mM) to achieve an OD600 nm between 0.8 and 1.0. Then, after de-aeration with N₂ for 15 min, 1 mL of S. oneidensis cell suspension was added to the SPE, using a sealed cap-tube inverted and fixed onto the SPE with hot glue (see Figure S1). All electrochemical assays were performed on an SPE C11L (Dropsens, Spain), which is composed of a three-electrode configuration with a carbon-ink working electrode (surface 0.126 cm²), a carbon counter electrode, and an Ag/AgCl reference electrode. The current in the chronoamperometry assays was measured every 120 s, and a fixed potential of 0.2 V was used for all the experiments. The experiments for each strain were performed at least in duplicate, and in each set of experiments, SOMR1 ∆OmcA ∆MtrC carrying pBBR1MCS-2 (SOMR1 ∆OmcA ∆MtrC/pBBR_empty) and this plasmid carrying the wild-type omcA gene (SOMR1 ∆OmcA ∆MtrC/pBBR_OmcA) were used as controls.

Not All OmcA Protein Mutant Variants Retain the Native Overall Structure

The replacement of the distal ligand of the hemes from histidine to methionine generally shifts the reduction potential of the heme to more positive values [45] and changes the electronic structure of the heme orbitals [46], but retains the coordination number and the spin state of the heme, unless steric hindrance prevents the methionine from binding to the iron.
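The chronoamperometry protocol described above (one current sample every 120 s at a fixed 0.2 V) yields a discrete current trace that can be normalized to the 0.126 cm² working electrode and integrated into a total transferred charge. A minimal sketch (the trace values are illustrative, not measured data):

```python
def total_charge_C(currents_A, dt_s=120.0):
    """Trapezoidal integral of a chronoamperometry trace sampled at a
    fixed interval dt_s, returning the transferred charge in coulombs."""
    q = 0.0
    for i0, i1 in zip(currents_A, currents_A[1:]):
        q += 0.5 * (i0 + i1) * dt_s
    return q

def current_density_uA_cm2(current_A, area_cm2=0.126):
    """Normalize a current to the carbon working electrode area."""
    return current_A * 1e6 / area_cm2

trace = [0.0, 1.0e-6, 2.0e-6, 2.0e-6]      # amperes, sampled every 120 s
q = total_charge_C(trace)                   # ~4.8e-4 C
j = current_density_uA_cm2(trace[-1])       # ~15.9 uA/cm^2
```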
In a previous work, the distal ligand of hemes 2, 5, 7, 9, and 10 of OmcA was modified to a methionine [29], and in this work, the modification was achieved for the remaining hemes. OmcA with a methionine as the distal ligand of hemes 4, 6, and 8 could be produced in high amounts (>10 mg/L culture), while OmcA with hemes 1 and 3 mutated could only be obtained in low amounts (<0.1 mg/L culture). An SDS-PAGE gel stained for c-type heme proteins [47] showed that S. oneidensis is able to produce mutants OmcA_H1 and OmcA_H3 (Figure S2), but the amount is significantly lower than that of the other mutants or native OmcA. The 1D 1H NMR spectrum of the downfield paramagnetically shifted signals of oxidized, low-spin heme proteins is dominated by heme methyl signals. Figure 1 shows that the spectra of mutants harboring a methionine as the distal ligand of heme 1 and heme 3 are severely disturbed, suggesting either partial unfolding of the protein or the co-existence of multiple conformations, consistent with the lower stability and yield of these variants. This could be explained by the position of the distal ligands of both hemes, which are close to each other (Figure S3), suggesting that the coordination of these hemes is important to stabilize the folding of the protein. Given the low yield obtained for these OmcA mutants, these proteins were not studied further.
Although the 1D 1H NMR spectrum of OmcA_H4 is similar to that of OmcA_H3, it is clear that the replacement of the histidine by methionine in heme 4 did not affect the stability of the protein, given the high amount of protein obtained for this mutant. This indicates that the structural modifications that occur in OmcA_H4 are different from those obtained for mutants OmcA_H1 and OmcA_H3 and do not affect the stability of the protein. For the mutants OmcA_H6 and OmcA_H8, most signals appeared at the same frequency as in the native protein, with only a few signals being affected (Figure 1). Given the exquisite sensitivity of paramagnetic shifts to structural changes, these observations indicate that the overall folding of the protein was retained for these OmcA mutants and that the signals with major changes are likely from the heme for which the axial ligand was mutated (Figure 1).

Mutations in the Axial Ligands of the Hemes Change the Reduction Potential of the Individual Redox Centers of OmcA

To evaluate the direct electron transfer of native OmcA and its mutants, protein film voltammetry was used. This technique allows the investigation of the reduction and oxidation processes of molecular species, providing information regarding their electron transfer mechanisms. The cyclic voltammograms of native OmcA and all the produced mutants are presented in Figure 2. Native OmcA titrates between 0 and −400 mV vs. SHE, in line with what was reported in the literature using UV-visible spectroelectrochemistry [48]. Although the mutations did not affect the overall potential window in which the protein is electrochemically active (between 0 and −400 mV vs.
SHE), the shape of the voltammogram differs between the mutants (Figure 2). It is clear that the substitution of the histidine axial ligand by a methionine affects the thermodynamic properties of OmcA. To date, the determination of the reduction potential of individual hemes has only been accomplished in multiheme cytochromes with up to six hemes [49]. This approach can only be performed for proteins in which the redox transitions of the individual hemes can be discriminated, which is mainly achieved by NMR [50]. For OmcA, this discrimination has not yet been performed, given the high number of hemes and the size of the protein. For this reason, it is not possible to determine which heme(s) was affected by a mutation, or whether it was the one where the mutation was introduced or a nearby heme.

Mutation of the Axial Ligand of the Respective Heme Affects Electron Transfer Rates from OmcA to Soluble Acceptors

Kinetics of oxidation of OmcA mutants OmcA_H4, OmcA_H6, and OmcA_H8 by the electron shuttles FMN, RF, PMS, and AQDS were studied using stopped flow as previously described [27]. These four compounds represent the chemical and electrostatic diversity of electron shuttles that can be encountered by S. oneidensis in the environment [27,29]. The oxidation of the OmcA mutants by the different electron shuttles occurred on the millisecond timescale, with most of the reactions occurring within the dead time of the stopped flow (Figure 3). The extent of oxidation of the protein by each redox shuttle is determined by the value of the reduction potential of the shuttle versus the potentials of the various hemes. PMS has a positive reduction potential, which allows OmcA (native and mutants) to achieve nearly complete oxidation (i.e., a reduced fraction of 0), because all hemes in OmcA have a lower potential and, therefore, PMS is capable of extracting electrons from all of them.
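The argument above is thermodynamic: at equilibrium with an excess of shuttle, each one-electron heme ends up oxidized or reduced according to the gap between its midpoint potential and the potential at which the shuttle poises the solution. A minimal Nernst-equation sketch (the potentials used are illustrative, not fitted values from the paper):

```python
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def heme_reduced_fraction(e_poise_mV, e0_heme_mV, temp_K=298.15):
    """One-electron Nernst equation: fraction of a heme left reduced when
    the solution is poised at e_poise_mV by an excess of redox shuttle."""
    x = F * (e_poise_mV - e0_heme_mV) / 1000.0 / (R * temp_K)
    return 1.0 / (1.0 + math.exp(x))

# A shuttle poised well above the heme potential oxidizes it almost fully:
print(heme_reduced_fraction(80.0, -200.0) < 1e-4)       # True
# A shuttle poised at the heme midpoint leaves it half reduced:
print(round(heme_reduced_fraction(-200.0, -200.0), 3))  # 0.5
```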
Nonetheless, the replacement of the histidine by a methionine in heme 8 led to a 10% increase in the residual reduction of OmcA, which may indicate that this heme has been brought to a potential closer to that of PMS by the mutation. FMN and RF have similar reduction potentials and, therefore, oxidize OmcA and its mutants to the same extent, indicating that none of the mutations has raised the potential of a heme above that of these two mediators. AQDS has a slightly higher reduction potential than FMN and RF. Interestingly, AQDS is able to oxidize OmcA mutated in heme 4 to less than 30%, which is lower than the 40% observed for the wild-type protein [29], indicating that the mutation has lowered the potential of at least one heme from a value above that of AQDS in the native protein to one below it in the mutant. The fact that the extent of reduction of OmcA achieved by FMN and RF remains the same as for the native protein means that the affected heme had its potential lowered to a value between those of AQDS and FMN/RF. Replacement of a histidine by a methionine in the axial coordination of a heme iron, in the absence of other changes, leads to an increase of the reduction potential due to the extra stabilization of the Fe(II) state by the coordinating sulfur versus the nitrogen. The fact that the opposite is observed strongly suggests that this mutation, although still compatible with a protein that has not lost its overall fold, has made one heme, which is not necessarily heme 4, more solvent exposed and, therefore, of lower potential. This functional observation is in line with the numerous changes observed in the NMR spectrum of mutant OmcA_H4 (Figure 1).
Figure 3. Kinetics of oxidation of native OmcA (from [29]) and mutants OmcA_H4, OmcA_H6, and OmcA_H8 by RF (orange), FMN (green), AQDS (blue), and PMS (red). The cytochrome concentration was 0.5 µM, 1.3 µM, and 0.6 µM for mutants OmcA_H4, OmcA_H6, and OmcA_H8, respectively.

Mutations in OmcA Did Not Significantly Affect the Binding of FMN

The effect of the mutations on hemes 4, 5, 6, 8, 9, and 10 on the binding of FMN was explored by 31P 1D-NMR as previously described [29].
For all the mutants tested, upon protein binding, the phosphorus signal shifts position and broadens, indicating an interaction between FMN and the protein in a fast regime on the NMR timescale (Figure S4A). The fitting of the data with the binding model previously described [29] (Figure S4B) shows that weak transient interactions occur between OmcA and FMN. The values of the dissociation constants of the different mutants (Table 3) are all typical of electron transfer reactions between cytochromes and their physiological partners [27,51]. Interestingly, the dissociation constants obtained for the mutants OmcA_H4, OmcA_H5, OmcA_H6, OmcA_H7, and OmcA_H8 indicate slightly weaker binding than native OmcA, while for mutants OmcA_H2, OmcA_H9, and OmcA_H10, the values are more similar to that obtained for native OmcA (Table 3). Nonetheless, the differences observed are not expected to have a significant impact on the electron transfer processes, given that the values are all in the sub-millimolar range.

The Electroactivity of the Different OmcA Mutants in S. oneidensis Generally Matches the Reactivity with Methyl Orange

The capacity of the S. oneidensis strains carrying different OmcA mutants to perform extracellular electron transfer was evaluated by the rate at which they decolorize methyl orange (Figure 4) and by the current produced at an electrode (Figures 5 and S5). It has been demonstrated that outer-membrane cytochromes play a key role in the reduction of methyl orange [52,53], given that this azo dye does not cross the outer membrane of Shewanella. Indeed, the decolorization of methyl orange by the S.
oneidensis strain lacking both OmcA and MtrC (pBBR_empty in Figure 4) occurs at a slower rate when compared with the strain containing native OmcA (pBBR_OmcA in Figure 4). This clearly shows that OmcA plays a significant role in the reduction of methyl orange at the cell surface of S. oneidensis. Given that the decolorization is not completely abolished when both MtrC and OmcA are absent from the cell surface, there must be other processes or other proteins that also contribute to the decolorization of this azo dye. Most of the OmcA mutants decolorize methyl orange at the same rate as native OmcA. S. oneidensis containing OmcA_H1, OmcA_H3, and OmcA_H6 decolorized methyl orange at a slower rate, while S. oneidensis carrying OmcA_H10 behaves similarly to the strain lacking both OmcA and MtrC (Figure 4). This suggests that these mutations may affect the reactivity of the protein with methyl orange. When the capacity of the different strains to transfer electrons to an electrode was evaluated, the behavior was found to be similar to that obtained with methyl orange. From these assays, it is seen that the SOMR1 ∆OmcA ∆MtrC strain carrying the empty plasmid produced less current than the strain containing native OmcA (Figure S4), confirming the importance of OmcA in electron transfer to an electrode. S. oneidensis strains carrying OmcA_H1 and OmcA_H3 produce approximately half of the current density obtained for the native protein (Figure 5). This is likely a consequence of these proteins being less structured, and of their amount in S. oneidensis cells being significantly lower than for the other mutants or native OmcA (Figure S2). As observed for methyl orange, the mutation of the axial ligand of heme 10 prevented electron transfer from OmcA to electrodes.
Given that this mutation did not affect the growth of the strain under anaerobic conditions with methyl orange (Figure S6), the overall folding of the protein, its global redox properties, or its ability to reduce soluble electron shuttles [29], it is possible that the mutation affected the binding to insoluble electron acceptors, including electrodes, disrupting the electron transfer event. Indeed, heme 10 is at the edge of the protein (see Figure S3) and was proposed to be responsible for the interaction with minerals and metal ions [28]. Interestingly, SOMR1 ∆OmcA ∆MtrC containing OmcA_H6 reduced the electrode at a rate similar to that of the strain containing native OmcA (Figure S5). This result differs from that observed with methyl orange and suggests that the binding of methyl orange to OmcA occurs near heme 6.

Discussion

Multiheme cytochromes are key players in the EET processes of numerous electroactive organisms. Although amino acid substitutions in these proteins are known to affect protein folding and their mode of action [6,29], such studies have so far only been performed in vitro. Information on the factors that control electron transfer processes in living organisms is crucial for genetically manipulating them toward improved properties. In this work, we demonstrated that amino acid substitutions can modulate electron transfer, either by changing the redox properties of the protein or by affecting protein folding or the binding process. By replacing the distal axial ligand of each heme of OmcA, we showed that this outer-membrane cytochrome is functionally resilient, and that although some of the mutations affect protein folding and stability, it still sustains the ability of the organism to perform electron transfer to soluble and insoluble electron acceptors.
Among the ten protein mutant variants studied, only the substitution of the distal axial ligand of heme 10, present at the surface of the protein, affected the physiological function of the protein, preventing S. oneidensis from transferring electrons to methyl orange and electrodes. Furthermore, the replacement of the histidine of heme 6 with a methionine impacts the electron transfer process to methyl orange but not to electrodes, suggesting that this heme is somehow involved in the specific process of electron transfer to methyl orange. These two observations are in line with the proposal that the staggered cross architecture of OmcA and its homologues is designed to set functional specificity to the various hemes [27].
A Critical History of Renormalization

The history of renormalization is reviewed with a critical eye, starting with Lorentz's theory of radiation damping, through perturbative QED with Dyson, Gell-Mann & Low, and others, to Wilson's formulation and Polchinski's functional equation, and applications to "triviality" and dark energy in cosmology.

Dedication

Renormalization, that astounding mathematical trick that enabled one to tame divergences in Feynman diagrams, led to the triumph of quantum electrodynamics. Ken Wilson made it physics, by uncovering its deep connection with scale transformations. The idea that scale determines the perception of the world seems obvious. When one examines an oil painting, for example, what one sees depends on the resolution of the instrument one uses for the examination. At resolutions of the naked eye, one sees art, perhaps, but upon greater and greater magnifications, one sees pigments, then molecules and atoms, and so forth. What is non-trivial is to formulate this mathematically, as a physical theory, and this is what Ken Wilson achieved. To remember him, I recall some events at the beginning of his physics career. I first met Ken around 1957, when I was a fresh assistant professor at M.I.T., and Ken a Junior Fellow at Harvard's Society of Fellows. He had just gotten his Ph.D. from Cal. Tech. under Gell-Mann's supervision. In his thesis, he obtained exact solutions of the Low equation, which describes π-meson scattering from a fixed-source nucleus. (He described himself as an "aficionado" of the equation.) I had occasion to refer to this thesis years later, when Francis Low and I proved that the equation does not possess the kind of "bootstrap solution" that Geoffrey Chew advocated [2,3]. While at the Society of Fellows, Ken spent most of his time at M.I.T. using the computing facilities. He was frequently seen dashing about with stacks of IBM punched cards used then for Fortran programming.
He used to play the oboe in those days, and I played the violin, and we had talked about getting together to play the Bach concerto for oboe and violin with a pianist (for we dared not contemplate an orchestra), but we never got around to that. I had him over for dinner at our apartment on Wendell Street in Cambridge, and received a thank-you postcard a few days later, with an itemized list of the dishes he liked. At the time, my M.I.T. colleague Ken Johnson was working on nonperturbative QED, on which Ken Wilson had strong opinions. One day, when Francis Low and I went by Johnson's office to pick him up for lunch, we found the two of them in violent argument at the blackboard. So Francis said, "We'll go to lunch, and leave you two scorpions to fight it out." That was quite a while ago, and Ken went on to do great things, including the theory of renormalization that earned him the Nobel Prize of 1982. In this article, I attempt to put myself in the role of a "physics critic" on this subject. I will concentrate on ideas, and refer technical details to [4,5]. While Ken's work had a strong impact on the theory of critical phenomena, I concentrate here on particle physics.

Lorentz: electron self-force and radiation damping

After J.J. Thomson discovered the first elementary particle, the electron [6], the question naturally arose about what it was made of. Lorentz ventured into the subject by regarding the electron as a uniform charge distribution of radius a, held together by unknown forces. As indicated in Fig.1, the charge elements of this distribution exert Coulomb forces on each other, but these do not cancel out, due to retardation. Thus, there is a net "self-force", and Lorentz obtained it in the limit a → 0 [7] (reconstructed here in its standard form):

F_self = −m_self (d²x/dt²) + (2e²/3c³)(d³x/dt³) + O(a).

Internal Coulomb interactions give rise to a "self-mass",

m_self = 2e²/(3ac²),

which diverges linearly when a → 0. This was the first occurrence of the "ultraviolet catastrophe", which befalls anyone toying with the inner structure of elementary particles. Fig1.
Modeling the classical electron as a charge distribution of radius a. The Coulombic forces between charge elements do not add up to zero, because of retardation: consequently, there is a "self-force", featuring a "self-mass" that diverges in the limit a → 0, but can be absorbed into the physical mass. The finite remainder gives the force of radiation damping.

One notices with great relief that the self-mass can be absorbed into the physical mass in the equation of motion

(m₀ + m_self)(d²x/dt²) = F_ext + (2e²/3c³)(d³x/dt³),

where m₀ is the bare mass. One can take the physical mass from experiments, and write

m (d²x/dt²) = F_ext + (2e²/3c³)(d³x/dt³),

with m = m₀ + m_self. One imagines that the divergence of m_self is cancelled by m₀, which comes from the unknown forces that hold the electron together. This is the earliest example of "mass renormalization". Thus, the (d³x/dt³) term, the famous radiation damping, is exact in the limit a → 0 within the classical theory. Of course, when a approaches the electron Compton wavelength, this model must be replaced by a quantum-mechanical one, and this leads us to QED (quantum electrodynamics).

The triumph of QED

Modern QED took shape soon after the advent of the Dirac equation in 1928 [8], and the hole theory in 1930 [9]. These theories make the vacuum a dynamical medium containing virtual electron-positron pairs. Weisskopf [10] was the first to investigate the electron self-energy in this light, and found that screening by induced pairs reduces the linear divergence of the Lorentz theory to a logarithmic one² [11,12]. Heisenberg, Dirac, and others [13][14][15][16] studied the electron's charge distribution due to "vacuum polarization", i.e., momentary charge separation in the Dirac vacuum. The unscreened "bare charge" was found to be divergent, again logarithmically. A sketch of the charge distribution of the electron is shown in Fig.2. The mildness of the logarithmic divergence played an important role in the subsequent renormalization of QED, but that development was delayed for a decade by World War II. Fig2.
Charge density of the bare electron (left) and that of the physical electron, which is "dressed" by virtual pairs induced in the Dirac vacuum (vacuum polarization). The bare charge is logarithmically divergent.

The breakthrough in QED came in 1947, with the measurements of the Lamb shift [17] and the electron anomalous moment [18]. At the first post-war physics conference at Shelter Island, Long Island, NY, June 2-4, 1947, participants thrashed out QED issues. (Fig.3 shows a group picture.) Bethe [19] made an estimate of the Lamb shift immediately after the conference (reportedly on the train back to Ithaca, NY), by implementing charge renormalization in addition to Lorentz's mass renormalization. This pointed the way to the successful calculation of the Lamb shift [17-19] in lowest-order perturbation theory. As for the electron anomalous moment, Schwinger [23] calculated it to lowest order as α/2π, where α is the fine-structure constant, without encountering divergences.

² Weisskopf was then Pauli's assistant. According to his recollection (private communication), he made an error in his first paper and got a quadratic divergence. One day, he got a letter from "an obscure physicist at Harvard" by the name of Wendell Furry, who pointed out that the divergence should have been logarithmic. Greatly distressed, Weisskopf showed Pauli the letter, and asked whether he should "quit physics". The usually acerbic Pauli became quite restrained at moments like this, and merely huffed, "I never make mistakes!"

Dyson made a systematic study of renormalization in QED in perturbation theory [24]. The dynamics of QED can be described in terms of scattering processes. In perturbation theory, one expands the scattering amplitude as a power series in the electron bare charge e₀ (the charge that appears in the Lagrangian). Terms in this expansion are associated with Feynman graphs, which involve momentum-space integrals that diverge at the upper limit.
To work with them, one introduces a high-momentum cutoff Λ. Dyson showed that mass and charge renormalization remove all divergences, to all orders of perturbation theory. The divergences can be traced to one of three basic divergent elements in Feynman graphs, contained in the full electron propagator S′, the full photon propagator D′, and the full vertex Γ. They can be reduced to standard forms, which must be regarded as power series expansions in e₀². The divergent elements are Σ, Π, Λ*, called respectively the self-energy, the vacuum polarization, and the proper vertex part. The Feynman graphs for these quantities are shown in Fig.4, and they are all logarithmically divergent. Thus, one subtraction will suffice to render them finite.

Fig.4. The basic divergent elements in Feynman graphs. They are all logarithmically divergent.

The divergent part of Σ can be absorbed into the bare mass m₀, as in the Lorentz theory. What is new is that the divergent subtracted part of Π can be converted into a multiplicative charge renormalization, whereby e₀ is replaced by the renormalized charge e = Ze₀. The divergence in Λ* can be similarly disposed of. We illustrate how this happens to lowest order. The electron charge can be defined via the electron-electron scattering amplitude, which is given in QED by the Feynman graphs in Fig.5, in which the two electrons exchange a photon. We can write the photon propagator in a form in which k is the 4-momentum transfer.

Fig.5. Electron-electron scattering. The limit of zero 4-momentum transfer k→0 defines the electron charge.

To lowest order, the vacuum polarization is given by an expression in which Λ is the high-momentum cutoff, and m is the electron mass. (To this order it does not matter whether it is the bare mass or the renormalized mass.) The first term is logarithmically divergent when Λ→∞, and the term R is convergent. One subtraction at some momentum μ makes Π convergent. Both Z and Z⁻¹ are power series with divergent coefficients, and both diverge when Λ→∞.
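The standard forms referred to above are, in textbook notation (a reconstruction under the usual conventions, since the displayed equations did not survive extraction):

```latex
S'(p) = \frac{1}{\gamma\cdot p - m_0 - \Sigma(p)},\qquad
D'(k) = \frac{1}{k^2\left[\,1 - \Pi(k^2)\,\right]},\qquad
\Gamma_\mu(p',p) = \gamma_\mu + \Lambda^*_\mu(p',p),
```

each understood as a power series in e₀², with Σ, Π, Λ*_μ the self-energy, vacuum polarization, and proper vertex part.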
The combination e₀²Z(μ²) gives a renormalized fine-structure constant, and the physical fine-structure constant corresponds to zero momentum transfer. We see that the subtraction of Π(μ²) in (8) has been turned into a multiplication by Z(μ²) in (11); but only to order e₀⁴ in perturbation theory. Dyson proved the seeming miracle that this holds order by order, to all orders of perturbation theory.

Gell-Mann & Low: it's all a matter of scale

Gell-Mann and Low [25] reformulated Dyson's renormalization program using a functional approach, in which the divergent elements Σ, Π, Λ* are regarded as functionals of one another, and functional equations for them can be derived from general properties of Feynman graphs. The divergent parts of these functionals can be isolated via subtractions, and the subtracted parts can be absorbed into multiplicative renormalization constants, by virtue of the behaviors of the functionals under scale transformations.

Fig.6. The degrees of freedom of the system at momenta higher than the cutoff Λ are omitted from the theory, by definition. The degrees of freedom between Λ and the sliding renormalization point μ are "hidden" in the renormalization constants. Thus, μ is an effective cutoff, representing the scale at which one is observing the system.

One sees the cutoff Λ in a new light, as a scale parameter. In fact, it is the only scale parameter in a self-contained theory. When one performs a subtraction at momentum μ, and absorbs the Λ-dependent part into renormalization constants, one effectively lowers the scale from Λ to μ. The degrees of freedom between Λ and μ are not discarded, but hidden in the renormalization constants; the identity of the theory is preserved. The situation is illustrated in Fig.6. The renormalized charge to order α², and for |k|² ≫ m², is given by (13). This is called a "running coupling constant", because it depends on the momentum scale k. It has been measured at a high momentum [26], as given in (14), where k₀ ≈ 91.2 GeV.
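As a sanity check on the running coupling (13), the one-loop expression with a single electron loop can be evaluated numerically (a sketch; the measured value at k₀ ≈ 91.2 GeV also receives contributions from the other charged fermions, so this single-loop estimate understates the full running):

```python
import math

ALPHA_0 = 1 / 137.036      # fine-structure constant at zero momentum
M_E = 0.000511             # electron mass in GeV

def alpha_running(k):
    """One-loop QED running coupling with a single electron loop:
    alpha(k^2) = alpha / (1 - (alpha/3pi) ln(k^2/m^2)), valid for k >> m."""
    log = math.log(k**2 / M_E**2)
    return ALPHA_0 / (1 - ALPHA_0 * log / (3 * math.pi))

k0 = 91.2  # GeV
print(1 / alpha_running(k0))   # ~134.5: the coupling has grown from 1/137
```

The coupling strengthens slowly (logarithmically) with momentum, in line with the sign of the QED β-function discussed below.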
The Fourier transform of α(k²) gives the electrostatic potential of an electron [27]. As expected, it approaches the Coulomb potential e r⁻¹ as r→∞, where e is the physical charge. For r ≪ ħ/mc, it is given by an expression involving r₀ = (ħ/mc)(e^{5/6}γ)⁻¹, with γ ≈ 1.781. We see that the bare charge e₀ of the electron, namely that residing at the center, diverges like ln(1/r). Gell-Mann and Low [25] give the following physical interpretation of charge renormalization: A test body of "bare charge" q₀ polarizes the vacuum, surrounding itself by a neutral cloud of electrons and positrons; some of these, with a net charge δq of the same sign as q₀, escape to infinity, leaving a net charge −δq in the part of the cloud which is closely bound to the test body (within a distance of ħ/mc). If we observe the body from a distance much greater than ħ/mc, we see an effective charge q = q₀ − δq, the renormalized charge. However, as we inspect more closely and penetrate through the cloud to the core of the test charge, the charge that we see inside approaches the bare charge q₀ concentrated at a point at the center.

Asymptotic freedom

The running coupling constant "runs" at a rate described by the β-function (introduced as ψ by Gell-Mann and Low), defined in (16). For QED we can calculate this from (13) to lowest order in α, with the result (17). That this is positive means that α increases with the momentum scale. But it has the opposite sign in QCD (quantum chromodynamics) [28,29], as shown in (18), where α here is the analog of the fine-structure constant, and N_f = 6 is the number of quark flavors. Thus, QCD approaches a free theory in the high-momentum limit. This is called "asymptotic freedom". QCD is a gauge theory like QED, but there are 8 "color" charges and 8 gauge bosons, called gluons. Unlike the photon, which is neutral, the gluons carry color charge. When a bare electron emits or absorbs a photon, its charge distribution does not change, because the photon is neutral.
In contrast, when a quark emits or absorbs a gluon, its charge center is shifted, since the gluon is charged. Consequently, the "dressing" of a bare quark smears out its charge to a distribution without a central singularity. As one penetrates the cloud of vacuum polarization of a dressed quark, one sees less and less charge inside, and finally nothing at the center. This is the physical origin of asymptotic freedom. Fig.7 shows a comparison between the dressed electron and the dressed quark, with relevant Feynman graphs that contribute to the dressing. In the standard model of particle physics, there are three forces, the strong, electromagnetic, and weak, whose strengths can be characterized respectively by α_QCD, α_QED, α_Weak, standing at low momenta in the approximate ratio 10 : 10⁻² : 10⁻⁵. While α_QCD is asymptotically free, the other two are not. Consequently α_QCD will decrease with the momentum scale, whereas the other two increase. Extrapolation of present trends indicates that they would all meet at about 10¹⁷ GeV, as indicated in Fig.8. This underlies the search for a "grand unified theory" at that scale.

Fig.7. Comparison between a dressed electron and a dressed quark. There is a point charge at the center of the dressed electron, but none in the dressed quark, for it has been smeared out by the gluons, which are themselves charged. Lower panels show the relevant Feynman graphs. For the quark, there are two extra graphs arising from gluon-gluon interactions.

Fig.8. Extrapolation of the running coupling constants for the strong, electromagnetic, and weak interactions indicates that they would meet at a momentum k ≈ 10¹⁷ GeV, giving rise to speculations of a "grand unification".

The renormalization group (RG)

The transformations of the scale μ form a group, and the running coupling constant α(μ²) gives a representation of this group, which was named the RG (renormalization group) by Bogoliubov [30]. The β-function is a "tangent vector" to the group.
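The opposite signs of the QED and QCD β-functions, (17) and (18), can be made concrete in a few lines (one-loop expressions only; normalization conventions vary, but the signs do not):

```python
import math

def beta_qed(alpha):
    """One-loop QED beta function, 2*alpha^2/(3*pi): positive."""
    return 2 * alpha**2 / (3 * math.pi)

def beta_qcd(alpha, n_f=6):
    """One-loop QCD beta function, -(alpha^2/2pi)*(11 - 2*n_f/3): negative
    as long as n_f < 17, in particular for the 6 known quark flavors."""
    return -(alpha**2 / (2 * math.pi)) * (11 - 2 * n_f / 3)

print(beta_qed(1 / 137.0) > 0)   # True: the QED coupling grows with momentum
print(beta_qcd(0.118) < 0)       # True: QCD is asymptotically free
```

With more than 16 quark flavors the QCD sign would flip, which is why asymptotic freedom is a statement about the actual particle content, not about gauge theory in general.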
By integrating (16), we obtain a relation expressing the change in scale as an integral of 1/β over the coupling. As μ→∞, the left side diverges, and therefore α(μ²) must approach a zero of the β-function, i.e., a fixed point. Fig.9 shows plots of the β-function for QED and QCD. As the momentum scale k increases, α(k²) runs along the direction of the arrows determined by the sign of β. For QED, α increases with k, and since perturbation theory becomes invalid at high k, we lose control over high-energy QED. For QCD, on the other hand, α runs towards the UV fixed point at zero, perturbation theory becomes increasingly accurate, and we have a good understanding of this regime. The other side of the coin is that QCD becomes a hard problem at low energies, where it exhibits quark confinement. The plots clarify the relation between the cutoff scale Λ that defines the bare system, and the effective scale μ, which defines the renormalized system. We now have a better understanding of what can be done with the original cutoff Λ. Being a scale parameter, Λ is determined by (19), and the limit Λ→∞ can be achieved only by moving the bare coupling onto a UV fixed point. In QED, on the other hand, there is no known fixed point except the one at the origin. In practice, one keeps Λ finite; its precise value is not important. In this way, one can perform calculations that agree with experiments to one part in 10¹², in the case of the electron anomalous moment [29,30]. If one insists on making Λ infinite, one must make α(Λ²) = 0, but that makes α(μ²) = 0 for all μ < Λ, and one has a trivial free theory. We will expand on this "triviality problem" later. Particle theorists have a peculiar sensitivity to the cutoff, because they regard it as a stigma that exposes an imperfect theory. In the early days of renormalization, when the cutoff was put out of sight by renormalization, some leaped to declare that the cutoff had been "sent to infinity". That, of course, cannot be done by fiat. Only in QCD can one achieve that, owing to asymptotic freedom.
A more general statement of renormalization refers to any correlation function G′, which factorizes as G′ = Z*G, where p collectively denotes all the external momenta, Z* is a dimensionless renormalization constant that diverges when Λ→∞, μ is an arbitrary momentum scale less than Λ, α(μ²) is given by (11), and G is a convergent correlation function. Since the left side is independent of μ, we recover (16), and γ(μ) = μ(∂/∂μ) ln Z*(Λ/μ, α₀) is called the "anomalous dimension". This shows how renormalization accompanies a scale transformation, so as to preserve the basic identity of the theory.

The Landau ghost

Between the great triumph of quantum field theory in QED in 1947, and the emergence of the standard model of particle physics around 1975, particle theorists wandered like Moses in some desert, for nearly three decades. During that time they became disenchanted with quantum field theory, because the great hope they had pinned on the theory to explain the strong interactions did not materialize³. There was a feeling that something crazy was called for, like quantum mechanics⁴, or maybe the "bootstrap" [2,3]. Landau thought he had at least disposed of quantum field theory by exposing a fatal flaw. Substituting (16) into (19) and performing the integration, one obtains an explicit formula for the running coupling. This is supposed to be an improvement on (13), equivalent to summing a certain class of Feynman graphs, the so-called "leading logs", with terms of the form (e₀²lnΛ)ⁿ. Landau [34] pointed out that there is a pole with negative residue. This represents a photon excited state, whose wave function has negative squared modulus, and is called a "ghost state". Its mass is of order 10³⁰⁰ m. It can be shown that Λ < k_ghost, and thus the ghost occurs only if we continue the theory beyond the preset cutoff. However, if one insists on making Λ→∞, one must push the ghost to infinity, and this means α→0, leading to a trivial theory.
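The location of the Landau pole follows from setting the denominator of the one-loop running coupling to zero; a quick estimate (electron loop only, so the exponent comes out somewhat below the order-of-magnitude 10³⁰⁰ quoted in the text, which depends on what is summed; either way the scale is absurdly remote):

```python
import math

ALPHA_0 = 1 / 137.036

# The one-loop alpha(k^2) = alpha / (1 - (alpha/3pi) ln(k^2/m^2)) blows up
# when ln(k_ghost^2/m^2) = 3pi/alpha, i.e. k_ghost = m * exp(3pi/(2*alpha)).
exponent = 3 * math.pi / (2 * ALPHA_0)   # ln(k_ghost / m)
decades = exponent / math.log(10)        # log10(k_ghost / m)
print(round(decades))                    # 280
```

So the ghost sits hundreds of orders of magnitude beyond any conceivable physical cutoff, which is why keeping Λ finite loses nothing in practice.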
Landau said that this possibility exposes a fundamental flaw in quantum field theory⁵, which "should be buried with honors". The triviality problem also occurs in other theories, for example the scalar φ⁴ Higgs field in the standard model. Earlier, it was found in the Lee model [35], an exactly soluble model of meson scattering. Källén and W. Pauli [36] showed that the ghost state renders the S-matrix non-unitary, and this pathology cannot be cured by redefining Hilbert space to admit negative norms.⁶ We shall see that the triviality problem is a general property of IR fixed points. The moral is: to get an infinite cutoff, get yourself a UV fixed point! Quantum field theory did not die, but bounced back with a vengeance, in the form of Yang-Mills gauge theory in the standard model.

³ …called the delta baryon), and said, "I will not understand this in my lifetime." Dyson talked about the so-called "Tamm-Dancoff approximation" for pion-nucleon scattering, and said, "We will not understand this problem in a hundred years."

⁴ In 1958, Heisenberg and Pauli proposed a "unified field theory". Pauli gave a seminar at Columbia University with Niels Bohr in attendance. When the seminar began, Bohr said, "To be right, the theory had better be crazy". Pauli said, "It's crazy! You will see. It's crazy!" The theory turns out to be a version of the four-fermion interaction.

⁵ Apparently, Landau considered the ghost state a hallmark of quantum field theories. He reportedly calculated the β-function of Yang-Mills theory (on which QCD is based), but made a sign error, and missed asymptotic freedom.

Renormalizability

Renormalization in perturbation theory hinges on the degree of divergence K of Feynman graphs, which is determined via a power-counting procedure. It depends on the form of the coupling: how many lines meet at a vertex, etc. Renormalization in QED relies on the fact that the interaction ψ̄γ^μψ A_μ gives K = 0 (logarithmic divergence).
One can imagine interactions that would give K > 0, and those would be non-renormalizable. An example is the 4-fermion interaction. Such considerations are based on the presumption that each new coupling brings in its own scale. In a self-contained system, however, the cutoff Λ sets the only scale, and all coupling constants must be proportional to an appropriate power of Λ. When this is taken into account in the power counting, what was considered a non-renormalizable interaction can become renormalizable. If all coupling constants are made dimensionless in this manner, then they can arise freely under scale transformations, and the system need not be self-similar to be renormalizable. As an illustration, consider scalar field theory with a Lagrangian density of the form (25) (with ħ = c = 1). The theory is called φ^M theory, where M is the highest power that occurs. Each coupling g_n corresponds to a vertex in a Feynman graph, at which n lines meet, and each line carries momentum. The momenta of the internal lines are integrated over, and produce divergences. Thus, each Feynman graph is proportional to Λ^K, with a degree of divergence K that can be found by a counting procedure. The relation between K and topological properties of Feynman graphs, such as the number of vertices and internal lines, determines renormalizability. It was said conventionally that only the φ⁴ theory is renormalizable. This determination, however, assumes that the g_n are arbitrary parameters. The dimensionality of g_n in d-dimensional space-time is

[g_n] = d − n(d − 2)/2 (26)

Treating the g_n as independent would mean that each brings into the system an independent scale. But the only intrinsic length scale in a self-contained system is the inverse cutoff Λ⁻¹. Thus each g_n should be scaled with the appropriate power of Λ:

g_n = u_n Λ^{d − n(d−2)/2} (27)

so that u_n is dimensionless. When this is done, the cutoff dependence of g_n enters into the power counting, and all φ^M theories become renormalizable [37].
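The power counting above can be sketched in a few lines, using the standard superficial-degree-of-divergence formula K = d − E(d−2)/2 + Σ_n V_n [g_n] (the helper names below are illustrative, not from the paper):

```python
def coupling_dimension(n, d):
    """Mass dimension of g_n for a phi^n vertex in d space-time dimensions,
    as in (26): [g_n] = d - n(d-2)/2."""
    return d - n * (d - 2) / 2

def superficial_divergence(d, external_legs, vertices):
    """Superficial degree of divergence K of a Feynman graph.
    `vertices` maps n -> number of phi^n vertices in the graph."""
    return (d - external_legs * (d - 2) / 2
            + sum(v * coupling_dimension(n, d) for n, v in vertices.items()))

# phi^4 in d=4: the coupling is dimensionless, so K = 4 - E for any graph,
# independent of the number of vertices.
print(superficial_divergence(4, external_legs=4, vertices={4: 3}))  # 0.0 (log)
print(superficial_divergence(4, external_legs=2, vertices={4: 2}))  # 2.0
```

With the rescaling (27), every u_n is dimensionless by construction, so the vertex term drops out of the counting in the same way for any φ^M interaction; the divergence is then bounded by the number of external legs alone.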
With the scaling (27), one can construct an asymptotically free scalar field, one that is free from the triviality problem. For an N-component scalar field in d=1, V(φ) is uniquely given by the Halpern-Huang potential [38] (28), where c, b are arbitrary constants, and M(a,b;z) is the Kummer function, which has exponential behavior for large fields:

M(p, q, z) ≈ [Γ(q)/Γ(p)] z^{p−q} e^z (29)

The theory is asymptotically free for b > 0. This has applications in the Higgs sector of the standard model and in cosmology, to be discussed later. Not all theories are renormalizable, even with the scaling of coupling constants. There is a true spoiler, namely, the "axial anomaly" in fermionic theories. It arises from the fact that the classically conserved axial vector current becomes non-conserved in the quantum theory, due to the existence of topological charges. (See [2,4].) This leads to Feynman graphs with the "wrong" scaling behavior, and the only way to get rid of divergences arising from such graphs is to cancel them with similar graphs. The practical consequence is that quarks and leptons in the standard model must occur in families, such that their anomalies cancel. We know of three families: {u,d,e,ν_e}, {s,c,μ,ν_μ}, {t,b,τ,ν_τ}. If a new quark or lepton is discovered, it should bring with it an entire family.

Wilson's renormalization theory

Wilson reformulated renormalization independently of perturbation theory, and put scale transformations at the forefront. He was concerned with critical phenomena in matter, where there is a natural cutoff, the atomic lattice spacing a. When one writes down a Hamiltonian, a does not explicitly appear, because it only supplies the length scale. The scaling (27) of coupling constants is natural and automatic. This is an important psychological factor in one's approach to the subject. The first hint of how to do renormalization on a spatial lattice came from Kadanoff's "block spin" transformations [39].
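Kadanoff's blocking step can be illustrated with a toy majority-rule transformation on an up-down spin lattice (a sketch only; a real RG study would track the induced block-block couplings, which this toy ignores, just as Kadanoff did):

```python
import numpy as np

def block_spins(lattice):
    """One Kadanoff blocking step: group spins into 2x2 blocks and replace
    each block by the sign of its summed spin (majority rule, ties -> +1)."""
    n = lattice.shape[0]
    blocks = lattice.reshape(n // 2, 2, n // 2, 2).sum(axis=(1, 3))
    return np.where(blocks >= 0, 1, -1)

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(8, 8))   # 8x8 lattice of up-down spins
blocked = block_spins(spins)               # 4x4 lattice, spacing rescaled
print(spins.shape, "->", blocked.shape)    # (8, 8) -> (4, 4)
```

Iterating the step halves the linear size each time; the physics of the transformation lives in how the effective couplings between blocks change, which is exactly what Wilson's formulation keeps track of.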
This is a coarse-graining process, as illustrated in Fig.10. Spins with only up-down states are represented by the black dots, with nearest-neighbor (nn) interactions. In the first level of coarse-graining, spins are grouped into blocks, indicated by the solid enclosures. The original spins are replaced by a single averaged spin at the center. The lattice spacing becomes 2a, but is rescaled back to a. The block-block interactions now have renormalized coupling constants; however, new couplings arise, for the blocking process generates next-nearest-neighbor (nnn) and longer-ranged interactions. Kadanoff concentrated on the fixed points of iterative blocking, and ignored the new couplings for this purpose. Wilson took the new couplings into account, by providing "hooks" for them from the beginning. That is, the coupling-constant space is enlarged to include all possible couplings: nnn, nnnn, etc. In the beginning, when there were only the nn couplings, one regarded the rest as potentially present, but negligible. The couplings can grow or decrease in successive blocking transformations.

Fig.10. Block-spin transformations. In the spin lattice, the up-down spins, represented by the black dots, interact with each other via nearest-neighbor (nn) interactions. In the first level of coarse-graining, they are grouped into blocks of 4, indicated by the solid enclosures, and replaced by a single averaged spin at the center. The original lattice spacing a now becomes 2a, but is rescaled back to a. In the next level, these blocks are grouped into higher blocks, indicated by the dotted enclosure, and so forth. However, the block-block interactions will include nnn, nnnn interactions, and so forth.

Wilson implements renormalization using the Feynman path integral, as follows. A quantum field theory can be described through its correlation functions.
For a scalar field, for example, these are the functional averages ⟨φφ⟩, ⟨φφφ⟩, ⟨φφφφ⟩, …, and they can be obtained from the generating functional

W[J] = N ∫Dφ exp{i(S[φ] − (J, φ))} (30)

by repeated functional differentiation with respect to the external current J(x). Here S[φ] is the classical action, the space-time integral of the classical Lagrangian density (25), and ∫Dφ denotes functional integration. There is a short-distance cutoff Λ⁻¹, which is the only scale in S[φ]. Of course, J introduces a scale, but that is external rather than intrinsic. For simplicity we set J ≡ 0 in this discussion. By making the time pure imaginary (Euclidean time, in the language of relativistic quantum field theory) one can regard W[J] as the partition function for a thermal system described by an order parameter φ(x), with the imaginary time corresponding to the inverse temperature. In this way, a result from quantum field theory can be translated into one in statistical mechanics, and vice versa. The functional integration ∫Dφ extends over all possible functional forms of φ(x). It may be carried out by discretizing x on a spatial lattice, and integrating over the field at each site. Alternatively one can integrate over all Fourier components in momentum space, made discrete by enclosing the system in a large spatial box. Here we choose the latter route:

∫Dφ = ∏_{|k|<Λ} ∫dφ_k (31)

where φ_k denotes a Fourier component of the field, and Λ is the high-momentum cutoff. We lower the effective cutoff to μ by "hiding" the degrees of freedom between Λ and μ, as indicated in Fig.6. To do this, we integrate over the momenta in this interval, and put the result in the form of a new effective action. That is, we write

exp(iS′[φ]) = [ ∏_{μ<|k|<Λ} ∫dφ_k ] exp(iS[φ]) (32)

The integrations in the brackets define the new action S′[φ], which contains only degrees of freedom below momentum μ⁷. From this, we can obtain a new Lagrangian density L′, which contains new couplings u_n′ that are functions of the old ones u_n⁸.
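Equivalently, the mode elimination can be written by splitting the field into slow and fast components (a standard reconstruction of the momentum-shell integration, not the paper's own numbering):

```latex
\phi = \phi_{<} + \phi_{>},\qquad
\phi_{<}:\ |k| < \mu,\qquad
\phi_{>}:\ \mu < |k| < \Lambda,
\qquad
e^{\,iS'[\phi_{<}]} \;=\; \prod_{\mu<|k|<\Lambda}\int d\phi_k\;
e^{\,iS[\phi_{<}+\phi_{>}]} .
```

Only the fast modes φ_> are integrated out; the slow modes φ_< survive as the arguments of the new effective action.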
This, in a nutshell, is Wilson's renormalization transformation. Successive renormalization transformations give a series of effective Lagrangians:

L → L′ → L″ → L‴ → ⋯ (33)

which describe how the appearance of the system changes under coarse-graining. The identity of the system is preserved, because the generating functional W is not changed. We allow for all possible couplings u_n, and thus the parameter space is that of all possible Lagrangians. Renormalization generates a trajectory in that space, the RG trajectory. Couplings that were originally negligible can grow, and so the trajectory can break out into new dimensions, as illustrated in Fig.11. There is no requirement that the theory be self-similar, and thus it appears that all theories are renormalizable⁹. That this method of renormalization reduces to that in perturbation theory can be proven by deriving (20) with this approach [4].

Fig.11. By rendering all coupling constants dimensionless through scaling with appropriate powers of the cutoff momentum, the system can break out into a new direction in parameter space under renormalization. The trajectories sketched here represent RG trajectories with various initial conditions.

In the space of all possible Lagrangians

Under the coarse-graining steps, the effective Lagrangian traces out a trajectory in parameter space, the RG trajectory¹⁰. With different initial conditions, one goes on different trajectories, and the whole parameter space is filled with them, like streamlines in a hydrodynamic flow. There are sources and sinks in the flow, and these are fixed points, where the system remains invariant under scale changes. The correlation length becomes infinite at these fixed points. This means that the lattice approaches a continuum: a→0, or Λ→∞. Let us define the direction of flow along an RG trajectory to be the coarse-graining direction, i.e., towards low momentum.
If it flows out of a fixed point, then the fixed point appears to be a UV fixed point, for it is reached by going against the flow, towards the high-momentum limit. Such a trajectory is called a UV trajectory. If it flows into a fixed point, it is called an IR trajectory, along which the fixed point appears to be an IR fixed point. This is illustrated in Fig.12. Actually, Λ is infinite along the entire IR trajectory, because this is so at the fixed point, and Λ can only decrease upon coarse-graining. Thus, one cannot place a system on an IR trajectory, but only on an adjacent trajectory. As we get closer and closer to the IR trajectory, Λ→∞, and the system more closely resembles that at the IR fixed point. It is most common to have a free theory at the fixed point, since it is scale-invariant, and this gives a more physical understanding of the triviality problem. The flow velocity along an RG trajectory can be measured by the arc length covered in a coarse-graining step. It slows down in the neighborhood of a fixed point, and speeds up between fixed points. Thus, the system darts from one fixed point to the next, like a ship sailing between ports of call. Some couplings grow as it approaches a fixed point, and these are called "relevant" interactions. The ones that die out are called "irrelevant", and may be neglected. Thus, each port corresponds to a characteristic set of interactions, and the system puts on a certain face at that port. This is illustrated in Fig.13.

Fig.13. Different physical theories govern different length scales. Each theory can be represented by a fixed point in the space of Lagrangians. The world is like a ship navigating this space, and the fixed points are the ports of call. As the scale changes, the world sails from port to port, and lingers for a while at each port.

Polchinski's functional equation

We have used a sharp momentum cutoff in (31).
In a significant improvement, Polchinski [41] generalized the method to an arbitrary cutoff, and derived a functional equation for the renormalized action. The cutoff is introduced by modifying the free propagator k⁻² of the field theory, replacing it by a smoothly regulated form.

Here c is an arbitrary constant, and M is the Kummer function. We have subtracted 1 to make U_b(0) = 0. This is permissible, since it merely changes the normalization of the generating functional. For d=2, (43) leads to the so-called sine-Gordon theory.

Why triviality is not a problem

The massless free scalar field is scale-invariant, and corresponds to the Gaussian fixed point. When the length scale increases from zero, and we imagine the system being displaced infinitesimally from this fixed point, it will sail along some RG trajectory, in some direction in parameter space, the function space spanned by possible forms of V(φ). Eq.(42) describes the properties associated with the various directions. Along directions with b > 0, the system is on a UV trajectory. With b < 0, the system is on an IR trajectory, and behaves as if it had never left the fixed point. This is illustrated in Fig.14. The existence of UV directions suggests a possible solution to the triviality problem, as illustrated in Fig.15. Consider two Gaussian fixed points A and B. A scalar field leaves A along a UV trajectory, and crosses over to a neighborhood of B, skirting an IR trajectory of B. At point 1, the potential is Halpern-Huang, but at 2 it becomes φ⁴, with all higher couplings becoming irrelevant. The original cutoff Λ is infinite, being pushed into A. The effective cutoff at 2 is a renormalization point μ, and there is no reason to make it infinite. In the case of QED, the fixed point B would correspond to our QED Lagrangian, and A could represent some asymptotically free Yang-Mills gauge theory.

Fig.15. How "triviality" may arise, and why it is not a problem. Here, A and B represent two Gaussian fixed points.
The system at 1 is on a UV trajectory and asymptotically free. It crosses over to a neighborhood of B, skirting an IR trajectory. At point 2 it resembles a trivial φ⁴ theory, because the higher couplings have become irrelevant.

Asymptotic freedom and the big bang

The vacuum carries complex scalar fields. There is at least the Higgs field of the standard model, which generates mass for gauge bosons in the weak sector. Grand unified theories call for more scalar fields. A complex scalar field serves as an order parameter for superfluidity, and from this point of view the entire universe is a superfluid. In a recent theory, dark energy and dark matter in the universe arise from this superfluid. Briefly, dark energy is the energy density of the superfluid, and dark matter is the manifestation of density fluctuations of the superfluid about its equilibrium vacuum value [43][44][45]. At the big bang, the scalar field is assumed to emerge from the Gaussian fixed point along some direction in parameter space, as indicated in Fig.14. If the chosen direction corresponds to an IR trajectory, then the system never leaves the fixed point, and nothing happens. If it is a UV trajectory, however, it will develop into a Halpern-Huang potential, and spawn a possible universe. We assume there was only one scale at the big bang, the radius of the universe a(t) in the Robertson-Walker metric. Thus, it must be identified with the cutoff Λ of the scalar field. This relation creates a dynamical feedback: the scalar field generates gravity, which supplies the cutoff to the field. Einstein's equation then leads to an expansion law characterized by an exponent p < 1. This describes a universe with accelerated expansion, thus having dark energy. The equivalent cosmological constant decays in time like t⁻²ᵖ, circumventing the usual "fine-tuning problem". Vortex activity in the superfluid creates quantum turbulence, in which all matter was created during an initial "inflation era".
Many observed phenomena, such as dark mass halos around galaxies, can be explained.
Data Augmentation for Motor Imagery Signal Classification Based on a Hybrid Neural Network

As an important paradigm of spontaneous brain-computer interfaces (BCIs), motor imagery (MI) has been widely used in the fields of neurological rehabilitation and robot control. Recently, researchers have proposed various methods for feature extraction and classification based on MI signals. The decoding model based on deep neural networks (DNNs) has attracted significant attention in the field of MI signal processing. Due to the strict requirements for subjects and experimental environments, it is difficult to collect large-scale and high-quality electroencephalogram (EEG) data. However, the performance of a deep learning model depends directly on the size of the datasets. Therefore, the decoding of MI-EEG signals based on a DNN has proven highly challenging in practice. Based on this, we investigated the performance of different data augmentation (DA) methods for the classification of MI data using a DNN. First, we transformed the time-series signals into spectrogram images using a short-time Fourier transform (STFT). Then, we evaluated and compared the performance of different DA methods on these spectrogram data. Next, we developed a convolutional neural network (CNN) to classify the MI signals and compared the classification performance before and after DA. The Fréchet inception distance (FID) was used to evaluate the quality of the generated data (GD), and the classification accuracy and mean kappa values were used to explore the best CNN-DA method. In addition, analysis of variance (ANOVA) and paired t-tests were used to assess the significance of the results. The results showed that the deep convolutional generative adversarial network (DCGAN) provided better augmentation performance than the traditional DA methods: geometric transformation (GT), autoencoder (AE), and variational autoencoder (VAE) (p < 0.01).
Public datasets of the BCI competition IV (datasets 1 and 2b) were used to verify the classification performance. Improvements in the classification accuracies of 17% and 21% (p < 0.01) were observed after DA for the two datasets. In addition, the hybrid network CNN-DCGAN outperformed the other classification methods, with average kappa values of 0.564 and 0.677 for the two datasets.

Introduction

A brain-computer interface (BCI) is a communication method between a user and a computer that does not rely on the normal neural pathways of the brain and muscles [1]. Electroencephalogram (EEG) signals are widely used as a BCI input because the method is non-invasive, cheap, and convenient. The generation of EEG signals can be divided into two types: active induction, such as motor imagery.

Datasets

We selected two datasets [42] for MI classification to validate our methods. First, we chose the BCI competition IV data set 1 as the training and test data set. This data set was provided by the BCI Research Institute in Berlin and contained two parts: the standard set and the evaluation set. The data of four subjects (b, d, e, and g) were used for the analysis. The experimental process is shown in Figure 1. The sampling frequency of this experiment was 100 Hz, and each subject underwent 200 trials, resulting in 800 trials for the four subjects as the training and test data. We used EEG signals from three channels (C3, Cz, and C4). The second dataset included the data from nine subjects from the BCI competition IV data set 2b. Three channels (C3, Cz, and C4) were used to record the EEG signals using a 250 Hz sampling rate.
Each subject underwent 120 trials in 1-2 sessions and 160 trials in 3-5 sessions. We used five sessions for 720 × 9 trials for all subjects. The experimental process is shown in Figure 2. The number of trials in each subject class was the same for both datasets. We filtered the 8-30 Hz signals using a Butterworth filter before analysis. Preprocessing of the Raw Data MI can cause ERD in the contralateral motor cortex and ERS in the ipsilateral cortex; these phenomena are reflected in changes in the energy of different frequency bands [43]. However, timeseries signals cannot describe the features of these conditions. One promising method is a timefrequency transform, which expands the signal in two dimensions. A short-time Fourier transform (STFT) [44] is commonly used, in which a time-frequency localized window function is used for the transformation. The energy characteristics can be detected using a sliding window function that transforms the signals [45] because C3, C4, and Cz represent the dynamical change in the EEG of the MI [46]. Therefore, these three channels were used for the analysis. As shown in Figure 3, the three channels were converted into a two-dimensional form and were mosaicked into an image using vertical stacking. For each image, the color depth indicates the signal energy of the different bands, the color change trend in the x-axis direction represents the time series, The second dataset included the data from nine subjects from the BCI competition IV data set 2b. Three channels (C3, Cz, and C4) were used to record the EEG signals using a 250 Hz sampling rate. Each subject underwent 120 trials in 1-2 sessions and 160 trials in 3-5 sessions. We used five sessions for 720 × 9 trials for all subjects. The experimental process is shown in Figure 2. Datasets We selected two datasets [42] for MI classification to validate our methods. First, we chose the BCI competition IV data set 1 as the training and test data set. 
This data set was provided by the BCI Research Institute in Berlin and contained two parts: the standard set and the evaluation set. The data of the four subjects (b, d, e, and g) were used for the analysis. The experimental process is shown in Figure 1. The sampling frequency of this experiment was 100 Hz, and each subject underwent 200 trials, resulting in 800 trials for the four subjects as the training and test data. We used EEG signals from three channels (C3, Cz, and C4). The second dataset included the data from nine subjects from the BCI competition IV data set 2b. Three channels (C3, Cz, and C4) were used to record the EEG signals using a 250 Hz sampling rate. Each subject underwent 120 trials in 1-2 sessions and 160 trials in 3-5 sessions. We used five sessions for 720 × 9 trials for all subjects. The experimental process is shown in Figure 2. The number of trials in each subject class was the same for both datasets. We filtered the 8-30 Hz signals using a Butterworth filter before analysis. Preprocessing of the Raw Data MI can cause ERD in the contralateral motor cortex and ERS in the ipsilateral cortex; these phenomena are reflected in changes in the energy of different frequency bands [43]. However, timeseries signals cannot describe the features of these conditions. One promising method is a timefrequency transform, which expands the signal in two dimensions. A short-time Fourier transform (STFT) [44] is commonly used, in which a time-frequency localized window function is used for the transformation. The energy characteristics can be detected using a sliding window function that transforms the signals [45] because C3, C4, and Cz represent the dynamical change in the EEG of the MI [46]. Therefore, these three channels were used for the analysis. As shown in Figure 3, the three channels were converted into a two-dimensional form and were mosaicked into an image using vertical stacking. 
For each image, the color depth indicates the signal energy of the different bands, the color change trend in the x-axis direction represents the time series, The number of trials in each subject class was the same for both datasets. We filtered the 8-30 Hz signals using a Butterworth filter before analysis. Preprocessing of the Raw Data MI can cause ERD in the contralateral motor cortex and ERS in the ipsilateral cortex; these phenomena are reflected in changes in the energy of different frequency bands [43]. However, time-series signals cannot describe the features of these conditions. One promising method is a time-frequency transform, which expands the signal in two dimensions. A short-time Fourier transform (STFT) [44] is commonly used, in which a time-frequency localized window function is used for the transformation. The energy characteristics can be detected using a sliding window function that transforms the signals [45] because C3, C4, and Cz represent the dynamical change in the EEG of the MI [46]. Therefore, these three channels were used for the analysis. As shown in Figure 3, the three channels were converted into a two-dimensional form and were mosaicked into an image using vertical stacking. For each image, the color depth indicates the signal energy of the different bands, the color change trend in the x-axis direction represents the time series, and the color change trend in the y-axis direction reflects the characteristics of the different frequency bands. STFT was applied to the time series for 4 s trials (during imagery period), with window sizes equal to 128 and 256 for the two datasets, respectively. Due to the difference in sampling rate, the sample sizes of the two datasets were 400 and 1000. Meanwhile, the frequency bands between 8 Sensors 2020, 20, 4485 5 of 20 and 30 Hz were considered to represent motion-related bands. The process was repeated for three electrodes, which were C3, Cz, and C4. 
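As a rough illustration of the transform described above (a minimal sketch, not the authors' code), one trial can be turned into a stacked spectrogram image with a plain numpy STFT; the hop length and Hann window here are illustrative assumptions, while the 128-sample window, 100 Hz rate, and 8-30 Hz band follow the dataset 1 settings:

```python
import numpy as np

def stft_mag(x, win=128, hop=16):
    """Magnitude STFT of a 1-D signal with a Hann window (minimal sketch)."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T  # (freq, time)

def trial_to_image(trial, fs=100, win=128, hop=16, band=(8, 30)):
    """Stack C3, Cz, C4 spectrograms vertically into one 2-D image.

    trial: array of shape (3, n_samples) for channels C3, Cz, C4.
    Only the 8-30 Hz motor-related band is kept, as in the paper.
    """
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    specs = [stft_mag(ch, win, hop)[mask] for ch in trial]
    return np.vstack(specs)  # vertical stacking preserves channel adjacency

# toy 4 s trial at 100 Hz (dataset 1 settings: 400 samples per channel)
rng = np.random.default_rng(0)
img = trial_to_image(rng.standard_normal((3, 400)))
print(img.shape)  # (3 * n_band_bins, n_frames)
```

A final resize to 64 × 64 (e.g., by interpolation) would then produce the network input described in the text.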
The results were vertically stacked in a way that the channels' neighboring information was preserved. Finally, all spectrogram images were resized to 64 × 64 after the transformation for convenience and consistency in the subsequent calculations.

Different Data Augmentation Models
DA has been demonstrated to improve the performance of pattern recognition models in the computer vision field [47]. DA increases the complexity of the training model and reduces overfitting by adding artificial data. In this study, we compared the performance of different DA methods for MI classification using a DNN. In the following section, we briefly introduce the different DA methods used in our research.

Geometric Transformation (GT)
GT is an effective method that changes the geometry of the data. The method preserves the characteristics of the data and increases the diversity of the representation [48].
As shown in Figure 4, we used three GT methods for the DA of the MI signals: (1) rotate the image 180° right or left on the x-axis (rotation); (2) shift the images left, right, up, or down, with the remaining space filled with random noise (translation); (3) perform augmentations in the color space (color-space transformation).

Noise Addition (NA)
NA refers to the addition of random values to the raw data using a Gaussian distribution. Francisco et al. [49] demonstrated that NA significantly improves the performance and robustness of a model. A standard random uniform noise procedure was implemented to augment the raw data. The calculation is shown in the following equation:
x = x + random(−0.5, 0.5) × noise.

In our study, we randomly added Gaussian noise to the MI spectrogram data (Figure 5).

Generative Model
Generative models produce artificial data with features similar to those of the raw data; these models have a powerful feature-mapping ability and provide a good representation of the original data. In this study, we evaluated the performance of three different generative models. a.
Autoencoder (AE) A useful strategy for generative modeling involves an autoencoder (AE). As shown in Figure 6, an AE is a feed-forward neural network that is used for data dimensionality reduction, feature extraction, and model generation. The network contains two parts: the encoder z = f(x) is used to compress the input data, and the decoder r = g(z) restores the data that contains useful features. b. Variational Autoencoder (VAE) Variational autoencoders (VAEs) and AEs have a similar structure, but VAEs include constraints on the encoder to ensure that the output of the AE has a particular distribution and good robustness.
A VAE can be defined as a directed model that uses learned approximate inferences [50]. To generate new data using a VAE, an encoder is used to obtain the hidden variable z, and the decoder then generates new data x. During training, the hidden variable learns the probability distribution from the input. In this study, we used the AE (Figure 6) and VAE (Figure 7) models described in Ref. [51]. c. Deep Convolutional Generative Adversarial Networks (DCGANs) Another type of generative model for DA is a GAN. Goodfellow et al. originally proposed the GAN for data generation and conducted qualitative and quantitative evaluations of the GAN model by comparing it with deep learning networks and overlapping self-encoders [52]. A GAN uses the competition between two networks to achieve a dynamic balance and learn the statistical distribution of the target data. The generator first initializes a random noise vector z ∼ P_z and learns the distribution P_x of the target data X by fitting a differentiable function approximator G(z; θ_G). The discriminator uses the differentiable function approximator D(·) to predict whether its input comes from the actual target data distribution P_x rather than from the generator. The optimization goal of the framework is to minimize the mean square error between the generated sample prediction label and the real sample label. The generator is trained to minimize the function log(1 − D(G(z; θ_G))). Hence, the optimization problem of the GAN can be defined as:

min_G max_D V(D, G) = E_{x∼P_x}[log D(x)] + E_{z∼P_z}[log(1 − D(G(z; θ_G)))],

where V represents the value function and E represents the expected value; x is the RD, z is the random noise vector, and P(·) is the distribution. The discriminator aims to distinguish whether the generated data are real or not; thus, cross-entropy is adopted as the loss for this binary classification:

L_D = −E_{x∼P_x}[log D(x)] − E_{z∼P_z}[log(1 − D(G(z; θ_G)))].

During the training of GANs, the objective is to find the Nash equilibrium of a non-convex game with continuous, high-dimensional parameters.
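As a numerical illustration of the two-player objective above (a sketch under the standard GAN formulation, not the authors' training code), the discriminator and generator losses can be evaluated directly from the discriminator's predicted probabilities:

```python
import numpy as np

def d_loss(d_real, d_fake, eps=1e-12):
    """Discriminator cross-entropy: real samples labeled 1, generated labeled 0."""
    return -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))

def g_loss(d_fake, eps=1e-12):
    """Generator term of the minimax objective: minimize log(1 - D(G(z)))."""
    return np.mean(np.log(1.0 - d_fake + eps))

# D(x) for a batch of real images and D(G(z)) for generated ones
d_real = np.array([0.9, 0.8, 0.95])   # confident "real" predictions
d_fake = np.array([0.1, 0.2, 0.05])   # confident "fake" predictions
print(d_loss(d_real, d_fake))  # small: the discriminator is winning
print(g_loss(d_fake))          # near 0 from below: the generator is losing
```

At the Nash equilibrium sought during training, D outputs 0.5 everywhere and neither loss can be improved unilaterally.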
GANs are typically trained using gradient descent techniques to determine the minimum value of a cost function. The GAN learns the feature representation without requiring an explicit cost function, but this may result in instability during training, which often generates meaningless output [53]. To address this problem, many researchers have proposed variant architectures. In the field of image processing, the DCGAN was proposed [54], and its authors focused on the topology of the DCGAN to ensure stability during training. The discriminator creates filters based on the CNN learning process and ensures that the filters learn useful features of the target image. The generator determines the feature quality of the generated image to ensure the diversity of the generated samples. Since the DCGAN shows excellent performance for image features in hidden space [55], we chose the DCGAN to generate the EEG images. The DCGAN differs from the GAN in the following aspects of the model structure:

1. The pooling layer is replaced by fractional-strided convolutions in the generator and by strided convolutions in the discriminator.
2. Batch normalization is used in the generator and discriminator, and there is no fully connected layer.
3. In the generator, all layers except for the output use the rectified linear unit (ReLU) as the activation function; the output layer uses tanh.
4. All layers in the discriminator use the leaky ReLU as the activation function.

In this study, we referred to the structure of the DCGAN in Cubuk et al. [48] and implemented it as a baseline; the generator and discriminator networks were extended to capture more relevant features from the MI-EEG datasets. The details of the network structure are described in the following.

Generator Model
Due to the weak and non-stationary nature of the features, a generator with high precision is necessary.
To guarantee the performance of DA, the generator model should maintain a balanced condition between the discriminator and the generator. As shown in Figure 8, a six-layer network was proposed in our study. A three-channel RGB spectrogram MI image was generated from a random vector by the generator. The operations of up-sampling and convolution guaranteed that the output was consistent with the original training dataset. The number of channels of each deconvolution layer was halved, and the size of the output tensor was doubled. Finally, the generated image was output by the tanh activation layer. Details of the generator are summarized in Table 2.

Discriminator Model
As shown in Figure 9, the discriminator network consisted of a deep convolutional network that aimed to distinguish whether the generated image came from the training data or the generator. Details of the discriminator are summarized in Table 3.
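The "halve the channels, double the tensor" rule can be sanity-checked arithmetically. The kernel size, stride, padding, seed size, and channel counts below are illustrative assumptions (the paper's exact values are in Table 2), chosen so that a 4 × 4 seed reaches the 64 × 64 output in four fractional-strided convolutions:

```python
def deconv_out(size, kernel=4, stride=2, pad=1):
    """Transposed-convolution output size: (in - 1) * stride - 2 * pad + kernel."""
    return (size - 1) * stride - 2 * pad + kernel

size, channels = 4, 512          # assumed 4x4 spatial seed with 512 feature maps
trace = [(size, channels)]
for _ in range(4):               # four up-sampling deconvolution layers
    size = deconv_out(size)      # spatial size doubles: 4 -> 8 -> 16 -> 32 -> 64
    channels //= 2               # feature maps halve:  512 -> 256 -> 128 -> 64 -> 32
    trace.append((size, channels))
print(trace)  # [(4, 512), (8, 256), (16, 128), (32, 64), (64, 32)]
```

A final convolution would then map the last feature maps to the three RGB channels through the tanh activation.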
"Adam" was used as the optimizer with the following parameters: learning rate = 2 × 10 −4 , batch size = 128, and training epoch = 20. For every subject in the two datasets, we used a 10-fold cross-validation to divide the data and train the network. The network structure of the DCGAN is shown in Figure 10. "Adam" was used as the optimizer with the following parameters: learning rate = 2 × 10 −4 , batch size = 128, and training epoch = 20. For every subject in the two datasets, we used a 10-fold crossvalidation to divide the data and train the network. The network structure of the DCGAN is shown in Figure 10. Figure 9. The structure of the discriminator. "Adam" was used as the optimizer with the following parameters: learning rate = 2 × 10 −4 , batch size = 128, and training epoch = 20. For every subject in the two datasets, we used a 10-fold crossvalidation to divide the data and train the network. The network structure of the DCGAN is shown in Figure 10. Performance Verification of the Data Augmentation It is well known that the clarity and diversity of the GD are important evaluation indicators. Researchers conducted a systematic review of the quality evaluation of the GD [56]. For image data, visualization is a reliable method because problems can be easily detected in the GD. However, this method does not provide quantitative indicators of the quality of the GD. The inception score is a commonly used quantitative index of the quality of GD. This method assesses the accuracy of the GD using an inception network. The FID is an improved version of the inception score and includes the probability distribution and a similarity measure between the GD and RD [53]. In this method, the features of the data are extracted using the inception network [57], and a Gaussian model is used to conduct spatial modeling of the features. 
The FID is calculated according to the mean values and covariances of the Gaussian models:

FID(r, g) = ||µ_r − µ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^(1/2)),

where r represents the RD, g represents the GD, and Tr is the trace of the matrix. A small FID value indicates a high similarity between the GD and RD and a good DA performance. We compared the augmentation performance of the DCGAN with those of the GT, NA, and other generative models.

Evaluation of the MI Classification Performance after the Augmentation
It is expected that a good DA performance improves the performance of the classifier, especially for classification models based on a DNN, which are sensitive to the size of the dataset. CNNs are often used in image classification tasks and achieve good performance; they often outperform traditional methods for the processing of EEG signals [58-60]. A CNN is a multi-layered neural network consisting of a sequence of convolution, pooling, and fully connected layers. Each neuron is connected to the previous feature map by the convolution kernel. The convolution layer extracts the features of the input image using the kernel, and the pooling layer is located between the continuous convolution layers to compress the data and parameters and reduce overfitting. More advanced features can be extracted with a larger number of layers. The fully connected layer transforms the output matrix from the last layer into an n-dimensional vector (n is the number of classes) to predict the distribution of the different classes. Backpropagation is utilized to decrease the classification error. In the convolution layer, the input image is convolved with a spatial filter to form the feature map and output function, which is expressed as:

X^l_j = f(Σ_i X^(l−1)_i ∗ W^l_ij + b^l_j).

This formula describes the jth feature map in layer l, where X^l_j is calculated using the previous feature maps X^(l−1)_i convolved with the kernels W^l_ij and adding a bias parameter b^l_j.
Finally, the mapping is completed using the ReLU function f(a) = max(0, a). The pooling layer is sandwiched between the continuous convolution layers to compress the amount of data and parameters and reduce overfitting. The max-pooling method was chosen in this work as follows:

X^l_{j,k} = max_{0≤m,n≤s} X^(l−1)_{j·s+m, k·s+n},

where j and k are the locations of the current feature map X^l_j and s stands for the pooling size. The double fully connected layer structure can effectively translate the multi-scale features of the image. Considering the multiple influencing factors of time, frequency, and channel, this study used double fully connected layers to improve the performance gain of the softmax layer. A two-way softmax in the last layer of the deep network was used to predict the distribution of the two motor imagery tasks:

y_i = exp(x_i) / Σ_k exp(x_k),

where x_i is the ith feature map value and y_i represents the output probability distribution. The gradient of the backpropagation was calculated according to the cross-entropy loss function:

E = −Σ_i t_i log y_i,

where t_i is the desired output. Furthermore, we used the stochastic gradient descent (SGD) optimizer with a learning rate of 1 × 10−4 to improve the speed of the network training:

W_k ← W_k − µ ∂E/∂W_k,    b_k ← b_k − µ ∂E/∂b_k,

where µ is the learning rate, W_k represents the weight matrix for kernel k, and b_k represents the bias value. E represents the difference between the desired output and the real output. In our study, an eight-layer neural network structure was used to classify the two-class MI signals (Figure 11). Considering the multiple influencing factors of time, frequency, and channel, we used two fully connected layers to improve the performance gain of the softmax layer [58]. The gradient of the backpropagation was calculated using the cross-entropy loss function, and we used a stochastic gradient descent with momentum (SGDM) optimizer with a learning rate of 1 × 10−4 to improve the speed of network training. To reduce computation time and prevent overfitting, we adopted the dropout operation. The parameters of the proposed CNN model are summarized in Table 4. The average classification accuracy and kappa value were used as evaluation criteria to compare the performances of all methods. We divided the RD into training data and test data using 10-fold cross-validation [61]. In each dataset, 90% of the trials, combined with the GD, were selected randomly as the training set, and the remaining 10% of the RD was used as the test set. This operation was repeated 10 times. The kappa value is a well-known method for evaluating EEG classifications because it removes the influence of random errors.
It is calculated as:

κ = (p_o − p_e) / (1 − p_e),

where p_o is the observed classification accuracy and p_e is the expected accuracy of chance agreement. We determined the optimal ratio of the GD and RD by comparing the classification accuracies of different ratios of the GD and RD.

Results of the Fréchet Inception Distances for Different Data Augmentation Methods
In this experiment, we used five DA methods to generate artificial MI-EEG data. We executed data augmentation based on the spectrogram MI signals (Section 2.2) for each subject independently. There were 200 trials per subject in dataset 1 and 720 trials per subject in dataset 2b. For the GT and NA methods, all trials from one subject were randomly sampled for training, while the 10-fold cross-validation strategy was used to train the generative models (AE, VAE, and DCGAN). The quality of the GD was assessed using the FID, which is the probability distance between the two distributions; a lower value represents a better DA performance. As shown in Table 5, the data generated by the GT were considerably different from the RD. The quality of the data generated by the DCGAN was significantly higher than that of the other models, although the FID results were not ideal. Among the three DA methods based on generative models, the score of dataset 2b was better than that of dataset 1. Some possible explanations are listed in the following:

1. Each subject contributed 200 trials in dataset 1 and 720 trials in dataset 2b; larger-scale training data improved the robustness and generalization of the model.
2. Due to the difference in sampling rate, the sample sizes of the two datasets were 400 and 1000 (datasets 1 and 2b, respectively). More samples help to improve the resolution of the spectrogram.
3. During the experimental process, dataset 2b used a cue-based screening paradigm that aimed to enhance the subjects' attention before imagery, whereas there was no similar setting in dataset 1.
This setting may lead to a more consistent feature distribution and higher-quality MI spectrogram data. In summary, the sampling rate, the design of the paradigm, and the dataset scale can clearly influence the quality of the generated data. Figure 12a,b shows the analysis of variance (ANOVA) statistics of the different methods for the BCI competition IV datasets 1 and 2b, respectively. There were statistically significant differences between the different DA methods. To compare the effects of the different DA methods, we show the generated spectrogram MI data in Figure 13.

Classification Performance of Different Data Augmentation Methods
We used the average classification accuracy and mean kappa value to evaluate both datasets. First, we determined the classification accuracies using DA. The results of the classification accuracy and standard deviation are shown in Tables 6 and 7, and the kappa value results and standard deviations of the methods are presented in Tables 8 and 9. The average classification accuracies of the CNN methods without DA were 74.5 ± 4.0% and 80.6 ± 3.2% for datasets 1 and 2b, respectively (baseline). The NA-CNN, VAE-CNN, and DCGAN-CNN provided higher accuracies than the baseline for both datasets (Tables 6 and 7). The results of the different ratios of RD and GD indicated no positive correlation between the accuracy and the proportion of training data from the GD. In this study, the ratio of 1:3 (RD:GD) provided the optimal DA performance. The average classification accuracy of the CNN-DCGAN was 12.6% higher than the baseline for dataset 2b and 8.7% higher than the baseline for dataset 1. We also noticed that none of the ratios provided satisfactory results for the CNN-GT model. One possible explanation is that the rotation may have adversely affected the information in the EEG channels, resulting in incorrect labels.

Table 7. Classification accuracy of the methods for the BCI competition IV dataset 2b (baseline: 80.6 ± 3.2%).
Table 6. Classification accuracy of the methods for the BCI competition IV dataset 1 (baseline: 74.5 ± 4.0%). The mean kappa value of the CNN-DCGAN was the highest among the methods, indicating that the DCGAN obtained sufficient knowledge of the features of the EEG spectrogram. As shown in Tables 8 and 9, the performance of the three generative models was superior to that of the other DA methods. In addition, the standard deviation of the kappa value was relatively small, indicating the good stability and robustness of this method. Regardless of the RD:GD ratio, the results of the CNN-DCGAN showed a high degree of consistency in the average classification accuracy. Overall, the results demonstrated that this strategy provided the most stable and accurate classification performance. ANOVA and paired t-tests were performed. 
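The kappa statistic reported above quantifies agreement beyond chance between predicted and true labels. A minimal sketch of Cohen's kappa, using made-up MI labels (not data from this study):

```python
from collections import Counter

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(y_true)
    p_obs = sum(t == p for t, p in zip(y_true, y_pred)) / n
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    labels = set(y_true) | set(y_pred)
    # Chance agreement from the marginal label frequencies of both raters.
    p_chance = sum(true_counts[l] * pred_counts[l] for l in labels) / (n * n)
    return (p_obs - p_chance) / (1.0 - p_chance)

truth = ["left", "right", "left", "right", "left", "right"]
print(cohen_kappa(truth, truth))         # 1.0: perfect agreement
print(cohen_kappa(truth, ["left"] * 6))  # 0.0: no agreement beyond chance
```

A classifier that always predicts the majority class scores kappa 0 even though its raw accuracy is 50%, which is why kappa complements accuracy in Tables 8 and 9.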
We compared the CNN-DCGAN with the other CNN-DA methods to determine the optimal DA method (with the optimal ratio) and compared the CNN-DCGAN with the CNN to verify the effectiveness of augmentation. Statistically significant differences were observed and are shown in Figure 14. DA using the DCGAN effectively improved the performance of the classification model (CNN). Among the proposed CNN-DA methods, the CNN-DCGAN outperformed the others in terms of classification performance. In addition, the p-values for the comparison of the CNN-DCGAN with the proposed methods are shown in Table 10. The classification performance of the CNN-DCGAN was significantly higher than that of the other methods (p < 0.01). Although the CNN-VAE was second to the CNN-DCGAN in dataset 2b (p < 0.05), the CNN-DCGAN obtained the best p-values. In summary, the DCGAN provided effective DA and resulted in the highest classification performance. Comparison with Existing Classification Methods We compared the classification performance of the CNN-DCGAN hybrid model with that of existing methods (Figure 15). The results are shown in Table 11. The CNN-DCGAN exhibited a 0.072 improvement in the mean kappa value over the winning algorithm for the BCI competition IV dataset 2b [62]. The strategy proved favorable in the DNN for the classification of the MI-EEG signal, and the proposed model achieved comparable or better results than the other methods. 
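The paired t-tests used for these per-method comparisons can be sketched as follows; the per-subject accuracies below are invented for illustration and do not come from the study:

```python
from scipy import stats

# Hypothetical per-subject accuracies (%) for two DA methods on the SAME subjects;
# a paired test is appropriate because both methods are evaluated subject-by-subject.
acc_dcgan = [88.1, 85.4, 90.2, 87.6, 84.9, 89.3, 86.8]
acc_noise = [82.0, 80.3, 84.1, 81.7, 79.8, 83.5, 80.9]

t_stat, p_value = stats.ttest_rel(acc_dcgan, acc_noise)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.01:
    print("difference significant at the 1% level")
```

Pairing removes between-subject variability, so consistent per-subject improvements yield a large t-statistic even with few subjects.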
Discussion In this study, we proposed a method to augment and generate EEG data to address the problem of small-scale datasets in deep learning applications for MI tasks. The BCI Competition IV datasets 1 and 2b were used to evaluate the method. We used a new form of input in the CNN that considered the time-frequency and energy characteristics of the MI signals to perform the classifications. Different DA methods were used for the MI classification. The results showed that the classification accuracy and mean kappa values of the DA based on the DCGAN were the highest for the two datasets, indicating that the CNN-DCGAN was the preferred method to classify MI signals and that the DCGAN was an effective DA strategy. Recently, a growing number of researchers have used deep learning networks to decode EEG signals [60]. However, it remains a challenge to find the optimal representation of an EEG signal that is suitable for a classification model based on different BCI tasks. For example, the number of channels and the selection of frequency bands are crucial when choosing input data; therefore, different input parameters need to match neural networks with different structures. Researchers require sufficient knowledge of the implications of using different EEG parameters and of choosing classification networks for different forms of input data. In Vernon et al. [68], the deep separable CNN achieved better classification results for time-domain EEG signals because the model structure was highly suitable for the time-domain characteristics of the steady-state visually evoked potentials. 
AlexNet had excellent classification performance for time-frequency EEG signals after a continuous wavelet transform in Chaudary et al. [69]. In this study, we concluded that MI signals based on a time-frequency representation were more suitable as the input of the DNN classification model. In future studies, we will investigate which useful features the convolution kernel learns from the EEG and optimize the structure and parameters of the model accordingly. In applications of EEG decoding, the performance of a classification model based on DNNs is directly related to the scale of the training data. However, in a BCI system, it is difficult to collect large-scale data due to the strict requirements regarding the subject and the experimental environment. Data augmentation provides an enlightening strategy to overcome this limitation, and we have verified its effectiveness in this manuscript. Previous research has shown that generative networks provide good performance for the deep interpretation of EEG signals [70]. Therefore, future studies could focus on generative networks to interpret the physiological meaning of EEG signals in depth, to improve the explanation of EEG signals, and to investigate how to design a specific DA model that meets the requirements of specific tasks. Finally, by combining these methods, we hope to achieve accurate identification of MI tasks using a small sample size. As an important technology focused on rehabilitation [71,72], MI-BCI aims to replace or recover motor nervous system functionality that is lost due to disease or injury. Regarding the application of DA for MI-EEG, future work could extend this approach to clinical BCI tasks. For example, due to the cerebral injury of stroke patients, it is difficult to collect usable EEG signals, which may lead to a long calibration cycle. One promising approach is to generate artificial data based on limited real data using a DA strategy and to train the decoding model using these data. 
Additionally, we could use the proposed methods to assess the differences between patients and healthy people, utilizing the generator to produce "healthy" EEG data based on patient data and the discriminator to distinguish whether a given EEG signal is healthy or not. Based on DA for EEG, we may establish a correlation between the EEG signal and the rehabilitation condition. Rafael and Esther [73] used DA methods to simulate EMG signals with different tremor patterns for patients suffering from Parkinson's disease and extended them to different sets of movement protocols. Furthermore, the proposed method has the potential to extend to rehabilitation and clinical operations based on BCI in practical applications. Conclusions In this study, we proposed a DA method based on a generative adversarial model to improve the classification performance in MI tasks. We utilized two datasets from the BCI competition IV to verify our method and evaluated the classification performance using statistical methods. The results showed that the DCGAN generated high-quality artificial EEG spectrogram data and was the optimal approach among the DA methods compared in this study. The hybrid structure of the CNN-DCGAN outperformed other methods reported in the literature in terms of classification accuracy. Based on the experimental results, we can conclude that the proposed model was not limited by small-scale datasets and that DA provides an effective strategy for EEG decoding based on deep learning. In the future, we will explore specific DA strategies for different mental tasks and signal types in a BCI system.
Overshadowing as prevention of anticipatory nausea and vomiting in pediatric cancer patients: study protocol for a randomized controlled trial Background Emesis and nausea are side effects induced by chemotherapy. These effects lead to enormous stress and strain on cancer patients. Further consequences may include restrictions in quality of life, cachexia or therapy avoidance. Evidence suggests that cancer patients develop the side effects of nausea and vomiting in anticipation of chemotherapy. Contextual cues such as smells, sounds or even the sight of the clinic may evoke anticipatory nausea and vomiting prior to infusion. Anticipatory nausea and vomiting are problems that cannot be solved by the administration of antiemetics alone. The purpose of the proposed randomized placebo-controlled trial is to use an overshadowing technique to prevent anticipatory nausea and vomiting and to decrease the intensity and duration of post-treatment nausea and vomiting. Furthermore, the effect on anxiety, adherence and quality of life will be evaluated. Methods/Design Fifty-two pediatric cancer patients will be evenly assigned to two groups: an experimental group and a control group. The participants, hospital staff and data analysts will be kept blinded towards group allocation. During three chemotherapy cycles, the experimental group will receive a salient piece of candy prior to every infusion, whereas the control group will receive flavorless placebo tablets. Discussion If the effectiveness of the overshadowing technique is proven, implementation of this treatment into the hospitals' daily routine will follow. The use of this efficient and economic procedure should help reduce the need for antiemetics. Trial registration Current Controlled Trials ISRCTN30242271 
Background Around 1,800 children (under 15 years old) per year develop cancer in Germany [1]. The chances of surviving childhood cancer have increased considerably in the past 30 years due to differentiated diagnostics and developments in the therapy regimes. Today 83% of all pediatric cancer patients survive the first 5 years after diagnosis, an increase from 67% in the 1980s. A reasonable proportion of this development can be attributed to progress in cytostatics. However, the well-known side effects have remained. The typical side effects are nausea and vomiting. Experiences of nausea and vomiting can lead to anxiety, restrictions in quality of life and reduced adherence to therapy. In the proposed randomized controlled trial, the effectiveness of an intervention technique called overshadowing on chemotherapy-related nausea and vomiting will be investigated. Furthermore, the impact on anxiety, adherence and quality of life will be studied. The following sections describe the relevant concepts. Emetogenicity of cytostatics Aside from the desired effect of tumor reduction, cytostatics affect a number of organ systems. Amongst other systems, cytostatics stimulate the area postrema, a circumventricular organ that lies outside the blood-brain barrier, stimulation of which can lead to vomiting [2]. Nausea and vomiting are considered by patients to be the most burdensome adverse reactions and are among the most common reasons to terminate therapy. Physiologically, nausea and vomiting raise the risk of developing Mallory-Weiss syndrome. Furthermore, prolonged nausea and vomiting can produce exsiccosis, cause electrolyte imbalance and lead to a high level of weight loss [3]. The frequency of chemotherapy-induced emesis depends primarily on the emetogenic potential of the cytostatics. 
The Multinational Association of Supportive Care in Cancer (MASCC) classifies the cytostatics into four emetic risk groups. A high-risk agent produces emesis in nearly all patients (>90%), a moderate-risk agent in 30 to 90% of patients, a low-risk agent in 10 to 30% of patients, and a minimal-risk agent in <10% of patients [4]. Table 1 presents the emetic risk groups of cytostatics. Post-treatment nausea and emesis Chemotherapy-induced nausea and/or emesis are commonly classified as acute, delayed, anticipatory, breakthrough or refractory [5]. Acute onset usually occurs within a few minutes to 1 or 2 hours after infusion and resolves within the first 24 hours. Delayed-onset emesis begins or persists more than 24 hours after chemotherapy treatment. Anticipatory nausea and emesis occur before patients receive their chemotherapy administration. Breakthrough emesis occurs despite prophylactic treatment and requires rescue antiemetics. Refractory emesis arises during subsequent treatment cycles when antiemetic prophylaxis and rescues have failed in earlier cycles. Chemotherapy-induced nausea and vomiting differs from that usually experienced: it lasts longer, its degree of severity varies from treatment to treatment, and there is a greater variability in patient reaction. For example, anxiety, personality and environment seem to play a key role. Besides pharmacological factors (dosage, agent, duration), factors that increase the risk of nausea and emesis are age, gender and the expectation of these adverse effects [6]. Initiation and coordination of the emetic process are the responsibility of the vomiting center, a structure located in the lateral reticular formation of the medulla. Afferent input from several sources, including the higher brain stem and cortical structures, is capable of initiating the emetic process [7]. Antiemetics The MASCC published guidelines for the use of antiemetics [4]. 
For adult patients with high emetic risk from chemotherapy, a combination of a 5-HT3 receptor antagonist, dexamethasone, and aprepitant is recommended prior to chemotherapy. For patients who receive moderate emetic-risk chemotherapy, not including a combination of anthracycline plus cyclophosphamide, palonosetron plus dexamethasone is recommended for prophylaxis of acute nausea and vomiting. Patients who receive moderately emetic chemotherapy known to be associated with a significant incidence of delayed nausea and vomiting should receive antiemetic prophylaxis for delayed emesis. A single antiemetic agent, such as dexamethasone, a 5-HT3 receptor antagonist, or a dopamine receptor antagonist such as metoclopramide, is suggested for prophylaxis in patients receiving agents of low emetic risk. No antiemetic should be administered for the prevention of delayed emesis induced by low or minimally emetic chemotherapy. The MASCC states that the best approach to anticipatory emesis is the best possible control of acute and delayed emesis. The guidelines for the prevention of chemotherapy-induced nausea and vomiting for high and moderate risk in children state that all patients should receive antiemetic prophylaxis with a combination of a 5-HT3 receptor antagonist and dexamethasone. There are currently no appropriate studies available for the prevention of delayed anticipatory nausea and vomiting (ANV) or for the prevention of nausea and vomiting following chemotherapy of minimal or low emetic risk. This limited level of standardization may lead to widely varying antiemetic strategies in different centers. However, the MASCC recommendations are similar to, for example, those given in the protocol for one of the largest therapy-optimizing studies worldwide [8], for acute lymphoblastic leukemia. Ihbe-Heffinger and colleagues observed that the majority of their adult patients (64.4%) experienced nausea and emesis, although they took prophylactic medication [9]. 
More patients experienced delayed than acute nausea and emesis (60.7% vs. 32.8%), and more patients reported nausea than vomiting (62.5% vs. 26%). The authors concluded that antiemetic medications could control acute rather than delayed emesis and should effect a reduction in the frequency of vomiting but not in episodes of nausea. Anticipatory nausea and emesis As already mentioned, many cancer patients experience the side effects of nausea and emesis not only after chemotherapeutic drug infusion but also prior to treatment [10]. These symptoms are known as ANV. The incidence ranges from 18 to 57%, and nausea is more common than vomiting [5]. The reported rates vary widely among studies. Morrow and colleagues found in their meta-analysis of 35 studies an average prevalence of 29% for adult and pediatric patients [11]. Despite modern antiemetic treatment, ANV still occurs in 25 to 30% of cases [12]. The etiology of ANV can be explained by classical conditioning as established by Pavlov (1849 to 1936). During conditioning, an organism learns to associate an initially neutral stimulus (the conditioned stimulus) with a biologically relevant stimulus (the unconditioned stimulus). By pairing a conditioned stimulus with an unconditioned stimulus in the acquisition phase, the conditioned stimulus comes to evoke a conditioned response that is commonly similar to the response elicited by the unconditioned stimulus [13]. Accordingly, contextual stimuli of the clinic environment, such as the smell, sounds and sight of the building, function as the conditioned stimulus that becomes associated with the unconditioned stimulus of chemotherapy treatment. Following one or more contingent pairings (chemotherapy infusions), the patient may develop the conditioned response of nausea and/or vomiting even before the next treatment, just by seeing the infusion, meeting the same clinician or already while re-entering the clinic [14]. 
As shown by Hickok and colleagues, the development of ANV coheres with the emetogenicity of the chemotherapy drug [15]. Beyond that, Tyc and colleagues showed that the occurrence of ANV is positively correlated with the severity of vomiting (intensity, frequency, duration) and the number of chemotherapy cycles (conditioning trials) [16]. ANV is further inversely correlated with patient age, according to Morrow [17]. ANV is also seen in animal models. Limebeer and colleagues observed that, although rats do not vomit, they display a distinctive gaping reaction when exposed to a toxin-paired flavored solution [14]. After several pairings, the contextual cues elicit a conditioned state of nausea in rats. Quality of life Quality of life is defined as a health-related multidimensional construct that includes physical, emotional, mental, social and behavioral components of well-being and functioning from the viewpoint of patients or, respectively, observers [18]. Calaminus and colleagues found that patients who survived childhood cancer estimate their quality of life to be as good as that of healthy children of the same age [19]. However, the various aspects of quality of life are judged differently among the diverse oncological domains. For example, children with solid tumors show less impairment than children with leukemia; one could therefore suggest that a diagnosis at a young age and a longer period of being dependent on family support, isolation from peer groups and delayed independence may be reflected in this result [19]. Previous studies suggest an influence of nausea and emesis on cancer patients' quality of life [20,21]. As shown by Akechi and colleagues, the presence of anticipatory nausea significantly affected most domains of patients' quality of life [22]. This influence was maintained when controlling for age, sex, performance status, and psychological distress. 
Anxiety State anxiety (as opposed to trait anxiety) is defined as an emotional process characterized by arousal, worries, nervousness, inner restlessness and fear of future events. State anxiety varies in intensity, time and situation [23]. Anxiety is the result of threats that are perceived to be uncontrollable or unavoidable [24]. State anxiety is associated with the incidence and severity of post-treatment vomiting, and varies inversely with the emetic potential of the chemotherapy regimen [7]. This counterintuitive finding might be explained by psychological factors being relevant in the experience of post-treatment vomiting for regimens of low to moderate emetic potential, while their impact might be reduced or minimal for regimens with high emetic potential. State anxiety might foster the development of ANV, as it facilitates classical conditioning of anticipatory responses [17]. A review by Andrykowski comprising 12 studies showed mixed results [25]. The relationship between anxiety and ANV thus seems unclear. A study with pediatric cancer patients found no significant differences in state anxiety scores between patients, whether or not they experienced ANV [26]. Compliance/adherence Compliance was previously defined as the willingness to follow medical advice. Understanding of the patient's role, however, has changed in recent decades. As a consequence, the term adherence is increasingly used instead of compliance. Adherence expresses an active patient role, with the aim of creating a cooperation based on agreement between physicians and patients that should lead to maintenance of therapy regimens [27]. In the literature, compliance is still used synonymously with adherence. Reasons for nonadherent behavior are shown in Table 2. Predictors of adherent behavior in pediatric cancer patients mentioned in the review by Tebbi are the mode of application, satisfaction with medical supply, inner belief of control and age [28]. Adolescents often showed nonadherent behavior. 
Factors such as gender, parental income or family status had no influence on adherence. Adherence is important for treatment success. A low degree of adherence has been found to lead to increased mortality [27]. Overshadowing The phenomenon of overshadowing was first observed by Pavlov [29]. When two or more stimuli are present, the more salient one produces a stronger response than the other. The presence of the more salient element is commonly found to restrict the acquisition of associative strength by the less salient element [30]. Pavlov explained it as follows: 'The effect of the compound stimulus is found nearly always to be equal to that of the stronger component used singly, the weaker stimulus appearing therefore to be completely overshadowed by the stronger one' [29]. Transferring the overshadowing paradigm to chemotherapy processes, a salient stimulus presented during drug infusion may overshadow the effects of less salient ones (for example, the doctor's white coat). The conditioned response elicited by the less salient stimuli is weakened by the overshadowing element. This weakening prevents the development of ANV [10]. According to Garcia and Koelling, tastes become associated more readily with stimuli causing nausea and vomiting [31]; they are more salient than stimuli perceived through other senses. Examination of the psychological, medical and nursing literature in PubMed for the overshadowing procedure, also known as the scapegoat technique from classical conditioning, revealed a total of 124 results (see search strategy in Table 3). The majority of studies attend to foundational research on overshadowing; for example, the brain functions involved in associative learning. A search limited to cancer yields four articles about the scapegoat effect on food aversion in cancer patients [32][33][34][35], while two articles consider the overshadowing effect on conditioned nausea [10,36]. 
Of these, just one describes an investigation among pediatric cancer patients [34]. Screening of the reference lists of these articles did not add any previously unconsidered publications. Broberg and Bernstein used a scapegoat technique to prevent food aversion in children undergoing chemotherapy [34]. Patients received candy (coconut and root beer Lifesavers) between the consumption of a meal and the administration of chemotherapy. Children who received the candy, which served as a scapegoat, were twice as likely to eat some portion of a future test meal. Stockhorst and colleagues investigated 16 adult cancer patients with an overshadowing protocol using salient drinks to prevent anticipatory nausea and emesis [10]. The experimental group (n = 8) received salient drinks before the administration of drug infusions through two cycles of chemotherapy, while the control group received water. In the third cycle of chemotherapy all patients received water. Patients receiving the overshadowing treatment did not develop anticipatory nausea, whereas two patients of the control group did. Furthermore, overshadowing tended to modify the occurrence of post-treatment nausea: it occurred later and was of shorter duration. In a pilot study at our medical center, Görges adapted the study design of Stockhorst and colleagues [10] for the pediatric setting (n = 30), where overshadowing proved to be effective [37]. No patient of the overshadowing group (n = 15) developed anticipatory nausea, compared with 13 patients of the control group. Furthermore, overshadowing reduced the occurrence of concomitant symptoms such as anxiety, nonadherent behavior and affected well-being. Overshadowing also seemed to decrease the intensity of post-treatment nausea. However, the partially insufficient feasibility of the intervention technique used led to problems of recruitment. This problem may have biased the results towards an overestimation of the intervention effect. 
Such threats to validity have to be avoided in the present study. Additionally, reducing the complexity of the intervention increases the chances of implementation into the daily clinical routine. Aims of the study The aims of the present study are to verify the effect of an optimized overshadowing technique on ANV (primary endpoints), and to further investigate the intervention effect on post-treatment nausea and vomiting (secondary endpoints). The subgoals are to investigate the overshadowing effect on patients' quality of life, state anxiety and adherence; to survey the relation between the prevalence of post-treatment nausea and vomiting and ANV; and to determine the applicability of the overshadowing treatment in the hospital's daily routine. Ethical approval The study protocol was approved by the ethics committee of the Christian-Albrechts-University, Medical Faculty, Kiel, Germany (6 March 2012, reference number A 168/11). Participants and their parents shall receive written information and are required to give their written consent prior to participation. Participants Newly diagnosed pediatric patients with an oncological disease shall be included at the Kiel University clinic. These children, adolescents and young adults must also meet further inclusion criteria: German speaking, over the age of 4 years, and receiving chemotherapy. To guarantee a proper acquisition phase of overshadowing, the children should undergo at least three chemotherapy cycles. An interval of 7 days between chemotherapy cycles is important to differentiate between post-treatment nausea and vomiting and ANV. Children with brain or gastrointestinal tract cancer will be excluded to eliminate an organic cause of nausea and vomiting. Other exclusion criteria include mental restrictions, recurrent cancer, and prior radiotherapy. Finally, pediatric patients should not have experienced treatment-related nausea or vomiting before. 
Sample size calculation The calculation of the required sample size is oriented towards the demands of the first study question, which addresses the effect of overshadowing on ANV (primary endpoints). The effect sizes achieved in our pilot study for the reduction of ANV are to be judged as large. For the present study, a more conservative effect size of f = 0.4 in the analysis of covariance (see below) is assumed. Aiming at a statistical power of 0.80, a total sample size of n = 52 is needed (two-sided α = 0.05). Study design The study is a monocenter randomized controlled trial comparing groups of pediatric cancer patients undergoing an overshadowing treatment during chemotherapy or receiving a placebo treatment. Figure 1 shows the study design. Different diagnoses require different cytostatics. The drugs, in turn, influence the probability of usage and the type of antiemetics. Block randomization will therefore be conducted within each diagnosis (that is, within the group of patients suffering from acute lymphoblastic leukemia or those suffering from Ewing's sarcoma, and so forth). According to the recommendations of Altman and Bland [38], separate block randomization lists for each stratum will be used. This process is carried out by an independent third person not involved in data collection, analysis or medical care. Participants, medical staff and data analysts will be kept blinded towards allocation. While blinding of medical staff and data analysts is easily realized, participants may notice whether they get candy with a salient taste (experimental group) or without (control group). However, as the study information leaflets do not refer to taste in particular but to the effect of melting something in one's mouth, participants are not able to deduce their group allocation from their intervention. Accordingly, participants can also be considered blinded. 
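The stated sample size can be reproduced from the given parameters (F-test, effect size f = 0.4, two groups, α = 0.05, power = 0.80). A sketch using scipy's noncentral F distribution:

```python
from scipy import stats

def anova_power(effect_f, n_total, k_groups=2, alpha=0.05):
    """Power of a one-way F-test with Cohen's effect size f."""
    df1, df2 = k_groups - 1, n_total - k_groups
    ncp = effect_f ** 2 * n_total               # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, df1, df2)   # critical value under H0
    return stats.ncf.sf(f_crit, df1, df2, ncp)  # P(reject H0 | H1 true)

# Smallest even total n (two equal groups) reaching 80% power for f = 0.4:
n = 4
while anova_power(0.4, n) < 0.80:
    n += 2
print(n)  # 52, i.e. 26 per group, matching the protocol's calculation
```

With two groups this F-test is equivalent to a two-sided two-sample t-test with d = 2f = 0.8, for which the classical answer is likewise 26 participants per group.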
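Separate block randomization lists per diagnosis stratum can be generated along the following lines; the block size, arm labels and strata below are illustrative assumptions, not taken from the protocol:

```python
import random

def block_randomization_list(n_blocks, block_size=4,
                             arms=("overshadowing", "placebo"), seed=None):
    """Allocation list with equal arm counts within every block."""
    rng = random.Random(seed)
    allocations = []
    for _ in range(n_blocks):
        # Each block contains every arm equally often, then is shuffled.
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        allocations.extend(block)
    return allocations

# Hypothetical example: one independent list per diagnosis stratum.
strata = {"ALL": 3, "Ewing sarcoma": 2}  # assumed number of blocks per stratum
lists = {dx: block_randomization_list(nb, seed=42) for dx, nb in strata.items()}
for dx, alloc in lists.items():
    print(dx, alloc)
```

Because every block of four contains exactly two allocations per arm, group sizes stay balanced within each stratum throughout recruitment, which is the point of stratified block randomization.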
Intervention

In the aforementioned pilot study the overshadowing treatment known from the paradigm of classical conditioning was tested on pediatric cancer patients. To increase efficiency and applicability, an overshadowing technique known from Broberg and Bernstein will be used [34]. Implementing overshadowing in the hospital's daily routine requires easy handling and regard for the high hygienic standards on the oncology ward; therefore, in the proposed study the experimental group will receive salient candy and the control group flavorless placebo tablets instead of drinks. During three chemotherapy cycles the experimental group will get their treatment (candy) prior to each infusion. To avoid influences of possible aversive taste reactions, the flavor of the candy will be changed from infusion to infusion. Determining an evaluation period of three cycles can be seen as a compromise. On the one hand, more cycles may increase the intervention effect compared with the placebo condition. On the other hand, more cycles also increase the danger of withdrawals and missing data and threaten the comparability of participants and their treatment, as the total number of treatment cycles differs widely between, for example, low-risk Hodgkin lymphoma and Ewing sarcoma. Three cycles are thus a trade-off between a large effect size and threatened data quality. The pilot study [37] and the basic research [39] showed that even fewer than three cycles (where each comprises several unconditioned stimulus-conditioned stimulus pairings) are sufficient. However, to determine the applicability of the intervention in daily clinical routine (the third subgoal), the overshadowing treatment in the experimental group will be continued until the end of treatment without any further outcome assessment whenever possible.

Measuring nausea and vomiting

Symptoms of nausea and emesis will be measured using discomfort logs completed by patients. These logs have been successfully used in our pilot study [37].
They assess nausea and vomiting on 6-point Likert scales focusing on intervals of 2 hours, thus leading to 12 ratings every day. Children from 4 to 7 years of age will be interviewed by a blinded interviewer about their symptoms, and their parents will also be asked about their observations. Children from the age of 8 years will use a self-assessment version. As a second measure, the combined nausea and vomiting scale Baxter Retching Faces (BARF) by Baxter and colleagues will be used [40]. BARF uses a 6-point visual analogue scale with six comic faces whose expressions range from neutral mood to vomiting. The authors developed the BARF scale for children aged from 4 to 17 years. The validation study confirms reliability and validity among patients from 7 to 18 years old. In our study, the BARF scale will be completed by all patients. There will be two measurement points: the anticipatory measurement phase prior to the third chemotherapy cycle, and the post-treatment phase after the third chemotherapy cycle (see Figure 1 and Table 4).

Measuring anxiety

The Kinder-Angst-Test-II/Children Anxiety Test-II [41] is a revision of the questionnaire for German-speaking children and adolescents referring to the trait concept developed in 1969. The trait scale for measuring anxiety disposition was kept, extended by aspects of state anxiety, and normalized. The revised Kinder-Angst-Test-II consists of three questionnaires, capturing two different aspects of anxiety. One questionnaire (Form A) appraises trait anxiety, whereas the two others (Form P, Form R) appraise state anxiety, namely anticipated and recollected anxiety, respectively. The age range for application is from 9 to 16 years. Form A consists of 20 items. The questionnaires Form P and Form R consist of 12 items each, differing in the time period referred to. To operationalize state anxiety, questionnaire Form P will be used prior to the third chemotherapy cycle (see Table 4).
The State and Trait Anxiety Inventory [42] is a self-report inventory with 20 items each for trait and state anxiety (two questionnaires). Items are rated on a four-level scale, ranging from not at all to very, to measure the intensity of anxiety. Adolescents older than 15 years will only be asked to complete the state anxiety questionnaire. The state scale will be assessed prior to the third chemotherapy cycle (see Table 4).

Measuring quality of life

The German Children's Quality of Life Questionnaire (KINDL)-Revised is a quality-of-life questionnaire for children with 24 items and allows assessment of six domains: physical well-being, psychological well-being, self-esteem, family, friends, and daily function [43]. Reliability scores (Cronbach's α = 0.85) and validity of the instrument are confirmed [44]. There are three different self-assessment versions and two parents' versions: Kiddy- To assess quality of life, pediatric patients will complete the KINDL-R versions after the third chemotherapy cycle (see Table 4).

Measuring adherence

Adherent behavior will be obtained through ratings from caregivers and physicians. As in the primary study, the caregivers and physicians complete a questionnaire with eight items to assess the adherent behavior of the children and adolescents. Answers (for example, on the intake of medicine) can be rated on a four-level scale from poor to very good (poor, a little, good, very good). The rating of patients' adherent behavior will take place after the third chemotherapy cycle (see Table 4).

Statistical analysis

The effect of the overshadowing treatment on both anticipatory (t1) and post-treatment nausea and vomiting, anxiety, quality of life and adherence (t2) will be analyzed using analysis of covariance, comparing mean scores of the intervention and control groups while controlling for the dosage of antiemetics.
To account for different appropriate absolute dosages (mg/kg or mg/m²) among different patients and drugs, the variable dosage will be operationalized as the percentage of the recommended maximum dosage for each patient and drug. The assumed dependence between the prevalence of post-treatment (t2) and ANV (t1) will be determined by the Pearson correlation coefficient.

Timetable

In consideration of the number of patients and the distribution of diagnoses in previous years, 23 patients per year in the study clinic would meet the inclusion criteria. Extrapolating this number leads to an inclusion period of approximately 2.5 years. If the number of participants is lower than expected, the investigation will be extended to include a second center (a verbal commitment from University Clinic Lübeck was received). In that case the randomization procedure will be repeated in the second center. The trial is intended to start in summer 2013.

Limitations

Several limitations of this study have to be mentioned. First, it is unclear whether it will be possible to produce balanced study arms regarding diagnoses and antiemetics, respectively. In cases of clear imbalances, their impact on the results has to be discussed. Second, it is an open question whether overshadowing will work equally well independently of age. Additionally, it is unclear whether ratings of ANV are influenced by the administering person (self-rating vs. third-person rating). However, our pilot study did not reveal any influence of age or application mode. Third, paradoxical conditioning might occur: since nausea and vomiting are expected to occur closely after the application of candy, the latter might become associated with nausea and vomiting. Hence, both might occur as a consequence of other candy, independently of treatment situations. However, this will not influence the desired effect on ANV, as t1 lies before the application of candy in the context of the third treatment cycle.
Unintended induction of nausea and vomiting in patients' everyday life is seen as unlikely, since the tastes used as scapegoats are rather rare.

Trial status

Ready to start recruitment.
A density-wave mechanism with a continuously variable wave vector

The origin of density waves is a vital component of our insight into electronic quantum matter. Here, we propose an additional mosaic to the existing mechanisms such as Fermi-surface nesting, electron-phonon coupling, and exciton condensation. In particular, we find that certain 2D density-wave systems are equivalent to 3D nodal-line systems in the presence of a magnetic field, whose electronic structure takes the form of Dirac-fermion Landau levels and allows a straightforward analysis of its optimal filling. The subsequent minimum-energy wave vector varies over a continuous range and shows no direct connection to the original Fermi surfaces in 2D. Also, we carry out numerical calculations whose results on model examples support our theory. Our study points out that we have not yet attained a complete story in our understanding of emergent density-wave formation.

INTRODUCTION

The origin of density waves (DWs) [1][2][3][4], relevant to various physical phenomena in electron and spin quantum matter, has been a fundamental yet controversial problem in condensed matter physics for several decades. In the Peierls theory of charge density waves (CDWs), the Fermi surface nesting (FSN) in the 1D chain gives rise to a spatially periodic re-distribution of charge density [5][6][7][8] with a period of 2π/q_n, commonly accompanied by a distortion of the lattice structure and a metal-insulator transition, where q_n = 2k_F is the nesting vector between the two Fermi points. Though the Peierls transition has successfully described the properties of various quasi-1D DW materials [9,10], its extensions to higher dimensions have encountered many difficulties [5].
Other mechanisms based upon electron-phonon coupling [11,12] and exciton condensation [13][14][15][16] offer consistent explanations of the CDW origin and physical properties of a series of materials such as NbSe2, TaSe2 and CeTe3 [17][18][19][20][21] without FSN. Besides, Overhauser's theory pointed out the importance of electron interaction and correlations in the formation of DWs in certain 3D materials [22,23]. The existing theories do well in, and only in, their respective spheres of application. Some of them are based upon perturbative analysis and become less controlled for stronger coupling [12,24]. In addition, most theories cater to preferential DW wave vectors that are discrete and special to the band structures and/or the auxiliary degrees of freedom. Still, the origins of various CDWs, e.g., the 3D CDW states in M3T4Sn13 (e.g., Ca3Ir4Sn13 and Sr3Ir4Sn13) [25,26] and the CDW in cuprate materials [27], remain controversial to a degree. For instance, the charge modulation in some cuprate materials exhibits a dependence on the spectral gap [27], with its wave vector spanning a continuous spectrum [28][29][30]. Therefore, our overall understanding of the DW mechanism has not, by far, reached a complete picture yet. Here, we propose a novel, independent DW mechanism from a Dirac-fermion Landau-level (LL) energetics perspective. We note that a model with a DW is equivalent to a higher-dimensional lattice model in the presence of a magnetic field [31]. Therefore, to locate the optimal DW wave vector, we can, in turn, look for the magnetic field strength that minimizes the energy of the corresponding higher-dimensional system. Such an argument is unrelated to the FSN and remains relevant even when the DW strength is no longer weak in comparison with the bandwidth.
Here, we focus on a specific set of two-dimensional (2D) models we find particularly illuminating: these models' counterparts in three dimensions (3D) possess Dirac-fermion nodal lines (NLs), whose electronic structure in a magnetic field possesses a zeroth LL with a large degeneracy and a large gap from the rest of the LLs. Therefore, both the optimal amplitude and direction of the magnetic field, and thus the corresponding DW wave vector in the original 2D model, depend solely on the geometry of the NL and can be estimated theoretically in a wide parameter region. Also, as the NL varies continuously with respect to the model parameters such as the DW amplitude, so does the optimal DW wave vector [32][33][34][35]. To solidify our claim, we further present numerical results quantitatively consistent with our theoretical expectations. We organize the rest of the paper as follows: in the next section, we introduce the Dirac-fermion LL perspective on density-wave tendencies, including the duality between a 2D DW system and a 3D system with a magnetic field, and the optimal DW wave vector in terms of the energetics of filling the Dirac-fermion LLs. In Section III, we showcase the emergent DW in a benchmark 2D model and compare the numerical results on the optimal DW wave vectors with our theoretical expectations from the Dirac-fermion LL perspective. We also summarize the approximations enlisted in our theory and their potential impacts. We conclude our discussion with a summary of our theory and models, the implications, and what we can do in the next stage. Let us consider a category of model systems where our theoretical analysis can semi-quantitatively determine the preferential wave vector of an emergent density wave.
Equivalence between a 2D DW system and a 3D system with a magnetic field. A 2D system with an incommensurate density wave, Ĥ_2D = Ĥ_0 + Ĥ_DW, where Ĥ_0 includes the translation-invariant terms, ĉ_r is the electron annihilation operator at site r, and q = (q_x, q_y) is the DW wave vector, is equivalent to a 3D system Ĥ_3D = Σ_{k_z} Ĥ_{k_z} [31] with a magnetic field B = (q_y, −q_x, 0) corresponding to the vector potential A = (0, 0, q · r), and k_z = φ is a good quantum number. We have applied the convention that the lattice spacing is 1 and e = ℏ = 1. We aim to determine the optimal q with the lowest energy given the rest of the model parameters, in analogy to the search for the Peierls transition in a mean-field treatment of the electron-phonon couplings, etc. For an incommensurate q, the physics of Ĥ_2D is independent of φ = k_z, which is sometimes denoted as a 'sliding symmetry.' Therefore, Ĥ_2D is equivalent to Ĥ_3D up to a constant factor of N_z, the number of sites of the 3D system in the ẑ direction, and we can analyze the former with the help of the latter (or vice versa). The above equivalence relation offers a clear physical picture when the zero-field electronic structure around the Fermi energy takes the form of NLs in the 3D system, which we will discuss next.

Dirac-fermion LLs for a NL system. Without loss of generality, we consider a 3D model with a magnetic field, where the first (second) line of the Hamiltonian is the intra-layer (inter-layer) hopping between different x−y planes, z = z′ for r and r′, and σ_l and σ_m (l, m = x, y, z, l ≠ m) are the Pauli matrices acting on the spin (or pseudo-spin) indices s, s′. A = (0, 0, q_x x + q_y y) is the vector potential of a magnetic field B = (q_y, −q_x, 0). In this gauge, we can diagonalize the Hamiltonian in the k_z basis, where t′_ẑ (t″_ẑ) is the real (imaginary) part of t_ẑ and r_xy = (x, y). Ĥ_2D ∝ Ĥ_3D(k_z) reflects a 2D DW system and is our focus in this work.
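The displayed equations are lost in this extraction. A sketch of the standard dimensional-extension construction, consistent with the quantities named in the text (the precise form of Ĥ_DW in the paper may differ, so this is a hedged reconstruction rather than the paper's equation), reads:

```latex
% 2D model with an incommensurate modulation of amplitude \lambda and phase \phi:
\hat{H}_{2D}(\phi) = \hat{H}_0
  + \lambda \sum_{\vec r} \cos(\vec q \cdot \vec r + \phi)\,
    \hat c^{\dagger}_{\vec r}\, \hat c_{\vec r} .
% Interpreting \phi as the momentum k_z of an added third dimension,
% \hat{H}_{2D}(k_z) is the k_z block of a 3D model whose inter-layer hopping
% carries the Peierls phase of A = (0, 0, \vec q \cdot \vec r):
\hat{H}_{3D} = \sum_{\vec r, z} \Big[\, \hat{H}_0\text{-terms}
  + \tfrac{\lambda}{2}\, e^{i \vec q \cdot \vec r}\,
    \hat c^{\dagger}_{\vec r, z+1}\, \hat c_{\vec r, z} + \text{h.c.} \Big],
\qquad
\vec B = \nabla \times \vec A = (q_y,\, -q_x,\, 0).
```

Note that the curl of A = (0, 0, q_x x + q_y y) indeed reproduces the field B = (q_y, −q_x, 0) quoted in the text.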
To understand the preferential DW wave vector q in such a 2D system, we can analyze the preferential magnetic field amplitude and direction in the equivalent 3D system in Eq. 3. Without the magnetic field, the 3D Hamiltonian Ĥ_3D can be diagonalized in k space given its fully restored translation symmetry, where k_xy = (k_x, k_y) and ε(k_xy) represents the in-plane terms in Eq. 3. The Hamiltonian has a nodal line wherever ε(k_xy) = 0 and t′_ẑ cos(k_z) + t″_ẑ sin(k_z) = 0, which is illustrated in Fig. 1a. We note that all the nodes on the nodal line are at the same energy in this specific model example, which simplifies our upcoming discussion but can be relaxed to some extent as long as there is little mixing between the zeroth and higher LLs once the magnetic field is present. In the presence of the magnetic field B, the momentum k_∥ parallel to the magnetic field is a good quantum number that labels the different perpendicular cross-sections. Each cross-section may possess pairs of Dirac nodes, as shown in Fig. 1b, which develop into discrete LLs ε_n ∝ ±√(nB), n = 0, 1, 2, ... in the presence of the magnetic field: the zeroth Landau level sits at the energy of the Dirac nodes, while the rest of the LLs are either above or below with a gap ∝ √B. Summing over k_∥, we obtain a large zeroth-LL degeneracy proportional to the number of Dirac nodes intersected, i.e., the NLs' projection along k_∥; see Fig. 2. The counting works for both strong (large t_ẑ) and weak (small t_ẑ) DWs.

Magnetic field for optimal filling. For a system with a fixed electron density n_e = 1 + δn_e, δn_e ≪ 1, the optimal filling is to fill the zeroth LLs and leave all the higher LLs empty.
When |B| < |B|_opt, the electrons will be forced into the higher LLs, leading to an excitation in an incompressible system and an increase in the systematic energy; when |B| > |B|_opt, on the other hand, while the zeroth LLs fully accommodating the electrons above half-filling provide no further energy reduction, the Fermi sea sees an uncompensated energy rise due to the larger magnetic field. Such energy dependence versus the external magnetic field or LL filling constitutes the premise of quantum oscillations [36,37], e.g., the dHvA effect. Therefore, quantitatively, the electron density above half-filling should match half of the zeroth-LL degeneracy, where L_∥ (S_⊥) is the length (area) of the system parallel (perpendicular) to the magnetic field, |B|/2π = |q|/2π is the LL degeneracy per unit area, n_D(k_∥) denotes the number of Dirac nodes in the cross-section at k_∥, and n̄_D = ∫ n_D(k_∥) dk_∥/2π is its average over all k_∥. With n̄_D, the expression for the optimal |q_opt| resembles the one-dimensional case in Ref. 38, yet n̄_D is a continuous variable instead of an integer, just like n_D(k_∥). A similar analysis yields the favorable direction of q and thus B: we note that, under the condition of filling the zeroth Landau level, B, and thus the energy penalty to the Fermi sea, is minimal when n̄_D is maximal, which depends on the geometry of the nodal lines and favors the direction with the largest projection. In Fig. 2, we illustrate an example of a simple-loop NL and the direction that maximizes the span of the projection |k_1 − k_2| parallel to B and thus n̄_D = |k_1 − k_2|/π.

Our approximations. Our arguments are valid when the magnetic field is small enough to treat the Dirac nodes at the same k_∥ independently. Otherwise, quantum tunneling kicks in and gaps the Dirac nodes out, which happens if the separation between them |Δk_⊥| ≲ l_B^{-1}, where l_B = √(ℏ/eB) is the magnetic length [39,40]. Those zeroth LLs split beyond the nonzero LLs should not count in Eq. 6.
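Putting the counting above into formulas (the displayed Eq. 6 is lost in this extraction; the following is a reconstruction consistent with the quantities defined in the text, not a verbatim copy of the paper's equation):

```latex
% Electrons above half-filling fill half of the zeroth-LL states:
\delta n_e \, S_\perp L_\parallel
  = \frac{1}{2} \cdot \frac{|\vec q|}{2\pi}\, S_\perp \cdot \bar n_D L_\parallel,
\qquad
\bar n_D = \int \frac{dk_\parallel}{2\pi}\, n_D(k_\parallel),
% which fixes the optimal DW wave vector:
\quad \Longrightarrow \quad
|\vec q_{\mathrm{opt}}| = \frac{4\pi\, \delta n_e}{\bar n_D}.
```

Here (|q|/2π) S_⊥ is the zeroth-LL degeneracy per intersected Dirac node, and the sum over k_∥ contributes the factor L_∥ n̄_D, in line with the degeneracy counting of the text.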
As we can see in Fig. 8 in the Appendix, fortunately, the splitting ∝ B is generally smaller than the spacing between the zeroth and nonzero LLs ∝ √B for small B. On the other hand, even when the Dirac nodes have pairwise annihilated and developed a mass, as at the points i and j in Fig. 1b, the original zeroth LLs may not yet have shifted beyond the first LLs, leading to an under-estimation of n̄_D (over-estimation of q_opt) following Eq. 6. We find the latter effect more dominant in our examples and provide a detailed numerical analysis of these effects in the Appendix. Also, we have assumed that the energy of the Fermi sea depends monotonically on the strength of the magnetic field and is insensitive to its direction, for which we show supporting numerical results in the Appendix.

MODEL EXAMPLE AND NUMERICAL RESULTS

2D model example. For demonstration purposes, let us consider the model Ĥ_2D = Ĥ_0 + Ĥ_DW, whose density-wave part contains the term 2λ sin(q · r + φ_0) ĉ†_{r,s} σ^x_{s,s′} ĉ_{r,s′}, where σ = (σ_x, σ_y, σ_z) are the Pauli matrices, ε = (ε_1, 0, ε_0) are the onsite potentials, and t_δ = t for δ = x̂, ŷ and t_δ = it for δ = x̂ + ŷ, x̂ − ŷ are the hopping parameters. The dispersion of the translation-invariant Hamiltonian Ĥ_0 is shown in Fig. 3a, where the Fermi surface slightly above half-filling, as shown in Fig. 3b, is rather circular and shows no obvious FSN. However, we will show that to minimize energy, the model prefers a DW, characterized by the mean-field Ĥ_DW, with a preferential wave vector |q_opt| that spans a continuous range and has nothing to do with FSN.

3D nodal-line system and LLs. For an incommensurate q, the 2D model in Eq. 7 is equivalent to a 3D system with a magnetic field. Without the magnetic field, the corresponding 3D Hamiltonian may possess NLs where the σ_x coefficient in Eq. 9 vanishes on the k_z = ± arccos(ε_0/V) planes. We show a couple of examples in Fig. 4. In the presence of the magnetic field, the Dirac nodes along the nodal lines exhibit themselves as Dirac-fermion LLs.
While the n ≠ 0 LLs depend on Fermi-velocity details and form a continuum, the zeroth Landau levels remain (nearly) degenerate at zero energy and are separated from the rest of the Landau bands by large gaps due to the LL spacings. For example, we show in Fig. 5 the density of states (DOS) of the models with the NLs in Fig. 4 in a magnetic field |q| ≪ 2π, where the contributions from the zeroth LLs are clearly visible between the red dashed lines.

FIG. 5. The DOS for q along (1, 0) shows a peak around zero energy, theoretically attributed to the zeroth LLs of the 3D NL systems in Eq. 9 in the presence of a magnetic field. The rest of the parameters are the same as in Fig. 4. The integrated DOS in (b) is larger given the extent of its NLs. Also, we see signatures of the zeroth LLs' splittings, which remain small compared to the gaps and keep our argument valid thanks to the smallness of the magnetic field.

Optimal DW wave vectors. We calculate the energy of Ĥ_2D in Eq. 7 numerically via exact diagonalization on a system of size L_x = 1000 along q and L_⊥ = 200 values of k_⊥ in the perpendicular direction. We discuss our setup for q along directions other than x̂ and ŷ in the Appendix. The results for the average energy per electron Ē versus the DW wave vector q for the model parameters in Figs. 4a and 5a are shown in Fig. 6. The optimal wave vector |q_opt| ∼ 0.15 is approximately consistent with our theoretical expectation of |q_theory| ∼ 0.17 according to Eq. 6. In addition, the DWs in the q̂ = x̂, ŷ directions yield the lowest energy overall with negligible difference, consistent with our analysis and the fact that the NLs' projection |k_1 − k_2|, and subsequently n̄_D, is largest along these two directions. In comparison, the average electron energy without the DW term Ĥ_DW is ΔĒ_0 ∼ 0.29 above these minima, suggesting that the DW formation is indeed favorable energetically.
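As a toy check of the zeroth-LL counting, the sketch below (a hypothetical illustration, not the paper's code; the circular nodal loop, its radius k0, and the filling dne are assumptions) computes n̄_D for a simple circular NL by counting cross-section intersections, compares it with the closed form |k_1 − k_2|/π = 2k_0/π, and evaluates |q_opt| = 4π δn_e/n̄_D, which follows from matching δn_e to half the zeroth-LL degeneracy (|q|/2π) n̄_D.

```python
import math

def n_bar_D(k0: float, n_grid: int = 200_000) -> float:
    """Average number of Dirac nodes per cross-section, averaged over
    k_par in [-pi, pi), for a circular nodal loop of radius k0:
    each cut with |k_par| < k0 intersects the loop twice."""
    total = 0
    for i in range(n_grid):
        k_par = -math.pi + (2 * math.pi) * i / n_grid
        if abs(k_par) < k0:
            total += 2          # two intersections with the loop
    return total / n_grid       # average over the 2*pi-periodic k_par

k0 = 0.8                         # assumed nodal-loop radius
avg = n_bar_D(k0)
closed_form = 2 * k0 / math.pi   # |k1 - k2| / pi, with span |k1 - k2| = 2 * k0

dne = 0.02                       # assumed electron density above half-filling
q_opt = 4 * math.pi * dne / avg  # optimal DW wave vector from the counting
print(avg, closed_form, q_opt)
```

For these assumed values, the grid average and the closed form agree to better than 10^-3, and the resulting |q_opt| shrinks as the loop (and hence n̄_D) grows, matching the trend described in the text.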
Also, the dependence of Ē on |q| is consistent with our expectation: the optimal |q_opt| allows all the electrons above charge neutrality to be accommodated by the degenerate zeroth LLs; when 0 < |q| < |q_opt|, the filling goes into some of the higher LLs, leading to higher energy; when |q| > |q_opt|, on the other hand, the electron Fermi sea suffers an energy penalty due to the higher magnetic field. We discuss more detailed results and analysis on the energetics in the Appendix.

FIG. 6. The (relative) average energy per electron Ē_r for the model in Eq. 7 shows minima at different optimal |q_opt| along different directions. The parameters are the same as in Figs. 4a and 5a.

More generally, the value of q_opt versus the DW amplitude λ as a tuning parameter is summarized in Fig. 7, and we observe a good consistency between our analysis based on Eq. 6 and the numerical benchmark for both q along the (1, 0) and the (1, 1)/√2 directions. Intuitively, when λ is small, the nodal line is a closed contour and grows with λ, leading to an increasing n̄_D and a decreasing |q_opt|. On the contrary, as λ increases further and beyond the Van Hove singularities, see Fig. 4b, the NLs' extent and thus n̄_D reverses its trend and decreases until it vanishes, giving rise to a monotonically increasing |q_opt|. The slight differences between numerics and theory are likely due to the neglect of massive Dirac-fermion Landau levels (Fig. 1b), which underestimates n̄_D (overestimates q_opt); see more related discussions in the Appendix. In addition, the optimal |q_opt| is smaller for q along the (1, 0) direction until large λ ∼ 3.5, suggesting that, qualitatively, the DW along x̂ should be the winner with lower energy, which is indeed the case as we compare the two in the inset of Fig. 7.
Overall, we note that the DW wave vectors change continuously with the DW strengths, in sharp contrast to the behaviors descending from FSN, electron-phonon coupling, or exciton condensation, where the DW wave vector is generally fixed by the Fermi surface geometry or a momentum specially meaningful to the bosonic degrees of freedom.

CONCLUSIONS

In summary, we put forward a novel perspective to understand the DW tendency in 2D systems via the energetics of a 3D NL system. Our setup is parallel to the Peierls transition yet does not require any apparent FSN to begin with. Correspondingly, the optimal DW wave vector depends on the geometry of the 3D NLs and may vary continuously with respect to the model parameters. Also, our numerical results on benchmark models fit our analysis consistently. Such continuous variations of the DW wave vectors are present in materials such as various cuprates [27][28][29][30], where a purely FSN interpretation is unlikely. While we do not intend to relate our analysis to these materials directly, our perspective kindles the theoretical possibility of a variable DW wave vector from more generic origins. On the other hand, our study points out that our current understanding of the DW origin is still primitive and a universal understanding is not yet available. Our current study has focused on models with rather specific constructions. While such a setup facilitates the theoretical analysis and its controllability, it also limits the generalization of the mechanism and its application in practice. It will be interesting to probe the generalization of the current mechanism beyond its model limits and its connection with FSN and other DW mechanisms in interpolating models for further physical intuition.

Acknowledgement. We thank Di-Zhao Zhu for insightful discussions. The authors are supported by the National Science Foundation of China (No. 12174008) and the start-up grant at Peking University.
The calculations of this work are supported by HPC facilities at Peking University.

In the Appendix we consider a two-band low-energy model with a parameter m. The model has two Dirac fermions at (±√m, 0) for m > 0, and massive Dirac fermions with a mass ∼ |m| for m < 0. Therefore, we can use this model to simulate the low-energy scenarios at different k_y cross-sections in Fig. 1 in the main text. In the presence of a magnetic field B = (0, 0, B) with gauge A = (0, Bx, 0), we can express the momentum in terms of ladder operators â and â† defined on the LL number space. In turn, we can re-write the Hamiltonian in this basis and solve for the dependence of the low-lying LLs on B, as in Fig. 8. When two Dirac nodes are farther apart, they behave independently, and their zeroth LLs contribute to the zero-energy DOS in the magnetic field. However, as their separation |Δk_⊥| becomes smaller in comparison with the inverse magnetic length l_B^{-1}, the quantum tunneling between the Dirac nodes leads to splittings between the Landau levels [39,40], which gradually deviate from zero energy as B increases. This process is illustrated in Fig. 8 for m = 0.01. Fortunately, such splitting is relatively small in comparison with the LL spacings, especially between the n = 0 and n = 1 LLs (see the red curves in Fig. 8 for the n = 1 LLs), at small to moderate magnetic fields, and does not deflect much of our reasoning. On the other hand, even when the Dirac nodes have annihilated in pairs and no NL is present, similar to the i and j points in Fig. 1, the resulting LLs depend on the actual values of their masses and may still be relatively close to zero energy. For instance, we present the zeroth LL for m = −0.01 in Fig. 8, which is closer to zero energy than the generic first LLs. By counting only the NLs, we may underestimate the contribution from such (slightly) massive Dirac-fermion LLs to the DOS around zero energy.
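The displayed appendix model is also lost in this extraction. A standard semi-Dirac form with the stated node structure (two nodes at (±√m, 0) for m > 0, mass ∼ |m| for m < 0) is, as a hedged reconstruction rather than the paper's exact Hamiltonian:

```latex
H(\vec k) = (k_x^2 - m)\,\sigma_x + k_y\,\sigma_y .
% For m > 0: gapless Dirac nodes at \vec k = (\pm\sqrt{m},\, 0);
% for m < 0: a gap (mass) of order |m|.
% In the field B\hat z with gauge A = (0, Bx, 0), minimal coupling
% k_y \to k_y + Bx and [x, k_x] = i yield ladder operators with
% [\hat a, \hat a^{\dagger}] = 1:
k_x \to \sqrt{\tfrac{B}{2}}\,(\hat a + \hat a^{\dagger}),
\qquad
k_y + Bx \to i\sqrt{\tfrac{B}{2}}\,(\hat a - \hat a^{\dagger}),
% after which truncating the LL number space and diagonalizing the
% resulting matrix gives the low-lying LLs versus B (cf. Fig. 8).
```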
Still, the NLs offer a good starting point for the counting, as the LLs of most massive-fermion cases, e.g., m = −0.25, are too far away from zero energy to actually contribute, and our neglect is small.

Energy cost of the magnetic field on the Fermi sea

We calculate the energy of the Fermi sea, i.e., the average energy of the electrons at half-filling n_e = 1, for the model in Figs. 4a and 5a in the main text in a magnetic field B, and summarize the results in Fig. 9. The energy increases monotonically with respect to the amplitude of the magnetic field (DW wave vector) and is relatively insensitive to the direction of the magnetic field (DW wave vector). Therefore, we cannot arbitrarily increase |q_opt| after reaching the complete filling of the zeroth LLs, due to the associated energy cost.

FIG. 10. For q along (1, 3)/√(1+3²) on the 2D square lattice, we change the basis from x̂, ŷ to x̂′, ŷ′ so that we can utilize the translation symmetry along the ŷ′ direction.

Numerical method for systems with a tilted magnetic field

Generally, density waves may arise in any direction, and we need to consider the case where q = (q_x, q_y) is tilted from the x̂ or ŷ directions for the most energetically favorable condition. Here is the method we used to compute the dispersion of the model in Eq. 7 with q in any direction. While our theoretical analysis based upon 3D NLs is a low-energy effective theory that can apply to q along any direction, the numerical calculations require that q = (q_x, q_y) points in a commensurate direction, q_x/q_y = m/n, m, n ∈ ℤ, gcd(m, n) = 1. Without loss of generality, we limit ourselves, for simplicity, to wave vectors such that there will not be additional sublattices. Here, |q_0| is the norm of the wave vector, and p ∈ ℤ determines the direction of q. This wave vector in the 2D DW system corresponds to a magnetic field B = |q_0|(p/√(1+p²), −1/√(1+p²), 0) in the 3D system, whose electromagnetic vector potential takes the form A = (0, 0, q · r).
Consequently, neither k_x nor k_y is a good quantum number. Instead, we can take a new basis x̂′, ŷ′, so that there remains translation symmetry along the ŷ′ direction and k_y′ is a good quantum number; see Fig. 10 for illustration. The Hamiltonian Ĥ_2D = Ĥ_0 + Ĥ_DW in the new basis takes the same form as Eq. 7 in the main text and can be solved in a similar fashion, with the corresponding changes to the model settings. Finally, we apply a polynomial fit to the resulting E_0(|q|) to get rid of the fluctuations in the data, mainly caused by the limited system and step sizes, for a smoother display.
Evaluation of the Implementation of the General Data Protection Regulation in Health Clinics

The new General Data Protection Regulation (GDPR) was approved on April 27 2016. The GDPR 2016/679 aims to ensure the coherence of natural persons' protection within the European Union (EU), comprising very important innovative rules that will be applied across the EU and will directly affect every Member State. Furthermore, it aims to overcome the existing fragmented regulations and to modernise the principles of privacy in the EU. This regulation will apply from May 2018, bringing along several challenges for citizens, companies and other private and public organisations. The protection of personal data is a fundamental right. The GDPR considers a 'special category of personal data', which includes data regarding health, since this is sensitive data and is therefore subject to special conditions regarding treatment and access by third parties. This premise provides the focus of this research work, where the implementation of the GDPR in health clinics in Portugal is analysed. The results are discussed in light of the data collected in the survey and possible future works are identified.

INTRODUCTION

Although this regulation was approved on April 27 2016, its enforcement was set for the twentieth day after its publication in the Official Journal of the European Union. Therefore, it came into force on May 24 2016. The EU established a two-year transitional period for companies to implement the necessary changes until May 25 2018 in order to ensure the full compliance of their data treatment with the rules imposed by the GDPR.
The relevance of the GDPR is due to the major current challenge of ensuring control over data privacy at a time when the growing adoption of the Internet, social networks and digital business models creates an equation which is hard to solve: on the one hand, people are enticed to share information about their personal life, more often than not without accounting for potential collateral effects; on the other hand, organisations collect increasingly more information on their clients, usually with the aim of providing more and better services or as a way to monetise the information (Data Privacy da KPMG Portugal, 2017). Currently, there are 28 data protection acts based on the EU Data Protection Directive of 1995, that is, a directive implemented over 20 years ago (Ryz and Grest, 2016). Technological evolution, and the increasingly common use of smartphones, wearable devices or the Internet of Things, brings along a pressing concern about our personal data and its protection, and causes the regulatory entities to be more alert and to implement new regulations.

Before the new GDPR, previous data protection legislation had become fragmented across the EU as different countries added to the basic principles enshrined in the original directive of 1995. Another reason why new legislation was needed is that the original directive of 1995 was formulated in what now appears to be a different technological era. Back then, just 1% of the world population was using the Internet, but today it is almost ubiquitous across the EU. Cloud computing and social media were not known then, nor were smartphones or tablets. Today, the vast majority of information is produced and consumed electronically, making it harder to protect (Tankard, 2016). The recent approval of the General Data Protection Regulation holds positive prospects for the future of data protection in Europe.
The existence of a solid and uniform legal framework across Europe that has been updated to meet the needs of technology will not only free up the potential of the Digital Market, promote innovation, and foster the creation of employment and generation of wealth, but will also safeguard the fundamental right to data protection for citizens and residents in Europe (Díaz, 2016). This regulation introduces significant changes in natural persons' protection with regard to personal data treatment, imposing new obligations on citizens, companies and other private and public organisations. Since the transitional period for the full compliance of companies with the regulation is coming to an end, it is relevant to assess companies' level of preparation for the new GDPR demands. Many industry sectors could have been chosen, but this research work focused on the health sector, through a survey conducted in health clinics in Portugal. The aim was to determine the extent to which these companies are in compliance with the new personal data regulation.

The structure of the present work consists of an introduction, followed by a desk review on the general data protection regulation and its implementation. The following section focuses on the research methodology, identifying the target population and the structure of the survey. The results of the study are discussed in section 5, followed by the conclusions drawn from the study. Finally, the limitations of this research work are identified and possible future studies are proposed.

GENERAL DATA PROTECTION REGULATION

The enforcement of the GDPR on natural persons' protection regarding personal data treatment and movement, which repeals Directive 95/46/CE of October 24 1995, poses innumerable challenges to both public and private entities as well as to all the agents whose activities involve the treatment of personal data.
Although the full application of the new GDPR has been set for May 25 2018, the date from which Directive 95/46/CE will be effectively repealed, its entry into force on May 24 2016 dictated the need for an adaptation to all the aspects changed or introduced by the regulation. Such adaptation of the present systems and models, as well as of best practices regarding personal data treatment and protection by companies, is now an imperative stemming from the regulation in order to safeguard its full applicability from May 25 2018.

However, before focusing directly on the new regulation, it is important to clarify exactly how the document defines 'personal data', since its protection is the focus of the act. The GDPR defines personal data in a broad sense so as to include any information related to an individual which can lead to their identification, either directly, indirectly or by reference to an identifier. Identifiers include (European Parliament and Council, 2016):

• Names.
• Online identifiers such as social media accounts.
• Any data that can be linked to the physical, physiological, genetic, mental, economic, cultural or social identity of a person.

Companies collecting, transferring and processing data should be aware that personal data is contained in any email, and should also consider that information about third parties mentioned in emails counts as personal data and, as such, would be subject to the requirements of the GDPR (Ryz and Grest, 2016). The GDPR requirements apply to each member state of the European Union, aiming to create more consistent protection of consumer and personal data across EU nations. The GDPR mandates a baseline set of standards for companies that handle EU citizens' data to better safeguard the processing and movement of citizens' personal data. The main innovations of the General Data Protection Regulation are (Díaz, 2016):

1. New rights for citizens: the right to be forgotten and the right to a user's data portability from one electronic system to another.
2. The creation of the post of Data Protection Officer (DPO).
3. Obligation to carry out Risk Analyses and Impact Assessments to determine compliance with the regulation.
4. Obligation of the Data Controller and Data Processor to document the processing operations.
5. New notifications to the Supervisory Authority: security breaches and prior authorisation for certain kinds of processing.
6. New obligations to inform the data subject by means of a system of icons that are harmonised across all the countries of the EU.
7. An increase in the size of sanctions.
8. Application of the 'one-stop-shop' concept, so that data subjects can carry out procedures even when these involve authorities in other member states.
9. Establishment of obligations for new special categories of data.
10. New principles in the obligations over data: transparency and minimisation of data.

Among these points representing the main innovations imposed by the new legislation, we highlight point nine, in which the regulation recognises that health data falls within the 'special categories of data', considering that such data is sensitive and therefore subject to special limitations regarding access and treatment by third parties. Health data may reveal information on a citizen's health condition as well as genetic data, such as personal data regarding hereditary or acquired genetic characteristics, which may disclose unique information on the physiology or health condition of that person. The protection of such health data imposes particular duties and obligations on the companies operating in this sector.
As far as the security of personal data is concerned, the GDPR mandates the application of appropriate technical and organisational measures to ensure an adequate security level, among which:

• The pseudonymisation and encryption of personal data;
• The capacity to ensure the permanent confidentiality, integrity, availability and resilience of data treatment systems and services;
• The capacity to re-establish prompt availability and access to personal data in the event of a physical or technical incident.

All organisations, including small to medium-sized companies and large enterprises, must be aware of all the GDPR requirements and be prepared to comply by May 2018. By beginning to implement data protection policies and solutions now, companies will be in a much better position to achieve GDPR compliance when it takes effect.

IMPLEMENTATION OF THE GDPR

The sooner organisations begin to prepare for the GDPR, the more they will minimise risks, reduce the likelihood of fines being imposed, and be able to comply with the changes imposed. Therefore, companies must determine what changes they need to make in order to comply with the new regulation and proceed to the implementation of such changes, which might even include the adoption of new security measures. Organisations should get acquainted with the requirements under the GDPR. Following this, organisations should review all data processing activities currently undertaken and envisaged in order to identify any breaches in compliance with the GDPR and the associated risks. It is also important to review all contracts, privacy notices, consent forms and any other documentation under which data processing occurs, so as to ensure that these are in line with the GDPR. The following figure (Figure 1) identifies the main stages towards the implementation of the GDPR as well as what to develop in each of them.
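Among the technical measures listed above, pseudonymisation is worth illustrating concretely, since it is often confused with anonymisation. The sketch below is illustrative only and is not taken from the paper or the regulation: the record fields and the key are hypothetical. It uses a keyed hash (HMAC-SHA256), so re-identification remains possible for whoever holds the key, which is precisely what distinguishes pseudonymisation from anonymisation under the GDPR.

```python
import hmac
import hashlib

def pseudonymise(record, secret_key, fields=("name", "email")):
    """Return a copy of `record` with direct identifiers replaced by
    keyed hashes. The same input and key always yield the same token,
    so records can still be linked without exposing the identifier."""
    out = dict(record)
    for field in fields:
        if field in out:
            token = hmac.new(secret_key, str(out[field]).encode("utf-8"),
                             hashlib.sha256).hexdigest()
            out[field] = token[:16]  # truncated token for readability
    return out

# hypothetical patient record from a clinic's database
patient = {"name": "Maria Silva", "email": "maria@example.pt", "diagnosis": "asthma"}
print(pseudonymise(patient, secret_key=b"clinic-held-secret"))
```

Encryption of data at rest and in transit would complement this step; neither measure alone satisfies the regulation, which also requires organisational measures.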
The implementation of the GDPR can be conducted in three different but complementary stages: Gather, Analyse and Implement. After the conclusion of these stages, companies will have to ensure the continuity of their compliance with the GDPR, for which periodical compliance audits must be carried out.

One GDPR best practices guide puts forward a GDPR implementation methodology designed to (MetaCompliance, 2017):

• Engage stakeholders to ensure timely and efficient organisational readiness for the GDPR.
• Implement effective procedures that embed GDPR-compliant operational behaviours.
• Establish assurance criteria that will sustain and evidence GDPR accountability.

The methodology consists of three phases (Prepare, Operate, Maintain), with each incorporating a number of supporting activities. The objective defined for each phase is attained once all of the activities for that phase have been successfully executed. The ultimate goal of the methodology is to sustain and evidence compliance with the GDPR Accountability Principle.

Another opinion refers to four options for fulfilling the GDPR's vision of certification mechanisms, data protection seals and marks that encourage transparency and compliance with the Regulation, in line with Recital 100. The options demonstrate the range of possibilities that the GDPR supports. They are (Rodrigues et al., 2016):

• Encouraging and supporting the GDPR certification regime;
• Accreditation of certification bodies;
• Certification by national data protection authorities;
• Co-existence of the above three.

These approaches provide a summary of the necessary phases or stages for the implementation of the GDPR. In the next section, information is given on the target population of this research work, so as to then present the results concerning the implementation of the GDPR in the health clinics surveyed.

RESEARCH METHODOLOGY

The digital impact and transformation of recent years is visible in several sectors.
The health sector is no exception and such transformation is an indisputable fact. The digital revolution brings along inevitable concerns regarding users' data security, privacy and protection, especially as far as health and clinical information is concerned (SPMS, 2017). The choice of an appropriate data collection technique to characterise the implementation of the GDPR in medical clinics fell on the survey, since it enables a clear, direct and objective answer to the questions asked of the respondents. Also, the universe under study comprises thousands of clinics, among which 190 were surveyed, which makes the adoption of alternative research techniques impractical, if not impossible. The aim of the survey was to characterise the current state of health clinics with regard to the implementation of the GDPR, in other words, to determine their level of knowledge and preparation regarding the issue of personal data protection and privacy.

Population

The survey was sent to 190 clinics, but only 57 gave an effective reply, which corresponds to a response rate of 30%. The sample subjects were selected randomly based on the kind of clinic and their location, distributed throughout the 18 mainland Portuguese districts as well as Madeira and the Azores. Among the 190 contacts established, 35 replied via telephone and 22 via email after a first telephone contact. Whenever possible, the respondent to the survey was the person in charge of the clinic's IT department; when there was no such person, the respondent was the person in charge of the clinic. The study was conducted between October and December 2017.

Structure

The structure of the survey resulted from a desk review on personal data protection and the study of the legal framework of Regulation (EU) 2016/679 of the European Parliament and of the Council of April 27 (the General Data Protection Regulation). The questions of the survey, answered individually and confidentially, were organised in three groups.
The first group aimed to obtain a brief characterisation of the clinic as well as of the respondent. The two following groups contained questions concerning the GDPR applicability, preceded by the paramount core question: 'has the clinic implemented the measures imposed by the GDPR yet?' After responding to this central question and when the answer was negative, respondents were asked whether they intended to implement such measures, since they were not in compliance with the regulation, and if so, whether the implementation process was already in motion. When the respondents did not intend to adopt any measure, they were asked about whether or not they were aware of the fines they may have to pay for the noncompliance with the regulation and why they did not intend to adopt such measures. A positive answer to the central question would lead to the group of questions targeted at the companies which are already in compliance with the regulation or which are implementing the measures imposed. Some of the questions asked within this group were as follows:

• Are you aware of the GDPR?
• What impacts and challenges will clinics face in the compliance with the regulation?
• What stage of the implementation of the GDPR are you in?
• Have you identified or designated anyone for the post of Data Protection Officer?
• Has any training or awareness raising session been held about the new rules?
• Is the protection of personal data a priority in this clinic?

The survey was quite extensive. However, for this study, our focus lies uniquely on the implementation of the regulation, thus on the clinics which have implemented or are currently implementing it.

RESULTS

When asked whether they have started or concluded the process of implementation of the measures enshrined in the GDPR, 43 respondents (75%) answered no and 14 (25%) said that they have started or concluded the adoption of the measures (Figure 2).
Among the 14 clinics which gave a positive answer, only 4 (28%) consider that they are already in compliance with the demands of the regulation. The remaining 10 clinics (72%) are still implementing the measures. This number is small: among the 57 clinics surveyed, only 7% have completed the implementation of the GDPR. For a better understanding of the results, we can group the clinics into three clusters (Figure 3):

• Cluster 1 - Clinics in compliance with the regulation;
• Cluster 2 - Clinics which are implementing the measures imposed by the regulation;
• Cluster 3 - Clinics which are not in compliance with the regulation.

Since this study focuses on the implementation of the GDPR, emphasis will be given to clusters 1 and 2, since cluster 3 comprises clinics which are not implementing the regulation. The majority of the subjects surveyed are aware of the obligations and challenges posed by the new general data protection regulation, which seems contradictory, since only 25% of the clinics have adopted or are adopting the measures imposed. The implementation of the regulation is more or less demanding depending on the size of the company as well as on whether or not it was already in compliance with the principles enshrined in Directive 95/46/CE. In about 36% of the clinics surveyed, the respondents foresee that the adoption of the GDPR will entail a high or very high impact, 43% foresee a medium impact and 21% a low or very low impact as far as implementation time, effort and costs are concerned (Figure 4).
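The survey figures above are internally consistent; a quick cross-check of the reported counts and percentages (all numbers taken from this section):

```python
total_contacted = 190
respondents = 57        # clinics that effectively replied
started = 14            # started or concluded implementation (clusters 1 + 2)
compliant = 4           # already in compliance (cluster 1)

response_rate = respondents / total_contacted
implementing = started - compliant          # cluster 2
not_compliant = respondents - started       # cluster 3

print(f"response rate: {response_rate:.0%}")
print(f"cluster 2 (implementing): {implementing} clinics")
print(f"cluster 3 (not compliant): {not_compliant} clinics")
print(f"fully implemented: {compliant / respondents:.0%} of respondents")
```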
When the clinics in cluster 2 were questioned about which implementation stage of the GDPR (gather, analyse, implement) they were in, half of the ten respondents said they were in the gathering stage, 30% said they were in the analysis stage, more specifically assessing the risk of not complying with the regulation, and 20% of the clinics were implementing measures identified in the previous stages and contained in the reports. The implementation stage will enable the creation of conditions to make the GDPR an integrating part of the organisation's activities as well as to make it monitorable (Figure 5). After the conclusion of these implementation stages, a compliance assessment must be conducted periodically, since the data is not immutable and even the company's business and activity may undergo changes which may make the measures initially implemented inadequate to the new circumstances.

When asked how they had implemented the new measures enshrined in the regulation, the respondents gave the same answer, namely that there was nobody in the company with enough knowledge to conduct the process. They stated that they had hired the services of external companies for guidance in order to be able to meet the requirements imposed by the GDPR. We also determined that, among the four clinics which claimed to be in compliance with the regulation, only one has identified and designated the person who will be responsible for data treatment, the Data Protection Officer. Overall, the respondents were sensitive to the importance of training both the board and the workers. However, no training or awareness raising session has been held concerning the new rules to be adopted, although such sessions were said to be planned for the near future. It is paramount to ensure that workers are aware of the GDPR implications, and such sessions are the most appropriate way to communicate the new data protection rules to collaborators.
With regard to the acknowledgement of the sanctions and fines companies are subjected to, 28 respondents (65%) are not aware and 15 (35%) are aware of such sanctions and fines. The GDPR reinforces the power of authorities and increases the fines. These sanctions are more burdensome and can reach 20 million euros or 4% of the overall turnover for the previous year, whichever is higher.

Of the total number of clinics responding to the survey, most consider the stipulated two-year transitional period given to companies to adapt to the new GDPR insufficient, with results distributed as follows: 18 (31.5%) consider that there is enough time, 29 (50.8%) consider this time insufficient, and the other 10 respondents (17.6%) had no formed opinion on the matter (Figure 6). The time taken to implement the GDPR will always depend on the complexity of the company's business activity, its organisational maturity, the volume and variety of the personal data used, the adequacy and flexibility of its information systems, and on all its stakeholders' availability and willingness. It is not easy to establish the time it takes to ensure compliance with the regulation. Since many companies only take action close to deadlines, it is believed that most companies will be ready to comply within the set transitional period, especially considering the high applicable sanctions.

One of the grounds supporting the GDPR was the reinforcement of citizens' rights regarding the way companies and organisations collect and use their personal data. All the respondents to this survey agree with this principle and consider this regulation of high relevance and importance. It is not enough for a company to claim that it complies with the regulation; it has to prove that the personal data it uses within the scope of its activity is being protected in accordance with the regulation.

CONCLUSION

After years of wrangling, the GDPR is now a fact and compliance deadlines are looming.
The time to start preparing is now. Organisations need to ensure that they are not caught out and do not face sanctions for non-compliance. With the right precautions in place, organisations should have little to fear. The time and effort required to achieve compliance will vary greatly from one organisation to another, but it is well worth the effort (Tankard, 2016). Since the GDPR is an EU regulation and not a directive, it is mandatory and has binding legal force. In Portugal, it will be supervised by the National Data Protection Commission (CNPD), which is the Portuguese data protection authority.

The implementation of the GDPR will imply challenges which will not be easily overcome. In many cases, it will imply a cultural change within the organisation. However, it may be an opportunity for many companies to finally document their processes and procedures, implement their values, consolidate their business ethics and display a convincing and motivating coherence to the market and to their clients, partners and collaborators. The treatment of personal data must be a process that is transparent to subjects at all times, from collection to deletion. The purpose of the data collection to which the subjects consent must be clear and no data besides the strictly necessary must be collected. The implementation of the regulation implies the definition of procedures, records and policies. Both people and technologies represent critical success factors in its implementation. Therefore, it might be relevant to carry out further research to determine to what extent the GDPR, although targeted at data protection, might also be a booster for the digital transformation of health clinics.
A Multisite Network Assessment of the Epidemiology and Etiology of Acquired Diarrhea among U.S. Military and Western Travelers (Global Travelers' Diarrhea Study): A Principal Role of Norovirus among Travelers with Gastrointestinal Illness

Abstract. U.S. military personnel must be ready to deploy to locations worldwide, including environments with heightened risk of infectious disease. Diarrheal illnesses continue to be among the most significant infectious disease threats to operational capability. To better prevent, detect, and respond to these threats and improve synchronization across the Department of Defense (DoD) overseas laboratory network, a multisite Global Travelers' Diarrhea protocol was implemented with standardized case definitions and harmonized laboratory methods to identify enteric pathogens. Harmonized laboratory procedures for detection of Norovirus (NoV), enterotoxigenic Escherichia coli (ETEC), enteroaggregative E. coli, Shiga toxin–producing E. coli, enteropathogenic E. coli, Salmonella enterica, Shigella/enteroinvasive E. coli, and Campylobacter jejuni have been implemented at six DoD laboratories with surveillance sites in Egypt, Honduras, Peru, Nepal, Thailand, and Kenya. Samples from individuals traveling from wealthy to poorer countries were collected between June 2012 and May 2018, and of samples with all variables of interest available (n = 410), most participants enrolled were students (46%), tourists (26%), U.S. military personnel (13%), or other unspecified travelers (11%). One or more pathogens were detected in 59% of samples tested. Of samples tested, the most commonly detected pathogens were NoV (24%), ETEC (16%), and C. jejuni (14%), suggesting that NoV plays a larger role in travelers' diarrhea than has previously been described.
Harmonized data collection and methods will ensure identification and characterization of enteric pathogens are consistent across the DoD laboratory network, ultimately resulting in more comparable data for global assessments, preventive measures, and treatment recommendations.

INTRODUCTION

Travelers' diarrhea (TD) has been described as the most common medical ailment among those traveling from resource-wealthy to resource-poor countries. According to data from the Foodborne Diseases Active Surveillance Network, the highest burden of infectious diarrhea was reported among U.S. citizens returning from travel to Mexico (32.7%), India (8.2%), and Peru (4.0%). 1 Although modern advances in public health, such as improved water, sanitation and hygiene conditions; development and widespread dissemination of vaccines; and antimicrobials to treat infection have all led to an overall decline in infectious diarrhea during U.S. military engagements, it still remains a significant threat to travelers, both civilian and military, 2,3 even those whose travel is long term (1 month or more). 4 Military personnel experience TD in austere, operational settings that are unique among international travelers 2,5 and present diagnostic challenges. 6 The 2019 U.S. Military Infectious Diseases Threats Prioritization Panel, 7 which ranks infectious disease threats by tiers of military concern to guide medical research investment, ranked bacterial diarrhea first among 65 threats. 7 Diarrheal illnesses continue to threaten operational capability through mission degradation and lost person-hours, 8 with deployed military service members traveling from higher to lower income countries experiencing an approximately 30% incidence of diarrhea, 5 and most cases of untreated TD lasting 4-5 days. 9 There is a dire need for improved surveillance that will better define this infectious disease threat and lead to more effective prevention and treatment practices.
The Armed Forces Health Surveillance Division, Global Emerging Infections Surveillance section facilitates global surveillance of enteric pathogens across the Geographic Combatant Commands to provide data that inform force health protection (FHP) decision-making, Department of Defense (DoD) policy, and public health action to prevent, detect, and respond to enteric threats, as well as research involving product development (e.g., pharmaceuticals, vaccines, and diagnostics), ultimately benefiting DoD beneficiaries worldwide. Although enteric surveillance throughout the DoD overseas laboratory network is robust, it has been hampered by a lack of integrated case definitions, standardized data elements, and universally optimized laboratory procedures. Such limitations are challenges to understanding the true burden of disease across regions. In an effort to improve harmonization and yield more comparable data, DoD partners designed and implemented a multisite Global TD (GTD) protocol consisting of standardized case definitions for enteric disease and harmonized laboratory methods for identification of enteric pathogens.

MATERIALS AND METHODS

Our study used standardized case definitions for TD (Table 1), covering both acute diarrhea (AD) and acute gastroenteritis (AGE); a minimum set of clinical data elements; and harmonized laboratory procedures for detection of Norovirus (NoV), including genogroup identification; diarrheagenic Escherichia coli (DEC), including toxins and colonization factors (CFs); Salmonella enterica; Shigella spp.; and Campylobacter jejuni. The GTD study also included a robust laboratory quality assurance and quality control (QA/QC) program and a centralized data management system.

Study population. Although the GTD study incorporates eight partner laboratories in the DoD network, our analysis included six laboratories (Table 2) from the period of June 2012 to May 2018.
The DoD laboratories participating in this study represent surveillance sites in Egypt, Honduras, Peru, Nepal, Thailand, and Kenya (GTD study sites in Cambodia and Georgia were not included because of few samples available for analysis). Participants were enrolled from embassy clinics, traveler clinics, foreign language schools, and military installations when they sought care for AD or AGE.

Study eligibility. Because previous work has shown that travelers from wealthier countries have higher attack rates than those from less wealthy countries, 10 participants were required to originate from Organisation for Economic Cooperation and Development (OECD) 11 member countries, with travel to OECD nonmember countries. Participants included in the study were 18 years or older and had been in the country 1 year or less. Those with reported consumption (dose and duration) of any antimicrobial agent(s) within the 7 days preceding the study enrollment date (with the exception of antimalarial agents, such as Malarone [atovaquone/proguanil combination], doxycycline, chloroquine, mefloquine, or primaquine); those with chronic, persistent gastrointestinal (GI) symptom(s) with a duration greater than 7 days before enrollment, or noninfectious diarrhea; and those who could not produce a stool sample were excluded from the study. Participants were eligible to be enrolled multiple times in the study; however, a different subject identifier was used for each new episode of AD or AGE. After assessing eligibility and obtaining informed consent, participants underwent a clinical evaluation, provided a stool specimen, and completed a questionnaire administered by a healthcare worker.
The questionnaire elicited demographic information (sex, age, country of residence, and type of travel), clinical presentation (vital signs, clinical signs and symptoms, and stool grade), treatment history and on-site treatment administered (treatment setting, treatment type, treatment provided, etc.), and case disposition (effect of illness on ability to travel or perform duties). Each site was independently responsible for developing a questionnaire to collect these harmonized, predetermined minimum data elements, although questionnaire verbiage and formatting were not themselves harmonized, in order to leverage existing data collection infrastructure at the individual site level. Participants were treated for their illnesses as per site clinical treatment guidelines.

Laboratory methods. Each participating laboratory tested clinical specimens in compliance with standard operating procedures (SOPs) developed by the Naval Health Research Center (NHRC) for molecular testing of the GTD study core pathogens: NoV, enterotoxigenic E. coli (ETEC), CF antigens of ETEC (ETEC-CF), enteroaggregative E. coli (EAEC), Shigella/enteroinvasive E. coli (EIEC), Shiga toxin-producing E. coli (STEC), enteropathogenic E. coli (EPEC), Salmonella, and C. jejuni. Three categories of testing were performed at each laboratory: 1) traditional plate-based culture, identification and antimicrobial susceptibility testing (AST); 2) bacterial isolate DNA-targeted PCR; and 3) stool RNA/DNA-targeted real-time PCR and conventional PCR. Traditional culture-based (Figure 1) and molecular-based assays (Figure 2) were performed in parallel to increase the chances of identifying pathogens. For culture-based testing, stool samples were streaked onto various selective and nonselective agar plates and incubated as per the protocol.
Salmonella, Shigella, and Campylobacter were the primary pathogens of interest, but individual laboratories may also have used protocols to detect Vibrio, Yersinia, Aeromonas, Plesiomonas, and DEC. Bacteria were identified by a combination of conventional microbiological methods, manual multiplex biochemical test strips, and automated identification systems, with serological confirmation performed by some laboratories. Antimicrobial susceptibility testing was performed by agar disk diffusion, gradient strips, or automated systems. Molecular testing SOPs prepared by researchers at the NHRC were distributed to participating sites before study initiation. In brief, viral RNA was extracted via the QIAGEN (Germantown, MD) QIAamp® Viral RNA Mini Kit. Testing for NoV genogroup I (GI) and genogroup II (GII) RNA in fecal samples was completed using the NoV duplex real-time (TaqMan®) reverse transcriptase (RT)-PCR assay developed by the CDC as part of CaliciNet. 12 Because findings of NoV infections with both GI and GII are uncommon, we have considered such findings to be a single infection for analysis purposes. A real-time multiplex PCR assay was used for the identification of Salmonella, Shigella-EIEC, and C. jejuni in fecal samples. Identification of DEC in extracted stool samples was performed using a multiplex assay set containing targets for EPEC, STEC, and ETEC. Detection of ETEC toxins and CFs was performed using a conventional, four-part, multiplex PCR assay. This assay was used to determine whether a lactose-fermenting, E. coli-like bacterial colony was ETEC and to categorize the strain based on its toxin and CF profiles. In addition, a conventional multiplex PCR assay was used for the identification of select Shigella species (S. flexneri, S. sonnei, and S. dysenteriae) in stool samples known to be Shigella/EIEC positive.
A QA/QC validation program was administered to each participating laboratory on an annual basis to verify molecular testing capabilities. In brief, the NHRC generated blinded specimens and coordinated with each site laboratory regarding their proficiency testing. The site laboratory identified the blinded sample etiology and reported back to the reference laboratory. Statistical methods. We limited the primary analysis to participants (n = 410) for whom all variables of interest, including complete testing results for all pathogens, were available. A supplementary analysis (SA) was conducted, examining archived, retrospectively tested samples (n = 87) for pathogen data only (metadata were unavailable for this group). All archived specimens were collected from participants who were enrolled in Kenya between January 2013 and December 2015, were male, originated from Europe, and were service members of a non-U.S. military. The only inclusion criteria for this group were that stool grade was 3 or higher and that complete testing results for all pathogens were available. Descriptive statistics were performed, as well as a comparison of single pathogen-, multiple pathogen-, and no pathogen-detected results. Analyses were performed using SAS software, version 9.4 (SAS Institute, Cary, NC). This study was independently reviewed and approved by the institutional review boards of each participating laboratory.

FIGURE 1. Global travelers' diarrhea study standardized culture testing scheme. Some laboratories may have used other agars for isolation of Salmonella, Shigella, and other enteropathogens. In addition, a Campylobacter-selective agar plate was used (not shown). BAP = 5% sheep blood agar plate; MAC = MacConkey agar; XLD = xylose lysine deoxycholate agar.

RESULTS Across all sites (Table 4), a single pathogen was detected in 43% of specimens, multiple pathogens were detected in 16% of specimens, and 41% of specimens had no pathogen detected.
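The single-, multiple-, and no-pathogen tabulation described above (computed in the study with SAS 9.4) reduces to a frequency count over per-specimen detection lists. A minimal sketch of the same bookkeeping in Python, using invented example records rather than study data:

```python
from collections import Counter

def classify(detected_pathogens):
    """Bucket a specimen by how many pathogens were detected in it."""
    n = len(detected_pathogens)
    if n == 0:
        return "no pathogen"
    return "single pathogen" if n == 1 else "multiple pathogens"

# Hypothetical per-specimen detection results; pathogen names follow the
# GTD core panel, but these records are invented for illustration.
specimens = [
    ["NoV GII"],
    ["ETEC", "EAEC"],
    [],
    ["Campylobacter"],
    [],
]

counts = Counter(classify(s) for s in specimens)
total = len(specimens)
for bucket in ("single pathogen", "multiple pathogens", "no pathogen"):
    n = counts[bucket]
    print(f"{bucket}: {n}/{total} ({100 * n / total:.0f}%)")
```

The same three-way split can then be stratified by site or traveler type, as in the study's per-country comparisons.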
The highest percentages of multiple-pathogen infections were seen in Asia, with 31% of specimens tested in Thailand and 25% of specimens tested in Nepal revealing multiple-pathogen infections. The highest percentages of no pathogen detections were seen in Latin America, with 54% of specimens in Peru and 52% of specimens in Honduras having no pathogen identified. The most frequently detected pathogen in each country was NoV (of these, GII was the most common genogroup detected), with the exception of Egypt, where ETEC was most frequently detected. Only Nepal and Kenya (SA) sites detected combination NoV GI and GII infections. Infections with Campylobacter, EPEC, and EAEC were most commonly seen in Asia (Nepal and Thailand). Across all sites, very few (n = 6) infections with Salmonella were detected. Supplementary analysis (retrospectively collected samples from Kenya, limited to pathogen data only). A total of 87 archived specimens with stool grade data meeting inclusion criteria (grades 3-5) were assessed. Enterotoxigenic E. coli (29%), followed by NoV (17%) and EAEC (15%), were the most common pathogens detected overall among archived specimens (Supplemental Table 1). Among this group, a single pathogen was detected in 39%, multiple pathogens were detected in 17%, and no pathogen was detected in 44% of the archived specimens tested. The most common multiple-pathogen combination was ETEC and EAEC (6/15, 40%); all other multiple-pathogen combinations in this data subset were observed only once or twice. DISCUSSION This study has examined TD pathogen distribution; single-, multiple-, and no pathogen detected trends; and traveler types across a number of global surveillance sites.
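The "most common multiple-pathogen combination" figures reported above (e.g., ETEC and EAEC in 6/15 multiple-pathogen specimens in the SA) amount to a frequency count over unordered co-detection sets. A hedged Python sketch, again with invented records rather than study data:

```python
from collections import Counter

# Hypothetical co-detection sets for specimens in which more than one
# pathogen was detected (invented for illustration).
multi_pathogen_specimens = [
    {"ETEC", "EAEC"},
    {"ETEC", "EAEC"},
    {"NoV GII", "Campylobacter"},
    {"ETEC", "EAEC"},
    {"EPEC", "EAEC"},
]

# frozenset makes each unordered combination hashable, so that
# {"ETEC", "EAEC"} and {"EAEC", "ETEC"} count as the same combination.
combo_counts = Counter(frozenset(s) for s in multi_pathogen_specimens)
top_combo, n = combo_counts.most_common(1)[0]
share = 100 * n / len(multi_pathogen_specimens)
print(f"most common combination: {sorted(top_combo)} "
      f"({n}/{len(multi_pathogen_specimens)}, {share:.0f}%)")
```

Treating combinations as unordered sets is what allows the study-style statement "all other combinations were observed only once or twice" to be read directly off the counter.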
Although in the past, there have been multisite studies of children implementing standardized laboratory methods and study designs, [13][14][15] to our knowledge, this is the first multisite observational TD study with standardized molecular laboratory methods examining both military and civilian adult travelers. Although many of our pathogen findings agreed with prior studies, there were notable differences. The most commonly detected pathogen in the main analysis was NoV, rather than bacterial pathogens such as ETEC or Campylobacter that have been more commonly associated with TD in previous studies, 9,16,17 although it is possible that NoV has been underreported in the past because of short clinical duration, diagnostic methods, or case definitions. 4 Although NoV has long been known to impact military personnel in operational environments because of crowded living situations and lack of development of widespread natural immunity, 18,19 it has not been considered a leading cause of TD, relative to diarrheagenic bacterial pathogens. 20 In addition, certain types of travel, such as backpacking, have been found to carry a greater risk of NoV infection than other travel types, 21 and our findings of the highest percentages of NoV infections in Thailand (44% positive) and Nepal (32% positive) support this, as 100% and 54% of travelers to Thailand and Nepal, respectively, were tourists, and both of these nations are well-known backpacking destinations. This finding highlights the importance of NoV as an etiology of TD and underscores the importance of continued vaccine development to prevent illness caused by this significant pathogen. 20 Our findings of NoV GII being the most prominent genotype agree with the findings of others. 22,23 Enterotoxigenic E. coli was the second most frequently detected pathogen overall, and Campylobacter was also commonly detected, especially in Thailand and Nepal, in agreement with prior studies.
24,25 There were few Salmonella detections (1% of specimens in the main analysis; 0-19% by region; 0% of specimens from Kenya in the SA), and whereas Salmonella was the pathogen implicated for the highest incidence of foodborne infections from 10 sites in the United States (2006-2013), 26 studies focusing on TD in civilians and deployed/overseas service members have shown Salmonella to be detected less frequently relative to other diarrheagenic pathogens. 27 Our findings of no pathogen detected in 41% of samples agree with previous studies using stool samples collected during acute illness. 4 Regional differences of pathogen recovery reflect previous work as well, 24,28 with the highest pathogen recovery found in Southeast Asia and the next highest pathogen recovery found in South Asia. For locations with high numbers of no pathogen detections, the lack of detection could indicate that some etiologies are not being tested for, such as emerging pathogens or toxins that could play a meaningful role in clinical manifestations of TD. 25 Multiple-pathogen infections were not uncommon (16%), especially among travelers enrolled in the Asian countries participating in our study. This may be attributable, in part, to differences in the traveler types enrolled in Thailand and Nepal versus other sites. Although from a different region, previous work among young Kenyan children has shown that exposure to multiple public locations increased probability of ingesting multiple pathogens. 29 Because children, like travelers, are naive to enteric pathogens and may be considered a proxy for how adult travelers could respond to pathogen exposure in high-risk areas, this finding may provide insights into the distribution of multiple-pathogen infections among GTD travelers.
In Thailand and Nepal, the most frequently enrolled travel populations were tourists and individuals categorized as "other" (the majority of whom [66%] described themselves as "volunteers"). These individuals may have been more likely to visit a larger variety of locations than other traveler categories such as military or student travelers, and such increased exposure to multiple locations might have increased their risk of multiple-pathogen infections. The pattern of multiple-pathogen infections by both region and traveler type differed from previous work. In their study, Lääveri et al. 30 tested for all bacterial pathogens that were included in the GTD panel and found that among Finnish travelers experiencing TD, 32% of travelers to Southeast Asia, 60% of travelers to South Asia, 29% of travelers to Latin America, and 52% of travelers to East Africa had multiple pathogen infections. Although our study found a similar pattern of multiple-pathogen infections in Southeast Asia, our results from other geographic locations differed. We found that 25% of participants in Nepal, 9% and 6% (in Honduras and Peru, respectively) of participants in Latin America, and 17% of samples from Kenya (SA) had multiple pathogen infections. The differences between the findings of Lääveri et al. 30 and our results might have been due to small sample sizes with stratification by region, differences in traveler type and country of origin (Finnish travelers versus any traveler from an OECD member country), or other differences in laboratory protocols. Nepal, in general, exhibited the greatest variety in pathogens detected, as it was the only site that had positive test results for all pathogens of interest. Nepal may be a riskier area, in general, for TD, as previous work has shown that travel to Nepal has a higher association with TD than other countries, both regionally and globally. 31,32 It has also been found that studies of TD in U.S. 
military populations had higher pathogen detection than those conducted in nonmilitary individuals, 28 although we did not find this in our study. Although reasons for this are not completely clear, the comparatively lower proportions of U.S. military with pathogen detections (53%) in our study might have been related to the high percentage (83%) of U.S. military who were enrolled in Honduras, the country with the highest percentage of no pathogen detections. Of note, when examining only the nine U.S. military members who were not enrolled in Honduras, 78% of these participants had at least one pathogen detected. There were differences in most frequently detected pathogens among the retrospectively tested samples from Kenya compared to the main analysis. Whereas NoV (24%), followed by ETEC (15%), were the most commonly identified pathogens in the primary analysis, ETEC (29%), followed by NoV (17%), were the most common pathogens detected overall in the SA. This is in agreement with previous work examining British soldiers in Kenya, revealing ETEC as the most frequently detected pathogen. 27 Furthermore, EAEC was detected among 15% of samples from the SA, yet only detected in 6% of samples in the main analysis. This higher percentage of EAEC found in the African region differs from Shah et al., 17 who found EAEC to be infrequently detected in Africa (3/165, 2%), but is in agreement with later findings of ETEC, EPEC, and EAEC frequently detected among Western military personnel in South Sudan, and NoV detected less frequently. 33 Despite these differences in most common pathogens detected, the distribution of single-, multiple-, and no pathogen-detected results showed similar patterns when comparing the SA with the primary analysis, although these differed from the work of Biswas et al. 33 who found that nearly 80% of those enrolled in their study had two or more pathogens detected. 
However, this group used the BioFire Film Array GI panel, which included a wider scope of pathogens than the GTD study, and this may have contributed to the higher proportion of reported coinfections. The most commonly detected multiple-pathogen infections via the Film Array GI panel included ETEC, EPEC, and EAEC, all of which were pathogens tested for in the GTD study. Even so, the high sensitivity of the BioFire Film Array is well known; in particular, the potential for false-positive ETEC detections due to cross-reactivity has been described, and therefore, it is unsurprising that many more multiple-pathogen infections were detected in participants with samples tested by the Film Array GI panel alone 34 than by the parallel molecular and culture laboratory approach of the GTD study. Our study had several limitations that should be considered. Although molecular laboratory methods were harmonized across sites, culture was not standardized and was performed at differing points in individual laboratory workflows. Even so, the impact on our results is likely negligible, as few pathogens would be expected to be detected by culture and not by standardized PCR testing. There were also differences in sample sizes and demographic composition of site participants. Most travelers in our study were enrolled in Nepal (40%) or Peru (42%), and traveler types enrolled at a given site largely depended on the accessibility of these groups to each partnering laboratory. For instance, those enrolled in Nepal tended to be tourists or "other" travelers and were seeking treatment at one of two travel clinics known to provide care for trekkers on adventure travel. Those enrolled in Peru tended to be students and were seeking care at a clinic associated with a Spanish language school in Cusco. There were differences in participant sex distribution by site, and this might have resulted in variation in care-seeking behavior.
Previous work has found that women with TD are more likely than men to seek medical care, 9,35 although the incidence of TD has been reported to be the same between women and men. 36,37 In addition, data collection was independently carried out by each participating site, and no standardized questionnaire verbiage, formatting, or training was provided to sites. This might have resulted in misclassification bias if there were differences in how sites collected metadata. Considering limitations presented by sample sizes and differing exposures by location is important in interpretation of these findings and in planning for future surveillance efforts. CONCLUSION Harmonization of methods across unique geographic locations is critical for ensuring consistent identification and characterization of enteric pathogens across the DoD laboratory network, and this ultimately results in more comparable data for global assessments, preventive measures, and treatment recommendations. Future research should examine in greater detail the role of NoV in AD and AGE affecting military or civilian travelers; the distribution of single-, multiple-, and no pathogen detected reports among TD cases; and the impact of the most common pathogen combinations on incidence and severity of TD. Exploring these trends according to traveler type may also provide relevant host factor and exposure information that can better explain trends in both multiple-pathogen detection and disease severity. Assessment and evaluation of the variety of factors potentially contributing to TD, including both host and environmental exposure factors, can inform FHP decisionmaking for military personnel traveling to high-risk areas as well as shape and prioritize future global surveillance and vaccine development activities.
Heterozygotes Are a Potential New Entity among Homozygotes and Compound Heterozygotes in Congenital Sucrase-Isomaltase Deficiency Congenital sucrase-isomaltase deficiency (CSID) is an autosomal recessive disorder of carbohydrate maldigestion and malabsorption caused by mutations in the sucrase-isomaltase (SI) gene. SI, together with maltase-glucoamylase (MGAM), belongs to the enzyme family of disaccharidases required for breakdown of α-glycosidic linkages in the small intestine. The effects of homozygote and compound heterozygote inheritance trait of SI mutations in CSID patients have been well described in former studies. Here we propose the inclusion of heterozygote mutation carriers as a new entity in CSID, possibly presenting with milder symptoms. The hypothesis is supported by recent observations of heterozygote mutation carriers among patients suffering from CSID or patients diagnosed with functional gastrointestinal disorders. Recent studies implicate significant phenotypic heterogeneity depending on the character of the mutation and call for more research regarding the correlation of genetics, function at the cellular and molecular level and clinical presentation. The increased importance of SI gene variants in irritable bowel syndrome (IBS) or other functional gastrointestinal disorders (FGIDs), and the availability of symptom-relief diets such as those low in fermentable oligo-, di-, and monosaccharides and polyols (FODMAPs), suggest that the heterozygote mutants may affect the disease development and treatment. Introduction Digestion of starch, glycogen, sucrose, maltose and other carbohydrates in the intestinal lumen is achieved by the concerted action of a family of microvillar enzymes, the disaccharidases.
The digestion of α-glycosidic linkages of carbohydrates commences by salivary and pancreatic α-amylases and is continued in the small intestine by two major mucosal α-glycosidases, sucrase-isomaltase (SI, EC 3.2.1.48 and 3.2.1.10) and maltase-glucoamylase (MGAM, EC 3.2.1.20 and 3.2.1.3) [1]. The digestive capacities of SI and MGAM cover almost the entire spectrum of carbohydrates that are linked via α-1,2, α-1,4 and α-1,6 linkages and comprise the majority of the typical diet in children and adults. SI exhibits a wide α-glucosidase activity profile and cooperates with maltase (MA) in digesting α-1,4 linkages, the major glycosidic linkages in starchy food. SI accounts in vivo for almost 80% of mucosal MA activity as well as the entire digestive capacity towards sucrose (α-d-glucopyranosyl-(1→2)-β-d-fructofuranoside; SUC) and almost all isomaltase (IM) (1,6-O-α-d-glucanohydrolase) activity [2,3]. By virtue of the complementing activities of SI and MGAM, the expression of the two enzyme complexes in the mucosa constitutes an absolute requirement for the mucosal digestion of α-d-glucose oligomers that originate from plants (Table 1) [3]. The two enzymes SI and MGAM share striking structural similarities at the protein level and their biosynthetic pathways are also similar. It is unclear, though, whether they interact with each other and whether such an interaction modulates each other's activities. In vitro studies with recombinant forms of the individual subunits of SI and MGAM have proposed a modulation of starch digestion for slow glucose release through "toggling" of activities of mucosal α-glucosidases [4]. This mechanism suggests an interaction between the two enzymes' complexes that may occur in close proximity to each other.
Reduced expression levels or complete absence of intestinal disaccharidases at the cell surface of the enterocytes is associated with carbohydrate maldigestion and malabsorption, most notably described in several cases of genetically-determined sucrase-isomaltase deficiency (CSID) [5][6][7][8][9]. Here, we review the different inheritance forms of CSID and discuss the possible onset of CSID due to heterozygote mutations as a new entity, possibly presenting with milder symptoms. Table 1. The amounts of human disaccharidases, sucrase isomaltase (SI), maltase glucoamylase (MGAM) and lactase-phlorizin hydrolase (LPH) in intestinal brush border membrane (BBM) preparation is presented as percentage of total BBM proteins. The activities of the disaccharidases in their natural milieu (BBM) or in immunoprecipitates (Immunopr.) were determined using their respective substrate(s). Adapted from [10]. Molecular and Cellular Basis of Genetically-Determined Carbohydrate Malabsorption The main symptom in CSID is osmotic diarrhea, often acidic, since disaccharides can cause an osmotic force, which drives water into the gut lumen [11]. Bloating, stomach pain and gas are further symptoms of CSID. While unequivocal data on the existence of genetically-determined carbohydrate malabsorption due to MGAM do not exist, the maltase and glucoamylase activities of this enzyme complex are substantially reduced in many cases of CSID. One possible explanation for this reduction is that SI contributes to about 60-80% of the total maltose digesting capacity in the intestine. CSID is elicited by single-nucleotide polymorphisms in the coding region of the SI gene. These mutations are distributed over both domains of SI [9]. 
Biochemical, cellular and functional analyses of SI mutants established the concept of phenotypic heterogeneity and classified the SI mutants into groups that vary in their intracellular localization, cell surface localization (apical/basolateral), proteolytic processing and function [9] (Table 2). Some of the SI mutants are blocked in the endoplasmic reticulum (ER) or the ER-Golgi intermediate compartment (ERGIC) and cis-Golgi [8,12] or are normally trafficked along the secretory pathway, but missorted to the basolateral membrane [13,14]. Other mutants undergo aberrant intracellular cleavage [14,15] or are characterized by an increased turnover rate (Figure 1) [9]. Figure 1. Categorization of the SI mutants into three major biosynthetic protein phenotypes. WT like: the mutants are trafficked along the secretory pathway and mature in a fashion similar to the WT-SI; it is not clear, however, whether an efficient polarized sorting of the mutants to the apical membrane is maintained.
Partially trafficked: the mutants are trafficked at a reduced rate between the ER and the Golgi and ultimately to the cell surface. ER block: the mutants are entirely located in the ER. WT: wild type, SI: sucrase-isomaltase, BBM: brush border membrane, ER: endoplasmic reticulum. Homozygous and Compound Heterozygous Inheritance in CSID A decisive factor in the occurrence and severity of CSID is the inheritance form and whether both alleles of the gene are affected by mutations. The first identified mutation in CSID, Q1089P, is in the sucrase domain of SI and elicits retention of SI in the ERGIC and cis-Golgi compartments [19]. Its inheritance is homozygous and is associated with severe symptoms due to a complete absence of sucrase and isomaltase activities and substantial reduction of the maltase activity [17]. Similar severe effects are also elicited by other homozygous mutations, Q117R, L340P, L620P, C635R, that were identified in intestinal biopsy specimens from CSID patients [12,13,17]. Genetic testing of blood samples from a cohort of patients with diagnosed CSID revealed mutations that were identified as compound heterozygotes, for example G1073D, V577G and F1745C [6,7,16]. These mutations together with R1124X belong to the most common mutations in CSID with an estimated frequency of 83% in European descendants. The severity of these mutations stems from the fact that they generate an SI protein phenotype that is intracellularly blocked in the ER [6,7,16]. More recently, several new mutations in the SI gene have been tested by genotyping in a panel of patients suffering from irritable bowel syndrome (IBS) symptoms [20][21][22]. Some of the last cited mutations were already found in CSID patients; these findings unraveled a remarkable heterogeneity in the pathogenesis of CSID revealing the unique etiologies of this multifaceted intestinal malabsorption disorder [6,16]. 
Heterozygotes in CSID While homozygote and compound heterozygote inheritance traits are well documented in diagnosed CSID patients, there are also reports of CSID patients with heterozygous genotypes. In addition, recent studies have found an association of CSID-associated heterozygous genotypes with an increased risk for functional gastrointestinal disorders. In these studies, several mutations in the SI gene have been identified, such as R774G, C1229Y and G1073D [6,7,16,18]. In theory, heterozygotes should express an SI molecule that is virtually 50% active or transport-competent if one allele is normal. This hypothesis implies that disaccharides can be metabolized to an extent that does not necessarily elicit malabsorption symptoms. Nonetheless, enzymatic levels of sucrase and maltase in some reported heterozygote cases were apparently low enough to induce symptoms of carbohydrate malabsorption [6,23]. The existence of one mutated allele in CSID suggests that the pathogenesis of CSID depends not only on the biosynthetic, trafficking and functional features of the individual mutants per se, but also on the degree of potential regulatory influence of these mutants on wild type SI. Several observations can be provided to explain the potential effect of heterozygote mutations on wild type SI. The Quaternary Structure of SI Recent observations have shown that wild type SI dimerizes along the secretory pathway [16]. The interaction between SI monomers likely occurs via the transmembrane domain, as has been shown for several proteins of the medial Golgi in a fashion referred to as kin recognition [24]. Wild type SI protein could be retained intracellularly along the secretory pathway via its direct interaction with an SI mutant that would exhibit a dominant structural hierarchy over the correctly folded wild type [16]. An intact transmembrane domain of a mutant SI would be enough to elicit this dimerization [25].
Most of the mutations in CSID that have been identified to date are in the luminal domains of SI, which implies that the transmembrane domain may remain intact and capable of interacting with wild type SI. This theory is supported by the fact that mutations that lead to truncated forms of SI, such as R1124X and E821X, which lack the transmembrane domain and the theoretical ability to interact with the isoforms of wild-type SI, have not been so far reported in a heterozygote background [16]. The SI Biosynthetic Phenotypes in Intestinal Biopsy Specimens from CSID Heterozygotes In two cases of CSID, intestinal biopsy specimens were assessed for biosynthesis, glycosylation and processing of SI in an in vitro heterozygote background with the C1229Y or T694P mutation [6]. The resulting biosynthetic phenotypes of SI in the intestinal tissue that contained the normal and the mutated alleles resembled that of the individually expressed SI mutant in a transfected cell model. SI in an intestinal biopsy specimen expressing only the heterozygote C1229Y mutation is partially trafficked between the ER and the Golgi at a 20-25% maturation rate [6]. A similar biosynthetic pattern was also obtained when an SI-C1229Y mutant was expressed in COS-1 cells [7]. Similarly, in biopsy specimens harboring the heterozygous T694P mutation a mannose-rich ER-located protein phenotype was the prevailing form of SI [6] that was also revealed upon individual expression of the mutant SI-T694P in COS-1 cells (unpublished data). Together these studies strongly propose that SI mutants significantly affect the wild type SI protein via a protein-protein mode of interaction along an early secretory pathway. Such an interaction is possible in cases when the transmembrane domain of SI or its mutants is intact, thus conforming to the kin recognition model shown for type II membrane glycoproteins.
The Mosaic Structure of SI in the Enterocytes Another potential explanation for the symptomatic SI heterozygote subjects is the mosaic or heterogeneous expression pattern of many disaccharidases in the enterocytes, including SI [26,27]. This is perhaps why the normal activity levels of brush border disaccharidases in intestinal biopsy specimens routinely used in gastrointestinal diagnostics vary substantially among individuals, with the highest normal levels being more than 3.5-fold that of the lowest normal levels [28]. Regardless of any potential genetic alterations in the SI gene that may lead to abnormalities in its structure or function, the gene expression of SI can be downregulated in different regions of the intestinal epithelium, which may ultimately be associated with reduced carbohydrate digestion capacity of the intestine [26,27]. Thus, individuals with a priori reduced expression of wild type SI will be more susceptible to developing gastrointestinal symptoms when SI mutants occur in a heterozygote background. Future Perspectives Several studies provide evidence for a multi-factorial etiology of carbohydrate malabsorption disorders, including psychological and physiological factors [29] besides genetic predisposition. The progress made in the last two decades in the genetics of CSID as well as in unraveling basic molecular mechanisms underlying the pathophysiology of CSID has revised initial concepts on the inheritance trait and severity of the disease. The scientific value gained from this knowledge in the etiology of CSID has resulted in better awareness of the disease as well as the development of more reliable diagnostic tools. While most patients with extremely severe CSID are homozygotes or compound heterozygotes with pathogenic mutations that elicit localization of SI in the ER, milder forms of CSID can be triggered by combinations of less pathogenic mutations (i.e., partially trafficked SI mutants or heterozygotes).
Two immediate questions should be addressed that would contribute to a better understanding of the milder forms of CSID.

What Is the Contribution of the Partner Glycosidase, MGAM, to the Overall Capacity to Digest Starch and Other Carbohydrates in CSID?

At present, there is no firm evidence that has precisely assessed the activities of SI or MGAM alone and in combination, or compared the levels of activity of one enzyme to the other. These studies are essential for understanding the pathophysiology of CSID, whether the background is heterozygote, compound heterozygote or homozygote. A toggling effect has been described for substrate hydrolysis by recombinant maltase and sucrase in transient transfection systems. Studies using mutants of SI in co-expression systems with MGAM could help explain a potential compensatory role of MGAM in carbohydrate digestion in milder forms of CSID.

Do Mutations in Heterozygotes Elicit CSID-Like Symptoms?

The major question that requires detailed studies at the cellular and molecular levels is whether an interaction between wild-type SI and an SI mutant yields an SI protein phenotype that can be considered biochemically pathogenic. Given the interaction between SI molecules along the early secretory pathway, it is reasonable to expect this type of interaction when the transmembrane domains of the SI mutants are intact. Here, protein trafficking and activity profiles of SI mutants in a heterozygote background (i.e., wild-type SI plus mutant SI) versus wild-type SI alone can be compared to determine the effect of heterozygous mutations on the function and activity of SI. In parallel studies, the effects of MGAM in this heterozygous experimental model can also be examined to mimic the in vivo situation and determine whether the attenuated carbohydrate digestion can be restored by MGAM.
While all that has preceded suggests that a heterozygote entity in CSID may indeed exist, the validity of this concept requires that the symptoms are not due to other mutations, for example in the regulatory non-coding regions of the SI gene or in other genes; that the disaccharidase activity is compromised in vitro and/or ex vivo (in biopsy specimens); and, finally, that the patients feel better upon dietary or recombinant enzyme therapy.

Further Clinical and Nutritional Implications of SI Gene Variants

It has been recognized that irritable bowel syndrome (IBS) affects about 10-13% of adults [30,31]. There is evidence that the symptoms of a major part of these patients improve under a gluten-free diet as well as a diet low in fermentable oligo-, di- and mono-saccharides and polyols (FODMAPs) [32,33], noting that FODMAPs include fructans from wheat, rye, barley and oats. The complexity of IBS brings along a subgroup that does not respond to a low-FODMAP diet but expresses SI gene variants with impaired function and may respond to a more causal therapy [18,21]. Dissecting the pathogenesis of IBS is
A collaborative network trial to evaluate the effectiveness of implementation strategies to maximize adoption of a school-based healthy lunchbox program: a study protocol Introduction An important impediment to the large-scale adoption of evidence-based school nutrition interventions is the lack of evidence on effective strategies to implement them. This paper describes the protocol for a "Collaborative Network Trial" to support the simultaneous testing of different strategies undertaken by New South Wales Local Health Districts to facilitate the adoption of an effective school-based healthy lunchbox program ('SWAP IT'). The primary objective of this study is to assess the effectiveness of different implementation strategies to increase school adoption of SWAP IT across New South Wales Local Health Districts. Methods Within a Master Protocol framework, a collaborative network trial will be undertaken. Independent randomized controlled trials to test implementation strategies to increase school adoption of SWAP IT within primary schools in 10 different New South Wales Local Health Districts will occur. Schools will be randomly allocated to either the intervention or control condition. Schools allocated to the intervention group will receive a combination of implementation strategies. Across the 10 participating Local Health Districts, six broad strategies were developed, and combinations of these strategies will be executed over a 6 month period. In six districts an active comparison group (containing one or more implementation strategies) was selected. The primary outcome of the trial will be adoption of SWAP IT, assessed via electronic registration records captured automatically following online school registration to the program. The primary outcome will be assessed using logistic regression analyses for each trial.
Individual participant data component network meta-analysis, under a Bayesian framework, will be used to explore strategy-covariate interactions; to model additive main effects (separate effects for each component of an implementation strategy), two-way interactions (synergistic/antagonistic effects of components), and full interactions. Discussion The study will provide rigorous evidence of the effects of a variety of implementation strategies, employed in different contexts, on the adoption of a school-based healthy lunchbox program at scale. Importantly, it will also provide evidence as to whether health service-centered, collaborative research models can rapidly generate new knowledge and yield health service improvements. Clinical trial registration This trial is registered prospectively with the Australian New Zealand Clinical Trials Registry (ACTRN12623000558628).
Introduction

Dietary risk factors are a leading cause of preventable death and disability (1). Reducing dietary risks is recommended to improve child health and mitigate future burdens of chronic disease (2). In Australia, for example, 96% of children do not consume sufficient serves of vegetables, while discretionary foods (i.e., foods high in added sugar, saturated fat and sodium) account for over one-third of children's daily energy intake (3). Schools provide universal access to children aged over 5 years, and are a setting recommended for nutrition interventions in chronic disease prevention internationally (4)(5)(6). In countries such as Australia, food brought to school from home, packed in school 'lunchboxes', is used daily by 90% of students (7) and contributes up to 30-50% of a child's daily energy intake (7). As approximately 40% of foods in lunchboxes are discretionary (8), improving the packing of healthy foods for child consumption at school provides a considerable opportunity for chronic disease prevention.
Systematic reviews suggest that school-based healthy lunchbox interventions can improve student nutritional intake (9). In Australia, a series of randomized controlled trials of a healthy lunchbox program known as 'SWAP IT' were recently conducted in 34 primary schools with 4,600 children (10,11). The program supports parents and carers to make simple 'swaps' aligned to dietary guidelines (12), replacing discretionary food and beverage items with comparable core (nutrient-dense) items. It comprises three broad program components: (i) school food (lunchbox) guidelines; (ii) messages and hard copy resources for parents and carers; and (iii) curricula resources for teachers. Across these randomized trials, the program was found to significantly improve child diet quality, energy intake and weight status, and was acceptable to both parents and teachers (10,11). A subsequent comparative effectiveness randomized trial found no difference in student dietary outcomes between the messages and parent booklets combined, and those two components plus school-based curriculum and policy resources.
Given the reported benefits of SWAP IT on child health (10,11), broad implementation in schools has the potential to make a significant contribution to improving public health nutrition. An important impediment to the large-scale adoption of effective school nutrition initiatives, however, is a lack of published evidence of effective strategies to implement them (13). A recent Cochrane review of implementation strategies for school-based health promotion programs identified few randomized controlled trials of strategies to implement policies and practices promoting healthy eating, particularly 'at scale' (defined by the authors as 50 or more schools) (13). Furthermore, strategies identified as effective in improving implementation in one jurisdiction (e.g., Local Health District) may not be effective, appropriate or feasible for application in another. Similarly, differing capacities (e.g., resources or infrastructure) of agencies responsible for undertaking or supporting program implementation may mean that an implementation strategy that is effective in one jurisdiction is not feasible to execute in another. Such issues must be addressed if effective interventions are to be adopted at a population level (at scale).
As in clinical services, systematic reviews and best practice guidelines identify evidence-based programs and practices that can be employed in community settings to reduce child dietary risks. As such, within devolved health systems such as Australia's, different health services will often seek to address the same disease risk or health condition, using the same intervention (e.g., guideline-concordant care), at the same time (14). These services, however, operate in different contexts, with different capacities and resource constraints. As a result, there is often natural heterogeneity in the strategies that health services employ to support the implementation of programs in schools and other clinical and community settings to improve dietary (and other) outcomes. This convergence of objective (to implement a similar intervention), but heterogeneity in context and in the strategies used to implement school-based programs, presents an attractive opportunity to learn about the types of implementation strategies that may be effective in different contexts. Specifically, the coordinated evaluation of implementation efforts across a network of health services, and the establishment of processes to share and learn from the findings, may provide a mechanism for rapid evidence generation and health system improvement 'at scale'. Such collaborative and data-driven models of working are also consistent with recommendations for the development of 'learning health system' approaches to healthcare improvement (15). Broadly, Master Protocols represent an approach that could be used to facilitate coordinated and collaborative research, learning and improvement (16). Master Protocols refer to designs employing coordinated approaches to assess the effects of interventions within a unifying overall trial structure (16). This infrastructure, including a centralized trial protocol and governance, facilitates the standardization of study processes and procedures, including recruitment, evaluation and
data collection, analysis, and reporting (17). Although frequently used to test pharmacological interventions (18-20), this type of trial design is not broadly used within community-based interventions and, to our knowledge, has not previously been used to assess the effectiveness of strategies for school implementation of health promotion programs. Employing this type of trial design would be a novel departure from how health promotion programs, and strategies to support their implementation, are conventionally tested. Currently, few trials test the effectiveness of strategies to improve the implementation of such programs (21), and those that do often employ different research designs and measures. This impedes cross-study synthesis, and also fails to address the issue of context, with strategies that effectively improve implementation in one context potentially unsuitable or ineffective in another (22). Comparatively, Master Protocol designs allow for the examination of multiple hypotheses (23), such as the effects of a variety of implementation or scale-up strategies on school implementation of health promotion programs, or differences in effectiveness for different population groups.
Following demonstration of the effectiveness and acceptability of the SWAP IT program (10,11), three Local Health Districts (LHDs) from across New South Wales (NSW), Australia, expressed interest in supporting the implementation of this program in their LHD. In this context, and drawing on the research design principles of Master Protocols and prospective meta-analysis methodology (24), a pilot collaboration was formed that networked three LHDs and the University of Newcastle (National Centre of Implementation Science) (25) to undertake a harmonized evaluation of the strategies used within each LHD to support the adoption of the SWAP IT program, and to share learning from these evaluations across participating LHDs (26). The collaboration was supported by shared implementation strategy development processes, governance structures, centralized data collection infrastructure, and a community of practice (26). While collaboration across, and flexibility within, LHDs for the implementation of various health promotion programs has occurred routinely among NSW LHDs over time, a formal evaluation of such a collaborative approach had not been undertaken. The pilot found the collaborative model was highly acceptable to all parties (26), and the strategies employed yielded significant, but contextually dependent, improvements in program adoption. Based on these encouraging findings, the collaborative approach is now being employed across 10 of the 15 LHDs (67%) in NSW. This paper describes the protocol for what we term a "Collaborative Network Trial" to support the simultaneous testing of different implementation strategies undertaken by 10 LHDs in NSW, Australia to facilitate the adoption of the SWAP IT program at scale.
Objectives

The primary objective of this study is to assess, using individual level participant (in this case 'school') data (IPD), the effectiveness of different implementation strategies employed by 10 NSW LHDs to increase school adoption of the SWAP IT program. Secondary objectives of the study are to: (1) explore the effects of different implementation strategy components and contextual factors on the school-level adoption of SWAP IT using pooled individual level data across all trials; (2) assess the acceptability of the implementation strategies to school principals; and (3) assess the sustainability of SWAP IT within schools that adopted the program at 18 months.

Context

LHDs are NSW Government-funded health services responsible for providing or supporting the provision of health promotion services to address the leading risk factors for chronic disease in their community. The NSW Ministry of Health provides funding to LHDs to support the implementation of state-wide health promotion programs (27). These health promotion programs are often developed by employing a multi-sectoral approach, involving health (e.g., LHD health promotion practitioners), policy (e.g., NSW Ministry of Health) and education stakeholders (e.g., Department of Education) to maximize the alignment of the programs with the priorities of the school sector, such as fit of the program with the school curriculum and student wellbeing policies. All NSW LHDs have received funding to facilitate the implementation of healthy eating and physical activity policies and practices in NSW primary schools for over a decade as part of the NSW Healthy Children's Initiative (27). This involves LHD health promotion staff engaging with all schools in their region to deliver training, education and other health promotion activities to support schools to implement healthy eating and physical activity policies and practices. Although healthy lunchboxes have historically been a focus for health promotion
activities in some LHDs and non-government organisations (e.g., Cancer Council NSW), the funding provided by the NSW Ministry of Health did not explicitly focus on a formal school-based program to support the packing of healthy lunchboxes. In addition, while a core component of health promotion practice, Health Promotion Unit capability to undertake research and evaluation of health promotion activity has been found to vary across LHDs (28).

Ethics and trial registration

The research will be conducted and reported in accordance with the requirements of the Consolidated Standards of Reporting Trials (CONSORT) Statement (29). Ethics approval has been obtained via the following Human Research Ethics Committees: Hunter New England (2019/ETH12353); University of Newcastle (09/07/26/4.04); NSW Department of Education (2018247); and the Maitland-Newcastle, Sydney, Wollongong, Bathurst, Parramatta, Wagga Wagga and Canberra-Goulburn Catholic Dioceses. This trial is also registered prospectively with the Australian New Zealand Clinical Trials Registry (ACTRN12623000558628). The protocol is reported according to the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) (Supplementary Files 1, 2) (30).
Study design and setting

Within a Master Protocol framework (16), we will undertake a Collaborative Network Trial. Specifically, independent randomized controlled trials to test strategies to implement or improve health care, occurring at different sites (LHDs), will be undertaken by the Health Promotion Units at each LHD. The key trial methods, measures and data collection processes will be harmonized, with agreement across sites to provide individual school-level data for planned pooled analyses as part of a collaboration, following a prospective meta-analysis framework (24). The design allows for heterogeneity or natural variation in the implementation strategies being tested and the contexts (i.e., sites) they are tested in (16). The study builds on a pilot network trial to implement the scale-up of the SWAP IT program in three LHDs (13).

Sample and participants

The study will be conducted with primary and combined schools located across 10 LHDs in NSW, Australia. The state of NSW is socioeconomically and geographically diverse (31). Department of Education (DoE), Catholic Schools NSW and Association of Independent Schools of NSW primary and combined schools located within the LHDs of Murrumbidgee, Hunter New England, Sydney, Western Sydney, South Western Sydney, South Eastern Sydney, Northern Sydney, Western NSW, Nepean Blue Mountains, and Illawarra Shoalhaven will be included in the study. These LHDs have partnered with the research team to participate in a separate trial occurring concurrently with other primary and combined schools in their region (ACTRN12623000145606). As such, LHD staff are well engaged in the research, and the infrastructure and resources to support the research (e.g., regular meetings with research sites/LHDs, data collection systems and staff) are in place.
A list of potentially eligible schools located within NSW will be sourced from a publicly accessible database (n = 3,183) (32). The research team will apply the following criteria prior to commencing the study to identify schools that are eligible for inclusion. Primary and combined schools located within the participating LHDs that cater for at least one primary school year and have not implemented the SWAP IT program will be eligible to participate. Only schools that do not use the Audiri parent communication app will be eligible, as these schools are participating in another trial being conducted concurrently by the research team (ACTRN12623000145606). The following schools will be excluded from the study sample: schools with special purposes (e.g., schools catering exclusively for children requiring specialist care, hospital schools, distance education schools and environmental education centers) (n = 206); schools with secondary students only (n = 544); schools identified as early learning centers (n = 6); schools located outside of the partnering LHDs (n = 483); schools that have already implemented SWAP IT (n = 208); and schools that have previously participated or are currently participating in separate SWAP IT trials (n = 394). The total sample of eligible schools is 1,342.
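The eligibility figures above can be checked with simple arithmetic; all counts below are taken directly from the text (the category labels are shortened for readability):

```python
# Verify that the reported exclusions reduce the 3,183 NSW schools
# in the source database to the stated 1,342 eligible schools.
total_nsw_schools = 3183

exclusions = {
    "special purpose schools": 206,
    "secondary students only": 544,
    "early learning centers": 6,
    "outside partnering LHDs": 483,
    "already implemented SWAP IT": 208,
    "prior/concurrent SWAP IT trials": 394,
}

eligible = total_nsw_schools - sum(exclusions.values())
print(eligible)  # → 1342
```

The exclusion categories sum to 1,841 schools, consistent with the stated eligible sample.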
All schools that meet the eligibility criteria outlined above will be included in the study as part of usual service delivery provided by LHD health promotion staff to support schools to implement a range of healthy eating and physical activity policies and practices. Eligible schools will be invited to participate in the secondary data collection component of the study, specifically the follow-up survey conducted with school principals (described below). Schools will be recruited for the follow-up data collection via an invitation email containing a link to an online survey and a study information statement outlining the purpose of the research and their involvement. Schools that are yet to complete the survey will receive up to three reminder prompts via telephone or email by the research team to encourage completion. Recruitment for the data collection component commenced in November 2023 and concluded in December 2023.

Randomization and blinding

Prior to the delivery of the first scale-up strategy, schools within each LHD will be randomly allocated to either the intervention or control condition using a computerized random number function in a 1:1 (intervention:control) ratio. Randomization will be stratified by school size and socio-economic location, as determined by Socio-Economic Indexes for Areas categorization using school postcodes (33), given the socio-economic association with implementation of school nutrition programs (34). Randomization will be completed by a statistician not otherwise involved in the trial. Due to the nature of the intervention, participants will not be blinded to group allocation. However, research staff assessing the outcomes at follow-up will be blinded.
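A minimal sketch of stratified 1:1 allocation of this kind is shown below. The stratum labels, block size of 2, and use of Python's `random` module are illustrative assumptions; the trial statistician's actual procedure may differ:

```python
import random
from collections import defaultdict

def stratified_allocation(schools, seed=2023):
    """Allocate schools 1:1 to intervention/control within strata
    defined by school size band and SEIFA (socio-economic) band."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for school in schools:
        strata[(school["size_band"], school["seifa_band"])].append(school)
    allocation = {}
    for members in strata.values():
        rng.shuffle(members)
        # Alternate assignment after shuffling: permuted blocks of 2
        # keep the intervention:control ratio at 1:1 within each stratum.
        for i, school in enumerate(members):
            allocation[school["name"]] = "intervention" if i % 2 == 0 else "control"
    return allocation

# Hypothetical example: 8 schools across 2 size bands x 2 SEIFA bands
schools = [
    {"name": f"school_{i}", "size_band": i % 2, "seifa_band": (i // 2) % 2}
    for i in range(8)
]
groups = stratified_allocation(schools)
```

With two schools per stratum in this toy example, each stratum contributes exactly one intervention and one control school, giving a perfectly balanced 4:4 split overall.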
Implementation strategies

A series of implementation strategies were developed with the aim of maximizing school adoption of the SWAP IT program in eligible schools that have not yet adopted the program. These implementation strategies were developed for each of the LHDs independently, based on their existing capacities and local contexts. Implementation strategies for each participating LHD ('site') were co-designed by LHD health promotion staff and other stakeholders, with support provided by National Centre of Implementation Science (NCOIS) implementation scientists and SWAP IT developers from the University of Newcastle. The development process included: (i) planning workshops facilitated by University staff that drew on the tacit knowledge and experience of health promotion staff with considerable experience working with schools; (ii) evidence regarding barriers to school adoption and implementation of SWAP IT collected by the research team as part of previous SWAP IT trials; and (iii) data from systematic reviews and pilot trials regarding the effectiveness of strategies to facilitate adoption (32). During the workshops, theoretical framework tools were used to facilitate the selection of strategies that address barriers and are aligned to individual LHD capacity and contexts (35-37). Processes may have also been undertaken by LHDs to identify strategies to support access and engagement of priority populations within their region, to ensure that school adoption and implementation of SWAP IT does not further exacerbate health inequities. This may have included consultation and engagement processes with Aboriginal, or Culturally and Linguistically Diverse, individuals, groups or stakeholders.
Across the 10 participating LHDs, six broad implementation strategies to maximize school adoption of SWAP IT emerged. The combination of these six strategies employed by each LHD will differ, and the strategies will be executed over a period of 6 months. Once a school adopts SWAP IT, it will not receive any subsequent implementation strategies, and will select the school term in which it would prefer to receive the program. The SWAP IT messages are delivered weekly to parents and carers via usual school-parent communication channels for one school term (one message per week), followed by two messages per term on an ongoing basis. A school must adopt SWAP IT in order to receive the program. The implementation strategies executed by each LHD are described below and in Table 1, with the timeline for the delivery of the implementation strategies outlined in Table 2.

Sector support and endorsement

Policy makers from Health will target principals to communicate, support and endorse the program, its outcomes and its alignment to sector policies, and to recommend its adoption. This endorsement will occur via a maximum of two targeted letters or emails developed by the research team and approved and endorsed by local and state-level Health partners. The letters or emails will also contain a link to resources and the enrolment website. As an additional strategy, some LHDs (outlined in Table 1) will use their existing connections to obtain endorsement for the program from local educational and wellbeing liaisons within the NSW Department of Education. This endorsement will be promoted to schools via an email distributed by the liaisons directly to schools receiving this strategy.
Local facilitation

Health promotion staff from LHDs have developed strong and trusted local relationships with schools over more than a decade and represent credible sources of local nutrition expertise. LHD health promotion staff will use up to two of their existing planned school contacts, conducted via telephone call or face-to-face meeting, to assess interest in the SWAP IT program, address any school-specific barriers to adoption, and facilitate goal setting and action planning. Scripts developed by the research team to guide the local facilitation will incorporate motivational interviewing techniques to be employed by health promotion staff to address school barriers to program adoption.

Develop and distribute educational materials

Targeted at principals to address perceived barriers to adoption, this strategy will initially aim to create tension for change (e.g., by outlining parent and carer interest and expectations), and then communicate the attractive program attributes (e.g., simplicity, no cost). This communication will consist of up to two contacts, including a printed information pack (consisting of a flyer, SWAP IT pen and example parent booklet) at the commencement of the intervention period, followed by an email to promote the program. As an additional strategy, one LHD will offer printed parent booklets promoting the SWAP IT program to all parents and carers with children commencing the following school year, within their school kindergarten orientation packs, along with a flyer encouraging the school principal or wellbeing coordinator to adopt the program.
Local opinion leaders

Promotional materials, including one printed information pack (consisting of a flyer and example SWAP IT parent booklet) and one email, will be delivered to other leaders that may be influential in a school's decision to adopt health promotion programs, specifically the school administration manager and parent committee. The aim of these materials is to promote the SWAP IT program and encourage school adoption.

Audit and feedback

Data and feedback on school adoption of SWAP IT will be automatically captured through electronic registration records and provided to schools via other implementation strategies, including educational materials, local facilitation and local opinion leaders. For example, educational materials provided to principals, school administration managers and parent committees will include information on the number of schools that have registered for SWAP IT, a link to view an online list of schools that have already adopted the program (to create tension for change and social norms), and instruction on how the school can also register for the program.

Educational meeting

Health promotion staff from LHDs will conduct one webinar with schools within their LHD to assess interest in the SWAP IT program and address any barriers to adoption. Webinar content will be developed by the research team in collaboration with health promotion staff.
Control group and contamination

Registration for the SWAP IT program is publicly available and freely accessible for all schools, including schools allocated to the control group. The implementation strategies to be delivered to the control group across LHDs are described in Table 1. For most schools allocated to the control group, the comparison will be 'no implementation support' or a single strategy. Execution of the implementation strategies will be monitored centrally by the research team, in consultation with health promotion staff from each LHD, to minimize the risk of contamination. Nonetheless, school exposure to the implementation strategies will be assessed at follow-up via an online or telephone survey with school principals (described below).

Study outcomes and data collection

Trial outcomes were discussed and agreed upon by participating LHDs (Table 3). Data collection for all trial outcomes was harmonized across all LHDs and will be conducted centrally by the research team at the University of Newcastle. The centralisation of data collection represents an efficient means of collecting and managing data for all participating LHDs. All demographic, operational and trial outcome measures are harmonized (i.e., identical item, measure and data collection method) to facilitate comparability and analysis. Each participating LHD will retain access to its trial dataset.
Primary outcome

Adoption of the SWAP IT program, defined as the number of schools that register for the lunchbox nutrition program (SWAP IT), will be assessed within schools allocated to the intervention and control groups via electronic registration records captured automatically following school registration to SWAP IT. No additional data collection is required to assess the primary outcome. As part of the registration process, schools provide consent for the de-identified registration data to be used for research and evaluation purposes. This outcome will be assessed at baseline and approximately 9 months after baseline data collection.

Secondary outcomes

Acceptability of the implementation strategies, defined as the perception among principals that the implementation strategies are agreeable, palatable or satisfactory, will be assessed in a telephone or online survey with school principals at 9-month follow-up. School principals will be asked if they recall receiving each of the implementation strategies during the intervention period. For strategies the participants recall receiving, they will be asked to rate how acceptable they found the strategy on a 5-point Likert scale (1 = not acceptable; 5 = very acceptable) (38). Principals from 243 Catholic and Independent primary schools located across five LHDs (LHD 1; LHD 5; LHD 7; LHD 8; LHD 9) will be invited to participate in the survey. These LHDs have been selected as they are employing diverse combinations of the implementation strategies (Table 1). Including schools from these LHDs in the survey will ensure that the acceptability of all employed strategies (across the 10 LHDs) is assessed, and that the data collection remains feasible within the study timeline. Implementation of the SWAP IT program, defined as the extent to which the SWAP IT program components were delivered by the school to parents, will be assessed in the telephone or online survey (described above) with a sub-sample of 243 school
principals at 9-month follow-up. Schools will be asked to report whether they implemented the SWAP IT program at their school, and which program components were implemented (i.e., parent messages; school lunchbox guidelines; curriculum resources; parent and carer resources).

Sustainability of the SWAP IT program, defined as continued school use of the lunchbox nutrition program (SWAP IT) at 18 months after baseline data collection, will be assessed via electronic registration records captured automatically following school registration for SWAP IT.

School characteristics, including postcode, total student enrolments, geographic location (urban, regional, rural and remote), proportion of Aboriginal student enrolments, and proportion of students who speak a language other than English at home, were obtained from the publicly accessible Australian Curriculum, Assessment and Reporting Authority (ACARA) database (39).

Sample size and data analysis
We anticipate a sample of at least 30 schools per group (and an average of 60 per group) in the trials of each of the 10 participating LHDs. Descriptive statistics, including proportions, means and standard deviations, will be used to describe school characteristics; adoption, implementation and sustainability of SWAP IT; and the acceptability of the implementation strategies.

Analyses of trial outcomes will be undertaken under an intention-to-treat framework separately for each trial. For assessment of school-level program adoption, the primary trial outcome, between-group differences will be assessed using logistic regression. The model will include a term for treatment group (intervention vs.
control) and pre-specified covariates prognostic of the outcome. Little, if any, missing primary outcome data is anticipated at follow-up, as program adoption is recorded automatically for all participating schools. Nonetheless, we will employ multiple imputation for any missing data in the event that schools withdraw from the study and request that their data not be used. All statistical tests will be two-tailed with an alpha of 0.05. Assuming adoption of the program by 10% of the comparison group, a sample size of approximately 30 schools per group will be sufficient to detect an absolute difference between groups of 30%, with 80% power and an alpha of 0.05.

We will employ individual participant data (IPD) component network meta-analysis to compare and rank the effects of all the tested strategies on the primary trial outcome (40). For this analysis we will also include the three randomized controlled trials from the pilot (26), expanding the network and providing pooled individual-level data from 13 randomized controlled trials. We will explore combining 'educational meetings and educational materials' into a single component for analysis, given their shared underlying behavioral targets. We will adjust for prognostic factors and explore strategy-covariate interactions to identify whether and to what extent effects vary by participant, population or other contextual factors (effect modifiers) (40). We will also employ component network meta-analyses to model additive main effects (separate effects for each element or component of an implementation strategy), two-way interactions (synergistic/antagonistic effects of components), and full interactions (different effects for each combination of components). The analyses will be performed under a Bayesian framework. There are no established methods for sample size calculations for component network meta-analysis.
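The stated sample size can be reproduced with a standard two-proportion power calculation. A minimal sketch using only the Python standard library, assuming Cohen's arcsine (h) approximation (the protocol does not state which formula was used); the 10% and 40% adoption rates come from the text:

```python
from math import asin, sqrt, ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Schools needed per arm to detect p1 vs. p2 in a two-sided test."""
    h = abs(2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2)))  # Cohen's h
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(2 * z ** 2 / h ** 2)

# 10% adoption in the comparison group vs. a 30-point absolute increase (40%)
print(n_per_group(0.10, 0.40))  # 30 schools per group, as stated
```

The pooled-variance normal approximation gives a very similar answer (about 29 per group), so the "approximately 30 schools per group" figure is robust to the choice of approximation.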
For the secondary outcomes assessed via an online or telephone survey, data screening strategies were employed during survey development to minimize incomplete or inaccurate responses. These strategies included the use of mandatory fields (i.e., participants were unable to leave a survey item blank, but could select 'prefer not to say'), minimizing the inclusion of open responses, and reducing survey length. Best-practice recommendations for data screening will also be employed following data collection, including visually inspecting the data to identify data entry errors or implausible values for each variable, and calculating distributional characteristics of items to assist in identifying outliers or extreme values (41).

Trial governance
The trial will be overseen by a Steering Group comprising representatives from each LHD, including Aboriginal Health Promotion Managers, and program developers, implementation scientists, trialists and research dietitians from the University of Newcastle. Roles and responsibilities will be documented in a Terms of Reference for the Group. LHDs will be responsible for the selection of implementation strategies for their jurisdiction, and for the execution of some of the strategies in schools. The University of Newcastle will be responsible for facilitating trial workshops, ethics, data collection, monitoring and quality assurance, and data management and analysis. A Community of Practice, established in the pilot (26), will also be employed to support the interpretation of trial results and pooled analyses, exchange tacit knowledge and experience, and identify opportunities for improvement.
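The post-collection screening step above (using distributional characteristics of items to flag outliers or extreme values) can be sketched with a simple interquartile-range rule. The 1.5×IQR multiplier is a common convention, not something specified in the protocol:

```python
from statistics import quantiles

def flag_outliers(values, k=1.5):
    """Return values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = quantiles(values, n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

# an implausible survey entry stands out against plausible ones
print(flag_outliers([1, 2, 3, 4, 100]))  # [100]
```

Flagged values would then be inspected manually rather than dropped automatically, consistent with the visual-inspection step described in the protocol.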
Discussion
This protocol provides a comprehensive description of a novel research design, employing individual-level participant (i.e., 'school') data component meta-analysis, to help generate evidence that can better inform approaches to support the adoption and implementation of health promotion interventions at scale. The study will provide rigorous evidence of the effects of a variety of implementation strategies, employed in different contexts, on the adoption of the SWAP IT school lunchbox program.

Evidence generated from this research will help address an important constraint of the current literature, with systematic review evidence identifying few rigorous trials that have tested strategies to implement health promotion interventions at scale. The strategies tested within this study were developed following a systematic co-design approach with implementation researchers, LHD health promotion staff and other stakeholders. In addition to considering the evidence base (i.e., barriers and enablers to adoption of school-based programs, and the effectiveness of implementation strategies), this process included working with LHD health promotion staff to consider the human, technical and financial resources available in the LHDs responsible for strategy delivery. Applying this type of systematic approach to scale-up has been recommended by implementation and scale-up experts to help address a common pitfall of scale-up research: the diminishment in effect of interventions with proven efficacy when delivered at scale (42,43).
The currently limited evidence base has resulted in a failure to provide guidance on the crucial issue of context, with strategies that effectively improve implementation in one context being potentially ineffective or inappropriate to deliver in another (22). Through partnering with 13 NSW LHDs (including three from the pilot) to conduct this research, schools from all sectors, located within the majority (86%) of the state of NSW, will be represented. These LHDs encompass socioeconomically and geographically diverse regions, ensuring the contexts in which these strategies are tested are diverse and representative of the broader setting (31). To further address the issue of context, future research should consider identifying and addressing other contextual, sectoral and political factors that may be influential in maximizing school adoption of the SWAP IT program. For example, the World Health Organization's Health Promoting Schools Framework recommends employing a comprehensive approach, encompassing educational (e.g., learning and curriculum), environmental (e.g., culture and policies) and partnership (e.g., families, health professionals and educators, teachers and community) components, to enhance the effectiveness of health promotion programs (44).

Employing individual-level participant data component meta-analysis within this research provides an opportunity to gather robust evidence on the types of strategies that are effective in improving implementation of SWAP IT, and in what contexts. Additionally, it addresses a noticeable constraint in the current literature: the substantial heterogeneity in trial design and measures employed in the few studies that have tested strategies to implement health promotion programs at scale. IPD meta-analysis is considered the gold standard for combining data from randomized trials and has several advantages over other analytical approaches (45-47). These advantages include increased statistical power compared to aggregate data meta-analysis, the ability to standardize the analysis across studies to ensure consistency in outcome measures, and an enhanced ability to explore heterogeneity in participant characteristics (i.e., schools and LHDs) and treatment effects (i.e., implementation strategies) (45-47). This type of analysis has frequently been employed to synthesize the effects of health behavior interventions (48-50). For example, the Transforming Obesity Prevention for CHILDren (TOPCHILD) Collaboration uses IPD meta-analysis to assess the effectiveness of obesity-prevention interventions on child weight outcomes, and to assess differential effects by individual- and trial-level characteristics (50).

The use of objective and validated measures of data collection to assess study outcomes is a considerable strength of the study. For example, the objective measure of school adoption of SWAP IT, automatically captured upon school registration for the program, will provide a high-quality and accurate assessment of the trial's primary outcome. The use of validated measures (38) within the survey to assess school acceptability of the employed implementation strategies will provide reliable insight into the types of strategies that could be employed within future interventions to support the implementation of health promotion programs. The use of such measures has been recommended by leading implementation researchers, who have developed definitions and validated measures of implementation outcomes (including adoption and acceptability) to improve consistency in how outcomes are assessed within the implementation field and to enable the comparison of strategies across studies (38,51). These definitions and measures have been incorporated within other school-based interventions to assess implementation outcomes (52,53).

Despite the strengths outlined above, a number of limitations should be considered. While employing a Master Protocol trial design is innovative and shows promise as a method to transform traditional approaches to evaluating strategies to improve implementation of health promotion programs, there is limited research to guide the conduct of such trials in school-based interventions. As such, the utility of this type of trial design in school-based interventions is still largely unknown; indeed, the study will provide valuable learnings on this design as a model of evidence generation more broadly. Additionally, although the analysis will include schools from 13 of the 15 LHDs, these schools are located solely within one state of Australia. As such, the generalizability of the findings beyond this region may be limited.

Funding
… education Program (H20/28248). RS is supported by a Medical Research Future Fund Fellowship (APP1150661) and a Hunter New England Clinical Research Fellowship. JN receives salary support from the NSW Ministry of Health PRSP funding awarded to Early Start at the University of Wollongong. JJ is supported by a Hunter New England Clinical Research Fellowship. AS is supported by an NHMRC Investigator Grant (APP2009432). The contents of this manuscript are the responsibility of the authors and do not reflect the views of the NSW Ministry of Health or NHMRC.

TABLE 1 Implementation strategies delivered by each Local Health District.
TABLE 2 Timeline for the delivery of the implementation strategies.
TABLE 3 Study outcomes and sample assessed.
Correlates of depressive symptoms among Latino and Non-Latino White adolescents: Findings from the 2003 California Health Interview Survey

Background
The prevalence of depression is increasing not only among adults, but also among adolescents. Several risk factors for depression in youth have been identified, including female gender, increasing age, lower socio-economic status, and Latino ethnic background. The literature is divided regarding the role of acculturation as a risk factor among Latino youth. We analyzed the correlates of depressive symptoms among Latino and Non-Latino White adolescents residing in California, with a special focus on acculturation.

Methods
We performed an analysis of the adolescent sample of the 2003 California Health Interview Survey, which included 3,196 telephone interviews with Latino and Non-Latino White adolescents between the ages of 12 and 17. Depressive symptomatology was measured with a reduced version of the Center for Epidemiologic Studies Depression Scale. Acculturation was measured by a score based on the language in which the interview was conducted, language(s) spoken at home, place of birth, number of years lived in the United States, and citizenship status of the adolescent and both of his/her parents, using canonical principal component analysis. Other variables used in the analysis were: support provided by adults at school and at home, age of the adolescent, gender, socio-economic status, and household type (two parent or one parent household).

Results
Unadjusted analysis suggested that the risk of depressive symptoms was twice as high among Latinos as compared to Non-Latino Whites (10.5% versus 5.5%, p < 0.001). The risk was slightly higher in the low acculturation group than in the high acculturation group (13.1% versus 9.7%, p = 0.12). Similarly, low acculturation was associated with an increased risk of depressive symptoms in multivariate analysis within the Latino subsample (OR 1.54, CI 0.97–2.44, p = 0.07).
Latino ethnicity emerged as a risk factor for depressive symptoms among the strata with higher income and high support at home and at school. In the disadvantaged subgroups (higher poverty, low support at home and at school), Non-Latino Whites and Latinos had a similar risk of depressive symptoms.

Conclusion
Our findings suggest that the differences in depressive symptoms between Non-Latino White and Latino adolescents disappear, at least in some strata, after adjusting for socio-demographic and social support variables.

Background
Depression is a very frequent health problem, and one that is growing at an alarming rate. It has been suggested that if the trend persists, by 2020 depression will be the second biggest health care problem worldwide after heart disease [1]. The prevalence of depression is increasing not only among adults, but also among adolescents [2]. This increase mostly relates to moderate forms of depression without psychomotor and physical symptoms [3]. Nevertheless, this form of depression still strongly affects the health and quality of life of patients. Population-based studies show that at any one time between 10 and 15% of the child and adolescent population has some symptoms of depression [4]. Our paper focuses on symptoms of depression based on self-report.

Several risk factors for depression among adolescents have been identified, including female gender, increasing age, lower socio-economic status, and Latino ethnic background [5,6]. The effect of acculturation on depression has also been investigated. The term acculturation generally refers to the process whereby the attitudes and/or behaviors of persons from one culture are modified as a result of contact with a different culture. A recent literature review examined the association between acculturation and depression among Latinos.
High acculturation was associated with a worse outcome in two studies, with a positive effect in one study, and with mixed or no effects on depression in three studies [7]. The author argues that past research was inconsistent in the measurement of acculturation or in the adjustment for possible confounding factors. In some cases, when studies have controlled for factors such as age, education or other factors, the effects of acculturation diminish or disappear [7]. In addition to demographic characteristics, it has been suggested that social support and supervised care after school may be protective factors [8,9]. The aim of our study is to investigate the association between demographic factors (age, gender, ethnic background, socio-economic status), acculturation, social support and depressive symptoms in a large population-based sample of Latino and Non-Latino White adolescents residing in California.

Sample and variables
A secondary analysis was conducted using data from the adolescent sample of the 2003 California Health Interview Survey (CHIS) [10]. CHIS is a population-based, random-digit-dial telephone survey that is representative of California's households in 2003. The analysis was based on 3,196 telephone interviews with Latino and Non-Latino White adolescents between the ages of 12 and 17.

Measure of depressive symptoms
Depressive symptoms were measured with a reduced version of the Center for Epidemiologic Studies Depression Scale (CES-D) following Radloff [11], who first adapted the CES-D Scale for children and adolescents. There were eight items in total, covering depressed affect (felt depressed, lonely, sad, could not shake off feeling sad and unhappy, felt life was a failure), happiness (were happy, enjoyed life), and retarded activity (did not want to do the things you usually do) during the past 7 days.
Similarly to Radloff, the CHIS questionnaire did not ask about interpersonal aspects, because problematic peer relationships might be the norm for adolescents. There was a four-point answer scale for the eight items (never, sometimes, a lot of the time, most of the time); for two of the items the scores were reversed, and then the sum of the scores (0, 1, 2, 3) was calculated, with higher scores indicating more symptoms. Although the CES-D has been used as a screening instrument for depressive symptoms among adolescents, there are no established cutoff scores for the 8-item version of the CES-D Scale that was used in the CHIS. Cutoff scores of ≥ 8 and ≥ 10 have been used on the 10-item CES-D Scale in adult samples (e.g., [12]). A cutoff of ≥ 16 indicates depressive symptoms on the original 20-item CES-D (maximum score = 60 points), which was validated using DSM-III criteria. We set a cutoff score of > 10 as an indicator of depressive symptoms for the 8-item CES-D (maximum score = 24) in the CHIS sample, which corresponds to a cutoff score of > 25 on the 20-item scale. Based on analyses by Roberts et al. [13], 25 is the midpoint of the "moderately depressed" category in a student sample that completed the 20-item CES-D. Few studies have assessed the psychometric properties of the CES-D Scale among adolescents from diverse ethnocultural groups [14-16]. These studies suggest that it is appropriate to use the CES-D among Mexican-American adolescents, since their depression symptomatology is very similar to that of their Anglo-American peers. In our sample, factor analysis yielded an identical factor structure and similar factor loadings in all three groups. High internal reliability indicated high homogeneity of the scale: Cronbach's alpha for the 8-item CES-D scale ranged from .73 in the low acculturation Latino subgroup to .79 among Non-Latino Whites.

Ethnic background and acculturation index
Our analysis was based on data from Latino and Non-Latino White respondents.
Within the Latino subgroup we computed an acculturation index based on the following variables: the language in which the interview was conducted, language(s) spoken at home, place of birth, number of years lived in the United States, and the citizenship status of the adolescent and both of his/her parents. The index was created using canonical principal component analysis (CAPCTA) [17], the nonparametric version of principal component analysis, which has to be used when variables are either multinomial or ordinal. In a density plot the distribution of the acculturation index showed two clearly separated groups (see Figure 1). Based on visual inspection, the index was dichotomized at 0.6 into high and low acculturation. The components of the acculturation index, together with the categories of the acculturation index, are presented in Table 1. The high acculturation Latino group was US born and bilingual, and most of these interviews were conducted in English. The low acculturation Latino group was characterized by youth being foreign born and by more frequent use of Spanish only.

Other variables used in the analysis
Support provided by adults at school and at home was each measured with 8 items assessing the presence of adults who cared about, listened to and encouraged the adolescent respondent. Using CAPCTA, these variables were combined into two separate scores, each dichotomized at the median into low and high categories. The correlation between the support variables was below 0.4. Additionally, we included in the analysis: age of the adolescent (in three groups: 12-13, 14-15, 16-17), gender, socio-economic status (< 200% of poverty level and ≥ 200% of poverty level), and household type (two parent or one parent household).

Statistical analysis
Factor analysis was conducted to confirm an identical factor structure of the CES-D scale for all subgroups. Internal reliability of the scale was tested by Cronbach's alpha.
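This reliability check can be sketched as follows; `items` is a respondents-by-items matrix of scale responses (the toy data here are illustrative, not CHIS values, and `statistics.variance` is the sample variance):

```python
from statistics import variance

def cronbach_alpha(items):
    """items: list of respondent rows, each a list of item scores."""
    k = len(items[0])                       # number of scale items
    cols = list(zip(*items))                # item-wise columns
    item_var = sum(variance(col) for col in cols)
    total_var = variance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - item_var / total_var)

# perfectly consistent toy responses give the maximum alpha of 1.0
print(cronbach_alpha([[0, 0], [1, 1], [2, 2], [3, 3]]))
```

Values around .7 or higher, as reported for the 8-item CES-D in this sample, are conventionally taken to indicate acceptable internal consistency.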
Univariate analysis was performed by tabulation and chi-square tests comparing Non-Latino White adolescents and Latino adolescents with high or low levels of acculturation. We used the conventional significance level (p < 0.05). Univariate and multivariate logistic regression was used to analyze the association between the risk of depressive symptoms and the other variables. A multicollinearity test based on the tolerance coefficient was performed; no collinearity problems were detected for the analyzed variables. We also examined the effect of acculturation in the Latino subsample, first using a continuous acculturation score (data not shown) and in a second step as a dichotomous variable. Finally, the possibility of effect modification between ethnic background and other variables was investigated. The modeling strategy followed Hosmer and Lemeshow [18]. All variables were included in a multiple logistic regression model. All two-way interactions were investigated in separate steps with the main-effects model. Interactions significant at the 0.05 level (based on the Wald test) were included jointly in the preliminary final model. The final model was obtained by removing effects that were not significant at the 0.05 level. For variables with effect modification, the effect of ethnic background was calculated for different strata. All analyses were performed using SPSS 12.0.

Description of the sample
The total sample consisted of 3,196 adolescents: 2,071 Non-Latino White youth, 865 Latino youth in the high acculturation category and 260 in the low acculturation category. The three groups were significantly different with respect to most socio-demographic characteristics (see Table 2). Latino youth came more frequently from lower income households, and reported lower support at home or at school. The low acculturation Latino group was the most disadvantaged in terms of social support and income.
However, more Latino youth in the low acculturation group came from two parent households as compared to high acculturation Latinos and Non-Latino Whites.

Characteristics associated with depressive symptoms
Scores on the 8-item CES-D Scale ranged from 0 to 24, with a mean score of 4.4 and a standard deviation of 3.7. A total of 232 adolescents (7.3% of the sample) had scores > 10 and were classified as having symptoms of depression (see Table 3). Based on univariate analysis, a significantly larger proportion of Latino youth had depressive symptoms as compared to Non-Latino Whites (10.5% versus 5.5%, p < 0.001). Although more low acculturation Latinos had symptoms of depression than high acculturation Latinos (13.1% versus 9.7%), this difference was not statistically significant (chi-square, p = 0.12). Females, youth living in households below 200% of the poverty level, those living in a one parent household, and those who received low support at home and at school were significantly more likely to have symptoms of depression. Age was not associated with symptoms of depression in this sample. In multivariate analysis not accounting for effect modification, the impact of ethnic background and acculturation decreased as compared to univariate analysis (Table 3, columns 4 and 5). In this analysis, the odds of depressive symptoms were increased only among low acculturation Latinos as compared to Non-Latino Whites. Poverty level and a low level of support at home no longer emerged as independent predictors of depressive symptoms. In this analysis, the most important predictors of depressive symptoms were low support at school, female gender, being classified as a low acculturation Latino, and coming from a one parent household.

Characteristics associated with depressive symptoms in the Latino subsample
We examined independent predictors of depressive symptoms among Latinos by limiting the analysis to the Latino subsample.
Similar to the multivariate analysis in the whole sample shown in Table 3, female gender and low support at school emerged as risk factors in the Latino subsample (Table 4). In the low acculturation group the odds of depressive symptoms were 50% higher than in the high acculturation group, but the effect did not reach statistical significance. There was no substantial change in the effects of the other variables after the inclusion of the acculturation variable in the joint model; only the impact of poverty level slightly decreased. There was also no evidence of effect modification in the Latino subsample, which was assessed by the significance of interaction terms (data not shown).

Figure 1 Distribution of the acculturation index (Kernel density plot).

Ethnic differences in depressive symptoms in different strata of covariates
The comparison of the results in the Latino subsample (Table 4) and the combined Latino and Non-Latino White sample (Table 3) revealed a strong effect modification. Poverty level, support at home and support at school consistently had a weaker association with depressive symptoms in the Latino sample than in the whole sample. This was confirmed in the formal analysis of interactions, and the stratified results are presented in Table 5. The highest odds ratios were found in the more advantaged strata: the odds ratios for Latinos to have depressive symptoms as compared to Non-Latino Whites ranged from 2.1 in the stratum with poverty level ≥ 200% to 3.29 in the stratum with high support at school. The odds ratio of having depressive symptoms among Latinos as compared to Non-Latino Whites was 4.62 in the stratum with high income and high support both at school and at home. The odds of having depressive symptoms were not different between Latinos and Non-Latino Whites in the strata with lower levels of income and support at home.
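Stratum-specific odds ratios like those above can be computed directly from a 2×2 table of depressive symptoms by ethnicity within each stratum. A minimal sketch with Woolf (log-based) 95% confidence limits; the method choice and the cell counts below are illustrative, not the actual CHIS counts:

```python
from math import exp, log, sqrt

def odds_ratio(a, b, c, d, z=1.96):
    """2x2 table: a/b = cases/non-cases in group 1, c/d in group 2.
    Returns (OR, lower 95% limit, upper 95% limit) via the Woolf method."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# illustrative stratum: 12/88 with symptoms in one group vs. 6/194 in the other
print(odds_ratio(12, 88, 6, 194))
```

In the paper itself the stratified odds ratios come from logistic regression models with interaction terms, which additionally adjust for covariates; the 2×2-table version shown here is the unadjusted analogue.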
Predictors of depressive symptoms also varied somewhat in the male and female subsamples (Table 6). In both gender groups, low support at school remained the most important predictor of depressive symptoms. Among males, additional predictors were low support at home and coming from a one parent household, which did not emerge as risk factors among females. Among females but not among males, low family income and being 14-15 years old emerged as risk factors for depressive symptoms. Ethnicity emerged as a risk factor for depressive symptoms only among females.

Discussion
We conducted extensive analyses in a population-based sample of Latino and Non-Latino White adolescents to examine associations between depressive symptoms and socio-demographic variables (age, gender, ethnicity, income, one parent versus two parent household type), acculturation, and social support at home and at school. Crude analyses suggested that the risk of depressive symptoms was twice as high among Latinos as compared to Non-Latino Whites (10.5% versus 5.5%). Other risk factors included female gender, low household income, one parent household, and low support at home and at school. All of these factors have been reported as risk factors for depressive symptoms among Latinos and other ethnic groups [6,8,9,19]. However, when all risk factors were considered simultaneously in a multivariate analysis, only four independent risk factors emerged: having low support at school, being female, being classified as a low acculturation Latino, and coming from a one parent household. In a stratified analysis, risk factors that were unique to males were low support at home and coming from a one parent household. Ethnicity was not a risk factor in this stratified analysis, suggesting that these risk factors are similar among Non-Latino White and Latino male adolescents.
Almost one third of children under 18 years of age in California (29%) live in one parent households: 21% live in mother only households and 7% in father only households [20]. Thus, boys are more likely than girls to live in a one parent household with a parent of the opposite gender. It may be that males growing up without a father in the household are either experiencing something or lacking something, such as a male role model, that increases their risk of depressive symptoms. Patten and colleagues [8] analyzed data from a large sample of California adolescents and also found higher rates of depressive symptoms among adolescents living in one parent households than among those living in two parent households. Their study showed the highest rates of depressive symptoms among girls living in father only households (25.1% vs. 19.3% in mother only households), whereas the rates of depressive symptoms for boys were around 16% for both father only and mother only households [8]. The relative effect of a single parent household (predominantly a single mother household) was stronger for boys than for girls in our analysis. Further analyses by Patten and colleagues [8] revealed that household type has to be considered in conjunction with parental support, as even in a two parent household the risk of depression was increased if the adolescents perceived that they were not able to talk to either parent about their problems. Clearly, the complex relationships between depression, household type and parental support, and the mechanisms by which these variables may relate to depression, need to be studied further.

Risk factors for depressive symptoms that were unique to females in our sample were Latino ethnicity, age 14-15 and low household income. Latino females emerged as a risk group for depressive symptoms in both a gender-stratified analysis and an analysis limited to the Latino subsample.
Interestingly, the age group 14-15 years had the highest risk of depressive symptoms among females, but the lowest risk among males. Thus, in the combined analysis, the risk estimates were averaged and age did not emerge as a risk factor for depressive symptoms. Our findings suggest that boys and girls show different profiles of correlates and probable risk factors for depressive symptoms. Others have suggested that risk factors for depression such as stress and social support may have a greater impact among girls than among boys [9]. Future studies need to further evaluate gender differences in rates and risk factors of depression, as gender-specific intervention programs may be needed.

In our sample, low support at school was the strongest risk factor for depressive symptoms for both males and females. This variable captured respondents' perceptions of the availability of a teacher or other adult at school who "noticed when they were not there, listened to them when they had something to say, told them when they did a good job, always wanted them to do their best, and noticed when they were in a bad mood". Thus, teachers and school counselors are important sources of support and need to be trained to recognize symptoms and risk factors of depression. They also need to be given the time to pay attention to individual students.

A multivariate analysis taking into account the existing interactions between socio-economic status, perceived support and ethnicity provided an even more detailed profile of depressive symptoms. When we examined different strata of household income and support, either at home or at school, Latino ethnicity emerged as a risk factor for depressive symptoms only among the strata with higher income and high support at home and at school. While this finding is counterintuitive at first, it suggests that high economic status and social support are protective factors only among Non-Latino Whites.
We have not been able to find any literature investigating this hypothesis. An alternative interpretation relates to the association between depressive symptoms and perceived discrimination. Several studies suggest that higher income is associated with more perceived discrimination and that discrimination is a risk factor for depression [21,22]. Since CHIS does not assess perceived discrimination, we were not able to examine this relationship. We found no ethnic differences between Latinos and Non-Latino Whites in the prevalence of depressive symptoms in the strata with low income or low social support at home. Although our data suggest several correlations between socio-demographic characteristics, social support, and depressive symptoms, the causal nature of these relationships is ambiguous given the cross-sectional study design. As pointed out by others [8], depressed adolescents may be less inclined to form supportive relationships with parents, teachers, or peers, less likely to perceive relationships as supportive, and less likely to report supportive relationships. Another limitation of our data set is that several variables that have been shown to be risk factors for depression were not available, such as stressful life events [9], perceived discrimination and low self-esteem [21-23], involvement in bullying either as a perpetrator or as a victim [5], affiliation with a high- versus low-status peer crowd, negative or positive qualities of friendships, and the presence or absence of romantic relationships [24]. Finally, as in many other studies, our measure of acculturation may not have captured aspects of the acculturation process that are related to depression.
Although we attempted to include all data related to the acculturation experience that were available in this data set in developing an acculturation scale, and although we used a method that has the advantage of not making inappropriate statistical assumptions, the dichotomized acculturation variable that we created was almost identical to a simple dichotomization based on country of birth (US versus other). Finally, our sample of low-acculturation Latino respondents was relatively small, and given that most Latinos living in California are from Mexico, findings may not be generalizable to those with different heritage. However, despite these limitations, our analysis adds some information to the sparse literature on depression among ethnically diverse adolescents.

Conclusion

Our findings suggest that differences in depressive symptoms between Non-Latino White and Latino adolescents disappear, at least in some strata, after adjusting for socio-demographic and social support variables, and they gave rise to some interesting hypotheses regarding modifiers of depression such as household income, social support, and gender. These hypotheses should be further investigated in order to identify groups that are at high risk for depression and could benefit from interventions.
A global action agenda for turning the tide on fatty liver disease

Background and Aims: Fatty liver disease is a major public health threat due to its very high prevalence and related morbidity and mortality. Focused and dedicated interventions are urgently needed to target disease prevention, treatment, and care. Approach and Results: We developed an aligned, prioritized action agenda for the global fatty liver disease community of practice. Following a Delphi methodology over 2 rounds, a large panel (R1 n = 344, R2 n = 288) reviewed the action priorities using Qualtrics XM, indicating agreement on a 4-point Likert scale and providing written feedback. Priorities were revised between rounds, and in R2, panelists also ranked the priorities within 6 domains: epidemiology, treatment and care, models of care, education and awareness, patient and community perspectives, and leadership and public health policy. The consensus fatty liver disease action agenda encompasses 29 priorities. In R2, the mean percentage of “agree” responses was 82.4%, with all individual priorities having at least a super-majority of agreement (> 66.7% “agree”). The highest-ranked action priorities included collaboration between liver specialists and primary care doctors on early diagnosis, action to address the needs of people living with multiple morbidities, and the incorporation of fatty liver disease into relevant non-communicable disease strategies and guidance. Conclusions: This consensus-driven multidisciplinary fatty liver disease action agenda developed by care providers, clinical researchers, and public health and policy experts provides a path to reduce the prevalence of fatty liver disease and improve health outcomes. To implement this agenda, concerted efforts will be needed at the global, regional, and national levels.
INTRODUCTION

NAFLD, hereafter referred to simply as fatty liver disease, is the most widespread liver disease, with an estimated prevalence of 38% of the global adult population [1] and around 13% of children and adolescents [2]. The disease is an increasingly important contributor to global morbidity and mortality, emphasized by the substantial increase in fatty liver disease-related cirrhosis over the past decade [3-8]. Despite excess fat in the liver (hepatic steatosis) in the early stages of the disease, affected individuals generally experience few, nonspecific symptoms (eg, fatigue, abdominal pain), commonly leading to a delayed diagnosis and worse health outcomes [9]. More broadly, the asymptomatic nature of the disease manifests in a generalized lack of urgency and policies to tackle the issue [10]. The burden of fatty liver disease is expected to grow in the coming decades [11], with wide-ranging implications for public health and health systems, yet countries are ill-prepared to face this challenge. A 2020 survey of 102 countries found that no country had a written strategy to address fatty liver disease, and around one-third of countries scored zero on a policy preparedness index [12]. In the same year, a consortium of 218 experts from 91 countries published a set of recommendations to advance the public health and policy agenda, including a call for a global coalition to lead the development of a public health roadmap for fatty liver disease [13]. Fatty liver disease represents a contemporary public health challenge that requires multidisciplinary and multisectoral responses and novel collaboration, from reorienting health systems to addressing food systems, the built environment, and social deprivation [14]. For policymakers, practitioners, industry, and patient advocates, this represents unique challenges as they seek to balance the complexity and scale of the problem with the need for effective and efficient responses.
[15] Building on earlier work, this study engaged a global multidisciplinary group of experts to develop a set of consensus actions, which can collectively turn the tide on this silent but challenging public health threat.

METHODS

This study employed a Delphi methodology to develop consensus action priorities for fatty liver disease. The same global consortium previously published 28 research priorities following the same methodology [16]. The 9 co-chairs identified 33 experts, covering clinical care and research, public health and policy, and advocacy, who collectively formed the core author group (n = 42) (Supplementary Table 1, http://links.lww.com/HEP/H907). The core group identified experts who formed the survey panel (n = 473) (Figure 1; Table 1). All participants had expertise in the field of fatty liver disease, non-communicable diseases (NCDs), and/or consensus methodologies. The core group drew on participants from earlier work, including the global NAFLD nomenclature process (n = 240) [17], through which the American Association for the Study of Liver Diseases, the Latin American Association for the Study of the Liver (Asociación Latinoamericana para el Estudio del Hígado), the Asian Pacific Association for the Study of the Liver, and the European Association for the Study of the Liver nominated participants. Panelists were also identified from past NAFLD consensus efforts [13] and the Wilton Park and Economist Intelligence Unit projects through the EASL International Liver Foundation.
Drafting of action priorities

Part of the core group (n = 20) reviewed the literature and evidence base, then developed a set of evidence briefs around 7 topics, summarizing the current knowledge base, envisioning what "success" would look like in the next decade, identifying key questions, and suggesting action priorities for (1) the human and economic burden; (2) defining and implementing models of care; (3) treatment and care; (4) education and awareness; (5) patient and community perspectives; (6) policy strategies and a societal approach; and (7) leadership for the fatty liver disease public health agenda. The briefs were debated during a 3-day roundtable at Wilton Park, UK, in October 2022, co-chaired by Henry E. Mark and opened by Thomas Berg and Jeffrey V. Lazarus, in which 26 core group members and 11 co-authors participated. The action priorities were subsequently revised by Jeffrey V. Lazarus and Henry E. Mark to reflect the Wilton Park discussions, and topics 6 and 7 were combined. The priorities were revised by core group members to reflect the discussions ahead of the first Delphi survey round (December 21, 2022 to January 15, 2023).

Delphi method data collection and analysis

The study design consisted of the Wilton Park meeting (Supplementary Table 2, http://links.lww.com/HEP/H907) and 2 survey rounds (R1 and R2). In both rounds, respondents indicated their agreement with each priority using a 4-point Likert-type scale (ie, "agree," "somewhat agree," "somewhat disagree," and "disagree"). Given the multidisciplinary nature of the panel, the survey included a fifth "not qualified to respond" option. Panelists could provide comments and suggest edits to individual priorities and provide overall comments at the end of each survey. Demographic data were collected in R1. The survey was distributed using the Qualtrics XM platform (round duration ranged from 2 to 3.5 wks). An analytic team of core group members (Jeffrey V. Lazarus, Henry E. Mark, Paul N.
Brennan, Christopher J. Kopka, Diana Romero, Dana Ivancovsky Wajcman, and Marcela Villota-Rivas) reviewed the R1 data, including 545 open-ended comments, and initiated revisions; the core group subsequently reviewed the revised priorities ahead of R2. In R2 (8-21 February 2023), panelists voted on the revised priorities and ranked at least half of the priorities within each of the 6 domains: epidemiology, models of care, treatment and care, education and awareness, patient and community perspectives, and leadership and public health policy. Each action priority was graded to indicate the level of combined agreement ("agree" + "somewhat agree"), using a system that has been used in other Delphi studies [13], in which "U" denotes unanimous (100%) agreement, "A" denotes 90%-99% combined agreement, "B" denotes 78%-89% combined agreement, and "C" denotes 67%-77% combined agreement. For the ranking, scores were calculated and normalized in Microsoft Excel (v.16.70) to compare rankings within each domain.

Ethical considerations

This study received an ethical review exemption from the Hospital Clínic of Barcelona, Spain, ethics committee on December 19, 2022. All research was conducted in accordance with both the Declarations of Helsinki and Istanbul. Panelists consented to participate in the study, and data were anonymized for all analyses.
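The grading scheme described above, and a normalization of within-domain ranking scores (reported as done in Excel), can be sketched as follows. This is an illustrative reconstruction, not the authors' actual spreadsheet formulas: the function names are invented, and min-max scaling is an assumed choice for the unstated normalization method.

```python
def grade_agreement(agree: float, somewhat_agree: float) -> str:
    """Grade one priority by combined agreement, in percent
    ("agree" + "somewhat agree"), per the U/A/B/C scheme above."""
    combined = agree + somewhat_agree
    if combined >= 100:
        return "U"  # unanimous (100%)
    if combined >= 90:
        return "A"  # 90%-99%
    if combined >= 78:
        return "B"  # 78%-89%
    if combined >= 67:
        return "C"  # 67%-77%
    return "-"      # below the consensus thresholds used in the study


def normalize_scores(scores: list[float]) -> list[float]:
    """Min-max scale raw ranking scores within one domain to [0, 1]
    so that rankings are comparable across domains (assumed method)."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0 for _ in scores]  # all priorities tied
    return [(s - lo) / (hi - lo) for s in scores]
```

For example, `grade_agreement(82.4, 15.7)` yields `"A"` (combined agreement 98.1%, matching the study-wide mean reported in the Results).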
RESULTS

A total of 473 individuals were invited to participate in R1, and 344 (72.7%) completed the survey. These 344 respondents were invited to participate in R2, of whom 288 (83.7%) completed the survey. Table 1 details the demographics of all expert panelists involved in the study. The mean age of respondents was 53.8 (SD: 10.1). Most respondents were male (64.8%), worked in high-income countries (69.9%) and in the Europe and Central Asia region (42.2%), were primarily employed in the academic sector (66.6%), and worked in the clinical research field (79.4%). A total of 94 countries were represented in terms of respondent country of origin and 91 in terms of respondent country of work. In R1, 27 initial action priorities were presented to the panel. During revisions ahead of R2, 2 additional action priorities were included, with the panel reviewing 29 priorities in R2. Across the 2 Delphi rounds, combined agreement ("agree" + "somewhat agree") increased for all domains. The mean percentage of "agree" responses across domains increased from 80.0% in R1 to 82.4% in R2, following the consideration of substantive comments received in R1. Table 2 presents the final priorities, agreement grades, and rankings for each of the 6 domains. Within the final priorities in R2 (Figure 2), the panel reached unanimous combined agreement for 2 priorities and > 90% combined agreement for the remaining 27; the mean level of combined agreement across all priorities was 98.1% (rising from 96.8% in R1). For 11 priorities, "agree" answers were < 80%, with higher reliance on "somewhat agree" to achieve the high rate of overall combined agreement (Supplementary Table 3, http://links.lww.com/HEP/H907). Defining and implementing models of care and treatment and care were the 2 domains where more than half of the research priority statements had < 80% of the panel "agree"; all of these statements received > 90% combined agreement but relied more heavily on the "somewhat agree" category to achieve
this.All of the action priorities received at least a super-majority (66.7%) of "agree" in R2. DISCUSSION Fatty liver disease has far-reaching health, social, and economic consequences, [3,6,7,18] which, without urgent efforts, will continue to grow. [11]Heeding earlier calls for further collaboration, [10,13] this study employed an inclusive and responsive methodology to develop a multidisciplinary action agenda for stakeholders around the world.As noted previously, this work follows different yet complementary work on setting a global research agenda for fatty liver disease. [16]elow, we discuss the 29 agreed-upon actions within 6 overarching domains. Domain 1: The human and economic burden Both the clinical and economic burden of fatty liver disease continue to increase.The prevalence of the disease has grown dramatically in recent decades, becoming an increasingly important contributor to morbidity and mortality. [3]The economic burden is vast; data from several high-income countries show the scale of direct health care costs in both out-patient [19] and inpatient settings [20] and the wider societal costs. [6,7,21]hile data from a broader range of contexts, including resource-limited settings, will strengthen our understanding, what we know today about the human and economic consequences of this disease present a compelling case for action. A prior consensus statement from the liver health community noted the increased costs associated with fatty liver disease while also accepting that "incomplete data hinder concerted action at the national and global levels". 
[13] In this study, panelists proposed 2 priorities intending to deepen understanding and action with respect to the human and economic burden. The highest-ranked priority within this domain reflects the need to promote standardization and harmonization of data collection and reporting on the human and economic burden (priority 1.2) to allow for meaningful comparisons. The panelists also agreed with prioritizing the development of investment cases for fatty liver disease (priority 1.1). Such investment cases will provide an empirical investigation of the human and economic burden associated with fatty liver disease, alongside estimations of the expenses associated with reducing that burden. These can be key tools for engaging policymakers around not only the importance of action but also its health and economic benefits.

Domain 2: Defining and implementing models of care

An important aspect of fatty liver disease is that the vast majority of patients can be cared for in primary care settings, whereas those with advanced fibrosis or cirrhosis need specialized care delivered by a multidisciplinary team [22]. The availability of high-performing non-invasive tests (NITs) has now markedly reduced the need to rely on liver biopsy for the diagnostic and prognostic context of use [23,24], providing an effective and efficient way to identify patients at risk of poor hepatic-related outcomes.
[25] Yet, it is acknowledged that most primary care settings are ill-equipped to effectively identify and refer patients at risk for advanced disease to secondary care as needed. Unsurprisingly, the highest-ranking priority within this domain focused on the need for liver specialists to collaborate with primary care experts to determine which NITs are most appropriate for use in primary care settings (priority 2.6), which is likely to differ between settings based on resource availability and health system structure. Subsequently, providing clear guidance on care pathways and timely referrals was ranked second in this domain (priority 2.5). Along with the evolving refinement of NITs and referral pathways, the panel agreed on the importance of standardization around key effectiveness measures to be used in the evaluation of multidisciplinary models of care (priority 2.7). These priorities sit alongside previous calls to generate data to validate NITs for early diagnosis, prognosis, and monitoring of liver disease progression [16]. Recognizing the shift within public policy and health systems toward person-centered care [26,27], engaging affected populations in the development of patient-centered care pathways (priority 2.1) and implementing community-tailored models of care for diagnosis, prevention, and treatment (priority 2.2, ranked 3rd in its domain) were determined to be priorities. Emerging evidence suggests that a multidisciplinary approach to the management of fatty liver disease is imperative, although multidisciplinary care models are poorly adopted in most health care settings.
[22] Therefore, the panelists agreed that the development of a range of context-specific and resource-specific fatty liver disease multidisciplinary model of care examples (priority 2.4) was the fourth highest priority within the domain. As models of care for the disease emerge and evolve, panelists unanimously agreed on engaging with health system decision-makers about their operational and financial implications (priority 2.3). "Preventive hepatology", first proposed in 2008, emphasizes the use of timely interventions to minimize adverse health outcomes of chronic liver disease [28]. This is an important framing within fatty liver disease, given the imperative of actively implementing a spectrum of strategies to prevent both disease onset and progression. Taken together, the actions outlined in this domain will help to drive the much-needed knowledge and innovation in the management of this disease, which will inevitably place an increasing amount of pressure on health systems in the coming years.

Domain 3: Treatment and care

Notwithstanding current developments, including late-stage clinical trials for pharmacological treatments and bariatric procedures [29,30], the management of fatty liver disease remains highly dependent on weight reduction (targeting a sustained loss of at least 7%-10% of the initial body weight). However, barriers, such as insufficient knowledge and access to resources promoting a healthy lifestyle, physical discomfort, time constraints, and financial considerations, hinder the achievement of long-term weight loss goals [33]. These approaches target improvements in insulin resistance, optimizing glycemic control, and attenuating the pro-inflammatory milieu of obesity, which is a driver of disease progression.
[34] To implement successful behavioral change, person-centered care and social interventions are needed. Motivational and self-monitoring approaches (eg, cognitive behavioral therapy, mindfulness-based stress reduction therapy) have shown positive outcomes in treating fatty liver disease [35]. However, the social environment, which encompasses factors such as culture, gender, and socioeconomic status, also plays a significant role in obesity [36]. The concept of social nutrition aims to promote a social environment that fosters improved metabolic health; this will be a critical concept to embed within actions for fatty liver disease care. In anticipation of future pharmacological approvals, the panelists agreed on and ranked the development of tools to support pharmacological treatment uptake as the highest priority in this domain (priority 3.2). This work can draw inspiration from previous efforts in viral hepatitis [37]. As the clinical trial space of NASH-specific drugs evolves, the appropriateness and utility of different trial end points, from the resolution of NASH or fibrosis regression to slowing disease progression, continue to be debated [38]. The panel agreed that engaging relevant stakeholders, including patients, in focused discussions with regulators will help to advance the discourse around end points, ranking this as the second highest priority in this domain (priority 3.3). As similarly noted in other domains, and again consistent with patient-centric approaches, the panelists agreed with expanding the use of patient-reported outcomes and including these alongside clinical outcomes within trials (priority 3.4). This is an emerging but rapidly expanding area within fatty liver disease [39]. The field of public health is also increasingly recognizing the role of commercial determinants of health [40] alongside biological and social determinants.
[41] In light of this recognition, the panelists agreed that not only social but also commercial determinants of health should be prioritized when developing treatment and care strategies (priority 3.1). This work will require the liver health community to engage with those working across the NCD spectrum, including by lending their voice to existing calls for action to address negative commercial influences on public health.

Domain 4: Education and awareness

Available data on fatty liver disease awareness, while limited, illustrate low levels of public and patient awareness [38,42]. Prior consensus statements from the liver health community have called for an increased strategic emphasis on education and awareness [13,16]. In recognizing this evidence base and building on the prior consensus statements, the panelists agreed with 8 action priorities with respect to education and awareness for 4 broad audiences: (i) current health professionals, (ii) future health professionals, (iii) people living with fatty liver disease, and (iv) the general public. The fatty liver disease continuum is bidirectional and inherently modifiable, sharing cardiometabolic features with several other NCDs (eg, obesity, diabetes, hypertension, cardiovascular disease) [4,5]. Yet, as noted, awareness among health care providers, at-risk patients, and policymakers is generally low. The highest-ranked action priority for this domain was cross-cutting, with panelists calling for promoting awareness among health care providers and patients of the possibility of multiple diagnoses (priority 4.5). The panel brought forward a second cross-cutting priority, calling for the development of informational products to communicate how liver function and metabolic health influence overall population health (priority 4.4). Health care professionals and patients alike have reported a dearth of information about fatty liver disease and its management following diagnosis.
[43] Lack of awareness of the fibrosis stage is also emerging as being associated with lower adherence to lifestyle changes [44]. With respect to affected populations, the panel agreed to inform all people with fatty liver disease of their disease stage and educate them on the reversibility of liver fibrosis (priority 4.8). With regard to the broader public, the panel supports awareness-raising through public campaigns, leveraging traditional media, social media, and collaborative approaches, ranking this as the third highest priority in the domain (priority 4.7). This is particularly important considering the forthcoming change in NAFLD nomenclature [17]. As previously alluded to, NITs hold great promise for expanding the diagnosis of fatty liver disease. The panelists agreed to disseminate educational resources on the implementation of NITs in different settings (eg, primary care, diabetes, and obesity clinics) (priority 4.3, ranked 2nd in its domain). Recognizing that knowledge and awareness of fatty liver disease may be increased among some health professionals outside liver-specific environs, the panelists also unanimously agreed and ranked as the fourth highest priority to "expand the availability of educational courses and toolkits on fatty liver disease"; this could be achieved through "formal medical curricula and continuing education, in collaboration with other disciplines" (priority 4.2). As the prevalence of fatty liver disease continues to expand not only among adults but also among children and adolescents [1,11], the panelists brought attention to and called for action on strategies for raising awareness in collaboration with pediatric professionals (priority 4.6).
Consistent with the strategic emphasis on expanding the fatty liver disease community of practice, further prioritized in a separate domain below, the panelists highlighted the need for education-oriented actions directed toward future health professionals through evaluating current medical curricula to identify how the disease is taught in both medical school and postgraduate training curricula (priority 4.1).

Domain 5: Patient and community perspectives

People living with fatty liver disease have unique support needs. A cohort study from 2023 demonstrated that low social support and loneliness (functional measures of social relationships) increased mortality risk in cirrhotic patients compared with noncirrhotic individuals [45]. Addressing these barriers will be a major challenge, not least given the prevalence of the disease; however, with this comes the opportunity to innovate and transform fatty liver disease models of care. There is a wealth of experience that can be drawn on both within [37] and outside of the liver health community to inform this work [46], including the World Health Organization frameworks on the meaningful engagement of people living with NCDs [27] and people-centered health care [47]. In step with this, the panelists agreed on the importance of incorporating community perspectives, with 2 areas of early action emphasized. Firstly, the panel highlighted the importance of growing support networks for people with fatty liver disease (eg, patient groups) (priority 5.1) and, secondly, the need to co-create, with affected communities, non-stigmatizing communication guidance for health professionals to use when engaging people living with fatty liver disease (priority 5.2).
Domain 6: Leadership and policies for the fatty liver disease public health agenda

The consensus-built priorities for advancing the fatty liver disease public health agenda point to the importance of taking action that addresses the unique challenges posed by fatty liver disease and, crucially, that reflects the interlinked risks and solutions for fatty liver disease and other NCDs. Building on earlier calls for comprehensive public health and political efforts to counteract the growing fatty liver disease burden [13,16,48], this paper sets out a roadmap for action. Public health and health systems increasingly face the complex challenges presented by growing multimorbidity across NCDs [49], and fatty liver disease is no exception. Unsurprisingly, then, 2 of the 4 highest-scoring action priorities based on "agree" alone (priority 6.1, ranked second in the domain, and priority 6.2, ranked first in the domain) pertain to the inclusion of fatty liver disease in the strategies of other NCDs and advanced collaborations with stakeholders engaged with other NCDs (eg, diabetes, obesity). There is growing clarity that more talent is needed to address the overall increasing burden of fatty liver disease [50]. Unsurprisingly, the panelists called for a strategic approach to expand the fatty liver disease community of practice (priority 6.6), which will both broaden and deepen the expertise and talent within the community. As the community of practice expands, the panelists also advocated for nurturing the next generation of both clinical and public health leaders (priority 6.3) and convening multidisciplinary experts to enact these priorities at all levels (priority 6.4).
The panelists concurred that, alongside national and regional efforts, there is a need for a coalition that can spearhead these efforts at the global level (priority 6.5, ranked third in its domain). Given the lack of awareness of and attention provided to fatty liver disease within the broader global health discourse, the global coalition can foster discussion, partnerships, and action and provide a common platform for advancing this agenda. Early efforts to establish such a coalition have been instigated by regional liver associations (the American Association for the Study of Liver Diseases, the Asociación Latinoamericana para el Estudio del Hígado (Latin American Association for the Study of the Liver), the Asian Pacific Association for the Study of the Liver, and the European Association for the Study of the Liver) under the umbrella Healthy Livers, Healthy Lives.

Study strengths and limitations

As described within the research agenda developed by the same panel [16], the major strength of this study lies in its novelty as the first global, large-scale effort to propose a comprehensive action agenda for fatty liver disease. Again, the group used the rigorous Delphi consensus process. This methodology allows degrees of agreement to be illustrated by breaking out "agree/somewhat agree" and "somewhat disagree/disagree" responses, which the co-authors believe may assist decision-makers in government, industry, health systems, and across communities in their own prioritization efforts. We suggest that the scoping nature of the domains, combined with more refined actions, makes the outcome both globally relevant and operationally actionable.
While this study used the Delphi methodology, given its efficacy in consensus building, we note that multidisciplinary, action-oriented consensuses are nonetheless challenging. This study used purposive sampling of experts with prior experience in fatty liver disease, NCDs, and/or consensus methodologies in the development of the core group. To mitigate the biases of purposive sampling, the core group then used snowballing and targeted sampling to yield a geographically diverse, multidisciplinary panel of 344 people. However, we recognize that the panel's characteristics (eg, predominantly based in high-income countries and employed in the academic sector) will have influenced the study results. Notably, patient-centric and policy-oriented priorities had overall lower agreement levels, which likely reflects the smaller proportion of the panel whose primary field of work is patient and policy advocacy (n = 16, 4.7%). The chosen language for the study, English, may also have influenced those who accepted the invitation to contribute or the panelists' ability to fully comprehend every statement. This study presents the first global consensus-built action agenda on fatty liver disease. Through a rigorous Delphi process, a large panel identified 29 unique action priorities across 6 domains. Taken together, these actions set out the collective efforts needed to arrest this growing but under-addressed public health threat in the coming years. Critically, implementing these actions will require a fundamental shift in the liver field from a narrow focus on hepatology to a more comprehensive approach that includes various stakeholders from different medical specializations, such as endocrinology, primary care, and cardiology, alongside public health experts, social scientists, policymakers and governments, the pharmaceutical and device industries, patient advocates, and, most importantly, patients themselves.
Notes: a Based on World Bank regions. b n = 3 participants are originally from Central Asia. c n = 3 participants work in Central Asia. d Denominator includes n of no response. e Sum may exceed the sample size as participants could choose > 1 response. Abbreviations: AASLD, American Association for the Study of Liver Diseases; ALEH, Asociación Latinoamericana para el Estudio del Hígado (Latin American Association for the Study of the Liver); APASL, Asian Pacific Association for the Study of the Liver; EASL, European Association for the Study of the Liver.

A GLOBAL ACTION AGENDA FOR TURNING THE TIDE ON FATTY LIVER DISEASE

FIGURE 2: Action priorities to turn the tide on fatty liver disease.

FIGURE: Delphi panel generation and data collection.

TABLE 2: Consensus statements for a fatty liver disease action priorities agenda.

This study was led by a core group of 42 co-authors. Jeffrey V. Lazarus led the core group and provided regular updates by email. Twenty-six core group members and 11 co-authors participated in a 3-day in-person meeting hosted by Wilton Park, UK, in October 2022, which informed the development of the action priorities included in the Delphi study. Seven of the co-chairs (Alina M. Allen, Juan Pablo Arab, Patrizia
Notes: Percentages may add up to more than 100 due to rounding. Grades are based on the percentage of combined agreement (agree + somewhat agree). U, unanimous (100%) agreement; A, 90%-99% agreement. Responses to each statement are presented as percentages of the total responses. Abbreviations: A, agree; D, disagree; N, total number of responses; NQ, the percentage of participants that indicated that they were not qualified to respond; SA, somewhat agree; SD, somewhat disagree.
Transcriptional and immunohistological assessment of immune infiltration in pancreatic cancer

Pancreatic adenocarcinoma is characterized by a complex tumor environment with a wide diversity of infiltrating stromal and immune cell types that impact the tumor response to conventional treatments. However, even in this poorly responsive tumor, the extent of T cell infiltration as determined by quantitative immunohistology is a candidate prognostic factor for patient outcome. As such, even more comprehensive immunophenotyping of the tumor environment, such as immune cell type deconvolution via inference models based on gene expression profiling, holds significant promise. We hypothesized that RNA-Seq can provide a comprehensive alternative to quantitative immunohistology for immunophenotyping pancreatic cancer. We performed RNA-Seq on a prospective cohort of pancreatic tumor specimens and compared multiple approaches for gene expression-based immunophenotyping against quantitative immunohistology. Our analyses demonstrated that while gene expression analyses provide additional information on the complexity of the tumor immune environment, they are limited in sensitivity by the low overall immune infiltrate in pancreatic cancer. As an alternative approach, we identified a set of genes that were enriched in highly T cell infiltrated pancreatic tumors, and demonstrate that these can identify patients with improved outcome in a reference population. These data demonstrate that the poor immune infiltrate in pancreatic cancer can present problems for analyses that use gene expression-based tools; however, there remains enormous potential in using these approaches to understand the relationships between diverse patterns of infiltrating cells and their impact on patient treatment outcomes.
Introduction

Pancreatic cancer is commonly characterized by extensive desmoplastic stroma and an environment that is poorly supportive of adaptive immune responses, yet like many other cancers, the degree of T cell infiltrate in pancreatic tumors is correlated with patient outcome [1][2][3][4]. T cells in pancreatic tumors face an array of suppressive mechanisms that can limit their ability to control tumors, and it would be beneficial to understand the relationship between T cell infiltration and the presence of other immune populations that positively or negatively regulate immune responses. For this reason, there is significant effort in the field to understand and manipulate the complex immune environment of tumors. Quantitative immunohistochemistry (IHC) has long represented the gold standard by which tumor infiltrating immune populations can be assessed, and recent advances in multispectral IHC combined with automated image analysis have made possible an unprecedented ability to map out the immune environment of tumors. However, these approaches are limited by the availability and quality of antibodies, and complex multispectral panels require extensive validation to confirm the specificity and selectivity of binding. Recently, multiple groups have shown that the quantity of a diverse array of infiltrating immune cell types in a specimen can be inferred based on characteristic gene expression patterns unique to or enriched in specific cell types [5,6]. In theory, a single RNA sequencing (RNA-Seq) analysis of preserved tissue can provide an assessment of immune cell infiltration as well as other information such as the cytokine and chemokine balance that may be regulating cell entry and retention in the tissue, together with candidate features of the cancer cells that orchestrate this environment.
The addition of simultaneous whole exome sequencing can permit comprehensive profiling of cancer driver mutations and immune-targetable mutations, as well as a personalized understanding of the patient's immune profile [7,8]. However, it remains unclear whether IHC and gene expression-based immune assessment approaches are highly concordant. For example, the utility of RNA-Seq in tumor profiling can be limited by a range of factors, including degradation of transcripts in excised human tissues and by common tumor preservatives (e.g., formalin), and by the limited ability to detect low-abundance transcripts. The latter is of particular concern in pancreatic cancer, which can have a relatively low infiltration of critical cell types such as CD8 T cells. Thus far we are not aware of any studies that have directly compared IHC quantification of immune infiltration to RNA-Seq-based analyses in pancreatic cancer. In this study we aim to directly compare conventional IHC and gene expression-based approaches to characterize the immune environment of pancreatic cancer. We hypothesize that RNA-Seq analysis can provide a comprehensive alternative to quantitative IHC for immunophenotyping pancreatic cancer. We performed RNA-Seq on a prospective cohort of 39 pancreatic adenocarcinoma patient tumors with matched quantitative IHC, and evaluated approaches to quantify infiltrating immune cells using gene expression data. We found limited agreement between IHC and RNA-Seq analysis of infiltrating cells; however, concordance was greatest when multiple cell types were aggregated to identify a mixed population, such as CD3+ T cells from a combination of CD4+, CD8+, and other cell types that express CD3. This aggregation may overcome the limitation of low T cell-derived RNA transcripts in poorly immune-infiltrated tumors.
As an alternative, we identified gene signatures that were enriched in highly T cell infiltrated tumors and that are associated with increased disease-free survival in other patient cohorts. These data demonstrate that immune infiltration remains an important predictor of outcome in pancreatic cancer patients, and that RNA analysis can provide an important addition to IHC data for understanding the complexities of the immune environment that influence patient outcome.

Quantitative immunohistology for infiltrating immune cells in pancreatic cancer

We conducted a prospective cohort study of resectable pancreatic masses to determine which immunologic parameters have prognostic value. All procedures were approved under the Providence Portland Medical Center Institutional Review Board, approval number IRB 10-037, and patients provided written informed consent. We restricted our analysis to adult patients who underwent surgical resection for pancreatic masses. Patients were recruited from June 2010 to November 2014 at Providence Portland Medical Center in Portland, OR, where the research was conducted. Inclusion criteria were patients 18 years or older with a diagnosis of a pancreatic or ampullary mass who were scheduled for surgical resection. Patients had to be able to give informed consent and could not have a diagnosis of a prior malignancy unless they had been disease-free for 10 years. We included patients who were subsequently determined to have other histologies. Demographics and survival of these patients are outlined in Table 1. Prior studies on this cohort had identified a positive correlation between CD3+ T cell infiltrate and overall survival by multivariate Cox modeling and univariate analysis [1], so this sample set was applied for additional genomic analysis. Additional power calculations were not performed.
Tumor infiltrating immune cells were quantified by immunohistochemistry and quantitative digital image analysis for CD3+, CD68+, and CD8+ cells as previously described [1]. Infiltrating cells were quantified from whole-slide digital images scanned at 20x resolution (Leica SCN400). Regions of interest were defined with pathologist guidance using Definiens Tissue Studio (Definiens Inc), and the automated algorithm used the immunohistology staining combined with nuclear counterstain to count total cells and positive cells, reporting a marker-positive cell density per mm² of tissue for each patient. The primary outcome was overall survival.

Table 1. Demographics of patients on the study: Pancreatic Adenocarcinoma, n = 75.

RNA-Seq analysis of pancreatic cancer

Of the patients above with pancreatic masses, we randomly selected 39 patients with pathologically diagnosed pancreatic adenocarcinoma, no neoadjuvant treatment, and matched quantitative IHC for RNA-Seq. All subsequent analyses of IHC and RNA-Seq were performed on patients who had received no prior treatment. A representative Hematoxylin and Eosin (H&E)-stained slide for each formalin-fixed paraffin-embedded (FFPE) tissue block specimen was reviewed by a board-certified pathologist for tumor content, and tumor-rich regions were identified for microdissection. Blocks were matched but not in series with IHC sections. 5 μm thick unstained sections on glass slides were processed for DNA and RNA purification by the Providence Molecular Genomics Laboratory. The FFPE tissue sections were deparaffinized using Envirene (Hardy Diagnostics), followed by RNA extraction and purification using the Qiagen AllPrep DNA/RNA FFPE kit. 85 ng of input RNA was used to prepare sequencing libraries using the Illumina TruSeq RNA Exome kit. Sequencing of the RNA Exome libraries was performed on the Illumina HiSeq 4000 instrument in a 2 × 75 paired-end configuration. Transcripts were quantified using salmon v0.11.2 [9].
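The marker-positive cell density reported above is a simple ratio of counted cells to analysed tissue area; a minimal sketch (the function name and example counts are illustrative, not taken from the study):

```python
def marker_density_per_mm2(positive_cells: int, region_area_mm2: float) -> float:
    """Density of marker-positive cells per mm^2 of analysed tissue region."""
    if region_area_mm2 <= 0:
        raise ValueError("region area must be positive")
    return positive_cells / region_area_mm2

# Hypothetical example: 1240 CD3+ cells counted over 8.6 mm^2 of tumour tissue
cd3_density = marker_density_per_mm2(1240, 8.6)  # cells per mm^2
```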
A matrix of gene expression values for all patients analyzed in this study, along with matched quantitative IHC, is provided as S1 Table.

Computational analysis of infiltrating cells and comparison of techniques

RNA-Seq-based cell type deconvolution was performed using xCELL [5] and CIBERSORT [6], with TPM gene expression levels as input. xCELL was used to perform cell type enrichment analysis from gene expression data for 64 immune and stromal cell types, whereas CIBERSORT provides absolute and relative abundances of different immune cell types depending on the specified gene set. In the present analysis, the LM22 signature provided by CIBERSORT was applied. We also applied EPIC [22] and MCPcounter [23] to estimate the abundance of immune cell populations. EPIC estimates the proportions of immune and cancer cells using RNA-Seq-based gene expression reference profiles from immune cells and other nonmalignant cell types found in tumors. MCPcounter quantifies the abundance of eight tissue-infiltrating immune and two stromal cell populations based on the transcriptome profile. Initial clustering of patients based on infiltrating cell types, including calculation of principal components, was performed using ClusterVis [10]. Missing data were assigned using Singular Value Decomposition with imputation, iterating until estimates of missing values converge. Statistical significance of the resulting hierarchical clusters was assessed using the sigclust2 R package [11]. Correlation between infiltrating cell types calculated using RNA-Seq versus quantitative IHC was assessed by generating a composite of T cell and macrophage cell types determined by RNA-Seq for direct comparison to IHC populations (Table 2). Correlations between xCELL immune cell type enrichment and quantitative IHC, as well as between CIBERSORT cell type abundances and quantitative IHC, were determined by Spearman rank correlation.
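The Spearman rank correlation used throughout these comparisons is simply Pearson correlation computed on tie-averaged ranks; a self-contained sketch (in practice a library routine such as scipy.stats.spearmanr would be used):

```python
def _ranks(values):
    """1-based ranks, averaging ranks over tied runs."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because only ranks matter, any monotone relationship (e.g. x versus x squared) yields rho = 1.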
Identification of genes associated with high and low T cell infiltration

Patients were categorized into high and low infiltration of CD3+ and CD8+ T cells according to quantitative IHC and sub-categorized into those with high infiltration of both populations (CD3HI CD8HI) versus low infiltration of both populations (CD3LO CD8LO). Using RNA-Seq data from these patients, gene expression analyses were performed using a univariate two-sample t-test with a stringent false-positive threshold to identify genes significantly differentially expressed (p < 0.001) in our patients [12,13]. These genes were mapped to known pathways using the Reactome Functional Interaction network tool [14]. The genes were then tested on the TCGA Pancreatic Adenocarcinoma PanCancer Atlas [15] as a validation cohort using cBioPortal [16,17], where mRNA expression z-scores are compared to the expression distribution of each gene in tumors that are diploid for that gene. To be classified as enriched for the gene score, a sample had to have at least one gene overexpressed by at least 2 log-fold. Survival and expression data were exported to GraphPad Prism for survival comparison using log-rank tests. Pre-calculated xCELL analyses of patients in the TCGA database were obtained from http://xcell.ucsf.edu.

Statistical methods

Survival data were analyzed using Prism (Version 8.4.2, GraphPad Software, La Jolla, CA). Overall survival of groups was compared using a log-rank test for differences in Kaplan-Meier survival curves. All cutoffs for high/low infiltration by RNA analysis use median values. Gene expression analyses were performed using a univariate two-sample t-test with a stringent false-positive threshold to identify genes significantly differentially expressed (p < 0.001) [12,13]. Correlations between immune cell type enrichment and quantitative IHC were determined by Spearman rank correlation.
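The two-sample screening step can be sketched as follows. This is a simplified illustration: it ranks genes by a Welch t statistic against a raw |t| cutoff rather than the calibrated p < 0.001 threshold used in the study (which would require a t-distribution p-value, e.g. via scipy.stats.ttest_ind), and the gene names and expression values in the example are hypothetical:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two samples with unequal variances."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

def screen_genes(expr_hi, expr_lo, t_cutoff=4.0):
    """Rank genes by |t| between CD3HI/CD8HI and CD3LO/CD8LO groups.

    expr_hi / expr_lo: dicts mapping gene -> list of expression values
    for the high- and low-infiltration patient groups, respectively.
    """
    hits = {}
    for gene in expr_hi:
        t = welch_t(expr_hi[gene], expr_lo[gene])
        if abs(t) >= t_cutoff:
            hits[gene] = t
    return dict(sorted(hits.items(), key=lambda kv: -abs(kv[1])))

# Hypothetical data: CD3E cleanly separates the groups, NEBL does not
hi = {"CD3E": [9, 10, 11, 10], "NEBL": [5, 6, 5, 6]}
lo = {"CD3E": [2, 3, 2, 3], "NEBL": [5, 6, 6, 5]}
hits = screen_genes(hi, lo)
```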
To assess the statistical significance of the correlation between RNA-Seq-based cell type deconvolution and the CD3, CD8, or CD68 immune cell types determined by quantitative IHC, the composition of underlying cell types specified in Table 2 was randomly permuted. The number of cell types constituting each group was kept the same between the true set and the permuted set. Owing to the number of cell types estimated by xCELL (64 cell type populations) and CIBERSORT (22 cell type populations), we conducted 100 and 20 different random assignments of cell types, respectively, attributed to the CD3, CD8, and CD68 IHC populations in Table 2. For each permutation, the Spearman rank correlation was computed between the random cell type assignment and quantitative IHC, allowing a level of statistical significance to be estimated for the true set with respect to all permuted sets, thereby indicating whether the RNA-Seq-based cell type deconvolution methods are statistically significantly correlated with quantitative IHC. Additional correlations between multiple variables were analyzed using Prism (Version 8.4.2) to calculate Pearson correlation coefficients. Statistical significance of hierarchical clusters was assessed using the sigclust2 R package [11]. Differential gene expression analysis between CD3HI CD8HI versus CD3LO CD8LO RNA-Seq samples was carried out using DESeq2 [18]. We identified differentially expressed genes and ranked them by fold-change and p-value. This gene signature was further used for gene set enrichment.

Results

We previously demonstrated that increased CD3+ T cell infiltrates in surgically resected pancreatic adenocarcinoma correlate with improved outcomes using a Cox proportional hazards model, and remained prognostic by multivariable analysis [1]. By contrast, CD8+ T cell infiltrates and CD68+ macrophage infiltrates did not correlate with outcome [1].
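The permutation idea in the statistical methods above can be sketched in a simplified form: rather than permuting the cell-type assignments, this sketch shuffles the sample pairing and asks how often a random pairing correlates at least as strongly as the observed one (tie-free data are assumed so the compact rank formula applies):

```python
import random

def spearman_no_ties(x, y):
    """Classic Spearman formula, valid when there are no tied values."""
    n = len(x)
    rx = {v: i + 1 for i, v in enumerate(sorted(x))}
    ry = {v: i + 1 for i, v in enumerate(sorted(y))}
    d2 = sum((rx[a] - ry[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def permutation_pvalue(x, y, n_perm=999, seed=1):
    """One-sided permutation p-value for the observed Spearman correlation."""
    observed = spearman_no_ties(x, y)
    y_shuf, hits = list(y), 0
    rng = random.Random(seed)
    for _ in range(n_perm):
        rng.shuffle(y_shuf)
        if spearman_no_ties(x, y_shuf) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one smoothing
```

A perfectly concordant pairing yields a small p-value; a perfectly discordant one (observed rho = -1) yields a p-value of 1, since every shuffle does at least as well.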
However, using categorical variables of high or low CD3+, CD8+, and CD68+ cell numbers in tumors, we were able to identify cutoffs demonstrating that patients with high CD3+ T cells or high CD8+ T cells exhibited improved survival, whereas CD68+ macrophages again did not correlate with survival (Fig 1a). To evaluate the complexity of the infiltrate in patient tumors, we performed hierarchical cluster analysis. Initially, we used our broader dataset, which included pancreatic ductal adenocarcinoma (PDA) with and without neoadjuvant treatment, benign pancreatic masses, pre-malignant disease, neuroendocrine tumors, and a small number of duodenal and gallbladder adenocarcinomas. Principal component analysis was not able to distinguish these pathologies based on CD3+, CD8+, and CD68+ cell infiltrate (Fig 1bi), and while cluster analysis tended to group benign and premalignant disease in poorly infiltrated groups, there was not a clear classifier to distinguish PDA from other related pathologies (Fig 1bii). Prior studies have demonstrated that patients with the highest macrophage proportions and lowest CD8+ T cell proportions exhibit worse outcomes than those with the lowest macrophage proportions and highest T cell proportions [19]. We calculated the correlation between CD3+, CD8+, and CD68+ cell infiltrates in patients with all pathologies and in those with PDA and found good correlation between CD3+ and CD8+ infiltrates, and poor, but not negative, correlation between T cell and CD68+ cell infiltrates (Fig 1c). To determine whether the degree of CD68+ cell infiltrate impacted outcome for patients with high or low T cell infiltrates, we tested the effect of CD68+ cell infiltrate at each T cell cutoff. For patients with PDA we did not find an effect of macrophage infiltration on the survival benefit of CD3+ T cells (Fig 1di and 1dii) or CD8+ T cells (Fig 1diii and 1div).
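The survival comparisons above use the log-rank test (computed in GraphPad Prism in the study); its core machinery can be sketched as follows, with hypothetical survival times:

```python
def logrank_statistic(times_a, events_a, times_b, events_b):
    """Chi-squared log-rank statistic (1 df) for two survival curves.

    times: follow-up times; events: 1 = event observed, 0 = censored.
    """
    grid = sorted({t for t, e in zip(times_a + times_b, events_a + events_b) if e})
    o_minus_e, var = 0.0, 0.0
    for t in grid:
        n_a = sum(1 for ta in times_a if ta >= t)  # at risk in group A
        n_b = sum(1 for tb in times_b if tb >= t)  # at risk in group B
        d_a = sum(1 for ta, e in zip(times_a, events_a) if ta == t and e)
        d_b = sum(1 for tb, e in zip(times_b, events_b) if tb == t and e)
        n, d = n_a + n_b, d_a + d_b
        if n < 2:
            continue
        e_a = d * n_a / n                      # expected events in A under H0
        o_minus_e += d_a - e_a
        var += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    return o_minus_e ** 2 / var if var > 0 else 0.0
```

Two well-separated hypothetical groups give a statistic well above the 5% chi-squared critical value of 3.84 (1 df), while identical groups give exactly zero.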
These data demonstrate that quantitative immunohistology was able to identify good and poor outcome groups based on CD3+ and CD8+ T cell infiltrate, but analyzing the degree of CD68+ macrophage infiltrate, alone or in combination with T cell infiltrates, did not help refine outcome groups. There are significant limitations in the use of CD68 as a sole marker of macrophages in tumors [20], particularly in view of the diverging phenotypes macrophages can generate. There is not a well-defined set of markers unique to distinct macrophage polarization states that does not overlap with other cell types. Recently, a number of algorithms have been developed that can analyze gene expression data to estimate the prevalence of a broad range of cell types in a mixed tissue sample [5,6,21]. To evaluate whether gene expression analysis could refine our understanding of the immune environment of pancreatic cancer, we tested two different approaches, CIBERSORT [6] and xCELL [5]. CIBERSORT has been most widely applied, and provides an assessment of 22 of the most common immune cell types and some information on the differentiation of CD4+ T cells and macrophages [6]. We performed RNA-Seq on a subset of our PDA patients with quantitative IHC, and performed CIBERSORT analysis of immune infiltration using the RNA-Seq data. We used a correlation analysis to determine whether some cell types were co-regulated in the tumor, but there was little evidence of correlation between the infiltration of different immune cell types in the tumor (S1 Fig).

[Fig 1 legend, continued: Cutoffs used were approximately: CD3, 75th percentile; CD8, median; CD68, no significant cutoff found, 75th percentile shown. b) Infiltrating CD8+, CD3+, and CD68+ cells across a range of pathologies were used to evaluate principal component analysis and clustering. i) Unit variance scaling is applied to rows; SVD with imputation is used to calculate principal components. X and Y axes show principal components 1 and 2, which explain 56.7% and 30.5% of the total variance, respectively. Prediction ellipses are such that, with probability 0.95, a new observation from the same group will fall inside the ellipse. N = 123 data points. ii) Clustering of patients according to infiltrates. Imputation is used for missing value estimation. Both rows and columns are clustered using Manhattan distance and complete linkage; 3 rows, 123 columns. c) Pearson correlation coefficients for CD8, CD3 and CD68 infiltrating cells in i) all …]

We performed cluster analysis on patients based on their immune infiltrates calculated by CIBERSORT (Fig 2ai), which appeared to identify a diffuse cluster of patients with higher numbers of CD8 T cells and dendritic cells; however, there was no strong statistical association between these cell types, and this analysis was not able to identify patient groupings with significant differences in overall survival (not shown). To directly compare these CIBERSORT-calculated infiltrating cell types to the quantified immunohistology from the same samples, we made three combined groups (Table 2): (1) a CD3+ equivalent based on cell types that express CD3 (Tregs + CD4 populations + CD8 + gd T cells); (2) a CD8+ equivalent (CD8); and (3) a CD68+ macrophage equivalent (M0 + M1 + M2). We then evaluated the correlation between infiltration determined by CIBERSORT analysis of RNA-Seq versus quantified immunohistology. Both CD3+ and CD8+ T cells showed a weakly positive correlation between the IHC and CIBERSORT assessments, but CD68 did not correlate well between histology and CIBERSORT (Fig 2b). Spearman's rho computed between IHC and CIBERSORT relative cell type abundance was 0.355, 0.308, and 0.144 for CD3, CD8, and CD68, respectively. Notably, we see a number of patients with no detectable infiltrating T cells or macrophages by CIBERSORT who did have detectable cells by histology.
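The Table 2 aggregation of deconvolution outputs into IHC-equivalent scores amounts to summing the constituent cell-type fractions; a sketch (the column names are illustrative stand-ins, not the exact LM22 labels):

```python
# Hypothetical aggregation map echoing Table 2
EQUIVALENTS = {
    "CD3": ["Tregs", "CD4_naive", "CD4_memory", "CD8", "gd_T"],
    "CD8": ["CD8"],
    "CD68": ["M0", "M1", "M2"],
}

def aggregate(fractions, mapping=EQUIVALENTS):
    """Collapse per-cell-type deconvolution fractions into IHC-equivalent
    scores by summing the constituent cell types (missing types count as 0)."""
    return {marker: sum(fractions.get(ct, 0.0) for ct in cols)
            for marker, cols in mapping.items()}

# Hypothetical per-sample deconvolution output
fractions = {"Tregs": 0.05, "CD4_naive": 0.1, "CD4_memory": 0.1,
             "CD8": 0.2, "gd_T": 0.0, "M0": 0.1, "M1": 0.05, "M2": 0.15}
scores = aggregate(fractions)
```

Pooling related cell types in this way is what allowed the low per-type signal to be compared against a single IHC marker.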
Since CIBERSORT is dependent on key RNA transcripts being present among the RNA sequenced, we believe that at low cell infiltration this approach can struggle to identify the RNA signature of rare infiltrating cells. To determine whether CIBERSORT infiltration of these key cell types predicted outcome, we similarly stratified patients into high or low infiltration groups. We found that patients with high combined CD3+ equivalent scores exhibited improved overall survival (p < 0.05), but the CD8 and macrophage scores were not able to discriminate patients with significantly different overall survival (Fig 2c). These data suggest that while CIBERSORT analysis can provide additional information on the diversity of immune cells in tumors, it does not improve our ability to predict outcome over quantitative histology. Since there is general agreement between the assessment of total CD3 infiltrate by histology and by CIBERSORT, and each is associated with improved outcome in patients, aggregating CIBERSORT T cell infiltration could be further tested as a prognostic factor in pancreatic cancer patients. As an alternative approach, xCELL can identify 64 different cell populations and composite infiltration scores from RNA-Seq data [5]. We performed xCELL analysis of immune infiltration in pancreatic cancer and generated a correlation matrix to examine associations between different cell types. Clear patterns emerged, with some tight clusters based around epithelial cells or Th2 cells, and broader groupings of co-regulated cells including Th1 cells, DC, M1 macrophages, and CD8 T cells (S2 Fig). To determine whether these co-resident cells identified unique patient populations, we clustered patients based on their immune infiltrate, identifying patients with higher levels of CD8 T cells, DC, and M1 macrophages (Cluster A), while those with higher levels of fibroblasts and endothelial cells formed a distinct cluster (Cluster B) (Fig 3a).
[Fig 1 legend, continued: … all pathologies; ii) PDA. d) Overall survival of PDA patients with i) high or ii) low CD3+ infiltrates, and iii) high or iv) low CD8+ infiltrates, subdivided according to high or low CD68 infiltrates as determined in a). The number of patients on each arm of the survival curves is shown in grey. https://doi.org/10.1371/journal.pone.0238380.g001]

[Fig 2 legend, fragment: b) … (Table 2) determined by CIBERSORT compared to quantitative IHC from the same patient. Each symbol represents one patient. c) Overall survival of patients with high versus low infiltrates of i) CD3+, ii) CD8+, and iii) CD68+ equivalent cell populations …]

[Fig 3 legend, fragment: b) … (Table 2) determined by xCELL compared to quantitative IHC from the same patient. Each symbol represents one patient. c) Overall survival of patients with high versus low infiltrates of i) CD3+, ii) CD8+, and iii) CD68+ equivalent cell populations (Table 2) determined by median xCELL infiltration. NS = not significant. d) Overall survival of patients with high versus low i) immunescore, ii) stromascore, and iii) environment score, using median values as cutoffs. https://doi.org/10.1371/journal.pone.0238380.g003]

Comparing the overall survival of patients in each cluster demonstrated that there were no significant differences between groups (Fig 3b). In view of the high correlation between specific cell types resulting in apparent clusters, we investigated whether this was due to closely related cells having overlapping genes in the underlying gene signatures. Using the gene signatures that determine cell types in xCELL, we computed the percent match between genes across all cell types (S3 Fig). These data demonstrated that most cells were defined using a unique gene set. We did identify overlapping gene usage between related cell types such as DC subtypes or T cell subtypes; however, there were no significant gene overlaps between, for example, DC and T cells that would explain their correlation.
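The signature-overlap check can be sketched as a pairwise percent-match matrix; the normalization (share of the smaller signature) and the gene lists below are assumptions for illustration, not the xCELL definitions:

```python
def percent_overlap(sig_a, sig_b):
    """Percent of the smaller signature's genes shared with the other."""
    a, b = set(sig_a), set(sig_b)
    denom = min(len(a), len(b))
    return 100.0 * len(a & b) / denom if denom else 0.0

def overlap_matrix(signatures):
    """All pairwise percent overlaps between named gene signatures."""
    names = sorted(signatures)
    return {(p, q): percent_overlap(signatures[p], signatures[q])
            for p in names for q in names}

# Hypothetical mini-signatures
sigs = {"CD8_T": ["CD8A", "CD8B", "GZMB", "PRF1"],
        "DC": ["ITGAX", "CD1C", "BATF3"],
        "CD4_T": ["CD4", "IL7R", "GZMB"]}
m = overlap_matrix(sigs)
```

A high off-diagonal value would flag a pair of cell types whose apparent co-occurrence could be an artifact of shared signature genes rather than true co-infiltration.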
These data suggest that the positive correlation between the number of T cells and DCs in patient tumors is likely a result of the presence of both cell types in the analyzed samples. To compare these calculated infiltrating cell types to quantified immunohistology, we again made three combined groups (Table 2): (1) a CD3 equivalent (Tregs + all CD4 populations + all CD8 populations + gd T cells); (2) a CD8 equivalent (all CD8 populations); and (3) a CD68 macrophage equivalent (M0 + M1 + M2). We evaluated the correlation between cell infiltration assayed by xCELL analysis of RNA-Seq versus quantified immunohistology. Each population showed a positive correlation between the two approaches; however, many of the samples were calculated to have no CD8 T cells by xCELL, even in patients with relatively abundant CD8 T cells as determined by immunohistology (Fig 3b). As with the CIBERSORT analysis, despite the positive correlation, the R² value was not strong for any cell type. Spearman's rho computed between IHC and xCELL was 0.354, 0.307, and 0.144 for CD3, CD8, and CD68, respectively. We also evaluated whether infiltration of these cell types impacted outcome, and we could not detect a cutoff that impacted overall survival (Fig 3c). xCELL also calculates three additional fields that integrate many of the infiltrating cell features to generate an "immunescore", a "microenvironment score", and a "stromascore". Such combined fields may have an advantage over individual cell types, particularly where the cells are of low abundance. Broadly, the immunescore was correlated with T cell infiltration, while the stromascore was correlated with fibroblast and endothelial cell infiltration (S3 Fig). Patients with a high immunescore exhibited improved overall survival, but the stromascore and the microenvironment score were not able to distinguish patient groups with improved outcome.
These data suggest that xCELL can provide a more complex understanding of the immune cell diversity in tumors; however, there remain significant issues identifying the small numbers of T cells infiltrating pancreatic adenocarcinoma. There may be a benefit in integrating multiple immune cell types through features such as the immunescore to identify tumor environments indicative of improved outcome. There are increasing numbers of methods to analyze cell infiltrates from RNA data. We compared the additional methods EPIC [22] and MCPcounter [23] with the same dataset. The MCPcounter assessment of T cell infiltration correlated well with the IHC CD3 infiltrate, but all other analyses had poor correlation to IHC data (S4 Fig). To understand whether there was agreement amongst the various RNA-based approaches, we analyzed the correlation between the various infiltrating T cell populations assessed by CIBERSORT, xCELL, MCPcounter, and EPIC. The correlation between the different approaches using the same RNA dataset was moderate, but the closest correlations were found among CD4 T cell populations and relatively poor correlations between CD8 T cell populations (S4 Fig). These data suggest that there are significant differences between the RNA-based approaches and each has difficulty consistently identifying CD8 T cell infiltrates in T cell poor tumors like pancreatic cancer. To determine whether there is an alternative RNA signature of high T cell infiltration that can be used in pancreatic tumors to infer T cell infiltration and assess outcome using RNA-Seq samples, we identified genes that were enriched in tumors with both high CD3 and high CD8 infiltration or both low CD3 and low CD8 infiltration by quantitative IHC. Class comparison of gene expression identified a subset of genes that were statistically associated with highly T cell infiltrated tumors (Table 3, Fig 4a). 
As would be predicted, these included genes encoding CD3 as well as genes involved in T cell signaling such as FYN and LAT. Interestingly, the gene set includes the immunotherapy target CTLA4 [24], as well as SLAMF6, a marker of progenitor exhausted T cells [25]. To determine whether increased expression of these genes was a useful predictor of outcome in pancreatic cancer patients, we examined expression of these genes in pancreatic adenocarcinoma patients in the TCGA database [15] and their effect on patient outcome. Initial analysis indicated that patients with increased expression of the genes positively associated with T cell infiltration in our cohort had significantly increased overall survival and disease-free survival. However, following curation of the TCGA dataset according to Peran et al. [26] to remove mischaracterized tumors from the cohort, overall survival was no longer significantly different, but disease-free survival remained significantly improved (Fig 4b). These data suggest that the gene set was identifying patients with neuroendocrine tumors and that the improved prognosis of these patients was influencing the overall survival results in the uncurated dataset. To understand whether the gene set was associated with increased T cell infiltration, we obtained pre-calculated xCELL analyses of infiltrating cells in these pancreatic cancer TCGA specimens (https://xcell.ucsf.edu) and examined the correlation of each cell type with the expression of genes in our panel. To find potential patterns of biological significance, we correlated the expression of the gene set with the infiltration of immune cells across the TCGA cohort and performed clustering to gather co-regulated genes and cells together. We discovered that, as in our cohort, there was a strong correlation between infiltrating T cells and dendritic cells in pancreatic tumors, and these correlated closely with the expression of genes in our panel (Fig 4c).
Macrophage and endothelial cell infiltration did not correlate well with any of the genes in our panel, and smooth muscle cell, keratinocyte, and epithelial cell infiltration correlated best with some of the genes that were negatively associated with T cell infiltration, such as NEBL and HSP90AB1. These data indicate that the gene signature associated with high T cell infiltration in our pancreatic cancer cohort can similarly identify high T cell infiltration in other pancreatic cancer cohorts represented in the TCGA database. Further studies are needed to understand whether these genes have functional roles and are potential therapeutic targets.

Discussion

Despite the poor overall prognosis of pancreatic cancer, patients with high numbers of infiltrating T cells as determined by quantitative immunohistology have improved outcomes. To understand the complexity of the tumor immune environment, we performed RNA-Seq and evaluated gene expression-based analyses of tumor-infiltrating cells. We found that there were limitations in current gene expression analyses of infiltrating immune cells, particularly where overall infiltration was low. Both CIBERSORT and xCELL showed poor concordance with IHC, and individual immune cell types identified by gene expression analysis had limited prognostic value. This could be somewhat overcome by aggregating molecularly-identified immune populations, for example into total T cell infiltrates, which showed some correlation with quantitative immunohistology and could be predictive of outcome. To determine if we could identify alternative molecular signatures of highly infiltrated tumors, we performed class comparison of gene expression and identified a transcriptional pattern in pancreatic adenocarcinoma that had predictive value for disease-free survival in the TCGA cohort. Immunohistology with validated antibodies is the gold standard for quantitative assessment of infiltrating immune cells in cancer [27].
However, the diversity of cell types and the limited number of cell type-specific markers make it difficult to accurately assess many infiltrating cell types using histology. Flow cytometry is better able to address this diversity, using a series of gates to distinguish cell subtypes expressing multiple overlapping markers; however, this cannot be performed with archived tissues. Recent improvements in image analysis and technological improvements in multiplex staining have permitted much more complex assessment of tumors using immunohistology [28]. This is critical since many of our single markers have limitations. For example, our study used CD68 as a well-validated marker for tumor-associated macrophages. However, there are significant limitations in the use of CD68 as a sole marker of macrophages in tumors [20], particularly in view of the diverging phenotypes macrophages can adopt. In particular, while there may be a spectrum of macrophage phenotypes [29], the polarized M1 (classical) versus M2 (alternative) macrophage phenotype has proven useful in discriminating macrophages that support versus suppress adaptive immunity to tumors [30][31][32][33]. Therefore, the presence of macrophages does not necessarily indicate that they generate immune suppression in the tumor. The recent development of algorithms that can analyze gene expression data to estimate the prevalence of a broad range of cell types in mixed tissue samples, combined with the increasing affordability of comprehensive genomic profiling of patient tumors, has opened new avenues of research [21]. While gene expression data can provide a great deal of information from small quantities of tissue, there are potential issues in estimating immune cell numbers in pancreatic tumors, since the abundance of some of these populations can be very low even when they have prognostic significance.
For example, xCELL gene signatures were identified using purified populations and validated on peripheral blood samples [5], which have a very different immune cell abundance when compared to tumors. CIBERSORT was shown to have superior performance to other approaches available at the time to assess immune infiltrating cells from genomic data [6]; however, performance was limited when the target immune populations represented fewer than 1% of the spiked mixture. Each approach provides valuable information on immune infiltration and may be sufficient to assess the environment of more abundantly infiltrated tumors. However, there are limitations based on the number of RNA reads provided by each infiltrating cell in a bulk population. Immunohistology has its own limitations, particularly those relating to epitope preservation through tissue processing, and the difficulties in standardizing staining over time and between institutions. In this study we set quantitative immunohistology as the gold standard for comparison; however, standard sampling issues such as selection of an appropriate archived tissue block and the relevance of a single 5 μm section to the tissue as a whole can lead to inaccuracies that apply to each approach. Novel technologies are emerging that incorporate the geographic information of histology with comprehensive gene profiling, and have the potential to change how we assess the immune complexity of tumors [34]. Further analysis of patients giving discordant RNA and IHC data would be valuable to understand the impact of sampling versus other complicating factors that could explain the variations. The strength of genomic and other omic profiling is the wealth of data that can be extracted simultaneously.
Along with infiltrating cells, omic analyses can inform on the mutational status of the tumor [8] to identify immunotherapeutic targets [35], and identify expression of inflammatory and chemokine markers that may dictate immune cell recruitment [36,37]. Such analyses would be best combined with genomic analysis approaches that subdivide patients according to novel molecular subtypes, some of which include tumor subtypes associated with higher immune infiltrates [38,39]. However, further subdivision of patients will require much larger cohorts to generate meaningful results. All of this can be performed on very small quantities of patient material, at decreasing cost and with increasing speed. While IHC-based multiplexing continues to increase the number of analytes that can be assessed on a single tissue sample, this approach depends on the availability of high-quality validated antibodies for each target. Unbiased sequencing-based approaches are to some degree futureproofed against genes that may be of interest yet do not currently have reagents for IHC or other analyses. Further refinement of gene expression profiling for pancreatic adenocarcinoma and similar poorly infiltrated tumors could have significant benefits in personalizing immunotherapy for these recalcitrant tumors. For example, we show that CTLA4 is enriched in patients with high T cell infiltration and is part of the gene set that is associated with improved disease-free survival. Antibodies targeting CTLA4 are an effective immunotherapy in some tumors [40], but single-agent anti-CTLA4 is not effective in patients with locally advanced and metastatic pancreatic adenocarcinoma [41]. In preclinical models of pancreatic adenocarcinoma, we similarly found that anti-CTLA4 is ineffective as a single agent [42]. The combination of anti-CTLA4 and radiation therapy is curative, but only where the host has good pre-existing immunity to the tumor.
Thus, gene expression profiling may help identify patient subsets with adequate T cell infiltration that may benefit from immunotherapy combinations, and direct other patients to novel interventions to improve their tumor immune environment prior to further treatment.

Conclusions

While immunotherapy options are currently limited for patients with pancreatic adenocarcinoma, these and other data showing an impact of immune infiltrates on patient outcomes suggest we should continue to refine our understanding of the immune environment and pursue immune therapies that are appropriate to the particular tumor environments of pancreatic cancer. At present, RNA-Seq-based analyses must take into account the poor overall infiltrate in some tumor types to provide accurate assessments of the tumor environment.

S4 Fig. a) Comparison of i) MCPcounter T cell vs IHC CD3+, ii) MCPcounter CD8 T cell vs IHC CD8+, and iii) MCPcounter monocytic lineage vs IHC CD68+ cell populations from the same patient. Each symbol represents one patient. b) Analysis as in a) for i) EPIC Bref CD8 T cells and ii) EPIC Tref macrophages. c) Pearson correlation matrix of infiltrating T cell subsets calculated from RNA-Seq data using xCELL, CIBERSORT, MCPcounter, EPIC Bref, and EPIC Tref. Rows are centered; no scaling is applied to rows. Both rows and columns are clustered using Manhattan distance and average linkage. (PDF)

S1 Table. Gene-based expression levels for all patients with RNA-Seq analysis in the study, along with quantitative IHC. (TXT)
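The row/column clustering named in the supplementary figure legend (Manhattan distance, average linkage) can be sketched in a few lines of greedy agglomerative clustering; the rows below are hypothetical correlation profiles for illustration only, not values from the study:

```python
def manhattan(a, b):
    """Manhattan (L1) distance between two equal-length profiles."""
    return sum(abs(x - y) for x, y in zip(a, b))

def average_linkage(c1, c2, points):
    """Mean pairwise Manhattan distance between two clusters of row indices."""
    dists = [manhattan(points[i], points[j]) for i in c1 for j in c2]
    return sum(dists) / len(dists)

def agglomerate(points, n_clusters):
    """Greedy average-linkage agglomerative clustering down to n_clusters."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = average_linkage(clusters[a], clusters[b], points)
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        # merge the closest pair of clusters
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters

# Hypothetical rows of a centered correlation matrix: two genes with
# similar correlation profiles across cell types, plus one outlier.
rows = [[0.9, 0.8, 0.1], [0.85, 0.75, 0.15], [-0.2, -0.3, 0.9]]
clusters = agglomerate(rows, 2)
```

Running the full merge sequence to a single cluster, and recording the merge order, yields the dendrogram used to order rows and columns of the heatmap.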
Reactive Oxygen Species and Antitumor Immunity—From Surveillance to Evasion

The immune system is a crucial regulator of tumor biology with the capacity to support or inhibit cancer development, growth, invasion and metastasis. Emerging evidence shows that reactive oxygen species (ROS) are not only mediators of oxidative stress but also players in immune regulation during tumor development. This review intends to discuss the mechanisms by which ROS can affect the anti-tumor immune response, with particular emphasis on their role in cancer antigenicity, immunogenicity and shaping of the tumor immune microenvironment. Given the complex role that ROS play in the dynamics of cancer-immune cell interaction, further investigation is needed for the development of effective strategies combining ROS manipulation and immunotherapies for cancer treatment.

Introduction

Reactive oxygen species (ROS) are defined as chemically reactive derivatives of oxygen that elicit both harmful and beneficial effects in cells depending on their concentration. Oxidative stress occurs when ROS production overcomes the scavenging potential of cells or when the antioxidant response is severely impaired; as a consequence, nonradical and free radical ROS such as hydrogen peroxide (H2O2), the superoxide radical (O2•) or the hydroxyl radical (OH•) accumulate [1]. They can be by-products of mitochondrial adenosine triphosphate generation in the electron transport chain or they can be produced in enzymatic reactions mainly mediated by the NADPH oxidase (NOX) and Dual Oxidase (DUOX) families, while the antioxidative machinery includes enzymes such as superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GPX) [2]. Although oxidative stress can cause toxicity, it is essential to realize that redox signaling is pivotal for critical functions in physiological systems and immunity against disease.
Indeed, ROS production is recognized as necessary for all stages of the inflammatory process. Both innate and adaptive immunity entail redox-regulated processes, for instance, the governance of immune cell infiltration, activation and differentiation, the oxidative burst of phagocytes, as well as the control of cellular signal transduction and transcription programs [3,4]. It is well established that the immune system plays a complex and dynamic role in cancer progression. In this regard, several studies have demonstrated its dual role due to host-protecting and tumor-sculpting actions [5][6][7]. Oxygen-centered oxidants are formed by many cell types in the tumor microenvironment (TME), including cancer cells and innate and adaptive immune cells. ROS can be both beneficial and detrimental for immune function, therefore they can indirectly impact cancer by acting on important immune modulatory cells essential for control of the immune response. Treg cells can suppress other T cells indirectly through their ability to prevent glutathione release from DCs [35] or directly by secretion of ROS [36]. Indeed, H2O2 was shown to inhibit Nuclear Factor κB (NF-κB)-induced cytokine expression in activated T cells [37,38]. The suppressive functions of MDSCs can be terminated by impeding their ROS production, and MDSC-derived ROS are reported to inhibit T cell responses [39,40]. This phenomenon may result from the loss of TCR ζ chain expression caused by H2O2. MDSCs can also compromise TGF-β-induced Treg conversion from conventional T cells in a ROS-dependent fashion [41].

The Antitumor Immune Response and Cancer Immune Evasion Mechanisms: An Overview

It is well-established that host immune cells can both antagonize and stimulate cancer growth [42]. Indeed, their crucial involvement in tumor progression is acknowledged by the identification of inflammation and immune evasion as hallmarks of cancer [43]. Many inflammatory conditions can favor neoplastic transformation.
However, whether or not inflammation is present at the origin of tumorigenesis, most tumors advance to a state of chronic inflammation that supports distinct aspects of cancer progression. Therefore, interactions between the immune system and the tumor take place at all stages of the disease, from early events of neoplastic transformation to metastatic spread and ultimately also during therapy [44]. During the early stages of tumor development, disease progression is controlled by the T-cell response against tumor-derived antigens, characterized by release of Th-1 cytokines, NK cell recruitment and the presence of CD8+ cytotoxic T cells (CTLs), which identify and kill the more immunogenic cancer cells (i.e., cancer immunosurveillance) [45]. Following the persistent selective pressure of the effector response, tumor subclones are selected and escape immune recognition and elimination by developing mechanisms that mimic peripheral tolerance [46]. At the same time, the tumor promotes the recruitment of CD4+ Tregs that neutralize anti-tumor immune cells. Moreover, as the tumor grows, it becomes hypoxic while the surrounding tissue becomes damaged, both of which are important signals for the recruitment of immune cells. Angiogenesis, extracellular matrix remodeling and immune evasion are influenced by tumor-associated macrophages (TAMs), tumor-associated neutrophils (TANs), myeloid-derived suppressor cells (MDSCs) and immature dendritic cells (DCs), which can accelerate tumor progression, metastasis and therapy resistance [47]. By contrast, the recruitment of cytotoxic macrophages and neutrophils, NK cells and mature DCs leads to the elimination of tumor cells in primary sites and after dissemination. Moreover, immunogenic cell death in the inflamed tumor environment, which occurs in response to certain therapies, may result in antitumor adaptive immune responses [48,49].
Tumor cells evade the immune attack using two main strategies: eluding immune recognition and prompting an immunosuppressive TME [50,51]. Malignant cells can express antigens that have the capacity to induce tumor-specific responses; however, the immune selection of cancer cells that lack or mutate immunogenic tumor antigens, as well as the acquisition of defects or deficiencies in antigen processing and presentation, may lead to loss of their antigenicity. Moreover, tumors can avoid elimination by diminishing their immunogenicity through the modulation of expression of costimulatory and coinhibitory molecules. Furthermore, some tumors evade immune elimination by establishing a suppressive microenvironment.

Impact of ROS in Antitumor Immunity and Immune Escape

Considering what is known about ROS in tumorigenesis [52,53] and their influence on immunity, as described above, it is conceivable that both cancer immune surveillance and immune evasion display some degree of redox regulation, ultimately shaping the cellular fate of tumor-infiltrating immune cells and cancer cell elimination.

Impact of ROS on Tumor Antigenicity and Immunogenicity

Tumor immunogenicity, which is the ability to induce adaptive immune responses, is dictated by two major criteria: antigen expression and antigen presentation (Figure 1). Weak antigenicity elicits a suboptimal immune response that provides the opportunity and time for tumor cells to develop immune evasion mechanisms [54]. Mapping of the subset of the immunopeptidome (the set of peptides selected and presented at the cell surface) that comprises redox-sensitive cysteine residues showed that a high proportion of cysteine-containing peptides are oxidatively modified physiologically [55].
In the context of tumor cells, alterations in the cellular redox state and the free oxygen radicals generated in the inflammatory TME could yield post-translational modification of cysteine residues in proteins [56], which may alter antigenicity and have consequences for T cell escape. Indeed, the oxidative status of antigens can modify T cell receptor affinity to the antigenic peptide [55,57]. Moreover, oxidative stress triggers the upregulation of antigenic peptide generation that is compensated by a limitation of their capacity to be loaded onto major histocompatibility complex (MHC) molecules [58]. Alteration in the expression of co-signaling receptors for T cells is another strategy adopted by cancer cells to escape immune surveillance [59]. Lack of positive costimulatory ligands or the presence of inhibitory ligands on tumor cells has been suggested to contribute to poor anti-tumor T-cell efficacy.
Indeed, co-stimulation deficiency leads to anti-tumor T cell anergy, whereas in the presence of co-inhibitory signals T cell activation is suppressed [60]. ROS were shown to impact the expression of the coinhibitory molecule PD-L1 in cancer cells in vitro, although no simple and direct relationship could be deduced between elevation/reduction of ROS production and modulation of PD-L1 expression [61]. On the other hand, ROS could induce the expression of the costimulatory molecule CD80 via the c-Jun N-terminal kinase (JNK) and p38 mitogen-activated protein kinase (MAPK) pathways, which activated the Signal transducer and activator of transcription 3 (STAT3) transcription factor in colon cancer epithelial cells in vitro [62]. Moreover, it appears that modest generation of ROS by cancer cells can trigger hypoxia [63], which can modulate immunity by regulating the expression of co-stimulatory (CD137, OX-40) and co-inhibitory (PD-L1) molecules for T and NK cell activation [64].
The presentation of antigens on MHC class I molecules is unnecessary for the identification of tumor cells by NK cells; thus tumor cells can still be eliminated even in the absence of proper antigen expression and presentation. Senescent myeloma cells upregulated ligands (MICA, MICB and PVR) for the NK cell activating receptors Natural killer group 2 member D (NKG2D) and DNAX accessory molecule-1 (DNAM1) in an oxidant-dependent manner, resulting in enhanced NK cell activation [65]. Moreover, the upregulation of MICA and MICB gene expression was also shown in the CaCo-2 colon carcinoma cell line upon oxidative stress [66], an effect that could strengthen NK cell recognition and tumor cell elimination.

Impact of ROS on Tumor Microenvironment

Cancer is associated with oxidative stress, mediated through ROS generated mainly by malignant cells, granulocytes, TAMs and MDSCs into the TME. The TME includes a large number of different immune cell types [67], among which MDSCs, TAMs and Tregs concurrently work to restrain the immune response to a tumor, allowing for greater tumor invasion, metastasis and resistance to treatments [68,69]. In this section, we will focus mainly on ROS functions and effects on the distinct tumor-infiltrating immune cells which are essential to the host immune response to cancer (Table 1 and Figure 2).
Tumor Infiltrating Lymphocytes (TILs)

TILs comprise cytotoxic lymphocytes, natural killer cells and T helper 1 lymphocytes, which are pivotal for tumor cell recognition and elimination. As previously described, low levels of ROS are necessary for proper T cell activation, proliferation and differentiation, while high ROS levels have been identified as one of the major factors for immunosuppression and inhibition of T cell activation and proliferation inside the TME [53,104]. TILs could be dysfunctional due to the ROS accumulated in the TME, but they also demonstrated a persistent dysfunction of oxidative metabolism due to loss of mitochondrial function and mass when they infiltrated tumors, which led to impaired effector functions [88]. Moreover, T lymphocytes from the peripheral blood of cancer patients showed augmented ROS production compared to those of healthy subjects [105]. Cellular antioxidant levels proved essential for maintaining the anti-tumor function of T cells within the oxidative TME [34]. A study reported that central memory T cells, characterized by higher cytosolic glutathione (GSH), surface thiol and intracellular antioxidant levels, could persist longer in an immunosuppressive microenvironment and better control tumor growth than effector memory T cells, characterized by lower cytoplasmic antioxidant levels [89]. Indeed, a recent report showed that ROS scavengers could amplify the activation of CD8+ tumor-infiltrating lymphocytes in kidney tumors by activating the mitochondrial superoxide dismutase 2 (SOD2) [90]. Similarly, CTLs armed with engineered T cell receptors (CAR-T cells, chimeric antigen receptor-redirected T cells) that co-expressed catalase were protected from oxidative stress and preserved high tumor killing activity, indicating that hydrogen peroxide contributes to T cell anergy [106]. NK cells are innate lymphocytes able to constrain tumor development by their cytotoxic activity.
However, tumor-infiltrating NK cells usually exhibit defective phenotypes and are characterized by either anergy or reduced cytotoxicity. Indeed, oxidative stress can alter natural killer cell functioning, contributing to immune escape within the TME. Hydrogen peroxide produced within the TME inversely correlated with the infiltration of NK cells, possibly due to their preferentially induced cell death [83], whereas H2O2 derived from macrophages isolated from melanoma-bearing patients was demonstrated to reduce T and NK cell-mediated cytotoxic activity [78]. Furthermore, tumor-produced ROS likely caused NK cell dysfunction in chronic myelogenous leukemia (CML), since catalase could restore NK cell cytotoxic capacity against primary tumor cells obtained from patients affected by this malignancy [79]. The inhibitory activity of ROS on NK cell recruitment was observed in melanoma and sarcoma mouse models [70]; furthermore, myeloid NADPH oxidase 2 (NOX2)-deficient mice showed diminished melanoma metastasis and increased Interferon gamma (IFN-γ) generation in NK cells, suggesting that myeloid-derived ROS hamper NK cell control of cancer malignancy [80]. Likewise, phagocyte-derived ROS downregulated NKG2D and NKp46 surface expression in vitro, which has been suggested to mediate NK cell deficiency in patients with acute myeloid leukemia [81].

Regulatory T Cells (Tregs)

Tregs are another immune cell type that is commonly present in the TME. A rise in the number of Tregs in the TME denotes local immunosuppression, which is essential for cancer cells to escape from the immune system and represents an obstacle to cancer therapy [107]. Despite the deleterious effects of oxidative stress on natural killer (NK) and T cells, greater numbers of Tregs can be detected at tumor sites, indicating that Tregs can persist in this oxidant environment.
Indeed, it was demonstrated that Treg cells, compared to effector CD4+ T cells, are less sensitive to oxidative stress-induced cell death, a phenomenon that may be ascribed to their proven high antioxidative capacity [92]. However, it was recently discovered that tumor Treg cells sustain and amplify their suppressor capacity through death mediated by oxidative stress [93]. Furthermore, it was found that oxidative stress, rather than glycolysis, was the metabolic mechanism that controlled tumor Treg cell functional behavior and reinforced the therapeutic efficacy of immune checkpoint therapy [93].

Myeloid-Derived Suppressor Cells (MDSCs)

MDSCs often represent the major producer of oxidizing species in the TME. In addition to their release of ROS, MDSCs often arise in oxidative stress-prone environments such as tumors. ROS not only initiate anti-oxidative pathways but also activate transcriptional programs that control the fate and function of MDSCs. Furthermore, MDSCs utilize redox mechanisms to cause T cell unresponsiveness or T cell apoptosis and are reportedly more suppressive compared to granulocytes and monocytes from healthy subjects [94,95]. The maintenance of MDSCs in their undifferentiated state requires ROS molecules. Immature myeloid cells differentiated into macrophages when H2O2 was scavenged with catalase [102], while deficiency of NOX activity caused MDSCs to differentiate into macrophages and DCs in tumor-bearing mice [103]. Interestingly, lack of NOX2 activity in this model also impaired the ability of MDSCs to limit antigen-specific CD8+ T cell activation. Therefore, endogenous oxidative stress might represent a mechanism by which tumors inhibit the differentiation of MDSCs. MDSCs cause immunosuppression by T cell inhibition because ROS production inhibits recognition between the TCR and the MHC-peptide complex, as shown in a mouse lymphoma model [91]. In a mouse model, increased ROS levels in MDSCs suppressed IFN-γ production and T-cell proliferation.
Furthermore, MDSCs also inhibited T cells by exhaustion of cysteine and arginine (fundamental for T-cell activation and proliferation), generation of peroxynitrite (cytotoxic to T cells) and upregulation of the ROS-producing enzyme cyclooxygenase (COX)-2 in T cells [96][97][98][99]. More recently it was shown that tumor-induced MDSCs prevented T cell proliferation and promoted colorectal carcinoma cell growth through the production of ROS [100]. Interestingly, the use of ROS inhibitors completely abolished MDSC immunosuppressive effects on T cells [101]. Indeed, the co-culture of suppressed T cells and MDSCs from metastatic renal cell carcinoma, in the presence of the H2O2 scavenger catalase, could reinvigorate IFN-γ production in T cells to physiological levels [108].

Tumor-Associated Macrophages (TAMs)

Macrophages are also among the first host cells infiltrating the tumor mass [109]. Their role in the TME is double-faced. On the one hand, macrophages have the potential to eliminate cancer cells. However, the appearance and the high number of macrophages in the tumor tissue is generally accepted as a negative prognostic marker. Depending on the composition of the microenvironment, macrophages may exist in many functional states. Generally, they are classified into two extremes: M1 and M2 macrophages [110]. M1 cells are classically activated cells that have a pro-inflammatory phenotype with antitumor activity, while M2 cells are alternatively activated cells that have immunosuppressive features promoting cancer progression. In lung and breast cancer models, ROS were essential for TAMs to invade the tumor niche and to acquire a pro-tumorigenic M2 phenotype [76]. Another study demonstrated that high intracellular ROS supported a more invasive phenotype in TAMs isolated from melanomas, possibly due to ROS-dependent tumor necrosis factor α secretion [77].
The authors of this study found that at least part of the intracellular oxidative stress was endogenously generated by TAMs from melanomas, which expressed elevated levels of several mitochondrial biogenesis and respiratory chain genes. Besides, macrophage-derived ROS drove the recruitment of Tregs to the TME, exerting tumor-progressive roles [71]. Moreover, H2O2 production by macrophages has also been proven to sustain tumor progression in gastric cancer via modulation of miR-328-CD44 signaling [111].

Tumor-Associated Neutrophils (TANs)

Tumor-infiltrating neutrophils present functional heterogeneity, and the existence of two polarized states, N1 and N2, was suggested, similarly to macrophages [112,113]. N2-like TANs can show pro-tumorigenic activities whereas N1 TANs exhibit cytotoxicity to tumor cells. Indeed, it was demonstrated that the infiltration of neutrophils in mouse tumor models induced tumor apoptosis through the use of ROS [72]. Furthermore, TANs could also impede metastatic dissemination in the lungs through hydrogen peroxide production [73]. Recently, ROS-mediated cell elimination by TANs was shown to be dependent on tumor cell expression of TRPM2 [74]. Moreover, in mouse tumor models, TANs inhibited the proliferation of murine IL-17+ γδ T cells via induction of oxidative stress, thereby preventing them from constituting the major source of pro-tumoral IL-17 in the TME [75]. On the other hand, ROS derived from neutrophil MPO could restrain NK cell activity against tumor cells [82] and could contribute to oxidative DNA damage and genetic instability [114]. Tumor cells can elicit c-Kit signaling in neutrophils, driving an oxidative phenotype that maintains ROS-mediated suppression of T cells even in the nutrient-limited TME [115].

Dendritic Cells (DCs)

DCs are crucial for eliciting anti-tumor immunity, due to their ability to (cross-)present antigens and activate T cells. This capacity is affected by the inflammatory environments that the cells encounter [116].
The effects of ROS on DCs are complex, including metabolic and transcriptional changes that can affect the quality of DCs [84,117]. Although DCs actively utilize endo/phagosomal ROS to assist cross-presentation, augmentation of the environmental redox potential can also hamper cross-presentation [85]. Excessive ROS can lead to chronic ER stress responses and oxidative damage to intracellular lipids, which can inhibit the capacity of DCs to present local antigens to intratumoral T cells [86,87], thereby impairing the development of an effective antitumor immune response.

Impact of ROS on Cancer Immunotherapy

Much evidence indicates that an oxidative milieu has an enormous impact on tumor cells, as well as on TILs and other immune cells (and their interactions). Thus, it is plausible that ROS also play a role in the efficacy of novel cancer immunotherapy approaches, not only in conventional anticancer treatments [2,118,119]. Immune checkpoint inhibitors (ICI) and adoptive cell therapy (ACT) are two of the main actors in the immuno-oncologic approach aiming to boost antitumor immunity [120]. Treatment strategies based on the exploitation of antioxidants have been developed to maintain the antitumor activity of ACT under hypoxia and oxidative stress conditions. Indeed, exposure of ex vivo expanded TILs to N-acetyl cysteine treatment prevented their apoptosis following adoptive transfer into patients, eventually supporting extended survival of the patients receiving them [121]. Moreover, Ligtenberg et al. engineered CAR-T cells to co-express CAT to improve their antioxidant capacity [106]. These CAR-CAT-T cells had a reduced oxidative state both at baseline and upon activation, yet preserved their antitumor activity. Moreover, they could exert bystander protection of T cells and NK cells even in the presence of high H2O2 concentrations.
Another strategy of cancer immunotherapy is the repolarization of immunosuppressive TAMs into antitumor M1 macrophages [122]. TAM-targeted ROS-inducing micelles effectively repolarized TAMs to M1 macrophages and markedly augmented activated NK cells and T lymphocytes in B16-F10 melanoma tumors, causing vigorous tumor regression [123]. In the last decade, ICI targeting the Programmed cell death protein 1 (PD-1)/PD-L1 axis have significantly increased survival rates in cancer patients, revolutionizing the landscape of cancer treatment. Recently, a synergistic effect of mitochondrial activation chemicals with anti-PD-1 therapy on the induction of T cell-dependent antitumor activity was reported [124]. The authors showed that tumor-reactive CTLs isolated from mice treated with anti-PD-L1 carried higher levels of ROS, and that ROS generation enhanced the tumor-killing activity of PD-1 blockade by expanding effector/memory cytotoxic CD8+ lymphocytes. Thus, altering endogenous mitochondrial activity in CTLs may affect the response to PD-1 blockade. Another study showed a correlation between the ability of mouse tumor cell lines to consume oxygen and produce hypoxic environments and their sensitivity to PD-1 checkpoint blockade [125], suggesting that decreased levels of ROS, and consequently a less hypoxic TME, may intensify the effectiveness of PD-1 blockade immunotherapy. Finally, a recent study reported that continuous NOX4-dependent ROS generation is required in cancer-associated fibroblasts (CAFs) to maintain their activated phenotype, which promotes resistance to different immunotherapy modalities. Specifically targeting CAF NOX4 could re-sensitize CAF-rich tumors to anti-cancer vaccination and anti-PD-1 checkpoint inhibition by reshaping the CAF-regulated immune microenvironment [126].

Conclusions

Taken together, the data presented in this review uncover a double-faced role of ROS in the antitumor immune response.
Additional studies are needed to characterize how the subcellular localization, magnitude, and duration of ROS production within tumor-infiltrating immune cells and in the TME affect tumor immunity. Cancer treatment approaches using oxidative or antioxidative drugs should consider the broad range of both beneficial and detrimental effects of ROS on immunity and cancer progression. A more successful strategy could be to target ROS or antioxidants to a specific cell type and to conceive innovative combinatorial therapies. Moreover, further studies addressing the potential role of ROS levels and redox status as prognostic or predictive markers of immunotherapy outcome are warranted.
Breakthrough treatment choice for Acute Myeloid Leukemia in pediatric and adult patients: Revumenib, an oral selective inhibitor of KMT2Ar Acute myeloid leukemia (AML) represents the predominant manifestation of acute leukemia in the adult population, whereas in children it ranks second in terms of frequency. It is characterized by genetic mutations and epigenetic dysregulation resulting in a heterogeneous population of malignant cells with blocked differentiation, leading to increased proliferation and self-renewal activity. Every year, 20,000 new cases of AML are diagnosed in the United States, whereas the global burden of the disease is believed to range between 119,000 and 352,000 cases per annum. NPM1 gene mutations are the most frequently encountered genetic aberrations in acute myeloid leukemia (AML), being detectable in about one-third of adult AML and in 50–60% of AML patients with a normal karyotype. Mutant NPM1 is directly involved in promoting increased expression of homeobox (HOX) genes, which are necessary for maintaining leukemic cells in an undifferentiated state. Recent studies have shown the importance of the MLL1-Menin interaction in AML with mutated nucleophosmin 1 (NPM1c). MLL1 (also known as lysine methyltransferase 2A [KMT2A]) is located on chromosome 11q23, and chromosomal translocation (MLL1 rearrangement [MLL1-r]) is observed in 5%–10% of acute leukemia cases (AML and ALL) in adults and children. This leads to the expression of chimeric MLL1 fusion proteins (MLL1-FP) that drive leukemic gene expression and proliferation and prevent hematopoietic differentiation, consequently giving rise to a particularly aggressive subtype of leukemia with an unfavorable outcome. Chromosomal rearrangements involving the KMT2A gene are prevalent in neonates with acute leukemia and affect 75% of newborns with ALL. Research findings suggest that this crucial molecular alteration takes place antenatally, leading to leukemia during the infantile period.
Although induction therapy achieves complete remission (CR) in 60–80% of cases, no targeted therapies have currently been approved specifically for acute leukemia with KMT2A rearrangement (KMT2Ar) or mutated NPM1. Unfortunately, the median survival is relatively brief at 8.5 months, with 2-year and 5-year Overall Survival (OS) rates of just 32% and 24%, respectively. 2 Furthermore, existing research has suggested that circRNAs are capable of playing a role in the post-transcriptional regulation of AML by binding miRNAs, activating downstream signaling cascades, and regulating the expression of related genes, closely correlated with a wide variety of AML processes. 7 AML has a poor prognosis and a considerable tendency to relapse; 1 therefore, the need for effective treatment is undeniable. On 5 December 2022, the U.S. Food and Drug Administration (FDA) granted Breakthrough Therapy Designation (BTD) for Revumenib as a first- and best-in-class therapy for the treatment of adult and pediatric patients with relapsed or refractory (R/R) acute leukemia harboring a KMT2Ar. 8 Revumenib, previously known as SNDX-5613, is a potent, oral, and selective inhibitor of the menin-KMT2A interaction.
It disrupts the interaction between Menin and its binding pocket in MLL1/2 and MLL1-FP, causing differentiation and apoptosis of AML cells expressing MLL-FP or NPM1c. 9 Thus, this development represents a major step forward in our efforts to combat this devastating disease and provides new hope for patients and their families. To evaluate the safety, tolerability, pharmacokinetics, and efficacy of orally administered Revumenib, the AUGMENT-101 Phase 1 open-label trial was conducted. Between 5 November 2019 and 31 March 2022, a total of 68 patients with NPM1-mutant and KMT2A-rearranged relapsed/refractory (R/R) acute leukemia were enrolled. The cohort included adult and pediatric patients, with median ages of 50.5 years and 2.5 years, respectively. Fifty-six patients (82%) had relapsed or refractory AML, 11 (16%) suffered from ALL, and one had mixed-phenotype acute leukemia (2%). 10 They were allocated into two separate cohorts based on concomitant treatment of revumenib with a strong CYP3A4 inhibitor or a less potent one. Arm A enrolled 37 patients receiving between 226 mg and 276 mg of revumenib once every 12 h without a strong CYP3A4 inhibitor, while Arm B enrolled 31 patients taking 113–163 mg of revumenib at the same interval but with a strong CYP3A4 inhibitor. 10,11 Revumenib has demonstrated a promising outcome in its Phase 1 open-label trial: 18 of 60 evaluable patients (30%) achieved complete remission or complete remission with partial hematologic recovery (CR/CRh), while 78% of these 18 responders attained measurable residual disease (MRD) negativity. 10 Though Revumenib opens a new door of hope for patients and physicians, the treatment also carries risks. Differentiation syndrome was a notable treatment-related adverse event (AE), occurring in 16% of revumenib recipients. 11 Furthermore, irregular cardiac rhythm was another notable AE, occurring in 53% of revumenib recipients.
11 Nevertheless, the medication was well-tolerated by study participants, and no participants stopped taking the therapy due to treatment-related adverse effects. Thus, AUGMENT-101 may potentially serve as the basis for changing the treatment paradigm for patients with relapsed or refractory KMT2A-rearranged acute leukemia, given that its positive outcomes outweigh the risks. 8 Therefore, the development of Revumenib represents an important turning point in our fight against acute myeloid leukemia. The endorsement of this drug is a laudable achievement, and it demonstrates the unwavering commitment of researchers and healthcare professionals to discovering effective treatments for such a fatal disease. In conclusion, this drug offers a ray of hope to patients and their families and also inspires researchers to continue their quest for a cure, as evidenced by the fact that the Phase 2 pivotal portion of AUGMENT-101 is currently underway. 11 Therefore, it is crucial that we continue to support research into leukemia and the development of new treatments, to envisage a time when leukemia is eradicated.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.
Effects of Clinical Placements on Paramedic Students' Learning Outcomes

Abstract. Background: Clinical placements are of major importance in students' learning processes, creating supportive environments and fostering independence in paramedic professional roles. The study aimed to explore whether clinical experiences in out-of-hospital emergency services affected students' learning outcomes and satisfaction. Methods: A retrospective study was carried out using preceptors' evaluations (n=160) and students' feedback forms (n=21). Descriptive and non-parametric inferential statistics were used to analyse quantitative items, and open-ended questions were analysed using content analysis. Results: Findings showed that more than 70% of students were satisfied with the quality of preceptors' facilitation of student learning. Preceptors reported that students' clinical skills across all categories improved significantly in the last two weeks of training. Qualitative data indicated that students displayed appropriate behaviour and professional socialisation, were keen to learn, and demonstrated competence in paramedic skills.

The preceptor role is multifaceted and complex [1] because the preceptor must align responsibilities for patient care with the fieldwork atmosphere. Preceptors may experience stress and burnout [2] caused by increasing workloads and the need to protect their licence to practise from allegations of malpractice that may result from student errors. [3] Consequently, preceptors may not acknowledge students or involve them in the clinical team. Several studies have emphasised that working in supportive environments and developing good clinical skills and decision-making in patient care are important influences on students' decisions to remain in healthcare professions. [4] Preceptor feedback contributes substantially to student achievement and to improvement in their critical thinking skills. [5,6] Additionally, students who regularly use feedback to improve their work are more likely to succeed as part of a clinical team and attain a level of independence in their clinical skills.

Paramedics work in ambulance services to provide care in critical situations and medical emergencies. An ambulance emergency call-out is multifaceted and presents extraordinary challenges because paramedics manage a wide range of patient conditions, from life-threatening issues requiring immediate attention to those where no treatment is needed. Paramedics also encounter people from various age groups, cultures, and social backgrounds [7], and the services delivered to patients are significant for their wellbeing and recovery. [8] Therefore, it is essential that undergraduate paramedic students gain hands-on skills through clinical placements in ambulance services so they can apply knowledge learned in the classroom to real-life situations. It is essential that undergraduate paramedic students have opportunities to practise in clinical settings so they can integrate the clinical knowledge gained at university with emergency practices. The number of
clinical placement hours required by universities varies and is inconsistent across programs and Australian states. In the authors' university, students are expected to spend 480 hours on placements with ambulance services and in community settings. [9] Second- and third-year paramedic students spend nearly 200 hours on placements a year (4-5 weeks). Clinical environments in ambulance settings differ from those in hospitals because paramedics must perform tasks quickly in difficult conditions and make rapid clinical judgements to save patients' lives at the scene. [5] Hence, the role of healthcare professionals in out-of-hospital settings is somewhat different from that of hospital emergency room staff. Although preceptors have similar roles across healthcare professions, paramedic professionals perform a diverse range of invasive procedures, some of which may be considered risky treatments. [10] Hence, paramedics require different clinical skills from those of nurses. While several studies have explored the relationship between clinical placements and aspects of student learning, most have emerged from nursing perspectives and there are few studies from the paramedic paradigm. [11] Boyle and colleagues carried out a pilot study of paramedic students' experiences of clinical placements in Victoria and found that nearly 85% of participants mentioned that classroom scenarios helped them to understand real-time patients, and approximately 90% gained hands-on practical experience while working on the road. However, 69% of them believed they undertook unproductive work and 39% did not have opportunities to be involved in patient assessment or clinical scenarios. [12] A qualitative study with 15 paramedic preceptors indicated there were differences between the skills students obtained in classrooms and the hands-on skills seen in ambulance services. They also believed that short placements limited students' learning opportunities, while extended periods of clinical practice with
specific instructors would produce beneficial clinical experiences for students. [13] Unfortunately, the study did not measure the competencies and satisfaction students gained from placements. There are limited opportunities for clinical placements in Australia, so one university in Victoria took paramedic students to Israel for international clinical placement experiences. When the clinical benefits of work experiences in traditional placements and Israeli ambulance services were compared, findings showed that caseloads between the two settings were not significantly different, but students gained more experience from the various shifts worked in Israeli ambulance services. [14] It appears that most studies of paramedic clinical placements in Australia have been conducted in Victoria. Clinical experiences obtained from ambulance services in other states may be similar, but this study aimed to explore the relationship between paramedic preceptors' views of student competencies and student satisfaction with preceptors in clinical settings in the Ambulance Service of New South Wales, Australia.

Methods

This retrospective study used preceptors' assessments and student feedback forms collected in the Bachelor of Clinical Practice (Paramedic) course. Paramedic preceptors assessed the clinical skills and other competencies demonstrated by undergraduate paramedic students who undertook a four-week clinical placement in ambulance stations around New South Wales in their final year. The 233 students were divided into three groups for placement in February (n=57), August (n=83) or November (n=93). Forms from 73 paramedic preceptors were incomplete, giving a total of 160 responses for analysis. Because this was a retrospective study, informed consent was not sought.
Instruments

Two self-report questionnaires were developed by one of the authors, who has worked in ambulance services and educational settings for more than 20 years. Three lecturers from the university assessed the questionnaires for content validity and judged that they were suitable for the paramedic clinical placement paradigm.

Preceptor evaluation of student competency

Preceptors assessed student competency on 15 items divided into six main categories: survey on scene, monitoring and assessment, communication, clinical decision-making, functioning as a team member, and standard infection control precautions. Preceptors were asked to rate each item on a 4-point nominal scale, E (Exemplary), S (Satisfactory), NI (Needs Improvement), U (Unsatisfactory), and to report students' strengths and areas for improvement in two open-ended questions. Each student was assessed twice during placement, at the end of Week 2 and Week 4, and preceptors reported their assessments directly to a lecturer at the university. Internal consistency (Cronbach's alpha) was 0.921.
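The internal consistency coefficient reported above can be reproduced from raw item scores. As a minimal sketch (Python with NumPy; the rating matrix below is invented for illustration and is not the study's data), Cronbach's alpha is computed from a respondents-by-items matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 5 respondents x 4 items on a 4-point scale (E=4 .. U=1)
scores = [[4, 3, 4, 4],
          [2, 2, 3, 2],
          [3, 3, 3, 4],
          [4, 4, 4, 4],
          [1, 2, 1, 2]]

print(round(cronbach_alpha(scores), 3))  # → 0.949
```

Values above roughly 0.9, as with the 0.921 reported here, indicate that the items of an instrument measure a common underlying construct consistently.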
Student clinical placement feedback

This was a self-administered 14-item tool for students to provide feedback on each ambulance service in which they undertook clinical placement, in four categories: preceptor's expertise (clinical skills, experience and role modelling), supportive environment (orientation, welcoming staff, participation encouraged), constructive feedback (verbal and written), and attitude towards returning to the placement venue. Each item was assessed on a 6-point Likert scale: Non-Applicable (1), Strongly Disagree (2), Disagree (3), Uncertain (4), Agree (5), and Strongly Agree (6). Two open-ended questions were used to assess what students found most valuable during the clinical placement, and what they liked least. Students completed the feedback form at the end of the four-week placement and returned it to the university. In total, 21 forms were submitted. Internal consistency of this instrument was 0.849.

Data analysis

Quantitative data. Descriptive statistics were used to analyse frequency counts and percentages. For the student feedback scale, total scores were summed from the 14 items and divided into quartiles. The 1st quartile was defined as 'to some extent satisfied', followed by 'satisfied' (Q2), 'very satisfied' (Q3), and 'extremely satisfied' (Q4). For preceptors' assessments, Wilcoxon's matched-pairs signed-ranks test was performed to compare the difference in competency for each student between the end of the second and fourth weeks of clinical placement. A p value < .05 was used to indicate statistical significance. All analyses were performed using SPSS version 20.
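The paired comparison described above can be sketched as follows. This is an illustration only (Python with SciPy rather than the study's SPSS), with invented paired ratings coding E/S/NI/U as 4/3/2/1:

```python
from scipy.stats import wilcoxon

# Hypothetical paired competency ratings for 12 students
# (E=4, S=3, NI=2, U=1), assessed at the end of Week 2 and Week 4
week2 = [3, 2, 3, 3, 2, 3, 4, 2, 3, 2, 3, 3]
week4 = [4, 3, 3, 4, 3, 4, 4, 3, 4, 3, 3, 4]

# Wilcoxon matched-pairs signed-rank test; pairs with zero
# difference are dropped before ranking
stat, p = wilcoxon(week2, week4)
print(f"W = {stat}, p = {p:.4f}")
if p < 0.05:
    print("Week 4 ratings differ significantly from Week 2")
```

The test is appropriate here because the ratings are ordinal and the same students are measured twice, so a paired non-parametric test is preferred over a paired t-test.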
Content analysis

Qualitative data (open-ended questions). Content analysis was used to analyse the open-ended questions on preceptor assessments and student feedback forms. This is an appropriate method for handling large amounts of data. [15] The researchers extracted key words and phrases from the open-ended responses, then allocated codes and developed categories and themes. The emergent codes and themes were discussed until agreement was reached to validate the findings. For instance, themes identified in preceptor evaluations were 'student competencies' and 'skills improvement', and categories included 'appropriate behaviour and professional socialisation', 'competency in clinical skills and keen to learn', 'lacking self-confidence', 'communication difficulty', and 'shortage of case scenarios'.

Results

Preceptors submitted two complete reports for 160 students (40 in February, 60 in August, and 60 in November). Of these students, 88 (55%) were male and 72 (45%) female. The 73 incomplete assessments were excluded from the study. Most preceptors from the 21 ambulance stations were male, and their paramedic qualifications ranged from paramedic (P1) to paramedic specialist (Intensive Care Paramedic and Aviation Paramedic).

Preceptor evaluations of student competencies are shown in Table 1. Unsurprisingly, students showed improvement across all items between the assessments in Week 2 and Week 4, and the differences were statistically significant. For the item 'conducting a primary survey', the percentage of students rated as 'Needs Improvement' fell significantly from 6.25% at Week 2 to 4.38% at Week 4, a reduction of 29.92%. The improvement was more marked for the item 'establish rapport with patients', from 6.88% at Week 2 to 0.63% at Week 4, a reduction of 90.84%. The number of students rated as 'Needs Improvement' on any item at the end of Week 2 fell significantly by the end of Week 4.
Similarly, the preceptor evaluations at the 'Exemplary' level increased substantially, to nearly 60% at Week 4 for the item 'assess patient to formulate diagnosis', and by approximately 50% for the item 'conduct a secondary survey'. In addition, evaluations at the 'Exemplary' level for the items about using communication skills to obtain information and using clinical decision-making to suggest appropriate care increased by almost 40% between the end of Week 2 and the end of Week 4.

Student perceptions are shown in Table 2. The majority of students acknowledged that clinical staff welcomed them and provided a supportive atmosphere, with approximately 72% strongly agreeing. Similarly, more than 71% strongly agreed that their preceptor had appropriate skills and experience and encouraged them to ask questions, helped them to engage in clinical experiences (61.90%), and provided good verbal feedback on their progress (52.38%). More than 50% of students agreed that they received appropriate direction when they first arrived and had

Appropriate behaviour and professional socialisation

Students who demonstrated appropriate behaviour and professional socialisation towards clinical staff helped create a positive atmosphere and harmony within the workplace. Therefore, preceptors were willing to teach clinical skills and answer students' questions when they required clarification.

Preceptor A: Student A is a very happy person and easy to get along with. She engages with staff and patients well. She follows instruction well and is learning quickly. A pleasure to be mentoring. The areas that need improvement are getting better as she gains experience. She is doing well for the time she has spent on road.
Preceptor D: Student D is a polite and courteous student who works well in a team environment. He actively seeks clarification on anything he may not understand. He demonstrates a willingness to learn. He gives a concise and accurate patient handover to other medical staff.

Competency in clinical skills and keen to learn

Paramedic students showed a high level of clinical knowledge and understood how to integrate theoretical knowledge into practice. They were actively involved in the assessment and treatment of patients, and were keen to learn in different environments.

Preceptor B: Student B has shown an above average ability to apply clinical knowledge to on-road experiences. He is mature for his age and this is apparent when dealing with patients. I believe his life experiences prior to University with safety have placed him well for paramedicine. He is patient, displays empathy, and is keen and quick at adapting to ever-changing environments…

Preceptor C: Within the first two weeks of student C's placement he has immediately applied himself to all tasks asked of him. He has been punctual for every shift and… He has been well presented and courteous to all staff members and allied health professionals. He is extremely keen to actively participate in the assessment and treatment of all patients, regardless of the patient complaint, and he seeks guidance when required…

Lack of self-confidence

Preceptors identified areas where students needed to improve. These included the development of greater self-confidence and assertive communication skills to enable them to manage challenging situations.

Preceptor B: Student B needs to be more forward/confident in talking with patients to ensure he has an accurate history of the events leading up to the pt/paramedic meeting - I know this will be well addressed in the near future. With more on-road time, I believe he will be a solid clinician and will make a very good paramedic, welcomed on any station within ASNSW.
Preceptor G: On the flip side of the above, she will need to learn to be assertive in difficult and confronting situations.

Communication difficulties

Students were sometimes nervous when asking for patient information or investigating the circumstances surrounding a call-out. This was apparent from their low-pitched voices when participating in patient assessments. Paramedics are the first healthcare personnel on the scene in emergencies, so clear communication and assertive behaviour are essential for them to gather crucial information from patients and eyewitnesses before giving treatment.

Preceptor A: Student A can be shy at times when dealing with patients and families and just needs to find some more confidence and initiative to fulfil some of these requirements. Student does need to be asked if she would like to train in anything or if she has any questions regarding jobs. She could be a bit more forward in wanting to participate in practical training sessions.

Preceptor D: Encouraging student D to be more assertive and direct when trying to obtain a patient history. I have explained to him that he needs to increase the volume of his voice when talking to patients as they have difficulty hearing him.

Limited case scenarios

Students placed in regional areas with limited case scenarios missed the learning opportunities that arise when dealing with a diverse range of cases. Nevertheless, when exposed to a trauma scene, students showed competency in managing situations, including leadership and suggesting treatment pathways.

Preceptor E: The only area of improvement I can see for the student is to be exposed to real-life scenarios to give her experience and a greater depth of knowledge.
Preceptor F: Unfortunately to this point we have had limited exposure to some skills such as trauma requiring IMISTAMBO and scene reports. There have also been limited opportunities to cannulate and set up fluids. On the occasions that this has occurred she has performed these tasks with little problem…. She has shown constant improvement in her confidence and I am sure by the end of the 4 weeks together her ability to show leadership and control scenes will have reached the level expected of her.

Exposure to real-life experiences and improvement in clinical skills

Students gained a greater understanding of how to perform ambulance tasks and work effectively with patients and other healthcare professionals when they were exposed to real-life situations. These enabled them to apply the knowledge acquired at university to practical scenarios.

Student I: I feel that my hand-overs to triage nurses and doctors improved a lot over the 3 weeks and by the third week I was presenting all hand-overs.

Student J: Getting real practical experience!!! My preceptor was very knowledgeable and able to explain the pathophysiology of everything! Learning how ambos work together and where everything is placed in the ambulance was great! Also, skills for questioning difficult patients! And just treating actual people!

Shortage of case experiences

Students were not pleased when their placement venues offered only a limited range of learning experiences in real-life ambulance settings. They believed this restricted their learning experiences and opportunities to put theoretical knowledge into practice in prehospital situations. They felt bored and untested when dealing only with non-emergency calls and light caseloads.

Student L: Placement in rural areas didn't provide me with the load of work or experience I was hoping to see.
Student I: Because the area I was in wasn't very busy I feel like I didn't really receive anything too exciting nor challenging. The one call-out that I found exciting because I'd learnt so much about it, I was put in the front seat and didn't really even get talked through it.

Discussion

In the study, paramedic students showed satisfactory to exemplary levels of competency in their clinical skills, and preceptor evaluations of student abilities increased between the first and second assessments for all items. A similar picture was seen in the qualitative data, with preceptors reporting that students were skilled at putting their theoretical knowledge into practice and that they displayed appropriate behaviour and professional socialisation when working with staff and patients. Preceptors were willing to teach students and allow them to undertake clinical experiences in real-life scenarios, especially where there were good working relationships between themselves and students. These findings differed from those in a qualitative study by O'Meara and colleagues, who found that preceptors from vocational backgrounds tended to limit students to observation only rather than allowing them hands-on clinical practice, while preceptors who were university-educated paramedics adopted the opposite view. [13]

Several studies have pointed out that building a healthy relationship is the key to achieving satisfactory learning experiences in clinical placements. Through this, preceptors can encourage students' exposure to clinical skills, improve their self-confidence and enhance their critical thinking. [16-18] A paramedic clinical placement study in Victoria demonstrated that paramedic students expressed concern about their experiences on clinical placements. The findings showed nearly 60% of students had negative experiences with preceptors, such as 'being ignored' and 'know nothing but not offered any opportunities to practice skills'. [12] A mutual relationship in the preceptorship, whereby staff accept students as team members, take care of them, show empathy and act as good role models, is the most effective strategy to achieve student learning in clinical practice. [16] In this study, findings from the qualitative data clearly show the benefits for students who experienced a sense of security through building good personal relationships with the preceptor and other staff. These students showed increasing trust in their own abilities as they participated in challenging activities; they were not afraid to ask questions and started to work independently.
The findings also demonstrated that students' competencies increased significantly across most skill categories in the final weeks of placement. Consistent, long-term exposure to real-life scenarios helped to boost students' self-confidence, allowing them to take the initiative to perform complicated tasks. Conversely, short periods of placement limited students' learning opportunities to practise clinical skills while creating difficulties for ambulance settings in organising shifts and seeking mentors. [13] Students who experienced different emergency scenarios and who were able to stay longer on scene gained more experience of clinical procedures and reported high satisfaction levels.

Similarly, McCall et al (2009) indicated that students' learning experiences depended on the caseload in the services where they undertook placement. Students gained a wider range of experiences when undertaking placement in metropolitan areas rather than with rural ambulance services. [19] The development of professional competency in pre-hospital settings usually depends upon exposure to a diversity of caseload scenarios and the use of effective clinical decision-making skills acquired during clinical placements. Clinical placements that function well are influential in enhancing learning and generating professional identity for paramedic students. The study showed that the students' level of knowledge before placement was sufficient for them to manage the real-life scenarios they encountered. However, they needed to develop more self-confidence and assertive communication skills so they could adapt quickly to new environments and reduce possible dissatisfaction between students and supervisors. Sustained cooperation between universities and ambulance services is essential to provide a good learning atmosphere for students and reduce frustration among clinical staff.
Limitations of the study

The use of existing data in this retrospective study potentially created bias because it was not possible to confirm or reassess information, while some crucial variables were missing and there were issues with data quality and generalisability. A strength of the study is that it reports new findings about the development of mutual relationships between preceptors and paramedic students and the value of increasing the length of clinical placement periods, both of which could lead to improved quality of clinical placements. These findings have not been identified previously, nor have there been studies conducted in New South Wales, Australia.

Effects of Clinical Placements on Paramedic Students' Learning Outcomes. Asia Pacific Journal of Health Management 2017; 12: 3

Table 1. Preceptors' perceptions of student competencies during four-week placement (*p<.05)

Table 2. Students' perceptions of clinical placement

Students appreciated preceptors who allowed them to have a range of clinical experiences and use their knowledge on real patients. Students also enjoyed working with preceptors who were easy to relate to and who helped them to keep calm.
Discovery of novel human transcript variants by analysis of intronic single-block EST with polyadenylation site

Background: Alternative polyadenylation sites within a gene can lead to alternative transcript variants. Although bioinformatic analysis has been conducted to detect polyadenylation sites using nucleic acid sequences (EST/mRNA) in the public databases, one special type, the single-block EST, is much less emphasized. This bias leaves a large space to discover novel transcript variants.

Results: In the present study, we identified novel transcript variants in the human genome by detecting intronic polyadenylation sites. Poly(A/T)-tailed ESTs were obtained from single-block ESTs and clustered into 10,844 groups representing 5,670 genes. Most sites were not found in other alternative splicing databases. To verify that these sites come from expressed transcripts, we analyzed the number of supporting ESTs for each site, blasted representative ESTs against known mRNA sequences, traced terminal sequences from cDNA clones, and compared with the data of the Affymetrix tiling array. These analyses confirmed about 84% (9,118/10,844) of the novel alternative transcripts; in particular, 33% (3,575/10,844) of the transcripts, from 2,704 genes, were taken as high-reliability. Additionally, RT-PCR confirmed 38% (10/26) of predicted novel transcript variants.

Conclusion: Our results provide evidence for novel transcript variants with intronic poly(A) sites. The expression of these novel variants was confirmed with computational and experimental tools. Our data provide a genome-wide resource for identification of novel human transcript variants with intronic polyadenylation sites, and offer a new view into the mystery of the human transcriptome.

Background

Eukaryotic mRNA is frequently alternatively spliced. Recent studies of human tissue transcriptomes by high-throughput sequencing have revealed that about 95% of multi-exon genes undergo alternative splicing (AS) [1,2].
This greatly exceeds previous estimates of human AS events [3-5], and further adds complexity to transcripts and proteins. Alternative cleavage and polyadenylation (APA) is also an important mechanism to produce diverse mRNA isoforms. In APA events, a key regulatory step in the formation of the mRNA 3'-end, a nascent mRNA is cleaved at its cleavage site and the poly(A) tail is added to the mRNA [6,7]. Polyadenylation is associated with important cis-elements, such as the upstream canonical AAUAAA and its hexamer variants, the downstream U/GU-rich elements, the auxiliary upstream elements, and the downstream elements [8-11]. These element combinations determine how mRNA 3'-ends are processed. In humans, over half of all genes have alternative polyadenylation products [9]. These alternative transcripts are often expressed in a tissue-specific pattern, and contribute to some inherited disorders and tumor development [12-16]. In addition to the 3'-most exons, polyadenylation sites (poly(A) sites) can also exist in introns and internal exons. In humans, at least 20% of genes have intronic polyadenylation [17]. Alternative tandem or intronic poly(A) sites can lead to alternative polyadenylation [18]. Bioinformatic analysis has revealed different polyadenylation configurations within gene structures [11,17,19]. The mRNA produced from an internal polyadenylation site often encodes a truncated protein or a distinct protein isoform. These protein products often show different cellular localization and/or different functions compared to the protein produced from the 3'-most poly(A) site [20-29]. Genome-wide searches for poly(A) sites resulted in the polyA_DB and PolyA_DB2 (the latest version) databases [17,30,31]. To date, 54,686 poly(A) sites have been identified [31].
However, these poly(A) sites are mainly limited to coding regions, and the frequency of poly(A) sites in large introns and intergenic regions remains largely unstudied. In addition, the sequence selection for these databases was biased towards sequences in the UniGene database [30,32]. Because intronic sequences do not overlap with known exons or cDNA sequences, most intronic expressed sequence tag (EST) sequences were excluded. For example, ESTs located in large introns were removed in Lee's research [32], because these sequences usually did not overlap with other sequences from the same gene. No doubt, this bias leaves a large pool of undiscovered transcript variants with intronic polyadenylation sites. To identify these under-represented poly(A) sites, we preferentially selected intronic single-block ESTs, considering that ESTs spanning multiple exons have often been included in known UniGene clusters and have been used for the study of alternative splicing. However, single-block ESTs, which span just one exon on the chromosome, were not well considered [33-37]. We focused on the intronic 3'-end exon sites associated with poly(A/T)-tailed ESTs derived from single-block ESTs. An intronic 3'-end exon site is defined as a terminal exon site located in introns upstream of the 3'-most exon of the gene. Herein we use the term "3'-end exon site", not "3'-end exon", to describe intronic poly(A/T)-tailed single-block ESTs, because these 3'-end exons are usually incomplete at their 5'-ends and the closest exon junction is ambiguous. As a result, 10,844 intronic 3'-end exon sites from 5,670 human genes were identified. 45% of these sites represent novel transcript variants that are absent from other alternative-splicing-related databases. To confirm that these sites are transcribed, we collected expression data from non-poly(A/T)-tailed ESTs, full-length cDNAs, end-pair sequencing of cDNA clones, and Affymetrix genomic tiling arrays.
These data confirm that about 84% of the predicted sites represent true transcripts. We also successfully verified some predicted transcripts by RT-PCR experiments.

Mapping and clustering intronic poly(A) sites in the human genome

To identify novel transcript variants resulting from previously unidentified intronic poly(A) sites, an annotated EST alignment file from the UCSC Genome Browser (http://genome.ucsc.edu) was analyzed (Figure 1). We focused on single-block ESTs that did not overlap known mRNA sequences. Initially, 7,948,198 aligned EST entries were analyzed for poly(A) sites. Among these, 3,614,581 single-block EST entries were obtained, containing 3,323,676 non-redundant ESTs. These ESTs could be further divided into two types, poly(A/T)-tailed and non-poly(A/T)-tailed, numbering 494,529 and 2,829,147, respectively (Table 1). For the poly(A/T)-tailed ESTs, known poly(A) sites were identified and removed by blasting these sequences against the RefSeq mRNA database, leaving 22,117 sequences for further analysis. These ESTs were finally clustered into 10,844 groups (poly(A) clusters) from 5,670 human genes (Table 1 and Additional file 1, sheet "all_site") according to their overlapping positions in the chromosome alignment. These clusters represented 3'-end exon sites. Thus, the number of genes involved exceeds the previously reported figure of at least 3,344 human genes containing intronic poly(A) sites [17]. The various poly(A/T)-tailed ESTs in the same cluster may represent heterogeneous cleavage or different polyadenylation patterns if they contain different poly(A) sites [6,11]. Most 3'-end exon sites were flanked by exons containing coding sequences (CDS). Among the single-block ESTs, the 2,829,147 ESTs without a poly(A/T) tail were grouped into 396,094 clusters. Non-poly(A) clusters that overlapped with the poly(A/T)-tailed 3'-end exon clusters would support the expression of the novel transcripts.
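As an illustrative sketch (not the study's actual pipeline code), the positional clustering described above can be modeled as merging overlapping genomic intervals, assuming each EST alignment is reduced to a (chromosome, start, end) tuple:

```python
def cluster_by_overlap(alignments):
    """Merge EST alignments (chrom, start, end) into clusters of
    mutually overlapping intervals on the same chromosome."""
    clusters = []
    # Sorting by (chrom, start) guarantees each alignment can only
    # overlap the most recently formed cluster.
    for chrom, start, end in sorted(alignments):
        if clusters and clusters[-1][0] == chrom and start <= clusters[-1][2]:
            # Overlaps the current cluster: extend its right edge.
            c_chrom, c_start, c_end = clusters[-1]
            clusters[-1] = (c_chrom, c_start, max(c_end, end))
        else:
            # No overlap: start a new cluster.
            clusters.append((chrom, start, end))
    return clusters
```

Each resulting cluster stands for one candidate 3'-end exon site; the number of alignments merged into a cluster gives its supporting-EST count.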
Of the intronic 3'-end exon sites, 7,676 (71%) from 4,599 genes had at least one supporting non-poly(A/T)-tailed EST (Additional file 2), and 3,041 (28%) of the poly(A) clusters contained at least two poly(A/T)-tailed ESTs (Additional file 1). In total, 75% (8,189/10,844) of the identified 3'-end sites were supported by at least two ESTs. Among the remaining 25% supported by only a single EST, 37% (974/2,655) were further supported by transcriptional data from the Affymetrix genomic tiling array (see below; Additional files 1 and 2). There were 351 independent poly(A) clusters that could overlap with their adjacent clusters via the bridge of non-poly(A) clusters. Some poly(A) clusters bridged several non-poly(A) clusters into a single large cluster (data not shown). These large clusters may simply manifest the heterogeneity of the polyadenylation pattern at the 3'-end exons in the local genomic context [11,38].

Figure 1. A pipeline for identifying novel intronic 3'-end exon sites.

3'-end novel transcript variants are expressed

To confirm that these poly(A) sites represent novel alternative transcript variants and not genomic DNA contamination, our analysis pipeline had four steps. First, we did BLAST searches against all known mRNAs excluding sequences from RefSeq. Second, for ESTs with clone IDs, we traced the partner sequences of the same clones and checked for splicing signals within the sequences. Third, we compared the 3'-end exon sites with the data of the Affymetrix tiling array. Finally, we selected some novel transcript variants and verified them via RT-PCR experiments. In our analysis, poly(A/T)-tailed ESTs that had hits in the RefSeq mRNA database by BLAST searching were eliminated as known transcripts. However, many mRNAs are not included in the RefSeq database. Most of these sequences are produced by full-length cDNA sequencing projects. If our 3'-end ESTs could be aligned well to such cDNAs, the ESTs were taken as potential novel transcripts.
Among the 10,844 3'-end exon sites, 2,957 (27%, 2,957/10,844) from 2,257 genes had hits from at least one mRNA (Table 1 and Additional file 1). This indicates that these transcript variants have been cloned by others. As the full-length cDNA sequencing projects have been conducted with state-of-the-art quality control as well as manual verification, it is reasonable to conclude that most of these supported ESTs represent bona fide mRNAs. The remaining 7,887 sites, involving 3,413 human genes, may represent unidentified 3'-end exon sites for novel transcript variants. ESTs often have clone IDs, which identify the plasmid clones of source cDNA fragments. EST sequences are produced from single-pass sequencing of the 5'- and/or 3'-end of the clones. Having obtained the 3'-end single-block ESTs, we could trace their corresponding 5'-end ESTs with the same clone IDs. If a 5'-end EST could be split into multiple blocks, with adjacent GT/AG splicing signals on the human genome, which could be taken as the exons in mature mRNAs, it was concluded that the pair of 5'-end and 3'-end ESTs comprised a bona fide mRNA. In our data, 3'-end exon sites contained two types of ESTs: poly(A/T)-tailed, and non-poly(A/T)-tailed but overlapping with the former. If either type of EST had multi-block 5'-end ESTs, the 3'-end exon site was taken as supported (Table 1 and Additional file 2). Transcriptional fragments from the Affymetrix genomic tiling array [39], which could support the existence of transcripts through the specified chromosome region, were integrated into our analysis. Affymetrix fragments overlapped with 5,475 3'-end exon sites (50%) from 3,627 genes (Table 1 and Additional file 2). Finally, we selected novel isoforms of a couple of genes which have roles in signal transduction and performed nested-PCR verification. Our interest was to explore the function of the novel protein products encoded by the transcripts. It was expected that the full coding sequence should be included in the PCR products.
The primer strategy was that the upstream (5') primer was located near the translational start site (ATG) of the RefSeq mRNA, while the downstream (3') primer was located in the poly(A/T)-tailed ESTs. Primer sets are listed in Additional file 4. The electrophoresis bands of the second PCR products are shown in Figure 2. Sequences of the PCR products were subjected to BLAST searches and were revealed to be novel (Additional file 4). Sequence analysis was also performed with the BLAT program. As a result, RT-PCR confirmed transcription of at least 38% (10 of 26 candidates) of the selected intronic poly(A) sites. The acquired novel sequences and their accession numbers in the GenBank database are listed in Additional file 4. In the case of MAPK14 (mitogen-activated protein kinase 14, also known as p38 alpha), two novel alternative splicing variants were obtained, FJ032367 and FJ032368. The latter had an extra 27 nt resulting from an alternative acceptor site in exon 7, just like caspase-9 gamma [40], and an in-frame premature stop codon is therefore introduced. The 3'-end exons that define novel transcript variants can either be "hidden exons", not overlapping with any known exons, or "composite exons", extending known exons [19]. One "composite exon" example and one "hidden exon" example are shown in Figures 3A and 3B, respectively. The 3'-end exon for DLL1 (Delta-like 1) was a "composite exon" (Figure 3A), whereas that for STAMBP (STAM binding protein) was a "hidden exon" (Figure 3B). The submitted sequences are indicated as "YourSeq" in each panel (Figures 3A and 3B). A prolonged (Figure 3A) or an additional block (Figure 3B) relative to the reference sequences is shown. These blocks represent prolonged or novel exons previously unidentified, that is, the "composite exons" and "hidden exons". Further analysis of these transcript variants suggested that they complied with GT/AG rules and were not incompletely processed mRNAs.
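The GT/AG check used to distinguish spliced transcripts from genomic contamination can be sketched as follows; this is a simplified illustration assuming sense-strand sequence and half-open exon block coordinates, not the authors' code:

```python
def introns_are_canonical(genomic_seq, exon_blocks):
    """Check the GT/AG rule: each intron between consecutive exon
    blocks should begin with 'GT' and end with 'AG'.
    exon_blocks are (start, end) half-open coordinates into genomic_seq."""
    for (_, left_end), (right_start, _) in zip(exon_blocks, exon_blocks[1:]):
        intron = genomic_seq[left_end:right_start]
        if not (intron.startswith("GT") and intron.endswith("AG")):
            return False
    return True
```

An alignment whose gaps all satisfy this rule is consistent with a processed mature mRNA; a gap with non-canonical boundaries suggests an artifact or unspliced material.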
For the remaining 16 candidates that were not successfully cloned, failure might be attributable to limited tissue cDNA sources, unsuitable primers, PCR conditions, low expression levels of the target transcripts, non-specific amplification, and so on. In summary, the majority (84%) of our poly(A) sites were supported by at least one of the validation steps, as well as by non-poly(A/T)-tailed ESTs (Additional file 2). Among these validation tests, new isoforms supported by RT-PCR, EST clone IDs, or BLAST hits against full-length cDNAs are more trustworthy than those supported only by Affymetrix validation, because the single-block ESTs in those 3'-end exon sites can be joined to the upstream part of the annotated genes, whereas sites supported only by the Affymetrix tiling array may belong to independent genes hidden in the introns. In total, 3,575 (33%, 3,575/10,844) 3'-end exon sites from 2,704 genes were taken as high-reliability (Additional file 1, sheet "validation_test"). The remaining sites, whether supported by Affymetrix transcriptional fragments or not, are relatively less reliable and more dependent on experimental validation to exclude independently expressed transcripts. Among our 10 validated novel isoforms from the RT-PCR experiments, 6 were successfully cloned even though their 3'-end exons were not supported by the above-mentioned EST clone IDs or BLAST hits against full-length cDNAs. The average length of the 3'-end exon sites in our results was 437 nucleotides, whereas the average length of the 3'-end exons of all human RefSeqs was about 820 nucleotides. This can be explained by the fact that ESTs represent only segments of complete transcript sequences; therefore, a 3'-end exon site represents only part of a full 3'-end exon. No doubt the actual 3'-end exon could extend upstream (toward the 5' end), and the precise nearest exon-exon boundary could then be revealed.
Theoretically, a 3'-end exon site corresponds to one full-length transcript, which needs PCR validation to reveal the complete 3'-end exon. This is the reason we prefer the term "3'-end exon site", not "3'-end exon", to describe these incomplete 3'-end exons in our study.

Comparison with other alternative-splicing-related databases

To make a comparison with the PolyA_DB2 database, the accession numbers of the poly(A/T)-tailed ESTs as well as the chromosome alignment positions of each cluster were used. As a result, a total of 1,410 (13%) 3'-end exon sites from 1,235 genes were covered by the PolyA_DB2 database (Table 1 and Additional file 1). For some of these overlapping poly(A) sites, we found more supporting ESTs. For example, the poly(A) site Hs.279594.1.27 in PolyA_DB2 included only one poly(A/T)-tailed sequence (BQ772378), but in our dataset the corresponding 3'-end exon site (ExonSiteNo 3479) was supported by two poly(A/T)-tailed ESTs (BQ772378 and AW293188) and seven non-poly(A/T)-tailed ESTs (BF902676, BQ933237, DB119003, CR744722, AW805980, AI612802, and AW198031). The above analysis suggests that our data complement previous studies well. To date, many alternative splicing databases have been developed [33-37,41,42], with the main purpose of collecting all alternative splicing candidates. One important common aspect of these databases is that multiple-block exons are used for analysis and precise exon-intron boundaries are required, whereas single-block ESTs are not well considered. We made a comparison between our data and two well-known alternative splicing databases, the ASAP II database (released in 2007) [43] and ASTD (released in 2008) [44], which superseded the ASD (Alternative Splicing Database) [41] and ATD (Alternative Transcript Diversity) [45] databases.
Figure 3. Chromosomal alignment results for novel transcripts of DLL1 and STAMBP. A prolonged (A) or an additional block (B) relative to the reference sequences. These blocks represent prolonged or novel exons previously unidentified, that is, the "composite exons" and "hidden exons" [19].

As shown in Table 1 and Additional file 1, among the 10,844 3'-end exon sites, only 6% (613/10,844) from 554 genes were covered by ASAP II, whereas 11% (1,250/10,844) from 1,115 genes were covered by the ASTD database (Table 1 and Additional file 1). This suggests that most of our data are novel. While our work was in progress, Muro et al identified the 3'-ends of human and murine genes by automated EST cluster analysis [46]; we compared their data with ours and found that about 37% (4,046/10,844) of our sites, from 2,895 genes, were the same (Table 1 and Additional file 1). Excluding all the above overlapping 3'-end exon sites and the sites having BLAST hits against full-length cDNAs, a total of 45% (4,905/10,844) from 3,269 genes are novel and unique to our data.

Novel transcript variants are derived from processed mature mRNAs

From the sequence analysis shown in Additional file 4, the canonical splice boundaries (GT/AG in introns) were evident. These novel isoforms were processed, with introns removed. The gene structures of two examples (Figure 3) further confirmed that the RT-PCR products were derived from processed mature mRNAs, not unspliced precursor mRNAs. On the other hand, the clone ID tracing analysis (see above) also revealed that the novel transcripts were derived from processed mature mRNAs. Polyadenylation usually requires a hexamer motif as a primary 3'-end processing element, usually called the polyadenylation signal (PAS).
A 50-nt region preceding the potential cleavage sites of all 17,201 ESTs was searched for motifs matching at least one of the thirteen known PAS hexamers (AATAAA, ATTAAA, TATAAA, AGTAAA, AAGAAA, AATATA, AATACA, CATAAA, GATAAA, AATGAA, TTTAAA, ACTAAA, AATAGA) [11]. As a result, about 65% (7,051/10,844) of all the 3'-end exon sites had at least one of these PAS hexamers (Additional file 1). Among the 2,957 (27%, 2,957/10,844) 3'-end sites having mRNA hits (see above; Table 1 and Additional file 1), about 63% (1,864/2,957) also had at least one of the thirteen above-mentioned PAS hexamers. The above analysis suggests that the novel transcript variants are derived from processed mature mRNAs, not unspliced precursor mRNAs or degradation products of pre-mRNA.

Novel transcript variants are truncated and missing functional domains

Intronic poly(A) sites often lead to truncated isoforms that lose important functional domains or localization signals. To evaluate whether domains are lost in the novel transcript variants from intronic poly(A) sites, all protein products affected by the intronic poly(A) sites were annotated. Domains were deleted or truncated in transcript variants from 7,641 poly(A) sites from 4,142 genes (Table 1 and Additional file 5). Detailed information on the involved domains in Additional file 5 is given in Additional file 6. Among all poly(A) sites, 1,616 could lead to deletion of a trans-membrane domain. As an example, the novel isoform of TNFRSF1A (tumor necrosis factor receptor superfamily, member 1A, also known as TNF-R1 or p55 TNFR), herein designated TNFRSF1Aβ as it represents the second isoform of TNFRSF1A, was analyzed. TNFRSF1A is a death receptor with two known ligands, tumor necrosis factor and lymphotoxin-α. Through interactions with these ligands, TNFRSF1A initiates cellular signals and regulates many cellular functions including inflammation, immune response, proliferation, and apoptosis [47-50].
The length of the PCR product is 1,339 bp, containing an open reading frame of 657 bp (Figure 4A). TNFRSF1Aβ consists of 218 amino acids (Figures 4A and 4B), and is generated from an intronic "hidden exon" between exon 5 and exon 6 (Figure 4C). TNFRSF1Aβ lacks the trans-membrane helix and the full cytoplasmic region including the DEATH domain compared to the full-length protein (Figure 4C), while retaining the signal peptide and the conserved binding domain, that is, the TNFR (TNF receptor) domain. Soluble TNFRSF1A, which functions as a natural inhibitor of tumor necrosis factor, has been observed and widely investigated [51-55]. Soluble TNFRSF1A is likely produced when TACE (tumor necrosis factor-alpha converting enzyme), a metalloprotease that cleaves transmembrane proteins, cleaves the TNFRSF1A ectodomain [56-58]. However, the TNFRSF1Aβ we found is a natural transcript, likely encodes a secreted protein product, and may play a regulatory role by competitively binding the TNFR ligand (TNF). As alternative poly(A) sites may be regulated in a tissue- or disease-specific pattern [59,60], in addition to domain annotation, expression profiles for the novel 3'-end exon sites are provided (Additional file 7). We compared the EST distribution in normal and cancerous tissues for each cluster; this revealed that some transcript variants may be expressed in a cancer-specific manner. Moreover, an additional supplemental file (Additional file 8) provides all the candidate poly(A) sites of each human gene, integrating PolyA_DB2, Muro et al's [46] and our results. In total, 112,074 sites from 19,748 genes were included.

Discussion

In this genome-wide analysis, we showed that alternative polyadenylation at intronic sites can generate many novel transcript variants. We preferentially selected intronic single-block ESTs for analysis because these ESTs were not well considered in previous studies [33-37], including Lee's research [32].
Figure 4. Nucleotide and deduced amino acid sequences and the genomic structure of human TNFRSF1Aβ.

So, our work is a good complement to previous studies [17,32]. Single-block ESTs within intergenic regions were not included in our analysis, although some of them represent gene extensions [61]. Single-block ESTs are often suspected to be contamination from genomic DNA. However, in our analysis, we showed that about 84% of the EST clusters were supported by at least one line of evidence: a hit from a full-length cDNA, multiple-block 5'-end ESTs, overlap with transcribed sites from the Affymetrix tiling array, or multiple supporting ESTs. So, with careful screening, single-block ESTs can serve as a valuable resource for discovering novel transcripts. Besides focusing on single-block ESTs, the pipeline in our analysis was designed to improve poly(A) site detection; all of these contribute to the discovery of novel intronic 3'-end exons. During our analysis, we found that more than 90% of the EST entries in our results were created before polyA_DB2 was released. This implies that most of the novel transcript variants were found through the improvement of our detection methods and the consideration of single-block ESTs, not merely through the growth of the transcript databases. Although different methods have been used for poly(A) site prediction [10,62], current methods achieve only moderate sensitivity and specificity. For example, about 47% of known poly(A) sequences in the polyA_DB database were not predicted by the support vector machine method (polya_svm) [10]. Among our predicted 3'-end exon sites, fewer than thirty could be predicted by polya_svm (threshold = 0.5, using the genomic region containing the poly(A) cluster ± 300 nucleotides for prediction). However, 68% of the 17,201 ESTs, which correspond to about 63% of the 10,844 3'-end exons (Additional file 1), have at least one of the thirteen known PAS hexamers.
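The hexamer scan behind these numbers (search a fixed upstream window for any of the thirteen PAS variants) can be sketched as a short reimplementation; this is our illustration, not the study's code:

```python
# The 13 PAS hexamer variants used in the study (after Tian et al. [11]).
PAS_HEXAMERS = {
    "AATAAA", "ATTAAA", "TATAAA", "AGTAAA", "AAGAAA", "AATATA", "AATACA",
    "CATAAA", "GATAAA", "AATGAA", "TTTAAA", "ACTAAA", "AATAGA",
}

def has_pas_signal(seq, cleavage_pos, window=50):
    """Return True if any known PAS hexamer occurs in the `window`
    nucleotides immediately upstream of the cleavage position."""
    upstream = seq[max(0, cleavage_pos - window):cleavage_pos].upper()
    # Slide a 6-nt frame across the upstream region.
    return any(upstream[i:i + 6] in PAS_HEXAMERS
               for i in range(len(upstream) - 5))
```

Applying such a scan to each supporting EST of a cluster yields the per-site PAS statistics reported above.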
This low detection rate by polya_svm likely results from the heterogeneity of the intronic poly(A) sites compared to conventional 3'-most poly(A) sites. It is worth noting that a method different from ours for identifying the 3'-ends of genes, based on EST frequency histograms along the genome, was reported by Muro et al [46]. They showed that 22-52% of sequences in commonly used human and murine "full-length" transcript databases may not currently end at bona fide polyadenylation sites. Since the average length of the 3'-end exons of all current human RefSeqs is about 820 nucleotides, they will get longer according to Muro et al's results. As the comparison above has shown, Muro et al's method and ours have respective advantages and complement each other; both will contribute to the identification of full-length transcripts. The novel 3'-end exons we detected could be defined as the "hidden exons" and "composite exons" described previously [19]. However, some apparent "hidden exons" could actually be "composite", because ESTs represent only partial cDNA sequences and may be extended to overlap with known exons. Not all intronic poly(A) sites correspond to actual novel transcript variants. For example, internal priming, due to a consecutive string of 'A's in the mRNA, results in false positives. For cDNA library construction, oligo-dT is often used as the primer for first-strand cDNA synthesis. This primer can anneal to an internal priming site, producing truncated sequences. Internal priming accounts for about 12% of the total 3' ESTs in the database [63]. In previous studies such as Tian's [11], the genomic DNA sequence around the predicted poly(A) site was checked; if there were more than 6 consecutive 'A's, or at least 7 'A's in a 10-nt window, it was suspected to be an internal priming site.
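The internal-priming criterion just quoted can be sketched as follows (an illustrative reimplementation; `downstream_seq` stands for the genomic sequence around the candidate cleavage site):

```python
def looks_like_internal_priming(downstream_seq):
    """Flag a candidate poly(A) site as a possible internal-priming
    artifact: >6 consecutive 'A's, or >=7 'A's in any 10-nt window."""
    s = downstream_seq.upper()
    if "A" * 7 in s:  # more than 6 consecutive A's
        return True
    # Slide a 10-nt window and count the A's in each.
    return any(s[i:i + 10].count("A") >= 7
               for i in range(max(1, len(s) - 9)))
```

A site flagged by this test may simply mark an A-rich genomic stretch where oligo-dT annealed internally, rather than a true cleavage/polyadenylation event.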
However, when we applied this criterion to the DNA sequence adjacent to the 3'-ends of human RefSeq mRNAs, we found that 19.4% (6,147/31,642) of the mRNAs carried such an 'A' trait at their 3'-ends. Using this criterion alone would therefore miss many true positive sites. In our analysis, we instead tried to reduce internal priming sites by eliminating all ESTs that aligned well with known RefSeq mRNAs (see Methods). In order to find as many novel transcript variants as possible, we did not require an exact signature of the exon junction and cleavage site; this differs from previous reports [17,19,30-32]. The 3'-end exon site provides the approximate locus of the "composite exon" or "hidden exon" of a novel isoform, and the supporting ESTs of a 3'-end exon site provide suitable sites for designing downstream primers to amplify the full coding region of the corresponding novel isoform. We performed RT-PCR to validate some candidates of interest, with a success rate of about 38% (10/26, see Results). Sequence analysis revealed that they were derived from processed mature mRNA, not unspliced precursor. Although most of the sites in our analysis are supported by at least two types of evidence, 1,468 sites contain only one EST sequence without any other support. Some of these sites may nonetheless represent true novel transcript variants with low expression levels. For example, the sites DB550185 (ExonSiteNo: 8501), DB347581 (ExonSiteNo: 8549), DB536313 (ExonSiteNo: 8628), DB517750 (ExonSiteNo: 9840), and DB512524 (ExonSiteNo: 10422) each contain only one EST sequence, but that EST is from a full-length cDNA clone (Additional files 1 and 3). One type of RNA polyadenylation controls RNA degradation in the nucleus [64-66]. The exosome plays a key role in the surveillance of nuclear mRNA synthesis and maturation.
Poly(A) tails that guide RNA to degradation by the exosome are usually shorter than those that increase mRNA stability, and these poly(A) tails are not made strictly of 'A's. Such sites were not actively eliminated in our analysis, but they are unlikely to greatly affect the results because they would not be detected under our stringent criteria. On the other hand, sequence analysis of the poly(A/T)-tailed ESTs revealed that a PAS was present in most of our ESTs. This result, combined with the other evidence, suggests that our predicted poly(A) sites represent bona fide mRNAs rather than unspliced precursor mRNAs or degradation products. Another type of RNA quality control is nonsense-mediated mRNA decay (NMD), which selectively degrades mRNAs containing a premature translation termination codon (PTC, also called a "nonsense codon") [67,68]. Although NMD mainly acts as a quality control to eliminate faulty transcripts, it is also involved in physiological and pathological functions [68,69]. Usually, NMD occurs when translation terminates more than 50-55 nucleotides upstream of an exon-exon junction, in which case components of the termination complex are thought to interact with the exon-junction complex (EJC) to elicit NMD [67]. Although 45% of alternatively spliced mRNAs are predicted to be NMD targets [68], an mRNA is immune to NMD if translation terminates less than 50-55 nucleotides upstream of the 3'-most exon-exon junction, or downstream of that junction. This means that if the natural stop codon of an mRNA lies in the 3'-end exon, the mRNA is not subject to NMD. The transcripts predicted in our study use alternative 3'-UTRs, assuming that the upstream exons are unchanged. Because we do not have the full-length form of each transcript, we cannot estimate the proportion of our results that would be affected by NMD. However, it has been reported that alternative polyadenylation may be an NMD-rescue regulatory mechanism for PTC-containing mRNAs [70].
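The 50-55 nt rule for NMD can be made concrete. A minimal sketch in Python, assuming a simplified representation in which a transcript is described by its exon lengths in spliced-mRNA coordinates plus the position of the last base of the stop codon (function and parameter names are ours):

```python
def is_nmd_candidate(exon_lengths, stop_codon_end, boundary=55):
    """Apply the 50-55 nt rule described in the text: an mRNA is a
    predicted NMD target if translation terminates more than `boundary`
    nucleotides upstream of the 3'-most exon-exon junction.
    exon_lengths: exon lengths in the spliced mRNA, 5'->3'.
    stop_codon_end: 1-based position of the last base of the stop codon."""
    if len(exon_lengths) < 2:
        return False  # no exon-exon junction, hence no EJC to elicit NMD
    last_junction = sum(exon_lengths[:-1])  # position of the 3'-most junction
    return last_junction - stop_codon_end > boundary

# Stop codon in the last exon: immune to NMD
print(is_nmd_candidate([200, 150, 300], stop_codon_end=400))  # False
# Stop codon 100 nt upstream of the last junction: predicted NMD target
print(is_nmd_candidate([200, 150, 300], stop_codon_end=250))  # True
```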
Our data appear to be consistent with this view. Indeed, all of the novel transcripts confirmed by RT-PCR in our study contain the natural stop codon in the last exon. A further analysis revealed that in nearly all the 3'-end ESTs, except some very short ones, stop codons exist in all three reading frames (data not shown). Thus, if there were no correct stop codon in the 5' exons, a stop codon in the 3'-end exon would be used. This differs from middle exons, which may not contain in-frame stop codons and therefore cannot conveniently support cloning of transcripts with complete coding regions. It should be noted that a large number of non-coding RNAs (ncRNAs) are expressed from the mammalian genome [71,72]. These ncRNAs include miRNAs, snoRNAs, snRNAs, piRNAs, and so on, which are involved in controlling gene expression at various levels in physiology and development. Non-coding RNAs can be derived from antisense or sense transcripts with overlapping or interlacing exons, or from retained introns. To investigate whether the internal intronic transcripts in our data actually represent known ncRNAs, we compared the chromosomal alignment positions of the 3'-end exon sites in our study with those of human ncRNAs from NONCODE v2.0 [72]. Of the 352,434 human ncRNA entries collected in NONCODE v2.0, fewer than one hundred overlapped with our 3'-end exon sites (data not shown). It therefore seems that most of our 3'-end exons do not represent known ncRNAs. However, we found that many poly(A) sites were located in introns upstream of the coding exons. If these are real, the potential novel transcripts would consist of the 5'-UTR of the original mRNA. Whether such transcripts encode small ORFs or regulatory small RNAs needs to be studied in the future.

Conclusion

In conclusion, our results identify novel 3'-end alternative splicing isoforms. The expression of these novel variants was confirmed with computational and experimental tools.
These data provide a genome-wide resource for the identification of novel human transcript variants with intronic polyadenylation sites, and offer a new view into the mystery of the human transcriptome.

Data source

The University of California, Santa Cruz (UCSC) Genome Browser Database (GBD) http://genome.ucsc.edu provides a common repository for genomic annotation data, including comparative genomics, genes and gene predictions, mRNA and EST alignments, and so on [73,74].

Intronic 3'-end exon site identification and EST clustering

To identify novel transcript variants, we focused on intronic 3'-end exon sites. The outline of the data analysis is shown in Figure 1. First, single-block ESTs were collected from the UCSC Genome Browser annotation file. The annotation file provides detailed information including chromosome localization, transcription direction, blockCount (the number of blocks in the alignment), and blockSizes (a comma-separated list of the sizes of each block). BlockCount loosely reflects the number of aligned exons, and it increases as EST quality decreases. Many ESTs were annotated with multiple blockCounts but are really single-block ESTs. To identify all single-block ESTs, we corrected for misplaced blocks as follows: if the chromosomal distance between consecutive blocks was less than 10 nucleotides, or if the chromosomal distance was more than 10 nucleotides but the block size was less than or equal to 10 nucleotides, the blockCount was reduced by one. If the final blockCount was one, the EST was kept as a single-block EST. Second, 3'-end exon sites were identified by a poly(A/T) tail. All single-block ESTs were checked for 5'-end 'T's or 3'-end 'A's as poly(A) tails in the reverse and forward orientations, respectively. An EST was first required to contain at least 10 consecutive 'A's or 'T's in either of its terminal 100 nucleotides.
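The blockCount correction just described can be sketched as follows. One detail the text leaves open is which block of each pair is size-tested; the interpretation below (the downstream block of each pair) is our assumption:

```python
def corrected_block_count(block_starts, block_sizes):
    """Correct the UCSC blockCount as described in the text: a block is
    discounted when the gap to the previous block is < 10 nt, or when the
    gap is larger but the block itself is <= 10 nt (likely misplaced).
    block_starts / block_sizes: chromosomal start and size of each block."""
    count = len(block_sizes)
    for i in range(1, len(block_sizes)):
        gap = block_starts[i] - (block_starts[i - 1] + block_sizes[i - 1])
        if gap < 10 or block_sizes[i] <= 10:
            count -= 1
    return count

def is_single_block(block_starts, block_sizes):
    return corrected_block_count(block_starts, block_sizes) == 1

# Two blocks separated by a 5-nt gap collapse to a single block:
print(is_single_block([100, 405], [300, 200]))   # True
# A genuine spliced alignment (large gap, sizable blocks) stays multi-block:
print(is_single_block([100, 5000], [300, 200]))  # False
```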
A poly(A/T) tail was then called if one of the following criteria was satisfied: (1) the EST had 20 or more consecutive 'A's or 'T's in either of its terminal 100 nucleotides; (2) the EST had 40 or more consecutive 'A's or 'T's anywhere in the sequence; (3) the EST had more than 15 'A's or 'T's within a 20-nucleotide window in either of its terminal 50 nucleotides. Criterion (1) was the most effective and identified most of the poly(A) tails. More consecutive 'A's or 'T's were required when the run was interrupted by other nucleotides owing to sequence quality. On the other hand, some proportion of ESTs contain vector sequences of various lengths, and poor sequencing quality at the ends as well as linker sequences in oligo(T) primers must be taken into account; criteria (2) and (3) were therefore introduced. To our knowledge, the distance from sequencing primers to the multiple cloning site (MCS) is not very long, so 100 nucleotides was used as the threshold. These criteria provide suitable tolerance for sequence quality. The chromosomal loci of these poly(A/T)-tailed ESTs were regarded as 3'-end exon sites. The remaining ESTs were considered non-poly(A/T)-tailed ESTs, and were used as supporting evidence for novel transcript variant expression if their chromosomal alignments overlapped with those of poly(A/T)-tailed ESTs. Third, the poly(A/T)-tailed EST candidates were used as queries in BLAST searches against the RefSeq mRNA database, with the E-value cutoff set at 1e-10. All ESTs with a hit were removed. The remaining ESTs were further BLASTed against the all-mRNA database with the same E-value cutoff to provide transcriptional evidence. Finally, the ESTs were mapped to genes. The transcriptional orientation of each gene was taken from the downloaded file "refSeqAli.txt.gz". The orientation of an EST sequence relative to its mRNA was determined by the presence of a 5' poly(T) tail or a 3' poly(A) tail.
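The tail-calling criteria described in the text can be sketched directly. A minimal Python version for the 3'-end poly(A) case (the mirror-image 5'-end poly(T) test is analogous; the function name is ours):

```python
def has_polya_tail(seq):
    """Poly(A)-tail test at the 3' end, following the criteria in the text:
    pre-screen: >= 10 consecutive 'A's within the terminal 100 nt; then
    (1) >= 20 consecutive 'A's within the terminal 100 nt;
    (2) >= 40 consecutive 'A's anywhere in the sequence;
    (3) > 15 'A's within any 20-nt window of the terminal 50 nt."""
    seq = seq.upper()
    tail100, tail50 = seq[-100:], seq[-50:]
    if "A" * 10 not in tail100:              # pre-screen
        return False
    if "A" * 20 in tail100:                  # criterion (1)
        return True
    if "A" * 40 in seq:                      # criterion (2)
        return True
    for i in range(max(len(tail50) - 19, 1)):  # criterion (3)
        if tail50[i:i + 20].count("A") > 15:
            return True
    return False

print(has_polya_tail("GATTACA" * 10 + "A" * 25))  # criterion (1) -> True
print(has_polya_tail("GATTACA" * 20))             # no tail -> False
```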
If both poly(A) and poly(T) tails existed in the same EST, overlapping poly(A/T)-tailed ESTs were used to determine the true orientation. Poly(A/T)-tailed ESTs and non-poly(A/T)-tailed ESTs were clustered according to their chromosomal alignments. The start and end positions of each cluster were recorded as the position of the 3'-end exon site, and the RefSeq gene corresponding to each cluster was determined. Although many genes have more than one RefSeq, we always selected the same RefSeq for all clusters from a gene, unless the EST alignment did not fall within that RefSeq locus. All ESTs were analyzed for their tissue source and classified as cancer-derived or normal-derived.

Tracing sequences via clone IDs

All clone IDs were extracted via EST accession numbers. For each clone ID, the opposite-end sequence was traced, and for each end sequence the GT/AG splicing boundary determined the transcriptional orientation. If the traced sequence had the same transcriptional orientation as the RefSeq mRNA and at least one overlapping alignment block, the EST clone was taken to represent a novel isoform.

Comparison with Affymetrix genomic tiling array data

The transcription fragment file of the Affymetrix genomic tiling array was downloaded from the UCSC Genome Browser http://hgdownload.cse.ucsc.edu/goldenPath/hg18/database/. The chromosomal locations of the fragments were compared with the 3'-end exon sites; if a fragment overlapped a 3'-end exon site, the EST was taken to represent a novel transcript variant.

RT-PCR experiments

RT-PCR experiments were performed to clone selected transcript variants of interest, using nested PCR. The primers are shown in Additional file 4. The cDNA template was the Clontech mixed human multiple tissue cDNA panel, comprising ten human tissues (brain, spleen, heart, skeletal muscle, thymus, liver, pancreas, lung, placenta, and kidney).
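Determining transcriptional orientation from the GT/AG splice boundary mentioned above can be sketched as follows. A minimal Python version, assuming the genomic intron sequence is available (the function name is ours):

```python
def intron_orientation(intron_seq):
    """Infer transcriptional orientation from the canonical GT/AG splice
    boundary: a genomic intron reading GT...AG is transcribed on the
    forward strand, while its reverse complement CT...AC indicates the
    reverse strand."""
    s = intron_seq.upper()
    if s.startswith("GT") and s.endswith("AG"):
        return "+"
    if s.startswith("CT") and s.endswith("AC"):
        return "-"
    return None  # non-canonical boundary: orientation undetermined

print(intron_orientation("GTAAGT" + "N" * 50 + "TTCAG"))  # '+'
print(intron_orientation("CTGAC" + "N" * 50 + "GCAC"))    # '-'
```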
The touchdown PCR protocol was as follows: denaturing for 30 s at 94°C; annealing for 30 s from 65°C to 60°C, decreasing by 0.5°C each cycle for the first 10 cycles and then at 60°C for the last 20 cycles; extension for 90 s at 72°C in all cycles, with a final extension at 72°C for 5 min. Each experiment was done in a 20 μl PCR reaction volume containing 2 μl of template, on a GeneAmp® PCR System 2700 amplifier. Conditions for the second PCR were the same, except that 3 μl of template derived from the first PCR products was used. The second PCR products were subjected to electrophoresis, recovered, and then either cloned into the pGEM-T Easy vector (Promega) or directly sequenced. The sequences were aligned with BLAT, ClustalW http://www.ebi.ac.uk/clustalw/, and BLAST.

Comparison with other alternative splicing related databases

The sequences of human alternative splicing variants were downloaded from the ASAP II http://bioinfo.mbi.ucla.edu/ASAP2/ and ASTD http://www.ebi.ac.uk/astd databases. The poly(A/T)-tailed EST candidates were used as queries to search these databases with the BLAST program; the E-value cutoff was set at 1e-10, and a minimum match of 60 nt with 80% identity was required. A comparison between our 3'-terminal sequence data and Muro et al.'s was also made using BLAST analysis. The accession numbers of the poly(A/T)-tailed ESTs, as well as the chromosomal alignment positions of each cluster, were used for comparison with the PolyA_DB2 database; ESTs without hits represent novel 3'-end exons. To supply a comprehensive list of poly(A) sites, we integrated PolyA_DB2, Muro et al.'s predictions [46], and our own (Additional file 8). The integration was done according to the chromosomal locations of the predicted sites: sites within 20 nt of each other were taken as one cluster. For each site, the strand to which the site belongs was determined by the direction of the known mRNA containing the site. Sites that aligned to random chromosomes were eliminated.
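The 20-nt clustering rule used in the integration above can be sketched in a few lines. A minimal Python version operating on positions from a single chromosome and strand (the function name is ours):

```python
def cluster_sites(positions, max_gap=20):
    """Cluster predicted poly(A) site positions as described in the text:
    sites on the same chromosome and strand that lie within `max_gap` nt
    of the previous site are merged into one cluster."""
    clusters = []
    for pos in sorted(positions):
        if clusters and pos - clusters[-1][-1] <= max_gap:
            clusters[-1].append(pos)
        else:
            clusters.append([pos])
    return clusters

sites = [1000, 1012, 1025, 2000, 2005, 5000]
print(cluster_sites(sites))
# [[1000, 1012, 1025], [2000, 2005], [5000]]
```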
Domain mapping

Most intronic poly(A) sites result in changes to the CDS region [17]. To determine the effects of these CDS changes, we mapped domains in all the potential novel transcript isoforms under the assumption that the exons upstream of the novel poly(A) site remain unchanged. Domain information was extracted from RefSeq. Secretory signals and transmembrane helices were analyzed with SignalP http://www.cbs.dtu.dk/services/SignalP/ and TMHMM http://www.cbs.dtu.dk/services/TMHMM/, respectively.

Internal priming site evaluation

We downloaded the alignment data of human RefSeq mRNAs from the UCSC Genome Browser and extracted the genomic DNA sequence from -10 to +10 around each 3'-end. If there were more than 6 consecutive 'A's, or at least 7 'A's in a 10 nt window, this was taken as an 'A' trait. Our criteria for poly(A) identification described above, especially criterion (3), were liable to introduce internal priming sites. To minimize false positives, validation tests (see above) were performed for the 3'-end exon candidates: BLAST analysis against the all-mRNA database, tracing of EST clone IDs, RT-PCR experiments, and comparison with Affymetrix genomic tiling array data. The 3'-end exon sites validated by the first three procedures are more reliable than those validated only by Affymetrix transcriptional fragments (or not at all), because of exon overlap with the containing genes. Therefore, if a 3'-end exon candidate was not supported by any of the first three validation tests, all the poly(A/T)-tailed ESTs in that candidate were re-analyzed with an extra criterion: the sequence downstream of the poly(A) site should not match the corresponding genomic region, so as to eliminate internal priming sites as far as possible. For this purpose, we compared two positions: the 3'-end of the EST alignment on the chromosome and the identified poly(A) site.
If their distance was within 20 nt, the corresponding poly(A/T)-tailed EST was kept; otherwise it was discarded. Moreover, if all the poly(A/T)-tailed ESTs matched the genome completely, or had at most 5 nt of unmatched hanging tail, the containing 3'-end exon sites were deleted.
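The final screen just described can be sketched per EST. Combining the two checks (distance to the poly(A) site, and a genome-templated tail suggesting internal priming) into a single function is our simplification; in the text the second check is applied across all tailed ESTs of a candidate site:

```python
def keep_candidate_est(est_align_end, polya_site, unmatched_tail_len,
                       max_distance=20, max_hanging=5):
    """Final screen for 3'-end exon candidates lacking independent support:
    keep a poly(A/T)-tailed EST only if the 3'-end of its chromosomal
    alignment lies within `max_distance` nt of the identified poly(A) site,
    and discard it when the tail beyond the alignment is short enough
    (<= `max_hanging` nt) to have been templated by the genome, which
    suggests internal priming."""
    if abs(est_align_end - polya_site) > max_distance:
        return False   # alignment end too far from the poly(A) site
    if unmatched_tail_len <= max_hanging:
        return False   # tail matches the genome: likely internal priming
    return True

print(keep_candidate_est(10050, 10060, unmatched_tail_len=18))  # True
print(keep_candidate_est(10050, 10060, unmatched_tail_len=3))   # False
```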
Antilinearity Rather than Hermiticity as a Guiding Principle for Quantum Theory

Currently there is much interest in Hamiltonians that are not Hermitian but instead possess an antilinear $PT$ symmetry, since such Hamiltonians can still lead to the time-independent evolution of scalar products, and can still have an entirely real energy spectrum. However, such theories can also admit energy spectra in which energies come in complex conjugate pairs, and can even admit Hamiltonians that cannot be diagonalized at all. Hermiticity is just a particular realization of $PT$ symmetry, with $PT$ symmetry being the more general. These $PT$ theories are themselves part of an even broader class of theories that can be characterized by possessing some general antilinear symmetry, as that requirement alone is both a necessary and a sufficient condition for the time-independent evolution of scalar products, with all the different realizations of the $PT$-symmetry program then being obtained. Use of complex Lorentz invariance allows us to show that the antilinear symmetry is uniquely specified to be $CPT$, with the $CPT$ theorem thus being extended to the non-Hermitian case. For theories that are separately charge conjugation invariant, the results of the $PT$-symmetry program then follow. We show that in order to construct the correct classical action needed for a path integral quantization one must impose $CPT$ symmetry on each classical path, a requirement that has no counterpart in any Hermiticity condition, since Hermiticity of a Hamiltonian is only definable after the quantization has been performed and the quantum Hilbert space has been constructed. We show that whether or not a $CPT$-invariant Hamiltonian is Hermitian is a property of the solutions to the theory and not of the Hamiltonian itself. Thus Hermiticity never needs to be postulated at all.

I. INTRODUCTION TO ANTILINEAR SYMMETRY

A.
Overview of the Antilinear Symmetry Program

Triggered by the fact that the eigenvalues of the non-Hermitian Hamiltonian H = p^2 + ix^3 are all real [1,2], there has been much interest in the literature (see e.g. the reviews of [3-5]) in Hamiltonians that are not Hermitian but have an antilinear PT symmetry, where P denotes parity and T denotes time reversal. (Under PT: p → −p, x → −x, i → −i, so that p^2 + ix^3 → p^2 + ix^3.) Even though the postulate of Hermiticity of a Hamiltonian has been an integral component of quantum mechanics ever since its inception, one can replace it by the more general requirement of antilinear symmetry (antilinearity) without needing to generalize or modify the basic structure of quantum mechanics in any way. Specifically, to construct a sensible Hilbert space description of quantum mechanics one needs to be able to define an inner product that is time independent, and one needs the Hamiltonian to be self-adjoint. There is no need for the inner product to be composed of a ket and its Hermitian conjugate, or for the Hamiltonian to be Hermitian. The inner product can be composed of any choice of bra and ket states as long as it is time independent, and for the PT case, for instance, the appropriate bra for time independence is the PT conjugate of the ket rather than its Hermitian conjugate. And in regard to self-adjointness, it is not necessary that the Hamiltonian be Hermitian; it is only necessary that the Hamiltonian be well-enough behaved in some domain (known as a Stokes wedge) in the complex coordinate plane so that in an integration by parts one can throw away surface terms. And as we show here, the necessary condition for this to be the case is that the Hamiltonian possess an antilinear symmetry. In regard to eigenvalues, we note that while the eigenvalues of a Hermitian Hamiltonian are all real, Hermiticity of a Hamiltonian is only a sufficient condition for such reality, not a necessary one. And again, the necessary condition
is that the Hamiltonian possess an antilinear symmetry, and we note that this condition is in a sense surprising since it involves an operator that acts antilinearly in the space of states rather than linearly, and is thus not ordinarily considered in linear algebra studies. While antilinear symmetry of a Hamiltonian is the necessary condition for the time independence of inner products, for self-adjointness, and for the reality of eigenvalues, antilinearity goes further, as it encompasses physically interesting cases that cannot be achieved with Hermitian Hamiltonians, while of course also encompassing Hermitian ones, since a Hamiltonian can both have an antilinear symmetry and be Hermitian. In general, antilinear symmetry requires that Hamiltonians have energy eigenvalues that are all real, or have some or all eigenvalues appearing in complex conjugate pairs (E = E_R ± iE_I). In addition, antilinear symmetry admits Jordan-block Hamiltonians that cannot be diagonalized at all. The complex conjugate pair case corresponds to the optical cavity gain (E = E_R + iE_I) plus loss (E = E_R − iE_I) systems that have been explored experimentally in the PT literature [6] and reviewed in [4,5]. In the presence of complex conjugate pairs of energy eigenvalues one still has a time-independent inner product, with the only allowed transitions being between the decaying and growing states. In consequence, when a state |A⟩ (the state whose energy has a negative imaginary part) decays into some other state |B⟩ (the one whose energy has a positive imaginary part), as the population of state |A⟩ decreases, that of |B⟩ increases in proportion. Thus, despite the presence of the growing state ⟨B|, the ⟨B|A⟩ transition matrix element never grows in time [11]. In contrast, in the standard approach to decays, one has just the decaying mode alone.
As regards Hamiltonians that are not diagonalizable, this is not just of abstract interest, since systems have been constructed that expressly correspond to the Jordan-block case for specific values of the parameters in a Hamiltonian [4,5], these values being referred to as exceptional points in the PT literature. The Jordan-block case has also been found to occur in the fourth-order derivative Pais-Uhlenbeck two-oscillator model when the two oscillator frequencies are equal, with the relevant Hamiltonian being shown [7] to not be Hermitian but to instead be PT symmetric (actually CPT symmetric, since charge conjugation plays no role here) and non-diagonalizable [8]. The fourth-order derivative conformal gravity theory (viz. gravity based on the action I_W = −α_g ∫ d^4x (−g)^{1/2} C_{λμνκ}C^{λμνκ}, where C_{λμνκ} is the Weyl conformal tensor) that has been offered [9,10] as a candidate alternative to the standard Einstein gravity theory also falls into this category, and is able to be ghost free and unitary at the quantum level because of it [7,8]. The Jordan-block case is particularly interesting since for any Jordan-block Hamiltonian the eigenvalues all have to be equal. Jordan-block Hamiltonians that have a total of two eigenvalues and have an antilinear symmetry cannot have all eigenvalues equal in the complex conjugate pair realization, and thus Jordan-block Hamiltonians must fall into the antilinear realization in which all eigenvalues are real. Jordan-block Hamiltonians with antilinear symmetry thus provide a direct demonstration of the fact that while Hermiticity implies the reality of eigenvalues, reality does not imply Hermiticity. It will be shown here that in both the Jordan-block case and the complex conjugate pair realization of antilinear symmetry the Hamiltonian is still self-adjoint. These two realizations thus provide a direct demonstration of the fact that while Hermiticity implies self-adjointness, self-adjointness does not imply Hermiticity.
With the exception of isolated studies such as the conformal gravity study, most of the study of Hamiltonians with an antilinear symmetry has been made within the context of non-relativistic quantum mechanics, a domain where one can in principle use any appropriate antilinear symmetry. While a study of general non-relativistic systems is of value for developing an understanding of the implications of antilinear symmetry, for any given non-relativistic quantum theory to be of physical relevance it has to be the non-relativistic limit of a relativistically invariant theory. (Even if the system of interest is composed of slow-moving components, the observer is free to move with any velocity up to just below the speed of light, and the physics cannot depend on the velocity of the observer.) With a CPT transformation having a direct connection to relativity, since its linear part is a specific complex Lorentz transformation, when combined solely with the requirement of the time independence of inner products, the use of complex Lorentz invariance uniquely fixes the allowed antilinear symmetry to be CPT. With the CPT theorem previously having been established only for Hermitian Hamiltonians, the CPT theorem is thus extended to the non-Hermitian case. CPT is thus the uniquely favored antilinear symmetry for nature, and any physically relevant theory has to possess it. Since one is below the threshold for particle creation at non-relativistic energies, in non-relativistic quantum mechanics CPT symmetry reduces to PT symmetry, thus putting the PT-symmetry program on quite a secure theoretical foundation. Thus for non-relativistic quantum mechanics antilinearity is more basic than Hermiticity, while for relativistic quantum field theory CPT symmetry is uniquely selected as the antilinear symmetry, with antilinearity again being more basic than Hermiticity. In this paper we shall explore antilinearity per se as an interesting concept in and of itself, and shall
explore its connection to CPT symmetry. In order to see how the requirement of antilinearity works in practice, for the benefit of the reader we provide a straightforward example.

B. Antilinear Symmetry for Matrices

A simple model in which one can illustrate the basic features of antilinear symmetry is the matrix given in [3],

M(s) = ( (1 + i, s), (s, 1 − i) ),

where the parameter s is real and positive, and the matrix is written as a pair of rows. The matrix M(s) does not obey the Hermiticity condition M_ij = M*_ji. However, if we set P = σ_1 and T = K, where K denotes complex conjugation, we obtain PTM(s)T^{-1}P^{-1} = M(s), with M(s) thus being PT symmetric for any value of the real parameter s. With the eigenvalues of M(s) being given by E_± = 1 ± (s^2 − 1)^{1/2}, we see that both eigenvalues are real if s is greater than or equal to one, and form a complex conjugate pair if s is less than one. And while the energy eigenvalues would be real and degenerate (both equal to one) at the crossover point where s = 1, at this point the matrix becomes of non-diagonalizable Jordan-block form [11]. Neither the s = 1 nor the s < 1 possibility is achievable with Hermitian Hamiltonians. As regards the Jordan-block case, we recall that in matrix theory Jordan showed that via a sequence of similarity transformations any matrix can be brought either to a diagonal form or to the Jordan canonical form, in which all the eigenvalues are on the diagonal, the only non-zero off-diagonal elements fill one of the diagonals next to the leading diagonal, and all such non-zero elements are equal to each other. To see this explicitly for our example, we note that when s = 1 a similarity transformation brings M(s = 1) to the Jordan-block form ( (1, 1), (0, 1) ), and on noting that for eigenvalue equal to one the eigenvector equation forces the lower component to vanish, we see that the transformed M(s = 1) possesses only one eigenvector, viz.
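The three eigenvalue regimes of M(s) can be checked numerically. A short sketch using numpy, taking the explicit form M(s) = ((1+i, s), (s, 1−i)), which is PT symmetric under P = σ_1, T = K and reproduces the quoted eigenvalues E_± = 1 ± (s^2 − 1)^{1/2} (the explicit matrix is our reconstruction, since the display equation is garbled in this copy of the text):

```python
import numpy as np

def M(s):
    # Assumed explicit form of the PT-symmetric matrix discussed in the text
    return np.array([[1 + 1j, s], [s, 1 - 1j]])

sigma1 = np.array([[0, 1], [1, 0]])

# PT symmetry: P T M T^{-1} P^{-1} = sigma1 @ conj(M) @ sigma1 equals M
assert np.allclose(sigma1 @ np.conj(M(2.0)) @ sigma1, M(2.0))

for s in (2.0, 0.5, 1.0):
    evals = np.linalg.eigvals(M(s))
    print(f"s = {s}: eigenvalues {np.round(evals, 6)}")
# s > 1: two real eigenvalues 1 +/- sqrt(s^2 - 1)
# s < 1: complex conjugate pair 1 +/- i sqrt(1 - s^2)
# s = 1: degenerate eigenvalue 1 (Jordan-block / exceptional point)
```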
the ~(1, 0) one with q = 0, where the tilde denotes transpose. Thus even though the secular equation |M(s = 1) − λI| = 0 has two solutions (each with λ = 1), there is only one eigenvector, and M(s = 1) cannot be diagonalized. (Since the energy eigenvalues have to share the only eigenvector available in the Jordan-block case, they must be degenerate.) Such lack of diagonalizability cannot occur for Hermitian matrices, showing that antilinear symmetry is richer than Hermiticity, with the above M(s = 1) being a clear-cut example of a non-Hermitian matrix whose eigenvalues are all real, and thus the simplest demonstration of the fact that while Hermiticity implies the reality of eigenvalues, reality does not imply Hermiticity. To understand why a PT-symmetric Hamiltonian must be Jordan block at a transition point such as s = 1, we note that in the region where the energy eigenvalues are in complex conjugate pairs their eigenfunctions are given by exp(−i(E_R + iE_I)t) and exp(−i(E_R − iE_I)t). Then, as we adjust the parameters in the Hamiltonian so that we approach the transition point from the complex energy region (cf. letting s approach one from below), not only do the two energy eigenvalues become equal, their eigenvectors become equal too. Thus at the transition point there is only one eigenvector, with the Hamiltonian then necessarily being Jordan block. While the Hamiltonian loses an eigenvector at the transition point, the Hilbert space on which it acts must still contain two wave functions, since it did so before the limit was taken. The combination that becomes the eigenvector in the limit is the sum [exp(−i(E_R + iE_I)t) + exp(−i(E_R − iE_I)t)]/2, which tends to the stationary exp(−iE_R t) as E_I → 0. The second combination is the difference [exp(−i(E_R + iE_I)t) − exp(−i(E_R − iE_I)t)]/(2E_I), which in the same limit behaves as the non-stationary t exp(−iE_R t). The Hilbert space on which the Hamiltonian acts is still complete; it is just the set of stationary states that is not [8]. Because of this, wave packets have to be constructed out of the complete set of stationary and non-stationary states combined, with the associated inner
products still being preserved in time [8]. For the Jordan-block matrix ((1, 1), (0, 1)) given above, for instance, the right- and left-Schrödinger-equation wave functions are non-stationary, containing terms linear in t; nonetheless, despite the presence of these linear-in-t terms, their overlap is time independent. In this paper we will have occasion to return to Jordan-block Hamiltonians, and especially to discuss theories such as the illustrative Pais-Uhlenbeck two-oscillator model, whose Hamiltonian appears to be Hermitian but in fact is not. For the complex conjugate eigenvalue case we can also construct a time-independent inner product. As we shall show in detail in Sec. II, to do this we need to introduce an operator V that effects VHV^{-1} = H^†. Thus for M(s < 1), if we set sinh β = (1 − s^2)^{1/2}/s = ν/s, the needed V operator is given by [11]

V = [1/(i sinh β)] (σ_0 + σ_2 cosh β),

with the associated right-eigenvectors of M(s < 1) given in [11]. The V-operator-based inner products obey expressly time-independent orthogonality and closure relations, with the associated propagator D(E) then following [11]. (This propagator is the analog of the ⟨Ω_+|φ(0, t)φ(0, 0)|Ω_−⟩ + ⟨Ω_−|φ(0, t)φ(0, 0)|Ω_+⟩ Green's function discussed in Sec. VI below.) Now we recall that in the conventional quantum-mechanical discussion of potential scattering, near a resonance one can parametrize the energy-dependent phase shift as tan δ = Γ/(E_0 − E), so that δ = π/2 at E = E_0. With this phase shift the scattering amplitude behaves as f(E) ∝ Γ/(E_0 − E − iΓ), and the propagator has the standard Breit-Wigner form D_BW(E) = 1/(E − E_0 + iΓ), with both f(E) and D_BW(E) possessing only a decaying mode that behaves as exp(−i(E_0 − iΓ)t/ħ). This decaying mode is associated with a time delay of order ħ/Γ due to the scattered wave being held by the potential. In contrast, in the complex conjugate pair case, one has both growing and decaying modes, with the scattering amplitude having poles at both E_0 + iΓ and E_0 − iΓ, corresponding to both time advance and time delay. In the
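The time independence of the V-based inner product in the complex conjugate pair regime can be verified numerically. A sketch using numpy, taking V = [1/(i sinh β)](σ_0 + σ_2 cosh β) as quoted in the text together with the explicit form M(s) = ((1+i, s), (s, 1−i)), which is our assumption for the garbled display equation:

```python
import numpy as np

s = 0.5                                   # s < 1: complex conjugate pair
M = np.array([[1 + 1j, s], [s, 1 - 1j]])  # assumed explicit form of M(s)

# V operator from the text: V = (1/(i sinh b))(sigma0 + sigma2 cosh b),
# with sinh b = sqrt(1 - s^2)/s, hence cosh b = 1/s.
sinh_b = np.sqrt(1 - s**2) / s
cosh_b = 1.0 / s
sigma0 = np.eye(2)
sigma2 = np.array([[0, -1j], [1j, 0]])
V = (sigma0 + cosh_b * sigma2) / (1j * sinh_b)

# V intertwines M with its Hermitian conjugate: V M V^{-1} = M^dagger
assert np.allclose(V @ M @ np.linalg.inv(V), M.conj().T)

# The V-based norm psi^dagger V psi is then conserved even though the
# energies are complex: evolve psi(t) = exp(-i M t) psi(0) and check.
evals, U = np.linalg.eig(M)
def psi(t, psi0):
    return U @ np.diag(np.exp(-1j * evals * t)) @ np.linalg.inv(U) @ psi0

psi0 = np.array([1.0, 0.5 + 0.2j])
norms = [psi(t, psi0).conj() @ V @ psi(t, psi0) for t in (0.0, 1.0, 5.0)]
print(np.round(norms, 8))  # the three values coincide
```

Since VMV^{-1} = M^†, one has M^†V = VM, and the time derivative of ψ^†Vψ vanishes identically, which is what the numerical check confirms.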
presence of both types of poles the propagator D(E) has poles at both E_0 + iΓ and E_0 − iΓ, and we note that because of the relative minus sign between the residues of the two pole terms, as expressly required by the orthogonality and closure relations, D(E) takes the form

D(E) = (1/2)[1/(E − E_0 + iΓ) − 1/(E − E_0 − iΓ)] = −iΓ/[(E − E_0)^2 + Γ^2].

With the imaginary part of D(E) automatically having the same sign as the imaginary part of a standard Breit-Wigner, and with it behaving the same way as a Breit-Wigner at the resonance peak where E = E_0, the interpretation of D(E) as a probability is thus the standard one associated with decays. For our purposes here, we note that the utility of having a complex conjugate pair of energy eigenvalues is that even with states that decay or grow one still has a time-independent inner product, since the only non-trivial transitions are matrix elements that connect the decaying and growing modes. Thus, with a time-independent inner product, the presence of a time advance does not lead to a propagator that violates probability conservation, and the complex conjugate pair realization of antilinear symmetry is fully viable. In passing we note that the interplay between the two complex conjugate poles exhibited here has a pre-PT-symmetry-theory antecedent in the Lee-Wick analysis of the complex conjugate pair realization of the Lee model [12], where one has the same D(E) and no violation of probability conservation. While we can make contact between the antilinear-symmetry D(E) propagator and the Breit-Wigner D_BW(E) propagator, there are still some key differences between the two cases. For the complex conjugate case there exist experimentally established processes that exhibit both gain and loss, while for the standard Breit-Wigner case one has only loss. Also, as we show in the Appendix, even in the complex conjugate pair case one can still construct a propagator that is causal, i.e.
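The algebra of the two-pole propagator is easy to check. The explicit form used below, D(E) = (1/2)[1/(E − E_0 + iΓ) − 1/(E − E_0 − iΓ)], is our reconstruction of the garbled display equations, chosen to be consistent with the surrounding claims (poles at E_0 ± iΓ, a relative minus sign between the residues, the same-sign imaginary part as a Breit-Wigner, and agreement with the Breit-Wigner at the peak):

```python
import numpy as np

E0, Gamma = 2.0, 0.3   # illustrative resonance parameters (our choice)

def D(E):
    """Two-pole propagator: poles at E0 +/- i*Gamma, relative minus sign
    between the residues (reconstructed form, see text)."""
    return 0.5 * (1 / (E - E0 + 1j * Gamma) - 1 / (E - E0 - 1j * Gamma))

def D_closed(E):
    # equivalent closed form: purely imaginary, negative-definite Im part
    return -1j * Gamma / ((E - E0) ** 2 + Gamma ** 2)

def D_BW(E):
    """Standard Breit-Wigner with only the decaying pole at E0 - i*Gamma."""
    return 1 / (E - E0 + 1j * Gamma)

E = np.linspace(0, 4, 9)
assert np.allclose(D(E), D_closed(E))                       # the two forms agree
assert np.all(np.sign(D(E).imag) == np.sign(D_BW(E).imag))  # same-sign Im parts
print(D(E0), D_BW(E0))  # both equal -i/Gamma at the resonance peak
```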
one that does not take support outside the light cone, with the presence of the time advance that accompanies the time delay not violating causality.

In analyzing the eigenspectrum of M(s > 1), even though M(s > 1) does not obey M_ij = M*_ji, we should not characterize the s > 1 situation as being a non-Hermitian case in which all energy eigenvalues are real. The reason for this is that on setting sin α = (s² − 1)^{1/2}/s, we can construct a similarity transformation S(s > 1) under which M(s > 1) is brought to a Hermitian form M′ = S(s > 1)M(s > 1)S⁻¹(s > 1). With M′ being Hermitian, the matrix M(s > 1) is actually Hermitian in disguise. The similarity transformation needed to bring M(s > 1) to a Hermitian form is not unitary and is thus a transformation from a skew basis to an orthogonal one. The definition of Hermiticity as the condition H_ij = H*_ji is not a basis-independent definition. To be specific, consider a Hamiltonian H that obeys H_ij = H*_ji in some given basis. Now apply a similarity transformation S to a new basis to construct H′ = SHS⁻¹. In the new basis we have [H′]† = (S⁻¹)†H†S† = (S†)⁻¹HS†. As we see, [H′]† is not in general equal to H′, being so only if S is unitary. Thus to say that a Hamiltonian is Hermitian is to say that one can find a basis in which H_ij is equal to H*_ji, with the basis-independent statement being that the eigenvalues of a Hermitian operator are all real and the eigenvectors are complete. And if a Hamiltonian with these properties is not in a basis in which H_ij = H*_ji, the Hamiltonian is Hermitian in disguise. In consequence, matrices such as M(s > 1) are Hermitian in disguise even though they do not appear to be so, and are in the quasi-Hermitian class of operators discussed in [13]. With the Hamiltonian H = p² + ix³ possessing an energy eigenspectrum that is real and complete, H = p² + ix³ is also Hermitian in disguise. The utility of antilinear symmetry is that since it is the necessary condition for the reality of eigenvalues, if a Hamiltonian does not possess an antilinear symmetry one can conclude
immediately that not all of its eigenvalues can be real, and one is able to make such a claim without needing to actually determine any single eigenvalue at all or seek a similarity transformation that could establish that the Hamiltonian is Hermitian in disguise. For any Hamiltonian that does descend from a relativistic theory, the required antilinear symmetry is uniquely prescribed to be CPT, so one only has to check whether or not it might be CPT invariant. (We will show below that this is in fact the case for H = p² + ix³.) On recognizing the matrix M(s) as being Hermitian in disguise when s > 1, we see that whether or not a Hamiltonian is Hermitian or Hermitian in disguise is a property of the solutions to the theory, and is something that cannot be determined by inspection. While we have seen that a Hamiltonian can be Hermitian (in disguise) even if it does not appear to be so, below we will find examples of Hamiltonians that are not Hermitian (and not even Hermitian in disguise) even though they do appear to be so.

Even though a non-linear condition such as H = H† is not preserved under a similarity transformation, we should note that in contrast commutation relations are preserved under similarity transformations. While standard for linear operators, a relation such as [H, A] = 0 where A = LK (A antilinear, L linear) is also preserved when A is antilinear, though, as noted in [11], under a similarity transformation it would be the transformed L that would obey [H′, L′K] = 0.
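The eigenvalue pattern of the illustrative M(s) can be checked numerically. The sketch below assumes the concrete form M(s) = sσ₁ + iσ₃, chosen here only because it reproduces the quoted sin α = (s² − 1)^{1/2}/s and sinh β = (1 − s²)^{1/2}/s; adding a real multiple of the identity (which the paper's M(s) may contain) would change none of the conclusions.

```python
import numpy as np

# Assumed concrete form of the illustrative matrix: M(s) = s*sigma_1 + i*sigma_3.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
M = lambda s: s * s1 + 1j * s3

# s > 1: real eigenvalues +/- (s**2 - 1)**0.5 even though M is not Hermitian
ev_gt1 = np.linalg.eigvals(M(2.0))
# s < 1: a complex conjugate pair +/- i*(1 - s**2)**0.5
ev_lt1 = np.linalg.eigvals(M(0.5))
# s = 1: rank drops to 1, a Jordan block that cannot be diagonalized
rank1 = np.linalg.matrix_rank(M(1.0))

# A non-unitary similarity transform diagonalizes M(2.0) into a manifestly
# Hermitian (real diagonal) matrix: M(2.0) is "Hermitian in disguise".
w, P = np.linalg.eig(M(2.0))
Mdiag = np.linalg.inv(P) @ M(2.0) @ P
```

The same three-way split (real pair, conjugate pair, Jordan block) is what the text attributes to the s > 1, s < 1 and s = 1 regimes.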
Specifically, the commutation relation [CPT, H] = 0 is preserved under a similarity transform, a very powerful constraint, with the linear part of CPT transforming as would be needed. The same is true for the PT operator. Specifically, if we want to maintain the discrete properties of P and T, we set P = π, T = τK = Kτ* and require that P² = I, T² = I, [P, T] = 0, to then obtain π² = I, ττ* = I, πτ = τπ*. If we now make a similarity transform SPS⁻¹ = P′, STS⁻¹ = T′, then on transforming a Hamiltonian H obeying H = PTHTP = πτH*τ*π, we find that PT symmetry is maintained. Thus unlike a Hermiticity condition, an antilinear symmetry relation is not basis dependent, and is thus far more powerful to work with. While our discussion of M(s) has only been made for finite-dimensional matrices, it immediately generalizes to infinite-dimensional ones since one can work in occupation number space. However, something unexpected can happen. If we evaluate a matrix element such as ⟨Ω|Ω⟩ where |Ω⟩ is the no-particle state, we can introduce a complete set of position eigenstates according to ⟨Ω|Ω⟩ = ∫dx ⟨Ω|x⟩⟨x|Ω⟩ = ∫dx ψ₀*(x)ψ₀(x) where x is real, and it can turn out that wave functions such as ψ₀(x) might not be normalizable on the real x axis. While they would be normalizable on the real x axis in the standard self-adjoint Hermitian case, in the antilinear case one might need to continue the coordinate x into the complex plane in order to obtain a wave function that is normalizable, and it is only in such complex domains where ∫dx ψ₀*(x)ψ₀(x) is finite that the Hamiltonian is then self adjoint. We thus turn now to a discussion of self-adjointness as it pertains to Hamiltonians with antilinear symmetry.

C.
Self-Adjointness

In regard to self-adjointness, we note that to show that a quantum-mechanical operator such as the momentum operator p = −i∂ₓ (or the Hamiltonian that is built out of it) acts as a Hermitian operator in the space of the wave functions of the Hamiltonian, one has to integrate by parts and be able to throw away spatially asymptotic surface terms. In a PT-symmetric or some general antilinearly symmetric situation this procedure can be realized by allowing for the possibility that one may have to rotate into the complex (x, p) plane in order to find so-called Stokes wedges in which one can throw surface terms away [3] when it is not possible to do so on the real axis. A typical example is the divergent Gaussian exp(x²). It is not normalizable on the real x-axis, but is normalizable on the imaginary x-axis, and would be of relevance if the momentum operator p were to be anti-Hermitian rather than Hermitian, and thus represented by ∂ₓ, with the [x, p] = i commutator being realized as [−ix, ∂ₓ] = i. The difference between the −i∂ₓ and ∂ₓ representations of the momentum operator is only in a fully permissible commutation-relation-preserving similarity transformation into the complex plane through an angle θ = −π/2, since for a general angle θ the operator Ŝ = exp(−θp̂x̂) effects x → e^{iθ}x, p → e^{−iθ}p, while preserving both the relation [x, p] = i and the eigenvalues of a Hamiltonian Ĥ(x, p) that is built out of x and p.
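The preservation of the canonical commutator under this complex rotation can be checked symbolically. A minimal sympy sketch, with an arbitrary test function f (the representation x → e^{iθ}x, p → e^{−iθ}(−i d/dx) is the one stated above):

```python
import sympy as sp

x, theta = sp.symbols('x theta')
f = sp.Function('f')(x)  # arbitrary test function

# Rotated canonical pair: x' acts by multiplication with exp(i*theta)*x,
# p' acts as exp(-i*theta)*(-i d/dx)
xprime = lambda g: sp.exp(sp.I * theta) * x * g
pprime = lambda g: sp.exp(-sp.I * theta) * (-sp.I) * sp.diff(g, x)

# [x', p'] f = i f for every theta: the commutation relation is preserved
comm = sp.simplify(xprime(pprime(f)) - pprime(xprime(f)))
# At theta = -pi/2 momentum is represented by d/dx and x by -i*x,
# realizing [-i*x, d/dx] = i as in the text.
```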
A commutation relation is actually not defined until one can specify a good test function on which it can act according to [x, p]ψ(x) = iψ(x), as the commutation relation can be represented by [x′, −i∂_{x′}]ψ(x′) = iψ(x′) for any x′ = x exp(iθ), with wave functions potentially only being normalizable for specific, non-trivial domains in θ. It is the domain in the complex x plane for which the test function is normalizable that determines the appropriate differential representation for an operator. Until one has looked at asymptotic boundary conditions, one cannot determine whether an operator is self-adjoint or not, since such self-adjointness is determined not by the operator itself but by the space of states on which it acts. When acting on its own eigenstates according to x̂|x⟩ = x|x⟩, the position operator is self-adjoint and Hermitian. When acting on the eigenstates of Ĥ(x, p) it may not be self-adjoint until it is continued into the complex plane according to x̂′ = Ŝx̂Ŝ⁻¹. However now x̂′ would not be Hermitian. Since p̂′ = Ŝp̂Ŝ⁻¹ would then not be Hermitian either, Ĥ′(x̂′, p̂′) = ŜĤ(x̂, p̂)Ŝ⁻¹ would in general not be Hermitian as well. In securing self-adjointness one can thus lose Hermiticity. It is only when x is self-adjoint when acting on the eigenstates of Ĥ(x, p) without any continuation into the complex plane being needed (viz. θ = 0) that Ĥ(x, p) could be Hermitian, with its wave functions ψ(x) = ⟨x|ψ⟩ then being normalizable on the real x axis.
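The divergent-Gaussian example mentioned earlier can be illustrated with a quick numerical sanity check (the numbers below are illustration only, not taken from the paper): exp(x²) has a divergent norm on the real axis, but along the imaginary axis x = iy it becomes exp(−y²) and the norm integral converges to a standard Gaussian value.

```python
import numpy as np

# Along the imaginary axis x = i*y the divergent Gaussian exp(x**2)
# becomes exp(-y**2), and the norm integral
#   \int dy |exp((i*y)**2)|**2 = \int dy exp(-2*y**2) = (pi/2)**0.5
# is finite, whereas on the real axis |exp(x**2)|**2 = exp(2*x**2) diverges.
y = np.linspace(-10.0, 10.0, 200001)
psi = np.exp((1j * y) ** 2)                   # exp(x**2) evaluated at x = i*y
norm = np.sum(np.abs(psi) ** 2) * (y[1] - y[0])   # ~ (pi/2)**0.5 ~ 1.2533
```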
A self-adjointness mismatch between the action of the position and momentum operators on their own eigenstates and on those of the Hamiltonian is one of the key components of the PT-symmetry program or of any general antilinear-symmetry program, with a continuation into the complex (x, p) plane being required whenever there is any such mismatch, something that is expressly found to be the case for H = p² + ix³. The art of the PT-symmetry program then is the art of determining in which domain in the relevant complex plane the wave functions of a Hamiltonian are well-behaved asymptotically, with many examples being provided in [3][4][5]. In the following we will present examples in which manifestly non-Hermitian Hamiltonians that are either Jordan block or have complex conjugate eigenvalues are nonetheless self-adjoint in appropriate Stokes wedges in the complex plane. Self-adjointness is thus more general than Hermiticity and encompasses it as a special case.

D. Organization of the Paper

The present paper is organized as follows. Given our above study of the properties of the particular matrix M(s), in Sec. II we extend our study of antilinearity of a Hamiltonian as an alternative to Hermiticity to the general case. And following [14] and [11] we show that antilinearity is both necessary and sufficient to secure the time independence of the most general allowed Hilbert space inner products, and thus secure conservation of probability. While most of the results presented in Sec. II are already in the literature, some of our derivations are new. Also new is the centrality and emphasis we give to the time independence of inner products. In Secs. I and II we study antilinearity in and of itself, while starting in Sec. III we study how relativity constrains this analysis. The material presented in Secs. I and II is primarily preparatory, with the remainder of the paper then presenting new results that had not previously been reported in the literature.
In Sec. III we show that the Lorentz group has a natural complex extension, and then identify the linear component of a CPT transformation as being a specific complex Lorentz transformation. With this property we can then show that once one imposes complex Lorentz invariance the antilinear symmetry associated with the time independence of inner products is uniquely prescribed to be CPT.

In Sec. IV we apply these results to some interesting CPT theories such as the H = p² + ix³ theory and the fourth-order derivative Pais-Uhlenbeck two-oscillator model. We show that the Pais-Uhlenbeck model admits of explicit realizations in which the energy eigenvalues of the Hamiltonian come in complex conjugate pairs or in which the Hamiltonian is a Jordan-block Hamiltonian that cannot be diagonalized at all. Both of these two realizations are shown to be CPT symmetric, to thus provide explicit examples of manifestly non-Hermitian Hamiltonians that are CPT invariant.

One of the surprising results of our work is that we find that whether we use Hermiticity to derive the CPT theorem or use complex Lorentz invariance and probability conservation to derive the CPT theorem, in both cases the allowed Hamiltonians that we obtain are always of exactly the same form, the same operator structure and the same reality pattern for coefficients. Despite this, it does not follow that the only allowed Hamiltonians are then Hermitian, since the Hermiticity that is being appealed to here is that of the individual operators in the Hamiltonian and their coefficients and not that of the Hamiltonian itself. And we had noted above that when the generic Ĥ(x, p) acts on the eigenstates of x it might not be self-adjoint even though x itself is self-adjoint when acting on that very same basis. Moreover, as the M(s) example given above shows, even if the secular equation |H − λI| = 0 is a real equation for any value of the parameters, it can have real or complex solutions depending on the range of the
parameters.As we discuss in Secs.III and IV, if we do start only from the requirements of time independence of inner products and complex Lorentz invariance, we may then obtain Hamiltonians that are Hermitian for certain ranges of parameters.For such cases though, we cannot immediately tell ahead of time what those ranges might be and need to solve the theory first, with Hermiticity not being determinable merely by inspection of the form of the Hamiltonian.Thus Hermiticity of a Hamiltonian never needs to be postulated, with it being output rather than input in those cases where it is found to occur. In Sec.V we show that the illustrative two-oscillator Pais-Uhlenbeck Hamiltonian is self-adjoint even when it is Jordan block or when energy eigenvalues come in complex conjugate pairs, to thus provide an explicit example in which a non-Hermitian Hamiltonian is self-adjoint.In this section we show that in general the connection between antilinearity and self-adjointness is very tight -for any Hamiltonian antilinearity implies self-adjointness, and self-adjointness implies antilinearity.We should thus associate self-adjointness with antilinearity rather than with Hermiticity, with its association with Hermiticity being the special case. In deriving the CP T theorem in Sec.III, we find that a CP T -invariant Hamiltonian has to obey H = H * .With the Euclidean time evolution operator being given by exp(−τ H), it follows that for time-independent Hamiltonians the Euclidean time Green's functions and path integrals have to be real.In Sec.VI we explore this aspect of the CP T theorem in some Hermitian and non-Hermitian cases and show that CP T symmetry is a necessary and sufficient condition for the reality of the field-theoretic Euclidean time Green's functions and path integrals, while Hermiticity is only a sufficient condition for such reality.As such, this result generalizes to field theory a similar result found in [15,16] for matrices. 
In quantizing a physical system one can work directly with quantum operators acting on a Hilbert space and impose canonical commutation relations for the operators, a q-number approach, or one can quantize using Feynman path integrals, a purely c-number approach. In constructing the appropriate classical action needed for the path integral approach, one ordinarily builds the action out of real quantities, because real quantities are the eigenvalues of Hermitian quantum operators. However, as we show in Sec. VII, this is inadequate in certain cases, and particularly so in minimally coupled electrodynamics (while ∂_µ − eA_µ is real, it is only i∂_µ − eA_µ that can be Hermitian in the quantum case), with the correct i∂_µ − eA_µ based classical action being constructed by requiring that it be CPT symmetric instead (classically i∂_µ and eA_µ are both CPT even, since classically the product eA_µ is C even).

Since the space of states needed for self-adjointness could be in the complex plane rather than on the real axis, one has to ask what happens to the antilinear symmetry as one continues into the complex plane. In Sec. VIII we show that despite the fact that the antilinear symmetry acts non-trivially on angles that are complex, in such a complex plane continuation both the antilinear operator and the Hamiltonian transform so that their commutation relation is preserved.
A central theme of this paper is the primacy of antilinearity over Hermiticity. This is manifested in the canonical quantization approach to quantum mechanics, where c-number Poisson brackets are replaced by q-number commutators, and one constructs a q-number Hamiltonian operator that acts on quantum-mechanical states in a quantum-mechanical Hilbert space. In and of itself nothing in the canonical quantization procedure makes any reference to Hermiticity per se or forces the q-number Hamiltonian to necessarily be Hermitian (one usually just takes it to be so). However, as discussed in Sec. VIII, there is, as with any symmetry, a correlation between an antilinear symmetry in the classical theory and one in the quantum theory that is derived from it by canonical quantization. A quantum theory can thus inherit an antilinear symmetry from an underlying classical theory, and a quantum Hamiltonian can have an antilinear symmetry without being Hermitian, with antilinearity being more far reaching than Hermiticity while encompassing it as a special case.
The contrast between antilinearity and Hermiticity is even more sharp in path integral quantization, since path integral quantization is a completely c-number approach in which no reference is made to any quantum-mechanical Hilbert space at all. Rather, path integral quantization enables one to construct quantum-mechanical matrix elements (viz. Green's functions such as ⟨Ω|T[φ(x₁)φ(x₂)]|Ω⟩ or the more general ones such as ⟨Ω_L|T[φ(x₁)φ(x₂)]|Ω_R⟩ that we introduce below in Sec. VI) without one needing to construct the quantum operators and Hilbert space themselves. Once one has constructed these matrix elements one can construct a quantum-mechanical Hamiltonian time evolution operator and Hilbert space that would yield them. However, since there is no reference to any quantum-mechanical Hilbert space in the path integral itself (it being an integral over strictly classical paths alone), there is no immediate reason to presume that the resulting quantum-mechanical system would be one in which the quantum Hamiltonian would be Hermitian.
Path integral quantization thus raises the question [11] of how quantum-mechanical Hermiticity ever comes into physics at all, and what there would be in any given c-number path integral that would indicate whether the associated quantum-mechanical Hamiltonian would or would not be Hermitian. In Sec. VIII we address this question by showing that for any pair of canonical variables such as q and p, there is a correspondence principle between complex similarity transformations on the q-number q and p in the quantum theory and symplectic transformations through the selfsame complex angles on the c-number q and p in the classical theory. Use of this complex plane correspondence principle enables us to show that only if the path integral exists with a real measure and its Euclidean time continuation is real could the quantum-mechanical Hamiltonian be Hermitian, though even so, the results of this paper require that it would also possess an antilinear CPT symmetry. However, if the path integral only exists with a complex measure, the Hamiltonian would be CPT symmetric but not Hermitian (though it could still be Hermitian in disguise). It is thus through the existence of a real measure path integral that Hermiticity can enter quantum theory.
In Sec. IX we make some final comments. In an Appendix we discuss the Majorana basis for the Dirac gamma matrices, a basis that is very convenient for discussing the relation between CPT transformations and the complex Lorentz group. Also in the Appendix we present a quantization scheme for fermion fields in which complex conjugation acts non-trivially on the fermion fields. With this quantization scheme we find that all spin zero fermion multilinears are real, something that will prove central to the proof of the CPT theorem that we give in this paper. In addition, we compare and contrast the charge conjugation operator C with the C operator that appears [3] in PT studies. Finally in the Appendix we show how causality is maintained in all the various realizations (real, Jordan block, complex conjugate pair energy eigenvalues) of a non-Hermitian but CPT-symmetric fourth-order derivative scalar field theory.

II. ANTILINEARITY AS A BASIC PRINCIPLE FOR QUANTUM THEORY

A. Necessary Condition for the Reality of Eigenvalues

In order to identify the specific role played by antilinearity, we consider some generic discrete antilinear operator A with A² = I, an operator we shall write as A = LK where L is a linear operator, K is complex conjugation, K² = I, LL* = I, and A⁻¹ = KL⁻¹. It is instructive to look first not at the eigenvector equation H|ψ⟩ = E|ψ⟩ itself, but at the secular equation f(λ) = |H − λI| = 0 that determines the eigenvalues of H. In [15] it was noted that if H has an antilinear symmetry, then whenever λ is a solution to f(λ) = 0 so is λ*. In consequence H and H* both have the same set of eigenvalues, with f(λ) thus being a real function of λ (viz.
in an expansion f(λ) = Σₙ aₙλⁿ all the aₙ are real). Then in [16] the converse was shown, namely that if f(λ) is a real function of λ, H must have an antilinear symmetry. If f(λ) is a real function the eigenvalues can be real or appear in complex conjugate pairs (just as we found in our M(s) example), while if f(λ) is not real the condition f(λ) = 0 must have at least one complex solution. Antilinear symmetry is thus seen to be the necessary condition for the reality of eigenvalues, while Hermiticity is only a sufficient condition.

B. Necessary and Sufficient Condition for the Reality of Eigenvalues

As to a condition that is both necessary and sufficient, in PT theory it was shown in [16] that a non-Jordan-block, PT-symmetric Hamiltonian will always possess an additional discrete linear symmetry, with there always being an operator, called C in the PT literature (see [3]), that obeys [C, H] = 0, C² = I. In those cases in which this C operator can be constructed explicitly in closed form it is found to depend on the structure of the particular Hamiltonian of interest, and for our M(s) example the C operator is given in terms of the angles that obey sin α = (s² − 1)^{1/2}/s, sinh β = (1 − s²)^{1/2}/s. Given the existence of the C operator, in [16] it was shown that if the PT theory C commutes with PT then all eigenvalues are real, while if it does not, then some of the eigenvalues must appear in complex conjugate pairs, with, as we elaborate on in the Appendix, no non-trivial such C existing in the Jordan-block case. Simultaneously satisfying the conditions [PT, H] = 0, [PT, C] = 0 is thus both necessary and sufficient for all the eigenvalues of a non-Jordan-block Hamiltonian to be real. In the Appendix we compare and contrast this C operator with the charge conjugation operator C.

C.
Antilinearity and Eigenvector Equations

As well as the eigenvalue equation, it is also instructive to look at the eigenvector equation itself. On replacing the parameter t by −t and then multiplying by a general antilinear operator A we obtain (23). From (23) we see that if AHA⁻¹ = H, the eigenvalues are real or appear in complex conjugate pairs. To establish the converse, suppose we are given that the energy eigenvalues are real or appear in complex conjugate pairs. In such a case not only would E be an eigenvalue but E* would be too. Hence, we can set HA|ψ(−t)⟩ = E*A|ψ(−t)⟩ in (23), and obtain (24). Then if the eigenstates of H are complete, (24) must hold for every eigenstate, to yield AHA⁻¹ = H as an operator identity, with H thus having an antilinear symmetry.

An alternate argument is to note that if we are given that all energy eigenvalues of H are real or in complex conjugate pairs, from H|ψ⟩ = E|ψ⟩, and thus AHA⁻¹A|ψ⟩ = E*A|ψ⟩, it follows that H and AHA⁻¹ have the same set of energy eigenvalues and are thus isospectrally related via H = SAHA⁻¹S⁻¹ = SLKHK(SL)⁻¹ with a linear S. Thus again H has an antilinear symmetry (viz. SLK). Hence we see that if a Hamiltonian has an antilinear symmetry then its eigenvalues are either real or appear in complex conjugate pairs; while if all the energy eigenvalues are real or appear in complex conjugate pairs, the Hamiltonian must admit of an antilinear symmetry.

D.
Antilinearity and the Time Independence of Inner Products

While this analysis shows that H will have an antilinear symmetry if its eigenvalues are real or appear in complex conjugate pairs, we still need a reason for why the eigenspectrum should in fact take this form. To this end we look at the time evolution of inner products. Specifically, the eigenvector equation i∂_t|R⟩ = H|R⟩ = E|R⟩ only involves the kets and serves to identify right-eigenvectors. Since the bra states are not specified by an equation that only involves the kets, there is some freedom in choosing them. As discussed for instance in [11], in general one should not use the standard ⟨R|R⟩ Dirac inner product associated with the Dirac conjugate when the Hamiltonian is not Hermitian, with this inner product then not being preserved in time. Rather, one should introduce left-eigenvectors of the Hamiltonian according to −i∂_t⟨L| = ⟨L|H = ⟨L|E, and use the more general inner product ⟨L|R⟩, since for it one does have i∂_t⟨L|R⟩ = ⟨L|H|R⟩ − ⟨L|H|R⟩ = 0, with this inner product being preserved in time. While this inner product coincides with the Dirac inner product ⟨R|R⟩ for Hermitian H, for non-Hermitian H one should use the ⟨L|R⟩ inner product instead. Since a Hamiltonian cannot have eigenstates other than its left and right ones, the ⟨L|R⟩ inner product is the most general inner product one could use.

E.
Time Independence of Inner Products and the V operator

In [14] and [11] a procedure was given for constructing the left-eigenvectors from the right-eigenvectors. Since the norm ⟨R_j(t)|R_i(t)⟩ is not time independent when the Hamiltonian is not Hermitian, as long as the sets of all {|R_i(t)⟩} and all {⟨R_j(t)|V} are both complete, the most general inner product one could introduce would be of the form ⟨R_j(t)|V|R_i(t)⟩, as written here in terms of some as yet to be determined operator V. On provisionally presupposing V to be time independent, we evaluate

i∂_t⟨R_j(t)|V|R_i(t)⟩ = ⟨R_j(t)|(VH − H†V)|R_i(t)⟩.    (26)

From (26) we see that the V-based inner products will be time independent if V obeys the so-called pseudo-Hermitian condition VH − H†V = 0. For time-independent Hamiltonians the operator V then would indeed be time independent, just as we had presupposed. Since ⟨R| obeys −i∂_t⟨R| = ⟨R|H†, we find that ⟨R|V then obeys −i∂_t⟨R|V = ⟨R|VH, and we can thus identify ⟨L| = ⟨R|V. Thus via the right-eigenvectors and the operator V that obeys VH − H†V = 0 one can construct the left-eigenvectors.
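The pseudo-Hermiticity condition and the left-eigenvector construction can be verified numerically. The sketch below assumes the hypothetical concrete realization M(s) = sσ₁ + iσ₃ for s < 1 together with the quoted V = (1/(i sinh β))(σ₀ + σ₂ cosh β); with sinh β = (1 − s²)^{1/2}/s one has cosh β = 1/s.

```python
import numpy as np

# Assumed 2x2 realization: H = s*sigma_1 + i*sigma_3 with s < 1,
# V = (sigma_0 + cosh(beta)*sigma_2)/(i*sinh(beta)), cosh(beta) = 1/s.
s = 0.5
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

H = s * s1 + 1j * s3
sinh_b = (1 - s**2) ** 0.5 / s
cosh_b = (1 + sinh_b**2) ** 0.5          # equals 1/s
V = (s0 + cosh_b * s2) / (1j * sinh_b)

# Pseudo-Hermiticity: V H - H^dagger V = 0
pseudo = V @ H - H.conj().T @ V

# Left-eigenvectors from right-eigenvectors: if H|R> = E|R>, then the row
# vector <R|V obeys (<R|V) H = E* (<R|V)
evals, R = np.linalg.eig(H)
L0 = R[:, 0].conj() @ V
residual = L0 @ H - evals[0].conj() * L0
```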
From (26) we can also show that VH − H†V = 0 if the V-based inner products are time independent [14], [11]. Specifically, from (26) we see that if we are given that all V-based inner products are time independent, then if the set of all |R_i(t)⟩ is complete, the right-hand side of (26) must vanish for all states, with the condition VH − H†V = 0 then emerging as an operator identity. The condition that all V-based inner products are time independent and the condition that VH − H†V = 0 are thus equivalent. Now the operator V may or may not be invertible (V will not be invertible if the eigenvectors are complete but do not form a Riesz basis [17]), and so we need to discuss both invertible and non-invertible cases. With H and H† being related by H† = VHV⁻¹ when V is invertible, it follows that in the invertible case H and H† both have the same set of eigenvalues. In consequence, the eigenvalues of H are either real or appear in complex conjugate pairs. Thus, as we noted above, H must have an antilinear symmetry. Hence if all ⟨R_j(t)|V|R_i(t)⟩ inner products are time independent and V is invertible, the Hamiltonian must have an antilinear symmetry. Now if the Hamiltonian has an antilinear symmetry, its eigenvalues are then real or in complex conjugate pairs, and H and H† must thus be isospectrally related by some operator V according to H† = VHV⁻¹. Thus, as noted in [14] and [11], pseudo-Hermiticity implies antilinearity and antilinearity implies pseudo-Hermiticity.

Regardless of whether or not V is invertible, we note that if |R_i(t)⟩ is a right-eigenstate of H with energy eigenvalue E_i, then ⟨R_j(t)|V|R_i(t)⟩ = e^{i(E_j* − E_i)t}⟨R_j(0)|V|R_i(0)⟩. Since V has been chosen so that the ⟨R_j(t)|V|R_i(t)⟩ inner products are to be time independent, the only allowed non-zero inner products are those that obey E_i = E_j* (28), with all other V-based inner products having to obey ⟨R_j(0)|V|R_i(0)⟩ = 0.
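The selection rule that only overlaps with E_i = E_j* survive can be seen directly in a small numerical example. The sketch again assumes the hypothetical realization M(s) = sσ₁ + iσ₃ with s < 1, whose eigenvalues are the conjugate pair ±iν, ν = (1 − s²)^{1/2}, i.e. one growing and one decaying mode.

```python
import numpy as np

# Hypothetical 2x2 realization with a complex conjugate eigenvalue pair:
# H = s*sigma_1 + i*sigma_3, s < 1, eigenvalues +/- i*(1 - s**2)**0.5.
s = 0.5
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
H = s * s1 + 1j * s3
sinh_b = (1 - s**2) ** 0.5 / s
V = (s0 + (1 / s) * s2) / (1j * sinh_b)   # cosh(beta) = 1/s

evals, R = np.linalg.eig(H)               # E_0 and E_1 = E_0* (conjugate pair)
overlap = R.conj().T @ V @ R              # matrix of <R_j|V|R_i>

# Diagonal entries have E_i != E_i* (complex eigenvalues), so they vanish;
# only the decaying<->growing cross terms survive: exactly E_i = E_j*.
```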
We recognize (28) as being precisely none other than the requirement that eigenvalues be real or appear in complex conjugate pairs, just as required of antilinear symmetry. Since this analysis does not require the invertibility of V, the time independence of the V-based inner products thus implies that the Hamiltonian must have an antilinear symmetry regardless of whether or not V is invertible. As had been noted above, in the presence of complex energy eigenvalues the time independence of inner products is maintained because the only non-zero overlap of any given right-eigenvector with a complex energy eigenvalue is that with the appropriate left-eigenvector with the eigenvalue needed to satisfy (28), i.e. precisely between decaying and growing modes. Thus regardless of whether or not V is invertible, if all V-based inner products are time independent it follows that the energy eigenvalues are either real or appear in complex conjugate pairs. Thus, as had been noted above, H must have an antilinear symmetry. While construction of the needed V operator is not a straightforward task, the V operator must exist if the Hamiltonian has an antilinear symmetry, with a symmetry condition, even an antilinear one, being something that is much easier to identify, and thus more powerful since it guarantees that such a V must exist even if one cannot explicitly construct it in closed form. With the operator V we note that the time evolution operator U = exp(−iHt) obeys U†VU = V, to thus generalize the standard unitarity condition U⁻¹ = U† that holds for Hermitian Hamiltonians (where V = I).
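The generalized unitarity relation U†VU = V can be confirmed numerically even when U itself is badly non-unitary. The sketch below again assumes, for illustration only, the hypothetical 2×2 realization H = sσ₁ + iσ₃ with s < 1 and V = (σ₀ + σ₂/s)/(i sinh β).

```python
import numpy as np

def mexp(A):
    # matrix exponential via eigendecomposition (fine for diagonalizable A)
    w, P = np.linalg.eig(A)
    return P @ np.diag(np.exp(w)) @ np.linalg.inv(P)

# Hypothetical 2x2 realization: H = s*sigma_1 + i*sigma_3, s < 1,
# V = (sigma_0 + sigma_2/s)/(i*sinh(beta)) with sinh(beta) = (1-s**2)**0.5/s.
s = 0.5
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
H = s * s1 + 1j * s3
V = (s0 + s2 / s) / (1j * (1 - s**2) ** 0.5 / s)

t = 1.3
U = mexp(-1j * t * H)

# U is not unitary (H has a complex conjugate pair of eigenvalues, so modes
# grow and decay), yet U^dagger V U = V: the V-based norms stay time independent.
not_unitary = np.linalg.norm(U.conj().T @ U - s0)     # clearly nonzero
v_preserved = np.linalg.norm(U.conj().T @ V @ U - V)  # numerically zero
```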
Time independence of inner products under the evolution of a Hamiltonian and antilinearity of that Hamiltonian thus complement each other, with the validity of either one ensuring the validity of the other. Since on physical grounds one must require time independence of inner products if one is to construct a quantum theory with probability conservation, that requirement entails not that the Hamiltonian be Hermitian, but that it instead possess an antilinear symmetry. Since it in addition requires that VH − H†V = 0 and thus that ⟨L| = ⟨R|V, the resulting left-right ⟨R|V|R⟩ = ⟨L|R⟩ norm is thus the most general time-independent inner product that one could write down. Antilinearity thus emerges as a basic requirement of quantum theory, to thus supplant the standard requirement of Hermiticity.

(In the above, sin α = (s² − 1)^{1/2}/s. Also we note that the associated s > 1 C operator is given by C(s > 1) = (σ₁ + iσ₃ cos α)/sin α, with both it and the analogous s < 1 C operator obeying C = PV, a point we explore further in the Appendix.)

III. ANTILINEARITY AND THE CPT THEOREM

A. Complex Lorentz Invariance for Coordinates

While our above remarks apply to any discrete antilinear symmetry, it is of interest to ask whether there might be any specially chosen or preferred one, and in this section we show that once we impose Lorentz invariance (as
extended to include complex transformations) there is such a choice, namely CPT. We thus extend the CPT theorem to non-Hermitian Hamiltonians, and through the presence of complex conjugate pairs of energy eigenvalues to unstable states, a result we announced in [18]. (The familiar standard proofs always involved Hermiticity; see e.g. [19,20], with the axiomatic field theory proof [19] involving complex Lorentz invariance as well.) With the Hamiltonian being the generator of time translations we can anticipate a connection to the Lorentz group and to spacetime operators, and with time reversal being a spacetime-based antilinear operator we can anticipate that the discrete symmetry would involve T. The possible antilinear options that have a spacetime connection are thus T, PT, CT and CPT. As we will see, of the four it will be CPT that will be automatically selected. (Some alternate discussion of the CPT theorem in the presence of unstable states may be found in [21].)

While Lorentz invariance is ordinarily thought of as involving real transformations only, so that x′^µ = Λ^µ_ν x^ν is real, the line element η_{µν}x^µx^ν is left invariant even if Λ^µ_ν is complex. Specifically, if we introduce a set of six antisymmetric Lorentz generators M^{µν} that obey the standard Lorentz algebra, as written here with diag[η_{µν}] = (1, −1, −1, −1), and introduce six antisymmetric angles w_{µν}, the Lorentz transformation exp(iw_{µν}M^{µν}) will not only leave the x_µx^µ line element invariant with real w_{µν}, it will do so with complex w_{µν} as well since the reality or otherwise of w_{µν} plays no role in the analysis. To see this in detail it is instructive to ignore metric and dimension issues and consider invariance of the two-dimensional line element x² + y² under a rotation through an angle α. Because the rotation matrix R(α) is orthogonal, the line element is preserved; and because the product of two orthogonal matrices is also orthogonal, rotation matrices form a group. Suppose we now make α complex. Then even with a complex angle R remains orthogonal, the line element is still preserved, and
the class of all real and complex rotations forms a group. Since this analysis immediately generalizes to the coordinate representation of SO(4) and consequently to that of the Lorentz SO(3, 1), we see that the SO(3, 1) length x_μ x^μ is left invariant under real and complex Lorentz transformations, with the group structure remaining intact.

B. Complex Lorentz Invariance for Fields

For field theories similar remarks apply to the action I = ∫d⁴x L(x). With L(x) having spin zero, this action is invariant under real Lorentz transformations of the form exp(iw_μν M^μν), where the six w_μν = −w_νμ are real parameters and the six M^μν = −M^νμ are the generators of the Lorentz group. Specifically, with M^μν acting on the Lorentz spin zero L(x) as x^μ p^ν − x^ν p^μ, under an infinitesimal Lorentz transformation the change in the action is given by δI = 2w_μν ∫d⁴x x^μ ∂^ν L(x), and thus by δI = 2w_μν ∫d⁴x ∂^ν [x^μ L(x)]. Since the change in the action is a total divergence, the familiar invariance of the action under real Lorentz transformations is secured. However, we now note that nothing in this argument depended on w_μν being real, with the change in the action still being a total divergence even if w_μν is complex. The action I = ∫d⁴x L(x) is thus actually invariant under complex Lorentz transformations as well and not just under real ones, with complex Lorentz invariance thus being just as natural to physics as real Lorentz invariance.

C.
Majorana Spinors

In extending the discussion to spinors there is a subtlety since Dirac spinors reside not in SO(3, 1) but in its complex covering group. While this immediately implies the potential relevance of complex transformations, if one were to work with unitary transformations they would not remain unitary if w_μν is complexified. (For transformations of the form R = exp(iαJ) with generic Hermitian generator J, under a complexification of α the relation R† = R⁻¹ is lost.) However, Dirac spinors are reducible under the Lorentz group, with it being Majorana and Weyl spinors that are irreducible, and with a Dirac spinor being writable as a sum of two Majorana spinors or two Weyl spinors. Now these two types of spinors are related, since a Majorana spinor can be written as a Weyl spinor plus its charge conjugate (see e.g. [22]), and we shall thus work with Majorana spinors in the following. As such, Majorana spinors are the natural counterparts of the coordinates since, unlike SO(4), which only has one real four-dimensional irreducible representation (the vector), because of the Minkowski nature of the spacetime metric the group SO(3, 1) has two inequivalent real four-dimensional representations, the vector representation and the Majorana spinor representation. This is most easily seen in the Majorana basis for the Dirac matrices (see e.g. [22]), with the two irreducible representations being reproduced in the Appendix. Now while SO(3, 1) possesses a real four-dimensional irreducible Majorana spinor representation, this is not the case for the SO(4, 2) conformal group of which SO(3, 1) is a subgroup, since the four-dimensional spinor representation of the conformal group is complex, not real. However, since SO(4, 2) is an orthogonal group, its group structure will remain intact under complex conformal transformations, just as we had found to be the case for SO(3, 1). Now conformal invariance is the full symmetry of the light cone, and if all elementary particle masses are to arise through vacuum breaking, the
fermion and gauge boson sector of the fundamental action that is to describe their dynamics would then be conformal invariant, just as is indeed the case in the standard SU(3) × SU(2) × U(1) theory of strong, electromagnetic and weak interactions. With the spinor representation of the conformal group being complex, it is then natural that the spinor representation of its SO(3, 1) subgroup would be complex too, with its two separate Majorana spinor components being combined into a single irreducible representation of the conformal group. Thus with a Dirac spinor being irreducible under the conformal group even as it is reducible under SO(3, 1), through the conformal group we are again led to complex Lorentz invariance.

D. Complex Lorentz Invariance for Majorana Spinors

With Majorana spinors living in SO(3, 1) itself rather than its covering group, the extension to complex Lorentz transformations parallels that for the coordinates. With spinors being Grassmann variables, to implement such a parallel treatment we work in the Majorana basis of the Dirac gamma matrices, where the Dirac space matrix C that transposes according to Cγ^μ C⁻¹ = −γ̃^μ coincides with γ^0. Following e.g. [23], we introduce a "line element" in Grassmann space, viz. ψ̃Cψ (the tilde here denotes transposition in the Dirac gamma matrix space alone and not in the field space of ψ). In the Majorana basis C is antisymmetric, just as needed since the Grassmann ψ and ψ̃ obey an anticommutation algebra. With the Lorentz generators behaving as M^μν = i[γ^μ, γ^ν]/4 in the Dirac gamma matrix space, under a Lorentz transformation we find that ψ̃Cψ → ψ̃ exp(iw_μν M̃^μν) C exp(iw_μν M^μν) ψ. Then, with M̃^μν = −CM^μν C⁻¹, the invariance of ψ̃Cψ is secured. Moreover, since this analysis is independent of whether w_μν is real or complex, the invariance of ψ̃Cψ is secured not just for real w_μν but for complex w_μν as well.
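The mechanism at work in both the coordinate and the Majorana spinor cases is the same: an orthogonal transformation remains orthogonal when its angle is complexified, because cos²α + sin²α = 1 holds for any complex α. A small numerical sketch of the two-dimensional rotation example discussed above (illustrative only; all function names are ours):

```python
import cmath

def rot(alpha):
    """2x2 rotation matrix R(alpha); alpha may be complex."""
    c, s = cmath.cos(alpha), cmath.sin(alpha)
    return [[c, s], [-s, c]]

def apply(R, v):
    """Apply a 2x2 matrix to a 2-vector."""
    return [R[0][0]*v[0] + R[0][1]*v[1], R[1][0]*v[0] + R[1][1]*v[1]]

def line_element(v):
    # x^2 + y^2 with NO complex conjugation -- this is the quantity that
    # stays invariant, unlike the sesquilinear |x|^2 + |y|^2.
    return v[0]**2 + v[1]**2

alpha = 0.4 + 0.9j                 # a complex "angle"
R = rot(alpha)

# R~R = I even for complex alpha (columns stay orthonormal without conjugation)
RtR = [[sum(R[k][i]*R[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert abs(RtR[0][0] - 1) < 1e-12 and abs(RtR[0][1]) < 1e-12

# and the line element x^2 + y^2 is preserved
v = [1.5, -2.0]
assert abs(line_element(apply(R, v)) - line_element(v)) < 1e-12
```

The key point the sketch makes explicit is that orthogonality (R̃R = I) involves no complex conjugation, so nothing changes when the angle leaves the real axis.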
Because of the signature of the spacetime metric, the three Lorentz M^0i boosts are symmetric in the Majorana basis for the Dirac gamma matrices while the three M^ij rotations are antisymmetric. Since this same pattern is found for the vector representation, on recalling that x_μ x^μ is invariant under complex Lorentz transformations, we see that in the Majorana spinor space the Lorentz group structure also remains intact under complex transformations, with the Majorana spinor line element being left invariant under the complex Lorentz group. Using Majorana spinors we can thus extend complex Lorentz invariance to the spinor sector.

To make an explicit connection between Majorana spinors and Dirac spinors at the quantum field theory level, we introduce a unitary charge conjugation operator Ĉ which in quantum field space transforms a general Dirac spinor into its charge conjugate according to Ĉψ Ĉ⁻¹ = ψ^c. On introducing ψ_M = (ψ + ψ^c)/2 and ψ_A = (ψ − ψ^c)/2, we can decompose ψ as ψ = ψ_M + ψ_A, with ψ_M being self-conjugate (just like the x^μ) and ψ_A being anti-self-conjugate. For convenience in the following we set ψ_M = ψ_1 and ψ_A = iψ_2. The utility of this particular ψ = ψ_M + ψ_A = ψ_1 + iψ_2 decomposition is that it is preserved under an arbitrary similarity transformation S, with the transformed ψ_1 and ψ_2 respectively being self-conjugate and anti-self-conjugate under the transformed charge conjugation operator Ĉ' = S ĈS⁻¹. As we discussed in Sec. I, the Hermiticity condition H_ij = H*_ji is not preserved under a general similarity transformation, with self-conjugacy having a basis-independent status that Hermiticity does not possess. While the Hermiticity condition H_ij = H*_ji for an operator is not basis independent, we note that in the Majorana basis of the Dirac gamma matrices charge conjugation is the same as Hermitian conjugation. Thus in that basis we can take ψ_1 and ψ_2 to be Hermitian fields, and in the following we shall work in the Majorana basis and use the ψ = ψ_1 + iψ_2 decomposition of a general Dirac spinor, where Ĉψ_1 Ĉ⁻¹ = ψ_1 and Ĉψ_2 Ĉ⁻¹ = −ψ_2. In the Majorana basis
for the Dirac gamma matrices, P and T implement definite transformations on ψ(x̄, t) (P acting through γ^0 together with x̄ → −x̄, up to a phase, with an analogous antilinear action for T together with t → −t), as it is these transformations that leave the action for a free Dirac field invariant. In terms of the ψ_1, ψ_2 basis, Ĉ P̂ T̂ itself thus implements a definite transformation on ψ_1 and ψ_2, a relation that will prove central in the following.

As regards complex Lorentz transformations, we note that for Dirac spinors quantities such as ψ̄ψ = ψ†γ^0 ψ would not be invariant under a complex Lorentz transformation if it is applied to both ψ and ψ† as is. However, with ψ_1 and ψ_2 both being taken to be Hermitian Majorana spinors, we should write ψ†γ^0 ψ as (ψ̃_1 − iψ̃_2)γ^0(ψ_1 + iψ_2) (in constructing the ψ̃_i the transposition acts only on their four components in the Dirac gamma matrix space and not on the quantum fields themselves), and then implement the transformation on the separate ψ_1 and ψ_2, since they transform as ψ_1 → exp(iw_μν M^μν)ψ_1, ψ_2 → exp(iw_μν M^μν)ψ_2. Given that ψ transforms as ψ → exp(iw_μν M^μν)ψ under a real or a complex Lorentz transformation, we might initially expect that ψ† transforms as ψ† → ψ† exp(−iw*_μν M^μν†), rather than as the relation ψ† → ψ† exp(iw_μν M̃^μν) that we have found. To appreciate the distinction we need to introduce the quantum field-theoretic Lorentz generators M̂^μν = ∫d³x (x^μ T^0ν − x^ν T^0μ), where T^μν is the quantum field energy-momentum tensor. Even if we were to take M̂^μν to be Hermitian (which it would not be if Ĥ = ∫d³x T^00 is not Hermitian), with complex w_μν the operator Λ̂ = exp(iw_μν M̂^μν) would not be unitary, and there is thus no otherwise troublesome relation of the form Λ̂† = Λ̂⁻¹. In this way we can extend complex Lorentz invariance to ψ̄ψ.
To determine what happens to a general matrix element under a complex Lorentz transformation, we recall that in Sec. II we had introduced a V operator that effects V H = H†V. Given this V, for a Lorentz transformation Λ̂ = exp(iw_μν M̂^μν), first with real w_μν, we can set Λ̂†V = V Λ̂⁻¹. With the matrix element ⟨R|V|R⟩ transforming into ⟨R|Λ̂†V Λ̂|R⟩ under a Lorentz transformation on the states, ⟨R|V|R⟩ transforms into ⟨R|V Λ̂⁻¹Λ̂|R⟩, to thus be invariant. However, this procedure will not work as is if w_μν is complex, and so in the complex Lorentz case we will need to find an alternate matrix element. This alternate is provided by the Ĉ P̂ T̂ operator. Specifically, we note that given a quantum field-theoretic action that is CP T even, its variation with respect to the C even, P even, T even metric g_μν yields an energy-momentum tensor T^μν that is CP T even too. In consequence Ĥ is CP T even, while the M̂^μν = ∫d³x (x^μ T^0ν − x^ν T^0μ) generators that are constructed from it are CP T odd. Thus if we now apply CP T to a complex Lorentz transformation we obtain Ĉ P̂ T̂ Λ̂⁻¹ = exp(−iw*_μν M̂^μν) Ĉ P̂ T̂, and thus obtain V Ĉ P̂ T̂ Λ̂⁻¹ = Λ̂†V Ĉ P̂ T̂. On defining the more general matrix element ⟨R|V Ĉ P̂ T̂|R⟩, we find that it transforms into ⟨R|Λ̂†V Ĉ P̂ T̂ Λ̂|R⟩ under a complex Lorentz transformation on the states. It thus transforms into ⟨R|V Ĉ P̂ T̂ Λ̂⁻¹Λ̂|R⟩, to thus be invariant. Finally we note that even if the M̂^μν are Hermitian (so that V = I), it is ⟨R|Ĉ P̂ T̂|R⟩ that is invariant under complex Lorentz transformations and not the standard Dirac norm ⟨R|R⟩. This then is how one constructs matrix elements that are invariant under complex Lorentz transformations.

E.
Connection Between Complex Lorentz Transformations and P T and CP T Transformations

The utility of complex Lorentz invariance is that it has a natural connection to both P T and CP T transformations. For coordinates P T implements x^μ → −x^μ, and thus so does CP T since the coordinates are charge conjugation even (i.e. unaffected by a charge conjugation transformation). With a boost in the x^1-direction implementing x^0 → x^0 cosh ξ + x^1 sinh ξ, x^1 → x^1 cosh ξ + x^0 sinh ξ, and with cosh(iπ) = −1 and sinh(iπ) = 0, the Λ^0_1(iπ) boost implements x^0 → −x^0, x^1 → −x^1. Applying all three such boosts, Λ^0_3(iπ)Λ^0_2(iπ)Λ^0_1(iπ) implements x^μ → −x^μ, just as required of a P T or CP T transformation on the coordinates.

With Lorentz transformations on real coordinates obeying (Λ^0_0)² ≥ 1 and (detΛ)² = 1, there are four disconnected domains, classified according to detΛ = ±1 and sgnΛ^0_0 = ±1. The two domains with detΛ = +1 and sgnΛ^0_0 = ±1 are then connected by a P T transformation on the coordinates. Complex Lorentz transformations thus cover these otherwise disconnected domains, with this thus being an interesting geometrical aspect of P T transformations. With Λ^0_i(iπ) implementing exp(−iπγ^0γ^i/2) = −iγ^0γ^i in the Dirac gamma matrix space, quite remarkably, we find that as an operator in quantum field space π̂τ̂ = Λ̂^0_3(iπ)Λ̂^0_2(iπ)Λ̂^0_1(iπ) acts on a Dirac spinor through (−iγ^0γ^3)(−iγ^0γ^2)(−iγ^0γ^1) = iγ^3γ^2γ^1γ^0 = γ^5 together with x^μ → −x^μ. Thus up to an overall complex phase, we recognize this transformation as acting as none other than (the linear part of) a CP T transformation, and thus see that CP T is naturally associated with the complex Lorentz group, even having a Lorentz invariant structure since γ^5 commutes with all of the M^μν = i[γ^μ, γ^ν]/4 Lorentz generators.
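The composition of the three complex boosts at rapidity iπ can be verified directly, since cosh(iπ) = −1 and sinh(iπ) = 0. A brief numerical sketch (illustrative only; the function names are ours):

```python
import cmath

def boost(i, xi):
    """4x4 Lorentz boost along spatial axis i (1..3) with (possibly complex) rapidity xi."""
    L = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
    L[0][0] = L[i][i] = cmath.cosh(xi)
    L[0][i] = L[i][0] = cmath.sinh(xi)
    return L

def matmul(A, B):
    return [[sum(A[r][k]*B[k][c] for k in range(4)) for c in range(4)] for r in range(4)]

xi = 1j * cmath.pi
PT = matmul(boost(3, xi), matmul(boost(2, xi), boost(1, xi)))

# Each boost(i, i*pi) reverses x^0 and x^i; the product of all three
# reverses x^0 three times and each x^i once, i.e. x^mu -> -x^mu.
for r in range(4):
    for c in range(4):
        expected = -1.0 if r == c else 0.0
        assert abs(PT[r][c] - expected) < 1e-12
```

Up to floating-point noise the product is exactly −1 on the coordinates, which is the P T transformation described in the text.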
In general then, we can implement a CP T transformation as K π̂τ̂, where the complex conjugation K serves as the antilinear component of CP T. Because of the factor i that is present in γ^5 = iγ^0γ^1γ^2γ^3, the effect of K π̂τ̂ on a fermion bilinear can at most differ from the effect of Ĉ P̂ T̂ on the bilinear by a phase that is real. In the Appendix we construct an explicit anticommutation quantization scheme for Majorana fields in which the phase is found to be equal to one in all combinations of fermion bilinears and quadrilinears that have spin zero, a property that will prove central to our derivation of the CP T theorem. With the fermions being in the fundamental representation of the Lorentz group from which all other representations can be constructed, this result then generalizes to the arbitrary spin zero fermion multilinear. Since the Hamiltonian is constructed from the Lagrangian by first forming the energy-momentum tensor from it and then setting H = ∫d³x T^00, the only terms of interest for exploring properties of the Hamiltonian are those that are associated with spin zero terms present in the Lagrangian. With the K π̂τ̂ phase of all such spin zero terms being real, none of these terms is affected by K at all. Thus given complex Lorentz invariance, and given the fact that the individual spin zero terms themselves are K invariant even if they contain factors of i (which some are shown in the Appendix to do), to establish CP T invariance we now only need to be able to monitor any other factors of i that might appear in the Lagrangian, such as in combinations of fields or in any numerical coefficients that might be present in the Lagrangian.

F.
Discrete Transformations on Fermion Spin Zero Multilinears

To see first how such a monitoring is achieved in the Hermitian case, we recall that, as noted for instance in [20], every representation of the Lorentz group transforms under Ĉ P̂ T̂ as Ĉ P̂ T̂ φ(x) T̂⁻¹P̂⁻¹Ĉ⁻¹ = η(φ)φ(−x), with a φ-dependent intrinsic CP T phase η(φ) that depends on the spin of each φ, and that for integer spin systems (bosons or fermion multilinears (bilinears, quadrilinears, etc.)) obeys η²(φ) = 1. Moreover, all spin zero fields (both scalar and pseudoscalar) expressly have η(φ) = 1. Since the most general Lorentz invariant Lagrangian density must be built out of sums of appropriately contracted spin zero products of fields with arbitrary numerical coefficients, and since it is only spin zero fields that can multiply any given net spin zero product an arbitrary number of times and still yield net spin zero, all net spin zero products of fields must have a net η(φ) equal to one. Generically, such products could involve φ₊φ₊, φ₊φ₋, or φ₋φ₋ type contractions, where φ± = φ₁ ± iφ₂. Establishing CP T invariance of the Lagrangian density (and thus that of the Hamiltonian) requires showing that the numerical coefficients are all real and that only φ₊φ₋ (or φ₊φ₊ + φ₋φ₋) type contractions appear. As noted in [20], this will precisely be the case if the Lagrangian density is Hermitian, with the CP T invariance of the Hamiltonian then following.
To appreciate the η(φ) pattern, it is instructive to look at the intrinsic C, P and T parities of fermion bilinears as given in Table I, and for the moment we take the bilinears to be Hermitian. (In Table I associated changes in the signs of x̄ and t are implicit.) Even though it is not independent of the other fermion bilinears, we have included the spin two, parity minus ψ̄[γ^μ, γ^ν]γ^5ψ, so that we can contract it into a spin zero combination with ψ̄i[γ^μ, γ^ν]ψ. In constructing spin zero combinations from these fermions we can use ψ̄ψ and ψ̄iγ^5ψ themselves or contract ψ̄ψ and ψ̄iγ^5ψ with themselves or with each other an arbitrary number of times. Similarly, we can contract ψ̄γ^μψ and ψ̄γ^μγ^5ψ with themselves or with each other, and we can contract ψ̄i[γ^μ, γ^ν]ψ and ψ̄[γ^μ, γ^ν]γ^5ψ with themselves or with each other. As we see from Table I, it is only for CP T that the net intrinsic phase shows any universal behavior, being correlated [20] with the spin of the bilinear by being even or odd according to whether the spin is even or odd. Initially the factors of i in ψ̄iγ^5ψ and ψ̄i[γ^μ, γ^ν]ψ were introduced to make the bilinears be Hermitian. Now we see that the very same factors of i can be introduced in order to make the intrinsic CP T parity of the bilinears alternate with spin, and in consequence we do not need to impose Hermiticity on the fermion bilinears at all, and can define the bilinears as being of the form ψ†γ^0ψ = (ψ̃₁ − iψ̃₂)γ^0(ψ₁ + iψ₂) etc., where ψ₁ and ψ₂ are Majorana spinors that transform as Ĉψ₁Ĉ⁻¹ = ψ₁ and Ĉψ₂Ĉ⁻¹ = −ψ₂.

TABLE II: C, P, and T assignments for fermion bilinears and quadrilinears that have spin zero

Given the correlation between intrinsic CP T parity and spin, from Table II we see that for the fermion bilinears and quadrilinears every contraction that has spin zero has even intrinsic CP T parity. Moreover, as we also see from Table II, CP T is the only transformation that produces the same positive sign for every one of the spin zero
contractions. (P T almost has this property, failing to meet it only for ψ̄γ^μψ ψ̄γ_μγ^5ψ.) Thus in a spin zero Lagrangian density, it is only under CP T that every term in it has the same net intrinsic parity. CP T is thus singled out as being different from all the other spacetime transformations.

G. Derivation of the CP T Theorem

To now derive a CP T theorem for non-Hermitian Hamiltonians, we note first that, as shown in the Appendix, every single one of the spin zero fermion combinations that is listed in Table II is unchanged under complex conjugation K. Since the action of π̂τ̂ = Λ̂^0_3(iπ)Λ̂^0_2(iπ)Λ̂^0_1(iπ) on a general spin zero combination will leave it invariant while reversing the signs of all four components of x^μ, the action of K π̂τ̂ on any spin zero combination will do so too. K π̂τ̂ thus has precisely the same effect on the spin zero terms as Ĉ P̂ T̂, to thus lead to the same positive intrinsic CP T parities as listed in the last column of Table II. Thus to implement CP T we only need to implement K π̂τ̂. On now applying the Lorentz transformation π̂τ̂ to a general spin zero action, every single spin zero combination in it will transform the same way, to give I = ∫d⁴x L(x) → ∫d⁴x L(−x) = I. Finally, since we had shown in Sec. II that a Hamiltonian must admit of an antilinear symmetry if it is to effect time-independent evolution of inner products, with this probability conservation requirement we then infer that KL(x)K = L(x). The Lagrangian density and thus the Hamiltonian are thereby CP T symmetric, and we thus obtain our desired CP T theorem for non-Hermitian Hamiltonians.
In addition, we note that since K complex conjugates all factors of i, even, as noted in the Appendix, including those in the matrix representations of the quantum fields, we see that the Hamiltonian obeys H = H*, to thus be real. While this condition is somewhat analogous to H = H†, in the standard approach to the CP T theorem the H = H† condition is input, while in our approach H = H* is output. With the use of complex conjugation under K, we see that the action of K entails that in L(x) all numerical coefficients are real, with only general bosonic or fermionic φ₊φ₋ (or φ₊φ₊ + φ₋φ₋) type contractions of the fields appearing. As we see, quite remarkably we finish up with the same allowed generic structure for L(x) as in the Hermitian case, except that now no restriction to Hermiticity has been imposed. In our approach we do not require the fields in L(x) to be Hermitian, we only require that they have a well-defined behavior under CP T, so that we now obtain CP T symmetry of a Hamiltonian even if the Hamiltonian is Jordan-block or its energy eigenvalues appear in complex conjugate pairs. In the standard Hermiticity-based approach to the CP T theorem one requires the fields in L(x) to be Hermitian and requires the coefficients in the action to be real. However, as we had noted in our discussion in Sec. I, this is not sufficient to secure the Hermiticity of a Hamiltonian that is built out of the fields in the action, since when the Hamiltonian acts on the eigenstates of the field operators themselves the Hamiltonian may not be self-adjoint. CP T symmetry thus goes beyond Hermiticity, and under only two requirements, viz. conservation of probability (for the antilinear part of the CP T transformation) and invariance under the complex Lorentz group (for the linear part of the CP T transformation), CP T invariance of the Hamiltonian then follows, with no restriction to Hermiticity being needed.

H.
No Vacuum Breaking of CP T Symmetry

While we have shown that the Hamiltonian is CP T invariant, there is still the possibility that CP T might be broken in the vacuum. However, with every spin zero combination of fields being CP T even as per Table II, then since Lorentz invariance only permits spin zero field configurations to acquire a non-vanishing vacuum expectation value, CP T symmetry could not be broken spontaneously. Lorentz invariance thus plays a double role, as it is central to making both the Hamiltonian and the vacuum be CP T symmetric.

I. How to Distinguish Hermiticity from CP T Invariance

Since we obtain exactly the same generic form for the Hamiltonian whether we use Hermiticity or invariance under complex conjugation times complex Lorentz invariance, and thus obtain Hamiltonians that on the face of it always appear to be Hermitian, we will need some criterion to determine which case we are in. As we will see, just as in the example given in (1), it depends on the values of the parameters. As regards the behavior in time, we note that if we have a real wave equation that does not mean that the associated frequencies are necessarily real, since solutions to real equations could come in complex conjugate pairs. As regards the behavior in space, that depends on asymptotic boundary conditions (viz. self-adjointness), since a real wave equation can have non-normalizable solutions that diverge asymptotically, and in Sec. V we discuss this issue in detail.
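As a simple numerical illustration of the time-dependence point, consider the standard free scalar-field mode frequencies ω = ±(k̄² + m²)^{1/2}: the equation of motion is real, yet the frequencies turn imaginary for the m² < 0 modes with k̄² < −m². A brief sketch (illustrative only; the function name is ours):

```python
import cmath

def omega(k2, m2):
    """Frequency of a free scalar mode with |k|^2 = k2 and mass-squared m2."""
    return cmath.sqrt(k2 + m2)

# m^2 > 0: every mode oscillates, all frequencies real
w = omega(0.25, 1.0)
assert abs(w.imag) < 1e-12

# m^2 < 0: modes with k^2 < -m^2 acquire purely imaginary frequencies,
# i.e. exponentially growing/decaying solutions and complex energies
w = omega(0.25, -1.0)
assert abs(w.real) < 1e-12 and w.imag != 0
```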
For the time dependence issue, consider the neutral scalar field with action

I_S = (1/2)∫d⁴x [∂_μφ∂^μφ − m²φ²].

Thus the poles in the scalar field propagator are at ω = ±(k̄² + m²)^{1/2}, and the Hamiltonian is given by

H = (1/2)∫d³x [φ̇² + ∇̄φ·∇̄φ + m²φ²].

For either sign of m² the I_S action is CP T symmetric, and for both signs I_S appears to be Hermitian. For m² > 0, H and φ(x̄, t) are indeed Hermitian and all frequencies are real. However, for m² < 0, frequencies become complex when k̄² < −m². The poles in the propagator move into the complex plane, the field φ(x̄, t) then contains modes that grow or decay exponentially in time, while H contains energies that are complex. Thus now H ≠ H† and φ ≠ φ†. As we see, whether or not an action is CP T symmetric is an intrinsic property of the unconstrained action itself prior to any stationary variation, but whether or not a Hamiltonian is Hermitian is a property of the stationary solution. Hermiticity of a Hamiltonian or of the fields that it is composed of cannot be assigned a priori, and can only be determined after the theory has been solved. However, the CP T properties of Hamiltonians or fields can be assigned a priori, and thus that is how Hamiltonians and fields should be characterized. One never needs to postulate Hermiticity at all.

A.
CP T Symmetry and Unstable States

In the classic application of the CP T theorem, the theorem was used to establish the equality of the lifetimes of unstable particles and their antiparticles, with the most familiar application being in K meson decays. However, such use of the theorem was made via a CP T theorem whose derivation had only been obtained for Hamiltonians that are Hermitian, and for such Hamiltonians states should not decay at all. To get round this, one by hand adds a non-Hermitian term to the Hamiltonian, with the added term being the same one in both the particle and the antiparticle decay channels. In addition, one also by hand imposes a non-CP T -invariant boundary condition that only allows for decaying modes and forbids growing ones. In our approach we have no need to do this, since the time-independent inner products that we use precisely provide for time-independent transitions between decaying states and the growing states into which they decay, without any need to add in any terms by hand. CP T invariance then requires that the transition rates for the decays of particles and their antiparticles be equal.

B. CP T Symmetry and P T Symmetry

Our derivation of the CP T theorem for non-Hermitian Hamiltonians provides a fundamental justification for the P T studies of Bender and collaborators. These studies are mainly quantum-mechanical ones in which the field-theoretic charge conjugation operator plays no role (i.e. [Ĉ, Ĥ] = 0). The CP T symmetry of any given relativistic theory thus ensures the P T symmetry of any charge conjugation invariant quantum-mechanical theory that descends from it, doing so regardless of whether or not the Hamiltonian is Hermitian, and independent of whether or not P or T themselves are conserved.

C.
The H = p² + ix³ Theory and CP T Symmetry

To appreciate the above points within a specific context, we recall that it was the H = p² + ix³ theory that first engendered interest in P T symmetry since, despite not being Hermitian but instead being P T symmetric, it had an entirely real set of energy eigenvalues [1,2], [3] (and is actually Hermitian in disguise). Now the presence of the factor i initially suggests that H might not have descended from a CP T -invariant theory, since our derivation of the CP T theorem led us to numerical coefficients that are all real. However, in this particular case the factor of i arises because the H = p² + ix³ theory does not descend directly from a CP T -invariant Hamiltonian but from a similarity transformation of one that does, an allowable transformation since it does not affect energy eigenvalues.

To be specific, consider an initial CP T -symmetric, time-independent Hamiltonian H = −Π² + Φ³ with real coefficients, and the C, P, and T assignments for Φ, Π, and −Π² + Φ³ as indicated in Table III, as per the pseudoscalar ψ̄iγ^5ψ assignments listed in Table I. Since H is time independent, we only need to evaluate the fields in it at t = 0. The similarity transformation S = exp[(π/2)Φ(t = 0)Π(t = 0)] then effects SΦS⁻¹ = −iΦ and SΠS⁻¹ = iΠ, where we have introduced the compact notation x = Φ(t = 0) and p = Π(t = 0), so that SHS⁻¹ = p² + ix³. The similarity transformation also leads to the C, P, and T assignments for x and p as indicated in Table III, and a thus CP T -even SHS⁻¹ = p² + ix³.
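The remark that a similarity transformation does not affect energy eigenvalues is readily checked in a finite-dimensional toy setting (illustrative only; the 2 × 2 matrix below stands in for a generic non-Hermitian Hamiltonian and is not the p² + ix³ theory):

```python
import cmath

def eig2(M):
    """Eigenvalues of a 2x2 matrix via its characteristic polynomial."""
    (a, b), (c, d) = M
    tr, det = a + d, a*d - b*c
    disc = cmath.sqrt(tr*tr - 4*det)
    return sorted([(tr + disc)/2, (tr - disc)/2], key=lambda z: (z.real, z.imag))

def matmul(A, B):
    return [[sum(A[r][k]*B[k][c] for k in range(2)) for c in range(2)] for r in range(2)]

def inv2(M):
    (a, b), (c, d) = M
    det = a*d - b*c
    return [[d/det, -b/det], [-c/det, a/det]]

H = [[1.0, 2.0], [0.5, -1.0]]   # a toy non-Hermitian "Hamiltonian"
S = [[1.0, 1.0], [0.0, 2.0]]    # an invertible similarity transformation
H2 = matmul(S, matmul(H, inv2(S)))

# SHS^-1 has exactly the same spectrum as H
for e1, e2 in zip(eig2(H), eig2(H2)):
    assert abs(e1 - e2) < 1e-12
```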
Then, with both Φ and Π being charge conjugation even neutral fields, the P T symmetry of H = p² + ix³ directly follows. Given our derivation of the CP T theorem without assuming Hermiticity, it would be of interest to find an explicit CP T -invariant Hamiltonian whose energy eigenvalues come in complex conjugate pairs or whose Hamiltonian is not diagonalizable. To this end we consider the fourth-order Pais-Uhlenbeck two-oscillator ([z, p_z] = i and [x, p] = i) model studied in [7,8]. Its action and Hamiltonian are given by

I_PU = (γ/2)∫dt [ẍ² − (ω₁² + ω₂²)ẋ² + ω₁²ω₂²x²],   (39)

H_PU = p²/(2γ) + p_z z + (γ/2)(ω₁² + ω₂²)z² − (γ/2)ω₁²ω₂²x²,   (40)

where z = ẋ, and where initially ω₁ and ω₂ are taken to be real (and positive for definiteness). Once one sets ω₁ = (k̄² + M₁²)^{1/2}, ω₂ = (k̄² + M₂²)^{1/2} and drops the spatial dependence, this Hamiltonian becomes the quantum-mechanical limit of a covariant fourth-order derivative neutral scalar field theory [8], with action

I_S = (1/2)∫d⁴x [∂_μ∂_νφ ∂^μ∂^νφ − (M₁² + M₂²)∂_μφ∂^μφ + M₁²M₂²φ²],

propagator

D(k) = 1/[(k² − M₁²)(k² − M₂²)] = [1/(M₁² − M₂²)][1/(k² − M₁²) − 1/(k² − M₂²)],   (41)

and Hamiltonian H = ∫d³x T^00. The H_PU Hamiltonian turns out not to be Hermitian but to instead be P T symmetric [7,8], with all energy eigenvalues nonetheless being given by E(n₁, n₂) = (n₁ + 1/2)ω₁ + (n₂ + 1/2)ω₂, an expression that is real when ω₁ and ω₂ are both real. (When the frequencies are real all the poles of the propagator are on the real axis.) In addition, H_PU is CP T symmetric since H_PU is separately charge conjugation invariant ([Ĉ, H_PU] = 0), while thus descending from a neutral scalar field theory with an action I_S that is CP T invariant itself. The theory is also free of ghost states of negative norm, since when one uses the needed positive definite P T theory norm (viz. the one constructed via ⟨ψ|CP T|ψ⟩ [3], where C this time is the P T theory C operator described earlier -- a norm that, as we show in the Appendix, is equivalent to the ⟨L|R⟩ norm introduced earlier), the relative minus sign in the partial fraction decomposition of the propagator given in (41) is generated not by the structure of the Hilbert space itself but by the C operator [7,8], since with it obeying C² = I, it has eigenvalues equal to plus and minus
one. The negative residue of the pole in the 1/(k² − M₂²) term in (41) is not due to a negative Dirac norm. Rather, it means that one should not be using the Dirac norm at all.

With the eigenvectors of H_PU being complete if ω₁ and ω₂ are real and unequal [7], for real and unequal ω₁ and ω₂, H_PU while not Hermitian is Hermitian in disguise, with the explicit similarity transformation needed to bring it to a Hermitian form being given by [7]

e^{−Q/2} H_PU e^{Q/2} = H̄_PU,   (43)

where in terms of the operators given in (40), y = −iz, q = ip_z, and [y, q] = i. In this particular case x = x†, p = p†, y = y†, q = q†, Q = Q†, V = e^{−Q}, and C = P V. As we see, H̄_PU is a perfectly well-behaved, standard Hermitian two-oscillator system that manifestly cannot have any states of negative norm. Thus for the two oscillator frequencies being real and unequal, while not Hermitian, H_PU is nonetheless Hermitian in disguise. As we now show, when we take the two frequencies to be equal or to be in a complex conjugate pair this will no longer be the case.

E.
CP T Symmetry when Energies are in Complex Conjugate Pairs

If we set ω₁ = α + iβ, ω₂ = α − iβ with real α and β, we see that despite the fact that ω₁ and ω₂ are now complex, quite remarkably, the quantities (ω₁² + ω₂²)/2 = α² − β² and ω₁²ω₂² = (α² + β²)² both remain real. In consequence H_PU remains CP T invariant, but now the energies come in complex conjugate pairs as per E(n₁, n₂) = (n₁ + 1/2)(α + iβ) + (n₂ + 1/2)(α − iβ). With all the terms in the I_PU action still being real, the theory looks very much like a Hermitian theory, but it is not, since the energy eigenvalues come in complex conjugate pairs. The Pais-Uhlenbeck two-oscillator model with frequencies in a complex conjugate pair thus serves as an explicit example of a CP T -invariant but non-Hermitian Hamiltonian whose energy eigenvalues come in complex conjugate pairs, a theory that looks Hermitian but is not, while showing that one can indeed write down theories of this type. (This example also shows that one can have dissipation despite the absence of any odd-time-derivative dissipative terms in (39).)

F.
CP T Symmetry in the Jordan-Block Case

It is also of interest to note that when ω₁ = ω₂ = α with α real, the seemingly Hermitian H_PU becomes nondiagonalizable, and thus of manifestly non-Hermitian, Jordan-block form [8] (the similarity transformation in (43) that effects e^{−Q/2}H_PU e^{Q/2} = H̄_PU becomes undefined when ω₁ = ω₂), with its CP T symmetry not being impaired. (In [8] the emergence of a Jordan-block Hamiltonian in the equal frequency limit was associated with the fact that the partial fraction decomposition of the propagator given in (41) becomes undefined when M₁ = M₂, since the 1/(M₁² − M₂²) prefactor becomes singular.) Thus for ω₁ and ω₂ both real and unequal, both real and equal, or complex conjugates of each other, in all cases one has a non-Hermitian but CP T -invariant Hamiltonian, descending from a quantum field theory whose Hamiltonian is likewise non-Hermitian but CP T symmetric. Even though the work of [7,8] shows explicitly that H_PU is not Hermitian (being quadratic, H_PU is exactly solvable), it nonetheless appears to be so. However, while not Hermitian, H_PU is self-adjoint, and so we turn now to a discussion of the distinction between Hermiticity and self-adjointness. This will involve the introduction of Stokes wedges in the complex plane, regions where wave functions are asymptotically bounded, with such wedges playing a key role in P T or any general antilinear symmetry studies [3].

A.
Self-Adjointness and the Pais-Uhlenbeck Hamiltonian To understand the issue of self-adjointness we again consider the Pais-Uhlenbeck Hamiltonian, and make a standard wave-mechanics representation of the Schrödinger equation H PU ψ n = E n ψ n by setting p z = −i∂/∂z, p x = −i∂/∂x. In this representation we find two classes of eigenstates, one a potentially physical class with positive energy eigenvalues when ω 1 and ω 2 are both real and positive, and the other an unphysical class with negative energy eigenvalues. The state whose energy is (ω 1 + ω 2 )/2, the lowest energy state in the positive energy sector, has an eigenfunction ψ + (z, x) of the form given in [24], while the state whose energy is −(ω 1 + ω 2 )/2, the highest energy state in an unbounded from below negative energy sector, has an eigenfunction ψ − (z, x) of a corresponding form. With ψ + (z, x) diverging at large z and ψ − (z, x) diverging at large x, neither of these two states is normalizable. Thus in trying to show that H PU obeys ∫ψ * 1 Hψ 2 = [ ∫ψ * 2 Hψ 1 ] * , we are unable to drop the surface terms that are generated in an integration by parts, and have to conclude [7,8] that in the basis of wave functions associated with the positive energy eigenfunctions (or negative for that matter) H PU is not self-adjoint. Self-adjointness of a differential operator in a given basis means that one can throw away surface terms. Moreover, without actually looking at asymptotic boundary conditions, one cannot in fact determine if a differential operator is self-adjoint from the form of the operator itself, since such self-adjointness is determined not by the operator but by the space of states on which it acts.
Since there is only a sensible physical interpretation of a theory if the energy spectrum is bounded from below, we thus seek a viable interpretation of the ψ + (z, x) sector of the Pais-Uhlenbeck model. Inspection of ψ + (z, x) shows that ψ + (z, x) would be normalizable if we were to replace z by iz, and thus replace p z by −∂ z (so as to maintain [z, p z ] = i). In other words we cannot presume a priori that p z is Hermitian in the basis of eigenfunctions of H PU , and thus cannot presume a priori that H PU is Hermitian either. The complete domain in the complex z plane in which the wave function is normalizable is known as a Stokes wedge. If we draw a letter X in the complex z plane and also draw a letter X in the complex x plane, then ψ + (z, x) is normalizable if z is in the north or south quadrant of its letter X, and x is in the east or west quadrant of its letter X. The needed Stokes wedges contain purely imaginary z and purely real x. And in these particular wedges we can construct normalizable wave functions whose energy eigenvalues are strictly bounded from below. Since the wave functions of the excited states are just polynomial functions of z and x times the ground state wave function [8,24], in the same Stokes wedges these wave functions are normalizable too. While H PU is not Hermitian, in these particular Stokes wedges we see that H PU is nonetheless self-adjoint. Inspection of ψ + (z, x) shows that in these particular Stokes wedges the asymptotic behavior is not modified if we set ω 1 = ω 2 = α, with α > 0.
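The wedge geometry can be made explicit for a single Gaussian factor, suppressing the detailed eigenfunction coefficients (which are given in [8,24]). For ω > 0 the factor

```latex
\[
\psi(z) \sim e^{+\omega z^{2}/2}
\]
diverges on the real-$z$ axis, while with $z = |z|\,e^{i\theta}$ the
normalizability condition reads
\[
\operatorname{Re}\bigl(\omega z^{2}\bigr) < 0
\;\Longleftrightarrow\;
\cos 2\theta < 0
\;\Longleftrightarrow\;
\theta \in \Bigl(\tfrac{\pi}{4},\tfrac{3\pi}{4}\Bigr)
      \cup \Bigl(\tfrac{5\pi}{4},\tfrac{7\pi}{4}\Bigr),
\]
```

i.e. precisely the north and south quadrants of the letter X; on the imaginary axis z = iy one has e^{+ωz²/2} = e^{−ωy²/2}, in accord with the replacement of z by iz described above.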
With this being true also for the excited states [8,24], the Jordan-block limit of the Pais-Uhlenbeck Hamiltonian is thus self-adjoint even though it is manifestly not Hermitian. Moreover, if we set ω 1 = α + iβ, ω 2 = α − iβ (α still positive and β real) we obtain ω 1 + ω 2 = 2α, ω 1 ω 2 = α 2 + β 2 . Thus, quite remarkably, all the terms in ψ + (z, x) not only remain real, they undergo no sign change, with the wave functions thus still being normalizable in the selfsame Stokes wedges. With this also being the case for the excited states, even in the complex energy sector, an again manifestly non-Hermitian situation, H PU is still self-adjoint. B. Self-Adjointness and Antilinearity While of course many operators are both Hermitian and self-adjoint, as we see from the Pais-Uhlenbeck example self-adjointness should not in general be associated with Hermiticity. The Pais-Uhlenbeck model shows that there is instead a connection between antilinearity and self-adjointness, and this turns out to be general. Specifically, below in Sec. VI we will show that if a Hamiltonian has an antilinear symmetry the Euclidean time path integral is real. Moreover, if the real parts of the energy eigenvalues of the Hamiltonian are bounded from below and all are positive, the Euclidean time path integral is well-behaved and finite. In consequence, the Minkowski time path integral is finite too. Then, because of the complex plane correspondence principle that we derive below in Sec. VIII, the quantum Hamiltonian must be self-adjoint in some domain in the complex plane. In general then, antilinearity implies self-adjointness. As to the converse, we note that if a Hamiltonian is self-adjoint in some direction in the complex plane, in that direction asymptotic surface terms would vanish and left-right inner products would be time independent.
While we can show that i∂ t L(t)|R(t) is immediately zero when Ĥ is represented as an infinite-dimensional matrix in Hilbert space, when Ĥ is represented as a differential operator, it acts to the right on |R(t) and to the left on L(t)|. To then show that i∂ t L(t)|R(t) is zero requires the vanishing of the asymptotic surface term generated in an integration by parts. With such surface terms vanishing when Ĥ is self-adjoint, self-adjointness thus leads to probability conservation. In addition, we note that if in matrix elements of the form R| Ĥ|R = ∫dxdy ψ * R (x) x| Ĥ |y ψ R (y) we can drop surface terms in an integration by parts, we would have both self-adjointness and Hermiticity. However, when we need to distinguish between left- and right-eigenstates and introduce matrix elements of the form L| Ĥ|R = ∫dxdy ψ * L (x) x| Ĥ |y ψ R (y), this time if we can drop surface terms in an integration by parts, we would still have self-adjointness but would not have Hermiticity (i.e. not have Ĥ LR = ( Ĥ RL ) * ) since ψ * L (x) is not the same as ψ * R (x). Self-adjointness is thus distinct from Hermiticity while encompassing it as the special case in which self-adjointness is secured without the need to continue into the complex plane. Probability would then be conserved and, as shown in Sec. II, the Hamiltonian would then have an antilinear symmetry. Thus antilinearity implies self-adjointness, and self-adjointness implies antilinearity. C. Connection Between the CP T Norm and Left-Right Norm Now that we have identified CP T as the basic antilinear symmetry for quantum theory, we see that the overlap of a state with its CP T conjugate is time independent since the Hamiltonian is itself CP T symmetric, with this norm thus being preserved in time. Now in Sec. II we introduced a different time-independent norm, the overlap of a right-eigenvector with a left-eigenvector. Thus up to a phase we can now identify the left-eigenvector as the CP T conjugate of the right-eigenvector.
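The distinction between the biorthogonal left-right norm and the Dirac norm is easy to exhibit in a finite-dimensional sketch. The 2×2 matrix below is the standard PT-symmetric two-level example from the literature, standing in for the differential-operator case discussed above; all parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import expm

r, theta, s = 1.0, 0.5, 0.9      # real-eigenvalue phase: s > r*sin(theta)
H = np.array([[r * np.exp(1j * theta), s],
              [s, r * np.exp(-1j * theta)]])

w, V = np.linalg.eig(H)          # columns of V: right eigenvectors |R_n>
L = np.linalg.inv(V)             # rows of L: left eigenvectors <L_n|

assert np.allclose(L @ V, np.eye(2))                # <L_m|R_n> = delta_mn
assert not np.allclose(V.conj().T @ V, np.eye(2))   # Dirac orthonormality fails

# The left-right (metric) norm built from eta = L^dagger L is conserved in
# time even for superpositions, unlike the naive Dirac norm.
eta = L.conj().T @ L
psi0 = V[:, 0] + V[:, 1]
lr_norms = []
for t in (0.0, 0.5, 1.0):
    psi = expm(-1j * H * t) @ psi0
    lr_norms.append((psi.conj() @ eta @ psi).real)

assert np.allclose(lr_norms, lr_norms[0])
print("left-right norm conserved:", lr_norms)
```

Here η = L†L plays the role of the V operator of the text: it intertwines H† with H, so ⟨ψ|η|ψ⟩ is conserved while the Dirac norm of a superposition oscillates.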
The issue of the phase is of relevance since the utility of the CP T norm or of the left-right norm is not just in the time independence. The sign of the norm is also of significance. Since non-Hermitian Hamiltonians that have a real and complete eigenspectrum can be brought to a Hermitian form by a similarity transformation (cf. (14) above), and since the signs and magnitudes of inner products do not change under a similarity transformation, prior to making the transformation one must be able to define a positive definite norm for such a non-Hermitian Hamiltonian. The norm in question is not actually the overlap of a state with its CP T conjugate, but is instead the left-right norm L|R = R|V |R . However, as we discuss in more detail in the Appendix, in many cases the V operator can be written as V = P C where C = C −1 is the P T theory C operator. The V norm is thus equivalent to a P C norm. With both of these norms being positive definite, their interpretation as probabilities is secured. The issue of the sign is also of significance for a different reason. For the unequal frequency fourth-order derivative Pais-Uhlenbeck Hamiltonian it is found that if one quantizes the theory using the Dirac norm, these norms turn out to be negative (see e.g. [7]), causing one to think that such theories are not unitary or of physical relevance. However, the fact that the Dirac norm is found to be negative is actually a signal that one is quantizing in the wrong Hilbert space and that the Hamiltonian is not Hermitian. When quantized with the CP T norm used in CP T theories (i.e. with C added to P T ), the norms are then positive definite [7], with the theory then being fully acceptable.
By the same token, conformal gravity, equally a fourth-order derivative theory, is actually free of any negative Dirac norm ghost states [9,10], to thus be a fully acceptable quantum gravity theory. Moreover, it turns out that the Hamiltonian of (linearized) conformal gravity is actually Jordan block [9,10] (analog of the equal frequency Pais-Uhlenbeck model), to thus manifestly not be Hermitian but to instead possess an antilinear CPT symmetry. D. CP T Symmetry and the Construction of Field-Theoretic Lagrangians As regards the difference between Hermiticity and antilinearity, we note additionally that in constructing field-theoretic Lagrangian densities it is standard practice, particularly when spinors are involved, to add on to the chosen Lagrangian density its Hermitian conjugate. This is done in order to make the ensuing Hamiltonian be Hermitian, since one simply postulates as a priori input that it should be. However, as we have seen, this is too restrictive a condition, with quantum theory being richer. Moreover, it is anyway unnecessary and one never actually needs to impose Hermiticity at all, since one should instead add on the CP T conjugate (if one had initially chosen a Lagrangian density that was not CP T invariant). Not only does this encompass Hermiticity while allowing more general possibilities, CP T symmetry does not even need to be postulated as it is an output requirement for any quantum theory that has probability conservation and complex Lorentz invariance. VI. ANTILINEARITY AND EUCLIDEAN TIME GREEN'S FUNCTIONS AND PATH INTEGRALS A.
Hermitian Case To explore the interplay between antilinear symmetry and path integrals it suffices to discuss self-conjugate fields, and so we assume C invariance and reduce CP T symmetry to P T symmetry. So consider now the generic two-point path integral ∫D[φ]φ(0, t)φ(0, 0) exp(iS) with classical action S = ∫d 4 xL(x), as integrated over the paths of some generic self-conjugate field φ( x, t), with x conveniently taken to be zero. In theories in which the Hamiltonian is Hermitian, the left and right vacua needed for the two-point function are Hermitian conjugates of each other, and we can represent the associated time-ordered two-point function as a path integral, where E 0 is the energy of the state |Ω . Since the treatments of the t > 0 and t < 0 parts of the two-point function are analogous, we shall only discuss the t > 0 part in the following. On introducing the time evolution operator, using the spectral decomposition H = Σ n |n E n n|, and taking φ( x, t) to be Hermitian, evaluation of the t > 0 part of the two-point function yields (47). In arriving at this result we have identified n|φ(0, 0)|Ω as the complex conjugate of Ω|φ(0, 0)|n . Such an identification can immediately be made if the states |n are also eigenstates of a Hermitian φ(0, 0), except for the fact that they actually cannot be since [φ, H] = i∂ t φ is not equal to zero. Nonetheless, in its own eigenbasis we can set φ = Σ α |α φ α α|, where the φ α are real, from which the last equality in (47) then follows after all.
If we now substitute the Euclidean time τ = it in (47), the resulting expression is completely real since all the eigenvalues of a Hermitian Hamiltonian are real, to thus confirm that in this case the Euclidean time two-point function and the Euclidean time path integral are completely real. The Euclidean time two-point function is convergent at large positive τ if all the E n are greater than or equal to zero. (The complex t plane Wick rotation is such that t > 0 corresponds to τ > 0. 9 ) Also, its expansion at large τ is dominated by E 0 , with the next to leading term being given by the next lowest energy E 1 and so on. Finally, in order for the time-ordered two-point function given in (46) to be describable by a Euclidean time path integral with convergent exponentials, as per continuing in time according to τ = it, we would need iS = i ∫dtd 3 xL( x, t) = ∫dτ d 3 xL( x, −iτ ) to be real and negative definite on every path. 9 With t-plane singularities having t I > 0 (the typical oscillator path integral behaves as 1/ sin[(ω − iǫ)t]), and with circle at infinity terms vanishing in the lower half plane (cf. exp[−iω(t R + it I )]), τ = it is a lower right quadrant Wick rotation. B.
CP T Symmetric Case with All Energies Real We can obtain an analogous outcome when the Hamiltonian is not Hermitian, and as we now show, it will precisely be P T symmetry (i.e. CP T symmetry) that will achieve it for us. As described earlier, in general we must distinguish between left- and right-eigenvectors, and so in general the t > 0 two-point function will represent Ω L |φ(0, t)φ(0, 0)|Ω R e −iE0t . Now in the event that the left-eigenvectors are not the Dirac conjugates of the right-eigenvectors of H, the general completeness and orthogonality relations (in the non-Jordan-block case) are given by [11] as Σ n |R n L n | = I and L n |R m = δ(n, m), while the spectral decomposition of the Hamiltonian is given by H = Σ n |R n E n L n |. To analyze this expression we will need to determine the matrix elements of φ(0, 0). To use Hermiticity for φ(0, 0) is complicated and potentially not fruitful. Specifically, if we insert φ = Σ α |α φ α α| in the various matrix elements of interest, on recalling that L| = R|V , we obtain an expression that is not only not necessarily equal to Ω L |φ(0, 0)|R n * , it does not even appear to be related to it. To be able to obtain a quantity that does involve the needed complex conjugate, we note that as well as being Hermitian, as a self-conjugate neutral scalar field, φ(0, 0) is P T even. Its P T transformation properties are straightforward since we can write everything in the left-right energy eigenvector basis (as noted in Sec. I, relations such as [P T, φ] = 0 and thus P T φT −1 P −1 = φ are basis independent). On applying a P T transformation we note that, as per (23), for energy eigenvalues that are real we have P T |R i = |R i and L j |T P = L j |, with P T φT P = φ thus fixing the reality properties of the matrix elements. Thus we can take the matrix elements φ 0n and φ n0 to both be real, and with real E n the resulting two-point function is then completely real when the time is Euclidean. Thus in the real eigenvalue sector of a P T -symmetric theory, the Euclidean time two-point function and the Euclidean time path integral are completely real. Since they both are completely
real, we confirm that the form Ω L |φ(0, t)φ(0, 0)|Ω R is indeed the correct P T -symmetry generalization of the Hermitian theory form Ω|φ(0, t)φ(0, 0)|Ω used in (46) above. C. CP T Symmetric Case with Some Energies in Complex Pairs In the event that energy eigenvalues appear in complex conjugate pairs, we have two cases to consider, namely cases in which there are also real eigenvalues, and cases in which all eigenvalues are in complex conjugate pairs. In both cases we shall sequence the energy eigenvalues in order of increasing real parts of the energy eigenvalues. Moreover, in cases where there are both real and complex energy eigenvalues we shall take the one with the lowest real part to have a purely real energy. For energy eigenvalues that are in complex conjugate pairs according to E ± = E R ± iE I , as per (23) the associated eigenvectors have time dependencies e −iE + t and e −iE − t . Given (27) and (28), we see that these eigenvectors have no overlap with the eigenvectors associated with purely real eigenvalues. In the complex conjugate energy eigenvalue sector we can construct an analogous completeness relation, as summed over however many complex conjugate pairs there are, together with the associated orthogonality relations, while the spectral decomposition of the Hamiltonian extends accordingly. Thus just as in our discussion of transition matrix elements in Secs. I and II, the non-trivial overlaps are always between states with exponentially decaying and exponentially growing behavior in time. Now while the Hamiltonian does not link the real and complex conjugate energy sectors, the scalar field can. In this mixed sector, with summations being suppressed, the decomposition of the scalar field again follows, with P T φT P = φ fixing the reality properties of its matrix elements. The contribution of this sector to the two-point function is given in (57), and via (57) we see that the Euclidean time Green's function and path integral are completely real, just as desired.
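The reality of such paired-sector contributions rests on a one-line mechanism. If a conjugate pair of energies E ± = E R ± iE I contributes with a coefficient c and its complex conjugate c * (as the P T transformation properties of the matrix elements require), then

```latex
\[
G(\tau) \;=\; c\,e^{-(E_R+iE_I)\tau} \;+\; c^{*}\,e^{-(E_R-iE_I)\tau}
\;=\; 2\,e^{-E_R\tau}\Bigl[\operatorname{Re}c\,\cos(E_I\tau)
\;+\; \operatorname{Im}c\,\sin(E_I\tau)\Bigr],
\]
```

which is manifestly real for all τ, while an unpaired complex eigenvalue would instead leave an irreducibly complex term.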
On comparing (58) with (54), we see that ( 58) is a direct continuation of (54), with pairs of states with real energy eigenvalues in (54) continuing into pairs of states with complex conjugate energy eigenvalues in (58).This pattern is identical to the one exhibited by the two-dimensional matrix example given in (1).Since we have to go through a Jordan-block phase in order to make the continuation from real to complex energy eigenvalues, we can infer that also in the P T -symmetric Jordan-Block case the Euclidean time Green's function and path integral will be real.In fact this very situation has already been encountered in a specific model, the real frequency realization of the fourth-order Pais-Uhlenbeck two-oscillator model.The Hamiltonian of the theory is P T symmetric, and in the equal-frequency limit becomes Jordan block.For both the real and unequal frequency case and the real and equal frequency case the Euclidean time path integral is found to be real [24], with the unequal-frequency path integral continuing into the equal-frequency path integral in the limit, while nicely generating none other than the Euclidean time continuation of the non-stationary t exp(−iEt) wave function described in Sec.I. D. 
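The non-stationary t exp(−iEt) behavior mentioned here can be exhibited directly with a 2×2 Jordan block. This is a toy sketch with an illustrative eigenvalue, not the Pais-Uhlenbeck Hamiltonian itself:

```python
import numpy as np
from scipy.linalg import expm

E = 1.0                          # illustrative real eigenvalue
H = np.array([[E, 1.0],
              [0.0, E]])         # 2x2 Jordan block: not diagonalizable

t = 0.8
U = expm(-1j * H * t)
# Since H = E*I + N with N nilpotent, e^{-iHt} = e^{-iEt} (I - i N t):
expected = np.exp(-1j * E * t) * np.array([[1.0, -1j * t],
                                           [0.0, 1.0]])
assert np.allclose(U, expected)
# The off-diagonal entry is the non-stationary t*e^{-iEt} piece that
# replaces a second stationary eigenmode at the Jordan-block point.
```

Under τ = it the off-diagonal piece continues into the τ e^{−Eτ} terms referred to in the converse argument below.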
CP T Symmetric Case with All Energies in Complex Pairs In the event that all the energy eigenvalues of the theory are in complex conjugate pairs, we need to evaluate two-point function matrix elements taken in these states. Since the Hamiltonian does not induce transitions between differing pairs we only need to consider one such pair. In this sector we can expand φ in the pair basis, with P T φT P = φ again fixing the reality properties of the matrix elements, and can thus evaluate the contribution of the pair to the two-point function. From (60) we see that the Euclidean time Green's function and path integral associated with the sum Ω + |φ(0, t)φ(0, 0)|Ω − + Ω − |φ(0, t)φ(0, 0)|Ω + are completely real. (The difference would be purely imaginary.) Thus, as indicated in Sec. I, in all possible cases we find that if the Hamiltonian is P T symmetric the Euclidean time Green's functions and path integrals are real. 10 To prove the converse, we note that when we continue the path integral to Euclidean time and take the large τ = it limit, the leading term is of the form exp(−E 0 τ ) where E 0 is the energy of the ground state. The next to leading term is the first excited state and so on (as sequenced according to the real parts of the energy eigenvalues, all taken to be positive). If the Euclidean time path integral is real, it is not possible for there to be any single isolated complex energy eigenvalue. Rather, any such complex eigenvalues must come in complex conjugate pairs, and likewise the left-right overlap matrix elements of the fields (the coefficients of the exp(−Eτ ) terms) must equally come in complex conjugate pairs. Thus if the Euclidean time path integral is real we can conclude that all the energies and matrix elements are real or appear in complex conjugate pairs. Moreover, if the energies are all real but one obtains some matrix elements that are not stationary (i.e. ∼ τ exp(−Eτ )), we can conclude that the Hamiltonian is Jordan block. Hence, according to our previous discussion, in all cases the Hamiltonian of the theory must be P T symmetric. We thus establish
that P T (i.e. CP T ) symmetry is both a necessary and sufficient condition for the reality of the Euclidean time path integral, and generalize to field theory the analogous result for |H − λI| that was obtained in [16] for matrix mechanics. VII. CONSTRAINING THE PATH INTEGRAL ACTION VIA CP T SYMMETRY The discussion given above regarding path integrals was based on starting with matrix elements of products of quantum fields and rewriting them as path integrals. Thus we begin with the q-number theory in which the quantum-mechanical Hilbert space is already specified and construct a c-number path integral representation of its Green's functions from it. However, if one wants to use path integrals to quantize a theory in the first place one must integrate the exponential of i times the classical action over classical paths. Thus we start with the classical action, and if we have no knowledge beforehand of the structure of the quantum action, we cannot construct the classical action by taking the quantum action and replacing each q-number quantity in it by a c-number (i.e. by replacing q-number operators that obey non-trivial ħ-dependent commutation relations by c-number quantities for which all commutators are zero.) Moreover, while a quantum field theory may be based on Hermitian operators, such Hermiticity is an intrinsically quantum-mechanical concept that cannot even be defined until a quantum-mechanical Hilbert space has been constructed on which the quantum operators can then act. Or stated differently, since path integration is an entirely classical procedure involving integration of a purely classical action over classical paths, there is no reference to any Hermiticity of operators in it at all. And even if one writes the Lagrangian in the classical action as the Legendre transform of the classical Hamiltonian, one cannot attach any notion of Hermiticity to the classical Hamiltonian either.
To try to get round this problem one could argue that since the eigenvalues of Hermitian operators are real, and since such eigenvalues are c-numbers, one should build the classical action out of these eigenvalues, with the classical action then being a real c-number. And if the classical action is real, in Euclidean time i times the action would be real too. The simplest example of a real classical action is the one inferred from the quantum Lagrangian m ẋ 2 /2 for a free, non-relativistic quantum particle with a q-number position operator that obeys [x, p] = iħ. On setting ħ = 0 one constructs the classical Lagrangian as the same m ẋ 2 /2 except that now x is a c-number that obeys [x, p] = 0. Another familiar example is the neutral scalar field Lagrangian ∂ µ φ∂ µ φ, with the same form serving in both the q-number and c-number cases. If we take the fields to be charged, while we could use a Lagrangian of the form ∂ µ φ∂ µ φ * in the c-number case, in the q-number case we would have to use ∂ µ φ∂ µ φ † . A.
Gauge Field and Fermion Field Considerations Despite this, the prescription fails as soon as one couples to a gauge field or introduces a fermion field. For a gauge field one can take the quantum-mechanical A µ to be Hermitian and the classical-mechanical A µ to be real, with the Euclidean time continuation of iS then automatically being real. Consequently, the associated Euclidean time path integrals and Green's functions would be real too. However, like the condition H = H † , the condition H = H * is not preserved under a similarity transformation. Thus initially we could only establish reality of the Euclidean time Green's functions and path integrals in a restricted class of bases. As the analysis of Sec. III shows, when Cφ( x, t)C −1 = φ( x, t) those bases include the ones in which P T φ( x, t)[P T ] −1 = φ(− x, −t). However, while the operator identity H = H * would transform non-trivially under a similarity transform, with the Green's functions being matrix elements of the fields as per Ω L |φ(0, t)φ(0, 0)|Ω R , the Euclidean time Green's functions and path integrals would be left invariant under the similarity transform and thus always take the real values obtained in the basis in which CP T φ( x, t)[CP T ] −1 = φ(− x, −t). That this must be the case is because the terms in the Euclidean time path integral behave as exp(−E i τ ) times left-right matrix elements of the field operators, where the E i are energy eigenvalues, and energy eigenvalues and field operator matrix elements are left invariant under similarity transformations. With such
a real A µ one could introduce a classical Lagrangian density of the form (∂ µ φ − A µ φ)(∂ µ φ * − A µ φ * ). Now while this particular classical Lagrangian density would be locally invariant under φ → e α(x) φ, A µ → A µ + ∂ µ α(x), it would not be acceptable since a path integration based on it would not produce conventional quantum electrodynamics. Rather, to generate conventional quantum electrodynamics via path integration one must take the classical Lagrangian density to be of the form (∂ µ φ − iA µ φ)(∂ µ φ * + iA µ φ * ). Now in this particular case we already know the answer, since this is the form of the quantum-mechanical Lagrangian density. However, that does not tell us what classical action to use for other theories for which the quantum-mechanical action is not known ahead of time. To address this issue we need to ask why one should include the factor of i in the quantum Lagrangian in the first place. The answer is that in quantum mechanics it is not ∂ µ that is Hermitian. Rather, it is i∂ µ . Then since ∂ µ is anti-Hermitian one must combine it with some anti-Hermitian function of the Hermitian A µ , hence iA µ . We thus have a mismatch between the quantum and classical theories, since while ∂ µ is real it is not Hermitian. We must thus seek some entirely different rule for determining the classical action needed for path integration, one that does not rely on any notion of Hermiticity at all. That needed different rule is CP T symmetry.
Because of the structure of the Lorentz force F = e E + e v × B, in classical electromagnetism one should not be able to distinguish between a charge e moving in given E and B fields and the oppositely signed charge moving in − E and − B fields (opposite since these E and B fields are themselves set up by charges). In consequence both e and A µ are taken to be charge conjugation odd so that the combination eA µ is charge conjugation even. Thus in order to implement CP T invariance for classical electromagnetic couplings where A µ always appears multiplied by e, one only needs to implement P T invariance. Now under a P T transformation A µ is P T even. Thus with ∂ µ being P T odd, 11 we see that we must always have ∂ µ be accompanied by ieA µ and not by eA µ itself, since then both ∂ µ and ieA µ would have the same negative sign under P T . To then construct a coupling term that has zero Lorentz spin, is P T (and thus CP T ) even, and obeys KL(x)K = L(x) (cf. the discussion in Sec. III), we must build the coupling out of the combination ∂ µ − ieA µ , with P T and CP T thus readily being implementable at the level of the classical action. We must thus use CP T symmetry at the classical level in order to fix the structure of the classical path integral action. And moreover, CP T symmetry can be implemented not just on one classical path such as the stationary one, it can be implemented on every classical path, stationary or non-stationary alike. When this is done, the resulting quantum theory obtained via path integral quantization will also be CP T symmetric, with the associated quantum Hamiltonian being CP T symmetric too, and being so regardless of whether or not it might be Hermitian. The situation for fermion fields is analogous. Specifically, for fermion fields we could introduce Grassmann fermions and take the path integral action to be ∫d 4 x ψ̄γ µ ∂ µ ψ. However, this expression is not CP T invariant, and it is CP T symmetry that tells us to introduce a factor of i and use the standard ∫d 4 x ψ̄iγ µ ∂ µ ψ instead. B.
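The sign bookkeeping behind the ieA µ rule can be summarized compactly (this merely restates the transformation properties quoted above, with T antilinear so that i → −i):

```latex
\[
PT:\qquad \partial_\mu \to -\,\partial_\mu,\qquad
A_\mu \to +\,A_\mu,\qquad i \to -\,i,
\]
so that
\[
PT:\qquad \partial_\mu - ieA_\mu \;\to\; -\bigl(\partial_\mu - ieA_\mu\bigr),
\]
```

with the two terms transforming with one and the same sign; with the real coupling ∂ µ − eA µ instead, the two terms would transform with opposite signs and the combination would have no definite P T weight.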
Gravity Considerations Similar considerations apply to path integral actions that involve gravity, and again there is a simplification, since just like the classical eA µ , the metric g µν is charge conjugation even. Thus if we take a relativistic flat spacetime theory that is already CP T invariant and replace η µν by g µν , replace ordinary derivatives by covariant ones, and couple to gravity via the standard Levi-Civita connection, CP T invariance would not be impaired. Now in coupling to gravity one can use a geometric connection Γ λ µν that is more general than the standard Levi-Civita connection. One could for instance introduce a torsion-dependent contorsion connection K λ µν , where Q λ µν = Γ λ µν − Γ λ νµ is the antisymmetric part of the connection. Or one could use the modified Weyl connection V λ µν introduced in [25][26][27], as built from the electromagnetic vector potential A µ . As shown in [25], both K λ µν and V λ µν transform in the same CP T way (viz. CP T odd) as Λ λ µν (with V λ µν doing so precisely because of the factor of i), and thus neither of them modifies the P T or CP T structure of the theory in any way, with the theory remaining CP T invariant.
Our use of the modified V λ µν connection is of interest for another reason. When first introduced by Weyl in an attempt to metricate (geometrize) electromagnetism and give gravity a conformal structure, the connection was taken to be of a form W λ µν that, apart from an overall normalization factor, differs from the modified one by not possessing the factor of i. Since Weyl was working in classical gravity, everything was taken to be real, with the ∂ µ derivative in the Levi-Civita connection being replaced by ∂ µ − 2eA µ in order to generate W λ µν . From the perspective of classical physics the Weyl prescription was the natural one to introduce. However, it turns out that this prescription does not work for fermions, since if the Weyl connection is inserted into the curved space Dirac action as is, it is found to drop out identically [25], with Weyl's attempt to metricate electromagnetism thus failing for fermions. However, when instead the modified V λ µν is inserted into the curved space Dirac action, it is found [25] to precisely lead to minimally coupled electromagnetism with action ∫d 4 x(−g) 1/2 i ψ̄γ µ (∂ µ + Γ µ − ieA µ )ψ (the 2/3 factor in V λ µν serves to give A µ the standard minimally coupled weight), where Γ µ is the fermion spin connection as evaluated with the Levi-Civita connection alone. Thus the geometric prescription that leads to the correct coupling of fermions to the vector potential is not to replace ∂ µ by ∂ µ − 2eA µ in the Levi-Civita connection, but to replace it by ∂ µ − (4ie/3)A µ instead. We note that it is this latter form that respects CP T symmetry, and in so doing it leads to a geometrically-generated electromagnetic Dirac action that is automatically CP T symmetric. Hence even in the presence of gravity we can establish a CP T theorem. Now as we had noted above, the conformal gravity theory possesses a non-diagonalizable Jordan-block Hamiltonian. It thus provides an explicit field-theoretic model in which the CP T theorem holds in a
non-Hermitian gravitational theory. Beyond being an example of a non-Hermitian but CP T -invariant theory, conformal gravity is of interest in its own right, with the case for local conformal gravity having been made in [10,28], and the case for local conformal symmetry having been made in [29,30]. Moreover, if we introduce a fermion Dirac action I D = ∫d 4 x(−g) 1/2 i ψ̄γ µ (∂ µ + Γ µ − ieA µ )ψ, then as noted in [31], if we perform a path integration over the fermions we obtain an effective action (with numerical coefficients a and b) that is none other than the conformal gravity action (as evaluated with the standard Levi-Civita connection) plus the Maxwell action. Since the I D fermion action is the completely standard one that is used for fermions coupled to gravity and electromagnetism all the time, we see that the emergence of the conformal gravity action is unavoidable in any conventional standard theory. (In a study of quantum gravity 't Hooft [30] has commented that the inclusion of the conformal gravity action seems to be inevitable.) Since we have seen that the conformal gravity action is not Hermitian but nonetheless CP T symmetric, in any fundamental theory of physics one would at some point have to deal with the issues raised in this paper. VIII.
CONTINUING THE CP T AND P T OPERATORS AND PATH INTEGRALS INTO THE COMPLEX PLANE

As we have seen, there are two different ways to obtain a real Euclidean time path integral in which all energy eigenvalues are real: the Hamiltonian could be Hermitian, or the theory could be in the real eigenvalue realization of a CP T -symmetric but non-Hermitian (and possibly even Jordan-block) Hamiltonian. One thus needs to ask how to determine which case is which. In [11] a candidate resolution of this issue was suggested. Specifically, the real time (i.e. Minkowski, not Euclidean) path integral was studied in some specific models that were charge conjugation invariant (as we discussed in Sec. VII, charge conjugation essentially plays no role at the classical level anyway, since at the classical level eA µ is charge conjugation invariant). In these studies it was found that in the Hermitian case the path integral existed with a real measure, while in the CP T and thus P T case the fields in the path integral measure (but not the coordinates on which they depend) needed to be continued into the complex plane. (Continuing the path integral measure into the complex plane is also encountered in 't Hooft's study of quantum gravity [32].) Moreover, should this pattern of behavior prove to be the general rule, it would then explain how quantum Hermiticity arises in a purely c-number based path integral quantization procedure in the first place, since the path integral itself makes no reference to any Hilbert space whatsoever. Specifically, the general rule would then be that only if the real time path integral exists with a real measure, and its Euclidean time continuation is real, would the quantum matrix elements that the path integral describes be associated with a Hermitian Hamiltonian acting on a Hilbert space with a standard Dirac norm. In this section we provide a proof of this proposition.

A.
The Pais-Uhlenbeck Two-Oscillator Theory Path Integral To see what specifically happens to the path integral in the non-Hermitian case, it is instructive to begin by considering the path integral associated with the illustrative Pais-Uhlenbeck two-oscillator model that we discussed in Secs.IV and V.With charge conjugation playing no role in the path integral, it suffices to discuss the path integral from the perspective of P T symmetry.For real Minkowski time the path integral is given by Here the path integration is over independent z(t) and x(t) paths since the equations of motion are fourth-order derivative equations, and thus have twice the number of degrees of freedom as second-order ones, with x(t) replacing ż(t) and ẋ(t) replacing z(t) in the I PU action given in (39) [24].To enable the path integration to be asymptotically damped we use the Feynman prescription and replace ω 2 1 and ω 2 2 by ω 2 1 − iǫ and ω 2 2 − iǫ.This then generates an additional contribution to the path integral action of the form While this term provides damping for real x if ω 2 1 + ω 2 2 is positive, it does not do so for real z.Thus just as we had discussed in Sec.V in regard to normalizable wave functions, to obtain the required damping z needs to be continued into the Stokes wedges associated with the north and south quadrants of a letter X drawn in the complex z plane.In these particular wedges the path integration converges, and is then well-defined.Moreover, since ω 2 1 + ω 2 2 is real and positive for ω 1 and ω 2 both real and unequal, for ω 1 and ω 2 both real and equal, and for ω 1 and ω 2 complex conjugates of each other, the damping is achieved in all three of the possible realizations of the Pais-Uhlenbeck oscillator, with the path integral existing in all of these three cases, and existing in the self-same Stokes wedge in the three cases. 
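The location of these Stokes wedges can be made concrete numerically. As a minimal sketch (assuming, per the discussion above, that the undamped weight for z grows like exp(c z 2 ) with c > 0 on the real z axis), decay along a ray z = r e iθ requires Re(z 2 ) = r 2 cos 2θ < 0, which singles out precisely the north and south quadrants of the letter X:

```python
import numpy as np

# Along the ray z = r*exp(i*theta), a weight factor exp(c*z**2) with c > 0
# (growing on the real z axis) decays iff Re(z**2) = r**2*cos(2*theta) < 0.
thetas = np.linspace(0, 2 * np.pi, 721, endpoint=False)
damped = np.cos(2 * thetas) < 0

# The damped directions are pi/4 < theta < 3*pi/4 and 5*pi/4 < theta < 7*pi/4:
# the north and south quadrants of a letter X drawn in the complex z plane.
north = (thetas > np.pi / 4) & (thetas < 3 * np.pi / 4)
south = (thetas > 5 * np.pi / 4) & (thetas < 7 * np.pi / 4)
assert np.array_equal(damped, north | south)
```

The same computation shows why the real z axis itself (θ = 0 or π) can never lie inside a damped wedge.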
The boundaries between Stokes wedges are known as Stokes lines, and it is necessary to continue z into the complex plane until it crosses a Stokes line (the arms of the letter X in the Pais-Uhlenbeck case) in order to get a well-defined real time path integral. For the Pais-Uhlenbeck oscillator with real and unequal ω 1 and ω 2 the well-defined path integral that then ensues is associated with a P T -symmetric Hamiltonian, which while not Hermitian is Hermitian in disguise, with all energy eigenvalues being real and bounded from below [7], and with the Euclidean time path integral being real and finite. And even if ω 1 and ω 2 are complex conjugates of each other, the Euclidean time path integral is still real and finite. The need to continue the path integral measure into the complex plane thus reflects the fact that the Pais-Uhlenbeck Hamiltonian is not self-adjoint on the real z axis but is instead P T (and thus CP T ) symmetric.

B. Continuing Classical Symplectic Transformations into the Complex Plane

In order to generalize this result, below we will establish a general complex plane correspondence principle for Poisson brackets and commutators, and then use it to show that in general, whenever a continuation of the path integral measure into the complex plane is required, the associated quantum Hamiltonian could not be self-adjoint on the real axis. Moreover, since the discussion depends on the P T symmetry of the Hamiltonian (here we leave out C for simplicity), in a continuation into the complex plane we also need to ask what happens to the P T symmetry. As we now show, it too is continued, so that the [P T, H] = 0 commutator remains intact. We give the discussion for particle mechanics, with the generalization to fields being direct.
In classical mechanics one can make symplectic transformations that preserve Poisson brackets.A general discussion may for instance be found in [11], and we adapt that discussion here and consider the simplest case, namely that of a phase space consisting of just one q and one p.In terms of the two-dimensional column vector η = (q, p) (the tilde denotes transpose) and an operator J = iσ 2 we can write a general Poisson bracket as If we now make a phase space transformation to a new two-dimensional vector η ′ = (q ′ , p ′ ) according to the Poisson bracket then takes the form The Poisson bracket will thus be left invariant for any M that obeys the symplectic symmetry relation M J M = J. In the two-dimensional case the relation M J M = J has a simple solution, viz.M = exp(−iωσ 3 ), and thus for any ω the Poisson bracket algebra is left invariant.With q and p transforming as the qp product and the phase space measure dqdp respectively transform into q ′ p ′ and dq ′ dp ′ .With the classical action dt(p q − H(q, p)) transforming into dt(p ′ q′ − H(q ′ , p ′ )), under a symplectic transformation the path integral of the theory is left invariant too.Now though it is not always stressed in classical mechanics studies, since iω is just a number the Poisson bracket algebra is left invariant even if, in our notation, ω is not pure imaginary.This then permits us to invariantly continue the path integral into the complex (q, p) plane.Now one ordinarily does not do this because one ordinarily works with (phase space) path integrals that are already well-defined with real q and p.However, in the P T case the path integral is often not well-defined for real q and p but can become so in a suitable Stokes wedge region in the complex (q, p) plane.This means that as one makes the continuation one crosses a Stokes line, with the theories on the two sides of the Stokes line being inequivalent. 
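The symplectic condition on M can be checked directly even for complex ω. A minimal numerical sketch (M = exp(−iωσ 3 ) is diagonal since σ 3 is):

```python
import numpy as np

J = 1j * np.array([[0, -1j], [1j, 0]])   # J = i*sigma_2 = [[0, 1], [-1, 0]]

# M = exp(-i*omega*sigma_3) obeys M~ J M = J for any complex omega, so the
# Poisson bracket algebra survives the continuation into the complex plane.
for omega in [0.7, 0.3 + 1.1j, 2.0j]:
    M = np.diag([np.exp(-1j * omega), np.exp(1j * omega)])
    assert np.allclose(M.T @ J @ M, J)
```

Since M is diagonal here, M̃ = M and the check reduces to e −iω e iω = 1, which holds whether or not ω is real; this is the statement in the text that the invariance persists even when iω is not pure imaginary.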
As regards what happens to a P T transformation when we continue into the complex plane, we first need to discuss the effect of P T when q and p are real.When they are real, P effects q → −q, p → −p, and T effects q → q, p → −p.We can thus set P T = −σ 3 K where K effects complex conjugation on anything other than the real q and p that may stand to the right, and set Let us now make a symplectic transformation to a new P T operator (P T ) ′ = M P T M −1 .With iω being complex the transformation takes the form With η being real, we thus obtain Thus the primed variables transform the same way under the transformed PT operator as the unprimed variables do under the unprimed PT operator.With the Hamiltonian transforming as H ′ (q ′ , p ′ ) = M H(q, p)M −1 , the classical {P T, H} = {(P T ) ′ , H ′ } = 0 Poisson bracket is left invariant, in much the same manner as discussed for quantum commutators in Sec.I.The utility of this remark is that once the path integral is shown to be P T symmetric for all real paths, the P T operator will transform in just the right way to enable the path integral to be P T symmetric for complex paths as well.P T symmetry can thus be used to constrain complex plane path integrals in exactly the same way as it can be used to constrain real ones, and to test for P T symmetry one only needs to do so for the real measure case. C. 
Continuing Quantum Similarity Transformations into the Complex Plane

It is also instructive to discuss the quantum analog. Consider a pair of quantum operators q and p that obey [q, p] = i. Apply a similarity transformation of the form exp(ω pq) where ω is a complex number. This yields

q′ = e ω pq q e −ω pq = e −iω q, p′ = e ω pq p e −ω pq = e iω p, (77)

and preserves the commutation relation according to [q ′ , p′ ] = i. Now introduce quantum operators P and T that obey P 2 = I, T 2 = I, [P, T ] = 0, and effect

P qP = −q, T qT = q, P T qT P = −q, P pP = −p, T pT = −p, P T pT P = p. (78)

Under the similarity transformation the P T and T P operators transform according to

(P T ) ′ = e ω pq P T e −ω pq = e ω pq e ω * pq P T, (T P ) ′ = e ω pq T P e −ω pq = T P e −ω * pq e −ω pq . (79)

From (78) and (79) we thus obtain

(P T ) ′ q′ (T P ) ′ = e ω pq e ω * pq P T e −iω q T P e −ω * pq e −ω pq = e ω pq e ω * pq e iω * (−q) e −ω * pq e −ω pq = e ω pq e iω * e −iω * (−q) e −ω pq = −e −iω q = −q ′ .

Thus the primed variables transform the same way under the transformed PT operator as the unprimed variables do under the unprimed PT operator. With the Hamiltonian being a function of q and p, the [P T, Ĥ] = [(P T ) ′ , Ĥ′ ] = 0 commutator is left invariant. As we see, the classical and quantum cases track into each other as we continue into the complex plane, with both the Poisson bracket and commutator algebras being maintained for every ω. We can thus quantize the theory canonically by replacing Poisson brackets by commutators along any direction in the complex (q, p) plane, and in any such direction there will be a correspondence principle for that direction. We thus generalize the notion of correspondence principle to the complex plane. And in so doing we see that even if the untransformed q and p are Hermitian, as noted earlier, the transformed q′ and p′ will in general not be, since the transformations are not unitary ((q ′ ) † = e iω * q † = e iω * q ≠ e −iω q). However, what will be preserved is
their P T structure, with operators thus having well-defined transformation properties under a P T (i.e.CP T ) transformation. D. Continuing Path Integrals into the Complex Plane In order to apply this complex plane correspondence principle to path integrals, we need to compare the path integral and canonical quantization determinations of Green's functions.To this end we look at the matrix element iG(i, f ) = q i | exp(−i Ĥt)|q f .If one introduces left-and right-eigenstates of the quantum Hamiltonian, then, as we had noted in Sec.VI, the completeness and orthogonality relations take the form while the spectral decomposition of the Hamiltonian is given by Ĥ In terms of wave functions we thus have and can thus express G(i, f ) in terms of the eigenfunctions of Ĥ. Similarly, if we introduce eigenstates of the position and momentum operators q and p, and insert them into time slices of q i | exp(−i Ĥt)|q f , we obtain the path integral representation iG(i, f ) = D[q]D[p] exp[iS CL (q, p)] where S CL (q, p) = dt[p q − H(p, q)] is the value taken by the classical action on each classical path that connects q i at t = 0 with q f at t. 
Now even in the non-Hermitian Hamiltonian case this expression is the standard path integral representation of iG(i, f ) since it only involves the eigenstates of q and p and makes no reference to the eigenstates of Ĥ.Even if neither q nor p is self-adjoint when acting on the space of eigenstates of Ĥ, they are always self-adjoint and Hermitian when acting on their own position and momentum eigenstates.As had been noted in Sec.I such a self-adjointness mismatch between the action of the position and momentum operators on their own eigenstates and on those of the Hamiltonian is central to the P T -symmetry program, with a continuation into the complex (q, p) plane being required whenever there is any such mismatch.Thus while there are various ways to represent q i | exp(−i Ĥt)|q f , even though it was not originally intended when path integrals were first introduced, we see that writing iG(i, f ) as iG(i, f ) = D[q]D[p] exp[iS CL (q, p)] provides us with an ideal platform to effect a continuation of q and p into the complex plane. 
From the perspective of path integrals it initially appears that the path integral representation is not sensitive to the domain in the complex q plane in which the wave functions of the quantum Hamiltonian might be normalizable and in which the Hamiltonian acts on them as a self-adjoint operator. However, there is sensitivity to the Hamiltonian, not in writing the path integral down, but in determining the appropriate domain to use for the path integral measure. Specifically, since we may need to continue the coordinates through some complex angle in the complex plane in order to make the quantum Hamiltonian be self-adjoint, the complex plane correspondence principle requires that we then continue the path integral measure through exactly the self-same complex angle. As we show below, when we do need to make such a continuation, it will be the very continuation that enables the path integral to actually be well-defined and exist.

On introducing the matrix elements q|R = ψ R (q), L|q = ψ * L (q), the matrix element L|R is given by

If the wave functions are not normalizable when q is real, we must transform the coordinates into the complex plane to obtain

The theory is well-defined and the L|R norm is finite (i.e. probability is finite) if there exists some domain in the complex q ′ plane in which dq ′ ψ * L (q ′ )ψ R (q ′ ) is finite. In such a domain we must consider Green's functions of the form iG ′ (i, f ) = q ′ i | exp(−iHt)|q ′ f . They can be represented by both matrix elements and path integrals of respective form

Since the domain of q and p is chosen so that wave functions are normalizable, on normalizing them to one we obtain

If all the energy eigenvalues have real parts that are positive (i.e. real parts of the energies bounded from below), then on sequencing the sum on n so that Re[E n+1 ] > Re[E n ] and setting τ = it, we find that the modulus of exp(−E n+1 τ )/ exp(−E n τ ) is less than one for all n if τ > 0, with the sum exp(−E n τ ) thus
being convergent when τ is positive. In consequence the associated Euclidean time path integral must also be convergent in the same complex q, p domain. The complex plane correspondence principle thus translates into the equivalence of the two representations of the Green's function, with the domain in which the quantum Hamiltonian is self-adjoint being associated with the classical domain for which the path integral exists.

We can thus associate a real path integral measure with real self-adjoint quantum fields, and can associate a complex path integral measure with quantum fields that are only self-adjoint in Stokes wedges that do not include the real axis. Self-adjointness of the quantum Hamiltonian thus correlates with finiteness of the path integral. In consequence, only if the path integral is convergent with a real measure and its Euclidean time continuation is real (i.e. every term in exp(−E n τ ) is real) could the Hamiltonian be Hermitian, though even so the Hamiltonian would still be P T (i.e. CP T ) symmetric. However, if the path integral is only convergent if the measure is complex, the Hamiltonian would be P T (i.e. CP T ) symmetric but not Hermitian (though still possibly Hermitian in disguise of course). It is thus through the existence of path integrals that are convergent when the measure is real that Hermiticity can enter quantum theory. However, as noted earlier in our comparison of CP T symmetry and Hermiticity, the emergence of Hermiticity would be output rather than input, with it being dependent on what path integral measure would be needed in order for the path integral to actually be convergent. Thus, in quantizing physical theories via path integral quantization, Hermiticity of a Hamiltonian never needs to be postulated at all, with its presence or absence being determined by the domain of convergence of the path integral of the problem.

IX.
FINAL COMMENTS

In this paper we have studied the implications for quantum theory of antilinearity of a Hamiltonian and have presented various theorems. We have seen that if a Hamiltonian has an antilinear symmetry, then its eigenvalues are either real or appear in complex conjugate pairs; while if its eigenvalues are either real or appear in complex conjugate pairs, then the Hamiltonian must possess an antilinear symmetry. Similarly, we have seen that if a Hamiltonian has an antilinear symmetry, then its left-right inner products are time independent and probability is conserved; while if its left-right inner products are time independent and probability is conserved, then the Hamiltonian must possess an antilinear symmetry. In addition, we have discussed the distinction between Hermiticity and self-adjointness, and have shown that if a Hamiltonian is self-adjoint it must have an antilinear symmetry, and if it has an antilinear symmetry it must be self-adjoint. Such self-adjointness has primacy over Hermiticity since non-Hermitian Hamiltonians can be self-adjoint. When complex Lorentz invariance is imposed we have shown that the antilinear symmetry is then uniquely specified to be CP T . Since no restriction to Hermiticity is required, we thus extend the CP T theorem to non-Hermitian Hamiltonians, and through the presence of complex conjugate pairs of energy eigenvalues to unstable states.
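The first of these theorems is easy to illustrate numerically: any real matrix H obeys KHK = H, where K is complex conjugation, the simplest antilinear symmetry, and its eigenvalues must then be real or come in complex conjugate pairs. A minimal sketch with a hypothetical random example:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((6, 6))   # real, non-symmetric: K H K = H but H is not Hermitian

# The secular equation det(H - E*I) = 0 has real coefficients, so the
# spectrum must be closed under complex conjugation.
ev = np.linalg.eigvals(H)
assert all(np.any(np.isclose(ev, np.conj(e))) for e in ev)
```

Here the real example matrix is purely illustrative; the same closure holds for any Hamiltonian with an antilinear symmetry, real or not.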
As our discussion of the various Levi-Civita, generalized Weyl, and torsion connections given in Sec. VII shows, we even extend the CP T theorem to include gravity, with its extension to the conformal gravity theory showing that one can have a CP T theorem not only when a gravitational Hamiltonian (as defined via a linearization about flat spacetime) is not Hermitian, but even when it is not diagonalizable. CP T symmetry is thus seen to be altogether more far reaching than Hermiticity, and in general Hamiltonians should be taken to be CP T symmetric rather than Hermitian. With Hermiticity of a Hamiltonian, when it is in fact found to occur, being a property of the solution to a CP T -invariant theory and not an input requirement, Hermiticity never needs to be postulated at all.

In comparing CP T symmetry with Hermiticity we note that C, P , and T symmetries all have a natural connection to spacetime, since P affects spatial coordinates, T affects the time coordinate, and C relates particles propagating forward in time to antiparticles propagating backward in time. As stressed in [3], Hermiticity has no such physical association, being instead a purely mathematical requirement. While one can use such a mathematical requirement to derive the CP T theorem, our point here is that one can derive the CP T theorem entirely from physical considerations, namely conservation of probability and invariance under complex Lorentz transformations.
A further distinction between antilinearity and Hermiticity is to be found in Feynman path integral quantization, with Feynman path integral quantization being a purely c-number approach to quantization, while Hermiticity of a Hamiltonian is only definable at the q-number level. Moreover, we have shown that in order to construct the correct classical action needed for a path integral quantization one must impose CP T symmetry on each classical path. Such a requirement has no counterpart in any Hermiticity condition, since Hermiticity of a Hamiltonian is only definable after the quantization has been performed and the quantum Hilbert space has been constructed. Hermiticity is thus quite foreign to c-number path integrals, while CP T symmetry is perfectly compatible with them.

When Hermiticity was first introduced into quantum mechanics it was done so because in experiments one measures real quantities, and one would like to associate them with real eigenvalues of quantum-mechanical operators, with the operators then being observables. However, one does not need to impose Hermiticity in order to obtain real eigenvalues, since Hermiticity is only a sufficient condition for obtaining real eigenvalues, with it being antilinearity that is the necessary condition. In addition, we note that since the eigenvectors of a Hermitian Hamiltonian are stationary, they cannot describe decays. Now while decays would require energy eigenvalues to be complex, the imaginary part of a complex energy is real, and is thus also an observable. Specifically, in a scattering experiment one measures a cross section as a function of energy, and on observing a resonance one identifies the position of the peak of the resonance as the real part of the energy of the state and the value of its width as its imaginary part, i.e.
one measures two real numbers, the position of the peak and the width. Thus both the position of the peak and the value of the width are real observable quantities even though the resonance state is described by a complex energy. While such complex energies are foreign to Hermitian Hamiltonians they are perfectly natural for antilinearly symmetric ones, since the presence of complex conjugate pairs of energy eigenvectors and energy eigenvalues ensures the time independence of the appropriate inner products and conservation of probability, just as discussed in Secs. I and II. Antilinearity thus outperforms Hermiticity. To conclude, we note that CP T symmetry is more far reaching than Hermiticity and can supplant it as a fundamental requirement for physical theories, with it being antilinearity (as realized as CP T ) rather than Hermiticity that should be taken to be a guiding principle for quantum theory.

X. APPENDIX

A. The Majorana Basis for the Dirac Gamma Matrices

As described for instance in [22], in terms of the standard Dirac γ µ D basis for the Dirac gamma matrices one constructs the Majorana basis via

to yield

where

These matrices obey the standard γ µ M γ ν M + γ ν M γ µ M = 2η µν , and as constructed, every non-zero element of every γ µ M , of γ 5 M , and of C M is pure imaginary. In the Majorana basis C M = γ 0 M . With the gamma matrices one then constructs the six antisymmetric

The six M µν M satisfy the infinitesimal Lorentz generator algebra given in (29), and as constructed every non-zero element of every M µν M is pure imaginary. Consequently, for real w µν the transformation exp(iw µν M µν M ) is purely real, and thus maintains the reality of a real Majorana spinor under a real Lorentz transformation.

C.
Implications of Complex Conjugation In applying complex conjugation one ordinarily takes K to act on c-numbers but not on q-numbers, so that for the typical ψ 1 + iψ 2 , K is taken to effect K(ψ 1 + iψ 2 )K = ψ 1 − iψ 2 .However, this is not a general rule, since if we apply K to the [x, p] = i commutator we find that K[x, p]K = −i.Hence one of x and p must conjugate into minus itself.Now both x and p are Hermitian, and given the [x, p] = i commutator, both x and p can be represented as infinite-dimensional matrices.If one sets x = (a + a † )/ √ 2, p = i(a † − a)/ √ 2, so that [a, a † ] = 1, then in the Fock space with a vacuum that obeys a|Ω = 0, we find that x is represented by an infinite-dimensional matrix that is real and symmetric (analog of σ 1 ), while p is represented by an infinite-dimensional matrix that is pure imaginary and antisymmetric (analog of σ 2 ).Complex conjugation thus does see the i factor in p and effects K pK = −p while leaving x = K xK untouched. For field theory exactly the same situation prevails for the canonical commutator [φ( x, t), π( y, t)] = iδ 3 ( x − y), and with one ordinarily taking the Hermitian φ(x) to be a real and symmetric infinite-dimensional matrix that obeys Kφ(x)K = φ(x), one must take the Hermitian π(x) to be a pure imaginary and antisymmetric infinite-dimensional matrix that obeys Kπ(x)K = −π(x).However, since one ordinarily only discusses how operations such as time reversal affect the fields that appear in the Lagrangian, one does not need to discuss how complex conjugation might affect their canonical conjugates. 
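This Fock-space representation is easy to exhibit explicitly. A minimal numerical sketch (using a truncated N-dimensional Fock space, so that [x, p] = i holds everywhere except in the last diagonal entry, which is a truncation artifact):

```python
import numpy as np

N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator in the Fock basis
adag = a.T                                    # creation operator (real matrix)

x = (a + adag) / np.sqrt(2)          # real and symmetric:    K x K = x
p = 1j * (adag - a) / np.sqrt(2)     # imaginary, antisymmetric:  K p K = -p

assert np.allclose(x, x.conj()) and np.allclose(x, x.T)
assert np.allclose(p, -p.conj()) and np.allclose(p, -p.T)

# [x, p] = i*[a, a^dag] = i on the truncated space, apart from the final
# diagonal element, so K[x, p]K = -i as required.
comm = x @ p - p @ x
assert np.allclose(comm[:-1, :-1], 1j * np.eye(N - 1))
```

The check confirms the statement in the text: complex conjugation does see the factor of i in p, effecting K pK = −p while leaving x untouched.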
However, for fermions the situation can be different. Ordinarily one chooses to set R αβ = I αβ (in any basis for the gamma matrices), to give

And even though the anticommutation relations are then consistent with each component of the Hermitian ψ 1 α and ψ 2 β being represented by matrices that are real and symmetric, one could equally represent these relations by appropriately choosing some or even all of the components of ψ 1 α and ψ 2 β to be pure imaginary and antisymmetric (cf. σ 2 2 = I). The above remarks also hold in the Dirac basis of the gamma matrices if one sets R αβ = (γ 0 D ) αβ , since γ 0 D is real and diagonal, differing from I only in the signs but not in the reality of its two lower components. However, if one sets R αβ = (γ 0 M ) αβ in the Majorana basis, one encounters two differences. First, one would have to multiply by i since (γ 0 M ) αβ is pure imaginary, so as to give R αβ = i(γ 0 M ) αβ . And second, (γ 0 M ) αβ is antisymmetric in its (α, β) indices. Thus with this quantization scheme we obtain

Now, since the Hermitian γ 0 M is pure imaginary and antisymmetric, every term in ψ 2 α ψ 1 β + ψ 1 β ψ 2 α − ψ 1 α ψ 2 β − ψ 2 β ψ 1 α must be pure imaginary, and thus must be affected by complex conjugation. Thus with the choices R αβ = I αβ , R αβ = (γ 0 D ) αβ some of the representations of the fermion fields could be pure imaginary. However, with the choice R αβ = i(γ 0 M ) αβ some of the representations must be pure imaginary. Thus whether or not Hermitian fields are affected by complex conjugation is not an intrinsic property of the fields themselves, but is instead a property of the structure of the quantization conditions. Thus in general we see that complex conjugation can act non-trivially on q-number fields depending on how they are represented, with the general rule being that K complex conjugates all factors of i no matter where they might appear. Thus in imposing complex conjugation one does not need to differentiate between c-numbers and q-numbers
at all.

While we have quantized the fermion fields so that K changes the signs of the two lower components of the ψ α spinor, this does not mean that time reversal does so too. Rather, time reversal must effect T ψ( x, t) T −1 = γ 1 γ 2 γ 3 ψ( x, −t), as this is the transformation that leaves the action for a free Dirac field invariant. Now the time reversal operator can be written as Û K where Û is unitary. Ordinarily one introduces the standard Û 1 that together with K effects T ψ( x, t) T −1 = γ 1 γ 2 γ 3 ψ( x, −t) when K is taken not to affect q-numbers at all. Thus in our case we set Û = Û 1 Û 2 where Û 2 effects Û 2 ψ( x, t) Û −1 2 = γ 2 γ 0 ψ( x, t), as this also reverses the signs of the two lower components of the spinor. Thus with T = Û 1 Û 2 K, the effect of time reversal on ψ( x, t) is the standard one that effects T ψ( x, t) T −1 = γ 1 γ 2 γ 3 ψ( x, −t). And indeed, it was using this standard form for the time reversal transformation that the entries in Tables I and II given in Sec. III were obtained.

D. Comparing the Charge Conjugation Operator with the P T Theory C Operator

In quantum field theory the charge conjugation operator obeys [ Ĉ, Ĥ] = 0, Ĉ 2 = I, and in P T theory there exists a C operator that obeys [C, Ĥ] = 0, C 2 = I. It was noted in [18] that with every Hamiltonian being CP T invariant, in the event that the Hamiltonian is also charge conjugation invariant one would then have a P T -invariant Hamiltonian that possesses an additional charge conjugation invariance, suggesting [18] that the Ĉ and C operators could be one and the same. Attractive as this possibility is, we show here that this is not in fact the case. However, if it is not the case, then one has to ask where the C operator invariance comes from if it is not charge conjugation invariance, and why a Hamiltonian should then possess two separate C-type invariances. We address these issues here.
To see why there is a difference between the two C-type operators, it suffices to consider the simple matrix M (s) given in (1).As noted in Sec.I, in its s 2 > 1 and s 2 < 1 realizations (energies real and energies in a complex pair) the P T theory C operator is given by C(s 2 > 1) = (σ 1 + iσ 3 cos α)/ sin α where sin α = (s 2 − 1) 1/2 /s, and C(s 2 < 1) = (σ 1 + iσ 3 cosh β)/i sinh β where sinh β = (1 − s 2 ) 1/2 /s.First, we note that these two expressions differ from each other, and second we note that both become singular when s 2 = 1, the point at which the Hamiltonian becomes Jordan block.Such a behavior cannot occur for charge conjugation, since a Hamiltonian is either charge conjugation invariant or it is not, and its status under charge conjugation or the structure of the charge conjugation operator cannot change as one varies c-number coefficients since charge conjugation only acts on q-number fields.Also, charge conjugation is not sensitive to any possible Jordan-block structures, with a Jordan-block Hamiltonian being able to be charge conjugation invariant. However, before concluding definitively that the C operator does not exist in the Jordan-block case even though the charge conjugation operator does exist, we have to show that there is no other choice for C that might exist in this case.To this end we consider the s 2 = 1 structure of our simple model as given in the Jordan canonical form exhibited on the right hand side of (2), where M = σ 0 + (σ 1 + iσ 2 )/2.If there is to be a C operator for it, the C operator must take the form C = c 0 σ 0 + c i σ i , and if it is to square to one and not simply be the identity matrix the coefficients must obey c 0 = 0, c 2 1 + c 2 2 + c 2 3 = 1.On setting [C, M ] = 0 we obtain −ic 2 σ 3 + ic 3 σ 2 − c 1 σ 3 + c 3 σ 1 = 0. Thus we need c 1 + ic 2 = 0, c 3 = 0. 
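These two conditions can be checked with a short symbolic computation, a sketch of the argument just given using the Jordan-block form M = σ 0 + (σ 1 + iσ 2 )/2:

```python
import sympy as sp

c0, c1, c2, c3 = sp.symbols('c0 c1 c2 c3')
I2 = sp.eye(2)
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])

M = I2 + (s1 + sp.I * s2) / 2                 # Jordan-block Hamiltonian
C = c0 * I2 + c1 * s1 + c2 * s2 + c3 * s3

# [C, M] = 0 forces c2 = i*c1 (i.e. c1 + i*c2 = 0) and c3 = 0 ...
sol = sp.solve(list(sp.expand(C * M - M * C)), [c2, c3], dict=True)[0]
assert sol[c2] == sp.I * c1 and sol[c3] == 0

# ... and then C**2 = I forces c1 = 0 and c0 = +/-1, i.e. C = +/- identity.
Csub = C.subs(sol)
sols2 = sp.solve(list(sp.expand(Csub * Csub - I2)), [c0, c1], dict=True)
assert all(s[c1] == 0 and s[c0] ** 2 == 1 for s in sols2)
```

The computation confirms that in the Jordan-block case the only operator that commutes with M and squares to one is (up to sign) the identity.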
Since these conditions are not compatible with c 2 1 + c 2 2 + c 2 3 = 1, we conclude that in the Jordan-block case there is no solution to [C, M ] = 0, C 2 = I other than the identity matrix, and only the identity would be continuous in continuing through the three regions s 2 > 1, s 2 = 1 and s 2 < 1.

Even though we have only derived this result in the two-dimensional case, it is in fact quite general for any antilinear operator for which we can continue parameters to go from the Jordan-block domain to the domain where energy eigenvalues appear in complex conjugate pairs. In that domain we only need to look at each pair separately, and since each such pair forms a two-dimensional system, we can continue back to the Jordan-block case pair by pair, to thus establish that the only allowed C operator that is continuous in the Jordan-block limit is the identity matrix. That of course does not mean that we cannot use a non-trivial C operator away from the Jordan-block limit; it is just that any such non-trivial C operator would have to be singular in the limit. Moreover, since the charge conjugation operator would obey the same two conditions (commute with the Hamiltonian and square to one) as the C operator in the event that the Hamiltonian is charge conjugation invariant, we can also conclude that for any charge conjugation invariant field-theoretic Hamiltonian that can be Jordan block, the charge conjugation operator must be the identity operator. In fact we have already met an example of this: the neutral scalar field theory with the action given in (41), as both the neutral scalar field and the associated Hamiltonian are charge conjugation even, with the Hamiltonian becoming Jordan block when M 2 1 = M 2 2 . Since the gravitational field is charge conjugation even, similar remarks apply to the conformal gravity theory, since its Hamiltonian is non-diagonalizable.
We thus have to conclude that the charge conjugation operator Ĉ and the PT-theory C operator are different, independent operators. Moreover, Ĉ is a spacetime-based operator whose action on fields is intrinsic to the fields themselves no matter in what particular Hamiltonian they might appear, whereas the structure found for the C operator in our example shows it to depend intrinsically on the structure of the Hamiltonian, and thus to change as one goes from one Hamiltonian to another.

Since we did find that the C operator becomes singular in the Jordan-block limit, this suggests that when a C operator does exist it should be related to the Hamiltonian-dependent similarity transformation that brings a given diagonalizable Hamiltonian to diagonal form, since this similarity transform must also become singular in the Jordan-block limit if the Hamiltonian is not to be diagonalizable in the limit. We now show that this is indeed the case.

Thus consider a general diagonalizable Hamiltonian H that is brought to diagonal form by the similarity transform BHB⁻¹ = H_D. In the diagonal form one can always find a non-trivial operator C_D that commutes with H_D and squares to one: one only needs every diagonal element of C_D to be +1 or −1, and this can always be achieved. If for instance H_D is N-dimensional, we can use the N diagonal λᵢ operators of U(N) as a complete basis for any diagonal operator in that space. Since we can form N independent linear combinations of the diagonal λᵢ, we have just the right number of degrees of freedom to specify the N diagonal elements of C_D in that space. To be definite, we shall always define the C_D operator of interest to be the one that has equal numbers of +1 and −1 diagonal elements when N is even, and one additional +1 element when N is odd. Finally, having defined the diagonal elements of C_D, we can transform back to the original basis to identify C = B⁻¹C_D B.
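As a concrete illustration of the construction C = B⁻¹C_D B, the following numpy sketch (with an illustrative Hamiltonian of our own choosing, not one from the text) builds C for a diagonalizable non-Hermitian 2×2 matrix and verifies that it commutes with H, squares to one, and is non-trivial:

```python
import numpy as np

# Illustrative diagonalizable, non-Hermitian Hamiltonian with real eigenvalues 2 and 3
H = np.array([[2.0, 1.0],
              [0.0, 3.0]])

_, R = np.linalg.eig(H)       # columns of R are the right eigenvectors of H
B = np.linalg.inv(R)          # so that B H B^{-1} is diagonal

# Choose C_D diagonal with equal numbers of +1 and -1 entries (N = 2 here)
C_D = np.diag([1.0, -1.0])
C = np.linalg.inv(B) @ C_D @ B   # transform back: C = B^{-1} C_D B

assert np.allclose(C @ H, H @ C)        # [C, H] = 0
assert np.allclose(C @ C, np.eye(2))    # C^2 = I
assert not np.allclose(C, np.eye(2))    # C is non-trivial
print(np.round(C, 3))                   # C = [[1, -2], [0, -1]] for this H
```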
This then gives us the desired C operator for any diagonalizable Hamiltonian (with either real or complex-pair eigenvalues), while showing that a non-trivial C operator must always exist in such cases, i.e., it must exist simply because of diagonalizability, even though it has no relation to the charge conjugation operator. Finally, since a Jordan-block Hamiltonian cannot be diagonalized, the B operator must become singular in the Jordan-block limit, with C = B⁻¹C_D B becoming undefined.

Some further constraints on C can be obtained in the event that all eigenvalues are real. In this case all the eigenvalues of the diagonal H_D are real and H_D is Hermitian. We thus obtain BHB⁻¹ = H_D = H_D† = (B⁻¹)†H†B†, which yields B†BHB⁻¹(B†)⁻¹ = H†. Thus, on defining V = B†B, we obtain VHV⁻¹ = H†, and we recognize the V operator that transforms H into H† to be built from the B operator that transforms H into H_D. Now with V being of the form B†B, V is not only Hermitian, it is a positive operator of the type introduced by Mostafazadeh [14], with all of its eigenvalues positive. Since that is the case, we can write V = G², where G is also a Hermitian operator. We thus obtain (GHG⁻¹)† = G⁻¹H†G = G⁻¹(G²HG⁻²)G = GHG⁻¹, with GHG⁻¹ thus being Hermitian. Since one can bring a Hermitian operator to diagonal form by a unitary transformation U, we can set B = UG, and can thus identify C = G⁻¹C_U G, where C_U = U⁻¹C_D U. We can thus express C in terms of the operator G that effects G²HG⁻² = H†. With C = G⁻²C_U + G⁻²[GC_U G − C_U], it is often the case in PT studies that GC_U G − C_U = 0, in which case we can set C = G⁻²C_U = V⁻¹C_U. And since we have seen that in general we should use the V norm, in those cases where GC_U G = C_U we can justify the use of the C-operator norm that is used in PT studies.

We also recall that if H has an antilinear symmetry A, so that AHA⁻¹ = H, then, as first noted by Wigner in his study of time reversal invariance, energies can either be real, with eigenfunctions that obey A|ψ(−t)⟩ = |ψ(t)⟩, or can appear in complex conjugate pairs with conjugate eigenfunctions (|ψ(t)⟩ ∼ exp(−iEt) and A|ψ(−t)⟩ ∼ exp(−iE*t)). Finally, under x → −x the action I = ∫d⁴x L(x) transforms into ∫d⁴x L(−x); however, since ∫d⁴x L(−x) = ∫d⁴x L(x), I is left invariant, and the full CPT transformation on the action thus reduces to I → KIK = ∫d⁴x K L(x) K.

TABLE III: C, P, and T assignments for Φ, Π, x, and p.

D. The Pais-Uhlenbeck Two-Oscillator Theory and CPT Symmetry
Incidence and Prognosis of Ventilator-Associated Pneumonia in Critically Ill Patients with COVID-19: A Multicenter Study

The primary objective of this multicenter, observational, retrospective study was to assess the incidence rate of ventilator-associated pneumonia (VAP) in coronavirus disease 2019 (COVID-19) patients in intensive care units (ICU). The secondary objective was to assess predictors of 30-day case-fatality of VAP. From 15 February to 15 May 2020, 586 COVID-19 patients were admitted to the participating ICU. Of them, 171 developed VAP (29%) and were included in the study. The incidence rate of VAP was 18 events per 1000 ventilator days (95% confidence interval [CI] 16–21). Deep respiratory cultures were available and positive in 77/171 patients (45%). The most frequent organisms were Pseudomonas aeruginosa (27/77, 35%) and Staphylococcus aureus (18/77, 23%). The 30-day case-fatality of VAP was 46% (78/171). In multivariable analysis, septic shock at VAP onset (odds ratio [OR] 3.30, 95% CI 1.43–7.61, p = 0.005) and acute respiratory distress syndrome at VAP onset (OR 13.21, 95% CI 3.05–57.26, p < 0.001) were associated with fatality. In conclusion, VAP is frequent in critically ill COVID-19 patients. The related high fatality is likely the sum of the unfavorable prognostic impacts of the underlying viral and the superimposed bacterial diseases.

The clinical presentation of COVID-19 pneumonia includes fever, leukocytosis, severe hypoxemia, bilateral infiltrates, and multisystemic inflammatory syndrome with possible multiorgan failure (MODS-CoV-2) [5,6]. Some COVID-19 patients admitted to the ICU may require mechanical ventilation for a long time, putting them at risk of developing bacterial superinfections, including ventilator-associated pneumonia (VAP), that may contribute to unfavorably influencing prognosis [7-9].
However, a clear picture of the true incidence rate, spectrum of causative agents, and prognostic factors of VAP in COVID-19 patients, which may help in improving its management, is still unavailable. The primary objective of this observational, multicenter study was to assess the incidence rate of VAP in COVID-19 patients. The secondary objective was to assess predictors of 30-day case-fatality of VAP in COVID-19 patients.

Study Design and Setting

The present multicenter, observational, retrospective study was conducted in 11 intensive care units (ICU) across 9 centers in Italy (see Supplementary Materials Table S1 for details) from 15 February 2020 to 15 May 2020. All patients with COVID-19 who developed VAP during ICU stay were included in the study. Ventilator days of both VAP and non-VAP COVID-19 patients were also collected for calculating the incidence rate of VAP. The primary study endpoint was the incidence rate of VAP. Secondary study endpoints were: (i) 30-day case-fatality of VAP; (ii) 30-day case-fatality of bronchoalveolar lavage fluid (BALF)-positive VAP. The collection of anonymized data for the present study was approved by the Ethics Committee of the coordinating center (Liguria Region Ethics Committee, registry number 163/2020), and specific informed consent was waived due to the retrospective nature of the study. The other participating centers followed the local ethical requirements.

Definitions

The diagnosis of COVID-19 was made in presence of at least one positive real-time polymerase chain reaction (RT-PCR) test for SARS-CoV-2 on respiratory specimen/s (nasopharyngeal swab, sputum, and/or lower respiratory tract specimens).
VAP was defined as new or changing chest X-ray infiltrate/s occurring more than 48 h after initiation of invasive mechanical ventilation, plus both of the following: (i) new onset of fever (body temperature ≥ 38 °C)/hypothermia (body temperature ≤ 35 °C) and/or leukocytosis (total peripheral white blood cell count ≥ 10,000 cells/µL)/leukopenia (total WBC count ≤ 4500 cells/µL)/>15% immature neutrophils; (ii) new onset of suctioned respiratory secretions and/or need for acute ventilator support system changes to enhance oxygenation [10]. BALF-positive VAP was defined as VAP with a positive BALF culture for bacterial respiratory pathogens. Ventilator days were defined as days with an invasive device in the airways, including tracheostomy.

Data Collection

Anonymized demographic and clinical data were collected using REDCap (Research Electronic Data Capture), a secure, web-based application designed to support data capture for research studies [11]. Data were collected for first VAP episodes.
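For illustration only, the VAP screening definition above can be paraphrased as a boolean check; the function below is our own sketch (names and argument layout are hypothetical, and it is not a validated diagnostic tool):

```python
def meets_vap_screening_criteria(hours_ventilated: float,
                                 new_chest_xray_infiltrate: bool,
                                 temp_c: float,
                                 wbc_per_ul: float,
                                 pct_immature_neutrophils: float,
                                 new_suctioned_secretions: bool,
                                 new_ventilator_support_needs: bool) -> bool:
    """Illustrative encoding of the study's VAP definition [10] (sketch only)."""
    # Infiltrate must appear more than 48 h after starting invasive ventilation
    if hours_ventilated <= 48 or not new_chest_xray_infiltrate:
        return False
    # (i) fever/hypothermia and/or leukocytosis/leukopenia/>15% immature neutrophils
    systemic = (temp_c >= 38.0 or temp_c <= 35.0
                or wbc_per_ul >= 10_000 or wbc_per_ul <= 4_500
                or pct_immature_neutrophils > 15.0)
    # (ii) new secretions and/or need for increased ventilator support
    respiratory = new_suctioned_secretions or new_ventilator_support_needs
    return systemic and respiratory

print(meets_vap_screening_criteria(72, True, 38.5, 9_000, 5, True, False))   # True
print(meets_vap_screening_criteria(24, True, 39.0, 15_000, 20, True, True))  # False
```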
The following demographic and clinical data were collected: age in years; gender; body mass index; diabetes mellitus; hypertension; smoking; respiratory disease (defined as asthma or chronic obstructive pulmonary disease); end-stage renal disease (defined as estimated glomerular filtration rate <15 mL/min/1.73 m²); moderate-to-severe liver failure (defined as compensated or decompensated liver cirrhosis); neurologic disease (defined as at least one of the following: epilepsy, Alzheimer disease or other dementias, cerebrovascular diseases including stroke, migraine and other headache disorders, multiple sclerosis, Parkinson's disease, infections of the nervous system, brain tumors, traumatic disorders of the nervous system due to head trauma, and neurological disorders as a result of malnutrition); solid cancer; hematological malignancy; human immunodeficiency virus infection; previous antibiotic therapy (within 30 days before VAP onset); previous anti-inflammatory treatments (within 30 days before VAP onset); days of invasive ventilation before VAP; sequential organ failure assessment (SOFA) score [12]; tracheostomy before VAP. The following variables were collected as they were at VAP onset: presence of septic shock (defined according to sepsis-3 criteria [13]); presence of at least mild acute respiratory distress syndrome (ARDS) [14]; presence of acute kidney injury according to RIFLE criteria [15]; need for hemodialytic therapy; need for extracorporeal membrane oxygenation (ECMO); presence of thrombotic or hemorrhagic disorders; bronchoscopy with BALF collection performed at VAP onset (yes/no) and related BALF culture results; concomitant bloodstream infection (BSI).
The following variables were also collected regarding the management of VAP: administration of IgM-enriched intravenous immunoglobulins; use of cytokine blood filter/s; timing of antibiotic therapy; appropriateness of antibiotic therapy (measured in the subgroup of patients with BALF-positive VAP and defined as therapy with at least one agent displaying in vitro activity against the given BALF isolate/s (and against blood culture isolate/s in patients with concomitant BSI)). Isolates were identified by automated biochemical-based phenotypic identification systems or MALDI-TOF, according to the standard procedures of the different local microbiology laboratories. Susceptibility test results were obtained using automated dilution methods and interpreted according to European Committee on Antimicrobial Susceptibility Testing (EUCAST) breakpoint tables (version 10.0, 2020; http://www.eucast.org).

Sample Size Calculation

The number of participating centers was selected in order to guarantee, based on local estimates, a minimum sample size of 4000 ventilator days. This was considered an acceptable compromise between feasibility and generalizability of study results with regard to the primary descriptive endpoint (incidence rate of VAP in COVID-19 patients). Indeed, by assuming normal distribution of the measure of interest, a sample size of 4000 ventilator days would have guaranteed a maximum margin of error (95% confidence interval [CI]) of ±5 events for an expected incidence rate of ≤20 VAP episodes per 1000 ventilator days.

Statistical Analysis

The primary study aim was to assess the incidence rate of VAP in COVID-19 patients, which was calculated as the number of events per 1000 ventilator days. Exact confidence intervals of the incidence rate estimate were calculated by means of the exact mid-p test [16].
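The stated ±5 margin of error can be reproduced with a normal approximation to the Poisson event count (a back-of-the-envelope check of ours, not the authors' calculation):

```python
import math

ventilator_days = 4000
expected_rate = 20 / 1000                            # design assumption: <=20 per 1000 days
expected_events = expected_rate * ventilator_days    # 80 expected events

# Wald-type 95% half-width for a Poisson count (SE = sqrt(events)),
# re-expressed as events per 1000 ventilator days
half_width = 1.96 * math.sqrt(expected_events) / ventilator_days * 1000
print(round(half_width, 2))                          # ~4.38, within the stated +/-5
```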
For the secondary study analysis (assessment of predictors of 30-day case fatality), predefined demographic and clinical variables were first tested for their association with the outcome in univariable logistic regression models. Then, factors potentially associated with 30-day case-fatality in univariable analysis (p < 0.10) were included in a multivariable logistic regression model (model A). Variables related to antibiotic therapy, which we deemed clinically relevant (as they are modifiable interventions), were included in model A independent of their p-value in univariable comparisons. No stepwise procedure was adopted. All variables included in model A were also tested for their association with 30-day case fatality in an additional multivariable, generalized, linear mixed model (model B, with center as a random effect and logit as the link function). A pre-planned subgroup analysis of factors associated with 30-day case fatality was conducted in patients with BALF-positive VAP. A descriptive comparison of 30-day case fatality in patients who did not undergo bronchoscopy and patients with positive BALF culture was performed with the Kaplan-Meier method and the log-rank test, with the day of VAP onset as the time of origin. The analyses were performed using R Statistical Software version 3.5.2 (R Foundation for Statistical Computing, Vienna, Austria). Results of the mixed model were obtained by using the glmer function in the lme4 package for R.

Results

During the study period, 586 patients with severe COVID-19 required invasive mechanical ventilation and were admitted to the participating ICU, for a total of 9416 ventilator days. Overall, 171/586 (29%) patients were diagnosed with VAP. The median time elapsed from ICU admission to VAP development was 10 days (interquartile range 6-17). The incidence rate of VAP was 18 events per 1000 ventilator days (95% CI 16-21).
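The headline rate follows directly from the reported counts of 171 first VAP episodes over 9416 ventilator days; the snippet below reproduces it, substituting a simple normal-approximation CI for the exact mid-p interval used in the paper:

```python
import math

events, ventilator_days = 171, 9416   # first VAP episodes / total ventilator days
rate = events / ventilator_days * 1000
print(round(rate, 1))                 # ~18.2 events per 1000 ventilator days

# Rough 95% CI via the normal approximation to the Poisson count
# (the paper uses the exact mid-p method, which gives 16-21)
se = math.sqrt(events)
lo = (events - 1.96 * se) / ventilator_days * 1000
hi = (events + 1.96 * se) / ventilator_days * 1000
print(round(lo, 1), round(hi, 1))     # roughly 15.4 and 20.9
```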
Five additional patients were included in the electronic data capture system, but they did not fulfill criteria for inclusion (Supplementary Materials Figure S1). The demographic and clinical characteristics of COVID-19 patients with VAP are shown in Table 1. Their median age was 64 years (interquartile range (IQR) 57-71) and 80% were males (137/171). The most frequent comorbid conditions were hypertension (109/171; 64%) and diabetes mellitus (39/171; 23%). Before developing VAP, most patients received antibiotic treatment (162/171; 95%), mostly cephalosporins (88/171; 52%) and macrolides (78/171; 46%). As many as 159/171 (93%) patients were previously treated with chloroquine or hydroxychloroquine, whereas 108/171 (63%) and 109/171 (64%) received steroids and anti-interleukin 6 (IL-6) monoclonal antibodies, respectively. BALF specimens were obtained in 79/171 cases (46%), with culture being positive in 77/79 of them (97%). The most frequently isolated organisms were Pseudomonas aeruginosa (27/77; 35%) and Staphylococcus aureus (18/77; 23%) (Supplementary Materials Table S2).

Tables 2 and 3 show the results of univariable and multivariable analyses, respectively, of factors associated with 30-day fatality. In univariable analysis, higher SOFA score, septic shock at VAP onset, ARDS at VAP onset, AKI at VAP onset, hemodialytic therapy at VAP onset, and ECMO at VAP onset were unfavorably associated with the outcome, whereas previous treatment with anti-IL-6 receptor monoclonal antibodies and tracheostomy before VAP were associated with reduced 30-day case fatality. In multivariable analysis (model A), only septic shock at VAP onset (odds ratio (OR) 3.30, 95% CI 1.43-7.61, p = 0.005) and ARDS at VAP onset (OR 13.21, 95% CI 3.05-57.26, p < 0.001) retained an independent association with the outcome. As shown in Table 3, the results of the additional multivariable model with center as a random effect (model B) were in line with those of model A. The 30-day case-fatality of BALF culture-positive VAP was 42% (32/77).
Results of univariable and multivariable analyses of factors associated with 30-day case-fatality in this subgroup are reported in detail in Supplementary Materials Tables S4 and S5. In the multivariable model, ARDS at VAP onset showed an independent association with 30-day case-fatality (Table S5). Similar Kaplan-Meier curves were observed for 30-day case fatality in patients who did not undergo bronchoscopy vs. patients with positive BALF cultures (Figure 1).

Discussion

In our multicenter cohort, the incidence rate of VAP in critically ill patients with COVID-19 was as high as 18 events per 1000 ventilator days in ICU, with the 30-day fatality of VAP being as high as 46%. The most frequent causative organism was P. aeruginosa, followed by S. aureus. The incidence rate of VAP we report in COVID-19 critically ill patients is among the highest when compared to the 1 to 19 episodes per 1000 ventilator days reported in non-COVID-19 patients [17-21]. There are different reasons that may explain the high incidence rate we registered. On the one hand, a truly increased risk of VAP in COVID-19 patients (in line with the high incidence rate of 28 episodes per 1000 ventilator days registered in a recent UK study and with the high reported prevalence of 58% in a large cohort of 4244 critically ill patients with COVID-19 [22,23]) might be explained by: (i) a potential increased predisposition to bacterial superinfection on top of the lung damage caused by COVID-19; (ii) the virus-related immunosuppressive effect with deep lymphopenia; (iii) the potential concomitant anti-inflammatory or immunosuppressive effect of steroids and biologic agents (e.g., anti-IL-6 receptor monoclonal antibodies) [24,25].
On the other hand, supporting instead a possible artefactual increase in the registered VAP incidence rate, we may have included some patients diagnosed with VAP who in reality did not have VAP, since we used a broad definition of VAP that is generally used for enrollment in clinical trials rather than for epidemiological purposes. This was done because of the non-negligible frequency of missing microbiological data, which would have rendered other, more specific definitions of VAP unreliable. Indeed, achieving an etiological diagnosis of VAP in COVID-19 patients remains difficult for at least two major reasons: (i) there could be a reduced propensity to collect deep respiratory specimens (BALF), owing to the risks either of worsening hypoxemia or of SARS-CoV-2 transmission to healthcare workers; (ii) information from less invasive specimens (e.g., from endotracheal aspirate) may not allow one to easily differentiate between airway colonization and pulmonary bacterial superinfection in COVID-19 patients, even when using traditional quantitative thresholds [5]. In addition, both presentation and worsening of COVID-19 pneumonia share many features with VAP, such as fever, hypoxemia, consolidative infiltrates, and alterations in inflammatory markers [5]. For all these reasons, there could be a risk of VAP overdiagnosis in critically ill COVID-19 patients. However, it should also be noted that our numerator for the calculation of the incidence rate was made up only of first VAP episodes. Therefore, since some patients may have experienced more than one VAP episode, we also cannot exclude an underestimation of the true incidence rate of VAP in critically ill COVID-19 patients. With regard to organisms isolated from deep respiratory specimens in patients with VAP in our series, the higher frequency of Gram-negative bacteria we registered is in line with recent data from other countries [22,26,27].
In this study, we also assessed predictors of 30-day case fatality in COVID-19 patients with VAP. The associations of both ARDS and septic shock with fatality likely testify to the well-known unfavorable prognostic effect of severe acute conditions at VAP presentation [28], which is also confirmed in our additional mixed model accounting for variability across centers. Furthermore, the unfavorable prognostic effect of ARDS is in line with the results observed in the subgroup of patients with BALF-positive VAP. The prognosis may be influenced by two concomitant diseases (COVID-19 and VAP superinfection). Consequently, it cannot be excluded that the course of COVID-19 may exert a modifying effect on the prognostic impact of the potential predictors of unfavorable VAP prognosis explored in this study. In this regard, we did not find an association between early antibiotic therapy and reduced fatality, nor between early appropriate antibiotic therapy and reduced fatality in the subgroup of patients with BALF-positive VAP, in contrast to what has been observed in classical ICU populations [28]. Although this result may merely depend on the low power of our analyses (especially in the subgroup of patients with BALF-positive VAP, in which we were able to assess early appropriate therapy, considering that the direction of the prognostic effect, although not statistically significant, was towards reduced fatality), it is also plausible that the interfering effect of the viral disease may play an important role as a confounder. Notably, there could also be relevant background noise from the already well-known difficulties in clearly deciphering the attributable mortality of VAP in ICU populations in general [20]. The present study has some important limitations. First, it was observational and retrospective, which inherently implies some risk of selection and information biases.
Nonetheless, at least for the latter, we tried to minimize them by employing a real-time review of inserted data by a dedicated central investigator (DB), with rapid generation of pertinent queries to be resolved by local investigators. Second, we were unable to retrospectively collect precise quantitative (in terms of colony forming units [CFU]/mL) rather than qualitative information from positive BALF cultures, as well as quantitative data from endotracheal aspirate cultures (for this reason we ultimately decided to include deeper [BALF] and not endotracheal cultures for subgroup analysis in the attempt to obtain a distribution of bacteria possibly closer to infection than colonization). We acknowledge this was an arbitrary decision in order to identify what we thought was the best subgroup for a more plausible estimation of etiological diagnosis considering the limited available data collected during routine practice in the first months of the COVID-19 pandemic. In this regard, the lack of many microbiological data certainly remains a major limitation of the present study, since all of this lacking microbiological information could have been exploited either for a more precise selection of the subgroup of patients with microbiological diagnosis of VAP or for exploring the possible prognostic effects of different CFU/mL counts. Third, in line with the used VAP definition [10], and besides the risk of incidence overestimation as described in the previous paragraphs, there is also the major limitation that we only collected data on bacteria, and not on other organisms that may cause deep respiratory infections in critically ill COVID-19 patients (e.g., COVID-19-associated pulmonary aspergillosis (CAPA) [29,30]). 
Although CAPA has specific, proposed diagnostic criteria that would have required dedicated and systematic data collection from all patients for a reliable incidence picture [30], it is likely that some patients in the present study also had CAPA; thus, we cannot exclude an independent prognostic effect of CAPA as unmeasured confounding in the analysis of prognostic predictors. Fourth, possibly too many variables were included in our multivariable models, although we ultimately preferred not to remove potential explanatory variables on the basis of stepwise selection in view of the purely exploratory nature of our analysis [31,32]. Fifth, since data on critically ill patients without VAP were not collected (the aim of this study was not to assess predictors of VAP), a descriptive comparison of crude fatality in COVID-19 patients with and without VAP was precluded. Finally, the lack of information regarding both patient-level and center-level VAP prevention systems does not allow us to infer their contribution to the risk of VAP (and, in turn, to VAP incidence) in the present study.

Conclusions

VAP may be frequent in critically ill COVID-19 patients, but its clinical diagnosis remains difficult. The high 30-day case fatality of VAP we observed likely represents the sum of the prognostic effects of the underlying viral and the superimposed bacterial diseases. Further investigation is needed to precisely characterize the relative contribution of these effects and further improve our therapeutic approach to both COVID-19 and superimposed VAP.
Supplementary Materials: The following are available online at https://www.mdpi.com/2077-0383/10/4/555/s1, Figure S1: flow-chart of the patient inclusion process; Table S1: list of participating centers; Table S2: isolates from BALF cultures; Table S3: descriptive comparison of the demographic and clinical characteristics of patients with and without BALF culture; Table S4: univariable analysis of factors associated with 30-day case fatality in critically ill COVID-19 patients with BALF-positive VAP; Table S5: multivariable analysis of independent predictors of 30-day case fatality in critically ill COVID-19 patients with BALF-positive VAP.

Institutional Review Board Statement: The collection of anonymized data for the present study was approved by the Ethics Committee of the coordinating center (Liguria Region Ethics Committee, registry number 163/2020). The other participating centers followed the local ethical requirements. The study was conducted according to the guidelines of the Declaration of Helsinki.

Informed Consent Statement: Specific informed consent was waived due to the retrospective nature of the study.

Data Availability Statement: The data presented in this study are available on reasonable request from the corresponding author.
Nano toolbox in immune modulation and nanovaccines of the roadblocks in vaccine development

Despite the great success of vaccines over two centuries, the conventional strategy is based on attenuated/altered microorganisms. However, this is not effective for all microbes, often fails to elicit a protective immune response, and sometimes poses unexpected safety risks. The expanding nano toolbox may overcome some of the roadblocks in vaccine development, given the plethora of unique nanoparticle (NP)-based platforms that can successfully induce specific immune responses, leading to exciting and novel solutions. Nanovaccines necessitate a thorough understanding of the immunostimulatory effect of these nanotools. We present a comprehensive description of strategies in which nanotools have been used to elicit an immune response and provide a perspective on how nanotechnology can lead to future personalized nanovaccines.

Nanoscale improvements to traditional vaccines

The immune system is an interconnected mesh of cells, tissues, and organs that protects the body against fatal diseases. Immune homeostasis is disrupted by either an underperforming or a hyperactive immune response; the former can fail to protect against a simple infection [1], whereas the latter can result in destruction of healthy tissue [2,3]. The immune system consists of innate (non-specific) and adaptive (specific) immunity. Adaptive immunity is characterized by its ability to precisely identify a pathogenic substance and to develop a long-term memory of it. Vaccines train the adaptive immune system either to generate immunological memory before infection (prophylactic) or to recognize ongoing disease (therapeutic) [4]. Although the development of prophylactic vaccines against fatal infections such as smallpox, anthrax, and plague has made a very significant contribution to healthcare, more recent advances in therapeutic vaccines provide promise for treating incurable conditions such as cancer, HIV infection, and
type I diabetes [5]. Conventional vaccines based on attenuated or inactivated pathogens suffer from the potential risk of introducing live pathogens and the inability to elicit a satisfactory level of immunity, thus stimulating the development of new vaccines [6]. With progress in nanotechnology, NP-based vaccines (nanovaccines) have been formulated that not only overcome the drawbacks of traditional vaccines but also afford advanced-level modulation that was not previously possible [7-9]. Superior efficacy can be achieved by nanovaccines because of (i) extended antigen stability, (ii) enhanced immunogenicity, (iii) targeted delivery, and (iv) sustained release (Box 1). NPs can provide strong protection to both the antigens and adjuvants against enzymatic and proteolytic degradation [10]. NPs can evoke both humoral and cell-mediated immune responses because of their unique physicochemical characteristics (Figure 1). They also aid in targeted delivery and can potentially load multiple antigenic components into a single platform [11-16]. Lastly, fine-tuning the physical attributes such as size, shape, and surface charge of the NPs can lead to substantial enhancement in the duration of antigen presentation and dendritic cell (DC)-mediated antigen uptake, leading to mature DCs and promoting cell-mediated immunity [17-19].

Highlights

- Nanoscale materials can extend antigen stability, enhance immunogenicity, and improve antigen presentation time in the targeted cell or tissue.
- The reasons behind the current success of advanced nanoscale vaccine technologies, and how they differ from traditional and conventional vaccines in terms of immune modulation, are discussed.
- The capacity and extent of eliciting humoral and cell-mediated immune responses by nanovaccines are reviewed.
- We present a list of all currently FDA-approved nanovaccines and those in clinical trials.
An in-depth and rational understanding will be necessary for the development of nanotools for use in future vaccines. We overview the lessons learnt from this potentially transformative nanovaccine development and how nanotools have been used to elicit an immune response, with a focus on the most recent nanovaccines.

Box 1. Key features of nanovaccines

- Extended antigen stability: because of the protective nature of the NPs, the antigens are protected from degradation by cellular components and enzymes.
- Enhanced immunogenicity: the NPs themselves can be immunogenic, leading to an enhanced immune response against the target antigen.
- Targeted delivery: nanovaccines can be designed to deliver antigen to targeted sites such as specific cell types or tissues, and thus reduce the likelihood of harmful side effects.
- Protection of antigens and adjuvants against enzymatic and proteolytic degradation: key immunogenic components such as peptides, oligonucleotides, and adjuvants are protected from degradation by the nanovaccine formulation.
- Evoke both humoral and cell-mediated immune responses: the two major branches of immunity (the antibody and cellular responses) can both be enhanced by nanovaccines.
- Present multiple components in a single platform: multiple antigens can be included in the same NP, leading to a nanovaccine formulation that can potentially protect against a wider range of antigens or infections.
- Enhanced duration of antigen presentation and DC processing: professional APCs require time to recognize and process antigen before presenting it to elicit a downstream immune response. Nanovaccines can persist for a longer time without alteration or degradation and thereby provide ample opportunity for APCs to boost the immune response.
Trends in Biotechnology OPEN ACCESS

[17][18][19]. We review how different nanotools have been utilized successfully for improving immunogenicity and developing novel vaccines. The specific role of NPs in vaccine improvement with respect to their size, loading efficiency, nano-enhanced immunogenicity, antigen presentation, and retention in lymph nodes (LNs) is discussed. Finally, nanovaccines that are approved for clinical use or under clinical investigation are summarized.

Types of nanomaterials

NPs are ideal vehicles to deliver antigens for vaccination because they are comparable in size to viruses and have the ability to load and release active biomolecules [20]. Many types of NPs have been utilized to develop nanovaccines, including metallic NPs, carbon nanotubes, liposomes, micelles, dendrimers, and biomacromolecules. Noble metal NPs, such as colloidal gold, are bio-inert and nontoxic, and their synthesis is well established [21]. Gold NPs (AuNPs) have been utilized for vaccines against influenza [22], malaria [23], and cancer [24]. However, their long-term accumulation remains a safety concern [25]. Other inorganic NPs which have been utilized in vaccine formulations include carbon nanotubes [26], silica NPs [27], and magnetic NPs [28]. Polymeric materials have been widely explored as nanovaccines because of their desirable biodegradability and biocompatibility. Polylactide-co-glycolic acid (PLGA) copolymer [29,30], chitosan [31], and other types of in-house synthesized polymers [32][33][34] have been shown to successfully deliver antigens. Micelles [35][36][37], liposomes [38,39], and dendrimers [40,41] have been investigated as nanovaccines based on their ability to load and deliver antigens. Although proteins usually serve as the antigens in subunit vaccines, engineered proteins can self-assemble into antigen-containing NPs and act as nanovaccines [42,43].
Nanovaccines exploit NP drug delivery systems in general, and biocompatibility and safety are major metrics. Although the goal of nanovaccines is to elicit a specific immune response, it is important that their immunogenicity is antigen-specific rather than NP-specific [44]. By contrast, adjuvanticity, the ability to augment the immune response, is desirable for NPs in nanovaccine formulations. It has been demonstrated that NPs made from a wide range of materials can promote an immune response, including those composed of materials that are widely considered to be biocompatible [45]. There is growing evidence that metallic NPs (e.g., gold, iron, and nickel) display immune-modulatory properties by promoting cell recruitment, antigen-presenting cell (APC) activation, and cytokine induction, and can facilitate a humoral response. Niikura and coworkers showed that spherical AuNPs of 40 nm in diameter, surface-modified with West Nile envelope protein (WNE), produced the highest titers of WNE-specific antibodies and also induced inflammatory cytokine production, including tumor necrosis factor-α (TNF-α), interleukin (IL)-6, IL-12, and granulocyte macrophage colony-stimulating factor (GM-CSF) [46]. Citrate-stabilized AuNPs ranging from 2 to 50 nm in diameter conjugated with a synthetic peptide for a foot and mouth disease virus (FMDV) protein showed higher antibody titers for NPs in the 8-17 nm size range, and other spherical AuNPs (<50 nm) have been reported as antigen carriers for immunization against several other microorganisms [22,[47][48][49][50][51][52][53][54][55][56].

Size-dependent immunogenicity

Antigens delivered by NPs are known to elicit stronger antigenic responses compared to their free counterparts because of the combination of enhanced stability, sustained release, and adjuvant effects [57][58][59]. NP size is a crucial factor that can strongly influence the efficacy and ultimately affects the magnitude and type of immune response (B cell vs.
T cell) [60]. Particles with a size of >1 μm (i.e., comparable in size to a bacterial pathogen) are internalized via phagocytosis, whereas smaller particles <1 μm in size are internalized by micropinocytosis, receptor-mediated clathrin-coated endocytosis, and clathrin-independent and caveolin-independent endocytosis [61][62][63]. Thus, particle size is a determining factor that dictates NP entry, the intracellular fate of antigen processing, and T cell activation. It was recently revealed that small NPs have a higher uptake efficiency by DCs [18,60,64] and accumulate in the LNs with greater efficacy than large NPs, thus inducing an enhanced immune response [65]. However, a universal correlation between size and immune response for solid particle-based NPs has not been established [66,67], and NPs composed of different core materials showed various optimum sizes for the induction of an immune response [68]. In general, smaller particles are considered to be more effective for targeted drug delivery because of their improved ability to permeate biological barriers [69,70]. Conversely, for a nanovaccine formulation, these criteria do not hold true because the purpose of vaccination is to elicit a designated immune response by allowing specific recognition by the immune system. To date, agreement on the optimum nanovaccine size range that generates a stronger immunological response has not been achieved [64].
For example, 1000 nm bovine serum albumin (BSA)-loaded PLGA particles evoked a more robust serum IgG response than particles sized 200-500 nm [66]. By contrast, some researchers report that smaller NPs are more efficient and potent immune system stimulators. For instance, an NP-based nicotine vaccine consisting of PLGA and a lipid shell produced significantly higher anti-nicotine antibody (IgG1 and IgG2) titers with a 100 nm than with a 500 nm nanovaccine [71]. One possible explanation is a difference in the mechanism of immunity that is targeted. Large-sized nanomaterials boost humoral immune responses, whereas smaller NPs promote cell-mediated immune protection [72][73][74]. Larger NPs have a tendency to preferentially generate type 2 T helper (Th2) cell responses [7,75,76]. This is mostly because of differential uptake: for sizes >500 nm the internalization and processing of antigen leads to a more efficient presentation by MHC II, thereby generating a stronger humoral response [7,75]. For example, a study showed that smaller HIV TAT protein-modified cationic polymeric NPs promote a higher TAT-specific cellular immune response and a weaker anti-TAT antibody response than larger particles (~2 μm) [77]. In another study, using poly-lactic acid (PLA)-entrapped hepatitis B virus surface antigen (HBsAg), a single immunization with smaller particles induced a lower humoral response than did larger particles [74]. Immunization with smaller particles encouraged Th1 immune responses, whereas the larger particles favored Th2 responses [74]. This is because the smaller particles were efficiently engulfed by APCs such as macrophages, which leads to a cellular immune response, whereas larger particles cannot be taken up by macrophages but can adhere to the macrophage surface and release trapped antigens.
Another study showed that nanobeads of 40-49 nm could evoke the secretion of Th1-biased cytokines, whereas nanobeads of 93-101 nm elicited Th2-biased cytokine secretion following immunization in mice [78]. These observations showed that precise selection of NP size for vaccination can influence the type 1/type 2 cytokine balance, which can be crucial for protection against respiratory syncytial virus [78]. Similarly, polystyrene beads of 40-50 nm effectively induced cellular responses by activating CD8+ T cells and interferon (IFN)-γ production [79]. This was tested in an in vivo animal model where polystyrene beads of 48 nm covalently bound to antigen induced an enhanced antigen-specific Th1-biased response and IFN-γ production [80]. Other studies show that NPs of larger size can also induce a robust Th1 response with predominant IFN-γ production by priming CD4+ T cells [81,82]. Researchers have shown that large bile salt-stabilized vesicles (bilosomes) with influenza A antigens elicited immune responses that were biased toward Th1 as compared to small particles [83]. Given such variability, it is difficult to predict the optimum NP size range to elicit a Th1 or a mixed Th1/Th2 immune response. Finally, the kinetics of NP migration through the lymphatic vessels is highly size-dependent [65,84,85]. Particles <5 nm in size can freely enter the bloodstream, whereas particles of >100 nm remain at the injection site and fail to move into the lymphatic system. LN targeting is discussed in detail in a later section. Table 1 summarizes the size-dependency of nanosystem immunological responses.
NP loading of antigens

Antigens of interest can be either encapsulated within or attached to the surface of NPs. Antigen encapsulation can be achieved with polymeric, micellar, and liposomal NPs [86], and surface functionalization can be performed with polymeric, inorganic, or metallic NPs [67,79,87,88]. In general, encapsulation of antigens into NP cores gives protection against enzymatic degradation, whereas surface immobilization mimics the presentation of antigens by pathogens [89]. More recent studies have focused on using biomimetic strategies to load antigens, such as by using lipid membranes. Liu and colleagues reported the fabrication of self-assembled nanovaccines containing phospholipids which were able to deliver strong initial antigen stimulation followed by controlled long-term antigen release, leading to effective cross-presentation and a CD8+ T cell response [90]. When choosing the loading method, multiple factors including loading capacity, release efficiency, preservation of antigen function and structure, epitope orientation, and the overall influence on the colloidal stability of the NPs [91] must be carefully considered.
To date, there are limited systematic studies on the effect of loading methods on nanovaccine efficiency. One reason is that the loading method is often specific to the NP system of choice, such as its surface functional groups, geometric structure, and fabrication technique (Figure 2). It was found that chemically conjugated protein antigen induced a stronger immune response than when the same antigen was simply physically mixed with the NPs, but this was possibly because of different loading capacities [79]. A study comparing PLGA NPs with encapsulated versus surface-adsorbed ovalbumin (OVA) demonstrated that faster in vitro internalization was achieved by the encapsulation architecture; however, the difference might be caused by a change in surface charge [92]. In addition, it was revealed that PLGA NPs with encapsulated OVA preferentially activated the MHC I pathway as compared to PLGA NPs with surface-adsorbed OVA, which resulted in enhanced MHC II presentation [92]. Several other reports imply that liposomes with covalently conjugated antigens generate stronger antibody responses than other types of loading strategies [57,[93][94][95][96][97][98][99]. For DNA vaccines, there have been reports that plasmid DNA vaccine adsorbed onto PLGA NPs was much more efficient than the same DNA entrapped inside PLGA [100]. In DNA vaccines, the nanocarriers serve as the non-viral vector for gene delivery (as reviewed extensively elsewhere [101,102]). To sum up, the surface-loading method appears to have some advantages over the entrapment method, but more systematic studies with various nanosystems should be conducted to provide a clearer picture.
Nano-enhanced immunogenicity and antigen delivery

Antigens delivered by NPs are internalized through several endocytic pathways. Apart from the size effect discussed above, surface charge and surface functionalization with targeting molecules can facilitate delivery to APCs for antigen presentation. Cationic NPs are internalized by APCs more rapidly and usually promote intracellular trafficking through endosomal escape [103]. Cationic dendrimer NPs with adsorbed antigens demonstrate enhanced delivery of antigens to DCs, and simultaneously activate DCs, including the secretion of cytokines such as IL-1β and IL-12 [104]. DCs play a crucial role in the orchestration of the innate and adaptive immune system through antigen uptake, processing, and presentation of epitopes to naive T cells (Figure 3, right). Because most vaccines used in current practice are exogenous to the cells, DCs play a vital role in vaccine-activated cellular immune responses against viral and cancerous diseases. Hence, numerous strategies have been developed for nanovaccine targeting of DCs [70].
DCs express cell-surface mannose receptors which aid antigen internalization through mannosylation, and this enhances the activation of CD4+ and CD8+ T cell responses [105]. The same strategy has been employed successfully using a dextran-based nanovaccine with lipopolysaccharide (LPS). Nanoformulations showed robust antigen-specific CD4+ and CD8+ T cell responses, and generated a stronger CD8+ T cell response than the soluble antigen and LPS mixture [106]. By targeting the langerins (CD207), which are exclusively expressed on Langerhans cells, liposomes conjugated with langerin ligands exhibited effective targeting of Langerhans cells in human skin [107]. In addition to the usual MHC II presentation and CD4+ T helper cell activation pathway, DCs can also process antigens and present them via the MHC I pathway, leading to activation of a CD8+ T cell response in a process known as 'cross-presentation' [108,109]. This cross-presentation occurs via the cytosolic pathway, in which the exogenous antigens are processed in the cytosol by proteasomes [109]. Nanovaccines can modulate intracellular antigen delivery and promote cross-presentation. Many types of NPs, including inorganic, polymeric, and lipid NPs, were shown to induce effective CD8+ T cell expansion by antigen cross-presentation [110,111]. A specially designed polymeric microneedle with encapsulated antigens was able to target Langerhans cells with efficient cross-priming and Th1 immune responses [112]. Cross-presentation was shown to be dependent on the particle-antigen linkage: disulphide bonding between NPs and antigens results in antigen release into the endosomal compartment, leading to subsequent CD8+ T cell expansion, whereas non-degradable linkers do not [113].
Beyond the cell-mediated immune response, various nanovaccines can elicit humoral responses. B cells, which oversee antibody production, require prolonged and constant activation to generate humoral responses. As mentioned previously, the strategy for loading the antigens onto the NPs may have a profound influence on the resulting humoral responses. For example, calcium phosphate NPs with the antigen covalently attached to the surface exhibit a substantial increase in B cell activation in vitro in comparison to the soluble antigen. Similarly, antigen displayed on the surface of multilamellar vesicles provided an enhanced humoral response compared to the encapsulated antigen. However, studies are few and further exploration is warranted. Elevated levels of antigen-specific antibodies can also be achieved by multivalent presentation of antigens, and NP systems can serve as the platform for this purpose. Ueda and colleagues have engineered self-assembling NPs to tailor the optimal geometry for multivalent presentation of viral glycoproteins [114].
Strengthening lymph node retention by nanovaccines

The generation of a cell-mediated immune response relies on efficient trafficking or drainage of antigenic components to LNs for further processing and presentation to T and B cells. LNs thus represent a crucial target site for the delivery of vaccines and other immunotherapeutic agents because direct delivery of antigenic components into APCs residing in LNs can induce more potent and robust immune stimulation than can antigen uptake by migrating APCs. LNs also contain a substantial fraction of resident DCs which are phenotypically immature and well equipped for simultaneously internalizing antigens and particles [115]. By targeting LN APCs or DCs instead of those in peripheral sites, immune tolerance as a result of antigen exposure on the DC surface before reaching the LN can be avoided [116]. In addition, DC-targeting ligands are not a prerequisite because the in situ concentration of LN-resident DCs is extremely high [117,118]. Therefore, targeting APCs, including DCs, in LNs with formulations that can be readily taken up into lymphatic vessels and retained in draining LNs is a promising strategy.
As mentioned in the previous section, particle size plays an important role in LN targeting and retention. In one study, a synthetic vaccine NP (SVNP) was developed to improve the targeting and retention efficacy of cancer vaccines [119]. The positively charged SVNPs of varying size, upon conjugation with a negatively charged tumor antigen, showed rapid migration into LNs, leading to secretion of higher levels of proinflammatory cytokines and type I IFN (IFN-α, IFN-β) [119]. In another study, biodegradable NPs of 20, 45, and 100 nm were used as delivery vehicles to DCs in LNs [84]. It was observed that poly(ethylene glycol) (PEG)-stabilized poly(propylene sulfide) (PPS) NPs, which can carry hydrophobic drugs and degrade in an oxidative environment, were readily taken up by lymphatic vessels following interstitial administration at 20 nm and 45 nm, and the 20 nm particles showed enhanced retention in LNs [84]. In another instance, large particles (500-2000 nm) were shown to be mostly internalized by DCs from the site of injection, whereas particles of 20-200 nm and virus-like particles (30 nm) were found in LN-resident DCs and macrophages, indicating free drainage and retention of these particles in LNs [120]. It was shown that biodegradable 20 nm PLGA-b-PEG NPs rapidly drained across proximal and distal LNs with a higher retention time than 40 nm particles, whereas the drainage of 100 nm NPs was negligible [121]. In another study, where 25 nm and 100 nm Pluronic-stabilized PPS NPs were intradermally injected, there was ten-fold greater interstitial flow into lymphatic capillaries and associated draining LNs for 25 nm particles than for 100 nm particles [65]. Size-dependent LN targeting was also exhibited by 30 nm and 90 nm AuNP antigen carriers, and 30 nm particles displayed higher LN retention and accumulation than 90 nm particles [122]. In summary, small particle size is required for efficient penetration of lymphatic vessels and prolonged LN retention. NPs with a size in the
20-200 nm range, which coincides with the sizes of viral particles, can exploit interstitial flow for lymphatic delivery, and in this range smaller NPs tend to accumulate more in the LNs.

Nanomaterial-mediated inflammation and cytokine release

Nanomaterials are known to boost the immune system and have been used to develop vaccines when conjugated with antigens. We review here cases of inflammation reported in the literature that resulted from inflammatory cytokine release following NP administration. The Th1 or Th2 responses thus elicited caused either an efficient immune response or damage to the host tissue. The use of a lipid-based particle (ISCOMATRIX) as the adjuvant for a chimeric peptide vaccine containing multiple epitopes of human T cell lymphotropic virus (HTLV) type I led to enhanced production of mucosal IgA and IgG2a antibody titers as well as increased IFN-γ and IL-10 production [122]. Carbon NPs containing bovine serum albumin exhibited strong stimulation of IgA antibodies in salivary, intestinal, and vaginal mucosa following oral immunization. They were also capable of inducing Th1 and Th2 responses [123]. Kim and coworkers synthesized synthetic vaccine NPs with a combination of OVA and a Toll-like receptor 3 (TLR3) agonist. These enhanced antigen uptake by APCs and the secretion of inflammatory cytokines including type I interferon, TNF-α, and IL-6 [119]. Mycobacterium tuberculosis (MtB) lipids attached to chitosan NPs induce both cell-mediated and humoral immunity, leading to enhanced secretion of IgG, IgM, and Th1/Th2 cytokines [123]. Amantadine-coated silver NPs triggered HIV-specific cytotoxic T lymphocyte (CTL) production and eightfold stronger TNF-α production in vivo [124].
Multiwalled carbon nanotubes and silica NPs can both activate the NOD-like receptor (NLR) family pyrin domain-containing 3 (NLRP3) inflammasome, leading to uncontrolled pathological inflammation. Superparamagnetic iron oxide NPs (SPIONs) showed enhanced activation of inflammatory genes in response to LPS [125]. The PLGA-OVA + A20 nanovaccine maintains immune homeostasis by suppressing Th2 inflammation and promoting the regulatory T cell (Treg) response and IL-10 production in lung airway tissue of an allergic asthma murine model [126]. Synergistic stimulation of IL-1β production by some NPs and bacteria induces strong pathological inflammation, leading to leukocyte influx, swelling, fever, vasodilation, and inflammation-driven tissue damage [127]. Elevated release of proinflammatory cytokines such as IL-6, TNF-α, and IL-12 from APCs was observed after the uptake of DNA-inorganic hybrid nanovaccines (hNVs) [128]. The adjuvants used with NP vaccines, such as alum, oil-in-water emulsions (incomplete Freund's adjuvant), and monophosphoryl lipid A (MPLA), are also sometimes associated with inflammation. Potential cytotoxicity of CTLs was observed in an overtly activated proinflammatory cytokine (IFN-γ, TNF-α) response following albumin/albiCpG nanocomplex inoculation into mice. Encapsulated OVA polyanhydride NPs boosted the formation of antigen-specific CD8+ T cell memory after vaccination [131]. Subcutaneous delivery of polyanhydride NPs induced only a mild inflammatory response with no tissue damage [132]. Hyperactivation of the inflammatory response impaired the trafficking, maturation, activation, and memory cell formation of CD8+ T cells [133]. More efficient administration of vaccine (e.g., DC-based vaccines, antigen-coated particle formulations) leading to an absence of overt inflammation induced the formation of memory CD8+ T cells more effectively following antigen delivery [134].
Nanovaccines in clinical use and in clinical trials

Only a few nanovaccines have been successfully translated from the laboratory to the clinic. Of these, most only elicit humoral responses, and there is an unmet need for the development of vaccines that can generate strong cellular responses against infectious diseases and cancer. Vaxfectin® is a cationic liposomal nanovaccine that is currently in clinical trials. Vaxfectin® has been used against herpes simplex virus type 2 (HSV-2) and also against influenza virus (H5N1) [135]. Similarly, another FDA-approved nanovaccine, Inflexal® V, has been used as a subunit influenza vaccine in which the hemagglutinin (HA) surface molecules of influenza virus are conjugated to lipid components [136]. Stimuvax®, a liposomal therapeutic nanovaccine that is currently under clinical investigation, has been employed as a vaccine against cancer [137]. Epaxal is another liposome-based nanovaccine against hepatitis A infection [138].

Significant attention has recently been drawn to NPs during the development of effective vaccines against severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) (Figure 4). Synchronized innate and adaptive (both humoral and cell-mediated) immune responses are essential for achieving virus clearance from the host. The use of NPs to achieve this goal is generally essential, and a list of SARS-CoV-2 vaccines that take advantage of nanomaterials is provided in Table 2.
Boston-based Moderna, in conjunction with the National Institute of Allergy and Infectious Diseases (NIAID), developed an mRNA-based NP vaccine against SARS-CoV-2 [139]. The mRNA contains the coding sequence for SARS-CoV-2 spike (S) protein and is encapsulated within lipid NPs that induce efficient uptake by immune cells and the activation of T and B lymphocytes [139]. An adaptive immune response is thus generated against the S protein [139,140]. Pfizer and BioNTech jointly developed the BNT162 (b1, b2) vaccine against SARS-CoV-2. BNT162b1 is an mRNA-based vaccine that encodes a trimer of the viral receptor-binding domain (RBD) [141]. BNT162b2 is another mRNA vaccine which codes for full-length membrane-anchored S protein [142]. Both mRNAs are encapsulated in lipid NPs for efficient delivery into target cells. The mRNA sequences are partially modified to enhance RNA stability and protect the RNA conformation to improve immunogenicity [141,142]. The Moderna and Pfizer-BioNTech vaccines were among the first approved vaccines against SARS-CoV-2. Maryland-based Novavax expressed full-length SARS-CoV-2 S glycoprotein in a baculovirus/Sf9 system. The saponin-based Matrix-M1 adjuvant is used during administration, which overcomes the failure to induce a cell-mediated immune response observed with other protein subunit vaccines [143]. The Novavax vaccine is currently under review for emergency use authorization (EUA). In addition to SARS-CoV-2, nanovaccines are in widespread use for multiple other diseases as well. Many of them have been approved by the FDA and/or the European Medicines Agency (EMA), and others are currently in clinical trials. A list of such vaccines is provided in Table 2.
Concluding remarks and future perspectives

Despite advances in the development of traditional vaccines, improvements are needed because of the weak immunogenicity of conventional vaccines, their intrinsic instability in vivo, toxicity, and the need for multiple booster immunizations. Nanovaccines, which are the focus of this review, provide distinct advantages over conventional vaccines because of their size proximity to pathogens, controllable physicochemical and biophysical attributes, enhanced protection of the antigen from degradation, biopersistence, improved transport through the lymphatics and into LNs, and co-delivery of immunomodulatory molecules to boost immune recognition (Boxes 1 and 2).

Outstanding questions

Can nanoscale materials be used to facilitate vaccine development?

How do nanoscale properties such as size, shape, geometry, and surface functionalization contribute toward an effective immune response?

How do nanovaccines complement the vaccine development process in the current pandemic scenario?

Is it possible to acquire and track indicators of the long-term impact of nanovaccines over the lifetime of an individual?
Recent advances in nanoengineering have played a pivotal role in developing the highly anticipated liposome-based mRNA vaccine against SARS-CoV-2. Nevertheless, there are unanswered challenges in the path of successful translation of various nanovaccines. The nanoscale size range of the antigen vehicle is a crucial criterion which determines the spatial location of the antigen. The optimum size is not generalizable and depends on several factors such as the chemical composition of the nanovaccine and opsonization by complement and complement receptors. Understanding how nanovaccines elicit clonal bursts and somatic hypermutation needs to be addressed for the design of improved nanovaccines against highly variable viruses such as SARS-CoV-2 and influenza, where the success of immunization depends on eliciting extensive somatic hypermutation in antibody-secreting B cells. Finally, the promise of nanovaccines does not end with the simple induction of humoral or cell-mediated immunity, and nanovaccines represent a new frontier in the development of personalized vaccines (Box 2). However, many issues remain unresolved (see Outstanding questions) and a risk-benefit analysis is required. Once preclinical studies are validated in animal models, clinical translation of nanovaccines will require stringent safety testing to address different types of risks and scenarios (Box 2). In addition, setting up an analytical pipeline for the development of nanovaccines of different compositions will require further systematic investigations.
Figure 1. The basics of nanovaccines and their significance. (A) Nanovaccines comprise a selected antigen conjugated to a nanomaterial and an adjuvant to elicit an immunogenic response. Multiple antigen epitopes (denoted by red and blue antigens) can be loaded onto the surface of the NPs. Nanomaterial and adjuvant types vary depending on the infection, tissue type, and the immune response required. (B) NPs aid efficient vaccine targeting to the desired cell and its receptors, thereby minimizing side effects. They increase the duration of antigen-receptor engagement and thus enhance the immune response. Specific types of NPs are useful in delivering the antigen into the cytoplasm of the target cell. Packaging of antigens within NPs enhances their protection against enzymatic or proteolytic cleavage. (C) NPs can pass through the lymphatic drainage system and activate APCs within the lymph nodes. (D) NPs aid the DC-T cell interaction that is necessary to boost the downstream immune response. They activate dendritic cells and influence the release of pro- and anti-inflammatory cytokines. (E) Antibody production by plasma B cells and the differentiation, maturation, and activation of lymphocytes and monocytes are also positively influenced by NP-mediated vaccine delivery. Abbreviations: APC, antigen-presenting cell; DC, dendritic cell; LN, lymph node; NP, nanoparticle; NV, nanovaccine.

Figure 3.
Mechanism of action of nanovaccines. Different types of antigens conjugated to nanoparticles (NPs) stimulate antigen-presenting cells (APCs) to process and present the antigens in different manners. Some antigens are received by mannose receptors; others are degraded within the APCs, and the antigenic peptide fragments are then presented via MHC I (to activate CD8 T cells) or via MHC II (to activate CD4 T cells). APCs (such as dendritic cells) and T cells also secrete cytokines in the process. This release of cytokines alters the cytokine milieu and shapes either pro- or anti-inflammatory responses. Clonal expansion of the activated T cells and B cells leads to boosting of the immune response. Activated plasma B cells release antibodies in response to the specific antigen conjugated to the NPs. Some cells remain as memory cells to provide an immediate antibody response in the case of natural antigenic challenge. The annotations adjacent to individual nanovaccines highlight mechanistic steps taking place in APCs or in the downstream immune response column and illustrate the diverse mechanisms of action of individual nanovaccines. Abbreviation: LPS, lipopolysaccharide.

Figure 4.
Strategies for the development of nanovaccines against SARS-CoV-2. (A) The spike protein S that is present at the surface of the virus is unique to SARS-CoV-2 and has been used as a vaccine target by different laboratories. Nanovaccines comprise S protein mRNA, although the corresponding DNA sequence can also be used. S proteins are often broken down into fragments that can also be used as antigens. (B) (i) The AstraZeneca, Sputnik V, and Johnson & Johnson vaccines use a conventional adenovirus-mediated DNA transfer method to express SARS-CoV-2 S protein at the site of inoculation. (ii) The Moderna and Pfizer vaccines introduce S mRNA by means of lipid nanoparticles, leading to local synthesis. (iii) Novavax contains S protein embedded in a nanoparticle system, whereas (iv) Bharat Biotech and Sinopharm used a conventional inactivated whole-virus vaccine. Abbreviation: SARS-CoV-2, severe acute respiratory syndrome coronavirus-2.

Table 1. Effect of NP size on the immunological response

HIV TAT protein-modified NPs of 220 or 630 nm elicit a strong TAT-specific cellular immune response but a weaker anti-TAT antibody response than NPs of 1.99 μm [77]

Table 2. Nanovaccines approved or in clinical trials
A Refined Neuronal Population Measure of Visual Attention

Neurophysiological studies of cognitive mechanisms such as visual attention typically ignore trial-by-trial variability and instead report mean differences averaged across many trials. Advances in electrophysiology allow for the simultaneous recording of small populations of neurons, which may obviate the need for averaging activity over trials. We recently introduced a method called the attention axis that uses multi-electrode recordings to provide estimates of attentional state of behaving monkeys on individual trials. Here, we refine this method to eliminate problems that can cause bias in estimates of attentional state in certain scenarios. We demonstrate the sources of these problems using simulations and propose an amendment to the previous formulation that provides superior performance in trial-by-trial assessments of attentional state.

Introduction

The advent of multi-channel microelectrode arrays has made it easier to simultaneously record from small populations of neurons. The richness of the data obtained from arrays requires appropriate analyses for describing a population of neurons that encodes sensory, motor, or cognitive information (the "population code"). Various techniques have been used to characterize a neuronal population's responses [1][2][3][4]. We recently introduced a method for using population activity to estimate the attentional state of subjects: the "attention axis" [5]. We recorded extracellularly from small populations of neurons in visual area V4 during a change-detection task that required spatial attention. The attention axis allowed us to infer the attentional state of the animal within relatively short periods, including single trials and individual stimulus presentations of only 200 ms. We found that on trials in which the neurons indicated that the monkey was attending strongly to one location, the monkey was better able to detect stimulus changes at that location.
Thus, the attention axis might advance our ability to study visual attention and other cognitive processes at behaviorally relevant timescales. However, we have since discovered that certain biologically plausible scenarios can lead to distortions in the attention axis analysis that can bias measurements of cognitive state. Two different factors, in particular, can lead to apparent changes in position on the attention axis where none exist. First, when the attention axis is constructed using populations of neurons with responses that differ little between different attentional conditions, measurements can be biased toward the unattended location. Second, a bias in the same direction can occur when the attention axis is constructed using small samples of neuronal responses that provide poor estimates of the underlying activity. It is important to identify and address the sources of these distortions to ensure relatively unbiased estimates of neuronal activity and behavior. We describe here a refinement of the attention axis that effectively eliminates these biases by equating them across behavioral conditions and thereby minimizing their contributions to our results. For comparison, we re-compute previously reported results using our improved metric. The corrected measures show somewhat less variance in attentional state overall, but the refined attention axis retains its ability to capture moment-by-moment fluctuations in attention that otherwise would be undetected by conventional trial-averaging measures.

Materials and Methods

Because the construction of the attention axis is the focus of this work, we briefly outline the analysis below. Details were provided in previous work [1]. Monkeys were trained to attend to one of two stimuli, flashed on and off simultaneously, and to respond with an eye movement to a stimulus when one stimulus changed its orientation.
During each daily session, we recorded simultaneously from approximately 40 single units and multiunit clusters in each hemisphere of visual area V4 using chronically implanted multielectrode arrays. One stimulus was located in each visual hemifield, in the receptive fields of neurons recorded in the contralateral hemisphere. To build the attention axis, neuronal responses to the stimulus presentations immediately preceding the stimulus change (stimulus "n-1") were sampled on individual trials and plotted in N-dimensional space, where N is the number of simultaneously recorded single units and multiunit clusters. Trials in which the monkey correctly detected the stimulus change ("Hits") at the cued location were divided into two conditions, "attend-left" and "attend-right", based on where the animal was cued to attend at the start of each block of trials. The line connecting the means of neuronal responses on Hit trials in each attention condition was defined as the original attention axis [5]. Neuronal responses from correctly detected changes ("Hits") and missed changes ("Misses") were then projected onto the attention axis and individual projections (i.e., the position on the attention axis) were used to infer the animals' trial-by-trial state of attention. Mean activity in the attend-left and attend-right conditions was normalized to -1 and 1, respectively, to facilitate comparisons across recording sessions (e.g., Fig 1A, top). The revised attention axis is broadly applicable to simultaneous neuronal recordings in any brain region and from any microelectrode technology. It does not require simultaneous recordings from both brain hemispheres [5]. Also, the attention axis treats every neuron in the same manner regardless of cell type, and it does not take into account noise correlations or other forms of neuronal interactions.
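The construction above can be sketched in a few lines (a minimal NumPy illustration; the published analyses were run in Matlab, and the population size, firing rates, and attentional gain below are invented purely for illustration): the axis is the line through the mean attend-left and attend-right Hit responses in N-dimensional response space, and projections are rescaled so those two means map to -1 and +1.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 40, 200  # illustrative sizes, not from the paper

# Simulated spike counts per 200-ms presentation (Poisson, illustrative).
base = rng.uniform(5, 20, n_neurons)
gain = 1.1  # hypothetical modest attentional gain
attend_right_hits = rng.poisson(base * gain, (n_trials, n_neurons))
attend_left_hits = rng.poisson(base, (n_trials, n_neurons))

# Mean Hit responses define the two ends of the attention axis.
mu_R = attend_right_hits.mean(axis=0)
mu_L = attend_left_hits.mean(axis=0)

def project(responses, mu_L, mu_R):
    """Project responses onto the attention axis and rescale so the
    attend-left mean maps to -1 and the attend-right mean maps to +1."""
    axis = mu_R - mu_L
    mid = (mu_R + mu_L) / 2
    return 2 * (responses - mid) @ axis / (axis @ axis)

# By construction the two condition means land at +1 and -1.
pos_R = project(attend_right_hits, mu_L, mu_R)
pos_L = project(attend_left_hits, mu_L, mu_R)
```

Because projection is linear, the mean of the projected Hit responses in each condition sits exactly at +1 or -1; individual trials scatter around those values, and it is that scatter that the method reads as trial-by-trial attentional state.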
Two adult rhesus macaques, Macaca mulatta, weighing 9 and 12 kg, were purchased from the New England Primate Research Center and pair-housed in Harvard Medical School's animal facilities in accordance with University policies and the USA Public Health Service Guide for the Care and Use of Laboratory Animals. Neurophysiological data re-analyzed in this paper are from experiments approved by and conducted under the auspices of the Institutional Animal Care and Use Committee of Harvard Medical School [5,6]. Monkeys were fed nutrient-rich biscuits as well as an assortment of supplemental treats (e.g., bananas, raisins, peanuts) daily. Enrichment activities typically included foraging for treats, music, movies, human interaction, and standard toys including mirrors. Animal health was monitored daily by trained professionals. The effects of water restriction were monitored closely by checking the weight, stool, and behavior of the animals; significant deviations of any of these factors led to ad libitum water access until symptoms resolved. Animals worked to satiation in the laboratory or were supplemented with water in their cages. Loose restraints were used to guide the animals into the chairs. Time in the laboratory and exposure to the visual attention task were increased gradually over the course of several weeks using operant conditioning and only positive reinforcement. Daily experimental sessions were terminated when the animals lost interest in performing the task. Simulations were carried out in Matlab 2012a and collection of physiological data was previously described [5].

Results

The attention axis measures the attentional state of the subject during a brief period (here, 200 ms) based on the average modulation of neuronal responses between different attention conditions (e.g., attend-left vs. attend-right). Below, we illustrate potential problems with this measure as it was originally applied [5,6].
The first two issues (Figs 1 and 2) can generate different projections on Hit and Miss trials even when there is no meaningful change in the underlying neuronal activity. These issues can be eliminated by using the modified analysis described below. We verify this refined approach using simulations (Fig 3) and discuss limitations of the sensitivity of attention axis measurements. Finally, we re-plot central results from our previous work using the refined attention axis (Figs 4-6). The first potential problem with the attention axis occurs when the responses used to construct the axis are overlapping and the estimates of the response means are noisy. We demonstrate this problem in simulations using a highly reduced case in which the axis is constructed using a single sample from attend-right responses (A) and a single sample from attend-left responses (B) of a single neuron. In this example, the response estimates A and B are random samples drawn from two uniform distributions that overlap by 50% (Fig 1A, hatched and yellow). A single-trial test sample (T) is also drawn from the attend-right distribution. If the attention axis provides unbiased measures, the expected position for the test sample T on the attention axis must be 1, because it is drawn from the same distribution as A. We will show that this is not the case. Samples A, B, and T might lie anywhere within their respective distributions. Here, for simplicity we distinguish cases where the samples lie in the left or right halves of the two distributions (labeled x, y and z in Fig 1A).

[Displaced figure caption: original (replotted from [6]) and revised (right) attention axes. Orientation changes were normalized to the behavioral threshold (63% correct) for each recording session and then binned into five equally sized bins. Trials were placed into seven equally sized bins according to position on the attention axis, ranging from strong (red) to weak (blue) attentional modulation.]
By construction, A and T are equally likely to lie in y and z, and B is equally likely to lie in x or y. The probability that T < A is 0.5. Fig 1B illustrates specific outcomes for the locations of A and T and their probabilities. Because A and T are individually equally likely to fall in range y or z, the area of each of the four cells in Fig 1B corresponds to a probability of 0.25. When T is in y and A is in z, T will always lie to the left of A (black in Fig 1B). When T is in z and A is in y, T will always lie to the right of A (white in Fig 1B). When A and T are both in the same range (e.g., both in y), T < A and T > A are equally likely (the upper-left and lower-right cells in Fig 1B are half black and half white). Across all possible outcomes the black and white areas are equal, indicating that overall the probabilities T < A and T > A are each 0.5. Although T is equally likely to lie to the left or right of A, it is not equally likely to lie in positions that are greater or less than 1 on the attention axis (Fig 1C). The symmetry of T relative to 1 on the attention axis holds when B is in x and is therefore always less than A (Fig 1C, top row). In this case, the possible outcomes and probabilities on the attention axis are identical to those in Fig 1B; there is equal probability of T taking a value greater than or less than 1 on the attention axis, as indicated by the equal blue and orange areas in the top row of Fig 1C. The bias occurs when A and B both lie in range y, where B can be greater than A. Such inversions reverse the polarity of the attention axis, such that sample values toward the left map to more positive attention axis positions than those toward the right. Axis inversions occur whenever B lies to the right of A. Inversions account for the outcomes labeled ABT where the blue color in the upper row of Fig 1C (T greater than 1 on the attention axis) is replaced by orange in the lower row (T less than 1 on the attention axis).
When A and B are both in y and T is in z (more positive than both A and B), T will have a position on the attention axis that is greater than 1 when A < B, but equally often it will have a position on the attention axis that is less than 1, when B < A (third cell in bottom row of Fig 1C). Additionally, there are six equally probable arrangements when A, B, and T are all in range y (leftmost cell in bottom row of Fig 1C). But only two of these six arrangements result in situations where T > 1 on the attention axis. Under these conditions, the overall probability that T will have a value less than 1 on the attention axis in Fig 1C is 0.583 (0.5 plus the bias caused by outcomes ABT, 1/16 + 1/48). The ability of the orientation of the attention axis to flip in certain configurations means that there will be a bias for T to have a value shifted from 1 toward the direction of 0, even though T was drawn from the same distribution as A. This bias in the attention axis stems from the overlap between distributions used to construct it and noisy estimates of the distribution means. If the estimates A, B and T were each based on a large number of samples, or individual responses from a large number of neurons, they would approximate the true means of the distributions, B would virtually never lie to the right of A, and the bias would be effectively eliminated. The amount of bias in the example in Fig 1 can be precisely calculated because the distributions have well-defined properties. To validate the probabilities in Fig 1C, we simulated the sampling scenario 10,000 times. The mean probability that T < 1 was 0.583 (SE 0.002) when A and T were randomly drawn from random uniform distributions (Fig 1C), while the mean probability that T < A was 0.50 (SE 0.002, Fig 1B), matching theoretical expectations. Thus, we can account for biased sampling on the attention axis when the statistics of the distributions are known, and potentially correct for it. 
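This sampling scenario can be reproduced with a short Monte Carlo sketch. The specific distributions below are an assumption chosen to match the described geometry (attend-left B ~ U(0, 2), attend-right A and T ~ U(1, 3), giving the 50% overlap of Fig 1A); under these distributions the bias works out analytically to 7/12 ≈ 0.583, matching the reported value.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# 50%-overlapping uniform distributions (assumed ranges, as in Fig 1A):
# attend-left sample B ~ U(0, 2); attend-right samples A, T ~ U(1, 3).
B = rng.uniform(0, 2, n)
A = rng.uniform(1, 3, n)
T = rng.uniform(1, 3, n)

# Axis built from the single samples: B maps to -1 and A maps to +1.
# When B happens to exceed A, the axis polarity inverts; this is the
# source of the bias.
pos_T = (2 * T - A - B) / (A - B)

p_T_less_A = np.mean(T < A)        # unbiased comparison: ~0.5
p_T_below_1 = np.mean(pos_T < 1)   # biased toward 0 on the axis: ~0.583
```

The direct comparison T < A stays at 0.5 because A and T are exchangeable, while the axis position of T falls below 1 more than half the time; the entire excess comes from the polarity-inverted draws with B > A.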
In practice, however, the attention axis is built on responses from many neurons that have distributions that can only be estimated. It is therefore difficult to know how much bias might enter into attention axis measurements. If the distributions of firing rates of the two response distributions that comprise the attention axis are non-overlapping, then axis inversion is avoided. But because neurons are typically weakly modulated by attention and are often driven by suboptimal sensory stimuli in neurophysiological experiments, overlapping population responses are common. Although the bias described in Fig 1 does not occur if samples are drawn from two nonoverlapping distributions, a second bias can occur even when the sample distributions do not overlap and is related to the process of projecting points in multidimensional space. This bias acts in the same direction as that described above, shifting measurements towards 0 on the attention axis. Fig 2 illustrates this other source of bias using a simplified example in which the population responses are based on responses from only two neurons. All responses are noiseless except the response of neuron 2 in the attend-right condition, which is drawn from a uniform distribution (Fig 2, "attend right"). As with the simulations in Fig 1, the attention axis is constructed using only a single sample of the neurons' responses from the attend-right response distribution, A, and a single sample of the neurons' responses from the attend-left response distribution, B (which is noiseless in this example). A single test sample, T, is drawn from the attend-right distribution and projected onto the attention axis. When neuron 2's response to A is greater than its mean value and its response to T is less than its response to A, then the projected position of T (black arrow) will be less than 1 on the attention axis (orange line segment). 
Alternatively, if T > A in that situation, then the projection is greater than 1 (blue vertical segment). As indicated by the greater length of the orange segment, test values are more likely to project to positions less than 1 on the attention axis. This bias favoring values less than 1 will occur whether the slope of the attention axis is positive or negative because the longer segment will project to attention axis values that are less than 1 in either case (the orange segment would be above the blue segment when the slope is negative). Thus, noisy estimates of the population means can artificially bias attention axis positions towards zero, even in non-overlapping distributions. We simulated the scenario in Fig 2 with neuron 1's responses fixed one unit apart (abscissa) and neuron 2's attend-right response drawn from a uniform distribution one unit in extent (blue and orange vertical bars combined). The magnitude of the bias depended on the modulation by attention relative to the magnitude of the response noise. In the configuration illustrated in Fig 2, the mean position of sample T on the attention axis was 0.856 (SE 0.001), a bias towards 0. Doubling the extent of the uniform distribution increased the bias so that the mean position of T was 0.571 (SE 0.002). In contrast, doubling the difference between neuron 1's responses (i.e., by moving B leftward in Fig 2) reduced the original bias such that the mean axis position was 0.96. The bias towards zero therefore increases as either 1) population responses become more variable, or 2) attentional modulation decreases. The original attention axis was constructed using correct responses ("Hits") from the epoch immediately before the stimulus change ("n-1") in each trial. Because only Hits were used to construct the axis, the mean of Hit trials for the attend-right condition was forced to lie at 1, and that for the attend-left conditions was forced to lie at -1. 
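The two-neuron projection scenario can also be sketched numerically. One detail is an assumption on our part: placing neuron 2's noiseless attend-left response at the centre of neuron 2's attend-right distribution reproduces the reported values (0.856, 0.571, and 0.96), so the sketch below uses that configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_test_position(sep=1.0, extent=1.0, n=400_000):
    """Mean attention-axis position of test sample T in the Fig 2 scenario:
    neuron 1 is noiseless with responses `sep` apart; neuron 2 is noiseless
    for attend-left (at 0) and uniform over `extent` for attend-right."""
    B = np.zeros((n, 2))                                # attend-left sample
    a2 = rng.uniform(-extent / 2, extent / 2, n)
    t2 = rng.uniform(-extent / 2, extent / 2, n)
    A = np.column_stack([np.full(n, sep), a2])          # attend-right sample
    T = np.column_stack([np.full(n, sep), t2])          # test sample
    d = A - B
    # Project T onto the line through B and A; rescale so B -> -1, A -> +1.
    pos = -1 + 2 * np.sum((T - B) * d, axis=1) / np.sum(d * d, axis=1)
    return pos.mean()

p_base = mean_test_position()               # ~0.855 (reported: 0.856)
p_noisier = mean_test_position(extent=2.0)  # ~0.571 (reported: 0.571)
p_stronger = mean_test_position(sep=2.0)    # ~0.96  (reported: 0.96)
```

Both trends described in the text fall out directly: widening the response noise pulls the mean position further toward 0, while increasing the separation of the noiseless responses pushes it back toward 1.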
Because the axis was constructed with a finite number of attend-left and attend-right Hit trial responses that had overlapping distributions [5], other responses projected onto the axis were susceptible to the biases described above (Figs 1 and 2). When the n-1 attend-left and attend-right Miss trials were projected onto the axis, these biases would shift their means toward 0 by some amount. Without precise knowledge of the response distributions, it is impossible to eliminate these biases. However, the effect of the biases can be eliminated by ensuring that it acts equally on the responses being compared, for example, by constructing the attention axis using the responses to one particular stimulus on Hit trials, and then comparing the responses on Hit and Miss trials to a different stimulus. We did this by constructing the axis using responses to the stimulus that came two before the stimulus change on each trial (stimulus "n-2"). Hit and Miss responses to stimulus n-1 were then projected onto the axis to yield the attention axis position. Although n-1 responses projected onto the n-2 attention axis will have the biases described above, these biases will affect Hit and Miss trials equally, making it possible to accurately compare estimates of attention on trials with different behavioral outcomes. The revised attention axis does not introduce artifactual differences between estimates for Hits and Misses because any bias in attention axis measurements acts on both equally. This fact is confirmed in the simulations shown in Fig 3. We fixed the mean of responses on Miss trials at 1/8 th of the distance between mean attend-left and mean attend-right Hit responses, and then varied attentional modulation. The expected difference between attend-right Hit and Miss projections was therefore always 0.25. We simulated the n-1 and n-2 responses of 20 neurons over 1000 trials. 
Responses were modeled as normal distributions and we varied the distance between mean attend-left and attend-right Hits (d' values: 0-0.4) to study the effect of attentional modulation on attention axis position. We measured the average difference between attend-right Hit and Miss positions on the original (Fig 3A) and revised (Fig 3B and 3C) attention axes after 1,000-1,000,000 repetitions at each d' value. Under these conditions, the original attention axis performs well when the distributions of attend-left and attend-right responses of individual neurons differ by more than a d' of about 0.2. At smaller differences the mean Hit responses remain at 1 (by construction), but the biases described above cause the mean Miss responses to take values that are increasingly biased toward 0 as modulation by attention approaches 0. For the revised attention axis (Fig 3B), the average attend-right Hit and Miss positions are asymptotically close to the correct positions of 1.00 and 0.75 when d' values are 0.2 and greater (Fig 3B, bottom). However, at smaller d' values, Hit and Miss values both approach 0 as d' approaches 0 (Fig 3B, top). This inevitably leads to an underestimate of their difference, particularly when modulation by attention is small relative to the response noise. Thus, the revised attention axis leads to conservative measures of Hit-Miss differences when those differences in activity are very small (Fig 3B, bottom). Because the bias makes absolute measures on the attention axis uninformative, it can be helpful to re-normalize the mean attend-right Hits to 1 (and attend-left Hits to -1) so that the bias underlying both Hits and Misses is effectively removed. This approach produces a more convenient measure of population responses for d' values of 0.05 and greater (Fig 3C) that is still comparable to the original attention axis (Fig 3A), which tended to overestimate the true strength of modulation by attention.
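The cross-validated construction can be sketched as follows. This is a simplified Monte Carlo in the spirit of the Fig 3 simulations, with illustrative choices of our own (unit-variance normal responses so that the per-neuron mean difference equals d', equal trial counts for Hits and Misses, and d' = 0.3): the axis is estimated from one independent set of Hit responses (standing in for stimulus n-2), a second set (standing in for n-1) is projected onto it, and the projections are renormalized so the n-1 Hit means map to -1 and +1. Any residual bias then acts identically on Hits and Misses.

```python
import numpy as np

rng = np.random.default_rng(3)

n_neurons, n_trials, dprime, n_reps = 20, 1000, 0.3, 50
mu_L = np.zeros(n_neurons)
mu_R = np.full(n_neurons, dprime)    # unit variance => per-neuron d' = 0.3
mu_miss = mu_R - (mu_R - mu_L) / 8   # Miss mean 1/8 of the way toward attend-left

def draw(mu):
    return rng.normal(mu, 1.0, (n_trials, n_neurons))

diffs = []
for _ in range(n_reps):
    # Axis direction estimated from an INDEPENDENT ("n-2") set of Hit responses.
    w = draw(mu_R).mean(0) - draw(mu_L).mean(0)
    # Project the "n-1" responses onto that axis.
    hits_R, hits_L, miss_R = draw(mu_R) @ w, draw(mu_L) @ w, draw(mu_miss) @ w
    # Renormalize so the n-1 Hit means map to +1 and -1; any bias now
    # shifts Hits and Misses by the same amount and cancels.
    mid = (hits_R.mean() + hits_L.mean()) / 2
    half = (hits_R.mean() - hits_L.mean()) / 2
    diffs.append((hits_R.mean() - mid) / half - (miss_R.mean() - mid) / half)

mean_diff = float(np.mean(diffs))  # recovers the true Hit-Miss separation of 0.25
```

Because the axis direction is estimated from responses that are statistically independent of the projected set, estimation noise in the axis cannot correlate with the projected trials, and the renormalized Hit-Miss difference comes out at its true value of 0.25 at this d'.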
While helpful, this renormalization cannot address the unreliability of measures when attention-related modulation is small relative to uncertainty in estimates of individual neuronal population responses. To illustrate the effectiveness of the refined attention axis using real data, we re-examined three key analyses that were previously presented using the original attention axis [5,6]. Side-by-side comparisons of the original and revised attention axes are shown below (Figs 4-6). If the attention axis is a useful measure of the animal's attentional state, then position on the attention axis should correlate with behavioral performance. Alternatively, if axis position and task performance are largely unrelated, then the attention axis fails to capture neuronal activity that supports visual attention in our task. Fig 4 illustrates changes in behavioral performance as a function of projected position on the attention axis. Each trial was assigned to a bin by projecting onto the attention axis the neuronal population response to the stimulus immediately before the change appeared. The probability of successfully detecting the change was then computed for each bin. Animals performed well when population responses on the original attention axis (Fig 4, left) were at the mean hit response positions and beyond (less than -1 and greater than 1 for attend-left and attend-right conditions, respectively). Performance decreased as position on the attention axis approached and moved beyond zero. These results suggest that the position on the original attention axis can be used to predict the animal's state of attention. But given the biases in the original attention axis, it is unclear how much of the result can be attributed to true changes in neuronal population responses.
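The bin-and-score analysis just described can be sketched with synthetic data. Everything below is invented for illustration: the per-trial axis positions, the logistic link between position and detection probability, and all parameters; only the procedure (equally populated bins along the axis, then proportion correct per bin) follows the text.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000

# Hypothetical per-trial attention-axis positions (attend-right trials),
# with a detection probability that rises with position (assumed link).
pos = rng.normal(1.0, 0.8, n)
hit = rng.random(n) < 1 / (1 + np.exp(-2 * pos))

# Seven equally populated bins along the axis, then proportion correct per bin.
edges = np.quantile(pos, np.linspace(0, 1, 8))
bin_idx = np.clip(np.searchsorted(edges, pos, side="right") - 1, 0, 6)
prop_correct = np.array([hit[bin_idx == b].mean() for b in range(7)])
```

With any monotone link between axis position and detection probability, proportion correct rises from the lowest to the highest bin, qualitatively matching the performance gradient along the axis shown in Fig 4.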
The bias would cause Miss trials, but not Hit trials, to be assigned to positions on the attention axis that are shifted toward 0, making the proportion correct (Hit trials / (Hit trials + Miss trials)) smaller for attention axis positions further from 1 or -1. To compare the same data on the original and revised attention axes, we combined data across attend-right and attend-left conditions by normalizing the mean of the Hit distributions in each attention condition to 1 (Fig 4, right). The relationship between attention axis position and performance is weaker when using the revised attention axis compared to the original attention axis, as indicated by the decrease in slope. This suggests that bias contributed some fraction of the effects previously reported. However, a clear relationship exists on both axes: task performance was high at positions greater than 1 (which we hypothesize correspond to trials in which the animal correctly allocated attention to the stimulus where the orientation change occurred), and performance steadily decreased as positions approached zero and beyond. The systematic change in behavioral performance as a function of position on the revised attention axis demonstrates that the refined attention axis captures substantial task-relevant changes in attentional state. Given that position on the attention axis was a reliable predictor of trial-by-trial behavioral performance [5], we also previously examined whether attention improved performance by decreasing the threshold or slope of psychometric functions [6]. The influence of changes in population activity on these two behavioral parameters might help us better understand the neuronal mechanisms that allow for improved performance associated with attention. Using the original attention axis, we found that both the threshold and the slope of psychometric functions decreased as the strength of attention increased (Fig 5A).
We reanalyzed the data using the revised attention axis and found that these findings hold true (Fig 5B), although both changes were reduced. Changes in population activity, as measured by the attention axis, can help elucidate not only how neuronal responses support behavior but also how activity is organized within the cerebral cortex. To this end, the attention axis was used to investigate the role of neuronal correlations between cerebral hemispheres during attention. One electrode array was implanted in each hemisphere in visual area V4, and for this analysis [6] a separate attention axis was constructed using neuronal responses from each of the two arrays. Trial-by-trial correlations in attention axis projections for the two cerebral hemispheres, using either the original or the refined attention axes, did not significantly deviate from zero (open bars, Fig 6). We also compared within-hemisphere correlations by randomly assigning the neurons recorded from each array into two groups and constructing an attention axis for each group. Unlike between-hemisphere changes, changes in position on the within-hemisphere attention axis were strongly correlated regardless of the behavioral outcome (filled bars, Fig 6). Between-hemisphere correlations were not detected when measured using either the original or revised attention axes, suggesting independent processing mechanisms for visual attention in each hemisphere [5].

Discussion

We previously developed a novel method for measuring trial-by-trial changes in attentional state using activity recorded from small populations of neurons in animals performing a change detection task [5]. Conventional single-electrode neuronal measures of visual attention rely on activity averaged over many trials to obtain a reliable estimate of attentional state. The attention axis uses the responses of many neurons to obtain an estimate of attention on each trial.
The ability to estimate attention at each moment makes it possible to study the dynamics and behavioral consequences of attention within brief periods. However, we found that the original attention axis [5,7] is susceptible to specific artifacts that can obscure results or suggest differences when none exist. At least two factors can bias the attention axis as it was originally defined. First, overlapping population responses between the two attention conditions can result in inversion of the attention axis polarity and, consequently, sampling biased towards zero on missed trials (Fig 1). Second, a different bias on the attention axis occurs for multidimensional data and even for nonoverlapping distributions (Fig 2). In principle, these biases could be quantified and corrected if the properties of the relevant distributions are known, as in our reduced example in Fig 1. But such a correction is impossible for experimental data given the nature of neuronal responses. These two biases can be avoided if the attention axis is constructed using responses that are independent from those that are analyzed. In our experiment, using Hit responses to stimuli presented two stimuli before the stimulus change ("n-2") instead of the responses immediately preceding the stimulus change ("n-1") achieved this goal. Responses to the n-1 stimulus on Hit and Miss trials can then be projected on the n-2 axis. Any biases act equally on all responses projected onto the n-2 axis, allowing uncompromised measures of differences in neuronal activity as a function of attentional state (Fig 3). Simulations suggest that this approach is reliable and unbiased until the difference between mean responses in the two conditions becomes very small compared to the noisiness of the population response (d' < 0.1). Re-analysis of our previous work [5,6] shows that our main conclusions remain valid (Figs 4-6).
Position on the revised attention axis continues to show a strong correlation with behavioral performance (Fig 4). Neuronal modulation by attention is associated with changes in the slope and threshold of psychometric functions (Fig 5). Finally, positions on the attention axis constructed using responses across hemispheres remained uncorrelated even when using the revised attention axis (Fig 6). In all cases, the magnitudes of the reported effects are reduced using the revised attention axis. This is likely because some of the previously reported values arose from the artifactual biases described here. This is supported by the fact that the mean responses to attend-left and attend-right hits differed only by 8.6% in the neuronal data [5]. Such a difference would have resulted in a d' between the conditions that was only slightly greater than 0.2, a value where bias in the original attention axis would be expected to have some effect (Fig 3A). Nevertheless, the results with the revised attention axis could differ from those with the original attention axis even if the original measures were in fact unaffected by bias. Because the revised axis examines responses to one stimulus (n-1) using an attention axis constructed from responses to a different stimulus (n-2) and then relates the outcome to detection of a change in still another stimulus, it might be noisier than the original attention axis, which made use of neuronal and behavioral responses involving only two stimuli. A noisier measure would reduce the magnitudes of the reported effects in the way described here. Thus, we cannot be sure whether the original attention axis yielded larger effects because it included bias, or because it was a less noisy measure. Whatever the case, only the results from the revised attention axis can be considered reliable. For decades, measures of population activity have played an important role in understanding limb movements and motor planning [8][9][10]. 
Multi-electrode recordings increase the amount of activity that can be monitored during a short interval and facilitate the analysis of the dynamics of the neural control of movement. The increasing use of multi-electrode recordings in visual cortex and associated regions will further test existing tools for interpreting population responses. The attention axis, as extended here, is one tool for monitoring visual attention or other arbitrary cognitive states. We hope that this approach and others like it [4,11] will help reveal new perspectives on the interaction of individual neurons and behavior at relevant timescales.
Reinterpretation of tuberculate cervical vertebrae of Eocene birds as an exceptional anti-predator adaptation against the mammalian craniocervical killing bite

Abstract

We report avian cervical vertebrae from the Quercy fissure fillings in France, which are densely covered with villi-like tubercles. Two of these vertebrae stem from a late Eocene site; another lacks exact stratigraphic data. Similar cervical vertebrae occur in avian species from Eocene fossil sites in Germany and the United Kingdom, but the new fossils are the only three-dimensionally preserved vertebrae with pronounced surface sculpturing. So far, the evolutionary significance of this highly bizarre morphology, which is unknown from extant birds, remained elusive, and even a pathological origin was considered. We note the occurrence of similar structures on the skull of the extant African rodent Lophiomys and detail that the tubercles represent true osteological features and characterize a distinctive clade of Eocene birds (Perplexicervicidae). Micro-computed tomography (μCT) shows the tubercles to be associated with osteosclerosis of the cervical vertebrae, which have a very thick cortex and far fewer trabeculae and pneumatic spaces than the cervicals of most extant birds aside from some specialized divers. This unusual morphology is likely to have served to strengthen the vertebral spine in the neck region, and we hypothesize that it represents an anti-predator adaptation against the craniocervical killing bite ("neck bite") that evolved in some groups of mammalian predators. Tuberculate vertebrae are only known from the Eocene of Central Europe, which featured a low predation pressure on birds during that geological epoch, as is evidenced by high numbers of flightless avian species.
Strengthening of the cranialmost neck vertebrae would have mitigated attacks by smaller predators with weak bite forces, and we interpret these vertebral specializations as the first evidence of "internal bony armor" in birds. | INTRODUCTION The vertebrate fossil record shows an occasional occurrence of unusual osteological features. Most of these either represent extravagant morphological specializations (e.g., Szyndlar & Georgalis, 2023; Tapanila et al., 2013) or are of pathological origin (e.g., Schlüter et al., 1992). However, a few defy a straightforward explanation, and this is particularly true for avian cervical vertebrae that are densely covered with tubercles. For the first time, these tuberculate cervical vertebrae were reported in a bird from the latest early or earliest middle Eocene (48 million years ago [Ma]) of Messel in Germany (Peters, 1995), which is currently known as Dynamopterus tuberculatus (Mayr, 2022). The only specimen of this large-sized and possibly flightless species is a nearly complete skeleton on a slab, which exhibits numerous tubercles on the surfaces of the cervical vertebrae. These structures were considered to be a true morphological feature of the species, which was assigned to the cariamiform Idiornithidae in the original description (Peters, 1995). It was hypothesized that the tubercles represent a pathological condition (Mayr, 2007), but this assumption was challenged by the recognition of similar structures in multiple further individuals of Perplexicervix microcephalon, virtually all of which exhibit tuberculate vertebrae (Mayr, 2010). Another species of Perplexicervix, P. paucituberculata, was recently identified in the early Eocene (53 Ma) British London Clay (Mayr et al., 2023). The surfaces of some cervical vertebrae of this species are covered with barb-like structures that are smaller than the tubercles of the species from Messel. The holotype of P.
paucituberculata mainly consists of a series of vertebrae, but postcranial bones that were tentatively referred to the species clearly differ from those of cariamiform birds and show a resemblance to the Otidiformes (bustards). Owing to its osteological distinctness, Perplexicervix was placed in a new higher-level taxon, Perplexicervicidae (Mayr et al., 2023). One impediment of previous studies was the lack of histological data for these vertebrae, most of which occurred in flattened fossils on slabs. However, one isolated tuberculate cervical vertebra from the late Eocene (37-38 Ma) locality La Bouffie of the Phosphorites du Quercy in France was previously figured but remained unstudied (Mayr, 2007). Here we describe three further such vertebrae from these fissure fillings, two of which also come from the locality of La Bouffie. Altogether, four cervical vertebrae with tuberculate bone surfaces are now known from the Quercy fissure fillings; these are distributed over four institutions, were independently collected within a timespan of several decades, and almost certainly stem from different individuals. These specimens are the only three-dimensionally preserved vertebrae with pronounced surface sculpturing, and for the first time we were able to perform micro-computed tomography (μCT) imaging, which led to a new hypothesis on the morphology and possible evolutionary significance of these unusual structures that have no analog among extant birds. Three of these vertebrae (the Toulouse specimen MHNT.PAL.2020.0.36.13, the Montpellier specimen UM BFI 3101, as well as an uncatalogued specimen in the collection of the Université Claude Bernard Lyon 1, France; Mayr, 2007) are from the late Eocene locality La Bouffie. One vertebra, the Vienna specimen NHMW 2019/0059/0013, is from the old Quercy collections and lacks precise locality data. This vertebra was acquired in 1888/89 from A.
Rossignol, Lacapelle-Livron, and was associated with numerous isolated bones of amphibians and squamates, which were recently studied and likewise lack exact locality data (Georgalis et al., 2023; Georgalis, Čerňanský, & Klembara, 2021; Georgalis, Rabi, & Smith, 2021). The colour of NHMW 2019/0059/0013 is different from that of the La Bouffie specimens, indicating disparate diagenetic environments. Most likely, therefore, the Vienna specimen is not from La Bouffie, but from a different (albeit unknown) site of the Phosphorites du Quercy, which span a time interval from the middle Eocene to the early Miocene, with the majority of localities ranging between the late middle Eocene and the late Oligocene (Georgalis, Čerňanský, & Klembara, 2021; Mourer-Chauviré, 2006; Pélissié et al., 2021). | RESULTS The Toulouse specimen (Figures 1c-f, 2a-e; MHNT.PAL.2020.0.36.13) is an axis, which apart from its larger size resembles the axis of Perplexicervix paucituberculata from the early Eocene London Clay (Figure 2r,u; Mayr et al., 2023). As in this species, the dens is long and mediolaterally wide, but in the late Eocene Quercy vertebra the processus spinosus is proportionally smaller and not as strongly dorsally protruding, and the notch between the processus articulares caudales (Figure 1c) is narrower and deeper. The Montpellier specimen (Figure 2f-j; UM BFI 3101) exhibits an osseous bridge, which delimits a lateral foramen and identifies the vertebra as the third or fourth cervical. The Vienna specimen (Figure 2k-p; NHMW 2019/0059/0013) is a fifth or sixth cervical vertebra and resembles a vertebra of P. paucituberculata from the London Clay (Figure 2w,z), which was initially identified as the third cervical vertebra (Mayr et al., 2023) but is now considered to also be the fifth or sixth cervical. The axis MHNT.PAL.2020.0.36.13 in particular is much more densely covered with tubercles than that of P.
paucituberculata from the London Clay. In the Quercy vertebrae, the surface structures are also more pronounced than in the vertebrae from the London Clay; rather than being tubercles, they are markedly elongate and have a villi-like appearance, which is particularly evident in the Vienna specimen. As far as comparisons are possible, they correspond well to the surface structures of the vertebrae of P. microcephalon and "Dynamopterus" tuberculatus from Messel. In all three cervical vertebrae, the tubercles/villi have a symmetrical distribution on the left and right sides of the vertebrae. They cover most of the external vertebral surfaces, but are absent from the articular surfaces and the dorsal ridge of the processus spinosus. Tubercles are also absent from the caudoventral portion of the corpus and the attachment sites of intervertebral ligaments. In the NHMW specimen, there are a few tubercles within the foramina transversaria (Figure 3a). Well-developed tubercles are furthermore largely absent from those areas on the ventral surface of the corpus where the vessels and nerves ran that passed through the foramina transversaria (this smooth, "slide-like" vertebral surface is particularly evident in the vertebrae from the London Clay, but can also be observed in the Quercy specimens; Figure 2y,z). At least some of the tubercles appear to be strung in longitudinal, craniocaudally extending rows. Closely adjacent tubercles usually are congruently aligned. Whereas most tubercles/villi are directed more or less perpendicular to the vertebral surface, those on the vertebral arch of the axis point craniolaterally (Figure 2c). In the NHMW specimen, many tubercles are broken, and it can be discerned that they are solid structures with a well-differentiated cortex.
μCT scans were conducted of the MHNT and NHMW specimens (Figure 3a,b). Unexpectedly, these scans reveal a very thick bone cortex and an unusually dense interior of the bones, which exhibits far fewer pneumatic spaces and trabecles than the cervical vertebrae of most extant birds (Figure 3c; an exception are some specialized divers, see discussion). The scans furthermore show the tubercles to be outgrowths of the vertebral cortex and confirm the presence of some tubercles within the foramina transversaria. | DISCUSSION Skeletons of Perplexicervix microcephalon and "Dynamopterus" tuberculatus show the tubercles to be mainly restricted to the cervical vertebrae, and all of the known P. microcephalon specimens with sufficiently well-preserved cervical vertebrae exhibit a tuberculate surface of the vertebral cortex (Mayr, 2007, 2010; Peters, 1995). The wide occurrence of this feature in multiple individuals and its restricted distribution within the skeleton challenge a pathological origin. The μCT scans furthermore reveal that the tubercles/villi cannot be delimited from the bone cortex, which also conflicts with a pathological origin. There is also a damaged axis of an undetermined mammal from La Bouffie, in which parts of the cortex are broken so that the interior of the vertebral corpus can be seen. In this specimen, the exposed trabecles likewise show a close resemblance to the tubercles on the external surface of the avian vertebrae (Figure 4d,e). We therefore hypothesize that the tubercles represent a true morphological feature, which gradually evolved over time and under some selection pressure.
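The osteosclerosis revealed by the μCT scans (thick cortex, few pneumatic spaces) is the kind of observation that is routinely quantified as a bone volume fraction (BV/TV) on a thresholded CT volume. The following is a purely illustrative sketch of such a measurement; the threshold, toy geometry, and function name are assumptions for demonstration, not part of this study's actual workflow:

```python
import numpy as np

def bone_volume_fraction(stack, threshold):
    """Fraction of voxels classified as bone (BV/TV) in a uCT volume.
    `threshold` separates bone from air/matrix; in practice it would be
    chosen per scan (e.g. by Otsu's method), not hard-coded."""
    bone = stack > threshold
    return bone.sum() / bone.size

# Toy volume: a dense cortical shell around a pneumatized interior.
vol = np.zeros((20, 20, 20))
vol[2:18, 2:18, 2:18] = 100   # bone (high attenuation)
vol[5:15, 5:15, 5:15] = 10    # low-attenuation interior spaces

bvtv = bone_volume_fraction(vol, threshold=50)
print(f"BV/TV = {bvtv:.2f}")
```

An osteosclerotic vertebra like those described here would yield a markedly higher BV/TV than the air-filled cervicals of most extant birds.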
All avian cervical vertebrae from the Quercy fissure fillings have matching sizes and are probably from the same or closely related species. Even though it is difficult to reliably identify early Cenozoic birds based on cervical vertebrae alone, the similarities between the Quercy vertebrae and those of Perplexicervix, which in the case of the fifth or sixth cervical include a cranially directed projection of the processus articularis caudalis and a sheet-like expansion of the lateral portion of the vertebral body (Figure 2v,w), suggest close affinities. We therefore hypothesize that the tuberculate cervical vertebrae characterize a distinctive clade of Eocene birds, for which the name Perplexicervicidae is available (Mayr et al., 2023). "Dynamopterus" tuberculatus is more likely to be a flightless representative of this clade than a member of the cariamiform taxon Dynamopterus. The μCT scans reveal that the tuberculate vertebrae have a thick cortex and very dense bone, whereas the cervical vertebrae of most extant birds exhibit large pneumatic spaces separated by narrow bony trabecles (e.g., Fajardo et al., 2007; Gutzwiller et al., 2013). In extant birds, unusually dense (osteosclerotic or pachyostotic) bone occurs only in some diving taxa, in which it contributes to reduced buoyancy (Gutzwiller et al., 2013). The avifauna of La Bouffie does not include aquatic birds (Mourer-Chauviré, 2006), and the flightless "Dynamopterus" tuberculatus as well as the fairly long-legged, volant Perplexicervix microcephalon certainly had a terrestrial ecology.
Hence, the thick cortex and dense bone structure of the cervical vertebrae from the Quercy fissure fillings most likely evolved to increase the mechanical strength of the bones. The density and distribution of the trabecular bone of a vertebra are a result of the mechanical loading acting on it (Smit et al., 1997), so the tubercles and the dense interior of the bones may have developed in response to high forces exerted by the neck muscles. A correlation with muscular forces is suggested by the fact that many of the tubercles on the dorsal surface of the axis point cranially, in the direction of contraction of the muscles attaching to this part of the vertebra. However, the tubercles on the more caudal cervical vertebrae are directed perpendicular to the vertebral body and, as noted above, there are also some tubercles within the foramina transversaria, which do not encompass muscles but conduct blood vessels and nerves. Possibly, therefore, the tubercles are a morphological corollary of developmental or functional constraints associated with the increased thickness of the vertebral cortex. Lophiomys is the only mammal that is known to impregnate parts of its fur with plant toxins, and its unusual skull sculpturing in the back of the neck was interpreted as extra shielding of the brain that evolved in response to predation pressure (Kingdon et al., 2012). The cervical vertebrae of birds conduct and protect vital structures, that is, major arteries, nerves, and the spinal cord, and we consider it possible that the specialized morphologies of the Eocene species likewise evolved as anti-predator adaptations to mitigate attacks against the neck.
It is a distinctive feature of some mammalian predators to dispatch prey with a craniocervical killing bite to the neck or caudal portion of the skull. This behaviour evolved independently within carnivorans, primates, insectivores, and marsupials but is not found in other mammalian predators and reptiles, which dispatch their prey with undirected bites (Eisenberg & Leyhausen, 1972; Leyhausen, 1965; Steklis & King, 1978). The craniocervical killing bite is also employed to kill birds (Cuthbert, 2003; Lyver, 2000; Ratz et al., 1999), and with their presumed terrestrial ecology and fairly long necks, perplexicervicids are likely to have been prone to attacks by mammalian predators. We hypothesize that the surface tubercles and the thick cortex and dense interior of the vertebrae strengthened the vertebral spine in the neck region. In combination with behavioural anti-predator adaptations, such as death feigning (which is known from some extant birds; Sargeant & Eberhardt, 1975), these vertebral specializations would have raised the survival rates of birds in attacks against the neck by small-sized predators with comparatively weak bite forces. In the earliest Cenozoic, Europe was geographically largely isolated from other continents and featured a high number of flightless birds, which indicates a low predation pressure (Mayr, 2022).
During the Eocene, the extinct clade Hyaenodonta dominated in Europe, and modern-type carnivorans first dispersed into Europe at the Eocene-Oligocene boundary, including the Feliformia (cats, mongooses, and allies) and Caniformia (weasels, dogs, and allies) (Solé et al., 2022), which are among the main mammalian predators of adult birds in many extant ecosystems (e.g., Hilton & Cuthbert, 2010; O'Donnell et al., 2015). It has been hypothesized that this faunal exchange terminated the existence of flightless birds in continental Europe (Mayr, 2022), and the immigration of more versatile mammalian carnivores may likewise have led to the extinction of Perplexicervix-like birds. This evolutionary scenario explains why tuberculate vertebrae are only known from the Eocene of Europe and can be falsified by the discovery of tuberculate vertebrae in birds from post-Eocene strata or from fossil sites outside Europe. Morphological anti-predator adaptations are widespread among terrestrial vertebrates and include caudal autotomy in lizards and dermal armour in various groups of squamates and a few mammals (Broeckhoven et al., 2015). The tuberculate surfaces of the Eocene cervical vertebrae resemble the sculpturing of the osteoderms of some squamates (e.g., Buffrénil et al., 2011), and if our hypothesis is correct, they would represent the first evidence of "internal bony armor" in birds. AUTHOR CONTRIBUTIONS GM conceived and designed the study, analysed and interpreted the data, prepared the figures, authored the first draft, reviewed subsequent drafts, and approved the final manuscript. GG identified the Quercy vertebrae, interpreted data, co-authored the first draft, reviewed subsequent drafts, and approved the final manuscript. VW performed CT scans of the Vienna vertebra, reviewed manuscript drafts, and approved the final manuscript. UBG, ZR, and AL contributed data, assisted with their interpretation, reviewed manuscript drafts, and approved the final manuscript.
DATA AVAILABILITY STATEMENT The data of all scans are curated by the institutions in which the specimens are deposited; access can be requested through each institution. MicroCT scans of the axis of Cygnus olor (https://doi.org/10.57756/tw1z4s) and the fossil vertebra NHMW 2019/0059/0013 (https://doi.org/10.57756/fbf5wk) are available online. Other data supporting the findings of this study are available from the corresponding author upon reasonable request.
High-resolution microtomography (μCT) of two vertebrae (MHNT.PAL.2020.0.36.13 and NHMW 2019/0059/0013) and the skull of the extant rodent Lophiomys imhausi was conducted at the MRI platform of the Institut des Sciences de l'Evolution de Montpellier (UM) and the μCT facilities of NHMW and SMF.
Another argument against a pathological origin comes from the different morphologies of the surface structures in the fossils from Messel and Quercy on the one hand and the London Clay on the other. As detailed above, their development is less pronounced in the older fossils from the London Clay, in which they are not differentiated into elongated villi-like structures but form smaller and more barb-like excrescences of the vertebral surface (Figure 2w,z). Finally, a pathological origin of the tubercles is contradicted by the fact that they are absent from functionally critical structures, such as the articular surfaces, the foramen vertebrale, and the vertebral surfaces that were in contact with the vessels leading into the foramina transversaria. That similar tubercles can develop through normal developmental pathways is shown by the skull of an unusual extant African rodent, the Maned Rat Lophiomys imhausi, which bears a tuberculate sculpturing in its caudal portion (Figure 4a-c; Kingdon et al., 2012; Lazagabaster et al., 2021: Figure 2). (Figure 1a) exhibit granular surfaces, these are restricted to the neural spines or other parts of the dorsal surface of the vertebrae, with the only exception being the Cretaceous-Paleogene salamander Piceoerpeton and the Late Cretaceous pipid frog Pachycentrata, which also show some sculpturing in the ventral part of some vertebrae (Báez &
FIGURE 2 (a-p), (v), (y) The new cervical vertebrae with tuberculate surfaces from the Quercy fissure fillings in comparison to (s), (x) vertebrae of Perplexicervix microcephalon from Messel and (r), (u), (w), (z) P. paucituberculata from the London Clay. (a-e) MHNT.PAL.2020.0.36.13, axis in (a) dorsal, (b) ventral, (c) left lateral, (d) caudal, and (e) cranial view. (f-j) UM BFI 3101, third cervical vertebra in (f) dorsal, (g) right lateral, (h) cranial, (i) ventral, and (j) caudal view. (k-p) NHMW 2019/0059/0013, fifth or sixth cervical vertebra in (k) dorsal, (l) ventral, (m) left and (n) right lateral, (o) cranial, and (p) caudal view. (q), (t) MHNT.PAL.2020.0.36.13, axis in (q) dorsal and (t) ventral view. (r), (u) axis of P. paucituberculata (NMS.Z.2021.40.7) in (r) dorsal and (u) ventral view. (s) axis of P. microcephalon (SMF-ME 3548) in ventral view; coated with ammonium chloride. (v), (y) NHMW 2019/0059/0013, fifth or sixth cervical vertebra in (v) dorsal and (y) ventral view. (w), (z) fifth or sixth cervical vertebra of P. paucituberculata (NMS.Z.2021.40.7) in (w) dorsal and (z) ventral view. (x) fifth cervical vertebra of P. microcephalon (SMF-ME 11211a) in dorsal view; coated with ammonium chloride. The arrows indicate enlarged details of the tuberculate surfaces. cdl, caudal articulation facet; cra, cranial articulation facet; dns, dens; prj, cranially directed projection of processus articularis caudalis; sht, sheet-like expansion of lateral portion of vertebral body; sms, smooth vertebral surface that guided vessels and nerves that passed through the foramen transversarium. The scale bars equal 5 mm.
FIGURE 4 (a, b) μCT-based surface reconstructions of the skull of the extant rodent Lophiomys imhausi (SMF 34609) in (a) lateral and (b) dorsal view, with an enlarged detail of the sculptured surface. (c) Cross section through the skull in the area shown in the framed inset, with an enlarged detail of the tuberculate surface. (d, e) Partial axis of an undetermined mammal from La Bouffie (UM BFI 3102) in (d) dorsal and (e) lateral view; the arrow denotes a detail of the trabecles. The scale bars equal 5 mm.
ACKNOWLEDGMENTS Access to fossil specimens was provided by Yves Laurent (MHNT) and Mehdi Mouana, Annelise Charruault, and Lionel Hautier (all UM). We thank Lionel Hautier for scanning the Toulouse specimen and for pointing out the peculiar morphology of Lophiomys imhausi to us. For the MHNT vertebra, 3D data acquisition was performed using the μCT facilities of the MRI platform, a member of the national infrastructure France-BioImaging supported by the French National Research Agency (ANR-10-INBS-04, 'Investments for the future') and the Labex CEMEB (ANR-10-LABX-0004) and NUMEV (ANR-10-LABX-0020). Katrin Krohmann (SMF) is thanked for conducting the μCT scans of the Lophiomys skull. Krister Smith (SMF) is acknowledged for discussions on the identity of the mammalian Quercy vertebra. GLG was funded by the Ulam Program of the Polish National Agency for Academic Exchange (PPN/ULM/2020/1/00022/U/00001) and also acknowledges travel support by SYNTHESYS FR-TAF_Call4_035 (MNHN). ZR was supported by the Research Plan of the Institute of Geology of the Czech Academy of Sciences (RVO67985831). Comments from two anonymous reviewers improved the manuscript. Open Access funding enabled and organized by Projekt DEAL.
2023-11-23T06:17:48.725Z
2023-11-22T00:00:00.000
{ "year": 2023, "sha1": "228611fcb0ae40cb06125b69647f54fb712b4eed", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/joa.13980", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "834aff88217e9b9e0254755acb0e727fb535bc22", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
226291735
pes2o/s2orc
v3-fos-license
Biosensors for Studies on Adhesion-Mediated Cellular Responses to Their Microenvironment Cells interact with their microenvironment by constantly sensing mechanical and chemical cues and converting them into biochemical signals. These processes allow cells to respond and adapt to changes in their environment, and are crucial for most cellular functions. Understanding the mechanisms underlying this complex interplay at the cell-matrix interface is of fundamental value to decipher the key biochemical and mechanical factors regulating cell fate. The combination of material science and surface chemistry has aided in the creation of controllable environments to study cell mechanosensing and mechanotransduction. Biologically inspired materials tailored with specific bioactive molecules, desired physical properties, and tunable topography have emerged as suitable tools to study cell behavior. Among these materials, synthetic cell interfaces with built-in sensing capabilities are highly advantageous for measuring biophysical and biochemical interactions between cells and their environment. In this review, we discuss the design of micro- and nanostructured biomaterials engineered not only to mimic the structure, properties, and function of the cellular microenvironment, but also to obtain quantitative information on how cells sense and probe specific adhesive cues from the extracellular domain. Such responsive biointerfaces provide a readout of mechanics, biochemistry, and electrical activity in real time, allowing observation of cellular processes with molecular specificity. Specifically designed sensors based on advanced optical and electrochemical readouts are discussed. We further provide an insight into the emerging role of multifunctional micro- and nanosensors to control and monitor cell functions by means of material design. INTRODUCTION Cell adhesion is a critical aspect of the constitution of tissues and organs.
The complex organization of tissues relies on a precise control over the formation of adhesive contacts between cells and the extracellular matrix (ECM) (Gumbiner, 1996). From regenerative medicine to developmental biology, there is a great interest in comprehending the mechanisms that control the assembly of cells into tissues and organs (Keung et al., 2010; Gaharwar et al., 2020). Increasing evidence has shown that these processes are regulated not only by biochemical signals but also by biophysical cues from the environment. For example, it has been found that cells are able to sense and respond to the topography (Curtis and Riehle, 2001; Spatz and Geiger, 2007), rigidity (Discher et al., 2005; Engler et al., 2006), and anisotropy (Théry et al., 2006; Xia et al., 2008) of their environment. To reveal the mechanosensory elements involved in cell-cell and cell-ECM interactions, ECM-inspired materials have been developed (Mager et al., 2011; Xi et al., 2019). Advancements in material science and surface chemistry made it possible to create materials that mimic both physical (e.g., stiffness and topography) and chemical cues (adhesive and soluble) of the extracellular environment. These materials are highly desired because cell development and behavior can be studied under conditions similar to those found in the cell microenvironment in vivo. As materials incorporate biochemical and biophysical cues from the natural ECM, the information they provide allows a closer estimation of the in vivo situation. Among biomaterials, those with built-in sensing properties are particularly attractive to obtain quantitative information on how cells probe and respond to relevant physicochemical cues of the ECM. The pioneering work of Dembo and Wang (1999) on deformable elastic materials with embedded fluorescent tracers was one of the first materials of this class capable of deciphering cell contractile forces.
Similarly, in the field of electrochemical sensors, the work of Giaever and Keese (1984, 1986) set the foundations for the development of materials capable of probing the cell adhesion interface in real time. Both approaches profited from their label-free capabilities. Since then, a myriad of synthetic responsive biointerfaces with unprecedented temporal and spatial resolution has been engineered to study cell signaling and behavior in real time and with high sensitivity. In this review, we will focus on the design of micro- and nanostructured biomaterials that resemble the complex properties of the ECM and that play an active role in measuring cell-adhesion-related processes. Materials developed to study the impact of different properties of the ECM on cell behavior, but that only provide adhesion support for cells, are covered elsewhere (Mager et al., 2011; Rosales and Anseth, 2016). Emphasis will be placed on label-free sensing schemes. These approaches have advantages over those that require labeling. Label-free sensors offer real-time measurements, little or no sample preparation, and low non-specific response, reducing the risk of generating artifacts and false positives in the measurements. Commonly used labels like fluorescent or colorimetric dyes are often cytotoxic and hamper further use of the cultured cells, which is particularly desired for regenerative tissue applications. Thus, biomaterials with sensing capabilities hold great potential to bridge the gap between traditional cell-binding assays and in vivo studies. We aim to offer readers an overview of the latest sensing biomaterials and their main advantages and applications in order to guide the selection of the most appropriate platforms for specific purposes. We will describe the specific features that have been provided to materials and how these characteristics have contributed to revealing key aspects of the cellular adhesion mechanism.
Although most of the systems described in this review are research-oriented, commercial applications are possible, especially in biomedical diagnosis (Suhito et al., 2018; Tutar et al., 2019) and tissue engineering (Mitrousis et al., 2018; Gaharwar et al., 2020). CELL ADHESION TO THE MICROENVIRONMENT The ECM is a complex and dynamic mixture of proteins and polysaccharides that not only provides support for cells but is also involved in regulating many important cellular processes, including proliferation, survival, differentiation, and apoptosis (Frantz et al., 2010). Cells sense and respond to changes in the topological, physical, and chemical properties of the ECM through a sophisticated system that allows them to adapt their behavior by converting these cues into biochemical signals. The interaction of cells with the ECM is mostly mediated by a family of transmembrane receptors called integrins, which are responsible for cell attachment and connect the cell-matrix adhesions with the cell cytoskeleton (Geiger and Yamada, 2011) (Figure 1A). Integrins undergo conformational changes upon biochemical and mechanical interactions, leading to outside-in and inside-out mechanotransduction. Although it is well known that integrins have a crucial role in regulating diverse adhesion-related functions, the mechanism by which cells translate extracellular stimuli into biological responses remains unclear. Integrins are heterodimers constituted by two transmembrane protein subunits, α and β, which bind to specific ligands located in ECM proteins or in the membranes of other cells (ICAM and VCAM receptors) (Sun et al., 2019). Integrins adopt closed or open conformations characterized by low- or high-affinity states, respectively. The transition from the closed to the open conformation is crucial for integrin activation and can be induced from the extracellular medium or from the cytoplasm (Luo et al., 2007).
Upon activation, integrins form clusters and associate with adaptor proteins such as talin, kindlin, and vinculin, among others. These proteins connect the integrins to F-actin fibers, forming a molecular "clutch" that mediates mechanical forces between the membrane and the cytoskeleton (Figure 1B) (Sun et al., 2019). The macromolecular complex of integrins and adaptor proteins constitutes the focal adhesions (FA). It has been shown that FA act as mechanosensory machines, translating multiple environmental cues into cellular responses (Geiger et al., 2009). Integrin-mediated binding to the ECM, although one of the most important adhesion mechanisms, is not the only one. Syndecans and lectins also participate in cell adhesion, although their role in mechanosensing is not totally clear (Gumbiner, 1996; Mager et al., 2011; Guilluy and Dolega, 2020). Cell-cell contacts are mediated by different types of junctions: adherent junctions, tight junctions, and desmosomes (Gumbiner, 1996). Adherent junctions are among the most important sites of intercellular mechanical coupling (Ladoux and Mège, 2017). Cadherins are integral membrane proteins that participate in the formation of adherent junctions. Their extracellular domains mediate adhesion to neighboring cells, whereas their intracellular regions are connected to the actin cytoskeleton by the adaptor proteins α- and β-catenin (Figure 1C). Cadherins associate with one another and with adaptor proteins to form clusters constituting the mature adherent junction (Mège and Ishiyama, 2017). Cadherin complexes respond to tension load by the actomyosin. When pulling forces are applied to the adherent junction through the actin cytoskeleton, α-catenin unfolds and associates with vinculin, which strengthens the adherent junction (Le Duc et al., 2010; Yonemura et al., 2010; Buckley et al., 2014). Therefore, mechanosensing at cell-cell bonds is mediated by the same structures that mediate cell-cell adhesions. There is also evidence that tight junctions mediate mechanosensing in epithelia by association with the actin cytoskeleton (Tornavaca et al., 2015). Desmosomes are tightly associated with intermediary filaments and participate in mechanosensing (Weber et al., 2012), although their role in mechanotransduction is complex, as intermediary filament proteins are diverse and their expression is tissue specific (Ladoux and Mège, 2017). An excellent review on cell-to-cell association and the dynamics of collective cell behaviors has been published by Ladoux and Mège (2017). Cell-to-cell adhesion is also important in cell communication processes. During antigen presentation, antigen presenting cells (APC) associate with T or B lymphocytes forming the immunological synapse. Evidence has shown that this process is mechanosensitive, although cell-to-cell bonds and mechanosensing are mediated by the interaction of the peptide major histocompatibility complex (pMHC) in the APC with the specific T or B cell receptor in the lymphocyte (Bashoura et al., 2014; Liu B. et al., 2014; Liu et al., 2016).
FIGURE 1 | Molecular complexes involved in cell adhesion processes. Cells form bonds with the ECM and with other cells by protein receptors in the plasma membrane (A). These receptors are linked to the cytoskeleton by adaptor proteins. In (B), a scheme of a FA, where integrins (formed by α and β subunits) recognize specific ligands in the ECM. In the cytoplasm, talin, kindlin, and vinculin among other proteins ("adaptor proteins" in the image) associate with the integrins, forming a complex that acts as a molecular clutch. The adaptor proteins connect the integrins with the cytoskeleton and participate actively in mechanosensing. In (C), a scheme of an adherent junction, where cadherins in the extracellular medium establish bonds with cadherins of the other cell. The link between cadherins and the actin cytoskeleton is mediated by the adaptor proteins α- and β-catenin.
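The molecular "clutch" picture described above is often explored with simple stochastic models in which individual integrin-adaptor links engage at a fixed rate, are loaded by retrograde actin flow, and release at a force-dependent rate following Bell's law. The toy simulation below is loosely patterned on such motor-clutch models from the literature; every parameter value is invented for illustration and carries no experimental meaning:

```python
import math
import random

random.seed(0)

# Illustrative clutch parameters (assumptions, not measured values).
N = 50          # number of clutches (integrin/adaptor links)
k_on = 1.0      # engagement rate (1/s)
k_off0 = 0.1    # unloaded release rate (1/s)
F_b = 2.0       # characteristic bond force for Bell's law (pN)
v = 10.0        # retrograde actin flow speed (nm/s)
kappa = 1.0     # clutch stiffness (pN/nm)
dt = 0.01       # time step (s)

engaged = [False] * N
stretch = [0.0] * N   # extension of each clutch (nm)
total_force = 0.0

for _ in range(int(50 / dt)):              # simulate 50 s
    for i in range(N):
        if engaged[i]:
            stretch[i] += v * dt           # bond is loaded by actin flow
            f = kappa * stretch[i]
            k_off = k_off0 * math.exp(f / F_b)   # force-dependent release
            if random.random() < k_off * dt:
                engaged[i] = False
                stretch[i] = 0.0
        elif random.random() < k_on * dt:
            engaged[i] = True              # clutch engages, unstretched

    total_force = sum(kappa * s for s, e in zip(stretch, engaged) if e)

print(f"engaged clutches: {sum(engaged)}/{N}, total load: {total_force:.1f} pN")
```

Even this crude sketch reproduces the qualitative load-and-fail dynamics of the clutch: bonds build force while engaged and release stochastically faster as the force grows.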
RELEVANCE OF BIOSENSORS IN CELL ADHESION STUDIES
Much of the information we have so far on how cells adhere to the ECM and to other cells has come from a wide set of bioanalytical tools. Genetic modifications such as gene knockout (Elosegui-Artola et al., 2016; Strohmeyer et al., 2017), modulation of protein expression by siRNA (Plotnikov et al., 2012; Bazellières et al., 2015), and the use of externally added inhibitors (Bashoura et al., 2014; Collins et al., 2017) are important methods to investigate the role of different proteins in cell sensing mechanisms. Cell behavior following these modifications is often monitored by optical microscopy-based techniques such as immunofluorescence (Engler et al., 2006), in vivo observation of recombinant fluorescent proteins (Reffay et al., 2014; Oria et al., 2017), genetically incorporated molecular probes (Grashoff et al., 2010), and, more recently, optochemical probes (Endo et al., 2019; Ollech et al., 2020). Even though these methods have revealed important aspects of the molecular mechanisms of cellular mechanotransduction, they cannot provide a complete description of the interaction between cells and their environment. In this sense, biomaterials with built-in sensing capabilities have contributed to our understanding of cell mechanosensing by providing information not previously available from other bioanalytical methods. For example, it was not until the work of Harris et al. (1980) with soft silicone substrates that the traction forces generated by cells were quantified during cell spreading and migration. Decades later, advances in materials science and polymer chemistry have enabled remarkable improvements on this groundbreaking approach, leading to a better understanding of how cells mechanically interact with the ECM (Roca-Cusachs et al., 2017).
Notably, although optical microscopy-based techniques have been fundamental to our understanding of the cell adhesion process, they often involve endpoint measurements and time-consuming sample preparation (e.g., immunostaining). This greatly restricts the temporal resolution achievable with these techniques. Genetically encoded fluorophores can overcome this drawback but require modification of the cell genome, and the time span of the measurement is limited by photobleaching. Transillumination microscopy imaging, such as phase contrast or differential interference contrast, does not have this limitation, but provides only qualitative information. In this context, non-invasive and non-destructive methods stand out as sensitive and quantitative approaches to study cell adhesion-related processes in real time. These include electrochemistry, quartz crystal microbalance (QCM), surface enhanced Raman spectroscopy (SERS), and surface plasmon resonance (SPR) (Janshoff et al., 2010; Méjard et al., 2014; Suhito et al., 2018) (more details on these approaches can be found in sections "Cell-Substrate Adhesions" and "Cell Adhesion Biomarkers"). Analytical approaches based on these techniques provide unique information with high time resolution about the cell-matrix and cell-cell interface. For instance, electric cell-substrate impedance sensing (ECIS) methods can easily reveal the formation of cell-cell junctions and cell-substrate contacts, and are extensively used to follow epithelial maturation (Ngok et al., 2013; Gamal et al., 2017; Van Der Stoel et al., 2020). Surface enhanced Raman spectroscopy (SERS) is ideal for obtaining information about biochemical changes inside cells in the proximity of a nanostructured surface (El-Said et al., 2015; Haldavnekar et al., 2018; Rusciano et al., 2019). In cell culture, substrate materials provide the physical scaffold that supports cells, allowing their adhesion and proliferation.
Therefore, these materials have to be biocompatible, meaning that they have to comply with defined requirements in terms of topology, stiffness, and chemical composition. However, substrates are not necessarily relegated to being a physical support; on the contrary, by incorporating a transducer element these materials can reveal crucial aspects that regulate cell behavior. This type of responsive biomaterial can be classified as a biosensor. Conventionally, biosensors are devices able to provide selective quantitative analytical information using a biological recognition element and a transducer component. In the context of this review, the recognition element is often a specific cell-binding molecule (e.g., the tripeptide motif Arg-Gly-Asp (RGD) that binds αvβ3 integrin), while the transducer can be optical (SERS, traction force microscopy), electrochemical (ECIS), or piezoelectric (QCM). Because of its pivotal role, the transducer element defines important aspects of the analytical response, such as spatial and temporal resolution, signal-to-noise ratio, and selectivity. Despite all the progress made in studying how cells sense and react to the physicochemical properties of their environment, understanding the mechanisms by which cells transduce mechanical signals into biochemical events is still a challenge. Moreover, the relationship between these events and cell differentiation, physiological function, and pathology has not been elucidated (Mohammed et al., 2019). In the field of regenerative medicine, such a scenario raises important questions about how to ensure the efficiency of materials in promoting cell differentiation. In this regard, biomaterials with cell-instructive characteristics and built-in sensing capabilities could provide valuable information about cell-material interactions. Devices capable of providing new sources of information will be key to elucidating how different properties of the cell matrix affect cell behavior.
In order to discuss the relevance of biosensors in cell behavior studies, each section of this review groups biosensors according to the biological variable about which they provide information.
Cell-Substrate Adhesions
After a cell in suspension makes first contact with a substrate, it carries out successive steps of attachment, adhesion, and spreading, in some cases followed by migration and proliferation. To allow cell adhesion, substrates require the presence of adhesion-promoting proteins or ligands immobilized on the surface (Janshoff et al., 2010). There are two options for modifying a substrate to elicit cell adhesion: the adhesive molecules are incorporated during substrate synthesis, or they are secreted by the cells during the adhesion process. In the second case, the material has to allow protein adsorption; surface wettability and topography have a major influence on this process (Prime and Whitesides, 1991; Janshoff et al., 2010). Most cells have excellent insulating properties. When cells adhere to an electrode surface, they modify the environment at the solution-electrode interface, affecting the charge transfer events at the surface (Giaever and Keese, 1993; Ding et al., 2008). This phenomenon was exploited by Giaever and Keese (1984, 1991) when they created the first electrodes to study cell adhesion using electric impedance. In their design, the substrate incorporated working electrodes and a counter electrode. The working and counter electrodes were connected to a lock-in amplifier, and the culture medium completed the circuit. The authors monitored cell adhesion events by applying an alternating sinusoidal voltage and monitoring the current. When cells adhered and spread on the working electrode, they generated an impedance increase as a consequence of the formation of an insulating layer (Giaever and Keese, 1993).
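The impedance increase described above can be illustrated with a deliberately simplified lumped-element model (not the distributed-parameter Giaever-Keese model used for quantitative ECIS analysis): the bare electrode is represented by a solution resistance in series with the interfacial capacitance, and the adherent cell layer adds a coverage-scaled barrier. All component values below are hypothetical, chosen only to show the qualitative trend.

```python
import math

def electrode_impedance(freq_hz, coverage, r_sol=1e3, c_dl=20e-9,
                        r_cell=50e3, c_mem=5e-9):
    """|Z| of a toy ECIS electrode model (all component values hypothetical).

    Bare electrode: solution resistance r_sol in series with the
    double-layer capacitance c_dl. The adherent cell layer adds a
    barrier (r_cell in parallel with membrane capacitance c_mem),
    scaled by the fractional coverage (0 = bare, 1 = confluent).
    """
    w = 2 * math.pi * freq_hz
    z_bare = complex(r_sol, -1 / (w * c_dl))
    # Parallel RC barrier contributed by the cell layer
    z_barrier = 1 / (1 / r_cell + complex(0, w * c_mem))
    return abs(z_bare + coverage * z_barrier)

# At a typical ECIS monitoring frequency (4 kHz), |Z| rises
# monotonically as cells cover the electrode.
zs = [electrode_impedance(4e3, c) for c in (0.0, 0.5, 1.0)]
print([round(z) for z in zs])
```

The model reproduces the key observable: the more of the electrode the insulating cell layer covers, the larger the measured impedance magnitude.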
This technique was called electric cell-substrate impedance sensing (ECIS) and, due to its high sensitivity, has been used for monitoring cell attachment (Han et al., 2011; Xue et al., 2011), spreading (Wegener et al., 2000; Arias et al., 2010; Pradhan et al., 2014), locomotion (Giaever and Keese, 1991; Wang et al., 2008), and apoptosis (Arndt et al., 2004; Liu et al., 2009). Impedance measurements depend on the number of cells seeded on the electrode, their morphology and motility, and the formation of cell-cell interactions (for further details see section "Cell-Cell Adhesion"). Data analysis is aided by mathematical models that allow calculating cell morphological parameters (Giaever and Keese, 1991; Lo et al., 1995). For more detailed reviews on ECIS, see Janshoff et al. (2010) and Hong et al. (2011). The use of ECIS substrates for monitoring cell behavior has advantages over traditional optical microscopy methods. ECIS is a non-invasive and non-destructive technique capable of providing information without the need for cell staining (Suhito et al., 2018). In microscopy, quantification of cell adhesion and spreading requires tedious data processing compared to the straightforward information provided by impedance measurements. Besides, impedance can be recorded on cells over days with a temporal resolution of seconds (Hong et al., 2011). Moreover, transparent ECIS substrates can be excellent complements to optical microscopy, as ECIS can provide information not easily accessible by visualization, such as the formation of cell-cell junctions or cell micromotion (Giaever and Keese, 1991; Lo et al., 1995) (Table 1). Coatings on substrate materials for ECIS can offer better control of adhesive cell behavior without hampering the sensing capabilities of the electrodes (Giaever and Keese, 1986).
Different strategies have been reported, including adherent polymer coatings, self-assembled monolayers (SAM) (Parviz et al., 2017), metallic nanoparticles (Kim et al., 2013; Pallarola et al., 2017a), carbon nanotubes (Srinivasaraghavan et al., 2014), and silicon nanowires (Abiri et al., 2015). Susloparova et al. (2015) created new substrates for ECIS using open-gate field-effect transistors instead of gold electrodes, which made it possible to obtain single-cell resolution in the impedance measurements. Decker et al. (2019) employed 3D nanostructured multielectrode arrays to study cell adhesion. Using nanoimprint lithography, the authors created electrodes with incorporated nanostructures of different forms, dimensions, and pitch lengths in a reproducible way. By changing the synthesis parameters, especially the electroplating time, the height and shape of the nanostructures could be modulated (Figures 2A,B). The authors created a multi-electrode array with half of the electrodes carrying nanostructured patterns and half without them, and tested different types of pillar-shaped nanostructures, varying the distances between the nanostructures and their shape. Cells could attach to both the nanopatterned and unpatterned electrodes, although the nanostructured ones displayed a lower impedance (Figure 2C). However, upon cell adhesion, some nanostructures showed increased cell-nanostructure coupling and a larger impedance change as a consequence of cell adhesion (Figure 2D). This work showed how nanostructured topographies on electrodes can improve ECIS biosensing capabilities. Moreover, ECIS sensitivity can be enhanced using redox probes. In this approach, the probe (for example, [Fe(CN)6]3−) is incorporated into the culture medium (Ding et al., 2008). Cell adhesion and spreading on the electrode form a barrier that hinders the access of the probe to the electrode, decreasing electron transfer.
Thus, high sensitivity to the area covered by the cell can be achieved (Ding et al., 2007). A modification of this strategy was used in a recent work by Du et al. (2020), in which a biosensor based on nanocomposite materials was created to follow the epithelial-mesenchymal transition of A549 lung cancer cells. Piezoelectric materials can also be employed for monitoring cell adhesion. Quartz crystal microbalance experiments are performed on sensors made of an α-quartz disk sandwiched between two metal electrodes. Due to the piezoelectric nature of α-quartz, any mechanical deformation of the crystal creates an electrical potential at the quartz surface, and vice versa (Janshoff et al., 2010; Chen et al., 2018). In the most common approach, an alternating current is applied between the electrodes, allowing the resonance frequency of the crystal to be measured over time. Cells can adhere and grow on the resonator surface, which produces changes in its resonance frequency (Δf). Moreover, other materials can be deposited on the resonator surface to assess cell adhesion to them. It has been shown that Δf changes correlate with cell coverage on the sensor surface (Redepenning et al., 1993; Wegener et al., 1998; Tagaya et al., 2011). Hence, monitoring Δf as a function of time has been employed to follow cell adhesion, spreading, and proliferation (Ishay et al., 2015). However, due to the viscous nature of cells and culture media, changes in the vibrational energy dissipation (ΔD) of the sensor can also provide relevant information on cell behavior. Nonetheless, the link between ΔD and the physical characteristics of cells that elicit these changes is not yet understood (Xi and Chen, 2013).
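For thin, rigid films, the frequency shift Δf maps linearly onto deposited mass through the Sauerbrey relation, Δf = −2 f₀² Δm / (A √(ρq μq)); for soft, highly hydrated cell layers this holds only approximately, which is precisely why the dissipation ΔD is monitored as well. A minimal sketch (the quartz constants are standard AT-cut values; the adsorbed mass is an illustrative figure):

```python
import math

RHO_Q = 2648.0   # quartz density, kg/m^3
MU_Q = 2.947e10  # AT-cut quartz shear modulus, Pa

def sauerbrey_shift(mass_per_area_kg_m2, f0=5e6):
    """Frequency shift (Hz) for a rigid film of given areal mass on a
    crystal with fundamental frequency f0 (Hz). Valid only for thin
    rigid films, not viscoelastic cell layers."""
    return -2 * f0**2 * mass_per_area_kg_m2 / math.sqrt(RHO_Q * MU_Q)

# Mass sensitivity of a 5 MHz crystal: the areal mass producing a 1 Hz
# shift, converted to ng/cm^2 (1 kg/m^2 = 1e8 ng/cm^2). Should come out
# near the textbook value of ~17.7 ng cm^-2 Hz^-1.
one_hz_mass = math.sqrt(RHO_Q * MU_Q) / (2 * 5e6**2)
print(one_hz_mass * 1e8)

# Illustrative: 100 ng/cm^2 of adsorbed protein (= 1e-6 kg/m^2)
df = sauerbrey_shift(1e-6)  # roughly -5.7 Hz
```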
The ratio ΔD/Δf has been regarded as a fingerprint of the cell adhesion process, as different cell lines display different ΔD and Δf behaviors during adhesion and proliferation (Fredriksson et al., 1998). QCM sensors can be modified with coatings to provide enhanced cell adhesion, although the nature of the coating can influence the response of the sensor (Lord et al., 2006). Despite the need for uncommon materials (α-quartz sensors), QCM is an inexpensive and valuable technique to monitor cell adhesion dynamics. Table 1 summarizes the advantages and disadvantages of QCM for the study of cell adhesion. Materials with surface plasmonic properties allow the implementation of an evanescent wave-based optical technique, surface plasmon resonance (SPR). SPR can be employed to monitor cell adhesion processes in the proximity of the substrate surface (Chabot et al., 2009; Peterson et al., 2009; Wang et al., 2012). SPR sensors consist of a glass substrate (LaSFN9 or BK7) coated with a thin gold layer (~50 nm). These sensors allow monitoring cell-substrate interactions occurring within the first few hundred nanometers above the gold layer, due to the evanescent decay of the plasmon perpendicular to the surface interface (Willets and Van Duyne, 2007). Molecules in the proximity of the interface interact with the confined electromagnetic wave, so that changes in the refractive index at the metal surface alter the resonance (Homola, 2003; Willets and Van Duyne, 2007). Therefore, events occurring on the substrate surface, such as cell adhesion and spreading, modify the local refractive index, which can be followed in real time (Yashunsky et al., 2010; Borile et al., 2019). The changes in refractive index are measured by irradiating the gold surface through a high-refractive-index prism at an angle that yields total internal reflection.
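The "first few hundred nanometers" sensing range quoted above follows from the surface-plasmon dispersion relation: the 1/e penetration depth of the evanescent field into the dielectric is δ = (λ/2π)·√(|εm′ + εd|)/εd, where εm′ is the real part of the metal permittivity and εd that of the medium. A quick estimate for gold in water at 633 nm (the permittivity values are approximate literature figures):

```python
import math

def spr_penetration_depth(wavelength_nm, eps_metal_real, eps_dielectric):
    """1/e decay length (nm) of the surface-plasmon evanescent field
    into the dielectric: delta = (lambda/2pi) * sqrt(|eps_m' + eps_d|) / eps_d."""
    return (wavelength_nm / (2 * math.pi)) * \
        math.sqrt(abs(eps_metal_real + eps_dielectric)) / eps_dielectric

# Gold in water at 633 nm: eps_Au' ~ -11.7 (Johnson & Christy data),
# eps_water ~ 1.77. The result, roughly 180 nm, explains why the sensor
# only "sees" the basal membrane and the cell-substrate adhesion zone.
depth = spr_penetration_depth(633, -11.7, 1.77)
print(round(depth), "nm")
```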
The reflected light is interrogated by varying the angle of incidence at a fixed wavelength or by changing the wavelength at a fixed angle (Willets and Van Duyne, 2007). In addition, spatial resolution of the cell-substrate contacts can be obtained using SPR microscopy (Rothenhäusler and Knoll, 1988). SPR has the advantage of detecting changes in cell morphology, outperforming ECIS and QCM in this respect (Table 1). Notably, SPR can be combined with electrochemical measurements, as the local effective refractive index at a given spot depends on the local charge density of the surface. Changes at the electrode interface caused by cell adhesion processes modify the local impedance of the surface, which is translated into changes in the local SPR signal. Exploiting this phenomenon, Wang et al. developed electrochemical impedance microscopy (EIM) using a transparent conductive substrate with plasmonic properties (Wang et al., 2011). Mapping the local impedance on the surface made it possible to obtain high-resolution images of the cell-substrate contacts. This technology is an example of how recent advances in biosensor substrates allow the cell-substrate interaction to be characterized with ever-increasing temporal and spatial resolution.
Cell-Cell Adhesion
Direct interactions between cells are often mediated by sets of ligands and receptors expressed by both cells. When cells have to build long-term bonds between them, they assemble different types of junctions: adherent junctions, tight junctions, and desmosomes (Ladoux and Mège, 2017). In other cases, cell-cell adhesions have to be transient, like those formed by natural killer lymphocytes and their target cells (Orange, 2008). Recently, Pallarola et al. (2017a) developed a nanostructured electrochemical sensor exhibiting high sensitivity to the formation of cell-cell adhesion interactions.
The sensing platform consisted of a 100-µm-diameter ITO microelectrode patterned with an ordered array of AuNPs and surrounded by a SiO2-insulating layer (Figure 3A). The sensor was built by a combination of diblock copolymer micelle nanolithography (Pallarola et al., 2014) and photolithography. The use of a gold nanopatterned surface allowed precise control over the distribution of cell adhesion ligands on a non-adhesive, PEG-passivated background (Pallarola et al., 2017b). Cell behavior was monitored over several hours by simultaneous electrochemical impedance spectroscopy and optical microscopy (Figure 3B). The surface of the electrode exhibited high sensitivity toward early events of cell interaction. Resistance and capacitance recordings were used to study the behavior of different cell types. It was observed that cell lines expressing lower levels of E-cadherin registered lower resistance values at low frequency (429 Hz) (Figures 3C,D). This was in agreement with the fact that increased cell-to-cell adhesion results in a lower paracellular current (Hong et al., 2011). This feature allowed distinguishing between different cell types based on the density of adherent junctions, as observed for MCF-7 cells in comparison with MCF-10A cells. This approach is a powerful tool to study the dynamics of cell-cell contact formation and the remodeling of junctions under specifically engineered environments in a highly sensitive, instantaneous, and non-destructive manner. The integrity of epithelia relies on the ability to form strong junctions between cells. Another suitable approach to monitor the formation of cell-cell contacts is to measure the electrical impedance across an epithelium placed between two electrodes. Measurement of trans-epithelial electrical resistance (TEER) is a valuable method for evaluating barrier tissue integrity in vitro.
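To make TEER values comparable across devices, the blank (cell-free) resistance is typically subtracted and the result is normalized by the culture area, giving units of Ω·cm² (see the review by Srinivasan et al., 2015, cited below). A minimal sketch of that bookkeeping (the resistance readings and insert area are invented for illustration):

```python
def teer(r_total_ohm, r_blank_ohm, area_cm2):
    """Unit-area trans-epithelial electrical resistance (ohm * cm^2).

    r_total_ohm: resistance measured across the cell-covered membrane
    r_blank_ohm: resistance of the same device without cells
    area_cm2:    area of the membrane supporting the monolayer
    """
    if r_total_ohm < r_blank_ohm:
        raise ValueError("cell layer cannot lower the total resistance")
    return (r_total_ohm - r_blank_ohm) * area_cm2

# Hypothetical readings from a 0.33 cm^2 Transwell-style insert:
print(round(teer(1200.0, 130.0, 0.33), 2))  # -> 353.1 ohm*cm^2
```

Expressing TEER per unit area is what allows a macroscopic filter insert and a microfluidic organ-on-chip channel to be compared directly.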
TEER measurements of cell cultures have been widely used in research (Ferruzza et al., 1999; Huh et al., 2010; Lippmann et al., 2012); for a more detailed review on this topic, see the article by Srinivasan et al. (2015). In particular, the integration of immobilized TEER electrodes with microfluidics holds great potential to study cell barrier functions and cell behaviors in cell-mimetic environments. For example, Henry et al. (2017) developed a robust approach to fabricate microfluidics-based organs-on-chips with fully integrated electrodes (Figures 3E,F). The assembly of tight junctions was monitored by measuring the TEER and the capacitance at low frequency. The authors demonstrated that when cells established tight junctions between them, the capacitance reached its maximum. A classical experiment to prove the ability of electrochemical devices to measure the formation of tight junctions is to add EGTA to the culture medium (Lo et al., 1995). The authors also showed that impedance decreased over time when Ca2+ was sequestered, due to the disassembly of the tight junctions (D'Angelo Siliciano and Goodenough, 1988). After EGTA is removed, tight junctions are reassembled and the impedance recovers (Figure 3G). The authors could also employ the chip to follow the behavior of airway epithelial cells cultured at an air-liquid interface (Figure 3H). This work established a standard and reproducible protocol for the fabrication of organ-on-chip systems with TEER-based sensing capabilities. This kind of platform displays promising applications, as TEER measurements are frequently used to follow epithelial integrity and differentiation in organs-on-chips (van der Helm et al., 2019; De Gregorio et al., 2020).
Cell and Tissue Architecture
When cells constitute tissues, cell-ECM and cell-cell adhesions are orchestrated to achieve proper collective structure and organization.
During tissue formation, cells selectively form bonds between them, change shape, migrate, and synthesize ECM (Gumbiner, 1996). In this context, materials with the ability to control the topological and geometrical properties of cells are desirable in order to induce the same cell architecture found in normal tissues. Due to the frequently complex 3D topological organization of cells in tissues, analyzing the response of single cells in vivo is challenging (Ladoux and Mège, 2017). Therefore, smart in vitro strategies are necessary to mimic the conditions of cells in multicellular organizations as closely as possible. The incorporation of cell geometry aspects adds a layer of complexity to the design of biosensors, particularly if nanometric topologies are also incorporated into the substrate material. The ability to produce precisely engineered scaffolds can provide a way to control cell architecture during culture. Conventional cell cultures lack the ability to control cell spatial organization; therefore, micropatterning techniques have been developed to create 2D patterns that control cell-substrate interactions and cell behavior. Micropatterning can be done by microcontact printing (µCP), which consists of creating micro-stamps, depositing an "ink" material on them, and printing the material onto a substrate, resulting in a 2D pattern of the "ink" material on the surface (Alom Ruiz and Chen, 2007). The first micropatterning methods were developed in the late 1960s (Carter, 1967; Harris, 1973). However, they became more widely available with the increased accessibility of photolithography techniques. The use of polydimethylsiloxane elastomer (PDMS) to prepare molds made the process easier, allowing patterns of proteins to be printed on substrates (López et al., 1993b) and employed to study cell adhesion on different patterns (Whitesides et al., 2001).
There are other methods to create micropatterns on surfaces, such as photolithography (Bélisle et al., 2009; Kim et al., 2010), microwriting (López et al., 1993a), and microfiltration (Kailas et al., 2009), although µCP remains one of the most common methods (Théry, 2010). Micropatterning has also been successfully combined with nanopatterning to create substrate materials that control topography in both the micro- and nanometer range (Ren et al., 2017). New strategies have been developed for orthogonally functionalizing surfaces with two different adhesion ligands, controlling each nanodistribution independently (Polleux et al., 2011; Guasch et al., 2016; Yüz et al., 2018). Surfaces can be tailored with Au or TiO2 nanoparticles that act as anchor points for cell adhesive ligands. A similar approach was used to orthogonally functionalize micropatterns on a surface with two ligands specific for either α5β1 or αVβ3 integrin (Guasch et al., 2015). Using photolithography and metal sputtering, a micropatterned surface consisting of stripes of Au or TiO2 was synthesized. The different Au or TiO2 areas could be functionalized selectively by using integrin ligands bearing a thiol or a phosphonic acid group, respectively. These examples show how the adhesive ligand distribution of a substrate can be precisely controlled to build micro- and nanopatterned substrates. The creation of micropatterns on different materials opens the possibility of introducing topographical features on surfaces with built-in sensing capabilities. Micropatterning can be applied to sensing materials in order to study the role of cell architecture in mechanobiology. Ribeiro et al. (2015) created Matrigel microtopographies on polyacrylamide substrates with embedded fluorescent beads to study the differentiation of cardiomyocytes derived from human pluripotent stem cells (hPSC-CM).
The material design made it possible to: (i) control the 2D cellular aspect ratio on the substrate; (ii) modulate the stiffness of the substrate; and (iii) measure the contractile force of cardiomyocytes by traction force microscopy (TFM). hPSC-CM were cultivated on isolated rectangular micropatterns with different aspect ratios. The authors showed that myofibril alignment and contractile forces along the major axis of the cell were greatest at a high aspect ratio (7:1) and physiological stiffness (10 kPa). These conditions indicated a more differentiated phenotype compared to cells growing on micropatterns with other aspect ratios and/or substrate stiffnesses. The results were also supported by characterization of Ca2+ signaling, mitochondrial organization, and protein expression in the cells. The ability of this functional material to modulate cell aspect ratio, substrate stiffness, and force transduction makes it a powerful tool to control hPSC-CM differentiation. However, despite the versatility of micropatterning techniques on 2D substrates, they fail to mimic the in vivo-like 3D microenvironment and organization. Several strategies have been developed to create biomimetic 3D scaffolds, such as solid porous substrates (Lai et al., 2012), hydrogel matrices (McKinnon et al., 2013), microconduits (Anderson et al., 2016), and microtracks (Kraning-Rush et al., 2013). The use of two-photon polymerization has enabled the creation of very complex 3D scaffolds for cell culture (Turunen et al., 2017). Nevertheless, the incorporation of controlled nanopatterning and built-in sensing capabilities is still a challenge in 3D culture matrices. To create a 3D biosensor able to transduce cell behavior in 3D environments, Pitsalidis et al. (2018) designed an organic biotransistor in which the conductive polymer poly(3,4-ethylenedioxythiophene) doped with poly(styrene sulfonate) (PEDOT:PSS) was used to create a suitable scaffold for cell growth.
The addition of collagen and single-walled carbon nanotubes (SWCNTs), to improve biocompatibility and electrical performance, respectively, was also studied. The polymer scaffold was then fixed between two electrodes in a chamber filled with an aqueous solution containing a gate electrode. Under these conditions, the organic polymer scaffold was in direct contact with the two electrodes and in indirect contact with the gate through the aqueous medium. The system behaved as a transistor: the two electrodes induced a current through the scaffold that depended on the gate voltage. Due to the porous nature of the scaffold, cells could attach and grow within it. Proliferation of Madin-Darby canine kidney II (MDCKII) cells inside the porous scaffold induced a decrease in transconductance, which allowed cell growth to be monitored in real time. Also, after 3 days of culture, MDCKII cells exhibited lower transconductance compared with telomerase-immortalized fibroblasts (TIF). The authors explained these results by the presence of a higher density of cell-cell junctions in MDCKII compared to TIF. Although this device is not yet fully biomimetic, it represents a step forward in creating electrochemical sensing platforms for real-time 3D cell culture studies.
Cell Adhesion Biomarkers
Biochemical signals have a critical role in tissue formation. They can be found free in the extracellular medium, embedded in the ECM, or at the surface of other cells, as in the Notch signaling pathway (Chacón-Martínez et al., 2018). At the same time, cell differentiation is accompanied by the release of biochemical mediators such as neurotransmitters in neurons (Kruss et al., 2017), hormones in endocrine cells (Lund et al., 2016; Hunckler and García, 2020), and ECM components (Mehlhorn et al., 2006; Zhang et al., 2020). Moreover, differentiation induces changes in cell metabolism that modify the concentrations of intermediates in metabolic pathways (Quinn et al., 2013; Carey et al., 2015).
The creation of active materials capable of detecting molecules released by cells, although highly valuable, remains a challenge. This can be attributed to the fact that, in many cases, the sensor material is passivated by the same molecules released by the cells (Spégel et al., 2007), and to the complex mixtures of biochemicals that can interfere with the sensing process (Huang et al., 2011). Furthermore, high sensitivity is required, since the target molecule is often present at low concentrations and only for short periods of time (Amatore et al., 2008). Electrochemical biosensors provide a versatile and sensitive means of probing the content of biological environments (Ding et al., 2008). Wang et al. (2018) created stretchable, photocatalytically renewable electrodes for nitric oxide (NO) sensing by functionalizing PDMS films with a nanonetwork of Au nanotubes (NTs) and TiO2 nanowires (NWs). The Au NTs provided the electrochemical sensing performance, while the TiO2 NWs provided the photocatalytic activity needed to recover the performance of the sensor after UV irradiation. In addition, electrochemical biosensors can measure neurotransmitters released by neurons by measuring the redox reaction of the neurotransmitter at the electrode. Kim et al. (2015) created a nanopatterned electrochemical sensor to monitor the dopaminergic differentiation of human neural stem cells (hNSCs). The nanopatterned surface showed increased cell adhesion and spreading of a dopaminergic cell line (PC12) and enhanced sensitivity toward dopamine compared to a planar gold or ITO electrode. Spatial resolution of neurotransmitter release can be obtained by using microelectrode arrays (MEAs). Wang et al. (2013) created subcellular MEAs, ranging from 4 to 16 µm², to record the release of dopamine across single cells and PC12 cell clusters.
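For voltammetric detection of a redox-active analyte such as dopamine, the expected peak current at a macroelectrode is commonly estimated with the Randles-Ševčík equation, i_p = 2.69×10⁵ n^(3/2) A D^(1/2) C v^(1/2) (at 25 °C, with A in cm², D in cm²/s, C in mol/cm³, and v in V/s). A quick order-of-magnitude estimate for a micromolar dopamine concentration; the electrode area and diffusion coefficient below are illustrative values, not figures from the cited studies:

```python
import math

def randles_sevcik_current(n, area_cm2, d_cm2_s, conc_mol_cm3, scan_v_s):
    """Peak current (A) of a reversible redox couple at 25 C:
    i_p = 2.69e5 * n^(3/2) * A * sqrt(D) * C * sqrt(v)."""
    return 2.69e5 * n**1.5 * area_cm2 * math.sqrt(d_cm2_s) \
        * conc_mol_cm3 * math.sqrt(scan_v_s)

# Illustrative estimate: 2-electron dopamine oxidation, 3-mm disk
# electrode (~0.071 cm^2), D ~ 6e-6 cm^2/s, 1 uM dopamine (1e-9 mol/cm^3),
# 100 mV/s scan rate -> tens of nanoamps, which is why low-noise
# electronics and enhanced electrode surfaces matter at these levels.
ip = randles_sevcik_current(n=2, area_cm2=0.071, d_cm2_s=6e-6,
                            conc_mol_cm3=1e-9, scan_v_s=0.1)
print(f"{ip * 1e9:.0f} nA")
```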
In addition to sensors based on electrochemical transduction, sensors based on optical readout have been proposed for monitoring molecule release by cells (Kim et al., 2018; Dinarvand et al., 2019; Liu et al., 2020). Kruss et al. (2017) developed single-walled carbon nanotubes wrapped with short DNA sequences (DNA-wrapped SWCNT) to measure dopamine release from PC12 cells. These DNA-wrapped SWCNT display near-infrared fluorescence and change their fluorescence emission spectrum in the presence of specific organic molecules (Figure 4A). The authors optimized the DNA sequences to improve the selectivity and sensitivity toward dopamine. The sensors were then immobilized on a glass surface, and PC12 cells were seeded on it. Dopamine release from the PC12 cells was measured by fluorescence microscopy: the fluorescence emission of the surface-immobilized DNA-wrapped SWCNT depended on the local dopamine concentration at the surface. The authors recorded fluorescence images of the cells and divided the images into discrete pixels. A fitting algorithm was developed for the normalized fluorescence intensity traces of the pixel groups. Using this method, the authors could localize transient peaks in the fluorescence recordings caused by dopamine exocytic events (Figure 4B). The results showed that PC12 cells secrete dopamine at defined exocytosis sites or "hot spots," rather than releasing neurotransmitters at random locations on the membrane (Figures 4C,D). Notably, the authors demonstrated that the substrate functionalized with DNA-wrapped SWCNTs could deliver higher spatial resolution than MEAs and a time resolution similar to that of cyclic voltammetry-based sensors. Among existing materials for detecting the release of molecules from cells, metal nanopatterned substrates have a remarkable advantage: metal nanostructured surfaces with plasmonic properties can enhance the Raman scattering of molecules, a phenomenon known as surface enhanced Raman scattering (SERS).
FIGURE 4 | Detection of dopamine released by PC12 cells with nanosensor arrays. In (A), a scheme of the biosensor surface, where DNA-wrapped SWCNTs were immobilized on a glass surface. Surfaces were coated with collagen to facilitate cell adhesion. The DNA-wrapped SWCNTs modified their fluorescence spectrum as a consequence of non-covalent dopamine binding. In (B), the scheme shows the analysis of experimental images. Each pixel of the images corresponded to a region containing one or more DNA-wrapped SWCNT nanosensors. Each pixel of the fluorescence movies produces a trace that contains information about the local dopamine concentration. A function was fitted to the data of each pixel to obtain the amplitude, width, and time of the signal. The fitted parameters can be represented in false-color images. In (C), the image shows released dopamine profiles across the borders of different cells. For this analysis, only pixels on the cell border were considered. At the top of (C), 3D plots of fitted sensor signal responses at different times before or after stimulation (t0) are shown. Height and color of the 3D plot surfaces indicate the relative fluorescence change in pixels on the cell border, normalized to the maximum fluorescence change in the same cell (values between 0 and 1). Results show that the maximum response is acquired at t0, after which the signal decreases. In (D), images 1, 2, and 3 indicate different cells and their respective 3D plots. In this case, the height of the 3D surface plots shows the maximum dopamine response obtained in each pixel, showing that dopamine is released at particular locations or "hot spots" on the cell membrane.

Raman scattering occurs because of the inelastic scattering of photons by molecules. Due to their different vibrational modes, molecules produce a spectrum of Raman scattered light, which contains information about the chemical identity of the molecule (Kneipp et al., 2010). Raman scattering is often weak, but when the molecules are very close to a metal nanostructured surface, their Raman scattering can be enhanced by several orders of magnitude. The detailed mechanisms by which molecules enhance their Raman scattering near metal nanostructures are beyond the scope of this review, but the reader can refer to these excellent articles for more details (Haynes et al., 2005; Kneipp et al., 2010; Schlücker, 2014). SERS is a promising strategy to analyze the chemical composition near a nanostructured surface in a non-invasive and sensitive way. Cells can be seeded on SERS substrates and, once they have adhered and spread, their chemical composition near the surface can be studied by irradiating the substrate and measuring the Raman spectra. Over the years, a wide diversity of metal nanostructured surfaces has been created for SERS, such as nanoparticles attached to a surface (Freeman et al., 1995; Zhai et al., 2009; Lussier et al., 2016), nanopillars (Kang et al., 2015; Li et al., 2016), nanoholes (Abdelsalam et al., 2005; Luo et al., 2019), and others (Yüksel et al., 2017; Yao et al., 2020). A detailed review on the fabrication of SERS substrates can be found in Fan et al. (2011). El-Said et al. (2015) built a nanopatterned surface for in vitro monitoring of neural stem cell (NSC) differentiation. The authors electrochemically deposited Au on an ITO surface to obtain an array of Au nanostars. PC12 cells adhered and spread on this substrate, and could be electrically stimulated due to the conductive characteristics of the material. The Raman spectrum was recorded at different stages of the cell differentiation process and exhibited different peaks that could be attributed to the presence of specific functional groups in biomolecules. Although it is difficult to link changes in the Raman spectra to specific changes in the biochemical composition of cells, the Raman spectra can be used as a fingerprint to follow the cell differentiation process in a non-invasive manner.
Excitable Cell Electrical Activity

Changes in membrane potential are a critical aspect of neuron and myocyte function. Action potentials travel through the membrane of these cells, allowing communication of signals between different parts of the cell, eliciting the release of signaling molecules like neurotransmitters, and triggering the contractile activity of muscle cells. Thus, proper electrical activity is an important characteristic of differentiated neural and cardiac tissues (Gunhanlar et al., 2017; Karbassi et al., 2020). Nowadays, the single-cell electrical activity of neurons and myocytes can be measured by patch clamp techniques, optical imaging using genetically encoded or extrinsic fluorophores, and substrate-integrated MEAs (Spira and Hai, 2013). In particular, MEAs provide non-invasive monitoring of electrical activity and stimulation of multiple neurons in vitro and in vivo (Hutzler et al., 2006; Berdondini et al., 2009). Initially, electrodes were only capable of registering the extracellular potential of cells (Thomas et al., 1972; Gross et al., 1977; Pine, 1980; Csicsvari et al., 2003); however, modifications of the material topology allowed recording of the intracellular action potential. This was achieved by the generation of protrusions of different shapes, like mushrooms (Hai et al., 2010) and pillars (Robinson et al., 2012), on which electrodes are inserted into the cell. Another configuration was created by Desbiolles et al. (2019). These authors fabricated a surface containing nanovolcanoes with an electrode in their interior. Cells fused spontaneously with the nanovolcano, allowing the electrodes to contact the intracellular medium (Desbiolles et al., 2019). Other approaches include kinked nanowires (Tian et al., 2010) or nanotubes (Duan et al., 2012). In these cases, the nanostructures allow a field-effect transistor to gain access to the intracellular medium.
Protrusive nanostructures disrupt the membrane and insert into the cytoplasm spontaneously (Desbiolles et al., 2019), by electroporation, or by chemical functionalization (Duan et al., 2012). Planar patch clamp chips are another type of substrate used to measure intracellular potential, constituting a protrusion-free approach. Using this strategy, Martina et al. developed a planar substrate with holes from 2 to 4 µm in diameter that connected to a microfluidic channel under the surface (Martina et al., 2011). Neurons adhered to the substrate and spread, covering the holes. Then, negative pressure was applied in the channel, which broke the cell membrane and connected the cytoplasm with the microfluidic channel, similar to a whole-cell patch-clamp configuration. Thus, the intracellular potential could be registered through the microfluidic conduits. Usually, electrodes in MEAs are made of metallic conductors like gold, titanium nitride, and platinum, as well as iridium oxide (IrOx). The electrode surface can be modified with porous platinum, gold nanostructures, CNTs, and conductive polymers like poly(3,4-ethylenedioxythiophene) (PEDOT) (Obien et al., 2015) to increase the effective surface area of the electrodes. In recent years, the use of complementary metal-oxide-semiconductor (CMOS) technology has increased electrode density in MEAs (Obien et al., 2015). Despite the promising sensing capabilities of protruding conductive electrodes for measuring intracellular action potentials, their effects on cells are not fully characterized. For example, it has been reported that protrusions that insert into cells could alter intracellular trafficking (Zhao et al., 2017). In the case of non-protrusive strategies like planar patch clamp chips, the measurement time could be limited by the perfusion of microfluidic liquid into the cell. Because of this, a different approach was employed by Dipalo et al.
(2018), consisting of MEAs made of porous platinum or gold electrodes (Figures 5A,B). The electrodes acted as plasmonic antennas which, under infrared light illumination, generated acoustic waves that transiently porated the cell membrane at the illuminated spot, a process named "optoacoustic poration." A porous platinum electrode was placed on top of the aluminum surface of the CMOS-MEA electrodes. Cardiomyocytes derived from human-induced pluripotent stem cells (hiPSC-derived cardiomyocytes) were cultured on the MEA. After optoacoustic poration of the cardiomyocytes, the intracellular action potentials could be recorded (Figure 5C). Interestingly, membrane potential recordings were not affected by repeated optoacoustic porations on the same spot or on different spots of the same cell (Figures 5D,E). The material took advantage of the nanostructured properties of the porous electrode to facilitate access to the intracellular medium and provided a robust and reliable way to measure intracellular action potentials.

Measurements of Cell-Generated Forces

Upon adhesion, cells exert contractile forces on their supporting material. On materials of soft composition, the forces transmitted by the cells cause substrate deformation. This phenomenon can be exploited to assess the link between the mechanical properties of cells and the stiffness of the extracellular environment. Initially, forces were estimated by counting the wrinkles produced by cell-induced deformation of the material (Harris et al., 1980). Forces could be quantified with a calibration curve created by correlating forces of known magnitude with the length of the wrinkles they produce (Burton and Taylor, 1997). However, this methodology was limited in terms of spatial and temporal resolution, as wrinkles are usually larger than cells, develop slowly, and are intrinsically nonlinear (Dembo and Wang, 1999).
A major improvement was introduced in the work of Dembo and Wang (1999), in which fluorescent beads were embedded in the soft polymer layer on which cells grow, establishing the basis of traction force microscopy (TFM). In classic TFM, the forces exerted by cells on the soft material are calculated by tracking the displacement of the tracer particles by fluorescence microscopy. An image of the substrate in a stress-free reference state is compared with an image of the substrate in the presence of cells. Over the last few years, TFM has been implemented on diverse elastic materials like silicone, polyacrylamide, and polyethylene, with polyacrylamide being one of the most used due to its transparency and elastic properties (Mohammed et al., 2019). Regarding elasticity, substrate materials in TFM are generally linearly elastic, i.e., stress is directly proportional to strain, which facilitates force calculation. However, the ECM is a fibrous network, and therefore its elasticity is not linear. For a more detailed description of the mechanical properties of the ECM, see the review by Polacheck and Chen (2016). The first attempts to measure forces in non-linear elastic biopolymers were implemented by Steinwachs et al. (2016). Often, substrate materials are coated with ECM components like collagen, fibronectin, or laminin, or with peptide mimetic ligands of these proteins, to provide cell-adhesion sites on the substrate. A detailed protocol and guidelines for performing TFM can be found in Plotnikov et al. (2014). In a recent work, Oria et al. (2017) fabricated polyacrylamide hydrogels of different stiffness embedded with fluorescent nanobeads and decorated with a quasi-hexagonal array of gold nanoparticles on their surface (Figure 6A). Nanometer-scale distribution of integrin ligands (cyclic arginine-glycine-aspartate, cRGD) was achieved by functionalization of the gold nanoarray, while tracking of the fluorescent beads allowed the measurement of cell forces.
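The bead-tracking step at the heart of TFM can be illustrated with a minimal phase-correlation sketch. The images here are synthetic and only whole-pixel shifts are recovered; real TFM pipelines use subpixel particle-image velocimetry and a mechanical model of the substrate to convert displacements into tractions.

```python
# Sketch: recovering a substrate displacement from a reference (stress-free)
# bead image and a deformed one by FFT phase correlation. Synthetic data,
# integer-pixel shifts only; not a full TFM implementation.
import numpy as np

def displacement(ref, img):
    """Shift d such that img(x) ~ ref(x - d), i.e., how far the beads moved."""
    cross = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12               # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map shifts larger than half the image size to negative displacements.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# Synthetic "bead" field and a copy displaced by the cell-induced deformation.
rng = np.random.default_rng(1)
ref = np.zeros((64, 64))
ref[rng.integers(5, 59, 30), rng.integers(5, 59, 30)] = 1.0
moved = np.roll(ref, shift=(3, -2), axis=(0, 1))  # 3 px down, 2 px left

print(displacement(ref, moved))                   # (3, -2)
```

In practice this estimate is computed per interrogation window to build a displacement field, which is then inverted (e.g., with an elastic half-space model) to obtain tractions.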
These substrates offered a versatile platform for studying how cells sense spatial and physical information at the nanoscale. Interestingly, the authors showed that ligand spacing and substrate stiffness had opposite effects on cell behavior. Increased adhesion was produced at long ligand spacing on less rigid surfaces (200 nm, 1.5 kPa) or at short ligand spacing on more rigid surfaces (50 nm, 30 kPa). The authors measured the length of FAs in cells that expressed paxillin labeled with a green fluorescent protein (GFP-paxillin), which served as an indicator of FA maturation. The results showed how adhesion depended on ligand spacing and surface stiffness (Figure 6B). At higher substrate stiffness, the cells exerted higher tensile forces on the surface (Figure 6C). Moreover, cell adhesion depended on whether the ligand distribution was ordered or not (homogeneous ligand spacing versus random distribution). These results ruled out a molecular-ruler hypothesis in which there is a single optimal ligand spacing for FA assembly and growth. Instead, the authors explained their results with a model that takes into account the pulling forces exerted by myosin on actin filaments, a force threshold that triggers FA growth, and a maximum integrin recruitment in the FA. On soft surfaces, the ligand spacing has to be long so that the actin-pulling forces are distributed over fewer integrins; the force on each integrin is then high enough to trigger FA growth. On more rigid surfaces, FAs can grow until the maximum number of integrins in the cluster is reached. If the stiffness is too high, the pulling force exerted by the cell surpasses the binding force of the FA to the substrate and the adhesion collapses. This work is an exquisite example of how materials can be engineered to mimic different properties of the cellular environment while providing sensitive means for probing cell mechanics.
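The force-sharing argument above can be sketched numerically. All numbers below (total actomyosin force per stiffness, per-integrin thresholds, adhesion size) are illustrative assumptions, and the maximum-recruitment term of the original model is omitted for brevity.

```python
# Numerical sketch of the force-sharing model described in the text: a fixed
# actomyosin force is shared among the integrins engaged under an adhesion; a
# per-integrin threshold triggers FA growth, while excessive load unbinds it.
# All values are illustrative assumptions, not parameters from Oria et al.
import math

GROW_PN, RUPTURE_PN = 6.0, 60.0        # pN per integrin (assumed thresholds)

def ligands_under_adhesion(adhesion_diameter_nm, spacing_nm):
    """Approximate ligand count under a circular adhesion on a regular array."""
    area = math.pi * (adhesion_diameter_nm / 2) ** 2
    return max(1, round(area / spacing_nm ** 2))

def adhesion_state(total_force_pN, spacing_nm, adhesion_diameter_nm=1000):
    per_integrin = total_force_pN / ligands_under_adhesion(adhesion_diameter_nm, spacing_nm)
    if per_integrin > RUPTURE_PN:
        return "collapses"
    return "grows" if per_integrin >= GROW_PN else "no growth"

# Soft substrates transmit less total force than stiff ones (assumed values).
for stiffness, F in [("soft (1.5 kPa)", 500.0), ("stiff (30 kPa)", 10_000.0)]:
    for spacing in (50, 200):
        print(f"{stiffness}, {spacing} nm spacing -> {adhesion_state(F, spacing)}")
```

Even this toy version reproduces the crossed pattern reported in the experiments: growth at long spacing on soft gels, growth at short spacing on stiff gels, and collapse at long spacing when the substrate is stiff.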
Recently, TFM substrates were improved by the generation of precise arrays of fluorescent quantum dots inside a silicone elastomer by electrohydrodynamic nanodrip-printing. The creation of controlled arrays of fluorescent particles removes the need for an image of the field after cell detachment (Bergert et al., 2016). Moreover, the use of super-resolution fluorescence microscopy has increased the resolution of the force map on the substrate (Colin-York et al., 2016; Stubb et al., 2020). However, calculation of force maps from microscopy images is not a trivial process and requires complex algorithms. For more detailed reviews on this matter, see Style et al. (2014) and Schermelleh et al. (2019). TFM-optimized elastic materials have also been created to study mechanobiology in 3D. Methods were developed for the 3D deconvolution of forces exerted by cells on a surface (Maskarinec et al., 2009; Legant et al., 2013) or inside 3D matrices (Legant et al., 2010). Although the deconvolution of force maps in 3D environments is still challenging (Polacheck and Chen, 2016), these approaches hold great promise for measuring cell forces and for mimicking the ECM in geometries that more closely resemble those found in normal tissues. Vorselen et al. (2020) created a sophisticated version of 3D TFM using uniform hydrogel particles with tunable size and stiffness. These particles were incubated in the cell medium and deformed as a consequence of the forces applied to them. The authors developed a computational method to infer the mechanical forces from the deformation of the hydrogel particles. Thus, the pressure exerted by a cell on a particle could be measured with high resolution. This approach moves away from the classical TFM concept, as the hydrogel particles are not fixed around the cells but are free to move and interact with them. For example, the authors could measure the spatial distribution of forces applied by a macrophage that phagocytosed a hydrogel particle.
Besides, the particles could be tailored with pMHC ligands for the T cell receptor (TCR) of lymphocytes and with ICAM. These ligands induced the adhesion of cytotoxic T lymphocytes to the particle, simulating the interaction of T lymphocytes with their target cells and thus allowing the forces induced by the T cell to be monitored. Remarkably, the authors introduced hydrogel particles as micrometer-scale tension probes, breaking with the general concept of TFM as limited to 2D surfaces. Other classes of elastic materials have been developed to study cell mechanical properties and behavior.

FIGURE 6 | Substrate materials for measuring cell-exerted forces. In (A), the scheme represents the nanopatterned substrate employed by Oria et al. (2017). Hydrogels with different stiffness and coated with AuNPs at different distances were employed. The AuNPs were functionalized with a cRGD-based ligand (integrin ligand). Due to steric hindrance, only one integrin could bind to each AuNP. The length of FAs (adhesion length) was measured using cells that expressed GFP-paxillin. In (B), results show that larger FAs were assembled on soft substrates with high spacing between ligands (200 nm, blue dots). On stiffer substrates, longer FAs were detected at shorter ligand spacing (50 nm, red dots). In (C), the graph shows averaged cell tractions as a function of substrate stiffness. The continuous lines represent the predictions of the model created by the authors to explain the results. Figures in (A-C) were adapted from Oria et al. (2017) with permission. Copyright 2017 Springer Nature. In (D), a scheme of a T cell on an array of elastomer pillars is presented. These pillars were coated with antibodies that activated the TCR and CD28 (red coating on the pillars). Forces exerted by cells were calculated by measuring the pillar displacement (δ). In this type of material, pillar displacement is proportional to the force applied (for small displacements).
The use of micropillars to measure cell pulling forces was first introduced by Tan and coworkers (Tan et al., 2003). In this strategy, an array of cylindrical micrometer-scale cantilevers, called micropillars, is fabricated on polyacrylamide or polydimethylsiloxane (PDMS) substrates. The micropillar tops are functionalized with ECM molecules, allowing cells to adhere to them. Therefore, as cells spread and mature, they exert forces on the micropillars supporting them. The magnitude of these forces can be calculated from the displacement of the micropillars from their undeflected positions. This has the advantage that there is no need for cell detachment processes as in classical TFM (Roca-Cusachs et al., 2017; Banda et al., 2019). Besides, force calculation is simpler than with TFM, because small deformations follow a linear regime: micropillar deflection is directly proportional to the force (Du Roure et al., 2005). A third advantage is that micropillar stiffness can be modulated to some extent by modifying the pillar geometry (Saez et al., 2007). This feature was exploited to create a wide range of stepped stiffness gradients by combining different pillar geometries (Lee et al., 2015). However, a major limitation of pillar arrays is that the substrate is not continuous. Cells can bind only to discrete spots (the pillar tops), which can affect the morphology of cell-ECM adhesions. Researchers have also explored the use of micropillars with magnetic properties to exert controlled forces on cells (Sniadecki et al., 2007; Le Digabel et al., 2011) and the generation of nanopatterns on top of the micropillar surface to control ligand spacing and the selective binding of integrin types (Rahmouni et al., 2013). Recently, Bashour et al. (2014) assessed the role of the receptors involved in the interactions of T lymphocytes with APCs.
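The linear deflection-to-force conversion described above can be made concrete with a standard Euler-Bernoulli cantilever estimate, k = 3EI/L³ with I = πD⁴/64 for a cylindrical post. The PDMS modulus and pillar dimensions below are illustrative values, not parameters from any of the cited studies.

```python
# Sketch: cantilever-beam estimate of micropillar stiffness and force.
# Euler-Bernoulli bending of a cylindrical post: k = 3*E*I / L^3, I = pi*D^4/64.
# The modulus and dimensions are illustrative assumptions only.
import math

def pillar_stiffness(E, diameter, length):
    """Bending stiffness (N/m) of a cylindrical cantilever pillar."""
    I = math.pi * diameter ** 4 / 64          # second moment of area, m^4
    return 3 * E * I / length ** 3

E = 2.5e6        # Pa, order of magnitude typical of PDMS (assumed)
D = 2e-6         # m, pillar diameter (assumed)
L = 7e-6         # m, pillar height (assumed)

k = pillar_stiffness(E, D, L)
deflection = 0.5e-6                            # m, measured tip displacement
force = k * deflection                         # valid in the small-deflection regime
print(f"k = {k * 1e3:.1f} nN/um, F = {force * 1e9:.1f} nN")
```

With these numbers the pillar stiffness is on the order of tens of nN/µm, so sub-micrometer deflections correspond to forces of a few nN, consistent with the magnitude of cellular traction forces.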
A micropillar pattern was coated with antibodies that bind to the TCR or the CD28 proteins to mimic the activation exerted by the ligands of these proteins (Figure 6D). Thus, the micropillar surface simulates activation by the APC membrane. The pillars were also coated with fluorescent molecules to facilitate force map generation. Using antibodies or ligands for the TCR or CD28, the authors could demonstrate that T lymphocytes exert pulling forces through the TCR (Figures 6E,F), whereas CD28 on its own does not mediate forces. Instead, CD28 is important for signal transduction. Other approaches have been developed in which forces can be measured without the need for soft deformable substrates. In 2012, Salaita's group developed a sensitive approach to spatially and temporally map forces exerted by cells. Similar to the protein tension probes expressed by cells (Grashoff et al., 2010), they implemented molecular probes as force transducer sensors (Stabley et al., 2012). The sensor consisted of a flexible polyethylene glycol (PEG) linker covalently bound to a ligand at one terminus and anchored onto a surface. The ligand and the surface were functionalized with a fluorophore and a quencher molecule, respectively. In the resting state, the linker is in a collapsed conformation, allowing the fluorophore to be quenched due to its proximity to the quencher. When a force is applied to the probe, the linker stretches, separating the fluorophore from the quencher and increasing the fluorescence yield of the probe. Thus, knowing the quenching efficiency at a particular spot on the surface, it is possible to determine the distribution of collapsed/extended probes. As a proof of concept, the authors used an epidermal growth factor (EGF) ligand to map forces associated with the initial uptake and trafficking of the EGF receptor (EGFR) upon binding to its cognate ligand.
FIGURE 7 | Molecular fluorescence tension microscopy substrates. In (A), the image shows a drawing of a cell adhered to the nanostructured biosensor surface. In (B), the chemical structure of the tension probe synthesized by the authors is shown. The probe is composed of three main components: a cyclic RGDfK, which constitutes the integrin ligand (represented by a blue triangle); a Cy3B dye (represented by a red dot); and a PEG chain with a thiol group at the end (represented by a gray line). The probe binds to the AuNP through the thiol group and acquires a collapsed conformation in the absence of pulling forces on the probe. Each AuNP carries on average 2.5 probes. The image in (C) shows the scheme of the molecular probe. In the resting position, the probe acquires a collapsed conformation in which the Cy3B fluorescence is quenched by the AuNP. Upon exertion of pulling forces by the cell, the linker stretches and the fluorophore is no longer quenched. In (D), the graph shows the mean tension per ligand across one entire cell on substrates coated with AuNPs at different interparticle distances. After 1 h, only cells on surfaces with high ligand density (50 nm spacing) could exert forces higher than 5 pN. The graph in (E) shows the GFP-paxillin cluster size (which indicates FA size) as a function of time. The increase in FA size was correlated with high tension, as observed in (D).

This work was motivated by the need for functional materials that can surpass the limitations of TFM in terms of sensitivity and spatial and temporal resolution. Later on, the same group applied a similar concept to quantify the innate forces involved in the binding of the TCR to the pMHC (Liu et al., 2013, 2016). The force sensor consisted of a DNA hairpin labeled with a fluorophore-quencher pair immobilized onto a gold nanoparticle (AuNP). Both the molecular quencher and the glass-supported AuNP quench the fluorophore, by FRET and by the plasmon effect, respectively.
The dual quenching mechanism provided increased sensitivity and a lower background signal. The use of molecular tension probes was also implemented on nanopatterned surfaces to study the molecular biophysics of integrin ligand clustering (Liu Y. et al., 2014) (Figure 7A). Nanopatterned surfaces were prepared to create an array of AuNPs in which the distance between AuNPs is precisely controlled (between 30 and 300 nm). Molecular tension probes were bound to an AuNP at one end and carried an integrin ligand [cyclic Arg-Gly-Asp-dPhe-Lys, c-(RGDfK)] at the other. A Cy3B fluorophore was located next to the integrin ligand (Figure 7B), so that in the resting state the fluorescence was quenched by the plasmon of the AuNP via nanometal surface energy transfer (NSET) (Figure 7C). The authors studied the adhesion of NIH/3T3 fibroblasts that expressed recombinant GFP-paxillin (a protein expressed at FAs). A relationship between integrin ligand density and the magnitude of the forces exerted by NIH/3T3 fibroblasts was found. At low ligand density (100 nm ligand spacing), cells showed less adhesion, smaller FAs after 30 min, and lower pulling forces compared to cells cultured at higher ligand density (50 nm ligand spacing) (Figure 7D). In the latter case, the forces exerted by cells increased to a maximum value (at 1 h), after which they remained constant. This could indicate that the integrin ligand density has to be high enough to harness actin- and myosin-driven tension, which is necessary for FA maturation. Using cells expressing GFP-paxillin, the authors could measure the size of FAs (Figure 7E). The results showed similar behavior of the forces and FA size as a function of time. At higher ligand densities, cells exerted higher forces and assembled bigger FAs.
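The distance dependence of the two quenching mechanisms mentioned above can be sketched with their standard functional forms: FRET to a molecular quencher falls off as 1/(1 + (r/R₀)⁶), while NSET to a gold nanoparticle follows a longer-ranged 1/(1 + (d/d₀)⁴) law. The characteristic distances below are illustrative, not values from the cited papers.

```python
# Sketch: standard distance dependence of FRET and NSET quenching, to show why
# collapsed probes are dark and extended probes fluoresce. R0 and d0 are
# illustrative characteristic distances (nm), not fitted experimental values.
def fret_efficiency(r, R0=6.0):
    """FRET quenching efficiency at donor-acceptor distance r (nm)."""
    return 1.0 / (1.0 + (r / R0) ** 6)

def nset_efficiency(d, d0=10.0):
    """NSET quenching efficiency at dye-nanoparticle distance d (nm)."""
    return 1.0 / (1.0 + (d / d0) ** 4)

for label, d in [("collapsed probe", 3.0), ("extended probe", 20.0)]:
    print(f"{label} ({d:.0f} nm): FRET E = {fret_efficiency(d):.2f}, "
          f"NSET E = {nset_efficiency(d):.2f}")
```

With these assumed distances, a collapsed probe is almost fully quenched by either mechanism, while an extended probe escapes FRET entirely but is still weakly quenched by NSET, illustrating the longer working range of nanoparticle-based quenching.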
SUMMARY AND FUTURE PERSPECTIVES

In recent years, advances in material synthesis have allowed the incorporation of sensing capabilities into cell substrate materials, which has expanded the experimental information obtained from cell adhesion studies. This trend has continued as novel synthetic strategies for tailoring specific material properties are constantly developed, leading to improvements in sensitivity and in spatial and time resolution. In addition, modulation of topographical and chemical features has allowed more control over cell functions. Surface nanopatterning has been crucial to reproduce the physicochemical characteristics of the cellular microenvironment, creating accurate synthetic ECM analogs. Novel synthetic strategies, together with sophisticated analysis techniques, have contributed to addressing important aspects of the mechanisms by which cells interact with their microenvironment. For instance, accurate localization of fluorescent nanotracers inside soft elastic materials was crucial to spatially and dynamically map cell forces by stimulated emission depletion (STED) microscopy (Bergert et al., 2016; Stubb et al., 2020). Adding multiple features to a functional material provides more realistic physiological conditions and higher information output, as demonstrated using precisely distributed nanometer-scale arrays of ECM ligands on TFM substrates of variable rigidity (Oria et al., 2017). 3D soft architectures with programmed physicochemical properties incorporating a transducer element have also been demonstrated (Pitsalidis et al., 2018; Vorselen et al., 2020), which constitutes a necessary step towards a new paradigm for in vitro studies of cell processes. 3D synthetic microenvironments provide a platform for cell culture and cell analysis ex vivo in which cells behave more natively (Weigelt et al., 2014).
Advances in 3D culture platforms that merge biosensing capabilities with sophisticated biochemical and biophysical properties, such as those found in the native ECM, allow the real-time study of mammalian tissues (Shamir and Ewald, 2014). This type of stimuli-responsive functional material would be ideal for understanding how cells change their phenotype and acquire specific functions as a consequence of cues from the environment. This is an important aspect of tissue physiology, and a critical step toward understanding the alterations that lead to tissue pathophysiology (Bhatia et al., 2014; Pickup et al., 2014; Gaetani et al., 2020). In this context, biomimetic 3D biosensors for cell culture will increasingly contribute to physiology, histology, and pathophysiology studies. 3D matrices that combine essential biophysical and biochemical aspects of the native cellular microenvironment with biosensing features will also bring benefits to the field of tissue regeneration and healing. A major goal in this field is to improve the biointegration of orthopedic and dental implants with the surrounding tissues. For decades, numerous surface modification strategies based on chemical coatings and nanotopographical features have been developed (Mas-Moruno et al., 2019). However, despite these research efforts, implants still fail at an unacceptable rate (Raphel et al., 2016). This is due, in part, to the lack of critical information about the mechanisms governing the material biointegration process. In this perspective, ECM mimetics able to monitor cell adhesion dynamics quantitatively in real time hold great promise for the rational design of 'smart' implants that possess cell-instructive characteristics. A further challenge is the development of novel approaches that could provide real-time information on cell behavior in vivo. Mammalian tissues and organs are particularly difficult to study by direct optical observation (Shamir and Ewald, 2014).
This is the case for a number of diverse 3D culture formats, including organoids (Simian and Bissell, 2017). Electrochemistry could provide the means to accomplish this goal. Conducting polymers, for instance, are excellent building blocks for the creation of electrically responsive hydrogels (Zhang et al., 2019; Yang et al., 2020), although improvements in conductivity are desired to achieve highly sensitive cell sensing. Alternatively, the integration of hydrogels with conductive nanomaterials could overcome those limitations (Li et al., 2018). A fundamental requirement is to maintain not only the morphology of the hydrogel but also its electrostatic and biocompatibility properties, which will be essential for observing cells in a controllable, nature-like microenvironment. The electronics industry has advanced the creation of complex circuits at very small dimensions. Adapting these advances to the field of cell biology emerges as a promising way to improve biosensor sensing capabilities. For example, the incorporation of CMOS-based MEAs in biosensors has increased the density of electrodes on a surface (Obien et al., 2015), and with it the spatial resolution. Moreover, electrochemical biosensors can gain biocompatibility through the creation of flexible electrodes (Song et al., 2020). Finally, substrates with built-in sensing capabilities are suited for multiparametric cell monitoring. For example, transparent electrodes allow simultaneous microscopy observation and ECIS recordings (Pallarola et al., 2017a; Parviz et al., 2017), whereas nanostructured conductive substrates allow the combined implementation of ECIS and SERS (Zong et al., 2015). Often the result is more than the sum of its parts: SPR conductive substrates allowed the creation of electrical impedance microscopy, which added spatial resolution to ECIS measurements (Wang et al., 2011). Multiparametric approaches increase the information output from cell culture experiments.
A further challenge will be the creation of computational models and simulations that could help interpret and understand the multi-scale information obtained by multiparametric biosensors.

AUTHOR CONTRIBUTIONS

DP and NS designed the content of the manuscript. All the authors performed the literature survey, wrote the manuscript, and edited and reviewed the manuscript before submission. NS and DP prepared the figures.

ACKNOWLEDGMENTS

NS acknowledges CONICET for a postdoctoral fellowship. DP is a staff researcher of CONICET.
Determination of the youngest active domain in major fault zones using medical X-ray computed tomography-derived density estimations

Determination of the youngest active domains in fault zones that are not overlain by Quaternary

) that does not disrupt lower river terrace deposits, which means that there has been no known activity during the late Quaternary (Okada, 1992). The Hatai tonalite is distributed within 2 km of the northern side of the MTL in this area, and consists mainly of plagioclase, hornblende, and chloritized biotite. The Hatai tonalite is affected by mylonitization along the MTL, and contains foliations that are almost parallel to the MTL (Takagi, 1985).

(Figs 3c and 2b). ATS-2 is a solid pelitic schist with well-developed schistosity, whereas ATS-1 contains 2-cm-diameter quartz crystals. The cataclasite samples from the Hatai tonalite were analyzed using X-ray CT imaging (ATR-2) and density and X-ray fluorescence (XRF) analyses (ATR-3).

Photomicrographs of the fault rocks are shown in Fig. 3g.

The density, ρt, porosity, ϕ, and Zet measurement results are shown in Table 2 and Fig. 5, and Table 3 contains the XRF analysis results, which were used to calculate Zet. There is a decrease in ρt as the youngest fault plane Y of every analyzed fault is approached (Fig. 5a). There is an ~24% increase in ϕ as ρt decreases by 1 g/cm³, regardless of rock type (fault rock or protolith; Fig. 5b). The mean ϕ values are 1.5% (standard deviation (SD) = 1.0%) for the protolith, 12.6% (SD = 6.9%) for the cataclasite, 12.0% (SD = 4.8%) for the fault gouge along the inactive faults, 17.4% (SD = 4.6%) for the fault gouge along the active faults, and 32.2% (no SD calculated since there were only two samples) for the fault breccia (Table 2). However, every rock type yielded a positive correlation between ρt and Zet (Fig.
5c), even though the ρt and Zet are expected to vary among different rock types. Fault breccia YDA-1 and YDA-2 of the Yamada Fault both exhibit maximum ρt decreases of ~40%, whereas the Zet values are almost the same as that for protolith YK-1, which means that the relationship between ρt and Zet for the fault breccias is quite different from those of the other samples. This is because YDA-1 and YDA-2 have been strongly affected by weathering, as evidenced by their dark-brown color at outcrop and an Fe2O3 content of 4.28 wt%, which is much higher than those of the fault gouge, cataclasite, and protolith samples (Table 3).

I = I0 exp(−μS) (2)

where I0 is the initial intensity of the incident X-ray beam, I is its emergent intensity, S is the sample thickness, and μ is the linear X-ray attenuation coefficient (LAC) of the sample. The LAC depends on both bulk density, ρ, and the atomic number, Z (Wellington and Vinegar, 1987):

μ = ρ(a + bZ^3.8/E^3.2) (3)

where E is the X-ray energy (keV), a is a nearly energy-independent coefficient termed the Klein–Nishina coefficient, and b is a constant. Equation (3) is applicable for monochromatic X-rays, such as those from a synchrotron radiation facility. However, this equation does not hold for most commercial X-ray CT scanners, which use polychromatic X-ray beams, because μ depends on the X-ray energy (e.g., Nakano et al., 2000; Tsuchiyama et al., 2000). The photoelectric absorption of a compound consisting of multiple types of atoms is proportional to the effective atomic number calculated via equation (1) (Wellington and Vinegar, 1987). The CT number, NCT, which determines the contrast of a CT image, is defined as:

NCT = (μ − μw)/μw × 1000 (4)

where μw is the X-ray attenuation coefficient of pure water. A polychromatic X-ray CT scanner will allow μ to vary depending on the X-ray energy (effective energy), as described above.
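The attenuation relations referred to as equations (2)–(4) can be illustrated numerically. In the sketch below, the constants a and b and the sample values are illustrative placeholders, not values fitted or reported in this study:

```python
import math

def attenuation(I0, mu, S):
    """Beer-Lambert law, eq. (2): emergent intensity I after thickness S."""
    return I0 * math.exp(-mu * S)

def lac(rho, Ze, E, a=0.18, b=9.8e-24):
    """Eq. (3), after Wellington and Vinegar (1987):
    mu = rho * (a + b * Ze**3.8 / E**3.2), with a the near-energy-independent
    Klein-Nishina term. The a, b values here are placeholders."""
    return rho * (a + b * Ze**3.8 / E**3.2)

def ct_number(mu, mu_w):
    """CT number, eq. (4): mu standardised against pure water."""
    return 1000.0 * (mu - mu_w) / mu_w
```

By construction, a water-filled voxel (μ = μw) gets a CT number of 0, and the LAC scales linearly with bulk density at a fixed effective atomic number and energy.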
The influence of the variations in μ with the energy differences in an X-ray energy distribution is reduced by calculating NCT, which is standardized using the μ ratio in equation (4).

CT image analysis methods

A CT image is essentially a bitmap of each pixel's CT number; however, it also contains various artifacts due to the X-ray photography and image reconstruction. Therefore, the effects of these artifacts, especially BH, must be eliminated or reduced to ensure the accuracy of the CT numbers and therefore provide an accurate quantitative analysis. BH artifacts cause the edges of a CT image to appear brighter than the center, such that the CT numbers along the edges of a sample are greater than those in the center. This occurs because the lower-energy X-rays are absorbed more readily than the higher-energy ones when polychromatic X-rays pass through a sample near its center, where the transmission thickness is large.

... and rock type in this study (Fig. 5c). Therefore, we investigated the relationship among NCT, ρ, and Ze using the recorded CT images taken for a single tube voltage (140 kV). We used a third-generation medical X-ray scanner (Aquilion Precision TSX-304A 160-row multi-slice CT; Canon Medical Systems Co., Ltd.; Otawara, Tochigi, Japan) at CRIEPI. The scanner has a 0.25-mm slice thickness and a 0.098-0.313-mm pixel size. The X-ray tube has a W target and a 0.4 mm × 0.5 mm focal size. Three-dimensional CT images were acquired using a ...

Table 4 shows the X-ray CT image, density, and XRF analysis results. The CT image for sample MZ-5, a pelitic schist protolith in the Sanbagawa Belt, exhibits a striped pattern corresponding to planar schistosity (Fig. 6b) and possesses an NCTM value of 2056. A narrow band (≤1 mm wide) that is brighter than the rest of the image is inferred to be a phengite vein, which has a greater effective atomic number than either quartz or albite.
Example histograms of the CT values in each of the sampled zones are shown in Fig. 6c-e. Approximately 40,000-130,000 pixels are analyzed in each region, with the NCTM values generally following a normal distribution and possessing a standard deviation of 112-312 (Table 4). There may either be an increase in the frequency at values lower than NCTM, or a small side peak that is lower than NCTM if the sample contains many cracks; however, NCTM corresponds to the CT value of the matrix, with the influence of cracks excluded.

The NCTM-ρt relationship for Sanbagawa pelitic schist possesses a high positive correlation (ρt = 9.54 × 10⁻⁴ NCTM + 0.76, γ = 0.958; Fig. 10a). The calculated density from this equation, ρc, is consistent with the real value, ρt, and possesses an error of <9.5% (Table 4). The NCTM-Zet relationship (Zet = 2.67 × 10⁻⁴ NCTM + 11.8) can be derived from the abovementioned NCTM-ρt relationship; the ρt-Zet relationship is shown in Fig. 5c (Zet = 0.28ρt + 11.6, γ = 0.847). The effective atomic number calculated from this equation, Zec, is consistent with the real value, Zet, and possesses an error of <1.4% (Table 4).

<Figure 6>

The MTL at the Awano-Tabiki outcrop

The imaging results for samples AT, HA-1, and ATS-2 (Fig. 7a-c) ... mottled white regions throughout the CT images (Fig. 7b), with an NCTM value of 1908. A 1-2-mm-thick band that appears brighter than the rest of the image is inferred to be hornblende and chlorite, both of which have larger effective atomic numbers than quartz and plagioclase. Sample ATS-2, a pelitic schist protolith from the Sanbagawa Belt, possesses striped patterns corresponding to planar schistosity in the CT images (Fig. 7c), with an NCTM value of 1961.
A narrow band (at most ~1 mm wide) that appears brighter than the rest of the image is inferred to be a thin layer containing phengite and calcite, both of which have larger effective atomic numbers than quartz.

Example CT value histograms for each zone are shown in Fig. 7d-h. Approximately 12,000-200,000 pixels are analyzed in each zone, with the NCTM values generally following a normal distribution and possessing a standard deviation of 90-210 (Table 4). There may be a slight increase in the frequency at values higher than NCTM due to the influence of minerals with a large effective atomic number in some instances; however, NCTM corresponds to the CT value of the matrix, with the influence of these minerals excluded.

The NCTM-ρt relationships for the Sanbagawa pelitic schist and Ryoke tonalite possess high positive correlations (pelitic schist: ρt = 1.08 × 10⁻³ NCTM + 0.56, γ = 0.857; tonalite: ρt = 1.19 × 10⁻³ NCTM + 0.40, γ = 0.813; Fig. 10b, c). The ρc values are consistent with the ρt values, and possess errors of <10.7% (Table 4).

The Tsuruga Fault at the Oritodani outcrop

The CT results for samples T-3, K-1, and T-5 (Fig. 8a-c) are representative of the five samples (T-3, C-2, K-1, C-1, and T-5) collected from the Oritodani outcrop. Sample T-3, which was taken from the rocks in the fault fracture zone that formed during the most recent fault activity, appears dark in the fault gouge (T-3-1, T-3-2, and T-3-3) around the main fault plane Y in the CT image (Fig. 8a), with NCTM values in the 1185-1492 range. The smallest NCTM value is in T-3-3, which is in contact with the main fault plane Y. We consider T-3-2 to be possibly affected by the most recent fault activity based on our abovementioned analysis, but its NCTM value is 1428, which is about the same as that in T-3-1 and exceeds that in AT-2, a fault gouge along an inactive fault.
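Fits of the form ρt = m·NCTM + c, together with the quoted correlation coefficient γ and the per-sample error of the calculated density ρc, can be reproduced with an ordinary least-squares line. The (NCTM, ρt) pairs below are invented stand-ins for the tabulated sample values:

```python
import numpy as np

# Hypothetical (NCTM, measured density) pairs standing in for Table 4 data.
n_ctm = np.array([1100.0, 1450.0, 1650.0, 1900.0, 2050.0])
rho_t = np.array([1.85, 2.15, 2.35, 2.60, 2.75])

# Least-squares line rho = m * NCTM + c, the same form as the fits in
# the text (e.g. rho_t = 9.54e-4 * NCTM + 0.76).
m, c = np.polyfit(n_ctm, rho_t, 1)

# Correlation coefficient (the text's gamma) and the relative error of
# the calculated density rho_c against the measured rho_t.
gamma = np.corrcoef(n_ctm, rho_t)[0, 1]
rho_c = m * n_ctm + c
rel_err = np.abs(rho_c - rho_t) / rho_t
```

The same fit-then-predict step, repeated per rock/protolith type, is what yields the quoted ρc error bounds (<9.5%, <10.7%, and so on).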
Furthermore, the observed microstructures in T-3 suggest that repetitive fault activity, which is indicative of an active fault, is limited to fault gouge T-3-3. Therefore, we classify T-3-1 and T-3-2 as inactive fault gouge, and T-3-3 as active fault gouge in this analysis. The cataclasite (T-3-4) outside of the fault gouge appears brighter than the fault gouge, with an NCTM value of 1622.

Sample K-1, a Koujaku granite protolith, possesses dark-gray and fine-grained white areas throughout the CT images (Fig. 8b), with an NCTM value of 1656. The small white areas (≤2-mm diameter) in the image are inferred to be biotite, which has a larger effective atomic number than either quartz or plagioclase. Sample T-5, which is a metabasalt protolith, is largely gray in the CT image, with the exception of a white area at the upper right of the sample (Fig. 8c), and has an NCTM value of 2590.

Example CT value histograms for each zone are shown in Fig. 8d-h. Approximately 20,000-310,000 pixels are analyzed in each region, with the NCTM values generally following a normal distribution and possessing a standard deviation of 70-206 (Table 4). There may be a slight increase in the frequency at values above NCTM due to the influence of minerals with a large effective atomic number in some instances, but NCTM corresponds to the CT value of the matrix, with the influence of these minerals excluded.

... (Fig. 10d, e). The ρc values are consistent with the ρt values, and possess errors of <3.7% (Table 4). The NCTM-Zet relationships (granite: Zet = 2.83 × 10⁻⁴ NCTM + 11.6; metabasalt: Zet = 7.66 × 10⁻⁴ NCTM + 12.1) can be derived from the abovementioned NCTM-ρt relationships; the ρt-Zet relationships are shown in Fig. 5c.

Sample YK-1, a Miyazu granite protolith, possesses dark-gray and fine-grained white areas throughout the CT images (Fig. 9b), with an NCTM value of 1730.
The white area (≤2-mm diameter) in the image is inferred to be biotite, which has a larger effective atomic number than both quartz and plagioclase.

Example CT value histograms for each zone are shown in Fig. 9c-e. Approximately 50,000-330,000 pixels were analyzed in each region, with the NCTM values generally following a normal distribution and possessing a standard deviation of 119-224 (Table 4). There may be a slight increase in the frequency at values above NCTM due to the influence of minerals with a large effective atomic number in some instances; however, NCTM corresponds to the CT value of the matrix, with the influence of these minerals excluded.

... the NCTM values that were calculated from the 2D CT images. The NCTM-ρt relationship for Miyazu granite has a high positive correlation (ρt = 9.79 × 10⁻⁴ NCTM + 0.85, γ = 0.893; Fig. 10f). The ρc value is consistent with ρt, and possesses an error of <9.1% (Table 4). The NCTM-Zet relationship (Zet = 6.95 × 10⁻⁴ NCTM + 11.2) can be derived from the abovementioned NCTM-ρt relationship; the ρt-Zet relationship is shown in Fig. 5c (Zet = 0.71ρt + 10.6, γ = 0.975). The Zec value is consistent with Zet, and possesses an error of <0.7% (Table 4).

ρt-ρc and Zet-Zec relationships

There is no significant difference in the ρ-ϕ relationship for the various fault rock and protolith types (Fig. 5b), whereas the trend of the ρt-Zet relationship appears to be dependent on the analyzed fault rock/protolith type (Fig. 5c). This indicates that NCTM, which is a function of ρ and Ze, must be treated as an effective parameter for examining fault rock and protolith characteristics by fault rock/protolith type. We observe strong correlations between ρc and ρt, and between Zec and Zet, for each fault rock and protolith type, as shown in Fig. 11 (ρ: γ = 0.944, Ze: γ = 0.895).
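Each NCTM-Zet line in the text is obtained by composing the NCTM-ρt fit with the corresponding ρt-Zet fit. For the Miyazu granite, using the coefficients quoted in the text, the composition works out as follows:

```python
# Miyazu granite fits quoted in the text:
#   rho_t = 9.79e-4 * NCTM + 0.85   (NCTM-density fit, Fig. 10f)
#   Ze_t  = 0.71 * rho_t + 10.6     (density-Ze fit, Fig. 5c)
m_rho, c_rho = 9.79e-4, 0.85
m_ze, c_ze = 0.71, 10.6

# Substituting the first line into the second gives the NCTM-Zet relation:
#   Ze_t = m_ze * m_rho * NCTM + (m_ze * c_rho + c_ze)
m = m_ze * m_rho          # slope, ~6.95e-4
c = m_ze * c_rho + c_ze   # intercept, ~11.2
```

The composed slope and intercept reproduce the quoted relation Zet = 6.95 × 10⁻⁴ NCTM + 11.2.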
Therefore, NCTM, which is calculated by fault rock/protolith type, should be a reliable parameter for calculating the ρt and Zet values of a given rock sample and determining its fault rock/protolith characteristics.

Fault rock characteristics based on the NCTM-rock/protolith ratio (ρt and Zet) relationship

We have demonstrated that ρt, Zet, and NCTM all decrease as the main fault plane is approached. Furthermore, ρt is affected by Zet, as shown in Fig. 5c, with a distinct ρt-Zet relationship for each fault and protolith type. Therefore, the effect of Zet on ρt is suppressed by using the rock/protolith density ratio of each fault and protolith type.

Table 5 shows the results of the analyzed fault rock characteristics based on the relationships between NCTM and the ρt and Zet rock/protolith ratios. The statistics of the determined NCTM values and the ρt and Zet rock/protolith ratios are provided in Table 6 and Fig. 12a-c.

<Table 5> <Table 6> <Figure 12>

The NCTM values (taken at 140 kV) were ~1900 ± 300 for the protoliths, ~1650 ± 250 for cataclasite, ~1450 ± 200 for the fault gouge along inactive faults, and ~1100 ± 100 for the fault gouge along active faults, as shown in Fig. 12a. Both the NCTM values and the NCTM variations decrease as the fault rock becomes more heavily deformed and the main fault plane is approached.

The rock/protolith ρt ratio was ~0.8 ± 0.15 for cataclasite and the fault gouge along inactive faults, and ~0.7 ± 0.1 for the fault gouge along active faults, as shown in Fig. 12b. The ρt ratio ...

Table 1. Locations and fault/protolith details of the analyzed samples. See Figs 1 and 2
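The characteristic 140-kV NCTM ranges quoted above suggest a simple screening rule for an unknown sample. The sketch below is hypothetical: the cut-offs are midpoints between the quoted mean values, not thresholds proposed by the study:

```python
def classify_fault_rock(n_ctm: float) -> str:
    """Rough screen using the mean 140-kV NCTM values quoted in the text:
    ~1900 protolith, ~1650 cataclasite, ~1450 inactive-fault gouge,
    ~1100 active-fault gouge. Cut-offs are midpoints (hypothetical)."""
    if n_ctm >= 1775:
        return "protolith"
    if n_ctm >= 1550:
        return "cataclasite"
    if n_ctm >= 1275:
        return "fault gouge (inactive fault)"
    return "fault gouge (active fault)"
```

In practice the quoted ±200-300 spreads overlap between classes, so such a screen could only flag candidates for the microstructural checks the study actually relies on.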
The Effect of Explicit Instruction of Textual Discourse Markers on Saudi EFL Learners' Reading Comprehension

Discourse marker (DM) instruction is currently receiving an increasing amount of attention in the literature on second language learning. As noted by Al-Yaari, Al Hammadi, Alyami, and Almaflehi (2013), and Algouzi (2014), the use of DMs is insufficient to support the development of the language skills, especially reading, of Saudi English as a Foreign Language (EFL) learners. Recurrent reports (e.g., Al Abik, 2014; Al-Mansour & Al-Shorman, 2011) have shown that Saudi EFL learners perform poorly on reading comprehension tasks. Since these studies were generally descriptive, the current study attempted to fill the gap by providing empirical data, particular to low-proficiency learners in the Saudi EFL context, based on an eight-session intervention programme to familiarise learners with DMs. This study hypothesised that explicit DM instruction could improve learners' reading comprehension and that there would be a significant positive relationship between Saudi EFL learners' knowledge of DMs and their reading performance. To test these hypotheses, two classes with a total of 70 Saudi male third-grade secondary students were assigned as control and experimental groups. The experimental group was introduced to the intervention programme, whereas the control group was only taught the prescribed reading lessons. Two forms of tests in both DMs and reading comprehension were administered to the two groups before and after the intervention. A correlation analysis was also run to determine the relationship between learners' knowledge of DMs and their reading performance. Results confirmed, with a large effect size, that explicit instruction in DMs improved low-proficiency EFL learners' reading comprehension. The finding also suggested that knowledge of DMs correlated highly with reading comprehension.
In other words, learners who were good at recognising DMs performed better in reading comprehension tasks, whereas those who were poor at recognising DMs performed poorly. Practical suggestions for pedagogy and future research were also identified.

Introduction

When a written text is understood, reading can be a fascinating and inspiring experience. Reading also offers us different perspectives on life and enhances our creativity. Reading can inform our knowledge and develop our vocabulary. However, when a message is not understood, the reading experience can have negative and far-reaching consequences for students' learning and overall development.

Reading comprehension is a process that triggers highly sophisticated operations irrespective of the text's language. Some aspects of reading comprehension might include the gradual building up of understanding as we read, the confirmation of predictions by later information, the facilitation of guessing the meanings of new vocabulary words, the making of connections between different parts of a text to support interpretation, the support of comprehension by scanning and cues from sentence structure and punctuation, and many more skills (Clarke, Truelove, Hulme, & Snowling, 2014). However, when it comes to interpreting text, readers differ from each other as they interact with the text in different ways.
There have been many attempts to explain reading comprehension. Two of the most well-known models of reading comprehension are the simple view of reading (Gough & Tunmer, 1986) and the construction-integration model (Kintsch & Rawson, 2005). According to the simple view of reading, successful comprehension occurs when the reader is able to recognise words (decoding) and understand spoken language (listening comprehension). In this model, poor readers experience difficulties with decoding, listening comprehension, or both skills. The construction-integration model proposes that readers construct personal representations of the texts they read based on the interaction between the information in the text and the reader's background knowledge about the topic of the passage and its vocabulary. Kintsch and Rawson (2005) argue that comprehension occurs on three levels: linguistic (processing individual words), microstructure (processing large chunks of text), and macrostructure (processing themes and genre information about the text). Because of the complexity of this process, learners often struggle to become proficient readers.
Poor comprehension can be the result of multiple factors. These include weaknesses in language skills such as phonology, semantics, grammar, and pragmatics. Other related difficulties can be observed in learners' attempts to understand the meaning of words and identify the structure and organisation of words, sentences, and connected text. Working memory is another factor that influences reading comprehension because it is needed to hold information while processing a text. With regard to text-level skills, poor reading comprehension is visible in skills such as inferencing and monitoring understanding (Clarke et al., 2014). Motivation is influential with respect to reading comprehension, as motivated readers are more active and engaged with reading activities; hence, the "Matthew Effect", which refers to the positive result of enjoying reading at school that extends to pleasure reading at home, is observed (Stanovich, 1986). This means that some comprehension difficulties can be overcome by sustaining a reader's motivation, encouraging enjoyment of reading and reading at home, and exposure to interesting reading materials.

Reading becomes even more complex and multidimensional when it is in a foreign language, as is the case when Saudi Arabian students read texts in English. Indeed, as Garcia (2003, p. 31) convincingly puts it: "reading comprehension performance of English language learners is a complex endeavor because of the multiple program, instructional, language, cultural, and affective factors that may intersect and affect their reading development". Some of the domains that affect the reading comprehension of EFL learners are cognitive, linguistic, sociocultural, and developmental (Kucer & Silva, 2006). Similarly, according to Birch (2007), effective reading comprehension in an EFL situation can be influenced by linguistic knowledge, interference of the first language (L1), and the availability of processing strategies.
EFL learners may lack the necessary knowledge of English language sounds, vocabulary, grammar, or culture, which can obstruct their ability to comprehend. Another influential factor is L1 interference, since readers draw on their L1 knowledge base to process English texts. L1 influence can actually facilitate second language (L2) reading comprehension, but it can also be harmful. Besides linguistic knowledge and L1 interference, missing low-level processing strategies can significantly impede an EFL reader's progress. These problems may require EFL teachers to provide their students with direct instruction and remediation.

In Saudi EFL classrooms, reading is a problematic skill for teachers and learners. According to Al-Mansour and Al-Shorman (2011), Saudi EFL students of different educational levels are unable to read efficiently or comprehend what they read. In fact, TOEFL (Test of English as a Foreign Language) reports for the past ten years show that Saudi students' performance is the worst among Middle Eastern students, particularly in reading (Al Abik, 2014). Even worse, Al Abik (2014) points out that Saudi TOEFL candidates' average mean score in reading (X=12) is far below the average mean score worldwide (X=20). This result was supported by his own study of Saudi English-major undergraduates, in which he concluded that the majority of students (almost 70 percent) who were majoring in English and translation could not score more than 10 on the reading comprehension test. He emphasised that reading comprehension instruction in Saudi Arabia is not given proper attention and that there is an urgent need to change classroom practices in order to develop students' reading skills. Alsamadani (2011) affirms that reading instruction in Saudi schools is generally made up of oral repetition of passages and a literal level of comprehension.

An integral part of reading comprehension is the learner's knowledge of discourse markers (DMs). Swan (2005, p.
13) defines a discourse marker as "a word or expression which shows the connection between what is being said and the wider context". DMs are linguistic expressions that connect sentences, show the attitudes of the speaker, and facilitate understanding of texts (Ismail, 2012). DMs can have various classifications, but one of the most comprehensive is presented in Hyland and Tse (2004), who classified DMs into interactive and interactional markers.

Interactive or textual markers guide the reader through the text. They are made up of conjunct, adverbial, and paraphrasing expressions that can be divided into five categories: transitions, frame markers, endophoric markers, evidentials, and code glosses (Hyland & Tse, 2004). Transitions express the semantic relationship between sentences and main clauses (e.g., in addition, moreover, but, thus, and), whereas frame markers indicate text acts, stages, or sequence (e.g., first, to conclude, finally). Endophoric markers refer the reader to the location of information in other parts of the text (e.g., see Figure X, noted above), while evidentials refer readers to other texts (e.g., X states, according to Y). Code glosses support the reader's understanding of the functional value of ideas in the text (e.g., in other words, namely, such as).
Interactional markers aim at engaging readers in the argument proposed by the text. They are subdivided into hedges, boosters, attitude markers, engagement markers, and self-mentions. Hedges are expressions that show that the author is not fully committed to a proposition (e.g., might, perhaps, possibly). Contrary to hedges, boosters indicate the writer's certainty and commitment to the proposition in the text (e.g., indeed, in fact, definitely). Attitude markers convey the author's attitudes towards the information in the text, which could include showing agreement, importance, or preference (e.g., I agree, sadly, fortunately). Engagement markers are expressions that attempt to immerse readers in the text by capturing their attention or inducing them to build a relationship between them and the text (e.g., think of, you can see, note that). Finally, self-mention markers show the writer's presence in the text through the existence of first-person pronouns or possessives (e.g., I, we, our) (Hyland & Tse, 2004).

An EFL reader's knowledge of DMs is crucial to his/her reading ability. It is not possible to understand a text without identifying the elements that contribute to the creation of meaning, such as DMs (Aidinlou & Shahrokhi, 2012). It is believed that DMs correlate highly with reading comprehension and that they facilitate the EFL reader's understanding by improving his/her reading speed and recall (Khatib & Safari, 2011; Martinez, 2009). Although very few studies indicated that DMs have little or no effect on reading comprehension (e.g., Degand et al., 1998), the majority of the literature on DMs shows a significant positive effect (Khatib & Safari, 2011).

The findings of Bahrami's (1992) experimental study show that introducing more DMs in reading passages significantly improves students' reading comprehension abilities. Conversely, Akbarian (1998) and Degand et al.
(1999) concluded that omitting DMs from passages negatively influenced students' comprehension. Innajih (2007) found that explicit instruction in DM types and functions seemed to enhance EFL learners' performance in reading comprehension tests. Therefore, it can be inferred that DMs play a vital role in the development of reading skills, particularly in EFL contexts.

Given the importance of DMs in facilitating reading comprehension for EFL readers and the poor performance of Saudi students on standardized tests, it is safe to say that this issue has not been given proper attention in the Saudi EFL teaching context. Although there have been very few studies investigating the topic of DMs and Saudi performance, both the studies of Al-Yaari et al. (2013) and Algouzi (2014) compared the use of English discourse markers by Saudi learners with that of native speakers and other EFL learners. They attempted to identify the most frequent DMs used by Saudi learners in EFL classrooms and how and why Saudi EFL learners use DMs the way they do. These studies, although descriptive in nature, concluded that "and", "but", and "also" are the most frequent DMs used by Saudi EFL learners and that they used DMs less than native speakers and other EFL learners. They believed that the inability of Saudi EFL learners to use the correct or most appropriate DMs could be due to a lack of explicit training and L1 interference.
The interest in conducting this study was based on the unsatisfactory state of reading instruction in Saudi Arabia and the lack of sufficient empirical literature on the topic of DM instruction for low-proficiency EFL learners. The effect of DM instruction on secondary-stage EFL learners' reading comprehension has been largely ignored in previous studies compared to studies of tertiary-level EFL learners. The current study attempted to investigate the effect of explicit instruction in DMs for Saudi EFL learners on their reading comprehension abilities and to determine whether a learner's level of knowledge of DMs is related to his/her reading comprehension performance. To this end, the following research questions were developed:

1) Will explicit instruction in DMs positively influence EFL learners' reading comprehension?
2) Is there a significant relationship between EFL learners' recognition of discourse markers and their reading comprehension?

This study contributes to an understanding of the role played by explicit instruction in DMs in reading classes for low-proficiency EFL learners, through an exploration of the effect of DMs on the development of their reading comprehension. It also attempts to identify the relationship between EFL learners' recognition level of DMs and their reading comprehension performance. The results of this study may therefore be of benefit to second-language reading instruction if they convince course designers and EFL instructors of the importance of DMs in the L2 classroom.

Following this introduction, the methodological approach adopted in this study is presented. The major research instruments (the language proficiency test, the reading comprehension test, and the DM test) are identified, and the procedures followed in collecting and analysing data are stated. Key results from an analysis of the research data are presented and discussed, along with implications and recommendations for future research.
Participants

The participants of the present study included 70 Saudi male third-grade secondary students between the ages of 16 and 17 from the Taif Directorate of Education who had been studying the English language for the past six years in public schools. To ensure the homogeneity of the participants, the TOEFL Junior Standard Test was administered, and the students participating in the study were randomly assigned to a control and an experimental group. Both groups were taught by the same teacher.

Language Proficiency Test

The TOEFL Junior Standard Test was used in this study to identify the Common European Framework of Reference for Languages (CEFR) level of the students, to make sure that both groups were homogeneous and that no significant differences existed between them with regard to their language proficiency prior to the planned intervention. The scores were also mapped to CEFR levels to help in understanding students' English proficiency levels. The TOEFL Junior Standard Test is intended for students aged 11+ and is often used for classroom placement purposes. The two-hour test consists of 126 items testing three areas: listening comprehension (42 items), reading comprehension (42 items), and language form and meaning (42 items). This test was administered at the beginning of the term; results showed that the students' proficiency levels were between levels A1 and A2 on the CEFR. The scores did not show any significant differences between the two groups.

Reading Comprehension Test

The reading comprehension sections of two TELC (The European Language Certificates) test forms were used to assess students' general reading comprehension abilities before and after the intervention. The TELC test was used because it offers language tests that are especially designed for the A1 and A2 levels of foreign language learners. It was used to examine whether the DM instructional treatment had any effect on general reading comprehension ability.
Each test form had a total of 12 matching items based on three reading passages. The answers were scored as either correct or incorrect, with a total achievable score of 24. The internal consistency reliability coefficients (Cronbach's alpha) for Forms A and B, based on students' performances on the pretest, were found to be 0.80 and 0.83 respectively.

DMs Test

Two forms of DM tests based on Hyland and Tse's (2004) model were administered: the first as a pre-test to evaluate the homogeneity of the two participating groups and the second as a post-test in order to compare the two groups after the intervention. The DMs used in the DMs tests are: in addition, furthermore, however, yet, but, and, thus, firstly, secondly, then, after that, finally, to conclude, my purpose is to, because, so, consequently, noted above, see figure, in part 2, according to, X states, namely, e.g., such as, for example, in other words, might, perhaps, possible, about, in fact, definitely, clearly, unfortunately, I agree, surprisingly, consider, note that, you can see, I, we, my, and our.

Fifty multiple-choice items were developed for these DMs. Fifty sentences from authentic texts of the appropriate level of difficulty were selected. Each sentence had one DM removed and four choices of DMs provided to students to fill in the gap. These items were piloted at another secondary school and an item analysis was performed. Henning's (1987) facility and discriminability indices were used to verify the appropriacy of each item. According to these indices, items with an item facility ranging from 0.33 to 0.67 and an item discrimination of 0.67 and above were considered appropriate. Thirty of the items had the appropriate item facility and item discrimination, and five of the remaining items had some minor problems that were addressed. These thirty-five items made up the final form of the DMs test.
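The item screening described above can be sketched numerically. The response matrix below is invented for illustration, and the discrimination index is computed here as the upper-third minus lower-third facility, one common reading of Henning's (1987) discriminability index:

```python
import numpy as np

# Rows = examinees, columns = items; 1 = correct, 0 = incorrect (invented data).
responses = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 1],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 0],
])

# Item facility: proportion of examinees answering the item correctly.
facility = responses.mean(axis=0)

# Item discrimination: facility in the top-scoring third minus facility in
# the bottom-scoring third, ranking examinees by total score.
totals = responses.sum(axis=1)
order = np.argsort(totals)
third = len(responses) // 3
low, high = responses[order[:third]], responses[order[-third:]]
discrimination = high.mean(axis=0) - low.mean(axis=0)

# Keep items with facility in [0.33, 0.67] and discrimination >= 0.67,
# the thresholds used in the study.
keep = (facility >= 0.33) & (facility <= 0.67) & (discrimination >= 0.67)
```

Items answered by almost everyone (or almost no one) fail the facility band, and items that do not separate high from low scorers fail the discrimination cut, mirroring how 35 of the 50 piloted items were retained.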
Classroom Materials

Students in the experimental group were given training sessions on DMs, which involved hand-outs and an exercise book to develop students' awareness and appropriate use of different categories of DMs in selected samples of reading passages. In an attempt to deal with the "Hawthorne Effect" (the claim that participants change their behaviour whenever they are being observed or included in a study), a few of these handouts were also given to the control group.

Procedure

Initially, the TOEFL Junior Standard Test was administered to determine the language proficiency level of the participating classes. This was followed by assigning the participating students to control and experimental groups. Before the intervention programme, the pretests for reading comprehension and DMs were administered to both the experimental and control groups. The learners in the experimental group were introduced to the treatment programme, which involved eight sessions of DM instruction. In each session, they were familiarised with some types of DMs, which were explicitly taught through specially designed activities and training exercises.

No explicit DM instruction was introduced to the control group participants. They were only given a few of the designed activities as they worked through their usual reading classes. After the intervention, in order to see the effect of the DM instruction on students' reading comprehension, participants in both groups were given post-tests in reading comprehension and DMs. By comparing the results obtained from the two groups, the researcher intended to investigate whether any significant difference existed between the performance of the experimental group and the control group after receiving DM instruction.
Results

In this section, a description of the statistical analyses of the data obtained in the present study is presented. First, descriptive statistics were used to determine the proficiency level of the participants. Then, a paired-samples t-test was performed to compare the performance of the participants in the experimental group on the pre- and post-tests. An independent-samples t-test was used to compare the pretest scores of both groups. To identify any significant difference between the experimental and control groups, an independent-samples t-test was run. Finally, to determine whether there was a relationship between learners' knowledge of DMs and their reading comprehension, a correlation analysis between their scores on the reading comprehension and DMs tests was done.

Administering the TOEFL Junior Standard Test

The TOEFL Junior Standard Test, consisting of three sections (listening comprehension, reading comprehension, and language form and meaning), was administered to two classes made up of 70 Saudi male third-grade secondary students from the Taif Directorate of Education assigned to an experimental group or a control group. The descriptive statistics of this test are shown in Table 1.

Administering the Reading Comprehension Pretest

The reading comprehension test (TELC) was also administered to the above-mentioned classes. The scores of the participants in reading comprehension in the pretest were analysed separately to ensure that the two groups were similar in terms of their reading ability before the intervention. The descriptive statistics of this test are shown in Table 2.
Checking the Normality of Pretest Reading Scores and Homogeneity of the Two Groups

To assess the normality of the distribution of the pretest reading comprehension scores, a normality test (the Kolmogorov-Smirnov test) was carried out. The non-significant result (sig. value of .200) indicated normality. An independent-samples t-test was run to check the homogeneity of the two groups and to compare reading comprehension scores before the intervention programme. As shown in Table 4, there was no significant difference in the scores of the control group (M = 13.40, SD = 5.897) and the experimental group (M = 12.80, SD = 5.944; t(86) = .081, p = .936, two-tailed). The magnitude of the differences in the means (mean difference = .114, 95% CI: -2.71 to 2.93) was very small (eta squared = .006).

Checking the Normality of DM Pretest Scores and Homogeneity of the Two Groups

To assess the normality of the distribution of the DM pretest scores, a normality test (the Kolmogorov-Smirnov test) was carried out, as shown in Table 5. The non-significant result (sig. value of .187) indicated normality. An independent-samples t-test was run to compare the DM pretest scores before the intervention for the control and the experimental groups. As shown in Table 6, there was no significant difference between scores for the control group (M = 18.54, SD = 6.070) and the experimental group (M = 17.83, SD = 5.943; t(86) = .497, p = .620, two-tailed). The magnitude of the differences in the means (mean difference = .714, 95% CI: -2.151 to 3.580) was very small (eta squared = .003).

The Effect of Explicit Instruction of DMs on EFL Learners' Reading Comprehension

An independent-samples t-test comparing the mean scores for the reading comprehension post-test of the two groups was carried out after the intervention programme. Table 7 shows that the Levene sig. value was 0.00, which is less than the 0.05 level of significance. This meant that the variances of the two groups could not be assumed to be equal. The table also shows that there was a significant difference, in favour of the experimental group, between the scores for the control group (M = 16.03, SD = 4.90) and the experimental group (M = 20.83, SD = 2.68; t(40) = -7.05, p = .000, two-tailed). The effect size was calculated to measure the magnitude of the differences between the two groups. Based on Cohen's (1988) guidelines, the effect size (mean difference = .866, 95% CI: -7.865 to -4.363) was very large (eta squared = .422). Moreover, a paired-samples t-test was conducted to evaluate the impact of the intervention on students' scores on the reading comprehension test. There was a statistically significant increase in reading comprehension test scores from Time 1 (M = 15.5, SD = 3.8) to Time 2 (M = 20.8, SD = 2.6), t(34) = 11.73, p = 0.005 (two-tailed).

The mean increase in reading comprehension scores was -5.257, with a 95% confidence interval ranging from -6.167 to -4.347. The eta squared statistic (.80) indicated a large effect size. These results suggest that explicit instruction in DMs can have a positive influence on EFL learners' reading comprehension ability. Thus, the significant and positive results of the experimental group, compared to the control group, on the reading comprehension post-test can be attributed to the explicit instruction in DMs, which was only introduced to the experimental group during the eight-session intervention programme.
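For paired designs, the eta-squared effect size can be recovered directly from the t statistic and its degrees of freedom via eta squared = t^2 / (t^2 + df); a minimal check against the reported paired-samples result (t(34) = 11.73) reproduces the .80 value:

```python
def eta_squared(t, df):
    """Eta squared effect size computed from a t statistic."""
    return t * t / (t * t + df)

# Paired-samples result reported above: t(34) = 11.73
print(round(eta_squared(11.73, 34), 2))  # 0.8
```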
The Relationship between the EFL Learners' Recognition of DMs and Their Reading Comprehension Level

A correlation analysis was run to verify whether there was a relationship between students' level of knowledge of DMs and their performance in reading comprehension. The relationship between perceived knowledge of DMs (as measured by the DM test) and perceived reading comprehension ability (as measured by the reading comprehension test) was investigated using a Pearson correlation coefficient. Preliminary analyses were performed to ensure no violation of the assumptions of normality, linearity, and homoscedasticity. As Table 9 indicates, there was a strong positive correlation between the two variables, r(68) = .68, n = 70, p = .0005, with high levels of perceived knowledge of DMs associated with higher levels of perceived reading comprehension.

Discussion

As noted by Al-Yaari et al. (2013) and Algouzi (2014), Saudi EFL learners' knowledge and use of DMs are insufficient to support the development of their language skills, especially reading. Recurrent reports (e.g., Al-Mansour & Al-Shorman, 2011; Al Abik, 2014; Alsamadani, 2011) have shown that Saudi EFL learners perform poorly on reading comprehension tasks, indicating the necessity of improving their reading skills. Since these studies were generally descriptive in nature, the current study attempted to fill the gap by providing empirical data, particular to the Saudi EFL context, based on an intervention programme to familiarise Saudi EFL learners with the most frequently used DMs and develop their reading comprehension skills. This study hypothesised that explicit DM instruction can have a significant positive influence on Saudi EFL learners' reading comprehension and that there is a significant positive relationship between Saudi EFL learners' knowledge of DMs and their reading comprehension skills. To test these hypotheses, two forms of tests in both DMs and reading comprehension were developed and administered to two
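The Pearson coefficient used here is simply the normalized covariance of the two score vectors; a minimal implementation (the score vectors below are hypothetical, for illustration only) looks like:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation: covariance divided by the product of
    the standard deviations, computed from centred vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Hypothetical DM-test and reading-test scores for four students
dm_scores = [20, 25, 28, 33]
reading_scores = [12, 15, 17, 21]
r = pearson_r(dm_scores, reading_scores)
```

A perfectly linear relationship gives r = 1 (or -1 for a decreasing one), so the reported r = .68 sits well inside the "strong positive" range.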
complete classes with a total of 70 Saudi male third-grade secondary students before and after the intervention programme. A correlation analysis was carried out to determine the relationship between learners' knowledge of DMs and their reading comprehension performance.

The first question in this study investigated the effectiveness of explicit DM instruction in improving the reading comprehension of Saudi EFL students. The first independent-samples t-test, prior to the intervention, showed no significant differences between the mean scores of the experimental and control groups on the reading comprehension test (see Table 4), whereas the second independent-samples t-test, after the intervention, indicated a significant improvement in learners' reading performance for the experimental group compared to that of the control group (see Table 7). The paired-samples t-test for the experimental group on the reading comprehension pretest and post-test revealed a significant improvement in reading performance after the intervention programme (see Table 8). The effect size was calculated, and the result showed that 80 percent of the difference of variance between the groups in reading comprehension performance could be explained by the intervention effect. Thus, these results confirm that explicit instruction in DMs can actually improve EFL learners' reading comprehension abilities. This result supports the conclusions of Innajih (2007), Pérez and Macia (2002), Bahrami (1992), Akbarian (1998), and Degand et al. (1999) that knowledge of DMs can enhance EFL learners' performance in reading comprehension tasks and that DMs play an important role in developing EFL learners' reading skills.
The second question of the current study examined the relationship between learners' knowledge of DMs and their reading comprehension proficiency. The results of the correlation analysis (see Table 9) between the pretest results in DMs and reading comprehension, as well as between their post-test results, indicated a significant positive relationship between EFL learners' level of knowledge of DMs and their performance on the reading comprehension test. This suggests that EFL learners who are good at recognising DMs tend to perform better in reading comprehension tasks, whereas those who are poor at recognising DMs tend to perform poorly on reading tasks. This finding agrees with Sun (2013), Martinez (2009), and Khatib and Safari (2011), who assert that knowledge of DMs correlates highly with reading comprehension and that DMs are very helpful in facilitating both listening and reading comprehension.

Knowledge of DMs can simultaneously serve several communicative functions in different dimensions. It helps readers comprehend texts by signalling new information, elaboration, suggestions, warnings, and disagreements. DMs are also needed to create and maintain successful interactions between the reader and the text. Creating texts without DMs greatly inhibits comprehension and can cause major communicative breakdown (Britonm, 1990).

This study had a number of limitations; the most obvious was the small sample size, which prevented the generation of a clear, generalised statement about the role played by direct instruction of DMs in L2 reading classes. However, several scholars consider that a sample of at least 30 participants is sufficient for correlational research and for comparative and experimental procedures (Dörnyei, 2007).
This study was further limited by the duration of the research, which was relatively short. Finally, the research findings of this study were limited by the quantitative nature of the research tools. Although the research tools used in the current study were very well established and served their purposes, the inclusion of other, qualitative instruments would produce more comprehensive data and add strength to the generated results through data triangulation.

In light of the findings of the present study regarding the effect of explicit instruction in DMs on developing the reading comprehension skills of Saudi EFL learners, there are several pedagogical implications and recommendations. First, this study has established the effectiveness of explicit instruction in DMs that introduced DMs to learners and gave them proper opportunities to learn and practice during the course of the programme. This approach needs to be encouraged, and curriculum designers working with the Saudi Ministry of Education should consider developing knowledge and use of DMs from an early stage in current and future EFL textbook projects. Second, explicit instruction in DMs should be adopted and advocated as part of the agenda for pre-service and in-service teacher training in Saudi Arabia as a way of supporting and implementing activities that promote DM recognition, practice, and production. Third, it should be noted that the teaching of DMs is a gradual process needing time and practice, so instant success should not be expected. This is because EFL learners, especially in the Saudi EFL context, are not very familiar with DMs and how they can facilitate comprehension. There is also the issue of L1 interference, which can negatively influence the process of internalising English DMs. Much rests with EFL teachers' patience, hard work, and willingness to develop this important aspect of reading comprehension. Fourth, the high correlation between the recognition of DMs and reading comprehension
on the test suggests that DMs are good indicators of EFL learners' understanding of texts. Therefore, DM activities (e.g., discourse cloze questions) can be incorporated to assess EFL learners' level of comprehension in textbooks and on reading comprehension tests.

Future research incorporating a similar design and a larger sample size would be of great value. Larger samples would make it possible to generalise the findings to an L2 population. Another area of possible research would be to examine the effect of explicit instruction in DMs at different proficiency levels. The benefit of looking across different proficiency levels would be the capturing of the reading comprehension progress rate and areas of development that might not be detected at one level of proficiency during a relatively short study span.

Future studies could be carried out to identify the reading skills most affected by DM instruction and to examine the relationship between DMs and the reading construct. Additional research that combines quantitative and qualitative methods is also needed. This would provide even richer data and potential for insight into the effect of DM instruction on reading comprehension.

Table 4. Independent-samples t-test of the means of the two groups on the reading pretest
Table 6. Independent-samples t-test of the means of the two groups on the DM pretest
Table 7. Independent-samples t-test of the means of the two groups on the reading post-test
Table 8. Paired-samples t-test for the experimental group on the reading pretest and post-test
Table 9. Correlation between the reading comprehension test and the DMs test
Solving Graph Problems Using Gaussian Boson Sampling

Gaussian boson sampling (GBS) is not only a feasible protocol for demonstrating quantum computational advantage, but also mathematically associated with certain graph-related and quantum chemistry problems. In particular, it is proposed that the generated samples from the GBS could be harnessed to enhance classical stochastic algorithms in searching for some graph features. Here, we use Jiǔzhāng, a noisy intermediate-scale quantum computer, to solve graph problems. The samples are generated from a 144-mode fully connected photonic processor, with photon-clicks up to 80, in the quantum computational advantage regime. We investigate the open question of whether the GBS enhancement over classical stochastic algorithms persists, and how it scales, with an increasing system size on noisy quantum devices in the computationally interesting regime. We experimentally observe the presence of GBS enhancement with large photon-click number and a robustness of the enhancement under certain noise. Our work is a step toward testing real-world problems using the existing noisy intermediate-scale quantum computers, and hopes to stimulate the development of more efficient classical and quantum-inspired algorithms.
Recent experiments have constructed noisy intermediate-scale quantum (NISQ) devices and shown increasingly more convincing evidence for quantum computational advantage [1-5], a milestone demonstrating that quantum devices can solve sampling problems overwhelmingly faster than classical computers. A natural next step is to test whether these NISQ devices can solve problems of practical interest. Proof-of-principle demonstrations of solving graph problems assisted by the GBS have been reported [16-19], however, in regimes where the GBS device dynamics can be easily simulated on classical computers. An important and open question is whether the GBS could give enhancement on increasingly larger devices in the computationally interesting regime, and how the performance is affected by noise in NISQ devices. Furthermore, previous demonstrations on finding dense subgraphs could only address the problem with nonnegative-valued sampling matrices, for which efficient classical algorithms of estimating the sampling probability exist [20, 21] and a quantum-inspired classical algorithm was recently developed [22]. Here, we test solving nonplanar graph problems on the NISQ photonic quantum processor, Jiǔzhāng, with 50 single-mode squeezed states input into a 144-mode linear optical network [2, 3].
We operate Jiǔzhāng in the computationally interesting regime to enhance stochastic algorithms solving two graph problems, namely the Max-Haf problem [9] and the dense k-subgraph problem [10]. We benchmark how the performance scales as a function of the GBS size, and how it is influenced by certain noise [23].

In the GBS, arrays of squeezed vacuum states are sent through a multi-mode interferometer and sample the output scattering events. Due to its Gaussian properties, the output state can be described by its Husimi covariance matrix $\sigma_Q$ [24, 25], for which the sampling matrix is expressed as

$$\mathcal{A} = X\left(\mathbb{1} - \sigma_Q^{-1}\right), \qquad X = \begin{pmatrix} 0 & \mathbb{1} \\ \mathbb{1} & 0 \end{pmatrix}.$$

The sampling matrix $\mathcal{A}$ is in a block matrix form

$$\mathcal{A} = \begin{pmatrix} A & L \\ L^{T} & A^{*} \end{pmatrix},$$

where $A$ is a symmetric matrix, and $L = 0$ if the Gaussian state is a pure state. An illustration of the correspondence between a graph and a GBS setup is shown in Fig. 1(a). Any undirected graph can be represented by its adjacency matrix $\Delta$, which is symmetric (i.e., $\Delta_{ij} = \Delta_{ji}$), and the adjacency matrix element $\Delta_{ij}$ corresponds to the weighted value of the edge connecting vertex $i$ to vertex $j$. The adjacency matrix can be encoded into the sampling matrix $A$ of a pure-state GBS with a proper rescaling factor $c$ (see the Supplemental Material [26]):

$$A = c\,\Delta,$$

and by Takagi-Autonne decomposition [27] the corresponding GBS setup can be constructed. Each mode of the output light field maps to a column and row of the adjacency matrix, and each GBS sample corresponds to a submatrix of the sampling matrix $A$ obtained by taking the elements of the corresponding rows and columns. Once the relationship between the GBS device and the adjacency matrix of the graph under study is established, the GBS samples, whose probability is positively correlated with the mathematical quantity called the Torontonian [28] (for threshold detection) or the Hafnian [29] (for photon-number-resolving detection) of the corresponding submatrix, are harnessed to enhance solving the graph problems of interest. We study the GBS enhancement on solving the Max-Haf problem and the dense k-subgraph problem.
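As a concrete sketch of the encoding step, the snippet below rescales a symmetric adjacency matrix so that A = c*Delta has all singular values strictly below 1, as a pure-state encoding requires; the `margin` parameter is our own choice, and the use of arctanh to obtain per-mode squeezing from the Takagi singular values follows the standard construction (the paper's exact scaling convention is in its Supplemental Material and may differ).

```python
import numpy as np

def encode_graph(delta, margin=0.95):
    """Rescale a symmetric adjacency matrix so that A = c * delta has
    all singular values strictly below 1, as needed for a pure-state
    GBS encoding; the per-mode squeezing follows from the Takagi
    singular values of A (for real symmetric matrices these coincide
    with the ordinary singular values)."""
    delta = np.asarray(delta, dtype=float)
    assert np.allclose(delta, delta.T), "adjacency matrix must be symmetric"
    s_max = np.linalg.svd(delta, compute_uv=False)[0]
    c = margin / s_max                    # rescaling factor
    A = c * delta
    s = np.linalg.svd(A, compute_uv=False)
    r = np.arctanh(s)                     # squeezing parameter per mode
    return A, c, r
```

For the complete graph on four vertices (largest singular value 3), this yields c just under 1/3 and four finite squeezing parameters.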
The Max-Haf problem is, for a complex-valued matrix $B$ of any dimension, to find a submatrix $B_S$ of fixed even dimension $k = 2m$ with the largest Hafnian in absolute value squared. The Hafnian was originally introduced in interacting quantum field theory and plays a variety of roles in physics and chemistry [8, 30-36]. When the matrix is an adjacency matrix composed of 0s and 1s, the Hafnian can be interpreted as the number of perfect matchings of the graph [11]. The Max-Haf problem is known to belong to the NP-hard complexity class [9].

The dense k-subgraph problem is, for an $n$-vertex graph $G$ with adjacency matrix $\Delta$, to find its subgraph $G_S$ of $k < n$ vertices with the largest density

$$d(G_S) = \frac{1}{k(k-1)} \sum_{i,j} (\Delta_S)_{ij},$$

where $\Delta_S$ is the adjacency matrix of $G_S$. The dense k-subgraph problem is of fundamental interest in both mathematics [37] and applied fields like data mining, bioinformatics, finance, and network analysis [38-45]. Although there are deterministic algorithms for finding subgraphs of large density, they can be fooled, and thus stochastic algorithms are important in certain scenarios [10].

The principle of the GBS enhancement on solving the two problems by stochastic algorithms can be understood from the concept of proportional sampling. Since the GBS samples are more likely to have a larger Hafnian in modulus (hereinafter we use Hafnian to refer to the Hafnian in modulus), it also holds that subgraphs corresponding to the GBS samples are more likely to have a larger Hafnian. Therefore, one can use the GBS samples to boost the effectiveness of stochastic algorithms in solving the Max-Haf problem by augmenting their success probability. Furthermore, it is proved that for a graph of 0s and 1s, its density is positively correlated with the Hafnian, and dense k-subgraph problem solving can also be expected to gain enhancement from GBS [10].
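For intuition, both target functions can be evaluated by brute force on small graphs. The reference implementation below uses the recursive expansion of the Hafnian (exponential time, illustration only, not the algorithm used on the supercomputer) together with the density definition above:

```python
import numpy as np

def hafnian(A):
    """Reference Hafnian via the recursive expansion
    haf(A) = sum_j A[0, j] * haf(A with rows/cols 0 and j removed).
    Exponential time; only usable for small matrices."""
    A = np.asarray(A)
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2:          # odd dimension: no perfect matching
        return 0.0
    rest = list(range(1, n))
    total = 0.0
    for j in rest:
        keep = [m for m in rest if m != j]
        total += A[0, j] * hafnian(A[np.ix_(keep, keep)])
    return total

def density(delta_S):
    """Subgraph density: sum of adjacency entries over k(k-1)."""
    delta_S = np.asarray(delta_S, dtype=float)
    k = delta_S.shape[0]
    return delta_S.sum() / (k * (k - 1))
```

On the complete graph K4 (0/1 adjacency matrix) the Hafnian is 3, matching its three perfect matchings, and the density is 1.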
From another point of view, by working in a quantum-classical hybrid scheme, the GBS serves as an oracle to significantly narrow down the combinatorial search space of the stochastic algorithm, since subgraphs of small Hafnian or density are unlikely to be sampled. These two graph problems differ from each other in their target function's computational complexity: the Hafnian is hard to compute, while the density can be evaluated efficiently. Investigation of these two graph problems of distinct properties provides us with insights into the dependence of the GBS enhancement on the computational complexity of the graph feature itself.

While the above discussion holds for the ideal GBS, in experiments we need to consider three realistic deviations. (i) The sampling matrix retrieved from the experiment is not always ideally non-negative as in the original proposal; imperfections in circuits can introduce negative or imaginary terms into the sampling matrix. (ii) Experimental noise like photon loss causes mixed-state sampling in GBS, which can result in a biased diagonal block matrix A and a nonzero off-diagonal block matrix L ≠ 0 [46]. (iii) Threshold detectors are usually used instead of photon-number-resolving detectors [47]. To check whether the proportional-sampling mechanism holds for the GBS with threshold detectors on complex-valued sampling matrices, we perform Monte Carlo simulation to reveal the numerical correlation between the Torontonian and the Hafnian or density for randomly generated complex sampling matrices (see the Supplemental Material [26]). As shown in Fig. 1(b) and (c), the positive correlation between the Torontonian and the Hafnian or density validates the underlying principle of proportional sampling, and portends the occurrence of GBS enhancement.

We proceed to test the GBS enhancement on solving the Max-Haf problem and the dense k-subgraph problem. Two stochastic algorithms, namely random search (RS) and simulated annealing (SA) [9, 10], are studied.
RS represents the naive way of solving the combinatorial problem by uniformly sampling from the whole solution space, which is free from being trapped by a local optimum but is costly and inefficient. SA combines mechanisms from both random exploration, which prevents it from being stuck in local minima, and hill climbing, which enables it to approach good solutions fast, but a proper choice of parameters is crucial for guaranteeing the algorithm's performance.

FIG. 3. (a), (b) The GBS enhancement on the Max-Haf problem. The score advantage as a function of photon-click number is shown in (a), which is defined as the ratio of the maximum Hafnian in squared modulus searched at 1000 steps by the GBS-enhanced RS algorithm to that searched by the RS algorithm. The speed advantage, which is defined as the ratio of the number of steps reaching the target value by the RS algorithm to that by the GBS-enhanced RS algorithm, is shown in (b) as a function of photon-click number. The target value of each trial is set as that reached by the RS algorithm at 1000 steps. A clear rising trend with increased photon-click number can be observed for both the score advantage and the speed advantage. (c), (d) The GBS enhancement on the dense k-subgraph problem for various photon-click numbers. The mean photon-click number of the experiment is 61. The score advantage is displayed in (c), which is defined as the ratio of the density optimized at 10000 steps by the GBS-enhanced RS algorithm to that by the RS algorithm, as a function of photon-click number. The speed advantage, which is defined as the ratio of the number of steps reaching the same density by the RS algorithm to that by the GBS-enhanced RS algorithm, is shown in (d) for various photon-click numbers. For each trial, the target value is set as that reached by the RS algorithm at 10000 steps. No significant increasing trend with photon-click number is observed for this problem. Error bars indicate standard error.
Together, the two algorithms with distinct working subroutines help benchmark the enhancement of GBS on graph applications more comprehensively.

The experiment is performed on a randomly generated and fully connected 144-mode optical interferometer, and a subset of samples with coincident photon-click number up to 80 is used for the study. Figure 2(a) and (b) show the maximum Hafnian of a 12-vertex subgraph on a 144-vertex full graph found by the two algorithms and their GBS-enhanced variants as a function of searching steps. For both the RS and SA algorithms, it is evident that the GBS-enhanced variants improve the effectiveness of the algorithms by finding a larger Hafnian within the same number of steps. An illustration of the full graph corresponding to the experiment, together with the subgraph searched by the GBS-enhanced SA algorithm highlighted, is shown in Fig. 2(c). Similarly, Figure 2(d) and (e) plot the largest density found at various steps for the four algorithms. The GBS samples are 80 photon-click events from the 144-mode quantum device, which are in the quantum advantage regime. On average, each sample would take Frontier, the current fastest supercomputer in the world [48], more than 700 seconds to generate using exact methods, as estimated with the state-of-the-art classical sampling algorithm [49], and we used 221891 samples in total for the study, which amounts to ~5 years on Frontier. It is observed that both the RS and SA algorithms gain enhancement from the GBS samples in searching for subgraphs of higher density at the given step. Notably, the density found by the deterministic greedy algorithm [50], which is marked as the horizontal dashed line, can be outperformed by the GBS-enhanced SA algorithm, confirming the advantage of stochastic algorithms. Having established the GBS enhancement, we continue to investigate how this enhancement scales on our device. We benchmark the GBS enhancement by defining the score advantage and speed advantage.
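The plain RS baseline and its GBS-enhanced variant differ only in where candidate subgraphs come from; a minimal sketch (with the GBS samples abstracted as a provided list of clicked-mode vertex subsets, and all names our own) is:

```python
import random

def random_search(delta, k, steps, proposals=None, seed=0):
    """Random search for a dense k-subgraph.

    delta:     adjacency matrix as a list of lists.
    proposals: optional list of candidate vertex subsets of size k
               (e.g. the clicked-mode sets of GBS samples); if None,
               subsets are drawn uniformly (plain RS baseline).
    Returns the best subset found and its density.
    """
    rng = random.Random(seed)
    n = len(delta)
    best, best_d = None, -1.0
    for _ in range(steps):
        if proposals:
            sub = rng.choice(proposals)       # GBS-enhanced proposal
        else:
            sub = rng.sample(range(n), k)     # uniform proposal
        d = sum(delta[i][j] for i in sub for j in sub if i != j) / (k * (k - 1))
        if d > best_d:
            best, best_d = sub, d
    return best, best_d
```

On a toy graph with a planted clique, the proposal-driven search finds the maximum density immediately, while the uniform baseline must stumble on it by chance.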
The former is, for a given number of steps, the maximal score (in terms of the Hafnian or the density) obtained by the GBS-enhanced algorithm divided by that obtained by the classical algorithm alone. The latter is, to reach a target score, the ratio of the searching steps needed by the classical algorithm to those needed by the GBS-enhanced algorithm [26]. We use the parameter-free RS algorithm to probe the scaling properties. Figure 3(a) shows the scaling of the score advantage of the GBS enhancement for a fixed 10^3 steps. Remarkably, the score advantage rises steadily, from ~24 at a photon click of 12 to ~92 at a photon click of 28. The speed advantage is plotted in Fig. 3(b), also as a function of photon-click number. The speed advantage starts at ~89 at a photon click of 12 and becomes increasingly larger as the size increases, reaching ~212 at 28 photons. Here, due to the computational overhead, we use the Sunway TaihuLight supercomputer to evaluate the Hafnian. Overall, the results of Fig. 3(a, b) provide strong evidence that the GBS enhancement, as benchmarked by the score advantage and speed advantage, increases with the photon-click number in solving the Max-Haf problem by the RS algorithm on Jiǔzhāng.

Figure 3(c) plots the score advantage for the dense k-subgraph problem at increasing photon-click number. While all the data points show a positive advantage (>1), there is no obvious increasing trend at larger size. A similar behavior is observed in the speed advantage, as shown in Fig. 3(d). In the Supplemental Material [26] we show numerically simulated results for an ideal GBS sampler on the dense k-subgraph problem for both a randomly generated non-negative-valued graph and a complex-valued graph, which exhibit trends of the score advantage and speed advantage similar to those reported in our experiment.

Noise is a major problem for the NISQ device.
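Given per-step best-so-far traces for a classical run and a GBS-enhanced run, the two benchmarks can be computed as follows (a sketch under our own trace convention: each trace holds the best score found up to that step):

```python
def score_advantage(gbs_trace, classical_trace, step):
    """Best value found by `step` in the GBS-enhanced trace divided
    by the best value found by the classical trace at the same step."""
    return gbs_trace[step] / classical_trace[step]

def speed_advantage(gbs_trace, classical_trace):
    """Steps the classical trace needs to reach its own final value,
    divided by the steps the GBS-enhanced trace needs to reach it."""
    target = classical_trace[-1]

    def steps_to(trace):
        for i, v in enumerate(trace):
            if v >= target:
                return i + 1
        return len(trace)

    return steps_to(classical_trace) / steps_to(gbs_trace)
```

A toy run where the enhanced search starts near the optimum shows both ratios exceeding 1, the same sense in which the experimental advantages are reported.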
In the GBS, photon loss [19] (which can be caused by the limited efficiency of the optical elements and detection) and thermal noise [46] (which can be caused by spatial mode mismatch of the sources) can turn the pure-state GBS into a mixed-state GBS. For graph problem solving, these noises can make the sampling matrix deviate from the ideal [29], and could decrease the positive correlation between the Hafnian or density of the encoded matrix and that of the sampling matrix. To characterize the influence of these noises on the GBS enhancement on graph problem solving, we benchmark them with the RS algorithm, which is free from parameter choosing. We compare the steps needed for achieving a target value of the problem between samplers of various noise levels. The probability distribution of the steps follows the geometric distribution, which gives the probability that the first occurrence of success requires k independent trials:

$$P(k) = (1 - p)^{k-1}\, p,$$

where k = 1, 2, ... is the number of steps, and p is the probability that a GBS sample can produce a better result than the target. The noise's influence on the GBS enhancement can be simply benchmarked by the parameter p, since a larger p indicates that fewer steps are needed, which corresponds to a stronger GBS enhancement, and vice versa.

To investigate the effect of photon loss, we theoretically simulate the performance with an ideal sampler and with an overall photon loss of 25% and 50%, for the same optimization task. Figure 4(a) and (b) show histograms of the number of steps for the GBS-enhanced RS algorithms to achieve the target value. There is a significant reduction of the steps at increasing system efficiency η. The p value of the sampler with unit efficiency is 0.0196 (0.0024) for the Max-Haf (dense k-subgraph) problem, whereas the lossy samplers with efficiency η = 0.5 and 0.75 correspond to p = 0.0039 (0.0012) and p = 0.0071 (0.0018), respectively. The results indicate that lower photon loss will lead to a stronger GBS enhancement.
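The geometric-distribution model makes the role of p concrete: the expected number of steps to the first success is 1/p, so the quoted p values translate directly into average search lengths (for instance, p = 0.0196 corresponds to about 51 steps on average, while the lossy p = 0.0039 corresponds to about 256):

```python
def geometric_pmf(k, p):
    """P(first success at step k) = (1 - p)**(k - 1) * p."""
    return (1 - p) ** (k - 1) * p

def expected_steps(p):
    """Mean of the geometric distribution."""
    return 1 / p
```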
A recent theoretical study reported similar findings, including the dependence of the GBS enhancement on partial photon distinguishability [51]. Figures 4(c) and (d) show the theoretical simulation results for thermal noise [52]. Three examples are studied, with the mean thermal photon number chosen as n̄ = 0, 0.25, 0.5. Again, a strong decrease of the required steps is observed for lower thermal noise. The p value of the ideal sampler for the Max-Haf (dense k-subgraph) problem is 0.0194 (0.0024), whereas the p value of the sampler with 0.25 and 0.5 thermal noise is 0.0023 (0.0011) and 0.0008 (0.0006), respectively. The results show the importance of eliminating thermal noise to achieve a higher GBS enhancement. Having studied these effects theoretically, we now benchmark the noise influence experimentally. The experimental results at a typical noise level, where η̄ = 0.472 and n̄ = 0.02, are compared with a controlled higher noise level, η̄ = 0.333 and n̄ = 0.192. As shown in Figs. 4(e) and (f), the experimental samples with the low noise level demonstrate a stronger GBS enhancement for both graph problems, in good agreement with the theoretical simulation. The p value for the low-noise experimental sampler on the Max-Haf (dense k-subgraph) problem is 0.0031 (0.0012), whereas it is 0.0014 (0.0008) for the controlled high-noise experimental sampler. Interestingly, samples from the modest-noise-level experiments, though with less enhancement, can still improve the RS algorithm. In this Letter, we have demonstrated the GBS enhancement of stochastic algorithms in solving two graph problems of distinct properties with the 144-mode NISQ device Jiǔzhāng in the quantum computational advantage regime. It remains an open question, however, whether GBS can yield an advantage over improved classical algorithms and quantum-inspired algorithms.
Also, the GBS enhancement can depend on the properties of the input graphs, for which more comprehensive algorithm analysis and discussion of various situations are expected. We hope that our work will stimulate experimental efforts on larger-scale, higher-fidelity and fully programmable GBS, the exploration of real-world applications whose computational problems can be mapped onto GBS, and the development of more efficient classical and quantum-inspired algorithms.
Evaluation of nutritional status of United Arab Emirates University female students The purpose of this study was to evaluate the prevalence of underweight, overweight, and obesity, the most widespread diseases, and the food consumption patterns among UAE University female students. Height and weight were measured in a sample of 400 female students aged 18-25 years. A self-administered questionnaire addressing food habits, food consumption, sports practice and disease states was completed by each student. Body Mass Index (BMI) was calculated for each subject. The WHO classification was used for defining underweight (BMI < 18.5 kg/m²), overweight (BMI = 25-29.9 kg/m²) and obesity (BMI ≥ 30 kg/m²). The results indicate that the prevalence of underweight, overweight and obesity was 13%, 19.4% and 6.7%, respectively. The most widespread self-reported diseases in descending order of magnitude were anemia (19%), food allergy (4.8%), hypertension (2.8%) and diabetes (1%). 62% of the students did not practice any kind of sport. Food habits results showed that 44.8% of the respondents did not take breakfast, 34.9% took fast food at least once a day, and 52.3% took only 1 to 2 meals/day. Results of food consumption showed that 54.4% of the students consumed a diet low in cereals, 51.5% consumed a diet low in vegetables, 49.5% consumed a diet low in fruits, and 46.7% of the students consumed a diet high in fat. Results also noted a statistically significant association between the consumption of cereals and fruits and BMI classes. Therefore, there is a necessity to develop a nutrition education program for UAE students in order to help them change their food habits and avoid the negative health consequences of being overweight or underweight. Key words: Obesity, Underweight, Food habits.
(Arabic title: Evaluation of the nutritional status of female students of the United Arab Emirates University) Introduction Obesity is currently an escalating epidemic that affects many countries in the world, including the Arabian Gulf region (Al Isa, 1995; Al Shammari et al., 1994; Al Mahroos and Al-Roomi, 1999; El Mugamer et al., 1995; Al-Awadi and Amine, 1989), where this condition is responsible for increasing death rates annually. It is a major contributor to precipitating or aggravating chronic diseases (Guo and Chumlea, 1999), including type 2 diabetes mellitus, coronary heart disease, and hypertension. Several studies have shown an increased prevalence and incidence of type 2 diabetes in obese persons in the Arabian Peninsula (Moussa et al., 1999; Al-Mahroos and McKeigue, 1998; Al-Nuaim et al., 1996; El Mugamer et al., 1995). The prevalence of obesity, especially among females, in Arabian Gulf countries has increased dramatically during the past decades (Musaiger, 1987). This increase in the prevalence of obesity is due to lifestyle changes (physical inactivity, leisure and modernization) and a nutrition transition, which are related to changing economic, social, and health factors. It is well known that Arabian Gulf countries have moved toward the higher-fat and higher-refined-carbohydrate Western diet (Popkin, 2001). Major dietary changes include a large increase in the consumption of fat and added sugar in the diet, often a marked increase in animal food products contrasted with a fall in total cereal intake and in vegetable and fruit consumption. The United Arab Emirates enjoys a high per capita income, which is considered among the highest in the world. The country has undergone significant changes in nutritional and lifestyle habits, similar to those in other Arabian Gulf countries, over the last three decades. Such changes are expected to have an impact on the magnitude of chronic diseases, including obesity. In an earlier study, 71% of married women and 56% of married men were found to be obese (Musaiger and Al-Ansari, 1992).
These percentages are higher than those reported in other Gulf countries, indicating that obesity is a public health problem in the United Arab Emirates. Other studies have reported a high prevalence of overweight and obesity among UAE University female students (Musaiger and Radwan, 1995; Amine and Samy, 1996). In the present study, we examine the prevalence of underweight, overweight and obesity, and possible associated factors (food habits and physical activity), among female students at the UAE University. Subjects The study consisted of 400 female students aged between 18-25 years (2.5% of the total female students enrolled in the United Arab Emirates University in 2001/2002). Students were recruited by announcement at the hostels and at the different colleges. Subjects were assembled in the dietetic clinic of the Department of Nutrition and Health at UAE University, where they were interviewed by trained graduate students and their weights and heights were measured. Anthropometric data Standard techniques were adopted for obtaining anthropometric measurements. Subjects were weighed in light clothes and with no shoes using a Seca scale to the nearest 0.1 kg. Height was measured to the nearest 0.1 cm using a wall-mounted stadiometer. The Body Mass Index (BMI) was determined as the ratio of weight to height squared (kg/m²), and students were categorized according to the WHO classification. Food and life style questionnaire Each student was asked to complete a questionnaire which addressed questions related to food consumption patterns, food habits and physical activity (nature and duration). Statistical analysis All statistical analysis was conducted using SPSS (11.0) for Windows. In addition to descriptive statistics, the Chi-square test was used to assess associations between categorical variables. A P value ≤ 0.05 was used as the criterion of statistical significance.
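The BMI computation and WHO classification described in the methods can be sketched in a few lines. This is an illustration only; the sample values plugged in are the study's reported mean weight and height, and the obesity cut-off uses the standard WHO threshold of 30 kg/m².

```python
def bmi(weight_kg, height_cm):
    """Body Mass Index: weight divided by height squared (kg/m^2)."""
    h = height_cm / 100.0
    return weight_kg / (h * h)

def who_class(b):
    """WHO BMI categories: <18.5 underweight, 18.5-24.9 normal,
    25-29.9 overweight, >=30 obese."""
    if b < 18.5:
        return "underweight"
    if b < 25.0:
        return "normal"
    if b < 30.0:
        return "overweight"
    return "obese"

b = bmi(57.9, 158.7)  # the study's mean weight (kg) and height (cm)
print(round(b, 1), who_class(b))
```

Plugging in the study's means reproduces the reported mean BMI of about 22.9 kg/m², which falls in the normal category.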
The percentage of the study population in different BMI categories was calculated for the total sample and stratified by age group. Table 1 describes the sample in terms of weight, height and BMI. Mean weight and height for the different age groups were 57.9 ± 0.6 kg and 158.7 ± 0.3 cm, respectively; BMI was 22.9 ± 0.2 kg/m². Results show that the mean weight and height increased gradually with age, reaching their maximum between 20-21 years old. We noted a significant difference in weight and BMI according to age. Results The distribution of female students by age and BMI is illustrated in Table 2. The results show that 6.7% were obese, 19.4% were overweight, 60.9% were normal, and 13% were underweight. The prevalence of overweight increased with age group and reached its highest rate (47.6%) among students aged between 20-21 years, then decreased to 34.7% among students older than 22 years. The prevalence of obesity was lowest (11.5%) among young students aged between 18-19 years old, while the highest rate (46.2%) was observed in the 20-21 years group. The opposite trend was noted for underweight, which was most common among young students. Table 3 shows factors associated with BMI. Results indicate that 45% of female students skipped breakfast. 62% of students did not practice sport, and 38% practiced a mild sport (walking); 6.8% of them were obese. 34.9% of students reported that they consumed fast food at least once a week, and 52.3% took only 1 to 2 meals/day. It is noted that the highest prevalence of underweight, overweight and obesity was observed among students living in the hostels compared to students living at home. We have to note that the factor age was found to be significantly associated with BMI classes. Table 4 indicates the most widespread self-reported diseases. Results show that the most frequent diseases in descending order of magnitude were anemia (19%), food allergy (4.9%), hypertension (2.8%) and diabetes (1%).
Table 5 illustrates the association between food consumption levels and BMI. Results show that 32.6% of students reported a high consumption of cereals; among them, 22% and 9.4% were overweight and obese, respectively. 46.7% consumed a high level of fat; among them, 18.1% and 10.4% were overweight and obese, respectively. 27.2% and 40.8% of students reported a high consumption of meat and dairy products, respectively. It is noted that a high percentage of students consumed a low level of vegetables and fruits (51.5% and 49.5%, respectively). Results also show that a high percentage of obese and overweight students reported a high consumption of dairy foods, meat and fat and a low consumption of cereals, vegetables and fruits. It is also noted that a high percentage of overweight and obese students reported a high consumption of cereals, vegetables and fruits compared to students with normal weight. We noted a statistically significant association between consumption and BMI groups for cereal and fruit consumption. Discussion The present study was based on a limited sample of United Arab Emirates University female students attending the dietetic clinic and, therefore, the results do not necessarily reflect the trends of the whole student population. Results showed that 19.4% and 6.7% of female students were considered overweight and obese, respectively, compared with 8% and 1% in European countries (Bellisle et al., 1995). The rates reported in this study are much higher than those reported in Europe, but they are close to those reported in Arabian Gulf countries. In the same population (UAEU female students), overweight and obesity were found to be 19% and 9.8%, respectively (Musaiger and Radwan, 1995), and in another study Amine and Samy (1996) reported that 10.7% and 34.5% were considered overweight and obese. The obesity rates reported in the second study are higher than our results.
It is noted that in the latter study, the authors did not use BMI to classify students but rather the ratio of weight-for-height percentiles. This can explain the large difference between the two studies. We also reported a high prevalence of underweight (13%), especially among young students, which confirmed previous results (Musaiger and Radwan, 1995; Amine and Samy, 1996). Musaiger and Radwan reported that 20% of female students were underweight, using BMI ≤ 20 as the criterion for underweight, which is different from our criterion (BMI ≤ 18.5). Other studies in the Arabian Gulf countries reported a high prevalence of overweight and obesity. In Saudi Arabia, Al Nuaim et al. (1996) found that the prevalence of overweight and obesity among females aged over 15 years was 27% and 24%, respectively. In Bahrain, overweight and obesity were found to be 38% and 16%, respectively (Al Mannai et al., 1996). Moreover, we found that age was significantly associated with overweight and obesity, in agreement with other studies (Al Isa, 1999; Musaiger et al., 1993). Our results confirmed the assumption that underweight and overweight are a common public health problem among this age group. Physical activity and housing were not significantly associated with overweight and obesity, confirming earlier observations on the same population (Amine and Samy, 1996). It is noted that the prevalence of obesity seems to have decreased among the UAEU students compared to former studies, but the rate of underweight is still high (Musaiger and Radwan, 1995; Amine and Samy, 1996). This can be due to students' increased awareness of body image, which has led to the increase in the rate of underweight. The influence of mass media and TV channels on food habits and food behaviors is very important, especially among young students. While performing the study, we received at our dietetic clinic many students who were obsessed with losing weight despite the fact that they had a normal weight.
This situation is totally different from that reported by Samy and Amine (1996) in their study of the same population. They reported that students with normal weight were seeking counseling to gain weight just to satisfy their husbands, who prefer obese women. Concerning food consumption, we noted a high consumption of meat and fat and a low consumption of cereals, vegetables and fruits. The increase in the proportion of overweight and obese students reporting a high consumption of cereals, vegetables and fruits compared to normal-weight students could be explained by the fact that they became more aware of a healthy diet (rich in cereals, vegetables and fruits) or that they overestimated their consumption. We evaluated the nutritional value of meals distributed in the hostels and noted a high consumption of starchy products, especially rice and bread (15 servings), meat (5-7 servings), fruits (6 servings) and fat (7 servings), and a low consumption of vegetables (2-3 servings). We should take into consideration that this evaluation concerned the amount distributed to each student but not the quantity actually consumed by each student. In addition, many students skipped lunch and, most of the time, skipped supper. This may be attributed to the timetables as well as to living in hostels. The timetable of students starts at 8 am and may end at 7 pm, with several breaks in between. This certainly affects the time of consuming meals. The great majority of students are used to having a snack between meals. The components of the snack vary among potato chips, chocolates, soft drinks and sweets. Also, living in the university hostels limits the type and choice of food eaten. The rate of overweight and obesity in the present study is much higher than that reported in Europe (Bellisle et al., 1995).
These differences could be attributed to many factors and to the differences in the model of acceptable body image between the two regions, being more extreme in Europe than in the Arabian Gulf region, even though the food supply is abundant in both regions. There may be greater pressure to be thin in Europe than in the Gulf region. We did not find significant differences between obese and normal students regarding physical activity. Nevertheless, physical activity may be foremost among the factors contributing to the level of overweight and obesity among UAEU female students. This can be due to the fact that social and religious norms may preclude female students, especially obese and overweight ones, from engaging in public sports. Conclusions The prevalence of underweight, overweight and obesity among United Arab Emirates University students was high. This may partly be due to the social norms and lifestyle of the students. There is an urgent need to increase the level of awareness among students of the ill effects of overweight and obesity as well as of underweight. This requires intervention programs focused on promoting changes in lifestyles and food habits and on increasing physical activity.
Plombage for Hemoptysis Control in Pulmonary Aspergilloma: Safety and Effectiveness of Forgettable Surgery in High-Risk Patients Objectives: To evaluate plombage surgery for hemoptysis control in pulmonary aspergilloma in high-risk patients. Methods: This study was carried out on 75 pulmonary aspergilloma patients presenting with hemoptysis who underwent plombage surgery over a period of approximately 7 years (November 2011–September 2018) at Pham Ngoc Thach Hospital. They revisited the hospital 6 months after plombage surgery and considered plombage removal. The group whose plombage was removed was compared with the group whose plombage was retained 6 and 24 months after surgery. Results: Hemoptysis reduced significantly after surgery. Hemoptysis ceased in 91.67% of the patients and diminished in 8.33% of the patients 6 months after surgery. Similarly, hemoptysis ceased in 87.32% of the patients and diminished in 12.68% of the patients 24 months after surgery. Body mass index (BMI), Karnofsky score, and forced expiratory volume in one second (FEV1) increased. Plombage surgery was performed with an operative time of 129.5 ± 36.6 min, blood loss during operation of 250.7 ± 163.1 mL, and a mean number of table tennis balls of 4.22 ± 2.02. No deaths related to plombage surgery were recorded. Plombage was removed in 29 cases because of patients' requirements (89.8%), infection (6.8%), and pain (3.4%). No patients developed complications after the treatment, and there were no statistically significant differences between the two groups. Conclusions: Plombage surgery is safe and effective for hemoptysis control in pulmonary aspergilloma. To minimize the risk of long-term complications, surgeons should remove the plombage 6 months after the initial operation. Introduction The progression of this disease injures lung structures and may threaten the lives of patients through hemoptysis.
[1][2][3][4] Immediate treatment is critical for these cases, and the type of treatment depends on the patient's condition. 5) While surgical treatment with pulmonary resection is considered the first-line method, 6) with low morbidity and mortality when used for recurrent hemoptysis in low-risk patients, [7][8][9] it should not be recommended for high-risk patients because of its considerable morbidity. 10) As such, alternative therapies, such as anti-fungal medication, bronchial artery intervention, and cavernostomy, have been introduced. 11) Cavernostomy is an effective 6,7,[12][13][14] and simple 13,15,16) treatment for high-risk patients. Among the many types of cavernostomy, plombage surgery is considered for hemoptysis control in pulmonary aspergilloma. From the 1930s to the 1950s, plombage, also known as extraperiosteal or extrapleural pneumolysis, was a popular surgical treatment for cavitary tuberculosis. The method was based on the principle that a diseased lobe of the lungs would heal faster when it was physically deflated. However, this procedure is no longer performed because of the introduction of anti-tuberculosis drug therapy. 17) Our study describes in detail the patients who underwent plombage surgery for hemoptysis control in pulmonary aspergilloma. Materials and Methods The subjects of our study were 75 patients who were diagnosed with hemoptysis resulting from pulmonary aspergilloma and went through plombage surgery from November 2011 to September 2018 at Pham Ngoc Thach Hospital, Ho Chi Minh City, Vietnam.
Our inclusion criteria were the following: patients over 18 years old suffering from pulmonary aspergilloma and developing massive or recurrent hemoptysis; patients who were diagnosed with pulmonary aspergilloma based on typical clinical symptoms, conventional X-ray or computed tomography images, supporting tests (bronchoscopy, biochemistry, and microbiology), and pathological confirmation after surgery; and patients joining the study voluntarily, including the surgery and the follow-up, in line with the protocol approved by our ethics committee. Our exclusion criteria were patients who declined to join the study or who underwent any other procedure or surgery concurrent with our surgery. We also analyzed hemoptysis, clinical findings, surgical features, and postoperative complications. Plombage surgery (operative technique of plombage) Plombage surgery (Fig. 1) was considered for patients who presented at least one of the following: massive or recurrent hemoptysis that could threaten the patient's life, poor general condition (body mass index [BMI] <18.5, Karnofsky score <70), or compromised pulmonary function (forced expiratory volume in one second [FEV1] <50% or <1.5 L). Under general anesthesia, patients were placed in a lateral decubitus position with one-lung ventilation. To separate the edges of the surgical incision and access the cavitary injuries, metal chest retractors were required. After the fungus ball was removed, a cavity was left under the ribs, and that space might be filled with inert materials such as Lucite (acrylic) balls, ping-pong balls, mineral oil (oleothorax), air, fat, paraffin wax, or a rubber sheath. 17) In our study, table tennis balls (ping-pong balls) were used. Other necessary techniques were conducted depending on the lesions. One catheter (24-32 F) was placed into the cavity to control bleeding if necessary. A sterilized table tennis ball (ping-pong ball) was used in this study.
It is made of a celluloid or plastic material that does not react with the human body. Orange and white are the two colors of choice; the balls are 40 mm in diameter and weigh 2.7 g. 18) The plombage should be removed 6 months after the initial surgery to ensure structural stability. The indications for removal included complications related to the material used, patients' requirements, or prevention of late complications. Its operative technique was quite simple, using the same incision line but with a shorter length (about 5-10 cm). After the table tennis balls were removed one by one through the thoracic incision, it was sutured, and drainage was placed if needed. Statistical analysis Data were statistically analyzed with SPSS version 21. Descriptive statistics were reported as mean and standard deviation, and a Student's t-test at a 95% significance level was used to compare patients' characteristics between the two groups. Results In all, 75 patients underwent plombage surgery at Pham Ngoc Thach Hospital, Ho Chi Minh City, Vietnam, from November 2011 to September 2018. Their characteristics are shown in Table 1. The mean age was 52.59 ± 10.72 years, and tuberculosis was the most common underlying lung disease, accounting for 88% of the cases. The characteristics of hemoptysis before surgery are presented in Table 1. Two-thirds of the patients had moderate to severe hemoptysis, and 49.4% of the patients had four or more episodes of hemoptysis every 24 h. The follow-up process was divided into three episodes: before the operation, 6 months after the operation, and 24 months after the operation. The results showed that hemoptysis reduced significantly after surgery. Hemoptysis ceased in 91.67% of the patients and diminished in 8.33% of the patients 6 months after surgery. Hemoptysis ceased in 87.32% of the patients and diminished in 12.68% of the patients 24 months after surgery. Figure 2 shows the indications for surgery. Approximately 50% of the patients met all three criteria.
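The between-group comparison described in the statistical analysis (a Student's t-test at a 95% significance level) can be sketched with a pooled-variance two-sample t statistic. This is an illustration only: the group values are hypothetical, not patient data, and the critical value 2.228 is the standard two-tailed t value for alpha = 0.05 with 10 degrees of freedom.

```python
import statistics as st

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic (equal variances assumed)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    return (st.mean(a) - st.mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

removed  = [120, 131, 118, 125, 129, 122]  # hypothetical group-1 measurements
retained = [124, 133, 121, 127, 130, 126]  # hypothetical group-2 measurements

t = two_sample_t(removed, retained)
T_CRIT = 2.228  # two-tailed critical value, alpha = 0.05, df = 10
print(f"t = {t:.3f}, significant at 5%: {abs(t) > T_CRIT}")
```

For these illustrative groups |t| falls well below the critical value, i.e. no significant difference, which is the same kind of outcome the study reports for its two groups.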
Surgical characteristics with perioperative and postoperative indices were recorded (Table 1). The patients revisited the hospital 6 months after discharge and considered plombage removal. No one died of plombage surgery, but four deaths were recorded as the results of myocardial infarction, stroke, and complications of diabetes mellitus. The average operative time was 121.5 ± 29.0 min. Blood loss during surgery was 219.1 ± 95.4 mL, but no cases required transfusion. No complications were recorded. With the same format as Table 2, Table 3 compares the two groups at two time points. No significant differences were observed. Figure 3 shows CT images before surgery, before plombage removal, and after plombage removal. Discussion Plombage surgery has two key points: (1) the cavity that is formed after cavernostomy and (2) the use of table tennis balls to fill the space and maintain the collapse. Cavernostomy procedures have been performed to remove pulmonary aspergilloma, including single-stage cavernostomy with a muscle transposition flap 16,19) and cavernostomy with limited thoracoplasty. 16) However, this is only one side of hemoptysis control in pulmonary aspergilloma. The other side, which makes the difference in plombage surgery, is filling the space with an inert material (a table tennis ball in our study). The principle of plombage surgery is that the diseased lobe of the lung heals in a shorter period if it is deflated. Plombage also helps control hemoptysis and prevents recurrence. As such, plombage surgery has advantages compared to other methods in terms of controlling hemoptysis in high-risk patients with pulmonary aspergilloma. Medical treatment using intrabronchial voriconazole instillation in 82 patients (30.5%) and 52 patients (68.3%) achieved a significant resolution of hemoptysis after the first and second sessions, respectively.
20) Another approach involving the percutaneous intracavitary instillation of amphotericin B causes hemoptysis to cease in 85% of the patients, but major concerns related to this approach are complications (pneumothorax in 26% of the patients), recurrence of serious hemoptysis (six of 18 episodes), and its unknown long-term benefit. 21) If medical treatments are ineffective, then an interventional treatment should be applied as an alternative, but success rates vary. 22) Some chemicals and materials, namely polyvinyl alcohol, spring coils, and N-butyl cyanoacrylate glue, have shown their effectiveness in embolization for hemoptysis and pulmonary artery embolization, 23) but bronchial arterial embolization involving a gelatin sponge may be ineffective. 24) Hemostatic radiotherapy has also been introduced as a potential treatment option, but it has been used selectively. 25) In our study, hemoptysis from pulmonary aspergilloma stopped. Furthermore, no recurrence was recorded after 12 months. In our view, the key elements ensuring the low recurrence rate were the management of the bronchial fistula and the cavity's condition. The bronchial fistula must be closed; closure was verified by having the anesthesiologist expand the lung and confirming that there was no gas leakage. In addition, no lesions must be left in the cavity. Our results showed that plombage surgery was safe for hemoptysis control in pulmonary aspergilloma. There were no deaths related to surgery, and a low risk of complications was recorded. Plombage surgery remains the appropriate therapy for high-risk patients as defined in the inclusion criteria. Pulmonary function is one of the most important factors determining the type of treatment. According to Lee et al., in patients with aspergilloma and hemoptysis, surgical resection should be preferred if the patient's FEV1 is greater than 70% of the predicted value. An alternative remedy is required if the patient's FEV1 is under 60% of the predicted value.
26) FEV1 is also considered the main criterion for the indication of cavernostomy for pulmonary aspergilloma, 14,27,28) and in our study it was one of the three criteria for plombage surgery and was present in almost all cases. As such, cavernostomy is considered when lung resection is not realistic. 14,27) It is an effective option for high-risk patients. 5,11) If there were no complications or signs of late complications, the indication for removal surgery depended on the patients' requirements. The plombage had still not been removed in 43 cases after 6 months because the patients did not want to undergo surgery at that time. The reason behind this may be that after surgery all symptoms had ceased. Besides, all these patients were in poor condition and wanted more time to recover before undergoing the next surgery. In our study, patients who did not receive plombage removal surgery were followed up every 6 months. At each visit, the patient was required to undergo an examination, X-ray, CT imaging, and pulmonary function testing. They were asked to consider surgery, or surgery was required if any potential signs of complications were noted. Early complications of plombage surgery in our study occurred in three cases (4.0%): two cases of atelectasis caused by sputum occlusion, successfully treated with bronchoscopic evacuation, and one case of pneumothorax that resolved after 2 days of drainage with negative pressure. Late complications may occur when the table tennis balls are retained. In our study, the patients were recommended to revisit the hospital 6 months after plombage surgery for plombage removal. This period was appropriate because the remaining space had filled spontaneously, and the underlying cavities did not expand again after an average of 3-5 months. Foreign bodies should be removed as soon as the plombage becomes superfluous for maintaining the collapse.
29) After a long time, late complications, such as infection or migration of the foreign materials, spontaneous hemoptysis, and extrusion of the plombage, may occur. When major complications occur, plombage removal results in higher operative blood loss (1,970 ± 3,199 mL), a longer postoperative length of stay (23 ± 13 days), or even death, 30) compared with the outcomes observed in our study. Hence, the plombage should be kept inside for only 6 months and then removed to avoid a high risk of potential complications. Conclusion Plombage surgery is safe and effective for hemoptysis control in patients with pulmonary aspergilloma. To minimize the risk of long-term complications, surgeons should remove the plombage 6 months after the initial operation.
A SOCIOLINGUISTIC ANALYSIS OF GENDER ON ENGLISH USE AT THE THIRD GRADE OF MTSN 2 PALU The students at the third grade of MTsN 2 Palu have different social contexts in their lives. Different students in different societies have distinct ways of life, and language is to a considerable degree influenced by culture. Against this background, this study was carried out to examine the differences in English use between men and women at the third grade of MTsN 2 Palu. This research discusses the differences between men and women in using English. Considering the purpose of the research and the nature of the problem, this was a qualitative study. To collect the data, the researcher used observation, interviews, and video recording. The results show that the differences between the students in using English are influenced by social context, especially cultural aspects. When the students use English, the accent of each culture carries over into their English, affecting their intonation and accent. As for the differences between men and women: women found it easier to adjust their accents and to distinguish the accents of their own culture from English accents, while men found this difficult. Consequently, when using English, men are often carried away by accents from their own culture. INTRODUCTION Language is used by human beings in social contexts to communicate their needs, ideas, and emotions to one another. Human language is a purely human and non-instinctive method of communicating ideas, emotions, and desires by means of a system of voluntarily produced symbols. Animals also have communication systems, but theirs are not developed systems; this is why language is said to be species-specific and species-uniform. Language gives shape to people's thought; it guides and controls their entire activities.
It is a carrier of civilization and culture, as human thoughts and philosophy are conveyed from one generation to the next through the medium of language. Ultimately, attitudes to language reflect attitudes to the users and uses of the language. People generally do not hold opinions about language in a vacuum. They develop attitudes towards languages which reflect their views about those who speak them, and the contexts and functions with which they are associated. A topic that has come to the fore in sociolinguistics in recent years is the connection, if any, between the structure, vocabularies, and ways of using particular languages and the social roles of the men and women who speak them: the social roles that men and women play, their different values and social networks (whom they talk to the most), and their sensitivity to contextual factors, including the characteristics of the person they are talking to. There are also other reasons underlying the differences in language use between males and females. One such factor is culture. Since different people in different societies have distinct ways of life, and language is to a considerable degree influenced by culture, their use of language will be highly influenced as well. Against this background, this study was carried out to examine the differences in language use between men and women and to conduct a sociolinguistic analysis. LITERATURE REVIEW Language A language is a structured system of communication. Language, in a broader sense, is the method of communication that involves the use of languages, particularly human ones. The scientific study of language is called linguistics. Questions concerning the philosophy of language, such as whether words can represent experience, have been debated at least since Gorgias and Plato in ancient Greece. Thinkers such as Rousseau have argued that language originated from emotions, while others like Kant have held that it originated from rational and logical thought.
20th-century philosophers such as Wittgenstein argued that philosophy is really the study of language. Major figures in linguistics include Ferdinand de Saussure and Noam Chomsky. Sociolinguistics Language is related to interactions in society. Language and society are so intertwined that it is impossible to understand one without the other. Language also maintains every social institution, such as education, law, and family, since it is their main medium of expression. In educational institutions, for instance, language can make educational experiences more engaging for students. Language is not a thing to be studied but a way of seeing, understanding, and communicating about the world. In the family, language has an important role since it helps the members learn things for the first time. In law, it is manifested in a certain way within rules and acts. Sociolinguistics analyzes language use and its relationship with social and cultural aspects; societies therefore have to understand the role of language in social interaction. It is clear, then, that sociolinguistics is a branch of linguistics that takes language and its relationship with society as its object of study. Gender Gender is the range of characteristics pertaining to, and differentiating between, masculinity and femininity. Depending on the context, these characteristics may include biological sex (i.e., the state of being male, female, or an intersex variation), sex-based social structures (i.e., gender roles), or gender identity. Most cultures use a gender binary, having two genders (boys/men and girls/women); those who exist outside these groups fall under the umbrella terms non-binary or genderqueer. Some societies have specific genders besides "man" and "woman", such as the hijras of South Asia; these are often referred to as third genders (and fourth genders, etc.).
Language and Gender Reflecting social status or power differences, Lakoff claims in her research that women's language as a whole reveals women's social powerlessness and is thus dominated by stylistic features signalling insecurity and lack of assertiveness. She further argues that female language is consequently heavily influenced by the pragmatic principle of politeness, which basically rules adaptive social behavior. The different views of language and gender elicited above share a common ground: language and gender are inseparable, and if any major difference exists, it becomes obvious in the intention of the user. Language and society Society is seen as human beings considered as a group in an organized community; it is an organized group with common aims and interests. Human development has been greatly enhanced by language and its development. Sociolinguistics arises because language, as a social phenomenon, is closely related to social attitudes. Men and women are socially different in that society lays down different social roles for them and expects different behavior patterns from them; language simply reflects this social fact. Many ethnic groups use a distinct language associated with their ethnic identity, and where a choice of language is available for communication, it is often possible for individuals to signal their ethnicity by the language they choose to use. Speech differences in interaction may also be reflected in people's social networks. The Differences Between Men and Women in Using Language The difference between men's and women's use of language is particularly thoroughly discussed in sociolinguistic studies.
Modern sociolinguistic research traditions put particular weight on conversation, and use the term vernacular to mean "the language used by ordinary people in their everyday affairs" and "the style in which the minimum of attention is given to the monitoring of speech". It has been shown that women students preferred using more adjectives, such as soft, wonderful, sweet, good, nice, and so forth, whereas men seldom use adjectives. The use of more adjectives indicates that when women describe their feelings and the world around them, they tend to be more heedful and sensitive to the environment. In addition, women were fond of expressing their emotions by using vivid words that men seldom used. In one conversation, it was found that the women used 11 adjectives, while the men used just one. This is in line with Wardhaugh's claim that women tend to use linguistic devices, namely more adjectives in their conversation, to show solidarity and more vivid conditions. METHOD Considering the purpose of the research and the nature of the problem, this was a descriptive qualitative study. It is descriptive because the objectives of this study are to observe and gather as much information as possible related to the phenomenon. It is the kind of method conducted by collecting and analyzing data, and drawing representative conclusions. Qualitative research uses semiotic, narrative, content, discourse, archival, and phonemic analysis, and even statistics. It also draws upon and utilizes the approaches, methods, and techniques of ethnomethodology, phenomenology, hermeneutics, feminism, deconstructionism, interviews, psychoanalysis, cultural studies, survey research, and participant observation, among others.
Qualitative research methods were developed in the social sciences to enable researchers to study social and cultural phenomena: to observe the feelings, thoughts, behaviours, and beliefs of society. Qualitative data sources include observation and participant observation (fieldwork), interviews and questionnaires, documents and texts, and the researcher's impressions and reactions. Since this research deals with human interaction and perspectives, a qualitative method is highly appropriate; its findings can be more accurate, as interviewees are perceived to give more honest answers and opinions through personal interaction with the interviewer. The data collection for this study was divided into observation, interview, and video recording. This research was conducted with the third-grade students of MTsN 2 Palu. From the total number of third-grade students of MTsN 2 Palu, the researcher limited the sample to ten students: five boys and five girls, taken randomly from class representatives. The researcher limited the number of students because the school did not allow more than ten students to meet face to face, given the COVID-19 situation and the need to implement health protocols. To obtain the data needed in this study, the writer used the following data collection techniques. The researcher observed the school's condition, school life, the activities of the students, and, most importantly, how they communicate in their daily activities given their cultural differences. Through this observation, the researcher found that the ten students came from three different cultures: Kailinese, Buginese, and Javanese. A questionnaire was used to collect data about the students' differences in using English.
The instrument was a direct questionnaire, with questions given to all respondents directly. The researcher gave the questionnaire to 10 students at the third grade of MTsN 2 Palu, and also utilized documents related to the object of research, such as video recordings. The data were analyzed through the following steps. The data of this study come from the students' answers to the questionnaire. The researcher observed and wrote down every object of research based on what was found while the research took place. The data from the questionnaire and interviews were coded to help the researcher identify them. After the students had answered all the questionnaires, the researcher classified the data based on the students' answers. After classifying the data, the researcher interpreted them: giving meaning to the information, evaluating, drawing conclusions, responding appropriately, and predicting the results. Before interpreting the data, however, the researcher analyzed the questionnaire results in the form of descriptive text. Vocabulary differences The students' conversations show that male and female students had different styles of choosing words to express their feelings. These differences in vocabulary choice can be seen in the aspects below: 1. Adjectives From the transcripts, it was shown that women students preferred using more adjectives, such as soft, wonderful, sweet, good, nice, and so forth. On the other hand, men seldom used adjectives. The use of more adjectives indicates that when women describe their feelings and the world around them, they tend to be more heedful and sensitive to the environment. In addition, women were fond of expressing their emotions by using vivid words that men seldom used. 2.
Color Words A sense of femininity usually belongs to women, and they tend to use more color words to make something more vivid and colorful, which men rarely do. For example: wow, amazing, extraordinary, unique, and so forth. 3. Adverbs The differences in language use between men and women could also be seen in the use of adverbs. In this case, women preferred using adverbs such as "so". For instance: "Of course. Heeemmm mathematics is so difficult." Meanwhile, men tended to prefer the adverb "very". For example: "Mathematics is very difficult." 4. Expletives and swear statements Women are perhaps stylistically more flexible and gentle than men. Hence, they try to avoid uttering swear words, because these words are considered uncomfortable and belong to taboo words for women. Besides, those words are considered able to damage friendships. Indeed, women tend to apply linguistic devices that focus more on solidarity than men do. From the conversations, it appears that the women students rarely uttered swear words such as "damn". They used "oh my god" instead to express their feelings. For example: "Wow Oh, my God! It's so wonderful view!". Consequently, women focus more on manners and politeness in using language. The conversations also show that the male students did not use swear words like "damn" at all. They may consider that the environment where they study is based on religious study. Therefore, neither the men nor the women students used swear words or expletives. 5. Pronouns Based on the conversations, the women students were fond of using first person plural pronouns to express something. On the other hand, the men students were more likely to use the first person singular pronoun and the second person pronoun. Example: Women: "We like mathematics." Men: "No, I don't.
It is just you." In summary, for pronouns, men focus on using the first person singular pronoun and the second person pronoun, while women are fond of using first person plural pronouns to express something. Attitude Differences Male and female students tend to have different styles and attitudes when they express something. In certain moments, men and women show their differences in uttering expressions. Men usually try to find solutions directly when they have problems, while women tend to show their sympathy by expressing panic statements and melancholic gestures. Further, women often protest or complain when they encounter unlucky situations, with emotional expression rather than solutions. This was expressed by the women students of MTsN 2 Palu in a conversation in which they felt panic when their friend was sick. For example: Other men: "Let's go visit him after school." This conversation shows that men do not panic when they encounter such a problem; rather, they try to find a solution by asking someone else. Besides that, women tend to pay more attention to using standard language than men, and are therefore reluctant to break language rules. The conversations also indicate that power is quite fundamental to men's linguistic behavior. Grammar correction The research shows that women tend to use more standard English grammar than men do. This indicates that women focus more on grammatical correctness by using clear utterances with precise grammar. Example: Women: "Good Morning, I want to introduce myself, my name is Aulia." Men: "Good Morning, let me introduce myself, my name is Rafi." Non-verbal Differences As described previously, women tend to show politeness and pay more attention to grammatical correctness in their conversation than men usually do.
From the students' videos, it was found that women used more expressive gestures in their utterances, moving their hands, faces, and other parts of the body to signify feelings and emotional and psychological states in conversation, while men used fewer gestures. The conversation involved ten men and ten women. After transcription, the scripts of the videos were analyzed for vocabulary, attitude, grammatical correctness, and non-verbal aspects using sociolinguistic analysis. The interviews and dialogues took place at MTsN 2 Palu. The researcher found differences caused by cultural differences. In general, the third grade of MTsN 2 Palu comprises three cultures: Kailinese, Buginese, and Javanese. These cultural differences also create differences in the students' communication, affecting not only their use of Indonesian but also their use of English. Several aspects differ between men and women. Firstly, the vocabulary differences show that women were fond of expressing their emotions by using vivid words that men seldom used. Secondly, the attitude differences show that men do not panic when they encounter a problem; rather, they try to find a solution by asking someone else, while women tend to pay more attention to using standard language than men. Thirdly, the grammar differences show that women tend to use more standard English grammar than men do. Finally, the non-verbal differences show that women used more expressive gestures in their utterances, moving their hands, faces, and other parts of the body to signify feelings and emotional and psychological states in conversation, while men used fewer gestures. CONCLUSION Based on the results of the research, the differences in the use of English at MTsN 2 Palu, especially among the third-grade students, are greatly influenced by sociolinguistic aspects.
If the students come from different cultures, men and women speak differently. When the students use English, the accent of each culture carries over into their English, affecting their intonation and accent. Women found it easier to adjust their accents and to distinguish the accents of their own culture from English accents, while men found this difficult. Consequently, when using English, men are often carried away by accents from their own culture. If the students come from different social and economic backgrounds, they have different levels of English. Students from a high social and economic status had a good ability in using English, owing to their ability to take extra learning, such as courses outside school time, while students from a low social and economic status had less ability in using English because they did not get extra learning outside school time. In summary, at the third grade of MTsN 2 Palu, the sociolinguistic factors that most influence the students' use of English are aspects such as social and economic status. Because of the sociolinguistic differences between students, their use of English is also affected.
The Halo Occupation Distribution of HI from 21cm Intensity Mapping at Moderate Redshifts The spatial clustering properties of HI galaxies can be studied using the formalism of the halo occupation distribution (HOD). The resulting parameter constraints describe properties like gas richness versus environment. Unfortunately, clustering studies based on individual HI galaxies will be restricted to the local Universe for the foreseeable future, even with the deepest HI surveys. Here we discuss how clustering studies of the HI HOD could be extended to moderate redshift, through observations of fluctuations in the combined 21cm intensity of unresolved galaxies. In particular, we make an analytic estimate for the clustering of HI in the HOD. Our joint goals are to estimate (i) the amplitude of the signal, and (ii) the sensitivity of telescopes like the Australian SKA Pathfinder to HOD parameters. We find that the power spectrum of redshifted 21cm intensity could be used to study the distribution of HI within dark matter halos at z > 0.5, where individual galaxies cannot be detected. In addition to the HOD of HI, the amplitude of the 21cm power spectrum would also yield estimates of the cosmic HI content at epochs between the local Universe and the redshifts probed by damped Ly-alpha absorbers. INTRODUCTION The cosmic star-formation rate has declined by more than an order of magnitude in the 8 billion years since z ∼ 1 (Lilly et al. 1996; Madau et al. 1996). Why this decline has taken place, and what drove it, are two of the most important unanswered questions in our current understanding of galaxy formation and evolution. In cold dark matter cosmologies, gas cools and collapses to form stars within gravitationally bound "halos" of dark matter. These galaxies can then grow via continued star formation or via mergers with other galaxies. As a result, the decline in star formation at z ≲ 1 is presumably accompanied by a decrease in the amount of cold gas within halos.
One of the issues that will need to be addressed in order to understand the evolution in star-formation rate is the role of environment. As galaxies of a given baryonic mass can only reside within dark matter halos above a particular dark matter mass, galaxies are biased tracers of the overall dark matter distribution. The clustering of dark matter halos is a known function of their mass (e.g., Sheth, Mo & Tormen 2001), and consequently the large-scale clustering of galaxies provides an estimate of the typical halo mass in which that galaxy population resides. On smaller scales, multiple galaxies can reside within a single (∼ 1 Mpc radius) dark matter halo, so that the number of galaxy pairs with small spatial separations is a strong function of the number of galaxies per halo. One can thus constrain the number of galaxies per halo as a function of halo mass by measuring both the small-scale (≲ 1 Mpc) and large-scale (≳ 1 Mpc) clustering of galaxies (e.g. Peacock & Smith 2000; Zheng 2005). The clustering of galaxy samples selected to lie within different stages of galaxy formation, based on their stellar and cold gas content, therefore has the potential to play a central role in our understanding of the star-formation history. In recent years, large galaxy redshift surveys such as SDSS and the 2dFGRS have enabled detailed studies of the clustering of more than 100,000 optically selected galaxies in the nearby universe. By using clustering to understand how galaxies populate dark matter halos, key insights may be obtained into how galaxies grow over cosmic time. The way in which stellar mass populates dark matter halos has been determined through studies of clustering for optically selected galaxy samples. A popular formalism for modeling clustering on small to large scales is termed the halo occupation distribution (HOD; e.g. Peacock & Smith 2000; Seljak 2000; Scoccimarro et al. 2001; Berlind & Weinberg 2002; Zheng 2004).
The HOD includes contributions to galaxy clustering from pairs of galaxies in distinct halos, which describe the clustering in the large-scale limit, and from pairs of galaxies within a single halo, which describe clustering in the small-scale limit. The latter contribution requires a parametrisation relating the number and spatial distribution of galaxies to the mass of their host dark matter halo. It is by constraining this parametrisation that observed clustering can be used to understand how galaxies are distributed. By comparison with the massive optical redshift surveys, the largest survey of HI-selected galaxies contains only ∼ 5000 sources, obtained as part of HIPASS, a blind HI survey of the southern sky (Barnes et al. 2001). Meyer et al. (2007) studied the clustering of these HI galaxies. Their analysis concluded that HI galaxies are weakly clustered, based on parametric estimates of the correlation length, but did not study the clustering in terms of the host dark matter halo masses of the HIPASS sample. Wyithe, Brown, Zwaan & Meyer (2009) analysed the clustering properties of HI-selected galaxies from the HIPASS survey using the formalism of the halo occupation distribution. They found that the real-space clustering amplitude for HIPASS galaxies is significant on scales below the virial radius associated with the halo mass required to reproduce the clustering amplitude on large scales, indicating that single-halo pairs contribute a 1-halo term. However, the resulting parameter constraints show that satellite galaxies make up only ∼ 10% of the HIPASS sample. HI satellite galaxies are therefore less significant, both in number and in terms of their contribution to clustering statistics, than are satellites in optically selected galaxy redshift surveys.
These results from HOD modeling of HI galaxy clustering therefore quantify the extent to which environment governs the HI content of galaxies in the local Universe, and confirm previous evidence that HI galaxies are relatively rare in overdense environments (Waugh et al. 2002; Cortes et al. 2008). They found a minimum halo mass for HIPASS galaxies at the peak of the redshift distribution of M ∼ 10^11 M⊙ (throughout this paper we refer to the halo mass as M and the HI mass as M_HI), and showed that less than 10% of baryons in HIPASS galaxies are in the form of HI. Their analysis also revealed that the fingers-of-god in the redshift-space correlation function are sensitive to the typical halo mass in which satellite galaxies reside, and indicated that the HI-rich satellites required to produce the measured 1-halo term must preferentially be in group rather than cluster mass halos. As described above, the clustering of HI galaxies can be studied at z = 0 using HIPASS. In the future, however, with the advent of the Square Kilometer Array (SKA) and its pathfinders, the volume and redshift range over which the clustering of HI galaxies can be studied will greatly increase. On the other hand, these studies will still be limited to moderate redshifts of z ≲ 0.5, owing to the sensitivity required for detection of even the most massive HI galaxies. For example, the Australian SKA Pathfinder (ASKAP) will detect the most massive HI galaxies only out to z ∼ 0.7 in the deepest integrations (Johnston et al. 2007). At higher redshifts, we argue that progress on the clustering of HI galaxies may be made by measuring fluctuations in the combined surface brightness of unresolved HI galaxies (Wyithe & Loeb 2009; Chang et al. 2008; Wyithe 2008). A survey of 21cm intensity fluctuations at redshifts beyond those where individual galaxies can be detected would therefore measure the modulation of the cumulative 21cm emission from a large number of galaxies.
The detectability of the 21cm PS after reionization was discussed by Khandai et al. (2009). These authors used an N-body simulation to predict the statistical signal of 21cm fluctuations in the post-reionization IGM, and estimated its detectability. Khandai et al. (2009) find that a combination of upcoming arrays offers good prospects for detecting the 21cm PS over a range of redshifts in the post-reionization era. Importantly, a statistical detection of 21cm fluctuations due to discrete, unresolved clumps of neutral gas has already been made (Pen et al. 2008), through cross-correlation of the HIPASS (Barnes et al. 2001) 21cm observations of the local universe with galaxies in the 6 degree field galaxy redshift survey (Jones et al. 2005). This detection represents an important step towards using 21cm surface-brightness fluctuations to probe the neutral gas distribution in the IGM. The majority of the discussion in the literature concerning 21cm fluctuations in the low-redshift Universe has centered on their utility for cosmological constraints (Wyithe & Loeb 2009; Chang et al. 2008; Loeb & Wyithe 2009; Bharadwaj, Sethi & Saini 2009). In this paper we concentrate on the possibility of studying the distribution of HI within dark matter halos, on scales accessible to traditional configurations of radio interferometers (which do not include the very short baselines required to study 21cm fluctuations in the large-scale, linear regime). On these scales, recent simulations suggest that the smoothed HI density field is highly biased owing to non-linear gravitational clustering. Following from this prediction, we discuss the possibility of studying the occupation of dark matter halos by HI at high redshift via 21cm intensity mapping. As a concrete example, we consider the potential of ASKAP with respect to constraining the HI HOD. We concentrate on z ∼ 0.7, as this is the redshift at which ASKAP no longer has the sensitivity to study individual galaxies.
Our goal is not to provide a detailed method for extracting HOD parameters from an observed power spectrum of redshifted 21cm fluctuations; this would require calibration against N-body simulations, which is premature at this time. Rather, we present an analytic model for the 21cm power spectrum in the HOD, and investigate which of its properties could be constrained by observations using a telescope like ASKAP. The paper is organised as follows. We begin by summarising the formalism for the HOD model, and introduce HOD modeling of 21cm intensity fluctuations, in § 2. We discuss the potential sensitivity of ASKAP to these fluctuations in § 3. We then present our forecast constraints on HOD parameters in § 4 and describe estimates of the HI mass function in § 5. We summarise our findings in § 6. In our numerical examples, we adopt the standard set of cosmological parameters (Komatsu et al. 2009), with values of Ωm = 0.24, Ωb = 0.04, and ΩQ = 0.76 for the matter, baryon, and dark energy fractional densities respectively, h = 0.73 for the dimensionless Hubble constant, and σ8 = 0.81 for the variance of the linear density field within regions of radius 8 h^-1 Mpc. INTENSITY MAPPING AND THE HI HOD We begin by reviewing the halo occupation distribution formalism for galaxies (e.g. Peacock & Smith 2000; Seljak 2000; Scoccimarro et al. 2001; Berlind & Weinberg 2002; Zheng 2004), which we describe only briefly, referring the reader to the above papers for details. The technique of surface-brightness mapping will not allow resolution of individual galaxies, but rather the measurement of fluctuations in the surface brightness of unresolved galaxies. However, we utilise a halo model formalism in which galaxies are traced, rather than a form in which the density field is a continuous function. This is because the HI is found in discrete galaxies, and treating individual galaxies allows us to explicitly calculate the HI-mass-weighted galaxy bias.
The HOD model is constructed around the following simple assumptions. First, one assumes that there is either zero or one central galaxy residing at the centre of each halo. Satellite galaxies are then assumed to follow the dark matter distribution within the halos. The mean number of satellites is typically assumed to follow a power-law function of halo mass, while the number of satellites within individual halos follows a Poisson (or some other) probability distribution. The two-point correlation function on a scale r can be decomposed into one-halo (ξ_1h) and two-halo (ξ_2h) terms, corresponding to contributions to the correlation function from galaxy pairs which reside in the same halo and in two different halos respectively (Zheng 2004). The 2-halo term can be computed as the halo correlation function weighted by the distribution and occupation number of galaxies within each halo. The 2-halo term of the galaxy power spectrum (PS) is

P_g^{2h}(k) = P_m(k) [ ∫_0^{M_max} dM (dn/dM) b(M) (⟨N⟩_M / n̄_g) y_g(k, M) ]^2,   (2)

where P_m is the mass PS and y_g is the normalised Fourier transform of the galaxy distribution, which is assumed to follow a Navarro, Frenk & White (1997; NFW) profile (see e.g. Seljak 2000; Zheng 2004). Here n̄_g is the mean number density of galaxies. We assume the Sheth & Tormen (1999) mass function dn/dM using parameters from Jenkins et al. (2001) throughout this paper. To compute the halo bias b(M) we use the Sheth, Mo & Tormen (2001) fitting formula. The quantity M_max is taken to be the mass of a halo with separation 2r. The 2-halo term for the correlation function follows from

ξ_2h(r) = (1/2π^2) ∫_0^∞ dk k^2 P_g^{2h}(k) [sin(kr)/(kr)].   (3)

In real space the 1-halo term can be computed using (Berlind & Weinberg 2002)

ξ_1h(r) = (1/(2π r^2 n̄_g^2)) ∫_0^∞ dM (dn/dM) (⟨N(N−1)⟩_M / 2) (1/R_vir(M)) F′(r/R_vir),   (4)

where ⟨N(N−1)⟩_M/2 is the average number of galaxy pairs within halos of mass M. The distribution of multiple galaxies within a single halo is described by the function F′(x), which is the differential probability that galaxy pairs are separated by a dimensionless distance x ≡ r/R_vir.
As is common in the literature, we assume that there is always a galaxy located at the center of the halo, and others are regarded as satellite galaxies. The contribution to F′ is therefore divided into pairs of galaxies that do, and do not, involve a central galaxy, and is computed assuming that satellite galaxies follow the number-density distribution of an NFW profile. With this assumption, the term in the integrand of equation (4) reads

⟨N(N−1)⟩_M F′(x) / 2 = ⟨N_cen N_sat⟩_M F′_cs(x) + (⟨N_sat(N_sat−1)⟩_M / 2) F′_ss(x),   (5)

where F′(x) is the pair-number-weighted average of the central-satellite pair distribution F′_cs(x) and the satellite-satellite pair distribution F′_ss(x) (see, e.g., Berlind & Weinberg 2002; Yang et al. 2003; Zheng 2004).

21cm intensity mapping of HI clustering

The HOD method of estimating and modeling the clustering of HI galaxies will not work at redshifts beyond z ∼ 0.7, where even the most luminous galaxies will not be detectable in HI for the foreseeable future. Instead, observations of surface brightness fluctuations in 21cm intensity, arising from the combined signal of a large number of unresolved galaxies, could be used to measure the clustering of HI galaxies. Studies of 21cm surface brightness fluctuations over a large volume will be made possible by the wide-field interferometers now coming on line, and will allow the HI properties of galaxies to be studied over a greater range of redshifts. Indeed, it has been argued that lack of identification of individual galaxies is an advantage when attempting to measure the clustering of the HI emission, since by not imposing a minimum threshold for detection, such a survey collects all the available signal. This point is discussed in Pen et al. (2008), where the technique is also demonstrated via measurement of the cross-correlation of galaxies with unresolved 21cm emission in the local Universe.
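As a numerical illustration of the central/satellite pair split entering the 1-halo term, the mean pair count per halo under the common assumption of one central galaxy plus Poisson-distributed satellites can be sketched as follows (a minimal illustration, not code from the paper; the function name is ours):

```python
def mean_pairs_per_halo(mean_sat):
    """Mean number of galaxy pairs per halo, <N(N-1)>/2, assuming one
    central galaxy plus a Poisson-distributed number of satellites.

    central-satellite pairs:   <N_cen N_sat> = mean_sat   (N_cen = 1)
    satellite-satellite pairs: <N_sat(N_sat-1)>/2 = mean_sat**2 / 2,
    since for a Poisson variate <N(N-1)> equals the mean squared.
    """
    cs_pairs = mean_sat
    ss_pairs = mean_sat ** 2 / 2.0
    return cs_pairs + ss_pairs
```

For a halo hosting two satellites on average, this gives two central-satellite pairs plus two satellite-satellite pairs.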
The situation is analogous to mapping of the three-dimensional distribution of cosmic hydrogen during the reionization era through the 21cm line (Furlanetto, Oh & Briggs 2007; Barkana & Loeb 2007). Several facilities are currently being constructed to perform this experiment (including MWA 1, LOFAR 2, PAPER 3, 21CMA 4) and more ambitious designs are being planned (SKA 5). During the epoch of reionization, the PS of 21cm brightness fluctuations is shaped mainly by the topology of ionized regions. However, the situation is expected to be simpler following reionization of the intergalactic medium (IGM; z ≳ 6), when only dense pockets of self-shielded hydrogen, such as damped Lyα absorbers (DLA) and Lyman-limit systems (LLS), survive (Chang et al. 2007; Pritchard & Loeb 2008). These DLA systems are thought to be the high-redshift equivalents of HI-rich galaxies in the local Universe (Zwaan et al. 2005b). We do not expect 21cm self-absorption to impact the level of 21cm emission. This conclusion is based on 21cm absorption studies towards damped Lyα systems at a range of redshifts between z ∼ 0 and z ∼ 3.4, which show optical depths to absorption of the background quasar flux with values less than a few percent (Kanekar & Chengalur 2003; Curran et al. 2007). Moreover, damped Lyα systems have a spin temperature that is large relative to the temperature of the cosmic microwave background radiation, and will therefore have a level of emission that is independent of the kinetic gas temperature (e.g. Kanekar & Chengalur 2003). Thus the intensity of 21cm emission can be directly related to the column density of HI.

Modeling the power spectrum of 21cm fluctuations

As mentioned above, low spatial resolution observations could be used to detect surface brightness fluctuations in 21cm emission from the cumulative sum of HI galaxies, rather than from individual sources of emission.
Here the PS is a more natural observable than the correlation function, since a radio interferometer records visibilities that directly sample the PS. In the linear regime the 21cm PS follows directly from the PS of fluctuations in mass P_m(k) (Wyithe & Loeb 2009)

P_HI(k) = T̄_b^2 x_HI^2 b̄_M^2 P_m(k),   (6)

where T̄_b = 23.8 [(1 + z)/10]^{1/2} mK is the brightness temperature contrast between the mean IGM and the CMB at redshift z, and b̄_M is the HI-mass-weighted halo bias. Note that we have used the subscript HI rather than the more usual 21 in order to reduce confusion with the subscripts for the 1-halo and 2-halo PS terms. The fraction of hydrogen that is neutral is described by the parameter x_HI ≡ Ω_HI/(0.76 Ω_b). We assume x_HI = 0.01 (corresponding to Ω_HI ∼ 3 × 10^−4, Zwaan et al. 2005a) throughout this paper. The resulting PS is plotted in the right panel of Figure 1 (dashed line). The constant T̄_b hides the implicit assumptions that the 21cm emission from the galaxies is not self-absorbed, and that the spin temperature is much larger than the temperature of the cosmic microwave background. On small scales a model is needed to relate HI mass to halo mass. To achieve this we modify the HOD formalism as outlined below.

HOD model for 21cm fluctuations

Since surface brightness fluctuations depend on the total HI mass within a halo rather than on number counts of individual galaxies, the number of galaxies and the number of galaxy pairs per halo in the HOD formalism need to be weighted by the HI mass per galaxy. In analogy with the HOD formalism, we distribute this mass between central and satellite galaxies. We define ⟨M_HI,c⟩_M and ⟨M_HI,s⟩_M to be the mean HI mass of central galaxies and of the combined satellite galaxies within a halo of mass M respectively. To compute the 2-halo PS, we replace ⟨N⟩_M in equation (2) with the mean value of the total HI mass in a halo of mass M, i.e. ⟨M_HI⟩_M = ⟨M_HI,c⟩_M + ⟨M_HI,s⟩_M, yielding

P_HI^{2h}(k) = T̄_b^2 x_HI^2 P_m(k) [ ∫_0^{M_max} dM (dn/dM) b(M) (⟨M_HI⟩_M / ρ̄_HI) y(k, M) ]^2,   (7)

where ρ̄_HI is the mean density of HI contributed by all galaxies in the IGM.
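The linear-regime scaling above is simple to evaluate directly; the sketch below (our own helper names; the 23.8 mK normalisation is the value quoted in the text) computes the brightness temperature factor and the resulting linear 21cm PS:

```python
def t_b_mK(z):
    """Brightness temperature contrast between the mean IGM and the CMB,
    T_b = 23.8 [(1 + z)/10]^(1/2) mK (normalisation quoted in the text)."""
    return 23.8 * ((1.0 + z) / 10.0) ** 0.5

def linear_21cm_ps(p_m, z, x_hi, b_hi):
    """Linear-regime 21cm PS: P_HI(k) = T_b^2 x_HI^2 b^2 P_m(k),
    with b_hi the HI-mass-weighted halo bias."""
    return (t_b_mK(z) * x_hi * b_hi) ** 2 * p_m
```

At z = 0.7 the prefactor is T_b ≈ 9.8 mK; with x_HI = 0.01 the 21cm PS is suppressed by (x_HI T_b)^2 relative to the bias-weighted mass PS.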
The 2-halo term ξ_2h,HI(r) follows from substitution into equation (3). To compute the 1-halo term we again weight the number of galaxies by their HI mass. In contrast to the calculation of the 1-halo term for galaxy clustering, the distribution of satellite masses will be important in addition to the number. This aspect of the HOD modeling will require simulation for a proper treatment. However, for the purposes of our analysis it is sufficient to assume that most of the satellite HI for a halo of mass M is contained within satellites of similar mass (as would be the case for a steep power-law mass function with a lower cutoff, for example). We therefore further define ⟨m_HI,s⟩_M to be the mean HI mass of satellite galaxies within a halo of mass M. The coefficients in equation (5) are then replaced by their HI-mass-weighted analogues. The correlation function follows from ξ_HI(r) = [T̄_b^2 x_HI^2 + ξ_1h,HI(r)] + ξ_2h,HI(r). In order to evaluate this expression the HI mass occupation of a halo of mass M must be parameterised, and is obviously quite uncertain. For illustration, we choose the following polynomial form, with a minimum halo mass (M_min) and a characteristic scale (M_1) at which satellites contribute HI mass that is comparable to the central galaxy,

⟨M_HI,c⟩_M ∝ M^{γ_c} for M > M_min (and zero otherwise),   (11)

and

⟨M_HI,s⟩_M = ⟨M_HI,c⟩_M (M/M_1)^{γ_s}.   (12)

The average HI mass within a halo of mass M > M_min is therefore

⟨M_HI⟩_M ∝ M^{γ_c} [1 + (M/M_1)^{γ_s}].   (13)

Note that the constant of proportionality in equations (11) and (13) is not specified, but cancels with the same factor in ρ̄_HI in equations (7) and (10). From experience of the galaxy HOD there will be degeneracy between the parameters M_1 and γ_s. We therefore make the simplification of setting γ_c = γ_s ≡ γ in our parameterisation for the remainder of this paper. The left panel of Figure 1 shows the real-space correlation function at z = 0.7 for an HOD model with parameters γ = 0.5, M_min = 10^11 M⊙ and M_1 = 10^13 M⊙. This model serves as our fiducial case throughout this paper, and is motivated by the parameters derived from estimates for HIPASS galaxies.
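The HI mass occupation adopted above, which takes the total form ⟨M_HI⟩ ∝ M^γ (1 + (M/M_1)^γ) once γ_c = γ_s = γ, can be sketched numerically; the arbitrary normalisation C is left explicit since it cancels elsewhere:

```python
def mean_hi_mass(M, gamma, M_min, M1, C=1.0):
    """Mean total HI mass in a halo of mass M:
    <M_HI> = C M^gamma (1 + (M/M1)^gamma) for M > M_min, zero otherwise.
    C is an arbitrary normalisation (it cancels against rho_HI)."""
    if M <= M_min:
        return 0.0
    return C * M ** gamma * (1.0 + (M / M1) ** gamma)
```

With the fiducial parameters (γ = 0.5, M_min = 10^11 M⊙, M_1 = 10^13 M⊙), satellites double the HI content of a halo at M = M_1.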
In particular we note the value of γ = 0.5, which is smaller than unity. This value encapsulates the assumption that smaller halos are relatively rich in HI, and agrees with the conventional wisdom that galaxy clusters are HI poor. In the local Universe γ ∼ 0.5 is found to describe the relation between HI and dynamical masses of HIPASS galaxies. Aside from this motivation the fiducial model is otherwise arbitrary.

Redshift space modeling of the 21cm power spectrum

Since a radio interferometer directly measures the 3-dimensional distribution of 21cm intensity, it is more powerful to work in redshift space, where line-of-sight infall (Kaiser 1987) can be used to break the degeneracy between neutral fraction and galaxy bias (Wyithe 2008). In addition to gravitational infall, the shape of the redshift space PS will be complicated by peculiar motions of galaxies within groups or clusters, which produce the so-called fingers-of-god in the redshift space correlation function. In the case of 21cm fluctuations, the internal velocities of HI in galaxies will also contribute to the fingers-of-god. In this paper we use the combination of the real-space HOD 21cm PS

P_R,HI(k) = 4π ∫_0^∞ ξ_HI(r) [sin(kr)/(kr)] r^2 dr,   (14)

and the dispersion model to estimate the redshift space PS including these effects. The dispersion model is written

P_z,HI(k_perp, k_los) = P_R,HI(k) (1 + βµ^2)^2 (1 + k^2 σ_k^2 µ^2 / 2)^{−1},   (15)

where µ is the cosine of the angle between the line-of-sight and the unit-vector corresponding to the direction of a particular mode, k_los = kµ, k_perp = k(1 − µ^2)^{1/2}, β = Ω_m^{0.6}/b̄_M and b̄_M is the average HI-mass-weighted halo bias. The quantity σ_k is a constant which describes a "typical" velocity dispersion for galaxies and parametrises the prominence of the fingers-of-god. Simulations indicate a value of σ_k ∼ 650/(1 + z) km s^{−1} (Lahav & Suto 2004).
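The dispersion model is straightforward to implement; the sketch below treats σ_k as an effective length scale (i.e. already converted from a velocity dispersion into the same units as 1/k), which is an assumption on our part:

```python
def dispersion_model_ps(p_real, k, mu, beta, sigma_k):
    """Redshift-space PS in the dispersion model:
    P_z(k, mu) = P_R(k) (1 + beta mu^2)^2 / (1 + k^2 sigma_k^2 mu^2 / 2).
    mu is the cosine of the angle between the mode and the line of sight;
    sigma_k is assumed here to carry the same length units as 1/k."""
    kaiser = (1.0 + beta * mu ** 2) ** 2          # large-scale infall boost
    fog = 1.0 + (k * sigma_k * mu) ** 2 / 2.0     # fingers-of-god damping
    return p_real * kaiser / fog
```

Transverse modes (µ = 0) are unaffected, while line-of-sight modes are boosted by the Kaiser factor on large scales and damped by the fingers-of-god term at large k.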
We note that the redshift space PS could have been generated through a 2-d Fourier transform of the redshift space correlation function computed within the HOD model using the formalism described in Tinker (2007), allowing additional constraints on HOD parameters to be placed based on the prominence of the fingers-of-god. In particular, the assumption of γ < 1, as well as the potential lack of HI in clusters, would lead to reduced prominence of the fingers-of-god. However, in the absence of current data we have taken the simpler approach of parameterising the fingers-of-god using σ_k. The left panel of Figure 2 shows the resulting redshift space PS for the fiducial model. The large-scale motions induced by infall into overdense regions can be seen as an extension of the PS along the line-of-sight at small k, while the fingers-of-god are manifest as a compression at large k. The right panel of Figure 1 shows the corresponding spherically averaged redshift space PS (solid line)

P^sph_z,HI(k) = ∫_0^1 dµ P_z,HI(k_perp, k_los),   (16)

with k_perp = k(1 − µ^2)^{1/2} and k_los = kµ. For comparison, the dotted lines in the right-hand panel of Figure 2 show the 1-halo and 2-halo contributions to the spherically averaged redshift space PS. The spherically averaged redshift space PS can be compared with the linear real-space 21cm PS estimated based on the neutral fraction x_HI = 0.01 and the linear mass PS (dashed line, equation 6). On large (linear) scales the spherically averaged PS is larger in redshift space than in real space. This is analogous to the excess power seen in redshift space clustering of galaxy surveys (Kaiser 1987), and is due to an increase in the 21cm optical depth owing to velocity compression (Barkana & Loeb 2005) towards high-density regions. On small scales there is excess power above the linear theory expectation owing to the inclusion of the non-linear 1-halo term. The 21cm PS shows a steepening at large k owing to the mass weighting in the 1-halo term.
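The spherical average over the mode angle can be approximated with a simple midpoint rule over µ ∈ [0, 1] (one common convention; a sketch with our own function names):

```python
def spherical_average(p_of_k_mu, k, n_mu=2000):
    """Spherically averaged PS at wave-number k: the average of
    P(k, mu) over mu in [0, 1], evaluated with a midpoint rule."""
    total = 0.0
    for i in range(n_mu):
        mu = (i + 0.5) / n_mu
        total += p_of_k_mu(k, mu)
    return total / n_mu
```

For the Kaiser factor alone, P(k, µ) ∝ (1 + βµ^2)^2, this average reduces to the familiar 1 + 2β/3 + β^2/5.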
This steepening is also seen in simulations. Figure 3 illustrates the sensitivity of the clustering and the 21cm PS to variations in the HOD parameters. The solid lines repeat the fiducial model from Figure 1. For comparison, the dotted, dashed and dot-dashed lines show variations on this model, with γ = 0.4, M_min = 10^10 M⊙ and M_1 = 10^12 M⊙ respectively (with the remaining parameters set to their fiducial values in each case). On the largest scales the clustering amplitude is most sensitive to M_min (dashed lines), which enables the typical host halo mass of HI to be measured from the PS amplitude (Wyithe 2008). Lowering the value of M_min (while keeping M_1 fixed) implies a smaller fraction of HI in satellites, and hence a relative decrease of power on small scales. A smaller value of γ also leads to a relative decrease of power on small scales, because the flatter power-law preferentially places mass in the more common low-mass halos (with M < M_1), and so lowers the fraction of HI in satellites (dotted lines). Conversely, a smaller value of M_1 leads to a larger fraction of HI in satellite systems, and hence an increase of small-scale power (dot-dashed lines).

Variation of the 21cm PS with HOD parameters

The variation in shape and amplitude of the 21cm PS implies that parameter values for a particular HOD model could be constrained if the PS were measured with sufficient signal-to-noise. In the remainder of this paper we therefore first discuss the sensitivity of a radio interferometer to the 21cm PS, and then estimate the corresponding constraints on the 5 parameters in our HOD model that could be placed using observations of 21cm intensity fluctuations.

SENSITIVITY TO THE 21CM PS

In this paper we estimate the ability of a telescope like ASKAP to measure the clustering of 21cm intensity fluctuations, and hence to estimate HOD parameters and the total HI content of the Universe. The latter quantity, which is not available from the clustering of resolved galaxies, could be used to bridge the gap in measurements of Ω_HI (the cosmic density of HI relative to the critical density) between the local Universe, where this quantity can be determined from integration of the HI mass function, and z ≳ 2, where it is measured from the column density through counting of damped Lyα absorbers. To compute the sensitivity ∆P_HI(k_perp, k_los) of a radio interferometer to the 21cm PS, we follow the procedure outlined by McQuinn et al. (2006) and Bowman, Morales & Hewitt (2007) [see also Wyithe, Loeb & Geil (2008)]. The important issues are discussed below, but the reader is referred to these papers for further details.

[Figure 1/3 caption fragment: In each case the 1-halo and 2-halo terms are plotted as dotted curves. For comparison we plot the spherically averaged sensitivity within pixels of width ∆k = k/10 for a radio interferometer resembling the design of ASKAP (thick gray line). We also show the real-space 21cm PS assuming a linear mass-density PS (dashed line). For calculation of observational noise an integration of 3000 hours was assumed, with a multiple primary beam total field of view corresponding to 30(1 + z)^2 square degrees (see text for details). The cutoff at large scales is due to foreground removal within a finite frequency band-pass.]

[Figure 2 caption fragment: Signal-to-noise (contours spaced by factors of √10) within pixels of width ∆k = k/10. The thick contour corresponds to a signal-to-noise per pixel of unity. For calculation of observational noise an integration of 3000 hours was assumed, with a multiple primary beam total field of view corresponding to 30(1 + z)^2 square degrees (see text for details). The cutoffs at large and small scales perpendicular to the line-of-sight are due to the lack of short and long baselines respectively. The cutoff at large scales along the line-of-sight is due to foreground removal within a finite band-pass.]
The uncertainty comprises components due to the thermal noise, and due to sample variance within the finite volume of the observations. We also include a Poisson component due to the finite sampling of each mode (Wyithe 2008), since the post-reionization 21cm PS is generated by discrete clumps rather than a diffuse IGM. We consider a telescope based on ASKAP. This telescope is assumed to have 36 dish antennas with a density distributed as ρ(r) ∝ r^−2 within a diameter of 2 km. The antennas are each 12 m in diameter, and being dishes are assumed to have physical and effective collecting areas that are equal. We assume that foregrounds can be removed over 80 MHz bins, within a bandpass of 300 MHz [based on removal within 1/4 of the available bandpass (McQuinn et al. 2006)]. Foreground removal therefore imposes a minimum accessible wave-number of k ∼ 0.02[(1 + z)/1.5]^−1 Mpc^−1, although access to the large-scale modes is actually limited by the number of short baselines available. An important ingredient is the angular dependence of the number of modes accessible to the array (McQuinn et al. 2006). ASKAP is designed to have multiple primary beams facilitated by a focal plane phased array. We assume 30 fields are observed simultaneously for 3000 hr each, yielding ∼ 30(1 + z)^2 square degrees [where the factor of (1 + z)^2 originates from the frequency dependence of the primary beam]. The signal-to-noise for observation of the PS in the left panel of Figure 2 is shown in the right panel of Figure 2. A telescope like ASKAP would be most sensitive to modes of k_perp ∼ 0.1 − 1 Mpc^−1 and k_los ∼ 0.03 − 0.3 Mpc^−1. The spherically averaged signal-to-noise (within bins of ∆k = k/10) is shown in the right panels of Figures 1 and 3 (grey curves).
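The foreground-removal limit quoted above can be written as a one-line helper (the 0.02 normalisation and the (1 + z)/1.5 scaling are taken directly from the text):

```python
def k_min_foregrounds(z):
    """Minimum accessible wave-number (Mpc^-1) imposed by foreground
    removal over a finite band-pass: k_min ~ 0.02 [(1 + z)/1.5]^-1."""
    return 0.02 * ((1.0 + z) / 1.5) ** -1.0
```

At z = 0.7 this gives k_min ≈ 0.018 Mpc^-1, comfortably below the k_perp ∼ 0.1-1 Mpc^-1 range where the array is most sensitive.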
Comparison of the noise curve with the variability of the PS amplitude and shape among different HOD models for the 21cm PS (Figure 3) indicates that a telescope like ASKAP would be sufficiently sensitive to generate constraints on the HOD. Moreover, the spatial scale on which the array would be most sensitive corresponds to wave-numbers where we expect the 1-halo and 2-halo contributions to be comparable, indicating that such observations may constrain HOD model parameters. This statement is quantified in the next section.

CONSTRAINTS ON HOD PARAMETERS FROM 21CM INTENSITY MAPPING

Based on our estimate of the sensitivity to the 21cm PS, we forecast the ability of ASKAP to constrain the HI HOD. To begin, we assume the fiducial model P^true_z,HI(k_perp, k_los), as shown in Figure 2, and estimate the accuracy with which the parameters could be inferred. Our HOD model for the 21cm PS has five parameters: M_min, M_1, γ, x_HI and σ_k. For combinations of these parameters (M_min, M_1, γ, x_HI, σ_k) that differ from the fiducial case (10^11 M⊙, 10^13 M⊙, 0.5, 0.01, 650/(1 + z) km/s), we compute a trial model for the real-space correlation function. We then use this to calculate the chi-squared of the difference between the fiducial and the trial models,

χ^2 = Σ [ (P^true_z,HI − P_z,HI(k_perp, k_los | M_min, M_1, γ, x_HI, σ_k, σ_8)) / ∆P_z,HI(k_perp, k_los) ]^2,   (17)

where the sum is over the observed pixels, and hence find the likelihood

L(M_min, M_1, γ, x_HI, σ_k) ∝ ∫ dσ_8 (dp/dσ_8) exp(−χ^2/2).   (18)

The uncertainty introduced through imperfect knowledge of the PS amplitude (which is proportional to the normalization of the primordial PS, σ_8) is degenerate with x_HI (Wyithe 2008). For this reason the uncertainty in σ_8 has been explicitly included in equation (18). We assume a Gaussian distribution dp/dσ_8 for σ_8 with σ_8 = 0.81 ± 0.03 (Komatsu et al. 2009). Figure 4 shows an example of forecast constraints on HOD model parameters for a telescope like ASKAP, assuming a 3000 hr integration of a single pointing [∼ 30(1 + z)^2 square degrees] centered on z = 0.7.
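The fitting step reduces to a chi-squared sum over pixels and an exponential likelihood; a minimal sketch over flattened pixel arrays (the σ_8 marginalisation is omitted here for brevity):

```python
import math

def chi_squared(p_true, p_trial, dp):
    """Sum over pixels of [(P_true - P_trial) / Delta_P]^2."""
    return sum(((t - m) / s) ** 2 for t, m, s in zip(p_true, p_trial, dp))

def likelihood(p_true, p_trial, dp):
    """Unnormalised likelihood, proportional to exp(-chi^2 / 2)."""
    return math.exp(-0.5 * chi_squared(p_true, p_trial, dp))
```

A trial model identical to the fiducial one gives χ^2 = 0 and maximal likelihood; each 1σ pixel deviation adds one to χ^2.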
Results are presented in the upper panels of Figure 4, which shows contours of the likelihood in 2-d projections of this 5-parameter space. Here prior probabilities on log x_HI, log M_min, log M_1, γ and σ_k are assumed to be constant. The contours are placed at 60%, 30% and 10% of the peak likelihood. The lower panels show the marginalised likelihoods on the individual parameters x_HI, M_min, M_1 and γ. A deep integration of a single pointing for a telescope like ASKAP would place some constraints on the minimum mass (the projected uncertainty on M_min is ∼ 0.5 dex), and measure the relationship between HI and halo mass (a ∼ 20% constraint on γ). In addition to these constraints on the halo occupation distribution of HI, observations of the 21cm PS would also provide a measurement of the global neutral fraction (or equivalently Ω_HI), which would be constrained with a relative uncertainty of 20% at z ∼ 0.7. This indicates that 21cm intensity fluctuations could be used to measure the evolution of Ω_HI from z ∼ 1 to the present day, yielding a direct estimate of the cosmic HI mass density, even though ASKAP will not detect individual galaxies at these redshifts.

[Figure 4 caption: Example of forecast constraints on HOD model parameters from 21cm intensity fluctuations, assuming a 3000 hr integration of a 30(1 + z)^2 square degree field with an array based on the ASKAP design centered on z = 0.7 (further details in the text). The upper panels show contours of the likelihood in 2-d projections of the 5-parameter space used for the HOD modeling of 21cm intensity fluctuations, while the lower panels show the marginalised likelihoods on individual parameters. Here prior probabilities on γ, log x_HI, log M_min and log M_1 are assumed to be constant. The contours are placed at 60%, 30% and 10% of the peak likelihood. The position of the dot indicates the peak likelihood in the 5-dimensional parameter space (i.e. the input model).]
HI CONTENT AND THE HI MASS FUNCTION

The combination of measurements for x_HI and the HOD parameters γ, M_min and M_1 indicates that the HI mass function (summing both central and satellite galaxies) could be approximated from the HOD using

dn/dM_HI = (dn/dM) (dM_HI/dM)^{−1},   (19)

where M_HI = C M^γ (1 + (M/M_1)^γ) and the constant C is evaluated from

ρ̄_HI = ∫_{M_min}^∞ dM (dn/dM) M_HI(M).   (20)

The left panel of Figure 5 shows the mass function for the fiducial model in Figure 1 (thick grey curve) as well as ten HI mass functions computed assuming parameters drawn at random from the joint probability distribution [∝ M_min^{−1} M_1^{−1} x_HI^{−1} L(M_min, M_1, γ, x_HI, σ_k)], projections of which are shown in Figure 4. While the range of HI masses spanned by these realisations shows some variability, the possibility of constraints on γ and x_HI implied by Figure 4 means that the overall shape of the HI mass function could be quite well constrained by observation of a redshifted 21cm PS. In the central panel of Figure 5 we show the corresponding HI mass functions for central galaxies [obtained by instead substituting M_HI = C M^γ]. In this case the range of realisations is much larger, which can be traced to the degeneracy between M_min and M_1 seen in Figure 4. In addition to the HI mass function, it would also be possible to constrain the fraction of hydrogen within galaxies that is in atomic form. This number is given by

f_HI = x_HI / F_col(M_min),   (21)

where F_col(M_min) is the fraction of dark matter that is collapsed in halos more massive than M_min. From the upper-left panel of Figure 4 we see that there is a degeneracy between x_HI and M_min. Larger neutral fractions correspond to lower values of M_min and hence larger collapsed fractions. As a result the ratio f_HI would be very well constrained, as shown by the likelihood distribution in the right-hand panel of Figure 5 (which is based on the distributions in the left panel of Figure 5).
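Since M_HI(M) = C M^γ (1 + (M/M_1)^γ) increases monotonically with M, the HI mass function follows from the halo mass function by a change of variables, dn/dM_HI = (dn/dM)/(dM_HI/dM); a sketch (the analytic derivative is our own, obtained by differentiating the quoted form):

```python
def dn_dMhi(dn_dM, M, gamma, M1, C=1.0):
    """HI mass function by change of variables from the halo mass
    function, dn/dM_HI = (dn/dM) / (dM_HI/dM), with
    M_HI(M) = C M^gamma (1 + (M/M1)^gamma), so that
    dM_HI/dM = C gamma M^(gamma-1) (1 + 2 (M/M1)^gamma)."""
    dMhi_dM = C * gamma * M ** (gamma - 1.0) * (1.0 + 2.0 * (M / M1) ** gamma)
    return dn_dM / dMhi_dM
```

In the limit M << M_1 the Jacobian reduces to that of a pure power law M_HI ∝ M^γ, while for M >> M_1 the satellite term dominates and the slope steepens.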
The evolution of f_HI, which can also be measured locally from clustering with a value of f_HI = 10^{−1.4±0.4} (Wyithe, Brown, Zwaan & Meyer 2009), will provide an important ingredient for studies of the role of HI in star formation.

SUMMARY

Due to the faintness of HI emission from individual galaxies, even deep HI surveys will be limited to samples at relatively low redshift (z ≲ 0.7) for the next decade. However, these surveys will be able to detect fluctuations in 21cm intensity produced by the ensemble of galaxies out to higher redshifts, using observational techniques that are analogous to those being discussed with respect to the reionization epoch at z ≳ 6 (e.g. Furlanetto, Oh & Briggs 2006). As a result, studies of HI galaxy clustering could be extended to redshifts beyond those where individual HI galaxies can be identified, through the use of 21cm intensity fluctuations. To investigate this possibility we have described an approximate model for the power spectrum of 21cm fluctuations, which is based on the halo occupation distribution formalism for galaxy clustering. Our goal for this paper has been to use this model to estimate the expected amplitude and features of the 21 cm power-spectrum, rather than to present a detailed method for extracting the halo occupation of HI from an observed power-spectrum.

[Figure 5 caption: Examples of the range for the total halo (left) and central galaxy (central panel) HI mass functions. In addition to the fiducial case (thick lines, corresponding to the model in Figure 1), ten HI mass functions are shown in each case with parameters drawn from the probability distribution for HOD parameters in Figure 4 (dot-dashed lines). In the right panel we show the forecast for corresponding constraints on f_HI. A 3000 hr integration of a 30(1 + z)^2 square degree field with an array based on the ASKAP design centered on z = 0.7 (corresponding to Figure 4) was assumed. The prior probability on log f_HI was assumed to be constant.]
This latter goal would require numerical simulations. To frame our discussion we have made forecasts for ASKAP, specifically with respect to the use of the 21cm power spectrum as a probe of the occupation of HI in dark matter halos. We have chosen z = 0.7 for our estimates, which is the redshift at which individual galaxies are no longer detectable with ASKAP in deep integrations. We have shown that a telescope based on the design of ASKAP will have sufficient sensitivity to yield estimates of the HI halo occupation. Because 21cm intensity fluctuations combine the integrated HI from all galaxies (not just those detected as individual sources), the clustering amplitude is proportional to the total HI content of the Universe. We find that an array with the specifications of ASKAP could yield estimates of the global HI density which have a relative accuracy of ∼ 20%. Clustering measurements in 21cm surface brightness could therefore be used to make measurements of the global HI content in the currently unexplored redshift range between the local Universe and surveys for damped Lyα absorbers in the higher-redshift Universe. The cosmic star-formation rate has declined by more than an order of magnitude in the past 8 billion years (Lilly et al. 1996; Madau et al. 1996). Optical studies paint a somewhat passive picture of galaxy formation, with the stellar mass density of galaxies gradually increasing and an increasing fraction of stellar mass ending up within red galaxies that have negligible star-formation (e.g., Brown et al. 2008). On the other hand, the combination of direct HI observations at low redshift (Zwaan et al. 2005; Lah et al. 2007) and damped Lyα absorbers in the spectra of high-redshift QSOs (Prochaska et al. 2005) show that the neutral gas density has remained remarkably constant over the age of the universe.
The evolutionary and environmental relationships between the neutral gas, which provides the fuel for star formation, and the stars that form are central to understanding these and related issues. The study of the halo occupation distribution of HI based on 21cm fluctuations has the potential to allow these studies to be made at redshifts beyond those where individual galaxies can be observed in HI with either existing or future radio telescopes.
Predictive Dose-Based Estimation of Systemic Exposure Multiples in Mouse and Monkey Relative to Human for Antisense Oligonucleotides With 2′-O-(2-Methoxyethyl) Modifications

Evaluation of species differences and systemic exposure multiples (or ratios) in toxicological animal species versus human is an ongoing exercise during the course of drug development. The systemic exposure ratios are best estimated by directly comparing areas under the plasma concentration-time curve (AUCs), and sometimes by comparing the dose administered, with the dose being adjusted either by body surface area (BSA) or body weight (BW). In this study, the association between the AUC ratio and the administered dose ratio from animals to human was studied using a retrospective data-driven approach. The dataset included nine antisense oligonucleotides (ASOs) with 2′-O-(2-methoxyethyl) modifications, evaluated in two animal species (mouse and monkey) following single and repeated parenteral administrations. We found that plasma AUCs were similar between ASOs within the same species, and are predictable to human exposure using a single animal species, either mouse or monkey. Between monkey and human, the plasma exposure ratio can be predicted directly based on BW-adjusted dose ratios, whereas between mouse and human, the exposure ratio would be nearly fivefold lower in mouse compared to human based on BW-adjusted dose values. Thus, multiplying the mouse BW-adjusted dose by a factor of 5 would likely provide a reasonable AUC exposure estimate in human at steady-state.

Introduction

Antisense oligonucleotides (ASOs) with 2′-O-(2-methoxyethyl) (2′-MOE) modifications (Figure 1) represent a platform of RNA-based therapeutics designed to specifically hybridize to their target RNA via Watson-Crick base pairing and prevent expression of the encoded "disease-related" protein product.
The last decade has seen a very rapid increase in the number of 2′-MOE ASOs progressing to phase 1, 2, and 3 clinical trials and targeting ever-expanding therapeutic areas of interest including, but certainly not limited to, rheumatoid arthritis, 1,2 diabetes, 3 cancer, 4,5 hypercholesterolemia, [6][7][8] and multiple sclerosis. 9 A 2′-MOE ASO, mipomersen (Kynamro), was recently approved by the US FDA as an adjunct to lipid-lowering medications and diet to reduce atherogenic lipids in patients with homozygous familial hypercholesterolemia (HoFH). Determination of systemic (plasma) exposure ratios in toxicological animal species versus humans is best done by comparing area under the plasma concentration-time curve (AUC) values. Such exposure ratios are commonly used to relate the exposure achieved in animal pharmacology or toxicology studies to human, and thus facilitate the assessment of the relevance of these findings to clinical efficacy or safety. 10 For example, determination of the "margin of safety" or "margin of exposure" is typically done based on the plasma AUC ratio of the no observable adverse effect level (NOAEL) or the lowest observable adverse effect level (LOAEL) in animals to the observed exposure in humans at the dose levels intended for clinical use. Understanding the PK and exposure differences between species would help to define the safety margins and human dose selection. Details of preclinical and clinical pharmacokinetic properties and interspecies scaling of several 2′-MOE ASOs have been reported previously. 11,12 An article describing some initial evaluations of the predictive performance of several different interspecies scaling approaches for ASOs was recently published. 13 In the case of 2′-MOE ASOs, the most common animal species tested are mice and monkeys. Therefore, it is an important question to ask what dose-adjusted comparisons between these toxicology species and human best estimate the relative systemic exposure ratio.
Does the same dose-adjusted scaling approach work well for both species when extrapolating to human, or are species-dependent scaling approaches needed, given that mouse and monkey may behave differently when scaling to man, as reported previously? 11 In this article, the association between relative systemic exposure (plasma AUC) and dose (adjusted by either body surface area (BSA) or body weight (BW)) from animals (mice and monkeys) to human for nine 2′-MOE ASOs was studied using a retrospective data-driven approach. These nine ASOs, 20 or 21 nucleotides in length, have similar physicochemical properties, including charge, molecular weight, and amphipathic nature, and share similar pharmacokinetic characteristics such as comparable protein binding and tissue distribution. 11,12,17 This class of ASOs all carry the same chemical modifications on the backbone and the sugar moiety (phosphorothioate and 2′-MOE, respectively), and thus have prolonged in vivo half-lives due to increased nuclease resistance and metabolic stability in animals and humans. 11,12,18 Since ASOs are extensively distributed to tissues, which are often considered the site(s) of action for both pharmacologic and toxicologic activities, the plasma exposure ratio and the liver exposure ratio were compared between animal species. Consistency between the plasma exposure ratio and the tissue exposure ratio would support the use of the plasma AUC exposure ratio, instead of a tissue ratio, to guide dose-based estimation of systemic tissue exposure in humans.

Pharmacokinetic properties of ASOs

The primary route of administration for oligonucleotides for systemic applications is parenteral injection, either intravenous (i.v.) infusion or subcutaneous (s.c.) injection. Following systemic s.c. or i.v.
administration, plasma ASO concentrations rapidly declined from peak concentrations in a multiexponential fashion, characterized by a dominant initial rapid distribution phase (half-life of a few hours or less) representing extensive distribution to tissues, followed by a much slower terminal elimination phase (half-life of 2-4 weeks) in both animals and humans, as reported previously. 11,12,14 The apparent terminal elimination rate observed in plasma is consistent with the slow elimination of ASOs from tissues, indicating equilibrium between postdistribution-phase plasma concentrations and tissue concentrations. 14 There is little or no accumulation in plasma AUC (or C max) values upon every-other-day or once-weekly repeated dosing in both monkeys and humans. Mean plasma AUCs following single and multiple dosing of a 2′-MOE ASO are generally comparable, with "steady-state" in plasma essentially achieved with the very first dose in both monkeys and humans. In mice, plasma AUC did increase after repeated administrations (with little change in C max; data not shown), likely resulting from saturation of kidney uptake in this species. 11,12,14 The ASOs with TK data showed similar dose-normalized exposure within the same species with relatively low variability (%CV in the range of 29-38%). In this article, we used these PK properties to compare dose-normalized AUC data for multiple 2′-MOE ASOs across species as a measure of scaling between exposure multiples. The mean dose-normalized AUC values at steady-state were 2.58, 15.7, and 14.1 to 18.3 in mice, monkeys, and humans, respectively (Tables 1 and 2). Therefore, dose-normalized exposure in mice was substantially different from monkey and human, while it was very similar between monkeys and humans.
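The species comparison above can be reproduced from the reported mean dose-normalized AUC values; a short sketch:

```python
# Mean dose-normalized AUC (AUC/dose) values at steady-state reported above:
# 2.58 (mouse), 15.7 (monkey), and a 14.1-18.3 range (human)
dn_mouse, dn_monkey = 2.58, 15.7
dn_human = (14.1, 18.3)

mouse_ratio = tuple(round(h / dn_mouse, 1) for h in dn_human)
monkey_ratio = tuple(round(h / dn_monkey, 2) for h in dn_human)
print(mouse_ratio)   # (5.5, 7.1) -> human exposure ~5-7-fold higher than mouse per unit dose
print(monkey_ratio)  # (0.9, 1.17) -> monkey and human close to unity
```

This reproduces the qualitative conclusion: dose-normalized exposure differs substantially between mouse and human but is very similar between monkey and human.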
Relative ratios of mouse to human

As shown in Table 1, the systemic plasma exposure ratios (PER) varied substantially among ASOs, as did the BW- or BSA-normalized administered dose ratios (ADR), which was not surprising since different dose levels were used in the toxicology studies. The relative ratio (RR) (mean ± SD) between mouse and human was 0.82 ± 0.35 for single doses and 0.48 ± 0.22 for multiple doses when dose was adjusted for BSA (Table 1, Figure 2). The RR (mean ± SD) was 10.1 ± 4.3 and 5.87 ± 2.75 when dose was adjusted for BW following single and multiple doses, respectively. Following repeated doses, the BSA-adjusted dose ratio would under-predict the AUC ratio at steady-state by ~50%. On the other hand, the BW-adjusted dose ratio would over-predict the AUC ratio for both single and multiple doses, by approximately ten- and fivefold, respectively. Taken together, these data suggest that neither the BSA-adjusted nor the BW-adjusted dose ratio can directly predict the AUC ratio at steady-state between mice and humans for 2′-MOE ASOs. However, considering the similarity of ASOs within the same species, the AUC ratios might be predicted from BW- or BSA-adjusted dose ratios if corrected by certain factors. For example, the BW-adjusted dose ratio from mouse to human divided by a factor of 5, or the BSA-adjusted dose ratio multiplied by a factor of 2, would provide a reasonable estimate of the steady-state AUC ratio.

Relative ratios of monkey to human

For monkey-to-human comparisons, the RR (mean ± SD) was only 0.33 ± 0.12 for single doses and 0.39 ± 0.11 for multiple doses when dose was adjusted for BSA, suggesting that the BSA-adjusted dose ratio cannot be used to predict plasma exposures directly following either single or multiple doses (Table 2, Figure 2). Nonetheless, unlike predictions from mouse to human, similar RRs were obtained following multiple doses as compared to single doses for monkey to human.
However, when dose was adjusted for BW, the RR was 1.02 ± 0.38 for single doses and 1.21 ± 0.35 for multiple doses (Table 2, Figure 2). (Relative ratios (RR), determined as the administered dose ratio (mg/kg or mg/m2) divided by the AUC ratio, are bolded in the tables if within the "acceptable range" of 0.5-2.0.) Taken together, these data suggest that the plasma AUC ratio between monkeys and humans for 2′-MOE ASOs can be predicted from the BW-adjusted dose ratio following both single and multiple doses.

Comparison of RR between plasma AUC and liver concentration

Liver contains high concentrations of oligonucleotides following parenteral administration and is the primary organ of oligonucleotide distribution due to its large size. 11,12,14 For this reason, liver has been the primary therapeutic target for the majority of antisense oligonucleotides currently in development. In this study, where both plasma AUC and tissue concentrations were available in animals, the relative exposure ratios were compared between rodent species or between rodent and monkey. As shown in Table 3, the ratio between species was the same for either exposure measure, indicating that plasma AUC ratios can be used to estimate the relative tissue exposure between species.

Discussion

The results of this retrospective analysis indicate that, for 2′-MOE ASOs, the proper plasma AUC scaling factors are different for mouse and monkey. As an example, comparable AUC values would be expected in monkey and human at equivalent mg/kg dose levels, while the plasma exposures in mouse would be nearly fivefold lower at steady-state (Tables 1 and 2). Thus, an empirical factor of five applied to the BW-adjusted dose ratio can probably be used to estimate the systemic exposure ratio from mouse to human following multiple doses
(or at steady-state). Although the data are limited, widened gap sizes instead of the standard 5-10-5 construct in two of the studied ASOs (4-13-4 for OGX-011 and 2-16-2 for ISIS 325568) did not appear to affect the exposure or the calculated RR from mouse and monkey to human. The results presented here also appear generally consistent with previously published literature 11,12,14 describing allometric scaling of plasma clearance versus BW for 2′-MOE ASOs. Plasma clearance is inversely related to plasma AUC (i.e., CL = Dose/AUC). Geary et al. reported a simple linear allometric relationship of plasma clearance versus BW with a slope of ~1.0 for ISIS 104838 (a 2′-MOE ASO) across rat, monkey, dog, and human, but mouse was an "outlier" and was excluded from this relationship. That analysis supports BW-based dose scaling from rat to human. In contrast, Yu et al. reported an attempt to develop a simple allometric relationship of plasma clearance versus BW across all evaluated species, including mouse, rat, monkey, and human, for ISIS 301012 (mipomersen; Kynamro). That analysis generated an allometric exponent (slope) of 0.6461, suggesting BSA-based dose scaling, which has an allometric exponent of 0.67. It is also worth noting that the regression line from the Yu et al. publication appears to fit the observed mean mouse and human plasma clearance data better than the monkey data. Mahmood 13 also attempted to predict previously published observed mean human plasma clearance values for phosphorothioate oligodeoxynucleotides and 2′-MOE ASOs, utilizing previously published mean clearance data for the same compounds from several tested animal species (mouse, rat, dog, and monkey) and applying various allometric scaling approaches. In that publication, human plasma clearance predictions were based on scaling data from one, two, or three animal species, and the findings demonstrated mixed success.
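The simple allometric relationships discussed above take the form CL = a·BW^b, with the exponent b near 1.0 (BW-proportional) or near 0.67 (BSA-like). A sketch of such scaling follows; the clearance value and body weights are hypothetical placeholders, not data from the studies cited.

```python
def allometric_clearance(cl_animal: float, bw_animal: float,
                         bw_human: float, exponent: float) -> float:
    """Scale plasma clearance from one species to human using the
    single-species allometric form CL_human = CL_animal * (BW_human/BW_animal)**exponent."""
    return cl_animal * (bw_human / bw_animal) ** exponent

# Hypothetical monkey clearance of 2.0 (arbitrary units) for a 3 kg monkey,
# scaled to a 70 kg human with the two exponents discussed above
cl_bw = allometric_clearance(2.0, 3.0, 70.0, 1.0)    # slope ~1.0 (BW-proportional)
cl_bsa = allometric_clearance(2.0, 3.0, 70.0, 0.67)  # slope ~0.67 (BSA-like)
print(round(cl_bw, 1), round(cl_bsa, 1))  # 46.7 16.5
```

The choice of exponent clearly dominates the extrapolated human value, which is why the fitted slope (~1.0 versus ~0.65) drives the BW- versus BSA-based dose-scaling conclusions.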
Mahmood reported that allometric scaling based on one or two species can be "erratic and unreliable," although both fixed-exponent and fixed-coefficient approaches were evaluated for the one-species allometric evaluations. Scaling approaches based on BSA or BW as described in our article were not included. Further, Mahmood indicated that reasonably accurate predictions could be obtained using at least three animal species; albeit the reported ratios of predicted to observed human clearance values for four different ASOs (two first-generation and two second-generation compounds) appear highly variable, ranging from 0.05 to 1.29. Only two of the four evaluated ASOs from the three-species scaling were within the acceptable prediction range of 0.5-2.0, with none of the predictions for the other two ASOs falling within that range. It is our opinion that simple allometric scaling approaches for 2′-MOE ASOs that utilize multiple species are likely of limited value and can provide misleading results, since mouse is often included in the scaling analysis. The reason the mouse is an "outlier" is probably the particular physiology and anatomy of the mouse model combined with the particular PK characteristics of ASOs. The mouse has an exceptionally large liver and kidneys relative to its BW, with liver and kidney weight being nearly threefold higher relative to monkey and human. 15 The difference in liver and kidney size (relative to BW) could translate into substantial PK differences for ASOs, since all known 2′-MOE ASOs are highly distributed into liver and kidney tissues, with liver and kidney concentrations being ~5,000- and 8,000-fold higher than plasma trough levels based on literature data. 16,17 Liver and kidneys are not only distribution organs but also elimination organs, since ASOs are generally metabolized by endonucleases in tissues including liver and kidneys.
Thus, the relatively large liver and kidneys in the mouse mean both a higher clearance and a higher volume of distribution for ASOs, leading to a lower plasma AUC but a similar terminal half-life in the mouse, as shown in Figure 3. Somewhat more complex allometric scaling models that include multiple factors, such as species organ weights/volumes and plasma protein binding, are perhaps worth further evaluation and may allow better predictions across multiple species. Common simple allometric scaling approaches inherently assume that "more species are better" and that such an approach will apply reasonably well across multiple species, ultimately leading to more accurate human clearance predictions. While these approaches may indeed be suitable for many small-molecule compounds, we would argue otherwise for 2′-MOE ASOs, based on our current investigations as discussed above. Acceptable human plasma clearance predictions can be made based on just a single species (mouse or monkey), after appropriate application of a species-specific scaling approach. It is also worth noting that acceptable human plasma exposure predictions for a new, clinically untested 2′-MOE ASO can be reasonably estimated based on past clinical experience with other 2′-MOE ASOs. 18 Our findings are established on dosing, pharmacokinetic, and exposure data in multiple species from nine different 2′-MOE ASOs. This type of analysis was made possible by the remarkable similarity in the pharmacokinetic properties of 2′-MOE ASOs from sequence to sequence within species, which has been reported previously. 11,12,14,18 This, in turn, translates to remarkable similarity in how these compounds, as a class, scale from mouse to human and from monkey to human. Nonetheless, our findings suggest that there is not a simple common dose scaling approach applicable across all three species (i.e., mouse, monkey, and human).
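The species-specific, single-species scaling described above (direct BW-based scaling from monkey, BW-based scaling with an empirical 5-fold correction from mouse) can be sketched as a small helper. This is an illustrative restatement of the rule, not a validated model, and the numeric inputs are hypothetical:

```python
def predict_human_auc(auc_animal: float, dose_animal_mgkg: float,
                      dose_human_mgkg: float, species: str) -> float:
    """Species-specific sketch of the dose-based scaling described above:
    monkey -> human scales directly with the BW-adjusted (mg/kg) dose ratio;
    mouse -> human additionally multiplies by the empirical factor of 5."""
    factor = {"monkey": 1.0, "mouse": 5.0}[species]
    return auc_animal * (dose_human_mgkg / dose_animal_mgkg) * factor

# Hypothetical inputs: a monkey AUC of 100 at 4 mg/kg predicts ~100 for a
# 4 mg/kg human dose; a mouse AUC of 100 at 25 mg/kg predicts ~100 at 5 mg/kg
print(predict_human_auc(100.0, 4.0, 4.0, "monkey"))  # 100.0
print(predict_human_auc(100.0, 25.0, 5.0, "mouse"))  # 100.0
```

The two branches encode the paper's central conclusion: the same BW-based dose scaling works for both species only after the mouse-specific correction is applied.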
In conclusion, the results of this retrospective analysis indicate that, for 2′-MOE ASOs, the proper scaling factors are different for mouse and monkey. Between monkey and human, the plasma exposure ratio can be predicted directly from BW-adjusted dose ratios, while between mouse and human, the steady-state exposure ratio would be nearly fivefold lower based on BW-adjusted dose values. Thus, multiplying the mouse BW-adjusted dose by a factor of 5 would likely provide a reasonable AUC exposure estimate in humans at steady-state. These assumptions and relationships can be further validated as the database continues to grow with more compounds entering development.

Materials and methods

Test compounds. Retrospective preclinical and clinical study data (from either published sources or available internally at Isis Pharmaceuticals) from a total of nine ASOs of similar chemical composition, 20 or 21 nucleotides in length, were evaluated (Table 4). The 2′-MOE ASOs are phosphorothioate oligonucleotides containing 2′-MOE sugar modifications on the 3′- and 5′-ends ("wings") of the molecule that flank a central DNA-like region ("gap"), and thus utilize a chimeric design strategy (i.e., the wings provide increased affinity and nuclease resistance, whereas the central gap allows RNase H-mediated cleavage of the target "sense" RNA) (Figure 1).

Dose conversions. In humans, clinical doses (typically given as fixed mg doses) were converted to mg/kg levels based on an assumed BW of 70 kg, as needed. In animals, doses were generally given as mg/kg. In both animals and humans, conversion of mg/kg to mg/m2 dose levels was made using well-accepted conversion factors, i.e., mg/kg dose multiplier values of 3, 12, and 37 for mouse, monkey, and human, respectively, to determine the corresponding mg/m2 dose. 19

Mouse toxicology/toxicokinetic studies.
Single and multiple dose toxicology/toxicokinetic (TK) studies were conducted in male and female CD-1 mice (Crl:CD-1 (ICR) BR; Charles River Laboratories, Wilmington, MA). Two dose levels were generally tested per compound, ranging from 3 to 40 mg/kg (9-120 mg/m2), administered by s.c. injection. ASOs were administered every other day for four doses (as a loading regimen over one week), followed by dosing once every fourth day or once a week for the remainder of a 4- to 13-week dosing period. Blood samples for ASO quantitation in plasma were collected by cardiac puncture at sacrifice into tubes containing EDTA at various time points over a 48-hour period following the dose (three mice per time point), and plasma was harvested. Single and multiple dose plasma exposure data (mean AUCs) were used for systemic exposure multiple determinations. In addition, liver and kidney samples were collected for drug concentration analysis at sacrifice ~48 hours after the last dose. Only liver exposure data were included and compared across species.

Monkey toxicology/toxicokinetic studies. Single and multiple dose toxicology/toxicokinetic studies were conducted in male and female cynomolgus monkeys (Macaca fascicularis; Sierra Biomedical Animal Colony, Sparks, NV). Four dose levels ranging from 1 to 40 mg/kg were generally tested for each compound, administered via 1-hour i.v. infusion or s.c. injection. One of the four doses, ranging from 2 to 4 mg/kg (24-48 mg/m2) and close to the clinical dose, was selected and included in this analysis. ASOs were administered every other day for four doses (as a loading regimen over one week), followed by dosing once every fourth day or once a week for the remainder of a 4- to 13-week dosing period.
Blood for quantitation of oligonucleotide concentrations in plasma was collected by peripheral venipuncture into EDTA-containing vacutainers at various time points over a 48-hour period following the dose, during the treatment as well as the post-treatment period, and plasma was harvested. Single and multiple dose plasma exposure data (mean AUCs) were used for systemic exposure multiple determinations. In addition, liver and kidney cortex samples were collected for drug concentration analysis at sacrifice ~48 hours after the last dose. Only liver exposure data were included and compared across species. All mouse and monkey studies were conducted utilizing protocols and methods approved by the Institutional Animal Care and Use Committee (IACUC) and carried out in accordance with the Guide for the Care and Use of Laboratory Animals adopted and promulgated by the US National Institutes of Health.

Human (clinical) studies. Human data were mostly from phase 1 clinical studies conducted in healthy volunteers or cancer patients. ASOs were dosed as 2-hour i.v. infusions or s.c. injections at dose levels ranging from 175 to 640 mg (2.5-9.14 mg/kg; 92.5-338 mg/m2). Doses were administered on days 1, 3, 5, and 8 as a loading regimen, followed thereafter by once-weekly administrations for an additional 3-5 weeks. Intensive pharmacokinetic blood sampling at various time points occurred for 24 or 48 hours following an i.v. or s.c. dose. Samples were collected in EDTA tubes and plasma was harvested. Single and multiple dose plasma exposure data (mean AUCs) were used for systemic exposure multiple determinations.

Analytical methods. Plasma samples were analyzed for parent ASO concentrations using quantitative and sensitive hybridization ELISA methods, which were a variation of a previously reported method. 20 ASO concentrations in tissue samples were quantitated using capillary gel electrophoresis (CGE) or HPLC with UV detection.
12,20 These assays were validated for precision, accuracy, selectivity, sensitivity, metabolite cross-reactivity, dilution linearity, prozone effect, and stability of the parent oligonucleotide prior to analysis of mouse, monkey, and human plasma or tissue samples. Both plasma and tissue sample analyses were conducted based on the principles and requirements described in 21 CFR part 58. The lower limit of quantitation (LLOQ) of the validated assays ranged from 0.2 to 2.0 ng/ml in mouse, monkey, and human plasma, and from 0.2 to 10.0 µg/ml in mouse and monkey tissues.

Determination of pharmacokinetic plasma exposure (AUC). The area under the plasma concentration-time curve (AUC) values in individual animals and humans were calculated using the linear trapezoidal rule (WinNonlin 3.1 or higher; Pharsight, Mountain View, CA) and summarized using descriptive statistics. Partial plasma AUC (AUC 0-24 hours or AUC 0-48 hours) values typically represent >90% of total AUC (AUC 0-∞ following a single dose, or AUC 0-τ at steady-state) because the plasma distribution phase dominates plasma exposure and clearance of 2′-MOE ASOs. 18 While other plasma PK exposure parameters were also typically determined, this retrospective analysis focused on plasma AUC only, given that it is the most commonly applied metric to assess systemic exposure multiples.

Determination of systemic exposure and administered dose ratios. The systemic plasma exposure ratio (PER) is defined based on the mean plasma AUC values in animals and human at the reported doses, without adjustment for BW or BSA (Eq. 1).
Similarly, the liver exposure ratio (LER) is defined based on reported mean liver concentrations between animal species (no liver tissue data were available from patients), and the administered dose ratio (ADR), after adjustment for BW (mg/kg) or BSA (mg/m2), is defined analogously:

PER = mean plasma AUC (animal) / mean plasma AUC (human)
LER = mean liver concentration (species A) / mean liver concentration (species B)
ADR = BW- or BSA-adjusted dose (animal) / BW- or BSA-adjusted dose (human)

In addition to the equations above, another metric, designated the "relative ratio" (RR; i.e., the ratio of ADR to PER or of ADR to LER), is defined to assess how well an administered dose ratio (based on either mg/kg or mg/m2 adjustment) estimates the corresponding systemic exposure ratio, with a relative ratio of 1.0 being a perfect predictive match and a calculated value between 0.5 and 2.0 considered acceptable. The RR is calculated as follows:

RR = ADR/PER or RR = ADR/LER

with the ADR calculated based either on mg/kg or mg/m2 values.
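Putting the Methods definitions together, the ratio metrics can be computed mechanically. The conversion factors (3, 12, 37) and the 70 kg human BW below are from the Methods; the dose and AUC-ratio inputs are hypothetical, for illustration only.

```python
KM = {"mouse": 3, "monkey": 12, "human": 37}  # mg/kg -> mg/m2 multipliers (Methods)

def adr(dose_animal_mgkg: float, dose_human_mgkg: float,
        species: str, basis: str = "BW") -> float:
    """Administered dose ratio, animal:human, on a mg/kg (BW) or mg/m2 (BSA) basis."""
    if basis == "BW":
        return dose_animal_mgkg / dose_human_mgkg
    return (dose_animal_mgkg * KM[species]) / (dose_human_mgkg * KM["human"])

def relative_ratio(adr_value: float, exposure_ratio: float) -> float:
    """RR = ADR / PER (or ADR / LER); values within 0.5-2.0 are considered acceptable."""
    return adr_value / exposure_ratio

# Hypothetical example: monkey at 4 mg/kg vs a 300 mg human dose (70 kg -> ~4.29 mg/kg),
# with an assumed monkey:human plasma AUC ratio (PER) of 0.95
dose_human = 300 / 70
rr = relative_ratio(adr(4.0, dose_human, "monkey", "BW"), 0.95)
print(0.5 <= rr <= 2.0)  # True -> the BW-adjusted dose ratio predicts exposure acceptably
```

On the BSA basis, the same hypothetical monkey example yields a much smaller ADR (the mg/m2 multipliers differ fourfold less between monkey and human than between mouse and human), which is consistent with the low BSA-based RRs reported for monkey in the Results.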
The Particle Shape of WC Governing the Fracture Mechanism of Particle Reinforced Iron Matrix Composites

In this work, tungsten carbide particle (WCp, spherical and irregular)-reinforced iron matrix composites were manufactured using a liquid sintering technique. The mechanical properties and the fracture mechanism of the WCp/iron matrix composites were investigated theoretically and experimentally. A crack schematic diagram and a fracture simulation diagram of the WCp/iron matrix composites were summarized, indicating that micro-cracks initiated from the interface for both spherical and irregular WCp/iron matrix composites; the irregular WCp, however, had a tendency to evolve toward spherical WCp. The micro-cracks then expanded into a wide macro-crack at the interface, leading to final failure of the composites. In comparison with the spherical WCp, the irregular WCp were prone to break due to stress concentration, making them prone to brittle cracking. This study of the fracture mechanisms of WCp/iron matrix composites may provide theoretical guidance for the design and engineering application of particle reinforced composites.

Introduction

Recently, particle reinforced metal matrix composite coatings (also named particle reinforced metal matrix surface composites, PRMMSC) have attracted extensive attention, because an uncoated metal surface can easily suffer abrasion, causing degradation or failure of the material [1]. It is necessary and important to improve surface properties such as mechanical properties (strength, toughness, and wear resistance) and chemical properties (corrosion resistance and oxidation resistance) to prolong service life and minimize production losses [2-10]. In recent years, WCp/iron matrix surface composites have been extensively used in slurry pumps, slurry elbow pipes, liner plates, roll fittings, and so forth.
These composites can be fabricated by cast infiltration [2,11,12], powder metallurgy [3], laser cladding [5,6,13-17], and so on, generating excellent metallurgical bonding between the surface composite layer and the substrate due to the good wettability between WCp and molten ferrous alloy. In recent years, many researchers have studied the mechanical properties of metal matrix composites as they vary with particle concentration, particle size, stress state, temperature, and so on [2,4,11,18-23]. However, particle shape is also one of the most important geometric factors of the reinforcement, and it can thus affect the overall performance of composites. It is generally believed that cracks in PRMMSC part manufacturing are crucial to reliable material properties, especially for reinforcement particles with different shapes. A finite element method was used to evaluate the effects of particle shape (spheres, regular octahedra, cubes, or regular tetrahedra) on the mechanical properties of particle reinforced composites, finding that particles of different shapes but equal sizes affected the yield stress to different extents [24]. Rasool et al. discussed the effects of particle shape (spherical and non-spherical) on the macroscopic and microscopic linear behaviors (linear elastic, thermoelastic, and thermal conduction responses) of particle reinforced composites by numerical methods [25]. Trofimov et al. found that 15 convex polyhedral particle shapes could change the effective elastic properties of particle-reinforced composites, predicted using micromechanical homogenization and direct finite element analysis approaches [26]. Therefore, different shapes of reinforcing particles can affect the mechanical properties of composites, resulting in different fracture modes.
However, WCp products in practice come in various shapes, which are bound to affect the mechanical properties of the composites they reinforce. Thus, in this work, WCp/iron matrix composites were prepared using a liquid sintering technique, and the effects of WC particle shape (taking spherical and irregular particles as examples) on the microstructure, mechanical properties, and fracture mechanism of particle reinforced iron matrix composites were investigated in detail.

Preparation of Composites

The WCp/iron matrix composites were prepared using a liquid sintering technique with raw materials including WCp and iron powders. The XRD pattern of the as-received WC powders is shown in Figure 1. It is clear that the as-received WC particles were composed of W2C, WC, and free carbon (C). The schematic diagram of the WCp/iron matrix composites and the morphology of the WCp are illustrated in Figure 2. WCp and iron powders were first mixed in an XQM-4L planetary ball mill (Nanjing Daran Technology Corporation, Nanjing, China) to ensure that the WCp were distributed uniformly in the iron powder. The mixed powders were then filled into a steel mold and pressed into a green compact with a manual hydraulic press at a pressure of 40 MPa for 60 min. The green compact was placed into a corundum boat (100 mm × 56 mm × 35 mm) and then into a tube furnace. The heating schematic diagram of the tube furnace is shown in Figure 2a, and the process parameters of the composites are described in Table 1. The heating rate of the vacuum tube furnace, with a furnace pipe diameter of 80 mm (GSL-1600X, Kejing Company, Hefei, China), was in the range of 0-20 °C/min, operated at 220 V and 5.5 kW.
Before heating, the tube furnace was purged with high-purity argon and then evacuated at least three times to protect the samples from contamination, and the vacuum valve was closed when the pressure reached about 30 MPa. Finally, the samples were heated to 1500 °C and held for 60 min to allow the interface to react adequately, after which they were cooled naturally in the furnace. Accordingly, WCp/iron matrix composites with WCp of different shapes were prepared.

Characterization

The relative density of the composites reinforced by spherical and irregular particles was 89.2 ± 1.0 and 88.6 ± 1.0 vol %, respectively; there were no obvious differences within the resolution limits of the relative density measurement. The phase composition of the samples was characterized using an X-ray diffractometer (XRD, Empyrean, Panalytical Company, Almelo, The Netherlands) with Cu-Kα radiation operated at 40 kV and 30 mA. The samples were scanned in the 2θ range of 30-90°. Data were collected in continuous mode with a scanning step of 0.02° and a time interval of 1 s/step. The microstructure of the samples was analyzed with scanning electron microscopy (SEM, VEGA 3 SBH, TESCAN, Brno, Czech Republic) combined with energy dispersive spectrometry (EDS, GENESIS, EDAX, Mahwah, NJ, USA). Hardness was measured using a Rockwell hardness tester (FR-45, Laizhou Laihua Testing Instrument Factory, Laizhou, China) under a load of 150 kgf (1471 N) with a diamond cone indenter and a test force duration of 10 s. Each test was repeated at least 5 times and the values were averaged. Compression tests were carried out using an AG-IS 10 kN mechanical testing machine (Shimadzu Corporation, Kyoto, Japan). To ascertain reproducibility, each test result reported in this work was averaged from eight compression tests under the same conditions.
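The averaging of repeated measurements described above is standard descriptive statistics; a minimal sketch with hypothetical hardness readings (the values below are illustrative, not data from this study):

```python
import statistics

# Hypothetical Rockwell hardness readings (HRC) from five repeat indents
readings = [68.9, 70.1, 69.4, 70.0, 69.1]
mean = statistics.mean(readings)
sd = statistics.stdev(readings)  # sample standard deviation across repeats
print(f"{mean:.1f} +/- {sd:.1f} HRC")
```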
Finally, the fracture morphology of the composites was observed using field emission scanning electron microscopy (FE-SEM, Nova Nano SEM 450, FEI Company, Hillsboro, OR, USA).

Microstructure

The WCp were mainly composed of WC and W2C phases, as identified by XRD (Figure 1). According to the W-C phase diagram and previous theoretical calculations, the temperature of the WC decomposition reaction is around 1250 °C [12]. Reaction (1) promoted the generation of more W2C [19], and the W2C would react with iron to generate Fe3W3C. According to our previous first-principles calculation, the cohesive energy E_coh of the reaction between W2C and Fe is −0.01 eV/atom [12]. According to thermodynamic theory, reaction (2) can occur spontaneously when the cohesive energy is negative. These two reactions promoted each other and led to the interface reaction between WCp and the iron matrix around 1341 °C. Meanwhile, WCp could decompose partially at high speed during heating, generating more products of reaction (1). The enrichment of W2C provided more reactants for reaction (2), so that more Fe3W3C was concentrated in the local area around the WCp [27]. Spherical and irregular particles were evenly distributed in the matrix, without aggregation. Irregular WCp possessed more prominent edges and corners, while spherical WCp presented as regular spheres. The microstructure of the prepared WCp/iron matrix composites with different particle shapes is shown in Figure 3. Both the spherical and the irregular WCp presented an intact interface morphology, and obvious interface reaction zones were generated around them, demonstrating that the particles underwent a metallurgical reaction with the iron matrix, as shown in Figure 3a. A large amount of brittle Fe3W3C phase was present in the matrix in a dispersed state.
Comparing Figure 3a,b, the brittle Fe3W3C phase in the spherical WCp/iron matrix composites was more homogeneous than that in the irregular WCp/iron matrix composites. A typical magnified view is shown in Figure 3c,d, where many intermittent massive structures appeared in the irregular WCp due to stress concentration and scattered into the iron matrix. Most of the W2C in the WCp reacted with Fe via the metallurgical reaction (2) to form Fe3W3C in the WCp/iron matrix composites, while the remaining WC particles distributed in the matrix appear as dark areas. In the spherical WCp/iron matrix composites, the bright white regions of the WCp (i.e., W2C) were more abundant, while the dark regions (i.e., non-dissolved WC) were fewer. As shown in Figure 3c,d, the brittle Fe3W3C phase presented a blocky structure in the matrix. As shown in Figure 3b, flat WCp in the irregular WCp/iron matrix composites tended to become rounded, showing a trend of irregular WCp turning into regular (spherical) WCp: the many bulges on the irregular WCp dissolved preferentially relative to the flat or recessed parts. The interface was very thin, ranging from 5 to 60 µm, which was beneficial for transmitting stress from the matrix to the WCp. How did this interfacial reaction zone affect the mechanical properties?

Figure 3. Metallographic photographs of composites with different particle shapes: spherical particles (a,c) and irregular particles (b,d).

Mechanical Properties

The mechanical properties of the WCp/iron matrix composites with different particle shapes were tested at least eight times. As shown in Figure 4, the yield strength and hardness of the spherical WCp/iron matrix composites were 947.8 ± 50 MPa and 69.5 ± 2.5 HRC, respectively.
Under corresponding process parameters, the yield strength and hardness of the irregular WCp/iron matrix composites were 556.8 ± 50 MPa and 59.4 ± 2.5 HRC, respectively. Apparently, the spherical WCp/iron matrix composites had higher compressive yield strength and hardness than the irregular WCp/iron matrix composites. Discussion In order to explore the initiation location of micro-cracks under the compression test, SEM together with EDS analyses of different fracture locations was carried out for the spherical and irregular WCp/iron matrix composites. The initiation location of micro-cracks in the composites was determined by observing the phase composition at the fracture location. According to the SEM photographs in Figure 5 and the EDS results summarized in Table 2, the element contents at points 1 and 2 in Figure 5a differed from the others, with a higher Fe content and lower W and C contents, so it could be inferred that these areas were the matrix of the composites. At points 3, 4, 5 and 6, however, the atomic ratio of Fe to W was close to 1:1, so the phase could be identified as Fe3W3C, i.e., these locations should be the interface of the composites. Micro-cracks could be found near points 3, 4, 5 and 6 in Figure 5a, so it could be inferred that the micro-cracks of the spherical WCp/iron matrix composites initiated at the interface.

Table 2. The atomic percentage (at %) of WCp/iron matrix composites with different particle shapes.

Point   Fe   W    C
1       85    5   10
2       87    4    9
3       43   40   17
4       43   39   18
5       43   39   18
6       43   40   17
7        2   63   35
8        3   62   35
9       43   40   17
10      43   40   17

According to Figure 5b and Table 2, the main constituents of the irregular WCp/iron matrix composites at points 7 and 8 in Figure 5b were W and C. It could be inferred that the phases were WC and W2C; thus, these locations were the WCp of the composites. This means that brittle cracking of the particles occurred during the compression tests.
Because the convex portions of irregular WCp more readily produced stress concentrations, the particles within the composites were prone to brittle cracking [22]. The chemical composition of the irregular WCp/iron matrix composites at points 9 and 10 could be recognized as Fe3W3C, because the atomic ratio of Fe to W was close to 1:1; that is to say, these locations were the interface of the composites. Micro-cracks, however, mainly initiated from points 7 and 8 in Figure 5b, so it could be inferred that the micro-cracks of the irregular WCp/iron matrix composites initiated from the WCp, composed of WC and W2C. Micro-cracks initiated near the interface of both shapes of WCp/iron matrix composites during the compression tests. The micro-cracks extended into large cracks and resulted in the failure of the composites. In the compression process, the irregular WCp within the composites tended to produce higher stress concentrations than the spherical WCp and were therefore prone to brittle cracking. The fracture morphology images of the WCp/iron matrix composites with different particle shapes are shown in Figure 6. From the fracture morphology images of the spherical WCp/iron matrix composites in Figure 6a,c, it can be seen that there were not only obvious cleavage steps but also small dimples. However, the number of small dimples was limited; therefore, during the compression tests, the fracture mode should be quasi-cleavage fracture [4,18,28]. From the fracture morphology images of the irregular WCp/iron matrix composites in Figure 6b,d, it can be seen that the matrix did not undergo plastic deformation before breaking, and the section was full of cleavage step surfaces, so the fracture mode was cleavage fracture (brittle fracture).
This was because the content of the interfacial phase Fe3W3C in the irregular WCp/iron matrix composites was higher than that in the spherical ones, and some Fe3W3C dissociated in the matrix existed as a brittle phase. This increased the brittleness of the composites and shifted the fracture mode from the quasi-cleavage fracture of the spherical WCp/iron matrix composites toward cleavage fracture [20,21,29]. The micro-cracks initiated and then expanded into wider cracks at the interface, resulting in the failure of the material. The compressive strength of the brittle (cleavage) fracture mode was lower than that of the quasi-cleavage fracture mode for these composites. In this case, the yield strength of the spherical WCp/iron matrix composites was 1.7 times that of the irregular ones. The fracture surfaces of these samples after the compression test are shown in Figure 6e,f. Figure 5. The compression fracture morphology of WCp/iron matrix composites with different particle shapes: (a) spherical particle; (b) irregular particle. The crack propagation of the WCp/iron matrix composites with different particles is schematically illustrated in Figure 7. It can be seen that the micro-crack sources of the composites were generated near the interface. Cracks initiated at the interface and expanded due to cohesive failure. Cracks could jump from one path to another as the fracture occurred, and several fracture paths might be produced when the cracks propagated through the matrix and encountered WCp. The cracks threaded through entire irregular WCp and resulted in the breakage of the WCp due to stress concentration. In fact, the irregular WCp had many bulges, resulting in a bigger specific surface area. In the interfacial reaction zones, more of the brittle Fe3W3C phase could be generated through diffusion. As discussed above, the brittle Fe3W3C phase was the root of crack initiation; that is to say, irregular WCp within the composites were prone to cause brittle cracks.
Therefore, the irregular WCp/iron matrix composites had lower yield strength and hardness. Figure 6. The fracture morphology of WCp/iron matrix composites with different particle shapes: (a,c,e) spherical particles; (b,d,f) irregular particles. Figure 7. The crack propagation simulation diagrams of WCp/iron matrix composites with different particle shapes: (a) spherical particles; (b) irregular particles. Conclusions In summary, tungsten carbide particle (WCp) reinforced iron matrix composites with different particle shapes (spherical and irregular) were manufactured successfully by a liquid sintering technique. The effects of WC particle shape on the microstructure, mechanical properties and fracture mechanism of the particle-reinforced iron matrix composites were investigated. The following conclusions could be drawn: (1) In the interfacial reaction zone, the WC particles and the iron matrix could react to form a brittle Fe3W3C phase. (2) The spherical WCp/iron matrix composites had higher compressive yield strength and hardness than the irregular ones. (3) The micro-crack sources of the composites were generated at the interface. The irregular WCp within the composites tended to produce higher stress concentrations than the spherical WCp and were therefore prone to cause brittle fracture. (4) The bigger specific surface area resulting from more bulges on the irregular WCp could lead to more of the brittle Fe3W3C phase in the interfacial reaction zones. Therefore, the irregular WCp/iron matrix composites had lower yield strength and hardness. Conflicts of Interest: The authors declare no conflict of interest.
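As a quick consistency check on the reported numbers, the ratio of the two measured compressive yield strengths can be worked out directly:

```latex
% Ratio of compressive yield strengths
% (spherical vs. irregular WCp/iron matrix composites):
\frac{\sigma_{y,\mathrm{spherical}}}{\sigma_{y,\mathrm{irregular}}}
  = \frac{947.8\ \mathrm{MPa}}{556.8\ \mathrm{MPa}} \approx 1.70
```

This matches the "1.7 times" figure quoted in the discussion.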
A decomposition of equivariant K-theory in twisted equivariant K-theories For G a finite group and X a G-space on which a normal subgroup A acts trivially, we show that the G-equivariant K-theory of X decomposes as a direct sum of twisted equivariant K-theories of X parametrized by the orbits of the conjugation action of G on the irreducible representations of A. The twists are group 2-cocycles which encode the obstruction of lifting an irreducible representation of A to the subgroup of G which fixes the isomorphism class of the irreducible representation. Introduction Suppose that we have a group extension of finite groups The purpose of this article is to study the G-equivariant K-theory K * G (X), on compact, Hausdorff G-spaces X for which the action of A is trivial. Equivariant K-theory is an equivariant generalized cohomology theory that is constructed out of G-equivariant vector bundles. Its basic properties were derived in [10]. Whenever X is a G-space such that A acts trivially and p : E → X is a Gequivariant vector bundle, then we can regard E as an A-equivariant vector bundle and thus the fibers of E can be seen as A-representations. Decomposing E into A-isotypical pieces (see [10,Proposition 2.2]), we obtain a decomposition of E as an A-equivariant vector bundle Here V τ denotes the A-vector bundle π 1 : X × V τ → X associated to an irreducible representation τ : A → U (V τ ) and Irr(A) denotes the set of isomorphism classes of complex irreducible A-representations. It is important to point out that this decomposition is one of A-vector bundles and not one of G-vector bundles since in general the bundles V τ ⊗ Hom A (V τ , E) do not possess the structure of a G-vector bundle. The key observation of this work is that the sum [τ ]∈Irr(A) V τ ⊗ Hom A (V τ , E) can be rearranged using the different orbits of the action of Q on Irr(A) as to obtain a decomposition of E in terms of G-vector bundles. 
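Written out, the A-isotypical decomposition described in the preceding paragraph (equation (2.1) in the body of the paper) reads:

```latex
% A-isotypical decomposition of a G-equivariant vector bundle E over a
% G-space X on which the normal subgroup A acts trivially
% (cf. [10, Proposition 2.2]):
E \;\cong\; \bigoplus_{[\tau]\,\in\, \operatorname{Irr}(A)}
    V_\tau \otimes \operatorname{Hom}_A(V_\tau, E)
```

This is a decomposition of A-vector bundles only; the key observation of the paper is that grouping the summands along the G-orbits of Irr(A) produces genuine G-subbundles.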
Moreover, we show that the factors obtained in this refined decomposition naturally define vector bundles that are used to define twisted forms of equivariant K-theory, and in this way we obtain a decomposition of K * G (X) as a direct sum of twisted forms of equivariant K-theory. To make this precise, suppose that τ : A → U (V τ ) is an irreducible A-representation. Let G [τ ] (resp. Q [τ ] ) denote the isotropy group of the action of G (resp. Q) at [τ ] ∈ Irr(A). These groups fit into a group extension of the form 1 → A → G [τ ] → Q [τ ] → 1, thus defining a 2-cocycle α τ ∈ Z 2 (Q [τ ] , S 1 ) which is the data needed to define the α τ -twisted Q [τ ] -equivariant K-theory groups ατ K * Q [τ ] (X) (see [2, Section 7] for the definition). With this notation we can state the following theorem, which is the main result of this article. Theorem 3.4. Suppose that A ⊂ G is a normal subgroup and X is a compact, Hausdorff G-space on which A acts trivially. Then there is a natural isomorphism K * G (X) ∼= ⊕ [τ ] ατ K * Q [τ ] (X), where [τ ] runs over the orbits of G on Irr(A). This isomorphism is functorial on maps X → Y of G-spaces on which A acts trivially. We remark that the previous theorem also holds in the case of G being a compact Lie group. However, we chose to work first with finite groups because in this case we can obtain explicit formulas for the cocycles used to twist equivariant K-theory. The general case will be handled in a sequel to this article. The layout of this article is as follows. In Section 1 we study the problem of extending homomorphisms of finite groups. In particular, the cocycles α τ ∈ Z 2 (Q [τ ] , S 1 ) that appear in Theorem 3.4 are constructed in this section. In Section 2 we construct a twisted form of equivariant K-theory using vector bundles that come equipped with a prescribed fiberwise representation. In Section 3 we prove Theorem 3.4, which is the main result of this work. In Section 4 we relate Theorem 3.4 to the Atiyah-Segal completion theorem.
In Section 5 we provide a formula for the third differential of the Atiyah-Hirzebruch spectral sequence that computes K * G (X) whenever A acts trivially on X. Finally, in Section 6 some explicit computations are provided for the dihedral group D 8 . Throughout this work all the spaces in sight will be compact and Hausdorff, endowed with a continuous action of the finite group G, unless stated otherwise. Extensions of homomorphisms of finite groups In this section we study extensions of homomorphisms of finite groups. Our main goal is to show that the obstructions for finding such extensions can be studied using group cohomology. We remark that the material in this section may be known to experts, but we include the main ingredients that will be used throughout this article for completeness. We refer the reader to [1, Chapter I] and [7] for background on group cohomology. Consider the group extension of finite groups 1 → A → ι G → π Q → 1 and fix a given homomorphism of groups ρ : A → U . We want to study the conditions under which the homomorphism ρ may be extended to a homomorphism ρ : G → U in such a way that ρ • ι = ρ| A = ρ. First note that since A is normal in G, the group G acts on the left on the set Hom(A, U ) of homomorphisms from A to U : for a homomorphism χ : A → U and g ∈ G we define the homomorphism g · χ by the equation (g · χ)(a) = χ(g −1 ag). Second note that the group U acts on the right on Hom(A, U ) by conjugation: for a homomorphism χ : A → U and M ∈ U we define the homomorphism χ · M by the equation (χ · M )(a) = M −1 χ(a)M . Further note that this left G-action on Hom(A, U ) commutes with the right U -action. If ρ were to be extended to ρ : G → U then we would have the equality ρ(g −1 ag) = ρ(g) −1 ρ(a)ρ(g), thus implying that the homomorphisms g · ρ and ρ are conjugate to each other by the element ρ(g) in U , or in other words that g · ρ = ρ · ρ(g). In particular this implies that there must exist a homomorphism f : G → Inn(U ) from G to the inner automorphisms of U such that we have the equation f (g)(ρ(a)) = ρ(gag −1 ), and that ρ ∈ [Hom(A, U )/U ] G , i.e.
the class of ρ is G-invariant on the set of equivalence classes of homomorphisms up to conjugation. Therefore the first obstruction for the existence of the extension ρ of ρ is the existence of a homomorphism f : G → Inn(U ) such that the following diagram commutes, where the homomorphism p is the canonical homomorphism and Z(U ) is the center of U , and that the class of ρ up to U -conjugation is G-invariant. 1.1. Extension of homomorphisms. Let us suppose that we are in the situation of diagram (1.1). In what follows we will show that the obstruction for the existence of the extension ρ : G → U lies in H 2 (Q, Z(U )). Choose a set-theoretical section σ : Q → G with σ(1) = 1; it determines maps χ : Q × Q → A and ψ : Q → Aut(A), and the set Q × A acquires an induced group structure. Denote this group by Q ⋉ (χ,ψ) A and note that the natural maps between G and Q ⋉ (χ,ψ) A become isomorphisms of groups, one inverse to the other; therefore we may identify the group G with the group Q ⋉ (χ,ψ) A. On the other hand, let us consider the inner automorphisms of U defined by the elements f (σ(q)) and choose elements M q ∈ U such that f (σ(q)) = Ad(M q ) for all q ∈ Q; choose M 1 = 1. In particular we have that M q ρ(a)M q −1 = ρ(σ(q)aσ(q) −1 ) for all q ∈ Q and a ∈ A. Definition 1.4. Consider the commutative diagram of groups of (1.1) with ρ ∈ [Hom(A, U )/U ] G , the set-theoretical section σ : Q → G and the lifts M q ∈ U of the elements f (σ(q)) with M 1 = 1. Define the map α ρ : Q × Q → Z(U ) by the equation M q1 M q2 = α ρ (q 1 , q 2 )M q1q2 ρ(χ(q 1 , q 2 )). The following lemma shows that associated to α ρ we have a cohomology class that does not depend on the choices made above. The proof follows by direct computation and is left as an exercise to the reader. Lemma 1.5. The map α ρ : Q × Q → Z(U ) satisfies the 2-cocycle condition, i.e. for every q 1 , q 2 , q 3 ∈ Q the equality α ρ (q 1 , q 2 )α ρ (q 1 q 2 , q 3 ) = α ρ (q 2 , q 3 )α ρ (q 1 , q 2 q 3 ) is satisfied, thus making α ρ a 2-cocycle of the group Q with coefficients in the abelian group Z(U ) seen as a trivial Q-module. Moreover, the cohomology class [α ρ ] ∈ H 2 (Q, Z(U )) is well defined. Namely, it does not depend on the choice of the section σ nor on the choice of the lifts M q . Proof. Let us suppose there is a group homomorphism ρ : G → U extending ρ : A → U and fitting into the diagram (1.1).
Take any section σ : Q → G with σ(1) = 1 and choose M q := ρ(σ(q)). Then M q1 M q2 = M q1q2 ρ(χ(q 1 , q 2 )), and therefore [α ρ ] = 1. Let us now suppose that there exists ε : Q → Z(U ) such that δε = α ρ ; this implies that for q 1 , q 2 ∈ Q we obtain α ρ (q 1 , q 2 ) = ε(q 1 q 2 )ε(q 1 ) −1 ε(q 2 ) −1 . Since the image of ε lies in the center of U , we may define M q := M q ε(q), thus obtaining the equation M q1 M q2 = M q1q2 ρ(χ(q 1 , q 2 )). Consider the map Ψ : Q ⋉ (χ,ψ) A → U, (q, a) → M q ρ(a). It is straightforward to verify that Ψ is a group homomorphism. Composing the map Ψ with the isomorphism G → Q ⋉ (χ,ψ) A, g → (π(g), σ(π(g)) −1 g), we may define the homomorphism ρ : G → U, g → M π(g) ρ(σ(π(g)) −1 g). Remark 1.7. Note that in the case that U is abelian we may find the obstruction for the existence of the extension ρ : G → U from the Lyndon-Hochschild-Serre spectral sequence associated to the group extension 1 → A → G → Q → 1. The LHS spectral sequence converges to H * (G, U ) and its second page is E p,q 2 = H p (Q, H q (A, U )). From now on we specialize to the case U = U (V ρ ), where V ρ is a complex representation of A and U (V ρ ) denotes the group of unitary transformations of V ρ . To start we have the following lemma, whose proof is left to the reader. Lemma 1.8. Suppose that for all g ∈ G the irreducible representation g · ρ is isomorphic to ρ, or, what is the same, that ρ ∈ [Hom(A, U )/U ] G . Then there exists a unique homomorphism f : G → Inn(U (V ρ )) making the diagram (1.1) commutative. Proposition 1.9. Suppose that for all g ∈ G the irreducible representation g · ρ is isomorphic to ρ and that the class [α ρ ] ∈ H 2 (Q, S 1 ) is trivial. Then the representation ρ may be extended to an irreducible representation ρ : G → U (V ρ ). Proof. By Lemma 1.8 we know that there exists f : G → Inn(U (V ρ )) making diagram (1.1) commutative. By Proposition 1.6 we know that the obstruction for the existence of the extension is [α ρ ] ∈ H 2 (Q, Z(U (V ρ ))), and since the center of U (V ρ ) is isomorphic to S 1 , the result follows.
Assume that A is a normal subgroup of a finite group G, so that we have a group extension 1 → A → G → Q → 1. Let G act on a compact and Hausdorff space X in such a way that for every x ∈ X we have A ⊂ G x . In other words, the subgroup A acts trivially on X. Let p : E → X be a G-equivariant vector bundle. We can give E a Hermitian metric that is invariant under the action of A. If we see p : E → X as an A-vector bundle then, as the action of A on X is trivial, by [10, Proposition 2.2] we have a natural isomorphism of A-vector bundles E ∼= ⊕ [τ ]∈Irr(A) V τ ⊗ Hom A (V τ , E). (2.1) In the above equation Irr(A) denotes the set of isomorphism classes of complex irreducible A-representations, and for an irreducible representation τ : A → U (V τ ) the factor V τ carries the A-action defined by τ . It is important to point out that the decomposition given in (2.1) is a decomposition as A-equivariant vector bundles and not as G-equivariant vector bundles, since in general the terms on the right hand side of (2.1) do not have the structure of a G-equivariant vector bundle. With this in mind we have the following definition. Definition 2.2. A (G, ρ)-equivariant vector bundle is a G-equivariant vector bundle p : E → X such that the natural evaluation map V ρ ⊗ Hom A (V ρ , E) → E is an isomorphism of A-vector bundles. In other words, a (G, ρ)-equivariant vector bundle is a G-equivariant vector bundle p : E → X that satisfies the following property: for every x ∈ X the A-representation E x is isomorphic to a direct sum of copies of the representation ρ; that is, the only irreducible A-representation that appears in the fibers of E is ρ. We can define a direct summand of the equivariant K-theory using (G, ρ)-equivariant vector bundles. For this let Vec G,ρ (X) denote the set of isomorphism classes of (G, ρ)-equivariant vector bundles, where two (G, ρ)-equivariant vector bundles are isomorphic if they are isomorphic as G-vector bundles. Notice that if E 1 and E 2 are two (G, ρ)-equivariant vector bundles then so is E 1 ⊕ E 2 . Therefore Vec G,ρ (X) is a semigroup. Definition 2.3. Assume that G acts on a compact space X in such a way that A acts trivially on X. We define K 0 G,ρ (X), the (G, ρ)-equivariant K-theory of X, as the Grothendieck construction applied to Vec G,ρ (X).
For n > 0 the group K n G,ρ (X) is defined as K 0 G,ρ (Σ n X + ), where as usual X + denotes the space X with an added base point. The goal of this section is to provide a description of the previous equivariant K-groups in terms of the usual twisted equivariant K-groups as defined for example in [2, Section 7]. Suppose that ρ : A → U (V ρ ) is a complex irreducible representation with the property that g · ρ is isomorphic to ρ for every g ∈ G. Fix a set-theoretical section σ : Q → G and lifts M q as in Section 1; we choose M 1 = 1. Let α ρ ∈ Z 2 (Q, S 1 ) be the cocycle corresponding to ρ constructed in Proposition 1.9. Using the cocycle α ρ ∈ Z 2 (Q, S 1 ) we can construct a central extension of Q by S 1 in the following way: as a set define Q αρ = Q × S 1 , with the product structure given by the cocycle α ρ . The group Q αρ is a compact Lie group that fits into a central extension 1 → S 1 → Q αρ → Q → 1. Let G act on a compact space X in such a way that A acts trivially on X. We can see X as a Q αρ -space on which the central factor S 1 acts trivially. Suppose that p : E → X is a (G, ρ)-equivariant vector bundle. In general the vector bundle Hom A (V ρ , E) does not have a structure of a G-equivariant vector bundle that is compatible with the G-structure on E. Instead, we are going to show in the next theorem that Hom A (V ρ , E) has the structure of a Q αρ -vector bundle on which the central factor S 1 acts by multiplication of scalars. This is a key step in our work. Theorem 2.5. Let X be a G-space such that A acts trivially on X. Assume that g · ρ ∼= ρ for every g ∈ G. If p : E → X is a (G, ρ)-equivariant vector bundle, then Hom A (V ρ , E) has the structure of a Q αρ -vector bundle on which the central factor S 1 acts by multiplication of scalars. Moreover, the assignment E → Hom A (V ρ , E) defines a natural one to one correspondence between isomorphism classes of (G, ρ)-equivariant vector bundles over X and isomorphism classes of Q αρ -equivariant vector bundles over X for which the central S 1 acts by multiplication of scalars. Proof. Given (q, s) ∈ Q αρ and f ∈ Hom A (V ρ , E) x , define q • f using the lift σ(q) and the element M q ∈ U (V ρ ), where M q is the element chosen above.
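The product on Q αρ is the standard twisted multiplication on the set Q × S 1 attached to a 2-cocycle; spelled out:

```latex
% Standard S^1-central extension attached to the 2-cocycle \alpha_\rho:
% underlying set \widetilde{Q}_{\alpha_\rho} = Q \times S^1, with product
(q_1, s_1)\cdot(q_2, s_2) \;=\;
  \bigl(q_1 q_2,\; \alpha_\rho(q_1, q_2)\, s_1 s_2\bigr)
```

The 2-cocycle condition on α ρ is exactly what makes this product associative, and the subgroup {1} × S 1 is central, yielding the extension 1 → S 1 → Q αρ → Q → 1.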
It is easy to see that with this definition q • f is A-equivariant and that the map Equation (2.6) allows us to define an action of Q αρ on Hom It can be checked that this defines an action of Q αρ on Hom A (V ρ , E). Also, this action is fiberwise linear and the map p : Hom A (V ρ , E) → X is Q αρ -equivariant so that Hom A (V ρ , E) is a Q αρ -equivariant vector bundle over X. By definition the central factor S 1 acts by multiplication of scalars. Suppose now that p : F → X is a Q αρ -equivariant vector bundle over X for which the central S 1 acts by multiplication of scalars. Using the cocycle identities it can be verified that this defines an action of G on V ρ ⊗ F , and since it is linear on the fibers, the bundle V ρ ⊗ F becomes a G-vector bundle. Moreover for a ∈ A, as M 1 = 1, we have that a · (v ⊗ f ) = (ρ(a)v) ⊗ f so that A acts on V ρ ⊗ F by the representation ρ; that is, p : V ρ ⊗ F → X is a (G, ρ)-equivariant vector bundle over X. Finally we need to show that this defines a one to one correspondence between isomorphism classes of (G, ρ)-equivariant vector bundles over X and isomorphism classes of Q αρ -equivariant vector bundles over X for which the central S 1 acts by multiplication of scalars. To this end, assume that p : E → X is a (G, ρ)-equivariant vector bundle over X, then by definition the map is an isomorphism of A-vector bundles. Since Hom A (V ρ , E) is a Q αρ -equivariant vector bundle over X on which the central S 1 acts by multiplication of scalars, we may endow V ρ ⊗ Hom A (V ρ , E) with the structure of a G vector bundle as it was done in equation (2.7). The map β is an isomorphism of vector bundles and its G-equivariance follows from the next equations. For g ∈ G we have The previous argument shows that β : Now, if p : F → X is a Q αρ -equivariant vector bundle over X for which the central S 1 acts by multiplication of scalars, then by equation (2.7) we know that V ρ ⊗F is a (G, ρ)-equivariant vector bundle. 
Let us show that F and Hom A (V ρ , V ρ ⊗ F ) are isomorphic as Q αρ -equivariant vector bundles. For this it can be showed in a similar way as it was done above that the canonical isomorphism of vector bundles is Q αρ -equivariant. Therefore the vector bundles F and Hom A (V ρ , V ρ ⊗ F ) are isomorphic as Q αρ -equivariant vector bundles. We conclude that the inverse map of the assignment Theorem 2.5 provides a useful identification of the (G, ρ)-equivariant K-groups of Definition 2.2 with the α ρ -twisted Q-equivariant K-theory groups. For this purpose let us recall the definition of the α-twisted Q-equivariant K-theory groups whenever α : Q × Q → S 1 is a 2-cocycle given in [2,Section 7]. Consider the S 1 -central extension of Q that α defines with Q α as it is defined in (2.4). Let X be a Q-space and endow it with the action of Q α induced by the Q action. Let α K 0 Q (X) be Grothendieck group of the set of isomorphism classes of Q α vector bundles over X on which S 1 acts by multiplication of scalars on the fibers. For n > 0 the twisted groups α K n Q (X) are defined as α K 0 Q (Σ n X + ). The groups α K * Q (X) are called the α-twisted Q-equivariant K-theory groups of X. Note that α K 0 Q (X) is a free submodule of K * Qα (X) and we could have alternatively defined the α-twisted Q-equivariant as this submodule. Furthermore note that the α-twisted Q-equivariant vector bundles are the same as the ( Q α , u)equivariant bundles where u : S 1 → U (1) is the irreducible representation given by the oriented isomorphisms of groups u defined by multiplication by scalars. Applying the definition of the α-twisted Q-equivariant K-theory groups, Theorem 2.5 implies the following result which is the main result of this section: Corollary 2.9. Let X be a compact and Hausdorff G-space such that A acts trivially on X. Assume furthermore that ρ : A → U (V ρ ) is a representation whose isomorphism class is fixed by G, i.e. g · ρ ∼ = ρ for every g ∈ G. 
Then the assignment between the (G, ρ)-equivariant K-theory of X and the α ρ -twisted Q-equivariant Ktheory of X is a natural isomorphism of R(Q)-modules. Decomposition formula in Equivariant K-theory In this section we provide a decomposition of K * G (X) whenever G has a normal subgroup acting trivially on X. This decomposition is the main goal of this article. Suppose that G is a finite group for which we have a normal subgroup A. Let Q = G/A so that we have a group extension Recall that the group G acts on the set Irr(A) by conjugation. Notice that the group Assume that G acts on a compact and Hausdorff space X in such a way that A acts trivially on X. Let p : E → X be a G-equivariant vector bundle. Since A acts trivially on X then each fiber of E can be seen as an A-representation. We can give E a Hermitian metric that is invariant under the action of A. As before we have a natural isomorphism of A-vector bundles β : As pointed out before, in general each of the pieces V τ ⊗ Hom A (V τ , E) does not have the structure of a G-equivariant vector bundle. However, the previous decomposition can be used to obtain a decomposition of E as a direct sum of G-vector bundles by considering the different orbits of the action of Q on Irr(A). We claim the following theorem: Theorem 3.2. Suppose that A ⊂ G is a normal subgroup and X is a compact, Hausdorff G-space on which A acts trivially. Then there is a natural isomorphism where [τ ] runs over the orbits of G on Irr(A) and G [τ ] is the isotropy group of [τ ]. This isomorphism is functorial on maps X → Y of G-spaces on which A acts trivially. Proof. Decompose the set Irr(A) in the form Irr(A) = Notice that each E Ai is an A-equivariant vector bundle over X, and moreover the map β : defines an isomorphism of A-vector bundles. We will show that each E Ai is a G-vector bundle and that the map β is G-equivariant. Let us fix 1 ≤ i ≤ k and an irreducible representation ρ : A → U (V ρ ) such that [ρ] ∈ A i . 
Therefore all the representations in A i are of the form [g · ρ]. Fix representatives g 1 = 1, g 2 , . . . , g ni of the different cosets in G/G [ρ] ; that is, A i = {[g 1 · ρ], . . . , [g ni · ρ]}. We first notice that V ρ ⊗ Hom A (V ρ , E) has the structure of a G [ρ] -vector bundle. In a similar way as in equations (2.8) it can be checked that, for h ∈ G [ρ] , this defines a structure of a G [ρ] -vector bundle on V ρ ⊗ Hom A (V ρ , E) in such a way that the evaluation map is a G [ρ] -equivariant isomorphism onto its image (we may take M h as the transformation M π(h) M σ(π(h)) −1 h defined in equations (2.8)). With this in mind we can define an action of G on E Ai in the following way. Suppose that g ∈ G and that v ⊗ f ∈ (V gj ·ρ ⊗ Hom A (V gj ·ρ , E)) x . Decompose gg j in the form gg j = g l h, where 1 ≤ l ≤ n i and h ∈ G [ρ] . In other words, g l is the representative chosen for the coset (gg j )G [ρ] and h = g −1 l gg j . Let us show that this defines an action of G on E Ai . We show first that g • f ∈ Hom A (V g l ·ρ , E) gx when f ∈ Hom A (V gj ·ρ , E) x . To see this recall that the representation V gj ·ρ has as underlying space V ρ and the action of A is given by a · w = ρ(g −1 j ag j )w. Therefore, the fact that f ∈ Hom A (V gj ·ρ , E) x means that for all a ∈ A and all w ∈ V gj ·ρ we have f (ρ(g −1 j ag j )w) = af (w). The analogous identity then holds for all w ∈ V g l ·ρ and all a ∈ A. Since the evaluation map β is injective and preserves the G-action, it follows that for every g 1 , g 2 ∈ G we have g 1 ⋆ (g 2 ⋆ (v ⊗ f )) = (g 1 g 2 ) ⋆ (v ⊗ f ). The above argument proves that for each 1 ≤ i ≤ k the vector bundle E Ai has the structure of a G-vector bundle. Moreover, the map β is an isomorphism of A-vector bundles and is G-equivariant, so that β is an isomorphism of G-vector bundles. Now, if for each A i we choose a representation [τ i ] ∈ A i we may define a map Ψ i X : K * G (X) → K * G [τ i ] ,τi (X). Note that Ψ i X (E Aj ) = 0 for i ≠ j.
Let us construct the map K G [τ i ] ,τi (X) → K * G (X) which will be the right inverse of Ψ i X . Let ρ = τ i and consider a vector bundle F ∈ Vec G [ρ] ,ρ (X). We need to construct a G-vector bundle from F taking into account that G acts on X. For g ∈ G let g * F := {(x, f ) ∈ X × F |gx = πf } be the pullback bundle where π : F → X is the projection map. Consider the bundle n j=1 (g −1 j ) * F where g 1 = 1, g 2 , . . . , g n are fixed elements of the different cosets in G/G [ρ] . Endow n j=1 (g −1 j ) * F with a G action in the following way. For (x, f ) ∈ (g −1 j ) * F and g ∈ G, let g l ∈ G and h ∈ G [ρ] be such that gg j = g l h. Define the action of g as follows: g • (x, f ) := (gx, hf ) ∈ (g −1 l ) * F. For anotherḡ ∈ G, let g m ∈ G and e ∈ G [ρ] such thatḡgg j = g m e and hencē gg l = g m eh −1 . Then we have the following equalities: Note that for h ∈ G [ρ] and (x, f ) ∈ (g −1 j ) * F we have that and therefore for k = g j hg −1 This implies that the restricted action of G gj ·[ρ] ⊂ G on (g −1 j ) * F matches the conjugation action that can be defined on the the bundle (g −1 j ) * F ; in particular the restriction of the G [ρ] -action on 1 * F matches the original action on F . -vector bundles, we have that at the level of K-theory we obtain that Therefore we have that the maps Ψ i X have right inverses, and hence we conclude that the map Ψ X is indeed an isomorphism. The functoriality follows from the fact that the bundles V τ ⊗Hom A (V τ , f * E) and f * (V τ ⊗ Hom A (V τ , E)) are canonically isomorphic as (G [τ ] , τ )-equivariant bundles whenever f : Y → X is a G-equivariant map from spaces on which A acts trivially. Theorem 3.2 and Corollary 2.9 imply the main result of this article. Theorem 3.4. Suppose that A ⊂ G is a normal subgroup and X is a compact, Hausdorff G-space on which A acts trivially. Then there is a natural isomorphism This isomorphism is functorial on maps X → Y of G-spaces on which A acts trivially. Proof. 
The result follows from the fact that the canonical map induces an isomorphism of α τ -twisted Q [τ ] -equivariant bundles. As a particular application of the previous theorem, suppose that X is a compact space on which Q acts freely. Then we can see X as a G-space in such a way that for every x ∈ X we have that G x = A. In this particular case the twisted equivariant groups ατ K * Q [τ ] (X) that appear in the previous theorem can be seen as suitable non-equivariant twisted K-groups as is explained next. For this suppose that H is a separable infinite dimensional Hilbert space. Let P U (H) denote the projective unitary group with the strong operator topology. Given a space Y together with a continuous function f : Y → BP U (H), pulling back the universal principal P U (H)-bundle EP U (H) → BP U (H) along f , we obtain a principal P U (H)-bundle P f → Y . Associated to the pair (Y ; f ) we may define the twisted K-theory groups as the homotopy groups of the space of sections of the associated Fredholm bundle Fred(P f ) := P f × P U(H) Fred(H). This is the usual definition of non-equivariant twisted K-groups given in [5,Def. 3.3]. Suppose now that α : Q × Q → S 1 is a 2-cocycle and let be the central extension that α defines as in (2.4 Qα is a Q α representation on which S 1 acts by multiplication of scalars and all the representations of this kind appear infinitely number of times. Hence we have an induced homomorphism of groups φ α : Q α → U (H) which induces a homomorphism φ α : Q → P U (H) making the following diagram of group extensions commutative Using the homomorphism φ α : Q → P U (H) and the natural action of P U (H) on Fred(H) we can obtain an action of Q on Fred(H). 
Recall that the α-twisted Q-equivariant K-theory groups are generated by the Q α -equivariant vector bundles over X on which S 1 acts by multiplication of scalars; therefore, using the alternative definition of equivariant K-theory in terms of Fredholm operators, the group α K * Q (X) may alternatively be defined as the homotopy groups of the space of Q-equivariant maps from X to Fred(H), i.e. (3.5) α K −p Q (X) ∼ = π p map(X, Fred(H)) Q . In the particular case in which Q acts freely on X there is a homeomorphism between the space of Q-equivariant maps from X to Fred(H) and the space of sections of the Fred(H)-bundle X × Q Fred(H) → X/Q. On the other hand, since X is a free Q-space there is a unique (up to homotopy) Q-equivariant map X → EQ inducing a map h : X/Q → BQ at the level of the quotient spaces. Combining h with the map Bφ α we obtain the following commutative diagram, where the outer square is a pullback square. Therefore, using (3.5), (3.6) and (3.7) we conclude that if Q acts freely on X, the α-twisted Q-equivariant K-theory of X is canonically isomorphic to the twisted K-theory of the pair (X/Q, Bφ α • h), i.e. Combining (3.8) with Theorem 3.2 we obtain: Theorem 3.9. Let A be a normal subgroup of a finite group G and denote Q = G/A. Let X be a free Q-space and consider it as a G-space on which A acts trivially. Then there is a natural decomposition of the G-equivariant K-theory of X into a direct sum of twisted K-theories in the following way where φ α ρ : Q [ρ] → P U (H) is the stable homomorphism defined by α ρ and h ρ : The decomposition formula and the Completion Theorem. Let A be a normal subgroup in G and Q = G/A. Suppose that X is a G-space on which A acts trivially. Let E n Q = Q * Q * · · · * Q be the Milnor join of n copies of Q, thus making EQ the direct limit of the free Q-spaces E n Q .
Let B n Q = E n Q /Q and note that it is the union of n contractible open sets; thus the product of any n elements in the reduced K-theory groups K * (B n Q ) is zero. Consider the ideal I(Q) = ker(res G A : R(G) → R(A)) of virtual representations whose restriction to A vanishes, and consider the ideals I n Q = ker(R(G) → K G (E n Q )) of the map that takes a representation V and maps it to V × E n Q . On the K-groups K * (X/Q [ρ] ; Bφ α ρ • h ρ ) the operator Sq 3 Z is the composition of the maps β • Sq 2 • mod 2, where mod 2 is the reduction modulo 2, Sq 2 is the Steenrod operation, and β is the Bockstein map. We obtain the following theorem. Theorem 5.1. Suppose that A ⊂ G is a normal subgroup and let Q = G/A. Let X be a compact G-CW complex such that G x = A for all x ∈ X. With the identifications made above, the third differential of the Atiyah-Hirzebruch spectral sequence is defined coordinate-wise in such a way that for η ∈ H p (X/Q [ρ] ; Z) we have Therefore K * D 8 (E(Z/2) 2 ) is the direct sum of the K-theory of B(Z/2) 2 and the free group generated by the D 8 -equivariant vector bundle ν × ED 8 → ED 8 . The differential d 3 is defined on generators as d 3 (α) = (x 2 y + xy 2 )α and d 3 p(x, y) = Sq 3 Z (p(x, y)) on any polynomial in x and y. Since the differential preserves the R(Z/2) structure, the fourth page is the direct sum H * (H * (B(Z/2) 2 , Z), Sq 3 Z ) ⊕ H * (H * (B(Z/2) 2 , Z), Sq 3 Z + (x 2 y + xy 2 )∪). The left-hand summand was calculated by Atiyah [3, p. 285] and the cohomology becomes Z[x 2 , y 2 ]/(x 4 y 2 − x 2 y 4 , 2x 2 , 2y 2 ), and since everything is of even degree, the spectral sequence collapses at the fourth page. The cohomology of the right-hand summand is localized in degree 0 and is isomorphic to Z ∼ = Z 2α (one can check that the differential Sq 3 + (x 2 y + xy 2 )∪ on F 2 [x 2 , y 2 , x 2 y + xy 2 ] has trivial cohomology).
Therefore the page at infinity becomes where the ring Z[x 2 , y 2 ]/(x 4 y 2 − x 2 y 4 , 2x 2 , 2y 2 ) corresponds to the associated graded of K * (B(Z/2) 2 ) and Z 2α corresponds to K * D 8 ,ρ ( * ) = Z ν with ν → 2α. In particular, note that the image of the edge homomorphism of the spectral sequence
Concatenated ScaA and TSA56 Surface Antigen Sequences Reflect Genome-Scale Phylogeny of Orientia tsutsugamushi: An Analysis Including Two Genomes from Taiwan Orientia tsutsugamushi is an obligate intracellular bacterium associated with trombiculid mites and is the causative agent of scrub typhus, a life-threatening febrile disease. Strain typing of O. tsutsugamushi is based on its immunodominant surface antigen, 56-kDa type-specific antigen (TSA56). However, TSA56 gene sequence-based phylogenetic analysis is only partially congruent with core genome-based phylogenetic analysis. Thus, this study investigated whether concatenated surface antigen sequences, including surface cell antigen (Sca) proteins, can reflect the genome-scale phylogeny of O. tsutsugamushi. Complete genomes were obtained for two common O. tsutsugamushi strains in Taiwan, TW-1 and TW-22, and the core genome/proteome was identified for 11 O. tsutsugamushi strains. Phylogenetic analysis was performed using maximum likelihood (ML) and neighbor-joining (NJ) methods, and the congruence between trees was assessed using a quartet similarity measure. Phylogenetic analysis based on 691 concatenated core protein sequences produced identical tree topologies with ML and NJ methods. Among TSA56 and core Sca proteins (ScaA, ScaC, ScaD, and ScaE), TSA56 trees were most similar to the core protein tree, and ScaA trees were the least similar. However, concatenated ScaA and TSA56 sequences produced trees that were highly similar to the core protein tree, the NJ tree being more similar. Strain-level characterization of O. tsutsugamushi may be improved by coanalyzing ScaA and TSA56 sequences, which are also important targets for their combined immunogenicity. It was not until 1995 that O. tsutsugamushi was demarcated from Rickettsia spp. 
with recognition of its distinct cellular envelope, surface antigens, growth characteristics, and divergent 16S rRNA gene (rrs) sequences [35,36]. Today, the genetic basis of these differences has largely been elucidated [37], but the function of some surface antigens remains unknown. Orientia and Rickettsia have distinct autotransporter domain-containing surface cell antigen (Sca) proteins with secreted or surface-displayed passenger domains [38]. Sca genes are among those used for the speciation of Rickettsia (superseded by genome-based similarity measures [39]), based on pairwise nucleotide sequence homologies of sca0 (ompA), sca4 (gene D), and sca5 (ompB), in addition to rrs and gltA (citrate synthase gene) [40]. Some Sca proteins are absent in certain Rickettsia clades (e.g., Sca0 in the typhus group) [40], and such variation is observed at the strain level for O. tsutsugamushi [41]. Intact genes encoding ScaA, ScaC, ScaD, and ScaE are present in all complete O. tsutsugamushi genomes sequenced to date (i.e., core genes), while genes encoding ScaB and ScaF are only present in a subset of strains (i.e., accessory genes). Among core Sca proteins, only ScaA and ScaC are of known function and are involved in cellular attachment, binding to an isoform of the mixed-lineage leukemia 5 protein [42] and fibronectin [43], respectively. Like TSA56, Sca proteins (particularly ScaA) are immunogenic [44] yet reveal different phylogenetic relationships [41].

Initial core genome-based phylogenetic analysis of O. tsutsugamushi revealed incongruencies with analysis based on tsa56 and other targets [31]. Given that whole-genome sequencing is not widely available for strain-level characterization of O. tsutsugamushi, it is desirable to improve upon single-locus genotyping. Thus, this study investigated whether concatenated surface antigen sequences, including Sca proteins, can reflect the genome-scale phylogeny of O. tsutsugamushi.
Cultivation, Purification, and Genomic DNA Isolation

Two representative O. tsutsugamushi strains in Taiwan, TW-1 and TW-22, were selected for whole-genome sequencing (WGS). Clinical O. tsutsugamushi isolates LC0708a (TW-1) and KHC0708a (TW-22) (originally reported in [25]) were recovered in mouse (Mus musculus) fibroblast-like L929 cells (BCRC # RM60091) from frozen stocks at the Taiwan CDC Laboratory of Vector-borne Viral and Rickettsial Diseases. LC0708a was isolated from a 19-year-old male who presented with fever and rashes in Lienchiang County (Matsu Islands) in August 2007, and KHC0708a was isolated from a 45-year-old female who presented with fever, headache, malaise, lymphadenopathy, and an eschar in Kaohsiung City in August 2007. L929 cells were maintained in a 75 cm² (T75) flask using MEM (Gibco, Grand Island, NY, USA) supplemented with 4% fetal bovine serum (FBS) (Gibco) and 1% Antibiotic-Antimycotic (Gibco) at 37 °C with 5% CO₂. Frozen O. tsutsugamushi stocks (0.5 mL) were rapidly thawed, resuspended to disrupt host cells, and used to inoculate L929 cells at ~60% confluence in a 25 cm² flask with a small volume of serum-free MEM and incubated at 32 °C with 5% CO₂ for 60-90 min, followed by the addition of MEM containing 2% FBS and 1% Antibiotic-Antimycotic.

Media was changed within 24 h and again within 72 h (on day 3 or 4, depending on cell health). Bacterial load was monitored with semiquantification of O.
tsutsugamushi DNA extracted from culture supernatant with SYBR Green-based quantitative PCR (qPCR) targeting a 120 bp fragment of the single-copy 47-kDa gene (tsa47) [45], examining changes in cycle threshold (CT) values. This assay was performed in 20 µL reactions with 1X KAPA SYBR FAST qPCR Master Mix (Roche, Basel, Switzerland), 0.2 µM of each primer (synthesized by Mission Biotech, Taipei, Taiwan), and 2 µL of DNA template or water (no template control), and qPCR was performed using a MyiQ2 thermal cycler (Bio-Rad, Hercules, CA, USA) at 95 °C for 3 min and 40 cycles of 95 °C for 3 s and 60 °C for 20 s, followed by a dissociation curve analysis from 65 °C to 95 °C with 0.5 °C increments. Once bacterial growth reached the late exponential phase, cells were harvested for preservation at −80 °C via gentle scraping, and remaining cells were disrupted with 0.5 mm glass beads to release intracellular O. tsutsugamushi for passage. Briefly, the cell suspension was diluted in serum-free MEM (1:10 to 1:20 based on relative load) to infect fresh L929 cells as before, except in a T75 flask. This process was repeated until passage 8, upon which 5 to 8 flasks were inoculated to harvest O. tsutsugamushi for purification and genomic DNA extraction.

Filter purification and DNA isolation were performed using a similar approach to Batty et al. [31]. Once O. tsutsugamushi growth was in the stationary phase, host cells were disrupted by gently agitating the flask containing 0.5 mm glass beads with a small volume of spent media, and the lysate, recovered using spent media, was filtered through a 2 µm pore size Puradisc 25 syringe filter (Whatman, Maidstone, UK). Filtered O.
tsutsugamushi cells were pelleted at 14,000× g for 10 min, the supernatant was discarded, and cells were resuspended in 380 µL RDD Buffer (Qiagen, Hilden, Germany) and divided into two equal volumes for further processing. Residual host cell genomic DNA was depleted by adding 2.5 µL Benzonase nuclease (Qiagen) to each tube, with incubation in a 37 °C water bath for 30 min, followed by enzyme inactivation with the addition of 20 µL Proteinase K (Qiagen) and incubation at 56 °C for 30 min. O. tsutsugamushi cells were pelleted as before, the supernatant was discarded, and cells were resuspended in Dulbecco's phosphate-buffered saline. DNA was isolated using the DNeasy Blood and Tissue Kit (Qiagen) following the manufacturer's protocol, proceeding with the addition of 20 µL Proteinase K and 200 µL Buffer AL (Qiagen), gentle mixing, and incubation at 56 °C for 10 min. DNA was eluted using 100 µL 10 mM Tris-Cl (pH 8.5) per column, stored at 4 °C, and quantified using a Qubit fluorometer (dsDNA HS Assay Kit; Invitrogen, Waltham, MA, USA) and Fragment Analyzer 5200 (DNF-464 Kit; Agilent, Santa Clara, CA, USA).
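The effectiveness of this depletion step reduces to simple arithmetic once qPCR copy numbers of the single-copy marker genes are in hand: each cfd or tsa47 copy counts one host or bacterial genome equivalent, and the genome sizes used in the text (2.7 Gbp for M. musculus, 2 Mbp for O. tsutsugamushi) convert equivalents to a mass fraction. A minimal Python sketch, with hypothetical copy numbers for illustration:

```python
# Minimal sketch of the residual host-DNA estimate: copy numbers of the
# single-copy markers cfd (mouse) and tsa47 (O. tsutsugamushi) count
# genome equivalents, which the genome sizes convert to a mass fraction.
# The copy numbers in the example call are hypothetical.
MOUSE_GENOME_BP = 2.7e9  # M. musculus (GRCm39)
OT_GENOME_BP = 2.0e6     # O. tsutsugamushi (Ikeda)

def residual_host_dna_percent(cfd_copies: float, tsa47_copies: float) -> float:
    host_bp = cfd_copies * MOUSE_GENOME_BP
    bacterial_bp = tsa47_copies * OT_GENOME_BP
    return 100.0 * host_bp / (host_bp + bacterial_bp)

# e.g. 1e3 host genome equivalents against 1e7 bacterial equivalents
pct = residual_host_dna_percent(1e3, 1e7)  # ≈ 11.9% host DNA by mass
```

Because the mouse genome is over a thousand times larger than the bacterial one, even a small number of residual host genome equivalents dominates the DNA mass, which is why nuclease depletion before sequencing matters.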
Quantitative PCR

SYBR Green-based qPCR targeting tsa47 and a 108 bp fragment of the single-copy mouse adipsin gene (cfd) [46] was performed to evaluate the depletion of host cell genomic DNA. Triplicate 20 µL reactions were performed, each containing 1X iTaq Universal SYBR Green Supermix (Bio-Rad), 0.5 µM of each primer, and 2 µL of DNA template or water, and qPCR was performed using an ABI 7300 thermal cycler (Applied Biosystems, Foster City, CA, USA) at 95 °C for 5 min and 40 cycles of 95 °C for 15 s and 60 °C for 60 s, followed by a dissociation curve analysis (system default). Copy number was determined based on calibration curves constructed using pCR2.1-TOPO vector (Thermo Fisher Scientific, Waltham, MA, USA) containing target gene fragments (tsa47 from Taiwan CDC Karp and cfd from L929) with 10⁹ to 10⁴ and 10⁷ to 10² copies per reaction (serially diluted in 10-fold increments) for the tsa47 and cfd assays, respectively. Linear regression was performed in R 4.3.0 (https://www.r-project.org/, accessed on 25 April 2023), and ggpubr 0.6.0 [47] was used for data visualization. The percentage of residual host cell genomic DNA was calculated referencing a genome size of 2.7 Gbp for M. musculus (reference assembly GRCm39; RefSeq GCF_000001635.27) and 2 Mbp for O. tsutsugamushi (reference assembly Ikeda; RefSeq GCF_000010205.1).

Whole Genome Sequencing, Assembly, and Annotation

WGS was performed by Genomics Bioscience and Technology Co., Ltd. (New Taipei City, Taiwan) using the PacBio Sequel sequencing platform (Pacific Biosciences, Menlo Park, CA, USA). Briefly, genomic DNA was sheared using a g-TUBE (Covaris, Woburn, MA, USA) and purified with AMPure PB beads (Beckman Coulter, Brea, CA, USA) for ~10 kbp libraries. SMRTbell libraries were sequenced using a SMRT Cell 1M v3 (Sequel Sequencing Kit 3.0; Pacific Biosciences).

Core Genome Phylogeny of O. tsutsugamushi

A core of 691 CDSs was identified among 11 O.
tsutsugamushi strains, resulting in a concatenated amino acid alignment with 243,706 positions, which was reduced to 235,464 positions with 91.91% invariant sites after removal of positions containing gaps. Phylogenetic analysis based on 691 concatenated core protein sequences (235,464 positions) produced identical tree topologies with ML and NJ methods (Figure 1). TW-1 and TW-22 were in separate clades with Karp and Kato, respectively; TW-1 was most related to Wuj/2014, UT76, and then UT176 and Karp; and TW-22 was related to Ikeda and Kato, while TA686 and Gilliam were on separate ancestral branches with Boryong forming an outgroup.
Among individual surface antigens, TSA56 trees had the highest congruence with the core tree, while ScaA trees had the lowest (Table 2; Figures S5 and S6). Concatenated ScaA and TSA56 trees were highly congruent with the core tree, and the NJ tree had higher congruence than the ML tree (Table 2; Figure 2 and Figure S7). Concatenated ScaC and TSA56 produced an ML tree with similar congruence to the core tree as the ML tree for TSA56 alone, though with a different topology, and an NJ tree with lower congruence (Table 2; Figures S7 and S8). Concatenation of ScaD or ScaE with TSA56 also produced trees with higher congruence to the core tree than TSA56 trees (Table 2; Figures S7 and S8).
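The congruence scores reported here come from the quartet similarity measure of the R package Quartet [72]. The underlying idea can be sketched in a few lines of Python: enumerate every four-taxon subset and count how often the two trees induce the same quartet topology. The sketch below uses nested tuples for trees and toy five-taxon labels, all hypothetical, not the 11-strain data:

```python
from itertools import combinations

def tree_clusters(tree, acc=None):
    """Collect the leaf set under every internal node of a tree given as
    nested tuples with string leaves; each set defines one bipartition."""
    if acc is None:
        acc = []
    if isinstance(tree, str):
        return frozenset([tree]), acc
    leaves = frozenset()
    for child in tree:
        child_leaves, _ = tree_clusters(child, acc)
        leaves |= child_leaves
    acc.append(leaves)
    return leaves, acc

def quartet_topology(clusters, quartet):
    """Return the pairing {{a,b},{c,d}} a tree induces on a 4-taxon
    subset, or None if the quartet is unresolved."""
    q = frozenset(quartet)
    for c in clusters:
        inter = c & q
        if len(inter) == 2:
            return frozenset([inter, q - inter])
    return None

def quartet_similarity(t1, t2):
    """Fraction of 4-taxon subsets on which both trees agree."""
    leaves1, cl1 = tree_clusters(t1)
    leaves2, cl2 = tree_clusters(t2)
    assert leaves1 == leaves2, "trees must share the same taxa"
    quartets = list(combinations(sorted(leaves1), 4))
    same = sum(quartet_topology(cl1, q) == quartet_topology(cl2, q)
               for q in quartets)
    return same / len(quartets)

# Toy example with hypothetical taxon labels a..e:
t1 = (("a", "b"), ("c", ("d", "e")))
t2 = (("a", "c"), ("b", ("d", "e")))
```

On this toy pair, three of the five possible quartets agree, giving a similarity of 0.6; identical trees score 1.0, matching the normalized scale used in Table 2.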
Discussion

This study found that phylogenetic analysis based on concatenated ScaA and TSA56 sequences produces trees highly similar to core protein-based phylogeny despite a >100-fold difference in the number of aligned amino acid positions analyzed. TSA56-based trees were most similar to the core tree among the surface antigens examined in this study but still had many incongruencies between phylogenies, and ScaA-based trees were highly dissimilar. This suggests that ScaA possesses phylogenetically informative sites subject to different evolutionary pressures than TSA56, which may be clarified by characterizing their protein-protein interactions. Sca proteins translocate via type V secretion [38], and while this system has not been characterized for Orientia, it likely involves a β-barrel assembly machine complex and other periplasmic chaperones similar to Rickettsia [73]. The translocation mechanism of TSA56 has not been described, but it possesses an N-terminal signal peptide that appears to be cleaved [10]. ScaA requires a conserved block (CB2, Boryong aa 843 to 875) and involves its flanking regions (fragments F4 and F5, Boryong aa 607 to 994 and 867 to 1241) for attachment, with F5 exhibiting the highest immunogenicity (i.e., anti-ScaA IgG titer) [42]. TSA56 primarily binds fibronectin at its surface-exposed antigen domain III and adjacent C-terminal region (Boryong aa 312 to 341) [11], which is relatively conserved [74] and may work in concert with ScaC [43]. TSA56 produces a robust humoral response [75] with multiple B-cell epitopes [10,76]; as such, recombinant protein-based enzyme-linked immunosorbent assays detecting anti-TSA56 antibodies have been developed for clinical diagnosis [77-79]. Neutralizing antibodies are important for protective immunity, but cellular immunity is also necessary to mount an effective immune response against intracellular pathogens [80]. TSA56 elicits limited T-cell responses compared to other immunoprevalent antigens, including
TSA22 [81], which remains uncharacterized, and TSA47 [75], a periplasmic serine protease involved in cellular exit [82]. Notably, coimmunization of mice with ScaA and TSA56 provided enhanced protection against lethal challenge with heterologous strains compared to immunization with either antigen alone [44]; however, even in natural infection, ScaA- and TSA56-directed B- and T-cell immunity rapidly declines after one year [80], though multidose vaccines may overcome this shortcoming. Recently, nanoparticle vaccines have demonstrated enhanced immunogenicity with ScaA, TSA56, and TSA47 subunits, with enhanced protection provided by dual-layered antigen nanoparticles [83,84]. In the future, enhanced heterologous protection could be provided via nanoparticle vaccines that combine antigens from multiple strains, similar to what has been implemented for influenza viruses [85] and, importantly, may be tailored for different geographic regions. To this end, the determination of ScaA and TSA56 sequences is indispensable to identify representative antigen sequences for vaccine development.
Phylogenetic trees produced using ScaA and TSA56 were not perfectly congruent with the core protein-based tree. Gilliam and TA686 were not placed on separate branches due to their high phylogenetic relatedness for ScaA. Additionally, using ML, TW-1 and Wuj/2014 were placed on different branches, though poorly supported with bootstrap replicates (<50%), whereas NJ, which is computationally much less intensive than ML, produced a topology that was more similar to the core protein-based tree with a higher level of bootstrap confidence across nodes (>75%). Even so, for core trees, both methods had nodes with low bootstrap support (<75%), but only consistently for the node separating the group containing TW-1, Wuj/2014, and UT76 and the group containing UT176 and Karp. This could be due to geographic relatedness between Thai strains UT76 and UT176 for CDSs other than ScaA and TSA56. Boryong formed an outgroup in core phylogenetic analyses and was also found to be ancestral in a preliminary phylogenetic analysis for Orientia spp. based on core CDSs with the inclusion of a partial assembly of Orientia chuto Dubai (RefSeq GCF_000964595.1) (findings not shown). ScaB, which has been implicated in adherence to and invasion of nonphagocytic cells [86], was only identified in Boryong. ScaB has also been detected in TA686 [86] but has a gene sequence below the minimum identity threshold used to identify core CDSs in this study, which was relaxed from 80%, as used in the previous core phylogenetic analysis of O.
tsutsugamushi with 657 core genes [31], to 70% in order to include tsa56 as a core CDS. The expression of this core proteome still needs to be verified, including in its natural host, while 599 of the previous 657 core genes were found to be transcribed in Karp and UT176 infecting human umbilical vein endothelial cells [87]. Among other Sca proteins, ScaF was only identified in TA686 and Karp, which were clearly separated in the core tree, suggesting that ScaF has evolved multiple times, though its function remains unknown. TA686 and Karp also possessed similar ScaC, and whether ScaF is also involved in adherence in these strains should be determined. Most studies on adherence have been conducted using nonphagocytic cells [11,42,43,86], but O. tsutsugamushi also infects monocytes and antigen-presenting cells at the site of inoculation [88]. It has yet to be determined whether variation in core Sca proteins or the presence of accessory Sca proteins controls cellular tropism, which could explain variation in strain-level virulence among mouse strains and nonhuman primates [89]. In systemic infection, O. tsutsugamushi infects endothelial cells, with the highest bacterial loads found in the lungs [90], and interstitial pneumonitis is commonly observed in severe cases, which can progress to fatal acute respiratory distress syndrome [91,92], with macrophages playing a key role in pathogenesis [93].
Phylogenetic clustering did not consistently correspond with geographic origin for the 11 strains examined in this study. TW-1 was highly similar to Wuj/2014, which was isolated in Zhejiang, China (near Taiwan). TW-1 is the predominant strain isolated from scrub typhus patients in the offshore islands near China (Kinmen, Matsu, and Penghu) [25]. TW-22 was most related to Ikeda and Kato, isolated in Japan to the north. However, TW-22 is predominantly isolated in southern Taiwan [25], which has a tropical climate. Ancestral to the aforementioned strains, TA686 and Gilliam were isolated in neighboring countries in Southeast Asia (Thailand and Burma), but TA686 was not found to cluster with other Thai strains (UT76 and UT176) in the Karp clade. Phylogenetic placement of Boryong (isolated in Korea), ancestral to TA686 and Gilliam, further obfuscates the phylogeographic picture. Thus, additional genomes of geographically diverse isolates (with adequate representation for each tsa56 genotype) are needed to clarify the phylogeography of O. tsutsugamushi. To this end, an effort should be made to obtain complete genomes for all described tsa56 genotypes in Taiwan. Studies are also needed to investigate the mite fauna of migratory birds, which have long been thought to play an important role in the dissemination of O. tsutsugamushi [94] and have been implicated in the spread of other acarids [95]. There are at least 47 trombiculid mite species throughout Taiwan [96], but the association between mite species and O. tsutsugamushi strains remains unclear, and mite host-O. tsutsugamushi interactions remain poorly characterized. A single mite colony may be coinfected with O.
tsutsugamushi [97], facilitating intragenic recombination [28]. Competition with cocirculating Rickettsiaceae also needs to be clarified; for example, Rickettsia felis-like organisms have been found to infect Leptotrombidium deliense in Taiwan [98]. Globally, no complete genome sequences have been made publicly available for recently described divergent Orientia spp., including Orientia chuto (endemic in the Middle East) [99] and Candidatus Orientia chiloensis (endemic in South America) [100], and no criteria have been established for delineation of novel Orientia species. These taxa appear to be ancestral to O. tsutsugamushi and may shed light on the evolutionary origins of Sca proteins in Orientia, which have yet to be elucidated [73].

There are still methodological limitations in the ability to amplify and sequence complete tsa56 and scaA, as they are large (1.6 kbp and 4.3 to 4.6 kbp, respectively), and this is particularly challenging for culture-independent studies yielding small amounts of fragmented DNA. Nonetheless, long-range high-fidelity PCR can be used to amplify complete or nearly complete tsa56 [23] and scaA [44], though additional sequencing primers are required. For large-scale culture-independent studies, smaller fragments containing immunogenic epitopes may be prioritized; however, partial sequences will invariably exclude important phylogenetic signals and reduce congruence with core genome-based phylogeny.

TW-1 and TW-22 genomes had acceptable sequencing coverage (>100×), and no assembly errors were identified using Pilon. Among the genomes examined in this study, TA686 was an outlier in that it possesses >1000 pseudogenes (representing >40% of CDSs), raising questions about its assembly accuracy and whether it is accurately placed in phylogenetic analyses. Contaminant reads that map to the host cell genome should be removed before genome assembly, including mitochondrial DNA, which was not depleted with filtration and nuclease treatment.
In conclusion, phylogenetic analysis based on concatenated ScaA and TSA56 sequences offers a substantial improvement over TSA56-based analysis in its ability to reflect genome-scale phylogeny, and future studies should prioritize their sequencing for O. tsutsugamushi isolates or clinical specimens if WGS-based methods are not available. ScaA and TSA56 sequences are also valuable to inform antigen selection for vaccine development.

TW-22, Table S1: Pairwise amino acid alignments of Orientia tsutsugamushi surface antigens, Table S2: Summary of NCBI locus tags for surface antigens, Figure S5

Informed Consent Statement: Not applicable.

Figure 1. Phylogenetic analysis of 11 Orientia tsutsugamushi strains based on 691 concatenated core protein sequences (235,464 positions without gaps) based on (a) maximum likelihood with RAxML-NG v1.2.0 [66], performed using the JTT + I + G4 + F substitution model (tree with the highest log-likelihood is shown) and (b) neighbor-joining with MEGA11 [64] based on evolutionary distances computed using the JTT matrix with 4 discrete gamma categories (optimal tree is shown). Scale branch lengths represent the number of amino acid substitutions per site, and the percentage of replicate trees in which the associated taxa clustered together in 1000 bootstrap replicates is shown above the branches.
Figure 2. Neighbor-joining-based phylogenetic analysis of 11 Orientia tsutsugamushi strains based on concatenated ScaA and TSA56 amino acid sequences (1910 positions without gaps) with MEGA11 [64] based on evolutionary distances computed using the JTT matrix with 4 discrete gamma categories. The optimal tree is shown (scale branch lengths represent the number of amino acid substitutions per site), and the percentage of replicate trees in which the associated taxa clustered together in 1000 bootstrap replicates is shown above the branches.
: Maximum likelihood phylogenetic trees based on individual amino acid sequences, Figure S6: Neighbor-joining phylogenetic trees based on individual amino acid sequences, Figure S7: Maximum likelihood phylogenetic trees based on concatenated amino acid sequences, Figure S8: Neighbor-joining phylogenetic trees based on concatenated amino acid sequences.

Author Contributions: Conceptualization, N.T.M. and K.-H.T.; methodology, N.T.M. and K.-H.T.; software, N.T.M.; validation, N.T.M.; formal analysis, N.T.M.; investigation, N.T.M. and K.-H.T.; resources, Y.-L.L.G., P.-Y.S. and K.-H.T.; data curation, N.T.M.; writing-original draft preparation, N.T.M.; writing-review and editing, N.T.M.; visualization, N.T.M.; supervision, T.-Y.Y., Y.-L.L.G., P.-Y.S. and K.-H.T.; project administration, T.-Y.Y.; funding acquisition, K.-H.T. All authors have read and agreed to the published version of the manuscript.

Funding: This research was supported by the Ministry of Science and Technology, Taiwan (MOST 107-2314-B-002-279-MY2) and in part by the Taiwan CDC (MOHW113-CDC-C-315-144315).

Institutional Review Board Statement: This study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of the Taiwan CDC, Ministry of Health and Welfare (IRB No. 106111).

Table 2. Congruence between phylogenetic trees based on the quartet similarity measure implemented in R package Quartet (normalized scores are shown) [72].
Calving localization at Helheim Glacier using multiple local seismic stations

A multiple-station technique for localizing glacier calving events is applied to Helheim Glacier in southeastern Greenland. The difference in seismic-wave arrival times between each pairing of four local seismometers is used to generate a locus of possible event origins in the shape of a hyperbola. The intersection of the hyperbolas provides an estimate of the calving location. This method is used as the P and S waves are not distinguishable due to the proximity of the local seismometers to the event and the emergent nature of calving signals. We find that the seismic waves that arrive at the seismometers are dominated by surface (Rayleigh) waves. The surface-wave velocity for Helheim Glacier is estimated using a grid search with 11 calving events identified at Helheim from August 2014 to August 2015. From this, a catalogue of 11 calving locations is generated, showing that calving preferentially happens at the northern end of Helheim Glacier.

Introduction

The calving of marine-terminating grounded glaciers is a significant contributor to rising sea levels worldwide due to the massive volumes of ice involved that can suddenly be discharged into the sea. Depending on the glacier, the contribution of calving to sea-level rise can be equal to, or even greater than, the contribution from melt processes (Rignot et al., 2013; Depoorter et al., 2013). However, the lack of understanding of the physical principles that cause these events means that it is difficult to precisely forecast their contribution to sea-level rise in the near future (e.g. Pfeffer et al., 2008; Meier et al., 2007). Calving glaciers can rapidly advance and retreat in response to minimal climate signals, which can rapidly change the sea level (Meier and Post, 1987; Nick et al., 2013). A better understanding of calving processes is vital to developing accurate predictions of sea-level rise.
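The hyperbola-intersection idea in the abstract is equivalent to finding the point whose predicted pairwise arrival-time differences best match the observed ones: each station pair constrains the source to one hyperbola, and a least-squares grid search over candidate locations approximates their common intersection. A minimal Python sketch with made-up station coordinates, a synthetic event, and an assumed (purely illustrative) surface-wave speed:

```python
import numpy as np

# Hypothetical station coordinates (km, local grid) -- illustrative only.
stations = np.array([[0.0, 0.0], [10.0, 2.0], [4.0, 12.0], [-6.0, 7.0]])
v = 1.7  # assumed surface-wave speed in km/s (illustrative value)

def locate(stations, t_obs, v, extent=10.0, step=0.25):
    """Grid search over candidate source positions: at each point the
    predicted time differences (|x - s_i| - |x - s_j|) / v for every
    station pair are compared with the observed differences; the
    minimum-misfit point approximates the hyperbola intersection."""
    pairs = [(i, j) for i in range(len(stations))
             for j in range(i + 1, len(stations))]
    dt_obs = np.array([t_obs[i] - t_obs[j] for i, j in pairs])
    best, best_cost = None, np.inf
    for x in np.arange(-extent, extent, step):
        for y in np.arange(-extent, extent, step):
            r = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
            dt_pred = np.array([(r[i] - r[j]) / v for i, j in pairs])
            cost = np.sum((dt_pred - dt_obs) ** 2)
            if cost < best_cost:
                best_cost, best = cost, (x, y)
    return best

# Forward-model a synthetic event at (3, 5) km, then recover it.
src = np.array([3.0, 5.0])
t_obs = np.hypot(stations[:, 0] - src[0], stations[:, 1] - src[1]) / v
x, y = locate(stations, t_obs, v)
```

With real, noisy arrival-time picks the same search applies; the velocity grid search described in the abstract would amount to repeating the location search over a range of candidate speeds v and keeping the best overall misfit.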
The lack of understanding of why and how calving events happen makes it hard to create a general "calving law" (Amundson and Truffer, 2010; Bassis, 2011). There have not been enough direct observations of smaller calving events (e.g. Qamar, 1988; Amundson et al., 2008) to identify patterns to attempt to form a general calving law. Calving events are intermittent, though they exhibit some seasonality due to the seasonality of the mélange ice, ocean temperature variations and variations in basal motion due to meltwater input (Foga et al., 2014; Joughin et al., 2008). The overall unpredictability of calving requires monitoring equipment to be deployed on a long-term basis to detect events.

One way to monitor glaciers and detect calving is to use seismic arrays (e.g. Walter et al., 2013; Amundson et al., 2012; Köhler et al., 2015). Calving events can generate glacial earthquakes, with surface waves detectable at a teleseismic range (Nettles et al., 2008; Nettles and Ekström, 2010; Tsai et al., 2008). A common automated calving detection method is to use triggers based on the ratio of short-time-average and long-time-average seismic signals (STA/LTA). After an event has been detected, it can then be localized.

Currently, most localization methods require visual confirmation of the calving location, unless the events are sufficiently large to be seen in satellite imagery. Automatic methods like STA/LTA can help narrow down the manual search in satellite and camera imagery for calving but, ultimately, visually locating a calving event requires clear weather and well-lit conditions (O'Neel et al., 2007). An exception to this is terrestrial radar (e.g. Holland et al., 2016), but radar cannot be deployed year-round without constant refuelling and swapping out the data drives, and also has problems seeing through atmospheric precipitation. Recently, high-frequency pressure meters, such as Sea-Bird Electronics tsunameters, have been deployed to monitor calving at Helheim (Vaňková and Holland, 2016).
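The STA/LTA trigger idea mentioned above can be illustrated with a minimal NumPy sketch on synthetic data. The window lengths and the synthetic trace below are illustrative assumptions, not values from the study, and a centred moving average is used for brevity where real detectors usually use causal (trailing) windows:

```python
import numpy as np

def sta_lta(trace, fs, sta_win=1.0, lta_win=30.0):
    """Ratio of short-time-average to long-time-average signal energy.
    Values well above ~1 flag emergent events."""
    energy = np.asarray(trace, dtype=float) ** 2

    def moving_avg(x, n):
        return np.convolve(x, np.ones(n) / n, mode="same")

    sta = moving_avg(energy, int(sta_win * fs))
    lta = moving_avg(energy, int(lta_win * fs))
    lta[lta == 0] = np.finfo(float).tiny  # guard against division by zero
    return sta / lta

# Synthetic 40 Hz trace: 120 s of noise with a 10 s high-amplitude burst
rng = np.random.default_rng(0)
fs = 40
trace = rng.normal(0.0, 1.0, fs * 120)
trace[fs * 60:fs * 70] += rng.normal(0.0, 8.0, fs * 10)
ratio = sta_lta(trace, fs)
# the ratio peaks inside the burst, which is what an STA/LTA trigger keys on
```

A detector would declare an event whenever `ratio` crosses some empirically chosen threshold, as the text describes for the 11-event catalogue.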
Land-based seismometers offer improvements over simple camera or satellite imagery for detecting calving because seismic arrays are not limited to daylight hours, are not affected by snow, can be deployed year-round without maintenance and provide quantitative data to help estimate the magnitude of calving events. Seismic studies of calving have been done at the regional (< 200 km) as well as the teleseismic level. Generally, teleseismic detections of calving are done via low-frequency surface waves (e.g. Walter et al., 2012; O'Neel and Pfeffer, 2007; Chen et al., 2011), while local detections are done at some subset of frequencies within 1-10 Hz (e.g. Bartholomaus et al., 2012; Amundson et al., 2008, 2012; O'Neel et al., 2007; Köhler et al., 2015).

Seismicity in glaciers has been observed for both basal processes (e.g. basal sliding) and surface processes (e.g. surface crevassing) unrelated to calving (Anandakrishnan and Bentley, 1993; West et al., 2010). Until recently, seismic signals generated by glacial calving were believed to be caused either by capsizing icebergs striking the fjord bottom (Amundson et al., 2012; Tsai et al., 2008) or interacting with the sea surface (Bartholomaus et al., 2012), or by sliding glaciers that speed up after calving (Tsai et al., 2008). Murray et al. (2015a) found that glacial earthquakes at Helheim Glacier are caused by glaciers temporarily moving backwards and downwards during a large calving event. Nettles and Ekström (2010) found that only capsizing icebergs generate observable low-frequency surface-wave energy, with calving events that create tabular icebergs not generating glacial earthquakes. Basal crevassing has also been suggested as a mechanism for calving at Helheim (Murray et al., 2015b). It is not yet known how to fully categorize and characterize different calving events.
Seismic signals of calving events typically have emergent onsets (i.e. a gradual increase in amplitude with no clear initial onset) with dominant frequencies on the order of 1-10 Hz (e.g. Amundson et al., 2010; Richardson et al., 2010; O'Neel et al., 2007; Amundson et al., 2012). The emergent nature of the signals makes it hard to accurately identify a P wave onset time, let alone an S wave onset time, which hinders the traditional seismic triangulation method that takes the difference between the P and S wave arrival times to generate a distance to the epicentre (Spence, 1980). The other main method involves calculating back azimuths from a ratio of easting and northing amplitudes of P waves from a broadband seismic station (e.g. Jurkevics, 1988; Köhler et al., 2015); this fails for our study due to the proximity of our stations and the high speed of the seismic waves (around 3.8 km s−1 through pure ice, e.g. Vogt et al., 2008), which makes the waves arrive near-simultaneously. Another method to locate calving events, known as beam-forming, uses the seismic signals recorded on several array stations to determine the time delay associated with a back azimuth that aligns the signals coherently (Koubova, 2015). A more recent method for localizing calving events is the use of frequency dispersion of surface waves, which uses a regional array (100-200 km away) of hydroacoustic stations to estimate a distance between the event and detector and combines this with an azimuth (determined from the P waves) to create a unique intersection (Li and Gavrilov, 2008), as the stations are sufficiently far away to separate the different seismic wave components. This method has a similar precision to using intersecting azimuths from two remote stations, which is enough to identify at which glacier the calving occurred, but not enough to localize the event within the glacier.
In seismology, another technique to locate the epicentre of seismic events uses differences in signal arrival times to create a hyperbola on which the epicentre lies. This was first used in Mohorovicic (1915), and Pujol (2004) notes that this method is best for shallow events where refraction along a bottom interface (glacier rock) is insignificant. Such a technique has not yet been applied to localizing calving. The aspect ratio (vertical/horizontal dimension) of Helheim Glacier is of order 0.1, and so calving events should be sufficiently shallow to use this technique. This method is limited by determining the relevant wave velocity. In our case, this is empirically determined by using hyperparameter optimization, also known as grid search (Bergstra et al., 2013). This involves exhaustively evaluating a product space of parameters to optimize some performance metric. In our case, we use a product space of surface velocity v_eff, x coordinate and y coordinate to minimize the total residual between the observed lags and the lags corresponding to each (v_eff, x, y). The hyperbolic method is then applied to calving events using the mean v_eff from the grid search to localize the epicentres of the seismic signals generated during calving events. The grid search is then repeated with a product space of just (x, y) with the mean v_eff from the first grid search, and these localizations are compared to the hyperbolas.

Data

Four broadband seismometers (HEL1: Nanometrics Trillium 120, HEL2-HEL4: Nanometrics Trillium 240) with sampling rates of 40 and 200 Hz were deployed around the mouth of Helheim Glacier (Fig. 1). HEL1 and HEL2 were deployed in August 2013, while HEL3 and HEL4 were deployed in August 2014 (Holland et al., 2017). They were synchronized with Coordinated Universal Time. These stations detected seismic activity from calving as well as distant earthquakes, so we first inspect the frequency distributions of the signals to isolate calving events.
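The frequency-based separation used below (calving energy at roughly 1-20 Hz, regional earthquakes below 1 Hz) can be sketched with SciPy, using the two-pole, zero-phase Butterworth band-pass that the text adopts. This is a minimal sketch on synthetic data; the 10 Hz and 0.3 Hz sinusoids merely stand in for calving and regional energy:

```python
import numpy as np
from scipy import signal

def bandpass(trace, fs, lo=2.0, hi=18.0, order=2):
    """Two-pole Butterworth band-pass applied forward and backward
    (zero-phase), matching the 2-18 Hz filter described in the text."""
    sos = signal.butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, trace)

fs = 200.0
t = np.arange(0.0, 10.0, 1.0 / fs)
# 10 Hz "calving-like" component plus a larger 0.3 Hz "regional" component
trace = np.sin(2 * np.pi * 10.0 * t) + 5.0 * np.sin(2 * np.pi * 0.3 * t)
filtered = bandpass(trace, fs)
# after filtering, essentially only the unit-amplitude 10 Hz component survives
```

Applying the filter forward and backward (`sosfiltfilt`) doubles the effective filter order but cancels the phase delay, so onset times are not shifted, which matters for the lag measurements that follow.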
A calving event that was observed in situ at Helheim in August 2014 (Fig. 2a) matches those of O'Neel et al. (2007), Richardson et al. (2010) and Amundson et al. (2012) very well, both in frequency distribution and shape, with an emergent onset and relatively high-frequency signals (1-20 Hz). In contrast, events from regional earthquakes have much lower-frequency signals (< 1 Hz). An M 5.2 regional earthquake in Bárðarbunga, Iceland on 1 September 2014 (Icelandic Meteorological Office record: http://en.vedur.is/earthquakes-and-volcanism/articles/nr/2947) (Fig. 2b) shows that the dominant frequencies received at the HEL seismometers are all well below 1 Hz. This means we can easily separate calving events from regional seismic activity by using a bandpass filter (Butterworth, two-pole and zero-phased). We use a bandpass filter between 2 and 18 Hz, based on the spectrogram in Fig. 2a, in order to maximize the signal-to-noise ratio. Using some threshold of STA/LTA counts, we are able to create a catalogue of 11 calving events on which to run our hyperbolic method algorithm. This ignores smaller calving events, which generally have amplitudes too small to easily identify a signal onset. Calving events, with the exception of events in January/February 2015 for which imagery is too snow-covered to use, are confirmed with local camera imagery and MODIS satellite imagery from the Rapid Ice Sheet Change Observatory (RISCO).

3 Localization methods and results

3.1 Hyperbolic method

After isolating the calving events, we apply the hyperbolic method to generate a catalogue of calving locations. A hyperbola can be geometrically defined as the locus (set of points) with a constant path difference relative to two foci, as seen in Fig. 3. In our case, each pair of seismometers acts as foci. We need two variables to determine the path difference: the signal-arrival time lag at each pair of seismometers, and the horizontal velocity of the surface waves.
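The relationship between these two variables and the hyperbola can be written down directly: the constant path difference 2a equals v_eff times the observed lag. A minimal sketch follows; the station coordinates and source location are hypothetical, chosen only to illustrate the geometry:

```python
import numpy as np

def predicted_lag(point, sta_a, sta_b, v_eff):
    """Arrival-time difference (s) between stations a and b for a source
    at `point`, assuming one uniform lateral wave speed v_eff (km/s).
    Negative means the wave reaches station a first."""
    point = np.asarray(point, dtype=float)
    d_a = np.linalg.norm(point - np.asarray(sta_a, dtype=float))
    d_b = np.linalg.norm(point - np.asarray(sta_b, dtype=float))
    return (d_a - d_b) / v_eff

# Hypothetical geometry (km): two stations 8 km apart, source nearer station a
sta_a, sta_b = (0.0, 0.0), (8.0, 0.0)
source = (2.0, 3.0)
v_eff = 1.20  # km/s, the mean value the text later derives
lag = predicted_lag(source, sta_a, sta_b, v_eff)
two_a = v_eff * abs(lag)  # constant path difference 2a of the hyperbola
# every point on the same hyperbola branch reproduces this lag
```

Sweeping candidate points and keeping those whose predicted lag matches the observed one traces out exactly the hyperbola branch described in the text.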
Assuming that the speed of seismic waves across Helheim does not vary horizontally, the signals from a calving event that happened exactly at the midpoint of the two seismometers (or at any other point along the perpendicular bisector of the two seismometers) would arrive simultaneously at the two seismometers. Similarly, if the event happened closer to HEL1, the seismic waves would arrive slightly earlier at HEL1, and the locus of possible calving locations would instead be the set of all points with a distance from HEL1 shorter than the distance from HEL2 by a fixed length. This length is 2a (Fig. 3), which is the product of the speed of the waves through the glacier (v_seismic) and the time lag in signal arrival (Δt), and is defined for a hyperbola with equation x^2/a^2 − y^2/b^2 = 1. We may use the time lag of the signal arrivals at the two seismometers (which become the foci) to determine the path difference of the signals and so form the locus. One of the curves (either the left or right in Fig. 3) may always be eliminated, as we know to which seismometer the event occurred more closely. Each time lag therefore generates one curve that intersects uniquely with the calving front, which will give the location of the calving. If the calving front is not known, the calving event can be triangulated using additional pairings of other stations. This method requires evaluating the time lag between the signal arrival times at each seismometer (Fig. 4) and obtaining the speed of the seismic waves through the glacier. As the surface waves travel over a topography unique to each glacier, we rename the variable v_eff, which is the effective speed of the seismic packet over the surface of Helheim Glacier under the above assumptions.

3.2 Identifying signal lags

To identify the time lag, we first try using a cross-correlation of the signals. For subpanels HEL2 and HEL4 in Fig.
4, cross-correlation gives 1.5 s, which is a plausible value by eye, but for subpanels HEL3 and HEL4, cross-correlation gives 2.2 s, which is not plausible by eye. The signals in Fig. 4 do look qualitatively different for HEL3 and HEL4, and it is possible that this is what prevents cross-correlation from generating an accurate lag time. Instead of using cross-correlation, we use an automated script that searches through the signal for the first instance of a raw waveform gradient exceeding 1.44 standard deviations of all point-wise gradients at each time step of 0.025 s for the total time window in Fig. 4. This value of 1.44 was empirically determined, as it produced the closest match to cross-correlation for signals that were qualitatively similar enough to use cross-correlation.

3.3 Determining seismic wave velocity with grid search

From particle motion plots (Fig. 5), we know these signals are dominated by surface waves. We assume that the seismic wave travels at the same lateral speed from the calving epicentre to each station. The dependence of wave speed on glacier depth is not important for this method as long as the effective (surface) lateral speed to each seismometer is the same in each direction. We also assume that the glacier surface, calving epicentre and seismometers are all coplanar, so that the hyperbolas can be kept two-dimensional for simplicity. In reality, there is some elevation difference between the seismometers and the glacier surface, though this distance (< 300 m) is so much shorter than the seismometer separation (> 6000 m) that refraction at the ice/rock boundary is likely negligible for characterizing the hyperbola. However, this method would become more precise with three-dimensional hyperboloids instead of two-dimensional hyperbolas.
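The gradient-threshold onset picker described above can be sketched as follows. The trace is synthetic; the 1.44-standard-deviation threshold is the empirically tuned value quoted in the text:

```python
import numpy as np

def pick_onset(trace, fs, k=1.44):
    """Time (s) of the first sample whose point-wise gradient exceeds
    k standard deviations of all point-wise gradients (k = 1.44 is the
    empirically determined value from the text)."""
    grad = np.abs(np.diff(np.asarray(trace, dtype=float)))
    first = int(np.argmax(grad > k * np.std(grad)))  # index of first exceedance
    return first / fs

# Synthetic 40 Hz trace: low-level noise, then a 5 Hz arrival at t = 30 s
rng = np.random.default_rng(1)
fs = 40.0
t = np.arange(0.0, 60.0, 1.0 / fs)
trace = 0.05 * rng.normal(size=t.size)
trace[t >= 30.0] += np.sin(2 * np.pi * 5.0 * (t[t >= 30.0] - 30.0))
onset = pick_onset(trace, fs)
# onset lands at, or just after, the 30 s arrival
```

Differencing the onsets picked at two stations then gives the lag that defines one hyperbola.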
We apply a grid search (hyperparameter optimization) to find the optimal (x, y, v_eff) that minimizes the sum of the residuals between the time lags that would occur at each (x, y) for that v_eff and the real observed time lags at each station. We parameterize between 1.00 < v_eff < 1.40 km s−1 (step size 0.01 km s−1) and the coordinate span of the entire map in Fig. 1 (step size 1 pixel) for our 11 identified calving events, and get a mean v_eff = 1.20 km s−1 with a standard deviation σ = 0.1 km s−1. The standard error for these 11 samples is therefore σ/√11 = 0.03 km s−1. For all further plots, we therefore use v_eff = 1.20 km s−1. We generate four hyperbolas, using HEL1-HEL2, HEL1-HEL3, HEL2-HEL4 and HEL3-HEL4, as these have the greatest distance of ice between the stations, because we require that the rock has a negligible contribution to the wave arrival times.

3.4 Localization results

Once we generate four hyperbolas, we may take their intersection to be an estimate of where the calving occurred. In Fig. 6, we show the progression of one calving event on 6 June 2015. From this, the main peak (blue), corresponding to the highest amplitude signal, is taken as a representative location for the entire event for the purposes of creating a catalogue of all events from August 2014 to August 2015. Applying this method to our entire catalogue of 11 calving events yields Fig. 7. We also re-run our grid-search method, this time with a fixed v_eff = 1.20 km s−1, as a check of our localization results.

The Cryosphere, 11, 609-618, 2017 (www.the-cryosphere.net/11/609/2017/)

Interpretation of results

The hyperbolic method and grid-search method give very similar localizations for calving events at Helheim. Qualitatively, Fig.
6 shows that calving propagates up-glacier, with an initial event near the calving front (red) and subsequent seismic signals originating from locations further up the glacier. The locations of events also diverge: after the second event (yellow), the third and fourth events (green and blue) move in opposing directions. Given that the calving front depicted in grey corresponds to one day before the calving event, the fact that the first event (red) is localized so close to the calving front is a good indicator that the event is localized correctly. Similarly, the year-long catalogue in Fig. 7 has events localized near the calving front. For example, the black event of 7 July 2015 is localized by both the hyperbolic method and the grid-search method immediately adjacent to the black calving front corresponding to 9 July 2015. Moreover, local camera imagery (Fig. 8) also shows substantial ice loss on 7 July 2015 on the southern half of Helheim Glacier. We are therefore confident that the hyperbolic method and grid-search method are valid methods to localize calving. Based on Fig. 7, calving appears to cluster in the northern portion of Helheim Glacier. This is consistent with the topography of the bedrock at Helheim (Fig. 9), where the northern half is on the order of ∼200 m deeper than the southern half (Leuschen and Allen, 2013). It is possible that the deeper the ice, the higher the freeboard of the ice front and the greater the stresses that affect the calving front. In Fig. 7, we see wider gaps between crevasses in the north of the glacier as compared to the south. This may also mean that the surface velocities are different in each half, which would affect the localization results. The topographic differences of both the glacier surface and the ice bottom may contribute to why we see calving primarily in the northern half of Helheim.
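The (x, y, v_eff) grid search described above can be sketched as follows. The station layout, grid extents, and step sizes are illustrative assumptions (the study searched the full map extent at 0.01 km s−1 velocity steps); the synthetic event is placed on the grid so the search can recover it exactly:

```python
import numpy as np
from itertools import combinations

def grid_search_locate(stations, onsets, v_grid, x_grid, y_grid):
    """Exhaustively evaluate (v_eff, x, y) and return the triple whose
    predicted inter-station lags best match the observed onset-time
    differences (sum of absolute residuals over all station pairs)."""
    pairs = list(combinations(sorted(stations), 2))
    observed = {(a, b): onsets[a] - onsets[b] for a, b in pairs}
    best, best_cost = None, np.inf
    for v in v_grid:
        for x in x_grid:
            for y in y_grid:
                p = np.array([x, y])
                dist = {n: np.linalg.norm(p - stations[n]) for n in stations}
                cost = sum(abs((dist[a] - dist[b]) / v - observed[(a, b)])
                           for a, b in pairs)
                if cost < best_cost:
                    best, best_cost = (v, x, y), cost
    return best, best_cost

# Hypothetical station layout (km) and a synthetic event at (3, 4), v = 1.2 km/s
stations = {"HEL1": np.array([0.0, 0.0]), "HEL2": np.array([8.0, 0.0]),
            "HEL3": np.array([0.0, 6.0]), "HEL4": np.array([8.0, 6.0])}
true_point, true_v = np.array([3.0, 4.0]), 1.20
onsets = {n: np.linalg.norm(true_point - s) / true_v
          for n, s in stations.items()}
(v, x, y), cost = grid_search_locate(
    stations, onsets,
    v_grid=np.arange(1.0, 1.41, 0.05),
    x_grid=np.arange(0.0, 8.01, 0.5),
    y_grid=np.arange(0.0, 6.01, 0.5))
```

Fixing `v_grid` to the single value 1.20 km s−1 reproduces the (x, y)-only re-run the text uses as a check.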
It is possible to constrain the fault size of the rupture caused by calving. Using a shear model from Brune (1970), the radius r_0 of a circular fault is inversely proportional to the corner frequency f_c of an S wave and is given by

r_0 = K_c β_0 / (2π f_c),

where β_0 is the shear velocity and K_c is a constant, equal to 2.34 for Brune's source model (Gibowicz and Kijko, 2013). From Fig. 10, the corner frequency is approximately bounded between 5 and 10 Hz. Taking a Poisson ratio of 0.3 for ice (Vaughan, 1995), the ratio of the Rayleigh-wave velocity to the S wave velocity is approximately 0.930 (Viktorov, 1970), giving a value of β_0 = 1.29 km s−1. For this rough calculation, we assume that the corner frequency is the same for the Rayleigh and S waves. This bounds the fracture size of the calving event between 48 and 96 m. Brune's relationship does not depend on properties of the material such as the effective stress σ or rigidity µ. Our range of 48-96 m is considerably smaller than a typical observed calving fracture, by around one order of magnitude. A fracture size of order 1 km would require a corner frequency of order 0.1-1 Hz, which we do not observe. 100 m is more on the order of a crevassing event; crevassing also occurs during and before calving events, so it is possible that crevassing continues to happen during the calving event and obscures the power spectrum seen in Fig. 10. Both basal crevassing (e.g. Murray et al., 2015b; James et al., 2014) and surface crevassing (e.g. Benn et al., 2007) have been suggested as calving mechanisms. Basal crevassing may be a more plausible explanation for Helheim, as Murray et al. (2015b) found that buoyant flexure via basal crevasses was the dominant cause of calving at Helheim in 2013. Our estimated rupture sizes using Brune's model could plausibly be the size of either and, as our method assumes a planar glacier surface, we cannot distinguish whether the crevassing is at the base or the surface.
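The rough fault-size calculation above can be reproduced numerically with the constants quoted in the text:

```python
import math

KC = 2.34              # Brune (1970) source-model constant
RAYLEIGH_TO_S = 0.930  # Rayleigh/S-wave velocity ratio for a Poisson ratio of 0.3
V_RAYLEIGH = 1.20      # km/s, mean surface-wave speed from the grid search

beta0 = V_RAYLEIGH / RAYLEIGH_TO_S  # inferred S-wave speed, ~1.29 km/s

def brune_radius_m(f_corner_hz, beta_km_s):
    """Circular-fault radius in metres: r0 = Kc * beta0 / (2 * pi * fc)."""
    return KC * beta_km_s * 1000.0 / (2.0 * math.pi * f_corner_hz)

r_at_5hz = brune_radius_m(5.0, beta0)    # upper bound, ~96 m
r_at_10hz = brune_radius_m(10.0, beta0)  # lower bound, ~48 m
```

Because f_c appears in the denominator, the higher corner frequency gives the smaller radius, which is why the 5-10 Hz band in Fig. 10 maps to the 48-96 m range.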
Discussion of methods

The hyperbolic method described in this paper offers some benefits over traditional seismic location techniques, which are more suited for regional seismic arrays that can distinguish between the different seismic wave types (e.g. O'Neel et al., 2007). Moreover, regional arrays do not give the kind of precision that local arrays have, as small errors on a regional azimuth translate to a large area of uncertainty on the local glacier surface. The hyperbolic method takes advantage of the stations' proximity to calving events and does not require separating out the different wave phases, thus sidestepping the P wave identification problem that hampered the localization techniques of Amundson et al. (2008) and Richardson et al. (2010).

The method also offers advantages over traditional calving detection methods, which require the use of a local camera and/or satellite data to visually confirm that calving took place. As seen in Amundson et al. (2010, 2012), calving generates a characteristic seismic signal (Fig. 2) that is easily distinguishable from signals from regional earthquakes. This is likely because higher frequency signals from regional earthquakes are attenuated by the time they reach the seismometers. This allows seismometers to be used to monitor glaciers and quickly identify calving when power in the 2-18 Hz range exceeds some ratio above the ambient noise. Importantly, this monitoring could take place year-round, during the night and also on cloudy days, making it a helpful addition to locating calving alongside satellite imagery, camera imagery and radar monitoring.

The seismic signals detected during calving events are clearly dominated by surface waves. Particle plots (Fig. 5) show the characteristic elliptical shape of a Rayleigh wave. The Rayleigh waves, which are in theory parallel to the vertical axis, appear slanted in Fig.
5. It is possible that the mix of different wave phases (e.g. Love waves, also a surface wave) has interfered with the Rayleigh wave such that it is no longer parallel to the vertical axis. There is also a lack of the linear polarization that would be expected for a P wave. Our estimated S wave velocity, using a Poisson ratio of 0.3, is 1.29 km s−1 from above. This is lower than the 1.9 km s−1 for S waves in pure ice that Kohnen (1974) found. It is possible that this is due to the anisotropy of the glacier surface, such that the ice is cracked and the seismic waves do not travel through pure ice. Given our characteristic surface wave velocity of the order of 1 km s−1, with frequencies of the order of 10 Hz (see Fig. 2), this corresponds to a surface wavelength of order 100 m. This is small enough to be affected by crevasses along the surface of the glacier, which are of similar depths (Bassis, 2011). This means that we can reasonably expect these crevasses to affect the seismic wave velocity, which could slow the S waves and surface waves, making our surface wave speed of 1.20 km s−1 a plausible value.

Because we are only working with surface waves, our localization technique is limited to just the epicentre of a calving event, with no suggestion of a focal depth. This means we could not distinguish between basal and surface crevassing, even though we could estimate a rupture size in the previous section. Moreover, we have assumed a planar ice front for simplicity. It is possible that this method could be extended to determine the depth at which calving (or crevassing) occurs by using a 3-D hyperboloid instead of 2-D hyperbolas.
The calculation method we have used ignores the presence of the rock between the glacier and the seismometers, as the proximity of the seismometers to the glacier means that the time taken for the wave to propagate through rock is negligible. Our method does not take into account the refraction at the ice-rock interface. Because the ice dominates the wave path from the source to the seismometers, we assume that the refraction has a negligible effect on the trajectory of the surface waves.

The main source of error comes from identifying the signal onset. Picking out the signal onset is not fully automated, because it requires setting a gradient threshold manually, or manually checking the plausibility of cross-correlation results. Local stations that are right by the calving front are subject to much more noise than regional arrays. While some of the noise can be filtered out, a lot of the noise still occurs in the 2-18 Hz range, which also contains most of the power from the calving signal. Moreover, as the calving events occur between the stations, the signals that arrive at each station come from different directions and may not necessarily be similar in shape. As a result, cross-correlation does not always work for determining lags. We cannot cross-correlate the envelopes, as this would lose resolution of the lags (the envelope is of order 5 s in Fig.
4, but we have lags of order 1-3 s, and even a 0.5 s shift would dramatically change the hyperbola). Our empirical method of using gradients is not rigorous, as it requires manual confirmation; this means the error is difficult to quantify, as the true signal onset time is not known. However, the v_eff of the surface waves can be estimated using a grid-search method, giving plausible results. With more calving detections, the standard error of the optimized v_eff value will decrease. As cross-correlation does work for some events, with a sufficiently large number of calving events we may simply discard events that do not cross-correlate correctly. This would make it possible to create an event catalogue using only automated methods.

Conclusions

Our results show that calving can be localized with local seismic stations. We find that the local seismic signals are dominated by surface waves, and that the differences between these signal onsets can be used to localize calving. This offers an alternative to regional arrays, which can distinguish different wave phases but have a lower resolution of localization. Identifying the signal onsets can be automated, but still requires manual confirmation of results. Further study should be done to determine why cross-correlation only works for a subset of the events. With three or more seismometers, calving events can be detected and triangulated even without any satellite or camera imagery. Our catalogue of calving events at Helheim suggests that in the 2014-2015 season, calving typically initiated at the northern half of the calving front, which will help to constrain model simulations of glacier dynamics at Helheim. This technique can be applied to localize calving events at other glaciers.

Data availability

The data used in this study are publicly available at doi:10.5281/zenodo.293016.

Competing interests. The authors declare that they have no conflict of interest.

Figure 2.
Spectrograms for (a) a calving event at Helheim on 12 August 2014, and (b) a regional earthquake in Bárðarbunga, Iceland on 1 September 2014. The easting amplitude of the seismometers is used for both events. The seismogram (top) and spectrogram (bottom) of each event share the same time axis for direct comparison. The spectrograms have a window size of 256 points (= 6.4 s).

Figure 3. An example of a hyperbola of equation x^2/a^2 − y^2/b^2 = 1, with foci at F_1 and F_2 and constant path difference |d_2 − d_1| = 2a. b can be generated from b^2 = c^2 − a^2, where 2c is the known distance between the foci.

Figure 4. Seismic signals for a calving event at Helheim Glacier on 26 January 2015. The signal onset times are determined using an automated script that searches for the first instance of a gradient exceeding a particular threshold, as defined in Sect. 3.2. The differences in the wave onset times are then used to generate a characteristic path difference for each hyperbola.

Figure 5. Particle plots of seismic wave arrivals for the calving event of 7 July 2015, split into radial and transverse components. The characteristic elliptic shape of the surface Rayleigh wave is clearly visible in the radial component of the particle plot.

Figure 6. The calving event from 6 June 2015, with the localizations (top panel) and the easting amplitudes of seismometer HEL1 (bottom panel) showing several sub-events. X indicates locations derived from a grid search through a lattice of all points on the map with a fixed v_eff = 1.20 km s−1.

Figure 7. Catalogue of all calving events with clear signal onsets at Helheim Glacier from August 2014 to August 2015, overlaid on Landsat-8 imagery of Helheim Glacier. Each colour corresponds to a calving event, with only the area of overlap of the four hyperbolas being depicted. The x's represent the same events located using a grid-search technique.

Figure 8.
Local camera imagery for the calving event from 7 July 2015. The blue line indicates the calving front from the last image taken before the calving event, and the black line indicates the first image taken after the calving event. Images are taken every hour. The position of the camera is given in Fig. 1.

Figure 9. The calving events from Fig. 7 overlain with the bedrock topography from the Multichannel Coherent Radar Depth Sounder (MCoRDS) L3 data set from NSIDC (Leuschen and Allen, 2013), with the calving front from 9 July 2015 in black. The topography is collated and averaged from 2008 to 2012.

Figure 10. A typical power spectrum for a calving event (13 August 2014), for a 3 s time window containing the highest peak amplitude of the event. The shaded inset in the top panel shows a zoomed-in view of this window.
Natural speech processing: An analysis using event-related brain potentials

In two experiments, event-related brain potentials were collected as subjects listened to spoken sentences. In the first, all words were presented as connected (natural) speech. In the second, there was a 750-msec inter-stimulus interval (ISI) separating each of the words. Three types of sentence-ending words were used: best completions (contextually meaningful), unrelated anomalies (contextually meaningless), and related anomalies (contextually meaningless but related to the best completion). In both experiments, large N400s were found for the unrelated and related anomalies, relative to those found for the best-completion final words, although the effect was earlier and more prolonged for unrelated anomalies. The auditory N400 effect onset earlier in the natural speech experiment than it did in either the 750-msec ISI experiment or previous visual studies.

The role of context in language processing continues to be a central topic in much of the psycholinguistic literature (e.g., Ferreira & Clifton, 1986; Fischler & Bloom, 1985; Schustack, Ehrlich, & Rayner, 1987; Tyler & Marslen-Wilson, 1982; Van Petten & Kutas, 1990). While there is considerable debate as to the range and source of contextual influences, particularly with regard to word recognition, there are few theorists who would deny that prior linguistic information plays at least some role in the processing of currently presented words. However, until very recently, the effects of context on spoken-language processing have been largely ignored (see Tyler & Frauenfelder, 1987). Early psychological models of language comprehension processes were primarily concerned with reading, and they assumed that the same set of operations is involved in spoken-language comprehension (e.g., Forster, 1979; Morton, 1969).
The reasons for this concentration on written language are at least partially practical, in that visual stimuli are easier to present, control, and manipulate. However, from another perspective, the concentration on reading is curious, in that spoken language has certainly had more time to have an impact on the evolution of the language processing system and continues to be the most widely used language medium (e.g., Walker, 1987). A number of researchers have recently attempted to rectify this imbalance by focusing their efforts on deriving models of spoken-language processes (e.g., Cole & Jakimik, 1980; Elman & McClelland, 1987; Marslen-Wilson, 1987). Marslen-Wilson has proposed one of the most widely cited theories of spoken-language processing. In his cohort model, incoming acoustic-phonetic information from the beginning of a word makes contact with all of the compatible entries in the listener's lexicon. Marslen-Wilson refers to this set of activated, or "accessed," entries as the word-initial cohort. Over time, as more and more information becomes available, the initial cohort is pruned until a single entry remains, at which time the word is said to be "selected." It is only after selection that semantic and syntactic information stored with the lexical entry becomes available for "integration" with the ongoing discourse. Part of the appeal of the cohort model has been that it makes strong predictions about the time course of spoken-word processing. The point in time at which a word presented in isolation is selected (recognized) will be determined not by its absolute duration, but rather by the existence (or more accurately, nonexistence) of other words that share the same initial sounds. Marslen-Wilson and colleagues (Marslen-Wilson & Tyler, 1980; Marslen-Wilson & Welsh,
1978) have demonstrated that subjects' response latencies to words spoken in isolation are best predicted by each word's point of uniqueness: the first point at which the incoming acoustic-phonetic pattern is different from all but one word. However, evidence from tasks in which target items were embedded in sentences has shown that selection can occur even earlier than predicted by the point of uniqueness (in one study, the average recognition time was 200 msec for words in a sentence context; see Marslen-Wilson & Tyler, 1980). This finding supports the hypothesis that spoken-word processing is facilitated by sentence-level contextual factors.

Copyright 1991 Psychonomic Society, Inc.

One problem with the conclusions from the Marslen-Wilson studies is that the techniques he and others have used (including word monitoring, shadowing, lexical decision, and gating) are inherently intrusive and may thus require a different strategy or mode of word processing than that used in normal spoken-language comprehension. In addition, the dependent variables used by these researchers (primarily reaction time) usually yield information several hundreds of milliseconds after presentation of the critical stimulus, thus making them relatively "offline" measures of the cognitive processes under study. The goal of the first experiment to be reported here was to engage subjects in a more natural spoken-language task and to unobtrusively monitor ongoing linguistic processes in real time. This was accomplished by recording event-related brain potentials (ERPs) to the onset of spoken words placed in a sentence context.

EVENT-RELATED POTENTIALS

By placing electrodes on the scalp of human subjects, it is possible to record the ongoing electrical activity of the brain. ERPs are stimulus-locked perturbations in this activity, which have been demonstrated to be sensitive to both sensory and cognitive processes (see Regan, 1989, for a review).
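Before turning to the ERP studies, the point-of-uniqueness computation described earlier can be made concrete with a small sketch. This is a toy illustration only: the miniature lexicon is invented, and letters stand in for acoustic-phonetic segments.

```python
# Toy sketch of the cohort model's "point of uniqueness": prune the
# word-initial cohort segment by segment until one candidate remains.
# The miniature lexicon is invented; letters stand in for phonemes.

def uniqueness_point(word, lexicon):
    """Return the 1-based segment position at which `word` becomes
    the only remaining member of its word-initial cohort."""
    cohort = set(lexicon)
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        cohort = {w for w in cohort if w.startswith(prefix)}
        if cohort == {word}:
            return i
    return len(word)  # not unique before its final segment

lexicon = ["trespass", "tress", "trestle", "trek", "tread"]
print(uniqueness_point("trespass", lexicon))  # 5 ("tresp" excludes all others)
print(uniqueness_point("trek", lexicon))      # 4 (unique only at its final segment)
```

As the sketch shows, recognition latency under this model depends on the competitor set rather than on a word's absolute duration.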
In several recent reports, ERPs have been used to study the effects of context on linguistic stimuli. A number of these studies have reported changes in a negative component (the individual waves that make up an ERP are referred to as components), which onsets as early as 200 msec post-stimulus onset and peaks near 400 msec. Kutas and her colleagues (e.g., Kutas & Hillyard, 1980) have reported that this "N400" component is large (more negative) to sentence-final words that are anomalous (e.g., "He takes cream and sugar in his attention") and is small or nonexistent to highly probable "best-completion" sentence endings (e.g., "He takes cream and sugar in his coffee"). In contrast, manipulation of the physical parameters of the final word (e.g., using a different type font) results in a variability in an ERP positivity that peaks around 600 msec. In a related study, Kutas, Lindamood, and Hillyard (1984) demonstrated that N400 amplitude was a monotonic function of the cloze probability of sentence-final words; N400 was greater in amplitude to less predictable words (e.g., "Captain Sheir wanted to stay with the sinking raft") and smaller to more predictable words (e.g., "She called her husband at his office"). In a second experiment, Kutas et al. replicated an earlier reaction time study by Kleiman (1980). Kleiman demonstrated that sentences with best-completion endings (e.g., "The king of beasts is the lion") are responded to faster than are sentences with related anomalous endings (e.g., "The king of beasts is the roar"; roar is semantically related to lion), which are in turn responded to faster than are sentences with completely anomalous endings (e.g., "The king of beasts is the work"). In their ERP study, Kutas et al. demonstrated that N400 amplitude followed a similar pattern: it was largest to unrelated anomalous sentence endings, was next largest to anomalous but related endings, and was smallest to best-completion endings.
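Cloze probability, used above to characterize sentence endings, is simply the proportion of norming respondents who complete a sentence frame with a given word. A minimal sketch, with invented response counts:

```python
# Toy sketch of cloze probability: the proportion of norming
# respondents who complete a sentence frame with a given word.
# The response counts below are invented for illustration.
from collections import Counter

def cloze_probability(completions, word):
    """Proportion of completions that match `word`."""
    return Counter(completions)[word] / len(completions)

# 20 hypothetical completions of "He takes cream and sugar in his ___":
responses = ["coffee"] * 17 + ["tea"] * 3
print(cloze_probability(responses, "coffee"))     # 0.85 (a "best completion")
print(cloze_probability(responses, "attention"))  # 0.0 (an anomalous ending)
```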
As pointed out by Kutas et al., the finding of faster responses and smaller N400s to related but anomalous endings, relative to unrelated anomalies, is consistent with a spreading-activation model of final-word processing. According to this view, the semantic context provided by the sentence (prior to the final item) primes the lexical entry for the best-completion ending, which in turn spreads activation to semantically related items, some of which may be anomalous in the context of the sentence. The spread of activation facilitates the processing of this otherwise contextually anomalous item. N400 was intermediate in amplitude presumably because these items received some priming due to spreading activation (this tends to reduce N400 amplitude), but they were anomalous in the context of the sentence (this tends to increase N400 amplitude). The above studies used visual letter strings as stimuli. Recently, Holcomb and Neville (1990) compared and contrasted semantic priming in the visual and auditory modalities. Subjects participated in two versions (one visual, one auditory) of a lexical decision task in which stimuli were word pairs consisting of a prime word followed by a semantically related word, an unrelated word, or a nonword. N400s were larger to unrelated words than to related words in both modalities. However, this ERP "priming effect" began earlier in the auditory modality than in the visual modality. In addition, the distribution over the scalp differed in the two modalities, with the visual priming effect slightly larger over the right hemisphere and with the auditory effect slightly larger over the left hemisphere. Holcomb and Neville concluded that there may be overlap in the priming processes that occur in each modality but that these processes are not identical.
In particular, they noted that the earlier onset of the N400 in the auditory modality was consistent with the Marslen-Wilson (1987) view that auditory word processing can begin prior to the arrival of all of the acoustic information in a spoken word and that the time course of this processing can be influenced by semantic properties of a prior word (i.e., context). McCallum, Farmer, and Pocock (1984) performed an auditory replication of the original Kutas and Hillyard (1980) visual sentence study. They included sentences spoken by a male with best-completion endings (e.g., "The pen was full of blue ink"), semantically anomalous endings (e.g., "At school he was taught to snow"), and best-completion endings that had a physical deviation (the final word spoken by a female). As in previous visual studies, ERPs to the anomalous endings produced a negative component (peak latency 456 msec), whereas ERPs to appropriate endings produced a relatively flat response between 200 and 600 msec and ERPs to physically deviant endings produced a large positivity in this latency band. McCallum et al. noted that while their auditory N400s had a similar scalp distribution to those recorded by Kutas and Hillyard (1980) in the visual modality, they were also somewhat less peaked.

EXPERIMENT 1

Given the findings of Holcomb and Neville (1990) of a significantly earlier onset of the auditory N400 effect for word pairs, it is curious that McCallum et al. (1984) did not notice a similar early onset in their sentence study. Although McCallum et al. did not systematically examine the time course of the N400 (other than to note that the peak was somewhat later than in the visual modality), examination of their figures does not suggest much of a difference from previous visual studies (e.g., Kutas & Hillyard, 1980). One possibility for the apparent contradiction between the Holcomb and Neville study and the McCallum et al. work is that McCallum et al.
did not use natural connected speech stimuli for the words of interest. Rather than using the actual final words spoken at the end of each experimental sentence, these investigators spliced in final words spoken in other sentence contexts. Although this seems like a reasonable procedure for looking at the "pure" effects of semantic context on the processing of sentence-final words, it might have introduced conflicting or contradictory nonsemantic contextual cues that are normally part of connected natural speech, such as prosodic or across-word coarticulatory factors. Prosody refers to the rhythmic patterns that occur both within words (lexical prosody) and across the words of an utterance (metrical prosody; see Cutler, 1989). From the listener's perspective, coarticulation refers to changes in speech sounds that are due to the influence of sounds coming prior to and after the current sound. For example, the a in cat is pronounced differently from the a in hat because of the different influences of the h and c sounds on the a sound. Likewise, the c in cat is pronounced differently from the c in cold due to the differential influence of the following a and o sounds. Coarticulatory factors also play a role across word boundaries in natural speech. So, for example, the final sounds in her (as in the sentence "The bird built a nest in which to lay her ...") will differ depending on whether the next word is nest or whether it is car. It seems reasonable that the onset of the N400 effect may have been delayed in the McCallum et al. study because of inconsistent across-word cues between the penultimate and sentence-final words. The primary purpose of the first experiment to be reported here was to replicate and extend the McCallum et al. study by using more natural speech with normal semantic and nonsemantic cues and to more closely examine the time course of the ERP differences between the various types of sentence-final words.
One prediction was that the inclusion of nonsemantic speech cues would shift the temporal distribution of the N400 effect earlier than in the McCallum et al. study. A second and related purpose of this experiment was to examine the generality of the finding in the visual modality that anomalous words that are related to best-completion words produce smaller N400s than do unrelated anomalous words. Finally, a considerable amount of theorizing and speculation has been directed at the problem of speech segmentation (e.g., Cole & Jakimik, 1980; Cutler, 1987; Frazier, 1987; Pisoni & Luce, 1987). It has been known for many years that the speech signal is not composed of a series of discrete words, but rather is a continuous flow of sound with few breaks or pauses (e.g., Pisoni & Luce, 1987). Two important questions arise from this observation: (1) How do listeners segment the complex spoken signal into individual units? and (2) What are the basic units of speech perception? The ERP technique permits the recording of brain-wave activity time locked to the onset of each word in a sentence. Numerous studies have shown a consistent pattern of early "sensory" components in ERPs recorded to isolated nonlinguistic (e.g., Picton, Hillyard, Krausz, & Galambos, 1974) and linguistic (e.g., Hansen, Dickstein, Berka, & Hillyard, 1983; Holcomb & Neville, 1990) sounds. A reasonable preliminary question then is, Do words in connected speech generate a consistent pattern of sensory ERP components?

Method

Subjects. Twelve young adults (7 male, 5 female; mean age = 21.6 years) were paid $5 per hour to serve as subjects. All were right-handed (three had at least one left-handed relative in the immediate family) and native speakers of English.

Stimuli and Procedure. The stimuli for Experiment 1 were generated from 135 highly constrained sentences (final-word cloze probability > 0.8), ranging from 6 to 13 words in length.
Forty-five of the sentences were randomly selected to be used in the best-completion condition. These sentences were all completed with a final word that fit with the previous context of the sentence (e.g., "December is the last month of the year"). Forty-five other sentences were randomly selected to be used in the unrelated-anomaly condition. In these sentences, the best-completion final words were replaced with words that made no sense given the prior context (e.g., "The bird built a nest in which to lay her cars"). The remaining 45 sentences were selected to be used in the related-anomaly condition. In these sentences, best-completion final words were replaced with semantically related words that were anomalous given the sentence context (e.g., "The sink was so clogged they called a pipe"). Unrelated anomalies and related anomalies had a cloze probability of zero. They also always maintained the inflection of the original best-completion endings and, with the exception of four items in each condition, did not share any word-initial sounds with the original best-completion words. There were no significant differences between the three conditions in the duration of final words (mean = 561 msec, range 318 to 901 msec), nor were there any differences in the number of words in the stem sentences or the contextual constraint of the stems (as measured by cloze probability to the original sentences). There were no significant differences in log frequency between the best completions (log frequency = 1.77) and unrelated anomalies (log frequency = 1.62) (F = 0.58); however, related anomalies (log frequency = 1.42) were significantly less frequent than were best completions and unrelated anomalies [F(2,132) = 3.43, p < .035]. Each sentence was spoken by a female member of our research team at a normal speaking rate (mean = two words per second) and with normal pitch and intonation.
Sentences were first recorded on analogue tape and were then digitized (12 kHz, 12-bit resolution) and broken up into word-sized pieces, which were stored in separate disk files. Using this technique, the onset envelope of each word was aligned with the beginning of its digital disk file. Pauses and natural silent periods between words were maintained by placing them (when they occurred) at the ends of files (note that this procedure results in all of the information in each sentence being preserved in the digital representation). During the experiment, the stimulus presentation software reassembled each sentence in real time (through the use of a circular buffer) and, simultaneous with the onset of each word, a code was output to the computer, which digitized the EEG. This procedure resulted in natural-sounding sentences that were indistinguishable from those recorded on analogue tape. The experiment was self-paced, each trial beginning when the subject pressed a button on a small panel resting in his/her lap. Two seconds later the outline of a white rectangle appeared on a video monitor in front of the subject (6° × 3°). The subject was told not to move or blink during the time the rectangle was on the screen. One second after the onset of the rectangle, the sentence was played (binaurally over headphones, 60 dB SL). The rectangle was turned off 1.5 sec after the end of the last word of the sentence and was replaced by a message to respond. The subject pressed a button indicating whether or not the sentence made sense. Because the response was delayed 1.5 sec (to prevent the motor response from contaminating the ERP to the final word), accuracy of response, rather than speed, was emphasized. Response hand was counterbalanced across subjects. Each subject engaged in 10 practice trials prior to the run of 135 experimental sentences. The three types of sentence endings (best completions, unrelated anomalies, and related anomalies) were randomly intermixed.
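The word-by-word reassembly just described can be sketched in simplified form: per-word sample buffers are concatenated in order, and the sample index at which each word begins is converted to a time stamp for time-locking the EEG. The buffer contents and durations below are invented placeholders; the real system streamed 12-kHz audio through a circular buffer and emitted an onset code to the EEG computer at each word onset.

```python
# Simplified sketch of sentence reassembly from word-sized sample
# files. Each word's onset is aligned with the start of its buffer,
# and trailing silences were stored at the end of the preceding file,
# so straight concatenation reproduces the natural timing. The buffer
# contents and durations are invented, not real speech samples.

SAMPLE_RATE = 12_000  # 12 kHz, as in the study

def assemble_sentence(word_buffers):
    """Concatenate word buffers; return (samples, word-onset times in msec)."""
    samples, onsets_ms = [], []
    for buf in word_buffers:
        onsets_ms.append(len(samples) * 1000 / SAMPLE_RATE)
        samples.extend(buf)  # an onset code would be emitted here in real time
    return samples, onsets_ms

# Three fake "words" lasting 0.5 s, 0.25 s, and 0.5 s:
words = [[0] * 6000, [0] * 3000, [0] * 6000]
stream, onsets = assemble_sentence(words)
print(onsets)  # [0.0, 500.0, 750.0]
```

Because silences ride at the end of the preceding file, the logged onsets fall exactly where each word begins in the continuous stream.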
ERP recording. The EEG was recorded from 14 scalp electrodes including 8 placed over standard International 10-20 System locations (left and right occipital [O1, O2], posterior temporal [T5, T6], frontal [F7, F8], and the midline [Cz, Pz]) and 6 nonstandard locations (Wernicke's region and the right hemisphere homologue [WL and WR], left and right temporal [TL and TR], and anterior temporal left and right [ATL and ATR]), attached to an elastic cap (Electro-Cap) and referenced to linked mastoids. The electrooculogram (EOG) was recorded from an electrode attached beneath the left eye (mastoid reference) and from a bipolar montage of electrodes attached just lateral to the two eyes. All impedances were maintained below 5 kOhm. Grass 7P511 amplifiers (bandpass 0.01 to 100 Hz) were interfaced to a 16-channel 12-bit A/D converter, and the EEG was digitized on line and stored on digital tape. Off line, separate ERPs (100-msec pre-stimulus baseline) were averaged for each subject at each electrode site from trials free of EOG artifact. For the sentence-final words, a separate average was made for each condition (best completions, unrelated anomalies, and related anomalies). Only trials on which the subject responded correctly were included. Separate ERPs were also formed for all middle words from each of the sentences (first and final words were excluded). However, due to excessive eye artifact to middle words, 2 subjects' data were removed from middle-word analyses. ERPs to the final words were quantified by measuring the mean amplitude in several latency windows (200-500 msec, 500-1,000 msec, and 1,000-1,400 msec) and by measuring the peak latency and amplitude of the N1 component (90-200-msec window). To look at the time course of the onset of the N400 effect (the point in time at which the best completions and the two anomalies differentiate), a series of smaller epoch measures were taken.
These included nine sequential 50-msec mean amplitude measurements between stimulus onset and 450 msec. Repeated measures analyses of variance (ANOVAs) were used to analyze all measures. Factors in the analysis included electrode site (frontal, anterior temporal, temporal, parietal, posterior temporal, and occipital), hemisphere, and, for final words, condition (best completions vs. unrelated anomalies vs. related anomalies). In cases where specific predictions were made for final words, the overall ANOVA was followed up with a series of planned pairwise comparisons contrasting each of the three conditions. The correction recommended by Geisser and Greenhouse (1959) was applied to all variables with greater than two levels.

Results

Behavior. The subjects reported no difficulty in understanding the sentences and were equally accurate at judging whether or not each of the three sentence types made sense (97% for each condition).

ERPs from middle words. Plotted in Figure 1 are the ERPs from the average of all the middle words in the sentences. There is a very early positivity that appears to begin prior to stimulus onset and peaks at 45 msec post-stimulus onset (P45). P45 can be seen at all sites, but is smaller at the most posterior sites (T5, T6, O1, O2). This observation was confirmed by the ANOVA on P45 peak amplitude [main effect of electrode site, F(5,45) = 5.63, p < .0004]. The analysis of the latency of P45 indicated that this component peaked slightly earlier in the left hemisphere [40 vs. 50 msec; main effect of hemisphere, F(1,9) = 5.58, p < .043]. Because of its pre-stimulus onset, this component may reflect the summation of activity from the prior stimulus and the P45 component of the time-locked stimulus. A prominent feature early in the waveforms was a negative component that peaked near 140 msec (N140).
Although N140 can be seen as far posterior as the WL/WR sites, it was largest at the most anterior electrode locations (frontal, anterior temporal, and temporal sites) and was virtually nonexistent at the most posterior sites for both middle words [main effect of electrode site, F(5,45) = 17.8, p < .0001] and final words [F(5,55) = 15.72, p < .0004]. This scalp distribution is consistent with that observed to other auditory stimuli and suggests that this component belongs to the N1 family of negativities reported in numerous previous studies (Regan, 1989). Following N140 was a positive-going component that peaked near 200 msec (P200). Although positive-going, the P200 to the middle words did not cross the zero baseline. P200 was most apparent at the most anterior electrode positions and was small or nonexistent at the most posterior sites [main effect of electrode, F(5,45) = 3.78, p < .043]. This positivity seems most likely to be related to the P2 component frequently reported in auditory studies (see Regan, 1989). Following P200, there was a slow negative shift that continued to increase in amplitude up to the end of the recording epoch at the three anterior sites but was more peaked at the three posterior locations. At all sites, the right hemisphere was more negative than the left [200-700-msec measure; main effect of hemisphere, F(1,9) = 6.70, p < .029]. In the post-600-msec epoch, the anterior sites continued to show a slow negative wave, whereas the more posterior sites displayed a positive shift that was larger over the left hemisphere [700-1,400-msec measure; main effect of hemisphere, F(1,9) = 13.01, p < .006].

ERPs from final words. Figure 2 displays the ERPs elicited by the final words of each of the three sentence types. The final words also elicited an N140 component that was largest over anterior sites.
Moreover, analyses of the differences between the two hemispheres indicated that the N140 was more negative over the left hemisphere [main effect of hemisphere for final words, F(1,11) = 9.31, p < .011]. As seen in Figure 2, the ERPs in the best-completion condition tended to go positive after the peak of N140, whereas ERPs in the unrelated-anomaly condition and, to a lesser extent, the related-anomaly condition continued to go negative for up to an additional 700 msec. At most sites, this negative-going wave was quite broad, but, at a few sites, it had a discernible peak around 300 or 400 msec. Its time course and response to final-word conditions suggest that this wave was related to the N400 component. By 600 msec, all but the most anterior sites (F7, F8, ATL, ATR) crossed the baseline to become positive and remained positive until the end of the recording epoch. Like the preceding negativity, this late positivity had a broad base, but, at the more posterior locations, there was a consistent peak between 800 and 1,000 msec in all three conditions. This pattern, together with the requirement that subjects respond to final words, suggests that this positive wave was related to the P3 component (see Regan, 1989). The large negativity seen in the unrelated-anomaly and related-anomaly conditions and the positivity seen in the best-completion condition were quantified with an average area measure between 200 and 500 msec. The ANOVA on this measure confirmed that there was a main effect of conditions [F(2,22) = 15.72, p < .0001]. Planned contrasts revealed that waveforms in the unrelated-anomaly and related-anomaly conditions were more negative than those in the best-completion condition [F(1,11) = 21.35, p < .0007, and F(1,11) = 16.74, p < .0018, respectively], but that those in the unrelated-anomaly condition were only marginally more negative than those in the related-anomaly condition [F(1,11) = 3.87, p < .075].
A significant condition × electrode site interaction [F(10,110) = 6.08, p < .0002] indicated that the differences between best completions and the two anomalous endings were largest over occipital, Wernicke's, and posterior temporal sites. This negative area displayed a marginal hemisphere × electrode site interaction [F(5,55) = 2.82, p < .08], indicating that the Wernicke's and temporal sites were asymmetric. Since previous studies of both auditory and visual word processing have reported asymmetries for negativities in this latency band over these sites (e.g., Holcomb & Neville, 1990; Neville, Kutas, Chesney, & Schmidt, 1986; Neville, Kutas, & Schmidt, 1982), a follow-up analysis of the Wernicke's and posterior temporal sites was performed. This analysis showed that the left hemisphere was more negative than the right [F(1,11) = 5.11, p < .045] and that this effect was equivalent for all three final-word conditions. In other words, while overall the left hemisphere tended to be more negative than the right, the effects of context were symmetrical across the hemispheres. The end of the negativity and the beginning of the subsequent positivity were quantified with an average area measure from 500 to 1,000 msec. Analyses of this measure also showed that the three types of sentence endings differed [main effect of condition, F(2,22) = 6.40, p < .0064], but more so at the anterior temporal and frontal electrodes [condition × electrode site interaction, F(10,110) = 3.35, p < .027]. Planned comparisons revealed that ERPs in the unrelated-anomaly condition remained more negative than ERPs in the best-completion and related-anomaly conditions [F(1,11) = 11.19, p < .007, and F(1,11) = 8.83, p < .013, respectively], but the related-anomaly and best-completion conditions no longer differed.
The area at the end of the recording epoch (1,000 to 1,400 msec) produced a significant condition × electrode site interaction [F(10,110) = 2.92, p < .037], indicating that, at posterior sites, there was very little difference between the conditions but that, at more anterior sites, ERPs in the unrelated-anomaly condition remained more negative than ERPs in either the best-completion condition or the related-anomaly condition.

Onset of the condition effects. As can be seen in Figure 2, differences between best completions and unrelated anomalies appear to onset quite early at some electrode sites. Analyses of the nine consecutive 50-msec epochs starting at stimulus onset confirmed this observation. The earliest significant condition × electrode site interaction occurred for the 50-100-msec measure, showing the ERPs in the unrelated-anomaly condition to be more negative than in the best-completion condition [F(5,55) = 5.05, p < .008]. Table 1 reports the first time window during which each of the electrode sites differentiated between conditions (best completion vs. unrelated anomaly and best completion vs. related anomaly). As can be seen in Table 1, more posterior locations revealed differences earlier than did anterior sites, and the best-completion condition differentiated from the unrelated-anomaly condition earlier than it differentiated from the related-anomaly condition. Over anterior regions, the best-completion and unrelated-anomaly conditions were differentiated 100 to 150 msec earlier from the left hemisphere than from the right hemisphere (see Figure 2).

Discussion

Even though the average duration of final words was 561 msec (range 318-901 msec), the electrical activity of the brain (at posterior electrode sites) reliably registered a difference between sentence-final words that were contextually appropriate and those that were contextually anomalous as early as 50 msec after word onset.
This result is consistent with Marslen-Wilson's (1987) claim that sentence-level contextual factors can have an effect on auditory word processing prior to the point at which all the acoustic information about a given word is available. This finding is particularly striking when compared with similar data from procedurally similar visual studies (e.g., Neville et al., 1986). In most visual studies, where all of the information about a given final word is available at stimulus onset, the difference between best-completion and anomalous final words typically does not start prior to 200 msec. It is noteworthy that while these early posterior effects of context were bilaterally symmetrical, in line with evidence for a greater role of the left hemisphere in speech processing, more anterior sites revealed an earlier difference between best completions and the two anomalies in the left hemisphere than in the right hemisphere. The latency of the posterior context effects is 100-150 msec earlier than the onset of the first effects visible in the McCallum et al. (1984) auditory study. A likely explanation for this finding is the presence of nonsemantic contextual cues (i.e., prosody and across-word coarticulation) in the natural speech stimuli from the current experiment, since McCallum et al. spliced in final words spoken in other contexts. The results of this study more generally replicate McCallum et al.'s (1984) findings of an enhanced late negativity (N400) to semantically anomalous final words of spoken sentences and extend their results by demonstrating that final words which, although anomalous, are semantically related to the best-completion word also generate an auditory N400 effect. However, the related-anomaly effect was both smaller and more restricted in latency than was the effect found for unrelated anomalies, suggesting that semantic contextual factors played a role in reducing the N400 effect to these otherwise anomalous items.
This is the same basic pattern previously reported by Kutas et al. (1984) for similar materials presented visually and is consistent with their spreading-activation interpretation. However, Kutas et al. did not report a similar difference in the time course of the unrelated- and related-anomaly effects, although examination of their waveforms (see their Figure 11.5) suggests that such differences may have been present at some scalp sites. There are at least two other possibilities for the above pattern of results, both of which assume differences in the characteristics of the three types of final words. First, it is possible that related anomalies had, on average, a later point of uniqueness than did best completions. This might have delayed the onset of differences between the best completions and related anomalies. Although we did not attempt to match our final words along this dimension, we feel that such differences are an unlikely source of the observed differences, because Marslen-Wilson (1987) has shown that the point of uniqueness becomes less critical in determining when a word is recognized (selected) when the word is placed within a sentence context. A second possibility is that the larger amplitude N400 in the related-anomaly condition, relative to the best-completion condition, could be due to the related anomalies' being of slightly lower frequency (1.77 vs. 1.42 log frequency). Van Petten and Kutas (1990) and Rugg (1990) have shown that lower frequency words have a larger N400 than do higher frequency words. However, such frequency effects are typically much smaller than context effects and tend to disappear by the end of contextually constrained sentences (Van Petten & Kutas, 1990). Therefore, it seems unlikely that the small frequency differences seen in the present experiment could be contributing to the observed differences in final-word ERPs.
Finally, the ERPs to the middle words in the sentences, although small in overall amplitude, revealed a clear set of early ERP components. That these waves represent the P1, N1, and P2 components frequently seen with other auditory stimuli is supported by their scalp distributions, latencies, and relative positions in the ERPs. In previous studies, these components have been shown to be closely tied to the physical parameters of the stimulus (see Regan, 1989). For example, previous work using pure tone stimuli has shown that both the N1 and P2 components are attenuated at short interstimulus intervals (ISIs) and do not begin to regain their full amplitude with ISIs shorter than several seconds (e.g., Gastaut, Gastaut, Roger, Corriol, & Naquet, 1951; Knight, Hillyard, Woods, & Neville, 1980). The smaller amplitude of the N1 and P2 to middle words in the present experiment may have been due to the relatively rapid rate of word presentation and the short duration between the end of one word and the beginning of the next. The ERPs to words in the middle of sentences also generated a slow negative wave at anterior sites that started quite early, possibly overlapping the N140 and P200 (Figure 1). At posterior sites, there was a more peaked negativity with a latency of about 400 msec and a slow return to baseline. In both the 200-700-msec and the 700-1,400-msec measurements, anterior sites were more negative than posterior; however, both the anterior and the posterior sites were more negative over the right hemisphere. At posterior locations, the more peaked nature of the negativity and its right hemisphere predominance suggest a similarity to the visual N400 component. Kutas, Van Petten, and Besson (1988) have looked at negativities across various positions in visual sentences in a variety of studies. Their work has shown a greater right than left distribution for N400 in almost every case.
EXPERIMENT 2

In Experiment 1, the words within each sentence were presented in a continuous stream of natural-sounding speech. This is the typical manner in which normal discourse is conducted, but, as pointed out, it may confound the effects of semantic factors with other between-word contextual influences. In Experiment 2, we attempted to limit these other sources by breaking up the natural flow of words in spoken sentences. This was accomplished by introducing a constant 750-msec ISI between the words of each sentence from Experiment 1. Pilot work with several subjects indicated that adding this time between words did not hinder subjects' comprehension, but that it did make the sentences sound less than natural. We assumed that, by adding this interval, semantic contextual influences would not be significantly altered, but that the effect of cues that depend on the temporal/acoustic relationships between words in the sentence (prosody and between-word coarticulation) would be disrupted. We reasoned that, by comparing the pattern of ERP responses from the present experiment and Experiment 1, it should be possible to determine whether this between-word information contributes to the ERP differences in the three final-word conditions. A second purpose of Experiment 2 was to examine the ERPs to words in the middle of sentences when the interval between items was temporally extended and static. It was predicted that a pattern of early components similar to that seen in Experiment 1 would be obtained in Experiment 2, but that these would be less refractory (greater in amplitude) due to the longer and fixed interval.

Method

Subjects. Twelve young adults (7 male, 5 female; mean age = 21.6 years) were paid $5 per hour to serve as subjects. None had participated in Experiment 1. All were right-handed (4 had at least one left-handed family member within the immediate family) and native speakers of English.

Stimuli and Procedure.
The stimuli were the same sequence of sentences used in Experiment 1. The only difference was that all words were presented with a 750-msec ISI. Prior to the experimental run, each subject was presented with 10 practice sentences using the same stimulus-presentation procedures.

Data analysis. The same components and measurement windows used for the ERPs in Experiment 1 were used in Experiment 2. Due to excessive artifact, the data from 2 subjects' middle words were not included in analyses. In addition to the contrasts made within Experiment 2, several across-experiment comparisons were made on difference waveforms (formed by a procedure that involves subtracting ERPs from different conditions). One advantage of this procedure is that factors contributing to static morphological differences in the ERPs of the two experiments (e.g., physical differences in the stimulus conditions that might differentially affect components such as the N1) are removed, leaving only the effects of the three final-word conditions. Two sets of "difference" ERPs were formed by subtracting the ERPs to the best-completion words from the ERPs to the unrelated anomalies and the ERPs to the best completions from the related anomalies. The mean amplitude between 0 and 200 msec, 200 and 400 msec, 400 and 600 msec, and 600 and 1,000 msec, as well as the peak latency between 200 and 800 msec, were then measured. For within-experiment comparisons, the factors were electrode site, hemisphere, and condition (unrelated anomaly-best completion vs. related anomaly-best completion). Comparison of the two experiments added one between-subjects factor (experiment number).

Results

Behavior. The subjects reported no difficulty in understanding the sentences in Experiment 2 (best completions = 96% correct, unrelated anomalies = 98% correct, and related anomalies = 96% correct). There were no significant differences in percent correct scores between Experiment 2 and Experiment 1.

ERPs from middle words.
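As a rough numerical illustration of the difference-wave procedure just described, the sketch below forms a difference ERP and takes mean amplitudes in the measurement windows used above. The array shapes, the 1-sample-per-msec convention, and the synthetic data are assumptions for illustration only; this is not the authors' analysis code.

```python
import numpy as np

# Synthetic grand-average ERPs: (channels, samples), 1 sample per msec
# from word onset. Values are illustrative, not real data.
rng = np.random.default_rng(0)
n_channels, n_samples = 14, 1000
erp_best = rng.normal(0.0, 1.0, (n_channels, n_samples))
erp_unrelated = erp_best - 2.0  # anomalies more negative-going overall

# Difference wave: unrelated anomalies minus best completions.
# Static morphological differences common to both conditions cancel.
diff_wave = erp_unrelated - erp_best

# Mean amplitude in each measurement window (msec), as in the text.
windows = [(0, 200), (200, 400), (400, 600), (600, 1000)]
mean_amps = {w: float(diff_wave[:, w[0]:w[1]].mean()) for w in windows}

# Peak latency of the largest negativity between 200 and 800 msec.
segment = diff_wave.mean(axis=0)[200:800]
peak_latency = 200 + int(np.argmin(segment))
```

Because the two synthetic conditions share the same underlying waveform, the subtraction leaves only the (constant) condition effect, mirroring the logic of removing static morphological differences.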
Plotted in Figure 3 are the ERPs from the middle words of Experiment 2. Apparent are the larger and more clearly defined early components to words presented at this rate, relative to the natural-speech results (Figure 1). An early positivity was apparent just after stimulus onset and peaked between 40 and 50 msec (P45). Note that, unlike Experiment 1, P45 did not appear to be overlapped by components from the prior word. P45 can be seen at all sites and, although there was no significant difference along the anterior-posterior dimension, there was a significant hemisphere effect [middle words, F(1,9) = 9.88, p < .012], indicating that the right hemisphere was more positive than the left. The analysis of the latency of P45 indicated that, as in Experiment 1, this component peaked earlier in the left hemisphere for both middle and final words (36 vs. 48 msec for middle words and 47 vs. 55 msec for final words) [main effect of hemisphere: middle, F(1,9) = 9.90, p < .012; final, F(1,11) = 10.07, p < .009]. Similar to Experiment 1, the most prominent feature early in the waveforms of both middle and final words was a negative component that peaked between 112 and 130 msec (because of its similarity to the N140 in Experiment 1, we will continue to refer to this negativity by this label). N140 can be seen as far posterior as T5/T6, but as in Experiment 1, it was larger at the more anterior electrode locations [main effect of electrode site: middle words, F(5,45) = 7.87, p < .01]. As in Experiment 1, the amplitude of N140 was larger over the left hemisphere than over the right hemisphere [electrode site x hemisphere interaction: final words, F(5,55) = 4.0, p < .005; middle words, F(5,45) = 6.68, p < .006]. As in Experiment 1, following the N140, ERPs to middle words were characterized by a positivity, which peaked at about 200 msec (P200) and was largest over right frontal and temporal sites [hemisphere x electrode site interaction, F(5,45) = 6.02, p < .009].
Unlike in Experiment 1, ERPs to middle words did not produce a large frontal negative wave following the P200, but they did reveal a negativity between 300 and 600 msec over temporal and posterior sites [main effect of electrode site, F(5,45) = 5.55, p < .032]. Note that, because of the constant ISI between words in Experiment 2, the N140/P200 to the next word can be seen starting at about 800 msec (Figure 3).

ERPs from final words. The ERPs to the final words in the three conditions are shown in Figure 4. As in Experiment 1, an N140 can be seen in the ERPs of all three final-word conditions; however, in contrast to Experiment 1, there do not appear to be any differences between the conditions in this time period. Between 200 and 300 msec, the ERPs in the best-completion condition were markedly divergent from those in the unrelated- and related-anomaly conditions at most sites. As in Experiment 1, the best-completion condition displayed a positive shift, whereas both anomaly conditions were characterized by a negative component. As in Experiment 1, by 600 msec, all but the most anterior sites crossed the baseline to become positive and remained positive until the end of the recording epoch. Analyses on the 200- to 500-msec measure confirmed that the three conditions differed [main effect of condition, F(2,22) = 21.12, p < .0001]. Planned contrasts revealed a pattern of effects similar to that observed in Experiment 1: the unrelated- and related-anomaly conditions produced more negative-going ERPs than did the best-completion condition [F(1,11) = 41.51, p < .0001, and F(1,11) = 18.24, p < .0013, respectively], but the unrelated-anomaly condition produced only marginally more negative-going ERPs than did the related-anomaly condition [F(1,11) = 4.78, p < .051]. As in Experiment 1, there was a hemisphere x electrode site interaction [F(5,55) = 4.02, p < .006], which indicated that at frontal, temporal, and parietal sites, the left hemisphere was more negative than the right.
There was not, however, a significant interaction between hemisphere and the three conditions, suggesting that, as in Experiment 1, the condition effects were symmetrical across the hemispheres. The 500-1,000-msec area measure was used to quantify the end of the negativity and the beginning of the subsequent late positivity. Measures of this epoch revealed a condition x electrode site interaction [F(10,110) = 3.77, p < .047], indicating that, over posterior sites (O, P, T, W), there were no significant differences between the three conditions but that, at the more anterior sites (T, AT, and F), the unrelated-anomaly condition was more negative than the best-completion condition, and the related-anomaly condition was intermediate between the other two. Planned comparisons confirmed this observation. The unrelated-anomaly/best-completion contrast produced a significant electrode site x condition interaction [F(5,55) = 8.80, p < .004; unrelated anomalies more negative than best completions]. The unrelated-anomaly/related-anomaly contrast produced a main effect of condition [F(1,11) = 10.54, p < .008; unrelated anomalies more negative than related anomalies]. However, the related-anomaly/best-completion contrast did not produce either effect. The area at the end of the recording epoch (1,000-1,400 msec) also produced a significant condition x electrode site interaction [F(10,110) = 8.77, p < .002], indicating that, at the anterior temporal and frontal sites, ERPs in the unrelated-anomaly condition and, to a lesser extent, those in the related-anomaly condition, were more negative than those in the best-completion condition. A significant hemisphere x electrode site interaction indicated that, over anterior sites, the left hemisphere was more negative than the right [F(5,55) = 6.79, p < .002].

Onset effects. We compared condition effects for each of nine 50-msec epochs (beginning at word onset) to determine the onset of the priming effects.
The earliest reliable differences between conditions occurred from 200 to 250 msec (i.e., 150 msec later than in Experiment 1; see Table 1). Also, the onset of the differences between conditions was more similar at anterior and posterior locations of the scalp than in Experiment 1. However, the same tendency for the effects to onset earlier in regions of the left hemisphere than in regions of the right hemisphere is evident. Additionally, as in Experiment 1, the best-completion/unrelated-anomaly differences occurred earlier than did the best-completion/related-anomaly differences.

Difference waves and comparison of Experiments 1 and 2. Figure 5 shows the difference waves (Figure 5A, unrelated anomalies minus best completions; Figure 5B, related anomalies minus best completions) from Experiments 1 and 2. In the natural-speech experiment, but not in the 750-msec experiment, there were clear context effects present in the first 200 msec following word onset, but only over posterior locations [electrode site x experiment interaction, F(5,110) = 3.40, p < .05; see Figure 5A]. In both experiments, from 200 to 400 msec, context effects were asymmetrical over anterior sites and were larger from the left hemisphere [hemisphere x electrode site interaction, F(5,110) = 3.05, p < .04]. In the natural-speech experiment, the maximum negativity occurred between 400 and 600 msec and was larger over posterior regions [main effect of electrode site, F(5,110) = 7.2, p < .004]. The final phase (600-1,400 msec) was largest from anterior electrodes [main effect of electrode site, F(5,110) = 30.8, p < .001]. Beyond 200 msec, the results from Experiment 2 were similar in many respects. The overall amplitude and the anterior/posterior and lateral distribution of the difference waves were similar to those observed in Experiment 1. As noted above, the effect from 200 to 400 msec was larger over anterior regions of the left hemisphere than of the right hemisphere.
The later (> 600 msec) effects were larger from anterior regions [main effect of electrode site, F(5,110) = 13.0, p < .001]. Over temporal and parietal regions, these effects were asymmetrical, larger over the right hemisphere [hemisphere x electrode site interaction, F(5,110) = 5.50, p < .007]. In the interval between 200 and 800 msec, the mean peak latency of the largest negativity was 444 msec in Experiment 1 and 474 msec in Experiment 2. However, neither this difference nor its interaction with other variables was significant. Across experiments, the latency of the peak negativity was earlier at posterior sites than at anterior sites [main effect of electrode site, F(5,110) = 15.73, p < .0001] and occurred earlier in the left hemisphere than in the right hemisphere, most notably in the unrelated-anomaly-best-completion waves [condition x hemisphere interaction, F(1,22) = 6.20, p < .021].

Discussion

The condition effects on the ERPs to final words in Experiment 2 were similar to those observed in Experiment 1, the major difference being that they began later. Unlike in Experiment 1, the differences between the final-word conditions in Experiment 2 did not start in the time frame of the N140 component, but, as indicated by the time-course analysis, they started later (220-300 msec), particularly at posterior sites. The later phase of the negativity (N400) did not appear to differ between the experiments. As in Experiment 1, the ERPs in the unrelated- and related-anomaly conditions were significantly more negative between 200 and 500 msec than those in the best-completion condition; however, between 500 and 1,000 msec, the unrelated-anomaly condition was more negative than either the best-completion or the related-anomaly condition. The longer interval between stimuli in Experiment 2 resulted in a similar, but more clearly differentiated, set of early components, relative to that seen in Experiment 1 (P45, N140, and P200).
In both experiments, P45 could be seen with words in the middle of sentences and was earlier in peak latency in the left hemisphere. But asymmetries in the amplitude of early components (P45, N140, and P200), where the left hemisphere was more negative than the right, occurred only in Experiment 2. We have previously reported tendencies for the left hemisphere to be more negative than the right during the first 300 msec following word onset in both visual and auditory studies (e.g., Holcomb & Neville, 1990; Neville et al., 1986; Neville et al., 1982). Each of these studies involved relatively long ISIs. In Experiment 1, presentation rate averaged two words per second, with very little time between the offsets of words. The absence of an asymmetry to the middle words of Experiment 1 may have been due to the presence of an overlapping component that was more negative over the right hemisphere (i.e., two opposite asymmetries may have canceled). The 200-700-msec and 700-1,400-msec epochs in Experiment 1 revealed a slow anteriorly distributed negativity.

GENERAL DISCUSSION

Middle Words

The presence of a slow right-hemisphere negativity in Experiment 1 and its absence in Experiment 2 suggest that this wave has something to do either with the rate of stimulus presentation or with the "naturalness" of the sentence. A number of studies have indicated that the right hemisphere plays a role in processing prosodic information (e.g., Behrens, 1988; Ross, 1981; Ross, Edmondson, Seibert, & Homan, 1988). Presumably, prosodic cues were diminished or removed by the artificial ISI introduced in Experiment 2. The right-more-negative-than-left asymmetry at anterior sites in Experiment 1 and the lack of a similar asymmetry in Experiment 2 suggest that this negativity may reflect the greater role of the right hemisphere in processing prosodic cues in natural speech. However, the presence of similarly distributed effects in visual sentences (Kutas et al., 1988) casts some doubt on this hypothesis.
Another possibility is that latency-jittered N400s from subsequent middle words contributed more to right-hemisphere sites (remember that subsequent middle words were not systematically time-locked to the current middle word in Experiment 1 due to the natural speaking rate). However, this implies that N400s were, in general, more negative over the right hemisphere, as they typically are in visual studies. This was not the case in either of the present experiments. These and other speculations about middle-word auditory ERPs need to be tested more directly in future studies. The existence of a relatively normal set of early ERP components time-locked to the onset of words in natural continuous speech is consistent with the hypothesis that speech segmentation occurs on line and at a relatively early point in the processing of speech stimuli. However, whether these ERP findings directly reflect the segmentation process at work or whether they indicate that segmentation is complete prior to the occurrence of these early waves will have to await the results of future research. For example, it would be interesting to see if continuous speech produces discernible "middle latency" ERPs (those with a latency between 10 and 50 msec). It would also be interesting to see if time locking to other features within words (e.g., stressed vs. unstressed syllables) results in identifiable ERP components.

Final Words

The earlier onset of the N400 effect (Figure 5) in Experiment 1, relative to that in Experiment 2, is consistent with the hypothesis that nonsemantic between-word contextual cues facilitated the processing of sentence-final words beyond that produced by semantic context effects. Evidence for this position comes from at least three sources. First, studies using reading tasks (e.g., Kutas & Hillyard, 1980), where these types of between-word cues are not present, have shown a later onset for the N400. 7 Second, McCallum et al.
(1984) removed or attenuated these between-word cues, and their N400s onset substantially later than did those in Experiment 1. Finally, Holcomb and Neville (1990) found differences in ERPs recorded to auditory and visual word pairs. However, although their auditory semantic-priming effects were relatively early (200-290 msec auditory and 300-360 msec visual), they were not as early as those found in Experiment 1 (50 to 100 msec). One difference between their study and the present experiments is that their words were presented (and originally pronounced) in isolation. Therefore, the prime word (the first word in each pair) could not have provided prosodic or coarticulatory cues to facilitate the processing of the target word (the second word of each pair). Taken together, these results suggest that there was an interaction between the semantic contextual information and certain nonsemantic between-word cues and that, together, these sources provided rapid information about the final words in the present sentence paradigm after a relatively small amount of the word had been heard. It should be pointed out that it is unlikely that the between-word nonsemantic cues alone would have been sufficient to determine whether the sentence-final word was the correct item (and therefore drive the time course of the early ERP effect), since even in the unrelated-anomaly sentences, the final two words were spoken together (naturally) and were always a perfectly plausible pair of words for ending a sentence (although not the given sentence). There is another possible explanation for the early onset of the context effect. In addition to providing between-word nonsemantic contextual cues, the stimuli in this study were also presented at a relatively high rate (an average of two words per second, which is a relatively normal speaking rate and is high only in comparison with most other ERP language studies).
One line of evidence in support of an early rate effect comes from a recent visual study (Neville, Pratarelli, & Forster, 1989). Briefly, under conditions in which related and unrelated word pairs were presented very rapidly, an early (100-200 msec) priming effect was evident. However, this early effect displayed an anterior/posterior and lateral distribution very different from the later, typical visual N400 effect. Another study casts doubt on the rate hypothesis. Kutas (1987) presented visual sentences to subjects at either 10 words per second or one word every 700 msec. The peak latency and the time course of the ERP difference between anomalous final words and best-completion final words were substantially earlier in the slower presentation condition than in the 10-per-second condition. Clearly, more work is needed in this area. For instance, it is important to know the relative contributions of both prosodic and coarticulatory cues. An experiment in which one or both of these variables could be factorially manipulated would be helpful. Also, further studies looking at variations in speaking rate are critical to determine whether this variable does in fact interact in any way with the effects of context.
Natural Hazards and Earth System Sciences

Local-scale multiple quantitative risk assessment and uncertainty evaluation in a densely urbanised area (Brescia, Italy)

The study of the interactions between natural and anthropogenic risks is necessary for quantitative risk assessment in areas affected by active natural processes, high population density and strong economic activities. We present a multiple quantitative risk assessment for a 420 km² high-risk area (Brescia and surroundings, Lombardy, Northern Italy), for flood, seismic and industrial accident scenarios. Expected economic annual losses are quantified for each scenario, and annual exceedance probability-loss curves are calculated. Uncertainty on the input variables is propagated by means of three different methodologies: Monte Carlo simulation, First Order Second Moment, and point estimate. Expected losses calculated by means of the three approaches show similar values for the whole study area: about €64 000 000 for earthquakes, about €10 000 000 for floods, and about €3000 for industrial accidents. Locally, expected losses assume quite different values if calculated with the three different approaches, with differences up to 19%. The uncertainties on the expected losses and their propagation, performed with the three methods, are compared and discussed in the paper. In some cases, uncertainty reaches significant values (up to almost 50% of the expected loss). This underlines the necessity of including uncertainty in quantitative risk assessment, especially when it is used as a support for territorial planning and decision making. The method is developed with a possible application at the regional-national scale in mind, on the basis of data available in Italy over the national territory.
Published by Copernicus Publications on behalf of the European Geosciences Union.

Introduction

The management and the reduction of natural and technological risks have been strategic problems for disaster-prone communities over the last century. The frequency of disasters and their consequences have increased worldwide (Munich Re Group, 2004). For this reason, the reduction of risk is recognized as an integral component of both emergency management and sustainable development, involving social, economic, political, and legal issues (Durham, 2003). A necessary condition for risk prevention, mitigation and reduction is its analysis, quantification, and assessment. Quantitative risk assessment (QRA) has been demonstrated to be effective in the resolution and mitigation of critical situations, and it poses some interesting challenges to the scientific communities. As a consequence, different QRA methodologies have been proposed in the literature for different natural and technological risks, based on different approaches for the calculation of societal, individual and economic risk (e.g., Kaplan and Garrick, 1981; CCPS, 1989; Jonkman et al., 2003; Bell and Glade, 2004; Calvo and Savi, 2008; Agliardi et al., 2009; Fuchs and Brundl, 2005). In some contexts, e.g., where the presence of sources of natural hazards is combined with strong industrial and urban development, the integration of different risks can help in the design of mitigation measures (Lari et al., 2009). In this context, multiple QRA allows the comparison and integration of risks deriving from different threats. Many approaches have been proposed to assess specific natural and technological risks, but only a few studies combine multiple sources of hazard to obtain an overall analysis. Some transnational studies have been performed to compare risk levels in different countries (e.g., Cardona et al.,
2004; UNDP, 2004; ESPON, 2005) or to identify key "hotspots" where the risks of natural disasters are particularly high (Dilley et al., 2005). However, these studies generally define risk in a qualitative way, based on relatively simplified approaches that make use of national-level indicators, without a detailed spatial analysis of hazard and element-at-risk patterns. Only a few local-scale multi-risk analyses have been proposed, including multiple sources of natural (Granger and Hayne, 2001; Middelmann and Granger, 2001; Van Westen et al., 2002; Kappes et al., 2012b) and natural/technological hazards (Barbat and Cardona, 2003; Ferrier and Haque, 2003; Lari, 2009). The analysis and the management of technological and natural risks are affected by large uncertainties (Patè-Cornell, 1995), reflecting incomplete knowledge (epistemic uncertainty) or intrinsic randomness of the processes (aleatory uncertainty). QRA is also used to model rare events, where experimental verification of its validity is missing, and uncertainties can be significant. For these reasons, it is important that uncertainties in the results of the QRA are correctly characterized and interpreted (Parry, 1996), in order to support meaningful decision making. In this paper, we present a local-scale heuristic quantitative risk assessment for a set of flood, earthquake and industrial accident scenarios in a high-risk area (Brescia and lower Val Trompia, Lombardy, Northern Italy). We analysed risk scenarios derived from existing regulatory maps. These scenarios are easily available for the entire country, but they can be affected by large uncertainties because they are produced with a simplified approach at small scale. In order to assess uncertainty, we applied and compared Monte Carlo simulation (MC), First Order Second Moment (FOSM), and point estimate (PE) techniques.
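The three propagation techniques can be sketched on a deliberately simple toy loss model; all numbers below are illustrative assumptions, not values from this study. For a linear model the three estimates of the expected loss coincide, which helps illustrate how the methods can agree on aggregate values while differing where loss models are more complex:

```python
import numpy as np

# Toy annual-loss model L = H * V * W with an uncertain vulnerability V.
# H (annual event probability), W (value) and the moments of V are
# illustrative assumptions only.
H, W = 0.01, 1_000_000.0          # annual probability, value in euros
mu_V, sigma_V = 0.30, 0.05        # mean and std of vulnerability

def loss(V):
    return H * V * W

# First Order Second Moment: Taylor expansion of loss() around mu_V.
fosm_mean = loss(mu_V)
fosm_std = abs(H * W) * sigma_V   # |dL/dV| * sigma_V

# Point estimate (Rosenblueth's two-point method for one variable).
pe_mean = 0.5 * (loss(mu_V + sigma_V) + loss(mu_V - sigma_V))

# Monte Carlo simulation: sample V and propagate through loss().
rng = np.random.default_rng(42)
mc = loss(rng.normal(mu_V, sigma_V, 100_000))
mc_mean, mc_std = mc.mean(), mc.std()
```

For this linear toy model, FOSM and PE give the same expected loss analytically, and MC converges to it as the sample size grows; the methods only diverge when the loss function is nonlinear in the uncertain inputs.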
The aim of the analysis is to compare the values and distributions of the different risks coexisting in the area, and to quantify the uncertainty introduced in the analysis by testing different methodologies. The identification of the most relevant risk in different zones is the basis for the optimal allocation of resources in risk mitigation strategies, and for more detailed investigation at the local scale. At the same time, the proposed method is developed with a possible application at a regional or national scale in mind, by direct use of available datasets. The analysis does not consider domino effects, where some of the components of risk assessment (i.e., probability of occurrence and vulnerability) do not result from the simple sum of their values for single threats (Kappes et al., 2012a).

Study area

The study area includes the city of Brescia, its neighbourhoods, and lower Val Trompia (Lombardy, Northern Italy). It covers an area of 420 km² (Fig. 1), including a plain zone (90 m a.s.l.) and a pre-alpine zone, with a maximum elevation of 1360 m a.s.l.

Fig. 1. Risk hot spot area (in black) as obtained from multi-risk analysis at the regional scale (Lari et al., 2009). Flooding zones as defined by the basin-scale regulatory map zonation (PAI - Piano stralcio di Assetto Idrogeologico, 2007). Three flooding areas are identified: zone A is defined as the area which contains 80% of the discharge for a 200-yr flood; zone B is defined as the flooding area for a 200-yr flood; zone C is defined as the flooding area for a 500-yr flood.

The population (722 000 people) is distributed in 36 municipalities, with a maximum density of 2068 inhabitants km⁻² in the urban area of Brescia and along Val Trompia. The area has been economically strongly developed and highly industrialised since the beginning of the 20th century: iron and steel, mechanical, chemical and foundry industries are widespread. Zootechnical and agricultural activities are relevant in the southern part of the area.
The study area was selected since it was classified as a risk hot spot in a regional risk assessment study performed for the Lombardy Region (Lari et al., 2009), by integrating natural (rockfalls, shallow landslides, debris flows, floods on alluvial fans, deep-seated landslides, floods, earthquakes, wildfires) and technological sources of hazard (car crashes, industrial accidents, work injuries). Hot spot risk areas were detected according to the number of coexisting natural and technological threats, and to their level.

Context for risk analysis, data availability and scenarios description

The major threats menacing the area, as resulting from the regional-scale analysis (Lari et al., 2009), are floods, earthquakes and industrial accidents. Floods occur along the floodplain of the Mella river (basin 311 km² in size), which flows through Val Trompia and the Brescia urban centre. Six flooding events in the last century (1928, 1959, 1960, 1966, 1993; AVI - Aree Vulnerate Italiane da frane ed inondazioni, 2007) caused the destruction of bridges, roads and buildings in the area. The basin-scale regulatory map zonation (PAI - Piano stralcio di Assetto Idrogeologico, 2007) identifies three flooding areas (A to C). Zone A is defined as the area that contains 80% of the discharge for a 200-yr flood. This area is flooded very frequently, and corresponds to the river bed. In the study area, zone A is poorly relevant for risk analysis, since only a few structures are located in it. Zone B is defined as the flooding area for a 200-yr flood. In the study area, this zone is missing due to hydraulic works which efficiently contain the 200-yr river discharge within the A zone. Zone C, defined as the flooding area for a 500-yr flood, embraces large parts of the plain area, with a potential impact on a large number of residential and industrial buildings, and crops (Fig. 1). In the study area, only flooding zones A and C are included, with return periods of 200 and 500 yr, respectively.
Seismic risk is quite relevant in the area of Brescia, which is located 50 km from a system of active faults along Lake Garda. According to the Italian seismic hazard map (INGV - National Institute of Geophysics and Volcanology - MPS, 2004), the peak ground acceleration (PGA) with an exceedance probability of 10% in 50 yr ranges from 0.125 to 0.15 g (Fig. 2). This value refers to reference site conditions with stiff soils characterised by shear wave velocities higher than 800 m s⁻¹. According to the PGA values, the area is classified in Italian law by ministerial decree OPCM 3274 20/3/03 as one of moderate risk (class 3d). Six events with magnitude larger than 5 were registered within a radius of 50 km in the last century (1901, 1918, 1932, 1951, 1979, 2004), and 27 events since 1065, with a maximum historical magnitude of 5.7 Richter in 1901 AD. For the assessment of seismic risk, we considered three scenarios with different return periods (75, 475 and 2500 yr), corresponding respectively to an exceedance probability of 50% in 50 yr, of 10% in 50 yr and of 2% in 50 yr (Meletti and Montaldo, 2006), as defined by the National Institute of Geophysics and Volcanology (INGV).
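Under a Poisson occurrence model of the kind used later in the paper, an exceedance probability p over an exposure time t maps to a return period T = -t / ln(1 - p). The short sketch below (illustrative only) shows that the three seismic scenarios correspond, up to rounding, to the stated return periods:

```python
import math

def return_period(p_exceed, t_years):
    """Return period T implied by exceedance probability p_exceed over
    t_years, assuming a Poisson model: p = 1 - exp(-t / T)."""
    return -t_years / math.log(1.0 - p_exceed)

# Seismic scenarios: 50%, 10% and 2% exceedance probability in 50 yr.
periods = [round(return_period(p, 50)) for p in (0.50, 0.10, 0.02)]
# Close to the rounded scenario return periods of 75, 475 and 2500 yr.
```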
Industrial risk in the study area is due to the presence of hundreds of industrial plants, mainly related to manufacturing. We focused our analysis on eight of them, classified as major-risk plants by legislative decree D.Lgs. 238/05, according to Council Directive 96/82/EC on the control of major-accident hazards involving dangerous substances, also known as the Seveso II Directive. For these plants, safety plans are available for different accident scenarios. The plans include the probability of occurrence, the area of impact, the intensity of the event and the effects on structures and on human life. Due to incomplete documentation about the productive processes and the accident scenarios, only explosion-related accidents were considered in this work, neglecting the release of toxic gases and pollutants. Overall, 8 accident scenarios were taken into account, related to 3 industrial plants. For each of the scenarios, we considered only damages related to impacts exceeding the plant perimeter. Damages within plants, which can also reach high values, are covered by the companies themselves, and do not have any impact on public funds.
The choice of risk scenarios to be considered for the analysis was based mainly on the availability of data, with a view to the applicability of the approach to different study areas. The use of scenarios defined by national law and zonations makes the approach potentially applicable to the whole national territory, providing a homogeneous methodology for multiple QRA. The number of scenarios for each threat, and the related return periods, are different. This problem is overcome by the use of curves which express the expected losses as a function of exceedance probability, derived from F–N curves (Ale et al., 1996; Vrijling and Van Gelder, 1997; Jonkman et al., 2003). These curves allow the losses related to all the considered scenarios for each threat to be integrated, and provide a total value of risk as the subtended area (Ale et al., 1996; Jonkman et al., 2002).

Methodology for risk assessment

Natural and technological risks are generally defined as the measure of the probability and severity of an adverse effect to life, health, property or the environment, and can be expressed as the probability of an adverse event times the consequences if the event occurs (ISSMGE Glossary of Risk Assessment Terms, http://www.engmath.dal.ca/tc32/) or the probability that a given loss will occur (e.g. Kaplan and Garrick, 1981; Baecher and Christian, 2003; CEDIM, 2005). Kaplan and Garrick (1981) define risk as a combination of the expected consequences of a set of scenarios, each with a probability and a consequence. Therefore, risk encompasses three aspects: hazard, vulnerability of the affected element and the asset of exposed elements at risk (Kleist et al., 2006). We perform a local-scale multiple quantitative probabilistic risk assessment, aimed at calculating the consequences of adverse events, in terms of expected annual economic loss, for a set of scenarios (2 for floods, 3 for earthquakes and 8 for industrial accidents) with different probabilities and intensities.
The aggregation of the annual expected losses, E(L), obtained for each of the considered scenarios is performed with an approach analogous to F–N curves, by representing the annual probability of exceedance of a certain level of loss, x0, as a function of the economic loss. The expected value E(L) can be derived from the probability density function (pdf) of the economic loss, f_L(x) (Ale et al., 1996; Vrijling and Van Gelder, 1997; Jonkman et al., 2003):

E(L) = ∫₀^∞ x f_L(x) dx    (1)

Ale et al. (1996) propose the area under the F–N curve as a measure for societal risk, analogous to the F–N curve and the expected number of fatalities. It can be shown that the area below the curves representing the annual probability of exceedance of a certain level of loss, as a function of the economic loss, equals the expected value E(L) (Jonkman et al., 2002). The curves obtained by combining the scenarios for each threat allow the quantification and the comparison of the economic impact of the threats in an area.

The census parcel (2460 parcels), with an area ranging from 0.0001 to 19 km2, has been adopted as the terrain unit. The choice was driven by the fact that detailed socio-economic data, provided by the National Institute of Statistics (ISTAT, 2001), are based on this territorial unit.

In this paper, we apply a methodology based on Fell et al. (2005), in which the frequency of the event, its probability of reaching the element at risk, the temporal-spatial probability of the element at risk (i.e. exposure), its vulnerability and its value are considered. For each scenario we evaluate the expected annual loss E(L) as

E(L) = P(event) · E(L|event)

where P(event) is the annual exceedance probability of the event of a given intensity I, calculated with a Poissonian model of the type

P(event) = 1 − e^(−t/µ)

where t is the period over which the probability is calculated (here, 1 yr) and µ is the recurrence interval of the event. P(event) is also referred to as the hazard.
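The Poissonian exceedance model and the resulting expected annual loss can be sketched in a few lines of Python (a minimal illustration; the 100-yr recurrence interval and the 5 M€ loss given the event are hypothetical values, not taken from the case study):

```python
import math

def annual_exceedance_probability(mu: float, t: float = 1.0) -> float:
    """Poissonian model: P(event) = 1 - exp(-t/mu),
    with mu the recurrence interval in years and t the reference period."""
    return 1.0 - math.exp(-t / mu)

def expected_annual_loss(mu: float, loss_given_event: float) -> float:
    """Expected annual loss E(L) = P(event) * E(L|event)."""
    return annual_exceedance_probability(mu) * loss_given_event

# Hypothetical scenario: 100-yr return period, 5 M€ potential loss.
p = annual_exceedance_probability(100.0)          # close to 1/100 per year
eal = expected_annual_loss(100.0, 5_000_000.0)    # expected annual loss in €
```

Note that for long recurrence intervals 1 − e^(−t/µ) ≈ t/µ, which is why the annual probability above is close to, but slightly below, 1/100.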
The potential loss given the event, E(L|event), is calculated as

E(L|event) = P(I|event) · P(L|I) · W

where P(I|event) is the probability that an event, once occurred, impacts a parcel; P(L|I) is the vulnerability of a parcel, here intended as the physical vulnerability, i.e. the degree of loss to a given element or set of elements within the area affected by a hazard, as defined by the ISSMGE Glossary of Risk Assessment Terms; and W is the total economic value of a parcel, dependent on the number and typology of buildings.

The probability of impact given the event, P(I|event), is calculated in a GIS environment by means of a geometric analysis, as the portion of built area of each parcel potentially impacted by an event (Fig. 3). This probability expresses, for each parcel, the exposure to the specific threat. A land use map (DUSAF, 2007) was used for mapping the built areas. P(I|event) is always equal to 1 for seismic risk.

In the literature, a few risk assessment studies provide examples of the quantitative estimation of the values of the exposed elements (e.g. MURL, 2000; IKSR, 2001; Dutta et al., 2003; Grunthal et al., 2006). In some cases, the value of residential buildings was estimated considering the mean insurance value of the buildings, which in general represents their replacement cost (MURL, 2000; Grunthal et al., 2006). Dutta et al. (2003) used economic replacement values for structures and other land uses, distinguishing buildings according to their use. This approach, based also on the use of aerial photographs, is adequate only for limited areas. Blong (2003) used construction costs published by Australian authorities.

In this paper, the value W of each census parcel has been calculated by using minimum and maximum market values (€ m−2), distinguishing between residential and non-residential buildings, according to data provided by the Italian Observatory of the Real Estate Market (Osservatorio Mercato Immobiliare, 2010) (Fig. 4).
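The per-parcel computation of E(L|event) combines exposure, vulnerability and value as follows (a sketch; the function name and the parcel figures are ours, chosen only for illustration):

```python
def loss_given_event(built_area_impacted: float, built_area_total: float,
                     vulnerability: float, parcel_value: float) -> float:
    """E(L|event) = P(I|event) * P(L|I) * W.
    P(I|event) is the impacted fraction of the parcel's built area (exposure);
    vulnerability P(L|I) is the degree of loss in [0, 1]; W is the parcel value."""
    p_impact = built_area_impacted / built_area_total if built_area_total else 0.0
    return p_impact * vulnerability * parcel_value

# Hypothetical parcel: 40 % of the built area falls in the hazard zone,
# 25 % damage at the expected intensity, total value 2 M€.
l = loss_given_event(4000.0, 10000.0, 0.25, 2_000_000.0)
```

For seismic risk, where P(I|event) is always 1, the impacted area would simply equal the total built area.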
Methodology for uncertainty evaluation

Monte Carlo analysis (MC), or its variants such as Latin hypercube sampling, is the most common technique for uncertainty evaluation, and it has been applied in many different fields (e.g. Fishman, 1997; Breuer et al., 2006; Hall, 2006; Calvo and Savi, 2008). In this approach, the uncertainties on parameters are generally described as probability distributions (e.g. normal, log-normal, uniform, triangular, discrete), which can be continuous or discrete. Probability distributions provide the range of values that a variable can take, and the likelihood of occurrence of each value within the range. Monte Carlo methods are a class of algorithms based on repeated random sampling, and include a wide range of engineering and scientific simulation methods based on randomised input. Monte Carlo methods are used when it is infeasible or impossible to compute an exact result with a deterministic algorithm. Probability distributions are a suitable way of describing uncertainty in the variables of a risk analysis.

The First Order Second Moment (FOSM) method consists in the truncation of the Taylor series expansion, considering only the first-order terms. It provides an estimate of the mean and the variance (first and second moment) of the output through computation of its derivatives with respect to the input at a single point (e.g. Yen et al., 1986; Baecher and Christian, 2003; Uzielli et al., 2006). The discarded terms are functions of the second- and higher-order derivatives, the variances and shapes of the probability density functions of the input variables, and the correlations among input variables (El-Ramly et al., 2003). The method requires the evaluation of partial derivatives. In complex problems, this may not be possible, or may be too complicated and time-consuming.
In the point estimate (PE) method (Rosenblueth, 1981), random variables are replaced with point estimates (e.g. Harr, 1989; He and Sallfors, 1994; Christian and Baecher, 1999; Ellingwood, 2001). Each variable is replaced with a central value (expectation, median, or mode), or with one which is consciously biased, and the estimates are deterministic. The aim of PE is to calculate the first two or three moments of a random variable which is a function of one or more random variables. In decision making, a calculation of the first moment (expected value) of functions of the random variables would often suffice (Rosenblueth, 1981). The method overcomes the deficiencies of a deterministic treatment, sacrificing the accuracy of a rigorous probabilistic analysis (Rosenblueth, 1981). PE reduces the computational effort of propagating uncertainty through a function by eliminating the calculation of derivatives or the use of Monte Carlo sampling. The pdf of each random variable is simply represented by discrete points, located according to its first, second and possibly third moments.

In general, the uncertainty evaluation performed by means of MC is computationally more demanding and time-consuming. The advantage of the method is that the output function (i.e. the risk) is defined in its full distribution, and all the moments can be calculated. The propagation of uncertainty is more accurate in Monte Carlo simulation, as a variety of distributions can be chosen for the input variables, whereas this is not possible in the deterministic approaches. On the other hand, FOSM and PE are simpler and faster. FOSM, however, requires the calculation of many derivatives, which in complex problems with many input variables can become problematic. In that case, a PE analysis should be preferred (Harr, 1989).
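The difference between the two deterministic approaches can be illustrated on a product of two independent variables, for which both the first-order (FOSM) approximation and Rosenblueth's two-point estimate fit in a few lines (a sketch with hypothetical input moments; for this simple function PE reproduces the exact variance, while FOSM only approximates it):

```python
import math
from itertools import product

def fosm_product(means, sds):
    """First-order (FOSM) moments of a product of independent variables:
    mu_theta = prod(mu_i), COV_theta**2 ~= sum(COV_i**2)."""
    mu = math.prod(means)
    cov2 = sum((s / m) ** 2 for m, s in zip(means, sds))
    return mu, mu * mu * cov2          # mean, variance

def rosenblueth_pe(f, means, sds):
    """Rosenblueth (1981) point estimates for non-skewed independent inputs:
    evaluate f at the 2**n corners mean_i +/- sd_i, each weighted 2**-n,
    and return the first two moments of the output."""
    w = 2.0 ** -len(means)
    pts = [f([m + s * sign for m, s, sign in zip(means, sds, signs)])
           for signs in product((-1.0, 1.0), repeat=len(means))]
    mean = w * sum(pts)
    var = w * sum(p * p for p in pts) - mean * mean
    return mean, var

means, sds = [10.0, 4.0], [1.0, 0.5]             # hypothetical moments
mu_f, var_f = fosm_product(means, sds)
mu_p, var_p = rosenblueth_pe(lambda x: x[0] * x[1], means, sds)
```

Here both methods give the correct mean (40.0); the exact variance of the product is 41.25, which PE recovers, while FOSM gives 41.0 because it discards the second-order term σ1²σ2².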
Many authors (e.g. Hoffman and Hammonds, 1994; Paté-Cornell, 1996; Parry, 1996; Faber, 2005; Merz and Thieken, 2005, 2009; Der Kiureghian and Ditlevsen, 2009) underline the necessity of keeping the distinction between aleatory and epistemic uncertainty, since the two types assume different meanings in risk assessment. Aleatory uncertainty is a fundamental and integral part of the structure of the QRA, and it is related to the intrinsic uncertainty in the occurrence and in the effects of phenomena. It is an uncertainty related to the variables on which risk depends, and it cannot be eliminated. Epistemic uncertainty, instead, is related to the level of knowledge and understanding we have of the scenarios we are modelling, and it can be reduced.

Other analysts (Hora, 1996; Hofer, 2001) have found that it is sometimes difficult to distinguish between the two types of uncertainty, especially when modelling the occurrence or the impacts of extreme physical phenomena, which can be outside our direct experience. In this work, epistemic and aleatory uncertainties are analysed jointly, being strongly connected at this scale of analysis. The use of regulatory maps and studies for risk assessment hardly allows one to distinguish between the uncertainty deriving from a lack of data related to processes or territorial assets due to the scale (epistemic uncertainty), and that related to the intrinsic variability of the phenomena over such wide scenarios (aleatory uncertainty). Hence, the uncertainties expressed for the single variables and for the obtained risk values represent an overall unpredictability and lack of data and knowledge about the variables. Only in some cases, where possible, a distinction is maintained in the phase of assigning uncertainty to the independent variables, as described below.
A relevant level of uncertainty was perceived in some of the steps of the risk assessment. For the hazard sources, we assigned uncertainties to both the annual probability and the intensity of the event (e.g. depth of water for floods, peak ground acceleration for earthquakes). For the elements at risk, we assigned uncertainties to the number of buildings of different typologies within each parcel, the value per unit area (€ m−2), and the vulnerability. Data related to the number of buildings of different typologies refer to 2001 (ISTAT, 2001). In more than 10 yr, new areas have been urbanised, and some buildings may have changed morphology and structural characteristics. This introduces an uncertainty that is not negligible. The economic value of buildings with different use destinations (Osservatorio Mercato Immobiliare, OMI, 2010) is susceptible to market and local fluctuations. The use of a fixed mean value (€ m−2) for each census parcel has been associated with a measure of uncertainty calculated on the basis of the range of values provided for each census parcel by OMI (2010).

The uncertainty for these variables is represented by coefficients of variation (COV, the ratio between the standard deviation and the expected value). A high coefficient of variation means a higher dispersion around the expected value and, thus, a larger uncertainty.

The assignment of COV values was performed according to expert knowledge. No values found in the literature (e.g. Harr, 1996; Beck et al., 2002; Merz et al., 2004; Kaynia et al., 2008) were adaptable to the specific case study, since uncertainty is strictly related to the characteristics of each phenomenon and to the quality of the data available for the present case. In this work, both these aspects were taken into account in the assignment of COV values by expert knowledge.
In the absence of more specific information, the variables used for the uncertainty analysis are assumed to be mutually independent. This assumption is considered realistic given the nature of the variables (e.g. number of buildings, their economic value, probability of impact of a census parcel), except for P(event) and the intensity of the scenario. However, the choice of independence considerably simplifies the analysis, especially for FOSM and PE. On the other hand, the fact that P(event) and the intensity are inversely correlated leads to a conservative approach, in which a small risk attenuation introduced by the correlation is neglected.

In the MC simulation, in the absence of evidence leading to other assumptions, Gaussian distributions have been assigned to all the uncertain input variables, assuming that this type of distribution is generally the most probable. The only exception is the number of buildings, for which a Pareto distribution was observed. Symmetric distributions have always been used in FOSM and PE.

The MC simulation was performed with Latin hypercube sampling and 5000 iterations. In Latin hypercube sampling, the input probability distributions are stratified, which means that the cumulative curve is divided into equal intervals on the cumulative probability scale (0 to 1). From each interval, a sample is then randomly taken. In this way, the sampling is forced to represent values homogeneously in each interval over the entire range of the distribution.

Nat. Hazards Earth Syst. Sci., 12, 3387–3406, 2012 (www.nat-hazards-earth-syst-sci.net/12/3387/2012/)
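The stratified sampling scheme described above can be sketched for a single Gaussian variable using only the standard library (a minimal illustration; the mean of 100, standard deviation of 10 and fixed seed are hypothetical, not values from the case study):

```python
import random
import statistics
from statistics import NormalDist

def latin_hypercube_normal(mu, sigma, n, seed=42):
    """Latin hypercube sampling of a Gaussian variable: divide the cumulative
    probability scale (0..1) into n equal strata, draw one uniform sample
    inside each stratum, and map it back through the inverse CDF."""
    rng = random.Random(seed)
    nd = NormalDist(mu, sigma)
    samples = [nd.inv_cdf((i + rng.random()) / n) for i in range(n)]
    rng.shuffle(samples)  # decorrelate the stratum order across variables
    return samples

s = latin_hypercube_normal(100.0, 10.0, 5000)
m = statistics.fmean(s)   # stratification keeps the sample mean very close to mu
```

Compared with plain random sampling, the stratification guarantees that every probability interval of width 1/5000 contributes exactly one sample, so the tails of the distribution are represented even in a single run.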
For the application of FOSM, assuming that the variables are mutually independent, the calculation of the output mean and variance is significantly reduced to a simpler form, since the correlation terms are neglected. Given a function of n variables, θ = f(x1, x2, ..., xn): if θ is a sum of mutually independent variables xi, the first-order approximation becomes exact, leading to

µ_θ = Σ µ_xi  and  σ_θ² = Σ σ_xi²

For the product of mutually independent random variables, the first-order approximation leads to

µ_θ = Π µ_xi  and  COV_θ² ≈ Σ COV_i²

where µ_θ, σ_θ² and COV_θ are the mean, the variance and the coefficient of variation of the output function, and µ_xi, σ_xi² and COV_i are the mean, the variance and the coefficient of variation of the input variable xi.

In the propagation of uncertainty by means of PE, Rosenblueth's (1981) procedure for non-skewed independent variables was used. The procedure calculates the first and second moments of each function θ = f(x1, x2, ..., xn) using 2^n points, with coordinates corresponding to the expected value of each variable xi increased or reduced by one standard deviation (Christian and Baecher, 1999).

In this work, the number of buildings of different types (n.
of floors, building materials) for each census parcel (ISTAT, 2001) dates back to 2001. Considering that in the last decade about 12 % of the agricultural surface was built up (Pileri and Maggi, 2010), the number of buildings is significantly underestimated. In order to characterise the frequency distribution of the urbanisation rate in the census parcels, 100 parcels were randomly sampled and analysed in detail through a comparison of 1998 and 2007 aerial photographs. This allowed us to recognise a power-law frequency distribution of the urbanisation rate, with an average value of 11.3 %. This increase of urbanised areas can occur only for those parcels which are not yet completely built up. For these parcels, a Pareto pdf is used in the Monte Carlo simulation in order to describe the power-law behaviour. For the other methods (FOSM and PE), we used a symmetrical distribution with a COV of 0.1. For the parcels that are completely built up, the uncertainty was assumed to be null.

In some cases, the assignment of COV values was based on the available distribution of data. In other cases, it was performed based on expert knowledge. This introduces a subjective judgement, which could be reduced in the presence of a larger amount of data, but never eliminated. However, this does not hamper the repeatability of the analysis once COV values are assigned and explicitly declared. Moreover, the methodology proposed here makes use of regulatory maps and data which are available for the whole national territory: this suggests that homogeneous COV values could be used for the whole nation. Obviously, they could be discussed in collaboration with experts from different fields.
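A Pareto pdf of the kind used in MC for the urbanisation rate can be sampled by inverse-CDF transformation (a sketch; the shape α = 3 and scale xm = 0.075 are hypothetical parameters, chosen only so that the theoretical mean α·xm/(α − 1) = 0.1125 is of the same order as the observed 11.3 %):

```python
import random

def sample_pareto(alpha, xm, n, seed=7):
    """Inverse-CDF sampling of a Pareto pdf (shape alpha, scale xm):
    F(x) = 1 - (xm/x)**alpha, so x = xm * (1 - u)**(-1/alpha) for u ~ U(0, 1)."""
    rng = random.Random(seed)
    return [xm * (1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

# Hypothetical parameters: alpha = 3, xm = 0.075.
rates = sample_pareto(3.0, 0.075, 20000)
mean_rate = sum(rates) / len(rates)
```

The strong positive skew of this distribution is what later drives the systematic difference between MC and the symmetric-distribution methods (FOSM and PE).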
Flood risk assessment

To calculate the intensity (i.e. water depth) for the flood scenarios, we reconstructed the flood surface by using a 20 m × 20 m digital terrain model and assuming the flood to be a plane intersecting the outer border of the hazard zone (A, B, C). For each parcel, the mean water depth value was used in the computations. Data related to flow velocity, bed shear stress, dynamic forces and rate of flood rise were not available.

The damage to buildings and to their content was calculated by using depth-damage vulnerability models for different typologies of building. Among the available data related to building structure, only the presence of one or more floors was considered significant for assessing the expected flood loss. The building material (e.g. masonry or reinforced concrete) was not considered to influence the vulnerability, because a collapse scenario is rather unlikely. Among the models proposed in the literature (CH2M Hill, 1974; Sangrey et al., 1975; Smith, 1994; Torterotot et al., 1992; Hubert et al., 1996; Blong, 1998; ICPR, 2001; Dutta et al., 2003; USACE, 2003; Penning-Rowsell et al., 2005; Büchele et al., 2006; Kreibich and Thieken, 2008), we selected three models (ICPR, 2001; Dutta et al., 2003; USACE, 2003) which account for both damage to buildings and to their content. USACE (2003) also distinguishes, among others, curves for one-storey and two-or-more-storey residential buildings with basements, which is the typology of building widespread in the study area. In order to quantify the uncertainty related to the choice of the vulnerability model, we performed the risk analysis for each set of vulnerability curves. The damages related to building content are modelled in USACE (2003) as a percentage of the structure value. For the other vulnerability models, the value of the content was considered equal to that of the structure. This is considered a balanced choice, introducing an approximation by excess in the case of residential buildings,
where the content value is in most cases lower, and an approximation by defect in the case of non-residential buildings (which include production plants, sanitary structures and technology facilities), where the content value is generally high. An example of the compared use of vulnerability models for a 1-storey residential building can be found in Table 1.

For the assessment of flood risk, the uncertainty assigned to each random variable is reported in Table 2. For the economic value of residential and non-residential buildings, COV values were calculated from OMI (2010) data, which provide a range of market values expressed as € m−2 for each census parcel. For the other variables, in the absence of specific data, expert knowledge criteria were used. A COV equal to 0.3 was assigned to the water depth. In fact, the water depth was calculated from the water surface elevation obtained starting from the regulatory mapping of the scenarios, produced at the regional scale. The precision of the data related to the number of buildings of different types was higher (COV = 0.1), but not free from changes due to new urbanisation, as explained before. The probability of the event for each scenario was assumed to have an uncertainty of 0.1, being determined by the National Basin Authority on the basis of a wide observation of past events.

The expected losses obtained by means of the three uncertainty methods are quite similar, although they reach their highest values in MC and their lowest in PE (Table 3). In the 200-yr scenario, expected losses are limited because only a few buildings are potentially impacted. For both scenarios, the highest level of expected losses is observed in small parcels close to the river, where the expected water depth is higher (Fig. 5). Over the study area, the overall expected loss calculated by the integration of the scenarios is almost 10 000 000 €.
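A depth-damage vulnerability model of the kind discussed above reduces, in practice, to interpolating a tabulated curve at the expected water depth of each parcel (a sketch; the curve points below are hypothetical and are not taken from USACE, ICPR or Dutta et al.):

```python
from bisect import bisect_right

def damage_fraction(depth, curve):
    """Linearly interpolate a depth-damage vulnerability curve P(L|I).
    curve is a sorted list of (water depth in m, damage fraction) points;
    depths outside the tabulated range are clamped to the end values."""
    depths = [d for d, _ in curve]
    if depth <= depths[0]:
        return curve[0][1]
    if depth >= depths[-1]:
        return curve[-1][1]
    i = bisect_right(depths, depth)
    (d0, f0), (d1, f1) = curve[i - 1], curve[i]
    return f0 + (f1 - f0) * (depth - d0) / (d1 - d0)

# Hypothetical curve for a 1-storey residential building.
hypothetical_curve = [(0.0, 0.0), (0.5, 0.10), (1.0, 0.22), (2.0, 0.40), (4.0, 0.60)]
f = damage_fraction(1.5, hypothetical_curve)   # midway between 0.22 and 0.40
```

Running the same parcel through several such curves (one per vulnerability model) is exactly how the model-choice uncertainty in Table 1 can be quantified.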
Regarding the uncertainty, the COV associated with the expected loss ranges between 10 and 27 %, with small differences between the two scenarios. For all three uncertainty methods, small parcels with high flood intensity show lower COV values (Figs. 6 and 7).

Seismic risk assessment

Seismic hazard has been defined in terms of peak ground acceleration, PGA (MPS, 2004), corrected to PGA_c by a soil amplification factor, S_s, for the horizontal acceleration spectrum, based on soil categories (Table 4) defined according to the velocity of shear waves in the first 30 m of depth, V_s,30 (DM 14 gennaio 2008).

To account for the effects of earthquakes on different building types, data related to the number of buildings in each parcel built with different materials (masonry, 52 % of the total, and reinforced concrete, 48 % of the total) were considered (ISTAT, 2001). The vulnerability of buildings was calculated using four sets of fragility curves as a function of PGA (Kostov et al., 2004; Hancilar et al., 2006; Kappos et al., 2006; Rota et al., 2008). As for floods, the risk analysis was performed for each set of curves in order to compare the results (Table 5). Kappos et al. (2006) propose curves for concrete frames with unreinforced masonry walls, mid-rise, with a low level of seismic design, here used to model the fragility of masonry buildings, and for reinforced concrete systems with a low level of seismic design, here used for reinforced concrete buildings. Among the large number of curves proposed by Kostov et al. (2004) for Bulgaria, the most similar typologies are masonry buildings constructed before 1945 and masonry buildings with reinforced concrete floors constructed after 1945. Hancilar et al. (2006) provide fragility curves only for buildings with reinforced concrete frames and shear walls, in Turkey. In this case, the damage related to masonry buildings has been underestimated (Table 3). Rota et al.
(2008) propose susceptibility curves for different building typologies in Italy. Among them, 1–3-storey reinforced concrete buildings without seismic design, and multiple-storey masonry buildings, have been selected. The percentages of damage to the structure associated with each damage state were taken from Kappos et al. (2006).

Uncertainties for the variables used in the seismic risk assessment are reported in Table 2. For the soil amplification factor, we considered an epistemic uncertainty in the order of 0.2, due to the estimation of the amplification and to the approximations introduced in the definition of the soil categories (Table 2). An epistemic uncertainty arises also from the averaging of the peak ground acceleration over the whole census parcel area, which turned out to be negligible at the scale of analysis. COV values for the number of buildings of different types, and for the economic value of buildings, were calculated as explained for the flood risk assessment. The probability of the event presents a low COV value (0.001), being derived from a very detailed hazard analysis performed for the Italian territory (MPS, 2007).

Seismic risk affects the whole study area (Fig. 9), showing a predictable correlation with the level of urbanisation and productive activities. For all the scenarios, the highest losses occur in parcels with soil category C (medium-density coarse soil or medium-stiff fine soil, depth > 30 m; Table 4), in the presence of a high number of masonry buildings. The highest level of expected annual loss is obtained for the 75-yr scenario (Table 6). For the 475- and 2500-yr scenarios, the larger potential damages, due to the higher earthquake intensity, are compensated by a longer return period, resulting in an overall lower annual loss. Over the whole study area, the three uncertainty methods provide similar results (Table 6).
Figure 10 represents the exceedance probability as a function of the expected losses for seismic risk obtained by means of the MC simulation, both for single parcels (grey lines are some random examples, all ranging between the minimum and the maximum, i.e. within the grey area) and for the whole study area (black line). The total expected loss, considering all seismic scenarios over the study area, was derived as the area under the curve.

On average, the uncertainty propagated by the different approaches is about 40 % of the expected value, and does not show any dependence on the risk value (see the upper quadrants in Fig. 11, where COV values are shown with respect to the level of risk, since census parcels are ordered). COV values resulting from FOSM are higher and more scattered, while MC provides lower and more constant values (Fig. 11).

Industrial risk assessment

A high COV value (0.4) was assigned to the loss to buildings provided in the safety plans of the industrial plants. In fact, the possible effects of the scenarios are described in a qualitative way, and the problem of converting descriptions into quantitative damage percentages can be affected by a high uncertainty. For the probability of occurrence of the events, a COV of 0.2 was assigned, considering that safety plans, financed by the companies themselves and compiled by private consulting groups, can be affected by an underestimation of the hazard.

Industrial risk is localised in only a few census parcels (Fig. 12). The probability of occurrence and the impacted area for each scenario were derived, together with the vulnerability, from the available safety plans provided by the companies (Table 7).
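The derivation of a total expected loss as the area under an exceedance curve can be reproduced for a discrete set of scenarios, where the step curve can be integrated exactly (a sketch; the scenario probabilities and losses below are hypothetical):

```python
def exceedance_curve(scenarios):
    """Annual probability of exceeding loss x, from (probability, loss) pairs.
    Returns the step curve as sorted (loss, exceedance probability) points."""
    losses = sorted({x for _, x in scenarios})
    return [(x, sum(p for p, xi in scenarios if xi > x)) for x in [0.0] + losses]

def area_under_curve(scenarios):
    """Integrate the step exceedance curve exactly; the area equals the
    expected annual loss sum(p_i * x_i), as shown by Jonkman et al. (2002)."""
    pts = exceedance_curve(scenarios)
    area = 0.0
    for (x0, f0), (x1, _) in zip(pts, pts[1:]):
        area += f0 * (x1 - x0)
    return area

# Hypothetical scenarios: (annual probability, loss in €).
scen = [(0.01, 2_000_000.0), (0.005, 6_000_000.0)]
e_l = area_under_curve(scen)
```

For this pair of scenarios the area evaluates to 0.01 × 2 M€ + 0.005 × 6 M€ = 50 000 € yr−1, i.e. exactly the sum of the probability-weighted losses, which is the property exploited when combining scenarios with different return periods.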
FOSM, MC and PE show quite similar values of the expected losses (Table 8), orders of magnitude smaller than flood and seismic risk. This is due to the fact that the impacted area is small and the probability of occurrence from the safety plans is extremely low. As discussed in the previous chapter, industrial risk is underestimated because we analysed only explosions at major risk plants, thus neglecting other types of scenario (toxic releases, etc.) and small industries. Due to the typology of the source of hazard (e.g. tanks and parts of the plants), damages produced in case of accident affect a small area around the plants. In this case, the use of census parcels as territorial units spreads the effects over a larger area, which is not always realistic. COV values have similar patterns with the three methods, reaching about 69 % of the expected value. This result is justified by the assignment of high COV values to the vulnerability of the exposed elements, because this vulnerability is described qualitatively in the safety plans and the position of the elements within the parcels is largely unknown.

Risk assessment

The presence of multiple sources of hazard in areas with strong industrial and urban development makes multiple QRA important for detecting the coexistence of threats and establishing priorities for mitigation strategies. The comparison of risks that are different in nature (natural vs. technological, spatially distributed vs. localised, frequent vs. extremely rare) demands a quantitative risk assessment (QRA) in order to deal with a common and comparable measure (e.g. the expected annual loss). For our case study, we observe that seismic risk affects the whole study area with the highest expected losses, E(L) equal to 64 000 000 € (Fig.
13). On the contrary, industrial accidents produce limited expected consequences, mostly due to a low occurrence probability and a very limited impact area outside the plant, which was the only type of impact considered in this study. This result reflects the characteristics of the specific study area and of its industrial context. Comparing the expected losses for single parcels, seismic risk turns out to be dominant in 93.8 % of the parcels (corresponding to about 96 % of the area). Flood dominates in the remaining 6.2 % of the parcels, which are located in proximity to the Mella river, mainly in downtown Brescia. The total expected losses obtained using MC, FOSM and PE are similar when considering the whole study area (Tables 3, 6 and 8). However, the comparison performed for the losses at each single parcel shows significant differences among the methods. In order to highlight these differences, pair-wise comparisons of the results provided by the three methods were performed for each parcel, and the frequency distributions of the expected loss differences were analysed (Fig. 14). The differences observed between MC and the other two methods always show a bimodal distribution, which is due to the way the uncertainty related to the number of buildings of different types in each parcel is treated. For completely urbanised parcels (49 % of the total), we assumed no uncertainty on the number of buildings, because further urbanisation would be impossible or extremely limited, or at most connected to changes in the building function. For the other parcels, the uncertainty was modelled by means of a Pareto pdf in MC, or a COV of 0.1 in PE and FOSM. Since the Pareto distribution is strongly skewed toward positive values, the expected losses for these parcels in MC (i.e.
the mean value of the loss distribution) are systematically higher than in the two other methods, for which the underlying distribution is always assumed to be symmetrical. The use of skewed distributions in a probabilistic risk analysis produces strong differences in the results obtained using different methods. In our analysis, this effect is undetectable in the overall result because a skewed distribution was used for only one variable and 51 % of the parcels. In other circumstances, this effect could be much greater. This would make MC preferable, because it is more suitable for accounting for skewness. Implementing skewness in PE is complicated, and impossible in FOSM. On the other hand, we should consider that dealing with non-normal distributions in risk analysis at the regional scale is not common, due to a general lack of information about the specific shape of the distributions. In conclusion, the choice of the most appropriate method of uncertainty propagation should be based on data availability, which also depends on the scale of the analysis, and on the sustainability of the computational effort.

The calculated expected losses are probably underestimated because they include only buildings (i.e. lifelines and agricultural/natural resources are not included), without a specific discrimination according to use (e.g. hospital vs. private house).

In this work, no domino effects were considered. Nevertheless, due to the presence of a high density of industrial plants, including those defined by Council Directive 96/82/EC as major risk plants, the possible consequences of domino effects are not a priori negligible.
Multiple risk assessment could represent an important tool for decision making and territorial planning, providing information on the typology of threats present in each place, on the related level of risk, and on their simultaneous presence, which could generate domino effects and amplify the effects of each single scenario. An effective territorial planning should account for the data provided by this type of study, compatibly with economic and environmental needs. For instance, new buildings could be planned with technical features suited to the threats they will be exposed to at a certain place. New residential or productive areas, and most sensitive structures (e.g. schools, hospitals), could be located in less exposed areas. Protective structures or non-structural mitigation measures could be implemented where their benefits would be maximised.

Uncertainty

The propagation of uncertainty in multiple QRA has the purpose of showing to the decision maker the effect of a lack of knowledge or precision in the calculation of risk. This is fundamental, considering that decisions often need to be made on the basis of the available data, even if these are not as detailed or complete as desirable. Uncertainty propagation, furthermore, provides a heuristic quantitative estimation of the reliability of the obtained results. This is useful in decision making, to establish whether a decision can be taken in the presence of a certain degree of uncertainty (which is in this case acceptable), or whether uncertainty must be reduced before coming to a decision. In the latter case, the efforts to obtain a more precise result can be directed to the right target. The three methods for uncertainty assessment provide a similar coefficient of variation of the output, ranging in mean from 0.1 to 1.2 (Figs. 6 and 7, Tables 6 and 8).
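A generic first-order second-moment propagation of the kind compared here can be sketched as follows; the loss model and input values are illustrative assumptions, not the study's model:

```python
# First-order second-moment (FOSM): propagate means and variances through f
# using a first-order Taylor expansion; for independent inputs,
# Var(f) ~ sum_i (df/dx_i)^2 * Var(x_i).

def fosm(f, means, sigmas, h=1e-6):
    """Return (mean, std, COV) of f estimated by FOSM with central differences."""
    mu = f(means)
    var = 0.0
    for i, (m, s) in enumerate(zip(means, sigmas)):
        xp, xm = list(means), list(means)
        xp[i], xm[i] = m + h, m - h
        grad = (f(xp) - f(xm)) / (2.0 * h)
        var += (grad * s) ** 2
    std = var ** 0.5
    return mu, std, std / mu

# Illustrative loss model: annual probability x vulnerability x unit value x count
loss = lambda x: x[0] * x[1] * x[2] * x[3]
means = [0.002, 0.4, 900.0, 25.0]   # hypothetical input means
covs = [0.3, 0.3, 0.2, 0.1]         # hypothetical input COVs
sigmas = [m * c for m, c in zip(means, covs)]

mu, std, cov = fosm(loss, means, sigmas)
# For a product model the output COV is ~ sqrt(sum of squared input COVs)
print(mu, cov)
```

For this multiplicative model the output COV reduces to the root-sum-square of the input COVs, which is why the output COV grows quickly once any single input (e.g. the number of exposed buildings) is poorly known.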
For the flood and seismic scenarios, uncertainty turns out to be lower for small, densely urbanised parcels. In these parcels, the uncertainty related to the number of buildings is null, and the uncertainty related to the value of residential and non-residential buildings (€ m−2) is low, since values are quite uniform over small areas. On the other hand, in large, less urbanised parcels, the value of buildings can vary significantly, introducing a higher uncertainty. The size of census parcels is a variable that must be considered with great care when QRA is performed. In general, the smallest census parcels are located in highly urbanised areas, where the number and value of exposed elements can be high. On the contrary, the largest census parcels are located in suburban areas, where the number of exposed elements can be relatively low but where there is also the maximum potential for future developments or for uncontrolled or rapid urbanisation. This causes large uncertainty in the number and type of exposed elements. Moreover, large census parcels are affected by uncertainty in the variables describing the event intensity (e.g. water depth, PGA, landslide velocity or size). Census parcels are the most commonly adopted land units for collecting and representing statistical information/data available at a regional/national scale (e.g. number of buildings of different types per parcel). Nevertheless, for many processes, the census parcel does not always have the optimal spatial resolution to account for the shape of the source areas, the propagation of the events, and the exact area of impact.

The effect of different vulnerability models on flood and seismic risk is shown in Fig. 15. For floods, the use of different vulnerability models leads to large differences in the total risk value with respect to USACE (2003): 15 % for Dutta et al. (2003), and 38 % for ICPR (2001). For earthquakes, we compared the Kostov et al. (2004), Kappos et al. (2006), Hancilar et al. (2006), and Rota et al. (2008) vulnerability models. The differences with respect to the Kappos et al. (2006) vulnerability model amount to 19 % for Kostov et al. (2004), 38 % for Rota et al. (2008), and 72 % for Hancilar et al. (2006). In some parcels, local differences can reach values up to 86 %. These results suggest that the choice of the vulnerability model has a strong impact on the whole risk assessment, introducing significant uncertainty. In general, the choice of a model should be based on (1) the geographic context in which the model was developed, which should be as similar as possible to the study area; (2) the detail of the model: if available, a model considering highly detailed data (e.g. different typologies of buildings, different numbers of floors, number of damage curves, etc.) could provide a better estimate of the level of damage; and (3) the conservativeness of the model.

Conclusions

Multiple quantitative risk assessment performed with three different methods for uncertainty propagation (MC, FOSM and PE) leads to similar overall results for the whole study area. However, significant differences in the expected value of total loss have been observed for single parcels (up to 19 %). This difference could be relevant when using the results of risk assessment for decision making, or territorial planning, at the local scale. In particular, large differences are observed when skewed distributions are used in the risk analysis, because of the different capabilities of the three methods to account for skewness. In this case, MC analysis is preferable.

Although the use of census parcels works well with societal and statistical data, we show that this type of terrain unit introduces large uncertainty related to the number and type of exposed elements, to the representative values for the variables describing the event intensity, and to the degree of exposure of the elements to the threat.
The proposed methodology for quantitative risk and uncertainty assessment can be applied over the whole national territory, being based on scenarios defined by national law and on zonations and data available at the national scale. However, since these scenarios can be produced with simplified approaches at small scales, the analysis can be affected by large uncertainties, which have to be considered.

Multiple QRA associated with uncertainty evaluation can represent an important tool for

- territorial planning and development according to the type of threat, the coexistence of different risks, and their level;

- priority assessment in fund allocation and in the implementation of mitigation measures;

- detection of risk hot spots where a more detailed risk assessment at a higher scale could be performed.

In all these situations, a quantification of risk must be accompanied by its uncertainty: all the assumptions and the lack of knowledge introduce an error which has to be accounted for.

Fig. 2. Values of peak ground acceleration PGA, in g, with exceedance probability of 10 % in 50 yr, and location of historical earthquakes. Magnitude is expressed as Mw, moment magnitude.

Fig. 3. Example of calculation of flood P(I|event) for census parcels.

Fig. 4. Economic value in euro m−2, W, for residential buildings (a, c) and non-residential buildings (b, e). Frequency of surface of census parcels in the study area (d).
In the propagation of uncertainty by means of PE, Rosenblueth's (1981) procedure for non-skewed independent variables was used. The procedure calculates the first and second moments of the function at points with coordinates corresponding to the expected value of each variable x_i, increased or reduced by one standard deviation. Here µ, σ² and COV are the mean, the variance and the coefficient of variation of the output function; µ_Xi, σ²_Xi and COV_i are the mean, the variance and the coefficient of variation of the input variable x_i.

Fig. 10. Expected losses as a function of exceedance probability for seismic risk (Kappos et al., 2006 vulnerability model). Grey curves for a random selection of 100 census parcels, ranging in the grey area between the minimum and the maximum level of risk. The curve for the whole study area is shown in black.

Fig. 11. Values of seismic risk for the 2500-yr scenario, and of the related uncertainty, for the three methods, calculated by means of the Kappos et al. (2006) vulnerability model. Parcels are ordered according to the level of risk. Light lines show the risk level increased and diminished by the standard deviation; the dark line indicates the value of risk. All the 2460 parcels of the study area are impacted.

Fig. 13. Expected losses as a function of exceedance probability for the analysed scenarios. Expected losses are those provided by MC. The value of flood risk under the curve corresponds to 9 643 805 € (USACE, 2003 vulnerability model), seismic risk equals 63 897 207 € (Kappos et al., 2006 vulnerability model), and industrial risk is 2956 €.

Fig. 14. Frequency of the differences in expected loss values between couples of methods for (a) the flood 500-yr scenario, and (b) the seismic 475-yr scenario. Arrows indicate the mean value of the differences for the whole study area.

Table 1. Example of the application of the three flood vulnerability models to a census parcel containing one 1-floor non-residential building completely impacted by the flooding corresponding to a return period of 500 yr.

Table 2. Uncertain variables, and values of the coefficients of variation (COV) adopted in the analysis. P(event) is the annual exceedance probability of the scenario, W the economic value.

Table 5. Example of calculation of the seismic vulnerability of a census parcel containing one 1-floor non-residential building shaken by a T_r 475 yr earthquake.

Table 6. Expected annual losses and COV values for seismic scenarios with return time (T_r) 75, 475 and 2500 yr, vulnerability model from Kappos et al. (2006). Range of values for all the census parcels, and total over the study area. Comparison of the three methods: Monte Carlo simulation, first-order second-moment, and point estimate.

Table 7. Vulnerability, P(L|I), to industrial risk as from safety plans.

Table 8. Expected annual losses E(L) (€ yr−1) and COV values for industrial scenarios. Comparison of the three methods: Monte Carlo simulation, first-order second-moment, and point estimate.
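Rosenblueth's point-estimate procedure for non-skewed independent variables, referred to above, can be sketched generically; the test function and input moments below are illustrative assumptions:

```python
from itertools import product

def rosenblueth_pe(f, means, sigmas):
    """Rosenblueth's two-point estimate for independent, non-skewed inputs:
    evaluate f at every corner mu_i +/- sigma_i, weight the 2^n corners
    equally, and form the first two moments of the output."""
    vals = []
    for signs in product((+1.0, -1.0), repeat=len(means)):
        x = [m + s * sd for m, sd, s in zip(means, sigmas, signs)]
        vals.append(f(x))
    m1 = sum(vals) / len(vals)                  # first moment (mean)
    m2 = sum(v * v for v in vals) / len(vals)   # second raw moment
    var = m2 - m1 * m1
    return m1, var ** 0.5

# Illustrative check on a two-variable product, where PE is exact:
# E[xy] = 10 * 2 = 20 and Var(xy) = mx^2*sy^2 + my^2*sx^2 + sx^2*sy^2 = 8.04
f = lambda x: x[0] * x[1]
mean, std = rosenblueth_pe(f, [10.0, 2.0], [1.0, 0.2])
print(mean, std)
```

The cost is 2^n function evaluations for n uncertain inputs, which is why PE is attractive at the regional scale compared with full Monte Carlo sampling, at the price of ignoring skewness.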
Origin of magnetic anisotropy in doped Ce$_2$Co$_{17}$ alloys Magnetocrystalline anisotropy (MCA) in doped Ce$_{2}$Co$_{17}$ and other competing structures was investigated using density functional theory. We confirmed that the MCA contribution from dumbbell Co sites is very negative. Replacing Co dumbbell atoms with a pair of Fe or Mn atoms greatly enhances the uniaxial anisotropy, in quantitative agreement with experiment, and this enhancement arises from electronic-structure features near the Fermi level, mostly associated with dumbbell sites. With Co dumbbell atoms replaced by other elements, the variation of anisotropy is generally a collective effect and contributions from other sublattices may change significantly. Moreover, we found that Zr doping promotes the formation of a 1-5 structure that exhibits a large uniaxial anisotropy, such that Zr is the most effective element for enhancing MCA in this system.

I. INTRODUCTION

The quest for novel high-energy permanent magnets without critical elements continues to generate great interest [1]. While a rare-earth-free permanent magnet is appealing, developing a Ce-based permanent magnet is also very attractive, because among rare-earth elements Ce is the most abundant and relatively cheap. Among Ce-Co systems, Ce 2 Co 17 has always attracted much attention due to its large Curie temperature T C and magnetization M. The weak point of Ce 2 Co 17 is its rather small easy-axis magnetocrystalline anisotropy (MCA), which must be improved for use as an applicable permanent magnet. The anisotropy in Ce 2 Co 17 , in fact, can be improved significantly through doping with various elements. Experimental anisotropy field H A measurements by dopant and stoichiometry are shown in Fig. 1. This anisotropy enhancement has been attributed to the preferential substitution effects of doping atoms [2,3]: (i) The four non-equivalent Co sites contribute differently [4] to the magnetic anisotropy in Ce 2 Co 17 .
Two out of the 17 Co atoms occupy the so-called dumbbell sites and have a very negative contribution to the uniaxial anisotropy, leading to the small overall uniaxial anisotropy; (ii) Doping atoms preferentially replace the dumbbell sites first, eliminating their negative contribution and increasing the overall uniaxial anisotropy. The above explanation is supported by the observation that with many different dopants, the anisotropy field in Ce 2 T x Co 17−x shows a maximum around x = 2. This corresponds to the number of dumbbell sites in one formula unit [5]. Neutron scattering or Mössbauer studies have suggested that Fe [12][13][14][15], Mn [16], and Al [7,17,18] atoms prefer to substitute at dumbbell sites. However, it is not clear whether only the preferential substitution effect plays a role in H A enhancement for all doping elements. For elements such as Zr, Ti, and Hf, the substitution preference is not well understood. Replacing the dumbbell Co atoms with a pair of large atoms may not always be the only energetically favorable configuration. For Mn and Fe, known to substitute at dumbbell sites, the elimination of negative contributions at those sites may explain the increase of magnetocrystalline anisotropy energy (MAE). It is as yet unclear why different elements give a different amplitude of MAE enhancement or what mechanism provides this enhancement. For permanent magnet applications, Fe and Mn are particularly interesting because they improve the anisotropy while preserving the magnetization with x < 2. Other dopants quickly reduce the magnetization and Curie temperature. Further tuning of magnetic properties for compounds based on Fe- or Mn-doped Ce 2 Co 17 would benefit from this understanding. In this work, we use density functional theory (DFT) to investigate the origin of the MAE enhancement in doped Ce 2 Co 17 . By evaluating the on-site spin-orbit coupling (SOC) energy [19,20], we resolved the anisotropy into contributions from atomic sites, spins, and orbital pairs.
Furthermore, we explained the electronic-structure origin of the MAE enhancement.

II. CALCULATION DETAILS

A. Crystal structure

Ce 2 Co 17 crystallizes in the hexagonal Th 2 Ni 17 -type (P6_3/mmc, space group no. 194) structure or the rhombohedral Th 2 Zn 17 -type (R-3m, space group no. 166) structure, depending on growth conditions and doping [10]. As shown in Fig. 2, both 2-17 structures can be derived from the hexagonal CaCu 5 -type (P6/mmm, space group no. 191) structure with every third Ce atom being replaced by a pair of Co atoms (referred to as dumbbell sites). The two 2-17 structures differ only in the spatial ordering of the replacement sites. In the CeCo 5 cell, a Ce atom occupies the 1a(6/mmm) site and two Co atoms occupy the 2c(−6m2) site, together forming a Ce-Co basal plane. Three Co atoms occupy the 3g(mmm) sites and form a pure Co basal plane. The primitive cell of hexagonal Ce 2 Co 17 (H-Ce 2 Co 17 ) contains two formula units while the rhombohedral Ce 2 Co 17 (R-Ce 2 Co 17 ) contains one. The Co atoms are divided into four sublattices, denoted by Wyckoff sites 18h, 18f, 9d, and 6c in the rhombohedral structure, and 12k, 12j, 6g, and 4f in the hexagonal structure. The 6c and 4f sites are the dumbbell sites. In the R-structure, Ce atoms form -Ce-Ce-Co-Co- chains with Co atoms along the z axis. The H-structure has two inequivalent Ce sites, denoted as 2c and 2b, respectively. Along the z direction, Ce 2b atoms form pure -Ce- chains and Ce 2c atoms form -Ce 2c -Co-Co- chains with Co dumbbell sites.

B. Computational methods

We carried out first-principles DFT calculations using the Vienna ab initio simulation package (VASP) [21,22] and a variant of the full-potential linear muffin-tin orbital (LMTO) method [23]. We fully relaxed the atomic positions and lattice parameters, while preserving the symmetry, using VASP.
The nuclei and core electrons were described by the projector augmented-wave potential [24] and the wave functions of the valence electrons were expanded in a plane-wave basis set with a cutoff energy of 520 eV. The generalized gradient approximation of Perdew, Burke, and Ernzerhof was used for the exchange and correlation potentials. The MAE is calculated below as K = E 100 − E 001 , where E 001 and E 100 are the total energies for the magnetization oriented along the [001] and [100] directions, respectively. Positive (negative) K corresponds to uniaxial (planar) anisotropy. The spin-orbit coupling is included using the second-variation procedure [25,26]. The k-point integration was performed using a modified tetrahedron method with Blöchl corrections. To ensure the convergence of the calculated MAE, dense k meshes were used. For example, we used a 16×16×16 k-point mesh for the calculation of the MAE in R-Ce 2 Co 17 . We also calculated the MAE by carrying out all-electron calculations using the full-potential LMTO (FP-LMTO) method to check the anisotropy results. To decompose the MAE, we evaluate the anisotropy of the scaled on-site SOC energy K so = (1/2)⟨V so ⟩ 100 − (1/2)⟨V so ⟩ 001 . According to second-order perturbation theory [19,20], K so can be resolved into a sum of on-site terms K so (i), where i indicates the atomic sites. Unlike K, which is calculated from the total energy difference, K so is localized and can be decomposed into sites, spins, and subband pairs [19,20].

III. RESULTS

A. Ce2Co17

TABLE I. Atomic spin ms and orbital m l magnetic moments (µB/atom) in CeCo5, R-Ce2Co17 and H-Ce2Co17. Atomic sites are grouped to reflect how the 2-17 structure arises from the 1-5 structure. Calculated interstitial spin moments are around −1.1 µB/f.u. in Ce2Co17 and −0.4 µB/f.u. in CeCo5. Measured magnetization is 26.5 µB/f.u. in H-Ce2Co17 at 5 K [6], and 7.12 µB/f.u. in CeCo5 [27]. Dumbbell sites are denoted as 6c and 4f in R-Ce2Co17 and H-Ce2Co17, respectively.

Atomic spin and orbital magnetic moments in Ce 2 Co 17 and CeCo 5 are summarized in Table I.
The calculated magnetizations are 25.2 and 25.8 µ B /f.u. in R-Ce 2 Co 17 and H-Ce 2 Co 17 , respectively, and 6.75 µ B /f.u. in CeCo 5 , which agree well with experiments [6]. The Ce spin couples antiferromagnetically with the Co spin. The orbital magnetic moment of Ce is antiparallel to its spin, which reflects Hund's third rule. In the Ce-Co plane of Ce 2 Co 17 the Ce atoms are partially replaced by dumbbell Co atoms and this leads to an increased moment for the Co atoms (in that plane) as compared to CeCo 5 .

The dumbbell sites

To understand the low uniaxial anisotropy in Ce 2 Co 17 , we resolve the anisotropy into atomic sites by evaluating K so . The anisotropy contributions in Ce 2 Co 17 can be divided into three groups: the pure Co plane (3g in CeCo 5 , 12k + 6g in H-Ce 2 Co 17 , or 18h + 9d in R-Ce 2 Co 17 ), the Ce-Co plane, and the Co dumbbell pairs. We found that the MAE contributions from these three groups in the two 2-17 structures are very similar: the dumbbell Co sites have a very negative contribution to the uniaxial anisotropy; the pure-Co basal plane has a negligible or even slightly negative contribution to the uniaxial anisotropy; only the Ce-Co basal plane provides uniaxial anisotropy in Ce 2 Co 17 . The two inequivalent Ce sites contribute differently to the uniaxial anisotropy in the H-Ce 2 Co 17 structure. Ce(2b) supports uniaxial anisotropy while the Ce(2c) moment prefers to be in-plane. However, the total contribution from the two Ce sites is positive, as in the R-structure. Intrinsic magnetic properties and the effect of doping on them are very similar in the two 2-17 structures. We only discuss the results calculated using the R-structure because it has a smaller primitive cell than the H-structure, and the most interesting substituents, Fe and Mn, promote its formation [5].

B. MAE in Ce2T2Co15

We first calculate the MAE in Ce 2 T 2 Co 15 with a variety of doping elements T, by assuming that the pair of Co dumbbell atoms is replaced by a pair of T atoms.

FIG. 3. Magnetic anisotropy in Ce2T2Co15 and (Ce0.67T0.33)Co5 with T = Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zr, and Hf. In Ce2T2Co15, T atoms occupy the dumbbell sublattice. The (Ce0.67T0.33)Co5 structure was obtained by replacing the pair of dumbbell Co atoms in the original Ce2Co17 with a single T atom. K values derived from experimental HA measurements [5,11].

It is expected that the enhancement of coercivity may also partially arise from the increase of MAE, although Cu atoms had been reported to randomly occupy all Co sites [17]. Moreover, the trend of MAE in Ce 2 T 2 Co 15 , as shown in Fig. 3, is rather generic. We also found a similar trend in Y 2 T 2 Co 15 and La 2 T 2 Co 15 : MAE increases with T = Mn or late 3d elements. Calculations using the FP-LMTO method also show similar trends of MAE. The total K so , its contribution from the dumbbell site, and the other sublattices' contributions are shown in Fig. 4. The total K so closely follows K for all doping elements, thus validating our use of K so to resolve the MAE and understand its origin. As shown in Fig. 4, the Co dumbbell sublattice in R-Ce 2 Co 17 has a very negative contribution to the uniaxial anisotropy, K so (6c) = −1 meV/f.u. (−0.5 meV/atom). Replacing Co with other 3d elements decreases or eliminates this negative contribution, or even makes it positive, as with T = Mn. For the dumbbell site contributions, only four elements with large magnetic moments (all ferromagnetically coupled to the Co sublattice), Mn, Fe, Co, and Ni, have non-trivial contributions. Atoms on both ends of the 3d series have negligible contributions to the uniaxial anisotropy, as expected. Although Cu and Zn have the largest SOC constants among the 3d elements, they are nearly non-magnetic; hence, they barely contribute to the MAE itself [20]. The light elements Ti, V, and Cr have small spin moments between 0.36 and 0.55 µ B (antiparallel to the Co sublattice) and smaller SOC constants, together resulting in a small K so (T ).
Although the dumbbell site contribution dominates the MAE enhancement for T = Fe and Mn, it is obvious that the variation of MAE is a collective effect, especially for T = Cu or Zn. While the −1 meV/f.u. negative contribution from the dumbbell sublattice is eliminated with T = Cu and Zn, the contributions from the remaining sublattices increase by about 2 and 3 meV/f.u., respectively. Similarly, for the doping of non-magnetic Al atoms, the calculated MAE in Ce 2 Al 2 Co 15 has a large value of K = 3.8 meV/f.u. Experimentally, Al atoms had been found to prefer to occupy the dumbbell site and also to increase the uniaxial anisotropy [7,17]. MAE often depends on subtle features of the band structure near the Fermi level; therefore, the collective effect of MAE variation should be expected for a metallic system [28]. The modification of one site, such as by doping, unavoidably affects the electronic configuration of other sites and their contribution to MAE. We found that all dopings except Fe and Mn decrease the magnetization, which is consistent with the experiments by Fujii et al. [5] and Schaller et al. [29]. Ce 2 Fe 2 Co 15 and Ce 2 Mn 2 Co 15 have slightly larger magnetizations than Ce 2 Co 17 , by 5% and 8%, respectively. It is worth noting that the experimental results on Mn doping are rather inconclusive. A slight decrease of magnetization with Mn doping has also been reported [11]. Sublattice-resolved K so in Ce 2 T 2 Co 15 for T = Co, Fe, and Mn are shown in Fig. 5(a). The dominant enhancement of MAE is from the dumbbell site, although contributions from other sublattices also vary with T. To understand this enhancement of K so from the dumbbell sites, we further resolved K so into contributions from allowed transitions between all pairs of subbands. The dumbbell sites have 3m symmetry. Without considering SOC, the five d orbitals on T sites split into three groups: the d z 2 state, the degenerate (d yz , d xz ) states, and the degenerate (d xy , d x 2 −y 2 ) states.
Equivalently, they can be labeled as m = 0, m = ±1, and m = ±2 using cubic harmonics. K so (T ) can be expanded in contributions from transitions between pairs of subbands [20], where ξ is the SOC constant and χ mm′ is the difference between the spin-parallel and spin-flip components of the orbital pair susceptibility. Contributions to K so (T ) resolved into transitions between pairs of subbands are shown in Fig. 5(b). The four groups of transitions correspond to the four terms in Eq. (1). The dominant effect is from |0⟩ ↔ |±1⟩, namely the transitions between the d z 2 and (d yz , d xz ) orbitals. This contribution is negative for T = Co, nearly disappears for T = Fe, and even becomes positive and large for T = Mn. The interesting dependence of the |0⟩ ↔ |±1⟩ contribution on T can be understood by investigating how the electronic structure changes with different T elements. The sign of the MAE contribution from transitions between a pair of subbands |m, σ⟩ and |m′, σ′⟩ is determined by the spin and orbital character of the involved orbitals [20,30]. Inter-|m| transitions |0⟩ ↔ |±1⟩ promote easy-plane anisotropy within the same spin channel and easy-axis anisotropy between different spin channels. The scalar-relativistic partial densities of states (PDOS) projected on the dumbbell site are shown in Fig. 6. For T = Co, the majority spin channel is nearly fully occupied and has a very small DOS around the Fermi level, while the minority spin channel has a larger DOS. The transitions between the d z 2 and (d yz , d xz ) states across the Fermi level and within the minority spin channel, namely |0, ↓⟩ ↔ |±1, ↓⟩, promote easy-plane anisotropy. For T = Fe, the PDOS of d z 2 and (d yz , d xz ) are rather small near the Fermi level in both spin channels and the net contribution from |0⟩ ↔ |±1⟩ becomes negligible. For T = Mn, the Fermi level intersects a large peak of the d z 2 state in the minority spin channel.
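The selection rule invoked here, that |0⟩ ↔ |±1⟩ transitions couple through L_x but not L_z so that same-spin transitions favor easy-plane and spin-flip transitions favor easy-axis anisotropy, can be checked with a small numpy sketch of the standard l = 2 angular momentum matrices (textbook quantum mechanics, not this paper's code):

```python
import numpy as np

# l = 2 angular momentum matrices in the complex |m> basis (m = -2..2), hbar = 1.
l = 2
ms = np.arange(-l, l + 1)
Lz = np.diag(ms).astype(complex)
Lp = np.zeros((5, 5), dtype=complex)                      # raising operator L+
for i, m in enumerate(ms[:-1]):
    Lp[i + 1, i] = np.sqrt(l * (l + 1) - m * (m + 1))     # <m+1|L+|m>
Lx = (Lp + Lp.conj().T) / 2                               # Lx = (L+ + L-)/2

# Real d orbitals in the |m> basis (one common sign convention):
dz2 = np.zeros(5, dtype=complex); dz2[2] = 1.0            # pure m = 0
dyz = np.zeros(5, dtype=complex)
dyz[1] = dyz[3] = 1j / np.sqrt(2)                         # i(|-1> + |+1>)/sqrt(2)

# The |0> <-> |+-1> pair couples via Lx (matrix element sqrt(3)) but not via
# Lz, so in second-order perturbation theory its same-spin contribution
# ~ |<Lz>|^2 - |<Lx>|^2 is negative (easy-plane), while the spin-flip
# contribution enters with the opposite sign (easy-axis).
lz_elem = abs(dz2.conj() @ Lz @ dyz)
lx_elem = abs(dz2.conj() @ Lx @ dyz)
print(lz_elem, lx_elem)
```

This is why the minority-channel d_z2 peak at the Fermi level matters so much: it feeds exactly the transitions with the large L_x matrix element.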
The spin-flip transitions |0, ↓⟩ ↔ |±1, ↑⟩ give rise to a large positive contribution to the uniaxial anisotropy.

D. Zr, Ti, and Hf doping in Ce2Co17

The failure to reproduce the high anisotropy introduced by other dopants, such as Zr, Ti, and V, is likely due to our oversimplified assumption that a pair of T atoms always replaces a pair of Co dumbbell atoms. Unlike Fe and Mn, the site occupancy preference for those dopants is not well understood [31]. Since Zr doping most effectively enhanced H A in experiments, here we focus on Zr doping. Both volume and chemical effects likely play important roles in the substitution site preference. To better understand the Zr site preference, we calculated the formation energy of Ce 2 ZrCo 16 with the Zr atom occupying one of the four non-equivalent Co sites and found that Zr also prefers to occupy the dumbbell sites, likely due to the relatively large volume around the dumbbell sites. The formation energies are higher by 39, 58, and 81 meV/atom when Zr occupies the 18f, 18h, or 9d sites, respectively. Considering that Zr atoms are relatively large, we investigated another scenario by replacing the pair of Co dumbbell atoms with a single Zr atom, as suggested by Larson and Mazin [31]. Indeed, this latter configuration of Ce 2 ZrCo 15 has the lowest formation energy, which is 3 meV/atom lower than that of Ce 2 Zr 2 Co 15 and 1 meV/atom lower than that of Ce 2 Co 16 Zr (with Zr replacing one of the two dumbbell Co atoms in Ce 2 Co 17 ). That is, with Zr additions the CeCo 5 structure is preferred over the Ce 2 Co 17 -based structure. The resulting Ce 2 ZrCo 15 has a 1-5 structure, (Ce 0.67 Zr 0.33 )Co 5 , with one-third of the Ce in the CeCo 5 structure, shown in Fig. 2(a), replaced by Zr atoms. Hence, the formation energy calculations indicate that the realized structure is likely a mix of 2-17 and 1-5 structures.
Interestingly, this may be related to experimental observations that successful 2-17 magnets usually have one common microstructure, i.e., separated cells of the 2-17 phase surrounded by a thin shell of a 1-5 boundary phase, and that Zr, Hf, or Ti additions promote the formation of such a structure [3]. The calculated anisotropy in Ce 2 ZrCo 15 , or equivalently (Ce 0.67 Zr 0.33 )Co 5 , is about 4 MJ m −3 and much larger than that of Ce 2 Zr 2 Co 15 . Analysis of K so reveals that not only is the negative contribution from the previous dumbbell sites eliminated, but, more importantly, the pure Co plane becomes strongly uniaxial. For T = V and Ti, the calculated MAE in this configuration is also much larger than that of Ce 2 T 2 Co 15 , as shown in Fig. 3. Similarly, a large MAE of 2.41 meV/f.u. was obtained for (Ce 0.67 Hf 0.33 )Co 5 .

IV. CONCLUSION

Using density functional theory, we investigated the origin of anisotropy in doped Ce 2 Co 17 . We confirmed that the dumbbell sites have a very negative contribution to the MAE in Ce 2 Co 17 , with a value of about −0.5 meV/atom. The enhancement of MAE due to Fe and Mn doping agrees well with experiments and can be explained by the preferential substitution effect, because the enhancement is dominated by the dumbbell sites. The transitions between the d z 2 and (d yz , d xz ) subbands on dumbbell sites are responsible for the MAE variation, and these transitions can be explained by the PDOS around the Fermi level, which in turn depends on the element T occupying the dumbbell site. For Zr doping, the calculated formation energy suggests that the real structure is likely a mix of 2-17 and 1-5 structures, and the resulting 1-5 structure has a large anisotropy, which may explain the large MAE enhancement observed in experiments. The variation of MAE due to doping is generally a collective effect. Doping on dumbbell sites may significantly change the contributions from other sublattices and hence the overall anisotropy.
It is worth investigating other non-magnetic elements with a strong dumbbell site substitution preference, because they may increase the total anisotropy in this system by increasing the contributions from other sublattices.
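Anisotropy energies above are quoted both per formula unit (meV/f.u.) and as energy densities (MJ m−3); the two are related only through the cell volume. The sketch below uses an assumed, purely illustrative formula-unit volume, not a value taken from this paper:

```python
# Convert an anisotropy energy per formula unit into an energy density.
# The formula-unit volume below is an assumed, order-of-magnitude value for
# illustration only; it is NOT taken from the paper.
MEV_TO_J = 1.602176634e-22  # 1 meV in joules

def mae_density_mj_m3(k_mev_per_fu, volume_per_fu_m3):
    """MAE in MJ/m^3 given meV per formula unit and the volume per formula unit."""
    return k_mev_per_fu * MEV_TO_J / volume_per_fu_m3 / 1e6

# Hypothetical example: 4 meV/f.u. in an assumed 0.16 nm^3 formula-unit volume
k = mae_density_mj_m3(4.0, 0.16e-27)
print(k)
```

A few meV per formula unit in a volume of a fraction of a cubic nanometre thus corresponds to energy densities of a few MJ m−3, the scale relevant for permanent magnets.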
Usefulness of the CONUT index upon hospital admission as a potential prognostic indicator of COVID-19 health outcomes

Abstract

Background: In-hospital mortality in patients with coronavirus disease 2019 (COVID-19) is high. Simple prognostic indices are needed to identify patients at high risk of adverse COVID-19 health outcomes. We aimed to determine the usefulness of the CONtrolling NUTritional status (CONUT) index as a potential prognostic indicator of mortality in COVID-19 patients upon hospital admission.

Methods: This is a retrospective observational study of a large cohort of COVID-19 patients. In addition to descriptive statistics, a Kaplan–Meier mortality analysis and a Cox regression were performed, as well as a receiver operating characteristic (ROC) curve analysis.

Results: From February 5, 2020 to January 21, 2021, there was a total of 2969 admissions for COVID-19 at our hospital, corresponding to 2844 patients. Overall, the baseline (within 4 days of admission) CONUT index could be scored for 1627 (57.2%) patients. Patients' age was 67.3 ± 16.5 years and 44.9% were women. The CONUT severity distribution was: 194 (11.9%) normal (0–1); 769 (47.2%) light (2–4); 585 (35.9%) moderate (5–8); and 79 (4.9%) severe (9–12). Mortality at 30 days after admission was 3.1% in patients with normal-risk CONUT, 9.0% with light, 22.7% with moderate, and 40.5% in those with severe CONUT (P < 0.05). An increased risk of death associated with a greater baseline CONUT stage was sustained in a multivariable Cox regression model (P < 0.05). An increasing baseline CONUT stage was associated with a longer duration of admission, a greater requirement for non-invasive and invasive mechanical ventilation, and other clinical outcomes (all P < 0.05). The ROC curve of CONUT for mortality had an area under the curve (AUC) and 95% confidence interval of 0.711 (0.676–0.746).
Conclusion: The CONUT index upon admission is potentially a reliable and independent prognostic indicator of mortality and length of hospitalization in COVID-19 patients. Introduction The global spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) producing the current coronavirus disease 2019 (COVID-19) pandemic has taken health systems around the world to the brink of collapse. [1,2] There is still a need for evidence about its pathogeny and clinical course, and their determining factors, among others. The clinical expression of COVID-19 is highly heterogeneous and, with severity ranging from mild to critical, it produces a wealth of systemic effects and damage in diverse organs, including the respiratory, circulatory, and neurological systems. It is of the utmost importance to identify, among admitted COVID-19 patients, those who are at the highest risk of complications during hospitalization, not only to achieve better quality of clinical care but also to optimize the use of health care resources. Therefore, it is vital to identify and implement a valid prognostic index upon admission, due to the limited value of classical semiology in this disease, where anamnesis, physical exploration, and complementary tests do not provide sufficient evidence to forecast individual outcomes. An index of clinical risk with enough sensitivity and predictive ability can help to identify promptly those COVID-19 patients who will develop severe disease. It can also have great utility in monitoring the disease during the follow-up of COVID-19. Since the beginning of the pandemic, diverse investigations have tried to identify a parameter/index with prognostic utility, [3][4][5] although with uneven results. This is because of poor and premature reporting, complexity, high risk of bias, and several other limitations. Hence, the need to explore new prognostic indices to evaluate the risk of COVID-19 is considered a research and clinical priority.
Our aim was to test the usefulness and validity of the CONtrolling NUTritional status (CONUT), an already available and valid score for early detection and continuous control of undernutrition in hospitalized patients, [6] as a prognostic tool to evaluate the risk of worse progression and increased mortality in admitted COVID-19 patients. CONUT is based on serum thresholds of albumin, cholesterol, and total lymphocytes, with a range from 0 to 12. It is an easy score obtainable from parameters available in routine blood tests, calculated either mentally or automatically by an algorithm implemented in the laboratory information system, in either primary or specialized medical care. Therefore, the CONUT predictive power to help in individual medical decision-making, as well as in the monitoring of high-risk patients, was tested. Ethical approval The research protocol was approved by the Ethics Committee for Medical and Drug Research of Hospital Universitario de La Princesa on May 20, 2021, acta CEIm 10/21 with No. 4468. Study design and settings This is a retrospective observational study in a cohort of hospitalized patients from February 5th, 2020 to January 21st, 2021 at the Hospital Universitario de La Princesa, Madrid, Spain. Participants This study included information from clinical records of all adult patients (age ≥ 18 years) with a positive COVID-19 clinical diagnosis upon hospital admission, confirmed either by positive antigen or polymerase tests. Variables and outcome The primary outcome of interest was in-hospital mortality up to 30 days from admission, which was obtained from the electronic medical records, with right truncation after discharge. There was no further follow-up over the phone or by other methods. The biometric, laboratory, and comorbidity variables were obtained and analyzed. Among routine laboratory variables, albumin, cholesterol, and total count of lymphocytes were used to calculate the CONUT score.
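As a rough illustration, the CONUT scoring can be sketched in Python. The stage cut-offs (normal 0–1, light 2–4, moderate 5–8, severe 9–12) are taken from this study; the per-component thresholds below are an assumption borrowed from the original CONUT publication (de Ulíbarri et al.), since the study's own Table 1 is the authoritative source.

```python
def conut_score(albumin_g_dl, cholesterol_mg_dl, lymphocytes_per_ul):
    """Sum the three CONUT components into a total score (0-12).

    Component thresholds follow the original CONUT publication
    (assumed here; this study's Table 1 is authoritative).
    """
    if albumin_g_dl >= 3.5:
        alb = 0
    elif albumin_g_dl >= 3.0:
        alb = 2
    elif albumin_g_dl >= 2.5:
        alb = 4
    else:
        alb = 6
    if cholesterol_mg_dl >= 180:
        chol = 0
    elif cholesterol_mg_dl >= 140:
        chol = 1
    elif cholesterol_mg_dl >= 100:
        chol = 2
    else:
        chol = 3
    if lymphocytes_per_ul >= 1600:
        lym = 0
    elif lymphocytes_per_ul >= 1200:
        lym = 1
    elif lymphocytes_per_ul >= 800:
        lym = 2
    else:
        lym = 3
    return alb + chol + lym

def conut_stage(score):
    """Map a total score to the four risk stages used in this study."""
    if score <= 1:
        return "normal"
    if score <= 4:
        return "light"
    if score <= 8:
        return "moderate"
    return "severe"
```

For example, under these assumed thresholds a patient with albumin 2.8 g/dL, cholesterol 120 mg/dL, and 900 lymphocytes/µL scores 4 + 2 + 2 = 8, i.e., moderate risk.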
Considering blood test results up to the fourth day was a pragmatic decision, to take into account biochemistry (of cholesterol, lymphocytes, and albumin) not assessed upon admission, mainly in the emergency room, due to the collapse of laboratories during the peak of the first pandemic wave. Patients were classified into four stages according to the score obtained in the CONUT index (normal risk 0-1, light risk 2-4, moderate risk 5-8, and severe risk 9-12), depending on the blood threshold stages for albumin, total cholesterol, and total lymphocytes [Table 1]. Statistical methods A first descriptive analysis of the patients' characteristics was performed by calculating central tendency and dispersion measures of quantitative variables. For qualitative variables, comparison of proportions was tested by using the χ² test or the Fisher exact test, whenever necessary. We performed Kolmogorov-Smirnov and Shapiro-Wilk tests on all continuous variables and confirmed their normal distribution. In addition to descriptive statistics, a Kaplan-Meier analysis of in-hospital mortality up to 30 days from admission was performed. To obtain the hazard ratios, a Cox proportional hazards regression (univariable and multivariable analysis) was fitted over the statistically significant variables obtained from the univariable analysis (age, sex, smoker, and CONUT were used as categorical variables; and height, weight and body mass index [BMI] as continuous variables). A receiver operating characteristic (ROC) curve for CONUT on mortality and its area under the curve (AUC) with 95% confidence interval were also estimated. Data management, statistical calculations, and graphical plots were conducted using the R statistical software (https://www.R-project.org/); the particular packages used were survival, survminer, cmprsk, and ggplot. Results Overall, during the study period there were 2969 adult admissions with a positive COVID-19 clinical diagnosis.
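The Kaplan-Meier analysis described in the Methods can be sketched as the standard product-limit estimator in pure Python (an illustrative re-implementation with synthetic data, not the authors' R code):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times  -- follow-up in days from admission
    events -- 1 = in-hospital death, 0 = censored (e.g., discharge)
    Returns (time, survival probability) pairs at each event time.
    """
    event_times = sorted({t for t, e in zip(times, events) if e == 1})
    curve, s = [], 1.0
    for t in event_times:
        # number still under observation at time t
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        s *= 1.0 - deaths / at_risk
        curve.append((t, s))
    return curve

# Synthetic example: deaths on days 1 and 3, two censored patients.
print(kaplan_meier([1, 2, 3, 4], [1, 0, 1, 0]))  # [(1, 0.75), (3, 0.375)]
```

Stratifying patients by CONUT stage and running this estimator per stratum reproduces the shape of the survival curves compared with the log-rank test in the paper.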
After excluding the episodes corresponding to re-admissions (n = 125; 102 patients with one re-admission, six patients with two re-admissions, one patient with five re-admissions, and one patient with six re-admissions), these admissions corresponded to 2844 single COVID-19 patients. Of these, at least one of the three CONUT variables was unavailable for 1217 patients. Accordingly, clinical data from 1627 patients, for whom a CONUT index could be calculated from the first blood test results obtained up to the fourth day of admission, were finally analyzed [Figure 1]. The survival rate of the whole cohort can be found in [Supplementary Figure 1, http://links.lww.com/CM9/A786]. The distribution of patients according to the CONUT stages was as follows: 11.9% (n = 194) of the subjects were classified as normal risk, 47.2% (n = 769) as light, 35.9% (n = 585) as moderate, and 4.9% (n = 79) as severe. A direct, clinically and statistically significant association of baseline CONUT with length of hospitalization was observed. The higher the CONUT stage, the longer the duration of hospitalization, from 7.9 ± 9.2 days in the low-risk stage up to 22.1 ± 25.2 days in the severe-risk CONUT stage (P < 0.001). Having a higher CONUT stage (moderate or high risk) was also related (P < 0.001) to increased use of resources, including non-invasive mechanical ventilation (NIMV), invasive mechanical ventilation (IMV), and management in intermediate care respiratory units (ICRU) and intensive care units (ICU). Further, a high score in the CONUT index was also related (P < 0.001) to a higher risk of 30-day mortality, from 3.1% in the normal-risk CONUT stage up to 40.5% in the severe CONUT stage [Table 2]. Overall, men with COVID-19 had a higher CONUT stage than women with COVID-19 (P < 0.001), and men had a higher proportion of hospital admissions for COVID-19 than women [Figure 2].
In addition, higher scores on the CONUT index were also related to lower BMI (P = 0.003) and increased age (P < 0.001). The categorization of BMI (i.e., underweight, normal, overweight, and obesity) was tested but did not yield statistically significant results (data not shown). These results are consistent with those presented in [Table 3], corresponding to an analysis of the risk of mortality according to the CONUT score, where it can be appreciated that in the crude analysis, as well as in a multivariable Cox regression analysis adjusted by age and sex, a higher CONUT stage was associated with a higher risk of mortality: for light CONUT, hazard ratio (HR) = 1.72, 95% CI: 0.75 to 3.98; for moderate CONUT, HR = 2.61, 95% CI: 1.14 to 5.95; and for severe CONUT, HR = 2.77, 95% CI: 1.14 to 6.73. Specifically, there was a statistically significant difference between the normal CONUT stage and the moderate (P < 0.023) and severe-risk stages (P < 0.024), although there was no statistical significance between the normal and light stages (P = 0.202). The Kaplan-Meier survival curves [Figure 3] identify a clear distinction in the survival probability of the four groups from admission, in particular between the CONUT stages light to severe from day three after admission, with only some curve overlap in those with normal or light CONUT stages (log-rank test P < 0.001). Finally, the ROC curve of CONUT for mortality in 1627 admitted patients is shown in Figure 4, with an AUC and 95% CI of 0.711 (0.676–0.746). Discussion The COVID-19 pandemic has had a great impact on health care systems all over the world, making health workers struggle in their duty due to the absence of appropriate tests to aid them in a correct and confident decision-making process for these new patients.
[7] This has led to a gap in patient care, not only because of the underlying lack of knowledge of a new disease with a rapid expansion and burden but also because of the uncertainty about the possible health outcomes of each particular patient, negatively influencing the functioning of health care systems and disease control. [8,9] In consequence, prognostic systems and tools that could enable early recognition of patients at higher risk of severe health outcomes, especially the most vulnerable ones, remain a research gap. [10] The CONUT index is a potential candidate in this respect, because it has accumulated evidence in the past, has been applied to a wide range of severe conditions, and has been tested during aggressive therapeutic procedures. These include a great variety of cancers in diverse locations and types, as well as many acute and chronic conditions, and predictive studies after medical, surgical, radio- and chemo-therapies. [11][12][13][14] Recently, a first use of CONUT in COVID-19 was reported, although with a limited sample size. Wei et al [15] first concluded that a moderate or severe CONUT stage is an independent risk factor for greater mortality that can help in the recognition of high-risk patients. This was also described by Chen et al, [16] Wang et al, [17] Fong et al, [18] and Zhou et al. [19] CONUT has also been applied to the elaboration of recommendations for the nutritional treatment of oncologic patients before the advent of COVID-19. [20] It has also demonstrated its applicability in primary care, enabling more efficient medical monitoring of COVID-19 patients by adjusting care to patients' needs without transferring them. [21] Our study has confirmed that CONUT is an independent potential predictor of severe disease after infection by SARS-CoV-2, both in terms of mortality and hospitalization duration.
In this manner, CONUT allows the identification of those patients who will most probably require management in intermediate care or ICU admission and will prospectively need NIMV or IMV, warning physicians to apply closer monitoring and helping them to act as early as possible, limiting the risk of complications and permitting a more efficient allocation of resources. The CONUT index has been comprehensively confirmed by independent researchers and in many settings as a fine indicator of both short- and long-term prognosis for cancer patients. Also, CONUT has been used for research in inflammatory conditions (infectious or not) and degenerative diseases, as well as to explore the side effects of therapeutic procedures such as surgery, radio/chemotherapy, etc. This is because CONUT captures and quantifies the impact of all these clinical conditions on the physiological balance and homeostasis of the cellular environment. The diversity of organs affected by COVID-19 is one of the main reasons that led us to study the usefulness of the method in this pathology, as there was equipoise about the usefulness of CONUT in COVID-19 before starting this research. Thus, CONUT being an adequate score, we envisage it could be tested in the three main phases of COVID-19, where it might help physicians in taking decisions. As of June 2021, the COVID-19 end-game is far from completion, but we endorse calls for elimination rather than mitigation strategies. [22,23] In future analyses, the usefulness of CONUT might be further evaluated, not only upon hospital admission, but also before and after admission, during the entire clinical course, enabling professionals to take early decisions on treatment effectiveness and other relevant outcomes, including the need to manage patients in special units like the ICRU or ICU. Other investigators and hospitals are encouraged to study CONUT in diverse populations, to expand the applicability of this system.
Conclusions We conclude that CONUT can be helpful as a potential prognostic index of clinical risk in COVID-19 hospitalized patients, with utility for predicting mortality and hospitalization duration. Although <5% of hospitalized patients had a severe baseline CONUT stage, even the light and moderate CONUT stages were capable of identifying subgroups of COVID-19 patients at higher risk in all the clinical outcomes analyzed. CONUT holds potential for tighter control of the clinical course during hospitalization, and for guiding the deployment of health resources such as IMV and NIMV and management in intermediate respiratory care and intensive care units, simply by scoring only three parameters that are easily obtained from blood tests in either primary or hospital care. Availability of data and materials Data and coding can be requested by contacting the corresponding author and study team.
The Number of Adverse Childhood Experiences Is Associated with Emotional and Behavioral Problems among Adolescents This study aims to examine the association of adverse childhood experiences (ACE) with emotional and behavioral problems (EBP) among adolescents and the degree to which this association is stronger for more ACE. In addition, we assessed whether socioeconomic position (SEP) modifies the association of ACE with EBP. We obtained data from 341 adolescents aged 10–16 (mean age = 13.14 years; 44.0% boys), the baseline of a cohort study. We measured EBP with the strengths and difficulties questionnaire and socioeconomic position (SEP) with self-reported financial status. We used generalized linear models to analyze the association between ACE (0 vs. 1–2 vs. 3 or more) and EBP, and the modifying effect of SEP. Adolescents with 1–2 ACE (regression coefficient: 0.19; 95%-confidence interval (CI): 0.06–0.32) and with 3 or more ACE (0.35; 0.17–0.54) reported more overall problems compared with adolescents without ACE. Moreover, adolescents with 1–2 ACE (0.16; −0.01–0.32, and 0.16; 0.03–0.29) and with 3 or more ACE (0.33; 0.10–0.56, and 0.28; 0.09–0.47) reported more emotional problems and behavioral problems, respectively. The interactions of SEP with ACE were not significant. ACE are related to EBP among adolescents, with a clear dose-response association, and this association holds similarly for all SEP categories. Introduction Adverse childhood experiences (ACE) relate to various negative experiences at young ages; these include abuse and/or neglect of a child, domestic violence towards the youth's mother, household substance abuse, household mental illness, parental separation/divorce, and a household member with a history of jail/imprisonment [1]. Their prevalence is high; e.g., Bellis et al. [2] found a lifetime cumulative prevalence of at least one ACE of 50%.
Stressors such as abuse, neglect, and witnessing domestic violence are similarly common during childhood [3][4][5]. ACE can lead to signs of depression, anxiety, and personality disorder [3,[6][7][8][9], showing childhood ACE to be a major public health challenge. The experience of long-term ACE can cause serious emotional and behavioral problems (EBP) throughout the life of an individual. A dose-response relationship has been found between ACE and various conditions, including depression, anxiety, panic reactions, hallucinations, psychosis, and suicide attempt, along with overall psychopathology, psychotropic medication use, and treatment for mental disorders. The mechanisms for these associations may involve differences in the physiological development of children or the adoption of behaviors that harm their physical and mental health [6,10]. However, most of the available evidence has focused on the impact of ACE on adult health [11,12]. A recent study shows that exposure to multiple types of ACE is associated with a higher prevalence of psychiatric disorders in adults [11]. ACE may also be expected to have effects already in adolescence [13], but evidence to support this is lacking. Low socioeconomic position might itself be considered one of the ACE and is to a great extent a cause of early-life stress [14,15]. It is noteworthy that experienced financial stress was found to be associated with developing mental health problems [16] and behavioral problems [17]. However, the role of socioeconomic position in the relationship between ACE and EBP is still unclear. Therefore, the aim of this study is to examine the association of adverse childhood experiences (ACE) with emotional and behavioral problems (EBP) among adolescents and the degree to which this association is stronger for more ACE. In addition, we assessed whether socioeconomic position modifies the association of ACE with EBP. Sample and Procedure We used data from the baseline wave of the Care4Youth-cohort study.
We obtained participants using a two-step sampling procedure. First, we randomly selected primary schools; these were approached from January to June 2017. Out of 11 primary schools approached, seven participated in our survey (response rate 64%). Next, parents of all pupils were asked to provide us with signed informed consent on behalf of their children and themselves (response rate 23.4%). Questionnaires were administered by trained research assistants in the absence of teachers during regular class time. We obtained data from 341 adolescents from 5th to 9th grade aged from 10 to 16 (response rate: 94.3%; mean age: 13.14; boys: 44.0%). The study protocol was approved by the Ethics Committee of the Medical Faculty at P. J. Safarik University in Kosice (2N/2015). Measures Emotional and behavioral problems (EBP) were measured with the strengths and difficulties questionnaire (SDQ), which includes 25 items [18], of which we used the 20 difficulty items. Response categories were: not true (0), somewhat true (1), certainly true (2). The resulting score for overall difficulties can range from 0 to 40. In addition, we computed emotional problems (score 0-20, emotional symptoms and peer relationship problems subscales) and behavioral problems (score 0-20, conduct problems and hyperactivity/inattention subscales) [19]. A higher score indicates more problems in adolescents. Cronbach's alpha for the whole scale was 0.78 in our sample and 0.73 and 0.71 for the internalizing and externalizing subscales, respectively. Adverse childhood experiences were measured by the question: "Have you ever experienced any of the following serious events?
(Death of a brother/sister, Death of your father/mother, Death of somebody else you love, Long or serious illness of yourself, Long or serious illness of one of your parents or of someone else close to you, Problems of one of your parents with alcohol or drugs, Repeated serious conflicts or physical fights between your parents, Separation/divorce of your parents, Separation of your parents due to work abroad)". The response categories were "Yes" and "No". We created a sum score for the number of ACE experienced, with a higher score indicating more ACE. Consequently, we categorized the number of ACE into three categories: no ACE (0), one or two ACE (1), and three or more ACE (2). Socioeconomic position (SEP) [20] was measured using a tool validated among adolescents [21] on a 10-point scale (0-the worst, 10-the best), and the adolescents were asked to assess where they see their families on this ladder according to their financial status [22]. To illustrate what is meant, a description was provided, e.g., about how much money the family had, what level of education their parents had achieved, and how profitable the work of their parents is. Statistical Analyses We first described the background characteristics of the sample, overall and by gender. Next, we assessed the association of ACE with EBP using generalized linear models adjusted for age and gender with ACE in three categories. Finally, we assessed modification of this association by the family's SEP. Statistical analyses were performed using IBM SPSS Statistics v.20 for Windows (IBM Corporation, New York, NY, USA). Table 1 shows the descriptive statistics of the EBP and ACE for the whole study sample and for boys and girls separately. Table 2 presents regression coefficients (B) and 95%-confidence intervals (CI) from the generalized linear models adjusted for age and gender.
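The ACE scoring and categorization described above can be sketched as follows (a minimal illustration; the nine items are passed as yes/no answers in questionnaire order):

```python
def ace_category(yes_no_answers):
    """Sum 'Yes' answers across the nine ACE items and collapse the
    count into the three categories used in the analysis:
    0 = no ACE, 1 = one or two ACE, 2 = three or more ACE.
    """
    n = sum(bool(a) for a in yes_no_answers)
    if n == 0:
        return 0
    return 1 if n <= 2 else 2
```

For example, a respondent reporting parental divorce and a parent's serious illness (two "Yes" answers) falls into category 1.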
Model 1 shows that adolescents with 1-2 ACE (B: 0.19; 95% CI: 0.06-0.32) and 3 or more ACE (0.35; 0.17-0.54) reported more overall difficulties in comparison with adolescents without ACE. When emotional and behavioral problems were assessed separately, a similar dose-response pattern was found with somewhat lower B coefficients. In Model 2, adolescents with a higher socioeconomic position reported fewer emotional problems (−0.05; −0.09 to −0.02) and fewer behavioral problems (−0.05; −0.09 to −0.01). However, the interactions of SEP with ACE were not significant (not shown in the table). Table 2. Associations between the number of adverse childhood experiences (ACE) and emotional and behavioral problems, overall and separately adjusted for gender, age (Model 1), and socioeconomic position (Model 2) from generalized linear models (B coefficients/95% Wald confidence intervals) (Slovakia 2017, 10-16 years old, n = 341). Discussion The present study shows that ACE are associated with EBP and that the accumulation of ACE is associated with more EBP. Socioeconomic position does not significantly influence the relationship between ACE and EBP. As suggested by our results, more ACE seem to be associated with more EBP among adolescents, thus adding to the already existing evidence on adults [6,8,9,12]. The experience of traumatic events in childhood might represent high levels of distress, which might be associated with emotional or behavioral problems [23] via enduring changes in the nervous system [6,17,24]. In addition, it might be expected that the mentioned ACE from childhood are still present even in adolescence and have a direct and immediate influence. The deleterious effects of ACE on mental health may thus already start in adolescence. We found the association between ACE and EBP to have a clear dose-response character, with more ACE having a more pronounced influence, in line with Chapman et al. [25].
An explanation for the dose-response character of the association might be found in resilience theory [26][27][28][29], which suggests that a lower number of ACE might be buffered by existing resilience factors, whether within the individual, family, or community [30,31]. Several simultaneously present ACE, however, might often not be resolvable by the available resources. This might lead to more EBP, thus more frequently requiring additional help from professionals in the adolescent mental health care system. Contrary to our expectations, we found no influence of adolescents' perceived SEP on the association between ACE and EBP. Most such studies have examined the association of SEP with ACE [9,32] or with the development of EBP [13]. Evidence on the role of SEP in the association between ACE and EBP is scarce and inconclusive and mostly concerns adult populations [32,33]; our study provides additional evidence on this understudied issue. An explanation for there being no influence of adolescents' perceived SEP on the association between ACE and EBP may be that, in comparison with experiencing multiple other adverse experiences in the presence of traumatic events, lower SEP might not be considered an additional burden leading to even more pronounced EBP [34]. On the contrary, lower SEP might be expected to result in more ACE, which in turn have a detrimental influence on the development of EBP among adolescents, thus suggesting a different pathway. This study has several strengths, the most important being that it uses validated, internationally recognized instruments that have been used in various studies [35][36][37]. In addition, our study contributes to the current literature by investigating the association between ACE and EBP in a community-based sample of adolescents. However, this study also has some limitations. First, its response rate was rather low due to the required active parental consent.
However, we do not expect this to cause a major selection bias; e.g., Dent et al. [38] found no differences in mental health outcomes between studies with active and passive parental consent. Another limitation might be the use of self-reported data for measuring SEP, ACE, and EBP. However, previous research has confirmed the validity of self-reported measurement of SEP [21], EBP [18], and ACE [37,39]. Finally, the cross-sectional design of this study made it impossible to formulate conclusive statements about causality. Our study showed that ACE are associated with EBP among adolescents, with more ACE having a stronger association with EBP. These results imply a need to focus on prevention and early identification of adolescents exposed to ACE. Based on our results, further research might be of interest for investigating individual ACE, as well as differentiating by the severity of the ACE. Furthermore, we particularly need longitudinal studies to assess pathways and existing mechanisms regarding the associations of the socioeconomic position of the family, ACE, and EBP. Finally, research is needed on intervening in the chain from ACE towards EBP, to be able to improve future adolescent and adult mental health. Conclusions We found that ACE are related to EBP among adolescents and that an increasing number of ACE is associated with more EBP; SEP did not modify this association. Our results provide further evidence of associations between ACE and EBP and underscore the need for a public health and social welfare approach regarding prevention, risk reduction, and early intervention for adolescents exposed to ACE.
Physiological and Behavioral Manifestations of Children and Teenagers with Down Syndrome During the Dental Appointment: A Comparative Cross-Sectional Study Objective: To measure the heart rate (HR) and the behavior of children and teenagers with Down Syndrome (DS) during the dental appointment. Material and Methods: Two groups (n = 52), of both genders, aged 2-14 years, matched by age group were formed: a study group (SG) of individuals with DS and a control group (CG) of normotypical school children. The participants were submitted to clinical examination and prophylaxis. An oximeter was used to measure the HR at five moments of the dental consultation: before entering the practice room (T0), when sitting in the dental chair (T1), during the clinical examination (T2), during prophylaxis (T3), and immediately after prophylaxis (T4). Behavior, classified according to the Frankl Scale, was observed at T3. Mann-Whitney, Kruskal-Wallis, Dunn, and Pearson's chi-square tests were used to analyze and compare variables (significance level of 5%). Results: In SG, a significant difference in HR was observed according to the moment of the dental appointment (SG: p < 0.001; CG: p = 0.3385). The highest HR value in SG was observed at T3 (median 110.00; IQR 96.00-124.00), the only moment significantly different (p < 0.001) from the HR values for CG. A difference in behavior between groups (p < 0.001) was also observed. Conclusion: The HR of individuals with DS varied throughout the dental appointment, and they also had a higher prevalence of uncooperative behavior. Introduction Dental extraction and/or the sensation of pain has commonly been associated with the dental appointment [1]. Therefore, physiological changes - such as an increase in heart rate or blood pressure - and behavioral changes in patients due to dental anxiety are commonly observed during dental appointments [2]. Dental anxiety is a persistent fear and manifestation of exacerbated reactions to dental procedure stimuli [3].
Between 6% and 20% of children suffer from dental anxiety, and they tend to avoid the dentist or are barely cooperative, making it difficult to complete procedures [3][4][5]. This can be of special concern in children with Down Syndrome. Down Syndrome (DS) is an autosomal chromosomal anomaly resulting from the trisomy of chromosome 21 [6]. Among DS-related conditions, cardiac abnormalities are frequent, including Tetralogy of Fallot, patent ductus arteriosus, and septal defects [6]. Congenital cardiac abnormalities require special precautions and are associated with complications such as congestive cardiac death, heart failure, thromboembolism, and complications of non-cardiac surgery [6,7]. Due to the prevalence of people with cardiac conditions associated with genetic conditions and/or dental anxiety, the inclusion of blood pressure and heart rate measurements as a routine practice is extremely important in the professional routine of dentists [8,9]. The purpose of this study was to measure the heart rate and behavior of children and teenagers with DS during the dental appointment. We tested the null hypothesis that there is no difference in behavioral and heart rate variations in children and teenagers with or without DS during the dental appointment. Study design and Participants A comparative cross-sectional observational study was performed. The study population consisted of two groups. The study group (SG) consisted of 52 children and teenagers with DS, 4 to 14 years old, attending the Integrated Center for Special Education (CIES), in Teresina, Piauí, Brazil. The control group (CG) consisted of normotypical children and teenagers in the same age group, enrolled in municipal public schools in the same city. For SG, all the individuals who were regularly enrolled at CIES at the beginning of the data collection with a medical diagnosis of DS were considered eligible. We did not include those individuals with other developmental disorders associated with DS.
The presence of any comorbidity, the use of medication with implications for the cardiovascular system and the presence of cardiovascular disease were also considered non-eligibility criteria. CG also consisted of 52 children aged 4 to 14 years who did not present any cardiovascular disease. They were randomly selected from attendance lists provided by six schools, which had been previously selected at random from the public schools of Teresina, Brazil. All participants had already undergone dental consultations prior to inclusion in the study. Procedures and Data Collection Instruments Caregivers of the participants answered a questionnaire about socioeconomic data and oral-health habits at the moment of care. In preparation for data collection, caregivers were instructed to inform the participants that they would go to the dentist for a dental session. Due to the extremely hot local weather, all consultations were carried out in an air-conditioned room, so that weather-related interference could be controlled during data collection. Data collection was carried out between April 2015 and October 2016. The heart rate (HR) of the study participants was measured using an oximeter (Finger Pulse Oximeter, Fingertip model, China) placed on the left index finger of the participant and used according to the manufacturer's instructions. Changes in the light spectrum transmitted through the finger during the pulsation of blood provide measures of blood oxygenation and pulse rate, which are displayed by the oximeter. Heart rate is measured in beats per minute (bpm). HR was recorded at five moments: one minute before the procedure (T0), one minute after sitting in the dental chair (T1), one minute after the beginning of the clinical examination (T2), one minute after the beginning of prophylaxis (T3) and one minute after the end of prophylaxis (T4) [10,11].
The clinical examination of the oral cavity was performed in a dental chair using artificial light from the reflector, a flat mouth mirror and a number 5 dental probe, according to the recommendations of the World Health Organization [13]. The procedure consisted of prophylaxis with a Robinson's brush, pumice and prophylactic paste; the procedure time was standardized at 5 min. SG data collection was carried out in a dental room at CIES. For CG, data collection took place at the dental office of a health unit near the school where the participants were recruited. For both groups, participants already knew the data collection location and had previously undergone a dental appointment. The examiner was trained in the use of the oximeter and in behavioral analysis, and was calibrated in the use of the Frankl Scale. Intra-examiner (1.00) and inter-examiner (0.85) Kappa agreement scores, obtained against a specialist in dentistry for patients with special needs, were considered acceptable. The primary and secondary outcomes of the study were changes in heart rate and behavioral changes during the dental procedure, respectively. Statistical Analysis Initially, a descriptive analysis of the data was performed. The normality of the heart rate distribution was tested using the Shapiro-Wilk test. As the data did not present a normal distribution (p≤0.011), nonparametric statistical tests were used. The Mann-Whitney test was applied for the intergroup analysis of median HR. For the intragroup analysis of the moments of the dental appointment, Kruskal-Wallis and Dunn tests were applied. A chi-square test was applied to compare the frequency of behavior types and socioeconomic characteristics between the groups. All analyses were conducted using the Statistical Package for the Social Sciences (SPSS for Windows, version 21.0, SPSS Inc., Chicago, IL, USA), with a significance level of ≤ 0.05.
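The nonparametric pipeline described above can be reproduced outside SPSS. The following is a minimal sketch with SciPy, using entirely hypothetical HR values (the samples below are illustrative, not the study data): Shapiro-Wilk to check normality, Mann-Whitney for the intergroup comparison at one moment, and Kruskal-Wallis across the five moments.

```python
# Hypothetical heart-rate (bpm) samples; none of these numbers come from the study.
from scipy import stats

hr_sg_t3 = [110, 124, 96, 118, 105, 130, 99, 112]  # study group at T3 (illustrative)
hr_cg_t3 = [90, 88, 95, 92, 85, 97, 91, 89]        # control group at T3 (illustrative)

# 1) Test normality; a small p-value justifies switching to nonparametric tests.
_, p_norm = stats.shapiro(hr_sg_t3)

# 2) Intergroup comparison at T3 (Mann-Whitney U, two-sided).
u_stat, p_mw = stats.mannwhitneyu(hr_sg_t3, hr_cg_t3, alternative="two-sided")

# 3) Intragroup comparison across the five moments T0-T4 (Kruskal-Wallis);
#    each inner list holds one hypothetical HR sample per moment.
moments = [[92, 95, 90], [98, 101, 97], [104, 108, 103], [112, 118, 110], [106, 109, 104]]
h_stat, p_kw = stats.kruskal(*moments)

print(p_norm, p_mw, p_kw)
```

A significant Kruskal-Wallis result would then be followed by a pairwise post-hoc (Dunn) test, as the study does; SciPy does not ship Dunn's test, so that step typically uses a separate package.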
Ethical Aspects This study was approved by the review board of the Federal University of Piauí (CAAE 17993013.4.0000.5214). Its development followed the ethical recommendations of the Declaration of Helsinki. The inclusion of study participants was consented to through the signature of informed consent and assent forms. Results No difference was observed between groups regarding socio-demographic characteristics, including gender, age group, years of parental schooling and family income, validating the age pairing between the participants of the two groups (Table 1). The intergroup analysis revealed that HR values for SG were higher than those for CG at T3 (SG median 110.00; CG median 91.50; p<0.001). The intragroup analysis showed (Figure 1) that there were differences in HR between moments only for SG. Figure 1. Heart rate analysis during the five moments of the dental appointment (Kruskal-Wallis and post-hoc Dunn tests, p<0.05; different letters in the bars indicate statistically significant differences). Table 2 shows that, for participants with DS, there was no difference in the HR values measured during the five moments of the dental appointment when the group was classified according to behavior and gender. The frequency of participants according to their behavior can be observed in Table 3. In the study group, 59.6% of the participants presented uncooperative behavior; in the control group, 92.3% had cooperative behavior. There was a significant difference between groups regarding the type of behavior, whether cooperative or uncooperative, during the dental procedure (p<0.001). A post-hoc analysis was performed to calculate the power of the test, yielding a power of 1.00, indicating a negligible probability of type II error and that the sample was sufficient for the study findings.
Discussion In this study, the heart rate was measured and the behavior of individuals with Down Syndrome was observed during the dental appointment and compared with a control group of normotypical individuals. A significant increase in the HR of DS participants was observed only at the time of prophylaxis. There was also a significant difference in behavior during the dental appointment: the prevalence of uncooperative behavior was higher among participants with DS. The fact that intragroup HR varied significantly during the dental appointment only for SG suggests that the dental appointment is a determining factor in the occurrence of such physiological alterations. Furthermore, during prophylaxis, median HR values were higher for SG than for CG. This result may be explained by the delay in the psychosocial, cognitive and emotional development of individuals with DS, who may present a more anxious response [14]. Individuals with DS may find it difficult to evaluate the invasive nature of dental treatment or to understand the instructions and explanations given by the dentist, thus increasing their level of tension [15]. In SG, a progressive increase in HR values was observed as the procedures progressed from T1 to T3, with a slight decrease at T4 (immediately after the end of prophylaxis). This increase may be related to the length of time and/or to the succession of the steps of the procedure, which may increase fear and anxiety [16]. The moments of highest measured HR were distinct between the groups: in SG, the highest values were measured at T3, whereas in CG, the greatest HR was observed at T4. This difference is probably related to differences between the groups in the perception and manifestation of anxiety and stress related to the procedure. Regarding behavior analysis, we chose the Frankl Behavior-Rating Scale [12] because it is a functional and reliable method that classifies behavior into four categories.
It is the main method used for this purpose and is currently used by several researchers [17][18][19]. When the sample was dichotomized into uncooperative and cooperative groups, it became clear that individuals with DS tend to be more uncooperative. Directly comparable studies are scarce in the literature. However, a higher prevalence of uncooperative behavior among individuals with DS may be explained by their greater difficulty in understanding and weaker perception skills; they probably have exacerbated stress reactions to procedures that are considered minimally invasive by individuals who can correctly perceive the dental procedures [20]. The effects of dental appointments on physiological and behavioral parameters of individuals with DS are rarely reported in the literature. Some authors compared the behavior and HR of individuals with DS who underwent two different dental caries removal protocols [21]. The results of that study demonstrated that the dental appointment promoted more significant physiological changes in the DS group, with emphasis on the increase in HR and a higher occurrence of "tense" behavior, a classification equivalent to uncooperative in the behavioral scale adopted in the present study. Our findings corroborate theirs. A longitudinal study [22] demonstrated that the use of audiovisual resources was effective in reducing physiological changes in children (mean age = 7.1 years) submitted to prophylaxis; however, it lacked a control group against which to compare the results. A systematic review and meta-analysis concluded that individuals with DS, when compared with controls without DS, did not have significant HR variability at rest [23]. However, the literature remains unclear about the cardiovascular behavior of individuals with the syndrome in stressful situations, such as a dental appointment [4].
Medical and dental procedures are stressful conditions that trigger physiological changes of the fight-or-flight type, an inherent human reaction to challenging situations [2,[24][25][26][27]. The high prevalence of congenital cardiac alterations among individuals with DS justifies identifying potentially dangerous situations that can significantly alter cardiovascular parameters [1,28]. Due to the unavailability of a national, state or municipal database characterizing the population affected by DS, we could not calculate a sample representative of the entire population. Therefore, we used a non-probabilistic sample recruited from a specialized health care center. CIES is funded by the state government and is a referral centre for children with special needs. Thus, the findings cannot be assumed to generalize to different populations with DS. However, pairing the study participants with a randomized control group strengthens the methodological rigor of the study. The internal validity of the study was adequate because a single examiner was trained and calibrated, avoiding possible measurement bias. The blind analysis of the results by an independent researcher reduced the risk of possible detection bias. Selection bias was avoided by adopting, for both groups, the inclusion criterion that participants had previously undergone a dental appointment and received prophylaxis. As a final consideration, we strongly recommend that dentists use behavioral management techniques in order to minimize cardiovascular changes in patients during a dental appointment. Individuals with DS have been shown to experience significant variations in HR even during non-invasive procedures such as prophylaxis. Further studies evaluating changes in physiological parameters related to dental treatment could compare dental procedures with other situations that induce cardiovascular or other changes.
Conclusion The dental appointment resulted in significant HR variation in individuals with DS, unlike in individuals without DS. "During prophylaxis" was the only moment when a significant difference in HR was observed between groups. Participants with DS tend to be more uncooperative during the dental appointment than participants without DS. Financial Support This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior -Brasil (CAPES) -Finance Code 001.
The impact of disclosure of risk information on risk propagation in the industrial symbiosis network The interdependent symbiotic relationships between enterprises may bring potential risks to the stability of the industrial symbiosis network (ISN). In order to reduce the damage caused to the system by further risk propagation, this paper establishes a multiplex network to study the impact of the disclosure of risk information on risk propagation. In the multiplex network, we use a small-world network to simulate the social network and propose an evolutionary model with scale-free characteristics to simulate the symbiotic relationships between enterprises. We then establish a risk propagation model by defining transition rules among the various states. Through theoretical analysis using the Microscopic Markov Chain Approach (MMCA), we find that the proportion of disclosing enterprises, the network structure of the ISN, the recovery rate of enterprises, and the degree of symbiotic dependence affect the risk propagation threshold of the ISN. Numerical simulation results show that increasing the disclosure probability of risk information can reduce the scope of risk propagation. Moreover, once the disclosure probability of risk information reaches a certain value, the risk propagation threshold can be increased. Finally, relevant suggestions are put forward: (i) strengthening information communication between symbiotic enterprises may reduce risks caused by information asymmetry; (ii) in addition to ensuring the authenticity and integrity of risk information, it is necessary to prevent risk information from being over-interpreted or exaggerated; and (iii) enterprises should strengthen their ability to recover from risks, appropriately reduce their degree of symbiotic dependence, and enhance risk awareness to reduce the possibility of risk occurrence.
Research background Industrial symbiosis, as an effective means of improving resource efficiency, can alleviate the carbon emission problem and realize regional green development by transforming the linear economic model into a circular one (Schlüter et al. 2022). In contrast to traditional industries, which produce large quantities of pollution and consume huge amounts of energy, the industrial symbiosis system realizes closed-loop material circulation and multilevel energy utilization through the exchange of material and energy flows, while also minimizing regional waste discharge (Huang et al. 2020). Industrial symbiosis can thus transform the linear economic model into a circular one (Chertow 2000; Demartini et al. 2022). In the industrial symbiosis system, the material exchange between symbiotic enterprises forms a dynamic complex network, i.e., the industrial symbiosis network (ISN). Enterprises in the industrial symbiosis network are not only affected by external risks, but also interact with each other, and this interrelated symbiotic relationship closely links enterprises together (Valenzuela-Venegas et al. 2018). Therefore, the state of the nodes in the network plays a significant role in the stability of the network (Fraccascia 2019; Massari and Giannoccaro 2022). Specifically, when some enterprises in the industrial symbiosis network encounter risks, such as external disturbances or poor management and operation, they may interrupt or delay the supply of material, which will not only affect their own production and operation, but also transfer the risk to other enterprises and even the entire system, posing a major threat. For example, in 2021 the surge of COVID-19 in India hit the supply chains of raw materials and manufacturing in many industries around the world, and in March 2022 a cyberattack led to system failures at one of Toyota's key suppliers.
Thus, the risk in this paper refers to the probability that the industrial symbiosis network will be interrupted due to the uncertainty caused by external interference or internal symbiotic relationships. It is therefore of crucial theoretical and practical significance to explore the laws of risk propagation in the ISN and measures to prevent risk. Enterprises with risk information and risk resilience, when accurately estimating the spreading intensity and destructive capacity of risks, can take risk control measures to reduce or even completely avoid the harm of risks to the enterprise and even the entire industrial symbiosis network. However, enterprises that cannot perceive risk information not only fail to identify and regulate risks, but may even accelerate the spread of risks and expand the scope of risk propagation. Following Zhu et al. (2021b), we define risk information as any information that is used to achieve an improved state of knowledge about risk as a basis for making a risk-related decision. To sum up, it is worth studying how enterprises in the industrial symbiosis network perceive risk information, how they obtain risk information to avoid risks, and whether the timely disclosure of risk information affects the spread of risks in the network. Thus, we explore the impact of the disclosure of risk information on risk propagation in the industrial symbiosis network and create a multiplex network of risk perception and risk propagation based on the risk propagation model proposed by Granell et al. (2013). In the risk perception layer, enterprises are divided into three categories: unaware of risks, aware of risks and willing to disclose risk information, and aware of risks but unwilling to disclose risk information. When enterprises are aware of risks, they will take measures to mitigate them, minimizing the likelihood of risk occurrence and the destructive power of risks.
For the risk propagation layer, enterprises are classified into three states based on the classic epidemic model (SIR) and the characteristics of risk propagation in the ISN. Following that, we discuss the state transition rules. On these bases, the evolution of risk propagation in the ISN is simulated. Finally, we investigate the effect of several factors on risk propagation, including the disclosure probability of risk information, the risk propagation rate, and the recovery rate. Literature review This subsection is divided into the following parts: network modeling of ISNs, risk propagation and resilience analysis, and information disclosure in complex networks. Network modeling of ISNs An industrial symbiosis network (ISN) is a supply chain network of cooperative symbiotic relationships between enterprises formed by the transfer of materials, energy, or information. It is a set of symbiotic relationships among long-term regional activities, including flows of material, energy, knowledge, personnel, and technical resources, which can produce positive environmental benefits and overall competitive advantages (Mirata and Emtairah 2005). The structural characteristics of ISNs are based on the characteristics of nodes and connecting edges. Reflecting the development process of the symbiotic system, the topological structure of ISNs presents a power-law degree distribution (Ashton et al. 2017); that is, in the initial stage the network consists of a few enterprises, and as the symbiotic network develops, more and more enterprises join and more symbiotic relationships form. In terms of the structure of ISNs, most scholars take real eco-industrial parks as research objects and construct symbiosis networks, such as water symbiosis networks, infrastructure symbiosis networks, and waste exchange synergy networks.
Li and Shi (2015) proposed that two kinds of networks can be generated in symbiotic systems: material exchange networks and infrastructure-sharing networks. Wu et al. (2021) constructed an iron and steel industrial symbiosis network covering 1958 to 2019 based on the input and output of iron flow and carbon flow and analyzed the evolution characteristics of the network under different scenarios. Li and Xiao (2017) discovered the scale-free and small-world characteristics of the industrial symbiosis network by exploring the topological characteristics of the Ningdong Coal Chemical Eco-industrial Park. Yang and Zheng (2020) generated a weighted and directed industrial symbiosis network, which exhibits scale-free characteristics. Risk propagation and resilience analysis of complex networks In terms of risk propagation, when some enterprises in the network are exposed to risks, the risks propagate to the nodes associated with them, which may put adjacent nodes in a risk state or even lead to the collapse of the network. Xu et al. (2019) studied the risk propagation characteristics of a water symbiosis network, found that risk arises from local risk diffusion, and identified the most influential risk propagation path using an ant colony algorithm. Xiao et al. (2016) developed a cascading failure model for an eco-industrial system and discovered that removing core nodes and core links causes faults to propagate rapidly through the network and significantly damages network stability. Therefore, it is of great significance to strengthen the adaptability and resistance of the network to risk disruption, and evaluating and improving resilience has become a research hotspot. The widely quoted definition of resilience by Holling (1996) holds that resilience determines the persistence of relationships within a system and measures the ability of such systems to absorb changes in state variables, driving variables, and parameters, and still persist.
For different systems, the definition of resilience also differs. Walker et al. (2004) proposed that resilience is the capacity of a system to absorb disturbance and reorganize while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks. Li and Shi (2015) defined resilience as the ability to maintain the function of network elements after an interruption and used the area under the response curve to measure the resilience of the system under a specific interruption scenario. Fraccascia et al. (2017) pointed out that resilience is a way to assess the ability of an ISN to maintain its functions. Supply chain resilience is based on concepts such as robustness, reliability, survivability, and fragility, and refers to the ability of the supply chain network to maintain operation and connectivity when certain structures or functions are lost (Shi et al. 2021). Following Fraccascia et al. (2017) and Walker et al. (2004), resilience in this paper refers to the ability of the network and its components to predict, absorb, adapt to, and recover from risks in a timely and effective manner while maintaining their function. Scholars have conducted research on resilience evaluation and improvement strategies. Valenzuela-Venegas et al. (2018) measured resilience comprehensively using two indicators: a network connectivity index and a flow adaptability index. Hosseini et al. (2016) studied the resilience of the supply chain system based on absorptive capacity, adaptability, and restorative capacity and proposed a research framework for resilience. Yang and Zheng (2020) pointed out that, before risks occur, risk identification and preventive measures should be taken to improve the ability to resist risks; when risks occur, the network structure should be flexibly adjusted so as to recover from the risk in time. Li et al.
(2022) developed a network resilience assessment method considering both structure and nodal load and found that the larger the node capacity redundancy, the more effective the reinforcement strategy. Information disclosure in a complex network The establishment and maintenance of an ISN are inseparable from the disclosure and sharing of enterprise information (Boom Cárcamo and Peñabaena-Niebles 2022). Information transparency in the industrial symbiosis system is ensured by providing accurate enterprise supply-demand and safety information, which enables potential cooperative enterprises to establish symbiotic relationships and avoid risks. Silva et al. (2022) found that the use of online platforms enhanced the symbiosis process by expanding the geographic scope of industrial symbiosis processes and forming synergies among all parties. A lack of information sharing is an obstacle to the establishment of industrial symbiosis. Fraccascia and Yazan (2018) found that the establishment of an information platform can effectively improve the economic and environmental performance of the ISN. Risks from one enterprise may propagate to other enterprises in the system, so communication and sharing of risk information between partners can serve as a warning, which is essential for reducing the speed of risk propagation and shortening the response time (Dubey et al. 2019). Sharing security information between enterprises can help them reduce the investment costs of preventing risks or attacks and improve overall resilience. However, enterprises may be unwilling to disclose information publicly for various reasons. Concerns about privacy risk and information leakage remain the main obstacles to the disclosure of security information.
Moreover, business-strategy considerations and the free-riding behavior of competitors and other non-disclosing enterprises further push enterprises to conceal risk information (Ezhei and Tork Ladani 2017; Wu et al. 2014). In addition, enterprises in the industrial symbiosis network are generally profit-seeking organizations, so some of them will not actively disclose information in pursuit of higher profits. Contributions The main contributions of this research are as follows. First, the construction process of the industrial symbiosis network is discussed from the perspective of waste supply and demand. This provides ideas for simulating industrial symbiosis practices, which are usually represented directly by existing structures or scale-free networks. Second, a multiplex network model is constructed that considers the interaction of risk perception and risk propagation, in order to explore the risk propagation mechanism and influencing factors of the ISN. Most research on risk propagation has focused on single-layer networks, ignoring the interaction of multiple network systems. Third, this paper analyzes the impact of information disclosure on risk propagation from the perspective of voluntary disclosure of risk information by enterprises. Enterprises have bounded rationality and adjust their behavior strategies according to their current state to maximize economic benefits. Structure of this paper The rest of this paper is structured as follows. Section "Model" constructs the network generation algorithm and risk propagation mechanism of the multiplex network. Section "Theoretical analysis using the MMCA" analyzes the transition probabilities of enterprise states in the process of risk propagation using the Microscopic Markov Chain Approach (MMCA) and derives the expression for the risk propagation threshold.
Section "Simulation" verifies the feasibility of the network generation model, explores the changes in the proportions of the various states, and studies the impact of information disclosure on risk propagation. Section "Conclusion and Discussion" gives the conclusion and discusses the findings and the limitations of the paper. Model Enterprises in the industrial symbiosis network establish symbiotic relationships through the exchange of energy and waste. When an enterprise is at risk, this highly interdependent relationship causes the risk to propagate to its symbiotic partners and may even cause a cascading failure. To avoid risks, enterprises need to pay continual attention to the operational status of their symbiotic enterprises and perceive the possibility of risks through risk information diffusion. In fact, due to strategic needs, some enterprises are reluctant to disclose their operational status and risk information (Jain and Sohoni 2015). For example, suppose an enterprise provides waste to multiple enterprises and its waste supply may become insufficient due to a production adjustment; if it notifies its symbiotic partners, it may lose some of them, resulting in waste retention and economic losses. Thus, this paper uses a two-layer multiplex network to describe the interaction of risk perception and risk propagation in the ISN, studies the impact of the disclosure of risk information on risk propagation, and analyzes whether the disclosure of risk information raises the risk propagation threshold or reduces the scope of risk propagation. Related concepts Some necessary concepts are listed below. (1) The weight In this paper, the ISN is modeled as a directed and weighted graph; the weight of the directed edge from enterprise i to j is expressed as w_ij, which denotes the amount of waste from enterprise i to j. The weight matrix of the ISN can be described as W = (w_ij)_{n×n}, where w_ij ≥ 0. If there is no waste flow from enterprise i to j, then w_ij = 0.
(2) The degree The degree of a node is the number of edges connecting it to other nodes. In a directed network, an edge points from one node to another, so each node has two different degrees: the in-degree (k^in) and the out-degree (k^out), which are the numbers of incoming and outgoing edges, respectively (Li and Xiao 2017). Thus, in the directed network, the total degree of a node i is the sum of its in-degree and out-degree, which can be written as k_i = k_i^in + k_i^out. (1) (3) The strength Similar to the definition of node degree in the undirected network, the strength of a node represents the total weight of its edges in the weighted network. In this paper, the in-strength represents the amount of waste received from other symbiotic partners, and the out-strength represents the amount of waste provided to other symbiotic partners. Motivated by the definition proposed by Barrat et al. (2004), the strength of each node in our model represents the amount of waste exchange, which reflects the importance of the node in the network. The in-strength and out-strength of a node i are defined as s_i^in = Σ_{j∈Γ_i^in} w_ji and s_i^out = Σ_{j∈Γ_i^out} w_ij, (2) where Γ_i^in and Γ_i^out are the sets of in-neighbors and out-neighbors of node i, respectively. Therefore, the strength of a node i is the sum of its in-strength and out-strength, that is, s_i = s_i^in + s_i^out. (3) (4) The degree of symbiotic dependence The amount of waste exchanged between enterprises determines the degree of symbiotic dependence between them. The degree of symbiotic dependence of enterprise i on enterprise j can therefore be defined as the ratio of the waste flows between enterprises i and j to the total waste flows of enterprise i in the system, δ_ij = (w_ij + w_ji) / s_i. (4) Because different enterprises have different symbiotic partners and waste flows, the degree of symbiotic dependence of enterprise i on enterprise j generally differs from that of j on i, that is, δ_ij ≠ δ_ji.
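As a concrete illustration, the strength and symbiotic-dependence quantities defined above can be computed directly from a waste-flow matrix. The following is a minimal sketch for a hypothetical three-enterprise network (the enterprise names and flow amounts are illustrative assumptions, not from the paper):

```python
# Hypothetical waste-flow matrix: W[(i, j)] = amount of waste enterprise i sends to j.
W = {
    ("A", "B"): 4.0,
    ("A", "C"): 1.0,
    ("B", "C"): 2.0,
    ("C", "A"): 3.0,
}

def out_strength(i):
    # s_i^out: total waste provided by i to its out-neighbors
    return sum(w for (u, _), w in W.items() if u == i)

def in_strength(i):
    # s_i^in: total waste received by i from its in-neighbors
    return sum(w for (_, v), w in W.items() if v == i)

def strength(i):
    # s_i: sum of in-strength and out-strength
    return in_strength(i) + out_strength(i)

def dependence(i, j):
    # Degree of symbiotic dependence of i on j: waste exchanged between
    # i and j, divided by i's total waste flows.
    return (W.get((i, j), 0.0) + W.get((j, i), 0.0)) / strength(i)

print(strength("A"))         # 5 out + 3 in -> 8.0
print(dependence("A", "C"))  # (1 + 3) / 8 -> 0.5
print(dependence("C", "A"))  # (3 + 1) / 6 -> 0.666...
```

Note that the dependence of A on C differs from that of C on A, matching the asymmetry noted above.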
Generation of the multiplex network In this section, we use the WS small-world network to simulate the social network in the upper layer (Watts and Strogatz 1998) and construct a new evolution model, based on the BBV model (Barrat et al. 2004), to generate the ISN in the lower layer. The topological structure of the multiplex network is shown in Fig. 1. Each layer of the multiplex network has different connectivity, while the nodes in both layers are the same. The generation methods of the two networks are described below. The evolutionary model of the social network In the multiplex network, the upper layer is a social network, which describes the risk-information diffusion behavior among enterprises and has small-world characteristics (Luo et al. 2022; Watts and Strogatz 1998). In addition, enterprises propagate risk information through highly interactive social contact. Therefore, the upper-layer social network is an unweighted and undirected network, constructed as follows. Step 1: initialization. Start from a ring lattice with N nodes in which each node is connected to the K/2 adjacent nodes on its left and right, where K is even. Step 2: random reconnection. Rewire each edge of the lattice with probability q: one endpoint of the edge remains unchanged, while the other endpoint is replaced by a node selected at random from the remaining N − K − 1 nodes. Note that there can be at most one edge between any two different nodes, and q is set to 0.2 in the model. The evolutionary model of the ISN According to previous research findings, the industrial symbiosis network in the lower layer presents scale-free characteristics (Li and Xiao 2017); that is, nodes are not connected completely at random, but according to the principle of preferential attachment as the network grows.
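The upper-layer small-world construction described above (Steps 1 and 2) can be sketched as follows; the parameter values are illustrative, with q = 0.2 as in the model, and one endpoint of each rewired edge kept fixed while duplicate edges are forbidden:

```python
# Sketch of a WS-style small-world graph: ring lattice, then random rewiring.
import random

random.seed(0)
N, K, q = 20, 4, 0.2  # N nodes, K neighbors each (K even), rewiring probability q

edges = set()
for i in range(N):                      # Step 1: ring lattice
    for d in range(1, K // 2 + 1):
        edges.add(frozenset((i, (i + d) % N)))

rewired = set()
for e in list(edges):                   # Step 2: random rewiring
    i, j = tuple(e)
    if random.random() < q:
        # Keep endpoint i; move the other endpoint to a random node that is
        # not i and not already connected to i (no duplicate edges).
        candidates = [v for v in range(N)
                      if v != i and frozenset((i, v)) not in edges | rewired]
        j = random.choice(candidates)
    rewired.add(frozenset((i, j)))

print(len(rewired))  # edge count is preserved: N * K / 2 = 40
```

In practice a library generator such as networkx's `watts_strogatz_graph(N, K, q)` produces an equivalent topology in one call; the explicit loop above is only meant to mirror the two steps of the construction.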
At the same time, because the amounts of waste flowing between different enterprises differ, the ISN should be a weighted, directed network, constructed as follows. Step 1: initialization. Start with a small random weighted, directed network of N_0 (N_0 ≪ N) nodes connected by edges with assigned weight w_0. Step 2: growth. At each time step, a new node is added to the network together with m (m < N_0) directed edges, placed according to the type of the new node and the strengths of the existing nodes (see Step 3 for details). Step 3: preferential attachment. Given the waste supply-demand relationships in the ISN, a new node can be of three types: (i) it only receives waste, (ii) it only provides waste, or (iii) it both receives and provides waste. In addition, the larger the strength of a node, the more important it is in the network and the more easily it attracts connections from other nodes; new nodes therefore preferentially connect to such nodes. The connection probabilities are given below. Case 1: if the new node n only receives waste, it attaches to the existing (waste-providing) node i with probability

Π_{n→i} = s_i^out / Σ_j s_j^out. (5)

Case 2: if the new node only provides waste, it attaches to the existing node i with probability

Π_{n→i} = s_i^in / Σ_j s_j^in. (6)

Case 3: if the new node both receives and provides waste, suppose that it provides waste to m_out existing nodes and receives waste from m_in existing nodes, where m_in + m_out = m; it attaches to the existing node i with probability

Π_{n→i} = s_i / Σ_j s_j. (7)

Step 4: weight update. When the new node n establishes an edge with node i, the total weight on the existing edges connected to i is increased by an amount δ, distributed among those edges in proportion to their weights.
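The strength-proportional draw in the preferential-attachment step is a roulette-wheel selection; a small sketch follows, in which the node names and the 10,000-draw check are illustrative.

```python
import random

def pick_by_strength(strength, rng):
    """Choose an existing node with probability proportional to its
    strength (roulette-wheel draw over a dict of node -> strength)."""
    total = sum(strength.values())
    r = rng.random() * total
    acc = 0.0
    for node, s in strength.items():
        acc += s
        if r <= acc:
            return node
    return node  # guard against floating-point shortfall

rng = random.Random(1)
s_out = {"A": 8.0, "B": 1.0, "C": 1.0}   # A holds 80% of the total strength
draws = [pick_by_strength(s_out, rng) for _ in range(10000)]
frac_A = draws.count("A") / len(draws)
```

Node A, with 80% of the total out-strength, is drawn roughly 80% of the time, which is the "rich get richer" behaviour that produces the scale-free structure verified later.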
Case 1: when the new node receives waste from the existing node i, the weights of i's outgoing edges and the out-strength of node i are updated as

w_ij → w_ij + δ · w_ij / s_i^out, j ∈ Γ_i^out, (8)
s_i^out → s_i^out + δ + w_0. (9)

Case 2: when the new node provides waste to the existing node i, the weights of i's incoming edges and the in-strength of node i are updated as

w_ji → w_ji + δ · w_ji / s_i^in, j ∈ Γ_i^in, (10)
s_i^in → s_i^in + δ + w_0. (11)

Case 3: when the new node both receives waste from and provides waste to the existing node i, the edge weights are updated as in Eqs. (8) and (10), the out-strength and in-strength of node i as in Eqs. (9) and (11), respectively, and the total strength of node i as

s_i → s_i + 2δ + 2w_0. (12)

Step 5: repeat Steps 2-4 until the network reaches the required size. Risk propagation model In this section, in order to study the impact of voluntary disclosure of information on risk propagation, the risk propagation layer, constructed with the susceptible-infective-recovered (SIR) model, is coupled with a risk perception layer constructed with the unaware-conceal-disclose (UCD) model; the coupled model is called the UCD-SIR model. The states of the nodes and their transition rules in each layer are described in detail below. State transition rules in the risk perception layer In order to reflect whether an enterprise discloses risk information, it is assumed that some enterprises are aware of risks and that some of these are willing to share them publicly while others are not. Therefore, in the risk perception layer, where nodes spread risk information, a UCD model is established to represent the spreading process. Specifically, nodes are divided into three states: unaware of risks (U), aware of risks and willing to disclose (D), and aware of risks but unwilling to disclose, i.e., concealing (C). Unaware enterprises can obtain risk information from neighbors that are in state D.
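Returning to the weight-update step of the network construction: Case 1 can be sketched directly, assuming the BBV redistribution rule Δw_ij = δ·w_ij/s_i^out with the new edge carrying w_0 (the dictionary representation and the `"new"` key are illustrative).

```python
def bbv_update(w_out, delta, w0):
    """BBV-style update when a new node starts receiving waste from node i:
    each of i's outgoing edge weights grows by delta * w_ij / s_i_out
    (proportional redistribution of delta), and the new edge carries w0,
    so i's out-strength increases by delta + w0 in total."""
    s_out = sum(w_out.values())
    updated = {j: w + delta * w / s_out for j, w in w_out.items()}
    updated["new"] = w0   # edge to the newly added node
    return updated

w_out = {"j1": 3.0, "j2": 1.0}          # i's outgoing edges, s_i_out = 4
after = bbv_update(w_out, delta=3.0, w0=1.0)
```

The heavier edge absorbs most of the increment delta, which is how repeated growth steps amplify strength heterogeneity in the generated ISN.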
In the process of information diffusion, the risk awareness of a node can change: an unaware node becomes aware of risks if the proportion of its neighbors in state D among all its neighbors exceeds the local awareness coefficient α; otherwise, it remains unaware. Once an unaware node becomes aware of risks, it moves to state D with probability p or to state C with probability 1 − p. In addition, enterprises that are aware of risks will in practice actively take measures to reduce the probability of risk occurrence, or to reduce economic losses when risks occur, so as to avoid further risk propagation; afterwards, they do not remain aware of risks indefinitely. We therefore suppose that enterprises in states D and C revert to state U with probability λ. Moreover, as soon as an enterprise is disturbed by a risk, it immediately perceives the existence of that risk. The specific state transition rules in the risk perception layer of the multiplex network model are shown in Fig. 2. State transition rules in the risk propagation layer The risk propagation layer is the actual ISN, in which risk propagation occurs. Following the classical SIR propagation model, nodes are divided into three states: susceptible (S), infective (I), and recovered (R). At the initial stage of risk propagation, several enterprises in the ISN are infective; they propagate risks to their symbiotic partners with a certain probability β while taking measures to reduce risks, then recover with an inherent recovery rate μ and can no longer be infected. Here, recovery refers to the restoration of an enterprise's production from its risk state to its normal state, a process from which risk management experience can be accumulated.
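A minimal per-node sketch of the perception-layer rules above, using a threshold α, disclosure probability p, and forgetting probability λ (which in the original text appear as extraction-damaged symbols); the function name and signature are illustrative.

```python
import random

def next_awareness(state, frac_D_neighbors, alpha, p, lam, rng):
    """One perception-layer step for a single node: U becomes aware iff the
    fraction of D-neighbours exceeds alpha, then discloses (D) with
    probability p or conceals (C) with probability 1 - p; aware nodes
    (D or C) revert to U with probability lam."""
    if state == "U":
        if frac_D_neighbors > alpha:
            return "D" if rng.random() < p else "C"
        return "U"
    return "U" if rng.random() < lam else state

rng = random.Random(0)
```

Sweeping this rule over all nodes each time step, together with the infection rule of the lower layer, yields the coupled UCD-SIR dynamics simulated later.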
Considering the heterogeneity of enterprises, the infection and recovery rates differ across enterprises. Specifically: (1) Enterprises in state S are more likely to be infected by infected symbiotic partners on which they have a high degree of symbiotic dependence (Zhu and Ruth 2013), and the more infected symbiotic partners they have, the greater the probability of risk propagation. For example, during the propagation of viruses (e.g., SARS, COVID-19), the closer the social distance between susceptible and infected individuals and the more frequent their contacts, the greater the possibility of the susceptible individuals being infected. Therefore, in the process of risk propagation, the probability of a risk occurring changes with the degree of symbiotic dependence. We assume that enterprise i in state S is triggered into a risk state by an infected neighbor j with probability β_ij. (2) At the same time, infected enterprises can take measures to eliminate risks, turn to the recovered state, and become immune to further risks. Because enterprises differ in capability and scale, their resilience also differs. In this paper, we assume that the recovery probability depends not only on the strength and degree of the enterprise itself, but also on the states of its neighbor nodes: when more neighbor nodes are at risk, the recovery ability of the node is weakened, and more time and resources must be invested in recovery. Integrating these inherent attributes and local risks into the recovery rate, the total influence coefficient of infected enterprise i in risk recovery, η_i, is defined as a weighted combination of these quantities (Eq. (13)), where s_i represents the strength of node i, k_i denotes the degree of node i in the lower layer, ρ_i is the ratio of infected neighbor nodes of node i, and θ_1, θ_2, θ_3 ∈ (0, 1) are tunable parameters with θ_1 + θ_2 + θ_3 = 1.
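The exact form of the coefficient η_i is not fully recoverable from the extracted text, so the sketch below assumes a max-normalised weighted combination with a minus sign on the local-risk term ρ_i (consistent with "more neighbours at risk weakens recovery"); only the ingredients s_i, k_i, ρ_i, and θ_1, θ_2, θ_3 come from the paper, everything else is an assumption.

```python
def recovery_coefficient(s_i, k_i, rho_i, s_max, k_max,
                         th1=0.4, th2=0.3, th3=0.3):
    """Illustrative total influence coefficient eta_i: inherent attributes
    (normalised strength and degree) raise it, the local-risk ratio rho_i
    lowers it. The normalisation and the minus sign are assumptions; the
    paper's exact Eq. (13) is not reproduced here."""
    assert abs(th1 + th2 + th3 - 1.0) < 1e-9   # theta_1 + theta_2 + theta_3 = 1
    return th1 * (s_i / s_max) + th2 * (k_i / k_max) - th3 * rho_i
```

Whatever the exact form, the qualitative behaviour the text requires is that η_i grows with s_i and k_i and shrinks as ρ_i grows, which this sketch exhibits.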
Therefore, an infected enterprise i recovers with probability μ_i = η_i μ and, after recovery, neither infects other enterprises nor can be infected again. The specific state transition process in the risk propagation layer under these assumptions is depicted in Fig. 3. The interaction in the multiplex network Because the risk perception layer and the risk propagation layer are coupled, enterprises take measures to avoid risks and reduce their probability of infection once they have obtained the risk information (Guo et al. 2015; Li et al. 2019). We define the reduction in the probability of risk propagation due to perceived risk information as the risk attenuation factor, denoted by γ, where 0 ≤ γ ≤ 1. For a susceptible enterprise, the risk propagation rate when it is unaware of the risk information is β^U = β; once the enterprise is aware of risks, the rate is the same whether the risk information is disclosed or concealed, i.e., β^D = β^C = γβ^U. Obviously, the smaller the risk attenuation factor, the lower the probability of being infected. In particular, when the risk attenuation factor is 0, the risk propagation rate is 0 and the enterprise is in an immune state; when the risk attenuation factor equals 1, the risk propagation rate does not depend on whether the enterprise is aware of the risk information. According to the definitions of the enterprises' states in the multiplex network and their dynamic transitions above, eight states exist in the UCD-SIR model: US, UR, CS, CI, CR, DS, DI, and DR. As soon as an enterprise is infected by a risk it becomes aware of it, so an enterprise in the unaware-infected (UI) state immediately moves to DI or CI; the state UI is therefore excluded.
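One way to read the coupling rule above is that awareness scales every pairwise infection rate by the attenuation factor γ. A small sketch under the usual independent-exposure assumption follows; the function and the two-neighbour example are illustrative, not from the paper.

```python
def infection_prob(beta_ij_list, aware, gamma):
    """Probability that a susceptible firm is infected in one step by its
    infected neighbours, assuming independent exposures: each pairwise
    rate beta_ij is scaled by gamma once the firm is aware (D or C)."""
    p_escape = 1.0
    for b in beta_ij_list:
        rate = gamma * b if aware else b
        p_escape *= 1.0 - rate        # survive this neighbour's exposure
    return 1.0 - p_escape

# Two infected neighbours, each with pairwise rate 0.6; gamma = 0.3.
p_unaware = infection_prob([0.6, 0.6], aware=False, gamma=0.3)
p_aware = infection_prob([0.6, 0.6], aware=True, gamma=0.3)
```

The two limiting cases in the text fall out directly: γ = 0 makes an aware firm immune, and γ = 1 makes awareness irrelevant.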
Theoretical analysis using the MMCA In this section, we use the MMCA to analyze the risk propagation process in the multiplex network and derive the risk propagation threshold of the ISN under the influence of the social network. Transition probabilities First, we define the adjacency matrices of the layers of the multiplex network, with M = (m_ij)_{N×N} the adjacency matrix of the risk perception layer. According to the above analysis, each enterprise in the model is in one of eight states. At time t, node i is in each of these states with a certain probability, denoted p_i^US(t), ..., p_i^CR(t), respectively. On the risk perception layer, the probability that the unaware node i remains unaware of any neighbors' risks at time t is denoted r_i(t). On the risk propagation layer, the probability that an unaware enterprise i is not infected by any neighbor at time t is defined as q_i^U(t), and the corresponding probabilities for an enterprise that discloses or conceals risk information are defined as q_i^D(t) and q_i^C(t), respectively. Their specific expressions are given in Eq. (14). Note that H(x) is the Heaviside step function: H(x) = 1 if x > 0, and H(x) = 0 otherwise. In other words, the value of r_i(t) is 0 when the fraction of node i's neighbors in state D surpasses the local awareness coefficient α, and 1 when that fraction is less than α. In addition, p_i^D(t) and p_i^I(t) represent the probabilities that node i is in states D and I at time t, respectively. Specifically, they are calculated as

p_i^D(t) = p_i^DS(t) + p_i^DI(t) + p_i^DR(t), p_i^I(t) = p_i^DI(t) + p_i^CI(t). (15)

Parameters and corresponding descriptions in the multiplex network are shown in Table 1. Probability evolution of different states The transition probability trees for the eight possible states are illustrated in Fig. 4; they describe the possible states of nodes and their transitions in the UCD-SIR model. Based on Eq.
(14) and the transition probability trees in Fig. 4, we use the MMCA method to establish the dynamic evolution equations of the eight possible states, given in Eq. (16). When the evolution time is long enough, the proportion of each state in the network reaches a stable state, i.e., it no longer changes with time. Therefore, for any time t and any state of node i, p_i^X(t + 1) = p_i^X(t) = p_i^X, where X is one of the eight states of the model; when t → ∞, Eq. (16) can thus be rewritten as Eq. (17). Risk propagation threshold There is a risk propagation threshold β_c in the ISN: owing to the cascading effect, if β ≥ β_c the risk becomes widely prevalent in the network; otherwise, the risk is gradually eliminated and does not greatly affect the stability of the network (Pastor-Satorras et al. 2015). Therefore, the analysis of the risk propagation threshold is of great significance for risk prevention and control. Noting that, near the risk propagation threshold, the fraction of infected nodes is close to zero, i.e., p_i^I = p_i^DI + p_i^CI → 0, let p_i^I = ε_i ≪ 1. Then, according to Eq. (14) and ignoring higher-order terms, approximate values of q_i^U(t), q_i^D(t), and q_i^C(t) are obtained as in Eq. (18). From the second equation in Eq. (15), p_i^I can then be rewritten as Eq. (19). According to Eq. (18) and β^D = β^C = γβ^U, Eq. (19) can be further written as Eq. (23), where e_ji is an element of the identity matrix. The risk propagation threshold β_c^U of the ISN is the minimum value of β^U satisfying Eq. (23) (Guo et al. 2015). Denoting by Λ_max(H) the largest eigenvalue of the matrix H defined by Eq. (23), the risk propagation threshold of the ISN is equal to:

β_c^U = μ / Λ_max(H). (24)

According to Eq.
(24), the risk propagation threshold depends mainly on the recovery rate (μ), the degree of symbiotic dependence (ε_ij), the total influence coefficient of infected enterprise i in risk recovery (η_i), the dynamics on the risk perception layer (p_i^C, p_i^D), and the network structure of the risk propagation layer, i.e., the topological structure of the ISN. Simulation This section conducts a numerical simulation analysis of the evolution of risk propagation in the ISN, to examine the risk propagation mechanism under risk information disclosure and concealment. First, we construct the upper-layer social network and the lower-layer ISN with the WS model and the modified BBV model, respectively. Both networks have 100 nodes (N = 100). In addition, the average degree (K) of the upper network is 6.0, and the reconnection probability q is set to 0.2. The parameters of the lower network are set to N_0 = 3, w_0 = 1, δ = 3, m = 2, m_in = 1, and m_out = 1. When the system is stable, the proportion of enterprises that have recovered from risks (R) is calculated as R = Σ_i p_i^R / N, which describes the scope of risk propagation in the ISN. P_R(t), P_U(t), P_D(t), and P_C(t) indicate the proportions of recovered, unaware, disclosing, and concealing enterprises at time t, respectively. The other initial parameters are set as follows: the proportion of initially infective nodes is 5%, the risk attenuation factor γ = 0.3, the recovery rate for infective enterprises μ = 0.2, the inherent risk propagation rate β = 0.6, the local awareness coefficient α = 0.2, the disclosure probability of risk information p = 0.6, and the probability of losing risk awareness λ = 0.3.
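The threshold analysis above reduces to a largest-eigenvalue computation. A minimal sketch follows, assuming the standard MMCA form β_c^U = μ/Λ_max(H) and a toy 3-node matrix H; in the paper, H additionally mixes in the awareness probabilities and dependence terms, so this is only the numerical skeleton.

```python
def largest_eigenvalue(H, iters=200):
    """Power iteration for the largest (in magnitude) eigenvalue of a
    square matrix given as a list of rows."""
    n = len(H)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(H[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]          # renormalise
    return lam

# Toy H: adjacency of a 3-node complete graph (largest eigenvalue 2).
H = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
mu = 0.2
beta_c = mu / largest_eigenvalue(H)       # threshold under the assumed form
```

Raising the disclosure probability p shrinks the effective entries of H (more aware, attenuated firms), which lowers Λ_max(H) and hence raises the threshold, in line with the three-stage behaviour reported below.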
In order to avoid randomness in the simulation results, each data point in the simulation results is the average of 500 runs under the same parameter conditions. Structural characteristics of the ISN In this section, we use numerical simulations to verify the scale-free characteristics of the ISN; the distributions of node degree and strength are shown in double-logarithmic coordinates in Fig. 5. Following Barabasi and Albert (1999) and Li and Xiao (2017), a network has scale-free characteristics when the node degrees satisfy a power-law distribution, i.e., p(k) ∼ k^(−γ*), which in logarithmic coordinates satisfies log p(k) = −γ* log k + a. From Fig. 5, we can see clear straight-line characteristics, which indicate power-law distributions and hence scale-free networks. More specifically, all four distributions, obtained for different network sizes, obey a power law, which indicates that there are a few hub nodes with high degree and strength in the ISN. In addition, the network size does not affect the scale-free characteristics of the generated network. The evolution process of risk propagation in the ISN With the above parameter settings, the evolution of the proportions of the various states during risk propagation is analyzed over 200 simulation time steps. As shown in Fig. 6, the evolution of the state proportions in the risk propagation layer, the risk perception layer, and the multiplex network is depicted, respectively. As can be seen from Fig. 6a, risks propagate from the initially infected enterprises, and the proportion of infected enterprises reaches its maximum within a short time; thereafter, the proportion of infected enterprises decreases steadily until the risks are completely eliminated. From Fig. 6b, it can be found that risk information propagates rapidly in the early stage of a risk outbreak.
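The straight-line check behind Fig. 5 can be sketched as an ordinary least-squares slope of log p(k) against log k; the function name and the synthetic k^(-2.5) data are illustrative, and on exact power-law data the fit recovers the exponent.

```python
import math

def powerlaw_slope(ks, ps):
    """Least-squares slope of log p(k) vs log k -- the straight-line
    criterion log p(k) = -gamma_star * log k + a used for Fig. 5."""
    xs = [math.log(k) for k in ks]
    ys = [math.log(p) for p in ps]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic p(k) ~ k^-2.5: the fitted slope should be -2.5.
ks = [2, 4, 8, 16, 32]
ps = [k ** -2.5 for k in ks]
slope = powerlaw_slope(ks, ps)
```

Applying the same fit to the simulated degree and strength distributions at several network sizes is how the scale-free claim for the generated ISN is checked.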
With the gradual reduction of risks, the proportion of aware enterprises stops increasing and remains within a stable range. In addition, Fig. 6c depicts the evolution of the eight states in the multiplex network. The proportion of unaware and susceptible enterprises declines steadily and finally stabilizes at about 15%. Similarly to Fig. 6a, the proportion of infected enterprises, whether disclosing or concealing, first increases and then decreases until it disappears completely. Once the network is stable, every enterprise is either susceptible or recovered. Specifically, among the susceptible enterprises, about 21% are aware of risks and disclose them, 8% are aware of risks but conceal them, and about 15% remain unaware of risks, which indicates that awareness of risks may reduce their propagation. Among the recovered enterprises, about 18% are unaware of risks, about 32% are aware of risks and disclose them, and only 6% conceal them, which shows that recovered enterprises tend to disclose risks. The impact of disclosure and concealment on enterprise states In order to discuss the impact of enterprises' disclosure behavior on risk propagation, we draw a phase diagram of the steady-state proportion of recovered enterprises (R) as a function of the risk propagation rate (β) and the disclosure probability of risk information (p); the results are shown in Fig. 7. The other parameters are set as μ = 0.2, γ = 0.3, α = 0.2, and λ = 0.3. If the risk propagation rate is small, the disclosure probability of risk information has little impact on risk propagation, because there are few infective enterprises in the system. For β ∈ [0.1, 0.7), the general trend is that, when β is fixed, the proportion of recovered enterprises R decreases as the disclosure probability of risk information p increases.
This is because a larger p means that more enterprises disclose risk information, which promotes enterprises' awareness of risks and inhibits risk propagation. If the risk propagation rate is large, the impact of risk propagation can be reduced only when the disclosure probability of risk information is large enough. Furthermore, when the disclosure probability of risk information remains unchanged, the higher the risk propagation rate, the larger the proportion of recovered enterprises and the wider the scope of risk propagation. From the above analysis, it can be seen that the disclosure of risk information by enterprises influences risk propagation in the ISN, and that the scope of risk propagation decreases as the disclosure probability of risk information increases. Therefore, enterprises should promptly notify their symbiotic partners once they become aware of risks, so as to reduce the scope of risk propagation in the ISN. In Fig. 8, the evolution of the proportions of disclosing and concealing enterprises is given for different disclosure probabilities, with β = 0.6. From Fig. 8a, it can be found that when the disclosure probability of risk information is very low (p ≤ 0.3), the proportion of disclosing enterprises (P_D) first increases rapidly to a peak, then decreases gradually, and finally stabilizes at 0. However, when p ≥ 0.5, the proportion of disclosing enterprises increases rapidly over time and then gradually stabilizes; in addition, the stable value of P_D increases with p. Similarly to Fig. 8a, Fig. 8b shows that the proportion of concealing enterprises basically increases first and then decreases until it stabilizes. In particular, when p ≤ 0.3 the proportion of concealing enterprises finally stabilizes at 0, and when p ≥ 0.5 its stable value decreases as p increases. Combining Fig.
8a and b, under the condition p ≥ 0.5, it can be found that the stable value of the proportion of aware enterprises (P_D + P_C) increases with p; the overall risk awareness of the network therefore rises with the disclosure probability. The impact of disclosure on local risks of each enterprise In this section, the impact of disclosure on the local risks of each enterprise in the ISN is analyzed, with the other parameters unchanged, i.e., β = 0.6, μ = 0.2, α = 0.2, γ = 0.3, and λ = 0.3. The simulation results for the ratio of infected neighbors as a function of the disclosure probability of risk information are shown in Fig. 9, which gives the evolution of the impact of disclosure on the ratio of infected neighbors of each node. The ratio of infected neighbors of a node reflects the probability of risk occurrence among the enterprise's symbiotic partners: the higher the ratio, the more risks occur among those partners, i.e., the greater the local risk of the enterprise. From Fig. 9, it can be seen that, as time evolves, the ratio of infected neighbors of any node basically increases first and then decreases until it becomes stable, which reflects the process of risk propagation as well as enterprise recovery. It is also evident that the ratio of infected neighbors decreases significantly when the disclosure probability of risk information p is increased from 0.3 to 0.9. These findings indicate that, by disclosing risk information, enterprises can make more of their symbiotic partners perceive risks, thereby reducing the probability of risk occurrence among those partners and creating a healthier business environment for themselves. The impact of disclosure on risk propagation threshold The impact of p on the risk propagation threshold is shown in Fig. 10. From Fig. 10, we find that this impact can be divided into three stages. In the first stage, i.e., p ∈ [0, 0.45), the change of p has little effect on the risk propagation threshold, because P_D + P_C tends to 0 when the network is stable, as shown in Fig. 8; then, according to Eq.
(24), the risk propagation threshold is independent of p. In the second stage, i.e., p ∈ [0.45, 0.55), the risk propagation threshold increases rapidly with p. In the third stage, i.e., p ∈ [0.55, 1], the risk propagation threshold continues to increase with p, but at a relatively slow rate. These results indicate that there is a critical value of the disclosure probability of risk information: once that value is reached, the risk propagation threshold can be raised. Conclusion and discussion The exchange of by-products and waste between enterprises in the industrial symbiosis system couples them into an industrial ecological chain (Li and Xiao 2017). The symbiotic relationships formed by material exchange promote the recycling of waste and balance the conflict between ecological protection and economic development. However, during the development of the industrial symbiosis system, various internal and external risk factors inevitably lead to risk propagation in the system. In addition, for strategic reasons, not all enterprises are willing to disclose their operating status and risk information. Therefore, in order to explore the impact of risk information disclosure on risk propagation in the ISN, this paper constructs the UCD-SIR model and analyzes the factors influencing risk propagation in the ISN through both theoretical analysis and numerical simulation. Results The main results are as follows: (1) This research treats the disclosure of risk information as an effective strategy for reducing risk propagation in the network. In particular, the simulation results confirm that an enterprise's disclosure behavior not only mitigates the spread of risks in the symbiotic network, but also enables the enterprise's symbiotic partners to respond quickly to potential risks, thereby maintaining the stability of the self-sustaining ecological-economic environment.
(2) Considering the difference in the amounts of waste flowing between enterprises, the ISN should be a weighted, directed network. A new evolution model based on the BBV model is constructed to generate the ISN in the lower layer, and the four distributions obtained by numerical simulation for different network sizes all present scale-free characteristics. (3) In order to study the impact of voluntary information disclosure on risk propagation, the risk propagation layer, constructed with the susceptible-infective-recovered (SIR) model, is coupled with the risk perception layer, constructed with the unaware-conceal-disclose (UCD) model; the coupled model is the UCD-SIR model. The states of the nodes and their transition rules in each layer are described in detail, allowing the state transition process of nodes in the symbiotic network to be observed more intuitively. Implications Based on the research results, we put forward the following management implications to alleviate risk propagation in the ISN. Firstly, enterprises should adhere to a development concept of win-win cooperation, establish symbiotic relationships of mutual trust and mutual benefit, and actively share relevant information. Good communication and extensive interaction among different stakeholders are basic requirements for establishing and maintaining any successful industrial symbiotic relationship (Song et al. 2018). Similarly, information asymmetry is one of the important factors affecting the construction and stable development of the ISN. Therefore, when enterprises suffer from risks, or perceive risks in the network and proactively disclose risk information, other enterprises may be helped to identify risks quickly and respond actively. By cutting off the propagation paths of risks, the possibility of risk occurrence is reduced, preventing further propagation of risks.
Secondly, the authenticity, integrity, and transparency of risk information are important guarantees of close cooperation among symbiotic members. On the one hand, active disclosure of risk information increases the density of risk-aware enterprises in the network and creates a safer and more resilient business environment for the discloser itself. On the other hand, it reduces the transaction costs of repeated games with symbiotic partners, helps the enterprise better fulfill its corporate social responsibility, and improves its corporate image. However, it is necessary to prevent risk information from being over-interpreted or exaggerated. For each enterprise, actively obtaining risk information is conducive to improving its level of risk perception, preventing risks in advance, and protecting the stability of its own supply chain; but if a risk is over-exaggerated, the enterprise's limited resources may be used unscientifically or unreasonably, which will affect its normal operation. Finally, risk is not a sudden attack, but a process of gradual accumulation. The theoretical results show that the risk propagation threshold depends mainly on the recovery rate, the degree of symbiotic dependence, the proportion of enterprises that perceive risks, and the topological structure of the ISN. Therefore, enterprises may perceive risks before they occur and take risk prevention and control measures to reduce the possibility of risk occurrence. Specifically, the ability of enterprises to recover from risks should be strengthened, the degree of symbiotic dependence appropriately reduced, and the proportion of enterprises that perceive risks increased. Moreover, the disclosure probability of risk information has a critical effect on the risk propagation threshold: once it reaches a certain value, the risk propagation threshold can be significantly raised.
Network modeling of ISNs Previous studies on the modeling of ISNs mainly focus on specific symbiotic networks, and some also involve weighted and directed networks (Xiao et al. 2012; Yang and Zheng 2020; Zeng et al. 2013); however, they ignore the flow of waste through the network. In order to simulate the formation of real symbiotic relationships, an evolutionary model is established based on the BBV model to describe the waste exchange process between enterprises in the ISN, in which the weight and direction of an edge represent, respectively, the waste trading volume and the direction of supply and demand between symbiotic enterprises. Moreover, the generated network satisfies a power-law distribution, which means it closely resembles most actual ISNs. It is therefore reasonable and effective to employ this evolutionary model to simulate real ISNs. Risk propagation and resilience analysis of complex networks Some existing research focuses on adjusting the network structure to reduce risk propagation and improve the resilience and stability of the network (Chopra and Khanna 2014; Yang and Zheng 2020). We focus instead on adjusting the enterprises' behavioral strategies, because, for an already formed network, it is arduous to improve resilience by adjusting network redundancy or structure: doing so would change the material flows across the entire network and incur a huge cost. Existing research on resilience in industrial symbiosis networks mostly addresses static aspects such as key-node identification and vulnerability assessment (Chopra and Khanna 2014; Wang et al. 2017; Zeng 2020; Zeng et al. 2013). In view of this, and considering the interaction of risk perception and risk propagation, the UCD-SIR model is constructed to explore the risk propagation mechanism and influencing factors of the ISN. Beyond sharing the methodology of Granell et al. (2013) and Zhu et al.
(2021a), our research improves on their parameter settings and node states. In our model, we define the degree of symbiotic dependence between symbiotic enterprises and the total influence coefficient of infected enterprises in risk recovery, to reflect the heterogeneity of risk triggering and recovery across enterprises. Information disclosure in complex networks There is little literature on the dynamic propagation mechanism of risks, and on risk mitigation, in the ISN from the perspective of risk information disclosure. During the operation of the industrial symbiosis system, the information disclosure behavior of enterprises allows them to adopt more precise risk prevention and control strategies, rather than broadly adjusting the network structure. Huo et al. (2020) constructed multiplex networks to study the influence of herd mentality and risk preference on supply chain risk propagation, dividing enterprises into those with and without information; our research further refines the states of information disclosure. In summary, the transition probabilities of enterprise states during risk propagation under information diffusion are analyzed, and the expression for the risk propagation threshold is derived with the Microscopic Markov Chain Approach (MMCA). In addition, through the numerical simulations, we find that a risk propagation threshold does exist (refer to section "The impact of disclosure and concealment on enterprise states"), which is consistent with Granell et al. (2013) and Li et al. (2019). However, our research differs from their work in that it further analyzes the impact of disclosure and concealment on the risk propagation threshold, which can be divided into three stages, i.e., independent, rapid increase, and slow increase (refer to section "The impact of disclosure on risk propagation threshold").
This provides a new idea for raising the risk propagation threshold and alleviating risk propagation. Limitations Our research nevertheless has certain limitations. First, the industrial symbiosis network constructed here uses waste as the only material exchanged between enterprises, without considering the diversity of material types. Second, risk propagation in the ISN is studied only from a coarse information-disclosure perspective, without distinguishing between, for example, voluntary, externally promoted, or mandatory disclosure, which may affect risk propagation differently. For example, mandatory government regulations or subsidies, as well as contractual mechanisms between symbiotic enterprises, can promote information disclosure and reduce risk propagation in the ISN. Further research can consider three directions: (1) improving the algorithm of the industrial symbiosis network model to make it more general and universal; (2) examining the impact of government reward-and-punishment mechanisms or contractual constraints on information disclosure and risk propagation among symbiotic enterprises; and (3) since information disclosure among symbiotic enterprises may promote the formation of symbiotic relationships and the stability of the system, exploring the impact of risk-information disclosure on the stability of symbiotic relationships.
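As an illustration of the two ingredients discussed above, a BBV-style weighted network can be grown and a simple SIR-type risk propagation run on it, showing the threshold behavior qualitatively. This is only a sketch of the general mechanism, not the paper's UCD-SIR model or its parameter settings; all function names and parameter values here are our own illustrative choices.

```python
import random

def grow_bbv_network(n, m=2, w0=1.0, delta=0.5, seed=1):
    """Grow a weighted network in the spirit of the BBV model: each new node
    attaches to m existing nodes chosen with probability proportional to node
    strength, and each new link reinforces the target's edges by delta."""
    rng = random.Random(seed)
    w = {}  # edge weights (waste trading volume), keyed by frozenset({u, v})
    nodes = list(range(m + 1))
    for u in nodes:                       # start from a small clique
        for v in nodes:
            if u < v:
                w[frozenset((u, v))] = w0
    strength = {u: m * w0 for u in nodes}
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:           # strength-preferential attachment
            r = rng.uniform(0.0, sum(strength[u] for u in nodes))
            acc = 0.0
            for u in nodes:
                acc += strength[u]
                if acc >= r:
                    targets.add(u)
                    break
        for t in targets:
            s_t = strength[t]
            for e in [e for e in w if t in e]:   # local weight reinforcement
                inc = delta * w[e] / s_t
                w[e] += inc
                for u in e:
                    strength[u] += inc
            w[frozenset((new, t))] = w0
            strength[t] += w0
            strength[new] = strength.get(new, 0.0) + w0
        nodes.append(new)
    return nodes, w

def sir_spread(nodes, w, beta, gamma=0.3, seed=2, steps=50):
    """Discrete-time SIR risk propagation on the weighted network; the
    transmission probability grows with the waste trading volume."""
    rng = random.Random(seed)
    neighbors = {u: [] for u in nodes}
    for e, wt in w.items():
        u, v = tuple(e)
        neighbors[u].append((v, wt))
        neighbors[v].append((u, wt))
    state = {u: 'S' for u in nodes}
    state[nodes[0]] = 'I'                 # seed the risk at one enterprise
    for _ in range(steps):
        infected = [u for u in nodes if state[u] == 'I']
        if not infected:
            break
        for u in infected:
            for v, wt in neighbors[u]:
                if state[v] == 'S' and rng.random() < 1 - (1 - beta) ** wt:
                    state[v] = 'I'
            if rng.random() < gamma:      # recovery
                state[u] = 'R'
    return sum(1 for s in state.values() if s != 'S')  # final outbreak size
```

With beta = 0 the risk never leaves the seed enterprise; raising beta (or the trading volumes) pushes the system past the propagation threshold and the outbreak size jumps.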
Three Types of Fuzzy Controllers Applied in High-Performance Electric Drives for Three-Phase Induction Motors Electric drives are very common in industrial applications because they provide high dynamic performance. A wide variety of schemes now exist to control the speed, the electromagnetic torque, and the stator flux of three-phase induction motors. However, control remains a challenging problem for industrial applications requiring high dynamic performance, because induction motors exhibit significant nonlinearities and many of their parameters vary with the operating conditions. Although Field Oriented Control (FOC) [16] schemes are attractive, they suffer from a major disadvantage: they are sensitive to motor parameter variations such as the rotor time constant, and to incorrect flux estimation at low speeds. Another popular scheme for electric drives is the direct torque control (DTC) scheme [15][8]; a further DTC variant based on the space vector modulation (SVM) technique reduces the torque ripple. This scheme does not need current regulators because its control variables are the electromagnetic torque and the stator flux. In this chapter we use the DTC-SVM scheme to analyze the performance of our proposed fuzzy controllers. Introduction Electric drives are very common in industrial applications because they provide high dynamic performance. A wide variety of schemes now exist to control the speed, the electromagnetic torque, and the stator flux of three-phase induction motors. However, control remains a challenging problem for industrial applications requiring high dynamic performance, because induction motors exhibit significant nonlinearities and many of their parameters vary with the operating conditions.
Although Field Oriented Control (FOC) [16] schemes are attractive, they suffer from a major disadvantage: they are sensitive to motor parameter variations such as the rotor time constant, and to incorrect flux estimation at low speeds. Another popular scheme for electric drives is the direct torque control (DTC) scheme [15][8]; a further DTC variant based on the space vector modulation (SVM) technique reduces the torque ripple. This scheme does not need current regulators because its control variables are the electromagnetic torque and the stator flux. In this chapter we use the DTC-SVM scheme to analyze the performance of our proposed fuzzy controllers. In the last decade, there has been increasing interest in combining artificial-intelligence control tools with conventional control techniques. The principal motivation for such hybrid implementations is that fuzzy logic can deal more effectively with issues such as uncertainty (unknown variations in plant parameters and structure), thereby improving the robustness of the control system. Conventional controllers are very stable and allow various design objectives, such as the steady-state and transient characteristics of a closed-loop system, to be met. Several works [5][6] have contributed to the design of such hybrid control schemes. Unlike conventional PI controllers, fuzzy controllers do not necessarily require an accurate mathematical model of the process to be controlled; instead, they use experience and knowledge about the controlled process to construct the fuzzy rule base. Fuzzy logic controllers are a good alternative for motor control systems since they are well known for handling uncertainty and imprecision. For example, in [1] PI and fuzzy logic controllers are used to control the load angle, which simplifies the induction motor drive system.
In [7], fuzzy controllers are used to dynamically obtain the reference voltage vector in terms of the torque error, the stator flux error, and the stator flux angle; in this case, both the torque and stator flux ripples are remarkably reduced. In [10], a fuzzy PI speed controller gives a better response over a wide range of motor speeds, and in [3] a fuzzy self-tuning controller is implemented to substitute for the single PI controller present in the DTC-SVM scheme. In that case, performance measures such as the settling time, rise time, and ITAE index are lower than for the DTC-SVM scheme with a PI controller. A fuzzy inference system can also be used to modulate the stator voltage vector applied to the induction motor [18]. In this case, unlike the cases mentioned above, the quantity of available vectors is arbitrarily increased, allowing better performance of the control scheme and lower ripple levels than the classic DTC; however, it requires the stator current as an additional input, increasing the number of input variables. In this chapter we design and analyze in detail three kinds of fuzzy controllers: the PI fuzzy controller (PI-F), the PI-type fuzzy controller (PIF), and the self-tuning PI-type fuzzy controller (STPIF). All of these fuzzy controllers are applied to a direct torque control scheme with the space vector modulation technique for a three-phase induction motor. In this DTC-SVM scheme, the fuzzy controllers generate corrective control actions based on the real torque trend only, while minimizing the torque error.
The three-phase induction motor dynamical equations By the definitions of the flux, current, and voltage space vectors, the dynamical equations of the three-phase induction motor in the stationary reference frame can be put into the standard mathematical form of [17], where u_s is the stator voltage space vector, i_s and i_r are the stator and rotor current space vectors, respectively, ψ_s and ψ_r are the stator and rotor flux space vectors, ω_r is the rotor angular speed, R_s and R_r are the stator and rotor resistances, and L_s, L_r, and L_m are the stator, rotor, and mutual inductances, respectively. The electromagnetic torque t_e is expressed in terms of the cross-vectorial product of the stator and the rotor flux space vectors, where γ is the load angle between the stator and rotor flux space vectors, P is the number of pole pairs, and σ = 1 − L_m^2/(L_s L_r) is the dispersion factor. The three-phase induction motor model was implemented in MATLAB/Simulink as shown in [2]. The principle of direct torque control In direct torque control, if the sample time is short enough, the stator voltage space vector imposed on the motor keeps the stator flux constant at the reference value. The rotor flux then becomes constant, because it changes more slowly than the stator flux. The electromagnetic torque (6) can be quickly changed by changing the angle γ in the desired direction, and this angle can easily be changed by choosing the appropriate stator voltage space vector. For simplicity, let us assume that the stator phase ohmic drop can be neglected in (1); therefore dψ_s/dt = u_s. During a short time Δt in which the voltage space vector is applied, we have Δψ_s = u_s Δt (7). Thus the stator flux space vector moves by Δψ_s in the direction of the stator voltage space vector, at a speed proportional to the magnitude of the stator voltage space vector. By selecting the appropriate stator voltage vector step by step, it is possible to change the stator flux in the required direction.
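Since the numbered equations did not survive extraction, the torque relation referred to as equation (6) can be illustrated numerically. The sketch below assumes the usual DTC load-angle form, t_e = (3/2) P L_m/(σ L_s L_r) |ψ_s||ψ_r| sin γ, with flux space vectors represented as complex numbers; the parameter values are illustrative, not the chapter's Table 5 values.

```python
import cmath
import math

def dispersion_factor(Ls, Lr, Lm):
    # sigma = 1 - Lm^2/(Ls*Lr), the dispersion (total leakage) factor
    return 1.0 - Lm ** 2 / (Ls * Lr)

def torque_from_fluxes(psi_s, psi_r, Ls, Lr, Lm, P):
    """t_e = (3/2) P Lm/(sigma Ls Lr) |psi_r||psi_s| sin(gamma), computed via
    the cross product Im(conj(psi_r) * psi_s) of the flux space vectors,
    represented here as complex numbers in the stationary frame."""
    sigma = dispersion_factor(Ls, Lr, Lm)
    cross = (psi_r.conjugate() * psi_s).imag   # = |psi_r||psi_s| sin(gamma)
    return 1.5 * P * Lm / (sigma * Ls * Lr) * cross

# illustrative parameters (NOT the chapter's values)
Ls = Lr = 0.1       # H
Lm = 0.09           # H
P = 2               # pole pairs
psi_r = 0.47 + 0j   # rotor flux aligned with the real axis, Wb

def torque_at(gamma):
    # stator flux of equal magnitude, leading the rotor flux by gamma
    return torque_from_fluxes(0.47 * cmath.exp(1j * gamma), psi_r, Ls, Lr, Lm, P)
```

The torque is zero at γ = 0, maximal at γ = π/2, and odd in γ, which is exactly why advancing the stator flux angle in the desired direction raises or lowers the torque.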
Direct torque control scheme with space vector modulation technique In Fig. 1 we show the block diagram of the DTC-SVM scheme [14] with a fuzzy controller; this fuzzy controller block is replaced by each of the three proposed fuzzy controllers, one at a time. The DTC-SVM scheme is an alternative to the classical DTC schemes [15], [8] and [9]. Here, the load angle γ* is not fixed in advance but is determined by the fuzzy controller. Equation (6) shows that the angle γ* determines the electromagnetic torque that is necessary to supply the load. The three proposed fuzzy controllers determine the load angle using the torque error e and the torque error change Δe. Details about these controllers are presented in the next section. Figure 1 shows the general block diagram of the DTC-SVM scheme; the control signals for the three-phase two-level inverter are generated by the DTC-SVM scheme. Flux reference calculation In the stationary reference frame, the stator flux reference ψ*_s can be decomposed into two perpendicular components, ψ*_ds and ψ*_qs. The output of the fuzzy controller, γ*, is added to the rotor flux angle ∠ψ_r in order to estimate the next angle of the stator flux reference. In this chapter we consider the magnitude of the stator flux reference to be constant; therefore, we can use the relation in equation (8) to calculate the stator flux reference vector. Estimator Moreover, if we consider the stator voltage u_s during a short time Δt, it is possible to reproduce a flux variation Δψ_s. Notice that the stator flux variation is nearly proportional to the stator voltage space vector, as seen in equation (7). Stator voltage calculation The stator voltage calculation uses the DC link voltage (U_dc) and the inverter switch states (S_Wa, S_Wb, S_Wc) of the three-phase two-level inverter. The stator voltage vector u_s is determined as in [4]. Electromagnetic torque and stator flux estimation As shown in Fig.
1, the electromagnetic torque and stator flux estimation depend on the stator voltage and stator current space vectors. The problem with this kind of estimation is that at low speeds the back electromotive force (emf) depends strongly on the stator resistance; to resolve this problem, the current model is used to improve the flux estimation, as in [13]. The rotor flux ψ_rdq represented in the rotor flux reference frame is given by equation (11), where T_r = L_r/R_r is the rotor time constant and ψ_rq = 0. Substituting this expression into equation (11) yields equation (12). In the current model the stator flux is represented by equation (13), where ψ_r^i is the rotor flux according to equation (12). Since the voltage model is based on equation (1), the stator flux in the stationary reference frame is given by equation (14). With the aim of correcting the errors associated with the pure integration and with the stator resistance measurement, the voltage model is adapted through a PI controller; the K_p and K_i coefficients are calculated following the recommendation proposed in [13]. The rotor flux ψ_r in the stationary reference frame is calculated as in equation (16). The estimator scheme shown in Fig. 2 performs well over a wide range of speeds (LPF denotes a low-pass filter). On the other hand, when equations (14) and (16) are substituted into (5), we can estimate the electromagnetic torque t_e. The PI fuzzy controller (PI-F) The PI fuzzy controller combines two simple fuzzy controllers and a conventional PI controller. The fuzzy controllers are responsible for generating the PI parameters dynamically while considering only the torque error variations. The PI-F block diagram is shown in Fig. 3; this controller is composed of two scale factors, G_e and G_Δe, at the input.
The inputs of the fuzzy controllers are the normalized error (e_N) and error change (Δe_N), and their outputs represent the proportional gain K_p and the integral time T_i, respectively. These parameters K_p and T_i are adjusted in real time by the fuzzy controllers. The gain K_p is limited to the interval [K_p,min, K_p,max], which we determined by simulations. For convenience, K_p is normalized to the range between zero and one through a linear transformation; since the fuzzy controller outputs this normalized value, K_p is then obtained by the inverse transformation. For different reference values, however, the range of proportional gain values must be chosen accordingly. Due to nonlinearities of the system, and in order to avoid overshoot for a large reference torque r, it is necessary to reduce the proportional gain. We use a gain coefficient ρ = 1/(1 + 0.002 r) that depends on the reference value, which achieves real-time adjustment of the K_p values. Therefore K_p,max = ρ K_p,max0, where the value K_p,max0 = 1.24 was obtained through various simulations. Note that both ρ and K_p,max decrease as the reference value increases; consequently, the gain K_p decreases. The PI-F controller receives the torque error e as input and produces the motor load angle γ* as output. Membership Functions (MF) In Fig. 3, the first fuzzy controller receives as inputs the errors e_N and Δe_N; each of them has three fuzzy sets that are defined similarly, so it is only necessary to describe the fuzzy sets of the first input. The first input e_N has three fuzzy sets whose linguistic terms are N-Negative, ZE-Zero and P-Positive. Each fuzzy set has a membership function associated with it. In our particular case, these fuzzy sets have trapezoidal and triangular shapes, as shown in Fig. 4. The universe of discourse of these sets is defined over the closed interval [−1.5, 1.5].
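The K_p limiting and normalization described above can be sketched in code. The linear transformation itself did not survive extraction, so the sketch assumes the usual min-max form, together with the reference-dependent coefficient ρ = 1/(1 + 0.002 r) and K_p,max0 = 1.24 given in the text; K_p,min is left as a caller-supplied parameter, since the chapter only states that it was determined by simulation.

```python
def rho(r):
    # reference-dependent coefficient: shrinks the gain range as the
    # torque reference r grows, to avoid overshoot
    return 1.0 / (1.0 + 0.002 * r)

def kp_bounds(r, kp_max0=1.24, kp_min=0.0):
    # kp_min = 0.0 is a placeholder default: the chapter finds it by simulation
    return kp_min, rho(r) * kp_max0

def normalize_kp(kp, kp_min, kp_max):
    # assumed form of the lost linear transformation: [kp_min, kp_max] -> [0, 1]
    return (kp - kp_min) / (kp_max - kp_min)

def denormalize_kp(kp_norm, kp_min, kp_max):
    # inverse map: recover Kp from the fuzzy controller's normalized output
    return kp_min + kp_norm * (kp_max - kp_min)
```

At r = 0 the upper bound is K_p,max0 itself; as r grows, ρ and hence K_p,max shrink, which is exactly the real-time gain reduction the text describes.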
Similarly, the second fuzzy controller has the same fuzzy sets for its two inputs; however, its output is defined by three constant values, 1.5, 2 and 3, whose associated linguistic values are S-Small, M-Medium and B-Big. This controller uses the zero-order Takagi-Sugeno model, which simplifies the hardware design and makes it easy to introduce programmability [19]. The defuzzification method used for this controller is the weighted sum. Scaling Factors (SF) The PI-F controller has two scaling factors, G_e and G_Δe, at its inputs, while the fuzzy controller outputs are the gain K_p and the integral time T_I, respectively. From these values we can calculate the parameter K_I = K_p/T_I. The scale factors ensure that both inputs are within the universe of discourse previously defined; the inputs are normalized by these scale factors. The parameters K_p and K_I are the tuned parameters of the PI controller. The rule bases The rules are based on simulations that we conducted of various control schemes. Fig. 6 shows an example of one possible system response. Initially, the error is positive (around point a) and the error change is negative, so a large control signal is imposed in order to obtain a small rise time. To produce a large control signal, the PI controller should have a large gain K_p and a large integral gain K_I (small integral time T_I); therefore a typical rule reads: R_x: if e_N is P and Δe_N is N then K_p is B. Figure 6. Response system. The rule base for the first fuzzy controller is given in Table 1, and the rule base for the second fuzzy controller in Table 2. The PI-type fuzzy controller (PIF) and the self-tuning PI-type fuzzy controller (STPIF) The PI-type fuzzy controller (PIF) is a fuzzy controller inspired by a digital PI controller, and is depicted in Fig. 8. It is composed of two input scale factors, G_e and G_Δe, and one output scale factor, G_γ*.
Finally, it uses a saturation block to limit the output. This controller has a single input variable, the torque error e, and one output variable, the motor load angle γ*, given by γ*(k) = γ*(k − 1) + Δγ*(k) (23), where k is the sampling instant and Δγ*(k) represents the incremental change of the controller output. We wish to emphasize that this accumulation (23) of the controller output takes place outside the fuzzy part of the controller and does not influence the fuzzy rules. Fig. 9 shows the block diagram of the self-tuning PI-type fuzzy controller (STPIF); its main difference from the PIF controller is the gain-tuning fuzzy controller (GTF) block. Membership Functions (MF) The MFs for the PIF controller are shown in Fig. 10(a). These MFs for the input variables e_N, Δe_N and the output variable Δγ*_N are normalized in the closed interval [−1, 1]. The MFs for the GTF controller are shown in Fig. 10, and the corresponding rule bases are given in Table 3 and Table 4. Scaling factors The two input SFs, G_Δe and G_e, and the output SF, G_γ*, can be adjusted dynamically by updating the scaling factor α, which is computed on-line using an independent fuzzy rule base. Figure 8. PI-type fuzzy controller. The rule bases The incremental change in the controller output, Δγ*_N, of the PIF controller is defined by the rule base. Gain tuning fuzzy The purpose of the GTF controller is to continuously update the value of α at every sample time.
The output α controls the percentage of the output SF G_γ*, and is therefore used in calculating the new Δγ*. The GTF controller rule base is based on knowledge about three-phase IM control, using a DTC-type control according to the scheme proposed in [14], in order to avoid large overshoot and undershoot. For example, when e and Δe have different signs, the estimated torque t_e is approaching the torque reference t*_e, so the output SF G_γ* must be reduced to a small value through α; for instance, if e is PM and Δe is NM then α is S. On the other hand, when e and Δe have the same sign, the estimated torque t_e is moving away from the torque reference t*_e, and the output SF G_γ* must be increased to a large value through α in order to prevent the torque from departing from the reference; e.g., if e is PM and Δe is PM then α is VL. The nonlinear relationships between (e, Δe, Δγ*_N) and (e, Δe, α) are shown in Fig. 11. The inference method used in the PIF and GTF controllers is Mamdani's implication based on max-min aggregation, and we use the center-of-area method for defuzzification. Simulation results We conducted our simulations with the MATLAB simulation package, which includes the Simulink block sets and the Fuzzy Logic Toolbox. The switching frequency of the pulse width modulation (PWM) inverter was set to 10 kHz, and the stator reference flux considered was 0.47 Wb. In order to investigate the effectiveness of the three proposed fuzzy controllers applied in the DTC-SVM scheme, we performed several tests under different dynamic operating conditions: a step change in the motor load (from 0 to 1.0 pu) at 90 percent of rated speed, a no-load speed reversal (from 0.5 pu to −0.5 pu), and the application of a specific load torque profile at 90 percent of rated speed. The motor parameters used in the tests are given in Table 5. Fig. 12 shows the response of the speed and electromagnetic torque under speed reversal for DTC-SVM with the PI-F controller; here, the rotor speed changes direction at about 1.8 seconds. Fig. 13 shows the sinusoidal behavior of the stator and rotor currents when the reversal is applied. Fig. 15 shows the torque and current responses when a step change in the motor load is applied for DTC-SVM with the PI-F controller; this test was established at 90 percent of rated speed. In Fig. 16, we show the speed response under speed reversal for DTC-SVM with the PIF controller; in this case the rotor speed changes direction at about 1.4 seconds. Fig. 17 shows the electromagnetic torque behavior when the reversal is applied. Fig. 18 and Fig. 19 show the responses of the electromagnetic torque and the phase a stator current, respectively, for a step change in the motor load for DTC-SVM with the PIF controller. In this test the motor speed was set to 90 percent of rated speed. Fig. 20 shows the behavior of the electromagnetic torque, the phase a stator current, and the motor speed under a speed reversal from 0.5 pu to −0.5 pu in the DTC-SVM scheme with the STPIF controller. The sinusoidal waveform of the current shows that this control technique also leads to good current control. Figure 19. Phase a stator current for sudden torque change for DTC-SVM with PIF controller. Table 5. Induction Motor Parameters [12]. Conclusion In this chapter we have presented the DTC-SVM scheme controlling a three-phase IM using three different kinds of fuzzy controllers. These fuzzy controllers were used to determine, dynamically and on-line, the load angle between the stator and rotor flux vectors, and therefore the electromagnetic torque necessary to supply the motor load. We have conducted simulations under different operating conditions.
Our simulation results show that all the proposed fuzzy controllers work appropriately and in accordance with the schemes reported in the literature. However, the STPIF controller achieves a fast torque response and low torque ripple over a wide range of operating conditions, such as sudden changes in the command speed and step changes of the load.
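The max-min Mamdani inference and the external accumulation of the load angle (equation (23)) used by the PIF/STPIF controllers can be sketched as follows. This is an illustrative toy with three triangular fuzzy sets and a 3 × 3 rule base, not the chapter's actual membership functions or rule tables; all names and values here are our own.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# toy normalized fuzzy sets on [-1, 1]: N(egative), Z(ero), P(ositive)
SETS = {'N': (-2.0, -1.0, 0.0), 'Z': (-1.0, 0.0, 1.0), 'P': (0.0, 1.0, 2.0)}

# toy rule base: (e set, de set) -> output set for the increment
RULES = {('N', 'N'): 'N', ('N', 'Z'): 'N', ('N', 'P'): 'Z',
         ('Z', 'N'): 'N', ('Z', 'Z'): 'Z', ('Z', 'P'): 'P',
         ('P', 'N'): 'Z', ('P', 'Z'): 'P', ('P', 'P'): 'P'}

def mamdani_increment(e_n, de_n, samples=101):
    """Max-min Mamdani inference with center-of-area defuzzification."""
    num = den = 0.0
    for i in range(samples):
        y = -1.0 + 2.0 * i / (samples - 1)
        mu = 0.0
        for (se, sde), sout in RULES.items():
            fire = min(tri(e_n, *SETS[se]), tri(de_n, *SETS[sde]))  # min = AND
            mu = max(mu, min(fire, tri(y, *SETS[sout])))            # max aggregation
        num += mu * y
        den += mu
    return num / den if den else 0.0

class PIFController:
    """PI-type fuzzy controller: the load angle is accumulated
    outside the fuzzy part, as in equation (23)."""
    def __init__(self, ge, gde, gg):
        self.ge, self.gde, self.gg = ge, gde, gg   # input/output scale factors
        self.prev_e = 0.0
        self.gamma = 0.0
    def step(self, e, alpha=1.0, limit=1.5):
        # in the STPIF variant, alpha would come from the GTF controller
        de = e - self.prev_e
        self.prev_e = e
        d = mamdani_increment(max(-1.0, min(1.0, self.ge * e)),
                              max(-1.0, min(1.0, self.gde * de)))
        self.gamma += alpha * self.gg * d                 # gamma*(k) = gamma*(k-1) + dgamma*(k)
        self.gamma = max(-limit, min(limit, self.gamma))  # saturation block
        return self.gamma
```

Note the symmetry of the toy rule base: negating both inputs negates the increment, and a zero error with zero error change produces no change in the load angle.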
Neuroimaging alterations related to status epilepticus in an adult population: Definition of MRI findings and clinical‐EEG correlation Magnetic resonance imaging (MRI) provides an opportunity for identifying peri‐ictal MRI abnormalities (PMAs) related to status epilepticus (SE). Extremely variable MRI alterations have been reported previously during or after SE, mainly in small selected populations. In a retrospective monocentric study, we analyzed brain MRI changes observed in the ictal/postictal periods of SE in an adult population. We included all consecutive patients observed in a 5‐year period with an electroclinical diagnosis of SE and an MRI performed within 30 days from the beginning of SE. We identified 277 patients. Among them, 32 (12%) showed PMAs related to SE. The duration of SE was strongly associated with MRI alterations, showing a mean duration of 6 days vs 2 days (P = .011) in the group with and without MRI alterations, respectively. Focal electroencephalography (EEG) abnormalities (P = .00003) and in particular, lateralized periodic discharges (LPDs) (P < .0001) were strongly associated with PMAs. MRI alterations were unilateral (23 patients, 72%), located in multiple brain structures (19 patients, 59%), and involving mesiotemporal structures (17 patients, 53%). Sixteen patients (50%) had good spatial correspondence between cortical PMAs and the focal EEG pattern; 12 patients (38%) with focal EEG pattern showed cortical PMAs plus MRI signal changes also involving subcortical structures. A follow‐up MRI was available for 14 of 32 patients (44%): 10 patients presented a disappearance of PMAs, whereas in 4, PMAs were still present. This study demonstrates that a long duration SE and the presence of certain EEG patterns (LPDs) are associated with the occurrence of PMAs. A good spatial concordance was observed between cortical PMA location and the EEG focus. 
| INTRODUCTION Status epilepticus (SE) is conceptually defined as "a condition resulting either from the failure of the mechanisms responsible for seizure termination or from the initiation of mechanisms, which lead[s] to abnormally, prolonged seizures (after time point t1). It is a condition, which can have long-term consequences (after time point t2), including neuronal death, neuronal injury, and alteration of neuronal networks, depending on the type and duration of seizures." 1 This new definition underscores the pathophysiologic mechanisms that sustain SE development and its maintenance, thereby increasing the risk of neuronal injury. 2 Ongoing seizure activity is accompanied by an excessive release of glutamate, which activates postsynaptic N-methyl-D-aspartate (NMDA) receptors and triggers receptor-mediated calcium influx. This leads to desensitization and internalization of postsynaptic γ-aminobutyric acid A (GABA-A) receptors 3 and increased expression of proconvulsive neuropeptides, 4,5 creating a vicious cycle of self-sustained seizures. Calcium influx also causes a cascade of biochemical changes, mitochondrial dysfunction, oxidative stress, 6 modification of gene expression, and initiation of cell death. 7 At the same time, the sustained seizure activity increases cerebral glucose metabolism, oxygen and adenosine triphosphate (ATP) depletion, and lactate accumulation, finally leading to hypermetabolic neuronal necrosis. 8 Magnetic resonance imaging (MRI) provides an opportunity for early identification of alterations related to seizure activity. Variable periictal MRI alterations 9,10 (PMAs) have been reported in patients with SE, in either the ictal or the postictal period.
Restricted diffusion (high signal in diffusion-weighted imaging [DWI] sequences and a corresponding low apparent diffusion coefficient, ADC) 9,11 and hyperintensities in T2-weighted and fluid-attenuated inversion recovery (FLAIR) sequences, which can even appear simultaneously, 12 are the most frequently encountered alterations. These changes represent a continuum of cytotoxic (increased DWI and decreased ADC signal) and vasogenic edema (increased DWI and increased T2 without decreased ADC signal), mostly depending on the timing of the MRI examination. 9,10 Rarely, a T2 hypointensity can be seen in patients with ongoing SE. 9 Moreover, ictal hyperperfusion in MR perfusion (MRP) sequences, 11 correspondingly increased vascularity in MR angiography (MRA), and contrast enhancement, which are co-localized with the aforementioned alterations, may be seen. 9,13,14 PMAs have been observed in different cortical areas, 9,[15][16][17] as well as in subcortical structures. Previous studies also highlighted some preferential susceptibility regions or networks: the mesolimbic structures, 9,10,15,18 the pulvinar nucleus of the thalamus, 13,18,19 the splenium of the corpus callosum, 18,20 the contralateral cerebellum (a sign known as crossed cerebellar diaschisis), 13,18,21,22 the insular cortex and basal ganglia, 9,13 and the claustrum. 10,23 The majority of these cases had focal SE 18,24 and showed PMAs both locally, in the cortical area of the ictal activity, and in remote cortical or subcortical areas generally believed to represent regions involved in ictal activity at the network level (eg, the thalamus and ipsilateral pulvinar). 12,18,19 Moreover, lateralized periodic discharges (LPDs) plus seizures seem to be associated with the development of acute/subacute DWI alterations. 25 These MRI changes can be completely reversible, although their exact timing of appearance or disappearance is unknown 20 and varies among patients.
12 The changes can even persist and be followed by permanent alterations such as cortical laminar necrosis, mesial temporal sclerosis, 10,26 and focal brain atrophy. 9,13 | PURPOSE The MRI changes associated with SE have been described, but the reports in the literature are scarce and based mainly on small selected populations. There are only a few studies correlating MRI changes with electroclinical patterns in SE. Therefore, we aimed to identify and stratify the patterns of PMA associated with SE and to correlate them with electroclinical features of SE in a large series of adult patients. | Inclusion criteria and adopted definitions This is a retrospective monocentric study of an adult SE population studied with brain MRI in the ictal/postictal period of SE.

Key Points
• MRI is a useful, noninvasive, and easily available tool for identifying acute/subacute SE-related alterations
• A long duration of SE with LPDs is strongly associated with periictal MRI abnormalities
• Acute/subacute MRI changes are generally reversible, but long-term, permanent consequences could be generated

Status epilepticus (SE) was defined as a continuous seizure, or 2 or more discrete seizures between which there is no complete recovery of consciousness, lasting ≥5 minutes for convulsive SE (CSE) and more than 10 minutes for nonconvulsive status epilepticus (NCSE). 1 The inclusion criteria were the following: (1) an electroclinical diagnosis of SE, (2) an MRI that included all sequences of the hospital SE protocol, and (3) MRI performed within 30 days of the beginning of SE. | Enrollment strategies We searched the clinical and EEG database of the Department of Neurology, Christian Doppler Klinik, Paracelsus Medical University, Salzburg, Austria, between 01.01.2011 and 31.12.2015 (see Figure 1).
| MRI study protocol and analysis All included patients underwent high-resolution MRI (3-Tesla; Philips Achieva Stream, Andover, Massachusetts) using the standard protocol for SE patients routinely used at our institution. MRI sequences included T1-weighted 3-dimensional isovoxel (1 × 1 × 1 mm) turbo field echo images with (when needed) and without intravenous contrast application, axial and coronal T2-weighted turbo spin echo, T2-weighted FLAIR, and DWI sequences. Coronal T2-weighted (2 mm) and FLAIR (2.35 mm) slices were acquired at 90 degrees perpendicular to the long axis of the hippocampus. The acute/subacute MRI scans were reviewed independently by 2 raters, who judged the presence/absence of MRI alterations related to SE. Whenever there was any discordance, a third rater was consulted. Finally, for the patients with PMAs, we searched for and analyzed the follow-up MRIs whenever present. | EEG study details and clinical data collection Using informatics databases, for each included SE episode we collected clinical information such as age, gender, and type of treatment. Each SE episode was also classified in relation to etiology, duration, clinical manifestations, response to treatment, and EEG characteristics. The EEGs were acquired using the international 10-20 system. Each EEG recording lasted for at least 20 minutes and was assessed by board-certified neurophysiologists. | Statistical analysis Statistical analysis was performed using SPSS Statistics 23.0 (2015; IBM Corporation, Armonk, New York). Descriptive statistics were used to analyze and compare clinical and demographic variables in the whole population and in subgroups divided according to the presence/absence of PMAs. Categorical data were analyzed by means of a 2 × 2 χ2 test with Yates correction. Two-by-two comparisons were performed by means of the Mann-Whitney test. Inter-rater agreement for identifying SE-associated MRI alterations was assessed using Cohen's kappa coefficient (κ).
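Cohen's kappa, used here for the two MRI raters, is simple enough to compute directly. A minimal sketch on toy binary ratings (the study's per-case ratings are of course not reproduced here):

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' judgments over the same cases:
    kappa = (p_observed - p_chance) / (1 - p_chance)."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    # observed agreement: fraction of cases both raters labeled identically
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # chance agreement from each rater's marginal label frequencies
    p_chance = sum((rater1.count(c) / n) * (rater2.count(c) / n)
                   for c in categories)
    return (p_observed - p_chance) / (1.0 - p_chance)
```

Perfect agreement gives kappa = 1; agreement at chance level gives 0, which is why a kappa of .818, as reported below, indicates very high inter-rater agreement.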
F I G U R E 1 Selection process flowchart. We initially identified 1449 patients with either suspected or definite diagnosis of SE. Medical records of each patient were thereafter meticulously analyzed. We excluded 554 patients because, despite the initial suspicion, these patients did not have SE. Of the 895 selected SE cases, 286 (32%) underwent brain MRI scan in the first 30 days from the beginning of SE, and thus they were included in the study. For each included patient we reviewed brain MRI results, EEG recordings, and medical records: 2 patients were thereafter excluded because of lack of clinical information and 7 patients were excluded due to either presence of MRI artifacts that interfered with the analysis or because they received an incomplete MRI study. At the end of the search 277 patients fulfilled the criteria and were included in the study. Univariate logistic regression analysis was conducted to identify significant associations of each clinical variable with the presence of PMAs. The statistical significance cutoff was set at .05. | Population's demographic characteristics In the studied population (n = 277), 58% were male with a mean age of 63 years (ranging from 13 to 90 years). Thirty-two of them (12%) showed PMAs related to SE (see Table 1). | SE clinical characteristics The demographic and clinical characteristics of the patients are shown in Table 1. In the whole population, the most common SE etiologies were brain tumors (21%), SE in a previously diagnosed epilepsy (16%), and chronic cerebrovascular disease (11%). Lateralized periodic discharges (or LPDs) were present in 47% of patients with PMAs vs 13% of patients in whom MRI changes were not seen (P < .0001, OR 5.76, CI 2.62-12.67). Conversely, 172 patients were imaged after the end of SE. Among them 149 did not have LPD activity on the previous EEG, whereas 23 had LPDs in at least one of the EEGs recorded during the SE.
Eight of 149 patients without LPDs (6%) had PMA, whereas 4 of 23 with LPDs (17%) showed PMA (P < .035). Notably, the presence of LPDs and SE duration showed a "synergistic" effect on PMAs. Indeed, the median duration of SE in patients with LPDs and PMA was 11 days, whereas SE duration in patients with LPDs but without PMAs was 1 day. | Timing of MRI study In the majority (63%), MRI was performed after SE had ended. Among patients with SE-related MRI alterations, 20 of 32 patients (63%) had the investigation while SE was still ongoing, whereas only 78 of 245 patients (32%) without SE-related MRI alterations underwent MRI during the SE (P = .001). | Classification of SE-related MRI changes Inter-rater agreement for identifying SE-associated MRI alterations was very high (Cohen's kappa .818). An increased signal in DWI and ADC with or without corresponding hyperintensities in T2/FLAIR sequences was seen in 11 patients (34%); 14 patients (44%) presented with an increased signal in DWI and a corresponding decreased ADC value with or without hyperintensities in T2/FLAIR sequences; the remaining 7 patients presented an increased signal on DWI, but for them the ADC map was not available (see Table 2). | Correlation of EEG and MRI features PMAs were well co-localized with focal EEG ictal discharges in 16 patients (50%). Twelve patients (38%) had focal ictal activity on EEG, and MRI changes involving local structures corresponding to the site of the highest EEG activity plus subcortical structures (unilateral or bilateral changes in the pulvinar nucleus of the thalamus). Only 3 patients (9%) presented focal ictal activity on EEG and isolated deep homolateral thalamus involvement on the MRI without cortical alterations. On the other hand, the only patient with a diffuse ictal pattern on EEG presented a diffuse bilateral involvement of insular cortex and thalamus.
Among patients with LPDs, 6 (40%) presented with isolated local MRI changes corresponding to the focus of LPD activity; 6 (40%) showed focal MRI changes together with deep homolateral involvement of the pulvinar; 3 patients (20%) had isolated homolateral pulvinar involvement (Figures 2-5). | Follow-up MRI evaluations A follow-up MRI was performed (9 days to 3.6 years after the SE) in 14 of 32 patients (44%). MRI changes completely disappeared in 10 of 14 patients (71%), whereas in the remaining 4 of 14 (29%), signal alterations were attenuated but not completely resolved. No patients had unchanged alterations. | DISCUSSION In this retrospective study on a large single-center cohort, we identified 32 patients (12%) with MRI changes related to SE. The duration of the SE episode was the factor with the highest significant association with the appearance of PMAs. In addition, LPDs were strongly associated with SE-related MRI changes. Incidence of SE-related MRI signal alterations in retrospective series varies between 11.6% and 50%. 18,27 In our population, acute/subacute MRI changes were present in only 12% of the patients studied with MRI during or after an episode of SE. This low proportion is related mostly to the fact that only 35% of the patients received an MRI study during SE and most investigations were performed after cessation of SE. Because these MRI changes are supposedly caused by continuous or repetitive epileptic activity, a higher proportion of patients with PMAs can be expected if MRI is performed during the SE instead of after its cessation. 24,28 Nevertheless, it is still possible to find these alterations for some time after the end of SE; the duration could depend on two factors: patient characteristics (eg, age, comorbidities), or seizure characteristics (type and duration of ictal activity).
Among all investigated clinical parameters, we identified a crucial role of SE duration: the longer the duration of SE, the higher the probability of finding SE-related MRI alterations. 24,29 Moreover, these alterations were mostly associated with the presence of LPDs. Thus, long-lasting SE with LPD activity is most frequently associated with the presence of PMAs. 25,30 LPDs were present in patients with SE of various etiologies, such as cerebrovascular, autoimmune, or infectious. In 4 patients, PMAs were observed in the context of autoimmune encephalitis. In these cases, the observed MRI alterations can represent abnormalities related to the underlying etiology rather than a consequence of the ictal activity per se. Our results indicate that PMAs co-localize well with either the epileptic focus or remote areas presumably involved in the epileptic network, 15 such as the cortical connections to the pulvinar of the thalamus. 19 Even if there is a certain degree of susceptibility among different individuals 12 (variability in mitochondrial reserve in stress situations) and, in the same individual, among the different cerebral areas, we confirmed that the mesiotemporal structures are highly susceptible to ictal damage, and thus they were most frequently involved. 10,31 In the majority of our patients, PMA alterations were transient, 16 but because only a minority of our patients had a follow-up MRI we cannot draw any firm conclusion about their role as a possible biomarker of permanent functional and structural damage. CONCLUSIONS The most important limitations of the present study are its retrospective nature and the low number of patients with MRI acquired during SE itself. The number of follow-up MRI studies was also low and, as would be expected in a retrospective study, the MRIs were not performed at fixed intervals but at highly different time points in the course of SE or after its cessation.
These limitations do not allow us to attribute observed MRI alterations to SE per se. MRI changes might be caused by the nature of the possible underlying lesion, such as limbic encephalitis or stroke. It is challenging to disentangle the role of the underlying structural lesion, and we did not address this issue in the current study; it could be better tackled in a prospective design. A prospective study with MRI performed during the SE and at regular follow-ups would also better define the relationship between the electroclinical characteristics and MRI alteration patterns and the role of MRI in the early prediction of long-term consequences after SE. In summary, in this retrospective study, PMAs were observed mainly in association with prolonged SE and lateralized epileptiform discharges on EEG. MRI alterations affected different brain structures, involving mesiotemporal structures in more than half of the cases. Prof. Meletti received research grant support from the Ministry of Health (MOH) and from the nonprofit organization CarisMo Foundation; and has received personal compensation as a scientific advisory board member for UCB and Eisai. Prof. Meletti has received speakers' honoraria from UCB, Eisai, and Sandoz. Dr Giovannini, Dr Kuchukhidze, and Dr McCoy have no disclosures. We confirm that we have read the Journal's position on issues involved in ethical publication and affirm that this report is consistent with those guidelines.
Spatio-Temporal Variations and Driving Forces of Harmful Algal Blooms in Chaohu Lake: A Multi-Source Remote Sensing Approach Harmful algal blooms (hereafter HABs) pose significant threats to aquatic health and environmental safety. Although satellite remote sensing can monitor HABs at a large scale, it is always a challenge to achieve both high spatial and high temporal resolution simultaneously with a single earth observation system (EOS) sensor, which is much needed for aquatic environment monitoring of inland lakes. This study proposes a multi-source remote sensing-based approach for HAB monitoring in Chaohu Lake, China, which integrates Terra/Aqua MODIS, Landsat 8 OLI, and Sentinel-2A/B MSI to attain high temporal and spatial resolution observations. According to the absorption characteristics and fluorescence peaks of HABs in remote sensing reflectance, the normalized difference vegetation index (NDVI) algorithm is used for MODIS, the combined floating algae index (FAI) and NDVI algorithm for Landsat 8, and the combined NDVI and chlorophyll reflection peak intensity index (ρchl) algorithm for Sentinel-2A/B MSI to extract HABs. The accuracies of NDVI, FAI, and ρchl are 96.1%, 95.6%, and 93.8%, with RMSE values of 4.52, 2.43, and 2.58 km², respectively. The combination of NDVI and ρchl can effectively avoid misidentification of mixed water-algae pixels. Results revealed that HABs in Chaohu Lake break out from May to November; peak in June, July, and August; and occur more frequently in the western region. Analysis of the HABs' potential driving forces, including the environmental and meteorological factors of temperature, rainfall, sunshine hours, and wind, indicated that higher temperatures and light rain favor HABs. Wind is the primary factor boosting HAB growth, and the variation of a HAB's surface area within two days can reach up to 24.61%.
Multi-source remote sensing provides higher observation frequency and more detailed spatial information on HABs, particularly on long- and short-term changes in their area. Introduction As a vital freshwater resource, lakes provide essential and diverse habitats and ecosystem functions, and play vital roles in climate regulation and global carbon and nutrient cycles, thereby contributing to the industrial, agricultural, and food industries around the lakes [1]. However, the aquatic environment has been put at risk by both climate change and anthropogenic factors [2,3]. Wastewater discharge, farmland drainage, soil erosion, and agricultural fertilization are also primary nutrient sources leading to lake eutrophication. Besides, nitrogen and phosphorus pollution from inefficient sewage treatment systems and agricultural practices threaten to increase pollution and cause inland lakes' eutrophication [4]. Lake eutrophication may cause a harmful algal bloom (HAB), which is widely distributed, adaptable, and destructive [5]. A HAB increases oxygen consumption in the was evaluated and adopted for different satellite sensors, and the accuracy and uncertainty were analyzed. Based on HAB results from multi-source data, the variations and driving forces of HAB in Chaohu Lake for environmental management are discussed. Study Area Chaohu Lake, located in Hefei City, Anhui Province, is the fifth largest freshwater lake in China (Figure 1, projection: Gauss-Kruger projection, geographic coordinate system: World Geodetic System 1984). The tributaries of Chaohu Lake mainly include the Nanfei River, Shiwuli River, Pai River, Hangbu River, Baishitian River, Zhao River, Yuxi River, and Shuangqiao River. Chaohu Lake has an inflow of 344.2 million m³ and an outflow of 23 million m³. Chaohu Lake is located at 29°47′-31°16′N and 115°45′-117°44′E, with an average water depth of 2.89 m and an average annual lake temperature of about 20 °C [29].
The terrain around the lake is mostly mountains and hills, and the Chaohu Lake basin is cultivated mainly with rice, wheat, rape, cotton, and corn. The agricultural land around the lake readily contributes nutrient salts to the water, causing severe non-point source pollution; the lake's external pollution load originates mainly from the northwestern part of the basin [30,31]. Nutrients in farmland are mainly composed of phosphorus and nitrogen, and the inflow of total phosphorus and total nitrogen is one of the main reasons for the eutrophication of Chaohu Lake. Chaohu Lake has become one of the most eutrophic lakes in China [32]. The total phosphorus concentration was one of the main driving factors affecting the spatial and temporal distribution of Anabaena and microcystins [33,34]. The farming period is from June to November. The average annual rainfall in Chaohu Lake is 224 mm, which drives the farmland nutrients to the lake during the farming period [35]. Moreover, rain stirs up the mud at the bottom of Chaohu Lake, resuspending large amounts of nutrient salts from the sediment and increasing the concentration of nutrient salts in Chaohu Lake. The total phosphorus content in Chaohu Lake is 0.131 mg/L, and the total nitrogen content is 2.04 mg/L. The nitrogen-to-phosphorus ratio for optimum reproduction of the dominant HAB species in Chaohu Lake is about 11.8:1 [36]. According to monitoring data over the years, the ratio of nitrogen to phosphorus in Chaohu Lake is between 10:1 and 15:1, creating conditions favorable to non-point source HAB outbreaks [37]. When algae proliferate and die, they accelerate the consumption of dissolved oxygen in water, resulting in the death of many aquatic animals and plants, weakening the purification capacity of the water, and causing severe harm to human health [5]. Therefore, it is essential to monitor the water environment with joint multi-source remote sensors.
Remote Sensing Data A total of 420 images of Terra/Aqua MODIS L-1B data (MOD02) in 2019 were selected and downloaded from Earthdata's website (https://search.earthdata.nasa.gov/). Two images of Landsat 8 OLI (Level 1) were downloaded from the USGS official website of shared data (https://earthexplorer.usgs.gov/). A total of 16 images of Sentinel-2 MSI satellite data (L1C) were downloaded from the official website of ESA (https://scihub.copernicus.eu/). Clear and cloudless images were picked out (see Table 1) and preprocessed, including re-projection and geometric correction. Figure 2 shows the different cloudless products distributed in the space in 2019 so one can picture the time lag between the different satellite acquisitions. Environmental and Meteorological Data The meteorological analysis data were obtained from the Meteorological Center of the National Meteorological Administration (http://www.cma.gov.cn/) (Figure 3). In 2019, Chaohu Meteorological Station's maximum sunshine hours, maximum temperature, and maximum wind speed occurred in May, July, and August, respectively. The variation range of wind speed was 0.5-6.4 m/s, the maximum number of sunshine hours was 12.9 h, and the time of direct sunlight was half a day.
The average rainfall was 224 mm. The average maximum temperature was 33.9 °C. Figure 4 is the technical flow chart of this paper, using which the original satellite data were obtained and preprocessed. The most appropriate algorithms were selected respectively for Sentinel-2 MSI, Terra/Aqua MODIS, and Landsat 8 OLI to obtain the distribution map of HAB in Chaohu Lake, and we checked the accuracy of the algorithms with visual interpretation results. Finally, the formation and distribution of HAB were analyzed by combining various meteorological factors. Data Preprocessing The preprocessing steps mainly included geometric correction, radiometric calibration, and atmospheric correction. Landsat-8 OLI and Terra/Aqua MODIS data were preprocessed using ENVI software (ENVI 5.3) to convert DN (digital number) values into TOA (top of atmosphere reflectance) radiance or reflectance after radiometric calibration, and then different atmospheric correction models were selected according to different data sources. The FLAASH atmospheric correction module (Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes) was adopted for Landsat 8 OLI, which was based on the MODTRAN-4 (Moderate Spectral Resolution Atmospheric Transmittance Algorithm and Computer Model) radiation transmission model, with high accuracy.
It can maximally eliminate the influences of water vapor and aerosol scattering over case II waters, and has been successfully used in previous studies from Landsat 8 OLI [38,39]. MODIS images were atmospherically corrected using the dark-objects method [40][41][42].
The procedure was to select the relatively clean area as a region of interest in the eastern part of Chaohu Lake, and statistically analyze the pixel brightness value of each band, while using a non-zero pixel with a suddenly increased brightness value as the dark pixel value. The selected dark-pixel value was used as the path-radiance estimate for atmospheric correction. Sentinel-2A/B original L-1C images were mainly processed using SEN2COR (version: Sen2Cor-02.08.00-win64) for radiometric calibration and atmospheric correction. SEN2COR is a plug-in released by the European Space Agency (ESA) specifically for Sentinel-2 atmospheric calibration. The spectral curve of the image by SEN2COR with atmospheric correction of Sentinel-2 images is consistent with the trend of the actual spectral curve on the ground [43]. The reflectance after atmospheric correction was compared with the field spectra of 39 ground objects; R² was 0.82 and the root mean square error was 0.04 [44], indicating high accuracy. All the images selected in the experiment were mostly cloudless. Before determining the HAB, cloud-covered regions of the remote sensing images were made into a cloud mask product by the single-band threshold method to eliminate the influence of clouds [45].
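A minimal numpy sketch of the dark-object subtraction step described above; the clean-water ROI slice and the use of a low percentile as the dark value are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def dark_object_subtract(band, clean_roi, percentile=1.0):
    """Subtract a dark-object value (estimated from a clean-water ROI)
    from a single band, clipping negative reflectances to zero."""
    dark = np.percentile(band[clean_roi], percentile)  # robust dark value
    return np.clip(band - dark, 0.0, None)

# Toy 3 x 3 "band": the clean ROI is the rightmost column (eastern lake).
band = np.array([[0.10, 0.12, 0.03],
                 [0.20, 0.15, 0.02],
                 [0.30, 0.25, 0.04]])
clean_roi = (slice(None), slice(2, 3))  # eastern column as ROI
corrected = dark_object_subtract(band, clean_roi)
```

Using a low percentile rather than the strict minimum makes the dark value less sensitive to single noisy pixels in the ROI.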
Extraction Algorithm of HAB Algae in water cause an absorption peak near the wavelength of 620-630 nm and a reflection peak at 650 nm, with a sharp increase in reflectance at around 700 nm [46]. High absorption in the red band by vegetation pigments and high reflection in the near-infrared band have long been used to detect vegetation coverage and to eliminate some radiation errors. NDVI can reflect the background influence of the vegetation canopy. Therefore, the NDVI algorithm was used with MODIS for monitoring HAB in Chaohu Lake [47]. RGB band synthesis of Landsat/OLI B8 (0.85-0.88 µm), B4 (0.64-0.67 µm), and B3 (0.53-0.59 µm) renders HABs in a reddish color, in strong contrast with the bloom-free dark water, making it easy to distinguish bloom and non-bloom areas. Due to the influences of lake currents and wind, HAB areas generally present as elongated strips [48,49]. The FAI algorithm can eliminate the impact of the atmosphere by using the combination of these three bands. Compared with the NDVI algorithm, which is easily influenced by the observation environment, FAI is better suited to the Landsat images. Unlike MODIS and Landsat 8, Sentinel-2 MSI is equipped with multiple spectral bands and 20 m ground resolution.
Three special bands, B5 (693-713 nm), B6 (733-748 nm), and B7 (773-793 nm), are set for vegetation monitoring, which is also sensitive to HABs [50,51]. Therefore, the ρchl-NDVI algorithm is used to improve the accuracy of acquiring HAB in Chaohu Lake by fusing these 5 characteristic bands. Detailed descriptions of these algorithms are included in Figure 5. Normalized Vegetation Index (NDVI) Rouse [52] first used Landsat-1 MSS data to propose the NDVI, based on the characteristic that the reflectivity of all vegetation increases dramatically near 700 nm. NDVI can reflect surface vegetation coverage [53]. Therefore, as the most common method, NDVI has been widely used in the study of algal extraction [54][55][56], and can eliminate the influences of terrain, shadow, and solar elevation angle [57]: NDVI = (ρ_NIR − ρ_RED) / (ρ_NIR + ρ_RED), where ρ_RED and ρ_NIR represent the reflectances of the red band and near-infrared band. Floating Algae Index (FAI) The floating algae index was first proposed by Hu [58]. FAI is defined as a linear spread of reflectivity in the near-infrared, red, and short-wave infrared regions, and can be applied to monitor proliferating algae, such as Ulva or Sargassum spp. [59]. The observation results of this algorithm are robust.
FAI is less affected by the atmospheric environment, observation conditions, and water reflectivity absorption in the near-infrared band [60]. FAI is often used to identify dense HABs in marine and inland waters [61]. Therefore, the spectral information of the red band, near-infrared band, and short-wave infrared band can be used to correct the atmospheric effects [35]. The algorithm is as follows: FAI = R_NIR − R'_NIR, with R'_NIR = R_RED + (R_SWIR − R_RED) × (λ_NIR − λ_RED) / (λ_SWIR − λ_RED), where R_RED, R_NIR, and R_SWIR represent the reflectances of the red, near-infrared, and short-wave infrared bands, respectively; λ_RED, λ_NIR, and λ_SWIR represent the central wavelengths; and R'_NIR is the interpolated reflectance, namely the baseline near-infrared reflectance obtained by linear interpolation between the red band and the short-wave infrared band. The gradient contrast method was used for the FAI algorithm to determine the threshold of HAB. The experimental results showed that FAI < −0.01 and FAI > 0.02 were non-bloom regions [19]. According to the average threshold value of the gradient diagram, FAI > −0.002 was finally determined to be the region of HAB. Chlorophyll Reflection Peak Intensity Algorithm Algae also contain chlorophyll, like land plants, so when algae aggregate, the spectrum shows a vegetation-like characteristic [62,63]. Chlorophyll shows troughs at 420-500 nm (blue and violet light band) and 625 nm, and a small peak value is found at the central wavelength of the green band [36]. Based on the correlation between algae and chlorophyll concentration, the ρchl model was constructed to identify the concentration of HAB [37,64], in which ρ(490), ρ(560), and ρ(665) correspond to the reflectivity of the blue, green, and vegetation red edge bands of the Sentinel-2A satellite. Accuracy Assessment To obtain the reference data or "truth data" for accuracy assessment of HAB detection from different satellite data, the visual interpretation method was used on false-color images.
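The NDVI and FAI computations described above can be sketched with numpy as follows; the band-centre wavelengths are the approximate Landsat 8 OLI values (655, 865, and 1609 nm), used here only for illustration.

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - RED) / (NIR + RED)."""
    return (nir - red) / (nir + red)

def fai(red, nir, swir, lam=(655.0, 865.0, 1609.0)):
    """FAI = R_NIR - R'_NIR, where R'_NIR is the NIR baseline linearly
    interpolated between the red and short-wave infrared bands."""
    lam_red, lam_nir, lam_swir = lam
    nir_baseline = red + (swir - red) * (lam_nir - lam_red) / (lam_swir - lam_red)
    return nir - nir_baseline

# Toy surface reflectances for a bloom-covered pixel.
red, nir, swir = 0.05, 0.30, 0.02
v = ndvi(red, nir)   # strongly positive over floating algae
f = fai(red, nir, swir)
bloom = f > -0.002   # threshold adopted in the text
```

Both functions also work element-wise on full numpy raster arrays, so a whole scene can be thresholded in one call.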
The verification data of the spatial distribution and area statistics of HAB were also obtained from the Department of the Ecological Environment of Anhui Province (http://sthjt.ah.gov.cn/), which have been checked through ground monitoring points, field investigations, and validation. The root mean square error (RMSE) and relative error (RE) were used to evaluate the accuracy of the HAB extractions using the NDVI algorithm. Additionally, the accuracies of different HAB detection methods were assessed using the following indexes [17]. The correct extraction rate (R) is the percentage of the correctly extracted HAB area over the truth data: R = A_r / A_truth × 100%. The over-extraction rate (W) is the percentage of the mistakenly extracted HAB area over the truth data: W = A_w / A_truth × 100%. The omitted extraction rate (M) is the percentage of the unextracted HAB area over the truth data: M = A_m / A_truth × 100%. The reference data of HAB were denoted as A_truth, and the area of HAB extracted by a given method was denoted as A. The overlapping part of A and A_truth was regarded as the correctly extracted part, denoted A_r. The part of A disjoint from A_truth was considered to be extracted by mistake, denoted A_w. The part of A_truth not covered by A was regarded as the missing part, denoted A_m. Results Visual interpretation was analyzed based on 86 MODIS images and 2 Landsat images; 16 Sentinel-2 images were used as the verification data to compare the accuracy of each extraction algorithm (Figure 6). Accuracy of HAB Algorithms Depending on the algorithm selection and analysis in Section 3.2, NDVI was used for MODIS to extract HAB. The comparison of NDVI and ρchl values showed that for a low concentration of HAB, the threshold for ρchl was 0.05, and the NDVI threshold was 0.24. For a moderate or high algae concentration, the threshold for ρchl was 0.09, and NDVI was larger than 0.68.
Therefore, a pixel with an NDVI > 0 was first classified as a vegetation pixel and, combined with ρchl > 0.05, was judged as belonging to a HAB. NDVI < 0 and ρchl > 0.03 indicated an "algal-water" suspension region and was also judged as HAB. The RMSE was 4.27 km² and the RE was 15.9% when compared to HAB products reached by visual interpretation (Figure 7). For the significance test, p < 0.05; the results showed that the HAB region observed by satellite was consistent with the visual interpretation. The residual normal distribution of HAB areas extracted by MODIS and Sentinel-2 is shown in Figure 8; R² was 0.98 and 0.99 between MODIS and visual interpretation and between Sentinel-2 and visual interpretation, respectively. Days with HAB outbreaks were randomly selected for Sentinel-2 MSI, MODIS, and Landsat 8 OLI, and a confusion matrix was used to evaluate the classification accuracy between the monitoring results and the visual interpretation (Table 2). NDVI and FAI were combined to detect HAB using Landsat 8 OLI images; NDVI and ρchl were combined for Sentinel-2 MSI data.
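As a sketch, the combined NDVI-ρchl decision rule quoted above (vegetation pixels with NDVI > 0 and ρchl > 0.05, plus algal-water suspension pixels with NDVI < 0 and ρchl > 0.03) can be written as a vectorised mask; the array names and sample values here are placeholders.

```python
import numpy as np

def classify_hab(ndvi, rho_chl):
    """Per-pixel HAB mask from the combined NDVI / rho_chl thresholds."""
    vegetation_bloom = (ndvi > 0) & (rho_chl > 0.05)  # clear floating algae
    suspension_bloom = (ndvi < 0) & (rho_chl > 0.03)  # algal-water suspension
    return vegetation_bloom | suspension_bloom

ndvi = np.array([0.50, 0.50, -0.10, -0.10])
rho_chl = np.array([0.06, 0.02, 0.04, 0.01])
mask = classify_hab(ndvi, rho_chl)
```

The second branch is what lets mixed water-algae pixels be recovered even when their NDVI alone would classify them as water.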
Table 3 shows the accuracy evaluation results when compared with the visual interpretation products, demonstrating that the HAB extracted by NDVI and FAI had a correct extraction rate of about 95%. The RMSE of the HAB area from the FAI algorithm was 0.56 km² and the RE was 3.9%. However, the NDVI extraction method was affected by thin cloud or fog, and cloud shadow was misidentified as HAB; moreover, the NDVI method may miss pixels with lower algae concentrations compared with FAI. Comparing the extraction results on 3 August 2019 and 19 August 2019, the over-extraction areas of the NDVI method due to mixed pixels and clouds were found to be 1.46 and 0.18 km², respectively. A comprehensive comparison shows that the HAB extractions by the two methods were consistent, but the FAI method captured details better than NDVI. Better results were obtained by combining NDVI with the chlorophyll reflection peak ρ_chl, especially for regions with lower concentrations of HAB. With this method, the correct extraction rate of the Sentinel-2 data reached 96.01%, while the RMSE and RE were 2.4 km² and 6.2%, respectively.
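For the RMSE and RE figures quoted above, a plausible computation from paired area estimates is sketched below (the exact RE definition used in the paper is not spelled out; here RE is taken as the total absolute area difference relative to the total reference area, which is an assumption):

```python
import numpy as np

def rmse_re(extracted_areas_km2, reference_areas_km2):
    """RMSE (km^2) and relative error (%) between extracted HAB areas
    and reference areas from visual interpretation."""
    a = np.asarray(extracted_areas_km2, dtype=float)
    t = np.asarray(reference_areas_km2, dtype=float)
    rmse = np.sqrt(np.mean((a - t) ** 2))
    re = 100.0 * np.sum(np.abs(a - t)) / np.sum(t)  # assumed RE definition
    return rmse, re

rmse, re = rmse_re([10.0, 22.0], [12.0, 20.0])
print(rmse, re)  # 2.0 12.5
```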
Monthly Variations of HAB

MODIS images, with the advantage of high temporal resolution, were mainly used to track the monthly HAB changes in Chaohu Lake in 2019. HAB in Chaohu Lake occurs between May and November (Figure 9). The northwestern part of Chaohu Lake is more seriously polluted by algae than the eastern part, and the HAB area reaches its maximum in July. The monthly frequency map is the ratio of the number of outbreaks in each region in a given month to the total number over the whole lake; the distribution frequency map indicates the probability of a HAB outbreak in each region of Chaohu Lake. Although HABs sometimes break out in only a small region, they most often occur in the west of the lake. According to the inter-month frequency distribution, HAB frequency increases in June and remains high from June to November. The highest outbreak frequency occurs in the northwestern part of the lake in October, while the distribution frequency in the eastern lake peaks in June. The monthly coverage rates for the maximum, minimum, and average HAB area are shown in Figure 10. The maximum and minimum areas together account for up to 50% of the total monthly HAB area in May, but the maximum HAB area was only 53.69 km². The average monthly coverage area was less than 20 km², the lowest in 2019, indicating that the level of HAB in May was not serious. In contrast, from June to November, the maximum HAB area accounted for less than 25% of the total HAB area, and a HAB area exceeding 100 km² was always found in mid-month. In July, the maximum HAB area reached 217 km², accounting for 28.6% of the Chaohu Lake area and covering the northwestern and central parts of the lake. In 2019, the minimum HAB area was 1.625 km², occurring on 7 November and accounting for 0.2% of the total lake area. The average monthly coverage in November was lower than that during the main HAB period in Chaohu Lake (June to October).
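The monthly frequency map defined above can be sketched from a stack of per-scene binary HAB masks (one boolean mask per valid observation in the month; function and array names are illustrative):

```python
import numpy as np

def monthly_frequency_map(masks):
    """Per-pixel outbreak frequency: the number of scenes in which a
    pixel is flagged as HAB, divided by the total number of HAB
    detections over the whole lake in that month."""
    masks = np.asarray(masks, dtype=float)  # shape: (n_scenes, rows, cols)
    counts = masks.sum(axis=0)              # outbreaks per pixel
    total = counts.sum()                    # total detections, whole lake
    return counts / total if total else counts

# Two scenes in one month: the upper-left pixel is flagged in both.
month = [np.array([[1, 0], [0, 0]]),
         np.array([[1, 1], [0, 0]])]
freq = monthly_frequency_map(month)
print(freq)
```

By construction the map sums to 1 over the lake, so each pixel's value reads directly as its share of that month's detections.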
This indicated that the activity of HAB in Chaohu Lake began to decrease in November.

Figure 10. The ratios of the monthly maximum, minimum, and average HAB area to the total HAB area per month (total HAB area: monthly statistics of the area where HAB occurs each time).

Diurnal Variation of HAB

The spatial-temporal patterns of HABs are easily affected by hydrological and meteorological factors and can thus vary dramatically in a short time, which requires high-frequency monitoring through the integration of multiple satellite sensors. To reveal the diurnal variations of HAB in Chaohu Lake, multi-source satellite data, including Sentinel-2 MSI, Landsat 8 OLI, and Terra/Aqua MODIS, were integrated, as shown in Figure 11. When the HAB was concentrated and stable, such as on 4 October 2019, the difference in extraction regions between Sentinel-2 MSI and Terra/MODIS was smallest. Significant differences were observed on 26 June owing to the scattered distribution of HAB; in the surrounding areas of low algal density, the lower spatial resolution of MODIS may have biased the result because of mixed pixels. Terra/MODIS is a morning satellite, passing over the equator from north to south at about 10:30 local time, while Aqua/MODIS is an afternoon satellite, passing over the equator from south to north at about 13:30 local time. Under the combined influence of all factors, the monitored HAB area and distribution differed between the different overpass times. Weather also has an effect: for example, cloud cover is more likely in the afternoon than in the morning in the study area, which also affects the extraction and identification of HAB. The diurnal changes of HAB in the Landsat 8 and MODIS images on 19 August 2019 showed no significant differences in area and distribution. The morphology of HAB monitored by Terra (Figure 12a) differed from that of Landsat 8 (Figure 12b), which may be due to the low quality (cloud coverage) of the Terra/MODIS image on 3 August 2019; the HAB region was disturbed by thin clouds and could not represent the real distribution pattern at that time. The reliability of this result was also verified by the distribution of bloom morphology in an Aqua image (Figure 12a). Compared with the result from the Landsat 8 image (Figure 12d), the Aqua image result (Figure 12e) on 3 August showed a decrease in the distribution of HAB and an increased concentration in the coverage center.
As the Terra image on 3 August was covered by clouds and fog, Figure 12 does not show the HAB distribution in the morning.

Driving Forces of HAB

The driving forces behind HAB outbreaks are of great concern for HAB control and management. Among many factors, temperature, rainfall, sunshine hours, wind, and radiation have drawn great attention [1]. Previous research demonstrated that the degree of HAB is positively correlated with temperature, sunshine hours, and global radiation, and negatively correlated with precipitation and wind speed [65]. Our results showed similar correlations between the HAB areas and both temperature and sunshine hours, but the R² was quite low (<0.05). Nevertheless, increased temperature promotes the growth of HABs, and colder months may delay the occurrence of HAB [66]. It can be seen that the maximum and minimum areas of Chaohu Lake HAB in July were higher than in other months (Figure 13). The maximum, average, and minimum values of the HAB area in August and September were close.
However, the number of sunshine hours in September was 77.5 h lower than in August. The low number of sunshine hours made it difficult for algae to reproduce and grow through photosynthesis, which inhibited the accumulation and explosion of large areas of HAB. However, too much sunshine makes algae inactive and also inhibits HAB growth. This is consistent with the conclusions of Zhang's research, which demonstrated that under high temperatures and many sunshine hours there will be no large-scale HAB [67,68]. Therefore, appropriate sunshine hours and temperature can promote the photosynthesis of algae.
The effect of precipitation showed a weak negative correlation with the HAB. The HAB on the rainy days of 3 and 7 August was decreased by 79.3% and 61.3%, respectively, compared with the previous days, which may indicate that rainfall dilutes or inhibits the occurrence of HABs. HAB was often found on days after scattered rain, such as on 27 May, 26 August, and 17 October. In contrast, the total precipitation in September was half that of August, and the scattered rain provided favorable conditions for the growth and reproduction of algae. Therefore, rainfall was the main driving force of the monthly variations of the HAB from July to September. However, rainfall in May-June is the highest and most frequent, which lowers the water surface temperature and also reduces the density of algae and the concentrations of nutrients, so that the probability of HAB occurrence increased only slightly in June compared with May. Rainfall decreased in July, the temperature increased, and the occurrence of HAB increased sharply. Therefore, the low occurrence of HABs in June was caused by precipitation. Based on the analysis of previous data, it was found that the period of highest temperature is inconsistent with the month with the highest probability of HAB, although atmospheric temperature is the main meteorological factor affecting HAB [69,70]. From mid-July to mid-August, the temperature of Chaohu Lake in 2019 reached its annual maximum and the average daily sunshine hours were all over 8 h; however, given the hysteresis effect [71] of the HAB response to temperature in Chaohu Lake, the precipitation mainly occurred from June to mid-July.
Much rain in June transports nutrients from the catchment area as a non-point source; the algae in July, with the highest maximum area, are due to the inflow during June. The effect of the nutrient supply appears with a time lag because the controlling factor is temperature: even with a high concentration of nutrients, insufficient temperature restrains blooming.

Figure 13. Chart of the minimum area, average monthly area, maximum area, average monthly temperature, sunshine hours, and precipitation of Chaohu Lake HABs.
The impact of wind speed on HAB showed a highly significant positive correlation (R² = 0.383, p < 0.01). The wind direction map of Chaohu Lake in 2019 can be seen in Figure 14. A previous study revealed that when the average wind speed is larger than 3.8 m/s, wind waves stir the algal particles, causing them to sink and reducing the HAB concentration [72,73]. During the study period, HAB occurred on only two days with an average wind speed greater than or equal to 3.8 m/s. The HAB area on 12 August was 4.8 km² (average wind speed of 4 m/s, average temperature of 28 °C), and the next day it was 113.94 km² (average wind speed of 1.5 m/s, average temperature of 29 °C). The solar radiation was similar, with sufficient sunshine hours (>9 h), but the HAB areas were quite different. This indicates that the wind stirred up the algal particles so that the algae could not accumulate, leading to a decrease in the HAB area. Moreover, appropriate wind speed and direction caused the HAB on the surface of Chaohu Lake to move downwind and accumulate. These results show that wind speed is an essential factor in the outbreak and spread of HAB in Chaohu Lake. Prevailing winds in summer cause the shore water to converge on the northwest corner.
The movement of the water is not conducive to material exchange at the surface of the flow field, which creates significant differences in the eutrophication pollution of algae across the whole lake [28]. Therefore, the frequency of HAB is highest in the northwest of Chaohu Lake. There is counter-clockwise circulation near the Zhefu River in eastern Chaohu Lake and clockwise circulation near the Zhao River [28], which bring N, P, and other nutrients to the northeast of Chaohu Lake and near the middle of the lake; the nutrients concentrate there, resulting in many HABs. The Chaohu sluice, connecting the southeastern part of Chaohu Lake with the Yuxi River, has a certain influence on the flow field near the eastern part of Chaohu Lake and plays a favorable role in the exchange of HAB with the outside.
The average wind speed on 24 October 2019 was 1.5 m/s, less than the critical value (3.8 m/s) for algae aggregation and movement [74]; the maximum wind speed was 3.8 m/s. As can be seen in the HAB distribution in Chaohu Lake detected by Terra and Aqua on 24 October (Figure 15a,c), the HAB in the central part of Chaohu Lake gradually moved in an east-southeast direction, in line with the maximum wind speed direction of 14 (that is, from the west-northwest). On 8 November 2019, the maximum wind speed was 2.9 m/s and the maximum wind speed direction was 3 (that is, a northeasterly). The HAB areas in Chaohu Lake were 31.75 and 43.6 km², respectively, as detected by Terra and Aqua. The average wind speed on Chaohu Lake that day was low (2 m/s), which caused the algal particles to rise and accumulate on the surface.
The changes of HAB were also affected by wind waves, leading the distribution to move to the southwest (Figure 15b,d). Therefore, multi-source remote sensing data can effectively monitor and reveal the diurnal change and development process of HAB.

Advantages of Multi-Source Satellite Remote Sensing

MODIS satellites with moderate spatial resolution have been widely used for monitoring HABs in large water bodies. However, HAB identification at moderate spatial resolution is limited in small inland water bodies or reservoirs and can even carry a large accuracy error. Owing to the moderate spatial resolution, the boundary of a HAB identified from MODIS data is fuzzy and the ability to recognize low-concentration HAB is poor, leading to large uncertainties when monitoring HABs in a small inland lake. Sentinel-2 images, with a spatial resolution of 20 m, can significantly improve the identification accuracy and spatial detail of HAB. For a concentrated outbreak area (Figure 16h), the MODIS satellite performs relatively well in extracting HABs, but its ability to delineate the boundary of the HAB area is weak; its error extraction rate of 40% is relatively high. Therefore, within the same timeframe, extracting HABs by combining multi-source data can verify and correct the extraction results of moderate-resolution images.

In addition, it remains difficult for remote sensing to meet the requirements of high spatial-temporal resolution with a single satellite, especially for HABs with dramatic variations in both space and time. To achieve both high spatial and high temporal resolution, multi-source satellite integration is an effective method for monitoring the HABs in Chaohu Lake. The combined use of Terra/Aqua MODIS, Sentinel-2 MSI, and Landsat 8 OLI can provide HAB monitoring more than three times per day, which is more efficient and accurate. For instance, parts of the HAB information would be missed if only one satellite dataset were used; e.g., on 23 November 2019, some areas of HAB in the eastern part of Chaohu Lake would have been missed by the Terra image. By making full use of the advantages of multi-source images to monitor the diurnal and longer-term changes of HAB in Chaohu Lake, the sensors complement each other and make up for one another's shortcomings. Compared with single-source remote sensing data, more objective and accurate results were obtained.

Conclusions

Satellite remote sensing has great potential to contribute significantly to the need for monitoring HABs at a large scale; however, a multi-source remote sensing-based approach, such as the integration of Terra/Aqua MODIS, Landsat 8 OLI, and Sentinel-2A/B MSI, is preferred to achieve high temporal and spatial resolution observations of the HABs. With the advantage of high temporal resolution, MODIS data are efficient in tracking the inter-monthly variations and distributions of HABs.
In contrast, the integrated multi-satellite data make it possible to capture the breakout and spread, and especially the diurnal change, of a given HAB, which is more objective and accurate than the results from a single satellite, as shown in the case of Chaohu Lake. To obtain reliable HAB monitoring results, it is crucial to determine an appropriate HAB detection method that considers the spectral characteristics of HABs and the band settings of the different satellite sensors; our study showed that NDVI is suitable for MODIS, NDVI and FAI combined for Landsat 8 OLI, and NDVI and ρ_chl combined for Sentinel-2 MSI data. Furthermore, analysis of the driving forces of HAB, including the environmental and meteorological factors of temperature, rainfall, sunshine hours, and wind, indicated that higher temperatures and light rain favored HAB, and that wind is a main factor in boosting a HAB's growth. Multi-source remote sensing provides higher measurement frequency and more detailed spatial information on the HAB, particularly the HAB's long- and short-term variations. The results can be used as baseline data to evaluate HAB and water quality management of the lake in the future.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data presented in this study are available on request from the corresponding website.
2021-02-23T14:06:20.830Z
2021-01-26T00:00:00.000
{ "year": 2021, "sha1": "c45e591e04c3139fc94a043c4aea5ddee80d8602", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-4292/13/3/427/pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "86cfb6f761c95e14a1fb47506ee2267d9612b1c2", "s2fieldsofstudy": [ "Environmental Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science" ] }
4743285
pes2o/s2orc
v3-fos-license
The Soluble Receptor for Vitamin B12 Uptake (sCD320) Increases during Pregnancy and Occurs in Higher Concentration in Urine than in Serum Background Cellular uptake of vitamin B12 (B12) demands binding of the vitamin to transcobalamin (TC) and recognition of TC-B12 (holoTC) by the receptor CD320, a receptor expressed in high quantities on human placenta. We have identified a soluble form of CD320 (sCD320) in serum and here we present data on the occurrence of this soluble receptor in both serum and urine during pregnancy. Methods We examined serum from twenty-seven pregnant women (cohort 1) at gestational weeks 13, 24 and 36 and serum and urine samples from forty pregnant women (cohort 2) tested up to 8 times during gestational weeks 17-41. sCD320, holoTC, total TC and complex formation between holoTC and sCD320 were measured by in-house ELISA methods, while creatinine was measured on the automatic platform Cobas 6000. Size exclusion chromatography was performed on a Superdex 200 column. Results Median (range) of serum sCD320 increased from 125 (87-839) pmol/L (week 15) to reach a peak value of 199 (72-672) pmol/L (week 35) then dropped back to its baseline level just before birth (week 40). Around one third of sCD320 was precipitated with holoTC at all-time points studied. The urinary concentration of sCD320 was around two fold higher than in serum. Urinary sCD320/creatinine ratio correlated with serum sCD320 and reached a peak median level of 53 (30–101) pmol/mmol creatinine (week 35). sCD320 present in serum and urine showed the same elution pattern upon size exclusion chromatography. Conclusion We report for the first time that sCD320 is present in urine and in a higher concentration than in serum and that serum and urine sCD320 increase during pregnancy. The high urinary concentration and the strong correlation between urinary and serum sCD320 suggests that sCD320 is filtered in the kidney. 
Introduction Vitamin B 12 (B12) is essential for normal fetal development [1,2]. The mother absorbs ingested B12 through a gastric intrinsic factor-mediated uptake in the ileal enterocytes [3]. After absorption, B12 is bound to transcobalamin (TC). TC circulates in plasma partly saturated with B12 (holoTC) and partly in its free form (apoTC) [3]. Due to its relatively low molecular mass of around 43-kDa [4] the molecule is filtered in the kidney, but reabsorbed in the proximal tubules [5,6]. HoloTC is essential for the receptor mediated cellular uptake of B12 [3]. During pregnancy the absorption of B12 ensures the B12 status not only for the mother but also for the fetus. While holoTC remains unchanged, total TC and B12 show declining levels in late pregnancy [2,7,8]. A holoTC binding receptor was recently purified from human placenta, and launched as the receptor mediating the uptake of holoTC in most cells [9]. This receptor, named CD320, binds holoTC and only to a much lesser degree apoTC. CD320 belongs to the low-density lipoprotein receptor family. Its 282amino acid sequence includes a signal peptide of 31 residues, an extracellular domain of 198 residues, a transmembrane region of 21 residues, and a cytoplasmic domain of 32 residues. The binding of CD320 to holoTC does not require the cytoplasmic domain or its orientation in the plasma membrane. The extracellular domain (sCD320) still binds holoTC with high affinity and specificity [10]. sCD320 is heavily glycosylated and behave as a 58-kDa molecule upon sodium dodecyl sulfatepolyacrylamide gel electrophoresis [9,10]. We recently identified soluble CD320 (sCD320) in human serum, and successfully developed an ELISA method for its measurement [10]. Further, we showed a positive correlation between circulating sCD320 and both total B12 and holoTC [10,11]. 
We did not find any obvious clinical associations with serum sCD320 levels, nor did we find evidence to suggest sCD320 as a novel biomarker for B12 deficiency [11]. The function of the recently discovered sCD320 and the mechanism and regulation of its release remain unknown. In this study, we present data showing that both serum and urinary levels of sCD320 increase with gestational weeks but decline towards birth, and we present data supporting an unusual kidney handling of the sCD320 glycoprotein.

Participants' characteristics and study design
Sixty-seven pregnant Danish women from two longitudinal cohorts were included in this cross-sectional study. The gestational age was defined based on the last menstrual date and the ultrasound examination. All women had healthy, uncomplicated pregnancies and no chronic systemic diseases. Age, parity, sampling week, sample type, recruitment venue and date are described in Table 1. Though both cohorts represent longitudinal studies, we chose to treat the data as a cross-sectional study, as no systematic differences were observed in the results obtained from the two cohorts. The samples were divided according to six gestational intervals (in weeks), beginning at 12-17. All participants gave their written informed consent before inclusion in the study.

Biochemical analyses
sCD320 was analyzed by an in-house sandwich ELISA (total imprecision of 4.0-8.0% and an intra-assay imprecision of 3.5-4.3%) [10], but standardized employing recombinant sCD320 (R&D Systems, Denmark) as a calibrator, with a molarity calculated based on manufacturer-provided information. The conversion factor between the previously employed arb.u. and pmol/L is: 1 arb.u. ≈ 5 pmol/L. The sCD320-holoTC complex was estimated by measuring serum samples for sCD320 before and after exposure to anti-TC-coated magnetic beads, as previously described [12].
Total TC was measured by an in-house sandwich ELISA (total imprecision of 4%-6% and an intra-assay imprecision of 3%) [13]. HoloTC was measured by the TC-ELISA after removal of the apoTC with B12-coated beads (total imprecision of 8% and an intra-assay imprecision of 4%) [14,15]. Creatinine was assayed on the Cobas 6000 automatic platform (Roche, Japan) (total imprecision of 1.2% and an intra-assay imprecision of 1.1%).

Statistical analysis
B12-related parameters did not follow a normal distribution (using Kolmogorov-Smirnov and Shapiro-Wilk normality tests). Thus, non-parametric statistical tests were used, and levels of B12-related parameters were reported as medians with 95% confidence intervals. The Mann-Whitney U-test and Wilcoxon signed rank test were applied for testing the difference between median levels. Spearman's rank correlation was used to correlate B12-related parameters. Statistical analyses were performed using SPSS statistical software for Windows (version 20, IBM Inc., New York, USA).

sCD320 Level during Pregnancy. PLOS ONE | www.plosone.org

Results
We present data on the occurrence of sCD320 in two cohorts of pregnant women. The characteristics of the two populations are indicated in Table 1. We employed a previously established assay for measurement of sCD320 [10]. However, in order to allow for a molar determination of sCD320, we standardized the assay employing a commercially available human recombinant CD320. The sCD320 concentration during pregnancy in both serum and urine is presented in Figure 1, and its descriptive statistics are presented in Table 2. The serum sCD320 concentration increased gradually during pregnancy, starting from gestation week 20 (median = 122 pmol/L), reached a peak at week 35 (median = 199 pmol/L), then declined before birth to a median level of 143 pmol/L (week 40).
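To make the non-parametric correlation used throughout this study concrete, the following is a minimal sketch of Spearman's rank correlation (the Pearson correlation of ranks, with average ranks for ties). The function names and data are illustrative, not taken from the study.

```python
def ranks(xs):
    # assign average ranks (ties share the mean rank), 1-based
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# any perfectly monotone relation gives rho = 1, regardless of shape
print(spearman([1, 2, 3, 4], [10, 20, 40, 80]))  # → 1.0
```

Because Spearman's rho depends only on ranks, it is robust to the skewed, non-normal distributions reported for the B12-related parameters above.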
In support of our previous study [11], we found a positive correlation between serum sCD320 and holoTC (Spearman's rank correlation = 0.381, P < 0.001, n = 199). We also found a significant correlation between serum sCD320 and total TC (Spearman's rank correlation = 0.412, P < 0.001, n = 199). Table 3 shows descriptive statistics for serum holoTC and total TC, along with urine total TC, during pregnancy. For the first time, we document the presence of sCD320 in urine. We find concentrations that exceed its serum counterpart in all examined gestation weeks (Figure 1, Table 2). The urine sCD320 level (expressed as a ratio to urine creatinine) follows serum sCD320 and increases gradually from week 20 (median = 29 pmol/mmol) to reach a peak at week 35 (median = 53 pmol/mmol) (Figure 1, Table 2). We explored the molecular characteristics of sCD320 in both serum and urine by size exclusion chromatography. sCD320 from serum and urine behaved alike, and sCD320 reactivity eluted as a sharp peak with a Stokes radius of ≈ 50 Å, which is identical to the size of sCD320 from donor serum (men and non-pregnant women) [10] (Figure 3). Finally, we explored to which degree sCD320 occurred in its free form and to which degree in complex with holoTC.

Discussion
For the first time we report the molar concentrations of sCD320 in serum and urine; we show increasing levels during pregnancy, and surprisingly we report urinary concentrations exceeding those observed in serum. In addition, our data suggest that the major part of circulating holoTC is present in complex with its receptor, sCD320. Early studies showed the human placenta to express a receptor that recognizes holoTC [18][19][20]. More recently the receptor has been purified and characterized from human placenta [9] and identified as the CD320 molecule. In addition, our previous studies have shown a soluble form of CD320 to be present in serum [10,11]. Together these findings paved our way to explore the concentration of sCD320 during the physiological process of pregnancy.
We observed increasing serum values of sCD320 up to gestational week 35, followed by declining values towards birth, and we find it likely that the increase relates to a contribution from the placenta, where CD320 is expressed abundantly and from which CD320 was purified and characterized [9]. The pattern of release is in accord with the fact that placental proliferation is diminished during the last weeks of pregnancy [21][22][23]. We explored the concentration of both TC and sCD320 in urine. As expected, very little TC was recovered in the urine. This is in accord with the current view that TC (26 Å) is filtered in the kidney but reabsorbed in the proximal tubules, mediated by binding to the multifunctional receptor megalin [5,6]. To our surprise, we measured levels of sCD320 in urine that were about two times higher than in serum. This was a totally unexpected result, since the heavily glycosylated sCD320 behaves as a molecule of 50 Å, a size that predicts very limited filtration. The two-pore model of glomerular permeability [24] operates with a large number of small pores (30-45 Å) and a low number of large pores (110-115 Å), with a calculated ratio of 7 × 10⁻⁷ for large to small pores based on data from normal rats [25]. Proteins that pass only through the small number of large pores have a limited fractional plasma-to-urine clearance; as an example, IgG2 (55 Å) has a fractional plasma-to-urine clearance of 1.58 × 10⁻⁴. Based on these experimental data, it seems unlikely that urinary sCD320 is derived from filtration in the kidney. Nevertheless, our data do point in that direction. First, we explored whether we measured a fragment of sCD320 in urine. This was not the case: serum and urinary sCD320 behaved identically upon gel filtration (Figure 3).
Second, we observed a strong correlation between urinary sCD320 and the freely filtered waste product creatinine. Third, serum sCD320 correlated well with the urinary excretion of sCD320 (Figure 2). Finally, previous data have shown a relation between serum sCD320 and kidney function as judged from serum creatinine [11]. Together these observations point to an unusual kidney handling of sCD320. Further studies are needed in order to unravel the physiological background and possible implications. The first assay launched for sCD320 employed a calibrator assigned an arbitrary value [10]. Here we benefit from the use of a calibrator prepared from recombinant CD320, allowing us to measure sCD320 in molar concentrations. We report sCD320 to be present in serum in concentrations exceeding that of its ligand, holoTC, by a factor of two to three. A previous study has shown that holoTC is recognized by CD320 with a high affinity as compared to apoTC [9]. In addition, we have previously shown that part of sCD320 coprecipitates with TC, but since sCD320 could not be measured in a molar unit at the time, we were unable to judge to which extent holoTC formed complexes with sCD320. Here we show that around equimolar amounts of holoTC and sCD320 are precipitated by antibodies against TC. The absolute amount precipitated corresponds to the concentration of holoTC and accounts for around one third of sCD320. The results suggest that most circulating holoTC forms complexes with sCD320. The binding is likely to have a relatively low affinity, since no complex formation is revealed upon gel filtration of serum (data not shown). In summary, we report for the first time that serum and urine sCD320 increase during pregnancy and that sCD320 is present in urine in a higher concentration than in serum.
The strong correlations between urinary and serum sCD320 and between urine sCD320 and urine creatinine suggest that sCD320 is filtered in the kidney.
Crude Palm Oil Unloading Activities at MT. Giat Armada 01

Loading and unloading is one of the activities carried out on board, both when the ship is docked at the port and ship-to-ship. However, during loading and unloading on the ship MT. Giat Armada 01, problems occurred that made the loading and unloading process less than optimal. The research method used in this thesis is qualitative-descriptive, with the fishbone approach as the data analysis technique. A fishbone diagram is shaped like a fish skeleton, with parts resembling the head and bones of a fish. It is used to determine the causal relationships among contributing factors, the impacts they cause, and the efforts made to optimize the handling of loading and unloading on MT. Giat Armada 01. The results of the research show that loading and unloading handling on MT. Giat Armada 01 is not optimal, caused by the lack of maintenance of the equipment used for loading and unloading, a damaged and poorly maintained cargo pump, lack of heating, lack of application of procedures for handling loading and unloading, and the length of the Jetty Batulicin shore pipeline. These factors result in less than optimal handling during loading and unloading, increased working hours, and damage and loss of the equipment used to support loading and unloading activities. To overcome these factors, the crew can carry out maintenance and checking of each tool used for loading and unloading, carry out routine repairs and maintenance of the cargo pump, carry out cargo maintenance, loading, unloading and squeezing according to procedures, and always conduct safety meetings before unloading or loading.
INTRODUCTION
Crude Palm Oil (CPO) is palm oil that has not undergone a refining process. CPO comes from the flesh of the oil palm fruit, generally from the species Elaeis guineensis and, to a much lesser extent, from the species Elaeis oleifera and Attalea maripa. Its high content of alpha- and beta-carotene gives palm oil a reddish hue. Ships are one mode of sea transportation; they can carry large loads of goods quickly and economically from one country or location to another. Ships carrying liquid cargo are referred to as tankers. MT. Giat Armada 01 is one of the tankers that loads Crude Palm Oil (CPO). The ship is included in the category of type III chemical tankers operated by PT. Indonesian Miniships. In accordance with the voyage orders from the shipper, MT. Giat Armada 01 sails irregular routes covering Kalimantan, Gresik, Surabaya and Papua. MT. Giat Armada 01 has a Length Overall of 91.00 meters, a Breadth of 15.80 meters, and a DWT of 4505.674 tons. It has 22 tanks, consisting of 8 cargo tanks, 9 ballast tanks, 2 slop tanks and 3 fresh water tanks, with a total capacity of 5049.87 cubic meters.

Special attention must be paid when loading and unloading crude palm oil (CPO) at the port of destination. However, when the ship carried out loading and unloading at Jetty Batulicin, problems occurred that made the loading and unloading process less than optimal. This is because the pumps on board are not strong enough to push the cargo towards the shore tanks on land; therefore, the ship must be assisted by shore pumps so that the oil can be unloaded. The absence of heating on board is also an obstacle during loading and unloading, because Crude Palm Oil (CPO) progressively thickens and becomes difficult to unload. This thesis is based on the suboptimal loading and unloading procedures at MT.
Giat Armada 01, caused by several factors: the cargo pump is not strong enough; the ship is not equipped with heating, so the oil thickens and is difficult to unload; the ship is not well suited for loading Crude Palm Oil (CPO) because it is a product-type tanker; and the shore-tank pipeline at Batulicin is long. In writing this thesis, the writer hopes to achieve the following:
1. Theoretical Benefits. This research aims to develop knowledge about the process of handling crude palm oil (CPO) cargo so that it can run optimally.
2. Practical Benefits. It provides the reader with more detail about the implementation of unloading crude palm oil at MT. Giat Armada 01 and the causes of the suboptimal handling of CPO loading and unloading. In an effort to improve service and security in handling crude palm oil (CPO) cargo, this research is expected to provide input as a reference for the company, especially for MT. Giat Armada 01 as a type III chemical tanker.

LITERATURE REVIEW
The literature review covers the rules and specifications for loading and unloading, the various types of cargo, and the optimization of the loading and unloading process of crude palm oil.
a.
Optimization. According to the Big Indonesian Dictionary, the term "optimization" is often used to describe actions or processes to achieve the best or highest level or result by managing or improving the systems, procedures, and strategies that are used. In other words, optimization can also refer to efforts to improve operational efficiency and effectiveness at ports and in shipping, such as setting optimal routes, efficient scheduling, good resource management, and good use of advanced technology.
b. Loading and Unloading Arrangement. According to Sudjatmiko (2011: 264) in the book "The Principles of Commercial Shipping", loading and unloading refers to the process of transferring goods from one country to another or from one ship to another ship, so that the cargo can be stored or directly transported to the location of the owner of the goods through the harbor wharf.
c. Crude Palm Oil (CPO). The main focus of this thesis research is the cargo transported by MT. Giat Armada 01, namely Crude Palm Oil (CPO). Crude Palm Oil, commonly called CPO, is palm oil that has not undergone a refining process and is taken from the flesh of the palm fruit. Palm oil (CPO) is obtained from oil palm trees, usually of the species Elaeis guineensis and, to a lesser extent, the species Elaeis oleifera and Attalea maripa. When unloading, Crude Palm Oil must be heated to the temperature at the time of loading to keep the oil liquid, because palm oil thickens over time. According to the handling process, crude palm oil must be kept at a temperature of 80 °F (26.66 °C); if the CPO temperature falls below this, it will solidify.
d. Tankers. According to Sony in "Tanker Ship" (2011), a tanker is a type of ship specifically designed to transport oil as cargo. According to Marton (2007: 19) in the book Tanker Operation, Fourth Edition, maritime tankers come in various types, including:
1.
By cargo. Tankers are grouped based on the type of cargo they transport, which consists of 3 categories.

RESEARCH METHOD
The entire research was carried out on board MT. Giat Armada 01, when suboptimal CPO loading and unloading occurred while the ship was docked at the Batulicin jetty on March 25, 2022.

Types and Sources of Research Data
The data source is a very important factor in collecting data, because a study must have data subjects who can give clear information for data collection and processing. In this study, the types and sources of data used are: Primary data, research data obtained directly by observation at the research location and interviews with the parties involved at the time of loading and unloading on MT. Giat Armada 01; and Secondary data, research data obtained indirectly, namely from books, documents, literature and other references related to the content of this study.

Data Collection Methods
In completing the research, data are needed that are as clear as possible and of guaranteed validity, so several methods are needed for data collection. The data collection methods used include: Observation, carried out by observing directly during the unloading process of crude palm oil at MT. Giat Armada 01; Interviews, conducted with a number of parties related to the suboptimal loading and unloading process on MT. Giat Armada 01, namely the ship's Captain, the Chief Officer and the Second Officer; and Documentation Study, using data from the ship's archives and the researcher's personal archives kept on board. This method is used to support and strengthen information previously obtained from the observations and interviews related to the suboptimal CPO loading and unloading process at MT.
Giat Armada 01. From the three data collection methods, namely observation, interviews, and documentation study, the researcher obtained the information needed in the research.

Qualitative Data Analysis Techniques
In this study, the researcher used a qualitative-descriptive data analysis method, carried out by analyzing the data obtained from interviews, field observations, and research documentation. Qualitative-descriptive data analysis is a method used to analyze and process research data into information and research conclusions that can be more easily understood. The data analysis techniques used in this study include:
a. Data reduction. Data reduction is the process of selecting, simplifying, abstracting and transforming the raw written notes obtained in the field.
b. Data presentation. In data presentation, researchers present organized summary information about an event in a way that makes it easy to draw conclusions. In qualitative research, data can be presented in narrative text form, such as brief descriptions, charts, and the like.
c. Conclusion drawing. In the process of drawing conclusions, the author gathered all data obtained from the research process into one summary, based on data analysis, using language that is easy for readers to understand and adapted to the problem formulation and research goals.
d. Fishbone diagrams. According to Kinasih (2022), a fishbone diagram, also known as a cause-and-effect diagram, can be used to identify the various possible reasons why a process runs well or fails. Fishbone analysis is a helpful method for tracing a problem to the possible causes that contribute to an effect. These diagrams were introduced by a Japanese engineering professor named Kaoru Ishikawa.
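The fishbone structure described above can be sketched in code as a simple grouping of causes under categories leading to one effect. The categories and cause texts below are illustrative assumptions based on the factors discussed in this thesis, not the author's actual diagram.

```python
# Hypothetical cause data, grouped in a common fishbone style
# (categories and wording are assumptions for illustration).
causes = {
    "Machine": ["damaged cargo pump", "no cargo heating system"],
    "Method": ["squeezing procedure not followed"],
    "Man": ["lack of maintenance and inspection routine"],
    "Material": ["long shore-tank pipeline at the jetty"],
}
effect = "suboptimal CPO loading and unloading"

def fishbone_text(effect, causes):
    # render the diagram as an indented text outline: effect at the
    # "head", each category a "bone", each cause a sub-item
    lines = [f"EFFECT: {effect}"]
    for category, items in causes.items():
        lines.append(f"  {category}:")
        for c in items:
            lines.append(f"    - {c}")
    return "\n".join(lines)

print(fishbone_text(effect, causes))
```

Rendering the diagram as text like this makes the cause-to-effect mapping explicit before drawing the actual fish-skeleton figure.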
FINDINGS AND DISCUSSION
To provide a distinct research context, the author uses previous studies so that there is no overlap with existing research and they can serve as a reference for comparison. The differences between the research conducted by Tias Arfalian Noviki and the current research are as follows. The context of this research is an update of previous research, focusing on the factors that cause the suboptimal unloading of crude palm oil on board MT. Giat Armada 01 when docked at the Batulicin Jetty, the impacts of that suboptimality, and the efforts made to optimize the unloading of crude palm oil (CPO) cargo on board MT. Giat Armada 01 while docked at Jetty Batulicin.

The problem started when MT. Giat Armada 01 carried out loading and unloading at Batulicin Port on March 25, 2022. Several factors were found that caused the loading and unloading process to be suboptimal, such as a damaged and poorly maintained cargo pump, lack of maintenance of loading and unloading support equipment, lack of procedures during the squeezing process, and the length of the shore-tank pipeline at Jetty Batulicin, all of which slowed down the unloading activities. Incidents like this must be handled quickly so that the loading and unloading process becomes optimal. The poorly maintained cargo pump is the main problem in this case: during loading and unloading, the CPO cargo pump was not strong enough to pump the load. At that time the chief officer ordered the crew to check the cargo pump filter, and it was found that a lot of waste material was stuck in the filter, hindering the rate of CPO discharge.

Problem Analysis
The process of loading and unloading CPO at MT.
Giat Armada 01 when it docked at Jetty Batulicin was not optimal due to several factors, namely the lack of maintenance of the cargo pump, the lack of maintenance of loading and unloading support equipment, the absence of heating, and the length of the shore-tank pipeline. The researchers therefore made several efforts, namely carrying out routine maintenance of the cargo pump.

Problem Discussion
The suboptimal loading and unloading of crude palm oil at MT. Giat Armada 01 is caused by several factors. Based on the results of observations, these include: the tools used for loading and unloading do not receive regular maintenance and inspection; a broken and poorly maintained cargo pump; the ship is not equipped with heating; a lack of procedures in the squeezing process; and the length of the shore-tank pipeline. The impacts of the suboptimal unloading activities at MT. Giat Armada 01 include: late discharge, machine damage, and increased operational costs. Efforts to optimize loading and unloading handling at MT. Giat Armada 01 include: carrying out routine maintenance and checking of each piece of loading and unloading equipment; carrying out routine checks on the tanks, cargo pipes and cargo pump filters; submitting a request to the company so that the ship is equipped with heating; carrying out the squeezing process in compliance with the existing procedures and rules; and requesting assistance from the jetty to turn on the shore pumps so that the pump on board works under a lighter load.

CONCLUSIONS
Conclusions
a.
The factors causing the suboptimal handling of Crude Palm Oil loading and unloading on MT. Giat Armada 01 can be summarized as follows: lack of routine maintenance and inspection of the equipment used in loading and unloading activities; damage to and lack of maintenance of the cargo pump; no heating system installed on the ship; shortcomings in the cargo squeezing procedure; and the length of the pipeline connecting the ship with the shore tanks.
b. The impacts of the suboptimal handling of Crude Palm Oil loading and unloading on MT. Giat Armada 01 are delays in the discharge process that lengthen working time, damage to the pump machinery, and increased operational costs.
c. The efforts to increase the optimality of Crude Palm Oil loading and unloading handling on MT. Giat Armada 01 are as follows: carry out routine maintenance and inspection of the equipment used in loading and unloading activities; repair damage to the cargo pump quickly and precisely; submit a procurement request to the company for heating on board; carry out regular maintenance on the cargo tank pipes and cargo pump filter before and after the unloading process; follow the appropriate procedures in loading, unloading and tank cleaning (squeezing); and maintain good communication while the unloading process is going on.

Suggestions
a. It is expected that the officers and crew always pay attention to the cargo handling procedures and cargo care. It is important for them to follow the correct procedures for cargo maintenance, loading, unloading, and squeezing, as well as to improve their attentiveness. Before carrying out work on board or when handling loading and unloading, it is recommended to hold intensive briefings or safety meetings in order to avoid possible causes of unwanted disturbances in the handling of Crude Palm Oil.
b. So that the handling of Crude Palm Oil loading and unloading on MT. Giat Armada 01 runs optimally, it is hoped that the officers and crew will increase their level of accuracy regarding the conditions on board. In addition, they are also expected to always establish good coordination with the company to identify what is needed to ensure that loading and unloading handling on the ship can reach optimal levels.
c. It is expected that the ship's officers and crew will always carry out routine maintenance and checks on the cargo pump, filter, and other loading and unloading support equipment, if necessary once a week, so that problems do not arise during the loading and unloading process.

The tanker categories referred to in the literature review are:
1. Based on the cargo: a. Crude-oil carriers; b. Black-oil product carriers; c. Light-oil product carriers.
2. Based on the size: a. Handy-sized tankers; b. Medium-sized tankers; c. Very Large Crude Carriers (VLCCs); d. Ultra Large Crude Carriers (ULCCs).

To simplify the flow of this research, the author describes the research framework in the form of a simple chart.

Table 1. Previous and Current Research
Machine Learning Methods for Crop Chlorophyll Variable Retrieval

Hyperspectral remote sensing technology improves the ability to retrieve chlorophyll content in crops. Machine learning methods have been developed and applied to crop phenotyping information inversion. This study combined a radiative transfer model (PROSPECT-4) and the Gaussian Process Regression (GPR) algorithm to retrieve crop leaf chlorophyll content. The test was conducted in the east of Shenyang city, Liaoning Province, China, with japonica rice. This paper describes: (1) the PROSPECT-4 model was analyzed with a GSA tool, and the band range sensitive to crop chlorophyll was 400-750 nm; (2) the chlorophyll content model was established with good accuracy (R² = 0.8638) and can predict crop leaf chlorophyll content; (3) the results demonstrate that crop chlorophyll can be retrieved by the PROSPECT model combined with a machine learning algorithm. Therefore, crop chlorophyll content can be estimated from hyperspectral data and may be used for crop growth management. This research can provide an efficient method to detect crop leaf chlorophyll content with RTMs in the future.

Introduction
Hyperspectral remote sensing is characterized by many narrow bands and is a crucial technology for the development of precision agriculture [1]. Precision agriculture requires fine management of farmland, so remote sensing technology, including hyperspectral, thermal imaging and LiDAR systems, provides technical support for its implementation. Methods using hyperspectral remote sensing technology are particularly promising, as they allow non-invasive, fast and automated measurements with both spatial and temporal resolution in the field. They are based on transmittance and reflectance signals from the plants, which contain information about agronomic and physiological traits [2].
Having access to operationally acquired imaging spectroscopy data with hundreds of bands paves the path for a wide variety of monitoring applications, such as retrieval of biochemical vegetation properties [3]. In the process of crop growth, the canopy structure, physiological characteristics and environmental background change, resulting in changes in leaf and canopy spectra; crop growth monitoring is based on these differences in spectral response. Such a large number of data dimensions poses an important methodological challenge. Hyperspectral image data include highly correlated and noisy spectral bands, and frequently create statistical problems (e.g., the Hughes effect) due to small sample sizes compared to the large number of available, possibly redundant, spectral bands [4]. These characteristics may lead to a violation of basic assumptions behind statistical models or may affect the model outcome. Models fitted with such multi-collinear data sets are prone to over-fitting, and transfer to other scenarios may thus be limited. Naturally, these issues affect the prediction accuracy as well as the interpretability of the regression (retrieval) models [5]. Therefore, how to reduce the hyperspectral dimensionality, optimize the spectral bands, and select the parameters most suitable for inversion of crop physiological information are the priorities when using hyperspectral data for research. Chlorophyll is the most important class of pigments related to photosynthesis. Photosynthesis is the process of converting light energy into chemical energy by synthesizing organic compounds [6]. Chlorophyll absorbs energy from light and uses it to convert carbon dioxide into carbohydrates. It is a key factor regulating the biophysical and physiological processes of crops [7]. One study estimated an NDVI model of rice leaves based on the NDVI and environmental data of the rice canopy.
The results showed that the NDVI of rice leaves is highly correlated with canopy NDVI and multi-source environmental data [8]. Traditional methods of measuring chlorophyll in the laboratory, which involve collecting crop leaf samples for chemical analysis, not only require destruction of the leaf samples but are also labor-intensive and expensive [9]. Currently, remote sensing techniques have been proposed for monitoring crop chlorophyll. In recent years, the main method developed for remote estimation of chlorophyll status has been empirical relationships between chlorophyll and vegetation indices (VIs). Crop leaves absorb strongly in the visible red band and reflect strongly in the near-infrared band, so combinations of satellite measurements in these two bands yield indices that reflect crop growth [10]. Different vegetation indices can be obtained from different combinations of these two bands. Vegetation indices have been used to invert chlorophyll content mainly through statistical methods. Chappelle used a narrow-band vegetation index to determine the chlorophyll content of leaves [11,12]. Blackburn developed a number of spectral indices for estimating pigment concentrations at the leaf scale. The results indicate that the optimal wavebands for chlorophyll estimation, identified empirically, are 680 nm and 635 nm. Two new indices (PSSR and PSND) had the strongest and most linear relations with chlorophyll concentrations [13]. However, there are still problems in the retrieval of chlorophyll content using vegetation indices: NDVI, PRI, the Green Normalized Difference Vegetation Index (GNDVI) and other vegetation indices are likely to saturate during the inversion of chlorophyll, reducing inversion accuracy.
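The red/near-infrared band combinations discussed above can be sketched directly. The following is a minimal illustration of NDVI and GNDVI from band reflectances; the reflectance values are hypothetical, chosen only to show the contrast between vigorous and sparse vegetation.

```python
def ndvi(nir, red):
    # Normalized Difference Vegetation Index: high NIR reflectance and
    # strong red absorption by healthy leaves push NDVI towards 1
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    # Green NDVI: substitutes the green band for the red band
    return (nir - green) / (nir + green)

# hypothetical reflectances for illustration
print(ndvi(0.45, 0.05))  # vigorous canopy → 0.8
print(ndvi(0.30, 0.20))  # sparse/stressed vegetation → 0.2
```

Because both indices are ratios bounded in [-1, 1], they compress large chlorophyll differences into a narrow range at high values, which is the saturation effect noted above.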
Models that invert crop chlorophyll content from vegetation indices are also limited by the experimental conditions, the measuring instruments and other factors. Another way to invert chlorophyll is through radiative transfer models (RTMs) [14,15]: given a measured spectrum, find the closest simulated spectrum and return the corresponding parameters. Two main approaches exist: (i) iteratively adjust the input parameters so as to minimize a cost function (e.g., the RMSE) between the measured and modeled quantities; or (ii) precompute the model reflectance for a large range of parameter combinations, reducing the problem to searching a look-up table (LUT) for the modeled reflectance that most resembles the measured one [16,17]. The objectives of this work are therefore threefold: (1) to perform a global sensitivity analysis of the PROSPECT model in order to find the spectral range affected by chlorophyll; (2) to compare the accuracy of chlorophyll inversion models established by different machine learning methods; and (3) to establish hyperspectral inversion models for chlorophyll.

Experimental data
We chose LOPEX'93 as the experimental database for this study. The LOPEX'93 database includes biochemical information and leaf hyperspectral reflectance for many crops, with bands ranging from 400 nm to 2500 nm. It contains about 70 leaf samples representative of more than 50 species of woody and herbaceous plants [18]. The measured parameters include N, chlorophyll, water, biomass, and the other input parameters required by the PROSPECT model, together with high-resolution visible and near-infrared leaf reflectance spectra.

Crop leaf radiative transfer model
In this research we chose the PROSPECT model for the inversion of crop chlorophyll.
PROSPECT is a radiative transfer model based on Allen's generalized "plate model" that represents the optical properties of plant leaves from 400 nm to 2500 nm. Scattering is described by a spectral refractive index (n) and a parameter characterizing the leaf mesophyll structure (N). Absorption is modeled using pigment concentration (Ca+b), water content (Cw), and the corresponding specific spectral absorption coefficients (Ka+b and Kw). The parameters n, Ka+b, and Kw have been fitted using experimental data covering a wide range of plant types and conditions, and PROSPECT has been tested successfully on independent data sets. Its inversion allows one to reconstruct, with reasonable accuracy, leaf reflectance and transmittance in the 400-2500 nm range by adjusting the three input variables N, Ca+b, and Cw [19]. The principle of the model is as follows. The interaction of electromagnetic radiation with plant leaves (reflection, transmission, absorption) depends on the chemical and physical properties of the leaves [20]. In the visible band, absorption is essentially caused by electronic transitions in chlorophyll a, chlorophyll b and other pigments; in the near-infrared and mid-infrared bands, it is mainly caused by vibrational and rotational transitions in water [21]. The refractive index n is not continuous inside the leaf: n = 1.4 for hydrated cell walls, n = 1.33 for water and n = 1 for air, so the internal biochemical composition and structure of the leaf determine its reflectance and transmittance across the whole spectrum [22].

Hyperspectral remote sensing data

Global Sensitivity Analysis (GSA)
An important requirement is to know the key input variables driving the spectral output in a specific spectral region.
Such knowledge can lead to a simplified model driven only by the key variables, which makes exploring a broad range of target and observation conditions easier and more effective [23]. To achieve this, GSA is required. Global sensitivity analysis is performed over the entire parameter range and considers the effect of coupling between different parameters on the model output, which makes it well suited to the sensitivity analysis of complex nonlinear models [24]. The most popular global sensitivity analysis method is Sobol's algorithm [25]; in this work the variance-based sensitivity measures are used. Sobol's method decomposes the output variance as

V(Y) = Σ_i V_i + Σ_{i<j} V_ij + ... + V_{12...k},

and, normalizing by V(Y), the sensitivity indices sum to one:

Σ_i S_i + Σ_{i<j} S_ij + ... + S_{12...k} = 1.

Here S_i = V_i / V(Y) is the first-order sensitivity index, measuring the contribution of the input parameter X_i to the output Y without interaction terms, whereas S_ij, ..., S_{12...k} are the sensitivity measures for the higher-order (interaction) terms. The total-effect sensitivity index S_Ti measures the whole effect of a variable, including all its interactions:

S_Ti = 1 - V(E[Y | X_~i]) / V(Y),

where X_~i denotes all input variables except X_i. GSA is useful to identify the key and non-influential variables of an RTM [26].

Gaussian process regression
Traditional data modeling methods commonly address estimation, regression and function approximation. With the continuous improvement of computing power, machine learning approaches have become a trend, such as Gaussian process regression (GPR) [27,28], on which we focus here. GPR maps data with nonlinear relations into a feature space by means of kernel substitution, so that a complex nonlinear problem is transformed into a linear one. A key step in GPR modeling is determining the kernel function. Gaussian processes can use different types of kernel functions; each kernel has a different structure and a different ability to describe the data.
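The indices S_i and S_Ti are typically estimated by Monte Carlo sampling rather than computed analytically. A minimal NumPy sketch, using the common Saltelli first-order and Jansen total-effect estimators and assuming k independent U(0,1) inputs (function and variable names are ours, not those of the tool used in this study):

```python
import numpy as np

def sobol_indices(f, k, n=200_000, seed=0):
    """Monte Carlo estimates of first-order (S_i) and total-effect (S_Ti)
    Sobol indices for a vectorized model f with k independent U(0,1)
    inputs, via the Saltelli/Jansen pick-freeze estimators."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, k))                     # two independent sample blocks
    B = rng.random((n, k))
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]))     # total output variance V(Y)
    S, ST = np.empty(k), np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                    # A with column i taken from B
        yABi = f(ABi)
        S[i] = np.mean(yB * (yABi - yA)) / var          # first-order index
        ST[i] = 0.5 * np.mean((yA - yABi) ** 2) / var   # total-effect index
    return S, ST
```

For an additive model such as Y = 4 X1 + 2 X2, the analytical values are S = (0.8, 0.2) with S_Ti = S_i (no interactions), which the estimator reproduces to within sampling error.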
The nature of a GPR model is determined by its kernel functions [29]: a mean function and a covariance (kernel) function. The mean function m(x; Φ) denotes the mathematical expectation of the function y(x) for input x in the absence of observations. In general one takes m(x; Φ) = 0, the zero-mean function, meaning that the prior output for any input is ideally zero; it is also possible to assume a non-zero constant mean, in which case this constant constitutes a hyperparameter of the prior. The covariance function K(x1, x2; Φ) relates the stochastic outputs corresponding to two stochastic input points. It is the key factor measuring the similarity or correlation between different samples, and therefore the key factor influencing the prediction performance of a GPR model. Commonly used covariance kernels include the squared exponential (SE) function, the Matérn function, the rational quadratic (RQ) function and the periodic (PER) function. We use the SE kernel in this work:

k(x_i, x_j) = σ² exp(-‖x_i - x_j‖² / (2 l²)),

where σ is the output scale parameter, l is the length-scale parameter, and x_i and x_j are input spectra. If x_i ≈ x_j, then k(x_i, x_j) takes its maximum value, indicating that the two function values are close. As the difference between x_i and x_j grows, k(x_i, x_j) approaches zero, meaning the two points become less and less correlated; the scale of this decay is governed by l. Hence, low values of l indicate a higher informative content of the corresponding input bands for the trained function; this property is further exploited in this paper.
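A zero-mean GPR with the SE kernel can be sketched in a few lines of NumPy (a minimal sketch; the hyperparameter values, function names and noise handling below are illustrative, not those used in this study):

```python
import numpy as np

def se_kernel(X1, X2, sigma=1.0, length=1.0):
    """Squared-exponential kernel k(x, x') = sigma^2 exp(-||x-x'||^2 / (2 l^2))."""
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :] - 2.0 * X1 @ X2.T)
    return sigma**2 * np.exp(-np.maximum(d2, 0.0) / (2.0 * length**2))

def gpr_predict(X, y, Xs, sigma=1.0, length=1.0, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at test points Xs,
    conditioned on training inputs X and targets y."""
    K = se_kernel(X, X, sigma, length) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)                        # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = se_kernel(X, Xs, sigma, length)
    mean = Ks.T @ alpha                              # posterior mean
    v = np.linalg.solve(L, Ks)
    var = np.diag(se_kernel(Xs, Xs, sigma, length)) - np.sum(v**2, axis=0)
    return mean, var
```

Given training spectra X with measured chlorophyll values y, `gpr_predict(X, y, Xs)` returns the posterior mean and variance at new spectra Xs; in practice the hyperparameters σ, l and the noise level are fitted by maximizing the marginal likelihood, which is how per-band length-scales acquire the interpretability mentioned above.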
ARTMO tools
The Automated Radiative Transfer Models Operator (ARTMO) graphical user interface (GUI) is a software package that provides essential tools for running and inverting a suite of plant RTMs, at both the leaf and the canopy level. ARTMO facilitates consistent and intuitive user interaction, streamlining model setup, running, storage and spectral output plotting for any optical sensor operating in the visible, near-infrared and shortwave infrared range (400-2500 nm) [30].

GSA reflectance results
Sobol first-order sensitivity index (S_Fi) and total-order sensitivity index (S_Ti) results on surface reflectance across the 400-2400 nm region are given in Figure 1. The PROSPECT-4 model has four input variables: the leaf structural parameter (N), leaf chlorophyll content (Cab), leaf equivalent water thickness (EWT) and leaf dry matter content (DW). The GSA results revealed that all four input variables drive PROSPECT-4 reflectance. The leaf chlorophyll a+b content (Cab) alone governed over 70% of the variation in reflectance at wavelengths in the range of 400 nm to 750 nm; in this band range, the leaf structural parameter (N) governed less than 20% of the variation, and leaf dry matter content (DW) less than 10%. As shown in Figure 1, water (EWT) has no sensitive band in 400-750 nm; apart from EWT, spectral features in the visible part were controlled primarily by Cab, N and DW. The 400-750 nm spectral window corresponds to the photosynthetically active radiation of plants, with Cab as the main absorbing pigment. Figure 2 shows the dominant spectral range of chlorophyll: to invert crop chlorophyll content from spectral information, only the 400-750 nm range is needed rather than all bands from 400 nm to 2500 nm, which reduces the dimensionality of the data. Figure 3 shows the comparison of predicted and measured chlorophyll values in the GPR inversion.
GPR results
For chlorophyll retrieval with GPR, R² = 0.8638 and RMSE = 0.9435. These values of R² and RMSE indicate that retrieving chlorophyll content with GPR on top of the PROSPECT model gives satisfactory results.

Discussion
This study performed a global sensitivity analysis of the PROSPECT-4 model to find the chlorophyll-sensitive bands for establishing an inversion model. The chlorophyll inversion model was built by the GPR method using the LOPEX'93 leaf biological parameters and hyperspectral datasets. The results show that the inversion precision of this study is satisfactory, mainly because a radiative transfer model, rather than a vegetation index, is used to link spectral information to crop information, which improves the accuracy of the model. The PROSPECT-4 model used in this study did not include pigment parameters other than chlorophyll, which may have some influence on the accuracy of the model. The chlorophyll-sensitive region of crops was 400 nm to 750 nm; although chlorophyll was the main parameter affecting the spectral information in this region, other crop parameters also affect this spectral range, and their impact was not explored in this study. The data set used, the LOPEX'93 database, contains crop information and hyperspectral information for a generous variety of crops, making it well suited to exploring crop parameter inversion models. The inversion method used in this study is limited by the radiative transfer model: only crop parameters represented in the model can be retrieved. Therefore, establishing radiative transfer models that cover the crop parameters of interest is key to improving inversion accuracy.
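The two accuracy metrics reported above are the standard ones; as a minimal sketch of how they are computed from predicted and measured values:

```python
import numpy as np

def r2_rmse(y_true, y_pred):
    """Coefficient of determination (R^2) and root-mean-square error (RMSE)
    between measured values y_true and predicted values y_pred."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    resid = y_true - y_pred
    rmse = np.sqrt(np.mean(resid**2))
    ss_res = np.sum(resid**2)                        # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean())**2)     # total sum of squares
    return 1.0 - ss_res / ss_tot, rmse
```

Note that R² compares the model against the trivial mean predictor, while RMSE is in the units of the target variable, so the two numbers together describe both relative and absolute retrieval accuracy.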
Beyond the remote sensing vegetation products presented here, it would be interesting to apply GPR and RTMs to spectral datasets that include more plant properties, such as leaf nitrogen content and biomass.

Conclusions
In this study, the LOPEX'93 database was used to combine crop information and hyperspectral information to establish a hyperspectral inversion model of crop chlorophyll content through the PROSPECT-4 model and the GPR method. The main contributions of this research are as follows: (1) the PROSPECT-4 model was analyzed with a GSA tool, and the sensitive band range for crop chlorophyll was found to be 400-750 nm; (2) a chlorophyll content model was established with good accuracy (R² = 0.8638) that can predict crop leaf chlorophyll content; (3) the results demonstrate that crop chlorophyll can be inverted by combining the PROSPECT model with a machine learning algorithm. Crop chlorophyll content can therefore be estimated from hyperspectral data, which may be used for crop growth management.

Acknowledgments
This work was funded by the National Key R&D Program of China and by the Ministry of Science and Technology of the People's Republic of China (2016YFD0200600 and 2016YFD0200603).